\begin{document}
\input epsf
\newcommand{\infig}[2]{\begin{center}\mbox{ \epsfxsize #1
\epsfbox{#2}}\end{center}}
\newcommand{\wee}[2]{\mbox{$\frac{#1}{#2}$}}
\newcommand{\unit}[1]{\,\mbox{#1}}
\newcommand{\ltish}{\raisebox{-0.4ex}{$\,\stackrel{<}{\scriptstyle\sim}$}}
\newcommand{\bin}[2]{\left(\begin{array}{c} #1 \\ #2\end{array}\right)}
\draft
\title{Creation of coherence in Bose-Einstein condensates by atom detection}
\author{Peter Horak and Stephen M.\ Barnett}
\address{Department of Physics and Applied Physics,
University of Strathclyde, Glasgow G4 ONG, United Kingdom}
\date{\today, submitted to Phys.\ Rev.\ A}
\maketitle
\begin{abstract}
We investigate the creation of a relative phase between two Bose-Einstein
condensates, initially in number states, by detection of atoms and show how
the system approaches a coherent state. Two very distinct time scales are
found: the time for the creation of the interference is of the order of the
detection time for a few single atoms, whereas the time for the preparation of
coherent states is of the order of the detection time for a significant
fraction of the total number of atoms. Approximate analytic solutions are
derived and compared with exact numerical results.
\end{abstract}
\pacs{PACS number(s): 03.75.Fi, 05.30.Jp, 42.50.Ar}
\narrowtext
\section{Introduction}
The first experimental realisations of Bose-Einstein condensation in
dilute atomic gases \cite{bec95a,bec95b,bec95c}
have opened up a new field in atomic and quantum physics. Despite its apparent
complexity, the condensate is well-described by a macroscopic wavefunction, or
complex field, obeying a nonlinear wave equation (the Gross-Pitaevskii
equation). The nature of this complex field and in particular its apparent
phase has been the subject of some debate.
One specific topic which has been frequently investigated is the question of
the coherence properties \cite{Burt,Ketterle}
of Bose-Einstein condensates as established in
interference experiments \cite{Andrews,Hall}.
The underlying question is whether the phase or, more
precisely, the relative phase of two condensates is created by a spontaneously
broken symmetry \cite{Leggett} or by some other mechanisms \cite{Kagan}.
It has been
suggested \cite{Javanainen} (and studied in detail by several authors
\cite{Wong,Cirac,Castin,Graham,Ruostekoski,Sinatra})
that such a relative phase can be created by the
detection of individual atoms in an interferometric setup, that is, where the
origin of the detected atoms is intrinsically unknown. By entangling the states
of the two condensates, this process gives rise
to a definite (but unpredictable) relative phase of the two condensates even if
the initial states of the condensates are of undefined phase, as for initial
number states \cite{Javanainen,Wong,Castin}, initial Poissonian states
\cite{Cirac,Graham}, or initial thermal states \cite{Graham}.
In this work we will study in detail the creation of coherence between
two condensates which initially have well defined occupation numbers, not only
in the limit where the number of atoms detected is small compared to the total
number of atoms but also in the long time limit. In doing so, we concentrate on two
manifestations of coherence: the creation of interference fringes and
the evolution of the compound two-condensate system towards a kind of
coherent state. We start with an idealized model
(Secs. \ref{sec:interference}-\ref{sec:state}) and later introduce more
realistic features including atomic collisions
(Sec.~\ref{sec:general}).
We use exact numerical simulations combined with
approximate analytical solutions to determine
the time evolution of our system. We show
that these two measures of coherence are created on very different time scales.
The relative phase associated with the appearance of interference fringes
develops in a time of order $1/(N\gamma)$, where $N$ is the
total initial number of atoms in the two condensates and $\gamma$ is the
rate at which atoms leak out of the condensates and are detected.
The approach towards coherent states, in contrast, takes a longer time of order
$1/\gamma$. These two timescales correspond to timescales recently identified
for the interaction of a Bose-Einstein condensate or other bosonic system with
its environment \cite{Barnett}. In this case, a well-specified atom number will
change on the timescale $1/(N\gamma)$ but any coherence present decays on a
timescale $1/\gamma$.
\section{Model}
Let us first introduce the model system which we use to investigate the
coherence properties in Bose-Einstein condensation
\cite{Javanainen,Wong,Cirac,Castin,Graham,Ruostekoski,Sinatra}.
We consider two independent non-interacting single-mode Bose-Einstein
condensates with creation (annihilation) operators $a^\dagger$ ($a$) and
$b^\dagger$ ($b$), respectively. Atoms leak out of the condensates
and are detected individually with spatial resolution. The same
detectors simultaneously monitor decays from both condensates, that is, atoms
coming from different condensates are allowed to interfere, see
Fig.~\ref{fig:scheme}.
Thus, the detections are described by the annihilation operators
\begin{equation}
a(\phi) = \frac{1}{\sqrt{2}} \left( a + b e^{-i\phi} \right)
\end{equation}
where $\phi \in [-\pi,\pi]$ is related to the position of the detector $x$ by
$\phi=px/\hbar$ where $p$ is the momentum of the atoms leaking out of the
condensates.
Assume now that the system can be described at a certain time $t$ by a wave
function $|\psi\rangle$. Then the probability of detecting the next atom at
position $\phi$ is given by
\begin{equation}
P(\phi) = {\cal N} \langle \psi|a(\phi)^\dagger a(\phi)|\psi \rangle
\end{equation}
where the normalisation constant ${\cal N}$ is chosen such that
\begin{equation}
\int_{-\pi}^{\pi} d \phi \, P(\phi) = 1.
\end{equation}
This probability function can be rewritten as
\begin{equation}
P(\phi) = \frac{1}{2\pi}\left[ 1+\beta_c \cos(\phi-\theta) \right]
\label{eq:prob}
\end{equation}
where
\begin{equation}
\beta_c = \frac{2|\langle \psi|a^\dagger b|\psi \rangle|}
{\langle \psi|a^\dagger a + b^\dagger b|\psi \rangle}
\label{eq:beta}
\end{equation}
is the visibility of the interference fringes conditioned
on the quantum state $|\psi\rangle$, and $\theta$ gives the most likely
position of detection of the next atom. It should be emphasized that this
visibility is not the one obtained by detecting a large number of atoms from the
initial state $|\psi\rangle$, but the one obtained by preparing this state
$|\psi\rangle$ very often and measuring a {\em single\/} atom in each run.
This difference is important since every detection changes the state of the
system and thus changes the conditional visibility $\beta_c$.
\begin{figure}[tb]
\infig{18em}{figscheme.eps}
\caption{Schematic representation of the interfering Bose-Einstein condensates.}
\label{fig:scheme}
\end{figure}
This system has already been investigated by several authors
\cite{Javanainen,Wong,Cirac,Castin,Graham,Ruostekoski,Sinatra}
in order to show that the
detection of atoms breaks the underlying symmetry of the system and thus creates
interference fringes. This is true even if the {\em initial\/}
state of the system does not exhibit any preferred phase so that $\beta_c=0$.
It is the main purpose of our
work to quantify this creation of coherence between two initially
uncorrelated Bose-Einstein condensates.
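For an initial two-mode number state the absence of a preferred phase follows
directly from Eq.~(\ref{eq:beta}): the operator $a^\dagger b$ transfers one atom
between the two modes and therefore has vanishing expectation value in a number
state,
\begin{equation}
\langle n_1,n_2|a^\dagger b|n_1,n_2\rangle \propto
\langle n_1,n_2|n_1+1,n_2-1\rangle = 0,
\end{equation}
so that indeed $\beta_c=0$ before the first detection.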
\section{Creating interference from initial number states}
\label{sec:interference}
In this section we will discuss the creation of interference fringes as a
consequence of consecutive detections of atoms when the two condensates are
initially in number states. The full system is given initially by
the quantum state
\begin{equation}
|\psi_0\rangle = |n_1,n_2\rangle.
\end{equation}
After the detection of $k$ atoms at positions $\phi_1$, $\phi_2$, ...,
$\phi_k$ the (unnormalized) state of the system is then
\begin{equation}
|\psi_k\rangle = (a+be^{i\phi_k})\dots (a+be^{i\phi_1})|\psi_0\rangle.
\end{equation}
The conditional probability of detecting the $(k+1)$th atom at
$\phi_{k+1}$ then reads
\begin{eqnarray}
& & P(\phi_{k+1}|\phi_k,...,\phi_1) = \nonumber \\
& & \quad\quad = \frac{1}{2\pi}
\frac{\langle\psi_k|(a^\dagger+b^\dagger e^{-i\phi_{k+1}})
(a+b e^{i\phi_{k+1}}) |\psi_k\rangle}
{\langle\psi_k|a^\dagger a + b^\dagger b |\psi_k\rangle}
\nonumber \\
& & \quad\quad = \frac{1}{2\pi} \frac{\langle\psi_{k+1}|\psi_{k+1}\rangle}
{(N-k)\langle\psi_k|\psi_k\rangle}
\end{eqnarray}
where $N=n_1+n_2$. Thus, the probability for the sequence of detections
$\phi_1$, $\phi_2$, ..., $\phi_k$ is
\begin{eqnarray}
& & P(\phi_k,...,\phi_1) = P(\phi_1)P(\phi_2|\phi_1)... = \nonumber \\
& & \quad\quad =
\frac{1}{(2\pi)^k} \frac{\langle\psi_k|\psi_k\rangle}{N(N-1)...(N-k+1)}.
\end{eqnarray}
The conditional visibility $\beta$ (we will write $\beta$ instead of $\beta_c$
in this section in order to simplify the notation)
for the state after $k$ detections is
\begin{eqnarray}
\beta = \frac{2|\langle\psi_k|a^\dagger b |\psi_k\rangle|}
{\langle\psi_k|a^\dagger a + b^\dagger b |\psi_k\rangle}
= \frac{2|\langle\psi_k|a^\dagger b |\psi_k\rangle|}
{(N-k)\langle\psi_k|\psi_k\rangle}.
\label{eq:betak}
\end{eqnarray}
Thus the average visibility after $k$ detections is
\begin{equation}
\langle \beta\rangle_k = \int d\phi_1...d\phi_k \frac{2}{(2\pi)^k}
\frac{(N-k-1)!}{N!} |\langle\psi_k|a^\dagger b |\psi_k\rangle|.
\label{eq:bmean}
\end{equation}
Note, however, that this mean conditional visibility is rather difficult
to access experimentally since it involves the averaging over
{\em ensembles\/} of experiments where each ensemble consists of repeatedly
preparing the quantum state of the system by the {\em same\/} sequence of
detections from the same initial number state and measuring the position
of the next detection. Nevertheless, it proves to be a useful theoretical
measure, especially for describing the time scale on which interference
is created, as we will show later in this section.
For the first two detections, that is, $k=1$, $2$, the integral in
Eq.~(\ref{eq:bmean}) can be evaluated analytically with the results
\begin{eqnarray}
\langle \beta\rangle_1 &=& \frac{2n_1 n_2}{(n_1+n_2)^2-(n_1+n_2)},
\label{eq:beta1} \\
\langle \beta\rangle_2 &=& \frac{4}{\pi} \langle \beta\rangle_1.
\label{eq:beta2}
\end{eqnarray}
For $n_1=n_2=N/2$ we thus obtain \cite{Wong,Graham}
\begin{equation}
\langle \beta\rangle_1 = \frac{1}{2}\frac{1}{1-1/N}.
\end{equation}
For any initial number of atoms this is larger than $1/2$, meaning that the
first detection already increases the average visibility from zero to more than
half its maximum value of one.
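For example, for the initial state $|100,100\rangle$ used in the figures below,
Eqs.~(\ref{eq:beta1}) and (\ref{eq:beta2}) give
\begin{equation}
\langle \beta\rangle_1 = \frac{1}{2}\,\frac{1}{1-1/200} \approx 0.503, \qquad
\langle \beta\rangle_2 = \frac{4}{\pi}\,\langle \beta\rangle_1 \approx 0.64,
\end{equation}
so that two detections already bring the average conditional visibility close to
two thirds of its maximum value.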
For $k>2$ Eq.~(\ref{eq:bmean}) can no longer be evaluated analytically but we
will derive an approximate solution in the following.
Let us assume that the system starts in the quantum state
$|\psi_0\rangle = |n,n\rangle$ (same number of atoms in both condensates) and
all detections occur at the same position $\phi$. Of course, this is a highly
unlikely detection sequence, but this assumption gives surprisingly good
approximate results as we will see. Without loss of generality we may assume
$\phi=0$. Thus
\begin{equation}
|\psi_k\rangle = (a+b)^k |\psi_0\rangle.
\label{eq:psikeq}
\end{equation}
From this we obtain the norm of the state as
\begin{eqnarray}
\langle \psi_k |\psi_k\rangle &=& \sum_{m=0}^k \bin{k}{m}^2
\langle n,n|(a^\dagger)^m a^m (b^\dagger)^{k-m} b^{k-m}|n,n\rangle
\nonumber \\
&=& \frac{n!^2}{(2n-k)!} \sum_{m=0}^k \bin{k}{m}^2 \bin{2n-k}{n-m}.
\end{eqnarray}
We now approximate the binomial coefficients by Gaussians using
\begin{equation}
\bin{k}{m}\approx 2^k\sqrt{\frac{2}{\pi k}}e^{-\frac{2}{k}(m-k/2)^2}
\label{eq:binapprox}
\end{equation}
and replace the sum over $m$ by an integral from $-\infty$ to $\infty$
which yields
\begin{equation}
\langle \psi_k |\psi_k\rangle \approx \frac{n!^2}{(2n-k)!} 2^{2n+k}
\frac{2}{\pi} \frac{1}{\sqrt{k(4n-k)}}.
\label{eq:psinorm}
\end{equation}
Analogously we obtain
\begin{equation}
\langle \psi_k |a^\dagger b|\psi_k\rangle \approx \frac{n!^2}{(2n-k-1)!}
2^{2n+k-1}\frac{2}{\pi} \frac{e^{-1/k}}{\sqrt{k(4n-k-2)}}
\end{equation}
and thus from Eq.~(\ref{eq:beta}) the final result
\begin{equation}
\beta_k \approx e^{-1/k}
\label{eq:bapprox}
\end{equation}
independent of $n$ (to order $1/n$). Hence, independent of the initial number of
atoms in the condensates, the same (and very small) number of detected atoms
is always sufficient to create the interference. If the initial total number
of atoms is $N$ and each atom decays with a rate $\gamma$ out of the
condensate, then the time to create the interference pattern will thus be of the
order of $1/(N\gamma)$.
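As an illustration of Eq.~(\ref{eq:bapprox}),
\begin{equation}
\beta_3 \approx e^{-1/3} \approx 0.72, \qquad \beta_{10} \approx e^{-1/10} \approx 0.90,
\end{equation}
independent of whether the condensates initially contain a hundred or a million
atoms.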
In Fig.~\ref{fig:beta1} we compare the approximate result of
Eq.~(\ref{eq:bapprox}) with the exact
numerical solution of Eq.~(\ref{eq:betak}). The comparison shows that the
approximation is excellent for $k$ as low as 2 (10\% difference),
3 (5\%) and 4 (3\%) for $n=100$, even though the approximation made in
Eq.~(\ref{eq:binapprox}) only holds for $k\gg 1$.
We also plot the numerical solution of
Eq.~(\ref{eq:bmean}), that is, the visibility averaged over all possible outcomes
with the appropriate probabilities which we obtained by Monte-Carlo simulations
of the detection process. We see that the difference in the creation of
interference fringes between the case where all atoms are detected at the same
place and the case where all possible detection positions are allowed is
relatively small.
This is somewhat surprising given the great difference between the extremely
peaked distribution of detection positions assumed in our approximation and
the expected sinusoidal behavior [see Eq.~(\ref{eq:prob})]. It is, however, a
consequence of the fact that only a small number of detections determine the
interference pattern for all subsequent detections.
\begin{figure}[tb]
\infig{20em}{fb1.eps}
\caption{Average conditional visibility $\langle \beta\rangle$ vs number of
detected atoms $k$: exact numerical solution (solid
curve), numerical solution if all atoms are detected at the same position
(dashed curve), approximate analytical solution (dotted curve). The initial
state is $n_1=n_2=100$.}
\label{fig:beta1}
\end{figure}
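The structure of one such Monte-Carlo run can be sketched as follows. The listing
below is a minimal illustration in Python (it is not the code used for the
figures, and all function and variable names are chosen for illustration only):
the state is stored as the coefficients of the remaining two-mode number states,
each detection position is sampled from the conditional probability evaluated on
a grid of $\phi$ values, and the conditional visibility of Eq.~(\ref{eq:beta}) is
recorded after every detection.
\begin{verbatim}
import numpy as np

def monte_carlo_run(n1, n2, k, nphi=256, seed=None):
    # One realisation of k detections starting from the number state |n1,n2>.
    # The state is stored as coefficients c[j] of |j>_a |nrem-j>_b.
    rng = np.random.default_rng(seed)
    N = n1 + n2
    c = np.zeros(N + 1, dtype=complex)
    c[n1] = 1.0
    phis = np.linspace(-np.pi, np.pi, nphi, endpoint=False)
    betas = []
    for step in range(k):
        nrem = N - step                          # atoms left before this detection
        j = np.arange(nrem + 1)
        ca = np.sqrt(j[1:]) * c[1:nrem + 1]      # amplitudes of a|psi>
        cb = np.sqrt(nrem - j[:-1]) * c[:nrem]   # amplitudes of b|psi>
        # P(phi) ~ ||(a + b exp(i phi))|psi>||^2, sampled on the phi grid
        probs = np.array([np.sum(np.abs(ca + np.exp(1j * p) * cb) ** 2)
                          for p in phis])
        phi = rng.choice(phis, p=probs / probs.sum())
        # collapse onto the detected position and renormalise
        c = ca + np.exp(1j * phi) * cb
        c /= np.linalg.norm(c)
        # conditional visibility: beta = 2|<a^dag b>| / <a^dag a + b^dag b>
        nrem -= 1
        j = np.arange(nrem + 1)
        adagb = np.sum(np.conj(c[1:]) * c[:-1]
                       * np.sqrt(j[1:] * (nrem - j[:-1])))
        betas.append(2 * np.abs(adagb) / nrem)
    return np.array(betas)

# averaging over many runs, e.g. <beta>_k for n1 = n2 = 100:
# beta_k = np.mean([monte_carlo_run(100, 100, 20, seed=s)
#                   for s in range(1000)], axis=0)
\end{verbatim}
Averaging $\beta$ over many such runs estimates the ensemble average of
Eq.~(\ref{eq:bmean}).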
So far we have restricted ourselves to the case of equal initial occupation
numbers of the two condensates. In the case of unequal initial numbers the
approximation assuming that all the detections occur at the same position
fails since such a detection sequence changes the relative occupation of the two
condensates (which is constant if the condensate decay rates are equal).
An analytic approximation can be found, however, assuming that the number of
detected atoms $k$ is small compared to the total number of atoms $N$.
In this case the normalisation of the state after $k$ equal detections is
\begin{eqnarray}
& & \langle \psi_k |\psi_k\rangle = \nonumber \\
& & \quad =\sum_{m=0}^k \bin{k}{m}^2
\langle n_1,n_2|(a^\dagger)^m a^m (b^\dagger)^{k-m} b^{k-m}|n_1,n_2\rangle
\nonumber \\
& & \quad \approx \sum_{m=0}^k \bin{k}{m}^2 n_1^m n_2^{k-m}.
\end{eqnarray}
The latter expression can be approximated by replacing the binomials with
Gaussians and the sum by an integral as before. Applying the same procedure
to $\langle \psi_k |a^\dagger b|\psi_k\rangle$ one finally obtains the
conditional visibility
\begin{equation}
\beta_k \approx \frac{2\sqrt{n_1 n_2}}{n_1+n_2} e^{-1/k}
\label{eq:buneq}
\end{equation}
the long-time limit of which ($k \gg 1$) has also been found in
Ref.~\cite{Graham}.
Three features are worth mentioning here. First, the evolution of the visibility
as a function of the number of detected atoms is the same as in the case of
equal initial atom numbers. Hence, also in this case only a few detections are
required to establish the interference pattern, and the corresponding time
scale is again given by $1/(N\gamma)$. Second, the maximum possible visibility
is reduced to a value which depends on the ratio of the initial occupation
numbers of the two condensates. Third, this maximum visibility is exactly
the same as for initial {\em coherent\/} states of the condensates with the same
mean numbers of atoms.
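For example, for $n_1 = 75$ and $n_2 = 25$ the limiting value of
Eq.~(\ref{eq:buneq}) is
\begin{equation}
\frac{2\sqrt{n_1 n_2}}{n_1+n_2} = \frac{2\sqrt{75\cdot 25}}{100}
= \frac{\sqrt{3}}{2} \approx 0.87,
\end{equation}
which is already approached after a few detections.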
\begin{figure}[tb]
\infig{20em}{fb2.eps}
\caption{Average conditional visibility $\langle \beta\rangle$ vs number $n_2$
of atoms initially in one of the condensates, the total number is
$n_1+n_2=100$. Solid curve: exact numerical result after 99 detections, dashed:
exact result after 5 detections, dotted: approximate analytic result after 99
detections, dash-dotted: approximate result after 5 detections.}
\label{fig:beta2}
\end{figure}
In Fig.~\ref{fig:beta2} we compare these results with the exact numerical
solutions for arbitrary detection positions. We see that in the
long-time limit, with nearly all of the atoms detected, the agreement is exact,
thus the visibility approaches the value found for coherent states of the same
mean atom number. After a small number $k$ of detections a more pronounced
deviation from our approximate result is found, especially for strongly
differing initial occupation numbers $n_1$ and $n_2$. However, we still find that
Eq.~(\ref{eq:buneq}) predicts the correct order of magnitude for the time
required to establish the interference.
\section{Preparation of coherent states from initial number states}
\label{sec:state}
In this section we will discuss how the system state approaches a coherent state
in course of the sequence of detections. However, in order to simplify the
discussion and the numerical simulations we will assume that the {\em total\/}
number of atoms in the two condensates is exactly known at any time; the
system is initially in a number state $|n_1,n_2\rangle$ with $n_1$ atoms in
condensate $a$ and $n_2$ atoms in condensate $b$ and the number $k$ of detected
atoms is known at any time. Thus the system will never approach a
harmonic oscillator coherent state, that is, an eigenstate of the
annihilation operators $a$ and $b$, since these are
superposition states of different total atom numbers. Instead it will be
shown that the system approaches a state which can be described as the
restriction of a coherent state to the subset of states with a fixed total
number of atoms. Hereafter we will refer to these states as
``atomic coherent states'' as they are a representation of the atomic coherent
states \cite{states}. We can define these states to be
\begin{equation}
|\mu,\nu\rangle_N = \frac{1}{\sqrt{N!}}
\left( a^\dagger \mu + b^\dagger \nu\right)^N |0,0\rangle
\label{eq:qcs}
\end{equation}
where $\mu$ and $\nu$ obey the relation $|\mu|^2+|\nu|^2=1$.
Some properties of these atomic coherent states and their
relation to the coherent states are discussed in Appendix~\ref{appendix}.
The specific case of these states with $\mu=e^{i\phi}/\sqrt{2}$ and
$\nu=e^{-i\phi}/\sqrt{2}$ has also been referred to as ``phase states''
\cite{Castin,Sinatra},
\begin{equation}
|\phi\rangle_N = \frac{1}{\sqrt{2^N N!}}
\left( a^\dagger e^{i\phi} + b^\dagger e^{-i\phi}\right)^N |0,0\rangle
\label{eq:phs}
\end{equation}
since their conditional visibility is $\beta_c=1$.
The problem which we will consider in this section is the following. Assume the
system is initially in the number state $|\psi_0\rangle=|n_1,n_2\rangle$.
Then $k$ atoms are detected at positions $\phi_1$, $\phi_2$, ..., $\phi_k$
and the resulting state $|\psi_k\rangle$ is analyzed. What is the probability
of finding an atomic coherent state $|\mu,\nu\rangle_{N-k}$, where
$N=n_1+n_2$? To answer
this we have to evaluate the probability function
\begin{equation}
P(\mu,\nu)=\frac{|\langle \psi_k|\mu,\nu\rangle_{N-k}|^2}
{\langle \psi_k|\psi_k\rangle},
\end{equation}
or equivalently
\begin{equation}
P(\phi)=\frac{|\langle \psi_k|\phi\rangle_{N-k}|^2}
{\langle \psi_k|\psi_k\rangle}
\end{equation}
in the case of $n_1=n_2=N/2$.
Let us consider first the case of equal initial atom numbers $n_1=n_2=n$ and
assume without loss of generality that the first atom is detected at position
$\phi_1=0$ so that $|\psi_1\rangle=(a+b)|n,n\rangle$. We then find
\begin{equation}
P(\phi) = \frac{1}{2^{2n}} \bin{2n}{n} (1+\cos 2\phi)
\approx \frac{1}{\sqrt{\pi n}} (1+\cos 2\phi)
\end{equation}
where for the last approximation we have again used Eq.~(\ref{eq:binapprox}).
One can easily check by numerical simulation that this approximation is
highly accurate even
for just a few atoms in the condensates. Hence the overlap of the state after
one detection with any phase state is very small, of the order of $1/\sqrt{n}$.
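For $n=100$, for example, the maximum of this probability function is
\begin{equation}
P(0) \approx \frac{2}{\sqrt{100\pi}} \approx 0.11,
\end{equation}
so a single detection is still far from preparing a phase state.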
For the analysis of the state after $k$ detections we will assume, as in the
previous section, that all atoms are detected in the same position, so that
$|\psi_k\rangle = (a+b)^k |\psi_0\rangle$, and will compare the results for
$P(\phi)$ with numerical simulations at the end.
The state overlap after $k$ detections is then
\begin{eqnarray}
\langle \psi_k|\phi\rangle_{2n-k} &=& \frac{1}{\sqrt{2^{2n-k}(2n-k)!}}
\nonumber \\
& & \times \langle n,n|(a^\dagger+b^\dagger)^k
(a^\dagger e^{i\phi}+b^\dagger e^{-i\phi})^{2n-k}|0,0\rangle \nonumber \\
& = & \frac{n!}{\sqrt{2^{2n-k}(2n-k)!}} \nonumber \\
& & \times \sum_{p=0}^k \bin{k}{p} \bin{2n-k}{n-p} e^{i\phi(k-2p)}
\end{eqnarray}
which, after applying the approximation of Eq.~(\ref{eq:binapprox}), becomes
\begin{equation}
\langle \psi_k|\phi\rangle_{2n-k} \approx \frac{1}{\sqrt{2^{2n-k}(2n-k)!}}
\frac{n!2^{2n}}{\sqrt{\pi n}}
e^{-\frac{\phi^2}{4}\frac{k(2n-k)}{n}}.
\end{equation}
This, together with the normalization factor, Eq.~(\ref{eq:psinorm}), yields
\begin{equation}
P(\phi) \approx \frac{1}{2}\sqrt{\frac{k(4n-k)}{n^2}}
e^{-\frac{\phi^2}{2}\frac{k(2n-k)}{n}}.
\label{eq:papprox}
\end{equation}
In the limit of $k\ll n$ this result is consistent with the results presented
in Refs.~\cite{Cirac,Castin,Ruostekoski}.
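To give an example, for $n=100$ the maximum of Eq.~(\ref{eq:papprox}), reached at
$\phi=0$, is
\begin{equation}
\mbox{max}\{P(\phi)\} \approx \frac{1}{2}\sqrt{\frac{k(4n-k)}{n^2}} \approx
\left\{
\begin{array}{ll}
0.31 & \mbox{for } k=10, \\
0.87 & \mbox{for } k=100,
\end{array}
\right.
\end{equation}
illustrating the slow initial growth of the overlap and the substantial value
reached once half of the atoms have been detected.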
\begin{figure}[tb]
\infig{20em}{fp1.eps}
\caption{Probability $P(\phi)$ of finding the phase state $|\phi\rangle$
after $k$ atom detections from initial state $|n_1=100,n_2=100\rangle$. The
solid curves correspond to exact numerical solutions
(averaged over 1000 Monte-Carlo simulations) and the dashed curves to
the analytic approximation given in Eq.~(\protect\ref{eq:papprox}).}
\label{fig:pphi1}
\end{figure}
In Fig.~\ref{fig:pphi1} we compare this analytic approximation for the
probability function $P(\phi)$ with the exact numerical solution obtained by
Monte-Carlo simulations where all possible detection positions are taken into
account. (On the scale of Fig.~\ref{fig:pphi1} the curves for the approximate
solution and the exact solution if all atoms are detected at the same position
coincide almost exactly.) After only a few detections ($k=10$ in the figure) the
probability function is already well approximated by a broad Gaussian with a
relatively small maximum, so the overlap with any phase state is still small
at this time. However, we note that the maximum overlap is {\em larger\/} than
predicted by the approximate analytic solution, which was derived under the
assumption that all atoms are detected at the same position. This may seem
surprising but can be explained by the fact that such a highly peaked position
distribution of the detected atoms is far from the one expected for a coherent
state. After the detection of half of the atoms ($k=100$) the maximum overlap
with a phase state is already close to one and the width of the Gaussian has
decreased significantly. For a larger number of detected atoms ($k=190$) the
maximum overlap still increases but the width of the probability function
increases again, which is due to the changing non-orthogonality of the phase
states for changing number of atoms, see Eq.~(\ref{eq:nonorth}).
\begin{figure}[tb]
\infig{20em}{fp2.eps}
\caption{Maximum of the probability function $P(\mu,\nu)$ after $k$
atom detections out of the $N=n_1+n_2$ initial atoms.
The solid curves correspond to the initial state
$|\psi_0\rangle = |n_1=100,n_2=100\rangle$, the dashed curve to
$|\psi_0\rangle = |n_1=200,n_2=50\rangle$,
and the dotted curve is the maximum
of the approximate analytic solution, Eq.~(\protect\ref{eq:papprox}).
The first two cases are the results obtained from averaging over 2000
Monte-Carlo simulations.}
\label{fig:pphi2}
\end{figure}
In Fig.~\ref{fig:pphi2} we plot the {\em maximum\/} of the probability function
$P(\mu,\nu)$ as a function of the fraction of the detected atoms. The
approximate solution given by Eq.~(\protect\ref{eq:papprox}) is independent of
the total number $N=n_1+n_2$ of atoms initially in the two condensates and is a
quarter of a circle as a function of $k/N$. As already seen above, the
approach to an atomic coherent state starting from a pure number state is
in fact faster
than given by the approximation. Here we also plotted the numerical result for
an initial state with unequal atom numbers in the two condensates and we note
that also in this case the approximation by Eq.~(\protect\ref{eq:papprox}) is a
relatively good one.
Hence, we find that a significant fraction of the total number of atoms must be
detected for the system to approach an atomic coherent state. Thus the
time scale for the preparation of such a state by detections is given by
$1/\gamma$, which differs greatly from the time scale of $1/(N\gamma)$
found in the previous section to be relevant for the creation of interference
fringes.
\section{Generalisations of the model}
\label{sec:general}
\subsection{Imperfect detection}
We will now generalize our results from the previous sections to the case of
imperfect atom detection. Assuming that the detector efficiency is $\eta<1$ we
expect that after $k$ atoms have been lost from the condensates only $\eta k$
out of these are detected in the interferometric setup and thus contribute
to the build-up of coherence between the two condensates, whereas the remaining
$(1-\eta)k$ atoms are simple losses from either of the two condensates.
Hence, under the assumption that all detected atoms are found at the same
position (as in the previous sections) and that the undetected atoms come
with equal probability from either of the two condensates, the state after
$k$ atoms have been lost from the condensates is approximately
\begin{eqnarray}
|\psi_k\rangle &=& (a+b)^{\eta k} a^{\xi k} b^{\xi k}
|n,n\rangle \nonumber \\
&=& \frac{n!}{(n-\xi k)!} (a+b)^{\eta k} |n-\xi k,n-\xi k\rangle
\end{eqnarray}
[where $\xi = (1-\eta)/2$] instead of the one given in Eq.~(\ref{eq:psikeq}).
Thus, in our earlier results, Eqs.~(\ref{eq:bapprox}) and (\ref{eq:papprox}),
we only have to substitute $n\rightarrow n-\xi k$ and $k\rightarrow \eta k$ to
obtain the approximate results for imperfect detector efficiency
\begin{eqnarray}
\beta_k & \approx & e^{-1/(\eta k)}, \label{eq:bimp}\\
P(\phi) & \approx & \sqrt{1-\left(1-\frac{\eta k}{2n-k+\eta k}\right)^2}
e^{-\phi^2 \frac{\eta k(2n-k)}{2n-k+\eta k}}.
\end{eqnarray}
As in the previous sections, comparison of these analytic approximations with
exact numerical solutions shows excellent agreement for the
visibility $\beta$ and an approach towards coherent states that is actually
faster than predicted by this approximation for $P(\phi)$.
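For example, for a detector efficiency of $\eta=1/2$ Eq.~(\ref{eq:bimp}) gives
\begin{equation}
\beta_{20} \approx e^{-1/10} \approx 0.9,
\end{equation}
that is, twice as many atoms have to be lost from the condensates to reach the
same visibility as for perfect detection.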
Eq.~(\ref{eq:bimp}) shows that the only effect of imperfect detection on the
creation of interference fringes is that the number of atoms detected per unit
time is decreased, and thus it takes more time to detect the same number of
atoms as for perfect detectors. The loss of atoms from individual
condensates does not seem to have any influence even though it changes the
relative atom number in the two condensates. However, since only a few atoms
need to be detected in order to build up the interference, the fluctuations
of the condensate occupation numbers remain very small compared to the
total number of
atoms and thus the maximum visibility is very close to the one for the
unperturbed initial state.
The situation is more complicated for the approach of the system state towards
an atomic coherent state (or a phase state) because not only the number of
detected atoms but also the total number of atoms left in the system plays an
important role, so that $P(\phi)$ depends on $k$ and $n$. The preparation of an
atomic coherent state also occurs on a larger time scale than for $\eta=1$
but the dependence on $\eta$ is not simple.
\subsection{Proper time evolution}
Another possible generalisation of our model is to investigate the system
evolution as a function of time instead of the number of detected atoms. Given a
constant decay rate of $\gamma$ for individual atoms from the condensates the
actual decay processes still occur in a probabilistic manner. Thus after a
certain amount of time the number of atoms remaining in the system is
uncertain. However, since the condensate decay follows an exponential law the
{\em mean\/} number of remaining atoms is
\begin{equation}
\langle n\rangle(t) = N e^{-2\gamma t}
\end{equation}
and so the mean number of detected atoms is
\begin{equation}
\langle k\rangle(t) = N \left( 1-e^{-2\gamma t} \right).
\label{eq:meank}
\end{equation}
We can generalise the results of the previous sections to incorporate this time
evolution by
simply replacing the number $k$ of detected atoms in
Eqs.~(\ref{eq:bapprox}) and (\ref{eq:papprox}) by its mean value according
to Eq.~(\ref{eq:meank}).
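For example, for $N=200$ atoms the mean time required to detect
$\langle k\rangle(t) = 10$ atoms follows from Eq.~(\ref{eq:meank}) as
\begin{equation}
t = \frac{1}{2\gamma}\ln\frac{N}{N-10} \approx \frac{0.026}{\gamma},
\end{equation}
which is indeed much shorter than $1/\gamma$.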
Using this assumption we compared
the exact numerical results for the cases of the system
evolution versus number of detections and versus time.
We found that the
visibility is established slightly more slowly in the latter case. Let us consider,
for example, the state of the system after such a time $t$ that
$\langle k\rangle(t) = 1$. At this time there is still a significant probability
that no atom was detected which greatly decreases the average visibility. On the other
hand there is a certain probability that two or three atoms were detected
which increases the average visibility, but since the difference between
$\beta_0$ and $\beta_1$ is much larger than the difference between $\beta_1$
and $\beta_2$ [see Eqs.~(\ref{eq:beta1}), (\ref{eq:beta2})] the former term
dominates and the average visibility is smaller than $\beta_1$. However, this
difference between the results for the two models decreases when more atoms are
detected and, starting from 100 atoms in each of the condensates, after $k=10$
detections the numerically found difference is already down to 1.7\%. A similar
agreement is also found for the maximum overlap of the system state with
atomic coherent states in the two models. We may thus conclude that our
approximate analytic solutions also describe the time evolution of the system
if one substitutes $k\rightarrow \langle k\rangle(t)$.
\subsection{Effect of collisions}
So far we have assumed the idealized case of noninteracting particles in the
condensates, so that our system was completely analogous to a system of photons in
two high-quality cavities from which the photons decay and interfere. All of our
previous results apply to this case as well \cite{Molmer}.
However, it is well known that atomic collisions play an important role in the
context of Bose-Einstein condensation \cite{Burt,Lewenstein,Javanainen2}.
We will discuss the modifications to our
results in this case in the following. To this end, the free time evolution of
the system between two quantum jumps now has to be replaced by the time
evolution due to the collisional Hamiltonian which in our simple model of two
single-mode condensates is \cite{Wong,Sinatra,Steel}
\begin{equation}
H = \kappa \left[(a^\dagger a)^2 + (b^\dagger b)^2 \right].
\label{eq:ham_k}
\end{equation}
The action of this Hamiltonian is to give different time dependent phases to the
various number-state components of the quantum state of the system, for example of an
atomic coherent state. This dephasing gives rise to a time dependent ``decay'' of
the coherence and therefore of the conditional visibility $\beta$.
This decay of the coherence counteracts the creation of coherence due to atom
detections and thus prevents the system from reaching a state of maximum
visibility.
\begin{figure}[tb]
\infig{20em}{fk0.eps}
\caption{Time evolution of the conditional visibility $\beta$ for different
quantum states including collisions of atoms in the condensates. The initial
quantum state is a state after 50 detections from the number state
$|100,100\rangle$ (solid line), the state after 50 equal detections
$(a+b)^{50}|100,100\rangle$ (dashed), and a phase state
$|\phi\rangle_{N=150}$ (dotted), respectively.}
\label{fig:beta_kt}
\end{figure}
First we study the evolution of various quantum states under the action of
the Hamiltonian (\ref{eq:ham_k}) without atomic decays. Let the system initially
be in an
atomic coherent state with equal mean atom number $N$ in the two condensates,
$|\psi\rangle = |\mu,\mu\rangle_N$. The time evolution of the
visibility $\beta$, given by Eq.~(\ref{eq:beta}), can then be evaluated analytically as
\begin{equation}
\beta(t) = \left[\cos(2\kappa t)\right]^{N-1}
\approx e^{-2\kappa^2 t^2 (N-1)}
\label{eq:betacoh}
\end{equation}
where the latter approximation holds for small times $t \ll 1/\kappa$. Note that
the exact result of Eq.~(\ref{eq:betacoh}) predicts the well known
\cite{Castin,Sinatra,Steel,Wright}
revivals of the visibility after times which are multiples of
$\pi/(2\kappa)$.
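For example, for an atomic coherent state with $N=150$ atoms,
Eq.~(\ref{eq:betacoh}) predicts that the visibility has dropped to $e^{-1}$ after
a time
\begin{equation}
t = \frac{1}{\kappa\sqrt{2(N-1)}} \approx \frac{0.06}{\kappa},
\end{equation}
long before the first revival at $t=\pi/(2\kappa)$.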
As noted in the preceding sections, however,
the atomic coherent states
are in general not a good approximation to the state of the system
until a significant number of atoms have been detected.
A better approximation is provided by considering the evolution of the initial
state $|\psi_k\rangle = (a+b)^k |n,n\rangle$. Using again the approximation of
Eq.~(\ref{eq:binapprox}) we obtain for this case
\begin{equation}
\beta(t) \approx e^{-1/k}
\exp\left\{-2\kappa^2 t^2 \frac{k(2n-k-1)}{4n-k-2} \right\}.
\label{eq:beta_keq}
\end{equation}
In the limit of $k \ll n$, we thus find that $\beta$ decays as
$\exp(-\kappa^2 t^2 k)$ which is much slower than the decay for an
atomic coherent
state (\ref{eq:betacoh}). For $k\rightarrow 2n$ the two expressions converge
since the state $|\psi_k\rangle$ approaches an atomic coherent state in this
case.
In Fig.~\ref{fig:beta_kt} we compare $\beta(t)$ for an atomic coherent state, the
state $|\psi_k\rangle$, and the numerical result for a state after 50 detections
from an initial number state $|100,100\rangle$ (all of these contain a
total number of 150 atoms). We see that in any case the collision-induced
decay can be well
described by a Gaussian. The decay obtained for the state with simulated
detections is faster than the one of Eq.~(\ref{eq:beta_keq}) which agrees with
our finding of Sec.~\ref{sec:state} that an atomic coherent state is
approached faster with arbitrary detections than if all detections occur at
the same position.
We will now use these results to derive an approximate analytic expression for
the visibility $\beta$ after $k$ detections including the effects of atomic
collisions. Let us assume that at a
given time $t_0$ exactly $k$ atoms have been detected from an initial number
state $|n,n\rangle$ and that the visibility is $\beta = \beta_0$. Then
the probability of detecting the next atom at time $t_0+t$ is given by
\begin{equation}
P(t) = 2 n_0\gamma e^{-2n_0\gamma t}
\end{equation}
where $n_0=2n-k$ is the number of atoms left in the system at time $t_0$. Thus,
if we write the decay of the visibility as
\begin{equation}
\beta(t_0+t) = \beta(t_0) e^{-t^2/\tau^2},
\end{equation}
where $\tau$ is given in Eq.~(\ref{eq:beta_keq}), then the visibility at the time
immediately {\em before\/} the next atom detection is on average given by
\begin{eqnarray}
\langle \beta(t_0+t) \rangle &=& \int_0^{\infty}dt\,\beta(t_0+t)P(t)
\nonumber\\
&=& \beta(t_0) 2n_0\gamma\tau
\frac{\sqrt{\pi}}{2} e^{n_0^2\gamma^2 \tau^2}
\left[1-\Phi(n_0\gamma\tau ) \right],
\end{eqnarray}
where $\Phi$ denotes the error function. Let us now assume that the
system is in steady state between the creation and the decay of $\beta$,
that is, the following detection increases the visibility again to its
value $\beta(t_0)$ at time $t_0$. Hence, writing
\begin{equation}
\beta(t_0) = e^{-1/k_0} \approx 1 - \frac{1}{k_0}
\end{equation}
and using our previous result for the increase of $\beta$ with the number of
detections, Eq.~(\ref{eq:bapprox}), we obtain the following condition
\begin{equation}
\left( 1 - \frac{1}{k_0} \right) \langle \beta(t_0+t) \rangle
= 1 - \frac{1}{k_0-1}
\end{equation}
with the solution
\begin{equation}
k_0 = 1 + \frac{1}{\sqrt{1-\langle \beta(t_0+t) \rangle}}
\end{equation}
and thus the steady state visibility is
\begin{equation}
\beta_{\text{st}} = \frac{1}{1 + \sqrt{1-\langle \beta(t_0+t) \rangle}}.
\label{eq:beta_stat}
\end{equation}
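For example, if the average visibility just before the next detection is
$\langle\beta(t_0+t)\rangle = 0.96$, Eq.~(\ref{eq:beta_stat}) gives a steady
state visibility of
\begin{equation}
\beta_{\text{st}} = \frac{1}{1+\sqrt{0.04}} \approx 0.83.
\end{equation}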
\begin{figure}[tb]
\infig{20em}{fk1.eps}
\caption{Visibility $\beta$ vs number of detected atoms $k$ from an initial
number state $|100,100\rangle$ including atom collisions. Solid lines
are exact results (averages of 1000 Monte-Carlo simulations), dashed lines are
the corresponding analytic approximations given by
Eq.~(\protect\ref{eq:beta_stat}). The
parameters are (from top to bottom) $\kappa = 0.5\,\gamma$,
$\kappa = 2\,\gamma$, and $\kappa = 5\,\gamma$.
}
\label{fig:beta_kk}
\end{figure}
We compare this result with the results of Monte-Carlo simulations in
Fig.~\ref{fig:beta_kk} for different values of $\kappa$. From this we note that
the approximation yields values of $\beta$ that are too large. This is
especially true for small
numbers $k$ of detected atoms ($k<n$). This was to be expected since (i) the
approximation was based on the assumption of a steady state whereas it takes a
certain number of detections to reach this state in the simulations and (ii) in
the discussion of Fig.~\ref{fig:beta_kt} we have already seen that the
actual decay of the
visibility occurs faster than predicted by Eq.~(\ref{eq:beta_keq}) which in
turn
decreases the steady state value. On the other hand, the approximation yields
values of $\beta$ below the numerical results for values of
$\beta < 0.8$ where
we know that the increase of the visibility due to detections is faster than
given by Eq.~(\ref{eq:bapprox}).
Finally we will investigate the effects of atomic collisions in the condensates
on the creation of atomic coherent states. To this end we calculate the
maximum overlap of the time evolved state
\begin{equation}
|\psi_k(t)\rangle = e^{-iHt} |\psi_k\rangle,
\end{equation}
where $|\psi_k\rangle$ is the state after $k$ equal detections from the number
state $|n,n\rangle$ as used before, with the atomic coherent states.
Following the same steps as in Sec.~\ref{sec:state} one obtains
\begin{equation}
\mbox{max}\{P(\phi,t)\} = \frac{\mbox{max}\{P(\phi,t=0)\}}{\sqrt{
1+\left[ t\kappa k(2n-k)/(2n) \right]^2
}}
\end{equation}
where $\mbox{max}\{P(\phi,t=0)\}$ is the maximum overlap at time $t=0$ given by
Eq.~(\ref{eq:papprox}).
Hence, for $k\ll n$ this maximum overlap decays on a time scale of
$t\sim 1/(\kappa k)$ which is much faster than the time
scale of the decay of the visibility where $t\sim 1/(\kappa \sqrt{k})$,
Eq.~(\ref{eq:beta_keq}). This, together with the much longer time scale necessary
to create a significant overlap with coherent states, shows that atomic
collisions have a much larger effect on $P(\phi,t)$ than on the visibility.
The time scales for the decay and the build-up of this state overlap also
prevent the system from reaching a steady state and thus no analytic
approximation analogous to Eq.~(\ref{eq:beta_stat}) can be found. Hence we have
to rely on numerical simulations in this case, see Fig.~\ref{fig:beta_kp}.
\begin{figure}[tb]
\infig{20em}{fk2.eps}
\caption{Maximum overlap $\mbox{max}\{P(\mu,\nu)\}$
vs number of detected atoms $k$ from an initial
number state $|n_1=n_2=100\rangle$ including atom collisions with collision
rate $\kappa = 0.1\,\gamma$ (solid line),
$\kappa = 0.5\,\gamma$ (dashed), and $\kappa = 5\,\gamma$ (dotted).
Each curve is obtained from averaging over 1000 Monte-Carlo simulations.
}
\label{fig:beta_kp}
\end{figure}
We note that for small numbers of detected atoms the maximum overlap increases
close to the case of $\kappa=0$. For larger $k$, corresponding to a broader
distribution of the relative atom number between the two condensates, the
dephasing due to the atom collisions starts to dominate and significantly
reduces $\mbox{max}\{P(\mu,\nu)\}$ even for small values of $\kappa$.
However, when most of the atoms have been detected ($k\rightarrow 2n$) the
function approaches unity as all one-particle states and the final vacuum
state are exact atomic coherent states.
\section{Conclusions}
In this work we have studied in detail the creation of coherence between two
initially uncorrelated
Bose-Einstein condensates through the detection of individual atoms in an
interferometric setup. Our main finding is that first order coherence, as
observed in the interference fringes of the condensates, is established on a time
scale corresponding to the detection of only a few atoms, that is, within times
$t\sim 1/(N\gamma)$, where $N$ is the total number of atoms in the condensates
and $\gamma$ is the single-atom decay rate. On the other hand, high-order
coherence,
which in this article we describe by the overlap of the quantum state of the
system with coherent states, is created by detecting a certain (large) {\em
fraction\/} of the condensed atoms, so the time scale for this to develop
is given by
$t\sim 1/\gamma$. These results were obtained from approximate analytic
solutions as well as from exact numerical simulations.
We have also investigated the effect of atomic collisions on these results.
In this case the dephasing of the quantum state of the system due to the
collisions counteracts the entangling effect of the atom detections. However,
owing to the widely different time scales, a good visibility of the
interference fringes can be maintained whereas the atomic collisions
effectively prevent the preparation of a coherent state even for relatively
small collision rates.
\acknowledgments
This work was supported by the United Kingdom Engineering and Physical Sciences
Research Council.
\begin{appendix}
\section{Atomic coherent states}
\label{appendix}
The atomic coherent states \cite{states} were defined in Eq.~(\ref{eq:qcs})
to be
\begin{equation}
|\mu,\nu\rangle_N = \frac{1}{\sqrt{N!}}
\left( a^\dagger \mu + b^\dagger \nu\right)^N |0,0\rangle,
\end{equation}
where $|\mu|^2 + |\nu|^2 = 1$. These are states with precisely $N$ atoms shared
between the two condensates and can be expressed as an entangled superposition
of all product number states for which the total number is $N$:
\begin{equation}
|\mu,\nu\rangle_N = \nu^N \sum_{n=0}^N \bin{N}{n}^{1/2}
\left(\frac{\mu}{\nu}\right)^n |n,N-n\rangle.
\end{equation}
The number of atoms in one of the condensates, therefore, has a binomial
distribution.
Different atomic coherent states with the same number of atoms $N$ are in general
not orthogonal to each other, but have a nonvanishing scalar product
\begin{equation}
{}_N\langle \mu',\nu'|\mu,\nu\rangle_N = \left(
\mu\mu'^* + \nu\nu'^*
\right)^N.
\label{eq:nonorth}
\end{equation}
The atomic coherent states are related to the familiar two-mode coherent
states $|\alpha,\alpha'\rangle$ by
\begin{equation}
|\alpha,\alpha'\rangle = e^{-(|\alpha|^2+|\alpha'|^2)/2}
\sum_{N=0}^{\infty} \sqrt{\frac{(|\alpha|^2+|\alpha'|^2)^N}{N!}}
|\mu,\nu\rangle_N
\end{equation}
with
\begin{eqnarray}
\mu &=& \frac{\alpha}{\sqrt{|\alpha|^2+|\alpha'|^2}}, \\
\nu &=& \frac{\alpha'}{\sqrt{|\alpha|^2+|\alpha'|^2}}.
\end{eqnarray}
Clearly, the restriction of a two-mode coherent state to the $N$ particle
subset of states is an atomic coherent state.
The atomic coherent states satisfy the following useful identities
\begin{eqnarray}
a |\mu,\nu\rangle_N & = & \sqrt{N} \mu |\mu,\nu\rangle_{N-1}, \\
b |\mu,\nu\rangle_N & = & \sqrt{N} \nu |\mu,\nu\rangle_{N-1}.
\end{eqnarray}
From these it immediately follows that the expectation values of $a$ and $b$ in
an atomic coherent state are zero so that neither condensate exhibits a
preferred phase. It is also clear that the mean number of atoms in the
condensates $a$ and $b$ are $N|\mu|^2$ and $N|\nu|^2$, respectively, and that
the expectation value of $a^\dagger b$ is $N\mu^*\nu$. It follows that the
conditional visibility for an atomic coherent state is
\begin{equation}
\beta_c = 2|\mu||\nu|
\end{equation}
which assumes its maximum value of unity if and only if the two condensates
have the same mean occupation number.
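For example, an atomic coherent state with $|\mu|^2 = 3/4$ and $|\nu|^2 = 1/4$ has
\begin{equation}
\beta_c = 2|\mu||\nu| = \frac{\sqrt{3}}{2} \approx 0.87,
\end{equation}
which coincides with the limiting visibility found in Sec.~\ref{sec:interference}
for initial number states with $n_1 = 3 n_2$.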
Restricting considerations to equal mean occupation numbers leads to the phase
states \cite{Castin,Sinatra} defined in Eq.~(\ref{eq:phs}). Further properties
of the atomic coherent states can be found in the literature \cite{states}.
\end{appendix}
\begin{thebibliography}{99}
\bibitem{bec95a} M.\ H.\ Anderson, J.\ R.\ Ensher, M.\ R.\ Matthews,
C.\ E.\ Wieman, and E.\ A.\ Cornell, Science {\bf 269}, 198 (1995).
\bibitem{bec95b} C.\ C.\ Bradley, C.\ A.\ Sackett, J.\ J.\ Tollett, and
R.\ G.\ Hulet, Phys.\ Rev.\ Lett.\ {\bf 75}, 1687 (1995).
\bibitem{bec95c} K.\ B.\ Davis, M.-O.\ Mewes, M.\ R.\ Andrews,
N.\ J.\ van Druten, D.\ S.\ Durfee, D.\ M.\ Kurn, and
W.\ Ketterle, Phys.\ Rev.\ Lett.\ {\bf 75}, 3969 (1995).
\bibitem{Burt} E.\ A.\ Burt, R.\ W.\ Ghrist, C.\ J.\ Myatt, M.\ J.\ Holland,
E.\ A.\ Cornell, and C.\ E.\ Wieman, Phys.\ Rev.\ Lett.\ {\bf 79},
337 (1997).
\bibitem{Ketterle} W.\ Ketterle and H.-J.\ Miesner, Phys.\ Rev.\ A {\bf 56},
3291 (1997).
\bibitem{Andrews} M.\ R.\ Andrews, C.\ G.\ Townsend, H.-J.\ Miesner,
D.\ S.\ Durfee, D.\ M.\ Kurn, and W.\ Ketterle, Science {\bf 275},
637 (1997).
\bibitem{Hall} D.\ S.\ Hall, M.\ R.\ Matthews, C.\ E.\ Wieman, and
E.\ A.\ Cornell, Phys.\ Rev.\ Lett.\ {\bf 81}, 1543 (1998).
\bibitem{Leggett} A.\ J.\ Leggett and F.\ Sols, Found.\ Phys. {\bf 21}, 353
(1991).
\bibitem{Kagan} Y.\ Kagan and B.\ V.\ Svistunov, Phys.\ Rev.\ Lett.\ {\bf 79},
3331 (1997).
\bibitem{Javanainen} J.\ Javanainen and S.\ M.\ Yoo, Phys.\ Rev.\ Lett.\
{\bf 76}, 161 (1996).
\bibitem{Wong} T.\ Wong, M.\ J.\ Collett, and D.\ F.\ Walls, Phys.\ Rev.\ A
{\bf 54}, R3718 (1996).
\bibitem{Cirac} J.\ I.\ Cirac, C.\ W.\ Gardiner, M.\ Naraschewski, and
P.\ Zoller, Phys.\ Rev.\ A {\bf 54}, R3714 (1996).
\bibitem{Castin} Y.\ Castin and J.\ Dalibard, Phys.\ Rev.\ A {\bf 55}, 4330
(1997).
\bibitem{Graham} R.\ Graham, T.\ Wong, M.\ J.\ Collett, S.\ M.\ Tan, and
D.\ F.\ Walls, Phys.\ Rev.\ A {\bf 57}, 493 (1998).
\bibitem{Ruostekoski} J.\ Ruostekoski, M.\ J.\ Collett, R.\ Graham, and
D.\ F.\ Walls, Phys.\ Rev.\ A {\bf 57}, 511 (1998).
\bibitem{Sinatra} A.\ Sinatra and Y.\ Castin, Eur.\ Phys.\ J.\ D {\bf 4},
247 (1998).
\bibitem{Barnett} S.\ M.\ Barnett, K.\ Burnett, and J.\ A.\ Vaccarro,
J.\ Res.\ Natl.\ Inst.\ Stand.\ Technol.\ {\bf 101}, 593 (1996).
\bibitem{states} The atomic coherent states, also known as spin coherent states
and angular momentum coherent states, are familiar in discussions of rotation.
J.\ M.\ Radcliffe, J.\ Phys.\ A: Math.\ Gen.\ {\bf 4}, 313 (1971);
F.\ T.\ Arecchi, E.\ Courtens, R.\ Gilmore, and H.\ Thomas, Phys.\ Rev.\ A
{\bf 6}, 2211 (1972);
A.\ Perelomov, {\em Generalized Coherent States and their Applications}
(Springer-Verlag, Berlin, 1986);
S.\ M.\ Barnett and P.\ M.\ Radmore, {\em Methods in Theoretical Quantum Optics}
(Oxford University Press, Oxford, 1997).
\bibitem{Molmer} K.\ M{\o}lmer, J.\ Mod.\ Opt.\ {\bf 44}, 1937 (1997).
\bibitem{Lewenstein} M.\ Lewenstein and L.\ You, Phys.\ Rev.\ Lett.\ {\bf 77},
3489 (1996).
\bibitem{Javanainen2} J.\ Javanainen and M.\ Wilkens, Phys.\ Rev.\ Lett.\
{\bf 78}, 4675 (1997).
\bibitem{Steel} M.\ J.\ Steel and D.\ F.\ Walls, Phys.\ Rev.\ A {\bf 56},
3832 (1997).
\bibitem{Wright} E.\ M.\ Wright, D.\ F.\ Walls, and J.\ C.\ Garrison,
Phys.\ Rev.\ Lett.\ {\bf 77}, 2158 (1996);
E.\ M.\ Wright, T.\ Wong, M.\ J.\ Collett, S.\ M.\ Tan, and D.\ F.\ Walls,
Phys.\ Rev.\ A {\bf 56}, 591 (1997).
\end{thebibliography}
\end{document}
\begin{document}
\title[Threshold corotational wave maps]{Threshold dynamics for corotational wave maps}
\author{Casey Rodriguez}
\begin{abstract}
We study the dynamics of corotational wave maps from $\mathbb{R}^{1+2} \rightarrow \mathbb S^2$ at threshold energy. It is known that topologically trivial wave maps with energy $< 8\pi$ are global and scatter to a constant map. In this work, we prove that a corotational wave map with energy equal to $8\pi$ is globally defined and scatters in one time direction, and in the other time direction, either the map is globally defined and scatters, or the map breaks down in finite time and converges to a superposition of two harmonic maps. The latter behavior stands in stark contrast to higher equivariant wave maps with threshold energy which have been proven to be globally defined for all time. Using techniques developed in this paper, we also construct a corotational wave map with energy $= 8\pi$ which blows up in finite time. The blow-up solution we construct provides the first example of a minimal topologically trivial non-dispersing solution to the full wave map evolution.
\end{abstract}
\maketitle
\section{Introduction}
\subsection{Wave maps}
In this paper we study the dynamics of energy critical wave maps which are defined as follows. Let $\eta$ be the Minkowski metric on $\mathbb{R}^{1+2}_{t,x}$, and let $\mathcal N$ be a Riemannian manifold with metric $h$. A map $u:\mathbb{R}^{1+2} \rightarrow \mathcal N$ is a \emph{wave map} if it is a critical point of the action
\begin{align*}
\mathcal A(u) = \frac{1}{2} \int_{\mathbb{R}^{1+2}} \langle \partial^\mu u ,
\partial_\mu u \rangle_{h} \, dx dt,
\end{align*}
where we raise and lower indices using the Minkowski metric $\eta$.
The associated Euler-Lagrange equations are the \emph{wave maps equations} given in local coordinates by
\begin{align}\label{wm}
\partial^\mu \partial_\mu u^a + \Gamma^a_{bc}(u) \partial^\mu u^b \partial_\mu u^c = 0.
\end{align}
Here the $\Gamma^a_{bc}$ are the Christoffel symbols associated to the metric $h$ on $\mathcal N$. The time translational symmetry of Minkowski space and Noether's theorem provide a conserved energy for the evolution
\begin{align}\label{energy}
\mathcal E(u(t),\partial_t u(t)) := \frac{1}{2} \int_{\mathbb{R}^2} |\partial_t u(t,x)|^2_h + |\nabla u(t,x)|_h^2 \, dx
= \mbox{const.}
\end{align}
We study wave maps as solutions to the Cauchy problem \eqref{wm} with prescribed finite energy initial data $\vec u(0) = (u_0, u_1)$ where
\begin{align*}
u_0(x) \in \mathcal N, \quad u_1(x) \in T_{u_0(x)} \mathcal N, \quad x \in \mathbb{R}^2.
\end{align*}
Here and throughout the paper we use the notation $\vec u(t)$ to denote the pair of functions
\begin{align*}
\vec u(t) := (u(t,\cdot), \partial_t u(t,\cdot)).
\end{align*}
We also assume that there exists $u_\infty \in \mathcal N$ such that
\begin{align}\label{eq:behav_at_infinity}
u_0(x) \rightarrow u_\infty \mbox{ as } |x| \rightarrow \infty.
\end{align}
Due to the conformal symmetry of Minkowski space, we also have the following scaling symmetry: if $\vec u(t)$ is a wave map and $\lambda > 0$, then
\begin{align}\label{scale}
\vec u_\lambda(t,x) = (u_\lambda(t,x), \partial_t u_\lambda(t,x)) := \left (
u \Bigl ( \frac{t}{\lambda}, \frac{x}{\lambda} \Bigr ),
\frac{1}{\lambda} \partial_t u \Bigl ( \frac{t}{\lambda}, \frac{x}{\lambda} \Bigr )
\right )
\end{align}
is also a wave map. The energy is scale invariant,
\begin{align*}
\mathcal E (\vec u_\lambda) = \mathcal E(\vec u),
\end{align*}
and for this reason, the wave maps equations in (1+2)-dimensions are said to be \emph{energy critical}. Wave maps have been extensively studied over the past several decades, and we refer the reader to \cite{SSbook} and \cite{GG} for reviews of the work that has been done.
In this work we specialize to the case $\mathcal N = \BigS^2$ (with the usual round metric) and wave maps which respect the rotational symmetry of the background and target. More precisely, we fix an origin in $\mathbb{R}^2$ and north pole $N \in \BigS^2$. We say a map $u : \mathbb{R}^{1+2} \rightarrow \BigS^2$ is \emph{corotational} or $1$-\emph{equivariant} if $u \circ \rho = \rho \circ u$ for all $\rho \in SO(2)$. Here $\rho$ acts on $\BigS^2$ by rotation about the axis determined by $N$. Choosing $N = (0,0,1)$ without loss of generality, we can write a corotational map as
\begin{align}\label{equiv}
u(t,r,\theta) = (\sin \psi(t,r) \cos \theta, \sin \psi(t,r) \sin \theta, \cos \psi(t,r)) \in \BigS^2 \subset \mathbb{R}^3,
\end{align}
where $(t,r,\theta)$ are polar coordinates on $\mathbb{R}^{1+2}$, and $(\psi,\theta)$ are spherical coordinates on $\BigS^2$. For corotational maps, the Cauchy problem \eqref{wm} reduces to a single equation for the azimuth angle $\psi = \psi(t,r)$:
\EQ{ \label{eq:wmk}
\begin{aligned}
&\partial_t^2 \psi - \partial_r^2 \psi - \frac{1}{r} \partial_r \psi + \frac{\sin 2 \psi}{2r^2} = 0, \\
&\vec \psi(0) = (\psi_0, \psi_1),
\end{aligned}
}
The conserved energy \eqref{energy} is given by
\EQ{ \label{eq:en}
\mathcal{E}( \vec \psi(t) ) = \pi \int_0^\infty \left( (\partial_t \psi (t, r))^2 + ( \partial_r \psi(t, r))^2 + \frac{\sin^2\psi(t, r)}{r^2} \right) rdr,
}
and the scaling symmetry of the equation \eqref{scale} is given by
\begin{align}
\vec \psi_\lambda(t,r) := \left (
\psi \Bigl ( \frac{t}{\lambda}, \frac{r}{\lambda} \Bigr ),
\frac{1}{\lambda} \partial_t \psi \Bigl ( \frac{t}{\lambda}, \frac{r}{\lambda} \Bigr )
\right ).
\end{align}
The expression for the energy implies that there exists $m,n \in \mathbb{Z}$ such that
$
\lim_{r \rightarrow 0} \psi_0(r) = m\pi$ and $\lim_{r \rightarrow \infty} \psi_0(r) = n\pi.
$
By continuity of the flow $\vec \psi(t)$,
\begin{align*}
\lim_{r \rightarrow 0} \psi(t,r) = m\pi, \quad \lim_{r \rightarrow \infty} \psi(t,r)
= n\pi, \quad \forall t.
\end{align*}
Without loss of generality, we may assume that $m = 0$ and $n \in \mathbb{N} \cup \{0\}$.
Thus, finite energy solutions to \eqref{eq:wmk} are split into disjoint classes given by
\EQ{\label{eq:Hn}
\mathcal{H}_n := \{ (\psi_0, \psi_1) \mid \mathcal{E}(\psi_0, \psi_1) < \infty \mand \lim_{r \to 0}\psi_0(r) =0, \, \lim_{r \to \infty}\psi_0(r) = n\pi\}.
}
The parameter $n \in \mathbf{m}athbb{N} \cup \{0\}$ we refer to as the \epsilonilonmph{degree} of the map, and it can be thought of as parameterizing the minimal number of times the map $\textrm{tr}ianglerightrtialsi(t)$ (more precisely, $u(t)$ given by \epsilonilonqref{equiv}) wraps $\mathbf{m}athbb{R}^2$ around the sphere. We study those corotational initial data $(\textrm{tr}ianglerightrtialsi_0, \textrm{tr}ianglerightrtialsi_1) \in \mathbf{m}athcal H_0$, i.e. which satisfy
\betagin{align*}
\lim_{r \rightarrow 0} \textrm{tr}ianglerightrtialsi_0(r) = \lim_{r \rightarrow \infty} \textrm{tr}ianglerightrtialsi_0(r) = 0.
\epsilonilonnd{align*}
A corotational ansatz greatly reduces the complexity of the wave maps equations and is possible in the more general setting where $\mathcal N$ is a surface of revolution. The choice $\mathcal N = \mathbb{S}^2$ is motivated by what is known about stationary wave maps, or \emph{harmonic maps}, in this setting. By an ODE argument, the unique (up to scaling) nontrivial corotational harmonic map is given explicitly by
\begin{align*}
Q(r) = 2 \arctan r,
\end{align*}
with energy
\begin{align*}
\mathcal E(\vec Q) = 4\pi.
\end{align*}
We note that
\begin{align*}
\lim_{r \rightarrow 0} Q(r) = 0, \quad \lim_{r \rightarrow \infty} Q(r) = \pi,
\end{align*}
so that $\vec Q \in \mathcal H_1$. In fact, it can be shown that $Q$ minimizes the energy in $\mathcal H_1$ (see Section 2). As we will soon discuss, these harmonic maps play a fundamental role in the long time dynamics of wave maps with large initial data.
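For the reader's convenience we record the elementary computations behind these two facts. Since $\partial_r Q(r) = \frac{2}{1+r^2}$, $\sin Q(r) = \frac{2r}{1+r^2}$, and $\cos Q(r) = \frac{1-r^2}{1+r^2}$, the map $Q$ satisfies the first order equation $\partial_r Q = \frac{\sin Q}{r}$, and hence
\begin{align*}
\partial_r^2 Q + \frac{1}{r} \partial_r Q - \frac{\sin 2Q}{2r^2}
= \frac{-4r^2 + 2(1+r^2) - 2(1-r^2)}{r(1+r^2)^2} = 0,
\end{align*}
so that $(Q,0)$ is a static solution of \eqref{eq:wmk}. Moreover, using $\partial_r Q = \frac{\sin Q}{r}$ in \eqref{eq:en},
\begin{align*}
\mathcal E(\vec Q) = 2\pi \int_0^\infty (\partial_r Q(r))^2 \, r \, dr = 2\pi \int_0^\infty \frac{4r}{(1+r^2)^2} \, dr = 4\pi.
\end{align*}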
We conclude this subsection by discussing $k$-equivariant maps, a generalization of the corotational reduction. For $k \in \mathbb{N}$, we say a map $u : \mathbb{R}^{1+2} \rightarrow \mathbb{S}^2$ is \emph{$k$-equivariant} if $u \circ \rho = \rho^k \circ u$ for all $\rho \in SO(2)$, where $SO(2)$ acts on $\mathbb{R}^{1+2}$ and $\mathbb{S}^2$ as before. Then we may write $$u(t,r,\theta) = (\sin \psi(t,r) \cos k\theta, \sin \psi(t,r) \sin k\theta, \cos \psi(t,r)),$$ and the wave maps equations reduce to the single equation
\EQ{ \label{eq:equiv}
\begin{aligned}
&\partial_t^2 \psi - \partial_r^2 \psi - \frac{1}{r} \partial_r \psi + k^2 \frac{\sin 2 \psi}{2r^2} = 0, \\
&\vec \psi(0) = (\psi_0, \psi_1).
\end{aligned}
}
The conserved energy \eqref{energy} is now given by $\mathcal{E}^k( \vec \psi(t) ) = \pi \int_0^\infty \left( (\partial_t \psi (t, r))^2 + ( \partial_r \psi(t, r))^2 + k^2 \frac{\sin^2\psi(t, r)}{r^2} \right) r \, dr$.
As in the corotational setting, the unique (up to scaling) nontrivial $k$-equivariant harmonic map is given by
\begin{align*}
Q^k(r) = 2 \arctan (r^k).
\end{align*}
The harmonic map $\vec Q^k$ lies in $\mathcal H_1$, satisfies $\mathcal{E}^k(\vec Q^k) = 4 \pi k$, and minimizes the energy $\mathcal{E}^k(\cdot)$ in the class $\mathcal H_1$. In particular, the corotational harmonic map $Q = Q^1$ has the least energy among all nontrivial equivariant harmonic maps.
We now turn to motivating our main results.
\subsection{History and motivation}
Strichartz estimates suffice to prove global existence for equivariant wave maps evolving from small degree-$0$ data (see Section 2), so recent work has been dedicated to understanding the long-time dynamics of wave maps evolving from large initial data. It is here that the family of harmonic maps plays a fundamental role. Indeed, a classical result of Struwe \cite{Struwe} states that if a smooth $k$-equivariant wave map $\vec \psi(t)$ breaks down at time $t = 1$, say, then $\vec \psi(t)$ converges to the harmonic map $\vec Q^k$ in a local spacetime norm. Moreover, $\vec \psi(t,r)$ must concentrate energy in excess of $\mathcal{E}^k(\vec Q^k)$ at the tip of the inverted light cone centered at $(T_+,r) = (1,0)$. Thus, a $k$-equivariant wave map $\vec \psi(t)$ with energy less than $\mathcal{E}^k(\vec Q^k)$ is globally defined and smooth.
The works of Krieger, Schlag, and Tataru~\cite{KST}, Rodnianski and Sterbenz~\cite{RS}, and Rapha\"el and Rodnianski~\cite{RR} constructed examples of degree-$1$ wave maps that blow up by bubbling off a harmonic map, i.e.
\begin{align*}
\vec \psi(t) = \vec Q^k_{\lambda(t)} + \vec \varphi(t),
\end{align*}
with $\lambda(t) \rightarrow 0$ as $t \rightarrow T_+ < \infty$ and $\varphi(t)$ regular up to $t = T_+$.
As we have discussed, harmonic maps play a key role in singularity formation for wave maps, but in fact they should be fundamental in describing the dynamics of \emph{arbitrary} wave maps. Indeed, according to the \emph{soliton resolution conjecture}, one expects the following beautiful simplification of the dynamics: smooth wave maps asymptotically break up into a sum of dynamically rescaled harmonic maps plus a free radiation term (a solution to the linearized equation).
The problem we address in this paper, describing the dynamics of corotational wave maps with energy exactly $2\mathcal E(\vec Q)$, is motivated by several recent advances made in establishing this conjecture for equivariant wave maps. We first state the following refined threshold theorem proved in~\cite{CKLS1}.
\begin{thm}\emph{\cite{CKLS1}}\label{t:2EQ} For smooth initial data $(\psi_0,\psi_1) \in \mathcal{H}_0$ with
\EQ{
\mathcal{E}^k(\psi_0,\psi_1) < 2\mathcal{E}^k(\vec Q^k),
}
there exists a unique global smooth $k$-equivariant wave map $\vec \psi \in C(\mathbb{R}; \mathcal{H}_0)$ with $\vec \psi(0) = (\psi_0,\psi_1)$. Moreover, $\vec\psi(t)$ scatters both forward and backward in time, i.e. there exist solutions $\vec \varphi_L^\pm$ of the linearized equation
\begin{align}\label{free}
\partial_t^2 \varphi - \partial_r^2 \varphi - \frac{1}{r} \partial_r \varphi + \frac{k^2}{r^2} \varphi = 0,
\end{align}
such that
\EQ{ \label{scat}
\vec{\psi}(t) = \vec \varphi_L^\pm(t) + o_{\mathcal{H}_0}(1) \mas t \to \pm\infty.
}
\end{thm}
The intuition behind the threshold energy being $2 \mathcal{E}^k(\vec Q^k)$ rather than $\mathcal{E}^k(\vec Q^k)$ is the following. If a $k$-equivariant map $\vec \psi(t) \in \mathcal H_0$ wraps the plane around the sphere once, then it must also unwrap the sphere once more in order to have degree $0$. Since the minimum amount of energy needed for a $k$-equivariant map to wrap the plane around the sphere once is equal to $\mathcal{E}^k(\vec Q^k)$, it follows that if $\mathcal{E}^k(\vec \psi) < 2 \mathcal{E}^k(\vec Q^k)$, then $\psi(t)$ is bounded away from the south pole (i.e. there exists $\epsilon > 0$ such that $\psi(t,r) < \pi - \epsilon$ for all $t,r$). Thus, $\vec \psi(t)$ cannot converge locally to a harmonic map $\vec Q^k_\lambda$, which by Struwe's bubbling result implies that $\vec \psi(t)$ is globally regular.
A result analogous to Theorem \ref{t:2EQ} for the full wave map system, with no symmetry assumptions, was established by Lawrie and Oh in \cite{LO1}. More precisely, we say initial data $(u_0,u_1)$ (with target $\mathbb{S}^2$) is \emph{topologically trivial} if
\begin{align*}
\frac{1}{4\pi} \int_{\mathbb{R}^2} u_0^* \, \omega_{\mathbb{S}^2} = 0,
\end{align*}
where $\omega_{\mathbb{S}^2}$ is the volume form on $\mathbb{S}^2$. It can be checked that this condition is propagated by the wave map evolution, and that an equivariant map $\vec u$ with associated azimuth angle $\vec \psi \in \mathcal{H}_0$ is topologically trivial. The authors obtain the following result as a consequence of the analysis in \cite{ST2}.
\begin{thm}\emph{\cite{LO1}} \label{t:2EQfull}
Suppose that $(u_0,u_1)$ is smooth, topologically trivial, finite energy initial data with
\begin{align*}
\mathcal E(u_0,u_1) < 8\pi = 2 \mathcal E(\vec Q^1).
\end{align*}
Then there exists a unique global solution $u:\mathbb{R}^{1+2} \rightarrow \mathbb{S}^2$ to the wave maps equations \eqref{wm} with $\vec u(0) = (u_0,u_1)$. Moreover, $\vec u(t)$ scatters to the constant map as $t \rightarrow \pm \infty$.
\end{thm}
The works~\cite{CKLS1, CKLS2} also established soliton resolution for corotational wave maps in $\mathcal{H}_1$ with energy below $3 \mathcal E(\vec Q)$. In this setting only one concentrating bubble is possible, and these works showed that for any such wave map there exist a solution $\vec \varphi_L(t) \in \mathcal{H}_0$ to the free equation \eqref{free} (the radiation) and a continuous dynamical scale $\lambda(t) \in (0, \infty)$ such that
\EQ{ \label{eq:sr1}
\vec \psi(t) = \vec Q_{\lambda(t)} + \vec \varphi_L(t) + o_{\mathcal{H}_0}(1) \mas t \to T_+.
}
Proving soliton resolution above $3 \mathcal E(\vec Q)$ is very challenging since one can conceivably have multiple harmonic maps concentrating at different scales and interacting. However, there has been exciting recent progress in establishing a weaker form of the conjecture. The works of C\^ote \cite{Cote15} (for $1$-equivariant maps) and Jia and Kenig \cite{JK} (for all equivariant maps) established the following soliton resolution result along a well-chosen sequence of times.
\begin{thm}\emph{\cite{Cote15, JK}}\label{t:cjk} Let $\vec \psi(t)\in \mathcal{H}_{n}$ be a smooth $k$-equivariant wave map on $[0, T_+)$. Then there exist a sequence of times $t_n \to T_+$, an integer $J \in \mathbb{N} \cup \{0\}$, a solution $\vec \varphi_L(t) \in \mathcal H_0$ to \eqref{free}, sequences of scales $\lambda_{n, j}$ satisfying $0 < \lambda_{n,1} \ll \lambda_{n,2} \ll \cdots \ll \lambda_{n,J}$, and signs $\iota_j \in \{-1, 1\}$ for $j \in \{1, \dots, J\}$, so that
\EQ{ \label{eq:seq}
\vec \psi(t_n) = \sum_{j =1}^J \iota_j \vec Q^k_{\lambda_{n,j}} + \vec \varphi_L(t_n) + o_{\mathcal{H}_0}(1) \mas n \to \infty.
}
If $T_+ < \infty$ then $J \geq 1$ and $0 < \lambda_{n,1} \ll \cdots \ll \lambda_{n,J} \ll T_+ - t_n$, while if $T_+ = \infty$ then $0 < \lambda_{n,1} \ll \cdots \ll \lambda_{n,J} \ll t_n.$ The
signs $\iota_j$ are required to satisfy the topological constraint imposed by $\vec \psi(t) \in \mathcal H_n$, i.e. $$\lim_{r \to \infty} \sum_{j =1}^J \iota_j Q^k_{\lambda_{n,j}}(r) = n\pi.$$
\end{thm}
We remark that the works~\cite{CKLS1, CKLS2, Cote15, JK, JL} use ideas and techniques inspired by the seminal papers of Duyckaerts, Kenig, and Merle~\cite{DKM1, DKM2, DKM3, DKM4} on the focusing quintic nonlinear wave equation in three space dimensions (see also \cite{KBook-15} for an account of the important techniques and ideas in these papers).
In \cite{JJ-AJM}, Jendrej showed that it is possible for more than one bubble to appear in the decomposition \eqref{eq:seq}.
\begin{thm}\emph{\cite{JJ-AJM}}
\label{thm:deux-bulles-wmap}
Fix an equivariance class $k > 2$. There exists a solution $\vec\psi: (-\infty, T_+) \to \mathcal{H}_0$ of \eqref{eq:equiv}
such that
\begin{equation}
\label{eq:mainthm-wmap}
\lim_{t\to -\infty}\big\|\vec\psi(t) - \big(\vec Q_{c_k |t|^{-\frac{2}{k-2}}} - \vec Q \big)\big\|_{\mathcal{H}_0} = 0,
\end{equation}
where $c_k > 0$ is explicit. \qed
\end{thm}
A similar construction is possible when $k = 2$, with an explicit exponentially decaying scale as $t \rightarrow -\infty$. By Theorem \ref{t:2EQ}, these solutions are examples of non-dispersing \emph{threshold solutions} to \eqref{eq:equiv} for $k \geq 2$.
In \cite{JL}, Jendrej and Lawrie classified the dynamics of $k$-equivariant wave maps $\vec \psi(t)$ with \emph{threshold energy} $\mathcal{E}^k(\vec \psi(t)) = 2 \mathcal{E}^k(\vec Q^k)$ for $k \geq 2$. Their work provided the primary motivation and roadmap for establishing our main results. To state their results concisely,
we first introduce some terminology. Let $\vec \psi(t) : (T_-, T_+) \to \mathcal{H}_0$ be a $k$-equivariant wave map with $\mathcal{E}^k(\vec \psi) = 2 \mathcal{E}^k(\vec Q^k)$. We say that $\vec\psi(t)$ is a \emph{two-bubble in
the forward time direction} if there exist $\iota \in \{1, -1\}$
and continuous functions $\lambda(t), \mu(t) > 0$ such that
\EQ{
\lim_{t \to T_+} \| (\psi(t) - \iota(Q^k_{\lambda(t)} - Q^k_{\mu(t)}), \psi_t(t))\|_{\mathcal{H}_0} = 0, \qquad \lambda(t) \ll \mu(t)\text{ as }t \to T_+.
}
The notion of a \emph{two-bubble in the backward time direction} is defined similarly.
\begin{thm}\emph{\cite{JL}}\label{t:JL} Let $k \geq 2$, and let $\vec\psi :(T_-, T_+) \to \mathcal{H}_0$ be a $k$-equivariant wave map such that
\EQ{
\mathcal{E}^k(\vec \psi) = 2 \mathcal{E}^k(\vec Q^k) = 8\pi k.
}
Then $T_- = -\infty$, $T_+ = +\infty$, and one of the following alternatives holds:
\begin{itemize}[leftmargin=0.5cm]
\item $\vec \psi(t)$ scatters in both time directions;
\item $\vec\psi(t)$ scatters in one time direction and is a two-bubble in the other time direction. Moreover, if $\vec \psi(t)$ is a two-bubble in the forward time direction, then there exist $C = C(k) > 0$ and $\mu_0 > 0$ such that $\mu(t) \rightarrow \mu_0$ and
\begin{align}
\mu_0 \exp(-Ct) \leq&\,\, \lambda(t) \leq \mu_0 \exp(-t/C), \quad \mbox{if } k = 2, \\
\frac{\mu_0}{C} t^{-\frac{2}{k-2}} \leq &\,\,\lambda(t) \leq C \mu_0 t^{-\frac{2}{k-2}}, \quad \mbox{if } k \geq 3.
\end{align}
An analogous estimate holds if $\vec \psi(t)$ is a two-bubble in the backward time direction.
\end{itemize}
\end{thm}
\subsection{Main results}
The two-bubble solutions given by Theorem \ref{thm:deux-bulles-wmap} and the classification result Theorem \ref{t:JL} concern $k$-equivariant wave maps with $k \geq 2$. The first main result of this paper establishes the existence of a corotational two-bubble solution. In contrast to higher equivariant wave maps, our solution is in fact a threshold \emph{blow-up solution}.
\begin{thm}[Main Theorem 1]\label{t:main2}
There exist a corotational wave map $\vec \psi_c : (0,T_+) \to \mathcal{H}_0$, a continuous scale $\lambda_c(t) > 0$, and a constant $C > 0$ such that
\begin{align}
\frac{1}{C} t^2 \leq \lambda_c(t) |\log \lambda_c(t)| \leq C t^2,
\end{align}
and
\begin{align}
\lim_{t \rightarrow 0^+} \left \| \vec \psi_c(t) - \bigl (\vec Q_{\lambda_c(t)} - \vec Q \bigr ) \right \|_{\mathcal{H}_0} = 0.
\end{align}
In particular, $\mathcal E(\vec \psi_c) = 8\pi$ and $T_- = 0$.
\end{thm}
By Theorem \ref{t:2EQ}, $\vec \psi_c$ is a minimal energy non-dispersing solution to \eqref{eq:wmk}. Moreover, by Theorem \ref{t:2EQfull} the map $u_c:(0,T_+) \times \mathbb{R}^2 \rightarrow \mathbb{S}^2$ given by
\begin{align*}
u_c(t,r,\theta) = (\sin \psi_c(t,r) \cos \theta, \sin \psi_c(t,r) \sin \theta, \cos \psi_c(t,r))
\end{align*}
is a topologically trivial, minimal energy, non-dispersing solution to the \emph{full wave maps equations}. The existence
of such a solution has been an open question until now.
The proof of Theorem \ref{t:main2} is a byproduct of the estimates we derive to prove our second main result, combined with the general scheme for constructing multi-soliton solutions introduced by Martel \cite{Martel-AJM15} and Merle \cite{Merle90}.
\begin{thm}[Main Theorem 2] \label{t:main}
Let $\vec\psi(t) :(T_-, T_+) \to \mathcal{H}_0$ be a solution to~\eqref{eq:wmk} such that
\EQ{
\mathcal{E}(\vec \psi) = 2 \mathcal{E}(\vec Q) = 8\pi.
}
Then either $T_- = -\infty$ or $T_+ = +\infty$. Assume that $T_- = -\infty$. Then $\vec \psi(t)$ scatters in backward time, while in forward time one of the following holds:
\begin{itemize}[leftmargin=0.5cm]
\item $T_+ = \infty $ and $\vec \psi(t)$ scatters in forward time;
\item $T_+ < \infty$, $\vec\psi(t)$ is a two-bubble in the forward time direction, and there exists an absolute constant $C > 0$ such that the scales of the bubbles $\lambda(t), \mu(t)$ satisfy
\EQ{
\lim_{t \rightarrow T_+} \mu(t) = \mu_0 \in (0, \infty), \qquad
\frac{1}{C}(T_+ - t)^2 \leq \frac{\lambda(t)}{\mu_0}\left |
\log \Bigl (
\frac{\lambda(t)}{\mu_0}
\Bigr )
\right | \leq C (T_+ - t)^2.
}
\end{itemize}
If instead we assume that $\vec \psi(t)$ satisfies $T_+ = \infty$, then $\vec \psi(t)$ scatters in forward time, and one of the analogous alternatives formulated in backward time must hold.
\end{thm}
Overall, our main results show that the dynamics of corotational wave maps at threshold energy are very different from those of higher equivariant wave maps at threshold energy.
We remark that by Theorem \ref{t:main}, the blow-up solution $\vec \psi_c$ from Theorem \ref{t:main2} is global in forward time, $T_+ = \infty$, and scatters. Thus, $\vec \psi_c$ is a trajectory connecting asymptotically free behavior to blow-up behavior. Theorem \ref{t:main} also asserts that for \eqref{eq:wmk} the collision of two bubbles produces only radiation and is therefore inelastic. This is consistent with what is known and expected for nonintegrable dispersive equations (see \cite{MM11-2, MM11, MM17, JL}). Our main results are in the spirit of the classification results at threshold energy in \cite{DM, DM-NLS, JL}, but one may also draw parallels with the study of minimal blow-up solutions for dispersive equations (see for example \cite{Merle93}, \cite{RaSz11}). Finally, we remark that apart from the seminal work of Duyckaerts, Kenig, and Merle \cite{DKM4}, which verified the soliton resolution conjecture for the $3d$ radial energy critical wave equation, and Theorem \ref{t:JL} due to Jendrej and Lawrie, Theorem \ref{t:main} is the only result which proves soliton resolution continuously in time at an energy level that a priori allows two solitons in the asymptotic decomposition. In fact, Theorem \ref{t:main} shows that solutions with two concentrating bubbles cannot occur: any non-scattering solution must blow up precisely one bubble while radiating a second, stationary harmonic map outside the inverted light cone.
\subsection{Outline}
The general framework for proving Theorem \ref{t:main} is inspired by the work \cite{JL} on higher equivariant wave maps, but the slow convergence to $\pi$ of the corotational harmonic map $Q(r) = 2 \arctan r$
creates serious technical challenges not present in the higher equivariant setting. The main source of these obstacles is elaborated on below.
A rough outline of the proof of Theorem \ref{t:main} is as follows. By Theorem \ref{t:cjk}, a corotational wave map $\vec \psi$ that does not scatter forward in time must approach the space of two-bubbles along a sequence of times. Towards a contradiction, we assume that $\vec \psi$ does not approach the space of two-bubbles continuously in time. We then split time into a sequence of intervals $[a_m, b_m]$ so that $\vec \psi(t)$ is close to the space of two-bubbles on $[a_m,b_m]$ (``bad intervals''), and $\vec \psi(t)$ stays away from the space of two-bubbles on $[b_m, a_{m+1}]$ (``good intervals''). By concentration compactness techniques, the trajectory $\vec \psi(t)$ has a certain \emph{compactness property} on the union of the good intervals (see Section 2, Section 4). Past experience suggests that $\vec \psi(t)$ converges to a degree-$0$ stationary solution to \eqref{eq:wmk} along a sequence of times in the good intervals (see \cite{DKM16} for example). Since the only degree-$0$ stationary solution to \eqref{eq:wmk} is $0$, we conclude $\vec \psi = 0$, a contradiction.
To prove that $\vec \psi(t)$ approaches a stationary solution to \eqref{eq:wmk}, we use a virial identity for wave maps (see Section 2) which bounds an integral of $\| \partial_t \psi(t) \|_{L^2}^2$ over certain good intervals by small error terms plus an integral of ${\bf d}(\vec \psi(t))^{1/2}$ over certain bad intervals. Here ${\bf d}(\cdot)$ is a measure of the distance to the space of two-bubbles (see Section 2). The errors can be made small because $\vec \psi$ is close to a two-bubble on the bad intervals and has the compactness property on the good intervals. The time integral of ${\bf d}(\vec \psi(t))^{1/2}$ can be absorbed into the left-hand side, which shows that $\| \partial_t \psi(t) \|_{L^2}^2$ converges to $0$ in a certain averaged (over the good intervals) sense. The compactness property then allows us to conclude that $\vec \psi(t)$ must approach a stationary solution. The fact that the integral of ${\bf d}(\vec \psi(t))^{1/2}$ can be absorbed into the left-hand side rests on the following informal fact: leaving the space of two-bubbles on a bad interval causes an appreciable amount of kinetic energy, $\| \partial_t \psi(t) \|_{L^2}^2$, to be present on the neighboring good interval (see Proposition \ref{prop:modulation}, Lemma \ref{l:time_split} and Section 4 for precise statements).
We prove this fact by studying the interaction of corotational two-bubbles using the modulation method (see Section 3). This is one of the main novelties of this paper.
On a time interval where a corotational wave map $\vec \psi$ is close to a two-bubble, we decompose the solution as
\begin{align*}
\vec \psi(t) = \Bigl (
Q_{\lambda(t)} - Q_{\mu(t)} - g(t), \partial_t \psi(t)
\Bigr ),
\end{align*}
where the modulation parameters $\lambda(t)$ and $\mu(t)$ are chosen by imposing certain orthogonality conditions on $g$. The choice also ensures that ${\bf d}(\vec \psi(t))$ is comparable to $\lambda(t)/\mu(t)$. The goal of Section 3 is to establish and control the growth of the ratio $\lambda(t)/\mu(t)$ to the future of a time $t_0$ at which $\frac{d}{dt}\big|_{t = t_0} \lambda(t)/\mu(t) > 0$ (see Proposition \ref{p:modp}, Proposition \ref{prop:modulation}). In contrast to the work of Jendrej and Lawrie \cite{JL} on higher equivariant wave maps, the function $(r \partial_r Q)_{\lambda(t)}$, which is the tangent vector to the curve $t \mapsto Q_{\lambda(t)}$, is not in $L^2(\mathbb{R}^2)$. This function plays a key role in the scheme since $\lambda(t)^{-1} \ang{(r \partial_r Q)_{\lambda(t)} \mid \partial_t \psi(t)}$ should heuristically be proportional to $\lambda'(t)$, so we may differentiate it and use \eqref{eq:wmk} to get information about $\lambda''(t)$. The fact that $r \partial_r Q \notin L^2$ is the major obstacle in deriving the estimates, and the technique we introduce in Section 3 to overcome this challenge is a central contribution of this work.
We now briefly outline the proof of Theorem \ref{t:main2}. The construction of the blow-up solution $\vec \psi_c(t)$ is quite short thanks to the results proved in Section 3. We consider initial data at time $t_n$ of the form
\begin{align*}
\vec \psi_n(t_n) = \Bigl ( Q_{\ell(t_n)} - Q, -\ell'(t_n) \ell(t_n)^{-1} (r \partial_r Q)_{\ell(t_n)} \chi_n \Bigr ),
\end{align*}
where $\chi_n$ is a cutoff that ensures $\mathcal E(\vec \psi_n(t_n)) = 8 \pi$. The function $\ell(t)$ is chosen to satisfy $\ell'(t_n) > 0$ and to essentially saturate the bounds on the modulation parameters in Proposition \ref{p:modp}. Let $\vec \psi_n(t)$ denote the solution to \eqref{eq:wmk} with data $\vec \psi_n(t_n)$ at time $t = t_n$. By our choice of the data, the control of the growth of the modulation parameters obtained in Proposition \ref{p:modp}, and a bootstrap argument, we conclude that there exist absolute constants $\alpha, C, T > 0$ with $T$ small such that $T_+(\vec \psi_n) > T$ and
\begin{align*}
\inf_{\substack{\mu \in [1/2,2] \\
\lambda |\log \lambda| \in [t^2/C,Ct^2]}} \| \vec \psi_n(t) - (Q_\lambda - Q_\mu, 0) \|_{\mathcal{H}_0}^2 \leq \alpha t^2, \quad \forall n, \; \forall t \in [t_n,T].
\end{align*}
Passing to a weak limit then finishes the proof. Full details are given in Section 5.
\subsection{Acknowledgments}
Support of the National Science Foundation, DMS-1703180, is gratefully acknowledged.
\section{Preliminaries}
The purpose of this section is to recall preliminary facts about solutions to~\eqref{eq:wmk} that will be required in our analysis. We first establish some notation. For two quantities $A$ and $B$, we write $A \lesssim B$ if there exists a constant $C > 0$ such that $A \leq C B$, and we write $A \sim B$ if $A \lesssim B \lesssim A$. Throughout the paper, we denote by $\chi$ a smooth radial cutoff $\chi \in C^{\infty}_{\mathrm{rad}}(\mathbb{R}^2)$ such that, writing $\chi = \chi(r)$, we have
\EQ{
\chi(r) = 1 \mif r \le 1 \mand \chi(r) = 0 \mif r \ge 2 \mand \abs{\chi'(r)} \le 2 \quad \forall r \ge 0.
}
We denote $\chi_R(r) := \chi(r/R)$. The $L^2$ pairing of two radial functions is denoted by
\EQ{
\ang{f \mid g} := \frac{1}{2\pi}\ang{ f \mid g}_{L^2(\mathbb{R}^2)} = \int_0^\infty f(r) g(r) \, r \, d r.
}
The $\dot H^1$ and $L^2$ rescalings of a radial function $f$ are denoted by
\EQ{ \label{eq:scaledef}
f_\lambda(r) = f(r/ \lambda), \quad
f_{\underline{\lambda}}(r) = \frac{1}{\lambda} f(r/ \lambda),
}
and the corresponding infinitesimal generators are given by
\EQ{ \label{eq:LaLa0}
&\Lambda f := -\frac{\partial}{\partial \lambda}\bigg|_{\lambda = 1} f_\lambda = r \partial_r f \quad (\dot H^1_{\textrm{rad}}(\mathbb{R}^2) \, \textrm{scaling}), \\
& \Lambda_0 f := -\frac{\partial}{\partial \lambda}\bigg|_{\lambda = 1} f_{\underline{\lambda}} = (1 + r \partial_r ) f \quad (L^2_{\textrm{rad}}(\mathbb{R}^2) \, \textrm{scaling}).
}
Recall the definition of the space of degree-$0$ data with finite energy:
\EQ{
\mathcal{H}_0:= \{ (\psi_0, \psi_1) \mid \mathcal{E}(\psi_0, \psi_1)< \infty, \quad \lim_{r \to 0} \psi_0(r) = \lim_{r \to \infty} \psi_0(r) = 0 \}.
}
We define the norm $H$ via
\EQ{
\| \psi_0 \|_{H}^2 := \int_0^\infty \left(( \partial_r \psi_0(r))^2 + \frac{(\psi_0(r))^2}{r^2} \right) r \, d r,
}
and for pairs $\vec \psi = (\psi_0, \psi_1) \in \mathcal{H}_0$ we write
\EQ{
\| \vec \psi \|_{\mathcal{H}_0} := \| (\psi_0, \psi_1)\|_{H \times L^2}.
}
Given $\psi_0 \in H$, if we define $\tilde \psi_0(x) := \psi_0(e^x)$, $x \in \mathbb{R}$, we see that
$\|\psi_0 \|_{H} = \| \tilde \psi_0 \|_{H^1(\mathbb{R})}$. Thus, by Sobolev embedding on $\mathbb{R}$ we conclude that
\EQ{
\| \psi_0 \|_{L^{\infty}} \le C \| \psi_0 \|_{H}.
}
This fact will be used frequently in our analysis.
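For completeness, we note that the identity $\|\psi_0 \|_{H} = \| \tilde \psi_0 \|_{H^1(\mathbb{R})}$ used above is simply the substitution $r = e^x$: since $\partial_x \tilde \psi_0(x) = r \partial_r \psi_0(r)$ and $dr/r = dx$, we have
\begin{align*}
\int_0^\infty \left( (\partial_r \psi_0)^2 + \frac{\psi_0^2}{r^2} \right) r \, dr
= \int_{-\infty}^{\infty} \left( (\partial_x \tilde \psi_0)^2 + \tilde \psi_0^2 \right) dx .
\end{align*}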
\subsection{Cauchy theory} \label{s:2-4}
The study of the Cauchy problem for \eqref{eq:wmk} with initial data $(\psi_0, \psi_1) \in \mathcal{H}_0$ is facilitated by a well-known reduction that takes into account the extra dispersion provided by the nonlinearity. In particular, the nonlinearity satisfies
\begin{align*}
\frac{\sin 2 \psi}{2r^2} = \frac{\psi}{r^2} + \frac{\sin 2\psi - 2\psi}{2r^2}.
\end{align*}
The second term on the right-hand side is cubic in $\psi$ near $\psi = 0$.
Thus, the linear part of \eqref{eq:wmk} is given by
\begin{align}
\partial_t^2 \varphi - \partial_r^2 \varphi - \frac{1}{r} \partial_r \varphi + \frac{1}{r^2} \varphi = 0, \label{eq:2dlin}
\end{align}
which, due to the strongly repulsive potential $\frac{1}{r^2}$, has more dispersion than the wave equation on $\mathbb{R}^{1+2}$. By a change of the dependent variable, one sees that the linearized equation \eqref{eq:2dlin} has the same dispersion as the wave equation on $\mathbb{R}^{1+4}$. Indeed, for $\vec \varphi(t) \in \mathcal{H}_0$, define $\vec v(t) = \frac{1}{r} \vec \varphi(t)$.
Then
\EQ{ \label{eq:free}
\frac{1}{r}\left (\partial_t^2 - \Delta_{\mathbb{R}^2} + \frac{1}{r^2} \right ) \varphi = (\partial_t^2-\Delta_{\mathbb{R}^{4}}) v.
}
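The identity \eqref{eq:free} is a direct computation: writing $\varphi = rv$ for radial functions,
\begin{align*}
\Delta_{\mathbb{R}^2} (rv) = \partial_r^2 (rv) + \frac{1}{r}\partial_r (rv)
= r \partial_r^2 v + 3 \partial_r v + \frac{v}{r}
= r \Delta_{\mathbb{R}^4} v + \frac{v}{r},
\end{align*}
so that $\left(-\Delta_{\mathbb{R}^2} + \frac{1}{r^2}\right)(rv) = -r \Delta_{\mathbb{R}^4} v$, which gives \eqref{eq:free} after adding $\partial_t^2$ and dividing by $r$.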
We now use this change of variables to study \eqref{eq:wmk}.
Let $\vec \psi(t)$ be a solution to~\eqref{eq:wmk}, and define $u$ by $ru = \psi$. Then $u$ satisfies
\EQ{\label{eq:4d}
&\partial_t^2 u - \partial_r^2 u -\frac{3}{r} \partial_r u = Z(ru) u^3, \\
&\vec u(0)= (u_0, u_1),
}
where the function $$Z(\rho) := \frac{2\rho - \sin 2 \rho}{2\rho^3}$$ is a smooth, bounded, even function. The linear part of~\eqref{eq:4d} is the radial wave equation on $\mathbb{R}^{1+4}$:
\EQ{\label{eq:4dlin}
&\partial_t^2 v - \partial_r^2 v -\frac{3}{r} \partial_r v=0.
}
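For completeness, we record the algebra behind \eqref{eq:4d}. Substituting $\psi = ru$ into \eqref{eq:wmk} and using the decomposition of the nonlinearity displayed above,
\begin{align*}
0 = \partial_t^2 \psi - \Delta_{\mathbb{R}^2} \psi + \frac{\psi}{r^2} - \frac{2\psi - \sin 2 \psi}{2r^2}
= r \left( \partial_t^2 u - \Delta_{\mathbb{R}^4} u \right) - r\, Z(ru)\, u^3,
\end{align*}
since $\frac{2\psi - \sin 2\psi}{2r^2} = \frac{(ru)^3}{r^2} \, Z(ru) = r\, Z(ru)\, u^3$. Dividing by $r$ yields \eqref{eq:4d}.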
To compare the size of $\vec u$ to that of $\vec \psi$, we note that by Hardy's inequality we have
\EQ{ \label{eq:hardy}
\int_0^{\infty} \left (
(\partial_r u)^2 + \frac{u^2}{r^2}
\right ) r^3 \, dr \sim \| u \|_{\dot{H}^1(\mathbb{R}^4)}^2.
}
Thus, the map
\EQ{
(u_0, u_1) \mapsto ( \psi_0, \psi_1):= (ru_0, ru_1)
}
satisfies
\EQ{ \label{eq:2-4}
\|(u_0, u_1) \|_{\dot{H}^1 \times L^2(\mathbb{R}^4)} \sim \| (\psi_0, \psi_1) \|_{H \times L^2 (\mathbb{R}^2)},
}
and we conclude that the Cauchy problem for~\eqref{eq:4d} with initial data in $\dot{H}^1 \times L^2(\mathbb{R}^4)$ is equivalent to the Cauchy problem for~\eqref{eq:wmk} with initial data $(\psi_0, \psi_1) \in \mathcal{H}_0$.
We recall the following Strichartz estimates for solutions to the free wave equation on $\mathbb{R}^{1+4}$. Let $v$ be a solution to the wave equation
\EQ{
&\partial_t^2v - \Delta_{\mathbb{R}^4} v = F(t, x), \\ &\vec v(0) = (v_0, v_1) \in \dot{H}^1 \times L^2 (\mathbb{R}^4).
}
Then there exists an absolute constant $C > 0$ such that for any time interval $I \subset \mathbb{R}$ we have
\EQ{ \label{eq:strich}
\| v \|_{L^{3}_t(I; L^6_x(\mathbb{R}^4))} + \sup_{t \in I}\| \vec v(t) \|_{\dot{H}^1 \times L^2(\mathbb{R}^4)} \leq C \left ( \| \vec v(0) \|_{\dot{H}^1 \times L^2(\mathbb{R}^4)} + \| F \|_{L^1_t(I; L^2_x(\mathbb{R}^4))} \right ).
}
Using the Strichartz estimates \eqref{eq:strich} and a contraction mapping argument, it is now standard to obtain the following well-posedness and scattering criteria for \eqref{eq:4d}. These facts will be stated in terms of the original azimuth angle $\psi = ru$.
\begin{prop}\label{l:scattering} Let $(\psi_0,\psi_1) \in \mathcal H_0$. Then there exists a unique solution $\vec \psi \in C(I_{\max}; \mathcal H_0)$ to \eqref{eq:wmk} with $\vec \psi(0) = (\psi_0,\psi_1)$,
defined on a maximal time interval of existence $I_{\max}(\psi) = (T_-(\psi), T_+(\psi))$, such that for any $J \Subset I_{\max}$,
\begin{align*}
\| \vec \psi \|_{L^\infty_t(J; H \times L^2(\mathbb{R}^2))} + \| \psi/r \|_{L^3_t(J; L^6_x(\mathbb{R}^4))} < \infty.
\end{align*}
A solution $\vec \psi(t)$ satisfies $T_+(\psi) = \infty$ and scatters in forward time if and only if
$$\| \psi/r \|_{L^3_t((0,T_+);L^6_x(\mathbb{R}^4))} < \infty.$$ A similar statement holds for negative times.
Finally, we have the standard finite time blow-up criterion:
if $T_+(\psi) < \infty$, then $$\| \psi/r \|_{L^3_t((0,T_+);L^6_x(\mathbb{R}^4))} = \infty.$$ A similar statement holds for negative times.
\end{prop}
Using Strichartz estimates and continuity arguments, one also has the following long-time perturbation lemma from \cite{CKLS1}.
\begin{lem}\emph{\cite[Lemma 2.18]{CKLS1}}\label{l:pert} There are continuous functions $\epsilon_0, C_0: (0, \infty) \to (0, \infty)$ with the following property. Let $I\subset \mathbb{R}$ be an open interval, and let $\psi, \varphi \in C^0(I; H) \cap C^1(I; L^2) $ be radial functions such that for some $A>0$,
\begin{align*}
&\|\vec \psi\|_{L^{\infty}_t(I; H \times L^2(\mathbb{R}^2))}+ \|\vec\varphi\|_{L^{\infty}_t(I; H \times L^2(\mathbb{R}^2))}+ \|\varphi/r\|_{L^3_t(I; L^6_x(\mathbb{R}^4))} \le A,\\
&\|\textrm{eq}(\psi/r)\|_{L^1_t(I; L^2_x(\mathbb{R}^4))}+\|\textrm{eq}(\varphi/r)\|_{L^1_t(I; L^2_x(\mathbb{R}^4))} + \|w_0/r\|_{L^3_t(I; L^6_x(\mathbb{R}^4))} \le \epsilon \le \epsilon_0(A),
\end{align*}
where $\textrm{eq}(\psi/r):= \Box_{\mathbb{R}^4} (\psi/r) +(\psi/r)^3Z(\psi)$ in the sense of distributions, $\vec w_0(t):= S(t-t_0)(\vec \psi-\vec \varphi)(t_0)$ with $t_0 \in I$ fixed, and $S$ denotes the propagator for~\eqref{eq:2dlin}. Then
\begin{align*}
\|\vec \psi -\vec \varphi - \vec w_0\|_{L^{\infty}_t(I; H \times L^2(\mathbb{R}^2))} + \left\|\frac{1}{r}(\psi-\varphi)\right\|_{L^3_t(I; L^6_x(\mathbb{R}^4))} \le C_0(A) \epsilon.
\end{align*}
In particular, $\|\psi/r\|_{L^3_t(I; L^6_x(\mathbb{R}^4))} < \infty$.
\end{lem}
\subsection{Concentration compactness} \label{s:cc}
Two fundamental tools used in the study of \eqref{eq:wmk} (and in the study of large data solutions of dispersive equations in general) are the linear and nonlinear profile decompositions of Bahouri and G\'erard. The following linear profile decomposition for the azimuth angles follows from the main result in \cite{BG} and the equivalence of \eqref{eq:free} and \eqref{eq:2-4}.
\begin{lem}\emph{\cite{BG}} \label{c:bg} Let $(\vec \psi_n) \subset \mathcal{H}_0$ be a sequence that is uniformly bounded in $\mathcal{H}_0$. Then, after extracting a subsequence if necessary, there exist a sequence of solutions $\vec \varphi^j_L \in \mathcal{H}_0$ to~\eqref{eq:2dlin}, sequences of times $\{t_{n, j}\}\subset \mathbb{R}$, sequences of scales $\{\lambda_{n, j}\}\subset (0, \infty)$, and errors $\vec \gamma_n^J(0) \in \mathcal H_0$ defined by
\EQ{
\vec \psi_n = \sum_{j=1}^J (\vec \varphi^j_L)_{\lambda_{n,j}}\Bigl
( -\frac{t_{n,j}}{\lambda_{n,j}} \Bigr )
+ \vec \gamma^J_n(0)
}
with the following properties. Let $\gamma_{n, L}^J(t) \in \mathcal{H}_0$ denote the solution to \eqref{eq:2dlin} with initial data $\vec \gamma_n^J(0) \in \mathcal{H}_0$. Then, for any $j \le \ell$,
\EQ{ \label{eq:ga-weak}
(\gamma_n^\ell( t_{n, j}, \lambda_{n, j}\cdot) , \lambda_{n, j} \partial_t \gamma_n^\ell( t_{n, j}, \lambda_{n, j}\cdot)) \rightharpoonup 0 \textrm{ weakly in }\mathcal{H}_0.
}
In addition, for any $j\neq \ell$ we have
\EQ{ \label{eq:po}
\frac{\lambda_{n, j}}{\lambda_{n, \ell}} + \frac{\lambda_{n, \ell}}{\lambda_{n, j}} + \frac{\abs{t_{n, j}-t_{n, \ell}}}{\lambda_{n, j}} + \frac{\abs{t_{n, j}-t_{n, \ell}}}{\lambda_{n, \ell}} \to \infty \quad \textrm{as} \quad n \to \infty.
}
The errors $\vec \gamma_n^J$ vanish asymptotically in the dispersive sense
\EQ{
\limsup_{n \to \infty} \left\|\frac{1}{r} \gamma_{n, L}^J\right\|_{L^{\infty}_tL^4_x \cap L^3_tL^6_x( \mathbb{R} \times \mathbb{R}^4)} \to 0 \quad \textrm{as} \quad J \to \infty.
}
Finally, we have a Pythagorean expansion of the $\mathcal{H}_0$ norms:
\EQ{ \label{ort H}
\|\vec \psi_n\|_{\mathcal{H}_0}^2 = \sum_{1 \le j \le J} \| \vec \varphi_L^j( - t_{n, j}/ \lambda_{n, j}) \|_{\mathcal{H}_0}^2 + \|\vec \gamma_n^J\|_{\mathcal{H}_0}^2 + o_n(1) \mas n \to \infty.
}
\end{lem}
Applying the concentration-compactness methods of Kenig and Merle \cite{KM06}, \cite{KM08} to the study of \eqref{eq:wmk} requires the following Pythagorean expansion of the nonlinear energy, proved in~\cite{CKLS1}.
\begin{lem}\emph{\cite[Lemma $2.16$]{CKLS1}}\label{l:enorth}
Let $\vec \psi_n \in \mathcal{H}_0$ be a bounded sequence with a profile decomposition as in Lemma \ref{c:bg}. Then the following Pythagorean expansion of the nonlinear energy holds:
\EQ{\label{eq:enorth}
\mathcal{E}(\vec \psi_n) = \sum_{j=1}^J \mathcal{E}(\vec \varphi_L^j(-t_{n, j}/ \lambda_{n, j})) + \mathcal{E}(\vec \gamma_n^J) + o_n(1) \mas n \to \infty.
}
\end{lem}
To apply Lemma \ref{c:bg} and Lemma \ref{l:enorth} in the context of the nonlinear problem \eqref{eq:wmk}, we construct nonlinear profiles as follows.
For each linear profile $\varphi^j_L$ with parameters $\{ t_{n,j}, \lambda_{n,j} \}_n$, we define its associated nonlinear profile $\varphi^j$ to be the unique
solution to \eqref{eq:wmk} such that, after passing to a subsequence if necessary, for all $n$ sufficiently large $-t_{n,j}/\lambda_{n,j} \in I_{\max}(\vec \varphi^j)$ and
\begin{align*}
\lim_{n \rightarrow \infty} \| \vec \varphi^j(-t_{n,j}/\lambda_{n,j}) - \vec \varphi^j_L(-t_{n,j}/\lambda_{n,j}) \|_{\mathcal{H}_0} = 0.
\end{align*}
It is easy to see that a nonlinear profile always exists. Indeed, if $-t_{n,j}/\lambda_{n,j} \rightarrow_n t_0 \in \mathbb{R}$, then we let $\varphi^j$ be the
solution to \eqref{eq:wmk} with initial data $\vec \varphi^j(t_0) = \vec \varphi^j_L(t_0)$. If $-t_{n,j}/\lambda_{n,j} \rightarrow_n \infty$, say, then we let
$\varphi^j$ be the unique solution to the integral equation
\begin{align}
\vec \varphi^j(t) = \vec \varphi^j_L(t) - \int_t^\infty S(t-s)\left (0,\frac{2\varphi^j - \sin 2 \varphi^j}{2r^2} \right )(s) \, ds. \label{e912}
\end{align}
A unique solution to \eqref{e912} can be shown to exist using contraction mapping arguments and Strichartz estimates (for $u = \psi/r$). A similar construction can be made if $-t_{n,j} / \lambda_{n,j} \rightarrow -\infty$.
The existence of nonlinear profiles and the long-time perturbation lemma yield the following nonlinear profile decomposition.
\begin{lem}\emph{\cite[Proposition 2.17]{CKLS1}, \cite[Proposition 2.8]{DKM1}}\label{p:nlprof} Let $\vec \psi_n(0)$ be a bounded sequence in $\mathcal{H}_0$ with a profile decomposition as in Lemma~\ref{c:bg}.
Let $\vec \varphi^j$ be the associated nonlinear profiles. Let $s_n \in (0, \infty)$ be any sequence such that for all $j$ and for all $n$,
\EQ{
\frac{s_n - t_{n, j}}{\lambda_{n, j}} < T_+(\vec \varphi^j), \quad \limsup_{n \to \infty} \|\varphi^j/ r\|_{L^3_t([-\frac{t_{n, j}}{\lambda_{n, j}}, \frac{s_n - t_{n, j}}{\lambda_{n, j}}); L^6_x(\mathbb{R}^4))} <\infty.
}
Let $\vec \psi_n(t)$ be the solution of \eqref{eq:wmk} with initial data $\vec \psi_n(0)$. Then, for all $n$ sufficiently large, $\vec \psi_n(s)$ exists for $s \in (0, s_n)$ and
\EQ{
\limsup_{n \to \infty} \|\psi_n/r\|_{L^3_t([0, s_n); L^6_x(\mathbb{R}^4))} < \infty.
}
Finally, the following nonlinear profile decomposition holds for all $s \in [0, s_n)$:
\EQ{
\vec \psi_n(s, r) = \sum_{j=1}^J \left(\varphi^j\left( \frac{s- t_{n, j}}{\lambda_{n, j}}, \frac{r}{\lambda_{n, j}}\right), \frac{1}{\lambda_{n,j}}\partial_t \varphi^j\left(\frac{s-t_{n, j}}{\lambda_{n, j}}, \frac{r}{\lambda_{n,j}}\right) \right) + \vec \gamma_{n, L}^{J}(s, r)+ \vec \theta_n^J(s, r),
}
where $\gamma_{n, L}^J(t)$ is defined in Lemma \ref{c:bg} and
\EQ{\label{eq:nlerror}
\lim_{J \to \infty} \limsup_{n\to \infty} \left( \|\theta_n^J/r\|_{L^3_t([0, s_n); L^6_x(\mathbb{R}^4))} + \|\vec \theta_n^J\|_{L^{\infty}_t ([0, s_n); \mathcal{H}_0)} \right) =0.
}
An analogous statement holds for sequences $s_n\in (-\infty, 0)$.
\end{lem}
The main result obtained from concentration-compactness methods along with Theorem \ref{t:2EQ} is the following compactness statement for nonscattering threshold solutions. The proof is the same as that of the higher equivariant analogue in \cite{JL} and is omitted.
\begin{lem}\emph{\cite[Lemma 2.9]{JL}} \label{l:1profile} Let $\vec \psi(t) \in \mathcal{H}_0$ be a solution to~\eqref{eq:wmk} defined on $[0, T_+(\vec \psi))$. Suppose that $\mathcal{E}(\vec \psi) = 8 \pi$ and $\vec \psi(t)$ does not scatter in forward time. Then if $t_n \to T_+ $ is any sequence of times such that
\EQ{ \label{eq:Hbounded}
\sup_n \| \vec\psi(t_n)\|_{\mathcal{H}_0} \le C < \infty,
}
there exist a subsequence (which we continue to denote by $t_n$), scales $\nu_n>0$, and a nonzero $\vec \varphi \in \mathcal{H}_0$ such that
\EQ{
\vec \psi(t_n)_{\frac{1}{\nu_n}} \to \vec \varphi
}
strongly in $\mathcal{H}_0$. Moreover, $ \mathcal{E}(\vec \varphi ) = 8 \pi$, and the solution $\vec \varphi(s)$ to \eqref{eq:wmk} with data $\vec \varphi(0) = \vec \varphi$ does not scatter in either forward or backward time.
\end{lem}
\subsection{Near two-bubble maps} \label{s:hm}
We recall that the unique (up to scaling) nontrivial corotational harmonic map $Q$ is given by
\begin{align*}
Q(r) = 2 \arctan r.
\end{align*}
The harmonic map $Q$ has the following variational characterization. As in the introduction, let $\mathcal H_{1}$ be the set of all finite energy corotational maps which map spatial infinity to the south pole, i.e.
\EQ{
\mathcal{H}_1:= \{ (\varphi_0, \varphi_1) \mid \mathcal{E}(\vec \varphi)< \infty, \quad \varphi_0(0) = 0, \quad \lim_{r \to \infty} \varphi_0(r) = \pi\}.
}
Then for $(\varphi_0, \varphi_1) \in \mathcal{H}_1$, we have the following
Bogomol'nyi factorization of the nonlinear energy:
\EQ{ \label{eq:bog}
\mathcal{E}( \varphi_0, \varphi_1) &= \pi \| \varphi_1 \|_{L^2}^2 + \pi \int_0^{\infty} \left(\partial_r \varphi_0 - \frac{\sin(\varphi_0)}{r}\right)^2 \, r\, dr + 2\pi \int_0^{\infty} \sin(\varphi_0) \partial_r\varphi_0 \, dr\\
&= \pi \| \varphi_1 \|_{L^2}^2 + \pi \int_0^{\infty} \left(\partial_r \varphi_0 - \frac{\sin(\varphi_0)}{r}\right)^2 \, r\, dr + 2 \pi \int_{\varphi_0(0)}^{\varphi_0(\infty)} \sin(\rho) \, d\rho \\
& = \pi \| \varphi_1 \|_{L^2}^2 + \pi \int_0^{\infty} \left(\partial_r \varphi_0 - \frac{\sin(\varphi_0)}{r}\right)^2 \, r\, dr + 4\pi.
}
By solving the first order equation appearing in the parentheses, we see that $\mathcal E(\varphi_0, \varphi_1) \geq 4\pi$, with equality if and only if $(\varphi_0,\varphi_1) = (Q_\lambda,0)$ for some $\lambda > 0$.
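Indeed, if the middle term in \eqref{eq:bog} vanishes and $\varphi_1 = 0$, then $\varphi_0$ solves $\partial_r \varphi_0 = \frac{\sin \varphi_0}{r}$ with $\varphi_0(0) = 0$ and $\varphi_0(\infty) = \pi$. Separating variables,
\begin{align*}
\frac{d \varphi_0}{\sin \varphi_0} = \frac{dr}{r}
\quad\Longrightarrow\quad
\log \tan \frac{\varphi_0}{2} = \log r - \log \lambda
\quad\Longrightarrow\quad
\varphi_0(r) = 2 \arctan\Bigl(\frac{r}{\lambda}\Bigr) = Q_\lambda(r)
\end{align*}
for some constant of integration $\lambda > 0$.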
In our analysis, we will need several technical facts related to the distance of a map $\vec \psi$ to the set of $2$-bubbles. More precisely, given a map $ \vec\phi = (\phi_0, \phi_1) \in \mathcal{H}_0$ we define its distance ${\bf d}(\vec \phi)$ to the set of $2$-bubbles by
\EQ{ \label{eq:ddef}
{\bf d}(\vec \phi) := \inf_{\lambda, \mu >0, \, \iota \in \{+1, -1\}} \Big( \| (\phi_0 - \iota (Q_\lambda - Q_\mu), \phi_1) \|_{\mathcal{H}_0}^2 + \left( \lambda/\mu \right) \Big).
}
To distinguish between the two cases of a map being close to a pure two-bubble ($\iota = +1$ above) or an anti two-bubble ($\iota = -1$ above), we define
\EQ{\label{eq:dpm}
{\bf d}_\pm(\vec \phi) := \inf_{\lambda, \mu >0} \Big( \| (\phi_0 \mp (Q_\lambda - Q_\mu), \phi_1) \|_{\mathcal{H}_0}^2 + \left( \lambda/\mu \right) \Big).
}
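As a simple illustration of the definition, for an exact two-bubble $\vec \phi = (Q_\lambda - Q_\mu, 0)$ with $\lambda \ll \mu$, taking the pair $(\lambda, \mu)$ and $\iota = +1$ as competitors in the infimum gives ${\bf d}(\vec \phi) \le \lambda/\mu$; thus ${\bf d}$ is small precisely in the regime where the two bubbles have widely separated scales.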
The next two lemmas follow from the same arguments given in \cite{JL} for higher equivariant wave maps, and the proofs will be omitted. The first lemma shows that the size of a map $\vec \psi$ with threshold energy can be controlled by its distance to the surface of two-bubbles. The second lemma proves the intuitive fact that a map $\vec \psi$ cannot simultaneously be close to a pure two-bubble and an anti two-bubble.
\begin{lem}\emph{\cite[Lemma 2.13]{JL}} \label{l:d-size}
Suppose that $\vec \phi = (\phi_0, \phi_1) \in \mathcal{H}_0$ and
\EQ{
&\mathcal{E}( \vec \phi) = 2 \mathcal{E}(\vec Q) = 8 \pi.
}
Then for each $\beta>0$ there exists $C(\beta)>0$ such that
\EQ{ \label{eq:d-big}
{\bf d}(\vec \phi) \ge \beta \Longrightarrow \|(\phi_0, \phi_1) \|_{\mathcal{H}_0} \le C(\beta).
}
Conversely, for each $A>0$ there exists $\alpha = \alpha(A)$ such that
\EQ{\label{eq:d-small}
{\bf d}(\vec \phi) \le \alpha(A) \Longrightarrow \|(\phi_0, \phi_1) \|_{\mathcal{H}_0} \ge A.
}
\end{lem}
\begin{lem}\emph{\cite[Lemma 2.14]{JL}} \label{l:dpm}
There exists an absolute constant $\alpha_0>0$ such that for any $\vec \phi \in \mathcal{H}_0$,
\EQ{
{\bf d}_{\pm}(\vec \phi) \le \alpha_0 \Longrightarrow {\bf d}_{\mp}(\vec \phi) \ge \alpha_0.
}
\end{lem}
The final preliminary results we will need for our analysis are related to a virial identity for solutions to \eqref{eq:wmk}. The following virial identity follows easily from \eqref{eq:wmk} and integration by parts.
\begin{lem} \label{l:vir}
Let $\vec \psi(t)$ be a solution to~\eqref{eq:wmk} on a time interval $I$. Then for any time $t \in I$ and $R>0$ fixed we have
\EQ{\label{eq:vir}
\frac{d}{d t} \ang{ \psi_t \mid \chi_R \, r \partial_r \psi}_{L^2}(t) = - \int_0^\infty \psi_t^2(t, r) \, rdr + \Omega_R(\vec \psi(t)),
}
where
\EQ{ \label{eq:OmRdef}
\Omega_R(\vec \psi(t)) &:= \int_0^\infty \psi_t^2(t)(1 - \chi_R) \, rdr \\
& \quad -\frac{1}{2} \int_0^\infty \Big( \psi_t^2(t) + \psi_r^2(t) - \frac{\sin^2 \psi(t)}{r^2} \Big) \frac{r}{R} \chi'(r/R) \, rdr
}
satisfies
\EQ{ \label{eq:OmRest}
\abs{\Omega_R(\vec \psi(t))} &\lesssim \int_{R}^\infty \psi_t^2(t, r) \, r d r + \int_{R}^{\infty} \abs{ \psi_r^2 - \frac{\sin^2 \psi}{r^2} } \, r d r \\
&\lesssim \int_{R}^\infty \left (
\psi_t^2(t,r) + \psi_r^2(t,r) + \frac{\sin^2 \psi(t,r)}{2r^2}
\right ) r \, dr.
}
\end{lem}
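\begin{rem}
We briefly sketch the computation behind \eqref{eq:vir}, written under the convention that \eqref{eq:wmk} takes the form $\psi_{tt} = \psi_{rr} + \frac{1}{r}\psi_r - \frac{\sin 2\psi}{2r^2}$, consistent with the stationary equation for $Q$ used below. Differentiating in time,
\EQ{
\frac{d}{dt} \ang{ \psi_t \mid \chi_R \, r \partial_r \psi} = \ang{ \psi_{tt} \mid \chi_R \, r \partial_r \psi} + \ang{ \psi_t \mid \chi_R \, r \partial_r \psi_t}.
}
Writing $\psi_t \, r\partial_r \psi_t = \frac{r}{2}\partial_r(\psi_t^2)$ and integrating by parts,
\EQ{
\ang{ \psi_t \mid \chi_R \, r \partial_r \psi_t} = - \int_0^\infty \psi_t^2 \chi_R \, rdr - \frac{1}{2} \int_0^\infty \psi_t^2 \, \frac{r}{R}\chi'(r/R) \, rdr,
}
which produces the main term in \eqref{eq:vir} together with the $\psi_t^2$ contributions to $\Omega_R$. In the first pairing one substitutes the equation for $\psi_{tt}$ and integrates by parts once more; since $r\psi_r \, \partial_r(r\psi_r) = \frac12 \partial_r\big( (r\psi_r)^2 \big)$ and $\frac{1}{2}\sin 2\psi \, \psi_r = \frac12 \partial_r\big( \sin^2\psi \big)$ are exact derivatives in $r$, only the terms involving $\chi'(r/R)$ survive, and these are collected in $\Omega_R$.
\end{rem}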
Finally, using Lemma \ref{l:d-size}, one can bound the virial functional and the error term for threshold solutions in terms of the distance to the set of $2$-bubbles. The proof of this fact is the same as in \cite{JL} and is omitted.
\begin{lem}\emph{\cite[Lemma 2.16]{JL}}\label{l:error-estim}
There exists a number $C_0 > 0$ such that for all $\vec \phi = (\phi_0, \phi_1) \in \mathcal{H}_0$ with $\mathcal{E}(\vec\phi) = 2\mathcal{E}(\vec Q)$
and all $R > 0$, we have
\begin{align}
|\ang{\phi_1, \chi_R r\partial_r \phi_0}| \leq C_0 R \sqrt{{\bf d}(\vec \phi)}, \label{eq:virial-end} \\
\left | \Omega_R(\vec \phi) \right | \leq C_0 \sqrt{{\bf d}(\vec \phi)}. \label{eq:virial-err}
\end{align}
\end{lem}
\section{The modulation method for two-bubble solutions} \label{s:mod}
In this section we analyze the modulation equations that govern the evolution of corotational near $2$-bubble solutions. As in the case of higher equivariant wave maps studied by Jendrej and Lawrie \cite{JL}, the scale of the less concentrated bubble remains essentially unchanged, but it does affect the evolution of the more concentrated bubble. A central challenge in the analysis of corotational maps, not present in the higher equivariant setting, is that the zero mode of the operator obtained by linearizing about the harmonic map $Q$ is a \emph{resonance} rather than an eigenvalue. A rough outline of this section is as follows. For a solution $\vec \psi(t)$ with ${\bf d}(\vec \psi(t))$ small on a time interval $J$, we first use the implicit function theorem to find modulation parameters $\lambda(t), \mu(t)$ defined on $J$ such that
$g(t) := \psi(t) - (Q_{\lambda(t)} - Q_{\mu(t)})$ satisfies appropriate orthogonality conditions and ${\bf d}(\vec \psi(t)) \simeq \frac{\lambda(t)}{\mu(t)}$. We would then like to prove that if the modulation parameters $\lambda(t), \mu(t)$ are approaching each other in scale, i.e., if $\frac{d}{dt} \big( \lambda(t)/\mu(t) \big) \big|_{t = t_0} \geq 0$, then $\lambda(t)/\mu(t)$ continues to grow in a controlled way in forward time near $t_0$. In particular, this would imply that $\vec \psi(t)$ has to leave a small neighborhood of the set of two-bubbles. However, the slow decay of $Q$ forces us to deal with additional technical obstacles not encountered in the case of higher equivariant wave maps. In particular, we must replace $\lambda(t)$ with a carefully chosen logarithmic correction.
\subsection{Modulation Equations}
In this section, we study solutions $\vec \psi(t)$ to~\eqref{eq:wmk} which are close to two-bubbles. More precisely, we consider maps such that $ {\bf d}( \vec \psi(t))$ (defined by \eqref{eq:ddef}) is small on a time interval $J$.
The operator corresponding to linearizing \eqref{eq:wmk} about the harmonic map $Q_\lambda$ is the Schr\"odinger operator
\EQ{
\mathcal{L}_\lambda:= - \partial_r^2 - \frac{1}{r} \partial_r + \frac{\cos 2Q_\lambda}{r^2}.
}
For convenience we write $\mathcal{L} := \mathcal{L}_1$. Differentiating the equation
\begin{align*}
\partial_r^2 Q_\lambda + \frac{1}{r} \partial_r Q_\lambda - \frac{\sin 2 Q_\lambda}{2r^2} = 0
\end{align*}
with respect to $\lambda$ and setting $\lambda = 1$ implies that $\Lambda Q$ is a zero mode for $\mathcal{L}$, i.e.,
\EQ{
\mathcal{L} \Lambda Q = 0, \quad \Lambda Q \in L^\infty(\mathbb{R}^2).
}
Note that $\Lambda Q \sim \frac{1}{r}$ as $r \rightarrow \infty$, so that $\Lambda Q$ fails (logarithmically) to be in $L^2(\mathbb{R}^2)$. We say that $\Lambda Q$ is a \emph{resonance} of $\mathcal{L}$. In the $k$-equivariant setting with $k \geq 2$, $\Lambda Q \in L^2(\mathbb{R}^2)$. This weak decay of $\Lambda Q$ requires more care when studying the modulation equations compared to the higher equivariant setting. We note that in general, by scaling, we have
$$\mathcal{L}_\lambda \Lambda Q_{\lambda} = 0.$$
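\begin{rem}
In the computations below we freely use the explicit form of the corotational harmonic map. With the normalization $Q(r) = 2\arctan r$, so that the Bogomolny equation $\partial_r Q = \frac{\sin Q}{r}$ holds, we have
\EQ{
\Lambda Q(r) = r\partial_r Q(r) = \frac{2r}{1+r^2} = \sin Q(r), \qquad \Lambda_0 \Lambda Q(r) = \frac{4r}{(1+r^2)^2},
}
where $\Lambda = r\partial_r$ and $\Lambda_0 = 1 + r\partial_r$. In particular, $\Lambda Q(r) \simeq r$ for $r \leq 1$ and $\Lambda Q(r) \simeq r^{-1}$ for $r \geq 1$.
\end{rem}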
Define
\begin{align}
\mathcal{Z}(r) := \chi_{L}(r) \Lambda Q(r),
\end{align}
where, as before, $\chi$ is a smooth cutoff.
The parameter $L > 0$ will be chosen later.
We use $\mathcal{Z}$ to obtain a useful choice of modulation parameters (the scales) for the near two-bubble solution $\vec \psi(t)$. We first recall the following modulation lemma from \cite{JL}, which follows from standard arguments involving the implicit function theorem, an expansion of the nonlinear energy, and coercivity properties of $\mathcal{L}_\lambda$.
\begin{lem}\emph{\cite[Lemma $3.1$]{JL}} \label{l:modeq} There exist $\eta_0 = \eta_0(L) >0$ and $C = C(L) > 0$
such that the following holds. Let $\psi(t)$ be a solution to~\eqref{eq:wmk} defined on a time interval $J \subset \mathbb{R}$, and assume that
\EQ{
{\bf d}_+(\vec\psi(t)) \leq \eta_0\qquad \forall t\in J.
}
Then there exist unique $C^1(J)$ functions $\lambda(t), \mu(t)$ so that the function
\EQ{ \label{eq:gdef1}
g(t):= \psi(t) - Q_{\lambda(t)} + Q_{\mu(t)} \in H ,
}
satisfies for all $t \in J$
\begin{gather}
\ang{ \mathcal{Z}_{\uln{\lambda(t)}} \mid g(t)} = 0, \label{eq:ola} \\
\ang{\mathcal{Z}_{\uln{\mu(t)}} \mid g(t) } = 0, \label{eq:omu} \\
{\bf d}_+(\vec \psi(t)) \le \| (g(t), \partial_t \psi(t)) \|_{\mathcal{H}_0}^2 + \frac{\lambda(t)}{\mu(t)} \le C {\bf d}_+(\vec\psi(t)). \label{eq:gdotgd}
\end{gather}
Moreover,
\begin{align}
\| (g(t), \partial_t \psi(t) ) \|_{\mathcal{H}_0} \le C\left( \frac{\lambda(t)}{\mu(t)} \right)^{1/2}, \label{eq:gH}
\end{align}
and hence
\EQ{ \label{eq:lamud1}
{\bf d}_+(\vec \psi(t)) \simeq \frac{\lambda(t)}{\mu(t)}.
}
Finally, we have the explicit bound for the kinetic energy
\begin{align}
\| \partial_t \psi(t) \|_{L^2}^2 \leq 16 \frac{\lambda(t)}{\mu(t)} + o\left ( \frac{\lambda(t)}{\mu(t)} \right ).
\label{eq:leading-psit}
\end{align}
\end{lem}
\begin{rem}
The little-oh term in \eqref{eq:leading-psit} depends on the parameter $L$, but it will be important that the leading order term is \emph{independent} of $L$.
\end{rem}
Given the modulation parameters $\lambda(t), \mu(t)$ we define
\EQ{ \label{eq:gdef}
&g(t) := \psi(t) - Q_{\lambda(t)} + Q_{\mu(t)}, \\
&\dot g(t):= \partial_t \psi(t).
}
Then the vector $\vec g:= (g, \dot g)$ satisfies the equations
\begin{align} \label{eq:ptg}
&\partial_t g = \dot{g} + \lambda' \Lambda Q_{\underline{\lambda}} - \mu' \Lambda Q_{\underline{\mu}}, \\
& \partial_t \dot g = \partial_r^2 g + \frac{1}{r} \partial_r g - \frac{1}{r^2} \left( f( Q_\lambda - Q_\mu + g) - f(Q_\lambda) + f(Q_\mu)\right). \label{eq:ptgdot}
\end{align}
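\begin{rem}
These identities follow directly from the definitions, assuming \eqref{eq:wmk} is written as $\partial_t^2 \psi = \partial_r^2 \psi + \frac{1}{r}\partial_r \psi - \frac{1}{r^2} f(\psi)$ with $f(\rho) = \frac12 \sin 2\rho$. Indeed, $\partial_t Q_{\lambda(t)} = -\lambda'(t) \Lambda Q_{\underline{\lambda(t)}}$ (and similarly for $Q_{\mu(t)}$), which gives \eqref{eq:ptg}, while
\EQ{
\partial_t \dot g = \partial_t^2 \psi = \Big( \partial_r^2 + \frac{1}{r}\partial_r \Big)\big( Q_\lambda - Q_\mu + g \big) - \frac{1}{r^2} f(Q_\lambda - Q_\mu + g),
}
and using the stationary equation $\big( \partial_r^2 + \frac{1}{r}\partial_r \big) Q_\nu = \frac{1}{r^2} f(Q_\nu)$ for $\nu \in \{\lambda, \mu\}$ yields \eqref{eq:ptgdot}.
\end{rem}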
As a first step towards understanding the behavior of the modulation parameters, we establish bounds on the first derivatives of $\lambda(t), \mu(t)$. This information alone is not enough to study the interaction of the bubbles for the near two-bubble solution $\vec \psi(t)$ and achieve the goal outlined at the start of the section. This should also be intuitively clear: since $\vec \psi(t)$ satisfies a second-order equation in time, the interaction of the bubbles should be governed by second derivatives of $\lambda(t), \mu(t)$.
\begin{prop} \label{p:modp}
There exist a constant $C > 0$ and $\eta_0 = \eta_0(L) > 0$ with the following property. Let $J \subset \mathbb{R}$, and let
$\vec \psi(t)$ be a solution to~\eqref{eq:wmk} on $J$ such that
\EQ{
{\bf d}(\vec \psi(t))\le \eta_0 \quad \forall t \in J.
}
Let $\lambda(t), \mu(t)$ be the modulation parameters given by Lemma~\ref{l:modeq}. Then for all $t \in J$, we have:
\begin{align}
\abs{\lambda'(t)} &\leq C (\log L)^{-1/2}
\left ( \frac{\lambda(t)}{\mu(t)} \right )^{1/2}, \label{eq:la'} \\
\abs{ \mu'(t)}& \leq C (\log L)^{-1/2}
\left ( \frac{\lambda(t)}{\mu(t)} \right )^{1/2}. \label{eq:mu'}
\end{align}
\end{prop}
\begin{proof}
Differentiating the orthogonality conditions~\eqref{eq:ola} and~\eqref{eq:omu} and using \eqref{eq:ptg} we obtain the relations
\begin{align*}
- \ang{ \mathcal{Z}_{\underline{\lambda}} \mid \dot g} &= \lambda' \left(\ang{\mathcal{Z}_{\underline{\lambda}} \mid \Lambda Q_{\underline{\lambda}}} - \ang{\frac{1}{\lambda} [\Lambda_0 \mathcal{Z}]_{\underline{\lambda}} \mid g} \right) - \mu' \ang{ \mathcal{Z}_{\underline{\lambda}} \mid \Lambda Q_{\underline{\mu}}}, \\
- \ang{ \mathcal{Z}_{\underline{\mu}} \mid \dot g} &= \lambda' \ang{\mathcal{Z}_{\underline{\mu}} \mid \Lambda Q_{\underline{\lambda}}} + \mu' \left({-} \ang{ \mathcal{Z}_{\underline{\mu}} \mid \Lambda Q_{\underline{\mu}}} - \ang{\frac{1}{\mu} [\Lambda_0 \mathcal{Z}]_{\underline{\mu}} \mid g} \right).
\end{align*}
These two equations yield the following linear system for $(\lambda', \mu')$:
\EQ{
\pmat{ A_{11} & A_{12} \\ A_{21} & A_{22}} \pmat{ \lambda' \\ \mu'} = \pmat{ {-}\ang{\mathcal{Z}_{\underline{\lambda}} \mid \dot g} \\ - \ang{ \mathcal{Z}_{\underline{\mu}} \mid \dot g} }, \label{eq:Mmat}
}
where
\EQ{
& A_{11} := \ang{\mathcal{Z}_{\underline{\lambda}} \mid \Lambda Q_{\underline{\lambda}}} - \ang{\frac{1}{\lambda} [\Lambda_0 \mathcal{Z}]_{\underline{\lambda}} \mid g}, \\
& A_{12} := -\ang{ \mathcal{Z}_{\underline{\lambda}} \mid \Lambda Q_{\underline{\mu}}}, \\
&A_{21}:= \ang{\mathcal{Z}_{\underline{\mu}} \mid \Lambda Q_{\underline{\lambda}}}, \\
& A_{22}:= - \ang{ \mathcal{Z}_{\underline{\mu}} \mid \Lambda Q_{\underline{\mu}}} - \ang{\frac{1}{\mu} [\Lambda_0 \mathcal{Z}]_{\underline{\mu}} \mid g}.
}
We now estimate the coefficients of the matrix $A = (A_{ij})$ so that we may invert \eqref{eq:Mmat} and obtain estimates for $(\lambda', \mu')$. We define
\begin{align}
\alpha_L := \ang{ \mathcal{Z} \mid \Lambda Q} = \int_0^\infty \chi_L |\Lambda Q|^2 \, r dr.
\end{align}
Note that since $|\Lambda Q(r)| \lesssim \frac{1}{1+r}$, we have for all $L > 0$ sufficiently large
\begin{align}
\log L \lesssim \alpha_L \lesssim \log L, \label{eq:al_est}
\end{align}
where the implied constants are absolute.
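\begin{rem}
A short computation makes \eqref{eq:al_est} explicit, under the convention that $\chi_L(r) = \chi(r/L)$ with $\chi \equiv 1$ on $[0,1]$ and $\operatorname{supp}\chi \subset [0,2]$, and using $\Lambda Q(r) = \frac{2r}{1+r^2}$: substituting $u = 1+r^2$,
\EQ{
\int_0^{L} |\Lambda Q|^2 \, r dr = \int_0^{L} \frac{4r^3}{(1+r^2)^2} \, dr = 2\log(1+L^2) + \frac{2}{1+L^2} - 2 = 4\log L + O(1),
}
and the same computation with $L$ replaced by $2L$ bounds $\alpha_L$ from above, so that in fact $\alpha_L = 4\log L + O(1)$.
\end{rem}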
\begin{claim}
For $\lambda/ \mu$ sufficiently small (depending on $L$), the diagonal terms satisfy
\begin{align}
A_{11} &= \alpha_L\left [ 1 + O_L((\lambda / \mu)^{1/2})\right ], \label{eq:A11est} \\
A_{22} &= -\alpha_L\left [ 1 + O_L((\lambda/ \mu)^{1/2})\right]. \label{eq:A22est}
\end{align}
\end{claim}
To prove the claim we simply observe that
\begin{align*}
\left |
\ang{\frac{1}{\lambda} [\Lambda_0 \mathcal{Z}]_{\underline{\lambda}} \mid g}
\right | \lesssim \| g \|_{L^\infty} \| \Lambda_0 \mathcal{Z} \|_{L^1(rdr)} \lesssim_L \|g \|_{H} \lesssim_L (\lambda/\mu)^{1/2}.
\end{align*}
Thus,
\begin{align*}
A_{11} = \ang{\mathcal{Z}_{\underline{\lambda}} \mid \Lambda Q_{\underline{\lambda}}} - \ang{\frac{1}{\lambda} [\Lambda_0 \mathcal{Z}]_{\underline{\lambda}} \mid g} = \alpha_L + O_L((\lambda/\mu)^{1/2}),
\end{align*}
which establishes \eqref{eq:A11est}. The estimate \eqref{eq:A22est} is established analogously, and the claim is proved.
We now estimate the off-diagonal terms.
\begin{claim} \label{c:M12est}
For $\lambda/ \mu$ sufficiently small (depending on $L$) we have
\begin{align} \label{eq:M12}
|A_{12}| \lesssim_L ({\lambda/\mu})^{2}, \quad
|A_{21}| \lesssim \log L,
\end{align}
where the implied constant in the estimate for $A_{21}$ is absolute.
\end{claim}
Since $r \mathcal{Z}(r) \in C^\infty_0$, $\lambda / \mu \ll 1$ and $|\Lambda Q| \lesssim r$ for small $r$, we conclude that
\begin{align*}
|A_{12}| = \Bigl |
\ang{ \mathcal{Z}_{\underline{\lambda}} \mid \Lambda Q_{\underline{\mu}}}
\Bigr |
= \left |
\int_0^{2L \lambda / \mu} \frac{\mu r}{\lambda}
\mathcal{Z} ( r \mu / \lambda ) \Lambda Q(r) \, dr \right |
\lesssim_L \int_0^{2L \lambda / \mu} |\Lambda Q| \, dr \lesssim_L (\lambda / \mu)^2.
\end{align*}
This proves the first estimate in \eqref{eq:M12}.
Let $\sigma = \lambda/\mu$. By a change of variables and the explicit expression for $\mathcal{Z}$ we have
\begin{align*}
|A_{21}| = \Bigl |
\ang{ \mathcal{Z}_{\underline{\mu}} \mid \Lambda Q_{\underline{\lambda}}}
\Bigr |
&= \Bigl |
\ang{ \mathcal{Z} \mid \Lambda Q_{\underline{\lambda/\mu}}} \Bigr | \\
&\lesssim \frac{1}{\sigma}
\int_0^{2L} \frac{r}{1+r^2} \frac{(r/\sigma)}{1+ (r/\sigma)^2} \, r dr \\
&\lesssim \log L,
\end{align*}
which proves the second estimate in \eqref{eq:M12} and the claim.
We now solve for $(\lambda', \mu')$ by inverting $A$:
\EQ{
\pmat{ \lambda' \\ \mu ' } = \frac{1}{\det A} \pmat{ - A_{22} \ang{ \mathcal{Z}_{\underline{\lambda}} \mid \dot g} + A_{12} \ang{\mathcal{Z}_{\underline{\mu}} \mid \dot g} \\ A_{21} \ang{\mathcal{Z}_{\underline{\lambda}} \mid \dot g}- A_{11} \ang{\mathcal{Z}_{\underline{\mu}} \mid \dot g}}.
}
The previous two claims imply that
\EQ{
\det A = A_{11}A_{22} - A_{12}A_{21} = -\alpha_L^2\left [1 + O_L((\lambda/\mu)^{1/2})\right] \label{eq:detA}
}
as long as $\lambda / \mu$ is sufficiently small. It is easy to see that the function $\mathcal{Z} = \chi_L \Lambda Q$ satisfies $\| \mathcal{Z} \|_{L^2} \lesssim (\log L)^{1/2}$. Then by Cauchy-Schwarz and \eqref{eq:leading-psit} we have, for $\lambda/ \mu$ sufficiently small,
\begin{align}
\left | \ang{\mathcal{Z}_{\underline{\lambda}} \mid \dot g} \right | + \left | \ang{\mathcal{Z}_{\underline{\mu}} \mid \dot g} \right | \lesssim (\log L)^{1/2}
(\lambda / \mu)^{1/2} \lesssim \alpha_L^{1/2} (\lambda / \mu)^{1/2} \label{eq:rest}
\end{align}
where the implied constant is absolute. Our two claims, \eqref{eq:detA} and \eqref{eq:rest} imply that as long as $\lambda/\mu$ is sufficiently small
\begin{align*}
|\lambda'| &\lesssim |\det A|^{-1} \left (
|A_{22}| \left | \ang{\mathcal{Z}_{\underline{\lambda}} \mid \dot g} \right | +
|A_{12}| \left | \ang{\mathcal{Z}_{\underline{\mu}} \mid \dot g} \right |
\right ) \\
&\lesssim \alpha_L^{-1/2} (\lambda / \mu)^{1/2} \\
&\lesssim (\log L)^{-1/2} (\lambda / \mu)^{1/2},
\end{align*}
as desired.
A similar argument establishes
\EQ{
\abs{ \mu'} \lesssim (\log L)^{-1/2} (\lambda / \mu)^{1/2}
}
as well, which finishes the proof.
\end{proof}
\subsection{Refined control of the modulation parameters}\label{s:modest}
As stated previously, information about the first derivatives of the modulation parameters is not enough to study the evolution of two-bubbles since \eqref{eq:wmk} is second order in time. Due to the slow decay of $\Lambda Q$, we will in fact need to study second derivatives of $2 \lambda \left |\log (\lambda/\mu) \right |$ and $\mu$. Moreover, for technical reasons we will study a function $\zeta = \zeta(t)$ which approximates $2 \lambda |\log (\lambda/\mu)|$ and a function $b = b(t)$ which approximates $\zeta'(t)$ (see Proposition \ref{p:modp2}).
We first define a truncated virial functional and state some relevant properties. This functional played a fundamental role in the work of Jendrej and Lawrie on threshold dynamics for higher equivariant wave maps \cite{JL} and in the two-bubble construction by Jendrej in~\cite{JJ-AJM}. It will play a very important role in our work as well.
For the proofs of the following statements we refer the reader to~\cite[Lemma 4.6]{JJ-AJM} and~\cite[Lemma 5.5]{JJ-AJM}. In what follows, we denote the nonlinearity by
$f(\rho):= \frac{1}{2} \sin 2 \rho$.
\begin{lem} \emph{\cite[Lemma 4.6]{JJ-AJM}}
\label{lem:fun-q}
For each $c, R > 0$ there exists a function $q(r) = q_{c, R}(r) \in C^{3,1}((0, +\infty))$ with the following properties:
\begin{enumerate}[label=(P\arabic*)]
\item $q(r) = \frac{1}{2} r^2$ for $r \leq R$, \label{enum:approx-q}
\item there exists an absolute constant $\kappa > 0$ such that $q(r) \equiv \tx{const}$ for $r \geq \widetilde R := \kappa e^{\kappa/c} R$, \label{enum:support-q}
\item $|q'(r)| \lesssim r$ and $|q''(r)| \lesssim 1$ for all $r > 0$, with constants independent of $c, R$, \label{enum:gradlap-q}
\item $q''(r) \geq -c$ and $\frac 1r q'(r) \geq -c$, for all $r > 0$, \label{enum:convex-ym}
\item $\big(\frac{d^2}{d r^2} + \frac 1r \frac{d}{dr}\big)^2 q(r) \leq c\cdot r^{-2}$, for all $r > 0$, \label{enum:bilapl-ym}
\item $\big|r\big(\frac{q'(r)}{r}\big)'\big| \leq c$, for all $r > 0$. \label{enum:multip-petit-ym}
\end{enumerate}
\end{lem}
For each $\lambda > 0$ we define the operators $\mathcal{A}(\lambda)$ and $\mathcal{A}_0(\lambda)$ as follows:
\begin{align}
[\mathcal{A}(\lambda)g](r) &:= q'\big(\frac{r}{\lambda}\big)\cdot \partial_r g(r), \label{eq:opA-wm} \\
[\mathcal{A}_0(\lambda)g](r) &:= \big(\frac{1}{2\lambda}q''\big(\frac{r}{\lambda}\big) + \frac{1}{2r}q'\big(\frac{r}{\lambda}\big)\big)g(r) + q'\big(\frac{r}{\lambda}\big)\cdot\partial_r g(r). \label{eq:opA0-wm}
\end{align}
Since $q(r) = \frac{1}{2} r^2$ for $r \leq R$, we have $\mathcal{A}(\lambda) g(r) = \frac{1}{\lambda} \Lambda g(r)$ and $\mathcal{A}_0(\lambda) g(r) = \frac{1}{\lambda} \Lambda_0 g(r)$ for $r \leq R\lambda$. One may intuitively think of $\mathcal{A}(\lambda)$ and $\mathcal{A}_0(\lambda)$ as extensions of $\frac{1}{\lambda} \Lambda$ and $\frac{1}{\lambda} \Lambda_0$ to $r \geq R\lambda$ which have good boundedness properties. The following lemma makes this precise. In what follows, we denote
\EQ{
X:= \{ g \in H \mid \frac{g}{r}, \partial_r g \in H\}.
}
\begin{lem} \emph{\cite[Lemma 5.5]{JJ-AJM}}
\label{lem:op-A-wm}
Let $c_0>0$ be arbitrary. There exist $c>0$ small enough and $R, \widetilde R>0$ large enough in Lemma~\ref{lem:fun-q} so that the operators $\mathcal{A}(\lambda)$ and $\mathcal{A}_0(\lambda)$ defined in~\eqref{eq:opA-wm} and~\eqref{eq:opA0-wm} have the following properties:
\begin{itemize}[leftmargin=0.5cm]
\item the families $\{\mathcal{A}(\lambda): \lambda > 0\}$, $\{\mathcal{A}_0(\lambda): \lambda > 0\}$, $\{\lambda\partial_\lambda \mathcal{A}(\lambda): \lambda > 0\}$
and $\{\lambda\partial_\lambda \mathcal{A}_0(\lambda): \lambda > 0\}$ are bounded in $\mathscr{L}(H; L^2)$, with the bound depending only on the choice of the function $q(r)$,
\item
for all $\lambda > 0$ and $g_1, g_2 \in X$ there holds
\begin{multline} \label{eq:A-by-parts-wm}
\Big| \ang{ \mathcal{A}(\lambda)g_1\mid \frac{1}{r^2}\big(f(g_1 + g_2) - f(g_1) - f'(g_1)g_2\big)} \\ +\ang{ \mathcal{A}(\lambda)g_2\mid \frac{1}{r^2}\big(f(g_1+g_2) - f(g_1) -g_2\big)}\Big|
\leq \frac{c_0}{\lambda} \|g_2\|_H^2,
\end{multline}
\item for all $g \in X$ we have
\EQ{
\label{eq:A-pohozaev-wm}
\ang{\mathcal{A}_0(\lambda)g \mid \big(\partial_r^2 + \frac 1r\partial_r - \frac{1}{r^2}\big)g} \leq \frac{c_0}{\lambda}\|g\|_{H}^2 - \frac{1}{\lambda}\int_0^{R\lambda}\Big((\partial_r g)^2 + \frac{1}{r^2}g^2\Big) dr,
}
\item moreover, for $\lambda, \mu >0$ with $\lambda/\mu \ll 1$,
\begin{gather}
\label{eq:L-A-wm}
\|\Lambda Q_{\uln\lambda} - \mathcal{A}(\lambda)Q_\lambda\|_{L^\infty} \leq \frac{c_0}{\lambda}, \\
\|\Lambda_0 \Lambda Q_{\uln\lambda} - \mathcal{A}_0(\lambda) \Lambda Q_\lambda\|_{L^2} \leq c_0, \label{eq:Al2}\\
\| \mathcal{A}(\lambda) Q_\mu \|_{L^\infty} + \| \mathcal{A}_0(\lambda) Q_\mu \|_{L^\infty} \lesssim \frac{1}{\mu}, \label{eq:Ainfty}
\end{gather}
and, for any $g \in H$,
\begin{multline} \label{eq:approx-potential-wm}
\bigg|\int_0^{+\infty}\frac 12 \Big(q''\big(\frac{r}{\lambda}\big) + \frac{\lambda}{r}q'\big(\frac{r}{\lambda}\big)\Big)\frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu + Q_\lambda)-g\big)g \, dr \\
- \int_0^{+\infty} \frac{1}{r^2}\big(f'(Q_\lambda)-1\big)g^2 \, dr\bigg| \leq c_0\big(\|g\|_H^2 + (\lambda/\mu)\big).
\end{multline}
\end{itemize}
\end{lem}
\begin{rem}
The argument for the estimate \eqref{eq:Al2} from \cite{JJ-AJM} does not quite apply in our case due to the slow decay of $Q$, so we provide a different argument here. We first note that $\Lambda_0 \Lambda Q = \frac{4r}{(1+r^2)^2} \in L^2(\mathbb{R}^2)$ and that the estimate \eqref{eq:Al2} is scaling invariant, so we can take $\lambda = 1$. Since $\Lambda_0 \Lambda Q = \mathcal A_0(1) \Lambda Q$ for $r \leq R$ and $\mathcal A_0(1)\Lambda Q = 0$ for $r \geq \widetilde R = R \kappa e^{\kappa/c}$, we have
\begin{align}
\left \|\Lambda_0 \Lambda Q - \mathcal{A}_0(1) \Lambda Q\right \|_{L^2}^2
\leq \int_{R}^\infty |\Lambda_0 \Lambda Q|^2 \, r dr + \int_{R}^{\widetilde R} |\mathcal A_0(1) \Lambda Q|^2 \, r dr.
\end{align}
The first term on the right-hand side above can be made $<c_0^2/2$ as long as $R > 0$ is sufficiently large since $\Lambda_0 \Lambda Q \in L^2$. For the second term, we write
\begin{align*}
\mathcal A_0(1) \Lambda Q = \frac{r}{2} \left (\frac{q'(r)}{r} \right )' \Lambda Q + \frac{q'(r)}{r} \Lambda_0 \Lambda Q.
\end{align*}
Then by properties \ref{enum:multip-petit-ym} and \ref{enum:gradlap-q} in Lemma \ref{lem:fun-q} we have
\begin{align*}
\int_{R}^{\widetilde R} |\mathcal A_0(1) \Lambda Q|^2 \, r dr
& \lesssim c^2 \int_{R}^{\widetilde R} |\Lambda Q|^2 \, r dr
+ \int_{R}^\infty |\Lambda_0 \Lambda Q|^2 \, rdr \\
&\lesssim c^2 \int_R^{R \kappa e^{\kappa/c}} \frac{1}{r} \, dr
+ \int_R^\infty \frac{1}{r^5} \, dr \\
&\lesssim c + R^{-4} \leq c_0^2 /2
\end{align*}
as long as $c$ is sufficiently small and $R$ is sufficiently large. We conclude that for $c,R$ chosen appropriately, we have
\begin{align*}
\left \|\Lambda_0 \Lambda Q - \mathcal{A}_0(1) \Lambda Q\right \|_{L^2}^2 \leq c_0^2,
\end{align*}
as desired.
\end{rem}
As before, we let $\chi \in C^\infty_c(\mathbb{R}^2)$ be a smooth radial cutoff.
We then define the function $b(t)$ by
\EQ{ \label{eq:bdef}
b(t):= - \ang{ \chi_{M \sqrt{\lambda(t)\mu(t)}} \Lambda Q_{\underline{\lambda(t)}} \mid \dot g(t)} - \ang{ \dot g(t) \mid \mathcal{A}_0( \lambda(t) ) g(t)}.
}
Here $M > 0$ is a constant which we will later fix. Finally, we define
\EQ{ \label{eq:zetadef}
\zeta(t) := 2 \lambda(t) |\log (\lambda(t)/\mu(t))| - \ang{ \chi_{M \sqrt{\lambda(t) \mu(t)}}\Lambda Q_{\uln{\lambda(t)}} \mid g(t)}.
}
Note that $\zeta(t)$ is $C^1$ since $\partial_t g(t)$ is continuous in $L^2$ with respect to $t$. We will now show that we may roughly view $\zeta(t)$ as $2 \lambda(t) |\log (\lambda(t)/\mu(t))|$ and $b(t)$ as a subtle correction to $\zeta'(t)$. The essential feature of this correction is that $b'(t)$ (which is intuitively connected to $\lambda''(t)$) is bounded from below. More precisely, we prove the following.
\begin{prop}[Modulation Control] \label{p:modp2}
Assume the same hypotheses as in Proposition~\ref{p:modp}. Let $0 < \delta < 1/2$ be arbitrarily small, and let $\eta_0$ be as in Lemma~\ref{l:modeq}.
There exist functions $L_0 = L_0(\delta) > 0$, $M_0 = M_0(\delta,L) > 0$ and $\eta_1 = \eta_1(\delta,L,M) > 0$ such that if $L > L_0$, $M > M_0$ and ${\bf d}_+(\vec \psi(t)) \le \eta_1 < \eta_0$, then for all $t \in J$ the functions $\lambda(t), \mu(t), \zeta(t)$ and $b(t)$ (which implicitly depend on $L$ and $M$) satisfy
\begin{align}
&\abs{\frac{\zeta(t)}{2\lambda(t)|\log (\lambda(t)/\mu(t))|} - 1} \le \delta , \label{eq:bound-on-l} \\
& \abs{\zeta'(t) - b(t) } \le \delta \left [\frac{\lambda(t)}{\mu(t)}
\right]^{\frac{1}{2}} \left |
\log \frac{\lambda(t)}{\mu(t)} \right |^{\frac{1}{2}} \le \delta \left [\frac{\zeta(t)}{\mu(t)}
\right]^{\frac{1}{2}}, \label{eq:kala'} \\
\begin{split}
&\abs{b(t)} \le 4 \left [\frac{\lambda(t)}{\mu(t)}
\right]^{\frac{1}{2}} \left |2
\log \frac{\lambda(t)}{\mu(t)} \right |^{\frac{1}{2}} + \delta \left [\frac{\lambda(t)}{\mu(t)}
\right]^{\frac{1}{2}} \left |
\log \frac{\lambda(t)}{\mu(t)} \right |^{\frac{1}{2}} \le 5\left [\frac{\zeta(t)}{\mu(t)}
\right]^{\frac{1}{2}}.
\end{split} \label{eq:b-bound}
\end{align}
Moreover, $b(t)$ is locally Lipschitz and there exists $C_1 = C_1(L) > 0$ such that
\begin{align}
&|b'(t)| \leq C_1 /\mu(t), \label{eq:b'} \\
&b'(t) \ge (8 - \delta) /\mu(t). \label{eq:b'lb}
\end{align}
\end{prop>
\begin{proof} Since we will take $\eta_1 < \eta_0$, the modulation parameters are well-defined and $C^1$ on the interval $J$. We also note that by rescaling $\vec \psi(t_0)$ for some $t_0 \in J$ and shrinking the interval $J$ if necessary, we can assume that $\frac 12 \le \mu(t) \le 2$ on $J$. Throughout the argument, implied constants and big-oh terms will depend on the parameters $L$ and $M$ unless stated otherwise.
We first prove \eqref{eq:bound-on-l}. By \eqref{eq:gH} and the normalization $\mu \simeq 1$ we have $\|g\|_{L^\infty} \leq \| g \|_H \lesssim \lambda^{\frac 12}$. Thus,
\begin{align*}
\left |
\ang{\chi_{M \sqrt{ \mu \lambda}} \Lambda Q_{\underline{\lambda}} \mid g}
\right | &\lesssim
\lambda^{3/2} \int_0^{4M / \sqrt{\lambda}} |\Lambda Q| \, r dr \\
&\lesssim \lambda.
\end{align*}
We conclude that
\begin{align*}
\frac{1}{2\lambda |\log (\lambda/\mu)|}\left |
\ang{\chi_{M\sqrt{\mu \lambda}} \Lambda Q_{\underline{\lambda}} \mid g}
\right | \lesssim |\log \lambda|^{-1},
\end{align*}
which can be made smaller than $\delta$ as long as $\lambda/\mu$ is sufficiently small compared to $L$ and $M$. This proves \eqref{eq:bound-on-l}.
Now we prove~\eqref{eq:kala'}.
From \eqref{eq:ptg} we have
\begin{equation}
\label{eq:bound-on-l-666}
\begin{aligned}
\frac{d}{dt} \ang{\chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid g} &= \ang{\chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}}\mid \dot g} + \lambda'\ang{ \chi_{M\sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}}\mid \Lambda Q_{\underline{\lambda}}} \\
&- \mu'\ang{\chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}}\mid \Lambda Q_{\underline{\mu}}}
-\frac{\lambda'}{\lambda} \ang{\chi_{M \sqrt{\lambda \mu}} [\Lambda_0\Lambda Q]_{\underline{\lambda}} \mid g} \\&-
\Bigl ( \frac{\lambda'}{2\lambda} + \frac{\mu'}{2\mu} \Bigr ) \ang{ \Lambda \chi_{ M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}}\mid g}.
\end{aligned}
\end{equation}
Since $|\Lambda Q| \lesssim r^{-1}$,
\begin{align*}
\int_{\sqrt{\lambda \mu}}^{2M \sqrt{\lambda \mu}} |\Lambda Q_{\underline{\lambda}}|^2 \, rdr
\lesssim
\int_{\sqrt{\frac{\mu}{\lambda}}}^{2M \sqrt{\frac{\mu}{\lambda}}} r^{-1} \, dr
\lesssim 1.
\end{align*}
Thus,
\begin{align*}
\lambda' \int_0^\infty \chi_{M \sqrt{\lambda \mu}} |\Lambda Q_{\underline{\lambda}}|^2 \, r dr &=
\lambda' \int_0^{\sqrt{\lambda \mu}} |\Lambda Q_{\underline{\lambda}}|^2 \, rdr + O(\lambda') \\
&= 2 \lambda' |\log (\lambda/\mu)| + O(\lambda^{1/2}).
\end{align*}
We now show that the remaining terms on the right-hand side of \eqref{eq:bound-on-l-666} are $\ll |\log \lambda|^{1/2} \lambda^{1/2}$ for all $L$ and $M$ large and $\lambda/\mu$ sufficiently small compared to $L$ and $M$.
Since we are assuming $\frac 12 \le \mu \leq 2$, we have by \eqref{eq:mu'}
\EQ{
|\mu'|\abs{\ang{\chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}}\mid \Lambda Q_{\underline{\mu}}}}& \lesssim \lambda^{1/2} \int_0^{4M\sqrt{\lambda}} \frac{1}{\lambda} \frac{r}{\lambda} |Q_r(r/ \lambda)| \frac{1}{\mu} \frac{r}{\mu} |Q_{r}(r/ \mu)| \, r d r \\
& \lesssim \lambda^{-1/2} \int_0^{4M\sqrt{\lambda}} \frac{ (r/ \lambda)}{ 1+ (r/\lambda)^{2}} \frac{r^{2}}{ 1+ r^{2}} \, d r \\
& \lesssim \lambda^{1/2} \int_0^{4M\sqrt{\lambda}} \frac{ r^{2}}{ \lambda^{2} + r^{2}} \frac{r}{ 1+ r^{2}} \, d r \\
&\lesssim \lambda^{3/2}.
}
Thus, the third term in \eqref{eq:bound-on-l-666}
is $\ll \lambda^{1/2} |\log \lambda |^{1/2}$.
For the fourth term, we have
\begin{equation}
\Big|\frac{\lambda'}{\lambda}\ang{\chi_{M \sqrt{\lambda\mu}} [\Lambda_0\Lambda Q]_{\underline{\lambda}}\mid g}\Big| \lesssim |\lambda'|\|g\|_{L^\infty}\left \|\chi_{M \sqrt{\mu/\lambda}}\Lambda_0\Lambda Q\right \|_{L^1} \lesssim \lambda \|\chi_{4M/\sqrt{\lambda}}\Lambda_0\Lambda Q\|_{L^1}.
\end{equation}
Now $\Lambda_0 \Lambda Q = \frac{4r}{(1+r^2)^2}$, so
\begin{align*}
\|\chi_{4M/\sqrt{\lambda}}\Lambda_0\Lambda Q\|_{L^1} \lesssim 1.
\end{align*}
Thus,
\begin{align*}
\Big|\frac{\lambda'}{\lambda}\ang{\chi_{M \sqrt{\lambda\mu}} [\Lambda_0\Lambda Q]_{\underline{\lambda}}\mid g}\Big| \lesssim \lambda \ll \lambda^{1/2} |\log \lambda|^{1/2}.
\end{align*}
For the fifth term appearing in \eqref{eq:bound-on-l-666}, we have
\begin{align*}
\left | \ang{ \Lambda \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}}\mid g}
\right | &\lesssim \| g \|_{L^\infty} \int_{M \sqrt{\lambda \mu}}^{2M \sqrt{\lambda \mu}} |\Lambda Q_{\underline{\lambda}}| \, r dr \\
&\lesssim \lambda^{3/2} \int_{M/2\sqrt{\lambda}}^{4M/\sqrt{\lambda}} |\Lambda Q| \, r dr \\
&\lesssim \lambda .
\end{align*}
By \eqref{eq:la'} and \eqref{eq:mu'} we conclude that for all $\lambda$ sufficiently small depending on $L$ and $M$,
\begin{align*}
\left |
\Bigl ( \frac{\lambda'}{2\lambda} + \frac{\mu'}{2\mu} \Bigr ) \ang{ \Lambda \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}}\mid g}
\right | \lesssim \lambda^{1/2} \ll \lambda^{1/2} |\log \lambda|^{1/2}.
\end{align*}
From \eqref{eq:bound-on-l-666} and the previous bounds we conclude that
\begin{equation}
\label{eq:bound-on-l-6666}
\left |2 \lambda' |\log (\lambda/\mu)| - \frac{d}{dt }\left \langle \chi_{M \sqrt{\lambda \mu}}\Lambda Q_{\uln{\lambda}} \mid g \right \rangle + \ang{ \chi_{M \sqrt{\lambda \mu}}\Lambda Q_{\underline{\lambda}}\mid \dot g} \right | \ll \lambda^{1/2} |\log \lambda|^{1/2}.
\end{equation}
By \eqref{eq:la'} and \eqref{eq:mu'},
\begin{align*}
\frac{d}{dt} \Big( 2 \lambda |\log (\lambda / \mu)| \Big) = 2 \lambda' |\log (\lambda / \mu)| - 2 (\lambda' \mu - \mu' \lambda ) / \mu = 2 \lambda' |\log (\lambda / \mu)| + O(\lambda^{1/2}).
\end{align*}
From this estimate and \eqref{eq:bound-on-l-6666} we obtain
\begin{align}
\left |\zeta' + \ang{\chi_{M\sqrt{\lambda \mu}}\Lambda Q_{\underline{\lambda}}\mid \dot g} \right | \ll \lambda^{1/2} |\log \lambda |^{1/2}. \label{eq:bound6667}
\end{align}
Recall that
\EQ{
b(t):= - \ang{ \chi_{M\sqrt{\lambda \mu}}\Lambda Q_{\underline{\lambda}} \mid \dot g} - \ang{ \dot g \mid \mathcal{A}_0(\lambda) g}.
}
By~\eqref{eq:gH} and Lemma~\ref{lem:op-A-wm} we have
\EQ{ \label{eq:dotgAg}
\left | \ang{ \dot g \mid \mathcal{A}_0(\lambda) g} \right | \lesssim \| \dot g \|_{L^2} \| \mathcal{A}_0(\lambda) g \|_{L^2} \lesssim \| (g, \dot g) \|_{\mathcal{H}_0}^2 \lesssim \lambda \ll \lambda^{1/2} |\log \lambda|^{1/2}.
}
This estimate and \eqref{eq:bound6667} imply
\EQ{
\abs{\zeta' - b} \ll \lambda^{1/2} |\log \lambda|^{1/2}
}
for $L$ and $M$ large and $\lambda/\mu$ sufficiently small depending on $L$ and $M$. This completes the proof of \eqref{eq:kala'}.
To prove \eqref{eq:b-bound}, we argue as above and obtain
\EQ{
\abs{b(t)} &\le \| \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}}\|_{L^2} \| \partial_t \psi\|_{L^2} + O(\lambda) \\
&= \bigl [ 2 |\log (\lambda / \mu)| + O(1) \bigr ]^{1/2} \| \partial_t \psi \|_{L^2} + O(\lambda).
}
By \eqref{eq:leading-psit} we have
\EQ{
\| \partial_t \psi(t) \|_{L^2}^2 \le 16 (\lambda/ \mu) + o(\lambda).
}
The previous two estimates combined yield~\eqref{eq:b-bound}.
We now turn to proving~\eqref{eq:b'lb} and~\eqref{eq:b'}. By approximating the initial data $\vec \psi(t_0)$ for some $t_0 \in J$ by smooth functions and using the well-posedness theory, we may assume that $\vec \psi(t)$ is smooth on $J$. We differentiate $b(t)$ and use the formulae~\eqref{eq:ptg},~\eqref{eq:ptgdot} to obtain
\EQ{ \label{eq:b'1}
b'(t) &= \frac{\lambda'}{\lambda} \ang{\chi_{M\sqrt{\lambda \mu}}[ \Lambda_0 \Lambda Q]_{\underline{\lambda}} \mid \dot g } - \ang{ \chi_{M\sqrt{\lambda \mu}}\Lambda Q_{\underline{\lambda}} \mid \partial_t \dot g}
- \ang{\partial_t \dot g \mid \mathcal{A}_0( \lambda) g} \\
& \quad - \frac{\lambda'}{\lambda} \ang{ \dot g \mid \lambda \partial_\lambda \mathcal{A}_0(\lambda) g} - \ang{ \dot g \mid \mathcal{A}_0(\lambda) \partial_t g} \\
& \quad + \left ( \frac{\lambda'}{2\lambda} + \frac{\mu'}{2\mu} \right )
\ang{\Lambda \chi_{M\sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \dot g } \\
& = \frac{\lambda'}{\lambda} \ang{\chi_{M \sqrt{\lambda \mu}} [ \Lambda_0 \Lambda Q]_{\underline{\lambda}} \mid \dot g } \\
&\quad - \ang{ \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \partial_r^2 g + \frac{1}{r} \partial_r g - \frac{1}{r^2} \left( f( Q_\lambda - Q_\mu + g) - f(Q_\lambda) +f(Q_\mu)\right)} \\
& \quad - \ang{ \partial_r^2 g + \frac{1}{r} \partial_r g - \frac{1}{r^2} \left( f( Q_\lambda - Q_\mu + g) - f(Q_\lambda) + f(Q_\mu)\right) \mid \mathcal{A}_0(\lambda) g} \\
& \quad - \frac{\lambda'}{\lambda} \ang{ \dot g \mid \lambda \partial_\lambda \mathcal{A}_0(\lambda) g} - \ang{ \dot g \mid \mathcal{A}_0(\lambda) \dot g} \\
&\quad - \lambda' \ang{ \dot g \mid \mathcal{A}_0(\lambda) \Lambda Q_{\underline{\lambda}}}
+ \mu' \ang{ \dot g \mid \mathcal{A}_0(\lambda) \Lambda Q_{\underline{\mu}}} \\
&\quad + \left ( \frac{\lambda'}{2\lambda} + \frac{\mu'}{2\mu} \right )
\ang{\Lambda \chi_{M\sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \dot g }.
}
We first discard those terms which are $\ll 1$ as long as $L > 0 $ is sufficiently large, $M > 0$ is sufficiently large depending on $L$, and $\lambda / \mu$ is sufficiently small depending on $L$ and $M$. Consider the last term appearing above. Here we will choose the size of $L$. For some absolute constant $C_2 > 0$, we have
\begin{align}\label{eq:Qanulus2}
\| \Lambda \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \|_{L^2} \leq C_2.
\end{align}
If $C$ is the constant in \eqref{eq:la'}, then we choose $L > 0$ so large that
\begin{align}\label{eq:chooseL}
80 C C_2 (\log L)^{-1/2} \leq \frac{\delta}{100}.
\end{align}
Then by Cauchy-Schwarz, \eqref{eq:Qanulus2}, \eqref{eq:leading-psit} and \eqref{eq:chooseL}, we conclude that
\begin{align}
\left |
\frac{\lambda'}{\lambda}
\ang{\Lambda \chi_{M\sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \dot g }
\right | \leq \frac{|\lambda'|}{\lambda} \| \Lambda \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \|_{L^2} \| \dot g \|_{L^2}
\leq \frac{2 C (\log L)^{-1/2} \lambda^{1/2}}{\lambda} \, C_2 \, 40 \lambda^{1/2}
\leq \frac{\delta}{100}
\end{align>
as long as $\lambda / \mu$ is sufficiently small. Similarly, we have
\begin{align}
\left |
\frac{\mu'}{\mu}
\ang{\Lambda \chi_{M\sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \dot g }
\right | \leq \frac{|\mu'|}{\mu} \| \Lambda \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \|_{L^2} \| \dot g \|_{L^2}
\leq 4 C (\log L)^{-1/2} \lambda^{1/2} \, C_2 \, 40 \lambda^{1/2}
\leq \frac{\delta}{100}
\end{align}
as long as $\lambda / \mu$ is sufficiently small. Thus, the last term above can be made $\leq \delta/50$.
We now consider the first and sixth terms appearing above. By Cauchy-Schwarz and the fact that $\Lambda_0 \Lambda Q \in L^2$, we have
\begin{align*}
\left |
\frac{\lambda'}{\lambda} \ang{(1-\chi_{M \sqrt{\lambda \mu}}) [ \Lambda_0 \Lambda Q]_{\underline{\lambda}} \mid \dot g }
\right |
\lesssim \frac{|\lambda'|}{\lambda}\| \dot g \|_{L^2} \| \Lambda_0 \Lambda Q \|_{L^2(r \geq M \sqrt{\mu/\lambda})} \lesssim \| \Lambda_0 \Lambda Q \|_{L^2(r \geq M \sqrt{\mu/\lambda})} \ll 1.
\end{align*}
Then the first term and the sixth term combined yield
\begin{align*}
\frac{\lambda'}{\lambda} \ang{\chi_{M \sqrt{\lambda \mu}} [ \Lambda_0 \Lambda Q]_{\underline{\lambda}} \mid \dot g } - \lambda' \ang{ \dot g \mid \mathcal{A}_0(\lambda) \Lambda Q_{\underline{\lambda}}}
&=
\frac{\lambda'}{\lambda} \ang{[ \Lambda_0 \Lambda Q]_{\underline{\lambda}} \mid \dot g } - \lambda' \ang{ \dot g \mid \mathcal{A}_0(\lambda) \Lambda Q_{\underline{\lambda}}} + o(1)\\
&= \frac{\lambda'}{\lambda}
\ang{[ \Lambda_0 \Lambda Q]_{\underline{\lambda}} - \mathcal{A}_0(\lambda) \Lambda Q_{\underline{\lambda}} \mid \dot g} + o(1),
\end{align*}
where the little-oh satisfies $|o(1)| \ll 1$ as long as $L > 0 $ is sufficiently large, $M > 0$ is sufficiently large depending on $L$, and $\lambda / \mu$ is sufficiently small depending on $L$ and $M$.
By \eqref{eq:Al2},
\begin{align*}
\frac{|\lambda'|}{\lambda} \left |
\ang{[ \Lambda_0 \Lambda Q]_{\underline{\lambda}} - \mathcal{A}_0(\lambda) \Lambda Q_{\underline{\lambda}} \mid \dot g} \right |
\leq C \lambda^{-1/2} \| \dot g \|_{L^2} \| [ \Lambda_0 \Lambda Q]_{\underline{\lambda}} - \mathcal{A}_0(\lambda) \Lambda Q_{\underline{\lambda}} \|_{L^2} \lesssim c_0 \ll 1,
\end{align*}
as long as $c_0$ is sufficiently small. We conclude that
\begin{align}
\left | \frac{\lambda'}{\lambda} \ang{ \chi_{M \sqrt{\lambda \mu}} [ \Lambda_0 \Lambda Q]_{\underline{\lambda}} \mid \dot g } - \lambda' \ang{ \dot g \mid \mathcal{A}_0(\lambda) \Lambda Q_{\underline{\lambda}}} \right |
\ll 1.
\end{align}
Since $(\lambda \partial_\lambda \mathcal{A}_0(\lambda)): H \to L^2$ is bounded, the fourth term satisfies
\EQ{
\left |
\frac{\lambda'}{\lambda} \ang{ \dot g \mid (\lambda \partial_\lambda \mathcal{A}_0(\lambda)) g}
\right | \lesssim \lambda^{-\frac 12} \|(g, \dot g) \|_{\mathcal{H}_0}^2 \lesssim \lambda^{\frac 12} \ll 1.
}
Via integration by parts, the fifth term appearing above satisfies
\EQ{
\ang{ \dot g \mid \mathcal{A}_0(\lambda) \dot g} = 0.
}
Finally, since $1/2 \leq \mu \leq 2$ we have
\EQ{
\left | \mu' \ang{ \dot g \mid \mathcal{A}_0(\lambda) \Lambda Q_{\underline{\mu}}} \right | &= \frac{|\mu'|}{\mu} \left | \ang{ \dot g \mid \mathcal{A}_0(\lambda) \Lambda Q_{\mu}} \right |\\
& \lesssim \abs{ \mu'} \| \dot g \|_{L^2} \lesssim \lambda \ll 1.
}
We now introduce some notation. Until the end of the proof, we write $ A \simeq B$ if $A = B$ up to terms which can be made $< \delta$ as long as $L > 0 $ is sufficiently large, $M > 0$ is sufficiently large depending on $L$, and $\lambda / \mu$ is sufficiently small depending on $L$ and $M$.
We have shown so far that
\EQ{ \label{eq:b'2}
b'(t) & \simeq - \ang{ \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \partial_r^2 g + \frac{1}{r} \partial_r g - \frac{1}{r^2} \left( f( Q_\lambda - Q_\mu + g) - f(Q_\lambda) + f(Q_\mu)\right)} \\
& \quad - \ang{ \partial_r^2 g + \frac{1}{r} \partial_r g - \frac{1}{r^2} \left( f( Q_\lambda - Q_\mu + g) - f(Q_\lambda) + f(Q_\mu)\right) \mid \mathcal{A}_0(\lambda) g}.
}
We now choose the size of $M > 0$ (depending on $L$). Recall that
\EQ{
\mathcal{L}_\lambda \Lambda Q_{\underline{\lambda}} := \left (- \partial_{rr} - \frac{1}{r} \partial_r + \frac{f'(Q_\lambda)}{r^2} \right ) \Lambda Q_{\underline{\lambda}} = 0.
}
In fact, since we have the factorization $\mathcal{L}_\lambda = A_\lambda^* A_\lambda$ with
$A_\lambda = -\partial_r + \frac{\cos Q_\lambda}{r}$, we must have
\begin{align*}
A_\lambda \Lambda Q_{\underline{\lambda}} = 0.
\end{align*}
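\begin{rem}
Since $\Lambda Q \notin L^2$, it is perhaps safest to verify this identity directly. With the normalization $Q(r) = 2\arctan r$, the Bogomolny equation $\partial_r Q = \frac{\sin Q}{r}$ holds, so that $\Lambda Q = r\partial_r Q = \sin Q$ and
\EQ{
A_1 \Lambda Q = -\partial_r(\sin Q) + \frac{\cos Q}{r}\sin Q = \cos Q \Big( \frac{\sin Q}{r} - \partial_r Q \Big) = 0;
}
the identity $A_\lambda \Lambda Q_{\underline{\lambda}} = 0$ then follows by rescaling.
\end{rem}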
Thus,
\EQ{
\ang{ \chi_{M\sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \partial_r^2 g + \frac{1}{r} \partial_r g - \frac{f'(Q_\lambda)}{r^2} g } &= -\ang{ \chi_{M\sqrt{\lambda \mu} } \Lambda Q_{\underline{\lambda}} \mid
A^*_{\lambda} A_\lambda g} \\
&= -\ang{ A_\lambda \left (
\chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}}
\right ) \mid A_\lambda g } \\
&= \frac{1}{M \sqrt{\lambda \mu}} \ang{
\chi'_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}}
\mid A_\lambda g }.
}
Since $\chi'_{M \sqrt{\lambda \mu}}$ is bounded by $2$ and is supported on the annulus $\{ M \sqrt{\lambda \mu} \leq r \leq 2M \sqrt{\lambda \mu}\}$, Cauchy-Schwarz and \eqref{eq:gH} imply
\begin{align*}
\frac{1}{M \sqrt{\lambda \mu}} \left | \ang{
\chi'_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}}
\mid A_\lambda g } \right | \lesssim M^{-1} \lambda^{-\frac 12} \| A_\lambda g \|_{L^2}
\lesssim_L M^{-1} \lambda^{-\frac 12} \| g \|_{H} \lesssim_L M^{-1}.
\end{align*}
Thus, for $M > M_0(L)$, the above term is $\ll 1$. We conclude that
\begin{align*}
\ang{ \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \partial_r^2 g + \frac{1}{r} \partial_r g } \simeq
\ang{ \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \frac{f'(Q_\lambda)}{r^2} g}.
\end{align*}
We now rewrite~\eqref{eq:b'2} as
\EQ{ \label{eq:b'3}
b'(t) & \simeq \ang{ \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \frac{1}{r^2} \Big( f( Q_\lambda - Q_\mu + g) - f(Q_\lambda) + f(Q_\mu)- f'(Q_\lambda)g\Big)} \\
& \quad - \ang{ \partial_r^2 g + \frac{1}{r} \partial_r g - \frac{1}{r^2} \Big( f( Q_\lambda - Q_\mu + g) - f(Q_\lambda) + f(Q_\mu)\Big) \mid \mathcal{A}_0(\lambda) g}.
}
We add, subtract and regroup to obtain
\begin{align} \label{eq:lead3}
b'(t) &\simeq
\ang{\chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \frac{1}{r^2} \Big( f(Q_{\lambda} - Q_\mu) - f(Q_\lambda) + f(Q_\mu) \Big)} \\
&\quad + \ang{\chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \frac{1}{r^2} \Big( f'(Q_\lambda - Q_\mu) - f'(Q_\lambda) \Big) g} \label{eq:2ndterm} \\
& \quad + \ang{ \chi_{M \sqrt{\lambda \mu}}\Lambda Q_{\underline{\lambda}} \mid \frac{1}{r^2} \Big( f(Q_\lambda - Q_\mu + g) - f(Q_\lambda - Q_\mu) - f'(Q_\lambda - Q_\mu) g \Big)} \label{eq:3rdterm} \\
& \quad - \ang{ \partial_r^2 g + \frac{1}{r} \partial_r g - \frac{1}{r^2} \Big( f( Q_\lambda - Q_\mu + g) - f(Q_\lambda) + f(Q_\mu)\Big) \mid \mathcal{A}_0(\lambda) g}. \label{eq:4thterm}
\end{align}
We now identify the first term above as the leading order contribution.
\betagin{claim} \lambda_mbdabel{c:lead}
\betagin{align}
\ang{ \chi_{M \varsigmagmaqrt{\lambda_mbda \mathbf{m}u}} \Lambda Q_{\underline{\lambda_mbda}} \mathbf{m}id \frac{1}{r^2} \mathbf{m}athcal{B}ig( f(Q_{\lambda_mbda} - Q_\mathbf{m}u) - f(Q_\lambda_mbda) + f(Q_\mathbf{m}u) \mathbf{m}athcal{B}ig)} \varsigmagmaimeq \frac{8}{\mathbf{m}u}.
\lambda_mbdabel{eq:mlead}
\epsilonilonnd{align}
\epsilonilonnd{claim}
By trigonometric identities
\betagin{align}
f(Q_{\lambda_mbda} - Q_\mathbf{m}u) - f(Q_\lambda_mbda) + f(Q_\mathbf{m}u) &= \frac{1}{2} ( \varsigmagmain2 Q_\lambda_mbda( \cos 2Q_\mathbf{m}u - 1) + \varsigmagmain 2 Q_\mathbf{m}u( 1 - \cos 2 Q_\lambda_mbda)) \\
& = - \varsigmagmain 2 Q_\lambda_mbda \varsigmagmain^2 Q_\mathbf{m}u + \varsigmagmain 2 Q_\mathbf{m}u \varsigmagmain^2 Q_\lambda_mbda \\
& = - \varsigmagmain 2 Q_\lambda_mbda (\Lambda Q_\mathbf{m}u)^2 + \varsigmagmain 2 Q_\mathbf{m}u (\Lambda Q_\lambda_mbda)^2. \lambda_mbdabel{eq:trig1}
\epsilonilonnd{align}
We show that the first term in the above expansion gives a negligible contribution to the $L^2$ pairing on the left side of \eqref{eq:mlead}. Indeed, if we denote $\sigma := \lambda / \mu$, then as long as $\sigma$ is sufficiently small depending on $L$ and $M$,
\begin{align*}
\left | \ang{
\chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid
\frac{\sin 2Q_{\lambda}}{r^2} (\Lambda Q_{\mu})^2 } \right |
&\lesssim \frac{1}{\lambda} \int_0^{2M \sqrt{\lambda \mu}} |\Lambda Q_{\lambda}|^2 |\Lambda Q_{\mu}|^2 \frac{dr}{r} \\
&\lesssim
\frac{1}{\sigma} \int_0^{2M \sqrt{\sigma}} |\Lambda Q_{\sigma}|^2 |\Lambda Q|^2 \frac{dr}{r} \\
&\lesssim
\frac{1}{\sigma} \int_0^{2M \sqrt{\sigma}} \frac{(r/\sigma)^2}{(1 + (r/\sigma)^2)^2} \frac{r^2}{(1+r^2)^2} \frac{dr}{r} \\
&\lesssim
\sigma \Bigl [
\int_0^\sigma \sigma^{-4} r^3 \, dr +
\int_\sigma^{2M \sqrt{\sigma}} \frac{r^3}{(\sigma^2 + r^2)^2} \, dr
\Bigr ] \\
&\lesssim \sigma \big[|\log \sigma| + \log M\big] \ll 1.
\end{align*}
Thus,
\begin{align}
\ang{\chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \frac{1}{r^2}\Big( f(Q_{\lambda} - Q_\mu) - f(Q_\lambda) + f(Q_\mu) \Big)} \simeq \big\langle \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \frac{1}{r^2}(\Lambda Q_{\lambda})^2 \sin 2Q_\mu \big\rangle \label{eq:firstlead}.
\end{align}
We now compute
\EQ{ \label{eq:lead1}
\Big\langle \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} &\mid \frac{1}{r^2}(\Lambda Q_{\lambda})^2 \sin 2Q_\mu \Big\rangle = \frac{1}{ \lambda} \int_0^\infty \chi_{M \sqrt{\sigma}} ( \Lambda Q_{\sigma})^3 \sin 2 Q \, \frac{ d r }{r} \\
& = \frac{1}{\lambda} \int_0^{\sqrt{\sigma}} ( \Lambda Q_{\sigma})^3 \sin 2 Q \, \frac{ d r }{r} + \frac{1}{\lambda} \int_{\sqrt{\sigma}}^\infty \chi_{M \sqrt{\sigma}}( \Lambda Q_{\sigma})^3 \sin 2 Q \, \frac{ d r }{r}.
}
Since $|\Lambda Q| \lesssim r^{-1}$ for $r$ large and $\sigma \sim \lambda$, we have
\begin{align}
\frac{1}{\lambda} \int_{\sqrt{\sigma}}^\infty | \Lambda Q_{\sigma}|^3 \frac{ d r }{r}
&\lesssim
\frac{1}{\sigma}
\int_{\sqrt{\sigma}}^\infty |\Lambda Q_{\sigma}|^3 \frac{ d r }{r} \\
&\lesssim \frac{1}{\sigma} \int_{1/\sqrt{\sigma}}^\infty |\Lambda Q|^3 \frac{dr}{r} \\
&\lesssim \sigma^{1/2} \ll 1.
\end{align}
Thus, from \eqref{eq:lead1} it follows that
\begin{align}
\big\langle \chi_{M\sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} &\mid \frac{1}{r^2}(\Lambda Q_{\lambda})^2 \sin 2Q_\mu \big\rangle \simeq
\frac{1}{\lambda} \int_0^{\sqrt{\sigma}} ( \Lambda Q_{\sigma})^3 \sin 2 Q \, \frac{ d r }{r}.
\end{align}
Since $\sigma = \lambda/ \mu \ll 1$, on the interval $[0, \sqrt{\sigma}]$ we write
\EQ{ \label{eq:sin2Q-small}
\sin 2 Q = 4 r \frac{ 1 - r^{2}}{ (1+ r^{2})^2} = 4 r + O(r^{3}).
}
We compute
\EQ{
\frac{1}{\lambda} \int_0^{\sqrt{\sigma}} ( \Lambda Q_{\sigma})^3 4r \, \frac{d r}{r} &= \frac{4\sigma}{\lambda} \int_0^{\frac{1}{\sqrt{\sigma}}} (\Lambda Q)^3 \, d r \\
& = \frac{4\sigma}{\lambda} \int_0^\infty (\Lambda Q)^3 \, d r - \frac{4\sigma}{\lambda} \int_{\frac{1}{\sqrt{\sigma}}}^{\infty} (\Lambda Q)^3 \, d r \\
& = \frac{ 8 }{\mu} + O( \sigma),
}
where the integral $\int_0^\infty (\Lambda Q)^3 \, dr = 2$ is evaluated by an elementary substitution, recorded below.
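For completeness, here is that computation; it uses only the explicit formula $\Lambda Q(r) = \sin Q(r) = \frac{2r}{1+r^2}$ and the substitution $u = 1 + r^2$:
\begin{align*}
\int_0^\infty (\Lambda Q)^3 \, dr = \int_0^\infty \frac{8 r^3}{(1+r^2)^3} \, dr
= 4 \int_1^\infty \frac{u-1}{u^3} \, du
= 4 \left[ \frac{1}{2u^2} - \frac{1}{u} \right]_1^\infty
= 4 \cdot \frac{1}{2} = 2.
\end{align*}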
By~\eqref{eq:sin2Q-small},
\EQ{\left |
\frac{1}{\lambda} \int_0^{\sqrt{\sigma}}( \Lambda Q_{\sigma})^3( \sin 2Q - 4r) \, \frac{d r}{r}\right |& \lesssim \frac{1}{\lambda} \int_0^{\sqrt{\sigma}}| \Lambda Q_{\sigma}|^3 r^{2} \, d r \\
& = \frac{\sigma^{3}}{\lambda} \int_0^{\frac{1}{\sqrt{\sigma}}}|\Lambda Q|^3 r^{2} \, d r \lesssim \sigma^{2} \abs{\log \sigma} \ll 1.
}
Thus,
\begin{align}
\big\langle \chi_{M\sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} &\mid \frac{1}{r^2}(\Lambda Q_{\lambda})^2 \sin 2Q_\mu \big\rangle \simeq
\frac{1}{\lambda} \int_0^{\sqrt{\sigma}} ( \Lambda Q_{\sigma}(r))^3 \sin 2 Q(r) \, \frac{ d r }{r} \simeq \frac{8}{\mu} \label{eq:secondlead}.
\end{align}
Combining \eqref{eq:firstlead} and \eqref{eq:secondlead} we conclude that
\EQ{
\ang{\chi_{M\sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \frac{1}{r^2}\Big( f(Q_{\lambda} - Q_\mu) - f(Q_\lambda) + f(Q_\mu) \Big)} \simeq
\frac{8}{\mu},
}
as desired.
\end{proof}
For later use, we record the following identities:
\begin{align}
& \Lambda^2 Q = \frac{1}{2} \sin 2 Q = 2 r \frac{ 1-r^{2}}{ (1 + r^{2})^2} \label{eq:La2Q}, \\
&\Lambda^3 Q = 2 r \left( \frac{1 - 6 r^{2} + r^{4}}{(1+r^{2})^3}\right) \label{eq:La3Q}, \\
& \Lambda_0 \Lambda Q = ( r\partial_r +1)(r \partial_r Q) = 2 \Lambda Q + r^2 \partial_r^2 Q \label{eq:DLa}.
\end{align}
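For instance, the first identity is a one-line consequence of $\Lambda Q = \sin Q$ (which underlies \eqref{eq:trig1}):
\begin{align*}
\Lambda^2 Q = r \partial_r (\sin Q) = \cos Q \, (r \partial_r Q) = \cos Q \sin Q = \frac{1}{2} \sin 2Q.
\end{align*}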
We now claim that the term~\eqref{eq:2ndterm} in the expansion of $b'(t)$ satisfies
\EQ{ \label{eq:2term}
\left | \ang{ \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \frac{1}{r^2} \Big( f'(Q_\lambda - Q_\mu) - f'(Q_\lambda) \Big) g} \right | \lesssim (\lambda / \mu)^{1/2}.
}
First note that we have
\EQ{
f'(Q_\lambda - Q_\mu) - f'(Q_\lambda)& = \sin 2Q_\lambda \sin 2Q_\mu -2 \cos 2Q_\lambda \sin^2Q_\mu \\
&= 4 \Lambda^2 Q_\lambda \Lambda^2 Q_\mu- 2(\Lambda Q_\mu)^2\cos 2Q_\lambda. \label{trig}
}
By ~\eqref{eq:La2Q} and ~\eqref{eq:La3Q} we have
\begin{align}
|\Lambda Q| + |\Lambda^2 Q| \lesssim \frac{r}{1+r^2}.
\end{align}
We first estimate
\begin{align}
\abs{ \ang{ \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \frac{4}{r^2} \Lambda^2 Q_\lambda \Lambda^2 Q_\mu g}} &\lesssim \| g \|_{L^\infty}
\frac{1}{\lambda} \int_0^{2M\sqrt{\lambda \mu}}
|\Lambda Q_{\lambda}| |\Lambda^2 Q_{\lambda}| |\Lambda^2 Q_{\mu}| \frac{d r}{r} \\
&\lesssim \| g \|_{H}
\frac{1}{\sigma} \int_0^{2M\sqrt{\sigma}}
|\Lambda Q_{\sigma}| |\Lambda^2 Q_{\sigma}| |\Lambda^2 Q| \frac{d r}{r} \\
&\lesssim \| g \|_{H} \int_0^{2M/\sqrt{\sigma}} |\Lambda Q| |\Lambda^2 Q| \, d r
\lesssim \sigma^{1/2},
\end{align}
where $\sigma = \lambda/ \mu$ as before. We then estimate
\EQ{
\abs{\ang{ \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid \frac{1}{r^2}( (\Lambda Q_\mu)^2\cos 2Q_\lambda) g}} &\lesssim \| g\|_{H} \left( \int_0^\infty (\Lambda Q_{\sigma})^2 (\Lambda Q)^4 \frac{ d r}{r} \right)^{\frac{1}{2}} \\
& \lesssim \sigma^{1/2} \left( \int_0^\infty (\Lambda Q_{\sigma})^2 \frac{ d r}{r} \right)^{\frac{1}{2}} \lesssim \sigma^{1/2}.
}
The previous two bounds along with \eqref{trig} imply \eqref{eq:2term}.
In summary, we have shown thus far that
\begin{align}
b'(t) -
\frac{8}{\mu}
&\simeq \ang{ \chi_{M \sqrt{\lambda \mu}}\Lambda Q_{\underline{\lambda}} \mid \frac{1}{r^2} \Big( f(Q_\lambda - Q_\mu + g) - f(Q_\lambda - Q_\mu) - f'(Q_\lambda - Q_\mu) g \Big)} \label{eq:3rdtermb} \\
& - \ang{ \partial_r^2 g + \frac{1}{r} \partial_r g - \frac{1}{r^2} \Big( f( Q_\lambda - Q_\mu + g) - f(Q_\lambda) + f(Q_\mu)\Big) \mid \mathcal{A}_0(\lambda) g}.
\end{align}
We now rewrite~\eqref{eq:3rdtermb} as
\begin{multline}
\Big\langle \chi_{M \sqrt{\lambda \mu}} \Lambda Q_{\underline{\lambda}} \mid
\frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu + Q_\lambda) - f'({-}Q_\mu + Q_\lambda)g\big)\Big\rangle \\
= - \ang{ \mathcal{A}(\lambda) g \mid \frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu + Q_\lambda) -g\big)} \\
+\ang{ \mathcal{A}(\lambda) g \mid \frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu + Q_\lambda) -g\big)} \\
+\ang{ \mathcal{A}(\lambda) (Q_\lambda- Q_\mu) \mid \frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu + Q_\lambda) - f'({-}Q_\mu + Q_\lambda)g\big)} \\
+ \ang{ \mathcal{A}(\lambda) Q_\mu \mid \frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu + Q_\lambda) - f'({-}Q_\mu + Q_\lambda)g\big)} \\
+ \ang{ \chi_{M\sqrt{\lambda \mu}} \bigl ( \Lambda Q_{\underline{\lambda}} - \mathcal{A}(\lambda) Q_\lambda \bigr ) \mid \frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu + Q_\lambda) - f'({-}Q_\mu + Q_\lambda)g\big)}.
\end{multline}
We remark that we used the fact that $\chi_{M \sqrt{\lambda \mu}} \mathcal{A}(\lambda) Q_\lambda = \mathcal{A}(\lambda) Q_\lambda$ (as long as $\lambda/\mu$ is small), together with adding and subtracting the first two terms on the right-hand side, to obtain the previous expression.
The second and third terms above can be estimated using ~\eqref{eq:A-by-parts-wm} with $g_1 = Q_{\lambda} - Q_\mu$ and $g_2 = g$:
\begin{multline}
\bigg|\ang{ \mathcal{A}(\lambda) g \mid \frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu + Q_\lambda) -g\big)}
\\ +\ang{ \mathcal{A}(\lambda) (Q_\lambda- Q_\mu) \mid \frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu + Q_\lambda) - f'({-}Q_\mu + Q_\lambda)g\big)} \bigg|
\lesssim_L c_0,
\end{multline}
which is $\ll 1$ as long as $c_0$ is taken sufficiently small.
The pointwise bound
\begin{multline}
\abs{ f(Q_\lambda - Q_\mu + g) - f(Q_\lambda - Q_\mu) - f'(Q_\lambda - Q_\mu) g } \\
= \frac{1}{2}\abs{ \sin(2Q_\lambda - 2Q_\mu)[ \cos 2 g - 1] + \cos(2Q_\lambda - 2Q_\mu)[ \sin 2g -2 g]}
\lesssim \abs{g}^2
\end{multline}
and~\eqref{eq:Ainfty} imply that the second to last line of the above satisfies
\EQ{
\left | \ang{ \mathcal{A}(\lambda) Q_\mu \mid \frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu + Q_\lambda) - f'({-}Q_\mu + Q_\lambda)g\big)} \right | \lesssim \frac{1}{\mu} \| g \|_{H}^2 \lesssim \lambda \ll 1.
}
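In the pointwise bound above, the last inequality relies only on the elementary estimates $|\cos 2g - 1| \le 2 g^2$ and $|\sin 2g - 2g| \le \frac{4}{3}|g|^3$, the trivial bounds $|\sin(2Q_\lambda - 2Q_\mu)|, |\cos(2Q_\lambda - 2Q_\mu)| \le 1$, and the fact that $\| g \|_{L^\infty} \lesssim \| g \|_{H} \ll 1$, which absorbs the cubic term:
\begin{align*}
\abs{ f(Q_\lambda - Q_\mu + g) - f(Q_\lambda - Q_\mu) - f'(Q_\lambda - Q_\mu) g }
\le \frac{1}{2} \Big( 2 g^2 + \frac{4}{3} |g|^3 \Big) \lesssim g^2.
\end{align*}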
Using \eqref{eq:L-A-wm} we estimate the last line of the expansion of~\eqref{eq:3rdterm} similarly:
\begin{multline}
\abs{\ang{ \chi_{M \sqrt{\lambda \mu}}(\Lambda Q_{\underline{\lambda}} - \mathcal{A}(\lambda) Q_\lambda) \mid \frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu + Q_\lambda) - f'({-}Q_\mu + Q_\lambda)g\big)}}
\\ \lesssim \|\Lambda Q_{\underline{\lambda}} - \mathcal{A}(\lambda) Q_\lambda \|_{L^\infty} \| g\|^2_H \lesssim_L c_0
\ll 1.
\end{multline}
Thus, we have shown that
\begin{multline} \label{eq:3rdterm2}
\Big\langle \chi_{M \sqrt{\lambda \mu}}\Lambda Q_{\underline{\lambda}} \mid
\frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu + Q_\lambda) - f'({-}Q_\mu + Q_\lambda)g\big)\Big\rangle \\ \simeq - \ang{ \mathcal{A}(\lambda) g \mid \frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu + Q_\lambda) -g\big)},
\end{multline}
which by \eqref{eq:3rdtermb} implies
\begin{align}
b'(t) -
\frac{8}{\mu}
&\simeq - \ang{ \mathcal{A}(\lambda) g \mid \frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu + Q_\lambda) -g\big)} \\
& - \ang{ \partial_r^2 g + \frac{1}{r} \partial_r g - \frac{1}{r^2} \Big( f( Q_\lambda - Q_\mu + g) - f(Q_\lambda) + f(Q_\mu)\Big) \mid \mathcal{A}_0(\lambda) g}. \label{eq:4thtermb}
\end{align}
We now consider the line~\eqref{eq:4thtermb}. By adding and subtracting terms and using~\eqref{eq:A-pohozaev-wm} we have
\begin{equation}
\label{eq:mod-dtb-2nd-line}
\begin{aligned}
&{-} \Big\langle \partial_r^2 g + \frac 1r \partial_r g - \frac{1}{r^2}\big(f(Q_\lambda- Q_\mu + g) - f(Q_\lambda) +f(Q_\mu)\big) \mid \mathcal{A}_0(\lambda)g\Big\rangle \\
& = -\ang{ \mathcal{A}_0(\lambda) g \mid \partial_r^2 g + \frac 1r \partial_r g - \frac{1}{r^2}g} \\
&\quad + \ang{ \mathcal{A}_0(\lambda) g \mid \frac{1}{r^2} \Big( f(Q_\lambda - Q_\mu) - f(Q_\lambda) + f(Q_\mu) \Big)}\\
& \quad +\ang{ \mathcal{A}_0(\lambda) g \mid \frac{1}{r^2} \Big( f(Q_\lambda - Q_\mu + g) - f(Q_\lambda-Q_\mu) - g \Big)} \\
&\geq -\frac{c_0}{\lambda}\|g \|_H^2 + \frac{1}{\lambda}\int_0^{R\lambda}\Big((\partial_r g)^2 + \frac{1}{r^2}g^2\Big)r \, d r \\
&+ \ang{ \mathcal{A}_0(\lambda) g \mid \frac{1}{r^2} \Big( f(Q_\lambda - Q_\mu) - f(Q_\lambda) + f(Q_\mu) \Big)}\\
&+ \Big\langle \mathcal{A}_0(\lambda)g \mid
\frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f(-Q_\mu + Q_\lambda) - g\big)\Big\rangle,
\end{aligned}
\end{equation}
where $R$ is defined in the statement of Lemma~\ref{lem:op-A-wm}.
From ~\eqref{eq:trig1} we have the pointwise estimate
\EQ{
\abs{ f(Q_\lambda - Q_\mu) - f(Q_\lambda) + f(Q_\mu)} \lesssim (\Lambda Q_\lambda)^2 (\Lambda Q_\mu) + \Lambda Q_\lambda (\Lambda Q_{\mu})^2.
}
By Lemma \ref{lem:op-A-wm}, $\| \mathcal{A}_0(\lambda) g \|_{L^2} \lesssim \|g \|_{H}$ and $\mathcal{A}_0(\lambda) g$ is supported on a ball of radius $C R \lambda$. Thus, the second to last line above satisfies
\EQ{
\bigg| \bigg\langle &\mathcal{A}_0(\lambda) g \mid \frac{1}{r^2} \Big( f(Q_\lambda - Q_\mu) - f(Q_\lambda) + f(Q_\mu) \Big) \bigg \rangle \bigg| \\ & \lesssim \|g \|_H\left[ \left( \int_0^{C R \sigma} r^{-2} (\Lambda Q_\sigma)^4 (\Lambda Q)^2 \, \frac{d r}{r} \right)^{\frac{1}{2}} + \left( \int_0^{C R \sigma} r^{-2} (\Lambda Q)^4 (\Lambda Q_\sigma)^2 \, \frac{d r}{r} \right)^{\frac{1}{2}} \right] \\
&\lesssim \|g \|_{H} \lesssim \lambda^{1/2} \ll 1.
}
Thus,
\EQ{ \label{eq:34terms}
& - \ang{ \mathcal{A}(\lambda) g \mid \frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu + Q_\lambda) -g\big)} \\
&{-} \Big\langle \partial_r^2 g + \frac 1r \partial_r g - \frac{1}{r^2}\big(f(Q_\lambda- Q_\mu + g) - f(Q_\lambda) +f(Q_\mu)\big) \mid \mathcal{A}_0(\lambda)g\Big\rangle \\
& \ge \frac{1}{\lambda}\int_0^{R\lambda}\Big((\partial_r g)^2 + \frac{1}{r^2}g^2\Big)r \, dr \\
& \quad +\Big\langle \big(\mathcal{A}_0(\lambda) - \mathcal{A}(\lambda)\big) g \mid \frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu+Q_\lambda)- g\big)\Big\rangle + o(1).
}
The difference $\mathcal{A}_0(\lambda) - \mathcal{A}(\lambda)$ is given by the operator of multiplication by $\frac{1}{2\lambda} \Big(q''\big(\frac{r}{\lambda}\big) + \frac{\lambda}{r}q'\big(\frac{r}{\lambda}\big)\Big)$.
By \eqref{eq:approx-potential-wm} we have
\EQ{ \label{eq:34terms-error}
\Big\langle \big(\mathcal{A}_0(\lambda) - \mathcal{A}(\lambda)\big) g\mid \frac{1}{r^2}\big(f({-}Q_\mu + Q_\lambda + g) - f({-}Q_\mu+Q_\lambda)- g\big)\Big\rangle \\
= \frac{1}{\lambda}\int_0^{\infty} \frac{1}{r^2}\big(f'(Q_\lambda)-1\big)g^2 \, dr + O(c_0 \lambda),
}
where $c_0>0$ is as in Lemma~\ref{lem:op-A-wm}.
The estimates \eqref{eq:4thtermb},~\eqref{eq:34terms} and~\eqref{eq:34terms-error} combine to yield
\EQ{
b'(t) - \frac{8}{\mu}
\geq \frac{1}{\lambda}\int_0^{R\lambda}\Big((\partial_r g)^2 + \frac{1}{r^2}g^2\Big)r \, dr + \frac{1}{\lambda}\int_0^{\infty} \frac{1}{r^2}\big(f'(Q_\lambda)-1\big)g^2 \, dr + o(1).
}
The orthogonality condition $\ang{ \mathcal{Z}_{\underline{\lambda}} \mid g} = 0$ implies
the localized coercivity estimate
\EQ{
\frac{1}{\lambda}\int_0^{R\lambda}\Big((\partial_r g)^2 + \frac{1}{r^2}g^2\Big)r \, dr + \frac{1}{\lambda}\int_0^{\infty} \frac{1}{r^2}\big(f'(Q_\lambda)-1\big)g^2 \, dr \ge - \frac{c_1}{\lambda} \| g\|_H^2
}
(see~\cite[Lemma 5.4, eq. (5.28)]{JJ-AJM} for the proof). The constant $c_1>0$ appearing above can be made small by choosing $R$ sufficiently large. Since $\| g \|_{H}^2 \lesssim \lambda$, we conclude that
\begin{align*}
b'(t) - \frac{8}{\mu} \geq -\frac{\delta}{2} \geq -\frac{\delta}{\mu}
\end{align*}
as long as $L$ is sufficiently large, $M$ is sufficiently large depending on $L$, and $\lambda / \mu$ is sufficiently small depending on $L$ and $M$.
\qed
From Proposition~\ref{p:modp} and Proposition~\ref{p:modp2} we now show, roughly speaking, that if the modulation parameters approach each other in scale, then the solution to \eqref{eq:wmk} is ejected from a small neighborhood of the set of two-bubbles.
\begin{rem}\label{r:param}
We now fix the parameters $L$ and $M$ used in the definition of $\zeta(t)$ for the remainder of the section. In particular, we fix $L = L_0$ and $M = M_0$, large enough so that the estimates in Proposition \ref{p:modp2} hold for
\begin{align*}
\delta = \frac{1}{2018},
\end{align*}
whenever ${\bf d}(\vec \psi(t)) < \eta_1 = \eta_1(L_0,M_0) < \eta_0$.
\end{rem}
\begin{prop}
\label{prop:modulation}
Let $C > 0$. Then for all $\epsilon_0>0$ sufficiently small, and for any $\epsilon>0$ sufficiently small relative to $\epsilon_0$, the following holds.
Let $\vec\psi(t): [T_0, T_+) \to \mathcal{H}_0$ be a solution of \eqref{eq:wmk}.
Assume that $t_0 \in [T_0, T_+)$ is such that ${\bf d}(\vec \psi(t_0)) \leq \epsilon$
and $\frac{d}{dt} (\zeta(t)/\mu(t))\vert_{t = t_0} \geq 0$.
Then there exist $t_1$ and $t_2$, $T_0 \leq t_0 \leq t_1 \leq t_2 < T_+$,
such that the following estimates hold:
\begin{align}
{\bf d}(\vec \psi(t)) &\geq 2\epsilon,\qquad \text{for }t \in [t_1, t_2], \label{eq:d-t1-t2}\\
{\bf d}(\vec \psi(t)) &\leq \frac 14\epsilon_0,\qquad \text{for }t \in [t_0, t_1], \label{eq:d-t0-t1}\\
{\bf d}(\vec \psi(t_2)) &\geq 2\epsilon_0, \label{eq:d-at-t2} \\
\int_{t_1}^{t_2} \|\partial_t\psi(t)\|_{L^2}^2 \, d t &\geq C\int_{t_0}^{t_1}\sqrt{{\bf d}(\vec \psi(t))} \, d t.
\label{eq:err-absorb}
\end{align}
If we assume that $\frac{d}{dt} (\zeta(t)/\mu(t))\vert_{t = t_0} \le 0$, then analogous statements hold with times $t_2 \le t_1 \le t_0$.
\end{prop}
\begin{proof}
From \eqref{eq:lamud1}, ~\eqref{eq:bound-on-l} and Remark \ref{r:param}, it follows that if $\epsilon_1 > 0$ is sufficiently small and
$\zeta(t)/\mu(t) \leq 4\epsilon_1$, then the estimates in Proposition \ref{p:modp2} hold with $\delta = 1/2018$
in a neighborhood of $t_0$. In particular, we have
\begin{align}\label{eq:comparabile1}
\frac{1}{4} \frac{\zeta(t)}{\mu(t)} \leq \frac{\lambda(t)}{\mu(t)} \left |\log \frac{\lambda(t)}{\mu(t)} \right | \leq \frac{\zeta(t)}{\mu(t)}.
\end{align}
Let $t_2$ be the first time $t_2 \geq t_0$ such that $\zeta(t_2) / \mu(t_2) = 4\epsilon_1$. If there is no such time, we set $t_2 = T_+$. Define $$f(x) = x |\log x|,$$ which is smooth and increasing on $(0,100 \epsilon_1)$ for $\epsilon_1$ sufficiently small and satisfies $\lim_{x \rightarrow 0^+} f(x) = 0$. Then \eqref{eq:comparabile1} becomes
\begin{align}\label{eq:comp}
\frac{1}{4} \frac{\zeta(t)}{\mu(t)} \leq f (\lambda(t)/\mu(t)) \leq \frac{\zeta(t)}{\mu(t)}.
\end{align}
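The monotonicity of $f$ claimed above is immediate: for $0 < x < e^{-1}$ we have $f(x) = -x \log x$, so
\begin{align*}
f'(x) = -\log x - 1 > 0 \qquad \text{for } 0 < x < e^{-1},
\end{align*}
and hence $f$ is increasing on $(0, 100\epsilon_1)$ as soon as $100 \epsilon_1 < e^{-1}$.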
Then if $t_2 < T_+$ we have $f(\lambda(t_2)/\mu(t_2)) \geq \epsilon_1$, which by \eqref{eq:lamud1} implies \eqref{eq:d-at-t2} by taking $\epsilon_0$ comparable to $f^{-1}(\epsilon_1)$.
By the scaling symmetry of the equation, we can assume that $\mu(t_0) = 1$.
Let $t_3 \leq t_2$ be the last time such that $\mu(t) \in [\frac 12, 2]$ for all $t \in [t_0, t_3]$. If there is no such final time we set $t_3= t_2$. We will see by a bootstrapping argument that we can always take $t_3 = t_2$ and that $t_2 < T_+$.
By Remark \ref{r:param} and by taking $\epsilon_1$ small enough, we have by \eqref{eq:b'lb}
\begin{equation}
\label{eq:b'lb2}
b'(t) \geq 1.
\end{equation}
We also obtain from \eqref{eq:kala'}
\begin{equation}
\zeta'(t) \geq b(t) - \zeta(t)^\frac 12.
\end{equation}
Consider $\xi(t) := b(t) + \zeta(t)^\frac 12$.
Using the two inequalities above we obtain
\begin{equation}
\label{eq:psi'}
\begin{aligned}
\xi'(t) \geq 1 + \frac 12 \zeta(t)^{-\frac 12}
\Big( b(t) - \zeta(t)^\frac 12\Big)
= \frac{1}{2} \zeta(t)^{-\frac 12} (b(t) + \zeta(t)^{\frac 12})
= \frac{1}{2} \zeta(t)^{-\frac 12} \xi(t).
\end{aligned}
\end{equation}
By \eqref{eq:b-bound} and the fact that $\mu(t) \in (1/2,2)$, we conclude that
\begin{equation}
\label{eq:psi-bound}
\xi(t) \leq 10 \zeta(t)^\frac 12.
\end{equation}
Let $\xi_1(t) := b(t) + \frac{1}{2}\zeta(t)^\frac 12 = \frac 12 b(t) + \frac 12 \xi(t) = \xi(t) - \frac{1}{2} \zeta(t)^{\frac 12}$.
Since $b'(t) \geq 0$, we have
\begin{equation}
\label{eq:psi1'}
\xi_1'(t) \geq \frac 12 \xi'(t) \geq \frac 14 \zeta(t)^{-\frac 12} \xi(t) \geq \frac 14 \zeta(t)^{-\frac 12} \xi_1(t).
\end{equation}
Since $\mu(t_0) = 1$, we have $0 \leq \frac{d}{dt} (\zeta(t)/\mu(t))\vert_{t = t_0} = \zeta'(t_0) - \zeta(t_0)\mu'(t_0)$, so \eqref{eq:mu'} and~\eqref{eq:bound-on-l} imply that $\zeta'(t_0) \geq -\frac{1}{8}\zeta(t_0)^\frac 12$ as long as $\epsilon$ is taken small enough. This fact and \eqref{eq:kala'} give $b(t_0) \geq -\frac{1}{4}\zeta(t_0)^\frac 12$,
so $\xi_1(t_0) > 0$ and \eqref{eq:psi1'} yields $\xi_1(t) > 0$ for all $t \in [t_0, t_3]$.
Thus
\begin{equation}
\label{eq:psi-lbound}
\xi(t) \geq \frac{1}{2}\zeta(t)^\frac 12, \qquad \text{for }t \in [t_0, t_3].
\end{equation}
This lower bound, along with $\xi'(t) \geq \frac{1}{2} \zeta(t)^{-\frac 12} \xi(t)$, implies
\begin{align}
\xi'(t) \geq \frac{1}{4}. \label{eq:psi'2}
\end{align}
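Indeed, inserting \eqref{eq:psi-lbound} into \eqref{eq:psi'} gives
\begin{align*}
\xi'(t) \geq \frac{1}{2} \zeta(t)^{-\frac 12} \xi(t) \geq \frac{1}{2} \zeta(t)^{-\frac 12} \cdot \frac{1}{2} \zeta(t)^{\frac 12} = \frac{1}{4}.
\end{align*}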
By \eqref{eq:psi-bound} we see that $\zeta(t)$, and thus $\lambda(t)$, is bounded away from $0$ on $[t_0, t_3]$.
The bounds \eqref{eq:psi-bound}, \eqref{eq:comp} and \eqref{eq:lamud1} imply that there exists a constant $\alpha_0$
such that $\xi(t) \geq 40 [f(\alpha_0 \epsilon)]^{1/2}$ forces ${\bf d}(\vec \psi(t)) \geq 2\epsilon$.
Let $t_1 \in [t_0, t_3]$ be the last time such that $\xi(t_1) = 40 [f(\alpha_0 \epsilon)]^{1/2}$
(set $t_1 = t_3$ if no such time exists).
Then by \eqref{eq:psi-lbound} and~\eqref{eq:comp} we have
\EQ{
[f(\lambda(t)/\mu(t))]^{1/2} \le \zeta(t)^{\frac{1}{2}} \leq 80 [f(\alpha_0 \epsilon)]^{1/2} \quad\text{for }t\in[t_0, t_1],
}
which yields \eqref{eq:d-t0-t1} if $\epsilon$ is small enough.
We now claim that $\mu(t) \in (1/2, 2)$ for all $t \in [t_0,t_2]$ and that $t_2 < T_+$. Recall that on $[t_0,t_3]$ we have $\xi'(t) > 0$ as well as
\begin{align*}
\zeta(t)^{\frac 12} \leq 2 \xi(t) \leq 20 \zeta(t)^{\frac 12}, \qquad
\xi'(t) \geq \frac{1}{2} \zeta^{-\frac 12}(t) \xi(t), \qquad
\zeta(t) \leq 8 \epsilon_1.
\end{align*}
Thus, by \eqref{eq:mu'}
\begin{align*}
&\int_{t_0}^{t_3} |\mu'| \, dt \lesssim \int_{t_0}^{t_3}
\zeta(t)^{\frac 12} \, dt \lesssim
\int_{t_0}^{t_3} \xi(t) \, dt
\lesssim \int_{t_0}^{t_3} \zeta(t)^{\frac 12} \xi'(t) \, dt \\
&\quad \lesssim \sqrt{\epsilon_1} \int_{t_0}^{t_3} \xi'(t) \, dt
\lesssim \sqrt{\epsilon_1} \, \xi(t_3)
\lesssim \epsilon_1,
\end{align*}
where the implied constant is absolute. Thus,
we get $\mu(t_3) \in [2/3, 3/2]$ if $\epsilon_1$ is small enough, which implies that $t_3 = t_2$.
Now suppose that there is no $t_2 \geq t_0$ such that $\zeta(t_2)/\mu(t_2) = 4\epsilon_1$.
Then, since $\zeta(t)$ (and hence $\lambda(t)$) is bounded away from $0$, by \cite[Corollary A.4]{JJ-AJM}
the solution is global, and \eqref{eq:psi'2} implies that $\xi(t)$, and hence by \eqref{eq:psi-bound} also $\zeta(t)$, eventually exceeds a fixed constant of order one.
This contradicts the assumption that $\zeta(t)/\mu(t) < 4\epsilon_1$ for all $t \geq t_0$.
Hence there exists $t_2 < T_+$ such that $\zeta(t_2)/\mu(t_2) = 4\epsilon_1$,
which implies \eqref{eq:d-at-t2} by choosing $\epsilon_0$ comparable to $f^{-1}(\epsilon_1)$.
By \eqref{eq:kala'} and~\eqref{eq:b-bound} we have $|\zeta'(t)| \lesssim |\zeta(t)|$. Thus,
there exists an absolute constant $\alpha_1 > 0$
such that $\zeta(t) \geq \frac 14 \epsilon_1$ for $t \in [t_2 - \alpha_1, t_2]$. Since $\zeta(t) \lesssim f(\alpha_0 \epsilon)$ on $[t_0,t_1]$, we must have $t_2 - t_1 \geq \alpha_1$ if $f(\alpha_0\epsilon) \ll \epsilon_1$.
Then \eqref{eq:b'lb2} yields
\begin{equation}
b(t) \geq b(t_1) + \alpha_1 \geq b(t_0) + \alpha_1, \qquad \text{for }t \in [t_1, t_2].
\end{equation}
Thus, if $\epsilon$ is small enough, we get
\begin{align}
b(t) \geq \alpha_1/2, \quad t \in [t_1, t_2]. \label{b_lower}
\end{align}
By Proposition \ref{p:modp}, the Cauchy--Schwarz inequality and the definition of $b(t)$ we have
\begin{align*}
b(t) \lesssim |\log \lambda|^{1/2} \| \dot g \|_{L^2}.
\end{align*}
Since $\lambda(t) \leq \zeta(t) \leq 8 \epsilon_1$ on $[t_0,t_2]$, we conclude that
there exists an absolute constant $\alpha_2 > 0$ such that on $[t_0,t_2]$
\begin{align}
|b(t)| \leq \alpha_2 | \log \epsilon_1|^{1/2} \| \dot g \|_{L^2}. \label{b_upper}
\end{align}
Integrating the lower bound \eqref{b_lower} from $t_1$ to $t_2$ and using \eqref{b_upper}, we obtain
\begin{align*}
\frac{\alpha_1^3}{4}
\leq \int_{t_1}^{t_2} |b(t)|^2 \, dt \leq \alpha_2^2 |\log \epsilon_1| \int_{t_1}^{t_2} \| \dot g(t) \|_{L^2}^2 \, dt,
\end{align*}
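where the first inequality follows from \eqref{b_lower} together with the bound $t_2 - t_1 \geq \alpha_1$ established above:
\begin{align*}
\int_{t_1}^{t_2} |b(t)|^2 \, dt \geq (t_2 - t_1) \left( \frac{\alpha_1}{2} \right)^2 \geq \frac{\alpha_1^3}{4}.
\end{align*}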
This implies
\begin{align}
\frac{\alpha_1^3}{4 \alpha_2^2 |\log \epsilon_1|} \leq \int_{t_1}^{t_2} \| \partial_t \psi (t) \|^2_{L^2} \, dt. \label{eq:err-lbound}
\end{align}
Recall that on $[t_0,t_1]$, we have $\xi'(t) \geq 1/4$ and $|\xi(t)| + \zeta(t)^{1/2} \lesssim \sqrt{\epsilon \alpha_0 |\log \alpha_0 \epsilon|}$, where $\alpha_0$ is an absolute constant. Thus,
\begin{align}
\int_{t_0}^{t_1} \sqrt{{\bf d} (\vec \psi(t))} \, dt
\lesssim \int_{t_0}^{t_1} \sqrt{\zeta(t)} \, dt
\lesssim \int_{t_0}^{t_1} \sqrt{\zeta(t)} \, \xi'(t) \, dt \lesssim
\sqrt{\epsilon |\log \epsilon|} \int_{t_0}^{t_1} \xi'(t) \, dt
\lesssim \epsilon |\log \epsilon|,
\end{align}
where the implied constant is absolute.
This estimate and \eqref{eq:err-lbound} imply \eqref{eq:err-absorb} after choosing $\epsilon$ sufficiently small.
\end{proof}
\section{Dynamics of Non-Scattering Threshold Solutions} \label{s:dynamics}
In this section we prove the main result, Theorem \ref{t:main}. We will obtain it as a consequence of the following proposition.
\begin{prop}\label{p:psi_t}
Let $\vec\psi(t) :(T_-, T_+) \to \mathcal{H}_0$ be a corotational wave map with $\mathcal{E}(\vec \psi) = 2 \mathcal{E}(\vec Q)$ which does not scatter in forward time. Then
\EQ{
\lim_{t \to T_+} {\bf d}(\vec \psi(t)) = 0.\label{eq:psi_t}
}
\end{prop}
As a first step, we state a direct consequence of Theorem~\ref{t:cjk}.
\begin{prop}\label{p:cjk}
Let $\vec \psi(t) : (T_-, T_+) \to \mathcal{H}_0$ be a corotational wave map with $\mathcal{E}(\vec\psi) = 2 \mathcal{E}( \vec Q)$
which does not scatter in forward time. Then
\EQ{ \label{eq:seqbub}
\liminf_{t\to T_+} {\bf d}(\vec \psi(t)) = 0.
}
\end{prop}
For the remainder of this section, we will always denote by $\vec \psi(t)$ a solution to~\eqref{eq:wmk}, $\vec\psi(t):(T_-, T_+) \to \mathcal{H}_0$,
such that $\mathcal{E}(\vec \psi) = 2 \mathcal{E}(\vec Q)$
and $\vec\psi(t)$ does not scatter in forward time.
A rough sketch of our strategy to prove Proposition \ref{p:psi_t} is as follows. By our preliminary step, Proposition \ref{p:cjk}, we know that ${\bf d}(\vec \psi(t))$ tends to $0$ along a sequence of times. If Proposition \ref{p:psi_t} were false, we would split the maximal time interval of existence into a collection of \emph{bad} intervals where $\vec \psi(t)$ is close to the set of two-bubbles, and \emph{good} intervals where $\vec \psi(t)$ is far from them. On the union of good intervals, which we denote by $I$, we use Lemma \ref{l:d-size} and Lemma \ref{l:1profile} to show that $\vec \psi(t)$ has the following \emph{compactness property}: there exists a continuous function $\nu(t) : I \rightarrow (0,\infty)$ such that the trajectory
\begin{align*}
\mathcal K = \{ \vec \psi(t)_{1/\nu(t)} \mid t \in I \}
\end{align*}
is pre-compact in $\mathcal H_0$. Solutions with the compactness property do not radiate energy, and thus we expect that such solutions are given by rescalings of stationary solutions (harmonic maps). If this intuition is correct, we arrive at a contradiction, since the only degree-$0$ harmonic map is the constant map, which has energy equal to $0 \neq 8 \pi$.
To prove that a solution with the compactness property on the union of good intervals is stationary, we will use the virial identity. Integrating \eqref{eq:vir} from $t =\tau_1$ to $t = \tau_2$ yields
\EQ{ \label{eq:virtau}
\int_{\tau_1}^{\tau_2} \| \partial_t \psi (t) \|_{L^2}^2 \, d t &\le \abs{ \ang{ \partial_t\psi \mid \chi_R r \partial_r \psi}(\tau_1)} + \abs{ \ang{ \partial_t\psi \mid \chi_R r \partial_r \psi}(\tau_2)} \\
&\quad + \int_{\tau_1}^{\tau_2} \abs{\Omega_{R}( \vec \psi(t))} \, d t,
}
where the error $\Omega_{R}(\vec \psi(t))$ is given by ~\eqref{eq:OmRdef}.
By Lemma~\ref{l:error-estim}, we obtain
\EQ{ \label{eq:virR}
\begin{aligned}
\int_{\tau_1}^{\tau_2} \| \partial_t \psi (t) \|_{L^2}^2 \, d t &\leq C_0\Big( R \sqrt{{\bf d}(\vec\psi(\tau_1))} + R \sqrt{{\bf d}(\vec\psi(\tau_2))}\Big) \\ &+ \int_{\tau_1}^{\tau_2} \abs{\Omega_{R}( \vec \psi(t))} \, d t.
\end{aligned}
}
We will then show that by choosing the parameters $R, \tau_1, \tau_2$ appropriately and using Proposition \ref{p:modp}, we can absorb the error term involving $\Omega_{R}(\vec \psi(t))$ from the right-hand side into the left-hand side. The resulting averaged smallness of $\| \partial_t \psi(t) \|_{L^2}$ and the compactness property allow us to conclude that $\vec \psi(t) = \vec 0$, our desired contradiction.
\subsection{Splitting time into the good and the bad}
Before we are able to split the time interval of existence into good and bad intervals, we establish the following initial splitting of the time axis.
\begin{lem}\label{cl:pre-split}
Suppose that~\eqref{eq:psi_t} fails. Then for any $\epsilon_0 > 0$ sufficiently small there exist
sequences $p_n$, $q_n$ such that
\EQ{
T_- < p_0 < q_0 < p_1 < q_1 < \dots < p_{n-1} < q_{n-1} < p_n < q_n < \dots
}
with the property that for all $n \in \mathbb{N}$:
\begin{gather}
\forall t \in [p_n, q_n]: {\bf d}(\vec\psi(t)) \leq \epsilon_0, \label{eq:d-small-all-pq} \\
\forall t \in [q_n, p_{n+1}]: {\bf d}(\vec\psi(t)) \geq \frac 12\epsilon_0, \label{eq:d-large-all-pq} \\
\lim_{n \to \infty}p_n = \lim_{n\to\infty}q_n = T_+. \label{eq:pq-lim}
\end{gather}
\end{lem}
\begin{proof}
Suppose that \eqref{eq:psi_t} fails. Let $\epsilon_0 > 0$ be such that
\EQ{\label{eq:eps-limsup}
0 < \epsilon_0 < \min\Big(\limsup_{t\to T_+}{\bf d}(\vec\psi(t)), \eta_1\Big).
}
Then there exists $T_0 \in (T_-, T_+)$ such that ${\bf d}(\vec\psi(T_0)) > \epsilon_0$.
Define
\EQ{
p_0 := \sup\Big\{t: {\bf d}(\vec\psi(\tau)) \geq \frac 12 \epsilon_0, \ \forall \tau\in[T_0, t]\Big\}.
}
By Proposition \ref{p:cjk} we have $p_0 < T_+$ and ${\bf d}(\vec\psi(p_0)) = \frac 12 \epsilon_0$.
For $n \geq 1$ we define inductively:
\begin{align}
q_{n-1} := \sup\Big\{t: {\bf d}(\vec\psi(\tau)) \leq \epsilon_0, \ \forall \tau\in[p_{n-1}, t]\Big\}, \\
p_n := \sup\Big\{t: {\bf d}(\vec\psi(\tau)) \geq \frac 12 \epsilon_0, \ \forall \tau\in[q_{n-1}, t]\Big\}.
\end{align}
By \eqref{eq:eps-limsup}, Proposition~\ref{p:cjk} and induction
we have for all $n \in \mathbb{N}$
\begin{gather}
p_{n-1} < q_{n-1} < T_+, \\
q_{n-1} < p_n < T_+, \\
{\bf d}(\vec\psi(p_n)) = \frac 12\epsilon_0, \label{eq:d-val-at-p} \\
{\bf d}(\vec\psi(q_n)) = \epsilon_0 \label{eq:d-val-at-q}.
\end{gather}
The estimates \eqref{eq:d-large-all-pq} and \eqref{eq:d-small-all-pq} are immediate consequences of our choice
of $p_n$ and $q_n$. Suppose now that \eqref{eq:pq-lim} does not hold. Since $p_n$ and $q_n$ are increasing sequences, we have
\EQ{
\lim_{n \to \infty}p_n = \lim_{n\to\infty}q_n = T_1 < T_+.
}
By continuity of the flow, ${\bf d}(\vec\psi(t))$ has a limit as $t \to T_1$, which contradicts \eqref{eq:d-val-at-p} and \eqref{eq:d-val-at-q}.
\end{proof}
\begin{lem}\label{cl:split}
Let $\epsilon > 0$. There exist $\zeta_0, \epsilon' > 0$ so that the following holds.
Assume that ${\bf d}(\vec\psi(t)) < \eta_1$, with $\eta_1$ as in Remark~\ref{r:param}. Let $\lambda(t)$, $\mu(t)$ be the modulation parameters
given by Lemma~\ref{l:modeq}. Let $\zeta(t)$ be defined as in~\eqref{eq:zetadef}. Then
\begin{align}
\frac{\zeta(t)}{\mu(t)} \geq \zeta_0 &\Rightarrow {\bf d}(\vec\psi(t)) > \epsilon', \label{eq:d-geq-epsp} \\
\frac{\zeta(t)}{\mu(t)} \leq \zeta_0 &\Rightarrow {\bf d}(\vec\psi(t)) < \epsilon \label{eq:d-leq-eps}.
\end{align}
\end{lem}
\begin{proof}
By Lemma~\ref{l:modeq}, $${\bf d}(\vec\psi(t)) \leq (C^2+1)\frac{\lambda(t)}{\mu(t)} \le 2(C^2+1)\frac{\zeta(t)}{\mu(t)},$$
so we get \eqref{eq:d-leq-eps} with any $\zeta_0 < \epsilon/(2(C^2+1))$.
To prove \eqref{eq:d-geq-epsp}, we first note that by \eqref{eq:bound-on-l}, we have
\begin{align*}
\frac{\lambda(t)}{\mu(t)} \left |
\log \frac{\lambda(t)}{\mu(t)}
\right | \geq \zeta_0/3.
\end{align*}
Since the function $f(x) = x |\log x|$ is increasing on $(0,C \eta_1)$ for $\eta_1$ small, we conclude that
\begin{align*}
\frac{\lambda(t)}{\mu(t)} \geq f^{-1}(\zeta_0/3).
\end{align*}
Thus, by \eqref{eq:gdotgd} we obtain \eqref{eq:d-geq-epsp} for any $\epsilon' < \frac{1}{C} f^{-1}(\zeta_0/3)$.
\end{proof}
We now split the time axis into a collection of good and bad intervals.
\begin{lem}\label{l:time_split}
Suppose that the conclusion ~\eqref{eq:psi_t} fails. Let $\epsilon_0>0$ be small enough so that the conclusions of Lemma~\ref{cl:pre-split}
and Proposition~\ref{prop:modulation} hold, and let $C_0$ denote the constant from Lemma \ref{l:error-estim}. Then there exist $\epsilon, \epsilon' > 0$ with $\epsilon' < \epsilon$ and $\epsilon< \frac{1}{10} \epsilon_0$ as in Proposition~\ref{prop:modulation},
and sequences of times $(a_m), (b_m), (c_m)$ such that
\EQ{
T_- < a_1 < c_1 < b_1 < \dots < a_m < c_m < b_m < a_{m+1} < \dots
}
and the following holds for all $m = 2, 3, 4, \ldots$:
\begin{gather}
\forall t \in [b_{m}, a_{m+1}]: {\bf d}(\vec\psi(t)) \geq \epsilon', \label{eq:d-large-all} \\
\exists t \in [b_{m}, a_{m+1}]: {\bf d}(\vec\psi(t)) \geq 2\epsilon, \label{eq:d-large-some} \\
{\bf d}(\vec\psi(a_m)) = {\bf d}(\vec\psi(b_{m})) = \epsilon, \label{eq:d-small-some} \\
C_0 \int_{a_{m+1}}^{c_{m+1}}\sqrt{{\bf d}(\vec\psi(t))}\,d t \leq \frac{1}{10}\int_{b_{m}}^{a_{m+1}}\|\partial_t\psi(t)\|_{L^2}^2\,d t, \label{eq:err-mod-contr-1} \\
C_0 \int_{c_{m}}^{b_{m}}\sqrt{{\bf d}(\vec\psi(t))}\,d t \leq \frac{1}{10}\int_{b_{m}}^{a_{m+1}}\|\partial_t\psi(t)\|_{L^2}^2\,d t, \label{eq:err-mod-contr-2}
\end{gather}
and
\EQ{
\liminf_{m \to\infty} {\bf d}(\vec\psi(c_m)) = 0. \label{eq:d-conv-0}
}
\end{lem}
\begin{proof}
Let $\epsilon, \epsilon_0 > 0$ be small, $\epsilon < \frac{1}{10} \epsilon_0$, so that Lemma~\ref{cl:pre-split}
and Proposition~\ref{prop:modulation} hold
with the constant $C = 10 C_0$ in Proposition~\ref{prop:modulation}.
Let $\zeta_0$ and $\epsilon'$ be as in Lemma~\ref{cl:split}. We first construct the sequence of times $(c_m)$. By Proposition \ref{p:cjk} and our initial splitting, there exists a sequence $0 \leq n_1 < n_2 < \dots$ of indices such that
\EQ{\label{eq:inf-leq-eps}
\inf_{t\in[p_{n_m}, q_{n_m}]} {\bf d}(\vec\psi(t)) \leq \epsilon'.
}
Since ${\bf d}(\vec \psi(t)) < \epsilon_0$ on $[p_n,q_n]$, and $\epsilon_0$ is small, the modulation parameters $\lambda(t)$, $\mu(t)$ and $\zeta(t)$
are well defined on $[p_{n_m}, q_{n_m}]$.
Let $c_m \in [p_{n_m}, q_{n_m}]$ be such that
\EQ{
\zeta(c_m)/\mu(c_m) = \inf_{t \in [p_{n_m}, q_{n_m}]}\zeta(t)/\mu(t).
}
By Lemma~\ref{cl:split} and \eqref{eq:inf-leq-eps} we conclude that $\zeta(c_m) / \mu(c_m) < \zeta_0$.
By Lemma~\ref{cl:split} it also follows
that ${\bf d}(\vec\psi(c_m)) < \epsilon < \frac{1}{10}\epsilon_0$.
Hence $c_m \in (p_{n_m}, q_{n_m})$ and
\EQ{
\frac{d}{dt} \Big|_{t = c_m}\Big(\frac{\zeta(t)}{\mu(t)}\Big) = 0.
}
By Proposition~\ref{prop:modulation} with $t_0 = c_m$, there exist $t_1,t_2$ with $t_2 < t_1 < t_0 = c_m$ such that the following holds:
\begin{align}
{\bf d}(\vec \psi(t)) &\geq 2\epsilon,\qquad \text{for }t \in [t_2, t_1], \label{eq:d-t1-t2_sp}\\
{\bf d}(\vec \psi(t)) &\leq \frac 14\epsilon_0,\qquad \text{for }t \in [t_1, t_0], \label{eq:d-t0-t1_sp}\\
{\bf d}(\vec \psi(t_2)) &\geq 2\epsilon_0, \label{eq:d-at-t2_sp} \\
\int_{t_2}^{t_1} \|\partial_t\psi(t)\|_{L^2}^2 \, d t &\geq 10 C_0\int_{t_1}^{t_0}\sqrt{{\bf d}(\vec \psi(t))} \, d t.
\label{eq:err-absorb_sp}
\end{align}
We denote $\alpha_m := t_2$. Since ${\bf d}(\vec \psi(t)) \geq \epsilon_0/2$ on $[q_{n_m-1}, p_{n_m}]$, \eqref{eq:d-t0-t1_sp} implies that $t_1 \in (p_{n_m}, c_m]$. Define
\EQ{
a_m := \sup\{t \geq t_1: {\bf d}(\vec\psi(\tau)) \geq \epsilon,\ \forall \tau\in [t_1, t]\}.
}
Note that the supremum is well defined since \eqref{eq:d-t1-t2_sp} implies that ${\bf d}(\vec\psi(t_1)) > \epsilon$.
Since ${\bf d}(\vec\psi(c_m)) < \epsilon$, we have $a_m \in (p_{n_m}, c_m)$
and ${\bf d}(\vec\psi(a_m)) = \epsilon$.
The bound \eqref{eq:d-t1-t2_sp} implies that
${\bf d}(\vec\psi(t)) \geq \epsilon$ for $t \in [\alpha_m, t_1]$.
Since ${\bf d}(\vec\psi(t)) \geq \epsilon$ for $t \in [t_1, a_m]$ by our definition of $a_m$, we conclude that
\EQ{\label{eq:d-sigm-am}
{\bf d}(\vec\psi(t)) \geq \epsilon,\ \forall t \in [\alpha_m, a_m].
}
The bounds \eqref{eq:d-at-t2_sp} and \eqref{eq:d-small-all-pq} imply that $t_2 < p_{n_m}$.
Thus,
by \eqref{eq:d-sigm-am} we have
\EQ{\label{eq:d-pnm-am}
{\bf d}(\vec\psi(t)) \geq \epsilon,\ \forall t \in [p_{n_m}, a_m].
}
Moreover, since $t_1 > p_{n_m}$ we have
\begin{align}\label{eq:al_m}
{\bf d}(\vec \psi(t)) \geq 2\epsilon, \quad \forall t \in [\alpha_m, p_{n_m}].
\end{align}
Finally, \eqref{eq:err-absorb_sp} implies that
\EQ{\label{eq:err-absorb-sig}
\int_{\alpha_m}^{a_m} \|\partial_t\psi(t)\|_{L^2}^2 \, d t \geq 10 C_0\int_{a_m}^{c_m}\sqrt{{\bf d}(\vec \psi(t))} \, d t.
}
We now use Proposition~\ref{prop:modulation} with $t_0 = c_m$ in the forward time direction and
obtain times $t_1, t_2$ (different from the previous ones) with $c_m < t_1 < t_2$. Arguing as before, we conclude that $t_1 \in (c_m, q_{n_m})$ and define
\EQ{
b_m := \inf\{t \leq t_1: {\bf d}(\vec\psi(\tau)) \geq \epsilon,\ \forall \tau\in[t, t_1]\}.
}
As in the construction of $a_m$, we conclude that $b_m \in (c_m, q_{n_m})$ and ${\bf d}(\vec\psi(b_m)) = \epsilon$. We denote $\beta_m := t_2$. By nearly the same arguments used to establish \eqref{eq:d-sigm-am} and \eqref{eq:d-pnm-am}, we conclude that
\begin{gather}
\label{eq:d-bm-taum}
{\bf d}(\vec\psi(t)) \geq \epsilon,\ \forall t \in [b_m, \beta_m], \\
\label{eq:d-bm-qnm}
{\bf d}(\vec\psi(t)) \geq \epsilon,\ \forall t \in [b_m, q_{n_m}], \\
\label{eq:err-absorb-tau}
\int_{b_m}^{\beta_m} \|\partial_t\psi(t)\|_{L^2}^2 \, d t \geq 10 C_0\int_{c_m}^{b_m}\sqrt{{\bf d}(\vec \psi(t))} \, d t.
\end{gather}
We now prove \eqref{eq:err-mod-contr-1}. The argument for \eqref{eq:err-mod-contr-2} is completely analogous and will be omitted.
By \eqref{eq:err-absorb-sig}, it suffices to prove that $b_{m-1} < \alpha_m$. If not, then by \eqref{eq:al_m}, it follows that $\epsilon = {\bf d}(\vec\psi(b_{m-1})) \geq 2 \epsilon$, a contradiction.
To prove \eqref{eq:d-large-all}, let $t$ be a time such that ${\bf d}(\vec\psi(t)) < \epsilon'$.
By our initial splitting, $t \in [p_{n_m}, q_{n_m}]$ for some $m$. Then
\eqref{eq:d-pnm-am} and \eqref{eq:d-bm-qnm} imply that $t\in[a_m, b_m]$.
Finally, we prove the convergence \eqref{eq:d-conv-0}. By Proposition \ref{p:cjk} and \eqref{eq:d-large-all}, it follows that
\begin{align*}
\liminf_{m \rightarrow \infty} \frac{\zeta(c_m)}{\mu(c_m)} = 0.
\end{align*}
Since the function $f(x) = x |\log x|$ is increasing for small $x$ and $\zeta(t)/\mu(t) \sim f(\lambda(t)/\mu(t))$, it follows that $\liminf_m (\lambda(c_m)/\mu(c_m)) = 0$. The claim \eqref{eq:d-conv-0} then follows from \eqref{eq:lamud1}.
\end{proof}
\begin{rem}\label{rem:eps-small}
It follows from the proof that $\epsilon$ can be taken as small as we wish.
\end{rem}
\subsection{Compactness on good intervals}
For the remainder of the proof of Proposition~\ref{p:psi_t},
we fix $\epsilon, \epsilon' > 0$ and the partition of the time axis given by Lemma
\ref{l:time_split}. The intervals $[b_{m-1},a_m]$ on which ${\bf d}(\vec \psi(t)) \geq \epsilon'$ are what we referred to earlier as the good intervals and are denoted by
\EQ{
I_m := [b_{m-1}, a_m], \qquad I := \bigcup_{m\geq 1} I_m.
}
We then have
\begin{equation}
\forall t\in I: {\bf d}(\vec\psi(t)) \geq \epsilon'. \label{eq:d-large-I}
\end{equation}
We now show that $\vec\psi(t)$ has the following compactness property on $I$.
\begin{lem} \label{l:Icompact}
There exists a function $\nu \in C(I;(0,\infty))$ such that the modulated trajectory
\EQ{
\mathcal K:= \{ \vec \psi(t)_{1/\nu(t)} \mid t \in I \} \subset \mathcal{H}_0
}
is pre-compact in $\mathcal{H}_0$.
\end{lem}
\begin{proof}
We will first show pre-compactness along an arbitrary sequence of times. In particular, we claim that if $t_n \in I$ is a sequence of times, then there exists a subsequence, which we continue to denote by $t_n$, and a sequence of scales $\nu_n \in (0,\infty)$, so that $\vec \psi(t_n)_{1/\nu_n}$ converges strongly in $\mathcal{H}_0$. Indeed, by Lemma~\ref{l:d-size} and \eqref{eq:d-large-I}, we conclude that
\EQ{ \label{eq:IepsH}
\| \vec \psi(t) \|_{\mathcal{H}_0} \le C(\epsilon') \quad \forall t \in I.
}
The claim now follows immediately from Lemma~\ref{l:1profile}.
We now transfer the above sequential pre-compactness to full pre-compactness using continuity of the flow.
For $t \in I$, we define $\nu(t)$ to be the unique positive number so that
\begin{equation}
\begin{gathered}
\int_0^\infty \mathrm e^{-r}\left(( \partial_t \psi_{1/\nu(t)}(t, r))^2+( \partial_r \psi_{1/\nu(t)}(t, r))^2 + \frac{(\psi_{1/\nu(t)}(t, r))^2}{r^2} \right) r \, dr \\ = \frac 12 \| \vec \psi(t) \|_{\mathcal{H}_0}^2.
\end{gathered}
\end{equation}
By the change of variables $\rho = \nu(t) r$, we see that $\nu(t)$ is well defined. Moreover, since the flow $t \mapsto \vec \psi(t)$ is continuous in $\mathcal{H}_0$, the function $\nu(t)$ is continuous.
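To spell this out (a brief sketch, under the $\mathcal{H}_0$-invariant scaling convention $\psi_{1/\nu}(t,r) = \psi(t, \nu r)$, $\partial_t \psi_{1/\nu}(t,r) = \nu\, (\partial_t\psi)(t,\nu r)$, which is consistent with the change of variables displayed below): after substituting $\rho = \nu(t) r$, the left-hand side of the defining equation equals
\begin{align*}
\int_0^\infty \mathrm e^{-\rho/\nu(t)} \left( (\partial_t \psi(t,\rho))^2 + (\partial_r \psi(t,\rho))^2 + \frac{\psi(t,\rho)^2}{\rho^2} \right) \rho \, d\rho,
\end{align*}
which, for fixed $t$, is a continuous and strictly increasing function of $\nu(t)$ that tends to $0$ as $\nu(t) \to 0^+$ and to $\| \vec \psi(t) \|_{\mathcal{H}_0}^2$ as $\nu(t) \to \infty$ by monotone convergence. Hence it attains the value $\frac 12 \| \vec \psi(t) \|_{\mathcal{H}_0}^2$ for exactly one $\nu(t)$.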
Assume, towards a contradiction, that $\{ \vec \psi(t)_{1/\nu(t)} \mid t \in I \}$ is not pre-compact in $\mathcal{H}_0$.
Then there exists a sequence of times $t_n \in I$ such that $\vec \psi(t_n)_{1/\nu(t_n)}$ has no convergent subsequence.
By our sequential pre-compactness claim, there exist a subsequence (still denoted $t_n$)
and scales $\nu_n$ such that $\vec \psi(t_n)_{1/\nu_n}$ converges in $\mathcal{H}_0$
to some $\vec\varphi = (\varphi_0, \varphi_1) \neq (0,0)$ (see Lemma \ref{l:1profile}).
We claim that there exists $C > 0$ such that $1/C \leq \nu_n / \nu(t_n) \leq C$. Suppose not. Then after passing to a subsequence, either $\nu_n / \nu(t_n) \rightarrow \infty$ or $\nu_n / \nu(t_n) \rightarrow 0$. By a change of variables we have
\begin{align*}
\int_0^\infty & \mathrm e^{-r}\left(( \partial_t \psi_{1/\nu(t_n)}(t_n, r))^2+( \partial_r \psi_{1/\nu(t_n)}(t_n, r))^2 + \frac{(\psi_{1/\nu(t_n)}(t_n, r))^2}{r^2} \right) r \, dr \\&=
\int_0^\infty \mathrm e^{- \nu_n r / \nu(t_n)}\left(( \partial_t \psi_{1/\nu_n}(t_n, r))^2+( \partial_r \psi_{1/\nu_n}(t_n, r))^2 + \frac{(\psi_{1/\nu_n}(t_n, r))^2}{r^2} \right) r \, dr.
\end{align*}
Since $\|\vec\psi(t_n)\|_{\mathcal{H}_0}$ converges to $\|\vec\varphi\|_{\mathcal{H}_0}$, the choice of $\nu(t)$ implies that the left-hand side above converges to $\frac{1}{2} \| \vec \varphi \|_{\mathcal{H}_0}^2 > 0$. Since $\vec \psi(t_n)_{1/\nu_n} \rightarrow \vec \varphi$ in $\mathcal{H}_0$, if either $\nu_n / \nu(t_n) \rightarrow \infty$ or $\nu_n / \nu(t_n) \rightarrow 0$, then the right-hand side converges to either $0$ or $\| \vec \varphi \|_{\mathcal{H}_0}^2$, a contradiction. This proves the claim.
Since $1/C \leq \nu_n / \nu(t_n) \leq C$, the sequence $\nu_n / \nu(t_n)$ has a subsequential limit in $(0,\infty)$.
Thus $\vec \psi(t_n)_{1/\nu(t_n)}$ has a convergent subsequence in $\mathcal{H}_0$. This contradicts our initial assumption that $\vec \psi(t_n)_{1/\nu(t_n)}$ has no convergent subsequence and finishes the proof.
\end{proof}
For $m \in \mathbb{N}$, we denote the length of a good interval by
\EQ{
\nu_m := |I_m| = a_m - b_{m-1}.
}
\begin{lem}\label{l:interval-bd}
There exists $C_2 > 0$ such that for all $m \geq 1$ and all $t \in I_m$ we have
\EQ{
\frac{1}{C_2} \nu(t) \leq \nu_m \leq C_2 \nu(t).
}
\end{lem}
\begin{proof}[Proof of Lemma~\ref{l:interval-bd}]
The proof is by contradiction. Suppose that there exists a sequence of integers $m_\ell$ and times $t_\ell \in I_{m_\ell}$ such that
\begin{align}\label{eq:int-bd-part-1}
\lim_{\ell\to\infty}\frac{\nu_{m_\ell}}{\nu(t_\ell)} = 0.
\end{align}
Let $\vec\psi_\ell(s)$ be the solution of \eqref{eq:wmk} with initial data
$\vec\psi_\ell(0) = \vec\psi(t_\ell)_{1/\nu(t_\ell)}$.
By Lemma \ref{l:Icompact} and after extraction of a subsequence, there exists $\vec \varphi_0 \neq (0,0)$ such that $\vec\psi_\ell(0) \rightarrow \vec\varphi_0$ in $\mathcal{H}_0$.
Let $\vec\varphi(s)$ be the solution of \eqref{eq:wmk} with initial data
$\vec\varphi(0) = \vec\varphi_0$, which is defined on some interval $[-s_0,s_0]$. By the well-posedness theory for \eqref{eq:wmk}, the flow $\vec\psi_\ell(s)$ exists for $s \in [{-}s_0, s_0]$ for all sufficiently large $\ell$, and $$\lim_{\ell \rightarrow \infty} \| \vec\psi_\ell(s) - \vec\varphi(s) \|_{L^\infty_t([-s_0,s_0]; \mathcal{H}_0)} = 0.$$
Let $t_\ell' \in I_{m_\ell}$ be any sequence of times. We define $s_\ell = \frac{t_\ell' - t_\ell}{\nu(t_\ell)}$.
By \eqref{eq:int-bd-part-1} we have that $\lim_{\ell} s_\ell = 0$.
Thus, $s_\ell \in [{-}s_0, s_0]$ for all $\ell$ sufficiently large, and we conclude that
\begin{align}
\lim_{\ell\to\infty}\|\vec\psi_\ell(s_\ell) - \vec\varphi(s_\ell)\|_{\mathcal{H}_0} = 0.
\end{align}
By continuity of the flow, $\lim_{\ell\to \infty}\|\vec\varphi(s_\ell) - \vec\varphi_0\|_{\mathcal{H}_0} = 0$,
which by the triangle inequality implies
\begin{align}
\lim_{\ell\to\infty}\|\vec\psi_\ell(s_\ell) - \vec\varphi_0\|_{\mathcal{H}_0} = 0.
\end{align}
In particular, $\lim_{\ell\to\infty} {\bf d}(\vec\psi_\ell(s_\ell)) = {\bf d}(\vec\varphi_0)$.
By the time translation and scaling symmetry of \eqref{eq:wmk}, we have $\vec\psi_\ell(s_\ell) = \vec\psi(t_\ell')_{1/\nu(t_\ell)}$. Thus, ${\bf d}(\vec\psi(t_\ell')) = {\bf d}(\vec\psi_\ell(s_\ell))$
and we obtain
\begin{align}
\lim_{\ell\to\infty} {\bf d}(\vec\psi(t_\ell')) = {\bf d}(\vec\varphi_0),
\end{align}
for any sequence $t_\ell' \in I_{m_\ell}$. If we choose $t_{\ell}' = a_{m_\ell}$, then we conclude that ${\bf d}(\vec \varphi_0) = \epsilon$. But by Lemma \ref{l:time_split}, there exists a sequence of times $t_{\ell}'$ such that ${\bf d}(\vec \psi(t'_\ell)) \geq 2\epsilon$, which forces ${\bf d}(\vec \varphi_0) \geq 2\epsilon$. This is a contradiction, and we obtain the lower bound of the lemma.
Suppose now that there exist a sequence of integers $m_\ell$ and times $t_\ell \in I_{m_\ell}$ such that
\begin{align}
\lim_{\ell\to\infty}\frac{\nu(t_\ell)}{\nu_{m_\ell}} = 0.
\end{align}
After extracting a subsequence, either $t_\ell \leq \frac{a_{m_\ell} + b_{m_\ell-1}}{2}$ for all $\ell$ or $t_\ell \geq \frac{a_{m_\ell} + b_{m_\ell-1}}{2}$ for all $\ell$. In the former case we conclude that
\begin{align}\label{eq:int-bd-part-2}
\lim_{\ell\to\infty}\frac{\nu(t_\ell)}{a_{m_\ell} - t_\ell} = 0.
\end{align}
We will consider this situation only; the other case is treated similarly.
As before, we denote by $\vec\psi_\ell(s)$ the solution of \eqref{eq:wmk} with initial data
$\vec\psi_\ell(0) = \vec\psi(t_\ell)_{1/\nu(t_\ell)}$,
and assume (after extraction if necessary) that $\vec\psi_\ell(0) \to \vec\varphi_0 \in \mathcal{H}_0$.
Let $\vec\varphi(s): (-T_-(\vec \varphi_0), T_+(\vec \varphi_0)) \to \mathcal{H}_0$ be the solution of \eqref{eq:wmk} with initial data
$\vec\varphi(0) = \vec\varphi_0$.
By Lemma~\ref{l:1profile}, the solution $\vec \varphi(s)$ does not scatter in forward or backward time and has threshold energy
\begin{align}
\mathcal{E}(\vec \varphi) = \mathcal{E}(\vec \psi) = 2 \mathcal{E}(\vec Q).
\end{align}
Then by Proposition~\ref{p:cjk} there exists $\sigma \in[0, T_+(\vec\varphi_0))$ such that
$ {\bf d}(\vec\varphi(\sigma)) \leq \frac 12 \epsilon'$.
By the well-posedness theory for \eqref{eq:wmk}, $\vec\psi_\ell(s)$ is defined for $s \in [0, \sigma]$ for all $\ell$ sufficiently large and
\begin{align*}
\vec\psi_\ell(\sigma) \to \vec\varphi(\sigma) \mbox{ in } \mathcal{H}_0.
\end{align*}
Thus, ${\bf d}(\vec\psi_\ell(\sigma)) \to {\bf d}(\vec\varphi(\sigma)) \leq \frac 12 \epsilon'$.
Define $t_\ell' := t_\ell + \nu(t_\ell)\sigma$. Then by the time translation and scaling symmetries of \eqref{eq:wmk} we have $\vec\psi(t_\ell') = \vec\psi_\ell(\sigma)_{\nu(t_\ell)}$, so
\begin{align}
\lim_{\ell \to \infty} {\bf d}(\vec\psi(t_\ell')) = \lim_{\ell\to\infty} {\bf d}(\vec\psi_\ell(\sigma)) \leq \frac 12 \epsilon'.
\end{align}
Our assumption \eqref{eq:int-bd-part-2} implies that for all $\ell$ sufficiently large, we have
$t_\ell \leq t_\ell' \leq a_{m_\ell}$. Thus, by \eqref{eq:d-large-all} we conclude that
${\bf d}(\vec\psi(t_\ell')) \geq \epsilon'$, a contradiction. Thus the upper bound of the lemma holds, and the proof is complete.
\end{proof}
An immediate corollary of Lemma \ref{l:interval-bd} is the following.
\begin{cor}\label{cor:k1}
The modulated trajectory
\begin{align}
\mathcal K_1:= \bigcup_{m \geq 1}\{ \vec \psi(t)_{1/\nu_m} \mid t \in I_m \} \label{eq:K1-comp}
\end{align}
is pre-compact in $\mathcal{H}_0$.
\end{cor}
Before concluding the proof of Proposition \ref{p:psi_t}, we record the following standard consequence of the compactness of the trajectory.
\begin{lem}\label{l:err-on-I}
Given any $\delta > 0$, there exists $R_0 > 0$ such that if $R_1 \geq R_0$, then for all $m = 2, 3, \ldots$ we have
\begin{align}
\int_{I_m}|\Omega_{\nu_m R_1}(\vec\psi(t))|\, d t \leq \delta \nu_m.
\end{align}
\end{lem}
\begin{proof}
By a change of variables, it suffices to show that
\begin{align}
\left | \Omega_{R_1}(\vec\psi(t)_{1/\nu_m}) \right| \leq \delta,
\quad \forall t \in I_m,
\end{align}
for all $R_1$ sufficiently large. By \eqref{eq:OmRest},
\begin{align*}
\left | \Omega_{R_1}(\vec\psi(t)_{1/\nu_m}) \right |
\lesssim \mathcal E^\infty_{R_1}(\vec\psi(t)_{1/\nu_m}).
\end{align*}
By the pre-compactness of the trajectory $\mathcal{K}_1$, the result follows.
\end{proof}
\subsection{Proof of Proposition \ref{p:psi_t}}
By \eqref{eq:d-conv-0}, there exists a sequence of times $c_{m_j}$ such that ${\bf d}(\vec \psi(c_{m_j})) \rightarrow 0$ as $j \rightarrow \infty$. Let $\delta > 0$, and let $R_0$ be as in Lemma \ref{l:err-on-I}. For $m_j < m_k$, we define
\begin{align*}
R := R_0 \max_{m_j < m < m_k} \nu_m.
\end{align*}
By the virial identity, Lemma \ref{l:vir}, we have
\begin{align*}
\int_{c_{m_j}}^{c_{m_k}} \| \partial_t \psi(t) \|^2 dt &\leq
\left | \left\langle \partial_t \psi \mid \chi_R \, r \partial_r \psi \right\rangle(t) \Big |_{t = c_{m_j}}^{t = c_{m_k}} \right | + \sum_{m = m_j}^{m_k-1} \int_{b_{m}}^{a_{m+1}} | \Omega_{R}(\vec \psi(t)) | \, dt \\
&\quad + \sum_{m=m_j}^{m_k-1}\int_{c_m}^{b_m}| \Omega_{R}(\vec \psi(t)) | \, d t
+ \sum_{m=m_j}^{m_k-1} \int_{a_{m+1}}^{c_{m+1}} | \Omega_{R}(\vec \psi(t)) | \, d t.
\end{align*}
Replacing the left-hand side above with an integral over only the good intervals and using Lemma \ref{l:error-estim} to bound the right-hand side, we obtain
\begin{align}
\begin{split}\label{eq:virial_est}
\int_{I\cap (c_{m_j},c_{m_k})} \| \partial_t \psi(t)\|^2 dt
&\leq
C_0 R \left [ \sqrt{{\bf d}(\vec \psi(c_{m_j}))} + \sqrt{{\bf d}(\vec \psi(c_{m_k}))} \right ] + \sum_{m = m_j}^{m_k-1} \int_{b_{m}}^{a_{m+1}} | \Omega_{R}(\vec \psi(t)) | \, dt \\
&\quad + C_0 \sum_{m=m_j}^{m_k-1}\int_{c_m}^{b_m}\sqrt{{\bf d}(\vec\psi(t))}\, d t
+ C_0 \sum_{m=m_j}^{m_k-1} \int_{a_{m+1}}^{c_{m+1}}\sqrt{{\bf d}(\vec\psi(t))}\, d t.
\end{split}
\end{align}
The estimate \eqref{eq:err-mod-contr-1} implies that
\begin{align*}
C_0 \sum_{m=m_j}^{m_k-1}\int_{c_m}^{b_m}\sqrt{{\bf d}(\vec\psi(t))}\, d t
\leq \frac{1}{10} \sum_{m = m_j}^{m_k-1} \int_{b_{m}}^{a_{m+1}} \| \partial_t \psi(t)\|^2 dt = \frac{1}{10} \int_{I\cap (c_{m_j},c_{m_k})} \| \partial_t \psi(t)\|^2 dt,
\end{align*}
and the estimate \eqref{eq:err-mod-contr-2} implies that
\begin{align*}
C_0 \sum_{m=m_j}^{m_k-1} \int_{a_{m+1}}^{c_{m+1}} \sqrt{{\bf d}(\vec\psi(t))}\, d t
\leq \frac{1}{10} \sum_{m = m_j}^{m_k-1} \int_{b_{m}}^{a_{m+1}} \| \partial_t \psi(t)\|^2 dt = \frac{1}{10} \int_{I\cap (c_{m_j},c_{m_k})} \| \partial_t \psi(t)\|^2 dt.
\end{align*}
By our choice of $R$ we have
\begin{align*}
\sum_{m = m_j}^{m_k-1} \int_{b_{m}}^{a_{m+1}} | \Omega_{R}(\vec \psi(t)) | \, dt
\leq \delta \sum_{m = m_j}^{m_k-1} \nu_m = \delta |I \cap (c_{m_j},c_{m_k})|,
\end{align*}
as well as $R \leq R_0 |I \cap (c_{m_j},c_{m_k})|$. The previous three estimates and \eqref{eq:virial_est} imply that
\begin{align}
\fint_{I\cap (c_{m_j},c_{m_k})} \| \partial_t \psi(t)\|^2 dt
\leq \frac{5C_0 R_0}{3} \left [ \sqrt{{\bf d}(\vec \psi(c_{m_j}))} + \sqrt{{\bf d}(\vec \psi(c_{m_k}))} \right ] + \frac{5 \delta}{3}.
\end{align}
Since ${\bf d} (\vec \psi(c_{m_j})) \rightarrow 0$ as $j \rightarrow \infty$ and $\delta$ is arbitrary, we conclude that
\begin{align}\label{eq:nearly}
\limsup_{j \rightarrow \infty} \limsup_{k \rightarrow \infty}
\fint_{I\cap (c_{m_j},c_{m_k})} \| \partial_t \psi(t)\|^2_{L^2} dt = 0.
\end{align}
We claim that
there exists a sequence of good intervals $I_{m_\ell} = [b_{m_\ell-1}, a_{m_\ell}]$ such that
\begin{align}
\lim_{\ell \rightarrow \infty} \fint_{I_{m_\ell}} \| \partial_t \psi(t) \|^2_{L^2} dt = 0. \label{average_psit}
\end{align}
If not, then there exists $\delta_0 > 0$ such that for all $m = 2, 3, \ldots$, we have
$\int_{I_m} \| \partial_t \psi(t) \|^2_{L^2} dt \geq \delta_0 \nu_m.$ Summing this lower bound implies that for every $c_{m_j} < c_{m_k}$ we have
\begin{align*}
\int_{I\cap (c_{m_j},c_{m_k})} \| \partial_t \psi(t) \|_{L^2}^2 dt \geq \delta_0 |I \cap (c_{m_j},c_{m_k})|,
\end{align*}
which contradicts \eqref{eq:nearly}. This proves our claim.
We now conclude the proof of Proposition \ref{p:psi_t}. We
denote the midpoint of our sequence of good intervals $[b_{m_\ell-1}, a_{m_\ell }]$ by $t_{m_\ell} := \frac 12(b_{m_\ell-1} + a_{m_\ell})$.
Note that for any $0<s_1 \le 1/2$ we have
\begin{align}
b_{m_{\ell}-1} \le t_{m_\ell} - \nu_{m_\ell} s_1 \leq t_{m_\ell} + \nu_{m_\ell} s_1 \le a_{m_\ell}. \label{eq:tm-dist}
\end{align}
We define a sequence of solutions of \eqref{eq:wmk} via
\begin{align}
\vec \psi_{m_\ell}(s) := \vec\psi(t_{m_\ell} + \nu_{m_\ell} s)_{1/\nu_{m_\ell}}\qquad \text{for }s \in [-s_1, s_1].
\end{align}
Then by a change of variables, \eqref{eq:tm-dist} and \eqref{average_psit}, we have
\begin{align}\label{eq:dt_zero}
\lim_{\ell \rightarrow \infty} \int_{-s_1}^{s_1} \|\partial_s \psi_{m_\ell}(s)\|_{L^2}^2\, d s = 0.
\end{align}
By Corollary \ref{cor:k1}, and after extraction of a subsequence if necessary, $\vec \psi_{m_\ell}(0) \to \vec\varphi_0$ in $\mathcal{H}_0$.
Let $\vec\varphi(s)$ be the solution of \eqref{eq:wmk}
with initial data $\vec\varphi(0) = \vec\varphi_0$. For $s_1 > 0$ sufficiently small, $\vec \varphi(s)$ is defined on $[-s_1,s_1]$, and by the well-posedness theory for \eqref{eq:wmk} we have
\begin{align*}
\lim_{\ell \rightarrow \infty} \sup_{s \in [-s_1,s_1]}
\| \vec\psi_{m_\ell}(s) - \vec\varphi(s) \|_{\mathcal{H}_0} = 0.
\end{align*}
By \eqref{eq:dt_zero} we conclude that
\begin{align}
\int_{-s_1}^{s_1}\|\partial_s \varphi(s)\|_{L^2}^2\, d s = 0,
\end{align}
so $\vec \varphi(s)$ is a harmonic map. The only degree-$0$ harmonic map is the constant map $\vec \varphi = (0,0)$. This contradicts the fact that $\mathcal E(\vec \varphi) = \mathcal E(\vec \psi) = 8\pi \neq 0$. The proof of Proposition \ref{p:psi_t} is complete.
\qed
\subsection{Proof of Theorem~\ref{t:main}}
We first use Proposition \ref{p:psi_t} to prove that $\vec \psi(t)$ converges to a pure two-bubble or an anti-two-bubble as $t \rightarrow T_+$. Let $\epsilon > 0$ be sufficiently small. By Proposition \ref{p:psi_t} there exists $T_0 \in (T_-, T_+)$ such that
\begin{align}
{\bf d}(\vec \psi(t)) < \epsilon, \quad \forall t \geq T_0.
\end{align}
We further assume that $\epsilon < \alpha_0$,
where $\alpha_0$ is the constant from Lemma~\ref{l:dpm}. Towards a contradiction, assume that $\vec \psi(t)$ alternates between being close to a pure two-bubble and an anti-two-bubble, i.e., that there exist $t_1, t_2 \geq T_0$, $t_1 < t_2$, such that
${\bf d}_+(\vec\psi(t_1)) \leq \epsilon$ and ${\bf d}_-(\vec\psi(t_2)) \leq \epsilon$.
By Lemma~\ref{l:dpm} we have ${\bf d}_+(\vec\psi(t_2)) \geq \alpha_0$
and ${\bf d}_-(\vec \psi(t_1)) \geq \alpha_0$. By continuity, there exists $t_0 \in (t_1,t_2)$ such that ${\bf d}_+(\vec\psi(t_0)) = {\bf d}_-(\vec \psi(t_0))$. But then, again by Lemma \ref{l:dpm}, we conclude that ${\bf d}_+(\vec \psi(t_0)) =
{\bf d}_-(\vec \psi(t_0)) \geq \alpha_0 > \epsilon$. This contradicts our definition of $T_0$, which proves the desired convergence. Without loss of generality, we assume that ${\bf d}_+(\vec \psi(t)) \rightarrow 0$ as $t \rightarrow T_+$.
We now prove finite time blow-up and the asymptotics of the scales. By taking $T_0$ larger if necessary, we may assume that
\begin{align*}
{\bf d}_+(\vec \psi(t)) < \epsilon, \quad \forall t \geq T_0.
\end{align*}
We note that, as long as $\epsilon > 0$ is sufficiently small,
the modulation parameters $\lambda(t)$ and $\mu(t)$
are well-defined on $[T_0, T_+)$, and by Lemma \ref{l:modeq}
\begin{align}
\vec \psi(t) = \vec Q_{\lambda(t)} + \vec Q_{\mu(t)} + o_{\mathcal{H}_0}(1), \quad \mbox{ as } t \rightarrow T_+.
\end{align}
Let $\epsilon_0 > 0$ and choose $\epsilon$ smaller if necessary so that the conclusions of Proposition \ref{prop:modulation} hold.
Let $\zeta(t)$ be as in~\eqref{eq:zetadef} with $L$ and $M$ chosen as in Remark \ref{r:param} so that $\zeta(t) \sim \lambda(t) |\log (\lambda(t)/\mu(t)) |$.
By rescaling if necessary, we can assume that $\mu(T_0) = 1$.
Since ${\bf d}_+(\vec \psi(t)) \rightarrow 0$ as $t \rightarrow T_+$, there exists a sequence of times $\tau_n \to T_+$ such that
\begin{align}
\frac{d}{dt} \Big|_{t = \tau_n}\Big(\frac{\zeta(t)}{\mu(t)}\Big) \leq 0.
\end{align}
Then, for $t_0 := \tau_n$, there exist times $t_1 \leq t_0$ and $t_2 \leq t_1$ satisfying the conclusions of Proposition \ref{prop:modulation}.
By our choice of $T_0$ and \eqref{eq:d-t1-t2} we have $t_1 \leq T_0$ for every $t_0 = \tau_n$. From the proof of Proposition \ref{prop:modulation} we recall that $\mu(t) \in [1/2,2]$ on $[T_0,\tau_n]$, and the function $$\xi(t) = -b(t) + \zeta(t)^{\frac 12}$$ satisfies, for all $t \in [T_0,\tau_n]$,
\begin{align}\label{eq:fin_est}
\zeta(t)^{\frac 12} \leq 2 \xi(t) \leq 20 \zeta(t)^{\frac 12}, \quad
\xi'(t) \leq -\frac{1}{2} \zeta^{-\frac 12}(t) \xi(t).
\end{align}
Since $\tau_n \rightarrow T_+$, these same bounds hold on $[T_0,T_+)$. From \eqref{eq:lamud1}, \eqref{eq:b-bound}, \eqref{eq:fin_est} and the fact that ${\bf d}_+(\vec \psi(t)) \rightarrow 0$ as $t \rightarrow T_+$, we conclude that
\begin{align*}
\zeta(t) \rightarrow 0 \mbox{ and } \xi(t) \rightarrow 0 \mbox{ as } t \rightarrow T_+.
\end{align*}
From \eqref{eq:fin_est} we see that $\xi(t)$ is positive on $[T_0, T_+)$ and satisfies $\xi'(t) \leq -1/4$. Since $\xi(t) \rightarrow 0$ as $t \rightarrow T_+$, we conclude that $T_+ < \infty$, which proves finite time blow-up.
We now turn to the asymptotics of the scales. The estimates \eqref{eq:fin_est} and \eqref{eq:mu'} imply that
\begin{align*}
\int_{T_0}^{T_+} |\mu'(t)|\, dt &\lesssim \int_{T_0}^{T_+}
\zeta(t)^{\frac 12}\, dt \lesssim
\int_{T_0}^{T_+} \xi(t)\, dt \\
&\lesssim \int_{T_0}^{T_+} \zeta(t)^{\frac 12} (-\xi'(t))\, dt
\lesssim \int_{T_0}^{T_+} (-\xi'(t))\, dt
\lesssim 1.
\end{align*}
Thus, $\mu(t)$ converges to some $\mu_0 \in [1/2,2]$. For the decay of $\lambda(t)$, we first recall that by \eqref{eq:fin_est} we have $\xi'(t) \lesssim -1$. By Lemma \ref{p:modp}, we see that
\begin{align*}
|\xi'(t)| \lesssim |b'(t)| + \zeta(t)^{-1/2} |\zeta'(t)| \lesssim 1.
\end{align*}
Thus, there exists $C > 0$ such that
\begin{align*}
-C \leq \xi'(t) \leq -\frac{1}{C}, \quad \forall t \in [T_0, T_+),
\end{align*}
which implies
\begin{align}
\frac{1}{C} (T_+ - t) \leq \xi(t) \leq C(T_+ - t), \quad \forall t \in [T_0, T_+).
\end{align}
Since $\xi(t) \sim \zeta(t)^{\frac{1}{2}} \sim [\lambda(t) |\log \lambda(t)|]^{\frac 12}$ on $[T_0, T_+)$, we conclude that
\begin{align*}
\lambda(t) |\log \lambda(t)| \sim (T_+ - t)^2, \quad \mbox{ as } t \rightarrow T_+,
\end{align*}
as desired.
Finally, we show that $\vec \psi$ scatters backward in time. Suppose not.
Then $-\infty < T_- < T_+ < \infty$, and $\int_{T_-}^{T_+} \sqrt{{\bf d}(\vec \psi(t))} \, dt < \infty$ by what we have shown up to this point. The virial identity \eqref{eq:vir}, \eqref{eq:virial-end} and the fact that ${\bf d}(\vec \psi(t)) \rightarrow 0$ as $t \rightarrow T_{\pm}$ imply that
\begin{align*}
\int_{T_-}^{T_+} \| \partial_t \psi(t) \|_{L^2}^2 \, dt
\leq \int_{T_-}^{T_+} |\Omega_R(\vec\psi(t))| \, dt, \quad \forall R > 0.
\end{align*}
For all $t \in (T_-,T_+)$, we have
$|\Omega_R(\vec \psi(t))| \leq C_0 \sqrt{{\bf d}(\vec \psi(t))} \in L^1(T_-,T_+)$ and
$\lim_{R \rightarrow \infty} \Omega_R(\vec \psi(t)) = 0$. Thus, by the dominated convergence theorem,
\begin{align*}
\int_{T_-}^{T_+} \| \partial_t \psi(t) \|_{L^2}^2 \, dt = 0.
\end{align*}
We conclude that $\vec \psi$ is a degree-$0$ harmonic map, i.e., $\vec \psi = (0,0)$. This contradicts $\mathcal E(\vec \psi) = 8\pi$ and finishes the proof.
\qed
\section{Construction of a Minimal Blow-up Solution}
\subsection{Proof of Theorem \ref{t:main2}} Let $T > 0$ be small (to be determined later). We define a function $\ell(t) : [0,T) \rightarrow [0,\infty)$ implicitly by the relation
\begin{align}\label{eq:ell_def}
\ell(t) |\log \ell(t)| = 2t^2, \quad t \in (0,T),
\end{align}
with $\ell(0) = 0$. By elementary calculus it is easy to see that
$\ell \in C^\infty(0,T)$, $\ell$ is increasing on $[0,T)$, and
\begin{align}\label{eq:ell_deriv}
\ell'(t) |\log \ell(t)| = 4t \left [ 1 + O(|\log \ell(t)|^{-1}) \right ].
\end{align}
In particular, this implies that
\begin{align}
\frac{\ell(t)}{\ell'(t)} &= \frac{t}{2} \left [ 1 + O(|\log \ell(t)|^{-1}) \right ], \label{eq:ellrelation1} \\
\frac{\ell(t)}{(\ell'(t))^2|\log \ell(t)|} &= \frac{1}{8} + O(|\log \ell(t)|^{-1}) \label{eq:ellrelation2}.
\end{align}
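As an informal numerical illustration (not part of the argument), one can solve \eqref{eq:ell_def} with a root finder and observe \eqref{eq:ell_deriv}--\eqref{eq:ellrelation2} directly; the sketch below assumes NumPy and SciPy are available and the helper name \texttt{ell} is ours.
\begin{verbatim}
# Informal numerical check (assumed setup, not from the proof): solve
# l |log l| = 2 t^2 for the small root and compare with the stated asymptotics.
import numpy as np
from scipy.optimize import brentq

def ell(t):
    # l |log l| is increasing on (0, 1/e), so the small root is unique there
    return brentq(lambda l: -l * np.log(l) - 2.0 * t**2, 1e-300, np.exp(-1.0))

for t in (1e-2, 1e-3, 1e-4):
    l = ell(t)
    h = 1e-3 * t
    dl = (ell(t + h) - ell(t - h)) / (2 * h)      # numerical l'(t)
    print(t, dl * abs(np.log(l)) / (4 * t),       # should be close to 1
             l / (dl**2 * abs(np.log(l))))        # should be close to 1/8
\end{verbatim}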
Let $t_n$ be a sequence in $(0,T)$ which decreases monotonically to $0$. We define a sequence of initial data at time $t = t_n$ via
\begin{align}
\psi_{0,n} &:= Q_{\ell(t_n)} - Q, \\
\psi_{1,n} &:= -\ell'(t_n) \Lambda Q_{\underline{\ell(t_n)}} \chi_{\sqrt{R_n \ell(t_n)}},
\end{align}
where $\chi$ is now a sharp cutoff, $\chi(r) = 1$ for $0 \leq r \leq 1$ and $\chi(r) = 0$ for $r > 1$, and $R_n > 0$ is chosen so that
\begin{align}
\mathcal E(\psi_{0,n}, \psi_{1,n}) = 2\mathcal E(\vec Q).
\end{align}
We first show that $R_n$ exists and that $R_n + R_n^{-1}$ is bounded.
\begin{lem}\label{l:R_nlem}
For $T > 0$ sufficiently small and for all $n$, there exists $R_n > 0$ such that the pair of initial data $(\psi_{0,n}, \psi_{1,n})$ defined above satisfies
$\mathcal E(\psi_{0,n}, \psi_{1,n}) = 2 \mathcal E(\vec Q)$. Moreover, there exists $R > 0$ such that
\begin{align*}
\frac{1}{R} \leq R_n \leq R.
\end{align*}
\end{lem}
\begin{proof}
We expand the nonlinear energy and obtain (see Section 3 of \cite{JL})
\begin{align*}
2 \mathcal E(\vec Q) &= \mathcal E(\psi_{0,n}, \psi_{1,n}) \\
&= 2 \mathcal E(\vec Q) + \int_0^\infty \psi_{1,n}^2 \, r \, dr
- 4 \int_0^\infty \Lambda Q_{\ell(t_n)} (\Lambda Q)^3 \frac{dr}{r} +
2 \int_0^\infty (\Lambda Q_{\ell(t_n)})^2 (\Lambda Q)^2 \frac{dr}{r},
\end{align*}
so that
\begin{align}\label{eq:psi1_n}
\int_0^\infty \psi_{1,n}^2 \, r \, dr =
4 \int_0^\infty \Lambda Q_{\ell(t_n)} (\Lambda Q)^3 \frac{dr}{r} -
2 \int_0^\infty (\Lambda Q_{\ell(t_n)})^2 (\Lambda Q)^2 \frac{dr}{r}.
\end{align}
By a change of variables, the left side of \eqref{eq:psi1_n} is readily computed to be
\begin{align}
\begin{split}\label{eq:s53}
\int_0^\infty \psi_{1,n}^2 \, r \, dr &= (\ell'(t_n))^2 \int_0^{\sqrt{R_n/\ell(t_n)}} |\Lambda Q|^2 r \, dr \\
&= 2 (\ell'(t_n))^2 \left [
\log \Bigl (1 + \frac{R_n}{\ell(t_n)} \Bigr ) + \frac{1}{1 + R_n/\ell(t_n)}
- 1
\right ].
\end{split}
\end{align}
For the right side of \eqref{eq:psi1_n}, we first consider the expression
\begin{align}
4 \int_0^\infty \Lambda Q_{\sigma} (\Lambda Q)^3 \frac{dr}{r} &=
64 \sigma \int_0^\infty \frac{r^3}{(\sigma^2 + r^2)(1 + r^2)^3} dr \\
&= 64 \sigma \int_0^\sigma \frac{r^3}{(\sigma^2 + r^2)(1 + r^2)^3} dr
+ 64 \sigma \int_\sigma^\infty \frac{r^3}{(\sigma^2 + r^2)(1 + r^2)^3} dr,
\end{align}
where for brevity we have set $\sigma = \ell(t_n)$. Now
\begin{align*}
\int_0^\sigma \frac{r^3}{(\sigma^2 + r^2)(1 + r^2)^3} dr \lesssim \int_0^\sigma r \, dr \lesssim \sigma^2.
\end{align*}
Since $\frac{1}{\sigma^2 + r^2} = \frac{1}{r^2} - \frac{\sigma^2}{(\sigma^2 + r^2)r^2}$, we have
\begin{align*}
\int_\sigma^\infty \frac{r^3}{(\sigma^2 + r^2)(1 + r^2)^3} dr
&= \int_\sigma^\infty \frac{r}{(1 + r^2)^3} dr
- \sigma^2 \int_\sigma^\infty \frac{r}{(1+r^2)^3(\sigma^2 + r^2)} dr \\
&= \frac{1}{4} + O(\sigma^2 |\log \sigma|).
\end{align*}
We conclude that
\begin{align}\label{eq:s51}
4 \int_0^\infty \Lambda Q_{\ell(t_n)} (\Lambda Q)^3 \frac{dr}{r} = 16 \ell(t_n) \Bigl [1 +
O(\ell(t_n)^2 |\log \ell(t_n)|)\Bigr ].
\end{align}
By a similar argument we also obtain
\begin{align}\label{eq:s52}
\int_0^\infty (\Lambda Q_{\ell(t_n)})^2 (\Lambda Q)^2 \frac{dr}{r} \lesssim
\ell(t_n)^2 |\log \ell(t_n)|.
\end{align}
Combining \eqref{eq:psi1_n}, \eqref{eq:s53}, \eqref{eq:s51} and \eqref{eq:s52} we obtain
\begin{align}
\log \Bigl (1 + \frac{R_n}{\ell(t_n)} \Bigr ) + \frac{1}{1 + R_n/\ell(t_n)}
- 1
= \frac{8 \ell(t_n)}{(\ell'(t_n))^2} \Bigl [ 1 + O(\ell(t_n)|\log \ell(t_n)|) \Bigr ].
\end{align}
Thus by \eqref{eq:ellrelation2},
\begin{align}\label{elllast}
\log \Bigl (1 + \frac{R_n}{\ell(t_n)} \Bigr ) + \frac{1}{1 + R_n/\ell(t_n)}
- 1
= |\log \ell(t_n)| \Bigl [ 1 + O(|\log \ell(t_n)|^{-1}) \Bigr ].
\end{align}
The function $f(x) = \log (1 + x) + \frac{1}{1+x} - 1$ is continuous, vanishes at $x = 0$, and tends to $\infty$ as $x \rightarrow \infty$. Thus, by the intermediate value theorem, and as long as $T$ is sufficiently small, there exist $R_n$ satisfying \eqref{elllast} for all $n$. From \eqref{elllast} we see that $R_n / \ell(t_n) \rightarrow \infty$ as $n \rightarrow \infty$. Rearranging the previous expression yields
\begin{align*}
\log R_n = 1 - \log \Bigl (1 + \frac{\ell(t_n)}{R_n} \Bigr ) - \frac{1}{1 + R_n/\ell(t_n)} + O(1).
\end{align*}
Since $R_n / \ell(t_n) \rightarrow \infty$, the right side of the previous expression is bounded. This concludes the proof of the lemma.
\end{proof}
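As a numerical sanity check (illustrative only, and ignoring the $O(1)$ correction in \eqref{elllast}), one can solve the leading-order relation $f(R/\ell) = |\log\ell|$ for $R$ and observe that $R$ indeed remains bounded as $\ell \to 0$; the snippet below assumes NumPy and SciPy.
\begin{verbatim}
# Illustrative check of the lemma: solve log(1 + R/l) + 1/(1 + R/l) - 1 = |log l|
# (without the O(1) term) for R, and observe that R stays O(1) as l -> 0.
import numpy as np
from scipy.optimize import brentq

f = lambda x: np.log1p(x) + 1.0 / (1.0 + x) - 1.0

for l in (1e-2, 1e-4, 1e-6, 1e-8):
    x = brentq(lambda x: f(x) - abs(np.log(l)), 0.0, 10.0 / l)   # x = R/l
    print(l, l * x)   # the values R = l*x remain bounded (close to e)
\end{verbatim}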
Let $\vec \psi_n(t)$ denote the solution to \eqref{eq:wmk} with initial data
$\vec \psi_n(t_n) = (\psi_{0,n}, \psi_{1,n})$. We remark that the previous computations yield
\begin{align} \label{eq:psi_1n_L2}
\| \psi_{1,n} \|_{L^2}^2
= 16 \ell(t_n) \Bigl [
1 + O(\ell(t_n) |\log \ell(t_n)| )
\Bigr ].
\end{align}
Therefore, as long as $T > 0$ is small, for all $t$ in a neighborhood of $t_n$ the modulation parameters $\lambda_n(t)$ and $\mu_n(t)$ are well defined for $\vec \psi_n(t)$ and
\begin{align*}
\lambda_n(t_n) = \ell(t_n), \quad \mu_n(t_n) = 1.
\end{align*}
If we denote $g_n(t) := \psi_n(t) - (Q_{\lambda_n(t)} - Q_{\mu_n(t)})$ and $\dot g_n(t) := \partial_t \psi_n(t)$, then
\begin{align*}
g_n(t_n) = 0, \quad \dot g_n(t_n) = -\ell'(t_n) \Lambda Q_{\underline{\ell(t_n)}} \chi_{\sqrt{R_n \ell(t_n)}}.
\end{align*}
Let $\zeta_n(t)$ and $b_n(t)$ be defined as in \eqref{eq:zetadef}, \eqref{eq:bdef} for each $\vec \psi_n$, i.e.,
\begin{align}
\zeta_n(t) &:= 2 \lambda_n(t) |\log (\lambda_n(t)/\mu_n(t))| - \langle \chi_{M \sqrt{\lambda_n(t) \mu_n(t)}}\Lambda Q_{\underline{\lambda_n(t)}} \mid g_n(t)\rangle, \\
b_n(t)&:= - \langle \chi_{M \sqrt{\lambda_n(t)\mu_n(t)}} \Lambda Q_{\underline{\lambda_n(t)}} \mid \dot g_n(t)\rangle - \langle \dot g_n(t) \mid \mathcal{A}_0( \lambda_n(t) ) g_n(t)\rangle.
\end{align}
\begin{cor}\label{l:zeta'}
As long as $M > 0$ is sufficiently large, we have
\begin{align}
b_n(t_n) = 8 t_n \left [ 1 + O ( |\log \ell(t_n)|^{-1}) \right ].
\end{align}
\end{cor}
\begin{proof}
Let $M^2$ be larger than the constant $R$ given by Lemma \ref{l:R_nlem}. Then, since $g_n(t_n) = 0$, by \eqref{eq:psi_1n_L2} and \eqref{eq:ellrelation1} we have
\begin{align*}
b_n(t_n) &= -\langle \chi_{M \sqrt{\ell(t_n)}} \Lambda Q_{\underline{\ell(t_n)}} \mid \dot g_n(t_n)\rangle \\
&= \frac{1}{\ell'(t_n)} \| \psi_{1,n} \|_{L^2}^2 \\
&= \frac{16 \ell(t_n)}{\ell'(t_n)} \Bigl [ 1 + O(\ell(t_n) |\log \ell(t_n)|) \Bigr ] \\
&= 2 \ell'(t_n) |\log \ell(t_n)| \Bigl [ 1 + O( |\log \ell(t_n)|^{-1}) \Bigr ] \\
&= 8 t_n \Bigl [ 1 + O( |\log \ell(t_n)|^{-1}) \Bigr ].
\end{align*}
\end{proof}
Let $L = L_0 > 0$, $M = M_0 > 0$ and $\eta_1 > 0$ be chosen so that the conclusions of Proposition \ref{p:modp2} hold with $\delta = \frac{1}{2018}$ and so that the conclusion of Corollary \ref{l:zeta'} holds.
Let
\begin{align*}
T_n' := \sup \Bigl \{ t \in [t_n,T] \mid \vec \psi_n(s) \mbox{ exists, } {\bf d}_+(\vec \psi_n(s)) < \eta_1, \mbox{ and } \mu_n(s) \in (1/2, 2) \quad \forall s\in[t_n,t] \Bigr \}.
\end{align*}
We will show that $T_n' = T$ as long as $T$ is sufficiently small.
Let $t \in [t_n,T'_n]$. By \eqref{eq:kala'}, \eqref{eq:b-bound} and our assumption on $\mu_n(t)$,
\begin{align*}
\zeta_n(t) &= \zeta_n(t_n) + \int_{t_n}^t \zeta_n'(s) ds \\
&\leq \zeta_n(t_n) + \int_{t_n}^t [|b_n(s)| + \zeta_n(s)^{1/2}] ds \\
&\leq \zeta_n(t_n) + 6 \int_{t_n}^t \zeta_n(s)^{1/2} ds.
\end{align*}
Thus,
\begin{align*}
\zeta_n(t) \leq 2 \zeta_n(t_n) + 36 (t - t_n)^2.
\end{align*}
Since $\zeta_n(t_n) = 2 \ell(t_n) |\log \ell(t_n)| = 4 t_n^2$, we conclude that
\begin{align}\label{eq:zetabound}
\zeta_n(t) \leq 148 t^2.
\end{align}
Then by \eqref{eq:bound-on-l},
\begin{align}\label{eq:lambound}
\lambda_n(t) |\log \lambda_n(t)| \leq 75 t^2.
\end{align}
We now consider $\mu_n(t)$. By the fundamental theorem of calculus, \eqref{eq:mu'}, \eqref{eq:bound-on-l} and \eqref{eq:zetabound}, there exists an absolute constant $\beta > 0$ such that
\begin{align}\label{eq:mubound}
|\mu_n(t) - 1| \leq \beta t^2.
\end{align}
By \eqref{eq:gH}, \eqref{eq:lambound} and our assumption on $\mu_n$, there exists a constant $\alpha > 0$ such that
\begin{align}\label{s56}
\| \vec \psi_n(t) - (\vec Q_{\lambda_n(t)} - \vec Q_{\mu_n(t)}) \|^2_{\mathcal{H}_0} \leq \alpha t^2.
\end{align}
In summary, we have shown that
\begin{align}
\lambda_n(t) |\log \lambda_n(t)| &\leq 75 t^2, \\
|\mu_n(t) - 1 | &\leq \beta t^2, \\
{\bf d}_+(\vec \psi_n(t)) &\leq (\alpha + 150) t^2.
\end{align}
By a continuity argument, it follows that $T_n' = T$ provided that $\vec \psi_n(t)$ is defined on $[t_n,T]$. We now prove this fact.
Let $t \in [t_n,T'_n]$. By Corollary \ref{l:zeta'} and \eqref{eq:b'lb} we have
\begin{align}\label{eq:b_nlower}
b_n(t) \geq \frac{1}{2} \Bigl (8 - \frac{1}{2018} \Bigr ) (t - t_n) + 8 t_n
\Bigl[1 + O(|\log \ell(t_n)|^{-1}) \Bigr ]
\geq 3 (t - t_n) + 5 t_n \geq 3 t.
\end{align}
By \eqref{eq:kala'}, \eqref{eq:b_nlower} and \eqref{eq:zetabound} we have
\begin{align*}
\zeta_n'(t) \geq b_n(t) - \frac{2}{2018} \zeta_n^{1/2}(t) \geq 3 t - \frac{2 \sqrt{148}}{2018} t \geq 2 t.
\end{align*}
By the fundamental theorem of calculus we conclude that
\begin{align}
\zeta_n(t) \geq \zeta_n(t_n) + t^2 - t_n^2
= 4 t_n^2 + t^2 - t_n^2 \geq t^2.
\end{align}
By \eqref{eq:bound-on-l}, the previous bound implies that
\begin{align}\label{lamlower}
\lambda_n(t) |\log \lambda_n(t)| \geq \frac{1}{3} t^2.
\end{align}
The estimates \eqref{lamlower}, \eqref{eq:lambound} and \eqref{s56} imply
\begin{align}\label{intermediate}
\inf_{\substack{\mu \in [1/2,2] \\
\lambda |\log \lambda| \in [t^2/3,75t^2]}} \| \vec \psi_n(t) - (\vec Q_\lambda - \vec Q_\mu) \|_{\mathcal{H}_0}^2 \leq\alpha t^2
\end{align}
on $[t_n,T_n']$. By Corollary A.4 of \cite{JJ-AJM} we conclude that the interval of existence of $\vec \psi_n$ strictly includes $[t_n,T_n']$ as long as $T$ is small. Thus, we have proved that $T_n' = T$.
The bound \eqref{intermediate} also implies that we may pass to a weak limit and obtain our desired blow-up solution. Indeed, for any $T_0 < T$,
\begin{align}
\inf_{\substack{\mu \in [1/2,2] \\
\lambda |\log \lambda| \in [T_0^2/3,75T^2]}} \| \vec \psi_n(t) - (\vec Q_\lambda - \vec Q_\mu) \|_{\mathcal{H}_0}^2 \leq\alpha T^2, \quad \forall t \in [T_0,T], \ \forall n.
\end{align}
By Corollary A.6 of \cite{JJ-AJM} we conclude that, after shrinking $T$ and extracting subsequences if necessary, there exists a solution $\vec \psi_c(t)$ defined on $(0,T]$ such that $\vec \psi_n(t) \rightharpoonup \vec \psi_c(t)$ as $n \to \infty$ for all $t \in (0,T]$. By weak convergence and \eqref{intermediate},
\begin{align*}
\inf_{\substack{\mu \in [1/2,2] \\
\lambda |\log \lambda| \in [t^2/3,75t^2]}} \| \vec \psi_c(t) - (\vec Q_\lambda - \vec Q_\mu) \|_{\mathcal{H}_0}^2 \leq \alpha t^2.
\end{align*}
Thus, $\vec \psi_c$ is the desired solution with blow-up time $T_- = 0$.
\qed
\centerline{\scshape Casey Rodriguez}
\smallskip
{\footnotesize
\centerline{Department of Mathematics, Massachusetts Institute of Technology}
\centerline{77 Massachusetts Ave, 2-246B, Cambridge, MA 02139, U.S.A.}
\centerline{\email{[email protected]}}
}
\end{document}
\begin{document}
\begin{abstract}
The notion of textile system was introduced by M. Nasu in order to analyze endomorphisms and automorphisms of
topological Markov shifts. A textile system is given by two finite directed graphs $G$ and $H$ and two morphisms $p,q:G\to H$, with some extra properties. It turns out that a textile system determines
a first quadrant two-dimensional shift of finite type, via a collection of Wang tiles, and conversely, any such shift is conjugate to a textile shift. In the case the morphisms $p$ and $q$ have the path lifting property, we prove that they induce groupoid morphisms $\pi, \rho:\Gamma(G)\to \Gamma(H)$ between the corresponding \'etale groupoids of $G$ and $H$.
We define two families ${\mathcal A}(m,n)$ and $\bar{\mathcal A}(m,n)$ of $C^*$-algebras associated to a textile shift, and compute them in specific cases. These are graph algebras, associated to some one-dimensional shifts of finite type constructed from the textile shift. Under extra hypotheses, we also define two families of Fell bundles which encode the complexity of these two-dimensional shifts.
We consider several classes of examples of textile shifts, including the full shift, the Golden Mean shift and shifts associated to rank two graphs.
\end{abstract}
\maketitle
\section{Introduction}
In dynamics, the time evolution of a physical system is often modeled by the iterates of a single transformation. However, the multiple symmetries of some systems lead to the study of the joint action of several commuting transformations, where new and deep phenomena occur.
The classical shift of finite type from symbolic dynamics was studied with powerful tools from linear algebra and matrix theory. The number of period $n$ points, the zeta function and the entropy can all be simply expressed in terms of the $k\times k$ transition matrix $A$. The Bowen-Franks group $BF(A)={\mathbb Z}^k/(I-A){\mathbb Z}^k$ is invariant under flow equivalence, and it was recovered in the K-theory of the Cuntz-Krieger algebra ${\mathcal O}_A$ generated by partial isometries $s_1,s_2,...,s_k$ such that
\[1=\sum_{i=1}^k s_is_i^*, \;\; s_j^*s_j=\sum_{i=1}^kA(j,i)s_is_i^*\;\;\text{for}\;\; 1\le j\le k.\]
The algebra ${\mathcal O}_A$ is simple and purely infinite if and only if $A$ is transitive (for every $i,j$ there exists $m$ such that $A^m(i,j)\neq 0$), and $A$ is not a permutation matrix. These $C^*$-algebras can also be understood as graph algebras, which were studied and generalized by several authors, see \cite{R}.
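As an informal illustration (not from the original references), the Bowen-Franks group can be computed from the Smith normal form of $I-A$; the Python sketch below, which assumes a recent SymPy, does this for the Golden Mean matrix $A=\left[\begin{smallmatrix}1&1\\1&0\end{smallmatrix}\right]$, for which $\det(I-A)=-1$ and $BF(A)$ is trivial.
\begin{verbatim}
# Illustrative sketch: compute BF(A) = Z^k/(I-A)Z^k via the Smith normal form
# of I - A. Assumes SymPy is installed; the example is the Golden Mean matrix.
from sympy import Matrix, eye, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[1, 1], [1, 0]])
D = smith_normal_form(eye(2) - A, domain=ZZ)
print(D)   # invariant factors d_i on the diagonal: BF(A) = Z/d_1 x ... x Z/d_k;
           # here both are 1, so BF(A) is the trivial group
\end{verbatim}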
The higher dimensional analogue of a shift of finite type consists of all $d$-dimensional arrays of symbols from a finite alphabet subject to a finite number of local rules. Such arrays can be shifted in each of the $d$ coordinate directions, giving $d$ commuting transformations. There are also $d$ transition matrices, which in general do not commute. There are deep distinctions between the cases $d=1$ and $d\ge 2$: for example, it is easy to describe the space of such arrays in the first case, but there is no general algorithm which will decide, given the set of local rules, whether or not the space of such arrays is empty in the second case.
Although the general theory of multi-dimensional shifts of finite type is still in a rudimentary stage, there are particular classes where significant progress was made, and where graphs and matrices play a useful role. These include the class of algebraic subshifts, see \cite{S1, S2}, and the class of two-dimensional shifts associated to {\em textile systems}, or to {\em Wang tilings}. For these classes, some of the conjugacy invariants, like the entropy (the growth rate of the number of patterns one can see in a square of side $n$), the number of periodic points and the zeta functions, were computed.
In the literature, there are some papers relating higher dimensional shifts of finite type and $C^*$-algebras. For example, the particular case of shifts associated to rank $d$ graphs was studied by A. Kumjian, D. Pask and others.
In this case, the translations in the coordinate directions are local homeomorphisms, and there is a canonical \'etale groupoid and a $C^*$-algebra associated to such a graph, which is Morita equivalent to a crossed product of an AF-algebra by the group ${\mathbb Z}^d$. Under some mild conditions, the groupoid is essentially free and the $C^*$-algebra is simple and purely infinite. For more details, see \cite{KP}.
Also, in \cite{PRW1} and \cite {PRW2}, the authors analyze the $C^*$-algebra of rank two graphs whose infinite path spaces are Markov subgroups of $({\mathbb Z}/n{\mathbb Z})^{{\mathbb N}^2}$, like the Ledrappier example, see also \cite{KS} and \cite{LS}. In all these examples, the entropy is zero. The connections between higher dimensional subshifts of finite type and operator algebras remains to be explored further, and we think that this is a fascinating subject.
In this paper, in an attempt to apply results from operator algebra to arbitrary two-dimensional shifts of finite type supported in the first quadrant, we construct two families of $C^*$-algebras, defined using some one-dimensional shifts associated to a textile shift as in \cite{MP2}. The K-theory groups of these algebras provide invariants of the two-dimensional shift. We also construct groupoid morphisms and families of Fell bundles associated to some particular textile systems. We consider several examples of textile shifts, related to rank two graphs, to the full shift, to the Golden Mean transition matrices and to cellular automata.
{\bf Acknowledgements}. The author wants to express his gratitude to Alex Kumjian, David Pask and Aidan Sims for helpful discussions.
\section{Textile systems and two-dimensional shifts of finite type}
Throughout this paper, we consider finite directed graphs $G=(G^1,G^0)$, where $G^1$ is the set of edges, $G^0$ is the set of vertices, and $s,r:G^1\to G^0$ are the source and range maps, which are assumed to be onto.
\definition A textile system (see \cite{N}) is a quadruple $T=(G,H,p,q)$, where $G=(G^1,G^0)$, $H=(H^1,H^0)$ are two finite directed graphs, and $p,q:G\rightarrow H$ are two surjective graph morphisms such that $(p(e),q(e),r(e),s(e))\in H^1\times H^1 \times G^0\times G^0$ uniquely determines $e\in G^1$. We have the following commutative diagram:
\[\begin{array}{ccccc}H^0&\stackrel{p}{\leftarrow}&G^0 &\stackrel{q}{\rightarrow} &H^0\\\uparrow\! r &{}&\uparrow\! r&{}&
\uparrow\! r\\H^1&\stackrel{p}{\leftarrow}&G^1&\stackrel{q}{\rightarrow}&H^1\\
\downarrow\! s&{}&\downarrow\! s&{}&\downarrow\! s\\
H^0&\stackrel{p}{\leftarrow} &G^0 &\stackrel{q}{\rightarrow} &H^0\end{array}\]
The dual textile system $\bar{T}=(\bar{G}, \bar{H}, s,r)$ is obtained by interchanging the pairs of maps $(p,q)$ and $(s,r)$. The new graphs $\bar{G}=(G^1,H^1)$ and $\bar{H}=(G^0,H^0)$ have source and range maps given by $p$ and $q$, and $s,r$ are now graph morphisms. Note that, even if the initial graphs $G$ and $H$ have no sinks, the new graphs $\bar{G}$ and $\bar{H}$ may have sinks (vertices $v$ such that $s^{-1}(v)=\emptyset$).
A first quadrant textile weaved by a textile system $T$ is a two-dimensional array $(e(i,j))\in (G^1)^{{\mathbb N}^2}$, such that $r(e(i,j-1))=s(e(i,j))$ and such that $q(e(i-1,j))=p(e(i,j))$ for all $i,j\in {\mathbb N}$. It is clear that $(e(i,j)) _{j\in {\mathbb N}}\in G^{\infty}$ (the infinite path space of $G$) for all $i\in {\mathbb N}$. In some cases, the set of such arrays may be empty (see Example 3.1 in \cite{A}).
\remark A textile system associates to each edge $e\in G^1$ a square called
{\em Wang tile} with bottom edge $s(e)$, top edge $r(e)$, left edge $p(e)$,
and right edge $q(e)$:
\[\begin{array}{ccc}{}&r(e)&{}\end{array}\]\[\begin{array}{ccc}p(e){}&\begin{tabular}{|c|}\hline e\\ \hline\end{tabular} &{}q(e).\end{array}\]\[
\begin{array}{ccc}{}&s(e)&{}\end{array}\]
If we let $X=X(T)$ to be the set of all textiles
weaved by $T$, then $X$ is a closed, shift invariant subset of
$(G^1)^{{\mathbb N}^2}$, and we obtain a two-dimensional
shift of finite type, defined below.
Alternatively, if we use Wang tiles, we get a tiling of the first quadrant.
We will describe in Proposition \ref{stot} the connection between two-dimensional shifts of finite type and textile systems.
\definition Let $S$ be a finite alphabet of cardinality $|S|$. The full $d$-dimensional shift with alphabet $S$ is the dynamical system $(S^{{\mathbb N}^d}, \sigma)$, where \[\sigma^m (x)(n)=x(n+m), \; x\in S^{{\mathbb N}^d}, \; n,m\in {\mathbb N}^d.\]
A subset $X\subset S^{{\mathbb N}^d}$ which is closed in the product topology and which is $\sigma$-invariant
is called a {\em $d$-dimensional shift of finite type} or a {\em Markov shift} if
there exists a finite set (window) $F\subset {\mathbb N}^d$ and a set of {\em admissible patterns}
$P\subset S^F$
such that \[X=X[P]=\{x\in S^{{\mathbb N}^d} \; \mid \;(\sigma^mx)\mid_F\in P\;
\mbox{for every
}\; m\in {\mathbb N}^d\}.\]
Often one takes $F=\{(0,0,...,0),(1,0,...,0),(0,1,...,0),...,(0,0,...,1)\}$. A shift of finite type has $d$ transition matrices of dimension $|S|$ with entries in $\{0,1\}$, which in general do not commute.
\definition\label{conj} Let $S_1$ and $S_2$ be alphabets, let $F\subset{\mathbb N}^d$ be a finite subset, and let $\Phi:S_1^F\to S_2$ be a map. A sliding block code defined by $\Phi$ is the map \[\phi:S_1^{{\mathbb N}^d}\to S_2^{{\mathbb N}^d}, \phi(x)_n=\Phi(x\mid_{F+n}), n\in {\mathbb N}^d.\]
For $d=1$ we recover the notion of cellular automaton. Two shifts of finite type $X[P_1], X[P_2]$ are {\em conjugate} if there is a bijective sliding block code $\phi:X[P_1]\to X[P_2]$. In this case, the dynamical systems $(X[P_1], \sigma)$ and $(X[P_2], \sigma)$ are topologically conjugate (see \cite{LS}).
For $d=2$, any Markov shift can be specified by two transition matrices. Such shifts are investigated by N.G. Markley and M.E. Paul in \cite{MP1}. Two $k\times k$ transition matrices $A$ and $B$ with no identically zero rows or columns are called {\em coherent} if
\[(AB)(i,j)>0\;\;\text{iff}\;\; (BA)(i,j)>0\;\;\text{and} \;\;(AB^t)(i,j)>0\;\;\text{iff}\;\; (B^tA)(i,j)>0,\] where $B^t$ is the transpose. If $A$ and $B$ are coherent, it is proved in \cite{MP1} that
\[X(A,B)=\{x\in S^{{\mathbb N}^2}: A(x(i,j),x(i+1,j))=1\;\; \text{and}\;\; B(x(i,j),x(i,j+1))=1\;\; \text {for all}\;\; (i,j)\in {\mathbb N}^2\}\]
becomes a two-dimensional shift of finite type, where $S=\{0,1,2,...,k-1\}$.
For more about multi-dimensional shifts of finite type, we refer to \cite{S1, S2} and \cite{L, LS}.
We illustrate now with some examples of textile systems and their associated two-dimensional shifts.
\example\label{ex1}
Let $G^1=\{a,b\}, G^0=\{u,v\}$ with $s(a)=u=r(b), s(b)=v=r(a),$ and let $H^1=\{x\}, H^0=\{w\}$ with $p(a)=p(b)=x=q(a)=q(b).$
The corresponding two-dimensional shift has alphabet $S=\{a,b\}$ and transition matrices
\[A=\left[\begin{array}{cc}1&1\\1&1\end{array}\right],\; B=\left[\begin{array}{cc}0&1\\1&0\end{array}\right].\]
We will see later that this shift is a particular case of a cellular automaton, obtained from the automorphism of the Bernoulli shift $(\{a,b\}^{\mathbb N}, \sigma)$ which interchanges $a$ and $b$. It also corresponds to a rank two graph, because the transition matrices commute and the unique factorization property is satisfied (see \cite{KP}, Section 6).
\example \label{ex2}Let $G^1=\{a,b,c\}, G^0=\{u\}, H^1=\{e,f\}, H^0=\{v\}$ with $p(a)=p(b)=e, p(c)=f, q(a)=f, q(b)=q(c)=e$.
Then the corresponding two-dimensional shift of finite type has alphabet $\{a,b,c\}$ and transition matrices
\[A=\left[\begin{array}{ccc}0&0&1\\1&1&0\\1&1&0\end{array}\right],\;\; B=\left[\begin{array}{ccc}1&1&1\\1&1&1\\1&1&1\end{array}\right].\]
Note that $A$ and $B$ are coherent in the sense of Markley and Paul, but do not commute, so this shift is not associated to a rank two graph.
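As a quick sanity check (an illustrative sketch, not part of the paper), one can verify the Markley-Paul coherence condition and the non-commutativity claimed above with a few lines of Python, assuming NumPy is available:
\begin{verbatim}
# Check the Markley-Paul coherence condition for the 0-1 matrices of this example,
# and confirm that A and B do not commute. Illustrative sketch only.
import numpy as np

def coherent(A, B):
    same_support = lambda M, N: np.array_equal(M > 0, N > 0)
    return same_support(A @ B, B @ A) and same_support(A @ B.T, B.T @ A)

A = np.array([[0, 0, 1], [1, 1, 0], [1, 1, 0]])
B = np.ones((3, 3), dtype=int)
print(coherent(A, B))                   # True: A and B are coherent
print(np.array_equal(A @ B, B @ A))     # False: A and B do not commute
\end{verbatim}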
\example \label{ex3} Let $G^1=\{a,b,c\}, G^0=\{u,v\}, s(a)=s(b)=r(c)=r(b)=u, r(a)=s(c)=v, H^1=\{e\}, H^0=\{w\}, p(a)=p(b)=p(c)=q(a)=q(b)=q(c)=e$.
This textile system is isomorphic to the dual of the previous one.
The corresponding two-dimensional shift of finite type has the same alphabet, but the transition matrices are interchanged.
\section{Textile systems associated to a two-dimensional shift of finite type}
From a two-dimensional shift of finite type $X$ we will construct a double sequence of textile systems $T(m,n)$, considering higher block presentations of $X$, such that $X$ and the shift determined by $T(m,n)$ are conjugate. Recall the following result.
\begin{proposition}\label{stot} (see \cite{JM}) Let $(X,\sigma)$ be a two-dimensional shift of finite type with alphabet $S$. Then, moving to a higher block presentation of $X$ if necessary, there exists a textile system $T$ such that $X$ is determined by $T$.
\end{proposition}
\begin{proof} Consider ${\mathcal B}={\mathcal B}(2,2)$ the set of $2\times 2$ admissible blocks $\displaystyle \beta=\begin{array}{cc}a&b\\c&d\end{array}$ in $X$, and construct a graph $G$ with $G^0$ labeled by the rows of the blocks in ${\mathcal B}$, $G^1={\mathcal B},\; s(\beta)= c\;\; d$, $r(\beta)=a\;\; b$, and a graph $H$ with $H^0=S$ and $H^1$ labeled by the columns of the blocks in ${\mathcal B}$. Define graph morphisms $p,q:G\rightarrow H$ by $\displaystyle p(\beta)=\begin{array}{c}a\\c\end{array}$, $\displaystyle q(\beta)=\begin{array}{c}b\\d\end{array}$. It is clear that $T=(G,H,p,q)$ is a textile system such that $X$ is the set of textiles weaved by $T$.
\end{proof}
\begin{corollary}\label{ds} For $m,n \ge 1$, let ${\mathcal B}(m,n)$ denote the set of $m\times n$ admissible blocks in $X$, and for $n\ge 2$ define a graph $G(m,n)$ with $G^0(m,n)={\mathcal B}(m,n-1)$ and $G^1(m,n)={\mathcal B}(m,n)$. For $\beta\in G^1(m,n)$, let $s(\beta)$ be the lower $m\times (n-1)$ block of $\beta$ and let $r(\beta)$ be the upper $m\times (n-1)$ block of $\beta$. For $m\ge 2$ there are graph morphisms $p,q:G(m,n)\to G(m-1,n)$ defined by letting $p(\beta)$ be the left $(m-1)\times n$ block of $\beta$ and $q(\beta)$ the right $(m-1)\times n$ block of $\beta$, where $\beta\in G^1(m,n)$. It follows that $T(m,n):=(G(m,n), G(m-1,n), p,q)$ for $m,n\ge 2$ are textile systems, and $X$ is determined by $T(m,n)$. The shift $X$ is also determined by the dual textile system $\bar{T}(m,n):=(\bar{G}(m,n), \bar{G}(m, n-1), s, r)$, where $\bar{G}^1(m,n)={\mathcal B}(m,n)$, $\bar{G}^0(m,n)={\mathcal B}(m-1,n)$ and the source and range maps are given by $p$ and $q$ as above.
\end{corollary}
We illustrate with some two-dimensional shifts of finite type and their associated textile systems. In each case, the
morphisms $p,q$ are defined as in Proposition \ref{stot}.
\example\label{fs}
(The full shift). Let $S=\{0,1\}$ and let $X= S^{{\mathbb N}^2}$. In the corresponding textile system $T=T(2,2)$, the graph $G=G(2,2)$ is the complete graph with $4$ vertices. Indeed,
$\displaystyle G^1=\left\{\begin{array}{cc}a&b\\c&d\end{array}\mid a, b, c, d\in S\right\}$
and
$G^0=\{0\; 0, 0\; 1, 1\; 0, 1\; 1\}.$
The graph $H=G(1,2)$ is the complete graph with $2$ vertices. Indeed,
$\displaystyle H^1=\left\{\begin{array}{c}0\\0\end{array}, \begin{array}{c}1\\0\end{array}, \begin{array}{c}0\\1\end{array}, \begin{array}{c}1\\1\end{array}\right\}$
and $H^0=\{0,1\}$.
\example\label{L} (Ledrappier). Let $S={\mathbb Z}/2{\mathbb Z}$, and let $X\subset S^{{\mathbb N}^2}$ be the subgroup defined by $x\in X$ iff \[x(i+1,j)+x(i,j)+x(i,j+1)=0\;\;\text{for all}\;\; (i,j)\in{\mathbb N}^2.\]
We have $G^0(2,2)=H^1=S\times S$, and $G^1(2,2)$ has 8 elements, corresponding
to the $2\times 2$ matrices $(a(i,j))$ with entries in $S$ such that $a(1,1)+a(2,1)+a(2,2)=0$. The Ledrappier shift is associated to a rank two graph, and if we consider the new alphabet
\[\begin{array}{cc}0&{}\\0&0\end{array},\;\; \begin{array}{cc}1&{}\\0&1\end{array},\;\; \begin{array}{cc}1&{}\\1&0\end{array}, \;\; \begin{array}{cc}0&{}\\1&1\end{array},\]
then the transition matrices are
\[A=\left[\begin{array}{cccc}1&1&0&0\\0&0&1&1\\1&1&0&0\\0&0&1&1\end{array}\right],\;\; B=\left[\begin{array}{cccc}1&1&0&0\\0&0&1&1\\0&0&1&1\\1&1&0&0\end{array}\right],\]
see \cite{PRW1}.
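The rank-two-graph claim uses the fact that these two matrices commute; the short illustrative check below (assuming NumPy) confirms this for the matrices displayed above.
\begin{verbatim}
# Illustrative check that the Ledrappier transition matrices commute.
import numpy as np
A = np.array([[1,1,0,0],[0,0,1,1],[1,1,0,0],[0,0,1,1]])
B = np.array([[1,1,0,0],[0,0,1,1],[0,0,1,1],[1,1,0,0]])
print(np.array_equal(A @ B, B @ A))   # True
\end{verbatim}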
\example\label{gm} (Golden Mean). Let $S=\{0,1\}$, with transition matrices
\[A=B= \left[\begin{array}{cc}1&1\\1&0\end{array}\right].\]
Then in the corresponding textile system $T=T(2,2)$, the graphs $G=G(2,2)$ and $H=G(1,2)$ have
\[G^0
=\{0\; 0,0\; 1,1\; 0\},\]
\[G^1=\left\{\begin{array}{cc}0&0\\0&0\end{array},\begin{array}{cc}0&1\\0&0\end{array}, \begin{array}{cc}0&0\\0&1\end{array}, \begin{array}{cc}1&0\\0&0\end{array}, \begin{array}{cc}1&0\\0&1\end{array}, \begin{array}{cc}0&0\\1&0\end{array}, \begin{array}{cc}0&1\\1&0\end{array}\right\},\] \[ H^0=\{0,1\},\;\;H^1=\left\{\begin{array}{c}0\\0\end{array},\begin{array}{c}1\\0\end{array},\begin{array}{c}0\\1\end{array}\right\}.\]
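To make the construction of $G(2,2)$ concrete, the following hypothetical Python sketch enumerates the admissible $2\times 2$ blocks for the Golden Mean shift and recovers the seven edges and three vertices listed above.
\begin{verbatim}
# Enumerate the admissible 2x2 blocks (edges of G(2,2)) and admissible rows
# (vertices of G(2,2)) for the Golden Mean shift. Illustrative sketch only.
from itertools import product

A = [[1, 1], [1, 0]]                   # horizontal = vertical transition matrix
ok = lambda x, y: A[x][y] == 1

blocks = [(a, b, c, d)                                  # block:  a b
          for a, b, c, d in product((0, 1), repeat=4)   #         c d
          if ok(a, b) and ok(c, d)                      # rows admissible
          and ok(c, a) and ok(d, b)]                    # columns admissible
rows = {(a, b) for (a, b, c, d) in blocks} | {(c, d) for (a, b, c, d) in blocks}
print(len(blocks), sorted(rows))       # 7 blocks; rows 00, 01, 10
\end{verbatim}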
\example\label{ca} (Cellular automata). Let $k\geq 1$ and let $Y\subset \{0,1,...,k-1\}^{\mathbb N}$ be a subshift of finite type. It is known that a
continuous, shift-commuting onto map $\varphi :Y\rightarrow Y$ is given by a sliding block code. Given such a $\varphi$, define a closed,
shift invariant subset \[X=\{(y_m)\in Y^{\mathbb N}\;\mid \; y_{m+1}=\varphi(y_m)\;\mbox{for all}\;
m\in {\mathbb N}\}\subset \{0,1,...,k-1\}^{{\mathbb N}^2}.\]In a natural way,
$X$ becomes a two-dimensional Markov shift. In the corresponding textile system $T=T(2,2)$, we have $G^0\subset \{0,1,...,k-1\}\times \{0,1,...,k-1\}$, $G^1=$ the set of admissible $2\times 2$ blocks $\begin{array}{cc}a&b\\c&d\end{array}$ with $a,b,c,d\in \{0,1,...,k-1\}$, $H^0=\{0,1,...,k-1\}$, and $H^1=$ the set of admissible columns $\begin{array}{c}a\\c\end{array}$.
For $k=2$, $Y=\{0,1\}^{\mathbb N}$ and $\varphi$ defined by interchanging the letters $0$ and $1$, we recover the textile system from example \ref{ex1}.
Recall that many rank two graphs can be obtained from two finite graphs $G_1$ and $G_2$ with the same set of vertices such that the associated vertex matrices commute, and a fixed bijection $\theta: G_1^1*G_2^1\to G_2^1*G_1^1$ such that if $\theta(\alpha,\beta)=(\beta',\alpha')$, then $r(\alpha)=r(\beta')$ and $s(\beta)=s(\alpha')$. Here
\[G_1^1*G_2^1:=\{(\alpha,\beta)\in G_1^1\times G_2^1\mid\; s(\alpha)=r(\beta)\},\]
and $s, r$ are the source and range maps.
This rank two graph is denoted by $G_1*_{\theta}G_2$. The infinite path space is a first quadrant grid with horizontal edges from $G_1$ and vertical edges from $G_2$. Each $1\times 1$ square is uniquely determined by one horizontal edge followed by one vertical edge.
\begin{proposition}\label{gtot} Any rank two graph of the form $G_1*_{\theta}G_2$ determines a textile system.
\end{proposition}
\begin{proof}
Indeed, let $H_i=G_i^{op}$, the graph $G_i$ with the source and range maps interchanged for $i=1,2$. The map $\theta$ induces a unique bijection $H_1^1*H_2^1\to H_2^1*H_1^1$, where
\[H_1^1*H_2^1=\{(\alpha,\beta)\in H_1^1\times H_2^1\mid\; r(\alpha)=s(\beta)\}.\]
We let $G$ be the graph with $G^1=H_1^1*H_2^1$, identified with $H_2^1*H_1^1$ via the map $\theta$, and $G^0=H_1^1$, and we let $H=H_2$. Define $s(\alpha,\beta)=\alpha,\;\; r(\alpha,\beta)=\alpha',\;\; p(\alpha,\beta)=\beta'$, and $q(\alpha,\beta)=\beta$, where $\alpha', \beta'$ are uniquely determined by the bijection $\theta(\alpha,\beta)=(\beta',\alpha')$.
\end{proof}
\remark For a cellular automaton with $\varphi$ as in \ref{ca} defined by an automorphism of a rank one graph $G$, in \cite{FPS} the authors associated a rank two graph whose $C^*$-algebra is a crossed product $C^*(G)\rtimes{\mathbb Z}$, and they computed its K-theory.
\section{$C^*$-algebras associated to a two-dimensional shift of finite type}
Recall that in Corollary \ref{ds} we constructed a family $T(m,n)=(G(m,n), G(m-1,n),p,q)$ of textile systems from a two-dimensional shift of finite type. This defines a family of graph $C^*$-algebras ${\mathcal A}(m,n):=C^*(G(m,n))$ for $m,n\ge 2$. The dual textile system $\bar{T}(m,n)=(\bar{G}(m,n), \bar{G}(m, n-1), s, r)$ determines another family $\bar{\mathcal A}(m,n):=C^*(\bar{G}(m,n))$, where $\bar{G}(m,n)$ is the graph with source and range maps given by $p$ and $q$, described in Corollary \ref{ds}.
\remark We have ${\mathcal A}(m,n)\cong {\mathcal A}(m,2)$ for all $n\ge 2$ and $\bar{\mathcal A}(m,n)\cong \bar{\mathcal A}(2,n)$ for all $m\ge 2$. Indeed, the graph $G(m,n)$ is a higher block presentation of $G(m,2)$ and the graph $\bar{G}(m,n)$ is a higher block presentation of $\bar{G}(2,n)$ (see \cite{B}).
For matrix subshifts, we can be more specific. Consider two coherent $k\times k$ transition matrices $A,B$ indexed by $\{0,1,...,k-1\}$ as in \cite{MP2}, and let $X(A,B)$ be the associated matrix shift.
\begin{theorem}\label{t1} For a matrix shift $X(A,B)$ we have $\bar{\mathcal A}(2,n)\cong{\mathcal O}_{A_n}$ and ${\mathcal A}(n,2)\cong{\mathcal O}_{B_n}$ for $n\ge 2$. The transition matrices $A_n$ and $B_n$ can be constructed inductively as in \cite{MP2}, and they define two sequences $(Y(A_n))_{n\ge 1}$ and $(Y(B_n))_{n\ge 1}$ of one-dimensional shifts of finite type associated to $X(A,B)$.
\end{theorem}
\begin{proof}
Consider the strip \[K_n= \{(i,j)\in {\mathbb N}^2: 0\le j\le n-1\}\] and the alphabet \[Q_n={\mathcal B}(1,n)=\{\alpha:\alpha\; \text{is a}\; 1\times n\; \text{block occurring in}\; X(A,B)\},\] ordered lexicographically starting at the top. Define $Y(A_n)=\{x\mid_{K_n}: x\in X(A,B)\}$ to be the Markov shift with alphabet $Q_n$ and transition matrix $A_n$, obtained by restricting elements of $X(A,B)$ to the strip $K_n$. The shift $Y(B_n)$ is defined similarly, considering strips \[L_n=\{(i,j)\in {\mathbb N}^2: 0\le i\le n-1\}\] and alphabets \[R_n={\mathcal B}(n,1)=\{\beta:\beta \; \text{is an}\; n\times 1\; \text{block occurring in}\; X(A,B)\}.\]
Clearly, $A_1=A$ and $B_1=B$. For $n\ge 2$, $A_n$ is a $k_n\times k_n$ matrix, where $k_n$ is the sum of all entries in $B^{n-1}$. Suppose $\alpha$ and $\alpha'$ are $1\times n$ blocks in $X(A,B)$ and $j,j'\in \{0,1,2,...,k-1\}$. Then
\[A_{n+1}\left(\begin{array}{c} j\\\alpha\end{array}, \begin{array}{c}j'\\\alpha'\end{array}\right)=1\]
if and only if the $1\times (n+1)$ blocks $\begin{array}{c} j\\\alpha\end{array}$ and $\begin{array}{c}j'\\\alpha'\end{array}$ occur in $X(A,B)$ and $A(j,j')A_n(\alpha,\alpha')=1$. By Proposition 2.1 in \cite{MP2}, the matrix $A_{n+1}$ is the principal submatrix of $A\otimes A_n$ obtained by deleting the $m$th row and column of $A\otimes A_n$ if and only if $B(i,j)=0$, where $m=jk_n+h,\;\; 0\le h< k_n$, and
\[\sum_{l=0}^{i-1}\sum_{t=0}^{k-1}B^{n-1}(t,l)\le h< \sum_{l=0}^{i}\sum_{t=0}^{k-1}B^{n-1}(t,l).\]
The matrix $B_{n+1}$ is constructed similarly, by deleting rows and columns from $B\otimes B_n$.
\end{proof}
Recall that the dynamical system $(X(A,B), \sigma)$ is (topologically) strong mixing if given any nonempty open sets $U$ and $V$ in $X(A,B)$, there is $N\in{\mathbb N}^2$ such that $\sigma^n(U)\cap V\neq \emptyset$ for all $n\ge N$ (componentwise order).
\begin{corollary} Assume that the transition matrices $A_n, B_n$ are not permutation matrices. Then the $C^*$-algebras $\bar{\mathcal A}(2,n)$ and ${\mathcal A}(n,2)$ are simple and purely infinite if and only if $(X(A,B),\sigma)$ is strong mixing.
\end{corollary}
\begin{proof} Apply Proposition 2.2 in \cite{MP2}.
\end{proof}
\example\label{fs1} For the full shift described in \ref{fs}, we have $A=B=\left[\begin{array}{cc}1&1\\1&1\end{array}\right]=A_1=B_1$ and $A_{n+1}=B_{n+1}=A\otimes A_n$ for $n\ge 1$. The corresponding $C^*$-algebras are $\bar{\mathcal A}(2,n)\cong{\mathcal A}(n,2)\cong{\mathcal O}_{2^n}$.
\example\label{ex1a} For the shift associated to the textile system in \ref{ex1}, we have
\[A_1=A=\left[\begin{array}{cc}1&1\\1&1\end{array}\right], \;\; A_2=\left[\begin{array}{cccc}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{array}\right], ...\]
\[B_1=B=\left[\begin{array}{cc}0&1\\1&0\end{array}\right], \;\; B_2=\left[\begin{array}{cccc}0&0&0&1\\0&0&1&0\\0&1&0&0\\1&0&0&0\end{array}\right],...\]
with corresponding sequences of $C^*$-algebras $\bar{\mathcal A}(2,n)\cong{\mathcal O}_{2^n}$ and ${\mathcal A}(n,2)\cong C({\mathbb T})\otimes M_{2^n}$, since $A_n$ is the $2^n\times 2^n$ matrix with all entries $1$, and $B_n$ is a $2^n\times 2^n$ permutation matrix. Note that the dynamical system $(X(A,B),\sigma)$ is not strong mixing.
\example\label{gm1} Consider the Golden Mean shift $X(A,A)$ with $A=\left[\begin{array}{cc}1&1\\1&0\end{array}\right]$. Since the number of $n$-words in $Y(A)$ is a Fibonacci number, the dimension $k_n$ of $A_n=B_n$ is also a Fibonacci number, where $k_1=2$ and $k_2=3$. It is easy to see that to get $A_{n+1}$ from $A\otimes A_n$ we have to remove the last $2k_n-k_{n+1}$ rows and columns. Thus
\[A_1=A,\;\; A_2=\left[\begin{array}{ccc}1&1&1\\1&0&1\\1&1&0\end{array}\right],\;\; A_3=\left[\begin{array}{ccccc}1&1&1&1&1\\1&0&1&1&0\\1&1&0&1&1\\1&1&1&0&0\\1&0&1&0&0\end{array}\right],\]\[ A_4=\left[\begin{array}{cccccccc}1&1&1&1&1&1&1&1\\1&0&1&1&0&1&0&1\\1&1&0&1&1&1&1&0\\1&1&1&0&0&1&1&1\\1&0&1&0&0&1&0&1\\1&1&1&1&1&0&0&0\\1&0&1&1&0&0&0&0\\1&1&0&1&1&0&0&0\end{array}\right]\] etc, which are transitive and not permutation matrices.
The sequence of simple purely infinite $C^*$-algebras ${\mathcal A}(n,2)\cong\bar{\mathcal A}(2,n)\cong{\mathcal O}_{A_n}$ encodes the complexity of the Golden Mean shift.
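The matrices $A_n$ can also be obtained by a direct enumeration of the admissible $1\times n$ blocks, rather than by the tensor-product deletion rule of Theorem \ref{t1}. The Python sketch below assumes the convention that $B$ constrains vertically adjacent entries within a column, that $A$ constrains two horizontally adjacent columns entrywise, and that the columns are ordered lexicographically from the top; under these assumptions it reproduces the matrices $A_2$ and $A_3$ displayed above.
\begin{verbatim}
from itertools import product
import numpy as np

A = np.array([[1, 1], [1, 0]])   # Golden Mean transition matrix, A = B

def columns(B, n):
    # admissible 1 x n blocks (columns of height n), read top to bottom,
    # with vertically adjacent entries constrained by B
    return sorted(c for c in product(range(len(B)), repeat=n)
                  if all(B[c[i]][c[i + 1]] for i in range(n - 1)))

def A_n(A, B, n):
    # two columns may sit next to each other horizontally when their
    # entries satisfy A row by row
    cols = columns(B, n)
    return np.array([[int(all(A[a][b] for a, b in zip(u, v))) for v in cols]
                     for u in cols])

print(A_n(A, A, 2))   # reproduces the 3 x 3 matrix A_2 displayed above
print(A_n(A, A, 3))   # reproduces the 5 x 5 matrix A_3
\end{verbatim}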
\remark We have natural projections $Y(A_{n+1})\to Y(A_n)$ and $Y(B_{n+1})\to Y(B_n)$ such that
\[X(A,B)=\varprojlim Y(A_n)=\varprojlim Y(B_n).\]
The families of $C^*$-algebras ${\mathcal A}(m,n)$ and $\bar{\mathcal A}(m,n)$ can be thought of as $C^*$-bundles over $\{(m,n)\in{\mathbb N}^2\mid \;\; m,n\ge 2\}$, and we can interpret the corresponding section $C^*$-algebras as other algebras associated to the shift $X(A,B)$. In the case where $X(A,B)$ is constructed from a rank two graph, the relationship between the graph $C^*$-algebra and the above $C^*$-algebras remains to be explored.
\section{Groupoid morphisms and Fell bundles from textile systems}
\definition A surjective graph morphism $\phi:G\to H$
has the path lifting property for $s$ (or $\phi$ is an $s$-fibration) if for all $v\in G^0$ and for all $b\in H^1$ with $s(b)=w=\phi(v)$ there is $a\in G^1$ with $s(a)=v$ with $\phi(a)=b$. Similarly, we define an $r$-fibration. If the morphism $\phi$ has the path lifting property for both $s$ and $r$, we say that $\phi$ is a fibration. The morphism $\phi$ is a covering if it has the {\em unique} path lifting property for both $s$ and $r$.
\begin{remark} The morphisms $p$ and $q$ in the textile systems from examples \ref{ex2} and \ref{ex3} are fibrations. The morphism $p=q$ in example \ref{ex1} is a covering. The canonical morphisms $p$ and $q$ for the full shift (see \ref{fs}) are covering maps, but the full shift does not define a rank two graph, because the unique factorization property fails. Also, note that in this case, the horizontal and vertical shifts are not local homeomorphisms.
In general, the morphisms $p$ and $q$ in a textile system don't have the path lifting property: let $G^1=\{a,b,c\}, G^0=\{u,v\}, s(a)=r(a)=u, s(b)=r(c)=u, s(c)=r(b)=v, H^1=\{e,f\}, H^0=\{w\}, p(u)=p(v)=q(u)=q(v)=w, p(a)=p(b)=e, p(c)=f, q(a)=e, q(b)=q(c)=f$.
Then for $u\in G^0$ and $f\in H^1$ with $s(f)=p(u)=w$ there is no edge $x\in G^1$ with $s(x)=u$ and $p(x)=f$. Also, for $v\in G^0$ and $e\in H^1$ with $s(e)=w=q(v)$ there is no $x\in G^1$ with $s(x)=v$ and $q(x)=e$. Note also that the graph $\bar{G}=(G^1,H^1)$ from the dual textile system has sinks. A short computational check of this example is given immediately after the remark.
\end{remark}
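The failure of the path lifting property in the example above can be checked mechanically. The following Python snippet records only the source map and the edge maps $p,q$ of the example and searches for the missing lifts.
\begin{verbatim}
# The graphs of the counterexample, with the source map s and the
# projections p, q on edges written as dictionaries.
s = {'a': 'u', 'b': 'u', 'c': 'v'}
p = {'a': 'e', 'b': 'e', 'c': 'f'}
q = {'a': 'e', 'b': 'f', 'c': 'f'}

# no lift through p of the edge f starting at the vertex u:
print([x for x in 'abc' if s[x] == 'u' and p[x] == 'f'])   # []
# no lift through q of the edge e starting at the vertex v:
print([x for x in 'abc' if s[x] == 'v' and q[x] == 'e'])   # []
\end{verbatim}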
\begin{proposition} Consider any rank two graph of the form $G_1*_{\theta}G_2$ with the corresponding textile system described in Proposition \ref{gtot}. Then the morphism $q$ has the unique path lifting property for $s$, and the morphism $p$ has the unique path lifting property for $r$.
\end{proposition}
\begin{proof} Indeed, given $\alpha\in G^0=H_1^1$ and $\beta\in H^1=H_2^1$ with $q(\alpha)=s(\beta)$, there is a unique $(\alpha,\beta)\in G^1=H_1^1*H_2^1$ such that $s(\alpha, \beta)=\alpha$ and $q(\alpha,\beta)=\beta$. The proof for $p$ is similar.
\end{proof}
\begin{remark} For the textile system $T(2,2)$ associated to a two-dimensional shift, we can characterize the (unique) path lifting property for the morphisms $p,q$ in terms of filling a corner of a $2\times 2$ block. For example, $p$ has the (unique) path lifting property for $s$ if for any admissible column
$\displaystyle\begin{array}{c}a\\c\end{array}$ and for any admissible row $c\;\; d$, there is a (unique) $b$ which completes the admissible block $\displaystyle \beta=\begin{array}{cc}a&b\\c&d\end{array}.$ Similarly, we can characterize the path lifting property for the morphisms $p,q$ in $T(m,n)$.
\end{remark}
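For a matrix shift $X(A,B)$ the corner-filling condition can be tested directly. The Python sketch below fixes one possible convention for how $A$ and $B$ constrain horizontal and vertical neighbours (the convention, and the Golden Mean test case, are assumptions of the illustration) and checks whether every admissible corner can be filled.
\begin{verbatim}
from itertools import product

# Corner-filling test for a matrix shift X(A, B) over {0, ..., k-1}.
# Convention assumed for this sketch: A constrains horizontal neighbours
# (left to right) and B constrains vertical neighbours (bottom to top), so
# the block  a b / c d  is admissible iff A[a][b], A[c][d], B[c][a], B[d][b]
# are all 1.

def completions(A, B, a, c, d):
    # values of b completing the block  a b / c d
    return [b for b in range(len(A)) if A[a][b] and B[d][b]]

def p_lifts(A, B):
    # path lifting for s: every admissible column a/c and admissible row c d
    # admit at least one completion b
    return all(completions(A, B, a, c, d)
               for a, c, d in product(range(len(A)), repeat=3)
               if B[c][a] and A[c][d])

GM = [[1, 1], [1, 0]]
print(p_lifts(GM, GM))   # True: b = 0 always fills the corner
\end{verbatim}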
For a topological groupoid $\Gamma$, we denote by $s$ and $r$ the source and the range maps, by $\Gamma^0$ the unit space, and by $\Gamma^2$ the set of composable pairs.
\definition
Let $\Gamma, \Lambda$ be topological groupoids.
A {\em groupoid morphism} $\pi :\Gamma\rightarrow \Lambda$ is a
continuous map which
intertwines both the range and source maps and which satisfies
\[\pi(\gamma_1\gamma_2)=\pi(\gamma_1)\pi(\gamma_2)\;\; \text{for all}\; \;(\gamma_1,\gamma_2)\in \Gamma^2.\] It follows that \[\ker\pi:=\{\gamma\in \Gamma\mid \pi(\gamma)\in \Lambda^0\}\] contains the unit space $\Gamma^0$. A {\em groupoid fibration} is a surjective open morphism
$\pi :\Gamma\rightarrow \Lambda$ such
that for any $\lambda\in \Lambda$ and $x\in \Gamma^0$ with $\pi(x)=s(\lambda)$ there is $\gamma\in
\Gamma$ with $s(\gamma)=x$ and $\pi(\gamma)=\lambda$. Note that, using inverses, a groupoid fibration also has the property that for any $\lambda\in \Lambda$ and $x\in \Gamma^0$ with $\pi(x)=r(\lambda)$ there is $\gamma\in
\Gamma$ with $r(\gamma)=x$ and $\pi(\gamma)=\lambda$. If $\gamma$ is unique, then $\pi$ is called a {\em groupoid covering}.
For $G$ a finite graph without sinks, let $G^\infty$ be the space of infinite paths, and let $\sigma:G^\infty\to G^\infty$ be the unilateral shift $\sigma(x_1x_2x_3\cdots)=x_2x_3\cdots$. Let
\[\Gamma(G)=\{(x,m-n,x')\in G^{\infty}\times {\mathbb Z}\times G^{\infty}\; \mid \; \sigma^m(x)=\sigma^n(x')\}\]
be the corresponding \'etale groupoid with unit space $\Gamma(G)^0=\{(x,0,x)\mid x\in G^{\infty}\}$ identified with $G^{\infty}$.
\proposition Let $G, H$ be finite graphs with no sinks. Then any morphism $\phi:G\to H$ with the path lifting property for $s$ induces a
surjective continuous open map
\[\varphi: G^{\infty}\rightarrow H^{\infty}, \quad\varphi(x_1x_2x_3
\cdots)=\phi(x_1)\phi(x_2)\phi(x_3)\cdots\]
and a groupoid fibration \[\pi: \Gamma(G)\rightarrow \Gamma(H),
\quad\text{given by}\quad
\pi(x,k,x')=(\varphi(x),k,\varphi(x'))\]
with kernel
$\Delta=\{(x,0,x')\in \Gamma(G)\;\mid\; \varphi(x)=\varphi(x')\}.$ If $\phi$ is a graph covering, then $\pi$ is a groupoid covering.
\begin{proof} Let $y_1y_2\cdots \in H^{\infty}$ be an infinite path beginning
at $w_1\in H^0$. Since $\phi$ is onto, there is $v_1\in G^0$ with $\phi(v_1)=w_1$. By the path lifting property, there is $x_1\in G^1$ with $s(x_1)=v_1$ and $\phi(x_1)=y_1$. Continuing inductively, there is $x_1x_2\cdots\in G^{\infty}$ such that
$\varphi(x_1x_2\cdots)=y_1y_2\cdots$, and therefore $\varphi$ is onto.
Consider a cylinder set \[Z=\{a_1\cdots a_nx_1x_2\cdots \in G^{\infty}\;\mid x_1x_2\cdots\in G^{\infty}\}.\] By the path lifting property, $\varphi(Z)$ is the cylinder set in $H^{\infty}$ determined by the finite path $\phi(a_1)\cdots\phi(a_n)$. Hence $\varphi:G^{\infty}\to H^{\infty}$ is continuous and open.
We have \[\pi((x,k,x')(x',l,x''))=\pi(x,k,x')\pi(x',l,x'')=(\varphi(x),k+l,\varphi(x'')),\] and $\pi$ is a groupoid morphism.
Since $\varphi$ is surjective and takes cylinder sets into cylinder sets, $\pi$ is surjective, continuous and open.
To show that $\pi$ is a fibration, consider $\lambda=(y,k,y')\in \Gamma(H)$ and $x'\in \Gamma(G)^0=G^{\infty}$ with $\varphi(x')=s(\lambda)=y'$. Since $\varphi$ is onto and intertwines the shift maps, we can find $\gamma=(x,k,x')\in \Gamma(G)$ with $\pi(\gamma)=\lambda$. Hence $\pi$ is a groupoid fibration. In the case $\phi$ is a covering, let's show how we can find $\gamma$ in a unique way. We have $k=m-n$ and $\sigma^my=\sigma^ny', \sigma^mx=\sigma^nx'$. For $y_m$ and $v=s(x_{m+1})=s(x'_{n+1})$ with $r(y_m)=\phi(v)$ there is a unique $x_m$ with $r(x_m)=v$ and $\phi(x_m)=y_m$. We can continue inductively to find a unique $x$ with $\varphi(x)=y$, and it follows that $\pi$ is a groupoid covering.
Now $(x,k,x')\in \ker\pi$ iff $\varphi(x)=\varphi(x')$ and $k=0$. \end{proof}
\corollary Given a textile system $(G,H,p,q)$ such that $G, H$ have no sinks and $p, q$ have the path lifting property, we get two groupoid fibrations $\pi, \rho :\Gamma(G)\to \Gamma(H)$. If $p$ and $q$ are coverings, we get two groupoid coverings $\pi, \rho :\Gamma(G)\to \Gamma(H)$.
\example\label{fs2} Consider the coverings $p=q:G\to H$ in the textile system of the full shift as in Example \ref{fs}. We obtain a covering $\pi=\rho:\Gamma(G)\to\Gamma(H)$ of Cuntz groupoids.
Recall that a (saturated) Fell bundle over a groupoid $\Gamma$ is a Banach bundle
$\pi: E \to \Gamma$ with extra structure such that the fiber $E_{\gamma}=\pi^{-1}(\gamma)$ is an $E_{r(\gamma)}$--$E_{s(\gamma)}$
imprimitivity bimodule for all $\gamma\in\Gamma$. The restriction of $E$ to the unit space $\Gamma^0$ is a
$C^*$-bundle.
The $C^*$-algebra $C^*_r(\Gamma; E)$ is a completion of $C_c(\Gamma; E)$ in ${\mathcal L}(L^2(\Gamma; E))$. For more details, see \cite{DKR}, where
the following result is proved.
\begin{theorem} Given an open surjective morphism of \'etale groupoids
$\pi:\Gamma\rightarrow \Lambda$ with amenable kernel $\Delta := \pi^{-1}(\Lambda^0)$, there is
a Fell bundle $E=E(\pi)$ over $\Lambda$ such that
$C^*_r(\Gamma)\cong C^*_r(E)$.
\end{theorem}
Using Corollary \ref{ds} and Theorem \ref{t1}, we get
\begin{theorem}\label{t2} Given a matrix shift $X(A,B)$ such that in the associated family of textile systems $T(m,n)$ the morphisms $p$ and $q$ have the path lifting property, there are two families of Fell bundles $E^{(m,n)}(p)$ and $E^{(m,n)}(q)$ over $\Gamma(G(m-1,n))$ such that
\[{\mathcal A}(m,n)\cong C^*_r(E^{(m,n)}(p))\cong C^*_r(E^{(m,n)}(q)).\]
\end{theorem}
\example\label{gr}
Consider the textile system $(G,H,p,q)$ from example \ref{ex1}.
In this case
$\Gamma(G)^0$ has two points, and $C^*(G)\cong C({\mathbb T})\otimes M_2$. The maps $p$ and $q$ are coverings, they both induce the morphism \[\pi:\Gamma(G)\to \Gamma(H)\cong{\mathbb Z}, \;\pi(x,k,x')=k,\] and the two Fell bundles $E^{(2,2)}(p)$ and $E^{(2,2)}(q)$ over ${\mathbb Z}$ coincide. The fiber over $0\in {\mathbb Z}$ is isomorphic to $M_2$.
\begin{remark} For a rank two graph $G_1*_{\theta}G_2$, since the map $q$ in the corresponding textile system is a covering, we get a groupoid covering $\pi:\Gamma(G)\to\Gamma(H)$. Recall that $G^1=H_1^1*H_2^1, G^0=H_1^1$ and $H=H_2$, where $H_i=G_i^{op}$. In particular, $\Gamma(H)$ acts on $\Gamma(G)^0$ and $\Gamma(G)\cong \Gamma(H)\ltimes \Gamma(G)^0$ (see \cite{DKR} Proposition 5.3).
\end{remark}
\example
Consider the textile system from example \ref{ex2}.
Here $G^{\infty}=\{a,b,c\}^{\mathbb N}, H^{\infty}=\{e,f\}^{\mathbb N}$ are Cantor sets, $C^*(G)\cong {\mathcal O}_3$ and $C^*(H)\cong{\mathcal O}_2$, the Cuntz algebras. The morphisms $p$ and $q$ are fibrations and induce different groupoid morphisms $\pi, \rho:\Gamma(G)\to \Gamma(H)$. The fibers of the Fell bundle $E^{(2,2)}(p)$ over $y\in \Gamma(H)^0=H^{\infty}$
are isomorphic to $M_{2^n}$, where $n$
is the number of $e$'s in $y$. For $n=\infty$, $M_{2^{\infty}}$ is the
UHF-algebra of type $2^{\infty}$.
\example Let $(G,H,p,q)$ be the textile system from example \ref{ex3}.
The space $G^{\infty}\subset \{u,v\}^{\mathbb N}$ is defined by the vertex matrix
\[A=\left[\begin{array}{cc}1&1\\1&0\end{array}\right]\]
and $C^*(G)\cong {\mathcal O}_A$. The space $H^{\infty}$ has one point, and $\Gamma(H)\cong {\mathbb Z}$. Both $p$ and $q$ induce the same morphism $\pi:\Gamma(G)\to {\mathbb Z}, \;\pi(x,k,x')=k$ as in Example \ref{gr}, and the Fell bundle $E^{(2,2)}(p)=E^{(2,2)}(q)$ corresponds to the grading of ${\mathcal O}_A$.
\example The full shift $X= \{0,1\}^{{\mathbb N}^2}$ determines a sequence of textile systems $T(n,2)=\bar{T}(2,n)$, where $G(n,2)=\bar{G}(2,n)$ is the complete graph with $2^n$ vertices, and $G(n-1,2)=\bar{G}(2,n-1)$ is the complete graph with $2^{n-1}$ vertices. The two families of Fell bundles over the Cuntz groupoid $\Gamma(G(n-1,2))$ have C*-algebras isomorphic to $C^*(G(n,2))\cong {\mathcal O}_{2^n}$.
\example For the Golden Mean shift with transition matrices
\[A=B= \left[\begin{array}{cc}1&1\\1&0\end{array}\right],\]
the corresponding graphs $G(n,2)=\bar{G}(2,n)$ have vertex matrices as in Example \ref{gm1}.
The morphisms $p$ and $q$ in $T(2,n)=\bar{T}(n,2)$ are fibrations and determine different groupoid morphisms $\Gamma(G(2,n))\to \Gamma(G(2,n-1))$ and two Fell bundles $E^{(2,n)}(p)$ and $E^{(2,n)}(q)$ over $\Gamma(G(2,n-1))$.
It would be interesting to calculate the fibers of these Fell bundles.
\end{document}
\begin{document}
\title{A note on the consistency operator}
\begin{abstract}
It is a well known empirical observation that natural axiomatic theories are pre-well-ordered by proof-theoretic strength. For any natural theory $T$, the next strongest natural theory is $T+\mathsf{Con}_T$. We formulate and prove a statement to the effect that the consistency operator is the weakest natural way to uniformly extend axiomatic theories.
\end{abstract}
\section{Introduction}
G{\"o}del's second incompleteness theorem states that no consistent sufficiently strong effectively axiomatized theory $T$ proves its own consistency statement $\mathsf{Con}_T$. Using ad hoc proof-theoretic techniques (namely, Rosser-style self-reference) one can construct $\Pi_1$ sentences $\varphi$ that are not provable in $T$ such that $T+\varphi$ is a strictly weaker theory than $T+\mathsf{Con}_T$. Nevertheless, $\mathsf{Con}_T$ seems to be the weakest \emph{natural} $\Pi_1$ sentence that is not provable in $T$. Without a mathematical definition of ``natural,'' however, it is difficult to formulate a precise conjecture that would explain this phenomenon. This is a special case of the well known empirical observation that natural axiomatic theories are pre-well-ordered by consistency strength, which S. Friedman, Rathjen, and Weiermann \cite{friedman2013slow} call one of the ``great mysteries in the foundations of mathematics.''
Recursion theorists have observed a similar phenomenon in Turing degree theory. One can use ad hoc recursion-theoretic methods like the priority method to construct non-recursive $\Sigma_1$ definable sets whose Turing degree is strictly below that of $0'$. Nevertheless, $0'$ seems to be the weakest \emph{natural} non-recursive r.e. degree. Once again, without a mathematical definition of ``natural,'' however, it is difficult to formulate a precise conjecture that would explain this phenomenon.
A popular approach to studying natural Turing degrees is to focus on degree-invariant functions; a function $f$ on the reals is \emph{degree-invariant} if, for all reals $A$ and $B$, $A\equiv_T B$ implies $f(A) \equiv_T f(B)$. The definitions of natural Turing degrees tend to relativize to arbitrary degrees, yielding degree invariant functions on the reals; for instance, the construction of $0'$ relativizes to yield the Turing Jump. Sacks \cite{sacks1963degrees} asked whether there is a degree invariant solution to Post's Problem. Recall that a function $W:2^\omega\rightarrow 2^\omega$ is a \emph{recursively enumerable operator} if there is an $e\in\omega$ such that, for each $A$, $W(A)=W_e^A$, the $e^{th}$ set recursively enumerable in $A$.
\begin{question}[Sacks]
Is there a degree-invariant recursively enumerable operator $W$ such that for every real $A$, $A <_T W(A) <_T A'$?
\end{question}
Though the question remains open, Slaman and Steel \cite{slaman1988definable} proved that there is no order-preserving solution to Post's Problem. Recall that a function $f$ on the reals is \emph{order-preserving} if, for all reals $A$ and $B$, $A\leq_T B$ implies $f(A) \leq_T f(B)$.
In \cite{montalban2019inevitability}, Montalb{\'a}n and the author proved a proof-theoretic analogue of a negative answer to Sacks' question for order-preserving functions. Let $T$ be a sound, sufficiently strong effectively axiomatized theory in the language of arithmetic, e.g., $\mathsf{EA}$.\footnote{$\mathsf{EA}$ is a theory in the language of arithmetic (with exponentiation) axiomatized by the axioms of Robinson's $Q$, recursive axioms for exponentiation, and induction for bounded formulas.} A function $\mathfrak{g}$ is \emph{monotone} if, for all sentences $\varphi$ and $\psi$, $T\vdash\varphi \rightarrow \psi$ implies $T\vdash \mathfrak{g}(\varphi)\rightarrow \mathfrak{g}(\psi)$ (this is just to say that $\mathfrak{g}$ induces a monotone function on the Lindenbaum algebra of $T$). Let $[\varphi]$ denote the equivalence class of $\varphi$ modulo $T$ provable equivalence, i.e., $[\varphi] := \{ \psi : T\vdash \varphi \leftrightarrow \psi \}.$ One of the main theorems of \cite{montalban2019inevitability} is the following.
\begin{theorem}[Montalb{\'a}n--W.]\label{minimal}
Let $\mathfrak{g}$ be recursive and monotone such that:
\begin{itemize}
\item for all $\varphi$, $T + \mathsf{Con}_T(\varphi) \vdash \mathfrak{g}(\varphi)$
\item for all consistent $\varphi$, $T + \varphi \nvdash \mathfrak{g}(\varphi)$
\end{itemize}
Then for every true $\varphi$, there is a true $\psi$ such that $T + \psi \vdash \varphi$ and
$$[\psi \wedge \mathfrak{g}(\psi)] = [\psi \wedge \mathsf{Con}_T(\psi)].$$
\end{theorem}
To state a corollary of this theorem, we recall that $\varphi$ \emph{strictly implies} $\psi$ if one of the following holds:
\begin{itemize}
\item[(i)] $T+\varphi\vdash\psi$ and $T+\psi\nvdash\varphi$.
\item[(ii)] $[\varphi]=[\psi]=[\bot]$.
\end{itemize}
\begin{corollary}
There is no recursive monotone $\mathfrak{g}$ such that for every $\varphi$,\\ $\big(\varphi\wedge \mathsf{Con}_T(\varphi)\big)$ strictly implies $\big(\varphi \wedge \mathfrak{g}(\varphi) \big)$ and $\big(\varphi \wedge \mathfrak{g}(\varphi) \big)$ strictly implies $\varphi$.
\end{corollary}
The Slaman--Steel theorem suggests a strengthening of these results. Recall that a \emph{cone} in the Turing degrees is any set of the form $\{ B : B\geq_T A \}$ where $A$ is a Turing degree. The following is a special case of a theorem due to Slaman and Steel.
\begin{theorem}[Slaman--Steel]
Let $f:2^\omega\rightarrow 2^\omega$ be Borel and order-preserving. Then one of the following holds:
\begin{enumerate}
\item $f(A) \leq_T A$ on a cone.
\item $A' \leq_T f(A)$ on a cone.
\end{enumerate}
\end{theorem}
Montalb{\'a}n and the author asked whether Theorem \ref{minimal} could be strengthened in the style of the Slaman--Steel theorem, i.e., by showing that all increasing monotone recursive functions that are no stronger than the consistency operator are equivalent to the consistency operator in the limit. In this note we provide a positive answer to this question.
To sharpen the notion of the ``limit behavior'' of a function, we introduce the notion of a true cone. A \emph{cone} is any set $\mathfrak{C}$ of the form $\{ \psi: T+\psi\vdash\varphi\}$ where $\varphi$ is a sentence. A \emph{true cone} is a cone that contains a true sentence. In \textsection{2} we prove that all recursive monotone operators that produce sentences of some bounded arithmetical complexity are bounded from below by the consistency operator on a true cone.
\begin{theorem}\label{main first}
Let $\mathfrak{g}$ be recursive and monotone such that, for some $k\in \mathbb{N}$, for all $\varphi$, $\mathfrak{g}(\varphi)$ is $\Pi_k$. Then one of the following holds:
\begin{enumerate}
\item There is a true cone $\mathfrak{C}$ such that for all $\varphi \in \mathfrak{C}$,
$$T+ \varphi \vdash \mathfrak{g}(\varphi) .$$
\item There is a true cone $\mathfrak{C}$ such that for all $\varphi \in \mathfrak{C}$,
$$T+ \varphi + \mathfrak{g}(\varphi) \vdash \mathsf{Con}_T(\varphi).$$
\end{enumerate}
\end{theorem}
In \textsection{3} we prove that the condition that $\mathfrak{g}$ is recursive cannot be weakened. More precisely, we exhibit a monotone $0'$ recursive function which vacillates between behaving like the identity operator and behaving like the consistency operator.
\begin{theorem}\label{main second}
There is a $0'$ recursive monotone function $\mathfrak{g}$ such that, for every $\varphi$, $\mathfrak{g}(\varphi)$ is $\Pi_1$, yet for arbitrarily strong true sentences
$$[\varphi \wedge \mathfrak{g}(\varphi)]=[\varphi\wedge\mathsf{Con}_T(\varphi)]$$
and for arbitrarily strong true sentences
$$[\varphi \wedge \mathfrak{g}(\varphi)]=[\varphi].$$
\end{theorem}
Though Theorem \ref{main first} is a considerable strengthening of the result in \cite{montalban2019inevitability}, we conjecture that it admits of a dramatic improvement. We remind the reader that the aforementioned theorem of Slaman and Steel is a special case of a sweeping classification of increasing Borel order-preserving functions. We say that a function $f$ is \emph{increasing} if, for all $A$, $A\leq_T f(A)$.
\begin{theorem}[Slaman--Steel]
Let $f:2^\omega\rightarrow 2^\omega$ be increasing, Borel, order-preserving. Suppose that for some $\alpha<\omega_1$, $f(A) \leq_T A^{(\alpha)}$ for every $A$. Then for some $\beta\leq\alpha$, $f(A)\equiv_T A^{(\beta)}$ on a cone.
\end{theorem}
We conjecture that a similar classification of monotone proof-theoretic operators is possible. Our conjecture is stated in terms of iterated consistency statements. Let $\prec$ be a nice elementary presentation of a recursive well-ordering.\footnote{Nice elementary presentations of well-orderings are defined in \cite{beklemishev1995iterated}, see \textsection 2.3, Definition 1.} We define the iterates of the consistency operator by appealing to G{\"o}del's fixed point lemma.
$$T \vdash \mathsf{Con}^\alpha_T(\varphi) \leftrightarrow \forall \beta \prec \alpha\; \mathsf{Con}_T \big(\varphi\wedge\mathsf{Con}^\beta_T(\varphi) \big).$$
For true $\varphi$, the iterations of $\mathsf{Con}_T$ form a proper hierarchy of true sentences by G{\"o}del's second incompleteness theorem. We make the following conjecture.
\begin{conjecture}
Suppose $\mathfrak{g}$ is monotone, non-constant, and recursive such that, for every $\varphi$, $\mathfrak{g}(\varphi)\in\Pi_1$. Let $\prec$ be a nice elementary presentation of well-ordering and $\alpha$ an ordinal notation. Suppose that, for every $\varphi$, $$T+\varphi + \mathsf{Con}_T^\alpha(\varphi) \vdash \mathfrak{g}(\varphi).$$ Then for some $\beta\preceq\alpha$, for all $\varphi$ in a true cone, $$[\varphi\wedge \mathfrak{g}(\varphi)]=[\varphi\wedge \mathsf{Con}_T^\beta(\varphi)].$$
\end{conjecture}
According to the conjecture, if an increasing monotone recursive function $\mathfrak{g}$ that produces only $\Pi_1$ sentences is no stronger than $\mathsf{Con}_T^\alpha$, it is equivalent on a true cone to $\mathsf{Con}_T^\beta$ for some $\beta\preceq\alpha$. This would provide a classification of a large class of monotone proof-theoretic operators in terms of their limit behavior.
\section{The main theorem}
Let $T$ be a sound, recursively axiomatized extension of $\mathsf{EA}$ in the language of arithmetic. We want to show that $T+\mathsf{Con}_T(\varphi)$ is the weakest natural theory that results from adjoining a $\Pi_1$ sentence to $T$. A central notion in our approach is that of a monotone operator on finite extensions of $T$.
\begin{definition}
$\mathfrak{g}$ is \emph{monotone} if, for every $\varphi$ and $\psi$, $$T\vdash\varphi \rightarrow \psi \textrm{ implies }T\vdash \mathfrak{g}(\varphi)\rightarrow \mathfrak{g}(\psi).$$
\end{definition}
\begin{remark}
We will switch quite frequently between the notations $T+\varphi \vdash \psi$ and $T\vdash \varphi \rightarrow \psi$, trusting that no confusion arises. The two claims are equivalent, by the Deduction Theorem.
\end{remark}
Our goal is to prove that the consistency operator is, roughly, the weakest operator for uniformly strengthening theories. Our strategy is to show that any uniform method for extending theories that is as weak as the consistency operator must be equivalent to the consistency operator in the limit. We sharpen the notion ``in the limit'' with the following definitions.
\begin{definition}
Given a sentence $\varphi$, the \emph{cone generated by $\varphi$} is the set of all sentences $\psi$ such that $T\vdash \psi \rightarrow \varphi$. A \emph{cone} is any set $\mathfrak{C}$ such that, for some $\varphi$, $\mathfrak{C}$ is the cone generated by $\varphi$. A \emph{true cone} is a cone that is generated by a sentence that is true in the standard model $\mathbb{N}$.
\end{definition}
We are now ready to state and prove the main theorem. Note that the following is a restatement of Theorem \ref{main first}.
\begin{theorem}\label{main} Let $T$ be a sound, effectively axiomatized extension of $\mathsf{EA}$. Let $\mathfrak{g}$ be recursive and monotone such that, for some $k\in\mathbb{N}$, for all $\varphi$, $\mathfrak{g}(\varphi)$ is $\Pi_k$. Then one of the following holds:
\begin{enumerate}
\item There is a true cone $\mathfrak{C}$ such that for all $\varphi \in \mathfrak{C}$,
$$T+\varphi \vdash \mathfrak{g}(\varphi).$$
\item There is a true cone $\mathfrak{C}$ such that for all $\varphi \in \mathfrak{C}$,
$$T+ \varphi + \mathfrak{g}(\varphi) \vdash \mathsf{Con}_T(\varphi).$$
\end{enumerate}
\end{theorem}
\begin{proof}
Since $\mathfrak{g}$ is recursive, its graph is defined by a $\Sigma_1$ formula $\mathcal{G}$, i.e., for any $\varphi$ and $\psi$,
$$\mathfrak{g}(\varphi)=\psi \iff \mathbb{N}\vDash \mathcal{G}(\ulcorner\varphi\urcorner,\ulcorner\psi\urcorner).$$
Since $T$ is sound and $\Sigma_1$ complete, this implies that for any $\varphi$ and $\psi$,
\begin{equation}
\label{eq:star}
\tag{$\star$}
\mathfrak{g}(\varphi)=\psi \iff T\vdash \mathcal{G}(\ulcorner\varphi\urcorner,\ulcorner\psi\urcorner)
\end{equation}
From now on we drop the corner quotes and write $\mathcal{G}(\varphi,\psi)$ instead of $\mathcal{G}(\ulcorner\varphi\urcorner,\ulcorner\psi\urcorner)$, trusting that no confusion will arise.
We consider the following sentence in the language of arithmetic:
\begin{equation}
\label{eq:A}
\tag{$A$}
\forall x \forall y \Big( \big( \mathcal{G}(x,y) \wedge \mathsf{True}_{\Pi_k} (y) \big) \rightarrow \mathsf{Con}_T(x) \Big).
\end{equation}
Informally, $A$ says that, for every $\varphi$, the truth of $\mathfrak{g}(\varphi)$ implies the consistency of $T+ \varphi$. Note that we need to use a partial truth predicate in the statement $A$ since we are formalizing a uniform claim about the outputs of the function $\mathfrak{g}$. For any specific output $\psi$ of the function $\mathfrak{g}$, $T$ will be able to reason about $\psi$ without relying on the partial truth predicate.
We break into cases based on whether $A$ is true or false.
\textbf{Case 1:} $A$ is true in the standard model $\mathbb{N}$. We claim that in this case $$\mathfrak{C}:=\{ \varphi : T + \varphi \vdash A \}$$ satisfies condition (2) from the statement of the theorem. Clearly $\mathfrak{C}$ is a true cone. It suffices to show that for any $\varphi \in \mathfrak{C}$, $$T+ \varphi + \mathfrak{g}(\varphi) \vdash \mathsf{Con}_T(\varphi).$$
So let $\varphi\in\mathfrak{C}$ and let $\psi=\mathfrak{g}(\varphi)$. We reason as follows.
\begin{flalign*}
T + \varphi & \vdash \forall x \forall y \Big( \big( \mathcal{G}(x,y) \wedge \mathsf{True}_{\Pi_k} (y) \big) \rightarrow \mathsf{Con}_T(x) \Big) \textrm{ by choice of $\mathfrak{C}$.}\\
T + \varphi & \vdash \big( \mathcal{G}(\varphi , \psi ) \wedge \mathsf{True}_{\Pi_k}(\psi)\big) \rightarrow \mathsf{Con}_T(\varphi) \textrm{ by instantiation.}\\
T + \varphi & \vdash \mathcal{G}( \varphi , \psi) \textrm{ by observation $(\star)$.}\\
T + \varphi & \vdash \mathsf{True}_{\Pi_k}(\psi) \rightarrow \mathsf{Con}_T(\varphi) \textrm{ from the previous two lines by logic.}\\
T + \varphi + \psi &\vdash \mathsf{Con}_T(\varphi) \textrm{ trivially from the previous line.}\\
T+ \varphi + \mathfrak{g}(\varphi) & \vdash \mathsf{Con}_T(\varphi) \textrm{ since $\psi=\mathfrak{g}(\varphi)$.}
\end{flalign*}
\textbf{Case 2:} $A$ is false in the standard model $\mathbb{N}$. We infer that
$$ \exists \varphi \exists \psi \Big( \mathcal{G}(\varphi,\psi) \wedge \mathsf{True}_{\Pi_k} (\psi) \wedge \neg \mathsf{Con}_T(\varphi) \Big).$$
Thus, there is an inconsistent sentence $\varphi$ such that $\mathfrak{g}(\varphi)$ is a true $\Pi_k$ sentence. This is to say that $\mathfrak{g}(\bot)$ is true. We claim that in this case $$\mathfrak{C}:=\{ \varphi : T + \varphi \vdash \mathfrak{g}(\bot) \}$$ satisfies condition (1) from the statement of the theorem. Clearly $\mathfrak{C}$ is a true cone. It suffices to show that for any $\varphi \in \mathfrak{C}$, $$T+ \varphi \vdash \mathfrak{g}(\varphi).$$ So let $\varphi \in \mathfrak{C}$. By the definition of $\mathfrak{C}$, we infer that
\begin{equation}
\label{eq:dagger}
\tag{$\dagger$}
T+\varphi \vdash \mathfrak{g}(\bot).
\end{equation}
We reason as follows.
\begin{flalign*}
T&\vdash \bot \rightarrow \varphi \textrm{ by logic.} \\
T&\vdash \mathfrak{g}(\bot) \rightarrow \mathfrak{g}(\varphi) \textrm{ by the monotonicity of $\mathfrak{g}$.}\\
T+\varphi&\vdash \mathfrak{g}(\varphi) \textrm{ from the previous line and $\dagger$, by logic.}
\end{flalign*}
This completes the proof.
\end{proof}
\begin{remark}
Theorem \ref{main} is stated for operators $\mathfrak{g}$ that produce sentences of bounded arithmetical complexity, i.e., for some $k\in\mathbb{N}$, for all $\varphi$, $\mathfrak{g}(\varphi)$ is $\Pi_k$. The reason for this restriction is that the partial truth-predicate for $\Pi_k$ sentences is invoked when formulating the sentence $A$.
\end{remark}
\section{Recursiveness is a necessary condition}
In the proof of Theorem \ref{main} we appealed to the recursiveness of $\mathfrak{g}$ to show that $T$ correctly calculates the values of $\mathfrak{g}$, i.e., that for every $\varphi$ and $\psi$,
$$\mathfrak{g}(\varphi)=\psi \iff T\vdash \mathcal{G}(\ulcorner\varphi\urcorner,\ulcorner\psi\urcorner).$$ In this section we show that recursiveness is a necessary condition for the proof of Theorem \ref{main}. In particular, we exhibit a monotone operator $\mathfrak{g}$ which is recursive in $0'$ and produces only $\Pi_1$ sentences such that for arbitrarily strong true sentences $\varphi$, $$[\varphi \wedge \mathfrak{g}(\varphi)]=[\varphi\wedge\mathsf{Con}_T(\varphi)]$$
and for arbitrarily strong true sentences $\varphi$,
$$[\varphi \wedge \mathfrak{g}(\varphi)]=[\varphi].$$
Our proof makes use of a recursive set $\mathfrak{A}$ that contains arbitrarily strong true sentences and omits arbitrarily strong true sentences. A very similar set is constructed in \cite{montalban2019inevitability}. We now present the construction of the set $\mathfrak{A}$, which is necessary to understand the proof of the theorem. After describing the construction of $\mathfrak{A}$ we will verify some of its basic properties.
\begin{quote}
Let $\{\varphi_0,\varphi_1,...\}$ be an effective G{\"o}del numbering of the language of arithmetic. We describe the construction of $\mathfrak{A}$ in stages. During a stage $n$ we may \emph{activate} a sentence $\psi$, in which case we say that $\psi$ is \emph{active} until it is \emph{deactivated} at some later stage.
\textbf{Stage 0:} Numerate $\varphi_0$ and $\neg\varphi_0$ into $\mathfrak{A}$. Activate the sentences $(\varphi_0\wedge \mathsf{Con}_T(\varphi_0))$ and $(\neg\varphi_0\wedge \mathsf{Con}_T(\neg\varphi_0))$.
\textbf{Stage n+1:} There are finitely many active sentences. For each such sentence $\psi$, numerate $\theta_0:=(\psi\wedge\varphi_{n+1})$ and $\theta_1:=(\psi\wedge\neg\varphi_{n+1})$ into $\mathfrak{A}$. Deactivate the sentence $\psi$ and activate the sentences $(\theta_0\wedge \mathsf{Con}_T(\theta_0))$ and $(\theta_1\wedge \mathsf{Con}_T(\theta_1))$.
\end{quote}
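The stage-by-stage bookkeeping can be made concrete with a short symbolic sketch. In the following Python fragment sentences are represented as plain strings and $\mathsf{Con}_T(\cdot)$ is an uninterpreted tag, so the code records only which sentences are numerated and which are active; it does not model any provability facts.
\begin{verbatim}
def build_stages(phis, stages):
    # phis[n] stands for the n-th sentence of the Goedel numbering
    Con = lambda s: "Con(" + s + ")"
    A, active = [], []
    # Stage 0
    for s in (phis[0], "~(" + phis[0] + ")"):
        A.append(s)                                   # numerate s
        active.append("(" + s + ") & " + Con(s))      # activate s & Con(s)
    # Stages 1, 2, ..., stages
    for n in range(1, stages + 1):
        new_active = []
        for psi in active:
            for lit in (phis[n], "~(" + phis[n] + ")"):
                theta = "(" + psi + ") & (" + lit + ")"
                A.append(theta)                       # numerate theta
                new_active.append("(" + theta + ") & " + Con(theta))
        active = new_active                           # old sentences deactivated
    return A

print(build_stages(["p0", "p1", "p2"], 2)[:4])
\end{verbatim}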
\begin{remark}\label{branching}
It can be useful to visualize, along with the construction of $\mathfrak{A}$, the construction of an upwards growing tree that is (at most) binary branching. The nodes in the tree are the \emph{consistent} sentences that are numerated into $\mathfrak{A}$. The immediate successors in this tree of a sentence $\varphi$ have the form $(\varphi \wedge \mathsf{Con}_T(\varphi) \wedge \theta)$ and $(\varphi \wedge \mathsf{Con}_T(\varphi) \wedge \neg \theta)$. Thus, the successors of any two points are inconsistent with each other. Observe that for any two distinct sentences $\varphi$ and $\psi$ in the tree, $\varphi$ is below $\psi$ (i.e., $\varphi$ and $\psi$ belong to the same path and $\varphi$ is below $\psi$) if and only if $T+\psi \vdash \varphi$. It follows from the previous two observations that any two sentences that are incompatible with each other in the tree ordering are inconsistent with each other.
\end{remark}
\begin{lemma}\label{truth lemma}
At any stage in the construction of $\mathfrak{A}$, (i) exactly one of the active sentences is true in the standard model $\mathbb{N}$ and (ii) exactly one of the sentences numerated into $\mathfrak{A}$ is true in the standard model $\mathbb{N}$.
\end{lemma}
\begin{proof}
We proceed by induction on the stages in the construction of $\mathfrak{A}$.
\textbf{Stage 0:} Exactly one of $\varphi_0$ or $\neg\varphi_0$ is true (these are the numerated sentences), and hence so is exactly one of $\varphi_0\wedge \mathsf{Con}(\varphi_0)$ and $\neg\varphi_0\wedge \mathsf{Con}(\neg\varphi_0)$ (these are the activated sentences).
\textbf{Stage n+1:} At the end of stage $n$ there is exactly one true active sentence $\theta$. Then exactly one of $\zeta_0:=\theta\wedge\varphi_{n+1}$ and $\zeta_1:=\theta\wedge\neg\varphi_{n+1}$ is true (these are the numerated sentences). Hence exactly one of $\zeta_0\wedge \mathsf{Con}(\zeta_0)$ and $\zeta_1\wedge \mathsf{Con}(\zeta_1)$ is true (these are the activated sentences).
\end{proof}
\begin{corollary}\label{true branch}
There is a unique branch through the tree described in Remark \ref{branching} that contains only true sentences. We will call it the \emph{true branch.}
\end{corollary}
\begin{lemma}\label{arbitrary strong}
$\mathfrak{A}$ contains arbitrarily strong true sentences.
\end{lemma}
\begin{proof}
Let $\psi$ be a true sentence. $\psi$ appears at some point in our G{\"o}del numbering of the language of arithmetic, i.e., for some $n$, $\psi$ is $\varphi_n$. Going into stage $n$ of the construction of $\mathfrak{A}$, there is exactly one true active sentence $\theta$ by Lemma \ref{truth lemma}. Then $\theta \wedge \varphi_n$ is numerated into $\mathfrak{A}$. So $\mathfrak{A}$ contains a true sentence that implies $\psi$.
\end{proof}
Our proof also makes use of iterated consistency statements. Let $\prec$ be an elementary presentation of $\omega$. For the sake of convenience, we reiterate the definition of the iterates of the consistency operator. We define these iterates by appealing to G{\"o}del's fixed point lemma:
$$T \vdash \mathsf{Con}^\alpha_T(\varphi) \leftrightarrow \forall \beta \prec \alpha\; \mathsf{Con}_T \big(\varphi\wedge\mathsf{Con}^\beta_T(\varphi) \big).$$
For true $\varphi$, the iterates of $\mathsf{Con}_T$ form a proper hierarchy of true sentences by G{\"o}del's second incompleteness theorem.
\begin{definition}
For a true sentence $\psi$ numerated into $\mathfrak{A}$ at stage $n$, let $\theta_\psi$ be a true sentence that is either the $(n+1)^{th}$ sentence in the G{\"o}del numbering of the language or the negation thereof (depending on which is true). The point of the definition is this: if $\psi$ is a true sentence numerated into $\mathfrak{A}$, then the next true sentence numerated into $\mathfrak{A}$ is $\psi \wedge \mathsf{Con}_T(\psi) \wedge \theta_\psi$.
\end{definition}
\begin{lemma}
\label{arbitrarily strong}
For arbitrarily strong true sentences $\psi \in \mathfrak{A}$, $T \nvdash (\psi \wedge \mathsf{Con}_T(\psi)) \rightarrow \theta_\psi.$
\end{lemma}
\begin{proof}
Suppose not, i.e., suppose that there is a true $\varphi$ such that for all true $\psi$, if both $T\vdash \psi \rightarrow \varphi$ and $\psi \in \mathfrak{A}$, then:
\begin{equation}
\label{eq:oplus}
\tag{$\oplus$}
T\vdash (\psi \wedge \mathsf{Con}_T(\psi)) \rightarrow \theta_\psi.
\end{equation}
By Lemma \ref{arbitrary strong}, $\mathfrak{A}$ contains arbitrarily strong true sentences, so we know there is at least one such sentence $\psi_0$ in $\mathfrak{A}$ that implies $\varphi$. By the construction of $\mathfrak{A}$, the true sentences numerated into $\mathfrak{A}$ after $\psi_0$ are:
\begin{itemize}
\item $\psi_1:= \psi_0 \wedge \mathsf{Con}_T(\psi_0) \wedge \theta_{\psi_0}$
\item $\psi_2:= \psi_1 \wedge \mathsf{Con}_T(\psi_1) \wedge \theta_{\psi_1}$
\end{itemize}
and so on. Each $\psi_n$ implies $\varphi$ and is in $\mathfrak{A}$. Thus, each $\psi_n$ satisfies condition ($\oplus$), i.e., for each $\psi_n$, $T\vdash (\psi_n \wedge \mathsf{Con}_T(\psi_n)) \rightarrow \theta_{\psi_n}.$ This means that for all $n\geq 1$ the final conjunct of $\psi_n$ is superfluous. It follows that for each $n$, $\psi_n$ is $T$ provably equivalent to $\psi_0 \wedge \mathsf{Con}^n_T(\psi_0).$ But then no sentence in $\mathfrak{A}$ is stronger than $\psi_0 \wedge \mathsf{Con}^\omega_T(\psi_0)$, contradicting the fact proved in Lemma \ref{arbitrary strong}, i.e., that $\mathfrak{A}$ contains arbitrarily strong true sentences.
\end{proof}
We are now ready to state and prove the theorem. Note that the following is a restatement of Theorem \ref{main second}.
\begin{theorem}
Let $T$ be a sound, effectively axiomatized extension of $\mathsf{EA}$. There is a $0'$ recursive monotone function $\mathfrak{g}$ such that, for every $\varphi$, $\mathfrak{g}(\varphi)$ is $\Pi_1$, yet for arbitrarily strong true sentences $\varphi$
$$[\varphi \wedge \mathfrak{g}(\varphi)]=[\varphi\wedge\mathsf{Con}_T(\varphi)]$$
and for arbitrarily strong true sentences $\varphi$
$$[\varphi \wedge \mathfrak{g}(\varphi)]=[\varphi].$$
\end{theorem}
\begin{proof}
We define the function $\mathfrak{g}$ as follows:
\begin{equation*}
\mathfrak{g}(\varphi)=
\begin{cases}
\bigwedge\{ \mathsf{Con}_T(\zeta) : \zeta\in\mathfrak{A} \textrm{ and } T+\varphi \vdash \zeta \} &\text{if $[\varphi]\neq[\bot]$}\\
\bot &\text{otherwise}
\end{cases}
\end{equation*}
We will check one-by-one that $\mathfrak{g}$ satisfies the properties ascribed to it in the statement of the theorem. We start by checking that $\mathfrak{g}$ is $0'$ recursive. In so doing, we will also demonstrate that $\mathfrak{g}$ is well defined, i.e., always produces a finitary sentence.
\begin{claim}
$\mathfrak{g}$ is $0'$ recursive.
\end{claim}
To verify that $\mathfrak{g}$ is $0'$ recursive, we informally describe an algorithm for calculating $\mathfrak{g}$ using $0'$ as an oracle. Here is the algorithm: Given an input $\varphi$, first use $0'$ to determine whether $[\varphi]=[\bot]$. If so, output $\bot$. Otherwise, we have to find all sentences $\zeta \in \mathfrak{A}$ such that $T+\varphi \vdash \zeta$. Let's say that $\varphi$ is the $n^{th}$ sentence in our G{\"o}del numbering of the language of arithmetic. By the construction of $\mathfrak{A}$, the only sentences in $\mathfrak{A}$ that $T+\varphi$ proves must have been numerated into $\mathfrak{A}$ by stage $n$. So find each of the finitely many sentences that were numerated into $\mathfrak{A}$ by stage $n$ in the construction of $\mathfrak{A}$. For any such sentence $\zeta$, use $0'$ to determine whether $T+\varphi \vdash \zeta$.\footnote{Querying $0'$ is not strictly necessary here. If we already know that two sentences $\varphi$ and $\zeta$ are consistent, we can determine effectively whether $T+\varphi \vdash \zeta$ by paying attention to details of the construction of $\mathfrak{A}$.} Once all sentences $\zeta \in \mathfrak{A}$ such that $T+\varphi \vdash \zeta$ have been found, output the conjunction of their consistency statements.
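The procedure just described can be summarized in the following Python sketch. The helpers passed to it are hypothetical stand-ins: \texttt{oracle\_proves} consults $0'$ to decide whether $T$ plus the first argument proves the second, \texttt{godel\_index} returns the position of a sentence in the fixed G{\"o}del numbering, and \texttt{numerated\_by\_stage} lists the finitely many sentences numerated into $\mathfrak{A}$ by a given stage; the empty conjunction is rendered as a trivially true sentence.
\begin{verbatim}
def g(phi, oracle_proves, godel_index, numerated_by_stage):
    # oracle_proves(phi, psi): does T + phi prove psi?  (decided with 0')
    # godel_index(phi): position of phi in the Goedel numbering
    # numerated_by_stage(n): sentences numerated into the set by stage n
    if oracle_proves(phi, "0 = 1"):            # [phi] = [bot]: output bot
        return "0 = 1"
    n = godel_index(phi)
    conjuncts = ["Con(" + zeta + ")"
                 for zeta in numerated_by_stage(n)
                 if oracle_proves(phi, zeta)]
    return " & ".join(conjuncts) if conjuncts else "0 = 0"
\end{verbatim}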
It is now routine to verify that the following claim is true:
\begin{claim}
$\mathfrak{g}$ is monotone and always produces a $\Pi_1$ sentence.
\end{claim}
We now work towards showing that $\mathfrak{g}$ behaves like the consistency operator for arbitrarily strong inputs.
\begin{claim}
For arbitrarily strong true sentences $\varphi$, $[\varphi \wedge \mathfrak{g}(\varphi)]=[\varphi\wedge\mathsf{Con}_T(\varphi)].$
\end{claim}
To see why the claim is true, note that whenever $\varphi\in\mathfrak{A}$, it follows that $[\mathfrak{g}(\varphi)]=[\mathsf{Con}_T(\varphi)]$, whence
$$[\varphi \wedge \mathfrak{g}(\varphi)]=[\varphi\wedge\mathsf{Con}_T(\varphi)].$$
Since $\mathfrak{A}$ contains arbitrarily strong true sentences, it follows immediately that for arbitrarily strong true sentences $\varphi$, $$[\varphi \wedge \mathfrak{g}(\varphi)]=[\varphi\wedge\mathsf{Con}_T(\varphi)].$$
Thus, to prove the theorem, it suffices to see that $\mathfrak{g}$ behaves like the identity operator on arbitrarily strong inputs.
\begin{claim}
\label{goal}
For arbitrarily strong true sentences $\varphi$, $[\varphi \wedge \mathfrak{g}(\varphi)]=[\varphi].$
\end{claim}
To this end, we will assume only that $\psi$ is a true sentence satisfying the following claim:
\begin{equation}
\label{eq:sharp}
\tag{$\sharp$}
T\nvdash (\psi \wedge \mathsf{Con}_T(\psi)) \rightarrow \theta_\psi.
\end{equation}
Recall that if $\psi$ was numerated into $\mathfrak{A}$ at stage $n$, then $\theta_\psi$ is a true sentence that is either the $(n+1)^{th}$ sentence in the G{\"o}del numbering of the language or the negation thereof (depending on which is true). By Lemma \ref{arbitrarily strong}, we know that for arbitrarily strong true sentences $\psi$, $\psi$ satisfies $(\sharp)$. We will then show that for any such $\psi$, where $\varphi$ is the sentence $(\psi \wedge \mathsf{Con}_T(\psi))$, the following identity holds: $$[\varphi \wedge \mathfrak{g}(\varphi)]=[\varphi],$$ thus certifying the truth of Claim \ref{goal}.
So let $\psi$ be a true sentence in $\mathfrak{A}$ satisfying ($\sharp$). By Lemma \ref{truth lemma}, there is a unique next true sentence numerated into $\mathfrak{A}$, and that sentence is $\big(\psi \wedge \mathsf{Con}_T(\psi)\wedge\theta_\psi \big)$. We assert the following claim:
\begin{claim}\label{big claim}
For all sentences $\zeta \in \mathfrak{A}$, if $T +\big(\psi \wedge \mathsf{Con}_T(\psi) \big) \vdash \zeta$ then also $T+\psi\vdash \zeta$.
\end{claim}
Let's see why Claim \ref{big claim} is true. Since $(\psi \wedge\mathsf{Con}_T(\psi))$ is true, any sentence $\zeta$ in $\mathfrak{A}$ that is implied by $\big(\psi \wedge \mathsf{Con}_T(\psi) \big)$ belongs to the true branch (see Corollary \ref{true branch}). By assumption $(\sharp)$, $(\psi \wedge \mathsf{Con}_T(\psi))$ has strength strictly intermediate between $\psi$ and $(\psi \wedge \mathsf{Con}_T(\psi) \wedge \theta_\psi)$. Accordingly, any sentence $\zeta$ in $\mathfrak{A}$ that is implied by $(\psi \wedge\mathsf{Con}_T(\psi))$ must have been numerated into $\mathfrak{A}$ before $(\psi \wedge \mathsf{Con}_T(\psi) \wedge \theta_\psi)$. Recall that $\psi$ is the sentence in the true branch numerated into $\mathfrak{A}$ immediately before $(\psi \wedge \mathsf{Con}_T(\psi) \wedge \theta_\psi)$. So $\zeta$ either is $\psi$ or was numerated into $\mathfrak{A}$ earlier than $\psi$. Either way, $\psi$ implies $\zeta$. This certifies the truth of Claim \ref{big claim}.
We now introduce the formula $\varphi := \big( \psi \wedge \mathsf{Con}_T(\psi) \big)$. We make the following claim:
\begin{claim}
\label{mini claim}
$[\varphi \wedge \mathfrak{g}(\varphi)] = [\varphi \wedge \bigwedge\{ \mathsf{Con}_T(\zeta) : \zeta\in\mathfrak{A} \textrm{ and } T + \psi \vdash \zeta \}]$.
\end{claim}
We argue for Claim \ref{mini claim} as follows:
\begin{flalign*}
[\varphi \wedge \mathfrak{g}(\varphi)] &= [\varphi \wedge \bigwedge\{ \mathsf{Con}_T(\zeta) : \zeta\in\mathfrak{A} \textrm{ and } T + \varphi \vdash \zeta \}] \textrm{ by definition of $\mathfrak{g}$}\\
&= [\varphi \wedge \bigwedge\{ \mathsf{Con}_T(\zeta) : \zeta\in\mathfrak{A} \textrm{ and } T + (\psi\wedge \mathsf{Con}_T(\psi)) \vdash \zeta \}] \textrm{ by choice of $\varphi$}\\
&= [\varphi \wedge \bigwedge\{ \mathsf{Con}_T(\zeta) : \zeta\in\mathfrak{A} \textrm{ and } T + \psi \vdash \zeta \}] \textrm{ by Claim \ref{big claim}.}
\end{flalign*}
With Claim \ref{mini claim} on board, we are now ready to prove that $[\varphi]=[\varphi \wedge \mathfrak{g}(\varphi)].$ We reason as follows:
\begin{flalign*}
T + \varphi + \mathsf{Con}_T(\psi) &\vdash \bigwedge\{ \mathsf{Con}_T(\zeta) : \zeta\in\mathfrak{A} \textrm{ and } T + \psi \vdash \zeta\} \textrm{ by the monotonicity of $\mathsf{Con}_T$}. \\
T + \varphi + \mathsf{Con}_T(\psi) &\vdash \mathfrak{g}(\varphi) \textrm{ by Claim \ref{mini claim}.}\\
T + \varphi & \vdash \mathfrak{g}(\varphi) \textrm{ since $T+\varphi \vdash \mathsf{Con}_T(\psi)$, by the choice of $\varphi$.}
\end{flalign*}
This trivially implies that: $$[\varphi]=[\varphi \wedge \mathfrak{g}(\varphi)].$$
This completes the proof of the theorem.
\end{proof}
\end{document}
\begin{document}
\prvastrana=1
\poslednastrana=5
\defK. Banaszek{K. Banaszek}
\defReconstruction of photon distribution{Reconstruction of photon distribution}
\headings{1}{5}
\title{RECONSTRUCTION OF PHOTON DISTRIBUTION WITH~POSITIVITY CONSTRAINTS}
\author{{Konrad Banaszek}\footnote{\email{[email protected]}}}
{Instytut Fizyki Teoretycznej, Uniwersytet Warszawski, Ho\.{z}a~69,
PL--00--681~Warszawa, Poland}
\datumy{}{}
\abstract{An iterative algorithm for reconstructing the photon
distribution from the random phase homodyne statistics is discussed. This
method, derived from the maximum-likelihood approach, yields a positive
definite estimate for the photon distribution with bounded statistical
errors even if numerical compensation for the detector imperfection
is applied.}
A fascinating topic studied extensively in quantum optics over the past
several years is the measurement of the quantum state of simple
physical systems [1]. The central question is how to reconstruct a family
of observables characterizing the quantum state from data that can be
obtained using a feasible experimental scheme [2]. Most of the
initial work on this topic was based on considerations of ideal,
noise-free distributions of quantum observables that can be obtained
only in the limit of an infinite number of measurements. However, any
realistic experiment yields only a finite sample of data. Recognition
of the full importance of this fact has led to the development of
reconstruction techniques specifically designed to deal with finite
ensembles of physical systems [3--6]. The main motivation for these
developments is to optimize the amount of information on the quantum
state that can be gained from a realistic measurement, and to
distinguish clearly between the actual data obtained from an
experiment and {\em a priori} assumptions used in the reconstruction
scheme.
One of the ideas that have proved to be fruitful in the field of quantum
state measurement is the principle of maximum-likelihood estimation.
In particular, it has been recently applied to the reconstruction of
the photon distribution from the random phase homodyne statistics [7]. The
essential advantage of the maximum-likelihood technique is that
physical constraints on the quantities to be determined can be
consistently built into the reconstruction scheme. This reduces the
overall statistical error, and automatically suppresses unphysical
artifacts generated by standard linear reconstruction techniques [8],
such as negative occupation probabilities of Fock states. However,
this is achieved at a significantly higher numerical cost than that
required by linear techniques.
Before we pass to the detailed discussion of the maximum-likelihood
method, let us demonstrate its capability to improve the accuracy of
the reconstruction. Fig.~1 depicts the final result of processing
Monte Carlo simulated homodyne data for a coherent state and a squeezed
vacuum state; both states have the average photon number equal to
one. We have assumed imperfect photodetection characterized by a quantum
efficiency $\eta=85\%$, which is numerically
compensated in the reconstruction scheme.
It is seen that the maximum-likelihood estimate, marked with dots,
is much closer to the exact distribution than probabilities obtained
using the standard linear method of pattern functions. Moreover, purely
artificial nonzero values for large photon numbers completely disappear
when the maximum-likelihood method is applied.
\begin{figure}
\caption{Fig.~1. Reconstruction of the photon distribution from Monte
Carlo homodyne data. The bars depict exact photon distributions for a
coherent state (top) and a squeezed vacuum state (bottom), both
states with the average photon number equal to one. For each of these
states, $N=10^5$ homodyne events were simulated. The events were
divided into 100 bins covering the interval $-5 \le q \le 5$. The
simulated data were used to reconstruct the photon distributions using
the maximum-likelihood method ($\bullet$) and the linear pattern
function technique ({\small $\diamondsuit$}).}
\end{figure}
So, how does the maximum-likelihood method work, and what algorithm
provides the bounded, positive definite estimate for the photon
distribution? Let us start the discussion from considering
the raw data recorded in
an experiment. A single experimental run of a random phase homodyne
setup yields a certain value of the quadrature observable $q$. This value,
after an appropriate discretization, is recorded as an event in the $\nu$th
bin. The probability of this event $p_\nu$ depends linearly on the
photon distribution $\{\varrho_n\}$:
\begin{equation}
p_\nu(\{\varrho_n\}) = \sum_{n} A_{\nu n} \varrho_{n}.
\end{equation}
For a fixed $n$, the set of coefficients $A_{\nu n}$ describes the
homodyne statistics for the $n$th Fock state.
Repeating the measurement $N$ times yields a frequency histogram $\{k_\nu\}$
specifying in how many runs the outcome was found in the $\nu$th bin.
These incomplete, finite data are the source of information on the photon
distribution. The question is how this information can be retrieved.
The answer given by the maximum-likelihood estimation method is to pick the
photon distribution for which the actually recorded series of measurements
was most likely to occur. Mathematically,
this is done by the maximization of the log-likelihood function [7]:
\begin{equation}
{\cal L}(\{\varrho_n\}) = \sum_{\nu} k_{\nu} \ln p_{\nu} (\{\varrho_n\})
- N \sum_{n} \varrho_{n},
\end{equation}
where $N=\sum_{\nu} k_{\nu}$ is the total number of experimental runs.
In the above formula, the method of Lagrange multipliers has been used to
satisfy the condition that the sum of all probabilities is equal to one.
As the estimate for $\{\varrho_n\}$ is supposed to describe a possible
physical situation, the search for the maximum likelihood is {\em a
priori} restricted to the manifold of real probability distributions.
Geometrically, this manifold is a simplex defined by the set of
inequalities:
\[ \varrho_{n} \ge 0, \hspace{1cm} n=0,1,2,\ldots
\]\[
\sum_{n} \varrho_{n} = 1.
\]
Thus, the physical constraints on the reconstructed quantities are
naturally incorporated in the maximum-likelihood reconstruction scheme.
The reconstruction of the photon distribution formulated in the
maximum-likelihood approach belongs to a very wide class of linear inverse
problems with positivity constraints~[9]. This class encompasses a variety
of problems appearing in as diverse fields as medical imaging and
financial markets. As discussed by Vardi and Lee [9], a
straightforward method for solving these problems is provided by the
so-called expectation-maximization (EM) algorithm. In the following, we
will present a heuristic derivation of this algorithm applied to the
reconstruction of the photon distribution.
Let us consider the necessary condition for the maximum of the
function ${\cal L}(\{\varrho_n\})$.
The partial derivatives of the log-likelihood function are given by:
\begin{equation}
\frac{\partial {\cal L}}{\partial\varrho_m} =
\sum_{\nu} k_{\nu} \frac{A_{\nu m}}{p_\nu(\{\varrho_n\})} - N
\end{equation}
For each $m$, the partial derivative ${\partial {\cal
L}}/{\partial\varrho_m}$ must vanish unless the maximum is located on
a face of the simplex for which $\varrho_m = 0$. Thus we have that
either ${\partial {\cal L}}/{\partial\varrho_m}=0$ or $\varrho_m = 0$.
These two possibilities can be written jointly as
\begin{equation}
\varrho_m \left(\sum_{\nu} k_{\nu}
\frac{A_{\nu m}}{p_\nu(\{\varrho_n\})} - N\right)
=0, \hspace{1cm} m=0,1,2,\ldots.
\end{equation}
It is convenient to rearrange this condition to the form which defines
the maximum-likelihood estimate as a fixed point of a certain nonlinear
transformation. Such a form immediately suggests an iterative procedure
for finding the fixed point by repeated application of the derived
transformation:
\begin{equation}
\varrho_{m}^{(i+1)} = \varrho_{m}^{(i)} \sum_{\nu}
\frac{k_\nu}{N} \frac{A_{\nu m}}{p_{\nu}(\{\varrho_n^{(i)}\})},
\hspace{1cm} m=0,1,2,\ldots
\end{equation}
where upper indices in parentheses denote consecutive approximations
for the photon distribution.
In fact, Eq.~(5) provides a ready-to-use iterative method for
reconstructing the photon distribution, which is a special case of the
EM algorithm [9]. Given an approximation of the photon distribution
$\{\varrho_n^{(i)}\}$, the next one is simply evaluated according to
the right hand side of Eq.~(5). When sufficient mathematical
conditions are fulfilled, this procedure converges to the
maximum-likelihood solution. Thus, the complex problem of constrained
multidimensional optimization is effectively solved with the help
of a simple iterative algorithm.
The maximum-likelihood estimates depicted in Fig.~1 were obtained from
8000 iterations of this algorithm. The initial distributions were
assumed to be uniform for photon numbers $n \le 20$ and equal to
zero above this cut-off value.
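For concreteness, the iteration of Eq.~(5) is simple enough to be stated as a short
program. The following Python fragment is a minimal illustration added here (not the
code used for Fig.~1): the binning, the cut-off, and the number of iterations match
the values quoted above, while the quadrature convention for the ideal response
matrix $A_{\nu n}$ and the helper names are assumptions of the sketch.
\begin{verbatim}
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def fock_quadrature_density(n, q):
    # |<q|n>|^2 with <q|n> = (2^n n! sqrt(pi))^(-1/2) H_n(q) exp(-q^2/2);
    # this quadrature scaling convention is an assumption of the sketch.
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    psi = hermval(q, coeffs) * np.exp(-q**2 / 2.0)
    psi /= np.sqrt(2.0**n * math.factorial(n) * np.sqrt(np.pi))
    return psi**2

def em_reconstruction(k, A, iterations=8000):
    # Iterate Eq. (5): rho_m <- rho_m * sum_nu (k_nu / N) A_{nu m} / p_nu,
    # starting from a uniform distribution below the cut-off.
    N = k.sum()
    rho = np.full(A.shape[1], 1.0 / A.shape[1])
    for _ in range(iterations):
        p = A @ rho                      # Eq. (1): predicted bin probabilities
        rho = rho * (A.T @ (k / (N * p)))
    return rho

# Ideal-detection response matrix with the binning of Fig. 1:
# 100 bins on -5 <= q <= 5 and a photon-number cut-off at n = 20.
edges = np.linspace(-5.0, 5.0, 101)
centres = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]
A = width * np.array([[fock_quadrature_density(n, q) for n in range(21)]
                      for q in centres])
# Given a measured histogram k (one entry per bin):
# rho = em_reconstruction(np.asarray(k, dtype=float), A)
\end{verbatim}
For $\eta<1$ the only change is the matrix $A_{\nu n}$; one possible way to construct
it is sketched after the discussion of Fig.~2 below.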
\begin{figure}
\caption{Fig.~2.
Components of the random phase homodyne statistics generated by different
Fock states, for decreasing efficiency $\eta$ of the homodyne detector.
The homodyne statistics of an arbitrary quantum state is a sum of these
components with weights defined by the photon distribution $\{\varrho_n\}$.}
\end{figure}
An essential yet delicate matter in quantum state measurements is the
role played by the detection efficiency. The impact of imperfect
detection on the maximum-likelihood reconstruction scheme can be
understood using a simple intuitive argument. According to Eq.~(1),
the homodyne statistics is a sum of components generated by the Fock
states $|n\rangle$, with the weights given by the appropriate
occupation probabilities $\varrho_n$. The shape of each component is
described by the set of coefficients $A_{\nu n}$, with $n$ treated as a
fixed parameter. Fig.~2 depicts several of these components for
different values of the detector efficiency $\eta$. In the case of unit
efficiency, the contribution originating from the Fock state
$|n\rangle$ is given by the squared modulus of the $n$th energy
eigenfunction in the position representation, and exhibits
characteristic oscillations. The important point is that each of these
contributions has a unique location of maxima and minima. Thus, each
number state leaves a specific, easily distinguishable trace in the
homodyne statistics. Roughly speaking, this is what allows the
maximum-likelihood method to resolve clearly the contributions from
different number states. When the detection is imperfect, homodyne
statistics generated by Fock states become blurred, and oscillations
disappear. This makes the shape of the contributions from higher Fock
states quite similar. Consequently, it becomes more difficult for the
maximum-likelihood method to resolve components generated by different
number states. Of course, this intuitive argument should be supported
by quantitative considerations. Mathematically, the effect of imperfect
detection is reflected by the shape of the log-likelihood function
${\cal L}$. For low detection efficiency, it becomes flatter, and its
maximum is not sharply peaked. This increases the statistical
uncertainty of the reconstructed photon distribution, and results in a
slower convergence of the EM algorithm.
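To make the preceding argument concrete, a common way to model an efficiency
$\eta<1$ is to precede ideal detection by a loss (beam splitter) channel, which
replaces the Fock state $|n\rangle$ by a binomial mixture of lower Fock states; the
columns of the ideal response matrix are then smeared accordingly. The following
fragment illustrates this construction (it is an assumed model added for
illustration and not necessarily the procedure used to generate Figs.~1 and~2).
\begin{verbatim}
import numpy as np
from math import comb

def smear_response(A_ideal, eta):
    # Response matrix for efficiency eta, assuming the loss-channel model
    # |n><n| -> sum_m C(n,m) eta^m (1-eta)^(n-m) |m><m| before ideal detection.
    nmax = A_ideal.shape[1]
    A = np.zeros_like(A_ideal)
    for n in range(nmax):
        for m in range(n + 1):
            A[:, n] += comb(n, m) * eta**m * (1.0 - eta)**(n - m) * A_ideal[:, m]
    return A

# Example: A_eta = smear_response(A, 0.85), with A the ideal matrix of the
# earlier sketch; using A_eta in the EM iteration compensates the 85%
# efficiency numerically, as described in the text.
\end{verbatim}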
Finally, let us note that the maximum-likelihood algorithm does not
make any use of the specific form of the coefficients $A_{\nu n}$
linking the photon distribution with the homodyne statistics. In fact,
it can be applied to any phase-insensitive measurement whose
statistical outcome depends on the photon distribution via a linear
relation of the form assumed in Eq.~(1).
\noindent {\bf Acknowledgements}
The author has benefited from discussions with Prof.\ Krzysztof
W\'{o}dkiewicz and Dr.\ Arkadiusz Or{\l}owski. This research was supported
by Komitet Bada\'{n} Naukowych and by Stypendium Krajowe dla M{\l}odych
Naukowc\'{o}w Fundacji na rzecz Nauki Polskiej.
\small
\kapitola{References}
\begin{description}
\itemsep0pt
\item{[1]}
See, for example, {\sl J. Mod. Opt.} {\bf 44} (1997) No.\ 11-12, Special
Issue on Quantum State Preparation and Measurement, ed.\ by M.~G.~Raymer
and W.~P.~Schleich.
\item{[2]}
\refer{G. M. D'Ariano, U. Leonhardt, H. Paul}{Phys. Rev. A}{52}{1995}{R1801}
\item{[3]}
\refer{V. Bu\v{z}ek, G. Adam, G. Drobn\'{y}}{Phys. Rev. A}{54}{1996}{804}
\item{[4]}
\refer{T. Opatrn\'{y}, D.-G. Welsch, W. Vogel}{Phys. Rev. A}{56}{1997}{1788}
\item{[5]}
\refer{Z. Hradil}{Phys. Rev. A}{55}{1997}{R1561}
\item{[6]}
\refer{R. Derka, V. Bu\v{z}ek, A. K. Ekert}{Phys. Rev. Lett.}{80}{1998}{1571}
\item{[7]}
K. Banaszek: {\sl ``Maximum-likelihood estimation of photon-number
distribution from homodyne statistics''}, to appear in {\sl Phys. Rev. A}.
\item{[8]}
\refer{U. Leonhardt, M. Munroe, T. Kiss, Th. Richter, M. G. Raymer}
{Opt. Commun.}{127}{1996}{144}
\item{[9]}
\refer{Y. Vardi, D. Lee}{J. R. Statist. Soc.\ {\rm B}}{55}{1993}{569}
\end{description}
\end{document}
\begin{document}
\author{Thomas Wannerer}
\address
{
\begin{flushleft}
Goethe-Universit\"at Frankfurt\\
Institut f\"ur Mathematik\\
Robert-Mayer-Str.\ 10\\
60325 Frankfurt am Main, Germany \\
\end{flushleft}
}
\email{[email protected]}
\title{Integral geometry of unitary area measures}
\thanks{Research supported by DFG grant BE 2484/5-1}
\subjclass[2000]{Primary 53C65; Secondary 52A22}
\begin{abstract} The existence of kinematic formulas for area measures with respect to any connected, closed subgroup of the orthogonal group acting transitively on the unit sphere is established.
In particular, the kinematic operator for area measures is shown to have the structure of a co-product.
In the case of the unitary group the algebra associated to this co-product is described explicitly in terms of generators and relations. As a consequence, a simple algorithm that yields explicit kinematic formulas
for unitary area measures is obtained.
\end{abstract}
\maketitle
\section{Introduction}
The answer to the question \textit{``What is the expected volume of the Minkowski sum of two convex bodies $K,L\subset \mathbb{R}^n$ if $L$ is rotated randomly?''} is given by the additive principal kinematic formula
\begin{equation}
\label{eq:principal}
\int_{SO(n)} \vol_n(K +gL) \; dg = \sum_{i+j=n} \binom{n}{i}^{-1} \frac{\omega_i\omega_j}{\omega_n} \mu_i(K) \mu_j(L),
\end{equation}
first proved by Blaschke \cite{blaschke55} for $n=2,3$ and in general by Chern \cite{chern52}. Here $SO(n)$ denotes the rotation group equipped with the Haar probability measure,
$\omega_i$ is the volume of the $i$-dimensional euclidean unit ball and $\mu_i(K)$ is the $i$th intrinsic volume of $K$, a suitably normalized $(n-i-1)$-th order mean curvature integral. The principal kinematic formula
\eqref{eq:principal} plays a central role in classical integral geometry, since it encompasses many results in euclidean
integral geometry as special or limiting cases (see, e.g., \cites{klain_rota97,santalo04,schneider_weil08} for the history and applications of integral geometry). In this paper we establish a hermitian,
local version of \eqref{eq:principal}. Here `local' means that the intrinsic volumes may be replaced
by local curvature integrals.
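For instance, for $n=2$ formula \eqref{eq:principal} reduces to the classical identity
$$\int_{SO(2)} \vol_2(K +gL) \; dg = \vol_2(K) + \vol_2(L) + \frac{1}{2\pi}\, P(K)\, P(L),$$
where $P$ denotes the perimeter.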
Over the last ten years there have been tremendous advances in integral geometry that completely reshaped the subject. The foundation for this progress was the discovery
of various algebraic structures on the space of valuations by Alesker \cites{alesker01, alesker03,alesker04}. Here a valuation is a function $\phi\colon \mathcal{K}(\mathbb{R}^n)\to \mathbb{R}$ on the space of convex bodies satisfying
the additivity property
\begin{equation}\label{eq:def_val}\phi(K\cup L)= \phi(K) + \phi(L) -\phi(K\cap L)\end{equation}
whenever $K\cup L $ is convex. The theory of valuations, which turned out to be the key to modern integral geometry, is a very rich one and dates back to Dehn's solution of Hilbert's 3rd problem
(see \cites{abardia_etal12,alesker99b,haberl12, haberl_parapatits13,klain99,ludwig03,ludwig_reitzner10, parapatits_wannerer13, schuster10} and the references therein).
Building on the work of Alesker, it was first realized by Fu \cite{fu06} and then put in more precise terms by Bernig and Fu \cite{bernig_fu06}, that the classical kinematic operators are co-products
(which is reflected by the bilinear structure of the principal kinematic formula \eqref{eq:principal}) and are in this sense dual to the only recently discovered products on valuations.
This crucial discovery provided not
only an explanation for the algebraic-combinatorial behavior of the numeric constants appearing in kinematic formulas, but also opened the door to determining kinematic formulas explicitly in
cases different from the euclidean. For example,
the problem of evaluating the integral in \eqref{eq:principal}
with $\mathbb{R}^n$ replaced by $\mathbb{C}^n$ and
$SO(n)$ by the unitary group $U(n)$ seemed for many years out of reach, but has recently been solved by Bernig and Fu in the landmark paper \cite{bernig_fu11}.
In this article we generalize the results of Bernig and Fu \cite{bernig_fu11} and determine the local additive kinematic formulas for the unitary group completely. Using a different approach,
the problem of localizing the intersectional
kinematic formulas has only recently been solved by Bernig, Fu, and Solanes \cite{bernig_etal13}.
To explain what we mean by `local' let us first consider
the classical euclidean case. It is a well-known fact from the geometry of convex bodies that to each convex body $K\subset \mathbb{R}^n$ one can associate a measure $S_{n-1}(K,\; \cdot\;)$ on the unit sphere $S^{n-1}$,
called the area measure of $K$, such that
\begin{equation}
\label{eq:vol_areameasure}
\vol_n(K)= \frac{1}{n} \int_{ S^{n-1}} \left\langle u,x\right\rangle \; dS_{n-1}(K,u),
\end{equation}
where $x$ is a boundary point of $K$ with outer unit normal $u$ and $\left\langle u,x\right\rangle$ denotes the euclidean inner product. If the boundary of $K$ is $C^2$ and has all principal curvatures positive,
then the measure $S_{n-1}(K,\; \cdot \;)$ is absolutely continuous with density
equal to the reciprocal of the Gauss curvature at $x$. Schneider \cite{schneider75} proved that for all convex bodies $K,L\subset \mathbb{R}^n$ and Borel subsets $U,V\subset S^{n-1}$ of the unit sphere
\begin{equation}\label{eq:principal_local}\int_{SO(n)} S_{n-1}(K+ gL, U\cap g V) \; dg = \frac{1}{n\omega_n} \sum_{i+j=n-1} \binom{n-1}{i} S_i(K,U) S_j(L,V),\end{equation}
where $S_{i}(K,\;\cdot\;)$ denotes the $i$th area measure of $K$. Using relation \eqref{eq:vol_areameasure} and similar relations between $\mu_{i}(K)$ and $S_{i-1}(K,\; \cdot\;)$,
the global principal kinematic formula \eqref{eq:principal} may be deduced from its local version \eqref{eq:principal_local}.
Let us describe now the results of the present article. It is far from obvious that one may replace $\mathbb{R}^n$ by $\mathbb{C}^n=\mathbb{R}^{2n}$ and $SO(n)$ by $U(n)$ and still obtain a
bilinear splitting of the integral in \eqref{eq:principal_local}. In Section~\ref{sec:existence} we use a method developed by Fu \cite{fu90} based on fiber integration to prove that the
special orthogonal group may be replaced by any connected, closed subgroup $G\subset SO(n)$
acting transitively on the unit sphere. In particular, we may replace $SO(n)$ by $U(n)$.
Let us denote by $\Val^G$ the space of $G$-invariant, translation-invariant, continuous valuations \cite{bernig_fu06} and by $\Area^G$ the space of $G$-invariant area measures \cite{wannerer13}.
As indicated above, in the case of global kinematic formulas,
the kinematic operator is a co-product which corresponds (after an identification of $\Val^{G}$ with its dual space via Poincar\'e duality \cite{alesker04}) to a product on $\Val^G$. This statement is known as the
fundamental theorem of algebraic integral geometry (ftaig), see \cite{bernig_fu06}.
If we want to use this approach to find explicit
local kinematic formulas, we are faced with two problems: First, no product on $\Area^G$ is known and second, it is not clear how to identify $\Area^{G}$ with its dual space.
However, as was shown by the author in \cite{wannerer13}, $\Area^G$ possesses the structure of a module over $\Val^G$. This property and the techniques of \cite{wannerer13} will play an important role in the present article.
Lacking a canonical identification of $\Area^{G}$ with its dual space, we will work directly with the product on $\Area^{G*}$ induced by the kinematic operator. This approach is new and will turn out to be
surprisingly convenient.
We denote by $\mathbb{R}[ s, t, v]$ the polynomial algebra generated by the variables $ s$, $t$, and $ v$.
The main result of this paper is an explicit description of the algebra $\Area^{U(n)*}$.
\begin{theoremA}
The algebra $\Area^{U(n)*}$ is isomorphic to $\mathbb{R}[s, t, v] /I_n$, where the ideal $I_n$ is generated by
\begin{equation}\label{eq:algebra_relations}f_{n+1}(s, t), \qquad f_{n+2}( s, t),\qquad p_n( s, t)-q_{n-1}( s, t) v,\qquad p_n( s, t) v,\end{equation}
and
$$v^2.$$
The polynomials $f_k$, $p_k$, and $q_k$ are given by the Taylor series expansions
\begin{align*}
\log(1+ t x + s x^2) &= \sum_{k=0}^\infty f_k( s, t) x^k,\\
\frac{1}{1+ t x + s x^2} &= \sum_{k=0}^\infty p_k( s, t) x^k, \ \text{and } \\
-\frac{1}{(1+ t x + s x^2)^2} &= \sum_{k=0}^\infty q_k( s, t) x^k.
\end{align*}
\end{theoremA}
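For orientation, the expansions above begin with
\begin{align*}
f_1 &= t, & f_2 &= s-\tfrac{1}{2}t^2, & f_3 &= \tfrac{1}{3}t^3 - st,\\
p_0 &= 1, & p_1 &= -t, & p_2 &= t^2 - s,\\
q_0 &= -1, & q_1 &= 2t, & q_2 &= 2s - 3t^2.
\end{align*}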
In Section~\ref{sec:explicit} we show how Theorem~A can be used to achieve the ultimate goal of obtaining explicit kinematic formulas for unitary area measures.
We give examples and provide a simple algorithm which can be implemented in any computer algebra package.
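Since the generators \eqref{eq:algebra_relations} are defined through Taylor expansions, they are easy to produce with a computer algebra package. The following Python/SymPy fragment is a small illustration added here (the helper names are hypothetical and it is not the algorithm of Section~\ref{sec:explicit} itself); it returns the generators of $I_n$ for a given $n$.
\begin{verbatim}
import sympy as sp

s, t, v, x = sp.symbols('s t v x')

def taylor_coeffs(expr, order):
    # Coefficients of x^0, ..., x^order in the Taylor expansion of expr at x = 0.
    poly = sp.expand(sp.series(expr, x, 0, order + 1).removeO())
    return [sp.expand(poly.coeff(x, k)) for k in range(order + 1)]

def ideal_generators(n):
    # Generators of I_n as listed in Theorem A (illustrative helper only).
    f = taylor_coeffs(sp.log(1 + t*x + s*x**2), n + 2)
    p = taylor_coeffs(1/(1 + t*x + s*x**2), n)
    q = taylor_coeffs(-1/(1 + t*x + s*x**2)**2, n - 1)
    return [f[n + 1], f[n + 2],
            sp.expand(p[n] - q[n - 1]*v),
            sp.expand(p[n]*v),
            v**2]

# Example: the relations defining Area^{U(2)*}.
for g in ideal_generators(2):
    print(g)
\end{verbatim}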
We remark that the polynomials $f_k$ have appeared already in Fu's description of the unitary valuation algebra \cite{fu06} and so have the polynomials $p_k$ and $q_k$ in the description of
the module of unitary area measures by the author \cite{wannerer13}. The isomorphism in Theorem~A is given explicitly by
\begin{align*} t \quad & \longmapsto \quad \bar t:=\frac{2 \omega_{2n-2}}{\omega_{2n-1}}(B_{1,0}^* + \Gamma_{1,0}^*),\\
s \quad & \longmapsto \quad \bar s:=\frac{n}{\pi} \Gamma^*_{2,1}, \text{ and}\\
v \quad & \longmapsto \quad \bar v:=\frac{2\omega_{2n-2}}{\omega_{2n-1}} B_{1,0}^*.
\end{align*}
Here $B_{k,p}^*$, $\Gamma_{k,q}^*$ are elements of the basis dual to the $B_{k,q}$-$\Gamma_{k,q}$ basis for $\Area^{U(n)}$ used in \cite{wannerer13}.
Once the general set-up has been developed, the main difficulty in proving Theorem~A lies in proving $\bar v^2=0$; in fact, using the techniques of \cite{wannerer13}, the other relations \eqref{eq:algebra_relations} can be deduced from the explicit description of
the module $\Area^{U(n)}$, which has been obtained by the author in \cite{wannerer13}.
The proof of $\bar v^2=0$, however, requires once again new ideas and tools.
The main new idea is to relate the kinematic operator for area measures with a new kinematic operator for tensor valuations, i.e.\ valuations in the sense of \eqref{eq:def_val}, but with values in
the symmetric algebra on $\mathbb{R}^n$. Although tensor valuations compatible with the special orthogonal group have been studied
by several authors \cites{alesker99a,alesker_etal11,hug_etal07,mcmullen97} and a family of kinematic formulas has been established \cite{hug_etal08}, the integral geometry of tensor valuations is for the most part
still unexplored. In recent years, tensor valuations have received increased attention in applied sciences \cites{beisbart_etal02, beisbart_etal06, schroeder_etal10}.
Let $\Val^{r,G}$ denote the space of translation-invariant, continuous, $G$-covariant, rank $r$ tensor valuations. Since already in the case $G=SO(n)$
a full investigation of these valuations and their integral geometry would be beyond the scope of this article,
we confine ourselves to introducing only those concepts which are necessary to prove Theorem A. In Section~\ref{sec:tensor}, we introduce for nonnegative integers $r_1,r_2$ a kinematic operator
$a^{r_1,r_2}\colon \Val^{r_1+r_2,G}\rightarrow \Val^{r_1,G}\otimes \Val^{r_2,G}$ by
$$a^{r_1,r_2}(\Phi)(K,L) = \int_G (\operatorname{id}^{\otimes r_1} \otimes g^{\otimes r_2}) \Phi(K+ g^{-1}L)\; dg$$
whenever $K,L\subset \mathbb{R}^n$ are convex bodies and define for $r\geq 0$ the $r$th order moment map $M^r\colon \Area^G\rightarrow \Val^{r,G}$ by
$$M^r(\Phi)(K) = \int_{S^{n-1}} u^{ r} \; d\Phi(K,u).$$
We then prove that
\begin{equation}\label{eq:intro_momentkin}
a^{r_1,r_2} \circ M^{r_1+r_2}(\Phi) = (M^{r_1}\otimes M^{r_2}) \circ A (\Phi),\qquad \Phi\in \Area^{G},
\end{equation}
where $A\colon \Area^G\rightarrow \Area^G\otimes \Area^G$ denotes the kinematic operator for area measures. Moreover, we extend the Bernig-Fu convolution and Alesker's Poincar\'e duality for scalar valuations to
tensor valuations. In particular, we show that a version of the fundamental theorem of algebraic integral geometry (ftaig) holds for tensor valuations:
\begin{theoremB}
Let $r_1,r_2$ be nonnegative integers and denote by $c_G\colon \Val^{r_1,G}\otimes \Val^{r_2,G}\rightarrow \Val^{r_1+r_2, G}$ and $\widehat{\PD}\colon \Val^{r,G}\rightarrow (\Val^{r,G})^*$ the convolution product and
Poincar\'e duality. Then
$$a^{r_1,r_2} = (\widehat{\PD}^{-1}\otimes \widehat{\PD}^{-1}) \circ c^*_G \circ \widehat{\PD}.$$
\end{theoremB}
After these general results, we show that the second order moment map $M^2$ is injective in the case $G=U(n)$. Hence \eqref{eq:intro_momentkin} with $r_1=r_2=2$ can be used to obtain --- at least in principle --- the
kinematic formula for area measures from the kinematic formulas for tensor valuations. In a final step, we use Theorem~B to prove $\bar v^2=0$.
\section{Existence of kinematic formulas for area measures} \label{sec:existence}
The goal of this section is to prove that if we replace the special orthogonal group by any connected, closed subgroup $G\subset SO(n)$ acting transitively on the unit sphere, we still
obtain a bilinear splitting of the integral in \eqref{eq:principal_local}. To make this statement precise, we need to introduce some notation.
In the following $G\subset SO(n)$ denotes a connected, closed subgroup acting transitively on the unit sphere. We will always assume that $n\geq 2$. We denote by $S\mathbb{R}^n $ the sphere bundle of $\mathbb{R}^n$
and write $\pi_1\colon S\mathbb{R}^n\rightarrow \mathbb{R}^n$ and $\pi_2\colon S\mathbb{R}^n\rightarrow S^{n-1}$ for the natural projections. The group $\overline{G}=G\ltimes \mathbb{R}^n$ generated by
$G$ and translations of $\mathbb{R}^n$ acts on $\mathbb{R}^n$ by isometries and hence there is a canonical action of $\overline{G}$ on the sphere bundle. The space of differential forms on $S\mathbb{R}^n$ invariant under this action is denoted by $\Omega(S\mathbb{R}^n)^G$. Since $G$ acts transitively
on the unit sphere, $\Omega(S\mathbb{R}^n)^G$ is finite-dimensional. If $\phi:S^{n-1}\rightarrow \mathbb{R}$ is a function on the unit sphere, we denote by $(g \phi)(u)=\phi(g^{-1}u)$ left translation by $g\in G$.
Let $K\subset \mathbb{R}^n$ be a convex body, i.e.\ a nonempty, compact, convex subset. The normal cycle $N(K)$ of $K$ consists of those pairs $(x,u)\in S\mathbb{R}^n$ such that $x$ is a
boundary point of $K$ and $u$ is an outer unit normal of $K$ at $x$. Since $N(K)$
is a countably $(n-1)$-rectifiable set and carries a natural orientation (see \cites{fu90, fu94,alesker_fu08}), we can integrate $(n-1)$-forms over it,
$$N(K)(\omega):=\int_{N(K)} \omega,\qquad \omega\in \Omega^{n-1}(S\mathbb{R}^n).$$
For example, it is not difficult to see that there exists a form $\kappa_{n-1}\in \Omega^{n-1}(S\mathbb{R}^n)^{SO(n)}$ such that
$$ \int_{S^{n-1}} \phi(u)\; dS_{n-1}(K,u) = N(K)(\pi^*_2 \phi \cdot \kappa_{n-1})$$
for every bounded Baire function $\phi$.
\begin{theorem}\label{thm:existence}
Let $\beta_1,\ldots, \beta_m$ be a basis of $\Omega^{n-1}(S\mathbb{R}^n)^G$.
If $\omega\in \Omega^{n-1}(S\mathbb{R}^n)^G$, then there exist constants $c_{ij}$ such that
$$\int_G N(K+gL)( \pi_2^*(\phi \; g\psi) \cdot \omega) \; dg = \sum_{i,j=1}^{m} c_{ij} N(K)( \pi_2^*\phi \cdot \beta_i)\; N(L)( \pi_2^*\psi \cdot \beta_j)$$
for all bounded Baire functions $\phi, \psi$ on $S^{n-1}$ and convex bodies $K, L\subset \mathbb{R}^n$.
\end{theorem}
In \cite{fu90} Fu proved the existence of intersectional kinematic formulas in a very general setting. In this
section we adapt Fu's method to prove Theorem~\ref{thm:existence}.
\subsection{Fiber bundles and fiber integration} Before we prove the existence of general additive kinematic formulas, we first need to recall the definition and main properties of fiber integration, cf.\
Chapter~VII of \cite{greub_etal72} and \cite{fu90}.
Let $\mathcal{B}=(E,\pi, M, F)$ be a smooth fiber bundle with total space $E$,
base space $M$, projection $\pi\colon E\rightarrow M$ and fiber $F$. Recall that the bundle $\mathcal{B}$ is orientable if and only if the fiber $F$ is orientable and there exists an open cover
$\{U_i\}$ of $M$ and local trivializations $\phi_i \colon \pi^{-1}(U_i) \rightarrow U_i \times F$ such that the transition maps
$c_{ij}(x)\colon F\rightarrow F$, $\phi_i \circ \phi_j^{-1}(x,y)=(x,c_{ij}(x)y)$, are orientation preserving diffeomorphisms. A bundle $\mathcal{B}$ is oriented by an orientation of $F$ together with an open cover of the
base corresponding to a collection of local trivializations with orientation preserving transition maps.
A differential form $\omega$ on $E$ is said to have fiber-compact support if for every compact $K\subset M$ the intersection $\pi^{-1}(K)\cap \supp \omega$ is compact. The space of fiber-compact differential forms
is denoted by $\Omega_F(E)$.
If $(E,\pi,M,F)$ is an oriented bundle with $\dim M= n$ and $\dim F=r$, then there exists a canonical linear map
$\pi_*\colon \Omega_F(E) \rightarrow \Omega(M)$ called fiber integration. It is defined as follows:
Let $\omega$ be a fiber-compact form on $E$ and suppose $\phi:\pi^{-1}(U)\rightarrow U\times F$ is a
local trivialization compatible with the orientation of the bundle. Shrinking $U$ if necessary, we may assume that $x_1,\ldots,x_n$ are coordinates for $U$. Then $(\phi^{-1})^*\omega$ can be written uniquely
as
$$(\phi^{-1})^*\omega(x,y) = \sum_{k=0}^n\sum_{i_1<\ldots < i_k} dx_{i_1}\wedge \cdots \wedge dx_{i_k} \wedge \omega^{(i_1,\ldots,i_k)}(x,y),\qquad (x,y)\in U\times F,$$
where $\partial x_j\lrcorner \omega^{(i_1,\ldots,i_k)}(x,y)=0$ for every coordinate vector field $\partial x_j$. Then $y\mapsto \omega^{(i_1,\ldots,i_k)}(x,y)$ is an element of $\Omega_c(F)$
and the integral of $\omega$ over the fibers is defined by
$$\pi_*\omega(x):=\sum_{k=0}^n\sum_{i_1<\ldots < i_k} dx_{i_1}\wedge \cdots \wedge dx_{i_k} \int_F \omega^{(i_1,\ldots,i_k)}_x$$
with the convention $\int_F \xi=0$ for forms $\xi$ of degree less than $r$.
If $M$ is orientable and $E$ is equipped with the local product orientation (see \cite{greub_etal72}*{p. 288}), then
$$\int_M \alpha \wedge \pi_* \omega= \int_E \pi^* \alpha \wedge \omega$$
for every $\omega\in \Omega_F^k(E)$ and $\alpha\in \Omega_c^{n+r-k}(M)$. In particular, if $N\subset M$ is an oriented, compact submanifold with $\dim N= q$ and $\pi^{-1}(N)$ is equipped with the local product orientation, then the following version of Fubini's theorem holds:
If $\omega\in \Omega_F^{q+r}(E)$, then
\begin{equation}\label{eq:fiber_fubini}
\int_N \pi_* \omega =\int_{\pi^{-1}(N)} \omega.
\end{equation}
Let $(E,\pi, M, F)$ and $(E', \pi',M',F)$ be oriented bundles with the same fiber $F$ and let $\bar f\colon E'\rightarrow E$ be a bundle map covering the smooth map $f\colon M'\rightarrow M$.
If there exists an open cover $\{U\}$ of $M$ together with local trivializations $\phi\colon \pi^{-1}(U)\rightarrow U\times F$ and $\phi'\colon \pi^{-1}(f^{-1}(U))\rightarrow f^{-1}(U)\times F$ compatible with the
orientations of the bundles such that
$$\phi^{-1}\circ \bar f\circ \phi' = f\times \operatorname{id}_F,$$
then
\begin{equation}
\label{eq:bundlemap}
f^*\circ \pi_* = \pi_*' \circ \bar f^*,
\end{equation}
see \cite{fu90}.
\subsection{A slicing formula}
It will be helpful to restate the general slicing formula for currents \cite{federer69}*{4.3.2} in the special setting in which we are going to apply it.
Suppose $f\colon X\to Y$ is a surjective, smooth map between compact, smooth manifolds with $m=\dim X$ and $n=\dim Y$. Suppose that $X$ is oriented by a
smooth $m$-vector field $\xi$ and that $Y$ is oriented by a smooth $n$-form $dy$. By Sard's theorem, $f^{-1}(y)$ is a smooth submanifold for almost every $y\in Y$ and is
oriented by the smooth $(m-n)$-vector field
\begin{equation}
\label{eq:slice_orientation}
\zeta = \xi\llcorner f^*dy
\end{equation}
on $X$. If we define the measure $\mu$ by $\lcur Y \rcur \llcorner dy$, then by \cite{federer69}*{4.3.2, 4.3.8}
\begin{equation}
\label{eq:slicing}
\int_Y \Phi(y) \left( \int_{f^{-1}(y) } \omega \right) \; d\mu(y) = \int_X f^*(\Phi \wedge dy)\wedge \omega
\end{equation}
for every bounded Baire function $\Phi\colon Y\to \mathbb{R}$ and every $(m-n)$-form $\omega$ on $X$.
\subsection{Proof of Theorem~\ref{thm:existence}}
Let us fix a point $o\in S^{n-1}$ and let $G_o\subset G$ be the isotropy group of $o$. Note that $G_o$ is always connected. Indeed, if $n=2$, then $G_o=\{e\}$; if $n\geq 3$, then $S^{n-1}$ is simply connected and the connectedness of $G_o$ follows from the exact homotopy sequence of the fibration $G_o\to G\to S^{n-1}$.
We put
$$E= \{ (g,\xi,\eta, \zeta)\in G\times (S\mathbb{R}^n )^3: \pi_1(\zeta)= \pi_1(\xi) + g\pi_1(\eta), \ \pi_2(\zeta)= \pi_2(\xi)= g\pi_2(\eta)\}$$
and define a projection $p\colon E\rightarrow S\mathbb{R}^n \times S\mathbb{R}^n$ by $p(g,\xi,\eta, \zeta)=(\xi, \eta)$. We claim that $p\colon E\rightarrow S\mathbb{R}^n \times S\mathbb{R}^n$ is an orientable fiber bundle with fiber $G_o$.
In fact, suppose $U,V\subset S\mathbb{R}^n$ are open sets and $\phi\colon U \rightarrow G$, $\psi\colon V \rightarrow G$ are smooth maps satisfying $\phi(\xi)o= \pi_2(\xi)$, $\xi\in U$, and $\psi(\eta) o = \pi_2(\eta)$, $\eta\in V$. Then
\begin{align*} \Phi\colon &p^{-1}(U\times V) \longrightarrow (U\times V) \times G_o \\
&(g,\xi,\eta, \zeta)\mapsto (\xi,\eta,\phi(\xi)^{-1} g \psi(\eta))
\end{align*}
is a local trivialization for $p$.
If $\Phi'\colon p^{-1}(U'\times V') \longrightarrow (U'\times V') \times G_o$ is another local trivialization constructed in the same way, then the transition map $c$ is given by
\begin{align*}
c \colon & (U\times V) \cap (U'\times V') \rightarrow \operatorname{Diff}(G_o)\\
& (\xi, \eta)\mapsto \left[ g \mapsto (\phi')(\xi)^{-1} \phi(\xi) \cdot g\cdot \psi(\eta)^{-1} \psi'(\eta)\right].
\end{align*}
Note that both $(\phi')(\xi)^{-1} \phi(\xi)$ and $ \psi(\eta)^{-1} \psi'(\eta)$ are elements of $G_o$. Since $G_o$ is compact and connected, left and right translations are orientation preserving.
Hence the transition map consists of orientation preserving diffeomorphisms. We conclude that our bundle $(E, p, S\mathbb{R}^n \times S\mathbb{R}^n, G_o)$ is orientable.
We define another projection $q\colon E\rightarrow G\times S\mathbb{R}^n$ by $q(g,\xi,\eta, \zeta)=(g,\zeta)$. Observe that $q\colon E\rightarrow G\times S\mathbb{R}^n$ is isomorphic
to the trivial bundle $G\times S\mathbb{R}^n\times \mathbb{R}^n$.
For every element $g\in \overline G$ we define $g_0\in G$ by $g_0(x):=g(x)-g(0)$. We let the group $\overline{G}\times \overline{G}$ act on the total
space $E$ by
$$(h,k)\cdot(g,\xi, \eta, \zeta):= (h_0gk_0^{-1}, h \xi, k \eta, h \zeta),\qquad (h,k)\in \overline{G}\times \overline{G},$$
and on the base spaces $S\mathbb{R}^n\times S\mathbb{R}^n$ and $G\times S\mathbb{R}^n$
by
$$(h,k)\cdot (\xi,\eta):= (h\xi,k\eta)\qquad \text{and} \qquad (h,k)\cdot (g,\zeta):=(h_0gk_0^{-1}, h \zeta).$$
Clearly, the projections $p$ and $q$ commute with these group actions.
\begin{lemma}\label{lem:invariance}
Suppose $\omega\in\Omega(E)$ is $\overline{G}\times \overline{G}$-invariant. Then the fiber integral $p_*\omega$ is a $\overline{G}\times \overline{G}$-invariant form on $S\mathbb{R}^n\times S \mathbb{R}^n$.
\end{lemma}
\begin{proof}
Fix $(h,k)\in \overline{G}\times \overline{G}$ and $(\xi_0,\eta_0)\in S\mathbb{R}^n\times S\mathbb{R}^n$. Choose open neighborhoods $U$, $V$ of $\xi_0$ and $\eta_0$ such that there exist smooth maps
$\phi\colon U \rightarrow G$ and $\psi\colon V \rightarrow G$ with
$\phi(\xi)o= \pi_2(\xi)$ and $\psi(\eta) o = \pi_2(\eta)$. Put $W=U\times V$ and define a local trivialization
\begin{align*}
\Phi\colon &p^{-1}(W) \longrightarrow W \times G_o \\
&(g,\xi,\eta, \zeta)\mapsto (\xi,\eta,\phi(\xi)^{-1} g \psi(\eta)).
\end{align*}
Define also maps $f\colon S\mathbb{R}^n\times S\mathbb{R}^n\rightarrow S\mathbb{R}^n\times S\mathbb{R}^n$ and $\bar f\colon E\rightarrow E$ by
$$f(\xi,\eta) = (h,k)\cdot (\xi, \eta) \quad \text{and} \quad \bar f(g,\xi,\eta, \zeta) =(h,k)\cdot (g,\xi, \eta, \zeta)$$
Observe that
\begin{align*}
\Phi'\colon &p^{-1}(f(W)) \longrightarrow f(W) \times G_o \\
&(g,\xi,\eta, \zeta)\mapsto (\xi,\eta,\phi(\xi)^{-1}h_0^{-1} g k_0\psi(\eta))
\end{align*}
is also a local trivialization and that
$$\Phi'^{-1} \circ \bar f\circ \Phi = f\times \operatorname{id}_{G_o}.$$
Thus we can apply \eqref{eq:bundlemap} and obtain $f^*\circ p_* = p_* \circ \bar f^*$.
This shows that $p_*\omega$ is $\overline{G}\times \overline{G}$-invariant.
\end{proof}
Finally, we need to recall two facts concerning the normal cycles of convex bodies (see, e.g., the proof of Lemma~2.1.3 in \cite{alesker_fu08}).
\begin{lemma}\label{lem_massbound}
Let $\{K_\alpha \}$ be a collection of convex bodies. If there exists some ball containing all $K_\alpha$, then there exists a constant $C$ such that $$\sup_\alpha \mathbf{M}(N(K_\alpha))\leq C,$$ where $\mathbf{M}(T)$
denotes the mass of a current $T$, see \cite{federer69}*{p. 358}.
\end{lemma}
\begin{lemma}\label{lem_weakconvergence}
Let $\omega\in \Omega^{n-1}(S\mathbb{R}^n)$ and let $f: S^{n-1}\rightarrow \mathbb{R}$ be a continuous function. If $K_i\rightarrow K$ is a convergent sequence of convex bodies in $\mathbb{R}^n$, then
$$N(K_i)(\pi_2^* f\cdot \omega)\rightarrow N(K)(\pi_2^*f\cdot \omega).$$
\end{lemma}
\begin{proof}[Conclusion of the proof of Theorem~\ref{thm:existence}]
We will first prove Theorem~\ref{thm:existence} under the additional assumption that $\phi, \psi$ are smooth and that $K, L$ have smooth boundaries and all principal curvatures are positive. The general case
will then follow by approximation.
We define $q_1\colon E\rightarrow G$ by $q_1(g,\xi,\eta,\zeta)=g$ and $q_2\colon E\rightarrow S\mathbb{R}^n$ by $q_2(g,\xi,\eta,\zeta)= \zeta$. If we put $C= p^{-1}(N(K)\times N(L))$, then
$$ q_2(q_1^{-1}(g)\cap C) = N(K+g L).$$
In fact, this was the motivation for the definition of the bundle $E$.
Let $dg$ be a bi-invariant volume form on $G$ such that $\int_G dg=1$. Since $q_1\colon C\rightarrow G$ is a smooth surjective map, Sard's theorem implies that $q_1^{-1}(g)\cap C$ is a smooth submanifold for a.e.\ $g\in G$.
Moreover, since the boundary of $K$ contains no segments,
$q_2 \colon q_1^{-1}(g)\cap C \rightarrow N(K+gL)$ is a diffeomorphism for a.e.\ $g\in G$. Since the bundle $p\colon C\to N(K)\times N(L)$ is orientable and the normal cycles carry a natural orientation,
we can equip $C$ with the local product orientation. By \eqref{eq:slice_orientation}, this induces an orientation
on almost every slice $q_1^{-1}(g)\cap C$. We choose the orientation for the bundle such that $q_2\colon q_1^{-1}(g)\cap C \to N(K+gL)$ is orientation preserving for a.e.\ $g\in G$.
Hence
$$N(K+gL)(\omega) = \int_{q_1^{-1}(g)\cap C} q_2^* \omega \qquad \text{for a.e. } g\in G$$
and for every $(n-1)$-form $\omega$ on $S\mathbb{R}^n$.
We define $p_1\colon E\rightarrow S\mathbb{R}^n$ by $p_1(g,\xi,\eta,\zeta)= \xi$ and $p_2\colon E\rightarrow S\mathbb{R}^n$ by $p_2(g,\xi,\eta,\zeta)=\eta$.
Since $q_2^*\pi_2^* \phi = p^*_1 \pi_2^*\phi$ and $q_2^*\pi_2^*(g \psi) = p^*_2 \pi_2^*\psi$,
we obtain
$$ q^*_2(\pi_2^*(\phi \; g\psi) \cdot \omega) = p^*_1 \pi_2^*\phi \cdot p^*_2 \pi_2^*\psi\cdot q^*_2\omega=:\omega'.$$
Using the slicing formula \eqref{eq:slicing} and \eqref{eq:fiber_fubini} we obtain
\begin{align*}
\int_G \left( \int_{q_1^{-1}(g)\cap C} \omega'\right) dg &= \int_C q_1^* dg \wedge \omega' \\
& = \int_{N(K)\times N(L)} p_*(q_1^* dg \wedge \omega')\\
& = \int_{N(K)\times N(L)}p^*_1\pi_2^* \phi \cdot p^*_2 \pi_2^*\psi\cdot p_*q^*(dg \wedge \omega).
\end{align*}
It follows from Lemma~\ref{lem:invariance} that $p_*q^*(dg \wedge \omega)$ is a $\overline G \times \overline G$-invariant $(2n-2)$-form on $S\mathbb{R}^n \times S\mathbb{R}^n$.
Since every $\overline G \times \overline G$-invariant form on the product $S\mathbb{R}^n\times S\mathbb{R}^n$ is a sum of wedge products of $\overline G$-invariant forms on $S\mathbb{R}^n$,
we obtain constants $c_{ij}$ such that
$$ \int_{N(K)\times N(L)}p^*_1\pi_2^* \phi \cdot p^*_2 \pi_2^*\psi\cdot p_*q^*(dg \wedge \omega) = \sum_{i,j=1}^{m} c_{ij} N(K)( \pi_2^*\phi \cdot \beta_i)\; N(L)( \pi_2^*\psi \cdot \beta_j).$$
This establishes Theorem~\ref{thm:existence} in a special case.
Suppose now that $K$ and $L$ are general convex bodies and let $K_i\rightarrow K$ and
$L_i\rightarrow L$ be sequences of convex bodies with smooth boundaries and such that all their principal curvatures are positive. Since the collection $\{K_i+gL_i\colon i, g\}$ is contained in some sufficiently large ball,
we have by Lemma~\ref{lem_massbound} and the definition of the comass of a differential form \cite{federer69}*{p. 358}
$$| N(K_i+gL_i)(\pi_2^*(\phi \; g\psi) \cdot \omega)| \leq \mathbf{M}(N(K_i+gL_i)) \mathbf{M}( \pi_2^*(\phi \; g\psi) \cdot \omega) \leq C$$
for some uniform constant $C>0$.
For every $g\in G$ it follows from Lemma~\ref{lem_weakconvergence} that
$$N(K_i+gL_i)(\pi_2^*(\phi \; g\psi) \cdot \omega) \rightarrow N(K+gL)(\pi_2^*(\phi \; g\psi) \cdot \omega).$$
Hence, by the dominated convergence theorem, we obtain
$$\int_G N(K_i+gL_i)(\pi_2^*(\phi \; g\psi)\cdot \omega )\; dg\rightarrow \int_G N(K+gL)( \pi_2^*(\phi \; g\psi) \cdot \omega)\; dg.$$
This proves Theorem~\ref{thm:existence} for general convex bodies $K$ and $L$.
Finally let $\phi$ and $\psi$ be bounded Baire functions. Then $\phi$ and $\psi$ are the pointwise limits of sequences of uniformly bounded, smooth functions on $S^{n-1}$,
$$\phi_i(v)\rightarrow \phi(v)\qquad \text{and} \qquad \psi_i(v)\rightarrow \psi(v)$$
for every $v\in S^{n-1}$. Applying the dominated convergence theorem twice, we obtain
$$\int_G N(K+gL)( \pi_2^*(\phi_i \; g\psi_i) \cdot \omega )\; dg\rightarrow \int_G N(K+gL)(\pi_2^*(\phi \; g\psi)\cdot \omega)\; dg.$$
\end{proof}
\section{The algebra \texorpdfstring{$\Area^{G*}$}{Area*}}
We let $G \subset SO(n)$, $n\geq 2$, be a closed, connected subgroup acting transitively on the unit sphere and denote by $\mathcal{K} (\mathbb{R}^n)$ the space of convex bodies in $\mathbb{R}^n$.
\subsection{General properties of the kinematic product}
A function which assigns to every convex body $K\subset \mathbb{R}^n$ and Borel subset $U\subset S^{n-1}$ a real number is called a $G$-invariant area measure if there exists $\omega\in \Omega^{n-1}(S\mathbb{R}^n)^G$ such that it is given by
$$(K,U)\mapsto \int_{N(K)\cap \pi_2^{-1}(U) } \omega,$$
see \cite{wannerer13}. We denote by $\Area^G$ the space of all $G$-invariant area measures. In this terminology, Theorem~\ref{thm:existence} implies the following.
\begin{corollary}
There exists a linear map $ A\colon \Area^G \rightarrow \Area^G \otimes \Area^G$ called the \emph{local kinematic operator}
such that
$$A(\Psi)(K,U; L, V)= \int_G \Psi(K+gL, U\cap gV) \; dg$$
for all convex bodies $K,L\subset\mathbb{R}^n$ and Borel subsets $U,V\subset S^{n-1}$.
\end{corollary}
The space of $\overline G$-invariant, continuous valuations is denoted by $\Val^G=\Val^G(\mathbb{R}^n)$ and we write $\Val^{sm}$ for the space of smooth, translation-invariant valuations.
We refer the reader to \cite{bernig_fu06} and the references therein for more information on translation-invariant scalar valuations.
The globalization map is the surjective linear map $\glob\colon \Area^G\rightarrow \Val^G$ given by
$$\glob(\Psi)= \Psi(\;\cdot\; , S^{n-1}).$$
We denote by
$$ \bar a= (\id\otimes \glob)\circ A\colon \Area^G \rightarrow \Area^G \otimes \Val^G$$
the semi-local kinematic operator. The additive kinematic operator is the linear map
$$ a\colon \Val^G \rightarrow \Val^G\otimes \Val^G$$
satisfying
$$a(\phi)(K,L)= \int_G \phi(K+gL)\; dg,$$
see \cite{bernig_fu06}. The various kinematic operators fit in the following commutative diagram:
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em] {
\Area^G & \Area^G\otimes \Area^G \\
\Area^G & \Area^G\otimes \Val^G \\
\Val^G & \Val^G\otimes \Val^G \\};
\path[-stealth]
(m-1-1) edge node [right] {$\id $} (m-2-1)
edge node [above] {$A$} (m-1-2)
(m-2-1) edge node [right] {$\glob $} (m-3-1)
edge node [above] {$\bar a$} (m-2-2)
(m-1-2) edge node [right] {$\id \otimes \glob$} (m-2-2)
(m-2-2) edge node [right] {$\glob \otimes \id$} (m-3-2)
(m-3-1) edge node [above] {$a$} (m-3-2);
\end{tikzpicture}
\end{center}
We denote by $\phi \cdot \psi$ the product \cite{alesker04}, by $\phi*\psi$ the convolution \cite{bernig_fu06} and by $\hat \phi$ the Fourier transform \cite{alesker11} of $\phi,\psi\in\Val^{sm}$. The Fourier transform
has the fundamental property $\widehat{\phi \cdot \psi} = \hat \phi * \hat \psi$.
The Poincar\'e duality map \cites{alesker04, bernig_fu06} is denoted by $\PD\colon \Val^{sm} \rightarrow \left(\Val^{sm}\right)^*$.
Let $\chi^*\colon \Val\rightarrow \mathbb{R}$ be the linear functional $\left\langle \chi^*,\phi\right\rangle = \phi(\{0\})$, $0\in\mathbb{R}^n$. It was shown in \cite{bernig_fu06} that
\begin{equation}\label{eq:poincare_pairing_even}\left\langle \PD(\phi), \psi\right\rangle = \left\langle \chi^*, \phi* \psi\right\rangle\end{equation}
for even valuations $\phi,\psi\in\Val^{sm}$. We will see later (Proposition~\ref{lem:poincare}) that
\begin{equation}\label{eq:poincare_pairing}\left\langle \PD(\phi), \psi\right\rangle = -\left\langle \chi^*, \phi* \psi\right\rangle\end{equation}
if $\phi, \psi\in\Val^{sm}$ are odd valuations. It was proved in \cite{bernig_fu11} that every valuation in $\Val^G$ is even.
We define $\mu_C^G\in \Val^G$ by $\mu_C^G=\int_G \vol( \;\cdot \; + gC) \; dg = a(\vol)(\;\cdot\;,C)$.
\begin{proposition}\label{prop:poincare_evaluation}
$$\left\langle \PD(\mu_C^G), \phi\right\rangle = \phi(C)$$
for every $\phi\in\Val^G$. In particular, $\Val^G$ is spanned by $\mu_C^G$, $C\in \mathcal{K}(\mathbb{R}^n)$.
\end{proposition}
\begin{proof}
This is proved as Proposition~2.17 in \cite{bernig_etal13}. It follows from the definition of $a$ that
$$[(\chi^* \otimes \id) \circ a](\phi)= a(\phi)(\{0\}, \; \cdot\;) = \phi$$
Hence Fubini's theorem implies
$$\left\langle \PD(\mu_C^G), \phi\right\rangle = \left\langle \chi^*, \mu_C^G* \phi \right\rangle =\left\langle \chi^*, a(\phi)(\;\cdot\;,C)\right\rangle =\phi(C).$$
\end{proof}
The module product of \cite{wannerer13} is denoted by $\bar m\colon \Val^G\otimes \Area^G\rightarrow \Area^G$, $\bar m (\phi, \Psi) = \phi * \Psi$. If $C$ is a convex body and $\Psi\in \Area^G$, then
$$\mu_C^G * \Psi (K,U) = \int_G \Psi(K+gC, U)\;dg$$
for all convex bodies $K$ and Borel sets $U\subset S^{n-1}$.
\begin{lemma}
$A$, $\bar a$, and $a$ are all compatible with convolution with elements from $\Val^G$, i.e.\
\begin{align}A(\phi * \Psi) &= (\phi \otimes \vol) * A(\Psi) = (\vol \otimes \phi) * A(\Psi),\notag\\
\bar a(\phi * \Psi) &= (\phi \otimes \vol) * \bar a( \Psi) = (\vol \otimes \phi) * \bar a (\Psi),\notag\\
a(\phi * \psi) &= (\phi \otimes \vol) * a(\psi) = (\vol \otimes \phi) * a(\psi),\notag\\
\bar a (\phi *\Psi) &= a(\phi) * (\Psi \otimes \vol), \label{eq:semilocal}
\end{align}
whenever $\phi,\psi\in\Val^G$ and $\Psi\in \Area^G$.
\end{lemma}
\begin{proof}
This can be proved as Theorem~2.19 in \cite{bernig_etal13}.
\end{proof}
The semi-local kinematic operator is in the following sense dual to the module product.
\begin{lemma}\label{lem:semilocal} Let $\bar m\in \Hom(\Val^{G} \otimes \Area^{G}, \Area^{G}) = \Hom (\Area^{G}, \Area^{G}\otimes \Val^{G*} )$ be the module product. Then
$$\bar m = (\operatorname{id} \otimes \PD) \circ \bar a.$$
\end{lemma}
\begin{proof}
Using \eqref{eq:semilocal}, equation (15) of \cite{wannerer13}, the definition of $\mu_C^G$, and Proposition~\ref{prop:poincare_evaluation}, we obtain
$$\mu_C^G * \Psi (K,U) = \bar a(\vol * \Psi)(K,U; C) = \bar a (\Psi)(K,U; C) = \left[(\id \otimes \PD) \circ \bar a\right] (\Psi)(K,U; \mu_C^G)$$
for every convex body $K$ and Borel set $U\subset S^{n-1}$.
\end{proof}
Recall that a co-algebra consists of a vector space $X$ over some field $\mathbb{K}$, a linear map $C\colon X\rightarrow X\otimes X$ and a linear functional $\varepsilon \colon X\rightarrow \mathbb{K}$ satisfying
$$(\id_X \otimes C)\circ C= (C\otimes \id_X)\circ C \qquad \text{(co-associativity)}$$
and
$$(\varepsilon \otimes \id_X)\circ C=\id_X= (\id_X \otimes \varepsilon)\circ C.$$
The map $C$ is called a co-product and $\varepsilon$ is called a co-unit. A co-algebra is called co-commutative if $i\circ C = C$,
where $i\colon X\otimes X \rightarrow X\otimes X$ denotes the interchange map $i(x\otimes y) = y\otimes x$.
\begin{proposition} \label{prop:co-algebra}
$(\Area^G, A,\chi^*\circ \glob)$ is a co-commutative co-algebra.
\end{proposition}
\begin{proof}
Let $K,L,M\subset \mathbb{R}^n$ be convex bodies, let $U,V,W\subset S^{n-1}$ be Borel sets and let $\Psi\in \Area^G$. By the invariance of the Haar measure,
\begin{align*} \left[(\id \otimes A)\circ A\right](\Psi)(K,U; L,V; M,W) &=\int_G \left( \int_G \Psi(K +g(L +hM), U \cap g (V \cap h W)) \; dg\right) \; dh \\
& = \int_G \left( \int_G \Psi(K + gL + h M , U \cap g V \cap h W) \; dg\right) \; dh \\
& = \left[(A \otimes \id)\circ A\right](\Psi)(K,U; L,V; M,W).
\end{align*}
Thus $A$ is co-associative.
Since
$$\left[(\chi^*\circ \glob\otimes \id )\circ A\right] (\Psi)(K,U) = A(\Psi)(\{0\},S^{n-1}; K,U)= \Psi(K,U)= \left[(\id \otimes\chi^*\circ \glob )\circ A \right] (\Psi)(K,U),$$
$\chi^*\circ \glob$ is a co-unit. The co-commutativity of $A$ follows immediately from the invariance of the Haar measure.
\end{proof}
If $(X,C,\varepsilon)$ is a co-algebra and $X$ is finite-dimensional, then $(X^*,C^*,\varepsilon)$ is an algebra with unit $\varepsilon$. If $C$ is co-commutative, then $C^*$ is commutative.
In particular, we obtain from Proposition~\ref{prop:co-algebra}
that $(\Area^{G*},A^*,\chi^*\circ \glob)$ is a commutative algebra.
\begin{definition}
We call $A^*\colon \Area^{G*}\otimes \Area^{G*}\rightarrow \Area^{G*}$ the kinematic product on
$\Area^{G*}$. We will also write $\Lambda_1 * \Lambda_2$ instead of $A^*(\Lambda_1\otimes \Lambda_2)$ for $\Lambda_1,\Lambda_2\in \Area^{G*}$
\end{definition}
\begin{lemma}\label{lem:kinprod_module}
Let $\Lambda\in \Area^{G*}$, $\phi\in \Val^G$, and put $m_\phi(\Phi):= \phi * \Phi$. Then
\begin{equation}\label{eq:kinprod}\Lambda * \glob^*(\PD(\phi)) = m_\phi^*(\Lambda).\end{equation}
\end{lemma}
\begin{proof} Using Lemma~\ref{lem:semilocal}, we compute
\begin{align*}
\left\langle m_\phi^*(\Lambda), \Phi\right\rangle & = \left\langle \Lambda , m_\phi(\Phi) \right\rangle = \left\langle \Lambda \otimes \PD(\phi) , \bar a (\Phi)\right\rangle\\
& = \left\langle \Lambda \otimes \glob^*(\PD(\phi)) , A(\Phi)\right\rangle = \left\langle A^*( \Lambda \otimes \glob^*(\PD(\phi))) , \Phi\right\rangle\\
& = \left\langle \Lambda * \glob^*(\PD(\phi)) , \Phi\right\rangle.
\end{align*}
\end{proof}
\begin{corollary} \label{cor:subalgebra}
If $\phi, \psi\in \Val^G$, then
$$\glob^*(\PD(\phi)) * \glob^*(\PD(\psi)) = \glob^*(\PD(\phi * \psi)). $$
In particular, the image of $\glob^*$ is a subalgebra of $\Area^{G*}$.
\end{corollary}
\subsection{The module of unitary area measures}
For the convenience of the reader we collect for quick reference those definitions and results of \cite{wannerer13} which are most important for us in the following.
Recall that the forms $\alpha, \beta, \gamma, \theta_0, \theta_1, \theta_2$, and $\theta_s$, first defined in \cite{bernig_fu11}, generate the algebra of unitarily invariant forms on the sphere bundle
$S\mathbb{C}^n$. For all integers $0\leq k\leq 2n-1$ and $q\geq 0$ we denote by
\begin{align*}B_{k,q}, & \qquad \max\{0,k-n\}\leq q < k/2, \\
\Gamma_{k,q},& \qquad \max\{0,k-n+1\}\leq q \leq k/2 ,
\end{align*}
the unitary area measures represented by the forms $\beta_{k,q}$ and $\gamma_{k,q}$. The collection of the $B_{k,q}$ and $\Gamma_{k,q}$ area measures is a basis of $\Area^{U(n)}$. Another very useful basis is given by
\begin{equation}\label{eq:DeltaN_basis}
\begin{aligned} \Delta_{k,q} & = \frac{k-2q}{2n-k} B_{k,q} + \frac{2(n-k+q)}{2n-k} \Gamma_{k,q}, \qquad \max\{0,k-n\}\leq q \leq k/2,\\
N_{k,q} &= \frac{2(n-k+q)}{2n-k} \left( \Gamma_{k,q} - B_{k,q}\right), \qquad \max\{0,k-n+1\}\leq q < k/2.
\end{aligned}
\end{equation}
Recall that
\begin{equation}\label{eq:glob_DN}
\glob(\Delta_{k,q})=\mu_{k,q} \qquad \text{and}\qquad \glob(N_{k,q})=0,
\end{equation}
where the valuations $\mu_{k,q}$ are the hermitian intrinsic volumes (see \cite{bernig_fu11}).
The polynomials $p_k$ and $q_k$ are given explicitly by
\begin{equation} \label{eq:p_polynomial} p_k(s,t)= (-1)^k \sum_{i=0}^{\lfloor k/2 \rfloor} (-1)^i \binom{k-i}{i} s^i t^{k-2i}
\end{equation}
and
\begin{equation} \label{eq:q_polynomial} q_k(s,t)= (-1)^{k+1} \sum_{i=0}^{\lfloor k/2 \rfloor} (-1)^i(i+1) \binom{k+1-i}{i+1} s^i t^{k-2i}.
\end{equation}
They satisfy the relation
\begin{equation}\label{eq:fpq_relation}-(4s-t^2) q_{k-1} + tp_k = (k+1)^2 f_{k+1},\end{equation}
where $f_k$ denotes the Fu polynomial \cite{fu06}.
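For instance, for $k=1$ relation \eqref{eq:fpq_relation} reads
$$-(4s-t^2)\,q_0 + t\, p_1 = (4s-t^2) - t^2 = 4\left(s-\tfrac{1}{2}t^2\right) = 4 f_2,$$
with $p_1=-t$ and $q_0=-1$ taken from \eqref{eq:p_polynomial} and \eqref{eq:q_polynomial}.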
Recall also from \cite{fu06} that there are two special unitary valuations $s$ and $t$ which generate $ \Val^{U(n)}$ as an algebra.
The module product on $\Area^{U(n)}$ satisfies the two fundamental relations
\begin{equation}\label{eq:BequalsG}
p_n(\hat s, \hat t)* B_{2n-1,n-1} =q_{n-1}(\hat s, \hat t)* \Gamma_{2n-2,n-1}
\end{equation}
and
\begin{equation}\label{eq:Gequals0}
p_n(\hat s, \hat t)* \Gamma_{2n-2,n-1}=0.
\end{equation}
Here $\hat s$ and $\hat t$ are the Fourier transforms of $s$ and $t$. Moreover, it was proved in \cite{wannerer13} that
\begin{equation}\label{eq:Gammasubmod}
\textit{the span of the } \Gamma_{k,q} \textit{ coincides with the submodule of }\Area^{U(n)} \textit{ generated by }\Gamma_{2n-2,n-1}
\end{equation}
and that
\begin{equation}\label{eq:Areamod}
\textit{as a module } \Area^{U(n)} \textit{ is generated by }B_{2n-1,n-1} \textit{ and }\Gamma_{2n-2,n-1}.
\end{equation}
\subsection{The kinematic product in the unitary case} After the general considerations of the first subsection, we investigate now the case $\mathbb{R}^{2n}=\mathbb{C}^n$ and $G=U(n)$ in detail.
In \cite{wannerer13} two special bases of $\Area^{U(n)}$ have been used:
The $B_{k,q}$-$\Gamma_{k,q}$ and the $\Delta_{k,q}$-$N_{k,q}$ basis. In the following we are going to use the corresponding dual bases for $\Area^{U(n)*}$ which, by \eqref{eq:DeltaN_basis},
are related by
\begin{equation}\label{eq:B*equals}
B_{k,q}^*=\frac{k-2q}{2n-k} \Delta_{k,q}^* - \frac{2(n-k+q)}{2n-k} N_{k,q}^*, \qquad \max\{0,k-n\}\leq q < k/2,
\end{equation}
and
\begin{equation}\label{eq:Gamma*equals}
\Gamma_{k,q}^*=\left\{\begin{array}{ll} \frac{2(n-k+q)}{2n-k} (\Delta_{k,q}^* + N_{k,q}^*), & \max\{0,k-n+1\}\leq q < k/2 ;\\
\Delta_{k,q}^*, & 2q=k.
\end{array}\right.
\end{equation}
It is an immediate consequence of \eqref{eq:glob_DN} that the image of $\glob^*$ coincides with the span of the $\Delta_{k,q}^*$. Hence, by Corollary~\ref{cor:subalgebra}, the span of the
$\Delta_{k,q}^*$ is a subalgebra of $\Area^{U(n)*}$.
We define
$$\bar t := \glob^* (\PD (\hat t)) \qquad\text{and}\qquad \bar s := \glob^* ( \PD (\hat s)).$$
For reasons which will become apparent later, we put
\begin{align*}
\bar v:= & \frac{2\omega_{2n-2}}{\omega_{2n-1}} B_{1,0}^*.
\end{align*}
From the definition of $\bar s$ and $\bar t$, Lemma~\ref{lem:kinprod_module}, and Propositions 4.7 and 4.8 of \cite{wannerer13}, we obtain the following formulas for multiplication by $\bar s$ and $\bar t$.
\begin{lemma}\label{lem:sbartbar}
\begin{align} \bar t * B_{k,q}^* & = \frac{\omega_{2n-k}}{\pi \omega_{2n-k-1}} \left( (k-2q) B_{k+1,q+1}^* + \frac{2(n-k+q)(k-2q)}{k-2q+1} B_{k+1,q}^*\right) \notag \\
\bar t * \Delta_{k,q}^* & = \frac{\omega_{2n-k}}{\pi \omega_{2n-k-1}}\left( (k-2q) \Delta_{k+1,q+1}^* + 2(n-k+q) \Delta_{k+1,q}^*\right) \notag \\
\bar t * N_{k,q}^* & = \frac{\omega_{2n-k}}{\pi \omega_{2n-k-1}} \frac{k-2q}{2n-k-1} \bigg( \Delta^*_{k+1,q+1} - \Delta^*_{k+1,q} \label{eq:tbarN} \\
& \qquad \qquad \qquad + (2n-k) \left( N^*_{k+1,q+1} + \frac{2(n-k+q-1)}{k-2q+1} N_{k+1,q}^* \right) \bigg) \notag \\
\bar s* B_{k,q}^* & = \frac{(k-2q)(k-2q-1)}{2\pi (2n-k)} B_{k+2,q+2}^* + \frac{2(n-k+q)(n-q)}{\pi (2n-k)} B_{k+2,q+1}^* \notag \\
\bar s * \Delta_{k,q}^* & = \frac{(k-2q)(k-2q-1)}{2\pi (2n-k)} \Delta_{k+2,q+2}^* + \frac{2(n-k+q)(n-q)}{\pi (2n-k)} \Delta_{k+2,q+1}^* \notag \\
\bar s * N^*_{k,q} & = \frac{(k-2q)(k-2q-1)}{2\pi (2n-k-2)} \left( N^*_{k+2,q+2} + \frac{2}{2n-k} \Delta^*_{k+2,q+2} \right) \notag \\
& \qquad \qquad \qquad + \frac{2(n-q)}{\pi (2n-k-2)}\left( (n-k+q-1)N^*_{k+2,q+1} - \frac{k-2q}{2n-k} \Delta^*_{k+2,q+1}\right) \notag
\end{align}
\end{lemma}
From the above we conclude that
$$\bar t = \frac{2 \omega_{2n-2}}{\omega_{2n-1}} \Delta^*_{1,0}=\frac{2 \omega_{2n-2}}{ \omega_{2n-1}}(B_{1,0}^* + \Gamma_{1,0}^*)\qquad \text{and}\qquad \bar s = \frac{n}{\pi} \Delta^*_{2,1}.$$
\begin{lemma}\label{lem:relations}
As elements of the algebra $\Area^{U(n)*}$ we have
$$p_n(\bar s,\bar t)-q_{n-1}(\bar s, \bar t)\bar v = 0 \qquad \text{and} \qquad p_n(\bar s,\bar t) \bar v=0.$$
\end{lemma}
\begin{proof}
We prove $p_n(\bar s,\bar t) \bar v=0$ first. By \eqref{eq:Areamod}, any unitary area measure may be expressed as $ \phi * B_{2n-1,n-1} + \psi * \Gamma_{2n-2,n-1}$ for some $\phi,\psi \in \Val^{U(n)}$.
Using \eqref{eq:kinprod}, \eqref{eq:BequalsG}, \eqref{eq:Gequals0},
and \eqref{eq:Gammasubmod}, we obtain
\begin{align*} \left\langle p_n(\bar s,\bar t) \bar v, \phi * B_{2n-1,n-1} + \psi * \Gamma_{2n-2,n-1}\right\rangle &= \left\langle \bar v, \phi* q_{n-1}(\hat s, \hat t) * \Gamma_{2n-2,n-1} \right\rangle \\
&= 0
\end{align*}
To prove the other identity, first observe that $p_n(\bar s,\bar t)$ is a linear combination of certain $B_{k,q}^*$. This follows immediately from \eqref{eq:Gequals0} and \eqref{eq:Gammasubmod}.
At the same time $p_n(\bar s,\bar t)$ is also an element of span of the $\Delta_{k,q}^*$. Hence \eqref{eq:B*equals} implies that
$p_n(\bar s,\bar t)$ is a multiple of $B_{n,0}^*$.
Put $u=4s-t^2$. From \eqref{eq:fpq_relation} we obtain
\begin{equation}\label{eq:vbaru}\left\langle q_{n-1}(\bar s,\bar t)\bar v,\hat u * \Psi\right\rangle= \left\langle -p_n(\bar s,\bar t) \bar v, \hat t*\Psi\right\rangle=0\end{equation}
for every unitary area measure $\Psi$.
Next we claim that
\begin{align}\label{eq:uBeta}
&\textit{if }q_0> 0,\textit{ then --- modulo the subspace spanned by the } \Gamma_{k,q}\text{ --- }B_{n,q_0} \textit{ can be} \\
& \textit{expressed as a linear combination of the area measures }\hat u * B_{n+2,q}. \notag
\end{align}
Indeed, by Corollary 3.8 of \cite{bernig_fu11}, $\mu_{n,q_0}$ can be expressed as a linear combination of the valuations $\hat u *\mu_{n+2,q}$ provided $q_0>0$. Since the globalization map is injective when restricted to the
subspace spanned by the $\Delta_{k,q}$, we obtain that $\Delta_{n,q_0}$ can be expressed as a linear combination of the area measures $\hat u *\Delta_{n+2,q}$ provided $q_0>0$. The claim follows now from \eqref{eq:Gammasubmod}.
Lemma~\ref{lem:sbartbar} implies that $q_{n-1}(\bar s,\bar t)\bar v$ is a linear combination of certain $B_{n,q}^*$. Together with \eqref{eq:vbaru} and \eqref{eq:uBeta},
we conclude that $q_{n-1}(\bar s,\bar t)\bar v$ is in fact a multiple of $B_{n,0}^*$.
Using $\hat s * B_{n,0}=0$, the explicit formulas for $p_n$ and $q_{n-1}$ given in \eqref{eq:p_polynomial} and \eqref{eq:q_polynomial}, and Lemma~\ref{lem:tnBn0} below, we compute
$$\left\langle p_n(\bar s,\bar t),B_{n,0}\right\rangle =(-1)^n \left\langle \Gamma_{0,0}^* , \hat t^{n} * B_{n,0}\right\rangle = (-1)^n\frac{2^n}{\omega_n}$$
and
$$\left\langle q_{n-1}(\bar s,\bar t)\bar v,B_{n,0}\right\rangle =(-1)^n \frac{2n \omega_{2n-2}}{\omega_{2n-1}} \left\langle B_{1,0}^* , \hat t^{n-1} * B_{n,0}\right\rangle = (-1)^n\frac{2^n}{\omega_n}.$$
Since both $p_n(\bar s,\bar t)$ and $q_{n-1}(\bar s,\bar t)\bar v$ are multiples of $B_{n,0}^*$ and their pairings with $B_{n,0}$ agree, the first identity follows.
\end{proof}
\begin{lemma}\label{lem:tnBn0}
For $n\geq 2$,
$$\hat t^{n-1} * B_{n,0}= \frac{2^{n-1}}{n} \frac{\omega_{2n-1}}{\omega_{2n-2} \omega_n} (B_{1,0} + (n-1) \Gamma_{1,0})$$
and
$$\hat t^{n} *B_{n,0}=\frac{2^n}{\omega_n}\Gamma_{0,0}.$$
\end{lemma}
\begin{proof}
By Propositions~4.7 and 4.8 of \cite{wannerer13}, there exist numbers $c_i, d_i$ such that $\hat t ^i *B_{n,0} = c_i B_{n-i,0} + d_i \Gamma_{n-i,0}$ for $i=0,\ldots, n$. More precisely,
$$\begin{pmatrix} c_{i+1}\\ d_{i+1}\end{pmatrix} = \frac{2(i+1) \omega_{n+i+1}}{(n-i) \pi \omega_{n+i} } \begin{pmatrix} n-i-1 & 0 \\ 1 & n-i \end{pmatrix} \begin{pmatrix} c_{i}\\ d_{i}\end{pmatrix},\qquad i=0,\ldots, n-1$$
with $c_0=1$ and $d_0=0$. Since
$$ \begin{pmatrix} 1 & 0 \\ 1 & 2 \end{pmatrix} \cdots \begin{pmatrix} n-1 & 0 \\ 1 & n \end{pmatrix} \begin{pmatrix} 1\\ 0\end{pmatrix}= (n-1)!\begin{pmatrix}1 \\ n-1\end{pmatrix}\qquad \text{for }n\geq 2,$$
the lemma follows.
\end{proof}
\begin{proposition}\label{prop:generators}
For every $\Lambda\in \Area^{U(n)*}$ there exist polynomials $\phi, \psi$ in $\bar s$ and $\bar t$ such that
\begin{equation}\label{eq:stv}\Lambda= \phi(\bar s,\bar t) + \psi(\bar s,\bar t) \bar v.\end{equation}
\end{proposition}
\begin{proof}
Since the span of the $\Delta_{k,q}^* $ coincides with the image of $\glob^*$, it follows from Corollary~\ref{cor:subalgebra} that for every $\Delta_{k,q}^*$
there exists a polynomial $\phi$ in $\bar s$ and $\bar t$ such that
$\Delta_{k,q}^* = \phi(\bar s,\bar t)$.
Since the case $n=2$ follows immediately from \eqref{eq:B*equals}, we assume from now on that $n\geq 3$. Suppose that $k=2m+1$ is odd and that $1\leq k< 2n-3$. Then $N_{2m+1,m}^*$ exists and,
up to linear combinations of $\Delta_{k+1,q}^*$, by Lemma~\ref{lem:sbartbar}
$$\bar t * N_{2m+1,m}^*\equiv c N_{2m+2,m}^*$$
for some nonzero constant $c$.
Hence if \eqref{eq:stv} holds for every $N_{k,q'}^*$, then, by \eqref{eq:tbarN}, \eqref{eq:stv} holds for every $N_{k+1,q}^*$ as well.
Suppose now that $k=2m+2$ is even and that $2\leq k< 2n-3$. Then $N_{2m+1,m}^*$ and $N_{2m+3, m+1}^*$ exist and, up to certain $\Delta_{k+1,q}^*$,
$$\bar s * N_{2m+1,m}^*\equiv c N_{2m+3,m+1}^*$$
for some nonzero constant $c$. Hence if \eqref{eq:stv} holds for $N_{2m+1,m}^*$ and every $N_{k,q'}^*$, then, by \eqref{eq:tbarN}, \eqref{eq:stv} holds for all $N_{k+1,q}^*$ as well. This proves the proposition.
\end{proof}
Later, when we derive explicit kinematic formulas from our main theorem, the following two results will be useful.
\begin{lemma}\label{lem:Bideal}
For every $B_{k,q}^*$ there exists a polynomial $\psi=\psi(\bar s,\bar t)$ in $\bar s$ and $\bar t$ such that
$$\psi(\bar s,\bar t) \bar v = B_{k,q}^*.$$
\end{lemma}
\begin{proof}
By Proposition~\ref{prop:generators} there exist polynomials $\phi,\psi$ in $\bar s$ and $\bar t$ such that $\phi(\bar s,\bar t) + \psi(\bar s,\bar t)\bar v= B_{k,q}^*$. Since $\phi(\bar s,\bar t)$ is a linear combination of
certain $\Delta_{k,q'}^*$ and $\psi(\bar s,\bar t)\bar v$ is a linear combination of certain $B_{k,q'}^*$ by Lemma~\ref{lem:sbartbar}, we conclude that if $\phi(\bar s, \bar t)\neq 0$, then necessarily $k\geq n$ and
$\phi(\bar s,\bar t)$ is a multiple of $\Delta_{k,k-n}^*=B_{k,k-n}^*$.
In the proof of Lemma~\ref{lem:relations}, we have shown that $q_{n-1}(\bar s,\bar t) \bar v$ is a nonzero multiple of $B_{n,0}^*=\Delta^*_{n,0}$ and hence $\bar t^{k-n} * q_{n-1}(\bar s,\bar t)\bar v$ is a nonzero multiple of
$B_{k,k-n}^*=\Delta_{k,k-n}^*$. Thus the term $\phi(\bar s,\bar t)$ can be absorbed into the second summand, and $B_{k,q}^*=\psi'(\bar s,\bar t)\bar v$ for some polynomial $\psi'$.
\end{proof}
\begin{proposition}\label{prop:Delta_st}
For every $\Delta_{k,q}^*\in \Area^{U(n)*}$ we have
$$ \Delta_{k,q}^* =\frac{ \omega_{2n-k}(k-2q)!(n-k+q)!}{\pi^{n-k}2^{k-2q} n!} \sum_{i=q}^{\lfloor \frac{k}{2} \rfloor} \frac{(-1)^{i+q}}{(i-q)!} \frac{(n-i)!}{(k-2i)! } \bar t ^{k-2i} \bar s^i. $$
\end{proposition}
\begin{proof}
By Corollary~\ref{cor:subalgebra} and since $\glob^*$ is injective, it is sufficient to check that
\begin{equation}\label{eq:mustar_poincare}\left\langle \mu_{k,q}^*, \phi\right\rangle =
\frac{ \omega_{2n-k}(k-2q)!(n-k+q)!}{\pi^{n-k}2^{k-2q} n!} \sum_{i=q}^{\lfloor \frac{k}{2} \rfloor} \frac{(-1)^{i+q}}{(i-q)!} \frac{(n-i)!}{(k-2i)! } (\hat t ^{k-2i} *\hat s^i ,\phi)\end{equation}
for every $\phi\in\Val^{U(n)}$.
For
$$\phi= \hat t^{2n-k-2j} * \hat u^j,\qquad u=4s-t^2,\qquad 0\leq 2j\leq 2n-k,$$
the left-hand side of \eqref{eq:mustar_poincare} evaluates to
\begin{equation}\label{eq:mustar_LHS} \left\langle \mu_{2n-k,n-k+q}^*, t^{2n-k-2j}u^j \right\rangle = \frac{\omega_{2n-k} (2n-k-2j)! (2j)!}{\pi^{2n-k}} \binom{n-k+q}{j}
\end{equation}
by Proposition~3.7 of \cite{bernig_fu11}.
Since
$$t^{2n-2i} s^i (B(\mathbb{C}^n)) = \binom{2n-2i}{n-i},$$
see \cite{fu06}, and $u=4s-t^2$, one shows by induction on $j$ that
\begin{equation}\label{eq:poincare_stu}t^{2n-2i-2j} s^i u^j(B(\mathbb{C}^n)) =\binom{2j}{j} \binom{2n-2i-2j}{n-i-j} \binom{n-i}{j}^{-1}.\end{equation}
Hence the right-hand side of \eqref{eq:mustar_poincare} becomes
\begin{equation}\label{eq:mustar_RHS}\frac{\omega_{2n-k}(n-k+q)!(2j)!}{\pi^{2n-k} j!}\cdot \frac{(k-2q)!}{2^{k-2q}} \sum_{i=q}^{\lfloor \frac{k}{2} \rfloor} \frac{(-1)^{i+q}}{(i-q)!} \frac{(2n-2i-2j)!}{(k-2i)!(n-i-j)!}.\end{equation}
Comparing \eqref{eq:mustar_LHS} and \eqref{eq:mustar_RHS}, we see that \eqref{eq:mustar_poincare} will be proved if we can show that
\begin{equation}\label{eq:combinat0}2^{k-2q} \binom{n-j-q}{k-2q} = \sum_{i=0}^{\lfloor \frac{k-2q}{2} \rfloor} (-1)^{i} \binom{2n-2i-2j-2q}{k-2i-2q}\binom{n-j-q}{i}.\end{equation}
Now notice that \eqref{eq:combinat0} is just \eqref{eq:combinat2} with $m$ replaced by $n +q -j -k$ and $r$ replaced by $k-2q$.
\end{proof}
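The auxiliary identity \eqref{eq:poincare_stu} used above can also be checked by machine: expanding $u^j=(4s-t^2)^j$ and evaluating on $B(\mathbb{C}^n)$ term by term reduces it to the values $t^{2n-2i}s^i(B(\mathbb{C}^n))=\binom{2n-2i}{n-i}$. A minimal Python sketch of this check (standard library only; the helper names are ours) reads as follows.
\begin{verbatim}
from math import comb
from fractions import Fraction

def ts_on_ball(n, a, i):
    # t^a s^i (B(C^n)) for a + 2i = 2n, see [fu06]
    return comb(2 * n - 2 * i, n - i)

def tsu_on_ball(n, i, j):
    # t^{2n-2i-2j} s^i u^j (B(C^n)), expanding u = 4s - t^2
    return sum(Fraction(comb(j, l) * 4**l * (-1)**(j - l))
               * ts_on_ball(n, 2 * n - 2 * (i + l), i + l)
               for l in range(j + 1))

for n in range(1, 8):
    for i in range(n + 1):
        for j in range(n - i + 1):
            lhs = tsu_on_ball(n, i, j)
            rhs = Fraction(comb(2 * j, j) * comb(2 * n - 2 * i - 2 * j, n - i - j),
                           comb(n - i, j))
            assert lhs == rhs
\end{verbatim}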
\begin{lemma}
If $r$ is a nonnegative integer and $m$ is an integer satisfying $2m+r\geq 0$, then
\begin{equation}
\label{eq:combinat2}
2^r \binom{m+r}{r}= \sum^{\lfloor \frac{r}{2} \rfloor}_{i=0} (-1)^i \binom{2m+2r-2i}{r-2i} \binom{m+r}{i}.
\end{equation}
\end{lemma}
\begin{proof}
We are going to use Zeilberger's algorithm \cite{zeilberger96}*{p. 101}.
Fix $r\geq 0$. We denote the sum on the right-hand side of \eqref{eq:combinat2} by $S_m$ and put
$$F(m,i):= (-1)^i \binom{2m+2r-2i}{r-2i} \binom{m+r}{i},\qquad 2m\geq -r.$$
One immediately checks that $F(m,i)$ satisfies the recurrence relation
\begin{equation}\label{eq:recurrence}-(m+r+1) F(m,i) + (m+1) F(m+1,i) = G(m,i+1)-G(m,i) \end{equation}
where
$$G(m,i)= F(m,i) \frac{2i (2m+2r-2i+1)(m+r+1)}{(2m+r+1)(2m+r+2)}.$$
Summing the recurrence \eqref{eq:recurrence} over $i$ from $0$ to $\lfloor r/2 \rfloor$ and using
that $ G(m,i+1)-G(m,i)$ telescopes to $0$, we obtain
$$ (m+1) S_{m+1} = (m+r+1) S_m$$
and therefore
$$S_m=\binom{m+r}{r} S_0,\qquad 2m\geq-r.$$
To show that $S_0=2^r$ we put
$$f(r,i):= (-1)^i \binom{2r-2i}{r} \binom{r}{i}.$$
Then $f(r,i)$ satisfies
$$(r+1)\left( 2 f(r,i)-f(r+1,i) \right)= g(r,i+1) -g(r,i)$$
with
$$g(r,i)=4i \frac{2r-2i+1}{r-2i+1} f(r,i).$$
Summing the recurrence relation for $f$ over $i$ from $0$ to $\lfloor r/2 \rfloor$, we obtain $S_0=2^r$.
\end{proof}
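Both the identity \eqref{eq:combinat2} and the certificate recurrence \eqref{eq:recurrence} are readily verified numerically for small parameters. The following short Python sketch (standard library only; the function names are ours, and only $m\geq 0$ is tested) performs this check.
\begin{verbatim}
from math import comb
from fractions import Fraction

def F(m, i, r):
    if r - 2 * i < 0:
        return 0
    return (-1)**i * comb(2 * m + 2 * r - 2 * i, r - 2 * i) * comb(m + r, i)

def G(m, i, r):
    return Fraction(F(m, i, r) * 2 * i * (2 * m + 2 * r - 2 * i + 1) * (m + r + 1),
                    (2 * m + r + 1) * (2 * m + r + 2))

for r in range(12):
    for m in range(12):
        # the identity itself
        assert 2**r * comb(m + r, r) == sum(F(m, i, r) for i in range(r // 2 + 1))
        # the recurrence with certificate G
        for i in range(r // 2 + 1):
            lhs = -(m + r + 1) * F(m, i, r) + (m + 1) * F(m + 1, i, r)
            assert lhs == G(m, i + 1, r) - G(m, i, r)
\end{verbatim}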
\section{Tensor valuations} \label{sec:tensor}
In the following it will be convenient to work with an abstract $n$-dimensional euclidean vector space $V$ instead of $\mathbb{R}^n$. The group of special orthogonal transformations of $V$ will be denoted by $SO(V)$,
the unit sphere by $S(V)$ and the sphere bundle of $V$ by $SV$. As before, $G\subset SO(V)$ is a closed, connected subgroup acting transitively on the unit sphere.
Given a non-negative integer $r$ we denote by $\Val^r=\Val^r(V)$ the vector space of translation-invariant, continuous valuations on $V$ with values in
$\Sym^r V$. Here $\Sym^rV\subset \bigotimes^r V$ denotes the subspace of symmetric tensors of rank $r$. We call $\Val^r$ the space of tensor valuations of rank $r$ and we denote by $\Val_k^r\subset \Val^r$ the subspace of $k$-homogeneous
valuations. Applying McMullen's theorem \cite{mcmullen77} componentwise, we see that
$$\Val^r= \bigoplus_{k=0}^n \Val_k^r.$$
Note that in this paper we only consider translation-invariant tensor valuations.
If $f\colon V\rightarrow W$ is a linear map,
then $f^{\otimes r} \colon \bigotimes^r V\rightarrow \bigotimes^r W$ maps symmetric tensors to symmetric tensors. In particular, the action of $G$ on $V$ induces an action on all
symmetric powers $\Sym^r V$. The group $G$ acts on tensor valuations of rank $r$ by $(g\cdot \Phi)(K)=g^{\otimes r} ( \Phi (g^{-1}K))$. A tensor valuation of rank $r$ is called $G$-covariant, if
$$g\cdot \Phi = \Phi$$
for every $g\in G$. The subspace of $G$-covariant tensor
valuations is denoted by $\Val_k^{r,G}$. For more information on $SO(V)$-covariant tensor valuations, also in the non-translation-invariant case, see
\cites{alesker99a, alesker_etal11, hug_etal07, hug_etal08,ludwig03, ludwig13, mcmullen97}.
We denote by $\Val^{sm,r}\subset \Val^r$ the subspace of tensor valuations which can be written as
$\Phi(K)=\int_{N(K)} \omega,$
where $\omega$ is a translation-invariant, smooth differential form on the sphere bundle $SV$ with values in $\Sym^r V$.
\begin{lemma}
$\Val^{r,G}\subset \Val^{sm,r}$. In particular, $\Val^{r,G}$ is
finite-dimensional.
\end{lemma}
\begin{proof}
A left $G$-action on the space of translation-invariant smooth differential forms on $SV$ with values in $\Sym^r V$ is given by
\begin{equation}\label{eq:G-action}L_g(v\otimes \omega):= (g^{-1})^*\omega \otimes gv\end{equation}
for every $g\in G$, $v\in \Sym^r V$, and every translation-invariant form $\omega\in \Omega(SV)$. We define projections to the spaces of $G$-covariant valuations and
$G$-invariant forms by
$$\pi_G(\Phi)= \int_G g\cdot \Phi\; dg \qquad \text{and}\qquad \pi_G(\omega)=\int_G L_g \omega \; dg,$$
respectively. Applying Theorem 5.2.1 of \cite{alesker06} componentwise, we obtain tensor valuations $\Phi_i$ given by smooth, translation-invariant differential forms $\omega_i$ such that
$\Phi_i \rightarrow \Phi$ uniformly on compact subsets.
Since $G$ acts transitively on the unit sphere, the space of $G$-invariant, translation-invariant forms with values in $\Sym^r V$ is finite-dimensional. Therefore the space of tensor valuations represented by $G$-invariant forms
is finite-dimensional
and consequently closed. Since
$$\pi_G(\Phi_i)(K)= \int_{N(K)} \pi_G(\omega_i) $$
and $\pi_G(\Phi_i)\rightarrow \Phi$ uniformly on compact subsets, we conclude that $\Phi$ is represented by a $G$-invariant, translation-invariant form.
\end{proof}
\subsection{Algebraic structures for tensor valuations}
In this subsection we extend some of the algebraic structures for scalar valuations to tensor valuations.
Choose some orthonormal basis $\{e_i\}$ of $V$. If $a$ is a symmetric tensor of rank $r$, then there exist numbers $a^{i_1\ldots i_r}$ such that
$$a= \sum_{i_1,\ldots,i_r=1}^n a^{i_1\ldots i_r} e_{i_1}\otimes \cdots \otimes e_{i_r}.$$
Using the Einstein summation convention --- which we will do in the following --- this can be written as $a= a^{i_1\ldots i_r} e_{i_1}\otimes \cdots \otimes e_{i_r}$.
Since $a$ is a symmetric tensor
we have $a^{i_{\pi(1)} \ldots i_{\pi(r)}}= a^{i_1\ldots i_r}$
for every permutation $\pi$ of the numbers $1,\ldots,r$.
If $b$ is a symmetric tensor of rank $s$, $b=b^{i_1\ldots i_s} e_{i_1}\otimes \cdots \otimes e_{i_s}$,
then
$$ab= \Sym(a\otimes b)= \frac{1}{(r+s)!} \sum_\pi a^{i_{\pi(1)}\ldots i_{\pi(r)}} b^{i_{\pi(r+1)}\ldots i_{\pi(r+s) } } e_{i_{1}} \otimes \cdots \otimes e_{i_{r+s}},$$
where the sum extends over all permutations of $1,\ldots, r+s$.
If $s\geq r$, then we define the contraction of $a$ with $b$ by
$$\contr(a,b)=\contr(b,a):= \sum_{i_1,\ldots,i_r=1}^n a^{i_1\ldots i_r} b^{i_1\ldots i_r j_1\ldots j_q} e_{j_1}\otimes \cdots \otimes e_{j_q},\qquad q=s-r.$$
Note that $\contr(a,b)$ is a symmetric tensor of rank $q$ and that the definition of $\contr(a,b)$ is independent of the choice of orthonormal basis. Moreover, we have
\begin{equation}\label{eq:innerprodcontr}(a,b)=\contr(a,b), \qquad a,b\in \Sym^r V,\end{equation}
where $(a,b)$ denotes the restriction of the usual inner product on $\bigotimes^r V$ to $\Sym^r V$.
If $a\in\Sym^r V$, $b\in \Sym^s V$, and $c$ is a symmetric tensor of rank at least $r+s$, then
\begin{equation}\label{eq:contractions}
\contr(a,\contr(b,c))=\contr(ab,c).
\end{equation}
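These operations are straightforward to realize on coordinates. The following small Python/NumPy sketch (the helper names are ours and are not used elsewhere in the text) implements the symmetrization, the symmetric product, and the contraction, and checks \eqref{eq:innerprodcontr} and \eqref{eq:contractions} on random symmetric tensors.
\begin{verbatim}
import itertools
import numpy as np

def sym(T):
    # full symmetrization over all permutations of the indices of T
    perms = list(itertools.permutations(range(T.ndim)))
    return sum(np.transpose(T, p) for p in perms) / len(perms)

def contr(a, b):
    # contraction of a (rank r) into the first r slots of b (rank s >= r)
    r = a.ndim
    return np.tensordot(a, b, axes=(list(range(r)), list(range(r))))

rng = np.random.default_rng(0)
n, r, s = 3, 2, 2
a = sym(rng.standard_normal((n,) * r))
b = sym(rng.standard_normal((n,) * s))
c = sym(rng.standard_normal((n,) * (r + s + 1)))   # rank at least r + s

ab = sym(np.multiply.outer(a, b))                  # the symmetric product ab
assert np.isclose(contr(a, b), (a * b).sum())            # (a,b) = contr(a,b)
assert np.allclose(contr(a, contr(b, c)), contr(ab, c))  # contr(a,contr(b,c)) = contr(ab,c)
\end{verbatim}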
From the discussion above it is clear how the convolution and contraction of tensor valuations should be defined. First of all,
if $\Phi\in \Val^{r}$ is a tensor valuation of rank $r$, then there exist scalar valuations $\Phi^{i_1\ldots i_r}$ such that
$$\Phi(K)= \Phi^{i_1\ldots i_r}(K) e_{i_1}\otimes \cdots \otimes e_{i_r}$$
for every convex body $K\subset V$ and $\Phi^{i_{\pi(1)} \ldots i_{\pi(r)}}(K)= \Phi^{i_1\ldots i_r}(K)$
for every permutation $\pi$ of $1,\ldots,r$. We define the convolution of $\Phi\in \Val^{sm,r}$ and $\Psi\in \Val^{sm,s}$ by
$$\Phi * \Psi= \frac{1}{(r+s)!} \sum_\pi \Phi^{i_{\pi(1)}\ldots i_{\pi(r)}} *\Psi^{i_{\pi(r+1)}\ldots i_{\pi(r+s) } } e_{i_{1}} \otimes \cdots \otimes e_{i_{r+s}}\in\Val^{r+s},$$
where the sum extends over all permutations of $1,\ldots, r+s$. The contraction of $\Phi$ with $\Psi$ is defined by
$$\contr(\Phi,\Psi)=\contr(\Psi,\Phi):= \sum_{i_1,\ldots,i_r=1}^n \Phi^{i_1\ldots i_r}* \Psi^{i_1\ldots i_r j_1\ldots j_q} e_{j_1}\otimes \cdots \otimes e_{j_q} \in\Val^{sm,q},\qquad q=s-r.$$
Note that convolution and contraction of tensor valuations are compatible with the action of $G$, i.e.\
\begin{equation}\label{eq:Ginvariantcontra}g\cdot (\Phi *\Psi) = (g\cdot \Phi)*(g\cdot \Psi)\qquad\text{and}\qquad \contr(g\cdot \Phi, g\cdot \Psi) = g\cdot \contr(\Phi,\Psi) \end{equation}
for every $g\in G$. In particular, $\Phi* \Psi$ is $G$-covariant, if $\Phi$ and $\Psi$ are.
Moreover, whenever the rank of $\Phi$ is greater than or equal to the sum of the ranks of $\Psi_1$ and $\Psi_2$, the identity
\begin{equation}\label{eq:valcontraconv} \contr(\Psi_1,\contr(\Psi_2,\Phi))= \contr(\Psi_1 *\Psi_2, \Phi)\end{equation}
can be proved in the same way as \eqref{eq:contractions}.
If $r=s$, we define the Poincar\'e duality pairing of $\Phi$ and $\Psi$ by
$$(\Phi,\Psi)= \left\langle \chi^*, \contr(\Phi,\Psi)\right\rangle$$
and the Poincar\'e duality map $\widehat{\PD}\colon \Val^{sm,r} \to \left( \Val^{sm, r}\right)^*$ by $\left\langle \widehat{\PD}(\Phi),\Psi\right\rangle = (\Phi,\Psi)$.
We establish now a formula for the Poincar\'e duality pairing of tensor valuations similar to Theorem~4.1 of \cite{bernig09}. To state the result we first have to introduce some notation.
For differential forms on a manifold $M$ with values in a finite-dimensional euclidean vector space $V$ we define a pairing
$$(\;\cdot\;,\; \cdot\;) \colon \Omega(M, V) \otimes \Omega(M,V) \rightarrow \Omega(M)$$
by
$$(\omega\otimes v, \omega'\otimes v'):= \left\langle v,v'\right\rangle\; \omega \wedge \omega'.$$
The pull-back and exterior derivative of $V$-valued forms are defined componentwise. For differential forms on $M$ with values in the algebra $\Sym V$ we define a wedge product
$$\wedge\ \colon \Omega(M, \Sym V) \otimes \Omega(M,\Sym V) \rightarrow \Omega(M, \Sym V)$$
by
$$(\omega\otimes v) \wedge (\omega'\otimes v'):= \omega \wedge \omega' \otimes vv'.$$
Let us also recall a few facts regarding the convolution of smooth, translation-invariant valuations \cite{bernig_fu06}.
If $\phi_1,\phi_2\in\Val^{sm}$ are given by the translation-invariant differential forms $\omega_1$, $\omega_2$, then the convolution product $\phi_1 *\phi_2$ is represented by the differential form
\begin{equation}\label{eq:convproduct}*_1^{-1} (*_1\omega_1 \wedge *_1 D \omega_2), \end{equation}
where $D$ denotes the Rumin differential (see \cite{rumin94}),
$$*_1 ( \pi_1^*\gamma_1\wedge \pi_2^*\gamma_2) = (-1)^{\binom{n-\operatorname{deg} \gamma_1}{2}} \pi_1^*(*\gamma_1)\wedge \pi_2^*\gamma_2,$$
and $*$ denotes the Hodge star operator on $V$. If we let $D$ and $*_1$ act on vector-valued forms componentwise, then formula \eqref{eq:convproduct} holds verbatim for tensor valuations.
\begin{proposition} \label{lem:poincare} Let $0<k<n$. If $\Phi \in \Val_k^r$ and $\Phi' \in \Val^{r}_{n-k}$ are tensor valuations represented by differential forms $\omega$ and $\omega'$, then
\begin{equation}\label{eq:poincare} (\Phi, \Phi')= \frac{(-1)^{k} }{\omega_n} \int_{B(V) \times S(V)} (\omega,D \omega'),\end{equation}
where $B(V)$ denotes the unit ball of $V$.
\end{proposition}
\begin{proof} By the bilinearity of the pairings $(\Phi, \Phi')$ and $(\omega,D \omega')$ it will be sufficient to prove the formula for scalar valuations.
Let $x_1,\ldots, x_n$ be the standard coordinates on $V=\mathbb{R}^n$ and let $y_1,\ldots,y_{n-1}$ be local coordinates on $S^{n-1}$. Put $dx_I=dx_{i_1} \wedge \cdots \wedge dx_{i_k}$ and
$dx_{I'}= dx_{i'_1}\wedge \cdots \wedge dx_{i'_{n-k}}$ and put $dy_{J}=dy_{j_1}\wedge \cdots \wedge dy_{j_{n-k-1}}$ and $dy_{J'}=dy_{j'_1}\wedge \cdots \wedge dy_{j'_{k}}$. By linearity it will be sufficient to
prove
$$*_1^{-1}(*_1(dx_I \wedge dy_J) \wedge *_1 (dx_{I'}\wedge dy_{J'})) =(-1)^k \left(\frac{\partial }{\partial x_1} \wedge \cdots \wedge\frac{\partial }{\partial x_n}\right) \lrcorner (dx_I \wedge dy_J \wedge dx_{I'}\wedge dy_{J'}). $$
If $dx_I\wedge dx_{I'}=0$ there is nothing to prove since the left-hand and right-hand side are zero. If $dx_I\wedge dx_{I'}\neq 0$, we choose $\epsilon\in \{-1,1\}$ such that
$dx_I\wedge dx_{I'} =\epsilon \vol_{\mathbb{R}^n}$. Then $*(dx_I)= \epsilon dx_{I'}$ and $*(dx_{I'})= (-1)^{k(n-k)}\epsilon dx_{I}$. Using that
$$\binom{n}{2} + \binom{k}{2} + \binom{n-k}{2} \equiv k(n-1) \mod 2$$
we compute
\begin{align*}
*_1^{-1}( *_1(dx_I \wedge dy_J) \wedge *_1 (dx_{I'}\wedge dy_{J'})) &= *_1^{-1}\left( (-1)^{\binom{k}{2} + \binom{n-k}{2} + k(n-k)} dx_{I'} \wedge dy_J \wedge dx_I \wedge dy_{J'} \right) \\
& = *_1^{-1}\left( (-1)^{\binom{k}{2} + \binom{n-k}{2} +k(n-k-1)} dx_I\wedge dx_{I'} \wedge dy_J \wedge dy_{J'}\right)\\
&= *_1^{-1}\left( (-1)^{\binom{k}{2} + \binom{n-k}{2} +k(n-k-1)} \epsilon \vol_{\mathbb{R}^n} \wedge dy_J \wedge dy_{J'}\right)\\
&= (-1)^{k^2} \epsilon\; dy_J\wedge dy_{J'}\\
& = (-1)^{k}\epsilon\; dy_J\wedge dy_{J'}.
\end{align*}
On the other hand,
\begin{align*}
\left(\frac{\partial }{\partial x_1} \wedge \cdots \wedge\frac{\partial }{\partial x_n}\right) \lrcorner (dx_I \wedge dy_J \wedge dx_{I'}\wedge dy_{J'})
&= (-1)^{(n-k)(n-k-1)} \epsilon\; dy_J \wedge dy_{J'} \\
& = \epsilon\; dy_J \wedge dy_{J'}.
\end{align*}
\end{proof}
We remark that Proposition~\ref{lem:poincare} together with Theorem~4.1 of \cite{bernig09} implies \eqref{eq:poincare_pairing}.
\subsection{The ftaig for tensor valuations}
The goal of this subsection is to introduce a new kinematic operator for tensor valuations and to prove a
version of the fundamental theorem of algebraic integral geometry (ftaig) for this operator.
Observe that if $f,g\colon V\rightarrow W$ are linear maps, $r_1,r_2\geq 0$, and $p\in \Sym^{r_1+r_2}V$, then $f^{\otimes r_1} \otimes g^{\otimes r_2} (p)$ is in general not an element of $\Sym^{r_1+r_2}W$, but only
an element of $ \Sym^{r_1}W \otimes \Sym^{r_2}W$. Given $\Phi\in \Val^{r_1+r_2,G}_k$, we define a bivaluation (see, e.g., \cite{ludwig10} or \cite{alesker_etal11} for more information on bivaluations)
with values in $ \Sym^{r_1}V \otimes \Sym^{r_2}V$ by
$$a^{r_1,r_2}(\Phi)(K,L) = \int_G (\operatorname{id}^{\otimes r_1} \otimes g^{\otimes r_2}) \Phi(K+ g^{-1}L)\; dg$$
and call $a^{r_1,r_2}$ the \emph{additive kinematic operator} for tensor valuations.
Note that
\begin{equation}
\label{eq:Gcovar}
a^{r_1,r_2}(\Phi)(h_1K,h_2 L)= h_1^{\otimes r_1}\otimes h_2^{\otimes r_2} (a^{r_1,r_2}(\Phi)(K,L))
\end{equation}
whenever $h_1,h_2\in G$.
\begin{theorem}
Let $\Phi_1,\ldots, \Phi_{m_1}$ be a basis of $\Val^{r_1,G}$ and let $\Psi_1,\ldots, \Psi_{m_2}$ be a basis of $\Val^{r_2,G}$. If $\Phi\in \Val^{r_1+r_2,G}$, then
there exist constants $c_{ij}$ depending only on $\Phi$ such that
$$a^{r_1,r_2}(\Phi)(K,L) = \sum_{i,j} c_{ij} \; \Phi_i(K)\otimes \Psi_j(L)$$
for all convex bodies $K,L\subset V$.
In particular, we can consider the additive kinematic operator as a linear map
$$a^{r_1,r_2}\colon \Val^{r_1+r_2,G}\rightarrow\Val^{r_1,G}\otimes \Val^{r_2,G}.$$
\end{theorem}
\begin{proof}
Let $\{a_r\} $ be a basis of $ \Sym^{r_1}V $ and let $\{b_s\}$ be a basis of $ \Sym^{r_2}V $. For every $K$ and $L$ there exist numbers $\phi_{rs}(K,L)$ such that
$$a^{r_1,r_2}(\Phi)(K,L) = \sum_{r,s} \phi_{rs}(K,L) a_r\otimes b_s.$$
Since $\{ a_r\otimes b_s\}$ is a basis of $ \Sym^{r_1}V \otimes \Sym^{r_2}V$, $\phi_{rs}$ is a bivaluation. If we fix $L$, then by \eqref{eq:Gcovar}, $K\mapsto \sum_{r} \phi_{rs}(K,L) a_r$
is an element of $\Val^{r_1, G}$ for every $s$ and hence there exist numbers $\mu_{is}(L)$ such that
$$\sum_{r} \phi_{rs}(K,L) a_r = \sum_i \mu_{is}(L) \Phi_i(K).$$
Since $\{ \Phi_i\}$ is a basis of $\Val^{r_1,G}$, each $\mu_{is}$ is a valuation. Rearranging terms, we arrive at
$$a^{r_1,r_2}(\Phi)(K,L) = \sum_{i,s}\mu_{is}(L) \Phi_i(K) \otimes b_s= \sum_i \Phi_i(K) \otimes \left(\sum_s \mu_{is}(L) b_s\right).$$
Again by \eqref{eq:Gcovar}, for each $i$, $L\mapsto \sum_s \mu_{is}(L) b_s$
is an element of $\Val^{r_2,G}$. Hence there exist constants $c_{ij}$ depending only on $\Phi$ such that
$$\sum_s \mu_{is}(L) b_s=\sum_j c_{ij} \Psi_j(L)$$
and thus
$$a^{r_1,r_2}(\Phi)(K,L)= \sum_{i,j} c_{ij} \; \Phi_i(K)\otimes \Psi_j(L).$$
\end{proof}
\begin{lemma}\label{lem:kinmatcontraction}
If $\Psi_1\in\Val^{r_1,G}$ and $\Psi_2\in \Val^{r_2,G}$, then
$$ \left(\contr(\Psi_1,\;\cdot \;)\otimes \contr(\Psi_2,\;\cdot \;)\right) \circ a^{r_1,r_2} = a\circ \contr(\Psi_1*\Psi_2, \; \cdot\;).$$
\end{lemma}
\begin{proof}
If $C\subset V$ is a convex body with smooth boundary and all principal curvatures positive, then $\mu_C(K)=\vol(K+C)$ is a smooth valuation and
$\mu_C *\phi(K)=\phi(K+C)$ for every $\phi\in \Val^{sm}$ and every convex body $K\subset V$, see \cite{bernig_fu06}.
Assume for the moment that $L$ has smooth boundary with all principal curvatures positive. Then
\begin{align*} a^{r_1,r_2}(\Phi)(\; \cdot\;,L) &= \int_G (\id^{\otimes r_1}\otimes g^{\otimes r_2}) \mu_{g^{-1}L}*\Phi \; dg \\
& = \int_G \mu_{g^{-1}L} * \Phi_{i_1\ldots i_{r_1+r_2}} e_{i_1}\otimes\cdots\otimes e_{i_{r_1}}\otimes ge_{i_{r_1+1}}
\otimes \cdots \otimes ge_{i_{r_1+r_2}} \; dg
\end{align*}
and hence
\begin{align*}
\left[ (\contr(\Psi_1,\;\cdot \;)\otimes \id) \circ a^{r_1,r_2}\right] (\Phi)(K, L) &= \int_G \mu_{g^{-1}L} * (\Psi_1)_{i_1\ldots i_{r_1}}*\Phi_{i_1\ldots i_{r_1+r_2}} ge_{i_{r_1+1}} \otimes \cdots \otimes ge_{i_{r_1+r_2}} \; dg\\
&= \int_G g^{\otimes r_2 } \contr(\Psi_1,\Phi)(K+g^{-1} L)\; dg\\
&= \int_G \contr(\Psi_1,\Phi)(gK+L)\; dg,
\end{align*}
where the last line follows from \eqref{eq:Ginvariantcontra}.
Applying now \eqref{eq:valcontraconv} yields
$$
\left[ \left(\contr(\Psi_1,\;\cdot \;)\otimes \contr(\Psi_2,\;\cdot \;)\right) \circ a^{r_1,r_2}\right](\Phi)(K,L) = \int_G \contr(\Psi_1*\Psi_2,\Phi)(gK+L)\; dg.$$
By continuity, this equality holds for all convex bodies $K$ and $L$.
\end{proof}
\begin{lemma}
The Poincar\'e duality map $\widehat{\PD}\colon \Val^{r,G} \rightarrow (\Val^{r,G})^*$ is a linear isomorphism.
\end{lemma}
\begin{proof} Fix some nonzero tensor valuation $\Phi\in \Val^{r,G}$. It follows from \eqref{eq:poincare_pairing_even} and \eqref{eq:poincare_pairing} that the
pairing of scalar valuations $\left\langle \chi^*, \phi* \psi\right\rangle$ is perfect. Since some component of $\Phi$ is nonzero, there exists a tensor valuation $\Psi'$, not necessarily $G$-covariant,
such that
$(\Phi, \Psi')\neq 0$. By \eqref{eq:Ginvariantcontra}, the Poincar\'e pairing of tensor valuations is $G$-invariant. Hence, if we put $\Psi=\int_G g\cdot \Psi' \; dg$, then
$$(\Phi , \Psi) =\int_G ( \Phi , g\cdot \Psi')\; dg = ( \Phi , \Psi')\neq 0.$$
\end{proof}
\begin{theorem}[ftaig]\label{thm:ftaig} Let $r_1,r_2$ be non-negative integers and denote by $c_G\colon \Val^{r_1,G}\otimes \Val^{r_2,G}\rightarrow \Val^{r_1+r_2, G}$ the convolution of tensor valuations. Then
$$a^{r_1,r_2} = (\widehat{\PD}^{-1}\otimes \widehat{\PD}^{-1}) \circ c^*_G \circ \widehat{\PD}.$$
\end{theorem}
\begin{proof}
Let $\Psi_1$ and $\Psi_2$ be $G$-covariant tensor valuations of rank $r_1$ and $r_2$, respectively. By the definition of Poincar\'e duality and Lemma~\ref{lem:kinmatcontraction} we have
\begin{align*} \left\langle \widehat{\PD} \otimes \widehat{\PD} \circ a^{r_1,r_2}(\Phi), \Psi_1\otimes \Psi_2\right\rangle & = \left\langle \contr(\Psi_1, \; \cdot\; ) \otimes \contr(\Psi_2, \; \cdot\; ) \circ a^{r_1,r_2}(\Phi), \chi^*\otimes \chi^*\right\rangle\\
& = \left\langle a(\contr(\Psi_1 * \Psi_2 , \Phi)) , \chi^* \otimes \chi^* \right\rangle.
\end{align*}
On the other hand,
$$\left\langle c_G^* \circ \widehat{\PD}(\Phi), \Psi_1\otimes \Psi_2\right\rangle = \left\langle \widehat{\PD}(\Phi) , \Psi_1 *\Psi_2\right\rangle = \left\langle \contr(\Psi_1*\Psi_2, \Phi),\chi^*\right\rangle.$$
The theorem follows now from the fact that $\left\langle a(\phi),\chi^*\otimes \chi^*\right\rangle = \left\langle \phi , \chi^*\right\rangle$ for every $\phi\in \Val^G$ which is clear from the definition of $a$ and $\chi^*$.
\end{proof}
\subsection{Moment maps}
For each $r\geq 0$ we define the $r$th order moment map $M^r\colon \Area^G\rightarrow \Val^{ r,G}$ by
$$M^r(\Phi)(K) = \int_{S(V)} u^{ r} \; d\Phi(K,u).$$
Note that in particular $M^0=\glob$. The reason why moment maps are useful for us is that they connect the kinematic operator for area measures with the additive kinematic operators for tensor valuations.
\begin{proposition}\label{prop:tensorkin}
If $r_1, r_2$ are non-negative integers and $\Phi\in \Area^G$, then
$$\left[ a^{r_1,r_2} \circ M^{r_1+r_2}\right](\Phi) =\left[ (M^{r_1}\otimes M^{r_2}) \circ A \right](\Phi).$$
\end{proposition}
\begin{proof}
Let $K,L\subset V$ be convex bodies. Using the notation of Theorem~\ref{thm:existence}, we obtain
$$\left[ (M^{r_1}\otimes M^{r_2}) \circ A\right](\Phi)(K,L)= \int_G \Phi(K+g^{-1} L, u^{r_1} \otimes (gu)^{r_2} ) \; dg.$$
On the other hand, since $u^{r_1} \otimes (gu)^{r_2} = \operatorname{id}^{\otimes r_1} \otimes g^{\otimes r_2}(u^{r_1} \otimes u^{r_2})$ and $u^{r_1} \otimes u^{r_2}= u^{r_1+r_2}$, we obtain
$$\int_G \Phi(K+g^{-1} L, u^{r_1} \otimes (gu)^{r_2} ) \; dg= \left[ a^{r_1,r_2}\circ M^{r_1+r_2}\right] (\Phi)(K,L).$$
\end{proof}
\subsection{Moment maps for unitary area measures} We consider now the case $V=\mathbb{C}^n=\mathbb{R}^{2n}$ with $G=U(n)$.
When restricted to the space of unitary area measures, both $M^0$ and $M^1$ have non-trivial kernels. The kernel of the latter map was determined by the author
in \cite{wannerer13}.
The second order moment map, however, turns out to be injective.
As in \cite{wannerer13} we denote by $(z_1,\ldots, z_n, \zeta_1,\ldots, \zeta_n)$ the canonical coordinates on $\mathbb{C}^n \oplus \mathbb{C}^n \cong T\mathbb{C}^n$, $z_i= x_i + \sqrt{-1} y_i$ and $\zeta_i= \xi_i + \sqrt{-1} \eta_i$.
The canonical complex structure on $\mathbb{C}^n$, i.e.\ componentwise multiplication by $\sqrt{-1} $, is denoted by $J\colon \mathbb{C}^n \to \mathbb{C}^n$. Moreover we denote the action of $J$ on $T\mathbb{C}^n\cong \mathbb{C}^n \oplus \mathbb{C}^n$ by the same
letter. We let $e_i$ be the element of $\mathbb{C}^n$ with coordinates $z_j=\delta_{ij}$, $i,j=1,\ldots,n$ and put $e_{\bar i}:= J e_i$.
\begin{theorem}
The map $M^2\colon \Area^{U(n)}\rightarrow \Val^{2, U(n)}$ is injective.
\end{theorem}
\begin{proof}
Suppose the unitary area measure $\Phi$ satisfies $M^2(\Phi)=0$. Denoting by
$Q=\sum_{i=1}^n \left(e_i^2 + e_{\bar i}^2\right)$ the metric tensor, we have
$$0 = (M^2(\Phi)(K),Q ) = \glob(\Phi)(K)$$
for every convex body $K\subset \mathbb{C}^n$.
Hence, by \eqref{eq:glob_DN}, there exist numbers $c_{k,q}$ such that
$$\Phi= \sum_{k,q} c_{k,q} N_{k,q}.$$
As in \cite{bernig_fu11}, we denote by $E_{k,q}= \mathbb{C}^q\oplus \mathbb{R}^{k-2q}$ a two-parameter family of distinguished real subspaces of $\mathbb{C}^n$. If $K\subset E_{k,q}$ is a convex body, then
$$M^2(\Phi)(K)= c_{k,q} M^2(N_{k,q})(K).$$
In particular, if $K$ equals the unit ball in $E_{k,q}$, $K=B(E_{k,q})$, then
$$(M^2(\Phi)(K), e_n^2)= c_{k,q} \frac{2(n-k+q)}{2n-k} \int_{B(E_{k,q})\times S(E_{k,q}^\perp)} \xi_n^2 (\gamma_{k,q}-\beta_{k,q}).$$
The theorem will be proved if we can show that the above integral is not zero. Denoting by $\iota \colon E_{k,q}\oplus E_{k,q}^\perp\to \mathbb{C}^n\oplus \mathbb{C}^n$ the inclusion map, we obtain
$$\iota^* \theta_0 =\sum_{i=k-q+1}^n d\xi_i\wedge d\eta_i,\qquad \iota^* \theta_1 =\sum_{i=q+1}^{k-q} dx_i\wedge d\eta_i,\qquad \iota^* \theta_2 =\sum_{i=1}^{q} dx_i\wedge dy_i,$$
and
$$ \iota^* \beta =-\sum_{i=q+1}^{k-q} \eta_i dx_i,\qquad \text{and}\qquad \iota^* \gamma =\sum_{i=k-q+1}^{n} \xi_i d\eta_i -\eta_i d\xi_i.$$
Consequently,
$$\iota^* \beta_{k,q} = \frac{1}{(k-2q)\omega_{2n-k}} \sum_{i=q+1}^{k-q} \eta_i \partial \eta_i \lrcorner \vol_{E_{k,q}\oplus E_{k,q}^\perp}$$
and
$$\iota^* \gamma_{k,q} = \frac{1}{2(n-k+q)\omega_{2n-k}} \sum_{i=k-q+1}^{n} \left(\xi_i \partial \xi_i + \eta_i \partial \eta_i \right) \lrcorner \vol_{E_{k,q}\oplus E_{k,q}^\perp}.$$
For every bounded Borel function $f\colon S^{n-1}\to \mathbb{R}$ we have the identity
\begin{equation}
\label{eq:contraction_formula}
\int_{S^{n-1}} f(x) x_i \; \partial x_i\lrcorner \vol_{\mathbb{R}^n} = \int_{S^{n-1}} f(x) x_i^2 \; d\mathcal{H}^{n-1}(x),\qquad i=1,\ldots, n.
\end{equation}
Here $\mathcal{H}^{n-1}$ denotes the $(n-1)$-dimensional Hausdorff measure. Moreover, a calculation shows that
\begin{equation}
\label{eq:moments}
\int_{S^{n-1}} x_i^2 x_j^2 \; d\mathcal{H}^{n-1}(x)= \left\{ \begin{array}{ll}
\frac{\omega_n}{n+2}, & i\neq j;\\
\frac{3\omega_n}{n+2}, & i=j,
\end{array}
\right.
\end{equation}
see \cite{folland99}, Chapter~2, Exercise~63.
Using \eqref{eq:contraction_formula} and \eqref{eq:moments}, we obtain
$$\int_{B(E_{k,q})\times S(E_{k,q}^\perp)} \xi_n^2 (\gamma_{k,q}-\beta_{k,q})= \frac{\omega_k}{(n-k+q)(2n-k+2)}$$
which finishes the proof.
\end{proof}
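The moment integrals \eqref{eq:moments} can be confirmed with the classical formula for integrals of monomials over $S^{n-1}$ from \cite{folland99}, Chapter~2. The following short Python sketch (standard library only; the helper name is ours) performs this check.
\begin{verbatim}
from math import gamma, pi, prod, isclose

def sphere_monomial(n, exponents):
    # integral over S^{n-1} of x_1^{a_1} ... x_n^{a_n}, all a_i even
    b = [(a + 1) / 2 for a in exponents]
    return 2 * prod(gamma(bi) for bi in b) / gamma(sum(b))

for n in range(2, 10):
    omega_n = pi**(n / 2) / gamma(n / 2 + 1)
    off = sphere_monomial(n, [2, 2] + [0] * (n - 2))    # i != j
    diag = sphere_monomial(n, [4] + [0] * (n - 1))      # i = j
    assert isclose(off, omega_n / (n + 2))
    assert isclose(diag, 3 * omega_n / (n + 2))
\end{verbatim}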
\section{Proof of \texorpdfstring{$\bar v^2=0$}{v2=0}} \label{sec:vbar2}
We define three differential forms with values in $\Sym \mathbb{C}^n$ which are invariant under the $U(n)$-action \eqref{eq:G-action}: the $0$-form
$$w= \sum_{i=1}^n e_i \xi_i + e_{\bar i} \eta_i$$
and the $1$-forms
$$\nu_0= \sum_{i=1}^n e_i d\xi_i + e_{\bar i} d\eta_i \qquad \text{and}\qquad \nu_1= \sum_{i=1}^n e_i dx_i + e_{\bar i} dy_i.$$
Observe that $dw = \nu_0$, $$J ^*w= \sum_{i=1}^n e_{\bar i} \xi_i - e_i \eta_i,\qquad J^*\nu_0= \sum_{i=1}^n e_{\bar i} d\xi_i - e_i d\eta_i,\qquad J^*\nu_1= \sum_{i=1}^n e_{\bar i} dx_i - e_i dy_i,$$
and that also $J^*w$, $J^*\nu_0$, and $J^*\nu_1$ are $U(n)$-invariant.
Using these forms, we define a tensor valuation $\Phi_1\in \Val^{2,U(n)}$ by
$$\Phi_1(K) = \int_{N(K)} J^*w \; \nu_1 \wedge \theta_2^{n-1}.$$
Note that $\Phi_1$ is homogeneous of degree $2n-1$.
\begin{proposition}\label{lem:sym4}
If $n\geq 2$, then
$$(M^4(B_{2,0}), \Phi_1 * \Phi_1)= (M^4(\Gamma_{2,1}), \Phi_1 * \Phi_1)= 0.$$
If $n \geq 3$, then also
$$(M^4(\Gamma_{2,0}), \Phi_1 * \Phi_1)= 0.$$
\end{proposition}
\begin{proof}All we have to do is to plug the Rumin differentials computed in the Appendix into \eqref{eq:poincare} and \eqref{eq:convproduct}.
This computation, which at first sight looks a bit lengthy,
is simplified by the following two facts.
If $\omega\in \Omega^{2n-1}(S\mathbb{C}^n, \Sym^4\mathbb{C}^n)$ and $\omega_1,\omega_2\in \Omega^{2n-1}(S\mathbb{C}^n, \Sym^2\mathbb{C}^n)$ are invariant with respect to the $U(n)$-action \eqref{eq:G-action}, then the form
\begin{equation}\label{eq:poincare_form}( *_1^{-1}( *_1 \omega_1 \wedge *_1D\omega_2),D\omega)\end{equation}
is $U(n)$-invariant and as such already determined by its value at a single point $p\in S\mathbb{C}^n$. In the following, we choose $p=(0,e_1)$.
Moreover, the assertion that
$$\text{adding multiples of }\alpha \text{ to } \omega_1 \text{ does not change } \eqref{eq:poincare_form}$$
follows immediately from the fact that $D\omega$ is a multiple of $\alpha$.
For the rest of the proof we put $\omega_1=\omega_2=J^*w \; \nu_1 \wedge \theta_2^{n-1}$. The form $\omega$ will be either
$$ w^4 \beta_{2,0}, \qquad w^4 \gamma_{2,0},\qquad \text{or}\qquad w^4\gamma_{2,1}.$$
At the point $p$ we have $\alpha=dx_1$ and
\begin{equation}\label{eq:staromega}
*_1\omega_1 \equiv -(n-1)! e_{\bar 1}^2 dx_1
\end{equation}
up to forms which are not multiples of $dx_1$.
To pick out only the relevant terms of $*_1D\omega_1$, observe that $ D\omega$ is a multiple of $ e_1^2$ at the point $p$. Since $ e_1^2$ does not appear
in \eqref{eq:staromega}, only those terms in $*_1D\omega_1$ contribute to \eqref{eq:poincare_form}
which are multiples of $ e_1^2$. Thus,
\begin{equation}\label{eq:starDomega_1} *_1 D\omega_1 \equiv 2 (n-1)! e_1^2 dy_1 d\eta_1.\end{equation}
It follows from \eqref{eq:staromega} and \eqref{eq:starDomega_1} that the relevant part of $*_1^{-1}(*_1\omega_1 \wedge *_1D\omega_1)$ equals
$$2(n-1)!^2 e_1^2 e_{\bar 1}^2 dx_2dy_2 \cdots dx_ndy_n d\eta_1.$$
Since every term in $D\omega$ which is a multiple of $ e_1^2e_{\bar 1}^2$ is a multiple of $d\eta_1$ as well, this immediately implies that
\eqref{eq:poincare_form} vanishes.
\end{proof}
\begin{lemma}
If $n\geq 2$, then
\begin{equation}\label{eq:poincare_G10}(M^2(\Gamma_{1,0}),\Phi_1) = 0 \end{equation}
and
\begin{equation}\label{eq:poincare_B10}
(M^2(B_{1,0}),\Phi_1) = \frac{4 n! \omega_{2n}}{\omega_{2n-1}}.
\end{equation}
\end{lemma}
\begin{proof}
Again we are going to use Lemma~\ref{lem:poincare} and the Rumin differentials computed in the Appendix. By $U(n)$-invariance it will be sufficient to compute
$(\omega , D\omega')$ at the point $p=(0,e_1)$. If $\omega=w^2 \beta_{1,0}$ or $\omega=w^2 \gamma_{1,0}$, then at the point $p$ the form $\omega$ is a multiple of $ e_1^2$. Hence only the terms of $D\omega'$
which are multiples of $ e_1^2$ are relevant. At $p$ we have
$$D(J^*w \; \nu_1 \wedge \theta_2^{n-1}) \equiv 2 (n-1)! e_1^2 dx_1 dx_2 dy_2 \cdots dx_n dy_n d\eta_1$$
up to terms which are not multiples of $ e_1^2$.
From this we immediately obtain \eqref{eq:poincare_G10} and \eqref{eq:poincare_B10}.
\end{proof}
\begin{theorem}\label{thm:vbar} $\bar v^2=0$.
\end{theorem}
\begin{proof}
If $\Psi\in \{B_{2,0},\Gamma_{2,0}, \Gamma_{2,1}\}$, then there exist constants $c_{1}^\Psi$, $c_{2}^\Psi$, and $c_{3}^\Psi$ such that
$$A(\Psi)= c_{1}^\Psi B_{1,0}\otimes B_{1,0} + c_{2}^\Psi (B_{1,0}\otimes \Gamma_{1,0} + \Gamma_{1,0}\otimes B_{1,0}) + c_{3}^\Psi \Gamma_{1,0}\otimes \Gamma_{1,0}.$$
By the definition of the kinematic product, $\bar v^2=0$ is equivalent to $c^\Psi_{1}=0$.
The ftaig for tensor valuations (Theorem~\ref{thm:ftaig}) and Proposition~\ref{lem:sym4} imply that
$$\left\langle a^{2,2}\circ M^4(\Psi), \widehat{\PD}(\Phi_1)\otimes \widehat{\PD}(\Phi_1)\right\rangle= \left\langle \Phi_1 * \Phi_1, \widehat{\PD}(M^4(\Psi))\right\rangle = 0.$$
On the other hand, by Proposition~\ref{prop:tensorkin} and \eqref{eq:poincare_G10}, we have
$$ \left\langle a^{2,2}\circ M^4(\Psi), \widehat{\PD}(\Phi_1)\otimes \widehat{\PD}(\Phi_1)\right\rangle =\left\langle M^2\otimes M^2 \circ A(\Psi), \widehat{\PD}(\Phi_1)\otimes \widehat{\PD}(\Phi_1)\right\rangle = c_1^\Psi (M^2(B_{1,0}),\Phi_1)^2.$$
From \eqref{eq:poincare_B10} we conclude that $c_1^\Psi=0$.
\end{proof}
\begin{proof}[Conclusion of the proof of Theorem~A]
Let $h$ be the unique algebra homomorphism
$$h\colon \mathbb{R}[s,t,v]\to \Area^{U(n)*}$$
determined by $t\mapsto \bar t$, $s\mapsto \bar s$, $v \mapsto \bar v$. It follows from Corollary~\ref{cor:subalgebra}, Lemma~\ref{lem:relations} and Theorem~\ref{thm:vbar}
that $h$ descends to an algebra homomorphism $\bar h \colon \mathbb{R}[s,t,v]/I_n\rightarrow \Area^{U(n)*}$. By Proposition~\ref{prop:generators}, $\bar h$ is surjective. The theorem will be proved if we can show that
\begin{equation}\label{eq:dimension} \dim \mathbb{R}[s,t,v]/I_n = \dim \Area^{U(n)*}.\end{equation}
To this end let $M$ be the module given by the action of $\mathbb{R}[s,t]$ on $\mathbb{R}[s,t,v]$, let $S_n$ be the submodule generated by
$$f_{n+1}(s,t), \quad f_{n+2}(s,t),\quad f_{n+1}(s,t)v, \quad f_{n+2}(s,t)v,\quad p_{n}(s,t)-q_{n-1}(s,t)v, \quad \text{and}\quad p_n(s,t)v,$$
and let $T$ be the
submodule generated by $\{v^k\colon k\geq 2\}$. Then $I_n=S_n + T$, $S_n\cap T=\{0\}$, and
$$ M/I_n \cong (M/T ) / ((S_n+T)/T) \cong (M/T) / (S_n/(S_n\cap T)) =(M/T)/S_n.$$
Since
$$M/T\cong \mathbb{R}[s,t]\oplus \mathbb{R}[s,t],$$
\eqref{eq:dimension} follows now from Theorem~4.3 of \cite{wannerer13}.
\end{proof}
\section{Explicit local kinematic formulas} \label{sec:explicit}
In this final section we want to demonstrate how our main theorem can be used to derive explicit kinematic formulas.
\begin{lemma}\label{lem:BB0}
$$B_{k,q}^* * B^*_{k',q'}=0.$$
\end{lemma}
\begin{proof}
This follows from Lemma~\ref{lem:Bideal} and $\bar v^2=0$.
\end{proof}
An immediate consequence of Lemma~\ref{lem:BB0} and \eqref{eq:B*equals} is
\begin{align} N_{k,q}^* * N_{k',q'}^* =
\frac{(k-2q)(k'-2q')}{4(n-k+q)(n-k'+q')} \bigg( & \frac{2(n-k'+q')}{k'-2q'} \Delta_{k,q}^* * N_{k',q'}^* \label{eq:NN} \\
&+\frac{2(n-k+q)}{k-2q} \Delta_{k',q'}^* * N_{k,q}^*-\Delta_{k,q}^* * \Delta_{k',q'}^*\bigg). \notag
\end{align}
This is the final ingredient we needed to obtain explicit local kinematic formulas. Indeed, Proposition~\ref{prop:Delta_st} gives us an explicit expression for $\Delta_{k,q}^*$
in terms of $\bar s$ and $\bar t$.
Hence, using Lemma~\ref{lem:sbartbar} repeatedly, we can compute $\Delta_{k,q}^* * \Delta_{k',q'}^*$ and $\Delta_{k,q}^* * N_{k',q'}^*$. Using relation \eqref{eq:NN} we can also compute $N_{k,q}^* * N_{k',q'}^*$.
Let us demonstrate this general procedure in a few examples.
\begin{example}
In the complex plane --- the simplest non-trivial case --- it is not difficult to write down the full array of kinematic formulas. We have
\begin{align*} A(\Delta_{3,1}) = &(\Delta_{0,0}\otimes \Delta_{3,1}+\Delta_{3,1}\otimes \Delta_{0,0}) + \frac{2}{3} \left(\Delta_{1,0}\otimes \Delta_{2,0} +\Delta_{2,0}\otimes \Delta_{1,0} \right)\\
&+ \frac{1}{3}\left( \Delta_{1,0}\otimes \Delta_{2,1} +\Delta_{2,1}\otimes \Delta_{1,0}\right)+ \frac{1}{3}\left( N_{1,0} \otimes \Delta_{2,0} + \Delta_{2,0} \otimes N_{1,0} \right) \\
& -\frac{2}{3}\left( N_{1,0} \otimes \Delta_{2,1} + \Delta_{2,1} \otimes N_{1,0} \right),\\
A(\Delta_{2,1}) = &\left(\Delta_{0,0}\otimes \Delta_{2,1}+\Delta_{2,1}\otimes \Delta_{0,0}\right) + \frac{4}{9\pi }\left( \Delta_{1,0}\otimes N_{1,0} +N_{1,0}\otimes \Delta_{1,0}\right)\\
& + \frac{8}{9\pi} \Delta_{1,0}\otimes \Delta_{1,0} + \frac{2}{9\pi }N_{1,0} \otimes N_{1,0}, \\
A(\Delta_{2,0}) = &\left(\Delta_{0,0}\otimes \Delta_{2,0}+\Delta_{2,0}\otimes \Delta_{0,0}\right) -\frac{4}{9\pi }\left( \Delta_{1,0}\otimes N_{1,0} +N_{1,0}\otimes \Delta_{1,0}\right)\\
& + \frac{16}{9\pi} \Delta_{1,0}\otimes \Delta_{1,0} -\frac{8}{9\pi }N_{1,0} \otimes N_{1,0}
\end{align*}
and the trivial formulas
\begin{align*}A(\Delta_{1,0}) & = \Delta_{0,0}\otimes \Delta_{1,0}+\Delta_{1,0}\otimes \Delta_{0,0}, \\
A(N_{1,0}) & = \Delta_{0,0}\otimes N_{1,0}+N_{1,0}\otimes \Delta_{0,0}, \\
A(\Delta_{0,0}) & = \Delta_{0,0}\otimes \Delta_{0,0}.
\end{align*}
Let us show how the coefficients of $N_{1,0} \otimes N_{1,0}$ in $A(\Delta_{2,1})$ and $A(\Delta_{2,0})$ are obtained. By Proposition~\ref{prop:Delta_st},
$$\Delta_{1,0}^* = \frac{2}{3} \bar t.$$
Using Lemma~\ref{lem:sbartbar}, we find that
$$\Delta_{1,0}^* * \Delta_{1,0}^* = \frac{2}{3}\bar t* \Delta_{1,0}^*= \frac{8}{9\pi}\left(2\Delta_{2,0}^* + \Delta_{2,1}^*\right)$$
and
$$\Delta_{1,0}^* * N_{1,0}^*= \frac{2}{3}\bar t * N_{1,0}^* = \frac{4}{9\pi}\left(-\Delta_{2,0}^* + \Delta_{2,1}^*\right).$$
From this we can read off the coefficients of $\Delta_{1,0}\otimes \Delta_{1,0}$ and $\Delta_{1,0}\otimes N_{1,0}$ in $A(\Delta_{2,1})$ and $A(\Delta_{2,0})$.
Using \eqref{eq:NN}, we obtain
$$N_{1,0}^* * N_{1,0}^* = \Delta_{1,0}^* * N_{1,0}^* -\frac{1}{4} \Delta_{1,0}^* * \Delta_{1,0}^* = \frac{2}{9\pi} \left( \Delta_{2,1}^* - 4\Delta_{2,0}^*\right)$$
which gives us the coefficients of $N_{1,0} \otimes N_{1,0}$.
\end{example}
\begin{example}
In $\mathbb{C}^3$, let us determine the coefficient of $N_{2,0}\otimes N_{3,1}$ in $A(\Delta_{5,2})$. By equation \eqref{eq:NN},
$$N_{2,0}^* * N_{3,1}^* = \frac{1}{2} \left( 2 \Delta_{2,0}^* * N_{3,1}^* + \Delta_{3,1}^* * N_{2,0}^* - \Delta_{2,0}^* * \Delta_{3,1}^*\right).$$
Using Proposition~\ref{prop:Delta_st} and Lemma~\ref{lem:sbartbar}, we find that
$$\Delta_{3,1}^* * N_{2,0}^* =\frac{2\pi}{9}\bar t \left(\bar s * N_{2,0}^* \right)= \frac{\bar t}{18}\left( \Delta_{4,2}^* -6 \Delta_{4,1}^*\right)= -\frac{5}{18}\Delta_{5,2}^*$$
and
$$ \Delta_{3,1}^* * \Delta_{2,0}^* = \frac{2\pi}{9}\bar t \left(\bar s * \Delta_{2,0}^* \right) = \frac{\bar t}{18}\left( \Delta_{4,2}^* +6 \Delta_{4,1}^*\right)=\frac{7}{18} \Delta_{5,2}^*. $$
Similarly, we obtain
$$\Delta_{2,0}^* * N_{3,1}^* = \left( \frac{\pi}{8} \bar t^2 - \frac{\pi}{12} \bar s\right) * N_{3,1}^*= \frac{1}{9} \Delta_{5,2}^*$$
and hence
$$N_{2,0}^* * N_{3,1}^* = \frac{1}{2}\left( \frac{2}{9}-\frac{5}{18}-\frac{7}{18}\right) \Delta_{5,2}^*= -\frac{2}{9} \Delta_{5,2}^*.$$
We conclude that the coefficient of $N_{2,0}\otimes N_{3,1}$ in $A(\Delta_{5,2})$ equals $-\frac{2}{9}$.
\end{example}
As the dimension $n$ increases, computing explicit kinematic formulas by hand does not become more difficult, but increasingly tedious.
Since the procedure itself is very simple, it can be implemented in any computer algebra package.
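As an illustration, the first step of such an implementation, namely expressing $\Delta_{k,q}^*$ as a polynomial in $\bar s$ and $\bar t$ via Proposition~\ref{prop:Delta_st}, takes only a few lines. The following Python/SymPy sketch (the symbols s and t below stand for $\bar s$ and $\bar t$; function names are ours) reproduces, for instance, $\Delta^*_{1,0}=\frac23\bar t$ in $\mathbb{C}^2$ and the expressions for $\Delta^*_{2,0}$ and $\Delta^*_{3,1}$ in $\mathbb{C}^3$ used above. The remaining step, multiplying out such expressions by means of Lemma~\ref{lem:sbartbar}, is equally mechanical.
\begin{verbatim}
from sympy import Rational, pi, gamma, factorial, symbols, simplify

s, t = symbols('s t')   # stand-ins for the generators \bar s and \bar t

def omega(k):
    # volume of the k-dimensional unit ball
    return pi**Rational(k, 2) / gamma(Rational(k, 2) + 1)

def Delta_star(n, k, q):
    # the polynomial of Proposition prop:Delta_st expressing Delta_{k,q}^*
    pref = (omega(2 * n - k) * factorial(k - 2 * q) * factorial(n - k + q)
            / (pi**(n - k) * 2**(k - 2 * q) * factorial(n)))
    total = sum((-1)**(i + q) / factorial(i - q) * factorial(n - i)
                / factorial(k - 2 * i) * t**(k - 2 * i) * s**i
                for i in range(q, k // 2 + 1))
    return simplify(pref * total)

print(Delta_star(2, 1, 0))   # = 2*t/3
print(Delta_star(3, 2, 0))   # = pi*t**2/8 - pi*s/12
print(Delta_star(3, 3, 1))   # = 2*pi*s*t/9
\end{verbatim}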
\section*{Appendix}
The purpose of this appendix is to provide the explicit expressions of certain Rumin differentials we needed in Section~\ref{sec:vbar2}.
Recall that if $M$ is a contact manifold of dimension $2n-1$ with global contact form $\alpha$, then there exists a canonical
second order differential operator $D\colon \Omega^{n-1}(M)\to \Omega^{n}(M)$ called the Rumin differential. It is defined as follows:
$$D\omega = d(\omega + \alpha \wedge \xi ),$$
where $\xi\in \Omega^{n-2}(M)$ is chosen such that $d(\omega + \alpha \wedge \xi )$ is a multiple of the contact form. Rumin \cite{rumin94} showed
that such a form $\xi$ always exists and that moreover $\alpha\wedge \xi$ is unique. Recall also that the Reeb vector field $T$ is the unique smooth vector field on $M$ such that
$$T\lrcorner \alpha = 1 \qquad \text{and}\qquad T\lrcorner d\alpha =0.$$
Using the Reeb vector field we may reformulate Rumin's theorem as follows: For every $(n-1)$-form $\omega$ on $M$ there exists a unique form $\xi=:Q(\omega)$ with $T\lrcorner \xi=0$ such that $d(\omega + \alpha \wedge \xi)$ is
a multiple of the contact form.
On the sphere bundle $S\mathbb{C}^n$ the standard contact form is given by
$$\alpha=\sum_{i=1}^n \xi_i dx_i + \eta_i dy_i,$$
which agrees with the notation we have used so far. We remark that $f^* Q(\omega)=Q(f^*\omega)$ for every smooth map $f\colon M \to M$ satisfying $f^* \alpha =\alpha$.
Note that, in particular, every isometry of $\mathbb{C}^n$ lifted to the sphere bundle has this property. This is very useful for us: Since we compute Rumin differentials of vector-valued forms $\omega$ invariant under the
$U(n)$-action \eqref{eq:G-action},
it will be sufficient to compute $Q(\omega)$ at a single point $p$. In the following we choose $p=(0,e_1)$.
\begin{proposition*}
\begin{align}
D(J^*w \; \nu_1 \wedge \theta_2^{n-1}) =\ & \alpha \wedge \Big( 2(n-1) J^*\nu_0 \wedge \nu_1 \wedge \beta - J^*w \; \nu_0 \wedge \theta_2 - 2 w \; J^* \nu_0 \wedge \theta_2 \label{eq:D2}\\
& -(n-1) J^*w \; \nu_1\wedge \theta_1\Big) \wedge \theta_2^{n-2},\notag\\
D(w^4 \beta \wedge \theta_1 \wedge \theta_0^{n-2}) =\ & 4w^3 \alpha \wedge \Big( 7\nu_0\wedge\beta \wedge \gamma + 2 w \gamma\wedge \theta_1 + J^*w \gamma\wedge \theta_s - \nu_0\wedge \theta_s \label{eq:D3} \\
& + 2 J^*\nu_0 \wedge \theta_1 - 3w \beta \wedge \theta_0\Big) \wedge \theta_0^{n-2}\notag\\
& +12 w^2 \alpha \wedge \Big( J^*w \nu_0\wedge \theta_1 -\nu_0 \wedge J^*\nu_0\wedge \beta\Big) \wedge \theta_0^{n-2},\notag\\
D(w^4 \gamma \wedge \theta_2\wedge \theta_0^{n-2}) =\ & 2w^4 \alpha \wedge \Big( (2n+3) \beta \wedge \theta_0 - (n+1) \gamma \wedge \theta_1 \Big) \wedge \theta_0^{n-2} \label{eq:D4}\\
& + 4w^3 \alpha \wedge \Big( \nu_0 \wedge \theta_s + 2 J^*\nu_1 \wedge \theta_0 - (2n+3) \nu_0 \wedge \beta\wedge \gamma - J^*w \gamma\wedge \theta_s\Big) \wedge \theta_0^{n-2}\notag\\
& -12 w^2 \alpha \wedge \nu_0\wedge J^* \nu_1 \wedge \gamma \wedge \theta_0^{n-2},\notag
\intertext{for $n\geq 2$. If $n\geq 3$, then}
D(w^4 \gamma\wedge \theta_1^2 \wedge \theta_0^{n-3}) =\ & 8w^4 \alpha \wedge \Big( \gamma \wedge \theta_1 - \beta \wedge \theta_0 \Big) \wedge \theta_0^{n-2} \label{eq:D5}\\
& +4 w^3 \alpha \wedge \Big( 4 \nu_0 \wedge \beta \wedge \gamma + 4 J^*\nu_0 \wedge \theta_1 \Big) \wedge \theta_0^{n-2}\notag\\
& -24 w^2 \alpha \wedge \nu_0 \wedge J^*\nu_0 \wedge \gamma \wedge \theta_1 \wedge \theta_0^{n-3}.\notag
\end{align}
\end{proposition*}
Note that at the point $(0,e_1)\in S\mathbb{C}^n$ we have
$$\alpha= dx_1, \qquad \beta=dy_1,\qquad \gamma=d\eta_1, \qquad \text{and}\qquad d\xi_1=0.$$
Recall also that
$$ T\lrcorner \alpha = 1, \qquad T \lrcorner \beta =0, \qquad T \lrcorner \gamma =0,$$
and
$$ T\lrcorner \theta_0 = 0, \qquad T\lrcorner \theta_1 = \gamma, \qquad T\lrcorner \theta_2 = \beta.$$
Moreover,
$$ T\lrcorner \nu_0 = T\lrcorner J^*\nu_0 =0, \qquad T\lrcorner \nu_1 =w, \qquad \text{and}\qquad \qquad T\lrcorner J^*\nu_1 = J^*w.$$
\begin{lemma*}
\begin{align}
Q(J^*w \; \nu_1 \wedge \theta_2^{n-1}) =\ & J^*w(w \theta_2 -(n-1) \nu_1 \wedge \beta) \wedge \theta_2^{n-2},\label{eq:Q2} \\
\intertext{and up to multiples of the contact form}
Q(w^4 \beta \wedge \theta_1 \wedge \theta_0^{n-2}) \equiv\ & w^3 \big( w \theta_s - 4 J^*w \theta_1 - 4 \beta \wedge J^*\nu_0 - 6 w \beta \wedge \gamma \big) \wedge \theta_0^{n-2} ,\label{eq:Q3}\\
Q(w^4 \gamma \wedge \theta_2\wedge \theta_0^{n-2}) \equiv\ & w^3 \big( 2(n+1)w \beta \wedge \gamma \wedge \theta_0 + 2(n-2) \gamma \wedge \nu_0\wedge \theta_s - (n-1) w \theta_0 \wedge \theta_s \label{eq:Q4} \\
& - 4 \gamma \wedge J^*\nu_1 \wedge \theta_0 \big)\wedge \theta_0^{n-3},\notag\\
Q(w^4 \gamma\wedge \theta_1^2 \wedge \theta_0^{n-3}) \equiv \ & 2 w^3 \big( w \theta_0 \wedge \theta_s -2w \beta \wedge \gamma \wedge \theta_0 -2 \gamma \wedge \nu_0 \wedge \theta_s -4 \gamma \wedge J^*\nu_0 \wedge \theta_1
\big) \wedge \theta_0^{n-3}. \label{eq:Q5}
\end{align}
\end{lemma*}
\begin{proof} We write $Q(\omega)$ for the left-hand side of \eqref{eq:Q2} and $\xi$ for the right-hand side. We first show
\begin{equation}
\label{eq:vertical}
\alpha \wedge d\omega + \alpha \wedge d \alpha \wedge \xi =0.
\end{equation}
Note that at $p=(0,e_1)$ we have
\begin{equation}
\label{eq:alpha_theta2}
\alpha \wedge \theta_2^{n-2}= -(n-2)! \sum_{i=2}^n \partial y_1 \wedge \partial x_i \wedge \partial y_i \lrcorner \vol_{\mathbb{C}^n}
\end{equation}
and hence
\begin{align*}
-(n-1) J^*w \nu_1 \wedge \beta \wedge d\alpha \wedge \alpha \wedge \theta_2^{n-2} & = (n-1)! J^*w \nu_1 \wedge d\alpha \wedge \left(\sum_{i=2}^n \partial x_i \wedge \partial y_i \lrcorner \vol_{\mathbb{C}^n} \right)\\
& = (n-1)! e_{\bar 1} \left(\sum_{i=2}^n e_{\bar i} d\xi_i - e_{ i} d\eta_i\right) \wedge \vol_{\mathbb{C}^n}.
\end{align*}
Moreover, we have at the point $p$
$$w J^* w \alpha \wedge d\alpha \wedge \theta_2^{n-1} = -(n-1)! e_1e_{\bar 1} d\eta_1 \wedge \vol_{\mathbb{C}^n}$$
and
$$\alpha \wedge d\omega = -(n-1)! e_{\bar 1} J^* \nu_0 \wedge \vol_{\mathbb{C}^n}.$$
This proves \eqref{eq:vertical} and, since $T\lrcorner \xi=0$, we conclude that $Q(\omega)=\xi$ at $p$ and the $U(n)$-invariance implies now \eqref{eq:Q2}.
To prove \eqref{eq:Q3} first note that at $p$ we have
$$\theta_0^{n-2}= (n-2)! \sum_{i=2}^n \partial \eta_1 \wedge \partial \xi_i \wedge \partial \eta_i \lrcorner \vol_{T_pS\mathbb{C}^n},$$
where
$$\vol_{T_pS\mathbb{C}^n} = d\eta_1\wedge d\xi_2 \wedge d\eta_2 \wedge \cdots \wedge d\xi_n \wedge d\eta_n.$$
Put $\omega'= w^4 \beta \wedge \theta_1$ and
$$\xi'=w^3 \big( w \theta_s - 4 J^*w \theta_1 - 4 \beta \wedge J^*\nu_0 - 6 w \beta \wedge \gamma \big).$$
For $i=2,\ldots,n$ we calculate
\begin{align*}
\alpha \wedge d\omega' \wedge \left( \partial \eta_1 \wedge \partial \xi_i \wedge \partial \eta_i \lrcorner \vol_{T_pS\mathbb{C}^n} \right) & = \alpha \wedge \Big(4e_1^3 e_{\bar 1} d\eta_1\wedge dy_1 \wedge (dx_i \wedge d\eta_i -dy_i\wedge d\xi_i)\\
& \qquad \qquad - 4e_1^3 (e_i dx_i + e_{\bar i} dy_i)\wedge dy_1 \wedge d\xi_i \wedge d\eta_i \\
& \qquad \qquad -2e_1^4 dx_i\wedge dy_i\wedge d\xi_i \wedge d\eta_i \Big) \wedge \left( \partial \xi_i \wedge \partial \eta_i \lrcorner \vol_{T_pS\mathbb{C}^n} \right)\\
& =-\alpha \wedge d\alpha \wedge \xi' \wedge \left(\partial \eta_1 \wedge \partial \xi_i \wedge \partial \eta_i \lrcorner \vol_{T_pS\mathbb{C}^n} \right).
\end{align*}
This establishes \eqref{eq:Q3}, and \eqref{eq:Q4} is proved in the same way.
We come now to the proof of \eqref{eq:Q5}. We split $d\omega$ into two summands
$$d\omega= 2w^4 \theta_1^2\wedge \theta_0^{n-2} + 4w^3 \nu_0 \wedge \gamma \wedge \theta_1^2 \wedge \theta_0^{n-3} =:\Omega_1 + \Omega_2$$
and similarly write $\xi= \Xi_1 + \Xi_2$ with
$$\Xi_1= 2 w^4 \big(\theta_s -2 \beta \wedge \gamma \big) \wedge \theta_0^{n-2}$$
and
$$\Xi_2 = 4 w^3 \big( \nu_0 \wedge \theta_s + 2 J^*\nu_0 \wedge \theta_1 \big) \wedge \gamma \wedge \theta_0^{n-3}.$$
As in the proof of \eqref{eq:Q3} and \eqref{eq:Q4}, one checks that $\alpha \wedge (\Omega_1 + d\alpha \wedge \Xi_1)=0$ at $p$. Proving
$\alpha \wedge (\Omega_2 + d\alpha \wedge \Xi_2)=0$ at $p$ requires a bit more work. First observe that at the point $p$,
\begin{align*}\gamma \wedge \theta_0^{n-3} &= (n-3)! \sum_{2\leq i<j\leq n} \left( \partial \xi_i \wedge \partial \eta_i \wedge \partial \xi_j \wedge \partial \eta_j \right) \lrcorner \vol_{T_pS\mathbb{C}^n}\\
& =: (n-3)! \sum_{2\leq i<j\leq n} v_{ij}.
\end{align*}
We put
$$\sigma_i = d\xi_i \wedge dx_i + d\eta_i \wedge dy_i \qquad \text{and} \qquad \sigma_i' = d\xi_i \wedge dy_i -d\eta_i \wedge dx_i,$$
for $i=1,\ldots, n$.
Then $d\alpha = -\theta_s = \sum_i \sigma_i$ and $\theta_1= \sum_i \sigma_i'$. We compute at $p$
$$\alpha \wedge ( \theta_1^2 -(d\alpha)^2 ) \wedge v_{ij} =
2 \alpha \wedge (\sigma_i' \wedge \sigma_j' - \sigma_i \wedge \sigma_j ) \wedge v_{ij}, \qquad 2\leq i<j\leq n$$
and
$$\alpha \wedge (d\alpha \wedge \theta_1 ) \wedge v_{ij} =2 \alpha \wedge (\sigma_i' \wedge \sigma_j - \sigma_i \wedge \sigma_j' ) \wedge v_{ij}, \qquad 2\leq i<j\leq n.$$
Hence
$$ \alpha \wedge ( \nu_0 \wedge( \theta_1^2 -(d\alpha)^2 ) + J^*\nu_0 \wedge d\alpha \wedge \theta_1 ) \wedge v_{ij}= 0$$
and we conclude that also $\alpha \wedge (\Omega_2 + d\alpha \wedge \Xi_2)=0$ at $p$.
\end{proof}
\begin{proof}[Proof of the Proposition]
Since $d\omega+ d\alpha\wedge \xi$ is a multiple of $\alpha$ whenever $\xi\equiv Q(\omega)$ modulo multiples of the contact form, we have
$$D\omega= \alpha \wedge \left( T\lrcorner (d\omega + d\alpha \wedge \xi) - d \xi\right).$$
If $\xi = Q(\omega)$, then
$$D\omega= \alpha \wedge \left( T\lrcorner d\omega - d \xi\right).$$
Using this, \eqref{eq:D2} follows immediately from
\begin{align*} T\lrcorner d\omega &= (n-1) J^*\nu_0\wedge \nu_1 \wedge \beta \wedge \theta_2^{n-2} - w J^*\nu_0 \wedge \theta_2^{n-1}\\
\intertext{and}
d\xi &= \left( -(n-1) J^*\nu_0 \wedge \nu_1 \wedge \beta + (w J^*\nu_0 + J^* w \nu_0 )\wedge \theta_2 + (n-1) J^*w \nu_1 \wedge \theta_1 \right) \wedge \theta_2^{n-2}.
\end{align*}
In the same way \eqref{eq:D3}, \eqref{eq:D4}, and \eqref{eq:D5} can be proved.
\end{proof}
\begin{bibdiv}
\begin{biblist}
\bib{abardia_etal12}{article}{
author={Abardia, J.},
author={Gallego, E.},
author={Solanes, G.},
title={The Gauss-Bonnet theorem and Crofton-type formulas in complex
space forms},
journal={Israel J. Math.},
volume={187},
date={2012},
pages={287--315},
}
\bib{alesker99a}{article}{
author={Alesker, S.},
title={Description of continuous isometry covariant valuations on convex
sets},
journal={Geom. Dedicata},
volume={74},
date={1999},
pages={241--248},
}
\bib{alesker99b}{article}{
author={Alesker, S.},
title={Continuous rotation invariant valuations on convex sets},
journal={Ann. of Math. (2)},
volume={149},
date={1999},
pages={977--1005},
}
\bib{alesker01}{article}{
title={Description of translation invariant valuations on convex sets with solution of P. McMullen's conjecture},
author={Alesker, S.},
journal={Geom. Funct. Anal.},
volume={11},
date={2001},
pages={244--272}
}
\bib{alesker03}{article}{
title={Hard Lefschetz theorem for valuations, complex integral geometry, and unitarily invariant valuations},
author={Alesker, S.},
journal={J. Differential Geom.},
volume={63},
date={2003},
pages={63--95}
}
\bib{alesker04}{article}{
title={The multiplicative structure on continuous polynomial valuations},
author={Alesker, S.},
journal={Geom. Funct. Anal.},
volume={14},
date={2004},
pages={1--26}
}
\bib{alesker06}{article}{
author={Alesker, S.},
title={Theory of valuations on manifolds. I. Linear spaces},
journal={Israel J. Math.},
volume={156},
date={2006},
pages={311--339},
}
\bib{alesker11}{article}{
author={Alesker, S.},
title={A Fourier-type transform on translation-invariant valuations on
convex sets},
journal={Israel J. Math.},
volume={181},
date={2011},
pages={189--294},
}
\bib{alesker_fu08}{article}{
author={Alesker, S.},
author={Fu, J. H. G.},
title={Theory of valuations on manifolds. III. Multiplicative structure
in the general case},
journal={Trans. Amer. Math. Soc.},
volume={360},
date={2008},
pages={1951--1981},
}
\bib{alesker_etal11}{article}{
author={Alesker, S.},
author={Bernig, A.},
author={Schuster, F. E.},
title={Harmonic analysis of translation invariant valuations},
journal={Geom. Funct. Anal.},
volume={21},
date={2011},
pages={751--773},
}
\bib{beisbart_etal06}{article}{
author={Beisbart, C.},
author={Barbosa, M. S.},
author={Wagner, H.},
author={da F. Costa, L.},
title={Extended morphometric analysis of neuronal cells with Minkowski valuations},
journal={Eur. Phys. J. B },
volume={52},
pages={531--546},
}
\bib{beisbart_etal02}{article}{
author={Beisbart, C.},
author={Dahlke, R.},
author={Mecke, K.},
author={Wagner, H.},
title={Vector- and tensor-valued descriptors for spatial patterns},
book={
editor={Mecke, K.},
editor={Stoyan, D.},
title={Morphology of Condensed Matter},
series={Lecture Notes in Physics},
volume={600},
publisher={Springer},
place={Berlin}
},
pages={283--260},
}
\bib{bernig09}{article}{
author={Bernig, A.},
title={A product formula for valuations on manifolds with applications to
the integral geometry of the quaternionic line},
journal={Comment. Math. Helv.},
volume={84},
date={2009},
pages={1--19},
}
\bib{bernig11}{article}{
author={Bernig, A.},
title={Integral geometry under $G_2$ and ${\rm Spin}(7)$},
journal={Israel J. Math.},
volume={184},
date={2011},
pages={301--316},
}
\bib{bernig_fu06}{article}{
author={Bernig, A.},
author={Fu, J. H. G.},
title={Convolution of convex valuations},
journal={Geom. Dedicata},
volume={123},
date={2006},
pages={153--169},
}
\bib{bernig_fu11}{article}{
author={Bernig, A.},
author={Fu, J. H. G.},
title={Hermitian integral geometry},
journal={Ann. of Math. (2)},
volume={173},
date={2011},
pages={907--945},
}
\bib{bernig_etal13}{article}{
author={Bernig, A.},
author={Fu, J. H. G.},
author={Solanes, G.},
title={Integral geometry of complex space forms},
eprint={arXiv:1204.0604 [math.DG]}
}
\bib{blaschke55}{book}{
author={Blaschke, W.},
title={Vorlesungen \"uber Integralgeometrie},
publisher={Deutscher Verlag der Wissenschaften},
place={Berlin},
date={1955},
}
\bib{chern52}{article}{
author={Chern, S.-S.},
title={On the kinematic formula in the Euclidean space of $n$ dimensions},
journal={Amer. J. Math.},
volume={74},
date={1952},
pages={227--236},
}
\bib{federer69}{book}{
author={Federer, H.},
title={Geometric measure theory},
publisher={Springer-Verlag, New York},
date={1969},
}
\bib{folland99}{book}{
author={Folland, G. B.},
title={Real analysis},
edition={2},
publisher={John Wiley \& Sons Inc.},
place={New York},
date={1999},
}
\bib{fu90}{article}{
author={Fu, J. H. G.},
title={Kinematic formulas in integral geometry},
journal={Indiana Univ. Math. J.},
volume={39},
date={1990},
pages={1115--1154},
}
\bib{fu94}{article}{
author={Fu, J. H. G.},
title={Curvature measures of subanalytic sets},
journal={Amer. J. Math.},
volume={116},
date={1994},
pages={819--880},
}
\bib{fu06}{article}{
author={Fu, J. H. G.},
title={Structure of the unitary valuation algebra},
journal={J. Differential Geom.},
volume={72},
date={2006},
pages={509--533},
}
\bib{greub_etal72}{book}{
author={Greub, W.},
author={Halperin, S.},
author={Vanstone, R.},
title={Connections, curvature, and cohomology. Vol. I: De Rham cohomology
of manifolds and vector bundles},
publisher={Academic Press},
place={New York},
date={1972},
}
\bib{haberl12}{article}{
author={Haberl, C.},
title={Minkowski valuations intertwining with the special linear group},
journal={J. Eur. Math. Soc. (JEMS)},
volume={14},
date={2012},
pages={1565--1597},
}
\bib{haberl_parapatits13}{article}{
author={Haberl, C.},
author={Parapatits, L.},
title={The centro-affine Hadwiger theorem},
journal={J. Amer. Math. Soc.},
status={to appear},
}
\bib{hug_etal07}{article}{
author={Hug, D.},
author={Schneider, R.},
author={Schuster, R.},
title={The space of isometry covariant tensor valuations},
journal={Algebra i Analiz},
volume={19},
date={2007},
pages={194--224},
}
\bib{hug_etal08}{article}{
author={Hug, D.},
author={Schneider, R.},
author={Schuster, R.},
title={Integral geometry of tensor valuations},
journal={Adv. in Appl. Math.},
volume={41},
date={2008},
pages={482--509},
}
\bib{klain99}{article}{
author={Klain, D. A.},
title={An Euler relation for valuations on polytopes},
journal={Adv. Math.},
volume={147},
date={1999},
pages={1--34},
}
\bib{klain_rota97}{book}{
author={Klain, D. A.},
author={Rota, G.-C.},
title={Introduction to geometric probability},
publisher={Cambridge University Press},
place={Cambridge},
date={1997},
}
\bib{ludwig03}{article}{
author={Ludwig, M.},
title={Ellipsoids and matrix-valued valuations},
journal={Duke Math. J.},
volume={119},
date={2003},
pages={159--188},
}
\bib{ludwig10}{article}{
author={Ludwig, M.},
title={Minkowski areas and valuations},
journal={J. Differential Geom.},
volume={86},
date={2010},
pages={133--161},
}
\bib{ludwig13}{article}{
author={Ludwig, M.},
title={Covariance matrices and valuations},
journal={Adv. Math.},
status={to appear},
}
\bib{ludwig_reitzner10}{article}{
author={Ludwig, M.},
author={Reitzner, M.},
title={A classification of ${\rm SL}(n)$ invariant valuations},
journal={Ann. of Math. (2)},
volume={172},
date={2010},
pages={1219--1267},
}
\bib{mcmullen77}{article}{
author={McMullen, P.},
title={Valuations and Euler-type relations on certain classes of convex
polytopes},
journal={Proc. London Math. Soc. (3)},
volume={35},
date={1977},
pages={113--135},
}
\bib{mcmullen97}{article}{
author={McMullen, P.},
title={Isometry covariant valuations on convex bodies},
journal={Rend. Circ. Mat. Palermo (2) Suppl.},
volume={50},
date={1997},
pages={259--271},
}
\bib{parapatits_wannerer13}{article}{
author={Parapatits, L.},
author={Wannerer, T.},
title={On the inverse Klain map},
journal={Duke Math. J.},
volume={162},
date={2013},
pages={1895--1922},
}
\bib{zeilberger96}{book}{
author={Petkov{\v{s}}ek, M.},
author={Wilf, H. S.},
author={Zeilberger, D.},
title={$A=B$},
publisher={A K Peters Ltd.},
place={Wellesley, MA},
date={1996},
}
\bib{rumin94}{article}{
author={Rumin, M.},
title={Formes diff\'erentielles sur les vari\'et\'es de contact},
journal={J. Differential Geom.},
volume={39},
date={1994},
pages={281--330},
}
\bib{santalo04}{book}{
author={Santal{\'o}, L. A.},
title={Integral geometry and geometric probability},
edition={2},
publisher={Cambridge University Press},
place={Cambridge},
date={2004},
}
\bib{schneider75}{article}{
author={Schneider, R.},
title={Kinematische Ber\"uhrma\ss e f\"ur konvexe K\"orper und
Integralrelationen f\"ur Oberfl\"achenma\ss e},
journal={Math. Ann.},
volume={218},
date={1975},
pages={253--267},
}
\bib{schneider93}{book}{
author={Schneider, R.},
title={Convex bodies: the Brunn-Minkowski theory},
publisher={Cambridge University Press},
place={Cambridge},
date={1993},
}
\bib{schneider_weil08}{book}{
author={Schneider, R.},
author={Weil, W.},
title={Stochastic and integral geometry},
publisher={Springer-Verlag},
place={Berlin},
date={2008},
}
\bib{schroeder_etal10}{article}{
author={Schr\"oder-Turk, G. E.},
author={Kapfer, S.},
author={Breidenbach, B.},
author={Beisbart, C.},
author={Mecke, K.},
title={Tensorial Minkowski functionals and anisotropy measures for planar patterns},
journal={J. Microscopy},
volume={238},
pages={57--74},
date={2010},
}
\bib{schuster10}{article}{
author={Schuster, F. E.},
title={Crofton measures and Minkowski valuations},
journal={Duke Math. J.},
volume={154},
date={2010},
pages={1--30},
}
\bib{wannerer13}{article}{
author={Wannerer, T.},
title={The module of unitarily invariant area measures},
eprint={arXiv:1207.6481 [math.DG]},
}
\end{biblist}
\end{bibdiv}
\end{document}
\begin{document}
\title{Tamagawa numbers of elliptic curves with torsion points}
\begin{abstract}
Let $K$ be a global field and let $E/K$ be an elliptic curve with a $K$-rational point of prime order $p$. In this paper we are interested in how often the (global) Tamagawa number $c(E/K)$ of $E/K$ is divisible by $p$. This is a natural question to consider in view of the fact that the fraction $c(E/K)/ |E(K)_{\text{tors}}|$ appears in the second part of the Birch and Swinnerton-Dyer Conjecture. We focus on elliptic curves defined over global fields, but we also prove a result for higher dimensional abelian varieties defined over $\mathbb{Q}$.
\end{abstract}
\section{Introduction}
Let $K$ be a global field and let $E/K$ be an elliptic curve. Let $v$ be a non-archimedean valuation of $K$. We denote by $K_{v}$ the completion of $K$ with respect to the valuation $v$, by $\mathcal{O}_{K_{v}}$ the valuation ring of $K_{v}$, and by $k_{v}$ the residue field of $K_{v}$. The set $E_0(K_v)$ that consists of points with nonsingular reduction is a finite index subgroup of $E(K_v)$. The index $c_{v}(E/K)=[E(K_v):E_0(K_v)]$ is called the Tamagawa number of $E/K$ at $v$. Alternatively, the number $c_{v}(E/K)$ can be defined as $|\mathcal{E}_{k_v}(k_v)/\mathcal{E}^0_{k_v}(k_v)|$, where $\mathcal{E}_{k_v}/k_v$ is the special fiber of the N\'eron model of $E/K$ and $\mathcal{E}^0_{k_v}/k_v$ is the connected component of the identity of $\mathcal{E}_{k_v}/k_v$. The two above definitions of $c_{v}(E/K)$ agree by \cite[Corollary IV.9.2]{silverman2}. We define the (global) Tamagawa number of $E/K$ as $c(E/K):=\prod_{v} c_{v}(E/K)$, where the product is taken over all the non-archimedean valuations of $K$. If $\mathfrak{p}$ is a prime of the ring of integers $\mathcal{O}_K$ of $K$ corresponding to a non-archimedean valuation $v$, then we will also denote $c_v(E/K)$ by $c_{\mathfrak{p}}(E/K)$.
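All of the quantities just defined are readily computable in practice. As an illustration (a minimal SageMath sketch using only standard built-in methods; the particular curve and the values it returns play no role in the arguments below), for the curve 11a1 of conductor $11$ one can read off the local index, the global Tamagawa number, and the torsion order directly:
\begin{verbatim}
E = EllipticCurve('11a1')      # y^2 + y = x^3 - x^2 - 10x - 20
print(E.local_data(11))        # reduction type and local index at 11
print(E.tamagawa_number(11))   # c_11(E/Q)
print(E.tamagawa_product())    # c(E/Q), product over the bad places
print(E.torsion_order())       # |E(Q)_tors|
\end{verbatim}
For this curve one expects split multiplicative reduction of type I$_5$ at $11$, so that $c_{11}(E/\mathbb{Q})=5$, while $|E(\mathbb{Q})_{\text{tors}}|=5$; the ratio appearing in the Birch and Swinnerton-Dyer formula is therefore $1$.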
Let $K$ be a global field and let $E/K$ be an elliptic curve (or more generally an abelian variety). In this paper, we are interested in the effect of the torsion subgroup of $E/K$ on the Tamagawa number of $E/K$. The importance of such results stems from the fact that the fraction $c(E/K)/ |E(K)_{\text{tors}}|$ appears in the Birch and Swinnerton-Dyer Conjecture (see \cite[Appendix C.16]{aec} or \cite[Conjecture F.4.1.6]{hindrysilverman}).
The first to take up the study of Tamagawa numbers of abelian varieties with torsion points was Lorenzini in \cite{lor}. Krumm in \cite{Krummthesis} investigated the interplay between torsion points and Tamagawa numbers for elliptic curves over number fields of low degree and formulated a conjecture concerning elliptic curves with a point of order $13$ over quadratic number fields. Krumm's conjecture was later proved by Najman in \cite{najmantamawanumber}. Very recently, Trbović has studied in \cite{trbovic2021tamagawa} Tamagawa numbers of elliptic curves with an isogeny.
When $K$ is a quadratic number field, Lorenzini (see \cite[Corollary 3.4]{lor}) has proved that there are no elliptic curves $E/K$ with a $K$-rational point of order $11$ and such that $11 \nmid c(E/K)$. Krumm (see \cite[Proposition 4.2]{Krummthesis}) proved the same result when $K$ is a cubic number field. Example \ref{example11torsionoverdegree4} in the next section shows that there exists a quartic number field $K$ and an elliptic curve $E/K$ with a $K$-rational point of order $11$ such that $c(E/K)=1$. Since the modular curve $X_1(11)$ has genus $1$, for a given number field $K$, there may exist infinitely many elliptic curves $E/K$ with a $K$-rational point of order $11$. Theorem \ref{theorem11torsionpoint}, which is proved in Section \ref{proofsofresults}, provides a generalization of Lorenzini and Krumm's results.
\begin{theorem}\label{theorem11torsionpoint}
For every number field $K/\mathbb{Q}$ there exists a constant $n_{K,11}$ such that the following holds: For every elliptic curve $E/K$ with a $K$-rational point of order $11$ we have that $11$ divides $c(E/K)$ with at most $n_{K,11}$ exceptions.
\end{theorem}
Let now $K$ be equal to $\mathbb{Q}$ or a quadratic number field. It follows from work of Lorenzini (see \cite[Proposition 1.1, Proposition 2.10, and Corollary 3.4]{lor}) that if $E/K$ is an elliptic curve with a $K$-rational point of prime order $p \geq 7$, then $p$ divides $c(E/K)$ with only finitely many exceptions (the number of exceptions depends on $K$). The following theorem, which follows from Theorem \ref{theorem11torsionpoint} and is proved in Section \ref{proofsofresults}, provides a generalization of the above statement to number fields of arbitrary degree.
\begin{theorem}\label{firsttheorem}
For every number field $K/\mathbb{Q}$ there exists a constant $n_K$ such that the following holds: For every prime $p \geq 7$ and every elliptic curve $E/K$ with a $K$-rational point of order $p$ we have that $p$ divides $c(E/K)$ with at most $n_K$ exceptions.
\end{theorem}
It follows from \cite[Lemma 2.26]{lor} and \cite[Corollary 5.4]{barriosroy} that for $p=2$ or $3$ there exist infinitely many elliptic curves $E/\mathbb{Q}$ with a $\mathbb{Q}$-rational point of order $p$ such that $c(E/\mathbb{Q})=1$. Also, as we explain in Remark \ref{remarkp235}, it seems likely that there exists a number field $K/\mathbb{Q}$ and an infinite number of elliptic curves $E/K$ with a $K$-rational point of order $5$ and such that $5 \nmid c(E/K)$. Therefore, the assumption that $p \geq 7$ is necessary in Theorem \ref{firsttheorem}. Moreover, Examples \ref{example1Krumm} and \ref{example2Krumm} below, which are due to Krumm, show that there exist elliptic curves $E/K$ defined over cubic and quartic number fields $K/\mathbb{Q}$ with a $K$-rational point of order greater than $11$ and such that $c(E/K)=1$.
One may wonder whether Theorem \ref{theorem11torsionpoint} can be generalized to abelian varieties of higher dimension by requiring that the constant also depends on their dimension. As Part $(i)$ of the following theorem shows, such a generalization is not possible even for $K=\mathbb{Q}$.
\begin{theorem}\label{weilrestrictiontamagawa}
Let $p \geq 5$ be a prime.
\begin{enumerate}[topsep=2pt,label=(\roman*)]
\itemsep0em
\item There exist infinitely many abelian varieties $A/\mathbb{Q}$ of dimension at most $\frac{p^2-1}{2}$ with a $\mathbb{Q}$-rational point of order $p$ and such that $p \nmid c(A/\mathbb{Q})$.
\item If $p \equiv 1 \; (\text{mod} \; 3)$, then there exists an abelian variety $A/\mathbb{Q}$ of dimension $\frac{p-1}{3}$ with a $\mathbb{Q}$-rational point of order $p$ and such that $p \nmid c(A/\mathbb{Q})$.
\item If $p \equiv 2 \; (\text{mod} \; 3)$ and $p \equiv 1 \; (\text{mod} \; 4)$, then there exists an abelian variety $A/\mathbb{Q}$ of dimension $\frac{p-1}{2}$ with a $\mathbb{Q}$-rational point of order $p$ and such that $p \nmid c(A/\mathbb{Q})$.
\end{enumerate}
\end{theorem}
\begin{remark}
Let $p \geq 5$ be a prime and let $f_p$ be the minimum of all $d>0$ that satisfy the following statement: there exist only finitely many abelian varieties $A/\mathbb{Q}$ of dimension $d$ with a $\mathbb{Q}$-rational point of order $p$ and such that $p \nmid c(A/\mathbb{Q})$. It follows from \cite[Proposition 1.1]{lor} combined with a celebrated theorem of Mazur (see \cite[Theorem (8)]{maz}) on the classification of all the possible rational torsion subgroups of rational elliptic curves that $1 \leq f_p$. Theorem \ref{weilrestrictiontamagawa} shows that $f_p \leq \frac{p^2-1}{2}$.
\end{remark}
On the other hand, keeping the same notation as above, if we let $p$ depend on the dimension of the abelian variety as well as the degree of the base field, then Tamagawa number divisibility by the prime $p$ can be achieved. More precisely, it follows from \cite[Proposition 3.1]{lor} that, given a number field $K$ and an integer $d>0$, there exists a constant $\gamma_{K,d}$ such that if $A/K$ is an abelian variety of dimension $d$ with a $K$-rational point of order $p \geq \gamma_{K,d}$, then $p$ divides $c(A/K)$.
One can also study Tamagawa numbers of elliptic curves defined over function fields. Here we will focus on the case where the elliptic curve is defined over a function field of characteristic $p$ and has a point of order $p$. More specifically, in Section \ref{sectionfunctionfields} we prove the following theorem.
\begin{theorem}
Let $p \geq 5$ be a prime and let $q$ be a power of $p$.
\begin{enumerate}
\item Let $E/\mathbb{F}_q(t)$ be a non-isotrivial elliptic curve with an $\mathbb{F}_q(t)$-rational point of order $p$. Then $p$ divides $c(E/\mathbb{F}_q(t))$.
\item Let $K=\mathbb{F}_q(\mathcal{C})$ be the function field of a smooth, projective, geometrically irreducible curve $\mathcal{C}/\mathbb{F}_q$ and let $E/K$ be a non-isotrivial elliptic curve with a $K$-rational point of order $p$. Then there exists a finite extension $K'/K$ such that $p$ divides $c(E_{K'}/K')$, where $E_{K'}/K'$ is the base change of $E/K$ to $K'$.
\end{enumerate}
\end{theorem}
\section{Proofs of Theorems \ref{theorem11torsionpoint}, \ref{firsttheorem}, and \ref{weilrestrictiontamagawa}}\label{proofsofresults}
\begin{emptyremark}\label{tatealgorithm}
Tate, in \cite{tatealgorithm}, has produced an algorithm that computes the Tamagawa number of an elliptic curve defined over a complete discrete valuation ring. We recall a part of the algorithm that we will use. We refer the reader to \cite[Section IV.9]{silverman2} (or \cite{tatealgorithm}) for more details. Let $R$ be a complete discrete valuation ring with (normalized) valuation $v$, fraction field $K$, and perfect residue field $k$. Let $E/K$ be an elliptic curve given by a Weierstrass equation $$y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6,$$ with $a_i \in R$ for $i = 1,2,3,4,6,$ and such that $v(c_4)=0$ and $v(\Delta) > 0$. Here $\Delta$ is the discriminant and $c_4$ is the $c_4$-invariant of the Weierstrass equation. Since $v( \Delta) >0$, we can make a change of variables so that $v(a_3), v(a_4), v(a_6) >0$. Set $b_2:=a_1^2+4a_2$. Since $v(c_4) = 0$ and $v(a_3), v(a_4), v(a_6) >0$, we obtain that $v(b_2)=0$ (see \cite[Section III.1]{aec} for the standard equation involving $c_4$ and $b_2$). Let $k'$ be the splitting field over $k$ of the polynomial $T^2+a_1T+a_2$. The curve $E/K$ has split multiplicative reduction of type I$_n$ if $v(\Delta)=n$ and $k'=k$. In this case, the Tamagawa number $c_v(E/K)$ of $E/K$ at $v$ is equal to $n$. We will use the following observation repeatedly in this paper.
\underline{Observation:} If $v(a_2), v(a_3), v(a_4), v(a_6) >0,$ and $v(a_1)=0$, then $E/K$ has split multiplicative reduction of type I$_n$, where $v(\Delta)=n$. Moreover, in this case $n = c_v(E/K)$.
\end{emptyremark}
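To see the Observation at work on a single curve over $\mathbb{Q}$ (an illustrative SageMath check; the curve below is the Tate normal form of a curve with a rational point of order $5$, a model which reappears in Section \ref{sectionfunctionfields}), take $d=5$ in $y^2+(1-d)xy-dy=x^3-dx^2$ and look at the prime $5$: then $v_5(a_2),v_5(a_3),v_5(a_4),v_5(a_6)>0$ while $v_5(a_1)=v_5(1-d)=0$, so the Observation predicts split multiplicative reduction of type I$_n$ with $n=v_5(\Delta)$.
\begin{verbatim}
d = 5
E = EllipticCurve([1 - d, -d, -d, 0, 0])
print(E.discriminant().factor())   # v_5(Delta) should equal 5
print(E.local_data(5))             # expected: type I5, Tamagawa number 5
\end{verbatim}
Indeed $\Delta=d^5(d^2-11d-1)$ for this family, and $5\nmid d^2-11d-1$ when $d=5$.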
\begin{proof}[Proof of Theorem \ref{theorem11torsionpoint}]
Let $K$ be a number field and let $E/K$ be an elliptic curve with a $K$-rational point of order $11$. The curve $E/K$ can be given by an equation of the form
\begin{align}\label{11torsiongeneralform}
E(r,s): y^2+(1-c)xy-by=x^3-bx^2,
\end{align}
where $$b=rs(r-1), \quad c=s(r-1),$$
and $r,s$ satisfy $$F_{11}(r,s)\; : \; r^2 - rs^3 + 3rs^2 - 4rs + s=0.$$
The above equation is the raw form of the affine modular curve $Y_1(11)$ (see \cite{sutherland} and \cite{sutherlandrawforms}). \stepcounter{theorem}
Using SAGE \cite{sagemath} we find that the discriminant of $E(r,s)$ is $$\Delta_{E(r,s)}=r^3s^4(r - 1)^5(r^2s^3 - 8r^2s^2 - 2rs^3 + 16r^2s + 5rs^2 + s^3 - 20rs + 3s^2 + 3s + 1).$$
Let $$f(r,s)=r^2s^3 - 8r^2s^2 - 2rs^3 + 16r^2s + 5rs^2 + s^3 - 20rs + 3s^2 + 3s + 1.$$
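The displayed expression for $\Delta_{E(r,s)}$ can be double-checked without any elliptic-curve machinery (the following is only a sketch of such a verification, not the SAGE session used above): evaluate the standard formula $\Delta=-b_2^2b_8-8b_4^3-27b_6^2+9b_2b_4b_6$ at the coefficients of $E(r,s)$ and reduce the difference with the claimed product modulo $F_{11}(r,s)$, which cuts out $Y_1(11)$.
\begin{verbatim}
R.<r, s> = QQ[]
b = r*s*(r - 1); c = s*(r - 1)
a1, a2, a3, a4, a6 = 1 - c, -b, -b, R(0), R(0)
b2 = a1^2 + 4*a2; b4 = 2*a4 + a1*a3; b6 = a3^2 + 4*a6
b8 = a1^2*a6 + 4*a2*a6 - a1*a3*a4 + a2*a3^2 - a4^2
Delta = -b2^2*b8 - 8*b4^3 - 27*b6^2 + 9*b2*b4*b6
F11 = r^2 - r*s^3 + 3*r*s^2 - 4*r*s + s
f = (r^2*s^3 - 8*r^2*s^2 - 2*r*s^3 + 16*r^2*s + 5*r*s^2
     + s^3 - 20*r*s + 3*s^2 + 3*s + 1)
claimed = r^3*s^4*(r - 1)^5*f
# normal form is 0 exactly when Delta - claimed lies in the ideal (F11)
print(R.ideal(F11).reduce(Delta - claimed))
\end{verbatim}
A zero output confirms the quoted factorization on the curve $F_{11}(r,s)=0$.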
If $\mathfrak{p}$ is any prime of $\mathcal{O}_K$, then we denote by $v_{\mathfrak{p}}$ the valuation of $\mathcal{O}_K$ associated to $\mathfrak{p}$.
\begin{proposition}\label{11torsionvaluationofr}
Let $\mathfrak{p}$ be a prime of $\mathcal{O}_K$. If $v_{\mathfrak{p}}(r) \neq 0$, then $E/K$ has split multiplicative reduction modulo $\mathfrak{p}$ and $11 \mid c_{\mathfrak{p}}(E/K)$.
\end{proposition}
\begin{proof}
We split the proof into cases. Suppose first that $v_{\mathfrak{p}}(r)>0$ and $v_{\mathfrak{p}}(s)<0$. This implies that $v_{\mathfrak{p}}(r-1)=0$. By looking at the expression for $F_{11}(r,s)$ and keeping in mind that $F_{11}(r,s)=0$, the minimal valuation among its terms must be attained at least twice. Therefore, we see that $v_{\mathfrak{p}}(s)=v_{\mathfrak{p}}(rs^3)$ and, hence, $v_{\mathfrak{p}}(r)=-2v_{\mathfrak{p}}(s)$. By performing a change of variables of the form $(x,y) \rightarrow{(s^2 x, s^3 y)}$ in Equation $($\ref{11torsiongeneralform}$)$ we obtain the following Weierstrass equation
$$y^2+(\frac{1}{s}-(r-1))xy-\frac{1}{s^2}r(r-1)y=x^3-\frac{1}{s}r(r-1)x^2.$$
By looking at the valuation of each coefficient, using the observation of \ref{tatealgorithm}, we see that the above equation is a minimal Weierstrass equation and moreover that $E/K$ has split multiplicative reduction modulo $\mathfrak{p}$. The discriminant of the new equation is $${\Delta'}_{E(r,s)}=\frac{r^3s^4(r-1)^5f(r,s)}{s^{12}}=\frac{r^3(r-1)^5f(r,s)}{s^8}$$ and since $v_{\mathfrak{p}}(f(r,s))=3v_{\mathfrak{p}}(s)$, we have that
$$v_{\mathfrak{p}}({\Delta'}_{E(r,s)})=3v_{\mathfrak{p}}(r)+3v_{\mathfrak{p}}(s)-8v_{\mathfrak{p}}(s)=3v_{\mathfrak{p}}(r)-5v_{\mathfrak{p}}(s)=-6v_{\mathfrak{p}}(s)-5v_{\mathfrak{p}}(s)=-11v_{\mathfrak{p}}(s).$$
Suppose that $v_{\mathfrak{p}}(r)>0$ and $v_{\mathfrak{p}}(s) \geq 0$. This implies that $v_{\mathfrak{p}}(r-1)=0$. By looking at the valuation of each term in the expression for $F_{11}(r,s)$, we find that $2v_{\mathfrak{p}}(r)=v_{\mathfrak{p}}(s)$. By Equation $($\ref{11torsiongeneralform}$)$, using the observation of \ref{tatealgorithm}, we see that $E/K$ has split multiplicative reduction modulo $\mathfrak{p}$ with
$$v_{\mathfrak{p}}(\Delta_{E(r,s)})=3v_{\mathfrak{p}}(r)+4v_{\mathfrak{p}}(s)=3v_{\mathfrak{p}}(r)+8v_{\mathfrak{p}}(r)=11v_{\mathfrak{p}}(r)$$
Suppose now that $v_{\mathfrak{p}}(r)<0$ and $v_{\mathfrak{p}}(s)<0$. This implies that $v_{\mathfrak{p}}(r-1)=v_{\mathfrak{p}}(r)$. By looking at the expression for $F_{11}(r,s)$ we see that $v_{\mathfrak{p}}(r^2)=v_{\mathfrak{p}}(rs^3)$ and, therefore, $v_{\mathfrak{p}}(r)=3v_{\mathfrak{p}}(s)$. By performing a change of variables of the form $(x,y) \rightarrow{(s^2(r-1)^2 x, s^3(r-1)^3 y)}$ in Equation $($\ref{11torsiongeneralform}$)$ we get a new Weierstrass equation $$y^2+\Big(\frac{1}{s(r-1)}-1\Big)xy-\frac{r}{s^2(r-1)^2}y=x^3-\frac{r}{s(r-1)}x^2.$$
This equation is an integral Weierstrass equation and moreover $E/K$ has split multiplicative reduction modulo $\mathfrak{p}$ by the observation of \ref{tatealgorithm}. The discriminant of the new equation is
$${\Delta'}_{E(r,s)}=\frac{r^3s^4(r-1)^5f(r,s)}{s^{12}(r-1)^{12}}=\frac{r^3f(r,s)}{s^8(r-1)^7}$$ and moreover $v_{\mathfrak{p}}(f(r,s))=2v_{\mathfrak{p}}(r)+3v_{\mathfrak{p}}(s)$. Therefore, we obtain that
$$v_{\mathfrak{p}}({\Delta'}_{E(r,s)})=3v_{\mathfrak{p}}(r)-8v_{\mathfrak{p}}(s)-7v_{\mathfrak{p}}(r)+2v_{\mathfrak{p}}(r)+3v_{\mathfrak{p}}(s)=-2v_{\mathfrak{p}}(r)-5v_{\mathfrak{p}}(s)=-6v_{\mathfrak{p}}(s)-5v_{\mathfrak{p}}(s)=-11v_{\mathfrak{p}}(s).$$
Suppose now that $v_{\mathfrak{p}}(r)<0$ and $v_{\mathfrak{p}}(s) \geq 0$. This case is impossible because by looking at the expression for $F_{11}(r,s)$, we find that $v_{\mathfrak{p}}(F_{11}(r,s))=2v_{\mathfrak{p}}(r)<0$ but $F_{11}(r,s)=0$.
\end{proof}
We are now ready to complete the proof of Theorem \ref{theorem11torsionpoint}. Let $$T= \{ (r,s) \; : \; r,s \in \mathcal{O}_K^\ast \text{ and } F_{11}(r,s)=0 \}.$$
Let $E/K$ be an elliptic curve given by parameters $r$ and $s$. We first show that $11 \mid c(E/K)$ except (possibly) for $(r,s) \in T$. Proposition \ref{11torsionvaluationofr} implies that if $v_{\mathfrak{p}}(r) \neq 0$ for some prime $\mathfrak{p}$, then $11 \mid c_{\mathfrak{p}}(E/K)$. Therefore, if $11 \nmid c(E/K)$, then $r \in \mathcal{O}_K^\ast$. Moreover, since $r \in \mathcal{O}_K^\ast$ and $F_{11}(r,s)=0$, we obtain that $s \in \mathcal{O}_K^\ast$.
Finally, since $F_{11}(r,s)=0$ defines a (geometric) genus 1 affine curve, Siegel's Theorem implies that the set $T$ is finite (see \cite[Theorem 7.3.9]{heightsindiophantinegeometry} or \cite[Remark D.9.2.2]{hindrysilverman}). This proves our theorem.
\end{proof}
\begin{example}\label{example11torsionoverdegree4}
This example shows that there exists a number field $K/\mathbb{Q}$ of degree $4$ and an elliptic curve $E/K$ with a $K$-rational point of order $11$ and such that $c(E/K)=1$. Let $$K:=\mathbb{Q}[a]/(a^4-a^3-3a^2+a+1)$$ and consider the elliptic curve $E/K$ given by
$$y^2+(a^3-3a)xy+(a^2-a)y=x^3+(-a^3+2a^2+a-3)x^2+(-a^2+1)x-a^2+a+1.$$ The curve $E/K$ is the elliptic curve with LMFDB \cite{lmfdb} label 4.4.725.1-109.1-a2, has a $K$-rational point of order $11$, and $c(E/K)=1$.
\end{example}
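The data in this example can be reproduced in a few lines (a hypothetical SageMath session; it only restates the values already quoted above):
\begin{verbatim}
R.<x> = QQ[]
K.<a> = NumberField(x^4 - x^3 - 3*x^2 + x + 1)
E = EllipticCurve(K, [a^3 - 3*a, -a^3 + 2*a^2 + a - 3, a^2 - a,
                      -a^2 + 1, -a^2 + a + 1])
print(E.torsion_order())                                   # expected: 11
print(prod(d.tamagawa_number() for d in E.local_data()))   # expected: 1
\end{verbatim}
The same few lines, with the defining polynomial and the Weierstrass coefficients replaced, also verify Examples \ref{example1Krumm} and \ref{example2Krumm} below.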
\begin{proof}[Proof of Theorem \ref{firsttheorem}]
Let $K/\mathbb{Q}$ be a number field of degree $d$. Merel's Theorem on the boundedness of torsion of elliptic curves over number fields, see \cite[Th\'eorème]{merel1996}, implies that if $E(K)$ contains a point of prime order $p$, then $p < d^{3d^2}$. Moreover, if $p > 11$ is a prime, then the modular curve $X_1(p)$ has genus greater than or equal to $2$. Therefore, Faltings's Theorem implies that for each prime $p$ with $11<p<d^{3d^2}$ there are only finitely many elliptic curves that are defined over $K$ and have a $K$-rational point of order $p$. If $E/K$ has a rational point of order $7$, then \cite[Proposition 2.10]{lor} implies that $7 \mid c(E/K)$ with only finitely many exceptions. Therefore, in order to prove Theorem \ref{firsttheorem} it is enough to consider the case $p=11$. This is exactly Theorem \ref{theorem11torsionpoint}.
\end{proof}
\begin{remark}\label{remarkp235}
As noted in \cite[Remark 2.8]{lor}, in order to produce a number field $K/\mathbb{Q}$ and an infinite number of elliptic curves $E/K$ with a $K$-rational point of order $5$ and such that $5 \nmid c(E/K)$ it is enough to find an infinite number of units $\lambda \in \mathcal{O}_K^*$ such that the order of $\lambda^2-\lambda +1$ at any prime $\mathfrak{p}$ of $\mathcal{O}_K$ is not divisible by $5$. It seems likely that this is possible.
\end{remark}
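One way to probe this expectation numerically (an exploratory sketch, not a proof, and the choice of quadratic field is arbitrary) is to run through powers of a fundamental unit and test whether every prime ideal divides $\lambda^2-\lambda+1$ to an exponent prime to $5$:
\begin{verbatim}
K.<w> = QuadraticField(2)
u0 = K.units()[0]                      # a fundamental unit of O_K
for k in range(1, 16):
    lam = u0^k
    F = K.ideal(lam^2 - lam + 1).factor()
    print(k, all(e % 5 != 0 for _, e in F))
\end{verbatim}
Each unit passing the test satisfies the criterion of \cite[Remark 2.8]{lor} quoted above.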
\begin{proof}[Proof of Theorem \ref{weilrestrictiontamagawa}]
Let $p \geq 5$ be a prime.
\textit{Proof of $(i)$:} The degree of the map $\pi : X_1(p) \rightarrow X_1(1) \cong \mathbb{P}^1$ coming from the $j$-invariant is $\frac{p^2-1}{2}$ (see \cite[Page 66]{diamondshurman}). For every $n \in \mathbb{Z}$, let $E_n/L_n$ be a closed point in the fiber of $\pi$ over $n$, i.e., the curve $E_n/L_n$ is defined over a number field $L_n/\mathbb{Q}$ of degree at most $\frac{p^2-1}{2}$ and $j(E_n)=n$. Let $A_n/\mathbb{Q}$ be the Weil restriction of $E_{n}/L_n$ to $\mathbb{Q}$ (see \cite[Section 7.6]{neronmodelsbook} for the basics of Weil restriction). Since $j(E_n)=n \in \mathbb{Z}$, the curve $E_{n}/L_n$ has everywhere potentially good reduction and, hence, $p \nmid c(E_{n}/L_{n})$ because $p \geq 5$. It follows from \cite[Proposition 3.19]{lor} that $c(A_n/\mathbb{Q})=c(E_{n}/L_n)$. Therefore, for every $n$ the abelian variety $A_n/\mathbb{Q}$ has a $\mathbb{Q}$-rational point of order $p$, has dimension at most $\frac{p^2-1}{2}$, and $p \nmid c(A_n/\mathbb{Q})$. Thus part $(i)$ is proved.
\textit{Proof of $(ii)$:} Since $p \equiv 1 \; (\text{mod} \; 3)$, Part $(a)$ of \cite[Theorem 1]{clarkcookstankewicz} implies that there exist a field extension $K/\mathbb{Q}$ of degree $\frac{p-1}{3}$ and a CM elliptic curve $E/K$ with a $K$-rational point of order $p$. Since $p \geq 5$ and $E/K$ has potentially good reduction, we find that $p \nmid c(E/K)$. Let $A/\mathbb{Q}$ be the Weil restriction of $E/K$ to $\mathbb{Q}$. It follows from \cite[Proposition 3.19]{lor} that $c(A/\mathbb{Q})=c(E/K)$. Therefore, the abelian variety $A/\mathbb{Q}$ has a $\mathbb{Q}$-rational point of order $p$, has dimension $\frac{p-1}{3}$, and $p \nmid c(A/\mathbb{Q})$. This proves part $(ii)$.
\textit{Proof of $(iii)$:} The proof is similar to the proof of part $(ii)$. Since $p \equiv 2 \; (\text{mod} \; 3)$ and $p \equiv 1 \; (\text{mod} \; 4)$, Part $(b)$ of \cite[Theorem 1]{clarkcookstankewicz} implies that there exist a field extension $K/\mathbb{Q}$ of degree $\frac{p-1}{2}$ and a CM elliptic curve $E/K$ with a $K$-rational point of order $p$. Since $p \geq 5$ and $E/K$ has potentially good reduction, we find that $p \nmid c(E/K)$. Let $A/\mathbb{Q}$ be the Weil restriction of $E/K$ to $\mathbb{Q}$. It follows from \cite[Proposition 3.19]{lor} that $c(A/\mathbb{Q})=c(E/K)$. Therefore, the abelian variety $A/\mathbb{Q}$ has a $\mathbb{Q}$-rational point of order $p$, has dimension $\frac{p-1}{2}$, and $p \nmid c(A/\mathbb{Q})$. This concludes the proof of our theorem.
\end{proof}
We conclude this section with two examples of Krumm \cite{Krummthesis} which show that there exist elliptic curves $E/K$ defined over cubic (resp. quartic) number fields $K/\mathbb{Q}$ with a $K$-rational point of order $13$ (resp. $17$) and such that $c(E/K)=1$.
\begin{example}\label{example1Krumm} (\cite[Example 5.4.4]{Krummthesis})
Let $K:=\mathbb{Q}[t]/(t^3+2t^2-t-1)$ and let $E/K$ be the elliptic curve given by the Weierstrass equation $$y^2+(-2t^2+2)xy+(-9t^2+2t+4)y=x^3+(-9t^2+2t+4)x^2. $$
Then $K/\mathbb{Q}$ is a cubic number field, $E(K)_{\mathrm{tors}} \cong \mathbb{Z}/13\mathbb{Z}$, and $c(E/K)=1$.
\end{example}
\begin{example}\label{example2Krumm} (\cite[Example 5.5.2]{Krummthesis})
Let $K:=\mathbb{Q}[t]/(t^4-t^3-3t^2+t+1)$ and let $E/K$ be the elliptic curve given by the Weierstrass equation $$y^2+(-6t^3-7t^2+4t+4)xy+(-155t^3-170t^2+109t+74)y=x^3+(-155t^3-170t^2+109t+74)x^2. $$
Then $K/\mathbb{Q}$ is a quartic number field, $E(K)_{\mathrm{tors}} \cong \mathbb{Z}/17\mathbb{Z}$, and $c(E/K)=1$.
\end{example}
\section{Elliptic curves over function fields}\label{sectionfunctionfields}
For this section, we let $k$ be a finite field of characteristic $p>0$. We first recall a few definitions. An elliptic curve $E/k(t)$ is called {\it constant} if there exists an elliptic curve $E_0/k$ such that $E \cong E_0 \times_k k(t)$. An elliptic curve $E/k(t)$ is called {\it isotrivial} if there exists a finite extension $K'/k(t)$ such that the base change $E_{K'}/K'$ of $E/k(t)$ to $K'$ is constant. Finally, an elliptic curve $E/k(t)$ will be called {\it non-isotrivial} if it is not isotrivial.
\begin{proposition}\label{theoremk(t)}
Let $k$ be a finite field of characteristic $p \geq 5$, and let $E/k(t)$ be a non-isotrivial elliptic curve with a $k(t)$-rational point of order $p$. Then $p \mid c(E/k(t))$.
\end{proposition}
\begin{proof} If $k$ is a finite field of characteristic $p$ and there exists an elliptic curve $E/k(t)$ with a point of order $p$, then $p=5,7$, or $11$ (see \cite[Corollary 1.8]{mcdonaldtorsionongenus0}).
Assume that $p=5$ and let $E/k(t)$ be a non-isotrivial elliptic curve with a $k(t)$-rational point of order $5$. The curve $E/k(t)$ can be given by an equation of the form
\begin{align*}
y^2+(1-f)xy-fy=x^3-fx^2,
\end{align*}
for some non-constant $f \in k(t)$ (see \cite[Table 2]{mcdonaldtorsionongenus0}). The discriminant of this equation is $$\Delta=f^5(f^2-11f-1).$$ Since $f$ is non-constant there exists a valuation $v$ of $k[t]$ such that $v(f)>0$. Using the observation of \ref{tatealgorithm} we see that $E/k(t)$ has split multiplicative reduction modulo $v$ with $5 \mid c_v(E/k(t))$ and, hence, $5 \mid c(E/k(t))$.
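The discriminant identity used in this case is easy to confirm directly in characteristic $5$ (a one-off symbolic sketch; the analogous check applies to the models used below for $p=7$ and $p=11$):
\begin{verbatim}
K.<f> = FunctionField(GF(5))
E = EllipticCurve(K, [1 - f, -f, -f, 0, 0])
print(E.discriminant() == f^5*(f^2 - 11*f - 1))   # expected: True
\end{verbatim}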
Assume that $p=7$ and let $E/k(t)$ be a non-isotrivial elliptic curve with a $k(t)$-rational point of order $7$. The curve $E/k(t)$ can be given by an equation of the form
\begin{align*}
y^2+(1-a)xy-by=x^3-bx^2,
\end{align*}
with $a=f^2-f$ and $b=f^3-f^2$ for some non-constant $f \in k(t)$ (see \cite[Table 2]{mcdonaldtorsionongenus0}). The discriminant of this equation is $$\Delta=f^7(f-1)^7(f^3-8f^2+5f+1).$$ Since $f$ is non-constant there exists a valuation $v$ of $k[t]$ such that $v(f)>0$. Using \ref{tatealgorithm} we see that $E/k(t)$ has split multiplicative reduction modulo $v$ with $7 \mid c_v(E/k(t))$ and, hence, $7 \mid c(E/k(t))$.
Assume that $p=11$ and let $E/k(t)$ be a non-isotrivial elliptic curve with a $k(t)$-rational point of order $11$. The curve $E/k(t)$ can be given by an equation of the form
\begin{align*}
y^2+(1-a)xy-by=x^3-bx^2,
\end{align*}
with $$a=\frac{(f+3)(f+5)^2(f+9)^2}{3(f+1)(f+4)^4} \quad \text{and} \quad b=a\frac{(f+1)^2(f+9)}{2(f+4)^3},$$ for some non-constant $f \in k(t)$ (see \cite[Table 14]{mcdonaldtorsionongenus0}). The discriminant of this equation is $$\Delta=\frac{2f^2(f+3)^{11}(f+5)^{11}(f+9)^{11}}{(f+4)^{37}(f+1)}.$$
If $v$ is a valuation of $k(t)$ with $v(f+3)>0$, $v(f+5)>0$, or $v(f+9)>0$, then $v(a)>0$ and $v(b)>0$. Therefore, using \ref{tatealgorithm} we see from the Weierstrass equation of $E/k(t)$ that $E/k(t)$ has split multiplicative reduction at $v$ and moreover $11 \mid v(\Delta)$. This implies that $11 \mid c_v(E/k(t))$. Since $f$ is non-constant, there exist valuations $v_1$, $v_2$, and $v_3$ of $k[t]$ such that $v_1(f+3)>0$, $v_2(f+5)>0$, and $v_3(f+9)>0$. Therefore, since $11 \mid c_{v_i}(E/k(t))$ for $i \in \{1,2,3\}$, we find that $11^3 \mid c(E/k(t))$.
\end{proof}
\begin{proposition}\label{splitfunctionfieldtamagawa}
Let $K=\mathbb{F}_q(\mathcal{C})$ be the function field of a smooth, projective, and geometrically irreducible curve $\mathcal{C}/\mathbb{F}_q$, where $q$ is a power of a prime $p \geq 5$. Let $E/K$ be a non-isotrivial elliptic curve with a $K$-rational point of order $p$. If $E/K$ has a place of split multiplicative reduction, then $p \mid c(E/K)$.
\end{proposition}
\begin{proof}
Let $v$ be a place of $K$ such that $E/K$ has split multiplicative reduction modulo $v$. It is enough to show that $p \mid v(\Delta)$, where $\Delta$ is the discriminant of a minimal Weierstrass equation for $E/K$. Since the $j$-invariant $j(E)$ of $E/K$ is equal to $\frac{c_4^3}{\Delta}$ and $v(c_4)=0$, to show that $p \mid c_v(E/K)$, it is enough to show that $p \mid v(j(E))$. The latter follows from the following proposition.
\end{proof}
\begin{proposition}(see \cite[Proposition 7.3]{ulmer2011park})
Let $K=\mathbb{F}_q(\mathcal{C})$ be the function field of a smooth, projective, and geometrically irreducible curve $\mathcal{C}/\mathbb{F}_q$, where $q$ is a power of $p$. Let $E/K$ be a non-isotrivial elliptic curve defined over $K$. Then $E/K$ has a $K$-rational point of order $p$ if and only if $j(E) \in K^p$ and $A(E,\omega)$ is a $(p-1)$-st power in $K^{\times}$, where $A(E,\omega)$ is the Hasse invariant of $E/K$.
\end{proposition}
\begin{proposition}\label{basechange}
Let $K=\mathbb{F}_q(\mathcal{C})$ be the function field of a smooth, projective, and geometrically irreducible curve $\mathcal{C}/\mathbb{F}_q$, where $q$ is a power of $p$. Let $E/K$ be a non-isotrivial elliptic curve with a $K$-rational point of order $p$. Then there exists a finite separable extension $K'/K$ such that $p \mid c(E_{K'}/K')$, where $E_{K'}/K'$ is the base change of $E/K$ to $K'$.
\end{proposition}
\begin{proof}
Since $j(E)$ is non-constant and $K$ is the function field of a smooth, projective, and geometrically irreducible curve, there exists a place $v$ of $K$ such that $v(j(E))<0$ (see \cite[Corollary 1.1.20]{stichtenothbook}). Therefore, the curve $E/K$ has potentially multiplicative reduction at $v$. By the semi-stable reduction theorem for elliptic curves there exists a finite extension $L/K$ such that the base change $E_L/L$ of $E/K$ to $L$ has semi-stable reduction. After a further finite extension $K'/L$ if necessary (so that the slopes of the tangent lines at the node of the reduced curve are defined over the residue field) we can assume that the base change $E_{K'}/K'$ has split multiplicative reduction modulo a place above $v$. Using Proposition \ref{splitfunctionfieldtamagawa} we find that $p \mid c(E_{K'}/K')$. This proves our proposition.
\end{proof}
\end{document}
\begin{document}
\title{Ample tangent bundle on smooth projective stacks}
\author{Karim El Haloui}
\email{[email protected]}
\address{Dept. of Maths, University of Warwick, Coventry,
CV4 7AL, UK}
\date{November 6, 2016}
\subjclass[2010]{Primary 14A20; Secondary 18E40}
\begin{abstract}
We study ample vector bundles on smooth projective stacks. In particular, we prove that the tangent bundle of the weighted projective stack ${\mathbb P}(a_0,...,a_n)$ is ample. A result of Mori shows that the only smooth projective varieties with ample tangent bundle are those isomorphic to ${\mathbb P}^N$ for some $N$. Extending our geometric spaces from varieties to projective stacks, we are able to provide a new example. This leaves open the question of whether the only smooth projective stacks with ample tangent bundle are the weighted projective stacks.
\end{abstract}
\maketitle
A conjecture of Hartshorne says that the only $n$-dimensional irreducible smooth projective variety whose tangent bundle is ample is ${\mathbb P}^n$. This was proved for all dimensions and for any characteristic of the base field by Mori \cite{Mor}.
The goal of this paper is to extend the definition of ample vector bundles to smooth projective stacks and provide a new example in this context. We prove that for any positive weights $a_0,...,a_n$, the tangent bundle of the weighted projective stack ${\mathbb P}(a_0,...,a_n)$ is ample.
The next step would be to show whether the only smooth projective stacks with ample tangent bundle are the weighted projective stacks. We leave this as a conjecture in this article.
In section 1 we recall some general observations about quotient categories.
In section 2 we establish a technical framework for working with ample bundles
on smooth projective stacks.
In section 3 we use this framework to prove that the tangent bundle of any weighted projective stack is ample.
\section{Properties of the quotient category}
Let ${\mathbb K}$ be a field. All our modules are graded modules and homomorphisms are graded module homomorphisms.
\subsection{Quotient stack}
Let $Y$ be a smooth algebraic variety with an action of an algebraic group $G$. ${\mathcal O}_{[X]}$-modules on the quotient stack $[X]=[Y/G]$ can be understood in terms of $G$-equivariant ${\mathcal O}_Y$-modules \cite{EHR}.
There are different notions of a projective stack, for instance,
a stack whose coarse moduli space is a projective variety.
Here we use a more restrictive notion:
a projective stack is a smooth closed substack of a weighted
projective stack \cite{Zho}. Let us spell it out.
Let $V=\oplus_{k=1}^m V_k$ be a positively graded
$(n+1)$-dimensional ${\mathbb K}$-vector space. Naturally
we treat it as a ${\mathbb G}_m$-module with positive weights by
$\lambda \bullet \ensuremath{\mathbf{v}}\xspace_k = \lambda^k \ensuremath{\mathbf{v}}\xspace_k$ where $\ensuremath{\mathbf{v}}\xspace_k \in V_k$.
Let $Y$ be a smooth closed ${\mathbb G}_m$-invariant subvariety of $V\setminus
\{0\}$. We define {\em a projective stack} as the stack $[X]=[Y/{\mathbb G}_m]$.
The G.I.T.-quotient $X=Y//{\mathbb G}_m$ is the coarse moduli space of $[X]$.
\subsection{Coherent sheaves on projective stacks}
Let us describe the category ${\mathcal O}_{[X]}{{\mbox{\rm --\,Qcoh}}}$ of quasicoherent sheaves on $[X]$.
Choose a homogeneous basis $\ensuremath{\mathbf{e}}\xspace_i$ on $V$
with $\ensuremath{\mathbf{e}}\xspace_i\in V_{d_i}$,
$i=0,1, \ldots, n$.
Let $\ensuremath{\mathbf{x}}\xspace_i \in V^\ast$ be the dual basis. Then
${\mathbb K}[V] = {\mathbb K} [\ensuremath{\mathbf{x}}\xspace_0,...,\ensuremath{\mathbf{x}}\xspace_n ]$ possesses a natural grading
with $\deg(\ensuremath{\mathbf{x}}\xspace_{i})=d_{i}$.
Let $I$ be the defining ideal of $\overline{Y}$. Since $Y$ is ${\mathbb G}_m$-invariant,
the ideal $I$ and the ring
$$
A:={\mathbb K}[Y]= {\mathbb K}[\overline{Y}] = {\mathbb K} [\ensuremath{\mathbf{x}}\xspace_0,...,\ensuremath{\mathbf{x}}\xspace_n ]/I
$$
are graded.
Both $X$ and $[X]$ can be thought of as the projective spectrum of
$A$.
The scheme $X$ is naturally isomorphic to the scheme theoretic
$ {{\mbox{\rm Proj}}}\, A$.
The stack $[X]$ is the Artin-Zhang projective spectrum
$ {{\mbox{\rm Proj}}}_{AZ} A$ \cite{AKO},
i.e. its category of quasicoherent sheaves
${\mathcal O}_{[X]}{{\mbox{\rm --\,Qcoh}}}$ is equivalent to the quotient category
$A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}}$ where
$A{{\mbox{\rm --\,GrMod}}}$ is the category of
${\mathbb Z}$-graded $A$-modules,
$A{{\mbox{\rm --\,Tors}}}$ is its full subcategory of torsion modules (we
identify the objects of ${\mathcal O}_{[X]}{{\mbox{\rm --\,Qcoh}}}$ and $A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}}$).
Recall that
$$
\tau (M) = \{\ensuremath{\mathbf{m}}\xspace \in M \,\mid\, \exists N \; \forall k>N \; A_k\ensuremath{\mathbf{m}}\xspace=0\}
$$
is {\em the torsion submodule of} $M$. $M$ is said to be {\em torsion}
if $\tau (M)=M$. It can be seen as well that the torsion submodule of $M$ is the sum of all the finite dimensional submodules of $M$ since $A$ is connected.
Denote by
$$
\pi_A:A{{\mbox{\rm --\,GrMod}}} \rightarrow A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}}
$$
the quotient functor. Since $A{{\mbox{\rm --\,GrMod}}}$ has enough injectives and $A{{\mbox{\rm --\,Tors}}}$ is dense, there exists a section functor
$$
\omega_A:A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}} \rightarrow A{{\mbox{\rm --\,GrMod}}}
$$ which is right adjoint to $\pi_A$ in the sense that
$$
{{\mbox{\rm Hom}}}_{A\mbox{\tiny {{\mbox{\rm --\,GrMod}}}}}(N,\omega_A(\mathcal{M}))\cong{{\mbox{\rm Hom}}}_{A\mbox{\tiny{{\mbox{\rm --\,GrMod}}}}/A\mbox{\tiny{{\mbox{\rm --\,Tors}}}}}(\pi_A(N),\mathcal{M}).
$$
Recall as well that $\pi_A$ is exact and that $\omega_A$ is left exact.
We call $\omega_A\pi_A(M)$ the {\em $A$-saturation} of $M$. We say that a module is {\em $A$-saturated} if it is isomorphic to the saturation of a module. It can be easily seen that a saturated module is the saturation of itself and is torsion-free (from the adjunction). If $M$ and $N$ are $A$-saturated, then being isomorphic in $A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}}$ is equivalent to being isomorphic in $A{{\mbox{\rm --\,GrMod}}}$.
The ${\mathcal O}_{[X]}$-module ${\mathcal O}_{[X]}(k)$ is defined
as ${{\mbox{\rm Sat}}}(A[k])$ where $A[k]$ is the shifted regular module and the grading is given by $A[k]_m = A_{k+m}$.
In particular, $A[k]$ is $A$-saturated if $A$ is a polynomial ring in more than two variables \cite{AZ}. A well-known example of a ring which is not $A$-saturated is the polynomial ring in one variable $A={\mathbb K}[x]$, whose $A$-saturation is given by ${\mathbb K}[x,x^{-1}]$.
\subsection{Tensor product}
Let $M$ and $N$ be two $A$-modules, then $M\otimes_A N$ possesses a natural $A$-module structure. We want to induce on the quotient category $A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}}$ a structure of a symmetric monoidal category. Consider the full subcategory of $A$-saturated modules $A{{\mbox{\rm --\,Sat}}}$. The essential image of the section functor $\omega_A$ consists precisely of the $A$-saturated modules. Thus $$\omega_A: A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}} \rightarrow A{{\mbox{\rm --\,GrMod}}}$$ becomes full and faithful onto its image. Indeed
$$
{{\mbox{\rm Hom}}}_{A\mbox{\tiny{{\mbox{\rm --\,GrMod}}}}/A\mbox{\tiny{{\mbox{\rm --\,Tors}}}}}(\pi_A(M),\pi_A(N)) \cong {{\mbox{\rm Hom}}}_{A\mbox{\tiny{{\mbox{\rm --\,GrMod}}}}}(M,N)
$$
if $N$ is $A$-saturated. So now, we identify the quotient category with its image and call its objects sheaves, which we denote by curly letters. If $N$ is a finitely generated $A$-module, then ${\mathcal N}=\pi_A(N)$ is said to be a \emph{coherent} ${\mathcal O}_{[X]}$-module. The definition makes sense since the $A$-saturation of a finitely generated $A$-module $N$ is finitely generated. Indeed, it was proved in \cite{AZ} that for any graded $A$-module $N$ we have an exact sequence
$$
0 \rightarrow \tau_A(N) \rightarrow N \rightarrow \omega_A\pi_A(N) \rightarrow R^{1}\tau_A(N) \rightarrow 0,
$$
where $\tau_A(N)$ and $R^{1}\tau_A(N)$ are torsion $A$-modules. Since the localisation functor is exact and the localisation of a torsion module is zero, the localisation of the saturation of $N$ is isomorphic to the localisation of $N$, which in turn is finitely generated. But being finitely generated is a local property, thus the saturation of $N$ is finitely generated.
From general localisation theory \cite{Ga}, $A{{\mbox{\rm --\,Sat}}}$ is an abelian category (it is actually a Grothendieck category) but it is not an abelian subcategory of $A{{\mbox{\rm --\,GrMod}}}$
\footnote{A full subcategory of an abelian category need not be an abelian subcategory}. The kernels in both categories are the same but the cokernel of a map of saturated $A$-modules isn't necessarily saturated.
The saturation functor ${{\mbox{\rm Sat}}}:A{{\mbox{\rm --\,GrMod}}} \rightarrow A{{\mbox{\rm --\,Sat}}}$ is exact and its right adjoint, namely the inclusion functor, is left exact. Moreover it preserves finite direct sums as does any additive functor in any additive category.
Let $A$ be the coordinate ring of some projective stack $[X]$. The graded global section functor $\Gamma_*:{{\mbox{\rm Qcoh}}}([X]) \rightarrow A{{\mbox{\rm --\,GrMod}}}$ and the sheafification functor ${{\mbox{\rm Sh}}}:A{{\mbox{\rm --\,GrMod}}} \rightarrow {{\mbox{\rm Qcoh}}}([X])$ induce the following equivalence of categories:
$$
A{{\mbox{\rm --\,Sat}}} \cong {{\mbox{\rm Qcoh}}}([X]).
$$
There exists a symmetric monoidal structure in the category of quasicoherent sheaves on $[X]$ denoted by ${{\mbox{\rm Qcoh}}}([X])$. The latter category is equivalent to $A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}}$ and hence to $A{{\mbox{\rm --\,Sat}}}$. Note that ${{\mbox{\rm Sh}}}({M}) \otimes {{\mbox{\rm Sh}}}({N}) \cong {{\mbox{\rm Sh}}}({M \otimes_A N})$ in ${{\mbox{\rm Qcoh}}}([X])$. So,
\begin{align*}
\Gamma_*({{\mbox{\rm Sh}}}({M}) \otimes {{\mbox{\rm Sh}}}({{N}}))
&\cong \Gamma_*({{\mbox{\rm Sh}}}{(M \otimes_A N})) \\
&\cong {{\mbox{\rm Sat}}}(M \otimes_A N)
\end{align*}
where all the isomorphisms are natural (to preserve the symmetric monoidal structure).
We can now define a tensor product: take ${\mathcal M}$ and ${\mathcal N}$ in $A{{\mbox{\rm --\,Sat}}}$ and let
$$
{\mathcal M} \otimes {\mathcal N} := {{\mbox{\rm Sat}}}(M\otimes_A N),
$$
where as objects ${\mathcal M}={{\mbox{\rm Sat}}}(M)$ and ${\mathcal N}={{\mbox{\rm Sat}}}(N)$.
Since ${{\mbox{\rm Sat}}}$ and the tensor product of graded modules are right-exact, so is the tensor product defined on $A{{\mbox{\rm --\,Sat}}}$.
\section{Ample vector bundles}
We want to define a notion of vector bundles of finite rank, which we shall equivalently call locally free sheaves of finite rank, purely in cohomological terms.
We have an \emph{internal} Hom on $A{{\mbox{\rm --\,Sat}}}$, defined as follows:
$$
\underline{{{\mbox{\rm Hom}}}}_{\mbox{\tiny A{{\mbox{\rm --\,Sat}}}}}({\mathcal M},{\mathcal N})= \bigoplus_{k \in {\mathbb Z}} {{\mbox{\rm Hom}}}_{\mbox{\tiny A{{\mbox{\rm --\,Sat}}}}}({\mathcal M},{\mathcal N}[k])
$$
where as objects ${\mathcal N}[k]={{\mbox{\rm Sat}}}(N[k])$ (saturation is preserved under shifts).
The injective objects in $A{{\mbox{\rm --\,Sat}}}$ are the injective torsion-free $A$-modules in $A{{\mbox{\rm --\,GrMod}}}$ (they are all saturated) and from standard localisation theory $A{{\mbox{\rm --\,Sat}}}$ has enough injectives \cite{Ga}. Moreover an injective object in $A{{\mbox{\rm --\,GrMod}}}$ can be decomposed as a direct sum of an injective torsion-free $A$-module and an injective torsion $A$-module, determined up to isomorphism \cite{AZ}. So the injective resolution of an $A$-module $N$, say $E^\bullet(N)$, is equal to $Q^\bullet(N) \oplus I^\bullet(N)$ where $Q^\bullet(N)$ is the saturated torsion-free part and $I^\bullet(N)$ the torsion part. Suppose from now on that $M$ is a finitely generated graded $A$-module; then:
\begin{align*}
{{\mbox{\rm Ext}}}^i_{\mbox{\tiny A{{\mbox{\rm --\,Sat}}}}}({\mathcal M},{\mathcal N}) &= R^i{{\mbox{\rm Hom}}}_{\mbox{\tiny A{{\mbox{\rm --\,Sat}}}}}({\mathcal M},\_)({\mathcal N}) \\
&\cong {{\mbox{\rm h}}}^i({{\mbox{\rm Hom}}}_{\mbox{\tiny A{{\mbox{\rm --\,GrMod}}}}} (M,Q^\bullet(N)))
\end{align*}
Graded Ext is defined as follows:
\begin{align*}
\underline{{{\mbox{\rm Ext}}}}^i_{\mbox{\tiny A{{\mbox{\rm --\,Sat}}}}}({\mathcal M},{\mathcal N}) &= \bigoplus_{k \in {\mathbb Z}} {{\mbox{\rm Ext}}}^i_{\mbox{\tiny A{{\mbox{\rm --\,Sat}}}}}({\mathcal M},{\mathcal N}[k]) \\
&\cong {{\mbox{\rm h}}}^i(\underline{{{\mbox{\rm Hom}}}}_{\mbox{\tiny A{{\mbox{\rm --\,GrMod}}}}}(M,Q^\bullet(N)))
\end{align*}
which is the $i$th right derived functor of $\underline{{{\mbox{\rm Hom}}}}$.
We can endow the graded Ext in $A{{\mbox{\rm --\,Sat}}}$ with the structure of a graded $A$-module and define the sheafified version of graded Ext as follows
$$
{\mathcal E}xt^i({\mathcal M},{\mathcal N}) := {{\mbox{\rm Sat}}}(\underline{{{\mbox{\rm Ext}}}}^i_{\mbox{\tiny A{{\mbox{\rm --\,Sat}}}}}({\mathcal M},{\mathcal N}))
$$
where ${\mathcal M}$ and ${\mathcal N}$ are objects in $A{{\mbox{\rm --\,Sat}}}$ (and, as objects in $A{{\mbox{\rm --\,GrMod}}}$, $\pi_A(M)={\mathcal M}$ with $M$ finitely generated).
Let $X$ be a smooth projective variety and ${\mathcal E}$ a vector bundle (of finite rank). Equivalently, ${\mathcal E}$ is a locally free sheaf, which amounts to asking that for all $x\in X$ the stalk ${\mathcal E}_x$ is a free module of finite rank over the regular local ring ${\mathcal O}_x$. But ${\mathcal E}_x$ is a free module if and only if ${{\mbox{\rm Ext}}}^i_{{\mathcal O}_x}({\mathcal E}_x,{\mathcal O}_x)=0$ for all $i>0$. Since ${{\mbox{\rm Ext}}}^i_{{\mathcal O}_x}({\mathcal E}_x,{\mathcal O}_x)\cong \mathcal{E}xt^i_{{\mathcal O}}({\mathcal E},{\mathcal O})_x$ for all $x\in X$, ${\mathcal E}$ is a vector bundle if and only if $\mathcal{E}xt^i_{{\mathcal O}}({\mathcal E},{\mathcal O})=0$ for all $i>0$. This justifies the next definition.
\begin{defn}
Let ${\mathcal M}$ be a coherent sheaf. ${\mathcal M}$ is a {\bf vector bundle} or a {\bf locally free sheaf} if
$$
{\mathcal E}xt^i({\mathcal M},{\mathcal O})=0
$$
for all $i>0$, where ${\mathcal O}:={{\mbox{\rm Sat}}}(A)$.
\end{defn}
For example, if $[X]$ is a weighted projective stack of dimension greater than 2 then $A$ is a graded polynomial ring in more than 2 variables. In this case, it is known that ${\mathcal O}=A$ \cite{AZ}. Since ${\mathcal O}(k)$ is projective, ${\mathcal O}(k)$ is locally free for all $k$.
\begin{defn}
A sheaf ${\mathcal M}$ is said to be {\bf weighted globally generated} if there exists an epimorphism
$$
\bigoplus_{j = 0}^{l-1} {\mathcal O}(j)^{\oplus s_j} \rightarrow {\mathcal M} \rightarrow 0
$$
for some non-negative $s_j$'s with $l={{\mbox{\rm lcm}}}(d_0,...,d_n)$.
\end{defn}
In the case where all the weights are one, the least common multiple is equal to one and we recover the definition of globally generated sheaves adopted for projective varieties.
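To make the definition concrete in a genuinely weighted situation (this small example is included only as an illustration), consider the stack ${\mathbb P}(1,1,2)$, so that $l={{\mbox{\rm lcm}}}(1,1,2)=2$. The twist ${\mathcal O}(3)$ is weighted globally generated: writing $3=1\cdot l+1$, the map
$$
{\mathcal O}(1)^{\oplus 3} \rightarrow {\mathcal O}(3), \qquad (u_0,u_1,u_2)\mapsto x_0^2u_0+x_1^2u_1+x_2u_2,
$$
is an epimorphism in $A{{\mbox{\rm --\,Sat}}}$; this is the special case $k=3$, $a=1$, $r=1$ of item (3) of the next proposition.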
\begin{prop}
\begin{enumerate}
\item Any quotient of a weighted globally generated sheaf is weighted globally generated.
\item The direct sum of two weighted globally generated sheaves is weighted globally generated.
\item For all $k\geqslant 0$, ${\mathcal O}(k)$ is weighted globally generated.
\item The tensor product of two weighted globally generated sheaves is weighted globally generated.
\end{enumerate}
\end{prop}
{\mathfrak b}egin{proof}
{\mathfrak b}egin{enumerate}
\item It follows from the definition and the fact that the composition of two epimorphisms is an epimorphism.
\item This follows immediately by definition.
\item By the division algorithm, we know that $k=al+r$ for some non-negative integer $a$ and $0 \leqslant r < l$. We claim that the following map
$$
{{\mathfrak m}athcal O}(r)^{\oplus (n+1)} \rightarrow {{\mathfrak m}athcal O}(k)
$$
induced by $(0,...,1_j,...,0) {\mathfrak m}apsto x_j^{ \frac{al}{d_j}}$ is an epimorphism in $A{{\mbox{\rm --\,Sat}}}$.
To prove our claim we need to show that the cokernel of the map in $A{{\mbox{\rm --\,GrMod}}}$ is torsion. Take a homogeneous element $f\in A(k)$ and let $N={\mathfrak m}ax\left\lbrace \dfrac{al}{d_j}, j\in\{0,...,n\} \right\rbrace $. Suppose that $h\in A$ is an homogeneous element of degree greater than $N$. So it can be written as $h'x_j^{ \frac{al}{d_j}}$ for some $j\in\{0,...,n\}$. It follows that
$hf$ is in the image of the map. Hence its cokernel is zero.
\item Suppose that ${{\mathfrak m}athcal M}_1$ and ${{\mathfrak m}athcal M}_2$ are weighted globally generated. Then we know that
$$
{\mathfrak b}igoplus_{j = 0}^{l-1} {{\mathfrak m}athcal O}(j)^{\oplus s_j^1} \rightarrow {{\mathfrak m}athcal M}_1 \rightarrow 0 \:\:\:(1)
$$
and
$$
{\mathfrak b}igoplus_{k = 0}^{l-1} {{\mathfrak m}athcal O}(k)^{\oplus s_k^2} \rightarrow {{\mathfrak m}athcal M}_2 \rightarrow 0 \:\:\:(2)
$$
Tensoring (2) by ${\mathfrak b}igoplus_{j = 0}^{l-1} {{\mathfrak m}athcal O}(j)^{\oplus s_j^1}$ on the left and (1) by ${{\mathfrak m}athcal M}_2$ on the right, it follows that
$$
{\mathfrak b}igoplus_{j = 0}^{l-1} {{\mathfrak m}athcal O}(j)^{\oplus s_j^1} \otimes {\mathfrak b}igoplus_{k = 0}^{l-1} {{\mathfrak m}athcal O}(k)^{\oplus s_k^2} \rightarrow {\mathfrak b}igoplus_{j = 0}^{l-1} {{\mathfrak m}athcal O}(j)^{\oplus s_j^1} \otimes {{\mathfrak m}athcal M}_2 \rightarrow 0.
$$
and
$$
{\mathfrak b}igoplus_{j = 0}^{l-1} {{\mathfrak m}athcal O}(j)^{\oplus s_j^1} \otimes {{\mathfrak m}athcal M}_2 \rightarrow {{\mathfrak m}athcal M}_1 \otimes {{\mathfrak m}athcal M}_2 \rightarrow 0
$$
Since the composition of epimorphisms is an epimorphism, we get
$$
{\mathfrak b}igoplus_{j = 0}^{l-1} {{\mathfrak m}athcal O}(j)^{\oplus s_j^1} \otimes {\mathfrak b}igoplus_{k = 0}^{l-1} {{\mathfrak m}athcal O}(k)^{\oplus s_k^2} \rightarrow {{\mathfrak m}athcal M}_1 \otimes {{\mathfrak m}athcal M}_2 \rightarrow 0.
$$
Therefore,
$$
{\mathfrak b}igoplus_{0 \leqslant j+k \leqslant 2(l-1)} {{\mathfrak m}athcal O}(j+k)^{\oplus (s_j^1+s_k^2)} \rightarrow {{\mathfrak m}athcal M}_1 \otimes {{\mathfrak m}athcal M}_2 \rightarrow 0.
$$
Since each summand is weighted globally generated and that a direct sum of such is weighted globally generated, the result follows.
\end{enumerate}
\end{proof}
In any abelian symmetric braided tensor category we can define the $n^{th}$ symmetric power functor ${{\mbox{\rm Sym}}}^n:A{{\mbox{\rm --\,Sat}}} \rightarrow A{{\mbox{\rm --\,Sat}}}$ as the coequalizer of all the endomorphisms $\sigma \in S_n$ of the $n^{th}$ tensor power functor $T^n$. Hence, we get the following well-known properties:
{\mathfrak b}egin{prop}
{\mathfrak b}egin{enumerate}
\item There exists an epimorphism
$$
{{\mbox{\rm Sym}}}^p({{\mathfrak m}athcal M}) \otimes {{\mbox{\rm Sym}}}^q({{\mathfrak m}athcal M}) \twoheadrightarrow {{\mbox{\rm Sym}}}^{p+q}({{\mathfrak m}athcal M}).
$$
\item There is a natural isomorphism
$$
{\mathfrak b}igoplus_{p+q=n} {{\mbox{\rm Sym}}}^p({{\mathfrak m}athcal M}) \otimes {{\mbox{\rm Sym}}}^q({{\mathfrak m}athcal N}) \rightarrow {{\mbox{\rm Sym}}}^n({{\mathfrak m}athcal M}\oplus {{\mathfrak m}athcal N}).
$$
\item The functor ${{\mbox{\rm Sym}}}^n$ preserves epimorphisms and sends coherent sheaves to coherent sheaves.
\item There is a natural epimorphism
$$
{{\mbox{\rm Sym}}}^n({{\mathfrak m}athcal M}) \otimes {{\mbox{\rm Sym}}}^n({{\mathfrak m}athcal N}) \rightarrow {{\mbox{\rm Sym}}}^n({{\mathfrak m}athcal M} \otimes {{\mathfrak m}athcal N}).
$$
\end{enumerate}
\end{prop}
{\mathfrak b}egin{prop}
Let ${{\mathfrak m}athcal M}\in A{{\mbox{\rm --\,Sat}}}$, then ${{\mbox{\rm Sym}}}^n({{\mathfrak m}athcal M}) \cong {{\mbox{\rm Sat}}}( {\mathfrak m}box{\rm S}^n(M))$ where ${\mathfrak m}box{\rm S}^n$ is the $n^{th}$ symmetric power taken in $A{{\mbox{\rm --\,GrMod}}}$ and ${{\mathfrak m}athcal M}={{\mbox{\rm Sat}}}(M)$.
\end{prop}
This results holds because of the definition of our tensor product in $A{{\mbox{\rm --\,Sat}}}$, we preserved the monoidal symmetric structure and now each transposition acts by switching tensorands before saturation. More generally, it should be noted that saturating a module corresponds to the sheafification of a presheaf.
We give a more detailed proof of the following proposition first given in \cite{GP}.
{\mathfrak b}egin{prop}
For all $M$, $N$ be two modules. We have
$$
{{\mbox{\rm Sat}}}({{\mbox{\rm Sat}}}(M)\otimes {{\mbox{\rm Sat}}}(N)) \cong {{\mbox{\rm Sat}}}(M\otimes_A N).
$$
\end{prop}
{\mathfrak b}egin{proof}
Consider the following exact sequence in $A{{\mbox{\rm --\,GrMod}}}$ \cite{AZ}:
$$
0 \rightarrow \tau(M) \rightarrow M \rightarrow {{\mbox{\rm Sat}}}(M) \rightarrow R^{1}\tau(M) \rightarrow 0
$$
where $\tau(M)$ is the largest torsion submodule of $M$. The saturation of $M$, denoted by $\widetilde{M}$, is the maximal essential extension of $M/\tau(M)$ such that ${\widetilde{M}}/(M/\tau(M))$ is in $A{{\mbox{\rm --\,Tors}}}$. So we have
$$
0 \rightarrow M/\tau(M) \rightarrow {{\mbox{\rm Sat}}}(M) \rightarrow T \rightarrow 0
$$
where $T$ is in $A{{\mbox{\rm --\,Tors}}}$. Applying by $\_\otimes_AN$ we obtain
$$
... \rightarrow \textrm{Tor}_1^A(T,N) \rightarrow M/\tau(M)\otimes_AN \rightarrow {{\mbox{\rm Sat}}}(M)\otimes_AN \rightarrow T\otimes_AN \rightarrow 0.
$$
From the properties of the $\textrm{Tor}$ functor, it is known that $\textrm{Tor}_1^A(T,N) \cong \textrm{Tor}_1^A(N,T)$. Now taking a projective resolution of $N$ and tensoring by $T$ we get a complex of objects in $A{{\mbox{\rm --\,Tors}}}$ since tensor product preserves torsion. Therefore $\textrm{Tor}_1^A(T,N)$ is in $A{{\mbox{\rm --\,Tors}}}$. The saturation functor is exact and the saturation of torsion objects is the zero object, so we get a short exact sequence
$$
0 \rightarrow {{\mbox{\rm Sat}}}(M/\tau(M)\otimes_AN) \rightarrow {{\mbox{\rm Sat}}}({{\mbox{\rm Sat}}}(M)\otimes_AN) \rightarrow 0.
$$
And hence an isomorphism
$$
{{\mbox{\rm Sat}}}(M/\tau(M)\otimes_AN) \cong {{\mbox{\rm Sat}}}({{\mbox{\rm Sat}}}(M)\otimes_AN).
$$
Moreover we have the following short exact sequence
$$
0 \rightarrow \tau(M) \rightarrow M \rightarrow M/\tau(M) \rightarrow 0.
$$
Tensoring on the left by $N$ we get
$$
\tau(M)\otimes_AN \rightarrow M\otimes_AN \rightarrow M/\tau(M)\otimes_AN \rightarrow 0.
$$
Since $\tau(M)\otimes_AN$ is torsion, applying the saturation functor we obtain
$$
0 \rightarrow {{\mbox{\rm Sat}}}(M\otimes_AN) \rightarrow {{\mbox{\rm Sat}}}(M/\tau(M)\otimes_AN) \rightarrow 0.
$$
And hence an isomorphism
$$
{{\mbox{\rm Sat}}}(M\otimes_AN) \cong {{\mbox{\rm Sat}}}(M/\tau(M)\otimes_AN)
$$
So,
$$
{{\mbox{\rm Sat}}}(M\otimes_AN) \cong {{\mbox{\rm Sat}}}({{\mbox{\rm Sat}}}(M)\otimes_AN)
$$
To conclude,
{\mathfrak b}egin{align*}
{{\mbox{\rm Sat}}}(M\otimes_AN) &\cong {{\mbox{\rm Sat}}}({{\mbox{\rm Sat}}}(M)\otimes_AN) \\
&\cong {{\mbox{\rm Sat}}}(N \otimes_A {{\mbox{\rm Sat}}}(M)) \\
&\cong {{\mbox{\rm Sat}}}({{\mbox{\rm Sat}}}(N) \otimes_A {{\mbox{\rm Sat}}}(M)) \\
&\cong {{\mbox{\rm Sat}}}({{\mbox{\rm Sat}}}(M) \otimes_A {{\mbox{\rm Sat}}}(N))
\end{align*}
\end{proof}
{\mathfrak b}egin{defn}
A vector bundle ${{\mathfrak m}athcal M}$ is {{\mathfrak b}f ample} if for any coherent sheaf ${{\mathfrak m}athcal F}$ there exists $n_0>0$ such that $${{\mathfrak m}athcal F}\otimes {{\mbox{\rm Sym}}}^n({{\mathfrak m}athcal M})$$ is weighted globally generated for all $n {{\mathfrak m}athfrak g}eqslant n_0$.
\end{defn}
{\mathfrak b}egin{prop}
{\mathfrak b}egin{enumerate}
\item For any ample sheaf, there exists a non-negative integer such that any of its higher symmetric power is weighted globally generated.
\item Any quotient sheaf of an ample sheaf is ample.
\end{enumerate}
\end{prop}
\begin{proof}
\begin{enumerate}
\item Suppose ${\mathcal M}$ is ample. Since ${\mathcal O}$ is a coherent sheaf, there exists a non-negative integer $n_0$ such that for all $n \geqslant n_0$
$$
{\mathcal O} \otimes {{\mbox{\rm Sym}}}^n({\mathcal M}) \cong {{\mbox{\rm Sym}}}^n({\mathcal M})
$$
is weighted globally generated.
\item For a given sheaf ${\mathcal F}$, the functor ${\mathcal F} \otimes \_$ is right exact as a composition of a right exact functor and an exact functor. Let ${\mathcal M}'$ be a quotient of ${\mathcal M}$, i.e., we have an epimorphism ${\mathcal M} \twoheadrightarrow {\mathcal M}'$. Since ${{\mbox{\rm Sym}}}^n$ preserves epimorphisms we have $${{\mbox{\rm Sym}}}^n({\mathcal M}) \twoheadrightarrow {{\mbox{\rm Sym}}}^n({\mathcal M}'),$$ so $${\mathcal F} \otimes{{\mbox{\rm Sym}}}^n({\mathcal M}) \twoheadrightarrow {\mathcal F} \otimes{{\mbox{\rm Sym}}}^n({\mathcal M}')$$ for any coherent sheaf ${\mathcal F}$. But ${\mathcal M}$ is ample, so for $n$ sufficiently large ${\mathcal F} \otimes{{\mbox{\rm Sym}}}^n({\mathcal M})$ is weighted globally generated, and since ${\mathcal F} \otimes{{\mbox{\rm Sym}}}^n({\mathcal M}) \twoheadrightarrow {\mathcal F} \otimes{{\mbox{\rm Sym}}}^n({\mathcal M}')$ is an epimorphism, ${\mathcal F} \otimes{{\mbox{\rm Sym}}}^n({\mathcal M}')$ is weighted globally generated. This shows that ${\mathcal M}'$ is ample.
\end{enumerate}
\end{proof}
\begin{prop}
A finite direct sum of ample sheaves is ample.
\end{prop}
\begin{proof}
The proof is similar to the one given by Hartshorne \cite{Har}.
We know that
$$
{{\mbox{\rm Sym}}}^n({\mathcal M}\oplus{\mathcal N}) = \bigoplus_{p=0}^{n} {{\mbox{\rm Sym}}}^p({\mathcal M})\otimes {{\mbox{\rm Sym}}}^{n-p}({\mathcal N}).
$$
Write $q=n-p$. It suffices to show that there exists some non-negative integer $n_0$ such that whenever $p+q \geqslant n_0$,
$$
{\mathcal F}\otimes {{\mbox{\rm Sym}}}^p({\mathcal M})\otimes {{\mbox{\rm Sym}}}^{q}({\mathcal N})
$$
is weighted globally generated.
Fix some coherent sheaf ${\mathcal F}$.
\begin{enumerate}
\item ${\mathcal M}$ is ample so there exists a positive integer $n_1$ such that for all $n \geqslant n_1$,
$$
{{\mbox{\rm Sym}}}^n({\mathcal M})
$$
is weighted globally generated.
\item ${\mathcal N}$ is ample so there exists a positive integer $n_2$ such that for all $n \geqslant n_2$,
$$
{\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal N})
$$
is weighted globally generated.
\item For each $r\in\{0,...,n_1-1\}$, the sheaf ${\mathcal F} \otimes {{\mbox{\rm Sym}}}^r({\mathcal M})$ is coherent. Since ${\mathcal N}$ is ample, there exists $m_r$ such that for all $n \geqslant m_r$,
$$
{\mathcal F} \otimes {{\mbox{\rm Sym}}}^r({\mathcal M}) \otimes {{\mbox{\rm Sym}}}^n({\mathcal N})
$$
is weighted globally generated.
\item For each $s\in\{0,...,n_2-1\}$, the sheaf ${\mathcal F} \otimes {{\mbox{\rm Sym}}}^s({\mathcal N})$ is coherent. Since ${\mathcal M}$ is ample, there exists $l_s$ such that for all $n \geqslant l_s$,
$$
{\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal M}) \otimes {{\mbox{\rm Sym}}}^s({\mathcal N})
$$
is weighted globally generated.
\end{enumerate}
Now take $n_0=\max_{r,s}\{r+m_r,s+l_s\}$; then whenever $p+q=n\geqslant n_0$,
$$
{\mathcal F} \otimes {{\mbox{\rm Sym}}}^p({\mathcal M}) \otimes {{\mbox{\rm Sym}}}^q({\mathcal N})
$$
is weighted globally generated.
Indeed, there are three cases.
\begin{enumerate}[(i)]
\item Suppose $p <n_1$. Then $p+q\geqslant n_0 \geqslant p+m_p$, so $q\geqslant m_p$ and by 3. we are done.
\item Suppose $q <n_2$. Then $p+q\geqslant n_0 \geqslant l_q+q$, so $p\geqslant l_q$ and by 4. we are done.
\item Suppose $p \geqslant n_1$ and $q \geqslant n_2$. Then ${{\mbox{\rm Sym}}}^p({\mathcal M})$ and ${\mathcal F}\otimes {{\mbox{\rm Sym}}}^q({\mathcal N})$ are weighted globally generated and so is their tensor product.
\end{enumerate}
We conclude that ${\mathcal M} \oplus {\mathcal N}$ is ample.
\end{proof}
\begin{cor}
Let ${\mathcal M}$ and ${\mathcal N}$ be two sheaves. Then ${\mathcal M} \oplus {\mathcal N}$ is ample if and only if ${\mathcal M}$ and ${\mathcal N}$ are ample.
\end{cor}
\begin{proof}
We already know that if ${\mathcal M}$ and ${\mathcal N}$ are ample then so is their direct sum. Conversely, ${\mathcal M}$ and ${\mathcal N}$ are quotients of ${\mathcal M}\oplus{\mathcal N}$; if the latter is ample, then so are ${\mathcal M}$ and ${\mathcal N}$.
\end{proof}
\begin{cor}
The tensor product of an ample sheaf and a weighted globally generated sheaf is ample.
\end{cor}
\begin{proof}
Let ${\mathcal M}$ be an ample sheaf and ${\mathcal N}$ a weighted globally generated sheaf, so that there is an exact sequence
$$
\bigoplus_{j = 0}^{l-1} {\mathcal O}(j)^{\oplus s_j} \rightarrow {\mathcal N} \rightarrow 0.
$$
Tensoring with ${\mathcal M}$ gives
$$
\bigoplus_{j = 0}^{l-1} {\mathcal M} \otimes {\mathcal O}(j)^{\oplus s_j} \rightarrow {\mathcal M} \otimes {\mathcal N} \rightarrow 0.
$$
It suffices to show that ${\mathcal M} \otimes {\mathcal O}(j)$ is ample for each $j\in\{0,...,l-1\}$, since ${\mathcal M}\otimes{\mathcal N}$ is then a quotient of a finite direct sum of ample sheaves.
Let ${\mathcal F}$ be a coherent sheaf and consider
$$
{\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal M} \otimes {\mathcal O}(j))
$$
for $n$ a non-negative integer. It is a quotient of
$$
{\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal M}) \otimes {{\mbox{\rm Sym}}}^n({\mathcal O}(j)) \cong {\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal O}(j)) \otimes {{\mbox{\rm Sym}}}^n({\mathcal M}).
$$
But ${\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal O}(j))$ is a coherent sheaf and ${\mathcal M}$ is ample, so there exists a non-negative integer $n_0$ such that for all $n \geqslant n_0$
$$
{\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal O}(j)) \otimes {{\mbox{\rm Sym}}}^n({\mathcal M})
$$
is weighted globally generated. It follows that all of its quotients are weighted globally generated, in particular ${\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal M} \otimes {\mathcal O}(j))$. Hence ${\mathcal M} \otimes {\mathcal O}(j)$ is ample and the result follows.
\end{proof}
\begin{lemma}
The sheaf ${\mathcal O}(1)$ is ample.
\end{lemma}
\begin{proof}
Let ${\mathcal F}={{\mbox{\rm Sat}}}(F)$ be a coherent sheaf. Then $F$ is a finitely generated module over $A$, generated by finitely many homogeneous elements $f_0,...,f_c$ of degrees $\rho_0,...,\rho_c$ respectively.
Take $n_0=\max\{\rho_0,...,\rho_c\}$; then for each $n \geqslant n_0$ we have
$$
n-\rho_i=a_il+r_i
$$
where $0\leqslant r_i <l$ by the division algorithm. We claim that the following map is an epimorphism in $A{{\mbox{\rm --\,Sat}}}$,
$$
\bigoplus_{j=0}^n \bigoplus_{i=0}^c {\mathcal O}(r_i) \rightarrow {\mathcal F}(n)
$$
induced by $((0,...,0),...,(0,...,1_i,...,0)_j,...(0,...,0)) \mapsto x_j^{\frac{a_il}{d_j}}f_i$.
Indeed, to prove the claim we need to show that the cokernel of the map in $A{{\mbox{\rm --\,GrMod}}}$ is torsion. So take $f\in F(n)$ homogeneous and assume that $f$ can be written as $kf_i$ for some $i\in\{0,...,c\}$. Let $N=\max\left\lbrace \dfrac{a_il}{d_j} : i\in\{0,...,c\},\ j\in\{0,...,n\} \right\rbrace $. Suppose that $h\in A$ is a homogeneous element of degree greater than $N$. Then it can be written as $h'x_j^{ \frac{a_il}{d_j}}$ for some $i\in\{0,...,c\}$ and $j\in\{0,...,n\}$. It follows that $hf$ is in the image of the map. Hence the cokernel is torsion, and the map is an epimorphism in $A{{\mbox{\rm --\,Sat}}}$.
\end{proof}
Since ${\mathcal O}(1)$ is ample and weighted globally generated, ${\mathcal O}(2)$ is ample for any weighted projective stack. However, for ${\mathbb P}(3,5)$ seen as a variety, we have ${\mathcal O}(2) \cong {\mathcal O}(-1)$, which is not ample.
\begin{theorem}
The tangent sheaf of any weighted projective stack is ample.
\end{theorem}
\begin{proof}
We have the following short exact sequence \cite{Zho}
$$
0 \rightarrow {\mathcal O} \rightarrow \bigoplus_{j=0}^n {\mathcal O}(a_j) \rightarrow {\mathcal T} \rightarrow 0.
$$
From the long exact sequence in cohomology applied to the above short exact sequence, and the fact that ${\mathcal O}$ and ${\mathcal O}(a_j)$ are vector bundles, it follows that ${\mathcal T}$ is a vector bundle as well.
Each summand of the central term is ample, since ${\mathcal O}(a_j)={\mathcal O}(1)^{\otimes a_j}$ and ${\mathcal O}(1)$ is ample. Moreover, ${\mathcal T}$ is a quotient of a finite direct sum of ample sheaves, hence ${\mathcal T}$ is ample.
\end{proof}
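For instance, when all the weights are equal to $1$, so that the stack is the ordinary projective space ${\mathbb P}^n$, the above sequence specialises to the classical Euler sequence
$$
0 \rightarrow {\mathcal O} \rightarrow {\mathcal O}(1)^{\oplus (n+1)} \rightarrow {\mathcal T} \rightarrow 0,
$$
and the argument reduces to the familiar fact that the tangent bundle of ${\mathbb P}^n$ is a quotient of ${\mathcal O}(1)^{\oplus (n+1)}$.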
We obtain the following corollary, proved first by Hartshorne.
\begin{cor}[\cite{Har}]
The tangent sheaf of a standard projective space is ample.
\end{cor}
A converse of this corollary, known as the Hartshorne conjecture and proved by Mori, provides a characterisation of smooth projective spaces.
\begin{theorem}[\cite{Mor}]
Every smooth irreducible projective variety with ample tangent bundle is isomorphic to ${\mathbb P}^n$ for some $n$.
\end{theorem}
It is natural to ask whether an analogue of Hartshorne's conjecture holds for this wider class of spaces. We formulate it as follows:
\begin{conj}
Every smooth irreducible projective stack with ample tangent bundle is isomorphic to a weighted projective stack.
\end{conj}
\bibliography{Reference_4th_year_report}
\bibliographystyle{amsplain}
\end{document}
\begin{document}
\maketitle
\begin{abstract}
Let $\mathbb{N}$ be the set of natural numbers.
The symmetric inverse semigroup $R_\infty$ is the semigroup of all infinite 0-1 matrices $\left[ g_{ij}\right]_{i,j\in \mathbb{N}}$ with at most one 1 in each row and each column such that $g_{ii}=1$ on the complement of a finite set. The binary operation in $R_\infty$ is the ordinary matrix multiplication. It is clear that the infinite symmetric group $\mathfrak{S}_\infty$ is a subgroup of $R_\infty$. The map $\star:\left[ g_{ij}\right]\mapsto\left[ g_{ji}\right]$ is an involution on $R_\infty$. We call a function $f$ on $R_\infty$ positive definite if for all $r_1, r_2, \ldots, r_n\in R_\infty$ the matrix $\left[ f\left( r_ir_j^\star\right)\right]$ is Hermitian and non-negative definite. A function $f$ is said to be indecomposable if the corresponding $\star$-representation $\pi_f$ is a factor-representation. The class of $R_\infty$-central functions (characters) is defined by the condition $f(rs)=f(sr)$ for all $r,s\in R_\infty$. In this paper we classify all factor-representations of $R_\infty$ that correspond to $R_\infty$-central positive definite functions.
\end{abstract}
\section{Introduction}
Let $R_n$ be the set of all $n\times n$ matrices that contain at most one entry of one in each column
and row and zeroes elsewhere. Under matrix multiplication, $R_n$ has the structure of a
semigroup, a set with an associative binary operation and an identity element. The number of ${\rm rank}\,k$ matrices in $R_n$ is ${{\displaystyle{{{n}\choose{k}}}^2}}k!$ and hence $R_n$ has a total of $\sum\limits_{k=0}^n{{\displaystyle{{{n}\choose{k}}}^2}}k!$ elements. Note that the
set of ${\rm rank}\,n$ matrices in the semigroup $R_n$ is isomorphic to $\mathfrak{S}_n$, the symmetric group on $n$
letters.
The semigroup $R_\infty$ is the inductive limit of the chain $R_n$, $ n =
1, 2, . . .$ , with the natural embeddings: $ R_n\ni r=\left[ r_{ij} \right]\mapsto \hat{r}=\left[ \hat{r}_{ij} \right]\in R_{n+1}$, where $r_{ij}=\hat{r}_{ij}$ for all $i,j\leq n$ and $\hat{r}_{n+1\,n+1}=1$. Respectively, the group $\mathfrak{S}_\infty\subset R_\infty$ is the inductive limit of the chain $\mathfrak{S}_n$, $n=1,2,\ldots$.
For convenience we will use the matrix representation of the elements of $R_\infty$. Namely, if $r=\left[ r_{ij} \right]\in R_\infty$ then the matrix $\left[ r_{ij} \right]$ contains at most one entry of one in each column
and row and $r_{nn}=1$ for all sufficiently large $n$.
Denote by $D_\infty\subset R_\infty$ the abelian subsemigroup of the diagonal matrices.
For any subset $\mathbb{A}\subset\mathbb{N}$ denote by $\epsilon_\mathbb{A}$ the matrix $\left[ \epsilon_{ij} \right]\in D_\infty$ such that $\epsilon_{ii} =\left\{
\begin{array}{rl}
0, &\text{ if } i\in \mathbb{A}\\
1, &\text{ if } i\notin \mathbb{A}.
\end{array}\right.$
For example, $\epsilon_{\{2\}}=
\left[\begin{matrix}
1&0&0&\cdots\\
0&0&0&\cdots\\
0&0&1&\cdots\\
\cdots&\cdots&\cdots&\cdots
\end{matrix}\right]$.
The ordinary transposition of matrices defines an involution on $R_\infty:\left[ r_{ij} \right]^{\star}=\left[ r_{ji} \right]$.
Let $\mathcal{B}(\mathcal{H})$ be the algebra of all bounded operators in a Hilbert space $\mathcal{H}$.
By a $\star$-representation of $R_\infty$ we mean a homomorphism $\pi$ of $R_\infty$ into the multiplicative semigroup of the algebra $\mathcal{B}(\mathcal{H})$ such that $\pi(r^*)=\left( \pi(r) \right)^*$, where $\left( \pi(r) \right)^*$ is the Hermitian adjoint of the operator $\pi(r) $. It follows immediately that $\pi(s)$ is a unitary operator when $s\in\mathfrak{S}_\infty$, and $\pi(d)$ is a self-adjoint projection for $d\in D_\infty$.
Recall the notion of quasiequivalent representations.
\begin{Def}
Let $\mathcal{N}_1$ and $\mathcal{N}_2$ be the $w^*$-algebras generated by the operators of the representations $\pi_1$ and $\pi_2$, respectively, of a group or semigroup $G$. The representations $\pi_1 $ and $\pi_2$ are quasiequivalent if there exists an isomorphism $\theta:\mathcal{N}_1\to\mathcal{N}_2$ such that $\theta\left(\pi_1(g)\right)=\pi_2(g)$ for all $g\in G$.
\end{Def}
\begin{Def}\label{support_of_el_semigroup}
Given an element $r=[r_{mn}]\in R_\infty$, let ${\rm supp}\,r$ be the complement of the set $\left\{n\in \mathbb{N}:r_{nn}=1\right\}$.
\end{Def}
By definition of $R_\infty$, ${\rm supp}\,r$ is a finite set.
Let $c=\left( n_1\;n_2\;\cdots\;n_k\right)$ be a cycle in $\mathfrak{S}_\infty$. If $\mathbb{A}\subseteq{\rm supp}\,c$, then we call $q=c\cdot \epsilon_{\mathbb{A}}$ a {\it quasicycle}. Notice that $\epsilon_{\{k\}}$ is a quasicycle too. Two quasicycles $q_1$ and $q_2$ are called {\it independent} if $({\rm supp}\,q_1)\cap({\rm supp}\,q_2)=\emptyset$. Each $r\in R_\infty$ can be decomposed into a product of independent quasicycles\label{quasicycle}:
\begin{eqnarray}\label{decomposition_into_product}
r=q_1\cdot q_2\cdots q_k, \text{ where } {\rm supp}\, q_i\,\cap \,{\rm supp}\, q_j=\emptyset \text{ for all } i\neq j.
\end{eqnarray}
In general, this decomposition is not unique.
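For instance, for the diagonal element $\epsilon_{\{1,2\}}=\epsilon_{\{1\}}\cdot\epsilon_{\{2\}}$ one checks directly on matrices that $(1\;2)\cdot\epsilon_{\{1,2\}}=\epsilon_{\{1,2\}}$, so the same element is also given by the single quasicycle $(1\;2)\cdot\epsilon_{\{1,2\}}$; this illustrates the non-uniqueness.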
In this paper we study the $\star$-representations of $R_\infty$. The main results are the construction of a list of ${\rm II}_1$-factor representations and the proof of its completeness.
The finite semigroup $R_n$, its semigroup algebra and the corresponding representation theory were investigated by various authors in \cite{Munn_1, Munn_2, Solomon_1}. The irreducible representations of $R_n$ are indexed by the set of all Young diagrams
with at most $n$ cells. An analogue of the Specht modules for the finite symmetric semigroup was built by C. Grood in \cite{Grood}.
The main motivation of this paper is due to A. M. Vershik and P. P. Nikitin. Using the branching rule for the representations of the semigroups $R_n$, they found the full list of characters on $R_\infty$ \cite{VN}. We give a very short and simple proof of the main Theorem 2.12 from \cite{VN} in this note. Our elementary approach is based on the study of the limiting operators proposed by A. Okounkov \cite{Ok1, Ok2}. We construct a full collection of new realisations of the corresponding ${\rm II}_1$-factor-representations in Section \ref{Realisation}.
\subsection{The examples of $\star$-representations of $R_\infty$.} Given $r\in R_\infty$ define in the space $l^2(\mathbb{N})$ the map
\begin{eqnarray}
l^2(\mathbb{N})\ni(c_1,c_2,\ldots,c_n,\ldots)\stackrel{\mathfrak{N}(r)}{\mapsto} (c_1,c_2,\ldots,c_n,\ldots)r\in l^2(\mathbb{N}).
\end{eqnarray}
It is easy to check that the next statement holds.
\begin{Prop}
The operators $\mathfrak{N}(r)$ generate the {\it irreducible} $\star$-representation of $R_\infty$.
\end{Prop}
The next important representation is called the {\it left regular representation } of $R_\infty$ \cite{APat}. The formula for the action of the corresponding operators in the space $l^2(R_\infty)$ is given by
\begin{eqnarray}\label{Left_Reg_Repr}
\mathfrak{L}^{reg}(r)\left(\sum\limits_{t\in R_\infty} c_t\cdot t\right)=\sum\limits_{t\in R_\infty:(tt^*)(r^*r)=tt^*} c_t\cdot r\cdot t,\;\;c_t\in\mathbb{C}.
\end{eqnarray}
The operators of the {\it right regular representation } of $R_\infty$ act by
\begin{eqnarray}\label{Right_Reg_Repr}
\mathfrak{R}^{reg}(r)\left(\sum\limits_{t\in R_\infty} c_t\cdot t\right)=\sum\limits_{t\in R_\infty:(t^*t)(r^*r)=t^*t} c_t\cdot t\cdot r^*,\;\;c_t\in\mathbb{C}.
\end{eqnarray}
It is obvious that $\mathfrak{L}^{reg}$ and $\mathfrak{R}^{reg}$ are $\star$-representations of $R_\infty$.
Denote by $l^2_k$ the subspace of $l^2(R_\infty)$ generated by the elements of the form $\sum\limits_{t\in R_\infty:{\rm rank}\,(I-tt^*)=k }c_t\cdot t$. By definition, the subspaces $l^2_k$ are pairwise orthogonal.
It follows from (\ref{Left_Reg_Repr}) and (\ref{Right_Reg_Repr}) that $\mathfrak{L}^{reg}(r)l^2_k\subseteqq l^2_k$ and $\mathfrak{R}^{reg}(r)l^2_k\subseteqq l^2_k$ for all $r\in R_\infty$. Denote by $\mathfrak{L}^{reg}_k$ $\left(\mathfrak{R}^{reg}_k \right)$ the restriction of $\mathfrak{L}^{reg}$ $\left( \mathfrak{R}^{reg} \right)$ to $l^2_k$.
Since $\mathfrak{R}^{reg}(r_1)\cdot \mathfrak{L}^{reg}(r_2)=\mathfrak{L}^{reg}(r_2)\cdot \mathfrak{R}^{reg}(r_1)$ for all $r_1,r_2\in R_\infty$, the operators $T_k^{(2)}(r_1,r_2)=\mathfrak{L}^{reg}_k(r_1)\cdot\mathfrak{R}^{reg}_k(r_2)$ $\left( r_1,r_2\in R_\infty \right)$ define $\star$-representation of the semigroup $R_\infty\times R_\infty$.
\begin{Prop}
The next properties hold:
\begin{itemize}
\item {\bf a}) the representation $T_k^{(2)}$ is irreducible for each $k=0,1,\ldots,m,\ldots$;
\item {\bf b}) the representation $\mathfrak{L}^{reg}_0$ $\left( \mathfrak{R}^{reg}_0\right)$ is ${\rm II}_1$-factor-representation of $R_\infty$; in particular, $\mathfrak{L}^{reg}_0\left( \epsilon_{\{j\}} \right)=0$ $\left( \mathfrak{R}^{reg}_0\left( \epsilon_{\{j\}} \right)=0 \right)$;
\item {\bf c}) for each $k\geq1$ the representation $\mathfrak{L}^{reg}_k$ $\left( \mathfrak{R}^{reg}_k\right)$ is ${\rm II}_\infty$-factor-representation of $R_\infty$; in particular, if $l>k$ then $\mathfrak{L}^{reg}_k\left( \epsilon_{\{j_1\}}\cdots\epsilon_{\{j_l\}} \right)=0$ $\left(\mathfrak{R}^{reg}_k\left( \epsilon_{\{j_1\}}\cdots\epsilon_{\{j_l\}} \right)=0\right)$;
\item {\bf d}) for each $k\geq 0$ the restriction of $\mathfrak{L}^{reg}_k$ $\left( \mathfrak{R}^{reg}_k\right)$ to the subgroup $\mathfrak{S}_\infty\subset R_\infty$ is ${\rm II}_1$-factor-representation quasiequivalent to the regular one.
\end{itemize}
\end{Prop}
We do not use this proposition below and leave its proof to the reader.
\subsection{The results}
Let $\pi$ be ${\rm II}_1$-factor $\star$-representation of $R_\infty$ in Hilbert space $\mathcal{H}$. Set $\left(\pi(R_\infty)\right)^\prime=\left\{A\in\mathcal{B}(\mathcal{H}): \pi(r)\cdot A=A\cdot\pi(r) \text{ for all } r\in R_\infty\right\}$, $\left(\pi(R_\infty)\right)^{\prime\prime}=\left(\left(\pi(R_\infty)\right)^\prime\right)^\prime$.
Throughout this paper we will denote by $\left[v_1,v_2,\ldots, \right]$ the closure of the linear span of the vectors $v_1,v_2,\ldots\in \mathcal{H}$. Let ${\rm tr}$ be the unique faithful normal trace on the factor $\left(\pi(R_\infty)\right)^{\prime\prime}$.
Replacing $\pi$, if needed, by a quasiequivalent representation, we will assume below that there exists a unit vector $\xi\in \mathcal{H}$ such that
\begin{eqnarray}
\label{bicyclic_1} \left[\pi\left(R_\infty \right)\xi \right]=\left[ \left(\pi(R_\infty)\right)^\prime\xi\right]=\mathcal{H};\\
\label{vector_trace} {\rm tr}(A)=(A\xi,\xi)\text{ for all } A\in \left(\pi(R_\infty)\right)^{\prime\prime}.
\end{eqnarray}
The function $f$ on $R_\infty$ defined by $f(r) =\left(\pi(r)\xi,\xi \right)$ satisfies the following conditions:
\begin{itemize}
\item ({\bf1}) central, that is, $f(rs)=f(sr)$ for all $s,r\in R_\infty$;
\item ({\bf2}) positive definite, that is, for all $r_1,r_2, \ldots, r_n\in R_\infty$ the matrix $\left[ f\left( r_i^*r_j \right) \right]$ is Hermitian and non-negative definite;
\item ({\bf3}) {\it indecomposable}, that is, it cannot be presented as a sum of two linearly independent functions, satisfying ({\bf1}) and ({\bf2});
\item ({\bf4}) normalized by $f(e)=1$, where $e$ is the unit of $R_\infty$.
\end{itemize}
Functions with these properties are called {\it finite characters} of a semigroup or group.
From now on, $f$ denotes a character on $R_\infty$.
\begin{Th}[\rm Multiplicativity Theorem]\label{Mult_th} Let $f$ be an {\it indecomposable} character on $R_\infty$, and let $r=q_1\cdot q_2\cdots q_k$ be a decomposition of $r$ into a product of independent quasicycles (see (\ref{decomposition_into_product})). Then $f(r)=\prod\limits_{j=1}^k f\left(q_j\right)$.
\end{Th}
\begin{proof}
Since $({\rm supp}\,q_k)\cap \left(\bigcup\limits_{j=1}^{k-1}{\rm supp}\,q_j\right)=\emptyset$, there exists a sequence $\left\{s_n \right\}\subset \mathfrak{S}_\infty$ such that
\begin{eqnarray}\label{supp_infinity}
{\rm supp}\,s_nq_ks_n^{-1}\subset(n,\infty ) \text{ and } s_nl=l \text{ for all } l\in\bigcup\limits_{j=1}^{k-1}{\rm supp}\,q_j.
\end{eqnarray}
Let us prove that the sequence $\left\{\pi\left( s_nq_ks_n^{-1}\right)\xi \right\}\subset\mathcal{H}$ converges in the weak topology to
the vector $f(q_k)\xi$.
Indeed, using the Multiplicativity Theorem and (\ref{supp_infinity}), we have $\lim\limits_{n\to\infty}\left(\pi( s_nq_ks_n^{-1})\xi,\eta \right)=\left(\pi\left(q_k \right)\xi,\xi \right)\cdot\left(\xi,\eta \right)$ for any $\eta=\pi(r)\xi$, where $r\in R_\infty$. Now we conclude from (\ref{bicyclic_1}) that $\lim\limits_{n\to\infty}\left(\pi( s_nq_ks_n^{-1})\xi,\eta \right)=\left(\pi\left(q_k \right)\xi,\xi \right)\cdot\left(\xi,\eta \right)$ for all $\eta\in\mathcal{H}$. Again using (\ref{bicyclic_1}), we obtain that $\lim\limits_{n\to\infty}\pi( s_nq_ks_n^{-1})=f(q_k)\cdot I_\mathcal{H}$ exists in the weak operator topology. Therefore, $f(r)= f(s_n rs_n^{-1})$ $ \stackrel{(\ref{supp_infinity})}{=} \lim\limits_{n\to\infty}f\left(s_n q_ks_n^{-1}\prod\limits_{j=1}^{k-1} q_j\right)$ $=f(q_k)\cdot f\left(\prod\limits_{j=1}^{k-1} q_j \right)$. Iterating this argument over the remaining quasicycles completes the proof.
\end{proof}
Let us consider any cycle $c=\left(n_1\,n_2\,\ldots\,n_k \right)$ of length $k$. If $\mathbb{A}=\left\{n_{j_1},\ldots, n_{j_l} \right\}\subset {\rm supp}\,c$ and $\mathbb{A}\neq\emptyset$ then, using the relations
\begin{eqnarray*}
& c=\left(n_1\;n_k \right)\cdots \left(n_1\;n_{j_i} \right)\cdots\left(n_1\; n_2 \right) \text{ and }\\
&\epsilon_{\{n_1\}}\cdot \left(n_1\;n_{j_i} \right)\cdot\epsilon_{\{n_1\}}=\epsilon_{\{n_{j_i}\}}\cdot \left(n_1\;n_{j_i} \right)\cdot\epsilon_{\{n_{j_i}\}} =\epsilon_{\{n_{j_i}\}}\cdot \epsilon_{\{n_1\}},
\end{eqnarray*}
we obtain $f\left(c\cdot\epsilon_\mathbb{A} \right)= f\left( \left(n_1\;n_k \right)\cdots \left(n_1\;n_{j_1} \right)\cdots\left(n_1\; n_2 \right)\cdot\epsilon_{n_{j_1}}\cdot\epsilon_{\mathbb{A}}\right)$ $= f\left(\epsilon_{n_{j_1}}\cdot \left(n_1\;n_k \right)\right.$ $ \left.\cdots \left(n_1\;n_{j_1} \right)\cdots\left(n_1\; n_2 \right)\cdot\epsilon_{n_{j_1}}\cdot\epsilon_{\mathbb{A}}\right)$ $=f\left( \left(n_1\;n_k \right) \cdots\epsilon_{n_{j_1}}\cdot \left(n_1\;n_{j_1} \right)\cdot\epsilon_{n_{j_1}}\cdots\left(n_1\; n_2 \right)\cdot\epsilon_{\mathbb{A}}\right)$ $=f\left( \left(n_1\;n_k \right) \cdots\left(n_1\;n_{j_1+1} \right)\cdot\epsilon_{n_1}\cdot\epsilon_{n_{j_1}}\cdot\left(n_1\;n_{j_1-1} \right)\cdots\left(n_1\; n_2 \right)\cdot\epsilon_{\mathbb{A}}\right)$ $= f\left(\epsilon_{n_{j_1+1}}\cdot \left(n_1\;n_k \right) \right.$ $\left. \cdots\left(n_1\;n_{j_1+1} \right)\cdot\left(n_1\;n_{j_1-1} \right)\cdots\left(n_1\; n_2 \right)\cdot\epsilon_{\mathbb{A}}\right)$ $=f\left( \widetilde{c}\cdot\epsilon_{n_{j_1+1}}\cdot\epsilon_{\mathbb{A}}\right)$, where $\widetilde{c}=\left(n_1\;\cdots\; n_{j_1-1} \;n_{j_1+1}\;\cdots\;n_k \right)$. Applying these equalities $k$ times, we obtain $f\left(c\cdot\epsilon_\mathbb{A} \right)=f\left(\epsilon_{{\rm supp}\,c} \right)$.
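For example, for the transposition $c=(1\;2)$ and $\mathbb{A}=\{1\}$ this chain of manipulations reduces to $f\left((1\;2)\epsilon_{\{1\}}\right)=f\left(\epsilon_{\{1\}}(1\;2)\epsilon_{\{1\}}\right)=f\left(\epsilon_{\{1\}}\epsilon_{\{2\}}\right)=f\left(\epsilon_{{\rm supp}\,c}\right)$, where the first equality uses the centrality of $f$ and the idempotence of $\epsilon_{\{1\}}$.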
Therefore, the next corollary supplements Theorem \ref{Mult_th} and requires no further proof.
\begin{Co}\label{Corollary}
There exists $\rho\in[0,1]$ such that
for any quasicycle $q=c\cdot\epsilon_\mathbb{A}$, where $\mathbb{A}\neq\emptyset$, we have $f\left( q\right)=\rho^{\#({{\rm supp}\,c})}$.
\end{Co}
The next statement follows from Theorem \ref{Mult_th} and \cite{Thoma}.
\begin{Prop}\label{restriction_to_symmetric}
The restriction of $f$ to the symmetric subgroup $\mathfrak{S}_\infty\subset R_\infty$ is a character of $\mathfrak{S}_\infty$. Denote by $\alpha=\left\{\alpha_1\geq\alpha_2\geq\ldots>0 \right\}$ and $\beta=\left\{\beta_1\geq\beta_2\geq\ldots>0\right\}$ the corresponding {\it Thoma parameters}.
Let $s$ be a permutation from $\mathfrak{S}_\infty$ and let $s=c_1\cdot c_2 \cdots c_k$ be its cycle decomposition, where the length of the cycle $c_j$ equals $l(c_j)>1$. Then
\begin{eqnarray*}
f(c_j)=\sum\limits_i \alpha_i^{l(c_j)}+(-1)^{l(c_j)}\sum\limits_i\beta_i^{l(c_j)}\text{ \rm and } f(s)=\prod\limits_{j=1}^k f\left(c_j \right).
\end{eqnarray*}
Further, we denote this restriction by $\chi_{\alpha\beta}$.
\end{Prop}
The main result of this paper is the following theorem.
\begin{Th}[Main Theorem]\label{main_th}
Under the notation of Corollary \ref{Corollary} and Proposition \ref{restriction_to_symmetric}, we have $\rho\in \alpha\cup \{0\}$. If $\rho=\lambda$, $c= (1\,2\,\ldots\,k)$ and $\mathbb{A}\subset{\rm supp}\,c= \{1,2,\ldots, k\}$ then $f\left(c\epsilon_{\mathbb{A}}\right)=\left\{
\begin{array}{rl}
\lambda^k,&\text{ if } \mathbb{A}\neq \emptyset\\
\chi_{\alpha\beta}(c),&\text{ if } \mathbb{A}= \emptyset.
\end{array}\right.$
\end{Th}
\section{The proof of the main theorem}
Let us first recall an important statement from \cite{Ok1, Ok2} that we will use below.
\begin{Lm}\label{OkUnkov_operator}
For any $k$ the sequence $\left\{\pi\left(\left(k\;n \right) \right) \right\}_{n=1}^\infty$ converges in the weak operator topology to self-adjoint operator $\mathcal{O}_k\in\pi\left(\mathfrak{S}_\infty \right)^{\prime\prime}\subset\pi\left(R_\infty \right)^{\prime\prime}$.
\end{Lm}
\begin{proof}
To prove this, it suffices to notice that $\left(\pi\left(\left(k\;n+1 \right) \right)\cdot\pi(r_1)\xi,\pi(r_2)\xi \right)$ $=\left(\pi\left(\left(k\;N \right) \right)\cdot\pi(r_1)\xi,\pi(r_2)\xi \right)$ for all $r_1, r_2\in R_\infty$ and $N>n$ with $n$ sufficiently large (depending on $r_1,r_2$), and then to apply (\ref{bicyclic_1}).
\end{proof}
\begin{Lm}
Let $S(\mathcal{O}_k)$ be the spectrum of the operator $\mathcal{O}_k$ and let $\mu$ denote the spectral measure of $\mathcal{O}_k$ corresponding to the vector $\xi$. Then the following hold:
\begin{itemize}
\item {\bf 1}) the measure $\mu$ is discrete and its atoms can only accumulate at zero;
\item {\bf 2}) if $\;\mathcal{O}_k=\sum\limits_{\lambda\in S\left(\mathcal{O}_k \right)} \lambda E_k(\lambda)$ is the spectral decomposition of $\mathcal{O}_k$ then $\left(E_k(\lambda)\xi,\xi \right)=m(\lambda)\cdot|\lambda|$, where $m(\lambda)\in\mathbb{N}\cup\{0\}$;
\item {\bf 3}) if $\lambda\in S(\mathcal{O}_k)$ is positive (negative) then there exists some Thoma parameter (see Proposition \ref{restriction_to_symmetric}) such that $\lambda$ $=\alpha_k$ $\left(\lambda=-\beta_k \right)$ and $m(\lambda)=\#\left\{k:\alpha_k=\lambda \right\}\;\;\left(\#\left\{k:-\beta_k=\lambda \right\} \right)$.
\end{itemize}
\end{Lm}
\begin{Lm}\label{commutativity-lemma}
The operators $\mathcal{O}_j$ and $\pi(\epsilon_{\{k\}})$ mutually commute.
\end{Lm}
\begin{proof}
In the case $k\neq j$ the lemma is obvious. By (\ref{bicyclic_1}), it is sufficient to show that
\begin{eqnarray}\label{commutativity}
\left(\pi(\epsilon_{\{k\}})\cdot\mathcal{O}_k\xi,\pi(r)\xi \right)=\left(\mathcal{O}_k\cdot\pi(\epsilon_{\{k\}})\xi,\pi(r)\xi \right)\;\text{ for all } \; r\in R_\infty.
\end{eqnarray}
Fix a natural number $N(r)$ such that $r\in R_{N(r)}$. For any sufficiently large number $N$ the following chain of equalities holds:
\begin{eqnarray*}
&\left(\pi(\epsilon_{\{k\}})\cdot\mathcal{O}_k\xi,\pi(r)\xi \right)\stackrel{\text{Lemma }\ref{OkUnkov_operator}}{=}\lim\limits_{n\to\infty}\left(\pi(\epsilon_{\{k\}})\cdot\pi\left(\left(k\;n\right) \right)\xi,\pi(r)\xi \right)\\
&\stackrel{N>{\rm max}\{k,N(r)\}}{=}\left(\pi(\epsilon_{\{k\}})\cdot\pi\left(\left(k\;N\right) \right)\xi,\pi(r)\xi \right)=\left(\pi(\epsilon_{\{k\}})\cdot\pi\left(\left(k\;N\right) \right)\cdot(\pi(r))^*\xi,\xi \right)\\
&=\left(\pi\left(\left(k\;N\right) \right)\cdot\pi(\epsilon_{\{N\}})\cdot(\pi(r))^*\xi,\xi \right)\stackrel{N>N(r)}{=}\left(\pi\left(\left(k\;N\right) \right)\cdot (\pi(r))^*\cdot\pi(\epsilon_{\{N\}})\xi,\xi \right)\\
&=\left(\pi(\epsilon_{\{N\}})\cdot\pi\left(\left(k\;N\right) \right)\cdot (\pi(r))^*\xi,\xi \right)=\left(\pi(\epsilon_{\{N\}})\cdot\pi\left(\left(k\;N\right) \right) \xi,\pi(r)\xi \right)\\
&=\left(\pi\left(\left(k\;N\right) \right)\cdot\pi(\epsilon_{\{k\}}) \xi,\pi(r)\xi \right)=\left(\mathcal{O}_k\cdot\pi(\epsilon_{\{k\}})\xi,\pi(r)\xi \right).
\end{eqnarray*}
The equality (\ref{commutativity}) is proved.
\end{proof}
\begin{Lm}
Let $\mathfrak{A}_j$ be the $w^*$-algebra generated by the operators $\mathcal{O}_j$ and $\pi(\epsilon_{\{j\}})$. Then the following hold:
\begin{itemize}
\item {\rm i}) $\pi(\epsilon_{\{j\}})$ is a minimal projection in $\mathfrak{A}_j$;
\item {\rm ii}) if $\lambda\leq 0$ then $E_j(\lambda)\cdot\pi(\epsilon_{\{j\}})=0$.
\end{itemize}
\end{Lm}
\begin{proof}
To prove the property {\rm i}), we notice that
\begin{eqnarray}\label{relations_generators}
\pi(\epsilon_{\{j\}})\cdot\pi\left(\left(j\;n \right) \right)\cdot\pi(\epsilon_{\{j\}})=\pi(\epsilon_{\{j\}})\cdot\pi(\epsilon_{\{n\}}).
\end{eqnarray}
Since $\pi$ is a ${\rm II}_1$-factor representation, the limit of the sequence $\left\{ \pi(\epsilon_{\{n\}})\right\}$ exists in the weak operator topology. Namely,
\begin{eqnarray*}
w^*{\text{-}}\lim\limits_{n\to\infty}\pi(\epsilon_{\{n\}})=\kappa\cdot I, \text{ where } \kappa=\left(\pi(\epsilon_{\{1\}})\xi,\xi \right).
\end{eqnarray*}
Hence, applying (\ref{relations_generators}), lemma \ref{OkUnkov_operator}, lemma \ref{commutativity-lemma} and passing to the limit $n\to\infty$, we obtain
\begin{eqnarray}\label{need_for_zero}
\mathcal{O}_j\cdot\pi(\epsilon_{\{j\}})=\pi(\epsilon_{\{j\}})\cdot\mathcal{O}_j\cdot\pi(\epsilon_{\{j\}})=\kappa\cdot\pi(\epsilon_{\{j\}}).
\end{eqnarray}
Therefore, $\mathcal{O}_j^m\cdot\pi(\epsilon_{\{j\}})=\kappa^m\cdot \pi(\epsilon_{\{j\}})$ for all $m\in\mathbb{N}$. Property {\rm i}) is proved.
We now come to the proof of {\rm ii}).
By property {\rm i}) and Lemma \ref{commutativity-lemma}, in the case when $\pi(\epsilon_{\{j\}})\neq 0$ there exists exactly one spectral projection, say $E_j(\hat{\lambda})$, such that
\begin{eqnarray*}
E_j(\hat{\lambda})\cdot\pi(\epsilon_{\{j\}})= \pi(\epsilon_{\{j\}}) \text{ and } E_j(\lambda^\prime)\cdot\pi(\epsilon_{\{j\}})=0\; \text{ for all } \lambda^\prime\neq\hat{\lambda}.
\end{eqnarray*}
Hence, under the condition $\hat{\lambda}\neq0$, we obtain
\begin{eqnarray*}
&\hat{\lambda}\left(\pi(\epsilon_{\{j\}})\xi,\xi \right)=\left(\mathcal{O}_j\cdot\pi(\epsilon_{\{j\}})\xi,\xi \right)
\stackrel{\text{Lemma }\ref{OkUnkov_operator}}{=}\lim\limits_{n\to\infty}\left(\pi((j\; n))\cdot \pi(\epsilon_{\{j\}})\xi,\xi\right)\\
&\stackrel{N\neq j}{=}\left(\pi((j\; N))\cdot \pi(\epsilon_{\{j\}})\xi,\xi \right)=\left(\pi(\epsilon_{\{j\}})\cdot\pi((j\; N))\cdot \pi(\epsilon_{\{j\}})\xi,\xi \right)\\
&=\left(\pi(\epsilon_{\{j\}})\cdot \pi(\epsilon_{\{N\}})\xi,\xi \right)\stackrel{\text{Theorem \ref{Mult_th}}}{=}\left(\pi(\epsilon_{\{j\}})\xi,\xi \right)\left( \pi(\epsilon_{\{N\}})\xi,\xi \right)=\left(\pi(\epsilon_{\{j\}})\xi,\xi \right)^2.
\end{eqnarray*}
Since $\pi(\epsilon_{\{j\}})\neq 0$, then, by (\ref{bicyclic_1}), $\left(\pi(\epsilon_{\{j\}})\xi,\xi \right)\neq0$. Therefore,
\begin{eqnarray}\label{nonzero}
\hat{\lambda}=\left(\pi(\epsilon_{\{j\}})\xi,\xi \right)>0.
\end{eqnarray}
If $\hat{\lambda}=0$, i.e. $\pi(\epsilon_{\{j\}})\leq E_j(0)$, then, using (\ref{need_for_zero}), we obtain $\left(\pi(\epsilon_{\{j\}})\xi,\xi \right)=0$.
Thus, by (\ref{bicyclic_1}), $\pi(\epsilon_{\{j\}})=0$.
\end{proof}
The next statement follows from the preceding lemma and Lemma \ref{commutativity-lemma}.
\begin{Co}\label{co_main}
If $f\left(\epsilon_{\{k\}} \right)\neq0$ then there exists positive $\lambda\in S\left(\mathcal{O}_k \right)$ such that $\pi(\epsilon_{\{k\}})\leq E_k(\lambda)$ and $f\left(\epsilon_{\{k\}}\right)=\lambda$.
\end{Co}
\subsection{The proof of Theorem \ref{main_th}}\label{proof_main_th}
By Theorem \ref{Mult_th} and Proposition \ref{restriction_to_symmetric}, it is sufficient to find the value of the character $f$ on a quasicycle $q=c\cdot\epsilon_\mathbb{A}$, where $c=(1\;2\;\ldots\;k)$, $\mathbb{A}\subset{\rm supp}\,c= \{1,2,\ldots, k\}$ and $\mathbb{A}\neq\emptyset$. Without loss of generality we can assume that $\pi(\epsilon_{\{k\}})\neq0$.
Define the map $\vartheta_{\mathbb{A}}: \left\{1,2,\ldots,k \right\}\to \left\{e,\epsilon_{\{1\}} \right\}$, where $e$ is the unit of $R_\infty$, by
\begin{eqnarray}
\vartheta_\mathbb{A}(j)=\left\{
\begin{array}{rl}
\epsilon_{\{1\}},&\text{ if } j\in\mathbb{A}\\
e,&\text{ if } j\notin\mathbb{A}.
\end{array}\right.
\end{eqnarray}
Since $c=(1\;k)\cdot(1\;k-1)\cdots (1\;2)$, we have
\begin{eqnarray*}
c\cdot\epsilon_{\mathbb{A}}=\vartheta_{\mathbb{A}}(k)\cdot(1\;k)\cdot\vartheta_{\mathbb{A}}(k-1)\cdot(1\;k-1)\cdots\vartheta_{\mathbb{A}}(2)\cdot(1\;2)\cdot\vartheta_{\mathbb{A}}(1).
\end{eqnarray*}
Therefore, for any collection $\left\{n_2< n_3<\ldots< n_k \right\}\subset\mathbb{N}$, where $n_2>k$, we obtain
\begin{eqnarray*}
f\left(c\cdot\epsilon_{\mathbb{A}} \right)=\left(\pi\left(\vartheta_{\mathbb{A}}(k)\cdot(1\;k)\cdot\vartheta_{\mathbb{A}}(k-1)\cdot(1\;k-1)\cdots\vartheta_{\mathbb{A}}(2)\cdot(1\;2) \cdot\vartheta_{\mathbb{A}}(1)\right)\xi,\xi\right)\\
=\left(\pi\left(\vartheta_{\mathbb{A}}(k)\cdot(1\;n_k)\cdot\vartheta_{\mathbb{A}}(k-1)\cdot(1\;n_{k-1})\cdots\vartheta_{\mathbb{A}}(2)\cdot(1\;n_2)\cdot\vartheta_{\mathbb{A}}(1) \right)\xi,\xi\right).
\end{eqnarray*}
Passing successively to the limits $n_k\to\infty, n_{k-1}\to\infty,\ldots,n_2\to\infty$, we arrive at the relation
\begin{eqnarray*}
\begin{split}
f\left(c\cdot\epsilon_{\mathbb{A}} \right)\stackrel{\text{Lemma \ref{OkUnkov_operator}}}{=}\left(\pi\left(\vartheta_{\mathbb{A}}(k) \right)\cdot\mathcal{O}_1\cdot \pi\left(\vartheta_{\mathbb{A}}(k-1) \right)\cdot\mathcal{O}_1\cdots \pi\left(\vartheta_{\mathbb{A}}(2) \right)\cdot\mathcal{O}_1\cdot\pi\left(\vartheta_{\mathbb{A}}(1) \right)\xi,\xi\right)\\
\stackrel{\text{Lemma \ref{commutativity-lemma} }}=\left\{
\begin{array}{rl}
\left(\pi\left(\epsilon_{\{1\}} \right)\mathcal{O}_1^{k-1}\xi,\xi \right),&\text{ if } \mathbb{A}\neq \emptyset\\
\left(\mathcal{O}_1^{k-1}\xi,\xi \right),&\text{ if } \mathbb{A}= \emptyset.
\end{array}\right.
\stackrel{{\text{Corollary \ref{co_main}}}}{=}\left\{
\begin{array}{rl}
\lambda^k,&\text{ if } \mathbb{A}\neq \emptyset\\
\chi_{\alpha\beta}(c),&\text{ if } \mathbb{A}= \emptyset.
\end{array}\right.
\end{split}
\end{eqnarray*}
Theorem \ref{main_th} is proved.
\section{The realisations of ${\rm II}_1$-factor-representations}\label{Realisation}
Our aim in this section is the construction of ${\rm II}_1$-factor-representations of $R_\infty$.
\subsection{The parameters of the ${\rm II}_1$-factor-representations.}\label{parameters_of_repr}
Let $B(\mathbf{H})$ denote
the set of all (bounded linear) operators acting on the complex Hilbert space $\mathbf{H}$.
Let ${\rm Tr}$ be the ordinary\footnote{ ${\rm Tr}(\mathfrak{p})=1$ for any nonzero minimal projection $\mathfrak{p}\in B(\mathbf{H})$. } trace on $B(\mathbf{H})$. Fix a self-adjoint operator $A\in B(\mathbf{H})$ and a {\it minimal} orthogonal projection $\mathbf{q}\in B(\mathbf{H})$ such that $A\cdot\mathbf{q}=\mathbf{q}\cdot A$. Let ${\rm Ker}\, A=\left\{ u\in \mathbf{H}:Au=0\right\}$, and let
$({\rm Ker}\, A)^\perp=\mathbf{H}\ominus {\rm Ker}\, A $. Denote by $E(\Delta)$ the spectral projection of the operator $A$ corresponding to $\Delta\subset \mathbb{R}$. Suppose that the following conditions hold:
\begin{itemize}
\item {\rm(a)} ${\rm Tr} (|A|)\leq1$,
\item {\rm(b)} if $\mathbf{q}\neq0$, then ${\rm Tr}(\mathbf{q})=1$;
\item {\rm(c)} if ${\rm Tr} (|A|)=1$ then ${\rm Ker}\, A=0$;
\item {\rm(d)} if ${\rm Tr} (|A|)<1$ then
$\dim ({\rm Ker}\, A)=\infty$;
\item {\rm(e)} $\mathbf{q}\cdot E([-1,0])=0$;
\end{itemize}
\subsection{Hilbert space $\mathcal{H}_A^\mathbf{q}$.}\label{hp}
Let $\mathbb{S}=\left\{ 1,2,\ldots, {\rm dim}\,\mathbf{H} \right\}$.
Fix a system of matrix units $\left\{ e_{kl}\right\}_{k,l\in\mathbb{S}}\subset B(\mathbf{H})$. Suppose for convenience that
\begin{eqnarray*}
Ae_{ll}=e_{ll}A \text{ for all } l.
\end{eqnarray*}
Let $\mathbb{S}_{reg}=\left\{ n_1,n_2,\ldots\right\}=\left\{ l:e_{ll}\mathbf{H}\subset {\rm Ker}\, A\right\}$ \label{s_regular}(see ({\rm d})), where $n_k<n_{k+1}$.
Define
a state $\psi_k$ on $B\left(\mathbf{H} \right)$ as
follows
\begin{eqnarray}\label{psik}
\psi_k\left(b \right)={\rm Tr}\left(b|A| \right)+\left(1-{\rm
Tr}\left(|A| \right)\right) {\rm Tr}\left(b e_{n_kn_k}\right),\;\;b\in B\left(\mathbf{H} \right).
\end{eqnarray}
Let $ _1\psi_k$ denote the product-state on $B\left(\mathbf{H}\right)^{\otimes k}$:
\begin{eqnarray}\label{product_psik}
_1\psi_k\left(b_1\otimes b_2\otimes \ldots\otimes b_k
\right)=\prod\limits_{j=1}^k\psi_j\left(b_j \right).
\end{eqnarray}
Now define inner product on $B \left( \mathbf{H}\right)^{\otimes k}$ by
\begin{eqnarray}\label{inner_product_psik}
\left( v,u\right)_k=\,_1\psi_k\left( u^*v \right).
\end{eqnarray}
Let $\mathcal{H}_k$ denote the Hilbert space obtained by completing
$B \left( \mathbf{H}\right)^{\otimes k}$ in above inner product
norm. Now we consider the natural isometric embedding
\begin{eqnarray*}
\mathcal{H}_k\ni v\mapsto v\otimes {\rm I}\in\mathcal{H}_{k+1},
\end{eqnarray*}
and define the Hilbert space $\mathcal{H}^\mathbf{q}_A$ as the completion of
$\bigcup\limits_{k=1}^\infty \mathcal{H}_k$.
\subsection{The action of $R_\infty$ on $\mathcal{H}^\mathbf{q}_A$. }\label{action}
First, using the embedding $B\left(\mathbf{H} \right)^{\otimes k}\ni a\mapsto a\otimes{\rm I}\in B\left(\mathbf{H}\right)^{\otimes (k+1)}$, we identify $B\left(\mathbf{H}\right)^{\otimes k}$ with the subalgebra $B\left(\mathbf{H}\right)^{\otimes k}\otimes\mathbb{C}\subset B\left(\mathbf{H}\right)^{\otimes (k+1)}$. Therefore, the algebra
$B\left(\mathbf{H}\right)^{\otimes\infty}=\bigcup\limits_{n=1}^\infty
B\left(\mathbf{H} \right)^{\otimes n}$ is well defined.
Now we construct the explicit embedding of $\mathfrak{S}_\infty$ into the unitary subgroup of $B\left(\mathbf{H}\right)^{\otimes\infty}$. For $a\in B\left(\mathbf{H}\right)$ put $a^{(k)}={\rm I}\otimes\cdots\otimes{\rm I}\otimes\underbrace{a}_{k}\otimes{\rm I}\otimes{\rm I}\cdots$. Let $E_{-}=E([-1,0[)$ and let
\begin{eqnarray*}
U_{E_{-}}^{(k,\,k+1)}=({\rm I}-E_{-})^{(k)}({\rm I}-E_{-})^{(k+1)}+E_{-}^{(k)} ({\rm
I}-E_{-})^{(k+1)}\\
+ ({\rm I}-E_{-})^{(k)}E_{-}^{(k+1)}-E_{-}^{(k)}E_{-}^{(k+1)}.
\end{eqnarray*}
Define the unitary operator $T\left((k\;k+1)\right)\in B\left(\mathbf{H}
\right)^{\otimes\infty}$ as follows
\begin{eqnarray}\label{embeding_T}
T\left((k\;k+1)\right)=U_{E_{-}}^{(k,\, k+1)}\sum\limits_{ij}e_{ij}^{(k)}e_{ji}^{(k+1)}.
\end{eqnarray}
Put $T(\epsilon _{\{1\}})=\mathbf{q}^{(1)}$.
Using the relation $(k\;k+m)=(k+m-1\;k+m)\cdots(k+1\;k+2)\cdot(k\;k+1)\cdot(k+1\;k+2)\cdots(k+m-1\;k+m)$,
one can prove that
\begin{eqnarray}\label{any_transposition}
\begin{split}
T\left((k\;k+m)\right)=U_{E_{-}}^{(k,\, k+m)}\sum\limits_{ij}e_{ij}^{(k)}e_{ji}^{(k+m)}, \text{ where }\\
U_{E_{-}}^{(k,\, k+m)}=({\rm I}-E_{-})^{(k)}({\rm I}-E_{-})^{(k+m)}-E_{-}^{(k)}E_{-}^{(k+m)}\\
+\left(({\rm I}-E_{-})^{(k)}E_{-}^{(k+m)}+E_{-}^{(k)}({\rm I}-E_{-})^{(k+m)}\right)\prod\limits_{j=k+1}^{k+m-1}\left({\rm I}-2E_- \right)^{(j)}.
\end{split}
\end{eqnarray}
An easy verification of the standard relations between $\left\{ T((k\;k+1))\right\}_{k\in\mathbb{N}}$ and
$\mathbf{q}^{(1)}$ shows that
$T$ extends by multiplicativity to a $\star$-homomorphism of $R_\infty$ into $B\left(\mathbf{H}\right)^{\otimes\infty}$.
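As a simple sanity check: if $A\geq 0$ then $E_-=0$, the unitary $U_{E_{-}}^{(k,\,k+1)}$ reduces to the identity, and (\ref{embeding_T}) becomes $T\left((k\;k+1)\right)=\sum_{ij}e_{ij}^{(k)}e_{ji}^{(k+1)}$, which is the usual flip of the $k$-th and $(k+1)$-st tensor factors.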
Left multiplication in $B\left(\mathbf{H}\right)^{\otimes\infty}$ defines a $\star$-representation $\mathfrak{L}_A$ of $B\left(\mathbf{H}\right)^{\otimes\infty}$ by bounded operators on $\mathcal{H}^\mathbf{q}_A$. Put $\Pi_A(r)=\mathfrak{L}_A\left( T(r)\right)$, $r\in R_\infty$. Denote by $\pi_A^{(0)}$ the restriction of $\Pi_A$ to $\left[ \Pi_A\left( R_\infty\right)\xi _0\right]$, where $\xi _0$ is the vector in $\mathcal{H}^\mathbf{q}_A$ corresponding to the unit element of $B\left(\mathbf{H}\right)^{\otimes\infty}$.
\begin{Rem}
If $T\left( \epsilon_{\{1\}} \right)=0$ and $T(s)$ $(s\in\mathfrak{S}_\infty)$ is defined by (\ref{embeding_T}), then $\left\{ \pi_A^{(0)}\left( R_\infty \right) \right\}^{\prime\prime}=\left\{ \pi_A^{(0)}\left( \mathfrak{S}_\infty \right) \right\}^{\prime\prime}$ and the corresponding representation $\pi_A^{(0)}$ is ${\rm II}_1$-factor-representation of $R_\infty$.
\end{Rem}
\begin{Rem}
If $A=\mathbf{p}$, where $\mathbf{p}$ is one-dimensional projection, then $\left(\Pi_\mathbf{p}(s)\xi _0,\xi _0 \right)_{\mathcal{H}^\mathbf{q}_A}=1$ for all $s\in\mathfrak{S}_\infty$. Therefore, we obtain two corresponding representations:
\begin{itemize}
\item {\bf 1}) $\pi_\mathbf{p}^{(0)}(s)=I$ for all $s\in\mathfrak{S}_\infty$, $\pi_\mathbf{p}^{(0)}(\epsilon_{\{k\}})=I$;
\item {\bf 2}) $\pi_\mathbf{p}^{(0)}(s)=I$ for all $s\in\mathfrak{S}_\infty$, $\pi_\mathbf{p}^{(0)}(\epsilon_{\{k\}})=0$.
\end{itemize}
If $A=-\mathbf{p}$, then we have the unique representation: $\pi_{-\mathbf{p}}^{(0)}(s)=({\rm sign}\,s)\cdot I$ for all $s\in\mathfrak{S}_\infty$ and $\pi^{(0)}_{-\mathbf{p}}(\epsilon_{\{k\}})=0$.
\end{Rem}
\subsection{The character formula}
Let $f(r)=\left(\pi_A^{(0)}(r)\xi_0,\xi_0 \right)$, $r\in R_\infty$. Here we will find a formula for $f$. The next statement follows from (\ref{any_transposition}) by a direct calculation.
\begin{Lm}\label{Realisations_okounkov}
Let $E_-\neq I$. Then $\lim\limits_{n\to\infty}\pi_A^{(0)}((k\;n))=\mathfrak{L}_A\left( A^{(k)} \right)$ in the weak operator topology.
\end{Lm}
It follows from the definition of $\pi_A^{(0)}$ that $f$ satisfies the conclusion of Theorem \ref{Mult_th}. Therefore, it is sufficient to find the value of $f$ on a quasicycle $q=c\epsilon_\mathbb{A}$, where $c=(1\;2\;\ldots\;k)$ and $\mathbb{A}\subset{\rm supp}\,c$.
As in the proof of Theorem \ref{main_th} (Section \ref{proof_main_th}), Lemma \ref{Realisations_okounkov} gives
\begin{eqnarray*}
f\left(c\cdot\epsilon_{\mathbb{A}}\right) =
\left\{\begin{array}{rl}
{\rm Tr}\left(|A|\cdot\mathbf{q}\cdot A^{k-1} \right),&\text{ if } \mathbb{A}\neq \emptyset\\
{\rm Tr}\left( |A|\cdot A^{k-1} \right) ,&\text{ if } \mathbb{A}= \emptyset.
\end{array}\right.
\end{eqnarray*}
It follows from the conditions of Subsection \ref{parameters_of_repr} that there exists a spectral projection $E(\lambda)$ of the operator $A$, where $\lambda>0$, with the property $\mathbf{q}\cdot E(\lambda)=\mathbf{q}$. Hence, using Proposition \ref{restriction_to_symmetric}, we obtain
\begin{eqnarray*}
f\left(c\cdot\epsilon_{\mathbb{A}}\right) =
\left\{\begin{array}{rl}
\lambda^k,&\text{ if } \mathbb{A}\neq \emptyset\\
\chi_{\alpha\beta}(c) ,&\text{ if } \mathbb{A}= \emptyset.
\end{array}\right.
\end{eqnarray*}
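In particular, for $k=1$ and $\mathbb{A}=\{1\}$ the first formula gives $f(\epsilon_{\{1\}})={\rm Tr}\left(|A|\cdot\mathbf{q}\right)=\lambda\,{\rm Tr}(\mathbf{q})=\lambda$, since $\mathbf{q}\leq E(\lambda)$ with $\lambda>0$ and ${\rm Tr}(\mathbf{q})=1$; this agrees with the value $\lambda^k$ for $k=1$.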
{}
B. Verkin ILTPE of NASU - B.Verkin Institute for Low Temperature Physics and Engineering
of the National Academy of Sciences of Ukraine
[email protected]
\end{document}
\begin{document}
\title{Near-Optimal Decremental Hopsets with Applications}
\thispagestyle{empty}
\begin{abstract}
Given a weighted undirected graph $G=(V,E,w)$, a hopset $H$ of \emph{hopbound} $\beta$ and \emph{stretch} $(1+\epsilon)$ is a set of edges such that for any pair of nodes $u, v \in V$, there is a path in $G \cup H$ of at most $\beta$ hops, whose length is within a $(1+\epsilon)$ factor from the distance between $u$ and $v$ in $G$.
We show the first efficient decremental algorithm for maintaining hopsets with a \textit{polylogarithmic} hopbound.
The update time of our algorithm matches the best known static algorithm up to polylogarithmic factors. All the previous decremental hopset constructions had a \textit{superpolylogarithmic} (but subpolynomial) hopbound of $2^{\log^{\Omega(1)} n}$ [Bernstein, FOCS'09; HKN, FOCS'14; Chechik, FOCS'18].
By applying our decremental hopset construction, we get improved or near optimal bounds for several distance problems.
Most importantly, we show how to decrementally maintain $(2k-1)(1+\epsilon)$-approximate all-pairs shortest paths (for any constant $k \geq 2)$, in $\tilde{O}(n^{1/k})$ amortized update time\footnote{Throughout this paper we use the notation $\tilde{O}(f(n))$ to hide factors of $O(\text{polylog } (f(n)))$.} and $O(k)$ query time.
This improves (by a polynomial factor) over the update time of the best previously known decremental algorithm in the \textit{constant} query time regime. Moreover, it improves over the result of [Chechik, FOCS'18] that has a query time of $O(\log \log(nW))$, where $W$ is the aspect ratio, and an amortized update time of $n^{1/k}\cdot(\frac{1}{\epsilon})^{\tilde{O}(\sqrt{\log n})}$. For sparse graphs our construction nearly matches the best known static running time / query time tradeoff.
We also obtain near-optimal bounds for maintaining approximate multi-source shortest paths and distance sketches, and get improved bounds for approximate single-source shortest paths. Our algorithms are randomized and our bounds hold with high probability against an \textit{oblivious} adversary.
\end{abstract}
\section{Introduction}
Given a weighted undirected graph $G=(V,E,w)$, a hopset $H$ of \emph{hopbound} $\beta$ and \emph{stretch} $(1+\epsilon)$ (or, a $(\beta, 1+\epsilon)$-hopset) is a set of edges such that for any pair of nodes $u, v \in V$, there is a path in $G \cup H$ of at most $\beta$ hops, whose length is within a $(1+\epsilon)$ factor from the distance between $u$ and $v$ in $G$ (see Definition~\ref{def:hopset} for a formal statement).
Hopsets, originally defined by \cite{cohen2000}, are widely used in distance related problems in various settings, such as parallel shortest path computation \cite{cohen2000,miller2015,elkin2019almost, elkin2019RNC}, distributed shortest path computation \cite{elkin2019journal, nanongkai2014, censor2019}, routing tables \cite{elkin2017}, and distance sketches \cite{elkin2017, dinitz2019}.
In addition to their direct applications, hopsets have recently gained more attention as a fundamental object (e.g.~\cite{merav2020, elkin2019journal,abboud2018,huang2019}), and are known to be closely related to several other fundamental objects such as additive (or near-additive) spanners and emulators~\cite{elkin2020survey}.
A key parameter of a hopset is its hopbound. In many settings, after constructing a hopset, we can approximate distances in a time that is proportional to the hopbound. For instance, in parallel or distributed settings a hopset with a hopbound of $\beta$ allows us to compute approximate single-source shortest path in $\beta$ parallel rounds (e.g. by using Bellman-Ford).
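To make the role of the hopbound concrete, the following sketch (in Python, with illustrative names; it is not part of our decremental algorithm) runs $\beta$ synchronous Bellman-Ford rounds on $G \cup H$. If $H$ is a $(\beta, 1+\epsilon)$-hopset, the computed values are $(1+\epsilon)$-approximate distances from the source.

```python
from collections import defaultdict

def bounded_hop_sssp(n, edges, hopset_edges, source, beta):
    """Run `beta` rounds of Bellman-Ford on the undirected graph G u H.

    `edges` and `hopset_edges` are lists of (u, v, w) with 0 <= u, v < n.
    With a (beta, 1+eps)-hopset H, the result is a (1+eps)-approximation
    of the true distances in G from `source`.
    """
    adj = defaultdict(list)
    for u, v, w in list(edges) + list(hopset_edges):
        adj[u].append((v, w))
        adj[v].append((u, w))

    dist = [float("inf")] * n
    dist[source] = 0.0
    # Each synchronous round extends shortest paths by one hop, so after
    # beta rounds every path of at most beta hops in G u H is accounted for.
    for _ in range(beta):
        new_dist = dist[:]
        updated = False
        for u in range(n):
            if dist[u] == float("inf"):
                continue
            for v, w in adj[u]:
                if dist[u] + w < new_dist[v]:
                    new_dist[v] = dist[u] + w
                    updated = True
        dist = new_dist
        if not updated:
            break
    return dist

if __name__ == "__main__":
    # Tiny illustration: a 4-node path plus one exact hopset shortcut.
    g = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
    h = [(0, 2, 2.0)]
    print(bounded_hop_sssp(4, g, h, source=0, beta=2))  # [0.0, 1.0, 2.0, 3.0]
```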
For many applications, such as approximate APSP (all-pairs shortest paths), MSSP (multi-source shortest paths), computing distance sketches, and diameter approximation, where we require computing distances from \textit{many} sources, we are interested in the regime where the hopbound is polylogarithmic. Indeed, we obtain improved (and in some cases near-optimal) bounds for several of these problems in decremental settings.
In this paper, we study the maintenance of hopsets in a dynamic setting.
Namely, we give an algorithm that given a weighted undirected graph $G$ maintains a hopset of $G$ under edge deletions.
Our algorithm covers a wide range of hopbound/update time/hopset size tradeoffs. Importantly, we get the first efficient algorithm for decrementally maintaining a hopset with a \textit{polylogarithmic} hopbound.
In this case, assuming $G$ initially has $m$ edges and $n$ vertices, our algorithm takes $O(mn^{\rho})$ time, given any constant $\rho > 0$, and maintains a hopset of polylogarithmic hopbound and $1+\epsilon$ stretch.
This matches (up to polylogarithmic factors) the
running time of the best known static algorithm~\cite{elkin2019RNC, elkin2019journal} for computing a hopset with polylogarithmic hopbound and $(1+\epsilon)$ stretch.
\begin{theorem}
Given an undirected graph $G=(V,E)$ with polynomial weights\footnote{If weights are not polynomial the $\log n$ factor will be replaced with $\log W$ in the hopbound, and a factor of $\log^2 W$ will be added to the update time, where $W$ is the aspect ratio (the ratio between largest and smallest distance).}, subject to edge deletions, we can maintain a $(\beta, 1+\epsilon)$-hopset of size $\tilde{O}(n^{1+\frac{1}{2^k -1}})$ in total expected update time $\tilde{O}(\frac{\beta}{\epsilon} \cdot (m+n^{1+\frac{1}{2^k -1}})n^{\rho})$, where $\beta= (O(\frac{\log n}{\epsilon} \cdot (k+1/\rho)))^{k+1/\rho+1}$, $k \geq 1$ is an integer, $0 < \epsilon <1$ and $\frac{2}{2^k-1} < \rho <1$.
\end{theorem}
In the decremental setting, to the best of our knowledge, the previous state-of-the art hopset constructions have a hopbound of $2^{\tilde{O}(\log^{3/4} n)}$~\cite{henzinger2014}, or
$(1/\epsilon)^{\tilde{O}(\sqrt{\log n})}$~\cite{bernstein2009,chechik2018}. As a special case, by setting $\rho=(2^k-1)^{-1}=\frac{\log \log n}{\sqrt{\log n}}$, we can maintain a hopset with hopbound $2^{\tilde{O}(\sqrt{\log n})}$ in $2^{\tilde{O}(\sqrt{\log n})}$ amortized time.
More importantly, by setting $\rho$ and $k$ to a constant, we can maintain a hopset of \textit{polylogarithmic} hopbound.
While hopsets are extensively studied in other models of computation (e.g.~distributed and parallel settings), their applicability in dynamic settings is less understood.
Examples of results utilizing hopsets include the state-of-the-art decremental SSSP algorithm for undirected graphs by Henzinger, Krinninger and Nanongkai~\cite{henzinger2014}, and implicit hopsets considered in \cite{bernstein2009, chechik2018}. As stated, these decremental hopset algorithms only provide a \emph{superpolylogarithmic} hopbound. It may be possible (while not discussed) to use the hop-reduction techniques of \cite{henzinger2014} (inspired by a similar technique in \cite{bernstein2009}) to obtain a wider range of tradeoffs; however, to the best of our knowledge these techniques do not lead to near-optimal size/hopbound tradeoffs\footnote{ In particular, in all regimes the algorithm of \cite{henzinger2014} gives a hopset with size that is super-linear in the number of edges $m$ (e.g. $m^{1+p}$ for a parameter $p$), while our hopset size is $O(n^{1+p})$ for some (other but similar) parameter $p$, which is a constant when the hopbound is polylogarithmic. Moreover, our techniques lead to near-optimal approximate APSP, whereas it is unclear how to get comparable bounds using techniques in \cite{henzinger2014}, as they do not maintain Thorup-Zwick-based clusters.}.
Hence our result constitutes the first near-optimal decremental algorithm for maintaining hopsets in a wide range of settings, including the polylogarithmic hopbound regime.
\paragraph{Discussion on hopset limitations and alternative techniques.}
In \cite{abboud2018} it was shown that for a $(\beta, 1+\epsilon)$-hopset with size $n^{1+\frac{1}{{2^k}-1}- \delta}$ for any fixed $k, \epsilon$ and $\delta >0$ we must have\footnote{$\Omega_k$ hides a factor of roughly $1/(k2^k)$, which is exponentially small in $k$. As written in \cite{abboud2018} they assume $k$ is constant (and hence the sparse hopset regime is not covered), but they also indicate that a tighter analysis could change the exact relationship between $k$ and $\epsilon$ and hence allow a better $k$ dependence and cover the sparse case (see Theorem 4.6 and Remark 4.7 in \cite{abboud2018}).} $\beta = \Omega_k(\frac{1}{\epsilon})^k$. Their lower bound suggests that we cannot construct a $(\beta, 1+\epsilon)$-hopset of size $\tilde{O}(n)$ with hopbound $\beta = \textrm{poly} \log(n)$, implying that hopsets cannot be used for obtaining optimal time (i.e.~polylog amortized time) for \textit{sparse} graphs and \textit{very small} $\epsilon$.
However when the graph is slightly denser ($|E|= n^{1+\Omega(1)}$), the approximation factor is slightly larger (see e.g.~\cite{merav2020, elkin2019almost}), or we aim to compute distances from many sources (in APSP or MSSP), using hopsets may still lead to optimal algorithms. Indeed, we show that our decremental hopsets allow us to obtain a running time matching the best static algorithm (up to polylogarithmic factors) both in $(2k-1)$-APSP and $(1+\epsilon)$-MSSP. We leave it as an open problem if hopsets can be used to obtain linear time algorithms for SSSP with \textit{larger} approximation factors (e.g.~$\epsilon \geq 1$), since as stated, the lower bound of \cite{abboud2018} does not apply in this case.
It is worth noting that in Theorem \ref{thm:restricted_hopset} we first give a decremental algorithm that maintains static hopsets of \cite{elkin2019RNC} that matches the size/hopbound tradeoff in the lower bound of \cite{abboud2018}. However, as we will see, this algorithm has a large update time, and thus we propose a new hopset with slightly worse size/hopbound tradeoff that can be maintained much more efficiently. This efficient variant has additional polylogarithmic (in aspect ratio) factors in the hopbound relative to the existentially optimal construction.
Finally, for \textit{single source} shortest path computation in other models, algorithms based on continuous optimization techniques have recently been proposed (e.g. \cite{andoni2020,becker2021,li2020}) that outperform algorithms based only on combinatorial objects such as hopsets/emulators. These optimization techniques lead to a much better dependence on $\epsilon$, but are less suitable when there are many sources, as the running time scales with the number of sources. Interestingly, the authors of \cite{andoni2020} use low-hop combinatorial structures with \textit{larger (polylogarithmic)} stretch as a subroutine in their continuous optimization framework. Hence understanding both combinatorial and optimization directions seems crucial for distance computation in general.
\subsection{Applications of Our Decremental Hopsets}
To illustrate the applicability of our decremental hopset algorithm, we show how it yields improved algorithms for decrementally maintaining shortest paths from a fixed set $S$ of sources.
We consider different variants of the problem which differ in the size of $S$: the single-source shortest paths (SSSP) problem ($|S|=1$), the all-pairs shortest paths (APSP) problem ($|S| = n$, where $n$ is the number of vertices of the input graph), as well as the multi-source shortest paths (MSSP) problem ($S$ is of arbitrary size), which is a generalization of the previous two.
\paragraph{Near-Optimal approximate APSP.}
We give a new decremental algorithm for maintaining approximate all-pairs shortest paths (APSP) with \text{constant} query time.
\begin{theorem}[Approximate APSP]
For any constant integer\footnote{The $k$ here should not be confused with the parameter $k$ in the hopset size.} $k \geq 2$, there is a data structure that can answer $(2k-1)(1+\epsilon)$-approximate distance queries on a given weighted undirected graph $G=(V, E, w)$ subject to edge deletions.
The total expected update time over any sequence of edge deletions is $\tilde{O}(mn^{1/k})$ and the expected size of the data structure is $\tilde{O}(m+n^{1+1/k})$.
Each query for the distance between two vertices is answered in $O(k)$ worst-case time.
\end{theorem}
Our result improves upon a decremental APSP algorithm by Chechik~\cite{chechik2018} in two ways.
First, for constant $k$, our update time bound is better by a $(1/\epsilon)^{{O}(\sqrt{\log n})}$ factor.
Second, we bring down the query time from $O(\log \log (n W))$ to constant.
We note that in the area of distance oracles
a major goal is to preprocess a data structure that can return a distance estimate in \textit{constant} time \cite{mendel2007, wulff2012, roditty2005det, chechik2014}\footnote{We need to store the original graph in addition to the distance oracle in order to update the distances and maintain correctness, however we do \textit{not} need the whole graph for \textit{querying distances} as we will also point out in describing the applications in maintaining distance sketches.}.
Our results match the best known static algorithm with the same tradeoff (up to a $(1+\epsilon)$ factor in the stretch and polylogarithmic factors in the running time) by Thorup-Zwick \cite{TZ2005} for sparse graphs. For dense graphs there have been improvements by \cite{wulff2012} in static settings.
Prior to \cite{chechik2018}, Roditty and Zwick \cite{roditty2004} gave an algorithm for maintaining Thorup-Zwick distance oracles in total time $\tilde{O}(mn)$, stretch $(2k-1)(1+\epsilon)$ and $O(k)$ query time for \textit{unweighted graphs}. Later on, Bernstein and Roditty \cite{bernstein2011} gave a decremental algorithm for maintaining Thorup-Zwick distance oracles in $O(n^{2+1/k+o(1)})$ time using emulators also only for \textit{unweighted graphs}.
\paragraph{Distance Sketches.}
Another application of our hopsets with polylogarithmic hopbound is a near-optimal decremental algorithm for maintaining distance sketches (or distance labelings), an important tool in the context of distance computation. The goal is to store a small amount of information, a sketch, for each node, such that the distance between any pair of nodes can be approximated using only their sketches (without accessing the rest of the graph). Distance sketches are particularly important in networks and distributed systems \cite{sarma2015, elkin2017}, and in large-scale graph processing \cite{dinitz2019}. Their significance is that at query time we only need to access/communicate the small sketches rather than having to access the whole graph. This is especially useful for processing large data when queries happen more frequently than updates.
The Thorup-Zwick \cite{TZ2005} algorithm can be used to obtain distance sketches of expected size $O(kn^{1/k})$ (for each node) that support $(2k-1)$-approximate queries in $O(k)$ time (in static settings), and this is known to be tight assuming a well-known girth conjecture. Our approximate APSP data structure has the additional property that the information stored for each node is a distance sketch of expected size $O(kn^{1/k})$ that supports $(2k-1)(1+\epsilon)$-approximate queries. Hence we can maintain distance sketches that almost match the guarantees of the best static algorithm. More specifically, for a fixed size our algorithm matches the best known static construction up to a $(1+\epsilon)$-factor in the stretch and polylogarithmic factors in the update time. In decremental settings, the distance oracles of \cite{TZ2005}, and hence distance sketches with the guarantees described, have been studied by \cite{roditty2004, bernstein2011}, but our total update time of $\tilde{O}(mn^{1/k})$ (for constant $k \geq 2$) significantly improves over these results. In particular, \cite{roditty2004} maintains these distance sketches in a total update time of $\Omega(mn)$, and \cite{bernstein2011} requires a total update time of $O(n^{2+1/k+o(1)})$.
\paragraph{Near-Optimal $(1+\epsilon)$-MSSP.} Our next result is a near-optimal algorithm for multi-source shortest paths.
\begin{theorem}[MSSP]
There is a data structure which given a weighted undirected graph $G=(V, E)$ \emph{explicitly} maintains $(1+\epsilon)$-approximate distances from a set of $s$ sources in $G$ under edge deletions, where $0<\epsilon<\frac{1}{2}$ is a constant.
Assuming that $|E|= n^{1+\Omega(1)}$ and $s=n^{\Omega(1)}$, the total expected update time is $\tilde{O}(sm)$.
The data structure is randomized and works against an oblivious adversary.
\end{theorem}
We note that the total update time matches (up to polylogarithmic factors) the running time of the best known \emph{static} algorithm for computing $(1+\epsilon)$-approximate distances from $s$ sources for a wide range of graph densities.
While for very dense graphs algorithms based on fast matrix multiplication are faster, the running time of our decremental algorithm matches the best known results in the static setting (up to polylogarithmic factors) whenever $ms = n^\delta$, for a constant $\delta \in (1, 2.37)$.
In the dynamic setting, our algorithm improves upon algorithms obtained by using the hopsets of Henzinger, Krinninger and Nanongkai~\cite{henzinger2014}, or the emulators of Chechik~\cite{chechik2018}, both of which give a total update time of $O(sm \cdot 2^{\tilde{O}(\log^{\gamma} n)})$ for some constant $0<\gamma <1$.
In particular, by maintaining a hopset with polylogarithmic hopbound in $\tilde{O}(sm)$ time, we can maintain approximate SSSP from each source in $\tilde{O}(m)$ time. In contrast, in~\cite{henzinger2014, chechik2018} a hopset with hopbound $2^{\tilde{O}(\log^{\gamma} n)}$ is maintained, which, if one simply applies existing techniques, results in a total update time of $m \cdot 2^{\tilde{O}(\log^{\gamma} n)}$.
In the general case, i.e., for very sparse graphs, the update bound of our algorithm is $sm \cdot 2^{\tilde{O}(\sqrt{\log n})}$, which is similar to but slightly better than the bound obtained by~\cite{henzinger2014}, and slightly improves the dependence on $\epsilon$ compared to \cite{chechik2018}.
\paragraph{Improved bounds for $(1+\epsilon)$-SSSP.}
Finally, in order to better demonstrate how our techniques compare to previous work, we show that we can obtain a slightly improved bound for decremental single-source shortest paths.
\begin{theorem}
Given an undirected and weighted graph $G=(V, E)$, there is a data structure for maintaining $(1+\epsilon)$-approximate distances from a source $s_0 \in V$ under edge deletions, where $0 <\epsilon<1$ is a constant and $|E|= n \cdot 2^{\tilde{\Omega}(\sqrt{\log n})}$. The total expected update time of the data structure is $m \cdot 2^{\tilde{O}(\sqrt{\log n})}$. There is an additional factor of $O(\frac{1}{\epsilon})^{\frac{\sqrt{\log n}}{\log \log n}}$ in the running time for non-constant $\epsilon$.
\end{theorem}
The amortized update time of our algorithm over all $m$ deletions is $2^{\tilde{O}(\sqrt{\log n})}$.
This improves upon the state-of-the-art algorithm of \cite{henzinger2014}, whose amortized update time is $2^{\tilde{O}(\log ^{3/4} n)}$.
We note that the techniques of \cite{chechik2018} can also be used to obtain $(1+\epsilon)$-SSSP in amortized update time $\tilde{O}(1/\epsilon)^{\sqrt{\log n}}$. This is close to our update time, but we get a better bound with respect to the dependence on $\epsilon$.
\paragraph{Recent developments on decremental shortest paths.} Recently, and after a preprint of this paper was published, a decremental \textit{deterministic} $(1+\epsilon)$-SSSP algorithm, also with amortized update time $n^{o(1)}$, was proposed by \cite{bernstein2021deterministic}. Several other recent results have also focused on deterministic dynamic shortest path algorithms or algorithms that work against an \textit{adaptive adversary} (e.g.~\cite{bernstein2017, bernstein2017ICALP, gutenberg2020, bernstein2021deterministic}), most of which also use hopsets or related objects such as emulators. Our work leaves open the problem of whether hopsets with small hopbound can also be maintained and utilized deterministically\footnote{One possible direction is considering derandomization of Thorup-Zwick based clustering in static settings \cite{TZ2005} combined with our techniques.}. This could have applications in deterministic approximate all-pairs shortest paths, which could in turn have implications in using decremental shortest path algorithms for obtaining faster algorithms in classic/static settings (e.g. see \cite{madry2010}).
\paragraph{Hopsets vs. emulators.}
A majority of the previous work on dynamic distance computation is based on sparse \textit{emulators} (e.g.~\cite{bernstein2009, bernstein2011, chechik2018}). For a graph $G=(V,E)$, an emulator $H'=(V, E')$ is a graph such that for any pair of nodes $x,y \in V$, there is a path in $H'$ that approximates the distance between $x$ and $y$ in $G$ (possibly with both multiplicative and additive factors). While there are some similarities in the algorithms for constructing these objects, their analyses are different.
More importantly, their maintenance and utilization for dynamic shortest paths have significant differences. An emulator approximates distances without using the original graph edges and hence we can restrict the computation to a sparser graph, whereas for using hopsets we also need the edges in the original graph. On the other hand, hopsets allow one to only consider paths with few hops.
\subsection{Preliminaries and Notation}
Given a weighted undirected graph $G=(V,E, w)$, and a pair $u,v \in V$ we denote the (weighted) shortest path distance by $d_G(u,v)$. We denote by $d_G^{(h)}(u,v)$ the length of the shortest path between $u$ and $v$ among the paths that use at most $h$ hops, and call this the $h$-hop limited distance between $u$ and $v$.
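For intuition, the following small sketch (ours, not taken from the constructions of this paper) computes $d_G^{(h)}(s,\cdot)$ by running $h$ rounds of Bellman-Ford relaxations on an edge list; the function and variable names are illustrative only.
\begin{verbatim}
import math

# Illustrative sketch: h-hop limited distances d_G^{(h)}(s, .) via h rounds
# of Bellman-Ford; each round extends the allowed number of hops by one.
def h_hop_distances(n, edges, s, h):
    """edges: list of (u, v, w) triples of an undirected weighted graph on 0..n-1."""
    dist = [math.inf] * n
    dist[s] = 0
    for _ in range(h):
        new_dist = dist[:]                 # relax only from the previous round
        for u, v, w in edges:
            if dist[u] + w < new_dist[v]:
                new_dist[v] = dist[u] + w
            if dist[v] + w < new_dist[u]:
                new_dist[u] = dist[v] + w
        dist = new_dist
    return dist                            # dist[v] = d_G^{(h)}(s, v)
\end{verbatim}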
In this paper, we are interested in designing \textit{decremental} algorithms for distance problems in weighted graphs. In the decremental setting, the updates are only edge deletions or weight increases. This is as opposed to an \textit{incremental} setting in which edges can be inserted, or a \textit{fully dynamic} setting, in which we have both insertions and deletions. Specifically, given a weighted graph $G= (V,E, w)$, we want to support the following operations: \textsc{Delete}($(u,v)$), where $(u,v) \in E$, which removes the edge $(u,v)$, \textsc{Distance}($s,u$), which returns an (approximate) distance between a source $s$ and any $u \in V$, and \textsc{Increase}($(u,v), \delta$), which increases the weight of the edge $(u,v)$ by $\delta > 0$. While our results also allow handling weight increases, in stating our theorems for simplicity we use the term \textit{total update time} to refer to a sequence of up to $m$ deletions.
\begin{definition}\label{def:hopset}
Let $G = (V, E, w)$ be a weighted undirected graph.
Fix $d, \epsilon > 0$ and an integer $\beta \geq 1$.
A $(d, \beta, 1+\epsilon)$-\emph{hopset} is a graph $H = (V, E(H), w_H)$ such that for each $u, v \in V$, where $d_G(u, v) \leq d$, we have $d_G(u,v) \leq d_{G \cup H}^{(\beta)}(u, v) \leq (1+\epsilon) d_G(u, v)$.
We say that $\beta$ is the \emph{hopbound} of the hopset and $1+\epsilon$ is the \emph{stretch} of the hopset.
We also use $(\beta, 1+\epsilon)$-hopset to denote a $(\infty, \beta, 1+\epsilon)$-hopset.
\end{definition}
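As a sanity check of this definition, the following brute-force sketch (ours; \texttt{h\_hop\_distances} is the illustrative routine above) verifies the hopset property on a small instance by comparing $\beta$-hop limited distances in $G \cup H$ against exact distances in $G$.
\begin{verbatim}
# Illustrative check of the (d, beta, 1+eps)-hopset property on a small graph.
def is_hopset(n, edges_G, edges_H, d, beta, eps):
    edges_GH = edges_G + edges_H
    for u in range(n):
        exact = h_hop_distances(n, edges_G, u, n - 1)    # d_G(u, .)
        limited = h_hop_distances(n, edges_GH, u, beta)  # beta-hop distances in G u H
        for v in range(n):
            if exact[v] <= d and limited[v] > (1 + eps) * exact[v] + 1e-9:
                return False
    return True
\end{verbatim}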
We sometimes call a $(d, \beta, 1+\epsilon)$-hopset a $d$-\emph{restricted hopset}, when the other parameters are clear. We also sometimes consider hopset edges added for a specific distance range $(2^{j}, 2^{j+1}]$, which we call a hopset for a single distance \textit{scale}. We also use $W$ to denote the ratio between maximum and minimum weight in $G$, also called the aspect ratio. W.l.o.g we can assume that the maximum distance is bounded by $nW$.
In analyzing dynamic algorithms we sometimes also use a time subscript $t$ to denote a distance (or a weight) after the first $t$ updates. In particular we use $d_{t,G}(u,v)$ to denote the distance between $u$ and $v$ after $t$ updates, and similarly use
$d^{(h)}_{t,G}(u,v)$ to denote $h$-hop limited distance between $u$ and $v$ at time $t$.
\section{Overview of Our Algorithms}\label{sec:overview}
The starting point of our algorithm is a known static hopset construction \cite{elkin2019RNC, huang2019}. We first review this construction. As we shall see, maintaining this data structure directly in a dynamic setting would require an update time of up to $O(mn)$. Our first technical contribution is another hopset construction that captures some of the properties of the hopsets of \cite{elkin2019RNC, huang2019}, but can be maintained efficiently in a decremental setting. We then explain how, by hierarchically maintaining a sequence of data structures, we can obtain a near-optimal time and stretch tradeoff.
\subsection{Static Hopset of \cite{elkin2017}}\label{sec:static_hopset}
In this section we outline the (static) hopset construction of Elkin and Neiman~\cite{elkin2019RNC}\footnote{In \cite{elkin2019RNC} two algorithms with different sampling probabilities are given, where one removes a factor of $k$ in the size. This factor does not impact our overall running time, so we will use the simpler version.} (which is similar to \cite{huang2019}).
We will later give a new (static) hopset algorithm that utilizes some of the properties of this construction, but with modifications that allow us to maintain a \textit{similar} hopset dynamically.
\begin{definition}[Bunches and clusters]\label{def:bunches}
Let $G=(V,E, w)$ be a weighted, $n$-vertex graph, $k$ be an integer such that $1 \leq k \leq \log \log n$ and $\rho > 0$.
We define sets $V=A_0 \supseteq A_1 \supseteq ... \supseteq A_{k+ 1/\rho+1}=\emptyset$. Let $\nu =\frac{1}{2^k-1}$. Each set $A_{i+1}$ is obtained by sampling each element from $A_i$ with probability $q_i=\max(n^{-2^i \cdot \nu}, n^{-\rho})$.
Fix $0 \leq i \leq k+1/\rho+1$ and for every vertex $u \in A_i\setminus A_{i+1}$, let $p(u) \in A_{i+1}$ be the node of $A_{i+1}$ that is closest to $u$, and let $d(u,A_{i+1}):=d(u,p(u))$ (assume $d(u, \emptyset)= \infty$). We call $p(u)$ the \emph{pivot} of $u$.
We define the \emph{bunch} of $u$ to be the set $B(u):= \{ v \in A_i: d(u,v) < d(u,A_{i+1})\}$. Also, for $v \in A_i \setminus A_{i+1}$, we define the \emph{cluster} of $v$ to be the set $C(v)=\{ u \in V: d(u,v) <d(u, A_{i+1}) \}$.
\end{definition}
Note that if $v \in B(u)$ then $u \in C(v)$, but the converse does not necessarily hold.
The way we define the bunches and clusters here follows~\cite{elkin2019RNC}, but differs slightly from the definitions in~\cite{TZ2005, roditty2004}, where each vertex has a separate bunch and cluster defined for each level $i$ (and stores the union of these for all levels).
The clusters are \textit{connected} in the sense that if a node $u \in C(v)$ then any node $z$ on the shortest path between $v$ and $u$ is also in $C(v)$. This property is important for bounding the running time (as also noted in \cite{TZ2005, roditty2004}):
\begin{claim}
Let $u \in C(v)$, and let $z \in V$ be on a shortest path between $v$ and $u$. Then $z \in C(v)$.
\end{claim}
\begin{proof}
Let $v \in A_i$. If $z \not \in C(v)$ then by definition $d(z,A_{i+1}) \leq d(v,z)$. On the other hand, since $z$ is
on the shortest path between $u$ and $v$: $d(u,A_{i+1}) \leq d(z,u)+ d(z,A_{i+1}) \leq d(u,z)+d(z,v)=d(u,v)$, which contradicts the fact that $u \in C(v)$.
\end{proof}
The hopset is then obtained by adding an edge $(u,v)$ for each $u \in A_i \setminus A_{i+1}$ and $v \in B(u)\cup \{p(u)\}$, and setting the weight of this edge to be $d(u,v)$.
These distances can be computed by maintaining the clusters.
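For illustration, a brute-force static sketch (ours, for small graphs only) of the sampling, pivots, bunches, and the resulting hopset edges could look as follows; \texttt{dist} is assumed to be a precomputed all-pairs distance table, and all names are illustrative.
\begin{verbatim}
import math, random

# Illustrative static sketch of the bunches/clusters definition and the hopset
# edges: sample A_0 >= A_1 >= ..., and for u in A_i \ A_{i+1} add edges to
# B(u) and the pivot p(u), with weight d(u, v).
def build_hopset(V, dist, k, rho, n):
    nu = 1.0 / (2 ** k - 1)
    A = [set(V)]                                    # A_0 = V
    for i in range(int(k + 1 / rho)):
        q = max(n ** (-(2 ** i) * nu), n ** (-rho)) # q_i = max(n^{-2^i nu}, n^{-rho})
        A.append({v for v in A[i] if random.random() < q})
    A.append(set())                                 # last set is empty

    hopset = {}                                     # (u, v) -> weight d(u, v)
    for i in range(len(A) - 1):
        for u in A[i] - A[i + 1]:
            d_next = min((dist[u][x] for x in A[i + 1]), default=math.inf)
            bunch = {v for v in A[i] if dist[u][v] < d_next}          # B(u)
            if A[i + 1]:
                bunch.add(min(A[i + 1], key=lambda x: dist[u][x]))    # pivot p(u)
            for v in bunch:
                hopset[(u, v)] = dist[u][v]
    return hopset
\end{verbatim}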
\begin{lemma}[\hspace{1sp}\cite{elkin2019RNC,huang2019}]\label{lem:bunches}
Let $G=(V,E, w)$ be a weighted, $n$-vertex graph, $k$ be an integer such that $1 \leq k \leq \log \log n$, and let $0< \rho$ and $0 < \epsilon <1$.
Assume the sets $A_i$ and bunches are defined as in Definition~\ref{def:bunches}.
Define a graph $H=(V, E_H, w_H)$, such that for each $u \in A_i \setminus A_{i+1}$ and $v \in B(u)\cup \{p(u)\}$, we have an edge $(u, v) \in E_H$ with weight $d_G(u,v)$.
Then $H$ is a $(\beta, 1+\epsilon)$-hopset of size $O(n^{1+\frac{1}{2^k-1}})$, where $\beta= O\left(\frac{k+1/\rho}{\epsilon}\right)^{k+1/\rho+1}$.
\end{lemma}
For reference we sketch a proof of the hopset properties in Appendix \ref{app:static_hopset}. Our main result is based on a new construction consisting of a hierarchy of hopsets. Our dynamic hopset requires a new \emph{stretch} analysis as estimates on the shortest paths are obtained from different data structures, but the \emph{size} analysis is basically the same.
While we are generally interested in a hopset that is not much denser than the input, as we will see the running time (both in static and dynamic settings) is mainly determined by the number of clusters a node belongs to, rather than the size of the hopset.
Moreover, unlike an emulator, for computing the distances using a hopset, we also need to consider the edges in $G$, and a small hopbound is the key to efficiency rather than the sparsity.
The hopset of \cite{elkin2019RNC} has some structural similarities to the emulators of \cite{TZ2006spanners}. One main difference is that the sampling probabilities are adjusted (lower-bounded by $n^{-\rho}$) to allow for efficient construction of these hopsets in various models, at the cost of slightly weaker size/hopbound tradeoffs. This adjustment is also crucial for our efficient decremental algorithms. Inspired by the construction described, in Section \ref{sec:new_static} we describe a \textit{new static} hopset algorithm, and later in Section \ref{sec:overview_dynamic} we adapt it to decremental settings.
\subsection{New static hopset based on path doubling and scaling}\label{sec:new_static}
As a warm-up, before moving to our new dynamic hopset construction, we provide a simple static hopset and explain why we expect to maintain such a structure more efficiently than the structure in Section \ref{sec:static_hopset} in \textit{dynamic settings}. Our main contribution is to maintain a dynamic hopset \textit{efficiently} using ideas in the simple algorithm described in this section.
At a high level, computing one of the main components of the hopset of Lemma~\ref{lem:bunches} involves multiple single-source shortest paths computations.
Maintaining single-source shortest paths is easy in the decremental setting, if we limit ourselves to paths of low length (or allow approximation).
Namely, assuming integer edge weights, one can maintain single source shortest paths up to length $d$ under edge deletions in total $O(md)$ time.
If we simply modified the construction of the hopset of Lemma~\ref{lem:bunches}, and computed shortest paths up to length $d$ instead of shortest paths of unbounded length, we would obtain a $d$-restricted hopset.
We describe this idea in more detail in Section \ref{sec:restricted_hopset}, where we show that an adaptation of the techniques by \cite{roditty2004} allows us to maintain a $d$-restricted hopset in decremental settings, in total time $O(dmn^{\rho})$, for a parameter $0<\rho<\frac{1}{2}$. However, for large $d$ such a running time is prohibitive. In order to address this challenge, in this section we describe a static hopset, which can be computed using shortest path explorations up to only a polylogarithmic depth, yet can be used to approximate arbitrarily large distances. In the next sections we leverage this property to maintain a similar hopset in the decremental setting efficiently. This will require overcoming other obstacles, notably the fact that the dynamic shortest path problems that we need to solve are not decremental.
\paragraph{Path doubling.}
Assume that we are given a procedure $\textsc{Hopset}(G, \beta, d, \epsilon)$ that constructs a $(d, \beta, 1+\epsilon)$-hopset. In Section \ref{sec:new_hopset}, we provide such an algorithm that uses only shortest path computations up to polylogarithmic depth. We argue that by applying the $\textsc{Hopset}(G, \beta, d, \epsilon)$ procedure \textit{repeatedly} we can compute a full hopset, and in this process, by utilizing the previously added hopset edges, we can restrict our attention to short-hop paths only.
More formally, we construct a sequence of graphs $H_0, \ldots, H_{\log (nW)}$, such that $H_j$ is a hopset that handles pairs of nodes with distance in range $[2^{j-1}, 2^j)$, for $0 \leq j \leq \log (nW)$. This implies that $\bigcup_{r=0}^j H_r$ is a $(2^j, \beta, (1+\epsilon)^j)$-hopset of $G$.
Note that for $0 \leq j \leq \log \beta$ we can set $H_j = \emptyset$, since $G$ covers these scales.
We would like to use $G \cup_{r=0}^{j-1} H_{r}$ to construct $H_j$ based on the following observation that has been previously used in other (static) models (e.g. parallel hopsets of Cohen \cite{cohen2000}).
Consider $u,v$ with $d_G(u,v) \in [2^{j-1}, 2^j)$, and let $\pi$ be the shortest path between $u$ and $v$ in $G$. Then $\pi$ can be divided into three segments $\pi_1, \pi_2$ and $\pi_3$, where $\pi_1$ and $\pi_3$ have length at most $2^{j-1}$ and $\pi_2$ consists of a single edge. But we know there is a path in $G \cup_{r=0}^{j-1} H_{r}$ with at most $\beta$ hops that approximates each of $\pi_1$ and $\pi_3$. Hence for constructing $H_j$ we can compute \textit{approximate} shortest paths by restricting our attention to paths consisting of at most $2\beta+1$ \textit{hops} in $G \cup_{r=0}^{j-1} H_{r}$.
This idea, which we call path doubling, has been previously used in hopset constructions in distributed/parallel models (e.g.~\cite{cohen2000, elkin2019journal, elkin2019RNC}), but to the best of our knowledge this is the first use of this approach in a dynamic setting. Applying this idea in parallel/distributed settings is relatively straightforward, since having bounded-hop paths already leads to efficient parallel shortest path explorations (e.g. by using Bellman-Ford). However, utilizing it efficiently in dynamic settings is more involved for several reasons: we have to simultaneously maintain the clusters (including their connectivity property), apply a scaling idea to the whole structure, and handle insertions in our hopset algorithm and its analysis.
But first we describe a \textit{scaling idea} that at a high level allows us to go from \textit{$h$-hop-bounded} explorations to $h$-depth-bounded explorations on a scaled graph.
\paragraph{Scaling.}
We review a scaling algorithm that allows us to utilize the path doubling idea. Similar scaling techniques are used in dynamic settings~\cite{bernstein2009, bernstein2011, henzinger2014} for single-source shortest paths, but as we will see, using the scaling idea in our setting is more involved since it has to be carefully combined with other components of our construction.
This idea can be summarized in the following scaling scheme due to Klein and Subramanian \cite{klein1997}, which, roughly speaking, says that finding shortest paths of length $\in [2^{j-1}, 2^j)$ with at most $\ell$ hops can be (approximately) reduced to finding paths of length at most $O(\ell)$ in a graph with small integer weights. This is done by a rounding procedure that adds a small \textit{additive} term of at most $\eta(R,\epsilon_0)=\frac{\epsilon_0 R}{\ell}$ to each edge $e$. Then for a path of $\ell$ hops the overall stretch will be $(1+\epsilon_0)$.
\begin{lemma}[\cite{klein1997}]\label{lem:rounding}
Let $G=(V,E,w)$ be a weighted undirected graph. Let $R \geq 0$ and $\ell \geq 1$ be integers and $\epsilon_0 > 0$.
We define the \emph{scaled graph} to be a graph $\textsc{Scale}(G, R, \epsilon_0, \ell) := (V, E, \hat{w})$, such that $\hat{w}(e) = \lceil \frac{w(e)}{\eta(R, \epsilon_0)} \rceil$, where $\eta(R, \epsilon_0)=\frac{\epsilon_0 R}{\ell}$. Then for any path $\pi$ in $G$ such that $\pi$ has at most $\ell$ hops and weight $R \leq w(\pi) \leq 2R$ we have,
\begin{itemize}
\item $\hat{w}(\pi) \leq \lceil 2\ell/\epsilon_0 \rceil$,
\item $w(\pi) \leq \eta(R, \epsilon_0) \cdot \hat{w}(\pi) \leq (1+\epsilon_0) w(\pi)$.
\end{itemize}
\end{lemma}
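The $\textsc{Scale}$ map of the lemma (and the inverse $\textsc{Unscale}$ map used later) can be written down directly; the sketch below (ours) applies the rounding edge-wise to a weight dictionary, with all names illustrative.
\begin{verbatim}
import math

# Illustrative sketch of Scale/Unscale from the rounding lemma:
# eta(R, eps0) = eps0 * R / ell, and w_hat(e) = ceil(w(e) / eta).
def eta(R, eps0, ell):
    return eps0 * R / ell

def scale(weights, R, eps0, ell):
    e = eta(R, eps0, ell)
    return {edge: math.ceil(w / e) for edge, w in weights.items()}

def unscale(weights_hat, R, eps0, ell):
    e = eta(R, eps0, ell)
    return {edge: w * e for edge, w in weights_hat.items()}
\end{verbatim}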
Similar scaling ideas have been used in the $h$-SSSP algorithm for maintaining approximate shortest paths~\cite{bernstein2009}.
The algorithm maintains a collection of trees and, to return a distance estimate, it finds the tree that best approximates a given distance. But we note that in utilizing the scaling techniques in our final dynamic hopset construction we cannot simply maintain a \textit{disjoint set of} bounded-hop shortest path trees. We need to maintain the whole structure of the hopset on the scaled graphs together: first, based on the definition of bunches in Definition \ref{def:bunches}, nodes keep leaving and joining clusters, so we cannot simply maintain a set of shortest path trees from a fixed set of roots. We need to maintain the \textit{connectivity} of the clusters as described in Section \ref{sec:static_hopset} at the same time as maintaining the shortest path trees. Additionally, while we are maintaining distances over the set of clusters we also need to handle insertions introduced by smaller scales.
To maintain these efficiently, we need to apply the scaling to the whole structure, including the hopset edges added so far. But when we utilize the smaller scale hopset edges (for applying path doubling) insertions or distance decreases are introduced. As we will see, handling insertions at the same time the clusters (and the corresponding distances) are updated makes the stretch/hopbound analysis more involved.
Next we combine the scaling with the path doubling technique. The path doubling property states that we can restrict our attention to $(2\beta+1)$-hop limited shortest path computations, and the scaling idea ensures that such $(2\beta+1)$-\textit{hop} bounded paths in $G \cup \bigcup_{r=0}^{j-1} H_r$ correspond to paths bounded \textit{in depth} by $d=\lceil \frac{2\ell}{\epsilon_0} \rceil= O(\frac{\beta}{\epsilon_0})$ in the scaled graph $G_{scaled}=\textsc{Scale}(G \cup \bigcup_{r=0}^{j-1} H_r, 2^j, \epsilon_0, 2\beta+1)$. Informally, this means it is enough to construct shortest path trees up to depth $d$ on the scaled graphs in our hopset construction.
We can now summarize our new static hopset construction in Algorithm \ref{alg:static_hopset}. Similarly to $\textsc{Scale}$, for a graph $G = (V, E, w)$ we define $\textsc{Unscale}(G, R, \epsilon, \ell)$ to be the graph $(V, E, w')$, where for each $e \in E$, $w'(e) = \eta(R, \epsilon) \cdot w(e)$. In static settings, the procedure $\textsc{Hopset}(G, \beta, d,\epsilon)$ for constructing a $d$-restricted hopset can be performed by running a $(\beta,1+\epsilon)$-hopset construction algorithm in which the shortest path explorations are restricted to depth $d$. In Section \ref{sec:restricted_hopset} we describe a decremental algorithm for this procedure, and describe how it leads to a $(d,\beta,1+\epsilon)$-restricted hopset. Note that we can set $\beta = \textrm{poly} \log n$, and so the shortest path explorations can be bounded by a polylogarithmic value.
\begin{algorithm}[H]
\label{alg:static_hopset}
\SetKwProg{Fn}{Function}{}{}
\For{$j=1$ to $\lceil \log W \rceil$}{
$G_{scaled} := \textsc{Scale}(G \cup \bigcup_{r=0}^{j-1} H_r, 2^j, \epsilon_0, 2\beta+1)$\\
$\hat{H}:= \textsc{Hopset}(G_{scaled}, \beta, \lceil \frac{2(2\beta+1)}{\epsilon_0}\rceil, \epsilon) $\\
$H_j := \textsc{Unscale}(\hat{H}, 2^j, \epsilon_0, 2\beta+1)$\\
$H:= H \cup H_j$
}
\caption{}
\end{algorithm}
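In Python-like form, the driver loop of Algorithm \ref{alg:static_hopset} could be sketched as follows (ours); \texttt{restricted\_hopset} stands in for the $\textsc{Hopset}$ procedure, \texttt{scale}/\texttt{unscale} are the edge-wise maps sketched above, and we iterate over all $\lceil \log (nW) \rceil$ scales.
\begin{verbatim}
import math

# Illustrative sketch of the path-doubling loop: at scale j, scale G together
# with all hopset edges added so far, build a depth-restricted hopset, and
# unscale its edge weights back.
def full_hopset(G_weights, n, W, beta, eps, eps0, restricted_hopset):
    H = {}                                               # union of all H_j so far
    for j in range(1, math.ceil(math.log2(n * W)) + 1):
        combined = {**G_weights, **H}                    # G together with H_1, ..., H_{j-1}
        scaled = scale(combined, 2 ** j, eps0, 2 * beta + 1)
        depth = math.ceil(2 * (2 * beta + 1) / eps0)
        H_hat = restricted_hopset(scaled, beta, depth, eps)
        H_j = unscale(H_hat, 2 ** j, eps0, 2 * beta + 1)
        H.update(H_j)                                    # H := H union H_j
    return H
\end{verbatim}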
It is not hard to see that in such a static construction, three different approximation factors are combined in each scale: a $(1+\epsilon)$-stretch due to the \textsc{Hopset} procedure, a $(1+\epsilon_1)$-factor from the restricted hopset, and a $(1+\epsilon_0)$-factor due to scaling. This is summarized in the following lemma.
\begin{lemma}
Let $G$ be a graph and $H$ be a $(d, \beta, 1+\epsilon_1)$-hopset of $G$.
Let $G_{scaled} = \textsc{Scale}(G \cup H, d, \epsilon_0, 2\beta+1)$
and let $H' = \textsc{Unscale}(\textsc{Hopset}(G_{scaled}, \beta, \lceil \frac{2(2\beta+1)}{\epsilon_0}\rceil, \epsilon_2), d, \epsilon_0, 2\beta+1)$.
Then $H \cup H'$ is a $(2d, \beta, (1+\epsilon_2)(1+\epsilon_1)(1+\epsilon_0))$-hopset of $G$.
\end{lemma}
Obtaining such a guarantee in dynamic settings is going to be more involved, since we also need to handle insertions, and at the same time ensure that the update time remains small. Moreover the stretch analysis will require combining estimates obtained by different procedures.
\subsection{Near-Optimal Decremental Hopsets}\label{sec:overview_dynamic}
In this section we describe how we can overcome the challenges of the dynamic setting in order to maintain a decremental hopset with near-optimal update time.
The first step of our algorithm is constructing a $d$-restricted version of the hopset described in Section \ref{sec:static_hopset}. As discussed, for this we can use the techniques by \cite{roditty2004} to maintain a $(d, \beta, 1+\epsilon)$-hopset in $\tilde{O}(d m n^{\rho})$ total update time, where $0<\rho<\frac{1}{2}$. Now in order to remove the time dependence on $d$, we use the path doubling and scaling ideas described as follows: we maintain this data structure on a sequence of scaled graphs simultaneously, and argue that this data structure gives us a hopset on $G$ after \textit{unscaling} the edge weights.
\paragraph{Sequence of restricted hopsets.} Similar to Section \ref{sec:new_hopset}, our decremental algorithm maintains the sequence of graphs $H_0, \ldots, H_{\log W}$, where for each $0 \leq j \leq \log W$, $\bigcup_{r=0}^j H_r$ is a $(2^j, \beta, (1+\epsilon)^j)$-hopset of $G$. For \textit{each scale} we show the following:
\begin{restatable}{lemma}{singlescale}\label{lem:single-scale}
Consider a graph $G = (V, E, w)$ subject to edge deletions, and parameters $0 < \epsilon<1$ and $0< \rho <\frac{1}{2}$.
Assume that we have maintained $\bar{H}_j:=H_1 \cup \dots \cup H_j$, which is a $(2^j, \beta, (1+\epsilon)^{j})$-hopset of $G$. Then given the sequence of changes to $G$ and $\bar{H}_j$, we can maintain a graph $H_{j+1}$, such that $\bar{H}_j \cup H_{j+1}$ is a $(2^{j+1}, \beta, (1+\epsilon)^{j+1})$-hopset of $G$. This restricted hopset can be maintained in $\tilde{O}( (m+\Delta) n^{\rho} \cdot \frac{\beta}{\epsilon})$ total time, where $m$ is the initial size of $G$, $\Delta$ is the number of edges inserted into $\bar{H}_j$ over all updates, and $\beta= ( \frac{1}{\epsilon \cdot \rho})^{O(1/\rho)}$.
\end{restatable}
Note that the lemma does not hold for \textit{any} restricted hopset, and in dynamic settings we need to use special properties of our construction to prove this.
To prove this lemma we use the techniques of \cite{roditty2004} to maintain the clusters. For obtaining a near-optimal update time, we combine this algorithm with the path doubling and scaling ideas described earlier. However, in order to utilize these ideas, we need to deal with the fact that using hopset edges from smaller scales introduces edge \textit{insertions}.
\paragraph{Handling insertions.} In addition to maintaining clusters and distances decrementally, in our final construction we need to handle edge \emph{insertions}.
This is because we run it on a graph $G \cup \bigcup_{r=0}^{j-1} H_{r}$ (after applying scaling of Lemma~\ref{lem:rounding}).
While edges of $G$ can only be deleted, new edges are added to $H$, which we need to take into account in order to obtain faster algorithms.
At a high-level, the algorithm of Roditty and Zwick~\cite{roditty2004} decrementally maintains a collection of single-source shortest path trees (up to a bounded depth) using the Even-Shiloach algorithm (ES-tree)~\cite{ES} at the same time as maintaining a clustering.
To handle edge insertions, we modify the algorithm to use the \emph{monotone} ES-tree idea proposed by~\cite{henzinger2014, henzinger2016}.
The goal of a monotone ES-tree is to support edge insertions in a limited way.
Namely, whenever an edge $(u,v)$ is inserted and the insertion of the edge causes a distance decrease in the tree, we do \textit{not} update the currently maintained distance estimates.
Still the inserted edge may impact the distance estimates in later stages by preventing some estimates from increasing after further deletions.
While it is easy to see that this change keeps the running time roughly the same as in the decremental setting, analyzing the correctness is a nontrivial challenge.
This is because the existing analyses of a monotone ES-tree work under specific structural assumptions and do not generalize to any construction.
Specifically, while~\cite{henzinger2014} analyzed the stretch incurred by running monotone ES-trees on a hopset, the proof relied on the properties of the specific hopset used in their algorithm.
Since the hopset we use is quite different, we need a different analysis, which combines the static hopset analysis with the ideas used in \cite{henzinger2014}, and also takes into account the stretch incurred due to the fact that the restricted hopsets are maintained on the scaled graphs. Note also that our main hopset is not simply a decremental maintenance of the hopsets of \cite{elkin2017}, as our estimates are obtained from a \textit{sequence of hopsets} and insertions in one scale introduce insertions in the next scale. This is why we need a new argument and cannot simply rely on the arguments in \cite{henzinger2014} and \cite{elkin2017}.
\paragraph{Putting it together.} We now go back to the setting of Lemma \ref{lem:single-scale}, and use a procedure similar to Algorithm \ref{alg:static_hopset}. Given a $2^j$-restricted hopset $\bar{H}_j=H_1 \cup \dots \cup H_{j}$ for distances up to $2^{j}$, we construct a graph $G^j$ by applying the scaling of Lemma \ref{lem:rounding} to $G \cup \bar{H}_j$ with $R=2^j$ and $\ell=2\beta+1$. We can then efficiently maintain an $\ell$-restricted hopset on $G^j$, and by Lemma \ref{lem:single-scale} use it to update $H_{j+1}$. Importantly, $\ell$ is independent of $R$, and thus we can eliminate the factor $R$ to get $\tilde{O}(\beta mn^{\rho})$ total update time. Our final algorithm is a hierarchical construction that maintains the restricted hopsets on the scaled graphs and the original graph simultaneously.
\paragraph{Stretch and hopbound analysis.}
As discussed, applying the path-doubling idea to the hopset analysis is straightforward in static settings (and can be to some extent separated from the rest of the analysis) as is the case in \cite{elkin2019RNC}.
However, in our adapted decremental hopset algorithm this idea needs to be combined with the properties of the monotone ES tree idea and the fact that distance estimates are obtained from a sequence of hopsets on the scaled graphs. In particular, in our stretch analysis we need to divide paths into smaller segments, such that the lengths of some segments are obtained from smaller iterations $i$, and the lengths of other segments are obtained from the combination of monotone ES tree estimates based on path doubling and scaling. We need a careful analysis to show that the stretch factors obtained from these different techniques combine nicely, which is based on a threefold inductive analysis:
\begin{enumerate}
\item An induction on $i$, the iterations of the base hopset, which controls the sampling rate and the resulting size and hopbound tradeoffs.
\item An induction on the scale $j$, which allows us to cover all ranges of distances $[2^j,2^{j+1}]$ by maintaining distances in the appropriate scaled graphs.
\item An induction on time $t$ that allows us to handle insertions by using the estimates from previous updates in order to keep the distances monotone.
\end{enumerate}
The overall stretch argument needs to deal with several error factors in \textit{addition to} the base hopset stretch. First, the error incurred by using hopsets for smaller scales, which we deal with by maintaining our hopsets with $\epsilon'=\frac{\epsilon}{\log n}$. This introduces polylogarithmic factors in the hopbound. The second type of error comes from the fact that the restricted hopsets are maintained for scaled graphs, which implies the clusters are only approximately maintained on the original graph. This can also be resolved by further adjusting $\epsilon'$. Finally, since we use an idea similar to the monotone ES tree of \cite{henzinger2014, henzinger2016}, the level of a node in a tree may be set to be larger than what it would be in a static hopset. But we argue that the specific types of insertions in our algorithm still preserve the stretch. At a high level this is because in case of a decrease we use an estimate from time $t-1$, which we can show inductively has the desired stretch. We note that while the monotone ES tree is widely used, we always need a different construction-specific analysis to prove correctness.
\paragraph{Technical differences with previous decremental hopsets.} We note that while the use of the monotone ES tree and the structure of the clusters in our construction are similar to \cite{henzinger2014}, our algorithm has several important technical differences. First, our hopset algorithm is based on a different base hopset with a polylogarithmic hopbound, which as noted is crucial for obtaining near-optimal bounds in most of our applications. Additionally, we use a different approach to maintain the hopset efficiently by using path doubling and maintaining restricted hopsets on a sequence of \textit{scaled graphs}. Among other things, \cite{henzinger2014} uses a notion of \textit{approximate ball} that is more lossy with respect to the hopbound/stretch tradeoffs. By maintaining restricted hopsets on scaled graphs, we are also effectively preserving approximate clusters/bunches with respect to the original graph, and as explained earlier, the error accumulation combines nicely with the path-doubling idea. Moreover, \cite{henzinger2014} uses an edge sampling idea to bound the update time, which we can avoid by utilizing the sampling probability adjustments in \cite{elkin2019RNC}, and the ideas in \cite{roditty2004}. Finally, our algorithm is based on maintaining the clusters up to a small hop bound, whereas they directly maintain bunches/balls.
\subsection{Applications in Decremental Shortest Paths}
Our algorithms for maintaining approximate distances under edge deletions are as follows. First, we maintain a $(\beta,1+\epsilon)$-hopset. Then, we use the hopset and Lemma \ref{lem:rounding} to reduce the problem to the problem of approximately maintaining short distances from a single source.
For our applications in MSSP and APSP the best update time is obtained by setting the hopbound to be polylogarithmic, whereas for SSSP the best choice is $\beta=2^{\tilde{O}(\sqrt{\log n})}$.
Using this idea for SSSP and MSSP mainly involves using the monotone ES tree ideas described earlier.
Maintaining the APSP distance oracle is slightly more involved but uses the same techniques as in our restricted hopset algorithm. This algorithm is based on maintaining the Thorup-Zwick distance oracle \cite{TZ2005} more efficiently. At a high level, we maintain \textit{both} a $(\beta, 1+\epsilon)$-hopset and the Thorup-Zwick distance oracle simultaneously, and balance out the time required for these two algorithms. The hopset is used to improve the time required for maintaining the distance oracle from $O(mn)$ (as shown in \cite{roditty2004}) to $O(\beta mn^{1/k})$, but with a slightly weaker stretch of $(2k-1)(1+\epsilon)$. Querying distances is then the same as in the static algorithm of \cite{TZ2005}, and takes $O(k)$ time.
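For concreteness, the $O(k)$-time query over the stored sketches follows the standard Thorup-Zwick pattern; the sketch below (ours) assumes each node stores its level-$i$ pivots in \texttt{pivot[i]} and a map \texttt{bunch\_dist[u]} from every $w \in B(u) \cup \{u, p_i(u)\}$ to the stored (approximate) distance, with all names illustrative.
\begin{verbatim}
import math

# Illustrative Thorup-Zwick style query: walk up the pivot levels, swapping
# the roles of u and v, until the current center w lies in the bunch of v.
def query(u, v, pivot, bunch_dist, k):
    w, i = u, 0
    while w not in bunch_dist[v]:
        i += 1
        if i >= len(pivot):                # ran out of levels
            return math.inf
        u, v = v, u                        # swap roles, as in Thorup-Zwick
        w = pivot[i].get(u)
        if w is None:                      # no pivot at this level
            return math.inf
    return bunch_dist[u][w] + bunch_dist[v][w]
\end{verbatim}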
\section{Decremental Hopset}
In this section we provide several decremental hopset algorithms with different tradeoffs.
The starting point of our constructions is the static hopset described in Section \ref{sec:static_hopset}. But in order to get an efficient dynamic algorithm, we need to modify this construction in several ways. First, in Section \ref{sec:restricted_hopset}, we explain how we can adapt ideas by Roditty-Zwick \cite{roditty2004} to obtain an algorithm for computing a $d$-restricted hopset. The total running time of this algorithm is $O(dmn^{\rho})$ (where $\rho <1$ is a constant). While existentially this construction matches the state-of-the-art static hopsets with respect to size and hopbound tradeoffs, the update time is undesirable for large values of $d$.
We will then provide another hopset algorithm with total running time of $\tilde{O}(mn^{\rho})$, by simultaneously maintaining a sequence of restricted hopsets using scaling and path-doubling ideas.
Recall that our algorithm maintains a sequence of graphs $H_0, \ldots, H_{\log (nW)}$, where for each $1 \leq j \leq \log (nW)$ , $H_0 \cup \ldots \cup H_j$ is a $2^j$-restricted hopset of $G$ ($W$ is the aspect ratio). Instead of computing each $H_j$ separately, we use $G \cup \bigcup_{r=0}^{j-1} H_{r}$ to construct $H_j$.
We observe that at the cost of some small approximation errors, any path of length $\in [2^{j-1}, 2^j)$ in $G$ can be approximated by a path of at most $2\beta+1$ hops in $G \cup \bigcup_{r=0}^{j-1} H_{r}$.
To use this idea we will prove the following main lemma as a building block for our final hopset.
\singlescale*
There are two main challenges that we need to address for proving this lemma. First, we would like to make the running time independent of the scale bound $2^j$; such a dependence is what we would get by directly using the algorithm of \cite{roditty2004}.
To that end, we are going to run our algorithm on a scaled graph, which would allow us to only maintain distances up to depth $O(\beta / \epsilon)$.
This relies on having the $2^j$-restricted hopset $\bar{H}_j$, which allows us to maintain the hopset $\bar{H}_{j+1}$.
Second, while $G$ is undergoing deletions, $H_j$ may be undergoing edge \emph{insertions} incurred by restricted hopset edges added for smaller scales. In Section \ref{sec:new_hopset} we explain how such insertions can be handled using the monotone ES tree algorithm (proposed by \cite{henzinger2014}).
In Section \ref{sec:ss_stretch} we use the properties of this algorithm to prove Lemma \ref{lem:single-scale}.
\subsection{Maintaining a restricted hopset} \label{sec:restricted_hopset}
In this section our goal is to maintain a decremental \textit{restricted} hopset.
We start by adapting the decremental algorithm by \cite{roditty2004} that maintains the Thorup-Zwick distance oracles \cite{TZ2005} with stretch $(2k-1)$ for pairs of nodes within distance $d$ in $\tilde{O}(dmn^{1/k})$ total time, but we use it to obtain a $d$-restricted hopset. In particular, using ideas in \cite{roditty2004}, and by restricting the shortest path trees up to depth $d$, we can maintain a variant of the hopset defined in Lemma \ref{lem:bunches} in which the hopset guarantee only holds for nodes within distance $d$.
In our adaptation of \cite{roditty2004}, we make the following two modifications: First, we change the sampling probabilities based on the hopset algorithm described in Section \ref{sec:static_hopset}.
Second, in addition to computing clusters we also add edges for each cluster that forms the hopset. This leads to a $(d, \beta, 1+\epsilon)$-hopset, but the update time is $\tilde{O}(dmn^{\rho})$.
We can then use the algorithm of Roditty and Zwick \cite{roditty2004} to maintain the clusters and bunches. At a high level, the idea is to maintain Even-Shiloach \cite{ES} trees for each node $u \in A_i \setminus A_{i+1}$ to compute the cluster $C(u)$. The running time in the static algorithm can be bounded using the fact that each node overlaps with at most $\tilde{O}(n^{\rho})$ clusters (see the modified Dijkstra's algorithm description in Appendix \ref{app:static_hopset}). In dynamic settings this would translate to how many ES trees each node overlaps with. While these overlaps can be bounded at any point in time, this does not immediately hold for a sequence of updates, since nodes may keep leaving and joining clusters.
Note that while the clusters we consider are slightly different than what was used in~\cite{roditty2004} (even if we ignore the difference in sampling probabilities), we use a \emph{subset} of the clusters defined in~\cite{roditty2004}.
We say that clusters are bounded by depth $d$ when $v \in C(u)$ only if $v$ satisfies Definition \ref{def:bunches} and $d_G(v,u) \leq d$.
By extending the techniques of \cite{roditty2004} we can get the following lemma for maintaining clusters (and hence bunches) up to a certain depth. At a high level this is done by computing the shortest path trees rooted at the cluster centers up to depth $d$. Moreover, the proof of this lemma relies on bounding the number of clusters overlapping each node over a sequence of updates. The details of this proof can be found in Appendix \ref{app:restricted_hopset}.
\begin{lemma}\label{lem:dec_bound_cluster}
There is an algorithm for maintaining clusters $C(w)$ (as in Definition \ref{def:bunches}) up to depth $d$ for all nodes $w \in A_i$, $0 \leq i \leq k+1/\rho+1$, such that for every $v \in V$ the expected total number of times the edges incident on $v$ are scanned over all trees (i.e.~trees on $C(w), w \in A_i$) is $O({d}/q_i)$, where $q_i$ is the sub-sampling probability.
\end{lemma}
It is easy to see that the stretch analysis of \cite{elkin2019RNC} extends to a $d$-restricted hopset by only restricting the analysis to pairs of nodes within distance at most $d$. As we will see, in our main construction (Section \ref{sec:new_hopset}) we will need a completely new stretch analysis.
By combining the analysis of the modified Dijkstra algorithms of \cite{TZ2005}, Lemma \ref{lem:bunches}, and Lemma \ref{lem:dec_bound_cluster}, we can show that a $d$-restricted hopset with the following guarantees can be constructed:
\begin{theorem} \label{thm:restricted_hopset}
Fix $\epsilon > 0, k \geq 2$ and $ \rho \leq 1$.
Given a graph $G=(V,E,w)$ with integer and polynomial weights, subject to edge deletions we can maintain a $(d, \beta, 1+\epsilon)$-hopset, with $\beta = O\left((\frac{1}{\epsilon} \cdot (k+1/\rho))^{k+1/\rho+1}\right)$ in $O(d (m+n^{1+\frac{1}{2^k-1}})n^{\rho})$ total time.
The algorithm works correctly with high probability.
\end{theorem}
We review the algorithm of \cite{roditty2004}, and its analysis in Appendix \ref{app:restricted_hopset}. In Section \ref{sec:new_hopset} we will use a similar algorithm (that also handles insertions) for our new hopset construction, presented in Algorithm \ref{alg:restricted_hopset}, so we do not repeat the algorithm here.
\subsection{Restricted hopsets with improved running time}\label{sec:new_hopset}
As discussed, the algorithm described in Section \ref{sec:restricted_hopset} has a large update time for $d$-restricted hopsets, when $d$ is large.
In this section we provide a new hopset algorithm that is based on maintaining these restricted hopsets on a sequence of scaled graphs, and we show how this improves the update time in exchange for a small (polylogarithmic) loss in the hopbound. In this construction, we rely on hopset edges added for smaller distance scales, combined with a known technique, the monotone ES tree, which is needed to handle insertions.
We note that while this general technique has been used before, analyzing the correctness of the monotone ES tree for each structure requires an analysis that depends on the specific construction.
\paragraph{Handling edge insertions}
\newcommand{L}{L}
First we explain the monotone ES tree idea and how it can be used for maintaining single-source shortest paths up to a given depth $D$, and then combine it with the restricted hopset algorithm in Theorem \ref{thm:restricted_hopset}.
Using the monotone ES tree ideas may impact the stretch, and they clearly do not apply to all types of insertions but only to insertions with certain structural properties. In Section \ref{sec:ss_stretch}, we will prove that specifically for the insertions in our restricted hopset algorithm the stretch guarantee holds.
We show how to handle edge insertions by adapting the monotone ES-tree algorithm of~\cite{henzinger2014} (further used in the hopset construction of \cite{henzinger2016}).
The idea in a monotone ES tree is that if an insertion of an edge $(u,v)$ causes the level of a node $v$, denoted by $L(v)$, in a certain tree to decrease, we will not decrease the level. In this case we say the edge $(u,v)$ and the node $v$ are \textit{stretched}. More formally, a node $v$ is stretched when $L(v) > \min_{(x,v) \in E} L(x) + w(x, v)$. The complete algorithm is presented in Algorithm~\ref{alg:estree} in Appendix \ref{app:monotone_es}. The update time analysis is straightforward, and summarized in the following lemma:
\begin{lemma} \label{lem:es_time}
Given a graph undergoing edge deletions, and a set of edge insertions, a monotone ES tree can be maintained in $O( (m+\Delta) D)$ overall update time on a graph with $m$ edges, where $\Delta$ is the number of edge insertions and $D$ is the depth bound.
\end{lemma}
\begin{proof}[Proof sketch.]
The running time analysis of the algorithm follows from an argument similar to the analysis of the classic ES tree algorithm \cite{ES, king1999}. The total time for updating distances up to depth $D$ is $O( (m+\Delta) D)$: roughly speaking, the edges incident to a node $v$ are scanned each time the level of $v$ changes, and since levels can only increase, the level of a node within depth $D$ of the source changes at most $D$ times. Furthermore, the $\Delta$ inserted edges also need to be scanned in these updates. By summing over all edges incident to all nodes the claim follows.
\end{proof}
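To make the insertion rule concrete, here is a much-simplified sketch (ours, not the algorithm of the appendix) of a bounded-depth monotone ES-tree: deletions trigger the usual level-increase phase, while insertions are only recorded and never decrease a level, possibly leaving nodes stretched.
\begin{verbatim}
import math

# Simplified illustrative sketch of a monotone ES-tree with depth bound D.
class MonotoneESTree:
    def __init__(self, adj, source, D):
        self.adj = adj                 # adj[u]: dict neighbor -> weight (mutable)
        self.D = D
        self.L = {source: 0}           # levels; a missing key means level > D
        # initial levels would be computed by a depth-bounded Dijkstra (omitted)

    def insert_edge(self, u, v, w):
        # Monotone rule: record the edge, but never decrease existing levels,
        # even if L(u) + w < L(v); the endpoint may become "stretched".
        self.adj.setdefault(u, {})[v] = w
        self.adj.setdefault(v, {})[u] = w

    def delete_edge(self, u, v):
        self.adj[u].pop(v, None)
        self.adj[v].pop(u, None)
        self._raise_levels([u, v])

    def _raise_levels(self, dirty):
        # Classic Even-Shiloach increase phase: levels only go up, never down.
        while dirty:
            y = dirty.pop()
            if y not in self.L or self.L[y] == 0:
                continue
            best = min((self.L.get(z, math.inf) + w
                        for z, w in self.adj.get(y, {}).items()), default=math.inf)
            if best > self.L[y]:
                if best > self.D:
                    del self.L[y]      # y leaves the depth-bounded tree
                else:
                    self.L[y] = best
                dirty.extend(self.adj.get(y, {}).keys())  # neighbors may need fixing
\end{verbatim}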
\paragraph{Restricted hopsets with insertions.}
Next, given a sequence of deletions $E^-$ and insertions $E^+$, in Algorithm \ref{alg:restricted_hopset} we describe how the algorithm of \cite{roditty2004} is modified to handle these insertions by combining it with the monotone ES tree algorithm of Lemma \ref{lem:es_time} (Algorithm \ref{alg:estree} in Appendix \ref{app:monotone_es}) for each tree inside a cluster. Using this we can bound the update time; however, proving the stretch is more involved, depends on the specific structure of the insertions, and does not hold for an arbitrary set of insertions.
Our goal is to maintain the hopset algorithm of Section \ref{sec:static_hopset}. Recall that we start by sampling sets $V=A_0 \supseteq A_1 \supseteq ... \supseteq A_{k+1/\rho+1}=\emptyset$ initially, with sampling probabilities $q_i=\max(n^{-2^i \cdot \nu}, n^{-\rho})$, where $0 < \rho \leq 1/2$ is a parameter. The sets remain unchanged during the updates.
Next, we need to maintain values $d(v, A_i), 1 \leq i \leq k+1/\rho+1$ for all nodes $v \in V$, and use these values for maintaining $p(v)$. For this we can simply maintain a monotone ES tree (using Algorithm \ref{alg:estree}) rooted at a dummy node $s_i$ connected to all nodes in $A_i$ up to depth $d$, in total time $O(dm)$. We denote the estimate obtained by maintaining this distance by $L(v,A_{i+1})$. The pivots $p(v), \forall v \in V$ can also be maintained in this process.
Next we need to maintain the clusters. Handling nodes that leave a cluster is simpler. Recall that in the static construction, for $z \in A_i\setminus A_{i+1}$ we have $v \in C(z)$ if and only if $d(z,v) < d(v,A_{i+1})$, but here we maintain \textit{approximate} bunches and clusters based on monotone ES tree estimates. After each deletion, for each node $v$ and each cluster center $z$ we first check whether the distance estimate $L(z,v)$ has increased. If $L(z,v) \geq \frac{L(v, A_{i+1})}{1+\epsilon}$, then $v$ will be removed from $C(z)$. There is a slight technicality here for ensuring that the sizes of the bunches are still bounded. Instead of directly using $L(v,A_{i+1})$, we maintain approximate bunches with radius $\frac{L(v,A_{i+1})}{1+\epsilon}$, which ensures that the sizes of the bunches are bounded, as these are subsets of the original bunches on $G$ (as argued in Lemma \ref{lem:dec_bound_cluster}). This $\epsilon$ parameter is rescaled later appropriately for obtaining the final estimate.
The more subtle part is adding nodes to new clusters. For each $0 \leq i <k+1/\rho+1$, we define a set $X_{i}$ consisting of all vertices whose distance to $A_i$ has increased as a result of a deletion, but where this distance is still at most ${d}$. The sets $X_{i}$ can be computed while maintaining $L(v,A_i)$. This can also be done by maintaining a single tree rooted at a dummy node $s_i$.
Note that a node $v$ would join $C(w)$ only after an increase in $L(v,A_{i+1})$. Using this observation, we can use the modified Dijkstra algorithm (also used in the static hopset construction in Appendix \ref{app:static_hopset}), which can be summarized as follows: when we explore the neighbors of a node $x$ in the tree of a center $z$, we only relax an edge $(x,y)$ if $L(z,x)+w(x,y) < \frac{L(y,A_{i+1})}{1+\epsilon}$.
Hence in each iteration $i$, after each deletion, for every $v \in X_{i+1}$, each edge $(u,v) \in E$, and each $z \in B_i(u) \setminus B_i(v)$, we check whether $L(z,u) +w(u,v) < \frac{L(v, A_{i+1})}{1+\epsilon}$. If so, then $v$ joins $C(z)$, and $v$ is pushed to a priority queue $Q(z)$. The priority queue $Q(z)$ stores the distances in the tree $T(z)$ rooted at $z$.
These nodes join the trees $T(z)$, but there may be other nodes that also need to join $C(z)$ as a result of this change.
Hence after this initial phase, for each $z \in A_i \setminus A_{i+1}$ where $Q(z) \neq \emptyset$, we run the modified Dijkstra's algorithm.
A summary of this algorithm is presented in Algorithm \ref{alg:restricted_hopset}. Note that the input to this algorithm is a graph $G$, a distance bound $d$, a set of edges $E^-$ that are deleted (or whose weights are increased), and a set of insertions $E^+$.
By combining these two algorithms we can keep the running time the same as in Theorem \ref{thm:restricted_hopset} despite the insertions:
\begin{theorem}\label{thm:monotone_es_time}
Assume that we are given a set of $\Delta$ updates consisting of a set $E^-$ of deletions and a set $E^+$ of insertions, and parameters $d$ and $\epsilon$. Then, w.h.p., the total update time of Algorithm \ref{alg:restricted_hopset} is $O((m+\Delta+n^{1+\frac{1}{2^k-1}})dn^{\rho})$.
\end{theorem}
\begin{proof}[Proof sketch.]
The proof of this theorem is almost exactly the same as the proof of Theorem \ref{thm:restricted_hopset}. We rely on Lemma \ref{lem:dec_bound_cluster} again to show that over a sequence of updates, for each iteration $i$, each node $v$ is only in $\tilde{O}(1/q_i)$ clusters. Since the monotone ES tree ensures that the insertions do not reduce the level of a node, the number of times edges incident to $v$ are scanned is still $\tilde{O}(d/q_i)= \tilde{O}(dn^{\rho})$. We now have $m+\Delta$ edges, and the theorem follows by summing over all nodes.
\end{proof}
We showed that we can handle a set of insertions within the same running time. But as discussed, directly using Algorithm \ref{alg:restricted_hopset} still does not lead to our desired update time.
Therefore, in the rest of this section we describe how we can get an improved running time by using this algorithm to maintain a hierarchical construction of restricted hopsets on a sequence of scaled graphs. As explained, one idea is that we can add hopset edges for smaller scales and use the added edges in computing distances for larger scales. Once we specify the set of insertions into each of the \textit{scaled graphs} considered, we will show that such insertions also preserve the hopset stretch (with a small polylogarithmic overhead) in the \textit{original graph}.\\
\begin{algorithm}[H]\small
\caption{Monotone $d$-restricted hopset. Adaptation of \cite{roditty2004}. }
\label{alg:restricted_hopset}
\SetKwProg{Fn}{Function}{}{}
Sample sets $V=A_0 \supseteq A_1 \supseteq ... \supseteq A_{k+1/\rho+1}=\emptyset$.\\
\Fn{\textsc{UpdateClusters}$(G,E^-,E^+,d)$}{
Add edges $(x,y) \in E^+$ to any tree $T(z)$ s.t.~$(x,y) \in T(z)$\\
\For{$i=0$ to $k+1/\rho+1$}{
$\mathcal{C} = \emptyset$.\\
Remove edges $E^-$ from the ES tree maintaining distances $L(\cdot, A_{i+1})$\\
Remove hopset edges $(z,v)$, and remove $v$ from $T(z)$ where $L(z,v) \geq \frac{L(v,A_{i+1})}{1+\epsilon}$\\
$X_{i+1} := $ set of nodes whose distances to $A_{i+1}$ have increased due to removal of $E^{-}$, yet remained at most $d$\\
\For{$\forall v \in X_{i+1}$}{
\For{$(u,v) \in E$}{
\For{$\forall z \in B_i(u)\setminus B_i(v)$}
{
\If{$L(z,u) +w(u,v) < \frac{L(v,A_{i+1})}{1+\epsilon}$}{
$\mathcal{C}= \mathcal{C} \cup \{ z\}$\\
\textsc{Relax}($(Q(z), u,v)$)\tcc{Update the estimate from $z$ to $v$}
}
}
}
}
\For{$\forall z \in \mathcal{C}$}
{
\textsc{Dijkstra}$(z)$
}
}
return $(E^-,E^+)$
}
\Fn{\textsc{Dijkstra}$(z)$}{
\While{$Q(z) \neq \emptyset$}{
$u= \textsc{ExtractMin}(Q(z))$\\
$B(u)= B(u) \cup \{z\}$\\
\For{$\forall(u,v) \in E: z \not \in B(v)$}{
\If{$L(z,u)+w(u,v) < \frac{L(v,A_{i+1})}{1+\epsilon}$}{
\textsc{Relax}$(Q(z), u,v)$\tcc{Update the estimate from $z$ to $v$}
}
}
}
}
\Fn{\textsc{Relax}$(Q(z),u,v)$}{
\tcc{Distances $L(z,v)$ for each tree $T(z)$ are maintained in $Q(z)$}
$d' := L(z,u)+ w(u,v)$\\
\If{$d' \leq d$}{
\If{$v \in Q(z)$}{
\textsc{decrease-key}$(Q(z), v, d')$}
\ElseIf{$L(z,v) > d'$}{
\textsc{Insert}$(Q(z), v, d')$
}
Add node $v$ to $T(z)$\\
\textsc{InsertEdge}($T(z),(z,v), d'$)\tcc{As defined in Algorithm \ref{alg:estree} in Appendix \ref{app:monotone_es}}
$E^+= E^+ \cup \{ (z,v)\}$\\
Add $(z,v)$ to $E^-$ if $L(z,v)$ has increased.
}
}
\end{algorithm}
\paragraph{Path doubling and scaling.}
We first state the path doubling idea more formally for a \textit{static hopset} in the following lemma. However for utilizing this idea dynamically we need to combine it with other structural properties of our hopsets.
\begin{lemma}\label{lem:hop_doubling}
Given a graph $G=(V,E)$ and $0 < \epsilon_1 <1$, the set of $(\beta, 1+\epsilon_1)$-hopsets $H_r$, $0 \leq r < j$, one for each distance scale $(2^r, 2^{r+1}]$, provides a $(1+\epsilon_1)$-approximate distance for any pair $x, y \in V$ with $d(x,y) \leq 2^{j+1}$ using paths with at most $2\beta+1$ hops.
\end{lemma}
\begin{proof}
We can show this by an induction on $j$. Let $\pi$ be the shortest path between $x$ and $y$ in $G$. Then $\pi$ can be divided into two segments, where for each segment there is a $(1+\epsilon_1)$-stretch path using edges in $G \cup \bigcup_{r=0}^{j-1} H_r$. Let $[x,z]$ and $[z',y]$ be the segments of $\pi$, each of which has length at most $2^{j}$. In other words, $z$ is the furthest point from $x$ on $\pi$ that has distance at most $2^{j}$ from $x$, and $z'$ is the next point on $\pi$. Then we have,
\begin{align*}
d^{(2\beta+1)}_{G \cup \bigcup_{r=0}^{j-1}H_r} (x,y) &\leq d^{(\beta)}_{G \cup \bigcup_{r=0}^{j-1}H_r}(x,z)+ w(z,z')+ d^{(\beta)}_{G \cup \bigcup_{r=0}^{j-1}H_r}(z',y) \\
& \leq (1+\epsilon_1) d_{G}(x,z) + w(z,z') + (1+\epsilon_1) d_{G}(z',y) \\
&\leq (1+\epsilon_1) d_{G} (x,y)
\end{align*}
\end{proof}
This implies that it is enough to compute $(2\beta+1)$-hop-limited distances in restricted hopsets for \textit{each} scale. To use this idea in dynamic settings we have to deal with some technicalities; in particular, we need to show that the rounding can be combined with the modifications needed for handling insertions.
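As a concrete illustration of what such a hop-limited computation means in isolation, the following Python sketch computes $d^{(h)}(s,\cdot)$ on an edge list (for instance $G$ together with the hopset edges of smaller scales) by running $h$ rounds of Bellman-Ford. It only illustrates the notion of hop-limited distance and is not the data structure maintained by our algorithm; the function name and representation are hypothetical.
\begin{verbatim}
def hop_limited_distances(n, edges, source, h):
    # edges: list of (u, v, w) for an undirected graph on vertices 0..n-1
    # returns d^{(h)}(source, v) for all v: shortest paths using <= h edges
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(h):
        nxt = dist[:]
        for (u, v, w) in edges:
            if dist[u] + w < nxt[v]:
                nxt[v] = dist[u] + w
            if dist[v] + w < nxt[u]:
                nxt[u] = dist[v] + w
        dist = nxt
    return dist
\end{verbatim}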
We define a scaled graph using Lemma~\ref{lem:rounding}
as follows: $G^j := \textsc{Scale}(G \cup \bigcup_{r=0}^{j} H_{r}, 2^j, \epsilon_2, 2\beta +1)$. Here we set $R=2^j, \ell=2\beta+1$, and $\epsilon_2$ is a parameter that we tune later. We first describe the operations performed on this scaled graph. We then explain how we can put things together for all scales to get the desired guarantees. The key insight behind scaling $G \cup \bigcup_{r=0}^{j} H_{r}$ at scale $2^j$ is that we can obtain $H_{j+1}$ by computing an $O(\ell)$-restricted hopset of $G^j$ (using the algorithm of Lemma~\ref{lem:single-scale}) and scaling back the weights of the hopset edges.
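A minimal sketch of the scaling step, under the assumption that the rounding of Lemma~\ref{lem:rounding} uses the unit $\eta(R,\ell)=\epsilon_2 R/\ell$ (each weight is rounded up to a multiple of $\eta$ and expressed in units of $\eta$, and levels are mapped back by multiplying with $\eta$); the function names are hypothetical.
\begin{verbatim}
import math

def eta(R, eps, ell):
    # assumed rounding unit of Lemma lem:rounding
    return eps * R / ell

def scale_weight(w, R, eps, ell):
    # weight of an edge in the scaled graph: rounded up, in units of eta
    return math.ceil(w / eta(R, eps, ell))

def unscale(level, R, eps, ell):
    # map a level in the scaled graph back to a (never underestimating)
    # length in the original graph
    return level * eta(R, eps, ell)
\end{verbatim}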
In addition to the graph $G$ undergoing deletions, our decremental algorithm maintains the following data structures for each $1 \leq j \leq \log (nW)$:
\begin{itemize}
\item The set $\bar{H}_j= \bigcup_{r=0}^j H_r$, union of all hopset edges for distance scales up to $[2^j,2^{j+1}]$.
\item The scaled graphs $G^1, ..., G^j$.
\item Data structure obtained by constructing an $O(\beta/\epsilon_2)$-restricted hopset on ${G}^j$ by running Algorithm \ref{alg:restricted_hopset} for the appropriate parameter $\epsilon_2<1$. We denote this data structure by $D_j$.
\end{itemize}
The data structure $D_j$ is maintained by running Algorithm \ref{alg:restricted_hopset} on ${G}^j$, and maintaining the clusters and hence the bunches $B(v)$ for all $v \in V$. Given $D_j$, we can maintain $H_{j+1}$, where the edge weights in clusters are assigned by computing approximate distances based on the monotone ES tree on each cluster as follows: In a tree rooted at a cluster center $z$, we set the weight $w_{j+1}$ on an edge $(z,v)$ to be $\min^{j}_{r=1} \eta( 2^r, \epsilon_2) L_r(z,v)$, where $L_r(z,v)$ is the level of $v$ on $G^r$ after running the monotone ES tree up to depth $D=\lceil \frac{2(2\beta+1)}{\epsilon_2} \rceil$. We then maintain a restricted hopset on the scaled graph $G^j$, and by \textit{unscaling} its weights we get $H_{j+1}$.
Note that we never underestimate any distances. The rounding in Lemma \ref{lem:rounding} does not underestimate the distances, and if the edge is stretched that means we are assigning a weight larger than what is obtained by the rounding.
Once each data structure $D_j$ is initialized with a graph, it can execute a single operation $\textsc{Update}(E^-, E^+)$, which updates the maintained graph by removing the edges of $E^-$ and adding the edges of $E^+$ by running Algorithm \ref{alg:restricted_hopset}. The set $E^-$ is the set of edges corresponding to nodes leaving clusters.
The operation returns a pair of edge sets $(E^-, E^+)$ that should be removed from or added to $D_j$. Additionally, by multiplying these distances by $\eta(2^j, \epsilon_2)$ for the appropriate $\epsilon_2$, we can recover a pair $(H^-, H^+)$ of edge sets, where $H^-$ is the set of edges that are removed from the hopset and $H^+$ is the set of edges added to the hopset as a result of the update.
Note that a change in the weight of a hopset edge is equivalent to removing the edge and adding it with a new weight.
In Algorithm \ref{alg:main} we update the data structures described above as follows: we run Algorithm \ref{alg:restricted_hopset} for distances bounded by $d= \lceil\frac{ 2(2 \beta+1)}{\epsilon_2}\rceil$ on $j= 0,..., \log W$ in increasing order of $j$ to compute the hopset edges $H_j$. After processing all the changes in the scaled graph $G^j$, we add the inserted edges to $G^{j+1}$. Then we process the changes in $G^{j+1}$ by running the algorithm of Section \ref{sec:restricted_hopset}, and repeat until all distance scales are covered. As explained, when the distances increase, a node may join a new cluster, which leads to a set of insertions in $H$ and in turn to insertions in a sequence of graphs $G^j$.
We use an argument similar to Lemma \ref{lem:dec_bound_cluster} on each scaled graph to get the overall update time. In a way we can see the added edges passed to each scale as a set of batch distance increases, between the corresponding endpoints. This means we are not exactly in the setting of \cite{roditty2004} where only one deletion occurs at each time, but the exact same analysis as in Lemma \ref{lem:dec_bound_cluster} still holds.
\begin{algorithm}[h]
\caption{Updating the hopset after deleting an edge $e$.}
\label{alg:main}
\SetKwProg{Fn}{Function}{}{}
Input: $0<\epsilon, 0< \epsilon_2 <1$, set $d= \lceil\frac{ 2(2 \beta+1)}{\epsilon_2}\rceil$.\\
$(E^-, E^+) := (\{e\}, \emptyset)$\\
\For{$j = 0, \ldots, \lfloor \log W \rfloor$}{
$(E^-, E^+) := \textsc{UpdateClusters}(G^j, E^{-}, E^{+}, d)$\tcc{Run Algorithm \ref{alg:restricted_hopset} on ${G}^j$}
Update $H_{j+1}$ by unscaling weights of $E^+$ and removing $E^-$ (Lemma \ref{lem:rounding}) \tcc{add edges for the next scale}
Update $G^{j+1}$ based on Lemma \ref{lem:rounding} to reflect changes to $H_{j+1}$
}
\end{algorithm}
We summarized the algorithm obtained by maintaining this data structure over all scales in Algorithm \ref{alg:main}. Note that we need to update both the restricted hopsets $D_j$ on the scaled graphs and the hopset $H_j$ obtained by scaling back the distances using Lemma \ref{lem:rounding}.
\paragraph{Running time (proof of Lemma \ref{lem:single-scale}).}
We can now put together all the steps discussed to maintain the data structure of Lemma \ref{lem:single-scale}.
In particular, for obtaining a $2^{j+1}$-restricted hopset, we maintain the data structure of Lemma \ref{lem:single-scale} on ${G}^j$ for each cluster rooted at a node $z \in A_i \setminus A_{i+1}$, setting $\ell=2\beta+1$. We then use Lemma \ref{lem:es_time} and Theorem \ref{thm:restricted_hopset} to compute $d$-restricted hopsets for $d=O(\beta /\epsilon)$. When weights are polynomial we get a running time of $\tilde{O}(\frac{\beta}{\epsilon}(m+\Delta)n^\rho)$, where $\Delta$ is the overall number of hopset edges added over all updates.
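For concreteness, since there are $O(\log (nW))$ distance scales, summing this single-scale bound over all scales contributes only another logarithmic factor, which is absorbed in the $\tilde{O}$ notation:
\[
\sum_{j=1}^{O(\log (nW))} \tilde{O}\Big(\frac{\beta}{\epsilon}\,(m+\Delta)\,n^{\rho}\Big) \;=\; \tilde{O}\Big(\frac{\beta}{\epsilon}\,(m+\Delta)\,n^{\rho}\Big).
\]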
\subsection{Hopset stretch} \label{sec:ss_stretch}
In this section, we first prove the stretch incurred for a single scale by combining properties of the monotone ES-tree algorithm (incorporated into Algorithm \ref{alg:restricted_hopset}) with the static hopset argument and the rounding framework. We will then show that by setting the appropriate parameters we can prove the overall stretch and hopbound tradeoffs described in Lemma \ref{lem:single-scale}.
In the following, we extend the static hopset argument to dynamic settings. We use the path doubling observation in Lemma \ref{lem:hop_doubling} and properties of monotone ES tree described to prove the stretch incurred in each scale. We denote the stretch of $\bar{H}_j$ to be $(1+\epsilon_j)$. Then for getting the final stretch and hopbound we will set the parameters $\epsilon_2=\epsilon'$ (error incurred by rounding), and $\delta= \frac{\epsilon}{8(k+1/\rho +1)}$.
The stretch argument is based on a threefold induction on $i$, the $j$-th scale, and time $t$. Fixing $i,j,t$ and a source node $s$, we show that there is a $(1+\epsilon_j)$-stretch path between $s$ and any other node with $\beta$ hops (or, if we are using the previous scale, $2\beta+1$ hops) such that each segment of this path has the desired stretch based on the inductive claim on one of these three parameters. At a high level, the induction on $i$ and $j$ follows from static properties of our hopset. To show that the bounded-depth monotone ES tree maintains the approximate distances, we note that any segment of the path undergoing an insertion consists of a single shortcut edge, and the weight on such an edge is a distance estimate between its endpoints.
It is easy to see that we never underestimate distances. Roughly speaking, we either obtain an estimate from rounding estimates obtained from smaller scales, which is an upper bound on the original estimate, or we ignore a distance decrease.
\begin{theorem} \label{thm:single_stretch}
Given a graph $G=(V,E)$, assume that we have maintained a $(2^j, \beta, (1+\epsilon_j))$-restricted hopset $\bar{H}_j$, and let $H_{j+1}$ be the hopset obtained by running Algorithm \ref{alg:main} for any given $0 \leq \epsilon_2 < 1$ on $G \cup \bar{H}_{j}$.
Fix $0 < \delta \leq \frac{1}{8(k+1/\rho+1)}$, and consider a pair $x,y \in V$ where $d_{t,G}(x,y) \in [2^j, 2^{j+1}]$. Then for $0 \leq i \leq k+1/\rho+1$, either of the following conditions holds:
\begin{enumerate}
\item $d^{((3/\delta)^i)}_{G \cup \bar{H}_{j+1}}(x,y) \leq (1+8\delta i)(1+\epsilon_j)(1+\epsilon_2) d_{t,G}(x,y)$ or,
\item There exists $z \in A_{i+1}$ such that,
\[d^{((3/\delta)^i)}_{G \cup \bar{H}_{j+1}} (x,z) \leq 2(1+\epsilon_j)(1+\epsilon_2) d_{t,G}(x,y).\]
\end{enumerate}
Moreover, by maintaining a monotone ES tree on $G^{j+1}$ up to depth $\lceil \frac{2(2\beta+1)}{\epsilon_2} \rceil$, and applying the rounding in Lemma \ref{lem:rounding}, we can maintain ${(1+\epsilon_{j+1})}$-approximate single-source distances up to distance $2^{j+2}$ from a fixed source $s$ on $G$, where $1+\epsilon_{j+1} = (1+\epsilon_j)(1+\epsilon_2)^2(1+\epsilon)$ and $\beta=(3/\delta)^{k+1/\rho+1}$.
\end{theorem}
\begin{proof}
We use a double induction on $i$ and time $t$, and also rely on the distances computed up to the scaled graph $G^j$. First, using these distance estimates for smaller scales, we argue that when we add an edge to $\bar{H}_{j+1}$ it has the desired stretch.
Let $L_{t,j}(u,v)$ denote the level of node $v$ in the tree rooted at $u$ after running Algorithm \ref{alg:restricted_hopset} up to depth $D= \lceil \frac{2(2\beta+1)}{\epsilon_2}\rceil$ on graph $G^j$.
This proof is based on a cyclic argument: assuming we have correctly maintained distances up to a given scale using our hopset, we show how we can compute the distances for the next scale.
In particular, we first assume that based on the theorem conditions we are given $\bar{H}_j$ and have maintained all the clusters and the corresponding distances in $G^1,...,G^j$ with stretch $1+\epsilon_j$. This lets us analyze $H_{j+1}$. Then to complete the argument, we show how given the hopsets of scale $[2^j,2^{j+1}]$, we can compute approximate SSSP distances for the next scale based on the monotone ES tree on $G^{j+1}$.
First, in the following claim, we observe that the edge weights inserted in the latest scale have the desired stretch, using our assumption that all the shortest path trees on each cluster of $G^1,...,G^j$ are approximately maintained. We use such distances to add edges in each cluster to construct $H_{j+1}$, and we observe the following about the weights on these edges:
\begin{observation}\label{obs:monotone}
Let $v \in B(u)$ such that $d_{t,G}(u,v) \leq 2^{j+1}$. Consider an edge $(u,v)$ added to $H_{j+1}$ after running Algorithm \ref{alg:restricted_hopset} on $G^1,...,G^j$ up to depth $D=\lceil \frac{2(2\beta+1)}{\epsilon_2} \rceil$ rooted at node $v$. Let $w_{j+1}(u,v):=\min^{j}_{r=1} \eta( 2^r, \epsilon_2) L_r(u,v)$, that is, the unscaled edge weight. Then we have $d_{t,G}(u,v) \leq w_{j+1}(u,v) \leq (1+\epsilon_j)(1+\epsilon_2)d_{t,G}(u,v)$.
\end{observation}
\iffalse
\begin{proof}
We use an induction on time for this claim.
At time $t=0$, the claim follows by the static argument in Lemma \ref{lem:hop_doubling}.
After running Algorithm \ref{alg:restricted_hopset}, the weight assigned to $(v,u)$ is $\min_{r=1}^j \eta(2^j, \epsilon_2) L_{t,j}(v,u)$.
First case is that node $u$ is stretched in the monotone ES tree rooted at $v$ on $G^{j}$.
Then we have $L_{t,j}(v,u)=L_{t-1,j}(v,u)$. In this case, the claim follows based on induction on time, since we have maintained $\eta(2^j, \epsilon_2) L_{t-1,j}(v,u) \leq (1+\epsilon_1)(1+\epsilon_j) d_{t-1,G}(v,u) \leq (1+\epsilon_1)(1+\epsilon_j) d_{t,G}(v,u)$.
The second case is when node $u$ is not stretched on $G^{j}$.
Let $d(v,u) \in [2^r,2^{r+1}]$, where $r \leq j$. By path doubling of Lemma \ref{lem:hop_doubling} we know that there exists a path with $2\beta+1$ hops between $v$ and $u$ on $G \cup \bar{H}_r$ with stretch $(1+\epsilon_r)$. We know from Lemma \ref{lem:rounding} that this path corresponds to a
path with depth at most $\lceil \frac{2(2\beta+1)}{\epsilon_2}\rceil$ on $G^r$, and after scaling back it has length at most $(1+\epsilon_j)(1+\epsilon_2)d_G(v,u)$ on $G$. In other words we have,
\begin{align*}
w_{j+1}(v,u)&= \min_{r=1}^{j} \eta(2^r,\epsilon_2) L_{t,r}(v,u) \leq (1+\epsilon_2) d^{(2\beta+1)}_{t,G \cup \bar{H}_r} (v,u)\\
&\leq (1+\epsilon_r)(1+\epsilon_2)d_{t,G}(v,u)\\
&\leq (1+\epsilon_j)(1+\epsilon_2)d_{t,G}(v,u)
\end{align*}
Also, note that we never underestimate any distances. The rounding in Lemma \ref{lem:rounding} does not underestimate the distances, and if the edge is stretched that means we are assigning a weight larger than what is obtained by the rounding.
\end{proof}
\fi
This claim implies that the weights of hopset edges assigned by the algorithm correspond to approximate distances between their endpoints.
Let $d_{t,j}(x,y):= \min_{r=1}^j \eta(2^r, \epsilon_2)L_{t,r}(x,y)$, which is the estimate we obtain for the distance between $x$ and $y$ after scaling back distances on $G^r, 1 \leq r \leq j$.
In other words this is the hop-bounded distance after running monotone ES tree on $G^j$ and scaling up the weights.
For any time $t$ and the base case of $i=0$, we have three cases. If $y \in B(x)$ then edge $(x,y)$ is in the hopset $H_{j+1}$, and by Observation \ref{obs:monotone} the weight assigned to this edge is at most $(1+\epsilon_j)(1+\epsilon_2)d_{t,G}(x,y)$. In this case the first condition of the theorem holds. Otherwise if $x \in A_1$, then $z=x$ trivially satisfies the second condition. Otherwise we have $x \in A_0\setminus A_1$, and by setting $z=p(x)$ we know that there is an edge $(x,z) \in \bar{H}_j$ such that $d_{t,j}(x,z) \leq (1+\epsilon_2)d_{G \cup \bar{H}_j}(x,y)$ (by definition of $p(x)$ and using the same argument as above). Hence the second condition holds.
Assume by the inductive hypothesis that the claim holds for $i$. Consider the shortest path $\pi(x,y)$ between $x$ and $y$. We divide this path into $1/\delta$ segments of length at most $\delta d_{t,G}(x,y)$ and denote the $a$-th segment by $[u_a,v_a]$, where $u_a$ is the node closest to $x$ (the first node at distance at least $a \delta d_{t,G}(x,y)$) and $v_a$ is the node furthest from $x$ on this segment (at distance at most $(a+1)\delta d_{t,G}(x,y)$).
We then use the induction hypothesis on each segment. First consider the case where the first condition holds for $i$ on all segments; then there is a path of $(3/\delta)^{i}(1/\delta) \leq (3/\delta)^{i+1}$ hops consisting of the hop-bounded paths on each segment. We can show that this path satisfies the first condition for $i+1$. In other words,
\[ d^{((3/\delta)^{i+1})}_{t,G \cup \bar{H}_{j+1}} (x,y) \leq \sum_{a=1}^{1/\delta}d^{((3/\delta)^{i})}_{t,G \cup \bar{H}_{j+1}} (u_a,v_a) +d^{(1)}_{t,G}(v_a,u_{a+1}) \leq (1+8\delta i)(1+\epsilon_j)(1+\epsilon_2)d_{t,G}(x,y)\]
Next, assume that there are at least two segments for which the first condition does not hold for $i$. Otherwise, if there is only one such segment a similar but simpler argument can be used. Let $[u_l, v_l]$ be the first such segment (i.e.~the segment closest to $x$, where $u_l$ is the first and $v_l$ is the last node on the segment), and let $[u_r, v_r]$ be the last such segment.
First, by the inductive hypothesis and since we are in the case that the second condition holds for segments $[u_l,v_l]$ and $[u_r,v_r]$, we have:
\begin{itemize}
\item $d_{t, G \cup \bar{H}_{j+1}}^{((3/\delta)^i)} (u_l, z_l) \leq 2(1+\epsilon_2)(1+\epsilon_j) d_{t,G}(u_{l},v_l)$, \textit{and,}
\item $d_{t,G \cup \bar{H}_{j+1}}^{((3/\delta)^i)} (v_r, z_r) \leq 2(1+\epsilon_2)(1+\epsilon_j)d_{t,G}(u_r, v_r)$
\end{itemize}
Again, we consider two cases. First, in case $z_r \in B(z_l)$ (or $z_l \in C(z_r)$), we have added a single hopset edge $(z_r, z_l) \in \bar{H}_{j+1}$. Note that $d_{t,G}(z_r,z_l) \leq 2^{j+1}$, since $d_{t,G}(z_r,z_l) \leq d_{t,G}(x,y) \leq 2^{j+1}$. Hence by Observation \ref{obs:monotone} the weight we assign to $(z_r, z_l)$ is at most $(1+\epsilon_2)(1+\epsilon_j)d_{t,G}(z_r, z_l)$.
\iffalse
\begin{align*}
d_{t,j}(z_l,z_r)&= \eta(2^j,\epsilon_2) L_t(z_r, z_l) \leq \eta(2^j,\epsilon_2)(L_t(z_r,z_r)+ w_{\bar{G}^j}(z_r,z_l))\\
&= \eta(2^j,\epsilon_2)w_{\bar{G}^j}(z_r,z_l) \leq (1+\epsilon_2) d^{(2\beta+1)}_{G \cup \bar{H}_j} (z_r,z_l)
\end{align*}
Note that clearly $L(z_r,z_r)=0$, and $w_{\bar{G}^j}(z_l,z_r)$ refers to the smallest weight among $G^1,...,G^j$.
\fi
On the other hand, by triangle inequality, and the above inequalities (which are based on the induction hypothesis) we get,
\begin{align}
d^{(1)}_{\bar{H}_{j+1}}(z_l,z_r) &\leq (1+\epsilon_2)(1+\epsilon_j) d_G(z_l,z_r)\\
&\leq (1+\epsilon_2)(1+\epsilon_j) [d_{G \cup \bar{H}_{j+1}}^{((3/\delta)^i)}(u_l,z_l)+d_G(u_l,v_r)+d^{((3/\delta)^i)}_{G \cup \bar{H}_{j+1}} (z_r,v_r)]\label{eq:expand}
\end{align}
By applying the inductive hypothesis on the segments before $[u_l, v_l]$ and after $[u_r,v_r]$, we have a path with at most $(3/\delta)^i$ hops for each of these segments, satisfying the first condition for the endpoints of the segment. Also, we have a $(2(3/\delta)^i +1)$-hop path going through $u_{l}, z_{l}, z_r, v_r$ that satisfies the first condition for $u_{l}, v_r$.
Putting all of these together, we argue that there is a path of hopbound $(3/\delta)^{i+1}$ satisfying the first condition. In particular, we have (the subscript $t$ is dropped in the following),
\begin{align}
d^{((3/\delta)^{i+1})}_{G \cup \bar{H}_{j+1}} (x,y) &\leq \sum^{l-1}_{a=1} [d_{G \cup \bar{H}_{j+1}}^{((3/\delta)^i)} (u_a, v_a) +d^{(1)}_G(v_a,u_{a+1})] +d^{((3/\delta)^i)}_{G \cup \bar{H}_{j+1}} (u_l, z_l) \label{line:not_stretched}\\
&+d^{(1)}_{\bar{H}_{j+1}} (z_l, z_r) + d^{((3/\delta)^i)}_{G \cup \bar{H}_{j+1}} (z_r, v_r) +d^{(1)}_{G} (v_r, u_{r+1})\\
&+\sum^{ 1/\delta }_{a=r+1} [d_{G \cup \bar{H}_{j+1}}^{((3/\delta)^i)} (u_a, v_a) +d^{(1)}_{G}(v_a,u_{a+1})] \label{line:triangle}\\
&\leq (1+8\delta i)(1+\epsilon_j)(1+\epsilon_2) [d_{G}(x,u_l)+d_G(v_r,y)] + d_G(u_l,v_r)\\
&+ (1+\epsilon_2)(1+\epsilon_j)[2d_{G}(u_l,z_l) + 2d_{G}(z_r,v_r)]\\
&\leq (1+\epsilon_2)(1+\epsilon_j) [8 \delta d_{G}(x,y) + (1+ 8 \delta i) d_{G}(x,y)] \label{line:subpath}\\
& \leq (1+8\delta (i+1))(1+\epsilon_2)(1+\epsilon_j) d_G(x,y)
\end{align}
In the first inequality we used the induction on $i$ for each segment, and the triangle inequality.
In the second inequality we use the fact that the nodes $u_a,v_a$ for all $a$ lie on the shortest path between $x$ and $y$ in $G$, and we bound $d^{(1)}_{\bar{H}_{j+1}} (z_l, z_r)$ using inequality \ref{eq:expand}.
In line \ref{line:subpath} we used the fact that the length of each segment is at most $\delta \cdot d_G(x,y)$. Hence we have shown that the first condition in the lemma statement holds.
Finally, consider the case where $z_r \not \in B(z_l)$. If $z_l \not \in A_{i+2}$, we consider $z =p(z_l)$, where $p(z_l) \in A_{i+2}$. We now claim that this choice of $z$ satisfies the second condition of the theorem.
We have added the edge $(z_l, z)$ to the hopset. Since $z_r \not \in B(z_l)$, we have $d_{t-1,G}(z_l,p(z_l))\leq d_{t-1,G}(z_l,z_r) \leq d_{t,G}(x,y) \leq 2^{j+1}$. Therefore we can use Observation \ref{obs:monotone} on the edge $(z_l,p(z_l))$.
\begin{align}
d^{((3/\delta)^{i+1})}_{G \cup \bar{H}_{j+1}} (x,y) &\leq \sum^{l-1}_{a=1} [d_{G \cup \bar{H}_{j+1}}^{((3/\delta)^i)} (u_a, v_a) +d^{(1)}_G(v_a,u_{a+1})] +d^{((3/\delta)^i)}_{G \cup \bar{H}_{j+1}} (u_l, z_l)+(1+\epsilon_2)(1+\epsilon_j)d^{(1)}_{\bar{H}_{j+1}} (z_l, z)\\
&\leq (1+8\delta i)(1+\epsilon_2)(1+\epsilon_j) d_{G}(x,u_l) +d^{((3/\delta)^i)}_{G \cup \bar{H}_{j+1}} (u_l, z_l)+(1+\epsilon_2)(1+\epsilon_j)d_{\bar{H}_{j+1}}(z_l, z_r)\\
&\leq (1+8\delta i)(1+\epsilon_2)(1+\epsilon_j) d_{G}(x,u_l) +d^{((3/\delta)^i)}_{G \cup \bar{H}_{j+1}} (u_l, z_l)\\
&+(1+\epsilon_2)(1+\epsilon_j)[2d^{((3/\delta)^i)}_{G \cup \bar{H}_{j+1}}(z_l, u_l)+d_G(u_l,v_r)+d^{(3/\delta)^i}_{G\cup \bar{H}_{j+1}}(v_r,z_r)]\\
&\leq (1+ 8\delta i)(1+\epsilon_2)(1+\epsilon_j) d^{((3/\delta)^i)}_{G\cup \bar{H}_{j+1}}(x,v_r)+ 6\delta (1+\epsilon_j) d_{G}(x,y)\\
& \leq 2(1+\epsilon_2)(1+\epsilon_j)d_G(x,y)
\end{align}
In the last inequality we used the fact that we set $\delta <\frac{1}{8(k+1/\rho+1)}$ and thus $8\delta i <1$. The only remaining case is when $z_{l} \in A_{i+2}$, in which case a similar reasoning follows by setting $z=z_{l}$.
Finally, we prove that after adding the hopset edges $H_{j+1}$ we can maintain approximate single-source shortest path distances from a given source $s$. This enables us to show that Observation \ref{obs:monotone} can be used for the next scale, i.e.~that we can set the weights for the next scale by maintaining the clusters and $(1+\epsilon_{j+1})$-approximate distances rooted at a source $s$ when $d(s,v) \in [2^{j+1}, 2^{j+2}]$, and hence close the inductive cycle in the argument.
We run the monotone ES tree algorithm (Algorithm \ref{alg:estree} in Appendix \ref{app:monotone_es}) up to depth $\lceil \frac{2(2\beta+1)}{\epsilon_2} \rceil$ on all of the scaled graphs $G^1, ..., G^{j+1}$. We set the distance estimate $d_{t,j+1}(s,v)$ to be $\min_{r} \eta( 2^r, \epsilon_2) L_{t,r}(s,v)$, where $L_{t,r}(s,v)$ is the level of $v$ on $G^r$ in the ES tree rooted at $s$ maintained up to depth $\lceil \frac{2(2\beta+1)}{\epsilon_2} \rceil$. Note that by running Algorithm \ref{alg:restricted_hopset} we are also maintaining the same distances on each cluster, while also maintaining the nodes that leave and join a cluster.
We analyze the estimate for any $v \in V$ such that $d_G(s,v) \leq 2^{j+2}$. W.l.o.g. assume that $d(s,v) \in [2^{j+1}, 2^{j+2}]$, since if $d(s,v) \in [2^r,2^{r+1}], r \leq j$, we can use the same argument for the ES tree on $G^r$. Let $L_{t,j+1}(s,v)$ be the level of $v$ in the monotone ES tree of $G^{j+1}$ maintained up to depth $\lceil \frac{2(2\beta+1)}{\epsilon_2} \rceil$. Our goal is to show,
\[ d_{t,j+1}(s,v):=\eta(2^{j+1}, \epsilon_2) L_{t,j+1}(s,v) \leq (1+\epsilon_{j+1})d_{t,G}(s,v)\]
As discussed in Lemma \ref{lem:hop_doubling}, we consider the shortest path between $s$ and $v$ in $G$, and first divide it into two segments $\pi_1$ and $\pi_2$ each with length at most $2^{j+1}$.
Then divide each one of $\pi_1$ and $\pi_2$ into segments and consider the case by case inductive analysis as we did before for showing the stretch in $H_{j+1}$.
We argue that the levels in the tree rooted at $s$ corresponding to $\pi_1$ have the desired stretch; a similar reasoning, with a factor of $2$ in the number of hops, then follows for $\pi_2$.
We use a case-by-case analysis similar to what we used for showing properties of $\bar{H}_{j+1}$, and consider the paths that were inductively constructed for each segment $\bar{H}_{j+1}$. Using that structure, we argue that in the monotone ES tree on $G^{j+1}$ we can maintain the levels such that for each $0 \leq i \leq 1/\rho+k+1$ one of the following conditions holds:
\begin{enumerate}
\item
We have $d_{t,j+1}(s,v) \leq \eta(2^{j+1}, \epsilon_2) L(s,v) \leq (1+ \epsilon_{j+1}) d_{t,G}(s,v)$, where this estimate corresponds to a path with $\beta_i =(3/\delta)^{i}$ hops in $H_{j+1}$ (and hence $G^{j+1}$).
\item There exists $z_1 \in A_{i+1}$ such that $d_{t,{j+1}} (s,z_1) \leq \eta(2^{j+1}, \epsilon_2) L(s,z_1) \leq 2(1+\epsilon_j)(1+\epsilon_2) d_{t,G}(s,v)$, which corresponds to a path with $\beta_i =(3/\delta)^{i}$ hops.
\end{enumerate}
Then we can use this to show that after all iterations there either exists a path of depth at most $\lceil \frac{2(2\beta+1)}{\epsilon_2} \rceil$ on $G^{j+1}$ between $s$ and $v$ with stretch $(1+\epsilon_{j+1})$, or the monotone ES tree returns an estimate with this stretch.
We briefly review the different cases, as before. First assume that $s \in B(v)$ for some iteration $1 \leq i \leq 1/\rho +1 +k$, which implies that we have added a hopset edge with weight $w_{j+1}$ to $\bar{H}_{j+1}$. In this case the edge $(s,v)$ was directly added to $H_{j+1}$. If the edge $(s,v)$ is stretched then we set $L_{t,j+1}(s,v)=L_{t-1,j+1}(s,v)$, and by induction on time we have \[\eta(2^{j+1}, \epsilon_2) L_{t,j+1}(s,v)= \eta(2^{j+1}, \epsilon_2) L_{t-1,j+1}(s,v) \leq (1+\epsilon_{j+1}) d_{t,G}(s,v).\]
If this edge is not stretched then by Lemma \ref{lem:rounding}, after scaling, we get a distance of at most $(1+\epsilon_{j})(1+\epsilon_2)^2d_{t,G}(s,v)$, where the additional factor of $(1+\epsilon_2)$ is due to the scaling of $G \cup \bar{H}_{j+1}$.
Now consider the case $s \not \in B(v)$. Recall that we inductively showed that one of the two theorem conditions holds for each $i$ for the lengths in $H_{j+1}$, and we now argue that this corresponds to one of the two conditions above for the same $i$, but now on $G^{j+1}$. Let $\pi_i$ be the path in $H_{j+1}$ that satisfies one of the theorem conditions for a fixed $i$.
First assume that no edge on this path is stretched. Then the stretch argument for $L(s,v)$ clearly holds based on the earlier arguments and Lemma \ref{lem:rounding}. Now let us argue about the possible insertions on this path, i.e.~when an edge added on $\pi_i$ is stretched with respect to $s$. Note that by our construction, and in all cases we considered in our hopset argument, an edge $(x',y')$ was inserted into $H_{j+1}$ only when $x' \in B(y')$ for some $0 \leq i \leq k+1/\rho+1$, and the weights were assigned based on Observation \ref{obs:monotone}. Using these weights, we prove a claim that allows us to reason about possible insertions on $\pi_i$. At a high level, we show that the level of $y'$ is either determined by an estimate at time $t-1$ for $d(s,y')$, or by the level of a node $x'$ together with a single edge $(x',y')$ whose weight satisfies Observation \ref{obs:monotone}. In other words, in the second case, using a case-by-case analysis as before, we know that for any node $y'$ there exists another node, in this case $x'$, that shortcuts the path from $s$ to $y'$ using one edge.
\begin{claim} \label{claim:sssp_inserts}
Let $(x',y')$ be an edge added to $H_{j+1}$ and hence $G^{j+1}$ with weight $w_{G^{j+1}}(x',y')$ due to the fact that $x' \in B(y')$. Then either of the following holds for the level of node $y'$ in the monotone ES tree rooted at $s$:
\begin{itemize}
\item $L_{t,j+1}(s,y') = L_{t-1, j+1}(s,y')$ and thus $\eta(2^{j+1},\epsilon_2) L_{t,j+1}(s,y') \leq \eta(2^{j+1},\epsilon_2) L_{t-1,j+1}(s,y') \leq (1+\epsilon_{j+1})d_{t,G}(s,y')$; or,
\item We have $L_{t,j+1}(s,y') \leq L_{t,j+1}(s,x') + w_{G^{j+1}}(x',y')$.
\end{itemize}
\end{claim}
\begin{proof}
The first case is when the edge $(x',y')$ is stretched in the tree rooted at $s$ on $G^{j+1}$. Note that this is different from the setting in Observation \ref{obs:monotone}, where we were reasoning about the node $y'$ being stretched in the tree rooted at $x'$ on $G^j$. In this case we set $L_{t,j+1}(s,y')=L_{t-1,j+1}(s,y')$. Since we have maintained distances up to depth $\lceil \frac{2(2\beta+1)}{\epsilon_2}\rceil$ on $G^{j+1}$ with stretch $(1+\epsilon_j)$ at time $t-1$, and since we are in the decremental setting this means that after scaling back we get the desired stretch.
The second case is when the edge $(x',y')$ is not stretched in the tree rooted at $s$. The claim follows by definition of an edge that is not stretched.
\end{proof}
Note that if $d_{t,G}(x',y')$ belonged to a smaller scale, we have already added an edge that satisfied a similar condition for the corresponding scale.
Going back to the hopset argument, we note that every insertion into $H_{j+1}$ on the path $\pi_i$ (an edge that is stretched with respect to $s$) satisfies the conditions in Claim \ref{claim:sssp_inserts}. In other words, for any node on the path, say $y'$, there exists a node $x'$ that is directly connected to $y'$ and satisfies the stretch bound in Claim \ref{claim:sssp_inserts}.
This, combined with what we proved inductively on the structure of the segments of path $\pi_1$ in $H_{j+1}$, implies that for any node $v$ with $d(s,v) \leq 2^{j+1}$ we have a path with the desired stretch consisting of all the edges added for the different $i$. Finally, after the scaling we obtain the desired stretch, losing another factor of $(1+\epsilon_2)$.
We briefly review the cases which, at a high level, show that such a node $x'$ satisfying Claim \ref{claim:sssp_inserts} exists and appropriately \textit{shortcuts} the distance from $s$ to $y'$ for any node $y'$ on $\pi_i$ that is stretched with respect to $s$.
Recall the hopset argument for $i$: an insertion into one of the segments (of length $\delta d(s,v)$) can only occur when the second condition of the theorem is satisfied for some node $z \in A_{i+1}$. Let $[z'_l,z'_r]$ be the segment for which the new edge was inserted.
We argued that either $z'_r \in B(z'_l)$ or there is another node $z'$ for which the second theorem condition holds and $z' \in B(z'_l)$.
In any case we inserted a single edge in $H_{j+1}$ on this segment with a weight satisfying Observation \ref{obs:monotone}. Then, using Claim \ref{claim:sssp_inserts} and a calculation similar to the one we did for $H_{j+1}$, we can show that the second condition also holds on $G^{j+1}$, but with an additional error factor of $(1+\epsilon_2)$ from scaling.
At a high level, we have shown that the inserted edge on $\pi_i$ has a length that appropriately shortcuts the last segment; otherwise no new edges were added in iteration $i$ (when the first theorem condition holds for all segments).
We argued earlier that this path $\pi_i$ has stretch $(1+\epsilon_j)(1+\epsilon)(1+\epsilon_2)$ in $G \cup \bar{H}_{j+1}$. Hence after scaling and running Algorithm \ref{alg:estree} on $G^{j+1}$, we know that the path $\pi_i$ has depth at most $\lceil \frac{2(2\beta+1)}{\epsilon_2} \rceil$ and we have the following estimate for $v$:
\[d_{t,j+1}(s,v)\leq \min_{r=1}^{j+1} \eta(2^{j+1}, \epsilon_2) L_{t, r}(s,v) \leq (1+\epsilon_j)(1+\epsilon_2)^2(1+\epsilon) d_{t,G}(s,v)
\]
Then, after all the iterations $1 \leq i \leq 1/\rho +1 +k$, the second condition cannot hold (since $A_{1/\rho +1 +k}=\emptyset$), so the first condition must hold, which states that there is a path with $\beta =(3/\delta)^{1/\rho +1 +k}$ hops and stretch $(1+\epsilon_{j+1})$ in $G \cup \bar{H}_{j+1}$ between $s$ and $v$. Also, by the path doubling of Lemma \ref{lem:hop_doubling}, this means that there is a path with $2\beta+1$ hops and $(1+\epsilon_{j})(1+\epsilon_2)(1+\epsilon)$-stretch in $G \cup \bar{H}_{j+1}$ between $s$ and $v$ consisting of two paths satisfying the first theorem condition for $H_{j+1}$, and that this corresponds to a path with stretch $1+\epsilon_{j+1}$ in $G^{j+1}$.
The concatenation of these same paths in $G^{j+1}$ approximates $\pi_i$, and after scaling and unscaling we incur an additional factor of $(1+\epsilon_2)$.
\end{proof}
Theorem \ref{thm:single_stretch} allows us to hierarchically use the restricted hopsets for smaller scales to compute the distance for larger scales, that is in turn used to update the hopset edges in the larger scales.
In the following, we will show that by setting $\delta=O(\frac{\epsilon}{(k+1/\rho +1)})$ we get the desired stretch for Lemma \ref{lem:single-scale}. Next, we use Lemma \ref{lem:single-scale} for all scales and, by setting the appropriate error parameters, prove our overall stretch and hopbound tradeoffs. We also bound the overall update time using the running time of the monotone ES tree algorithm, which we use to run the restricted hopset algorithm on the graphs obtained for each scale.
\paragraph{Single scale stretch.} We will now use the stretch argument above to get the hopbound and stretch for each scale by setting the appropriate parameters.
As discussed, there are \textit{two} error factors incurred in each scale. One is caused by the fact that we are using previously added hopset edges, which we denote by $(1+\epsilon_j)$ for scale $j$, and the other is caused by the rounding error, which we denote by $(1+\epsilon_2)$. To get an overall stretch of $(1+\epsilon)$, we will set $\epsilon'= \frac{\epsilon}{6 \log W}$ and $\epsilon_2=\epsilon'$.
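To see why this choice of $\epsilon'$ suffices, note that for $0 < \epsilon \leq 1$,
\[
(1+3\epsilon')^{\log W} \;\leq\; e^{3\epsilon' \log W} \;=\; e^{\epsilon/2} \;\leq\; 1+\epsilon,
\]
where the last inequality holds on $(0,1]$ since $1+\epsilon - e^{\epsilon/2}$ vanishes at $\epsilon=0$ and is increasing there.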
\begin{corollary}\label{cor:overall_stretch}
After each update $t$, and for all $j, 0 \leq j \leq \log W$ and any pair $x,y \in V$, where $2^j \leq d_{t,G}(x,y) \leq 2^{j+1}$, we have $d_{t,G}(x,y) \leq d_{t,G \cup \bar{H}_j}(x,y) \leq (1+3\epsilon')^j \cdot d^{(\beta)}_{t,G}(x,y)$.
\end{corollary}
\begin{proof}
We use an induction on $j$. The base case ($j = 0$) is satisfied by the paths in $G$, since we can assume without loss of generality that the edge weights are at least one.
First, by induction hypothesis, we have a $(2^{j}, \beta, (1+3\epsilon')^{j})$-hopset, and hence $1+\epsilon_j = (1+3\epsilon')^{j}$ .
We then use Theorem \ref{thm:single_stretch} with $\epsilon_2=\epsilon'$ and $\delta= \frac{\epsilon'}{8(k+1/\rho +1)}$. For the final iteration $i=k+1/\rho +1$, since $A_{i+1} = \emptyset$, the second item cannot hold. Hence the first item must hold, and since $8\delta i \leq \epsilon'$ we have,
\[ d^{(\beta)}_{t,G \cup \bar{H}_j}(x,y) \leq (1+3\epsilon')^{j-1}(1 +\epsilon')(1+\epsilon') d_{t,G}(x,y) \leq (1+3\epsilon')^j d_{t,G}(x,y). \]
Here $d_{t,j}(x,y)$ is the sum of weights in the monotone ES tree, which corresponds to the approximate $\beta$-limited distance of $x$ and $y$ on the scaled graph.
\begin{comment}
In Lemma \ref{lem:rounding}, we argued that in order to get $(2\beta+1)$-hop limited distance on $G \cup \bigcup_{r=1}^{j-1}H_r$, we have to explore distances up to $\lceil \frac{\ell}{\epsilon} \rceil$, where $\ell=2\beta+1$, on each of the scaled graphs (by Lemma \ref{lem:rounding}).
Let $h= \lceil \ell/\epsilon' \rceil$ and let $\pi$ be the shortest $\ell$-hop path between $x$ and $y$, and let $(u,v)$ be an edge on $\pi$ added to $G^{j}$. We consider multiple cases, and in each case show that
either $d_{t,j}(u,v) \leq (1+3\epsilon')^j d_G(u,v)$ or $d_{t,j}(u,v) \leq w(u,v)+ \frac{\epsilon' R_j}{\ell}$ and then use triangle inequality to get the overall stretch for $\pi$.
First consider the case where the edge is added to $G^{j}$ because $v$ has joined the cluster of $u$ or $u$ has joined the cluster of $v$ for the first time in this update. Recall that we set $w_{G^j}(u,v) = d_{G^j} (u,v)$, where $d_{G^j} (u,v)$ is obtained by computing distances on $G^j$ up to depth $h$. Also, we set $\eta(R_j, \ell)= \epsilon' R_j/\ell$, and by Lemma \ref{lem:rounding}, we have:
\begin{align}
d_{t,j}(u,v) \leq \eta(R_j, \ell) \cdot d_{G^j}(u,v) &\leq d^{(2\beta+1)}_{G \cup \bigcup_{r=0}^{j} H_r}(u,v)+ \frac{\epsilon'R_j}{\ell}\\
&\leq (1+\epsilon') d^{(2\beta+1)}_{G \cup \bigcup_{r=0}^{j-1} H_r}(u,v)+ \frac{\epsilon'R_j}{\ell} \label{line:hopset_stretch}
\end{align}
The second line follows from the hopset stretch argument in Theorem \ref{thm:static_hopset}, and the last line follows from induction hypothesis.
If this is the case for all the edges on $\pi$, then by using the fact that $\pi$ has $\ell$ hops and by triangle inequality we get,
\[ d_{t,j}(x,y) \leq (1+ \epsilon') d^{(2\beta+1)}_{G \cup \bigcup_{r=0}^{j-1} H_r} +\epsilon' R_j \leq (1+3\epsilon')d^{(2\beta+1)}_{G \cup \bigcup_{r=0}^{j-1} H_r} \]
Now assume that $(u,v)$ is added due to a hopset edge added for a scale $j' <j$. Then it is also added to $G^{j-1}$. If $(u,v)$ was not stretched, we have:
\[ d_{t,j}(u,v) = \eta(R_j, \ell) \cdot \lceil \frac{d_{t,G^j}(u,v)}{\eta(R_{j}, \ell)} \rceil \leq
d_{t,G^{j-1}}(u,v)+ \frac{\epsilon'R_j}{\ell} \label{line:hopset_stretch} \leq (1+\epsilon') d^{(2\beta+1)}_{G \cup \bigcup_{r=0}^{j-1} H_r}(u,v)+ \frac{\epsilon'R_j}{\ell}\]
Similar to the previous case, we get the claim for $d_{t,j}(x,y)$ if all edges on $\pi$ are in one of these two cases.
Finally, we consider the case where $(u,v)$ is stretched, (for the monotone ES tree ) rooted at \textit{some} node $z \in A_i \setminus A_{i+1}$ that is inserted at time $t$ in $G^j$. In this case we use an induction on update time $t$. The base case follows from the static hopset construction.
We will show that,
\[ d_{t,j}(z,v) = \eta(R_j, \ell) \cdot L_{t, G^j}(z,v) \leq (1+3\epsilon')^{j} d_{t,G}(z,v) \]
We can now consider the set of edges on $\pi$ that are in one of the first two cases, and consider a set of $\ell' \leq \ell$ of stretched edges. By summing up over all the edge weights and by triangle inequality we get $d_j(x,y) \leq (1+3\epsilon')^j \cdot d_{t,G}(x,y)$.
\end{comment}
\end{proof}
\paragraph{Proof of stretch and hopbound in Lemma \ref{lem:single-scale}.}
Now by simply setting $\epsilon'= \frac{\epsilon}{3}$ in Corollary \ref{cor:overall_stretch} we get the desired stretch and hopbound.
\paragraph{Putting it together.}
We now use the stretch argument of Corollary \ref{cor:overall_stretch} with the update time followed by Lemma \ref{lem:single-scale} to get the following hopset guarantees.
\begin{theorem}\label{thm:main_hopset}
The total update time in each scaled graph $G^j$, $1 \leq j \leq \log W$, over all deletions is $\tilde{O}((\ell/\epsilon') (n^{1+\nu}+ m)n^{\rho})$, and hence the total update time\footnote{If weights are not polynomial the $\log n$ factor will be replaced with $\log W$, and a factor of $\log^2 W$ will be added to the update time.} for maintaining a $(\beta, 1+\epsilon)$-hopset with hopbound $\beta= O(\frac{\log n}{\epsilon} \cdot (k+1/\rho))^{k+1/\rho+1}$ is $\tilde{O}(\frac{\beta}{\epsilon} \cdot mn^{\rho})$.
\end{theorem}
\begin{proof}
First we use Corollary \ref{cor:overall_stretch} to prove the stretch and hopbound, by setting $j= \log (nW)$. For the final scale we have $d_{t,\log (nW)}(u,v) \leq (1+3\epsilon')^{\log (nW)} d_G(u,v) \leq (1+\epsilon) d_G(u,v)$. The hopbound obtained is
\[O(\frac{1}{\epsilon'} \cdot (k+1/\rho))^{k+1/\rho+1}=O(\frac{\log (nW)}{\epsilon} \cdot (k+1/\rho))^{k+1/\rho+1}.\]
The running time follows from Lemma \ref{lem:single-scale} with $\Delta= O(n^{1+\nu})$: we get an overall running time of $\tilde{O}(mn^{\rho} \cdot \frac{\beta}{\epsilon})$.
\end{proof}
Hence for constant $\rho =1/k$ the total update time is $\tilde{O}(mn^{\rho})$ and the hopbound $\beta$ is polylogarithmic.
\section{Applications}\label{sec:applications}
In this section we explain two applications of our decremental hopsets to get improved bounds for $(1+\epsilon)$-approximate SSSP and MSSP and $(2k-1)(1+\epsilon)$-APSP. For both of these problems we first construct a hopset, where we choose the appropriate hopbound depending on the number of sources. We then use the scaling scheme in Lemma \ref{lem:rounding} on the obtained graph.
Our algorithm for $(2k-1)(1+\epsilon)$-APSP involves maintaining two data structures simultaneously: A $(\beta, 1+\epsilon)$-hopset, and a Thorup-Zwick distance oracle \cite{TZ2005}. At a high-level the hopset will let us maintain the distance oracle much faster, at the expense of a $(1+\epsilon)$-factor loss in the stretch.
\subsection{$(1+\epsilon)$-approximate SSSP and $(1+\epsilon)$-MSSP}
Given a graph $G=(V,E)$ and a set $S$ of $s$ sources, our goal is to maintain the distances from each source in $\tilde{O}(sm+mn^\rho)$ total update time (where $\rho$ is a constant), with constant query time.
Once a $(\beta,1+\epsilon)$-hopset is constructed, we can run Algorithm \ref{alg:estree} on all the scaled graphs $G^1, G^2, ..., G^{\log (nW)}$ up to depth $O(\beta)$, scale back the distances, and return the smallest value for each source.
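A minimal Python sketch of this final query step, assuming the rounding unit $\eta(2^j,\epsilon)=\epsilon\, 2^j/\ell$ and that \texttt{levels[j][v]} stores the level of $v$ in the monotone ES tree on $G^j$ (or $\infty$ if $v$ lies beyond the explored depth); the names are hypothetical.
\begin{verbatim}
def query_distance(levels, v, eps, ell):
    # return the smallest unscaled estimate over all distance scales
    best = float("inf")
    for j, level_j in levels.items():   # levels: {scale j: {node: level}}
        eta_j = eps * (2 ** j) / ell
        best = min(best, eta_j * level_j.get(v, float("inf")))
    return best
\end{verbatim}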
In the next theorems we argue that using the same techniques as we used for maintaining the hopset (similar to the framework of \cite{henzinger2016}), namely by combining the monotone ES tree and scaling, we get our SSSP and MSSP results. In particular, after constructing the hopset we can use Theorem \ref{thm:single_stretch} and Theorem \ref{thm:main_hopset} to get:
\begin{theorem}\label{thm:mssp}
Given an undirected and weighted graph $G=(V, E)$, there is a decremental algorithm for maintaining $(1+\epsilon)$-approximate distances from a set $S$ of sources in total update time of $\tilde{O}(\beta (|S| (m+n^{1+\frac{1}{2^k-1}})+ mn^\rho))$, where $\beta= O(\frac{\log (nW)}{\epsilon} \cdot (k+1/\rho))^{k+1/\rho+1}$, and with $O(1)$ query time.
\end{theorem}
\begin{proof}
We maintain a $(\beta, 1+\frac{\epsilon}{3})$-hopset $H$ based on Theorem \ref{thm:main_hopset}. Then we run Algorithm \ref{alg:estree} on $G \cup H$ from all the sources in $S$ on all scaled graphs. The claim then follows by the argument in Theorem \ref{thm:single_stretch}.
In particular, after adding all the hopset edges at time $t$ for all scales, we run the monotone ES tree algorithm rooted at each source again on the union of all scaled graphs $G^1 \cup ...\cup G^{\log W}$ (by setting $\epsilon_0=\epsilon/3$) and let the level $L(s,v)$ of a node be $\min_{j} \eta( 2^j, \frac{\epsilon}{3}) L_j(s,v)$, where $L_j(s,v)$ is the level of $v$ on $G^j$ in the monotone ES tree run up to depth $\beta$. By item 3 of Theorem \ref{thm:main_hopset}, we get an overall stretch of $(1+\epsilon/3)^2 \leq (1+\epsilon)$.
The time required for maintaining the hopset is $\tilde{O}( (m+ n^{1+\frac{1}{2^k-1}}) n^\rho)$, and by setting $n^{\rho}=s$, the time required for maintaining $\beta$-hop-bounded shortest paths from all sources is $O(sm \cdot \beta) = \tilde{O}(sm)$ when $s=n^{\Omega(1)}$.
\end{proof}
We next state two specific consequences. The first implication is that when the number of sources is polynomial in $n$ and the graph is not very sparse, we get a near-optimal (up to polylogarithmic factors) algorithm for $(1+\epsilon)$-MSSP.
\begin{corollary}
Given an undirected and weighted graph $G=(V, E)$, where $|E|= n^{1+\Omega(1)}$, there is a decremental algorithm for maintaining $(1+\epsilon)$-approximate distances from $s$ sources, where $s=n^{\Omega(1)}$ in total update time of $\tilde{O}(sm)$, and with $O(1)$ query time.
\end{corollary}
When the number of sources is $s= n^{o(1)}$ (e.g.~in the case of SSSP), the best tradeoff can be obtained by setting $\rho= \frac{\log \log n}{\sqrt{\log n}}$. We will then have $\beta= 2^{\tilde{O}(\sqrt{\log n})}$ and also $n^{\rho}=2^{\tilde{O}(\sqrt{\log n})}$. In this case we get improved bounds over the result of \cite{henzinger2014}, which has a total update time of $mn^{\tilde{O}({\log^{3/4}n})}$.
\begin{corollary}
Given an undirected and weighted graph $G=(V, E)$, there is a decremental algorithm for maintaining $(1+\epsilon)$-approximate distances from $s$ sources, when $0 <\epsilon<1$ is a constant and $|E|= n \cdot 2^{\tilde{\Omega}(\sqrt{\log n})}$, with total update time of $\tilde{O}(sm \cdot 2^{\tilde{O}(\sqrt{\log n})})$, and with $O(1)$ query time. Hence, we can maintain $(1+\epsilon)$-approximate SSSP in $ 2^{\tilde{O}(\sqrt{\log n})}$ amortized time.
\end{corollary}
\subsection{APSP distance oracles} \label{sec:APSP}
It is known that in static settings for any weighted graph $G=(V,E)$, we can construct a Thorup-Zwick \cite{TZ2005} distance oracle of size (w.h.p.) $\tilde{O}(n^{1+1/k})$, such that after the preprocessing time of $\tilde{O}(mn^{1/k})$, we can query $(2k-1)$-approximate distances for any pair of nodes in $O(k)$ time. In this section we show that in decremental settings we can maintain these distance oracles in total update time of $\tilde{O}(mn^{1/k})$ (for graphs that are not too sparse), and we can query $(2k-1)(1+\epsilon)$-approximate distances in $O(k)$ time. This can be done by maintaining a $(\beta, 1+\epsilon)$-hopset and a distance oracle for $G$ at the same time, where $\beta$ is polylogarithmic in $n$. Intuitively, the hopset will allow us to update distances faster on the distance oracle.
\paragraph{Distance oracle algorithm via a hopset.}
Assume that we are given a $(\beta, 1+\epsilon)$-hopset for $G$. The algorithm for constructing the Thorup-Zwick distance oracle is as follows: Similar to the algorithm in Section \ref{sec:static_hopset}, we define sets $V=A_0 \supseteq A_1 \supseteq ... \supseteq A_k=\emptyset$\footnote{This $k$ should not be confused with the size parameter in the hopset algorithm of Section \ref{sec:static_hopset}. Here we only use the fact that the hopset size can be bounded based on the graph density.}. But here each set $A_{i+1}$ is obtained by sampling each element from $A_i$ with probability $p_i=n^{-1/k}$. As before, for every node $u \in A_i\setminus A_{i+1}$, let $p_i(u) \in A_{i+1}$ be the node of $A_{i+1}$ closest to $u$. The bunch of a node $u$ is the set $B(u) = \cup_{i=1}^{k} B_i(u)= \{ v \in A_i: d(u,v) < d(u,A_{i+1})\} \cup \{p(u)\}$, and the cluster $C(v)$ of a node $v$ is defined so that $u \in C(v)$ whenever $v \in B(u)$. The distance oracle consists of the bunches $B(v)$ for all $v \in V$, together with the distances associated with them. Note that the information stored here is also different from that of the hopset algorithm described in Section \ref{sec:static_hopset}, since there we only added edges for nodes $v \in A_i$ and their bunches.
Thorup and Zwick \cite{TZ2005} show that this distance oracle has the following properties (in static settings):
\begin{theorem}[\cite{TZ2005}]
There is a distance oracle of expected size $O(kn^{1+1/k})$, that can answer $(2k-1)$-approximate distance queries for a given weighted and undirected graph $G=(V,E)$ in $O(k)$ time for any $k \geq 2$. The preprocessing time in static settings is w.h.p.~$\tilde{O}(mn^{1/k})$.
\end{theorem}
As discussed in Appendix \ref{app:restricted_hopset}, Roditty and Zwick \cite{roditty2004} showed how to maintain this data structure in $O(mn)$ update time for \textit{unweighted graphs}, but where the size is increased to $\tilde{O}(m+n^{1+1/k})$. For weighted graphs their update time can be as large as $O(mn^{1+1/k})$.
We will argue that by maintaining a $(\beta,1+\epsilon)$-hopset along with the distance oracle we can improve the total update time to $\tilde{O}(\beta mn^{1/k})$. This combined with our decremental hopset of Theorem \ref{thm:main_hopset} will lead to the desired bounds. More formally,
\begin{theorem}\label{thm:oracle_time}
Given a weighted and undirected graph $G=(V,E)$ and a $(\beta, 1+\epsilon)$-hopset $H$ for $G$, and a parameter $k \geq 2$, we can maintain a distance oracle with size $\tilde{O}(m+ |E(H)|+n^{1+1/k})$ that supports $(1+\epsilon)(2k-1)$-approximate queries in $\tilde{O}(\frac{\beta}{\epsilon} \cdot m n^{1/k})$ total update time with $O(k)$ query time.
\end{theorem}
\begin{proof}
Similar to Theorem \ref{thm:mssp}, we consider the sequence $G^1, ..., G^j$, where $G^r$, $r \leq j$, is the scaling of the graph $G \cup \bar{H}_r$ as defined in Section \ref{sec:new_hopset} (and Algorithm \ref{alg:main}), where $\epsilon_0= \frac{\epsilon}{3}$ and $\bar{H}_j$ is a $(2^j, \beta, 1+\frac{\epsilon}{3})$-restricted hopset. We then run the algorithm of Roditty-Zwick \cite{roditty2004} on $G$ up to depth $\lceil 3\beta/\epsilon \rceil$ for maintaining the clusters and the bunches.
The algorithm and the running time analysis are similar to the restricted hopset algorithm described in Section \ref{sec:new_hopset}. The main differences between these algorithms are the sampling probabilities and the information stored. Therefore, using the argument in Lemma \ref{lem:dec_bound_cluster}, we can show that by running this algorithm on $G$ with depth $\lceil 3\beta/\epsilon \rceil$ we can maintain a bunch $B_i(u)$ for all nodes $u \in V, 1 \leq i \leq k-1$ in $\tilde{O}(\frac{\ell m}{\epsilon q_i})=\tilde{O}(\frac{\beta}{\epsilon} mn^{1/k})$ total update time. This algorithm lets us maintain the clusters. We also maintain the distances in clusters and hence bunches as follows: For each $v \in V, u \in B(v)$, we run a single-source shortest path computation from $v$ on the scaled graphs $G^1, ..., G^{\log W}$ (by setting $\epsilon_0=\epsilon/3$). We then set the distance $d(u,v)$ to be $\min_{j} \eta( 2^j, \frac{\epsilon}{3}) L_j(v,u)$, where $L_j(v,u)$ is the level of $u$ on $G^j$ in the monotone ES tree rooted at $v$, run up to depth $\lceil 6(\beta+1)/\epsilon \rceil$.
Again, when we combine the hopset stretch with the error caused by rounding, we get an overall stretch of $(1+\frac{\epsilon}{3})^2 \leq 1+\epsilon$. The overall stretch is thus $(2k-1)(1+\epsilon)$.
\end{proof}
\begin{theorem} \label{thm:oracle_main}
Given a weighted graph $G=(V,E)$ with polynomial weights, and constant\footnote{If $k=\omega(1)$, then a factor of $n^{o(1)}$ will be added to the running time.} $k \geq 2$ and $0 < \epsilon <1$, we can maintain a data structure with expected size $\tilde{O}(m+n^{1+1/k})$ and total update time of $\tilde{O}(mn^{1/k}\cdot (1/\epsilon)^{O(1)})$, that answers distance queries with stretch $(2k-1)(1+\epsilon)$ for any pair $u,v \in V$ in $O(k)$ query time.
\end{theorem}
\begin{proof}
We construct and maintain a $(\beta, 1+\frac{\epsilon}{3})$-hopset using Theorem \ref{thm:main_hopset}. If $m= n^{1+\Omega(1)}$ we can set $\rho=\frac{1}{k}$, and we set the hopset size parameter $\nu$ to a small constant\footnote{The choice of the size parameter impacts the polylogarithmic factors. Hence one option is to choose the smallest constant such that the graph size is not smaller than the hopset size.} such that $O(n^{1+\nu})=O(m)$ ($\nu =\frac{1}{2^k-1}$). If $m=n^{1+o(1)}$, we set $\rho=\frac{1}{2k}$. In both cases the time required for maintaining a hopset is $\tilde{O}(mn^{1/k} \cdot (1/\epsilon)^{O(1)})$. We get hopbound $\beta= O(\log n/\epsilon)^{\log (1/\nu) +1/k+1}= \textit{polylog}(n)$. Hence we can also maintain the distance oracle in $\tilde{O}(mn^{1/k})$ total update time. The stretch will be $(2k-1)(1+\epsilon)$, and the query time remains the same as the static query time, which is $O(k)$.
\end{proof}
\paragraph{Distance Sketches.} As shown in \cite{TZ2005}, the bunch $B(v)$ of each node $v$ can also be seen as a distance sketch of expected size $O(kn^{1/k})$. They show that using only $B(u)$ and $B(v)$ we can approximate the distance between $u$ and $v$. Since we are maintaining distances using a hopset, we lose an additional factor of $(1+\epsilon)$ in the stretch. Note that we need access to $G$ in order to \textit{update} the sketches, but the rest of the graph is not needed in order to \textit{query} the distances. The update time is the same as in Theorem \ref{thm:oracle_main}, so we do not repeat the statement.
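For completeness, the standard Thorup-Zwick sketch query looks roughly as follows in Python. Here \texttt{bunch\_dist[x]} is assumed to map every $w \in B(x)$ (including $x$ itself with distance $0$ and the pivots $p_i(x)$) to the stored approximate distance, and \texttt{pivot[x][i]} to return $p_i(x)$; in our setting these stored values are the $(1+\epsilon)$-approximate distances maintained via the hopset.
\begin{verbatim}
def sketch_query(u, v, bunch_dist, pivot):
    # classic Thorup-Zwick query using only the sketches of u and v
    w, i = u, 0
    while w not in bunch_dist[v]:
        i += 1
        u, v = v, u          # swap roles and climb one level
        w = pivot[u][i]      # p_i(u), stored in B(u) by construction
    return bunch_dist[u][w] + bunch_dist[v][w]
\end{verbatim}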
\appendix
\section{Static Hopset Properties} \label{app:static_hopset}
In this section, we will briefly overview the (static) hopset algorithm of \cite{elkin2019RNC} (which is similar to \cite{huang2019}). Given a weighted graph $G=(V,E)$ and a parameter $k$, we first construct a hopset of size $O(n^{1+\frac{1}{{2^k}-1}})$ and hopbound $O(k/\epsilon)^{k}$. We then modify the algorithm in order to get a better running time at the cost of a larger hopbound.
We define sets $V=A_0 \supseteq A_1 \supseteq ... \supseteq A_k=\emptyset$. Let $\nu =\frac{1}{2^k-1}$. Each set $A_{i+1}$ is obtained by sampling each element from $A_i$ with probability $q_i=n^{-2^i \cdot \nu}$. Hence it can be shown that $E[|A_i|]= n^{1-(2^{i}-1)\nu}$.
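For instance, the expected sizes follow directly from the sampling probabilities:
\[
E[|A_i|] \;=\; n \prod_{r=0}^{i-1} q_r \;=\; n \cdot n^{-\nu \sum_{r=0}^{i-1} 2^{r}} \;=\; n^{\,1-(2^{i}-1)\nu}.
\]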
For every vertex $u \in A_i\setminus A_{i+1}$, let $p(u) \in A_{i+1}$ be the node of $A_{i+1}$ closest to $u$. The bunch is set to $B(u)= \{ v \in A_i: d(u,v) < d(u,A_{i+1})\} \cup \{p(u)\}$. Also, we define a set $C(v)$, called the cluster of $v$, such that if $v \in B(u)$ then $u \in C(v)$. It can easily be shown that clusters are \textit{connected} in the sense that if a node $v \in C(u)$ then any node $z$ on the shortest path between $v$ and $u$ is also in $C(u)$. As we will see, this property is important for bounding the running time. The hopset consists of the edges $(u,v)$ where $v \in B(u)$, with weight set to $d(u,v)$.
The algorithm described leads to a $((k/\epsilon)^k, 1+\epsilon)$-hopset, but the running time can be as large as $O(mn)$. To resolve this, \cite{elkin2019RNC} proposed an algorithm using the modified sampling probabilities $q_i=\max(n^{-2^i \cdot \nu}, n^{-\rho})$.
Using this approach, the number of iterations becomes $k+1/\rho+1$, but the hopbound is also increased to $O(\frac{k+1/\rho+1}{\epsilon})^{O(k+1/\rho+1)}$.
We briefly review some of the properties of this hopset algorithm as discussed in \cite{elkin2019RNC, huang2019}, and then explain how \cite{elkin2019RNC} modifies the algorithm to improve the running time.
One important component of this algorithm is the \textit{modified Dijkstra's algorithm} that we will also utilize in our dynamic algorithms, and thus we briefly review it. This algorithm was presented by Thorup-Zwick \cite{TZ2005} and it allows us to construct the bunches and clusters for level $i$ in $O((m + n\log n)/q_i)$ (expected) time.
At a high level this is done by a modification to Dijkstra's algorithm. In the original algorithm, for each source $u \in A_i \setminus A_{i+1}$, at each iteration we consider an unvisited vertex $v$ and relax each incident edge $(v,z)$ by setting ${d}(u,z) := \min \{d(u,z), d(u,v)+ w(v,z)\}$. But in the modified algorithm this is done only if $d(u,v) + w(v,z) < d(z, A_{i+1})$. In other words, each node $z$ ``participates" in a shortest-path exploration from a source $u$ only if $z \in B(u)$. Note that if $z \in B(u)$, all the nodes on the shortest path between $u$ and $z$ are considered. This lets us bound the running time by the expected size of $B(u)$.
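A minimal Python sketch of this modified Dijkstra, assuming an adjacency-list representation \texttt{adj[u]} of pairs $(v,w)$ and a precomputed array \texttt{d\_next[z]} $= d(z, A_{i+1})$; it grows the cluster of a single source and is only meant to illustrate the pruning rule.
\begin{verbatim}
import heapq

def modified_dijkstra(adj, source, d_next):
    # grows C(source): a node z is relaxed only if the tentative distance
    # from `source` is below d(z, A_{i+1}), i.e. only if z lies in B(source)
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d_u, u = heapq.heappop(heap)
        if d_u > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for (z, w) in adj[u]:
            nd = d_u + w
            if nd < d_next[z] and nd < dist.get(z, float("inf")):
                dist[z] = nd
                heapq.heappush(heap, (nd, z))
    return dist                            # keys form the cluster C(source)
\end{verbatim}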
\subsection{Properties}
\paragraph{Size.} We rely on the fact that the hopset is not denser than the original graph. This is why our main bounds do not hold for very sparse graphs. For analyzing the size, \cite{elkin2019RNC} argues that for each $u \in A_i \setminus A_{i+1}$ we have $E[|B(u)|] \leq 1/q_i$ for the following reason: Consider an ordering of the vertices in $A_i$ based on their distance to $u$. By definition, the size of $B(u)$ is bounded by the number of vertices in this ordering before the first vertex of $A_{i+1}$ is visited. This corresponds to a geometric random variable with parameter $q_i$, and thus in expectation it is $1/q_i =n^{2^i \nu}$. Hence, over all $i$, the expected number of edges added is
\[\sum_{i=1}^{k-2} E[|A_i|]n^{2^i \cdot \nu } = O(kn^{1+\nu}). \]
\paragraph*{Modified Dijkstra's algorithm.} For an efficient construction of these hopsets, \cite{elkin2019RNC} used the \textit{modified Dijkstra's algorithm} proposed by Thorup-Zwick \cite{TZ2005}. With this algorithm, the bunches for level $i$ can be constructed in $O((m + n\log n)/q_i)$ time. At a high level this is done by a modification to Dijkstra's algorithm. In the original algorithm, for each source $u \in A_i \setminus A_{i+1}$, at each iteration we consider an unvisited vertex $v$ and ``relax" each incident edge $(v,z)$ by setting ${d}(u,z) = \min \{d(u,z), d(u,v)+ w(v,z)\}$. But in the modified algorithm this is done only if $d(u,v) + w(v,z) < d(z, A_{i+1})$. In other words, each node $z$ ``participates" in a shortest-path exploration from a source $u$ only if $z \in B(u)$. Note that if $z \in B(u)$, all the nodes on the shortest path between $u$ and $z$ are considered. Since $E[|B(u)|] \leq \frac{1}{q_i}$, this allows us to bound the expected running time by $O(m n^{\rho})$.
\subsection{Hopbound and Stretch.} \label{app:hopbound_stretch}
In this section we sketch the analysis of the hopbound and stretch of a simple static hopset algorithm, which does not bound the sampling probabilities by $n^{-\rho}$. This leads to an (almost) optimal size and hopbound tradeoff but has a larger construction time.
The extension to the more efficient variant will be straightforward.
The following lemma was proved by \cite{elkin2019RNC, huang2019}. We give a proof sketch here. We use a similar idea in our dynamic hopset construction (in combination with monotone ES trees and scaling), and hence some of the missing details can be found in the proof of Theorem \ref{thm:single_stretch}.
\begin{lemma} \label{lem:hopbound}
Fix $0 < \delta \leq 1/{8k}$, and consider a pair $x,y \in V$. Then for $0 \leq i \leq k+1$ at least one of the following conditions holds:
\begin{itemize}
\item $d^{((3/\delta)^i)}_{G \cup {H}}(x,y) \leq (1+8\delta i) d_{G}(x,y)$ or,
\item There exists $z \in A_{i+1}$ such that,
$d^{((3/\delta)^i)}_{G \cup H} (x,z)\leq 2 d_{G}(x,y).$
\end{itemize}
\end{lemma}
\begin{proof}[Proof sketch]
This can be shown by induction on $i$. For the base case of $i=0$, we have three cases. If $y \in B(x)$ then the edge $(x,y)$ is in the hopset, and the first condition of the lemma holds. Otherwise, if $x \in A_1$, then $z=x$ trivially satisfies the second condition. Otherwise we have $x \in A_0\setminus A_1$, and by setting $z=p(x)$ we know that there is an edge $(x,z) \in H$ such that $d(x,z) \leq d(x,y)$ by definition of $p(x)$, and hence the second condition holds.
Now assume the claim holds for $i$. Consider the shortest path $\pi(x,y)$ between $x$ and $y$. We divide this path into $1/\delta$ segments of length roughly $\delta d_G(x,y)$ (up to rounding). Using the triangle inequality over the segments, we apply the induction hypothesis to each segment. If the first condition holds for $i$ on all the segments, then concatenating the hop-bounded paths of the segments gives a path of at most $(3/\delta)^{i+1}$ hops, and we can show that this path satisfies the first condition for $i+1$.
Now, assume that there are at least two segments for which the first condition does not hold for $i$. Then let $[u_\ell, v_\ell]$ be the first such segment (i.e.~closest to $x$) and let $[u_r, v_r]$ be the last such segment.
Then by inductive hypothesis there are $z_\ell, z_r \in A_{i+1}$ such that:
\begin{itemize}
\item $d_{G \cup H}^{((3/\delta)^i)} (u_\ell, z_\ell) \leq 2d(u_\ell, v_\ell)$, \textit{and,}
\item $d_{G \cup H}^{((3/\delta)^i)} (v_r, z_r) \leq 2d(u_r, v_r)$
\end{itemize}
Again, we consider two cases. First, in case $z_r \in B(z_\ell)$, we have added a single hopset edge between $z_r$ and $z_\ell$ with weight $d(z_r, z_\ell)$. By applying the inductive hypothesis to the segments before $[u_\ell, v_\ell]$ and after $[u_r,v_r]$, we get a path of at most $(3/\delta)^i$ hops for each of these segments, satisfying the first condition for the endpoints of the segment. Also, we have a path of at most $2(3/\delta)^i +1$ hops going through $u_{\ell}, z_{\ell}, z_r, v_r$ that satisfies the first condition for $u_{\ell}, v_r$. Putting all of these together, we can show that there is a path of hopbound $(3/\delta)^{i+1}$ satisfying the first condition. To get this we need to use the fact that the length of each segment is at most $\delta \cdot d(x,y)$. We have,
\begin{align*}
d^{((3/\delta)^{i+1})}_{G \cup H} (x,y) &\leq \sum^{\ell-1}_{j=1} \left[d_{G \cup H}^{((3/\delta)^i)} (u_j, v_j) +d^{(1)}_G(v_j,u_{j+1})\right] +d^{((3/\delta)^i)}_{G \cup H} (u_\ell, z_\ell)\\
&+d^{(1)}_H (z_\ell, z_r) + d^{((3/\delta)^i)}_{G \cup H} (z_r, v_r) +d^{(1)}_G (v_r, u_{r+1})\\
&+\sum^{(1/\delta)}_{j=r+1} \left[d_{G \cup H}^{((3/\delta)^i)} (u_j, v_j) +d^{(1)}_{G}(v_j,u_{j+1})\right]\\
&\leq 8 \delta d_G(x,y) + (1+ 8 \delta i) d_G(x,y)\\
&\leq (1+ 8 \delta(i+1)) d_G(x,y)
\end{align*}
Finally, consider the case where $z_r \not \in B(z_{\ell})$. If $z_{\ell} \not \in A_{i+2}$, we consider $z =p(z_\ell)$. By definition we have added the edge $(z_{\ell}, z)$ to the hopset, and we can show that the second condition holds. We use similar reasoning as before and also use the fact that we set $\delta <1/{8k}$ to show that the second condition holds in this case. The only remaining case is when $z_{\ell} \in A_{i+2}$, where a similar but simpler argument applies with $z=z_{\ell}$.
\end{proof}
We can now set $\delta =\Theta(\epsilon/k)$ in Lemma \ref{lem:hopbound}, and since $A_{k}=\emptyset$, for $i= k-1$ only the first condition can hold. Therefore we get a hopbound of $\beta= \Theta(k/\epsilon)^{k}$.
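Concretely, taking $\delta = \epsilon/(8k)$ (assuming $\epsilon \leq 1$, so that $\delta \leq 1/(8k)$), the first condition at $i=k-1$ gives stretch
\[ 1+8\delta(k-1) \leq 1+\epsilon \qquad\text{with hopbound}\qquad (3/\delta)^{k-1} = (24k/\epsilon)^{k-1} = O(k/\epsilon)^{k}. \]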
\paragraph{Hopbound in the efficient variant.} For the more efficient construction, we considered a two phase algorithm. For the first phase we use similar reasoning as in Lemma \ref{lem:hopbound}, but in the second phase the parameters change. The algorithm requires more \textit{iterations}, which impacts the overall hopbound: we require $k+1/\rho+1$ iterations overall. In the second phase we have $\delta'= \epsilon/(k+1/\rho)$ and thus we get an overall hopbound of $O(\frac{k+1/\rho}{\epsilon})^{k +1/\rho+1}$.
Putting everything together, we have the following guarantees for the static hopset:
\begin{theorem}[\cite{elkin2019RNC}]
There is an algorithm that given a weighted and undirected graph $G=(V,E)$, and $2 \leq k \leq \log \log n -2$, $\frac{2}{2^k-1} < \rho <1$ computes a $(\beta, \epsilon)$-hopset of size $O(n^{1+\frac{1}{2^k-1}})$, where $\beta= O\left( \frac{k+1/\rho}{\epsilon}\right)^{k+1/\rho+1}$. It runs in $O(\frac{n^\rho}{\rho}( m +n\log n))$ expected time.
\end{theorem}
\section{Details Omitted from Section \ref{sec:restricted_hopset}} \label{app:restricted_hopset}
In this section, we review the restricted hopset algorithm, which is mainly based on the algorithm of \cite{roditty2004}, and analyze its running time.
\subsection{Algorithm of \cite{roditty2004}}
In this section, we review the algorithm of \cite{roditty2004}, which allows us to maintain a restricted hopset as stated; a version with the additional property of handling certain edge insertions can also be found in Algorithm \ref{alg:restricted_hopset} in Section \ref{sec:new_hopset}.
We sample sets $V=A_0 \supseteq A_1 \supseteq ... \supseteq A_{i(\rho)}=\emptyset$, where $i(\rho)= k+1/\rho+1$ once and they remain the same during the updates.
Next, we need to maintain the values $d(v, A_i), 1 \leq i \leq k-1$ for all nodes $v \in V$. This can be performed by computing a shortest path tree rooted at a dummy node $s_i$ connected to all nodes in $A_i$. Let $\hat{d}=(1+\epsilon)d$. We can use the Even-Shiloach \cite{ES} algorithm up to depth $\hat{d}$ to compute all these distances in $O(\hat{d}m)$ time. The pivots $p(v), \forall v \in V$ can also be maintained in this process.
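As a static illustration of the dummy-source idea, the following Python sketch computes $d(v, A_i)$ for every node by a multi-source Dijkstra, which is equivalent to attaching a dummy root to $A_i$ with zero-weight edges; the dictionary-based graph representation is an assumption for illustration, and the decremental maintenance via Even-Shiloach trees is not shown.
\begin{verbatim}
import heapq

def dist_to_set(adj, A_i):
    """d(v, A_i) for every reachable v: multi-source Dijkstra seeded
    with the nodes of A_i at distance 0 (static sketch only)."""
    dist = {a: 0 for a in A_i}
    heap = [(0, a) for a in A_i]
    heapq.heapify(heap)
    while heap:
        d_v, v = heapq.heappop(heap)
        if d_v > dist.get(v, float('inf')):
            continue  # stale heap entry
        for z, w in adj.get(v, []):
            if d_v + w < dist.get(z, float('inf')):
                dist[z] = d_v + w
                heapq.heappush(heap, (d_v + w, z))
    return dist
\end{verbatim}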
\paragraph{Maintaining the clusters.} Recall that for $z \in A_i\setminus A_{i+1}$ we have $v \in C(z)$ if and only if $d(z,v) < d(v,A_{i+1})$. After each deletion, for each node $v$ and each cluster center $z$, we first check whether the distance $d(z,v)$ has increased. If $d(z,v) \geq d(v, A_{i+1})$, then $v$ is removed from $C(z)$. The more subtle part is adding nodes to new clusters. For each $0 \leq i <k$, we define a set $X_{i}$ consisting of all vertices whose distance to $A_i$ has increased as a result of a deletion, but for which this distance is still at most $\hat{d}$. The sets $X_{i}$ can be computed while maintaining $d(v,A_i)$.
Note that a node $v$ joins a new cluster $C(z)$ only after an increase in $d(v,A_{i+1})$. Using this observation, after each deletion, for every $v \in X_{i+1}$, every edge $(u,v) \in E$, and every $z \in B_i(u) \setminus B_i(v)$, we check whether $d(z,u) +w(u,v) < d(v, A_{i+1})$. If so, $v$ joins $C(z)$: we push $v$ to a priority queue $Q(z)$ with key $d(z,u)+w(u,v)$. If $v$ was already in the queue, the key is updated if this distance is smaller than the existing estimate. In this case we \textit{mark} $v$. The marked nodes join the clusters $C(z)$, but there may be other nodes that also need to join $C(z)$ as a result of this change.
Hence after this initial phase, for each $z \in A_i \setminus A_{i+1}$ where $Q(z) \neq \emptyset$, we run the modified Dijkstra's algorithm. Recall that in the modified Dijkstra's algorithm, when we explore the neighbors of a node $x$, we only relax an edge $(x,y)$ if $d(z,x)+w(x,y) < d(y,A_{i+1})$. Then \cite{roditty2004} shows that this process correctly maintains the clusters. We then repeat this process for all the $k+1/\rho+1$ iterations. We add a hopset edge between each $z \in A_i \setminus A_{i+1}$ and all nodes $v \in C(z)$, and set the weight of this edge to $w(v,z)=d_G(v,z)$.
\subsection{Proof of Lemma \ref{lem:dec_bound_cluster}}
Next, we argue that using the above algorithm (with the modified probabilities), we can get the following extension of a result by \cite{roditty2004} that is crucial in bounding the total update time throughout this paper:
\begin{lemma}
For every $v \in V$ and $0 \leq i \leq k-1$, the expected total number of times the edges incident on $v$ are scanned over all trees for each $w \in A_i$ (i.e.~trees on $C(w)$) is $O(\hat{d}/q_i)$, where $q_i$ is the sub-sampling probability.
\end{lemma}
\begin{proof}
Let $w \in A_i \setminus A_{i+1}$. The edges of a node $v \in V$ are scanned when $v$ joins $C(w)$, and any time $d(v,w)$ increases before $v$ leaves $C(w)$.
We start by analyzing the total cost of joining new clusters. Recall that $C(w)=\{ v \in V: d(v,w) <d(w, A_{i+1}) \}$. Since we are in a decremental setting, $v$ can join $C(w)$ only when $d(w, A_{i+1})$ increases, and this can happen at most $\hat{d}$ times \textit{per tree}. As in the static setting, at any time, $v$ belongs to at most $\tilde{O}(1/q_i)$ trees in expectation, since the number of clusters $v$ belongs to is dominated by a geometric random variable with parameter $q_i$. We will use a similar argument for analyzing the total number of clusters each node belongs to over time.
Hence the total time for nodes joining new clusters is $\tilde{O}(\hat{d}m/q_i)$.
Next, we consider the case when after the deletion the distance between $v$ and the center increases. This will let us bound the number of times the edges incident on $v$ are scanned for a tree rooted at some node in $A_i$.
Let $d_t(w,v)$ denote the distance between $v$ and $w$ at time $t$ (after $t$ deletions), and let $C_t(w)$ denote the cluster rooted at $w$ at time $t$. We bound the number of indices $t$ for which $v \in C_t(w)$ and $d_t(w,v) < d_{t+1}(w,v)$.
Let $w_{t,1}, w_{t,2},\ldots$ be the sequence of nodes in $A_i$ sorted based on their distance from $v$ at time $t$. Ties are broken by ordering based on the pairs $(d_t(v,w),d_{t+1}(v,w))$, i.e.~nodes with the same distance from $v$ at time $t$ are sorted based on their distance at time $t+1$. This ensures that if $d_t(v,w_{t,j}) < d_{t+1}(v,w_{t,j})$, then $d_t(v,w_{t,j}) < d_{t+1}(v,w_{t+1,j})$. As before, $\Pr[v \in C_t(w_{t,j})] \leq (1-q_i)^{j-1}$, since $v \in C_t(w_{t,j})$ only if for all $j' <j$ we have $w_{t,j'} \in A_i \setminus A_{i+1}$. Let $I=\{(t,j) \mid d_t(v,w_{t,j}) < d_{t+1}(v,w_{t,j}) \leq \hat{d}\}$. Then, since the edges incident to $v$ are scanned only when the corresponding distance increases, the expected number of times they are scanned over all trees rooted at centers in $A_i$ is at most $\sum_{(t,j)\in I} \Pr[v \in C_t(w_{t,j})]$. Also, by definition, for a fixed $j$ there can be at most $\hat{d}$ pairs of the form $(t,j)$ in $I$. In other words, the distance to the $j$-th closest vertex can increase at most $\hat{d}$ times, and hence,
\[\sum_{(t,j)\in I} \Pr[v \in C_t(w_{t,j})] \leq \hat{d}\sum_{j \geq 1} (1-q_i)^{j-1} \leq \hat{d}/q_i.\]
\end{proof}
\iffalse
We summarized this algorithm in Algorithm \ref{alg:RZ}.
This is a straight-forward adaptation of the algorithm proposed by \cite{roditty2004} with the difference that we also add hopset edges, and the number of iteration and probabilities are changed.
\begin{algorithm}[H]
\caption{Decremental algorithm for maintaining a $d$-restricted hopset.}
\label{alg:RZ}
\For{$i=1$ to $i(\rho)$}{
$C_i = \emptyset$.\\
\For{$\forall v \in X_{i+1}$}{
\For{$(u,v) \in E$}{
\For{$\forall z \in B_i(u)-B_i(v)$}
{
\If{$d(z,u) +w(u,v) < d(v,A_{i+1)}$}{
Add $z$ to $C_i$.\\
Add $v$ to $C(w)$, and update the estimate for $d(z,v)$.\\
Add $(v,z)$ to $H$ and set the edge weight to $d(z,v)$.
}
}
}
}
\For{$\forall z \in C$}
{
Run dynamic modified Dijkstra rooted at $w$.
}
}
\end{algorithm}
\fi
\section{Monotone ES tree}\label{app:monotone_es}
In this section, we explain the monotone ES tree idea and how it can be used for maintaining single-source shortest paths up to a given depth $D$. Using the monotone ES tree ideas may impact the stretch, and they clearly do not apply to all types of insertions, but only to insertions with certain structural properties. In Section \ref{sec:ss_stretch}, we will prove that specifically for the insertions in our restricted hopset algorithm the stretch guarantee holds.
\begin{comment}
\begin{lemma}\label{lem:sssp}
Consider a graph $G = (V, E, w)$ subject to edge deletions. Assume that $H_1 \cup...\cup H_j$ is a $(2^j, \beta, (1+\epsilon_j)$-hopset of $G$, which is updated dynamically, as the edges of $G$ are deleted. Then, for any $0<\epsilon<1$ we can maintain $(1+\epsilon_j)(1+\epsilon_2)(1+\epsilon)$-approximate single-source shortest path for a fixed source $s$ to any other node $v$ where $d(s,v) \leq [2^j,2^{j+1}]$ in $O(\frac{\beta}{\epsilon}\cdot (m+ \Delta))$ total time, where $m$ is the initial size of $G$ and $\Delta$ is the total number of edges inserted to the hopsets $H_1 \cup \ldots \cup H_j$ over all updates.
\end{lemma}
The algorithm that we use to prove Lemma~\ref{lem:sssp} is based on Even-Shiloach trees~\cite{ES}.
\end{comment}
We show how to handle edge insertions by using a variant of the monotone ES-tree algorithm~\cite{henzinger2014} (and further used in the hopset construction of \cite{henzinger2016}). This algorithm is given as Algorithm~\ref{alg:estree}.
The idea in a monotone ES tree is that if an insertion of an edge $(u,v)$ causes the level of a node $v$ to decrease, we will not decrease the level. In this case we say the edge $(u,v)$ and the node $v$ are \textit{stretched}. More formally, a node $v$ is stretched when $L(v) > \min_{(x,v) \in E} L(x) + w(x, v)$.
We observe multiple properties of the monotone ES tree algorithm as observed by \cite{henzinger2014, henzinger2016} that will be helpful in analyzing the stretch later:
\begin{itemize}
\item The level of a node never decreases.
\item Only an inserted edge can be stretched.
\item While an edge is stretched, the level of its endpoint remains the same. In other words, a stretched edge is not going to get stretched again unless it is deleted (or its weight increases).
\end{itemize}
Also observe that we never underestimate the distances. This is clearly true for any edge weights obtained by the rounding in Lemma \ref{lem:rounding}. It is also easy to see that this is true for the stretched edges, for the following reason: For any node $v$, the algorithm maintains the invariant that $L(s,v) \geq \min_{(x,v) \in E}L(s,x)+w(x,v)$. In other words, $L(s,v)$ is either an estimate based on rounding that is at least $d_G(s,v)$, or it is larger than such an estimate.
\begin{algorithm} [h]
\SetKwProg{Fn}{Function}{}{}
\Fn{\textsc{Init}$(G,s, D)$}{
$E:=E(G)\cup \{e_v=(s,v):v\in V(G)\setminus\{s\},w(e_v)=D+1\}$ \tcc{This ensures that distances are maintained up to level $D$}
\For{$v\in V$}{
$L(s,v):=0$
}
\For{$v\in V$}{
\textsc{Update}($T(s),v$)
}
}
\Fn{\textsc{InsertEdge}$(T(s),(a,b),c)$}{ \tcc{Insert an edge in the tree rooted at $s$}
$E := E \cup \{(a,b)\}$
$w(a,b):={c}$
\textsc{Update}($T(s),b$)
}
\Fn{\textsc{Update}$(T(s),v)$}{
$upd := \min_{(x,v) \in E} L(s,x) + w(x, v)$
\If{$v=s$ \textbf{or} $L(s,v) \geq upd$}{ \tcc{Node $v$ is stretched.}
\Return}
$L(s,v):= upd$
\For{$(v,y)\in E(G)$}{
\textsc{Update}($T(s),y$)
}
}
\caption{\label{alg:estree} Maintaining a monotone ES tree up to depth $D$ on $G$. Note that edge deletion can be achieved by setting the edge weight to $\infty$.}
\end{algorithm}
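As a concrete illustration of Algorithm~\ref{alg:estree}, the following is a compact Python sketch of the same logic; the dictionary-of-dictionaries graph representation and the explicit worklist (in place of the recursive \textsc{Update} calls) are implementation choices for illustration only.
\begin{verbatim}
from collections import deque

def monotone_es_tree(adj, s, D):
    """Levels of a monotone ES tree rooted at s, up to depth D.
    adj: dict v -> dict u -> weight (assumed symmetric).  Virtual
    edges (s, v) of weight D + 1 cap every level at D + 1."""
    for v in list(adj):
        if v != s:
            adj[s][v] = min(adj[s].get(v, D + 1), D + 1)
            adj[v][s] = adj[s][v]
    L = {v: 0 for v in adj}
    _settle(adj, L, s, list(adj))
    return L

def insert_edge(adj, L, s, a, b, c):
    """InsertEdge(T(s), (a, b), c); levels never decrease, so a node
    whose level would drop is simply left 'stretched'."""
    adj[a][b] = c
    adj[b][a] = c
    _settle(adj, L, s, [b])

def _settle(adj, L, s, seeds):
    work = deque(seeds)
    while work:
        v = work.popleft()
        if v == s:
            continue
        upd = min(L[x] + w for x, w in adj[v].items())
        if L[v] >= upd:      # stretched or already settled: keep the level
            continue
        L[v] = upd           # monotone increase
        work.extend(adj[v])
\end{verbatim}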
\end{document}
|
\begin{document}
\title{PerSim: Data-efficient Offline Reinforcement\ Learning with Heterogeneous Agents\ via Personalized Simulators}
\begin{abstract}
We consider offline reinforcement learning (RL) with heterogeneous agents under severe data scarcity, i.e., we only observe a single historical trajectory for every agent under an unknown, potentially sub-optimal policy.
We find that the performance of state-of-the-art offline and model-based RL methods degrades significantly given such limited data availability, even for commonly perceived ``solved'' benchmark settings such as ``MountainCar'' and ``CartPole''.
To address this challenge, we propose PerSim, a model-based offline RL approach which first learns a personalized simulator for each agent by collectively using the historical trajectories across all agents, prior to learning a policy.
We do so by positing that the transition dynamics across agents can be represented as a latent function of latent factors associated with agents, states, and actions; subsequently, we theoretically establish that this function is well-approximated by a ``low-rank'' decomposition of separable agent, state, and action latent functions.
This representation suggests a simple, regularized neural network architecture to effectively learn the transition dynamics per agent, even with scarce, offline data.
We perform extensive experiments across several benchmark environments and RL methods.
The consistent improvement of our approach, measured in terms of both state dynamics prediction and eventual reward, confirms the efficacy of our framework in leveraging limited historical data to simultaneously learn personalized policies across agents.
\end{abstract}
\input{content/introduction}
\input{content/related_work}
\input{content/setup}
\input{content/model_continous}
\input{content/algorithm}
\input{content/experiments}
\input{content/conclusion}
\section*{Acknowledgements and Funding Disclosure}
We would like to express our thanks to the authors of \cite{lee2020context} Kimin Lee, Younggyo Seo, Seunghyun Lee, Honglak Lee, and Jinwoo Shin for their insightful comments and feedback.
{
\noindent This work was supported in parts by the MIT-IBM project on time series anomaly detection, NSF TRIPODS Phase II grant towards Foundations of Data Science Institute, KACST project on Towards Foundations of Reinforcement Learning, and scholarship from KACST (for Abdullah Alomar).}
\appendix
\begin{center}
\LARGE{\textbf{Supplementary Materials}}
\end{center}
\input{content/Appendix/appendix_organization}
\input{content/Appendix/appendix_related_work}
\input{content/Appendix/appendix_proofs_continous}
\input{content/Appendix/appendix_implementaion}
\input{content/Appendix/appendix_environment_details}
\input{content/Appendix/appendix_experiments_results}
\end{document}
|
\begin{document}
\title{Generalized Turán Problem for Complete Hypergraphs}
\begin{abstract}
Write $K^{(k)}_{n}$ for the complete $k$-graph on $n$ vertices. For $2 \leq k \leq g < r$ integers, let $\pi\left(n, K^{(k)}_{g}, K^{(k)}_r\right)$ be the maximum density of $K^{(k)}_{g}$ in $n$ vertex $K^{(k)}_{r}$-free $k$-graphs. The main contribution of this paper is the upper bound: $\pi\left(n, K^{(k)}_{g}, K^{(k)}_r\right) \leq \left(1 + O\left(n^{-1}\right) \right)\prod_{m=k}^{g} \left(1 - \frac{\binom{m-1}{k-1}}{\binom{r-1}{k-1}} \right).$ The graph case ($k=2$) is the first known generalized Turán question, investigated by Erdős. The $k=g$ case is the hypergraph Turán problem where the best known general upper bound is by de Caen. The result proved here matches both bounds asymptotically, while any triple $k, g, r$ with $2 < k < g < r$ provides a new upper bound. The proof uses techniques from the theory of flag algebras to derive linear relations between different densities. These relations can be combined with linear algebraic methods. Additionally a simple flag algebraic certificate will be given for $\lim_{n \rightarrow \infty} \pi \left(n, K^{(3)}_4, K^{(3)}_5 \right) = 3/8$.
\end{abstract}
\section{Introduction}
\subsection{Hypergraph Turán Problems}
Given a $k$-graph $H$, write $\pi\left(n, H\right)$ for the maximum density of $k$-uniform edges among $H$-free hypergraphs with size $n$, and let $\pi \left(H \right) = \lim_{n \rightarrow \infty} \pi\left(n, H\right)$. It is known that the limit always exists. Let $K^{(k)}_n$ be the complete $k$-graph with $n$ vertices. A landmark result by Turán determined the values $\pi\left(n, K^{(2)}_r\right)$ exactly, with the unique graphs attaining the maximum.
\begin{theorem}[\cite{turan_original}]
$\pi\left(n, K^{(2)}_r\right)$ is uniquely attained at the balanced complete $(r-1)$-partite graph on $n$ vertices.
\end{theorem}
Following this, Erdős and Stone found more generally the value of $\pi(H)$ for every graph $H$.
\begin{theorem}[\cite{erdos_stone}] Suppose $H$ is a graph with chromatic number $\chi(H)$, then
\begin{equation*}
\pi(H) = 1 - \frac{1}{\chi(H)-1}.
\end{equation*}
\end{theorem}
The corresponding question, when $k>2$, is still open and seems to be much more difficult. There are sporadic results for various $k$-graphs, but no $\pi\left(K^{(k)}_r\right)$ value is known. The best general upper bound comes from de Caen.
\begin{theorem}[\cite{de_caen_bound}]
\begin{equation*}
\pi\left(n, K^{(k)}_r\right) \leq 1 - \left( 1 + \frac{r-k}{n-r+1} \right) \frac{1}{\binom{r-1}{k-1}}.
\end{equation*}
\end{theorem}
For an extensive survey, focusing on the $\pi\left(n, K^{(k)}_r\right)$ problem, with various lower and upper bounds, see \cite{sido_survey}. More recent coverage of the question with different $k$-graphs can be found in \cite{keevash_survey}.
\subsection{Generalized Turán Problems}
As a possible generalization of the Turán question, one can ask the maximum density of a given $k$-graph $F$, instead of the $k$-edges. For $F, H$ given $k$-graphs, write $\pi(n, F, H)$ for the maximum density of $F$ among $n$ sized $H$-free $k$-graphs and use $\pi (F, H) = \lim_{n \rightarrow \infty} \pi(n, F, H)$. For complete graphs, this was initially investigated by Erdős \cite{erdos_generalized_1, erdos_generalized_2}.
\begin{theorem}[\cite{erdos_generalized_1}]
For $2 \leq g < r$ integers $\pi\left(n, K^{(2)}_g, K^{(2)}_r\right)$ is uniquely attained at the balanced complete $(r-1)$-partite graph on $n$ vertices.
\end{theorem}
Note that this gives asymptotically that $\pi\left(K^{(2)}_g, K^{(2)}_r\right) = \prod_{m=2}^{g} \left(1 - \frac{m-1}{r-1} \right)$. The generalized Turán problem for graphs was systematically investigated by Alon and Shikhelman, obtaining a result similar to Erdős-Stone.
\begin{theorem}[\cite{alon_generalized_erdos_stone}]
For any graph $H$, with chromatic number $\chi(H)$, the following holds
\begin{equation*}
\pi\left(K^{(2)}_g, H\right) = \prod_{m=2}^{g} \left(1 - \frac{m-1}{\chi(H)-1} \right).
\end{equation*}
\end{theorem}
In addition, \cite{alon_generalized_erdos_stone} investigates degenerate generalized Turán questions -- the rate of convergence of $\pi(n, F, H)$ when $\pi(F, H) = 0$. \cite{deg_gen_hyp_turan} finds various bounds for several degenerate generalized hypergraph Turán problems.
The generalized Turán problem for complete $k$-graphs corresponds with the separation of different layers of the boolean hypercube using a $k$-CNF. This idea appears for example in \cite{application_sidorenko} and will be further explored in a different paper. \cite{application_vcdim} gives new insights into the set of satisfying assignments of CNFs using a variant of the VC dimension. Bounds on this variant of the VC dimension turn out to be equivalent to a generalized Turán-type conjecture.
\subsection{Flag Algebras}
The theory of flag algebras \cite{razb_flag_algebras} provides a systematic approach to studying extremal combinatorial problems and the tools available for solving them. It gives a common ground for combinatorial ideas, by expressing them as linear operators, acting between flag algebras. Linearity means the different techniques can be easily combined with linear programming/linear algebra.
A large part of the theory can be automated with state-of-the-art optimization algorithms, providing spectacular improvements in density bounds. There has been significant progress in the famous tetrahedron problem \cite{de_caen_bound, tetra_previous} with the previous best bound being $\pi\left( K^{(3)}_4 \right) \leq \frac{3+\sqrt{7}}{12} < 0.59360$ while flag algebraic calculations improved it to the following bound: \begin{theorem}[\cite{razb_4_vertex} verified in \cite{flagmatic, baber_thesis}]
$\pi\left( K^{(3)}_4 \right) \leq 0.56167.$
\end{theorem} Note that the best known lower bound is $5/9 \leq \pi\left( K^{(3)}_4 \right)$. For excluded $K^{(3)}_5$ the calculations give the following: \begin{theorem}[\cite{baber_thesis}]
$\pi\left( K^{(3)}_5 \right) \leq 0.76954$
\end{theorem} with best known lower bound $3/4 \leq \pi\left( K^{(3)}_5 \right)$.
For a list of results provided by flag algebraic calculations, see \cite{razb_4_vertex, flagmatic}. The power of flag algebras has been illustrated in a wide range of other combinatorial questions \cite{ramsey_flag, permutation_flag}. Unfortunately, the computer-generated proofs lack insight and scalability compared to classical, hand-crafted arguments. They often only work in a small enough parameter range (for example, bounding $\pi\left(H\right)$ for $H$ with at most $7$ vertices). A survey by Razborov \cite{flag_interim_report} calls such applications plain.
One of the goals of this paper is to show that the powerful plain flag algebra method can, in this case, be performed by hand, resulting in a general and scalable theorem. The main ideas and proof steps, therefore, correspond with a plain application of flag algebras and were heavily inspired by it. During the proofs, relevant parts of the flag algebra theory will be highlighted. While the asymptotic result can be fully proved with flag algebraic manipulations, the bound for finite $n$ is only attainable with a more precise bounding of the errors. The theory is not explained here; for a quick introduction see \cite{flag_first_glance} or the original text \cite{razb_flag_algebras}.
\subsection{Overview of the Result}
In this paper, the generalized Turán problem for complete hypergraphs will be investigated, with the following contribution:
\begin{theorem}\label{main_theorem}
For integers $1 < k \leq g < r$ and any $n>(r-1) \left(1 + \left(\frac{(r-k)}{k-1}\right)^2 \right),$
\begin{equation*}
\begin{gathered}
\pi\left(n, K^{(k)}_{g}, K^{(k)}_r\right) \leq \\
\leq \left(1 + \frac{(r-1) (r-k)^2}{(k-1)^2 n-(r-1) \left(2 k^2-2 k (r+1)+r^2+1\right)} \right)\prod_{m=k}^{g} \left(1 - \frac{\binom{m-1}{k-1}}{\binom{r-1}{k-1}} \right).
\end{gathered}
\end{equation*}
\end{theorem}
Note this means asymptotically that
\begin{corollary}
For integers $1 < k \leq g < r$
\begin{equation*}
\pi\left(K^{(k)}_{g}, K^{(k)}_r\right) \leq \prod_{m=k}^{g} \left(1 - \frac{\binom{m-1}{k-1}}{\binom{r-1}{k-1}} \right).
\end{equation*}
\end{corollary}
The asymptotic bound is known to be tight when $k=2$, with the matching, balanced $(r-1)$-partite construction. Additionally, it agrees with the best-known general hypergraph Turán bound by de Caen \cite{de_caen_bound} which is conjectured to not be tight.
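As an illustration, for $k=2$, $g=3$, $r=4$ the corollary gives $\left(1-\frac{1}{3}\right)\left(1-\frac{2}{3}\right) = \frac{2}{9}$, the triangle density attained by the balanced complete $3$-partite graph, while for $k=3$, $g=4$, $r=5$ it gives $\left(1-\frac{1}{6}\right)\left(1-\frac{3}{6}\right) = \frac{5}{12}$, which is not tight: \cref{pi345_section} shows that the true value is $\frac{3}{8}$.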
\cite{sido_survey} describes various lower bound constructions for the $2 < k = g$ case. A simple construction (when $k-1$ divides $r-1$) splits the vertex set into $\frac{r-1}{k-1}$ equal groups and includes each $k$ set that is not fully contained in a group. Using $l = \frac{r-1}{k-1}$ and an inclusion-exclusion calculation, this gives the asymptotic bound
\begin{equation*}
\sum_{s=0}^{\lfloor g/k \rfloor} (-1)^{s} \binom{l}{s} \sum_{\substack{k \leq i_1, \dots, i_s \\ i_1+ \dots + i_s \leq g}} \binom{g}{i_1, \dots, i_s, g-i_1- \dots - i_s}l^{-i_1- \dots - i_s} \leq \pi\left( K^{(k)}_g, K^{(k)}_r\right).
\end{equation*}
While it is known that this construction is not optimal when $k=g$, in the $k \ll g$ regime, where $K^{(k)}_g$ appears more often if the edges are ``grouped'', it provides a stronger bound. \Cref{pi345_section} shows that this is asymptotically the best construction for $\pi \left( K^{(3)}_4, K^{(3)}_5\right)$. In general $g=r-1$ gives
\begin{equation*}
\begin{split}
& \ \sqrt{ 2 \pi r} \ \ e^{\frac{\log(2 \pi k)r}{2k}} \approx \\[2ex]
\approx & \ \binom{r-1}{k-1, k-1, \dots , k-1} l^{-(r-1)} \leq \\[2ex]
\leq & \ \pi\left(K^{(k)}_{r-1}, K^{(k)}_{r}\right) \leq \\[2ex]
\leq & \ \prod_{m=k}^{r-1} \left( 1 - \frac{\binom{m-1}{k-1}}{\binom{r-1}{k-1}} \right) \leq \\[2ex]
\leq & \ e^{(k-r)/k}.
\end{split}
\end{equation*}
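The construction above is simple enough to probe numerically. The following Python sketch estimates its asymptotic $K^{(k)}_g$ density by Monte Carlo, assigning group labels independently and uniformly (a reasonable approximation of sampling $g$ vertices from equal groups as $n \rightarrow \infty$); the function name and trial count are illustrative choices.
\begin{verbatim}
import random
from collections import Counter

def construction_density(k, g, r, trials=200_000, seed=0):
    """Asymptotic density of K_g^(k) in the lower-bound construction:
    vertices split into l = (r-1)/(k-1) groups, a k-set is an edge iff
    it is not contained in a single group.  A g-set spans a complete
    K_g^(k) iff no group holds k or more of its vertices."""
    assert (r - 1) % (k - 1) == 0, "construction needs (k-1) | (r-1)"
    l = (r - 1) // (k - 1)
    rng = random.Random(seed)
    good = 0
    for _ in range(trials):
        counts = Counter(rng.randrange(l) for _ in range(g))
        if max(counts.values()) <= k - 1:
            good += 1
    return good / trials

# construction_density(3, 4, 5) is close to 3/8, the tight value
# for the K_4^(3) versus K_5^(3) case discussed later.
\end{verbatim}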
Given a hypergraph $G$, write $d(H, G)$ for the induced density of $H$ in $G$. The main tool used in the proof of \cref{main_theorem} is:
\begin{lemma}\label{main_lemma}
For all $x > 0$ and integers $k \leq m < n$, if $G$ is an $n$ vertex $k$-graph then
\begin{equation*}
\begin{matrix*}[l]
0 \geq & \left(- \frac{1 - \frac{k-1}{m}}{x}\right) & d\left(K^{(k)}_{m+1}, G\right) & + \\ & \left(2 - \frac{k-1}{mx} - \frac{1}{(n-m)x}\right) & d\left(K^{(k)}_m, G\right) & + \\ & (-x) & d\left(K^{(k)}_{m-1}, G\right). &
\end{matrix*}
\end{equation*}
\end{lemma}
Note that the densities $d\left(K^{(k)}_m, G\right)$ only appear linearly in the expression. For different $k, g, r$ parameters, a convex combination of the expressions appearing in \cref{main_lemma}, with suitable $x, m$ values substituted in yields \cref{main_theorem}. As a comparison, \cite{de_caen_bound} utilizes similar ideas, but with a more complicated (non-linear) expression,
\begin{equation*}
f_{m+1} \geq \frac{m^2 f_m}{(m-k+1)(n-m)} \left(\frac{f_m (n-m+1)}{f_{m-1} m} - \frac{(k-1)(n-m) + m}{m^2} \right)
\end{equation*} where $f_m = d\left(K^{(k)}_m, G\right)$ for short.
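Since \cref{main_lemma} is a purely linear statement about densities, it is easy to sanity-check numerically. The following Python sketch evaluates the combination on a random $3$-graph; the graph size, edge probability and the choice $m=4$, $x=1/2$ are arbitrary illustrative parameters.
\begin{verbatim}
import itertools, random

def complete_density(G, V, m, k):
    """d(K_m^(k), G): fraction of m-subsets of V inducing a complete k-graph."""
    if m < k:
        return 1.0
    subsets = list(itertools.combinations(V, m))
    good = sum(1 for S in subsets
               if all(frozenset(e) in G for e in itertools.combinations(S, k)))
    return good / len(subsets)

k, n = 3, 8
random.seed(1)
V = range(n)
G = {frozenset(e) for e in itertools.combinations(V, k) if random.random() < 0.6}

m, x = 4, 0.5
value = (-(1 - (k - 1) / m) / x) * complete_density(G, V, m + 1, k) \
        + (2 - (k - 1) / (m * x) - 1 / ((n - m) * x)) * complete_density(G, V, m, k) \
        - x * complete_density(G, V, m - 1, k)
print(value <= 1e-12)   # the lemma asserts this combination is non-positive
\end{verbatim}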
\subsection{Outline of the Paper}
\Cref{notation_section} summarizes the important notations and conventions throughout the paper. The proof of \cref{main_theorem} is included in \cref{main_theorem_section} using two important components: \cref{main_lemma}, which is proved in \cref{lindens_section}; and a technical calculation (\cref{tridiag_lemma}), that is included in \cref{tridiag_section}. The short \cref{pi345_section} includes a certificate for $\pi\left(K^{(3)}_4, K^{(3)}_5\right) = 3/8$. The paper finishes with a few concluding remarks in \cref{outro_section}, the limitations of this approach and possible directions.
\section{Notation and Conventions}\label{notation_section}
\subsection{Basic Notation}
For a set $V$, the collection of subsets with size $k$ is denoted by $\binom{V}{k}$. The hypergraphs are identified with their edge sets; $G \subseteq \binom{V(G)}{k}$ is a $k$-graph with $V(G)$ vertex set. $K_n^{(k)}$ is the complete $k$-graph with $n$ vertices. For $S \subseteq V(G)$ the induced sub-hypergraph is $G \! \upharpoonright_S = G \cap \binom{S}{k}$. Hypergraph isomorphism is represented by $G \simeq H$. $\mathcal{H}^{(k)}_n$ is the collection of non-isomorphic $k$-graphs having $n$ vertices.
Bold symbols indicate random variables. The uniform distribution from a set $V$ is represented by $\operatorname{Unif}(V)$. The density of $H$ in $G$ is defined to be $d(H, G) = \mathbb{P} \left[ G \! \upharpoonright_{\mathbf{S}} \simeq H \right]$ where $\mathbf{S} \sim \operatorname{Unif}\binom{V(G)}{|H|}$. Notice that this is the induced density, corresponding more with the flag algebraic approach, rather than the classical sub-hypergraph inclusion (referenced in the introduction). Write $d_s(H, G) = \mathbb{P} \left[ H \subseteq G \! \upharpoonright_{\mathbf{S}} \right]$ (containment up to isomorphism) for the classical inclusion with the same $\mathbf{S} \sim \operatorname{Unif}\binom{V(G)}{|H|}$. When $H$ is a complete hypergraph, the two notions are equivalent. The generalized Turán problem is to determine the value $$\pi\left(n, F, H\right) = \max\left\{d_s(F, G) \ : \ G \in \mathcal{H}_n, \quad d_s(H, G) = 0 \right\}.$$ The asymptotic problem asks for $\pi(F, H) = \lim_{n \rightarrow \infty } \pi(n, F, H)$ (it is known that the limit always exists).
The quantity $$x_{m, r}^{(k)} = 1 - \frac{\binom{m-1}{k-1}}{\binom{r-1}{k-1}}$$ will be important; these are the terms appearing in the product.
\subsection{Flag Notation}
The hypergraphs represent the corresponding flags with empty type. $T^{(k)}_n$ is the complete type with $n$ vertices and all $k$-uniform edges. Type is indicated as a superscript. In particular $K^{(k), T^{(k)}_m}_{n}$ is the unique complete flag on $n$ vertices with a type having $m$ vertices. For a type $T$, $T$ also represents the flag with type $T$ and no extra vertices/edges. The averaging operator, transforming a $T$-typed flag $F^T$ into a flag with empty type is $\left\llbracket F^T \right\rrbracket_T$.
\subsection{Conventions}
In most of the proofs, the symbol $k$ is fixed, and the appearing statements concern $k$-graphs. For this reason, $k$ superscripts from the notations are often dropped. Additionally, $g, \ r, \ n$ symbols are reserved. They are integer parameters of the main question; determining the value of $\pi(n, K_g, K_r)$.
\section{Proof of Main Theorem}\label{main_theorem_section}
In this short section \cref{main_theorem} will be proved with the use of \cref{main_lemma} and \cref{tridiag_lemma}, a technical result included in \cref{tridiag_section}. Let's recall the main theorem, with the introduced $x^{(k)}_{m, r}$ notation.
\begin{reptheorem}{main_theorem}
Given integers $1 < k \leq g < r$ and $n>(r-1) \left(1 + \left(\frac{(r-k)}{k-1}\right)^2 \right)$, then
\begin{equation*}
\begin{gathered}
\pi\left(n, K^{(k)}_{g}, K^{(k)}_r\right) \\ \leq \left(1 + \frac{(r-1) (r-k)^2}{(k-1)^2 n-(r-1) \left(2 k^2-2 k (r+1)+r^2+1\right)} \right) \prod_{m=k}^{g} x^{(k)}_{m, r}.
\end{gathered}
\end{equation*}
\end{reptheorem}
In the following, the superscript $k$ is dropped from the notations for easier readability. The following claim gives bounds on the $x_{m, r}$ values. It follows easily after expanding the definition of $x_{m, r}$, therefore the proof is not included.
\begin{claim}\label{x_positive_claim}
When $k-1 \leq m \leq r$, $$0 \leq x^{(k)}_{m, r} \leq 1,$$ with equality at $m=r$ and $m = k-1$ respectively. The smallest nonzero value is $x_{r-1, r} = \frac{k-1}{r-k}$
\end{claim}
\begin{proof}[Proof of \cref{main_theorem}]
Choose any $G \in \mathcal{H}_{n}$ (irrespective of the value of $d(K_{r}, G)$), with $$n>(r-1) \left(1 + \left(\frac{(r-k)}{k-1}\right)^2 \right),$$ and for short write $f_m = d\left(K_{m}, G\right)$. In the range $m \in \{k, k+1, ..., r-1\}$, $x_{m, r}$ is positive, therefore \cref{main_lemma} with $x=x_{m, r}$ applies:
\begin{equation*}
0 \geq - \frac{1 - \frac{k-1}{m}}{x_{m, r}}f_{m+1} + \left(2 - \frac{k-1}{m x_{m, r}} - \frac{1}{(n-m)x_{m, r}} \right) f_{m} - x_{m, r} f_{m-1}
\end{equation*}
The value $(n-m)x_{m, r}$ is minimal at $m=r-1$. Use $E_m$ for the above expression but $\frac{1}{(n-m)x_{m, r}}$ replaced with $\frac{r-k}{(n-r+1)(k-1)}$.
\begin{equation*}
E_{m} = - \frac{1 - \frac{k-1}{m}}{x_{m, r}}f_{m+1} + \left(2 - \frac{k-1}{m x_{m, r}} - \frac{r-k}{(n-r+1)(k-1)} \right) f_{m} - x_{m, r} f_{m-1}
\end{equation*}
The replacement decreases the value, so each $E_{m}$ is still non-positive. Hence any sequence $\delta_{k}, \delta_{k+1}, \ldots, \delta_{r-1}$ with all $\delta_m \geq 0$ yields $0 \geq \sum_{m=k}^{r-1} \delta_{m} E_{m}$. To obtain the claimed upper bound it is enough to find coefficients $\delta_m \geq 0$ satisfying \begin{equation} \label{main_theorem_equation}
0 \geq \sum_{m=k}^{r-1} \delta_m E_{m} = - \delta_k x_{k, r} f_{k-1} + f_g - \delta_{r-1} \frac{1-\frac{k-1}{r-1}}{x_{r-1, r}} f_r.
\end{equation}
With the assumption that $f_r = d(K_r, G) = 0$ and the simple observation that $f_{k-1} = d(K_{k-1}, G) = 1$, one can deduce from \cref{main_theorem_equation} that $\pi\left(n, K_g, K_r \right) \leq \delta_k x_{k, r}$. Notice that finding $\delta_m$ corresponds with solving \cref{main_theorem_equation}, a system of linear equations. The technical \cref{tridiag_section} includes a way to approximate this system of linear equations. The following lemma summarizes the result, concluding the proof of \cref{main_theorem}.
\begin{lemma}\label{tridiag_lemma}
If $n>(r-1) \left(1 + \left(\frac{(r-k)}{k-1}\right)^2 \right)$ then the solution to the linear equations \cref{main_theorem_equation} satisfies that $\delta_m \geq 0$ and that $$\delta_k = \left(1 + \frac{(r-1) (r-k)^2}{(k-1)^2 n-(r-1) \left(2 k^2-2 k (r+1)+r^2+1\right)} \right) \prod_{m=k+1}^g x_{m, r}.$$
\end{lemma}
\end{proof}
\section{Linear Density Relations}\label{lindens_section}
This section covers the main combinatorial calculations involved in the proof of \cref{main_lemma}. The main purpose of \cref{main_lemma} is to act as a building block. Not only the validity is easier to verify, but it also involves the densities $d(K_m, G)$ linearly, therefore it can be combined easily; as illustrated in \cref{main_theorem}. The small claims in this section follow closely core results from the flag algebra theory. The connection will be highlighted in \cref{flag_claims_remark}. The connection between the plain flag algebra application and this proof is discussed in \cref{flag_sdp_remark}.
In the upcoming proofs, the $k$ superscript will not be included. $S$ is any subset of $V(G)$, while $S_m$ is an $m$ element subset of $V(G)$. Write $q(S)$ for the indicator function that is $1$ when $S$ induces a complete hypergraph in $G$ and $0$ otherwise. $l(S)$ is the number of $v \in V(G) \setminus S$ where $S+v$ is complete in $G$. The corresponding probability is $r(S_m) = \frac{l(S_m)}{n-m}$. Similarly $rr(S)$ is the probability that two different vertex extensions are both complete. $$rr(S_m) = \frac{\binom{l(S_m)}{2}}{\binom{n-m}{2}}$$
When $H \in \mathcal{H}_n$, then $s(H)$ is used for the size of the intersection of non-edges in $H$. In particular $s(K_n) = n$ and $s(K_n^-) = k$ where $K_n^-$ represents the hypergraph on $n$ vertices that has exactly one edge missing.
In the proof of \cref{main_lemma}, the fact that $q(S)(r(S)-x)^2$ is always non-negative will be exploited. Understanding the terms in the square is done by the following short claims. First, $r(S)^2$ and $rr(S)$ are related.
\begin{claim}\label{ppp_claim}
$$r(S_{m-1})^2 \leq rr(S_{m-1}) + r(S_{m-1}) \frac{1}{n-m}$$
\end{claim}
This simply follows from \begin{equation*}
r(S_{m-1})^2 - rr(S_{m-1}) = r(S_{m-1}) \left(\frac{l(S_{m-1})}{n-m+1} - \frac{l(S_{m-1})-1}{n-m}\right) \leq r(S_{m-1}) \frac{1}{n-m}.
\end{equation*}
Second, linear equality between densities is shown.
\begin{claim}\label{chain_rule_claim}
Suppose $m \leq l \leq n$ with $F \in \mathcal{H}_{m}$ and $G \in \mathcal{H}_n$ then
\begin{equation*}
d(F, G) = \sum_{H \in \mathcal{H}_{l}} d(F, H) d(H, G),
\end{equation*}
in particular
\begin{equation*}
d(K_m, G) = \sum_{H \in \mathcal{H}_{m+1}} \frac{s(H)}{m+1} d(H, G).
\end{equation*}
\end{claim}
\begin{proof}
Note that a uniform $m$ sized subset of $V(G)$ can be sampled by first choosing $\mathbf{S}_l \sim \operatorname{Uniform}\binom{V(G)}{l}$ and then $\mathbf{S}_m \sim \operatorname{Uniform}\binom{\mathbf{S}_l}{m}$. The claim follows from the law of total probability. The events $\{G \! \upharpoonright_{\mathbf{S}_l} \simeq H \ : \ H \in \mathcal{H}_l\}$ partition the probability space, therefore \begin{equation*}
\begin{split}
d(F, G) = & \mathbb{P}\Big[ G \! \upharpoonright_{\mathbf{S}_m} \simeq F \Big] \\
= & \sum_{H \in \mathcal{H}_l} \mathbb{P}\Big[ G \! \upharpoonright_{\mathbf{S}_m} \simeq F \ \Big| \ G \! \upharpoonright_{\mathbf{S}_l} \simeq H \Big] \mathbb{P}\Big[ G \! \upharpoonright_{\mathbf{S}_l} \simeq H \Big] \\
= & \sum_{H \in \mathcal{H}_l} d(F, H) d(H, G).
\end{split}
\end{equation*}
The special case follows from $d(K_m, H) = \frac{s(H)}{m+1}$ when $H \in \mathcal{H}_{m+1}$.
\end{proof}
In the proof of \cref{main_lemma}, $S_{m-1}$ is chosen uniformly from the possible $m-1$ sized sets. The final claim connects the expected values arising in the terms of $q(S_{m-1})(r(S_{m-1})-x)^2$ with densities of various $k$-graphs in $G$.
\begin{claim}\label{exp_claim} Suppose $\mathbf{S}_{m-1}$ is chosen uniformly randomly from the set $\binom{V(G)}{m-1}$ then
\leavevmode
\begin{enumerate}
\item $$\mathbb{E} \left[ q(\mathbf{S}_{m-1})r(\mathbf{S}_{m-1}) \right] = d\left(K_{m}, G\right), $$
\item $$\mathbb{E} \left[ q(\mathbf{S}_{m-1}) rr(\mathbf{S}_{m-1}) \right] = \sum_{H \in \mathcal{H}_{m+1}} \frac{\binom{s(H)}{2}}{\binom{m+1}{2}} d\left(H, G\right).$$
\end{enumerate}
\end{claim}
\begin{proof}
Similar to the proof of \cref{chain_rule_claim}, first choosing $\mathbf{S}_l \sim \operatorname{Uniform}\binom{V(G)}{l}$ and then $\mathbf{S}_{m-1} \sim \operatorname{Uniform}\binom{\mathbf{S}_l}{m-1}$ results in uniformly distributed $\mathbf{S}_{m-1}$. Note that $r(\mathbf{S}_{m-1})$ and $rr(\mathbf{S}_{m-1})$ corresponds with choosing $1$ and $2$ additional vertices accordingly, and then checking a condition on the extended set.
In particular, for a pair $\mathbf{S}_{m-1} \subset \mathbf{S}_m$, let $R\left(\mathbf{S}_m, \mathbf{S}_{m-1}\right)$ be the event that $G \! \upharpoonright_{\mathbf{S}_m} \simeq K_m$ (and therefore $G \! \upharpoonright_{\mathbf{S}_{m-1}} \simeq K_{m-1}$). The claim follows from the law of total expectation, by conditioning on the shape of $G \! \upharpoonright_{\mathbf{S}_m}$
\begin{equation*}
\begin{split}
& \mathbb{E} \left[ q(\mathbf{S}_{m-1})r(\mathbf{S}_{m-1}) \right] \\
& \qquad = \sum_{H \in \mathcal{H}_{m}} \mathbb{P}\left[ R(\mathbf{S}_m, \mathbf{S}_{m-1}) \ \middle| \ G \! \upharpoonright_{\mathbf{S}_m} = H \right] \mathbb{P}\left[ G \! \upharpoonright_{\mathbf{S}_m} = H \right] \\
& \qquad = d(K_m, G).
\end{split}
\end{equation*}
Here only $H = K_{m}$ can make the event $R(\mathbf{S}_m, \mathbf{S}_{m-1})$ occur, so only that term survives.
For the second part, write $RR(\mathbf{S}_{m+1}, \mathbf{S}_{m-1})$ for the event that there are two copies of $K_{m}$ inside $G \! \upharpoonright_{\mathbf{S}_{m+1}}$ intersecting exactly at $\mathbf{S}_{m-1}$ (again this implies $G \! \upharpoonright_{\mathbf{S}_{m-1}} = K_{m-1}$). The calculation in this case gives
\begin{equation*}
\begin{split}
& \mathbb{E} \left[ q(\mathbf{S}_{m-1})rr(\mathbf{S}_{m-1}) \right] \\
& \qquad = \sum_{H \in \mathcal{H}_{m+1}} \mathbb{P}\left[ RR(\mathbf{S}_{m+1}, \mathbf{S}_{m-1}) \ \middle| \ G \! \upharpoonright_{\mathbf{S}_{m+1}} = H \right] \mathbb{P}\left[ G \! \upharpoonright_{\mathbf{S}_{m+1}} = H \right] \\
& \qquad = \sum_{H \in \mathcal{H}_{m+1}} \frac{\binom{s(H)}{2}}{\binom{m+1}{2}} d\left(H, G\right)
\end{split}
\end{equation*}
since in a given $G \! \upharpoonright_{\mathbf{S}_{m+1}} \simeq H$, a randomly chosen $\mathbf{S}_{m-1}$ satisfies $RR(\mathbf{S}_{m+1}, \mathbf{S}_{m-1})$ with probability $\frac{\binom{s(H)}{2}}{\binom{m+1}{2}}$.
\end{proof}
\begin{remark}\label{flag_claims_remark}
The above claims all correspond to parts of the general flag algebra theory \cite{razb_flag_algebras}.
\begin{enumerate}
\item \cref{chain_rule_claim} corresponds to the chain rule (Lemma 2.2). In the language of flags it gives $$K_m = \sum_{H \in \mathcal{H}_{m+1}} \frac{s(H)}{m+1} H.$$
\item \cref{ppp_claim} corresponds to products (Lemma 2.3). It is more or less equivalent with $$p\left(K_m^{T_{m-1}}; G^{T_{m-1}}\right)^2 - p\left(K_m^{T_{m-1}}, K_m^{T_{m-1}}; G^{T_{m-1}}\right) = O(|G|^{-1}).$$
\item \cref{exp_claim} corresponds to averaging (Theorem 2.5). It is a restatement of \begin{equation*}
\left\llbracket K_m^{T_{m-1}} \right\rrbracket_{T_{m-1}} = K_m
\end{equation*}
and \begin{equation*}
\left\llbracket \left( K_m^{T_{m-1}} \right)^2 \right\rrbracket_{T_{m-1}} = \sum_{H \in \mathcal{H}_{m+1}} \frac{\binom{s(H)}{2}}{\binom{m+1}{2}} H.
\end{equation*}
\end{enumerate}
\end{remark}
With these claims, the main lemma follows easily.
\begin{replemma}{main_lemma}
For all $x>0$ and integers $k \leq m < n$, if $G \in \mathcal{H}_n$ then the following holds
\begin{equation*}
\begin{matrix*}[l]
0 \geq & \left(- \frac{1 - \frac{k-1}{m}}{x}\right) & d\left(K_{m+1}, G\right) & + \\ & \left(2 - \frac{k-1}{mx} - \frac{1}{(n-m)x}\right) & d\left(K_m, G\right) & + \\ & (-x) & d\left(K_{m-1}, G\right). &
\end{matrix*}
\end{equation*}
\end{replemma}
\begin{proof}
When $x \in \mathbb{R}$ the quantity $q(S)(r(S) - x)^2$ is always non-negative. Therefore choosing $\mathbf{S}_{m-1} \sim \operatorname{Uniform}\binom{V(G)}{m-1}$ the following is true
$$0 \leq \mathbb{E} \left[ q(\mathbf{S}_{m-1}) (r(\mathbf{S}_{m-1}) - x)^2 \right].$$ Expanding the terms and applying \cref{ppp_claim} gives
\begin{equation*}
0 \leq \mathbb{E} \left[ q(\mathbf{S}_{m-1})rr(\mathbf{S}_{m-1}) +\left(\frac{1}{n-m} - 2x\right)q(\mathbf{S}_{m-1})r(\mathbf{S}_{m-1}) + x^2 q(\mathbf{S}_{m-1}) \right].
\end{equation*}
The substitution from \cref{exp_claim} yields the following expression, without expected values:
\begin{equation*}
0 \leq \sum_{H \in \mathcal{H}_{m+1}} \frac{\binom{s(H)}{2}}{\binom{m+1}{2}} d\left(H, G\right) + \left(\frac{1}{n-m} - 2x\right) d\left(K_{m}, G\right) + x^2 d\left(K_{m-1}, G\right).
\end{equation*}
Notice that $s(H)$ is maximal on $K_{m+1}$, otherwise it is at most $k$. This observation gives
\begin{equation*}
\begin{split}
0 & \leq \frac{k-1}{m} \sum_{\substack{H \in \mathcal{H}_{m+1} \\ H \neq K_{m+1}}} \frac{s(H)}{(m+1)} d\left(H, G\right) \\ & \qquad + d\left(K_{m+1}, G\right) \\ & \qquad + \left(\frac{1}{n-m} - 2x\right) d\left(K_{m}, G\right) \\ & \qquad + x^2 d\left(K_{m-1}, G\right).
\end{split}
\end{equation*}
Finally expanding $\frac{k-1}{m} d(K_{m}, G)$ using \cref{chain_rule_claim} results in
\begin{equation*}
\begin{matrix*}[l]
0 \leq & \left(1 - \frac{k-1}{m}\right) & d\left(K_{m+1}, G\right) & + \\ & \left(\frac{k-1}{m} + \frac{1}{(n-m)} - 2x\right) & d\left(K_m, G\right) & + \\ & x^2 & d\left(K_{m-1}, G\right). &
\end{matrix*}
\end{equation*}
Since $x > 0$, note that \cref{main_lemma} is a $-1/x$ multiple of the above, and the proof is complete.
\end{proof}
\begin{remark}\label{flag_sdp_remark}
This lemma can be easily stated as \begin{equation*}
0 \leq \left(1 - \frac{k-1}{m}\right) K_{m+1} + \left(\frac{k-1}{m} - 2x\right) K_m + x^2 K_{m-1}
\end{equation*} in the language of flags. The proof uses the expansion of the simple square
\begin{equation*}
0 \leq \left\llbracket \left( K_m^{T_{m-1}} - xT_{m-1} \right)^2 \right\rrbracket_{T_{m-1}}
\end{equation*} In a plain application of flag algebra, the computer finds a conic combination of squares, similar to the above expression. \Cref{main_lemma} provides squares in a form that is easy to handle later (they only involve a small number of $d(K_m, G)$ values). The following section shows that the target expression \begin{equation*}
K_g \leq \prod_{m=k}^g x^{(k)}_{m, r} + c K_r
\end{equation*} for some $c$ constant, lies in the conic combination of the squares.
\end{remark}
\section{The Associated Tridiagonal Matrix}\label{tridiag_section}
\begin{definition}
Given $k < r$ integers, the problem has an associated tridiagonal matrix $D^{(k)}_r$ with entries $d_{l, m}$ indexed by the range $k \leq l, m < r$ \begin{equation*}
d_{l, m} = \begin{cases}
-x^{(k)}_{m, r} & \text{ if } \ l=m-1\\
\\
2 - \frac{k-1}{mx^{(k)}_{m, r}} & \text{ if } \ l=m \\
\\
-\frac{1 - \frac{k-1}{m}}{x^{(k)}_{m, r}} & \text{ if } \ l=m+1\\
\\
0 & \text{ otherwise}
\end{cases}
\end{equation*}
\end{definition}
Write $\epsilon = \frac{r-k}{(n-r+1)(k-1)}$ and notice that $D^{(k)}_{r} - \epsilon I$ has column values equal to the coefficients in $E_{m}$. With the new notation, recall \cref{tridiag_lemma}, which was used to finish the proof of \cref{main_theorem}.
\begin{replemma}{tridiag_lemma}
Let $\Delta^{(k)}_{r}(\epsilon) = \left(D^{(k)}_{r} - \epsilon I \right)^{-1}$ be the inverse of the associated tridiagonal matrix, with entries $\delta_{m, g}(\epsilon)$. If $0 \leq \epsilon < \frac{k-1}{(r-1)(r-k)}$, then the values $\delta_{m, g}(\epsilon)$ are all positive, and $$\delta_{k, g}(\epsilon) \leq \frac{1}{1-\epsilon \frac{(r-1)(r-k)}{k-1}} \prod_{m=k+1}^{g} x_{m, r}.$$
\end{replemma}
Notice that $\epsilon=0$ corresponds with the inverse of $D_r^{(k)}$, and the asymptotic question when $n \rightarrow \infty$. This section is devoted to the proof of \cref{tridiag_lemma}, but it is illuminating and helpful for the proof to first calculate the inverse of $D_r^{(k)}$.
\subsection{The inverse of \texorpdfstring{$D^{(k)}_r$}{Dk,r}}
\begin{lemma}\label{simple_tridiag_lemma}
Let $\Delta^{(k)}_{r} = \left(D^{(k)}_{r}\right)^{-1}$ be the inverse of the associated tridiagonal matrix with entries $\delta_{m, g}$. Then $\delta_{m, g}$ are all positive and $$\delta_{k, g} = \prod_{m=k+1}^{g} x^{(k)}_{m, r}.$$
\end{lemma}
In the following proofs, the $k$ superscripts are omitted to increase readability. The complete inverse can be calculated following the method described in \cite{tridiagonal_inverse}. The value $\theta_m$ represents the determinant of the rows and columns indexed by the set $\{k, k+1, ..., m\}$, while $\phi_m$ is the determinant for the rows and columns indexed by $\{m, m+1, ..., r-1\}$.
From the cofactor calculation of determinants and inverses, the following claim is easy to verify.
\begin{claim}\label{determinant_long_claim}\leavevmode
\begin{enumerate}
\item For all $k \leq m < r$ the induction $$\theta_m = d_{m, m} \theta_{m-1} - d_{m-1, m} d_{m, m-1} \theta_{m-2}$$ holds with initial values $\theta_{k-1} = 1$ and $\theta_{k-2} = 0$
\item For all $k \leq m < r$ the reverse induction $$\phi_{m} = d_{m, m} \phi_{m+1} - d_{m+1, m} d_{m, m+1} \phi_{m+2}$$ holds with initial values $\phi_{r} = 1$ and $\phi_{r+1} = 0$
\item For all $k \leq m < r$ the determinant can be calculated $$\operatorname{Det}\left(D_r\right) = \theta_{r-1} = \phi_{k} = \theta_m \phi_{m+1} - d_{m, m+1} d_{m+1, m} \theta_{m-1} \phi_{m+2}$$
\item The entries of the inverse matrix are \begin{equation*}
\delta_{m, g} = \frac{(-1)^{m+g+r-k}}{\operatorname{Det}(D_r)}\begin{cases}
\theta_{m-1} \phi_{g+1} \prod_{i=m}^{g-1} d_{i, i+1} & \ \text{if } m \leq g \\
\ \\
\theta_{g-1} \phi_{m+1} \prod_{i=g}^{m-1} d_{i+1, i} & \ \text{otherwise}
\end{cases}
\end{equation*} In the corresponding $k \leq m, g <r$ range.
\end{enumerate}
\end{claim}
First the $\phi_{m}$ values will be calculated using the recursive expression above.
\begin{claim}\label{phi_claim}
$\phi_{m} = 1$ in the range $k \leq m \leq r$
\end{claim}
\begin{proof}
By reverse induction. The claim holds for $m=r$ and notice $$\phi_{r-1} = d_{r-1, r-1} = 2 - \frac{\frac{k-1}{r-1}}{1- \frac{\binom{r-2}{k-1}}{\binom{r-1}{k-1}}} = 1$$
Using point 2 from \cref{determinant_long_claim} \begin{equation*}
\begin{split}
\phi_{m} = & d_{m, m} - d_{m+1, m} d_{m, m+1} \\ =& 2 - \frac{k-1}{mx_{m, r}} - \left( 1 - \frac{k-1}{m}\right) \frac{x_{m+1, r}}{x_{m, r}} \\
=& 2 - \frac{k-1}{m(1-u)} - \left( 1 - \frac{k-1}{m}\right) \frac{1-\frac{m
u}{m-k+1}}{1-u} \\ =& 1
\end{split}
\end{equation*}
where the substitutions $u = \frac{\binom{m-1}{k-1}}{\binom{r-1}{k-1}} = 1-x_{m, r}$ and $u\frac{m}{m-k+1} = \frac{\binom{m}{k-1}}{\binom{r-1}{k-1}}= 1-x_{m+1, r}$ were used to simplify the calculation.
\end{proof}
This gives that $\operatorname{Det}\left(D_r\right) = \theta_{r-1} = \phi_{k} = 1$.
\begin{claim}\label{theta_claim}
In the $k-1 \leq m < r$ range, $\operatorname{Sign}(\theta_m) = 1$
\end{claim}
\begin{proof}
By induction, note that the claim holds for $\theta_{k-1} = 1$. Then using point 3 from \cref{determinant_long_claim} and the value for the determinant,
\begin{equation*}
\theta_m = 1 + d_{m, m+1}d_{m+1, m}\theta_{m-1}
\end{equation*}
Using \cref{x_positive_claim}, note that the values $d_{m, m+1} = -x_{m+1, r}$ and $d_{m+1, m} = -\frac{1 - \frac{k-1}{m}}{x_{m, r}}$ are both negative. Therefore their product and, by induction, $\theta_{m-1}$ are both positive, hence so is $\theta_m$.
\end{proof}
\begin{proof}[Proof of \cref{simple_tridiag_lemma}]
The lemma claims two things:
\begin{enumerate}
\item The entries $\delta_{m, g}$ are all positive: \cref{determinant_long_claim} point 4 gives that \begin{equation*}
\operatorname{Sign}(\delta_{m, g}) = (-1)^{m+g+r-k} \begin{cases} \prod_{i=m}^{g-1} \operatorname{Sign}(d_{i, i+1}) & \text{if } m \leq g \\
\prod_{i=g}^{m-1} \operatorname{Sign}(d_{i+1, i}) & \text{otherwise.}\end{cases}
\end{equation*} using $\operatorname{Sign}(\theta_m)=1$ from \cref{theta_claim} and that $\phi_m = 1$ from \cref{phi_claim}. Since all $d_{i, i+1}, d_{i+1, i}$ are negative, the inverse of $D_r$ has only positive entries.
\item $\delta_{k, g} = \prod_{m=k+1}^{g} x_{m, r}$: Again substituting the values $d_{i, i+1} = -x_{i+1, r}$ and $\operatorname{Det}(D_r) = \phi_{g+1} = \theta_{k-1} = 1$ into \cref{determinant_long_claim} point 4 gives the stated value for $\delta_{k, g}$.
\end{enumerate}
\end{proof}
Interestingly, the values $\delta_{k, g}$ are easy enough to calculate exactly. In contrast, $\delta_{m, g}$ requires the value of some $\theta_m$ which is difficult to find in general with the recursive expression. The sign of $\theta_m$ is easy to find, exactly what is needed for the proof.
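As a small sanity check, the following Python sketch builds $D^{(k)}_r$ with exact rational arithmetic and verifies the formula for $\delta_{k,g}$ on one arbitrarily chosen parameter triple; the helper names and the choice $k=3$, $g=5$, $r=7$ are for illustration only.
\begin{verbatim}
from fractions import Fraction
from math import comb

def x_val(m, r, k):
    return 1 - Fraction(comb(m - 1, k - 1), comb(r - 1, k - 1))

def build_D(k, r):
    """Associated tridiagonal matrix D_r^(k), rows/columns indexed by m = k..r-1."""
    idx = list(range(k, r))
    D = [[Fraction(0)] * len(idx) for _ in idx]
    for j, m in enumerate(idx):
        x = x_val(m, r, k)
        D[j][j] = 2 - Fraction(k - 1, m) / x
        if j > 0:
            D[j - 1][j] = -x                                  # d_{m-1, m}
        if j + 1 < len(idx):
            D[j + 1][j] = -(1 - Fraction(k - 1, m)) / x       # d_{m+1, m}
    return D, idx

def solve(A, b):
    """Gauss-Jordan elimination over the rationals."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * e for a, e in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

k, g, r = 3, 5, 7
D, idx = build_D(k, r)
delta_col_g = solve(D, [Fraction(int(m == g)) for m in idx])  # column g of D^{-1}
expected = Fraction(1)
for m in range(k + 1, g + 1):
    expected *= x_val(m, r, k)
print(delta_col_g[idx.index(k)] == expected)                  # delta_{k,g} matches
\end{verbatim}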
\subsection{The inverse of \texorpdfstring{$D^{(k)}_r - \epsilon I$}{Dk,r - eps I}}
Consider the same calculation but with $D_r - \epsilon I$. The value $\phi_m(\epsilon)$ is the determinant for the rows and columns indexed by $\{m, m+1, ..., r-1\}$ of $D_r - \epsilon I$. The next claim shows that $\phi_m(\epsilon)$ is increasing in $m$ when $\epsilon>0$. A notation for the increments will be useful, write $\zeta_m(\epsilon) = \phi_{m+1}(\epsilon) - \phi_m(\epsilon)$.
\begin{claim}
When $k \leq m < r$ and $0 \leq \epsilon \leq \frac{k-1}{(r-1)(r-m)}$, $$0 \leq \zeta_m(\epsilon) \leq \epsilon \frac{r-1}{k-1} \left(1 - \left( 1 - \frac{k-1}{r-1} \right)^{r-m}\right)$$ and correspondingly $$1 \geq \phi_m(\epsilon) \geq 1 - \epsilon \frac{(r-1)(r-m)}{k-1}$$
\end{claim}
\begin{proof}
Use point 2 from \cref{determinant_long_claim}. The initial value is $\zeta_{r}(\epsilon) = 0$ and by reverse induction take
\begin{equation*}
\begin{split}
\phi_m(\epsilon) = & \left( d_{m, m} - \epsilon \right) \phi_{m+1}(\epsilon) - d_{m, m+1} d_{m+1, m} \phi_{m+2}(\epsilon) \\
= & \left( 2 - \frac{k-1}{mx_{m, r}} - \epsilon \right) \phi_{m+1}(\epsilon) - \left( 1 - \frac{k-1}{mx_{m, r}} \right) \left( \phi_{m+1}(\epsilon) + \zeta_{m+1}(\epsilon) \right) \\
= & \phi_{m+1}(\epsilon) - \zeta_{m+1}(\epsilon) \left(1 - \frac{k-1}{mx_{m, r}} \right) - \epsilon\phi_{m+1}(\epsilon).
\end{split}
\end{equation*}
Therefore \begin{equation}\label{zeta_equation}
\zeta_m(\epsilon) = \zeta_{m+1}(\epsilon) \left(1 - \frac{k-1}{mx_{m, r}} \right) + \epsilon \phi_{m+1}(\epsilon).
\end{equation}
Note that $$0 \leq d_{m, m+1} d_{m+1, m} = \left(1 - \frac{k-1}{mx_{m, r}} \right) \leq \left(1 - \frac{k-1}{r-1} \right)$$ in the $k \leq m < r$ range. As $\epsilon \leq \frac{k-1}{(r-1)(r-m)} < \frac{k-1}{(r-1)(r-m-1)}$ by reverse induction it holds that $0 \leq \phi_{m+1}(\epsilon)$ and $0 \leq \zeta_{m+1}(\epsilon)$ giving the required lower bound $0 \leq \zeta_{m}(\epsilon)$. This implies the upper bound $\phi_{m}(\epsilon) \leq 1$.
For the $\zeta_m(\epsilon)$ upper bound, in \cref{zeta_equation} bound each term: $m x_{m, r} \leq (r-1)$ and $\phi_{m+1}(\epsilon) \leq 1$. This gives the intermediate result $$\zeta_m(\epsilon) \leq \zeta_{m+1}(\epsilon) \left(1 - \frac{k-1}{r-1} \right) + \epsilon.$$ which, by iterated application and $\zeta_{r}(\epsilon) = 0$ initial value, implies $$\zeta_m(\epsilon) \leq \epsilon \frac{r-1}{k-1} \left(1 - \left( 1 - \frac{k-1}{r-1} \right)^{r-m}\right).$$ A summation formula for the upper and lower $\zeta_m(\epsilon)$ bounds combined with the initial $\phi_{r}(\epsilon) = 1$ value gives $$1 \geq \phi_m(\epsilon)\geq 1 - \epsilon \frac{(r-1)^2}{(k-1)^2} \left( \frac{(k-1)(r-m)}{r-1} + \left(1 - \frac{k-1}{r-1} \right)^{r-m} - 1\right).$$ This provides a tighter bound but for simplicity use $$1 - \epsilon \frac{(r-1)^2}{(k-1)^2} \left( \frac{(k-1)(r-m)}{r-1} + \left(1 - \frac{k-1}{r-1} \right)^{r-m} - 1\right) \geq 1 - \epsilon \frac{(r-1)(r-m)}{k-1}.$$
\end{proof}
This gives a simple linear bound for the determinant. When $0 < \epsilon < \frac{k-1}{(r-1)(r-k)}$, $$1 - \epsilon \frac{(r-1)(r-k)}{k-1} \leq \phi_k(\epsilon) = \operatorname{Det}(D_r - \epsilon I) \leq 1.$$ If $0 \leq \epsilon < \frac{k-1}{(r-1)(r-k)}$ then the determinant is strictly positive, bounding the smallest eigenvalue of $D_r$.
\begin{proof}[Proof of \cref{tridiag_lemma}]
Again it claims two things.
\begin{enumerate}
\item The entries $\delta_{m, g}(\epsilon)$ are all positive: By assumption, $\epsilon$ is smaller than the smallest eigenvalue of $D_r$. Therefore the expansion $$\left(D_r - \epsilon I \right)^{-1} = D_r^{-1} + \epsilon D_r^{-2} + \epsilon^2 D_r^{-3} + \cdots $$ holds. \Cref{simple_tridiag_lemma} shows that the entries in $D_r^{-1}$ (and correspondingly in $D_r^{-i}$ for $i > 1$) are all positive, giving the required positivity of $\delta_{m, g}(\epsilon)$.
\item $$\delta_{k, g}(\epsilon) \leq \frac{1}{1-\epsilon \frac{(r-1)(r-k)}{k-1}} \prod_{m=k}^g x_{m+1, r}:$$ This follows from substituting the bounds $\phi_{m}(\epsilon) \leq 1$ and $1 - \epsilon \frac{(r-1)(r-k)}{k-1} \leq \operatorname{Det}(D_r - \epsilon I)$ into \cref{determinant_long_claim} point 4.
\end{enumerate}
\end{proof}
\section{\texorpdfstring{$\pi\left(K^{(3)}_4, K^{(3)}_5\right) = 3/8$}{pi K3,4, K3,5 = 3/8}}\label{pi345_section}
A combination of more sophisticated, but still simple, squares yields the exact value $\pi\left(K^{(3)}_4, K^{(3)}_5\right) = 3/8$. For an easier description of the flags, consider the complement question. If $E_n$ is the $3$-graph with $n$ vertices and no edges, then $$\pi\left(K^{(3)}_4, K^{(3)}_5\right) = \lim_{n \rightarrow \infty} \max \left\{ d(E_4, G) \ : \ G \in \mathcal{H}^{(3)}_n, \ \ d(E_5, G)=0 \right\}.$$
Use $P_n$ for the corresponding type with $n$ vertices and no edges. Note that $E_n^{P_m}$ is unique for any pair $m<n$. Define the following further flags:
\begin{enumerate}
\item $L^{P_2}_{a}$ is the flag with vertex set $\{0, 1, 2, 3\}$, edge set $\{(0, 2, 3)\}$ and type formed from the vertices $0, 1$. Similarly, write $L^{P_2}_{b}$ for the flag with the same vertex set and type but $\{(1, 2, 3)\}$ edge set.
\item Use $M^{P_3}_a$ for the flag with vertices $\{0, 1, 2, 3\}$, edges $\{(1, 2, 3)\}$ and type from $0, 1, 2$. Symmetrically, with the same vertex and type set use $M^{P_3}_b$ for the edge set $\{(0, 2, 3)\}$. And $M^{P_3}_c$ for the edge set $\{(0, 1, 3)\}$.
\item $N^{Q_4}$ is the flag with vertices $\{0, 1, 2, 3, 4\}$, edges $\{(0, 1, 2)\}$ and type formed by $0, 1, 2, 3$. Note that $Q_4$ does not agree with any of the $T_n$ or $P_n$ types.
\item $O^{T_4}_{a}$ has vertex set $\{0, 1, 2, 3, 4\}$, edge set $\{(0, 1, 4)\}$ and type formed from $0, 1, 2, 3$. Additionally write $O^{T_4}_{b}$ for the flag with the same vertex and type set but $\{(2, 3, 4)\}$ edges.
\end{enumerate}
\input{illustrations/pi345}
Then the following inequality holds on $E_5$-free hypergraphs:
\begin{equation}\label{pi345_equation}
\begin{split}
0 \leq \ \ \frac{2}{3} & \left\llbracket \left( E_3^{P_1} - \frac{3}{4} P_1 \right)^2 \right\rrbracket_{P_1} +
\frac{1}{6} \left\llbracket \left( L^{P_2}_a - L^{P_2}_b \right)^2 \right\rrbracket_{P_2} + \\
\frac{13}{12} & \left\llbracket \left( M^{P_3}_a + M^{P_3}_b + M^{P_3}_c - \frac{1}{2} P_3 \right)^2 \right\rrbracket_{P_3} +
\frac{11}{12} \left\llbracket \left( E_4^{P_3} - \frac{1}{2} P_3 \right)^2 \right\rrbracket_{P_3} + \\
2 & \left\llbracket \left( N^{Q_4} - \frac{1}{2} Q_4 \right)^2 \right\rrbracket_{Q_4} +
\frac{1}{2} \left\llbracket \left( O^{T_4}_a - O^{T_4}_b \right)^2 \right\rrbracket_{T_4} \leq \quad \frac{3}{8} - E_4.
\end{split}
\end{equation}
So far, the only verification of \cref{pi345_equation} is a tedious (computer-assisted) check of all $2102$ hypergraphs in $\mathcal{H}^{(3)}_6$ without $E_5$; it can be found in the supplement. The matching lower bound is attained at $G_n = K^{(3)}_{\lfloor n/2 \rfloor} \bigsqcup K^{(3)}_{\lceil n/2 \rceil}$. Note that $d(E_5, G_n)=0$ while $\lim_{n \rightarrow \infty} d(E_4, G_n) = \frac{3}{8}$.
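As a quick check of the value $3/8$ (a reading aid only): a $4$-set of vertices of $G_n$ spans no edge exactly when it meets each of the two cliques in two vertices, hence $$d(E_4, G_n) = \frac{\binom{\lfloor n/2 \rfloor}{2}\binom{\lceil n/2 \rceil}{2}}{\binom{n}{4}} = \frac{(1+o(1))\, n^4/64}{(1+o(1))\, n^4/24} \longrightarrow \frac{24}{64} = \frac{3}{8}.$$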
\section{Concluding Remarks}\label{outro_section}
This paper investigated a natural extension of the generalized Turán problem to hypergraphs. The result matches the best-known general bounds for $k$-graphs but fails to provide tight bounds when $k>3$.
The main combinatorial insight comes from the simple inequality \begin{equation}\label{flag_square_equation}
0 \leq \left\llbracket \left( K_m^{T_{m-1}} - xT_{m-1} \right)^2 \right\rrbracket_{T_{m-1}},
\end{equation} combined with a close approximation of $\left\llbracket \left( K_m^{T_{m-1}} \right)^2 \right\rrbracket_{T_{m-1}}$. As shown in the paper, convex combinations of these squares already imply difficult results on the generalized hypergraph Turán problem.
The long list of problems on which the plain flag-algebraic method improves the known bounds suggests that finding more sophisticated squares can greatly improve the available density bounds. It would be interesting to identify other families of simple linear density relations (like the one described in \cref{main_lemma}) whose conic combinations yield new bounds for extremal hypergraph problems, ideally tight ones. The certificate provided for $\pi\left(K^{(3)}_4, K^{(3)}_5\right) = 3/8$ can perhaps be generalized to larger cases. It is interesting that for the smallest $(k, g, r)$ tuple that is neither already known ($k=2$) nor a classical hypergraph Turán problem ($k=g$), the exact value follows from flag-algebraic calculations. This also highlights the limitations of computer-assisted searches: calculations for problems with larger parameters are infeasible.
\subsection{Finding Squares}
There is a simple reason why \cref{flag_square_equation} fails to provide tight bounds for $k$-graphs with $k>2$ but is asymptotically exact when $k=2$. The extremal configuration for $\pi\left(n, K^{(2)}_g, K^{(2)}_r\right)$ is the unique balanced $(r-1)$-partite graph, call it $G_r(n)$, and let $G_r^{T_{m-1}}(n)$ denote the same structure with a complete $(m-1)$-tuple marked as a type. Any choice of $T_{m-1}$ results in the same value of $\lim_{n \rightarrow \infty} d\left(K^{T_{m-1}}_m, G_r^{T_{m-1}}(n)\right)$, namely $x^{(2)}_{m, r} = 1 - \frac{m-1}{r-1}$. In contrast, the conjectured optimal constructions for $k>2$ give different values for different choices of $T_{m-1}$, so no $x \in \mathbb{R}$ exists with $$\left\llbracket \left( K_m^{T_{m-1}} - xT_{m-1} \right)^2 \right\rrbracket_{T_{m-1}} = 0$$ on the conjectured optimal constructions. This slackness accounts for the gap between the conjectured optimal constructions and the bounds proved here. The values $x^{(k)}_{m, r}$ are chosen optimally; any asymptotically significant improvement must therefore rely on a different combinatorial insight.
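To make the contrast concrete, here is the one-line computation of $x^{(2)}_{m, r}$ in the graph case (included only as an illustration): given a complete $(m-1)$-tuple in $G_r(n)$, a uniformly random further vertex extends it to a copy of $K_m$ exactly when it avoids the $m-1$ parts already used, so $$\lim_{n \rightarrow \infty} d\left(K^{T_{m-1}}_m, G_r^{T_{m-1}}(n)\right) = \frac{(r-1)-(m-1)}{r-1} = 1 - \frac{m-1}{r-1},$$ independently of the chosen $(m-1)$-tuple; it is exactly this independence that fails for the conjectured extremal $k$-graphs with $k>2$.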
\end{document}
|
\begin{document}
\title{DP-colorings of graphs with high chromatic number}
\begin{abstract}
\emph{DP-coloring} is a generalization of list coloring introduced recently by Dvo\v r\' ak and Postle~\cite{DP}.
We prove that for every $n$-vertex graph $G$ whose chromatic number $\chi(G)$ is ``close'' to~$n$, the DP-chromatic number of $G$ equals $\chi(G)$.
``Close'' here means $\chi(G)\geq n-O(\sqrt{n})$, and we also show that this lower bound is best possible (up to the constant factor in front of~$\sqrt{n}$), in contrast to the case of list coloring.
\end{abstract}
\section{Introduction}
We use standard notation. In particular, $\mathbb{N}$ denotes the set of all nonnegative integers. For a set~$S$, $\powerset{S}$ denotes the power set of $S$, i.e., the set of all subsets of $S$.
All graphs considered here are finite, undirected, and simple.
For a graph~$G$, $V(G)$ and $E(G)$ denote the vertex and the edge sets of $G$, respectively.
For a set $U \subseteq V(G)$, $G[U]$ is the subgraph of $G$ induced by $U$. Let $G - U \coloneqq G[V(G) \setminus U]$, and for $u \in V(G)$,
let $G - u \coloneqq G - \set{u}$. For $U_1$, $U_2 \subseteq V(G)$, let $E_G(U_1, U_2) \subseteq
E(G)$ denote the set of all edges in $G$ with one endpoint in $U_1$ and the other one in $U_2$.
For $u \in V(G)$, $N_G(u)\subset V(G)$ denotes the set of all neighbors of $u$, and $\deg_G(u) \coloneqq |N_G(u)|$ is the \emph{degree} of~$u$ in $G$. We use $\delta(G)$ to denote the \emph{minimum degree} of $G$, i.e., $\delta(G) \coloneqq \min_{u \in V(G)} \deg_G(u)$. For $U \subseteq V(G)$, let $N_G(U) \coloneqq \bigcup_{u \in U} N_G(u)$. To simplify notation, we write $N_G(u_1, \ldots, u_k)$ instead of $N_G(\set{u_1, \ldots, u_k})$.
A set $I \subseteq V(G)$ is {\em independent} if $I \cap N_G(I) = \varnothing$, i.e., if $uv \not \in E(G)$ for all $u$, $v \in I$.
We denote the family of all independent sets in a graph $G$ by $\mathcal{I}(G)$. The complete $k$-vertex graph is denoted by~$K_k$.
\subsection{The Noel--Reed--Wu Theorem for list coloring}
Recall that a \emph{proper coloring} of a graph $G$ is a function $f \colon V(G) \to Y$, where $Y$ is a set of \emph{colors},
such that $f(u) \neq f(v)$ for every edge $uv \in E(G)$. The smallest $k \in \mathbb{N}$ such that there exists a proper coloring $f \colon V(G) \to Y$ with $|Y| = k$ is called the \emph{chromatic number} of $G$ and is denoted by $\chi(G)$.
\emph{List coloring} was introduced independently by Vizing~\cite{Vizing} and Erd\H os, Rubin, and Taylor~\cite{ERT}.
A \emph{list assignment} for a graph $G$ is a function $L \colon V(G) \to \powerset{Y}$, where $Y$ is a set. For each $u \in V(G)$, the set $L(u)$ is called the \emph{list} of $u$, and its elements are the \emph{colors} \emph{available} for~$u$. A~proper coloring $f \colon V(G) \to Y$
is called an \emph{$L$-coloring} if $f(u) \in L(u)$ for each $u \in V(G)$.
The \emph{list chromatic number} $\chi_\ell(G)$ of $G$ is the smallest $k \in \mathbb{N}$ such that~$G$ is $L$-colorable for each list assignment $L$
with $|L(u)| \geq k$ for all $u \in V(G)$.
It is an immediate consequence of the definition that $\chi_\ell(G) \geq \chi(G)$ for every graph $G$.
It is well-known (see, e.g.,~\cite{ERT, Vizing}) that the list chromatic number of a graph can significantly exceed its ordinary chromatic number. Moreover, there exist $2$-colorable graphs with arbitrarily large list chromatic numbers. On the other hand, Noel, Reed, and Wu~\cite{NRW}
established the following result, which was conjectured by Ohba~\cite[Conjecture~1.3]{Ohba}:
\begin{theo}[Noel--Reed--Wu~\cite{NRW}]\label{theo:NRW}
Let $G$ be an $n$-vertex graph with $\chi(G)\geq (n-1)/2$. Then $\chi_\ell(G)=\chi(G)$.
\end{theo}
The following construction was first studied by Ohba~\cite{Ohba} and Enomoto, Ohba, Ota, and Sakamoto~\cite{EOOS}. For a graph $G$ and $s \in \mathbb{N}$, let $\mathbf{J}(G,s)$ denote the {\em join} of $G$ and a copy of $K_s$, i.e., the graph obtained from $G$ by adding $s$ new vertices that are adjacent to every vertex in $V(G)$ and to each other. It is clear from the definition that for all $G$ and $s$, $\chi(\mathbf{J}(G,s))=\chi(G)+s$. Moreover, we have $\chi_\ell(\mathbf{J}(G, s)) \leq \chi_\ell(G) + s$; however, this inequality can be strict. Indeed, Theorem~\ref{theo:NRW} implies that for every graph $G$ and every $s \geq |V(G)| - 2 \chi(G) - 1$,
$$
\chi_\ell(\mathbf{J}(G, s)) = \chi(\mathbf{J}(G, s)),
$$
even if $\chi_\ell(G)$ is much larger than $\chi(G)$. In view of this observation, it is interesting to consider the following parameter:
\begin{equation}\label{eq:list_Zhu}
Z_\ell(G) \coloneqq \min \set{s \in \mathbb{N} \,:\, \chi_\ell(\mathbf{J}(G, s)) = \chi(\mathbf{J}(G, s))},
\end{equation}
i.e., the smallest $s \in \mathbb{N}$ such that the list and the ordinary chromatic numbers of $\mathbf{J}(G, s)$ coincide. The parameter $Z_\ell(G)$ was explicitly defined by Enomoto, Ohba, Ota, and Sakamoto in~\cite[page~65]{EOOS} (they denoted it $\psi(G)$). Recently, Kim, Park, and Zhu (personal communication, 2016) obtained new lower bounds on $Z_\ell(K_{2,n})$, $Z_\ell(K_{n,n})$, and $Z_\ell(K_{n,n,n})$. One can also consider, for $n \in \mathbb{N}$,
\begin{equation}\label{eq:list_Zhu_n}
Z_\ell(n) \coloneqq \max \set{Z_\ell(G) \,:\, |V(G)| = n}.
\end{equation}
The parameter $Z_\ell(n)$ is closely related to the Noel--Reed--Wu Theorem, since, by definition, there exists a graph $G$ on $n + Z_\ell(n) - 1$ vertices whose ordinary chromatic number is at least $Z_\ell(n)$ and whose list and ordinary chromatic numbers are distinct. The finiteness of $Z_\ell(n)$ for all $n \in \mathbb{N}$ was first established by Ohba~\cite[Theorem~1.3]{Ohba}. Theorem~\ref{theo:NRW} yields an upper bound $Z_\ell(n) \leq n - 5$ for all $n \geq 5$; on the other hand, a result of Enomoto, Ohba, Ota, and Sakamoto~\cite[Proposition~6]{EOOS} implies that $Z_\ell(n) \geq n - O(\sqrt{n})$.
\subsection{DP-colorings and the results of this paper}
The goal of this note is to study analogs of $Z_\ell(G)$ and $Z_\ell(n)$ for the generalization of list coloring that was recently introduced by Dvo\v r\' ak and Postle~\cite{DP}, which we call \emph{DP-coloring}.
Dvo\v r\' ak and Postle invented DP-coloring to attack an open problem on list coloring of planar graphs with no cycles of certain lengths.
\begin{defn}
Let $G$ be a graph. A \emph{cover} of $G$ is a pair $(L, H)$, where $H$ is a graph and $L \colon V(G) \to \powerset{V(H)}$ is a function, with the following properties:
\begin{itemize}
\item[--] the sets $L(u)$, $u \in V(G)$, form a partition of $V(H)$;
\item[--] if $u$, $v \in V(G)$ and $L(v) \cap N_H(L(u)) \neq \varnothing$, then $v \in \set{u} \cup N_G(u)$;
\item[--] each of the graphs $H[L(u)]$, $u \in V(G)$, is complete;
\item[--] if $uv \in E(G)$, then $E_H(L(u), L(v))$ is a matching (not necessarily perfect and possibly empty).
\end{itemize}
\end{defn}
\begin{defn}
Let $G$ be a graph and let $(L, H)$ be a cover of $G$. An \emph{$(L, H)$-coloring} of $G$ is an independent set $I \in \mathcal{I}(H)$ of size $|V(G)|$. Equivalently, $I \in \mathcal{I}(H)$ is an $(L, H)$-coloring of $G$ if $|I \cap L(u)| = 1$ for all $u \in V(G)$.
\end{defn}
\begin{remk}
Suppose that $G$ is a graph, $(L, H)$ is a cover of $G$, and $G'$ is a subgraph of $G$. In such situations, we will allow a slight abuse of terminology and speak of $(L, H)$-colorings of $G'$ (even though, strictly speaking, $(L, H)$ is not a cover of $G'$).
\end{remk}
The \emph{DP-chromatic number} $\chi_{DP}(G)$ of a graph $G$ is the smallest $k \in \mathbb{N}$ such that $G$ is $(L,H)$-colorable for each cover $(L, H)$ with $|L(u)| \geq k$ for all $u \in V(G)$.
To show that DP-colorings indeed generalize list colorings, consider a graph $G$ and a list assignment $L$ for $G$. Define a graph $H$ as follows: Let $V(H) \coloneqq \set{(u, c)\,:\, u \in V(G) \text{ and } c \in L(u)}$ and let
$$
(u_1, c_1)(u_2, c_2) \in E(H) \,\vcentcolon\Longleftrightarrow\, (u_1 = u_2 \text{ and } c_1 \neq c_2) \text{ or } (u_1u_2 \in E(G) \text{ and }c_1 = c_2).
$$
For $u \in V(G)$, let $\hat{L}(u) \coloneqq \set{(u, c)\,:\, c \in L(u)}$. Then $(\hat{L}, H)$ is a cover of $G$, and there is a one-to-one correspondence between $L$-colorings and $(\hat{L}, H)$-colorings of $G$. Indeed, if $f$ is an $L$-coloring of $G$, then the set $I_f \coloneqq \set{(u, f(u))\,:\, u \in V(G)}$ is an $(\hat{L}, H)$-coloring of~$G$. Conversely, given an $(\hat{L}, H)$-coloring $I$ of $G$, we can define an $L$-coloring $f_I$ of $G$ by the property $(u, f_I(u)) \in I$ for all $u \in V(G)$. Thus, list colorings form a subclass of DP-colorings. In particular, $\chi_{DP}(G) \geq \chi_\ell(G)$ for each graph $G$.
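For a concrete illustration of this construction (a toy example): let $G$ be a single edge $uv$ with $L(u) = \set{1,2}$ and $L(v) = \set{2, 3}$. Then $V(H) = \set{(u,1), (u,2), (v,2), (v,3)}$, the pairs $(u,1)(u,2)$ and $(v,2)(v,3)$ are edges of $H$, and the only edge between the two lists is $(u,2)(v,2)$. The three $L$-colorings of $G$ correspond to the three $(\hat{L}, H)$-colorings $\set{(u,1),(v,2)}$, $\set{(u,1),(v,3)}$, and $\set{(u,2),(v,3)}$.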
Some upper bounds on list-chromatic numbers hold for DP-chromatic numbers as well. For example, $\chi_{DP}(G) \leq d+1$ for any $d$-degenerate graph $G$. Dvo\v r\'ak and Postle~\cite{DP} pointed out that Thomassen's bounds~\cite{Thomassen1, Thomassen2}
on the list chromatic numbers of planar graphs hold also for their DP-chromatic numbers; in particular,
$\chi_{DP}(G) \leq 5$ for every planar graph $G$.
On the other hand, there are also some striking differences between DP- and list coloring. For instance, even cycles are $2$-list-colorable, while
their DP-chromatic number is $3$; in particular, the orientation theorems of Alon--Tarsi~\cite{AT} and the Bondy--Boppana--Siegel Lemma (see~\cite{AT}) do not extend to DP-coloring (see~\cite{BK} for further examples of differences between list and DP-coloring).
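To make the difference explicit, here is a standard witness for $\chi_{DP}(C_4) \geq 3$ (included as a reading aid, with the notation introduced above). Let $C_4$ have vertices $v_1, v_2, v_3, v_4$ in cyclic order, let $L(v_i) \coloneqq \set{(v_i, 0), (v_i, 1)}$, and let $H$ consist of the edges inside each list together with the matchings joining $(v_i, j)$ to $(v_{i+1}, j)$ for $i \in \set{1,2,3}$ and $(v_4, j)$ to $(v_1, 1-j)$. If an $(L, H)$-coloring contained $(v_1, a)$, it would have to contain $(v_2, 1-a)$, $(v_3, a)$, and $(v_4, 1-a)$, and the last of these is adjacent to $(v_1, a)$ via the twisted matching; hence no $(L, H)$-coloring exists, even though all lists have size $2$.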
By analogy with~\eqref{eq:list_Zhu} and \eqref{eq:list_Zhu_n}, we consider the parameters
$$
Z_{DP}(G) \coloneqq \min \set{s \in \mathbb{N} \,:\, \chi_{DP}(\mathbf{J}(G, s)) = \chi(\mathbf{J}(G, s))},
$$
and
$$
Z_{DP}(n) \coloneqq \max \set{Z_{DP}(G) \,:\, |V(G)| = n}.
$$
Our main result asserts that for all graphs $G$, $Z_{DP}(G)$ is finite:
\begin{theo}\label{theo:main}
Let $G$ be a graph with $n$ vertices, $m$ edges, and chromatic number $k$. Then $Z_{DP}(G) \leq 3m$. Moreover, if $\delta(G) \geq k-1$, then $$Z_{DP}(G) \leq 3m - \frac{3}{2}(k-1)n.$$
\end{theo}
\begin{corl}\label{corl:Z_n}
For all $n \in \mathbb{N}$, $Z_{DP}(n) \leq 3n^2/2$.
\end{corl}
Note that the upper bound on $Z_{DP}(n)$ given by Corollary~\ref{corl:Z_n} is quadratic in $n$, in contrast to the linear upper bound on $Z_\ell(n)$ implied by Theorem~\ref{theo:NRW}. Our second result shows that the order of magnitude of $Z_{DP}(n)$ is indeed quadratic:
\begin{theo}\label{theo:lower}
For all $n \in \mathbb{N}$, $Z_{DP}(n) \geq n^2/4 - O(n)$.
\end{theo}
Corollary~\ref{corl:Z_n} and Theorem~\ref{theo:lower} also yield the following analog of Theorem~\ref{theo:NRW} for DP-coloring:
\begin{corl}\label{corl:NRW_DP}
For $n \in \mathbb{N}$, let $r(n)$ denote the minimum $r \in \mathbb{N}$ such that for every $n$-vertex graph~$G$ with $\chi(G)\geq r$, we have $\chi_{DP}(G)=\chi(G)$.
Then
$$
n - r(n) = \Theta(\sqrt{n}).
$$
\end{corl}
We prove Theorem~\ref{theo:main} in Section~\ref{sec:ma} and Theorem~\ref{theo:lower} in Section~\ref{sec:low}. The derivation of Corollary~\ref{corl:NRW_DP} from Corollary~\ref{corl:Z_n} and Theorem~\ref{theo:lower} is straightforward; for completeness, we include it at the end of Section~\ref{sec:low}.
\section{Proof of Theorem~\ref{theo:main}}\label{sec:ma}
For a graph $G$ and a finite set $A$ disjoint from $V(G)$, let $\mathbf{J}(G, A)$ denote the graph with vertex set $V(G) \cup A$ obtained from $G$ by adding all edges with at least one endpoint in $A$ (i.e., $\mathbf{J}(G, A)$ is a concrete representative of the isomorphism type of $\mathbf{J}(G, |A|)$).
First we prove the following more technical version of Theorem~\ref{theo:main}:
\begin{theo}\label{theo:upper_bound}
Let $G$ be a $k$-colorable graph. Let $A$ be a finite set disjoint from $V(G)$ and let
$(L,H)$ be a cover of $\mathbf{J}(G, A)$ such that for all $a \in A$, $|L(a)| \geq |A| + k$. Suppose that \begin{equation}\label{eq:IH}
|A| \,\geq\, \frac{3}{2}\sum_{v \in V(G)} \max \set{\deg_G(v) + |A| - |L(v)| + 1,\,0}.
\end{equation}
Then $\mathbf{J}(G,A)$ is $(L,H)$-colorable.
\end{theo}
\begin{proof}
For a graph $G$, a set $A$ disjoint from $V(G)$, a cover $(L, H)$ of $\mathbf{J}(G, A)$, and a vertex $v \in V(G)$, let
$$
\sigma(G, A, L, H, v) \coloneqq \max \set{\deg_G(v) + |A| - |L(v)| + 1,\,0}
$$
and
$$
\sigma(G, A, L, H) \coloneqq \sum_{v \in V(G)} \sigma(G, A, L, H, v).
$$
Assume, towards a contradiction, that a tuple $(k, G, A, L, H)$ forms a counterexample which minimizes $k$, then $|V(G)|$, and then $|A|$. For brevity, we will use the following shortcuts:
$$
\sigma(v) \coloneqq \sigma(G, A, L, H, v); \;\;\; \sigma \coloneqq \sigma(G, A, L, H).
$$
Thus,~\eqref{eq:IH} is equivalent to
$$
|A| \geq \frac{3\sigma}{2}.
$$
Note that $|V(G)|$ and $|A|$ are both positive. Indeed, if $V(G) = \varnothing$, then $\mathbf{J}(G, A)$ is just a clique with vertex set $A$, so its DP-chromatic number is $|A|$. If, on the other hand, $A = \varnothing$, then~\eqref{eq:IH} implies that $|L(v)| \geq \deg_G(v) + 1$ for all $v \in V(G)$, so an $(L, H)$-coloring of $G$ can be constructed greedily. Furthermore, $\chi(G) = k$, since otherwise we could have used the same $(G, A, L, H)$ with a smaller value of $k$.
\begin{claim}\label{claim:critical}
For every $v \in V(G)$, the graph $\mathbf{J}(G - v, A)$ is $(L,H)$-colorable.
\end{claim}
\begin{claimproof}
Consider any $v_0 \in V(G)$ and let $G' \coloneqq G - v_0$. For all $v \in V(G')$, $\deg_{G'}(v) \leq \deg_G(v)$, and thus
$\sigma(G', A, L, H, v) \leq \sigma(v)$. Therefore,
$$
\frac{3}{2}\sigma (G', A, L, H)\,\leq\, \frac{3\sigma}{2}\,\leq \,|A|.
$$
By the minimality of $|V(G)|$, the conclusion of Theorem~\ref{theo:upper_bound} holds for $(k, G', A, L, H)$, i.e., $\mathbf{J}(G', A)$ is $(L,H)$-colorable, as claimed.
\end{claimproof}
\begin{smallcorl}\label{corl:small_lists}
For every $v \in V(G)$,
$$\sigma(v) = \deg_G(v) + |A| - |L(v)| + 1 > 0.$$
\end{smallcorl}
\begin{claimproof}
Suppose that for some $v_0 \in V(G)$,
$$
\deg_G(v_0) + |A| - |L(v_0)| + 1 \leq 0,
$$
i.e.,
$$
|L(v_0)| \geq \deg_G(v_0) + |A| + 1.
$$
Using Claim~\ref{claim:critical}, fix any $(L,H)$-coloring $I$ of $\mathbf{J}(G - v_0, A)$. Since $v_0$ still has at least
$$
|L(v_0)| - (\deg_G(v_0) + |A|) \geq 1
$$
available colors, $I$ can be extended to an $(L,H)$-coloring of $\mathbf{J}(G, A)$ greedily; a contradiction.
\end{claimproof}
\begin{claim}\label{claim:no_neighbor}
For every $v \in V(G)$ and $x \in \bigcup_{a \in A} L(a)$, there is $y \in L(v)$ such that $xy \in E(H)$.
\end{claim}
\begin{claimproof}
Suppose that for some $a_0 \in A$, $x_0 \in L(a_0)$, and $v_0 \in V(G)$, we have $L(v_0) \cap N_H(x_0) = \varnothing$. Let $A' \coloneqq A \setminus \set{a_0}$, and for every $w \in V(G) \cup A'$, let $L'(w) \coloneqq L(w) \setminus N_H(x_0)$. Note that for all $a \in A'$, $|L'(a)| \geq |A'| + k$, and for all $v \in V(G)$, $\sigma(G, A', L', H, v) \leq \sigma(v)$. Moreover, by the choice of~$x_0$, $|L'(v_0)| = |L(v_0)|$, which, due to Corollary~\ref{corl:small_lists}, yields
$ \sigma(G, A', L', H, v_0) \leq \sigma(v_0)-1$.
This implies $\sigma(G, A', L', H) \leq \sigma - 1$, and thus
$$
\frac{3}{2} \sigma(G, A', L', H) \leq \frac{3(\sigma - 1)}{2} \leq |A|-\frac{3}{2} < |A'|.
$$
By the minimality of $|A|$, the conclusion of Theorem~\ref{theo:upper_bound} holds for $(k, G, A', L', H)$, i.e., the graph $\mathbf{J}(G, A')$ is $(L', H)$-colorable. By the definition of~$L'$, for any $(L', H)$-coloring $I$ of $\mathbf{J}(G, A')$, $I \cup \set{x_0}$ is an $(L,H)$-coloring of $\mathbf{J}(G, A)$. This is a~contradiction.
\end{claimproof}
\begin{smallcorl}\label{corl:k_geq_2}
$k \geq 2$.
\end{smallcorl}
\begin{claimproof}
Let $v \in V(G)$ and consider any $a \in A$. Since, by Claim~\ref{claim:no_neighbor}, each $x \in L(a)$ has a neighbor in $L(v)$, we have
$$
|L(v)| \geq |L(a)| \geq |A| + k.
$$
Using Corollary~\ref{corl:small_lists}, we obtain
$$
0 \leq \deg_G(v) + |A| - |L(v)| \leq \deg_G(v) - k,
$$
i.e., $\deg_G(v) \geq k$. Since $V(G) \neq \varnothing$, $k \geq 1$, which implies $\deg_G(v) \geq 1$. But then $\chi(G) \geq 2$, as desired.
\end{claimproof}
\begin{claim}\label{claim:walk}
$H$ does not contain a walk of the form $x_0- y_0-x_1-y_1-x_2$, where
\begin{itemize}
\item $x_0$, $x_1$, $x_2 \in \bigcup_{a \in A} L(a)$;
\item $y_0$, $y_1 \in \bigcup_{v \in V(G)} L(v)$;
\item $x_0 \neq x_1\neq x_2$ and $y_0 \neq y_1$ (but it is possible that $x_0 = x_2$);
\item the set $\set{x_0, x_1, x_2}$ is independent in $H$.
\end{itemize}
\end{claim}
\begin{claimproof}
Suppose that such a walk exists and let $a_0$, $a_1$, $a_2 \in A$ and $v_0$, $v_1 \in V(G)$ be such that $x_0 \in L(a_0)$, $y_0 \in L(v_0)$, $x_1 \in L(a_1)$, $y_1 \in L(v_1)$, and $x_2 \in L(a_2)$. Let $A' \coloneqq A \setminus \set{a_0, a_1, a_2}$, and for every $w \in V(G) \cup A'$, let $L'(w) \coloneqq L(w) \setminus N_H(x_0, x_1, x_2)$. Since $\set{x_0, x_1, x_2}$ is an independent set, for all $a \in A'$, $|L'(a)| \geq |A'| + k$, while for all $v \in V(G)$, $\sigma(G, A', L', H, v) \leq \sigma(v)$. Moreover, since for each $i \in \set{0,1}$, the set $\set{x_0, x_1, x_2}$ contains two distinct neighbors of $y_i$, we have
$\sigma(G, A', L', H, v_i) \leq \sigma(v_i) - 1$.
Therefore, $\sigma(G, A', L', H) \leq \sigma - 2$, and thus
$$
\frac{3}{2} \sigma(G, A', L', H) \leq \frac{3(\sigma - 2)}{2} \leq |A| - 3 \leq |A'|.
$$
By the minimality of $|A|$, the conclusion of Theorem~\ref{theo:upper_bound} holds for $(k, G, A', L', H)$, i.e., the graph $\mathbf{J}(G, A')$ is $(L', H)$-colorable. By the definition of~$L'$, for any $(L', H)$-coloring $I$ of $\mathbf{J}(G, A')$, $I \cup \set{x_0,x_1, x_2}$ is an $(L,H)$-coloring of $\mathbf{J}(G, A)$. This is a~contradiction.
\end{claimproof}
Due to Corollary~\ref{corl:k_geq_2}, we can choose a pair of disjoint independent sets $U_0$, $U_1 \subset V(G)$ such that $\chi(G - U_0) = \chi(G - U_1) = k-1$. Choose arbitrary elements $a_1 \in A$ and $x_1 \in L(a_1)$. By Claim~\ref{claim:no_neighbor}, for each $u \in U_0 \cup U_1$, there is a unique element $y(u) \in L(u)$ adjacent to $x_1$ in $H$ (the uniqueness of $y(u)$ follows from the definition of a cover). Let
$$
I_0 \coloneqq \set{y(u) \,:\, u \in U_0} \;\;\;\text{ and }\;\;\; I_1 \coloneqq \set{y(u) \,:\, u \in U_1}.
$$
Since $U_0$ and $U_1$ are independent sets in $G$, $I_0$ and $I_1$ are independent sets in $H$.
\begin{claim}\label{claim:first_step}
There exists an element $a_0 \in A \setminus \set{a_1}$ such that $L(a_0) \cap N_H(I_0) \not \subseteq N_H(x_1)$.
\end{claim}
\begin{claimproof}
Assume that for all $a \in A \setminus \set{a_1}$, we have $L(a) \cap N_H(I_0) \subseteq N_H(x_1)$. Let $G' \coloneqq G - U_0$, and for each $w \in V(G') \cup A$, let $L'(w) \coloneqq L(w) \setminus N_H(I_0)$. By the definition of $I_0$, $L'(a_1) = L(a_1) \setminus \set{x_1}$, so $$|L'(a_1)| = |L(a_1)| - 1 \geq |A| + (k-1).$$
On the other hand, by our assumption, for each $a \in A \setminus \set{a_1}$, we have $$|L'(a)| = |L(a) \setminus N_H(I_0)| \geq |L(a) \setminus N_H(x_1)| \geq |L(a)| - 1 \geq |A| + (k-1).$$
Since for all $v \in V(G)$, $\sigma(G', A, L', H, v) \leq \sigma(v)$, the minimality of $k$ implies the conclusion of Theorem~\ref{theo:upper_bound} for $(k-1, G', A, L', H)$; in other words, the graph $\mathbf{J}(G', A)$ is $(L', H)$-colorable. By the definition of~$L'$, for any $(L', H)$-coloring $I$ of $\mathbf{J}(G', A)$, $I \cup I_0$ is an $(L,H)$-coloring of $\mathbf{J}(G, A)$; this is a~contradiction.
\end{claimproof}
Using Claim~\ref{claim:first_step}, fix some $a_0 \in A \setminus \set{a_1}$ satisfying $L(a_0) \cap N_H(I_0) \not\subseteq N_H(x_1)$, and choose any
$$x_0 \in (L(a_0) \cap N_H(I_0)) \setminus N_H(x_1).$$
Since $x_0 \in N_H(I_0)$, we can also choose $y_0 \in I_0$ so that $x_0 y_0 \in E(H)$.
\begin{claim}
$x_0 \not \in N_H(I_1)$.
\end{claim}
\begin{claimproof}
If there is $y_1 \in I_1$ such that $x_0 y_1 \in E(H)$, then $x_0 - y_0 - x_1 - y_1 - x_0$ is a walk in $H$ whose existence is ruled out by Claim~\ref{claim:walk}.
\end{claimproof}
\begin{claim}
There is an element $a_2 \in A \setminus \set{a_0, a_1}$ such that $L(a_2) \cap N_H(I_1) \not \subseteq N_H(x_0, x_1)$.
\end{claim}
\begin{claimproof}
The proof is almost identical to the proof of Claim~\ref{claim:first_step}. Assume that for all $a \in A \setminus \set{a_0, a_1}$, we have $L(a) \cap N_H(I_1) \subseteq N_H(x_0, x_1)$. Let $G' \coloneqq G - U_1$, $A' \coloneqq A \setminus \set{a_0}$, and for each $w \in V(G') \cup A'$, let $L'(w) \coloneqq L(w) \setminus N_H(\set{x_0} \cup I_1)$. By the definition of $I_1$, $L(a_1) \cap N_H(I_1) = \set{x_1}$, so $$|L'(a_1)| \geq |L(a_1)| - 2 \geq |A| + k-2 = |A'| + (k-1).$$
On the other hand, by our assumption, for each $a \in A \setminus \set{a_0, a_1}$, we have $$|L'(a)| \geq |L(a) \setminus N_H(x_0, x_1)| \geq |L(a)| - 2 \geq |A| + k-2 = |A'| + (k-1).$$
Since for all $v \in V(G)$, $\sigma(G', A', L', H, v) \leq \sigma(v)$, the minimality of $k$ implies the conclusion of Theorem~\ref{theo:upper_bound} for $(k-1, G', A', L', H)$; in other words, the graph $\mathbf{J}(G', A')$ is $(L', H)$-colorable. By the definition of~$L'$, for any $(L', H)$-coloring $I$ of $\mathbf{J}(G', A')$, $I \cup \set{x_0} \cup I_1$ is an $(L,H)$-coloring of $\mathbf{J}(G, A)$. This is a~contradiction.
\end{claimproof}
Now we are ready to finish the proof of Theorem~\ref{theo:upper_bound}. Fix some $a_2 \in A \setminus \set{a_0, a_1}$ satisfying $L(a_2) \cap N_H(I_1) \not\subseteq N_H(x_0, x_1)$, and choose any
$$
x_2 \in (L(a_2) \cap N_H(I_1)) \setminus N_H(x_0, x_1).
$$
Since $x_2 \in N_H(I_1)$, there is $y_1 \in I_1$ such that $x_2 y_1 \in E(H)$. Then $x_0-y_0-x_1-y_1-x_2$ is a walk in $H$ contradicting the conclusion of Claim~\ref{claim:walk}.
\end{proof}
Now it is easy to derive Theorem~\ref{theo:main}. Indeed, let $G$ be a graph with $n$ vertices, $m$ edges, and chromatic number $k$, let $A$ be a finite set disjoint from $V(G)$, and let $(L, H)$ be a cover of $\mathbf{J}(G, A)$ such that for all $v \in V(G)$ and $a \in A$, $|L(v)| = |L(a)| = \chi(\mathbf{J}(G,A)) = |A| + k$. Note that
$$
\frac{3}{2} \sum_{v \in V(G)} \max \set{\deg_G(v) - |L(v)| + |A| + 1,\,0} = \frac{3}{2} \sum_{v \in V(G)} \max \set{\deg_G(v) - k + 1, \, 0}.
$$
If $|A| \geq 3m$, then
$$
\frac{3}{2} \sum_{v \in V(G)} \max \set{\deg_G(v) - k + 1, \, 0} \leq \frac{3}{2} \sum_{v \in V(G)} \deg_G(v) = 3m \leq |A|,
$$
so Theorem~\ref{theo:upper_bound} implies that $\mathbf{J}(G, A)$ is $(L, H)$-colorable, and hence $Z_{DP}(G) \leq 3m$. Moreover, if $\delta(G) \geq k-1$, then
$$
\frac{3}{2} \sum_{v \in V(G)} \max \set{\deg_G(v) - k + 1, \, 0} = \frac{3}{2} \sum_{v \in V(G)} (\deg_G(v) - k + 1) = 3m - \frac{3}{2}(k-1)n,
$$
so $Z_{DP}(G) \leq 3m - \frac{3}{2}(k-1)n$, as desired. Finally, Corollary~\ref{corl:Z_n} follows from Theorem~\ref{theo:main} and the fact that an $n$-vertex graph can have at most ${n \choose 2} \leq n^2/2$ edges.
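For a small numerical illustration of Theorem~\ref{theo:main} (included only as an illustration): the $4$-cycle $C_4$ has $n = 4$, $m = 4$, $k = 2$, and $\delta(C_4) = 2 \geq k - 1$, so the second bound gives $Z_{DP}(C_4) \leq 3 \cdot 4 - \tfrac{3}{2} \cdot 1 \cdot 4 = 6$; in other words, $\chi_{DP}(\mathbf{J}(C_4, 6)) = \chi(\mathbf{J}(C_4, 6))$.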
\section{Proof of Theorem~\ref{theo:lower}}\label{sec:low}
We will prove the following precise version of Theorem~\ref{theo:lower}:
\begin{theo}\label{theo:lower2}
For all even $n \in \mathbb{N}$,
$Z_{DP}(n) \geq n^2/4-n$.
\end{theo}
\begin{proof} Let $n\in \mathbb{N}$ be even and let $k \coloneqq n/2-1$. Note that $n^2/4 - n = k^2 - 1$. Thus, it is enough to exhibit an $n$-vertex bipartite graph $G$ and a cover $(L, H)$ of $\mathbf{J}(G, k^2 - 2)$ such that $|L(u)| = k^2$ for all $u \in V(\mathbf{J}(G, k^2 - 2))$, yet $\mathbf{J}(G, k^2 - 2)$ is not $(L, H)$-colorable.
Let $G \cong K_{n/2, n/2}$ be an $n$-vertex complete bipartite graph with parts $X = \set{x, x_0, \ldots, x_{k-1}}$ and $Y = \set{y, y_0, \ldots, y_{k-1}}$, where the indices $0$, \ldots, $k-1$ are viewed as elements of the additive group $\mathbb{Z}_k$ of integers modulo $k$. Let $A$ be a set of size $k^2 - 2$ disjoint from $X \cup Y$. For each $u \in X \cup Y \cup A$, let $L(u) \coloneqq \set{u} \times \mathbb{Z}_k \times \mathbb{Z}_k$. Let $H$ be the graph with vertex set $(X \cup Y \cup A) \times \mathbb{Z}_k \times \mathbb{Z}_k$ in which the following pairs of vertices are adjacent:
\begin{itemize}
\item[--] $(u, i, j)$ and $(u, i', j')$ for all $u \in X \cup Y \cup A$ and $i$, $j$, $i'$, $j' \in \mathbb{Z}_k$ such that $(i, j) \neq (i', j')$;
\item[--] $(u, i, j)$ and $(v, i, j)$ for all $u \in \set{x, y} \cup A$, $v \in N_{\mathbf{J}(G, A)}(u)$, and $i$, $j \in \mathbb{Z}_k$;
\item[--] $(x_s, i, j)$ and $(y_t, i+s, j+t)$ for all $s$, $t$, $i$, $j \in \mathbb{Z}_k$.
\end{itemize}
It is easy to see that $(L, H)$ is a cover of $\mathbf{J}(G, A)$. We claim that $\mathbf{J}(G, A)$ is not $(L, H)$-colorable. Indeed, suppose that $I$ is an $(L, H)$-coloring of $\mathbf{J}(G, A)$. For each $u \in X \cup Y \cup A$, let $i(u)$ and $j(u)$ be the unique elements of $\mathbb{Z}_k$ such that $(u, i(u), j(u)) \in I$. By the construction of $H$ and since $I$ is an independent set, we have
$$
(i(u), j(u)) \neq (i(a), j(a))
$$
for all $u \in X \cup Y$ and $a \in A$. Since all the $k^2 - 2$ pairs $(i(a), j(a))$ for $a \in A$ are pairwise distinct, $(i(u), j(u))$ can take at most $2$ distinct values as $u$ ranges over $X \cup Y$. One of those $2$ values is $(i(y), j(y))$, and if $u \in X$, then
$$
(i(u), j(u)) \neq (i(y), j(y)),
$$
so the value of $(i(u), j(u))$ must be the same for all $u \in X$; let us denote it by $(i, j)$. Similarly, the value of $(i(u), j(u))$ is the same for all $u \in Y$, and we denote it by $(i', j')$.
It remains to notice that the vertices $(x_{i' - i}, i, j)$ and $(y_{j' - j}, i', j')$ both belong to $I$ and are adjacent in $H$ (by the third type of adjacency, with $s = i' - i$ and $t = j' - j$), so $I$ is not an independent set.
\end{proof}
Now we can prove Corollary~\ref{corl:NRW_DP}:
\begin{proof}[Proof of Corollary~\ref{corl:NRW_DP}]
First, suppose that $G$ is an $n$-vertex graph with $\chi(G) = r$ that maximizes the difference $\chi_{DP}(G) - \chi(G)$. Adding edges to $G$ if necessary, we may arrange $G$ to be a complete $r$-partite graph. Assuming $2r > n$, at least $2r - n$ of the parts must be of size $1$, i.e., $G$ is of the form $\mathbf{J}(G', 2r-n)$ for some $2(n-r)$-vertex graph $G'$. By Corollary~\ref{corl:Z_n}, we have $\chi_{DP}(G) = \chi(G)$ as long as $2r - n \geq 6(n-r)^2$, which holds for all $r \geq n - (1/\sqrt{6}-o(1))\sqrt{n}$. This establishes the upper bound $r(n) \leq n - \Omega(\sqrt{n})$.
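Spelling out the arithmetic behind the threshold (a reading aid): writing $d \coloneqq n - r$, the condition $2r - n \geq 6(n-r)^2$ becomes $n \geq 6d^2 + 2d$, which holds precisely when $d \leq \frac{\sqrt{6n+1}-1}{6} = \left(\tfrac{1}{\sqrt{6}} - o(1)\right)\sqrt{n}$.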
On the other hand, due to Theorem~\ref{theo:lower}, for each $n$, we can find a graph $G$ with $s$ vertices, where $s \leq (2+o(1))\sqrt{n}$, such that $\chi_{DP}(\mathbf{J}(G, n - s)) > \chi(\mathbf{J}(G, n - s))$. Since $\mathbf{J}(G, n - s)$ is an $n$-vertex graph, we get
$$
r(n) > \chi(\mathbf{J}(G, n - s)) = \chi(G) + n - s \geq n - (2+o(1))\sqrt{n} = n - O(\sqrt{n}). \qedhere
$$
\end{proof}
\paragraph{Acknowledgements.} The authors are grateful to the anonymous referees for their valuable comments and suggestions.
\end{document}
|
\begin{document}
\title[Drinfeld double of $GL_n$ and generalized cluster structures]{Drinfeld double of $GL_n$ and generalized cluster structures}
\author{M. Gekhtman}
\address{Department of Mathematics, University of Notre Dame, Notre Dame,
IN 46556}
\email{[email protected]}
\author{M. Shapiro}
\address{Department of Mathematics, Michigan State University, East Lansing,
MI 48823}
\email{[email protected]}
\author{A. Vainshtein}
\address{Department of Mathematics \& Department of Computer Science, University of Haifa, Haifa,
Mount Carmel 31905, Israel}
\email{[email protected]}
\begin{abstract}
We construct a generalized cluster structure compatible with the Poisson bracket on the Drinfeld double of the standard
Poisson-Lie group $GL_n$ and derive from it a generalized cluster structure on $GL_n$ compatible with the push-forward of the Poisson bracket on the
dual Poisson--Lie group.
\end{abstract}
\subjclass[2010]{53D17,13F60}
\maketitle
\tableofcontents
\section{Introduction}
\label{intro}
The connection between cluster algebras and Poisson structures is documented in \cite{GSVb}. Among the most important examples in which this connection has been utilized are coordinate rings of double Bruhat cells in semisimple Lie groups equipped with (the restriction of) the standard Poisson-Lie structure. In \cite{GSVb}, we applied our technique of constructing a cluster structure compatible with a given Poisson structure in this situation and recovered the cluster structure built in \cite{CAIII}. The standard Poisson-Lie structure is a particular case of a family of Poisson-Lie structures corresponding to quasi-triangular Lie bialgebras. Such structures are associated with solutions to the classical Yang-Baxter equation. Their complete classification was obtained by Belavin and Drinfeld in \cite{BD} using certain combinatorial data defined in terms of the corresponding root system. In \cite{GSVMMJ} we conjectured that any such solution gives rise to a compatible cluster structure on the Lie group and provided several examples supporting this conjecture.
Recently \cite{GSVPNAS,GSVMem}, we constructed the cluster structure corresponding to the Cremmer--Gervais Poisson structure in $GL_n$ for any $n$.
As we established in \cite{GSVMem}, the construction of
cluster structures on a simple Poisson-Lie group ${\mathcal G}$ relies on properties of the Drinfeld double $D({\mathcal G})$. Moreover, in the Cremmer--Gervais case generalized determinantal identities on which cluster transformations are modeled can be extended to identities valid in the double. It is not too far-fetched then to suspect that
there exists a cluster structure on $D({\mathcal G})$ compatible with the Poisson-Lie bracket induced by the Poisson-Lie bracket on ${\mathcal G}$.
However, an interesting phenomenon was observed even in the first nontrivial example of $D(GL_2)$: although we were able to construct a log-canonical regular coordinate chart in terms of which all standard coordinate functions are expressed as (subtraction free) Laurent polynomials, it is not possible to define cluster transformations in such a way that all cluster variables which one expects to be mutable are transformed into regular functions. This problem is resolved, however, if one is allowed to use {\em generalized cluster transformations} previously considered in \cite{GSV1,GSVb} and, more recently, axiomatized in \cite{CheSha}.
In this paper, we describe
such a generalized cluster structure on the Drinfeld double in the case of the standard Poisson-Lie group $GL_n$.
Using this structure, one can recover the standard cluster structure on $GL_n$.
Furthermore, there is a well-known map from the dual Poisson--Lie group $GL_n^*\subset D(GL_n)$ to an open dense subset $GL_n^\dag$ of $GL_n$ (see \cite{r-sts}). The push-forward of the Poisson structure on $GL_n^*$ extends from $GL_n^\dag$ to the whole $GL_n$ (the resulting Poisson structure is not
Poisson--Lie). We define a generalized cluster structure on $GL_n$ compatible with the
latter Poisson structure, using a special seed of the generalized cluster structure on $D(GL_n)$.
Note that the log-canonical basis suggested in \cite{Bra} is different from the one
constructed here and does not lead to a regular cluster structure.
Section~\ref{section2} contains the definition of a generalized cluster structure of geometric type, as well as
properties of such structures in the rings of regular functions of algebraic varieties. This includes
several basic results on compatibility with Poisson brackets and toric actions.
These statements are natural extensions of the corresponding results on ordinary cluster structures, and their proofs are obtained via minor modifications.
Section~\ref{section2} also contains basic information on Poisson--Lie groups and the corresponding Drinfeld doubles.
Section~\ref{section3} contains the main results of the paper. The initial log-canonical basis is described in Section~\ref{logcan}, the corresponding quiver
is presented in Section~\ref{init}, and the generalized exchange relation is given in Section~\ref{exrelsec}. The main results are stated in Section~\ref{main}
and include two theorems: Theorem~\ref{structure} treats the generalized cluster structure on the Drinfeld double, while Theorem~\ref{dualstructure} deals
with the generalized cluster structure on $GL_n$ compatible with the push-forward of the bracket on the dual Poisson--Lie group.
Section~\ref{outline} explains the main steps
in the proof of Theorem~\ref{structure}, and shows how it yields Theorem~\ref{dualstructure}.
Section~\ref{novoeslovo} is of independent interest. Recall that originally upper cluster algebras were defined over the ring of Laurent polynomials in stable variables.
In this section we prove that upper cluster algebras over subrings of this ring retain all properties of usual upper cluster algebras, and under certain
coprimality conditions coincide with the intersection of rings of Laurent polynomials in a finite collection of clusters.
The log-canonicity of the suggested initial basis is proved in Section~\ref{prlogcan}. The proofs rely on invariance properties of the elements
of the basis.
The first part of Theorem~\ref{structure} is proved in Section~\ref{prgcs}. We start by studying a left--right toric action by diagonal matrices, see Section~\ref{torac}.
In Section~\ref{cmpty} we prove the compatibility of our generalized cluster structure with the standard Poisson--Lie structure on the Drinfeld double. To prove the
regularity of the generalized cluster structure, we verify the coprimality conditions in the initial cluster and its neighbors.
The second part of Theorem~\ref{structure} is proved in Section~\ref{upeqreg}.
Section~\ref{aux} contains proofs of various auxiliary statements in matrix theory that are used in other sections.
A short preliminary version of this paper was published in~\cite{GSVcr}.
\section{Preliminaries}\label{section2}
\subsection{Generalized cluster structures of geometric type and compatible Poisson brackets}
\label{SecPrel}
Let $\widetilde{B}=(b_{ij})$ be an $N\times(N+M)$ integer matrix
whose principal part $B$ is skew-symmetrizable (recall that the principal part of a rectangular matrix
is its maximal leading square submatrix).
Let ${\mathbb F}$ be the field of rational functions in $N+M$ independent variables
with rational coefficients. There are $M$ distinguished variables;
they are denoted
$x_{N+1},\dots,x_{N+M}$ and called {\em stable\/}. A stable variable $x_{j}$ is called {\em isolated\/} if $b_{ij}=0$
for $1\le i\le N$. The {\it coefficient group\/} is a free multiplicative abelian group of Laurent monomials in stable variables,
and its integer group ring is $\bar{\mathbb A}={\mathbb Z}[x_{N+1}^{\pm1},\dots,x_{N+M}^{\pm1}]$ (we write
$x^{\pm1}$ instead of $x,x^{-1}$).
For each $i$, $1\le i\le N$, fix a factor $d_i$ of $\gcd\{b_{ij}: 1\le j\le N\}$.
A {\em seed\/} (of {\em geometric type\/}) in ${\mathbb F}$ is a triple
$\Sigma=({\bf x},\widetilde{B},{\mathcal P})$,
where ${\bf x}=(x_1,\dots,x_N)$ is a transcendence basis of ${\mathbb F}$ over the field of
fractions of $\bar{\mathbb A}$ and ${\mathcal P}$ is a set of $N$ {\em strings}. The $i$th string is a collection of
monomials
$p_{ir}\in{\mathbb A}={\mathbb Z}[x_{N+1},\dots,x_{N+M}]$, $0\le r\le d_i$, such that
$p_{i0}=p_{id_i}=1$;
it is called {\em trivial\/} if $d_i=1$, and hence both elements of the string are equal to one.
Matrices $B$ and $\widetilde{B}$ are called the
{\it exchange matrix\/} and the {\it extended exchange matrix}, respectively. The $N$-tuple ${\bf x}$ is called a {\em cluster\/}, and its elements
$x_1,\dots,x_N$ are called {\em cluster variables\/}. The monomials $p_{ir}$ are called {\em exchange coefficients}.
We say that
$\widetilde{{\bf x}}=(x_1,\dots,x_{N+M})$ is an {\em extended
cluster\/}, and $\widetilde\Sigma=(\widetilde{\bf x},\widetilde{B},{\mathcal P})$ is an {\em extended seed}.
Given a seed as above, the {\em adjacent cluster\/} in direction $k$, $1\le k\le N$,
is defined by
${\bf x}'=({\bf x}\setminus\{x_k\})\cup\{x'_k\}$,
where the new cluster variable $x'_k$ is given by the {\em generalized exchange relation}
\begin{equation}\label{exchange}
x_kx'_k=\sum_{r=0}^{d_k}p_{kr}u_{k;>}^r v_{k;>}^{[r]}u_{k;<}^{d_k-r}v_{k;<}^{[d_k-r]}
\end{equation}
with {\em cluster $\tau$-monomials\/} $u_{k;>}$ and $u_{k;<}$, $1\le k\le N$, defined by
\begin{equation*}
\begin{aligned}
u_{k;>}&=\prod\{x_i^{b_{ki}/d_k}: 1\le i\le N, b_{ki}>0\}, \\
u_{k;<}&=\prod\{x_i^{-b_{ki}/d_k}: 1\le i\le N, b_{ki}<0\},
\end{aligned}
\end{equation*}
and {\em stable $\tau$-monomials\/}
$v_{k;>}^{[r]}$ and $v_{k;<}^{[r]}$, $1\le k\le N$, $0\le r\le d_k$, defined by
\begin{equation*}
\begin{aligned}
v_{k;>}^{[r]}&=\prod\{x_i^{\lfloor rb_{ki}/d_k\rfloor}: N+1\le i\le N+M, b_{ki}>0\},\\
v_{k;<}^{[r]}&=\prod\{x_i^{\lfloor -rb_{ki}/d_k\rfloor}: N+1\le i\le N+M, b_{ki}<0\};
\end{aligned}
\end{equation*}
here, as usual, the product over the empty set is assumed to be
equal to~$1$. In what follows we write $v_{k;>}$ instead of $v_{k;>}^{[d_k]}$ and $v_{k;<}$ instead of $v_{k;<}^{[d_k]}$.
The right hand side of~\eqref{exchange} is called a {\it generalized exchange polynomial}.
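As a toy illustration of~\eqref{exchange} (the data below are chosen purely for illustration): take $N=2$, $M=1$, $d_1=2$, $d_2=1$ and
$$\widetilde{B}=\begin{pmatrix} 0 & 2 & 1\\ -2 & 0 & 0\end{pmatrix}.$$
Then $u_{1;>}=x_2$, $u_{1;<}=1$, $v_{1;>}^{[0]}=v_{1;>}^{[1]}=1$, $v_{1;>}^{[2]}=x_3$, and the exchange in direction $1$ reads
$$x_1x_1'=p_{10}+p_{11}x_2+p_{12}x_2^2x_3=1+p_{11}x_2+x_2^2x_3,$$
a three-term relation whose only nontrivial exchange coefficient is the monomial $p_{11}\in{\mathbb Z}[x_3]$.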
We say that $\widetilde{B}'$ is
obtained from $\widetilde{B}$ by a {\em matrix mutation\/} in direction $k$ if
\begin{equation}
\label{MatrixMutation}
b'_{ij}=\begin{cases}
-b_{ij}, & \text{if $i=k$ or $j=k$;}\\
b_{ij}+\displaystyle\frac{|b_{ik}|b_{kj}+b_{ik}|b_{kj}|}2,
&\text{otherwise.}
\end{cases}
\end{equation}
Note that $b_{ij}=0$ for $1\le i\le N$ implies
$b'_{ij}=0$ for $1\le i\le N$; in other words, the set of isolated variables does not depend on the cluster.
Moreover, $\gcd\{b_{ij}:1\le j\le N\}=\gcd\{b'_{ij}:1\le j\le N\}$, and hence the $N$-tuple $(d_1,\dots,d_N)$
retains its property.
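A toy instance of~\eqref{MatrixMutation} (for illustration only): for the skew-symmetric $3\times 3$ matrix $B$ with $b_{12}=b_{23}=1$ and $b_{13}=0$ (the quiver $1\to 2\to 3$), mutation in direction $k=2$ gives $b'_{12}=-1$, $b'_{23}=-1$ and $b'_{13}=b_{13}+\frac{|b_{12}|b_{23}+b_{12}|b_{23}|}{2}=1$, so the mutated quiver acquires an arrow $1\to 3$ while both original arrows are reversed.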
The {\em exchange coefficient mutation\/} in direction $k$ is given by
\begin{equation}
\label{CoefMutation}
p'_{ir}=\begin{cases}
p_{i,d_i-r}, & \text{if $i=k$;}\\
p_{ir}, &\text{otherwise.}
\end{cases}
\end{equation}
\begin{remark}
The definition above is adjusted from an earlier definition of generalized
cluster structures given in~\cite{CheSha}. In that case a somewhat
less involved construction was modeled on examples appearing in a study
of triangulations of surfaces with orbifold points, and coefficients of
exchange polynomials were assumed to be elements of an arbitrary
tropical semifield. In contrast, our main example forces us to consider a situation in which
the coefficients have to be realized as regular functions on an
underlying variety which results in a complicated definition above.
More exactly, if one defines coefficients $p_{i;l}$ in~\cite{CheSha} as $p_{il}v_{i;>}^{[l]}v_{i;<}^{[d_i-l]}$, then our
exchange coefficient mutation rule~\eqref{CoefMutation} becomes a specialization of the general rule (2.5) in~\cite{CheSha}.
As we will explain in future publications, many examples of this sort
arise in an investigation of exotic cluster structures on Poisson--Lie groups.
\end{remark}
Consider Laurent monomials in stable variables
\begin{equation*}
q_{ir}=\frac{v_{i;>}^rv_{i;<}^{d_i-r}}{\left(v_{i;>}^{[r]}v_{i;<}^{[d_i-r]}\right)^{d_i}}, \qquad 0\le r\le d_i, 1\le i\le N.
\end{equation*}
By~\eqref{MatrixMutation}, $b_{ij}=b'_{ij}\bmod d_i$
for $1\le j\le N+M$. Consequently, the mutation rule for $q_{ir}$ is the same as~\eqref{CoefMutation}. In what follows we will
use Laurent monomials $\hat p_{ir}=p_{ir}^{d_i}/q_{ir}$. One can rewrite the generalized exchange relation~\eqref{exchange} in
terms of the $\hat p_{ir}$ as follows:
\begin{equation}\label{modexchange}
x_kx'_k=\sum_{r=0}^{d_k}\left(\hat p_{kr}v_{k;>}^rv_{k;<}^{d_k-r}\right)^{1/d_k} u_{k;>}^r u_{k;<}^{d_k-r};
\end{equation}
note that $\left(\hat p_{kr}v_{k;>}^rv_{k;<}^{d_k-r}\right)^{1/d_k}$ is a monomial in ${\mathbb A}$ for $0\le r\le d_k$.
In certain cases, it is convenient to represent the data $(\widetilde{B}, d_1,\dots,d_N)$ by a quiver.
Assume that the {\it modified extended exchange matrix\/} $\widehat B$ obtained from
$\widetilde{B}$ by replacing
each $b_{ij}$ by $b_{ij}/d_i$ for $1\le j\le N$ and retaining it for $N+1\le j\le N+M$ has a skew-symmetric
principal part;
we say that the corresponding quiver $Q$ with vertex multiplicities $d_1,\dots,d_N$
represents $(\widetilde{B}, d_1,\dots,d_N)$ and write
$\Sigma=({\bf x},Q,{\mathcal P})$. Vertices that correspond to cluster variables are called {\it mutable},
those that correspond to stable variables are called {\it frozen}.
A mutable vertex with $d_i\ne 1$ is called {\em special\/}, and $d_i$ is said to be its {\em multiplicity}.
A frozen vertex corresponding to an isolated variable is called {\em isolated}.
Given a seed $\Sigma=({\bf x},\widetilde{B},{\mathcal P})$, we say that a seed
$\Sigma'=({\bf x}',\widetilde{B}',{\mathcal P}')$ is {\em adjacent\/} to $\Sigma$ (in direction
$k$) if ${\bf x}'$, $\widetilde{B}'$ and ${\mathcal P}'$ are as above.
Two seeds are {\em mutation equivalent\/} if they can
be connected by a sequence of pairwise adjacent seeds.
The set of all seeds mutation equivalent to $\Sigma$ is called the {\it generalized cluster structure\/}
(of geometric type) in ${\mathbb F}$ associated with $\Sigma$ and denoted by ${\mathcal{GC}}(\Sigma)$; in what follows,
we usually write ${\mathcal{GC}}(\widetilde{B},{\mathcal P})$, or even just ${\mathcal{GC}}$ instead. Clearly, by taking $d_i=1$ for $1\le i\le N$, and hence making all strings trivial, we get an ordinary cluster structure. Indeed, in this case the right hand side of the generalized exchange relation \eqref{exchange} contains two terms; furthermore,
$u_{k;>}^0=u_{k;<}^0=v_{k;>}^{[0]}=v_{k;<}^{[0]}=1$ and
\begin{equation*}
\begin{aligned}
u_{k;>}^1 v_{k;>}^{[1]}&=\prod \{x_i^{b_{ki}} : 1\le i\le N+M, b_{ki}>0\},\\
u_{k;<}^1 v_{k;<}^{[1]}&=\prod \{x_i^{-b_{ki}} : 1\le i\le N+M, b_{ki}<0\}.
\end{aligned}
\end{equation*}
Consequently, in this case \eqref{exchange} coincides with the ordinary exchange relation, while the exchange coefficient mutation \eqref{CoefMutation} becomes trivial.
Similarly to the case of
ordinary cluster structures, we will associate to ${\mathcal{GC}}(\widetilde{B},{\mathcal P})$ a labeled $N$-regular tree ${\mathbb T}_N$ whose vertices correspond to seeds, and edges
correspond to the adjacency of seeds.
Fix a ground ring $\widehat{\mathbb A}$ such that ${\mathbb A}\subseteq\widehat{\mathbb A}\subseteq\bar{\mathbb A}$.
We associate with ${\mathcal{GC}}(\widetilde{B},{\mathcal P})$ two algebras of rank $N$ over $\widehat{\mathbb A}$:
the {\em generalized cluster algebra\/} ${\mathcal A}={\mathcal A}({\mathcal{GC}})={\mathcal A}(\widetilde{B},{\mathcal P})$, which
is the $\widehat{\mathbb A}$-subalgebra of ${\mathbb F}$ generated by all cluster
variables from all seeds in ${\mathcal{GC}}(\widetilde{B},{\mathcal P})$, and the {\it generalized upper cluster algebra\/}
${\mathcal U}={\mathcal U}({\mathcal{GC}})={\mathcal U}(\widetilde{B},{\mathcal P})$, which is the intersection of the rings of Laurent polynomials over $\widehat{\mathbb A}$ in cluster variables
taken over all seeds in ${\mathcal{GC}}(\widetilde{B},{\mathcal P})$. The generalized {\it Laurent phenomenon\/} \cite{CheSha}
claims the inclusion ${\mathcal A}({\mathcal{GC}})\subseteq{\mathcal U}({\mathcal{GC}})$.
\begin{remark} Note that our definition of the generalized cluster algebra is slightly more general than the one used in \cite{CheSha}. However, the proof in \cite{CheSha} utilizes the Caterpillar Lemma
of Fomin and Zelevinsky (see \cite{FoZe}) and follows their standard pattern of reasoning; it can be repeated {\it verbatim\/} in our case as well.
{\bf e}nd{remark}
Let $V$ be a quasi-affine variety over ${{\bf m}athbb C}$, ${{\bf m}athbb C}(V)$ be the field of rational functions on $V$, and
${{\bf m}athcal O}(V)$ be the ring of regular functions on $V$. Let ${{\bf m}athcal G}CC$ be a generalized cluster structure in ${{\bf m}athbb F}F$ as above.
Assume that $\{f_1,{\bf m}athbf dots,f_{N+M}\}$ is a transcendence basis of ${{\bf m}athbb C}(V)$. Then the map $\theta: x_i{\bf m}apsto f_i$,
$1\le i\le N+M$, can be extended to a field isomorphism $\theta: {{\bf m}athbb F}F_{{\bf m}athbb C}\to {{\bf m}athbb C}(V)$,
where ${{\bf m}athbb F}F_{{\bf m}athbb C}={{\bf m}athbb F}F\otimes{{\bf m}athbb C}$ is obtained from ${{\bf m}athbb F}F$ by extension of scalars.
The pair $({{\bf m}athcal G}CC,\theta)$ is called a generalized cluster structure {\it in\/}
${{\bf m}athbb C}(V)$, $\{f_1,{\bf m}athbf dots,f_{N+M}\}$ is called an extended cluster in
$({{\bf m}athcal G}CC,\theta)$.
Sometimes we omit direct indication of $\theta$ and say that ${{\bf m}athcal G}CC$ is a generalized cluster structure {{\bf e}m on\/} $V$.
A generalized cluster structure $({{\bf m}athcal G}CC,\theta)$ is called {\it regular\/}
if $\theta(x)$ is a regular function for any cluster variable $x$.
The two algebras defined above have their counterparts in ${{\bf m}athbb F}F_{{\bf m}athbb C}$ obtained by extension of scalars; they are
denoted ${{\bf m}athcal A}_{{\bf m}athbb C}$ and ${{\bf m}athcal U}U_{{\bf m}athbb C}$. As it is explained in~\cite[Section 3.4]{GSVb}, the natural choice of
the ground ring for ${{\bf m}athcal A}_{{\bf m}athbb C}$ and ${{\bf m}athcal U}U_{{\bf m}athbb C}$
is
{\bf m}athfrak begin{equation}\label{hata}
{\bf w}idehat{{\bf m}athcal A}A={{\bf m}athbb C}[x_{N+1}^{{\bf p}m1},{\bf m}athbf dots,x_{N+M'}^{{\bf p}m1}, x_{N+M'+1},{\bf m}athbf dots,x_{N+M}],
{\bf e}nd{equation}
where $\theta(x_{N+i})$ does not vanish on $V$ if and only if $1\le i\le M'$.
If, moreover, the field isomorphism $\theta$ can be restricted to an isomorphism of
${{\bf m}athcal A}_{{\bf m}athbb C}$ (or ${{\bf m}athcal U}U_{{\bf m}athbb C}$) and ${{\bf m}athcal O}(V)$, we say that
${{\bf m}athcal A}_{{\bf m}athbb C}$ (or ${{\bf m}athcal U}U_{{\bf m}athbb C}$) is {\it naturally isomorphic\/} to ${{\bf m}athcal O}(V)$.
The following statement is a direct corollary of the natural extension of the Starfish Lemma (Proposition~3.6 in \cite{FoPy})
to the case of generalized cluster structures.
The proof of this extension literally follows the proof of the Starfish Lemma.
{\bf m}athfrak begin{proposition}\label{regcoin}
Let $V$ be a Zariski open subset in ${{\bf m}athbb C}^{N+M}$ and ${{\bf m}athcal G}CC=({{\bf m}athcal G}CC({\bf w}B,{{\bf m}athcal P}),\theta)$ be a generalized cluster structure in ${{\bf m}athbb C}(V)$
with $N$ cluster and $M$ stable variables such that
{{\bf r}m(i)} there exists an extended cluster ${\bf w}x=(x_1,{\bf m}athbf dots,x_{N+M})$ in ${{\bf m}athcal G}CC$ such that $\theta(x_i)$ is
regular on $V$ for $1\le i\le N+M$, and $\theta(x_i)$ and $\theta(x_j)$ are coprime in ${{\bf m}athcal O}(V)$ for $1\le i\mathfrak ne j\le N+M$;
{{\bf r}m(ii)} for any cluster variable $x_k'$, $1\le k\le N$, obtained via the generalized exchange relation~{\bf e}qref{exchange}
applied to ${\bf w}x$, $\theta(x_k')$ is regular on $V$ and coprime in ${{\bf m}athcal O}(V)$ with $\theta(x_k)$.
\mathfrak noindent Then ${{\bf m}athcal G}CC$ is a regular generalized cluster structure. If additionally
{{\bf r}m(iii)} each regular function on $V$ belongs to $\theta({{\bf m}athcal U}U_{{\bf m}athbb C}({{\bf m}athcal G}CC))$,
\mathfrak noindent then ${{\bf m}athcal U}U_{{\bf m}athbb C}({{\bf m}athcal G}CC)$ is naturally isomorphic to ${{\bf m}athcal O}(V)$.
{\bf e}nd{proposition}
{\bf m}athfrak begin{remark} Conditions of the Starfish Lemma in our case are satisfied since ${{\bf m}athcal O}(V)$ is a unique factorization domain.
{\bf e}nd{remark}
Let ${{\bf m}athcal P}oi$ be a Poisson bracket on the ambient field ${{\bf m}athbb F}FF$, and ${{\bf m}athcal G}CC$ be a generalized cluster structure in ${{\bf m}athbb F}FF$.
We say that the bracket and the generalized cluster structure are {{\bf e}m compatible\/} if any extended
cluster ${\bf w}idetilde{{\bf x}}=(x_1,{\bf m}athbf dots,x_{N+M})$ is {{\bf e}m log-canonical\/} with respect to ${{\bf m}athcal P}oi$, that is,
$\{x_i,x_j\}=\omega_{ij} x_ix_j$,
where $\omega_{ij}\in{{\bf m}athbb Z}$ are constants for all $i,j$, $1\le i,j\le N+M$.
Let ${{\bf m}athcal O}mega=(\omega_{ij})_{i,j=1}^{N+M}$. The following proposition can be considered as a natural extension of Proposition~2.3 in \cite{GSVMem}.
The proof is similar to the proof of Theorem~4.5 in \cite{GSVb}.
{\bf m}athfrak begin{proposition}\label{compatchar}
Assume that ${\bf w}B{{\bf m}athcal O}mega=[{{\bf m}athfrak d}elta\;\; 0]$ for a non-degenerate diagonal matrix ${{\bf m}athfrak d}elta$, and all Laurent polynomials ${\bf m}athfrak hat p_{ir}$ are Casimirs of the bracket ${{\bf m}athcal P}oi$.
Then ${\bf r}ank{\bf w}B=N$, and the bracket ${{\bf m}athcal P}oi$ is compatible with ${{\bf m}athcal G}CC({\bf w}B,{{\bf m}athcal P})$.
{\bf e}nd{proposition}
The notion of compatibility extends to Poisson brackets on ${\mathbb F}_{\mathbb C}$ without any changes.

Fix an arbitrary extended cluster
$\widetilde{\mathbf x}=(x_1,\dots,x_{N+M})$ and define a {\it local toric action\/} of rank $s$ as a map
$\mathcal{T}^W_{\mathbf q}:{\mathbb F}_{\mathbb C}\to
{\mathbb F}_{\mathbb C}$ given on the generators of ${\mathbb F}_{\mathbb C}={\mathbb C}(x_1,\dots,x_{N+M})$ by the formula
\begin{equation}
\mathcal{T}^W_{\mathbf q}(\widetilde{\mathbf x})=\left ( x_i \prod_{\alpha=1}^s q_\alpha^{w_{i\alpha}}\right )_{i=1}^{N+M},\qquad
{\mathbf q}=(q_1,\dots,q_s)\in ({\mathbb C}^*)^s,
\label{toricact}
\end{equation}
where $W=(w_{i\alpha})$ is an integer $(N+M)\times s$ {\it weight matrix\/} of full rank, and extended naturally to the whole ${\mathbb F}_{\mathbb C}$.
Let $\widetilde{\mathbf x}'$ be another extended cluster in $\mathcal{GC}$; then the corresponding local toric action defined by the weight matrix $W'$
is {\it compatible\/} with the local toric action \eqref{toricact} if it commutes with the sequence of (generalized) cluster transformations that takes $\widetilde{\mathbf x}$ to $\widetilde{\mathbf x}'$.
If local toric actions at all clusters are compatible, they define a {\it global toric action\/} $\mathcal{T}_{\mathbf q}$ on ${\mathbb F}_{\mathbb C}$ called a
$\mathcal{GC}$-extension of the local toric action \eqref{toricact}.
The following proposition can be viewed as a natural extension of Lemma~2.3 in \cite{GSV1} and is proved in a similar way.

\begin{proposition}\label{globact}
The local toric action~\eqref{toricact} is uniquely $\mathcal{GC}$-extendable
to a global action of $({\mathbb C}^*)^s$ if $\widetilde B W = 0$ and all Casimirs $\hat p_{ir}$
are invariant under~\eqref{toricact}.
\end{proposition}
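In the example following Proposition~\ref{compatchar}, the weight matrices allowed by Proposition~\ref{globact} have all columns proportional to $(0,0,1)^T$, so the corresponding actions only rescale the stable variable $x_3$; such scalings clearly commute with the exchange relations at $x_1$ and $x_2$, in which $x_3$ does not appear.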
\subsection{Standard Poisson--Lie group ${\mathcal G}$ and its Drinfeld double}
\label{double}
A reductive complex Lie group ${\mathcal G}$ equipped with a Poisson bracket $\{\cdot,\cdot\}$ is called a {\em Poisson--Lie group\/}
if the multiplication map
${\mathcal G}\times {\mathcal G} \ni (X,Y) \mapsto XY \in {\mathcal G}$
is Poisson. Denote by
$\langle \ , \ \rangle$ an invariant nondegenerate form on
$\mathfrak g$, and by $\nabla^R$, $\nabla^L$ the right and
left gradients of functions on ${\mathcal G}$ with respect to this form defined by
\begin{equation*}
\left\langle \nabla^R f(X),\xi\right\rangle=\left.\frac d{dt}\right|_{t=0}f(Xe^{t\xi}), \quad
\left\langle \nabla^L f(X),\xi\right\rangle=\left.\frac d{dt}\right|_{t=0}f(e^{t\xi}X)
\end{equation*}
for any $\xi\in\mathfrak g$, $X\in{\mathcal G}$.
Let $\pi_{>0}, \pi_{<0}$ be projections of
$\mathfrak g$ onto subalgebras spanned by positive and negative roots, $\pi_0$ be the projection onto the Cartan
subalgebra $\mathfrak h$, and let $R=\pi_{>0} - \pi_{<0}$.
The {\em standard Poisson--Lie bracket\/} $\{\cdot,\cdot\}_r$ on ${\mathcal G}$ can be written as
\begin{equation}
\{f_1,f_2\}_r = \frac 1 2 \left( \left\langle R(\nabla^L f_1), \nabla^L f_2 \right\rangle - \left\langle R(\nabla^R f_1), \nabla^R f_2 \right\rangle \right).
\label{sklyabra}
\end{equation}
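As a quick illustration of these definitions in the case ${\mathcal G}=GL_n$ with the trace form $\langle A,B\rangle=\operatorname{tr}(AB)$, take $f_1=\det$: since $\frac d{dt}\big|_{t=0}\det(Xe^{t\xi})=\frac d{dt}\big|_{t=0}\det(e^{t\xi}X)=\det X\operatorname{tr}\xi$, both gradients of $\det$ equal $\det X\cdot\mathbf 1$, which is annihilated by $R$; hence $\det X$ is a Casimir of $\{\cdot,\cdot\}_r$.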
The standard Poisson--Lie structure is a particular case of Poisson--Lie structures corresponding to
quasitriangular Lie bialgebras. For a detailed exposition of these structures see, e.~g.,
\cite[Ch.~1]{CP}, \cite{r-sts} and \cite{Ya}.

Following \cite{r-sts}, let us recall the construction of {\em the Drinfeld double}. The double of $\mathfrak g$ is
$D(\mathfrak g)=\mathfrak g \oplus \mathfrak g$ equipped with an invariant nondegenerate bilinear form
$\langle\langle (\xi,\eta), (\xi',\eta')\rangle\rangle = \langle \xi, \xi'\rangle - \langle \eta, \eta'\rangle$.
Define subalgebras ${\mathfrak d}_\pm$ of $D(\mathfrak g)$ by
${\mathfrak d}_+=\{( \xi,\xi) : \xi \in\mathfrak g\}$ and ${\mathfrak d}_-=\{ (R_+(\xi),R_-(\xi)) : \xi \in\mathfrak g\}$,
where $R_\pm\in \operatorname{End}\mathfrak g$ is given by $R_\pm=\frac{1}{2} ( R \pm \operatorname{Id})$.
The operator $R_D= \pi_{{\mathfrak d}_+} - \pi_{{\mathfrak d}_-}$ can be used to define
a Poisson--Lie structure on $D({\mathcal G})={\mathcal G}\times {\mathcal G}$, the double of the group ${\mathcal G}$, via
\begin{equation}
\{f_1,f_2\}_D = \frac{1}{2}\left (\left\langle\left\langle R_D(\boldsymbol\nabla^L f_1), \boldsymbol\nabla^L f_2 \right\rangle\right\rangle
- \left\langle\left\langle R_D(\boldsymbol\nabla^R f_1), \boldsymbol\nabla^R f_2 \right\rangle\right\rangle \right),
\label{sklyadouble}
\end{equation}
where $\boldsymbol\nabla^R$ and $\boldsymbol\nabla^L$ are right and left gradients with respect to $\langle\langle \cdot ,\cdot \rangle\rangle$.
The diagonal subgroup $\{ (X,X)\ : \ X\in {\mathcal G}\}$ is a Poisson--Lie subgroup of $D({\mathcal G})$ (whose Lie algebra is ${\mathfrak d}_+$) naturally isomorphic
to $({\mathcal G},\{\cdot,\cdot\}_r)$.
The group ${\mathcal G}^*$ whose Lie algebra is ${\mathfrak d}_-$ is a Poisson--Lie subgroup of $D({\mathcal G})$ called {\em the dual Poisson--Lie group of ${\mathcal G}$}.
The map $D({\mathcal G}) \to {\mathcal G}$ given by $(X,Y) \mapsto U=X^{-1} Y$ induces another Poisson bracket on ${\mathcal G}$, see~\cite{r-sts}; we denote this bracket $\{\cdot,\cdot\}_*$.
The image of the restriction of this map to ${\mathcal G}^*$ is denoted ${\mathcal G}^\dagger$. Symplectic leaves on
$({\mathcal G},\{\cdot,\cdot\}_*)$ were studied in \cite{EvLu}.

In this paper we only deal with the case of ${\mathcal G}=GL_n$. In that case ${\mathcal G}^\dagger$ is the non-vanishing locus of trailing principal minors $\det U_{[i,n]}^{[i,n]}$
(here and in what follows we write $[a,b]$ to denote the set $\{i\in {\mathbb Z} : a\le i\le b\}$).
The bracket \eqref{sklyadouble} takes the form
\begin{equation}\label{sklyadoubleGL}
\begin{split}
\{f_1,f_2\}_D = {}&\langle R_+(E_L f_1), E_L f_2\rangle - \langle R_+(E_R f_1), E_R f_2\rangle\\
&+ \langle X\nabla_X f_1, Y\nabla_Y f_2\rangle - \langle\nabla_X f_1\cdot X, \nabla_Y f_2 \cdot Y\rangle,
\end{split}
\end{equation}
where $\nabla_X f=\left(\frac{\partial f}{\partial x_{ji}}\right)_{i,j=1}^n$,
$\nabla_Y f=\left(\frac{\partial f}{\partial y_{ji}}\right)_{i,j=1}^n$,
and
\begin{equation}\label{erel}
E_R = X \nabla_X + Y\nabla_Y, \quad E_L = \nabla_X X+ \nabla_Y Y.
\end{equation}
So, \eqref{sklyadoubleGL} can be rewritten as
\begin{equation}\label{sklyadoubleGL1}
\begin{split}
\{f_1,f_2\}_D = {}&\left\langle R_+(E_L f_1), E_L f_2\right\rangle - \left\langle R_+(E_R f_1), E_R f_2\right\rangle\\
&+ \left\langle E_R f_1, Y\nabla_Y f_2\right\rangle - \left\langle E_L f_1, \nabla_Y f_2 \cdot Y\right\rangle.
\end{split}
\end{equation}
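Indeed, substituting $X\nabla_X f_1=E_Rf_1-Y\nabla_Yf_1$ and $\nabla_Xf_1\cdot X=E_Lf_1-\nabla_Yf_1\cdot Y$ into the second line of \eqref{sklyadoubleGL} and noting that (with $\langle A,B\rangle=\operatorname{tr}(AB)$) $\langle Y\nabla_Yf_1,Y\nabla_Yf_2\rangle=\operatorname{tr}(\nabla_Yf_1\,Y\nabla_Yf_2\,Y)=\langle\nabla_Yf_1\cdot Y,\nabla_Yf_2\cdot Y\rangle$, one sees that the two extra terms cancel, which yields \eqref{sklyadoubleGL1}.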
Further, if $\varphi$ is a function of $U=X^{-1}Y$ then $E_L\varphi=[\nabla\varphi,U]$ and $E_R\varphi=0$, and hence for an arbitrary function $f$ on $D(GL_n)$ one has
\begin{equation}\label{sklyadoubleGL2}
\{\varphi,f\}_D = \left\langle R_+([\nabla\varphi,U]), E_L f\right\rangle - \left\langle [\nabla\varphi,U], \nabla_Y f \cdot Y\right\rangle.
\end{equation}
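To verify the latter claim, note that $\delta U=-X^{-1}\delta X\,U+X^{-1}\delta Y$, so that, with the conventions above, $\nabla_X\varphi=-U\nabla\varphi\, X^{-1}$ and $\nabla_Y\varphi=\nabla\varphi\, X^{-1}$. Consequently, $E_R\varphi=X\nabla_X\varphi+Y\nabla_Y\varphi=-Y\nabla\varphi\,X^{-1}+Y\nabla\varphi\,X^{-1}=0$ and $E_L\varphi=\nabla_X\varphi\cdot X+\nabla_Y\varphi\cdot Y=-U\nabla\varphi+\nabla\varphi\,U=[\nabla\varphi,U]$; relation \eqref{sklyadoubleGL2} is then \eqref{sklyadoubleGL1} with $f_1=\varphi$ and $f_2=f$.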
\section{Main results}\label{section3}
\subsection{Log-canonical basis}
\label{logcan}
Let $(X,Y)$ be a point in the double $D(GL_n)$. For $k,l\ge 1$, $k+l\le n-1$ define a $(k+l)\times(k+l)$ matrix
$$
F_{kl}=F_{kl}(X,Y)=\left[\begin{array}{cc}X^{[n-k+1,n]} & Y^{[n-l+1,n]}\end{array}\right]_{[n-k-l+1,n]}.
$$
For $1\le j\le i\le n$ define an $(n-i+1)\times (n-i+1)$ matrix
$$
G_{ij}=G_{ij}(X)=X_{[i,n]}^{[j,j+n-i]}.
$$
For $1\le i\le j\le n$ define an $(n-j+1)\times (n-j+1)$ matrix
$$
H_{ij}=H_{ij}(Y)=Y_{[i,i+n-j]}^{[j,n]}.
$$
For $k,l\ge 1$, $k+l\le n$ define an $n\times n$ matrix
\begin{equation*}\label{phidef}
\Phi_{kl}=\Phi_{kl}(X,Y)=
\left[\begin{array}{ccccc}(U^0)^{[n-k+1,n]}& U^{[n-l+1,n]} & (U^2)^{[n]} & \dots & (U^{n-k-l+1})^{[n]}\end{array}\right],
\end{equation*}
where $U=X^{-1}Y$.
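For instance, for $n=3$ these definitions give
\[
F_{11}=\begin{pmatrix} x_{23}&y_{23}\\ x_{33}&y_{33}\end{pmatrix},\qquad
G_{21}=\begin{pmatrix} x_{21}&x_{22}\\ x_{31}&x_{32}\end{pmatrix},\qquad
H_{12}=\begin{pmatrix} y_{12}&y_{13}\\ y_{22}&y_{23}\end{pmatrix},
\]
while $\Phi_{11}=\left[\begin{array}{ccc}(U^0)^{[3]}& U^{[3]} & (U^2)^{[3]}\end{array}\right]$ is the $3\times 3$ matrix whose columns are the last columns of $\mathbf 1$, $U$ and $U^2$.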
\begin{remark}
\label{identify}
Note that the definition of $F_{kl}$ can be extended to the case $k+l=n$, yielding $F_{n-l,l}=X\Phi_{n-l,l}$.
One can also identify $F_{0 l}$ with $H_{n-l+1,n-l+1}$ and $F_{k 0}$ with $G_{n-k+1,n-k+1}$.
Finally, it will be convenient, for technical reasons, to identify $G_{i,i+1}$ with $F_{n-i,1}$.
\end{remark}

Denote
\begin{gather*}
f_{kl}=\det F_{kl},\quad g_{ij}=\det G_{ij},\quad h_{ij}=\det H_{ij},\\
\varphi_{kl}=s_{kl}(\det X)^{n-k-l+1}\det\Phi_{kl},
\end{gather*}
$2n^2-n+1$ functions in total.
Here $s_{kl}$ is a sign defined as follows:
\[
s_{kl}=\begin{cases}
(-1)^{k(l+1)} &\text{for $n$ even},\\
(-1)^{(n-1)/2+k(k-1)/2+l(l-1)/2} &\text{for $n$ odd}.
\end{cases}
\]
It is periodic in each of $k$ and $l$ with period~4 for
$n$ odd and period~2 for $n$ even; $s_{n-l,l}=1$; $s_{n-l-1,l}=(-1)^l$ for $n$ odd and $s_{n-l-1,l}=(-1)^{l+1}$ for $n$ even;
$s_{n-l-2,l}=-1$ for $n$ odd; $s_{n-l-3,l}=(-1)^{l+1}$ for $n$ odd.
Note that the pre-factor in the definition of $\varphi_{kl}$ is needed to obtain a function that is regular in the matrix entries of $X$ and $Y$.

Consider the polynomial
\[
\det(X+\lambda Y)=\sum_{i=0}^n \lambda^i s_ic_i(X,Y),
\]
where $s_i=(-1)^{i(n-1)}$.
Note that $c_0(X,Y)=\det X=g_{11}$ and $c_n(X,Y)=\det Y=h_{11}$.
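For example, for $n=2$ one has $s_0=s_2=1$ and $s_1=-1$, so that $c_1(X,Y)=-(x_{11}y_{22}+x_{22}y_{11}-x_{12}y_{21}-x_{21}y_{12})$, while $c_0=\det X$ and $c_2=\det Y$.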
\begin{theorem}
\label{basis}
The family of functions $F_n=\{g_{ij}, h_{ij}, f_{kl}, \varphi_{kl}, c_1,\ldots, c_{n-1} \}$ forms a log-canonical coordinate system with respect to the Poisson--Lie bracket \eqref{sklyadoubleGL1} on $D(GL_n)$.
\end{theorem}

\subsection{Initial quiver}
\label{init}
The modified extended exchange matrix $\widehat{B}$ has a skew-symmetric principal part, and hence
can be represented by a quiver.
The quiver $Q_n$ contains $2n^2 - n +1$ vertices labeled by the functions $g_{ij}, h_{ij}, f_{kl}, \varphi_{kl}$ in the log-canonical basis $F_n$.
The functions $c_1, \ldots, c_{n-1}$ correspond to isolated vertices; they are not connected to any of the other vertices and will not be shown in the figures.
The vertex $\varphi_{11}$ is the only special vertex, and its multiplicity equals $n$.
The vertices $g_{i1}$, $1\le i\le n$, and $h_{1j}$, $1\le j\le n$, are frozen, so $N=2n^2-3n+1$ and $M=3n-1$. Below we describe $Q_n$ assuming that $n>2$.
Vertex $\varphi_{kl}$, $k,l\ne1$, $k+l<n$, has degree~6. The edges pointing from $\varphi_{kl}$ are $\varphi_{kl}\to\varphi_{k+1,l}$,
$\varphi_{kl}\to \varphi_{k,l-1}$, and $\varphi_{kl}\to \varphi_{k-1,l+1}$; the edges pointing towards $\varphi_{kl}$ are $\varphi_{k,l+1}\to\varphi_{kl}$,
$\varphi_{k+1,l-1}\to\varphi_{kl}$, and $\varphi_{k-1,l}\to\varphi_{kl}$. Vertex $\varphi_{kl}$, $k,l\ne1$, $k+l=n$, has degree~4.
The edges pointing from $\varphi_{kl}$ in this case are $\varphi_{kl}\to \varphi_{k,l-1}$ and $\varphi_{kl}\to f_{k-1,l}$; the edges pointing towards $\varphi_{kl}$ are
$\varphi_{k-1,l}\to\varphi_{kl}$ and $f_{k,l-1}\to \varphi_{kl}$.
Vertex $\varphi_{k1}$, $k\in [2,n-2]$, has degree~4. The edges pointing from $\varphi_{k1}$ are $\varphi_{k1}\to\varphi_{k-1,2}$ and
$\varphi_{k1}\to \varphi_{1k}$; the edges pointing towards $\varphi_{k1}$ are $\varphi_{k2}\to\varphi_{k1}$ and
$\varphi_{1,k-1}\to\varphi_{k1}$. Note that for $k=2$ the vertices $\varphi_{k-1,2}$ and $\varphi_{1k}$ coincide, hence for $n>3$
there are two edges pointing from $\varphi_{21}$ to $\varphi_{12}$. Vertex $\varphi_{n-1,1}$ has degree~5. The edges pointing from $\varphi_{n-1,1}$
are $\varphi_{n-1,1}\to \varphi_{1,n-1}$, $\varphi_{n-1,1}\to f_{n-2,1}$, and $\varphi_{n-1,1}\to g_{11}$; the edges pointing towards
$\varphi_{n-1,1}$ are $\varphi_{1,n-2}\to \varphi_{n-1,1}$ and $g_{22}\to \varphi_{n-1,1}$.
Vertex $\varphi_{1l}$, $l\in [2,n-2]$, has degree~6. The edges pointing from $\varphi_{1l}$ are $\varphi_{1l}\to\varphi_{2l}$,
$\varphi_{1l}\to \varphi_{1,l-1}$, and $\varphi_{1l}\to \varphi_{l+1,1}$; the edges pointing towards $\varphi_{1l}$ are $\varphi_{1,l+1}\to\varphi_{1l}$,
$\varphi_{2,l-1}\to\varphi_{1l}$, and $\varphi_{l1}\to\varphi_{1l}$. Vertex $\varphi_{1,n-1}$ has degree~5. The edges pointing from $\varphi_{1,n-1}$
are $\varphi_{1,n-1}\to \varphi_{1,n-2}$ and $\varphi_{1,n-1}\to h_{22}$; the edges pointing towards
$\varphi_{1,n-1}$ are $\varphi_{n-1,1}\to \varphi_{1,n-1}$, $f_{1,n-2}\to \varphi_{1,n-1}$, and $h_{11}\to \varphi_{1,n-1}$.
Finally, $\varphi_{11}$ is the special vertex. It has degree~4, and the corresponding edges are $\varphi_{12}\to\varphi_{11}$, $g_{11}\to\varphi_{11}$
and $\varphi_{11}\to\varphi_{21}$, $\varphi_{11}\to h_{11}$.

Vertex $f_{kl}$, $k+l<n$, has degree~6. The edges pointing from $f_{kl}$ are $f_{kl}\to f_{k+1,l-1}$,
$f_{kl}\to f_{k,l+1}$, and $f_{kl}\to f_{k-1,l}$; the edges pointing towards $f_{kl}$ are $f_{k-1,l+1}\to f_{kl}$,
$f_{k,l-1}\to f_{kl}$, and $f_{k+1,l}\to f_{kl}$. To justify this description in the extreme cases ($k+l=n-1$, $k=1$,
or $l=1$), we use the identification of Remark~\ref{identify}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=12cm]{double4sttr.eps}
\caption{Quiver $Q_4$}
\label{D4}
\end{center}
\end{figure}

Vertex $g_{ij}$, $i\ne n$, $j\ne 1$, has degree~6. The edges pointing from $g_{ij}$ are $g_{ij}\to g_{i+1,j+1}$,
$g_{ij}\to g_{i,j-1}$, and $g_{ij}\to g_{i-1,j}$; the edges pointing towards $g_{ij}$ are $g_{i,j+1}\to g_{ij}$,
$g_{i-1,j-1}\to g_{ij}$, and $g_{i+1,j}\to g_{ij}$ (for $i=j$ we use the identification of Remark~\ref{identify}).
Vertex $g_{nj}$, $j\ne 1$, has degree~4. The edges pointing from $g_{nj}$
are $g_{nj}\to g_{n-1,j}$ and $g_{nj}\to g_{n,j-1}$; the edges pointing towards $g_{nj}$ are
$g_{n-1,j-1}\to g_{nj}$ and $g_{n,j+1}\to g_{nj}$ (for $j=n$ we use the identification of Remark~\ref{identify}).
Vertex $g_{i1}$, $i\ne 1$, $i\ne n$, has degree~2, and the corresponding edges
are $g_{i1}\to g_{i+1,2}$ and $g_{i2}\to g_{i1}$.
Vertex $g_{11}$ has degree~3, and the corresponding edges are $\varphi_{n-1,1}\to g_{11}$ and $g_{11}\to g_{21}$, $g_{11}\to\varphi_{11}$.
Finally, $g_{n1}$ has degree~1, and the only edge is $g_{n2}\to g_{n1}$.

Vertex $h_{ij}$, $i\ne 1$, $j\ne n$, $i\ne j$, has degree~6. The edges pointing from $h_{ij}$ are
$h_{ij} \to h_{i,j-1}$, $h_{ij} \to h_{i-1,j}$, and $h_{ij} \to h_{i+1,j+1}$;
the edges pointing towards $h_{ij}$ are $h_{i+1,j}\to h_{ij}$,
$h_{i,j+1}\to h_{ij}$, and $h_{i-1,j-1}\to h_{ij}$.
Vertex $h_{ii}$, $i\ne 1$, $i\ne n$, has degree~4. The edges pointing from $h_{ii}$ are $h_{ii}\to f_{1,n-i}$ and $h_{ii}\to h_{i-1,i}$; the edges pointing to $h_{ii}$ are $f_{1,n-i+1}\to h_{ii}$ and $h_{i,i+1}\to h_{ii}$.
Vertex $h_{in}$, $i\ne 1$, $i\ne n$, has degree~4. The edges pointing from $h_{in}$
are $h_{in}\to h_{i,n-1}$ and $h_{in}\to h_{i-1,n}$; the edges pointing towards $h_{in}$ are
$h_{i+1,n}\to h_{in}$ and $h_{i-1,n-1}\to h_{in}$.
Vertex $h_{1j}$, $j\ne 1$, $j\ne n$, has degree~2, and the corresponding edges
are $h_{1j}\to h_{2,j+1}$ and $h_{2j}\to h_{1j}$.
Vertex $h_{nn}$ has degree~3. The edges pointing from $h_{nn}$ are $h_{nn}\to g_{nn}$ and $h_{nn}\to h_{n-1,n}$; the edge pointing
to $h_{nn}$ is $f_{11}\to h_{nn}$. Vertex $h_{11}$ has degree~2, and the corresponding edges are $h_{11}\to\varphi_{1,n-1}$ and $\varphi_{11}\to h_{11}$.
Finally, $h_{1n}$ has degree~1, and the only edge is $h_{2n}\to h_{1n}$.

The quiver $Q_4$ is shown in Fig.~\ref{D4}. The frozen vertices are shown as squares, the special vertex is shown as a hexagon, and the isolated vertices are not shown.
Certain arrows are dashed; this carries no particular meaning and only serves to make the picture easier to read.
One can identify in $Q_n$ four
``triangular regions'' associated with the four families
$\{ g_{ij}\}$, $\{h_{ij}\}$, $\{f_{kl}\}$, $\{\varphi_{kl}\}$. We will call vertices in these regions $g$-, $h$-, $f$- and $\varphi$-vertices, respectively. It is easy to see that $Q_4$, as well as $Q_n$ for any $n$, can be embedded into a torus.
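As a sanity check of this description, for $n=4$ the four regions contain ten $g$-vertices, ten $h$-vertices, three $f$-vertices and six $\varphi$-vertices, that is, $2n^2-n+1=29$ vertices in total, of which $2n=8$ (namely $g_{i1}$ and $h_{1j}$) are frozen; together with the $n-1=3$ isolated vertices this agrees with the values $N=2n^2-3n+1=21$ and $M=3n-1=11$ given above.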
The case $n=2$ is special. In this case there are only three types of vertices: $g$, $h$, and $\varphi$. The quiver $Q_2$ is shown in Fig.~\ref{D2}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=6cm]{double2st.eps}
\caption{Quiver $Q_2$}
\label{D2}
\end{center}
\end{figure}

\begin{remark}
\label{diagonal} On the diagonal subgroup $\{ (X,X) : X\in GL_n\}$ of $D(GL_n)$ one has $g_{ii} = h_{ii}$ for $1\le i\le n$, while the functions $f_{kl}$ and $\varphi_{kl}$ vanish identically.
Accordingly, the vertices in $Q_n$ that correspond to $f_{kl}$ and $\varphi_{kl}$ are erased and, for $1\le i\le n$, the vertices corresponding to $g_{ii}$ and $h_{ii}$ are identified.
As a result, one recovers a seed of the cluster structure compatible with the standard Poisson--Lie structure on $GL_n$, see~\cite[Chap.~4.3]{GSVb}.
\end{remark}

\begin{remark}
\label{tribfz}
At this point, we should emphasize a connection between the data $(F_n, Q_n)$ and particular seeds that give rise to the standard cluster structures on the double Bruhat
cells ${\mathcal G}^{e,w_0}$, ${\mathcal G}^{w_0,e}$ for ${\mathcal G}=GL_n$ and $w_0$ the longest element in the corresponding Weyl group.
We will frequently exploit this connection in what follows. Consider, in particular, the subquiver $Q_n^h$ of $Q_n$ associated with the functions $h_{ij}$ in which,
in addition to the vertices $h_{1i}$, we also
view the vertices $h_{ii}$ as frozen. Restricted to upper triangular matrices, the family $\{h_{ij}\}$ together with the quiver $Q_n^h$ defines an initial seed for the
cluster structure on ${\mathcal G}^{e,w_0}$. This can be seen, for example, by applying the construction of Section~2.4 in \cite{CAIII} using the reduced word
$1 2 1 3 2 1\ldots (n-1) (n-2) \ldots 2 1$ for $w_0$. This leads to the cluster formed by
all dense minors that involve the first row. The seed we are interested in is then obtained via the transformation $B \mapsto W_0 B^T W_0$
applied to upper triangular matrices. Similarly,
the family $\{g_{ij}\}$ restricted to lower triangular matrices together with the quiver $Q_n^g$ obtained from $Q_n$ in the same way as $Q_n^h$ (and isomorphic to it) defines an initial seed for ${\mathcal G}^{w_0,e}$.

As explained in Remark~2.20 in \cite{CAIII}, in the case of the standard cluster structures on ${\mathcal G}^{e,w_0}$ or ${\mathcal G}^{w_0,e}$, the cluster algebra and the upper cluster algebra coincide. This implies, in particular, that in every cluster for this cluster structure, each matrix entry of an upper/lower triangular matrix is expressed as a Laurent polynomial in the cluster variables which is polynomial in the stable variables. Furthermore, using similar considerations and the invariance under right multiplication by unipotent lower triangular matrices of column-dense minors that involve the first column, it is easy to conclude that each such minor has a Laurent polynomial expression in terms of dense minors involving the first column and, moreover, each leading dense principal minor enters this expression polynomially. Both of these properties will be utilized below.
\end{remark}
\subsection{Exchange relations}\label{exrelsec}
We define the set
${\mathcal P}_n$ of strings for $Q_n$ that contains only one nontrivial string $\{p_{1r}\}$,
$0\le r\le n$. It corresponds to the vertex $\varphi_{11}$ of multiplicity $n$, and $p_{1r}=c_r$, $1\le r\le n-1$.
The strings corresponding to all other vertices are trivial. Consequently, the generalized exchange relation at the vertex $\varphi_{11}$
for $n>2$ is expected to look as follows:
\begin{equation*}
\varphi_{11}\varphi'_{11}=\sum_{r=0}^nc_r\varphi_{21}^r\varphi_{12}^{n-r}.
\end{equation*}
Indeed, such a relation exists in the ring of regular functions on $D(GL_n)$, and is given by the following proposition.

\begin{proposition}\label{polyrel}
For any $n>2$,
\begin{equation}\label{general}
\det ((-1)^{n-1}\varphi_{12}X+\varphi_{21}Y)=\varphi_{11}P^*_n,
\end{equation}
where $P^*_n$ is a polynomial in the entries of $X$ and $Y$.
\end{proposition}

For $n=2$, relation~\eqref{general} is replaced by
$$
\det(-y_{22}X+x_{22}Y)=\varphi_{11}
\left|\begin{matrix}
y_{21} & y_{22}\\
x_{21} & x_{22}
\end{matrix}\right|.
$$
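In the latter case one can check the identity directly: $\varphi_{11}=\det X\cdot\det\Phi_{11}=x_{12}y_{22}-x_{22}y_{12}$, the $(2,2)$ entry of $-y_{22}X+x_{22}Y$ vanishes, and both sides equal $(x_{12}y_{22}-x_{22}y_{12})(x_{22}y_{21}-x_{21}y_{22})$.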
\subsection{Statement of main results}
\label{main}
Let $n\ge 2$.

\begin{theorem}
\label{structure}
{\rm (i)}
The extended seed $\widetilde\Sigma_n=(F_n,Q_n,{\mathcal P}_n)$ defines
a generalized cluster structure $\mathcal{GC}_n^D$
in the ring of regular functions on $D(GL_n)$ compatible with the standard Poisson--Lie structure on $D(GL_n)$.

{\rm (ii)} The corresponding generalized upper cluster algebra
$\mathcal{U}(\mathcal{GC}_n^D)$
over
\[
\widehat{\mathcal A}={\mathbb C}[g_{11}^{\pm1},g_{21},\dots, g_{n1},h_{11}^{\pm1}, h_{12},\dots,h_{1n}, c_1,\dots, c_{n-1}]
\]
is naturally isomorphic to
the ring of regular functions on $D(GL_n)$.
\end{theorem}

\begin{remark}\label{lowdim}
1. Since $g_{11}=\det X$ and $h_{11}=\det Y$ are the only stable variables that vanish nowhere on $D(GL_n)$,
the ground ring in (ii) above is a particular case of~\eqref{hata}.
In fact, it follows from the proof that a stronger statement holds:
(i) $\mathcal{GC}_n^D$ extends to a regular generalized cluster structure on $\operatorname{Mat}_n\times \operatorname{Mat}_n$;
(ii) the generalized upper cluster algebra over
\[
\widehat{\mathcal A}={\mathbb C}[g_{11}^{\pm1},g_{21},\dots, g_{n1},h_{11}, h_{12},\dots,h_{1n}, c_1,\dots, c_{n-1}]
\]
is naturally isomorphic to
the ring of regular functions on $GL_n\times \operatorname{Mat}_n$.

2. For $n=2$ the generalized cluster structure obtained above is of finite type.
Indeed, the principal part of the exchange matrix for the cluster shown
in Fig.~\ref{D2} has the form
\[
\left(\begin{array}{rrr}
0 & 2 & -2 \\
-1 & 0 & 1 \\
1 & -1 & 0\\
\end{array}\right).
\]
The mutation of this matrix in direction $2$ transforms it into
\[
\left(\begin{array}{rrr}
0 & -2 & 0 \\
1 & 0 &-1 \\
0 & 1 & 0\\
\end{array}\right),
\]
and its Cartan companion is a Cartan matrix of type $B_3$. Therefore, by~\cite[Thm.~2.7]{CheSha}, the generalized cluster structure is of type $B_3$. This
implies, in particular, that its exchange graph is the 1-skeleton of
the 3-dimensional cyclohedron (also known as the Bott--Taubes polytope), and its cluster complex is the 3-dimensional
polytope polar dual to the cyclohedron (see~\cite[Sec.~5.2]{FoRe} for further details).

3. It follows immediately from Theorem~\ref{structure}(i) that the extended seed obtained from $\widetilde\Sigma_n$ by deleting the functions $\det X$
and $\det Y$ from $F_n$, deleting the corresponding vertices from $Q_n$,
and restricting relation~\eqref{general} to $\det X=\det Y=1$ defines a generalized cluster structure in the ring of regular functions on $D(SL_n)$
compatible with the standard Poisson--Lie structure on $D(SL_n)$. Moreover,
by Theorem~\ref{structure}(ii),
the corresponding generalized upper cluster algebra is naturally isomorphic to the ring of regular functions on $D(SL_n)$.
\end{remark}
Using Theorem~\ref{structure}, we can construct a generalized cluster
structure
on $GL_n^\dagger$. For $U\in GL_n^\dagger$, denote
$\psi_{kl}(U) = s_{kl}\det \Phi_{kl}(U)$, where $s_{kl}$ are the signs defined in Section~\ref{logcan}.
The initial extended cluster $F^\dagger_n$ for $GL_n^\dagger$ consists of the
functions $\psi_{kl}(U)$, $k,l\ge 1$, $k+l\le n$, $(-1)^{(n-i)(j-i)}h_{ij}(U)$, $2\le i\le j\le n$, $h_{11}(U)=\det U$, and $c_i(\mathbf 1, U)$,
$1\le i\le n-1$.
To obtain the initial seed for $GL_n^\dagger$, we apply a certain
sequence ${\mathcal S}$ of cluster transformations to the initial seed
for $D(GL_n)$. This sequence does not involve vertices associated with the
functions $\psi_{kl}$. The resulting cluster ${\mathcal S}(F_n)$ contains a
subset $\{ \left (\det X\right )^{\nu(f)}f : f \in F^\dagger_n\}$ with
$\nu(\psi_{kl})=n-k-l+1$ and $\nu(h_{ij})=1$.
These functions are attached to a subquiver $Q^\dagger_n$ of the resulting quiver ${\mathcal S}(Q_n)$, which
is isomorphic to the subquiver of $Q_n$ formed by the vertices
associated with the functions $\varphi_{kl}, f_{ij}$ and $h_{ii}$, see Fig.~\ref{D*4}.
The functions $h_{ii}(U)$ are declared stable variables, and the $c_i(\mathbf 1,U)$ remain isolated.
See Theorem~\ref{clusterU} below for more details.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=6cm]{dual4st.eps}
\caption{Quiver $Q^\dagger_4$}
\label{D*4}
\end{center}
\end{figure}
All exchange relations defined by the mutable
vertices of $Q^\dagger_n$ are homogeneous in $\det X$. This allows us to use $(F^\dagger_n, Q_n^\dagger,{\mathcal P}_n)$ as an initial seed for $GL_n^\dagger$.
The generalized exchange relation associated with the cluster variable
$\psi_{11}$ now takes the form
$\det ((-1)^{n-1}\psi_{12}\mathbf 1+\psi_{21}U)=\psi_{11}\Pi^*_n$,
where $\Pi^*_n$ is a polynomial in the entries of $U$.

\begin{theorem}
\label{dualstructure}
{\rm (i)} The extended seed $(F_n^\dagger,Q_n^\dagger,{\mathcal P}_n)$ defines a generalized cluster structure
$\mathcal{GC}_n^{\dagger}$
in the ring of regular functions on $GL_n^\dagger$ compatible with $\{\cdot,\cdot\}_*$.

{\rm (ii)} The corresponding generalized upper cluster algebra over
\[
\widehat{\mathcal A}={\mathbb C}[h_{11}(U)^{\pm1}, \dots, h_{nn}(U)^{\pm1}, c_1(\mathbf 1,U),\dots, c_{n-1}(\mathbf 1,U)]
\]
is naturally isomorphic to the ring of regular functions on $GL_n^\dagger$.
\end{theorem}

\begin{remark}
1. It follows from Remark~\ref{ultinohii} that a stronger statement holds, similarly to Remark~\ref{lowdim}.1:
(i) $\mathcal{GC}_n^\dagger$ extends to a regular generalized cluster structure on $\operatorname{Mat}_n$;
(ii) the generalized upper cluster algebra over
\[
\widehat{\mathcal A}={\mathbb C}[h_{11}(U),\dots, h_{nn}(U), c_1(\mathbf 1,U),\dots, c_{n-1}(\mathbf 1,U) ]
\]
is naturally isomorphic to
the ring of regular functions on $\operatorname{Mat}_n$.

2. Let ${\mathcal V}_n$ be the intersection of $SL_n^\dagger$ with a generic conjugation orbit in $SL_n$. This variety plays a role in a rigorous mathematical
description of Coulomb branches in 4D gauge theories. The generalized cluster structure $\mathcal{GC}_n^{\dagger}$ descends
to ${\mathcal V}_n$ if one fixes the values of $c_1(\mathbf 1,U),\dots,c_{n-1}(\mathbf 1,U)$.
The existence of a cluster structure on ${\mathcal V}_n$ was suggested by D.~Gaiotto (A.~Braverman, private communication).
\end{remark}
\subsection{The outline of the proof}\label{outline}
We start by defining a local toric action given by right and left multiplication by diagonal
matrices, and use Proposition~\ref{globact} to check that this action can be extended to a global one. This fact is then used in the proof of the compatibility
assertion in Theorem~\ref{structure}(i), which is based on Proposition~\ref{compatchar}. As a byproduct, we get that the extended exchange matrix
of $\mathcal{GC}_n^D$ is of full rank.

Next, we have to check conditions (i)--(iii) of Proposition~\ref{regcoin}. The regularity condition in (i) follows from Theorem~\ref{basis} and the explicit
description of the basis. The coprimality condition in (i) is a corollary of the following stronger statement.

\begin{theorem}\label{irredinbas}
All functions in $F_n$ are irreducible polynomials in the matrix entries.
\end{theorem}

We then establish the regularity and coprimality conditions in (ii), which completes the proof of Theorem~\ref{structure}(i).

To prove Theorem~\ref{structure}(ii), it remains to check condition~(iii) of Proposition~\ref{regcoin}.
The usual way to do that consists in applying Theorem~3.21 from~\cite{GSVb}, which claims that for cluster structures of
geometric type with an exchange matrix of full rank, the upper cluster algebra coincides with the upper bound at any cluster. It then remains to choose an
appropriate set of generators in ${\mathcal O}(V)$ and to check that each element of this set can be represented as a Laurent polynomial in some fixed cluster and in all its neighbors.
We will have to extend the above result in three directions:
1) to upper cluster algebras over $\widehat{\mathcal A}$,
as opposed to upper cluster algebras over $\bar{\mathcal A}$;
2) to more general neighborhoods of a vertex in ${\mathbb T}_N$, as opposed to the stars of vertices;
3) to generalized cluster structures of geometric type, as opposed to ordinary cluster structures.

Let $\mathcal{GC}=\mathcal{GC}(\widetilde B,{\mathcal P})$ be a generalized cluster structure as defined in Section~\ref{SecPrel}, and let $L$ be the number of isolated variables in $\mathcal{GC}$. For the $i$th
nontrivial string in ${\mathcal P}$, define a $(d_i-1)\times L$ integer matrix $\widetilde B(i)$: the $r$th row of $\widetilde B(i)$ contains the exponents of the isolated variables in the exchange coefficient
$p_{ir}$ (recall that the $p_{ir}$ are monomials).
Following~\cite{Fra}, we call a {\it nerve\/} an arbitrary subtree of ${\mathbb T}_N$ on $N+1$ vertices such that all its edges have different labels. The star
of a vertex in ${\mathbb T}_N$ is an example of a nerve. Given a nerve ${\mathcal N}$, we define an {\it upper bound\/} $\mathcal{U}({\mathcal N})$ as the intersection of the rings of Laurent polynomials
$\widehat{\mathcal A}[{\mathbf x}^{\pm 1}]$ taken over all seeds in ${\mathcal N}$. We prove the following theorem, which seems to be interesting in its own right.

\begin{theorem}\label{nervub}
Let $\widetilde B$ be a skew-symmetrizable matrix of full rank, and let
\[
\operatorname{rank} \widetilde B(i)=d_i-1
\]
for any nontrivial string in ${\mathcal P}$.
Then the upper bounds $\mathcal{U}({\mathcal N})$ do not depend on the choice of ${\mathcal N}$, and hence coincide with the
generalized upper cluster algebra $\mathcal{U}(\widetilde B,{\mathcal P})$ over $\widehat{\mathcal A}$.
\end{theorem}
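For instance, for $N=3$ a path on four vertices of ${\mathbb T}_3$ whose three edges carry pairwise distinct labels is a nerve that is not the star of any vertex; by Theorem~\ref{nervub} the corresponding upper bound still coincides with the generalized upper cluster algebra.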
We then proceed as follows. First, we choose the $2n^2$ matrix entries of $X$ and $Y$ as the generating set of the ring of regular functions on $D(GL_n)$. Then we
prove the following result.

\begin{theorem}\label{allxcluster}
Each matrix entry of $X$ is either a stable variable or a cluster variable in $\mathcal{GC}_n^D$.
\end{theorem}

To treat the remaining part of the generating set we consider a special nerve ${\mathcal N}_0$ in the tree ${\mathbb T}_{(n-1)(2n-1)}$. First of all, we design a sequence ${\mathcal S}$ of
cluster transformations that takes the initial extended seed $\widetilde\Sigma_n$ to a new extended seed $\widetilde\Sigma'_n={\mathcal S}(\widetilde\Sigma_n)=
({\mathcal S}(F_n),{\mathcal S}(Q_n),{\mathcal S}({\mathcal P}_n))$ having the following properties.
Let $Q_n^\dagger$ and $F_n^\dagger$ be as defined in Section~\ref{main}, and $U=X^{-1}Y$.

\begin{theorem}\label{clusterU}
There exists a sequence ${\mathcal S}$ of cluster transformations such that

{\rm (i)} ${\mathcal S}({\mathcal P}_n)={\mathcal P}_n$;

{\rm (ii)} ${\mathcal S}(Q_n)$ contains a subquiver $Q_n'$ isomorphic to $Q_n^\dagger$;

{\rm (iii)} the functions in ${\mathcal S}(F_n)$ assigned to the vertices of $Q_n'$ constitute the set
$\left\{\left((\det X)^{n-k-l+1}\psi_{kl}(U)\right)_{k,l\ge 1,\, k+l\le n},\ \left(\det X\cdot h_{ij}(U)\right)_{2\le i\le j\le n},\ \det X\cdot h_{11}(U)\right\}$;

{\rm (iv)} the only vertices in $Q'_n$ connected with the rest of the vertices in ${\mathcal S}(Q_n)$ are those associated with
$\det X\cdot h_{ii}$, $2\le i\le n$, and $\varphi_{11}$.
\end{theorem}

As an immediate corollary we get Theorem~\ref{dualstructure}(i).

The nerve ${\mathcal N}_0$ contains the seed $\widetilde\Sigma'_n$, a seed
$\widetilde\Sigma''_n$ adjacent to $\widetilde\Sigma'_n$, and a seed $\widetilde\Sigma'''_n$ adjacent to $\widetilde\Sigma''_n$.
Besides, it contains $2(n-1)^2$ seeds adjacent to $\widetilde\Sigma'_n$ and distinct from $\widetilde\Sigma''_n$,
and $n-3$ seeds adjacent to $\widetilde\Sigma'''_n$ and distinct from $\widetilde\Sigma''_n$. A more detailed description of ${\mathcal N}_0$ is given in
Section~\ref{thenerve} below. We then prove

\begin{theorem}\label{Urestore}
Each matrix entry of $U=X^{-1}Y$ multiplied by an appropriate power of $\det X$ belongs to the upper bound $\mathcal{U}({\mathcal N}_0)$.
\end{theorem}

Consequently, each matrix entry of $Y=XU$ belongs to $\mathcal{U}({\mathcal N}_0)$.
It remains to note that $\widetilde B(1)$ is the $(n-1)\times (n-1)$ identity matrix: the exchange coefficients at $\varphi_{11}$ are $p_{1r}=c_r$, and $c_1,\dots,c_{n-1}$ are precisely the isolated variables.
Therefore, all conditions in Theorem~\ref{nervub} are satisfied, and we get the proofs of Theorems~\ref{structure}(ii)
and~\ref{dualstructure}(ii).
\section{Generalized upper cluster algebras of geometric type over $\widehat{\mathcal A}$}\label{novoeslovo}
Let $\mathcal{GC}=\mathcal{GC}(\widetilde B,{\mathcal P})$ be a generalized cluster structure as defined in Section~\ref{SecPrel}, and let ${\mathcal A}\subseteq\widehat{\mathcal A}\subseteq\bar{\mathcal A}$ be the corresponding rings.
The goal of this section is to prove Theorem~\ref{nervub}. We start with the following statement, which is an extension of the standard result on
the coincidence of upper bounds (see e.g.\ \cite[Corollary~3.22]{GSVb}).

\begin{theorem} \label{nervubcop}
If the generalized exchange polynomials are coprime in ${\mathcal A}[x_1,\dots,x_{N}]$ for any seed in
$\mathcal{GC}$, then the upper
bounds $\mathcal{U}({\mathcal N})$ do not depend on the choice of the nerve ${\mathcal N}$, and hence coincide with the upper cluster algebra $\mathcal{U}(\widetilde B,{\mathcal P})$ over $\widehat{\mathcal A}$.
\end{theorem}

\begin{proof} Let us consider first the case $N=1$. In this case everything is exactly the same as in the standard situation. Namely, we consider two adjacent
clusters ${\mathbf x}=\{x_1\}$ and ${\mathbf x}_1=\{x_1'\}$ and the exchange relation $x_1x_1'=P_1$, where $P_1\in {\mathcal A}$. The same reasoning as in
Lemma~3.15 from \cite{GSVb} yields
\[
\widehat{\mathcal A}[x_1^{\pm1}]\cap\widehat{\mathcal A}[(x_1')^{\pm1}]=\widehat{\mathcal A}[x_1,x_1'].
\]
As a corollary, for general $N$ one gets
\begin{equation}\label{twolaur}
\widehat{\mathcal A}[x_1^{\pm1},x_2^{\pm1},\dots,x_N^{\pm1}]\cap\widehat{\mathcal A}[(x_1')^{\pm1},x_2^{\pm1},\dots,x_N^{\pm1}]=\widehat{\mathcal A}[x_1,x_1',x_2^{\pm1},\dots,x_N^{\pm1}].
\end{equation}
The latter relation is obtained from the one for $N=1$ by replacing ${\mathcal A}$ with ${\mathcal A}[x_2,\dots,x_N]$ and
the ground ring $\widehat{\mathcal A}$ with $\widehat{\mathcal A}[x_2^{\pm1},\dots,x_N^{\pm1}]$.

Let now $N=2$. Note that ${\mathbb T}_2$ is an infinite path, and hence all nerves are just two-pointed stars. Let ${\mathbf x}=\{x_1,x_2\}$ be an arbitrary cluster,
and let ${\mathbf x}_1=\{x_1',x_2\}$ and ${\mathbf x}_2=\{x_1,x_2'\}$ be the two adjacent clusters obtained via the generalized exchange relations $x_1x_1'=P_1$ and $x_2x_2'=P_2$
with $P_1\in{\mathcal A}[x_2]$ and $P_2\in{\mathcal A}[x_1]$. Besides, let ${\mathbf x}_3=\{x_1',x_2''\}$ be the cluster obtained from ${\mathbf x}_1$ via the generalized
exchange relation $x_2x_2''=\bar P_2$ with $\bar P_2\in{\mathcal A}[x_1']$. Let ${\mathcal N}$ be the nerve
${\mathbf x}_1$\textemdash${\mathbf x}$\textemdash${\mathbf x}_2$, and ${\mathcal N}_1$ be the nerve
consisting of the clusters
${\mathbf x}_1$\textemdash${\mathbf x}$\textemdash${\mathbf x}_3$. The following statement is an analog of Lemma~3.19 in \cite{GSVb}.

\begin{lemma}\label{2pstar}
Assume that $P_1$ and $P_2$ are coprime in ${\mathcal A}[x_1,x_2]$ and $P_1$ and $\bar P_2$ are coprime in ${\mathcal A}[x_1',x_2]$. Then $\mathcal{U}({\mathcal N})=\mathcal{U}({\mathcal N}_1)$.
\end{lemma}

\begin{proof} The proof differs substantially from the proof of Lemma~3.19, since we are not allowed to invert monomials in $\widehat{\mathcal A}$.
It is enough to prove the inclusion $\mathcal{U}({\mathcal N})\subseteq\mathcal{U}({\mathcal N}_1)$, since the opposite inclusion is obtained by switching the roles of ${\mathbf x}$ and ${\mathbf x}_1$.
By~\eqref{twolaur}, we have
\[
\mathcal{U}({\mathcal N})=\widehat{\mathcal A}[x_1,x_1',x_2^{\pm1}]\cap\widehat{\mathcal A}[x_1^{\pm1},(x_2')^{\pm1}],\quad
\mathcal{U}({\mathcal N}_1)=\widehat{\mathcal A}[x_1,x_1',x_2^{\pm1}]\cap\widehat{\mathcal A}[(x_1')^{\pm1},(x_2'')^{\pm1}].
\]
Let $y\in \widehat{\mathcal A}[x_1,x_1',x_2^{\pm1}]$; expand $y$ as a Laurent polynomial in $x_2$. Each term of this expansion containing a non-negative power of $x_2$ belongs
to $\widehat{\mathcal A}[(x_1')^{\pm1},(x_2'')^{\pm1}]$, so we have to consider only $y$ of the form
\[
y=\sum_{k=1}^a \frac{Q'_k+Q_k}{x_2^k}, \qquad a\ge 1,
\]
with $Q_k\in\widehat{\mathcal A}[x_1]$, $Q'_k\in\widehat{\mathcal A}[x_1']$.
We can treat $y$ as above in two different ways. On the one hand, by substituting $x_1'=P_1/x_1$,
it can be considered as an element in $\widehat{\mathcal A}[x_1^{\pm1},x_2^{\pm1}]$ and written as
\begin{equation}\label{firsty}
y=\sum_{k\le a}\frac{R_k}{x_1^{\delta_k}x_2^k}
\end{equation}
with $R_k\in\widehat{\mathcal A}[x_1]$ and $\delta_k\ge 0$. Imposing the condition $y\in\widehat{\mathcal A}[x_1^{\pm1},(x_2')^{\pm1}]$,
we get $R_k=P_2^kS_k$ for $k> 0$ and some $S_k\in\widehat{\mathcal A}[x_1]$. Note that each summand in~\eqref{firsty} with $k\le 0$
belongs to $\widehat{\mathcal A}[x_1^{\pm1},(x_2')^{\pm1}]$ automatically.
On the other hand, by substituting $x_1=P_1/x_1'$,
$y$ can be considered as an element in $\widehat{\mathcal A}[(x_1')^{\pm1},x_2^{\pm1}]$ and written as
\begin{equation}\label{secy}
y=\sum_{k\le a}\frac{R_k'}{(x_1')^{\delta_k'}x_2^k}
\end{equation}
with $R_k'\in\widehat{\mathcal A}[x_1']$ and $\delta_k'\ge 0$. Note that $R_k'$ can be restored via $R_l$ and $\delta_l$, $k\le l\le a$.
We will prove that $\bar P_2^k$ divides $R_k'$ in $\widehat{\mathcal A}[x_1']$ for any $k>0$. This would mean that each summand in~\eqref{secy}
belongs to $\widehat{\mathcal A}[(x_1')^{\pm1},(x_2'')^{\pm1}]$, and hence $\mathcal{U}({\mathcal N})\subseteq\mathcal{U}({\mathcal N}_1)$ as claimed above.

Assume first that $\hat b_{12}=\hat b_{21}=0$ in the modified exchange matrix $\widehat B$, which means that $P_1, P_2\in{\mathcal A}$ and $\bar P_2=P_2$.
Rewrite an arbitrary term $T_k=R_k/(x_1^{\delta_k}x_2^k)$,
$k> 0$, in~\eqref{firsty} as an element in $\widehat{\mathcal A}[(x_1')^{\pm1},x_2^{\pm1}]$ via substituting $x_1=P_1/x_1'$. Recall that $R_k$ is divisible by $P_2^k$,
hence
\[
T_k=\frac{(x_1')^{\delta_k}P_2^k S_k\rvert_{x_1\leftarrow P_1/x_1'}}{P_1^{\delta_k}x_2^k}=\frac{\bar P_2^k \bar S_k}{(x_1')^{\gamma_k}P_1^{\delta_k}x_2^k}
\]
for some $\gamma_k\ge 0$ and $\bar S_k\in \widehat{\mathcal A}[x_1']$. Comparing the latter expression with~\eqref{secy}, we see that $\gamma_k=\delta_k'$ and
$R_k'=\bar P_2^k \bar S_k/P_1^{\delta_k}$. Since $\bar P_2$ and $P_1$ are coprime, this means that $\bar P_2^k$ divides
$R_k'$ in $\widehat{\mathcal A}[x_1']$.

Assume now that $\hat b_{12}=b> 0$ (otherwise $\hat b_{21}>0$, and we proceed in the same way with $P_2$ instead of $P_1$).
Then one can rewrite $P_1$ as $P_1=P_{10}+x_2^bP_{11}$, where $P_{10}$ is a monomial in ${\mathcal A}$ and
$P_{11}\in{\mathcal A}[x_2]$ is not divisible by $x_2$.
Consider an arbitrary term $T_k=R_k/(x_1^{\delta_k}x_2^k)$, $k> 0$, in~\eqref{firsty} as an element in $\widehat{\mathcal A}[(x_1')^{\pm1}]((x_2))$ via substituting
$x_1=(P_{10}+x_2^bP_{11})/x_1'$ and expanding the result in the Taylor series in $x_2$. Similarly to the previous case, we get
\[
T_k=\sum_{j=0}^k\frac{P_2^{k-j}\rvert_{x_1\leftarrow P_{10}/x_1'}\hat S_j}{j!P_{10}^{\delta_k+j}x_2^{k-j}}+
\sum_{j>k}\frac{\hat S_jx_2^{j-k}}{j!P_{10}^{\delta_k+j}}
\]
for some $\hat S_j\in\widehat{\mathcal A}[x_1']$.
Since $y\in\widehat{\mathcal A}[(x_1')^{\pm 1},x_2^{\pm 1}]$, we conclude
that the infinite sums above contribute only finitely many terms to
$y=\sum_{k\le a} T_k$. By~\eqref{secy}, any of these finitely many terms automatically
belongs to $\widehat{\mathcal A}[(x'_1)^{\pm 1},(x''_2)^{\pm 1}]$. To treat the finite
sum in $T_k$ we note that
\[
\frac{(x_1')^{\deg_{x_1}P_2}P_2\rvert_{x_1\leftarrow P_{10}/x_1'}}{\bar P_2}
\]
is a monomial in ${\mathcal A}$. So, the finite sum can be rewritten as
\[
\sum_{j=0}^k\frac{\bar P_2^{k-j}\bar S_j}{j!(x_1')^{\gamma_j}P_{10}^{\delta_k+j}x_2^{k-j}}
\]
for some $\gamma_j\ge 0$ and $\bar S_j\in \widehat{\mathcal A}[x_1']$. Comparing the latter expression with~\eqref{secy}, we get
\[
\frac{R_k'}{(x_1')^{\delta'_k}}=\bar P_2^k\sum_{j=0}^{a-k}\frac{\bar S_j}{j!(x_1')^{\gamma_{j+k}}P_{10}^{\delta_{j+k}+j}}.
\]
Note that $\bar P_2$ and $P_{10}$ are coprime, since $P_{10}$ is a monomial and $\bar P_2$ does not have monomial factors,
which means that $\bar P_2^k$ divides $R_k'$ in $\widehat{\mathcal A}[x_1']$.
\end{proof}

In the case of an arbitrary $N$ one can use Lemma~\ref{2pstar} to reshape nerves while preserving the upper bounds.
Namely, let ${\mathcal N}$ be a nerve, and let $v_1,v_2,v_3\in{\mathcal N}$ be three vertices such that $v_1$ is adjacent to $v_2$ and $v_2$ is the unique vertex adjacent
to $v_3$. Consider the nerve ${\mathcal N}'$ that does not contain $v_3$, contains a new
vertex $v_3'$ adjacent only to $v_1$, and otherwise is identical to ${\mathcal N}$; the edge between $v_1$ and $v_3'$ in ${\mathcal N}'$ bears the same label as the edge
between $v_2$ and $v_3$ in ${\mathcal N}$. A single application of Lemma~\ref{2pstar} with $\widehat{\mathcal A}$ replaced by $\widehat{\mathcal A}[x_3^{\pm1},\dots,x_N^{\pm1}]$
shows that $\mathcal{U}({\mathcal N})=\mathcal{U}({\mathcal N}')$. Clearly, any two nerves can be connected via a sequence of such transformations, and the result follows.
\end{proof}
To complete the proof of Theorem~\ref{nervub}, it remains to establish the following result.

\begin{lemma}\label{rankcond}
Let $\widetilde B$ be a skew-symmetrizable matrix of full rank, and let $\operatorname{rank} \widetilde B(i)=d_i-1$ for any nontrivial string in ${\mathcal P}$.
Then the generalized exchange polynomials are coprime in ${\mathcal A}[x_1,\dots,x_{N}]$ for any seed in $\mathcal{GC}$.
\end{lemma}

\begin{proof} We follow the proof of Lemma~3.24 from \cite{GSVb} with minor modifications. Fix an arbitrary seed $\Sigma=(\widetilde{\mathbf x},\widetilde B,{\mathcal P})$, and let
$P_i$ be the generalized exchange polynomial corresponding to the $i$th cluster variable.

Assume first that there exist $j$ and $j'$ such that $b_{ij}>0$ and $b_{ij'}<0$. We want to define weights of the variables that make $P_i$ into
a quasihomogeneous polynomial. Put $w(x_j)=1/b_{ij}$, $w(x_{j'})=-1/b_{ij'}$.
If $j,j'\le N$, put $w(x_k)=0$ for $k\ne j,j'$. Otherwise, put $w(x_k)=0$ for all remaining cluster variables and all remaining stable non-isolated
variables. Finally, define the weights of the isolated variables from the equations $w(\hat p_{ir})=0$, $1\le r\le d_i-1$. The condition on the rank of
$\widetilde B(i)$ guarantees that these equations possess a unique solution. Now~\eqref{modexchange} shows that this weight assignment turns $P_i$ into a quasihomogeneous
polynomial of weight one.

Let $P_i=P'P''$ for some nontrivial polynomials $P'$ and $P''$; then both of them are quasihomogeneous with respect to the weights defined above, and
each one of them contains exactly one monomial
in the variables entering $u_{i;>}, v_{i;>}$, and exactly one monomial in the variables entering $u_{i;<}, v_{i;<}$. Consider these two monomials in $P'$.
Let $\delta_j$ and $\delta_{j'}$ be the degrees of $x_j$ and $x_{j'}$ in these two monomials, respectively. Then the quasihomogeneity condition implies
$\delta_j/b_{ij}=-\delta_{j'}/b_{ij'}$. Moreover, for any $j''\ne j, j'$ such that $b_{ij''}>0$ (or $b_{ij''}<0$) a similar procedure gives
$\delta_{j''}/b_{ij''}=-\delta_{j'}/b_{ij'}$ (or $\delta_j/b_{ij}=-\delta_{j''}/b_{ij''}$, respectively). This means that the $i$th row of $\widetilde B$ can be restored
from the exponents of the variables entering the above two monomials by dividing them by a constant. Consequently, if $P_i$ and $P_j$ possess a nontrivial
common factor, the corresponding rows of $\widetilde B$ are proportional, which contradicts the full rank assumption.

If all nonzero entries in the $i$th row have the same sign, we proceed in a similar way. Namely, if there exist $j,j'$ such that $b_{ij}, b_{ij'}\ne0$, we
put $w(x_j)=1/|b_{ij}|$, $w(x_{j'})=-1/|b_{ij'}|$. The weights of the other variables are defined in the same way as above. This makes $P_i$ into a quasihomogeneous
polynomial of weight zero, and the result follows. The case when there exists a unique $j$ such that $b_{ij}\ne 0$ is trivial.
\end{proof}
\section{Proof of Theorem~\ref{basis}}\label{prlogcan}
The proof exploits various invariance properties of the functions in $F_n$. First, we need some preliminary lemmas.
Let a bilinear form $\langle\cdot , \cdot\rangle_0$ on $\mathfrak{gl}_n$ be defined
as $\langle A ,B \rangle_0 = \langle \pi_0(A) , \pi_0(B) \rangle$.

\begin{lemma}
\label{cross-inv}
Let $g(X), h(Y), f(X,Y), \varphi(X,Y)$ be functions with the following invariance properties:
\begin{equation}\label{inv_prop}
\begin{aligned}
g(X)&=g(N_+X), \quad h(Y)=h(YN_-), \\
f(X,Y) &= f(N_+X N_- ,N_+Y N'_-), \quad
\varphi(X,Y)= \varphi(A X N_-, A Y N_-),
\end{aligned}
\end{equation}
where $A$ is an arbitrary element of $GL_n$, $N_+$ is an arbitrary unipotent upper-triangular element, and $N_-, N'_-$ are arbitrary unipotent lower-triangular elements. Then
\begin{align*}
&\{g, h\}_D = \frac{1}{2} \langle X\nabla_X g, Y\nabla_Y h\rangle_0 - \frac{1}{2} \langle\nabla_X g\cdot X, \nabla_Y h \cdot Y\rangle_0, \\
&\{f , g\}_D = \frac{1}{2} \langle E_L f , \nabla_X g\cdot X\rangle_0 - \frac{1}{2} \langle E_R f , X\nabla_X g\rangle_0, \\
&\{h, f\}_D = \frac{1}{2} \langle \nabla_Y h \cdot Y , E_L f\rangle_0 - \frac{1}{2} \langle Y\nabla_Y h, E_R f \rangle_0, \\
&\{\varphi, f\}_D = \frac{1}{2} \langle E_L \varphi , \nabla_X f\cdot X\rangle_0 - \frac12\langle E_L \varphi, \nabla_Y f\cdot Y \rangle_0, \\
&\{\varphi, g\}_D = \frac{1}{2} \langle E_L \varphi , \nabla_X g\cdot X \rangle_0, \\
&\{\varphi, h\}_D = -\frac{1}{2} \langle E_L \varphi , \nabla_Y h\cdot Y \rangle_0,
\end{align*}
where $E_L$ and $E_R$ are given by~\eqref{erel}.
\end{lemma}

\begin{proof} From \eqref{inv_prop}, we obtain
$ X\nabla_X g, E_R f \in \mathfrak b_+$, $ \nabla_Y h\cdot Y, \nabla_X f\cdot X, \nabla_Y f\cdot Y, E_L \varphi \in \mathfrak b_-$, and
$E_R \varphi=0$. Taking into account that $R_+(\xi)=\frac{1}{2}\pi_0(\xi)$ for $\xi\in\mathfrak b_-$ and that
$\mathfrak b_\pm \perp \mathfrak n_\pm$ with respect to $\langle\cdot ,\cdot \rangle$, the result follows from \eqref{sklyadoubleGL} and \eqref{sklyadoubleGL1}.
\end{proof}

\begin{lemma}
\label{homogen}
Let $g(X), h(Y), f(X,Y), \varphi(X,Y)$ be functions as in Lemma~\ref{cross-inv}. Assume, in addition,
that $g$ and $h$ are homogeneous with respect to right and left multiplication of their arguments by arbitrary diagonal matrices, and that $f$ and $\varphi$ are homogeneous with respect to right and left multiplication of $X, Y$ by the same pair of diagonal matrices.
Then all Poisson brackets $\{\cdot,\cdot\}_D$ among the functions $\log g$, $\log h$, $\log f$, $\log \varphi$ are constant.
\end{lemma}

\begin{proof} The homogeneity of $g(X)$ with respect to left multiplication by diagonal matrices
implies that there exists a diagonal element $\xi$ such that for any diagonal $h$ and any $X$,
$g(\exp(h) X) = \exp \langle h, \xi\rangle g(X)$. The infinitesimal version of this property reads
$\pi_0(X \nabla \log g(X)) = \xi$. A similar argument shows that the diagonal projections of all
elements needed to compute the Poisson brackets between $\log g$, $\log h$, $\log f$, $\log \varphi$ using the
formulas of Lemma~\ref{cross-inv} are constant diagonal matrices, and the claim follows.
\end{proof}

Lemmas \ref{cross-inv} and \ref{homogen} show that any four functions $f_{ij}, g_{kl}, h_{\alpha\beta}, \varphi_{\mu\nu}$
are log-canonical. Indeed, it is clear from the definitions in Section~\ref{logcan} that
\begin{equation}\label{ourinv_prop}
\begin{aligned}
g_{ij}(X)&=g_{ij}(N_+X), \quad g_{ii}(X)=g_{ii}(N_+XN_-), \\
h_{ij}(Y)&=h_{ij}(YN_-), \quad h_{ii}(Y)=h_{ii}(N_+YN_-),\\
f_{kl}(X,Y) &= f_{kl}(N_+X N_- ,N_+Y N'_-),\\
\tilde \varphi_{kl}(X,Y) &= \tilde\varphi_{kl}(A X N_-, A Y N_-),
\end{aligned}
\end{equation}
where $\tilde \varphi_{kl} = \det\Phi_{kl}$, and so the corresponding invariance
properties in~\eqref{inv_prop} are satisfied for any function
taken in any of these four families. Besides, all these
functions possess the homogeneity property of Lemma~\ref{homogen} as well.

For a generic element $X \in GL_n$, consider its Gauss factorization
\begin{equation}
\label{gauss}
X=X_{>0} X_0 X_{<0}
\end{equation}
with $X_{<0}$ unipotent lower-triangular, $X_0$ diagonal, and $X_{>0}$ unipotent upper-triangular factors. Sometimes it will be convenient to use the notation $X_{\leq 0} = X_0 X_{<0}$ and $X_{\geq 0} = X_{>0} X_0$.
Taking
$N_+=(X_{>0})^{-1}$ in the first relation in~\eqref{ourinv_prop}, $N_-=(Y_{<0})^{-1}$ in the second relation, and
$N_+=(Y_{>0})^{-1}$, $N_-=(X_{<0})^{-1}$, $N'_-=(Y_{<0})^{-1}$ in the third relation, one gets
\begin{equation}\label{canform}
\begin{aligned}
g_{ij}(X)&=g_{ij}(X_{\leq 0}),\\
h_{ij}(Y)&=h_{ij}(Y_{\geq 0}), \\
f_{kl}(X,Y) &= f_{kl}\left ((Y_{>0})^{-1}X_{\geq 0},Y_0\right )\\
&= h_{n-l+1,n-l+1}(Y)\, h_{n-k-l+1,n-k+1}\left ((Y_{>0})^{-1}X_{\geq 0}\right ).
\end{aligned}
\end{equation}

Next, we need to prove log-canonicity within each of the four families.
The following lemma is motivated by the third formula in~\eqref{canform}.

\begin{lemma}
\label{Zmap}
The almost everywhere defined map
\[
Z : (D(GL_n), \{\cdot,\cdot\}_D)\to (GL_n, \{\cdot,\cdot\}_r)
\]
given by
$(X,Y) \mapsto Z= Z(X,Y)=(Y_{>0})^{-1}X_{\geq 0}$ is Poisson.
\end{lemma}
{\bf m}athfrak begin{proof} Denote ${\bf p}i_{{\bf m}athfrak geq0}={\bf p}i_{>0}+{\bf p}i_0$ and ${\bf p}i_{\leq0}={\bf p}i_{<0}+{\bf p}i_0$. We start by computing the variation
{\bf m}athfrak begin{equation*}
{\bf m}athfrak begin{aligned}
{\bf m}athbf delta Z &= (Y_{>0})^{-1}\left ({\bf m}athbf delta X_{{\bf m}athfrak geq 0} - {\bf m}athbf delta Y_{>0} (Y_{>0})^{-1}X_{{\bf m}athfrak geq 0}{\bf r}ight )=
Z (X_{{\bf m}athfrak geq 0})^{-1} {\bf m}athbf delta X_{{\bf m}athfrak geq 0} - (Y_{>0})^{-1} {\bf m}athbf delta Y_{>0} Z\\
&=Z {\bf p}i_{{\bf m}athfrak geq 0}\left ( (X_{{\bf m}athfrak geq 0})^{-1} {\bf m}athbf delta X (X_{<0})^{-1}{\bf r}ight ) -
{\bf p}i_{>0}\left ( (Y_{>0})^{-1} {\bf m}athbf delta Y (Y_{\leq 0})^{-1}{\bf r}ight ) Z.
{\bf e}nd{aligned}
{\bf e}nd{equation*}
Then for a smooth function $f$ on $GL_n$ we have
{\bf m}athfrak begin{multline*}
{\bf m}athbf delta f(Z(X,Y))= \left\langle\mathfrak nabla f,{\bf m}athbf delta Z{\bf r}ight{\bf r}angle\\
=\left\langle (X_{<0})^{-1} {\bf p}i_{\leq 0} ( \mathfrak nabla f \cdot Z) (X_{{\bf m}athfrak geq 0})^{-1}, {\bf m}athbf delta X {\bf r}ight{\bf r}angle
- \left\langle (Y_{\leq 0})^{-1} {\bf p}i_{<0} ( Z \mathfrak nabla f ) (Y_{>0})^{-1}, {\bf m}athbf delta Y {\bf r}ight{\bf r}angle.
{\bf e}nd{multline*}
Therefore, if we denote $\tilde f(X,Y) = f\circ Z(X,Y)$ then
{\bf m}athfrak begin{align*}
&X \mathfrak nabla_X \tilde f = {{\bf m}athcal A}d_{X_{{\bf m}athfrak geq 0}} {\bf p}i_{\leq 0} (\mathfrak nabla f \cdot Z),\\
&Y \mathfrak nabla_Y \tilde f = - {{\bf m}athcal A}d_{Y_{>0}} {\bf p}i_{<0} (Z \mathfrak nabla f ),\\
&\mathfrak nabla_X \tilde f\cdot X = {{\bf m}athcal A}d_{(X_{<0})^{-1}} {\bf p}i_{\leq 0} (\mathfrak nabla f \cdot Z) \in {\bf m}athfrak b_-,\\
&\mathfrak nabla_Y \tilde f \cdot Y= - {{\bf m}athcal A}d_{(Y_{\leq 0})^{-1}} {\bf p}i_{<0} (Z \mathfrak nabla f ) \in \mathfrak n_-,
{\bf e}nd{align*}
and
{\bf m}athfrak begin{align*}
E_R \tilde f &= {{\bf m}athcal A}d_{Y_{>0}} \left ( {{\bf m}athcal A}d_Z {\bf p}i_{\leq 0} (\mathfrak nabla f \cdot Z) - {\bf p}i_{<0} (Z \mathfrak nabla f ){\bf r}ight )\\
&= {{\bf m}athcal A}d_{Y_{>0}} \left ( Z \mathfrak nabla f - {{\bf m}athcal A}d_Z {\bf p}i_{>0} (\mathfrak nabla f \cdot Z) - {\bf p}i_{<0} (Z \mathfrak nabla f ){\bf r}ight )\\
&={{\bf m}athcal A}d_{Y_{>0}} \left ({\bf p}i_{{\bf m}athfrak geq 0} (Z \mathfrak nabla f ) - {{\bf m}athcal A}d_Z {\bf p}i_{>0} (\mathfrak nabla f \cdot Z) {\bf r}ight )\in {\bf m}athfrak b_+.
{\bf e}nd{align*}
Plugging into \eqref{sklyadouble} we obtain
\begin{align*}
\{ f_1\circ Z, f_2\circ Z\}_D&= \frac{1}{2} \langle\nabla f_1\cdot Z, \nabla f_2\cdot Z\rangle_0
- \frac{1}{2} \langle Z\nabla f_1, Z\nabla f_2\rangle_0 + \langle X\nabla_X \tilde f_1, Y\nabla_Y \tilde f_2\rangle \\
&= \frac{1}{2} \langle\nabla f_1\cdot Z, \nabla f_2\cdot Z\rangle_0
- \frac{1}{2} \langle Z\nabla f_1, Z\nabla f_2\rangle_0\\
&\qquad - \langle \operatorname{Ad}_Z\pi_{\leq 0}( \nabla f_1 \cdot Z), \pi_{<0}(Z \nabla f_2)\rangle.
\end{align*}
The last term can be rewritten as
$
\langle Z \nabla f_1 , \pi_{<0}(Z \nabla f_2)\rangle - \langle \pi_{>0} ( \nabla f_1 \cdot Z) , Z \nabla f_2\rangle
$.
Comparing with
\eqref{sklyabra},
we obtain
$\{ f_1\circ Z, f_2\circ Z\}_D = \{f_1, f_2\}_r \circ Z$.
\end{proof}
We are now ready to deal with three of the four families.
\begin{lemma}
\label{fgh}
The families of functions $\{ f_{ij}\}$, $\{ g_{ij}\}$, $\{ h_{ij}\}$ are log-canonical with respect to $\{\cdot,\cdot\}_D$.
\end{lemma}
{\bf m}athfrak begin{proof} If $\varphi_1(X,Y) = g_{ij}(X)$ and $\varphi_2(X,Y) = g_{\alpha{\bf m}athfrak beta}(X)$ (or
$\varphi_1(X,Y) = h_{ij}(Y)$ and $\varphi_2(X,Y) = h_{\alpha{\bf m}athfrak beta}(Y)$) then
$\{ \varphi_1,\varphi_2\}_D = \{ \varphi_1,\varphi_2\}_r$. Furthermore, Proposition~4.19 in \cite{GSVb} specialized to the $GL_n$ case implies that in both cases
{\bf m}athfrak begin{equation}
\label{aux_psi}
\{ \log\varphi_1,\log\varphi_2\}_r = \frac{1}{2} \langle {\bf x}i_{1,L} , {\bf x}i_{2,L}{\bf r}angle_0 - \frac{1}{2}
\langle {\bf x}i_{1,R} , {\bf x}i_{2,R}{\bf r}angle_0,
{\bf e}nd{equation}
provided $i-j{\bf m}athfrak geq \alpha-{\bf m}athfrak beta$ ($i-j\leq \alpha-{\bf m}athfrak beta$, respectively),
where ${\bf x}i_{{\bf m}athfrak bullet,L}$, ${\bf x}i_{{\bf m}athfrak bullet,R}$ are projections of the left and right gradients
of $\log\varphi_{\bf m}athfrak bullet$
to the diagonal subalgebra. These projections are constant due to the homogeneity of all functions involved with respect to both left and right multiplication by diagonal matrices. Thus, families $\{ g_{ij}\}$, $\{ h_{ij}\}$ are log-canonical. The claim about the family $\{ f_{ij}\}$ now follows from Lemma~{\bf r}ef{Zmap} and the third
equation in~{\bf e}qref{canform}.
{\bf e}nd{proof}
The remaining family $\{\varphi_{ij}\}$ is treated separately.
{\bf m}athfrak begin{lemma}
\label{fylc}
The family $\{ \varphi_{kl}\}$ is log-canonical with respect to ${{\bf m}athcal P}oi_D$.
{\bf e}nd{lemma}
{\bf m}athfrak begin{proof} Since ${\bf m}athbf det X$ is a Casimir function for ${{\bf m}athcal P}oi_D$, we only need to show that functions $\tilde \varphi_{kl}={\bf m}athbf det {{\bf m}athcal P}hi_{kl}$ are log-canonical with respect to the Poisson bracket
{\bf m}athfrak begin{equation}
\label{PoissonU}
\{\varphi_1,\varphi_2\}_* = \left\langle R_+([\mathfrak nabla\varphi_1,U]), [\mathfrak nabla\varphi_2,U]{\bf r}ight{\bf r}angle - \left\langle [\mathfrak nabla\varphi_1,U], \mathfrak nabla \varphi_2 \cdot U{\bf r}ight{\bf r}angle,
{\bf e}nd{equation}
which one obtains from {\bf e}qref{sklyadoubleGL2} by assuming that $f(X,Y)=\varphi(X^{-1}Y)$. In other words, ${{\bf m}athcal P}oi_*$ is the push-forward of ${{\bf m}athcal P}oi_D$ under the map
$(X,Y) {\bf m}apsto U=X^{-1}Y$.
Let $C=e_{21} + \cdots + e_{n, n-1} + e_{1n}$ be the cyclic permutation matrix. By Lemma~\ref{BC}, we write $U$ as
\begin{equation}
\label{normalU}
U = N_- B_+ C N_-^{-1},
\end{equation}
where $N_-$ is unipotent lower triangular and $B_+$ is upper triangular.
Since functions $\tilde \varphi_{kl}$ are invariant under the conjugation by unipotent lower triangular matrices, we have
$\tilde \varphi_{kl}(U)=\tilde\varphi_{kl}(B_+C)$.
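As an illustration of the structure of $B_+C$ (a direct check for $n=3$, not needed in the sequel), here $C=e_{21}+e_{32}+e_{13}$ and
\[
B_+C=\begin{pmatrix} b_{12} & b_{13} & b_{11}\\ b_{22} & b_{23} & 0\\ 0 & b_{33} & 0\end{pmatrix},
\]
so the last column of $B_+C$ equals $b_{11}e_1$, and the last column of $(B_+C)^2$ equals $b_{22}b_{11}e_2+b_{12}b_{11}e_1$, in agreement with the general formula displayed next.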
Furthermore,
\[
\left((B_+C)^i\right)^{[n]} = b_{ii} \cdots b_{11} e_i + \sum_{s < i} \alpha_{is} e_s
\]
for $i\le n$, where $b_{ij}$, $1\le i\le j\le n$, are the entries
of $B_+$ and $\alpha_{is}$ are certain polynomials in these entries. It follows that
{\bf m}athfrak begin{equation}
{\bf m}athfrak begin{split}
\label{tildefy}
\tilde \varphi_{kl}(U)&= {\bf p}m \left ({\bf p}rod_{s=1}^{n-k-l+1}b_{ss}^{n-k-l-s+2}{\bf r}ight ){\bf m}athbf det (B_+)_{[n-k-l+2, n-k]}^{[n-l+2, n]}\\
&= {\bf p}m {\bf m}athbf det U^{n-k-l+1}\frac{h_{n-k-l+2,n-l+2}(B_+)}{{\bf p}rod_{s=2}^{n-k-l+2}h_{ss}(B_+)}.
{\bf e}nd{split}
{\bf e}nd{equation}
{\bf m}athfrak begin{remark}\label{tildefysign} It is easy to check that the sign in the first line of~{\bf e}qref{tildefy} equals $(-1)^{k(n-k)+(l-1)(n-k-l+1)}s_{kl}$.
We will use this fact below
in the proof of Theorem~{\bf r}ef{structure}(ii).
{\bf e}nd{remark}
Note that $\det U=\det (B_+C)= \pm\prod_{s=1}^{n}b_{ss}$ is a Casimir function for~\eqref{PoissonU}. Therefore, to prove Lemma~\ref{fylc} it suffices to show that the
functions $\det (B_+)_{[i, i+ n-j ]}^{[j, n]}$, $2\leq i \leq j \leq n$, are log-canonical with respect to $\{\cdot,\cdot\}_*$ as functions of $U$. To this end, we will first compute the push-forward of $\{\cdot,\cdot\}_*$ under the
map $U \mapsto B_+'=B_+'(U)=(B_+)_{[2,n]}^{[2,n]}$ of $GL_n$ to the space ${\mathcal B}_{n-1}$ of $(n-1)\times (n-1)$ invertible upper triangular matrices.
Let $S= e_{12} + \cdots + e_{m-1, m}$ be the $m\times m$ upper shift matrix. For an $m\times m$ matrix $A$, define
$$
\tau(A)= S A S^T.
$$
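In other words, $\tau$ discards the first row and the first column of $A$ and moves the remaining block to the upper left corner, padding with zeros; for instance, for $m=3$,
\[
\tau\begin{pmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\end{pmatrix}
=\begin{pmatrix} a_{22} & a_{23} & 0\\ a_{32} & a_{33} & 0\\ 0 & 0 & 0\end{pmatrix}.
\]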
{\bf m}athfrak begin{lemma}
\label{PoissonB}
Let $f_1, f_2$ be two differentiable functions on ${{\bf m}athcal B}_{n-1}$. Then
\[
\{f_1\circ B_+', f_2\circ B_+'\}_* = \{f_1, f_2\}_b\circ B_+',
\]
where ${{\bf m}athcal P}oi_b$ is defined by
{\bf m}athfrak begin{equation}
\label{brackB}
\{f_1, f_2\}_b(A) = \{f_1, f_2\}_r(A) + \frac{1}{2} \left ( \left\langle A \mathfrak nabla f_1, \tau( (\mathfrak nabla f_2) A){\bf r}ight{\bf r}angle_0 -
\left\langle \tau( (\mathfrak nabla f_1) A), A \mathfrak nabla f_2 {\bf r}ight{\bf r}angle_0 {\bf r}ight )
{\bf e}nd{equation}
for any $A\in {{\bf m}athcal B}_{n-1}$.
{\bf e}nd{lemma}
{\bf m}athfrak begin{proof} We start by computing an infinitesimal variation of $B_+$ as a function of $U$. From~{\bf e}qref{normalU}, we obtain
$\left ({{\bf m}athcal A}d_{N_-^{-1}}{\bf m}athbf delta U{\bf r}ight )C^{-1} =
[ N_-^{-1}{\bf m}athbf delta N_-, B_+C] C^{-1} + {\bf m}athbf delta B_+$.
Then
{\bf m}athfrak begin{equation}
\label{aux1}
{\bf p}i_{< 0}\left( \left ({{\bf m}athcal A}d_{N_-^{-1}}{\bf m}athbf delta U{\bf r}ight )C^{-1} {\bf r}ight ) = {\bf p}i_{< 0}\left( [ N_-^{-1}{\bf m}athbf delta N_-, B_+C] C^{-1}{\bf r}ight ).
{\bf e}nd{equation}
If we define ${{\bf m}athcal L}L : \mathfrak n_- \to \mathfrak n_-$ via ${{\bf m}athcal L}L(\mathfrak nu_-) = - {\bf p}i_{< 0}\left( \operatorname{ad}_{B_+C}(\mathfrak nu_-) C^{-1}{\bf r}ight)$ for $\mathfrak nu_-\in\mathfrak n_-$ then~{\bf e}qref{aux1} above implies
that $N_-^{-1}{\bf m}athbf delta N_- = {{\bf m}athcal L}L^{-1} \left( {\bf p}i_{< 0}\left( \left ({{\bf m}athcal A}d_{N_-^{-1}}{\bf m}athbf delta U{\bf r}ight )C^{-1} {\bf r}ight ) {\bf r}ight)$.
Invertibility of ${{\bf m}athcal L}L$ is easy to establish
by observing that~{\bf e}qref{aux1} can be written as a triangular linear system for matrix entries of $N_-^{-1}{\bf m}athbf delta N_-$.
The operator ${{\bf m}athcal L}L^* : \mathfrak n_+ \to \mathfrak n_+$ dual to ${{\bf m}athcal L}L$ with respect to
$\langle\cdot,\cdot {\bf r}angle$ acts on $\mathfrak nu_+\in \mathfrak n_+$ as
{\bf m}athfrak begin{equation}
\label{Lstar}
{\bf m}athfrak begin{split}
{{\bf m}athcal L}L^*(\mathfrak nu_+) &= {\bf p}i_{>0} \left ( \operatorname{ad}_{B_+C} (C^{-1} \mathfrak nu_+) {\bf r}ight )
={\bf p}i_{>0} \left ( B_+ \mathfrak nu_+ - C^{-1} \mathfrak nu_+ B_+ C {\bf r}ight )\\
& = B_+ \mathfrak nu_+ - S \mathfrak nu_+ B_+ S^T
= \left (\mathbf 1 - \tau\circ {{\bf m}athcal A}d_{B_+^{-1}}{\bf r}ight ) (B_+ \mathfrak nu_+).
{\bf e}nd{split}
{\bf e}nd{equation}
Note that ${{\bf m}athcal L}L^*$ extends to an operator on ${\bf m}athfrak{gl}_n$ given by the same formula and invertible due to the fact that
$\tau\circ {{\bf m}athcal A}d_{B_+^{-1}}$ is nilpotent.
Let $f$ be a differentiable function on ${{\bf m}athcal B}_n$. Denote $\tilde f (U)=(f\circ B_+)(U)$.
Then
{\bf m}athfrak begin{equation}
\label{nablafu}
{\bf m}athfrak begin{split}
\langle \mathfrak nabla\tilde f, & {\bf m}athbf delta U{\bf r}angle = \langle \mathfrak nabla f, {\bf m}athbf delta B_+{\bf r}angle =
\langle C^{-1}\mathfrak nabla f, \left ({{\bf m}athcal A}d_{N_-^{-1}}{\bf m}athbf delta U{\bf r}ight ) - [ N_-^{-1}{\bf m}athbf delta N_-, B_+C] {\bf r}angle\\
& =\langle {{\bf m}athcal A}d_{N_-}(C^{-1} \mathfrak nabla f ), {\bf m}athbf delta U{\bf r}angle + \left\langle [C^{-1}\mathfrak nabla f, B_+C ], {{\bf m}athcal L}L^{-1} \left({\bf p}i_{< 0}\left(\left ({{\bf m}athcal A}d_{N_-^{-1}}{\bf m}athbf delta U{\bf r}ight )C^{-1}{\bf r}ight){\bf r}ight){\bf r}ight{\bf r}angle\\
&=\langle {{\bf m}athcal A}d_{N_-}\left (C^{-1} \left ( \mathfrak nabla f + ({{\bf m}athcal L}L^*)^{-1}\left({\bf p}i_{> 0} ( [ C^{-1}\mathfrak nabla f, B_+C ] ){\bf r}ight){\bf r}ight ) {\bf r}ight ), {\bf m}athbf delta U{\bf r}angle;
{\bf e}nd{split}
{\bf e}nd{equation}
in the last equality we have used the identities ${{\bf m}athcal L}L^{-1}={\bf p}i_{<0}\circ{{\bf m}athcal L}L^{-1}$
and $({{\bf m}athcal L}L^*)^{-1}={\bf p}i_{>0}\circ({{\bf m}athcal L}L^*)^{-1}$.
From now on we assume that $f$ depends only on $B_+'$, that is, does not depend on the first row of
$B_+$. Thus, the first column of $\mathfrak nabla f$ is zero.
Define ${\bf z}eta^f={\bf p}i_{> 0} ( [ C^{-1}\mathfrak nabla f, B_+C ] )$ and ${\bf x}i^f = \mathfrak nabla f+ ({{\bf m}athcal L}L^*)^{-1}({\bf z}eta^f)$. Clearly,
$$
{\bf z}eta^f= {\bf p}i_{> 0} ( C^{-1}\mathfrak nabla f \cdot B_+C - B_+ \mathfrak nabla f )= {\bf p}i_{> 0}( \tau (\mathfrak nabla f\cdot B_+)) - {\bf p}i_{> 0} ( B_+\mathfrak nabla f )
$$
and
{\bf m}athfrak begin{align*}
[C^{-1}{\bf x}i^f, B_+C] &= C^{-1}{\bf x}i^f B_+C - B_+ {\bf x}i^f = C^{-1}{\bf x}i^f B_+ S^T - B_+ {\bf x}i^f \\
& = S{\bf x}i^f B_+ S^T - B_+ {\bf x}i^f + e_{n1} {\bf x}i^f B_+C \\
&= - \left (\mathbf 1 - \tau\circ {{\bf m}athcal A}d_{B_+^{-1}}{\bf r}ight ) (B_+ {\bf x}i^f) + e_{n1} {\bf x}i^f B_+C,
{\bf e}nd{align*}
which is equivalent to
{\bf m}athfrak begin{equation}\label{aux2}
[C^{-1}{\bf x}i^f, B_+C]= -{{\bf m}athcal L}L^*( {\bf x}i^f) + e_{n1} {\bf x}i^f B_+C
{\bf e}nd{equation}
by~{\bf e}qref{Lstar}. Furthermore,
$$
{{\bf m}athcal L}L^*( {\bf x}i^f) = {{\bf m}athcal L}L^*(\mathfrak nabla f) + {\bf z}eta^f
= {\bf p}i_{\leq 0} \left ( B_+ \mathfrak nabla f- \tau (\mathfrak nabla f \cdot B_+){\bf r}ight ).
$$
Consequently, $[C^{-1}{\bf x}i^f, B_+C]\in {\bf m}athfrak b_-$.
Using this fact and the invariance of the trace form, for any $f_1, f_2$ that depend only on $B_+'$ we can now compute $\{ f_1\circ B_+, f_2\circ B_+\}_*$ from~{\bf e}qref{PoissonU} and~{\bf e}qref{nablafu} as
{\bf m}athfrak begin{equation}
\label{PoissonBraw}
{\bf m}athfrak begin{split}
\{ f_1\circ B_+, f_2\circ B_+\}_* (U) &= \langle R_+ \left ([C^{-1}{\bf x}i^1, B_+C] {\bf r}ight ), [C^{-1}{\bf x}i^2, B_+C] {\bf r}angle\\
& - \langle [C^{-1}{\bf x}i^1, B_+C] , C^{-1}{\bf x}i^2 B_+C{\bf r}angle,
{\bf e}nd{split}
{\bf e}nd{equation}
where ${\bf x}i^i = {\bf x}i^{f_i}$, $i=1,2$.
Thus, $\mathfrak nabla^{i}=\mathfrak nabla f_i$ are lower triangular matrices with zero first column, and so $\mathfrak nabla^i B_+$, $B_+ \mathfrak nabla^i$, ${\bf x}i^i$, ${\bf x}i^i B_+$, $B_+ {\bf x}i^i$ have zero first column as well, and $C^{-1}{\bf x}i^2 B_+C$ has zero last column.
We conclude that the second term in~{\bf e}qref{aux2} does not affect either term in the right hand side of~{\bf e}qref{PoissonBraw}.
In particular, the first term in~{\bf e}qref{PoissonBraw} becomes
$$
\frac{1}{2} \left\langle B_+ \mathfrak nabla^1- \tau (\mathfrak nabla^1 B_+), B_+ \mathfrak nabla^2- \tau (\mathfrak nabla^2 B_+){\bf r}ight{\bf r}angle_0,
$$
while the second can be re-written as
{\bf m}athfrak begin{equation}
\mathfrak nonumber
{\bf m}athfrak begin{split}
{\bf m}athfrak big\langle {\bf p}i_{\leq 0} \left ( \tau (\mathfrak nabla^1 B_+) - B_+ \mathfrak nabla^1{\bf r}ight ),& C^{-1}{\bf x}i^2 B_+C{\bf m}athfrak big{\bf r}angle\ =
\left\langle S^T{\bf p}i_{\leq 0} \left ( \tau (\mathfrak nabla^1 B_+) - B_+ \mathfrak nabla^1 {\bf r}ight )S, {\bf x}i^2 B_+{\bf r}ight{\bf r}angle \\
&= \left\langle {\bf p}i_{\leq 0} \left ( \mathfrak nabla^1 B_+{\bf r}ight ) - S^T{\bf p}i_{\leq 0} \left ( B_+ \mathfrak nabla^1 {\bf r}ight ) S, {\bf x}i^2 B_+{\bf r}ight{\bf r}angle\\
&= \left\langle {\bf p}i_{\leq 0} \left ( \mathfrak nabla^1 B_+{\bf r}ight ) - S^T{\bf p}i_{\leq 0} \left ( B_+ \mathfrak nabla^1 {\bf r}ight ) S, \mathfrak nabla^2 B_+{\bf r}ight{\bf r}angle\\
& + \left\langle {\bf p}i_{\leq 0} \left ( \mathfrak nabla^1 B_+{\bf r}ight ) - S^T{\bf p}i_{\leq 0} \left ( B_+ \mathfrak nabla^1 {\bf r}ight ) S, ({{\bf m}athcal L}L^*)^{-1} ({\bf z}eta^2)B_+{\bf r}ight{\bf r}angle,
{\bf e}nd{split}
{\bf e}nd{equation}
where ${\bf z}eta^2={\bf z}eta^{f_2}$.
The last term can be transformed as
{\bf m}athfrak begin{equation}
\mathfrak nonumber
{\bf m}athfrak begin{split}
\langle {{\bf m}athcal A}d_{B_+}\left({\bf p}i_{\leq 0}( \mathfrak nabla^1 B_+) {\bf r}ight.&\left.- S^T{\bf p}i_{\leq 0} ( B_+ \mathfrak nabla^1) S{\bf r}ight), B_+({{\bf m}athcal L}L^*)^{-1} ({\bf z}eta^2){\bf r}angle\\
& = \langle \left (\mathbf 1 - {{\bf m}athcal A}d_{B_+}\circ \tau^* {\bf r}ight ) {\bf p}i_{\leq 0} ( B_+ \mathfrak nabla^1 ), B_+({{\bf m}athcal L}L^*)^{-1} ({\bf z}eta^2){\bf r}angle\\
& = \langle {\bf p}i_{\leq 0} \left ( B_+ \mathfrak nabla^1 {\bf r}ight ), \left (\mathbf 1 - \tau\circ {{\bf m}athcal A}d_{B_+^{-1}}{\bf r}ight ) B_+ ({{\bf m}athcal L}L^*)^{-1} ({\bf z}eta^2){\bf r}angle\\
&= \langle {\bf p}i_{\leq 0} \left ( B_+ \mathfrak nabla^1 {\bf r}ight ), {{\bf m}athcal L}L^* (({{\bf m}athcal L}L^*)^{-1} ({\bf z}eta^2)){\bf r}angle\\
& = \langle {\bf p}i_{\leq 0} \left ( B_+ \mathfrak nabla^1 {\bf r}ight ), {\bf p}i_{> 0}( \tau (\mathfrak nabla^2 B_+)) - {\bf p}i_{> 0} ( B_+\mathfrak nabla^2 )
{\bf r}angle.
{\bf e}nd{split}
{\bf e}nd{equation}
Here, in the first equality we used the fact that $({{\bf m}athcal L}L^*)^{-1} ({\bf z}eta^2) \in \mathfrak n_+$, and that
$$
\langle {{\bf m}athcal A}d_{B_+}{\bf p}i_{\leq 0}( \mathfrak nabla^1 B_+),A{\bf r}angle=\langle {\bf p}i_{\le0}\left( {{\bf m}athcal A}d_{B_+}{\bf p}i_{\leq 0}( \mathfrak nabla^1 B_+){\bf r}ight),A{\bf r}angle
=\langle {\bf p}i_{\le0}\left( {{\bf m}athcal A}d_{B_+} (\mathfrak nabla^1 B_+){\bf r}ight),A{\bf r}angle
$$
for $A\in {\bf m}athfrak b_+$.
Combining our simplified expressions for two terms in the right hand side of {\bf e}qref{PoissonBraw} and taking into account that
$\langle\tau(\mathfrak nabla^1B_+),\tau(\mathfrak nabla^2B_+){\bf r}angle_0=\langle\mathfrak nabla^1B_+,\mathfrak nabla^2B_+{\bf r}angle_0$
we obtain
{\bf m}athfrak begin{equation*}
{\bf m}athfrak begin{split}
\{f_1\circ B_+, f_2\circ &B_+\}_*(U) \\
&= \frac{1}{2} \left ( \left\langle B_+ \mathfrak nabla^1, \tau( \mathfrak nabla^2 B_+){\bf r}ight{\bf r}angle_0 - \left\langle \tau( \mathfrak nabla^1 B_+), B_+ \mathfrak nabla^2 {\bf r}ight{\bf r}angle_0
{\bf r}ight) + \{f_1, f_2\}_r(B_+)
{\bf e}nd{split}
{\bf e}nd{equation*}
for functions $f_1, f_2$ on ${{\bf m}athcal B}_n$ that depend only on $B_+'$. To complete the proof of Lemma {\bf r}ef{PoissonB}, it remains to observe that for such functions, the right hand side does not depend on the first row of $B_+$ and is equal to a similar expression in which $B_+$ is replaced with $B_+'$ and the bracket ${{\bf m}athcal P}oi_r$ and the forms
$\langle\cdot ,\cdot {\bf r}angle$, $\langle\cdot , \cdot{\bf r}angle_0$ are replaced with their counterparts
for $GL_{n-1}$.
{\bf e}nd{proof}
Now we can finish the proof of Lemma~{\bf r}ef{fylc}. Let functions $f_1, f_2$ belong to the family
$\{ \log{\bf m}athbf det (B_+)_{[i, i+ n-j ]}^{[j, n]}= \log{\bf m}athbf det (B_+')_{[i-1, i+ n-j ]}^{[j-1, n]},\ 2\leq i \leq j \leq n\}$.
Then the second term in our expression~{\bf e}qref{brackB} for $\{f_1, f_2\}_b(B_+')$ is constant
because of the homogeneity of minors of $B_+'$ under right and left diagonal multiplication, and the first term is constant because, as we discussed earlier,
functions ${\bf m}athbf det (B_+')_{[i-1, i+ n-j ]}^{[j-1, n]}$ are log-canonical with respect to ${{\bf m}athcal P}oi_r$ (see, e.g.,~{\bf e}qref{aux_psi}).
{\bf e}nd{proof}
This ends the proof of Theorem~{\bf r}ef{basis}.
{\bf m}athfrak begin{remark}
\label{MMJcompat}
The bracket {\bf e}qref{brackB} can be extended to the entire $GL_{n-1}$. In fact, the right hand side makes sense for $GL_m$ for any $m\in {\bf m}athbb{N}$.
It can be induced via the map
${{\bf m}athbb T}_{m}\times {{\bf m}athbb T}_{m}\times GL_{m} \mathfrak ni (H={\bf m}athbf diag (h_1,\ldots, h_{m}), \tilde H={\bf m}athbf diag (\tilde h_1,\ldots, \tilde h_{m}), X) {\bf m}apsto H X\tilde H\in GL_{m}$ if one equips ${{\bf m}athbb T}_{m}\times {{\bf m}athbb T}_{m}$ with a Poisson bracket $\{h_i, h_j\} = \{\tilde h_i, \tilde h_j\} =0, \{h_i, \tilde h_j\}={\bf m}athbf delta_{i,j-1}$. It follows from~\cite[Prop.~2.2]{LMP} that right and left diagonal multiplication generates a global toric action for the
standard cluster structure on $GL_m$
(and on double Bruhat cells in $GL_m$), for which ${{\bf m}athcal P}oi_r$ is a compatible Poisson structure.
Therefore, the above extension of~{\bf e}qref{brackB} to the entire group is compatible with this cluster structure as well.
{\bf e}nd{remark}
\operatorname{sign}ection{Proof of Theorem~{\bf r}ef{structure}(i)}\label{prgcs}
\operatorname{sign}ubsection{Toric action}\label{torac}
Let us start from the following important statement.
{\bf m}athfrak begin{theorem}\label{gtaind}
The action
{\bf m}athfrak begin{equation}\label{lraction}
(X,Y){\bf m}apsto (T_1 X T_2, T_1 Y T_2)
{\bf e}nd{equation}
of right and left multiplication by diagonal matrices
is ${{\bf m}athcal G}CC_n^D$-extendable to a global toric action on ${{\bf m}athbb F}F_{{{\bf m}athbb C}}$.
{\bf e}nd{theorem}
\begin{proof}
For an arbitrary vertex $v$ in $Q_n$,
denote by $x_v$ the cluster variable attached to $v$. If $v$ is a mutable vertex, then the {\em $y$-variable} ($\tau$-variable in the terminology of \cite{GSV1}) corresponding to $v$ is defined as
$$
y_v = \frac{\prod_{(v\to u) \in Q} x_u}{\prod_{(w\to v) \in Q} x_w}.
$$
Note that the product in the above formula is taken over all arrows,
so, for example, $\varphi_{21}^2$ enters the numerator of the $y$-variable
corresponding to $\varphi_{12}$.
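For instance, if a mutable vertex $v$ has a double arrow $v\to u$ and single arrows $w_1\to v$ and $w_2\to v$, then $y_v=x_u^2/(x_{w_1}x_{w_2})$.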
By Proposition~\ref{globact}, to prove the theorem it suffices to check that $y_v$ is a homogeneous function of degree zero with respect to the
action~\eqref{lraction} (see~\cite[Remark 3.3]{GSVMMJ} for details), and that the Casimirs $\hat p_{1r}$ are invariant under~\eqref{lraction}.
Let us start with verifying the latter condition. According to Section~\ref{logcan}, $\hat p_{1r}=c_r^ng_{11}^{r-n}h_{11}^{-r}$, $1\le r\le n-1$.
It is well known that the functions $c_i$ are Casimirs for the Poisson--Lie bracket~\eqref{sklyadoubleGL1} on $D(GL_n)$, as are $g_{11}$ and $h_{11}$.
Therefore, the $\hat p_{1r}$ are indeed Casimirs. Their invariance under~\eqref{lraction} is verified by an easy calculation.
Next, for a function ${\bf p}si(X,Y)$ on $D(GL_n)$ homogeneous with respect
to~{\bf e}qref{lraction}, define
the {{\bf e}m left\/} and {{\bf e}m right weights} ${\bf x}i_L({\bf p}si)$, ${\bf x}i_R({\bf p}si)$
of ${\bf p}si$ as the {{\bf e}m constant\/} diagonal matrices
${\bf p}i_0(E_L \log{\bf p}si)$ and ${\bf p}i_0(E_R \log{\bf p}si)$.
Recall that all functions $g_{ij}$, $h_{ij}$, $f_{kl}$, $\varphi_{kl}$
possess this homogeneity property.
For $1\leq i \leq j \leq n$, let ${{\bf m}athfrak d}elta(i,j)$ denote a diagonal matrix with $1$'s in the entries $(i,i), \ldots, (j,j)$ and $0$'s everywhere else.
It follows directly from the definitions in Section~{\bf r}ef{logcan} that
{\bf m}athfrak begin{equation}\label{weights}
{\bf m}athfrak begin{aligned}
& {\bf x}i_L(g_{ij})= {{\bf m}athfrak d}elta(j, n+j -i),{\bf q}uad{\bf x}i_R(g_{ij})= {{\bf m}athfrak d}elta(i,n), \\
& {\bf x}i_L(h_{ij})= {{\bf m}athfrak d}elta(j, n),{\bf q}uad {\bf x}i_R(h_{ij})= {{\bf m}athfrak d}elta(i,n+i-j),\\
& {\bf x}i_L(f_{kl})= {{\bf m}athfrak d}elta(n-k+1, n)+ {{\bf m}athfrak d}elta(n-l+1, n),{\bf q}uad
{\bf x}i_R(f_{kl})= {{\bf m}athfrak d}elta(n-k-l+1,n), \\
& {\bf x}i_L(\varphi_{kl})= (n-k-l)\left(\mathbf 1 + {{\bf m}athfrak d}elta(n,n){\bf r}ight ) + {{\bf m}athfrak d}elta(n-k+1,n)+ {{\bf m}athfrak d}elta(n-l+1,n),\\
& {\bf x}i_R(\varphi_{kl})= (n-k-l+1)\mathbf 1.
{\bf e}nd{aligned}
{\bf e}nd{equation}
Now the verification of the claim above
becomes a straightforward calculation
based on the description of $Q_n$ in Section~{\bf r}ef{init}
and the fact that for a monomial in
homogeneous functions $M={\bf p}si_1^{\alpha_1}{\bf p}si_2^{\alpha_2}\cdots$ the
right and left weights are ${\bf x}i_{R,L}(M) = \alpha_1 {\bf x}i_{R,L}({\bf p}si_1) + \alpha_2 {\bf x}i_{R,L}({\bf p}si_2) + \cdots$.
For example, if $v$ is the vertex associated with the function $h_{ii}$,
$1< i< n$, then
{\bf m}athfrak begin{align*}
{\bf x}i_R(y_v) &= {\bf x}i_R(h_{i-1,i}) + {\bf x}i_R(f_{1,n-i}) -{\bf x}i_R(h_{i,i+1}) - {\bf x}i_R(f_{1,n-i+1})\\
&= {{\bf m}athfrak d}elta(i-1, n-1)+ {{\bf m}athfrak d}elta(i,n)- {{\bf m}athfrak d}elta(i, n-1) - {{\bf m}athfrak d}elta(i-1,n)= 0
{\bf e}nd{align*}
and
{\bf m}athfrak begin{align*}
{\bf x}i_L(y_v) &= {\bf x}i_L(h_{i-1,i}) + {\bf x}i_L(f_{1,n-i}) -{\bf x}i_L(h_{i,i+1}) - {\bf x}i_L(f_{1,n-i+1})\\
& = {{\bf m}athfrak d}elta(i, n) + {{\bf m}athfrak d}elta(i+1,n) + {{\bf m}athfrak d}elta(n,n)
- {{\bf m}athfrak d}elta(i+1, n) -{{\bf m}athfrak d}elta(n,n) - {{\bf m}athfrak d}elta(i,n)= 0.
{\bf e}nd{align*}
Other vertices are treated in a similar way.
{\bf e}nd{proof}
\operatorname{sign}ubsection{Compatibility}\label{cmpty}
Let us proceed to the proof of the compatibility statement of
Theorem~{\bf r}ef{structure}(i). We have already seen above that ${\bf m}athfrak hat p_{1r}$ are Casimirs of the bracket ${{\bf m}athcal P}oi_D$.
Therefore, by Proposition~{\bf r}ef{compatchar}, it suffices to
show that for every mutable vertex $v\in Q_n$
{\bf m}athfrak begin{equation}\label{compcond}
\{\log x_u, \log y_v\}_D = \lambda {\bf m}athbf delta_{u,v}{\bf q}uad\text{for any $u\in Q_n$},
{\bf e}nd{equation}
where $\lambda$ is some nonzero rational number not depending on $v$.
Let $v$ be a mutable $g$-vertex in $Q_n$ and
$u$ be a vertex in one of the other three regions of $Q_n$. Then to show that $\{\log x_u, \log y_v\}_D=0$ one can use~{\bf e}qref{ourinv_prop},
Lemma~{\bf r}ef{cross-inv} and the proof of Theorem~{\bf r}ef{gtaind}, which implies that
{\bf m}athfrak begin{align*}
{\bf p}i_0\left (X\mathfrak nabla_X \log y_v{\bf r}ight )& = {\bf p}i_0\left (E_R \log y_v{\bf r}ight ) = {\bf x}i_R(y_v)=0,\\
{\bf p}i_0\left (\mathfrak nabla_X \log y_v X{\bf r}ight )& = {\bf p}i_0\left (E_L \log y_v{\bf r}ight ) = {\bf x}i_L(y_v)=0.
{\bf e}nd{align*}
The same argument works if $u$ and $v$ belong to any two different regions of the quiver $Q_n$.
Thus, to complete the proof it remains to verify~{\bf e}qref{compcond} for vertices $u, v$ in the same region of $Q_n$.
In view of~{\bf e}qref{canform}, for $g$- and $h$-vertices other than vertices corresponding to $g_{ii}$ and $h_{ii}$, this becomes a particular case of Theorem~4.18
in~\cite{GSVb} which establishes the compatibility of the standard
Poisson--Lie structure on a simple Lie group ${{\bf m}athcal G}$ with the cluster
structure on double Bruhat cells in ${{\bf m}athcal G}$. We just need to set ${{\bf m}athcal G}=GL_n$ (a
transition to $GL_n$ from a simple group $SL_n$ is straightforward), set
$\lambda$ in~\eqref{compcond} to be equal to $-1$ and apply the theorem to ${\mathcal G}^{id,w_0}$, ${\mathcal G}^{w_0,id}$ in the case of the $g$- and $h$-regions, respectively (here $w_0$ is the longest permutation in $S_n$).
Vertices corresponding to $g_{ii}$ and $h_{ii}$ are treated separately, because in quivers for ${{\bf m}athcal G}^{id,w_0}, {{\bf m}athcal G}^{w_0,id}$ they would have been frozen. For any such vertex $v$ we only need to check that $\{\log x_v, \log y_v\}_D = -1$. Using the description of $Q_n$ in Section~{\bf r}ef{init},
the third equation in Lemma~{\bf r}ef{cross-inv}, the second and the third lines in~{\bf e}qref{weights}, and equation~{\bf e}qref{aux_psi}, we compute
{\bf m}athfrak begin{equation*}
{\bf m}athfrak begin{aligned}
\left \{ \log h_{ii},\log\frac{f_{1, n-i} h_{i-1,i}} {f_{1, n-i+1} h_{i,i+1}}{\bf r}ight \}_D
&= \frac 1 2 \langle {{\bf m}athfrak d}elta(i,n) ,2{{\bf m}athfrak d}elta(i+1,n) - 3{{\bf m}athfrak d}elta(i,n) +{{\bf m}athfrak d}elta(i-1,n)\\
&{\bf q}quad + {{\bf m}athfrak d}elta(i-1,n-1) - {{\bf m}athfrak d}elta(i,n-1){\bf r}angle_0\\
& = \frac 1 2 \left \langle {{\bf m}athfrak d}elta(i,n) , 2({{\bf m}athfrak d}elta(i-1,i-1) -{{\bf m}athfrak d}elta(i,i)) {\bf r}ight {\bf r}angle_0 = -1
{\bf e}nd{aligned}
{\bf e}nd{equation*}
for $1<i<n$. Using in addition the first equation in Lemma~{\bf r}ef{cross-inv}
and the first line in~{\bf e}qref{weights} we get
{\bf m}athfrak begin{equation*}
{\bf m}athfrak begin{aligned}
\left \{ \log h_{nn},\log\frac{ h_{n-1,n}g_{nn}} {f_{11}}{\bf r}ight \}_D
&= \frac 1 2 \langle {{\bf m}athfrak d}elta(n,n) ,{{\bf m}athfrak d}elta(n-1,n)- 3{{\bf m}athfrak d}elta(n,n) + {{\bf m}athfrak d}elta(n-1,n-1){\bf r}angle_0\\
& = \frac 1 2 \left \langle {{\bf m}athfrak d}elta(n,n) , 2({{\bf m}athfrak d}elta(n-1,n-1) -{{\bf m}athfrak d}elta(n,n)) {\bf r}ight {\bf r}angle_0 = -1.
{\bf e}nd{aligned}
{\bf e}nd{equation*}
Vertices corresponding to $g_{ii}$ are treated in a similar way.
Now, let us turn to the $f$-region. Let $v$ be a vertex that corresponds to $f_{kl}$, $k,l{\bf m}athfrak ge 1$, $k+l< n$, then by the last equality in~{\bf e}qref{canform},
{\bf m}athfrak begin{equation}\label{yftoyh}
y_v = \frac{f_{k+1,l-1} f_{k,l+1}f_{k-1,l}} {f_{k+1,l} f_{k-1,l+1}f_{k,l-1}}(X,Y) = \frac{h_{\alpha,{\bf m}athfrak beta-1} h_{\alpha-1,{\bf m}athfrak beta}h_{\alpha+1,{\bf m}athfrak beta+1}}
{h_{\alpha,{\bf m}athfrak beta+1}h_{\alpha+1,{\bf m}athfrak beta}h_{\alpha-1,{\bf m}athfrak beta-1}}(Z),
{\bf e}nd{equation}
where $\alpha=n-k-l+1$, ${\bf m}athfrak beta=n-k+1$, and $Z=(Y_{>0})^{-1}X_{{\bf m}athfrak geq 0}$. Consequently, if $u$ is a vertex that corresponds to $f_{k'l'}$,
and $\alpha'=n-k'-l'+1$, ${\bf m}athfrak beta'=n-k'+1$, then
$$
\{ \log x_u, \log y_v\}_D = \{\log h_{n-l'+1,n-l'+1}(Y), \log y_v\}_D + \{\log h_{\alpha'{\bf m}athfrak beta'}, \log y_v\}_r(Z)
$$
by Lemma~\ref{Zmap}. The first term in the right hand side vanishes,
as was already shown above (this corresponds to the case when one vertex belongs to the $h$-region and the other to the $f$-region), and we are left
with
{\bf m}athfrak begin{equation}\label{Dtor}
\{ \log x_u, \log y_v\}_D = \{\log h_{\alpha'{\bf m}athfrak beta'}, \log y_v\}_r(Z).
{\bf e}nd{equation}
Consider the subquiver of $Q_n$ formed by all $f$-vertices,
as well as vertices (viewed as frozen) that correspond to
functions $g_{ii}(X)$ and $\varphi_{n-i, i}(X,Y)$. It is isomorphic, up to edges
between the frozen vertices,
to the $h$-part of $Q_n$ in which vertices corresponding to $h_{ii}(Y)$ are
viewed as frozen. The isomorphism consists in sending the vertex occupied by
$f_{kl}(X,Y)$ to the vertex occupied by $h_{\alpha{\bf m}athfrak beta}(Z)$, including the
values of $k,l$ subject to identifications of Remark~{\bf r}ef{identify}.
The latter is possible since $g_{ii}(X)=h_{ii}(Z)$ by the second equation in~{\bf e}qref{canform},
and since the third equation in~{\bf e}qref{canform} can be extended to the cases $k=0$ and $l=0$ by setting $h_{i,n+1}{\bf e}quiv1$ for any $i$. It now follows
from~{\bf e}qref{yftoyh}, {\bf e}qref{Dtor} that this isomorphism takes~{\bf e}qref{compcond} for
$f$-vertices to the same statement for $h$-vertices, which has
been already proved.
We are left with the $\varphi$-region.
If $v$ is a vertex that corresponds to
$\varphi_{kl}$, $k > 1$, $l>1$, $k+l < n$, then by~{\bf e}qref{tildefy}
{\bf m}athfrak begin{equation}\label{fyviah}
y_v = \frac{\varphi_{k,l-1} \varphi_{k-1,l+1}\varphi_{k+1,l}} {\varphi_{k+1,l-1}
\varphi_{k,l+1}\varphi_{k-1,l}}(X,Y) = \frac{h_{\alpha-1,{\bf m}athfrak gamma} h_{\alpha+1,{\bf m}athfrak gamma+1}h_{\alpha,{\bf m}athfrak gamma-1}}
{h_{\alpha,{\bf m}athfrak gamma+1} h_{\alpha+1,{\bf m}athfrak gamma}h_{\alpha-1,{\bf m}athfrak gamma-1}}(B_+'),
{\bf e}nd{equation}
where $\alpha=n-k-l+2$, ${\bf m}athfrak gamma=n-l+2$, $B_+'=(B_+)_{[2,n]}^{[2,n]}$
(here we use the identity $h_{ij}(B_+)=h_{i-1,j-1}(B_+')$ for $i,j>1$).
In view of Lemma~{\bf r}ef{PoissonB} and Remark~{\bf r}ef{MMJcompat}, we can
establish~{\bf e}qref{compcond} for $v$ by applying the same reasoning
as in the case of $f$-vertices. To include the case $k=1$ it suffices
to use the same convention $h_{i,n+1}{\bf e}quiv1$ as above.
Therefore, the only vertices left to
consider are the ones corresponding to $\varphi_{k,n-k}$, $1\le k\le n-1$, and
$\varphi_{k1}$, $1\le k\le n-2$. They are treated by straightforward, albeit lengthy,
calculations based on Lemma~{\bf r}ef{PoissonB}, equations~{\bf e}qref{tildefy}, {\bf e}qref{aux_psi}, and the fourth equation in Lemma~{\bf r}ef{cross-inv}.
Note that in all the cases equation~{\bf e}qref{compcond} is satisfied with $\lambda=-1$.
{\bf m}athfrak begin{remark}\label{Brank}
It follows from~{\bf e}qref{compcond} and Proposition~{\bf r}ef{compatchar} that the exchange matrix corresponding to the extended seed
${\bf w}idetilde{\Sigma}_n=(F_n, Q_n, {{\bf m}athcal P}_n)$
is of full rank.
{\bf e}nd{remark}
\operatorname{sign}ubsection{Irreducibility: the proof of Theorem~{\bf r}ef{irredinbas}}
The claim is clear for the functions $g_{ij}$, $h_{ij}$ and $f_{kl}$, since each one of them is a minor of
a matrix whose entries are independent variables (see e.g.~\cite[Ch.~XIV, Th.~1]{Boch}).
Functions $c_k$, $1\le k\le n-1$, are sums of such minors. Consequently, each $c_k$
is linear in each of the variables $x_{ij}$, $y_{ij}$. Assume that $c_k=P_1P_2$, where $P_1$ and $P_2$ are nonconstant polynomials,
and that $P_1$ is linear in $y_{11}$; hence $P_2$ does not depend
on $y_{11}$. If $P_2$ is linear in some $y_{1j}$, $2\le j\le n$, then $c_k$ contains a multiple of the product $y_{11}y_{1j}$, a contradiction.
Therefore, $P_1$ is linear in each of the variables $y_{1j}$, $1\le j\le n$. If $P_2$ is linear in $z_{ij}$ for some values of $i$ and $j$ (here $z$ is either $x$ or $y$),
then $c_k$ contains a multiple of the product $y_{1j}z_{ij}$, a contradiction. Hence, $P_2$ is constant, contradicting the assumption, and so $c_k$ is irreducible.
Our goal is to prove
{\bf m}athfrak begin{proposition}\label{irredfy}
$\varphi_{kl}(X,Y)$ is an irreducible polynomial in the entries of $X$ and $Y$.
{\bf e}nd{proposition}
{\bf m}athfrak begin{proof} We first aim at a weaker statement:
{\bf m}athfrak begin{proposition}\label{irredfyu}
$\varphi_{kl}(I,Y)$ is irreducible in the entries of $Y$.
{\bf e}nd{proposition}
{\bf m}athfrak begin{proof}
In this case $U=X^{-1}Y=Y$,
so we write ${\bf p}si_{kl}(U)$ instead of $\varphi_{kl}(I,Y)$. It will be convenient to indicate explicitly the size of $U$ and
to write ${\bf p}si_{kl}^{[n]}(U)$ for the function ${\bf p}si_{kl}$ evaluated on an $n \times n$ matrix
$$
U={\bf m}athfrak begin{pmatrix} u_{11} & u_{12}& {\bf m}athbf dots & u_{1n}\cr
u_{21} & u_{22} & {\bf m}athbf dots & u_{2n}\cr
\vdots & & & \vdots \cr
u_{n1} & u_{n2} & {\bf m}athbf dots & u_{nn}
{\bf e}nd{pmatrix}.
$$
We start with the following observations.
{\bf m}athfrak begin{lemma}\label{coeff}
{{\bf r}m (i)} Consider ${\bf p}si_{kl}^{[n]}(U)$ as a polynomial in $u_{jn}$, then its leading coefficient does not depend on the entries
$u_{ji}$, $1\le i\le n-1$, nor on $u_{in}$, $1\le i\le n$, $i\mathfrak ne j$.
{{\bf r}m (ii)} The degree of ${\bf p}si_{kl}^{[n]}(U)$ as a polynomial in $u_{1n}$ equals $n-k-l+1$.
{\bf e}nd{lemma}
{\bf m}athfrak begin{proof} Explicit computation.
{\bf e}nd{proof}
Next, let us find a specialization of the variables $u_{ij}$ such that the corresponding $\psi_{kl}(U)$ is irreducible.
Define a polynomial in two variables $z, t$ by
\[
P_m(z,t)=t^{m-1}z^m+\sum_{i=0}^{m-2}(-1)^{m-i-1}\binom mi \frac{t^{m-1}-t^i}{t-1}z^i.
\]
This is an explicit expression for the determinant of the $m\times m$ matrix
\[
\begin{pmatrix} z & 1 & 1 & 1 &\dots & 1\cr
1 & tz & t & t &\dots & t \cr
1 & 1 & tz & t &\dots & t \cr
\ddots & & & & \ddots & \cr
1 & 1 & 1 & 1 & \dots & tz
\end{pmatrix}.
\]
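For the two smallest cases a direct check gives
\[
P_2(z,t)=\det\begin{pmatrix} z & 1\\ 1 & tz\end{pmatrix}=tz^2-1,\qquad
P_3(z,t)=\det\begin{pmatrix} z & 1 & 1\\ 1 & tz & t\\ 1 & 1 & tz\end{pmatrix}=t^2z^3-3tz+t+1,
\]
in agreement with the formula above. In particular, the exponent vectors $(\deg_z,\deg_t)$ of the monomials of $P_3$ are $(3,2)$, $(1,1)$, $(0,1)$, $(0,0)$, so its Newton polygon is the triangle with vertices $(3,2)$, $(0,1)$, $(0,0)$, matching the description in the proof of the next lemma.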
\begin{lemma}\label{polytwo} For any $m\ge 2$,
$P_m(z,t)$ is an irreducible polynomial.
\end{lemma}
\begin{proof}
It is easy to see that a polynomial in two variables is irreducible if its Newton polygon cannot be represented as a Minkowski
sum of two polygons, each containing more than one point.
The Newton polygon of $P_m(z,t)$ in the $(z,t)$-plane has the following vertices: $(m,m-1), (0,m-2), (0,m-3),\dots, (0,1), (0,0)$.
It contains two non-vertical sides,
connecting $(m,m-1)$ with $(0,m-2)$ and with $(0,0)$, respectively; all the other sides are vertical. Assume that two polygons as above exist.
Then both non-vertical sides would have to belong to the same
summand. However, it is not possible to complete this summand to a polygon without using all the remaining vertical sides, a contradiction.
\end{proof}
{\bf m}athfrak begin{lemma}\label{expform}
Define
an $n\times n$ matrix $M_{kl}$ via
{\bf m}athfrak begin{gather*}
M_{kl}={\bf m}athfrak begin{pmatrix}
0 & 0 & 1 & z \cr
t & 0 & 0 & 1 \cr
0 & \mathbf 1_{n-3} & 0 & 1 \cr
0 & 0 & 0 & 1 {\bf e}nd{pmatrix} {\bf q}quad \text{if $l=1$},\\
M_{kl}={\bf m}athfrak begin{pmatrix}
0 & 0 & 1 & 0 & 0 & 0 & z \cr
t & 0 & 0 & 0 & 0 & 0 & 1 \cr
0 & 0 & 0 & 0 & 0 & \mathbf 1_{l-1} & 1 \cr
0 & 1 & 0 & 0 & 0 & 0 & 1 \cr
0 & 0 & 0 & 0 & \mathbf 1_{m-1} & 0 & 1 \cr
0 & 0 & 0 & \mathbf 1_{l-2} & 0 & 0 & 1 \cr
0 & 0 & 0 & 0 & 0 & 0 & 1 {\bf e}nd{pmatrix}{\bf q}quad \text{if $k{\bf m}athfrak ge l$, $l>1$},\\
M_{kl}={\bf m}athfrak begin{pmatrix}
0 & 0 & 0 & 0 & 1 & 0 & z \cr
t & 0 & 0 & 0 & 0 & 0 & 1 \cr
0 & 0 & 0 & 0 & 0 & \mathbf 1_{k-1} & 1 \cr
0 & 1 & 0 & 0 & 0 & 0 & 1 \cr
0 & 0 & 0 & \mathbf 1_{m-2} & 0 & 0 & 1 \cr
0 & 0 & \mathbf 1_{k-1} & 0 & 0 & 0 & 1 \cr
0 & 0 & 0 & 0 & 0 & 0 & 1 {\bf e}nd{pmatrix}{\bf q}quad \text{if $k\le l$, $l>1$}.
{\bf e}nd{gather*}
where $m={\bf m}ax\{n-2l, n-2k\}$.
Then ${\bf m}athbf det M_{kl}={\bf p}m t$, and
\[
{\bf p}si^{[n]}_{kl}(M_{kl})={\bf p}m P_{n-k-l+1}(z,t).
\]
{\bf e}nd{lemma}
{\bf m}athfrak begin{proof}
Explicit computation.
{\bf e}nd{proof}
Let us proceed with the proof of Proposition~\ref{irredfyu}. Assume, to the contrary, that $\psi_{kl}^{[n]}(U)=P_1P_2$ for some
values of $n$, $k$ and $l$, where both $P_1$ and $P_2$ are nontrivial. It follows from Lemmas~\ref{expform},~\ref{polytwo} and~\ref{coeff}(ii) that only
one of $P_1$ and $P_2$ depends on $u_{1n}$, say, $P_1$.
Therefore, $P_2(M_{kl})$ is a nonzero constant. Consequently, $P_2$ contains a monomial
$\prod_{j=3}^{n-1}u_{j\sigma_{kl}(j)}^{r_j}$,
where $r_j\ge 0$ and $\sigma_{kl}\in S_{n-1}$ is the permutation defined by the first $n-1$ rows and columns of $M_{kl}$.
Assume that there exists $j$ such that $r_j>0$, and consider $u_{jn}$. If $P_2$ depends on $u_{jn}$, then the leading coefficient of $\psi_{kl}^{[n]}(U)$ at
$u_{1n}$ depends on $u_{jn}$, contradicting Lemma~\ref{coeff}(i). Therefore, $P_2$ does not depend on $u_{jn}$, and hence the leading coefficient of $\psi_{kl}^{[n]}(U)$ at
$u_{jn}$ depends on $u_{j\sigma_{kl}(j)}$, which again contradicts Lemma~\ref{coeff}(i). We thus obtain that $r_j=0$ for $3\le j\le n-1$, which means that
$P_2$ is a constant, and
Proposition~\ref{irredfyu} holds true.
\end{proof}
The next step of the proof is the following statement:
{\bf m}athfrak begin{proposition}\label{irredfyuinv}
$\varphi_{kl}(X,I)$ is irreducible in the entries of $X$.
{\bf e}nd{proposition}
{\bf m}athfrak begin{proof}
Note that
{\bf m}athfrak begin{multline*}
\left[I^{[n-k+1,n]}\;(X^{-1})^{[n-l+1,n]}\;(X^{-2})^{[n]}{\bf m}athbf dots(X^{k+l-n-1})^{[n]}{\bf r}ight]={\bf m}athfrak hfill\\
X^{k+l-n-1}\left[(X^{n-k-l+1})^{[n-k+1,n]}\;(X^{n-k-l})^{[n-l+1,n]}\;(X^{n-k-l-1})^{[n]}\;{\bf m}athbf dots I^{[n]}{\bf r}ight],
{\bf e}nd{multline*}
hence it suffices to prove that the functions
$$
{\bf m}athfrak bar{\bf p}si_{kl}={\bf m}athbf det\left[(X^{n-k-l+1})^{[n-k+1,n]}\;(X^{n-k-l})^{[n-l+1,n]}\;(X^{n-k-l-1})^{[n]}\;{\bf m}athbf dots I^{[n]}{\bf r}ight]
$$
are irreducible. As in the proof of Proposition~\ref{irredfyu}, we write
$\bar\psi_{kl}^{[n]}(X)$ for the function $\bar\psi_{kl}$ evaluated on an $n \times n$ matrix
$$
X={\bf m}athfrak begin{pmatrix} x_{11} & x_{12}& {\bf m}athbf dots & x_{1n}\cr
x_{21} & x_{22} & {\bf m}athbf dots & x_{2n}\cr
\vdots & & & \vdots \cr
x_{n1} & x_{n2} & {\bf m}athbf dots & x_{nn}
{\bf e}nd{pmatrix}.
$$
Similarly to the case of ${\bf p}si_{kl}^{[n]}(U)$, we have the following observations:
{\bf m}athfrak begin{lemma}\label{coefpsi}
{{\bf r}m (i)} Consider ${\bf m}athfrak bar{\bf p}si_{kl}^{[n]}(X)$ as a polynomial in $x_{jn}$, then its leading coefficient does not depend on the entries
$x_{ji}$, $1\le i\le n-1$, and $x_{in}$, $1\le i\le n$, $i\mathfrak ne j$.
{{\bf r}m (ii)} The degree of ${\bf m}athfrak bar{\bf p}si_{kl}^{[n]}(X)$ as a polynomial in $x_{1n}$ equals $n-k-l+1$.
{\bf e}nd{lemma}
{\bf m}athfrak begin{proof} Explicit computation.
{\bf e}nd{proof}
As in the previous case, we find a specialization of variables $x_{ij}$, such that the corresponding ${\bf m}athfrak bar{\bf p}si_{kl}(X)$
is irreducible.
{\bf m}athfrak begin{lemma}\label{expformpsi}
Define
an $n\times n$ matrix ${\bf m}athfrak bar M_{kl}$ via
{\bf m}athfrak begin{equation*}
{\bf m}athfrak bar M_{kl}={\bf m}athfrak begin{pmatrix}
0 & 0 & 1 & 0 & 0 & 0 & 0 & z \cr
t & 0 & 0 & 0 & 0 & 0 & 0 & 1 \cr
0 & \mathbf 1_{m-1} & 0 & 0 & 0 & 0 & 0 & 1 \cr
0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \cr
0 & 0 & 0 & \mathbf 1_{l-2} & 0 & 0 & 0 & 1 \cr
0 & 0 & 0 & 0 & 0 & 0 & \mathbf 1_{k-2} & 1 \cr
0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \cr
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 {\bf e}nd{pmatrix},
{\bf e}nd{equation*}
where $m=n-k-l$.
Then ${\bf m}athbf det {\bf m}athfrak bar M_{kl}={\bf p}m t$, and
\[
{\bf m}athfrak bar{\bf p}si^{[n]}_{kl}({\bf m}athfrak bar M_{kl})={\bf p}m P_{n-k-l+1}(z,t).
\]
{\bf e}nd{lemma}
{\bf m}athfrak begin{proof}
Similar to the proof of Lemma~{\bf r}ef{expform}.
{\bf e}nd{proof}
The rest of the proof of Proposition~{\bf r}ef{irredfyuinv} is parallel to the proof of Proposition~{\bf r}ef{irredfyu}.
{\bf e}nd{proof}
Assume now that $\varphi_{kl}(X,Y)=P_1P_2$. It follows from Propositions~{\bf r}ef{irredfyu} and~{\bf r}ef{irredfyuinv}
and the quasihomogeneity of $\varphi_{kl}(X,Y)$
that one
of $P_1, P_2$ depends only on $X$, and the other only on $Y$; moreover, both $P_1(I)$ and $P_2(I)$ are nonzero complex numbers.
Consequently, $\varphi_{kl}(I,I)$ is a nonzero complex number, which contradicts the explicit expression for $\varphi_{kl}(X,Y)$.
{\bf e}nd{proof}
The proof of Theorem~{\bf r}ef{irredinbas} is complete.
\operatorname{sign}ubsection{Regularity}
We now proceed with condition (ii) of Proposition~{\bf r}ef{regcoin}.
By Theorem~{\bf r}ef{irredinbas}, we have to check that for any mutable vertex of $Q_n$ the new variable in the
adjacent cluster is regular and not divisible by the corresponding variable in the initial cluster.
Let us consider the exchange relations according to the type of the vertices.
Regularity of the function $\varphi_{11}'$ that replaces $\varphi_{11}$ follows from Proposition~{\bf r}ef{polyrel}.
Let us prove that this function is not divisible by $\varphi_{11}$.
Indeed, assume the contrary, and define an $n \times n$ matrix $\Sigma_{11}$ via
\[
\Sigma_{11}={\bf m}athfrak begin{pmatrix}
0 & 0 & 0 & 1 \cr
\mathbf 1_{n-3} & 0 & 0 & 0 \cr
0 & t & 0 & 0 \cr
0 & 0 & 1 & 0 {\bf e}nd{pmatrix}.
\]
An explicit computation shows that ${\bf m}athbf det\Sigma_{11}={\bf p}m t$ and
\[
\varphi_{11}(I,\Sigma_{11})={\bf p}m t,{\bf q}quad \varphi_{21}(I,\Sigma_{11})={\bf p}m 1,{\bf q}quad \varphi_{12}(I,\Sigma_{11})=0.
\]
Therefore,~{\bf e}qref{general} yields
$$
\varphi_{11}(I,\Sigma_{11})\varphi'_{11}(I,\Sigma_{11})=\varphi_{21}(I,\Sigma_{11}){\bf m}athbf det\Sigma_{11},
$$
and hence
$\varphi'_{11}(I,\Sigma_{11})={\bf p}m 1$, which contradicts the divisibility assumption.
Denote by $\Phi_{pl}^*$ the matrix obtained from $\Phi_{pl}$ via replacing the column $U^{[n-l+1]}$ by the column $U^{[n-l]}$,
and put $\tilde\varphi^*_{pl}=\det \Phi^*_{pl}$. Clearly, $\varphi^*_{pl}=-s_{pl}\tilde\varphi^*_{pl}\det X^{t_{p+l}}$ is regular (here and in what follows we set $t_s=n-s+1$).
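In what follows we repeatedly invoke the short Pl\"ucker relation in the following form (a standard identity, recalled here for convenience): if $M$ has $N$ rows and $N+2$ columns, a set $S$ of $N-2$ of its columns is fixed, and $\Delta_{uv}$ denotes the determinant of the $N\times N$ submatrix formed by the columns of $S$ followed by the columns $u$ and $v$ (in this order), then for any four remaining columns $a$, $b$, $c$, $d$, taken in this order,
\[
\Delta_{ac}\Delta_{bd}=\Delta_{ab}\Delta_{cd}+\Delta_{ad}\Delta_{bc}.
\]
In the simplest case of a $2\times 4$ matrix (with $S$ empty and $\Delta_{ij}$ the $2\times 2$ minor on columns $i$, $j$) this reads $\Delta_{13}\Delta_{24}=\Delta_{12}\Delta_{34}+\Delta_{14}\Delta_{23}$.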
By the short Pl\"ucker relation for the matrix
$$
\left[{\bf m}athfrak begin{array}{ccccc}(U^0)^{[n-p+1,n]}& U^{[n-l,n]} & (U^2)^{[n]} & {\bf m}athbf dots & (U^{n-p-l+2})^{[n]}{\bf e}nd{array}{\bf r}ight]
$$
and columns
$I^{[n-p+1]}$, $U^{[n-l]}$, $U^{[n-l+1]}$, $(U^{n-p-l+2})^{[n]}$, one has
{\bf m}athfrak begin{equation*}
\tilde\varphi_{pl}\tilde\varphi^*_{p-1,l}=\tilde\varphi_{pl}^*\tilde\varphi_{p-1,l}+
\tilde\varphi_{p,l-1}\tilde\varphi_{p-1,l+1},
{\bf e}nd{equation*}
provided $p,l>1$ and $p+l\le n$. Multiplying the above relation by
${\bf m}athbf det X^{t_{p+l}+t_{p+l-1}}$
and $s_{p,l-1}s_{p-1,l+1}=-s_{pl}s_{p-1,l}$, one gets
{\bf m}athfrak begin{equation}\label{shpl}
\varphi_{pl}\varphi^*_{p-1,l}=\varphi_{pl}^*\varphi_{p-1,l}+\varphi_{p,l-1}\varphi_{p-1,l+1}.
{\bf e}nd{equation}
A linear combination of~\eqref{shpl}
for $p=k$ and for $p=k+1$ yields
\begin{equation}\label{fyklex}
\varphi_{kl} (\varphi_{k+1,l}\varphi^*_{k-1,l}-\varphi_{k-1,l}\varphi^*_{k+1,l})
=\varphi_{k+1,l} \varphi_{k,l-1} \varphi_{k-1,l+1}+\varphi_{k+1,l-1} \varphi_{k,l+1} \varphi_{k-1,l},
\end{equation}
which means that the function that replaces $\varphi_{kl}$ after the transformation is regular whenever $k,l>1$ and $k+l<n$.
Let us prove that $\varphi_{k+1,l}\varphi^*_{k-1,l}-\varphi_{k-1,l}\varphi^*_{k+1,l}$ is not divisible by $\varphi_{kl}$. Indeed, assume the contrary, and define an $n \times n$
matrix $\Sigma_{kl}$ via
{\bf m}athfrak begin{gather*}
\Sigma_{kl}={\bf m}athfrak begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 1 \cr
0 & 0 & 0 & 1 & 0 & 0 & 0 \cr
0 & 0 & 0 & 0 & 0 & J_{k-2} & 0 \cr
1 & 0 & 0 & 0 & 0 & 0 & 0 \cr
0 & 0 & \mathbf 1_{m} & 0 & 0 & 0 & 0 \cr
0 & 0 & 0 & 0 & 1 & 0 & 0 \cr
0 & J_{k-1} & 0 & 0 & 0 & 0 & 0 {\bf e}nd{pmatrix}, {\bf q}quad \text{if $k=l$},\\
\Sigma_{kl}={\bf m}athfrak begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \cr
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \cr
0 & 0 & 0 & 0 & 0 & 0 & 0 & J_{l-2} & 0 \cr
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \cr
0 & 0 & \mathbf 1_{m} & 0 & 0 & 0 & 0 & 0 & 0 \cr
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \cr
0 & 0 & 0 & 0 & 0 & \mathbf 1_{k-l-1} & 0 & 0 & 0\cr
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \cr
0& J_{l-1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 {\bf e}nd{pmatrix}+e_{2,n-l},{\bf q}quad \text{if $k{\bf m}athfrak ge l+1$},\\
\Sigma_{kl}={\bf m}athfrak begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 1 \cr
\mathbf 1_{m+1} & 0 & 0 & 0 & 0 & 0 \cr
0 & 0 & 0 & 0 & \mathbf 1_{l-2} & 0 \cr
0 & 1 & 0 & 0 & 0 & 0 \cr
0 & 0 & 0 & 1 & 0 & 0 \cr
0 & 0 & \mathbf 1_{k-1} & 0 & 0 & 0 {\bf e}nd{pmatrix}+e_{m+2,n-l},{\bf q}quad \text{if $k< l$},
{\bf e}nd{gather*}
where $m=n-k-l-1$ and $J_r$ is the $r\times r$ antidiagonal matrix with all antidiagonal entries equal to $1$. An explicit computation shows that $\det\Sigma_{kl}=\pm1$, and
{\bf m}athfrak begin{gather*}
\varphi_{k-1,l}(I,\Sigma_{kl})={\bf p}m1,{\bf q}quad \varphi_{k+1,l}^*(I,\Sigma_{kl})={\bf p}m1,\\
\varphi_{k+1,l}(I,\Sigma_{kl})=\varphi_{k-1,l}^*(I,\Sigma_{kl})=\varphi_{kl}(I,\Sigma_{kl})=0,
{\bf e}nd{gather*}
which contradicts the divisibility assumption.
Extend the definition of ${{\bf m}athcal P}hi_{pq}$ and $s_{pq}$ to the case $p=0$, denote by ${{\bf m}athcal P}hi^{**}_{pq}$ the matrix obtained from ${{\bf m}athcal P}hi_{pq}$ by
deleting the last column and inserting $(U^2)^{[n-1]}$ as the $(p+q+1)$-st column, and put $\tilde\varphi^{**}_{pq}={\bf m}athbf det{{\bf m}athcal P}hi^{**}_{pq}$.
Clearly, $\varphi^{**}_{pq}=s_{q,p+2}\tilde\varphi^{**}_{pq}{\bf m}athbf det X^{t_{p+q}-1}$ is regular.
It is easy to see that $U{{\bf m}athcal P}hi_{p1}={{\bf m}athcal P}hi_{0p}$ and $U{{\bf m}athcal P}hi_{p2}={{\bf m}athcal P}hi^{**}_{0p}$. Therefore, by the short Pl\"ucker relation
for the matrix
$$
\left[{\bf m}athfrak begin{array}{cccccc} I^{[n]} & U^{[n-k+1,n]} & (U^2)^{[n-1,n]} & (U^3)^{[n]} {\bf m}athbf dots & (U^{n-k+1})^{[n]}{\bf e}nd{array}{\bf r}ight]
$$
and columns
$I^{[n]}$, $U^{[n-k+1]}$, $(U^2)^{[n-1]}$, $(U^{n-k+1})^{[n]}$, one has
{\bf m}athfrak begin{equation*}
\tilde\varphi_{0k}\tilde\varphi^{**}_{1,k-1}=\tilde\varphi^{**}_{0,k-1}\tilde\varphi_{1k}+\tilde\varphi^{**}_{0k}\tilde\varphi_{1,k-1}.
{\bf e}nd{equation*}
Multiplying the above relation by ${\bf m}athbf det X^{2t_{k}-1}={\bf m}athbf det X^{t_{k-1}+t_{k+1}-1}$ and $s_{k2}s_{1,k-1}=s_{k-1,2}s_{1k}=s_{0k}s_{k-1,3}$,
taking into account that $\varphi_{0p}=h_{11}\varphi_{p1}$, $\varphi^{**}_{0p}=h_{11}\varphi_{p2}$, $s_{k1}=(-1)^ns_{0k}$ and dividing the above relation by $h_{11}$ we arrive at
{\bf m}athfrak begin{equation}\label{k1neigh}
(-1)^n\varphi_{k1}\varphi^{**}_{1,k-1}=\varphi_{k-1,2}\varphi_{1k}+\varphi_{k2}\varphi_{1,k-1},
{\bf e}nd{equation}
which means that the function that replaces $\varphi_{k1}$ after the transformation is regular whenever $1<k<n-1$.
To prove that $\varphi^{**}_{1,k-1}$ is not divisible by $\varphi_{k1}$, we assume the contrary and define an $n \times n$ matrix $\Sigma_{k1}$ via
{\bf m}athfrak begin{gather*}
\Sigma_{k1}={\bf m}athfrak begin{pmatrix}
0 & 0 & 0 & 1 \cr
\mathbf 1_{n-4} & 0 & 0 & 0 \cr
0 & 0 & 1 & 0 \cr
0 & \mathbf 1_{2} & 0 & 0 {\bf e}nd{pmatrix}+e_{n-1,n-l}, {\bf q}quad \text{if $k=2$},\\
\Sigma_{k1}={\bf m}athfrak begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 1 \cr
\mathbf 1_{m}& 0 & 0 & 0 & 0 & 0 \cr
0 & 0 & 1 & 0 & 0 & 0 \cr
0 & 0 & 0 & 0 & 1 & 0 \cr
0& 0 & 0 & \mathbf 1_{k-3} & 0 & 0 \cr
0 & \mathbf 1_2 & 0 & 0 & 0 & 0 {\bf e}nd{pmatrix},{\bf q}quad \text{if $k{\bf m}athfrak ge 3$},
{\bf e}nd{gather*}
where $m=n-k-2$. An explicit computation shows that ${\bf m}athbf det\Sigma_{k1}={\bf p}m1$, and
\[
\varphi^{**}_{1,k-1}(I,\Sigma_{k1})={\bf p}m1,{\bf q}quad \varphi_{kl}(I,\Sigma_{k1})=0,
\]
which contradicts the divisibility assumption.
With the extended definition of ${{\bf m}athcal P}hi_{pq}$, relation~{\bf e}qref{shpl} becomes valid for $p=1$. Taking a linear combination of~{\bf e}qref{shpl} for $p=1$ and $p=2$
one gets
{\bf m}athfrak begin{equation*}
\varphi_{1l}(\varphi_{2l}\varphi^*_{0l}-\varphi_{0l}\varphi^*_{2l})=\varphi_{2l}\varphi_{1,l-1}\varphi_{0,l+1}+
\varphi_{2,l-1}\varphi_{1,l+1}\varphi_{0l}.
{\bf e}nd{equation*}
Recall that $\varphi_{0p}=h_{11}\varphi_{p1}$. Besides, ${{\bf m}athcal P}hi^*_{0l}=U{{\bf m}athcal P}hi^\circ_{l1}$, where ${{\bf m}athcal P}hi_{pq}^\circ$ is the matrix obtained from ${{\bf m}athcal P}hi_{pq}$
via replacing the column $I^{[n-p+1]}$ by the column $I^{[n-p]}$. Denote $\tilde\varphi^\circ_{pq}={\bf m}athbf det{{\bf m}athcal P}hi^\circ_{pq}$;
clearly, $\varphi^\circ_{pq}=-s_{q-1,p}\tilde\varphi^\circ_{pq}{\bf m}athbf det X^{t_{p+q}}$ is regular. We thus have $\tilde\varphi^*_{0l}={\bf m}athbf det U\tilde\varphi^\circ_{l1}$, and hence
$\varphi^*_{0l}=h_{11}\varphi^\circ_{l1}$. Consequently,
{\bf m}athfrak begin{equation}\label{fy1lex}
\varphi_{1l}(\varphi_{2l}\varphi^\circ_{l1}-\varphi_{l1}\varphi^*_{2l})=\varphi_{2l}\varphi_{1,l-1}\varphi_{l+1,1}+
\varphi_{2,l-1}\varphi_{1,l+1}\varphi_{l1},
{\bf e}nd{equation}
which means that the function that replaces $\varphi_{1l}$ after the transformation is regular whenever $1<l<n-1$.
To prove that $\varphi_{2l}\varphi^\circ_{l1}-\varphi_{l1}\varphi^*_{2l}$ is not divisible by $\varphi_{1l}$, we assume the contrary and define an $n \times n$ matrix $\Sigma_{1l}$ via
\[
\Sigma_{1l}={\bf m}athfrak begin{pmatrix}
0 & 0 & 0 & 1 \cr
\mathbf 1_{m} & 0 & 0 & 0 \cr
0 & 0 & \mathbf 1_{l-2} & 0 \cr
0 & \mathbf 1_{2} & 0 & 0 {\bf e}nd{pmatrix}+e_{n-l,n-l},
\]
where $m=n-l-1$. An explicit computation shows that ${\bf m}athbf det\Sigma_{1l}={\bf p}m1$, and
{\bf m}athfrak begin{gather*}
\varphi_{l1}(I,\Sigma_{1l})={\bf p}m1,{\bf q}quad \varphi_{2l}^*(I,\Sigma_{1l})={\bf p}m1,\\
\varphi_{2l}(I,\Sigma_{1l})=\varphi_{l1}^\circ(I,\Sigma_{1l})=\varphi_{1l}(I,\Sigma_{1l})=0,
{\bf e}nd{gather*}
which contradicts the divisibility assumption.
Define an $n\times n$ matrix $F^\lozenge_{k,n-k-1}$, $0\le k\le n-2$, by
$$
F^\lozenge_{k,n-k-1}=\left[{\bf m}athfrak begin{array}{ccc}I^{[1]} & X^{[n-k+1,n]} & Y^{[k+2,n]}{\bf e}nd{array}{\bf r}ight].
$$
Clearly, $f_{k,n-k-1}={\bf m}athbf det F^\lozenge_{k,n-k-1}$. Besides, denote by ${{\bf m}athcal P}hi^\lozenge_{k,n-k-1}$ the matrix obtained from $X{{\bf m}athcal P}hi_{k,n-k-1}$ via replacing
the column $X^{[n-k+1]}$ by the column $I^{[1]}$. Denote $\tilde\varphi^\lozenge_{k,n-k-1}={\bf m}athbf det{{\bf m}athcal P}hi^\lozenge_{k,n-k-1}$. Clearly,
$\varphi^\lozenge_{k,n-k-1}=s_{k,n-k-1}\tilde\varphi^\lozenge_{k,n-k-1}{\bf m}athbf det X$ is regular.
Therefore, the short Pl\"ucker relation for the matrix
$$
\left[{\bf m}athfrak begin{array}{cccc}I^{[1]}& X^{[n-k+1,n]} & Y^{[k+1,n]} & (YX^{-1}Y)^{[n]}{\bf e}nd{array}{\bf r}ight]
$$
and columns
$I^{[1]}$, $X^{[n-k+1]}$, $Y^{[k+1]}$, $(YX^{-1}Y)^{[n]}$ involves submatrices
$X{{\bf m}athcal P}hi_{k,n-k}$, $X{{\bf m}athcal P}hi_{k,n-k-1}$, $X{{\bf m}athcal P}hi_{k-1,n-k}$, ${{\bf m}athcal P}hi^\lozenge_{k,n-k-1}$, $F^\lozenge_{k,n-k-1}$, and $F^\lozenge_{k-1,n-k}$ and gives
{\bf m}athfrak begin{equation*}
{\bf m}athbf det X\tilde\varphi_{k,n-k}\tilde\varphi^\lozenge_{k,n-k-1}=-{\bf m}athbf det X\tilde\varphi_{k-1,n-k}f_{k,n-k-1}+{\bf m}athbf det X\tilde\varphi_{k,n-k-1}f_{k-1,n-k}.
{\bf e}nd{equation*}
Multiplying by ${\bf m}athbf det X$ and $s_{k,n-k-1}=-s_{k-1,n-k}$ and taking into account that $s_{k,n-k}=1$, we arrive at
{\bf m}athfrak begin{equation*}
\varphi_{k,n-k}\varphi^\lozenge_{k,n-k-1}=\varphi_{k-1,n-k}f_{k,n-k-1}+\varphi_{k,n-k-1}f_{k-1,n-k},
{\bf e}nd{equation*}
which together with the description of the quiver $Q_n$ means that the function that replaces $\varphi_{k,n-k}$ after the transformation is regular whenever $1<k<n-1$.
For $k=1$ the latter relation is modified taking into account that $f_{0,n-1}=h_{22}$ and $\varphi_{0,n-1}=s_{0,n-1}s_{n-1,1}h_{11}\varphi_{n-1,1}$,
which together with $s_{0,n-1}=s_{n-1,1}=1$ gives
{\bf m}athfrak begin{equation*}
\varphi_{1,n-1}\varphi^\lozenge_{1,n-2}=h_{11}\varphi_{n-1,1}f_{1,n-2}+\varphi_{1,n-2}h_{22}.
{\bf e}nd{equation*}
This means that the function that replaces $\varphi_{1,n-1}$ after the transformation is regular.
To prove that $\varphi^{\lozenge}_{k,n-k-1}$ is not divisible by $\varphi_{k,n-k}$, it is enough to note
that $\varphi^{\lozenge}_{k,n-k-1}$ is irreducible as a minor of a matrix whose entries are independent variables.
Define two $n \times n$ matrices ${{\bf m}athcal P}hi^\operatorname{sign}quare_{2,n-2}$ and $F^\operatorname{sign}quare_{n-2,1}$ via replacing the first column of ${{\bf m}athcal P}hi_{2,n-2}$ by $U^{[1]}$ and the first
column of $F^\lozenge_{n-2,1}$ by $X^{[1]}$. Clearly, $f^\operatorname{sign}quare_{n-2,1}={\bf m}athbf det F^\operatorname{sign}quare_{n-2,1}$ is regular; besides, denote
$\tilde\varphi^\operatorname{sign}quare_{2,n-2}={\bf m}athbf det{{\bf m}athcal P}hi^\operatorname{sign}quare_{2,n-2}$, then
$\varphi^\operatorname{sign}quare_{2,n-2}=-s_{2,n-2}\tilde\varphi^\operatorname{sign}quare_{2,n-2}{\bf m}athbf det X$ is regular. The short Pl\"ucker relation for the matrix
$$
\left[{\bf m}athfrak begin{array}{cccc}U^{[1]}& I^{[n]} & U^{[2,n]} & (U^2)^{[n]}{\bf e}nd{array}{\bf r}ight]
$$
and columns
$U^{[1]}$, $I^{[n]}$, $U^{[2]}$, $(U^2)^{[n]}$ involves submatrices
$U{{\bf m}athcal P}hi_{n-1,1}$, ${{\bf m}athcal P}hi_{1,n-2}$,
${{\bf m}athcal P}hi_{1,n-1}$, $UX^{-1}F^\operatorname{sign}quare_{n-2,1}$, ${{\bf m}athcal P}hi^\operatorname{sign}quare_{2,n-2}$, and $U$ and gives
{\bf m}athfrak begin{equation*}
{\bf m}athbf det U\tilde\varphi_{n-1,1}\tilde\varphi^\operatorname{sign}quare_{2,n-2}={\bf m}athbf det U\tilde\varphi_{1,n-2}-{\bf m}athbf det U\tilde\varphi_{1,n-1}f^\operatorname{sign}quare_{n-2,1}.
{\bf e}nd{equation*}
Multiplying the above relation by ${\bf m}athbf det X^2$ and $s_{1,n-2}=-s_{1,n-1}$, dividing by ${\bf m}athbf det U$ and taking into account that $s_{k,n-k}=1$ for $0\le k \le n$ one gets
{\bf m}athfrak begin{equation}\label{reg6}
\varphi_{n-1,1}\varphi^\operatorname{sign}quare_{2,n-2}=\varphi_{1,n-2}+\varphi_{1,n-1}f^\operatorname{sign}quare_{n-2,1},
{\bf e}nd{equation}
which means that the function that replaces $\varphi_{n-1,1}$ after the transformation is regular.
To prove that $\varphi^{\square}_{2,n-2}$ is not divisible by $\varphi_{n-1,1}$, it is enough to note that $\varphi_{n-1,1}(I,Y)=y_{1n}$ and
$\varphi^{\square}_{2,n-2}(I,Y)$ is an $(n-1)\times(n-1)$ minor of $Y$.
Define a $(p+q+1)\times (p+q+1)$ matrix $F^*_{pq}$ by
$$
F^*_{pq}=\left[{\bf m}athfrak begin{array}{ccc}I^{[n-p-q+1]} & X^{[n-p+1,n]} & Y^{[n-q+1,n]}{\bf e}nd{array}{\bf r}ight]_{[n-p-q,n]},
$$
and put $f^*_{pq}={\bf m}athbf det F^*_{pq}$.
The short Pl\"ucker relation for the matrix
$$
\left[{\bf m}athfrak begin{array}{ccc}I^{[n-p-q,n-p-q+1]}& X^{[n-p+1,n]} & Y^{[n-q,n]}{\bf e}nd{array}{\bf r}ight]_{[n-p-q,n]}
$$
and columns
$I^{[n-p-q]}$, $I^{[n-p-q+1]}$, $X^{[n-p+1]}$, $Y^{[n-q]}$
gives
{\bf m}athfrak begin{equation*}
f_{pq}f^*_{p-1,q+1}=f_{p-1,q+1}f^*_{pq}+f_{p-1,q}f_{p,q+1}
{\bf e}nd{equation*}
for $p+q<n-1$, $p>0$, $q>0$.
Applying this relation for $(p,q)=(i,j)$ and $(p,q)=(i+1,j-1)$ we get
{\bf m}athfrak begin{equation*}
f_{ij}(f_{i+1,j-1}f^*_{i-1,j+1}-f_{i-1,j+1}f^*_{i+1,j-1})=f_{i+1,j-1}f_{i-1,j}f_{i,j+1}+f_{i,j-1}f_{i+1,j}f_{i-1,j+1},
{\bf e}nd{equation*}
which
means that the function that replaces $f_{ij}$ after the transformation is regular whenever $i+j<n-1$, $i>0$, $j>0$.
To extend the above relation to the case $i+j=n-1$, it is enough to recall that
$f_{k,n-k}=\varphi_{k,n-k}$ by the identification of Remark~\ref{identify}.
To prove that $f'_{ij}=f_{i+1,j-1}f^*_{i-1,j+1}-f_{i-1,j+1}f^*_{i+1,j-1}$ is not divisible by $f_{ij}$, it is enough to show
that $f'_{ij}$ is irreducible. Indeed, $f'_{ij}$ can be written as
\[
f'_{ij}=y_{n-i-j,n-j}f_{i+1,j-1}f_{i-1,j}-x_{n-i-j,n-i}f_{i-1,j+1}f_{i,j-1}+R(X,Y),
\]
where $R$ does not depend on $y_{n-i-j,n-j}$ and $x_{n-i-j,n-i}$. Moreover, $f_{i+1,j-1}$, $f_{i-1,j}$, $f_{i-1,j+1}$, and $f_{i,j-1}$
do not depend on these two variables as well. Consequently, reducibility of $f'_{ij}$ would imply
\[
f'_{ij}=\left(y_{n-i-j,n-j}P(X,Y)+x_{n-i-j,n-i}Q(X,Y)+R'(X,Y){\bf r}ight)R''(X,Y),
\]
which contradicts the irreducibility of $f_{i+1,j-1}$, $f_{i-1,j}$, $f_{i-1,j+1}$, $f_{i,j-1}$.
Define an $(n-i+2)\times (n-i+2)$ matrix $H^\star_{ii}$ by
$$
H^\star_{ii}=\left[\begin{array}{ccc} X^{[n]} & Y^{[i+1,n]} & I^{[n]}\end{array}\right]_{[i-1,n]}
$$
and denote $h^\star_{ii}=\det H^\star_{ii}$. Clearly,
\begin{equation}\label{hii}
h_{ii}^\star = \det \left [\begin{array}{cc} X^{[n]} & Y^{[i+1,n]}\end{array} \right ]_{[i-1,n-1]}
\end{equation}
(expand $\det H^\star_{ii}$ along its last column, whose only nonzero entry is the $1$ in the bottom right corner); in particular, $h_{nn}^\star=x_{n-1,n}$. The short Pl\"ucker relation for the matrix
$$
\left[\begin{array}{cccc} I^{[i-1]} & X^{[n]} & Y^{[i,n]} & I^{[n]}\end{array}\right]_{[i-1,n]}
$$
and columns $I^{[i-1]}$, $X^{[n]}$, $Y^{[i]}$, and $I^{[n]}$ gives
\begin{equation*}
h_{ii}h^\star_{ii}=f_{1,n-i}h_{i-1,i}+f_{1,n-i+1}h_{i,i+1},
\end{equation*}
which means that the function that replaces $h_{ii}$ after the transformation is regular whenever $2\le i\le n$. Note that for $i=n$ we use the convention $h_{n,n+1}=1$, which was already used above.
To prove that $h^\star_{ii}$ is not divisible by $h_{ii}$, it is enough to note
that $h^\star_{ii}$ is irreducible as a minor of a matrix whose entries are independent variables.
Define an $(n-i+2)\times (n-i+2)$ matrix $F^\circ_{n-i+1,1}$ and an $(n-i+1)\times (n-i+1)$ matrix $G^\circ_{ii}$
via replacing the column $X^{[i]}$ with the column $X^{[i-1]}$ in $F_{n-i+1,1}$ and $G_{ii}$, respectively, and denote
$f^\circ_{n-i+1,1}=\det F^\circ_{n-i+1,1}$, $g^\circ_{ii}=\det G^\circ_{ii}$. The short Pl\"ucker relation for the matrix
$$
\left[\begin{array}{ccc} I^{[i-1]} & X^{[i-1,n]} & Y^{[n]}\end{array}\right]_{[i-1,n]}
$$
and columns $I^{[i-1]}$, $X^{[i-1]}$, $X^{[i]}$, and $Y^{[n]}$ gives
\begin{equation}\label{pluone}
g_{ii}f^\circ_{n-i+1,1}=f_{n-i,1}g_{i-1,i-1}+f_{n-i+1,1}g^\circ_{ii}
\end{equation}
for $2\le i\le n$. Taking into account that $g^\circ_{nn}=g_{n,n-1}$ and the description of the quiver $Q_n$, we see that
the function that replaces $g_{nn}$ after the transformation is regular.
To prove that $f^\circ_{n-i+1,1}$ is not divisible by $g_{ii}$, it is enough to note
that $f^\circ_{n-i+1,1}$ is irreducible as a minor of a matrix whose entries are independent variables.
Next, define an $(n-i)\times (n-i)$ matrix $G^\circ_{i+1,i}$ via replacing the column $X^{[i]}$ with the column $X^{[i-1]}$ in
$G_{i+1,i}$, and denote $g^\circ_{i+1,i}=\det G^\circ_{i+1,i}$. The short Pl\"ucker relation for the matrix
$$
\left[\begin{array}{cc} I^{[i-1,i]} & X^{[i-1,n]}\end{array}\right]_{[i-1,n]}
$$
and columns $I^{[i]}$, $X^{[i-1]}$, $X^{[i]}$, and $X^{[n]}$ gives
\begin{equation}\label{plutwo}
g_{i+1,i}g^\circ_{ii}=g_{i,i-1}g_{i+1,i+1}+g_{ii}g^\circ_{i+1,i}
\end{equation}
for $2\le i\le n-1$.
Combining relations~\eqref{pluone} and~\eqref{plutwo}, one gets
\begin{equation*}
g_{ii}(g_{i+1,i}f^\circ_{n-i+1,1}-g^\circ_{i+1,i}f_{n-i+1,1})=f_{n-i,1}g_{i-1,i-1}g_{i+1,i}+f_{n-i+1,1}g_{i,i-1}g_{i+1,i+1},
\end{equation*}
which
means that the function that replaces $g_{ii}$ after the transformation is regular whenever $2\le i\le n-1$.
To prove that $g_{ii}'=g_{i+1,i}f^\circ_{n-i+1,1}-g^\circ_{i+1,i}f_{n-i+1,1}$ is not divisible by $g_{ii}$, define $X^\circ$ to be the lower bidiagonal matrix with
$x^\circ_{ii}=t$ and all other entries in the two diagonals equal to one. Then $g_{i+1,i}(X^\circ)=1$, $f^\circ_{n-i+1,1}(X^\circ,Y)=\pm(y_{in}-y_{i-1,n})$,
$g^\circ_{i+1,i}(X^\circ)=0$, and hence $g_{ii}'(X^\circ,Y)=\pm(y_{in}-y_{i-1,n})$ is not divisible by $g_{ii}(X^\circ)=t$.
The rest of the vertices $g_{ij}$ and $h_{ij}$ do not need a separate treatment since the corresponding relations coincide with those for the standard cluster
structure in $GL_n$.
\section{Proof of Theorem~\ref{structure}(ii)}\label{upeqreg}
\subsection{Proof of Theorem~\ref{allxcluster}}
Functions $x_{ni}$, $1\le i\le n$, are in the initial cluster. Our goal is to explicitly construct a sequence of cluster
transformations that will allow us to recover all $x_{ij}$ as cluster variables.
For this, we will only need to work with a subquiver $\Gamma_n^n$ of $Q_n$ whose vertices belong to the lower $n$ levels of
$Q_n$ and in which we view the vertices in the top row
as frozen (see Fig.~\ref{Gamma55} for the quiver $\Gamma_5^5$).
\begin{figure}[ht]
\begin{center}
\includegraphics[width=12cm]{gamma55.eps}
\caption{Quiver $\Gamma_5^5$}
\label{Gamma55}
\end{center}
\end{figure}
One can distinguish two oriented triangular grid subquivers of ${{\bf m}athcal G}amma_n^n$: a ``square'' one on $n^2$ vertices with the clockwise
orientation of triangles in the lowest row (dashed horizontally on Fig.~{\bf r}ef{Gamma55}), and
a ``triangular'' one on $n(n-1)/2$ vertices with the counterclockwise orientation of triangles in the lowest row (dashed vertically).
They are glued together with the help of the quiver ${{\bf m}athcal G}amma_{n}$ on $3n-2$ vertices placed in three columns of size $n$, $n-1$, and $n-1$.
The left column of ${{\bf m}athcal G}amma_{n}$ is identified with the rightmost side of the square subquiver, and the right column, with the leftmost side of the triangular subquiver, see Fig.~{\bf r}ef{sub2}a) for the case $n=5$. More generally, we can define a quiver ${{\bf m}athcal G}amma_m^n$, $m\le n$, by using ${{\bf m}athcal G}amma_{m}$
to glue an oriented triangular grid quiver on $m n$ vertices forming a parallelogram with a base of length $n$ and the side of length $m$ with an
oriented triangular grid quiver on $m(m-1)/2$ vertices forming a triangle with sides $m-1$ (the orientations of the triangles in the lowest rows of both
grids obeys the same rule as above).
\begin{figure}[ht]
\begin{center}
\includegraphics[height=5cm]{gamma5star.EPS}
\caption{Quivers $\Gamma_5$ and $\Gamma^*_5$}
\label{sub2}
\end{center}
\end{figure}
Let $A$ be an $m\times m$ matrix; for $1\le i\le m-1$, denote
$$
a_{m-i}=\det A_{[i+1,m]}^{[i+1,m]}, \qquad b_{m-i}=\det A_{[i,m-1]}^{[i+1,m]}, \qquad c_{m-i}=\det A_{[i+1,m]}^{[1]\cup [i+2,m]}.
$$
It is easy to check that $a_{m-i}$, $b_{m-i}$ and $c_{m-i}$ are maximal minors in the $(m-i+1)\times (m-i+3)$ matrix
$$
\left[\begin{array}{ccc} I^{[i]} & A^{[1]\cup [i+1,m]} & I^{[m]} \end{array}\right ]_{[i,m]},
$$
obtained by removing columns $A^{[1]}$ and $I^{[m]}$, $I^{[i]}$ and $A^{[1]}$, $A^{[i+1]}$ and $I^{[m]}$, respectively. Moreover, the
maximal minor obtained by removing columns $A^{[1]}$ and $A^{[i+1]}$ equals $b_{m-i-1}$ (assuming $b_0=1$), and the one obtained
by removing columns $I^{[i]}$ and $I^{[m]}$ equals $c_{m-i+1}$ (assuming $c_m=\det A$).
Consequently, the short Pl\"ucker relation for the matrix above and columns $I^{[i]}$, $A^{[1]}$, $A^{[i+1]}$, $I^{[m]}$ reads
$$
a_i a^*_i = b_i c_i + b_{i-1} c_{i+1},
$$
where $a^*_{m-i} =\det A^{[1]\cup [i+2,m]}_{[i,m-1]}$. Let us assign variables $a_i$ to the vertices in the central column of $\Gamma_m$ bottom up,
variables $b_i$ to the vertices in the right column, and variables $c_i$ to the vertices in the left column. It follows from the above discussion that
applying commuting cluster transformations and the corresponding quiver mutations to vertices in the central column of $\Gamma_m$
results in the quiver $\Gamma^*_m$ shown in Fig.~\ref{sub2}b) for $m=5$. The variables attached to the vertices of the central column bottom up are
the $a_i^*$ defined above.
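For instance (a minimal check of this relation, recorded for convenience and not used in the sequel), take $m=2$, $i=1$, and write $a_{pq}$ for the entries of $A$: then $a_1=a_{22}$, $a^*_1=a_{11}$, $b_1=a_{12}$, $c_1=a_{21}$, $b_0=1$, $c_2=\det A$, and the relation becomes
\begin{equation*}
a_{11}a_{22}=a_{12}a_{21}+\det A,
\end{equation*}
which is just the expansion of the determinant of a $2\times 2$ matrix.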
Denote $X_m=X_{[1,m]}$ and $Y_m=Y_{[1,m]}^{[n-m+1,n]}$. Extending the definition of functions
$g_{ij}(X)$ in Section~{\bf r}ef{logcan} to rectangular matrices, we write
$$
g_{ij}^m=g_{ij}(X_m) = {\bf m}athbf det X_{[i,m]}^{[j,j+m-i]}
$$
for $1\leq j \leq i + n -m$. Functions $f_{kl}^m=f_{kl}(X_m,Y_m)$ and $h_{ij}^m=h_{ij}(Y_m)$ are defined exactly
as in Section~{\bf r}ef{logcan}. Consequently,
{\bf m}athfrak begin{equation}\label{hshift}
h_{ij}^m=h_{i+1,j}^{m-1} {\bf q}quad \text{for $i<j$}.
{\bf e}nd{equation}
Going back to the quiver ${{\bf m}athcal G}amma_m^n$, let us
denote by ${\bf w}idetilde\Sigma_m^n$ the extended seed that consists of ${{\bf m}athcal G}amma_m^n$ and
the following family of functions attached to its vertices: the functions attached to the $s$th row listed
left to right are $g_{m-s+1,1}^m,{\bf m}athbf dots, g_{m-s+1,n-s+1}^m$ followed by $f_{s-1,1}^m,{\bf m}athbf dots,f_{1,s-1}^m$
followed by $h_{m-s+1,m-s+1}^m,{\bf m}athbf dots$, $h_{1,m-s+1}^m$ (except for the top row that does not contain $h_{11}^m$).
Now, consider the sequence
of $m-1$ commuting mutations of ${{\bf m}athcal G}amma_m^n$ at vertices $h_{ii}^m$, $2\le i\le m$. As in~{\bf e}qref{hii}, one sees that the new variables associated
with these vertices are $h_{ii}^\operatorname{sign}tar = {\bf m}athbf det \left [ X^{[n]} \;\; Y^{[i+1,m]} {\bf r}ight ]_{[i-1,m-1]}$. In particular, $h^\operatorname{sign}tar_{mm}=x_{m-1,n}$, and we thus have
obtained $x_{m-1,n}$ as a cluster variable. Moreover, $h^\operatorname{sign}tar_{mm}=g_{m-1,n}^{m-1}$, and $h^\operatorname{sign}tar_{ii}=f_{1,m-i}^{m-1}$ for $2\le i\le m-1$.
The cumulative effect of this sequence of transformations can be summarized as follows:
(i) detach from the quiver ${{\bf m}athcal G}amma_m^n$ a quiver isomorphic to ${{\bf m}athcal G}amma_{m}$ with $h_{ii}^m$, $i=m, {\bf m}athbf dots, 2$, playing the role of $a_{i}$, $i=1, {\bf m}athbf dots, m-1$,
and note that functions assigned to its vertices are of the form described above if one selects $A= \left [ X_m^{[n]}\;\; Y_m^{[2,m]} {\bf r}ight ]$;
(ii) apply cluster transformations to vertices $a_{i}$, $1\le i\le m-1$, of ${{\bf m}athcal G}amma_{m}$;
(iii) glue the resulting quiver ${{\bf m}athcal G}amma^*_{m}$ back into ${{\bf m}athcal G}amma_m^n$ and erase any two-cycles that may have been created in this process;
(iv) note that the new variables attached to the mutated vertices are $g_{m-1,n}^{m-1}$, $f_{1,m-i}^{m-1}$, $i=m-1,{\bf m}athbf dots, 2$.
The resulting quiver contains another copy of ${{\bf m}athcal G}amma_{m}$, shifted leftwards by~1, with vertices
$g_{mn}^m$, $f_{11}^m,\ldots, f_{1,m-2}^m$
playing the role of $a_{i}$, $i=1, {\bf m}athbf dots, m-1$, and the matrix $\left [ X_m^{[n-1,n]}\;\; Y_m^{[3,m]} {\bf r}ight ]$ playing the role of $A$.
Therefore, we can repeat the procedure used on the previous step
to obtain a new quiver in which $g_{mn}^m$, $f_{11}^m,\ldots, f_{1,m-2}^m$ are replaced
by $x_{m-1,n-1}$, ${\bf m}athbf det \left [ X^{[n-1,n]}\;\; Y^{[i+2,m]} {\bf r}ight ]_{[i-1,m-1]}$, $i= m-1, \ldots, 2$, respectively. Thus, we have obtained
$x_{m-1,n-1}$ as a cluster variable, and, moreover, the new variables attached to the mutated vertices are $g_{m-1,n-1}^{m-1}$, $g_{m-2,n-1}^{m-1}$,
$f_{2,m-i}^{m-1}$ for $i=m-1,{\bf m}athbf dots, 3$.
We proceed in the same way $n-2$ more times. At the $j$th stage, $1\le j\le n-1$, the copy of ${{\bf m}athcal G}amma_m$ is shifted leftwards by $j$,
$A= \left [ X_m^{[n-j,n]}\;\; Y_m^{[j+2,m]} {\bf r}ight ]$, the role of $a_{i}$, $i=1, {\bf m}athbf dots, m-1$,
is played by vertices associated with the functions $g_{m,n-j+1}^m,\ldots{}, g_{m-j+1, n-j+1}^m$, $f_{j1}^m,\ldots, f_{j,m-j}^m$, which are being replaced with
{\bf m}athfrak begin{equation*}
{\bf m}athfrak begin{split}
&{\bf m}athbf det X_{[m-i,m-1]}^{[n-j,n-j+i-1]},{\bf q}uad i=1,{\bf m}athbf dots, j+1,\\
&{\bf m}athbf det \left [ X^{[n-j,n]}\;\; Y^{[j+i+1,m]} {\bf r}ight ]_{[i-1,m-1]}, {\bf q}uad i= m-j, \ldots, 2.
{\bf e}nd{split}
{\bf e}nd{equation*}
Note that
the first of the functions listed above is $x_{m-1,n-j}$, so in the end of this process, we have restored all the entries of the $(m-1)$-st row of $X$. Moreover,
the new variables are $g_{m-i,n-j}^{m-1}$, $i=1,{\bf m}athbf dots, j+1$, $f_{j+1,m-i}^{m-1}$, $i=m-1,{\bf m}athbf dots, j+2$.
Let us freeze in the resulting quiver
$\tilde{{\bf m}athcal G}amma_m^n$
all $g$-vertices and $f$-vertices adjacent to the frozen vertices. It is easy to check that the quiver obtained in this way is isomorphic to ${{\bf m}athcal G}amma_{m-1}^n$.
Moreover, the above discussion together with the identity~{\bf e}qref{hshift} shows that the functions assigned to its vertices are exactly those stipulated by
the definition of the extended seed ${\bf w}idetilde\Sigma_{m-1}^n$. Thus we establish the claim of the theorem by applying the procedure described above
consecutively to ${\bf w}idetilde\Sigma_{n}^n$, ${\bf w}idetilde\Sigma_{n-1}^n, \ldots, {\bf w}idetilde\Sigma_{2}^n$.
\subsection{Sequence ${\mathcal S}$: the proof of Theorem~\ref{clusterU}}
Consider the subquiver ${\bf w}idehat{{\bf m}athcal G}amma_n^n$ of ${{\bf m}athcal G}amma_n^n$ obtained by freezing the vertices corresponding to functions
$h_{ii}(Y)$, $2\le i\le n$, and ignoring vertices to the right of them. In other words, ${\bf w}idehat{{\bf m}athcal G}amma_n^n$ is the subquiver of $Q_n$
induced by all $g$-vertices, all $f$-vertices, $\varphi$-vertices with $k+l=n$, and $h$-vertices with $i=j{\bf m}athfrak ge 2$.
The quiver ${\bf w}idehat{{\bf m}athcal G}amma_5^5$
is shown in Fig.~{\bf r}ef{sub4}; the vertices that are frozen in ${\bf w}idehat{{\bf m}athcal G}amma_5^5$, but are mutable in $Q_5$ are shown by rounded squares.
Note the special edge shown by the dashed line. It does not exist in ${\bf w}idehat{{\bf m}athcal G}amma_n^n$ (since it connects frozen vertices),
but it exists in $Q_n$.
Within this proof we label the vertices of $\widehat\Gamma_n^n$ by pairs of indices $(i,j)$, $1\le i\le n$, $1\le j\le n+1$, $(i,j)\ne (1, n+1)$, where $i$
increases from top to bottom and $j$ increases from left to right; thus, the special edge is $(1,n)\to (2,n+1)$.
The set of vertices with $j-i=l$ forms the $l$th diagonal in $\widehat\Gamma_n^n$,
$1-n\le l \le n-1$. The function attached to the vertex $(i,j)$ is
$$
\chi_{ij} = \begin{cases} g_{ij}(X) = \det X_{[i,n]}^{[j, n+j-i]}& \mbox{if}\quad i \geq j, \\
f_{n-j+1,j-i}(X,Y)= \det \left [ X^{[j,n]}\;\; Y^{[n+i-j+1,n]} \right ]_{[i,n]} & \mbox{if} \quad i < j. \end{cases}
$$
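In particular, along the main diagonal this formula specializes (an immediate observation recorded only for orientation) to
\begin{equation*}
\chi_{ii}=g_{ii}(X)=\det X_{[i,n]}^{[i,n]},
\end{equation*}
the trailing principal minor of $X$ of order $n-i+1$.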
We denote the extended seed thus obtained from $\widetilde{\Sigma}_n^n$ by $\widehat{\Sigma}_n^n$. Note that it is a seed
of an ordinary cluster structure, since no generalized exchange relations are involved.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=9cm]{tgamma55.eps}
\caption{Quiver $\widehat\Gamma_5^5$}
\label{sub4}
\end{center}
\end{figure}
Consider a sequence of mutations ${{\bf m}athcal S}_n$ which involves mutating once at every non-frozen vertex of ${\bf w}idehat{{\bf m}athcal G}amma_n^n$ starting with $(n,2)$ then
using vertices of the $(3-n)$-th, $(4-n)$-th, ..., $(n-3)$-rd, $(n-2)$-nd diagonals. Note that a similar sequence of transformations was used in the proof
of Proposition 4.15 in \cite{GSVb}
in the study of the natural cluster structure on Grassmannians. The order in which vertices of each diagonal are mutated
is not important, since at the moment a diagonal is reached in this sequence of transformations,
there are no edges between its vertices. In fact, functions $\chi_{ij}^1$ obtained as a result of applying ${{\bf m}athcal S}_n$ are subject to relations
$$
\chi_{ij}^1 \chi_{ij} = \chi^1_{i,j-1} \chi_{i,j+1} + \chi^1_{i+1,j} \chi_{i-1,j},{\bf q}quad 2\le i,j \le n,
$$
where we adopt a convention $\chi^1_{n+1,j} = x_{n1}$, $\chi^1_{i1} = \chi_{i-1,1}=g_{i-1,1}(X)$.
These relations imply
{\bf m}athfrak begin{equation}
\label{chistar}
\chi^1_{ij}= {\bf m}athfrak begin{cases} {\bf m}athbf det X_{[i-1,n]}^{[1] \cup [j+1, n+j-i+1]} & {\bf m}box{if}{\bf q}uad i > j, \\
{\bf m}athbf det \left [ X^{[1]\cup [j+1,n]}\;\; Y^{[n+i-j,n]} {\bf r}ight ]_{[i-1,n]} & {\bf m}box{if} {\bf q}uad i \leq j. {\bf e}nd{cases}
{\bf e}nd{equation}
To verify~{\bf e}qref{chistar} for $i>j$, one has to apply the short Pl\"ucker relation to
$$
\left[{\bf m}athfrak begin{array}{cc} I^{[i-1]} & X^{[1]\cup [j,n+j-i+1]} {\bf e}nd{array}{\bf r}ight ]_{[i-1,n]}
$$
using columns
$I^{[i-1]}$, $X^{[1]}$, $X^{[j]}$, $X^{[n+j-i+1]}$. In the case $i\leq j$, the short Pl\"ucker relation is applied to
$$
\left[{\bf m}athfrak begin{array}{ccc} I^{[i-1]} & X^{[1]\cup [j,n]} & Y^{[n+i-j,n]} {\bf e}nd{array}{\bf r}ight ]_{[i-1,n]}
$$
using columns
$I^{[i-1]}$, $X^{[1]}$, $X^{[j]}$, $Y^{[n+i-j]} $.
Note that
{\bf m}athfrak begin{equation*}
{\bf m}athfrak begin{aligned}
\chi^1_{2j} &= {\bf m}athbf det \left [ X^{[1]\cup [j+1,n]}\;\; Y^{[n-j+2,n]} {\bf r}ight ]_{[1,n]}
= {\bf m}athbf det X \cdot(-1)^{(n-j)(j-1)}{\bf m}athbf det U_{[2,j]}^{[n-j+2,n]}\\ &= {\bf m}athbf det X\cdot(-1)^{(n-j)(n-2)}h_{2,n-j+2}(U), {\bf q}quad 2\le j\le n.
{\bf e}nd{aligned}
{\bf e}nd{equation*}
The subquiver of ${{\bf m}athcal S}_n({\bf w}idehat{{\bf m}athcal G}amma_n^n)$ formed by non-frozen vertices is isomorphic to the corresponding
subquiver of ${\bf w}idehat{{\bf m}athcal G}amma_n^n$. However, the frozen vertices are connected to non-frozen ones in a different way now:
there are arrows $(i,1)\to(i+2, 2)$ and $(i+1,n+1)\to(i+2, n)$ for $1\le i\le n-2$, $(i,2)\to(i-1, 1)$ and $(i,n)\to(i, n+1)$
for $2\le i\le n$, $(1,j)\to(2, j)$ for $2\le j\le n$, $(n,1)\to (n,n)$, and
$(2,j)\to(1, j+1)$ for $2\le j\le n-1$. After moving frozen vertices we can make
${{\bf m}athcal S}_n({\bf w}idehat{{\bf m}athcal G}amma_n^n)$ look as shown in Fig.~{\bf r}ef{sub5}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8cm]{transtgamma55.eps}
\caption{Quiver ${\mathcal S}_n(\widehat\Gamma_5^5)$}
\label{sub5}
\end{center}
\end{figure}
Note that if we freeze the vertices $(2,2), \ldots, (2,n), (3,n), \ldots (n,n)$ in ${{\bf m}athcal S}_n({\bf w}idehat{{\bf m}athcal G}amma_n^n)$
(marked gray in Fig.~{\bf r}ef{sub5}) and ignore the isolated frozen vertices thus obtained, we will be left
with a quiver isomorphic to ${\bf w}idehat{{\bf m}athcal G}amma_{n-1}^{n-1}$ whose vertices are labeled by $(i,j)$, $2\le i\le n$,
$1\le j\le n$, $(i,j)\mathfrak ne (2, n)$, and have functions $\chi^1_{ij}$ attached to them. The new special edge is $(2,n-1)\to (3,n)$.
Let us call the resulting
extended seed ${\bf w}idehat{\Sigma}_{n-1}^{n-1}$.
We can now repeat the procedure described above $n-2$ more times by applying, on the $k$th step, the sequence of mutations
${{\bf m}athcal S}_{n-k+1}$ to the extended seed
$$
{\bf w}idehat\Sigma_{n-k+1}^{n-k+1}= \left((\chi^{k-1}_{ij})_{k\le i\le n, 1\le j\le n-k+2,
(i,j)\mathfrak ne (k, n-k+2)}, {\bf w}idehat{{\bf m}athcal G}amma_{n-k+1}^{n-k+1}{\bf r}ight ).
$$
The functions $\chi^k_{ij}$ are subject to relations
$$
\chi^{k}_{ij} \chi^{k-1}_{ij} = \chi^{k}_{i,j-1} \chi^{k-1}_{i,j+1} + \chi^{k}_{i+1,j} \chi^{k-1}_{i-1,j},{\bf q}quad
k+1\le i\le n, {\bf q}uad 2\le j\le n-k+1,
$$
where we adopt the convention $\chi^{k}_{n+1,j} = \chi^{k-1}_{n1}$, $\chi^{k}_{i1} = \chi^{k-1}_{i-1,1}$.
Arguing as above, we conclude that
{\bf m}athfrak begin{equation}
\label{chistark}
\chi^k_{ij}= {\bf m}athfrak begin{cases} {\bf m}athbf det X_{[i-k,n]}^{[1,k] \cup [j+k, n+j-i+k]} & {\bf m}box{if}{\bf q}uad i - k+1 > j, \\
{\bf m}athbf det \left [ X^{[1,k]\cup [j+k,n]}\;\; Y^{[n+i-j+1-k,n]} {\bf r}ight ]_{[i-k,n]} & {\bf m}box{if} {\bf q}uad i - k+1 \leq j. {\bf e}nd{cases}
{\bf e}nd{equation}
To verify {\bf e}qref{chistark} for $i-k+1>j$, one has to apply the short Pl\"ucker relation to
$$
\left [ {\bf m}athfrak begin{array}{cc}I^{[i-k]} & X^{[1,k] \cup [j+k-1, n+j-i+k]} {\bf e}nd{array} {\bf r}ight ]_{[i-k,n]}
$$
using columns
$I^{[i-k]}$, $X^{[k]}$, $X^{[j+k-1]}$, $X^{[n+j-i+k]}$. In the case $i-k+1\leq j$, the short Pl\"ucker relation is applied to
$$
\left [ {\bf m}athfrak begin{array}{ccc} I^{[i-k]} & X^{[1,k]\cup [j+k-1,n]} & Y^{[n+i-j-k+1,n]}{\bf e}nd{array} {\bf r}ight ]_{[i-k,n]}
$$
using columns
$I^{[i-k]}$, $X^{[k]}$, $X^{[j+k-1]}$, $Y^{[n+i-j-k+1]}$.
Note that
{\bf m}athfrak begin{align*}
\chi^k_{k+1,j} &= {\bf m}athbf det \left [ X^{[1,k]\cup [j+k,n]}\;\; Y^{[n-j+2,n]} {\bf r}ight ]_{[1,n]} \\
&= {\bf m}athbf det X \cdot(-1)^{(n-j-k+1)(j-1)}{\bf m}athbf det U_{[k+1,j+k-1]}^{[n-j+2,n]}\\
&= {\bf m}athbf det X \cdot (-1)^{(n-j-k+1)(n-k-1)}h_{k+1, n-j+2}(U),{\bf q}quad 2\le j\le n-k+1.
{\bf e}nd{align*}
Define the sequence of transformations ${{\bf m}athcal S}$ as the composition ${{\bf m}athcal S}= {{\bf m}athcal S}_2 \circ \cdots \circ {{\bf m}athcal S}_n$. Assertion (i) of Theorem~{\bf r}ef{clusterU} follows from the fact
that $\varphi$-vertices of $Q_n$ are not involved in any of ${{\bf m}athcal S}_i$. This fact also implies that the subquiver of $Q_n$ induced by
$\varphi$-vertices remains intact in ${{\bf m}athcal S}(Q_n)$. As it was shown above, the function ${\bf m}athbf det X\cdot h_{ij}(U)$
is attached to the vertex $(i,n-j+2)$. It is easy to prove by induction that the last mutation at $(i,j)$ (which occurs
at the $(i-1)$-st step) creates edges $(i,j)\to (i,j-1)$, $(i-1,j)\to (i,j)$ and $(i,j-1)\to (i-1,j)$.
Comparing this with the description of $Q_n$ in Section~{\bf r}ef{init} and the definition of $Q_n^{\bf m}athbf dag$ in Section~{\bf r}ef{main}
yields assertions (ii) and (iii) of Theorem~{\bf r}ef{clusterU}. Finally, assertion (iv) follows from the fact that the special
edge $(i,n-i+1)\to (i+1,n-i+2)$ disappears after the last mutation at $(i+1,n-i+1)$.
The quiver ${{\bf m}athcal S}(Q_5)$ and the subquiver $Q_5'$ are shown in Fig.~{\bf r}ef{sub6}. The vertices of $Q_5'$ are shadowed in dark gray. The area shadowed in light gray
represents the remaining part of ${{\bf m}athcal S}(Q_5)$. The only vertices in this part shown in the figure are those connected to vertices of $Q_5'$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8cm]{TQ5.eps}
\caption{Quiver ${\mathcal S}(Q_5)$ and the subquiver $Q_5'$}
\label{sub6}
\end{center}
\end{figure}
\subsection{Proof of Theorem~\ref{Urestore}}
\subsubsection{The nerve ${\mathcal N}_0$} \label{thenerve} The nerve ${\mathcal N}_0$ is obtained as follows: it contains the seed $\widetilde\Sigma'_n$ built in the proof
of Theorem~\ref{clusterU}, the seed $\widetilde\Sigma''_n$ adjacent to $\widetilde\Sigma'_n$ in direction $\psi_{n-1,1}$, and the seed
$\widetilde\Sigma'''_n$ adjacent to $\widetilde\Sigma''_n$ in direction $\psi_{1,n-1}$.
Besides, it contains $n-3$ seeds adjacent to $\widetilde\Sigma'''_n$ in directions $\psi_{n-i,i}$, $2\le i\le n-2$, and $2(n-1)^2$ seeds adjacent to $\widetilde\Sigma'_n$ in all the remaining directions.
We subdivide ${\mathcal N}_0$ into several disjoint components. Component I contains the seed $\widetilde\Sigma'_n$ and its $(n-1)^2$ neighbors in directions that are not in $Q_n'$.
Component II contains $n-3$ neighbors of $\widetilde\Sigma'_n$ in directions
$\psi_{i1}$, $2\le i\le n-2$.
Component III contains only the neighbor of $\widetilde\Sigma'_n$ in direction $\psi_{11}$.
Component IV contains $(n-2)(n-3)/2$ seeds adjacent to $\widetilde\Sigma'_n$ in directions $\psi_{kl}$, $k+l<n$, $l>1$.
Component V contains $n(n-1)/2$ seeds adjacent to $\widetilde\Sigma'_n$ in directions
$h_{ij}$, $2\le i\le j\le n$.
Component VI contains the seeds $\widetilde\Sigma''_n$ and
$\widetilde\Sigma'''_n$ together with all other seeds adjacent to the latter.
In each of the components we consider several normal forms for $U$ with respect to actions of different subgroups of $GL_n$. We
show how to restore entries of these normal forms and,
consequently, the entries of $U$ as Laurent expressions in the corresponding clusters. Recall that $\det X$ and $\det X^{-1}$ belong to the ground ring,
so it suffices to obtain Laurent expressions in variables $\psi_{kl}$, $h_{ij}$ (and their neighbors), and $c_i$ instead of actual cluster variables.
\subsubsection{Component I} To restore $U$ in component I, we use two normal forms for $U$: one under right multiplication by unipotent lower triangular matrices,
and the other under conjugation by unipotent lower triangular matrices, so
$U=B_+N_-=\bar N_- \bar B_+ C \bar N_-^{-1}$, where
$B_+, \bar B_+$ are upper triangular, $N_-, \bar N_-$ are unipotent lower triangular, and $C=e_{21} + \cdots + e_{n, n-1} + e_{1n}$ is the
cyclic permutation matrix (cf.~with~\eqref{normalU}). Note that by~\eqref{ourinv_prop},
$h_{ij}(U)=h_{ij}(B_+)$ and $\psi_{kl}(U)=\psi_{kl}(\bar B_+C)$.
Our goal is to restore $B_+$ and $\bar B_+C$.
Once this is done, the matrix $U$
itself is restored as follows. We multiply the equality $B_+N_-=\bar N_-\bar B_+ C \bar N_-^{-1}$ by $W_0$ on the left and by $\bar N_-$ on the right,
where $W_0$ is the matrix corresponding to the longest permutation of size $n$, and
consider the Gauss factorizations~\eqref{gauss} for $W_0B_+$ in the left hand side and for $W_0\bar B_+C$ in the right hand side. This gives
\[
(W_0B_+)_{>0}(W_0B_+)_{\le 0}N_-\bar N_-=W_0\bar N_-W_0\cdot(W_0\bar B_+C)_{>0}(W_0\bar B_+C)_{\le 0},
\]
where $W_0\bar N_-W_0$ is unipotent upper triangular. Consequently,
\begin{equation}\label{gluecompI}
\bar N_-=W_0(W_0B_+)_{>0}(W_0\bar B_+C)_{>0}^{-1}W_0.
\end{equation}
Recall
that matrix entries in the Gauss factorization are given by Laurent expressions in the entries of the initial matrix with denominators equal to its trailing
principal minors (see, e.g., \cite[Ch.~2.4]{Gant}). Clearly, the trailing principal minors of $W_0B_+$ and $W_0 \bar B_+C$ are just $\psi_{n-i,i}$,
which allows us to restore $U$ in any cluster of component I.
Restoration of $B_+=(\beta_{ij})$ is standard: an explicit computation shows that $\beta_{ii}=\pm h_{ii}/h_{i+1,i+1}$ with $h_{n+1,n+1}=1$,
and $\beta_{ij}$ for $i<j$ is a Laurent
polynomial in $h_{kl}$, $k\le l$, with denominators in the range $i+1\le k\le n$, $j+1\le l\le n$ (here $h_{1l}$ is identified up to a sign with
$\psi_{l-1, n-l+1}$ for $l>1$).
Since all $h_{ij}$ are cluster variables in the clusters of component I, we are done.
In order to restore $\bar M=\bar B_+C$ we proceed as follows. Let $\bar B_+=(\bar\beta_{ij})$. Clearly, $\psi_{n-k,1}=\pm \prod_{i=1}^k\bar\beta_{ii}^{k-i+1}$ for
$1\le k\le n-1$, which yields
\begin{equation}\label{barbeta}
\bar\beta_{ii}=\pm\frac{\psi_{n-i,1}\psi_{n-i+2,1}}{\psi_{n-i+1,1}^2}, \qquad 1\le i\le n-1,
\end{equation}
where we assume $\psi_{n1}=\psi_{n+1,1}=1$. The remaining diagonal entry $\bar\beta_{nn}$ is given by
\[
\bar\beta_{nn}=h_{11}\prod_{i=1}^{n-1}\bar\beta_{ii}^{-1}=\pm\frac{h_{11}\psi_{21}}{\psi_{11}}.
\]
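Both equalities above are obtained from the product formula $\psi_{n-k,1}=\pm\prod_{i=1}^k\bar\beta_{ii}^{k-i+1}$ by direct cancellation; we spell out the computation behind the second one. By~\eqref{barbeta},
\begin{equation*}
\prod_{i=1}^{n-1}\bar\beta_{ii}=\pm\prod_{i=1}^{n-1}\frac{\psi_{n-i,1}\psi_{n-i+2,1}}{\psi_{n-i+1,1}^2}
=\pm\frac{\psi_{11}\psi_{n+1,1}}{\psi_{21}\psi_{n1}}=\pm\frac{\psi_{11}}{\psi_{21}},
\end{equation*}
since the product telescopes and $\psi_{n1}=\psi_{n+1,1}=1$; inverting it and multiplying by $h_{11}$ gives the stated expression for $\bar\beta_{nn}$.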
{\bf m}athfrak begin{remark}\label{forfy11}
Note that the only diagonal entries depending on ${\bf p}si_{11}$ are ${\bf m}athfrak bar{\bf m}athfrak beta_{nn}$ and ${\bf m}athfrak bar{\bf m}athfrak beta_{n-1,n-1}={\bf p}m{\bf p}si_{11}{\bf p}si_{31}/{\bf p}si_{21}^2$. This fact will be important
for the restoration process in component III below.
{\bf e}nd{remark}
We proceed with the restoration process and use~{\bf e}qref{tildefy} to find
\[
{\bf m}athbf det ({\bf m}athfrak bar M)_{[n-k-l+2,n-k]}^{[n-l+1,n-1]}={\bf p}m {\bf p}si_{kl}{\bf p}rod_{i=1}^{n-k-l+1}{\bf m}athfrak bar{\bf m}athfrak beta_{ii}^{k+l+i-n-2},
\]
which together with~{\bf e}qref{barbeta} gives ${\bf m}athbf det ({\bf m}athfrak bar M)_{[n-k-l+2,n-k]}^{[n-l+1,n-1]}={\bf p}m{\bf p}si_{kl}{\bf p}si_{k+l,1}^{k+l-3}/{\bf p}si_{k+l-1,1}^{k+l-2}$. By
Remark~{\bf r}ef{tribfz}, this means that all entries ${\bf m}athfrak bar {\bf m}athfrak beta_{ij}$ with $i>1$ are restored as Laurent polynomials in any cluster in component I.
Note that non-diagonal entries do not depend on ${\bf p}si_{11}$.
{\bf m}athfrak begin{remark}\label{moreforfy11} Using Remark~{\bf r}ef{tildefysign}, we can find signs in the above relations. Specifically,
${\bf m}athfrak bar{\bf m}athfrak beta_{n-1,n}=(-1)^n{\bf p}si_{12}/{\bf p}si_{21}$. This fact will be used in the restoration
process in component III below.
{\bf e}nd{remark}
To restore the entries in the first row of ${\bf m}athfrak bar M$, we first conjugate it by a diagonal matrix ${{\bf m}athfrak d}elta={\bf m}athbf diag({\bf m}athbf delta_1,{\bf m}athbf dots{\bf m}athbf delta_{n-1},h_{11}^{-1})$ so that
${{\bf m}athfrak d}elta{\bf m}athfrak bar M{{\bf m}athfrak d}elta^{-1}$ has ones on the subdiagonal. This condition implies ${\bf m}athbf delta_i={\bf p}m {\bf p}si_{n-i+1,1}/{\bf p}si_{n-i,1}$. Consequently, the entries of the rows
$2,{\bf m}athbf dots,n$ of ${{\bf m}athfrak d}elta{\bf m}athfrak bar M{{\bf m}athfrak d}elta^{-1}$ remain Laurent polynomials. Next, we further conjugate
the obtained matrix with a unipotent upper triangular matrix $N_+$ so that ${\bf m}athfrak bar M^*=N_+{{\bf m}athfrak d}elta{\bf m}athfrak bar M{{\bf m}athfrak d}elta^{-1}N_+^{-1}$ has the companion form
{\bf m}athfrak begin{equation}\label{compan}
{\bf m}athfrak bar M^*=\left[{\bf m}athfrak begin{array}{c} {\bf m}athfrak gamma\\ \mathbf 1_{n-1}\;\; 0{\bf e}nd{array}{\bf r}ight]
{\bf e}nd{equation}
with ${\bf m}athfrak gamma=({\bf m}athfrak gamma_1,{\bf m}athbf dots,{\bf m}athfrak gamma_n)$.
If we set all non-diagonal entries in the last column of $N_+$ equal to zero, all other entries of
$N_+$ (and hence of $N_+^{-1}$) can be restored uniquely as polynomials in the entries in the rows
$2,{\bf m}athbf dots,n$ of ${{\bf m}athfrak d}elta{\bf m}athfrak bar M{{\bf m}athfrak d}elta^{-1}$. Recall that ${\bf m}athfrak bar M^*$ is obtained from $U$ by conjugations, and hence $U$ and ${\bf m}athfrak bar M^*$ are
isospectral. Therefore, ${\bf m}athfrak gamma_i=(-1)^{i-1} c_{n-i}$ for $1\le i\le n$. This allows to restore the entries in the first row of
{\bf m}athfrak begin{equation}\label{betaviac}
{\bf m}athfrak bar M={{\bf m}athfrak d}elta^{-1}N_+^{-1}{\bf m}athfrak bar M^* N_+{{\bf m}athfrak d}elta
{\bf e}nd{equation}
as Laurent polynomials in any cluster in component I.
{\bf m}athfrak begin{remark}\label{nohii}
Note that although diagonal entries of $B_+$ are Laurent monomials in stable variables $h_{ii}$, Laurent expressions for entries of $(W_0B_+)_{>0}$ depend
on $h_{ii}$ polynomially. This follows from the fact that these entries are Laurent polynomials in dense minors of $W_0B_+$ containing the last column; recall that
such minors are cluster variables in any cluster of component I. Moreover,
dense minors containing the upper right corner enter these expressions polynomially, see Remark~{\bf r}ef{tribfz}. Consequently, stable variables $h_{ii}$ do not enter denominators of Laurent expressions for entries of $U$ by~{\bf e}qref{gluecompI}, since restoration of ${\bf m}athfrak bar B_+C$ does not involve division by $h_{ii}$.
{\bf e}nd{remark}
\subsubsection{Component II} The two normal forms used in this component are given by $U=B_+N_-=\check N_-\check B_+ W_0 \check N_-^{-1}$, where $B_+, \check B_+$ are upper
triangular, $N_-, \check N_-$ are unipotent lower triangular, and $W_0$ is the matrix corresponding to the longest permutation, see Lemma~\ref{BW}.
Note that by~\eqref{ourinv_prop},
$h_{ij}(U)=h_{ij}(B_+)$ and $\psi_{kl}(U)=\psi_{kl}(\check B_+W_0)$.
Our goal is to restore $B_+$ and $\check B_+W_0$.
Once this is done, the matrix $U$
itself is restored as follows. We multiply the equality $B_+N_-=\check N_-\check B_+ W_0 N_-^{-1}$ by $W_0$ on the left and by $\check N_-$ on the right and
consider the Gauss factorization~{\bf e}qref{gauss} for $W_0B_+$ in the left hand side. This gives
\[
(W_0B_+)_{>0}(W_0B_+)_{\le 0}N_-\check N_-=W_0\check N_-W_0\cdot W_0\check B_+W_0
\]
where $W_0\check N_-W_0$ is unipotent upper triangular and $W_0\check B_+W_0$ is lower triangular. Consequently, $\check N_-=W_0(W_0B_+)_{>0}W_0$.
Clearly, the trailing principal minors of $W_0B_+$ are just ${\bf p}si_{n-i,i}$, which allows to restore $U$
in ${\bf w}idetilde\Sigma'_n$ and in any cluster of component II.
Restoration of $B_+$ is exactly the same as before. In order to restore $\check M=\check B_+W_0$ we proceed as follows.
Let $\check B_+=(\check\beta_{ij})_{1\le i\le j\le n}$.
We start with the diagonal entries.
An explicit computation immediately gives
\begin{equation}\label{diagrI}
\check\beta_{ii}=\pm\frac{\psi_{n-i,i}}{\psi_{n-i+1,i-1}}, \qquad 1\le i\le n,
\end{equation}
with $\psi_{n0}=1$ and $\psi_{0n}=h_{11}$. Next, we recover the entries in the last column of $\check B_+$. We find
\[
\psi_{n-l,l-1}=\pm\check \beta_{11}\check \beta_{ln}\prod_{i=1}^{l-1}\check \beta_{ii},
\]
which together with~\eqref{diagrI} gives
\begin{equation}\label{lastrI}
\check \beta_{ln}=\pm\frac{\psi_{n-l,l-1}}{\psi_{n-1,1}\psi_{n-l+1,l-1}}, \quad 2\le l\le n-1.
\end{equation}
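To see how~\eqref{lastrI} follows, note that by~\eqref{diagrI} the product above telescopes:
\begin{equation*}
\check\beta_{11}\prod_{i=1}^{l-1}\check\beta_{ii}=\pm\psi_{n-1,1}\prod_{i=1}^{l-1}\frac{\psi_{n-i,i}}{\psi_{n-i+1,i-1}}
=\pm\psi_{n-1,1}\,\frac{\psi_{n-l+1,l-1}}{\psi_{n0}}=\pm\psi_{n-1,1}\psi_{n-l+1,l-1},
\end{equation*}
and it remains to divide $\psi_{n-l,l-1}$ by this product.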
Note that we have already restored the last two rows of $\check M$. We restore the other rows consecutively, starting from row $n-2$
and moving upwards. To this end,
define an $n\times n$ matrix ${{\bf m}athcal P}si$ via
\[
{{\bf m}athcal P}si=\left[{\bf m}athfrak begin{array}{ccccc} e_1 & \check Me_1 & \check M^{2}e_1 & {\bf m}athbf dots & \check M^{n-1}e_1{\bf e}nd{array}{\bf r}ight].
\]
Clearly, for $2\le l\le n-2$, $2\le t\le n-l+1$ one has
\[
{\bf p}si_{n-t-l+2,l-1}={\bf p}m \check{\bf m}athfrak beta_{11}^{t-1}{\bf p}rod_{i=1}^{l-1}\check{\bf m}athfrak beta_{ii}{\bf m}athbf det {{\bf m}athcal P}si_{[l,t+l-2]}^{[2,t]},
\]
which together with~{\bf e}qref{diagrI} yields
{\bf m}athfrak begin{equation}\label{detPsi}
{\bf m}athbf det {{\bf m}athcal P}si_{[l,t+l-2]}^{[2,t]}={\bf p}m \frac{{\bf p}si_{n-t-l+2,l-1}}{{\bf p}si_{n-l+1,l-1}{\bf p}si_{n-1,1}^{t-1}}.
{\bf e}nd{equation}
Therefore, each ${\bf m}athbf det {{\bf m}athcal P}si_{[l,t+l-2]}^{[2,t]}$ is a Laurent polynomial in any cluster in component II. Moreover, the minors ${\bf m}athbf det{{\bf m}athcal P}si_I^{[2,t]}$
possess the same property for any index set $I\operatorname{sign}ubset [2,n]$, $|I|=t-1$, since they can be expressed as Laurent polynomials
in ${\bf m}athbf det {{\bf m}athcal P}si_{[l,t+l-2]}^{[2,t]}$ for $l>2$ that are polynomials in ${\bf m}athbf det{{\bf m}athcal P}si_{[2,t]}^{[2,t]}$, see Remark~{\bf r}ef{tribfz}.
On the other hand, ${{\bf m}athcal P}si_{[l,t+l-2]}^{[2,t]}=\check M_{[l,t+l-2]}{{\bf m}athcal P}si^{[1,t-1]}$, which yields a system of linear
equations on the entries $\check{\bf m}athfrak beta_{lj}$:
{\bf m}athfrak begin{equation}\label{rowl}
{\bf m}athfrak begin{split}
\operatorname{sign}um_{j=2}^{n-l}\check{\bf m}athfrak beta_{lj}&{\bf m}athbf det \left(\left[{\bf m}athfrak begin{array}{c} e_j^T\\ \check M_{[l+1,t+l-2]}{\bf e}nd{array}{\bf r}ight]{{\bf m}athcal P}si^{[1,t-1]}{\bf r}ight)\\
& ={\bf m}athbf det {{\bf m}athcal P}si_{[l,t+l-2]}^{[2,t]}-{\bf m}athbf det \left(\left[{\bf m}athfrak begin{array}{c} {\bf m}athfrak hat {\bf m}athfrak beta_l\\ \check M_{[l+1,t+l-2]}{\bf e}nd{array}{\bf r}ight]{{\bf m}athcal P}si^{[1,t-1]}{\bf r}ight),
{\bf q}uad 3\le t\le n-l+1,
{\bf e}nd{split}
{\bf e}nd{equation}
where ${\bf m}athfrak hat {\bf m}athfrak beta_{l1}=\check{\bf m}athfrak beta_{1l}$, ${\bf m}athfrak hat {\bf m}athfrak beta_{l,n-l+1}=\check{\bf m}athfrak beta_{ll}$, and ${\bf m}athfrak hat {\bf m}athfrak beta_{lj}=0$ for $j\mathfrak ne 1, n-l+1$.
Rewrite the second determinant in the right hand side
of~{\bf e}qref{rowl} via the Binet--Cauchy formula; it involves minors ${\bf m}athbf det {{\bf m}athcal P}si^{[2,t]}_I$ and minors of $\check M_{[l+1,t+l-2]}$.
Assuming that the entries in rows $l+1,{\bf m}athbf dots,n$ have been already restored and taking into account~{\bf e}qref{detPsi}, we ascertain that the
right hand side can be expressed as a Laurent polynomial
in any cluster in component II. Clearly, the same is also true for the coefficients in the left hand side of~{\bf e}qref{rowl}.
It remains to calculate the determinant of the linear system~{\bf e}qref{rowl}. Denote the coefficient at $\check{\bf m}athfrak beta_{lj}$ in the $t$-th equation by $a_{j,t-2}$.
Then
\[
a_{jk}=\operatorname{sign}um_{i=1}^k\left(\check M^i{\bf r}ight)_{l1}{\bf m}athbf det{{\bf m}athcal P}si_{[l+1,k+l]}^{[2,i+1]\cup[i+3,k+2]}
=\operatorname{sign}um_{i=1}^k\left(\check M^i{\bf r}ight)_{l1}{\bf m}athfrak gamma_{ik}.
\]
Let $A=(a_{jk})$, $2\le j\le n-l$, $1\le k\le n-l-1$, be the matrix of the system~{\bf e}qref{rowl}. By the above formula we get
\[
A=\left[{\bf m}athfrak begin{array}{ccc} \check Me_1 & {\bf m}athbf dots &\check M^{n-l-1}e_1{\bf e}nd{array}{\bf r}ight]_{[2,n-l]}\left[{\bf m}athfrak begin{array}{cccc}
{\bf m}athfrak gamma_{11} & {\bf m}athfrak gamma_{12} & {\bf m}athbf dots &{\bf m}athfrak gamma _{1,n-l-1}\\
0 & {\bf m}athfrak gamma_{22} & {\bf m}athbf dots & {\bf m}athfrak gamma_{2,n-l-1}\\
\vdots & \vdots & {\bf m}athbf ddots & \vdots \\
0 & 0 & {\bf m}athbf dots & {\bf m}athfrak gamma_{n-l-1,n-l-1}{\bf e}nd{array}{\bf r}ight],
\]
and hence
\[
{\bf m}athfrak begin{split}
{\bf m}athbf det A={\bf m}athbf det {{\bf m}athcal P}si_{[2,n-l]}^{[2,n-l]}{\bf p}rod_{i=1}^{n-l-1}{\bf m}athfrak gamma_{ii}&={\bf m}athbf det {{\bf m}athcal P}si_{[2,n-l]}^{[2,n-l]}{\bf p}rod_{i=1}^{n-l-1}{\bf m}athbf det{{\bf m}athcal P}si_{[l+1,l+i]}^{[2,i+1]}\\
&={\bf p}m \frac{{\bf p}si_{l1}}{{\bf p}si_{n-l,l}^{n-l-1}{\bf p}si_{n-1,1}^{(n-l)(n-l+1)/2}}{\bf p}rod_{i=1}^{n-l-1}{\bf p}si_{n-l-i,l}.
{\bf e}nd{split}
\]
We thus restored the entries in the $l$-th row of $\check M$ as Laurent polynomials in all clusters in component II except for the neighbor
of ${\bf w}idetilde\Sigma'_n$ in direction ${\bf p}si_{l1}$, which we denote ${\bf w}idetilde\Sigma'_n(l)$. In the latter cluster ${\bf p}si_{l1}$, which enters the expression for ${\bf m}athbf det A$, is no longer available.
It is easy to see that the factor ${\bf p}si_{l1}$ in ${\bf m}athbf det A$ comes from the $(n-l+1)$-st equation in~{\bf e}qref{rowl}: its left hand side is defined by the expression
{\bf m}athfrak begin{equation}\label{oldeq}
{\bf m}athbf det \check M_{[l,n-1]}^{[1,n-l]}{\bf m}athbf det{{\bf m}athcal P}si_{[2,n-l]}^{[2,n-l]}={\bf p}m{\bf m}athbf det \check M_{[l,n-1]}^{[1,n-l]}
\frac{{\bf p}si_{l1}}{{\bf p}si_{n-1,1}^{n-l}}.
{\bf e}nd{equation}
In other words, each coefficient in the left hand side of this equation is proportional to ${\bf p}si_{l1}$. To avoid this problem, we replace this
equation by a different one. Recall that by~{\bf e}qref{k1neigh}, in the cluster under consideration
${\bf p}si_{l1}$ is replaced by ${\bf p}si_{1,l-1}^{**}$
given by
\[
{\bf m}athfrak begin{split}
&{\bf p}si_{1,l-1}^{**}={\bf p}m {\bf m}athbf det \left[
e_n \; \check M^{[n-l+2,n]} \; \check M^2e_{n-1} \; \check M^2e_n \; \check M^3e_n {\bf m}athbf dots \check M^{n-l}e_n
{\bf r}ight]\\
&{\bf q}uad={\bf p}m\check{\bf m}athfrak beta_{11}^{n-l-1}\check{\bf m}athfrak beta_{22} {\bf m}athbf det \left[
\check M^{[n-l+2,n]} \; \check Me_{2} \; \check Me_1 \; \check M^2e_1 {\bf m}athbf dots \check M^{n-l-1}e_1
{\bf r}ight]_{[1,n-1]}\\
&{\bf q}uad={\bf p}m\check{\bf m}athfrak beta_{11}^{n-l}\check{\bf m}athfrak beta_{22}{\bf m}athbf det \left(\check M_{[2,n-1]}^{[1,n-1]}\left[
\mathbf 1^{[1,2]\cup[n-l+2,n-1]} \; \check Me_1 \; \check M^2e_1 {\bf m}athbf dots \check M^{n-l-2}e_1
{\bf r}ight]_{[1,n-1]}{\bf r}ight);
{\bf e}nd{split}
\]
here we used relations $\check Me_n=\check{\bf m}athfrak beta_{11}e_1$ and $\check Me_{n-1}=\check{\bf m}athfrak beta_{12}e_1+\check{\bf m}athfrak beta_{22}e_2$. By the Binet--Cauchy formula, the
latter determinant can be written as
\[
\operatorname{sign}um_{j=1}^{n-1}{\bf m}athbf det \check M_{[n-j+1,n-1]}^{[1,j-1]}{\bf p}rod_{i=2}^{n-j}\check{\bf m}athfrak beta_{ii}{\bf m}athbf det\left[
\mathbf 1^{[1,2]\cup[n-l+2,n-1]} \; \check Me_1 \; \check M^2e_1 {\bf m}athbf dots \check M^{n-l-2}e_1
{\bf r}ight]_{[1,n-1]\operatorname{sign}etminus\{j\}}.
\]
Clearly, the second factor in each summand vanishes for $j=1,2$ and $n-l+2\le j\le n-1$. For $3\le j\le n-l+1$, the second factor equals
${\bf m}athbf det{{\bf m}athcal P}si_{[3,n-l+1]\operatorname{sign}etminus\{j\}}^{[2,n-l-1]}$. As it was explained above, for $3\le j\le n-l$ these determinants are Laurent polynomials in
all clusters of component II, whereas the corresponding first factors contain only entries in rows $l+1,{\bf m}athbf dots,n$ of $\check M$, which are already restored
as Laurent polynomials. Consequently, the left hand side of the new equation for the entries in the
$l$-th row is defined by the $(n-l+1)$-st summand
\[
{\bf m}athbf det \check M_{[l,n-1]}^{[1,n-l]}{\bf p}rod_{i=2}^{l-1}\check{\bf m}athfrak beta_{ii}{\bf m}athbf det{{\bf m}athcal P}si_{[3,n-l]}^{[2,n-l-1]}={\bf p}m{\bf m}athbf det \check M_{[l,n-1]}^{[1,n-l]}
\frac{{\bf p}si_{l-1,n-l+1}{\bf p}si_{l2}}{{\bf p}si_{n-2,2}{\bf p}si_{n-1,1}^{n-l-1}};
\]
note that the inverse of the factor in the right hand side above is a Laurent monomial in the cluster under consideration.
Comparing with~{\bf e}qref{oldeq}, we infer that the left hand side of the new equation is proportional to the left hand side of the initial one. Therefore, the determinant
of the new system is Laurent in the cluster ${\bf w}idetilde\Sigma'_n(l)$, and the entries of the $l$-th row of $\check M$ are restored as Laurent polynomials.
Therefore, the entries in all rows of $\check M$ except for the first one are restored as Laurent polynomials in component II. The entries of the first row
are restored via Lemma~{\bf r}ef{row1viac} as polynomials in the entries of other rows and variables $c_i$ divided by the ${\bf m}athbf det K={\bf m}athbf det{{\bf m}athcal P}si=
{\bf p}m h_{11}{\bf p}si_{11}/{\bf p}si_{n-1,1}^n$; the latter equality follows from $\check{\bf m}athfrak beta_{11}e_1=\check M e_n$ and~{\bf e}qref{diagrI}.
\subsubsection{Component III} In this component we use all three normal forms that have been used in components I and II.
Restoration of $B_+$ is exactly the same as before. Restoration of $\check B_+W_0$ goes through for all entries except for the entries in the first row,
since the determinant ${\bf m}athbf det K$ involves a factor ${\bf p}si_{11}$, which is no longer available. On the other hand,
restoration of ${\bf m}athfrak bar B_+C$ also fails at two instances: firstly, at the entry ${\bf m}athfrak bar{\bf m}athfrak beta_{nn}$, see Remark~{\bf r}ef{forfy11}, and secondly, at the entry
${\bf m}athfrak bar{\bf m}athfrak beta_{1n}$, which gets ${\bf p}si_{11}$ in the denominator after the conjugation by ${{\bf m}athfrak d}elta^{-1}$. However, we will be able to use partial results obtained
during restoration of ${\bf m}athfrak bar B_+C$ in order to restore the first row of $\check B_+W_0$.
Indeed, using Lemma~{\bf r}ef{BW} we can write
$\check M = W_0\left (W_0 U{\bf r}ight)_{\leq 0} W_0 \left (W_0 {\bf m}athfrak bar M{\bf r}ight)_{>0}W_0$.
Clearly, $W_0{\bf m}athfrak bar M$ is block-triangular
with diagonal blocks $1$ and $W_0'{\bf m}athfrak bar B_+'$, where ${\bf m}athfrak bar B_+'=({\bf m}athfrak bar B_+)_{[2,n]}^{[2,n]}$ and $W_0'$ is the matrix of the longest permutation on size $n-1$. Therefore,
\[
(\check{\bf m}athfrak beta_{11}, \check{\bf m}athfrak beta_{12}, {\bf m}athbf dots, \check{\bf m}athfrak beta_{1n})=({\bf m}athfrak bar{\bf m}athfrak beta_{11},{\bf m}athfrak bar{\bf m}athfrak beta_{1n},{\bf m}athfrak bar{\bf m}athfrak beta_{1,n-1},{\bf m}athbf dots,{\bf m}athfrak bar{\bf m}athfrak beta_{12})
\left[{\bf m}athfrak begin{array}{cc} 0 & (W_0'{\bf m}athfrak bar B_+')_{>0}W_0'\\
1 & 0{\bf e}nd{array}{\bf r}ight],
\]
which can be more conveniently written as
{\bf m}athfrak begin{equation}\label{row1}
{\bf m}athfrak begin{split}
(\check{\bf m}athfrak beta_{11}, \check{\bf m}athfrak beta_{12}, {\bf m}athbf dots, \check{\bf m}athfrak beta_{nn})&
\left[{\bf m}athfrak begin{array}{cc} 1 & 0\\
0 & W_0'(W_0'{\bf m}athfrak bar B_+')_{\le 0}W_0'
{\bf e}nd{array}{\bf r}ight]\\
&=
({\bf m}athfrak bar{\bf m}athfrak beta_{11},{\bf m}athfrak bar{\bf m}athfrak beta_{1n},{\bf m}athfrak bar{\bf m}athfrak beta_{1,n-1},{\bf m}athbf dots,{\bf m}athfrak bar{\bf m}athfrak beta_{12})
\left[{\bf m}athfrak begin{array}{cc} 0 & W_0'{\bf m}athfrak bar B_+'W_0'\\
1 & 0{\bf e}nd{array}{\bf r}ight].
{\bf e}nd{split}
{\bf e}nd{equation}
It follows from the description of the restoration process for ${\bf m}athfrak bar M$, that all entries of the matrix in the left hand side
of the system~{\bf e}qref{row1} are Laurent polynomials in component III, and the determinant of this matrix equals ${\bf p}m h_{11}/{\bf p}si_{n-1,1}$.
In the right hand side, $W_0'{\bf m}athfrak bar B_+'W_0'$ is a lower triangular matrix with ${\bf m}athfrak bar{\bf m}athfrak beta_{nn}$ in the upper left corner and ${\bf m}athfrak bar{\bf m}athfrak beta_{n-1,n-1}$ next to it along the
diagonal. Recall that other entries of $W_0'{\bf m}athfrak bar B_+'W_0'$ do not involve ${\bf p}si_{11}$; besides, the entries ${\bf m}athfrak bar{\bf m}athfrak beta_{1j}$ for $j\mathfrak ne n$ may involve ${\bf p}si_{11}$ only in the
numerator, which makes them Laurent polynomials in component III. Therefore, the right hand side of~{\bf e}qref{row1} involves two expressions that should be investigated:
${\bf m}athfrak bar{\bf m}athfrak beta_{11}{\bf m}athfrak bar{\bf m}athfrak beta_{nn}+{\bf m}athfrak bar{\bf m}athfrak beta_{1n}{\bf m}athfrak bar{\bf m}athfrak beta_{n-1,n}$ and ${\bf m}athfrak bar{\bf m}athfrak beta_{1n}{\bf m}athfrak bar{\bf m}athfrak beta_{n-1,n-1}$. Recall that ${\bf m}athfrak bar{\bf m}athfrak beta_{1n}$ is the product of a Laurent
polynomial in component III by ${\bf m}athbf delta_{n-1}/{\bf m}athbf delta_1={\bf p}m{\bf p}si_{21}{\bf p}si_{n-1,1}/{\bf p}si_{11}$, whereas ${\bf m}athfrak bar{\bf m}athfrak beta_{n-1,n-1}={\bf p}m{\bf p}si_{11}{\bf p}si_{31}/{\bf p}si_{21}^2$ by
Remark~{\bf r}ef{forfy11}, hence the second expression above is a Laurent polynomial in component III.
Recall further that ${\bf m}athfrak bar M$ is transformed to the companion form~{\bf e}qref{compan} by a conjugation first with ${{\bf m}athfrak d}elta$ and second with $N_+$.
The latter can be written in two forms:
\[
N_+=\left[{\bf m}athfrak begin{array}{cc} N_+' & 0\\
0 & 1{\bf e}nd{array}{\bf r}ight] {\bf q}quad\text{and}{\bf q}quad
N_+=\left[{\bf m}athfrak begin{array}{cc} 1 & \mathfrak nu\\
0 & N_+''{\bf e}nd{array}{\bf r}ight]
\]
where $N_+=(\mathfrak nu_{ij})$ and $N_+''$ are $(n-1)\times (n-1)$ unipotent upper triangular matrices, and $\mathfrak nu=(\mathfrak nu_{12},{\bf m}athbf dots,\mathfrak nu_{1,n-1})$. Consequently,~{\bf e}qref{compan}
yields
{\bf m}athfrak begin{equation}\label{forn}
N_+''({{\bf m}athfrak d}elta{\bf m}athfrak bar M{{\bf m}athfrak d}elta^{-1})_{[2,n]}^{[1,n-1]}=N_+'.
{\bf e}nd{equation}
Note that~{\bf e}qref{betaviac} implies
\[
\left(\frac{{\bf m}athfrak bar{\bf m}athfrak beta_{12}}{{\bf m}athbf delta_1}, \frac{{\bf m}athfrak bar{\bf m}athfrak beta_{13}}{{\bf m}athbf delta_2},{\bf m}athbf dots,\frac{{\bf m}athfrak bar{\bf m}athfrak beta_{1n}}{{\bf m}athbf delta_{n-1}}{\bf r}ight)=
\frac1{{\bf m}athbf delta_1}\left({\bf m}athfrak gamma_1+{\bf m}athfrak bar\mathfrak nu_{12},{\bf m}athfrak gamma_2+{\bf m}athfrak bar\mathfrak nu_{13},{\bf m}athbf dots, {\bf m}athfrak gamma_{n-2}+{\bf m}athfrak bar\mathfrak nu_{1,n-1},{\bf m}athfrak gamma_{n-1}{\bf r}ight)N_+',
\]
where ${\bf m}athfrak bar\mathfrak nu_{ij}$ are the entries of $(N_+')^{-1}$. Taking into account that
\[
{\bf m}athbf delta_1={\bf p}rod_{i=2}^n{\bf m}athfrak bar{\bf m}athfrak beta_{ii},{\bf q}quad {\bf m}athbf delta_{n-1}={\bf m}athfrak bar{\bf m}athfrak beta_{nn},{\bf q}quad {\bf p}rod_{i=1}^n{\bf m}athfrak bar{\bf m}athfrak beta_{ii}=(-1)^{n-1}h_{11},
\]
we infer that
\[
{\bf m}athfrak bar{\bf m}athfrak beta_{1n}=(-1)^{n-1}\frac{{\bf m}athfrak bar{\bf m}athfrak beta_{11}{\bf m}athfrak bar{\bf m}athfrak beta_{nn}}{h_{11}}\left(\operatorname{sign}um_{i=1}^{n-1}{\bf m}athfrak gamma_i\mathfrak nu_{i,n-1}+\operatorname{sign}um_{i=2}^{n-1}{\bf m}athfrak bar\mathfrak nu_{1i}\mathfrak nu_{i-1,n-1}{\bf r}ight).
\]
Besides, ${\bf m}athfrak bar{\bf m}athfrak beta_{n-1,n}=(-1)^n{\bf p}si_{12}/{\bf p}si_{21}$ by Remark~{\bf r}ef{moreforfy11}. Denote ${\bf z}eta=(-1)^{n-1}{\bf p}si_{12}/{\bf p}si_{21}$,
Then one can write
{\bf m}athfrak begin{equation}\label{binom}
{\bf m}athfrak bar{\bf m}athfrak beta_{11}{\bf m}athfrak bar{\bf m}athfrak beta_{nn}+{\bf m}athfrak bar{\bf m}athfrak beta_{1n}{\bf m}athfrak bar{\bf m}athfrak beta_{n-1,n}
=\frac{{\bf m}athfrak bar{\bf m}athfrak beta_{11}{\bf m}athfrak bar{\bf m}athfrak beta_{nn}}{h_{11}}
\left(h_{11}+(-1)^{n}{\bf z}eta\left( \operatorname{sign}um_{i=1}^{n-1}{\bf m}athfrak gamma_i\mathfrak nu_{i,n-1}+\operatorname{sign}um_{i=2}^{n-1}{\bf m}athfrak bar\mathfrak nu_{1i}\mathfrak nu_{i-1,n-1}{\bf r}ight){\bf r}ight).
{\bf e}nd{equation}
To treat the latter expression, we consider first the last column of $({{\bf m}athfrak d}elta{\bf m}athfrak bar M{{\bf m}athfrak d}elta^{-1})_{[2,n]}^{[1,n-1]}$. It is easy to see that the first $n-3$ entries
in this column equal zero modulo ${\bf p}si_{11}$, whereas the $(n-2)$-nd entry equals $-{\bf z}eta$, and the last entry equals $1$. Consequently,~{\bf e}qref{forn}
implies that $\mathfrak nu_{i,n-1}=(-1)^{n-i-1}{\bf z}eta^{n-i-1}{\bf m}od {\bf p}si_{11}$. Therefore, the first sum in the right hand side of~{\bf e}qref{binom} equals
\[
\operatorname{sign}um_{i=1}^{n-1}{\bf m}athfrak gamma_i(-1)^{n-i-1}{\bf z}eta^{n-i-1} {\bf m}od {\bf p}si_{11}.
\]
By Lemma~{\bf r}ef{Hessenberg},
$-{\bf m}athfrak bar\mathfrak nu_{12},{\bf m}athbf dots,-{\bf m}athfrak bar\mathfrak nu_{1,n-1},0$ form the first row of the companion form for $({{\bf m}athfrak d}elta{\bf m}athfrak bar M{{\bf m}athfrak d}elta^{-1})_{[2,n]}^{[2,n]}$. Consequently,
\[
{\bf m}athfrak begin{split}
\operatorname{sign}um_{i=2}^{n-1}{\bf m}athfrak bar\mathfrak nu_{1i}\mathfrak nu_{i-1,n-1}&=\operatorname{sign}um_{i=2}^{n-1}(-1)^{n-i}{\bf m}athfrak bar\mathfrak nu_{1i}{\bf z}eta^{n-i} {\bf m}od {\bf p}si_{11}\\
&=(-1)^{n-1}\left({\bf m}athbf det\left(({{\bf m}athfrak d}elta{\bf m}athfrak bar M{{\bf m}athfrak d}elta^{-1})_{[2,n]}^{[2,n]}+{\bf z}eta\mathbf 1_{n-1}{\bf r}ight)-{\bf z}eta^{n-1}{\bf r}ight) {\bf m}od {\bf p}si_{11}.
{\bf e}nd{split}
\]
Note that ${\bf m}athbf det(({{\bf m}athfrak d}elta{\bf m}athfrak bar M{{\bf m}athfrak d}elta^{-1})_{[2,n]}^{[2,n]}+{\bf z}eta\mathbf 1_{n-1})=0 {\bf m}od {\bf p}si_{11}$, since the last column of $({{\bf m}athfrak d}elta{\bf m}athfrak bar M{{\bf m}athfrak d}elta^{-1})_{[2,n]}^{[2,n]}$
is zero, and the second to last column equals $e_{n-1}-{\bf z}eta e_{n-2} {\bf m}od{\bf p}si_{11}$; therefore, the second sum in the right hand side of~{\bf e}qref{binom}
equals $(-1)^{n}{\bf z}eta^{n-1} {\bf m}od {\bf p}si_{11}$. Combining this with the previous result and taking into account that $h_{11}=(-1)^{n-1}{\bf m}athfrak gamma_n$, we get
\[
{\bf m}athfrak begin{split}
{\bf m}athfrak bar{\bf m}athfrak beta_{11}{\bf m}athfrak bar{\bf m}athfrak beta_{nn}&+{\bf m}athfrak bar{\bf m}athfrak beta_{1n}{\bf m}athfrak bar{\bf m}athfrak beta_{n-1,n}\\
&=\frac{{\bf m}athfrak bar{\bf m}athfrak beta_{11}{\bf m}athfrak bar{\bf m}athfrak beta_{nn}}{h_{11}}\left(h_{11}+(-1)^{n}\left(\operatorname{sign}um_{i=1}^{n-1}{\bf m}athfrak gamma_i(-1)^{n-i-1}{\bf z}eta^{n-i}+(-1)^{n}{\bf z}eta^{n}{\bf r}ight){\bf r}ight){\bf m}od {\bf p}si_{11}\\
&=\frac{{\bf m}athfrak bar{\bf m}athfrak beta_{11}{\bf m}athfrak bar{\bf m}athfrak beta_{nn}}{h_{11}}\left({\bf z}eta^n+\operatorname{sign}um_{i=1}^n{\bf m}athfrak gamma_i(-1)^{i+1}{\bf z}eta^{n-i}{\bf r}ight) {\bf m}od {\bf p}si_{11}\\
&={\bf p}m\frac{{\bf p}si_{n-1,1}}{{\bf p}si_{21}^{n-1}{\bf p}si_{11}}{\bf m}athbf det({\bf p}si_{21}{\bf m}athfrak bar M+(-1)^{n-1}{\bf p}si_{12}\mathbf 1_n) {\bf m}od {\bf p}si_{11}.
{\bf e}nd{split}
\]
Recall that ${\bf m}athbf det({\bf p}si_{21}{\bf m}athfrak bar M+(-1)^{n-1}{\bf p}si_{12}\mathbf 1_n)/{\bf p}si_{11}$ multiplied by an appropriate power of ${\bf m}athbf det X$ is the new variable that replaces ${\bf p}si_{11}$
in component III,
and hence~{\bf e}qref{row1} defines the entries of the first row of $\check M$ as Laurent polynomials.
\subsubsection{Component IV} In this component we use the same two normal forms as in component I. Restoration of $B_+$ and of diagonal entries of $\bar B_+$
is exactly the same as before. Next,
we apply the second line of~{\bf e}qref{tildefy} and~{\bf e}qref{barbeta} to~{\bf e}qref{fyklex} (or~{\bf e}qref{fy1lex}) and
observe that the right hand side of the exchange relation for ${\bf p}si_{kl}$, $k>1$,
can be written as
\[
{\bf m}athfrak begin{split}
&{\bf p}si_{k+l-3,1}{\bf p}si_{k+l-2,1}{\bf p}si_{k+l-1,1}\\
&\times\left(h_{\alpha-1,{\bf m}athfrak gamma+1}({\bf m}athfrak bar B_+)h_{\alpha,{\bf m}athfrak gamma-1}({\bf m}athfrak bar B_+)h_{\alpha+1,{\bf m}athfrak gamma}({\bf m}athfrak bar B_+)
+h_{\alpha-1,{\bf m}athfrak gamma}({\bf m}athfrak bar B_+)h_{\alpha,{\bf m}athfrak gamma+1}({\bf m}athfrak bar B_+)h_{\alpha+1,{\bf m}athfrak gamma-1}({\bf m}athfrak bar B_+){\bf r}ight),
{\bf e}nd{split}
\]
where $\alpha=n-k-l+2$, ${\bf m}athfrak gamma=n-l+2$ (cf.~with~{\bf e}qref{fyviah}). Note that for $l=2$ we have $h_{\alpha-1,{\bf m}athfrak gamma+1}({\bf m}athfrak bar B_+)=h_{\alpha,{\bf m}athfrak gamma+1}({\bf m}athfrak bar B_+)=1$,
and hence the expression above can be rewritten as
\[
{{\bf p}si_{k-1,1}{\bf p}si_{k1}{\bf p}si_{k+1,1}}
\left(h_{\alpha,n-1}({\bf m}athfrak bar B_+)h_{\alpha+1,n}({\bf m}athfrak bar B_+)
+h_{\alpha-1,n}({\bf m}athfrak bar B_+)h_{\alpha+1,n-1}({\bf m}athfrak bar B_+){\bf r}ight).
\]
We conclude that the map
\[
(k,l){\bf m}apsto {\bf m}athfrak begin{cases} (\alpha,{\bf m}athfrak gamma){\bf q}uad\text{for $l>1$},\\
(\alpha+1,\alpha+1){\bf q}uad\text{for $l=1$ }
{\bf e}nd{cases}
\]
transforms exchange relations
for ${\bf p}si_{kl}$, $l>1$, $k+l<n$, to exchange relations for the standard cluster structure on triangular matrices of size $(n-2)\times(n-2)$,
up to a monomial factor in variables that
are fixed in component IV, see Fig.~{\bf r}ef{Q6}. Consequently, the entries ${\bf m}athfrak bar{\bf m}athfrak beta_{ij}$, $2\le i <j\le n$, can be restored as Laurent polynomials in component IV via
Remark~{\bf r}ef{tribfz}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=10cm]{Q6regIV.eps}
\caption{Modification of the relevant part of $Q_6^\dag$ in component IV}
\label{Q6}
\end{center}
\end{figure}
The entries in the first row of ${\bf m}athfrak bar B_+$ are restored exactly as in component I.
\subsubsection{Component V} In this component we once again use the same two normal forms as in component I. Restoration of $\bar B_+$ is exactly the same as before.
To restore $B_+$, we note that the cluster structure in component V coincides with the standard cluster structure on triangular $n\times n$ matrices, and hence
the entries of $B_+$ can be restored as Laurent polynomials via Remark~\ref{tribfz}.
\subsubsection{Component VI} The two normal forms used in this component are given by $U=B_+N_-=\widehat N_+\widehat N_-\widehat M S_{12}\widehat N_-^{-1}\widehat N_+^{-1}$,
where $B_+$ is upper triangular, $N_-, \widehat N_-=(\hat\nu_{ij})$ are unipotent lower triangular with $\hat \nu_{j1}=0$ for $2\le j\le n$,
$\widehat N_+=\mathbf 1_n+\hat\nu e_{12}$, and $\widehat M=(\hat \mu_{ij})$ satisfies conditions $\hat \mu_{1n}=0$ and $\hat \mu_{i,n+2-j}=0$ for
$2\le j<i\le n$, see Lemma~\ref{NMN}.
Note that
$h_{2j}(U)=h_{2j}(\widehat M)$ and $\psi_{kl}(U)=\psi_{kl}(\widehat M)$.
Our goal is to restore $B_+$ and $\widehat M$.
Once this is done, the matrix $U$ itself is restored as follows. First, by the proof of Lemma~{\bf r}ef{NMN},
${\bf m}athfrak hat \mathfrak nu={\bf m}athfrak beta_{1n}/{\bf m}athfrak hat {\bf m}u_{2n}$, which is a Laurent polynomial in
component VI, since ${\bf m}athfrak hat {\bf m}u_{2n}= h_{2n}$ is a Laurent monomial. Next, we write
$B_+N_-{\bf w}idehat N_+{\bf w}idehat N_-={\bf w}idehat N_+{\bf w}idehat N_-{\bf w}idehat M$. Taking into account that
${\bf w}idehat N_+{\bf w}idehat N_-={\bf w}idehat N_-{\bf w}idehat N_+$, we arrive at
{\bf m}athfrak begin{equation*}
{\bf m}athfrak bar W_0' B_+\cdot N_-{\bf w}idehat N_-={\bf m}athfrak bar W_0'{\bf w}idehat N_- {\bf m}athfrak bar W_0'\cdot{\bf m}athfrak bar W_0' {\bf w}idehat N_+{\bf w}idehat M{\bf w}idehat N_+^{-1}
{\bf e}nd{equation*}
with ${\bf m}athfrak bar W_0'={\bf m}athfrak begin{pmatrix} 1 & 0\\ 0 & W_0'{\bf e}nd{pmatrix}$. Note that the second factor on the left is unipotent lower triangular, whereas the first factor
on the right is unipotent upper triangular. Taking the Gauss factorizations of the remaining two factors, we restore
${\bf m}athfrak bar W_0'{\bf w}idehat N_- {\bf m}athfrak bar W_0'=\left( {\bf m}athfrak bar W_0' B_+{\bf r}ight)_{> 0}\left({\bf m}athfrak bar W_0' {\bf w}idehat N_+{\bf w}idehat M{\bf w}idehat N_+^{-1}{\bf r}ight)^{-1}_{> 0}$. Note that trailing minors
needed for Gauss factorizations in the right hand side above equal ${\bf m}athbf det{\bf w}idehat M_{[2,i]}^{[n-i+2,n]}= h_{2i}$, and hence are Laurent
monomials in component VI.
We describe the restoration process at ${\bf w}idetilde\Sigma'''_n$, and indicate the changes that occur at its neighbors.
Restoration of $B_+$ is almost the same as before. The difference is that $h_{1n}$ and $h_{12}$, which coincide up to a sign with ${\bf p}si_{n-1,1}$ and ${\bf p}si_{1,n-1}$,
are no longer available (at ${\bf w}idetilde\Sigma''_n$ only $h_{1n}$ is not available). However, since they are cluster variables in other clusters, say, in any cluster in component I, they both can be written as Laurent polynomials
at ${\bf w}idetilde\Sigma'''_n$. Moreover, they never enter denominators in expressions for ${\bf m}athfrak beta_{ij}$, and hence $B_+$ is restored. At the neighbor of
${\bf w}idetilde\Sigma'''_n$ in direction ${\bf p}si_{n-i,i}$ we apply the same reasoning to $h_{1,n-i+1}={\bf p}m {\bf p}si_{n-i,i}$; at ${\bf w}idetilde\Sigma''_n$ we apply it only to $h_{1n}$.
In order to restore ${\bf w}idehat M$ we proceed as follows. First, we note that
\[
h_{2j}={\bf p}m{\bf p}rod_{i=j}^n {\bf m}athfrak hat {\bf m}u_{n+2-i,i},
\]
and hence ${\bf m}athfrak hat {\bf m}u_{n+2-i,i}={\bf p}m h_{2j}/h_{2,j+1}$ for $2\le j\le n$ with $h_{2,n+1}=1$.
Next, note that by~{\bf e}qref{reg6}, the function ${\bf p}si_{n-1,1}$ is replaced in component VI by
${\bf p}si'_{n-1,1}={\bf p}m{\bf m}athbf det {\bf w}idehat M_{[1,n-1]}^{1\cup[3,n]}$. Besides, it is easy to see that in all clusters in component VI except for ${\bf w}idetilde\Sigma''_n$,
${\bf p}si_{1,n-1}$ is replaced by ${\bf p}si_{1,n-1}'={\bf p}m{\bf m}athbf det {\bf w}idehat M_{[2,n]}^{1\cup[3,n]}$.
Consequently, $h_{11}={\bf p}m\left({\bf m}athfrak hat {\bf m}u_{n1}{\bf p}si_{1,n-1}-{\bf m}athfrak hat {\bf m}u_{n2}{\bf p}si'_{n-1,1}{\bf r}ight)$
yields ${\bf m}athfrak hat {\bf m}u_{n1}$ as a Laurent monomial at ${\bf w}idetilde\Sigma''_n$, and ${\bf p}si_{1,n-1}'={\bf p}m{\bf m}athfrak hat{\bf m}u_{n1}h_{23}$ yields it as a Laurent monomial in all
other clusters in component VI.
To proceed further, we introduce an $n \times n$ matrix ${\bf w}idehat{{\bf m}athcal P}si$ similar to the matrix ${{\bf m}athcal P}si$ used in component II:
\[
{\bf w}idehat{{\bf m}athcal P}si=\left[{\bf m}athfrak begin{array}{ccccc} e_2 & {\bf w}idehat Me_2 & {\bf w}idehat M^{2}e_2 & {\bf m}athbf dots & {\bf w}idehat M^{n-1}e_2{\bf e}nd{array}{\bf r}ight].
\]
{\bf m}athfrak begin{lemma}\label{psiminors}
The minors ${\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_I^{[2,t]}$, $2\le t\le n-1$, are Laurent polynomials in component VI for any index set $I$ such that $2\mathfrak notin I$, $|I|=t-1$.
{\bf e}nd{lemma}
{\bf m}athfrak begin{proof}
An easy computation shows that for $k+l<n$ one has
\[
{\bf p}si_{kl}={\bf p}m h_{2n}^{n-k-l+1}\operatorname{sign}um_{j\in [1,l+1]\operatorname{sign}etminus2}(-1)^{j+\chi_j}{\bf m}athbf det{\bf w}idehat M_{[1,l+1]\operatorname{sign}etminus\{2, j\}}^{[n-l+1,n-1]}
{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{j\cup[l+2,n-k]}^{[2,n-k-l+1]},
\]
where $\chi_1=1$ and $\chi_j=0$ for $j\mathfrak ne 1$. It follows from the discussion above that ${\bf m}athbf det{\bf w}idehat M_{[3,l+1]}^{[n-l+1,n-1]}=(-1)^{l-1}h_{2,n-l+1}/h_{2n}$.
Besides,
{\bf m}athfrak begin{equation*}
{\bf m}athfrak begin{split}
{\bf m}athbf det {\bf w}idehat M_{[1,l+1]\operatorname{sign}etminus\{2, j\}}^{[n-l+1,n-1]}&=(-1)^{(j-1)(l-j+1)}{\bf m}athbf det{\bf w}idehat M_{[1,j-1]\operatorname{sign}etminus2}^{[n-j+2,n-1]}{\bf m}athbf det{\bf w}idehat M_{[j+1,l+1]}^{[n-l+1,n-j+1]}\\
&=(-1)^{l}\frac{h_{1,n-j+2}}{h_{2n}}\frac{h_{2,n-l+1}}{h_{2,n-j+2}},
{\bf e}nd{split}
{\bf e}nd{equation*}
and hence
{\bf m}athfrak begin{equation}\label{psiviaPsi}
{\bf p}si_{kl}={\bf p}m h_{2n}^{n-k-l}h_{2,n-l+1}\operatorname{sign}um_{j\in [1,l+1]\operatorname{sign}etminus2}{\bf e}ta_j{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{j\cup[l+2,n-k]}^{[2,n-k-l+1]},
{\bf e}nd{equation}
where ${\bf e}ta_j=(-1)^{(n-j)(j-1)}{{\bf p}si_{n-j+1,j-1}}/{h_{2,n-j+2}}$ with ${\bf p}si_{n0}=h_{2,n+1}=1$.
Now we can prove the claim of the lemma by induction on the maximal index in $I$. The minimum value of this index is $t$. In this case we use~{\bf e}qref{psiviaPsi}
to see that
{\bf m}athfrak begin{equation}\label{solidpsi}
{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{[1,t]\operatorname{sign}etminus2}^{[2,t]}={\bf p}m{\bf p}si_{n-t,1}/h_{2n}^{t}
{\bf e}nd{equation}
is a Laurent monomial in component VI.
Assume that the value of the maximal index in $I$ equals $r>t$. We multiply the
$(r-1)\times(t-1)$ matrix ${\bf w}idehat{{\bf m}athcal P}si^{[2,t]}_{[1,r]\operatorname{sign}etminus2}$
by a $(t-1)\times (t-1)$ block upper triangular matrix with unimodular blocks of size $t-2$ and $1$, so that
the upper $(t-2)\times(t-1)$ submatrix is diagonalized with $1$'s on the diagonal except for the first row. Clearly, this transformation does not change the
values of any minors in the first $t-2$ and $t-1$ columns. Consequently, each matrix entry (except for the one in the lower right corner, which we denote $z$) is a Laurent polynomial in component VI. We then consider~{\bf e}qref{psiviaPsi} with $k=n-r$ and $l=r-t+1$ and expand each minor in the right hand side by the last row.
Each entry in the last row
distinct from $z$ enters this expansion with a coefficient that is a Laurent polynomial in component VI.
The entry $z$ enters the expansion with the coefficient
\[
{\bf p}m h_{2n}^{t-1}h_{2,n-r+t}\operatorname{sign}um_{j\in [1,r-t+2]\operatorname{sign}etminus2}{\bf e}ta_j{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si^{[2,t-1]}_{j\cup[r-t+3,r-1]}={\bf p}m{\bf p}si_{n-r+1,r-t+1}h_{2n}.
\]
Thus, $z$ is a Laurent monomial in component VI, and hence any minor of ${\bf w}idehat{{\bf m}athcal P}si^{[2,t]}_{[1,r]\operatorname{sign}etminus2}$ that involves the $r$-th row is a Laurent polynomial.
{\bf e}nd{proof}
We can now proceed with restoring the entries of ${\bf w}idehat M$. Equation~{\bf e}qref{psiviaPsi} for $k=n-l-1$ gives
\[
{\bf p}si_{n-l-1,l}=
{\bf p}m h_{2n}h_{2,n-l+1}\operatorname{sign}um_{j\in [1,l+1]\operatorname{sign}etminus2}{\bf e}ta_j{\bf w}idehat{{\bf m}athcal P}si_j^2={\bf p}m h_{2n}^2h_{2,n-l+1}\operatorname{sign}um_{j\in [1,l+1]\operatorname{sign}etminus2}{\bf e}ta_j{\bf m}athfrak hat{\bf m}u_{j2},
\]
and we consecutively
restore ${\bf m}athfrak hat{\bf m}u_{j2}$, $1\le j\le n-1$, $j\mathfrak ne2$, as Laurent polynomials at all clusters in component VI
except for the neighbor of ${\bf w}idetilde\Sigma'''_n$ in direction ${\bf p}si_{n-j+1,j-1}$. An easy computation shows that in this cluster ${\bf p}si_{n-j+1,j-1}$
is replaced by ${\bf p}si_{n-j+1,j-1}'={\bf p}m h_{2n}{\bf m}athbf det{\bf w}idehat M_{[2,j]}^{2\cup[n-j+3,n]}={\bf p}m h_{2n}h_{2,n-j+3}{\bf m}athfrak hat{\bf m}u_{j2}$, and hence ${\bf m}athfrak hat{\bf m}u_{j2}$
is restored there as a Laurent monomial. In particular, it follows from above that
{\bf m}athfrak begin{equation}\label{mu12}
{\bf m}athfrak hat{\bf m}u_{12}={\bf p}m\frac{{\bf p}si_{n-2,1}}{h_{2n}^3}
{\bf e}nd{equation}
in any cluster in component VI.
Recall that we have already restored the last row of ${\bf w}idehat M$. We restore the other rows consecutively,
starting from row $n-1$ and moving upwards.
Matrix entries in the $l$-th row, $3\le l\le n-1$,
are restored in two stages, together with the minors ${\bf m}athbf det {\bf w}idehat M^{i\cup[n-l+3,n]}_{[1,l-1]}$ for $1\le i\le n-l+1$, $i\mathfrak ne 2$.
First, we recover minors ${\bf m}athbf det {\bf w}idehat M^{2\cup i\cup[n-l+3,n]}_{[1,l]}$ for $1\le i\le n-l+1$, $i\mathfrak ne 2$, as Laurent polynomials in component VI.
Once they are recovered,
we find ${\bf m}athfrak hat{\bf m}u_{li}$ and ${\bf m}athbf det {\bf w}idehat M^{i\cup[n-l+3,n]}_{[1,l-1]}$ via expanding ${\bf m}athbf det {\bf w}idehat M^{2\cup i\cup[n-l+3,n]}_{[1,l]}$ and
${\bf m}athbf det {\bf w}idehat M^{i\cup[n-l+2,n]}_{[1,l]}$ by the last row. This gives a system of two linear equations for a fixed $i$,
and its determinant equals ${\bf p}m{\bf p}si_{n-l,l-1}$. Note that minors ${\bf m}athbf det {\bf w}idehat M^{i\cup[n-l+2,n]}_{[1,l]}$ for $1\le i\le n-l$, $i\mathfrak ne 2$,
were recovered together with the entries of the $(l+1)$-st row, and ${\bf m}athbf det {\bf w}idehat M^{[n-l+1,n]}_{[1,l]}={\bf p}m{\bf p}si_{n-l,l}$
(recall that ${\bf p}si_{n-l,l}$ are Laurent polynomials in component VI).
For $l=3$, we have ${\bf m}athbf det {\bf w}idehat M^{i\cup n}_{[1,2]}={\bf m}athfrak hat{\bf m}u_{1i}h_{2n}$,
and hence the entries of the first row are recovered together with the entries of the third row.
The minors ${\bf m}athbf det {\bf w}idehat M^{2\cup i\cup[n-l+3,n]}_{[1,l]}$ for $1\le i\le n-l+1$, $i\mathfrak ne 2$, are recovered together with all other minors
${\bf m}athbf det {\bf w}idehat M^{i\cup j\cup[n-l+3,n]}_{[1,l]}$ for $1\le i<j\le n-l+1$, $i,j\mathfrak ne 2$, altogether $(n-l)(n-l+1)/2$ minors.
We first note that the Binet--Cauchy formula gives
{\bf m}athfrak begin{equation}\label{psiviabinet}
{\bf p}si_{kl}=\operatorname{sign}um_{\operatorname{sign}ubstack{J\operatorname{sign}ubseteq [1,n-l]\operatorname{sign}etminus2\\ |J|=n-k-l-1}}(-1)^{\chi_J}{\bf m}athbf det{\bf w}idehat M^{2\cup J\cup[n-l+1,n]}_{[1,n-k]}{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si^{[2,n-k-l]}_J
{\bf e}nd{equation}
for $k+l<n-1$, where $\chi_J=\operatorname{sign}um_{j\in J}\chi_j$. Recall that by Lemma~{\bf r}ef{psiminors}, ${\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si^{[2,n-k-l]}_J$ are Laurent
polynomials in component VI. Writing down ${\bf p}si_{k,l-2}$ for $1\le k\le n-l$ via the above formula, and expanding the minors ${\bf m}athbf det{\bf w}idehat M^{2\cup J\cup[n-l+3,n]}_{[1,n-k]}$ for $J\mathfrak not\mathfrak ni n-l+2$ by the last $n-k-l$ rows, we get $n-l$ linear equations; note that the corresponding minors for $J\mathfrak ni n-l+2$ have been already restored as Laurent
polynomials in component VI when we dealt with the previous rows. For $l=n-1$ we get a single equation
\[
{\bf p}si_{1,n-3}=-{\bf m}athbf det{\bf w}idehat M_{[1,n-1]}^{[1,n]\operatorname{sign}etminus3}{\bf m}athfrak hat{\bf m}u_{12}+{\bf m}athbf det{\bf w}idehat M_{[1,n-1]}^{[2,n]}{\bf m}athfrak hat{\bf m}u_{32},
\]
and hence ${\bf m}athbf det{\bf w}idehat M_{[1,n-1]}^{[1,n]\operatorname{sign}etminus3}$ is a Laurent polynomial in component VI, since ${\bf m}athfrak hat{\bf m}u_{12}$ is a Laurent monomial by~{\bf e}qref{mu12}.
In what follows we assume that $3\le l\le n-2$.
The remaining $(n-l)(n-l-1)/2$ linear equations are provided by short Pl\"ucker relations
{\bf m}athfrak begin{equation*}
{\bf m}athfrak begin{split}
{\bf m}athbf det {\bf w}idehat M^{i\cup j\cup[n-l+3,n]}_{[1,l]}{\bf m}athbf det {\bf w}idehat M^{2\cup[n-l+2,n]}_{[1,l]}&=
{\bf m}athbf det {\bf w}idehat M^{2\cup j\cup[n-l+3,n]}_{[1,l]}{\bf m}athbf det {\bf w}idehat M^{i\cup[n-l+2,n]}_{[1,l]}\\
&+(-1)^{\chi_i+1}{\bf m}athbf det {\bf w}idehat M^{2\cup i\cup[n-l+3,n]}_{[1,l]}{\bf m}athbf det {\bf w}idehat M^{j\cup[n-l+2,n]}_{[1,l]},
{\bf e}nd{split}
{\bf e}nd{equation*}
where the second factor in each term is a Laurent polynomial in component VI, and, moreover, ${\bf m}athbf det {\bf w}idehat M^{2\cup[n-l+2,n]}_{[1,l]}={\bf p}m {\bf p}si_{n-l-1,l}/h_{2n}$.
We can arrange the variables in such a way that the matrix of the linear system takes the form $A={\bf m}athfrak begin{pmatrix} A_1 & A_2\\ A_3 & A_4 {\bf e}nd{pmatrix}$, where
$A_4$ is an $(n-l)(n-l-1)/2\times (n-l)(n-l-1)/2$ diagonal matrix with ${\bf m}athbf det {\bf w}idehat M^{2\cup[n-l+2,n]}_{[1,l]}$ on the diagonal.
The column $(i,j)$ of $A_2$ corresponding
to the variable ${\bf m}athbf det {\bf w}idehat M^{i\cup j\cup[n-l+3,n]}_{[1,l]}$, $i,j\mathfrak ne 2$, contains
{\bf m}athfrak begin{equation}\label{a2element}
\operatorname{sign}um_{\operatorname{sign}ubstack{I\operatorname{sign}ubseteq [1,n-l+1]\operatorname{sign}etminus\{2,i,j\}\\ |I|=t-1}}(-1)^{\theta(i,j,I)}{\bf m}athbf det{\bf w}idehat M^{2\cup I}_{[l+1,l+t]}{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{i\cup j\cup I}^{[2, t+2]}
{\bf e}nd{equation}
in row $n-l-t$ for $1\le t\le n-l-1$, where $\theta(i,j,I)=\chi_{i\cup I}+\#\{p\in 2\cup I: i<p<j\}$, and zero in row $n-l$.
Similarly, the column $(2,i)$ of $A_1$ corresponding to the variable ${\bf m}athbf det {\bf w}idehat M^{2\cup i\cup[n-l+3,n]}_{[1,l]}$, $i\mathfrak ne 2$, contains
\[
\operatorname{sign}um_{\operatorname{sign}ubstack{J\operatorname{sign}ubseteq [1,n-l+1]\operatorname{sign}etminus\{2,i\}\\ |J|=t}}(-1)^{\theta(i,J)}{\bf m}athbf det{\bf w}idehat M^{ J}_{[l+1,l+t]}{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{i\cup J}^{[2, t+2]}
\]
in row $n-l-t$ for $1\le t\le n-l-1$, where
\[
\theta(i,J)={\bf m}athfrak begin{cases} 1 {\bf q}uad\text{for $i=1$},\\
\chi_J+\#\{p\in J: 2<p<i\} {\bf q}uad\text{for $i\mathfrak ne1$},
{\bf e}nd{cases}
\]
and $(-1)^{\chi_i}{\bf m}athfrak hat{\bf m}u_{i2}$ in row $n-l$.
To find ${\bf m}athbf det A$, we multiply $A$ by a square lower triangular matrix whose column $(2,i)$ contains ${\bf m}athfrak hat{\bf m}u_{l+1,2}$ in row $(2,i)$, ${\bf m}athfrak hat{\bf m}u_{l+1,j}$
in row $(j,i)$, $1\le j\le i-1$, $j\mathfrak ne 2$, and $(-1)^{\chi_i+1}{\bf m}athfrak hat{\bf m}u_{l+1,j}$ in row $(i,j)$, $i+1\le j\le n-l+1$, $j\mathfrak ne 2$; column $(i,j)$ of this matrix
contains a single $1$ on the main diagonal, and all its other entries are equal to zero.
Let $A'={\bf m}athfrak begin{pmatrix} A'_1 & A_2\\ A'_3 & A_4 {\bf e}nd{pmatrix}$ be the obtained product. Clearly,
${\bf m}athbf det A={\bf m}athfrak hat{\bf m}u_{l+1,2}^{l-n}{\bf m}athbf det A'$. We claim that $(A'_1)_{[1,n-l-1]}$ is a zero matrix, and $(A'_1)_{[n-l]}={\bf m}athfrak hat{\bf m}u_{l+1,2}(A_1)_{[n-l]}$.
The second claim follows immediately from the fact that $(A_2)_{[n-l]}$ is a zero vector. To prove the first one, we fix arbitrary $i$, $t$, and $J=\{j_1<j_2<\cdots<j_t\}$,
and find the coefficient at ${\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{i\cup J}^{[2, t+2]}$ in the entry of $A_1'$ in row $n-l-t$ and column $(2,i)$. For $i=1$ this coefficient equals
\[
(-1)^{\theta(1,J)}{\bf m}athfrak hat{\bf m}u_{l+1,2}{\bf m}athbf det{\bf w}idehat M^J_{[l+1,l+t]}+\operatorname{sign}um_{r=1}^t(-1)^{2+\theta(1,j_r,J\operatorname{sign}etminus j_r)}
{\bf m}athfrak hat{\bf m}u_{l+1,j_r}{\bf m}athbf det{\bf w}idehat M^{2\cup J\operatorname{sign}etminus j_r}_{[l+1,l+t]}.
\]
Taking into account that $\theta(1,J)=1$ and $\theta(1,j_r,J\operatorname{sign}etminus j_r)=2+\#\{p\in J:2<p<j_r\}=1+r$, we conclude that the signs in the above expression alternate.
For $i\mathfrak ne 1$, the coefficient in question equals
{\bf m}athfrak begin{equation*}
{\bf m}athfrak begin{split}
(-1)^{\theta(i,J)}{\bf m}athfrak hat{\bf m}u_{l+1,2}{\bf m}athbf det{\bf w}idehat M^J_{[l+1,l+t]}
&+\operatorname{sign}um_{j_r<i}(-1)^{\theta(j_r,i,J\operatorname{sign}etminus j_r)}{\bf m}athfrak hat{\bf m}u_{l+1,j_r}{\bf m}athbf det{\bf w}idehat M^{2\cup J\operatorname{sign}etminus j_r}_{[l+1,l+t]}\\
&+\operatorname{sign}um_{j_r>i}(-1)^{1+\theta(i,j_r,J\operatorname{sign}etminus j_r)}{\bf m}athfrak hat{\bf m}u_{l+1,j_r}{\bf m}athbf det{\bf w}idehat M^{2\cup J\operatorname{sign}etminus j_r}_{[l+1,l+t]}.
{\bf e}nd{split}
{\bf e}nd{equation*}
If $j_1=1$, then $\theta(i,J)=r^*$, where $r^*=r^*(i,J)={\bf m}ax\{r: j_r<i\}$. Further, $\theta(1,i,J\operatorname{sign}etminus 1)=1+r^*$, $\theta(j_r,i,J\operatorname{sign}etminus j_r)=1+r^*-r$ for $1<j_r<i$
and $\theta(i,j_r,J\operatorname{sign}etminus j_r)=2+r-r^*$ for $i<j_r$, hence the signs in the above expression taken in the order $1, 2, j_2,{\bf m}athbf dots, j_t$ alternate once again.
Finally, if $j_1>1$, then $\theta(i,J)=r^*$, $\theta(j_r,i,J\operatorname{sign}etminus j_r)=r^*-r$ for $j_r<i$ and $\theta(i,j_r,J\operatorname{sign}etminus j_r)=1+r-r^*$ for $i<j_r$, and again the signs
taken in the corresponding order alternate. Therefore, in all three cases the coefficient at ${\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{i\cup J}^{[2, t+2]}$ equals
${\bf m}athbf det{\bf w}idehat M^{2\cup J}_{{l+1}\cup[l+1,l+t]}$, and hence
vanishes as the determinant of a matrix with two identical rows.
The entry of $A_3'$ in column $(2,i)$ and row
$(i,j)$ is
{\bf m}athfrak begin{equation}\label{a3elementi}
{\bf m}athfrak begin{split}
(-1)^{\chi_i+1}{\bf m}athfrak hat{\bf m}u_{l+1,j}{\bf m}athbf det {\bf w}idehat M^{2\cup[n-l+2,n]}_{[1l]}&+(-1)^{\chi_i}{\bf m}athfrak hat{\bf m}u_{l+1,2}{\bf m}athbf det {\bf w}idehat M^{j\cup[n-l+2,n]}_{[1l]}\\
&=(-1)^{\chi_i}{\bf w}idehat M^{2\cup j\cup[n-l+2,n]}_{[1,l+1]},
{\bf e}nd{split}
{\bf e}nd{equation}
and the entry in column $(2,j)$ and row $(i,j)$ is
{\bf m}athfrak begin{equation}\label{a3elementj}
{\bf m}athfrak hat{\bf m}u_{l+1,i}{\bf m}athbf det {\bf w}idehat M^{2\cup[n-l+2,n]}_{[1l]}-{\bf m}athfrak hat{\bf m}u_{l+1,2}{\bf m}athbf det {\bf w}idehat M^{i\cup[n-l+2,n]}_{[1l]}=
(-1)^{\chi_i+1}{\bf w}idehat M^{2\cup i\cup[n-l+2,n]}_{[1,l+1]}.
{\bf e}nd{equation}
By the Schur determinant lemma,
\[
{\bf m}athbf det A'={\bf m}athbf det A_4\cdot{\bf m}athbf det\left(A_1'-A_2A_4^{-1}A_3'{\bf r}ight)={\bf p}m {\bf m}athfrak hat{\bf m}u_{l+1,2}\left(\frac{{\bf p}si_{n-l-1,l}}{h_{2n}}{\bf r}ight)^{(n-l)(n-l-3)/2+1}{\bf m}athbf det A'',
\]
where $A''$ is the $(n-l)\times (n-l)$ matrix satisfying $A''_{[1,n-l-1]}=(A_2A_3')_{[1,n-l-1]}$ and $A''_{[n-l]}=(A_1')_{[n-l]}$.
Using~{\bf e}qref{a2element},~{\bf e}qref{a3elementi}, and~{\bf e}qref{a3elementj}, we compute
the entry $A''_{n-l-t,i}$ for $1\le t\le n-l-1$, $1\le i\le n-l+1$, $i\mathfrak ne 2$, as
{\bf m}athfrak begin{equation*}
{\bf m}athfrak begin{split}
&\operatorname{sign}um_{j<i}\operatorname{sign}um_{\operatorname{sign}ubstack{I\operatorname{sign}ubseteq [1,n-l+1]\operatorname{sign}etminus\{2,i,j\}\\ |I|=t-1}}(-1)^{\theta(j,i,I)+\chi_j+1}{\bf m}athbf det{\bf w}idehat M^{2\cup I}_{[l+1,l+t]}
{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{i\cup j\cup I}^{[2, t+2]}{\bf m}athbf det{\bf w}idehat M_{[1,l+1]}^{2\cup j\cup [n-l+2,n]}\\
&+\operatorname{sign}um_{j>i}\operatorname{sign}um_{\operatorname{sign}ubstack{I\operatorname{sign}ubseteq [1,n-l+1]\operatorname{sign}etminus\{2,i,j\}\\ |I|=t-1}}(-1)^{\theta(i,j,I)+\chi_i}{\bf m}athbf det{\bf w}idehat M^{2\cup I}_{[l+1,l+t]}
{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{i\cup j\cup I}^{[2, t+2]}{\bf m}athbf det{\bf w}idehat M_{[1,l+1]}^{2\cup j\cup [n-l+2,n]}\\
&=\operatorname{sign}um_{\operatorname{sign}ubstack{J\operatorname{sign}ubseteq [1,n-l+1]\operatorname{sign}etminus\{2,i\}\\ |J|=t}}{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{i\cup J}^{[2, t+2]}\\
&{\bf q}quad\times\left(\operatorname{sign}um_{j_r<i}
(-1)^{\theta(j_r,i,J\operatorname{sign}etminus j_r)+\chi_{j_r}+1}{\bf m}athbf det{\bf w}idehat M^{2\cup J\operatorname{sign}etminus j_r}_{[l+1,l+t]}{\bf m}athbf det{\bf w}idehat M_{[1,l+1]}^{2\cup j_r\cup [n-l+2,n]}{\bf r}ight.\\
&{\bf q}quad{\bf q}quad+\left.\operatorname{sign}um_{j_r>i} (-1)^{\theta(i,j_r,J\operatorname{sign}etminus j_r)+\chi_i+1}{\bf m}athbf det{\bf w}idehat M^{2\cup J\operatorname{sign}etminus j_r}_{[l+1,l+t]}{\bf m}athbf det{\bf w}idehat M_{[1,l+1]}^{2\cup j_r\cup [n-l+2,n]}
{\bf r}ight).
{\bf e}nd{split}
{\bf e}nd{equation*}
Analyzing the same three cases as in the computation of the entries of $A_1'$, we conclude that the second factor in the above expression can be rewritten as
\[
(-1)^{\chi_J+\chi_i+r^*+1}\operatorname{sign}um_{r=1}^t(-1)^{r+1}{\bf m}athbf det{\bf w}idehat M^{2\cup J\operatorname{sign}etminus j_r}_{[l+1,l+t]}{\bf m}athbf det{\bf w}idehat M_{[1,l+1]}^{2\cup j_r\cup [n-l+2,n]}.
\]
To evaluate the latter sum, we multiply ${\bf w}idehat M_{[1,l+t]}^{2\cup J\cup [n-l+2,n]}$ by a matrix ${\bf m}athfrak begin{pmatrix} \mathbf 1_{l+1} & 0\\ 0 & N {\bf e}nd{pmatrix}$, where
$N$ is unipotent lower triangular, so that the entries of column~2 in rows $l+2,{\bf m}athbf dots,l+t$ vanish. This operation preserves the minors involved in the above sum, and hence
the latter is equal to
\[
\operatorname{sign}um_{r=1}^t(-1)^{r+1+\chi_J}{\bf m}athfrak hat{\bf m}u_{l+1,2}{\bf m}athbf det{\bf w}idehat M^{J\operatorname{sign}etminus j_r}_{[l+2,l+t]}{\bf m}athbf det{\bf w}idehat M_{[1,l+1]}^{2\cup j_r\cup [n-l+2,n]}=
{\bf m}athfrak hat{\bf m}u_{l+1,2}{\bf m}athbf det{\bf w}idehat M_{[1,l+t]}^{2\cup J\cup [n-l+2,n]}.
\]
Thus, finally,
\[
A''_{n-l-t,i}={\bf m}athfrak hat{\bf m}u_{l+1,2}\operatorname{sign}um_{\operatorname{sign}ubstack{J\operatorname{sign}ubseteq [1,n-l+1]\operatorname{sign}etminus\{2,i\}\\ |J|=t}}(-1)^{\chi_J+\chi_i+r^*+1}
{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{i\cup J}^{[2, t+2]}{\bf m}athbf det{\bf w}idehat M_{[1,l+t]}^{2\cup J\cup [n-l+2,n]}.
\]
To evaluate ${\bf m}athbf det A''$, we multiply $A''$ by the $(n-l)\times (n-l)$ upper triangular matrix having $(-1)^i{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{[1,k]\operatorname{sign}etminus\{2,i\}}^{[2,k-1]}$ in row $i$ and column $k$, $i,k\in [1,n-l+1]\operatorname{sign}etminus 2$, $i\le k$; we assume that for $i=k=1$ this expression equals~1. Denote the obtained product by $A'''$ and note
that ${\bf m}athbf det A'''={\bf p}m{\bf m}athbf det A''{\bf p}rod_{k=3}^{n-l+1}{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si^{[2,k-1]}_{[1,k-1]\operatorname{sign}etminus 2}$.
The entry $A'''_{n-l-t,k}$ equals
{\bf m}athfrak begin{equation*}
{\bf m}athfrak begin{split}
&{\bf m}athfrak hat{\bf m}u_{l+1,2}\operatorname{sign}um_{i\in [1,k]\operatorname{sign}etminus2}(-1)^i{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{[1,k]\operatorname{sign}etminus\{2,i\}}^{[2,k-1]}\\
&{\bf q}quad{\bf q}quad\times\operatorname{sign}um_{\operatorname{sign}ubstack{J\operatorname{sign}ubseteq [1,n-l+1]\operatorname{sign}etminus\{2,i\}\\ |J|=t}}(-1)^{\chi_J+\chi_i+r^*+1}
{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{i\cup J}^{[2, t+2]}{\bf m}athbf det{\bf w}idehat M_{[1,l+t]}^{2\cup J\cup [n-l+2,n]}\\
&={\bf m}athfrak hat{\bf m}u_{l+1,2}\operatorname{sign}um_{\operatorname{sign}ubstack{J\operatorname{sign}ubseteq [1,n-l+1]\operatorname{sign}etminus2\\ |J|=t}}{\bf m}athbf det{\bf w}idehat M_{[1,l+t]}^{2\cup J\cup [n-l+2,n]}\\
&{\bf q}quad{\bf q}quad\times\operatorname{sign}um_{i\in [1,k]\operatorname{sign}etminus2}
(-1)^{\chi_J+\chi_i+i+r^*+1}{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{[1,k]\operatorname{sign}etminus\{2,i\}}^{[2,k-1]}{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{i\cup J}^{[2, t+2]};
{\bf e}nd{split}
{\bf e}nd{equation*}
the equality follows from the fact that ${\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{i\cup J}^{[2, t+2]}=0$ for $i\in J$. Expanding the latter determinant by the $i$-th row, we see that the entry
in question equals
{\bf m}athfrak begin{equation*}
{\bf m}athfrak begin{split}
{\bf m}athfrak hat{\bf m}u_{l+1,2}\operatorname{sign}um_{\operatorname{sign}ubstack{J\operatorname{sign}ubseteq [1,n-l+1]\operatorname{sign}etminus2\\ |J|=t}}&(-1)^{\chi_J+1}{\bf m}athbf det{\bf w}idehat M_{[1,l+t]}^{2\cup J\cup [n-l+2,n]}\\
&\times\operatorname{sign}um_{s=2}^{t+2}(-1)^s{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{J}^{[2, t+2]\operatorname{sign}etminus s}\operatorname{sign}um_{i\in [1,k]\operatorname{sign}etminus2}(-1)^{\chi_i+i}{\bf m}athfrak hat{\bf m}u_{is}
{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{[1,k]\operatorname{sign}etminus\{2,i\}}^{[2,k-1]}.
{\bf e}nd{split}
{\bf e}nd{equation*}
Note that the inner sum above equals
\[
{\bf m}athbf det{\bf w}idehat {{\bf m}athcal P}si_{[1,k]\operatorname{sign}etminus 2}^{s\cup [2,k-1]}=
{\bf m}athfrak begin{cases}
0 , {\bf q}quad{\bf q}quad{\bf q}uad 2\le s \le k-1,\\
{\bf m}athbf det{\bf w}idehat {{\bf m}athcal P}si_{[1,k]\operatorname{sign}etminus 2}^{[2,k]}, {\bf q}uad s=k.
{\bf e}nd{cases}
\]
We conclude that $A'''_{n-l-t,k}=0$ if $t+2<k$, and hence ${\bf m}athbf det A'''$ equals the product of the entries on the main antidiagonal. The latter correspond to
$t+2=k$ and are equal to
\[
(-1)^{t+1}{\bf m}athfrak hat{\bf m}u_{l+1,2}{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{[1,t+2]\operatorname{sign}etminus2}^{[2,t+2]}\operatorname{sign}um_{\operatorname{sign}ubstack{J\operatorname{sign}ubseteq [1,n-l+1]\operatorname{sign}etminus2\\ |J|=t}}(-1)^{\chi_J}
{\bf m}athbf det{\bf w}idehat M_{[1,l+t]}^{2\cup J\cup [n-l+2,n]}{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_J^{[2,t+1]},
\]
which by~{\bf e}qref{psiviabinet} coincides with ${\bf p}m{\bf m}athfrak hat{\bf m}u_{l+1,2}{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{[1,t+2]\operatorname{sign}etminus2}^{[2,t+2]}{\bf p}si_{n-l-t,l-1}$. Therefore,
\[
{\bf m}athbf det A'''={\bf p}m{\bf m}athfrak hat{\bf m}u_{l+1,2}^{n-l-1}{\bf p}rod_{t=1}^{n-l-1}{\bf p}si_{n-l-t,l-1}{\bf p}rod_{t=0}^{n-l-1}{\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si_{[1,t+2]\operatorname{sign}etminus 2}^{[2,t+2]},
\]
and hence, taking into account~{\bf e}qref{solidpsi} for $t=n-l+1$, we get
\[
{\bf m}athbf det A={\bf p}m \frac{{\bf p}si_{n-l-1,l}^{(n-l)(n-l-3)/2+1}{\bf p}si_{l-1,1}}{h_{2n}^{(n-l)(n-l-1)/2+2}}{\bf p}rod_{t=1}^{n-l-1}{\bf p}si_{n-l-t,l-1}.
\]
So, all minors ${\bf m}athbf det {\bf w}idehat M^{2\cup i\cup[n-l+3,n]}_{[1,l]}$ for $1\le i\le n-l+1$, $i\mathfrak ne 2$ are Laurent polynomials in component VI, and hence so are all
entries in the rows $1,3,4,{\bf m}athbf dots,n$ of ${\bf w}idetilde M$.
The entries of the second row
are restored via Lemma~{\bf r}ef{row1viac} as polynomials in the entries of other rows and variables $c_i$ divided by the ${\bf m}athbf det K={\bf m}athbf det{\bf w}idehat{{\bf m}athcal P}si=
{\bf p}m h_{11}{\bf p}si_{11}/h_{2n}^n$.
{\bf m}athfrak begin{remark}\label{ultinohii}
Once we have established that stable variables $h_{ii}$ do not enter denominators of Laurent expressions for entries of $U$
at ${\bf w}idetilde\Sigma'_n$ (see Remark~{\bf r}ef{nohii}), this remains valid for any other cluster in ${\bf m}athcal N_0$. This is guaranteed by the fact that entries of
$U$ are Laurent polynomials in any cluster in ${\bf m}athcal N_0$ and the
condition that exchange polynomials are not divisible by any of the stable variables.
{\bf e}nd{remark}
\operatorname{sign}ection{Auxiliary results from matrix theory}\label{aux}
\operatorname{sign}ubsection{Proof of Proposition~{\bf r}ef{polyrel}}
\label{long}
This proposition is an easy corollary of the following more general result.
{\bf m}athfrak begin{proposition}
\label{long_identity}
Let $A$ be a complex $n\times n$ matrix. For $u,v\in {{\bf m}athbb C}^n$, define matrices
{\bf m}athfrak begin{gather*}\mathfrak nonumber
K(u)=\left[ u \; A u \; A^2 u {\bf m}athbf dots A^{n-1} u{\bf r}ight],\\
K_1(u,v)=\left[ v \; u \; A u {\bf m}athbf dots A^{n-2} u{\bf r}ight],\;\;
K_2(u,v)=\left[A v \; u \;A u {\bf m}athbf dots A^{n-2} u{\bf r}ight].
{\bf e}nd{gather*}
In addition, let $w$ be the last row of the classical adjoint of $ K_1(u,v)$, i.e.
$w K_1(u,v) = \left ({\bf m}athbf det K_1(u,v) {\bf r}ight )e_n^T$. Define
$ K^*(u,v)$ to be the matrix with rows $w, w A, \ldots, w A^{n-1}$. Then
{\bf m}athfrak begin{equation*}
\label{longid}
{\bf m}athbf det{{\bf m}athcal B}ig({\bf m}athbf det K_1(u,v) A - {\bf m}athbf det K_2(u,v) \mathbf 1 {{\bf m}athcal B}ig ) = (-1)^{\frac{n(n-1)}{2}} {\bf m}athbf det K(u) {\bf m}athbf det K^*(u,v).
{\bf e}nd{equation*}
{\bf e}nd{proposition}
{\bf m}athfrak begin{proof} It suffices to prove the identity for generic $A, u$ and $v$. In particular, we can assume that $A$ has distinct eigenvalues.
Then, after a change of basis, we can reduce the proof to the case of a diagonal $A={\bf m}athbf diag(a_1,\ldots, a_n)$ and vectors $u=(u_i)_{i=1}^n$ and
$v=(v_i)_{i=1}^n$ with all entries non-zero. In this case,
$$
{\bf m}athbf det K(u) = \operatorname{Van} A {\bf p}rod_{j=1}^n u_j, {\bf q}quad {\bf m}athbf det K^*(u,v) = \operatorname{Van} A {\bf p}rod_{j=1}^n w_j,
$$
where $\operatorname{Van} A$ is the $n\times n$
Vandermonde determinant based on $a_1,\ldots, a_n$ and $w_j$ is the $j$th component of the row vector $w$.
The the left-hand side of the identity can be rewritten as
$$
{\bf p}rod_{j=1}^n \left (a_j {\bf m}athbf det K_1(u,v) - {\bf m}athbf det K_2(u,v) {\bf r}ight ) =
{\bf p}rod_{j=1}^n {\bf m}athbf det \left[ (a_j\mathbf 1 - A)v\ u \ Au \ldots\ A^{n-2} u {\bf r}ight ].
$$
We compute the $j$th factor in the product above as
{\bf m}athfrak begin{equation}\label{Pj}
{\bf m}athfrak begin{split}
P_j&=\operatorname{sign}um_{\alpha\mathfrak ne j} (-1)^{\alpha + 1} (a_j - a_\alpha) {v_\alpha} \operatorname{Van} A_{\{\alpha\}}
{\bf p}rod_{i\mathfrak ne\alpha} u_i \\
&=\left(u_j {\bf p}rod_{{\bf m}athfrak beta\mathfrak ne j} (a_j -a_{\bf m}athfrak beta){\bf r}ight) \operatorname{sign}um_{\alpha\mathfrak ne j} (-1)^{\alpha + 1}
(-1)^{n-j -\theta(\alpha - j)} {v_\alpha}\operatorname{Van} A_{\{\alpha,j\}}{\bf p}rod_{i\mathfrak ne\alpha,j} u_i,
{\bf e}nd{split}
{\bf e}nd{equation}
where $\operatorname{Van} A_I$ is the $(n-|I|)\times (n-|I|)$
Vandermonde determinant based on $a_i$, $i\mathfrak notin I$, and $\theta(\alpha - j)$ is 1 if $\alpha > j$ and $0$ otherwise.
Note that
$$
{\bf p}rod_{j=1}^n u_j {\bf p}rod_{{\bf m}athfrak beta\mathfrak ne j} (a_j -a_{\bf m}athfrak beta) = (-1)^{\frac{n(n-1)}{2}}(\operatorname{Van} A)^2{\bf p}rod_{j=1}^n u_j = (-1)^{\frac{n(n-1)}{2}}\operatorname{Van} A {\bf m}athbf det K(u),
$$
and that the minor of $ K(u)$ obtained by deleting the last two columns and rows $\alpha$ and $j$ is
$$
{\bf m}athbf det K(u)^{[1,n-2]}_{[1,n]\operatorname{sign}etminus\{\alpha,j\}} = \operatorname{Van} A_{\{\alpha,j\}}{\bf p}rod_{i\mathfrak ne\alpha,j} u_i.
$$
Therefore
{\bf m}athfrak begin{equation*}
{\bf m}athfrak begin{aligned}
w_j =& (-1)^{n+j} {\bf m}athbf det K_1(u,v)^{[1,n-1]}_{[1,n]\operatorname{sign}etminus\{j\}}\\
=&(-1)^{n+j} \operatorname{sign}um_{\alpha\mathfrak ne j}
(-1)^{ \alpha + 1-\theta(\alpha - j)}v_\alpha {\bf m}athbf det K(u)^{[1,n-2]}_{[1,n]\operatorname{sign}etminus\{\alpha,j\}}.
{\bf e}nd{aligned}
{\bf e}nd{equation*}
Comparing with~{\bf e}qref{Pj}, we obtain
{\bf m}athfrak begin{align*}
P_1&\cdots P_n \\
&= (-1)^{\frac{n(n-1)}{2}} {\bf m}athbf det K(u) \operatorname{Van} A {\bf p}rod_{j=1}^n \operatorname{sign}um_{\alpha\mathfrak ne j}
(-1)^{n + \alpha - j + 1-\theta(\alpha - j)}v_\alpha {\bf m}athbf det K(u)^{[1,n-2]}_{[1,n]\operatorname{sign}etminus\{\alpha,j\}}\\
& = (-1)^{\frac{n(n-1)}{2}} {\bf m}athbf det K(u) \operatorname{Van} A {\bf p}rod_{j=1}^n w_j = (-1)^{\frac{n(n-1)}{2}} {\bf m}athbf det K(u) {\bf m}athbf det K^*(u,v),
{\bf e}nd{align*}
as needed.
{\bf e}nd{proof}
Specializing Proposition~{\bf r}ef{long_identity} to the case $A=X^{-1} Y$, $u=e_n$, $v= e_{n-1}$, one obtains Proposition~{\bf r}ef{polyrel}.
\operatorname{sign}ubsection{Normal forms}
In this section we derive several normal forms that were used in the main body of the paper.
{\bf m}athfrak begin{lemma}
\label{BC}
For a generic $n\times n$ matrix $U$ there exist a unique unipotent lower triangular matrix $N_-$ and an upper triangular matrix $B_+$ such that
{\bf m}athfrak begin{equation*}
U = N_- B_+ C N_-^{-1},
{\bf e}nd{equation*}
where $C=e_{21} + \cdots + e_{n, n-1} + e_{1n}$ is the cyclic permutation matrix.
{\bf e}nd{lemma}
{\bf m}athfrak begin{proof}
Let $N_1= \mathbf 1 - \operatorname{sign}um_{i=2}^n u_{in}/u_{1n}$. Then $U_1= N_1 U N_1^{-1}$ is a matrix whose last column has the first entry equal to $u_{1n}$ and all other
entries equal to zero. Next, let $N_2$ be a unipotent lower triangular matrix such that
(i) off-diagonal entries in the first column of $N_2$ are zero, and
(ii) $U_2=N_2 U_1 N_2^{-1}$ is in the upper Hessenberg form, that, is has zeroes below the first subdiagonal.
\mathfrak noindent
Then $U_2$ has a required form $B_+C$, where $B_+$ is upper triangular.
To establish uniqueness, it is enough to show that if $B_1, B_2$ are invertible upper triangular matrices and $N$ is lower unipotent matrix, such that
$N B_1 C = B_2 C N$, then $N=\mathbf 1$. Comparing the last columns on both sides of the equality, it is easy to see that off-diagonal elements in the first column of $N$ are zero. Then, comparison
of first columns implies that the same is true for off-diagonal elements in the second column of $N$, etc.
{\bf e}nd{proof}
{\bf m}athfrak begin{lemma}
\label{BW}
For a generic $n\times n$ matrix $U$ there exist a unique unipotent lower triangular matrix $N_-$ and an upper triangular matrix $B_+$ such that
{\bf m}athfrak begin{equation*}
U = N_- B_+ W_0 N_-^{-1},
{\bf e}nd{equation*}
where $W_0$ is the matrix of the longest permutation.
{\bf e}nd{lemma}
{\bf m}athfrak begin{proof}
Equality $U N_- = N_- B_+ W_0$ implies $W_0 U\cdot N_- = W_0 N_- W_0\cdot W_0 B_+ W_0$. Using the uniqueness of the Gauss factorization, we obtain
$W_0 N_- W_0 =\left (W_0 U{\bf r}ight )_{> 0} $ and $W_0 B_+ W_0 = \left (W_0 U{\bf r}ight )_{\leq 0} N_-$, and thus recover
$N_-= W_0 \left (W_0 U{\bf r}ight)_{>0}W_0 $ and $B_+= W_0\left (W_0 U{\bf r}ight)_{\leq 0} W_0 \left (W_0 U{\bf r}ight)_{>0}$.
{\bf e}nd{proof}
{\bf m}athfrak begin{lemma}
\label{NMN}
For a generic $n\times n$ matrix $U$ there exists a unique representation
{\bf m}athfrak begin{equation}
\label{normalNMN}
U = \left (\mathbf 1_n + \mathfrak nu e_{12}{\bf r}ight ) N_- M N_-^{-1} \left (\mathbf 1_n - \mathfrak nu e_{12}{\bf r}ight ),
{\bf e}nd{equation}
where $N_-=(\mathfrak nu_{ij})$ is a unipotent lower triangular matrix with $\mathfrak nu_{j1}=0$ for $2\le j\le n$ and $M=({\bf m}u_{ij})$ has ${\bf m}u_{1n}=0$
and ${\bf m}u_{i,n+2-j}=0$ for $2\leq j < i\leq n$.
{\bf e}nd{lemma}
{\bf m}athfrak begin{proof} First, observe that if~{\bf e}qref{normalNMN} is valid, then ${\bf m}u_{2n}=u_{2n}$, and $\mathfrak nu ={u_{1n}} /{u_{2n}}$. Denote
$U'= \left (\mathbf 1_n - \mathfrak nu e_{12}{\bf r}ight ) U\left (\mathbf 1_n + \mathfrak nu e_{12}{\bf r}ight )$. Then~{\bf e}qref{normalNMN} implies that the first row and column
of $M$ coincide with those of $U'$, and
that $M_{[2,n]}^{[2,n]}$ and $N$ are uniquely determined by applying Lemma~{\bf r}ef{BW} to ${U'}_{[2,n]}^{[2,n]}$.
{\bf e}nd{proof}
\operatorname{sign}ubsection{Matrix entries via eigenvalues}
For an $n\times n$ matrix $A$, denote $K= K(e_1)$, where $ K(u)$ is defined in Proposition~{\bf r}ef{long_identity}.
{\bf m}athfrak begin{lemma} \label{row1viac}
Every matrix entry in the first row of $A$ can be expressed as ${P}/{{\bf m}athbf det K}$, where $P$ is a polynomial in matrix entries
of the last $n-1$ rows of $A$ and coefficients of the characteristic polynomial of $A$.
{\bf e}nd{lemma}
{\bf m}athfrak begin{proof} Let ${\bf m}athbf det (\lambda \mathbf 1 - A) = \lambda^n + c_1 \lambda^{n-1} + \cdots +c_n$.
Compare two expressions for the first row $R(\lambda)$ of the resolvent $(\lambda \mathbf 1 - A)^{-1}$ of $A$:
$$
R(\lambda)= \operatorname{sign}um_{j{\bf m}athfrak geq 0} \frac{1}{\lambda^{j+1}} A^j e_1= \frac{1}{{\bf m}athbf det (\lambda \mathbf 1 - A) } \left ( \lambda^{n-1} e_1 + \lambda^{n-2} Q_2 +\cdots + Q_{n-1} {\bf r}ight ),
$$
where $Q_i$, $2\le i\le n-1$, are vectors polynomial in matrix entries of the last $n-1$ rows of $A$. Cross-multiplying by ${\bf m}athbf det (\lambda \mathbf 1 - A)$ and comparing coefficients at positive degrees of $\lambda$ on both sides, we obtain
$$
c_i e_1+ c_{i-1} A e_1 + \cdots + c_{1} A^{i-1} e_1 + A^{i} e_1 = Q_{i}, {\bf q}quad 2\le i\le n-1.
$$
Let $T$ be an $n\times n$ upper triangular unipotent Toeplitz matrix with entries of the $i$th superdiagonal equal to $c_i$, and let $Q$ be the matrix
with columns $e_1, Q_2,\ldots, Q_{n-1}$. Then the equations above can be re-written as a single matrix equation $K T = Q$. Note also that $A K = K C_A$, where $C_A$ is a companion matrix of $A$ with $1$s on the first subdiagonal, $-c_n,\ldots, -c_{1}$ in the last column, and $0$s everywhere else.
Therefore, $A K T = K T\cdot T^{-1} C_A T = Q T^{-1} C_A T$ (note that $C'_A=T^{-1} C_A T= W_0 C_A^T W_0$ is an alternative companion form of $A$).
Consequently, the first row $A_{[1]}$ of $A$ satisfies the linear equation $A_{[1]} K T = Q_{[1]} C_A'$, and the claim follows.
{\bf e}nd{proof}
{\bf m}athfrak begin{lemma}
\label{Hessenberg}
Let $H$ be an $n\times n$ upper Hessenberg matrix, $C^H=\left [ {\bf m}athfrak begin{array}{cc} \operatorname{sign}tar & \operatorname{sign}tar\\ \mathbf 1_{n-1}& 0{\bf e}nd{array} {\bf r}ight ]$ be its upper Hessenberg companion and $N=(\mathfrak nu_{ij})$ be a unipotent upper triangular matrix such that $N^{-1} H N = C^H$. Then the row vector $\mathfrak nu=(-\mathfrak nu_{1i})_{i=2}^n$ coincides with the first row of the companion form of $\tilde H=H_{[2,n]}^{[2,n]}$.
{\bf e}nd{lemma}
{\bf m}athfrak begin{proof} Factor $N$ as $N=\left [ {\bf m}athfrak begin{array}{cc} 1 & 0\\ 0 & \tilde N{\bf e}nd{array} {\bf r}ight ] \left [ {\bf m}athfrak begin{array}{cc} 1 & \mathfrak nu\\ 0 & \mathbf 1_{n-1}{\bf e}nd{array} {\bf r}ight ] $. Then
\[
{\tilde N}^{-1} \tilde H \tilde N = \left ( \left [ {\bf m}athfrak begin{array}{cc} \mathbf 1_{n-1} & 0 {\bf e}nd{array} {\bf r}ight ] \left [ {\bf m}athfrak begin{array}{cc} 1 & -\mathfrak nu\\ 0 & \mathbf 1_{n-1}{\bf e}nd{array} {\bf r}ight ] {\bf r}ight )^{[2,n]},
\]
and the claim follows.
{\bf e}nd{proof}
\operatorname{sign}ection*{Acknowledgments}
M.~G.~was supported in part by NSF Grant DMS \#1362801.
M.~S.~was supported in part by NSF Grants DMS \#1362352.
A.~V.~was supported in part by ISF Grant \#162/12.
The authors would like to thank the following institutions for support and excellent working conditions:
Max-Planck-Institut f\"ur Mathematik, Bonn (M.~G., Summer 2014),
Institut des Hautes \'Etudes Scientifiques (A.~V., Fall 2015), Stockholm University and Higher School of Economics, Moscow (M.~S., Fall 2015), Universit\'e Claude Bernard Lyon~1 and Universit\'e Paris Diderot (M.~S., Spring 2016).
This paper was completed during the joint visit of the authors to the University of Notre Dame London Global Gateway in May 2016. The authors are grateful to this
institution for warm hospitality. Special thanks are due to A.~Berenstein, A.~Braverman, Y.~Greenstein, D.~Rupel, G.~Schrader and A.~Shapiro for valuable discussions and to an anonymous referee for helpful comments.
{\bf m}athfrak begin{thebibliography}{00}
{\bf m}athfrak bibitem{BD} A.~Belavin and V.~Drinfeld,
\textit{Solutions of the classical Yang-Baxter equation for simple Lie algebras}.
Funktsional. Anal. i Prilozhen. {{\bf m}athfrak bf16} (1982), 1--29.
{\bf m}athfrak bibitem {CAIII} A.~Berenstein, S.~Fomin, and A.~Zelevinsky,
\textit{Cluster algebras. III. Upper bounds and double Bruhat cells}.
Duke Math. J. \textbf{126} (2005), 1--52.
{\bf m}athfrak bibitem{Boch} M.~Bocher,
\textit{Introduction to higher algebra}. Dover Publications, 2004.
{\bf m}athfrak bibitem{Bra} R.~Brahami,
\textit{Cluster $\chi$-varieties for dual Poisson-Lie groups. I.}
Algebra i Analiz \textbf{22} (2010), 14--104.
{\bf m}athfrak bibitem{CP} V.~Chari and A.~Pressley, \textit{A guide to quantum groups}.
Cambridge University Press, 1994.
{\bf m}athfrak bibitem{CheSha} L.~Chekhov and M.~Shapiro,
\textit{ Teichm\"uller spaces of Riemann surfaces with orbifold points of arbitrary order and cluster variables}. IMRN (2014), no.~10, 2746--2772.
{\bf m}athfrak bibitem{EvLu} S.~Evens and J.-H.~Lu, \textit{Poisson geometry of the Grothendieck resolution of a complex semisimple group}. Mosc. Math. J. \textbf{7} (2007), 613--642.
{\bf m}athfrak bibitem{FoPy}
S.~Fomin and P.~Pylyavskyy, {\it Tensor diagrams and cluster algebras}.
Adv. Math. \textbf{300} (2017), 717--787.
{\bf m}athfrak bibitem{FoRe}
S.~Fomin and N.~Reading, {\it Root systems and generalized associahedra}.
In: Geometric Combinatorics, AMS, Providence, RI, 2007, 63--131.
{\bf m}athfrak bibitem{FoZe}
S.~Fomin and A.~Zelevinsky, {\it The Laurent phenomenon}.
Adv. Appl. Math. \textbf{28} (2002), 119--144.
{\bf m}athfrak bibitem{Fra}
C.~Fraser, {\it Quasi-homomorphisms of cluster algebras}.
Adv. Appl. Math. \textbf{81} (2016), 40--77.
{\bf m}athfrak bibitem{Gant}
F.~Gantmacher,
\textit{The theory of matrices, vol.1}. American Mathematical Society, Providence, RI, 1998.
{\bf m}athfrak bibitem{LMP} M.~Gekhtman, M.~Shapiro, A.~Stolin, and A.~Vainshtein,
\textit{Poisson structures compatible with the cluster algebra structures in Grassmannians}.
Lett. Math. Phys. \textbf{100} (2012), 139--150.
{\bf m}athfrak bibitem{GSV1} M.~Gekhtman, M.~Shapiro, and A.~Vainshtein,
\textit{Cluster algebras and Poisson geometry}.
Mosc. Math. J. \textbf{3} (2003), 899--934.
{\bf m}athfrak bibitem{GSVb} M.~Gekhtman, M.~Shapiro, and A.~Vainshtein,
\textit{Cluster algebras and Poisson geometry}.
Mathematical Surveys and Monographs, 167. American Mathematical Society, Providence, RI, 2010.
{\bf m}athfrak bibitem{GSVMMJ} M.~Gekhtman, M.~Shapiro, and A.~Vainshtein,
\textit{Cluster structures on simple complex Lie groups and Belavin--Drinfeld classification}.
Mosc. Math. J. \textbf{12} (2012), 293--312.
{\bf m}athfrak bibitem{GSVPNAS} M.~Gekhtman, M.~Shapiro, and A.~Vainshtein, {\it Cremmer-Gervais cluster structure on $SL_n$}.
Proc. Natl. Acad. Sci. USA {{\bf m}athfrak bf 111} (2014), no.~27, 9688--9695.
{\bf m}athfrak bibitem{GSVcr} M.~Gekhtman, M.~Shapiro, and A.~Vainshtein,
\textit{Generalized cluster structure on the Drinfeld double of $GL_n$}.
C.~R.~Math. Acad. Sci. Paris \textbf{354} (2016), 345--349.
{\bf m}athfrak bibitem{GSVMem} M.~Gekhtman, M.~Shapiro, and A.~Vainshtein, {\it Exotic cluster structures on $SL_n$: the Cremmer--Gervais case}.
Memoirs of the AMS \textbf{246} (2017), no.~1165, 94pp.
{\bf m}athfrak bibitem{r-sts} A.~Reyman and M.~Semenov-Tian-Shansky,
\textit{Group-theoretical methods in the theory of
finite-dimensional integrable systems}. Encyclopaedia of
Mathematical Sciences, vol.16, Springer--Verlag, Berlin, 1994 pp.116--225.
{\bf m}athfrak bibitem{Ya} M.~Yakimov,
{\it Symplectic leaves of complex reductive Poisson-Lie groups}. Duke Math. J. \textbf{112} (2002),
453--509.
{\bf e}nd{thebibliography}
{\bf e}nd{document}
|
\begin{equation}gin{document}
\title{Duality-based Asymptotic-Preserving method for highly
anisotropic diffusion equations}
\author{Pierre Degond\footnotemark[2]\ \footnotemark[3] \and Fabrice
Deluzet\footnotemark[2]\ \footnotemark[3]
\and Alexei Lozinski\footnotemark[2]
\and Jacek Narski\footnotemark[2] \and Claudia Negulescu\footnotemark[4] }
\renewcommand{\arabic{footnote}}{\fnsymbol{footnote}}
\footnotetext[2]{Universit\'e de Toulouse, UPS, INSA, UT1, UTM, Institut de Math\'ematiques de Toulouse, F-31062 Toulouse, France}
\footnotetext[3]{CNRS, Institut de Math\'ematiques de Toulouse UMR 5219, F-31062 Toulouse, France}
\footnotetext[4]{CMI/LATP, Universit\'e de Provence, 39 rue Fr\'ed\'eric Joliot-Curie 13453 Marseille cedex 13}
\renewcommand{\arabic{footnote}}{\arabic{footnote}}
\maketitle
\begin{equation}gin{abstract}
The present paper introduces an efficient and
accurate numerical scheme for the solution of a highly anisotropic
elliptic equation, the anisotropy direction being given by a
variable vector field. This scheme is based on an asymptotic
preserving reformulation of the original system, permitting an
accurate resolution independently of the anisotropy strength and
without the need of a mesh adapted to this anisotropy. The
counterpart of this original procedure is the larger system size, enlarged by adding auxiliary variables
and Lagrange multipliers. This Asymptotic-Preserving method generalizes the method investigated in
a previous paper \cite{DDN} to the case of an arbitrary anisotropy
direction field.
\end{abstract}
\section{Introduction}
Anisotropic problems are common in mathematical modeling of physical
problems. They occur in various fields of applications such as flows
in porous media \cite{porous1,TomHou}, semiconductor modeling
\cite{semicond}, quasi-neutral plasma simulations \cite{Navoret},
image processing \cite{image1,Weickert}, atmospheric or oceanic flows
\cite{ocean} and so on, the list being not exhaustive. The initial
motivation for this work is closely related to magnetized plasma
simulations such as atmospheric plasma \cite{Kelley2,Kes_Oss},
internal fusion plasma \cite{Beer,Sangam} or plasma thrusters
\cite{SPT}. In this context, the media is structured by the magnetic
field, which may be strong in some regions and weak in others. Indeed,
the gyration of the charged particles around magnetic field lines
dominates the motion in the plane perpendicular to magnetic
field. This explains the large number of collisions in the
perpendicular plane while the motion along the field lines is rather
undisturbed. As a consequence the mobility of particles in different
directions differs by many orders of magnitude. This ratio can be as
huge as $10^{10}$. On the other hand, when the magnetic field is weak
the anisotropy is much smaller. As the regions with weak and strong
magnetic field can coexist in the same computational domain, one needs
a numerical scheme which gives accurate results for a large range of
anisotropy strengths. The relevant boundary conditions in many fields
of application are periodic (for instance in simulations of the
tokamak plasmas on a torus) or Neumann boundary conditions
(atmospheric plasma for example \cite{BCDDGT_1}). For these reasons
we propose a strongly anisotropic model problem for wich we wish to
introduce an efficient and accurate numerical scheme. This model
problem reads
\begin{equation}gin{gather}
\left\{
\begin{equation}gin{array}{ll}
- \nabla \cdot \mathbb A \nabla \phi^{\varepsilon } = f & \text{ in }
\Omega, \\[3mm]
n \cdot \mathbb A \nabla \phi^{\varepsilon }= 0
& \text{ on } \partial\Omega _N\,,\\[3mm]
\phi^{\varepsilon }= 0
& \text{ on } \partial\Omega _D\,,
\end{array}
\right.
\label{eq:Jg0a}
\end{gather}
where $\Omega \subset \mathbb{R}^{2}$ or $\Omega \subset \mathbb{R}^{3}$ is a
bounded domain with boundary $\partial \Omega = \partial\Omega _D
\cup \partial\Omega _N$ and outward normal $n$. The direction of the
anisotropy is defined by a vector field $B$, where we suppose
$\text{div} B = 0$ and $B \neq 0$. The direction of $B$ is given by a
vector field $b = B/|B|$. The anisotropy matrix is then defined as
\begin{equation}gin{gather} \mathbb A = \frac{1}{\varepsilon }
A_\parallel b \otimes b + (Id - b \otimes b)A_\perp (Id - b \otimes
b)
\label{eq:Jh0a}
\end{gather}
and $\partial \Omega _D = \{ x \in \partial\Omega \ | \ b (x) \cdot n
= 0 \}$. The scalar field $A_\parallel>0$ and the symmetric positive
definite matrix field $A_\perp$ are of order one while the parameter
$0 < \varepsilon < 1$ can be very small, provoking thus the high
anisotropy of the problem. This work extends the results of
\cite{DDN}, where the special case of a vector field $b$, aligned with
the $z$-axis, was studied.
\color{black}
An extension of this approach is proposed
in \cite{besse} to handle more realistic anisotropy topologies. It
relies on the introduction of a curvilinear coordinate system with one
coordinate aligned with the anisotropy direction. Adapted
coordinates are widely used in the framework of plasma
simulation (see for instance \cite{Beer, Dhaeseleer, Miamoto}),
coordinate systems being either developped to fit particular magnetic field
geometry or plasma equilibrium (Euler potentials \cite{Stern}, toroidal and poloidal
\cite{Gysela, poltor}, quasiballooning \cite{Dimits}, Hamada \cite{Hamada} and Boozer
\cite{Boozer} coordinates). Note that the study of certain plasma regions in
a tokamak have motivated the use of non-orthogonal coordinates systems \cite{Igitkhanov}.
In contrast with all these methods, we propose here a numerical scheme that uses coordinates and meshes independent of the
anisotropy direction, like in \cite{Ottaviani}. This
feature offers the capability to easily treat time evolving
anisotropy directions. This \fabrice{is very} important in the context
of tokamak plasma simulation, the anisotropy being driven by the
magnetic field which is time dependent.
\color{black}
One of the difficulties associated with the numerical solution of
problem (\ref{eq:Jg0a}) lies in the fact that this problem becomes
very ill-conditioned for small $0 < \varepsilon \ll 1$. Indeed,
replacing $\varepsilon $ by zero yields an ill-posed problem as it has
an infinite number of solutions (any function constant along the $b$
field solves the problem with $\varepsilon =0 $). In the discrete case
the problem translates into a linear system which is ill-conditioned,
as it mixes the terms of different orders of magnitude for
$\varepsilon \ll 1 $. As a consequence the numerical algorithm for
solving this linear system gives unacceptable errors (in the case of
direct solvers) or fails to converge in a reasonable time (in the case
of iterative methods).
This difficulty arises when the boundary conditions supplied to the
dominant $O (1/\varepsilon )$ operator lead to an ill-posed
problem. This is the case for Neumann boundary
conditions imposed on the part of the boundary with $b \cdot n \neq 0$ as well as for periodic boundary conditions.
If instead, the boundary conditions are such that the dominant operator
gives a well-posed problem, the numerical difficulty vanishes. One
can resort to standard methods, as the dominant operator is sufficient
to determine the limit solution. This is the case for Dirichlet and
Robin boundary conditions. The problem addressed in this paper arises
therefore only with specific boundary conditions. It has however a
considerable impact in numerous physical problems concerning plasmas,
geophysical flows, plates and shells as an example. In this paper, we
will focus on Neumann boundary condition since they represent a larger
range of physical applications. The periodic boundary
conditions can be addressed in a very similar way.
Numerical methods for anisotropic problems have been extensively
studied in the literature. Distinct methods have been
developed. Domain decomposition techniques using multiple coarse grid
corrections are adapted to the anisotropic equations in
\cite{Giraud,Koronskij}. Multigrid methods have been studied in
\cite{Gee,Notay}. For anisotropy aligned with one or two directions,
point or plane smoothers are shown to be very efficient
\cite{ICASE}. The $hp$-finite element method is also known to give
good results for singular perturbation problems \cite{Melenek}. All
these methods have in common that they try to discretize the
anisotropic PDE as it is written and then to apply purely numerical
tricks to circumvent the problems related to lack of accuracy of the
discrete solution or to the slow convergence of iterative
algorithms. This leads to methods which are rather difficult to
implement.
The approach that we pursue in this paper is entirely different: we
reformulate first the original PDE in such a way that the resulting
problem can be efficiently and accurately discretized by
straight-forward and easily implementable numerical methods for any
anisotropy strength. Our scheme is related to the {\it Asymptotic
Preserving} method introduced in \cite{ShiJin}. These techniques are
designed to give a precise solution in the various regimes with no
restrictions on the computational meshes and with additional property
of converging to the limit solution when $\varepsilon \rightarrow
0$. The derivation of the Asymptotic Preserving method requires
identification of the limit model. In the case of Singular
Perturbation problems, the original problem is reformulated in such a
way that the obtained set of equations contain both the limit model
and the original problem with a continuous transition between them,
according to the values of $\varepsilon$. This reformulated system of
equation sets the foundation of the AP-scheme. These Asymptotic
Preserving techniques have been explored in previous studies, for instance
quasi-neutral or gyro-fluid limits \cite{Crispel,Sangam}, as well as
anisotropic elliptic problems of the form (\ref{eq:Jg0a}) with vector $b$ aligned
with a coordinate axis \cite{DDN,besse}.
In this paper, we present a new algorithm which extends the results of
\cite{DDN}. The originality of this algorithm consists in the fact,
that it is applicable for variable anisotropy directions $b$, without
additional work. The discretization mesh has not to be adapted to the
field direction $b$, but is simply a Cartesian grid, whose mesh-size
is governed by the desired accuracy, independently on the anisotropy
strength $\eps$. All this is possible by a well-adapted mathematical
framework (optimally chosen spaces, introduction of Lagrange
multipliers). The key idea, as in \cite{DDN}, is to decompose the
solution $\phi $ into two parts: a mean part $p$ which is constant
along the field lines and the fluctuation part $q$ consisting of a
correction to the mean part needed to recover the full solution. Both
parts $p$ and $q$ are solutions to well-posed problems for any
$\varepsilon>0 $. In the limit of $\varepsilon \rightarrow 0 $ the
AP-reformulation reduces to the so called {\it Limit} model (L-model),
whose solution is an acceptable approximation of the P-model solution
for $\varepsilon \ll 1 $ (see Theorem \ref{thm_EX}). In \cite{DDN} the
Asymptotic Preserving reformulation of the original problem was
obtained in two steps. Firstly, the original problem was integrated
along the field lines ($z$-axis) leading to an $\varepsilon
$-independent elliptic problem for the mean part $p$. Secondly, the
mean equation was subtracted from the original problem and the
$\varepsilon $-dependent elliptic problem for the fluctuating part $q$
was obtained. This approach however is not applicable if the field $b$
is arbitrary. In this paper we present a new approach. Instead of
integrating the original problem along the arbitrary field lines, we
choose to force the mean part $p$ to lie in the Hilbert space of
functions constant along the field lines and the fluctuating part $q$
to be orthogonal (in $L^2$ sense) to this space. This is done by a
Lagrange multiplier technique and requires introduction of additional
variables thus enlarging the linear system to be solved. This method
allows to treat the arbitrary $b$ field case, regardless of the field
topology and thus eliminates the
limitations of the algorithm presented in \cite{DDN}.
\textcolor{black}{We note that an alternative method, bypassing the need in Lagrange multipliers, is proposed in \cite{brull}.
It is based on a reformulation of the original problem as a fourth order equation.}
\\
The outline of this paper is the following. Section \ref{sec:prob_def}
introduces the original anisotropic elliptic problem. The original
problem will be referred to as the Singular-{\it Perturbation} model
(P-model). The mathematical framework is introduced and the {\it
Asymptotic Preserving} reformulation (AP-model) is then
derived. Section \ref{sec:num_met} is devoted to the numerical
implementation of the AP-formulation. Numerical results are presented
for 2D and 3D test cases, for constant and variable fields $b$. Three
methods are compared (AP-formulation, P-model and L-model) according
to their precision for different values of $\varepsilon $. The
rigorous numerical analysis of this new algorithm will be the subject
of a forthcoming publication.
\section{Problem definition}\label{sec:prob_def}
We consider a two or three dimensional anisotropic problem, given on a
sufficiently smooth, bounded domain $\Omega \subset \mathbb R^d$,
$d=2,3$ with boundary $\partial \Omega$. The direction of the
anisotropy is defined by the vector field $b \in
(C^{\infty}(\Omega))^d$, satisfying $|b(x)|=1$ for all $x \in \Omega$.
\noindent Given this vector field $b$, one can now decompose vectors
$v \in \mathbb R^d$, gradients $\nabla \phi$, with $\phi(x)$ a scalar
function, and divergences $\nabla \cdot v$, with $v(x)$ a vector
field, into a part parallel to the anisotropy direction and a part
perpendicular to it. These parts are defined as follows:
\begin{equation}gin{equation}
\begin{equation}gin{array}{llll}
\ds v_{||}:= (v \cdot b) b \,, & \ds v_{\perp}:= (Id- b \otimes b) v\,, &\textrm{such that}&\ds
v=v_{||}+v_{\perp}\,,\\[3mm]
\ds \nabla_{||} \phi:= (b \cdot \nabla \phi) b \,, & \ds
\nabla_{\perp} \phi:= (Id- b \otimes b) \nabla \phi\,, &\textrm{such that}&\ds
\nabla \phi=\nabla_{||}\phi+\nabla_{\perp}\phi\,,\\[3mm]
\ds \nabla_{||} \cdot v:= \nabla \cdot v_{||} \,, & \ds
\nabla_{\perp} \cdot v:= \nabla \cdot v_{\perp}\,, &\textrm{such that}&\ds
\nabla \cdot v=\nabla_{||}\cdot v+\nabla_{\perp}\cdot v\,,
\end{array}
\end{equation}
where we denoted by $\otimes$ the vector tensor product. With these
notations we can now introduce the mathematical problem, the so-called
Singular Perturbation problem, whose numerical solution is the main
concern of this paper.
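For the reader who prefers a computational illustration, the following minimal
Python/numpy sketch (purely illustrative and not part of the method itself)
implements the decomposition $v=v_{||}+v_{\perp}$ for a given unit vector $b$;
the helper name \texttt{decompose} is ours and does not refer to any existing
library routine.
\begin{verbatim}
import numpy as np

def decompose(v, b):
    """Split v into parts parallel and perpendicular to the unit
    anisotropy direction b, so that v = v_par + v_perp."""
    b = np.asarray(b, dtype=float)
    b = b / np.linalg.norm(b)          # enforce |b| = 1
    v_par = np.dot(v, b) * b           # (v . b) b
    v_perp = v - v_par                 # (Id - b (x) b) v
    return v_par, v_perp

# example: b at 30 degrees in the (x, y) plane
b = np.array([np.cos(np.pi / 6), np.sin(np.pi / 6)])
v = np.array([1.0, 2.0])
v_par, v_perp = decompose(v, b)
assert np.allclose(v_par + v_perp, v)       # v = v_par + v_perp
assert abs(np.dot(v_perp, b)) < 1e-14       # v_perp is orthogonal to b
\end{verbatim}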
\subsection{The Singular Perturbation problem (P-model)}
We consider the following Singular Perturbation problem
\begin{equation}gin{gather}
(P)\,\,\,
\left\{
\begin{equation}gin{array}{ll}
-{1 \over \varepsilon} \nabla_\parallel \cdot
\left(A_\parallel \nabla_\parallel \phi^{\varepsilon }\right)
- \nabla_\perp \cdot
\left(A_\perp \nabla_\perp \phi^{\varepsilon }\right)
= f
& \text{ in } \Omega, \\[3mm]
{1 \over \varepsilon}
n_\parallel \cdot
\left( A_\parallel \nabla_\parallel \phi^{\varepsilon } \right)
+
n_\perp \cdot
\left(A_\perp \nabla_\perp \phi^{\varepsilon }\right)
= 0
& \text{ on } \partial\Omega _N, \\[3mm]
\phi^{\varepsilon }= 0
& \text{ on } \partial\Omega _D\,,
\end{array}
\right.
\label{eq:J07a}
\end{gather}
where $n$ is the outward normal to $\Omega $ and the boundaries are defined by
\begin{equation}gin{gather}
\partial\Omega _D = \{ x \in \partial\Omega \ | \ b (x) \cdot n = 0
\},\quad \quad \partial\Omega_N = \partial\Omega
\setminus \partial\Omega_D
\label{eq:Ju9a}.
\end{gather}
The parameter $0<\eps <1$ can be very small and is responsible for the
high anisotropy of the problem. The aim is to introduce a numerical
scheme, whose computational costs (simulation time and memory), for
fixed precision, are independent of $\eps$.\\
We shall assume in the rest of this
paper the following hypothesis on the diffusion coefficients and the
source term:\\
\noindent {\bf Hypothesis A} \label{hypo} {\it
Let $f \in L^2(\Omega)$ and $\overset{\circ}{\partial\Omega _D} \neq \varnothing$.
The diffusion coefficients $A_{\parallel} \in
L^{\infty} (\Omega)$ and $A_{\perp} \in \mathbb{M}_{d \times d} (L^{\infty}
(\Omega))$ are supposed to satisfy
\begin{equation}gin{gather}
0<A_0 \le A_{\parallel}(x) \le A_1\,, \quad \textrm{f.a.a.}\,\,\,x \in \Omega,
\label{eq:J48a1}
\\[3mm]
A_0 ||v||^2 \le v^t A_{\perp}(x) v \le A_1 ||v||^2\,,
\quad \forall v\in \mathbb R^d\,\,\, \text{and} \,\,\, \textrm{f.a.a.}\,\,\, x \in \Omega.
\label{eq:J48a3}
\end{gather}
}
\noindent As we intend to use the finite element method for the
numerical solution of the P-problem, let us put (\ref{eq:J07a}) into
variational form. For this let ${\cal V}$ be the Hilbert space
$$
{\cal V}:=\{ \phi \in H^1(\Omega)\,\, / \,\, \phi_{| \partial \Omega_D} =0 \}
\,, \quad (\phi,\psi)_{\cal V}:= (\nabla_{||} \phi,\nabla_{||}
\psi)_{L^2} + \eps (\nabla_{\perp} \phi,\nabla_{\perp}
\psi)_{L^2}\,.
$$
Thus, we are seeking $\phi^\eps \in {\cal V}$, the solution of
\begin{equation} \label{eq:Ja8a}
a_{||}(\phi^\eps,\psi) + \eps a_{\perp}(\phi^\eps,\psi)=\eps (f,\psi)\,, \quad
\forall \psi \in {\cal V}\,,
\end{equation}
where $(\cdot,\cdot)$ stands for the standard $L^2$ inner product and the continuous bilinear forms $a_{||} : \cal{V} \times \cal{V} \rightarrow
\mathbb{R}$ and $a_{\perp}: \cal{V} \times \cal{V} \rightarrow \mathbb{R}$ are given by
\begin{equation} \label{bil}
\begin{equation}gin{array}{lll}
\ds a_{||}(\phi,\psi)&:=&\ds \int_{\Omega} A_{||} \nabla_{||}
\phi \cdot \nabla_{||}\psi\, dx\,, \quad a_{\perp}(\phi,\psi):=\ds \int_{\Omega} ( A_{\perp} \nabla_{\perp}
\phi) \cdot \nabla_{\perp}\psi\, dx\,.
\end{array}
\end{equation}
Thanks to Hypothesis A and the Lax-Milgram theorem, problem
(\ref{eq:J07a}) admits a unique solution $\phi^\eps \in {\cal V}$ for
all fixed $\eps >0$.
\subsection{The Limit problem (L-model)}
The direct numerical solution of (\ref{eq:J07a}) may be very
inaccurate for $\varepsilon \ll 1$. Indeed, when $\varepsilon$ tends
to zero, the system reduces to
\begin{equation}gin{gather}
\left\{
\begin{equation}gin{array}{ll}
\displaystyle
-\nabla_\parallel \cdot
\left(A_\parallel \nabla_\parallel \phi\right)
= 0
& \text{ in } \Omega, \\[3mm]
\displaystyle
n_\parallel \cdot
\left( A_\parallel \nabla_\parallel \phi \right)
= 0
& \text{ on } \partial\Omega _N, \\[3mm]
\displaystyle
\phi= 0
& \text{ on } \partial\Omega _D.
\end{array}
\right.
\label{eq:Jc8a}
\end{gather}
This is an ill-posed problem as it has an infinite number of solutions
$\phi \in \mathcal G$, where
\begin{equation}gin{gather}
\mathcal G = \{ \phi \in \mathcal V \ | \ \nabla_\parallel \phi =0\}\,,
\label{eq:Jd8a}
\end{gather}
is the Hilbert space of functions which are constant along the field
lines of $b$. This shows that the condition number of the system
obtained by discretizing (\ref{eq:J07a}) tends to $\infty$ as
$\varepsilon\to 0$ so that its solution will suffer from round-off
errors.
For this reason, we should approximate (\ref{eq:J07a}) in the limit
$\varepsilon \rightarrow 0$ differently. Supposing that
$\phi^{\varepsilon } \rightarrow \phi^{0}$ as $\varepsilon \rightarrow
0$ we identify (at first formally) the problem satisfied by
$\phi^{0}$. From the above arguments we know that $\phi^{0} \in
\mathcal G$. Taking now test functions $\psi \in \mathcal G$ in
(\ref{eq:Ja8a}), we obtain
\begin{equation}gin{gather}
\int_\Omega A_{\perp} \nabla_{\perp} \phi^{\varepsilon }
\cdot
\nabla_{\perp} \psi \, dx
=
\int_\Omega f \psi \, dx
\label{eq:Jn8a}
.
\end{gather}
Passing to the limit $\varepsilon \rightarrow 0$ in this equation yields the variational
formulation of the problem satisfied by $\phi^{0}$ (Limit problem):
find $\phi^{0} \in \mathcal G$, the solution of
\begin{equation}gin{gather}
(L)\,\,\,
\int_\Omega A_{\perp} \nabla_{\perp} \phi^{0}
\cdot
\nabla_{\perp} \psi \, dx
=
\int_\Omega f \psi \, dx
\;\; , \;\; \forall \psi \in \mathcal G\,,
\label{eq:Jv9a}
\end{gather}
which is a well-posed problem. Indeed, the space ${\cal G} \subset {\cal V}$ is a Hilbert space, endowed with the inner product
\begin{equation} \label{sc_G}
(\phi,\psi)_{\cal G}:= (\nabla_{\perp} \phi,\nabla_{\perp}
\psi)_{L^2}\,, \quad \forall \phi, \psi \in {\cal G}\,,
\end{equation}
and the norm $||\cdot ||_{\cal G}$ is equivalent to the $H^1$ norm. This is due to the Poincar\'e inequality, as
$$
||\phi||_{L^2}^2 \le C ||\nabla \phi||_{L^2}^2 = C ||\nabla_{||}
\phi||_{L^2}^2+C ||\nabla_{\perp} \phi||_{L^2}^2 = C ||\nabla_{\perp}
\phi||_{L^2}^2\,, \quad \forall \phi \in {\cal G}\,.
$$
Hypothesis A and the Lax-Milgram lemma
imply the existence and uniqueness of a solution $\phi^0 \in
{\cal G}$ of the Limit problem (\ref{eq:Jv9a}).
\color{black}
\begin{equation}gin{rem}\label{remark:limit:UniformB}
Let us restrict ourselves for the moment to the simple special case (considered in a previous paper \cite{DDN}) of the two dimensional domain $\Omega =
(0,L_x)\times(0,L_z)$ in the $(x,z)$ plane with a constant $b$-field aligned with the $Z$-axis:
\begin{equation}gin{gather}
b=
\left(
\begin{equation}gin{array}{c}
0 \\
1
\end{array}
\right)
\label{eq:Jy9a}
.
\end{gather}
The functions in the space ${\cal G}$ are
independent of $z$ so that ${\cal G}$
can be identified with $H^1_0(0,L_x)$. The limit
problem \eqref{eq:Jv9a} now reads: Find $\phi^0$ in
$H^1_0(0,L_x)$ verifying
\begin{equation}gin{equation*}
\int_0^{L_x} \bar A_\perp(x) \partial_x \phi^0(x) \, \partial_x
\psi(x) \, dx
= \int_0^{L_x} \bar f(x) \psi(x) \, dx \,, \qquad \forall \psi \in H^1_0(0,L_x)\,,\\
\end{equation*}
where $ \bar A_\perp(x) = (1/L_z) \int_0^{L_z}
A_{\perp,11}(x,z) \, dz$ and $ \bar f(x) = (1/L_z) \int_0^{L_z} f(x,z) \, dz$ are the mean values of $A_{\perp,11}$ and $f$ along the field lines.
The limit solution $\phi^0$ thus verifies a one-dimensional elliptic equation whose
coefficients are integrated along the anisotropy direction:
\begin{equation}gin{equation}\label{eq:limit:uniformB}
\begin{equation}gin{split}
& - \partial_x \Big(\bar A_\perp(x) \, \partial_x \phi^0(x) \Big) = \bar
f(x) \text{ on } (0,L_x), \\
& \phi^0(0) = \phi^0(L_x) = 0 \,.
\end{split}
\end{equation}
We see now that $\phi^0(x)$ is a solution to the one dimensional elliptic problem so that it belongs to $H^2(0,L_x)$ provided $f\in L^2(\Omega)$.
Since $\phi^0$ as a function of $(x,z)$ does not depend on $z$, we have also $\phi^0\in H^2(\Omega)$. This conclusion ($\phi^0\in H^2(\Omega)$) remains valid in the case of
a cylindrical three dimensional domain $\Omega = \Omega_{xy}\times(0,L_z)$ in the $(x,y,z)$ space with any sufficiently smooth $\Omega_{xy}$ in the $(x,y)$ plane
and the field $b$ aligned with the $Z$-axis, $b=(0,0,1)^t$. Indeed, it is easy to see that $\phi^0=\phi^0(x,y)$ solves in this case an elliptic two dimensional problem
in $\Omega_{xy}$ similar to (\ref{eq:limit:uniformB}) so that we can apply the standard regularity results for the elliptic problems.
These examples show that it is reasonable to suppose $\phi^0\in H^2(\Omega)$ also in more general geometries of $\Omega$ and $b$. This can be indeed proved under the hypotheses
in Appendix \ref{appA} by specifying the $(d-1)$ dimensional elliptic problem for $\phi^0$. The proof being rather lengthy and technical, we prefer to postpone it to a forthcoming work \cite{AJC}.
\end{rem}
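To make the above remark concrete, the following Python sketch solves the
one-dimensional limit problem \eqref{eq:limit:uniformB} by a simple
second-order finite-difference scheme (this is only an illustration, not the
$\mathbb Q_2$ method used later); the averaged coefficient $\bar A_\perp\equiv 1$
and the averaged source $\bar f(x)=\pi^2\sin (\pi x)$ are assumed purely for
the example, so that the exact solution is $\phi^0(x)=\sin (\pi x)$.
\begin{verbatim}
import numpy as np

# Minimal 1D finite-difference sketch of the limit problem
# -d/dx( Abar(x) dphi0/dx ) = fbar(x),  phi0(0) = phi0(Lx) = 0.
Lx, N = 1.0, 200
x = np.linspace(0.0, Lx, N + 1)
h = Lx / N

Abar = lambda s: np.ones_like(s)                  # assumed averaged coefficient
fbar = lambda s: np.pi**2 * np.sin(np.pi * s)     # assumed averaged source

xi = x[1:-1]                       # interior nodes
Am = Abar(xi - h / 2)              # Abar at x_{i-1/2}
Ap = Abar(xi + h / 2)              # Abar at x_{i+1/2}
M = (np.diag((Am + Ap) / h**2)
     + np.diag(-Am[1:] / h**2, -1)
     + np.diag(-Ap[:-1] / h**2, 1))

phi0 = np.zeros(N + 1)
phi0[1:-1] = np.linalg.solve(M, fbar(xi))

# with Abar = 1 and fbar = pi^2 sin(pi x) the exact solution is sin(pi x)
print(np.max(np.abs(phi0 - np.sin(np.pi * x))))   # O(h^2)
\end{verbatim}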
\color{black}
\subsection{The Asymptotic Preserving approach (AP-model)}
In this section we introduce the AP-formulation, which is a
reformulation of the Singular Perturbation problem (\ref{eq:J07a}),
permitting a ``continuous'' transition from the (P)-problem
(\ref{eq:J07a}) to the (L)-problem (\ref{eq:Jv9a}), as $\eps
\rightarrow 0$. For this purpose, each function is decomposed into its
mean part along the anisotropy direction (lying in the subspace
$\mathcal{G}$ of $\mathcal{V}$) and a fluctuating part
(cf. \cite{DDN}) lying in the $L^2$-orthogonal complement
$\mathcal{A}$ of $\mathcal{G}$ in $\mathcal{V}$, defined by
\begin{equation}gin{gather}
\mathcal A : =
\{ \phi \in \mathcal V \ | (\phi,\psi ) =0 \;\; , \;\; \forall
\psi \in \mathcal G\}\,.
\label{eq:Jg8a}
\end{gather}
Note that $(\cdot,\cdot)$ denotes here and elsewhere the inner product
of $L^2(\Omega)$.
In what follows, we need the following\\
\noindent {\bf Hypothesis B} {\it The Hilbert-space $\mathcal V$ admits the
decomposition
\begin{equation}gin{gather}
\mathcal V = \mathcal G \oplus^{\perp} \mathcal A
\label{eq:Jf8a},
\end{gather}
with ${\cal G}$ given by (\ref{eq:Jd8a}) and ${\cal A}$ given by
(\ref{eq:Jg8a}) and where the orthogonality of the direct sum is taken
with respect to the $L^2$-norm. Denoting by $P$ the orthogonal
projection on $\mathcal G$ with respect to the $L^{2}$ inner product:
\begin{equation}gin{gather}
P : \mathcal V \rightarrow \mathcal G\,\,\text{ such that }\,\,
(P\phi,\psi)=(\phi,\psi)\ \ \forall\phi\in\mathcal V,\, \psi\in\mathcal G\,,
\label{eq:Je8a}
\end{gather}
we shall suppose that this mapping is continuous and that we have the
Poincar\'e-Wirtinger inequality
\begin{equation}gin{equation}
||\phi -P\phi ||_{L^{2}(\Omega )}\leq C||\nabla _{||}\phi ||_{L^{2}(\Omega
)}\,,\quad \forall \phi \in \mathcal{V}\,. \label{PoinW}
\end{equation}
}
\noindent Applying the projection $P$ to a function $\phi$ is nothing but a
weighted average of $\phi$ along the anisotropy field lines of
$b$. The space ${\cal G}$ is the space of averaged functions (the
parallel $\cal{G}$radient of these averaged functions being equal to
zero), whereas the space ${\cal A}$ is the space of the fluctuations
(the $\cal{A}$verage of the fluctuations being equal to zero). Note
that the decomposition (\ref{eq:Jf8a}) is not self-evident and it may
in fact fail on some ``pathological'' domains $\Omega$. Indeed,
although one can always define an $L^2$-orthogonal projection $\tilde
P\phi$ on the space of functions constant along each field line, for
any $\phi$ with square-integrable $\nabla_{||}\phi$, one cannot assure
in general that $\tilde P\phi$ belongs to $\mathcal{V}$ for
$\phi\in\mathcal{V}$ since one may lose control of the perpendicular
part of the gradient of $\tilde P\phi$. Fortunately however,
Hypothesis B is typically satisfied for the domains of practical
interest. The interested reader is referred to Appendix \ref{appA}
for an example of a set of assumptions on $\Omega$ and $b$ which
entail Hypothesis B and which amount essentially to the requirement
for the field $b$ to intersect $\partial\Omega_N$ in a uniformly
non-tangential manner and for the boundary components
$\partial\Omega_N$ and $\partial\Omega_D$ to be sufficiently smooth.
Let us also define the operator
\begin{equation}gin{gather}
Q : \mathcal V \rightarrow \mathcal A\,, \quad Q = I - P\,.
\label{eq:Jh8a}
\end{gather}
Each function $\phi \in \mathcal V$ can be decomposed uniquely as
$\phi = p + q$, where $p = P\phi \in \mathcal G$ and $q = Q\phi \in
\mathcal A$. Using this decomposition, we reformulate the Singular-Perturbation problem (\ref{eq:J07a}). Indeed, replacing $\phi^{\varepsilon}:=p^{\varepsilon}+ q^{\varepsilon}$
in problem (\ref{eq:J07a}) and taking test functions $\eta \in
\mathcal G$ and $\xi \in \mathcal A$ leads to an asymptotic
preserving formulation of the original problem: Find $(p^\varepsilon
,q^\varepsilon ) \in \mathcal G \times \mathcal A$ such that
\begin{equation}gin{gather}
\left\{
\begin{equation}gin{array}{ll}
\displaystyle
a_{\perp} (p^{\varepsilon },\eta ) + a_{\perp} (q^{\varepsilon},\eta ) = (f,\eta )
& \forall \eta \in \mathcal G, \\[3mm]
\displaystyle
a_{||} (q^{\varepsilon },\xi ) + \varepsilon a_{\perp}
(q^{\varepsilon},\xi) + \varepsilon a_{\perp} (p^{\varepsilon},\xi)
= \varepsilon (f, \xi )
& \forall \xi \in \mathcal A.
\end{array}
\right.
\label{eq:Ji8a}
\end{gather}
Contrary to the Singular Perturbation problem (\ref{eq:J07a}), setting
formally $\eps=0$ in (\ref{eq:Ji8a}) yields the system
\begin{equation} \label{sy}
\left\{
\begin{equation}gin{array}{lll}
\ds a_{\perp}(p^0,\eta)+a_{\perp}(q^0,\eta) &=&\ds(f,\eta) \,, \quad \forall
\eta \in {\cal G}\\[3mm]
\ds a_{||}(q^0,\xi )
&=&\ds 0 \,, \quad \forall \xi \in {\cal A}\,,
\end{array}
\right.
\end{equation}
which has a unique solution $(p^0,q^0) \in \cal{G} \times \cal{A}$,
where $p^0$ is the unique solution of the L-problem (\ref{eq:Jv9a}) and
$q^0 \equiv 0$. Indeed, taking $\xi=q^0$ as test function in the second equation of (\ref{sy}) yields $\nabla_{||} q^0 =0$, which means $q^0 \in \cal{G}$.
But at the same time, $q^0 \in \cal{A}$, so that $q^0 \in {\cal G} \cap {\cal A} = \{ 0 \}$. Setting then $q^0 \equiv 0$ in the first equation of (\ref{sy}), shows that $p^0$ is the unique solution of the L-problem.
\begin{equation}gin{thm}\label{thm_EX}
For every $\eps>0$ the Asymptotic Preserving formulation (\ref
{eq:Ji8a}), under Hypotheses A and B, admits a unique solution
$(p^{\eps},q^{\eps
})\in \mathcal{G}\times \mathcal{A}$, where $\phi
^{\eps}:=p^{\eps}+q^{\eps}$ is the unique solution in $\mathcal{V}$
of the Singular Perturbation model (
\ref{eq:J07a}).\newline These solutions satisfy the bounds
\begin{equation}gin{equation}
||\phi ^{\eps}||_{H^{1}(\Omega )}\leq C||f||_{L^{2}(\Omega )}\,,\quad ||q^{
\eps}||_{H^{1}(\Omega )}\leq C||f||_{L^{2}(\Omega )}\,,\quad ||p^{\eps
}||_{H^{1}(\Omega )}\leq C||f||_{L^{2}(\Omega )}\,, \label{est_sol}
\end{equation}
with an $\eps$-independent constant $C>0$. Moreover, we have
\begin{equation}gin{equation}
\phi^{\eps}\rightarrow \phi^{0},\,\
p^{\eps}\rightarrow \phi^{0}\,\ \text{and}\quad
q^{\eps}\rightarrow
0\quad \text{in}\quad H^{1}(\Omega )\text{ as }\eps\rightarrow 0\,, \label{wconv_sol}
\end{equation}
where $\phi^{0}\in \mathcal{G}$ is the unique solution of the Limit model (\ref{eq:Jv9a}).
\end{thm}
\begin{equation}gin{proof}
The existence and uniqueness of a solution for the P-problem as well
as L-problem are consequences of the Lax-Milgram theorem. The
existence and uniqueness of a solution of (\ref{eq:Ji8a}) is then
immediate by construction, remarking that the decomposition $\phi
^{\eps}=p^{\eps}+q^{\eps
}$ is unique.\newline The bound $||\phi ^{\eps}||_{H^{1}(\Omega
)}\leq C||f||_{L^{2}(\Omega )}$ is obtained by a standard elliptic
argument. Furthermore, $p^{\eps}=P\phi ^{\eps
}$ where $P$ is the $L^{2}$-orthogonal projector on $\mathcal{G}$,
which is a bounded operator in $\mathcal{V}$ by (\ref{reg}). This
implies the estimates for $p^{\eps}$ and $q^{\eps}$ in
(\ref{est_sol}). Since $p^{\eps} \in \mathcal{G}$ and $q^{\eps} \in
\mathcal{A}$ are bounded, there exist subsequences $p^{\eps_{n}}$
and $q^{\eps_{n}}$ that weakly converge for $\varepsilon
_{n}\rightarrow 0$ to some $p^{0}\in \mathcal{G}$ and $q^{0}\in
\mathcal{A}$. Taking $
\varepsilon =\varepsilon _{n}$ in (\ref{eq:Ji8a}) and passing to the
limit $
\varepsilon _{n}\rightarrow 0$ we identify $(p^{0},q^{0})$ with the
unique solution of (\ref{sy}), i.e. $p^{0}=\phi^0$ is the unique solution
of (\ref{eq:Jv9a}
) and $q^{0}\equiv 0$. Since the limit does not depend on the choice
of the subsequence, we have the weak convergence as $\varepsilon
\rightarrow 0$, i.e.
\begin{equation}gin{equation*}
p^{\eps}\rightharpoonup _{\eps\rightarrow 0}p^{0}\quad \text{in}\quad
H^{1}(\Omega )\,,\quad q^{\eps}\rightharpoonup _{\eps\rightarrow 0}0\quad
\text{in}\quad H{^{1}(\Omega )}\,.
\end{equation*}
We shall prove now that these convergences are actually strong. Introducing $
e^{\varepsilon }=p^{\varepsilon }-p^{0}$, we have
\begin{equation}gin{equation*}
a_{\perp}(e^{\varepsilon },\eta )+a_{\perp}(q^{\varepsilon },\eta )=0,~\forall \eta
\in \mathcal{G}\,.
\end{equation*}
Taking now $\eta =e^{\varepsilon }$ in this relation and adding it to the second
equation in (\ref{eq:Ji8a}), where we put $\xi =q^{\varepsilon }/\varepsilon $, yields
\begin{equation}gin{equation}
\frac{1}{\varepsilon }a_{||}(q^{\varepsilon },q^{\varepsilon
})+a_{\perp}(q^{\varepsilon }+e^{\varepsilon },q^{\varepsilon }+e^{\varepsilon
})=(f,q^{\varepsilon })-a_{\perp}(p^{0},q^{\varepsilon }). \label{aa1}
\end{equation}
Due to the Poincar\'e-Wirtinger inequality (\ref{PoinW}), there exists a constant $C>0$ such that
\begin{equation}gin{equation}\label{wirta}
||q||_{L^{2}(\Omega )}\leq Ca_{||}(q,q)^{1/2}\,,\quad \forall q\in \mathcal{A}\,.
\end{equation}
In combination with a Young inequality this gives $(f,q^{\varepsilon })\leq
||f||_{L^{2}}||q^{\varepsilon }||_{L^{2}}\leq \varepsilon \frac{C^{2}}{2}
||f||_{L^{2}}^{2}+\frac{1}{2\varepsilon }a_{||}(q^{\varepsilon },q^{\varepsilon })$. Using this in the right-hand side of (\ref{aa1}), we arrive at
\begin{equation}gin{equation*}
\frac{1}{2\varepsilon }a_{||}(q^{\varepsilon },q^{\varepsilon
})+a_{\perp}(q^{\varepsilon }+e^{\varepsilon },q^{\varepsilon }+e^{\varepsilon
})\leq \varepsilon \frac{C^{2}}{2}||f||_{L^{2}}^{2}
-a_{\perp}(p^{0},q^{\varepsilon }).
\end{equation*}
Noting that $q^{\varepsilon }+e^{\varepsilon }=\phi ^{\varepsilon }-p^{0}$
and $\nabla _{\Vert }e^{\varepsilon }=0$ we can rewrite this last inequality
as
\begin{equation}gin{equation*}
\frac{1}{2\varepsilon }a_{||}(\phi ^{\varepsilon }-p^{0},\phi ^{\varepsilon
}-p^{0})+a_{\perp}(\phi ^{\varepsilon }-p^{0},\phi ^{\varepsilon }-p^{0})\leq
\varepsilon \frac{C^{2}}{2}||f||_{L^{2}}^{2} -a_{\perp}(p^{0},q^{\varepsilon }).
\end{equation*}
Since $a_{\perp}(p^{0},q^{\varepsilon })\rightarrow 0$ as $\varepsilon
\rightarrow 0$ (thanks to the weak convergence $q^{\eps}\rightharpoonup 0$) we
observe that $\phi ^{\varepsilon }\rightarrow p^{0}$ strongly in $
H^{1}(\Omega )$. Recalling again that $p^{\varepsilon }=P\phi ^{\varepsilon }
$ and $P$ is bounded in the norm of $H^{1}(\Omega )$, we obtain also $
p^{\varepsilon }\rightarrow Pp^{0}=p^{0}$, which entails $q^{\varepsilon
}\rightarrow 0$.
\end{proof}
\color{black}
\begin{equation}gin{rem}\label{remark:UniformB}
Let us return to the simple special case discussed in remark~\ref{remark:limit:UniformB}, i.e. $\Omega =(0,L_x) \times
(0,L_z)$ and the $b$-field given by (\ref{eq:Jy9a}).
Recall that the space $\mathcal G$ can be identified in this case with the space of
functions constant along the $Z$-axis, which means $\mathcal G := \{ \phi \in
{\cal V} \,\, / \,\, \partial_z \phi =0 \}$. The space $\mathcal A$ is
orthogonal (with respect to the $L^2$-norm) to $\mathcal G$ and thus
contains the functions that have zero mean value along the $Z$-axis,
i.e. $\mathcal A := \{ \phi \in {\cal V} \,\, / \,\, \int_0^{L_z}
\phi(x,z)\, dz =0\}$. Therefore, for $\phi^\eps = p^\eps + q^\eps \in
{\cal V}$, the function $p^{\varepsilon }$ is the mean value of
$\phi^{\varepsilon }$ in the direction of the field $b$:
\begin{equation}gin{gather}
p^{\varepsilon } = \frac{1}{L_z}\int_0^{L_z} \phi^{\varepsilon } dz\,,
\label{eq:Jz9a}
\end{gather}
and $q^{\varepsilon }$ is the fluctuating part with zero mean
value:
\begin{equation}gin{gather}
q^{\varepsilon } = \phi^{\varepsilon } - \frac{1}{L_z}\int_0^{L_z} \phi^{\varepsilon } dz
\label{eq:J29a}
.
\end{gather}
Hypothesis B is thus easily verified. The results obtained in this
special case were presented in a previous paper \cite{DDN}. In the
case of an arbitrary $b$-field, formula (\ref{eq:Jz9a}) is generalized
as (\ref{pdef}) in Appendix \ref{appA}, where the length element along
the $b$-field line is weighted by the infinitesimal cross-sectional
area of the field tube around the considered $b$-field-line. This
formula can be thus interpreted as a consequence of the co-area
formula. Note that in the special case of a uniform anisotropy
direction, the limit problem can easily be formulated as an elliptic
problem depending only on the transverse coordinates (see
equation~\eqref{eq:limit:uniformB}). The size of the
problem is thus significantly smaller than that of the initial
one. This feature still occurs for non-uniform $b$-fields as long as
adapted coordinates and meshes are used. In our case, aligned and
transverse coordinates are not at our disposal and the solution of the
limit problem must be sought as a function of the whole set of
coordinates.
\end{rem}
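As a concrete illustration of this remark, the following short Python sketch
computes the decomposition $\phi^\eps=p^\eps+q^\eps$ of
(\ref{eq:Jz9a})--(\ref{eq:J29a}) on a Cartesian grid for the aligned case,
approximating the mean along the field lines by the composite trapezoidal
rule; the sample field $\phi^\eps$ used below is an arbitrary choice made only
for this illustration.
\begin{verbatim}
import numpy as np

# Decomposition phi = p + q for b aligned with the z-axis:
# p is the z-average of phi, q = phi - p has zero z-average.
Lx, Lz, Nx, Nz = 1.0, 1.0, 64, 64
x = np.linspace(0.0, Lx, Nx + 1)
z = np.linspace(0.0, Lz, Nz + 1)
X, Z = np.meshgrid(x, z, indexing="ij")
dz = Lz / Nz

eps = 1e-3
phi = np.sin(np.pi * X) + eps * np.sin(np.pi * X) * np.cos(2 * np.pi * Z)

# composite trapezoidal rule along z, divided by L_z
p = (0.5 * phi[:, 0] + phi[:, 1:-1].sum(axis=1) + 0.5 * phi[:, -1]) * dz / Lz
q = phi - p[:, None]

# q has (numerically) zero mean along every field line
mean_q = (0.5 * q[:, 0] + q[:, 1:-1].sum(axis=1) + 0.5 * q[:, -1]) * dz / Lz
print(np.max(np.abs(mean_q)))    # ~ machine precision
\end{verbatim}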
\color{black}
\subsection{Lagrange multiplier space }
The objective of this work is the numerical solution of system
(\ref{eq:Ji8a}) and the comparison of the obtained results with those
obtained by directly solving the original problem (\ref{eq:J07a}). In
a general case, when the field $b$ is not necessarily constant, the
discretization of the subspaces $\mathcal G$ and $\mathcal A$ is not
as straightforward as in the simpler case \cite{DDN}. In order to
overcome this difficulty a Lagrange multiplier technique will be used.
\subsubsection{The $\mathcal A$ space}
To avoid the use of the constrained space $\mathcal A$, we can remark
that $\cal{A}$ can be characterized as being the orthogonal complement
(in the $L^2$ sense) of the $\cal{G}$-space. Thus, instead of
(\ref{eq:Ji8a}), the following slightly modified system will be solved: find
$(p^{\varepsilon }, q^{\varepsilon }, l^{\varepsilon }) \in \mathcal G
\times \mathcal V \times \mathcal G$ such that
\begin{equation}gin{gather}
\left\{
\begin{equation}gin{array}{ll}
\displaystyle
a_{\perp} (p^{\varepsilon },\eta ) + a_{\perp} (q^{\varepsilon},\eta )
= (f,\eta )
& \forall \eta \in \mathcal G, \\[3mm]
\displaystyle
a_{||} (q^{\varepsilon },\xi ) + \varepsilon a_{\perp}
(q^{\varepsilon},\xi) + \varepsilon a_{\perp} (p^{\varepsilon},\xi)
+ \left( l^{\varepsilon } , \xi \right)
= \varepsilon (f, \xi )
& \forall \xi \in \mathcal V , \\[3mm]
\displaystyle
\left( q^{\varepsilon } , \chi \right) =0
& \forall \chi \in \mathcal G.
\end{array}
\right.
\label{eq:Jj8a}
\end{gather}
The constraint $( q^{\varepsilon } , \chi ) =0$, $\forall \chi \in
\mathcal G$ forces the solution $q^{\varepsilon }$ to belong to
$\mathcal A$, and this property is carried over to the limit $\eps \rightarrow 0$. We have thus
circumvented the difficulty of discretizing $\mathcal A$ by introducing
a new variable and enlarging the linear system.
\begin{equation}gin{prop}\label{lem:AP-aquiv1}
Problems (\ref{eq:Ji8a}) and (\ref{eq:Jj8a}) are
equivalent. Indeed, $(p^\eps,q^\eps) \in \mathcal{G} \times
\mathcal{A}$ is the unique solution of (\ref{eq:Ji8a}) if and only
if $(p^\eps,q^\eps,l^\eps) \in {\cal G} \times {\cal V} \times {\cal
G}$ with $l^\eps \equiv 0$ is the unique solution of
(\ref{eq:Jj8a}).
\end{prop}
\begin{equation}gin{proof}
Let $(p^\eps,q^\eps) \in \mathcal{G} \times \mathcal{A}$ be the
unique solution of (\ref{eq:Ji8a}). Then, it is immediate to show
that $(p^\eps,q^\eps,0)$ solves (\ref{eq:Jj8a}). Let now
$(p^\eps,q^\eps,l^\eps) \in {\cal G} \times {\cal V} \times {\cal
G}$ be a solution of (\ref{eq:Jj8a}). Then, the last equation of
(\ref{eq:Jj8a}) implies that $q^\eps \in \mathcal A$. Choosing in
the second equation as test function $\xi \in \mathcal G$, one gets
$$
\varepsilon a_{\perp}
(q^{\varepsilon},\xi) + \varepsilon a_{\perp} (p^{\varepsilon},\xi)
+ \left( l^{\varepsilon } , \xi \right)
= \varepsilon (f, \xi )\,, \quad\forall \xi \in \mathcal G\,,
$$
which because of the first equation in (\ref{eq:Jj8a}), yields $\left( l^{\varepsilon } , \xi
\right) = 0$ for all $\xi \in \mathcal G$. Thus $l^{\eps} \equiv 0$.
\end{proof}
\subsubsection{The $\mathcal G$ space}
In order to eliminate the problems that arise when dealing with the
discretization of $\mathcal G$, the Lagrange multiplier method will
again be used. First note that
\begin{equation}gin{gather}
p \in \mathcal G
\Leftrightarrow
\left\{
\begin{equation}gin{array}{l}
\nabla_{||} p = 0 \\[3mm]
p \in \mathcal V
\end{array}
\right.
\;
\Leftrightarrow
\;
\left\{
\begin{equation}gin{array}{l}
\ds \int_\Omega A_{||} \nabla_{||} p \cdot \nabla_{||} \lambda
\, dx = a_{||}(p,\lambda)= 0, \;\; \forall
\lambda \in {\cal L} \\[3mm]
p \in \mathcal V\,,
\end{array}
\right.
\label{caract}
\end{gather}
where ${\cal L}$ is a functional space that should be chosen large
enough so that one could find for any $p\in{\cal V}$ a $\lambda\in{\cal L}$ with $\nabla_{||}
\lambda=\nabla_{||} p$. On the other hand, the
space ${\cal L}$ should not be too large, in order to ensure the
uniqueness of the Lagrange multipliers in the unconstrained system. A
space that satisfies these two requirements under some quite general
assumptions to be detailed later, can be defined as \begin{equation}\label{Lsp}
{\cal L} := \{ \lambda \in L^2(\Omega)\,\, / \,\, \nabla_{||} \lambda
\in L^2(\Omega)\,, \,\,\, \lambda_{| \partial \Omega_{in}} =0 \} \,,
\quad \textrm{with} \quad \partial \Omega_{in} := \{ x \in \partial
\Omega \,\, / \,\, b(x) \cdot n <0\}\,. \end{equation}
Using the characterization (\ref{caract}) of the constrained space $
\mathcal G $, we shall now reformulate the system (\ref{eq:Jj8a}) as
follows: Find
$(p^\varepsilon,\;\lambda^\varepsilon,\;q^\varepsilon,\;l^\varepsilon,\;\mu^\varepsilon)
\in \mathcal V\times \mathcal L\times \mathcal V\times \mathcal V
\times \mathcal L$ such that
\begin{equation}gin{gather}
(AP)\,\,\,
\left\{
\begin{equation}gin{array}{l}
\displaystyle
a_{\perp} (p^\varepsilon , \eta ) +
a_{\perp} (q^\varepsilon , \eta ) + a_{||}(\eta,\lambda^\varepsilon )
= \left(f,\eta \right) \,, \quad \forall \eta \in \mathcal V\,,
\\[3mm]
\displaystyle
a_{||}( p^{\varepsilon },\kappa)=0\,,\quad \forall \kappa \in \mathcal L\,,
\\[3mm]
\displaystyle
a_{||} (q^\varepsilon , \xi ) +
\varepsilon a_{\perp} (q^\varepsilon , \xi ) +
\varepsilon a_{\perp} (p^\varepsilon , \xi ) +
\left( l^{\varepsilon } , \xi \right)
= \varepsilon \left(f,\xi \right) \,, \quad \forall \xi \in \mathcal V\,,
\\[3mm]
\displaystyle
\left( q^{\varepsilon } , \chi \right) +
a_{||}(\chi, \mu^\varepsilon)=0\,, \quad \forall \chi \in \mathcal V\,,
\\[3mm]
\displaystyle
a_{||}(l^\varepsilon,\tau)=0\,, \quad \forall \tau \in \mathcal L\,.
\end{array}
\right.
\label{eq:Ju7a}
\end{gather}
The advantage of the above formulation, as compared to
(\ref{eq:Ji8a}), is that we only have to discretize the spaces
$\mathcal V$ and $\mathcal L$ (at the price of the introduction of
three additional variables), which is much easier than the
discretization of the constrained spaces $\mathcal G$ and $\mathcal
A$. More importantly, the dual formulation (\ref{eq:Ju7a}) does not
require any change of coordinates to express the fact that $p^\eps$ is
constant along the $b$-field lines and that $q^\eps$ averages to zero
along these lines. Therefore this formulation is particularly well
adapted to time-dependent $b$-fields, as it does not require any
operation which would have to be reinitiated as $b$ evolves. The
system (\ref{eq:Ju7a}) will be called the Asymptotic-Preserving
formulation in the sequel.
To analyse this Asymptotic-Preserving formulation, we need the
following\\
\color{black}
\noindent {\bf Hypothesis B'} {\it The trace
$\lambda_{| \partial \Omega_{in}}$ is well defined for any
$\lambda\in\tilde{\cal V}$ as an element of $L^2(\partial
\Omega_{in})$, with continuous dependence of the trace norm in
$L^2(\partial \Omega_{in})$ on $||\lambda||_{\tilde{\cal V}}$. Moreover, the Hilbert space
\begin{equation} \label{V_tilde}
\tilde{\cal V}= \{ \phi \in L^2(\Omega) \,\, / \,\, \nabla_{||} \phi
\in L^2(\Omega) \}\,, \quad (\phi,\psi)_{\tilde{\cal V}} :=
(\phi,\psi)+(\nabla_{||} \phi,\nabla_{||} \psi)\,,
\end{equation}
admits the decomposition
\begin{equation}\label{eq:Jf8aa} \tilde{\cal V} = \tilde{\cal G}
\oplus \mathcal L\,,
\end{equation}
where $\tilde{\cal G}$ is given by
\begin{equation} \label{G_tilde} \tilde{\cal G}:=\{ \phi \in \tilde{\cal V} \,\,
/ \,\, \nabla_{||} \phi =0 \}\,, \end{equation} and ${\cal L}$ is given by
(\ref{Lsp}). The spaces $\tilde{\cal G}$ and ${\cal G}=\tilde{\cal G}\cap {\cal V}$ are related in the following way:
if $g\in\tilde{\cal G}$ is such that $\int_{\partial \Omega_{in}}\eta gd\sigma=0$ for all $\eta\in{\cal G}$, then $g=0$.}\\
\color{black}
\noindent The decomposition (\ref{eq:Jf8aa}) is quite natural. It tells simply
that any function $\phi$ can be decomposed on each field line as a sum
of a function that vanishes at one given point on this line and a
constant (which is therefore the value of $\phi$ at this
point). Hypothesis B' will thus normally be satisfied in cases of
practical interest. For example, we prove in Appendix \ref{appA} that
the set of assumptions on the domain $\Omega$ and the $b$-field which
can be used to verify Hypothesis B, is also sufficient (but far from
necessary) for Hypothesis B'. We are now able to show the relation
between systems (\ref{eq:Jj8a}) and (\ref{eq:Ju7a}).
\begin{equation}gin{prop}\label{lem:AP-aquiv2}
Assuming Hypotheses A, B and B', problem (\ref{eq:Ju7a}) admits a
unique solution\newline
$(p^\varepsilon,\;\lambda^\varepsilon,\;q^\varepsilon,\;l^\varepsilon,\;\mu^\varepsilon)\in
\mathcal V\times \mathcal L\times \mathcal V\times \mathcal V \times
\mathcal L$, where $(p^{\varepsilon }, q^{\varepsilon
},l^{\varepsilon })\in \mathcal G\times \mathcal V\times \mathcal G$
is the unique solution of (\ref{eq:Jj8a}).
\end{prop}
The proof of Proposition \ref{lem:AP-aquiv2} is based on the following two lemmas.
\begin{equation}gin{lem} \label{lemlem0} Assume Hypothesis B' and let $p \in
\tilde{\cal V}$ be such that $a_{||}( p, \lambda)=0$,
$\forall\lambda \in {\cal L}$. Then $p \in \tilde{\cal G}$.
\end{lem}
\begin{equation}gin{proof} Take any $\eta \in \tilde{\cal V}$ and decompose
$\eta=\lambda +g$ with $\lambda \in {\cal L}$ and $g\in \tilde{\cal
G}$. We have $a_{||}( p, g)=0$, hence $a_{||}( p, \eta)=0$ for
all $\eta \in \tilde{\cal V}$. This entails $\nabla_{||}p=0$, hence
$p \in \tilde{\cal G}$.
\end{proof}
\begin{equation}gin{lem} \label{lemlem} Assume Hypothesis B' and let $F \in
\tilde{\cal V}^*$ be such that $F( \eta)=0$ for all $\eta \in
{\cal G}$. Then the problem of finding $\lambda \in {\cal L}$
such that \begin{equation}\label{MvF} a_{||}( \eta, \lambda)=F(\eta)\,, \quad
\forall \eta \in \tilde{\cal V}\,, \end{equation} has a unique solution.
\end{lem}
\begin{equation}gin{proof}
Consider the bilinear form $b$ on $\tilde{\cal V}\times\tilde{\cal V}$
$$
b(u,v)=a_{||}(u,v)+\int_{\partial \Omega_{in}}uvd\sigma
$$
By Hypothesis B', this is an inner product on $\tilde{\cal
V}$. Indeed, if $b(u,u)=0$ then $u\in\tilde{\cal G} \cap {\cal L}$
so that $u=0$. The Riesz representation theorem implies that the problem
of finding $\mu \in \tilde{\cal V}$ such that
$$
b( \eta, \mu)=F(\eta)\,, \quad \forall \eta \in \tilde{\cal V}\,,
$$
has a unique solution. We can now decompose $\mu=\lambda +g$ with
$\lambda \in {\cal L}$ and $g\in \tilde{\cal G}$. This yields
$$
a_{||}(\eta,\lambda)+\int_{\partial \Omega_{in}}\eta gd\sigma=F(\eta)\,,
\quad \forall \eta \in \tilde{\cal V}\,,
$$
so that, in particular, $\int_{\partial \Omega_{in}}\eta gd\sigma=0$ for all $\eta\in{\cal G}$ which implies $g=0$. We see now that $\lambda$ is a
solution to (\ref{MvF}). The uniqueness follows easily.
\end{proof}
\newline Let us now prove Proposition \ref{lem:AP-aquiv2}.\\
\textbf{Proof of existence in Proposition \ref{lem:AP-aquiv2}.}
Take $(p^{\varepsilon }, q^{\varepsilon},l^{\varepsilon })\in \mathcal
G\times \mathcal V\times \mathcal G$ as the unique solution of
(\ref{eq:Jj8a}). Then, equations 2,3,5 in (\ref{eq:Ju7a}) are
immediately satisfied. It remains to choose properly the
Lagrange multipliers ${\lambda}^\eps, {\mu}^\eps \in {\cal L}$ to satisfy equations 1,4 in (\ref{eq:Ju7a}). For
this, let us define $F_1, F_2 \in \tilde{\cal V}^*$ by
\begin{equation} \label{NR0}
F_1(\eta):= { 1 \over \eps} a_{||} (q^\eps,\eta)\,, \quad
F_2(\eta):=-(q^\eps,\eta)\,, \quad \forall \eta \in \tilde{\cal V}\,.
\end{equation}
These functionals are indeed continuous in the norm of $\tilde{\cal
V}$ since their definitions do not contain the derivatives in directions perpendicular to $b$.
Since $F_1(\eta)=F_2(\eta)=0$ for all $\eta \in{\cal G}$, Lemma \ref{lemlem} implies the existence of
${\lambda}^\eps \in {\cal L}$ and ${\mu}^\eps \in {\cal L}$, such that
\begin{equation} \label{NR}
a_{||}( \eta, {\lambda}^\eps) =F_1(\eta)\,, \quad a_{||}(\chi,
{\mu}^\eps ) =F_2(\chi)\,, \quad \forall \eta, \chi \in \tilde{\cal
V}\,.
\end{equation}
Taking $\eta,\chi\in{\cal V} \subset \tilde{\cal V}$ we observe (cf. the second line in (\ref{eq:Jj8a}) where $l^\eps=0$)
$$
a_{||}( \eta,{\lambda}^\eps)= { 1 \over \eps} a_{||} (q^\eps,\eta)=
(f,\eta) -a_{\perp}(p^\eps,\eta)-a_{\perp}(q^\eps,\eta)\,, \quad
\forall \eta \in {\cal V}\,,
$$
$$
a_{||}(\chi, {\mu}^\eps) = -(q^\eps,\chi)\,, \quad \forall \chi \in {\cal V}\,,
$$
which coincides with equations 1,4 in (\ref{eq:Ju7a}).\\
\textbf{Proof of uniqueness in Proposition \ref{lem:AP-aquiv2}.} Consider the solution to system
(\ref{eq:Ju7a}) with $f=0$. Lemma \ref{lemlem0} implies then that
$p^{\varepsilon }, l^{\varepsilon }\in \tilde{\mathcal{G}}\cap\mathcal{V}=\mathcal{G}$
and
$(p^{\varepsilon }, q^{\varepsilon},l^{\varepsilon })\in \mathcal
G\times \mathcal V\times \mathcal G$ verifies (\ref{eq:Jj8a}) with
$f=0$ so that $p^{\varepsilon } = q^{\varepsilon} = l^{\varepsilon } =
0$ by Proposition \ref{lem:AP-aquiv1}. Equations 1,4 in (\ref{eq:Ju7a}) now tell us that ${\lambda}^\eps,
{\mu}^\eps \in \tilde{\cal G}$, but $\tilde{\cal G} \cap {\cal L} =
\{0\}$, hence ${\lambda}^\eps = {\mu}^\eps =0$.
$\Box$
\color{black}
The presence of $1/\eps$ in the formulas (\ref{NR0}), (\ref{NR}) defining $\lambda^\eps$ indicates at first sight that $\lambda^\eps$ may tend to $\infty$
as $\eps\to 0$, which would be disastrous for an AP numerical method based on (\ref{eq:Ju7a}) at very small $\eps$. Fortunately, $\lambda^\eps$ remains bounded
uniformly in $\eps$ in the cases of practical interest. It suffices to suppose that the limit solution $\phi^0$ is in $H^2(\Omega)$, which is a reasonable assumption
as discussed in Remark \ref{remark:limit:UniformB}.
\begin{equation}gin{prop}\label{lamBoundLem}
Assume Hypotheses A, B, B' and $\phi^0 \in H^2(\Omega)$ where $\phi^0$ is the solution to (\ref{eq:Jv9a}). Then $\lambda^\eps$ introduced in (\ref{eq:Ju7a}) satisfies
\begin{equation}\label{lamBound}
||\nabla_{||}\lambda^\eps||_{L^2} \le C\max(||f||_{L^2},||\phi^0||_{H^2})
\end{equation}
with a constant $C$ independent of $\eps$.
\end{prop}
\begin{equation}gin{proof}
We will denote all the $\eps$-independent constants by $C$ in this proof.
We start from relation (\ref{aa1}) in the proof of Theorem \ref{thm_EX}. Dropping the positive term $a_\perp(q^\eps+e^\eps,q^\eps+e^\eps)$ it can be rewritten as
$$
\frac 1 \eps a_{||}(q^\eps,q^\eps) \le (f,q^\eps) - a_\perp(\phi^0,q^\eps).
$$
Since $\phi^0 \in H^2(\Omega)$ we can integrate by parts in the integral defining $a_\perp(\phi^0,q^\eps)$:
\begin{equation}gin{align*}
-a_\perp(\phi^0,q^\eps)
&=-\int_\Omega A_\perp\nabla_\perp\phi^0\cdot\nabla_\perp q^\eps dx
\\
&=-\int_{\partial\Omega_N} (Id-bb^t)A_\perp\nabla_\perp\phi^0\cdot n q^\eps d\sigma
+\int_\Omega (\nabla_\perp\cdot A_\perp\nabla_\perp\phi^0) q^\eps dx
\\
&\le C||\phi^0||_{H^2} \left( ||q^\eps||_{L^2(\partial\Omega_N)}+||q^\eps||_{L^2(\Omega)} \right)
\end{align*}
since $\nabla\phi^0$ has a trace on $\partial\Omega$ and its norm in $L^2(\partial\Omega_N)$ is bounded by $C||\phi^0||_{H^2}$. Thus,
$$
\frac 1 \eps ||\nabla_{||}q^\eps||^2_{L^2} \le \frac C \eps a_{||}(q^\eps,q^\eps)
\le C||f||_{L^2} ||q^\eps||_{L^2(\Omega)} + C||\phi^0||_{H^2} \left( ||q^\eps||_{L^2(\partial\Omega_N)}+||q^\eps||_{L^2(\Omega)} \right).
$$
By the Poincar\'e-Wirtinger inequality (\ref{PoinW}) (note that $Pq^\eps=0$) and by Hypothesis B' we have
$$
\max(||q^\eps||_{L^2(\Omega)}, ||q^\eps||_{L^2(\partial\Omega_N)}) \le C ||\nabla_{||}q^\eps||_{L^2}
$$
so that
$$
\frac 1 \eps ||\nabla_{||}q^\eps||_{L^2} \le C\max(||f||_{L^2},||\phi^0||_{H^2}).
$$
This is the same as (\ref{lamBound}) since $\nabla_{||}\lambda^\eps = \frac 1 \eps \nabla_{||}q^\eps$ according to (\ref{NR0}) and (\ref{NR}).
\end{proof}
\begin{equation}gin{rem}
The Limit model (\ref{eq:Jv9a}), reformulated using the Lagrange
multiplier technique, now reads: Find $(\phi^0,\;\lambda^0) \in
\mathcal V\times \mathcal L$ such that
\begin{equation}gin{gather}
(L')\,\,\,
\left\{
\begin{equation}gin{array}{lr}
\displaystyle
\int_\Omega A_{\perp} \nabla_{\perp} \phi^{0}
\cdot
\nabla_{\perp} \psi \, dx
+
\int_\Omega A_{||} \nabla_{||} \psi \cdot \nabla_{||} \lambda^{0} \, dx
=
\int_\Omega f \psi \, dx
& \forall \psi \in \mathcal V
\\[3mm]
\displaystyle
\int_\Omega A_{||} \nabla_{||} \phi^{0} \cdot \nabla_{||} \kappa \, dx =0
& \forall \kappa \in \mathcal L\,.
\end{array}
\right.
\label{eq:Jx9a}
\end{gather}
Problem (\ref{eq:Jx9a}) is also well posed assuming Hypotheses A, B, B' and $\phi^0 \in H^2(\Omega)$. Indeed, the uniqueness of the solution
to (\ref{eq:Jx9a}) can be proved in exactly the same
manner as in the proof of Proposition \ref{lem:AP-aquiv2} above. To prove the existence of a solution,
it suffices to take the limit $\eps\to 0$ in the first two lines of (\ref{eq:Ju7a}). Indeed, we know by Theorem \ref{thm_EX} that
$p^\eps\to\phi^0$, the solution to (\ref{eq:Jv9a}), and $q^\eps\to 0$ in $H^1(\Omega)$. Moreover, the family $\{\nabla_{||}\lambda^\eps\}$ is bounded in the norm of $L^2(\Omega)$
by Proposition \ref{lamBoundLem}. We can therefore take a weakly convergent subsequence $\{\nabla_{||}\lambda^{\eps_n}\}$ and identify its limit with $\nabla_{||}\lambda^0$
for some $\lambda^0\in{\cal L}$ (cf. Lemma \ref{lemlem}) to see that $(\phi^0,\lambda^0)\in \mathcal V\times \mathcal L$ solves (\ref{eq:Jx9a}).
\end{rem}
\color{black}
\section{Numerical method}\label{sec:num_met}
This section concerns the discretization of the Asymptotic Preserving
formulation (\ref{eq:Ju7a}), based on a finite element method, and the
detailed study of the obtained numerical results. The numerical
analysis of the present scheme is investigated in a forthcoming work
\cite{AJC};
in particular, we are interested there in the convergence of the
scheme, independently of the parameter $\eps >0$.\\
Let us denote by ${\cal V}_{h} \subset \mathcal{V}$ and ${\cal L}_{h}
\subset \mathcal{L}$ the finite dimensional approximation spaces,
constructed by means of appropriate numerical discretizations (see Section
\ref{Discr} and Appendix \ref{appB}). We are thus looking for a discrete solution
$(p^\varepsilon_h,\;\lambda^\varepsilon_h,\;q^\varepsilon_h,\;l^\varepsilon_h,\;\mu^\varepsilon_h)
\in \mathcal V_h\times \mathcal L_h\times \mathcal V_h\times \mathcal
V_h \times \mathcal L_h$ of the following system
\begin{equation}gin{gather}
\left\{
\begin{equation}gin{array}{ll}
\displaystyle
a_{\perp} (p^\varepsilon_h , \eta ) +
a_{\perp} (q^\varepsilon_h , \eta ) +
a_{||} (\eta , \lambda^{\varepsilon }_h )
= \left(f,\eta \right) \,, \quad \forall \eta \in {\cal V}_h\,,
\\[3mm]
\displaystyle
a_{||} ( p^{\varepsilon }_h , \kappa) =0 \,, \quad \forall \kappa \in {\cal L}_h \,,
\\[3mm]
\displaystyle
a_{||} (q^\varepsilon_h , \xi ) +
\varepsilon a_{\perp} (q^\varepsilon_h , \xi ) +
\varepsilon a_{\perp} (p^\varepsilon_h , \xi ) +
\left( l^{\varepsilon }_h , \xi \right)
= \varepsilon \left(f,\xi \right) \,, \quad \forall \xi \in {\cal V}_h\,,
\\[3mm]
\displaystyle
\left( q^{\varepsilon }_h , \chi \right) +
a_{||} (\chi , \mu^{\varepsilon }_h ) =0 \,, \quad \forall \chi \in {\cal V}_h\,,
\\[3mm]
\displaystyle
a_{||} (l^{\varepsilon }_h , \tau ) =0 \,, \quad \forall \tau \in {\cal L}_h\,.
\end{array}
\right.
\label{eq:Jt8a}
\end{gather}
Our numerical experiments indicate that the spaces ${\cal V}_{h}$ and
$\mathcal{L}_h$ can always be taken of the same type and on the same
mesh. The only difference between these two finite element spaces
lies thus in the incorporation of boundary conditions. In general,
let $X_h$ denote the complete finite element space (without any
restrictions on the boundary) which should be $H^1$ conforming but
otherwise arbitrarily chosen. We define then \begin{equation}\label{VH}
\mathcal{V}_h=\{v_h\in X_h/v_h|_{\partial\Omega_{D}}=0\}, \end{equation}
\begin{equation}\label{LH} \mathcal{L}_h=\{\lambda_h\in
X_h/\lambda_h|_{\partial\Omega_{in} \cup \partial\Omega_{D}}=0\}\,.
\end{equation} While this choice of $\mathcal{V}_h$ is straightforward, the
boundary conditions in $\mathcal{L}_h$ require special
attention. Indeed, nothing in the definition (\ref{Lsp}) of space
$\mathcal{L}$ on the continuous level indicates that its elements
should vanish on $\partial\Omega_{D}$. However, this liberty on
$\partial\Omega_{D}$ is somewhat counter-intuitive. Indeed, the
Lagrange multiplier $\lambda^\eps\in\mathcal{L}$ serves to impose
$\nabla_{||}p^\eps=0$ for some function $p^\eps$ taken from the space
$\mathcal{V}$. But, for $p\in\mathcal{V}$ the trace on
$\partial\Omega_{D}$ is zero so that $\nabla_{||}p=0$ there
without the help of a Lagrange multiplier. Of course, this argument
is not valid on the continuous level since the trace of functions in
$\mathcal{L}$ does not even necessarily exist. However, this may
become very important on the finite element level. Indeed, we provide
in Appendix \ref{appB} an example of a finite element setting without
incorporating $\lambda_h|_{\partial\Omega_{D}}=0$ into the definition
of $\mathcal{L}_h$, which leads to an ill-posed system
(\ref{eq:Jt8a}). To avoid this difficulty, we choose $\mathcal{L}_h$
as in (\ref{LH}) in all our experiments, thus obtaining well-posed
problems.
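To fix ideas, the following Python sketch shows one possible way to build the
Dirichlet node sets entering (\ref{VH}) and (\ref{LH}) on the unit square, by
classifying the boundary nodes according to the sign of $b\cdot n$; the field
$b$ aligned with the $x$-axis is assumed only for illustration, and corner
nodes are attributed to a single face for simplicity.
\begin{verbatim}
import numpy as np

# Sketch: classify boundary nodes of the unit square for a given unit field b,
# in order to build the Dirichlet node sets of (VH) and (LH).
# The field b aligned with the x-axis is assumed only for illustration.
def b_field(px, py):
    return np.array([1.0, 0.0])

Nx = Ny = 8
x = np.linspace(0.0, 1.0, Nx + 1)
y = np.linspace(0.0, 1.0, Ny + 1)
X, Y = np.meshgrid(x, y, indexing="ij")

def outward_normal(px, py, tol=1e-12):
    # outward normal of the unit square; corner nodes get one face only
    if px < tol:
        return np.array([-1.0, 0.0])
    if px > 1.0 - tol:
        return np.array([1.0, 0.0])
    if py < tol:
        return np.array([0.0, -1.0])
    return np.array([0.0, 1.0])

on_boundary = (np.isclose(X, 0.0) | np.isclose(X, 1.0) |
               np.isclose(Y, 0.0) | np.isclose(Y, 1.0))

dirichlet = np.zeros_like(X, dtype=bool)   # nodes on dOmega_D: b.n = 0
inflow = np.zeros_like(X, dtype=bool)      # nodes on dOmega_in: b.n < 0
for i, j in zip(*np.where(on_boundary)):
    n = outward_normal(X[i, j], Y[i, j])
    bn = float(np.dot(b_field(X[i, j], Y[i, j]), n))
    dirichlet[i, j] = np.isclose(bn, 0.0)
    inflow[i, j] = bn < 0.0

# V_h imposes phi = 0 on `dirichlet`;
# L_h imposes lambda = 0 on `inflow | dirichlet`.
print(dirichlet.sum(), inflow.sum())
\end{verbatim}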
\subsection{Discretization} \label{Discr}
Let us present the discretization in a 2D case, the 3D case being a
simple generalization. The computational domain considered here
is the square $\Omega = [0,1]\times [0,1]$. All simulations
are performed on structured meshes. Let us introduce the Cartesian,
homogeneous grid
\begin{equation}gin{gather}
x_i = i / N_x \;\; , \;\; 0 \leq i \leq N_x \,, \quad
y_j = j / N_y \;\; , \;\; 0 \leq j \leq N_y
\label{eq:Jp8a},
\end{gather}
where $N_x$ and $N_y$ are positive even integers, corresponding to
the number of discretization intervals in the $x$-
resp. $y$-direction. The corresponding mesh-sizes are denoted by $h_x
>0$ resp. $h_y >0$. Choosing a $\mathbb Q_2$ finite element method
($\mathbb Q_2$-FEM), based on the following quadratic basis functions
\begin{equation}gin{gather}
\theta _{x_i}=
\left\{
\begin{equation}gin{array}{ll}
\frac{(x-x_{i-2})(x-x_{i-1})}{2h_x^{2}} & x\in [x_{i-2},x_{i}],\\
\frac{(x_{i+2}-x)(x_{i+1}-x)}{2h_x^{2}} & x\in [x_{i},x_{i+2}],\\
0 & \text{else}
\end{array}
\right.\,, \quad
\theta _{y_j} =
\left\{
\begin{equation}gin{array}{ll}
\frac{(y-y_{j-2})(y-y_{j-1})}{2h_y^{2}} & y\in [y_{j-2},y_{j}],\\
\frac{(y_{j+2}-y)(y_{j+1}-y)}{2h_y^{2}} & y\in [y_{j},y_{j+2}],\\
0 & \text{else}
\end{array}
\right.
\label{eq:Js8a1}
\end{gather}
for even $i,j$ and
\begin{equation}gin{gather}
\theta _{x_i}=
\left\{
\begin{equation}gin{array}{ll}
\frac{(x_{i+1}-x)(x-x_{i-1})}{h_x^{2}} & x\in [x_{i-1},x_{i+1}],\\
0 & \text{else}
\end{array}
\right.\,, \quad
\theta _{y_j} =
\left\{
\begin{equation}gin{array}{ll}
\frac{(y_{j+1}-y)(y-y_{j-1})}{h_y^{2}} & y\in [y_{j-1},y_{j+1}],\\
0 & \text{else}
\end{array}
\right.
\label{eq:Js8a2}
\end{gather}
for odd $i,j$, we define
$$
X_h := \{ v_h = \sum_{i,j} v_{ij}\, \theta_{x_i} (x)\, \theta_{y_j}(y)\}\,.
$$
We then search for discrete solutions $(p^\varepsilon_h,\;q^\varepsilon_h,\;l^\varepsilon_h)
\in \mathcal V_h\times \mathcal V_h\times \mathcal
V_h$ and $(\lambda^\varepsilon_h,\;\mu^\varepsilon_h)
\in \mathcal L_h\times \mathcal L_h$ with
${\cal V}_h $ and ${\cal L}_h $ defined by (\ref{VH}) and (\ref{LH}).
This leads to the inversion of a linear system, the corresponding
matrix being non-symmetric and given by
\begin{equation}gin{gather}
A =
\left(
\begin{equation}gin{array}{ccccc}
A_1 & A_0 & A_1 & 0 & 0 \\
A_0 & 0 & 0 & 0 & 0 \\
\varepsilon A_1 & 0 & A_0 +\varepsilon A_1& C & 0 \\
0 & 0 & C & 0 & A_0\\
0 & 0 & 0 & A_0& 0
\end{array}
\right)
\label{eq:Jo8a}
.
\end{gather}
The sub-matrices $A_0$, $A_1$ resp. $C$ correspond to the bilinear
forms $a_{||}(\cdot,\cdot)$, $a_{\perp}(\cdot,\cdot)$ resp. $(\cdot,\cdot)$,
used in equations (\ref{eq:Ju7a}) and belong to $\mathbb
R^{(N_x+1)(N_y+1)\times (N_x+1)(N_y+1)}$. The matrix elements are
computed using the 2D Gauss quadrature formula, with 3 points in the
$x$ and $y$ direction:
\begin{equation}gin{gather}
\int_{-1}^{1}\int_{-1}^{1}f (x,y)\, dx\, dy =
\sum_{i,j=-1}^{1} \omega _{i}\omega _{j} f (x_i,y_j)\,,
\label{eq:Jk9a}
\end{gather}
where $x_0=y_0=0$, $x_{\pm 1}=y_{\pm 1}=\pm\sqrt {\frac{3}{5}}$,
$\omega _0 = 8/9$ and $\omega _{\pm 1} = 5/9$, which is exact for
polynomials of degree 5.
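For illustration, the block system (\ref{eq:Jo8a}) can be assembled for
instance with \texttt{scipy.sparse.bmat}; in the following Python sketch the
sub-matrices $A_0$, $A_1$, $C$ are random sparse placeholders standing in for
the actual $\mathbb Q_2$ matrices, so that only the layout of the linear
system, not its entries, is illustrated.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

# Sketch of the block structure (eq:Jo8a) of the discrete AP system.
# A0, A1 and C are placeholders standing in for the Q2 matrices of
# a_||, a_perp and the L^2 product; only the layout is illustrated here.
n = 5                                  # placeholder number of degrees of freedom
rng = np.random.default_rng(0)
A0 = sp.random(n, n, density=0.4, random_state=rng, format="csr")
A1 = sp.random(n, n, density=0.4, random_state=rng, format="csr")
C = sp.identity(n, format="csr")
eps = 1e-6

A = sp.bmat([
    [A1,       A0,   A1,            None, None],
    [A0,       None, None,          None, None],
    [eps * A1, None, A0 + eps * A1, C,    None],
    [None,     None, C,             None, A0],
    [None,     None, None,          A0,   None],
], format="csr")

# right-hand side: (f, eta) in block 1 and eps (f, xi) in block 3
Ff = rng.standard_normal(n)
rhs = np.concatenate([Ff, np.zeros(n), eps * Ff, np.zeros(n), np.zeros(n)])
print(A.shape, rhs.shape)              # the 5n x 5n non-symmetric AP system
\end{verbatim}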
\subsection{Numerical Results}\label{sec:test case}
\subsubsection{2D test case, uniform and aligned $b$-field}
In this section we compare the numerical results obtained via the
$\mathbb Q_2$-FEM, by discretizing the Singular Perturbation
model (\ref{eq:J07a}), the Limit model (\ref{eq:Jv9a}) and the
Asymptotic Preserving reformulation (\ref{eq:Ju7a}). In all numerical
tests we set $A_\perp = Id$ and $A_\parallel = 1$. We start with a
simple test case, where the analytical solution is known. Let the
source term $f$ be given by
\begin{equation}gin{gather}
f = \left(4 + \varepsilon \right) \pi^{2} \cos \left( 2\pi x\right)
\sin \left(\pi y \right) +
\pi^{2} \sin \left(\pi y \right)
\label{eq:J87a}
\end{gather}
and the $b$ field be aligned with the $x$-axis. Hence, the solution
$\phi^{\varepsilon }$ of (\ref{eq:J07a}) and its decomposition
$\phi^{\varepsilon }=p^{\varepsilon } + q^{\varepsilon }$ read
\begin{equation}gin{gather}
\phi^{\varepsilon } = \sin \left(\pi y \right) + \varepsilon \cos \left( 2\pi x\right)
\sin \left(\pi y \right), \\
p^{\varepsilon } = \sin \left(\pi y \right)\,, \quad
q^{\varepsilon } = \varepsilon \cos \left( 2\pi x\right)
\sin \left(\pi y \right)
\label{eq:J97a}.
\end{gather}
We denote by $\phi _P$, $\phi _L$, $\phi _A$ the numerical solution of
the Singular Perturbation model (\ref{eq:J07a}), the Limit model
(\ref{eq:Jv9a}) and the Asymptotic Preserving reformulation
(\ref{eq:Ju7a}) respectively. The comparison will be done in the
$L^{2}$-norm as well as the $H^{1}$-norm. The linear systems obtained after
discretization of the three methods are solved using the same
numerical algorithm --- an LU decomposition implemented in the solver
MUMPS \cite{MUMPS}.
\def0.45\textwidth{0.45\textwidth}
\begin{equation}gin{figure}[!ht]
\centering
\subfigure[$L^{2}$ error for a grid with $50\times 50$ points.]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/50L2}}
\subfigure[$H^{1}$ error for a grid with $50\times 50$ points.]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/50H1}}
\subfigure[$L^{2}$ error for a grid with $100\times 100$ points.]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/100L2}}
\subfigure[$H^{1}$ error for a grid with $100\times 100$ points.]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/100H1}}
\subfigure[$L^{2}$ error for a grid with $200\times 200$ points.]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/200L2}}
\subfigure[$H^{1}$ error for a grid with $200\times 200$ points.]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/200H1}}
\caption{Absolute $L^{2}$ (left column) and $H^{1}$ (right column)
errors between the exact solution $\phi^{\varepsilon }$ and the
computed numerical solution $\phi _A$ (AP), $\phi _L$ (L), $\phi
_P$ (P) for the test case with constant $b$. The error is plotted
as a function of the parameter $\eps$ and for three different
mesh-sizes.}
\label{fig:error}
\end{figure}
\begin{equation}gin{table}
\centering
\begin{equation}gin{tabular}{|c||c|c||c|c||c|c|}
\hline
\multirow{2}{*}{$\varepsilon$} &
\multicolumn{2}{|c||}{\rule{0pt}{2.5ex}AP scheme}
& \multicolumn{2}{|c||}{Limit model}
& \multicolumn{2}{|c|}{Singular Perturbation scheme} \\
\cline{2-7}
&\rule{0pt}{2.5ex}
$L^2$ error & $H^1$ error & $L^2$ error & $H^1$ error & $L^2$ error & $H^1$ error \\
\hline\hline\rule{0pt}{2.5ex}
10 &
$7.2\times 10^{-6}$ & $4.7\times 10^{-3}$ &
$5.0\times 10^{0}$ & $3.51\times 10^{1}$ &
$7.2\times 10^{-6}$ & $4.7\times 10^{-3}$
\\
\hline\rule{0pt}{2.5ex}
1 &
$7.3\times 10^{-7}$ & $4.7\times 10^{-4}$ &
$5.0\times 10^{-1}$ & $3.51\times 10^{0}$ &
$7.3\times 10^{-7}$ & $4.7\times 10^{-4}$
\\
\hline\rule{0pt}{2.5ex}
$10^{-1}$&
$1.47\times 10^{-7}$ & $9.6\times 10^{-5}$ &
$5.0\times 10^{-2}$ & $3.51\times 10^{-1}$ &
$1.45\times 10^{-7}$ & $9.4\times 10^{-5}$
\\
\hline\rule{0pt}{2.5ex}
$10^{-4}$&
$1.28\times 10^{-7}$ & $8.3\times 10^{-5}$ &
$5.0\times 10^{-5}$ & $3.61\times 10^{-4}$ &
$1.26\times 10^{-7}$ & $8.2\times 10^{-5}$
\\
\hline\rule{0pt}{2.5ex}
$10^{-6}$&
$1.28\times 10^{-7}$ & $8.3\times 10^{-5}$ &
$5.2\times 10^{-7}$ & $8.4\times 10^{-5}$ &
$5.9\times 10^{-7}$ & $8.2\times 10^{-5}$
\\
\hline\rule{0pt}{2.5ex}
$10^{-10}$&
$1.28\times 10^{-7}$ & $8.3\times 10^{-5}$ &
$1.28\times 10^{-7}$ & $8.3\times 10^{-5}$ &
$9.9\times 10^{-3}$ & $3.12\times 10^{-2}$
\\
\hline\rule{0pt}{2.5ex}
$10^{-15}$&
$1.28\times 10^{-7}$ & $8.3\times 10^{-5}$ &
$1.28\times 10^{-7}$ & $8.3\times 10^{-5}$ &
$7.1\times 10^{-1}$ & $2.23\times 10^{0}$
\\
\hline
\end{tabular}
\caption{Comparison between the Asymptotic Preserving scheme, the
Limit model and the Singular Perturbation model for $h=0.005$ (200 mesh points
in each direction) and constant $b$: absolute $L^2$-error and $H^1$-error, for different $\eps$-values.}
\label{tab:error}
\end{table}
In Figure \ref{fig:error} we plotted the absolute errors (in the $L^2$
resp. $H^1$-norms) between the numerical solutions obtained with one
of the three methods and the exact solution, and this, as a function
of the parameter $\eps$ and for several mesh-sizes. In Table
\ref{tab:error}, we specified the error values for one fixed grid and
several $\eps$-values. One observes that the Singular Perturbation
finite element approximation is accurate only for $\varepsilon$ larger
than some critical value $\varepsilon _P$, the Limit model gives
reliable results for $\varepsilon$ smaller than $\varepsilon _L$,
whereas the AP-scheme is accurate independently of $\eps$. The order
of convergence for all three methods is three in the $L^2$-norm and
two in the $H^1$-norm, which is an optimal result for $\mathbb Q_2$
finite elements. When designing a robust numerical method one has
therefore two options. The first one is to use an Asymptotic
Preserving scheme, which is accurate independently of $\varepsilon $,
but requires the solution of a larger linear system. The second one
is to design a coupling strategy that involves the solution of the
Singular Perturbation formulation and the Limit problem in their
respective validity domains. This is however a very delicate problem,
since we observe that the critical values $\varepsilon _P$ and
$\varepsilon _L$ are mesh dependent, namely $\varepsilon _P$ is inversely
proportional to $h$ and $\varepsilon _L$ is proportional to
$h$. Therefore, for small mesh-sizes there may exist a range of
$\varepsilon $-values where neither the Singular Perturbation nor the
Limit model finite element approximation gives accurate results. For
our test case, this happens even for meshes as fine as $200\times
200$ points, if one considers the $L^2$-norm. Such a mesh-size is
generally insufficient for real physical applications.
\begin{equation}gin{table}
\centering
\begin{equation}gin{tabular}{|c||c|c|c|c|c|}
\hline\rule{0pt}{2.5ex}
method & \# rows & \# non zero & time & $L^{2}$-error & $H^{1}$-error\\
\hline
\hline\rule{0pt}{2.5ex}
AP &
$50\times 10^{3}$ &
$1563\times 10^{3}$ &
$13.212$ s &
$1.02\times 10^{-6}$ &
$3.34\times 10^{-4}$
\\
\hline\rule{0pt}{2.5ex}
L &
$20\times 10^{3}$ &
$469\times 10^{3}$ &
$5.227$ s &
$1.14\times 10^{-6}$ &
$3.34\times 10^{-4}$
\\
\hline\rule{0pt}{2.5ex}
P &
$10\times 10^{3}$ &
$157\times 10^{3}$ &
$3.707$ s &
$1.02\times 10^{-6}$ &
$3.27\times 10^{-4}$
\\
\hline
\end{tabular}
\caption{Comparison between the Asymptotic Preserving scheme (AP),
the Limit model (L) and the Singular Perturbation model (P) for
$h=0.01$ (100 mesh points in each direction) and fixed $\varepsilon =
10^{-6}$: matrix size, number of nonzero elements, average
computational time and error in $L^{2}$ and $H^{1}$ norms.}
\label{tab:time}
\end{table}
Another interesting aspect with respect to which the three methods
must be compared is the computational time and the size of the
matrices involved in the linear systems. Table \ref{tab:time} shows
that the Asymptotic Preserving scheme is expensive in computational
time and memory requirements, as compared to the other
methods. Indeed, the computational time required to solve the problem
is almost four times larger than that of the Singular Perturbation
scheme. Moreover, the Asymptotic Preserving method involves matrices
that have five times more rows and ten times more nonzero elements
than the matrices obtained with the Singular Perturbation
approximation. It is however the only scheme that provides the
$h$-convergence regardless of $\varepsilon$. In order to reduce the
computational costs, a coupling strategy for problems with variable
$\varepsilon $ will be proposed in a forthcoming paper. In sub-domains
where $\varepsilon > \varepsilon _P$ the Singular Perturbation problem
will be solved, in sub-domains where $\varepsilon < \varepsilon _L$
the Limit problem will be solved and only in the remaining part, where
neither the Limit nor the Singular Perturbation model are valid, the
Asymptotic Preserving formulation will be solved.
\subsubsection{2D test case, non-uniform and non-aligned $b$-field}
We now focus our attention on the original feature of the numerical
method introduced here, namely its ability to treat nonuniform
$b$-fields. In this section we present numerical simulations performed
for a variable field $b$.
Let us first construct a numerical test case. Finding an analytical
solution for an arbitrary $b$ presents a considerable difficulty. We
have therefore chosen a different approach: we first prescribe a limit
solution
\begin{equation}gin{gather}
\phi^{0} = \sin \left(\pi y +\alpha (y^2-y)\cos (\pi x) \right)
\label{eq:J79a},
\end{gather}
where $\alpha $ is a numerical constant aimed at controlling the
variations of $b$. For $\alpha =0$, the limit solution of the previous
section is obtained. The limit solution for $\alpha =2$ is shown in
Figure \ref{fig:limit}. \textcolor{black}{We set $\alpha =2$ in what follows.}
\begin{equation}gin{figure}[!ht]
\centering
\def0.45\textwidth{0.45\textwidth}
\includegraphics[width=0.45\textwidth]{fig/pvar}
\caption{The limit solution for the test case with variable $b$.}
\label{fig:limit}
\end{figure}
Since $\phi^{0}$ is a limit solution, it is constant along the $b$
field lines. Therefore we can determine the $b$ field using the
following implication
\begin{equation}gin{gather}
\nabla_{\parallel} \phi^{0} = 0 \quad \Rightarrow \quad
b_x \frac{\partial \phi^{0}}{\partial x} +
b_y \frac{\partial \phi^{0}}{\partial y} = 0\,,
\label{eq:J89a}
\end{gather}
which yields for example
\begin{equation}gin{gather}
b = \frac{B}{|B|}\, , \quad
B =
\left(
\begin{equation}gin{array}{c}
\alpha (2y-1) \cos (\pi x) + \pi \\
\pi \alpha (y^2-y) \sin (\pi x)
\end{array}
\right)
\label{eq:J99a}\,\quad.
\end{gather}
Note that the field $B$, constructed in this way, satisfies
$\text{div} B = 0$, which is an important property in the framework of
plasma simulation. Furthermore, we have $B \neq 0$ in the
computational domain. Now, we choose $\phi^\varepsilon $ to be a
function that converges, as $\varepsilon \rightarrow 0$, to the limit
solution $\phi^{0}$:
\begin{equation}gin{gather}
\phi^{\varepsilon } = \sin \left(\pi y +\alpha (y^2-y)\cos (\pi
x) \right) + \varepsilon \cos \left( 2\pi x\right) \sin \left(\pi
y \right)
\label{eq:Jc0a}.
\end{gather}
Finally, the force term is computed from the equation, i.e.
\begin{equation}gin{gather}
f = - \nabla_\perp \cdot (A_\perp \nabla_\perp \phi^{\varepsilon })
- \frac{1}{\varepsilon }\nabla_\parallel \cdot (A_\parallel \nabla_\parallel \phi^{\varepsilon })
\nonumber.
\end{gather}
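As a sanity check of this construction, the two properties used above, $b\cdot\nabla\phi^{0}=0$ and $\text{div}\, B=0$, can be verified symbolically. The following sketch (Python with sympy; the variable names are ours and not part of any code accompanying this paper) does so for the field $B$ of (\ref{eq:J99a}); the source term $f$ can be assembled in the same way once $A_\perp$ and $A_\parallel$ are specified.
\begin{verbatim}
# Symbolic sanity check (sympy): the field B of eq. (J99a) is divergence-free
# and the limit solution phi^0 of eq. (J79a) is constant along its field lines.
import sympy as sp

x, y, alpha = sp.symbols('x y alpha')

phi0 = sp.sin(sp.pi*y + alpha*(y**2 - y)*sp.cos(sp.pi*x))
Bx = alpha*(2*y - 1)*sp.cos(sp.pi*x) + sp.pi
By = sp.pi*alpha*(y**2 - y)*sp.sin(sp.pi*x)

print(sp.simplify(Bx*sp.diff(phi0, x) + By*sp.diff(phi0, y)))  # -> 0  (B . grad phi^0)
print(sp.simplify(sp.diff(Bx, x) + sp.diff(By, y)))            # -> 0  (div B)
\end{verbatim}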
As in the previous section, we compare here the numerical
solutions of the Singular Perturbation model (\ref{eq:J07a}), the Limit
model (\ref{eq:Jv9a}) and the Asymptotic Preserving reformulation
(\ref{eq:Ju7a}), i.e. $\phi _P$, $\phi _L$, $\phi _A$, with the exact
solution (\ref{eq:Jc0a}). The $L^{2}$ and $H^{1}$-errors are reported
in Figure \ref{fig:errorvar} and Table \ref{tab:errorvar}. Once again
the Asymptotic Preserving scheme proves to be valid for all values of
$\varepsilon $, contrary to the other schemes. There is however a
difference compared to the constant-$b$ case.
For a variable $b$, the threshold value $\varepsilon _P$ seems to be
independent of the mesh size and is much larger than that of the
uniform $b$ test case. This observation further limits the possible
choice of coupling strategies, since even for coarse meshes there
exists a range of $\varepsilon $-values where neither the Singular
Perturbation nor the Limit model is valid. The coupling strategy
involving all three models remains however interesting to
investigate.
\begin{equation}gin{table}
\centering
\begin{equation}gin{tabular}{|c||c|c||c|c||c|c|}
\hline
\multirow{2}{*}{$\varepsilon$} &
\multicolumn{2}{|c||}{\rule{0pt}{2.5ex}AP scheme}
& \multicolumn{2}{|c||}{Limit model}
& \multicolumn{2}{|c|}{Singular Perturbation scheme} \\
\cline{2-7}
&\rule{0pt}{2.5ex}
$L^2$ error & $H^1$ error & $L^2$ error & $H^1$ error & $L^2$ error & $H^1$ error \\
\hline\hline\rule{0pt}{2.5ex}
10 &
$7.2\times 10^{-6}$ & $4.6\times 10^{-3}$ &
$5.0\times 10^{0}$ & $3.50\times 10^{1}$ &
$7.2\times 10^{-6}$ & $4.6\times 10^{-3}$
\\
\hline\rule{0pt}{2.5ex}
1 &
$7.1\times 10^{-7}$ & $4.6\times 10^{-4}$ &
$5.0\times 10^{-1}$ & $3.50\times 10^{0}$ &
$7.1\times 10^{-7}$ & $4.6\times 10^{-4}$
\\
\hline\rule{0pt}{2.5ex}
$10^{-2}$&
$2.05\times 10^{-7}$ & $1.33\times 10^{-4}$ &
$5.0 \times 10^{-3}$ & $3.50\times 10^{-2}$ &
$2.05\times 10^{-7}$ & $1.33\times 10^{-4}$
\\
\hline\rule{0pt}{2.5ex}
$10^{-4}$&
$2.12\times 10^{-7}$ & $1.38\times 10^{-4}$ &
$5.0 \times 10^{-5}$ & $3.77\times 10^{-4}$ &
$1.74\times 10^{-6}$ & $1.43\times 10^{-4}$
\\
\hline\rule{0pt}{2.5ex}
$10^{-7}$&
$2.17\times 10^{-7}$ & $1.41\times 10^{-4}$ &
$2.22\times 10^{-7}$ & $1.41\times 10^{-4}$ &
$1.68\times 10^{-3}$ & $1.26\times 10^{-2}$
\\
\hline\rule{0pt}{2.5ex}
$10^{-10}$&
$2.17\times 10^{-7}$ & $1.41\times 10^{-4}$ &
$2.17\times 10^{-7}$ & $1.41\times 10^{-4}$ &
$3.93\times 10^{-1}$ & $1.35\times 10^{0}$
\\
\hline\rule{0pt}{2.5ex}
$10^{-15}$&
$2.17\times 10^{-7}$ & $1.41\times 10^{-4}$ &
$2.17\times 10^{-7}$ & $1.41\times 10^{-4}$ &
$6.7 \times 10^{-1}$ & $2.32\times 10^{0}$
\\
\hline
\end{tabular}
\caption{Comparison between the Asymptotic preserving scheme, the
Limit model and the Singular Perturbation model for $h=0.005$ (200 mesh points
in each direction) and variable $b$: absolute $L^2$-error and $H^1$-error.}
\label{tab:errorvar}
\end{table}
\def0.45\textwidth{0.45\textwidth}
\begin{equation}gin{figure}[!ht]
\centering
\subfigure[$L^{2}$ error for a grid with $50\times 50$ points.]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/50varL2}}
\subfigure[$H^{1}$ error for a grid with $50\times 50$ points.]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/50varH1}}
\subfigure[$L^{2}$ error for a grid with $100\times 100$ points.]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/100varL2}}
\subfigure[$H^{1}$ error for a grid with $100\times 100$ points.]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/100varH1}}
\subfigure[$L^{2}$ error for a grid with $200\times 200$ points.]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/200varL2}}
\subfigure[$H^{1}$ error for a grid with $200\times 200$ points.]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/200varH1}}
\caption{Absolute $L^{2}$ (left column) and $H^{1}$ (right column)
errors between the exact solution $\phi^{\varepsilon }$ and the
computed solution $\phi _A$ (AP), $\phi _L$ (L), $\phi _P$ (P) for
the test case with variable $b$. Plotted are the errors as a
function of the small parameter $\varepsilon $, for three different
meshes.}
\label{fig:errorvar}
\end{figure}
In the next test case we investigate the influence of the variations
of the $b$ field on the accuracy of the solution. We would like to answer
the following question: what is the minimal number of points per
characteristic length of the $b$ variations required to obtain an
acceptable solution? For this, let us modify the previous test case. Let $b =
B/|B|$, with
\begin{equation}gin{gather}
B =
\left(
\begin{equation}gin{array}{c}
\alpha (2y-1) \cos (m\pi x) + \pi \\
m\pi \alpha (y^2-y) \sin (m\pi x)
\end{array}
\right)\, ,
\label{eq:Ji0a}
\end{gather}
$m$ being an integer. The limit solution and $\phi^{\varepsilon}$ are
chosen to be
\begin{equation}gin{gather}
\phi^{0} = \sin \left(\pi y +\alpha (y^2-y)\cos (m\pi x) \right)
\label{eq:Jj0a}, \\
\phi^{\varepsilon } = \sin \left(\pi y +\alpha (y^2-y)\cos (m\pi
x) \right) + \varepsilon \cos \left( 2\pi x\right) \sin \left(\pi
y \right)
\label{eq:Jk0a}.
\end{gather}
We perform two tests: first, we fix the mesh size and vary $m$ to
find the minimal period of $b$ for which the Asymptotic Preserving
method still yields acceptable results. We define a result to be acceptable
when the relative error is less than $0.01$. In the second test $m$
remains fixed and the convergence of the scheme is studied. The
results are presented in Figures \ref{fig:bp_var} and
\ref{fig:bp_h_var}.
For $\varepsilon =1$ and 400 mesh points in each direction
($h=0.0025$) the relative error in the $L^{2}$-norm, defined as
$\frac{||\phi^{\varepsilon} -
\phi_A||_{L^{2}(\Omega)}}{||\phi_A||_{L^{2}(\Omega)}}$, is below
$0.01$ for all tested values of $1 \leq m \leq 50$. The relative
$H^1$-error $\frac{||\phi^{\varepsilon} -
\phi_A||_{H^{1}(\Omega)}}{||\phi_A||_{H^{1}(\Omega)}}$ exceeds the
critical value for $m > 25$. For $\varepsilon = 10^{-20}$ the maximal
$m$ for which the error is acceptable in both norms is $20$. Thus, in
the worst case, at least 40 mesh points per period of the $b$
variations are needed to obtain a $1\%$ relative error.
Figure \ref{fig:bp_h_var} and Table \ref{tab:bp_h_var} show the
convergence of the Asymptotic Preserving scheme with respect to $h$
for $m=10$ and $\varepsilon = 10^{-10}$. We observe that for large
values of $h$ the error does not diminish with $h$. Then, for $h <
0.025$ the scheme converges at a rate better than 2 for the $H^{1}$-error
and 3 for the $L^{2}$-error. For $h<0.00625$ (160 points) the optimal
convergence rate in the $H^{1}$-norm is obtained (which corresponds to 32 mesh
points per period of $b$). The method is super-convergent in the whole
tested range for the $L^{2}$-error.
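For the reader's convenience, the observed convergence orders can be extracted directly from the $h$/error pairs of Table \ref{tab:bp_h_var}; the following short script (numpy; a sketch of ours, with the errors copied from the table) computes $p_k=\log(e_k/e_{k+1})/\log(h_k/h_{k+1})$ for both norms.
\begin{verbatim}
# Observed convergence orders from the h/error pairs reported in the table below.
import numpy as np

h  = np.array([0.1, 0.05, 0.025, 0.0125, 0.00625, 0.003125, 0.0015625, 0.00078125])
l2 = np.array([4.7e-1, 5.2e-1, 1.82e-1, 1.89e-2, 1.41e-3, 9.3e-5, 6.1e-6, 4.6e-7])
h1 = np.array([1.05, 1.29, 4.3e-1, 6.4e-2, 1.00e-2, 2.21e-3, 5.5e-4, 1.36e-4])

order = lambda e: np.log(e[:-1]/e[1:]) / np.log(h[:-1]/h[1:])
print(np.round(order(l2), 2))  # L2 orders: above 3 once the b variations are resolved
print(np.round(order(h1), 2))  # H1 orders: settle close to 2 on the finest meshes
\end{verbatim}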
These results are reassuring, as they show that the Asymptotic
Preserving scheme is accurate even for strongly varying fields with a
moderate number of mesh points per period, which was not evident a
priori. Indeed, the optimal convergence rate in the $H^{1}$-norm is
obtained with 32 mesh points per $b$ period, and a $1\%$ relative
error with 40 points. This indicates that accurate results can be
obtained in more complex simulations, such as tokamak plasmas. The
application of the method to larger-scale problems is the subject of
ongoing work.
\def0.45\textwidth{0.45\textwidth}
\begin{equation}gin{figure}[!ht]
\centering
\subfigure[$\varepsilon = 1$]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/bp_res400e0}}
\subfigure[$\varepsilon = 10^{-20}$]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/bp_res400e-20}}
\caption{Relative $L^{2}$ and $H^{1}$ errors between the exact
solution $\phi^{\varepsilon }$ and the computed solution $\phi _A$
(AP) for $h=0.0025$ (400 points in each direction) as a function of
$m$, for $\varepsilon = 1$ and $\varepsilon = 10^{-20}$ respectively.}
\label{fig:bp_var}
\end{figure}
\def0.45\textwidth{0.45\textwidth}
\begin{equation}gin{figure}[!ht]
\centering
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/res_bp10e-10}}
\caption{Relative $L^{2}$ and $H^{1}$ errors between the exact
solution $\phi^{\varepsilon }$ and the computed solution $\phi _A$
(AP) for $m=10$ and $\varepsilon = 10^{-10}$ as a function of $h$.}
\label{fig:bp_h_var}
\end{figure}
\begin{equation}gin{table}[!ht]
\centering
\begin{equation}gin{tabular}{|c|c||c|c|}
\hline\rule{0pt}{2.5ex}
$h$ & \# points per period & $L^{2}$-error & $H^{1}$-error\\
\hline
\hline\rule{0pt}{2.5ex}
$0.1$ &
$2$ &
$4.7\times 10^{-1}$ &
$1.05$
\\
\hline\rule{0pt}{2.5ex}
$0.05$ &
$4$ &
$5.2\times 10^{-1}$ &
$1.29$
\\
\hline\rule{0pt}{2.5ex}
$0.025$ &
$8$ &
$1.82\times 10^{-1}$ &
$4.3\times 10^{-1}$
\\
\hline\rule{0pt}{2.5ex}
$0.0125$ &
$16$ &
$1.89\times 10^{-2}$ &
$6.4\times 10^{-2}$
\\
\hline\rule{0pt}{2.5ex}
$0.00625$ &
$32$ &
$1.41\times 10^{-3}$ &
$1.00\times 10^{-2}$
\\
\hline\rule{0pt}{2.5ex}
$0.003125$ &
$64$ &
$9.3\times 10^{-5}$ &
$2.21\times 10^{-3}$
\\
\hline\rule{0pt}{2.5ex}
$0.0015625$ &
$128$ &
$6.1\times 10^{-6}$ &
$5.5\times 10^{-4}$
\\
\hline\rule{0pt}{2.5ex}
$0.00078125$ &
$256$ &
$4.6\times 10^{-7}$ &
$1.36\times 10^{-4}$
\\
\hline
\end{tabular}
\caption{Relative $L^{2}$ and $H^{1}$ errors between the exact
solution $\phi^{\varepsilon }$ and the computed solution $\phi _A$
(AP) for $m=10$ and $\varepsilon = 10^{-10}$ as a function of
$h$.}
\label{tab:bp_h_var}
\end{table}
\subsubsection{3D test case, uniform and aligned $b$-field}
Finally, we test our method on a simple $3D$ test case. Let the
field $b$ be aligned with the $X$-axis:
\begin{equation}gin{gather}
b=
\left(
\begin{equation}gin{array}{c}
1 \\
0 \\
0
\end{array}
\right)
\label{eq:Jf0a}.
\end{gather}
Let $\Omega = [0,1] \times [0,1]\times [0,1]$, and let the source term $f$
be such that the solution is given by
\begin{equation}gin{gather}
\phi^{\varepsilon } = \sin \left(\pi y \right)\sin \left(\pi z \right)
+ \varepsilon \cos \left( 2\pi x\right)
\sin \left(\pi y \right)\sin \left(\pi z \right), \\
p^{\varepsilon } = \sin \left(\pi y \right)\sin \left(\pi z \right)\,, \quad
q^{\varepsilon } = \varepsilon \cos \left( 2\pi x\right)
\sin \left(\pi y \right)\sin \left(\pi z \right)
\label{eq:J97a:2}.
\end{gather}
Numerical simulations were performed on a $30\times 30\times 30$
grid. Once again all three methods are compared. The $L^2$ and
$H^{1}$-errors are given in Figure \ref{fig:error3d}. The numerical
results are equivalent to those obtained in the 2D test with
constant $b$. Note that it is difficult to perform 3D simulations with
more refined grids, due to the memory requirements, since our
simulations are run on standard desktop equipment. Every row of the
matrix constructed for the Singular Perturbation model can contain up
to 125 nonzero entries (for $\mathbb Q_2$ finite elements), while
matrices associated with the Asymptotic Preserving reformulation have
rows with up to 375 nonzero entries. Furthermore, the dimension of the
latter is five times bigger. The memory requirements of the direct
solver used in our simulations therefore grow rapidly. A remedy could
be to use an iterative solver with a suitable preconditioner. Finding
the most efficient method to invert these matrices is however beyond
the scope of this paper. In future work we will address this problem
as well as a parallelization of the method.
\def0.45\textwidth{0.45\textwidth}
\begin{equation}gin{figure}[!ht]
\centering
\subfigure[$L^{2}$ error for a grid with $30\times 30 \times 30$ points.]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/3D30L2}}
\subfigure[$H^{1}$ error for a grid with $30\times 30 \times 30$ points.]
{\includegraphics[angle=-90,width=0.45\textwidth]{fig/3D30H1}}
\caption{Absolute $L^{2}$ (left column) and $H^{1}$ (right column)
errors between the exact solution $\phi^{\varepsilon }$ and the
computed solution $\phi _A$ (AP), $\phi _L$ (L), $\phi _P$ (P) for
the 3D test case. The errors are plotted as a function of the
anisotropy ratio $\varepsilon $.}
\label{fig:error3d}
\end{figure}
\section{Conclusions}\label{SEC6}
The asymptotic preserving method presented in this paper is shown to
be very efficient for the solution of highly anisotropic elliptic
equations, where the anisotropy direction is given by an arbitrary,
but smooth, vector field $b$ with non-adapted coordinates and
meshes. The results presented here generalize the procedure used in
\cite{DDN} and have the important advantage of permitting the use of
Cartesian grids, independently of the shape of the
anisotropy. Moreover, the scheme is equally accurate independently of
the anisotropy strength, thus avoiding the use of coupling
methods. The numerical analysis of this AP-scheme shall be carried out
in a forthcoming paper; in particular, the $\eps$-independent
convergence results shall be stated.
Another important related work consists in extending our methods to
the case of anisotropy ratios $\eps$ which vary in $\Omega$ from
moderate to very small values. This is important, for example, in
plasma physics simulations, as already noted in the introduction. An
alternative strategy to the Asymptotic Preserving schemes would be to
couple a standard discretization in subregions with moderate $\eps$
with a limit ($\eps\to 0$) model in subregions with small $\eps$, as
suggested, for example, in \cite{BCDDGT_1,Keskinen}. However, the
limit model is only valid for $\varepsilon \ll 1$ and cannot be
applied for weak anisotropies. Thus, the coupling strategy requires
the existence of a range of anisotropy strengths where both methods are
valid. This is rather undesirable since this range may not exist at
all, as illustrated by our results in Fig. \ref{fig:error}.
\appendix
\section{Decompositions $\mathcal{V}=\mathcal{G}\oplus ^{\perp }\mathcal{A}$, $\tilde{\mathcal{V}}=\tilde{\mathcal{G}}\oplus \mathcal{L}$ and related estimates} \label{appA}
\color{black}
We shall show in this Appendix that all the statements in Hypotheses B and B' can be rigorously derived under some
assumptions on the domain boundary $\partial\Omega$ and on the manner in which it is intersected by the field $b$.
We assume essentially that $b$ is tangential to $\partial\Omega$ on $\partial\Omega_D$ and that $b$ penetrates the remaining part of the boundary $\partial\Omega_N$ at an angle
that stays away from 0 on $\overline{\partial\Omega}_N$. We assume also that $\partial\Omega_N$ consists of two disjoint components for which there exist global and smooth parametrizations.
This last assumption can be weakened (the existence of an atlas of local smooth parametrizations should be sufficient) at the expense of lengthening the proofs.
The precise set of our assumptions is the following:\\
\color{black}
\textbf{Hypothesis C} The boundary of $\Omega $ is the union of three
components: $\partial \Omega _{D}$, where $b\cdot n=0$, $\partial
\Omega _{in}$, where $b\cdot n\leq -\alpha $, and $\partial \Omega
_{out}$, where $b\cdot n\geq \alpha $, with some constant $\alpha >0$.
Moreover, there is a smooth system of coordinates $\xi _{1},\ldots ,\xi
_{d-1}$ on $\partial \Omega _{in}$, meaning that there is a bounded domain $
\Gamma _{in}\subset \mathbb{R}^{d-1}$ and a one-to-one map $h_{in}:\Gamma
_{in}\rightarrow \mathbb{R}^{d}$ such that $h_{in}\in C^{2}(\overline{\Gamma}
_{in})$ and $\partial \Omega _{in}$ is the image of $h_{in}(\xi _{1},\ldots
,\xi _{d-1})$ as $(\xi _{1},\ldots ,\xi _{d-1})$ goes over $\Gamma _{in}$.
The matrix formed by the vectors $(\partial h_{in}/\partial \xi _{1},\ldots
,\partial h_{in}/\partial \xi _{d-1},n)$ is invertible for all $(\xi
_{1},\ldots ,\xi _{d-1})\in \overline{\Gamma}_{in}$. Similar assumptions hold
also for $\partial \Omega _{out}$ (changing $\Gamma _{in}$ to $\Gamma _{out}$
and $h_{in}$ to $h_{out}$).\\
Using this hypothesis we can introduce a system of coordinates in $\Omega$
such that the field lines of $b$ coincide with the coordinate lines.
To do this consider the initial value problem for a parametrized ordinary
differential equation (ODE):
\begin{equation}gin{equation}
\frac{\partial X}{\partial \xi _{d}}(\xi ^{\prime },\xi _{d})=b(X(\xi
^{\prime },\xi _{d})),~X(\xi ^{\prime },0)=h_{in}(\xi ^{\prime }).
\label{odeX}
\end{equation}
Here $X(\xi ^{\prime },\xi _{d})$ is $\mathbb{R}^{d}$-valued and $\xi
^{\prime }$ stands for $(\xi _{1},\ldots ,\xi _{d-1})$. For any
fixed $\xi ^{\prime }\in \Gamma _{in}$, equation (\ref{odeX}) should be
understood as an ODE for a function of $\xi _{d}$. Its solution $X(\xi
^{\prime },\xi _{d})$ then follows the field line of $b$ starting (at $\xi
_{d}=0$) at the point of the inflow boundary $\partial \Omega _{in}$ parametrized by $
\xi ^{\prime }$. This field line hits the outflow boundary $\partial \Omega
_{out}$ somewhere. In other words, for any $\xi ^{\prime }\in \Gamma _{in}$
there exists $L(\xi ^{\prime })>0$ such that $X(\xi ^{\prime },L(\xi
^{\prime }))\in \partial \Omega _{out}$. The domain of definition of $X$ is
thus
\begin{equation}gin{equation*}
D=\{(\xi ^{\prime },\xi _{d})\in \mathbb{R}^{d}~/~\xi ^{\prime }\in \Gamma
_{in}\text{ and }0<\xi _{d}<L(\xi ^{\prime })\}.
\end{equation*}
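For a concrete illustration, the map $X$ can also be computed numerically by integrating (\ref{odeX}); the following sketch (Python with scipy; the variable names and the choice of starting point are ours) traces one field line of the variable field $B$ of (\ref{eq:J99a}) with $\alpha=2$, for which the inflow and outflow boundaries are $\{x=0\}$ and $\{x=1\}$.
\begin{verbatim}
# Numerically trace one field line xi_d -> X(xi', xi_d) of the ODE (odeX),
# for the variable field B of the 2D test case (alpha = 2), starting on the
# inflow boundary {x = 0} and stopping when the outflow boundary {x = 1} is hit.
import numpy as np
from scipy.integrate import solve_ivp

alpha = 2.0

def b(t, X):
    x, y = X
    B = np.array([alpha*(2*y - 1)*np.cos(np.pi*x) + np.pi,
                  np.pi*alpha*(y**2 - y)*np.sin(np.pi*x)])
    return B / np.linalg.norm(B)

def hit_outflow(t, X):          # event function: zero when the line reaches {x = 1}
    return X[0] - 1.0
hit_outflow.terminal = True

xi_prime = 0.3                  # parametrizes the starting point h_in(xi') = (0, xi')
sol = solve_ivp(b, (0.0, 10.0), [0.0, xi_prime], events=hit_outflow, rtol=1e-8)
print(sol.t_events[0][0])       # L(xi'): parameter value at which the line exits
print(sol.y[:, -1])             # exit point on the outflow boundary {x = 1}
\end{verbatim}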
Gathering the results on parametrized ODEs, from for instance \cite{ODE}, we
conclude that $X(\xi ^{\prime },\xi _{d})=X(\xi _{1},\ldots ,\xi _{d})$ is a
smooth function of all its $d$ parameters, more precisely $X\in C^{2}(\overline{D}
)$. Evidently, the map $X$ is one-to-one from $\overline{D}$ to $\overline{\Omega}$
and thus $\xi _{1},\ldots ,\xi _{d}$ provide a system of coordinates for $
\overline{\Omega}$. Moreover this system is not degenerate in the sense that the
vectors $\partial X/\partial \xi _{1},\ldots ,\partial X/\partial \xi _{d}$
are linearly independent at each point of $\overline{\Omega}$. Indeed, if this
was not the case, then there would exist a non-trivial linear combination $
\lambda _{1}\partial X/\partial \xi _{1}+\cdots +\lambda _{d}\partial X/\partial
\xi _{d}$ that would vanish at some point in $\overline{\Omega}$. But the ODE (\ref
{odeX}) implies
\begin{equation}gin{equation*}
\frac{\partial }{\partial \xi _{d}}\sum_{i=1}^{d}\lambda _{i}\frac{\partial X
}{\partial \xi _{i}}(\xi ^{\prime },\xi _{d})=\nabla b(X(\xi ^{\prime },\xi
_{d}))\cdot \sum_{i=1}^{d}\lambda _{i}\frac{\partial X}{\partial \xi _{i}}
(\xi ^{\prime },\xi _{d})
\end{equation*}
so that, if the linear combination $\sum_{i=1}^{d}\lambda _{i}\frac{\partial X}{
\partial \xi _{i}}$ vanished at some point of a field line, it would, by
uniqueness of the solution of this linear ODE, vanish on the whole field line,
in particular on the inflow. But this is impossible since $\frac{\partial X}{\partial \xi _{i}
}=\frac{\partial h_{in}}{\partial \xi _{i}}$, $i=1,\ldots ,d-1$, on the
inflow, while $\frac{\partial X}{\partial \xi _{d}}=b$, and the vectors $
\left( \frac{\partial h_{in}}{\partial \xi _{1}},\ldots ,\frac{\partial
h_{in}}{\partial \xi _{d-1}},b\right) $ are linearly independent for all $
(\xi _{1},\ldots ,\xi _{d-1})\in \overline{\Gamma}_{in}$. We thus see that the
Jacobian $J=\det (\partial X_{j}/\partial \xi _{i})$ does not vanish on $
\overline{\Omega}$, so that we can assume that $m<J<M$ everywhere on $\overline{\Omega}$
with some positive constants $m$ and $M$ (assuming that $J$ is positive
does not restrict the generality since, if $J$ is negative in $\Omega $, then
one can replace $\xi _{1}$ by $-\xi _{1}$). Since $X \in C^2(\overline{\Omega})$, we also have $J \in C^1(\overline{\Omega})$.
One also sees easily that the top of $D$ is given by a smooth function $
L(\xi ^{\prime })$. Indeed, $L(\xi ^{\prime })$ is determined for each $\xi
^{\prime }\in \Gamma _{in}$ from the equation $X(\xi ^{\prime },L(\xi
^{\prime }))=h_{out}(\eta )$ with some $\eta =(\eta _{1},\ldots ,\eta
_{d-1})\in \Gamma _{out}$. We know already that this equation is solvable
for $\xi _{d}=L(\xi ^{\prime })$, $\eta _{1},\ldots ,\eta _{d-1}$ for any $
\xi ^{\prime }\in \Gamma _{in}$. To conclude that the solution depends
smoothly on $\xi ^{\prime }$ we can apply the implicit function theorem to
the equation
\begin{equation}gin{equation*}
F(\xi ^{\prime };\xi _{d},\eta _{1},\ldots ,\eta _{d-1})=X(\xi ^{\prime
},\xi _{d})-h_{out}(\eta _{1},\ldots ,\eta _{d-1})=0.
\end{equation*}
Indeed, the $\mathbb{R}^{d}$-valued function $F$ is smooth and the matrix of its
partial derivatives with respect to $\xi _{d},\eta _{1},\ldots ,\eta _{d-1}$
is invertible since $\partial F/\partial \xi _{d}=b$ and $\partial
F/\partial \eta _{i}=-\partial h_{out}/\partial \eta _{i}$ at some point at
the outflow and the vectors $\partial h_{out}/\partial \eta _{i}$ lie in the
tangent plane to $\partial \Omega _{out}$ while $b$ is nowhere in this
plane. We have moreover that $L\in C^{1}(\overline{\Gamma}_{in})$. Indeed, we can
prove that all the derivatives of $L$ are bounded. In order to do it, let us
remark that the differential of $X(\xi ^{\prime },L(\xi ^{\prime }))$
represents a vector in the tangent plane at some point on $\partial \Omega
_{out}$ so that it is perpendicular to the outward normal $n$. We have thus
for any $i=1,\ldots ,d-1$
\begin{equation}gin{equation*}
0=n\cdot \left( \frac{\partial X}{\partial \xi _{i}}(\xi ^{\prime },L(\xi
^{\prime }))+\frac{\partial X}{\partial \xi _{d}}(\xi ^{\prime },L(\xi
^{\prime }))\frac{\partial L}{\partial \xi _{i}}(\xi ^{\prime })\right)
=n\cdot \left( \frac{\partial X}{\partial \xi _{i}}(\xi ^{\prime },L(\xi
^{\prime }))+b(\xi ^{\prime },L(\xi ^{\prime }))\frac{\partial L}{\partial
\xi _{i}}(\xi ^{\prime })\right)
\end{equation*}
so that
\begin{equation}gin{equation*}
\frac{\partial L}{\partial \xi _{i}}(\xi ^{\prime })=-\frac{n\cdot \frac{
\partial X}{\partial \xi _{i}}(\xi ^{\prime },L(\xi ^{\prime }))}{n\cdot
b(\xi ^{\prime },L(\xi ^{\prime }))}
\end{equation*}
and this is bounded since $X$ has bounded partial derivatives and $n\cdot
b\geq \alpha $ by the hypothesis. Note also that $L$ is strictly positive.
\begin{equation}gin{itemize}
\item
We can now establish the decomposition $\mathcal{V}=\mathcal{G}\oplus
^{\perp }\mathcal{A}$. Take any $\phi \in \mathcal{V}$ $\cap C^{1}(\overline{
\Omega})$ and introduce $p\in L^{2}(\Omega )$ by
\begin{equation}gin{equation}
p(x)=p(\xi ^{\prime },\xi _{d})=p(\xi ^{\prime })=\frac{\int_{0}^{L(\xi
^{\prime })}\phi (\xi ^{\prime },t)J(\xi ^{\prime },t)dt}{\int_{0}^{L(\xi
^{\prime })}J(\xi ^{\prime },t)dt}. \label{pdef}
\end{equation}
(From now on we switch back and forth between the Cartesian coordinates $
x=(x_{1},\ldots ,x_{d})$ and the new ones $(\xi ^{\prime },\xi _{d})=(\xi
_{1},\ldots ,\xi _{d})$.) Evidently, $p$ is constant along each field line.
Moreover, $p$ is the $L^{2}$-orthogonal projection of $\phi $ on the space
of such functions. Indeed, if $\psi =\psi (\xi ^{\prime })\in L^{2}(\Omega )
$ is any function constant along each field line, then
\begin{equation}gin{eqnarray*}
\int_{\Omega }p\psi dx &=&\int_{D}p\psi Jd\xi =\int_{\Gamma _{in}}p(\xi
^{\prime })\psi (\xi ^{\prime })\int_{0}^{L(\xi ^{\prime })}J(\xi ^{\prime
},\xi _{d})d\xi _{d}d\xi ^{\prime } \\
&=&\int_{\Gamma _{in}}\int_{0}^{L(\xi ^{\prime })}\phi (\xi ^{\prime },\xi_d)\, \psi
(\xi ^{\prime })\,J(\xi ^{\prime },\xi _{d})d\xi _{d}d\xi ^{\prime
}=\int_{\Omega }\phi \psi dx.
\end{eqnarray*}
Let us prove that $p\in \mathcal{V}$, i.e. that its derivatives are square
integrable. The change of variable $t=L(\xi')s$ yields the function
$$
p(\xi ^{\prime })=\frac{\int_{0}^1 \phi (\xi ^{\prime },L(\xi')s)J(\xi ^{\prime },L(\xi')s)ds}{\int_{0}^1 J(\xi ^{\prime },L(\xi')s)ds}.
$$
Now we have $\partial p/\partial \xi _{d}=0$ and, for the derivatives $
\partial p/\partial \xi _{i}$, $i=1,\ldots ,d-1$, denoting $a=a(\xi ^{\prime
})=(\int_{0}^{1}J(\xi ^{\prime },L(\xi')s)ds)^{-1}$, $\phi = \phi(\xi ^{\prime },L(\xi')s)$ and similarly for $J$, we obtain
\begin{equation}gin{equation}
\begin{equation}gin{array}{lll}
\ds \frac{\partial p}{\partial \xi _{i}}&=&\ds \frac{\partial a}{\partial \xi _{i}}
\int_{0}^{1}\phi Jds+a\int_{0}^{1}\frac{\partial \phi }{\partial \xi _{i}}Jds+a\int_{0}^{1}\frac{\partial \phi }{\partial \xi _{d}}
\, \frac{\partial L }{\partial \xi_i} s \,J\, ds\\[3mm]
&&\ds + a\int_{0}^{1} \phi \, \frac{\partial J }{\partial \xi _{i}} ds+a\int_{0}^{1} \phi \, \frac{\partial J }{\partial \xi _{d}} \,
\frac{\partial L }{\partial \xi_i} s \,ds
\end{array}
\end{equation}
Using all the previous bounds on the functions $L$ and $J$ and skipping the
details of somewhat tedious calculations, we arrive at
\begin{equation}gin{eqnarray*}
\int_{\Omega }\left( \frac{\partial p}{\partial \xi _{i}}\right) ^{2}dx
&=&\int_{\Gamma _{in}}\int_{0}^{L(\xi ^{\prime })}\left( \frac{\partial p}{
\partial \xi _{i}}\right) ^{2}Jd\xi _{d}d\xi ^{\prime } \\
&\leq &C\int_{\Gamma _{in}}\int_{0}^{L(\xi ^{\prime })}\left( \phi
^{2}+\left( \frac{\partial \phi }{\partial \xi _{i}}\right) ^{2}+\left( \frac{\partial \phi }{\partial \xi _{d}}\right) ^{2}\right)
Jd\xi _{d}d\xi ^{\prime }
\end{eqnarray*}
implying
\begin{equation}gin{equation*}
\left\Vert \frac{\partial p}{\partial \xi _{i}}\right\Vert _{L^{2}(\Omega
)}^{2}\leq C\left( \left\Vert \phi \right\Vert _{L^{2}(\Omega
)}^{2}+\left\Vert \frac{\partial \phi }{\partial \xi _{i}}\right\Vert
_{L^{2}(\Omega )}^{2}+\left\Vert \frac{\partial \phi }{\partial \xi _{d}}\right\Vert
_{L^{2}(\Omega )}^{2}\right) \le C\left\Vert \phi \right\Vert
_{H^{1}(\Omega )}^2\,.
\end{equation*}
Thus $p\in H^{1}(\Omega )$, hence $p\in \mathcal{G}$ and
$q=\phi -p\in \mathcal{A}$. Since the dependence of $p$ on $\phi $ is
continuous in the norm of $H^{1}(\Omega )$, a density argument shows that the
decomposition $\phi =p+q$ with $p\in \mathcal{G}$ and $q\in \mathcal{A}$
exists for any $\phi \in \mathcal{V}$.
\item Let us now introduce the operator $P$ as the $L^{2}$-orthogonal projector onto $
\mathcal{G}$, that is,
$$
P: {\cal V} \rightarrow {\cal G}\,, \quad \phi \in {\cal V} \longmapsto P \phi \in {\cal G}\quad \textrm{given by} \quad (\ref{pdef})\,.
$$
Then, the estimates in
the preceding paragraph show that the operator $P$ is continuous in the norm of $
H^{1}(\Omega )$:
\begin{equation}gin{equation}
||\nabla _{\perp }(P\phi )||_{L^{2}(\Omega )}\leq C||\nabla \phi
||_{L^{2}(\Omega )}\,,\quad \forall \phi \in \mathcal{V} \label{reg}
\end{equation}
\item We have also the following Poincar\'{e}-Wirtinger inequality:
\begin{equation}gin{equation}
||\phi -P\phi ||_{L^{2}(\Omega )}\leq C||\nabla _{||}\phi ||_{L^{2}(\Omega
)}\,,\quad \forall \phi \in \mathcal{V}\,. \label{PoinW_bis}
\end{equation}
To prove this, it is sufficient to establish that $||q||_{L^{2}(\Omega
)}\leq C||\nabla _{||}q||_{L^{2}(\Omega )}$ for all $q\in \mathcal{A}$. We
observe that
\begin{equation}gin{equation*}
||q||_{L^{2}(\Omega )}^{2}=\int_{\Gamma _{in}}\int_{0}^{L(\xi ^{\prime
})}q^{2}(\xi ^{\prime },\xi _{d})J(\xi ^{\prime },\xi _{d})d\xi _{d}d\xi
^{\prime }
\end{equation*}
and
\begin{equation}gin{equation*}
||\nabla _{||}\phi ||_{L^{2}(\Omega )}^{2}=\int_{\Gamma
_{in}}\int_{0}^{L(\xi ^{\prime })}\left( \frac{\partial q}{\partial \xi _{d}}
\right) ^{2}(\xi ^{\prime },\xi _{d})J(\xi ^{\prime },\xi _{d})d\xi _{d}d\xi
^{\prime }\,.
\end{equation*}
The requirement $q\in \mathcal{A}$ is equivalent to
\begin{equation}gin{equation}
\int_{0}^{L(\xi ^{\prime })}q(\xi ^{\prime },\xi _{d})J(\xi ^{\prime },\xi
_{d})d\xi _{d}=0 \label{qrest}\quad \textrm{f.a.a.} \,\, \xi ^{\prime }\in \Gamma _{in}\,.
\end{equation}
We have thus to prove for
every $\xi ^{\prime }$
\begin{equation}gin{equation*}
\int_{0}^{L(\xi ^{\prime })}q^{2}(\xi ^{\prime },\xi _{d})J(\xi ^{\prime
},\xi _{d})d\xi _{d}\leq C^{2}\int_{0}^{L(\xi ^{\prime })}\left( \frac{
\partial q}{\partial \xi _{d}}\right) ^{2}(\xi ^{\prime },\xi _{d})J(\xi
^{\prime },\xi _{d})d\xi _{d}
\end{equation*}
provided (\ref{qrest}). Fixing any $\xi ^{\prime },$ making the change of
integration variable $\xi _{d}=L(\xi ^{\prime })t$ and introducing the functions $
u(t)=q(\xi ^{\prime },L(\xi ^{\prime })t)J(\xi ^{\prime },L(\xi ^{\prime })t)
$ and $J(t)=J(\xi ^{\prime },L(\xi ^{\prime })t)$, we rewrite the last inequality
as
\begin{equation}gin{equation}
\int_{0}^{1}\frac{u^{2}(t)}{J(t)}dt\leq \frac{C^{2}}{L^{2}(\xi ^{\prime })}
\int_{0}^{1}\left( \frac{u^{\prime }(t)}{J(t)}-\frac{u(t)}{J^{2}(t)}
J^{\prime }(t)\right) ^{2}J(t)dt\,. \label{uine}
\end{equation}
Since $\int_{0}^{1}u(t)dt=0$ we have by the standard Poincar\'{e} inequality
\begin{equation}gin{equation}
\int_{0}^{1}u^{2}(t)dt\leq C_{P}^{2}\int_{0}^{1}\left( u^{\prime }(t)\right)
^{2}dt\,. \label{uinep}
\end{equation}
\item
Let us turn to the verification of Hypothesis B'. Take any $u\in \mathcal{\tilde{
V}}$. We want to prove that one can decompose $u=p+q$ with $p\in \mathcal{
\tilde{G}}$ and $q\in \mathcal{L}$ and the trace of $u$ on $\partial \Omega
_{in}$ (denoted $g$) is in $L^{2}(\partial \Omega _{in})$. In the $\xi $
-coordinates we can write a surface element of $\partial \Omega _{in}$ as $
d\sigma =S(\xi ^{\prime })d\xi ^{\prime }$ with a function $S$ smoothly
depending on $\xi ^{\prime }$. We see now that for $u$ sufficiently smooth
\begin{equation}gin{eqnarray*}
||g||_{L^{2}(\partial \Omega _{in})}^{2}
&=&\int_{\Gamma _{in}}g^{2}(\xi^{\prime })S(\xi ^{\prime })d\xi ^{\prime }\\
&\leq& C\int_{0}^{1}\int_{\Gamma
_{in}}\left[ u^{2}(\xi ^{\prime },L(\xi ^{\prime })s)+\frac{1}{L(\xi
^{\prime })}\left( \frac{\partial u}{\partial \xi _{d}}\right) ^{2}(\xi
^{\prime },L(\xi ^{\prime })s)\right] S(\xi ^{\prime })d\xi ^{\prime }ds \\
&&\quad \text{(by a one-dimensional trace inequality)} \\
&\leq &C||u||_{\mathcal{\tilde{V}}}^{2}
\end{eqnarray*}
By density, the trace $g$ is thus defined for any $u\in \mathcal{\tilde{V}}$
with $||g||_{L^{2}(\partial \Omega _{in})}\leq C||u||_{\mathcal{\tilde{V}}}$
. Taking $p=p(\xi ^{\prime })=g(\xi ^{\prime })$ we observe by a similar
calculation that $||p||_{L^{2}( \Omega)}\leq C||u||_{\mathcal{\tilde{V}}
}$ so that $p\in \mathcal{\tilde{G}}$. By definition $q=u-p\in \mathcal{L}$.
\end{itemize}
\section{On the choice of the finite element space
$\mathcal{L}_h$}\label{appB}
Let $\Omega$ be the rectangle $(0,L_x)\times(0,L_y)$ and the
anisotropy direction be constant and aligned with the $y$-axis:
$b=(0,1)$. Let us use the $\mathbb Q_k$ finite elements on a Cartesian
grid, i.e. take some basis functions $\theta_{x_i}(x)$,
$i=0,\ldots,N_x$, and $\theta_{y_j}(y)$, $j=0,\ldots,N_y$, and define
the complete finite element space $X_h$ (without any restrictions on
the boundary) as span$\{\theta_{x_i}(x)\theta_{y_j}(y):\ 0\le i\le
N_x,\ 0\le j\le N_y\}$. The following subspace is then used for the
approximation of the unknowns $p,q,l \in {\cal V}$:
$$
\mathcal{V}_h=\{v_h\in X_h/v_h|_{\partial\Omega_{D}}=0\}.
$$
We want to prove that taking for the approximation of $\lambda,\mu \in {\cal L}$ the space $\mathcal{L}_h$ of the form
\begin{equation} \label{L_H_bad}
\mathcal{L}_h=\{\lambda_h\in X_h/\lambda_h|_{\partial\Omega_{in}}=0\}\,,
\end{equation}
leads to an ill-posed problem (\ref{eq:Jt8a}).\\
\noindent\textbf{Claim} There exists $\lambda_h\in\mathcal{L}_h$, $\lambda_h \neq 0$, such that $a_{||}(\lambda_h,p_h)=0$ for all $p_h\in\mathcal{V}_h$. In fact
there are exactly $2N_y$ linearly independent functions having this property.
\begin{equation}gin{rem} In the continuous case, the equation
$$
a_{||}(p,\lambda)=0\,, \quad \forall p \in {\cal V}\,,
$$
implies $\lambda = 0$ by density arguments. These density arguments are lost when the spaces ${\cal V}$ and ${\cal L}$ are discretized.
\end{rem}
\noindent\textbf{Proof of the Claim.} We can suppose that the basis functions $\theta_{ij}(x,y):=\theta_{x_i}(x)\theta_{y_j}(y)$ are
enumerated so that $\theta_{ij}(0,y)=0$ for all $i\ge 1$ and $\theta_{0j}(0,y)\not=0$.
Hence for all $p_h=\sum p_{ij}\theta_{ij}\in\mathcal{V}_h$, the coefficients satisfy $p_{0j}=0$
since the part of the boundary $\{x=0\}$ is in $\partial\Omega_{D}$.
Let $M=(m_{ik})_{0\le i,k \le N_x}$ be the mass matrix in the $x$-direction: $m_{ik}=\int\theta_{x_i}(x)\theta_{x_k}(x)dx$.
This matrix is invertible, hence there is a vector $a\in\mathbb{R}^{N_x+1}$ that solves $Ma=e$ with $e\in\mathbb{R}^{N_x+1}$, $e=(1,0,\ldots,0)^t$.
Take any fixed integer $j$, $1\le j\le N_y$ and define $\lambda_h\in\mathcal{L}_h$ as $\lambda_h=\sum a_i\theta_{ij}$. Then,
for all $p_h=\sum p_{kl}\theta_{kl}\in\mathcal{V}_h$ we have
\begin{equation}gin{align*}
a_{||}(\lambda_h,p_h)
&=\sum_{i,k,l} a_ip_{kl}\int_\Omega\frac{\partial\theta_{ij}}{\partial y}\frac{\partial\theta_{kl}}{\partial y}dxdy\\
&=\sum_{i,k,l} a_ip_{kl} \int_0^{L_x}\theta_{x_i}(x)\theta_{x_k}(x)dx \int_0^{L_y}\theta'_{y_j}(y)\theta'_{y_l}(y)dy\\
&=\sum_{k,l} \delta_{k0}p_{kl} \int_0^{L_y}\theta'_{y_j}(y)\theta'_{y_l}(y)dy = 0.
\end{align*}
As this construction can be carried out for every $j$, $1\le j\le N_y$, with $e=(1,0,\ldots,0)^t$ (associated with the boundary basis index $i=0$), and in the same manner with $e=(0,\ldots,0,1)^t$ (associated with $i=N_x$), there are
$2N_y$ linearly independent functions with the property $a_{||}(\lambda_h,p_h)=0$ for all $p_h\in\mathcal{V}_h$.
We see now that the system (\ref{eq:Jt8a}) with zero right-hand side $f=0$ possesses the non-zero solutions
$(p^\varepsilon_h,\;\lambda^\varepsilon_h,\;q^\varepsilon_h,\;l^\varepsilon_h,\;\mu^\varepsilon_h)
=(0,\lambda^j_h,0,0,0)$, where $\lambda^j_h$ is any of the functions constructed in the preceding paragraph. This means that (\ref{eq:Jt8a})
is ill-posed, i.e. the corresponding matrix is singular.
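The construction in the proof is easy to reproduce numerically. The following sketch (Python with numpy; the grid size is an arbitrary choice of ours) assembles the one-dimensional $\mathbb Q_1$ mass matrix on a uniform grid and computes the vector $a$ with $Ma=e$, whose existence is the only ingredient needed above.
\begin{verbatim}
# Illustration of the proof: assemble the 1D Q1 mass matrix M on a uniform grid
# and solve M a = e with e = (1,0,...,0)^t, so that sum_i a_i (theta_i, theta_k) = delta_k0.
import numpy as np

Nx, Lx = 20, 1.0
dx = Lx / Nx
M = np.zeros((Nx + 1, Nx + 1))
for el in range(Nx):                                   # element-by-element assembly
    M[el:el + 2, el:el + 2] += dx / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])

e = np.zeros(Nx + 1); e[0] = 1.0
a = np.linalg.solve(M, e)                              # M is SPD, hence invertible
print(np.allclose(M @ a, e))                           # True
\end{verbatim}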
{\bf Acknowledgement.} This work has been supported by the Marie Curie Actions of the European
Commission under the contract DEASE (MEST-CT-2005-021122), by the French
'Commissariat \`a l'Energie Atomique (CEA)' under the contracts ELMAG
(CEA-Cesta 4600156289) and GYRO-AP (Euratom-CEA V 3629.001), by the Agence Nationale
de la Recherche (ANR) under contract IODISEE (ANR-09-COSI-007-02), by the
'Fondation Sciences et Technologies pour l'A\'eronautique et l'Espace (STAE)'
under contract PLASMAX (RTRA-STAE/2007/PF/002) and by the Scientific
Council of the Universit\'e Paul Sabatier under contract MOSITER. Support from the
French magnetic fusion programme 'f\'ed\'eration de recherche sur la fusion
par confinement magn\'etique' is also acknowledged. The authors would like
to express their gratitude to G. Gallice and C. Tessieras from CEA-Cesta for bringing
this problem to their attention, and to G. Falchetto, X. Garbet and M. Ottaviani from
CEA-Cadarache for their constant support of this research programme.
\end{document}
\begin{document}
\date{}
\title{Rectangular knot diagrams classification with deep learning}
\begin{abstract}
In this article we discuss applications of neural networks to recognising knots and, in particular, to the unknotting problem. One of the motivations for this study is to understand how neural networks work, using the example of a problem for which rigorous mathematical algorithms are known. We represent knots by rectangular Dynnikov diagrams and apply neural networks to identify the topological type of a given diagram within a given finite family of types. The data presented to the program is generated by applying Dynnikov moves to initial samples. The significance of using these diagrams and moves is that in this context the problem of determining whether a diagram is unknotted is a finite search of a bounded combinatorial space.
\end{abstract}
\section{Introduction}
In this article we discuss the applications of neural networks to recognising knots and, in particular, to the unknotting problem, which appears to be the most interesting application.
We represent knots by rectangular diagrams (see the text of the paper for the definition of these diagrams and for the moves that we use in transforming them) and apply networks to identify a given diagram's class within a given finite family of classes (topological types of knots, including the unknot class) that may be represented by the diagram.
The data we present to our program is self-generated in the sense that we begin with samples of knots and unknots and then apply the Dynnikov moves (see the text of the paper for a definition of these moves) to produce the larger collections that the program uses. We have two types of data. In one case we use internal moves only. In the second case we use external moves as well. In the second case we find that the program has a significantly harder time identifying the data. This provides us with a material difference that can be used for the next stages of the research. The significance of using rectangular diagrams and the Dynnikov moves is that in this context the problem of determining whether a diagram is unknotted is a finite search of a bounded combinatorial space. This finiteness is not available for algorithms using the Reidemeister moves or other formulations of knot theory.
In comparison to other problems to which neural networks are applied, the knot recognition problem has two special features:
\begin{itemize}
\item
there is no natural data set and, in particular, the efficiency of networks is evaluated on generated data sets and strongly depends on the generation methods;
\item
there are several mathematical algorithms for solving the unknotting problem (see Section 2). Comparing them with the applied neural networks may help in understanding how these networks function.
\end{itemize}
\section{Algorithms in three-dimensional topology}
The most famous recognition problem of three-dimensional topology which has been solved algorithmically is the unknotting problem for knots. It has a long history.
\begin{itemize}
\item
The first solution to the unknotting problem was found by Haken \cite{Haken}
who introduced for that the theory of normal surfaces in three-manifolds.
\end{itemize}
Although his algorithm was quite inefficient, this approach led to a special study of Haken, or sufficiently large, three-manifolds and revolutionized three-dimensional topology.
The idea consists in finding an embedded two-dimensional disk which is bounded by the knot. By the Papakyriakopoulos theorem, such a disk exists if and only if the given knot is the unknot \cite{Pap}.
\begin{itemize}
\item
The algorithm by Birman and Hirsch used so-called braid foliations and also consists in looking for a disk bounded by a knot \cite{BH}. Their considerations are based on
normal surface theory;
\item
A different approach was proposed by Dynnikov who proved that
any presentation of the unknot by a rectangular diagram admits a monotonic simplification by elementary moves \cite{Dynnikov}.
\end{itemize}
The work by Dynnikov was preceded by an earlier approach of his based on Gaussian codes of knots, which led to a partial algorithm that works very fast for some quite complicated representations but is not effective for all representations of the unknot. This
partial algorithm was implemented on computers \cite{ADP}, and this software was used for experiments
with different representations.
In our research we also use rectangular diagrams.
Two other solutions of the unknotting problem are based on algebraic invariants of low-dimensional manifolds:
\begin{itemize}
\item
Knot Floer homology contains information on the Seifert genus of a knot, which is the minimal genus of embedded surfaces bounded by the knot. By the Papakyriakopoulos theorem, the Seifert genus vanishes exactly for the unknot. The combinatorial representation of knot Floer homology gives an algorithm for computing the Seifert genus, and hence for the recognition of the unknot \cite{MOS};
\item
Kronheimer and Mrowka proved that the property that the reduced Khovanov homology of a knot has rank one distinguishes the unknot \cite{KM}.
\end{itemize}
Speaking of algorithms, let us also mention the computational complexity of this recognition problem.
By using normal surface theory, Hass, Lagarias, and Pippenger showed that unknottedness
is in the complexity class NP \cite{HLP}.
On the other hand, it was proved by Kuperberg that, assuming the generalized Riemann hypothesis,
knottedness is also in NP, which implies that unknottedness is in co-NP \cite{Kuperberg}.
A fundamental problem of knot theory reads
\vskip2mm
{\sc Is the Jones polynomial of a knot $K$ equal to one, $J(K)=1$, if and only if $K$ is the unknot? }
\vskip2mm
A positive answer to this question would give another solution of the unknottedness problem.
For general knots and links the algorithmic recognition problem was solved positively by Matveev \cite{Matveev},
who actually established this result for the more general class of Haken manifolds.
One of the traditional approaches for comparing general links consists in comparing their planar diagrams. It is known that two planar diagrams define the same link if and only if they can be related by a sequence of so-called Reidemeister moves.
A few years ago
\begin{itemize}
\item
Coward and Lackenby proved that there exists a superexponential function $f(n_1,n_2)$ such that for any two planar diagrams of a link with $n_1$ and $n_2$ crossings there are at most $f(n_1,n_2)$ Reidemeister moves which take one diagram into another \cite{CL};
\item
Lackenby proved that a planar diagram of the unknot with $n$ crossings can be converted into the diagram with no crossings by at most $(236n)^{11}$ Reidemeister moves \cite{L}.
\end{itemize}
We remark that the theorem by Coward and Lackenby gives a new solution of the algorithmic recognition problem for links.
For a recent survey of algorithmic problems of three-dimensional topology we refer to \cite{L2020}.
\section{Rectangular diagrams}
\subsection{Definition} \label{rect_definition}
To apply computer science methods to mathematical objects, one must find a computer-compatible representation for them. In our case, we use rectangular diagrams \cite{Dynnikov}. Basically, any knot diagram can be transformed in such a way that all edges are either vertical or horizontal. Less obviously, it can be further transformed so that all vertical edges pass over the horizontal ones at crossings. Finally, we place all vertices on a grid in such a way that each horizontal or vertical line of the grid contains exactly one edge. An example of a rectangular diagram is depicted in Figure [\ref{fig:init_diag}].
\begin{figure}
\caption{An example of the knot with corresponding rectangular diagram}
\label{fig:init_diag}
\end{figure}
At this point we are ready to represent the diagram numerically. Namely, we label each vertex with
$X$ or $O$, alternating the labels, and then encode each horizontal edge by the coordinates of its $X$-end and $O$-end. Since each vertical line contains only one edge, the diagram is represented by two permutations: the $X$ permutation lists the columns of the $X$-ends as one goes through the rows from top to bottom, and the $O$ permutation lists the columns of the $O$-ends in the same order. We write the two permutations as a matrix of two rows, with the $X$ permutation on top of the $O$ permutation; this matrix is the numerical representation of the diagram. As an example, the diagram from Fig.~\ref{fig:init_diag} corresponds to the following representation:
$$
\begin{pmatrix}
3 & 8 & 9 & 1 & 6 & 2 & 4 & 5 & 7 \\
1 & 2 & 6 & 5 & 3 & 4 & 7 & 8 & 9
\end{pmatrix},
$$
where the top row corresponds to the $X$-vertices and the bottom row to the $O$ ones.
The \textbf{complexity} of the rectangular diagram is defined as the number of horizontal edges.
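As an illustration, the example above can be stored directly as the pair of permutations; the following minimal sketch (Python; the helper names are ours and not taken from any code accompanying this paper) also checks that the two rows are indeed permutations and returns the complexity.
\begin{verbatim}
# The diagram of Fig. 1 as a pair of permutations, one horizontal edge per row.
x_perm = [3, 8, 9, 1, 6, 2, 4, 5, 7]   # columns of the X-ends, rows read top to bottom
o_perm = [1, 2, 6, 5, 3, 4, 7, 8, 9]   # columns of the O-ends, same order

def complexity(x_perm, o_perm):
    """Number of horizontal edges of the rectangular diagram."""
    n = len(x_perm)
    assert sorted(x_perm) == sorted(o_perm) == list(range(1, n + 1))
    return n

print(complexity(x_perm, o_perm))       # 9 for the example above
\end{verbatim}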
\subsection{Elementary moves}
There are three kinds of class-preserving moves of rectangular diagrams, which are called Dynnikov moves:
\begin{itemize}
\item Neighboring edges can be switched if they are not interleaving. An example is depicted in Fig.[\ref{fig:move1_diag}]. A diagram which is one move away from the initial one is depicted in Fig.[\ref{fig:inter_switch_diag}]. Note that the switched edges share an endpoint and are nevertheless not interleaving.
\item The top and the bottom (respectively, the leftmost and the rightmost) edges can be switched under the same assumptions; this is called an external switch, or translation move. An example can be seen in Fig.[\ref{fig:move2_diag}].
\item Finally, corners can be eliminated (or created) in the way depicted in Fig.[\ref{fig:move_stab}]. This move is called (de)stabilization.
\end{itemize}
\begin{figure}
\caption{An example of the internal switch move.}
\label{fig:move1_diag}
\end{figure}
\begin{figure}
\caption{A diagram obtained from the initial one with the second Dynnikov move.}
\label{fig:move2_diag}
\end{figure}
\begin{figure}
\caption{An example of the stabilization move}
\label{fig:move_stab}
\end{figure}
Performing these moves in terms of the permutation coding is described in Appendix [\ref{section: perm}].
This set of moves is worth considering because it is proved in \cite{Dynnikov} that any unknot diagram can be monotonically (in terms of complexity) simplified to a diagram with no crossings. This means that determining whether a diagram is unknotted is a finite search with respect to this notion of complexity.
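To make the first move concrete, here is one possible encoding of the internal switch in the permutation representation (a sketch of ours; the authoritative permutation formulas for all moves are those of Appendix [\ref{section: perm}]). Two neighbouring horizontal edges are switched by swapping the corresponding entries in both permutations, provided their column intervals are nested or have disjoint interiors, i.e. are not interleaving.
\begin{verbatim}
# One possible encoding (ours) of the internal switch of neighbouring rows i and i+1.
def non_interleaving(e1, e2):
    """True if the column intervals of two horizontal edges are nested
    or have disjoint interiors (sharing an endpoint is allowed)."""
    a1, b1 = sorted(e1)
    a2, b2 = sorted(e2)
    nested = (a1 <= a2 and b2 <= b1) or (a2 <= a1 and b1 <= b2)
    disjoint = b1 <= a2 or b2 <= a1
    return nested or disjoint

def internal_switch(x_perm, o_perm, i):
    """Swap rows i and i+1 of the diagram if the move is allowed, else return None."""
    if not non_interleaving((x_perm[i], o_perm[i]), (x_perm[i+1], o_perm[i+1])):
        return None
    x, o = list(x_perm), list(o_perm)
    x[i], x[i+1] = x[i+1], x[i]
    o[i], o[i+1] = o[i+1], o[i]
    return x, o
\end{verbatim}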
\section{Application of deep learning to the unknottedness problem}
While the mathematical community has mostly focused on algorithms that simplify a given knot diagram, we take a different path.
Our program answers the following question (which extends the unknottedness problem): to which knot class, among the 36 classes with at most 8 crossings in the minimal diagram, does a given rectangular diagram belong? We answer this question by considering the diagram as-is, without attempting to simplify it.
We use a collection of knot types from the tables of knots up to eight crossings. The data we present to our program is self-generated in the sense that we begin with samples of knots and unknots and then apply the Dynnikov moves to these to produce the larger collections that the program uses. We have two types of data. In one case we use internal moves only. In the second case we use external moves as well. In the second case we find that the program has a significantly harder time identifying the data. This provides us with a material difference that can be used for the next stages of the research.
In terms of deep learning, our program performs a multiclass classification task. A deep learning approach was used to address this task in \cite{Vand}, though with a different knot parametrization: a knot was represented as a polymer conformation, i.e. as consecutively attached unit-length rods. Their sampling procedure is also different: they always start with an unknot (a random circular conformation) and evolve it to the final configuration, possibly changing its class; the class is derived afterwards using knot invariants. In contrast, we use label-preserving perturbations of the diagram, starting from the diagram with the minimal number of crossings. This lets us consider knots of arbitrary complexity and class, since we do not need any algorithm to assign labels to the training set.
\begin{tabular}{ |p{3cm}|p{3cm}|p{3cm}| }
\hline
\multicolumn{3}{|c|}{Comparison of our approach with \cite{Vand}} \\
\hline
Property & Vandans et al.\ \cite{Vand} & Our approach\\
\hline
Number of classes & 5 & 36 \\
Architecture & BiLSTM & BiLSTM \\
Labeling & Alexander polynomial & Label of the initial diagram \\
\hline
\end{tabular}
\section{Deep learning foundations}
\subsection{Network architecture} \label{architecture}
We consider a knot as the sequence of coordinates of its horizontal edges in the rectangular diagram (see Section \ref{rect_definition}). Discovering neural network architectures which best fit the sequential nature of the data is a long-standing topic in deep learning research. We chose the classic LSTM layer \cite{LSTM}, in its bidirectional variation \cite{BiRNN}, to address it. The layer is composed of a \textbf{cell} $c$ (the memory part of the LSTM unit) and three ``regulators'', usually called \textbf{gates}, of the flow of information inside the LSTM unit: an input gate $i$, an output gate $o$ and a forget gate $f$. All of $c, h, i, f$ and $o$ are vectors of the same representation dimension $d$, which we have set to 1024. Intuitively, the cell is responsible for keeping track of the dependencies between the elements in the input sequence. The input gate controls the extent to which a new value flows into the cell, the forget gate controls the extent to which a value remains in the cell, and the output gate controls the extent to which the value in the cell is used to compute the output activation of the LSTM unit. To sum up, the LSTM layer consumes an input sequence step by step (one step for each vertex $x_t \in \mathbf{R}^2$), updating $h$ and $c$ via the following set of equations:
$$
f_t = \sigma(W_f x_t + U_f h_{t-1} +b_f),
$$
$$
i_t = \sigma(W_i x_t + U_i h_{t-1} +b_i),
$$
$$
o_t = \sigma(W_o x_t + U_o h_{t-1} +b_o),
$$
$$
c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c),
$$
$$
h_t = o_t \odot \tanh(c_t).
$$
Matrices $W_f$, $W_i$, $W_o$, $W_c$ are called \textbf{weights}, vectors $b_f$, $b_i$, $b_o$ and $b_c$ are called biases, and their elements are learnable parameters which are adjusted during the training of the neural network.
The notation $\odot$ stands for elementwise multiplication and $\sigma$ denotes the sigmoid activation function
$$\sigma (z) = \dfrac{1}{1+\exp{(-z)}},$$
which is also applied elementwise and squeezes each coordinate of a vector into $(0,1)$.
In order to obtain a fixed-size representation from the LSTM layer outputs, we perform an operation which is called \textbf{global average pooling}:
$$
\Tilde{h} = \dfrac{1}{N}\sum_{t=1}^{N} h_t
$$
Then this fixed-size representation is passed to the \textbf{fully connected layer}, which mathematically is just a matrix multiplication:
$$
z = W_{fc}\Tilde{h},
$$
where $W_{fc}$ has size $(36, 2048)$: there are 36 classes of knots with crossing number up to 8, we set the dimension of the LSTM hidden representation to 1024, and the bidirectional modification doubles this dimension by concatenating the representations of the two directions. Finally, we apply to $z$ a transformation which is called the \textbf{softmax function}:
$$
\widetilde{y}_i = \dfrac{e^{z_i}}{\sum_{k=1}^{36}e^{z_k}}.
$$
Each $\widetilde{y}_i$ corresponds to the model's confidence that the sample belongs to class $i$.
It is easy to see that after the softmax is applied, the neural network outputs become non-negative and sum to one. This makes the model's assumptions about the sample class mutually exclusive.
So, overall, we have defined a function $f(x, \theta)$, where $\theta$ is the set of neural network parameters, whose domain is the set of rectangular representation matrices (of any complexity) and whose values lie in the simplex of probability vectors in $\mathbf{R}^{36}$.
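The architecture described in this subsection can be written down in a few lines; the following sketch (PyTorch; the layer sizes follow the text, while all module and variable names are ours and not taken from the code used for the experiments) reproduces the bidirectional LSTM, the global average pooling, the fully connected layer and the softmax.
\begin{verbatim}
# A minimal PyTorch sketch of the classifier described above.
import torch
import torch.nn as nn

class KnotClassifier(nn.Module):
    def __init__(self, hidden_dim=1024, num_classes=36):
        super().__init__()
        # each step of the sequence is one horizontal edge: (X-column, O-column)
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_dim,
                            batch_first=True, bidirectional=True)
        # bidirectionality doubles the representation size: 2 * 1024 = 2048
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x):               # x: (batch, n_edges, 2)
        h, _ = self.lstm(x)             # h: (batch, n_edges, 2 * hidden_dim)
        h_avg = h.mean(dim=1)           # global average pooling over the sequence
        z = self.fc(h_avg)              # logits, shape (batch, 36)
        return torch.softmax(z, dim=1)  # class probabilities

# usage: a batch of 8 diagrams of complexity 30 (placeholder input values)
model = KnotClassifier()
probs = model(torch.rand(8, 30, 2))     # (8, 36), rows sum to 1
\end{verbatim}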
\subsection{Training}
Essentially, training a neural network means adjusting its parameters (in our case $W_f$, $W_i$, $W_o$, $W_c$, $b_f$, $b_i$, $b_o$ and $b_c$, along with their counterparts for the other direction, and $W_{fc}$) based on empirical data. A common procedure for this adjustment, which we used, is called \textbf{stochastic gradient descent}. In this subsection we briefly describe this procedure. Namely, a set of training examples along with their known labels (this set is often called a \textbf{batch}, or \textbf{mini-batch})
$$\{(x_j, y_j),\ j = 1,\ldots,N\}$$
is randomly sampled from a fixed training set or, as in our case, obtained via a stochastic generating procedure. In this context, $N$ is called the \textbf{batch size}. Then, for each training sample in the batch we evaluate the neural network output (see Subsection \ref{architecture})
$$
\widetilde{y}_j = f(x_j, \theta).
$$
After that, we evaluate a \textbf{loss function}, which in our case is \textbf{categorical cross entropy}
$$
L = - \sum_{j=1}^{N}\sum_{i=1}^{36} I(y_j,i) \log(\widetilde{y}_{ij}),
$$
where $I(y_j,i)$ is an indicator function which equals one if the sample with index $j$ belongs to class $i$ and zero otherwise, and $\widetilde{y}_{ij}$ is the corresponding coordinate of the neural network output for that sample. One can see that this function is lower when the neural network predictions are closer to the correct ones, and it equals zero when the correct classes are predicted with the highest possible confidence.
In order to adjust the neural network parameters, a \textbf{gradient step}
$$
\theta \longrightarrow \theta - \lambda \nabla_{\theta}L,
$$
is performed, where $\lambda \in \mathbf{R}^{+}$ is a \textbf{learning rate}.
The gradient of the loss function with respect to the network parameters is evaluated via the backpropagation algorithm.
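A single adjustment step, as described above, would look as follows (a PyTorch sketch of ours with random placeholder data; \texttt{KnotClassifier} is the model sketch from the previous subsection, and \texttt{nn.CrossEntropyLoss} expects raw logits, so they are recomputed here without the final softmax rather than taking the logarithm of the probabilities).
\begin{verbatim}
# One training step matching the formulas above (the experiments of Section 6 use Adam).
import torch
import torch.nn as nn

model = KnotClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)   # gradient step with rate lambda
loss_fn = nn.CrossEntropyLoss()                            # categorical cross entropy (batch mean)

x_batch = torch.rand(256, 28, 2)              # placeholder diagrams (2048 in the experiments)
y_batch = torch.randint(0, 36, (256,))        # class labels known from the generation

logits = model.fc(model.lstm(x_batch)[0].mean(dim=1))  # forward pass up to the logits z
loss = loss_fn(logits, y_batch)                        # cross entropy L
optimizer.zero_grad()
loss.backward()                                        # backpropagation: grad of L w.r.t. theta
optimizer.step()                                       # theta <- theta - lambda * grad L
\end{verbatim}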
\section{Experiments}
\subsection{Data}
The nature of our task provides a vast amount of training data, although obtaining sufficiently many diagrams with a known class is non-trivial. We prepare each batch of training data for our model in the following way: uniformly sample with repetition a set of 2048 (our batch size) 'classic' knot diagrams, then perform randomly chosen stabilization moves until each diagram's complexity reaches a value uniformly sampled between 25 and 30 for that batch. Then 1000 commutation moves (including external ones) are performed. The validation set consists of 10000 diagrams of complexity 35 prepared with the same protocol and kept unchanged.
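Schematically, the batch-generation protocol reads as follows (a sketch of ours; \texttt{sample\_table\_diagram}, \texttt{random\_stabilization} and \texttt{random\_commutation} are hypothetical helpers standing for the table of 'classic' diagrams and for the Dynnikov moves in permutation coding).
\begin{verbatim}
# Schematic batch generation (hypothetical helpers stand in for the actual moves).
import random

def generate_batch(batch_size=2048, min_c=25, max_c=30, n_commutations=1000,
                   external=True):
    target = random.randint(min_c, max_c)           # one target complexity per batch
    batch = []
    for _ in range(batch_size):
        diagram, label = sample_table_diagram()     # one of the 36 'classic' diagrams
        while len(diagram[0]) < target:             # complexity = number of rows
            diagram = random_stabilization(diagram)
        for _ in range(n_commutations):
            diagram = random_commutation(diagram, external=external)
        batch.append((diagram, label))
    return batch
\end{verbatim}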
\subsection{Importance of the external switch}
It turns out that the set of moves used in the generation of both training and test diagrams has a crucial impact on the neural network performance. We have conducted two experiments: in the first, the diagrams were perturbed using internal edge switching moves only; in the second, we used the external switches as well. Below we call these the internal and the external case.
\begin{figure}
\caption{Learning curves for internal and external case}
\label{fig:learning_curves_int_ext}
\end{figure}
From Fig.\ref{fig:learning_curves_int_ext} one can easily see that incorporating the external switch into the move set brings the complexity of the task to a new level: with exactly the same network architecture the validation accuracy approaches a much lower value.
\subsection{Training and hyperparameters}
We trained our model for 300000 steps with the Adam optimizer with learning rate 0.0001. The batch size was set to 2048. Since we do not have a fixed training set, we consider 100 gradient steps as an epoch. In Fig.\ref{fig:learning_curves_int_ext} only the first 100 epochs are shown. Full learning curves may be found in Section \ref{section: ext_results}.
\section{Results}
Our model managed to achieve significantly better accuracy than the $100\%\cdot(1/36) = 2.78\%$ random guessing baseline, which is the accuracy expected from a dummy algorithm that assigns knot classes randomly.
Speaking of the unknots in the validation set, precision and recall reached 0.84 and 0.95 respectively. The full classification report, along with the definitions of precision and recall, may be found in Section \ref{section: ext_results}. One may also find the confusion matrix for the validation set interesting. The $(i,j)$ element of this matrix shows how many times the model classified a sample belonging to class $i$ as a sample of class $j$. Diagonal elements correspond to correct predictions, so in the ideal case this matrix is diagonal. In order to enhance its informativeness, the values were binarized: elements greater than 40 are shown as white pixels, everything else is black. One can see that, besides mistakes we consider sporadic, the model tends to confuse the knot $5_1$ with $3_1$ and $7_1$.
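The reported quantities are standard; given arrays of true validation labels and model predictions, they can be obtained for instance with scikit-learn, as in the following sketch (ours, with random placeholder arrays, and assuming that class index 0 stands for the unknot).
\begin{verbatim}
# Confusion matrix, unknot precision and unknot recall (placeholder data).
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 36, size=10000)          # placeholder validation labels
y_pred = rng.integers(0, 36, size=10000)          # placeholder model predictions

cm = confusion_matrix(y_true, y_pred, labels=list(range(36)))
binarized = (cm > 40).astype(int)                 # binarisation used for the plot
unknot = 0                                        # assumed index of the unknot class
prec = precision_score(y_true, y_pred, labels=[unknot], average=None)[0]
rec = recall_score(y_true, y_pred, labels=[unknot], average=None)[0]
print(prec, rec)
\end{verbatim}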
\subsection{Dependence of accuracy on the number of complication steps}
We obtain samples by complicating diagrams from a finite set of 'classic' diagrams of knots with a minimal number of intersections. If we do not complicate them enough, say, train on the initial diagrams only, a large model would simply memorize them. If we complicate them too much, the task becomes too hard for the model. To illustrate the dependence of prediction accuracy on the number of complication steps, we conduct the following experiment: we take several test sets of different complexity, perform 500 complication steps, and then keep complicating the diagrams, measuring accuracy every 100 steps.
\begin{figure}
\caption{Accuracy with respect to the number of moves on the test set (external case)}
\label{fig:acc_ent_external}
\end{figure}
Complexity levels of 15, 70 and 80 turn out to be the hardest cases for the model, which makes sense since these diagrams are the farthest from the training set. Complexity levels close to the training ones (between 25 and 30) show the highest test-set accuracy. Moreover, accuracy decreases for small diagrams, whereas for large ones it slightly increases for some time and then also decreases with further complication. One may look at Fig.~\ref{fig:heatmap_external} to explore the dependence of accuracy on two factors: diagram complexity and the number of entangling steps.
\subsection{Test-time augmentation}
Since we have easy ways to perturb a diagram while preserving its class, we find test-time augmentation a plausible way to increase the accuracy of predicting diagram classes.
To illustrate this, we conduct the following experiment: we prepare a test set of 3600 diagrams of complexity 35 and then slowly (1 move at a time) perturb it, compute the model output on the changed diagrams (denote the predictions at step $j$ by $y_j$), and average the predictions:
$$
\hat{y_j} = \dfrac{1}{j}\sum_{k=1}^{j}y_k
$$
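In code, this running average can be sketched as follows; \texttt{model} and \texttt{perturb} are hypothetical stand-ins for our classifier and a single class-preserving move.
\begin{verbatim}
import numpy as np

def tta_accuracy(model, perturb, diagrams, labels, n_steps):
    """Accuracy of the averaged prediction hat{y}_j after j perturbation steps."""
    running_sum = None
    accuracies = []
    for j in range(1, n_steps + 1):
        diagrams = [perturb(d) for d in diagrams]   # one move per step
        y_j = model.predict_proba(diagrams)         # class probabilities, shape (N, 36)
        running_sum = y_j if running_sum is None else running_sum + y_j
        y_hat = running_sum / j                     # hat{y}_j = (1/j) * sum_{k<=j} y_k
        accuracies.append(np.mean(np.argmax(y_hat, axis=1) == np.asarray(labels)))
    return accuracies
\end{verbatim}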
The following plot shows that prediction accuracy benefits from adding more prediction steps even though the accuracy of the individual predictions may degrade.
\begin{figure}
\caption{Result of the test-time augmentation (external case).}
\label{fig:tta_external}
\end{figure}
We believe that the same technique can be applied to the approach of \cite{Vand} to enhance prediction accuracy: in their parametrization there are label-preserving perturbations of the knot, such as rigid rotations of the whole polymer, that could be used for test-time augmentation.
\section{Discussion}
We have collected some evidence of the efficacy of deep learning-based models for predicting invariants of topological objects. There is much room for improvement left in terms of the choice of neural network architecture and hyperparameters, diagram parametrization, and so on. In this paper we chose a fairly simple neural network in order to make the whole process transparent to readers with a mathematical background. Our model did not show any signs of overfitting and thus might be trained longer with the expectation of better performance. What exactly makes a knot diagram hard to classify correctly is still to be understood. There are infinitely many knot diagrams, as well as classes, and designing the training and test sets, and indeed the very definition of generalization in such a setting, is a fundamental question. Defining the arc diagram complexity as the number of vertical edges does not always agree well with the experimental data: smaller diagrams are sometimes harder for the neural network to classify.
Another deep learning approach seems promising: simplifying rectangular knot diagrams by performing moves chosen by a neural network.
\section{Appendix A}
\label{section: perm}
This appendix contains a description of Dynnikov moves in terms of the permutation representation.
\subsection{Transition to the dual diagram}
In principle, any Dynnikov move can be performed on both vertical and horizontal edges. We therefore need a way to switch between the permutation representations in terms of vertical and horizontal edges. Consider a permutation representation in terms of horizontal edges:
$$
\begin{pmatrix}
3 & 8 & 9 & 1 & 6 & 2 & 4 & 5 & 7 \\
1 & 2 & 6 & 5 & 3 & 4 & 7 & 8 & 9
\end{pmatrix}.
$$
It contains the horizontal coordinates of consecutive edges. Let us add the corresponding vertical coordinates of both vertices of each edge as additional rows (the second and the fourth):
$$
\begin{pmatrix}
3 & 8 & 9 & 1 & 6 & 2 & 4 & 5 & 7 \\
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
1 & 2 & 6 & 5 & 3 & 4 & 7 & 8 & 9 \\
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9
\end{pmatrix}.
$$
The first two rows contain the coordinates of the $X$ vertices, and the last two those of the $O$ vertices. Now let us sort the $X$ and $O$ vertices (separately) according to their horizontal coordinates:
$$
\begin{pmatrix}
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
4 & 6 & 1 & 7 & 8 & 5 & 9 & 2 & 3\\
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
1 & 2 & 5 & 6 & 4 & 3 & 7 & 8 & 9
\end{pmatrix}.
$$
Now each pair of $X$ and $O$ vertices belonging to the same column of the obtained matrix has the same horizontal coordinate, which, by the definition of a rectangular diagram, means they belong to the same vertical edge. So, in order to obtain the permutation representation we just discard the rows with horizontal coordinates:
$$
\begin{pmatrix}
4 & 6 & 1 & 7 & 8 & 5 & 9 & 2 & 3\\
1 & 2 & 5 & 6 & 4 & 3 & 7 & 8 & 9
\end{pmatrix}.
$$
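The same sorting procedure can be written compactly in code; a minimal sketch in plain Python (function name hypothetical) reads:
\begin{verbatim}
def to_dual(x_row, o_row):
    """Pass from the horizontal-edge representation (x_row, o_row) to the
    vertical-edge one: sort the X and O vertices by horizontal coordinate
    and keep their vertical coordinates (the row index, 1-based)."""
    n = len(x_row)
    x_sorted = sorted(range(n), key=lambda i: x_row[i])
    o_sorted = sorted(range(n), key=lambda i: o_row[i])
    return [i + 1 for i in x_sorted], [i + 1 for i in o_sorted]

# Example from above:
# to_dual([3, 8, 9, 1, 6, 2, 4, 5, 7], [1, 2, 6, 5, 3, 4, 7, 8, 9])
# -> ([4, 6, 1, 7, 8, 5, 9, 2, 3], [1, 2, 5, 6, 4, 3, 7, 8, 9])
\end{verbatim}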
\subsection{Switch moves}
Switch moves are easily interpreted in terms of the permutation notation. Recall our diagram from Fig.~\ref{fig:init_diag} and its representation
$$
\begin{pmatrix}
3 & 8 & 9 & 1 & 6 & 2 & 4 & 5 & 7 \\
1 & 2 & 6 & 5 & 3 & 4 & 7 & 8 & 9
\end{pmatrix}.
$$
Let's consider the diagram which is one move away from the initial one:
\begin{figure}
\caption{An example of a diagram one internal switch move away from Fig.~\ref{fig:init_diag}}
\label{fig:inter_switch_diag}
\end{figure}
Its permutation representation is
$$
\begin{pmatrix}
3 & 8 & 9 & 1 & 6 & 4 & 2 & 5 & 7 \\
1 & 2 & 6 & 5 & 3 & 7 & 4 & 8 & 9
\end{pmatrix}.
$$
Since a switch of horizontal edges affects only their order, not the coordinates of their endpoints, it amounts to reordering the columns of the corresponding permutation matrix.
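In code, an internal switch therefore amounts to swapping two adjacent columns; a minimal sketch (function name hypothetical, admissibility of the move not checked) reads:
\begin{verbatim}
def switch_horizontal(x_row, o_row, i):
    """Swap the i-th and (i+1)-th horizontal edges (0-based), i.e. two
    adjacent columns of the permutation matrix."""
    x_row, o_row = list(x_row), list(o_row)
    x_row[i], x_row[i + 1] = x_row[i + 1], x_row[i]
    o_row[i], o_row[i + 1] = o_row[i + 1], o_row[i]
    return x_row, o_row

# Example matching the two representations above (columns 6 and 7, i.e. i = 5):
# switch_horizontal([3, 8, 9, 1, 6, 2, 4, 5, 7], [1, 2, 6, 5, 3, 4, 7, 8, 9], 5)
# -> ([3, 8, 9, 1, 6, 4, 2, 5, 7], [1, 2, 6, 5, 3, 7, 4, 8, 9])
\end{verbatim}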
\section{Appendix B}
\label{section: diag_examples}
This appendix contains rectangular diagrams used for the neural network training.
\begin{figure}
\caption{Unknot diagram of complexity 25, 1000 moves}
\end{figure}
\begin{figure}
\caption{Knot diagram of complexity 25, 1000 moves}
\end{figure}
\section{Appendix C}
This appendix contains illustrations of the results of the neural network training and evaluation.
\label{section: ext_results}
\begin{figure}
\caption{Learning curve (accuracy) }
\label{fig:learning_curve_acc}
\end{figure}
\begin{figure}
\caption{Learning curve (loss) }
\label{fig:learning_curve_loss}
\end{figure}
\begin{figure}
\caption{Classification report for all considered knot classes}
\end{figure}
Let us explain the meaning of each column in the classification report. For each class, precision and recall are evaluated with the following formulas:
$$
\mathrm{precision} = \dfrac{TP}{TP+FP},
$$
$$
\mathrm{recall} = \dfrac{TP}{TP+FN},
$$
where $TP$ is the number of samples in the evaluation set correctly assigned to the class (the model assigned the sample its true class), $FP$ is the number of samples of other classes which were assigned by the model to the considered class, and $FN$ is the number of samples of the considered class assigned to other classes. In other words, for each class, precision is the fraction of correct predictions among the samples that were assigned to this class by the model, and recall is the fraction of correct predictions among the samples that truly belong to this class.
Then, the F1-score is the harmonic mean of precision and recall
$$
\text{F1-score} = \dfrac{2\cdot\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision}+\mathrm{recall}}
$$
and is intended to provide a single number per class measuring how well that class is handled by the model. The support is simply the number of samples in the validation set which belong to each class. The accuracy is the fraction of correctly classified samples.
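All of these quantities can be read off the confusion matrix directly; a minimal NumPy sketch (names hypothetical) reads:
\begin{verbatim}
import numpy as np

def classification_report(conf):
    """Per-class precision, recall, F1 and support from a confusion matrix
    conf, where conf[i, j] counts samples of true class i predicted as j."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp        # other classes predicted as this class
    fn = conf.sum(axis=1) - tp        # this class predicted as other classes
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    support = conf.sum(axis=1)
    accuracy = tp.sum() / conf.sum()
    return precision, recall, f1, support, accuracy
\end{verbatim}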
\begin{figure}
\caption{Confusion matrix for the test set}
\end{figure}
\begin{figure}
\caption{Binarized confusion matrix (the threshold is 40)}
\end{figure}
\begin{figure}
\caption{Accuracy heatmap (external case)}
\label{fig:heatmap_external}
\end{figure}
\end{document}
\begin{document}
\title{NP-Hardness of a 2D, a 2.5D, and a 3D Puzzle Game}
\author{Matthew Ferland, Vikram Kher}
\maketitle
\begin{abstract}
In this paper, we give simple NP-hardness reductions for three popular video games. The first is Baba Is You, an award-winning 2D block puzzle game whose key premise is the ability to rewrite the rules of the game. The second is Fez, a puzzle platformer whose main draw is the ability to swap between four different 2-dimensional views of the player's position. The third is Catherine, a 3-dimensional puzzle game where the player must climb a tower of rearrangeable blocks.
\end{abstract}
\section{Introduction}
Games have been a major component of computer science research nearly since the inception of the field. Early on, research focused entirely on algorithmic problems, notably chess \cite{shannon1950xxii}. In the 1970s, this trend shifted with the emergence of computational complexity research, and researchers began demonstrating the intractability of games which were typically combinatorial in nature \cite{schaefer1978complexity}. Despite video games (that would later be shown to be intractable) becoming popular in the 1980s, it has only been within the past 20 years that relevant intractability results have appeared. A few examples include the NP-hardness of generalized versions of Tetris \cite{demaine2003tetris}, the intractability of a handful of NES and SNES games \cite{aloupis2015classic}, and the undecidability of a PC puzzle game \cite{demaine2020recursed}.
We will add to this recent trend by presenting three new hardness results for some recent block-based puzzle games, where we believe each proof is simple enough to be shown in an undergraduate algorithms course. For one of these games, the classification is tight.
The first of these games is Baba Is You, which was released for PC and Nintendo Switch on May 13, 2019 to critical acclaim and commercial success. The game either won or was nominated for a dozen awards, including winning two awards at the Independent Games Festival before its release \cite{wik}. The developers were invited to talk about their experiences at the 2020 Game Developer Conference \cite{BabaGDCTalk}.
The basic premise of the game is that a player token needs to reach a goal token. However, the player may move around semantic blocks that determine various rules and interactions between objects. Regardless of the modifications to the ruleset, the goal is always to reach some token.
The second of these games is Fez, a puzzle platformer released on April 13th, 2012 to universal acclaim \footnote{\url{https://www.metacritic.com/game/pc/fez/critic-reviews}}. Like Baba Is You, it won or was nominated for several awards. By 2014, it had sold over a million copies \cite{FezSales} and was cited as an inspiration for several later video games \cite{FezInsp1}\cite{FezInsp2}\cite{FezInsp3}.
Fez's novel feature is that it is a 2-dimensional game that also acts 3-dimensional. The player can change the perspective to any of the four orthogonal perspectives, and as such, each object has four faces that can be interacted with. The game's mechanics exploit this feature in various ways.
The final game we explore is Catherine, a puzzle game released in 2011 (the exact date varies by region). Catherine, like the previous two, is a game that both won and was nominated for several awards \cite{wikCath}. The game has sold over a million copies, and in 2019 received a remake. The game intertwines two mechanically separate sections. The first is a dating drama, while the second, which this paper focuses on, is a 3-dimensional puzzle game that takes place in dreams of the main character.
This puzzle game requires the player to ascend a tower of various blocks, which can be rearranged by the player in order to facilitate their climb. The ``selling point'' feature of these segments is the game's unique physics: instead of a fall being prevented only by a block directly underneath, it suffices that a block below touches the falling block along a single one-dimensional ``edge.''
\section{Baba Is You}
\subsection{Rules}
The core game mechanics of Baba Is You are simple to understand. There is a token the player controls on a grid. A player can move this token up, right, left, or down. These are the only possible inputs. The player's goal is to move the token to the same square as a goal token.
What makes the mechanics unique is that there are additional rules other than the ones above which are subject to player changes. All additional game rules are determined by "$x$ IS $y$" sentences, where $x$ is the identifier for one or several tokens, and $y$ is either another token or a modifier to the functionality of the token. The sentences can possibly contain one or more "AND"s. For example, we may have "FLAG AND CRAB IS WIN AND FLOAT" which means that both the flag and crab tokens have the "WIN" and "FLOAT" properties added to them.
These rules can be interacted with as they each exist in the form of blocks (the tokens, modifiers, "IS," and "AND" are all tiles). These blocks may be moved by the player token. If the player token attempts to move onto a tile that has a rule block, the rule block will be pushed to the adjacent spot in the direction the player token moved. Players can disable a rule by pushing a block to not be adjacent to the others, and then create a new rule by pushing another block in its place.
While there are several modifiers the game can use, we will only describe the ones relevant for our proof. First, "YOU" is a modifier which denotes the token(s) the player controls. Whatever is identified by this will move in the direction specified when the player chooses a direction, and will qualify for reaching the goal token. Second, "WIN" denotes the token(s) that the player needs to reach with the "YOU" token(s) to win the level. Next, "STOP" identifies tokens that other tokens cannot pass through. If a token attempts to move or be moved to a tile that has the rule "STOP," then the token will simply not be moved. Finally, "DEFEAT" indicates tokens that will destroy a player token if they occupy the same tile space.
For example, let's look at figure \ref{fig:example}. Here, we have rules "BABA IS YOU," meaning the player controls the white token, "FLAG IS WIN," meaning the player's goal is to have the white token touch the yellow flag token, "WALL IS STOP," meaning that the white token can't move through the grey wall tiles, and "SKULL IS DEFEAT," meaning our token will be removed upon touching the red skull, resulting in a loss. Clearly, our token is unable to get to the flag currently, since both the skull and the walls are stopping the token from being able to reach the flag. However, the player can move the rule blocks using the white token to fix this. By pushing "IS" down one tile, the "SKULL IS DEFEAT" rule will be disabled, allowing the white token to safely reach the flag token and solve the puzzle.
\begin{figure}
\caption{Sample Baba Is You Puzzle}
\label{fig:example}
\end{figure}
\subsection{NP-Hardness}
We show the ruleset to be NP-hard. To this end, we demonstrate how to instantiate maps corresponding to instances of an NP-hard problem; here, we reduce from 3-SAT.
\subsection{Reduction}
Our construction (as seen in figure \ref{fig:babareduction}) of the map begins with defining the following rules in the bottom left of the map: 'BABA IS YOU', 'FLAG IS WIN', and 'WALL IS STOP'. For each variable $x_i$ in the CNF, we create two unique tokens. We will refer to one of these as $y_i$ and the other as $\bar{y}_i$.
We represent each clause of the 3-SAT formula in the map with a clause gadget (as seen in figure \ref{fig:babaclause}). Each clause gadget consists of three 1-block wide pathways that the player may choose to walk through. At the entrance of a given pathway, there is a $y_i$ or $\bar{y}_i$ token corresponding to $x_i$ or $\bar{x}_i$, respectively, in the clause.
After passing through the token $y_i$, there is a rule block corresponding to the $\bar{y}_i$ token blocking the way (or, alternatively, $y_i$ after passing through $\bar{y}_i$). Then there is an open space followed by "IS DEFEAT." Below and to the right of this construction is a wall. We connect the clause gadgets together sequentially and insert a flag at the end of the final clause gadget.
\begin{figure}
\caption{Clause Gadget. Here, "PANTS" is the negation of the plant tokens, "BUG" is the negation of crab tokens, and "SHELL" is the negation of jellyfish tokens.}
\label{fig:babaclause}
\end{figure}
\begin{figure}
\caption{Representation of $(x_1 \vee x_2 \vee x_3) \wedge (\overline{x}
\label{fig:babareduction}
\end{figure}
\subsection{Correctness of the Reduction}
It now remains to verify that our embedding of 3-SAT is valid; namely that if our embedded 3-SAT formula is satisfiable then a path exists for the player to win the map.
If the corresponding 3-SAT formula is satisfiable, then there exists a variable assignment that makes every clause evaluate to true. If the player simply traverses each clause choosing a path corresponding to this assignment ($y_i$ if $x_i$ is true, and otherwise $\bar{y}_i$), they will eventually reach the flag: they never enter a path whose entrance token has acquired the property DEFEAT, because that property is only activated by traversing a path corresponding to the negation of the chosen literal, which a consistent assignment never selects.
Conversely, we verify that if a player can win the map, then the embedded 3-SAT formula is satisfiable. If a path exists for the player to win the map, then they must traverse every clause gadget. To traverse a clause, the player must use a pathway that does not correspond to the negation of a previously selected literal. Reading off the literals of the traversed pathways therefore yields a consistent assignment that makes every clause evaluate to true, so the formula is satisfiable.
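The forward direction of this argument can be phrased as a small consistency check: given a satisfying assignment, the walk through the clause gadgets never steps on a token that has acquired DEFEAT. A minimal sketch (clauses encoded as tuples of signed integers, e.g. $-2$ for $\bar{x}_2$; all names hypothetical) reads:
\begin{verbatim}
def safe_walk(clauses, assignment):
    """Return the literal (pathway) chosen in each clause gadget under the
    given assignment, or None if some clause is unsatisfied.

    clauses: list of 3-tuples of nonzero ints (negative = negated variable)
    assignment: dict mapping variable index to bool"""
    path = []
    for clause in clauses:
        chosen = next((lit for lit in clause
                       if assignment[abs(lit)] == (lit > 0)), None)
        if chosen is None:
            return None      # every pathway would eventually lead through DEFEAT
        path.append(chosen)  # this pathway's entrance token never becomes DEFEAT
    return path

# safe_walk([(1, 2, 3), (-1, -2, 4)], {1: True, 2: False, 3: False, 4: True})
# -> [1, -2]
\end{verbatim}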
\begin{figure}
\caption{Fez Sample Map}
\label{fig:fezSample}
\end{figure}
\begin{figure}
\caption{Jumping from a vine}
\label{fig:fezjump}
\end{figure}
\section{Fez}
Fez is a platformer game with a variety of mechanics. For this reduction, we will use only a small handful of them which we will introduce now.
Fez's most notable mechanic is the shifting of perspective. One may think of the game world in 3 dimensions, with the player seeing through a camera that is always located on one face of a cube, excluding the top and bottom faces. The player may switch the perspective to a different face of the cube at any time; these switches rotate elements of the game world. Thus, it is a 3-dimensional game world that is interacted with on a 2-dimensional basis. Figure \ref{fig:fezSample} shows an example of a map in Fez. As with Baba Is You, we introduce only the minimum amount of information needed to construct the proof.
The player may make the character, Gomez, move left, right, or jump so long as there is a platform below that they are standing on. If there is nothing below, then they will fall. If Gomez is touching a vine (represented by a green texture on a wall), they may move around anywhere on the vine, even without a platform below, and can jump off of the vine at any time. These vines are attached to a face of a block.
When Gomez jumps off of a vine, he can travel a horizontal distance related to his current height compared to the ground. If Gomez is less than or equal to 4 blocks off the ground and he jumps, he can travel a maximum horizontal distance of 4 blocks. If Gomez is at a height greater than 4 blocks, then he can jump a horizontal distance of 4 blocks plus some additional distance that grows at a sub-linear rate compared to his height. For example, in figure \ref{fig:fezjump} we see Gomez's jump arc from a height of 6 and we see him land 5 blocks away from his starting position. Note, if Gomez jumped from a height of 2, he would land a maximum of 4 blocks away from his starting position.
The goal for the player is to pick up pieces of the Hexahedron, which are golden blocks. Finally, there are levers which can rotate rows of blocks 90 degrees, bringing one of the other faces to the current one. It is primarily the combination of vines and these rotations which allow us to construct NP-hard puzzles in Fez.
\subsection{Reduction}
We aim to embed a 3-SAT problem into a constructed in-game tower such that the player is able to horizontally traverse the tower from right to left and thus solve the puzzle if and only if the embedded 3-SAT is satisfiable. This 3-SAT problem has $n$ variables and $m$ clauses.
\begin{figure}
\caption{Ring Gadget for $x_3$ from figure \ref{fig:fezreduction}}
\label{fig:fezColumns}
\end{figure}
\begin{figure}
\caption{Fez Construction: $(x_1 \lor \neg x_2 \lor x_3) \wedge (\neg x_3 \lor x_4 \lor \neg x_5) \wedge (x_1 \lor \neg x_4 \lor x_5)$. In this figure, all variables are set to true (in other words, they have their front face towards the camera). The bottom row is $x_1$, the second to bottom row is $x_2$, and so on, until the top row is $x_5$. The first clause is represented by the right most 5 columns, the second clause by the center five columns, and the last clause by the five columns left of that.}
\label{fig:fezreduction}
\end{figure}
We begin by defining the tower which consists of $n$ ring gadgets stacked vertically. Each ring corresponds to a variable in 3-SAT and has $(n+1)\cdot m$ columns (as seen in figure \ref{fig:fezColumns}). Each ring also has 4 faces: front, back, left, and right. We leave the left and right faces unused. We additionally connect each ring to a unique turnstile mechanism (a switch) that the player can manipulate to rotate each ring between these 4 faces.
We further divide each ring into $m$ segments, each with $n+1$ columns. One of these columns serves as a buffer, while the other $n$ columns exist to enforce the clauses. For the buffer column, we simply let each row of its front and back faces contain vines. These vines exist on each buffer column to allow Gomez to choose which variable he will use to traverse the next clause. For the $n$ ``main'' columns, in ring $i$ and segment $j$, all columns have a vine on the front face if and only if variable $i$ appears unnegated in clause $j$, and a vine on the back face if and only if variable $i$ appears negated in clause $j$. We leave the left and right faces of every segment without vines. The intuition is that the front of a ring corresponds to the variable being true, while the back represents the variable being false. For each clause an assignment satisfies, there will be a row of vines connecting the buffer columns that surround the clause.
We employ a large gap between buffer columns to prevent Gomez from being able to jump between buffer columns directly without the clause being satisfied. If $n < 5$, we enforce that there are always at least $5$ main columns for each clause to prevent this jumping. Once $n > 5$, having a gap of distance $n$ is sufficient to prevent jumping. An example of our construction is shown in figure \ref{fig:fezreduction}.
We claim that the player is able to horizontally traverse this tower from right to left iff the embedded 3-SAT is satisfiable.
\subsection{Correctness of Proof}
Suppose a satisfying assignment exists for our embedded 3-SAT formula; we prove that the player is then able to horizontally traverse the constructed tower and thus solve the puzzle.
Since a satisfying assignment exists for our embedded 3-SAT instance, at least one variable in each clause evaluates to true. We argue that this means that the player can rotate some subset of the rings through the turnstile mechanism in such a way that every segment contains at least one row covered with player-facing vines. In our construction, we only placed vines on the front face of non-negated literals and on the back face of negated literals. Initially, all of the rings' front faces are player-facing. Thus, we initially interpret all literals as being set to true. By rotating a ring twice (180 degrees), the player can set the literal that corresponds to the ring to false. Thus, a literal evaluating to true in a clause results in its corresponding segment in the ring having player-facing vines. Since a satisfying assignment exists, there must exist some combination of ring rotations that results in every segment having at least one row of vines.
Since each segment's first column has all vines, contains a row with all vines (through the satisfying assignment), and links to the first column of the next segment, which again has all vines, the player can traverse to each subsequent segment and thus reach the end.
Now, suppose the player is able to horizontally traverse the constructed tower from right to left; we argue that the 3-SAT formula has a satisfying assignment. Since the player is able to traverse the tower, there must exist at least one row consisting only of vines in each segment, since the player is unable to jump a horizontal distance of $n$ blocks from a height of $n$. Thus, there exists some configuration of the rings that achieves this.
Then, by construction, we can apply that configuration as assignments for the variables, achieving a true literal in each clause. Consequently, the embedded 3-SAT instance is satisfiable.
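The same check, now phrased in terms of ring rotations (front face shown for True, back face for False), is sketched below with the same hypothetical clause encoding as before.
\begin{verbatim}
def tower_traversable(clauses, assignment):
    """Check that every clause segment has at least one row of player-facing
    vines when ring i shows its front face iff assignment[i] is True."""
    for clause in clauses:
        if not any(assignment[abs(lit)] == (lit > 0)   # vine on the visible face:
                   for lit in clause):                 # front iff literal unnegated
            return False   # no full vine row: the gap after this segment is too wide
    return True
\end{verbatim}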
\section{Catherine}
\subsection{Rules}
In Catherine, the player controls a character called Vincent. In each level, Vincent's goal is to climb a tower of blocks to reach its top.
He has a few basic controls. He may move to any adjacent block on the same vertical level as the one he is standing on. He may climb on top of an adjacent block that is exactly one space higher than the block he currently stands on. Similarly, he may step down to an adjacent block that is exactly one space lower than his current block. If Vincent attempts to step down in a direction where there is no block that is precisely one level lower, he will hang from the edge of his current block. From here the player may either let go of the block to drop down, or continue hanging while being able to traverse to adjacent blocks at the same height. For example, in figure \ref{fig:cathhang}, Vincent is hanging from a block and can choose to either move to the left or right of him, or drop down to the block below him. Notably, while hanging, Vincent does not have the ability to move up to a vertically higher block and is stuck traversing horizontally at that level.
The main puzzle mechanic comes from Vincent's capacity to move blocks. Vincent may push a block in the direction opposite of where he stands. Similarly, he may pull a block in the direction he is currently standing. Vincent cannot pull a block if there is a block placed behind him, though he can pull a block if there is a gap behind him, since he will simply hang on the ledge.
If a block has at least one of its bottom edges touching the top edge of a block below it (that is, at least 2 vertices of a cube below it), then it will remain in place, regardless of how blocks below are moved. If there is no such edge, the block will fall until it encounters another block's edge, or until it reaches the bottom of the map, where it disappears. For example, figure \ref{fig:cathedge} shows two red blocks that form a bridge over a gap. Each red block is supported by the blue block that is adjacent to it. If one of these blue blocks is removed, its adjacent red block would fall by a height of 1.
In addition to standard blocks, towers can contain several special types of blocks. One special block we will use in our reduction is a \textit{type-2 cracked block}, which may only have up to two instances of Vincent and/or blocks passing over it before it disintegrates and disappears. Thus, we say that a \textit{type-2 cracked block} initially has a durability of 2 and disappears when its durability reaches zero. Note, there is a \textit{type-1 cracked block} but we do not use it in our reduction. So, from here on, we will simply use the term \textit{cracked block} to denote \textit{type-2 cracked blocks}. Additionally, we note that certain blocks cannot be moved. In the following figures, the only blocks that Vincent can move are those colored green. Grey, blue, and red blocks are not movable by him.
There is also effectively a timer for Catherine's levels, which we refer to as the death plane. The death plane is an invisible plane that slowly rises, incrementally destroying layers of the tower in the process. As the name implies, Vincent will fall to his death if the plane reaches the layer he is currently standing on. Initially, this plane is below the playable level, but it will eventually reach the base of the tower. At least on the first map, the death plane rises and destroys one layer of the tower approximately every 10 seconds, so for our reduction we will conservatively assume this occurs once every 9 seconds. Furthermore, we will assume (again, conservatively) that Vincent can take one action every second. This is an acceptable assumption, since we will show that the tower in our reduction cannot be traversed even with an unlimited amount of time if we are reducing from a false instance of 3-SAT. Meanwhile, if we are reducing from a positive instance of 3-SAT, we will always be able to solve the puzzle within the time constraint.
\begin{figure}
\caption{Example of Vincent Hanging from a Block}
\label{fig:cathhang}
\end{figure}
\begin{figure}
\caption{Example of the Edge Property}
\label{fig:cathedge}
\end{figure}
\subsection{NP-Hardness}
We aim to show that the game is NP-Hard. We reduce from 3-SAT with $n$ variables and $m$ clauses to a constructed puzzle tower in the game.
\subsection{Reduction}
We begin by constructing a block tower of $O(n^2 + m)$ height, $O(n)$ width, and constant depth. The tower is divided into two sections, the staircase section and the main section. The staircase section consists of $2n^2 + 2m$ stacked spiral staircase gadgets (as seen in figure \ref{fig:cathstaircase}). By climbing these spiral staircases, Vincent will gain enough time to traverse the main section of the tower before the death plane destroys it.
Vincent can traverse the spiral staircase gadget, shown in figure \ref{fig:cathstaircase}, in a counter-clockwise fashion. Specifically, he may walk two blocks forward then ascend the block in front of him and finally rotate 90 degrees to the left. By repeating this process three times, Vincent will traverse a single spiral staircase gadget and gain three blocks of height, after only 8 seconds have passed.
The stacked spiral staircase gadgets allow Vincent to reach the tower's main section, which is of $O(m)$ height. The bottom floor of this section consists of the main platform gadget. This gadget, shown in figure \ref{fig:cathmain}, is a $6n$-block wide and $3$-block deep platform. The main platform gadget provides access to $n$ variable gadgets through two 1-block wide \textit{cracked block} pathways. The figure has only a portion of the main platform gadget, where we show a section of the platform (with gray blocks) and show the pathways to 2 different variable gadgets (with red blocks).
Each variable gadget (figure \ref{fig:cathvariable}) consists of a 5-block wide and 3-block deep platform with two towers of height $O(m)$ and width $2$ attached by 1-block to the variable platform. The red block pathways connect back to the main platform and each green block tower extends to a height of $O(m)$. Additionally, each tower is supported through the edge property by 1-block. Removal of this "connector" block will cause its respective tower to collapse. Intuitively, before either of these connector blocks are removed, the variable gadget is simultaneously set to both True and False. By removing one of the connector blocks, we effectively eliminate one of the two assignments. Since each variable gadget is connected to the main platform gadget by two 1-block wide \textit{cracked block} pathways, it can only be entered and left a total of 4 times; this includes being entered by Vincent, being left by Vincent, and taking a block from the variable gadget to the main platform gadget.
The main platform gadget also connects to the clause gadgets via the gap gadget. The clause gadgets can be reached from the right side of the main platform gadget by crossing a 3-block wide and $n-3$ height pit (as seen in figure \ref{fig:cathmaingap}). Note, we say that this gap has three slots (numbered from left to right); this terminology is used to indicate the three different places where blocks may fall into. The blue block is a visual placeholder for $n-4$ vertically stacked blocks. This gap cannot be traversed by Vincent unless he collects and deposits $n$ blocks into the gap. Upon crossing this pit, there is a spiral staircase gadget that connects to the first clause gadget.
Each clause gadget (figure \ref{fig:cathclause}) consists of a $6n$-block wide and $3$-block deep platform with 1-block placed on the north-west corner of the platform. There also exist up to three adjacent towers that are separated from the platform by a 1-block wide gap. A tower is adjacent to a clause gadget if and only if its corresponding variable evaluates to True in the clause. Figure \ref{fig:cathclause} displays a clause gadget with two green block towers, meaning that two out of three of the clause's variables evaluate to True. The bases of these two towers originate from their respective variable gadgets (recall figure \ref{fig:cathvariable}). Intuitively, the extra block placed on each platform can be used to bridge the 1-block wide gap between the platform and a tower. Using this bridge, the player could extract a block from a tower and pull it onto the platform. Clause gadgets are connected to each other by spiral staircase gadgets that can be reached by crossing a 2-block wide and 2-block high pit that is attached to the right side of the gadget (as seen in figure \ref{fig:cathclausegap}). Vincent must push two blocks into this gap to make it crossable (see figure \ref{fig:cathedge} for an example). The exit door of the tower can be reached by traversing the last clause gadget. This completes our encoding of 3-SAT.
\begin{figure}
\caption{Spiral Staircase Gadget}
\label{fig:cathstaircase}
\end{figure}
\begin{figure}
\caption{Main Platform Gadget}
\label{fig:cathmain}
\end{figure}
\begin{figure}
\caption{Variable Gadget}
\label{fig:cathvariable}
\end{figure}
\begin{figure}
\caption{Gap Gadget}
\label{fig:cathmaingap}
\end{figure}
\begin{figure}
\caption{Clause Gadget}
\label{fig:cathclause}
\end{figure}
\begin{figure}
\caption{Clause Gap Gadget}
\label{fig:cathclausegap}
\end{figure}
\begin{lemma}\label{lem1}
The gap gadget can be crossed if and only if $n$ blocks are pushed into it.
\end{lemma}
\begin{proof}
First, we show that one can cross the gadget with $n$ blocks. Recall that the gap gadget is 3 blocks wide and consequently has three sequentially labeled slots into which blocks may fall. The first block pushed into the gap will hover over the gap's first slot due to the edge property. By pushing a second block into the gap, the first block will shift into the second slot of the gap and will no longer be supported by the edge property. As a result, the block will fall to the bottom of the second slot. The second block will now be hovering over the first slot (again supported by the edge property).
By conducting this procedure a total of $n-2$ times, the middle slot of the gap will be completely filled with $n-3$ blocks and the first slot will have 1 block (the $(n-2)$th block) hovering over it due to the edge property. The player can now push two more blocks into the gap to make it crossable. When the $(n-1)$th block is pushed into the gap, it shifts the $(n-2)$th block from the first slot to the second slot. Since the second slot is completely filled, the block does not fall. If the player now pushes the $n$th block into the gap, it shifts the $(n-1)$th block to the second slot and the $(n-2)$th block to the third slot. The $n$th and $(n-2)$th blocks do not fall since they are both supported by the edge property. The $(n-1)$th block does not fall since it is in the second slot, which again is completely filled. Since every slot in the gap is covered, the player may now cross the gap. Figure \ref{fig:cathgapcorrect} shows a crossable gap gadget when we have $n=6$ variables.
On the other hand, the gap gadget cannot be crossed by pushing less than $n$ blocks into it. Since Catherine contains no jump mechanic, Vincent will not be able to cross the gap by just jumping over it. If Vincent were to fall into the gap without pushing any blocks into it, he would be stuck there since he cannot pull any blocks from the surrounding walls and has no ability to jump.
Now suppose Vincent pushes $k$ blocks into the gap for some $k < n$. Vincent can only push blocks into the gap in the same manner as described previously; there is no other way to do so. As a result, there would be 1 block hovering over the first slot and $k-1$ blocks that have fallen into the second slot. Since the third slot is still not covered or filled in any way, the gap would not be crossable. An example of this scenario when we have $n=6$ variables is shown in figure \ref{fig:cathgapincorrect1}.
The player could try to fall into the gap after pushing $k$ blocks into it. If the player pulled the block hovering over the first slot away from the gap, they could drop into the first slot. Once there, they would not be able to get out of the gap. The only action they could perform is pushing the adjacent block in the second slot into the third slot. Due to the edge property, stacked blocks in the second slot do not fall. The player could now move into the second slot (see figure \ref{fig:cathgapincorrect2}). From there, they could either go back to the first slot or pull the block from the third slot back into the second slot. In both cases, the player is back in the first slot and is no closer to escaping the pit. Vincent does not have the ability to pull blocks from the walls of the gap.
If the player dropped into the second slot of the gap, they would be on top of the $k-1$ stacked blocks. From there, they could only choose to drop down to either the first or the third slot of the gap. If they dropped down to the first slot, the player would be in exactly the same situation as in the previous case. If they dropped to the third slot, the player would only have the option of pushing the adjacent block in the second slot into the first slot (see figure \ref{fig:cathgapincorrect3}). This is symmetric to the case when the player is in the first slot, and thus they would not be able to cross the gap.
Consequently, the gap gadget is crossable by Vincent if and only if he pushes $n$ blocks into it.
\end{proof}
\begin{figure}
\caption{A gap gadget for a 3-SAT instance with $n=6$ variables. By pushing in 6 green blocks, the gap is made crossable.}
\label{fig:cathgapcorrect}
\end{figure}
\begin{figure}
\caption{A gap gadget for a 3-SAT instance with $n=6$ variables. The gap is not crossable.}
\label{fig:cathgapincorrect1}
\end{figure}
\begin{figure}
\caption{A gap gadget for a 3-SAT instance with $n=6$ variables. The gap is not crossable.}
\label{fig:cathgapincorrect2}
\end{figure}
\begin{figure}
\caption{A gap gadget for a 3-SAT instance with $n=6$ variables. The gap is not crossable}
\label{fig:cathgapincorrect3}
\end{figure}
\subsection{Correctness of the Reduction}
Suppose a satisfying assignment exists for our embedded 3-SAT formula; we prove that a path must exist from the base of the tower to the exit door at the top. The player begins by ascending the spiral staircase gadgets from the starting area to the main platform gadget, which is the base of the tower's main segment. By traversing the staircase segment of the tower, the player gains a height of $3\cdot(2n^2 + 2m)$. Since it takes at most 9 seconds for the player to traverse a spiral staircase gadget, the death plane will have risen by a height of $2n^2 + 2m$ in this time.
Once at the main platform, the player will set the truth value of each variable according to the previously stated satisfying assignment of the 3-SAT formula. Players can set the truth value of a variable by entering its variable gadget through one of its cracked-block pathways and removing the connecting block to one of the towers. This causes the tower to collapse. Since each tower corresponds to a particular truth assignment of a variable, the player should remove the connecting block from the tower which represents the undesired truth assignment. For example, if the player desires to set the variable to True, they would remove the connecting block from the False assignment tower.
The player can now use this removed block to help fill in the gap (recall figure \ref{fig:cathmaingap}) in the pathway to the clause gadgets. The player accomplishes this by pulling the removed block from the variable gadget through the unused cracked-block pathway and pushing it into the gap in the pathway to the clause gadgets. By lemma \ref{lem1}, this gap can be crossed by pushing $n$ blocks into it. The player can obtain these $n$ blocks by setting the truth assignment of all $n$ variables.
The truth assignment of each variable can be set using at most $20$ actions and the block removed from each variable gadget can be pushed into the gap using at most $6n$ actions. Thus, to set the truth assignment of every variable and traverse the gap, it would take at most $n\cdot(6n+20)$ actions. Recall, Vincent can conservatively execute one action per second, thus the traversal would take at most $n\cdot(6n+20)$ seconds. Since the death plane rises at a rate of 1-block per 9 seconds, it will be at a height of $2n^2 + 2m + \frac{n(6n+20)}{9}$, which is below the main platform's height of $3\cdot(2n^2 + 2m)$.
The player can traverse a clause by obtaining two blocks to push into the clause gap gadget (recall figure \ref{fig:cathclausegap} and figure \ref{fig:cathedge}). The player may use the extra block placed in the north-west corner to connect the platform to one of the adjacent towers. Note, when this extra block is pushed off the platform, it will not fall since it is supported by the edge property. Consequently, the player now has access to one of the tower(s) and may extract a block from it. Notice, this tower does not collapse due to the edge property. The player can reach the next clause gadget by using the extracted block along with the extra block (which was previously used to connect to a tower) to fill the gap in the pathway to the next spiral staircase. Since a satisfying assignment exists for our embedded 3-SAT formula, each clause will have at least one adjacent tower that the player can pull a block from with the help of the extra block. Thus, every clause can be traversed. When the final clause is traversed, the player reaches the exit door. Each clause can be traversed using at most 20 actions (and consequently 20 seconds). Thus, to traverse all clauses, it would take the player at most $20m$ seconds. In this time, the death plane would have risen to a height of $2n^2 + 2m + \frac{n(6n+20)}{9} + \frac{20m}{9}$, which is still below the main platform's height of $3\cdot(2n^2 + 2m)$. Consequently, given a satisfying assignment for the embedded 3-SAT, the player will always be able to reach the top of the tower before the death plane reaches the main segment of the tower.
Now suppose that a path exists from the starting area to the exit door; we show that the embedded 3-SAT formula is satisfiable. For a path to exist, the player must enter each variable gadget and extract exactly 1 block from one of its towers. The player cannot extract more than 1 block from a variable gadget, since each of its connecting pathways can only be traversed twice before it collapses, and both Vincent and the block count against a \textit{cracked block's} durability. Suppose a player attempted to push two blocks over a cracked-block pathway; this is not possible since each traversal of a cracked-block pathway decreases its durability by one. It is, however, possible to collapse both towers within a variable gadget, which is fine and effectively declares that the variable will not be used. By lemma \ref{lem1}, the player cannot traverse the main platform gap gadget without $n$ blocks. Thus, the player must take exactly 1 block from each of the $n$ variable gadgets in order to obtain the $n$ blocks necessary to fill the gap in the pathway to the clause gadgets.
At each satisfied clause gadget, there must exist at least one tower adjacent to it. The player may extract a block from the tower by using the extra block placed in the north-west corner of the platform to form a bridge between the platform and the tower. Notice that the tower does not collapse, since it is supported by the edge property. Without at least one tower adjacent to a clause gadget, the player would not be able to traverse the clause gadget, since they could not obtain the 2 blocks needed to make the clause gap gadget traversable. Suppose the player tried to cross the gap without using two blocks: since there is no jump mechanic, they would fall into the gap and be unable to get out, because the gap has a height of 2 blocks. Now suppose the player tried to use only the extra block placed on each platform to cross the gap. If they push this block into the gap, it will hover over the first slot of the gap due to the edge property, but the second slot of the gap would still be open and uncrossable; if the player dropped into the gap, they would be stuck as in the previous case. Moreover, the player cannot use blocks from one clause's tower to satisfy another clause, since clauses are connected by spiral staircases and there is no mechanic in the game to push blocks up to another level.
If a tower does exist adjacent to a clause gadget, then the clause must be satisfiable since each adjacent tower represents a variable that evaluates to True for the clause. For a path to exist from the starting area to the exit door, all clause gadgets must be traversable. Therefore, each of them must have at least one adjacent tower to pull a block from. This necessarily means that an assignment of variables exists that makes all of the clause gadgets satisfiable. Since all clause gadgets are satisfied, our embedded 3-SAT is satisfiable.
\subsection{In NP}
Every level has a time limit determined by the death plane: the plane moves up after a constant amount of time and must start somewhere on the map, so, assuming the level itself has only polynomial height, any winning sequence of inputs has polynomial length. Such a sequence can therefore be verified in polynomial time by simply checking whether it brings Vincent to the top of the tower.
\end{document}
\begin{document}
\title{{\TheTitle}}
\begin{abstract}
We present a new approach to discretizing shape optimization problems that generalizes
standard moving mesh methods to higher-order mesh deformations and that is naturally
compatible with higher-order finite element discretizations of PDE-constraints.
This shape optimization method is based on discretized deformation
diffeomorphisms and allows for arbitrarily high resolution of shapes with arbitrary smoothness.
Numerical experiments show that it allows the solution of PDE-constrained
shape optimization problems to high accuracy.
\end{abstract}
\begin{keywords}
shape optimization, PDE-constraint, finite elements, moving mesh
\end{keywords}
\begin{AMS}
49Q10, 65N30
\end{AMS}
\section{Introduction}\label{sec:intro}
Shape optimization problems are optimization problems {where} the control to
be optimized is the shape of a domain. Their basic formulation generally reads
\begin{equation}\label{eq:shapeoptproblemnoconstraint}
\text{find }\Omega^*\in\argmin_{\Omega \in \mathcal{U}_{\mathrm{ad}}} \Cj(\Omega)\,,
\end{equation}
where $\mathcal{U}_{\mathrm{ad}}$ denotes a collection of \emph{admissible shapes} and
$\Cj: \mathcal{U}_{\mathrm{ad}} \to\bbR$ represents a \emph{shape functional}.
In many applications, the shape functional depends not only on the shape of a domain
$\Omega\subset{\bbR^d}$,
but also on the solution $u$ of a boundary value problem (BVP) posed on $\Omega$,
in which case \cref{eq:shapeoptproblemnoconstraint} becomes
\begin{subequations}\label{eq:shapeoptproblem}
\begin{equation}\label{eq:shapefunctional}
\text{find }\Omega^*\in\argmin_{\Omega \in \mathcal{U}_{\mathrm{ad}}} \Cj(\Omega, u_\Omega)
\quad \text{subject to}
\end{equation}
\begin{equation}\label{eq:stateconstr}
u_\Omega\in V(\Omega)\,, \quad
a_\Omega(u_\Omega, v) = f_\Omega(v) \quad \text{for all } v\in W(\Omega)\,,
\end{equation}
\end{subequations}
where \cref{eq:stateconstr} represents the variational formulation of a BVP that acts as a
PDE-constraint.
These problems are said to be \emph{PDE-constrained} and are notoriously difficult
to solve because the dependence of $\Cj$ on the domain is nonconvex. Additionally, the function
$u_\Omega$ cannot be computed analytically. Even approximating it with a numerical method is
challenging because the computational domain of the PDE-constraint is the unknown variable to be
solved for in the shape optimization problem.
The literature abounds with numerical methods for BVPs. Here, we consider approximation
by means of finite elements, which has become the most popular choice for PDE-constrained
shape optimization
due to its flexibility for engineering applications. Nevertheless, it is worth mentioning that alternatives
based on other discretizations have also been considered
\cite{EpHa12, BaCiOfStZa15, AnBiVe15,ScIlScGa13}.
Most commonly, PDE-constrained shape optimization problems are formulated
with the aim of further improving the performance of an initial design $\Omega^0$.
The standard procedure to pursue this goal is to iteratively update
some parametrization of $\Omega^0$ to decrease the value of $\Cj$.
Obviously, the choice of this parametrization has an enormous influence on
the design of the related shape optimization algorithm and on the search space $\mathcal{U}_{\mathrm{ad}}$ itself.
In this work, we parametrize shapes by applying deformation
diffeomorphisms to the initial guess $\Omega^0$.
In this framework, solving a shape optimization problem translates into
constructing an optimal diffeomorphism.
To construct this diffeomorphism with numerical methods, we introduce a
discretization of deformation vector fields.
This approach can be interpreted as a higher-order generalization of
standard moving mesh methods and can be combined with isoparametric finite elements
to obtain a higher-order discretization of the PDE-constraint.
There are several advantages to using
higher-degree and smoother transformations.
First, higher-degree parametrization of domains allows for the
consideration of more general shapes (beyond polytopes).
Secondly, the efficiency of a higher-order discretization of a BVP hinges on
the regularity of its solution, which depends on the regularity of the
computational domain, among other factors.
Finally, a smoother
discretization of deformation vector fields allows the computation of
more accurate Riesz representatives of shape derivatives \cite{Pa17, HiPaSa15}, and thus,
more accurate descent directions for shape optimization algorithms.
Our approach is generic and allows for the discretization of domain transformations based on
B-splines, Lagrangian finite elements, or harmonic functions, among others.
This discretization can comprise arbitrarily many basis functions and thus allow for arbitrarily
high resolution of shapes with arbitrary smoothness.
Moreover, because our approach decouples the discretization of the state and the
control, it is straightforward to implement and typically requires no modification of existing finite element software.
This is a significant advantage for practical applications.
The rest of this article is organized as follows.
In \cref{sec:formulation}, we describe how we model the search space $\mathcal{U}_{\mathrm{ad}}$
with deformation diffeomorphisms and discuss the advantages and disadvantages of this choice.
In \cref{sec:optimization}, we give a brief introduction to \emph{shape calculus} and
explain how to compute steepest-descent updates for deformation diffeomorphisms
using shape derivatives of shape functionals.
In \cref{sec:PDEdJ}, we emphasize that having a PDE constraint necessitates the
solution of a BVP and its adjoint at each step of the optimization, and comment on the error introduced by their finite element discretizations.
In \cref{sec:isofem}, we give an introduction to isoparametric finite elements and explain
why it is natural to employ this kind of discretization to approximate the state variable $u_\Omega$
when the domain $\Omega$ is modified by a diffeomorphism.
In \cref{sec:implementation}, we examine implementation aspects of the algorithm
resulting from \cref{sec:optimization} and \cref{sec:isofem}.
In particular, we give detailed remarks for an efficient implementation of a decoupled discretization
of the state and control variables.
Finally, in \cref{sec:numexp}, we perform numerical experiments.
On the one hand, we consider a well-posed test case and investigate the impact of the discretization
of the state and of the control variables on the performance of higher-order moving mesh methods,
showing that these methods can be employed to solve PDE-constrained shape optimization problems
to high accuracy.
On the other hand, we consider more challenging PDE-constrained shape optimization problems and
show that the proposed shape optimization method is not restricted to a specific problem.
\begin{remark}
Shape optimization problems with a distinction between computational domain
and control variable also exist.
For instance, this is the case for PDE-constrained optimal control problems
where the control is a piecewise constant coefficient in the PDE-constraint \cite{Pa16, LaSt16},
in which case the control is the shape of the contour levels of the piecewise constant coefficient.
The approach suggested in this work already covers this more general type of shape optimization problem.
However, to simplify the discussion and reduce the amount of technicalities,
we consider only problems of the form
\cref{eq:shapeoptproblem}.
\end{remark}
\section{Parametrization of shapes via diffeomorphisms}\label{sec:formulation}
Among the many possibilities for defining $\mathcal{U}_{\mathrm{ad}}$,
we choose to construct it by collecting all domains that can be obtained by applying
(sufficiently regular) geometric transformations\footnote{A geometric transformation
is a bijection from $\bbR^d$ onto itself.} to an initial domain $\Omega^0$, that is,
\begin{equation}\label{eq:Uad}
\mathcal{U}_{\mathrm{ad}} \coloneqq \{T(\Omega^0):\,T\in\mathcal{T}_{\mathrm{ad}}\}\,,
\end{equation}
where $\mathcal{T}_{\mathrm{ad}}$ is (a subgroup of) the group of $W^{1,\infty}$-diffeomorphisms.
We recall that $W^{1,\infty}(\bbR^d; \bbR^d)$ is the Sobolev space of locally integrable vector fields with
essentially bounded weak derivatives. We impose this regularity requirement on $\mathcal{T}_{\mathrm{ad}}$
to guarantee that the state constraint \cref{eq:stateconstr} is well-defined for every domain
in $\mathcal{U}_{\mathrm{ad}}$ (assuming that it is well-defined on the initial domain $\Omega^0$).
Note that it may be necessary to strengthen the regularity requirements on $\mathcal{T}_{\mathrm{ad}}$
if the PDE-constraint is a BVP of higher order such as, for example, the biharmonic equation
\cite{BaMa14}.
While there are many alternatives (for instance level sets \cite{AlJoTo02}
or phase fields \cite{GaHeHiKaLa16}), we prefer to describe $\mathcal{U}_{\mathrm{ad}}$
as in \cref{eq:Uad} because it incorporates an explicit description of the boundaries
of the domains contained within it.
In fact, describing shapes via diffeomorphisms is a standard approach in shape optimization;
cf. \cite[Chap. 3]{DeZo11}.
From a theoretical point of view, it is possible to impose a metric on \cref{eq:Uad} and to
investigate the existence of optimal solutions within this framework \cite{SiMu76}.
Recently, the convergence of Newton's method in this framework has also been investigated \cite{St16}.
\begin{remark}
The representation of domains via transformations in \cref{eq:Uad} is not unique,
and it is generally possible to find two transformations $T_1\in \mathcal{T}_{\mathrm{ad}}$ and $T_2\in \mathcal{T}_{\mathrm{ad}}$
such that $T_1 \neq T_2$ and $T_1(\Omega^0) = T_2(\Omega^0)$. For instance, this is the
case if $\Omega^0$
is a ball and $T_2=T_1\circ T_R$, where $T_R$ is a rotation around the center of $\Omega^0$.
To obtain a one-to-one correspondence between shapes and transformations,
one can introduce equivalence classes, but this is not particularly relevant for this work.
\end{remark}
To shorten the notation, we introduce the reduced functional
\begin{equation}\label{eq:redfct}
j:\mathcal{U}_{\mathrm{ad}} \to \bbR\,,\quad \Omega \mapsto \Cj(\Omega,u_{\Omega})\,,
\end{equation}
which is well-defined under the following assumption on the PDE-constraint \cref{eq:stateconstr}.\begin{assumption}\label{ass:welldefBVP}
Henceforth, we assume that the BVP \cref{eq:stateconstr} that acts as PDE-constraint is
well-defined in the sense of Hadamard: for every $\Omega\in\mathcal{U}_{\mathrm{ad}}$
the BVP \cref{eq:stateconstr} has a unique solution $u_\Omega$ that depends continuously on the
BVP data.
\end{assumption}
In \cref{sec:intro}, we mentioned that shape optimization problems are solved
updating iteratively some parametrization of $\Omega^0$, that is,
constructing a sequence of domains $\{\Omega^{(k)}\}_{k\in\bbN}$ so that
$\{j(\Omega^{(k)})\}_{k\in\bbN}$ decreases monotonically.
For simplicity, we relax the terminology and call such a sequence \emph{minimizing},
although the equality
\begin{equation}\label{eq:minseq}
\lim_{k\to \infty}j(\Omega^{(k)}) = \inf_{\Omega\in\mathcal{U}_{\mathrm{ad}}} j(\Omega)
\end{equation}
may not be satisfied.
When the search space $\mathcal{U}_{\mathrm{ad}}$ is constructed as in \cref{eq:Uad},
computing $\{\Omega^{(k)}\}_{k\in\bbN}$ translates into creating
a sequence of diffeomorphisms $\{T^{(k)}\}_{k\in\bbN}$. Generally,
the sequence $\{T^{(k)}\}_{k\in\bbN}$ is constructed according to the following procedure:
\begin{itemize}
\item[(a)] given the current iterate $T^{(k)}$, derive a tentative iterate $\tilde{T}$,
\item[(b)] if $\tilde{T}$ satisfies certain quality criteria, set $T^{(k+1)} = \tilde{T}$ and move to the next step;
otherwise compute another $\tilde{T}$.
\end{itemize}
In the next section, we discuss the computation of $\tilde{T}$ with shape derivatives.
For simplicity, we first assume that the state variable $u_\Omega$ is known analytically
and restrict our considerations to the reduced functional $j$ defined in \cref{eq:redfct}.
The role of the PDE-constraint and the discretization of the state
variable is discussed in \cref{sec:PDEdJ}.
\section{Iterative construction of diffeomorphisms}\label{sec:optimization}
Shape calculus offers an elegant approach for constructing
a minimizing sequence of domains $\{\Omega^{(k)}\}_{k\in\bbN}$.
The key tool is the derivative of the shape functional $\Cj$ with respect
to shape perturbations. To give a more precise
description, let us first introduce the operator
\begin{equation}\label{eq:Jtilde}
J: \mathcal{T}_{\mathrm{ad}} \to\bbR\,, \quad T \mapsto j(T(\Omega^0))\,.
\end{equation}
Since $\mathcal{T}_{\mathrm{ad}}\subset W^{1,\infty}(\bbR^d; \bbR^d)$, which is a Banach space with respect to the norm \cite[Sect. 5.2.2]{Ev10}
\begin{equation}
\Vert T \Vert_{W^{1,\infty}(\bbR^d; \bbR^d)} \coloneqq
\sum_{\vert \alpha \vert \leq 1} \mathrm{ess}\sup \Vert {\mathbf{D}}^{\alpha} T \Vert\,,
\end{equation}
we can formally define the directional derivative of $J$ at $T\in \mathcal{T}_{\mathrm{ad}}$ in the direction
$\Ct\in W^{1,\infty}(\bbR^d; \bbR^d)$ through the limit
\begin{equation}\label{eq:dirder}
dJ(T; \Ct) \coloneqq \lim_{s\to 0^+} \frac{J((\Ci+s \Ct)\circ T) - J(T)}{s}
= \lim_{s\to 0^+} \frac{J(T+s \Ct\circ T) - J(T)}{s}\,.
\end{equation}
\begin{remark}
Note that $\Ci+s \Ct$ is a $W^{1,\infty}$-diffeomorphism for sufficiently small $s$
\cite[Lemma 6.13]{Al07}.
\end{remark}
A shape functional $\Cj$ is said to be \emph{shape differentiable} (in $T(\Omega^0)$)
if the corresponding functional \cref{eq:Jtilde} is Fr\'{e}chet differentiable (in $T$), that is, if
\cref{eq:dirder} defines a linear continuous operator on $W^{1,\infty}(\bbR^d; \bbR^d)$
such that
\begin{equation}
\vert J(T+s \Ct\circ T) - J(T)- dJ(T; s\Ct)\vert = o(s)
\quad \text{for all } \Ct\in W^{1,\infty}(\bbR^d; \bbR^d)\,.
\end{equation}
\begin{remark}
Generally, \cref{ass:welldefBVP} is not sufficient to guarantee that
$\Cj$ is shape differentiable. In particular, it is necessary to ensure that
the solution operator $\Omega \mapsto u_\Omega$ is continuously differentiable;
cf. \cite[Sect. 1.6]{HiPiUlUl09}.
\end{remark}
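To make the directional derivative \cref{eq:dirder} concrete, the following minimal Python sketch
(our own illustration, not part of the proposed method) approximates the one-sided difference quotient
for a toy functional: $J(T)$ is the area of $T(\Omega^0)$ with $\Omega^0$ the unit disc, computed by
Monte Carlo integration of $\det({\mathbf{D}} T)$; the maps $T$ and $\Ct$ are arbitrary smooth placeholders.
\begin{verbatim}
import numpy as np

# Toy illustration of the difference quotient in the definition of dJ(T; Theta):
# J(T) is the area of T(Omega^0), Omega^0 the unit disc, approximated by
# Monte Carlo integration of det(DT) over Omega^0.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200000, 2))
omega0 = pts[np.sum(pts**2, axis=1) <= 1.0]      # uniform samples of Omega^0

def J(T, eps=1e-6):
    e1, e2 = np.array([eps, 0.0]), np.array([0.0, eps])
    c1 = (T(omega0 + e1) - T(omega0 - e1)) / (2 * eps)   # columns of DT by
    c2 = (T(omega0 + e2) - T(omega0 - e2)) / (2 * eps)   # central differences
    det = c1[:, 0] * c2[:, 1] - c1[:, 1] * c2[:, 0]
    return np.pi * det.mean()                    # |Omega^0| times the average of det(DT)

T = lambda x: x + 0.1 * np.sin(x)                # current transformation (placeholder)
Theta = lambda x: x                              # perturbation field: a dilation (placeholder)

for s in [1e-1, 1e-2, 1e-3]:
    Ts = lambda x, s=s: T(x) + s * Theta(T(x))   # (I + s*Theta) composed with T
    print(s, (J(Ts) - J(T)) / s)                 # quotients approach dJ(T; Theta) = 2*J(T)
\end{verbatim}
For this particular (dilation) perturbation the quotients approach $2\,J(T)$, which is the exact value
of the directional derivative, so the printed values illustrate the limit in \cref{eq:dirder}.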
The Fr\'{e}chet derivative $dJ$ can be used to construct a sequence of
diffeomorphisms $\{T^{(k)}\}_{k\in\bbN}$ to solve \cref{eq:shapeoptproblem}
in a steepest descent fashion.
More specifically, the entries of this sequence take the form
\begin{equation}\label{eq:Tsequence}
T^{(0)}({\mathbf{x}}) = {\mathbf{x}}\quad \text{and} \quad T^{(k+1)}({\mathbf{x}}) = (\Ci + \mathrm{d}T^{(k)})\circ(T^{(k)}({\mathbf{x}})),
\end{equation}
where the update $\mathrm{d}T^{(k)} :\bbR^d \to \bbR^d$ is computed with the help of $dJ$.
For instance, we could define \cite[Page 103]{HiPiUlUl09}
\begin{equation}\label{eq:dTinf}
dT^{(k)} \in
\argmin_{\substack{\Ct \in W^{1,\infty}(\bbR^d; \bbR^d)\,,\\ \Vert \Ct\Vert_{W^{1,\infty}}=1}} dJ(T^{(k)}; \Ct)\,.
\end{equation}
Unfortunately, such a descent direction may not exist without making further assumptions
on $dJ$ because $W^{1,\infty}(\bbR^d; \bbR^d)$ is not reflexive; cf. \cite{Ja63}. However, in \cite{LaSt16} it has been shown
that in many instances (and under suitable assumptions), the operator $dJ$ takes the form
\begin{equation}\label{eq:dJtensor}
dJ(T^{(k)}; \Ct) = \int_{T^{(k)}(\Omega_0)} \sum_{i,j=1}^d s_1^{i,j} ({\mathbf{D}}\Ct)^{i,j} + \sum_{\ell=1}^d s_0^\ell \Ct^\ell\,\mathrm{d}\Bx\,,
\end{equation}
where $s_1^{i,j}$, $i,j=1,\dots, d$ and $s_0^{\ell}$, $\ell=1,\dots, d$ are (instance dependent)
$L^{1}(\bbR^d)$-functions.
The following proposition\footnote{We provide a full proof of this proposition because, to the best of
our knowledge, this result is new.} states that, in this case, \eqref{eq:dTinf} can be used to define
steepest descent directions.
\begin{proposition}\label{prop:dtexists}
Let $dJ$ be as in \eqref{eq:dJtensor}. Then, there exists a descent direction $dT^{(k)}$ as defined in \cref{eq:dTinf}.
\end{proposition}
\begin{proof}
First of all, we recall that $L^{\infty}(D)$ is isometrically isomorphic to the dual $X^*$ of
$X = L^{1}(D)$ (for any open domain $D\subset\bbR^m$ in any fixed dimension $m$).
We denote by $\phi_D : L^{\infty}(D) \to X^*$ this isomorphism.
The duality pairing $\langle\cdot,\cdot\rangle_{X^*\times X}$ can be characterized by
\begin{equation}\label{eq:dualitypairing}
\langle f,g\rangle_{X^*\times X} = \int_D \phi_D^{-1}(f) g \,\mathrm{d}\Bx\,.
\end{equation}
Clearly, similar pairings exist for Cartesian products of $L^{\infty}(D)$.
Finally, note that $L^{1}(D)$ is separable. Therefore, by the Banach-Alaoglu theorem,
any bounded sequence $\{f_n\}$ in $L^{\infty}(D)$ has a subsequence $\{f_{n_k}\}$ that converges
weakly-* to some $f\in L^{\infty}(D)$, that is,
\begin{equation}
\lim_{k\to\infty} \int_D f_{n_k} g \,\mathrm{d}\Bx = \int_D f g \,\mathrm{d}\Bx\quad \text{for every } g\in L^{1}(D)\,.
\end{equation}
Using these results, we show that a steepest descent direction exists.
Let $\Ct_n$ be a minimizing sequence of \cref{eq:dTinf}. By definition,
$\Ct_n$ is bounded in $W^{1,\infty}(\bbR^d; \bbR^d)$, and hence in $L^{\infty}(\bbR^d,\bbR^d)$, too.
Therefore, there exists a subsequence $\Ct_{n_k}$
that converges weakly-* to a $T\in L^{\infty}(\bbR^d,\bbR^d)$.
Since $\Ct_{n_k}$ is bounded in $W^{1,\infty}(\bbR^d; \bbR^d)$, there is a subsequence $\Ct_{n_{k_\ell}}$
such that ${\mathbf{D}}\Ct_{n_{k_\ell}}$ converges weakly-* in $L^{\infty}(\bbR^d,\bbR^{d,d})$.
By the definition of weak derivative, it is easy to see that the weakly-* limit of
${\mathbf{D}}\Ct_{n_{k_\ell}}$ is ${\mathbf{D}} T$.
This shows that $T\in W^{1,\infty}(\bbR^d,\bbR^d)$.
Since \cref{eq:dJtensor} is a sum of duality pairings as in \cref{eq:dualitypairing}
(with $D=\bbR^d$ and $g=\chi_{T^{(k)}(\Omega_0)} s_{1}^{i,j}$ or $g=\chi_{T^{(k)}(\Omega_0)} s_{0}^{\ell}$,
where $\chi_{T^{(k)}(\Omega_0)}$ is the characteristic function associated to $T^{(k)}(\Omega_0)$),
$dJ$ is weakly-* continuous.
Therefore, $T$ is a minimizer, because it is the weak-* limit
of $\Ct_{n_{k_\ell}}$, which is a subsequence of a minimizing sequence.
Finally, to show that $\Vert T \Vert_{W^{1,\infty}(\bbR^d; \bbR^d)}=1$, we recall that the norm of a dual space is weak-*
lower semi-continuous. Therefore,
\begin{equation}
\Vert T \Vert_{W^{1,\infty}(\bbR^d; \bbR^d)} \leq \liminf_{k\to\infty} \Vert \Ct_{n_k} \Vert_{W^{1,\infty}(\bbR^d; \bbR^d)} \leq 1\,,
\end{equation}
and since $dJ(T^{(k)};\cdot)$ is linear, $\Vert T \Vert_{W^{1,\infty}(\bbR^d; \bbR^d)} = 1$
(otherwise, unless $dJ(T^{(k)};\cdot)$ vanishes identically, the rescaled direction
$T/\Vert T \Vert_{W^{1,\infty}(\bbR^d; \bbR^d)}$ would attain a strictly smaller value, contradicting minimality).
To conclude, note that $dJ(T^{(k)};T)>-\infty$ because $dJ(T^{(k)};\cdot)$ is continuous.
\end{proof}
Although such a descent direction $dT^{(k)}$ may be well defined, computing it is challenging
(because $W^{1,\infty}(\bbR^d; \bbR^d)$ is infinite dimensional and neither reflexive nor separable).
One possible remedy is to introduce a Hilbert subspace
$(\Cx, (\cdot,\cdot)_\Cx)$, $\Cx \subset W^{1,\infty}(\bbR^d; \bbR^d)$, and to compute
\begin{equation}\label{eq:dTX}
dT^{(k)}_\Cx \coloneqq
\argmin_{\substack{\Ct \in \Cx\,,\\ \Vert \Ct\Vert_{\Cx}=1}} dJ(T^{(k)}; \Ct)\,,
\end{equation}
that is, the steepest descent direction of $J$ with respect to $(\cdot,\cdot)_\Cx$, which is, up to sign and
normalization, the Riesz representative of $dJ(T^{(k)};\cdot)$ in $\Cx$.
Up to a scaling factor, the solution of \cref{eq:dTX} can be computed by solving
the variational problem: find $dT^{(k)}$ such that
\begin{equation}\label{eq:dTXBVP}
(dT^{(k)}, \Ct)_\Cx = -dJ(T^{(k)}; \Ct)\quad \text{for all }\Ct \in \Cx\,,
\end{equation}
which is well-posed by the Riesz representation theorem.
However, the condition $\Cx \subset W^{1,\infty}(\bbR^d; \bbR^d)$ is restrictive (for a Hilbert space).
For instance, the general Sobolev inequalities guarantee that
the Sobolev space $H^k(\bbR^d;\bbR^d)$ is contained in $W^{1,\infty}(\bbR^d; \bbR^d)$
only for $k\geq d/2+1$ \cite[Sect. 5.6.3]{Ev10}.
A more popular approach is to introduce a
finite dimensional subspace $Q_N\subset \Cx \cap W^{1,\infty}(\bbR^d; \bbR^d)$
and to compute the solution of
\begin{equation}\label{eq:dTQn}
(dT^{(k)}_N, \Ct_N)_\Cx = -dJ(T^{(k)}; \Ct_N)\quad \text{for all }\Ct_N \in Q_N\,.
\end{equation}
In this case, the requirement $\Cx \subset W^{1,\infty}(\bbR^d; \bbR^d)$
can be dropped as long as the dimension $N\coloneqq\dim(Q_N)$ of $Q_N$ is finite.
However, note that if $\{Q_N\}_{N\in\bbN}$ is a family of nested finite dimensional spaces
such that $\overline{\cup_{N\in\bbN} Q_N}^{\Cx}=\Cx$,
the sequence $dT^{(k)}_N$ can be interpreted as the Ritz-Galerkin approximation of $dT^{(k)}$.
Therefore, as $N\to \infty$, the sequence $dT^{(k)}_N$ may converge to an element of
$\Cx\setminus W^{1,\infty}(\bbR^d; \bbR^d)$, which does not qualify as an admissible update.
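After discretization, \cref{eq:dTQn} is just a small linear system. The following minimal sketch, with random
placeholder data in place of assembled finite element quantities, illustrates the corresponding computation:
\texttt{K} stands for the Gram matrix of $(\cdot,\cdot)_\Cx$ on a basis of $Q_N$, and \texttt{g} for the vector of
values of $dJ(T^{(k)};\cdot)$ on the basis functions.
\begin{verbatim}
import numpy as np

# Linear-algebra counterpart of the discrete gradient equation: solve K d = -g,
# where K is the Gram matrix of the X-inner product on a basis of Q_N and
# g_i = dJ(T^(k); q_i).  Both are random placeholders here.
rng = np.random.default_rng(1)
N = 50
A = rng.standard_normal((N, N))
K = A @ A.T + N * np.eye(N)           # symmetric positive definite Gram matrix
g = rng.standard_normal(N)            # g_i = dJ(T^(k); q_i)

d = np.linalg.solve(K, -g)            # coefficients of dT_N^(k) in the chosen basis
print("predicted decrease dJ(T^(k); dT_N^(k)) =", g @ d)   # equals -d^T K d <= 0
\end{verbatim}
The predicted decrease is nonpositive by construction, which is the discrete analogue of $dT^{(k)}_N$ being a
descent direction.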
The trial space $Q_N$ can be constructed with
linear Lagrangian finite elements defined on (a mesh of) a hold-all domain $D\supset\Omega^0$.
The resulting algorithm is equivalent to standard moving mesh methods \cite{Pa17}.
Alternatively, one can employ tensorized B-splines \cite{HiPa15}.
Lagrangian finite elements have the advantage of inclusion in standard finite element software,
whereas B-splines offer higher regularity, which is often desirable (as we will argue in \cref{sec:implementation}).
For instance, univariate B-splines of degree $\tilde{p}$ are in $W^{\tilde{p}, \infty}(\bbR)$ \cite{Ho03},
whereas Lagrangian finite elements are not even $C^1$.
As for the Hilbert space $\Cx$, one usually opts for $\Cx = H^1(D)$
or, equivalently, for $H^{1/2}(\partial\Omega^{(k)})$ combined with an elliptic extension operator
onto $D$ \cite{ScSiWe16}. This choice can be motivated by considerations of the shape
Hessian \cite{ScSiWe16,EpHa12}.
To the best of our knowledge, it has not been settled yet which definition of
steepest descent direction among \cref{eq:dTinf,eq:dTX,eq:dTQn} is best suited to formulate
a numerical shape optimization algorithm. Since the focus of this work is more on the discretization of
shape optimization problems than on actual optimization algorithms, we
postpone investigations of this topic to future research. In our numerical experiments
in \cref{sec:numexp}, we will employ \cref{eq:dTQn}, which is the computationally most
tractable definition. However, note that computing steepest descent directions according to
\cref{eq:dTinf} or \cref{eq:dTX} would also inevitably involve some discretization, because
$W^{1,\infty}(\bbR^d; \bbR^d)$ (and, in general, $\Cx$) is infinite dimensional.
We conclude this section with the Hadamard homeomorphism theorem
\cite[Thm 1.2]{Ka94}, which gives explicit criteria to verify that
the entries of the sequence $\left\{T^{(k)}\right\}_{k\in\bbN}$
defined in \cref{eq:Tsequence} are admissible transformations.
\begin{theorem}\label{thm:Hadamard}
Let $X$ and $Y$ be finite dimensional Euclidean spaces, and let
$T:X\to Y$ be a $C^1$-mapping that satisfies the following conditions:
\begin{enumerate}
\item $\det({\mathbf{D}} T)(x) \neq 0$ for all $x\in X$.
\item $\Vert T(x) \Vert \to \infty$ as $\Vert x \Vert \to \infty$.
\end{enumerate}
Then $T$ is a $C^1$-diffeomorphism from $X$ to $Y$.
\end{theorem}
A counterpart of \cref{thm:Hadamard} for $W^{1,\infty}$-transformations
can be found in \cite{GaRa15}.
For the sequence \cref{eq:Tsequence}, note that the second hypothesis of \cref{thm:Hadamard}
is automatically satisfied if the hold-all domain $D$ is bounded, because the update
$\mathrm{d}T^{(k)}$ has compact support, and
$T^{(k+1)}({\mathbf{x}}) = {\mathbf{x}}$ for every ${\mathbf{x}}\in \bbR^d\setminus\overline{D}$.
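In practice, the first hypothesis of \cref{thm:Hadamard} can be monitored numerically by sampling the Jacobian
determinant of a tentative transformation. The following minimal Python sketch (our own illustration, with a
placeholder map and finite-difference Jacobians) performs such a check on a Cartesian sample grid.
\begin{verbatim}
import numpy as np

# Check the first Hadamard criterion on a sample grid: a tentative map T is
# accepted only if its (approximate) Jacobian determinant stays positive.
def min_det_DT(T, xs, eps=1e-6):
    e1, e2 = np.array([eps, 0.0]), np.array([0.0, eps])
    c1 = (T(xs + e1) - T(xs - e1)) / (2 * eps)    # first column of DT
    c2 = (T(xs + e2) - T(xs - e2)) / (2 * eps)    # second column of DT
    return (c1[:, 0] * c2[:, 1] - c1[:, 1] * c2[:, 0]).min()

grid = np.stack(np.meshgrid(np.linspace(-1, 1, 50),
                            np.linspace(-1, 1, 50)), axis=-1).reshape(-1, 2)
T_trial = lambda x: x + 0.3 * np.sin(np.pi * x)   # placeholder tentative map
print("admissible:", min_det_DT(T_trial, grid) > 0.0)
\end{verbatim}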
\section{Shape derivatives of PDE-constrained functionals}\label{sec:PDEdJ}
To simplify the exposition,
in the previous section we treated the dependence of $J$ on $u$ implicitly;
this dependence was hidden in the reduced functional $j$.
We now examine the consequences of this dependence, as it introduces additional difficulties.
Indeed,
it is generally the case that to evaluate the Fr\'{e}chet derivative $dJ$
it is necessary to solve (at least) one BVP.
To illustrate this fact, we
consider the following example:
\begin{subequations}\label{eq:examplePDEJ}
\begin{equation}
\Cj(\Omega, u_\Omega) = \frac{1}{2} \int_\Omega u_\Omega^2\,\mathrm{d}\Bx\,, \quad\text{subject to}
\end{equation}
\begin{equation}
u_\Omega\in\Hone\,,\quad\int_\Omega \nabla u_\Omega\cdot\nabla v+ u_\Omega v\,\mathrm{d}\Bx
= \int_\Omega v \,\mathrm{d}\Bx \quad \text{for all } v\in \Hone\,.
\end{equation}
\end{subequations}
Its shape derivative reads \cite[Eq. 2.12]{Pa16}
\begin{align}\label{eq:examplePDEdJ}
dJ(T;\Ct) = \int_{T(\Omega^0)} \big(&\nabla u_{T(\Omega^0)} \cdot ({\mathbf{D}}\Ct+{\mathbf{D}}\Ct^\top)\nabla p\\
\nonumber
&+(p+u_{T(\Omega^0)}^2 -\nabla u_{T(\Omega^0)}\cdot\nabla p - u_{T(\Omega^0)}p)\Div\Ct \big) \,\mathrm{d}\Bx\,,
\end{align}
where $p\in H^1(T(\Omega^0))$ is the solution of the adjoint BVP
\begin{equation}\label{eq:exampleAdjBVP}
\int_{T(\Omega^0)} \nabla p\cdot\nabla v+ p v\,\mathrm{d}\Bx
= \int_{T(\Omega^0)} u_{T(\Omega^0)}v \,\mathrm{d}\Bx \quad \text{for all } v\in H^1(T(\Omega^0))\,.
\end{equation}
Formula \cref{eq:examplePDEdJ} clearly shows that it is necessary to compute the functions $u_{T(\Omega^0)}$ and $p$ to evaluate $dJ$.
The adjoint BVP \cref{eq:exampleAdjBVP} is introduced to derive a formula of $dJ$
that does not contain the shape derivative of $u_\Omega$. This is a well-known strategy
in PDE-constrained optimization \cite[Sect. 1.6]{HiPiUlUl09}.
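For the model problem \cref{eq:examplePDEJ}, the computations behind \cref{eq:examplePDEdJ} can be expressed
compactly in a finite element library. The following sketch assumes Firedrake/UFL syntax (the library used for the
experiments in \cref{sec:numexp}); the mesh is a placeholder for a triangulation of $T(\Omega^0)$, and the assembled
vector collects the values of the volume formula \cref{eq:examplePDEdJ} on the discrete basis fields.
\begin{verbatim}
from firedrake import *

# Hedged sketch (assuming Firedrake/UFL syntax): solve the state and adjoint
# problems of the model example on a fixed mesh and assemble the volume form of
# the shape derivative against a vector-valued test function.
mesh = UnitDiskMesh(refinement_level=3)        # placeholder for T(Omega^0)
V = FunctionSpace(mesh, "CG", 1)
u, p, v = Function(V), Function(V), TestFunction(V)

solve(inner(grad(u), grad(v))*dx + u*v*dx - v*dx == 0, u)     # state BVP
solve(inner(grad(p), grad(v))*dx + p*v*dx - u*v*dx == 0, p)   # adjoint BVP

W = VectorFunctionSpace(mesh, "CG", 1)
theta = TestFunction(W)
dJ = (inner(grad(u), dot(grad(theta) + transpose(grad(theta)), grad(p)))
      + (p + u**2 - inner(grad(u), grad(p)) - u*p) * div(theta)) * dx
g = assemble(dJ)        # entries: dJ(T; theta_i) for the finite element basis fields
\end{verbatim}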
In general, deriving explicit formulae for Fr\'{e}chet derivatives of PDE-constrained functionals is a
delicate and error-prone task. However, in many instances one can
introduce a Lagrangian functional that allows the automation of the differentiation process
and gives the correct adjoint equations \cite[Sect. 1.6.4]{HiPiUlUl09}.
The level of automation is such that numerical software is capable of differentiating several
PDE-constrained functionals \cite{FaHaFuRo13}.
Clearly, Lagrangians are useful also for the special case of PDE-constrained shape
functionals \cite[Chap. 10]{DeZo11}, and dedicated numerical software for shape differentiation
has recently become available \cite{Sc16}.
\begin{remark}
The Hadamard-Zol\'{e}sio structure theorem \cite[Chap. 9, Thm 3.6]{DeZo11}
states that, under certain regularity assumptions on $\Omega$,
the Fr\'{e}chet derivative $dJ(\Omega;\Ct)$ depends only on perturbations
$\Ct(\partial\Omega)$ of the domain boundary.
As a consequence, the derivative of most shape functionals can be formulated as an integral
both in the volume $\Omega$ and on the boundary $\partial\Omega$,
and these formulations are equivalent. For instance, the boundary formulation that corresponds
to \cref{eq:examplePDEdJ} reads \cite[Eq. 2.13]{Pa16}
\begin{equation}
\int_{T(\partial\Omega^0)} \Ct \cdot {\mathbf{n}} \left( u_{T(\Omega^0)}^2
-\nabla u_{T(\Omega^0)}\cdot\nabla p -u_{T(\Omega^0)}p+p\right) \,\mathrm{d}S\,.
\end{equation}
When the state and the adjoint variables are replaced by numerical approximations,
these two formulae define two different approximations of $dJ$.
In the framework of finite elements, it has been shown that volume based formulations
usually offer higher accuracy compared to their boundary based counterparts
\cite{HiPaSa15, Pa15}.
Additionally, the combination of volume based formulae with piecewise linear
finite element discretization of the control variable results in shape optimization algorithms
for which the paradigms \emph{optimize-then-discretize} and \emph{discretize-then-optimize}
commute \cite{HiPa15, Be10}. This does not hold in general for boundary based formulae
because piecewise linear finite elements do not fulfill the necessary regularity requirements,
and the equivalence of boundary and volume based formulae is not guaranteed \cite{Be10}.
\end{remark}
\section{Isoparametric Lagrangian finite elements}\label{sec:isofem}
To evaluate the shape functional $\Cj(\Omega, u_\Omega)$,
it is necessary to approximate the function $u_\Omega$, which is the solution of the PDE-constraint \cref{eq:stateconstr}. For the Fr\'{e}chet derivative $dJ$, it may be necessary to
also approximate the solution $p$ of an adjoint BVP.
In this work, we consider the discretization of \cref{eq:stateconstr} and the adjoint BVP
by means of finite elements.
Finite element spaces are defined on meshes of the computational domain.
As shape optimization algorithms modify the computational domain,
a new mesh is required at each iteration. This new mesh can either be constructed
\emph{de novo} or by modifying a previously existing mesh. On the one hand, remeshing should
be avoided
because it is computationally expensive and may introduce undesirable
noise in the optimization algorithm. On the other hand, updating the mesh is a delicate process
and may return a mesh with poor quality (which in turn introduces noise in the optimization as well).
Isoparametric finite elements offer an interesting perspective on the process of mesh updating
that fits well with our encoding of changes in the domain via geometric transformations.
In particular, with isoparametric finite elements it is possible to mimic the modification of the
computational domain without tampering directly with the finite element mesh. Additionally,
isoparametric finite element theory
provides insight into the extent to which remeshing can be avoided. Next, we provide a concise recapitulation of isoparametric finite element theory.
For simplicity, we assume that the PDE-constraint is a
linear $V$-elliptic second-order BVP. However, we believe that most of the considerations
readily cover more general BVPs.
For a more thorough introduction to isoparametric finite elements, we refer to \cite[Sect. 4.3]{Ci02}.
The Ritz-Galerkin discretization of \cref{eq:stateconstr} reads
\begin{equation}\label{eq:stateconstrFE}
\text{find}\quad u_h\in V_h(\Omega)\,, \quad
a_\Omega(u_h, v_h) = f_\Omega(v_h) \quad \text{for all } v_h\in V_h(\Omega)\,,
\end{equation}
where $V_h(\Omega)$ is a finite dimensional subspace of $V(\Omega)=W(\Omega)$.
Henceforth, we restrict ourselves to Lagrangian finite element approximations
on simplicial meshes.
Let us assume for the moment that $\Omega$ is a polytope.
The most common construction of finite element spaces begins with a triangulation $\Delta_h(\Omega)$
of $\Omega$.
This triangulation is used to introduce global basis functions that span the finite element space.
The finite element space is called Lagrangian if the degrees of freedom of its global basis
functions are point evaluations \cite[Page 36]{Ci02}, and it is called of degree $p$
if the local basis functions, that is, the restriction of global basis functions to elements $K$
of the triangulation, are polynomials of degree $p$.
It is well known that Lagrangian finite elements on simplicial meshes are \emph{affine equivalent}.
Affine equivalence means that we can define a reference element $\hat{K}$ and a set of reference
local basis functions $\{\hat{b}_i\}_{i\leq M}$ on $\hat{K}$, and construct a family of affine diffeomorphisms
$\{ G_K: \hat{K} \to K\}_{K\in\Delta_h(\Omega)}$ such that
the local basis functions $\{b_i^K\}_{i\leq M}$ on $K$ satisfy $b_i^K({\mathbf{x}}) = \hat{b}_i(G_K^{-1}({\mathbf{x}}))$.
Note that both $\{b_i^K\}_{i\leq M}$ and $\{\hat{b}_i\}_{i\leq M}$ are polynomials, because
the pullback induced by a bijective affine transformation is an automorphism of the space of polynomials of degree $p$.
Issues with this construction arise if $\Omega$ has curved boundaries. In this case,
we introduce first an affine equivalent finite element space $V_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz})$ built on
the triangulation $\Delta_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz})$ of a polytope $\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}$ that approximates
$\Omega$.
Then, we construct a vector field $F\in (V_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}))^d$ such that
$F(\partial\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz})\approx \partial \Omega$ and generate a (curved) triangulation
$\Delta_h(\Omega)$ by deforming the elements of $\Delta_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz})$ according to $F$.
Finally, we define the finite element space $V_h(\Omega)$ on $\Delta_h(\Omega)$
by choosing $b_i^K({\mathbf{x}}) = \hat{b}_i(G_K^{-1}(F^{-1}({\mathbf{x}})))$ as local basis functions.
This construction leads to so-called isoparametric finite elements. Again, this space is called
Lagrangian if the reference local basis functions $\{\hat{b}_i\}_{i\leq M}$ are polynomials.
However, note that the local basis functions $\{b_i\}_{i\leq M}$
of isoparametric Lagrangian finite elements may not be polynomials.
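Operationally, the composed map $F\circ G_K$ is what a finite element code uses to integrate over a curved element
$K$: reference quadrature points are mapped through $G_K$ and $F$, and the weights are scaled by the corresponding
Jacobian determinants. The following minimal Python sketch (our own illustration, with placeholder maps and a
three-point quadrature rule) computes the area of a deformed element in this way.
\begin{verbatim}
import numpy as np

# Area of a curved element K = F(G_K(K_hat)) by mapping reference quadrature
# points through the affine map G_K and the deformation F (both placeholders).
ref_pts = np.array([[1/6, 1/6], [2/3, 1/6], [1/6, 2/3]])   # quadrature on the
ref_wts = np.array([1/6, 1/6, 1/6])                        # reference triangle

A = 0.2 * np.eye(2); b = np.array([0.5, 0.5])
G_K = lambda xhat: xhat @ A.T + b                          # affine map K_hat -> K
F = lambda x: x + 0.05 * np.sin(np.pi * x)                 # placeholder deformation

def det_DF(x, eps=1e-6):
    e1, e2 = np.array([eps, 0.0]), np.array([0.0, eps])
    c1 = (F(x + e1) - F(x - e1)) / (2 * eps)
    c2 = (F(x + e2) - F(x - e2)) / (2 * eps)
    return c1[:, 0] * c2[:, 1] - c1[:, 1] * c2[:, 0]

pts = G_K(ref_pts)                               # quadrature points on the straight element
area = np.sum(ref_wts * np.abs(np.linalg.det(A)) * det_DF(pts))
print("area of the curved element:", area)
\end{verbatim}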
Isoparametric Lagrangian finite elements on curved domains
are proved to retain the approximation properties of Lagrangian finite elements
on polytopes under the following additional assumptions \cite[Thm 4.3.4]{Ci02}:
\begin{enumerate}
\item the triangulation $\Delta_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz})$ is regular \cite[Page 124]{Ci02},
\item the mesh width $h$ is sufficiently small,
\item for every quadrature point ${\mathbf{x}}_q\in\hat{K}$, and for every element $K\in\Delta_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz})$
\begin{equation}\label{eq:distortion}
\Vert F(G_K({\mathbf{x}}_q)) - G_K({\mathbf{x}}_q) \Vert = \Co(h^p)\,,
\end{equation}
and $F(G_K({\mathbf{x}}_q)) \in \partial\Omega$ whenever $G_K({\mathbf{x}}_q)\in \partial\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}$.
\end{enumerate}
Equation \cref{eq:distortion} is sufficient to guarantee that the map $F\circ G_K$
is a diffeomorphism, and to provide algebraic estimates of the form
\begin{equation}\label{eq:Fdecay}
\Vert {\mathbf{D}}^{\alpha} (F\circ G_K) \Vert = \Co(h^\alpha)\,,
\end{equation}
which are necessary to derive the desired approximation estimates.
This knowledge of isoparametric finite elements is sufficient to tackle our initial problem:
solve \cref{eq:stateconstrFE} on $\Omega^{(k)}$
(where $\Omega^{(k)}\coloneqq T^{(k)}(\Omega^0)$).
In the first iteration, we construct $V_h(\Omega^0)$ in the isoparametric fashion described above.
First, we generate a triangulation of a suitable polytope
$\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}^0$ that approximates $\Omega^0$.
Then, we define the finite element space $V_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}^0)$
and generate a transformation $F^{(0)}\in(V_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}^0))^d$ that maps $\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}^0$
onto $\Omega^0$. Finally, we construct $V_h({\Omega}^0)$ by combining
reference local basis functions with the diffeomorphism $F^{(0)}$.
In the next iteration, we construct $V_h(\Omega^{(1)})$
in the same way, but replacing
the diffeomorphism $F^{(0)}$ with the interpolant
$$(V_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}^0))^d\ni F^{(1)} \coloneqq \Ci_h (T^{(1)}\circ F^{(0)})\,,$$
where $\Ci_h$ denotes the interpolation operator onto $(V_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}^0))^d$.
Since
$$T^{(1)}({\mathbf{x}}) = {\mathbf{x}} + \mathrm{d}T^{(0)}({\mathbf{x}})\,,$$
the map $F^{(1)}$ can be written as
$$F^{(1)} = F^{(0)} + \Ci_h(\mathrm{d}T^{(0)}\circ F^{(0)}) \,.$$
Repeating this procedure at every iteration results in the isoparametric space
$V_h(\Omega^{(k)})$ being constructed with the map
\begin{align}
\nonumber F^{(k)} &= \Ci_h(T^{(k)}\circ F^{(0)}) \,,\\
\nonumber&= \Ci_h(T^{(k-1)}\circ F^{(0)}) + \Ci_h(\mathrm{d}T^{(k-1)}\circ T^{(k-1)}\circ F^{(0)})\,,\\
\label{eq:Fk}&= F^{(k-1)} + \Ci_h(\mathrm{d}T^{(k-1)}\circ F^{(k-1)})\,,
\end{align}
where the second equality follows from $F^{(k-1)} = \Ci_h (T^{(k-1)}\circ F^{(0)})$.
In general, the map $F^{(k)}$ may not fulfill the condition \cref{eq:distortion}.
However, by $W^{1,\infty}$-error estimates of $\Ci_h$ \cite[Thm 4.3.4]{Ci02},
it holds that
$$\det({\mathbf{D}} F^{(k)})(x) \to \det({\mathbf{D}} (T^{(k)}\circ F^{(0)}))(x) \text{ as } h\to 0\,.$$
This, in light of \cref{thm:Hadamard}, guarantees that $F^{(k)}$
is indeed a diffeomorphism if $h$ is small enough (because $T^{(k)}\circ F^{(0)}$ is
a diffeomorphism as well, and therefore $\det({\mathbf{D}} (T^{(k)}\circ F^{(0)}))(x)\neq 0$).
Additionally, note that the element transformation $G_K:\hat{K}\to K$ is affine,
and thus,
$${\mathbf{D}}^\alpha (F^{(k)}\circ G_K) = ({\mathbf{D}}^\alpha (F^{(k)})\circ G_K)({\mathbf{D}} G_K)^{\alpha}\,.$$
Therefore,
\begin{equation}\label{eq:Fkdecay}
\Vert {\mathbf{D}}^{\alpha} (F^{(k)}\circ G_K) \Vert
\leq \Vert ({\mathbf{D}}^\alpha (F^{(k)})\circ G_K)\Vert \Vert ({\mathbf{D}} G_K)^{\alpha} \Vert\leq \Vert {\mathbf{D}}^\alpha (F^{(k)})\Vert h^\alpha\,.
\end{equation}
The estimate \cref{eq:Fkdecay} is asymptotically equivalent to \cref{eq:Fdecay}.
This implies that modifying the transformation used to generate the isoparametric finite element
space does not affect its approximation properties
as long as $\Vert {\mathbf{D}}^\alpha (F^{(k)})\Vert$ is moderate.
\begin{remark}
It is not strictly necessary to replace the transformation $T^{(k)}\circ F^{(0)}$
with its interpolant $F^{(k)}$. However, evaluating $T^{(k)}$ can be computationally
expensive and may not be supported natively in finite element software (in particular,
if evaluating $T^{(k)}$ involves a complicated formula).
On the other hand, as explained in \cref{sec:optimization},
in practice $T^{(k)}$ lies in a finite dimensional space. Therefore, the interpolation operator
$\Ci_h$ can be represented as a matrix, which can be used to significantly
speed up the computation of the update $dT^{(k)}$, as explained in \cref{sec:implementation}.
\end{remark}
\begin{remark}
Usually, the isoparametric transformation $F$ is chosen to be the identity on
elements of $\Delta_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz})$ that do not share edges/faces with $\partial\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}$.
This particular choice of $F$ is made to decrease the computational cost of matrix/vector assembly,
and is not dictated by error analysis. In our approach, the function $F$ will generally differ from the
identity even in the interior of the domain.
\end{remark}
\begin{remark}
In \cite{HiPa15}, the authors suggest to pursue shape optimization in Lagrangian coordinates by
reformulating shape optimization problems as optimal control problems on the initial domain. The
resulting method is formally equivalent to the one presented in this work, but it requires hard-coding
the geometric transformations into shape functionals and PDE-constraints (which is problem dependent),
and requires the derivation of Fr\'{e}chet derivatives in Lagrangian coordinates
(which are usually not considered in the shape optimization literature).
In contrast, the approach presented in this work exploits the fact that these geometric transformations
are included in standard finite element software, and allows the use of formulae for Fr\'{e}chet derivatives
that are already available in the literature.
\end{remark}
\section{Implementation aspects}\label{sec:implementation}
The previous sections consider different discretization aspects of the
shape optimization problem \cref{eq:shapeoptproblem}.
\Cref{sec:optimization} introduces the finite dimensional space
$Q_N$ to construct the sequence of diffeomorphisms \cref{eq:Tsequence}.
\Cref{sec:isofem}, on the other hand, introduces the finite dimensional space $(V_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}^0))^d$
to approximate the solution of \cref{eq:stateconstr} by means of isoparametric finite elements.
There are conflicting demands on the choice of these two finite dimensional spaces.
On the one hand, employing the same discretization
based on piecewise linear Lagrangian finite elements
greatly simplifies the implementation in existing finite element libraries and may reduce
the execution time. On the other hand, a decoupled discretization facilitates enforcing
stability in the optimization process. For instance, the authors of \cite{AlPa06,GiPaTr16} suggest the
use of linear Lagrangian finite elements
built on two nested meshes: a coarser one to discretize the geometry and a
finer one to solve the state equation. They report that this reduces the presence of spurious oscillations
in the optimized shape.
A decoupled discretization may also be required
if one aims at using higher-order approximations of the state variable $u$. To elucidate this,
recall that the use of higher-order finite elements is motivated only if the
exact solution is sufficiently regular. More specifically,
isoparametric finite element solutions of degree $p$ converge as
$\Co(h^p)$ in the energy norm provided that the exact solution satisfies $u\in H^{p+1}(\Omega)$,
whereas the convergence rate deteriorates if $u$ is less regular \cite[Thm 3.2.1]{Ci02}
or if isoparametric finite elements are replaced by standard affine-parametric finite
elements \cite[Rmk 4.4.4 (ii)]{Ci02}.
It is virtually impossible to prescribe universal and sharp rules
that ensure that the solution of the state constraint remains
sufficiently regular during the optimization process,
but elliptic regularity theory can provide some guidelines.
Assuming the problem data is sufficiently smooth, the solution of a linear elliptic Dirichlet BVP is
$H^{s}$-regular \cite[Def. 7.1]{Br07} when $\Omega$ has a $C^s$-boundary \cite[Thm 8.13]{GiTr01} (see also \cite{Gr11}
for an extensive treatment of elliptic regularity theory).
Therefore, it might be desirable to employ sufficiently regular transformations, so that
the regularity of the domain is preserved during the optimization process.
In this case, typical isoparametric Lagrangian finite elements are not a good choice for
$Q_N$ because they only allow $W^{1,\infty}$ piecewise polynomial representations of
the domain transformations. The natural alternative is to employ multivariate B-splines
of degree $\tilde{p}$ \cite[Def. 4.1]{Ho03}, which are piecewise polynomials with compact support
and are both $W^{\tilde{p},\infty}$ and $C^{\tilde{p}-1}$-regular.
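For illustration, a single tensorized quadratic Schoenberg B-spline on a regular grid of width $h$ can be
generated, for instance, with SciPy; the following sketch merely indicates the kind of basis functions we have in
mind and is not the code used for the experiments.
\begin{verbatim}
import numpy as np
from scipy.interpolate import BSpline

# One univariate quadratic Schoenberg B-spline on a regular grid of width h;
# tensor products of such functions form the basis used for Q_N.
h = 1.8 * 2.0**-4
b = BSpline.basis_element(h * np.arange(4))     # degree 2, support [0, 3h], C^1
q = lambda x, y: b(x) * b(y)                    # one tensorized basis function on R^2
print(q(1.5 * h, 1.5 * h))                      # maximal at the center of its support
\end{verbatim}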
For these reasons, we focus on the more general case of a decoupled discretization of
$(V_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}^0))^d$ and $Q_N$ and discuss the implementation details for the following
simple optimization algorithm, which covers all fundamental aspects of shape optimization.
\subsection*{Minimal shape optimization pseudo-code}
\begin{enumerate}
\item initialize, then, for $k\geq 0$:
\item compute the state $u$ and evaluate $J$; stop if converged, otherwise
\item compute the update $dT$ solving \cref{eq:dTQn}
\item choose $s$ such that $T + sdT\circ T$ is feasible and $J$ is minimal
\item update $T$ and go back to step 2.
\end{enumerate}
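The following Python sketch mirrors the five steps above; every routine name is a placeholder for functionality
discussed in the remainder of this section, and the discrete set of step sizes imitates the simple linesearch used
in \cref{sec:numexp}.
\begin{verbatim}
def compose(T, dT, s):
    """Return x -> T(x) + s*dT(T(x)), i.e. (I + s*dT) o T, cf. the update rule."""
    return lambda x: T(x) + s * dT(T(x))

def shape_optimization(solve_state, evaluate_J, compute_update, is_admissible,
                       T, max_iter=100, tol=1e-10):
    """Schematic driver; all callables are placeholders for library routines."""
    J_old = float("inf")
    for k in range(max_iter):
        u = solve_state(T)                       # step 2: state on the current domain
        J = evaluate_J(T, u)
        if abs(J_old - J) < tol:                 # crude convergence test
            break
        J_old = J
        dT = compute_update(T, u)                # step 3: solve the gradient equation
        best = (J, 0.0)                          # steps 4-5: admissible step with smallest J
        for s in (0.1 * i for i in range(1, 11)):
            if not is_admissible(T, dT, s):
                continue
            Ts = compose(T, dT, s)
            best = min(best, (evaluate_J(Ts, solve_state(Ts)), s))
        if best[1] == 0.0:                       # no admissible decrease found
            break
        T = compose(T, dT, best[1])
    return T
\end{verbatim}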
\subsection*{Step 1}
First, we construct the finite element space
$(V_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}^0))^d\coloneqq\mathrm{span}\left\{{\mathbf{v}}_i\right\}_{i=1}^M$
and store the coefficient vector ${\mathbf{f}}^{(0)}\in\bbR^M$ of the transformation
$F^{(0)}\in(V_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}^0))^d$,
which maps $\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}^0$ onto (an approximation of) $\Omega^0$.
Then, we construct the space $Q_N\coloneqq\mathrm{span}\left\{{\mathbf{q}}_i\right\}_{i=1}^N$,
initialize the vector field $T^{(0)}\in Q_N$ to the identity, and store its coefficients in the vector
${\mathbf{t}}^{(0)}\in\bbR^N$.
Finally, we store the matrix representation ${\mathbf{I}}_h$ of the interpolation operator
$$\Ci_h : Q_N \to (V_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}^0))^d\,.$$ The matrix ${\mathbf{I}}_h$ is sparse if the basis functions
$\left\{{\mathbf{v}}_i\right\}_{i=1}^M$ and $\left\{{\mathbf{q}}_i\right\}_{i=1}^N$ have (small) compact support.
\subsection*{Step 2}
First, we compute the coefficients of $F^{(k)}\coloneqq\Ci_h(T^{(k)}\circ F^{(0)})$.
This is done by computing
\begin{equation}\label{eq:Fkalg}
{\mathbf{f}}^{(k)} = {\mathbf{f}}^{(k-1)}+{\mathbf{I}}_h {\mathbf{d}}{\mathbf{t}}^{(k-1)},
\end{equation}
as will be justified in the next step.
Then, we approximate $u_{\Omega^{(k)}}$ by means of isoparametric
finite elements and evaluate $\Cj$ on the domain $\Omega^{(k)}$. If the convergence
criteria are not satisfied, we proceed further and compute an update of $T^{(k)}$.
\subsection*{Step 3}
First, we have to assemble the load vector
${\mathbf{d}}\tilde{{\mathbf{J}}}^{(k)}_{{\mathbf{q}}}\coloneqq\{dJ(T^{(k)}; {\mathbf{q}}_i)\}_{i=1}^N$.
This can be computationally expensive because $dJ$ depends
on $u$, which is approximated with a finite element function
and lives on a finite element mesh. Therefore, to evaluate $dJ$
it is necessary to loop through the finite element
mesh. Although one loop is sufficient if one evaluates the contribution of each cell for all basis
functions in $Q_N$, evaluating these functions can be computationally expensive
and may require extensive modifications
of finite element software. Therefore, it may be desirable to employ a different
strategy, which we detail below.
Let $\{{\mathbf{v}}^{(k)}_i\}_{i=1}^M$ denote the isoparametric basis of $(V_h({\Omega}^{(k)}))^d$.
The vector
\begin{equation}\label{eq:dJisop}
{\mathbf{d}}\tilde{{\mathbf{J}}}^{(k)}_{{\mathbf{v}}^{(k)}}\coloneqq\{dJ(T^{(k)}; {\mathbf{v}}^{(k)}_i)\}_{i=1}^M\,,
\end{equation}
can be assembled efficiently with existing software, because
the basis functions $\{{\mathbf{v}}^{(k)}_i\}$ are generally included in finite element software
(whilst $\{{\mathbf{q}}_i\}$ may not be).
Interestingly, the product of the transpose of the interpolation matrix ${\mathbf{I}}_h^\top$ with
\cref{eq:dJisop} can be interpreted as the approximation
\begin{equation}\label{eq:ITdJ}
{\mathbf{I}}_h^\top{\mathbf{d}}\tilde{{\mathbf{J}}}^{(k)}_{{\mathbf{v}}^{(k)}}\approx
\{dJ(T^{(k)};{\mathbf{q}}_i \circ (F^{(0)})^{-1}\circ(T^{(k)})^{-1})\}_{i=1}^N \,,
\end{equation}
where the right-hand side corresponds to the evaluation of $dJ(T^{(k)};\cdot)$ on functions
that move along with the domain transformation (see \cref{fig:transpqs}).
To explain the nature of the approximation in \cref{eq:ITdJ}, we introduce a new finite dimensional space
\begin{equation}\label{eq:newbasis}
Q_N^{(k)} \coloneqq\mathrm{span}\{{\mathbf{q}}^{(k)}_i\coloneqq {\mathbf{q}}_i \circ (F^{(0)})^{-1}\circ(T^{(k)})^{-1}\}_{i=1}^N\,.
\end{equation}
\begin{figure}
\caption{Graphical example of a domain $\Omega^0$, a discretization $Q_N$, a perturbed
domain $\Omega^{(1)}
\label{fig:transpqs}
\end{figure}
The space $Q_N^{(k)}$ arises naturally if
one considers shape optimization in Lagrange coordinates \cite{HiPa15}, and satisfies
$Q_N^{(k)}\subset W^{1,\infty}(\bbR^d,\bbR^d)$. Compared to $Q_N$,
this new space $Q_N^{(k)}$ has a great computational advantage: the previously
computed matrix ${\mathbf{I}}_h$ corresponds to the representation of
the interpolation operators
\begin{equation}
\Ci_h:Q_N \to (V_h(\Omega^0))^d
\end{equation}
as well as to the representation of
\begin{equation}\label{eq:IntOpk}
\Ci_h^{(k)}:Q_N^{(k)} \to (V_h({\Omega}^{(k)}))^d
\end{equation}
with respect to the basis $\{ {\mathbf{q}}^{(k)}_i\}_{i=1}^N$ and $\{{\mathbf{v}}^{(k)}_i\}_{i=1}^M$.
This implies that the vector ${\mathbf{d}}\tilde{{\mathbf{J}}}^{(k)}_{{\mathbf{q}}^{(k)}}$ (the right-hand side of \cref{eq:ITdJ})
can be assembled more easily and quickly than ${\mathbf{d}}\tilde{{\mathbf{J}}}^{(k)}_{{\mathbf{q}}}$. On top of that,
\cref{eq:IntOpk} motivates the interpretation \cref{eq:ITdJ}. Note that for given test cases,
the asymptotic rate of convergence in this approximation can be explicitly computed.
Once the load vector for the update step has been assembled, we need to assemble the stiffness matrix.
In principle, this should be done as well with respect to the new discretization
\cref{eq:newbasis}. Alternatively, one can redefine the inner
product of $\Cx$ so that
$$\big( {\mathbf{q}}_i,{\mathbf{q}}_j \big)_\Cx = \big( {\mathbf{q}}_i^{(k)},{\mathbf{q}}_j^{(k)} \big)_{\Cx^{(k)}}
\quad \text{for all } i,j=1,\dots,N\,,$$
and the stiffness matrix can be computed once and for all in the initialization step.
Finally, one computes the vector ${\mathbf{d}}{\mathbf{t}}^{(k)}$, which contains the coefficients of the
series expansion of $dT^{(k)}$ with respect to the basis of \cref{eq:newbasis}. Computing
${\mathbf{I}}_h {\mathbf{d}}{\mathbf{t}}^{(k)}$ returns the coefficients of the interpolant of $dT^{(k)}$ in
$(V_h({\Omega}^{(k)}))^d$. Due to the isoparametric nature of its basis $\{{\mathbf{v}}^{(k)}_i\}_{i=1}^M$,
these coefficients equal those resulting from interpolating $dT^{(k)}\circ F^{(k)}$ onto
$(V_h(\begin{tikz} \draw[thick] (0,0)--(0.07,0)--(0.02,0.13)--(0.12,0.22)--(0.22,0.13)--(0.17,0)--(0.24,0); \end{tikz}^0))^d$. This explains why \cref{eq:Fkalg} is the algebraic counterpart of \cref{eq:Fk}.
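In terms of plain linear algebra, and with random placeholder data in place of the assembled finite element
quantities, steps 2 and 3 amount to the following few operations.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

# Placeholder data: I_h is the sparse interpolation matrix from Q_N into the
# isoparametric space, K the (SPD) stiffness matrix of the inner product on
# Q_N^(k), and dJ_v the load vector assembled in the isoparametric basis.
# In practice all three come from a finite element library.
rng = np.random.default_rng(2)
N, M = 40, 200
I_h = sp.random(M, N, density=0.05, random_state=3, format="csr")
B = rng.standard_normal((N, N))
K = B @ B.T + N * np.eye(N)
dJ_v = rng.standard_normal(M)

dJ_q = I_h.T @ dJ_v                  # pull the load vector back to the control basis
dt = np.linalg.solve(K, -dJ_q)       # coefficients of dT^(k)  (step 3)
f_update = I_h @ dt                  # increment of the coefficients f^(k)  (step 2)
\end{verbatim}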
\subsection*{Steps 4--5}
The simplest way to compute $s$ is by line search.
In this case, we have to evaluate $J$ on $T^{(k)}+sdT^{(k)}$ for various $s$.
In light of \cref{eq:Fkalg}, this means computing the state variable $u$ using
isoparametric finite elements whose coefficients are
$${\mathbf{f}}^{(k+1)}_s = {\mathbf{f}}^{(k)}+s{\mathbf{I}}_h {\mathbf{d}}{\mathbf{t}}^{(k)}\,,$$
and choosing $s$ such that the value of $J$ is minimal.
Of course, one has to enforce admissibility of $T^{(k)}+sdT^{(k)}$.
By \cref{thm:Hadamard}, it is sufficient to verify that the minimum of
\begin{equation}\label{eq:detDFk}
\det({\mathbf{D}}( F^{(k)}+s\Ci_h(dT^{(k)})))\approx \det({\mathbf{D}} (T^{(k)}+sdT^{(k)}))
\end{equation}
remains positive. However, note that small values of \cref{eq:detDFk}
may negatively affect the ellipticity constant of the BVP \cref{eq:stateconstr},
which in turn negatively affects the constant of finite element error estimates.
Finally, one rescales $dT^{(k)}\coloneqq sdT^{(k)}$, sets
$T^{(k+1)}=(\Ci + \mathrm{d}T^{(k)})\circ(T^{(k)})$, and goes back to step 2.
\section{Numerical experiments}\label{sec:numexp}
We split our numerical investigations into two parts. In the first part,
we consider a PDE-constrained shape optimization problem that
admits stable minimizers. We use this test case to investigate the
approximation properties of the algorithm presented in \cref{sec:implementation}
for different discretizations of control and state variables. In the second part,
we test our approach on more challenging shape optimization problems for which analytical
solutions are unavailable.
The numerical results are obtained with a code based on
the finite element library Firedrake
\cite{petsc-user-ref,petsc-efficient,Dalcin2011,Rathgeber2016,Luporini2016,Chaco95,MUMPS01,MUMPS02}.
\subsection{Bernoulli free-boundary problem}
\label{sec:numexpBernoulli}
We consider the Dirichlet BVP
\begin{equation}\label{eq:BPdir}
- \Delta u = 0\; \text{ in } \Omega\,,\; \quad u = u_\mathrm{in}\quad \text{ on } \partial\Omega^\mathrm{in}\,,
\;\quad u = u_\mathrm{out}\; \text{ on } \partial\Omega^\mathrm{out}\,,
\end{equation}
stated on the domain $\Omega\subset \bbR^2$ depicted in \cref{fig:BernoulliDomain}.
The goal is to find the shape of $\partial\Omega^\mathrm{in}$ so that
the Neumann trace $\dn{u}$ is equal to a prescribed function $g$ on $\partial\Omega^\mathrm{in}$.
For the sake of simplicity, we assume that the Dirichlet
data $u_\mathrm{in}$ and the Neumann data $g$ are constants, and that only
$\partial\Omega^\mathrm{in}$ is ``free'' to move.
\begin{figure}
\caption{Computational domain for the Dirichlet BVP \eqref{eq:BPdir}
\label{fig:BernoulliDomain}
\end{figure}
This Bernoulli free boundary problem can be reformulated as the following
shape optimization problem \cite{EpHa06b}
\begin{equation}\label{eq:benchmark}
\inf_{\Omega \in \mathcal{U}_{\mathrm{ad}}} \Cj(\Omega, u)=\int_\Omega \nabla u\cdot\nabla u + g^2 \,\mathrm{d}\Bx\,,
\quad \text{subject to}
\left\{\begin{array}{rll}
-\Delta u &= 0 &\text{in }\Omega\,,\\
u &= u_\textrm{in} & \text{on } \partial\Omega^{\mathrm{in}}\,,\\
u &= u_\textrm{out} & \text{on } \partial\Omega^{\mathrm{out}}\,,
\end{array}\right.
\end{equation}
whose Fr\'{e}chet derivative reads \cite{HiPa15}
\begin{equation}\label{eq:BerDer}
dJ(T;\Ct) = \int_{T(\Omega_0)}
\Div \Ct (\nabla u_{T(\Omega_0)}\cdot\nabla u_{T(\Omega_0)}+ g^2 )
-\nabla u_{T(\Omega_0)}\cdot ({\mathbf{D}}\Ct +{\mathbf{D}}\Ct^\top)\nabla u_{T(\Omega_0)}\, \,\mathrm{d}\Bx\,.
\end{equation}
In \cite{EpHa06b}, the authors have studied this shape optimization problem \cref{eq:benchmark}
in detail and, performing shape analysis in polar coordinates, have shown that
the shape Hessian is both continuous and coercive (when restricted to normal perturbations)
in the $H^{1/2}(\partial\Omega)$-norm. For this reason, minimizers of \cref{eq:benchmark} are stable.
To construct a test case for our numerical simulations, we set the optimal shape
of $\partial\Omega^\mathrm{in}$ to be a circle centered at the origin with radius $0.4$.
For such a choice of $\partial\Omega^\mathrm{in}$, the function (expressed in polar coordinates)
$$u(r, \varphi) = \ln(0.4) - \ln(r)$$
satisfies the Dirichlet BVP \eqref{eq:BPdir} with $u_\textrm{in} = 0$
and $u_\textrm{out} = u$. The Neumann data on the interior boundary is $g=2.5$; indeed, since the outward
normal of $\Omega$ on $\partial\Omega^\mathrm{in}$ points towards the origin, the Neumann trace is
$\dn{u} = -\partial_r u = 1/r = 2.5$ on the circle of radius $0.4$.
The value $J_\mathrm{min}$ of the misfit functional in the optimal shape is approximately \texttt{28.306941614057237}.
This value has been computed with quadratic isoparametric finite elements on a
sequence of nested meshes; the relative error between the value of the misfit functional
computed on the last and on the second last mesh is approximately $6\cdot 10^{-11}$.
As initial guess, we set the inner boundary $\partial\Omega^{\mathrm{in},0}$ of $\Omega^0$ to be a circle of radius 0.5 centered at (0.04, 0.05).
Note that we have repeated these numerical experiments with 3 other choices for the initial guess
$\partial\Omega^{\mathrm{in},0}$ and have obtained similar results. These alternative initial guesses are:
a circle of radius 0.47 centered at (0.07, 0.03), a circle of radius 0.55 centered at (-0.1, 0),
and a circle of radius 0.5367 centered at (-0.137, 0.03).
To discretize geometric transformations, we consider linear/quadratic/cubic tensorized Schoenberg
B-splines constructed on regular grids \cite{Sc15}.
These grids are refined uniformly (with widths ranging from $1.8\times2^{-1}$ to $1.8\times2^{-6}$)
and are contained in a square (the hold-all domain $D$) that is centered at the origin and has a corner at
(0.95, 0.95), so that $\partial\Omega^\mathrm{out}$ is not modified in the optimization process. Finite element approximations of
$u_{T(\Omega_0)}$ are computed with linear/quadratic isoparametric finite elements on a sequence
of 5 triangular meshes generated using uniform refinement in Gmsh \cite{GeRe09}. Note that
finer meshes are adjusted to fit curved boundaries.
The optimization is carried out by repeating the following simple procedure for a fixed number
of iterations: at every iteration, we
compute an $H^1_0(D)$-descent direction $dT$ by solving \cref{eq:dTQn} and choose the
optimization step $s\in\{0, 0.1, \dots, 1\}$ that minimizes $J(T+ sdT\circ{T})$.
Such a simple optimization strategy is sufficient for our numerical experiments,
although we are aware that it is not efficient.
The development of more efficient optimization strategies in the context of shape optimization
is a current topic of research. In \cite{ScSiWe16}, the authors obtained promising results with
a BFGS-type algorithm based on a Steklov--Poincar\'{e} metric. We defer to future research
the numerical comparison of optimization strategies for shape optimization.
In \cref{fig:linesearchguess1}, we plot two steps of this simple optimization strategy.
Transformations are discretized with quadratic B-splines built on the fourth grid,
whereas the state $u_{T(\Omega_0)}$ is approximated with linear finite elements on the
second coarsest mesh. Qualitatively, we observe the expected behavior of a (truncated) linesearch.
The predicted-descent line is given by $s\mapsto J(T) + dJ(T; s\,dT)$, evaluated at
$s=0, 0.1, 0.2, 0.3$, and is tangential to $s\mapsto J(T+ s\,dT\circ{T})$ at $s=0$. This shows empirically that
formula \cref{eq:BerDer} is correct.
\begin{figure}
\caption{Evolution of $J$ on two optimization steps. The second linesearch starts at the minimum
of the first one. The predicted descent is computed evaluating $dJ$ on the (selected) descent direction.}
\label{fig:linesearchguess1}
\end{figure}
Next, we investigate to which accuracy we can solve shape optimization problems.
In particular, we study the impact of the discretization (polynomial-)degree and
(refinement-)level used for the control and the state.
This is done systematically by keeping certain discretization parameters fixed
and varying the remaining ones.
To simplify the exposition, we use the term \emph{grid} for the
discretization of the control (that is, of geometric transformations),
whereas the term \emph{mesh} refers to the discretization of the state.
First, we fix the control discretization to quadratic B-splines built on the finest grid.
For each finite element mesh (previously generated with Gmsh), we perform 101 steps of our
simple optimization strategy employing linear FE approximations of the state $u_{T(\Omega_0)}$.
Although not displayed,
the sequence of shapes always converges qualitatively to the optimum. For a quantitative comparison,
we store the minimum of $J$ for every linesearch and plot the absolute error with respect to
$J_\mathrm{min}$ in \cref{fig:convergencehistory_linearfemguess1}
(we plot a convergence history for each mesh). Henceforth, we use the notation \texttt{Jerr}
to refer to this absolute error.
\begin{figure}
\caption{Evolution of \texttt{Jerr}
\label{fig:convergencehistory_linearfemguess1}
\end{figure}
We observe that the convergence history lines
saturate at different levels. In particular, the saturation level decays algebraically with respect
to the mesh width. To further investigate the impact of FE approximations on shape optimization,
we repeat this experiment with quadratic isoparametric FEs.
Again, we observe algebraic convergence with respect to the mesh width, but at higher convergence rate
(note the difference in the y-axis scale). In order to reach the saturation level on finer meshes,
more optimization steps have to be carried out. This issue has been observed in \cite{HiPa15} as well, and
is probably due to the simplicity of the optimization strategy. Before proceeding further, let us point out that
the saturation level worsens if quadratic isoparametric FEs are replaced by quadratic affine FEs;
see \cref{fig:convergencehistory_11vs12vs22femguess1}.
\begin{figure}
\caption{Evolution of \texttt{Jerr}
\label{fig:convergencehistory_11vs12vs22femguess1}
\end{figure}
In the previous experiments, we kept the discretization of transformations fixed.
Now, we test different discretizations.
In \cref{fig:meshsplineconvergenceguess1}, we consider two different discretization degrees
of the control variable (linear/cubic B-splines).
For each of these discretization degrees, we consider 6 grids and 5 meshes.
The state is approximated once with linear and once with quadratic isoparametric FEs.
For each combination, we plot \texttt{Jerr} after 200 iterations.
We observe that both the discretization of the control and of the state
have an impact on \texttt{Jerr} (the algebraic decay with respect to the FE mesh width is conspicuous).
\begin{figure}
\caption{Value of \texttt{Jerr}
\label{fig:meshsplineconvergenceguess1}
\end{figure}
In \cref{fig:splineconvergenceguess1} (left), we consider the finest level and highest degree
of the control discretization.
We plot \texttt{Jerr} (after 200 iterations) versus the FE mesh index and consider both linear and quadratic
isoparametric FE approximations of the state.
The algebraic rates of convergence for linear and quadratic FEs read $1.97$ and $3.24$, respectively.
These rates are in line with our expectations because duality techniques can be employed
to prove superconvergence in the FE approximation of the quadratic functional $J$ \cite{HiPaSa15}.
\begin{figure}
\caption{Details from \cref{fig:meshsplineconvergenceguess1}
\label{fig:splineconvergenceguess1}
\end{figure}
\cref{fig:meshsplineconvergenceguess1} shows also that \texttt{Jerr} is almost entirely dominated
by the error in the state when transformations are discretized with cubic B-splines.
This is better highlighted in \cref{fig:splineconvergenceguess1} (right).
There, we fix the state discretization to quadratic isoparametric FEs on the
finest mesh and plot \texttt{Jerr} versus the grid index for linear, quadratic,
and cubic B-splines. In particular, grid refinement for cubic B-splines
has only a very mild impact on \texttt{Jerr} (because \texttt{Jerr} is then dominated by the FE approximation error of the state).
We observe that the algebraic convergence rate for grid refinement of linear B-splines is approximately
2.45. This rate is higher than our expectations (a perturbation analysis via the Strang lemma
would give rise to only linear convergence).
Finally, we consider the highest discretization level of the control and investigate whether its
discretization degree has an impact on the rate of convergence of \texttt{Jerr} with respect
to the FE discretization of the state.
Let us recall that the regularity
of the state on a perturbed domain depends, in principle, on the regularity of the domain.
If the domain is perturbed with less smooth transformations, the regularity of the state may
not be preserved. This may have a negative impact on
the FE approximation. In \cref{fig:meshconvergence_impactofbsplineregularityguess1},
we plot \texttt{Jerr} versus FE mesh refinement (both for linear and quadratic FEs) when
transformations are discretized with linear B-splines.
When the state is approximated with quadratic isoparametric FEs, the control discretization error
is negligible only on much finer grids.
However, it seems that the FE convergence rate is not affected by the
discretization degree of the control (in \cref{fig:splineconvergenceguess1}, left,
the control is discretized with cubic B-splines, and a similar convergence rate is observed).
\begin{figure}
\caption{Surprisingly, discretizing transformations with linear B-splines instead of cubic
does not affect the rate of convergence of \texttt{Jerr}
\label{fig:meshconvergence_impactofbsplineregularityguess1}
\end{figure}
\subsection{Energy minimization in Stokes flow}
\label{subsec:stokes}
Next, we consider the shape optimization of a 2D obstacle $\Omega_o$ embedded in a viscous fluid $\Omega_c$
(see \cref{fig:stokescp}).
This problem has been thoroughly investigated in \cite{ScSi15}.
The state variables are the velocity ${\mathbf{u}}$ and the pressure $p$ of the fluid. These are
governed by Stokes' equations, whose weak formulation reads:
find ${\mathbf{u}}\in H^1(\Omega_c;\bbR^2)$ and $p\in L^2_0(\Omega_c)=\{p \in L^2(\Omega_c) : \int_{\Omega_c} p \,\mathrm{d}\Bx = 0\}$ such that
${\mathbf{u}}\vert_{\Gamma_{\text{in}}} = {\mathbf{g}}$, ${\mathbf{u}}\vert_{\Gamma_{\text{wall}}} = 0$, and
\begin{equation} \label{eq:stokesweak}
\int_{\Omega_c} \sum_{i=1}^2\nabla u_i\cdot \nabla v_i - p \Div {\mathbf{v}} + q\Div {\mathbf{u}} \,\mathrm{d}\Bx = 0
\end{equation}
for all ${\mathbf{v}}\in H^1(\Omega_c;\bbR^2)$ and $q\in L^2_0(\Omega_c)$ such that ${\mathbf{v}}\vert_{\Gamma_{\text{in}}} = 0$ and ${\mathbf{v}}\vert_{\Gamma_{\text{wall}}} = 0$.
It is known that \cref{eq:stokesweak} admits a unique solution if the computational domain
$\Omega_c$ is Lipschitz \cite{GiRa86, Te01}.
\begin{figure}
\caption{Computational domain for the Stokes BVP \eqref{eq:stokesweak}
\label{fig:stokescp}
\end{figure}
The energy dissipated in the fluid due to shear forces is given by
\begin{equation}\label{eq:stokesJ}
\Cj(\Omega_c, {\mathbf{u}}, p) = \int_{\Omega_c} \sum_{i=1}^2 \nabla u_i^\top \nabla u_i \,\mathrm{d}\Bx \equiv\int_{\Omega_c} \lVert{\mathbf{D}}{\mathbf{u}}\rVert_{F}^2 \,\mathrm{d}\Bx\,,
\end{equation}
and its shape derivative is given by\footnote{We believe that this volume based formula
(which can be computed with \cite{Sc16})
is already known to the shape optimization community, although we did not manage to
find it explicitly in available publications.}
\begin{multline} \label{eq:stokedJ}
d\Cj(T, \Ct)= \int_{T(\Omega_c)}
\sum_{i=1}^2 \left( \nabla u_i^\top \nabla u_i \Div\Ct-\nabla u_i^\top({\mathbf{D}}{\Ct}^\top + {\mathbf{D}}{\Ct} ) \nabla u_i\right)\\
+ 2\tr({\mathbf{D}} {\mathbf{u}} {\mathbf{D}}{\Ct} )p - 2p\Div {\mathbf{u}} \Div\Ct\,\mathrm{d}\Bx\,.
\end{multline}
Minimizing \eqref{eq:stokesJ} subject to \eqref{eq:stokesweak} alone is problematic because
energy dissipation can be reduced by shrinking or removing the obstacle.
Therefore, we introduce two additional constraints: we require the area and the
barycentre of the obstacle to remain constant.
Similarly to \cite{ScSi15}, we define the functionals
\begin{equation}
\begin{aligned}
\Ca(T) &\coloneqq \int_{T(\Omega_c)} 1 \,\mathrm{d}\Bx - \int_{\Omega_c} 1 \,\mathrm{d}\Bx\,, \\
\Cb_i(T) &\coloneqq \int_{T(\Omega_c)} x_i \,\mathrm{d}\Bx - \int_{\Omega_c} x_i \,\mathrm{d}\Bx , \text{ for } i=1,2\,.
\end{aligned}
\end{equation}
For the sake of simplicity, we impose the constraints $\Ca(T)=0$ and $\Cb_i(T)=0$, $i=1,2$,
by means of quadratic penalty terms, that is, we replace the functional
\eqref{eq:stokesJ} with
\begin{equation}\label{eq:stokespenalized}
\Cj_p(T) = \Cj(T) + \frac{\mu_0}{2} \Ca^2(T) + \sum_{i=1}^2 \frac{\mu_i}{2} \Cb_i^2(T)\,,
\end{equation}
where $\Ca^2(T)\coloneqq(\Ca(T))^2$, $\Cb_i^2(T)\coloneqq (\Cb_i(T))^2$, and
$0\leq \mu_i\in\bbR$ for $i=0,1,2$.
The shape derivatives of the squared constraints are given by
\begin{equation}
\begin{aligned}
d(\Ca^2)(T, \Ct) &= 2 \Ca(T) \int_{T(\Omega_c)} \Div \Ct\,\mathrm{d}\Bx\,, \\
d(\Cb_i^2)(T, \Ct) &= 2 \Cb_i(T) \int_{T(\Omega_c)} \Div (x_i \Ct) \,\mathrm{d}\Bx, \text{ for } i=1, 2\,.
\end{aligned}
\end{equation}
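For completeness, combining \eqref{eq:stokedJ} with these formulas via the chain rule yields the shape derivative of the penalized functional \eqref{eq:stokespenalized},
\begin{equation}
d\Cj_p(T, \Ct) = d\Cj(T, \Ct) + \mu_0\, \Ca(T) \int_{T(\Omega_c)} \Div \Ct\,\mathrm{d}\Bx
+ \sum_{i=1}^2 \mu_i\, \Cb_i(T) \int_{T(\Omega_c)} \Div (x_i \Ct)\,\mathrm{d}\Bx\,,
\end{equation}
which is the quantity needed by any gradient-based optimization of \eqref{eq:stokespenalized}.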
The state variables are discretized with Taylor-Hood P2-P1 finite elements on
a triangular mesh. This discretization is stable \cite{ElSiWa14}.
The resulting linear system can then be solved using GMRES and a block-diagonal preconditioner
based on the stiffness matrix and on the mass matrix for the velocity- and the pressure-block,
respectively \cite{ElSiWa14}.
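The following self-contained sketch (with synthetic matrices standing in for the assembled Taylor-Hood blocks; it is not the code used for the experiments) illustrates this solver strategy: GMRES applied to the saddle-point system, preconditioned by the block-diagonal operator built from the velocity stiffness matrix $A$ and the pressure mass matrix $M_p$.
\begin{verbatim}
# Sketch only: GMRES on a Stokes-type saddle-point system with the
# block-diagonal preconditioner diag(A, M_p).  A, B, M_p are synthetic
# stand-ins for the velocity stiffness, divergence and pressure mass
# matrices assembled by the finite element code.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n_u, n_p = 200, 50                                   # toy problem sizes
A = sp.random(n_u, n_u, density=0.02, random_state=0)
A = A + A.T + n_u * sp.eye(n_u)                      # symmetric positive definite
B = sp.hstack([sp.eye(n_p),
               sp.random(n_p, n_u - n_p, density=0.1, random_state=1)])
Mp = sp.eye(n_p)                                     # pressure mass matrix stand-in

K = sp.bmat([[A, B.T], [B, None]], format="csc")     # Stokes system matrix
rhs = np.random.default_rng(2).standard_normal(n_u + n_p)

A_lu, Mp_lu = spla.splu(A.tocsc()), spla.splu(Mp.tocsc())

def apply_precond(x):
    # apply diag(A, M_p)^{-1} blockwise
    return np.concatenate([A_lu.solve(x[:n_u]), Mp_lu.solve(x[n_u:])])

P = spla.LinearOperator(K.shape, matvec=apply_precond)
x, info = spla.gmres(K, rhs, M=P, restart=300)
print("GMRES convergence flag (0 = converged):", info)
\end{verbatim}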
\begin{figure}
\caption{\emph{Left:}\label{fig:stokes-shapes}}
\end{figure}
The control is discretized with cubic B-splines on a rectangular
grid that covers the entire channel.
The optimized shape in \cref{fig:stokes-shapes} is qualitatively similar to the one obtained in \cite{ScSi15},
although the shape optimization algorithm used in \cite{ScSi15} relies on the
boundary-based formulation of \cref{eq:stokedJ}.
For this example we have used a simple steepest descent method with line search; the number of optimization steps can be drastically reduced by employing
more sophisticated optimization algorithms \cite{ScSi15}.
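A minimal sketch of such a descent loop is given below; an Armijo backtracking rule stands in for the line search, and the callables \texttt{J} and \texttt{dJ} are placeholders for the evaluation of the reduced functional \eqref{eq:stokespenalized} and of its gradient with respect to the B-spline control coefficients (each evaluation hides a state and an adjoint solve).
\begin{verbatim}
# Sketch of steepest descent with an Armijo backtracking line search on the
# vector q of B-spline control coefficients.  J and dJ are placeholders for
# the reduced functional and its gradient.
import numpy as np

def steepest_descent(q0, J, dJ, maxiter=100, alpha0=1.0, c1=1e-4, gtol=1e-8):
    q = q0.copy()
    for _ in range(maxiter):
        Jq, g = J(q), dJ(q)
        if np.linalg.norm(g) < gtol:
            break
        alpha = alpha0
        for _ in range(30):                      # backtracking line search
            if J(q - alpha * g) <= Jq - c1 * alpha * g.dot(g):
                break
            alpha *= 0.5
        q = q - alpha * g
    return q

# toy usage: a quadratic functional standing in for the reduced J_p
print(steepest_descent(np.ones(4), lambda q: 0.5 * q.dot(q), lambda q: q))
\end{verbatim}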
\subsection{Compliance minimization under linear elasticity}
We conclude this section on numerical experiments with another classical example from shape optimization:
compliance minimization of a cantilever subject to a given load (see \cref{fig:elasticitycp}).
\begin{figure}
\caption{Computational domain for the elasticity BVP \eqref{eq:elasticityweak}.
\label{fig:elasticitycp}}
\end{figure}
The structural behaviour of the cantilever is modelled by linear elasticity.
In particular, we consider the following variational problem: find ${\mathbf{u}}\in H^1(\Omega;\bbR^2)$ such that
${\mathbf{u}}\vert_{\Gamma_1} = 0$ and
\begin{equation} \label{eq:elasticityweak}
\int_{\Omega} (A e({\mathbf{u}})):\nabla {\mathbf{v}} \,\mathrm{d}\Bx - \int_{\Gamma_2} {\mathbf{g}} \cdot {\mathbf{v}} \,\mathrm{d}S = 0
\end{equation}
for all ${\mathbf{v}}\in H^1(\Omega;\bbR^2)$ with ${\mathbf{v}}\vert_{\Gamma_1} = 0$.
In \cref{eq:elasticityweak}, the symbol $:$ denotes the Frobenius inner product of matrices,
$e({\mathbf{u}}) = \mathrm{sym}(\nabla {\mathbf{u}})= \frac{1}{2}(\nabla {\mathbf{u}} + \nabla {\mathbf{u}}^\top)$ denotes the
strain tensor, and $A$ encodes Hooke's law of the material.
It is well known that \cref{eq:elasticityweak} admits a unique stable solution for compatible data
${\mathbf{g}}$ \cite[Page 22]{Ci02}.
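For an isotropic material (as in the experiment below, which prescribes a Young's modulus $E$ and a Poisson ratio $\nu$), the elasticity tensor acts as
\begin{equation}
A e({\mathbf{u}}) = 2\mu\, e({\mathbf{u}}) + \lambda \tr\big(e({\mathbf{u}})\big)\, \mathrm{Id}\,, \qquad
\mu = \frac{E}{2(1+\nu)}\,, \quad \lambda = \frac{E\nu}{(1+\nu)(1-2\nu)}\,,
\end{equation}
where the expression for $\lambda$ corresponds to the plane strain model (the plane stress variant uses a modified $\lambda$).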
In this numerical experiment, we minimize the compliance
\begin{equation}\label{eq:elasticityJ}
\Cj(\Omega, {\mathbf{u}}) = \int_{\Omega} (A e({\mathbf{u}})): e({\mathbf{u}}) \,\mathrm{d}\Bx\,,
\end{equation}
whose shape derivative reads (the formula can be derived by closely following \cite[Thm 7.7]{St14})
\begin{equation}
\begin{aligned}
d\Cj(T,\Ct) =& \int_{T(\Omega)} A\mathrm{sym}({\mathbf{D}} {\mathbf{u}} {\mathbf{D}} \Ct):\mathrm{sym}({\mathbf{D}} {\mathbf{u}})\,\mathrm{d}\Bx\\
& +\int_{T(\Omega)} A\mathrm{sym}({\mathbf{D}} {\mathbf{u}}):\mathrm{sym}({\mathbf{D}} {\mathbf{u}}{\mathbf{D}} \Ct)\,\mathrm{d}\Bx\\
& +\int_{T(\Omega)} A\mathrm{sym}({\mathbf{D}} {\mathbf{u}}):\mathrm{sym}({\mathbf{D}} {\mathbf{u}})\Div \Ct\,\mathrm{d}\Bx\,.
\end{aligned}
\end{equation}
Similarly to \cref{eq:stokespenalized}, we enforce a volume constraint by adding a penalty function
to \cref{eq:elasticityJ}.
We discretize the state variables with piecewise linear Lagrangian finite elements on a triangular mesh;
the control is discretized with cubic B-splines on a rectangular grid covering the cantilever.
\begin{figure}
\caption{\emph{Top left}\label{fig:elasticity-optimal}}
\end{figure}
We use the same material parameters as in \cite{AlPa06} (Young's modulus $E=15$, Poisson ratio $\nu=0.35$) and obtain a qualitatively similar shape, which is shown in \cref{fig:elasticity-optimal}.
\section{Conclusion}
We have formulated shape optimization problems in terms of deformation diffeomorphisms.
This perspective simplifies the treatment of PDE-constrained shape optimization problems
because it couples naturally with isoparametric finite element discretization of the PDE-constraint.
In particular, it retains the asymptotic convergence behavior of higher-order FE discretizations and allows
PDE-constrained shape optimization problems to be solved to high accuracy, as confirmed by
detailed numerical experiments.
This shape optimization method can be implemented in standard finite element software and
used to tackle challenging shape optimization problems that stem from industrial applications.
The approach advocated is modular and can be combined with more advanced optimization algorithms,
such as that of Schulz et al.~\cite{ScSiWe16}; research in this vein will form the basis of
future work.
\end{document}
\begin{document}
\title{Rigid character groups, Lubin-Tate theory, and $(\varphi,\Gamma)$-modules}
\author{Laurent Berger}
\address{Laurent Berger \\ UMPA de l'ENS de Lyon \\
UMR 5669 du CNRS \\ IUF \\ Lyon \\ France}
\email{[email protected]}
\urladdr{http://perso.ens-lyon.fr/laurent.berger/}
\author{Peter Schneider}
\address{Peter Schneider \\ Universit\"at M\"unster \\
Mathematisches Institut \\ M\"unster \\ Germany}
\email{[email protected]}
\urladdr{http://wwwmath.uni-muenster.de/u/schneider/}
\author{Bingyong Xie}
\address{Bingyong Xie \\ Department of Mathematics \\
East China Normal University \\ Shanghai \\ PR China}
\email{[email protected]}
\urladdr{http://math.ecnu.edu.cn/~byxie}
\thanks{We acknowledge support by the DFG Sonderforschungsbereich 878 at M\"unster}
\date{\today}
\subjclass[2010]{11F; 11S; 14G; 22E; 46S}
\begin{abstract}
The construction of the $p$-adic local Langlands correspondence for $\operatorname{GL}_2(\mathbf{Q}_p)$ uses in an essential way Fontaine's theory of cyclotomic $(\varphi,\Gamma)$-modules. Here \emph{cyclotomic} means that $\Gamma = \operatorname{Gal}(\mathbf{Q}_p(\mu_{p^\infty})/\mathbf{Q}_p)$ is the Galois group of the cyclotomic extension of $\mathbf{Q}_p$. In order to generalize the $p$-adic local Langlands correspondence to $\operatorname{GL}_2(L)$, where $L$ is a finite extension of $\mathbf{Q}_p$, it seems necessary to have at our disposal a theory of Lubin-Tate $(\varphi,\Gamma)$-modules. Such a generalization has been carried out to some extent, by working over the $p$-adic open unit disk, endowed with the action of the endomorphisms of a Lubin-Tate group. The main idea of our article is to carry out a Lubin-Tate generalization of the theory of cyclotomic $(\varphi,\Gamma)$-modules in a different fashion. Instead of the $p$-adic open unit disk, we work over a character variety, that parameterizes the locally $L$-analytic characters on $o_L$. We study $(\varphi,\Gamma)$-modules in this setting, and relate some of them to what was known previously.
\end{abstract}
\maketitle
\setlength{\baselineskip}{16pt}
\tableofcontents
\section*{Introduction}
The construction of the $p$-adic local Langlands correspondence for $\operatorname{GL}_2(\mathbf{Q}_p)$ (see \cite{ICBM}, \cite{CGL2}, and \cite{LBGL}) uses in an essential way Fontaine's theory \cite{Fon} of cyclotomic $(\varphi,\Gamma)$-modules. Here \emph{cyclotomic} means that $\Gamma = \operatorname{Gal}(\mathbf{Q}_p(\mu_{p^\infty})/\mathbf{Q}_p)$ is the Galois group of the cyclotomic extension of $\mathbf{Q}_p$. These $(\varphi,\Gamma)$-modules are modules over various rings of power series (denoted by $\mathscr{E}$, $\mathscr{E}^\dagger$, and $\mathscr{R}$) that are constructed, by localizing and completing, from the ring $\mathbf{Q}_p \otimes_{\mathbf{Z}_p} \mathbf{Z}_p[[X]]$ of bounded functions on the $p$-adic open unit disk $\mathfrak{B}$. The Frobenius map $\varphi$ and the action of $\Gamma$ on these rings come from the action of the multiplicative monoid $\mathbf{Z}_p \setminus \{0\}$ on $\mathfrak{B}$. The relevance of these objects comes from the following theorem (which combines results from \cite{Fon}, \cite{CC98}, and \cite{KSF}).
\begin{theorec}
\label{cyclorecall}
There is an equivalence of categories between the category of $p$-adic representations of $G_{\mathbf{Q}_p}$ and the categories of \'etale $(\varphi,\Gamma)$-modules over either $\mathscr{E}$, $\mathscr{E}^\dagger$, or $\mathscr{R}$.
\end{theorec}
In order to generalize the $p$-adic local Langlands correspondence to $\operatorname{GL}_2(L)$, where $L$ is a finite extension of $\mathbf{Q}_p$, it seems necessary to have at our disposal a theory of Lubin-Tate $(\varphi,\Gamma)$-modules, where \emph{Lubin-Tate} means that $\Gamma$ is now the Galois group of the field generated over $L$ by the torsion points of a Lubin-Tate group $LT$ attached to a uniformizer of $L$. Such a generalization has been carried out to some extent (see \cite{Fon}, \cite{KR}, \cite{FX} and \cite{PGMLAV}). The resulting $(\varphi,\Gamma)$-modules are modules over rings $\mathscr{E}_L(\mathfrak{B})$, $\mathscr{E}_L^\dagger(\mathfrak{B})$, and $\mathscr{R}_L(\mathfrak{B})$ that are constructed from the ring of bounded analytic functions on the rigid analytic open unit disk $\mathfrak{B}_{/L}$ over $L$, with an $o_L \setminus \{0\}$-action given by the endomorphisms of $LT$. If $M$ is a $(\varphi,\Gamma)$-module over $\mathscr{R}_L(\mathfrak{B})$, the action of $\Gamma$ is differentiable, and we say that $M$ is $L$-analytic if the derived action of $\operatorname{Lie}(\Gamma)$ is $L$-bilinear. We can also define the notion of an $L$-analytic representation of $G_L$. In this setting, the following results are known (see \cite{KR} for (i) and \cite{PGMLAV} for (ii)).
\begin{theorec}
\label{introrecall}
(i) There is an equivalence of categories between the category of $L$-linear continuous representations of $G_L$ and the category of \'etale $(\varphi,\Gamma)$-modules over $\mathscr{E}_L(\mathfrak{B})$.
(ii) There is an equivalence of categories between the category of $L$-linear $L$-analytic representations of $G_L$ and the category of \'etale $L$-analytic $(\varphi,\Gamma)$-modules over $\mathscr{R}_L(\mathfrak{B})$.
\end{theorec}
The main idea of our article is to carry out a Lubin-Tate generalization of the theory of cyclotomic $(\varphi,\Gamma)$-modules in a different fashion. The open unit disk $\mathfrak{B}_{/\mathbf{Q}_p}$, with its $\mathbf{Z}_p \setminus \{0\}$-action, is naturally isomorphic to the space of locally $\mathbf{Q}_p$-analytic characters on $\mathbf{Z}_p$. Indeed, if $K$ is an extension of $\mathbf{Q}_p$, a point $z \in \mathfrak{B}(K)$ corresponds to the character $\kappa_z : a \mapsto (1+z)^a$ and all $K$-valued continuous characters are of this form. In the Lubin-Tate setting, there exists (\cite{ST}) a rigid analytic group variety $\mathfrak{X}$ over $L$, whose closed points in an extension $K/L$ parameterize locally $L$-analytic characters $o_L \to K^\times$. The ring $\mathcal{O}^b_L(\mathfrak{X})$ of bounded analytic functions on $\mathfrak{X}$ is endowed with an action of the multiplicative monoid $o_L \setminus \{0\}$, and can also be localized and completed. This way we get new analogs $\mathscr{E}_L(\mathfrak{X})$, $\mathscr{E}_L^\dagger(\mathfrak{X})$, and $\mathscr{R}_L(\mathfrak{X})$ of the rings on which the cyclotomic $(\varphi,\Gamma)$-modules are defined, and a corresponding theory of $(\varphi,\Gamma)$-modules.
Note that the varieties $\mathfrak{B}_{/L}$ and $\mathfrak{X}$ are quite different. For instance, the ring $\mathcal{O}_L(\mathfrak{B})$ of rigid analytic functions on $\mathfrak{B}_{/L}$ is isomorphic to the ring of power series in one variable with coefficients in $L$ converging on $\mathfrak{B}(\mathbf{C}_p)$, and is hence a Bezout ring (it is the same for an ideal of $\mathcal{O}_L(\mathfrak{B})$ to be closed, finitely generated, or principal). If $L \neq \mathbf{Q}_p$, then in the ring $\mathcal{O}_L(\mathfrak{X})$ there is a finitely generated ideal that is not principal. It is still true, however, that $\mathcal{O}_L(\mathfrak{X})$ is a Pr\"ufer ring (it is the same for an ideal of $\mathcal{O}_L(\mathfrak{X})$ to be closed, finitely generated, or invertible). The beginning of our paper is therefore devoted to establishing geometric properties of $\mathfrak{X}$, and the corresponding properties of $\mathscr{E}_L(\mathfrak{X})$, $\mathscr{E}_L^\dagger(\mathfrak{X})$, and $\mathscr{R}_L(\mathfrak{X})$.
Although the varieties $\mathfrak{B}_{/L}$ and $\mathfrak{X}$ are not isomorphic, they become isomorphic over $\mathbf{C}_p$. This gives rise to an isomorphism $\mathscr{R}_{\mathbf{C}_p}(\mathfrak{B}) = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{X})$ (and likewise for $\mathscr{E}$ and $\mathscr{E}^\dagger$; the subscript ${-}_{\mathbf{C}_p}$ denotes the extension of scalars from $L$ to $\mathbf{C}_p$). In addition, there is an action of $G_L$ on those rings and $\mathscr{R}_L(\mathfrak{B}) = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{X})^{G_L}$. There is also a ``twisted'' action of $G_L$ and $\mathscr{R}_L(\mathfrak{X}) = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{B})^{G_L,*}$. These isomorphisms make it possible to compare the theories of $(\varphi,\Gamma)$-modules over $\mathscr{R}_L(\mathfrak{B})$ and $\mathscr{R}_L(\mathfrak{X})$, by extending scalars to $\mathbf{C}_p$ and descending. We construct two functors $M \mapsto M_{\mathfrak{X}}$ and $N \mapsto N_{\mathfrak{B}}$ from the category of $(\varphi,\Gamma)$-modules over $\mathscr{R}_L(\mathfrak{B})$ to the category of $(\varphi,\Gamma)$-modules over $\mathscr{R}_L(\mathfrak{X})$ and vice versa.
\begin{theoa}
\label{theoA}
The functors $M \mapsto M_{\mathfrak{X}}$ and $N \mapsto N_{\mathfrak{B}}$ from the category of $L$-analytic $(\varphi,\Gamma)$-modules over $\mathscr{R}_L(\mathfrak{B})$ to the category of $L$-analytic $(\varphi,\Gamma)$-modules over $\mathscr{R}_L(\mathfrak{X})$ and vice versa, are mutually inverse and give rise to equivalences of categories.
\end{theoa}
\begin{theob}
\label{theoB}
The functors $M \mapsto M_{\mathfrak{X}}$ and $N \mapsto N_{\mathfrak{B}}$ preserve degrees, and give equivalences of categories between the categories of \'etale objects on both sides. There is an equivalence of categories between the category of $L$-linear $L$-analytic representations of $G_L$ and the category of \'etale $L$-analytic $(\varphi,\Gamma)$-modules over $\mathscr{R}_L(\mathfrak{X})$.
\end{theob}
One way of constructing cyclotomic $(\varphi,\Gamma)$-modules over $\mathscr{R}$ is to start from a filtered $\varphi$-module $D$, and to perform a modification of $\mathscr{R} \otimes_{\mathbf{Q}_p} D$ according to the filtration on $D$ (see \cite{BEQ}). The generalization of this construction to $\mathscr{R}_L(\mathfrak{B})$ has been done in \cite{KR} and \cite{KFC}: They attach to every filtered $\varphi_L$-module $D$ over $L$ a $(\varphi_L,\Gamma_L)$-module $\mathcal{M}_{\mathfrak{B}}(D)$ over $\mathcal{O}_L(\mathfrak{B})$ (we can then extend scalars to $\mathscr{R}_L(\mathfrak{B})$). We carry out the corresponding construction over $\mathcal{O}_L(\mathfrak{X})$, and show the following.
\begin{theoc}
\label{theoC}
(i) The functor $D \mapsto \mathcal{M}_{\mathfrak{X}}(D)$ provides an equivalence of categories between the category of filtered $\varphi_L$-modules over $L$ and the category $\operatorname{Mod}^{\varphi_L,\Gamma_L,\mathrm{an}}_{/\mathfrak{X}}$ of $(\varphi_L,\Gamma_L)$-modules $M$ over $\mathcal{O}_L(\mathfrak{X})$ defined in \ref{defmodw}.
(ii) The functors $D \mapsto \mathscr{R}_L(\mathfrak{B}) \otimes_{\mathcal{O}_L(\mathfrak{B})} \mathcal{M}_{\mathfrak{B}}(D)$ and $D \mapsto \mathscr{R}_L(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X})} \mathcal{M}_{\mathfrak{X}}(D)$ are compatible with the equivalence of categories of Theorem A.
\end{theoc}
\subsection*{Notations and preliminaries}
Let $\mathbf{Q}_p \subseteq L \subseteq K \subseteq \mathbf{C}_p$ be fields with $L/\mathbf{Q}_p$ finite Galois and $K$ complete. Let $d := [L:\mathbf{Q}_p]$, $o_L$ the ring of integers of $L$, and $\pi_L \in o_L$ a fixed prime element. Let $q$ be the cardinality of $k_L := o_L/\pi_L$ and let $e$ be the ramification index of $L$. We always use the absolute value $|\ |$ on $\mathbf{C}_p$ which is normalized by $|p| = p^{-1}$.
For any locally $L$-analytic manifold $M$ we let $M_0$ denote $M$ but viewed as a locally $\mathbf{Q}_p$-analytic manifold.
For any rigid analytic variety $\mathfrak{Y}$ over $L$, let $\mathcal{O}_K(\mathfrak{Y})$ denote the ring of $K$-valued global holomorphic functions on $\mathfrak{Y}$. Suppose that $\mathfrak{Y}$ is reduced. Then a function $f \in \mathcal{O}_K(\mathfrak{Y})$ is called bounded if there is a real constant $C > 0$ such that $|f(y)| < C$ for any rigid point $y \in \mathfrak{Y}(\mathbf{C}_p)$.
\begin{remark}
Let $A$ be an affinoid algebra over $K$ whose base extension $A_{\mathbf{C}_p} := A \widehat{\otimes}_K \mathbf{C}_p$ is a reduced affinoid algebra over $\mathbf{C}_p$. Then $A$ is reduced as well, and the corresponding supremum norms $\|\ \|_{\sup,K}$ on $A$ and $\|\ \|_{\sup, \mathbf{C}_p}$ on $A_{\mathbf{C}_p}$ are characterized as being the only power-multiplicative complete algebra norms on $A$ and $A_{\mathbf{C}_p}$, respectively (\cite{BGR} Lemma 3.8.3/3 and Thm.\ 6.2.4/1). It follows that $\|\ \|_{\sup, \mathbf{C}_p} \mid_A = \|\ \|_{\sup,K}$.
\end{remark}
On the subring
\begin{equation*}
\mathcal{O}_K^b(\mathfrak{Y}) := \{f \in \mathcal{O}_K(\mathfrak{Y}) : f\ \text{is bounded}\}
\end{equation*}
we have the supremum norm
\begin{equation*}
\|f\|_\mathfrak{Y} = \sup_{y \in \mathfrak{Y}(\mathbf{C}_p)} |f(y)|.
\end{equation*}
One checks that $(\mathcal{O}_K^b(\mathfrak{Y}), \|\ \|_\mathfrak{Y})$ is a Banach $K$-algebra.
\section{Lubin-Tate theory and the character variety}
\subsection{The ring of global functions on a one dimensional smooth quasi-Stein space}\label{sec:prufer}
Let $\mathfrak{Y}$ be a rigid analytic variety over $K$ which is smooth, one dimensional, and quasi-Stein. In this section we will establish the basic ring theoretic properties of the ring $\mathcal{O}_K(\mathfrak{Y})$. For simplicity we always will assume that $\mathfrak{Y}$ is connected in the sense that $\mathcal{O}_K(\mathfrak{Y})$ is an integral domain. We recall that the quasi-Stein property means that $\mathfrak{Y}$ has an admissible covering $\mathfrak{Y} = \bigcup_{n} \mathfrak{Y}_n$ by an increasing sequence $\mathfrak{Y}_1 \subseteq \ldots \subseteq \mathfrak{Y}_n \subseteq \ldots$ of affinoid subdomains such that the restriction maps $\mathcal{O}_K(\mathfrak{Y}_{n+1}) \longrightarrow \mathcal{O}_K(\mathfrak{Y}_n)$, for any $n \geq 1$, have dense image.
An effective divisor on $\mathfrak{Y}$ is a function $\Delta : \mathfrak{Y} \longrightarrow \mathbf{Z}_{\geq 0}$ such that, for any affinoid subdomain $\mathfrak{Z} \subseteq \mathfrak{Y}$, the set $\{x \in \mathfrak{Z} : \Delta(x) > 0\}$ is finite. The support $\operatorname{supp}(\Delta) := \{x \in \mathfrak{Y} : \Delta(x) > 0\}$ of a divisor is always a countable subset of $\mathfrak{Y}$. The set of effective divisors is partially ordered by $\Delta \leq \Delta'$ if and only if $\Delta(x) \leq \Delta'(x)$ for any $x \in \mathfrak{Y}$.
For simplicity let $\mathcal{O}$ denote the structure sheaf of the rigid variety $\mathfrak{Y}$ and $\mathcal{O}_x$ its stalk in a point $x \in \mathfrak{Y}$ with maximal ideal $\mathfrak{m}_x$. Since $\mathfrak{Y}$ is smooth and one dimensional, every $\mathcal{O}_x$ is a discrete valuation ring. For any element $0 \neq f \in \mathcal{O}_K(\mathfrak{Y})$ we define the function $\operatorname{div}(f) : \mathfrak{Y} \longrightarrow \mathbf{Z}_{\geq 0}$ by $\operatorname{div}(f)(x) = n$ if and only if the germ of $f$ in $x$ is contained in $\mathfrak{m}_x^n \setminus \mathfrak{m}_x^{n+1}$.
\begin{lemma}\label{principal-divisor}
$\operatorname{div}(f)$ is a divisor.
\end{lemma}
\begin{proof}
Let $\operatorname{Sp}(A) \subseteq \mathfrak{Y}$ be any connected affinoid subdomain. Then $\operatorname{Sp}(A)$ is smooth and one dimensional. Hence the affinoid algebra $A$ is a Dedekind domain. But in a Dedekind domain any nonzero element is contained in at most finitely many maximal ideals.
\end{proof}
For any effective divisor $\Delta$ we have the ideal $I_\Delta := \{f \in \mathcal{O}_K(\mathfrak{Y}) \setminus \{0\} : \operatorname{div}(f) \geq \Delta\} \cup \{0\}$ in $\mathcal{O}_K(\mathfrak{Y})$. It also follows that for any nonzero ideal $I \subseteq \mathcal{O}_K(\mathfrak{Y})$ we have the effective divisor $\Delta(I)(x) := \min_{0 \neq f \in I} \operatorname{div}(f)(x)$.
As the projective limit $\mathcal{O}_K(\mathfrak{Y}) = \varprojlim_n \mathcal{O}_K(\mathfrak{Y}_n)$ of the Banach algebras $\mathcal{O}_K(\mathfrak{Y}_n)$ the ring $\mathcal{O}_K(\mathfrak{Y})$ has a natural structure of a $K$-Fr\'echet algebra (which is independent of the choice of the covering $(\mathfrak{Y}_n)_n$). In fact, $\mathcal{O}_K(\mathfrak{Y})$ is a Fr\'echet-Stein algebra in the sense of \cite{ST0}. Any closed ideal $I \subseteq \mathcal{O}_K(\mathfrak{Y})$ is the space of global sections of a unique coherent ideal sheaf $\widetilde{I} \subseteq \mathcal{O}$. Let $\widetilde{I}_x \subseteq \mathcal{O}_x$ denote the stalk of $\widetilde{I}$ in the point $x$.
\begin{lemma}\label{germ}
For any closed ideal $I \subseteq \mathcal{O}_K(\mathfrak{Y})$ and any point $x \in \mathfrak{Y}$ the map $I \longrightarrow \widetilde{I}_x/\mathfrak{m}_x \widetilde{I}_x$ induced by passing to the germ in $x$ is surjective, continuous, and open, if we equip the finite dimensional $K$-vector space $\widetilde{I}_x/\mathfrak{m}_x \widetilde{I}_x$ with its unique Hausdorff vector space topology.
\end{lemma}
\begin{proof}
Let $\operatorname{Sp}(A) = \mathfrak{Y}_n \subseteq \mathfrak{Y}$ be such that it contains the point $x$, and let $\mathfrak{m} \subseteq A$ be the maximal ideal of functions vanishing in $x$. By \cite{BGR} Prop.\ 7.3.2/3 we have $AI/\mathfrak{m}AI = \widetilde{I}_x/\mathfrak{m}_x \widetilde{I}_x$. The continuity of the map in question follows since the restriction map $I \longrightarrow AI$ is continuous and any ideal in the Banach algebra $A$ is closed (\cite{BGR} Prop.\ 6.1.1/3). Moreover, Theorem A for the quasi-Stein space $\mathfrak{Y}$ says that the map $I \longrightarrow AI$ has dense image (cf.\ \cite{ST0} Thm.\ in \S3, Cor.\ 3.1, and Remark 3.2) which implies the asserted surjectivity. The openness then follows from the uniqueness of the Hausdorff topology on a finite dimensional vector space.
\end{proof}
\begin{lemma}\label{prescribe-divisor}
Let $I \subseteq \mathcal{O}_K(\mathfrak{Y})$ be any nonzero closed ideal and let $Z \subseteq \mathfrak{Y}$ be any countable subset disjoint from $\operatorname{supp}(\Delta(I))$; then there exists a function $f \in I$ such that
\begin{equation*}
\operatorname{div}(f)(x) =
\begin{cases}
\Delta(I)(x) & \text{if $x \in \operatorname{supp}(\Delta(I))$}, \\
0 & \text{if $x \in Z$}.
\end{cases}
\end{equation*}
\end{lemma}
\begin{proof}
It follows from Lemma \ref{germ} that $V(x) := I \setminus \mathfrak{m}_x \widetilde{I}_x$, for any point $x \in \mathfrak{Y}$, is an open subset of $I$. It is even dense in $I$: On the one hand, the complement $I \setminus \overline{V(x)}$ of its closure, being contained in $\mathfrak{m}_x \widetilde{I}_x$, projects into the subset $\{0\} \subseteq \widetilde{I}_x/\mathfrak{m}_x \widetilde{I}_x$. On the other hand, by Lemma \ref{germ}, the image of the open set $I \setminus \overline{V(x)}$ is open in $\widetilde{I}_x/\mathfrak{m}_x \widetilde{I}_x$. But $I$ being nonzero the vector space $\widetilde{I}_x/\mathfrak{m}_x \widetilde{I}_x$ is one dimensional so that the subset $\{0\}$ is not open. Hence we must have $\overline{V(x)} = I$.
The topology of $I$ being complete and metrizable the Baire category theorem implies that, for any countable number of points $x$ the intersection of the corresponding $V(x)$ still is dense in $I$ and, in particular, is nonempty. Since $V(x) = \{f \in I : \operatorname{div}(f)(x) = \Delta(I)(x)\}$ any $f \in \bigcap_{x \in \operatorname{supp}(\Delta(I)) \cup Z} V(x)$ satisfies the assertion.
\end{proof}
\begin{proposition}\label{divisor-theory}
The map $\Delta \longmapsto I_\Delta$ is an order reversing bijection between the set of all effective divisors on $\mathfrak{Y}$ and the set of all nonzero closed ideals in $\mathcal{O}_K(\mathfrak{Y})$. The inverse is given by $I \longmapsto \Delta(I)$.
\end{proposition}
\begin{proof}
The map $I \longmapsto \widetilde{I}$ is a bijection between the set of all nonzero closed ideals and the set of all nonzero coherent ideal sheaves in $\mathcal{O}$. Let $\mathfrak{Y}_n = \operatorname{Sp}(A_n)$. Then a coherent ideal sheaf $\mathcal{I}$ is the same as a sequence $(I_n)_{n \geq 1}$ where $I_n$ is an ideal in $A_n$ such that $I_{n+1} A_n = I_n$ for any $n \geq 1$, the bijection being given by $\mathcal{I} \longmapsto (\mathcal{I}(\operatorname{Sp}(A_n)))_n$. On the other hand, an effective divisor on $\mathfrak{Y}$ is the same as a sequence $(\Delta_n)_{n \geq 1}$ where $\Delta_n$ is an effective divisor supported on $\operatorname{Sp}(A_n)$ such that $\Delta_{n+1} | \operatorname{Sp}(A_n) = \Delta_n$ for any $n \geq 1$. Since $I_\Delta = \varprojlim_n I_{\Delta|\operatorname{Sp}(A_n)}$ this reduces the asserted bijection to the case of the affinoid varieties $\operatorname{Sp}(A_n)$. But each $A_n$ is a Dedekind ring for which the bijection between nonzero ideals and effective divisors is well known. The description of the inverse bijection follows from Lemma \ref{germ} which implies that $\min_{0 \neq f \in I} \operatorname{div}(f)(x) = \min_{0 \neq f \in A_n I} \operatorname{div}(f)(x)$ provided $x \in \operatorname{Sp}(A_n)$.
\end{proof}
\begin{lemma}\label{fingensub}
Any finitely generated submodule in a finitely generated free $\mathcal{O}_K(\mathfrak{Y})$-module is closed.
\end{lemma}
\begin{proof}
(Any finitely generated free $\mathcal{O}_K(\mathfrak{Y})$-module carries the product topology which is easily seen to be independent of the choice of a basis.) Our assertion is a general fact about Fr\'echet-Stein algebras (cf.\ \cite{ST0} Cor.\ 3.4.iv and Lemma 3.6).
\end{proof}
\begin{proposition}\label{closed-fingen}
An ideal $I \subseteq \mathcal{O}_K(\mathfrak{Y})$ is closed if and only if it is finitely generated.
\end{proposition}
\begin{proof}
If $I$ is finitely generated then it is closed by the previous Lemma \ref{fingensub}. Suppose therefore that $I$ is closed. We apply Lemma \ref{prescribe-divisor} twice, first to obtain a function $f \in I$ such that $\operatorname{div}(f) = \Delta(I) + \Delta_1$ with $\operatorname{supp}(\Delta(I)) \cap \operatorname{supp}(\Delta_1) = \emptyset$ and then to obtain another function $g \in I$ such that $\operatorname{div}(g) = \Delta(I) + \Delta_2$ with $\big( \operatorname{supp}(\Delta(I)) \cup \operatorname{supp}(\Delta_1) \big) \cap \operatorname{supp}(\Delta_2) = \emptyset$. By the first part of the assertion the ideal $(f,g)$ is closed. Its divisor, by construction, must be equal to $\Delta(I)$. It follows that $I = (f,g)$.
\end{proof}
\begin{proposition}\label{closed-invert}
A nonzero ideal $I \subseteq \mathcal{O}_K(\mathfrak{Y})$ is closed if and only if it is invertible.
\end{proposition}
\begin{proof}
In any ring invertible ideals are finitely generated. Hence $I$ is closed if it is invertible by Prop.\ \ref{closed-fingen}. Now assume that $I \neq 0$ is closed. Again by \ref{closed-fingen} we have $I = (g_1, \ldots, g_m)$ for appropriate functions $g_i$. Fix any nonzero function $f \in I$. Then $\operatorname{div}(f) - \Delta(I)$ is an effective divisor and the closed ideal $J := I_{\operatorname{div}(f) - \Delta(I)}$ is defined. Then $IJ = \sum_i g_i J$. Using once more \ref{closed-fingen} we see that $J$ and therefore all the ideals $g_iJ$ as well as $\sum_i g_i J$ are finitely generated and hence closed. We have $\Delta(g_iJ) = \operatorname{div}(f) - \Delta(I) + \operatorname{div}(g_i)$, and we conclude that
\begin{align*}
\Delta(\sum_i g_iJ) & = \min_i \Delta(g_iJ) = \min_i (\operatorname{div}(f) - \Delta(I) + \operatorname{div}(g_i)) \\
& = \operatorname{div}(f) - \Delta(I) + \min_i \operatorname{div}(g_i) = \operatorname{div}(f) - \Delta(I) + \Delta(I) \\
& = \operatorname{div}(f) \ .
\end{align*}
This implies $IJ = (f)$ and then $I (f^{-1}J) = (1)$ which shows that $I$ is invertible.
\end{proof}
\begin{corollary}\label{pruefer}
$\mathcal{O}_K(\mathfrak{Y})$ is a Pr\"ufer domain; in particular, we have:
\begin{itemize}
\item[--] $\mathcal{O}_K(\mathfrak{Y})$ is a coherent ring;
\item[--] an $\mathcal{O}_K(\mathfrak{Y})$-module is flat if and only if it is torsionfree;
\item[--] any finitely generated torsionfree $\mathcal{O}_K(\mathfrak{Y})$-module is projective;
\item[--] any finitely generated projective $\mathcal{O}_K(\mathfrak{Y})$-module is isomorphic to a direct sum of (finitely generated) ideals (\cite{CE} Prop.\ I.6.1).
\end{itemize}
\end{corollary}
\begin{lemma}\label{closed-proj}
Any closed submodule of a finitely generated projective $\mathcal{O}_K(\mathfrak{Y})$-module is finitely generated projective.
\end{lemma}
\begin{proof}
Because of Lemma \ref{fingensub} we may assume that $P$ is a closed submodule of some free module $\mathcal{O}_K(\mathfrak{Y})^m$. We now prove the assertion by induction with respect to $m$. For $m=1$ this is Prop.\ \ref{closed-fingen} and Cor.\ \ref{pruefer}. In general we consider the exact sequence
\begin{equation*}
0 \longrightarrow P \cap \mathcal{O}_K(\mathfrak{Y})^{m-1} \longrightarrow P \xrightarrow{\; \operatorname{pr}_m \;} \mathcal{O}_K(\mathfrak{Y}) \ .
\end{equation*}
By the induction hypothesis the left term is finitely generated projective. On the other hand, by \cite{ST0} Cor.\ 3.4.ii and Lemma 3.6 the image of $\operatorname{pr}_m$ is closed and therefore finitely generated projective as well (by the case $m=1$).
\end{proof}
In fact, $\mathcal{O}_K(\mathfrak{Y})$ is a Pr\"ufer domain of a special kind. We recall that a Pr\"ufer domain $R$ is called a $1 \frac{1}{2}$ generator Pr\"ufer domain if for any nonzero finitely generated ideal $I$ of $R$ and any $0 \neq f \in I$, there exists another element $g \in I$ such that $f$ and $g$ generate $I$.
\begin{proposition}\label{1 1/2}
The ring $\mathcal{O}_K(\mathfrak{Y})$ is a $1 \frac{1}{2}$ generator Pr\"ufer domain.
\end{proposition}
\begin{proof}
Let $I \subseteq \mathcal{O}_K(\mathfrak{Y})$ be an arbitrary nonzero finitely generated ideal and $0 \neq f \in I$ be any element. Our assertion, by definition, amounts to the claim that there exists another element $g \in I$ such that $f$ and $g$ generate $I$.
As before, let $\widetilde{I} \subseteq \mathcal{O}$ be the coherent ideal sheaf corresponding to $I$. By Thm.\ B we have $(\mathcal{O}/\widetilde{I})(\mathfrak{Y}) = \mathcal{O}_K(\mathfrak{Y})/I$. According to \cite{BGR} Prop.\ 9.5.3/3 the quotient sheaf $\mathcal{O}/\widetilde{I}$ is (the direct image of) the structure sheaf of a (in general nonreduced) structure of a rigid analytic variety on the analytic subset $\operatorname{supp}(\Delta(I))$ of $\mathfrak{Y}$. But topologically $\operatorname{supp}(\Delta(I))$ is a discrete set. We deduce that
\begin{equation*}
\mathcal{O}_K(\mathfrak{Y})/I = (\mathcal{O}/\widetilde{I})(\mathfrak{Y}) = \prod_{x \in \mathfrak{Y}} \mathcal{O}_x/\mathfrak{m}_x^{\Delta(I)(x)} \ .
\end{equation*}
We now repeat the corresponding observation for the principal ideal $J := f \mathcal{O}_K(\mathfrak{Y})$. By comparing the two computations we obtain
\begin{equation*}
I/f \mathcal{O}_K(\mathfrak{Y}) = \prod_{x \in \mathfrak{Y}} \mathfrak{m}_x^{\Delta(I)(x)}/\mathfrak{m}_x^{\operatorname{div}(f)(x)} \subseteq \prod_{x \in \mathfrak{Y}} \mathcal{O}_x/\mathfrak{m}_x^{\operatorname{div}(f)(x)} = \mathcal{O}_K(\mathfrak{Y})/ f \mathcal{O}_K(\mathfrak{Y}) \ .
\end{equation*}
This shows that for $g$ we may take any function in $I$ whose germ generates $\mathfrak{m}_x^{\Delta(I)(x)}/\mathfrak{m}_x^{\operatorname{div}(f)(x)}$ for any $x \in \mathfrak{Y}$.
\end{proof}
\begin{corollary}\phantomsection\label{free-invertible}
\begin{itemize}
\item[i.] If $I \subseteq \mathcal{O}_K(\mathfrak{Y})$ is any nonzero finitely generated ideal, then in the factor ring $\mathcal{O}_K(\mathfrak{Y})/I$ every finitely generated ideal is principal.
\item[ii.] For any two nonzero finitely generated ideals $I$ and $J$ in $\mathcal{O}_K(\mathfrak{Y})$ we have an isomorphism of $\mathcal{O}_K(\mathfrak{Y})$-modules $I \oplus J \cong \mathcal{O}_K(\mathfrak{Y}) \oplus IJ$.
\end{itemize}
\end{corollary}
\begin{proof}
i. This is immediate from the $1 \frac{1}{2}$ generator property. ii. Because of i. this is \cite{Kap} Thm.\ 2(a).
\end{proof}
We also recall the following facts about coherent modules on a quasi-Stein space, which will be used, sometimes implicitly, over and over again.
\begin{fact}\label{coherent}
Let $\mathfrak{M}$ be a coherent $\mathcal{O}$-module sheaf on $\mathfrak{Y}$ and set $M := \mathfrak{M}(\mathfrak{Y})$; then:
\begin{itemize}
\item[i.] For any open affinoid subvariety $\mathfrak{Z} \subseteq \mathfrak{Y}$ we have $\mathfrak{M}(\mathfrak{Z}) = \mathcal{O}(\mathfrak{Z}) \otimes_{\mathcal{O}_K(\mathfrak{Y})} M$;
\item[ii.] for any point $x \in \mathfrak{Y}$ the stalk of $\mathfrak{M}$ in $x$ is $\mathfrak{M}_x = \mathcal{O}_x \otimes _{\mathcal{O}_K(\mathfrak{Y})} M$;
\item[iii.] $\mathfrak{M} = 0$ if and only if $M = 0$ if and only if $\mathfrak{M}_x = 0$ for any point $x$;
\item[iv.] if $M$ is a finitely generated projective $\mathcal{O}_K(\mathfrak{Y})$-module then $\mathfrak{M}(\mathfrak{Z}) = \mathcal{O}(\mathfrak{Z}) \otimes_{\mathcal{O}_K(\mathfrak{Y})} M$ for any admissible open subvariety $\mathfrak{Z} \subseteq \mathfrak{Y}$.
\end{itemize}
\end{fact}
\begin{proof}
i. By the quasi-Stein property the claim holds true for any $\mathfrak{Z} = \mathfrak{Y}_n$ (cf.\ \cite{ST0} Cor.\ 3.1). A general open affinoid $\mathfrak{Z}$ is contained in some $\mathfrak{Y}_n$, and we have
\begin{align*}
\mathfrak{M}(\mathfrak{Z}) & = \mathcal{O}(\mathfrak{Z}) \otimes_{\mathcal{O}_K(\mathfrak{Y}_n)} \mathfrak{M}(\mathfrak{Y}_n) = \mathcal{O}(\mathfrak{Z}) \otimes_{\mathcal{O}_K(\mathfrak{Y}_n)} (\mathcal{O}_K(\mathfrak{Y}_n) \otimes_{\mathcal{O}_K(\mathfrak{Y})} M) \\
& = \mathcal{O}(\mathfrak{Z}) \otimes_{\mathcal{O}_K(\mathfrak{Y})} M
\end{align*}
(using \cite{BGR} Prop.\ 9.4.2/1 for the first identity). ii. This follows immediately from i. by passing to the direct limit. iii. Use \cite{BGR} Cor.\ 9.4.2/7. iv. Choose admissible affinoid coverings $\mathfrak{Z} = \bigcup_i \mathfrak{Z}_i$ and $\mathfrak{Z}_i \cap \mathfrak{Z}_j = \bigcup_\ell \mathfrak{Z}_{i,j,\ell}$. The sheaf axiom gives the upper exact sequence
\begin{equation*}
\xymatrix{
0 \ar[r] & \mathfrak{M}(\mathfrak{Z}) \ar[r] & \prod_i \mathfrak{M}(\mathfrak{Z}_i) \ar[r] & \prod_{i,j,\ell} \mathfrak{M}(\mathfrak{Z}_{i,j,\ell}) \\
0 \ar[r] & \mathcal{O}(\mathfrak{Z}) \otimes_{\mathcal{O}_K(\mathfrak{Y})} M \ar[u] \ar[r] & \prod_i (\mathcal{O}(\mathfrak{Z}_i) \otimes_{\mathcal{O}_K(\mathfrak{Y})} M) \ar[u]^{\cong} \ar[r] & \prod_{i,j,\ell} (\mathcal{O}(\mathfrak{Z}_{i,j,\ell}) \otimes_{\mathcal{O}_K(\mathfrak{Y})} M) \ar[u]^{\cong}. }
\end{equation*}
Since the tensor product by a finitely generated projective module is exact and commutes with arbitrary direct products the lower sequence is exact as well. The middle and the right arrow are bijective by i. So the first arrow must be bijective as well.
\end{proof}
\begin{proposition}\label{gruson}
The global section functor $\mathfrak{M} \longmapsto \mathfrak{M}(\mathfrak{Y})$ induces an equivalence of categories between the category of locally free coherent $\mathcal{O}$-module sheaves on $\mathfrak{Y}$ and the category of finitely generated projective $\mathcal{O}_K(\mathfrak{Y})$-modules.
\end{proposition}
\begin{proof}
This is a special case of \cite{Gru} Thm.\ V.1 and subsequent Remark.
\end{proof}
\subsection{The variety $\mathfrak{X}$ and the ring $\mathcal{O}_K(\mathfrak{X})$}
Let $G := o_L$ denote the additive group $o_L$ viewed as a locally $L$-analytic group. The group of $K$-valued locally analytic characters of $G$ is denoted by $\widehat{G}(K)$.
We have the bijection
\begin{align}\label{f:variety}
\mathfrak{B}_1(K) \otimes_{\mathbf{Z}_p} \operatorname{Hom}_{\mathbf{Z}_p}(o_L,\mathbf{Z}_p) & \xrightarrow{\; \sim \;} \widehat{G}_0(K) \\
z \otimes \beta & \longmapsto \chi_{z \otimes \beta} (g) := z^{\beta(g)} \nonumber
\end{align}
where $\mathfrak{B}_1$ is the rigid $\mathbf{Q}_p$-analytic open disk of radius one around the point $1 \in \mathbf{Q}_p$. The rigid analytic group variety
\begin{equation*}
\mathfrak{X}_0 := \mathfrak{B}_1 \otimes_{\mathbf{Z}_p} \operatorname{Hom}_{\mathbf{Z}_p}(o_L,\mathbf{Z}_p)
\end{equation*}
over $\mathbf{Q}_p$ (noncanonically a $d$-dimensional open unit polydisk) satisfies $\mathfrak{X}_0(K) = \mathfrak{X}_{0/L}(K) = \widehat{G}_0(K)$ and therefore ``represents the character group $\widehat{G}_0$''. It is shown in \cite{ST} \S2 that, if $t_1, \ldots, t_d$ is a $\mathbf{Z}_p$-basis of $o_L$, then the equations
\begin{equation*}
(\beta(t_i) - t_i \cdot \beta(1)) \cdot \log(z) = 0 \qquad\text{for $1 \leq i \leq d$}
\end{equation*}
define a one dimensional rigid analytic subgroup $\mathfrak{X}$ in $\mathfrak{X}_{0/L}$ which ``represents the character group $\widehat{G}$''.\footnote{Even more explicitly, if we use the basis $\beta_1, \ldots, \beta_d$ dual to $t_i$ in order to identify $\mathfrak{X}_0$ with $\mathfrak{B}_1^d$, then $\mathfrak{X}$ identifies with
\begin{multline*}
\{(z_1, \ldots, z_d) \in \mathfrak{B}_{1/L}^d : \sum_j \beta_j(1) \log(z_j) = \frac{1}{t_i} \log(z_i)\ \text{for $1 \leq i \leq d$}\} \\ = \{(z_1, \ldots, z_d) \in \mathfrak{B}_{1/L}^d : \log(z_i) = \frac{t_i}{t_1} \log(z_1)\ \text{for $1 \leq i \leq d$}\} \ .
\end{multline*}}
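For orientation we point out the special case $L = \mathbf{Q}_p$: here $d = 1$, the $\mathbf{Z}_p$-module $\operatorname{Hom}_{\mathbf{Z}_p}(o_L,\mathbf{Z}_p)$ is free of rank one on the identity, and the above equations are vacuous, so that $\mathfrak{X} = \mathfrak{X}_{0/L} \cong \mathfrak{B}_1$ with a point $z \in \mathfrak{B}_1(K)$ corresponding to the character $g \longmapsto z^g$; this recovers the familiar cyclotomic identification of the open unit disk with the character variety of $\mathbf{Z}_p$.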
In particular, restriction of functions induces a surjective homomorphism of rings
\begin{equation*}
\mathcal{O}_K(\mathfrak{X}_0) \longrightarrow \mathcal{O}_K(\mathfrak{X})
\end{equation*}
which further restricts to a ring homomorphism
\begin{equation*}
\mathcal{O}_K^b(\mathfrak{X}_0) \longrightarrow \mathcal{O}_K^b(\mathfrak{X}) \ .
\end{equation*}
There is the following simple, but remarkable fact.
\begin{lemma}\label{injective}
The restriction map $\mathcal{O}_K^b(\mathfrak{X}_0) \longrightarrow \mathcal{O}_K^b(\mathfrak{X})$ is injective.
\end{lemma}
\begin{proof}
By Fourier theory (cf.\ \cite{ST} and \cite{Schi} App.\ A.6) the vector space $\mathcal{O}_K(\mathfrak{X})$ is the continuous dual of the locally convex vector space of locally $L$-analytic functions $C^{an}(G,K)$ whereas $\mathcal{O}_K^b(\mathfrak{X}_0)$ is the continuous dual of the $K$-Banach space $C^{cont}(G,K)$ of all continuous functions on $G$. It therefore suffices to show that $C^{an}(G,K)$ is dense in $C^{cont}(G,K)$. In fact, already the vector space $C^\infty(G,K)$ of all locally constant functions, which obviously are locally $L$-analytic, is dense in $C^{cont}(G,K)$. This is a well known fact about functions on compact totally disconnected groups. Let $\epsilon > 0$ be any positive constant and $f \in C^{cont}(G,K)$ be any function. By continuity of $f$ we find for any point $x \in G$ an open neighbourhood $U_x \subseteq G$ such that $|f(y) - f(x)| \leq \epsilon$ for any $y \in U_x$. The covering $(U_x)_x$ of $G$ can be refined into a finite disjoint open covering $G = V_1 \dot{\cup} \ldots \dot{\cup} V_m$. Pick points $x_i \in V_i$ and define the locally constant function $f_\epsilon$ by requiring that $f_\epsilon | V_i$ has constant value $f(x_i)$. Then $\|f - f_\epsilon\| \leq \epsilon$ in the sup-norm of the Banach space $C^{cont}(G,K)$.
\end{proof}
For any $r \in (0,1) \cap p^\mathbf{Q}$ we let $\mathfrak{B}_1(r)$, resp.\ $\mathfrak{B}(r)$, denote the $\mathbf{Q}_p$-affinoid disk of radius $r$ around $1$, resp.\ around $0$, and we put $\mathfrak{X}(r) := \mathfrak{X} \cap (\mathfrak{B}_1(r) \otimes_{\mathbf{Z}_p} \operatorname{Hom}_{\mathbf{Z}_p}(o_L,\mathbf{Z}_p))_{/L}$. In fact, each $\mathfrak{X}(r)$ is an affinoid subgroup of $\mathfrak{X}$. For small $r$ its structure is rather simple. By \cite{ST} Lemma 2.1 we have the cartesian diagram of rigid $L$-analytic varieties
\begin{equation*}
\xymatrix{
\mathfrak{X}_{\vphantom{\mathbf{Z}_i}} \ar[d]_{d} \ar[r]^-{\subseteq} & \mathfrak{B}_1 \otimes \operatorname{Hom}_{\mathbf{Z}_p} (o_L,\mathbf{Z}_p) \ar[d]^{\log \otimes \operatorname{id}} \\
\mathbb{A}^1_{\vphantom{\mathbf{Z}}} \ar[r]^-{\subseteq} & \mathbb{A}^1 \otimes \operatorname{Hom}_{\mathbf{Z}_p} (o_L,\mathbf{Z}_p) }
\end{equation*}
where $d$ is the morphism which sends a locally analytic character $\chi$ of $o_L$ to its derivative $d\chi(1) = \frac{d}{dt}\chi(t) {\mid}_{t=0}$ and where the lower horizontal arrow is the map $a \longmapsto \sum_{i=1}^d at_i \otimes \beta_i$ with $\beta_1, \ldots, \beta_d$ being the basis dual to $t_1, \ldots, t_d$. Consider any $r < p^{-\frac{1}{p-1}}$. Then the logarithm restricts to an isomorphism $\mathfrak{B}_1(r) \xrightarrow{\widehat{\otimes}ng} \mathfrak{B}(r)$ with inverse the exponential map. The above diagram restricts to the cartesian diagram with vertical isomorphisms
\begin{equation*}
\xymatrix{
\mathfrak{X}(r)_{\vphantom{\mathbf{Z}_i}} \ar[d]_{d}^{\cong} \ar[r]^-{\subseteq} & \mathfrak{B}_1(r) \otimes \operatorname{Hom}_{\mathbf{Z}_p} (o_L,\mathbf{Z}_p) \ar[d]^{\log \otimes \operatorname{id}}_{\cong} \\
\mathfrak{B}(r)_{\vphantom{\mathbf{Z}}} \ar[r]^-{\subseteq} & \mathfrak{B}(r) \otimes \operatorname{Hom}_{\mathbf{Z}_p} (o_L,\mathbf{Z}_p). }
\end{equation*}
\begin{lemma}\label{small-disk}
For any $r \in (0,p^{-\frac{1}{p-1}}) \cap p^\mathbf{Q}$ the map
\begin{align*}
\mathfrak{B}(r) & \xrightarrow{\;\cong\;} \mathfrak{X}(r) \\
y & \longmapsto \chi_y(g) := \exp(gy)
\end{align*}
is an isomorphism of $L$-affinoid groups.
\end{lemma}
\begin{proof}
Obviously we have $\chi_y \in \mathfrak{X}(r)$ and $d\chi_y(1) = y$. Hence the map under consideration is the inverse of the isomorphism $d$ in the above diagram.
\end{proof}
\subsection{The LT-isomorphism}\label{sec:LT}
Let $LT = LT_{\pi_L}$ be the Lubin-Tate formal $o_L$-module over $o_L$ corresponding to the prime element $\pi_L$. If $\mathfrak{B}$ denotes the rigid $\mathbf{Q}_p$-analytic open disk of radius one around $0 \in \mathbf{Q}_p$, then we always identify $LT$ with $\mathfrak{B}_{/L}$, which makes the rigid variety $\mathfrak{B}_{/L}$ into an $o_L$-module object and which gives us a global coordinate $Z$ on $LT$. The resulting $o_L$-action on $\mathfrak{B}_{/L}$ denoted by $(a,z) \longmapsto [a](z)$ is given by formal power series $[a](Z) \in o_L[[Z]]$. In this way, in particular, the multiplicative monoid $o_L \setminus \{0\}$ acts on the rigid group variety $\mathfrak{B}_{/L}$. For any $r \in (0,1) \cap p^\mathbf{Q}$ the $L$-affinoid disk $\mathfrak{B}(r)_{/L}$ of radius $r$ around zero is an $o_L$-submodule object of $\mathfrak{B}_{/L}$.
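To make the $o_L$-action on $\mathfrak{B}_{/L}$ concrete, the following small sketch (ours, for illustration only) computes the power series $[a](Z)$ in the special case $L = \mathbf{Q}_p$, $\pi_L = p$, for the Frobenius power series $f(Z) = pZ + Z^p$, directly from the defining relations $[a](Z) \equiv aZ \bmod \deg 2$ and $f \circ [a] = [a] \circ f$; the computation is carried out in exact rational arithmetic, and the resulting coefficients have denominators prime to $p$, in accordance with $[a](Z) \in o_L[[Z]]$.
\begin{verbatim}
# Sketch: coefficients of the Lubin-Tate power series [a](Z) for L = Q_p,
# pi_L = p, and the special series f(Z) = p*Z + Z^p, computed from
# f([a](Z)) = [a](f(Z)), [a](Z) = a*Z + O(Z^2).  Exact rational arithmetic;
# the denominators of the coefficients are prime to p.
from fractions import Fraction

def series_mul(u, v, N):
    """Product of two power series (coefficient lists), truncated at degree N."""
    w = [Fraction(0)] * (N + 1)
    for i, ui in enumerate(u):
        if ui:
            for j, vj in enumerate(v):
                if i + j > N:
                    break
                w[i + j] += ui * vj
    return w

def series_compose(u, v, N):
    """u(v(Z)) truncated at degree N, assuming v(0) = 0."""
    out = [Fraction(0)] * (N + 1)
    power = [Fraction(1)] + [Fraction(0)] * N      # v^0
    for k, uk in enumerate(u):
        if k > 0:
            power = series_mul(power, v, N)        # power = v^k
        for d in range(N + 1):
            out[d] += uk * power[d]
    return out

def lubin_tate(a, p, N):
    f = [Fraction(0)] * (N + 1)
    f[1], f[p] = Fraction(p), Fraction(1)          # f(Z) = pZ + Z^p
    c = [Fraction(0)] * (N + 1)
    c[1] = Fraction(a)                             # [a](Z) = aZ + ...
    for n in range(2, N + 1):
        G = series_compose(f, c, N)                # f([a](Z)), with c_n still 0
        H = series_compose(c, f, N)                # [a](f(Z)), with c_n still 0
        c[n] = (H[n] - G[n]) / (p - p**n)          # linear equation for c_n
    return c

print([str(x) for x in lubin_tate(a=2, p=3, N=8)[1:]])
\end{verbatim}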
Let $T$ be the Tate module of $LT$. Then $T$ is
a free $o_L$-module of rank one, and the action of
$\operatorname{Gal}(\overline{L}/L)$ on $T$ is given by a continuous character $\chi_{LT} :
\operatorname{Gal}(\overline{L}/L) \longrightarrow o_L^\times$. Let $T'$ denote the Tate module of the $p$-divisible group dual to $LT$, which again is a free $o_L$-module of rank one. The Galois action on $T'$ is given by the continuous character $\tau := \chi_{cyc}\cdot\chi_{LT}^{-1}$, where
$\chi_{cyc}$ is the cyclotomic character. \footnote{We always normalize the isomorphism of local class field theory by letting a prime element correspond to a geometric Frobenius. Then, by \cite{Ser} III.A4 Prop.\ 4, the character $\chi_{LT}$ coincides with the character
\begin{equation*}
\operatorname{Gal}(\overline{L}/L) \twoheadrightarrow \operatorname{Gal}(\overline{L}/L)^{ab} \cong \widehat{L}^\times = o_L^\times \times \pi_L^{\widehat{\mathbf{Z}}} \xrightarrow{\operatorname{pr}} o_L^\times \ .
\end{equation*}
}
The ring $\mathcal{O}_K(\mathfrak{B})$ is the ring of all formal power series in $Z$ with coefficients in $K$ which converge on $\mathfrak{B}(\mathbf{C}_p)$. In terms of formal power series the induced $o_L \setminus \{0\}$-action on $\mathcal{O}_K(\mathfrak{B})$ is given by $(a,F) \longmapsto F \circ [a]$. We define
\begin{equation*}
\mathscr{R}_K(\mathfrak{B}) := \bigcup_r \, \mathcal{O}_K(\mathfrak{B} \setminus \mathfrak{B}(r))
\end{equation*}
which is the ring of all formal series
$F(Z) = \sum_{n\in \mathbf{Z}} a_n Z^n$, $a_n \in K$, which are convergent in $(\mathfrak{B} \setminus \mathfrak{B}(r))(\mathbf{C}_p)$ for some $r<1$ depending on $F$. It follows from \cite{ST} Lemma 3.2 that the $o_L \setminus \{0\}$-action on $\mathcal{O}_K(\mathfrak{B})$ extends to $\mathscr{R}_K(\mathfrak{B})$.
By the maximum modulus principle (cf.\ \cite{Schi} Thm.\ 42.3(i)) $\mathcal{O}_K^b(\mathfrak{B})$ is the ring of all formal power series $F(Z) = \sum_{n \geq 0} a_n Z^n$, $a_n \in K$, such that $\sup_{n \geq 0} |a_n| < \infty$, and the supremum norm satisfies
\begin{equation*}
\|F\|_{\mathfrak{B}} = \sup_{z \in \mathfrak{B}(\mathbf{C}_p)} |F(z)| = \sup_{n \geq 0} |a_n| \ .
\end{equation*}
The norm $\|\ \|_{\mathfrak{B}}$ is known to be multiplicative (cf.\ \cite{vRo} Lemma 6.40). We define
\begin{equation*}
\mathscr{E}_K^\dagger(\mathfrak{B}) := \bigcup_r \, \mathcal{O}_K^b(\mathfrak{B} \setminus \mathfrak{B}(r))
\end{equation*}
and
\begin{equation*}
\|F\|_1 := \lim_{r \rightarrow 1} \|F\|_{\mathfrak{B} \setminus \mathfrak{B}(r)}
\end{equation*}
for $F \in \mathscr{E}_K^\dagger(\mathfrak{B})$. Using the maximum modulus principle for affinoid annuli one shows that an element in $\mathscr{R}_K(\mathfrak{B})$ written as a formal Laurent series $F(Z) = \sum_{n \in \mathbf{Z}} a_nZ^n$ lies in $\mathscr{E}_K^\dagger(\mathfrak{B})$ if and only if $\sup_{n \in \mathbf{Z}} |a_n| < \infty$ and that, in this case, $\|F\|_1 = \sup_{n \in \mathbf{Z}} |a_n| < \infty$. Since $\|F\|_1 = \lim_{r \rightarrow 1} \|F\|_r$ is the limit of the multiplicative norms $\|F\|_r := \sup_{n \in \mathbf{Z}} |a_n|r^n$ we see that $\|\ \|_1$ is a multiplicative norm. We now let
\begin{equation*}
\mathscr{E}_K(\mathfrak{B}) := \text{completion of $\mathscr{E}^\dagger_K(\mathfrak{B})$ with respect to $\|\ \|_1$}
\end{equation*}
and
\begin{equation*}
\mathscr{E}_K^{\leq 1}(\mathfrak{B}) := \{ F \in \mathscr{E}_K(\mathfrak{B}) : \|F\|_1 \leq 1\} \ .
\end{equation*}
Again by \cite{ST} Lemma 3.2 the $o_L \setminus \{0\}$-action on $\mathcal{O}_K^b(\mathfrak{B})$ extends to a $\|\ \|_1$-isometric action on $\mathscr{E}_K(\mathfrak{B})$ which respects the subrings $\mathscr{E}_K^\dagger(\mathfrak{B})$ and $\mathscr{E}_K^{\leq 1}(\mathfrak{B})$.
\begin{lemma}\label{E}
$\mathscr{E}_K(\mathfrak{B})$ is the ring of formal series $F(Z) = \sum_{n\in \mathbf{Z}} a_n Z^n$, $a_n \in K$, such that $\sup_{n \in \mathbf{Z}} |a_n| < \infty$ and $\lim_{n \rightarrow -\infty} a_n = 0$, and $\|F\|_1 = \sup_{n \in \mathbf{Z}} |a_n|$ is a multiplicative norm. \footnote{Suppose that $K$ is discretely valued. Then $\mathscr{E}_K^\dagger(\mathfrak{B})$ and $\mathscr{E}_K(\mathfrak{B})$ are fields. (Cf.\ \cite{TdA} \S\S9-10 for detailed proofs.) This is no longer the case in general.}
\end{lemma}
\begin{proof}
This is well known. See a formal argument, for example, in the proof of \cite{TdA} Lemma 10.4.
\end{proof}
By Cartier duality, $T'$ is the group of homomorphisms of formal
groups over $o_{\mathbf{C}_p}$ from $LT$ to the formal multiplicative group. This gives rise to a Galois equivariant and $o_L$-invariant pairing
\begin{equation*}
\langle\ ,\ \rangle : T' \otimes_{o_L} \mathfrak{B}(\mathbf{C}_p) \longrightarrow \mathfrak{B}_1(\mathbf{C}_p) \ .
\end{equation*}
We fix a generator $t'_0$ of the $o_L$-module $T'$. Thm.\ 3.6 in \cite{ST} constructs an isomorphism
\begin{equation*}
\kappa : \mathfrak{B}_{/\mathbf{C}_p} \xrightarrow{\;\cong\;} \mathfrak{X}_{/\mathbf{C}_p}
\end{equation*}
of rigid group varieties over $\mathbf{C}_p$ which on $\mathbf{C}_p$-points is given by
\begin{align*}
\mathfrak{B}(\mathbf{C}_p) & \xrightarrow{\;\cong\;} \mathfrak{X}(\mathbf{C}_p) = \widehat{G}(\mathbf{C}_p) \\
z & \longmapsto \kappa_z(g) := \langle t'_0, [g](z) \rangle \ .
\end{align*}
In fact, we need a more precise statement. We define
\begin{equation*}
R_n := p^{\mathbf{Q}} \cap [p^{-q/e(q-1)}, p^{-1/e(q-1)})^{1/q^{en}} \qquad\text{for $n \geq 0$}.
\end{equation*}
Note that these sets are pairwise disjoint; any sequence $(r_n)_{n \geq 0}$ with $r_n \in R_n$ converges to $1$. We also put $\omega := p^{1/e(q-1) - 1/(p-1)}$,
\begin{equation*}
S_0 := R_0 \omega = p^{\mathbf{Q}} \cap [p^{-1/e - 1/(p-1)}, p^{-1/(p-1)}) \subseteq p^{\mathbf{Q}} \cap [p^{- p/(p-1)}, p^{-1/(p-1)}) \ ,
\end{equation*}
and $S_n := S_0^{1/p^n}$ for $n \geq 0$. Again these latter sets are pairwise disjoint such that any sequence $(s_n)_{n \geq 0}$ with $s_n \in S_n$ converges to $1$. The map
\begin{align*}
S_n & \xrightarrow{\;\simeq\;} R_n \\
s & \longmapsto s^{1/p^{(d-1)n}} \omega^{-1/p^{dn}} \ ,
\end{align*}
for any $n \geq 0$, is an order preserving bijection.
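As a consistency check consider the case $L = \mathbf{Q}_p$, so that $e = d = 1$ and $q = p$: then $\omega = 1$, the sets $S_n$ and $R_n$ coincide for every $n \geq 0$, and the above bijection is simply the identity.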
\begin{proposition}\label{LT-iso}
For any $n \geq 0$ and any $s \in S_n$ the isomorphism $\kappa$ restricts to an isomorphism of affinoid group varieties
\begin{equation*}
\kappa : \mathfrak{B}(s^{1/p^{(d-1)n}} \omega^{-1/p^{dn}})_{/\mathbf{C}_p} \xrightarrow{\;\cong\;} \mathfrak{X}(s)_{/\mathbf{C}_p} \ .
\end{equation*}
\end{proposition}
\begin{proof}
Although formally not stated there in this generality, the proof is completely contained in \cite{ST} Thm.\ 3.6 and App.\ Thm.\ part (c). We briefly recall the argument. We have:
\begin{itemize}
\item[--] For any $r \in R_0$ the map $[p^n] : \mathfrak{B}(r^{1/q^{en}}) = [p^n]^{-1}(\mathfrak{B}(r)) \longrightarrow \mathfrak{B}(r)$ is a finite etale affinoid map (\cite{ST} Lemma 3.2).
\item[--] For any $s \in S_0$ the map $p^n : \mathfrak{X}(s^{1/p^n}) = (p^n)^{-1}(\mathfrak{X}(s)) \longrightarrow \mathfrak{X}(s)$ is a finite etale affinoid map (\cite{ST} Lemma 3.3).
\item[--] For any $r \in p^{\mathbf{Q}} \cap (0,p^{-1/e(q-1)})$ the map $\kappa$ restricts to a rigid isomorphism
\begin{equation*}
\mathfrak{B}(r)_{/\mathbf{C}_p} \xrightarrow{\;\cong\;} \mathfrak{X}(r\omega)_{/\mathbf{C}_p}
\end{equation*}
(\cite{ST} Lemma 3.4 and proof of Lemma 3.5).
\end{itemize}
Exactly as in the proof of \cite{ST} Thm.\ 3.6 it follows from these three facts that the horizontal arrows in the commutative diagram
\begin{equation*}
\xymatrix{
\mathfrak{B}(r^{1/q^{en}})_{/\mathbf{C}_p} \ar[d]_{[p^n]} \ar[r]^-{\kappa} & \mathfrak{X}((r\omega)^{1/p^n})_{/\mathbf{C}_p} \ar[d]^{p^n} \\
\mathfrak{B}(r)_{/\mathbf{C}_p} \ar[r]^-{\kappa} & \mathfrak{X}(r\omega)_{/\mathbf{C}_p}, }
\end{equation*}
are rigid isomorphisms for any $n \geq 0$ and $r \in R_0$.
\end{proof}
\begin{remark}
If $L/\mathbf{Q}_p$ is unramified then $\bigcup_{n \geq 0} R_n = p^{\mathbf{Q}} \cap [p^{-q/(q-1)},1)$ and $\bigcup_{n \geq 0} S_n = p^{\mathbf{Q}} \cap [p^{-p/(p-1)},1)$. Hence in this case $\mathfrak{X}(s)_{/\mathbf{C}_p}$, for any radius $s \in p^{\mathbf{Q}} \cap (0,1)$, is isomorphic via $\kappa$ to an affinoid disk.
\end{remark}
In the following we abbreviate $\mathfrak{B}_n := \mathfrak{B}(p^{-1/e(q-1)q^{en-1}})$ and $\mathfrak{X}_n := \mathfrak{X}(p^{-(1+e/(p-1))/ep^n})$ for any $n \geq 1$
\footnote{Everything which follows also works for $n=0$. We avoid this case only since the symbol $\mathfrak{X}_0$ already has a different meaning.}; the two radii are the left boundary points of the sets $R_n$ and $S_n$, respectively, and they correspond to each other under the above bijection. From now on we treat $\kappa$ as an identification and view both $\mathcal{O}_K(\mathfrak{X})$ and $\mathcal{O}_K(\mathfrak{B})$ as subalgebras of $\mathcal{O}_{\mathbf{C}_p}(\mathfrak{B})$. The standard action of the Galois group $G_K := \operatorname{Gal}(\overline{K}/K)$ on $\mathcal{O}_{\mathbf{C}_p}(\mathfrak{B})$ is given by $(\sigma,F) \longmapsto {^\sigma F} := \sigma \circ F \circ \sigma^{-1}$ (in terms of power series $F = \sum_{n \geq 0} a_n Z^n$ we have ${^\sigma F} = \sum_{n \geq 0} \sigma(a_n) Z^n$), and $\mathcal{O}_K(\mathfrak{B})$ is the corresponding ring of Galois fixed elements
\begin{equation*}
\mathcal{O}_K(\mathfrak{B}) = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{B})^{G_K} \ .
\end{equation*}
The latter is a special case of the following general principle.
\begin{remark}\label{Galois-fixed}
For any quasi-separated rigid analytic variety $\mathfrak{Y}$ over $K$ we have $\mathcal{O}_K(\mathfrak{Y}) = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y})^{G_K}$.
\end{remark}
\begin{proof}
See \cite{ST} p.\ 463 observing that $\mathbf{C}_p^{G_K} = K$ by the Ax-Sen-Tate theorem (\cite{Ax} or \cite{Tat}).
\end{proof}
The twisted Galois action on $\mathcal{O}_{\mathbf{C}_p}(\mathfrak{B})$ is defined by $(\sigma,F) \longmapsto {^{\sigma *}F} := ({^\sigma F})([\tau(\sigma^{-1})](\cdot))$; it commutes with the $o_L \setminus \{0\}$-action, and according to \cite{ST} Cor.\ 3.8 we have
\begin{equation}\label{f:twisted-O}
\mathcal{O}_K(\mathfrak{X}) = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{B})^{G_K,*} \ .
\end{equation}
Obviously we also have $\mathcal{O}_{\mathbf{C}_p}^b(\mathfrak{X}) = \mathcal{O}_{\mathbf{C}_p}^b(\mathfrak{B})$ (isometrically). The twisted Galois action on $\mathcal{O}_{\mathbf{C}_p}^b(\mathfrak{B})$ is by isometries, and we have
\begin{equation}\label{f:twisted-Ob}
\mathcal{O}_K^b(\mathfrak{X}) = \mathcal{O}_{\mathbf{C}_p}^b(\mathfrak{B})^{G_K,*} \ .
\end{equation}
\begin{corollary}\label{multiplic}
The norms $\|\ \|_\mathfrak{X}$ on $\mathcal{O}_K^b(\mathfrak{X})$ and $\|\ \|_{\mathfrak{X}_n}$ on $\mathcal{O}_K(\mathfrak{X}_n)$, for any $n \geq 1$, are multiplicative.
\end{corollary}
\begin{proof}
Because of \eqref{f:twisted-Ob} the multiplicativity of $\|\ \|_\mathfrak{B}$ implies the multiplicativity of $\|\ \|_\mathfrak{X}$. In view of Prop.\ \ref{LT-iso} the argument for the $\|\ \|_{\mathfrak{X}_n}$ is exactly analogous.
\end{proof}
\begin{corollary}\label{units}
We have $\mathcal{O}_K(\mathfrak{X})^\times = \mathcal{O}_K^b(\mathfrak{X})^\times$.
\end{corollary}
\begin{proof}
For any fixed $f \in \mathcal{O}_K(\mathfrak{X})$ the supremum norms $\|f\|_{\mathfrak{X}(r)}$ on the affinoids $\mathfrak{X}(r)$ form a function in $r$ which is monotonically increasing. If $f$ is a unit this applies to $f^{-1}$ as well. In Cor.\ \ref{multiplic} we have seen that there is a sequence $r_1 < \ldots < r_n < \ldots$ converging to $1$ such that the norms $\|\ \|_{\mathfrak{X}(r_n)}$ are multiplicative. It follows that the sequence $(\|f\|_{\mathfrak{X}(r_n)})_n$ is monotonically increasing and, being equal to $(\|f^{-1}\|_{\mathfrak{X}(r_n)}^{-1})_n$ by multiplicativity, also monotonically decreasing. Hence it is constant, which shows that $f$ and $f^{-1}$ are bounded.
\end{proof}
\begin{lemma}\label{twist-Xn}
For any $n \geq 1$ we have $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n) = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{B} \setminus \mathfrak{B}_n)^{G_K,*}$ and $\mathcal{O}_K^b(\mathfrak{X} \setminus \mathfrak{X}_n) = \mathcal{O}_{\mathbf{C}_p}^b(\mathfrak{B} \setminus \mathfrak{B}_n)^{G_K,*}$.
\end{lemma}
\begin{proof}
As a consequence of Prop.\ \ref{LT-iso} the isomorphism $\kappa$ restricts to isomorphisms
\begin{equation*}
(\mathfrak{B} \setminus \mathfrak{B}_n)_{/\mathbf{C}_p} \xrightarrow{\;\cong\;} (\mathfrak{X} \setminus \mathfrak{X}_n)_{/\mathbf{C}_p} \ .
\end{equation*}
On the other hand every unit $a \in o_L^\times$ preserves $\mathfrak{B}_{n/L}$ and therefore also preserves the admissible open subset $(\mathfrak{B} \setminus \mathfrak{B}_n)_{/L}$. Hence the twisted Galois action on $\mathcal{O}_{\mathbf{C}_p}(\mathfrak{B} \setminus \mathfrak{B}_n)$ is well defined. The assertion now follows in the same way as \eqref{f:twisted-O}.
\end{proof}
\subsection{Properties of $\mathcal{O}_K(\mathfrak{X})$}\label{sec:globalring}
The rigid variety $\mathfrak{X}$ is smooth and one dimensional by \cite{ST} paragraph before Lemma 2.4. As a closed subvariety of an open polydisk $\mathfrak{X}_{/K}$ is quasi-Stein. By \cite{ST} Cor.\ 3.7 the ring $\mathcal{O}_K(\mathfrak{X})$ is an integral domain. Therefore $\mathfrak{X}_{/K}$ satisfies the assumptions of section \ref{sec:prufer}. Hence the ring $\mathcal{O}_K(\mathfrak{X})$ has all the properties which we have established in section \ref{sec:prufer} (and which were listed, without proof, at the end of section 3 in \cite{ST}). We need a few further properties specific to $\mathfrak{X}$.
\begin{lemma}\phantomsection\label{unramified}
\begin{itemize}
\item[i.] The ring homomorphism $\varphi_L : \mathcal{O}_K(\mathfrak{X}) \longrightarrow \mathcal{O}_K(\mathfrak{X})$ makes $\mathcal{O}_K(\mathfrak{X})$ a free module over itself of rank equal to the cardinality of the residue field $o_L/\pi_L o_L$.
\item[ii.] The ring homomorphism $\varphi_L : \mathcal{O}_K(\mathfrak{X}) \longrightarrow \mathcal{O}_K(\mathfrak{X})$ is unramified at every point of $\mathfrak{X}_{/K}$.\footnote{For a torsion point $x$ the subsequent Lemma \ref{zeros}.i, which says that $\mathfrak{m}_x = \log_\mathfrak{X} \mathcal{O}_x$, allows the following elementary argument. Using assertion ii. of the same lemma we have
\begin{equation*}
\varphi_L(\mathfrak{m}_{\pi_L^*(x)}) \mathcal{O}_x = \varphi_L(\log_\mathfrak{X}) \mathcal{O}_x = \pi_L \log_\mathfrak{X} \mathcal{O}_x = \mathfrak{m}_x \ .
\end{equation*}}
\item[iii.] For any $0 \neq f \in \mathcal{O}_K(\mathfrak{X})$ and any point $x \in \mathfrak{X}_{/K}$ we have
\begin{equation*}
\operatorname{div}(\varphi_L(f))(x) = \operatorname{div}(f)(\pi_L^*(x)) \ .
\end{equation*}
\end{itemize}
\end{lemma}
\begin{proof}
i. This is most easily seen by using the Fourier isomorphism which reduces the claim to the corresponding statement about the distribution algebra $D(o_L,K)$. But here the ring homomorphism $\varphi_L$ visibly induces an isomorphism between $D(o_L,K)$ and the subalgebra $D(\pi_L o_L,K)$ of $D(o_L,K)$. Let $R \subseteq o_L$ denote a set of representatives for the cosets in $o_L/\pi_L o_L$. Since $o_L$ is the disjoint union of the open cosets $g + \pi_L o_L$ for $g \in R$, we have $D(o_L,K) = \bigoplus_{g \in R} \delta_g \cdot D(\pi_L o_L,K)$, so that the Dirac distributions $\{\delta_g\}_{g \in R}$ form a basis of $D(o_L,K)$ as a $D(\pi_L o_L,K)$-module.
ii. Since a power of $\varphi_L$ composed with an appropriate automorphism equals $p_*$ it suffices to show that the latter homomorphism is everywhere unramified. But $\mathfrak{X}$ is a closed subvariety of $\mathfrak{X}_0 \cong \mathfrak{B}_1^d$. Hence it further suffices to observe that the endomorphism of $\mathcal{O}_K(\mathfrak{X}_0)$ induced by the $p$th power map $(z_1,\ldots,z_d) \longmapsto (z_1^p,\ldots,z_d^p)$ on $\mathfrak{B}_1^d$ is everywhere unramified, which is clear.
iii. This follows immediately from the second assertion.
\end{proof}
The Lie algebra $\mathfrak{g} = L$ of the locally $L$-analytic group $G = o_L$ embeds $L$-linearly into the distribution algebra $D(G,K)$ via
\begin{align*}
\mathfrak{g} & \longrightarrow D(G,K) \\
\mathfrak{x} & \longmapsto \delta_\mathfrak{x}(f) := (-\mathfrak{x}(f))(0) \ .
\end{align*}
Composing this with the Fourier isomorphism in \cite{ST} Thm.\ 2.3 we obtain the embedding
\begin{align*}
\mathfrak{g} & \longrightarrow \mathcal{O}_K(\mathfrak{X}) \\
\mathfrak{x} & \longmapsto [x \mapsto \delta_\mathfrak{x}(\chi_x) = d\chi_x(\mathfrak{x})]
\end{align*}
where we denote by $\chi_x$ the locally $L$-analytic character of $G$ corresponding to the point $x$ (cf. \cite{ST} \S2).
\begin{definition}
$\log_\mathfrak{X} \in \mathcal{O}_L(\mathfrak{X})$ denotes the holomorphic function which is the image of $1 \in \mathfrak{g} = L$ under the above embedding.
\end{definition}
Using \eqref{f:variety} we compute $d\chi_{z \otimes \beta}(1) = \log(z) \cdot \beta(1) = \log(z^{\beta(1)}) = \log(\chi_{z \otimes \beta}(1))$, and we obtain the formula
\begin{equation*}
\log_\mathfrak{X}(x) = \log(\chi_x(1)) \ .
\end{equation*}
We see that the set of zeros of $\log_\mathfrak{X}$ coincides with the torsion subgroup of $\mathfrak{X}$.
\begin{lemma}\phantomsection\label{zeros}
\begin{itemize}
\item[i.] All zeros of $\log_\mathfrak{X}$ are simple.
\item[ii.] For any $a \in o_L$ we have $a_*(\log_\mathfrak{X}) = a \cdot \log_\mathfrak{X}$.
\end{itemize}
\end{lemma}
\begin{proof}
i. By the commutative diagram after Lemma 3.4 in \cite{ST} the function $\log_\mathfrak{X}$ corresponds under the LT-isomorphism $\kappa$ to the function $\Omega_{t'_0} \log_{LT}$. The simplicity of the zeros of the Lubin-Tate logarithm $\log_{LT}$ is well known.
ii. The locally analytic endomorphism $g \longmapsto ag$ of $G = o_L$ induces on the Lie algebra $\mathfrak{g}$ the multiplication by the scalar $a$. On the other hand, the map $\mathfrak{g} \longrightarrow D(G,K)$ is functorial in $G$. Hence we have $\delta_{a\mathfrak{x}} = a_*(\delta_\mathfrak{x})$.
\end{proof}
\section{The boundary of $\mathfrak{X}$ and $(\varphi_L,\Gamma_L)$-modules}
\subsection{The boundary of $\mathfrak{X}$}
Recall that the complement of an affinoid domain in an affinoid space is an admissible open subset (compare \cite{Sch} \S3 Prop.\ 3(ii)). Thus $\mathfrak{X}(r) \setminus \mathfrak{X}(r_0)$, for any pair of
$r > r_0$ in $(0,1) \cap p^\mathbf{Q}$, is an admissible open subset of $\mathfrak{X}(r)$. As $\{\mathfrak{X}(r)\}_{r}$ is an admissible covering of $\mathfrak{X}$, a subset $S$ of $\mathfrak{X}$ is admissible open if and
only if $S\cap \mathfrak{X}(r)$ is admissible open in $\mathfrak{X}(r)$ for any $r$. Hence $\mathfrak{X} \setminus \mathfrak{X}(r)$ is an admissible open subset of $\mathfrak{X}$ and the rings $\mathcal{O}_K^b(\mathfrak{X} \setminus \mathfrak{X}(r)) \subseteq \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r))$ are defined.
The ring
\begin{equation*}
\mathscr{R}_K(\mathfrak{X}) := \bigcup_r \, \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r))
\end{equation*}
is called the \textit{Robba ring} for $L$ (and $K$). In the case of $L=\mathbf{Q}_p$ this definition coincides with the usual one. Observe that every affinoid subdomain of $\mathfrak{X}$ is contained in some
$\mathfrak{X}(r)$, so $\mathscr{R}_K(\mathfrak{X})$ is isomorphic to
$\varinjlim_\mathfrak{Y} \mathcal{O}_K(\mathfrak{X} \setminus
\mathfrak{Y})$, where $\mathfrak{Y}$ runs through all affinoid subdomains of $\mathfrak{X}$.
Next we define
\begin{equation*}
\mathscr{E}^\dagger_K(\mathfrak{X}) :=
\bigcup_r \, \mathcal{O}_K^b(\mathfrak{X} \setminus \mathfrak{X}(r)) \ .
\end{equation*}
We obviously have $\|\ \|_{\mathfrak{X} \setminus \mathfrak{X}(r')} \leq \|\ \|_{\mathfrak{X} \setminus \mathfrak{X}(r)}$, for any $r' \geq r$ in $(0,1) \cap p^\mathbf{Q}$, and therefore may define
\begin{equation*}
\|f\|_1 := \lim_{r \rightarrow 1} \|f\|_{\mathfrak{X} \setminus \mathfrak{X}(r)}
\end{equation*}
for any $f \in \mathscr{E}^\dagger_K(\mathfrak{X})$. Later on, before Prop.\ \ref{twisted-R-E}, we will see that $\|\ \|_1$ is a multiplicative norm on $\mathscr{E}^\dagger_K(\mathfrak{X})$. We finally put
\begin{equation*}
\mathscr{E}_K(\mathfrak{X}) := \text{completion of $\mathscr{E}^\dagger_K(\mathfrak{X})$ with respect to $\|\ \|_1$}
\end{equation*}
as well as
\begin{equation*}
\mathscr{E}_K^{\leq 1}(\mathfrak{X}) := \{ f \in \mathscr{E}_K(\mathfrak{X}) : \|f\|_1 \leq 1\} \ .
\end{equation*}
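For orientation we add a brief illustration, which will not be used in the sequel, of what these rings look like in the classical case $L = \mathbf{Q}_p$, where $\mathfrak{X}$ may be identified with the open unit disk $\mathfrak{B}$ via $\chi \longmapsto \chi(1) - 1$. A bounded function $f$ on $\mathfrak{X} \setminus \mathfrak{X}(r)$ then is a Laurent series $f = \sum_{n \in \mathbf{Z}} a_n Z^n$ with coefficients in $K$ which converges on the annulus $r < |z| < 1$, and one checks that
\begin{equation*}
\|f\|_{\mathfrak{X} \setminus \mathfrak{X}(r')} = \max \big( \sup_{n \geq 0} |a_n| , \sup_{n < 0} |a_n| (r')^n \big) \quad\text{for $r \leq r' < 1$}, \qquad\text{so that}\qquad \|f\|_1 = \sup_{n \in \mathbf{Z}} |a_n| \ .
\end{equation*}
Hence $\mathscr{E}^\dagger_K(\mathfrak{X})$ consists of the Laurent series with bounded coefficients which converge on some such annulus, and the completion $\mathscr{E}_K(\mathfrak{X})$ consists of all Laurent series $\sum_{n \in \mathbf{Z}} a_n Z^n$ with $\sup_n |a_n| < \infty$ and $a_n \rightarrow 0$ for $n \rightarrow -\infty$; these are the rings familiar from the classical theory of $(\varphi,\Gamma)$-modules.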
There is a natural topology on $\mathscr{R}_K(\mathfrak{X})$ as well as a so-called weak topology on $\mathscr{E}_K(\mathfrak{X})$, which is weaker than the norm topology.
In order to discuss $\mathscr{R}_K(\mathfrak{X})$ we first need to collect a few facts about the rigid topology of the varieties $\mathfrak{X}_0$ and $\mathfrak{X}$. In the following all radii like $r$, $r'$, $r_0$, and $r_1$ will be understood to lie in $(0,1) \cap p^\mathbf{Q}$. Besides $\mathfrak{B}_1(r)$ we also need the open $\mathbf{Q}_p$-disk $\mathfrak{B}_1^-(r)$ of radius $r$ around $1$. We introduce the subsets
\begin{align*}
\mathfrak{X}_0(r) & := \mathfrak{B}_1(r) \otimes_{\mathbf{Z}_p} \operatorname{Hom}_{\mathbf{Z}_p}(o_L,\mathbf{Z}_p), \\
\mathfrak{X}_0^-(r) & := \mathfrak{B}_1^-(r) \otimes_{\mathbf{Z}_p} \operatorname{Hom}_{\mathbf{Z}_p}(o_L,\mathbf{Z}_p), \\
\mathfrak{X}_0(r,r') & := \mathfrak{X}_0(r') \setminus \mathfrak{X}_0^-(r) \quad\text{for $r \leq r'$}
\end{align*}
of $\mathfrak{X}_0$. The first two obviously are admissible open. We also noted already that $\mathfrak{X}_0 \setminus \mathfrak{X}_0(r)$ is admissible open in $\mathfrak{X}_0$. In order to understand $\mathfrak{X}_0(r,r')$ we list the following facts.
\begin{itemize}
\item[--] If $z_1, \ldots, z_d$ are coordinate functions on $\mathfrak{X}_0$ then $\mathfrak{X}_0(r,r')$ is the union of the $d$ affinoid subdomains
\begin{equation*}
\mathfrak{X}_0^{(i)}(r,r') := \{ x \in \mathfrak{X}_0(r') : |z_i(x)| \geq r\}
\end{equation*}
of $\mathfrak{X}_0(r')$. (\cite{BGR} Cor.\ 9.1.4/4)
\item[--] In particular, $\mathfrak{X}_0(r,r')$ is admissible open in $\mathfrak{X}_0(r')$ and hence in $\mathfrak{X}_0$.
\item[--] If $\mathfrak{Y} \longrightarrow \mathfrak{X}_0 \setminus \mathfrak{X}_0(r_0)$ is any morphism from a $\mathbf{Q}_p$-affinoid variety into $\mathfrak{X}_0 \setminus \mathfrak{X}_0(r_0)$ then its image is contained in $\mathfrak{X}_0(r,r')$ for some $r_0 < r \leq r'$. We apply the maximum modulus principle in the following way. Let $\alpha$ denote the morphism in question. First by applying the maximum modulus principle to the functions $\alpha^*(z_i)$ we find an $r' > r_0$ such that $\alpha(\mathfrak{Y})$ is contained in $\mathfrak{X}_0(r')$. Next we observe that $\mathfrak{X}_0(r') \setminus \mathfrak{X}_0(r_0) = \bigcup_{i=1}^d \mathfrak{U}_i$ with $\mathfrak{U}_i := \{ x \in \mathfrak{X}_0(r') : |z_i(x)| > r_0\}$ is an admissible covering (\cite{BGR} Prop.\ 9.1.4/5). Then $\mathfrak{Y} = \bigcup_{i=1}^d \alpha^{-1}(\mathfrak{U}_i)$ is an admissible covering and therefore, necessarily, can be refined into a finite affinoid covering $\mathfrak{Y} = \mathfrak{V}_1 \cup \ldots \cup \mathfrak{V}_m$. For any $1 \leq j \leq m$ let $1 \leq i(j) \leq d$ be such that $\alpha(\mathfrak{V}_j) \subseteq \mathfrak{U}_{i(j)}$. By applying the maximum modulus principle to the function $z_{i(j)}^{-1}$ pulled back to $\mathfrak{V}_j$ we obtain that
\begin{equation*}
\alpha(\mathfrak{V}_j) \subseteq \mathfrak{X}_0^{(i(j))}(r_j,r') \qquad\text{for some $r_0 < r_j < r'$}.
\end{equation*}
We deduce that $\alpha(\mathfrak{Y}) \subseteq \mathfrak{X}_0(r,r')$ with $r := \min_j r_j$.
\item[--] In particular, $\mathfrak{X}_0 \setminus \mathfrak{X}_0(r_0) = \bigcup_{r_0 < r \leq r' < 1} \mathfrak{X}_0(r,r')$ is an admissible covering.
\end{itemize}
With $\mathfrak{X}(r)$ being defined already we also put
\begin{equation*}
\mathfrak{X}^-(r) := \mathfrak{X} \cap \mathfrak{X}_0^-(r)_{/L} \qquad\text{and}\qquad \mathfrak{X}(r,r') := \mathfrak{X} \cap \mathfrak{X}_0(r,r')_{/L} = \mathfrak{X}(r') \setminus \mathfrak{X}^-(r) \ ,
\end{equation*}
and we have a corresponding list of properties:
\begin{itemize}
\item[--] $\mathfrak{X}(r,r')$ is a finite union of affinoid subdomains and is admissible open in the $L$-affinoid variety $\mathfrak{X}(r')$.
\item[--] If $\mathfrak{Y} \longrightarrow \mathfrak{X} \setminus \mathfrak{X}(r_0)$ is any morphism from an $L$-affinoid variety $\mathfrak{Y}$ then its image is contained in $\mathfrak{X}(r,r')$ for some $r_0 < r \leq r'$. In particular,
\end{itemize}
\begin{equation}\label{f:projlim}
\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0)) = \varprojlim_{r_0 < r \leq r' < 1} \mathcal{O}_K(\mathfrak{X}(r,r')) \ .
\end{equation}
For simplicity we now use the fact that $\mathfrak{X}$ and hence each $\mathfrak{X}(r')$ is a connected, smooth, and one dimensional rigid variety (cf.\ \cite{ST}). Therefore (\cite{Fie} Satz 2.1) any finite union of affinoid subdomains in $\mathfrak{X}(r')$ again is an affinoid subdomain. It follows that each $\mathcal{O}_K(\mathfrak{X}(r,r'))$ is a $K$-affinoid algebra which is a Banach algebra with respect to the supremum norm. This together with \eqref{f:projlim} permits us to equip $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0))$ with the structure of a $K$-Fr\'echet algebra by viewing it as the topological projective limit of these affinoid algebras. By construction the restriction maps $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0)) \longrightarrow \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_1))$, for $r_0 \leq r_1$, are continuous.
\begin{proposition}\label{quasi-Stein}
For any $n \geq 1$, the rigid variety $\mathfrak{X} \setminus \mathfrak{X}_n$ is a quasi-Stein space (w.r.t. the admissible covering $\{\mathfrak{X}(s,s')\}$ where $p^{-(1+e/(p-1))/ep^n} < s \leq s' < 1$, $s \in S_n$, and $s' \in \bigcup_{m \geq n} S_m$).
\end{proposition}
\begin{proof}
(The sets $S_n$ were defined in section \ref{sec:LT}.) We have to show that the restriction map $\mathcal{O}_K(\mathfrak{X}(s,s')) \longrightarrow \mathcal{O}_K(\mathfrak{X}(r,r'))$, for any $p^{-(1+e/(p-1))/ep^n} < s \leq r \leq r' \leq s' < 1$ with $r,s \in S_n$ and $r',s' \in \bigcup_{m \geq n} S_m$, has dense image. First of all note that affinoid algebras are Banach spaces of countable type.
In a first step we check that we may assume that $K = \mathbf{C}_p$. Quite generally, let $\beta : B_1 \longrightarrow B_2$ be a continuous linear map between $K$-Banach spaces such that $B_2$ is of countable type. By \cite{NFA} Prop.\ 10.5 there is a closed vector subspace $C \subseteq B_2$ such that $B_2 = \overline{\operatorname{im}(\beta)} \oplus C$ topologically. It follows that
\begin{equation*}
\overline{\operatorname{im}(\operatorname{id} \widehat{\otimes} \beta)} = \mathbf{C}_p\, \widehat{\otimes}_K\, \overline{\operatorname{im}(\beta)} \subseteq \mathbf{C}_p\, \widehat{\otimes}_K\, B_2 = (\mathbf{C}_p\, \widehat{\otimes}_K\, \overline{\operatorname{im}(\beta)}) \oplus (\mathbf{C}_p\, \widehat{\otimes}_K\, C) \ .
\end{equation*}
Moreover, \cite{NFA} Cor.\ 17.5.iii implies that $\mathbf{C}_p\, \widehat{\otimes}_K\, C$ is nonzero if $C$ was. We see that $\beta$ has dense image if $\operatorname{id} \widehat{\otimes}\, \beta$ has.
So for the rest of the proof we let $K = \mathbf{C}_p$. We first observe that, quite generally, $\mathfrak{X}(r_0) = \bigcup_{r_1 < r_0} \mathfrak{X}(r_1)$. Due to the conditions we have imposed on the radii $r, r', s, s'$ we may apply Prop.\ \ref{LT-iso} and we see that $\mathfrak{X}(s,s')_{/\mathbf{C}_p}$ is isomorphic to a one dimensional affinoid annulus in such a way that $\mathfrak{X}(r,r')_{/\mathbf{C}_p}$ becomes isomorphic to a subannulus. It follows that $\mathfrak{X}(r,r')_{/\mathbf{C}_p}$ is a Weierstra{\ss} domain in $\mathfrak{X}(s,s')_{/\mathbf{C}_p}$, and the density statement holds by \cite{BGR} Prop.\ 7.3.4/2.
\end{proof}
The rigid variety $\mathfrak{X} \setminus \mathfrak{X}_n$ is smooth and one dimensional since it is admissible open in $\mathfrak{X}$. It is quasi-Stein by the above Prop.\ \ref{quasi-Stein}. The isomorphism $\mathcal{O}_{\mathbf{C}_p} (\mathfrak{X} \setminus \mathfrak{X}_n) \cong \mathcal{O}_{\mathbf{C}_p} (\mathfrak{B} \setminus \mathfrak{B}_n)$, which is a consequence of Prop.\ \ref{LT-iso}, implies that $\mathcal{O}_K (\mathfrak{X} \setminus \mathfrak{X}_n)$ is an integral domain. Therefore $(\mathfrak{X}\setminus \mathfrak{X}_n)_{/K}$ satisfies the assumptions of section \ref{sec:prufer}. Hence the ring $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$ has all the properties which we have established in section \ref{sec:prufer}.
\begin{corollary}\phantomsection\label{pruefer2}
\begin{itemize}
\item[i.] $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$, for any $n \geq 1$, is a $1 \frac{1}{2}$ generator Pr\"ufer domain.
\item[ii.] $\mathscr{R}_K(\mathfrak{X})$ is a $1 \frac{1}{2}$ generator Pr\"ufer domain. In particular, the assertions of Cor.\ \ref{free-invertible} analogously hold over $\mathscr{R}_K(\mathfrak{X})$.
\end{itemize}
\end{corollary}
\begin{proof}
i. Prop.\ \ref{1 1/2}. ii. This follows by a direct limit argument from i.
\end{proof}
We will view $\mathscr{R}_K(\mathfrak{X})$ as the locally convex inductive limit of the Fr\'echet algebras $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$. We note that the multiplication in $\mathscr{R}_K(\mathfrak{X})$ is only separately continuous.
In order to analyze the functional analytic nature of the $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$ and of $\mathscr{R}_K(\mathfrak{X})$ we need a few preliminary facts.
\begin{lemma}\label{descent-compactoid}
Let $C \subseteq B$ be a bounded subset of a $K$-Banach space $B$; if the image $\iota(C)$ of $C$ under the canonical map $\iota : B \longrightarrow \mathbf{C}_p\, \widehat{\otimes}_K\, B$ is compactoid, then $C$ is compactoid.
\end{lemma}
\begin{proof}
We fix a defining norm on $B$. Note (\cite{NFA} Prop.\ 17.4) that the map $\iota$ is norm preserving. As a consequence of \cite{PGS} Thm.\ 3.9.6 it suffices to show that, for any $t \in (0,1]$ and any sequence $(c_n)_{n \in \mathbf{Z}_{\geq 0}}$ in $C$ which is $t$-orthogonal (\cite{PGS} Def.\ 2.2.14), there exists a $t' \in (0,1]$ such that the sequence $(\iota(c_n))_n$ is $t'$-orthogonal in $\mathbf{C}_p \widehat{\otimes}_K B$.
Let $B_0 \subseteq B$ be the closed subspace generated by $(c_n)_n$, and let $c_0(K)$ denote the standard $K$-Banach space of all $0$-sequences in $K$. By the boundedness of $(c_n)_n$ the linear map
\begin{align*}
\alpha : \qquad c_0(K) & \longrightarrow B_0 \\
(\lambda_0, \lambda_1, \ldots) & \longmapsto \sum_{n \geq 0} \lambda_n c_n
\end{align*}
is well defined and continuous. The $t$-orthogonality then implies that $\alpha$, in fact, is a homeomorphism (cf.\ the proof of \cite{PGS} Thm.\ 2.3.6). Hence
\begin{equation*}
\operatorname{id} \widehat{\otimes}\, \alpha : \mathbf{C}_p\, \widehat{\otimes}_K\, c_0(K) \xrightarrow{\;\cong\;} \mathbf{C}_p\, \widehat{\otimes}_K\, B_0
\end{equation*}
is a homeomorphism. By \cite{BGR} Prop.\ 2.1.7/8 the left hand side is equal to $c_0(\mathbf{C}_p)$. Hence $(\iota(c_n))_n$ is a basis of $\mathbf{C}_p\, \widehat{\otimes}_K\, B_0$ in the sense of \cite{PGS} Def.\ 2.3.10 and hence, by \cite{PGS} Thm.\ 2.3.11, is $t'$-orthogonal in $\mathbf{C}_p\, \widehat{\otimes}_K\, B_0$ for some $t' \in (0,1]$. Using \cite{NFA} Prop.\ 17.4.iii we finally obtain that $(\iota(c_n))_n$ is $t'$-orthogonal in $\mathbf{C}_p\, \widehat{\otimes}_K\, B$.
\end{proof}
For two locally convex $K$-vector spaces $V$ and $W$ we always view the tensor product $V \otimes_K W$ as a locally convex $K$-vector space with respect to the projective tensor product topology (cf.\ \cite{NFA} \S17), and we let $V \widehat{\otimes}_K W$ denote the completion of $V \otimes_K W$.
\begin{lemma}\label{tensor-projlim}
Let $\ldots \rightarrow V_n \rightarrow \ldots \rightarrow V_1$ and $\ldots \rightarrow W_n \rightarrow \ldots \rightarrow W_1$ be two sequences of locally convex $K$-vector spaces, $V := \varprojlim_n V_n$ and $W := \varprojlim_n W_n$ their projective limits, and $\alpha_n : V \rightarrow V_n$ and $\beta_n : W \rightarrow W_n$ the corresponding canonical maps. We then have:
\begin{itemize}
\item[i.] The projective tensor product topology on $V \otimes_K W$ is the initial topology with respect to the maps $\alpha_n \otimes \beta_n : V \otimes_K W \rightarrow V_n \otimes_K W_n$;
\item[ii.] the canonical map $V \otimes_K W \longrightarrow \varprojlim_n V_n \otimes_K W_n$ is a topological embedding;
\item[iii.] suppose that, for any $n \geq 1$, the spaces $V_n$ and $W_n$ are Hausdorff and the maps $\alpha_n$ and $\beta_n$ have dense image; then $V \widehat{\otimes}_K W = \varprojlim_n V_n \widehat{\otimes}_K W_n$.
\end{itemize}
\end{lemma}
\begin{proof}
i. Because of \cite{NFA} Cor.\ 17.5.ii we may assume, by replacing $V_n$, resp.\ $W_n$, by $\alpha_n(V)$, resp.\ $\beta_n(W)$, equipped with the subspace topology, that $V_n = \alpha_n(V)$ and $W_n = \beta_n(W)$ for any $n \geq 1$.
Let $\{L_{n,i}\}_{i \in I}$ and $\{M_{n,j}\}_{j \in J}$ be the families of all open lattices in $V_n$ and $W_n$, respectively (cf.\ \cite{NFA} \S4). Then $\{\alpha_n^{-1}(L_{n,i})\}_{n,i}$ and $\{\beta_n^{-1}(M_{n,j})\}_{n,j}$ are defining families of open lattices in $V$ and $W$, respectively, and $\{\alpha_n^{-1}(L_{n,i}) \otimes_o \beta_n^{-1}(M_{n,j})\}_{n,i,j}$ is a defining family of open lattices in $V \otimes_K W$. We therefore have to show that $\alpha_n^{-1}(L_{n,i}) \otimes_o \beta_n^{-1}(M_{n,j})$ is open for the initial topology of the assertion. Since, by construction, $(\alpha_n \otimes \beta_n)^{-1}(L_{n,i} \otimes_o M_{n,j})$ is open for this initial topology it suffices to prove that
\begin{equation*}
(\alpha_n \otimes \beta_n)^{-1}(L_{n,i} \otimes_o M_{n,j}) \subseteq \alpha_n^{-1}(L_{n,i}) \otimes_o \beta_n^{-1}(M_{n,j})
\end{equation*}
(the opposite inclusion being trivial we then, in fact, have equality). Let $x \in V \otimes_K W$ such that
\begin{equation*}
(\alpha_n \otimes \beta_n)(x) = \sum_{\rho = 1}^r v_\rho \otimes w_\rho \qquad\text{with $v_\rho \in L_{n,i}$ and $w_\rho \in M_{n,j}$}.
\end{equation*}
By our additional assumption we find $v'_\rho \in V$ and $w_\rho' \in W$ such that $\alpha_n(v'_\rho) = v_\rho$ and $\beta_n(w_\rho') = w_\rho$. Hence $x' := \sum_\rho v_\rho' \otimes w_\rho' \in \alpha_n^{-1}(L_{n,i}) \otimes_o \beta_n^{-1}(M_{n,j})$ and
\begin{equation*}
x - x' \in \ker(\alpha_n \otimes \beta_n) = \ker(\alpha_n) \otimes_K W + V \otimes_K \ker(\beta_n) \ .
\end{equation*}
But the right hand side (and therefore $x$) is contained in $\alpha_n^{-1}(L_{n,i}) \otimes_o \beta_n^{-1}(M_{n,j})$. This follows from the general fact that for any $K$-subspace $V_0 \subseteq V$ and any lattice $L \subseteq W$ we have
\begin{equation*}
V_0 \otimes_K W = V_0 \otimes_o W = V_0 \otimes_o L \ .
\end{equation*}
ii. It remains to check that the map in question is injective. Let $x \in V \otimes_K W$ be a nonzero element. We fix a $K$-basis $(e_j)_{j \in J}$ of $W$ and write $x = \sum_{j \in J_0} v_j \otimes e_j$ with an appropriate finite subset $J_0 \subseteq J$ and nonzero elements $v_j \in V$ for $j \in J_0$. We may choose $n$ large enough so that $\alpha_n(v_j) \neq 0$ for any $j \in J_0$. Then $y := \alpha_n \otimes \operatorname{id}_W (x) \neq 0$. We now repeat the argument with $y$ and a $K$-basis of $V_n$ in order to find an $m \geq n$ such that $\operatorname{id}_{V_n} \otimes \beta_m (y) \neq 0$. It follows that $\alpha_n \otimes \beta_m (x) \neq 0$ and hence that $\alpha_m \otimes \beta_m (x) \neq 0$.
iii. With $V_n$ and $W_n$ also $V$ and $W$ as well as $V_n \otimes_K W_n$ (cf.\ \cite{NFA} Cor.\ 17.5.i) and $\varprojlim_n V_n \otimes_K W_n$ are Hausdorff. In particular, $V_n \otimes_K W_n$ is a subspace of $V_n \widehat{\otimes}_K W_n$ and, consequently, $\varprojlim_n V_n \otimes_K W_n$ is a subspace of $\varprojlim_n V_n \widehat{\otimes}_K W_n$. Hence, by ii., the composite map
\begin{equation*}
V \otimes_K W \longrightarrow \varprojlim_n V_n \otimes_K W_n \longrightarrow \varprojlim_n V_n \widehat{\otimes}_K W_n
\end{equation*}
is a topological embedding. Since the right hand space is complete it suffices to check that $V \otimes_K W$ maps to a dense subspace. But our assumption on $\alpha_n$ and $\beta_n$ implies that each map
\begin{equation*}
V \otimes_K W \xrightarrow{\alpha_n \otimes \beta_n} V_n \otimes_K W_n \xrightarrow{\;\subseteq\;} V_n \widehat{\otimes}_K W_n
\end{equation*}
has dense image.
\end{proof}
As a special case of this lemma (together with the Mittag-Leffler theorem \cite{B-TG} II.3.5 Thm.\ 1) we obtain that the functor $\mathbf{C}_p\, \widehat{\otimes}_K .$ commutes with projective limits of sequences of $K$-Fr\'echet spaces such that the transition maps have dense image. Note (\cite{PGS} \S10.6) that the scalar extension $\mathbf{C}_p\, \widehat{\otimes}_K\, V$ of any locally convex $K$-vector space $V$ is a locally convex $\mathbf{C}_p$-vector space.
\begin{proposition}\label{compactoid}
For any $n \geq 1$ we have:
\begin{itemize}
\item[i.] The Fr\'echet space $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$ is nuclear; in particular, it is Montel, hence reflexive, and in $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$ the class of compactoid subsets coincides with the class of bounded subsets;
\item[ii.] $\mathcal{O}_{\mathbf{C}_p}(\mathfrak{X} \setminus \mathfrak{X}_n) = \mathbf{C}_p\, \widehat{\otimes}_K\, \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$, and the inclusion $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n) \subseteq \mathcal{O}_{\mathbf{C}_p}(\mathfrak{X} \setminus \mathfrak{X}_n)$ is topological.
\end{itemize}
\end{proposition}
\begin{proof}
The Fr\'echet space $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$ is the projective limit of the affinoid $K$-Banach spaces $\mathcal{O}_K(\mathfrak{X}(s,s'))$ with the radii $s \leq s'$ as in Prop.\ \ref{quasi-Stein}. In the proof of Prop.\ \ref{quasi-Stein} we have seen that the transition maps $\mathcal{O}_K(\mathfrak{X}(s,s')) \longrightarrow \mathcal{O}_K(\mathfrak{X}(r,r'))$, for $s \leq r \leq r' \leq s'$, in this projective system have dense image. For a $K$-affinoid variety $\mathfrak{Y}$ we have $\mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}) = \mathbf{C}_p\, \widehat{\otimes}_K\, \mathcal{O}_K(\mathfrak{Y})$ by construction of the base field extension (cf.\ \cite{BGR} 9.3.6). Hence we may apply Lemma \ref{tensor-projlim} and obtain the equality in part ii. of the assertion. The second half of ii. then follows by direct inspection or by using \cite{NFA} Cor.\ 17.5.iii.
We now assume that the radii satisfy $s < r \leq r' < s'$ and show that then the restriction map $\mathcal{O}_K(\mathfrak{X}(s,s')) \longrightarrow \mathcal{O}_K(\mathfrak{X}(r,r'))$ is compactoid. Recall that this means (cf.\ \cite{PGS} Thm.\ 8.1.3(vii)) that the unit ball in $\mathcal{O}_K(\mathfrak{X}(s,s'))$ for the supremum norm is mapped to a compactoid subset in $\mathcal{O}_K(\mathfrak{X}(r,r'))$. Because of Lemma \ref{descent-compactoid} we may assume that $K = \mathbf{C}_p$. But then, according to Prop.\ \ref{LT-iso}, $\mathfrak{X}(s,s')$ is isomorphic to a one dimensional affinoid annulus. This reduces us to the following claim. Let $a < b \leq b' < a'$ be any radii in $p^{\mathbf{Q}}$ and let $\mathfrak{B}(a,a') := \{z \in \mathbf{C}_p : a \leq |z| \leq a'\}$; then the restriction map $\mathcal{O}_K(\mathfrak{B}(a,a')) \longrightarrow \mathcal{O}_K(\mathfrak{B}(b,b'))$ is compactoid. Let $a = |u|$ for some $u \in \mathbf{C}_p$. The Mittag-Leffler decomposition (which is directly visible in terms of Laurent series) gives the topological decomposition
\begin{align*}
\mathcal{O}_K(\mathfrak{B}(a^{-1})) \oplus \mathcal{O}_K(\mathfrak{B}(a')) & \xrightarrow{\;\cong\;} \mathcal{O}_K(\mathfrak{B}(a,a')) \\
(F_1,F_2) & \longmapsto uZ^{-1}F_1(Z^{-1}) + F_2(Z) \ .
\end{align*}
It further reduces us to showing that the restriction map $\mathcal{O}_K(\mathfrak{B}(a')) \longrightarrow \mathcal{O}_K(\mathfrak{B}(b'))$ is compactoid. But this is a well known fact (cf.\ \cite{PGS} Thm.\ 11.4.2).
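For the convenience of the reader we include a sketch of this last fact. Denote by $o_K$ the ring of integers of $K$. The unit ball of $\mathcal{O}_K(\mathfrak{B}(a'))$ consists of the power series $f = \sum_{n \geq 0} a_n Z^n$ with $|a_n| (a')^n \leq 1$ for all $n$, while $\|Z^n\|_{\mathfrak{B}(b')} = (b')^n$. Given $\varepsilon > 0$ choose $N \geq 1$ with $(b'/a')^N \leq \varepsilon$ and elements $\lambda_0, \ldots, \lambda_{N-1} \in K$ with $|\lambda_n| \geq (a')^{-n}$. Any such $f$ then decomposes in $\mathcal{O}_K(\mathfrak{B}(b'))$ as
\begin{equation*}
f = \sum_{n=0}^{N-1} a_n Z^n + \sum_{n \geq N} a_n Z^n \in o_K \lambda_0 + o_K \lambda_1 Z + \ldots + o_K \lambda_{N-1} Z^{N-1} + \{ g : \|g\|_{\mathfrak{B}(b')} \leq \varepsilon \} \ ,
\end{equation*}
which exhibits the image of the unit ball as a compactoid subset.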
We have established that $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n) = \varprojlim_m B_m$ is the projective limit of a sequence of $K$-Banach spaces $B_m$ with compactoid transition maps. In order to show that $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$ is nuclear we have to check that any continuous linear map $\alpha : \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n) \longrightarrow V$ into a normed $K$-vector space $V$ is compactoid (cf.\ \cite{PGS} Def.\ 8.4.1(ii)). But there is an $m \geq 1$ such that $\alpha$ factorizes through $B_m$:
\begin{equation*}
\xymatrix{
\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n) \ar[rr]^-{\alpha} \ar[d] \ar[dr]
& & V \\
B_{m+1} \ar[r] & B_m \ar[ur] }
\end{equation*}
Since the transition map $B_{m+1} \rightarrow B_m$ is compactoid it follows that $\alpha$ is compactoid as well. The remaining assertions in i. now follow by \cite{PGS} Cor.\ 8.5.3 and Thm.\ 8.4.5.
\end{proof}
Next we investigate the inductive system of Fr\'echet spaces
\begin{equation*}
\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_1) \longrightarrow \ldots \rightarrow \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n) \rightarrow \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_{n+1}) \rightarrow \ldots
\end{equation*}
with locally convex inductive limit $\mathscr{R}_K(\mathfrak{X})$. Note that all transition maps in this system are injective.
\begin{proposition}\phantomsection\label{regular}
\begin{itemize}
\item[i.] $\mathscr{R}_K(\mathfrak{X}) = \varinjlim_n \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$ is a regular inductive limit, i.e., for each bounded subset $B \subseteq \mathscr{R}_K(\mathfrak{X})$ there exists an $n \geq 1$ such that $B$ is contained in $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$ and is bounded as a subset of the Fr\'echet space $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$.
\item[ii.] $\mathscr{R}_{\mathbf{C}_p}(\mathfrak{X})$ is complete.
\item[iii.] $\mathscr{R}_K(\mathfrak{X})$ is Hausdorff, nuclear, and reflexive.
\end{itemize}
\end{proposition}
\begin{proof}
i. and ii. We first reduce the assertion to the case $K = \mathbf{C}_p$. For this we consider the commutative diagram
\begin{equation*}
\xymatrix{
\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_1) \ar@{^{(}->}[d] \ar[r] & \ldots \ar[r] & \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n) \ar@{^{(}->}[d] \ar[r] & \ldots \ar[r] & \mathscr{R}_K(\mathfrak{X}) \ar[d] \\
\mathcal{O}_{\mathbf{C}_p}(\mathfrak{X} \setminus \mathfrak{X}_1) \ar[r] & \ldots \ar[r] & \mathcal{O}_{\mathbf{C}_p}(\mathfrak{X} \setminus \mathfrak{X}_n) \ar[r] & \ldots \ar[r] & \mathscr{R}_{\mathbf{C}_p}(\mathfrak{X}) }
\end{equation*}
in which all maps are injective and continuous and the two left vertical maps are topological embeddings. Let us suppose that the lower inductive limit is regular, and let $B \subseteq \mathscr{R}_K(\mathfrak{X})$ be a bounded subset. Then $B$ is bounded in $\mathscr{R}_{\mathbf{C}_p}(\mathfrak{X})$ and hence, by assumption, is bounded in some $\mathcal{O}_{\mathbf{C}_p}(\mathfrak{X} \setminus \mathfrak{X}_n)$. Using Remark \ref{Galois-fixed} we see that $B \subseteq \mathscr{R}_K(\mathfrak{X}) = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{X})^{G_K}$ and hence $B \subseteq \mathcal{O}_{\mathbf{C}_p}(\mathfrak{X} \setminus \mathfrak{X}_n)^{G_K} = \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$.
In the following we therefore assume that $K = \mathbf{C}_p$. By Lemma \ref{twist-Xn} we now may replace $\mathfrak{X}$ and $\mathfrak{X}_n$ by $\mathfrak{B}$ and $\mathfrak{B}_n$, respectively. This time the Mittag-Leffler decomposition is of the form
\begin{equation*}
\mathcal{O}_{\mathbf{C}_p}(\mathfrak{B}^-(r_n)) \oplus \mathcal{O}_{\mathbf{C}_p}(\mathfrak{B}) \xrightarrow{\;\cong\;} \mathcal{O}_{\mathbf{C}_p}(\mathfrak{B} \setminus \mathfrak{B}_n)
\end{equation*}
with $\mathfrak{B}^-(r_n)$ denoting the open disk of radius $r_n := p^{1/e(q-1)q^{en-1}}$ around $0$. Hence
\begin{equation*}
\mathscr{R}_{\mathbf{C}_p}(\mathfrak{X}) \cong \mathscr{R}_{\mathbf{C}_p}(\mathfrak{B}) = \big( \varinjlim_n \mathcal{O}_{\mathbf{C}_p}(\mathfrak{B}^-(r_n)) \big) \oplus \mathcal{O}_{\mathbf{C}_p}(\mathfrak{B}) \ .
\end{equation*}
This reduces us to showing that $\varinjlim_n \mathcal{O}_{\mathbf{C}_p}(\mathfrak{B}^-(r_n))$ is a regular inductive limit which, moreover, is complete. But the restriction maps $\mathcal{O}_{\mathbf{C}_p}(\mathfrak{B}^-(r_n)) \longrightarrow \mathcal{O}_{\mathbf{C}_p}(\mathfrak{B}^-(r_{n+1}))$ go through two affinoid disks for which the corresponding restriction maps are compactoid as recalled in the proof of Prop.\ \ref{compactoid}. Hence $\varinjlim_n \mathcal{O}_{\mathbf{C}_p}(\mathfrak{B}^-(r_n))$ is a compactoid inductive limit and, in particular, is regular and complete by \cite{PGS} Thm.\ 11.3.5.
iii. This follows from i. and Prop.\ \ref{compactoid}.i and \cite{PGS} Thm.\ 11.2.4(ii), Thm.\ 8.5.7(vi), and Cor.\ 11.2.15.
\end{proof}
The reflexivity of $\mathscr{R}_K(\mathfrak{X})$ already implies that $\mathscr{R}_K(\mathfrak{X})$ is quasicomplete. To see that it, in fact, is always complete, we need an additional argument. Since $\mathscr{R}_K(\mathfrak{X})$ is an inductive limit it is not surprising that the inductive tensor product topology will play a role. For two locally convex $K$-vector spaces $V$ and $W$ let $V \otimes_{K,\iota} W$ denote their tensor product equipped with the inductive tensor product topology (cf.\ \cite{NFA} \S17) and let $V \widehat{\otimes}_{K,\iota} W$ denote its completion. Note that for Fr\'echet spaces $V$ and $W$ we have $V \otimes_K W = V \otimes_{K,\iota} W$ (cf. \cite{NFA} Prop.\ 17.6).
\begin{proposition}\phantomsection\label{complete}
\begin{itemize}
\item[i.] $\mathscr{R}_{\mathbf{C}_p}(\mathfrak{X}) = \mathbf{C}_p\, \widehat{\otimes}_{K,\iota}\, \mathscr{R}_K(\mathfrak{X})$.
\item[ii.] $\mathscr{R}_K(\mathfrak{X}) \subseteq \mathscr{R}_{\mathbf{C}_p}(\mathfrak{X})$ is a topological inclusion.
\item[iii.] $\mathscr{R}_K(\mathfrak{X}) = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{X})^{G_K}$ is closed in $\mathscr{R}_{\mathbf{C}_p}(\mathfrak{X})$.
\item[iv.] $\mathscr{R}_K(\mathfrak{X})$ is complete.
\end{itemize}
\end{proposition}
\begin{proof}
i. So far we know from Prop.\ \ref{compactoid}.ii and Prop.\ \ref{regular} that
\begin{equation*}
\mathscr{R}_{\mathbf{C}_p}(\mathfrak{X}) = \varinjlim_n\, \mathcal{O}_{\mathbf{C}_p}(\mathfrak{X} \setminus \mathfrak{X}_n)
= \varinjlim_n\, \mathbf{C}_p\, \widehat{\otimes}_K\, \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)
= \varinjlim_n\, \mathbf{C}_p\, \widehat{\otimes}_{K,\iota}\, \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)
\end{equation*}
is Hausdorff and complete. The inductive tensor product commutes with locally convex inductive limits (\cite{Eme} Lemma 1.1.30) so that we have
\begin{equation*}
\varinjlim_n\, \mathbf{C}_p \otimes_{K,\iota} \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n) = \mathbf{C}_p \otimes_{K,\iota} \mathscr{R}_K(\mathfrak{X})
\end{equation*}
and hence
\begin{equation*}
\big( \varinjlim_n\, \mathbf{C}_p \otimes_{K,\iota} \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n) \big)^{\widehat{}} = \mathbf{C}_p \widehat{\otimes}_{K,\iota} \mathscr{R}_K(\mathfrak{X}) \ .
\end{equation*}
It therefore remains to check that, for any inductive system $(E_n)_n$ of locally convex vector spaces such that the locally convex inductive limit $\varinjlim_n \widehat{E_n}$ of the completions $\widehat{E_n}$ is Hausdorff and complete, we have
\begin{equation*}
\varinjlim_n \widehat{E_n} = \widehat{\varinjlim_n E_n} \ .
\end{equation*}
By the universal properties we have a commutative diagram of natural continuous maps
\begin{equation*}
\xymatrix{
\varinjlim_n \widehat{E_n} \ar[rr]^{\gamma}
& & \widehat{\varinjlim_n E_n} \\
& \varinjlim_n E_n , \ar[ul]^{\alpha} \ar[ur] }
\end{equation*}
all three of which have dense image. By our assumption on the upper left term the map $\alpha$ extends uniquely to a continuous map $\beta : \widehat{\varinjlim_n E_n} \longrightarrow \varinjlim_n \widehat{E_n}$. Of course, $\beta$ has dense image as well. On the other hand it necessarily satisfies $\gamma \circ \beta = \operatorname{id}$. Since $\varinjlim_n \widehat{E_n}$ is assumed to be Hausdorff this implies that $\beta$ has closed image. It follows that $\beta$ is bijective and therefore that $\beta$ and $\gamma$ are topological isomorphisms.
ii. This is a consequence of i. and \cite{NFA} Cor.\ 17.5.iii.
iii. The identity follows from Remark \ref{Galois-fixed}. The second part of the assertion then is immediate from the fact that $G_K$ acts on $\mathscr{R}_{\mathbf{C}_p}(\mathfrak{X})$ by continuous (semilinear) automorphisms.
iv. This follows from ii./iii. and Prop.\ \ref{regular}.iii.
\end{proof}
\begin{corollary}\label{scalar-ext-R}
$\mathscr{R}_E(\mathfrak{X}) = E\, \widehat{\otimes}_{K,\iota}\, \mathscr{R}_K(\mathfrak{X})$ for any complete intermediate field $K \subseteq E \subseteq \mathbf{C}_p$.
\end{corollary}
\begin{proof}
Now that we know from Prop.\ \ref{complete}.iv that $\mathscr{R}_E(\mathfrak{X})$ is complete we may repeat the argument in the proof of Prop.\ \ref{complete}.i with $E$ instead of $\mathbf{C}_p$.
\end{proof}
We summarize the additional features of the LT-isomorphism, which we have established by now:
--- As explained in the proof of Lemma \ref{twist-Xn} the isomorphism $\kappa$ induces compatible topological identifications
\begin{equation*}
\mathcal{O}_{\mathbf{C}_p}(\mathfrak{X} \setminus \mathfrak{X}_n) = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{B} \setminus \mathfrak{B}_n) \ ,
\end{equation*}
which restrict to isometric identifications between the subrings of bounded functions. The twisted Galois action is well defined on the right hand side and corresponds to the standard Galois action on the left hand side.
--- By passing to the inductive limit we obtain the twisted Galois action by topological automorphisms on $\mathscr{R}_{\mathbf{C}_p}(\mathfrak{B})$ as well as the topological identification
\begin{equation*}
\mathscr{R}_{\mathbf{C}_p}(\mathfrak{X}) = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{B}) \ .
\end{equation*}
--- By restriction from $\mathscr{R}_{\mathbf{C}_p}(.)$ to $\mathscr{E}_{\mathbf{C}_p}^\dagger(.)$ we obtain the twisted Galois action by $\|\ \|_1$-isometries on $\mathscr{E}_{\mathbf{C}_p}^\dagger(\mathfrak{B})$ as well as the $\|\ \|_1$-isometric identification
\begin{equation*}
\mathscr{E}_{\mathbf{C}_p}^\dagger(\mathfrak{X}) = \mathscr{E}_{\mathbf{C}_p}^\dagger(\mathfrak{B}) \ .
\end{equation*}
Both, by completion, extend to $\mathscr{E}_{\mathbf{C}_p}(.)$. Since we know that $\|\ \|_1$ is a multiplicative norm on the right hand side it must be a multiplicative norm on $\mathscr{E}_K(\mathfrak{X})$ as well.
\begin{proposition}\phantomsection\label{twisted-R-E}
$\mathscr{R}_{\mathbf{C}_p}(\mathfrak{X}) = \mathbf{C}_p\, \widehat{\otimes}_{K,\iota}\, \mathscr{R}_K(\mathfrak{B})$ and $\mathscr{R}_K(\mathfrak{X}) = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{B})^{G_K,*}$.
\end{proposition}
\begin{proof}
This is a restatement of Prop.\ \ref{complete}.i/iii.
\end{proof}
\subsection{The monoid action}\label{sec:monoid}
For any $a \in o_L$ the map $g \longmapsto ag$ on $G$ is locally $L$-analytic. This induces an action of the multiplicative monoid $o_L \setminus \{0\}$ on the vector spaces of locally analytic functions $C^{an}(G,K) \subseteq C^{an}(G_0,K)$ given by $f \longmapsto a^*(f)(g) := f(ag)$. Obviously, with $\chi \in \widehat{G}(K)$, resp.\ $\in \widehat{G}_0(K)$, also $a^*(\chi)$ is a character in $\widehat{G}(K)$, resp.\ in $\widehat{G}_0(K)$. In this way we obtain actions of the ring $o_L$ on these groups. It is clear that under the bijection \eqref{f:variety} the action on the target, which we just have defined, corresponds to the obvious $o_L$-action on the second factor of the tensor product in the source. This shows that the action on character groups in fact comes from an $o_L$-action on the rigid character varieties $\mathfrak{X}_0$ and $\mathfrak{X}$ which, moreover, respects each of the affinoid subgroups $\mathfrak{X}(r)$. Moreover, from these actions on character varieties we obtain translation actions by the multiplicative monoid $o_L \setminus \{0\}$ on the corresponding rings of global holomorphic functions, which will be denoted by $(a,f) \longmapsto a_*(f)$. Note also that these actions respect the respective subrings of bounded holomorphic functions.
Each unit in $o_L^\times$ preserves $\mathfrak{X}(r)$ and therefore also preserves the admissible open subset $\mathfrak{X} \setminus \mathfrak{X}(r)$. The $o_L^\times$-action on $\mathcal{O}_K^b(\mathfrak{X}) \subseteq \mathcal{O}_K(\mathfrak{X})$ therefore extends to the rings $\mathscr{R}_K(\mathfrak{X})$, $\mathscr{E}_K^\dagger(\mathfrak{X})$, $\mathscr{E}_K(\mathfrak{X})$, and $\mathscr{E}_K^{\leq 1}(\mathfrak{X})$ (being isometric in the norm $\|\ \|_1$).
\begin{lemma}\label{Gamma-cont-O}
The $o_L^\times$-action on $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0))$, for any $r_0 \in [p^{- 1/e -1/(p-1)},1) \cap p^\mathbf{Q}$, is continuous.
\end{lemma}
\begin{proof}
Coming from an action on the variety $\mathfrak{X} \setminus \mathfrak{X}(r_0)$ each unit in $o_L^\times$ acts by a continuous ring automorphism on $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0))$. Since $o_L^\times$ is compact and $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0))$ as a Fr\'echet space is barrelled it remains, by the nonarchimedean Banach-Steinhaus theorem (cf.\ \cite{NFA} Prop.\ 6.15), to show that the orbit maps
\begin{align*}
\rho_f : o_L^\times & \longrightarrow \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0)) \\
a & \longmapsto a_*(f) \ ,
\end{align*}
for any $f \in \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0))$, are continuous. But in the subsequent section \ref{sec:locan} we will establish, under the above assumption on $r_0$, the stronger fact that these orbit maps are even differentiable.
\end{proof}
\begin{lemma}\label{pi}
For any $r \in [p^{-p/(p-1)},1) \cap p^\mathbf{Q}$ we have $(\pi_L^*)^{-1}(\mathfrak{X} \setminus \mathfrak{X}(r)) \supseteq \mathfrak{X} \setminus \mathfrak{X}(r^{1/p})$.
\end{lemma}
\begin{proof}
Suppose there is an $x \in \mathfrak{X} \setminus \mathfrak{X}(r^{1/p})$ such that $\pi_L^*x \in \mathfrak{X}(r)$. Since $\pi_L$ as well as any unit in $o_L^\times$ preserve $\mathfrak{X}(r)$ it follows that $p^*x \in \mathfrak{X}(r)$. This contradicts Lemma 3.3 in \cite{ST}.
\end{proof}
The lemma implies that the action of $\pi_L$ and hence of the full multiplicative monoid $o_L \setminus \{0\}$ extends to the rings $\mathscr{R}_K(\mathfrak{X})$, $\mathscr{E}_K^\dagger(\mathfrak{X})$, $\mathscr{E}_K(\mathfrak{X})$, and $\mathscr{E}_K^{\leq 1}(\mathfrak{X})$. By construction the action of $\pi_L$ on $\mathscr{E}_K(\mathfrak{X})$ is decreasing in the norm $\|\ \|_1$. But the action of $p$, as a consequence of \cite{ST} Lemma 3.3, is isometric. It follows that the full monoid $o_L \setminus \{0\}$ acts isometrically on $\mathscr{E}_K(\mathfrak{X})$.
\begin{lemma}\label{Gamma-cont-R}
The $o_L \setminus \{0\}$-action on $\mathscr{R}_K(\mathfrak{X})$ is continuous.
\end{lemma}
\begin{proof}
Each $a \in o_L^\times$ acts continuously on $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0))$ and therefore gives rise, in the limit, to a continuous automorphism of $\mathscr{R}_K(\mathfrak{X})$. The prime element $\pi_L$, by Lemma \ref{pi}, maps $\mathfrak{X} \setminus \mathfrak{X}(r_0)$ to $\mathfrak{X} \setminus \mathfrak{X}(r_0^p)$, for any $p^{-\frac{1}{p-1}} \leq r_0 < 1$, and therefore again gives rise to a continuous endomorphism of $\mathscr{R}_K(\mathfrak{X})$. The continuity of the orbit maps
\begin{align*}
\rho_f : o_L^\times & \longrightarrow \mathscr{R}_K(\mathfrak{X}) \\
a & \longmapsto a_*(f) \ ,
\end{align*}
for any $f \in \mathscr{R}_K(\mathfrak{X})$, follows immediately from Lemma \ref{Gamma-cont-O}. As a locally convex inductive limit of Fr\'echet spaces $\mathscr{R}_K(\mathfrak{X})$ is barrelled so that the assertion now follows from the nonarchimedean Banach-Steinhaus theorem.
\end{proof}
Observing that $\kappa_{[a](z)} = a^*(\kappa_z)$ one checks that the isomorphism $\kappa : \mathfrak{B}_{/\mathbf{C}_p} \xrightarrow{\;\cong\;} \mathfrak{X}_{/\mathbf{C}_p}$ from section \ref{sec:LT} is equivariant for the $o_L$-action on both sides. It follows that the ring isomorphisms as summarized after Cor.\ \ref{scalar-ext-R} all are equivariant for the actions of the monoid $o_L \setminus \{0\}$.
Traditionally one thinks of an $o_L \setminus \{0\}$-action as an action of the multiplicative group $\Gamma_L := o_L^\times$ together with a commuting endomorphism $\varphi_L$ which represents the action of $\pi_L$. We note that $\varphi_L$ is injective on $\mathscr{E}_K(\mathfrak{X})$ and a fortiori on $\mathcal{O}_K(\mathfrak{X})$.
For later purposes we need to discuss, for any $n \geq 1$, the structure of the $\pi_L^n$-torsion subgroup
\begin{equation*}
\mathfrak{X}[\pi_L^n] := \ker (\mathfrak{X} \xrightarrow{\; (\pi_L^n)^* \;} \mathfrak{X}) \ .
\end{equation*}
\begin{lemma}\label{torsion}
$\mathfrak{X}[\pi_L^n](\mathbf{C}_p) \cong o_L/\pi_L^n o_L$ as $o_L$-modules.
\end{lemma}
\begin{proof}
As a consequence of \cite{ST} Lemma 2.1 the bijection \eqref{f:variety} induces an $o_L$-isomorphism
\begin{equation*}
\mathfrak{X}[\pi_L^n](\mathbf{C}_p) \cong \ker\big[ \mathfrak{B}_1(\mathbf{C}_p) \otimes_{\mathbf{Z}_p} \operatorname{Hom}_{\mathbf{Z}_p}(o_L, {\mathbf{Z}_p}) \xrightarrow{\; \operatorname{id} \otimes \pi_L^n \;}
\mathfrak{B}_1(\mathbf{C}_p) \otimes_{\mathbf{Z}_p} \operatorname{Hom}_{\mathbf{Z}_p}(o_L, {\mathbf{Z}_p}) \big] \ .
\end{equation*}
The short exact sequence $0 \longrightarrow o_L \xrightarrow{\; \pi_L^n \;} o_L \longrightarrow o_L/\pi_L^n o_L \longrightarrow 0$ gives rise to the short exact sequence
\begin{equation*}
0 \longrightarrow \operatorname{Hom}_{\mathbf{Z}_p}(o_L, {\mathbf{Z}_p}) \xrightarrow{\; \pi_L^n \;} \operatorname{Hom}_{\mathbf{Z}_p}(o_L, {\mathbf{Z}_p}) \longrightarrow \operatorname{Ext}^1_{\mathbf{Z}_p}(o_L/\pi_L^n o_L, {\mathbf{Z}_p}) \longrightarrow 0
\end{equation*}
which in turn leads to the exact sequence
\begin{multline*}
0 \longrightarrow \operatorname{Tor}^{\mathbf{Z}_p}_1(\mathfrak{B}_1(\mathbf{C}_p), \operatorname{Ext}^1_{\mathbf{Z}_p}(o_L/\pi_L^n o_L, {\mathbf{Z}_p})) \longrightarrow \\
\mathfrak{B}_1(\mathbf{C}_p) \otimes_{\mathbf{Z}_p} \operatorname{Hom}_{\mathbf{Z}_p}(o_L, {\mathbf{Z}_p}) \xrightarrow{\; \operatorname{id} \otimes \pi_L^n \;}
\mathfrak{B}_1(\mathbf{C}_p) \otimes_{\mathbf{Z}_p} \operatorname{Hom}_{\mathbf{Z}_p}(o_L, {\mathbf{Z}_p}) \ .
\end{multline*}
We deduce that $\mathfrak{X}[\pi_L^n](\mathbf{C}_p) \cong \operatorname{Tor}^{\mathbf{Z}_p}_1(\mathfrak{B}_1(\mathbf{C}_p), \operatorname{Ext}^1_{\mathbf{Z}_p}(o_L/\pi_L^n o_L, {\mathbf{Z}_p}))$. Since $\operatorname{Hom}_{\mathbf{Z}_p}(o_L, {\mathbf{Z}_p})$ is a free $o_L$-module of rank one we have $\operatorname{Ext}^1_{\mathbf{Z}_p}(o_L/\pi_L^n o_L, {\mathbf{Z}_p}) \cong o_L/\pi_L^n o_L$ as $o_L$- and, a fortiori, as $\mathbf{Z}_p$-module. On the other hand the torsion subgroup of $\mathfrak{B}_1(\mathbf{C}_p)$ is the group of $p$-power roots of unity which is isomorphic to $\mathbf{Q}_p / \mathbf{Z}_p$. It follows that
\begin{align*}
\mathfrak{X}[\pi_L^n](\mathbf{C}_p) & \cong \operatorname{Tor}^{\mathbf{Z}_p}_1(\mathfrak{B}_1(\mathbf{C}_p), o_L/\pi_L^n o_L) \\
& \cong \operatorname{Tor}^{\mathbf{Z}_p}_1(\mathbf{Q}_p / \mathbf{Z}_p, o_L/\pi_L^n o_L) \\
& = \text{torsion subgroup of $o_L / \pi_L^n o_L$} \\
& = o_L / \pi_L^n o_L \ .
\end{align*}
\end{proof}
\subsection{The action of $\operatorname{Lie}(\Gamma_L)$}\label{sec:locan}
We begin by setting up some axiomatic formalism. In the following we view $\Gamma_L = o_L^\times$ as a locally $L$-analytic group. Its Lie algebra is $L$. Let $(B,\|\ \|_B)$ be a $K$-Banach space which carries a $\Gamma_L$-action by continuous $K$-linear automorphisms. We consider the following condition:
\begin{align}\label{f:condition-locan}
\text{There is an $m \geq 2$ such that, in the operator norm on $B$, we have} \\
\text{$\|\gamma - 1\| < p^{-\frac{1}{p-1}}$ for any $\gamma \in 1 + p^m o_L$.} \qquad\qquad\qquad \nonumber
\end{align}
If $B$ has an orthogonal basis $(v_i)_{i \in I}$ then \eqref{f:condition-locan} follows from the existence of a constant $0 < C < p^{-\frac{1}{p-1}}$ such that $\|(\gamma - 1)(v_i)\|_B \leq C \|v_i\|_B$ for any $\gamma \in 1 + p^m o_L$ and any $i \in I$.
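Indeed, for completeness we include the short verification: any $b \in B$ has a convergent expansion $b = \sum_{i \in I} b_i v_i$ with $\|b\|_B = \sup_{i \in I} |b_i| \|v_i\|_B$, and since $\gamma$ acts by a continuous operator we may compute
\begin{equation*}
\|(\gamma - 1)(b)\|_B = \big\| \sum_{i \in I} b_i (\gamma - 1)(v_i) \big\|_B \leq \sup_{i \in I} |b_i|\, \|(\gamma - 1)(v_i)\|_B \leq C\, \sup_{i \in I} |b_i| \|v_i\|_B = C \|b\|_B \ ,
\end{equation*}
so that indeed $\|\gamma - 1\| \leq C < p^{-\frac{1}{p-1}}$ in the operator norm.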
\begin{lemma}\label{locan}
Suppose that \eqref{f:condition-locan} holds; then the $\Gamma_L$-action on $B$ is locally $\mathbf{Q}_p$-analytic.
\end{lemma}
\begin{proof}
The main point is to show that the orbit maps
\begin{align*}
\rho_b : \Gamma_L & \longrightarrow B \\
\gamma & \longmapsto \gamma b,
\end{align*}
for any $b \in B$, are locally $\mathbf{Q}_p$-analytic. Since the multiplication in $\Gamma_L$ is locally $L$-analytic it suffices to show that $\rho_b | 1 + p^m o_L$ with $m$ as in the assumption is locally $\mathbf{Q}_p$-analytic. On the one hand we fix a basis $\gamma_1, \ldots, \gamma_r$ of the free $\mathbf{Z}_p$-module $1 + p^m o_L$ of rank $r := [L:\mathbf{Q}_p]$ and write $1 + p^m o_L \ni \gamma = \gamma_1^{x_1(\gamma)} \cdot \ldots \cdot \gamma_r^{x_r(\gamma)}$. Then $\gamma \longmapsto (x_1(\gamma), \ldots , x_r(\gamma))$ is a global $\mathbf{Q}_p$-chart of $1 + p^m o_L$. On the other hand we consider the $K$-Banach algebra $A$ of continuous linear endomorphisms of $B$ with the operator norm. We have the bijections
\begin{align*}
\{a' \in A : \|a'\| < p^{-\frac{1}{p-1}} \} & \xrightarrow{\;\exp\;} \{a \in A : \|a - 1\| < p^{-\frac{1}{p-1}} \} \\
& \xleftarrow{\;\log\;} \\
a' & \longmapsto \exp(a') = \sum_{n \geq 0} \frac{1}{n!} (a')^n \\
- \sum_{n \geq 1} \frac{1}{n} (1-a)^n = \log (1+(a-1)) = \log(a) & \longleftarrow a
\end{align*}
which are inverse to each other and which ``preserve'' norms (cf.\ \cite{B-GAL} II\S8.4 Prop.\ 4). By our assumption we have in $A$ the expansion
\begin{align}\label{f:expansion}
\gamma & = \exp(\log(\gamma)) = \sum_{n \geq 0} \frac{1}{n!} \log(\gamma)^n = \sum_{n \geq 0} \frac{1}{n!} \big( \sum_{i=1}^r x_i(\gamma) \log(\gamma_i)\big)^n \\
& = \sum_{(n_1, \ldots, n_r) \in \mathbf{Z}_{\geq 0}^r} \frac{c_{(n_1, \ldots,n_r)}}{(n_1 + \ldots + n_r)!} \prod_{i=1}^r (\log(\gamma_i))^{n_i} \prod_{i=1}^r x_i(\gamma)^{n_i} \nonumber
\end{align}
on $1 + p^m o_L$, where the integers $c_{(n_1,\ldots,n_r)}$ denote the usual multinomial coefficients. Since $d := \max_i \|\log(\gamma_i)\| < p^{-\frac{1}{p-1}}$ by assumption the coefficient operators
\begin{equation*}
\nabla_{B,(n_1,\ldots,n_r)} := \frac{c_{(n_1, \ldots,n_r)}}{(n_1 + \ldots + n_r)!} \prod_{i=1}^r (\log(\gamma_i))^{n_i}
\end{equation*}
satisfy
\begin{align*}
\|\nabla_{B,(n_1,\ldots,n_r)}\| & \leq |(n_1 + \ldots + n_r)!|^{-1} d^{n_1 + \ldots + n_r} \\
& = |(n_1 + \ldots + n_r)!|^{-1} |p|^{\frac{n_1 + \ldots + n_r}{p-1}} (d/|p|^{\frac{1}{p-1}})^{n_1 + \ldots + n_r} \\
& \leq (d/|p|^{\frac{1}{p-1}})^{n_1 + \ldots + n_r} \xrightarrow{\; n_1 + \ldots + n_r \rightarrow \infty\;} 0
\end{align*}
where the last inequality comes from $|p|^{\frac{n}{p-1}} \leq |n!|$ (cf.\ \cite{B-GAL} II\S8.1 Lemma 1). Evaluating \eqref{f:expansion} in $b \in B$ therefore produces an expansion of $\rho_b$ into a power series convergent on $1 + p^m o_L$.
It follows in particular that the orbit maps $\rho_b$ are continuous, i.e., that the $\Gamma_L$-action on $B$ is separately continuous. But, since $\Gamma_L$ is compact, the nonarchimedean Banach-Steinhaus theorem (cf.\ \cite{NFA} Prop.\ 6.15) then implies that the action $\Gamma_L \times B \rightarrow B$ is jointly continuous.
\end{proof}
The operators $\nabla_{B,(n_1, \ldots, n_r)}$ on $B$ which we have constructed in the proof of Lemma \ref{locan} have the following conceptual interpretation. First we observe that
\begin{equation*}
\nabla_{B,(n_1, \ldots, n_r)} = \prod_{i=1}^r \nabla_{B,(\ldots,0,n_i,0, \ldots)} = \prod_{i=1}^r \frac{1}{n_i !} \nabla_{B,\underline{i}}^{n_i}
\end{equation*}
where $\underline{i} := (\ldots,0,1,0,\ldots)$ is the multi-index with entry one in the $i$th place. On the other hand the derived action of the Lie algebra of $\Gamma_L$ on $B$ is given by
\begin{equation*}
\mathfrak{x} b = \frac{d}{dt} \exp_{\Gamma_L} (t \mathfrak{x}) b_{\big| t=0} \qquad\text{for $\mathfrak{x} \in \operatorname{Lie}(\Gamma_L) = L$ and $b \in B$}
\end{equation*}
where $\exp_{\Gamma_L}$ is an exponential map for $\Gamma_L$ (cf.\ \cite{Fea} \S3.1) and where $t$ varies in a small neighbourhood of zero in $\mathbf{Z}_p$. Note that the usual exponential function $\exp : \operatorname{Lie}(\Gamma_L) = L \dashrightarrow o_L^\times = \Gamma_L$ is an exponential map for the locally $L$-analytic group $\Gamma_L$ (cf.\ \cite{B-GAL} III\S4.3 Ex.\ 2). We put $\mathfrak{x}_i := \log \gamma_i$. Using the expansion \eqref{f:expansion} we compute
\begin{align*}
\exp_{\Gamma_L} (t \mathfrak{x}_j) b & = \sum_{(n_1,\ldots,n_r) \in \mathbf{Z}_{\geq 0}^r} \nabla_{B,(n_1, \ldots, n_r)} (b) \prod_{i=1}^r x_i(\exp (t \mathfrak{x}_j))^{n_i} \\
& = \sum_{(n_1,\ldots,n_r) \in \mathbf{Z}_{\geq 0}^r} \nabla_{B,(n_1, \ldots, n_r)} (b) \prod_{i=1}^r x_i(\exp (\mathfrak{x}_j)^t)^{n_i} \\
& = \sum_{(n_1,\ldots,n_r) \in \mathbf{Z}_{\geq 0}^r} \nabla_{B,(n_1, \ldots, n_r)} (b) \prod_{i=1}^r x_i(\gamma_j^t)^{n_i} \\
& = \sum_{n=0}^\infty \frac{1}{n!} \nabla_{B,\underline{j}}^n (b) t^n
\end{align*}
and hence
\begin{equation}\label{f:nabla}
\frac{d}{dt} \exp_{\Gamma_L} (t \mathfrak{x}_j) b_{\big| t=0} = \nabla_{B,\underline{j}}(b) \ .
\end{equation}
This proves the following.
\begin{corollary}\label{locan2}
If \eqref{f:condition-locan} holds, then the operator $\nabla_{B,\underline{j}}$ coincides with the derived action of $\log \gamma_j \in L = \operatorname{Lie}(\Gamma_L)$ on $B$.
\end{corollary}
The commutativity of $\Gamma_L$ implies that the operators $\nabla_{B,(n_1,\ldots,n_r)}$ commute with each other. It follows that the derived $\operatorname{Lie}(\Gamma_L)$-action on $B$ is through commuting operators. In the following we denote by $\nabla_B$, or simply by $\nabla$, the operator corresponding to the element $1 \in L = \operatorname{Lie}(\Gamma_L)$.
We remark that, although $\Gamma_L$ is a locally $L$-analytic group, the action on $B$ only is locally $\mathbf{Q}_p$-analytic so that the derived action $\operatorname{Lie}(\Gamma_L) \longrightarrow \operatorname{End}_K(B)$ only is $\mathbf{Q}_p$-linear in general.
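We note in passing how $\nabla_B$ is expressed in terms of the operators constructed above. The elements $\log(\gamma_1), \ldots, \log(\gamma_r)$ form a $\mathbf{Q}_p$-basis of $L = \operatorname{Lie}(\Gamma_L)$, so we may write $1 = \sum_{i=1}^r c_i \log(\gamma_i)$ with $c_i \in \mathbf{Q}_p$; since the derived action is $\mathbf{Q}_p$-linear, Cor.\ \ref{locan2} then gives
\begin{equation*}
\nabla_B = \sum_{i=1}^r c_i \nabla_{B,\underline{i}} \ .
\end{equation*}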
Next we consider the situation where $B$ in addition is a Banach algebra such that $\|\ \|_B$ is submultiplicative and $M$ is a finitely generated projective $B$-module. We choose generators $e_1, \ldots, e_d$ of $M$ and consider the $B$-module homomorphism
\begin{align*}
B^d & \longrightarrow M \\
(b_1, \ldots, b_d) & \longmapsto \sum_{i=1}^d b_i e_i \ .
\end{align*}
On $B^d$ we have the maximum norm $\|(b_1, \ldots, b_d)\|_{B^d} := \max_i \|b_i\|_B$ and on $M$ the corresponding quotient norm
\begin{equation*}
\|x\|_M := \inf \{\max_i \|b_i\|_B : x = \sum_{i=1}^d b_i e_i\} \ .
\end{equation*}
To see that $\|\ \|_M$ indeed is a norm we observe that the above map, by the projectivity of $M$, is the projection map onto the first summand of a suitable $B$-module isomorphism $B^d \xrightarrow{\;\cong\;} M \oplus M'$. This isomorphism is continuous if we equip $M$ and $M'$ with the corresponding quotient topologies. By the open mapping theorem it has to be a topological isomorphism. Hence the quotient topology on $M$ is Hausdorff and complete (cf.\ \cite{NFA} Prop.\ 8.3). In particular, $\|\ \|_M$ is a norm. This construction makes $M$ into a $K$-Banach space whose topology is independent of the choice of the generators. We assume that $\Gamma_L$ acts continuously by $K$-linear automorphisms on $M$ which are semilinear with respect to the $\Gamma_L$-action on $B$.
\begin{proposition}\label{locan3}
If \eqref{f:condition-locan} holds for $B$, then also the $\Gamma_L$-action on $M$ satisfies \eqref{f:condition-locan} and, in particular, is locally $\mathbf{Q}_p$-analytic.
\end{proposition}
\begin{proof}
Let the natural number $m$ be as in the condition \eqref{f:condition-locan} for $B$ in Lemma \ref{locan}. By the continuity of the $\Gamma_L$-action on $M$ we find an $m' \geq m$ such that $\|(\gamma - 1)(e_i)\|_M < p^{-2}$ (and hence $\|\gamma (e_i)\|_M \leq 1$) for any $\gamma \in 1 + p^{m'} o_L$ and any $1 \leq i \leq d$. For any such $\gamma$ and any $x = \sum_{i=1}^d b_i e_i \in M$ we then compute
\begin{align*}
\|(\gamma - 1)(x)\|_M & = \| \sum_i (\gamma - 1)(b_i e_i)\|_M \\
& = \| \sum_i \big( (\gamma - 1)(b_i) \gamma(e_i) + b_i (\gamma - 1)(e_i) \big) \|_M \\
& \leq \max ( \max_i \|(\gamma - 1)\| \|b_i\|_B \|\gamma (e_i)\|_M , \max_i \|b_i\|_B \|(\gamma - 1)(e_i)\|_M) \\
& \leq \max (\|\gamma - 1\|, p^{-2}) \cdot \max_i \|b_i\|_B \ .
\end{align*}
It follows that
\begin{equation*}
\|(\gamma - 1)(x)\|_M \leq \max (\|\gamma - 1\|, p^{-2}) \cdot \|x\|_M < p^{-\frac{1}{p-1}} \|x\|_M \ .
\end{equation*}
\end{proof}
In a first step we apply this formalism to the $o_L^\times$-action on the open disk $\mathfrak{B}$. For any $r_0 \leq r$ in $(0,1) \cap p^{\mathbf{Q}}$ let $\mathfrak{B}(r_0,r)_{/L}$ denote the $L$-affinoid annulus of inner, resp.\ outer, radius $r_0$, resp.\ $r$, around zero. It is preserved by the $o_L^\times$-action.
\begin{proposition}\label{annuli-locan}
The $\Gamma_L$-actions on $\mathcal{O}_K (\mathfrak{B}(r))$ and on $\mathcal{O}_K (\mathfrak{B}(r_0,r))$, induced by the Lubin-Tate formal $o_L$-module, verify the condition \eqref{f:condition-locan} and are locally $L$-analytic.
\end{proposition}
\begin{proof}
The other case being simpler we only treat the $\Gamma_L$-action on $\mathcal{O}_K (\mathfrak{B}(r_0,r))$. First of all we verify the condition \eqref{f:condition-locan}. The elements of $\mathcal{O}_K (\mathfrak{B}(r_0,r))$ are the Laurent series in the coordinate $Z$ which converge on $\mathfrak{B}(r_0,r)(\mathbf{C}_p)$. The maximum modulus principle tells us that $\{Z^n\}_{n \in \mathbf{Z}}$ is an orthogonal basis of the $K$-Banach space $\mathcal{O}_K (\mathfrak{B}(r_0,r))$ with respect to the supremum norm $\|\ \|$. It therefore suffices to find an $m \geq 2$ such that
\begin{equation*}
\|\gamma (Z^n) - Z^n \| \leq p^{-2} \|Z^n\| \qquad\text{for any $n \in \mathbf{Z}$ and any $\gamma \in 1 + p^m o_L$}.
\end{equation*}
We have $\gamma(Z) = [\gamma](Z) = \gamma Z + \ldots = Zu_\gamma$ with $u_\gamma = \gamma + \ldots \in o_L[[Z]]^\times$, and we compute
\begin{equation*}
\gamma(Z^n) - Z^n = Z^n(u_\gamma^n - 1)
=
\begin{cases}
Z^n(u_\gamma - 1)(u_\gamma^{n-1} + \ldots + 1) & \text{if $n > 0$} \\
Z^n(u_\gamma - 1)(-u_\gamma^n - \ldots - u_\gamma^{-1}) & \text{if $n < 0$}.
\end{cases}
\end{equation*}
It follows that $\|\gamma (Z^n) - Z^n \| \leq \|u_\gamma - 1\| \|Z^n\|$. By the proof of Lemma 2.1.1 in \cite{KR} there exists an $m \geq 2$ such that $\|u_\gamma - 1\| = \| \frac{\gamma(Z)}{Z} - 1\| \leq p^{-2}$ for any $\gamma \in 1 + p^m o_L$. This shows that \eqref{f:condition-locan} holds true. Lemma \ref{locan} then tells us that the $\Gamma_L$-action is locally $\mathbf{Q}_p$-analytic.
Next we establish that the derived $\operatorname{Lie}(\Gamma_L)$-action is $L$-bilinear. Since $\operatorname{Lie}(\Gamma_L)$ acts by continuous derivations it suffices to check that $\mathfrak{x}Z = \mathfrak{x} \cdot 1Z$ holds true for any sufficiently small $\mathfrak{x} \in L = \operatorname{Lie}(\Gamma_L)$ (where $\cdot$ on the right hand side denotes the scalar multiplication). We compute
\begin{align*}
\mathfrak{x} Z & = \frac{d}{dt} [\exp_{\Gamma_L} (t \mathfrak{x})] (Z)_{\big| t=0} = \frac{d}{dt} [\exp (t \mathfrak{x})](Z)_{\big| t=0} \\
& = \frac{d}{dt} \exp_{LT}(\log_{LT}([\exp (t \mathfrak{x})](Z)))_{\big| t=0}
= \frac{d}{dt} \exp_{LT}(\exp (t \mathfrak{x}) \cdot \log_{LT}(Z))_{\big| t=0} \\
& = \mathfrak{x} \cdot \frac{d}{dt} \exp_{LT}(\exp (t) \cdot \log_{LT}(Z))_{\big| t=0} = \mathfrak{x} \cdot 1Z \ .
\end{align*}
The fourth identity uses the fact that the logarithm $\log_{LT}$ of $LT$ is ``$o_L$-linear'' (\cite{Lan} 8.6 Lemma 2).
Finally, by looking at the Taylor expansion
\begin{equation*}
\exp(\mathfrak{x})b = \sum_{n=0}^\infty \frac{1}{n!} \mathfrak{x}^n b = \sum_{n=0}^\infty \frac{1}{n!} \mathfrak{x}^n \cdot 1^n b
\end{equation*}
we see that the orbit maps $\rho_b$ are locally $L$-analytic.
\end{proof}
\begin{lemma}\label{smalldisk-locan}
For any $r \in (0,p^{-\frac{1}{p-1}}) \cap p^\mathbf{Q}$ the $\Gamma_L$-action on $\mathcal{O}_K (\mathfrak{X}(r))$ verifies the condition \eqref{f:condition-locan} and is locally $L$-analytic.
\end{lemma}
\begin{proof}
Lemma \ref{small-disk} reduces us to proving the assertion for the $\Gamma_L$-action on $\mathcal{O}_K (\mathfrak{B}(r))$ which is induced by the multiplication action of $o_L^\times$ on the disk $\mathfrak{B}(r)$. But this is seen by an almost trivial version of the reasoning in the proof of Prop.\ \ref{annuli-locan}.
\end{proof}
According to Prop.\ \ref{annuli-locan} we have derived $\operatorname{Lie}(\Gamma_L)$-actions on $\mathcal{O}_K (\mathfrak{B}(r))$ and $\mathcal{O}_K (\mathfrak{B}(r_0,r))$ which are $L$-bilinear. For $r_0' \leq r_0 \leq r \leq r'$ the inclusions $\mathcal{O}_K (\mathfrak{B}(r')) \subseteq \mathcal{O}_K (\mathfrak{B}(r))$ and $\mathcal{O}_K (\mathfrak{B}(r_0',r')) \subseteq \mathcal{O}_K (\mathfrak{B}(r_0,r))$ respect these actions. By first a projective limit and then a direct limit argument we therefore obtain compatible $L$-bilinear $\operatorname{Lie}(\Gamma_L)$-actions on
\begin{equation*}
\mathcal{O}_K(\mathfrak{B}) \subseteq \mathcal{O}_K(\mathfrak{B} \setminus \mathfrak{B}(r)) \subseteq \mathscr{R}_K(\mathfrak{B}) \ .
\end{equation*}
Next we use the LT-isomorphism in section \ref{sec:LT} to obtain $L$-bilinear $\operatorname{Lie}(\Gamma_L)$-actions on
\begin{equation*}
\mathcal{O}_K(\mathfrak{X}_n) ,\; \mathcal{O}_K(\mathfrak{X}(s,s'))\ \text{(with $s,s'$ as in Prop.\ \ref{quasi-Stein}), and} \quad \mathcal{O}_K(\mathfrak{X}) \subseteq \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n) \subseteq \mathscr{R}_K(\mathfrak{X}) \ .
\end{equation*}
Recall that $\mathscr{R}_K(\mathfrak{B})$ and $\mathscr{R}_K(\mathfrak{X})$ are the locally convex inductive limits of the Fr\'echet spaces $\mathcal{O}_K(\mathfrak{B} \setminus \mathfrak{B}(r))$ and $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$, respectively. Hence all the above locally convex $K$-vector spaces are barrelled (\cite{NFA} Examples at the end of \S6). The orbit maps $\rho_b$ for the $\Gamma_L$-action on these spaces (with the exception of $\mathcal{O}_K(\mathfrak{X}_n)$ and $\mathcal{O}_K(\mathfrak{X}(s,s'))$) are no longer locally $L$-analytic but they still are differentiable (\cite{Fea} 3.1.2). Hence these actions still are derived in the sense that they are given by the usual formula $(\mathfrak{x},b) \longmapsto \mathfrak{x} b = \frac{d}{dt} \exp_{\Gamma_L} (t \mathfrak{x}) b_{\big| t=0}$. The $\Gamma_L$-action on each $\mathcal{O}_K(\mathfrak{X}_n)$ and each $\mathcal{O}_K(\mathfrak{X}(s,s'))$ satisfies the condition \eqref{f:condition-locan} and is locally $L$-analytic.
For convenience we introduce the following notion.
\begin{definition}\label{def:L-analytic}
A differentiable (continuous) $\Gamma_L$-action on a barrelled locally convex $K$-vector space $V$ is called $L$-analytic if the derived action $\operatorname{Lie}(\Gamma_L) \times V \longrightarrow V$ is $L$-bilinear.
\end{definition}
We observe that if the $\Gamma_L$-action on $V$ is $L$-analytic, then the induced $\Gamma_L$-action on any $\Gamma_L$-invariant closed barrelled subspace of $V$ is $L$-analytic as well.
If $\mathscr{F}(X,Y)$ denotes the formal group law of $LT$ then $\frac{\partial \mathscr{F}}{\partial Y}(0,Z)$ is a unit in $o_L[[Z]]$ and we put $g_{LT}(Z) := \big( \frac{\partial \mathscr{F}}{\partial Y}(0,Z) \big)^{-1}$. Then $g_{LT}(Z) dZ$ is, up to scalars, the unique invariant differential form on $LT$ (\cite{Haz} \S5.8). As before we let $\log_{LT}(Z) = Z + \ldots$ denote the unique formal power series in $L[[Z]]$ whose formal derivative is $g_{LT}$. This $\log_{LT}$ is the logarithm of $LT$ and lies in $\mathcal{O}_L(\mathfrak{B})$ (\cite{Lan} 8.6). In particular, $g_{LT}dZ = d\log_{LT}$ and $\mathcal{O}_L(\mathfrak{B}) dZ = \mathcal{O}_L(\mathfrak{B}) d\log_{LT}$. The invariant derivation $\partial_\mathrm{inv}$ on $\mathcal{O}_K(\mathfrak{B})$ corresponding to the form $d\log_{LT}$ is determined by
\begin{equation*}
dF = \partial_\mathrm{inv} (F) d\log_{LT} = \partial_\mathrm{inv} (F) g_{LT} dZ = \frac{\partial F}{\partial Z} dZ
\end{equation*}
and hence is given by
\begin{equation*}
\partial_\mathrm{inv}(F) = g_{LT}^{-1} \frac{\partial F}{\partial Z} \ .
\end{equation*}
The identity
\begin{equation*}
\nabla_{\mathcal{O}_K(\mathfrak{B})} = \log_{LT} \cdot \partial_\mathrm{inv}
\end{equation*}
is shown in \cite{KR} Lemma 2.1.4.
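As an aside, and only for orientation, we spell out these formulas in the classical cyclotomic situation $L = \mathbf{Q}_p$, $\pi_L = p$, where $LT$ may be taken to be the multiplicative formal group with $\mathscr{F}(X,Y) = X + Y + XY$ and $[a](Z) = (1+Z)^a - 1$. Then $\log_{LT}(Z) = \log(1+Z)$, hence $g_{LT}(Z) = \frac{1}{1+Z}$ because of $g_{LT} dZ = d\log_{LT}$, and the above identities become
\begin{equation*}
\partial_\mathrm{inv} = (1+Z) \frac{\partial}{\partial Z} \qquad\text{and}\qquad \nabla_{\mathcal{O}_K(\mathfrak{B})} = \log(1+Z) \cdot (1+Z) \frac{\partial}{\partial Z} \ ,
\end{equation*}
which are the familiar formulas from the cyclotomic theory.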
Since the rigid variety $\mathfrak{X}$ is smooth of dimension one its sheaf of holomorphic differential forms is locally free of rank one. The group structure of $\mathfrak{X}$ forces this sheaf to even be free (cf.\ \cite{DG} II \S4.3.4). Hence the $\mathcal{O}_L(\mathfrak{X})$-module $\Omega_L(\mathfrak{X})$ of global holomorphic differential forms on $\mathfrak{X}$ is free of rank one. We claim that the differential form $d\log_{\mathfrak{X}}$ is invariant (for the group structure on $\mathfrak{X}$) and is a basis of $\Omega_L(\mathfrak{X})$. By the commutative diagram after Lemma 3.4 in \cite{ST} the function $\log_{\mathfrak{X}}$ corresponds, under the LT-isomorphism $\kappa$, to a nonzero scalar multiple of $\log_{LT}$. This implies our claim over $\mathbf{C}_p$ and then, by a simple descent argument with respect to the twisted Galois action, also over $L$. The invariant derivation $\partial_\mathrm{inv}$ on $\mathcal{O}_K(\mathfrak{X})$ corresponding to the form $d\log_{\mathfrak{X}}$ is defined by
\begin{equation*}
df = \partial_\mathrm{inv} (f) d\log_{\mathfrak{X}} \ .
\end{equation*}
Using the LT-isomorphism it follows that
\begin{equation*}
\nabla_{\mathcal{O}_K(\mathfrak{X})} = \log_{\mathfrak{X}} \cdot \partial_\mathrm{inv} \ .
\end{equation*}
\subsection{$(\varphi_L,\Gamma_L)$-modules}
Let $M$ be any finitely generated module over some topological ring $R$. The canonical topology of $M$ is defined to be the quotient topology with respect to a surjective $R$-module homomorphism $\alpha : R^m \longrightarrow M$. It makes $M$ into a topological $R$-module. If the multiplication in $R$ is only separately continuous then the module multiplication $R \times M \longrightarrow M$ is only separately continuous as well. Any $R$-module homomorphism between two finitely generated $R$-modules is continuous for the canonical topologies. We also need a semilinear version of this latter fact.
\begin{remark}\label{semilinear-cont}
Let $\psi : R \longrightarrow S$ be a continuous homomorphism of topological rings, let $M$ and $N$ be finitely generated $R$- and $S$-modules, respectively, and let $\alpha : M \longrightarrow N$ be any $\psi$-linear map (i.e., $\alpha(rm) = \psi(r)\alpha(m)$ for any $r \in R$ and $m \in M$); then $\alpha$ is continuous for the canonical topologies on $M$ and $N$.
\end{remark}
\begin{proof}
The map
\begin{align*}
\alpha^{lin} : S \otimes_{R,\psi} M & \longrightarrow N \\
s \otimes m & \longmapsto s \alpha(m)
\end{align*}
is $S$-linear. We pick free presentations $\lambda : R^\ell \twoheadrightarrow M$ and $\mu : S^m \twoheadrightarrow N$. Then we find an $S$-linear map $\beta$ such that the diagram
\begin{equation*}
\xymatrix{
R^\ell \ar@{>>}[d]_{\lambda} \ar[r]^-{\psi^\ell} & S^\ell = S \otimes_{R,\psi} R^\ell \ar@{>>}[d]_{\operatorname{id} \otimes \lambda} \ar[r]^-{\beta} & S^m \ar@{>>}[d]^{\mu} \\
M \ar@/_2pc/[rr]^-{\alpha} \ar[r]^-{m \mapsto 1 \otimes m} & S \otimes_{R,\psi} M \ar[r]^-{\alpha^{lin}} & N }
\end{equation*}
is commutative. All maps except possibly $\alpha$ and the lower left horizontal arrow are continuous. The universal property of the quotient topology then implies that $\alpha$ must be continuous as well.
\end{proof}
Suppose now that $M$ is finitely generated projective over $R$. Then the homomorphism $\alpha$ has a continuous section $\sigma : M \longrightarrow R^m$. Hence $M$ is topologically isomorphic to the submodule $\sigma(M)$ of $R^m$ (equipped with the subspace topology). Suppose further that $R$ is Hausdorff, resp.\ complete. Then $R^m$ is Hausdorff, resp.\ complete. We see that $\sigma(M)$ and $M$ are Hausdorff. Furthermore it follows that $\sigma(M) = \ker(\operatorname{id}_M - \sigma\circ\alpha)$ is closed in $R^m$. Hence, if $R$ is complete, then also $M$ is complete.
In our applications $R$ usually is a locally convex $K$-algebra. If such an $R$ is barrelled then the canonical topology on any $M$ is barrelled as well (cf.\ \cite{NFA} Ex.\ 4 after Cor.\ 6.16).
\begin{definition}\label{def:modR}
A $(\varphi_L,\Gamma_L)$-module $M$ over $\mathscr{R}_K(\mathfrak{X})$ is a finitely generated projective $\mathscr{R}_K(\mathfrak{X})$-module $M$ which carries a semilinear continuous (for the canonical topology) $o_L \setminus \{0\}$-action such that the $\mathscr{R}_K(\mathfrak{X})$-linear map
\begin{align*}
\varphi_M^{lin} : \mathscr{R}_K(\mathfrak{X}) \otimes_{\mathscr{R}_K(\mathfrak{X}),\varphi_L} M & \xrightarrow{\; \cong \;} M \\
f \otimes m & \longmapsto f \varphi_M (m)
\end{align*}
is bijective (writing the action of $\pi_L$ on $M$ as $\varphi_M$). Let $\operatorname{Mod}_L(\mathscr{R}_K(\mathfrak{X}))$ denote the category of all $(\varphi_L,\Gamma_L)$-modules over $\mathscr{R}_K(\mathfrak{X})$.
\end{definition}
The $(\varphi_L,\Gamma_L)$-modules over $\mathscr{R}_K(\mathfrak{X})$ are Hausdorff and complete. They also are barrelled since $\mathscr{R}_K(\mathfrak{X})$ as a locally convex inductive limit (Prop.\ \ref{regular}.i) of Fr\'echet spaces is barrelled (cf.\ \cite{NFA} Ex.\ 2 and 3 after Cor.\ 6.16).
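A first, essentially trivial, example (added here only as an illustration) is $M = \mathscr{R}_K(\mathfrak{X})$ itself with its given $o_L \setminus \{0\}$-action, so that $\varphi_M = \varphi_L$. In this case $\varphi_M^{lin}$ is indeed bijective, since the two maps
\begin{equation*}
\varphi_M^{lin}(f \otimes g) = f \varphi_L(g) \qquad\text{and}\qquad f \longmapsto f \otimes 1
\end{equation*}
are mutually inverse: on the one hand $\varphi_M^{lin}(f \otimes 1) = f$, and on the other hand $f \varphi_L(g) \otimes 1 = f \otimes g$ holds in $\mathscr{R}_K(\mathfrak{X}) \otimes_{\mathscr{R}_K(\mathfrak{X}),\varphi_L} \mathscr{R}_K(\mathfrak{X})$.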
We briefly discuss scalar extension for $(\varphi_L,\Gamma_L)$-modules. Let $K \subseteq E \subseteq \mathbf{C}_p$ be another complete intermediate field. First we make the following simple observation.
\begin{remark}\label{tensor-barrelled}
Let $V_1$ and $V_2$ be two barrelled locally convex $K$-vector spaces; then $V_1 \otimes_{K,\iota} V_2$ is barrelled as well.
\end{remark}
\begin{proof}
The inductive tensor product topology on $V_1 \otimes_K V_2$ is the finest locally convex topology such that all linear maps
\begin{align*}
V_1 & \longrightarrow V_1 \otimes_K V_2 & \text{and}\qquad\qquad V_2 & \longrightarrow V_1 \otimes_K V_2 & \text{for any $v_i \in V_i$} \\
v & \longmapsto v \otimes v_2 & v & \longmapsto v_1 \otimes v &
\end{align*}
are continuous. It is basically by definition that any locally convex final topology with respect to maps which originate from barrelled spaces, like the above one, is barrelled.
\end{proof}
\begin{lemma}\label{scalar-ext}
For any $(\varphi_L,\Gamma_L)$-module $M$ over $\mathscr{R}_K(\mathfrak{X})$ we have:
\begin{itemize}
\item[i.] $\mathscr{R}_E(\mathfrak{X}) \otimes_{\mathscr{R}_K(\mathfrak{X})} M = E\, \widehat{\otimes}_{K,\iota}\, M$;
\item[ii.] $\mathscr{R}_E(\mathfrak{X}) \otimes_{\mathscr{R}_K(\mathfrak{X})} M$ is a $(\varphi_L,\Gamma_L)$-module over $\mathscr{R}_E(\mathfrak{X})$.
\end{itemize}
\end{lemma}
\begin{proof}
i. Obviously we have the algebraic identity $(E \otimes_K \mathscr{R}_K(\mathfrak{X})) \otimes_{\mathscr{R}_K(\mathfrak{X})} M = E \otimes_K M$. We claim that this is a topological identity $(E \otimes_{K,\iota} \mathscr{R}_K(\mathfrak{X})) \otimes_{\mathscr{R}_K(\mathfrak{X})} M = E \otimes_{K,\iota} M$, which extends to a topological isomorphism $(E\, \widehat{\otimes}_{K,\iota}\, \mathscr{R}_K(\mathfrak{X})) \otimes_{\mathscr{R}_K(\mathfrak{X})} M = E\, \widehat{\otimes}_{K,\iota}\, M$ . Note that on the left hand sides the topology is given as follows: By viewing $M$ as a topological direct summand of some $\mathscr{R}_K(\mathfrak{X})^m$ we realize the left hand sides as topological direct summands of $(E \otimes_{K,\iota} \mathscr{R}_K(\mathfrak{X}))^m$ and $(E\, \widehat{\otimes}_{K,\iota}\, \mathscr{R}_K(\mathfrak{X}))^m$, respectively. Hence our claim reduces to the case $M = \mathscr{R}_K(\mathfrak{X})$ where it is obvious. It remains to recall from Cor.\ \ref{scalar-ext-R} that $\mathscr{R}_E(\mathfrak{X}) = E\, \widehat{\otimes}_{K,\iota}\, \mathscr{R}_K(\mathfrak{X})$.
ii. Clearly $\mathscr{R}_E(\mathfrak{X}) \otimes_{\mathscr{R}_K(\mathfrak{X})} M$ is finitely generated projective over $\mathscr{R}_E(\mathfrak{X})$. The $o_L \setminus \{0\}$-action extends by semilinearity to $\mathscr{R}_E(\mathfrak{X}) \otimes_{\mathscr{R}_K(\mathfrak{X})} M$ and satisfies
\begin{align*}
\mathscr{R}_E(\mathfrak{X}) \otimes_{\mathscr{R}_E(\mathfrak{X}),\varphi_L} (\mathscr{R}_E(\mathfrak{X}) \otimes_{\mathscr{R}_K(\mathfrak{X})} M) & = \mathscr{R}_E(\mathfrak{X}) \otimes_{\mathscr{R}_K(\mathfrak{X})} (\mathscr{R}_K(\mathfrak{X}) \otimes_{\mathscr{R}_K(\mathfrak{X}),\varphi_L} M) \\
& \cong \mathscr{R}_E(\mathfrak{X}) \otimes_{\mathscr{R}_K(\mathfrak{X})} M \ .
\end{align*}
It remains to establish the continuity of the $o_L \setminus \{0\}$-action. Because of i. we may view it as the completed $E$-linear extension of the $o_L \setminus \{0\}$-action on $M$. Hence each individual element of $o_L \setminus \{0\}$ certainly acts by a continuous linear endomorphism on $V := E \otimes_{K,\iota} M$ and then also on $\widehat{V} = E\, \widehat{\otimes}_{K,\iota}\, M$. We still have to check that the resulting group actions $o_L^\times \times V \longrightarrow V$ and $o_L^\times \times \widehat{V} \longrightarrow \widehat{V}$ are continuous. Since $M$ is barrelled it follows from Remark \ref{tensor-barrelled} that $V$ is barrelled. For the continuity of the action on $V$ it therefore suffices, by the usual Banach-Steinhaus argument, to check the continuity of the orbit maps
\begin{align*}
\rho_v : o_L^\times & \longrightarrow E \otimes_{K,\iota} M \qquad\text{for any $v \in E \otimes_{K,\iota} M$} \\
a & \longmapsto a(v) \ .
\end{align*}
But this follows easily from the continuity of the $o_L^\times$-action on $M$. Let $\mathcal{C}_c(o_L^\times,V)$ denote the locally convex $K$-vector space of all continuous $V$-valued maps on $o_L^\times$ equipped with the compact-open topology. By \cite{B-TG} X.3.4 Thm.\ 3 the continuity of the action $o_L^\times \times V \longrightarrow V$ is equivalent to the continuity of the linear map
\begin{align*}
V & \longrightarrow \mathcal{C}_c(o_L^\times,V) \\
v & \longmapsto \rho_v \ .
\end{align*}
Hence all solid arrows in the diagram
\begin{equation*}
\xymatrix{
V \ar[d]_{\subseteq} \ar[r] & \mathcal{C}_c(o_L^\times,V) \ar[d]^{\subseteq} \\
\widehat{V} \ar@{-->}[r] & \mathcal{C}_c(o_L^\times,\widehat{V}) }
\end{equation*}
are continuous linear maps. Since $\mathcal{C}_c(o_L^\times,\widehat{V})$ is Hausdorff and complete (cf.\ \cite{NFA} Example in \S17) there is a unique continuous linear map $\widehat{V} \longrightarrow \mathcal{C}_c(o_L^\times,\widehat{V})$ which makes the diagram commutative. It corresponds to a continuous map $o_L^\times \times \widehat{V} \longrightarrow \widehat{V}$ which is easily seen to be the original $o_L^\times$-action on $\widehat{V}$. The latter therefore is continuous.
\end{proof}
At various points we will need the following technical descent result.
\begin{proposition}\label{descent}
Let $M$ be a $(\varphi_L,\Gamma_L)$-module over $\mathscr{R}_K(\mathfrak{X})$. Then there exist a $p^{-p/(p-1)} \leq r_0 < 1$, a finitely generated projective $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0))$-module $M_0$ with a semilinear continuous $o_L^\times$-action, and a semilinear continuous homomorphism
\begin{equation*}
\varphi_{M_0} : M_0 \longrightarrow \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0^{1/p})) \otimes_{\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0))} M_0
\end{equation*}
such that the induced $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0^{1/p}))$-linear map
\begin{equation*}
\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0^{1/p})) \otimes_{\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0)), \varphi_L} M_0 \xrightarrow{\; \cong \;} \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0^{1/p})) \otimes_{\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0))} M_0
\end{equation*}
is an isomorphism and such that
\begin{equation*}
\mathscr{R}_K(\mathfrak{X}) \otimes_{\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0))} M_0 = M
\end{equation*}
with the $o_L^\times$ actions on both sides as well as $\varphi_L \otimes \varphi_{M_0}$ and $\varphi_M$ corresponding to each other.
\end{proposition}
\begin{proof}
For some appropriate integer $m \geq 1$ we can view $M \subseteq \mathscr{R}_K(\mathfrak{X})^m$ as the image of some projector $\Pi \in M_{m\times m}(\mathscr{R}_K(\mathfrak{X}))$. The matrix $\Pi$ lies in $M_{m\times m}(\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r_0)))$ for some $r_0 \geq p^{-p/(p-1)}$, and we may define finitely generated projective $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r))$-modules $M(r) := \Pi(\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r))^m)$ for any $r_0 \leq r < 1$. We have $M = \mathscr{R}_K(\mathfrak{X}) \otimes_{\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r))} M(r)$ and $M = \bigcup_{r_0 \leq r < 1} M(r)$.
Since $M(r)$ is finitely generated we further have $\varphi_M (M(r)) \subseteq M(r')$ for some $r' \geq r$. But any set of generators for $M(r)$ also is a set of generators for $M(r')$. It follows (cf.\ Lemma \ref{pi}) that $\varphi_M (M(r')) \subseteq M(r'^{1/p})$. The associated linear map \begin{align*}
\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r'^{1/p})) \otimes_{\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r')), \varphi_L} M(r') & \longrightarrow M(r'^{1/p}) \\
f \otimes m & \longmapsto f \varphi_M(m)
\end{align*}
has the property that its base change to $\mathscr{R}_K(\mathfrak{X})$ is an isomorphism. The cokernel being finitely generated must already vanish after enlarging $r'$ sufficiently. Then the map is surjective and, by the projectivity of the modules, splits. Hence the kernel is finitely generated as well and vanishes after further enlarging $r'$. By enlarging the initial $r_0$ we therefore may assume that
\begin{equation*}
\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r^{1/p})) \otimes_{\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r)), \varphi_L} M(r) \cong M(r^{1/p}) \qquad\text{for any $r_0 \leq r < 1$}.
\end{equation*}
By Remark \ref{semilinear-cont} the map $\varphi_M : M(r) \longrightarrow M(r^{1/p})$ is continuous.
By assumption the orbit map $\rho_m : o_L^\times \longrightarrow M$, for any $m \in M$, which sends $a$ to $a(m)$, is continuous. Hence its image is compact and, in particular, bounded. But $M$ is, as a consequence of Prop.\ \ref{regular}.i, the regular inductive limit of the $M(r)$. It follows that the image of $\rho_m$ already is contained in some $M(r)$ and is bounded in the canonical topology of $M(r)$ as an $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}(r))$-module. Using Prop.\ \ref{compactoid}.i we obtain that the image of $\rho_m$ is compactoid in $M(r)$. If we apply this to finitely many generators of $M(r_0)$ then we see that, by further enlarging $r_0$, we also may assume that the $o_L^\times$-action on $M$ preserves $M(r)$ for any $r \geq r_0$. By \cite{PGS} Cor.\ 3.8.39 the continuous inclusion $M(r) \subseteq M$ restricts to a homeomorphism between the image of $\rho_m$ in $M(r)$ and its image in $M$. It follows that $\rho_m : o_L^\times \longrightarrow M(r)$ is continuous. On the other hand, for each individual $a \in o_L^\times$, the map $a : M(r) \longrightarrow M(r)$ is $a_*$-linear and hence continuous by Remark \ref{semilinear-cont}. Together we have shown that the $o_L^\times$-action on $M(r)$ is separately continuous. Since $o_L^\times$ is compact and the Fr\'echet space $M(r)$ is barrelled it is, in fact, jointly continuous by the nonarchimedean Banach-Steinhaus theorem.
\end{proof}
\begin{proposition}\label{differentiable}
Any continuous (for the canonical topology) semilinear $\Gamma_L$-action on a finitely generated projective module $M$ over any of the rings $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$, for $n \geq 1$, or $\mathscr{R}_K(\mathfrak{X})$ is differentiable.
\end{proposition}
\begin{proof}
First we consider $M$ over $\mathscr{R}_K(\mathfrak{X})$. As seen from its proof the descent result Prop.\ \ref{descent} holds equally true without a $\varphi_M$. Hence, for some sufficiently big $n_0$, we find a finitely generated projective $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_{n_0})$-module $M_{n_0}$ with a continuous semilinear $\Gamma_L$-action such that $M = \mathscr{R}_K(\mathfrak{X}) \otimes_{\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_{n_0})} M_{n_0}$ as $\Gamma_L$-modules. For any $n \geq n_0$, the finitely generated projective module $M_n := \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n) \otimes_{\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_{n_0})} M_{n_0}$ over $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$ carries the continuous semilinear (diagonal) $\Gamma_L$-action. The $\Gamma_L$-equivariant maps $M_n \longrightarrow M_{n+1} \longrightarrow M$ are continuous (by Remark \ref{semilinear-cont}), and $M = \bigcup_{n \geq n_0} M_n$. This reduces the differentiability of $M$ to the differentiability of each $M_n$.
So, in the following we fix an $n \geq 1$ and consider $M$ over $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$. We abbreviate $\mathfrak{Y} := \mathfrak{X} \setminus \mathfrak{X}_n$. According to Prop.\ \ref{quasi-Stein} the quasi-Stein space $\mathfrak{Y}$ has an admissible covering by an increasing sequence of $\Gamma_L$-invariant affinoid subdomains $\mathfrak{X}(s_1,s_1') \subseteq \ldots \subseteq \mathfrak{X}(s_i,s_i') \subseteq \ldots$ such that the restriction maps $\mathcal{O}_K(\mathfrak{X}(s_{i+1},s_{i+1}')) \longrightarrow \mathcal{O}_K(\mathfrak{X}(s_i,s_i'))$ have dense image. We then have the finitely generated projective modules $M_i := \mathcal{O}_K(\mathfrak{X}(s_i,s_i')) \otimes_{\mathcal{O}_K(\mathfrak{Y})} M$ over $\mathcal{O}_K(\mathfrak{X}(s_i,s_i'))$ with a continuous semilinear (diagonal) $\Gamma_L$-action. The covering property implies that $M = \varprojlim_i M_i$ holds true topologically for the canonical topologies (as well as $\Gamma_L$-equivariantly). This reduces the differentiability of $M$ to the differentiability of the $\Gamma_L$-action on each $M_i$. But, as we have discussed before Def.\ \ref{def:L-analytic}, the $\Gamma_L$-action on $\mathcal{O}_K(\mathfrak{X}(s_i,s_i'))$ satisfies the condition \eqref{f:condition-locan}. Hence, by Prop.\ \ref{locan3}, the $\Gamma_L$-action on $M_i$ even is locally $\mathbf{Q}_p$-analytic.
\textit{Addendum:} First suppose that $M$ over $\mathscr{R}_K(\mathfrak{X})$ is $L$-analytic. Then each $M_n$ is $L$-analytic since $M_n$ is a $\operatorname{Lie}(\Gamma_L)$-invariant $K$-vector subspace of $M$. Next suppose that $M$ over $\mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)$ is $L$-analytic. Then each $M_i$ is $L$-analytic since the natural map $M \longrightarrow M_i$ has dense image (cf.\ \cite{ST0} Thm.\ in \S3).
\end{proof}
This result allows us to introduce the full subcategory $\operatorname{Mod}_{L,an}(\mathscr{R}_K(\mathfrak{X}))$ of all $L$-analytic (Def.\ \ref{def:L-analytic}) $(\varphi_L,\Gamma_L)$-modules in $\operatorname{Mod}_L(\mathscr{R}_K(\mathfrak{X}))$.
There is a useful duality functor on the category $\operatorname{Mod}_L(\mathscr{R}_K(\mathfrak{X}))$. Let $M$ be a $(\varphi_L,\Gamma_L)$-module over $\mathscr{R}_K(\mathfrak{X})$. The dual module $M^* := \operatorname{Hom}_{\mathscr{R}_K(\mathfrak{X})}(M,\mathscr{R}_K(\mathfrak{X}))$ again is finitely generated projective over $\mathscr{R}_K(\mathfrak{X})$.
\begin{remark}\label{simple-conv}
For any finitely generated projective $\mathscr{R}_K(\mathfrak{X})$-module $N$ the canonical topology on $\operatorname{Hom}_{\mathscr{R}_K(\mathfrak{X})}(N,\mathscr{R}_K(\mathfrak{X}))$ coincides with the topology of pointwise convergence.
\end{remark}
\begin{proof}
Since the formation of both topologies commutes with direct sums it suffices to consider $N = \mathscr{R}_K(\mathfrak{X})$, in which case the assertion is straightforward.
\end{proof}
The dual module $M^*$ is a $(\varphi_L,\Gamma_L)$-module with respect to
\begin{equation*}
\gamma(\alpha) := \gamma \circ \alpha \circ \gamma^{-1} \qquad\text{and}\qquad \varphi_{M^*}(\alpha) := \varphi_L^{lin} \circ (\operatorname{id}_{\mathscr{R}_K(\mathfrak{X})} \otimes\, \alpha) \circ (\varphi_M^{lin})^{-1}
\end{equation*}
for any $\gamma \in \Gamma_L$ and any $\alpha \in \operatorname{Hom}_{\mathscr{R}_K(\mathfrak{X})}(M,\mathscr{R}_K(\mathfrak{X}))$. Each individual such operator on $M^*$ is continuous by Remark \ref{semilinear-cont}. If we use $\varphi_M^{lin}$ and $\varphi_L^{lin}$ to identify $\operatorname{Hom}_{\mathscr{R}_K(\mathfrak{X})}(M,\mathscr{R}_K(\mathfrak{X}))$ and
\begin{multline*}
\operatorname{Hom}_{\mathscr{R}_K(\mathfrak{X})}(\mathscr{R}_K(\mathfrak{X}) \otimes_{\mathscr{R}_K(\mathfrak{X}),\varphi_L} M, \mathscr{R}_K(\mathfrak{X}) \otimes_{\mathscr{R}_K(\mathfrak{X}),\varphi_L} \mathscr{R}_K(\mathfrak{X})) \\ = \operatorname{Hom}_{\mathscr{R}_K(\mathfrak{X})}(M, \mathscr{R}_K(\mathfrak{X}) \otimes_{\mathscr{R}_K(\mathfrak{X}),\varphi_L} \mathscr{R}_K(\mathfrak{X}))
\end{multline*}
then $\varphi_{M^*}^{lin}$ becomes the map
\begin{align*}
\mathscr{R}_K(\mathfrak{X}) \otimes_{\mathscr{R}_K(\mathfrak{X}),\varphi_L} \operatorname{Hom}_{\mathscr{R}_K(\mathfrak{X})}(M,\mathscr{R}_K(\mathfrak{X})) & \longrightarrow \operatorname{Hom}_{\mathscr{R}_K(\mathfrak{X})}(M, \mathscr{R}_K(\mathfrak{X}) \otimes_{\mathscr{R}_K(\mathfrak{X}),\varphi_L} \mathscr{R}_K(\mathfrak{X})) \\
f \otimes \alpha & \longmapsto [m \mapsto f \otimes \alpha(m)] \
\end{align*}
which does not involve the $(\varphi_L,\Gamma_L)$-structure any longer. To see that the latter map is bijective we may first reduce, since $M$ is projective, to the case of a finitely generated free module $M$ and then to the case $M = \mathscr{R}_K(\mathfrak{X})$, in which the bijectivity is obvious. Since $\Gamma_L$ is compact and $M^*$ is barrelled it remains, by the nonarchimedean Banach-Steinhaus theorem, to show, for the joint continuity of the $\Gamma_L$-action on $M^*$, that for any $\alpha \in M^*$ the map $\Gamma_L \longrightarrow M^*$ sending $\gamma$ to $\gamma(\alpha)$ is continuous. Because of Remark \ref{simple-conv} it suffices to check that, for any $m \in M$, the map $\Gamma_L \longrightarrow \mathscr{R}_K(\mathfrak{X})$ sending $\gamma$ to $\gamma(\alpha)(m) = \gamma(\alpha(\gamma^{-1}(m)))$ is continuous. This is a straightforward consequence of the continuity of the $\Gamma_L$-action on $M$ and on $\mathscr{R}_K(\mathfrak{X})$.
As an application we make the following technically helpful observation.
\begin{remark}\label{summand-of-free}
For any $M$ in $\operatorname{Mod}_L(\mathscr{R}_K(\mathfrak{X}))$ the $(\varphi_L,\Gamma_L)$-module $M \oplus M^*$ is free over $\mathscr{R}_K(\mathfrak{X})$.
\end{remark}
\begin{proof}
As a module $M$ is isomorphic to a direct sum of invertible ideals. Hence it suffices to consider an invertible ideal $I$ in $\mathscr{R}_K(\mathfrak{X})$. Then $I \oplus I^* \cong I \oplus I^{-1} \cong \mathscr{R}_K(\mathfrak{X}) \oplus I I^{-1} = \mathscr{R}_K(\mathfrak{X}) \oplus \mathscr{R}_K(\mathfrak{X})$ by Cor.\ \ref{pruefer2}.ii.
\end{proof}
Of course, everything above makes sense and is valid for $\mathfrak{B}$ replacing $\mathfrak{X}$. In particular, we have the categories $\operatorname{Mod}_{L,an}(\mathscr{R}_K(\mathfrak{B})) \subseteq \operatorname{Mod}_L(\mathscr{R}_K(\mathfrak{B}))$ of all $L$-analytic, resp.\ of all, $(\varphi_L,\Gamma_L)$-modules over $\mathscr{R}_K(\mathfrak{B})$.
We also add the following fact which will be crucial for the definition of etale $L$-analytic $(\varphi_L,\Gamma_L)$-modules in section \ref{sec:etalepgm}.
\begin{proposition}\label{unitsR}
$\mathscr{R}_K(\mathfrak{X})^\times = \mathscr{E}_K^\dagger(\mathfrak{X})^\times$.
\end{proposition}
\begin{proof}
Let $f \in \mathscr{R}_K(\mathfrak{X})^\times$. We have $f \in \mathcal{O}_K(\mathfrak{X} \setminus \mathfrak{X}_n)^\times$ for some sufficiently large $n$. It suffices to show that $f$ is bounded. By symmetry then $f^{-1}$ is bounded as well so that $f \in \mathcal{O}_K^b(\mathfrak{X} \setminus \mathfrak{X}_n)^\times \subseteq \mathscr{E}_K^\dagger(\mathfrak{X})^\times$. Since boundedness can be checked after scalar extension to $\mathbf{C}_p$ we may assume that $K = \mathbf{C}_p$. Using the LT-isomorphism this reduces us to showing that any unit in $\mathcal{O}_K(\mathfrak{B} \setminus \mathfrak{B}_n)$ is bounded. But this is well known to follow from \cite{Laz} Prop.\ 4.1.
\end{proof}
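For orientation on the disk side (a trivial illustration only): the coordinate $Z$ is a unit in every $\mathcal{O}_K(\mathfrak{B} \setminus \mathfrak{B}(r))$, having no zeros on the annulus, and it is indeed bounded together with its inverse, namely
\begin{equation*}
|Z| < 1 \qquad\text{and}\qquad |Z^{-1}| \leq r^{-1} \qquad\text{on } \mathfrak{B} \setminus \mathfrak{B}(r) \ ,
\end{equation*}
in accordance with the boundedness of units used in the above proof.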
\section{Construction of $(\varphi_L,\Gamma_L)$-modules}
\subsection{An application of the Colmez-Sen-Tate method}
\label{sec:key}
In the following the Colmez-Sen-Tate formalism as formulated in \cite{BC} will be of crucial technical importance. In this section we therefore recall it, complement it somewhat, and prove an essential additional result.
Let $A$ be an $L$-Banach algebra with a submultiplicative norm $\|\ \|_A$. For any complete intermediate field $\mathbf{Q}_p \subseteq K \subseteq \mathbf{C}_p$ we denote by $A_K := K\, \widehat{\otimes}_L\, A$ the tensor product completed with respect to the tensor product norm $\|\ \|_{A_K} := |\ | \otimes \|\ \|_A$.
\footnote{There is a subtle point here which one has to keep in mind. Let $A$ be a reduced affinoid $L$-algebra with supremum norm $\|\ \|_{\mathrm{sup},A}$. The affinoid $K$-algebra $A_K$ again is reduced (\cite{Co1} Lemma 3.3.1(1)). But, in general, the supremum norm $\|\ \|_{\mathrm{sup},A_K}$ is NOT equal to the tensor product norm $\|\ \|_{A_K} := |\ | \otimes \|\ \|_{\mathrm{sup},A}$; the two are only equivalent (\cite{BGR} Thm.\ 6.2.4/1).
Suppose that $K/L$ is unramified. Then $gr_{|\ |}(K) = k_K \otimes_{k_L} gr_{|\ |}(L)$, where $k_L$ denotes the residue class field of $L$. We therefore have
\begin{equation*}
gr_{\|\ \|_{A_K}}(A_K) = gr_{|\ |}(K) \otimes_{gr_{|\ |}(L)} gr_{\|\ \|_{\mathrm{sup},A}}(A) = k_K \otimes_{k_L} gr_{\|\ \|_{\mathrm{sup},A}}(A) \ .
\end{equation*}
The supremum norm being power-multiplicative the algebra $gr_{\|\ \|_{\mathrm{sup},A}}(A)$ is reduced. Since $k_K /k_L$ is unramified the above right hand algebra is reduced as well. Hence $gr_{\|\ \|_{A_K}}(A_K)$ is reduced, which in turn implies that $\|\ \|_{A_K}$ is power-multiplicative and therefore must be equal to $\|\ \|_{\mathrm{sup},A_K}$.}
If $\mathbf{Q}_p \subseteq F \subseteq \mathbf{C}_p$ is an arbitrary intermediate field then $\widehat{F}$ denotes its completion.
The Galois group $G_L$ acts continuously and isometrically on $\mathbf{C}_p$ and hence continuously, semilinearly, and isometrically on $A_{\mathbf{C}_p}$ (through the first factor). Of course, it then also acts continuously on $\operatorname{GL}_m(A_{\mathbf{C}_p})$ for any $m \geq 1$.
\begin{remark}\label{AST}
$(A_{\mathbf{C}_p})^{G_K} = A_K$.
\end{remark}
\begin{proof}
It suffices to show that $(\mathbf{C}_p\, \widehat{\otimes}_K\, A)^{G_K} \subseteq A$. It is easy to see that any element of $\mathbf{C}_p\, \widehat{\otimes}_K\, A$ is contained in $\mathbf{C}_p\, \widehat{\otimes}_K\, A_0$ for some Banach subspace of countable type $A_0 \subseteq A$. Hence we may assume that $A$ is of countable type. The case of a finite dimensional $A$ being trivial we then further may assume, by \cite{PGS} Cor.\ 2.3.9, that $A = c_0(K)$ (notation as in the proof of Lemma \ref{descent-compactoid}). In this case we have $(\mathbf{C}_p\, \widehat{\otimes}_K\, c_0(K))^{G_K} = c_0(\mathbf{C}_p)^{G_K} = c_0(K)$.
\end{proof}
We put $L_n := L(\mu_{p^n})$ and $L^{cyc} := \bigcup_n L_n$ and we let $H_L^{cyc} := \operatorname{Gal}(\overline{L}/L^{cyc})$ and $\Gamma_L^{cyc} := \operatorname{Gal}(L^{cyc}/L)$. According to \cite{BC} Prop.s 3.1.4 and 4.1.1 the Banach algebra $A_{\mathbf{C}_p}$ verifies the Colmez-Sen-Tate conditions for any constants $c_1, c_2 > 0$ and $c_3 > \frac{1}{p-1}$. It is not necessary to recall here the content of the Colmez-Sen-Tate conditions; for the sake of clarity we only point out that in the notations of loc.\ cit.\ we have $\widetilde{\Lambda} = A_{\mathbf{C}_p}$ and $\Lambda_{H_L^{cyc},n} = L_n \otimes_L A = A_{L_n}$. What is important is that this has the following consequences.
\begin{proposition}\phantomsection\label{TS}
\begin{itemize}
\item[i.] For any sufficiently large $n$ there is a $G_L$-invariant decomposition into $A_{L_n}$-submodules $A_{\widehat{L^{cyc}}} = A_{L_n} \oplus X_{L,n}$; in particular, $X_{L,n}^{\Gamma_{L_n}^{cyc}} = 0$.
\item[ii.] Given any continuous $1$-cocycle $c : G_L \longrightarrow \operatorname{GL}_m(A_{\mathbf{C}_p})$ there exists a finite Galois extension $L'/L$ and an integer $n \geq 0$ such that there is a continuous $1$-cocycle on $G_L$ which is cohomologous to $c$, has values in $\operatorname{GL}_m(A_{L_n'})$, and is trivial on $H_{L'}^{cyc}$.
\end{itemize}
\end{proposition}
\begin{proof}
i. This is immediate from the condition (TS2) in \cite{BC} \S3.1. ii. This is a somewhat less precise form of \cite{BC} Prop.\ 3.2.6 (the $k$ there is irrelevant since $p$ is invertible in $\widetilde{\Lambda}$).
\end{proof}
Let $P$ be a finitely generated free $A_{\mathbf{C}_p}$-module of rank $r$ and consider any continuous semilinear action of $G_L$ on $P$. Note that $P$ as well as $\operatorname{End}_{A_{\mathbf{C}_p}}(P)$ are naturally topological $A_{\mathbf{C}_p}$-modules.
\begin{corollary}\label{TS2}
We have:
\begin{itemize}
\item[i.] $P^{H_L^{cyc}}$ is a projective $A_{\widehat{L^{cyc}}}$-module of rank $r$, and $A_{\mathbf{C}_p} \otimes_{A_{\widehat{L^{cyc}}}} P^{H_L^{cyc}} = P$.
\item[ii.] $P^{H_L^{cyc}}$ contains, for any sufficiently large $n$, a $\Gamma_L^{cyc}$-invariant $A_{L_n}$-submodule $Q_n$ which satisfies:
\begin{itemize}
\item[a)] $Q_n$ is projective of rank $r$,
\item[b)] $A_{\widehat{L^{cyc}}} \otimes_{A_{L_n}} Q_n = P^{H_L^{cyc}}$, and
\item[c)] there is a finite extension $L' / L_n$ such that $A_{L'} \otimes_{A_{L_n}} Q_n$ is a free $A_{L'}$-module.
\end{itemize}
\item[iii.] for any $Q_n$ as in ii. there is an $m \geq n$ such that $P^{G_L} \subseteq A_{L_m} \otimes_{A_{L_n}} Q_n$; in particular, $P^{G_L}$ is a submodule of a finitely generated free $A$-module.
\end{itemize}
\end{corollary}
\begin{proof}
i. and ii. We fix a free $A$-module $P_0$ of rank $r$ such that $A_{\mathbf{C}_p} \otimes_A P_0 = P$, and we let $G_L$ act continuously and semilinearly on $\operatorname{End}_{A_{\mathbf{C}_p}}(P)$ by
\begin{align*}
G_L \times \operatorname{End}_{A_{\mathbf{C}_p}}(P) & \longrightarrow \operatorname{End}_{A_{\mathbf{C}_p}}(P) \\
(\sigma, \alpha) & \longmapsto (\sigma \otimes \operatorname{id}_{P_0}) \circ \alpha \circ (\sigma^{-1} \otimes \operatorname{id}_{P_0}) \ .
\end{align*}
Then
\begin{align*}
c : G_L & \longrightarrow \operatorname{Aut}_{A_{\mathbf{C}_p}}(P) \\
\sigma & \longmapsto \sigma \circ (\sigma^{-1} \otimes \operatorname{id}_{P_0})
\end{align*}
is a continuous 1-cocycle. By Prop.\ \ref{TS}.ii there are a finite Galois extension $L'/L$, a natural number $n \geq 0$, and an element $\beta \in \operatorname{Aut}_{A_{\mathbf{C}_p}}(P)$ such that the cohomologous cocycle
\begin{equation*}
c'(\sigma) = \beta \circ c(\sigma) \circ (\sigma \otimes \operatorname{id}_{P_0}) \circ \beta^{-1} \circ (\sigma^{-1} \otimes \operatorname{id}_{P_0}) = \beta \circ \sigma \circ \beta^{-1} \circ (\sigma^{-1} \otimes \operatorname{id}_{P_0})
\end{equation*}
on $G_L$ has values in $\operatorname{Aut}_{A_{L_n'}}(A_{L_n'} \otimes_A P_0)$ and is trivial on $H_{L'}^{cyc}$. It follows that $\sigma = \beta^{-1} \circ (\sigma \otimes \operatorname{id}_{P_0}) \circ \beta$ for any $\sigma \in H_{L'}^{cyc}$ and hence that
\begin{equation*}
P^{H_{L'}^{cyc}} = \beta^{-1}(P^{{H_{L'}^{cyc}} \otimes \operatorname{id}_{P_0} = 1}) = \beta^{-1}(A_{\mathbf{C}_p}^{H_{L'}^{cyc}} \otimes_A P_0) = \beta^{-1}(A_{\widehat{L'^{cyc}}} \otimes_A P_0) \ .
\end{equation*}
In particular, $P^{H_{L'}^{cyc}}$ is a free $A_{\widehat{L'^{cyc}}}$-module of rank $r$ and $A_{\mathbf{C}_p} \otimes_{A_{\widehat{L'^{cyc}}}} P^{H_{L'}^{cyc}} = P$. Moreover, the residual action of $G_L/H_{L'}^{cyc}$ on $P^{H_{L'}^{cyc}}$ leaves the free $A_{L_n'}$-submodule $Q' := \beta^{-1}(A_{L_n'} \otimes_A P_0)$ of rank $r$ invariant.
By the usual finite Galois descent formalism (cf.\ \cite{BC} Prop.\ 2.2.1 and \cite{B-AC} II.5.3 Prop.\ 4) we conclude that $P^{H_L^{cyc}}$ is a projective $A_{\widehat{L^{cyc}}}$-module of rank $r$ with $A_{\mathbf{C}_p} \otimes_{A_{\widehat{L^{cyc}}}} P^{H_L^{cyc}} = P$ and that the $\Gamma_L^{cyc}$-action on $P^{H_L^{cyc}}$ leaves invariant the projective $A_{L_n'}^{H_L^{cyc}}$-submodule $Q := (Q')^{H_L^{cyc}}$ of rank $r$. By construction we have $A_{\widehat{L^{cyc}}} \otimes_{A_{L_n'}^{H_L^{cyc}}} Q = P^{H_L^{cyc}}$. By enlarging $n$ we achieve that $A_{L_n'}^{H_L^{cyc}} = A_{L_n}$.
iii. We fix a $Q_n$ as in ii. We may assume that $L'/L_n$ is Galois and that $L^{cyc} \cap L' = L_n$. By assumption the $A_{L'}$-module $Q' := A_{L'} \otimes_{A_{L_n}} Q_n$ is free. For any sufficiently big $m \geq n$ we have
\begin{align*}
P^{H_{L'}^{cyc}} & = A_{\widehat{L'^{cyc}}} \otimes_{A_{\widehat{L^{cyc}}}} P^{H_L^{cyc}} = A_{\widehat{L'^{cyc}}} \otimes_{A_{L_n}} Q_n = A_{\widehat{L'^{cyc}}} \otimes_{A_{L'}} Q' \\
& = (A_{L'_m} \otimes_{A_{L'}} Q') \oplus (X_{L',m} \otimes_{A_{L'}} Q') \ ,
\end{align*}
where the first and the last identity use finite Galois descent and Prop.\ \ref{TS}.i, respectively. Suppose that we find sufficiently big integers $m \geq \ell \geq n$ such that
\begin{equation}\label{f:X-inv}
(X_{L',m} \otimes_{A_{L'}} Q')^{\Gamma_{L'_\ell}^{cyc}} = 0 \ .
\end{equation}
We then obtain
\begin{equation*}
P^{G_L} \subseteq (A_{L'_m} \otimes_{A_{L'}} Q')^{H_L^{cyc}} = (A_{L'_m} \otimes_{A_{L_n}} Q_n)^{H_L^{cyc}} = A_{L_m} \otimes_{A_{L_n}} Q_n
\end{equation*}
with the last identity again using finite Galois descent.
In order to establish \eqref{f:X-inv} we note that the group $\Gamma_{L'}^{cyc}$ acts $A_{L'}$-linearly on $Q'$. The same computation as in the proof of Prop.\ \ref{locan3} shows that for a sufficiently big $\ell \geq n$ we have
\begin{equation*}
\|(\gamma - 1)(x)\|_{Q'} < p^{-c_3} \|x\|_{Q'} \qquad\text{for any $\gamma \in \Gamma_{L'_\ell}^{cyc}$ and $x \in Q'$}.
\end{equation*}
We fix an element $\gamma_0 \in \Gamma_{L'_\ell}^{cyc}$ of infinite order. The Colmez-Sen-Tate condition (TS3) in \cite{BC} \S3.1 says that, for $m \geq \ell$ sufficiently big, we have
\begin{equation*}
\|(\gamma_0 - 1)(a)\|_{A_{\mathbf{C}_p}} \geq p^{-c_3} \|a\|_{A_{\mathbf{C}_p}} \qquad\text{for any $a \in X_{L',m}$}.
\end{equation*}
We claim that $\gamma_0$ has no nonzero fixed vector in $X_{L',m} \otimes_{A_{L'}} Q'$.
Let $e_1, \ldots, e_r$ be a basis of $Q'$ over $A_{L'}$. We equip $P = A_{\mathbf{C}_p} e_1 \oplus \ldots \oplus A_{\mathbf{C}_p} e_r$ with the norm $\|\sum_{i=1}^r a_i e_i\| := \max_i \|a_i\|_{A_{\mathbf{C}_p}}$. We may assume that $\|\ \|_{Q'} = \|\ \|_{|Q'}$. For any $0 \neq y = \sum_i a_i e_i \in X_{L',m} \otimes_{A_{L'}} Q'$ (with $a_i \in X_{L',m}$) we have
\begin{equation*}
\|\sum_i (\gamma_0 - 1)(a_i) \cdot e_i \| = \max_i \|(\gamma_0 - 1)(a_i)\|_{A_{\mathbf{C}_p}} \geq p^{-c_3} \max_i \|a_i\|_{A_{\mathbf{C}_p}}
\end{equation*}
and
\begin{equation*}
\| \sum_i \gamma_0(a_i)(\gamma_0 - 1)(e_i) \| \leq \max_i \|a_i\|_{A_{\mathbf{C}_p}} \cdot \|(\gamma_0 - 1)(e_i)\|_{Q'} < p^{-c_3} \max_i \|a_i\|_{A_{\mathbf{C}_p}} \ .
\end{equation*}
Since the first of these two sums has strictly bigger norm than the second it follows that
\begin{equation*}
(\gamma_0 - 1)(y) = (\gamma_0 - 1)(\sum_i a_i e_i ) = \sum_i (\gamma_0 - 1)(a_i) \cdot e_i + \sum_i \gamma_0(a_i)(\gamma_0 - 1)(e_i) \neq 0 \ .
\end{equation*}
\end{proof}
We fix an $n$ together with $Q_n$ as in Cor.\ \ref{TS2}.ii and such that, for simplicity, $\Gamma_{L_n}^{cyc}$ is topologically cyclic. The group $\Gamma_{L_n}^{cyc}$ is locally $\mathbf{Q}_p$-analytic since it is, via the cyclotomic character $\chi_{cyc}$, an open subgroup of $\mathbf{Z}_p^\times$. It acts $A_{L_n}$-linearly on $Q_n$. The condition \eqref{f:condition-locan} trivially holds for the trivial $\Gamma_{L_n}^{cyc}$-action on $A_{L_n}$. Therefore Prop.\ \ref{locan3} applies so that:
\begin{corollary}\label{qnlocan}
The $\Gamma_{L_n}^{cyc}$-action on $Q_n$ is locally $\mathbf{Q}_p$-analytic.
\end{corollary}
In particular, we have the linear operator $\nabla_{Sen}$ on $Q_n$ which is given by the derived action of $\nabla_{cyc} := \operatorname{Lie}(\chi_{cyc})^{-1}(1) \in \operatorname{Lie}(\Gamma_{L_n}^{cyc})$. If $\gamma$ is a topological generator of $\Gamma_{L_n}^{cyc}$ then $\nabla_{Sen}$ is explicitly given by
\begin{equation*}
\nabla_{Sen} = \frac{\log (\gamma^{p^j})}{\log \chi_{cyc}(\gamma^{p^j})} \quad\text{for any sufficiently big $j \in \mathbf{Z}_{\geq 0}$}.
\end{equation*}
The notation $\nabla_{Sen}$ is justified by the fact that, being defined by deriving the action of $\Gamma_L^{cyc}$, these operators for various $n$ and $Q_n$ all are compatible in an obvious sense.
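As a sanity check, added only for orientation: suppose that some $v \in Q_n$ satisfies $\gamma^{p^j}(v) = \chi_{cyc}(\gamma^{p^j})^k v$ for a fixed integer $k$ and all sufficiently big $j$. Then
\begin{equation*}
\nabla_{Sen}(v) = \frac{\log (\gamma^{p^j})}{\log \chi_{cyc}(\gamma^{p^j})}(v) = \frac{k \log \chi_{cyc}(\gamma^{p^j})}{\log \chi_{cyc}(\gamma^{p^j})}\, v = k v \ ,
\end{equation*}
so that $\nabla_{Sen}$ acts on such an eigenvector through the exponent $k$, as one expects from the classical theory of the Sen operator.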
\begin{lemma}\label{Sen-zero}
Suppose that $\nabla_{Sen} = 0$. Then $P^{G_L}$ is a projective $A$-module of rank $r$ and $A_{\mathbf{C}_p} \otimes_A P^{G_L} = P$.
\end{lemma}
\begin{proof}
The assumption implies that $\gamma^{p^j}$ for any sufficiently big $j$ fixes $Q_n$. Replacing $n$ by $n+j$ and $Q_n$ by $A_{L_{n+j}} \otimes_{A_{L_n}} Q_n$ we therefore may assume that $Q_n \subseteq P^{G_{L_n}}$ and that $n$ satisfies Prop.\ \ref{TS}.i. We then compute
\begin{align*}
P^{G_{L_n}} & = (P^{H_L^{cyc}})^{\Gamma_{L_n}^{cyc}} = (A_{\widehat{L^{cyc}}} \otimes_{A_{L_n}} Q_n)^{\Gamma_{L_n}^{cyc}} \\
& = ((A_{L_n} \oplus X_{L,n}) \otimes_{A_{L_n}} Q_n)^{\Gamma_{L_n}^{cyc}} \\
& = Q_n \oplus (X_{L,n} \otimes_{A_{L_n}} Q_n)^{\Gamma_{L_n}^{cyc}} \\
& = Q_n \oplus (X_{L,n}^{\Gamma_{L_n}^{cyc}} \otimes_{A_{L_n}} Q_n) \\
& = Q_n
\end{align*}
where the fourth identity uses the projectivity of $Q_n$. The assertion follows from this by Galois descent.
\end{proof}
Now we suppose given an $L$-Banach algebra $B$ (with submultiplicative norm) on which the group $o_L^\times$ acts continuously by ring automorphisms. By linearity and continuity this action extends to a continuous $o_L^\times$-action on $B_{\mathbf{C}_p}$ (compare the proof of Lemma \ref{scalar-ext}.ii). It allows us to introduce the twisted $G_L$-action on $B_{\mathbf{C}_p} = \mathbf{C}_p \widehat{\otimes}_L B$ by
\begin{equation*}
^{\sigma*}(c \otimes b) := \sigma(c) \otimes \tau(\sigma^{-1})(b) \qquad\text{for $c \in \mathbf{C}_p$, $b \in B$, and $\sigma \in G_L$}.
\end{equation*}
The fixed elements $A := B_{\mathbf{C}_p}^{G_L,*}$ in $B_{\mathbf{C}_p}$ with respect to this twisted action form an $L$-Banach algebra. We assume that
\begin{equation}\label{f:form}
A_{\mathbf{C}_p} = B_{\mathbf{C}_p} \ .
\end{equation}
We also suppose that our free $A_{\mathbf{C}_p}$-module $P$ is of the form
\begin{equation*}
P = B_{\mathbf{C}_p} \otimes_B P_0
\end{equation*}
for some finitely generated projective $B$-module $P_0$ which carries a continuous semilinear action of $o_L^\times$. By the arguments in the proof of Lemma \ref{scalar-ext} this action extends to a continuous semilinear action on $P$ (and we have $P = \mathbf{C}_p\, \widehat{\otimes}_L\, P_0$). We then have a twisted $G_L$-action on $P$ as well by
\begin{equation*}
^{\sigma*}(b \otimes x) := {^{\sigma*}b} \otimes \tau(\sigma^{-1})(x) \qquad\text{for $b \in B_{\mathbf{C}_p}$, $x \in P_0$, and $\sigma \in G_L$}.
\end{equation*}
The corresponding fixed elements $P^{G_L,*}$ form an $A$-module. For an element in $P$ of the form $c \otimes x$ with $c \in \mathbf{C}_p$ and $x \in P_0$ we, of course, simply have $^{\sigma*}(c \otimes x) = \sigma(c) \otimes \tau(\sigma^{-1})(x)$. Using this observation (and again the reasoning in the proof of Lemma \ref{scalar-ext}.ii) one checks that this twisted $G_L$-action on $P$ is continuous. It follows that $P^{G_L,*}$ is closed in $P$. Obviously the $\Gamma_L$-action on $P^{G_L,*}$ then is continuous for the topology induced by the (canonical) topology of $P$. In our later applications we will show that the $A$-module $P^{G_L,*}$ is finitely generated projective. As a consequence of the open mapping theorem there is only one complete and Hausdorff module topology on a finitely generated module over a Banach algebra (cf.\ \cite{BGR} 3.7.3 Prop.\ 3). Hence in those applications we will have that the $\Gamma_L$-action on $P^{G_L,*}$ is continuous for the canonical topology as an $A$-module.
Our goal is the following sufficient condition for the vanishing of $\nabla_{Sen}$.
\begin{proposition}\label{nabsenul}
If the actions of $o_L^\times$ on $B$ and $P_0$ are locally $L$-analytic then $\nabla_{Sen}=0$ on $Q_n$.
\end{proposition}
We start with some preliminary results. First we consider any $p$-adic Lie group $H$, any $L$-Banach space representation $X$ of $H$, and any $L$-Banach space $Y$. Then $X \widehat{\otimes}_L Y$ becomes an $L$-Banach space representation of $H$ through the $H$-action on the first factor. We denote by $X^H$ the Banach subspace of $H$-fixed vectors in $X$ and by $X_{la}$ the $H$-invariant subspace of locally analytic vectors with respect to $H$. As in \cite{Eme} Def.\ 3.5.3 and Thm.\ 3.5.7 we view $X_{la}$ as the locally convex inductive limit
\begin{equation*}
X_{la} = \varinjlim_U X_{\mathbb{U}-an} \ ,
\end{equation*}
where $U$ runs over all analytic open subgroups of $H$, $\mathbb{U}$ denotes the rigid analytic group corresponding to $U$, and $X_{\mathbb{U}-an}$ is the $L$-Banach space of $\mathbb{U}$-analytic vectors in $X$ (loc.\ cit.\ Def.\ 3.3.1 and Prop.\ 3.3.3). If $C^{an}(\mathbb{U},X) = \mathcal{O}(\mathbb{U}) \widehat{\otimes}_{\mathbf{Q}_p} X$ denotes the $L$-Banach space of $X$-valued rigid analytic functions on $\mathbb{U}$ then the orbit maps $\rho_x$, defined by $\rho_x(h) := hx$, induce an isomorphism of Banach spaces
\begin{equation*}
X_{\mathbb{U}-an} \xrightarrow{\;\cong\;} C^{an}(\mathbb{U},X)^U \ ,
\end{equation*}
where the right hand side is the subspace of $U$-fixed vectors for the continuous action $(h,f) \longmapsto (hf)(h') := h(f(h^{-1}h'))$. The subspace $X_{\mathbb{U}-an}$ of $X$ is $U$-invariant. But the inclusion map $X_{\mathbb{U}-an} \xrightarrow{\subseteq} X$ only is continuous in general (and not a topological inclusion). Furthermore, $X_{\mathbb{U}-an}$ is a locally $\mathbf{Q}_p$-analytic representation of $U$ (loc.\ cit.\ the paragraph after Def.\ 3.3.1, Cor.\ 3.3.6, and Cor.\ 3.6.13). In particular, we have the derived action of the Lie algebra $\operatorname{Lie}(U) = \operatorname{Lie}(H)$ on $X_{\mathbb{U}-an}$. We also remark that, if $X_{\mathbb{U}-an} = X$ as vector spaces, then this is, in fact, an identity of Banach spaces (loc.\ cit.\ Thm.\ 3.6.3).
\begin{lemma}\label{ant}
For any analytic open subgroup $U$ of $H$ we have:
\begin{itemize}
\item[i.] $(X \widehat{\otimes}_L Y)^U = X^U \widehat{\otimes}_L Y$;
\item[ii.] $(X \widehat{\otimes}_L Y)_{\mathbb{U}-an} = X_{\mathbb{U}-an} \, \widehat{\otimes}_{L}\, Y$;
\item[iii.] if $X_{\mathbb{U}-an} = X$ then, with respect to the $U$-action
\begin{align*}
U \times C^{an}(\mathbb{U},X) & \longrightarrow C^{an}(\mathbb{U},X) \\
(h,f) & \longmapsto h(f(h^{-1}.)) \ ,
\end{align*}
we have $C^{an}(\mathbb{U},X)_{\mathbb{U}-an} = C^{an}(\mathbb{U},X)$.
\end{itemize}
\end{lemma}
\begin{proof}
i. This follows by considering a Banach base of $Y$ (cf.\ \cite{NFA} Prop.\ 10.1).
ii. Using i. we compute
\begin{align*}
(X \widehat{\otimes}_L Y)_{\mathbb{U}-an} & = C^{an}(\mathbb{U},X \widehat{\otimes}_L Y)^U = (\mathcal{O}(\mathbb{U}) \widehat{\otimes}_{\mathbf{Q}_p} X \widehat{\otimes}_L Y)^U \\
& = (\mathcal{O}(\mathbb{U}) \widehat{\otimes}_{\mathbf{Q}_p} X)^U \widehat{\otimes}_L Y = C^{an}(\mathbb{U},X)^U \widehat{\otimes}_L Y \\
& = X_{\mathbb{U}-an} \widehat{\otimes}_L Y \ .
\end{align*}
iii. This is a consequence of \cite{Eme} Prop.\ 3.3.4 and Lemma 3.6.4.
\end{proof}
Next we suppose given an $L$-Banach algebra $D$ with a continuous action of $H$ together with an $H$-invariant Banach subalgebra $C$. Further, we let $Z$ be a finitely generated projective $C$-module (with its canonical Banach module structure) which carries a continuous semilinear $H$-action. Then $D \otimes_C Z$ is a Banach module over $D$ with a semilinear continuous diagonal $H$-action. The dual module $Z^* := \operatorname{Hom}_C(Z,C)$ is finitely generated projective as well. The natural semilinear $H$-action on $Z^*$ is continuous; this follows from Remark \ref{semilinear-cont} and an analog of Remark \ref{simple-conv} by the same arguments as those before Remark \ref{summand-of-free}. Corresponding properties hold for $Z_D^* := \operatorname{Hom}_D(D \otimes_C Z,D)$.
\begin{lemma}\label{Uan-dual}
If $C_{\mathbb{U}-an} = C$ and $Z_{\mathbb{U}-an} = Z$ then also $(Z^*)_{\mathbb{U}-an} = Z^*$.
\end{lemma}
\begin{proof}
The assumption $C_{\mathbb{U}-an} = C$ implies, by Lemma \ref{ant}.iii, that, with respect to the $U$-action
\begin{align*}
U \times C^{an}(\mathbb{U},C) & \longrightarrow C^{an}(\mathbb{U},C) \\
(h,f) & \longmapsto h(f(h^{-1}.)) \ ,
\end{align*}
we have $C^{an}(\mathbb{U},C)_{\mathbb{U}-an} = C^{an}(\mathbb{U},C)$. Let $z \in Z$ and $\alpha \in Z^*$. Applying this to the map $f(h) := \alpha(hz)$, which by assumption lies in $C^{an}(\mathbb{U},C)$, we obtain that $[h \longmapsto (h\alpha)(z)] \in C^{an}(\mathbb{U},C)$ for any $z \in Z$. Let now $z_1, \ldots, z_r$ be $C$-module generators of $Z$. Evaluation at the $z_i$ gives a topological inclusion of Banach modules $Z^* \hookrightarrow C^r$. What we have shown is that the continuous orbit map $\rho_\alpha$ composed with this inclusion lies in $C^{an}(\mathbb{U},C^r)$. It then follows from \cite{Eme} Prop.\ 2.1.23 that $\rho_\alpha$ must lie in $C^{an}(\mathbb{U},Z^*)$. Hence $\alpha \in (Z^*)_{\mathbb{U}-an}$.
\end{proof}
\begin{lemma}\label{Uan-projective}
Suppose that there is an analytic open subgroup $U \subseteq H$ such that $C_{\mathbb{U}-an} = C$ and $Z_{\mathbb{U}-an} = Z$; then the natural map $D_{\mathbb{U}-an} \otimes_C Z \xrightarrow{\cong} (D \otimes_C Z)_{\mathbb{U}-an}$ is an isomorphism of Banach modules over $C$.
\end{lemma}
\begin{proof}
It follows from \cite{Fea} 3.3.14 or \cite{Eme} Prop.\ 3.3.12 that $D_{\mathbb{U}-an}$ is a Banach module over $C_{\mathbb{U}-an} = C$ and that the obvious map $D_{\mathbb{U}-an} \otimes_C Z \longrightarrow D \otimes_C Z$ factorizes continuously through $(D \otimes_C Z)_{\mathbb{U}-an}$. Since $Z$ is projective the latter map is injective.
\textit{Step 1:} We show that $(\operatorname{id} \otimes \alpha)((D \otimes_C Z)_{\mathbb{U}-an}) \subseteq D_{\mathbb{U}-an}$ for any $\alpha \in Z^*$. The map
\begin{align*}
Z^* & \longrightarrow \operatorname{Hom}_D(D \otimes_C Z,D) = Z_D^* \\
\alpha & \longmapsto \operatorname{id} \otimes \alpha
\end{align*}
is a continuous map of $H$-representations on Banach spaces. Since $(Z^*)_{\mathbb{U}-an} = Z^*$ we obtain that $\operatorname{id} \otimes \alpha \in \operatorname{Hom}_D(D \otimes_C Z,D)_{\mathbb{U}-an}$. By the same references as above the obvious $H$-equivariant pairing
\begin{equation*}
\operatorname{Hom}_D(D \otimes_C Z,D) \otimes_D (D \otimes_C Z) \longrightarrow D
\end{equation*}
induces a corresponding pairing between the spaces $(.)_{\mathbb{U}-an}$.
\textit{Step 2:} We have that, for any (abstract) $C$-module $S$ and any nonzero element $u \in S \otimes_C Z$, there exists an $\alpha \in Z^*$ such that $(\operatorname{id} \otimes \alpha)(u) \neq 0$ in $S$. This immediately reduces to the case of a finitely generated free module $Z$, in which it is obvious.
We now apply Step 2 with $S := D/D_{\mathbb{U}-an}$. Because of Step 1 we must have $(D \otimes_C Z)_{\mathbb{U}-an} \subseteq D_{\mathbb{U}-an} \otimes_C Z$. Hence the asserted map is a continuous bijection. The open mapping theorem then implies that it even is a topological isomorphism.
\end{proof}
Consider any infinitely ramified $p$-adic Lie extension $L_\infty$ of $L$, and let $\Gamma := \operatorname{Gal}(L_\infty/L)$. The completion $\hat{L}_\infty$ of $L_\infty$ is an $L$-Banach representation of $\Gamma$. Following \cite{STLAV}, we denote by $\hat{L}_\infty^{\mathrm{la}}$ the set of elements of $\hat{L}_\infty$ that are locally analytic for the action of $\Gamma$. The structure of $\hat{L}_\infty^{\mathrm{la}}$ is studied in \cite{STLAV}, where it is proved that, informally: $\hat{L}_\infty^{\mathrm{la}}$ looks like a space of power series in $\dim(\Gamma) - 1$ variables. In particular, there is (\cite{STLAV} Prop.\ 6.3) a nonzero element $D \in \mathbf{C}_p \otimes_{\mathbf{Q}_p} \operatorname{Lie}(\Gamma)$ such that $D=0$ on $\hat{L}_\infty^{\mathrm{la}}$. This element is the pullback of the Sen operator attached to a certain representation of $\Gamma$.
Now let $L_\infty$ be the Lubin-Tate extension of $L$ such that $\operatorname{Gal}(\overline{L}/L_\infty) = \ker(\chi_{LT})$. Then $\chi_{LT} : \Gamma_L^{LT} := \operatorname{Gal}(L_\infty/L) \xrightarrow{\;\cong\;} \Gamma_L = o_L^\times$, and $d\chi_{LT} : \operatorname{Lie}(\Gamma_L^{LT}) \xrightarrow{\;\cong\;} \operatorname{Lie}(\Gamma_L) = L$. With $\Sigma := \operatorname{Gal}(L/\mathbf{Q}_p)$ we have the $L$-linear isomorphism
\begin{align}\label{diag:splitting}
L \otimes_{\mathbf{Q}_p} \operatorname{Lie}(\Gamma_L^{LT}) & \xrightarrow{\;\cong\;} \bigoplus_{\sigma \in \Sigma} L \\
a \otimes \mathfrak{x} & \longmapsto (a\sigma(d\chi_{LT}(\mathfrak{x})))_\sigma \ . \nonumber
\end{align}
For any $\sigma \in \Sigma$ we let $\nabla_\sigma$ be the element of the left hand side which maps to the tuple with entry $1$ at $\sigma$ and entry $0$ at all $\sigma' \neq \sigma$ (it should be considered as the ``derivative in the direction of $\sigma$''). Then
\begin{equation*}
1 \otimes \mathfrak{x} = \sum_{\sigma \in \Sigma} \sigma(d\chi_{LT}(\mathfrak{x})) \cdot \nabla_\sigma \qquad\text{for any $\mathfrak{x} \in \operatorname{Lie}(\Gamma_L^{LT})$} \ .
\end{equation*}
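As a quick check of this identity, using nothing beyond the definition of the $\nabla_\sigma$, apply the isomorphism \eqref{diag:splitting} to both sides: writing $e_\sigma$ for the tuple with entry $1$ at $\sigma$ and $0$ elsewhere, the left hand side maps to $(\sigma(d\chi_{LT}(\mathfrak{x})))_\sigma$, and by $L$-linearity the right hand side maps to
\begin{equation*}
\sum_{\sigma \in \Sigma} \sigma(d\chi_{LT}(\mathfrak{x})) \cdot e_\sigma = (\sigma(d\chi_{LT}(\mathfrak{x})))_\sigma \ ,
\end{equation*}
so both sides agree.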
Therefore, for any locally $\mathbf{Q}_p$-analytic representation $V$ of $\Gamma_L^{LT}$, the Taylor expansion of its orbit maps (\cite{STla} p.\ 452) has the form
\begin{align*}
\gamma v & = \sum_{n=0}^\infty \frac{1}{n!} \log_{\Gamma_L^{LT}}(\gamma)^n v = \sum_{n=0}^\infty \frac{1}{n!} (\sum_{\sigma \in \Sigma} \sigma(d\chi_{LT}(\log_{\Gamma_L^{LT}}(\gamma))) \cdot \nabla_\sigma)^n v \\
& = \sum_{n=0}^\infty \frac{1}{n!} (\sum_{\sigma \in \Sigma} \sigma(\log(\chi_{LT}(\gamma))) \cdot \nabla_\sigma)^n v \\
& = v + \sum_{\sigma \in \Sigma} \log (\sigma(\chi_{LT}(\gamma))) \cdot \nabla_\sigma(v) + \ \text{higher terms}
\end{align*}
for any $v \in V$, and any small enough $\gamma \in \Gamma_L^{LT}$.
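For orientation, in the special case $L = \mathbf{Q}_p$ the set $\Sigma$ consists of $\operatorname{id}$ alone, and the above expansion collapses to the familiar one-variable form
\begin{equation*}
\gamma v = \sum_{n=0}^\infty \frac{\log(\chi_{LT}(\gamma))^n}{n!} \, \nabla_{\operatorname{id}}^n(v) \ .
\end{equation*}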
Let $L^{max}$ be the compositum of $L_\infty$ and $L^{cyc}$.
\begin{lemma}\label{essdispro}
If $L_\infty \cap L^{cyc}$ is a finite extension of $L$, then $\nabla_{\operatorname{id}} + \nabla_{cyc} = 0$ on $(\widehat{L^{max}})^{\mathrm{la}}$.
\end{lemma}
\begin{proof}
As a consequence of our assumption we have
\begin{equation*}
\operatorname{Lie}(\operatorname{Gal}(L^{max}/L)) = \operatorname{Lie}(\Gamma_L^{LT}) \oplus \operatorname{Lie}(\Gamma^{cyc}_L) \ .
\end{equation*}
By \cite{STLAV} Prop.\ 6.3, there exists a nonzero element $D \in \mathbf{C}_p \otimes_{\mathbf{Q}_p} \operatorname{Lie}(\operatorname{Gal}(L^{max}/L))$ such that $D=0$ on $(\widehat{L^{max}})^{\mathrm{la}}$. We claim that $D$ is a scalar multiple of $\nabla_{\operatorname{id}} + \nabla_{cyc}$. For each element $\sigma \in \Sigma \setminus \{\operatorname{id}\}$, the field $(\hat{L}_\infty)^{\mathrm{la}}$ contains the variable $x_\sigma$ such that $\gamma x_\sigma = x_\sigma + \log(\sigma(\chi_{LT}(\gamma)))$ for any $\gamma \in \Gamma_L^{LT}$ (\cite{STLAV} \S4.2). It follows that $\nabla_\sigma (x_\sigma) = 1$ and $\nabla_{\sigma'} (x_\sigma) = 0$ if $\sigma' \neq \sigma$. By our assumption we also have $\nabla_{cyc}(x_\sigma) = 0$.
The character $\tau = \chi_{cyc} \chi_{LT}^{-1}$ has a Hodge-Tate weight equal to zero, so that there exists $z \in \mathbf{C}_p^\times$ such that $g(z) = z \cdot \tau(g)$ for any $g \in G_L$. It is then obvious that $z \in (\widehat{L^{max}})^{\mathrm{la}}$. Secondly this implies that $\mathfrak{x}z = d\tau(\mathfrak{x}) \cdot z = d\chi_{cyc}(\mathfrak{x}) \cdot z - d\chi_{LT}(\mathfrak{x}) \cdot z$, for any $\mathfrak{x} \in \operatorname{Lie}(\operatorname{Gal}(L^{max}/L))$, and hence $\nabla_{\operatorname{id}} (z) = -z = - \nabla_{cyc} (z)$.
Write $D = \sum_{\sigma \in \Sigma} \delta_\sigma \cdot \nabla_\sigma + \delta_{cyc} \cdot\nabla_{cyc}$. Applying $D$ to $x_\sigma$ with $\sigma \in \Sigma \setminus \{\operatorname{id}\}$, we find that $\delta_\sigma = 0$. Applying $D$ to $z$, we find that $\delta_{\operatorname{id}} = \delta_{cyc}$. This implies the claim and hence our assertion.
\end{proof}
\begin{proof}[Proof of Proposition \ref{nabsenul}]
Let $H^{max} := \operatorname{Gal}(\overline{\mathbf{Q}}_p/L^{max})$. Using Remark \ref{AST} we have
\begin{equation*}
P^{H^{max}} = (A_{\mathbf{C}_p} \otimes_B P_0)^{H^{max}} = (A_{\mathbf{C}_p})^{H^{max}} \otimes_B P_0 = (A \widehat{\otimes}_L \widehat{L^{max}}) \otimes_B P_0 \ .
\end{equation*}
We have to consider the twisted action of the $p$-adic Lie group $\operatorname{Gal}(L^{max}/L)$ on the above terms. On $Q_n \subseteq P^{H_L^{cyc}} \subseteq P^{H^{max}}$ the group $\operatorname{Gal}(L^{max}/L)$ acts through its quotient $\Gamma_L^{cyc}$. By Cor.\ \ref{qnlocan} the $\Gamma_L^{cyc}$-action on $Q_n$ is locally $\mathbf{Q}_p$-analytic. Moreover, by assumption the $\operatorname{Gal}(L^{max}/L)$-actions (through the map $\tau^{-1}$) on $B$ and $P_0$ are locally $\mathbf{Q}_p$-analytic. We therefore find, by \cite{Eme} Cor.\ 3.6.13, an analytic open subgroup $U \subseteq \operatorname{Gal}(L^{max}/L)$ such that $Q_n = (Q_n)_{\mathbb{U}-an}$, $B = B_{\mathbb{U}-an}$, and $P_0 = (P_0)_{\mathbb{U}-an}$. Using Lemma \ref{ant}.ii and Lemma \ref{Uan-projective} we then obtain that
\begin{align*}
Q_n \subseteq (P^{H^{max}})_{\mathbb{U}-an} & = ((A \widehat{\otimes}_L \widehat{L^{max}}) \otimes_B P_0)_{\mathbb{U}-an} = (A \widehat{\otimes}_L \widehat{L^{max}})_{\mathbb{U}-an} \otimes_B P_0 \\
& = (A \widehat{\otimes}_L (\widehat{L^{max}})_{\mathbb{U}-an}) \otimes_B P_0 \ .
\end{align*}
Recall that the twisted $G_L$-action on $P_0$ is given by ${^{g*}x} = \tau(g^{-1})(x)$. Since, by our assumption, the $o_L^\times$-action on $P_0$ is locally $L$-analytic this twisted $\operatorname{Gal}(L^{max}/L)$-action on $P_0$ is locally $\mathbf{Q}_p$-analytic with the corresponding derived action of $L \otimes_{\mathbf{Q}_p} \operatorname{Lie}(\operatorname{Gal}(L^{max}/L))$ factorizing through the map
\begin{equation}\label{diag:L-ana}
\xymatrix{
L \otimes_{\mathbf{Q}_p} \operatorname{Lie}(\operatorname{Gal}(L^{max}/L)) \ar[d]_{\operatorname{id} \otimes \operatorname{pr} \oplus \operatorname{id} \otimes \operatorname{pr}} \ar[r]^-{\operatorname{id} \otimes -d\tau} & L \otimes_{\mathbf{Q}_p} L \ar[rr]^-{a \otimes b \mapsto ab} & & L = \operatorname{Lie}(o_L^\times) \\
L \otimes_{\mathbf{Q}_p} \operatorname{Lie}(\Gamma_L^{LT}) \oplus L \otimes_{\mathbf{Q}_p} \operatorname{Lie}(\Gamma_L^{cyc}) \ar[d]_{\eqref{diag:splitting}\, \oplus \, \operatorname{id} \otimes d\chi_{cyc}} & & \\
(\sum_{\sigma \in \Sigma} L) \oplus L \ar[uurrr]_-{\qquad\ (\sum_\sigma a_\sigma) + c \;\mapsto\; a_{\operatorname{id}} - c} & & }
\end{equation}
We now distinguish two cases, according to whether there is a finite extension $M$ of $L$ such that $L^{cyc} \subseteq M \cdot L_\infty$ or whether there is a finite extension $M$ of $L$ such that $L^{cyc} \cap L_\infty = M$. Since $\Gamma^{cyc}_{L_n}$ is an open subgroup of $\mathbf{Z}_p^\times$, we are always in one of these two cases.
\textit{First case:} By \cite{Ser} III.A4 Prop.\ 4 the character $\chi_{LT}$ corresponds, via the reciprocity isomorphism of local class field theory, to the projection map $o_L^\times \times \pi_L^{\widehat{\mathbf{Z}}} \xrightarrow{\operatorname{pr}} o_L^\times$. One deduces from this that, in the present situation, $\chi_{cyc}$ differs from $N_{L/\mathbf{Q}_p} \circ \chi_{LT}$ by a character of finite order. It follows that $\tau = \chi_{cyc} \chi_{LT}^{-1}$ is equal to a finite order character times $\prod_{\sigma \in \Sigma, \sigma \neq \operatorname{id}} \sigma \circ \chi_{LT}$. We also have $\operatorname{Lie}(\operatorname{Gal}(L^{max}/L)) = \operatorname{Lie}(\Gamma_L^{LT})$ in this case. By feeding this information into the above commutative diagram \eqref{diag:L-ana} one easily deduces that $\nabla_{\operatorname{id}}$ acts as zero on $P_0$. On the other hand, by \cite{STLAV} Cor.\ 4.3, we have $\nabla_{\operatorname{id}} = 0$ on $(\widehat{L^{max}})^{\mathrm{la}}$. This implies that $\nabla_{\operatorname{id}} = 0$ on $(A \widehat{\otimes}_L (\widehat{L^{max}})_{\mathbb{U}-an}) \otimes_B P_0$. The natural map $\operatorname{Lie}(\operatorname{Gal}(L^{max}/L)) = \operatorname{Lie}(\Gamma_L^{LT}) \to \operatorname{Lie}(\Gamma_L^{cyc})$ sends $\nabla_{\operatorname{id}}$ to $\nabla_{Sen}$. This implies the result.
\textit{Second case:} This time the vertical maps in the diagram \eqref{diag:L-ana} are bijective. One immediately deduces that $\nabla_{\operatorname{id}} + \nabla_{cyc}$ acts as zero on $P_0$. On the other hand we know from Lemma \ref{essdispro} that $\nabla_{\operatorname{id}} + \nabla_{cyc} = 0$ on $(\widehat{L^{max}})^{\mathrm{la}}$. Hence $\nabla_{\operatorname{id}} + \nabla_{cyc} = 0$ on $(A \widehat{\otimes}_L (\widehat{L^{max}})_{\mathbb{U}-an}) \otimes_B P_0$. The natural map $\operatorname{Lie}(\operatorname{Gal}(L^{max} / L)) \to \operatorname{Lie}(\Gamma_L^{cyc})$ sends $\nabla_{\operatorname{id}}$ to $0$ and $\nabla_{cyc}$ to $\nabla_{Sen}$. This again implies the result.
\end{proof}
We now can state our main technical result.
\begin{theorem}\label{Sen-vanishes}
If the $o_L^\times$-actions on $B$ and $P_0$ are locally $L$-analytic then $P^{G_L,*}$ is a finitely generated projective $A$-module of the same rank as $P$ and $A_{\mathbf{C}_p} \otimes_A P^{G_L,*} = P$.
\end{theorem}
\begin{proof}
This follows from Lemma \ref{Sen-zero} and Prop.\ \ref{nabsenul}.
\end{proof}
\begin{remark}\label{untwisting}
All of the above remains valid if we replace $\tau$ by $\tau^{-1}$.
\end{remark}
\subsection{The equivalence between $L$-analytic $(\varphi_L,\Gamma_L)$-modules over $\mathfrak{B}$ and $\mathfrak{X}$}
\label{sec:TS}
We now explore the diagram
\begin{equation*}
\xymatrix{
& \mathbf{C}_p \times_{\mathbf{Q}_p} \mathfrak{B} \ar[ld] \ar[r]^{\kappa}_{\cong} & \mathbf{C}_p \times_L \mathfrak{X} \ar[rd] & \\
\mathfrak{B}_{/L} & & & \mathfrak{X} }
\end{equation*}
in order to transfer modules between the two sides. As in section \ref{sec:LT} we treat $\kappa$ as an identification. Recall also that $\kappa$ is equivariant for the $o_L \setminus \{0\}$-actions on both sides.
As a consequence of Remark \ref{Galois-fixed} we have (cf.\ Lemma \ref{twist-Xn})
\begin{equation*}
\mathscr{R}_L(\mathfrak{B}) = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{B})^{G_L} \qquad\text{and}\qquad \mathscr{R}_L(\mathfrak{X}) = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{B})^{G_L,*} \ ,
\end{equation*}
where the twisted Galois action is given by $(\sigma,f) \longmapsto {^{\sigma *}f} := ({^\sigma f})([\tau(\sigma^{-1})](.))$; it commutes with the $o_L \setminus \{0\}$-action. By extending the concept of the twisted Galois action we will construct a functor from $\operatorname{Mod}_L(\mathscr{R}_L(\mathfrak{B}))$ to $\operatorname{Mod}_L(\mathscr{R}_L(\mathfrak{X}))$.
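A short verification that $(\sigma,f) \longmapsto {^{\sigma *}f}$ is indeed again an action (using that $G_L$ acts on $\mathscr{R}_{\mathbf{C}_p}(\mathfrak{B})$ through the coefficients, so that it commutes with composition with the $L$-rational maps $[a]$ for $a \in o_L^\times$): for $\sigma, \rho \in G_L$ we have
\begin{equation*}
{^{\sigma *}}({^{\rho *}f}) = {^{\sigma}}\big({^{\rho}f} \circ [\tau(\rho^{-1})]\big) \circ [\tau(\sigma^{-1})] = {^{\sigma\rho}f} \circ [\tau(\rho^{-1})\tau(\sigma^{-1})] = {^{\sigma\rho}f} \circ [\tau((\sigma\rho)^{-1})] = {^{(\sigma\rho) *}f} \ ,
\end{equation*}
since $\tau$ is a homomorphism into the abelian group $o_L^\times$.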
Let $M$ be a $(\varphi_L,\Gamma_L)$-module over
$\mathscr{R}_L(\mathfrak{B})$. Then $M_{\mathbf{C}_p} :=
\mathscr{R}_{\mathbf{C}_p}(\mathfrak{B}) \otimes_{\mathscr{R}_L(\mathfrak{B})} M$
is a $(\varphi_L,\Gamma_L)$-module over $\mathscr{R}_{\mathbf{C}_p}(\mathfrak{B})$ by (the analog for $\mathfrak{B}$ of) Lemma \ref{scalar-ext}.ii. One easily checks that the twisted $G_L$-action on $M_{\mathbf{C}_p}$ given by
\begin{equation*}
(\sigma, f \otimes m) \longmapsto \sigma * (f \otimes m) := {^{\sigma *}f} \otimes \tau(\sigma^{-1})(m) \ ,
\end{equation*}
for $\sigma \in G_L$, $f \in \mathscr{R}_{\mathbf{C}_p}(\mathfrak{B})$, and $m \in M$, is well defined. Observe that this twisted Galois action and the action of $o_L\backslash \{0\}$ on
$M_{\mathbf{C}_p}$ commute with each other. Hence the fixed elements
\begin{equation*}
M_\mathfrak{X} := (\mathscr{R}_{\mathbf{C}_p}(\mathfrak{B}) \otimes_{\mathscr{R}_L(\mathfrak{B})} M)^{G_L,*}
\end{equation*}
form an $\mathscr{R}_L(\mathfrak{X})$-module which is invariant under the $o_L \setminus \{0\}$-action.
In order to analyze $M_\mathfrak{X}$ as an $\mathscr{R}_L(\mathfrak{X})$-module we use the result from the previous section \ref{sec:key}.
\begin{theorem}\phantomsection\label{descent-to-X}
\begin{itemize}
\item[i.] $M_\mathfrak{X}$ is a finitely generated projective $\mathscr{R}_L(\mathfrak{X})$-module.
\item[ii.] If $M$ is $L$-analytic then the rank of $M_\mathfrak{X}$ is equal to the rank of $M$ and
\begin{equation*}
\mathscr{R}_{\mathbf{C}_p}(\mathfrak{X}) \otimes_{\mathscr{R}_L(\mathfrak{X})} M_\mathfrak{X} = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{B}) \otimes_{\mathscr{R}_L(\mathfrak{B})} M \ .
\end{equation*}
\end{itemize}
\end{theorem}
\begin{proof}
By the analog for $\mathfrak{B}$ of Prop.\ \ref{descent} we find an $n \geq 1$ and an $\mathcal{O}_L(\mathfrak{B} \setminus \mathfrak{B}_n)$-submodule $M_n \subseteq M$ such that
\begin{itemize}
\item[--] $M_n$ is finitely generated projective over $\mathcal{O}_L(\mathfrak{B} \setminus \mathfrak{B}_n)$, and $\mathscr{R}_L(\mathfrak{B}) \otimes_{\mathcal{O}_L(\mathfrak{B} \setminus \mathfrak{B}_n)} M_n = M$,
\item[--] $M_n$ is $o_L^\times$-invariant, and the induced $o_L^\times$-action on $M_n$ is continuous,
\item[--] $\varphi_M$ restricts to a continuous homomorphism
\begin{equation*}
\varphi_{M_n} : M_n \longrightarrow \mathcal{O}_L(\mathfrak{B} \setminus \mathfrak{B}_{n+1}) \otimes_{\mathcal{O}_L(\mathfrak{B} \setminus \mathfrak{B}_n)} M_n
\end{equation*}
such that the induced linear map
\begin{equation*}
\mathcal{O}_L(\mathfrak{B} \setminus \mathfrak{B}_{n+1}) \otimes_{\mathcal{O}_L(\mathfrak{B} \setminus \mathfrak{B}_n),\varphi_L} M_n \xrightarrow{\;\cong\;}
\mathcal{O}_L(\mathfrak{B} \setminus \mathfrak{B}_{n+1}) \otimes_{\mathcal{O}_L(\mathfrak{B} \setminus \mathfrak{B}_n)} M_n
\end{equation*}
is an isomorphism.
\end{itemize}
For any $m \geq n$ we define $M_m := \mathcal{O}_L(\mathfrak{B} \setminus \mathfrak{B}_m) \otimes_{\mathcal{O}_L(\mathfrak{B} \setminus \mathfrak{B}_n)} M_n$. The above three properties hold correspondingly for any $m \geq n$. Each $M_{m,\mathbf{C}_p} := \mathcal{O}_{\mathbf{C}_p}(\mathfrak{B} \setminus \mathfrak{B}_m) \otimes_{\mathcal{O}_L(\mathfrak{B} \setminus \mathfrak{B}_m)} M_m$ carries an obvious twisted $G_L$-action such that the identity $M_{\mathbf{C}_p} = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{B}) \otimes_{\mathcal{O}_{\mathbf{C}_p}(\mathfrak{B} \setminus \mathfrak{B}_m)} M_{m,\mathbf{C}_p}$ is compatible with the twisted $G_L$-actions on both sides. The fixed elements $M_{m,\mathbf{C}_p}^{G_L,*}$ form an $\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_m)$-module (cf.\ Lemma \ref{twist-Xn}), and $M_{\mathbf{C}_p}^{G_L,*} = (\bigcup_{m \geq n} M_{m,\mathbf{C}_p})^{G_L,*} = \bigcup_{m \geq n} M_{m,\mathbf{C}_p}^{G_L,*}$. We claim that it suffices to show:
\begin{itemize}
\item[iii.] $M_{m,\mathbf{C}_p}^{G_L,*}$, for any $m \geq n$, is a finitely generated projective $\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_m)$-module.
\item[iv.] For any $m \geq n$, the natural map
\begin{equation*}
\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_{m+1}) \otimes_{\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_m)} M_{m,\mathbf{C}_p}^{G_L,*} \xrightarrow{\;\cong\;} M_{m+1,\mathbf{C}_p}^{G_L,*}
\end{equation*}
is an isomorphism.
\item[v.] If $M$ is $L$-analytic then $\mathcal{O}_{\mathbf{C}_p}(\mathfrak{X} \setminus \mathfrak{X}_n) \otimes_{\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_n)} M_{n,\mathbf{C}_p}^{G_L,*} = M_{n,\mathbf{C}_p}$.
\end{itemize}
Suppose that iii. - v. hold true. By iv. we have $M_{m,\mathbf{C}_p}^{G_L,*} = \mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_m) \otimes_{\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_n)} M_{n,\mathbf{C}_p}^{G_L,*}$ for any $m \geq n$. Hence
\begin{align*}
M_{\mathbf{C}_p}^{G_L,*} & = \bigcup_{m \geq n} \big( \mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_m) \otimes_{\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_n)} M_{n,\mathbf{C}_p}^{G_L,*} \big) \\
& = \big( \bigcup_{m \geq n} \mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_m) \big) \otimes_{\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_n)} M_{n,\mathbf{C}_p}^{G_L,*} \\
& = \mathscr{R}_L(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_n)} M_{n,\mathbf{C}_p}^{G_L,*} \ ,
\end{align*}
and iii. therefore implies i. Assuming that $M$ is $L$-analytic we deduce from v. that
\begin{align*}
\mathscr{R}_{\mathbf{C}_p}(\mathfrak{X}) \otimes_{ \mathscr{R}_L(\mathfrak{X})} M_{\mathbf{C}_p}^{G_L,*} & = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_n)} M_{n,\mathbf{C}_p}^{G_L,*} \\
& = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{X}) \otimes_{\mathcal{O}_{\mathbf{C}_p}(\mathfrak{X} \setminus \mathfrak{X}_n)} M_{n,\mathbf{C}_p} \\
& = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{B}) \otimes_{\mathcal{O}_{\mathbf{C}_p}(\mathfrak{B} \setminus \mathfrak{B}_n)} M_{n,\mathbf{C}_p} \\
& = M_{\mathbf{C}_p} \ ,
\end{align*}
which is the second part of ii. Since all rings involved are integral domains the first part of ii. is a consequence of the second part by \cite{B-AC} II.5.3 Prop.\ 4.
In order to establish iii. - v. we fix an $m \geq n$ and abbreviate $\mathfrak{Y} := \mathfrak{X} \setminus \mathfrak{X}_m$ and $\mathfrak{A} := \mathfrak{B} \setminus \mathfrak{B}_m$. We recall from Prop.\ \ref{quasi-Stein} and its proof that we have increasing sequences $\mathfrak{Y}_1 \subset \ldots \subset \mathfrak{Y}_j \subset \ldots \subset \mathfrak{Y}$ and $\mathfrak{A}_1 \subset \ldots \subset \mathfrak{A}_j \subset \ldots \subset \mathfrak{A}$ of open affinoid subdomains (over $L$) which form admissible open coverings of $\mathfrak{Y}$ and $\mathfrak{A}$, respectively, and such that:
\begin{itemize}
\item[a.] All restriction maps $\mathcal{O}_K(\mathfrak{Y}_{j+1}) \longrightarrow \mathcal{O}_K(\mathfrak{Y}_j)$ and $\mathcal{O}_K(\mathfrak{A}_{j+1}) \longrightarrow \mathcal{O}_K(\mathfrak{A}_j)$ are compactoid (proof of Prop.\ \ref{compactoid}.i) and have dense image. In particular, these coverings exhibit $\mathfrak{Y}$ and $\mathfrak{A}$ as Stein spaces.
\item[b.] Each $\mathfrak{A}_j$ is a subannulus of $\mathfrak{A}$.
\item[c.] Every $\mathfrak{Y}_j$ and every $\mathfrak{A}_j$ is $o_L^\times$-invariant, and the induced $o_L^\times$-actions on $\mathcal{O}_L(\mathfrak{Y}_j)$ and $\mathcal{O}_L(\mathfrak{A}_j)$ satisfy the condition \eqref{f:condition-locan} and are locally $L$-analytic (Prop.\ \ref{annuli-locan} and discussion before Def.\ \ref{def:L-analytic}).
\item[d.] The LT-isomorphism $\kappa : \mathfrak{X}_{/\mathbf{C}_p} \xrightarrow{\cong} \mathfrak{B}_{/\mathbf{C}_p}$ restricts, for any $j \geq 1$, to an isomorphism $\mathfrak{Y}_{j/\mathbf{C}_p} \xrightarrow{\cong} \mathfrak{A}_{j/\mathbf{C}_p}$. With respect to the resulting identifications
\begin{equation*}
\mathbf{C}_p\, \widehat{\otimes}_L\, \mathcal{O}_L(\mathfrak{Y}_j) = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j) = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{A}_j) = \mathbf{C}_p\, \widehat{\otimes}_L\, \mathcal{O}_{\mathbf{C}_p}(\mathfrak{A}_j)
\end{equation*}
the $G_L$-action through the coefficients $\mathbf{C}_p$ on the left hand side corresponds to the twisted $G_L$-action on the right hand side; in particular, $\mathcal{O}_L(\mathfrak{Y}_j) = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{A}_j)^{G_L,*}$.
\end{itemize}
As explained, for example, in \cite{ST} \S3 to give a coherent sheaf $\mathcal{S}$ on a quasi-Stein space $\mathfrak{Z}$ with defining admissible affinoid covering $\mathfrak{Z}_1 \subset \ldots \subset \mathfrak{Z}_j \subset \ldots \subset \mathfrak{Z}$ is the same as giving the finitely generated $\mathcal{O}(\mathfrak{Z}_j)$-modules $\mathcal{S}(\mathfrak{Z}_j)$ together with isomorphisms
\begin{equation*}
\mathcal{O}(\mathfrak{Z}_j) \otimes_{\mathcal{O}(\mathfrak{Z}_{j+1})} \mathcal{S}(\mathfrak{Z}_{j+1}) \xrightarrow{\cong} \mathcal{S}(\mathfrak{Z}_j) \ .
\end{equation*}
Then
\begin{equation*}
\mathcal{S}(\mathfrak{Z}) = \varprojlim \mathcal{S}(\mathfrak{Z}_j) \qquad\text{and}\qquad \mathcal{S}(\mathfrak{Z}_j) = \mathcal{O}(\mathfrak{Z}_j) \otimes_{\mathcal{O}(\mathfrak{Z})} \mathcal{S}(\mathfrak{Z}) \ .
\end{equation*}
Moreover, by Prop.\ \ref{gruson}, the coherent sheaf $\mathcal{S}$ is a vector bundle if and only if $\mathcal{S}(\mathfrak{Z})$ is a finitely generated projective $\mathcal{O}(\mathfrak{Z})$-module.
In our situation therefore the modules $M_m$ and $M_{m,\mathbf{C}_p}$ are the global sections of vector bundles $\mathcal{M}_m$ and $\mathcal{M}_{m,\mathbf{C}_p}$ on $\mathfrak{A}$ and $\mathfrak{A}_{/\mathbf{C}_p}$, respectively. As a consequence of property c. and Prop.\ \ref{locan3} the continuous $o_L^\times$-action on $M_m$ extends semilinearly to compatible locally $\mathbf{Q}_p$-analytic $o_L^\times$-actions on the system $(\mathcal{M}_m(\mathfrak{A}_j) = \mathcal{O}_L(\mathfrak{A}_j) \otimes_{\mathcal{O}_L(\mathfrak{A})} M_m)_j$. For later use we note right away that, as explained in the Addendum to the proof of Prop.\ \ref{differentiable}, each $\mathcal{M}_m(\mathfrak{A}_j)$ is $L$-analytic if $M$ is $L$-analytic. As explained before Prop.\ \ref{nabsenul} these give rise to compatible (continuous) twisted $G_L$-actions on the system $(\mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}) = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{A}_j) \otimes_{\mathcal{O}_L(\mathfrak{A})} M_m = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{A}_j) \otimes_{\mathcal{O}_{\mathbf{C}_p}(\mathfrak{A})} M_{m,\mathbf{C}_p})_j$. By construction the latter is a system of finitely generated projective modules (all of the same rank, of course). But because of the property b. the rings $\mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j) = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{A}_j)$ are principal ideal domains. Hence we actually have a system of free modules. Each member of this system therefore satisfies the assumptions of the previous section \ref{sec:key}. The second half of Cor.\ \ref{TS2}.iii then tells us that by passing to the fixed elements for the twisted $G_L$-action we obtain a system $(\mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})^{G_L,*})_j$ of $\mathcal{O}_L(\mathfrak{Y}_j)$-modules $\mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})^{G_L,*}$ which are submodules of finitely generated free modules. The affinoid $\mathfrak{Y_j}$ being one dimensional and smooth the ring $\mathcal{O}_L(\mathfrak{Y}_j)$ is a Dedekind domain, over which submodules of finitely generated free modules are finitely generated projective. Hence $(\mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})^{G_L,*})_j$ is a system of finitely generated projective modules. We obviously have
\begin{equation*}
\varprojlim_j \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})^{G_L,*} = (\varprojlim_j \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}))^{G_L,*} = \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{/\mathbf{C}_p})^{G_L,*} = M_{m/\mathbf{C}_p}^{G_L,*} \ .
\end{equation*}
If we show that the transition maps
\begin{equation}\label{f:transition}
\mathcal{O}_L(\mathfrak{Y}_j) \otimes_{\mathcal{O}_L(\mathfrak{Y}_{j+1})} \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j+1/\mathbf{C}_p})^{G_L,*} \longrightarrow \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})^{G_L,*}
\end{equation}
are isomorphisms then the system $(\mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})^{G_L,*})_j$ corresponds to a vector bundle $\mathcal{M}_{\mathfrak{Y}}$ on $\mathfrak{Y}$ with global sections $M_{m/\mathbf{C}_p}^{G_L,*}$. This proves iii.
To see that \eqref{f:transition}, for a fixed $j$, is an isomorphism we use Cor.\ \ref{TS2} again which implies the existence of finite extensions $L \subseteq L_1 \subseteq L_2 \subseteq \mathbf{C}_p$ and of a finitely generated projective $(G_L,\ast)$-invariant $\mathcal{O}_{L_1}(\mathfrak{Y}_{j+1})$-submodule $Q$ of $\mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j+1/\mathbf{C}_p})$ with the properties a) - c) in Cor.\ \ref{TS2}.ii and such that
\begin{align*}
\mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j+1/\mathbf{C}_p})^{G_L,*} & \subseteq \mathcal{O}_{L_2}(\mathfrak{Y}_{j+1}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_{j+1})} Q \\ & \subseteq \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j+1/\mathbf{C}_p}) = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_{j+1}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_{j+1})} Q
\end{align*}
and hence, in particular, that
\begin{equation*}
\mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j+1/\mathbf{C}_p})^{G_L,*} = (\mathcal{O}_{L_2}(\mathfrak{Y}_{j+1}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_{j+1})} Q)^{G_L,*} \ .
\end{equation*}
We deduce that
\begin{align}\label{f:transition2}
\mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}) & = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j) \otimes_{\mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_{j+1})} \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j+1/\mathbf{C}_p}) = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_{j+1})} Q \\
& = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_j)} (\mathcal{O}_{L_1}(\mathfrak{Y}_j) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_{j+1})} Q) \ . \nonumber
\end{align}
We claim that for $P := \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})$ we may take as input for Cor.\ \ref{TS2}.iii the submodule $\mathcal{O}_{L_1}(\mathfrak{Y}_j) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_{j+1})} Q$. The properties a) and c) are obvious. For b) we have to check that the inclusion
\begin{equation*}
\mathcal{O}_{\widehat{L^{cyc}}}(\mathfrak{Y}_j) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_j)} (\mathcal{O}_{L_1}(\mathfrak{Y}_j) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_{j+1})} Q) \subseteq \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})^{H_L^{cyc}}
\end{equation*}
is an equality. Since the ring homomorphism $\mathcal{O}_{\widehat{L^{cyc}}}(\mathfrak{Y}_j) \longrightarrow \mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j) = \mathbf{C}_p \, \widehat{\otimes}_{\widehat{L^{cyc}}} \mathcal{O}_{\widehat{L^{cyc}}}(\mathfrak{Y}_j)$ is faithfully flat (\cite{Co1} Lemma 1.1.5(1)) it suffices to see that the base change to $\mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j)$ of the above inclusion is an equality. But both sides become equal to $\mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})$, the left hand side by \eqref{f:transition2} and the right hand side by Cor.\ \ref{TS2}.i.
As output from Cor.\ \ref{TS2}.iii we then obtain the first identity in
\begin{align*}
\mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})^{G_L,*} & = (\mathcal{O}_{L_2}(\mathfrak{Y}_j) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_{j+1})} Q)^{G_L,*} \\
& = (\mathcal{O}_L(\mathfrak{Y}_j) \otimes_{\mathcal{O}_L(\mathfrak{Y}_{j+1})} (\mathcal{O}_{L_2}(\mathfrak{Y}_{j+1}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_{j+1})} Q))^{G_L,*} \ .
\end{align*}
Since the twisted $G_L$-actions are continuous and since the profinite group $G_L$ is topologically finitely generated (cf.\ \cite{NSW} Thm.\ 7.5.14), we may, after picking finitely many topological generators $\sigma_1, \ldots, \sigma_r$ of $G_L$, write
\begin{align*}
\mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j+1/\mathbf{C}_p})^{G_L,*} & = (\mathcal{O}_{L_2}(\mathfrak{Y}_{j+1}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_{j+1})} Q)^{G_L,*} \\
& = \ker \Big(\mathcal{O}_{L_2}(\mathfrak{Y}_{j+1}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_{j+1})} Q \xrightarrow{\prod (1 - \sigma_i)} \prod_{i=1}^r \mathcal{O}_{L_2}(\mathfrak{Y}_{j+1}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_{j+1})} Q \Big).
\end{align*}
The ring homomorphism $\mathcal{O}_L(\mathfrak{Y}_{j+1}) \longrightarrow \mathcal{O}_L(\mathfrak{Y}_j)$ is flat (\cite{BGR} Cor.\ 7.3.2/6). Hence the formation of the kernel above commutes with base extension along this homomorphism. It follows that
\begin{align*}
\mathcal{O}_L(\mathfrak{Y}_j) \otimes_{\mathcal{O}_L(\mathfrak{Y}_{j+1})} \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j+1/\mathbf{C}_p})^{G_L,*} & = \mathcal{O}_L(\mathfrak{Y}_j) \otimes_{\mathcal{O}_L(\mathfrak{Y}_{j+1})} (\mathcal{O}_{L_2}(\mathfrak{Y}_{j+1}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_{j+1})} Q)^{G_L,*} \\
& = (\mathcal{O}_L(\mathfrak{Y}_j) \otimes_{\mathcal{O}_L(\mathfrak{Y}_{j+1})} (\mathcal{O}_{L_2}(\mathfrak{Y}_{j+1}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_{j+1})} Q))^{G_L,*} \\
& = \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})^{G_L,*} \ .
\end{align*}
This settles the isomorphism \eqref{f:transition}.
To check the isomorphism in iv. we temporarily indicate the dependence of our coverings on $m$ by writing $\mathfrak{Y}_j^{(m)}$ and $\mathfrak{A}_j^{(m)}$. Examining again the proof of Prop.\ \ref{quasi-Stein} one easily sees that, for a fixed $m$, these coverings can be chosen so that $\mathfrak{Y}_j^{(m+1)} \subseteq \mathfrak{Y}_j^{(m)}$ and $\mathfrak{A}_j^{(m+1)} \subseteq \mathfrak{A}_j^{(m)}$ for any $j$. But then the exact same argument as above for \eqref{f:transition} shows that the natural maps
\begin{equation*}
\mathcal{O}_L(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_L(\mathfrak{Y}_j^{(m)})} \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}^{(m)})^{G_L,*} \xrightarrow{\;\cong\;} \mathcal{M}_{m+1/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}^{(m+1)})^{G_L,*} = \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}^{(m+1)})^{G_L,*}
\end{equation*}
are isomorphisms. This means that the restriction to $\mathfrak{X} \setminus \mathfrak{X}_{m+1}$ of the vector bundle $\mathcal{M}_{\mathfrak{X} \setminus \mathfrak{X}_m}$ on $\mathfrak{X} \setminus \mathfrak{X}_m$ coincides with the vector bundle $\mathcal{M}_{\mathfrak{X} \setminus \mathfrak{X}_{m+1}}$. On global sections this is equivalent to the isomorphism in iv.
Now we assume that $M$ is $L$-analytic. Earlier in the proof we had observed already that then each $\mathcal{M}_m(\mathfrak{A}_j)$ is $L$-analytic. In this situation Thm.\ \ref{Sen-vanishes} tells us that
\begin{equation}\label{f:form2}
\mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j) \otimes_{\mathcal{O}_L(\mathfrak{Y}_j)} \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})^{G_L,*} = \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}) \ .
\end{equation}
Being finitely generated projective, $\mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}) \otimes_{\mathcal{O}_L(\mathfrak{Y})} \mathcal{M}_{\mathfrak{Y}}(\mathfrak{Y})$ must be the global sections of a coherent sheaf on $\mathfrak{Y}_{/\mathbf{C}_p} \cong \mathfrak{A}_{/\mathbf{C}_p}$. But one easily deduces from \eqref{f:form2} that this sheaf is $\mathcal{M}_{m/\mathbf{C}_p}$. It follows that
\begin{equation*}
\mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}) \otimes_{\mathcal{O}_L(\mathfrak{Y})} M_{m/\mathbf{C}_p}^{G_L,*} = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}) \otimes_{\mathcal{O}_L(\mathfrak{Y})} \mathcal{M}_{\mathfrak{Y}}(\mathfrak{Y}) = \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{/\mathbf{C}_p}) = M_{m,\mathbf{C}_p} \ ,
\end{equation*}
which shows v.
\end{proof}
\begin{lemma}\label{prop:N}
$M_\mathfrak{X}$, for any $M$ in $\operatorname{Mod}_L(\mathscr{R}_L(\mathfrak{B}))$, is an object in
$\operatorname{Mod}_L(\mathscr{R}_L(\mathfrak{X}))$.
\end{lemma}
\begin{proof}
It remains to show that:
\begin{itemize}
\item[a)] The $\Gamma_L$-action on $M_\mathfrak{X}$ is continuous for the canonical topology of $M_\mathfrak{X}$ as an $\mathscr{R}_L(\mathfrak{X})$-module.
\item[b)] The map
\begin{equation*}
\varphi_{M_\mathfrak{X}}^{lin} : \mathscr{R}_L(\mathfrak{X})\otimes_{\mathscr{R}_L(\mathfrak{X}), \varphi_L} M_\mathfrak{X} \longrightarrow M_\mathfrak{X}
\end{equation*}
is an isomorphism.
\end{itemize}
We will use the notations in the proof of the previous Thm.\ \ref{descent-to-X}.
Ad a): By Remark \ref{semilinear-cont} each individual $\gamma \in \Gamma_L$ acts by a continuous automorphism. Since $M_\mathfrak{X}$ is barrelled it therefore suffices, by the Banach-Steinhaus theorem, to check that the orbit maps $\gamma \longmapsto \gamma m$, for $m \in M_\mathfrak{X}$, are continuous. But $M_\mathfrak{X} = \bigcup_{m \geq n} M_{m,\mathbf{C}_p}^{G_L,*}$, and the inclusion maps $M_{m,\mathbf{C}_p}^{G_L,*} \longrightarrow M_\mathfrak{X}$ are continuous for the canonical topologies (again by Remark \ref{semilinear-cont}). This reduces us to showing the continuity of the orbit maps for each $M_{m,\mathbf{C}_p}^{G_L,*}$. Now we use that, by \eqref{f:transition}, we have
\begin{equation*}
M_{m,\mathbf{C}_p}^{G_L,*} = \varprojlim_j \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})^{G_L,*}
\end{equation*}
at least algebraically and (cf.\ Fact \ref{coherent}.i)
\begin{equation*}
\mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})^{G_L,*} = \mathcal{O}_L(\mathfrak{Y}_j) \otimes_{\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_m)} M_{m,\mathbf{C}_p}^{G_L,*} \ .
\end{equation*}
In fact, the first identity holds topologically if we equip the left hand side with the canonical topology and the right hand side with the projective limit of the canonical topologies. This is most easily seen by writing $\mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})^{G_L,*}$ as a direct summand of a finitely generated free module in which case the corresponding identity is evidently topological. This finally reduces us further to the continuity of the orbit maps of $\Gamma_L$ on the $\mathcal{O}_L(\mathfrak{Y}_j)$-module $ \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p})^{G_L,*}$. Since this module is finitely generated projective over the Banach algebra $\mathcal{O}_L(\mathfrak{Y}_j) = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{A}_j)^{G_L,*}$ we have pointed out the required continuity already in the discussion before Prop.\ \ref{nabsenul}.
Ad b): This time we choose the coverings $\{\mathfrak{Y}_j^{(m)}\}_j$ and $\{\mathfrak{A}_j^{(m)}\}_j$ of $\mathfrak{X} \setminus \mathfrak{X}_m$ and $\mathfrak{B} \setminus \mathfrak{B}_m$, respectively, in such a way that $\pi_L^* \mathfrak{Y}_j^{(m+1)} \subseteq \mathfrak{Y}_j^{(m)}$ and $\pi_L^* \mathfrak{A}_j^{(m+1)} \subseteq \mathfrak{A}_j^{(m)}$ for any $j$.
In a first step we show that the map
\begin{align*}
\mathcal{O}_L(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_L(\mathfrak{Y}_j^{(m)}),\varphi_L} \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}^{(m)})^{G_L,*} & \xrightarrow{\;\cong\;} \mathcal{M}_{m+1/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}^{(m+1)})^{G_L,*} \\
f \otimes m & \longmapsto f \varphi_M(m)
\end{align*}
is an isomorphism. As listed at the beginning of the proof of Thm.\ \ref{descent-to-X} we have the isomorphisms
\begin{equation*}
\mathcal{O}_L(\mathfrak{B} \setminus \mathfrak{B}_{m+1}) \otimes_{\mathcal{O}_L(\mathfrak{B} \setminus \mathfrak{B}_m),\varphi_L} M_m \xrightarrow{\;\cong\;} M_{m+1} \ ,
\end{equation*}
hence
\begin{equation*}
\mathcal{O}_{\mathbf{C}_p}(\mathfrak{B} \setminus \mathfrak{B}_{m+1}) \otimes_{\mathcal{O}_{\mathbf{C}_p} (\mathfrak{B} \setminus \mathfrak{B}_m),\varphi_L} M_{m/\mathbf{C}_p} \xrightarrow{\;\cong\;} M_{m+1/\mathbf{C}_p} \ ,
\end{equation*}
and therefore
\begin{equation*}
\mathcal{O}_{\mathbf{C}_p}(\mathfrak{A}_j^{(m+1)}) \otimes_{\mathcal{O}_{\mathbf{C}_p} (\mathfrak{A}_j^{(m)}),\varphi_L} \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}^{(m)}) \xrightarrow{\;\cong\;} \mathcal{M}_{m+1/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}^{(m+1)})
\end{equation*}
induced by $\varphi_M$. Since the $o_L \setminus \{0\}$-action commutes with the twisted $G_L$-action the latter map restricts to the isomorphism
\begin{equation*}
\big[\mathcal{O}_{\mathbf{C}_p}(\mathfrak{A}_j^{(m+1)}) \otimes_{\mathcal{O}_{\mathbf{C}_p} (\mathfrak{A}_j^{(m)}),\varphi_L} \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}^{(m)}) \big]^{G_L,*} \xrightarrow{\;\cong\;} \mathcal{M}_{m+1/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}^{(m+1)})^{G_L,*} \ .
\end{equation*}
This reduces us to showing that the obvious inclusions induce the isomorphism
\begin{multline*}
\mathcal{O}_L(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_L(\mathfrak{Y}_j^{(m)}),\varphi_L} \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}^{(m)})^{G_L,*} \\
\xrightarrow{\ \cong\ }
\big[ \mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j^{(m)}),\varphi_L} \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}^{(m)}) \big]^{G_L,*} \ .
\end{multline*}
For this we proceed entirely similarly as in the argument for \eqref{f:transition} in the proof of Thm.\ \ref{descent-to-X}. By Cor.\ \ref{TS2} we find finite extensions $L \subseteq L_1 \subseteq L_2 \subseteq \mathbf{C}_p$ and a finitely generated projective $(G_L,\ast)$-invariant $\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m)})$-submodule $Q$ of $X := \mathcal{M}_{m/\mathbf{C}_p}(\mathfrak{A}_{j/\mathbf{C}_p}^{(m)})$ with the properties a) - c) and such that
\begin{equation*}
X^{G_L,*} \subseteq \mathcal{O}_{L_2}(\mathfrak{Y}_j^{(m)}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m)})} Q \subseteq X = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j^{(m)}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m)})} Q
\end{equation*}
and hence, in particular, that
\begin{equation*}
X^{G_L,*} = (\mathcal{O}_{L_2}(\mathfrak{Y}_j^{(m)}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m)})} Q)^{G_L,*} \ .
\end{equation*}
We deduce that
\begin{align*}
\mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j^{(m)}),\varphi_L} X & = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m)}),\varphi_L} Q \\
& = \mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m+1)})} (\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m)}),\varphi_L} Q) \ .
\end{align*}
This shows that for $P := \mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j^{(m)}),\varphi_L} X$ we may take as input for Cor.\ \ref{TS2}.iii the submodule $\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m)}),\varphi_L} Q$, and as output we obtain the first identity in
\begin{multline*}
\big[ \mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j^{(m)}),\varphi_L} X \big]^{G_L,*} = (\mathcal{O}_{L_2}(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m)}),\varphi_L} Q)^{G_L,*} \\
= (\mathcal{O}_L(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_L(\mathfrak{Y}_j^{(m)}),\varphi_L} (\mathcal{O}_{L_2}(\mathfrak{Y}_j^{(m)}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m)})} Q))^{G_L,*} \ .
\end{multline*}
Choosing finitely many topological generators $\sigma_1, \ldots, \sigma_r$ of $G_L$ we may write
\begin{align*}
X^{G_L,*} & = (\mathcal{O}_{L_2}(\mathfrak{Y}_j^{(m)}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m)})} Q)^{G_L,*} \\
& = \ker \Big( \mathcal{O}_{L_2}(\mathfrak{Y}_j^{(m)}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m)})} Q \xrightarrow{\prod (1 - \sigma_i)} \prod_{i=1}^r \mathcal{O}_{L_2}(\mathfrak{Y}_j^{(m)}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m)})} Q \Big).
\end{align*}
As a consequence of \cite{ST} Lemma 3.3 the ring homomorphism $\varphi_L : \mathcal{O}_L(\mathfrak{Y}_j^{(m)}) \rightarrow \mathcal{O}_L(\mathfrak{Y}_j^{(m+1)})$ is flat. Hence applying the functor $\mathcal{O}_L(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_L(\mathfrak{Y}_j^{(m)}),\varphi_L}$ commutes with the formation of this kernel, and we obtain
\begin{align*}
\mathcal{O}_L(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_L(\mathfrak{Y}_j^{(m)}),\varphi_L} X^{G_L,*} & = (\mathcal{O}_L(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_L(\mathfrak{Y}_j^{(m)}),\varphi_L} (\mathcal{O}_{L_2}(\mathfrak{Y}_j^{(m)}) \otimes_{\mathcal{O}_{L_1}(\mathfrak{Y}_j^{(m)})} Q))^{G_L,*} \\
& = \big[ \mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j^{(m+1)}) \otimes_{\mathcal{O}_{\mathbf{C}_p}(\mathfrak{Y}_j^{(m)}),\varphi_L} X \big]^{G_L,*} \ .
\end{align*}
This finishes the first step.
Still fixing $m$ but varying $j$, what we have shown amounts to the statement that $\varphi_M$ induces an isomorphism
\begin{equation*}
(\pi_L^*)^* \mathcal{M}_{\mathfrak{X} \setminus \mathfrak{X}_m} \xrightarrow{\;\cong\;} \mathcal{M}_{\mathfrak{X} \setminus \mathfrak{X}_{m+1}}
\end{equation*}
of vector bundles on $\mathfrak{X} \setminus \mathfrak{X}_{m+1}$. We deduce that, on global sections, we have the isomorphism
\begin{equation*}
\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_{m+1}) \otimes_{\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}_m),\varphi_L} M_{m,\mathbf{C}_p}^{G_L,*} \xrightarrow{\;\cong\;} M_{m+1,\mathbf{C}_p}^{G_L,*} \ .
\end{equation*}
By passing to the limit with respect to $m$ we obtain the assertion.
\end{proof}
The above construction is entirely symmetric in $\mathfrak{B}$ and $\mathfrak{X}$. Starting from a $(\varphi_L,\Gamma_L)$-module $N$ over $\mathscr{R}_L(\mathfrak{X})$ we obtain by the ``inverse'' twisting the $(\varphi_L,\Gamma_L)$-module
\begin{equation*}
N_\mathfrak{B} := (\mathscr{R}_{\mathbf{C}_p}(\mathfrak{X}) \otimes_{\mathscr{R}_L(\mathfrak{X})} N)^{G_L,*}
\end{equation*}
over $\mathscr{R}_L(\mathfrak{B})$.
\begin{theorem}\label{equiv}
The functors $M \longmapsto M_\mathfrak{X}$ and $N \longmapsto N_\mathfrak{B}$ induce equivalences of categories
\begin{equation*}
\operatorname{Mod}_{L,an}(\mathscr{R}_L(\mathfrak{B})) \simeq \operatorname{Mod}_{L,an}(\mathscr{R}_L(\mathfrak{X}))
\end{equation*}
which are quasi-inverse to each other.
\end{theorem}
\begin{proof}
We have
\begin{align*}
(M_\mathfrak{X})_\mathfrak{B} & = (\mathscr{R}_{\mathbf{C}_p}(\mathfrak{X}) \otimes_{\mathscr{R}_L(\mathfrak{X})} M_\mathfrak{X})^{G_L,*} \cong (\mathscr{R}_{\mathbf{C}_p}(\mathfrak{B}) \otimes_{\mathscr{R}_L(\mathfrak{B})} M)^{G_L \times \operatorname{id}} \\ & = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{B})^{G_L} \otimes_{\mathscr{R}_L(\mathfrak{B})} M = M \ ,
\end{align*}
where the second to last identity uses the fact that $M$ is free. Correspondingly we obtain $(N_\mathfrak{B})_\mathfrak{X} \cong N$ since we may first reduce to the case of a free $N$ by Remark \ref{summand-of-free}.
\end{proof}
\subsection{\'Etale $L$-analytic $(\varphi_L,\Gamma_L)$-modules}\label{sec:etalepgm}
In the previous section, we have constructed an equivalence of categories (Thm.\ \ref{equiv}) between $\operatorname{Mod}_{L,an}(\mathscr{R}_L(\mathfrak{B}))$ and $\operatorname{Mod}_{L,an}(\mathscr{R}_L(\mathfrak{X}))$. In this section, we define the slope of an $L$-analytic $(\varphi_L,\Gamma_L)$-module, as well as \'etale $L$-analytic $(\varphi_L,\Gamma_L)$-modules, and prove that the above equivalence respects the slopes as well as the condition of being \'etale. Before we do that, we show that our equivalence implies the following theorem.
\begin{theorem}\label{freeness}
Any $L$-analytic $(\varphi_L,\Gamma_L)$-module $M$ over $\mathscr{R}_L(\mathfrak{X})$ is free as an $\mathscr{R}_L(\mathfrak{X})$-module.
\end{theorem}
Before proving this theorem, we need a few preliminary results. Let $\delta : L^\times \to L^\times$ be a continuous character. We define a $(\varphi_L,\Gamma_L)$-module $\mathscr{R}_K(\mathfrak{X})(\delta)$ of rank $1$ over $\mathscr{R}_K(\mathfrak{X})$ as follows. We set $\mathscr{R}_K(\mathfrak{X})(\delta) = \mathscr{R}_K(\mathfrak{X})\cdot e_\delta$ where $\varphi_L(e_\delta) = \delta(\pi_L) \cdot e_\delta$ and $\gamma(e_\delta) = \delta(\gamma) \cdot e_\delta$ if $\gamma \in \Gamma_L$. We likewise define a $(\varphi_L,\Gamma_L)$-module $\mathscr{R}_K(\mathfrak{B})(\delta)$ of rank $1$ over $\mathscr{R}_K(\mathfrak{B})$ by $\mathscr{R}_K(\mathfrak{B})(\delta) = \mathscr{R}_K(\mathfrak{B})\cdot e_\delta$ where $\varphi_L(e_\delta) = \delta(\pi_L) \cdot e_\delta$ and $\gamma(e_\delta) = \delta(\gamma) \cdot e_\delta$ if $\gamma \in \Gamma_L$. Note that $\mathscr{R}_K(\mathfrak{X})(\delta)$ is $L$-analytic if and only if $\delta$ is $L$-analytic, and likewise for $\mathscr{R}_K(\mathfrak{B})(\delta)$.
\begin{proposition}\label{rank1b}
If $M$ is a $(\varphi_L,\Gamma_L)$-module of rank $1$ over $\mathscr{R}_L(\mathfrak{B})$, then there exists a character $\delta : L^\times \to L^\times$ such that $M \simeq \mathscr{R}_L(\mathfrak{B})(\delta)$.
\end{proposition}
\begin{proof}
This is \cite{FX} Prop.\ 1.9.
\end{proof}
\begin{lemma}\label{perdeltau}
If $\delta : L^\times \to L^\times$ is a locally $L$-analytic character, then there exists $\alpha \in \mathbf{C}_p^\times$ such that $g(\alpha) = \alpha \cdot \delta(\tau(g))$ for all $g \in G_L$.
\end{lemma}
\begin{proof}
Recall that since $\tau$ has a Hodge-Tate weight equal to $0$, there exists $\beta \in \mathbf{C}_p^\times$ such that $g(\beta) = \beta \cdot \tau(g)$ for all $g \in G_L$. Since $\delta : L^\times \to L^\times$ is locally $L$-analytic, there exists $s \in L$ and an open subgroup $U$ of $o_L^\times$ such that $\delta(a) = \exp(s \cdot \log(a))$ for any $a \in U$. By approximating $\beta$ multiplicatively by an element in $\overline{L}$ we find a finite extension $L'$ of $L$ and a $\beta' \in \mathbf{C}_p^\times$ such that
\begin{itemize}
\item[(1)] $g(\beta') = \beta' \cdot \tau(g)$ for $g \in G_{L'}$, and
\item[(2)] $s \cdot \log(\beta') \in p \cdot o_{\mathbf{C}_p}$.
\end{itemize}
Let $\alpha' = \exp(s \cdot \log(\beta'))$. We then have $g(\alpha') = \alpha' \cdot \delta(\tau(g))$ for all $g \in G_{L'}$. The map $g \mapsto \frac{g(\alpha')}{\alpha' \cdot \delta(\tau(g))}$ is therefore a 1-cocycle on $G_L$ which is trivial on $G_{L'}$. By inflation-restriction its class then contains a 1-cocycle $\operatorname{Gal}(L'/L) \to (\mathbf{C}_p^\times)^{G_{L'}} = L'^\times$. By Hilbert 90 this class is actually trivial, which implies the assertion.
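For completeness, the cocycle property can be checked directly: abbreviating $c(g) := \frac{g(\alpha')}{\alpha' \cdot \delta(\tau(g))}$ and using that $\delta(\tau(h)) \in L^\times$ is fixed by every $g \in G_L$, one computes
\begin{equation*}
c(gh) = \frac{g(h(\alpha'))}{\alpha' \cdot \delta(\tau(g))\,\delta(\tau(h))} = \frac{g\big(c(h)\,\alpha'\,\delta(\tau(h))\big)}{\alpha' \cdot \delta(\tau(g))\,\delta(\tau(h))} = g(c(h)) \cdot \frac{g(\alpha')}{\alpha' \cdot \delta(\tau(g))} = g(c(h)) \cdot c(g) \ .
\end{equation*}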
\end{proof}
\begin{proposition}\label{r1b2x}
If $\delta : L^\times \to L^\times$ is a locally $L$-analytic character, then the equivalence in Thm.\ \ref{equiv} satisfies $\mathscr{R}_L(\mathfrak{B})(\delta)_{\mathfrak{X}} \cong \mathscr{R}_L(\mathfrak{X})(\delta)$.
\end{proposition}
\begin{proof}
Let $\alpha \in \mathbf{C}_p^\times$ be the element afforded by Lemma \ref{perdeltau}, and set $M := \mathscr{R}_L(\mathfrak{B})(\delta)$. Then $\alpha \otimes e_\delta \in (\mathscr{R}_{\mathbf{C}_p}(\mathfrak{B}) \otimes_{\mathscr{R}_L(\mathfrak{B})} M)^{G_L,*}$. We now compute
\begin{equation*}
\mathscr{R}_L(\mathfrak{B})(\delta)_{\mathfrak{X}} = (\mathscr{R}_{\mathbf{C}_p}(\mathfrak{B}) \cdot e_\delta)^{G_L,*} = (\mathscr{R}_{\mathbf{C}_p}(\mathfrak{B}) \cdot \alpha e_\delta)^{G_L,*} = \mathscr{R}_{\mathbf{C}_p}(\mathfrak{B})^{G_L,*} \cdot \alpha e_\delta = \mathscr{R}_L(\mathfrak{X}) \cdot \alpha e_\delta \ .
\end{equation*}
The $o_L \setminus\{0\}$-action on this is given by $a(f \otimes \alpha e_\delta) = a(f) \otimes \alpha \delta(a) e_\delta$ for $0 \neq a \in o_L$. Hence $M_\mathfrak{X} \cong \mathscr{R}_L(\mathfrak{X})(\delta)$.
\end{proof}
\begin{proposition}\label{rank1x}
If $M$ is an $L$-analytic $(\varphi_L,\Gamma_L)$-module of rank $1$ over $\mathscr{R}_L(\mathfrak{X})$, then there exists a locally $L$-analytic character $\delta : L^\times \to L^\times$ such that $M \cong \mathscr{R}_L(\mathfrak{X})(\delta)$.
\end{proposition}
\begin{proof}
If $M_{\mathfrak{B}}$ is the $(\varphi_L,\Gamma_L)$-module of rank $1$ over $\mathscr{R}_L(\mathfrak{B})$ that comes from applying Thm.\ \ref{equiv} to $M$, then $M_{\mathfrak{B}}$ is an $L$-analytic $(\varphi_L,\Gamma_L)$-module of rank $1$ over $\mathscr{R}_L(\mathfrak{B})$. By Prop.\ \ref{rank1b}, there exists a character $\delta : L^\times \to L^\times$ such that $M_{\mathfrak{B}} \cong \mathscr{R}_L(\mathfrak{B})(\delta)$; since $M_{\mathfrak{B}}$ is $L$-analytic, the character $\delta$ is locally $L$-analytic. Prop.\ \ref{r1b2x} now implies that $M \cong \mathscr{R}_L(\mathfrak{X})(\delta)$.
\end{proof}
This proposition implies Theorem \ref{freeness} for $L$-analytic $(\varphi_L,\Gamma_L)$-modules of rank $1$ over $\mathscr{R}_L(\mathfrak{X})$.
\begin{proof}[Proof of Theorem \ref{freeness}]
Since $\mathscr{R}_L(\mathfrak{X})$ is an integral domain the projective module $M$ has a well defined rank $r$. We may assume that $r \neq 0$. We have seen in Cor.\ \ref{pruefer2}.ii that $\mathscr{R}_L(\mathfrak{X})$ is a $1 \frac{1}{2}$ generator Pr\"ufer domain. Hence $M$ is isomorphic to a finite direct sum of invertible ideals in $\mathscr{R}_L(\mathfrak{X})$ (cf.\ \cite{CE} Prop.\ I.6.1). By induction with respect to the number of these ideals one easily deduces from this, by using the analog of Cor.\ \ref{free-invertible}.ii, that
\begin{equation*}
M \cong \mathscr{R}_L(\mathfrak{X})^{r-1} \oplus I
\end{equation*}
for some invertible ideal $I \subseteq \mathscr{R}_L(\mathfrak{X})$. Moreover, $I$ is isomorphic to the exterior power $\bigwedge^r M$ of $M$ (cf.\ \cite{B-A} III \S7). The latter (and therefore also $I$) inherits from $M$ the structure of an $L$-analytic $(\varphi_L,\Gamma_L)$-module over $\mathscr{R}_L(\mathfrak{X})$. This reduces the proof of our assertion to the case that the $(\varphi_L,\Gamma_L)$-module $M$ is of rank $1$, which follows from Prop.\ \ref{rank1x}.
\end{proof}
We consider now a $(\varphi_L,\Gamma_L)$-module $M$ over $\mathscr{R}_K(\mathfrak{X})$ that is free of rank $r = \mathrm{rank}(M)$. If we pick a basis $e_1, \ldots, e_r$ of $M$ then the isomorphism in Def.\ \ref{def:modR} guarantees that $\varphi_M(e_1), \ldots, \varphi_M(e_r)$ is again a basis of $M$. The matrix
\begin{equation*}
A_M = (a_{ij}) \quad\text{where $\varphi_M(e_j) = \sum_{i=1}^r a_{ij} e_i$}
\end{equation*}
therefore is invertible over $\mathscr{R}_K(\mathfrak{X})$. If we change the given basis by applying an invertible matrix $B$ then $A_M$ gets replaced by $B^{-1} A_M \varphi_L(B)$. By the above Prop.\ \ref{unitsR} we have $\det(A_M)$, $\det(B) \in \mathscr{E}_K^\dagger(\mathfrak{X})^\times$. We compute
\begin{align*}
\|\det(B^{-1} A_M \varphi_L (B))\|_1 & = \|\det(B)\|_1^{-1} \cdot \|\det(A_M)\|_1 \cdot \|\varphi_L(\det(B))\|_1 \\
& = \|\det(B)\|_1^{-1} \cdot \|\det(A_M)\|_1 \cdot \|\det(B)\|_1 \\
& = \|\det(A_M)\|_1 \ ,
\end{align*}
where the first, resp.\ second, identity uses that the norm $\|\ \|_1$ is multiplicative, resp.\ that $\varphi_L$ is $\|\ \|_1$-isometric. This leads to the following definitions.
\begin{definition}\phantomsection\label{def:deg}
Let $M$ be a free $(\varphi_L,\Gamma_L)$-module over $\mathscr{R}_K(\mathfrak{X})$.
\begin{enumerate}
\item The degree of $M$ is the rational number $\deg(M)$ such that
\begin{equation*}
p^{\deg(M)} = \|{\det(A_M)}\|_1 \ .
\end{equation*}
\item The slope of $M$ is $\mu(M) := \deg(M)/\mathrm{rk}(M)$.
\end{enumerate}
\end{definition}
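As an illustration of these definitions (not needed in the sequel), consider the rank $1$ modules introduced before Prop.\ \ref{rank1b}: for $M = \mathscr{R}_K(\mathfrak{X})(\delta)$ with basis $e_\delta$ the matrix of $\varphi_M$ is the $1 \times 1$ matrix $(\delta(\pi_L))$, so that
\begin{equation*}
p^{\deg(M)} = \|\delta(\pi_L)\|_1 \qquad\text{and}\qquad \mu(M) = \deg(M) \ .
\end{equation*}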
\begin{definition}\label{def:etaleR}
An $L$-analytic $(\varphi_L,\Gamma_L)$-module $M$ over $\mathscr{R}_L(\mathfrak{X})$ is called \'etale if $M$ has degree zero and if every sub-$(\varphi_L,\Gamma_L)$-module $N$ of $M$ has degree $\geq 0$. Let $\operatorname{Mod}^{et}_{L,an}(\mathscr{R}_L(\mathfrak{X}))$ denote the full subcategory of all \'etale ($L$-analytic) $(\varphi_L,\Gamma_L)$-modules over $\mathscr{R}_L(\mathfrak{X})$.
\end{definition}
\begin{remark}
\label{remetale}
Every sub-$(\varphi_L,\Gamma_L)$-module $N$ of an $L$-analytic $(\varphi_L,\Gamma_L)$-module $M$ is $L$-analytic and hence is free by Thm.\ \ref{freeness}, so that we can define its degree.
\end{remark}
As before this makes sense equally well for $\mathfrak{B}$ instead of $\mathfrak{X}$, so that we also have the category $\operatorname{Mod}^{et}_{L,an}(\mathscr{R}_L(\mathfrak{B}))$. By the subsequent remark, the definition of \'etaleness coincides with the usual definition in the case of $\mathfrak{B}$.
\begin{remark}\phantomsection\label{B-proj-free}
\begin{itemize}
\item[i.] Any finitely generated submodule of a finitely generated projective module over $\mathscr{R}_L(\mathfrak{B})$ is free.
\item[ii.] Let $M$ be an \'etale $(\varphi_L,\Gamma_L)$-module over $\mathscr{R}_L(\mathfrak{B})$; then any sub-$\varphi_L$-module $N$ of $M$ has degree $\geq 0$.
\end{itemize}
\end{remark}
\begin{proof}
i. By Lazard's work in \cite{Laz} it is well known that $\mathscr{R}_L(\mathfrak{B})$ is a Bezout domain (compare \cite{TdA} Satz 10.1). But it is a general fact that finitely generated projective modules over Bezout domains are free (cf.\ \cite{Lam} Thm.\ 2.29). Moreover, even over a Pr\"ufer domain, finitely generated submodules of finitely generated projective modules are projective.
ii. For the sake of clarity: A sub-$\varphi_L$-module $N$ of $M$ is a finitely generated $\varphi_L$-invariant submodule $N \subseteq M$ such that $\mathscr{R}_L(\mathfrak{B}) \otimes_{\mathscr{R}_L(\mathfrak{B}),\varphi_L} N \xrightarrow{\cong} N$. By i. any such $N$ is free of rank $\leq \mathrm{rank}(M)$.
Let now $N \subseteq M$ be an arbitrary but fixed nonzero sub-$\varphi_L$-module. We have to show that $\mu(N) \geq 0$. By \cite{KedAst} Lemma 1.4.12 it suffices to consider the unique largest sub-$\varphi_L$-module $M_1 \subseteq M$ of least slope. We claim that $M_1$ is $\Gamma_L$-invariant. Let $g \in \Gamma_L$ and let $e_1, \ldots, e_r$ be a basis of $M_1$. Then $ge_1, \ldots, ge_r$ is a basis of the sub-$\varphi_L$-module $gM_1$. By the $\Gamma_L$-invariance of the norm $\|\ \|_1$ we obtain
\begin{equation*}
\|\det(A_{gM_1})\|_1 = \|g \det(A_{M_1})\|_1 = \|\det(A_{M_1})\|_1 \quad\text{and hence} \quad \mu(gM_1) = \mu(M_1) \ .
\end{equation*}
It follows that $gM_1 \subseteq M_1$. This shows that $M_1$ is a sub-$(\varphi_L,\Gamma_L)$-module. By the \'etaleness of $M$ we then must have $\mu(M_1) \geq 0$.
\end{proof}
\begin{proposition}
\label{mxet}
If $M \in \operatorname{Mod}_{L,an}(\mathscr{R}_L(\mathfrak{B}))$, then $\deg(M_\mathfrak{X}) = \deg(M)$.
\end{proposition}
\begin{proof}
The degree of $M_\mathfrak{X}$ depends only on $\mathscr{R}_{\mathbf{C}_p}(\mathfrak{X}) \otimes_{\mathscr{R}_L(\mathfrak{X})} M_\mathfrak{X}$, and we have
\begin{equation*}
\mathscr{R}_{\mathbf{C}_p}(\mathfrak{X}) \otimes_{\mathscr{R}_L(\mathfrak{X})} M_\mathfrak{X} =
\mathscr{R}_{\mathbf{C}_p}(\mathfrak{B}) \otimes_{\mathscr{R}_L(\mathfrak{B})} M \ .
\end{equation*}
\end{proof}
\begin{theorem}\label{etalequiv}
The functors $M \longmapsto M_\mathfrak{X}$ and $N \longmapsto N_\mathfrak{B}$ induce an equivalence of categories
\begin{equation*}
\operatorname{Mod}_{L,an}^{et}(\mathscr{R}_L(\mathfrak{B})) \simeq \operatorname{Mod}_{L,an}^{et}(\mathscr{R}_L(\mathfrak{X}))
\end{equation*}
\end{theorem}
\begin{proof}
By Prop.\ \ref{mxet}, for every sub-$(\varphi_L,\Gamma_L)$-module $N$ of $M$, we have $\deg(N_\mathfrak{X}) = \deg(N)$. If $M$ is \'etale, then this shows that $\deg(M_\mathfrak{X}) = 0$ and that $\deg(N_\mathfrak{X}) \geq 0$ for every sub-$(\varphi_L,\Gamma_L)$-module $N$ of $M$. As a consequence of Thm.\ \ref{equiv}, if $N$ runs over all sub-$(\varphi_L,\Gamma_L)$-modules of $M$, then $N_\mathfrak{X}$ runs over all sub-$(\varphi_L,\Gamma_L)$-modules of $M_\mathfrak{X}$. It follows that $M_\mathfrak{X}$ is \'etale. By symmetry, the same holds with $N$ and $N_\mathfrak{B}$. Hence the equivalence in Thm.\ \ref{equiv} restricts to the asserted equivalence.
\end{proof}
\begin{corollary}
\label{repequiv}
There is an equivalence of categories:
\[ \operatorname{Mod}_{L,an}^{et}(\mathscr{R}_L(\mathfrak{X})) \simeq \{ \text{$L$-analytic representations of $G_L$} \}. \]
\end{corollary}
\begin{proof}
This follows from Thm.\ \ref{etalequiv} and Thm.\ D of \cite{PGMLAV}, according to which there
is an equivalence of categories $\operatorname{Mod}_{L,an}^{et}(\mathscr{R}_L(\mathfrak{B})) \simeq \{ L$-analytic representations of $G_L\}$.
\end{proof}
\subsection{Crystalline $(\varphi_L,\Gamma_L)$-modules over $\mathcal{O}_L(\mathfrak{X})$}
\label{sec:fil-to-R}
In \cite{KR} and \cite{KFC} a $(\varphi_L,\Gamma_L)$-module $\mathcal{M}(D)$ over $\mathcal{O}_L(\mathfrak{B})$
is attached to every filtered $\varphi_L$-module $D$ over $L$.
We carry out the corresponding construction over $\mathcal{O}_L(\mathfrak{X})$. In the next section,
we show that the two generalizations are compatible with the equivalence of categories of Theorem A, after extending scalars to $\mathscr{R}_L(\mathfrak{B})$ and $\mathscr{R}_L(\mathfrak{X})$.
We now give the analogue over $\mathfrak{X}$ of Kisin's construction over $\mathfrak{B}$. It is possible to replace everywhere in this section $\mathfrak{X}$ by $\mathfrak{B}$, and we then recover the results of \cite{KR}. If $Y = \operatorname{supp}(\Delta)$ is the support of an effective divisor $\Delta$ on $\mathfrak{X}$ of the form
\begin{equation*}
\Delta(x) :=
\begin{cases}
1 & \text{if $x \in Y$}, \\
0 & \text{otherwise},
\end{cases}
\end{equation*}
then we simply write $I_Y := I_\Delta = \{f \in \mathcal{O}_L(\mathfrak{X}) : f(x) = 0 \ \text{for any $x \in Y$}\}$ for the corresponding closed ideal, and let $I_Y^{-1}$ denote its inverse fractional ideal. We introduce the $\mathcal{O}_L(\mathfrak{X})$-algebra $\mathcal{O}_L(\mathfrak{X})[Y^{-1}] := \bigcup_{m \geq 1} I_{Y}^{-m}$ and, for any $\mathcal{O}_L(\mathfrak{X})$-module $M$, we put
\begin{equation*}
M[Y^{-1}] := \mathcal{O}_L(\mathfrak{X})[Y^{-1}] \otimes_{\mathcal{O}_L(\mathfrak{X})} M \ .
\end{equation*}
\begin{lemma}\label{Y-1-flat}
The ring homomorphism $\mathcal{O}_L(\mathfrak{X}) \longrightarrow \mathcal{O}_L(\mathfrak{X})[Y^{-1}]$ is flat.
\end{lemma}
\begin{proof}
The assertion is a special case of the following fact. Let $I \subseteq A$ be an invertible ideal in the integral domain $A$ and consider the ring homomorphism $A \longrightarrow A_I := \bigcup_{m \geq 1} I^{-m}$. We have $\sum_{i=1}^n f_i g_i = 1$ for appropriate elements $f_i \in I$ and $g_i \in I^{-1}$. The $f_i$ then generate the unit ideal in the ring $A_I$. Hence $\operatorname{Spec}(A_I) = \bigcup_i \operatorname{Spec}((A_I)_{f_i})$. But $Af_i \subseteq I$ implies $I^{-1} \subseteq Af_i^{-1}$, hence $A_I \subseteq A_{f_i}$, and therefore $(A_I)_{f_i} = A_{f_i}$ and $\operatorname{Spec}(A_I) = \bigcup_i \operatorname{Spec}(A_{f_i})$. Since the localizations $A \longrightarrow A_{f_i}$ are flat it follows that $A \longrightarrow A_I$ is flat.
\end{proof}
In this section we will construct $(\varphi_L,\Gamma_L)$-modules over $\mathscr{R}_L(\mathfrak{X})$ from the following kind of data.
\begin{definition}\label{def:fil-vs}
A filtered $\varphi_L$-module $D$ is a finite dimensional $L$-vector space equipped with
\begin{itemize}
\item[--] a linear automorphism $\varphi_D : D \longrightarrow D$
\item[--] and a descending, separated, and exhaustive $\mathbf{Z}$-filtration $\operatorname{Fil}^\bullet D$ by vector subspaces.
\end{itemize}
\end{definition}
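We point out that Definition \ref{def:fil-vs} requires no compatibility between $\varphi_D$ and the filtration $\operatorname{Fil}^\bullet D$. A typical one dimensional example, included only for illustration, is
\begin{equation*}
D = L\, e \ , \qquad \varphi_D(e) = c\, e \ \text{for some $c \in L^\times$}, \qquad \operatorname{Fil}^i D =
\begin{cases}
D & \text{if $i \leq h$}, \\
0 & \text{if $i > h$},
\end{cases}
\end{equation*}
with an arbitrarily fixed integer $h$.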
In the following we let $Z \subseteq \mathfrak{X}$ denote the subset of all torsion points different from $1$, i.e., $Z = (\bigcup_{n \geq 1} \mathfrak{X}[\pi_L^n]) \setminus \{1\}$. We define the function $n : Z \longrightarrow \mathbf{Z}_{\geq 0}$ by the requirement that $x \in \mathfrak{X}[\pi_L^{n(x)+1}] \setminus \mathfrak{X}[\pi_L^{n(x)}]$; note that $\pi_L^*(x) = 1$ if $n(x) = 0$ and that $n(\pi_L^*(x)) = n(x) - 1$ if $n(x) \geq 1$. We keep the notations introduced in the previous section \ref{sec:globalring} (with $K = L$). We always equip the field of fractions $\operatorname{Fr}(\mathcal{O}_x)$ of the local ring $\mathcal{O}_x$, for any point $x \in Z$, with the $\mathfrak{m}_x$-adic filtration.
Let $D$ be a filtered $\varphi_L$-module of dimension $d_D$. We introduce the $\mathcal{O}_L(\mathfrak{X})$-module
\begin{equation*}
\mathcal{M}(D) := \{ s \in \mathcal{O}_L(\mathfrak{X})[Z^{-1}] \otimes_L D : (\operatorname{id} \otimes \varphi_D^{-n(x)})(s) \in \operatorname{Fil}^0(\operatorname{Fr}(\mathcal{O}_x) \otimes_L D) \ \text{for any $x \in Z$} \} \ ,
\end{equation*}
where the $\operatorname{Fil}^0$ refers to the tensor product filtration on $\operatorname{Fr}(\mathcal{O}_x) \otimes_L D$. Suppose that $a \leq b$ are integers such that $\operatorname{Fil}^a D = D$ and $\operatorname{Fil}^{b+1} D = 0$. Then
\begin{equation}\label{f:bounds}
I_Z^{-a} \otimes_L D \subseteq \mathcal{M}(D) \subseteq I_Z^{-b} \otimes_L D \ .
\end{equation}
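As a simple illustration, which is immediate from \eqref{f:bounds} and will not be needed later on, suppose that the filtration of $D$ has a single jump in degree $h$, i.e., that $\operatorname{Fil}^h D = D$ and $\operatorname{Fil}^{h+1} D = 0$. Taking $a = b = h$ in \eqref{f:bounds} then gives
\begin{equation*}
\mathcal{M}(D) = I_Z^{-h} \otimes_L D \ ;
\end{equation*}
in particular, for $h = 0$ we simply obtain $\mathcal{M}(D) = \mathcal{O}_L(\mathfrak{X}) \otimes_L D$.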
\begin{lemma}\label{fg-proj}
$\mathcal{M}(D)$ is a finitely generated projective $\mathcal{O}_L(\mathfrak{X})$-module of rank $d_D$. If $\widetilde{\mathcal{M}}(D)$ denotes the corresponding coherent $\mathcal{O}$-module sheaf and $\widetilde{\mathcal{M}}_x(D)$ its stalk in any point $x \in \mathfrak{X}$, then $\widetilde{\mathcal{M}}_x(D) = \mathcal{O}_x \otimes_L D$ for any $x \not\in Z$.
\end{lemma}
\begin{proof}
The definition of $\mathcal{M}(D)$ shows that
\begin{align*}
\mathcal{M}(D) = \ker \big(I_Z^{-b} \otimes_L D \longrightarrow \prod\limits_r I_Z^{-b} \mathcal{O}_L(\mathfrak{X}(r)) \otimes_L D/\mathcal{O}_L(\mathfrak{X}(r)) \otimes_{\mathcal{O}_L(\mathfrak{X})} \mathcal{M}(D) \big).
\end{align*}
Since the above map is continuous and since over the noetherian Banach algebra $\mathcal{O}_L (\mathfrak{X}(r))$ any submodule of a finitely generated module is closed it follows that $\mathcal{M}(D)$ is closed in the finitely generated projective $\mathcal{O}_L(\mathfrak{X})$-module $I_Z^{-b} \otimes_L D$. Hence Lemma \ref{closed-proj} implies that $\mathcal{M}(D)$ is finitely generated projective. The second part of the assertion is an immediate consequence of \eqref{f:bounds}.
\end{proof}
We may also compute the stalks of $\widetilde{\mathcal{M}}(D)$ in the points in $Z$.
\begin{lemma}\label{stalks}
For any $x \in Z$ we have
\begin{align*}
& \widetilde{\mathcal{M}}_x(D) = \mathcal{O}_x \otimes_{\mathcal{O}_L(\mathfrak{X})} \mathcal{M}(D)
= \{ s \in \operatorname{Fr}(\mathcal{O}_x) \otimes_L D : (\operatorname{id} \otimes \varphi_D^{-n(x)}) (s) \in \operatorname{Fil}^0 ( \operatorname{Fr} (\mathcal{O}_x) \otimes_L D ) \} \\
& \qquad \xrightarrow[\operatorname{id} \otimes \varphi_D^{-n(x)}]{\cong} \operatorname{Fil}^0 ( \operatorname{Fr}(\mathcal{O}_x) \otimes_L D )
\end{align*}
\end{lemma}
\begin{proof}
We temporarily define
\begin{equation*}
\mathcal{M}_x(D): = \{ s \in \mathcal{O}_L(\mathfrak{X})[Z^{-1}] \otimes_L D : ( \operatorname{id} \otimes \varphi _D^{-n(x)} ) (s) \in \operatorname{Fil}^0 \left( \operatorname{Fr}(\mathcal{O}_x) \otimes_L D \right) \}.
\end{equation*}
We pick a function $f \in \mathcal{O}_L(\mathfrak{X})$ whose zero set contains $Z \setminus \{x\}$ but not $x$ (Lemma \ref{prescribe-divisor}). Then
\begin{equation*}
\mathcal{M}(D) \subseteq \mathcal{M}_x (D) \subseteq f^{-(b-a)} \mathcal{M}(D) \ .
\end{equation*}
This shows that $\mathcal{O}_x \otimes_{\mathcal{O}_L(\mathfrak{X})} \mathcal{M}(D) = \mathcal{O}_x \otimes_{\mathcal{O}_L(\mathfrak{X})} \mathcal{M}_x(D)$. Due to \eqref{f:bounds} we have the commutative diagram
\begin{equation*}
\xymatrix{
\operatorname{Fil}^{-b}(\operatorname{Fr}(\mathcal{O}_x)) \otimes_L D \ar[r]^{=} & \operatorname{Fil}^{-b}(\operatorname{Fr}(\mathcal{O}_x)) \otimes_L D \\
\mathcal{O}_x \otimes_{\mathcal{O}_L(\mathfrak{X})} \mathcal{M}_x(D) \ar[u]_{\subseteq} \ar[r] & \{ s \in \operatorname{Fr}(\mathcal{O}_x) \otimes_L D : (\operatorname{id} \otimes \varphi_D^{-n(x)}) (s) \in \operatorname{Fil}^0 ( \operatorname{Fr} (\mathcal{O}_x) \otimes_L D ) \} \ar[u]^{\subseteq} \\
\operatorname{Fil}^{-a}(\operatorname{Fr}(\mathcal{O}_x)) \otimes_L D \ar[u]_{\subseteq} \ar[r]^{=} & \operatorname{Fil}^{-a}(\operatorname{Fr}(\mathcal{O}_x)) \otimes_L D \ar[u]_{\subseteq} }
\end{equation*}
In particular, the middle horizontal arrow is injective. But it also is surjective as follows easily from the surjectivity of the map $I_Z^{-b} \longrightarrow \operatorname{Fil}^{-b}(\operatorname{Fr}(\mathcal{O}_x)) / \operatorname{Fil}^{-a}(\operatorname{Fr}(\mathcal{O}_x))$ (compare the proof of Lemma \ref{germ}).
\end{proof}
The injective ring endomorphism $\varphi_L$ of $\mathcal{O}_L(\mathfrak{X})$ extends, of course, to its field of fractions. Using Lemma \ref{unramified}.iii one checks that $\varphi_L(I_Z^{-1}) \subseteq I_Z^{-1}$. Hence $\mathcal{O}_L(\mathfrak{X})[Z^{-1}] \otimes_L D$ carries the injective $\varphi_L$-linear endomorphism $\varphi_L \otimes \varphi_D$. We define $Z_0 := \{x \in Z : n(x) = 0\} = \mathfrak{X}[\pi_L] \setminus \{1\}$.
\begin{lemma}\label{varphi}
$\varphi_L \otimes \varphi_D$ induces an injective $\varphi_L$-linear homomorphism
\begin{equation*}
\varphi_{\mathcal{M}(D)} : \mathcal{M} (D) \longrightarrow I_{Z_0}^a \mathcal{M}(D) \ .
\end{equation*}
\end{lemma}
\begin{proof}
Let $s \in \mathcal{M}(D)$. We show that $I_{Z_0}^{-a} (\varphi_L \otimes \varphi_D) (s)$ is contained in $\mathcal{M}(D)$. Since $I_{Z_0}^{-1} \subseteq I_Z^{-1}$ we certainly have $I_{Z_0}^{-a} (\varphi_L \otimes \varphi_D) (s) \subseteq \mathcal{O}_L(\mathfrak{X})[Z^{-1}] \otimes_L D$. Hence we have to check the local condition for any $x \in Z$. First we assume that $n(x) \geq 1$ or, equivalently, that $x \not\in Z_0$. Then
\begin{align*}
(\operatorname{id} \otimes \varphi_D^{-n(x)})( I_{Z_0}^{-a} (\varphi_L \otimes \varphi_D) (s))
& = I_{Z_0}^{-a} (\varphi_L \otimes \operatorname{id}) (\operatorname{id} \otimes \varphi_D^{-n(x)+1} ) (s) \\
& = I_{Z_0}^{-a} (\varphi_L \otimes \operatorname{id}) (\operatorname{id} \otimes \varphi_D^{-n(\pi_L^* (x))} ) (s) \\
& \subseteq I_{Z_0}^{-a} (\varphi_L \otimes \operatorname{id}) (\operatorname{Fil}^0 ( \operatorname{Fr}(\mathcal{O}_{\pi_L^*(x)}) \otimes_L D)) \\
& \subseteq I_{Z_0}^{-a} \operatorname{Fil}^0 (\operatorname{Fr}(\mathcal{O}_x) \otimes_L D) \\
& = \operatorname{Fil}^0 (\operatorname{Fr}(\mathcal{O}_x) \otimes_L D) \ .
\end{align*}
If $n(x) = 0$ then, using that $\pi_L^* (x) = 1 \not\in Z$, we have
\begin{align*}
I_{Z_0}^{-a} (\varphi_L \otimes \varphi_D) (s) & = I_{Z_0}^{-a} (\varphi_L \otimes \operatorname{id}) (\operatorname{id} \otimes \varphi_D) (s) \subseteq I_{Z_0}^{-a} (\varphi_L \otimes \operatorname{id}) (\mathcal{O}_{\pi_L^*(x)} \otimes_L D) \\
& \subseteq I_{Z_0}^{-a} \mathcal{O}_x \otimes_L D = \mathfrak{m}_x^{-a} \otimes_L D \subseteq \operatorname{Fil}^0 ( \operatorname{Fr}(\mathcal{O}_x) \otimes_L D).
\end{align*}
\end{proof}
Let $\mathfrak{n}(1) \subseteq \mathcal{O}_L(\mathfrak{X})$ denote the (closed) maximal ideal of functions vanishing in the point $1$.
\begin{lemma}\label{n(1)}
We have $I_{Z_0} = \mathcal{O}_L(\mathfrak{X}) \varphi_L(\mathfrak{n}(1)) \mathfrak{n}(1)^{-1}$.
\end{lemma}
\begin{proof}
Since $\varphi_L(\mathfrak{n}(1)) \subseteq \mathfrak{n}(1)$ we have $\mathcal{O}_L(\mathfrak{X}) \varphi_L(\mathfrak{n}(1)) \mathfrak{n}(1)^{-1} \subseteq \mathfrak{n}(1) \mathfrak{n}(1)^{-1} = \mathcal{O}_L(\mathfrak{X})$. Hence the right hand side of the asserted identity is a finitely
generated and therefore (Prop.\ \ref{closed-fingen}) closed ideal in
$\mathcal{O}_L(\mathfrak{X})$ as is the left hand side. By Prop. \ref{divisor-theory} this reduces us
to checking that both ideals have the same divisor. But this follows immediately from
Lemma \ref{unramified}.iii.
\end{proof}
In view of Lemma \ref{n(1)}, Lemma \ref{varphi} can be restated as saying that $\varphi_L \otimes \varphi_D$ induces a $\varphi_L$-linear endomorphism
\begin{equation*}
\varphi_{\mathcal{M}(D)} : \mathfrak{n}(1)^{-a} \mathcal{M} (D) \longrightarrow \mathfrak{n}(1)^{-a} \mathcal{M}(D) \ ,
\end{equation*}
and we therefore may define the linear map
\begin{align*}
\overline{\varphi}_{\mathcal{M}(D)} : \mathcal{O}_L(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X}),\varphi_L} \mathfrak{n}(1)^{-a} \mathcal{M}(D) & \longrightarrow \mathfrak{n}(1)^{-a} \mathcal{M}(D) \\
f \otimes s & \longmapsto f \varphi_{\mathcal{M}(D)} (s) \ .
\end{align*}
Note that $\mathfrak{n}(1)^{-a} \mathcal{M}(D)$ is a finitely generated projective $\mathcal{O}_L(\mathfrak{X})$-module by Lemma \ref{fg-proj} and Cor.\ \ref{pruefer}.
\begin{remark}\label{phi-pullback}
Let $\phi := \pi_L^* : \mathfrak{X} \longrightarrow \mathfrak{X}$; for any coherent $\mathcal{O}$-module $\mathfrak{M}$ such that $\mathfrak{M}(\mathfrak{X})$ is a finitely generated projective $\mathcal{O}_L(\mathfrak{X})$-module the coherent $\mathcal{O}$-module $\phi^*\mathfrak{M}$ has global sections $\mathcal{O}_L(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X}), \varphi_L} \mathfrak{M}(\mathfrak{X})$.
\end{remark}
\begin{proof}
Let $\phi_r$ denote the restriction of $\phi$ to $\mathfrak{X}(r)$. We compute
\begin{align*}
(\phi^*\mathfrak{M})(\mathfrak{X}) & = \varprojlim (\phi^*\mathfrak{M})(\mathfrak{X}(r)) = \varprojlim (\phi_r^*(\mathfrak{M}_{|\mathfrak{X}(r)}))(\mathfrak{X}(r)) \\
& = \varprojlim \mathcal{O}_L(\mathfrak{X}(r)) \otimes_{\mathcal{O}_L(\mathfrak{X}(r)), \varphi_L} \mathfrak{M}(\mathfrak{X}(r)) \\
& = \varprojlim \mathcal{O}_L(\mathfrak{X}(r)) \otimes_{\mathcal{O}_L(\mathfrak{X}), \varphi_L} \mathfrak{M}(\mathfrak{X}) \\
& = \big( \varprojlim \mathcal{O}_L(\mathfrak{X}(r)) \big) \otimes_{\mathcal{O}_L(\mathfrak{X}), \varphi_L} \mathfrak{M}(\mathfrak{X}) \\
& = \mathcal{O}_L(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X}), \varphi_L} \mathfrak{M}(\mathfrak{X}) \ .
\end{align*}
Here the fourth identity comes from Remark \ref{coherent}.i, and the fifth identity uses the fact that the tensor product by a finitely generated projective module commutes with projective limits.
\end{proof}
Let $\mathcal{N}$ denote the coherent $\mathcal{O}$-module with global sections $\mathcal{N}(\mathfrak{X}) = \mathfrak{n}(1)^{-a} \mathcal{M} (D)$. Using Remarks \ref{coherent} and \ref{phi-pullback} we see that $\overline{\varphi}_{\mathcal{M}(D)}$ is the map induced on global sections by a homomorphism $\Phi : \phi^* \mathcal{N} \longrightarrow \mathcal{N}$ of coherent $\mathcal{O}$-modules (where $\phi := \pi_L^*$).
\begin{proposition}\phantomsection\label{varphi-iso}
\begin{itemize}
\item[i.] On stalks in a point $x \in \mathfrak{X}$ the map $\Phi$ is an isomorphism if $x \not\in Z_0$ and identifies with
\begin{equation*}
\mathfrak{m}_x^{-a} \otimes_L D \xrightarrow{\; \subseteq \;} \sum_j \mathfrak{m}_x^{-j} \otimes_L \operatorname{Fil}^j D \ ,
\end{equation*}
whose cokernel is $\mathcal{O}_x$-isomorphic to $\sum_{j = 0}^{b-a} \mathcal{O}_x / \mathfrak{m}_x^j \otimes_L \operatorname{gr}^{a+j} D$, if $x \in Z_0$.
\item[ii.] The map $\overline{\varphi}_{\mathcal{M}(D)}$ is injective and its cokernel is annihilated by $I_{Z_0}^{b-a}$.
\end{itemize}
\end{proposition}
\begin{proof}
i. The map under consideration between the stalks in a point $x$ is
\begin{equation*}
\mathcal{O}_x \otimes_{\mathcal{O}_{\pi_L^*(x),\varphi_L}} \widetilde{\mathcal{M}}_{\pi_L^*(x)}(D) \xrightarrow[\mathcal{O}_x \otimes (\varphi_{\mathcal{M}(D)})_x]{}
\widetilde{\mathcal{M}}_x(D)
\end{equation*}
if $x \not\in Z_0 \cup \{1\}$, resp.\
\begin{equation*}
\mathcal{O}_x \otimes_{\mathcal{O}_{1,\varphi_L}} \mathfrak{m}_1^{-a} \widetilde{\mathcal{M}}_{1}(D) \xrightarrow[\mathcal{O}_x \otimes (\varphi_{\mathcal{M}(D)})_x]{}
\widetilde{\mathcal{M}}_x(D)
\end{equation*}
if $x \in Z_0$, resp.\
\begin{equation*}
\mathcal{O}_1 \otimes_{\mathcal{O}_{1,\varphi_L}} \mathfrak{m}_1^{-a} \widetilde{\mathcal{M}}_{1}(D) \xrightarrow[\mathcal{O}_1 \otimes (\varphi_{\mathcal{M}(D)})_1]{}
\mathfrak{m}_1^{-a} \widetilde{\mathcal{M}}_1(D)
\end{equation*}
if $x = 1$. If $x \not\in Z$ then also $\pi_L^*(x) \not\in Z$, and this map identifies with the isomorphism
\begin{equation*}
\mathcal{O}_x \otimes_L D \xrightarrow[\operatorname{id} \otimes \varphi_D]{\cong} \mathcal{O}_x \otimes_L D
\end{equation*}
if $x \neq 1$, resp.\
\begin{equation*}
\mathcal{O}_1 \otimes_{\mathcal{O}_{1,\varphi_L}} \mathfrak{m}_1^{-a} \otimes_L D = \mathcal{O}_1 \varphi_L(\mathfrak{m}_1)^{-a} \otimes_L D \xrightarrow[\operatorname{id} \otimes \varphi_D]{\cong}
\mathfrak{m}_1^{-a} \otimes_L D
\end{equation*}
if $x= 1$, where the identity comes from the flatness of $\varphi_L$ (Lemma \ref{unramified}.i).
Next suppose that $x \in Z \setminus Z_0$ so that $\pi_L^*(x) \in Z$ as well. Then, by Lemma \ref{stalks}, we have the commutative diagram
\begin{equation*}
\xymatrix{
\mathcal{O}_x \otimes_{\mathcal{O}_{\pi_L^*(x),\varphi_L}} \widetilde{\mathcal{M}}_{\pi_L^*(x)}(D)
\ar[d]^{\cong}_{\operatorname{id} \otimes \varphi_D^{-n(x)+1}} \ar[r]^-{\mathcal{O}_x \otimes (\varphi_{\mathcal{M}(D)})_x} & \widetilde{\mathcal{M}}_x(D) \ar[d]_{\cong}^{\ \operatorname{id} \otimes \varphi_D^{-n(x)}} \\
\mathcal{O}_x \otimes_{\mathcal{O}_{\pi_L^*(x),\varphi_L}} \operatorname{Fil}^0 (\operatorname{Fr}(\mathcal{O}_{\pi_L^*(x)}) \otimes_L D) \ar[d]^= & \operatorname{Fil}^0 (\operatorname{Fr}(\mathcal{O}_x) \otimes_L D) \ar[dd]_= \\
\sum\limits_{j} \mathcal{O}_x \otimes_{\mathcal{O}_{\pi_L^*(x),\varphi_L}} \mathfrak{m}^{-j}_{\pi_L^*(x)} \otimes_L \operatorname{Fil}^j D \ar[d]_{\operatorname{id} \otimes \varphi_L \otimes \operatorname{id}}^{\cong} & \\
\sum\limits_{j}(\mathcal{O}_x \varphi_L(\mathfrak{m}_{\pi_L^* (x)} ) )^{-j} \otimes_L \operatorname{Fil}^j D \ar[r]^-{=} & \sum\limits_{j} \mathfrak{m}^{-j}_x \otimes_L \operatorname{Fil}^j D }
\end{equation*}
where the lower left vertical map is an isomorphism again by the flatness of $\varphi_L$. For $x \in Z_0$ the corresponding diagram is
\begin{equation*}
\xymatrix{
\mathcal{O}_x \otimes_{\mathcal{O}_{1,\varphi_L}} \mathfrak{m}_1^{-a} \widetilde{\mathcal{M}}_1(D)
\ar[d]^{\cong} \ar[r]^-{\mathcal{O}_x \otimes (\varphi_{\mathcal{M}(D)})_x} & \widetilde{\mathcal{M}}_x(D) \ar[d]_{\cong}^{\ \operatorname{id} \otimes \varphi_D^{-1}} \\
\mathcal{O}_x \otimes_{\mathcal{O}_{1,\varphi_L}} \mathfrak{m}_1^{-a} \otimes_L D \ar[d]^= & \operatorname{Fil}^0 (\operatorname{Fr}(\mathcal{O}_x) \otimes_L D) \ar[d]_= \\
\mathfrak{m}_x^{-a} \otimes_L D \ar[r]^-{\subseteq} & \sum\limits_{j} \mathfrak{m}^{-j}_x \otimes_L \operatorname{Fil}^j D. }
\end{equation*}
(Note that we have used several times implicitly that $\varphi_L$ is unramified by Lemma \ref{unramified}.ii.)
ii. This follows from i. by passing to global sections.
\end{proof}
\begin{corollary}\phantomsection\label{iso-Z0invert}
\begin{itemize}
\item[i.] We have $I_{Z_0}^b \mathcal{M}(D) \subseteq \mathcal{O}_L(\mathfrak{X}) \varphi_{\mathcal{M}(D)}(\mathcal{M}(D)) \subseteq I_{Z_0}^a \mathcal{M}(D)$.
\item[ii.] The $\mathcal{O}_L(\mathfrak{X})$-linear map $\mathcal{O}_L(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X}),\varphi_L} \mathcal{M}(D) \longrightarrow I_{Z_0}^a \mathcal{M}(D)$
induced by $\varphi_{\mathcal{M}(D)}$ is injective with cokernel annihilated by $I_{Z_0}^{b-a}$.
\item[iii.] The $\mathcal{O}_L(\mathfrak{X})[Z_0^{-1}]$-linear map $\mathcal{O}_L(\mathfrak{X})[Z_0^{-1}] \otimes_{\mathcal{O}_L(\mathfrak{X}),\varphi_L} \mathcal{M}(D) \xrightarrow{\;\cong\;} \mathcal{M}(D)[Z_0^{-1}]$
induced by $\varphi_{\mathcal{M}(D)}$ is an isomorphism.
\end{itemize}
\end{corollary}
\begin{proof}
i. The right hand inclusion was shown in Lemma \ref{varphi}. For the left hand inclusion
we observe that Prop.\ \ref{varphi-iso}.ii implies that
\begin{equation*}
I_{Z_0}^{b-a} \mathfrak{n}(1)^{-a} \mathcal{M}(D) \subseteq \mathcal{O}_L(\mathfrak{X})
\varphi_L(\mathfrak{n}(1)^{-a}) \varphi_{\mathcal{M}(D)}(\mathcal{M}(D)) \ .
\end{equation*}
Using Lemma \ref{n(1)} we deduce that
\begin{equation*}
I_{Z_0}^b \mathcal{M}(D) \subseteq (\mathcal{O}_L(\mathfrak{X})
\varphi_L(\mathfrak{n}(1)))^a \mathcal{O}_L(\mathfrak{X})
\varphi_L(\mathfrak{n}(1)^{-a}) \varphi_{\mathcal{M}(D)}(\mathcal{M}(D)) \ .
\end{equation*}
But one easily checks that $(\mathcal{O}_L(\mathfrak{X})
\varphi_L(\mathfrak{n}(1)))^a \mathcal{O}_L(\mathfrak{X})
\varphi_L(\mathfrak{n}(1)^{-a}) = \mathcal{O}_L(\mathfrak{X})$.
ii. The annihilation property of the cokernel is clear from i. To establish the injectivity we may assume without loss of generality that $a \leq 0$. We consider the commutative diagram
\begin{equation*}
\xymatrix{
\mathcal{O}_L(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X}),\varphi_L} \mathfrak{n}(1)^{-a} \mathcal{M}(D) \ar[d] \ar[r]^-{\overline{\varphi}_{\mathcal{M}(D)}} & \mathfrak{n}(1)^{-a} \mathcal{M}(D) \ar[d]^{\subseteq} \\
\mathcal{O}_L(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X}),\varphi_L} \mathcal{M}(D) \ar[r] & I_{Z_0}^a \mathcal{M}(D), }
\end{equation*}
and we pick a function $0 \neq f \in \mathfrak{n}(1)$. Then $\varphi_L(f^{-a}) \neq 0$ in $\mathcal{O}_L(\mathfrak{X})$. Suppose that $s$ is a nonzero element in the kernel of the lower horizontal map. Then $\varphi_L(f^{-a})s$ is a nonzero element in this kernel as well since $\mathcal{M}(D)$ is a projective $\mathcal{O}_L(\mathfrak{X})$-module. But $\varphi_L(f^{-a})s$ lifts to an element in $\mathcal{O}_L(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X}),\varphi_L} \mathfrak{n}(1)^{-a} \mathcal{M}(D)$ which necessarily lies in the kernel of $\overline{\varphi}_{\mathcal{M}(D)}$. This contradicts the injectivity assertion in Prop.\ \ref{varphi-iso}.ii.
iii. This immediately follows from ii. since $\mathcal{O}_L(\mathfrak{X})[Z_0^{-1}]$ is flat over $\mathcal{O}_L(\mathfrak{X})$ by Lemma \ref{Y-1-flat}.
\end{proof}
For any $c \in o_L^\times$ the map $c^*$ is an automorphism of $\mathfrak{X}$ which respects the subset $Z$ as well as the function $n : Z \longrightarrow \mathbf{Z}_{\geq 0}$. It is straightforward to conclude from this that $c_* \otimes \operatorname{id}$ induces a semilinear automorphism $c_{\mathcal{M}(D)}$ of $\mathcal{M}(D)$ such that the diagram
\begin{equation}\label{f:commute}
\xymatrix{
\mathcal{M}(D) \ar[d]_{c_{\mathcal{M}(D)}} \ar[rr]^{\varphi_{\mathcal{M}(D)}} && I_{Z_0}^a \mathcal{M}(D) \ar[d]^{c_* \otimes c_{\mathcal{M}(D)}} \\
\mathcal{M}(D) \ar[rr]^{\varphi_{\mathcal{M}(D)}} && I_{Z_0}^a \mathcal{M}(D) }
\end{equation}
is commutative. Clearly these constructions are functorial in $D$.
Recall that $\operatorname{Vec}(\operatorname{Fil},\varphi_L)$ denotes the exact category of filtered $\varphi_L$-modules as introduced in Definition \ref{def:fil-vs}. For any $D$ in $\operatorname{Vec}(\operatorname{Fil},\varphi_L)$ we have constructed a finitely generated projective $\mathcal{O}_L(\mathfrak{X})$-module $\mathcal{M}(D)$ together with an injective $\varphi_L$-linear map
\begin{equation*}
\varphi_{\mathcal{M}(D)} : \mathcal{M}(D) \longrightarrow \mathcal{M}(D)[Z_0^{-1}]
\end{equation*}
(cf.\ Lemmas \ref{fg-proj} and \ref{varphi}) such that the induced $\mathcal{O}_L(\mathfrak{X})[Z_0^{-1}]$-linear map
\begin{align*}
\widetilde{\varphi}_{\mathcal{M}(D)} : \mathcal{O}_L(\mathfrak{X})[Z_0^{-1}] \otimes_{\mathcal{O}_L(\mathfrak{X}),\varphi_L} \mathcal{M}(D) & \xrightarrow{\; \cong \;} \mathcal{M}(D)[Z_0^{-1}] \\
f \otimes s & \longmapsto f \varphi_{\mathcal{M}(D)} (s)
\end{align*}
is an isomorphism (Cor.\ \ref{iso-Z0invert}.iii). We also have the group $\Gamma_L$ acting on $\mathcal{M}(D)$ by semilinear automorphisms $\gamma_{\mathcal{M}(D)}$ such that the diagrams
\begin{equation*}
\xymatrix{
\mathcal{M}(D) \ar[d]_{\gamma_{\mathcal{M}(D)}} \ar[rr]^-{\varphi_{\mathcal{M}(D)}} && \mathcal{M}(D)[Z_0^{-1}] \ar[d]^{\gamma_* \otimes \gamma_{\mathcal{M}(D)}} \\
\mathcal{M}(D) \ar[rr]^-{\varphi_{\mathcal{M}(D)}} && \mathcal{M}(D)[Z_0^{-1}] }
\end{equation*}
are commutative (cf.\ \eqref{f:commute}). Being a finitely generated module, $\mathcal{M}(D)$ carries a natural Fr\'echet topology.
\begin{lemma}\label{action-MD}
The $\Gamma_L$-action on $\mathcal{M}(D)$ is continuous and differentiable, and the derived $\operatorname{Lie}(\Gamma_L)$-action is $L$-bilinear.
\end{lemma}
\begin{proof}
By the proof of Lemma \ref{fg-proj} the module $\mathcal{M}(D)$ is a closed submodule of the finitely generated projective module $I_Z^{-b} \otimes_L D$ for some $b \geq 0$. This reduces us to showing that the $\Gamma_L$-action on $I_Z^{-b}$ has all the asserted properties. Multiplication by the function $(\log_{\mathfrak{X}})^b$ induces a continuous bijection $I_Z^{-b} \longrightarrow \mathfrak{n}(1)^b$. By the open mapping theorem it has to be a topological isomorphism. By Lemma \ref{zeros}.ii the $\Gamma_L$-action on $I_Z^{-b}$ corresponds to the natural $\Gamma_L$-action on the closed ideal $\mathfrak{n}(1)^b$ in $\mathcal{O}_L(\mathfrak{X})$ twisted by the locally $L$-analytic character $\Gamma_L = o_L^\times \longrightarrow L^\times$ sending $\gamma$ to $\gamma^{-b}$. By the discussion after Lemma \ref{smalldisk-locan} the $\Gamma_L$-action on $\mathcal{O}_L(\mathfrak{X})$ and a fortiori the natural action on the closed ideal $\mathfrak{n}(1)^b$ and hence the twisted action all have the wanted properties.
\end{proof}
There is one additional important property. By Lemma \ref{stalks} we have $\widetilde{\mathcal{M}}_1(D) = \mathcal{O}_1 \otimes_L D$. We see that the induced $\Gamma_L$-action (note that the monoid action on $\mathfrak{X}$ fixes the point $1$) on $\mathcal{M}(D)/\mathfrak{n}(1)\mathcal{M}(D) = \widetilde{\mathcal{M}}_1(D)/\mathfrak{m}_1 \widetilde{\mathcal{M}}_1(D) = D$ is trivial.
\begin{definition}
\label{defmodw}
We therefore introduce the category $\operatorname{Mod}^{\varphi_L,\Gamma_L,\mathrm{an}}_{/\mathfrak{X}}$ of finitely generated projective $\mathcal{O}_L(\mathfrak{X})$-modules
$M$ equipped with an injective $\varphi_L$-linear map
\begin{equation*}
\varphi_M: M \longrightarrow M[Z_0^{-1}]
\end{equation*}
as well as a continuous (w.r.t.\ the natural Fr\'echet topology on $M$) action of $\Gamma_L$ by semilinear automorphisms $\gamma_M$ such that the following properties are satisfied:
\begin{itemize}
\item[a)] The induced $\mathcal{O}_L(\mathfrak{X})[Z_0^{-1}]$-linear map
\begin{align*}
\widetilde{\varphi}_M : \mathcal{O}_L(\mathfrak{X})[Z_0^{-1}] \otimes_{\mathcal{O}_L(\mathfrak{X}),\varphi_L} M & \xrightarrow{\; \cong \;} M[Z_0^{-1}] \\
f \otimes s & \longmapsto f \varphi_M (s)
\end{align*}
is an isomorphism.
\item[b)] For any $\gamma \in \Gamma_L$ the diagram
\begin{equation*}
\xymatrix{
M\ar[d]_{\gamma_M} \ar[rr]^-{\varphi_M} && M[Z_0^{-1}] \ar[d]^{\gamma_* \otimes \gamma_M} \\
M \ar[rr]^-{\varphi_M} && M[Z_0^{-1}] }
\end{equation*}
is commutative.
\item[c)] The induced $\Gamma_L$-action on $M/\mathfrak{n}(1)M$ is trivial.
\end{itemize}
\end{definition}
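A basic example, included only for orientation, is $M = \mathcal{O}_L(\mathfrak{X})$ itself with the injective endomorphism $\varphi_M := \varphi_L$ and the natural $\Gamma_L$-action, which is continuous by the discussion after Lemma \ref{smalldisk-locan}. Condition a) holds since the linearized map is the canonical isomorphism
\begin{equation*}
\mathcal{O}_L(\mathfrak{X})[Z_0^{-1}] \otimes_{\mathcal{O}_L(\mathfrak{X}),\varphi_L} \mathcal{O}_L(\mathfrak{X}) \xrightarrow{\;\cong\;} \mathcal{O}_L(\mathfrak{X})[Z_0^{-1}] \ , \qquad f \otimes s \longmapsto f \varphi_L(s) \ ,
\end{equation*}
condition b) holds since $\gamma_*$ and $\varphi_L$ come from the action of the commutative monoid $o_L \setminus \{0\}$ on $\mathfrak{X}$, and condition c) holds since this monoid action fixes the point $1$, so that $\Gamma_L$ acts trivially on $\mathcal{O}_L(\mathfrak{X})/\mathfrak{n}(1) = L$. By the example after \eqref{f:bounds} this object is $\mathcal{M}(D)$ for $D = L$ with $\varphi_D = \operatorname{id}_L$ and single filtration jump in degree $0$.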
\begin{remark}\label{phi-M-bounded}
There are integers $a \leq 0 \leq c$ such that the map $\widetilde{\varphi}_M$ restricts to an injective map $\mathcal{O}_L(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X}),\varphi_L} M \longrightarrow I_{Z_0}^a M$ whose cokernel is annihilated by $I_{Z_0}^c$.
\end{remark}
\begin{proof}
Since $M$ is finitely generated we find an $a \leq 0$ such that $\varphi_M(M) \subseteq I_{Z_0}^a M$. Since $M$ is projective we have $\mathcal{O}_L(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X}),\varphi_L} M \subseteq \mathcal{O}_L(\mathfrak{X})[Z_0^{-1}] \otimes_{\mathcal{O}_L(\mathfrak{X}),\varphi_L} M$. Hence the injectivity follows from condition a). Tensoring the resulting map $\mathcal{O}_L(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X}),\varphi_L} M \longrightarrow I_{Z_0}^a M$ with $\mathcal{O}_L(\mathfrak{X})[Z_0^{-1}]$ gives back the isomorphism $\widetilde{\varphi}_M$ (recall that $\mathcal{O}_L(\mathfrak{X})[Z_0^{-1}]$ is flat over $\mathcal{O}_L(\mathfrak{X})$ by Lemma \ref{Y-1-flat}). Hence the cokernel $N$ of this map satisfies $N[Z_0^{-1}] = 0$. For any function $0 \neq f \in I_{Z_0}$ we have $I_{Z_0}^{-1} \subseteq \mathcal{O}_L(\mathfrak{X}) f^{-1}$ and consequently that the localization $N_f$ vanishes. Since $N$ is finitely generated we find a $c(f) \geq 0$ such that $f^{c(f)} N = 0$. Finally, since $I_{Z_0}$ is finitely generated, we must have a $c \geq 0$ such that $I_{Z_0}^c N = 0$.
\end{proof}
\begin{lemma}\label{M-diff}
The $\Gamma_L$-action on $M$ is differentiable.
\end{lemma}
\begin{proof}
We put $M_n := \mathcal{O}_L(\mathfrak{X}_n) \otimes_{\mathcal{O}_L(\mathfrak{X})} M$. The $\Gamma_L$-action extends semilinearly to an action on each $M_n$ by automorphisms $\gamma_{M_n} := \gamma_* \otimes \gamma_M$. Since $M_n$ is a finitely generated projective $\mathcal{O}_L(\mathfrak{X}_n)$-module (and $\Gamma_L$ acts continuously on $\mathcal{O}_L(\mathfrak{X}_n)$) any individual $\gamma_{M_n}$ is a continuous automorphism of the Banach module $M_n$. On the other hand, the orbit maps for $M_n$ are the composite of the, by assumption, continuous orbit maps for $M$ and the continuous map $M \longrightarrow M_n$ and hence are continuous. It therefore follows from the nonarchimedean Banach-Steinhaus theorem that the $\Gamma_L$-action on each $M_n$ is (jointly) continuous. From the discussion after Lemma \ref{smalldisk-locan} we know that the $\Gamma_L$-action on $\mathcal{O}_L(\mathfrak{X}_n)$ satisfies the condition \eqref{f:condition-locan}. We then conclude from Prop.\ \ref{locan3} that the $\Gamma_L$-action on $M_n$ is locally $\mathbf{Q}_p$-analytic. The asserted differentiability now follows from the fact that the Fr\'echet module $M$ is the projective limit of the Banach modules $M_n$.
\end{proof}
As we have seen above we then have the well defined functor
\begin{align*}
\mathcal{M} : \operatorname{Vec}(\operatorname{Fil},\varphi_L) & \longrightarrow \operatorname{Mod}^{\varphi_L,\Gamma_L,\mathrm{an}}_{/\mathfrak{X}} \\
D & \longmapsto \mathcal{M}(D) \ .
\end{align*}
On the other hand, for any $M$ in $\operatorname{Mod}^{\varphi_L,\Gamma_L,\mathrm{an}}_{/\mathfrak{X}}$ we have the finite dimensional $L$-vector space
\begin{equation*}
\mathcal{D}(M) := M/\mathfrak{n}(1)M \ .
\end{equation*}
Since $M/\mathfrak{n}(1)M =
M[Z_0^{-1}]/\mathfrak{n}(1)M[Z_0^{-1}]$ the map $\varphi_M$ induces an $L$-linear endomorphism of $M/\mathfrak{n}(1)M$ denoted by $\varphi_{\mathcal{D}(M)}$. The condition a) implies that $\varphi_{\mathcal{D}(M)}$ is surjective and hence is an automorphism.
By Lemma \ref{M-diff} we have available the operator $\nabla_M$ on $M$ corresponding to the derived action of the element $1 \in \operatorname{Lie}(\Gamma_L)$.
In $\mathfrak{X}$ we have the Zariski and hence admissible open subvariety $\mathfrak{X} \setminus Z$, which is preserved by $\pi_L^*$ and by the action of $\Gamma_L$. Therefore $\varphi_M$ and the action of $\Gamma_L$ extend semilinearly to $M\{Z^{-1}\} := \mathcal{O}_L(\mathfrak{X} \setminus Z) \otimes_{\mathcal{O}_L(\mathfrak{X})} M$. The same argument as for \eqref{f:tilde-varphi-iso} shows that $\mathcal{O}_L(\mathfrak{X})[Z^{-1}] \subseteq \mathcal{O}_L(\mathfrak{X} \setminus Z)$. Hence the condition a) implies that the map
\begin{equation*}
\operatorname{id} \otimes \varphi_M : \mathcal{O}_L(\mathfrak{X} \setminus Z) \otimes_{\mathcal{O}_L(\mathfrak{X}),\varphi_L} M = \mathcal{O}_L(\mathfrak{X} \setminus Z) \otimes_{\mathcal{O}_L(\mathfrak{X} \setminus Z),\varphi_L} M\{Z^{-1}\} \xrightarrow{\; \cong \;} M\{Z^{-1}\}
\end{equation*}
is an isomorphism.
\begin{proposition}\label{flat}
We have $\mathcal{O}_L(\mathfrak{X} \setminus Z) \otimes_L M\{Z^{-1}\}^{\Gamma_L} = M\{Z^{-1}\}$. In particular, the projection map $M\{Z^{-1}\} \longrightarrow \mathcal{D}(M)$ restricts to an isomorphism $M\{Z^{-1}\}^{\Gamma_L} \xrightarrow{\cong} \mathcal{D}(M)$ such that the diagram
\begin{equation*}
\xymatrix{
M\{Z^{-1}\}^{\Gamma_L} \ar[d]_{\varphi_M} \ar[r]^-{\cong} & \mathcal{D}(M) \ar[d]^{\varphi_{\mathcal{D}(M)}} \\
M\{Z^{-1}\}^{\Gamma_L} \ar[r]^-{\cong} & \mathcal{D}(M) }
\end{equation*}
is commutative. In fact, we have
\begin{equation*}
M[Z^{-1}]^{\Gamma_L} = M\{Z^{-1}\}^{\Gamma_L} \qquad\text{and}\qquad \mathcal{O}_L(\mathfrak{X})[Z^{-1}] \otimes_L M[Z^{-1}]^{\Gamma_L} = M[Z^{-1}] \ .
\end{equation*}
\end{proposition}
\begin{proof}
Let $r \in (0,1) \cap p^\mathbf{Q}$. We put
\begin{align*}
& M(r) := \mathcal{O}_L(\mathfrak{X}(r)) \otimes_{\mathcal{O}_L(\mathfrak{X})} M \; \text{and} \\
& M(r)\{Z^{-1}\} := \mathcal{O}_L(\mathfrak{X}(r) \setminus Z) \otimes_{\mathcal{O}_L(\mathfrak{X})} M = \mathcal{O}_L(\mathfrak{X}(r) \setminus Z) \otimes_{\mathcal{O}_L(\mathfrak{X}(r))} M(r) \ .
\end{align*}
We semilinearly extend the $\Gamma_L$-action to $M(r)$ and $M(r)\{Z^{-1}\}$. At first we suppose that $r < p^{-\frac{1}{p-1}}$. Using Lemma \ref{smalldisk-locan} the same reasoning as in the proof of Lemma \ref{M-diff} shows that this $\Gamma_L$-action on $M(r)$ is locally $\mathbf{Q}_p$-analytic. In particular, we have the operator $\nabla_{M(r)}$. By possibly passing to a smaller $r$ we may assume that $M(r)$ is free over $\mathcal{O}_L(\mathfrak{X}(r))$. By Lemma \ref{small-disk} we may view $M(r)$ as a finitely generated free differential module over $\mathcal{O}_L(\mathfrak{B}(r))$ for the derivation $y \frac{d}{dy}$. The $p$-adic Fuchs theorem for disks (\cite{Ked} Thm.\ 13.2.2) then implies, again by possibly passing to a smaller $r$, that the $L$-vector space $M(r)^{\nabla_{M(r)}=0}$ contains a basis of the $\mathcal{O}_L(\mathfrak{X}(r))$-module $M(r)$ (observe that, as a consequence of the condition c), the constant matrix of this differential module is the zero matrix). By \cite{Ked} Lemma 5.1.5 (applied to the field of fractions of $\mathcal{O}_L(\mathfrak{X}(r))$) we, in fact, obtain that
\begin{equation*}
\mathcal{O}_L(\mathfrak{X}(r)) \otimes_L M(r)^{\nabla_{M(r)}=0} = M(r) \ .
\end{equation*}
It follows that the projection map restricts to an isomorphism
\begin{equation*}
M(r)^{\nabla_{M(r)}=0} \xrightarrow{\;\cong\;} M(r)/\mathfrak{n}(1) M(r) = M/\mathfrak{n}(1) M \ .
\end{equation*}
Since, on the one hand, $\Gamma_L$ acts trivially on $M/\mathfrak{n}(1) M$ by condition c) and, on the other hand, the $\Gamma_L$-action commutes with $\nabla_{M(r)}$ it further follows that $M(r)^{\Gamma_L} = M(r)^{\nabla_{M(r)}=0}$ and hence that
\begin{equation*}
\mathcal{O}_L(\mathfrak{X}(r)) \otimes_L M(r)^{\Gamma_L} = M(r) \qquad\text{and}\qquad M(r)^{\Gamma_L} \xrightarrow{\;\cong\;} M/ \mathfrak{n}(1) M \ .
\end{equation*}
Observe that $\mathfrak{X}(r) \cap Z = \emptyset$ and hence $M(r) = M(r)\{Z^{-1}\}$.
We now choose a sequence $r = r_0 < r_1 < \ldots < r_m < \ldots < 1$ in $p^\mathbf{Q}$ converging to $1$ such that $p^* (\mathfrak{X}(r_{m+1})) \subseteq \mathfrak{X}(r_m)$ for any $m \geq 0$. By induction we assume that
\begin{equation*}
\mathcal{O}_L(\mathfrak{X}(r_m) \setminus Z) \otimes_L M(r_m)\{Z^{-1}\}^{\Gamma_L} = M(r_m)\{Z^{-1}\}
\end{equation*}
holds true. We temporarily denote by $\phi$ the ring endomorphism of $\mathcal{O}_L(\mathfrak{X} \setminus Z)$ induced by the morphism $p^* : \mathfrak{X} \longrightarrow \mathfrak{X}$. It extends to a ring homomorphism $\phi : \mathcal{O}_L(\mathfrak{X}(r_m) \setminus Z) \longrightarrow \mathcal{O}_L(\mathfrak{X}(r_{m+1}) \setminus Z)$. As a consequence of the condition a) we have the $\mathcal{O}_L(\mathfrak{X} \setminus Z)$-linear isomorphism
\begin{align*}
\mathcal{O}_L(\mathfrak{X} \setminus Z) \otimes_{\mathcal{O}_L(\mathfrak{X}),\phi} M & \xrightarrow{\;\cong\;} M\{Z^{-1}\} \\
f \otimes s & \longmapsto f p(s)
\end{align*}
which base changes to the $\mathcal{O}_L(\mathfrak{X}(r_{m+1}) \setminus Z)$-linear isomorphism
\begin{multline*}
\mathcal{O}_L(\mathfrak{X}(r_{m+1}) \setminus Z) \otimes_L M(r_m)\{Z^{-1}\}^{\Gamma_L} =
\mathcal{O}_L(\mathfrak{X}(r_{m+1}) \setminus Z) \otimes_{\mathcal{O}_L(\mathfrak{X}(r_m) \setminus Z),\phi} M(r_m)\{Z^{-1}\} \\ = \mathcal{O}_L(\mathfrak{X}(r_{m+1}) \setminus Z) \otimes_{\mathcal{O}_L(\mathfrak{X}),\phi} M \xrightarrow{\;\cong\;} M(r_{m+1})\{Z^{-1}\},
\end{multline*}
where the first identity is our induction hypothesis, denoted by $\alpha_m$. By restricting to $\Gamma_L$-invariants the latter map fits into the diagram
\begin{equation*}
\xymatrix{
M(r_m)\{Z^{-1}\}^{\Gamma_L} \ar[d]_{\cong} \ar[r]^-{\alpha_m} & M(r_{m+1})\{Z^{-1}\}^{\Gamma_L} \ar[d] \ar@{^{(}->}[r]^{\mathrm{res}} & M(r_m)\{Z^{-1}\}^{\Gamma_L} \ar[dl]^{\cong} \\
M/\mathfrak{n}(1) M \ar[r]^{\cong}_{\varphi_{\mathcal{D}(M)}^e} & M/\mathfrak{n}(1) M & }
\end{equation*}
where $e$ denotes the ramification index of $L/\mathbf{Q}_p$. The left square is commutative as a consequence of conditions b) and c). The right upper horizontal arrow is given by restriction from $\mathfrak{X}(r_{m+1})$ to $\mathfrak{X}(r_m)$; it is injective since $M$ is projective. The right part of the diagram is trivially commutative. The indicated perpendicular isomorphisms hold by the induction hypothesis. It follows that all arrows in the diagram are isomorphisms. We see that the original isomorphism $\alpha_m$ sends $M(r_m)\{Z^{-1}\}^{\Gamma_L}$ isomorphically onto $M(r_{m+1})\{Z^{-1}\}^{\Gamma_L}$ which implies that
\begin{equation*}
\mathcal{O}_L(\mathfrak{X}(r_{m+1}) \setminus Z) \otimes_L M(r_{m+1})\{Z^{-1}\}^{\Gamma_L} = M(r_{m+1})\{Z^{-1}\} \ .
\end{equation*}
Moreover, the restriction maps induce isomorphisms $M(r_{m+1})\{Z^{-1}\}^{\Gamma_L} \cong M(r_m)\{Z^{-1}\}^{\Gamma_L}$ as well.
Let $\mathfrak{M}$ denote the coherent $\mathcal{O}_\mathfrak{X}$-module with global sections $M$. According to Remark \ref{coherent}.iv we have $\mathfrak{M}(\mathfrak{X} \setminus Z) = M\{Z^{-1}\}$ and $\mathfrak{M}(\mathfrak{X}(r_m) \setminus Z) = M(r_m)\{Z^{-1}\}$. Since the $\mathfrak{X}(r_m) \setminus Z$ form an admissible covering of $\mathfrak{X} \setminus Z$ we have $\mathfrak{M}(\mathfrak{X} \setminus Z) = \varprojlim_m \mathfrak{M}(\mathfrak{X}(r_m) \setminus Z)$. It follows first that $M\{Z^{-1}\}^{\Gamma_L} = \varprojlim_m M(r_m)\{Z^{-1}\}^{\Gamma_L} \cong M(r_n)\{Z^{-1}\}^{\Gamma_L}$ for any $n$, hence $\mathcal{O}_L(\mathfrak{X}(r_m) \setminus Z) \otimes_L M\{Z^{-1}\}^{\Gamma_L} = M(r_m)\{Z^{-1}\}$, and finally, by passing to the projective limit in the latter identity, that $\mathcal{O}_L(\mathfrak{X} \setminus Z) \otimes_L M\{Z^{-1}\}^{\Gamma_L} = M\{Z^{-1}\}$.
For any $j \geq 0$ we set $Z_j := \{x \in Z : n(x) \leq j\} = \mathfrak{X}[\pi_L^{j+1}] \setminus \{1\}$, and $Z_{-1} := \emptyset$. Lemma \ref{unramified} implies that
\begin{equation*}
\mathcal{O}_L(\mathfrak{X}) \varphi_L(I_{Z_j}) = I_{Z_{j+1} \setminus Z_0} \qquad\text{and}\qquad \mathcal{O}_L(\mathfrak{X}) \varphi_L(I_{Z \setminus Z_j}) = I_{Z \setminus Z_{j+1}} \ .
\end{equation*}
If $e$ denotes the ramification index of $L/\mathbf{Q}_p$ then we have $p = \pi_L^e u$ for some $u \in o_L^\times$. We deduce inductively that
\begin{equation*}
\mathcal{O}_L(\mathfrak{X}) \phi(I_Z) = \mathcal{O}_L(\mathfrak{X}) \varphi_L^e(I_Z) = I_{Z \setminus Z_{e-1}} \ .
\end{equation*}
For the more precise form of the assertion we use Remark \ref{phi-M-bounded} with the integers $a \leq 0 \leq c$. Again we deduce inductively that
\begin{equation*}
p(M) = \varphi_M^e(M) \subseteq \varphi_M(I_{Z_{e-2}}^a M) = \varphi_L(I_{Z_{e-2}}^a)\varphi_M(M) \subseteq I_{Z_{e-1} \setminus Z_0}^a I_{Z_0}^a M = I_{Z_{e-1}}^a M
\end{equation*}
and
\begin{align*}
\mathcal{O}_L(\mathfrak{X})p(M) & = \mathcal{O}_L(\mathfrak{X}) \varphi_M^e(M) = \mathcal{O}_L(\mathfrak{X}) \varphi_M(\mathcal{O}_L(\mathfrak{X}) \varphi_M^{e-1}(M)) \\
& \supseteq \mathcal{O}_L(\mathfrak{X}) \varphi_M(I_{Z_{e-2}}^{a+c} M) = I_{Z_{e-1} \setminus Z_0}^{a+c} \varphi_M(M) \supseteq I_{Z_{e-1} \setminus Z_0}^{a+c} I_{Z_0}^{a+c} M \\
& = I_{Z_{e-1}}^{a+c} M \ .
\end{align*}
For the identity $M[Z^{-1}]^{\Gamma_L} = M\{Z^{-1}\}^{\Gamma_L}$ it suffices to show that $I_Z^{-a} \cdot M\{Z^{-1}\}^{\Gamma_L} \subseteq M$, which further reduces to the claim that $I_Z^{-a} \cdot M(r_m)\{Z^{-1}\}^{\Gamma_L} \subseteq M(r_m)$ for any $m \geq 0$. This is trivially true for $m = 0$. Any element in $M(r_{m+1})\{Z^{-1}\}^{\Gamma_L}$ can be written as $\alpha_m(s)$ for some $s \in M(r_m)\{Z^{-1}\}^{\Gamma_L}$. By induction we may assume that $I_Z^{-a}s \subseteq M(r_m)$. Then $\phi(I_Z^{-a})\alpha_m(s) = p(I_Z^{-a}s) \subseteq p(M(r_m)) \subseteq I_{Z_{e-1}}^a M(r_{m+1})$ and hence
\begin{equation*}
M(r_{m+1}) \supseteq I_{Z_{e-1}}^{-a} \phi(I_Z^{-a})\alpha_m(s) = I_{Z_{e-1}}^{-a} I_{Z \setminus Z_{e-1}}^{-a} \alpha_m(s) = I_Z^{-a} \alpha_m(s) \ .
\end{equation*}
Next we consider the commutative diagram
\begin{equation*}
\xymatrix{
\mathcal{O}_L(\mathfrak{X})[Z^{-1}] \otimes_L M[Z^{-1}]^{\Gamma_L} \ar[d]_{\subseteq\; \otimes\; =} \ar[r]^-{\subseteq} & M[Z^{-1}] \ar[d]^{\subseteq} \\
\mathcal{O}_L(\mathfrak{X} \setminus Z) \otimes_L M\{Z^{-1}\}^{\Gamma_L} \ar[r]^-{=} & M\{Z^{-1}\} }
\end{equation*}
where the right vertical inclusion comes from the projectivity of $M$. We have seen that $\mathcal{O}_L(\mathfrak{X}) \otimes_L M[Z^{-1}]^{\Gamma_L} \subseteq I_Z^a M$. In order to recognize the upper horizontal arrow as an identity it suffices to show that $I_Z^{a+c} M \subseteq \mathcal{O}_L(\mathfrak{X}) \otimes_L M[Z^{-1}]^{\Gamma_L}$. This reduces to the claim that $I_Z^{a+c} M(r_m) \subseteq \mathcal{O}_L(\mathfrak{X}(r_m)) \otimes_L M(r_m)[Z^{-1}]^{\Gamma_L}$ for any $m \geq 0$. This is clear for $m=0$. By induction we compute
\begin{align*}
\mathcal{O}_L(\mathfrak{X}(r_{m+1})) \otimes_L M(r_{m+1})[Z^{-1}]^{\Gamma_L} & = \mathcal{O}_L(\mathfrak{X}(r_{m+1})) \otimes_L \alpha_m(M(r_m)[Z^{-1}]^{\Gamma_L}) \\
& = \mathcal{O}_L(\mathfrak{X}(r_{m+1})) p(\mathcal{O}_L(\mathfrak{X}(r_m)) \otimes_L M(r_m)[Z^{-1}]^{\Gamma_L}) \\
& \supseteq \mathcal{O}_L(\mathfrak{X}(r_{m+1})) p(I_Z^{a+c} M(r_m)) \\
& = \mathcal{O}_L(\mathfrak{X}(r_{m+1})) \phi(I_Z^{a+c}) p(M(r_m)) \\
& = \mathcal{O}_L(\mathfrak{X}(r_{m+1})) I_{Z \setminus Z_{e-1}}^{a+c} p(M(r_m)) \\
& \supseteq I_{Z \setminus Z_{e-1}}^{a+c} I_{Z_{e-1}}^{a+c} M(r_{m+1}) = I_Z^{a+c} M(r_{m+1}) \ .
\end{align*}
\end{proof}
\begin{remark}\label{L-bilinear}
The derived $\operatorname{Lie}(\Gamma_L)$-action on any $M$ in $\operatorname{Mod}^{\varphi_L,\Gamma_L,\mathrm{an}}_{/\mathfrak{X}}$ is $L$-bilinear.
\end{remark}
\begin{proof}
We use the notations in the proof of Prop.\ \ref{flat}, in particular, the radius $r = r_0$. Since $M \subseteq M(r)$ it suffices to prove the analogous assertion for $M(r)$. But in that proof we had seen that $M(r) = \mathcal{O}_L(\mathfrak{X}(r)) \otimes_L M(r)^{\Gamma_L}$ which further reduces us to the case of the derived action on $\mathcal{O}_L(\mathfrak{X}(r))$. This case was treated in Lemma \ref{smalldisk-locan}.
\end{proof}
We now define a filtration on $\mathcal{D}(M)$. In fact, using the isomorphism in Prop.\ \ref{flat} we define the filtration on the isomorphic $M[Z^{-1}]^{\Gamma_L}$. We pick a point $x_0 \in Z_0$. Passing to the germ in $x_0$ gives an injective homomorphism $M \longrightarrow \mathcal{O}_{x_0} \otimes_{\mathcal{O}_L(\mathfrak{X})} M$. This map extends to a homomorphism $M[Z^{-1}] \longrightarrow \operatorname{Fr}(\mathcal{O}_{x_0}) \otimes_{\mathcal{O}_L(\mathfrak{X})} M$, which still is injective, leading to the commutative diagram
\begin{equation*}
\xymatrix{
& M \ar[d] \ar[r] & \mathcal{O}_{x_0} \otimes_{\mathcal{O}_L(\mathfrak{X})} M \ar[d]^{\subseteq} \\
M[Z^{-1}]^{\Gamma_L} \ar[r]^{\subseteq} & M[Z^{-1}] \ar[r] & \operatorname{Fr}(\mathcal{O}_{x_0}) \otimes_{\mathcal{O}_L(\mathfrak{X})} M. }
\end{equation*}
The $\mathfrak{m}_{x_0}$-adic filtration on $\operatorname{Fr}(\mathcal{O}_{x_0}) \otimes_{\mathcal{O}_L(\mathfrak{X})} M$ induces, via the injective composition of the lower horizontal arrows, an exhaustive and separated filtration $\operatorname{Fil}^\bullet M[Z^{-1}]^{\Gamma_L}$ on $M[Z^{-1}]^{\Gamma_L}$ which we transport to a filtration on $\mathcal{D}(M)$. Since $\Gamma_L$ acts transitively on $Z_0$ this filtration is independent of the choice of $x_0$. In this way we obtain a functor
\begin{align*}
\mathcal{D} : \operatorname{Mod}^{\varphi_L,\Gamma_L,\mathrm{an}}_{/\mathfrak{X}} & \longrightarrow \operatorname{Vec}(\operatorname{Fil},\varphi_L) \\
M & \longmapsto \mathcal{D}(M) \ .
\end{align*}
\begin{theorem}\label{Wach-equiv}
The functors $\mathcal{M}$ and $\mathcal{D}$ are quasi-inverse equivalences between the categories $\operatorname{Vec}(\operatorname{Fil},\varphi_L)$ and $\operatorname{Mod}^{\varphi_L,\Gamma_L,\mathrm{an}}_{/\mathfrak{X}}$.
\end{theorem}
\begin{proof}
Given any $D$ in $\operatorname{Vec}(\operatorname{Fil},\varphi_L)$ we put $M := \mathcal{M}(D)$. It follows from \eqref{f:bounds} and Lemma \ref{Y-1-flat} that
\begin{equation*}
D \subseteq \mathcal{M}(D)[Z^{-1}]^{\Gamma_L} = M[Z^{-1}]^{\Gamma_L} \subseteq M\{Z^{-1}\}^{\Gamma_L} \cong \mathcal{D}(M) \ .
\end{equation*}
By Lemma \ref{fg-proj} the rank of $M$ is equal to the dimension of $D$. Hence the outer terms in the above chain have the same dimension, and so equality must hold everywhere. Obviously, $\varphi_M$ induces $\varphi_D$. As a consequence of Lemma \ref{stalks} we have
\begin{equation*}
\mathfrak{m}_{x_0}^i \otimes_{\mathcal{O}_L(\mathfrak{X})} M = \mathfrak{m}_{x_0}^i (\mathcal{O}_{x_0} \otimes_{\mathcal{O}_L(\mathfrak{X})} M) = \mathfrak{m}_{x_0}^i \operatorname{Fil}^0(\operatorname{Fr}(\mathcal{O}_{x_0}) \otimes_L D) = \operatorname{Fil}^i(\operatorname{Fr}(\mathcal{O}_{x_0}) \otimes_L D)
\end{equation*}
for any $i \in \mathbf{Z}$. This shows that the $\mathfrak{m}_{x_0}$-adic filtration on $\operatorname{Fr} (\mathcal{O}_{x_0}) \otimes_{\mathcal{O}_L(\mathfrak{X})} M$ coincides with the tensor product filtration on $\operatorname{Fr}(\mathcal{O}_{x_0}) \otimes_L D$ and therefore induces the original filtration on $D$. We conclude that the above isomorphism $D \cong \mathcal{D}(\mathcal{M}(D))$ is a natural isomorphism in the category
$\operatorname{Vec}(\operatorname{Fil},\varphi_L)$.
Now consider any $M$ in $\operatorname{Mod}^{\varphi_L,\Gamma_L,\mathrm{an}}_{/\mathfrak{X}}$ and $D' := M\{Z^{-1}\}^{\Gamma_L} = M[Z^{-1}]^{\Gamma_L}$ in $\operatorname{Vec}(\operatorname{Fil},\varphi_L)$. Using Prop.\ \ref{flat} we see that
\begin{equation*}
\mathcal{M}(D') = \{s \in M[Z^{-1}] : (\operatorname{id} \otimes \varphi_{D'}^{-n(x)})(s) \in \operatorname{Fil}^0(\operatorname{Fr}(\mathcal{O}_x) \otimes_L D')\ \text{for any $x \in Z$}\} \ .
\end{equation*}
The identity $\mathcal{O}_L(\mathfrak{X})[Z^{-1}] \otimes_L D' = \mathcal{O}_L(\mathfrak{X})[Z^{-1}] \otimes_{\mathcal{O}_L(\mathfrak{X})} M$ gives rise, for any $x \in Z$, to the identity $\operatorname{Fr}(\mathcal{O}_x) \otimes_L D' = \operatorname{Fr}(\mathcal{O}_x) \otimes_{\mathcal{O}_L(\mathfrak{X})} M$. We claim that
\begin{equation*}
(\operatorname{id} \otimes \varphi_{D'}^{-n(x)})(s) \in \operatorname{Fil}^0(\operatorname{Fr}(\mathcal{O}_x) \otimes_L D') \qquad\text{if and only if}\qquad s \in \mathcal{O}_x \otimes_{\mathcal{O}_L(\mathfrak{X})} M \ .
\end{equation*}
This shows that $\mathcal{M}(D') = M$. It is straightforward to see that this identity is compatible with the monoid actions on both sides. Hence we obtain a natural isomorphism $M \cong \mathcal{M}(\mathcal{D}(M))$ in the category $\operatorname{Mod}^{\varphi_L,\Gamma_L,\mathrm{an}}_{/\mathfrak{X}}$. To establish this claim we first consider the case $x \in Z_0$. We need to verify that $\operatorname{Fil}^0(\operatorname{Fr}(\mathcal{O}_x) \otimes_L D') = \mathcal{O}_x \otimes_{\mathcal{O}_L(\mathfrak{X})} M$. But this is a special case of the subsequent Lemma \ref{ED}. For a general $x \in Z$ we put $x_0 := (\pi_L^*)^{n(x)}(x) \in Z_0$. Then $(\operatorname{id} \otimes \varphi_{D'}^{-n(x)})(s) \in \operatorname{Fil}^0(\operatorname{Fr}(\mathcal{O}_x) \otimes_L D')$ if and only if $s$ lies in
\begin{align*}
(\operatorname{id} \otimes \varphi_{D'}^{n(x)}) \operatorname{Fil}^0(\operatorname{Fr}(\mathcal{O}_x) \otimes_L D') & = \sum_i \mathfrak{m}_x^i \otimes_L \varphi_{D'}^{n(x)} (\operatorname{Fil}^{-i} D') \\
& = \sum_i \mathcal{O}_x \varphi_L^{n(x)}(\mathfrak{m}_{x_0}^i) \otimes_L \varphi_{D'}^{n(x)} (\operatorname{Fil}^{-i} D') \\
& = \mathcal{O}_x (\varphi_L \otimes \varphi_M)^{n(x)} \operatorname{Fil}^0(\operatorname{Fr}(\mathcal{O}_{x_0}) \otimes_L D') \\
& = \mathcal{O}_x (\varphi_L \otimes \varphi_M)^{n(x)} (\mathcal{O}_{x_0} \otimes_{\mathcal{O}_L(\mathfrak{X})} M) \\
& = \mathcal{O}_x \otimes_{\mathcal{O}_L(\mathfrak{X})} M \ .
\end{align*}
Here the second, resp.\ fourth, resp.\ last, identity uses that $\varphi_L$ is unramified (Lemma \ref{unramified}.ii), resp.\ the previous case of points in $Z_0$, resp.\ an iteration of Remark \ref{phi-M-bounded}.
\end{proof}
Consider any point $x \in Z_0$, which is the Galois orbit of a character $\chi$ of $o_L$. Since $\chi$ has values in the group $\mu_p$ of $p$th roots of unity, the residue class field $L_x$ of $\mathcal{O}_x$ is equal to $L_x = L(\mu_p)$, and we have $x = \operatorname{Gal}(L_x/L) \chi$. On the other hand, by Lemma \ref{torsion} we have $\mathfrak{X}[\pi_L](\mathbf{C}_p) \cong k_L$ as $o_L$-modules, where $k_L$ denotes the residue field of $o_L$. We see that $\Gamma_L$ acts on $Z_0(\mathbf{C}_p)$ through its factor group $k_L^\times$. The stabilizer $\Gamma_x$ of $x$ satisfies $1 + \pi_L o_L \subseteq \Gamma_x \subseteq \Gamma_L$. Since the $\Gamma_L/1+\pi_L o_L = k_L^\times$-action on $Z_0(\mathbf{C}_p)$ is simply transitive there is a unique isomorphism $\sigma_x : \Gamma_x/1+\pi_L o_L \xrightarrow{\cong} \operatorname{Gal}(L_x/L)$ such that $\gamma^*(\chi) = \sigma_x(\gamma)(\chi)$ for any $\gamma \in \Gamma_x$. On the other hand the $\Gamma_L$-action on $\mathfrak{X}$ induces a $\Gamma_x$-action on $\mathcal{O}_x$ and hence on $L_x$. The identities
\begin{equation*}
(\gamma_*(f))(\chi) = f(\gamma^*(\chi)) = f(\sigma_x(\gamma)(\chi)) = \sigma_x(\gamma)(f(\chi)) \qquad\text{for any $f \in \mathcal{O}_x$ and $\gamma \in \Gamma_x$}
\end{equation*}
show that this last action is given by the homomorphism $\sigma_x : \Gamma_x \longrightarrow \operatorname{Gal}(L_x/L)$.
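For orientation we make this explicit in the special case $L = \mathbf{Q}_p$ with $\pi_L = p$ (this case is not needed in the sequel). Here $Z_0(\mathbf{C}_p)$ consists of the $p-1$ nontrivial characters of $\mathbf{Z}_p$ with values in $\mu_p$; they form a single Galois orbit, so that $Z_0$ has a single closed point $x$ with $L_x = \mathbf{Q}_p(\mu_p)$ and $\Gamma_x = \Gamma_L = \mathbf{Z}_p^\times$, and $\sigma_x$ is an isomorphism
\begin{equation*}
\mathbf{Z}_p^\times/(1 + p\mathbf{Z}_p) = \mathbf{F}_p^\times \xrightarrow{\;\cong\;} \operatorname{Gal}(\mathbf{Q}_p(\mu_p)/\mathbf{Q}_p) \ .
\end{equation*}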
\begin{lemma}\label{ED}
Let $x \in Z_0$, let $V$ be a finite dimensional $L$-vector space, and equip $\operatorname{Fr}(\mathcal{O}_x) \otimes_L V$ with the $\Gamma_x$-action through the first factor. Suppose given any $\Gamma_x$-invariant $\mathcal{O}_x$-lattice $\mathcal{L} \subseteq \operatorname{Fr}(\mathcal{O}_x) \otimes_L V$ and define $\operatorname{Fil}^i V := V \cap \mathfrak{m}_x^i \mathcal{L}$ for any $i \in \mathbf{Z}$. We then have
\begin{equation*}
\mathcal{L} = \sum_{i \in \mathbf{Z}} \mathfrak{m}_x^i \otimes_L \operatorname{Fil}^{-i} V \ .
\end{equation*}
\end{lemma}
\begin{proof}
Let $\ell_x \in \mathcal{O}_x$ denote the germ of $\log_\mathfrak{X}$. By Lemma \ref{zeros} we have $\mathfrak{m}_x = \ell_x \mathcal{O}_x$ and $\gamma_*(\ell_x) = \gamma \cdot \ell_x$ for any $\gamma \in \Gamma_x$. We find integers $n \leq 0 \leq m$ such that
\begin{equation*}
\ell_x^m \mathcal{O}_x \otimes_L V \subseteq \mathcal{L} \subseteq \ell_x^n \mathcal{O}_x \otimes_L V \ .
\end{equation*}
Let $\widehat{\mathcal{O}}_x = L_x [[\ell_x]]$ be the $\mathfrak{m}_x$-adic completion of $\mathcal{O}_x$. We then have the $\Gamma_x$-invariant decomposition
\begin{equation*}
\ell_x^n \mathcal{O}_x / \ell_x^m \mathcal{O}_x = \ell_x^n \widehat{\mathcal{O}}_x / \ell_x^m \widehat{\mathcal{O}}_x = \ell_x^n L_x \oplus \ell_x^{n+1} L_x \oplus \ldots \oplus \ell_x^{m-1} L_x \ .
\end{equation*}
The $\Gamma_x$-action on $\ell_x^j L_x$ is given by $\gamma_*(\ell_x^j c) = \ell_x^j \gamma^j \cdot \sigma_x(\gamma)(c)$. Since $\operatorname{Gal}(L_x/L)$ acts semisimply on $L_x$ we see that the above decomposition exhibits the left hand side as a semisimple $\Gamma_x$-module with the summands on the right hand side having no simple constituents in common. The same then holds true for the decomposition
\begin{equation*}
\ell_x^n \mathcal{O}_x \otimes_L V / \ell_x^m \mathcal{O}_x \otimes_L V = (\ell_x^n L_x \otimes_L V) \oplus \ldots \oplus (\ell_x^{m-1} L_x \otimes_L V) \ .
\end{equation*}
It therefore follows that
\begin{equation*}
\mathcal{L} / \ell_x^m \mathcal{O}_x \otimes_L V = \ell_x^n (L_x \otimes_L V)_{-n} \oplus \ldots \oplus \ell_x^{m-1} (L_x \otimes_L V)_{-(m-1)}
\end{equation*}
with $\operatorname{Gal}(L_x/L)$-invariant $L_x$-vector subspaces $(L_x \otimes_L V)_{-j} \subseteq L_x \otimes_L V$. By Galois descent we have $(L_x \otimes_L V)_{-j} = L_x \otimes_L V_{-j}$ for a unique $L$-vector subspace $V_{-j} \subseteq V$. Hence
\begin{equation*}
\mathcal{L} / \ell_x^m \mathcal{O}_x \otimes_L V = (\ell_x^n L_x \otimes_L V_{-n}) \oplus \ldots \oplus (\ell_x^{m-1} L_x \otimes_L V_{-(m-1)}) \ .
\end{equation*}
Multiplying this identity by $\ell_x$ shows that $V_{-j} \subseteq V_{-(j+1)}$. We deduce that
\begin{equation*}
\mathcal{L} = (\ell_x^n \mathcal{O}_x \otimes_L V_{-n}) + \ldots + (\ell_x^{m-1} \mathcal{O}_x \otimes_L V_{-(m-1)}) + (\ell_x^m \mathcal{O}_x \otimes_L V) = \sum_{i \in \mathbf{Z}} \mathfrak{m}_x^i \otimes_L V_{-i}
\end{equation*}
with $V_{-i} := V$, resp.\ $:= \{0\}$, for $i \geq m$, resp.\ $i < n$. In particular, we obtain $\mathfrak{m}_x^i \otimes_L V_{-i} \subseteq \mathcal{L}$, hence $V_{-i} \subseteq \mathfrak{m}_x^{-i} \mathcal{L}$, and therefore $V_{-i} \subseteq \operatorname{Fil}^{-i} V$. We conclude that
\begin{equation*}
\mathcal{L} \subseteq \sum_{i \in \mathbf{Z}} \mathfrak{m}_x^i \otimes_L \operatorname{Fil}^{-i} V \ .
\end{equation*}
The reverse inclusion is immediate from the definition of $\operatorname{Fil}^\bullet V$.
\end{proof}
\subsection{Crystalline $(\varphi_L,\Gamma_L)$-modules over $\mathscr{R}_L(\mathfrak{X})$}
\label{kisrobx}
Let $r_0 \in (0,1) \cap p^\mathbf{Q}$ be such that $Z_0 \subseteq \mathfrak{X}(r_0)$. Then the inclusion of coherent ideal sheaves $\widetilde{I_{Z_0}} \subseteq \mathcal{O}$ is an isomorphism over
$\mathfrak{X} \setminus \mathfrak{X}(r_0)$. It follows, using Remark \ref{coherent}.iv, that $\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}(r_0)) \otimes_{\mathcal{O}_L(\mathfrak{X})} I_{Z_0} = \mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}(r_0))$ and hence that $I_{Z_0}$ generates the unit ideal in $\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}(r_0))$. This implies that $\mathcal{O}_L(\mathfrak{X})[Z_0^{-1}] \subseteq \mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}(r_0))$. Using Cor.\ \ref{iso-Z0invert}.ii we deduce that the $\mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}(r_0))$-linear map
\begin{align}\label{f:tilde-varphi-iso}
\widetilde{\varphi}_{\mathcal{M}(D)} : \mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}(r_0)) \otimes_{\mathcal{O}_L(\mathfrak{X}),\varphi_L} \mathcal{M}(D) & \xrightarrow{\; \cong \;} \mathcal{O}_L(\mathfrak{X} \setminus \mathfrak{X}(r_0)) \otimes_{\mathcal{O}_L(\mathfrak{X})} \mathcal{M}(D) \\
f \otimes s & \longmapsto f \varphi_{\mathcal{M}(D)} (s) \nonumber
\end{align}
is an isomorphism.
We now define the finitely generated projective module
\begin{equation*}
\mathcal{M}_\mathscr{R}(D) := \mathscr{R}_L(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X})} \mathcal{M}(D)
\end{equation*}
over the Robba ring. Its rank is equal to the dimension $d_D$ of $D$. The $\varphi_L$-linear endomorphism $\varphi_{\mathcal{M}_\mathscr{R}(D)} := \varphi_L \otimes \varphi_{\mathcal{M}(D)}$ and the semilinear automorphisms $c_{\mathcal{M}_\mathscr{R}(D)} := c_* \otimes c_{\mathcal{M}(D)}$, for $c \in o_L^\times$, which by \eqref{f:commute} commute with $\varphi_{\mathcal{M}_\mathscr{R}(D)}$, together define a semilinear action of the monoid $o_L \setminus \{0\}$ on $\mathcal{M}_\mathscr{R}(D)$ (which does not depend on the choice of $\pi_L$). As a consequence of \eqref{f:tilde-varphi-iso} the induced $\mathscr{R}_L(\mathfrak{X})$-linear map
\begin{equation*}
\mathscr{R}_L(\mathfrak{X}) \otimes_{\mathscr{R}_L(\mathfrak{X}), \varphi_L} \mathcal{M}_\mathscr{R}(D) \xrightarrow[\operatorname{id} \otimes \varphi_{\mathcal{M}_\mathscr{R}(D)}]{\; \cong \;} \mathcal{M}_\mathscr{R}(D)
\end{equation*}
is an isomorphism.
\begin{corollary}
\label{mrispgm}
The module $\mathcal{M}_\mathscr{R}(D)$ is a $(\varphi_L,\Gamma_L)$-module over $\mathscr{R}_L(\mathfrak{X})$.
\end{corollary}
In this way we have constructed a functor
\begin{equation*}
D \longmapsto \mathcal{M}_\mathscr{R}(D)
\end{equation*}
from the category of filtered $\varphi_L$-modules into the category $\operatorname{Mod}_L(\mathscr{R}_L(\mathfrak{X}))$; this functor is exact, as can be seen from the description of the stalks in Lemmas \ref{fg-proj} and \ref{stalks}.
By Lemma \ref{action-MD}, $\mathcal{M}_\mathscr{R}(D)$ is $L$-analytic.
Those $(\varphi_L,\Gamma_L)$-modules that come by extension of scalars from a $(\varphi_L,\Gamma_L)$-module over $\mathcal{O}_L(\mathfrak{X})$ are said to be of finite height. The above constructions show that $\mathcal{M}_\mathscr{R}(D)$ is of finite height, at least when $\operatorname{Fil}^0 D = D$. We can get further examples by allowing an action of $\Gamma_L$ on $D$ that commutes with $\varphi_L$.
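Continuing the example following Definition \ref{defmodw}, and again only as an illustration: for $D = L$ with $\varphi_D = \operatorname{id}_L$ and single filtration jump in degree $0$ we obtain
\begin{equation*}
\mathcal{M}_\mathscr{R}(D) = \mathscr{R}_L(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X})} \mathcal{O}_L(\mathfrak{X}) = \mathscr{R}_L(\mathfrak{X})
\end{equation*}
with $\varphi_{\mathcal{M}_\mathscr{R}(D)} = \varphi_L$ and the natural $\Gamma_L$-action; in particular, this $(\varphi_L,\Gamma_L)$-module is of finite height.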
The construction of the equivalence of categories
\begin{equation*}
\mathcal{M}_{\mathfrak{X}} := \mathcal{M} : \operatorname{Vec}(\operatorname{Fil},\varphi_L) \xrightarrow{\;\simeq\;} \operatorname{Mod}^{\varphi_L,\Gamma_L,\mathrm{an}}_{/\mathfrak{X}}
\end{equation*}
in Thm.\ \ref{Wach-equiv} works literally in the same way over $\mathfrak{B}$ producing an equivalence
\begin{equation*}
\mathcal{M}_{\mathfrak{B}} : \operatorname{Vec}(\operatorname{Fil},\varphi_L) \xrightarrow{\;\simeq\;} \operatorname{Mod}^{\varphi_L,\Gamma_L,\mathrm{an}}_{/\mathfrak{B}}
\end{equation*}
with the property that $\mathscr{R}_L(\mathfrak{B}) \otimes_{\mathcal{O}_L(\mathfrak{B})} \mathcal{M}_{\mathfrak{B}}(D)$ is a $(\varphi_L,\Gamma_L)$-module over $\mathscr{R}_L(\mathfrak{B})$. This case is due to \cite{KFC} and \cite{KR} \S 2.2.
\begin{lemma}\label{O-iso}
Under the identification $\mathfrak{B}_{/\mathbf{C}_p} = \mathfrak{X}_{/\mathbf{C}_p}$ via $\kappa$ we have $\mathcal{O}_{\mathbf{C}_p} (\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{B})} \mathcal{M}_{\mathfrak{X}}(D) = \mathcal{O}_{\mathbf{C}_p} (\mathfrak{B}) \otimes_{\mathcal{O}_L(\mathfrak{X})} \mathcal{M}_{\mathfrak{B}}(D)$.
\end{lemma}
\begin{proof}
Let $\mathbf{S}$ be either $\mathfrak{X}$ or $\mathfrak{B}$ over $L$ with structure sheaf $\mathcal{O}_{\mathbf{S}}$. Let $t_{\mathbf{S}}$ denote $\log_\mathbf{S}$ and recall that $\log_\mathfrak{X} = \Omega_{t_0'} \cdot \log_{\mathfrak{B}}$ (compare the proof of Lemma \ref{zeros}.i). We consider the $\mathcal{O}_{\mathbf{C}_p} (\mathbf{S})$-module
\begin{multline*}
\mathcal{M}_{\mathbf{C}_p}(D) := \\
\{ s \in \mathcal{O}_{\mathbf{C}_p} (\mathbf{S}) [t_{\mathbf{S}}^{-1}] \otimes_L D :
(\operatorname{id} \otimes \varphi_D^{-n(y)})(s) \in \operatorname{Fil}^0(\operatorname{Fr}(\mathcal{O}_{\mathbf{S}_{/\mathbf{C}_p},y}) \otimes_L D) \ \text{for all $y \in
Z(\mathbf{C}_p)$} \}.
\end{multline*}
It is, in fact, independent of the choice of $\mathbf{S}$. Therefore it suffices to show that
\begin{equation*}
\mathcal{O}_{\mathbf{C}_p} (\mathbf{S}) \otimes_{\mathcal{O}_L(\mathbf{S})} \mathcal{M}_{\mathbf{S}}(D) = \mathcal{M}_{\mathbf{C}_p}(D) \ .
\end{equation*}
The left hand side is obviously included in the right hand side. By redoing Lemmas \ref{fg-proj} and \ref{stalks} over $\mathfrak{X}_{/\mathbf{C}_p}$ we see that $\mathcal{M}_{\mathbf{C}_p}(D)$ is a finitely generated projective $\mathcal{O}_{\mathbf{C}_p} (\mathbf{S})$-module such that the stalks $\widetilde{\mathcal{M}}_{\mathbf{C}_p,y}(D)$, for $y \in \mathbf{S}(\mathbf{C}_p)$, of the corresponding coherent module sheaf on $\mathbf{S}_{/\mathbf{C}_p}$ satisfy
\begin{equation*}
\widetilde{\mathcal{M}}_{\mathbf{C}_p,y}(D)
\begin{cases}
= \mathcal{O}_{\mathbf{S}_{/\mathbf{C}_p},y} \otimes_L D & \text{if $y \not\in Z(\mathbf{C}_p)$}, \\
\xrightarrow[\operatorname{id} \otimes \varphi_D^{-n(y)}]{\cong} \operatorname{Fil}^0(\operatorname{Fr}(\mathcal{O}_{\mathbf{S}_{/\mathbf{C}_p},y}) \otimes_L D) & \text{if $y \in Z(\mathbf{C}_p)$}.
\end{cases}
\end{equation*}
On the other hand we deduce directly, by base change, from these same lemmas that the stalks of the coherent module sheaf corresponding to the finitely generated projective $\mathcal{O}_{\mathbf{C}_p} (\mathbf{S})$-module $\mathcal{O}_{\mathbf{C}_p} (\mathbf{S}) \otimes_{\mathcal{O}_L(\mathbf{S})} \mathcal{M}_{\mathbf{S}}(D)$ satisfy the very same formula as above. This shows the asserted equality.
\end{proof}
\begin{theorem}\label{mdbxcomp}
The $(\varphi_L,\Gamma_L)$-modules $\mathscr{R}_L(\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X})} \mathcal{M}_{\mathfrak{X}}(D)$ and $\mathscr{R}_L(\mathfrak{B}) \otimes_{\mathcal{O}_L(\mathfrak{B})} \mathcal{M}_{\mathfrak{B}}(D)$ correspond to each other via the equivalence in Thm.\ \ref{equiv}.
\end{theorem}
\begin{proof}
Given the construction of our equivalence, it is enough to show that
\begin{equation*}
\mathscr{R}_{\mathbf{C}_p} (\mathfrak{X}) \otimes_{\mathcal{O}_L(\mathfrak{X})} \mathcal{M}_{\mathfrak{X}}(D) = \mathscr{R}_{\mathbf{C}_p} (\mathfrak{B}) \otimes_{\mathcal{O}_L(\mathfrak{B})} \mathcal{M}_{\mathfrak{B}}(D) \ .
\end{equation*}
But this is immediate from Lemma \ref{O-iso}.
\end{proof}
Let $D$ be a $1$-dimensional filtered $\varphi_L$-module,
and let $v$ be a basis of $D$. Let $t_H(D)$ denote the unique integer
$i$ such that $\operatorname{gr}^i(D) \neq 0$, and let $t_N(D) = v_L(\alpha)$
where $\varphi_D(v) = \alpha v$. If $D$ is any filtered $\varphi_L$-module,
let $t_H(D) = t_H(\det D)$ and $t_N(D) = t_N(\det D)$.
We say that $D$ is admissible if $t_H(D) = t_N(D)$ and if $t_H(D') \leq t_N(D')$ for every sub filtered $\varphi_L$-module $D'$ of $D$.
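For orientation, consider a minimal one-dimensional example (with the valuation $v_L$ normalized so that $v_L(\pi_L) = 1$): take $D = L \cdot v$ with $\varphi_D(v) = \pi_L v$ and the filtration jumping exactly once in degree $1$, i.e.
\begin{equation*}
\operatorname{Fil}^i D =
\begin{cases}
D & \text{if $i \leq 1$}, \\
0 & \text{if $i > 1$}.
\end{cases}
\end{equation*}
Then $t_H(D) = 1 = v_L(\pi_L) = t_N(D)$, and since $D$ has no nonzero proper sub filtered $\varphi_L$-modules, $D$ is admissible; by Prop.\ \ref{colfononx} below the associated module $\mathcal{M}_\mathscr{R}(D)$ is \'etale of degree $0$.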
\begin{proposition}
\label{colfononx}
If $D$ is a filtered $\varphi_L$-module, then $\deg(\mathcal{M}_\mathscr{R}(D)) = t_N(D) - t_H(D)$. In particular, $\mathcal{M}_\mathscr{R}(D)$ is \'etale if and only if $D$ is admissible.
\end{proposition}
\begin{proof}
Given Thm.\ \ref{mdbxcomp} and Prop.\ \ref{mxet}, this is a consequence of the analogous claim over $\mathfrak{B}$, which is proved in \S 2.3 of \cite{KR}.
\end{proof}
Let $V$ be a crystalline $L$-analytic representation of $G_L$ and let
$D = (\mathbf{B}_{\mathrm{cris},L} \otimes_L V)^{G_L}$. The filtered
$\varphi_L$-module $D$ is admissible (see \S 3.1 of \cite{KR}; note that
$D$ is not $\mathrm{D}_{\mathrm{cris}}(V)$ but the two are related by
a simple recipe given in ibid.). By Prop.\ \ref{colfononx},
the $(\varphi_L,\Gamma_L)$-module $\mathcal{M}_\mathscr{R}(D)$ is \'etale.
This gives us a functor from the category of crystalline $L$-analytic
representations of $G_L$ to the category of \'etale $L$-analytic
$(\varphi_L,\Gamma_L)$-modules over $\mathscr{R}_L(\mathfrak{X})$.
By Thm.\ \ref{mdbxcomp}, this functor is compatible with the one
given in Cor.\ \ref{repequiv}.
\end{document}
\begin{document}
\title{Understanding Curriculum Learning in Policy Optimization for Online Combinatorial Optimization}
\doparttoc
\faketableofcontents
\begin{abstract}
In recent years, reinforcement learning (RL)
has begun to show promising results in \rebut{tackling} combinatorial optimization (CO) problems, in particular when coupled with curriculum learning to facilitate training.
Despite emerging empirical evidence, the theoretical study of why RL helps is still at an early stage.
This paper presents the first systematic study of policy optimization methods for \rebut{online} CO problems.
We show that \rebut{online} CO problems can be naturally formulated as latent Markov Decision Processes (LMDPs), and prove convergence bounds on natural policy gradient (NPG) for solving LMDPs.
Furthermore, our theory explains the benefit of curriculum learning: it can find a strong sampling policy and reduce the distribution shift, a critical quantity that governs the convergence rate in our theorem.
For a canonical \rebut{online} CO problem, the Secretary Problem, we formally prove that the distribution shift is reduced \emph{exponentially} by curriculum learning \emph{even if the curriculum is randomly generated}.
Our theory also shows that the curriculum learning scheme used in prior work can be simplified from multi-step to single-step.
Lastly, we provide extensive experiments on the Secretary Problem and Online Knapsack to verify our findings.
\end{abstract}
\section{Introduction}
\label{sec:intro}
In recent years, machine learning techniques have shown promising results in solving combinatorial optimization (CO) problems, including traveling salesman problem (TSP, \citet{paper_tsp}), maximum cut \citep{paper_maxcut} and satisfiability problem~\citep{paper_sat}.
While in the worst case some CO problems are NP-hard, in practice, the probability that we need to solve the worst-case problem instance is low \citep{paper_co_gnn}.
Machine learning techniques are able to find generic models \rebut{which have exceptional performance on} the majority of a class of CO problems.
A significant subclass of CO problems consists of online CO problems, which have gained much attention \citep{paper_online_co_real_time, paper_online_co_nonlinear_obj, paper_stochastic_online_co}.
Online CO problems entail a sequential decision-making process, which perfectly matches the nature of reinforcement learning (RL).
This paper concerns using RL to \rebut{tackle online} CO problems.
RL is often coupled with specialized techniques including (a particular type of) Curriculum Learning \citep{paper_new_dog}, human feedback and correction (\citet{paper_dcoach}, \citet{paper_ppmp}), and policy aggregation (boosting, \citet{paper_rl_boosting}).
Practitioners use these techniques to accelerate training.
While these hybrid techniques enjoy empirical success, the theoretical understanding is still limited: it is unclear when and why they improve the performance.
In this paper, we particularly focus on \emph{RL with Curriculum Learning} (\citet{paper_curl}, also named ``bootstrapping'' in \citet{paper_new_dog}): train the agent from an easy task and \emph{gradually} increase the difficulty until the target task.
Interestingly, these techniques exploit the special structures of \rebut{online} CO problems.
\textbf{Main contributions.}
In this paper, we initiate the formal study on using RL \rebut{to tackle online} CO problems, with a particular emphasis on understanding the specialized techniques developed in this emerging subarea.
Our contributions are summarized below.
$\bullet$ \textbf{Formalization.}
For \rebut{online} CO problems, we want to learn a single policy that enjoys good performance over \emph{a distribution} of problem instances. This motivates us to use the Latent Markov Decision Process (LMDP)~\citep{paper_latent_mdp} instead of the standard MDP formulation.
We give concrete examples, Secretary Problem (SP) and Online Knapsack, to show how LMDP models \rebut{online} CO problems.
With this formulation, we can systematically analyze RL algorithms.
$\bullet$ \textbf{Provable efficiency of policy optimization.}
By leveraging recent theory on Natural Policy Gradient for standard MDP \cite{paper_npg}, we analyze the performance of NPG for LMDP.
The performance bound is characterized by the number of iterations, the excess risk of policy evaluation, the transfer error, and the relative condition number $\kappa$ that characterizes the distribution shift between the sampling policy and the optimal policy.
To our knowledge, this is the first performance bound of policy optimization methods on LMDP.
$\bullet$ \textbf{Understanding and simplifying Curriculum Learning.}
Using our performance guarantee on NPG for LMDP, we study when and why Curriculum Learning is beneficial to RL for \rebut{online} CO problems.
Our main finding is that the primary effect of Curriculum Learning is to provide a stronger sampling policy.
Under certain circumstances, Curriculum Learning reduces the relative condition number $\kappa$, improving the convergence rate.
For the Secretary Problem, we provably show that Curriculum Learning can \emph{exponentially reduce} $\kappa$ compared with using the na\"ive sampling policy.
Surprisingly, this means \emph{even a random curriculum of SP accelerates the training exponentially}.
As a direct implication, we show that the multi-step Curriculum Learning proposed in \cite{paper_new_dog} can be significantly simplified into a single-step scheme.
Lastly, to obtain a complete understanding, we study the failure mode of Curriculum Learning, so as to help practitioners decide whether to use Curriculum Learning based on their prior knowledge.
To verify our theories, we conduct extensive experiments on two classical \rebut{online} CO problems (Secretary Problem and Online Knapsack) and carefully track the dependency between the performance of the policy and $\kappa$.
\section{Related work}
\label{sec:rel}
\para{RL for CO.}
There is a rich literature on RL for CO problems, e.g., using Pointer Network in REINFORCE and Actor-Critic for routing problems \citep{paper_vrp}, combining Graph Attention Network with Monte Carlo Tree Search \rebut{for} TSP \citep{paper_co_realworld} and incorporating Structure-to-Vector Network in Deep Q-networks \rebut{for} maximum independent set problems \citep{paper_co_drl}.
\citet{paper_rl_co_nn} proposed a framework to tackle CO problems using RL and neural networks.
\citet{paper_tsp} combined REINFORCE and attention technique to learn routing problems.
\citet{paper_co_graph_survey} and \citet{paper_rl_co_survey} are taxonomic surveys of RL approaches for graph problems.
\citet{paper_ml_co} summarized learning methods, algorithmic structures and objective design, and discussed generalization; in particular, scaling to larger problems was identified as a major challenge.
Compared to supervised learning, RL not only mimics existing heuristics, but also discovers novel ones that humans have not thought of, for example in chip design \citep{paper_chip_design} and compiler optimization \citep{paper_ml_compile}.
Theoretically, there is a line of work on analyzing data-driven approach to combinatorial problems~\citep{balcan2020data}.
However, to our knowledge, a theoretical analysis for RL-based methods is still missing.
\citet{paper_new_dog} focused on using RL to \rebut{tackle} \emph{online} CO problems, which means that the agent must make sequential and irrevocable decisions.
They encoded the input in a length-independent manner.
For example, the $i$-th element of an $n$-length sequence is encoded by the fraction $\frac{i}{n}$ and other features, so that the agent can generalize to unseen $n$, paving the way for Curriculum Learning.
Three online CO problems were mentioned in their paper: Online Matching, Online Knapsack and Secretary Problem (SP).
\rebut{Currently, Online Matching and Online Knapsack have only approximation algorithms \citep{paper_om_hard, paper_ok_hard}.}
There are also other works about RL for online CO problems.
\citet{paper_drl_om} uses deep-RL for Online Matching.
\citet{paper_solo_online_co} studies Parallel Machine Job Scheduling problem (PMSP) and Capacitated Vehicle Routing problem (CVRP), which are both online CO problems, using offline-learning and Monte Carlo Tree Search.
\para{LMDP.}
We provide the exact definition of LMDP in Sec.~\ref{sec:lmdp}.
As studied by \citet{paper_multimodel_mdp}, in the general cases, optimal policies for LMDPs are \emph{history-dependent}.
This is different from standard MDP cases where there always exists an optimal \emph{history-independent} policy.
They showed that even finding the optimal history-independent policy is \emph{NP-hard}.
\citet{paper_latent_mdp} investigated the sample complexity and regret bounds of LMDP \rebut{in the history-independent policy class}.
They presented an exponential lower-bound for a general LMDP and derived algorithms with polynomial sample complexities for cases with special assumptions.
\citet{paper_reward_mixing} showed that in reward-mixing MDPs, where MDPs share the same transition model, a polynomial sample complexity is achievable without any assumption \rebut{to find an optimal history-independent policy}.
\para{Convergence rate for policy gradient methods.}
There is a line of work on the convergence rates of policy gradient methods for standard MDPs (\citet{paper_pg_linear}, \citet{paper_pg_neural}, \citet{paper_pg_npg_improved}, \citet{paper_pg_momentum}, \citet{paper_reinforce_sample}).
For \emph{softmax tabular parameterization}, NPG can obtain an $O(1 / T)$ rate \citep{paper_npg} where $T$ is the number of iterations; with entropy regularization, both PG and NPG achieve linear convergence \citep{paper_softmax_pg,paper_npg_tab_reg}.
For \emph{log-linear policies}, sample-based NPG attains an $O(1 / \sqrt{T})$ convergence rate, with assumptions on $\epsilon_{\textup{stat}}, \epsilon_{\text{bias}}$ and $\kappa$ \citep{paper_npg} (see Def.\,\ref{asp_main}); exact NPG with entropy regularization enjoys a linear convergence rate up to $\epsilon_{\text{bias}}$ \citep{paper_npg_logl_reg}.
We extend the analysis to LMDP.
\para{Curriculum Learning.}
There are a rich body of literature on Curriculum Learning \citep{paper_curl_robust,paper_curl_docl,paper_curl_dihcl, paper_curl_copilot,paper_curl_master_rate,paper_curl_auto}.
As surveyed in \citet{paper_curl}, Curriculum Learning has been applied to training deep neural networks and non-convex optimizations and improves the convergence in several cases.
\citet{paper_curl_rl} rigorously modeled curriculum as a directed acyclic graph and surveyed work on curriculum design.
\citet{paper_new_dog} proposed a bootstrapping / Curriculum Learning approach: gradually increase the problem size after the model works sufficiently well on the current problem size.
\section{Motivating \rebut{Online} Combinatorial Optimization problems}
\rebut{Online} CO problems are a natural class of problems that admit constructions of small-scale instances, because the hardness of \rebut{online} CO problems can be characterized by the input length, and instances of different scales are similar.
This property simplifies the construction of curricula and makes curriculum learning particularly natural.
We also believe \rebut{online} CO problems make the use of LMDP suitable, because under a proper distribution $\{ w_m \}$, instances with drastically different solutions do not occupy much of the probability space.
In this section we introduce two motivating \rebut{online} CO problems.
We are interested in these problems because they have all been extensively studied and have closed-form, easy-to-implement policies as references.
Furthermore, they were studied in \citet{paper_new_dog}, the paper that motivates our work.
They also have real-world applications, e.g., auction design \citep{paper:knapsack_secretary_application}.
\subsection{Secretary Problem} \label{sec_setting_sp}
In SP, the goal is to maximize the \emph{probability} of choosing the best among $n$ candidates, where $n$ is known.
All candidates have \emph{different} scores to quantify their abilities.
They arrive sequentially and when the $i$-th candidate shows up, the decision-maker observes the relative ranking $X_i$ among the first $i$ candidates, which means being the $X_i$th-best so far.
A decision on whether to accept or reject the $i$-th candidate must be made \emph{immediately} when the candidate comes, and such decisions \emph{cannot be revoked}.
Once one candidate is accepted, the game ends immediately.
The ordering of the candidates is unknown.
There are in total $n!$ permutations, and an instance of SP is drawn from an unknown distribution over these permutations.
In the classical SP, each permutation is sampled with equal probability.
The optimal solution for the classical SP is the well-known \emph{$1/\textup{e}$-threshold strategy}: reject all the first $\floor{n/\textup{e}}$ candidates, then accept the first one who is the best so-far.
In this paper, we also study some different distributions.
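As a quick sanity check of the $1/\textup{e}$ rule (our own illustration, not part of the experiments in Sec.\,\ref{sec_exp}), the following minimal Python sketch estimates the success probability of a threshold strategy by Monte Carlo simulation; for moderate $n$ the estimate at threshold $1/\textup{e}$ is close to $1/\textup{e} \approx 0.368$.
\begin{verbatim}
import math, random

def secretary_trial(n: int, threshold: float) -> bool:
    """One SP instance: does the threshold rule accept the overall best candidate?"""
    scores = list(range(n))
    random.shuffle(scores)                 # uniformly random arrival order (classical SP)
    cutoff = math.floor(n * threshold)
    best_seen = -1
    for i, s in enumerate(scores, start=1):
        if i <= cutoff:                    # rejection phase: only observe
            best_seen = max(best_seen, s)
            continue
        if s > best_seen:                  # first candidate beating everyone seen so far
            return s == n - 1              # success iff this is the overall best
    return False                           # never accepted anyone

n, trials = 50, 20000
wins = sum(secretary_trial(n, 1 / math.e) for _ in range(trials))
print(f"empirical success rate: {wins / trials:.3f} (1/e = {1 / math.e:.3f})")
\end{verbatim}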
\subsection{Online Knapsack (decision version)} \label{sec_setting_okd}
In Online Knapsack problems the decision-maker observes $n$ (which is known) items arriving sequentially, each with value $v_i$ and size $s_i$ revealed upon arrival. A decision to either accept or reject the $i$-th item must be made \emph{immediately} when it arrives, and such decisions \emph{cannot be revoked}.
At any time the accepted items should have their total size no larger than a known budget $B$.
The goal of standard Online Knapsack is to maximize the total value of accepted items. In this paper, we study the decision version, denoted as OKD, whose goal is to maximize the \emph{probability} of total value reaching a known target $V$.
We assume that all values and sizes are sampled independently from two fixed distributions, namely $v_1, v_2, \ldots, v_n \iid F_v$ and $s_1, s_2, \ldots, s_n \iid F_s$. In \citet{paper_new_dog} the experiments were carried out with $F_v = F_s = \textup{Unif}_{[0, 1]}$, and we also study other distributions.
\begin{remark}
A challenge in OKD is the sparse reward: the only signal is a reward of $1$ when the total value of accepted items first exceeds $V$ (see the detailed formulation in Sec.\,\ref{sec_full_exp_okd}), unlike in Online Knapsack, where a reward of $v_i$ is given immediately after the $i$-th item is successfully accepted.
This makes it hard for random exploration to obtain any reward signal, necessitating Curriculum Learning.
\end{remark}
\section{Problem setup}
In this section, we first introduce LMDP and why it naturally formulates \rebut{online} CO problems.
\rebut{We then introduce the components required by our algorithm, Natural Policy Gradient.}
\subsection{Latent Markov Decision Process} \label{sec:lmdp}
\rebut{Tackling} an \rebut{online} CO problem entails handling a family of problem instances.
Each instance can be modeled as a Markov Decision Process.
However, for \rebut{online} CO problems, we want to find one algorithm that works for a family of problem instances and performs well on average over an (unknown) distribution over this family.
To this end, we adopt the concept of Latent MDP which naturally models \rebut{online} CO problems.
Latent MDP \citep{paper_latent_mdp} is a collection of MDPs $\mathcal{M} = \{\mathcal{M}_1, \mathcal{M}_2, \ldots, \mathcal{M}_M\}$.
All the MDPs share state set $\mathcal{S}$, action set $\mathcal{A}$ and horizon $H$.
Each MDP $\mathcal{M}_m = (\mathcal{S}, \mathcal{A}, H, \nu_m, P_m, r_m)$ has its own \rebut{initial state distribution $\nu_m \in \Delta(\mathcal{S})$}, transition $P_m: \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$ and reward $r_m: \mathcal{S} \times \mathcal{A} \to [0, 1]$, where $\Delta(\mathcal{S})$ is the probability simplex over $\mathcal{S}$. Let $w_1, w_2, \ldots, w_M$ be the mixing weights of MDPs such that $w_m > 0$ for any $m$ and $\sum_{m=1}^M w_m = 1$. At the start of every episode, one MDP $\mathcal{M}_m \in \mathcal{M}$ is randomly chosen with probability $w_m$.
Due to the time and space complexities of finding the optimal history-dependent policies,
we stay in line with \citet{paper_new_dog} and care only about finding the optimal \emph{history-independent} policy.
Let $\Pi = \{ \pi: \mathcal{S} \to \Delta(\mathcal{A}) \}$ denote the class of all the history-independent policies.
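To make the episode structure concrete, the following minimal Python sketch rolls out one LMDP episode with a history-independent policy (the interfaces \texttt{mdps}, \texttt{weights}, \texttt{sample\_initial\_state} and \texttt{step} are hypothetical placeholders, not from any library):
\begin{verbatim}
import random

def sample_episode(lmdp, policy, H):
    """One LMDP episode: draw an MDP index m ~ w, then act for H steps."""
    m = random.choices(range(len(lmdp.mdps)), weights=lmdp.weights)[0]
    mdp = lmdp.mdps[m]                      # the latent MDP is fixed for the episode
    s = mdp.sample_initial_state()          # s_0 ~ nu_m
    trajectory, ret = [], 0.0
    for _ in range(H):
        a = policy(s)                       # a ~ pi(. | s): no dependence on history
        s_next, r = mdp.step(s, a)          # transition P_m and reward r_m
        trajectory.append((s, a, r))
        ret += r
        s = s_next
    return trajectory, ret
\end{verbatim}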
\rebut{
\textbf{Log-linear policy.}
Let $\phi: \mathcal{S} \times \mathcal{A} \to \mathbb{R}^d$ be a feature mapping function where $d$ denotes the dimension of feature space.
Assume that $\| \phi(s, a) \|_2 \le B$.
A log-linear policy is of the form:
\begin{align*}
\pi_\theta (a | s) = \frac{\exp(\theta^\top \phi (s, a))}{\sum_{a' \in \mathcal{A}} \exp(\theta^\top \phi (s, a'))},
\text{ where } \theta \in \mathbb{R}^d.
\end{align*}
}
\begin{remark}
\rebut{
Log-linear parameterization is a generalization of \textup{e}mph{softmax tabular parameterization} by setting $d = | \mathcal{S} | | \mathcal{A} |$ and $\phi (s, a) =$ One-hot$(s, a)$.
They are ``scalable'': if $\phi$ extracts important features from different $\mathcal{S} \times \mathcal{A}$s with a fixed dimension $d \ll | \mathcal{S} | | \mathcal{A} |$, then a single $\pi_\theta$ can generalize.
}
\end{remark}
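For concreteness, the policy probabilities and the score function $\nabla_\theta \ln \pi_\theta(a | s) = \phi(s, a) - \mathop{\mathbb{E}}_{a' \sim \pi_\theta(\cdot | s)}[\phi(s, a')]$, on which Alg.\,\ref{alg_npg} relies, can be computed as in the following NumPy sketch (our own illustration, not the paper's implementation):
\begin{verbatim}
import numpy as np

def log_linear_policy(theta, phi_s):
    """phi_s: array of shape (|A|, d) holding the feature rows phi(s, a) for a fixed state s."""
    logits = phi_s @ theta
    logits -= logits.max()              # shift logits for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

def score(theta, phi_s, a):
    """grad_theta log pi_theta(a | s) for the log-linear policy."""
    pi = log_linear_policy(theta, phi_s)
    return phi_s[a] - pi @ phi_s        # phi(s, a) minus the policy-averaged feature
\end{verbatim}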
\para{Entropy regularized value function, Q-function and advantage function.}
The expected reward of executing $\pi$ on $M_m$ is defined via value functions.
We incorporate \emph{entropy regularization} for completeness because prior works (especially empirical works) used it to facilitate training.
We define the value function in a unified way: $V_{m, h}^{\pi, \lambda} (s)$ is defined as the sum of future $\lambda$-regularized rewards starting from $s$ and executing $\pi$ for $h$ steps in $M_m$, i.e.,
{\ifnum0=1
\fi}
\begin{align*}
V_{m, h}^{\pi, \lambda} (s) := \mathop{\mathbb{E}} \left[ \left. \sum_{t = 0}^{h - 1} r_m^{\pi, \lambda} (s_t, a_t)\ \right|\ \mathcal{M}_m, \pi, s_0 = s \right],
\end{align*}
where $r_m^{\pi, \lambda} (s, a) := r_m (s, a) + \lambda \ln \frac{1}{\pi (a | s)}$, and the expectation is with respect to the randomness of trajectory induced by $\pi$ in $M_m$. Denote $V_{m, h}^\pi (s) := V_{m, h}^{\pi, 0} (s)$, the unregularized value function.
For any $\mathcal{M}_m, \pi, h$, with $\mathcal{H} (\pi (\cdot | s)) := \sum_{a \in \mathcal{A}} \pi(a | s) \ln \frac{1}{\pi(a | s)} \in [0, \ln |\mathcal{A}|]$ we define
\begin{align*}
H_{m, h}^\pi (s) := \mathop{\mathbb{E}} \left[ \left. \sum_{t = 0}^{h - 1} \mathcal{H}(\pi (\cdot | s_t))\ \right|\ \mathcal{M}_m, \pi, s_0 = s \right].
\end{align*}
In fact, $V_{m, h}^{\pi, \lambda} (s) = V_{m, h}^\pi (s) + \lambda H_{m, h}^\pi (s)$.
Denote $V^{\pi, \lambda} := \sum_{m=1}^M w_m \rebut{\sum_{s_0 \in \mathcal{S}} \nu_m (s_0)} V_{m, H}^{\pi, \lambda} (s_0)$ and $V^\pi := V^{\pi, 0}$.
We need to find $\pi^\star = \arg \max_{\pi \in \Pi} V^\pi$.
Under regularization, we instead seek $\pi_\lambda^\star = \arg \max_{\pi \in \Pi} V^{\pi, \lambda}$.
Denote $V^\star := V^{\pi^\star},\ V^{\star, \lambda} = V^{\pi_\lambda^\star, \lambda}$.
Since $V^\star \le V^{\pi^\star, \lambda} \le V^{\star, \lambda} \le V^{\pi_\lambda^\star} + \lambda H \ln |\mathcal{A}|$, the regularized optimal policy $\pi_\lambda^\star$ is nearly optimal as long as the regularization coefficient $\lambda$ is small enough.
For notational ease, we abuse notation and write $\pi^\star$ for $\pi_\lambda^\star$.
The Q-function can be defined in a similar manner:
{\ifnum0=1 \small \fi
\begin{align*}
Q_{m, h}^{\pi, \lambda} (s, a) := \mathop{\mathbb{E}} \left[ \left. \sum_{t = 0}^{h - 1} r_m^{\pi, \lambda} (s_t, a_t) \ \right|\ \mathcal{M}_m, \pi, (s_0, a_0) = (s, a) \right],
\end{align*}}
and the advantage function is defined as $A_{m, h}^{\pi, \lambda} (s, a) := Q_{m, h}^{\pi, \lambda} (s, a) - V_{m, h}^{\pi, \lambda} (s)$.
\para{Modeling SP.} For SP, each instance is a permutation of length $n$, and in each round an instance is drawn from an unknown distribution over all permutations. In the $i$-th step for $i \in [n]$, the state encodes the $i$-th candidate and relative ranking so far. The transition is deterministic according to the problem definition. A reward of $1$ is given if and only if the best candidate is accepted.
We model the distribution as follows: candidate $i$ is the best so far with probability $P_i$, independently of the other candidates.
Hence, the weight of each instance is simply the product of the probabilities on each position.
The classical SP satisfies $P_i = \frac{1}{i}$.
\para{Modeling OKD.} For OKD, each instance is a sequence of items with values and sizes drawn from unknown distributions $F_v$ and $F_s$. In the $i$-th step for $i \in [n]$, the state encodes the information of $i$-th item's value and size, the remaining budget, and the remaining target value to fulfill. The transition is also deterministic according to the problem definition, and a reward of $1$ is given if and only if the agent obtains the target value for the first time.
In \citet{paper_new_dog}, $F_v = F_s = \textup{Unif}_{[0, 1]}$.
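To illustrate this MDP encoding, here is a minimal, self-contained Python environment sketch for OKD (our own illustration; the class name, defaults and interface are hypothetical and not taken from \citet{paper_new_dog}):
\begin{verbatim}
import random

class OKDEnv:
    """Decision-version Online Knapsack with the length-independent state encoding above."""
    def __init__(self, n=20, budget=5.0, target=4.0):
        self.n, self.budget, self.target = n, budget, target   # horizon, B, V

    def reset(self):
        self.i = 0
        self.remaining_budget = self.budget
        self.remaining_target = self.target
        self._draw_item()
        return self._state()

    def _draw_item(self):
        self.value, self.size = random.random(), random.random()  # F_v = F_s = Unif[0, 1]

    def _state(self):
        # fraction of the sequence seen, current item, remaining budget and target
        return (self.i / self.n, self.value, self.size,
                self.remaining_budget, max(self.remaining_target, 0.0))

    def step(self, accept: bool):
        reward = 0.0
        if accept and self.size <= self.remaining_budget:
            self.remaining_budget -= self.size
            if 0.0 < self.remaining_target <= self.value:
                reward = 1.0            # sparse reward: target value reached for the first time
            self.remaining_target -= self.value
        self.i += 1
        done = self.i == self.n
        if not done:
            self._draw_item()
        return self._state(), reward, done
\end{verbatim}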
\subsection{Algorithm components}
In this subsection we will introduce some necessary notions used by our main algorithm.
\begin{definition}[State(-action) Visitation Distribution]
The state visitation distribution and state-action visitation distribution at step $h \ge 0$ with respect to $\pi$ in $\mathcal{M}_m$ are defined as
\begin{align*}
d_{m, h}^{\pi} (s) &:= \mathbb{P} (s_h = s\ |\ \mathcal{M}_m, \pi), \\
d_{m, h}^{\pi} (s, a) &:= \mathbb{P} (s_h = s, a_h = a\ |\ \mathcal{M}_m, \pi).
\end{align*}
\end{definition}
{\ifnum0>0
\fi}
We will encounter a grafted distribution $\wt{d}_{m, h}^{\pi} (s, a) = d_{m, h}^{\pi} (s) \circ \textup{Unif}_{\mathcal{A}} (a)$ which in general cannot be the state-action visitation distribution with respect to any policy. However, it can be attained by first acting under $\pi$ for $h$ steps to get states and then sampling actions from the uniform distribution $\textup{Unif}_{\mathcal{A}}$. This distribution will be useful when we apply a variant of NPG, where the sampling policy is fixed.
Denote $d_{m, h}^\clubsuit := d_{m, h}^{\pi_\clubsuit}$ and $d^\clubsuit$ as short for $\{ d_{m, h}^\clubsuit \}_{1 \le m \le M, 0 \le h \le H - 1}$, here $\clubsuit$ can be any symbol.
We also need the following definitions for NPG, which are different from the standard versions for discounted MDP because weights $\{w_m\}$ must be incorporated in the definitions to deal with LMDP.
\rebut{In the following definitions, let $v$ be the collection of any distribution, which will be instantiated by $d^\star$, $d^t$, etc. in the remaining sections.}
\begin{definition}[Compatible Function Approximation Loss] \label{def_func_approx_loss}
Let $g$ be the parameter update weight, then NPG is related to finding the minimizer for the following function:
{\ifnum0=1 \small \fi
\begin{align*}
&L(g; \theta, v) := {\ifnum0=1 \\&~ \fi}
\sum_{m=1}^M w_m \sum_{h=1}^H \mathop{\mathbb{E}}_{s, a \sim v_{m, H-h}} \left[ \left( A_{m, h}^{\pi_\theta, \lambda}(s, a) - g^\top \nabla_\theta \ln \pi_\theta (a | s) \right)^2 \right].
\end{align*}}
\end{definition}
\begin{definition}[Generic Fisher Information Matrix]
{\ifnum0=1 \small \fi
\begin{align*}
&\Sigma_v^\theta := {\ifnum0=1 \\&~ \fi}
\sum_{m=1}^M w_m \sum_{h=1}^H \mathop{\mathbb{E}}_{s, a \sim v_{m, H-h}} \left[ \nabla_\theta \ln \pi_\theta (a | s) \left( \nabla_\theta \ln \pi_\theta (a | s) \right)^\top \right].
\end{align*}}
\end{definition}
{\ifnum0=1
\fi}
Particularly, denote $F(\theta) = \Sigma_{d^\theta}^\theta$ as the Fisher information matrix induced by $\pi_\theta$.
\iffalse
Finally, we present the well-known Policy Gradient Theorem for finite horizon LMDP. The full statement and proof are deferred to Sec.\,\ref{sec_policy_gradient}.
\begin{theorem} [Policy Gradient Theorem] \label{thm_policy_gradient}
For any policy $\pi_\theta$ parameterized by $\theta$,
{\ifnum0=1 \small \fi
\begin{align*}
\nabla_\theta V^{\pi_\theta, \lambda}
= \sum_{m = 1}^M w_m \sum_{h = 1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H - h}^{\theta}} \left[ Q_{m, h}^{\pi_\theta, \lambda} (s, a) \nabla_\theta \ln \pi_\theta (a | s) \right].
\textup{e}nd{align*}}
\textup{e}nd{theorem}
{\ifnum0=2
\fi}
\fi
\section{Learning procedure}
\label{sec_learn_proc}
\rebut{In this section we introduce the algorithms: NPG supporting any customized sampler, and our proposed Curriculum Learning framework.}
\textbf{Natural Policy Gradient.}
The learning procedure generates a series of parameters and policies. Starting from $\theta_0$, the algorithm updates the parameter by setting
$
\theta_{t + 1} = \theta_t + \eta g_t,
$
where $\eta$ is a predefined constant learning rate, and $g_t$ is the update weight.
Denote $\pi_t := \pi_{\theta_t}, V^{t, \lambda} := V^{\pi_t, \lambda}$ and $A_{m, h}^{t, \lambda} := A_{m, h}^{\pi_t, \lambda}$ for convenience.
We adopt NPG \citep{paper_npg_original} because it is \rebut{efficient in} training parameterized policies and admits clean theoretical analysis.
NPG satisfies
$g_t \in \argmin_g L(g; \theta_t, d^{\theta_t})$ (see Sec.\,\ref{sec_policy_gradient} for explanation).
When we only have samples, we use the approximate version of NPG: $g_t \approx \argmin_{g \in \mathcal{G}} L(g; \theta_t, d^{\theta_t})$, where $\mathcal{G} = \{ x : \| x \|_2 \le G \}$ for some hyper-parameter $G$.
We also introduce a variant of NPG: instead of sampling from $d^{\theta_t}$ using the current policy $\pi_t$, we sample from $\wt{d}^{\pi_s}$ using a \textup{e}mph{fixed} sampling policy $\pi_s$.
The update rule is $g_t \approx \argmin_{g \in \mathcal{G}} L(g; \theta_t, \wt{d}^{\pi_s})$.
This version makes a closed-form analysis for SP possible.
The main algorithm is shown in Alg.\,\ref{alg_npg}.
It admits two types of training:
\ding{172} If $\pi_s =$ None, it calls Alg.\,\ref{alg_samp} (deferred to App.\,\ref{sec_defer_algo}) to sample $s, a \sim d^{\theta_t}$;
\ding{173} If $\pi_s \ne$ None, it then calls Alg.\,\ref{alg_samp} to sample $s, a \sim \wt{d}^{\pi_s}$.
Alg.\,\ref{alg_samp} also returns an unbiased estimation of $A_{H - h}^{\pi_t} (s, a)$.
In both cases, we denote $d^t$ as the sampling distribution and $\Sigma_t$ as the induced Fisher Information Matrix used in step $t$, i.e. $d^t := d^{\theta_t}, \Sigma_t := F(\theta_t)$ if $\pi_s =$ None; $d^t := \wt{d}^{\pi_s}, \Sigma_t := \Sigma_{\wt{d}^{\pi_s}}^{\theta_t}$ otherwise. The update rule can be written in a unified way as
$
g_t \approx \argmin_{g \in \mathcal{G}} L(g; \theta_t, d^t).
$
This is equivalent to solving a constrained quadratic optimization and we can use existing solvers.
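For illustration, the constrained quadratic $\min_{g \in \mathcal{G}} g^\top \wh{F}_t g - 2 g^\top \wh{\nabla}_t$ with $\mathcal{G} = \{ x : \|x\|_2 \le G \}$ can be solved by a few lines of projected gradient descent, as in the sketch below (our own illustration under the stated assumptions; any off-the-shelf constrained quadratic solver works equally well):
\begin{verbatim}
import numpy as np

def solve_npg_direction(F_hat, grad_hat, G, steps=500, lr=None):
    """Minimize g^T F g - 2 g^T grad over the ball ||g||_2 <= G (F assumed PSD)."""
    if lr is None:
        lr = 0.5 / max(np.linalg.eigvalsh(F_hat).max(), 1e-8)  # conservative step: 1/(2*lambda_max)
    g = np.zeros_like(grad_hat)
    for _ in range(steps):
        g -= lr * (2.0 * F_hat @ g - 2.0 * grad_hat)   # gradient step on the quadratic objective
        norm = np.linalg.norm(g)
        if norm > G:
            g *= G / norm                              # project back onto the l2 ball
    return g
\end{verbatim}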
\begin{remark}
Alg.\,\ref{alg_npg} is different from Alg.\,4 of \citet{paper_npg} \rebut{in that we} use a ``batched'' update while they used successive Projected Gradient Descents (PGD). This is an important implementation technique to speed up training in our experiments.
\textup{e}nd{remark}
\textbf{Curriculum Learning.}
We use Curriculum Learning to facilitate training.
Alg.\,\ref{alg_curl} is our proposed training framework, which first constructs an easy environment $E'$ and trains a (near-)optimal policy $\pi_s$ of it.
In the target environment $E$, we either use $\pi_s$ to sample data while training a new policy from scratch, or simply continue training $\pi_s$.
To be specific and provide clarity for the results in Sec.\,\ref{sec_exp}, we name a few training modes (without regularization) here, and the rest are in Tab.\,\ref{tab_training_scheme}.
\texttt{curl}, the standard Curriculum Learning, runs Alg.\,\ref{alg_curl} with $samp =$ \texttt{pi\_t}; \texttt{fix\_samp\_curl} stands for the fixed sampler Curriculum Learning, running Alg.\,\ref{alg_curl} with $samp =$ \texttt{pi\_s}.
\texttt{direct} means directly learning in $E$ without curriculum, i.e., running Alg.\,\ref{alg_npg} with $\pi_s =$ None; \texttt{naive\_samp} also directly learns in $E$, while using $\pi_s =$ na\"ive random policy to sample data in Alg.\,\ref{alg_npg}.
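As a compact summary (our own shorthand; the names follow the text, not any released code), the four modes differ only in whether a warm-up curriculum is used and which sampler Alg.\,\ref{alg_npg} is given:
\begin{verbatim}
# Hypothetical summary of the training modes named above (remaining modes are in the table).
TRAINING_MODES = {
    "curl":          dict(curriculum=True,  sampler="pi_t"),    # warm up, then sample with current policy
    "fix_samp_curl": dict(curriculum=True,  sampler="pi_s"),    # warm up, keep warm-up policy as sampler
    "direct":        dict(curriculum=False, sampler=None),      # train on E from scratch
    "naive_samp":    dict(curriculum=False, sampler="random"),  # sample with a naive random policy
}
\end{verbatim}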
\setlength{\textfloatsep}{-2cm}
\begin{algorithm}[ht]
\caption{\texttt{NPG}: Sample-based NPG. \rebut{(Full version: Alg.\,\ref{alg_npg_full}.)}}
\label{alg_npg}
\begin{algorithmic}[1]
\ifnum0>0 \small \fi
\STATE {\bfseries Input:} Environment $E$; learning rate $\eta$; episode number $T$; batch size $N$; initialization $\theta_0$; sampler $\pi_s$; regularization coefficient $\lambda$; entropy clip bound $U$; optimization domain $\mathcal{G}$.
\FOR{$t \gets 0, 1, \ldots, T - 1$}
\STATE For $0 \le n \le N - 1$ and $0 \le h \le H - 1$, sample $(a_h^{(n)}, s_h^{(n)})$ and estimate $\wh{A}_{H - h}^{(n)}$ using Alg.\,\ref{alg_samp}.
\STATE Calculate:
{\ifnum0=2
\fi}
{\ifnum0=3
\fi}
\begin{align*}
\wh{F}_t &\gets \sum_{n = 0}^{N - 1} \sum_{h = 0}^{H - 1} \nabla_\theta \ln \pi_{\theta_t} (a_h^{(n)} | s_h^{(n)}) \left( \nabla_\theta \ln \pi_{\theta_t} (a_h^{(n)} | s_h^{(n)}) \right)^\top, \\
\wh{\nabla}_t &\gets \sum_{n = 0}^{N - 1} \sum_{h = 0}^{H - 1} \wh{A}_{H - h}^{(n)} \nabla_\theta \ln \pi_{\theta_t} (a_h^{(n)} | s_h^{(n)}).
\end{align*}
{\ifnum0>0
\fi}
\STATE Call any solver to get $\wh{g}_t \gets \argmin_{g \in \mathcal{G}} g^\top \wh{F}_t g - 2 g^\top \wh{\nabla}_t$.
\STATE Update $\theta_{t + 1} \gets \theta_t + \eta \wh{g}_t$.
\ENDFOR
\STATE {\bfseries Return:} $\theta_T$.
\end{algorithmic}
\end{algorithm}
\setlength{\textfloatsep}{0.3cm}
\begin{algorithm}[ht]
\caption{Curriculum learning framework.}
\label{alg_curl}
\begin{algorithmic}[1]
\ifnum0>0 \small \fi
\STATE {\bfseries Input:} Environment $E$; learning rate $\eta$; episode number $T$; batch size $N$; sampler type $samp \in \{$ \texttt{pi\_s, pi\_t} $\}$; regularization coefficient $\lambda$; entropy clip bound $U$; optimization domain $\mathcal{G}$.
\STATE Construct an environment $E'$ with a task easier than $E$. This environment should have an optimal policy similar to that of $E$.
\STATE $\theta_s \gets$ \texttt{NPG} ($E', \eta, T, N, 0^d, \textup{None}, \lambda, U, \mathcal{G}$) (see Alg.\,\ref{alg_npg}).
\IF{$samp = $\texttt{pi\_s}}
\STATE $\theta_T \gets$ \texttt{NPG} ($E, \eta, T, N, 0^d, \pi_s, \lambda, U, \mathcal{G}$).
\ELSE
\STATE $\theta_T \gets$ \texttt{NPG} ($E, \eta, T, N, \theta_s, \textup{None}, \lambda, U, \mathcal{G}$).
\ENDIF
\STATE {\bfseries Return:} $\theta_T$.
\end{algorithmic}
\end{algorithm}
\section{Performance analysis}
Our analysis contains two important components, namely \rebut{the sub-optimality gap guarantee of the NPG we proposed, and the efficacy guarantee of Curriculum Learning on Secretary Problem}.
The first component can also be extended to history-dependent policies by taking features to be tensor products of the per-step features (whose dimension is exponentially large).
\subsection{Natural Policy Gradient for Latent MDP}
Let $g_t^\star \in \argmin_{g \in \mathcal{G}} L(g; \theta_t, d^t)$ denote the true minimizer. We have the following definitions:
\begin{definition} \label{asp_main}
Define for $0 \le t \le T$:
$\bullet$ (Excess risk) {\ifnum0=1 \small \fi $\epsilon_{\textup{stat}} := \max_t \mathop{\mathbb{E}}[L(g_t; \theta_t, d^t) - L(g_t^\star; \theta_t, d^t)]$;}
$\bullet$ (Transfer error) $\epsilon_{\textup{bias}} := \max_t \mathop{\mathbb{E}}[L(g_t^\star; \theta_t, d^\star)]$;
$\bullet$ (Relative condition number) $\kappa := \max_t \mathop{\mathbb{E}} \left[ \sup_{x \in \mathbb{R}^d} \frac{x^\top \Sigma_{d^\star}^{\theta_t} x}{x^\top \Sigma_t x} \right]$.
Note that the term inside the expectation is a random quantity since $\theta_t$ is random.
{\ifnum0=1
\fi}
The expectation is with respect to the randomness in the sequence of weights $g_0, g_1, \ldots, g_T$.
\end{definition}
All the quantities are commonly used in literature mentioned in Sec.\,\ref{sec:rel}.
$\epsilon_{\textup{stat}}$ arises because the minimizer $g_t$ obtained from samples may not minimize the population loss $L$.
$\epsilon_{\textup{bias}}$ quantifies the approximation error due to the feature maps.
$\kappa$ characterizes the distribution mismatch between $d^t$ and $d^\star$.
This is a key quantity in Curriculum Learning and will be studied in more detail in the following sections.
Our main result is based on a fitting error which depicts the closeness between $\pi^\star$ and any policy $\pi$.
\begin{definition}[Fitting Error] \label{def:err_t}
Suppose the update rule is $\theta_{t+1} = \theta_t + \eta g_t$, define
\begin{align*}
\textup{err}_t := \sum_{m=1}^M w_m \sum_{h = 1}^H \mathop{\mathbb{E}}_{(s, a) \sim d_{m,H - h}^\star} \left[ A_{m,h}^{t,\lambda} (s, a) - g_t^\top \nabla_\theta \ln \pi_t (a | s) \right].
\end{align*}
\end{definition}
{\ifnum0>0
\fi}
Thm.\,\ref{thm_main} shows the convergence rate of Alg.\,\ref{alg_npg}, and its proof is deferred to Sec.\,\ref{sec_proof_main_thm}.
\begin{theorem} \label{thm_main}
With Def.\,\ref{asp_main}, \ref{def:err_t} and \ref{def:lyapunov_potential}, our algorithm enjoys the following performance bound:
{\ifnum0=1 \small \fi
\begin{align*}
\mathop{\mathbb{E}} \left[ \min_{0 \le t \le T} \rebut{V^{\star, \lambda} - V^{t, \lambda}} \right]
&\le \frac{\lambda (1 - \eta \lambda)^{T + 1} \Phi (\pi_0)}{1 - (1 - \eta \lambda)^{T + 1}} + \eta \frac{B^2 G^2}{2} + \frac{\sum_{t = 0}^T (1 - \eta \lambda)^{T - t} \mathop{\mathbb{E}}[\textup{err}_t]}{\sum_{t' = 0}^T (1 - \eta \lambda)^{T - t'}} \\
&\le \frac{\lambda (1 - \eta \lambda)^{T + 1} \Phi (\pi_0)}{1 - (1 - \eta \lambda)^{T + 1}} + \eta \frac{B^2 G^2}{2} + \sqrt{H \epsilon_{\textup{bias}}} + \sqrt{H \kappa \epsilon_{\textup{stat}}},
\end{align*}}
where $\Phi (\pi_0)$ is the Lyapunov potential function, which depends only on the initialization.
\end{theorem}
\begin{remark} \label{rem:thm_main}
\ding{172} This is the first result for LMDP and sample-based NPG with entropy regularization.
\ding{173} For any fixed $\lambda > 0$ we have a linear convergence, which matches the result of discounted infinite horizon MDP (Thm.\,1 in \citet{paper_npg_logl_reg}); the limit when $\lambda$ tends to $0$ is $O( 1 / (\eta T) + \eta )$ (which implies an $O(1/\sqrt{T})$ rate), matching the result in \citet{paper_npg}.
\ding{174} $\epsilon_{\textup{stat}}$ can be reduced using a larger batch size $N$ (Lem.\,\ref{lem_L_conc}). The typical scaling is $\epsilon_{\textup{stat}} = \wt{O}(1/\sqrt{N})$.
\ding{175} If some $d_t$ (especially the initialization $d_0$) is far away from $d^\star$, $\kappa$ may be extremely large (Sec.\,\ref{sec_curl_kappa_sp} as an example).
In other words, if we can find a policy whose $\kappa$ is small with a single curriculum, we do not need the multi-step curriculum learning procedure used in \citet{paper_new_dog}.
\end{remark}
\subsection{Curriculum learning for Secretary Problem} \label{sec_curl_kappa_sp}
For SP,
there exists a threshold policy that is optimal \citep{paper_sp_theory}. Suppose the threshold is $p \in (0, 1)$, then the policy is: accept candidate $i$ if and only if $\frac{i}{n} > p$ and $X_i = 1$. For the classical SP where all the $n!$ instances have equal probability to be sampled, the optimal threshold is $1/\textup{e}$.
To show that curriculum learning makes the training converge faster, Thm.\,\ref{thm_main} gives a direct hint: \emph{curriculum learning produces a good sampler leading to much smaller $\kappa$ than that of a na\"ive random sampler.}
Here we focus on the case $samp =$ \texttt{pi\_s} because the sampler is fixed; when $samp =$ \texttt{pi\_t} the sampling distribution changes over time, which makes a closed-form analysis intractable.
We show Thm.\,\ref{thm_kappa_sp} to characterize $\kappa$ in SP.
Its full statement and proof are deferred to Sec.\,\ref{sec_proof_kappa_sp}.
\begin{theorem} \label{thm_kappa_sp}
Assume that each candidate is independent of others and the $i$-th candidate has a probability $P_i$ of being the best so far (Sec.\,\ref{sec:lmdp}). Assume the optimal policy is a $p$-threshold policy and the sampling policy is a $q$-threshold policy. There exists a policy parameterization such that:
{
\begin{align}
\kappa_{\textup{curl}}
&= \Theta \left(
\left\{ \begin{array}{ll}
\prod_{j = \floor{n q} + 1}^{\floor{n p}} \frac{1}{1 - P_j}, & q \le p, \\
1, & q > p,
\end{array} \right.
\right),
\notag
\\
\quad \kappa_{\textup{na\"ive}}
&= \Theta \left(
2^{\floor{n p}} \max \left\{ 1, \max_{i \ge \floor{n p} + 2}\prod_{j = \floor{n p} + 1}^{i - 1} 2 (1 - P_j) \right\} \right)
,
\label{eq_kappa}
\end{align}}
where $\kappa_{\textup{curl}}$ and $\kappa_{\textup{na\"ive}}$ are the values of $\kappa$ induced by the curriculum sampling policy and the na\"ive random policy, respectively.
\end{theorem}
{\ifnum0>1
\fi}
To understand how curriculum learning influences $\kappa$, we apply Thm.~\ref{thm_kappa_sp} to three concrete cases.
They show that, when the state distribution induced by the optimal policy in the small problem is similar to that in the original large problem, then a single-step curriculum suffices (cf. \ding{175} of Rem.\,\ref{rem:thm_main}).
\textbf{The classical case: an exponential improvement.}
We study the classical SP first, where all the $n!$ permutations are sampled with equal probability.
The probability series for this case is $P_i = \frac{1}{i}$.
Substituting them into
Eq.\,\ref{eq_kappa} directly gives:
\begin{align*}
\kappa_{\textup{curl}}
= \left\{ \begin{array}{ll}
\frac{\floor{n / \textup{e}}}{\floor{n q}}, & q \le \frac{1}{\textup{e}}, \\
1, & q > \frac{1}{\textup{e}},
\end{array} \right. \quad
\kappa_{\textup{na\"ive}}
= 2^{n - 1} \frac{\floor{n / \textup{e}}}{n - 1}.
\end{align*}
Except for the corner case where $q < \frac{1}{n}$, we have that $\kappa_{\textup{curl}} = O(n)$ while $\kappa_{\textup{na\"ive}} = \Omega(2^n) $.
Notice that any distribution with $P_i \le \frac{1}{i}$ leads to an exponential improvement.
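As a quick numerical illustration of these closed forms (our own script; constants hidden in the $\Theta(\cdot)$ are omitted, and the corner case $q < \frac{1}{n}$ is excluded):
\begin{verbatim}
import math

def kappas_classical_sp(n, q):
    """kappa_curl vs kappa_naive for the classical SP (P_i = 1/i), up to constants."""
    p_idx = math.floor(n / math.e)     # floor(n p) with the optimal threshold p = 1/e
    q_idx = math.floor(n * q)          # floor(n q) for the curriculum sampler
    assert q_idx >= 1, "corner case q < 1/n excluded (see text)"
    kappa_curl = p_idx / q_idx if q <= 1 / math.e else 1.0
    kappa_naive = 2 ** (n - 1) * p_idx / (n - 1)
    return kappa_curl, kappa_naive

print(kappas_classical_sp(50, 0.2))    # roughly (1.8, 2.1e14): O(n) vs Omega(2^n)
\end{verbatim}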
\textbf{A more general case.}
Now we try to loosen the condition where $P_i \le \frac{1}{i}$. Let us consider the case where $P_i \le \frac{1}{2}$ for $i \ge 2$ (by definition $P_1$ is always equal to $1$).
Eq.\,\ref{eq_kappa} now becomes:
\begin{align*}
\kappa_{\textup{curl}}
\le \left\{ \begin{array}{ll}
2^{\floor{n p} - \floor{n q}}, & q \le p, \\
1, & q > p,
\end{array} \right. \quad
\kappa_{\textup{na\"ive}}
\ge 2^{\floor{n p}}.
\end{align*}
Clearly, $\kappa_{\textup{curl}} \le \kappa_{\textup{na\"ive}}$ always holds.
\rebut{When $q$ is close to $p$,} the difference is \rebut{exponential in $\floor{n q}$}.
\textbf{Failure mode of Curriculum Learning.}
Lastly we show further relaxing the assumption on $P_i$ leads to failure cases.
The extreme case is that all $P_i = 1$, i.e., the best candidate always comes as the last one.
Suppose $q < \frac{n - 1}{n}$, then $d^{\pi_q} \left( \frac{n}{n} \right) = 0$.
Hence $\kappa_{\textup{curl}} = \infty$, larger than $\kappa_{\textup{na\"ive}} = 2^{n - 1}$.
From
Eq.\,\ref{eq_kappa}, $\kappa_{\textup{na\"ive}} \le 2^{n - 1}$. Similar as Sec.\,3 of \citet{paper_sp_theory}, the optimal threshold $p$ satisfies:
\begin{align*}
\sum_{i = \floor{n p} + 2}^n \frac{P_i}{1 - P_i}
\le 1
< \sum_{i = \floor{n p} + 1}^n \frac{P_i}{1 - P_i}.
\end{align*}
So letting $P_n > \frac{1}{2}$ results in $p \in [\frac{n - 1}{n}, 1)$. Further, if $q < \frac{n - 1}{n}$ and $P_j > 1 - 2^{- \frac{n}{n - \floor{n q} - 1}}$ for any $\floor{n q} + 1 \le j \le n - 1$, then from
Eq.\,\ref{eq_kappa}, $\kappa_{\textup{curl}} > 2^n > \kappa_{\textup{na\"ive}}$.
This means that Curriculum Learning can always be defeated by an adversarially chosen instance distribution; in such cases there is hardly any reasonable curriculum.
\begin{remark}
Here we only provide theoretical explanations for SP when $samp =$ \texttt{pi\_s}, because $\kappa$ is highly problem-dependent, and the analytical form of $\kappa$ is tractable only when the sampler is fixed.
For $samp =$ \texttt{pi\_t} and other CO problems such as OKD, however, we do not have analytical forms, so we resort to empirical studies (Sec.\,\ref{sec_exp_okd}).
\end{remark}
{\ifnum0=2
\fi}
\section{Experiments}
{\ifnum0=2
\fi}
\label{sec_exp}
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{plots/plot_SP_2018011309.eps}
{\ifnum0=2
\fi}
{\ifnum0=3
\fi}
\caption{
One experiment of SP.
\rebut{The $x$-axis is the number of trajectories, i.e., the number of episodes $\times$ horizon $\times$ batch size.}
\textbf{Dashed lines represent only final phase training and solid lines represent Curriculum Learning.} The shadowed area shows the $95 \%$ confidence interval for the expectation.
The explanation for different modes can be found in Sec.\,\ref{sec_learn_proc}.
The reference policy is the optimal threshold policy.
}
\label{fig_SP}
\end{figure*}
\begin{figure*}[!t]
{\ifnum0=3
\fi}
\centering
\includegraphics[width=\linewidth]{plots/plot_OKD_uniform.eps}
{\ifnum0=2
\fi}
{\ifnum0=3
\fi}
\caption{
One experiment of OKD. Legend description is the same as that of Fig.\,\ref{fig_SP}. The reference policy is the bang-per-buck algorithm for Online Knapsack (Sec.\,3.1 of \citet{paper_new_dog}).
}
{\ifnum0=2
\fi}
{\ifnum0=3
\fi}
\label{fig_OKD}
\end{figure*}
The experiments' formulations are modified from \citet{paper_new_dog}.
Due to page limit, more formulation details and results are presented in Sec.\,\ref{sec_full_exp}.
In Curriculum Learning the entire training process splits into two phases.
We call the training on curriculum (small scale instances) ``warm-up phase'' and the training on large scale instances ``final phase''.
We ran more than one experiment for each problem.
Each experiment contains multiple training runs to show the effect of different samplers and regularization coefficients.
To highlight the effect of Curriculum Learning, we omit the results regarding regularization, and they can be found in supplementary files.
All the trainings in the same experiment have \textup{e}mph{the same distributions} over LMDPs for final phase and warm-up phase (if any), respectively.
\textbf{Secretary Problem.} \label{sec_exp_sp}
We show one of the four experiments in Fig.\,\ref{fig_SP}.
Aside from reward and $\ln \kappa$, we plot the weighted average of $\textup{err}_t$ according to Thm.\,\ref{thm_main}: avg$(\textup{err}_t) = \frac{\sum_{i = 0}^t (1 - \eta \lambda)^{t - i} \textup{err}_i}{\sum_{i' = 0}^t (1 - \eta \lambda)^{t - i'}}$.
All the instance distributions are generated from parameterized series $\{ P_n \}$ with fixed random seeds, which guarantees reproducibility and comparability.
There is no explicit relation between the curriculum and the target environment, so the curriculum can be viewed as \emph{random and independent}.
The experiments clearly demonstrate that curriculum learning boosts the performance by a large margin and indeed dramatically reduces $\kappa$, even when the curriculum is randomly generated.
\textbf{Online Knapsack (decision version).} \label{sec_exp_okd}
We show one of the three experiments in Fig.\,\ref{fig_OKD}.
$\ln \kappa$ and avg($\textup{err}_t$) are with respect to the reference policy, a bang-per-buck algorithm, which is not the optimal policy.
Thus, they are only for reference.
The curriculum generation is also parameterized, random and independent of the target environment.
The experiments again demonstrate the effectiveness of curriculum learning, which indeed dramatically reduces $\kappa$.
{\ifnum0=2
\fi}
\section{Conclusion}
{\ifnum0=2
\fi}
We showed that \rebut{online} CO problems can be naturally formulated as LMDPs, and we analyzed the convergence rate of NPG for LMDPs.
Our theory shows that the main benefit of curriculum learning is finding a stronger sampling strategy; in particular, for the standard SP any curriculum exponentially improves the convergence rate.
Our empirical results also corroborated our findings.
Our work is the first attempt to systematically study techniques devoted to using RL to \rebut{tackle online} CO problems, which we believe is a fruitful direction worth further investigation.
\appendix
\part{Appendix}
\section{Skipped algorithms}
\label{sec_defer_algo}
In this section, we present the algorithms skipped in the main text.
Alg.\,\ref{alg_npg_full} is the full version of Alg.\,\ref{alg_npg}.
Alg.\,\ref{alg_samp} is the sampling function.
\begin{algorithm}[ht]
\caption{\texttt{NPG}: Sample-based NPG (full version).}
\label{alg_npg_full}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Environment $E$; learning rate $\eta$; episode number $T$; batch size $N$; initialization $\theta_0$; sampler $\pi_s$; regularization coefficient $\lambda$; entropy clip bound $U$; optimization domain $\mathcal{G}$.
\FOR{$t \gets 0, 1, \ldots, T - 1$}
\STATE Initialize $\wh{F}_t \gets 0^{d \times d}, \wh{\nabla}_t \gets 0^d$.
\FOR{$n \gets 0, 1, \ldots, N - 1$}
\FOR{$h \gets 0, 1, \ldots, H - 1$}
\IF{$\pi_s$ is not None}
\STATE $s_h, a_h, \wh{A}_{H - h} (s_h, a_h) \gets$ {\ifnum0=1 \\ \quad \fi}
\texttt{Sample} $(E, \pi_s, \textup{True}, \pi_t, h, \lambda, U)$ (see Alg.\,\ref{alg_samp}). \\
\algocomment{$s, a \sim \wt{d}_{m, h}^{\pi_s}$, estimate $A_{m, H - h}^{t, \lambda} (s, a)$.}
\ELSE
\STATE $s_h, a_h, \wh{A}_{H - h} (s_h, a_h) \gets$ {\ifnum0=1 \\ \quad \fi}
\texttt{Sample} $(E, \pi_t, \textup{False}, \pi_t, h, \lambda, U)$. \\
\algocomment{$s, a \sim d_{m, h}^{\theta_t}$, estimate $A_{m, H - h}^{t, \lambda} (s, a)$.}
\ENDIF
\ENDFOR
\STATE Update:
\begin{align*}
\wh{F}_t &\gets \wh{F}_t + \sum_{h = 0}^{H - 1} \nabla_\theta \ln \pi_{\theta_t} (a_h | s_h) \left( \nabla_\theta \ln \pi_{\theta_t} (a_h | s_h) \right)^\top, \\
\wh{\nabla}_t &\gets \wh{\nabla}_t + \sum_{h = 0}^{H - 1} \wh{A}_{H - h} (s_h, a_h) \nabla_\theta \ln \pi_{\theta_t} (a_h | s_h).
\end{align*}
\ENDFOR
\STATE Call any solver to get $\wh{g}_t \gets \argmin_{g \in \mathcal{G}} g^\top \wh{F}_t g - 2 g^\top \wh{\nabla}_t$.
\STATE Update $\theta_{t + 1} \gets \theta_t + \eta \wh{g}_t$.
\ENDFOR
\STATE {\bfseries Return:} $\theta_T$.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[ht]
\caption{\texttt{Sample}: Sampler for $s \sim d_{m, h}^{\pi_{\textup{samp}}}$ where $m \sim $ Multinomial $(w_1, \ldots, w_M)$, $a \sim \textup{Unif}_{\mathcal{A}}$ if $unif =$ True and $a \sim \pi_{\textup{samp}} (\cdot | s)$ otherwise, and estimate of $A_{m, H - h}^{t, \lambda} (s, a)$.}
\label{alg_samp}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Environment $E$; sampler policy $\pi_{\textup{samp}}$; whether to sample uniform actions after state $unif$; current policy $\pi_t$; time step $h$; regularization coefficient $\lambda$; entropy clip bound $U$.
\STATE $E$.reset().
\FOR{$i \gets 0, 1, \ldots, h - 1$}
\STATE $s_i \gets E$.get\_state().
\STATE Sample action $a_i \sim \pi_{\textup{samp}} (\cdot | s_i)$ and $E$.execute($a_i$).
\ENDFOR
\STATE $s_h \gets E$.get\_state().
\IF{$unif$ = True}
\STATE $a_h \sim \textup{Unif}_{\mathcal{A}}$.
\ELSE
\STATE $a_h \sim \pi_{\textup{samp}} (\cdot | s_h)$.
\ENDIF
\STATE $(s, a) \gets (s_h, a_h)$.
\STATE Get a random number $p \sim$ $\textup{Unif}[0, 1]$.
\IF{$p < \frac{1}{2}$}
\STATE Override $a_h \sim \pi_t (\cdot | s_h)$.
\STATE Set importance weight $C \gets -2$.
\STATE $r_h \gets E$.execute($a_h$).
\STATE Initialize cumulative reward $R \gets r_h + \lambda \mathcal{H} (\pi_t (\cdot | s_h))$.
\ELSE
\STATE $C \gets 2$.
\STATE $r_h \gets E$.execute($a_h$).
\STATE $R \gets r_h + \lambda \min \{ \ln \frac{1}{\pi_t (a_h | s_h)}, U \}$.
\ENDIF
\FOR{$i \gets h + 1, h + 2, \ldots, H - 1$}
\STATE $s_i \gets E$.get\_state().
\STATE $a_i \sim \pi_t (\cdot | s_i)$ and $r_i \gets E$.execute($a_i$).
\STATE $R \gets R + r_i + \lambda \mathcal{H} (\pi_t (\cdot | s_i))$.
\ENDFOR
\STATE {\bfseries Return:} $s, a, \wh{A}_{H - h}^{t, \lambda} (s, a) = C R$.
\end{algorithmic}
\end{algorithm}
\section{Proof of the main results}
\subsection{Performance of Natural Policy Gradient for LMDP} \label{sec_proof_main_thm}
First we give the skipped definition of the Lyapunov potential function $\Phi$, then prove Thm.\,\ref{thm_main}.
\begin{definition}[Lyapunov Potential Function \citep{paper_npg_logl_reg}] \label{def:lyapunov_potential}
We define the potential function $\Phi: \Pi \to \mathbb{R}$ as follows: for any $\pi \in \Pi$,
\begin{align*}
\Phi(\pi)
= \sum_{m = 1}^M w_m \sum_{h=0}^{H - 1} \mathop{\mathbb{E}}_{(s, a) \sim d_{m, h}^\star} \left[ \ln \frac{\pi^\star (a | s)}{\pi (a | s)} \right].
\end{align*}
\end{definition}
\begin{ntheorem} {\ref{thm_main}} [Restatement of Thm.\,\ref{thm_main}]
With Def.\,\ref{asp_main}, \ref{def:err_t} and \ref{def:lyapunov_potential}, our algorithm enjoys the following performance bound:
\begin{align*}
\mathop{\mathbb{E}} \left[ \min_{0 \le t \le T} V^{\star, \lambda} - V^{t, \lambda} \right]
&\le \frac{\lambda (1 - \eta \lambda)^{T + 1} \Phi (\pi_0)}{1 - (1 - \eta \lambda)^{T + 1}} + \eta \frac{B^2 G^2}{2} + \frac{\sum_{t = 0}^T (1 - \eta \lambda)^{T - t} \mathop{\mathbb{E}}[\textup{err}_t]}{\sum_{t' = 0}^T (1 - \eta \lambda)^{T - t'}} \\
&\le \frac{\lambda (1 - \eta \lambda)^{T + 1} \Phi (\pi_0)}{1 - (1 - \eta \lambda)^{T + 1}} + \eta \frac{B^2 G^2}{2} + \sqrt{H \epsilon_{\textup{bias}}} + \sqrt{H \kappa \epsilon_{\textup{stat}}}.
\end{align*}
\end{ntheorem}
\begin{proof}
Here we introduce shorthands for the sub-optimality gap and the potential function: $\Delta_t := V^{\star, \lambda} - V^{t, \lambda}$ and $\Phi_t := \Phi (\pi_t)$. From Lem.\,\ref{lem_lyapunov_drift} we have
\begin{align*}
\eta \Delta_t \le (1 - \eta \lambda) \Phi_t - \Phi_{t + 1} + \eta \textup{err}_t + \eta^2 \frac{B^2 G^2}{2}.
\end{align*}
Taking expectation over the update weights, we have
\begin{align*}
\mathop{\mathbb{E}} [\eta \Delta_t] \le (1 - \eta \lambda) \mathop{\mathbb{E}} [\Phi_t] - \mathop{\mathbb{E}} [\Phi_{t + 1}] + \eta \mathop{\mathbb{E}}[\textup{err}_t] + \eta^2 \frac{B^2 G^2}{2}.
\end{align*}
Thus,
\begin{align*}
&\mathop{\mathbb{E}} \left[ \eta \sum_{t = 0}^T (1 - \eta \lambda)^{T - t} \Delta_t \right] \\
&\le \sum_{t = 0}^T (1 - \eta \lambda)^{T - t + 1} \mathop{\mathbb{E}} [\Phi_t] - \sum_{t = 0}^T (1 - \eta \lambda)^{T - t} \mathop{\mathbb{E}}[\Phi_{t + 1}] \\
&\quad + \eta \sum_{t = 0}^T (1 - \eta \lambda)^{T - t} \mathop{\mathbb{E}}[\textup{err}_t] + \eta^2 \frac{B^2 G^2}{2} \sum_{t = 0}^T (1 - \eta \lambda)^{T - t} \\
&= (1 - \eta \lambda)^{T + 1} \Phi_0 - \mathop{\mathbb{E}} [\Phi_{T + 1}] + \eta \sum_{t = 0}^T (1 - \eta \lambda)^{T - t} \mathop{\mathbb{E}}[\textup{err}_t] + \eta^2 \frac{B^2 G^2}{2} \sum_{t = 0}^T (1 - \eta \lambda)^{T - t}\\
&\le (1 - \eta \lambda)^{T + 1} \Phi_0 + \eta \sum_{t = 0}^T (1 - \eta \lambda)^{T - t} \mathop{\mathbb{E}}[\textup{err}_t] + \eta^2 \frac{B^2 G^2}{2} \sum_{t = 0}^T (1 - \eta \lambda)^{T - t},
\end{align*}
where the last step uses the fact that $\Phi(\pi) \ge 0$.
This is a weighted average, so by normalizing the coefficients,
\begin{align*}
\mathop{\mathbb{E}} \left[ \min_{0 \le t \le T} \Delta_t \right]
&\le \frac{\lambda (1 - \textup{e}ta \lambda)^{T + 1} \mathbb{P}hi_0}{1 - (1 - \textup{e}ta \lambda)^{T + 1}} + \textup{e}ta \frac{B^2 G^2}{2} + \frac{\sum_{t = 0}^T (1 - \textup{e}ta \lambda)^{T - t} \mathop{\mathbb{E}}[\textup{e}rr_t]}{\sum_{t' = 0}^T (1 - \textup{e}ta \lambda)^{T - t'}} \\
&\le \frac{\lambda (1 - \textup{e}ta \lambda)^{T + 1} \mathbb{P}hi_0}{1 - (1 - \textup{e}ta \lambda)^{T + 1}} + \textup{e}ta \frac{B^2 G^2}{2} + \sqrt{H \textup{e}psilon_{\textup{bias}}} + \sqrt{H \kappa \textup{e}psilon_{\textup{stat}}},
\textup{e}nd{align*}
where the last step comes from Lem.\,\ref{lem_approx_err} and Jensen's inequality.
This completes the proof.
\textup{e}nd{proof}
\subsection{Curriculum learning and the constant gap for Secretary Problem} \label{sec_proof_kappa_sp}
\begin{ntheorem} {\ref{thm_kappa_sp}} [Formal statement of Thm.\,\ref{thm_kappa_sp}]
For SP, set $samp =$ \texttt{pi\_s} in Alg.\,\ref{alg_curl}. Assume that each candidate is independent of the others and that the $i$-th candidate has probability $P_i$ of being the best so far (see the formulation in Sec.\,\ref{sec:lmdp} and \ref{sec_full_exp_sp}). Assume the optimal policy is a $p$-threshold policy and the sampling policy is a $q$-threshold policy. There exists a policy parameterization and quantities
\begin{align*}
k_{\textup{curl}}
= \left\{ \begin{array}{ll}
\prod_{j = \floor{n q} + 1}^{\floor{n p}} \frac{1}{1 - P_j}, & q \le p, \\
1, & q > p,
\end{array} \right. \quad
k_{\textup{na\"ive}}
= 2^{\floor{n p}} \max \left\{ 1, \max_{i \ge \floor{n p} + 2}\prod_{j = \floor{n p} + 1}^{i - 1} 2 (1 - P_j) \right\},
\end{align*}
such that $k_{\textup{curl}} \le \kappa_{\textup{curl}} \le 2 k_{\textup{curl}}$ and $k_{\textup{na\"ive}} \le \kappa_{\textup{na\"ive}} \le 2 k_{\textup{na\"ive}}$.
Here $\kappa_{\textup{curl}}$ and $\kappa_{\textup{na\"ive}}$ correspond to $\kappa$ induced by the $q$-threshold policy and the na\"ive random policy respectively.
\end{ntheorem}
\begin{proof}
We need to calculate three state-action visitation distributions: that induced by the optimal policy, $d^\star$; that induced by the sampler which is optimal for the curriculum, $\wt{d}^{\textup{curl}}$; and that induced by the na\"ive random sampler, $\wt{d}^{\textup{na\"ive}}$. This boils down to calculating the state(-action) visitation distribution under two types of policies: an arbitrary threshold policy and the na\"ive random policy.
For any policy $\pi$, denote by $d^{\pi} \left(\frac{i}{n}\right)$ the probability that the agent acting under $\pi$ visits a state with first coordinate $\frac{i}{n}$ (for either value of $x_i$). We do not need to take the terminal state $g$ into consideration, since it stays in a zero-reward loop and contributes $0$ to $L(g; \theta, d)$. We use the LMDP distribution described in Sec.\,\ref{sec_exp_sp}.
Denote by $\pi_p$ the $p$-threshold policy, i.e., accept if and only if $\frac{i}{n} > p$ and $x_i = 1$. Then
\begin{align*}
d^{\pi_p} \left( \frac{i}{n} \right)
&= \mathbb{P} (\text{reject all previous $i - 1$ states} | \pi_p) \\
&= \prod_{j = 1}^{i - 1} \left( \mathbb{P} \left(\frac{j}{n}, 1 \right) \mathbbm{1}\left[\frac{j}{n} \le p \right] + 1 - \mathbb{P} \left(\frac{j}{n}, 1 \right) \right) \\
&= \prod_{j = \floor{n p} + 1}^{i - 1} \left( 1 - \mathbb{P} \left(\frac{j}{n}, 1 \right) \right) \\
&= \prod_{j = \floor{n p} + 1}^{i - 1} ( 1 - P_j ).
\end{align*}
Denote $\pi_{\textup{na\"ive}}$ as the na\"ive random policy, i.e., accept with probability $\frac{1}{2}$ regardless of the state. Then
\begin{align*}
d^{\pi_{\textup{na\"ive}}} \left( \frac{i}{n} \right)
= \mathbb{P} (\text{reject all previous $i - 1$ states} | \pi_{\textup{na\"ive}})
= \frac{1}{2^{i - 1}}.
\end{align*}
For any $\pi$, we can see that the state visitation distribution satisfies $d^\pi \left(\frac{i}{n}, 1 \right) = P_i d^{\pi} \left(\frac{i}{n}\right)$ and $d^\pi \left(\frac{i}{n}, 0 \right) = (1 - P_i) d^{\pi} \left(\frac{i}{n}\right)$.
To exhibit the largest possible gap, we use the parameterization in which, for each state $s$, $\phi (s) =$ One-hot$(s)$. The policy is then
\begin{align*}
\pi_\theta (\textup{accept} | s) = \frac{\exp (\theta^\top \phi(s))}{\exp (\theta^\top \phi(s)) + 1}, \quad
\pi_\theta (\textup{reject} | s) = \frac{1}{\exp (\theta^\top \phi(s)) + 1}.
\end{align*}
Denoting $\pi_\theta (s) = \pi_\theta (\textup{accept} | s)$, we have
\begin{align*}
\nabla_\theta \ln \pi_\theta (\textup{accept} | s) = (1 - \pi_\theta (s)) \phi (s), \quad
\nabla_\theta \ln \pi_\theta (\textup{reject} | s) = - \pi_\theta (s) \phi (s).
\end{align*}
Now suppose the optimal threshold and the threshold learned through the curriculum are $p$ and $q$ respectively; then
\begin{align*}
\Sigma_{d^\star}^\theta
&= \sum_{s \in \mathcal{S}} d^{\pi_p} (s) \left(\pi_p (s) (1 - \pi_\theta (s))^2 + (1 - \pi_p (s)) \pi_\theta (s)^2 \right) \phi(s) \phi(s)^\top, \\
\Sigma_{\wt{d}^{\textup{curl}}}^\theta
&= \sum_{s \in \mathcal{S}} d^{\pi_q} (s) \left(\frac{1}{2} (1 - \pi_\theta (s))^2 + \frac{1}{2} \pi_\theta (s)^2 \right) \phi(s) \phi(s)^\top, \\
\Sigma_{\wt{d}^{\textup{na\"ive}}}^\theta
&= \sum_{s \in \mathcal{S}} d^{\textup{na\"ive}} (s) \left(\frac{1}{2} (1 - \pi_\theta (s))^2 + \frac{1}{2} \pi_\theta (s)^2 \right) \phi(s) \phi(s)^\top.
\end{align*}
Denote $\kappa_\clubsuit (\theta) = \sup_{x \in \mathbb{R}^d} \frac{x^\top \Sigma_{d^\star}^{\theta} x}{x^\top \Sigma_{\wt{d}^{\clubsuit}}^\theta x}$. From the parameterization we know that all $\phi(s)$ are orthogonal. Identifying $\pi_{\textup{curl}}$ with $\pi_q$, we have
\begin{align*}
\kappa_\clubsuit (\theta) = \max_{s \in \mathcal{S}} \frac{d^{\pi_p} (s) \left(\pi^\star (s) (1 - \pi_\theta (s))^2 + (1 - \pi^\star (s)) \pi_\theta (s)^2 \right)}{d^\clubsuit (s) \left(\frac{1}{2} (1 - \pi_\theta (s))^2 + \frac{1}{2} \pi_\theta (s)^2 \right)}.
\end{align*}
We can consider each $s \in \mathcal{S}$ separately because the features are orthogonal. Observe that $\pi_p (s) \in \{0, 1\}$, so for each $s \in \mathcal{S}$, its corresponding term in $\kappa_\clubsuit (\theta)$ is maximized when $\pi_\theta (s) = 1 - \pi_p (s)$, in which case it equals $2 \frac{d^{\pi_p} (s)}{d^\clubsuit (s)}$. By definition, $\kappa_\clubsuit = \max_{0 \le t \le T} \mathop{\mathbb{E}} [\kappa_\clubsuit (\theta_t)]$. Since $\theta_0 = 0^d$, we have $\kappa_\clubsuit \ge \kappa_\clubsuit (0^d)$, where $\pi_\theta (s) = \frac{1}{2}$ and the corresponding term is $\frac{d^{\pi_p} (s)}{d^\clubsuit (s)}$. So
\begin{align*}
\max_{s \in \mathcal{S}} \frac{d^{\pi_p} (s)}{d^\clubsuit (s)} \le \kappa_\clubsuit \le 2 \max_{s \in \mathcal{S}} \frac{d^{\pi_p} (s)}{d^\clubsuit (s)}.
\end{align*}
We now have an order-accurate result $k_\clubsuit = \max_{s \in \mathcal{S}} \frac{d^{\pi_p} (s)}{d^\clubsuit (s)}$ for $\kappa_\clubsuit$. Direct computation gives
\begin{align*}
k_{\textup{curl}}
&= \left\{ \begin{array}{ll}
\prod_{j = \floor{n q} + 1}^{\floor{n p}} \frac{1}{1 - P_j}, & q \le p, \\
1, & q > p,
\end{array} \right. \\
k_{\textup{na\"ive}}
&= 2^{\floor{n p}} \max \left\{ 1, \max_{i \ge \floor{n p} + 2}\prod_{j = \floor{n p} + 1}^{i - 1} 2 (1 - P_j) \right\}.
\end{align*}
This completes the proof.
\end{proof}
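As a quick numerical illustration of the two quantities (our own check, not part of the paper's experiments), they can be evaluated directly from the formulas; for the classical SP one takes $P_i = \frac{1}{i}$, and the thresholds used below are purely illustrative.
\begin{verbatim}
import numpy as np

def k_values(n, p, q, P=None):
    """Evaluate k_curl and k_naive from the theorem.  P[i] is the probability
    that candidate i is the best so far (classical SP: 1/i); p is the optimal
    threshold, q the warm-up threshold (assume q >= 1/n so that P_1 = 1 is
    never inverted)."""
    if P is None:
        P = {i: 1.0 / i for i in range(1, n + 1)}
    np_, nq = int(np.floor(n * p)), int(np.floor(n * q))
    if q <= p:
        k_curl = np.prod([1.0 / (1.0 - P[j]) for j in range(nq + 1, np_ + 1)])
    else:
        k_curl = 1.0
    inner = [np.prod([2.0 * (1.0 - P[j]) for j in range(np_ + 1, i)])
             for i in range(np_ + 2, n + 1)]
    k_naive = 2.0 ** np_ * max(1.0, max(inner) if inner else 1.0)
    return k_curl, k_naive

# Example: n = 100, optimal threshold ~ 1/e, warm-up threshold 0.3
print(k_values(100, p=1 / np.e, q=0.3))
\end{verbatim}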
\section{Full experiments} \label{sec_full_exp}
Here are all the experiments not shown in Sec.\,\ref{sec_exp}.
All experiments were run on a server with an AMD Ryzen 9 3950X CPU, an NVIDIA GeForce 2080 Super GPU, and 128\,GB of memory.
For legend description please refer to the caption of Fig.\,\ref{fig_SP}.
For experiment data (code, checkpoints, logging data and policy visualization) please refer to the supplementary files.
\para{Policy parameterization.}
Since in all the experiments there are exactly two actions, we can use $\phi(s) = \phi(s, \textup{accept}) - \phi(s, \textup{reject})$ instead of $\phi(s, \textup{accept})$ and $\phi(s, \textup{reject})$.
Now the policy is
$ \pi_\theta (\textup{accept} | s) = \frac{\textup{e}xp (\theta^\top \phi(s))}{\textup{e}xp (\theta^\top \phi(s)) + 1}$ and $\pi_\theta (\textup{reject} | s) = \frac{1}{\textup{e}xp (\theta^\top \phi(s)) + 1}.$
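The following short Python sketch (our own illustration) restates this parameterization and the resulting score function used by the NPG updates.
\begin{verbatim}
import numpy as np

def accept_prob(theta, phi_s):
    """pi_theta(accept|s) with phi_s = phi(s, accept) - phi(s, reject)."""
    return 1.0 / (1.0 + np.exp(-np.dot(theta, phi_s)))

def grad_log_pi(theta, phi_s, accept):
    """Score function: (1 - pi)*phi_s for accept, -pi*phi_s for reject."""
    p = accept_prob(theta, phi_s)
    return (1.0 - p) * phi_s if accept else -p * phi_s
\end{verbatim}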
\rebut{
\para{Training schemes.} We ran seven experiments in total, four for the Secretary Problem and three for Online Knapsack (decision version). The difference between the experiments on the same problem lies in the distribution over instances (i.e., $\{w_m\}$). In the following subsections, we describe in detail how we parameterized the distributions.
In each experiment, we ran a number of setups (eight for SP, four for OKD), each representing a combination of sampler policy, initialization policy of the final phase, and whether we used regularization.
For visual clarity, we did not plot setups with entropy regularization, but the readers can plot it using \texttt{plot.py} (comment L55-58 and uncomment L59-62) in the supplementary files.
We make a detailed list of the training schemes in Tab.\,\ref{tab_training_scheme}.
}
\begin{table}[ht]
\centering
\begin{tabular}{|p{3.2cm}|p{6.5cm}|p{3cm}|}
\toprule[2pt]
\textbf{Abbreviation} & \textbf{Detailed setup} & \textbf{Script} \\
\midrule[1pt]
\texttt{fix\_samp\_curl} & \textbf{Fix}ed \textbf{samp}ler \textbf{cur}riculum \textbf{l}earning. In the warm-up phase, train a policy $\pi_s$ from scratch (with zero initialization in parameters) using a small environment $E'$. In the final phase, change to the true environment $E$, use $\pi_s$ as the sampler policy to train a policy from scratch. & Run Alg.\,\ref{alg_curl} with $samp = \texttt{pi\_s}$ and $\lambda = 0$. \\
\midrule[0.5pt]
\texttt{fix\_samp\_curl\_reg} & The same as \texttt{fix\_samp\_curl}, but add entropy \textbf{reg}ularization to both phases. & Run Alg.\,\ref{alg_curl} with $samp = \texttt{pi\_s}$ and $\lambda \ne 0$. \\
\midrule[0.5pt]
\texttt{direct} & \textbf{Direct} learning. Only the final phase. Train a policy from scratch directly in $E$. & Run Alg.\,\ref{alg_npg} with $\theta_0 = 0^d$, $\pi_s =$ None and $\lambda = 0$. \\
\midrule[0.5pt]
\texttt{direct\_reg} & The same as \texttt{direct}, but add entropy \textbf{reg}ularization. & Run Alg.\,\ref{alg_npg} with $\theta_0 = 0^d$, $\pi_s =$ None and $\lambda \ne 0$. \\
\midrule[0.5pt]
\texttt{naive\_samp} & Learning with the \textbf{na\"ive} \textbf{samp}ler. Only the final phase. Use the na\"ive random policy as the sampler to train a policy from scratch in $E$. & Run Alg.\,\ref{alg_npg} with $\theta_0 = 0^d$, $\pi_s =$ na\"ive random policy and $\lambda = 0$. \\
\midrule[0.5pt]
\texttt{naive\_samp\_reg} & The same as \texttt{naive\_samp}, but add entropy \textbf{reg}ularization. & Run Alg.\,\ref{alg_npg} with $\theta_0 = 0^d$, $\pi_s =$ na\"ive random policy and $\lambda \ne 0$. \\
\midrule[0.5pt]
\texttt{curl} & \textbf{Cur}riculum \textbf{l}earning. In the warm-up phase, train a policy $\pi_s$ from scratch in $E'$. In the final phase, change to $E$ and continue on training $\pi_s$. & Run Alg.\,\ref{alg_curl} with $samp = \texttt{pi\_t}$ and $\lambda = 0$. \\
\midrule[0.5pt]
\texttt{curl\_reg} & The same as \texttt{curl}, but add entropy \textbf{reg}ularization. & Run Alg.\,\ref{alg_curl} with $samp = \texttt{pi\_t}$ and $\lambda \ne 0$. \\
\midrule[0.5pt]
\texttt{reference} & This is the \textbf{reference} policy. For SP, it is exactly the optimal policy since it can be calculated. For OKD, it is a bang-per-buck policy and is not the optimal policy (whose exact form is not clear). & N/A \\
\bottomrule[2pt]
\textup{e}nd{tabular}
\caption{Detailed setups for each training scheme.}
\label{tab_training_scheme}
\textup{e}nd{table}
\subsection{Secretary Problem}
\label{sec_full_exp_sp}
\para{State and action spaces.} All states with $X_i > 1$ are indistinguishable to the agent, so only the indicator $x_i = \mathbbm{1}[X_i = 1]$ matters. To make the problem ``scale-invariant'', we use $\frac{i}{n}$ to represent $i$. So the states are $(\frac{i}{n}, x_i = \mathbbm{1}[X_i = 1])$. There is an additional terminal state $g = (0, 0)$. For each state, the agent can either accept or reject.
\para{Transition and reward.} Any action in $g$ leads back to $g$. Once the agent accepts the $i$-th candidate, the state transits into $g$, and the reward is $1$ if candidate $i$ is the best in the instance. If the agent rejects, the state goes to $(\frac{i + 1}{n}, x_{i + 1})$ if $i < n$ and to $g$ if $i = n$. In all other cases, the reward is $0$.
\para{Feature mapping.} Recall that all states are of the form $(f, x)$ where $f \in [0, 1],\ x \in \{0, 1\}$. We set a degree $d_0$ and the feature mapping is constructed as the collection of polynomial bases with degree less than $d_0$ ($d = 2 d_0$):
\begin{align*}
\phi(f, x) = (1, f, \ldots, f^{d_0 - 1}, x, f x, \ldots, f^{d_0 - 1} x).
\end{align*}
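A minimal sketch of this feature map (our own illustration; the ordering of the coordinates is one possible convention):
\begin{verbatim}
import numpy as np

def sp_features(f, x, d0):
    """Polynomial features for SP states (f, x): powers of f of degree < d0,
    duplicated with the indicator x, so the dimension is d = 2 * d0."""
    powers = np.array([f ** k for k in range(d0)])
    return np.concatenate([powers, x * powers])

# Example: state (i/n, x_i) = (0.37, 1) with d0 = 4 gives an 8-dim feature.
print(sp_features(0.37, 1, 4))
\end{verbatim}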
\para{LMDP distribution.} We model the distribution as follows: for each $i$, $x_i = 1$ with probability $P_i$, independently across $i$. By definition, $P_1 = 1$, while the other $P_i$ can be arbitrary. The classical SP satisfies $P_i = \frac{1}{i}$. We also experimented on three other distributions (so in total there are four experiments), each generated by drawing $p_2, p_3, \ldots, p_n \iid \textup{Unif}_{[0, 1]}$ and setting $P_i = \frac{1}{i^{2 p_i + 0.25}}$.
For each experiment, we run eight setups, each with different combinations of sampler policies, initialization policies of the final phase, and the value of regularization coefficient $\lambda$. For the warm-up phases we set $n = 10$ and for final phases $n = 100$.
\para{Results.} Fig.\,\ref{fig_SP_uniform_trunc} (with its full view Fig.\,\ref{fig_SP_uniform}), Fig.\,\ref{fig_SP_20000308}, and Fig.\,\ref{fig_SP_19283746}, along with Fig.\,\ref{fig_SP} (with seed 2018011309), show the four SP experiments. They share a learning rate of $0.2$, a batch size of $100$ per horizon step, final $n = 100$ and warm-up $n = 10$ (when curriculum learning was applied). \footnote{All four trainings shown in the figures have counterparts with regularization ($\lambda = 0.01$). Check the supplementary files and use TensorBoard for visualization.}
The experiment in Fig.\,\ref{fig_SP_uniform_trunc} was done in the classical SP environment, i.e., all permutations have probability $\frac{1}{n!}$ of being sampled. The experiments in Fig.\,\ref{fig_SP}, Fig.\,\ref{fig_SP_20000308} and Fig.\,\ref{fig_SP_19283746} were done with other distributions (see the LMDP distribution of Sec.\,\ref{sec_exp_sp}): the only differences are the random seeds, which we fixed and used to generate the $P_i$s for reproducibility.
The experiment on classical SP was run until the direct training with $n = 100$ converged, while all other experiments were run for a maximum of $30000$ episodes (hence $T H b = 30000 \times 100 \times 100 = 3 \times 10^8$ samples).
The optimal policy was derived by dynamic programming.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{plots/plot_SP_uniform_300000000.eps}
\caption{Classical SP, truncated to $3 \times 10^8$ samples.}
\label{fig_SP_uniform_trunc}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{plots/plot_SP_uniform.eps}
\caption{Classical SP, full view.}
\label{fig_SP_uniform}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{plots/plot_SP_20000308.eps}
\caption{SP, with seed 20000308.}
\label{fig_SP_20000308}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{plots/plot_SP_19283746.eps}
\caption{SP, with seed 19283746.}
\label{fig_SP_19283746}
\end{figure}
\subsection{Online Knapsack (decision version)} \label{sec_full_exp_okd}
\para{State and action spaces.} The states are represented as
\begin{align*}
\left( \frac{i}{n}, s_i, v_i, \frac{\sum_{j = 1}^{i - 1} x_j s_j}{B}, \frac{\sum_{j = 1}^{i - 1} x_j v_j}{V} \right),
\end{align*}
where $x_j = \mathbbm{1}[\text{item $j$ was successfully chosen}]$ for $1 \le j \le i - 1$ (in the instance). There is an additional terminal state $g = (0, 0, 0, 0, 0)$. For each state (including $g$ for simplicity), the agent can either accept or reject.
\para{Transition and reward.} The transition is implied by the definition of the problem. Any action in the terminal state $g$ leads back to $g$. An item is successfully chosen if and only if the agent accepts it and the remaining budget is sufficient. A reward of $1$ is given only the first time $\sum_{j = 1}^i x_j v_j \ge V$, after which the state goes to $g$. In all other cases, the reward is $0$.
\para{Feature mapping.} Suppose the state is $(f, s, v, r, q)$. We set a degree $d_0$ and the feature mapping is constructed as the collection of polynomial bases with degree less than $d_0$ ($d = d_0^5$): $\phi(f, s, v, r, q) = (f^{i_f} s^{i_s} v^{i_v} r^{i_r} q^{i_q})_{i_f, i_s, i_v, i_r, i_q}$ where $i_\clubsuit \in \{0, 1, \ldots, d_0 - 1\}$.
\para{LMDP distribution.} In Sec.\,\ref{sec_setting_okd} the values and sizes are sampled from $F_v$ and $F_s$. If $F_v$ or $F_s$ is not $\textup{Unif}_{[0, 1]}$, we model the distribution as follows: first set a granularity $gran$ and draw $gran$ numbers $p_1, p_2, \ldots, p_{gran} \iid \textup{Unif}_{[0, 1]}$, where $p_i$ represents the (unnormalized) probability that $x \in (\frac{i - 1}{gran}, \frac{i}{gran})$. To sample, we take $i \sim $ Multinomial$(p_1, p_2, \ldots, p_{gran})$ and return $x \sim \frac{i - 1 + \textup{Unif}_{[0, 1]}}{gran}$.
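A small Python sketch of this sampler (our own illustration; the granularity and seed below are arbitrary):
\begin{verbatim}
import numpy as np

def make_sampler(gran, rng):
    """Piecewise-uniform distribution on [0, 1]: draw unnormalized bin weights
    p_1..p_gran ~ Unif[0,1] once, then sample a bin and a uniform offset."""
    p = rng.uniform(size=gran)
    p = p / p.sum()
    def sample():
        i = rng.choice(gran, p=p) + 1             # bin index in {1, ..., gran}
        return (i - 1 + rng.uniform()) / gran     # uniform within the bin
    return sample

rng = np.random.default_rng(0)
sample_value = make_sampler(gran=10, rng=rng)     # e.g. a randomized F_v
print([round(sample_value(), 3) for _ in range(5)])
\end{verbatim}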
For each experiment, we ran four setups, each with a different combination of sampler policy and initialization policy of the final phase. In all experiments we set $n = 10$ for the warm-up phase and $n = 100$ for the final phase, while $B$ and $V$ vary. In one experiment, $\frac{B}{n}$ is kept close between the warm-up and final phases while $\frac{V}{B}$ increases from warm-up to final.
\para{Results.} Fig.\,\ref{fig_OKD_2018011309} and Fig.\,\ref{fig_OKD_20000308}, along with Fig.\,\ref{fig_OKD} (with $F_v = F_s = \textup{Unif}_{[0, 1]}$), show the three OKD experiments. They share a learning rate of $0.1$, a batch size of $100$ per horizon step, final $n = 100$ and warm-up $n = 10$ (when curriculum learning was applied).
The experiments in Fig.\,\ref{fig_OKD_2018011309} and Fig.\,\ref{fig_OKD_20000308} were done with other value and size distributions (see the LMDP distribution of Sec.\,\ref{sec_exp_okd}): the only differences are the random seeds, which we fixed and used to generate $F_v$ and $F_s$ for reproducibility.
All experiments were run for a maximum of $50000$ episodes (hence $T H b = 50000 \times 100 \times 100 = 5 \times 10^8$ samples).
The reference policy is a bang-per-buck algorithm (Sec.\,3.1 of \citet{paper_new_dog}): given a threshold $r$, accept the $i$-th item if $\frac{v_i}{s_i} \ge r$. We searched for the optimal $r$ with respect to Online Knapsack rather than OKD because we found that, in general, the Online Knapsack reward is unimodal in $r$ with no flat regions, so ternary search applies directly (whereas the OKD reward does contain flat regions).
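The threshold search can be sketched as follows (our own illustration; \texttt{reward\_of} stands for a user-supplied Monte Carlo estimate of the average Online Knapsack reward of the bang-per-buck policy, and the bounds and iteration count are arbitrary). With a noisy estimate this is a heuristic, so one would average over many instances per evaluation.
\begin{verbatim}
def ternary_search_threshold(reward_of, lo=0.0, hi=10.0, iters=60):
    """Ternary search for the threshold r maximizing a unimodal reward
    estimate reward_of(r), e.g. the average reward of the policy
    'accept item i iff v_i / s_i >= r'."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if reward_of(m1) < reward_of(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2.0
\end{verbatim}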
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{plots/plot_OKD_2018011309.eps}
\caption{OKD, with seed 2018011309.}
\label{fig_OKD_2018011309}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{plots/plot_OKD_20000308.eps}
\caption{OKD, with seed 20000308.}
\label{fig_OKD_20000308}
\end{figure}
\section{Technical details and lemmas}
\subsection{Natural Policy Gradient for LMDP}
\label{sec_policy_gradient}
This section is a complement to Sec.\,\ref{sec_learn_proc}.
We give details about the correctness of Natural Policy Gradient for LMDP.
Thm.\,\ref{thm_policy_gradient} is the finite-horizon Policy Gradient Theorem for LMDP, which takes the mixing weight $\{w_m\}$ into consideration.
According to \citet{paper_npg}, the unconstrained, full-information NPG update weight satisfies $F(\theta_t) g_t = \nabla_\theta V^{t, \lambda}$.
Lem.\,\ref{lem_sol_pinv} and Lem.\,\ref{lem_npg_upd_rule} together show that this update is equivalent to finding a minimizer of the compatible function approximation loss (Def.\,\ref{def_func_approx_loss}).
\begin{theorem} [Policy Gradient Theorem for LMDP] \label{thm_policy_gradient}
For any policy $\pi_\theta$ parameterized by $\theta$, and any $1 \le m \le M$,
\begin{align*}
\nabla_\theta \left( \mathop{\mathbb{E}}_{s_0 \sim \nu_m} \left[ V_{m, H}^{\pi_\theta, \lambda} (s_0) \right] \right)
= \sum_{h = 1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H - h}^{\theta}} \left[ Q_{m, h}^{\pi_\theta, \lambda} (s, a) \nabla_\theta \ln \pi_\theta (a | s) \right].
\end{align*}
As a result,
\begin{align*}
\nabla_\theta V^{\pi_\theta, \lambda}
= \sum_{m = 1}^M w_m \sum_{h = 1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H - h}^{\theta}} \left[ Q_{m, h}^{\pi_\theta, \lambda} (s, a) \nabla_\theta \ln \pi_\theta (a | s) \right].
\end{align*}
\end{theorem}
\begin{proof}
For any $1 \le h \le H$ and $s \in \mathcal{S}$, since $V_{m, h}^{\pi_\theta, \lambda} (s) = \sum_{a \in \mathcal{A}} \pi_\theta (a | s) Q_{m, h}^{\pi_\theta, \lambda} (s, a)$, we have
\begin{align*}
\nabla_\theta V_{m, h}^{\pi_\theta, \lambda} (s) = \sum_{a \in \mathcal{A}} \left( Q_{m, h}^{\pi_\theta, \lambda} (s, a) \nabla_\theta \pi_\theta (a | s) + \pi_\theta (a | s) \nabla_\theta Q_{m, h}^{\pi_\theta, \lambda} (s, a) \right).
\end{align*}
Hence
\begin{align*}
\sum_{h = 1}^H \sum_{s \in \mathcal{S}} d_{m, H - h}^{\theta} (s) \nabla_\theta V_{m, h}^{\pi_\theta, \lambda} (s)
&= \sum_{h = 1}^H \sum_{s \in \mathcal{S}} d_{m, H - h}^{\theta} (s) \sum_{a \in \mathcal{A}} \left( Q_{m, h}^{\pi_\theta, \lambda} (s, a) \nabla_\theta \pi_\theta (a | s) + \pi_\theta (a | s) \nabla_\theta Q_{m, h}^{\pi_\theta, \lambda} (s, a) \right) \\
&= \sum_{h = 1}^H \sum_{s \in \mathcal{S}} d_{m, H - h}^{\theta} (s) \sum_{a \in \mathcal{A}} \pi_\theta (a | s) Q_{m, h}^{\pi_\theta, \lambda} (s, a) \nabla_\theta \ln \pi_\theta (a | s) \\
&\quad + \sum_{h = 1}^H \sum_{s \in \mathcal{S}} d_{m, H - h}^{\theta} (s) \sum_{a \in \mathcal{A}} \pi_\theta (a | s) \nabla_\theta Q_{m, h}^{\pi_\theta, \lambda} (s, a) \\
&= \sum_{h = 1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H - h}^{\theta}} \left[ Q_{m, h}^{\pi_\theta, \lambda} (s, a) \nabla_\theta \ln \pi_\theta (a | s) \right] \\
&\quad + \sum_{h = 1}^H \sum_{s \in \mathcal{S}} d_{m, H - h}^{\theta} (s) \sum_{a \in \mathcal{A}} \pi_\theta (a | s) \nabla_\theta Q_{m, h}^{\pi_\theta, \lambda} (s, a).
\end{align*}
Next we focus on the second term. From the Bellman equation,
\begin{align*}
\nabla_\theta Q_{m, h}^{\pi_\theta, \lambda} (s, a)
&= \nabla_\theta \left( r_m (s, a) - \lambda \ln \pi_\theta (a | s) + \sum_{s' \in \mathcal{S}} P(s' | s, a) V_{m, h - 1}^{\pi_\theta, \lambda} (s') \right) \\
&= - \lambda \nabla_\theta \ln \pi_\theta (a | s) + \sum_{s' \in \mathcal{S}} P(s' | s, a) \nabla_\theta V_{m, h - 1}^{\pi_\theta, \lambda} (s').
\end{align*}
In particular, $\nabla_\theta Q_{m, 1}^{\pi_\theta, \lambda} (s, a) = - \lambda \nabla_\theta \ln \pi_\theta (a | s)$. So
\begin{align*}
& \sum_{h = 1}^H \sum_{s \in \mathcal{S}} d_{m, H - h}^{\theta} (s) \sum_{a \in \mathcal{A}} \pi_\theta (a | s) \nabla_\theta Q_{m, h}^{\pi_\theta, \lambda} (s, a) \\
&= \sum_{h = 1}^H \sum_{s \in \mathcal{S}} d_{m, H - h}^{\theta} (s) \sum_{a \in \mathcal{A}} \pi_\theta (a | s) \left( - \lambda \nabla_\theta \ln \pi_\theta (a | s) + \sum_{s' \in \mathcal{S}} P(s' | s, a) \nabla_\theta V_{m, h - 1}^{\pi_\theta, \lambda} (s') \right) \\
&= -\lambda \sum_{h = 1}^H \sum_{s \in \mathcal{S}} d_{m, H - h}^{\theta} (s) \underbrace{\sum_{a \in \mathcal{A}} \nabla_\theta \pi_\theta (a | s)}_{= \boldsymbol{0}} + \sum_{h = 2}^H \sum_{s' \in \mathcal{S}} \nabla_\theta V_{m, h - 1}^{\pi_\theta, \lambda} (s') \underbrace{\sum_{s \in \mathcal{S}} d_{m, H - h}^{\theta} (s) \sum_{a \in \mathcal{A}} \pi_\theta (a | s) P(s' | s, a)}_{= d_{m, H - h + 1}^{\theta} (s')} \\
&= \sum_{h = 2}^H \sum_{s' \in \mathcal{S}} d_{m, H - h + 1}^{\theta} (s') \nabla_\theta V_{m, h - 1}^{\pi_\theta, \lambda} (s') \\
&= \sum_{h = 1}^H \sum_{s \in \mathcal{S}} d_{m, H - h}^{\theta} (s) \nabla_\theta V_{m, h}^{\pi_\theta, \lambda} (s) - \sum_{s_0 \in \mathcal{S}} \nu_m (s_0) \nabla_\theta V_{m, H}^{\pi_\theta, \lambda} (s_0),
\end{align*}
where we used the definition of $d$ and $\nu_m$. So by rearranging the terms, we complete the proof.
\end{proof}
\begin{lemma} \label{lem_sol_pinv}
Suppose $\Gamma \in \mathbb{R}^{n \times m}, D = \mathrm{diag}(d_1, d_2, \ldots, d_m) \in \mathbb{R}^{m \times m}$ where $d_i \ge 0$ and $q \in \mathbb{R}^{m}$, then $x = (\Gamma D \Gamma^\top)^\dagger \Gamma D q$ is a solution to the equation $\Gamma D \Gamma^\top x = \Gamma D q$.
\end{lemma}
\begin{proof}
Denote $D^{1/2} = \textup{diag}(\sqrt{d_1}, \sqrt{d_2}, \ldots, \sqrt{d_m}), P = \Gamma D^{1/2}, p = D^{1/2} q$, then the equation is reduced to $P P^\top x = P p$. Suppose the singular value decomposition of $P$ is $U \Sigma V^\top$ where $U \in \mathbb{R}^{n \times n}, \Sigma \in \mathbb{R}^{n \times m}, V \in \mathbb{R}^{m \times m}$ where $U$ and $V$ are unitary, and singular values are $\sigma_1, \sigma_2, \ldots, \sigma_k$. So $P P^\top = U (\Sigma \Sigma^\top) U^\top$ and $(P P^\top)^\dagger = U (\Sigma \Sigma^\top)^\dagger U^\top$. Notice that
\begin{align*}
\Sigma \Sigma^\top = \textup{diag}(\sigma_1^2, \sigma_2^2, \ldots, \sigma_k^2, 0, \ldots, 0) \in \mathbb{R}^{n \times n},
\end{align*}
we can then derive the pseudo-inverse of this particular diagonal matrix as
\begin{align*}
(\Sigma \Sigma^\top)^\dagger = \textup{diag}(\sigma_1^{-2}, \sigma_2^{-2}, \ldots, \sigma_k^{-2}, 0, \ldots, 0).
\end{align*}
It is then easy to verify that $(\Sigma \Sigma^\top) (\Sigma \Sigma^\top)^\dagger \Sigma = \Sigma$. Finally,
\begin{align*}
P P^\top x &= (P P^\top) [(P P^\top)^\dagger P p] \\
&= U (\Sigma \Sigma^\top) U^\top U (\Sigma \Sigma^\top)^\dagger U^\top U \Sigma V^\top p \\
&= U (\Sigma \Sigma^\top) (\Sigma \Sigma^\top)^\dagger \Sigma V^\top p \\
&= U \Sigma V^\top p \\
&= P p.
\end{align*}
This completes the proof.
\end{proof}
\begin{lemma}[NPG Update Rule] \label{lem_npg_upd_rule}
The update rule $\theta \gets \theta + \eta F(\theta)^\dagger \nabla_\theta V^{\pi_\theta, \lambda}$ where
\begin{align*}
F(\theta) = \sum_{m=1}^M w_m \sum_{h=1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H-h}^{\theta}} \left[ \nabla_\theta \ln \pi_\theta (a | s) \left( \nabla_\theta \ln \pi_\theta (a | s) \right)^\top \right]
\end{align*}
is equivalent to $\theta \gets \theta + \eta g^\star$, where $g^\star$ is a minimizer of the function
\begin{align*}
L(g) = \sum_{m=1}^M w_m \sum_{h=1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H-h}^{\theta}} \left[ \left( A_{m, h}^{\pi_\theta, \lambda}(s, a) - g^\top \nabla_\theta \ln \pi_\theta (a | s) \right)^2 \right].
\end{align*}
\end{lemma}
\begin{proof}
\begin{align*}
\nabla_g L(g) = -2 \sum_{m=1}^M w_m \sum_{h=1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H-h}^{\theta}} \left[ \left( A_{m, h}^{\pi_\theta, \lambda}(s, a) - g^\top \nabla_\theta \ln \pi_\theta (a | s) \right) \nabla_\theta \ln \pi_\theta (a | s) \right].
\end{align*}
Let $g^\star$ be any minimizer of $L(g)$; then $\nabla_g L(g^\star) = \boldsymbol{0}$, hence
\begin{align*}
& \sum_{m=1}^M w_m \sum_{h=1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H-h}^{\theta}} \left[ \left( g^{\star \top} \nabla_\theta \ln \pi_\theta (a | s) \right) \nabla_\theta \ln \pi_\theta (a | s) \right] \\
&= \sum_{m=1}^M w_m \sum_{h=1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H-h}^{\theta}} \left[ A_{m, h}^{\pi_\theta, \lambda}(s, a) \nabla_\theta \ln \pi_\theta (a | s) \right] \\
&= \sum_{m=1}^M w_m \sum_{h=1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H-h}^{\theta}} \left[ Q_{m, h}^{\pi_\theta, \lambda}(s, a) \nabla_\theta \ln \pi_\theta (a | s) \right].
\end{align*}
Since $(u^\top v)\, v = (v v^\top) u$ for any vectors $u, v$, it follows that
\begin{align*}
F (\theta) g^\star = \nabla_\theta V^{\pi_\theta, \lambda}.
\end{align*}
Now we assign $1, 2, \ldots, M H S A$ as indices to all $(m, h, s, a) \in \{1, \ldots, M\} \times \{1, \ldots, H\} \times \mathcal{S} \times \mathcal{A}$, and set
\begin{align*}
\gamma_j &= \nabla_\theta \ln \pi_\theta (a | s), \\
d_j &= w_m d_{m, H - h}^{\theta}(s, a), \\
q_j &= Q_{m, h}^{\pi_\theta, \lambda}(s, a),
\end{align*}
where $j$ is the index assigned to $(m, h, s, a)$. Then $F (\theta) = \Gamma D \Gamma^\top$ and $\nabla_\theta V^{\pi_\theta, \lambda} = \Gamma D q$, where
\begin{align*}
\Gamma &= [\gamma_1, \gamma_2, \ldots, \gamma_{M H S A}] \in \mathbb{R}^{d \times M H S A}, \\
D &= \textup{diag}(d_1, d_2, \ldots, d_{M H S A}) \in \mathbb{R}^{M H S A \times M H S A}, \\
q &= [q_1, q_2, \ldots, q_{M H S A}]^\top \in \mathbb{R}^{M H S A}.
\end{align*}
We now conclude the proof by utilizing Lem.\,\ref{lem_sol_pinv}.
\end{proof}
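In practice the minimizer can be obtained as a weighted least-squares problem over the sampled data; the sketch below (our own illustration, with score vectors, advantage estimates and visitation weights stacked into arrays) returns the minimum-norm solution, which coincides with the pseudo-inverse expression of Lem.\,\ref{lem_sol_pinv}.
\begin{verbatim}
import numpy as np

def npg_update_weight(scores, advantages, weights):
    """Minimize  sum_j d_j * (A_j - g^T score_j)^2  over g.

    scores:     (N, d) rows are grad_log_pi at sampled (s, a)
    advantages: (N,)   estimated advantages at the same (s, a)
    weights:    (N,)   nonnegative visitation weights d_j
    """
    W = np.sqrt(weights)
    # minimum-norm least-squares solution = (Gamma D Gamma^T)^+ Gamma D q
    g, *_ = np.linalg.lstsq(W[:, None] * scores, W * advantages, rcond=None)
    return g
\end{verbatim}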
\subsection{Auxiliary lemmas used in the main results}
\begin{lemma}[Performance Difference Lemma]\label{lem_perf_diff}
For any two policies $\pi_1$ and $\pi_2$, and any $1 \le m \le M$,
\begin{align*}
\mathop{\mathbb{E}}_{s_0 \sim \nu_m} \left[ V_{m, H}^{\pi_1, \lambda}(s_0) - V_{m, H}^{\pi_2, \lambda}(s_0) \right]
= \sum_{h=1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H - h}^{\pi_1}} \left[ A_{m, h}^{\pi_2, \lambda}(s, a) + \lambda \ln \frac{\pi_2 (a | s)}{\pi_1 (a | s)} \right].
\end{align*}
As a result,
\begin{align*}
V^{\pi_1, \lambda} - V^{\pi_2, \lambda} = \sum_{m = 1}^M w_m \sum_{h=1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H - h}^{\pi_1}} \left[ A_{m, h}^{\pi_2, \lambda}(s, a) + \lambda \ln \frac{\pi_2 (a | s)}{\pi_1 (a | s)} \right].
\end{align*}
\end{lemma}
\begin{proof}
First we fix $s_0$.
By definition of the value function, we have
\begin{align*}
& V_{m, H}^{\pi_1, \lambda}(s_0) - V_{m, H}^{\pi_2, \lambda}(s_0) \\
&= \mathop{\mathbb{E}} \left[\left. \sum_{h = 0}^{H - 1} r_m(s_h, a_h) - \lambda \ln \pi_1 (a_h | s_h) \ \right|\ \mathcal{M}_m, \pi_1, s_0 \right] - V_{m, H}^{\pi_2, \lambda}(s_0) \\
&= \mathop{\mathbb{E}} \left[\left. \sum_{h = 0}^{H - 1} r_m(s_h, a_h) - \lambda \ln \pi_1 (a_h | s_h) + V_{m, H + 1 - h}^{\pi_2, \lambda}(s_{h + 1}) - V_{m, H - h}^{\pi_2, \lambda}(s_h) \ \right|\ \mathcal{M}_m, \pi_1, s_0 \right] \\
&= \mathop{\mathbb{E}} \left[\left. \sum_{h = 0}^{H - 1} \mathop{\mathbb{E}} \left[ \left. r_m(s_h, a_h) - \lambda \ln \pi_2 (a_h | s_h) + V_{m, H + 1 - h}^{\pi_2, \lambda}(s_{h + 1}) \ \right|\ \mathcal{M}_m, \pi_2, s_h, a_h \right] \ \right|\ \mathcal{M}_m, \pi_1, s_0 \right] \\
&\quad + \mathop{\mathbb{E}} \left[\left. \sum_{h = 0}^{H - 1} - V_{m, H - h}^{\pi_2, \lambda}(s_h) + \lambda \ln \frac{\pi_2 (a_h | s_h)}{\pi_1 (a_h | s_h)} \ \right|\ \mathcal{M}_m, \pi_1, s_0 \right],
\end{align*}
where the last step uses the law of iterated expectations. Since
\begin{align*}
\mathop{\mathbb{E}} \left[ \left. r_m(s_h, a_h) - \lambda \ln \pi_2 (a_h | s_h) + V_{m, H + 1 - h}^{\pi_2, \lambda}(s_{h + 1}) \ \right|\ \mathcal{M}_m, \pi_2, s_h, a_h \right]
= Q_{m, H - h}^{\pi_2, \lambda}(s_h, a_h),
\end{align*}
we have
\begin{align*}
V_{m, H}^{\pi_1, \lambda}(s_0) - V_{m, H}^{\pi_2, \lambda}(s_0)
&= \mathop{\mathbb{E}} \left[\left. \sum_{h = 0}^{H - 1} Q_{m, H - h}^{\pi_2, \lambda}(s_h, a_h) - V_{m, H - h}^{\pi_2, \lambda}(s_h) + \lambda \ln \frac{\pi_2 (a_h | s_h)}{\pi_1 (a_h | s_h)} \ \right|\ \mathcal{M}_m, \pi_1, s_0 \right] \\
&= \mathop{\mathbb{E}} \left[\left. \sum_{h = 0}^{H - 1} A_{m, H - h}^{\pi_2, \lambda}(s_h, a_h) + \lambda \ln \frac{\pi_2 (a_h | s_h)}{\pi_1 (a_h | s_h)} \ \right|\ \mathcal{M}_m, \pi_1, s_0 \right].
\end{align*}
By taking expectation over $s_0$, we have
\begin{align*}
\mathop{\mathbb{E}}_{s_0 \sim \nu_m} \left[ V_{m, H}^{\pi_1, \lambda}(s_0) - V_{m, H}^{\pi_2, \lambda}(s_0) \right]
&= \mathop{\mathbb{E}} \left[\left. \sum_{h = 0}^{H - 1} A_{m, H - h}^{\pi_2, \lambda}(s_h, a_h) + \lambda \ln \frac{\pi_2 (a_h | s_h)}{\pi_1 (a_h | s_h)} \ \right|\ \mathcal{M}_m, \pi_1 \right] \\
&= \sum_{h = 0}^{H - 1} \sum_{(s, a) \in \mathcal{S} \times \mathcal{A}} d_{m, h}^{\pi_1}(s, a) \left( A_{m, H - h}^{\pi_2, \lambda}(s, a) + \lambda \ln \frac{\pi_2 (a | s)}{\pi_1 (a | s)} \right).
\end{align*}
The proof is completed by reversing the order of $h$.
\end{proof}
\begin{lemma}[Lyapunov Drift] \label{lem_lyapunov_drift}
Recall definitions in Def.\,\ref{def:lyapunov_potential} and \ref{def:err_t}. We have that:
\begin{align*}
\Phi (\pi_{t + 1}) - \Phi (\pi_t)
\le - \eta \lambda \Phi (\pi_t) + \eta \textup{err}_t - \eta \left( V^{\star, \lambda} - V^{t, \lambda} \right) + \frac{\eta^2 B^2 \| g_t \|_2^2}{2}.
\end{align*}
\end{lemma}
\begin{proof}
Denote $\Phi_t := \Phi (\pi_t)$. This proof proceeds in a similar manner to that of Lem.\,6 in \citet{paper_npg_logl_reg}. By smoothness (see Rem.\,6.7 in \citet{paper_npg}),
\begin{align*}
\ln \frac{\pi_t (a | s)}{\pi_{t + 1} (a | s)}
& \le (\theta_t - \theta_{t + 1})^\top \nabla_\theta \ln \pi_t (a | s) + \frac{B^2}{2} \| \theta_{t + 1} - \theta_t \|_2^2 \\
& = -\eta g_t^\top \nabla_\theta \ln \pi_t (a | s) + \frac{\eta^2 B^2 \| g_t \|_2^2}{2}.
\end{align*}
By the definition of $\Phi$,
\begin{align*}
\Phi_{t + 1} - \Phi_t
&= \sum_{m = 1}^M w_m \sum_{h = 1}^H \mathop{\mathbb{E}}_{(s, a) \sim d_{m,H - h}^\star} \left[ \ln \frac{\pi_t (a | s)}{\pi_{t + 1} (a | s)} \right] \\
&\le -\eta \sum_{m = 1}^M w_m \sum_{h = 1}^H \mathop{\mathbb{E}}_{(s, a) \sim d_{m,H - h}^\star} \left[ g_t^\top \nabla_\theta \ln \pi_t (a | s) \right] + \frac{\eta^2 B^2 \| g_t \|_2^2}{2}.
\end{align*}
By the definition of $\textup{err}_t$, Lem.\,\ref{lem_perf_diff} and again the definition of $\Phi$, we finally have
\begin{align*}
\Phi_{t + 1} - \Phi_t
&\le \eta \sum_{m = 1}^M w_m \sum_{h = 1}^H \mathop{\mathbb{E}}_{(s, a) \sim d_{m,H - h}^\star} \left[ A_{m,h}^{t,\lambda} (s, a) - g_t^\top \nabla_\theta \ln \pi_t (a | s) \right] \\
&\quad - \eta \sum_{m = 1}^M w_m \sum_{h = 1}^H \mathop{\mathbb{E}}_{(s, a) \sim d_{m,H - h}^\star} \left[ A_{m,h}^{t,\lambda} (s, a) + \lambda \ln \frac{\pi_t (a | s)}{\pi^\star (a | s)} \right] \\
&\quad - \eta \lambda \sum_{m = 1}^M w_m \sum_{h = 1}^H \mathop{\mathbb{E}}_{(s, a) \sim d_{m,H - h}^\star} \left[ \ln \frac{\pi^\star (a | s)}{\pi_t (a | s)} \right] + \frac{\eta^2 B^2 \| g_t \|_2^2}{2} \\
&= \eta \textup{err}_t - \eta \left( V^{\star, \lambda} - V^{t, \lambda} \right) - \eta \lambda \Phi_t + \frac{\eta^2 B^2 \| g_t \|_2^2}{2},
\end{align*}
which completes the proof.
\end{proof}
\begin{lemma} \label{lem_approx_err}
Recall that $g_t^\star$ is the true minimizer of $L(g; \theta_t, d^t)$ in the domain $\mathcal{G}$. Then $\textup{err}_t$ defined in Def.\,\ref{def:err_t} satisfies
\begin{align*}
\textup{err}_t
\le \sqrt{H L(g_t^\star; \theta_t, d^\star)} + \sqrt{H \kappa (L(g_t; \theta_t, d^t) - L(g_t^\star; \theta_t, d^t)) }.
\end{align*}
\end{lemma}
\begin{proof}
The proof is similar to that of Thm.\,6.1 in \citet{paper_npg}. We make the following decomposition of $\textup{err}_t$:
\begin{align*}
\textup{err}_t
&= \underbrace{\sum_{m=1}^M w_m \sum_{h = 0}^{H - 1} \mathop{\mathbb{E}}_{(s, a) \sim d_{m,h}^\star} \left[ A_{m,h}^{t,\lambda} (s, a) - g_t^{\star \top} \nabla_\theta \ln \pi_t (a | s) \right]}_{\textup{\ding{172}}} \\
&\quad + \underbrace{\sum_{m=1}^M w_m \sum_{h = 0}^{H - 1} \mathop{\mathbb{E}}_{(s, a) \sim d_{m,h}^\star} \left[ (g_t^{\star} - g_t)^\top \nabla_\theta \ln \pi_t (a | s) \right]}_{\textup{\ding{173}}}.
\end{align*}
Since $\sum_{m=1}^M w_m \sum_{h = 0}^{H - 1} \sum_{(s, a) \in \mathcal{S} \times \mathcal{A}} d_{m,h}^\star (s, a) = H$, normalizing the coefficients and applying Jensen's inequality gives
\begin{align*}
\textup{\ding{172}}
&\le \sqrt{\sum_{m=1}^M w_m \sum_{h = 0}^{H - 1} \sum_{(s, a) \in \mathcal{S} \times \mathcal{A}} d_{m,h}^\star (s, a)} \cdot \sqrt{\sum_{m=1}^M w_m \sum_{h = 0}^{H - 1} \mathop{\mathbb{E}}_{(s, a) \sim d_{m,h}^\star} \left[ \left( A_{m,h}^{t,\lambda} (s, a) - g_t^{\star \top} \nabla_\theta \ln \pi_t (a | s) \right)^2 \right]} \\
&= \sqrt{H L(g_t^\star; \theta_t, d^\star)}.
\end{align*}
Similarly,
\begin{align*}
\textup{\ding{173}}
&\le \sqrt{H \sum_{m=1}^M w_m \sum_{h = 0}^{H - 1} \mathop{\mathbb{E}}_{(s, a) \sim d_{m,h}^\star} \left[ \left( (g_t^{\star} - g_t)^\top \nabla_\theta \ln \pi_t (a | s) \right)^2 \right]} \\
&= \sqrt{H \sum_{m=1}^M w_m \sum_{h = 0}^{H - 1} \mathop{\mathbb{E}}_{(s, a) \sim d_{m,h}^\star} \left[ (g_t^{\star} - g_t)^\top \nabla_\theta \ln \pi_t (a | s) (\nabla_\theta \ln \pi_t (a | s) )^\top (g_t^{\star} - g_t) \right]} \\
&\myeqi \sqrt{H \| g_t^{\star} - g_t \|_{\Sigma_{d^\star}^t}^2} \\
&\le \sqrt{H \kappa \| g_t^{\star} - g_t \|_{\Sigma_t}^2},
\end{align*}
where in (i), for a vector $v$, we write $\| v \|_A = \sqrt{v^\top A v}$ for a symmetric positive semi-definite matrix $A$.
Since $g_t^\star$ minimizes $L(g; \theta_t, d^t)$ over the set $\mathcal{G}$, the first-order optimality condition implies that
\begin{align*}
(g - g_t^\star)^\top \nabla_g L (g_t^\star; \theta_t, d^t) \ge 0
\end{align*}
for any $g$. Therefore,
\begin{align*}
& L(g; \theta_t, d^t) - L(g_t^\star; \theta_t, d^t) \\
&= \sum_{m=1}^M w_m \sum_{h=1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H-h}^t} \left[ \left( A_{m, h}^{t, \lambda}(s, a) - g_t^{\star \top} \nabla \ln \pi_t (a | s) + (g_t^\star - g)^\top \nabla \ln \pi_t (a | s) \right)^2 \right] - L(g_t^\star; \theta_t, d^t) \\
&= \sum_{m=1}^M w_m \sum_{h=1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H-h}^t} \left[ \left( (g_t^{\star} - g)^\top \nabla_\theta \ln \pi_t (a | s) \right)^2 \right] \\
&\quad + (g - g_t^\star)^\top \left( -2 \sum_{m=1}^M w_m \sum_{h=1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H-h}^t} \left[ \left( A_{m, h}^{t, \lambda}(s, a) - g_t^{\star \top} \nabla_\theta \ln \pi_t (a | s) \right) \nabla_\theta \ln \pi_t (a | s) \right] \right) \\
&= \| g_t^{\star} - g \|_{\Sigma_t}^2 + (g - g_t^\star)^\top \nabla_g L(g_t^\star; \theta_t, d^t) \\
&\ge \| g_t^{\star} - g \|_{\Sigma_t}^2.
\end{align*}
So finally we have
\begin{align*}
\textup{err}_t
\le \sqrt{H L(g_t^\star; \theta_t, d^\star)} + \sqrt{H \kappa (L(g_t; \theta_t, d^t) - L(g_t^\star; \theta_t, d^t)) }.
\end{align*}
This completes the proof.
\end{proof}
\subsection{Bounding $\epsilon_{\textup{stat}}$}
\begin{lemma} [Hoeffding's Inequality] \label{lem_hoeffding}
Suppose $X_1, X_2, \ldots, X_n$ are i.i.d. random variables taking values in $[a, b]$, with expectation $\mu$. Let $\bar{X}$ denote their average. Then for any $\epsilon \ge 0$,
\begin{align*}
\mathbb{P} \left( \left| \bar{X} - \mu \right| \ge \epsilon \right) \le 2 \exp\left( -\frac{2 n \epsilon^2}{(b - a)^2} \right).
\end{align*}
\end{lemma}
\begin{lemma} \label{lem_clip_err}
For any policy $\pi$, any state $s \in \mathcal{S}$ and any $U \ge \ln |\mathcal{A}| - 1$,
\begin{align*}
0 \le \sum_{a \in \mathcal{A}} \pi (a | s) \ln \frac{1}{\pi (a | s)} - \sum_{a \in \mathcal{A}} \pi (a | s) \min \left\{ \ln \frac{1}{\pi (a | s)}, U \right\} \le \frac{|\mathcal{A}|}{\textup{e}^{U + 1}}.
\end{align*}
\end{lemma}
\begin{proof}
The first inequality is straightforward, so we focus on the second part. Set $\mathcal{A}' = \{ a \in \mathcal{A}\ :\ \ln \frac{1}{\pi (a | s)} > U \} = \{ a \in \mathcal{A}\ :\ \pi (a | s) < \frac{1}{\textup{e}^U} \} $ and $p = \sum_{a \in \mathcal{A}'} \pi (a | s)$, then
\begin{align*}
\sum_{a \in \mathcal{A}} \pi (a | s) \ln \frac{1}{\pi (a | s)} - \sum_{a \in \mathcal{A}} \pi (a | s) \min \left\{ \ln \frac{1}{\pi (a | s)}, U \right\}
&= \sum_{a \in \mathcal{A}'} \pi (a | s) \ln \frac{1}{\pi (a | s)} - \sum_{a \in \mathcal{A}'} \pi (a | s) U \\
&= p \sum_{a \in \mathcal{A}'} \frac{\pi (a | s)}{p} \ln \frac{1}{\pi (a | s)} - p U \\
&\le p \ln \left( \sum_{a \in \mathcal{A}'} \frac{\pi (a | s)}{p} \frac{1}{\pi (a | s)} \right) - p U \\
&\le p \ln \frac{|\mathcal{A}|}{p} - p U,
\end{align*}
where the penultimate step comes from concavity of $\ln x$ and Jensen's inequality. Let $f(p) = p \ln \frac{|\mathcal{A}|}{p} - p U$, then $f'(p) = \ln |\mathcal{A}| - U - 1 - \ln p$. Recall that $U \ge \ln |\mathcal{A}| - 1$, so $f(p)$ increases when $p \in (0, \frac{|\mathcal{A}|}{\textup{e}^{U + 1}})$ and decreases when $p \in (\frac{|\mathcal{A}|}{\textup{e}^{U + 1}}, 1)$. Since $f(\frac{|\mathcal{A}|}{\textup{e}^{U + 1}}) = \frac{|\mathcal{A}|}{\textup{e}^{U + 1}}$ we complete the proof.
\end{proof}
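The bound is easy to check numerically; the sketch below (our own, with an arbitrary random distribution and the choice $U = \ln |\mathcal{A}|$) compares the clipping gap against $|\mathcal{A}| / \textup{e}^{U + 1}$.
\begin{verbatim}
import numpy as np

def clip_gap(pi, U):
    """Entropy minus its U-clipped version for a probability vector pi."""
    logs = np.log(1.0 / pi)
    return float(np.sum(pi * logs) - np.sum(pi * np.minimum(logs, U)))

rng = np.random.default_rng(1)
A = 50                                  # |A|
pi = rng.dirichlet(np.ones(A))
U = np.log(A)                           # any U >= ln|A| - 1 works
print(clip_gap(pi, U), A / np.exp(U + 1))   # gap <= |A| / e^(U+1)
\end{verbatim}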
\begin{lemma} [Loss Function Concentration] \label{lem_L_conc}
If we set $\pi_s =$ None and $U \ge \ln |\mathcal{A}| - 1$, then with probability $1 - 2 (T + 1) \exp\left( -\frac{2 N \epsilon^2}{C^2} \right)$, the update weight sequence of Alg.\,\ref{alg_npg} satisfies: for any $0 \le t \le T$,
\begin{align*}
L (\wh{g}_t; \theta_t, d^{\theta_t}) - L (g_t^\star; \theta_t, d^{\theta_t})
\le 2 \epsilon + \frac{8 \lambda G B |\mathcal{A}|}{\textup{e}^{U + 1}},
\end{align*}
where
\begin{align*}
C = 16 H G B [1 + \lambda U + H (1 + \lambda \ln |\mathcal{A}|)] + 4 H G^2 B^2.
\end{align*}
If $\pi_s \ne$ None and $\lambda = 0$, then with probability $1 - 2 (T + 1) \exp\left( -\frac{2 N \epsilon^2}{C^2} \right)$, the update weight sequence of Alg.\,\ref{alg_npg} satisfies: for any $0 \le t \le T$,
\begin{align*}
L (\wh{g}_t; \theta_t, \wt{d}^{\pi_s}) - L (g_t^\star; \theta_t, \wt{d}^{\pi_s})
\le 2 \epsilon,
\end{align*}
where
\begin{align*}
C = 16 H^2 G B + 4 H G^2 B^2.
\end{align*}
\end{lemma}
\begin{proof}
We first prove the $\pi_s =$ None case. For time step $t$, Alg.\,\ref{alg_npg} samples $H N$ trajectories. Abusing the notation, denote
\begin{align*}
\wh{F}_t &= \frac{1}{N} \sum_{n = 1}^N \sum_{h = 0}^{H - 1} \nabla_\theta \ln \pi_\theta (a_{n, h} | s_{n, h}) \left( \nabla_\theta \ln \pi_\theta (a_{n, h} | s_{n, h}) \right)^\top, \\
\wh{\nabla}_t &= \frac{1}{N} \sum_{n = 1}^N \sum_{h = 0}^{H - 1} \wh{A}_{n, H - h} (s_{n, h}, a_{n, h}) \nabla_\theta \ln \pi_\theta (a_{n, h} | s_{n, h}), \\
\wh{L}(g) &= \underbrace{\sum_{m=1}^M w_m \sum_{h=1}^H \mathop{\mathbb{E}}_{s, a \sim d_{m, H-h}^{\theta_t}} \left[ A_{m, h}^{t, \lambda}(s, a)^2 \right]}_{\textup{\ding{172}}} + \underbrace{g^\top \wh{F}_t g - 2 g^\top \wh{\nabla}_t}_{\textup{\ding{173}}}.
\end{align*}
Notice that \ding{172} is a constant. From Alg.\,\ref{alg_npg}, $\wh{g}_t$ is the minimizer of \ding{173} (hence $\wh{L}(g)$) inside the ball $\mathcal{G}$. From $\nabla_\theta \ln \pi_\theta (a | s) = \phi(s, a) - \mathop{\mathbb{E}}_{a' \sim \pi_\theta (\cdot | s)} [\phi(s, a')],\ \| \phi(s, a) \|_2 \le B,\ \| g \|_2 \le G$, we know that $\left| g^\top \nabla_\theta \ln \pi_\theta (a | s) \right| \le 2 G B$. So $0 \le g^\top \wh{F}_t g \le 4 H G^2 B^2$. From Alg.\,\ref{alg_samp}, we know that any sampled $\wh{A}$ satisfies $| \wh{A} | \le 2 [1 + \lambda U + H (1 + \lambda \ln |\mathcal{A}|)]$. So $| g^\top \wh{\nabla}_t | \le 4 H G B [1 + \lambda U + H (1 + \lambda \ln |\mathcal{A}|)]$. We first have that
\begin{align}
- 8 H G B [1 + \lambda U + H (1 + \lambda \ln |\mathcal{A}|)]
\le \textup{\ding{173}}
\le 8 H G B [1 + \lambda U + H (1 + \lambda \ln |\mathcal{A}|)] + 4 H G^2 B^2. \label{eq_wtL_range}
\end{align}
To apply any standard concentration inequality, we next need to calculate the expectation of \ding{173}. According to Monte Carlo sampling and Lem.\,\ref{lem_clip_err}, for any $1 \le m \le M, 1 \le h \le H$ and $(s, a) \in \mathcal{S} \times \mathcal{A}$, we have
\begin{align*}
A_{m, h}^{t, \lambda} (s, a) - \frac{\lambda |\mathcal{A}|}{\textup{e}^{U + 1}} \le \mathop{\mathbb{E}} \left[ \wh{A}_{m, h}^{t, \lambda} (s, a) \right] \le A_{m, h}^{t, \lambda} (s, a).
\end{align*}
Denote $\nabla_t$ as the exact policy gradient at time step $t$, then
\begin{align*}
\left| \mathop{\mathbb{E}} \left[ g^\top \wh{\nabla}_t \right] - g^\top \nabla_t \right|
&\le \| g \|_2 \left\| \mathop{\mathbb{E}} \left[ \wh{\nabla}_t \right] - \nabla_t \right\|_2 \\
&\le \| g \|_2 \cdot H \| \nabla_\theta \ln \pi_\theta (a | s) \|_2 \left\| \mathop{\mathbb{E}} \left[ \wh{A} (s, a) \right] - A (s, a) \right\|_\infty \\
&\le \frac{2 \lambda G B |\mathcal{A}|}{\textup{e}^{U + 1}}.
\end{align*}
Since Monte Carlo sampling correctly estimates the state-action visitation distribution, $\mathop{\mathbb{E}} \left[ \wh{F}_t \right] = F(\theta_t)$. Since $g^\top \wh{F}_t g$ is linear in the entries of $\wh{F}_t$, we have $\mathop{\mathbb{E}} \left[ g^\top \wh{F}_t g \right] = g^\top F(\theta_t) g$. We are now in a position to show that
\begin{align*}
\left| \mathop{\mathbb{E}} \left[ \wh{L}(g) \right] - L (g) \right| \le \frac{4 \lambda G B |\mathcal{A}|}{\textup{e}^{U + 1}}.
\end{align*}
Hoeffding's inequality (Lem.\,\ref{lem_hoeffding}) gives
\begin{align*}
\mathbb{P} \left( \left| \wh{L} (g) - \mathop{\mathbb{E}} \left[ \wh{L}(g) \right] \right| \ge \epsilon \right) \le 2 \exp\left( -\frac{2 N \epsilon^2}{C^2} \right),
\end{align*}
where from Eq.\,\ref{eq_wtL_range},
\begin{align*}
C = 16 H G B [1 + \lambda U + H (1 + \lambda \ln |\mathcal{A}|)] + 4 H G^2 B^2.
\end{align*}
After applying a union bound over all $t$, with probability $1 - 2 (T + 1) \exp\left( -\frac{2 N \epsilon^2}{C^2} \right)$ the following holds for any $g \in \mathcal{G}$:
\begin{align*}
\left| \wh{L} (g; \theta_t, d^{\theta_t}) - L (g; \theta_t, d^{\theta_t}) \right|
\le \epsilon + \frac{4 \lambda G B |\mathcal{A}|}{\textup{e}^{U + 1}}.
\end{align*}
Hence
\begin{align*}
L (\wh{g}_t; \theta_t, d^{\theta_t})
&\le \wh{L} (\wh{g}_t; \theta_t, d^{\theta_t}) + \epsilon + \frac{4 \lambda G B |\mathcal{A}|}{\textup{e}^{U + 1}} \\
&\le \wh{L} (g_t^\star; \theta_t, d^{\theta_t}) + \epsilon + \frac{4 \lambda G B |\mathcal{A}|}{\textup{e}^{U + 1}} \\
&\le L (g_t^\star; \theta_t, d^{\theta_t}) + 2 \epsilon + \frac{8 \lambda G B |\mathcal{A}|}{\textup{e}^{U + 1}}.
\end{align*}
For $\pi_s \ne$ None and $\lambda = 0$, we notice that $| \wh{A} | \le 2 H$ and hence $- 8 H^2 G B \le \textup{\ding{173}} \le 8 H^2 G B + 4 H G^2 B^2$. Moreover, $\mathop{\mathbb{E}} \left[ \wh{A}_{m, h}^{t, \lambda} (s, a) \right] = A_{m, h}^{t, \lambda} (s, a)$. So by slightly modifying the proof we can get the result.
\end{proof}
\end{document}
\begin{document}
\title[Iterated splitting]
{Iterated splitting and the\\ classification of knot tunnels}
\author{Sangbum Cho}
\address{Department of Mathematics Education\\
Hanyang University\\
Seoul 133-791\\
Korea}
\email{[email protected]}
\author{Darryl McCullough}
\address{Department of Mathematics\\
University of Oklahoma\\
Norman, Oklahoma 73019\\
USA}
\email{[email protected]}
\urladdr{www.math.ou.edu/$_{\widetilde{\phantom{n}}}$dmccullough/}
\thanks{The second author was supported in part by NSF grant DMS-0802424}
\subjclass{Primary 57M25}
\date{\today}
\keywords{knot, tunnel, (1,1), torus knot, regular, splitting, 2-bridge}
\begin{abstract} For a genus-1 1-bridge knot in $S^3$, that is, a
$(1,1)$-knot, a middle tunnel is a tunnel that is not an upper or lower
tunnel for some $(1,1)$-position. Most torus knots have a middle tunnel,
and non-torus-knot examples were obtained by Goda, Hayashi, and
Ishihara. In a previous paper, we generalized their construction and
calculated the slope invariants for the resulting examples. We give an
iterated version of the construction that produces many more examples, and
calculate their slope invariants. If one starts with the trivial knot, the
iterated constructions produce all the $2$-bridge knots, giving a new
calculation of the slope invariants of their tunnels. In the final section
we compile a list of the known possibilities for the set of tunnels of a
given tunnel number 1 knot.
\end{abstract}
\maketitle
\section*{Introduction}
\label{sec:intro}
Genus-$2$ Heegaard splittings of the exteriors of knots in $S^3$ have been
a topic of considerable interest for several decades. They form a class
large enough to exhibit rich and interesting geometric behavior, but
restricted enough to be tractable. Traditionally such splittings are
discussed with the language of knot tunnels, which we will use from now
on.
The article \cite{CMtree} developed two sets of invariants that together give a
complete classification of all tunnels of all tunnel number 1 knots. One is
a finite sequence of rational ``slope'' invariants, the other a finite
sequence of ``binary'' invariants. The latter is trivial exactly when the
tunnel is a $(1,1)$-tunnel, that is, a tunnel that arises as the ``upper"
or ``lower" tunnel of a genus-$1$ $1$-bridge position of the knot. In the
language of \cite{CMtree}, the $(1,1)$-tunnels are called semisimple, apart from
those which occur as the well-known upper and lower tunnels of a $2$-bridge
knot, which are distinguished by the term ``simple". The tunnels which are
not $(1,1)$-tunnels are called regular.
For quite a long time, the only known examples of knots having both regular
and $(1,1)$-tunnels were (most) torus knots, whose tunnels were classified
by M. Boileau, M. Rost, and H. Zieschang~\cite{B-R-Z} and independently by
Y. Moriah~\cite{Moriah}. Recently, another example was found by H. Goda and
C. Hayashi~\cite{Goda-Hayashi}. The knot is the Morimoto-Sakuma-Yokota
$(5,7,2)$-knot, and Goda and Hayashi credit H.\ Song with bringing it to
their attention. Using his algorithm to compute tunnel invariants,
K. Ishihara verified that the tunnel is regular, and in view of this, we
refer to this example as the Goda-Hayashi-Ishihara tunnel. As noted
in~\cite{Goda-Hayashi}, a simple modification of their construction,
varying a nonzero integer parameter $n$, produces an infinite collection of
very similar examples.
In \cite{CMsplitting}, we gave an extensive generalization of the
Goda-Hayashi-Ishihara example, called the splitting construction, to
produce all examples directly obtainable by the geometric phenomenon that
underlies it. In addition, we gave an effective method to compute the full
set of invariants of the examples. Our construction will be reviewed in
Section~\ref{sec:splitting}.
In this paper, we develop an iterative method that begins with the result
of a splitting construction. The steps are not exactly splittings in the
sense of \cite{CMsplitting}, but are similar enough that we may call this
iterated splitting. The steps may be repeated an arbitrary number of times,
giving an immense collection of new examples of regular tunnels of
$(1,1)$-knots. At each step, a choice of nonzero integer parameter allows
further variation. Starting from each of the four splitting constructions,
we find two distinct ways to iterate, giving eight types of iteration.
Section~\ref{sec:iterated} describes the constructions in detail.
As with the splitting construction, the binary invariants of these new
tunnels are easy to find, but the slope invariants require more
effort. Fortunately, the general method given in \cite{CMsplitting} for
tunnels obtained by splitting can be applied to obtain the slope invariants
for the iterated construction, as we detail in
Section~\ref{sec:iterated_slopes}. The method is effective and could easily
be programmed to read off slope invariants at will.
The iterated splitting construction actually sits in plain view in a very
familiar family of examples, the semisimple tunnels of $2$-bridge knots. In
Section~\ref{sec:2bridge_iteration}, we present a special case of the
iterated splitting method that, as one varies its parameters, produces all
semisimple tunnels of all $2$-bridge knots. No doubt there is a geometric
way to verify this, but our proof is short and entirely algebraic: we
simply calculate the slope sequences of the tunnels produced by the
iterations and see that they are exactly the sequences that arise from this
class of tunnels. The binary invariants are trivial in both cases, and
since the invariants together form a complete invariant of a knot tunnel,
the verification is complete.
The work in this paper greatly enlarges the list of known examples of
tunnels having a pair of $(1,1)$-tunnels and an additional regular tunnel,
motivating us to compile a list of known phenomena for the set of tunnels
of a given tunnel number~$1$ knot. In the final section, we give the list
of seven known cases, which includes three new cases apparent from examples
recently found by John Berge using his software package \textit{Heegaard.}
The authors are very grateful to John, not only for the new examples, but
also for providing patient consultation to help us understand his methods.
Although we do provide a review of the splitting construction of
\cite{CMsplitting}, this paper presupposes a reasonable familiarity with
that work. We have not included a review of the general theory
of~\cite{CMtree}, as condensed reviews are already available in several of
our articles. For the present paper, we surmise that Section~1
of~\cite{CMgiant_steps} together with the review sections
of~\cite{CMsemisimple} form the best option for most readers.
\section{The splitting construction}
\label{sec:splitting}
In this section we will review the splitting construction
from~\cite{CMsplitting}. To set notation,
Figure~\ref{fig:torus_knot_notation} shows a standard Heegaard torus $T$ in
$S^3$, and an oriented longitude-meridian pair $\{\ell,m\}$ which will be
our ordered basis for $H_1(T)$ and for the homology of a product
neighborhood $T\times I$. For a relatively prime pair of integers $(p,q)$,
we denote by $T_{p,q}$ a torus knot isotopic to a $(p,q)$-curve in $T$. In
particular, $\ell=T_{1,0}$ and $m=T_{0,1}$. Also, $T_{p,q}$ is isotopic in
$S^3$ to $T_{q,p}$, and $T_{-p,-q}=T_{p,q}$ since our knots are
unoriented.
\begin{figure}
\caption{$\ell$, $m$, and $T_{3,5}$.}
\label{fig:torus_knot_notation}
\end{figure}
Four kinds of disks, called drop-$\lambda$, lift-$\lambda$, drop-$\rho$,
and lift-$\rho$ disks, are used in the splitting construction.
Figure~\ref{fig:drop-lambda}(a) shows a torus knot $T_{p+r,q+s}$, its
middle tunnel $\tau$, the principal pair $\{\lambda,\rho\}$ of $\tau$, the
knots $K_\rho=T_{p,q}$, and $K_\lambda=T_{r,s}$, and a drop-$\lambda$
disk, called $\sigma$ there. Figure~\ref{fig:drop-lambda}(b) is an isotopic
repositioning of the configuration of Figure~\ref{fig:drop-lambda}(a): the
vertical coordinate is the $I$-coordinate in a product neighborhood
$T\times I$, $K_\tau$ and $K_\lambda$ lie on concentric tori in $T\times
I$, and the $1$-handle with cocore $\sigma$ is a vertical $1$-handle
connecting tubular neighborhoods of these two knots. The term
``drop-$\lambda$'' is short for ``drop-$K_\lambda$'', motivated by the fact
that a copy of $K_\lambda$ can be dropped to a lower torus level, as in
Figure~\ref{fig:drop-lambda}(b).
\begin{figure}
\caption{The drop-$\lambda$ disk $\sigma$, first as seen in a
neighborhood of $K_\tau=T_{p+r,q+s}$.}
\label{fig:drop-lambda}
\end{figure}
A lift-$\lambda$ disk is similar, and is shown in
Figure~\ref{fig:lift-lambda}. Drop-$\rho$ and lift-$\rho$ disks are
similar, except that they cut across the upper copy of $\lambda$, travel
over the portion of the neighborhood of $T_{p+r,q+s}$ that does not contain
the drop-$\lambda$ disks, and cut across the lower copy of $\lambda$, while
staying disjoint from the copies of~$\rho$.
\begin{figure}
\caption{The lift-$\lambda$ disk $\sigma$, first as seen in a
neighborhood of $K_\tau=T_{p+r,q+s}$.}
\label{fig:lift-lambda}
\end{figure}
The splitting constructions split off a copy of $K_\rho=T_{p,q}$ or
$K_\lambda=T_{r,s}$ from $K_\tau=T_{p+r,q+s}$, producing copies of these
knots on two concentric torus levels, then sum the copies together by a
pair of arcs with some number of twists. In the case of the
drop-$\lambda$ splitting, the first step was illustrated in
Figure~\ref{fig:drop-lambda}. Next, consider the disk $\gamma_n$ shown in
Figure~\ref{fig:gamma}. It is obtained from $\rho$ by $n$ right-handed
half-twists along $\sigma$. When $n<0$, the twists are left-handed, while
$\gamma_0=\rho$. The $\gamma_n$ are nonseparating, since each meets
$K_\tau$ in a single point.
\begin{figure}
\caption{The disk $\gamma_n$ is obtained from $\rho$ by $n$ right-handed
half-twists along $\sigma$. The case $n=3$ is shown here. For $n<0$, the
half-twists are left-handed, while $\gamma_0=\rho$.}
\label{fig:gamma}
\end{figure}
Each $\gamma_n$ with $n\neq 0$ is a tunnel for the knot obtained by joining
the copies of $K_\tau$ and $K_\lambda$ in Figure~\ref{fig:drop-lambda} by a
pair of vertical arcs that have $n$ right-handed half-twists. That is, for
$n\neq 0$ going from $\tau$ to $\gamma_n$ is a cabling construction
replacing $\rho$, so that the principal pair of $\gamma_n$ is
$\{\lambda,\tau\}$. The case of $n=0$ does not produce a cabling
construction (that is, the resulting tunnel would be $\rho$ so the
principal path would have reversed direction).
The lift-$\lambda$, drop-$\rho$, and lift-$\rho$ splittings are exactly
analogous, using the lift-$\lambda$, drop-$\rho$, and lift-$\rho$ disks
as $\sigma$ in the respective cases.
The slope invariants of the resulting tunnels are the slopes of the disks
$\gamma_n$ in certain coordinates. To calculate them, we need the slopes of
the drop- and lift-disks. We review the method used in~\cite{CMsplitting},
which will apply to the iterated construction that we will develop in this
paper.
Figure~\ref{fig:first_general} illustrates the setup for the slope
calculation. The first drawing shows tubular neighborhoods of two
(oriented) knots $K_U$ and $K_L$, contained in a product neighborhood
$T\times I$ of a Heegaard torus $T$ of $S^3$. The neighborhoods are
connected by a vertical $1$-handle to yield a genus-$2$ handlebody $H$. In
our context, $H$ will always be unknotted, although that is not needed for
the calculations of this and the next section.
\begin{figure}
\caption{The setup for the first general slope calculation.}
\label{fig:first_general}
\end{figure}
We interpret $K_U$ as the ``upper'' knot, contained in $T\times [0,1/4)$
and $K_L$ as the ``lower'' knot, contained in $T\times (3/4,1]$ (the
$I$-coordinate of $T\times I$ increases as one moves downward in our
figures). The vertical $1$-handle with cocore $\sigma$ is assumed to run
between $T\times \{1/4\}$ and $T\times \{3/4\}$, with $\sigma$ as its
intersection with~$T\times \{1/2\}$.
The homology group $H_1(T\times I)\cong H_1(T)$ will have ordered basis the
oriented longitude and meridian $\ell$ and $m$ shown in
Figure~\ref{fig:torus_knot_notation}. Our linking convention is that
$\operatorname{Lk}(m\times \{1\},\ell\times \{0\})=+1$. Now, suppose that $K_U$ represents
$(\ell_U,m_U)$ and $K_L$ represents $(\ell_L,m_L)$ in $H_1(T\times I)$.
Since $\operatorname{Lk}(m\times \{0\},\ell\times \{1\})=0$, we have
$\operatorname{Lk}(K_U,K_L)=m_U\ell_L$.
The disks $D_U^+$ and $D_U^-$ in Figure~\ref{fig:first_general} are
parallel in $H$, as are the disks $D_L^+$ and $D_L^-$, and these four disks
bound a ball $B$. Figure~\ref{fig:first_general}(a) shows a slope disk
$D$. Associated to $D$ is a slope-$0$ separating disk $D^0$, defined by the
requirement that it meets $D$ in a single arc and the core circles of its
complementary solid tori in $H$ have linking number $0$ in $S^3$. For this
setup, \cite[Proposition 5.1]{CMsplitting} tells us the slope $m_\sigma$ of $\sigma$
in $(D,D^0)$-coordinates.
\begin{proposition}\label{prop:first_general_slope_calculation}
In Figure~\ref{fig:first_general}, the slope $m_\sigma$ of $\sigma$ in
$(D,D^0)$-coordinates is $2\operatorname{Lk}(K_U,K_L)$. Consequently, if $K_U$ represents
$(\ell_U,m_U)$ and $K_L$ represents $(\ell_L,m_L)$ in $H_1(T\times I)$,
then $m_\sigma$ equals $2m_U\ell_L$.
\end{proposition}
Proposition~6.1 of \cite{CMsplitting} then gives the slope of $\gamma_n$.
\begin{proposition}\label{prop:second_general_slope_calculation}
The slope of $\gamma_n$ in $(D,D^0)$-coordinates is $m_\sigma+1/n$.
\end{proposition}
As detailed in \cite[Proposition 7.1]{CMsplitting}, applying
Proposition~\ref{prop:first_general_slope_calculation}
to splitting disks gives their slopes
in terms of $p$, $q$, $r$, and $s$.
\begin{corollary}\label{coro:splitting_disk_slopes}
The slopes of the splitting disks are as follows:
\begin{enumerate}
\item[(a)] In $(\rho,\rho^0)$-coordinates,
the drop-$\lambda$ disk has slope $2r(q+s)$.
\item[(b)] In $(\rho,\rho^0)$-coordinates,
the lift-$\lambda$ disk has slope $2s(p+r)$.
\item[(c)] In $(\lambda,\lambda^0)$-coordinates,
the drop-$\rho$ disk has slope $2p(q+s)$.
\item[(d)] In $(\lambda,\lambda^0)$-coordinates,
the lift-$\rho$ disk has slope $2q(p+r)$.
\end{enumerate}
\end{corollary}
Proposition~\ref{prop:second_general_slope_calculation} then gives immediately
the slopes of the tunnels obtained by splitting constructions
using $\gamma_n$.
\begin{proposition}\label{prop:sigma_slopes}
For the torus knot $T_{p+r,q+s}$:
\begin{enumerate}
\item[(a)] A drop-$\lambda$ splitting has slope $2r(q+s)+1/n$.
\item[(b)] A lift-$\lambda$ splitting has slope $2s(p+r)+1/n$.
\item[(c)] A drop-$\rho$ splitting has slope $2p(q+s)+1/n$.
\item[(d)] A lift-$\rho$ splitting has slope $2q(p+r)+1/n$.
\end{enumerate}
\end{proposition}
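For concreteness, these slope formulas are easy to evaluate mechanically. The
following short Python sketch, which is ours and not part of~\cite{CMsplitting}
(the function name and the example values are purely illustrative), computes
all four slopes of Proposition~\ref{prop:sigma_slopes} as exact rational
numbers from $(p,q)$, $(r,s)$, and the twisting parameter $n$.
\begin{verbatim}
from fractions import Fraction

def splitting_slopes(p, q, r, s, n):
    """Slopes of the four splittings of the middle tunnel of T_{p+r,q+s},
    per Proposition (sigma_slopes); n is the number of half-twists."""
    assert n != 0
    return {
        "drop-lambda": 2 * r * (q + s) + Fraction(1, n),
        "lift-lambda": 2 * s * (p + r) + Fraction(1, n),
        "drop-rho":    2 * p * (q + s) + Fraction(1, n),
        "lift-rho":    2 * q * (p + r) + Fraction(1, n),
    }

# Example: T_{3,5} written as T_{p+r,q+s} with (p,q) = (1,2), (r,s) = (2,3), n = 1.
print(splitting_slopes(1, 2, 2, 3, 1))
\end{verbatim}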
\section{The iterated splitting construction}
\label{sec:iterated}
We are now prepared to describe the iterated splitting construction.
We begin with the drop-$\rho$ case, as it is the case we will need in our
later application to $2$-bridge knots in Section~\ref{sec:2bridge_iteration}.
Figure~\ref{fig:iterate}(a) shows a
knot resulting from a drop-$\rho$ splitting. Its tunnel will now be denoted
by $\gamma_{n_0}^0$, the superscript distinguishing it from later
tunnels. Its principal pair $\{\rho,\tau\}$ is also shown.
\begin{figure}
\caption{The first case of the drop-$\rho$ iteration.}
\label{fig:iterate}
\end{figure}
In $S^3$, $\gamma_{n_0}^0$ would appear with twists along the horizontal
drop-$\rho$ disk $\sigma$, so Figure~\ref{fig:iterate}(a) is only a
picture up to abstract homeomorphism. Nonetheless, the vertical coordinate
represents the levels of $T\times I$, as it will in the remaining drawings
of Figure~\ref{fig:iterate}, so it will be seen that the knots obtained are
always in $1$-bridge position.
In Figure~\ref{fig:iterate}(b), $\gamma_{n_0}^0$ and a portion of the
surrounding handlebody $H$ have been shrunk vertically, keeping
$K_{\gamma_{n_0}^0}$ fixed. The horizontal line at the bottom is a copy of
$K_\rho$, as indicated. The picture of $\gamma_{n_0}^0$ without twisting is
now accurate, but in the true picture in $S^3$, the two vertical
$1$-handles would be intertwined by $n_0$ right-hand half-twists rather
than being straight. The bottom part of the picture in $S^3$, from the
level $K_{\gamma_{n_0}^0}$ and below, is as seen in
Figure~\ref{fig:iterate}(b).
Figure~\ref{fig:iterate}(c) is obtained from Figure~\ref{fig:iterate}(b) by
an isotopy of $H$, keeping $K_{\gamma_{n_0}^0}$ and $K_\rho$ fixed. The
effect is to create the setup picture of Figure~\ref{fig:first_general}(a)
near $\tau$, with $K_U= K_{\gamma_{n_0}^0}$ and $K_L=K_\rho$. Notice that
in the orientations needed for the first general slope calculation,
$K_\rho$ is oriented left-to-right, and $K_{\gamma_{n_0}^0}$ must be
oriented so that the portion that intersects $\tau$ and originally came
from the copy of $K_\rho$ in the splitting construction used to create
$K_{\gamma_{n_0}^0}$ is also oriented from left-to-right. With this
orientation on $K_{\gamma_{n_0}^0}$ the top portion that originally came
from $K_\tau$ will be oriented from left-to-right or from right-to-left
according as $n_0$ is odd or even. This will be a key observation when we
compute the slope invariants of the iterated splitting constructions in
Sections~\ref{sec:iterated_slopes}.
Figure~\ref{fig:iterate}(d) differs from Figure~\ref{fig:iterate}(c) only
in that $\tau$ has been replaced by $\gamma_{n_1}^1$, which in $S^3$ would be
seen with $n_1$ right-hand half-twists. This is a cabling construction.
The resulting knot $K_{\gamma_{n_1}^1}$ is in $1$-bridge position, and was
obtained from $K_{\gamma_{n_0}^0}$ and the copy of $K_\rho$ by connecting
them with two vertical arcs with $n_1$ half-twists. The principal pair of
$\gamma_{n_1}^1$ is~$\{\rho,\gamma_{n_0}^0\}$.
The stage is now set to repeat the construction using $\gamma_{n_0}^0$ and
$\gamma_{n_1}^1$ in the role of $\tau$ and $\gamma_{n_0}^0$ in the previous
step. Figure~\ref{fig:iterate}(e) is obtained from
Figure~\ref{fig:iterate}(d) by two steps, analogous to the steps from
Figure~\ref{fig:iterate}(a) to Figure~\ref{fig:iterate}(b) and from
Figure~\ref{fig:iterate}(b) to Figure~\ref{fig:iterate}(c). First,
$\gamma_{n_1}^1$ is shrunk vertically, then $H$ is moved as indicated,
creating the setup picture of Figure~\ref{fig:first_general}(a) in the
lower left-hand area of Figure~\ref{fig:iterate}(d). Again, in $S^3$ the
two vertical $1$-handles in the middle would be intertwined with $n_1$
half-twists. Another copy of $K_\rho$ appears at the bottom.
The next cabling construction replaces $\gamma_{n_0}^0$ by
$\gamma_{n_2}^2$, and $K_{\gamma_{n_2}^2}$ is obtained by joining
$K_{\gamma_{n_1}^1}$ and the copy of $K_\rho$ with two vertical arcs with
$n_2$ half-twists. The principal pair of $\gamma_{n_2}^2$ is
$\{\rho,\gamma_{n_1}^1\}$. The true picture in $S^3$ has $n_0$ half-twists
in the two vertical $1$-handles connecting the top and second levels of
Figure~\ref{fig:iterate}(e), $n_1$ half-twists in the two
vertical $1$-handles connecting the second and third levels, and
$\gamma_{n_2}^2$ appears with $n_2$ half-twists.
The iteration can be continued indefinitely, producing a sequence of
tunnels $\gamma_{n_m}^m$ with principal pairs
$\{\rho,\gamma_{n_{m-1}}^{m-1}\}$, and the knots $K_{\gamma_{n_m}^m}$ in
$(1,1)$-position.
We indicate this sequence by $\tau \serho \gamma_{n_0}^0 \serho
\gamma_{n_1}^1 \serho \cdots$. The cabling constructions in the iterations
all retain $\rho$ in their principal pairs so have binary invariant $0$,
although the original drop-$\rho$ splitting that produces $\gamma_{n_0}^0$
may have nontrivial binary invariant.
From Figure~\ref{fig:iterate}(a) there is a second way to
proceed. Figure~\ref{fig:tau_iterate} shows an alternative to the isotopy
in Figure~\ref{fig:iterate}(b), that shrinks $\gamma_{n_0}^0$ upward. The
next step replaces $\rho$ by $\gamma_{n_1}^1$, which has principal pair
$\{\tau,\gamma_{n_0}^0\}$, and $K_{\gamma_{n_1}^1}$ is obtained by joining
copies of $K_{\gamma_{n_0}^0}$ and $K_\tau$ by vertical arcs. The
successive iterations each add on another copy of $K_\tau$, moving upward,
and retain $\tau$ in their principal pairs. We indicate this sequence by
$\tau \serho \gamma_{n_0}^0 \netau \gamma_{n_1}^1 \netau \gamma_{n_2}^2
\netau \cdots$. The up-or-down direction of the diagonal arrow indicates
whether the knot that is joined to the previous one is a copy of the
original $K_U$ (in this case, $K_\tau$) or the original $K_L$ (in this
case, $K_\rho$), and the letter above it indicates which of $\rho$,
$\lambda$, or $\tau$ is retained in the principal pair.
\begin{figure}
\caption{The second case of the drop-$\rho$ iteration.}
\label{fig:tau_iterate}
\end{figure}
Starting with the drop-$\lambda$ splitting instead of the drop-$\rho$
splitting produces two more iterations,
\[\tau \selambda \gamma_{n_0}^0 \selambda
\gamma_{n_1}^1 \selambda\cdots\text{\ and\ }
\tau \selambda \gamma_{n_0}^0 \netau
\gamma_{n_1}^1 \netau \gamma_{n_2}^2 \netau \cdots\ .\]
Starting with the lift-$\rho$ splitting instead of the drop-$\rho$
splitting produces two more,
\[\tau \nerho \gamma_{n_0}^0 \nerho
\gamma_{n_1}^1 \nerho\cdots\text{\ and\ } \tau \nerho \gamma_{n_0}^0 \setau
\gamma_{n_1}^1 \setau \gamma_{n_2}^2 \setau \cdots\ ,\] and starting these
iterations with the lift-$\lambda$ splitting gives the latter two but with $\lambda$
replacing $\rho$. Provided that one started with a tunnel $\tau$
which was not trivial and not simple, the eight sequences are distinct,
since they have distinct principal paths.
\section{The iterated splitting slope invariants}
\label{sec:iterated_slopes}
We begin with the slope invariants. Consider the first iteration discussed
in Section~\ref{sec:iterated}, whose initial steps were illustrated in
Figure~\ref{fig:iterate}. The initial step is a regular drop-$\rho$
splitting, and according to Proposition~\ref{prop:sigma_slopes}(c), the
slope of the resulting tunnel disk $\gamma_{n_0}^0$ is $2p(q+s)+1/n_0$.
The first iterate $\gamma_{n_1}^1$ is obtained using the setup of
Figure~\ref{fig:first_general}(a) with $K_U=K_{\gamma_{n_0}^0}$ and
$K_L=K_\rho$ as in Figure~\ref{fig:iterate}(c). Now $K_{\gamma_{n_0}^0}$ is
obtained by connecting $K_{\tau}=T_{p+r,q+s}$ and $K_\rho=T_{p,q}$ with two
arcs intertwined with $n_0$ half-twists. The portion of
$K_{\gamma_{n_0}^0}$ seen in the setup picture for calculating the slope of
$\gamma_{n_1}^1$ must be oriented from left-to-right, so it is obtained by
adding the left-to-right orientation of $K_\rho$ to either the
left-to-right or right-to-left orientation of $K_\tau$, according as $n_0$
is odd or even. In $H_1(T\times I)$, $K_\tau$ (with left-to-right
orientation) represents $(p+r,q+s)$ and $K_\rho$ represents $(p,q)$, so
$K_{\gamma_{n_0}^0}$ with this orientation represents
$(p,q)+(-1)^{1+n_0}(p+r,q+s) =(1+(-1)^{1+n_0})(p,q) +
(-1)^{1+n_0}(r,s)$. Therefore $\operatorname{Lk}(K_{\gamma_{n_0}^0},K_\rho)
=p((1+(-1)^{1+n_0})q + (-1)^{1+n_0}s)$, and by
Proposition~\ref{prop:first_general_slope_calculation} the slope of
$\gamma_{n_1}^1$ in $(\tau,\tau^0)$-coordinates is $2pq(1+(-1)^{1+n_0}) +
2p s\, (-1)^{1+n_0} +1/n_1$.
To continue this process, let us put $\epsilon(k)=(-1)^{1+n_k}$,
$t(r,k)=\epsilon(r)\epsilon(r+1)\cdots\epsilon(k-1)$ for $r<k$, and
$t(k,k)=1$. Now, define
\[a(k)=t(0,k)\text{ and }A(k)=1+\sum_{r=0}^{k-1}t(r,k)\ .\]
In particular, $a(0)=A(0)=1$, $a(1)=\epsilon(0)$, $A(1)=1+\epsilon(0)$,
and since $\epsilon(k)t(r,k)=t(r,k+1)$,
\[\epsilon(k)a(k)=a(k+1)\text{ and }1+\epsilon(k)A(k)=A(k+1)\ .\]
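These quantities are easy to compute in practice. The following Python sketch,
which is ours and included only as a convenience for the reader, evaluates
$a(k)$ and $A(k)$ by the recursions above and checks them against the defining
products $t(r,k)$ for a hypothetical twist sequence $n_0,n_1,\ldots$.
\begin{verbatim}
def eps(nk):
    # epsilon(k) = (-1)^(1 + n_k)
    return (-1) ** (1 + nk)

def a_and_A(n):
    """a(k) and A(k), k = 0, ..., len(n), from the twist sequence n = [n_0, n_1, ...],
    via the recursions a(k+1) = eps(k) a(k) and A(k+1) = 1 + eps(k) A(k)."""
    a, A = [1], [1]                          # a(0) = A(0) = 1
    for nk in n:
        a.append(eps(nk) * a[-1])
        A.append(1 + eps(nk) * A[-1])
    return a, A

def t(n, r, k):
    # t(r, k) = eps(r) eps(r+1) ... eps(k-1), with t(k, k) = 1 (empty product)
    prod = 1
    for j in range(r, k):
        prod *= eps(n[j])
    return prod

n = [4, 2, 5, 3]                             # a hypothetical twist sequence
a, A = a_and_A(n)
for k in range(len(n) + 1):
    assert a[k] == t(n, 0, k)
    assert A[k] == 1 + sum(t(n, r, k) for r in range(k))
\end{verbatim}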
We orient each $K_{\gamma_{n_k}^k}$ so that
the portion that came from $K_L=K_\rho$
is left-to-right, as this is the orientation needed
in order to compute the slope of $\gamma_{n_{k+1}}^{k+1}$ in the setup
of Figure~\ref{fig:first_general}(a).
In $H_1(T\times I)$, $K_{\gamma_{n_0}^0}$ represents
$(p,q)+\epsilon(0)(p+r,q+s)=A(1)(p,q)+a(1)(r,s)$.
For $k\geq 1$ assume inductively that
$K_{\gamma_{n_{k-1}}^{k-1}}$ represents $A(k)\,(p,q)+a(k)\,(r,s)$.
In the orientation on $K_{\gamma_{n_k}^k}$, the direction
on the portion from
$K_{\gamma_{n_{k-1}}^{k-1}}$ must be reversed exactly when $n_k$ is
even. Therefore in $H_1(T\times I)$,
$K_{\gamma_{n_k}^k}$ represents
\[(p,q)+\epsilon(k)(A(k)\,(p,q) + a(k)\,(r,s))
=A(k+1)\,(p,q)+a(k+1)\,(r,s)\ ,\]
completing the induction.
For all $k\geq 1$, then,
$\operatorname{Lk}(K_{\gamma_{n_{k-1}}^{k-1}},K_\rho)=
p\,(A(k)q+a(k)s)$, and
Proposition~\ref{prop:first_general_slope_calculation}
gives the slope of $\gamma_{n_k}^k$ to
be $2p(A(k)q+a(k)s)+1/n_k$.
We now consider the second case $\tau \serho
\gamma_{n_0}^0 \netau \gamma_{n_1}^1 \netau \gamma_{n_2}^2 \netau \cdots$
of the drop-$\rho$ iteration. For the iterative step, when computing the
slope of $\gamma_{n_{k+1}}^{k+1}$,
the setup picture Figure~\ref{fig:first_general}(a) has $K_U=K_\tau$
and $K_L=K_{\gamma_{n_k}^k}$, the latter oriented so that its top
portion is $K_\tau$ oriented left-to-right, and bottom portion, originally
$K_{\gamma_{n_{k-1}}^{k-1}}$, has top portion (from $K_\tau$)
oriented left-to-right or right-to-left
according as $n_k$ is odd or even.
For $k=0$, $K_{\gamma_{n_0}^0}$ with this orientation represents
\[ (p+r,q+s)+\epsilon(0)(p,q) = (A(1)-a(1))(p+r,q+s) + a(1)(p,q)\ .\]
Inductively, assume that
$K_{\gamma_{n_{k-1}}^{k-1}}$ represents $(A(k)-a(k))(p+r,q+s)+a(k)(p,q)$.
Then, with the needed orientation for the setup picture,
$K_{\gamma_{n_k}^k}$ represents
\begin{gather*}
(p+r,q+s)+\epsilon(k)((A(k)-a(k))(p+r,q+s)+a(k)(p,q))\\
=(A(k+1)-a(k+1))(p+r,q+s)+a(k+1)(p,q)
\end{gather*}
The slope of $\gamma_{n_k}^k$ is then
\begin{gather*} 2\operatorname{Lk}(K_\tau,K_{\gamma_{n_{k-1}}^{k-1}})+1/n_k
=2(q+s)((A(k)-a(k))(p+r)+a(k)p) + 1/n_k\\
=2p(q+s)A(k)+2r(q+s)(A(k)-a(k)) + 1/n_k\ .
\end{gather*}
These calculations have established the first two cases of the following
result. Each of the remaining six cases is very similar to one of the first
two. Summarizing, we have
\begin{theorem}\label{thm:iterated_slopes} The slopes
of the tunnels in the iterated splitting sequences for the torus knot
$T_{p+r,q+s}$ are as follows.
\vspace*{2 ex}
\begin{small}
\begin{center}
\renewcommand{\arraystretch}{1.3}
\setlength{\fboxsep}{0pt}
\setlength{\tabcolsep}{8pt}
\fbox{
\begin{tabular}{c|c}
$\mathrm{sequence}$&$\mathrm{slope\ of}$ $\gamma_{n_k}^k$\\
\hline\hline
$\tau \serho \gamma_{n_0}^0 \serho \gamma_{n_1}^1 \serho \cdots$&
$2p\,(\,A(k)q+a(k)s\,)+1/n_k$\\
$\tau \serho \gamma_{n_0}^0 \netau \gamma_{n_1}^1 \netau \cdots$&
$2(q+s)\,(\,A(k)p+(A(k)-a(k))r\,) + 1/n_k$\\
$\tau \selambda \gamma_{n_0}^0 \selambda \gamma_{n_1}^1 \selambda \cdots$&
$2r\,(\,A(k)s+a(k)q\,)+1/n_k$\\
$\tau \selambda \gamma_{n_0}^0 \netau \gamma_{n_1}^1 \netau \cdots$&
$2(q+s)\,(\,A(k)r+(A(k)-a(k))p\,)+1/n_k$\\
$\tau \nerho \gamma_{n_0}^0 \nerho \gamma_{n_1}^1 \nerho \cdots$&
$2q\,(\,A(k)p+a(k)r\,)+1/n_k$\\
$\tau \nerho \gamma_{n_0}^0 \setau \gamma_{n_1}^1 \setau \cdots$&
$2(p+r)\,(\,A(k)q+(A(k)-a(k))s\,) + 1/n_k$\\
$\tau \nelambda \gamma_{n_0}^0 \nelambda \gamma_{n_1}^1 \nelambda \cdots$&
$2s\,(\,A(k)r+a(k)p\,)+1/n_k$\\
$\tau \nelambda \gamma_{n_0}^0 \setau \gamma_{n_1}^1 \setau \cdots$&
$2(p+r)\,(\,A(k)s+(A(k)-a(k))q\,)+1/n_k$\\
\end{tabular}}
\end{center}
\end{small}
\end{theorem}
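Since all eight formulas are expressed uniformly in $a(k)$ and $A(k)$, the
slope sequences can be tabulated mechanically. The following Python sketch is
ours (the sequence labels are informal names for the eight rows of the table
above, and the example values are purely illustrative); it returns the slopes
of $\gamma_{n_0}^0,\gamma_{n_1}^1,\ldots$ as exact rationals for a chosen
sequence.
\begin{verbatim}
from fractions import Fraction

def iterated_slopes(p, q, r, s, n, sequence):
    """Slopes of gamma_{n_k}^k, k = 0, ..., len(n)-1, for the torus knot
    T_{p+r,q+s}, following the table of Theorem (iterated_slopes)."""
    a, A = 1, 1                              # a(0) = A(0) = 1
    slopes = []
    for nk in n:
        lead = {
            "drop-rho/rho":       2 * p * (A * q + a * s),
            "drop-rho/tau":       2 * (q + s) * (A * p + (A - a) * r),
            "drop-lambda/lambda": 2 * r * (A * s + a * q),
            "drop-lambda/tau":    2 * (q + s) * (A * r + (A - a) * p),
            "lift-rho/rho":       2 * q * (A * p + a * r),
            "lift-rho/tau":       2 * (p + r) * (A * q + (A - a) * s),
            "lift-lambda/lambda": 2 * s * (A * r + a * p),
            "lift-lambda/tau":    2 * (p + r) * (A * s + (A - a) * q),
        }[sequence]
        slopes.append(lead + Fraction(1, nk))
        e = (-1) ** (1 + nk)
        a, A = e * a, 1 + e * A              # advance the recursion
    return slopes

# Example: the all drop-rho iteration on T_{3,5}, with (p,q) = (1,2), (r,s) = (2,3).
print(iterated_slopes(1, 2, 2, 3, [4, 2, 5], "drop-rho/rho"))
\end{verbatim}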
The binary invariants produced by splitting and iterated splitting are
easily determined. When $\rho$ is one of the disks of the principal pair of
a tunnel (that is, one of the two disks in the principal vertex other than
the tunnel disk itself), a drop-$\rho$ or lift-$\rho$ splitting or
iterative step retains $\rho$ and replaces the other disk of the principal
pair. Thus, for example, in the all drop-$\rho$ iteration, every binary
invariant is $0$ except possibly that of the splitting, which depends on
the cabling construction that preceded it (that is, the invariant is $0$ if
$\rho$ was in the principal pair of the tunnel for the cabling construction
that preceded the splitting, and $1$ if $\rho$ was the previous tunnel). In
a sequence such as $\tau \serho \gamma_{n_0}^0 \netau \gamma_{n_1}^1 \netau
\cdots$, the binary invariant associated to the first lift-$\tau$
step of the iteration is $1$, and all others
except possibly the initial splitting have binary invariant~$0$.
Since a splitting-and-iteration sequence can never have more than two
binary invariants equal to $1$, with the two $1$'s contiguous in that
case, the sequence can never increase the depth by more than $1$ from that
of the starting torus tunnel (see for example the last paragraph of
Section~3 of~\cite{CMbridge}).
\section{Two-bridge knots}
\label{sec:2bridge_iteration}
A good example of the iterated splitting construction is furnished by
$2$-bridge knots. Indeed, in some sense the iterated splitting construction
is a far-reaching generalization of $2$-bridge knots. In this section, we
will see that any drop-$\rho$ iteration of the first kind examined in
Sections~\ref{sec:iterated} and~\ref{sec:iterated_slopes} and starting with
the trivial knot positioned as $T_{1,1}$ produces a $2$-bridge knot in the
$(1,1)$-position whose upper tunnel is the upper semisimple tunnel of the
knot, and moreover that every semisimple tunnel of every $2$-bridge
knot can be obtained in this way.
We will use the notation and the description of the classification of
$2$-bridge knots presented in \cite[Section 10]{CMsemisimple}. We first
recall the calculation of the slope invariants of the upper semisimple
tunnel of a $2$-bridge knot given in~\cite[Proposition~10.4]{CMsemisimple}:
\begin{proposition}\label{prop:2bridge_slopes}
Let $K$ be a $2$-bridge knot in the $2$-bridge position corresponding to
the continued fraction $[2a_d,2b_d,\ldots,2a_0,2b_0]$, with $b_0\neq 0$ and
each $a_i=\pm 1$. Then the slope invariants of the upper semisimple tunnel
of $K$ are as follows:
\begin{enumerate}
\item[(i)] $m_0=\left[\displaystyle\frac{2b_0}{4b_0+1}\right]$ or
$m_0=\left[\displaystyle\frac{2b_0-1}{4b_0-1}\right]$
according as $a_0$ is $1$ or $-1$.
\item[(ii)] For $1\leq i\leq d$, $m_i=-2a_{i-1}+1/k_i$, where
\begin{enumerate}
\item[(a)] $k_i=2b_i+1$ if $a_i=a_{i-1}=1$,
\item[(b)] $k_i=2b_i$ if $a_i$ and $a_{i-1}$ have opposite signs, and
\item[(c)] $k_i=2b_i-1$ if $a_i=a_{i-1}=-1$.
\end{enumerate}
\end{enumerate}
\label{prop:semisimple_slopes}
\end{proposition}
Fix $K$ as in Proposition~\ref{prop:2bridge_slopes}. Denote the slope
invariants of its upper semisimple tunnel as given in
Proposition~\ref{prop:2bridge_slopes} by $m_0,\ldots\,$, $m_d$.
Starting with the trivial knot $T_{1,1}$, we will carry out a drop-$\rho$
splitting and iteration, that is, the first type detailed in each of
Sections~\ref{sec:iterated} and~\ref{sec:iterated_slopes}. We have
\[M_{1,1}=\begin{pmatrix}1&0\\0&1\end{pmatrix} = I\ ,\]
thus $(p,q)=(1,0)$, $(r,s)=(0,1)$, and $K_\rho=T_{1,0}$.
Perform the initial drop-$\rho$ splitting with $n_0$ equal to $2b_0$ if
$a_0=1$ and to $2b_0-1$ if $a_0=-1$. Note that every nonzero choice of
$n_0$ occurs for some $m_0$. By Proposition~\ref{prop:sigma_slopes}(c) (or
Theorem~\ref{thm:iterated_slopes} with $k=0$), the slope of
$\gamma_{n_0}^0$ is $2+1/n_0$, so its simple slope is $[n_0/(2n_0+1)]$. By
Proposition~\ref{prop:2bridge_slopes}(i), this is $m_0$.
Now we carry out the first $d$ steps of the iteration, using $n_r=k_r$ at
each step. Again, every possible nonzero value of $n_r$ occurs for some
choice of $K$. We have $m_1=-2a_0+1/k_1$. If $a_0=1$, then $n_0$ was even
and (using the notation of Section~\ref{sec:iterated_slopes})
$a(1)=(-1)^{1+n_0}=-1$. If $a_0=-1$, then $n_0$ was odd and $a(1)=1$. In
either case, $a(1)=-a_0$. Theorem~\ref{thm:iterated_slopes} gives the slope
of $\gamma_{n_1}^1$ to be $2a(1)+1/n_1=-2a_0+1/k_1=m_1$.
For $r\geq 2$, assume inductively that $a(r)=-a_{r-1}$. If $n_r=k_r$ is even,
then we are in Case~(ii)(b) of
Proposition~\ref{prop:2bridge_slopes}, and $a_{r-1}=-a_r$. We find that
$a(r+1)=(-1)^{1+n_r}a(r)=-a(r)=a_{r-1}=-a_r$.
If $n_r$ is odd, then we are in Case~(ii)(a) or~(ii)(c) of
Proposition~\ref{prop:2bridge_slopes}, and $a_{r-1}=a_r$. We find that
$a(r+1)=(-1)^{1+n_r}a(r)=a(r)=-a_{r-1}=-a_r$, completing the induction.
Theorem~\ref{thm:iterated_slopes} now gives the slope of $\gamma_{n_r}^r$
to be \[ 2a(r)+1/n_r = -2a_{r-1}+1/k_r=m_r\ ,\] as desired.
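The agreement just established can also be checked mechanically. The following
Python sketch is ours (the continued-fraction data in the example are
hypothetical): it computes $k_1,\ldots,k_d$ and $m_1,\ldots,m_d$ from
Proposition~\ref{prop:2bridge_slopes}, runs the drop-$\rho$ iteration with
$n_0$ chosen as above and $n_r=k_r$, and verifies that the slopes $2a(r)+1/n_r$
agree with $m_r$ for $r\geq 1$; the initial splitting has slope $2+1/n_0$,
whose simple slope is $m_0$ as explained above.
\begin{verbatim}
from fractions import Fraction

def semisimple_slopes(a, b):
    """k_1, ..., k_d and m_1, ..., m_d for the 2-bridge knot with continued
    fraction [2a_d, 2b_d, ..., 2a_0, 2b_0]  (each a[i] = +-1, b[0] != 0)."""
    d = len(a) - 1
    k, m = [None], []
    for i in range(1, d + 1):
        if a[i] == a[i - 1] == 1:
            k.append(2 * b[i] + 1)
        elif a[i] == -a[i - 1]:
            k.append(2 * b[i])
        else:                                # a[i] == a[i-1] == -1
            k.append(2 * b[i] - 1)
        m.append(-2 * a[i - 1] + Fraction(1, k[i]))
    return k, m

def iterated_splitting_slopes(a, b):
    """Slopes 2 a(r) + 1/n_r of gamma_{n_r}^r from the drop-rho iteration."""
    k, _ = semisimple_slopes(a, b)
    n0 = 2 * b[0] if a[0] == 1 else 2 * b[0] - 1
    ar, slopes = (-1) ** (1 + n0), []        # ar = a(1)
    for r in range(1, len(a)):
        slopes.append(2 * ar + Fraction(1, k[r]))
        ar = (-1) ** (1 + k[r]) * ar         # a(r+1)
    return slopes

a, b = [1, -1, -1], [2, 1, 3]                # hypothetical data with d = 2
_, m = semisimple_slopes(a, b)
assert iterated_splitting_slopes(a, b) == m
\end{verbatim}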
\section{Classification of tunnels}
\label{sec:TC}
At this point in history one may begin to contemplate a classification of
tunnels of tunnel number 1 knots based on the examples that have been found
during the past several decades. In this section we will list the cases
that occur or appear to occur. It is plausible that this list may be
complete or nearly so, but we are unaware of any evidence supporting this
other than the absence of other examples found and a sense that there ought
to be a fairly strict limitation on the complexity of tunnel behavior for a
given knot.
In our list it is to be understood that in some cases, tunnels are
equivalent due to symmetries or degeneracies. For example, the upper and
lower tunnels of a $2$-bridge knot may be equivalent under an involution of
$S^3$ preserving the knot, and the middle tunnel of a torus knot is known
to be isotopic to the upper or lower tunnel for certain cases (and hence is
a $(1,1)$-tunnel rather than a regular tunnel).
We list the cases, then comment on them below.
\begin{KTP} These are the known possibilities for the set of tunnels of
a tunnel number $1$ knot $K$, allowing some of the tunnels to be
equivalent due to symmetries or degeneracies:
\begin{enumerate}
\item[I.] $K$ has a unique regular tunnel.
\item[II.] $K$ has one $(1,1)$-position and two $(1,1)$-tunnels.
\item[III.] $K$ has two $(1,1)$-positions and four $(1,1)$-tunnels.
\item[IV.] $K$ has one $(1,1)$-position and two $(1,1)$-tunnels, plus one
regular tunnel.
\item[V.] $K$ has two $(1,1)$-positions and four $(1,1)$-tunnels, plus one
regular tunnel.
\item[VI.] $K$ has one $(1,1)$-position and two $(1,1)$-tunnels, plus two
regular tunnels.
\item[VII.] $K$ has no $(1,1)$-position, but has two regular tunnels.
\end{enumerate}
\end{KTP}
We now comment on the individual cases.
\noindent \textsl{Case I}
As explained in \cite[Section~3]{CMbridge}, results of M. Scharlemann and
M. Tomova~\cite{Scharlemann-Tomova} and J. Johnson~\cite{Johnson} combine
to show that whenever $K$ has a tunnel of Hempel distance at least $6$
(that is, the Hempel distance of the associated genus-$2$ Heegaard
splitting of the exterior of $K$), it is the unique tunnel of $K$. Thus
Case~I holds for all high-distance tunnels.
\noindent \textsl{Case II}
This seems likely to be the generic case when $K$ has a $(1,1)$-tunnel,
although we are not aware of any examples for which it has been proven that
a specific knot admits exactly two $(1,1)$-tunnels, other than symmetric or
degenerate cases such as torus knots for which the middle tunnel is
equivalent to the upper or lower $(1,1)$-tunnel.
\noindent \textsl{Case III}
Tunnels of $2$-bridge knots are fully classified due to work of several
authors, and they satisfy Case~III. D. Heath and H. Song~\cite{Heath-Song}
proved that the $(-2,3,7)$-pretzel knot satisfies Case~III, and there are
expected to be other examples.
\noindent \textsl{Case IV}
Torus knots and their middle tunnels are the long-known examples of
Case~IV. Assuming that at least some of them have no other unknown tunnels,
the examples generated in \cite{CMsplitting} and this paper provide more
such knots. See also the comments on the remaining three cases.\par
\noindent \textsl{Cases V, VI, and VII}
These remaining cases describe examples recently found and kindly provided
to us by John Berge~\cite{Berge}. They were obtained using his software
\textit{Heegaard,} which works with two-generator one-relator presentations
of $\pi_1(S^3-K)$ whose generators are free generators of the fundamental
group of the exterior handlebody $H'=\overline{S^3-H}$, and whose relator
is represented by the boundary $C$ of a tunnel disk $D$ in $H$. The knot
$K$ is the usual knot associated to $D$, that is, a core circle of the
solid torus $\overline{H- N(D)}$, where $N(D)$ is a regular neighborhood of
$D$ in $H$. \textit{Heegaard} is able to distinguish equivalence classes of
such $C$ under diffeomorphism of $H'$, showing that the tunnel disks they
bound cannot be equivalent. Regularity of the tunnels can be tested by
using a procedure (also used by K. Ishihara~\cite{Ishihara}) that finds the
principal meridian pair for $K$ associated to a tunnel, and then checking
whether either of the disks is primitive; primitivity of a disk $E\subset
H$ in our sense (that is, $\partial E$ crosses the boundary of some disk
$E'\subset H'$ exactly once) is equivalent to primitivity of $\partial E$
as an element of $\pi_1(H')$, and can be checked algebraically.
Once a tunnel has been found, the software searches for more tunnels for
the knot by a method that generates a large number of additional such
two-generator one-relator presentations for $\pi_1(S^3-K)$ and tests them
for isomorphism with those already found. Although there is no known means
to ensure that this method finds all of the tunnels for these examples, it
seems likely that it does. For example, for the $(-2,3,7)$-pretzel knot,
all four tunnels are found among the first few of the large number of
presentations that the software examines.
Berge examined the hyperbolic double-primitive knots $K$ having Dehn
surgeries that produce lens spaces $L(p,q)$ with $p<100$, and the
``sporadic'' double-primitive knots of Types 9, 10, 11, and 12 (detailed in
J. Berge~\cite{doubleprimitive}) having Dehn surgeries that produce lens
spaces $L(p,q)$ with $p<500$, as well as some non-double-primitive
knots. Assuming that the software did find all tunnels of those knots, the
possibilities listed in Cases~V, VI, VII were obtained, as well as quite a
few instances of the other cases including Case~IV. Some of the examples of
Case~VII occurred for knots that are not double-primitive. We do not know
whether the regular tunnels in his examples of Cases~IV, V, and~VI arise
from $(1,1)$-positions by the construction we have examined in this paper.
\end{document}
\begin{document}
\title{\vspace{-2em}}
\begin{abstract}
In many scientific studies, it is of interest to determine whether an exposure has a causal effect on an outcome. In observational studies, this is a challenging task due to the presence of confounding variables that affect both the exposure and the outcome. Many methods have been developed to test for the presence of a causal effect when all such confounding variables are observed and when the exposure of interest is discrete. In this article, we propose a class of nonparametric tests of the null hypothesis that there is no average causal effect of an arbitrary univariate exposure on an outcome in the presence of observed confounding. Our tests apply to discrete, continuous, and mixed discrete-continuous exposures. We demonstrate that our proposed tests are doubly-robust consistent, that they have correct asymptotic type I error if both nuisance parameters involved in the problem are estimated at fast enough rates, and that they have power to detect local alternatives approaching the null at the rate $n^{-1/2}$. We study the performance of our tests in numerical studies, and use them to test for the presence of a causal effect of BMI on immune response in early-phase vaccine trials.
\end{abstract}
\doublespacing
\section{Introduction}
\subsection{Motivation and literature review}
One of the central goals of many scientific studies is to determine whether an exposure of interest has a causal effect on an outcome. In some cases, researchers are able to randomly assign units to exposure values. Classical statistical methods for assessing the association between two random variables can then be used to determine whether there is a causal effect because randomization ensures that there are no common causes of the exposure and the outcome. However, random assignment of units to exposures is not feasible in some settings, and even when it is feasible, preliminary evidence is often needed to justify a randomized study. In either case, it is often of interest to use data from an observational study, in which the exposure is not assigned by the researcher but instead varies according to some unknown mechanism, to assess the evidence of a causal effect. This is a more difficult task due to potential confounding of the exposure-outcome relationship.
Here, we are interested in assessing whether a non-randomized exposure that occurs at a single time point has a causal effect on a subsequent outcome when it can be assumed that there are no unobserved confounders. Many methods have been proposed to test the null hypothesis that there is no causal effect in this setting when the exposure is discrete. For instance, matching estimators \citep{rubin1973matching}, inverse probability weighted (IPW) estimators \citep{horvitz1952sampling}, and doubly-robust estimators including augmented IPW \citep{scharfstein1999adjusting, bang2005doubly} and targeted minimum loss-based estimators (TMLE) \citep{vanderlaan2011tmle} can all be used for this purpose.
Much less work exists in the context of non-discrete exposures---that is, exposures that may take an uncountably infinite number of values. In practice, researchers often discretize such an exposure in order to return to the discrete exposure setting. This simple approach has several drawbacks. First, since the results often vary with the choice of discretization, it may be tempting for researchers to choose a discretization based on the results. However, this can inflate the type I error rate. Second, tests based on a discretized exposure typically have less power than tests based on the original exposure because discretizing discards possibly relevant information (see, e.g.\ \citealp{cox1957grouping, cohen1983cost, fedorov2009consequence}). Finally, causal estimates based on a discretized exposure have a more complicated interpretation than those based on the original exposure \citep{young2019representative}.
In the context of causal inference with a continuous exposure, one common estimand is the \emph{causal dose-response curve}, which is defined for each value of the exposure as the average outcome were all units assigned to that exposure value. We say there is no average causal effect if the dose-response curve is flat; i.e.\ the average outcome does not depend on the assigned value of the exposure. One approach to estimating the dose-response curve is to assume the regression of the outcome on the exposure and confounders follows a linear model. If the model is correctly specified, the coefficient corresponding to the exposure is the slope of the dose-response curve, and the null hypothesis that the dose-response curve is flat can be assessed by testing whether this coefficient is zero. This approach can be generalized to other regression models, which can be marginalized using the G-formula to obtain the dose-response function \citep{robins1986,robins2000msm, zhang2016quantitative}. However, if the regression model is mis-specified, then the resulting estimator of the causal dose-response curve is typically inconsistent, and the resulting test will not necessarily be calibrated or consistent. Inverse probability weighting may also be used to estimate the dose-response curve \citep{imai2004gps, hirano2005gps}, but mis-specification of the propensity score model may again lead to unreliable inference.
Nonparametric methods make fewer assumptions about the data-generating mechanism than methods based on parametric models, and are therefore often more robust. In the context of nonparametric estimation of a causal dose-response curve, \cite{neugebauer2007jspi} considered inference for the projection onto a parametric working model; \cite{rubin2006msm} and \cite{diaz2011super} discussed the use of data-adaptive algorithms; \cite{kennedy2016continuous} and \cite{van2018non} proposed estimators based on kernel smoothing; and \cite{westling2020isotonic} proposed an isotonic estimator. However, none of these works considered tests of the null hypothesis that the dose-response curve is flat. We also note that tests of Granger causality \citep{granger1980testing} have been developed in the context of time series (e.g.\ \citealp{granger1995causality, boudjellaba1992arma,nishiyama2011testing}). This is distinct from our goal, which is to test whether an exposure at a single time point has a causal effect on the average of a subsequent outcome.
\subsection{Contribution and organization of the article}
In this article, we focus on the problem of testing the null hypothesis that there is no average causal effect of a possibly non-discrete exposure on an outcome against the complementary alternative. To the best of our knowledge, no nonparametric test has yet been developed for this purpose. Specifically, we (1) propose a test based on a cross-fitted nonparametric estimator of an integral of the causal dose-response curve; (2) provide conditions under which our test has desirable large-sample properties, including (i) consistency under any alternative as long as either of two nuisance functions involved in the problem is estimated consistently (known as \emph{doubly-robust} consistency), (ii) asymptotically correct type I error rate, and (iii) non-zero power under local alternatives approaching the null at the rate $n^{-1/2}$; and (3) illustrate the practical performance of the tests through numerical studies and an analysis of the causal effect of BMI on immune response in early-phase vaccine trials.
Notably, the conditions we establish for consistency and validity of our test do not restrict the form of the marginal distribution of the exposure. Therefore, our test applies equally to discrete, continuous, and mixed discrete-continuous exposures. In the second set of numerical studies, we demonstrate that even in the context of discrete exposures, existing tests do not control type I error when the number of discrete components is moderate or large relative to sample size, while the tests proposed here are valid in all such circumstances.
The remainder of the article is organized as follows. In Section~\ref{sec:methods}, we define our proposed procedure. In Section~\ref{sec:asymptotic}, we discuss the large-sample properties of our procedure. In Section~\ref{sec:simulation}, we illustrate the behavior of our method using numerical studies. In Section~\ref{sec:bmi}, we use our procedure to analyze the causal effect of BMI on immune response. Section~\ref{sec:discussion} presents a brief discussion. Proofs of all theorems are provided in Supplementary Material. An \texttt{R} \citep{Rlang} package implementing all the methods developed in this paper is available at \texttt{https://github.com/tedwestling/ctsCausal}.
\section{Proposed methodology}\label{sec:methods}
\subsection{Notation and null hypothesis of interest}
We denote by $A \in \s{A}$ the real-valued exposure of interest, whose support we denote by $\s{A}_0 \subseteq \d{R}$. Adopting the Neyman-Rubin potential outcomes framework, for each $a \in \s{A}_0$, we denote by $Y(a) \in \s{Y} \subseteq\d{R}$ a unit's potential outcome under an intervention setting exposure to $A =a$. The causal parameter $m(a) := E\left[Y(a)\right]$ represents the average outcome under assignment of the entire population to exposure level $A=a$. The resulting curve $m:\s{A}\rightarrow \d{R}$ is known as the \emph{causal dose-response curve}.
We are interested in testing the null hypothesis that $m(a) = \gamma_0$ for all $a \in \s{A}_0$ and some $\gamma_0 \in \d{R}$; that is, the dose-response curve is flat. This null hypothesis holds if and only if the average value of the potential outcome is unaffected by the value of the exposure to which units are assigned. However, $Y(a)$ is not typically observed for all units in the population, but instead the outcome $Y := Y(A)$ corresponding to the exposure value actually received is observed. Thus, $m$ is not a mapping of the joint distribution of the pair $(A, Y)$, so the null hypothesis that $m$ is flat cannot be tested using this data. The first step in developing a testing procedure is to translate the causal problem into a problem that is testable with the observed data, which is called \emph{identification} in the causal inference literature.
We first assume that (i) each unit's potential outcomes are independent of all other units' exposures
and (ii) the observed outcome $Y$ almost surely equals $Y(A)$.
If (i)--(ii) hold and in addition (iii) $Y(a) \independent A$ for all $a \in \s{A}_0$, then $m(a) = E[Y \mid A = a]$ for all $a \in \s{A}_0$. In this case, $m$ is identified with a univariate regression function, so the null hypothesis that $m$ is flat on $\s{A}_0$ can be tested using existing work from the nonparametric regression literature (see, e.g.\ \citealp{eubank1990testing, andrews1997kolmogorov, horowitz2001adaptive}, among many others). However, assumption (iii) typically only holds in experiments in which $A$ is randomly assigned. In observational studies, there are typically \emph{confounding variables} that impact both $A$ and $Y(a)$, thus invalidating (iii). In these settings, tests based on nonparametric regression may not have valid type I error rate, even asymptotically.
We suppose that a collection $W \in \s{W} \subseteq \d{R}^p$ of possible confounders is recorded. If (i)--(ii) hold and in addition (iv) $Y(a) \independent A \mid W$ for all $a \in \s{A}_0$, known as \emph{no unmeasured confounding}, and (v) all $a \in \s{A}_0$ are in the support of the conditional distribution of $A$ given $W = w$ for almost every $w$, known as \emph{positivity}, then $m(a)= \theta_0(a) := E[ E(Y \mid A=a, W)]$, which is known as the G-computed regression function \citep{robins1986, gill2001}. Therefore, under (i)--(ii) and (iv)--(v), $m$ is flat on $\s{A}_0$ if and only if $\theta_0$ is flat on $\s{A}_0$.
Here, we assume that we observe independent and identically distributed random vectors $(Y_1, A_1, W_1), \dotsc, (Y_n, A_n,W_n)$ from a distribution $P_0$ contained in the nonparametric model $\s{M}_{NP}$ consisting of all distributions on $\s{Y} \times \s{A} \times \s{W}$. For a distribution $P \in \s{M}_{NP}$, we denote by $F_P$ the marginal distribution of $A$ under $P$, $\s{A}_P$ the support of $F_P$, $\mu_P(a,w) := E_P[Y \mid A =a, W=w]$ the outcome regression function, $Q_P$ the marginal distribution of $W$, and $\theta_P(a) := \int \mu_P(a, w) \, Q_P(dw)$ the G-computed regression function under $P$. Throughout, we use the subscript $0$ to refer to evaluation at or under $P_0$; for example, we denote by $F_0$ the marginal distribution function of $A$ under $P_0$ and by $\s{A}_0$ the support of $F_0$. Denoting by $C_b(S)$ the class of continuous and bounded functions on a subset $S$ of $\d{R}$, we assume that $P_0$ is contained in the statistical model $\s{M} := \{ P \in \s{M}_{NP}: \theta_P \in C_b(\s{A}_P)\}$.
In this article, we focus on testing the null hypothesis $H_0 : P_0 \in \s{M}_0 \subset \s{M}$ for $\s{M}_0 := \left\{ P \in \s{M} : \theta_P(a) = \theta_P(a') \text{ for all } a, a' \in \s{A}_P \right\}$ versus the complementary alternative $H_A : P_0 \in \s{M}_A := \s{M} \backslash \s{M}_0$.
This null hypothesis holds if and only if $\theta_0(a) = \gamma_0$ for all $a \in \s{A}_0$, where $\gamma_0 := \int \theta_0(a) \,F_0(da) = \iint \mu_0(a, w) \, Q_0(dw) \,F_0(da)$. As noted above, conditions (i)--(ii) and (iv)--(v) imply that $H_0$ holds if and only if the exposure has no causal effect on the average outcome in the sense that setting $A$ to $a$ for all units in the population yields the same average outcome for all $a \in \s{A}_0$. On the other hand, $H_A$ holds if and only if at least two exposures in $\s{A}_0$ yield different average outcomes. Our null hypothesis is stated in terms of the possibly unknown $\s{A}_0$ because condition (v) does not hold for $a \notin \s{A}_0$, and in fact $m(a)$ is not identified in the observed data for such $a$ without further assumptions.
A special case of our null hypothesis is that $\mu_0(a, w) = \mu_0(a', w)$ for all $a, a' \in \s{A}_0$ and almost all $w$. In this case, $\gamma_0 = E_0[Y]$. Under conditions (i)--(ii) and (iv)--(v), this happens if and only if $E_0[Y(a) \mid W = w] =E_0[Y(a') \mid W = w]$ for almost all $w$---i.e. there is no effect of the exposure on the average potential outcome for any strata of $W$ in the population. We shall refer to this case as the \emph{strong} null hypothesis. We emphasize that our null hypothesis $H_0$ can hold even if the strong null does not, since interactions between the exposure and covariates may average out to yield a flat G-computed regression curve. We note that \cite{luedtke2019omnibus} recently proposed a general procedure for testing null hypotheses of the form $R_0(O) \stackrel{d}{=} S_0(O)$, where the generic observation $O$ follows distribution $P_0$, and the functions $R_0$ and $S_0$ may depend on $P_0$. \cite{luedtke2019omnibus} demonstrated that their procedure can be used to consistently test the strong null hypothesis stated above, albeit with type I error rate tending to zero. Our null hypothesis may also be stated in their general form with $R_0(Y,A,W) := \theta_0(A)$ and $S_0(Y, A, W) := \gamma_0$. However, their results do not apply to our null hypothesis because their Condition~3 does not hold for $R_0 = \theta_0$.
For a measure $\lambda$, we define $\| h \|_{\lambda, p} := \left[ \int | h(x) |^p \, d\lambda(x) \right]^{1/p}$ for $p \in [1,\infty)$, and $\| h \|_{\lambda, \infty} := \sup_{x \in \n{supp}(\lambda)} |h(x)|$. For a probability measure $P$ and $P$-integrable function $h$, we define $Ph := \int h \, dP$. We define $\d{P}_n$ as the empirical distribution function of the observed data.
\subsection{Testing in terms of the primitive function}
Our procedure will be based on estimating a primitive parameter (i.e.\ an integral) of $\theta_0$. A useful analogy to consider is testing that a density function $h$ is flat on $[0,1]$. A density function, like $\theta_0$, is a challenging parameter to estimate in a nonparametric model, and the fastest attainable rate is slower than $n^{-1/2}$. However, $h$ is flat if and only if the corresponding cumulative distribution function $H$, which is the primitive of $h$, is the identity function on $[0,1]$, which is further equivalent to $H(x) - x = 0$ for all $x \in [0,1]$. In our setting, we define the primitive function as $\Gamma_0(a) := \int_{-\infty}^a \theta_0(u) \, dF_0(u)$. We integrate against the marginal distribution $F_0$ of $A$ because $\theta_0$ is only estimable on the support of $F_0$. We then note that $\theta_0(a) = \gamma_0$ for all $a \in \s{A}_0$ if and only if $\Gamma_0(a) = \int_{-\infty}^a \gamma_0 \, dF_0(u) = \gamma_0 F_0(a)$. Therefore, our null hypothesis is equivalent to the statement that $\Omega_0(a) := \Gamma_0(a) - \gamma_0 F_0(a)$ equals $0$ for all $a \in \s{A}_0$. This is stated formally in the following result.
\begin{prop}\label{prop:equivalence}
If $\theta_0$ is continuous on $\s{A}_0$, then the following are equivalent: (1) $P_0 \in \s{M}_0$; (2) $\theta_0(a) = \gamma_0$ for all $a \in \s{A}_0$; (3) $\Omega_0(a) = 0$ for all $a \in \d{R}$; (4) $\|\Omega_0\|_{F_0, p} = 0$ for all $p \geq 1$.
\end{prop}
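As a simple numerical illustration of Proposition~\ref{prop:equivalence} (a sketch of our own, not part of the formal development), the following Python code approximates $\Omega_0$ on a grid when $F_0$ is the standard normal distribution, once for a flat $\theta_0$ and once for $\theta_0(a)=a$; the supremum of $|\Omega_0|$ is numerically zero in the first case and clearly positive in the second.
\begin{verbatim}
import numpy as np

def Omega(theta, grid):
    """Omega(a) = int_{-inf}^a [theta(u) - gamma] dF_0(u) with F_0 = N(0,1),
    approximated by the trapezoidal rule on a fine grid."""
    pdf = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)
    w = np.diff(grid)
    vals = theta(grid) * pdf
    gamma = np.sum((vals[1:] + vals[:-1]) / 2 * w)          # gamma_0
    f = (theta(grid) - gamma) * pdf
    return np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2 * w)))

grid = np.linspace(-5, 5, 20001)
flat = Omega(lambda a: np.ones_like(a), grid)   # flat theta_0: Omega_0 == 0
tilt = Omega(lambda a: a, grid)                 # theta_0(a) = a: Omega_0(a) = -phi(a)
print(np.max(np.abs(flat)), np.max(np.abs(tilt)))           # ~0 and ~0.399
\end{verbatim}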
\begin{figure}
\caption{Four hypothetical dose-response curves $\theta(a)$ (left) and their corresponding primitives $\Omega(a)$ (right). The marginal distribution of the exposure is the standard normal.}
\label{fig:theta_omega}
\end{figure}
For illustrative purposes, Figure~\ref{fig:theta_omega} displays four hypothetical $\theta_0$'s and their corresponding $\Omega_0$'s. We note that $\Omega_0(a) = 0$ for all $a \in \d{R}$ if and only if $\Omega_0(a) = 0$ for all $a \in \s{A}_0$. This, combined with Proposition~\ref{prop:equivalence}, indicates that in the model $\s{M}$, testing $H_0$ is equivalent to testing the null hypothesis $\|\Omega_0\|_{F_0, p} = 0$ versus the alternative $\|\Omega_0\|_{F_0, p} > 0$. In the case of testing uniformity of a density, this is analogous to testing that $\|H - \n{Id}\|_{[0,1],p} = 0$ for $\n{Id}(x) = x$ the identity map, which is exactly what the Kolmogorov-Smirnov (in which $p = \infty$) and Anderson-Darling (in which $p = 2$) tests do. Furthermore, representing the problem as a test that $\|H - \n{Id}\|_{[0,1],p} = 0$ is useful because $H$ is straightforward to estimate at the $n^{-1/2}$ rate, unlike a density function. In our case, $\Omega_0$ is pathwise differentiable in the nonparametric model with an estimable influence function under standard conditions, unlike $\theta_0$. Specifically, defining $g_0(a, w) := G_0(da, w) / F_0(da)$, where $G_0(a , w) := P_0(A \leq a \mid W = w)$ is the conditional distribution of $A$ given $W =w$ evaluated at $a$, we have the following result.
\begin{prop}\label{prop:eif}
For each $a_0 \in \d{R}$, $\Omega_0(a_0)$ is pathwise differentiable relative to the subset of $\s{M}$ in which $g_P$ is almost surely bounded away from zero and $E_P\left[Y^2 \right] < \infty$, and its efficient influence function is
\begin{align*}
D_{a_0,0}^*(y, a, w) &:= \left[I_{(-\infty, a_0]}(a) - F_0(a_0)\right] \left[\frac{ y - \mu_0(a,w)}{g_0(a, w)} + \theta_{0}(a) - \gamma_0\right] \\
&\qquad +\int \left[I_{(-\infty, a_0]}(u) - F_0(a_0)\right] \mu_0(u, w) \, F_0(du) - 2\Omega_0(a_0)\ .
\end{align*}
\end{prop}
We note that $g_0(a, w) = P_0(A = a \mid W =w) / P_0(A = a)$ for $a$ such that $P_0(A = a) > 0$, and $g_0(a, w) = \left[\frac{d}{da} P_0( A \leq a \mid W =w)\right] /\left[ \frac{d}{da} F_0(a)\right]$ for $a$ where $F_0$ is absolutely continuous. Hence, $g_0$ generalizes both the standardized generalized propensity score (GPS) and the standardized propensity score.
For any $p \in [1,\infty]$, Proposition~\ref{prop:eif} allows us to test $H_0$ in the following manner: (1) construct a uniformly asymptotically linear estimator $\Omega_n^\circ$ of $\Omega_0$ for which $\{n^{1/2}[ \Omega_n^\circ(a) - \Omega_0(a)] : a \in \s{A}_0\}$ converges weakly to a Gaussian limit process, (2) use the estimated influence function of $\Omega_n^\circ(a)$ to construct an estimator $T_{n,\alpha,p}$ of the $1-\alpha$ quantile of $n^{1/2}\|\Omega_n^\circ - \Omega_0\|_{F_0,p}$, and (3) reject $H_0$ at level $\alpha$ if $n^{1/2}\|\Omega_n^\circ \|_{F_n,p} > T_{n,\alpha, p}$. In the remainder of this section, we provide details for accomplishing each of these three steps.
\subsection{Estimating the primitive function}\label{sec:primitive_est}
The first step in our testing procedure is to construct an asymptotically linear estimator of $\Omega_0(a)$ for each fixed $a$. The four nuisance parameters present in the definition of $\Omega_0(a_0)$ and its nonparametric efficient influence function $D_{a_0,0}^*$ are the outcome regression $\mu_0$, the standardized propensity $g_0$, and the marginal distributions $F_0$ and $Q_0$ corresponding to $A$ and $W$, respectively. Given estimators $\mu_n$ and $g_n$ of $\mu_0$ and $g_0$, respectively, we can construct an estimator $D_{a_0,n}^*$ of the influence function by plugging in $\mu_n$ for $\mu_0$, $g_n$ for $g_0$, and the empirical marginal distributions $F_n$ and $Q_n$ for $F_0$ and $Q_0$. A one-step estimator of $\Omega_0(a_0)$ is then given by $\Omega_n(a_0) := \Omega_{\mu_n, F_n, Q_n}(a_0)+ \d{P}_n D_{a_0,n}^*$, where $\Omega_{\mu_n, F_n, Q_n}(a_0) := \iint \left[ I_{(-\infty, a_0]}(a) - F_n(a_0)\right] \mu_n(a, w) \, dF_n(a) \, dQ_n(w)$ is the plug-in estimator of $\Omega_0$. In expanding $\Omega_n(a_0)$, some terms cancel and we are left with
\begin{equation}
\Omega_n(a_0) = \frac{1}{n}\sum_{i=1}^n \left[ I_{(-\infty, a_0]}(A_i) - F_n(a_0) \right] \left[\frac{ Y_i - \mu_n(A_i,W_i)}{g_n(A_i, W_i)} + \int \mu_n(A_i, w) \, dQ_n(w) \right] \ .\label{eq:one_step_est}
\end{equation}
If we were to base our test on $\Omega_n$, then, as we will see in Section~\ref{sec:asymptotic}, the large-sample properties of our test would depend on consistency of $\Omega_n$ and on weak convergence of $\{ n^{1/2}[\Omega_n(a) - \Omega_0(a)] : a\in\s{A}_0\}$ as a process. Such statistical properties of asymptotically linear estimators of pathwise differentiable parameters depend on estimators of nuisance parameters in two important ways. First, negligibility of a so-called \emph{second-order} remainder term requires negligibility of $(\mu_n - \mu_0)(g_n - g_0)$ in an appropriate sense. Second, negligibility of \emph{empirical process} remainder terms can be guaranteed if the nuisance estimators fall in sufficiently small function classes. In observational studies, researchers can rarely specify a priori correct parametric models for $\mu_0$ or $g_0$, which motivates the use of data-adaptive (e.g.\ machine learning) estimation of these functions in order to achieve negligibility of the second-order remainder. However, data-adaptive estimators typically constitute large function classes. Hence, finding estimators that simultaneously satisfy these two requirements can require a delicate balance. Cross-fitting has been found to resolve this challenge by removing the need for nuisance estimators to fall in small function classes \citep{zheng2011cvtmle, belloni2018uniform, kennedy2019incremental}. Instead of basing our test on $\Omega_n$, we will therefore base our test on a cross-fitted version of $\Omega_n$, which we now define.
For a deterministic integer $V \in \{2, 3, \dotsc, \lfloor n/2\rfloor \}$, we randomly partition the indices $\{1, \dotsc, n\}$ into $V$ disjoint sets $\s{V}_{n,1}, \dotsc, \s{V}_{n,V}$ with cardinalities $N_1, \dotsc, N_V$. We require that these sets be as close to equal sizes as possible, so that $|N_v - n/V| \leq 1$ for each $v$, and that the number of folds $V$ be bounded as $n$ grows. For each $v \in \{1, \dotsc, V\}$, we define $\s{T}_{n,v} := \{ O_i : i \notin \s{V}_{n,v}\}$ as the \emph{training set} for fold $v$, where $O_i := (Y_i, A_i, W_i)$, and we define $\mu_{n,v}$ and $g_{n,v}$ as nuisance estimators that are estimated using only the observations from $\s{T}_{n,v}$. Similarly, we define $F_{n,v}$ and $Q_{n,v}$ as the marginal empirical distributions of $A$ and $W$, respectively, corresponding to the observations in $\s{T}_{n,v}$. We then define the cross-fitted estimator $\Omega_n^\circ$ of $\Omega_0$ as
\begin{align}\Omega_{n}^\circ(a_0) &:=\frac{1}{V} \sum_{v=1}^V \left\{ \frac{1}{N_v}\sum_{i \in \s{V}_{n,v}} \left[ I_{(-\infty, a_0]}(A_i) - F_{n,v}(a_0)\right] \frac{ Y_i - \mu_{n,v}(A_i,W_i)}{g_{n,v}(A_i, W_i)} \right.\nonumber\\
&\qquad\qquad\qquad \left. + \frac{1}{N_v^2}\sum_{i, j \in \s{V}_{n,v}} \left[ I_{(-\infty, a_0]}(A_i) - F_{n,v}(a_0)\right] \mu_{n,v}(A_i, W_j) \right\}\ . \label{eq:one_step_cv_est} \end{align}
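To make this estimator concrete, the following Python sketch, which is ours and not taken from the accompanying \texttt{R} package, evaluates $\Omega_n^\circ$ on a grid of points (in Step~3 below, the grid is taken to be the observed exposures), assuming that the user supplies, for each fold, fitted nuisance functions $\mu_{n,v}$ and $g_{n,v}$ as vectorized callables.
\begin{verbatim}
import numpy as np

def cross_fitted_Omega(A, Y, W, folds, mu_hats, g_hats, a_grid):
    """Cross-fitted one-step estimator evaluated at each point of a_grid.

    folds:   list of index arrays (the validation sets V_{n,v})
    mu_hats: list of callables; mu_hats[v](a, W) is mu_{n,v} fit on fold v's training set
    g_hats:  list of callables; g_hats[v](a, W) is g_{n,v} fit on fold v's training set
    """
    A, Y, W, a_grid = map(np.asarray, (A, Y, W, a_grid))
    n, V = len(A), len(folds)
    Omega = np.zeros(len(a_grid))
    for v, val_idx in enumerate(folds):
        train_idx = np.setdiff1d(np.arange(n), val_idx)
        A_v, Y_v, W_v = A[val_idx], Y[val_idx], W[val_idx]
        Nv = len(val_idx)
        # F_{n,v}: empirical cdf of A over the training set, evaluated on a_grid
        F_nv = np.array([np.mean(A[train_idx] <= a0) for a0 in a_grid])
        # centered indicators I(A_i <= a0) - F_{n,v}(a0); rows indexed by a0
        centered = (A_v[None, :] <= a_grid[:, None]).astype(float) - F_nv[:, None]
        resid = (Y_v - mu_hats[v](A_v, W_v)) / g_hats[v](A_v, W_v)
        term1 = centered @ resid / Nv
        # (1/Nv) sum_j mu_{n,v}(A_i, W_j) over the validation fold, for each i
        mu_bar = np.array([np.mean(mu_hats[v](np.full(Nv, a_i), W_v)) for a_i in A_v])
        term2 = centered @ mu_bar / Nv
        Omega += (term1 + term2) / V
    return Omega
\end{verbatim}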
In the next section, we indicate properties of the estimators $\mu_{n,v}$ and $g_{n,v}$ that imply certain large-sample properties of $\Omega_n^\circ$, which in turn imply properties of our testing procedure. In particular, we provide conditions under which $\Omega_n^\circ(a)$ is uniformly asymptotically linear with influence function $D_{a,0}^*$, meaning that
\begin{equation} \Omega_n^\circ(a) - \Omega_0(a) = \d{P}_n D_{a,0}^* + R_n(a) \ ,\label{eq:unif_asy_lin} \end{equation}
where $\sup_{a\in\s{A}_0} |R_{n}(a)| = o_{P_0}(n^{-1/2})$. If \eqref{eq:unif_asy_lin} holds and in addition the one-dimensional class of functions $\{D_{a, 0}^* : a \in \s{A}_0\}$ is $P_0$-Donsker, then $\{ n^{1/2}[\Omega_n^\circ(a) - \Omega_0(a)] : a \in \s{A}_0 \}$ converges weakly in the space $\ell^{\infty}(\s{A}_0)$ of bounded real-valued functions on $\s{A}_0$ to a mean-zero Gaussian process $Z_0$ with covariance function $\Sigma_0(s,t) := P_0 [ D_{s, 0}^* D_{t,0}^*]$. Since the $L_p(F_0)$-norm is a continuous functional on $\ell^{\infty}(\s{A}_0)$ for any $p \in [1, \infty]$, by the continuous mapping theorem we will then have $n^{1/2} \| \Omega_n^\circ - \Omega_0\|_{F_0,p} \indist \| Z_0 \|_{F_0, p}$. Given an estimator $D_{a,n,v}^*$ of $D_{a,0}^*$ for each $v$, we can approximate the distribution $\| Z_0 \|_{F_0, p}$ by simulating sample paths of a mean-zero Gaussian process $Z_n$ with covariance function $\Sigma_n(s,t) := \frac{1}{V} \sum_{v=1}^V \d{P}_{n,v} D_{s,n,v}^* D_{t,n,v}^*$, and computing the $L_p(F_n)$-norm of these sample paths, where $\d{P}_{n,v}$ is the empirical distribution for the validation fold $\s{V}_{n,v}$. Putting it all together, our fully specified procedure for testing the null hypothesis $H_0$ is as follows:
\begin{description}[style=multiline,leftmargin=1.8cm]
\item[Step 1:] Split the sample into $V$ sets $\s{V}_{n,1}, \dotsc, \s{V}_{n,V}$ of approximately equal size.
\item[Step 2:] For each $v \in \{1, \dotsc, V\}$, construct estimates $\mu_{n,v}$ and $g_{n,v}$ of the nuisance functions $\mu_0$ and $g_0$ based on the training set $\s{T}_{n,v}$ for fold $v$.
\item[Step 3:] For each $a$ in the observed values of the exposure $\s{A}_n := \{A_1, \dotsc, A_n\}$, use $\mu_{n,v}$ and $g_{n,v}$ to construct $\Omega_n^\circ(a)$ as defined in \eqref{eq:one_step_cv_est}.
\item[Step 4:] Let $T_{n, \alpha, p}$ be the $1-\alpha$ quantile of $ \left( \frac{1}{n} \sum_{i=1}^n |Z_{n}(A_i)|^p\right)^{1/p}$ for $p < \infty$ or $\max_{1 \leq i \leq n} |Z_n(A_i)|$ for $p = \infty$, where, conditional on $O_1, \dotsc, O_n$, $(Z_n(A_1), \dotsc, Z_n(A_n))$ is distributed according to a mean-zero multivariate normal distribution with covariances given by $\Sigma_n(A_i,A_j) := E[ Z_{n}(A_i) Z_n(A_j) \mid O_1, \dotsc, O_n] = \frac{1}{V} \sum_{v=1}^V \d{P}_{n,v} D_{A_i,n,v}^*D_{A_j,n,v}^*$ for
\begin{align*}
D_{a_0,n,v}^*(y, a, w)&=\left[I_{(-\infty, a_0]}(a) - F_{n,v}(a_0)\right] \left[\frac{ y - \mu_{n,v}(a,w)}{g_{n,v}(a, w)} + \theta_{n,v}(a) - \gamma_{n,v}\right] \\
&\qquad + \int \left[I_{(-\infty, a_0]}(u) - F_{n,v}(a_0)\right] \mu_{n,v}(u, w) \, F_{n,v}(du) - 2\Omega_{\mu_{n,v}, F_{n,v}, Q_{n,v}}(a_0) \ ,
\end{align*}
where $\theta_{n,v}(a) := \int \mu_{n,v}(a, w) \, dQ_{n,v}(w)$ and $\gamma_{n,v} := \iint \mu_{n,v}(a, w) \, dF_{n,v}(a) \, dQ_{n,v}(w)$.
\item[Step 5:] Reject $H_0$ at level $\alpha$ if $n^{1/2}\|\Omega_n^\circ \|_{F_n, p} > T_{n, \alpha, p}$.
\end{description}
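To make the simulation in Step 4 concrete, the following minimal sketch (in Python with NumPy; the function and argument names are ours and purely illustrative) computes $T_{n,\alpha,p}$ from an $n \times n$ matrix whose $(i,j)$ entry is $D_{A_j,n,v}^*(O_i)$, with row $i$ evaluated using the nuisance estimates for the fold containing observation $i$:
\begin{verbatim}
import numpy as np

def critical_value(D_star, alpha=0.05, p=np.inf, n_draws=10000, seed=1):
    # D_star[i, j] = estimated influence function D*_{A_j, n, v}(O_i), row i
    # evaluated with the nuisance fits of the fold containing observation i.
    n = D_star.shape[0]
    sigma = D_star.T @ D_star / n                    # Sigma_n(A_i, A_j)
    rng = np.random.default_rng(seed)
    paths = rng.multivariate_normal(np.zeros(n), sigma, size=n_draws,
                                    method="eigh")   # sample paths of Z_n
    if np.isinf(p):
        norms = np.abs(paths).max(axis=1)            # L_infinity(F_n)-norm
    else:
        norms = (np.abs(paths) ** p).mean(axis=1) ** (1.0 / p)
    return np.quantile(norms, 1 - alpha)             # T_{n, alpha, p}
\end{verbatim}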
In practice, we recommend using $p=\infty$ for several reasons. First, as illustrated in the numerical studies, tests based on $p=\infty$ offer better finite-sample power for detecting non-linear alternatives than other $p$, at no cost to test size. Relatedly, we expect tests based on $p= \infty$ to be more sensitive to deviations from the null that are concentrated on a narrow region of the support of $F_0$. Second, unlike $\|\Omega_0\|_{F_0, p}$ for $p < \infty$, $\|\Omega_0\|_{F_0, \infty}$ only depends on $F_0$ through its support, which makes it a more interpretable and generalizable parameter.
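The sketch below strings Steps 1--3 together for the case $p = \infty$: it cross-fits user-supplied nuisance learners, forms the one-step estimate at every observed exposure value, and assembles the matrix of estimated influence function values from Step 4 that \texttt{critical\_value} above expects. It is an illustration rather than a reference implementation: \texttt{fit\_mu} and \texttt{fit\_g} are placeholders for the analyst's nuisance learners, and we write the augmentation term of \eqref{eq:one_step_cv_est} in the standard inverse-weighted-residual form.
\begin{verbatim}
import numpy as np

def flatness_statistic(Y, A, W, fit_mu, fit_g, V=10, seed=1):
    # fit_mu(Y, A, W) and fit_g(A, W) must return vectorized prediction
    # functions mu_hat(a, w) and g_hat(a, w) trained on the supplied data.
    n = len(A)
    folds = np.random.default_rng(seed).permutation(n) % V          # Step 1
    a_grid = np.sort(A)                                             # observed exposures
    omega = np.zeros(n)                                             # Omega_n(a) on a_grid
    D_star = np.zeros((n, n))                                       # rows: obs, cols: a_grid
    for v in range(V):
        val, trn = np.where(folds == v)[0], np.where(folds != v)[0]
        mu_hat = fit_mu(Y[trn], A[trn], W[trn])                     # Step 2
        g_hat = fit_g(A[trn], W[trn])
        Yv, Av, Wv, Nv = Y[val], A[val], W[val], len(val)
        ind = (Av[None, :] <= a_grid[:, None]).astype(float)        # I(A_i <= a)
        cent = ind - ind.mean(axis=1, keepdims=True)                # I(A_i <= a) - F_{n,v}(a)
        resid = (Yv - mu_hat(Av, Wv)) / g_hat(Av, Wv)
        mu_mat = np.array([mu_hat(np.full(Nv, a), Wv) for a in Av]) # mu_{n,v}(A_i, W_j)
        theta = mu_mat.mean(axis=1)                                 # theta_{n,v}(A_i)
        gamma = mu_mat.mean()                                       # gamma_{n,v}
        plug_in = cent @ theta / Nv                                 # double average in the display
        omega += (cent @ (resid + theta) / Nv) / V                  # Step 3: one-step estimate
        integral = cent @ mu_mat / Nv                               # int [I - F] mu(., w) dF_{n,v}
        D_star[val, :] = (cent * (resid + theta - gamma)[None, :]
                          + integral - 2.0 * plug_in[:, None]).T    # Step 4: D*_{a, n, v}
    stat = np.sqrt(n) * np.abs(omega).max()                         # n^{1/2} ||Omega_n||_{F_n, inf}
    return stat, D_star
\end{verbatim}
The decision in Step 5 is then simply \texttt{stat > critical\_value(D\_star, alpha=0.05, p=np.inf)}.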
\section{Asymptotic properties of the proposed procedure}\label{sec:asymptotic}
\subsection{Doubly-robust consistency}
In this section, we derive sufficient conditions for three large-sample properties of our proposed test: consistency under fixed alternatives, asymptotically correct type I error rate, and positive asymptotic power under local alternatives. Each of these three properties is established by first proving an accompanying result for the estimator $\Omega_n^\circ$ upon which the test is based. We start by showing that the proposed test is doubly-robust consistent, meaning that it rejects any alternative hypothesis with probability tending to one as long as either of the two nuisance parameters involved in the problem is estimated consistently. We first introduce several conditions upon which our results rely.
\begin{description}[style=multiline,leftmargin=1.1cm]
\item[(A1)] There exist $K_0, K_1, K_2 \in (0, \infty)$ such that, almost surely as $n \to \infty$ and for all $v$, $\mu_{n,v}$ and $\mu_0$ are contained in a class of functions $\s{F}_0$ and $g_{n,v}$ and $g_0$ are contained in a class of functions $\s{F}_{1}$, where $|\mu| \leq K_{0}$ for all $\mu \in \s{F}_{0}$ and $K_{1} \leq g \leq K_{2}$ for all $g \in \s{F}_{1}$. Additionally, $E_0[Y^2] < \infty$.
\item[(A2)] There exist $\mu_{\infty} \in \s{F}_0$ and $g_{\infty} \in \s{F}_1$ such that $\max_v P_0 (\mu_{n,v} - \mu_{\infty})^2 \inprob 0$ and $\max_v P_0(g_{n,v} - g_{\infty})^2 \inprob 0$.
\item[(A3)] There exist subsets $\s{S}_1, \s{S}_2$ and $\s{S}_3$ of $\s{A}_0 \times\s{W}$ such that $P_0(\s{S}_1 \cup \s{S}_2 \cup \s{S}_3) = 1$ and:
\begin{enumerate}[(a)]
\item $\mu_{\infty}(a,w) = \mu_0(a,w)$ for all $(a,w) \in \s{S}_1$;
\item $g_{\infty}(a,w) = g_0(a,w)$ for all $(a,w) \in \s{S}_2$;
\item $\mu_{\infty}(a,w) = \mu_0(a,w)$ and $g_{\infty}(a,w) = g_0(a,w)$ for all $(a,w) \in \s{S}_3$.
\end{enumerate}
\end{description}
Condition (A1) requires that the true nuisance functions as well as their estimators satisfy certain boundedness constraints. The requirement that $g_0 \geq K_1 > 0$ is a type of \emph{overlap} or \emph{positivity} condition. For mass points $a \in \s{A}_0$, this is equivalent to requiring that $P_0(A = a \mid W = w) / P_0(A = a) \geq K_1 > 0$ for almost every $w$ and every such $a$. If there are a finite number of mass points, then this condition is equivalent to the standard overlap condition used for $n^{-1/2}$-rate estimation with a discrete exposure. However, if there are an infinite number of mass points, then the condition is weaker than the standard overlap condition. Similarly, for points $a \in \s{A}_0$ where $F_0$ is absolutely continuous, (A1) is related to but strictly weaker than the standard overlap condition for estimation of a dose-response curve with a continuous exposure, which requires that the conditional density $p_0(a \mid w)$ be bounded away from zero for almost all $w$ (e.g.\ condition (e) of Theorem 2 of \citealp{kennedy2016continuous}). Instead, (A1) requires that $p_0(a \mid w) / f_0(a)$ be bounded away from zero, which may hold even when $p_0(a \mid w)$ is arbitrarily close to zero. For example, if $A$ and $W$ are independent, so that $p_0(a \mid w) = f_0(a)$, then (A1) is automatically satisfied, whereas the standard overlap condition would not necessarily be.
Condition (A2) requires that the nuisance estimators converge to some limits $\mu_\infty$ and $g_\infty$. Condition (A3) is known as a double-robustness condition, since it is satisfied if either $\mu_\infty = \mu_0$ almost surely or $g_\infty = g_0$ almost surely. Double-robustness has been studied for over two decades, and is now commonplace in causal inference \citep{robins1994estimation, rotnitzky1998semiparametric, scharfstein1999adjusting, van2003unified, neugebauer2005prefer, bang2005doubly}. However, condition (A3) is slightly more general than standard double-robustness, since it is satisfied if either $\mu_\infty(a,w) = \mu_0(a,w)$ or $g_\infty(a,w) = g_0(a,w)$ for almost all $(a,w)$, which can happen even if neither $\mu_\infty = \mu_0$ nor $g_\infty = g_0$ almost surely.
Under these conditions, we have the following result concerning consistency of $\Omega_n^\circ$.
\begin{thm}[Doubly-robust consistency of $\Omega_n^\circ$]\label{thm:dr_cons_omega}
If (A1)--(A3) hold, then $\sup_{a \in \d{R}}|\Omega_n^\circ(a) - \Omega_0(a)| \inprob 0.$
\end{thm}
It follows immediately from Theorem~\ref{thm:dr_cons_omega} that if (A1)--(A3) hold, then $\|\Omega_n^\circ\|_{F_0, p} \inprob \|\Omega_0\|_{F_0,p}$ for any $p \in [1, \infty]$, so that $P_0\left(\|\Omega_n^\circ\|_{F_0, p} > t_n\right) \longrightarrow 1$ for any $t_n \inprob 0$ and $P_0 \in \s{M}$ such that $H_A$ holds. In order to fully establish consistency of the proposed test, we need to justify using $\|\cdot\|_{F_n,p}$ instead of $\|\cdot\|_{F_0,p}$, and in addition we need to show that $T_{n,\alpha, p} / n^{1/2} \inprob 0$. We do so in the next result, and conclude that the proposed test is doubly-robust consistent.
\begin{thm}[Doubly-robust consistency of proposed test]\label{thm:dr_cons_test}
If conditions (A1)--(A3) hold, then $P_0\left(n^{1/2}\|\Omega_n^\circ \|_{F_n, p} > T_{n, \alpha, p} \right) \longrightarrow 1$ for any $P_0 \in \s{M}_A$.
\end{thm}
\subsection{Asymptotically correct type I error rate}
Next, we provide conditions under which the proposed test has asymptotically correct type I error rate. We start by introducing an additional condition that we will need.
\begin{description}[style=multiline,leftmargin=1.1cm]
\item[(A4)] Both $\mu_\infty = \mu_0$ and $g_\infty = g_0$, and $r_n := \max_v \left|P_0 (\mu_{n,v} - \mu_0)( g_{n,v} - g_0)\right| = o_{P_0}\left(n^{-1/2}\right)$.
\end{description}
In concert with condition (A2), condition (A4) requires that both estimators are consistent. Furthermore, condition (A4) requires that the mean of the product of the nuisance errors tend to zero in probability faster than $n^{-1/2}$. We note that $r_n^2 \leq \max_v P_0(\mu_{n,v} - \mu_0)^2 P_0(g_{n,v} - g_0)^2$, so that $r_n$ is bounded by the product of the $L_2(P_0)$ rates of convergence of the two estimators. Therefore, if in particular $\max_v \|\mu_{n,v} - \mu_0\|_{P_0,2} = o_{P_0}(n^{-1/4})$ and $\max_v \| g_{n,v} - g_0\|_{P_0,2} = o_{P_0}(n^{-1/4})$, then (A4) is satisfied. For example, if $\mu_n$ and $g_n$ are based on correctly-specified parametric models, then (A4) typically holds with room to spare. However, in many contexts, \textit{a priori} correctly specified parametric models for $\mu_0$ and $g_0$ are not available, which motivates the use of semiparametric and nonparametric estimators for $\mu_n$ and $g_n$. While we can expect such semi- or nonparametric estimators to be consistent for a larger class of true functions than parametric estimators, whether (A4) is satisfied depends on the adaptability of the estimators to the specific, often unknown nature of $\mu_0$ and $g_0$. For this reason, we suggest using ensemble methods based on cross-validation in practice: leveraging several parametric, semiparametric, and nonparametric estimators may give the best chance of minimizing bias and ensuring that (A4) is satisfied. These themes are prevalent throughout the doubly-robust estimation literature, and indeed they are fundamental to nonparametric estimation problems in causal inference (see, e.g.\ \citealp{van2003unified,neugebauer2005prefer, kennedy2016continuous}).
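As one simple, concrete instance of this recommendation, the sketch below (assuming scikit-learn; a discrete cross-validation selector rather than a full Super Learner ensemble, and not a reproduction of the estimators used in Section~\ref{sec:simulation}) constructs an outcome-regression learner whose returned prediction function matches the \texttt{fit\_mu} interface assumed in the earlier sketch. Estimation of $g_0$, which involves the ratio of a conditional to a marginal density, is not shown.
\begin{verbatim}
import numpy as np
from sklearn.base import clone
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def fit_mu(Y, A, W, n_splits=5, seed=1):
    # Cross-validated selection over a small candidate library for mu_0(a, w);
    # a simple stand-in for a Super Learner ensemble.
    X = np.column_stack([A, W])
    library = [LinearRegression(),
               RandomForestRegressor(n_estimators=200, random_state=seed),
               GradientBoostingRegressor(random_state=seed)]
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    risks = []
    for learner in library:
        errs = []
        for trn, val in kf.split(X):
            pred = clone(learner).fit(X[trn], Y[trn]).predict(X[val])
            errs.append(np.mean((Y[val] - pred) ** 2))   # cross-validated squared-error risk
        risks.append(np.mean(errs))
    best = clone(library[int(np.argmin(risks))]).fit(X, Y)
    return lambda a, w: best.predict(np.column_stack([a, w]))
\end{verbatim}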
Under these conditions, we have the following result.
\begin{thm}[Weak convergence of $n^{1/2} ( \Omega_n^\circ - \Omega_0)$] \label{thm:weak_conv_omega}
If (A1)--(A2) and (A4) hold, then
\[\sup_{a \in \s{A}_0} \left| n^{1/2} \left[ \Omega_n^\circ(a) - \Omega_0(a) \right] - n^{1/2} \d{P}_n D_{a, 0}^* \right| \inprob 0 \ ,\]
and in particular, $\left\{ n^{1/2} \left[ \Omega_n^\circ(a)- \Omega_0(a)\right] : a \in \s{A}_0\right\}$ converges weakly as a process in $\ell^\infty(\s{A}_0)$ to a mean-zero Gaussian process $Z_0$ with covariance function given by $\Sigma_0(s,t) := P_0 \left[ D_{s,0}^* D_{t,0}^*\right]$.
\end{thm}
As with the relationship between Theorems~\ref{thm:dr_cons_omega} and~\ref{thm:dr_cons_test}, Theorem~\ref{thm:weak_conv_omega} does not quite imply that the proposed test has asymptotically correct size due to the two additional approximations made in the proposed test. Specifically, it follows from Theorem~\ref{thm:weak_conv_omega} that $P_0\left(\left\|\Omega_n^\circ \right\|_{F_0, p} > T_{0,\alpha,p} / n^{1/2}\right) \longrightarrow \alpha$, where $T_{0,\alpha,p}$ is the $1-\alpha$ quantile of $\|Z_0\|_{F_0,p}$. Validity of the proposed test follows if $\left\|\Omega_n^\circ\right\|_{F_n, p} - \left\|\Omega_n^\circ\right\|_{F_0, p} = o_{P_0}(n^{-1/2})$ and $T_{n,\alpha,p} \inprob T_{0,\alpha,p}$. Theorem~\ref{thm:test_size} verifies these facts and concludes that the test has asymptotically valid size.
\begin{thm}[Asymptotic validity of proposed test]\label{thm:test_size}
If conditions (A1)--(A2) and (A4) hold and the distribution function of $\|Z_0\|_{F_0,p}$ is strictly increasing and continuous in a neighborhood of $T_{0,\alpha,p}$, then $P_0\left(n^{1/2}\|\Omega_n^\circ \|_{F_n, p} > T_{n, \alpha, p} \right) \longrightarrow \alpha $ for any $P_0 \in \s{M}_0$.
\end{thm}
\subsection{Asymptotic behavior under local alternatives}
Finally, we demonstrate that the proposed test has power to detect local alternatives approaching the null at the rate $n^{-1/2}$. We let $h : \s{O} \to \d{R}$ be a score function satisfying $P_0h = 0$ and $P_0( h^2)< \infty$. We suppose that the local alternative distributions $P_n$ satisfy
\begin{equation}\lim_{n\to\infty} \int \left[ n^{1/2} \left( dP_n^{1/2} - dP_0^{1/2}\right) - \tfrac{1}{2} h \, dP_0^{1/2} \right]^2 = 0\label{eq:local_alt}\end{equation}
for some $P_0 \in \s{M}_0$. We then have the following result.
\begin{thm}[Weak convergence of $n^{1/2}\Omega_n^\circ$ under local alternatives] \label{thm:weak_conv_local}
If for each $n$, $(O_1, \dotsc, O_n)$ are independent and identically distributed from $P_n$ satisfying \eqref{eq:local_alt} and the conditions of Theorem~\ref{thm:test_size} hold, then $\left\{ n^{1/2} \Omega_n^\circ(a): a \in \s{A}_0\right\}$ converges weakly under $P_n$ in $\ell^\infty(\s{A}_0)$ to a Gaussian process $\overline{Z}_{0,h}$ with mean $E[\overline{Z}_{0,h}(a)] = P_0 \left( h D_{a,0}^*\right)$ and covariance $\Sigma_0(s,t) := P_0 \left[ D_{s,0}^* D_{t,0}^*\right]$.
\end{thm}
The limiting process $\overline{Z}_{0,h}$ in Theorem~\ref{thm:weak_conv_local} is equal in distribution to $\{ Z_0(a) + P_0 \left( h D_{a,0}^*\right) : a \in \s{A}_0\}$, where $Z_0$ is the limit Gaussian process when generating data under $P_0$ from Theorem~\ref{thm:weak_conv_omega}.
Theorem~\ref{thm:weak_conv_local} leads to the following local power result for the proposed test.
\begin{thm}[Power under local alternatives]\label{thm:local_alt_power}
If the conditions of Theorem~\ref{thm:weak_conv_local} hold and $T_{0,\alpha, p}$ is the $1-\alpha$ quantile of $\| Z_0 \|_{F_0, p}$, then $P_n\left(n^{1/2}\|\Omega_n^\circ \|_{F_n, p} > T_{n, \alpha, p} \right) \longrightarrow P\left( \|\overline{Z}_{0,h}\|_{F_0, p} >T_{0,\alpha,p} \right) $.
\end{thm}
We note that $P\left( \|\overline{Z}_{0,h}\|_{F_0, p} >T_{0,\alpha,p} \right) >\alpha$. Therefore, Theorem~\ref{thm:local_alt_power} implies that, if the two nuisance estimators converge sufficiently fast to the true functions, our test has non-trivial asymptotic power to detect local alternatives approaching the null at the rate $n^{-1/2}$.
\section{Simulation studies}\label{sec:simulation}
\subsection{Simulation study I: mixed continuous-discrete exposure}
We conducted two simulation studies to examine the finite-sample behavior of the proposed procedure under various null and alternative hypotheses. The general form of our first simulation procedure was as follows. We generated three continuous covariates $W \in \d{R}^3$ from a multivariate normal distribution with mean $(0,0,1)^T$ and identity covariance. In order to generate $A$ given $W$, we define $\lambda_{\beta,\kappa}(w) := \kappa + 2(1 - \kappa) \n{logit}^{-1}\left(\beta^T w - \beta_3\right)$, where $\beta \in \d{R}^3$, $\kappa \in (0, 1)$, and $\n{logit}(x) := \log[x / (1-x)]$, and we define $G_{\beta, \kappa}(u, w) := \lambda_{\beta, \kappa}(w) u + [1-\lambda_{\beta, \kappa}(w)] u^2$ and $G_{\beta,\kappa}^{-1}$ its inverse with respect to the first argument. Finally, we define the mixed discrete-continuous distribution function $F_0$ as $F_0(a) := 0.2\times \left[I_{[0,\infty)}(a) + I_{[0.5,\infty)}(a) + I_{[1,\infty)}(a)\right] + 0.4 \times B(a; 2,2)$, where $B$ is the distribution function of a beta random variable, and we let $F_0^-$ denote the generalized inverse of $F_0$. Given $W$, we then simulated $A$ as $F_0^- \circ G_{\beta, \kappa}^{-1}(Z, W)$, where $Z$ was a Uniform$(0,1)$ random variable independent of $W$, so that $A$ had marginal mass $0.2$ each at $0, 0.5,$ and $1$, and the remaining $0.4$ mass was distributed as $\n{Beta}(2,2)$. For all data generating processes, we set $\kappa = 0.1$ and $\beta = (-1,1,-1)^T$.
We generated $Y$ given $A$ and $W$ from a linear model with possible interactions and a possible quadratic component. Defining $\mu_{\gamma_1, \gamma_2, \gamma_3}(a, w) := \gamma_1^T \bar{w}+ \left( \gamma_2^T \bar{w}\right) a + \gamma_3 (a - 0.5)^2$, where $\bar{w} := (1, w)$, $\gamma_1$ and $\gamma_2$ are elements of $\d{R}^4$, and $\gamma_3 \in \d{R}$, we generated $Y$ from a normal distribution with mean $\mu_{\gamma_1, \gamma_2,\gamma_3}(A, W)$ and variance $ 1 + \left|\mu_{\gamma_1, \gamma_2, \gamma_3}(A,W) \right|$. Given these definitions, we then have $\theta_0(a) = \gamma_{1,1} + \gamma_{1,4} + \left( \gamma_{2,1} + \gamma_{2,4}\right) a + \gamma_3(a - 0.5)^2$. Hence, $H_0$ holds if and only if $\gamma_{2,1} = - \gamma_{2,4}$ and $\gamma_3 = 0$. We set $\gamma_1 = (0, 2, 2, -2)^T$ for all simulations, and we considered six combinations of $\gamma_2$ and $\gamma_3$. First, we set $\gamma_2 = (2, 2, 2, -2)^T$ and $\gamma_3 = 0$. We call this the \emph{weak null} because $\mu_0$ depends on $a$ even though $\theta_0$ does not. Second, we simulated data under the strong null by setting $\gamma_2 = (0,0,0,0)^T$ and $\gamma_3 = 0$, so that neither $\mu_0$ nor $\theta_0$ depends on $a$. We also simulated data under four alternative hypotheses. In the first three alternative hypotheses, we set $\gamma_3 = 0$, but varied $\gamma_{2,1} + \gamma_{2,4}$, which is the slope of $\theta_0$. We call these \emph{weak}, \emph{moderate}, and \emph{strong} (linear) alternatives. Finally, we set $\gamma_3 = 2$ and $\gamma_2 = (1,1,-1,-1)^T$, which we call the \emph{quadratic} alternative. These simulation settings are summarized for convenience in Table~\ref{tab:sim_settings}.
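For readers wishing to reproduce this data-generating mechanism, the sketch below (Python with NumPy and SciPy; the function name and defaults are ours) draws $(Y, A, W)$ as described above, inverting $G_{\beta,\kappa}(\cdot, w)$ analytically on $[0,1]$ and applying the generalized inverse $F_0^-$ in closed form:
\begin{verbatim}
import numpy as np
from scipy.special import expit
from scipy.stats import beta as beta_dist

def simulate_study_one(n, gamma1, gamma2, gamma3,
                       beta=(-1.0, 1.0, -1.0), kappa=0.1, seed=1):
    rng = np.random.default_rng(seed)
    b, g1, g2 = np.asarray(beta), np.asarray(gamma1), np.asarray(gamma2)
    W = rng.multivariate_normal([0.0, 0.0, 1.0], np.eye(3), size=n)
    lam = kappa + 2 * (1 - kappa) * expit(W @ b - b[2])        # lambda_{beta,kappa}(W)
    Z = rng.uniform(size=n)
    # invert G_{beta,kappa}(u, w) = lam*u + (1 - lam)*u^2 on [0, 1]
    U = np.where(np.isclose(lam, 1.0), Z,
                 (-lam + np.sqrt(lam ** 2 + 4 * (1 - lam) * Z)) / (2 * (1 - lam)))
    # generalized inverse of the mixed discrete-continuous F_0
    A = np.zeros(n)                                            # U <= 0.2 maps to 0
    A[(U > 0.4) & (U <= 0.6)] = 0.5
    A[U > 0.8] = 1.0
    lo, hi = (U > 0.2) & (U <= 0.4), (U > 0.6) & (U <= 0.8)
    A[lo] = beta_dist.ppf((U[lo] - 0.2) / 0.4, 2, 2)           # continuous part below 0.5
    A[hi] = beta_dist.ppf((U[hi] - 0.4) / 0.4, 2, 2)           # continuous part above 0.5
    # heteroscedastic normal outcome
    Wbar = np.column_stack([np.ones(n), W])
    mu = Wbar @ g1 + (Wbar @ g2) * A + gamma3 * (A - 0.5) ** 2
    Y = mu + rng.normal(size=n) * np.sqrt(1 + np.abs(mu))
    return Y, A, W
\end{verbatim}
For example, the weak null corresponds to \texttt{gamma1=(0,2,2,-2)}, \texttt{gamma2=(2,2,2,-2)}, and \texttt{gamma3=0}.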
\begin{table}
\centering
\begin{tabular}{cccccc}
Setting name & $\gamma_2$ & $\gamma_3$ & $\|\Omega_0\|_{F_0, 1}$ & $\|\Omega_0\|_{F_0, 2}$ & $\|\Omega_0\|_{F_0, \infty}$ \\
\hline
Weak null & $(2, 2, 2, -2)^T$ & 0 & 0 &0 & 0\\
Strong null & $(0, 0, 0, 0)^T$ & 0 & 0& 0 & 0\\
Weak alternative & $(0.5, 1, -1, -0.25)^T$ & 0 & 0.019 & 0.023 & 0.036 \\
Moderate alternative & $(1, 1, -1, -0.5)^T$ & 0 &0.043 & 0.050 & 0.070 \\
Strong alternative & $(2, 1, -1, -1)^T$ & 0 &0.10 & 0.11 & 0.14\\
Quadratic alternative & $(1, 1, -1, -1)^T$ & 2 & 0.03 & 0.04 & 0.06 \\
\hline
\end{tabular}
\caption{Summary of the six simulation settings used to generate the outcome. We note that $\gamma_1 = (0, 2, 2, -2)^T$ for all settings. For context, $\n{Var}(Y) \in (4,4.5)$ for all alternative simulation settings.}
\label{tab:sim_settings}
\end{table}
For each sample size $n \in \{100, 250,500, 750, 1000, 2500, 5000\}$ and each of the settings listed in Table~\ref{tab:sim_settings}, we generated 1000 datasets using the process described above. For each dataset, we estimated the pair of nuisance parameters $(\mu_n, g_n)$ in the following ways. First, we estimated $\mu_n$ using a correctly specified linear regression, and $g_n$ using maximum likelihood estimation with a correctly specified parametric model for $\beta$ with $\kappa$ set to the true data-generating value. Second, we used the same correctly-specified procedure for $g_n$, but used an incorrectly specified linear regression to estimate $\mu_n$ by excluding the interactions between $A$ and $W$, as well as $W_3$, from the regression. Third, we used the correctly specified linear regression to estimate $\mu_n$, but used an incorrectly specified parametric model for $g_n$ by maximizing the incorrectly specified likelihood $(\alpha_1, \alpha_2) \mapsto \sum_{i=1}^n \log \left\{2U_i + \left(1 - 2U_i\right) \n{logit}^{-1}\left(\alpha_1 W_1 + \alpha_2 W_2\right)\right\}$ for $U_i = F_n(A_i)$. Fourth, we used the incorrectly specified parametric models for both $\mu_n$ and $g_n$. Fifth, we estimated $\mu_n$ and $g_n$ nonparametrically. To estimate $\mu_n$ nonparametrically, we used SuperLearner \citep{vanderlaan2007super} with a library consisting of linear regression, linear regression with interactions, a generalized additive model, and multivariate adaptive regression splines. To estimate $g_n$ nonparametrically, we used an adaptation of the method described in \cite{diaz2011super} that allows for mass points. For each of these five pairs of estimation strategies for $\mu_n$ and $g_n$, we used the method described in this article with $p \in \{1, 2, \infty\}$ to test the null hypothesis. For nonparametric nuisance estimation, we used both the cross-fitted estimator $\Omega_n^\circ$ and the non-cross-fitted estimator $\Omega_n$ in order to assess the effect of cross-fitting on type I error. Finally, we compared our test to a test based on dichotomizing $A$. Specifically, we defined $\bar{A} := I_{[0.5,1]}(A)$, and used Targeted Minimum-Loss based Estimation (TMLE) \citep{vanderlaan2011tmle} to test the null hypothesis that $E_0[E_0(Y \mid \bar{A} = 0, W)] = E_0[E_0(Y \mid \bar{A} = 1, W)]$. We used cross-fitted SuperLearners with the same library as above as the nuisance estimators for TMLE.
\begin{figure}
\caption{Empirical type I error rate of nominal $\alpha = 0.05$ level tests using the nonparametric nuisance estimators. The left column is our cross-fitted test with nonparametric nuisance estimators, the middle column is the same without cross-fitting, and the right column is a TMLE-based test using a dichotomized exposure. The horizontal wide-dash line indicates the nominal 0.05 test size, and horizontal dotted lines indicate expected sampling error bounds were the true size 0.05.}
\label{fig:sim_size_np}
\end{figure}
We now turn to the results of the simulation study. We focus on the results from the use of nonparametric nuisance estimators, since this is what we suggest using in practice. The results from the use of parametric nuisance estimators were in line with expectations based on our theoretical results; full details of the results may be found in Supplementary Material. Figure~\ref{fig:sim_size_np} displays the empirical type I error rate for the three estimators with nonparametric nuisance estimators. Our tests with cross-fitted nuisance estimators (first column) had empirical error rates within Monte Carlo error of the nominal error rate at all sample sizes and under both the strong and weak nulls. This empirically validates the large-sample theoretical guarantee of Theorem~\ref{thm:test_size}, and also indicates that the type I error of the method is valid even for small sample sizes. However, the nonparametric nuisance estimators without cross-fitting (second column) had type I error significantly larger than $0.05$ for $n \leq 1000$. This suggests that the cross-fitting procedure reduced the bias of the estimator of $\Omega_0$ and/or the bias of the estimator of the quantile $T_{0,\alpha,p}$ for small and moderate sample sizes, resulting in improved type I error rates. The TMLE-based test with a dichotomized exposure also had empirical error rates within Monte Carlo error of the nominal rate for all sample sizes under both types of null hypotheses, as expected.
Figure~\ref{fig:sim_power_np} displays the empirical power using the nonparametric nuisance estimators. We omitted the estimator without cross-fitting, since this estimator had poor type I error control. Power was generally very low for $n = 100$, but increased with sample size and with distance from the null. For the three linear alternatives (first three columns), our test had only slightly (i.e.\ 5-10 percentage points) better power than the TMLE-based test using a dichotomized exposure. This makes sense, since the true effect size induced by dichotomization of the exposure increased with the slope of $\theta_0$ in the case that $\theta_0$ was linear. However, for the quadratic alternative, the test proposed here had substantially larger power than the TMLE-based test. For example, at sample size $n=1000$, the TMLE-based test had power 0.09, while our test had power between 0.25 and 0.35, and at sample size $n=5000$, the TMLE-based test had power 0.25, while our test had power near 0.85. This can be explained by the fact that the true effect size induced by dichotomization for the quadratic alternative was close to zero because the axis of symmetry for the parabolic effect curve was 0.5, the same as the point of dichotomization. This suggests that, as has been previously noted (e.g.\ \citealp{fedorov2009consequence}), dichotomization can result in substantial loss of power for certain types of data-generating mechanisms. Discretizing the exposure into more categories would increase the power of the TMLE-based test, but in practice it is hard to know what discretization will yield acceptable power without knowing the true form of $\theta_0$.
\begin{figure}
\caption{Empirical power of nominal $\alpha = 0.05$ level tests using nonparametric nuisance estimators. Columns indicate the alternative hypothesis used to generate the data, and rows indicate the method used to test the null hypothesis.}
\label{fig:sim_power_np}
\end{figure}
Overall, we observed little systematic difference in type I error rates between the three values of $p$ using either type of nuisance estimator for our test. For the linear alternatives, the test with $p=\infty$ had consistently slightly smaller power than that with $p=1$ or $p=2$. However, for the quadratic alternative, the test with $p=\infty$ had consistently larger power than the others. Therefore, which value of $p$ yields the greatest power depends on the shape of the true effect curve. As noted previously, we recommend using $p=\infty$ in practice due in part to the improved power against nonlinear alternatives.
\subsection{Simulation study II: discrete exposure with many levels}
In a second simulation study, we assessed the effect of increasing the number of levels of a discrete exposure on the properties of tests of the null hypothesis of no average causal effect. We simulated three covariates $W \in \d{R}^3$ from independent uniform distributions on $[-1,1]$, $[-1,1]$, and $[0,2]$, respectively. For a number of levels $k$, we set $P_0(A = a \mid W = w) = \n{logit}^{-1}[(.5 - a)(\beta^T w) ] / h(w)$ for $a \in \{1/k, 2/k,\dotsc, 1\}$, and $P_0(A = a \mid W = w) = 0$ otherwise, where $h(w)$ is a normalizing constant. Given $A$ and $W$, we simulated $Y$ as in simulation study I described above. For each $n \in \{250, 500, 750\}$, we considered eight values of $k$ between $k = 5$ and $k = n/2$, which allowed us to assess the effect of the number of discrete components of the exposure on the properties of testing procedures. For each setting, we simulated 1000 datasets, and used two methods to test the null hypothesis of no average causal effect of $A$ on $Y$. First, we used the method described here with $p \in \{1,2,\infty\}$. Second, we used a chi-squared test based on an augmented IPW (AIPW) estimator of $\theta_0(a_j)$ for each $a_j$ in the support of $A$. Since the exposure was discrete, the AIPW-based test had \emph{asymptotically} valid size for any fixed $k$; our goal was to assess its finite-sample performance as a function of $k$. For both tests, we used cross-fitted maximum likelihood estimators from correctly-specified parametric models for the nuisance estimators.
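A sketch of the exposure mechanism for this second study is given below (Python; names are ours, and $\beta$ is left as an argument since only the functional form of the conditional probability is fixed by the description above); the outcome is then generated as in the first study.
\begin{verbatim}
import numpy as np
from scipy.special import expit

def simulate_study_two_exposure(n, k, beta=(-1.0, 1.0, -1.0), seed=1):
    rng = np.random.default_rng(seed)
    W = np.column_stack([rng.uniform(-1, 1, n), rng.uniform(-1, 1, n),
                         rng.uniform(0, 2, n)])
    levels = np.arange(1, k + 1) / k                              # support {1/k, ..., 1}
    weights = expit((0.5 - levels)[None, :] * (W @ np.asarray(beta))[:, None])
    probs = weights / weights.sum(axis=1, keepdims=True)          # divide by h(w)
    A = np.array([rng.choice(levels, p=row) for row in probs])
    return A, W
\end{verbatim}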
\begin{figure}
\caption{Empirical type I error rate of nominal $\alpha = 0.05$ level tests in the second simulation study. The $x$-axis is the ratio of $k$, the number of levels of the exposure, to $n$, the sample size. Columns indicate the null hypothesis used to generate the data, and rows indicate the method used to test the null hypothesis.}
\label{fig:sim_size_disc}
\end{figure}
Figure~\ref{fig:sim_size_disc} displays the empirical type I error of the two methods under the weak and strong null hypotheses. Our methods (bottom row) had type I error near 0.05 for all $n$ and $k$ considered. AIPW, on the other hand, only had valid type I error for the smallest value of $k$ considered ($k = 5$). As the number of levels of $A$ grew, the type I error rate of the AIPW-based test rapidly grew to 1. In Supplementary Material, we display the power of our test, which was constant in $k$, and for all settings considered increased with $n$. Therefore, use of the methods proposed here is not limited to exposures with a continuous component, but should be considered for all discrete exposures with more than a few values.
\section{BMI and T-cell response in HIV vaccine studies}\label{sec:bmi}
Numerous scientific studies have found a negative association between obesity or BMI and immune responses to vaccination. In \cite{jin2015multiple}, the authors found that low BMI ($<25$) participants in early-phase HIV vaccine trials had a higher rate of CD4+ T cell response than high BMI ($\geq 30$) participants. They also found a significant effect of BMI in a logistic regression of CD4+ responses on sex, age, BMI, vaccination dose, and number of vaccinations (OR: 0.92; 95\% CI: 0.86--0.98; $p$=0.007). However, as discussed in the introduction, this odds ratio only has a causal interpretation under strong parametric assumptions.
In \cite{westling2020isotonic}, the authors estimated the causal dose-response curve $\theta_0$, adjusting for the same set of confounders as did \cite{jin2015multiple}, under the assumption that it is decreasing. However, they did not assess the null hypothesis that the curve was flat. We note that \cite{westling2020isotonic} estimated $\theta_0(20) - \theta_0(35)$, i.e.\ the difference between the probabilities of having a positive CD4+ immune response under assignment to BMIs of 20 and 35, to be 0.22, with 95\% confidence interval 0.03--0.41. However, this confidence interval is only valid under the assumption that $\theta_0(20) > \theta_0(35)$, and hence cannot be used as evidence against the null hypothesis that $\theta_0$ is flat. Furthermore, the fact that the lower end of this confidence interval is relatively close to zero suggests that there may not actually be strong evidence against this null. Here, we formally assess this null hypothesis using the same data as \cite{westling2020isotonic}.
The data consist of pooled vaccine arms from 11 phase I/II clinical trials conducted through the HIV Vaccine Trials Network (HVTN). Descriptions of these trials may be found in \cite{jin2015multiple} and \cite{westling2020isotonic}. CD4+ and CD8+ T-cell responses at the first visit following administration of the last vaccine dose were measured using validated intracellular cytokine staining, and these continuous responses were converted to binary indicators of whether there was a significant change from baseline using the method described in \cite{jin2015multiple}. After excluding three participants with missing BMI and participants with missing immune response, our analysis datasets consisted of a total of $n=439$ participants for the analysis of CD4+ responses and $n=462$ participants for CD8+ responses.
We tested the null hypotheses of no causal effect of BMI on CD4+ and CD8+ T-cell responses using the method developed in this paper with $p = \infty$ and $V = 10$ folds. To estimate the outcome regression $\mu_0$ and the propensity score $g_0$, we used SuperLearner \citep{vanderlaan2007super, diaz2011super} with flexible libraries consisting of generalized linear models, generalized additive models, multivariate regression splines, random forests, and gradient boosting. For the analysis of the effect of BMI on CD4+ responses, we found a $p$-value of 0.16, and for the analysis of the effect of BMI on CD8+ responses, we found a $p$-value of 0.22. Hence, we do not find evidence of a causal effect of BMI on the probability of having a positive immune response in these data. Plots of $\Omega_n$ are presented in Supplementary Material.
\section{Discussion}\label{sec:discussion}
We have presented a nonparametric method for testing the null hypothesis that a causal dose-response curve is flat, for use in observational studies with no unobserved confounding. The key idea behind our test was to translate the null hypothesis on the parameter of interest, which is not a pathwise differentiable parameter in the nonparametric model, into a null hypothesis on a primitive parameter, which is pathwise differentiable.
In addition to permitting the use of methods and theory for pathwise differentiable parameters, using the primitive function gives our tests non-trivial power to detect alternatives approaching the null at the rate $n^{-1/2}$. However, results from the literature concerning tests of marginal regression functions suggest that any test able to detect $n^{-1/2}$-rate alternatives must necessarily have low finite-sample power against certain smooth alternatives \citep{horowitz2001adaptive}. Analogous results regarding causal dose-response functions are not to our knowledge available, but we conjecture that appropriate parallels can be established. Therefore, our tests may have poor finite-sample power against certain shapes of dose-response functions; in particular, functions that are nearly flat except for a sharp peak in a narrow range of the support of the exposure. We suggest that users conduct numerical studies on the power of our tests if they expect that their dose-response function may look like this. Tests based on an estimator of the dose-response function, which to our knowledge do not yet exist, may not have this problem, although such tests would also have slower than $n^{-1/2}$ rates of convergence. We leave further inquiry along these lines to future work.
An additional benefit of using the primitive function defined here is that it makes the test agnostic to the marginal distribution of the exposure: the test works equally well with discrete, continuous, and mixed discrete-continuous exposures. This was validated in numerical studies, where in particular we demonstrated that a traditional doubly-robust test of no average causal effect in the setting of a fully discrete exposure quickly became invalid as the number of levels of the exposure grew. We note that tests based on directly estimating the dose-response function, such as tests based on the local linear estimator, may only work in the context of fully continuous exposures.
Several modifications of the proposed test may be of interest in future research. Here, we studied the properties of the test for fixed values of $p$. In numerical studies, we found little difference in the performance of the test for $p \in \{1,2,\infty\}$, and we do not expect that the choice of $p$ would drastically change the results in most cases. However, the results presented herein were for fixed values of $p$, and so if a researcher were to select a value of $p$ based on the results of the test, the test may no longer have asymptotically valid type I error. In future research, it would be of interest to adaptively select a value of $p$ to maximize power while retaining type I error control. In addition, here, we used the empirical distribution function as our weight function to assess whether the primitive parameter is flat. Alternative weight functions could be used to, for instance, place more emphasis in the tails or center of the distribution of the exposure, or a weight function could possibly be adaptively chosen to maximize power. Finally, while we used a one-step estimator of the primitive parameter, TMLE could be used instead.
\begin{center}
\Large
\textit{
\textsc{Supplementary Material for:}\\
Nonparametric tests of the causal null \\with non-discrete exposures}
\end{center}
\section*{Proof of Theorems}
\begin{proof}[\bfseries{Proof of Proposition~\ref{prop:equivalence}}]
(1)$\implies$(2): Let $a \in \s{A}_0$. Then, since $\theta_0(u) = \theta_0(a)$ for all $u \in \s{A}_0$, $\gamma_0 = \int\theta_0(u) \, dF_0(u) = \int\theta_0(a) \, dF_0(u) = \theta_0(a)$.
(2)$\implies$(1): trivial.
(2)$\implies$(3): Let $a \in \d{R}$. Then $\Gamma_0(a) =\int_{-\infty}^a \theta_0(u) \, dF_0(u) = \gamma_0F_0(a)$, so $\Omega_0(a) = 0$.
(3)$\implies$(2): We proceed by contradiction: suppose that $\theta_0(a) \neq \gamma_0$ for some $a \in \s{A}_0$. We assume first that $\theta_0(a) - \gamma_0 = \delta > 0$. Then since by assumption $\theta_0$ is continuous on $\s{A}_0$, there exists $\varepsilon > 0$ such that $|\theta_0(u) - \theta_0(a) | \leq \delta / 2$ for all $u \in \s{A}_0 \cap [a - \varepsilon, a + \varepsilon]$, which implies that $\theta_0(u) -\gamma_0 \geq \delta / 2$ for all such $u$. We then have
\begin{align*}
0 = \Omega_0(a + \varepsilon) - \Omega_0(a - \varepsilon) &= \int_{a - \varepsilon}^{a+\varepsilon} [\theta_0(u) - \gamma_0] \, dF_0(u) = \int_{(a - \varepsilon, a+\varepsilon] \cap\s{A}_0} [\theta_0(u) - \gamma_0] \, dF_0(u) \\
&\geq (\delta / 2) \left[F_0(a + \varepsilon) - F_0(a - \varepsilon)\right] > 0 \ ,
\end{align*}
where the last inequality follows because $a$ is in the support of $F_0$. This is a contradiction, and therefore $\theta_0(a) \leq \gamma_0$. The argument if $\theta_0(a) < \gamma_0$ is essentially identical, and since $a \in \s{A}_0$ was arbitrary, this yields that $\theta_0(a) = \gamma_0$ for all $a \in \s{A}_0$.
(3)$\implies$(4): trivial.
(4)$\implies$(3): We proceed again by contradiction. Suppose $|\Omega_0(a)| > 0$ for some $a \in \d{R}$. First suppose $a \in \s{A}_0$. If $a$ is a mass point of $F_0$, then clearly $\|\Omega_0\|_{F_0,1} >0$, a contradiction. If $a$ is not a mass point of $F_0$, then for any $\varepsilon > 0$
\[\left|\Omega_0(a + \varepsilon) - \Omega_0(a)\right| \leq \int_{a}^{a + \varepsilon} \left| \theta_0(u) - \gamma_0\right| \, dF_0(u) \leq c[F_0(a + \varepsilon) - F_0(a)] \]
for $c < \infty$, and since $F_0(a + \varepsilon) \to F_0(a)$ as $\varepsilon \to 0$, $\Omega_0$ is right-continuous at $a$. An analogous argument shows that $\Omega_0$ is also left-continuous at $a$. This implies that $|\Omega_0|$ is positive in a neighborhood of $a$, which implies since $a \in \s{A}_0$ that $\|\Omega_0\|_{F_0,1} >0$, a contradiction. Finally, if $a \in \d{R}$ is not an element of $\s{A}_0$, then $\Omega_0(a) = \Omega_0(a_0)$ for $a_0 := \sup\{ u \in\s{A}_0 : u < a\}$, so that $|\Omega_0(a)| > 0$ implies $|\Omega_0(a_0)| > 0$, and since $a_0 \in \s{A}_0$ (because $\s{A}_0$ is closed), this leads to a contradiction.
\end{proof}
\begin{proof}[\bfseries{Proof of Proposition~\ref{prop:eif}}]
We let $\left\{P_\varepsilon : |\varepsilon | \leq \delta \right\}$ be any one-dimensional differentiable in quadratic mean (DQM) path in $\s{M}^\circ$ such that $P_{\varepsilon = 0} = P_0$, where $\s{M}^\circ$ is the subset of $P \in \s{M}$ such that there exists $\eta > 0$ with $ g_P(a,w) \geq \eta$ for $P$-a.e.\ $(a,w)$ and $E_P[Y^2] < \infty$. We let $(y, a, w) \mapsto \dot\ell_0(y, a, w)$ be the score function of the path at $\varepsilon = 0$. We note that $P_0\dot\ell_0 = 0$ and $P_0 \dot\ell_0^2 < \infty$ \citep{bickel1998efficient}. Furthermore, we define $\dot\ell_0(a, w) := E_0[ \dot\ell_0 \mid A = a, W =w]$ and analogously $\dot\ell_0(a)$ and $\dot\ell_0(w)$ as the marginal score functions, and $\dot\ell_0(y \mid a, w) := \dot\ell_0(y, a, w) - \dot\ell_0(a,w)$ as the conditional score function, which has conditional mean zero given $(A, W)$ under $P_0$.
The nonparametric efficient influence function $D_{a_0,0}^*$ of $\Omega_0(a_0)$ can be derived by showing that $\left.\frac{\partial}{\partial\varepsilon} \Omega_\varepsilon(a_0) \right|_{\varepsilon = 0} = P_0 \left( D_{a_0,0}^* \dot\ell_0 \right)$. We have by definition of $\Omega_0$ and ordinary rules of calculus that
\begin{align}
\left.\frac{\partial}{\partial\varepsilon} \Omega_\varepsilon(a_0) \right|_{\varepsilon = 0} &= \left.\frac{\partial}{\partial\varepsilon} \Gamma_\varepsilon(a_0) \right|_{\varepsilon = 0} - \left.\frac{\partial}{\partial\varepsilon} \gamma_\varepsilon \right|_{\varepsilon = 0} F_0(a_0) - \gamma_0 \left.\frac{\partial}{\partial\varepsilon} F_\varepsilon(a_0) \right|_{\varepsilon = 0}. \label{eq:inf_decomp1}
\end{align}
We first note that since $F_P(a_0) = E_P[ I(A \leq a_0)] = \int I(a \leq a_0) \, dP(Y, A, W)$ for any $P$,
\begin{align*}
\left.\frac{\partial}{\partial\varepsilon} F_\varepsilon(a_0) \right|_{\varepsilon = 0} &= \left.\frac{\partial}{\partial\varepsilon} \int I(a \leq a_0) dP_\varepsilon(y,a,w) \right|_{\varepsilon = 0} = \int I(a \leq a_0) \dot\ell_0(y, a, w) dP_0(y,a,w) \\
&= E_0 \left[ I(A \leq a_0) \dot\ell_0(Y,A,W)\right].
\end{align*}
Therefore, the term $\gamma_0 \left.\frac{\partial}{\partial\varepsilon} F_\varepsilon(a_0) \right|_{\varepsilon = 0}$ in~\eqref{eq:inf_decomp1} contributes $-\gamma_0 I(a \leq a_0)$ to the uncentered influence function.
Next, by definitions of $\Gamma_P$ and $\gamma_P$ and the product rule,
\begin{align*}
\left.\frac{\partial}{\partial\varepsilon} \Gamma_\varepsilon(a_0) \right|_{\varepsilon = 0} - \left.\frac{\partial}{\partial\varepsilon} \gamma_\varepsilon \right|_{\varepsilon = 0} F_0(a_0) &= \left.\frac{\partial}{\partial\varepsilon} \iint I(a \leq a_0) \mu_\varepsilon (a, w) \, dF_\varepsilon(a) \, dQ_\varepsilon(w) \right|_{\varepsilon = 0} \\
&\qquad - \left.\frac{\partial}{\partial\varepsilon} \iint \mu_\varepsilon (a, w) \, dF_\varepsilon(a) \, dQ_\varepsilon(w) \right|_{\varepsilon = 0} F_0(a_0) \\
&=\left.\frac{\partial}{\partial\varepsilon} \iint \left[ I(a \leq a_0) - F_0(a_0) \right] \mu_\varepsilon (a, w) \, dF_\varepsilon(a) \, dQ_\varepsilon(w) \right|_{\varepsilon = 0} \\
&= \left.\frac{\partial}{\partial\varepsilon} \iiint \left[ I(a \leq a_0) - F_0(a_0) \right] y \, dP_\varepsilon(y \mid a, w) \, dF_0(a) \, dQ_0(w) \right|_{\varepsilon = 0} \\
&\qquad + \left.\frac{\partial}{\partial\varepsilon} \iint \left[ I(a \leq a_0) - F_0(a_0) \right] \mu_0 (a, w) \, dF_\varepsilon(a) \, dQ_0 (w) \right|_{\varepsilon = 0} \\
&\qquad + \left.\frac{\partial}{\partial\varepsilon} \iint \left[ I(a \leq a_0) - F_0(a_0) \right] \mu_0 (a, w) \, dF_0(a) \, dQ_\varepsilon(w) \right|_{\varepsilon = 0}.
\end{align*}
Now using basic properties of score functions, we have
\begin{align*}
& \left.\frac{\partial}{\partial\varepsilon} \iiint \left[ I(a \leq a_0) - F_0(a_0) \right] y \, dP_\varepsilon(y \mid a, w) \, dF_0(a) \, dQ_0(w) \right|_{\varepsilon = 0} \\
&\qquad = \iiint \left[ I(a \leq a_0) - F_0(a_0) \right] y \dot\ell_0(y \mid a, w) \, dP_0(y \mid a, w) \, dF_0(a) \, dQ_0(w) \\
&\qquad = E_0 \left\{\left[ I(A \leq a_0) - F_0(a_0) \right] \frac{Y}{g_0(A, W)} \dot\ell_0(Y \mid A, W) \right\} \\
&\qquad = E_0 \left\{\left[ I(A \leq a_0) - F_0(a_0) \right] \frac{Y}{g_0(A, W)} \left[ \dot\ell_0(Y, A, W) - \dot\ell_0(A, W) \right] \right\} \\
&\qquad = E_0 \left\{\left[ I(A \leq a_0) - F_0(a_0) \right] \frac{Y}{g_0(A, W)} \dot\ell_0(Y, A, W)\right\} \\
&\qquad\qquad -E_0 \left\{\left[ I(A \leq a_0) - F_0(a_0) \right] \frac{E_0 \left[Y \mid A, W \right]}{g_0(A, W)} \dot\ell_0(A, W) \right\} \\
&\qquad = E_0 \left\{\left[ I(A \leq a_0) - F_0(a_0) \right] \frac{Y}{g_0(A, W)} \dot\ell_0(Y, A, W)\right\} \\
&\qquad\qquad -E_0 \left\{\left[ I(A \leq a_0) - F_0(a_0) \right] \frac{\mu_0(A,W)}{g_0(A, W)} \dot\ell_0(Y, A, W) \right\} \\
&\qquad = E_0 \left\{\left[ I(A \leq a_0) - F_0(a_0) \right] \frac{Y - \mu_0(A, W)}{g_0(A, W)} \dot\ell_0(Y, A, W)\right\}
\end{align*}
since $E_0[ h(A, W) \dot\ell_0(A, W)] = E_0[ h(A, W) \dot\ell_0(Y,A, W)]$ for any suitable $h$. Note that we have used the assumption that $g_0$ is almost surely positive in the above derivation. We also have
\begin{align*}
& \left.\frac{\partial}{\partial\varepsilon} \iint \left[ I(a \leq a_0) - F_0(a_0) \right] \mu_0 (a, w) \, dF_\varepsilon(a) \, dQ_0 (w) \right|_{\varepsilon = 0} \\
&\qquad = \iint \left[ I(a \leq a_0) - F_0(a_0) \right] \mu_0 (a, w) \dot\ell_0(a) \, dF_0(a) \, dQ_0 (w) \\
&\qquad =\int \left[ I(a \leq a_0) - F_0(a_0) \right] \theta_0(a) \dot\ell_0(a) \, dF_0(a)\\
&\qquad = E_0 \left\{ \left[ I(A \leq a_0) - F_0(a_0) \right] \theta_0(A) \dot\ell_0(A)\right\} \\
&\qquad = E_0 \left\{ \left[ I(A \leq a_0) - F_0(a_0) \right] \theta_0(A) \dot\ell_0(Y,A,W)\right\}
\end{align*}
by an analogous argument, and similarly
\begin{align*}
& \left.\frac{\partial}{\partial\varepsilon} \iint \left[ I(a \leq a_0) - F_0(a_0) \right] \mu_0 (a, w) \, dF_0(a) \, dQ_\varepsilon (w) \right|_{\varepsilon = 0} \\
&\qquad = \iint \left[ I(a \leq a_0) - F_0(a_0) \right] \mu_0 (a, w) \dot\ell_0(w) \, dF_0(a) \, dQ_0 (w) \\
&\qquad = E_0 \left\{ \int \left[ I(a \leq a_0) - F_0(a_0) \right] \mu_0(a, W) \, dF_0(a) \dot\ell_0(W)\right\} \\
&\qquad = E_0 \left\{ \int \left[ I(a \leq a_0) - F_0(a_0) \right] \mu_0(a, W) \, dF_0(a) \dot\ell_0(Y,A,W)\right\}.
\end{align*}
Putting it together, the uncentered influence function is
\begin{align*}
\left[I(a \leq a_0) - F_0(a_0) \right] \left[\frac{y - \mu_0(a, w)}{g_0(a, w)} +\theta_0(a) \right] + \int \left[ I(u \leq a_0) - F_0(a_0) \right] \mu_0(u, w) \, dF_0(u) - \gamma_0 I(a \leq a_0).
\end{align*}
Influence functions have mean zero, so we need to subtract the mean under $P_0$ of this uncentered function to obtain the centered influence function.
The mean under $P_0$ of this function is $2\Omega_0(a_0) - \gamma_0 F_0(a_0)$, so the centered influence function is
\begin{align*}
&\left[I(a \leq a_0) - F_0(a_0) \right] \left[\frac{y - \mu_0(a, w)}{g_0(a, w)} +\theta_0(a) - \gamma_0\right] \\
&\qquad + \int \left[ I(u \leq a_0) - F_0(a_0) \right] \mu_0(u, w) \, dF_0(u) -2\Omega_0(a_0),
\end{align*}
which equals $D_{a_0,0}^*(y,a,w)$ as claimed. When $g_0$ is almost surely bounded away from zero and $E_0[Y^2] < \infty$, this function has finite variance.
\end{proof}
Before proving our main results, we derive a first-order expansion of $\Omega_n^\perp\!\!\!\perprc(a)$. We define
\begin{align*}
D_{a_0,n,v}(y, a, w) &:= \left[ I_{(-\infty, a_0]}(a) - F_{n,v}(a_0)\right] \left[ \frac{ y - \mu_{n,v}(a,w)}{g_{n,v}(a,w)}+ \theta_{\mu_{n,v}, Q_{n,v}}(a)-\gamma_{\mu_{n,v}, F_{n,v}, Q_{n,v}}\right]\\
&\qquad + \int \left[ I_{(-\infty, a_0]}(\tilde{a}) - F_0(a_0)\right] \mu_{n,v}(\tilde{a}, w) F_0(d\tilde{a}) - \Omega_{\mu_{n,v}, F_0, Q_{n,v}}(a_0) \ ,\\
D_{a_0,\mu, g}(y, a, w) &:= \left[ I_{(-\infty, a_0]}(a) - F_0(a_0)\right] \left[\frac{ y - \mu(a,w)}{g(a,w)}+\theta_{\mu, Q_0}(a)-\gamma_{\mu, F_0, Q_0}\right] \\
& \qquad + \int \left[ I_{(-\infty, a_0]}(\tilde{a}) - F_0(a_0)\right] \mu(\tilde{a}, w) F_0(d\tilde{a}) - \Omega_{\mu, F_0, Q_0}(a_0)\\
D_{a_0,n,v}^{\circ}(y,a,w) &:= \left[ I_{(-\infty, a_0]}(a) - F_{n,v}(a_0)\right] \left[ \frac{ y - \mu_{n,v}(a,w)}{g_{n,v}(a,w)} +\theta_{\mu_{n,v}, Q_0}(a) \right] \\
&\qquad + \int \left[ I_{(-\infty, a_0]}(\tilde{a}) - F_0(a_0)\right] \mu_{n,v}(\tilde{a}, w) F_0(d\tilde{a}) \\
D_{a_0, \infty}^{\circ}(y,a,w) &:= \left[ I_{(-\infty, a_0]}(a) - F_0(a_0)\right] \left[ \frac{ y - \mu_\infty(a,w)}{g_\infty(a,w)} + \theta_{\mu_{\infty}, Q_0}(a)\right] \\
&\qquad + \int \left[ I_{(-\infty, a_0]}(\tilde{a}) - F_0(a_0)\right] \mu_\infty(\tilde{a}, w) F_0(d\tilde{a}) \\
D_{a_0,\infty} &:= D_{a_0,\mu_\infty, g_\infty} \\
D_{a_0,\infty}^* &:=D_{a_0, \infty} - \Omega_0(a_0)
\end{align*}
and
\begin{align*}
R_{n,a_0,v,1} &:= (\d{P}_{n,v} - P_0) \left( D_{a_0,n,v}^{\circ} - D_{a_0, \infty}^{\circ} \right)\ ,\\
R_{n,a_0,v,2} &:= \left( \gamma_{\mu_{n,v}, F_0, Q_0} - \gamma_{\mu_{n,v}, F_0, Q_{n,v}}- \gamma_{\mu_0, F_0, Q_0}+ \gamma_{\mu_\infty, F_0, Q_0} \right) \left[ F_{n,v}(a_0) - F_0(a_0) \right] \\
R_{n,a_0,v,3} &:= \iint \left[ I_{(-\infty, a_0]}(a) - F_{n,v}(a_0)\right] \mu_{n,v}(a, w) \, (F_{n,v} - F_0)(da) \, (Q_{n,v} - Q_0)(dw)\\
R_{n,a_0,v,4} &:= \iint\left[ I_{(-\infty, a_0]}(a) - F_{n,v}(a_0)\right] \left[\mu_{n,v}(a, w) - \mu_0(a, w)\right] \left[ 1 - \frac{g_0(a, w)}{g_{n,v}(a,w)}\right] \, F_0(da) \, Q_0(dw)\\
R_{n,a_0,v,5} &:= \left( 1 - \frac{VN_v}{n} \right) \d{P}_{n,v} D_{a_0, \infty}^*\ .
\end{align*}
\begin{lemma}[First-order expansion of estimator]\label{lemma:first_order}
If condition (A3) or (A4) holds, then $\Omega_{n}^\circ(a_0) - \Omega_0(a_0) = \d{P}_nD_{a_0, \infty}^* + \frac{1}{V}\sum_{v=1}^V \sum_{j=1}^5 R_{n,a_0,v,j}.$
\end{lemma}
\begin{proof}[\bfseries{Proof of Lemma~\ref{lemma:first_order}}]
We recall that $\theta_{\mu, Q}(a) := \int \mu(a, w) \, Q(dw)$,
\[ \Omega_{\mu, F, Q}(a) := \iint \left[ I_{(-\infty, a_0]}(a) - F(a_0)\right] \mu(a, w) \, F(da) \, Q(dw) \ ,\]
and $\gamma_{\mu, F, Q} := \iint \mu(a, w) \, F(da) \, Q(dw)$. We thus have $\Omega_{n}^\circ(a_0) =\frac{1}{V}\sum_{v=1}^V \d{P}_{n,v} D_{a_0,n,v}$. If (A3) or (A4) holds, then
\begin{align*}
P_0 D_{a_0,\infty} &= \Omega_0(a_0) + \iint \left[I_{(-\infty, a_0]}(a) - F_0(a_0) \right] \left[\mu_{\infty}(a,w) - \mu_0(a,w)\right] \left[1 - \frac{g_0(a,w)}{g_{\infty}(a,w)}\right]\, F_0(da) \, Q_0(dw) \\
&= \Omega_0(a_0)\ .
\end{align*}
Thus, we have the first-order expansion $\Omega_{n}^\circ(a_0) - \Omega_0(a_0) = \d{P}_nD_{a_0, \infty}^* + \frac{1}{V}\sum_{v=1}^V R_{n,a_0,v}$, where
\[R_{n,a_0,v} := (\d{P}_{n,v} - P_0)(D_{a_0,n, v} - D_{a_0, \infty})+\left[P_0 D_{a_0,n, v} - \Omega_0(a_0)\right] + \left( 1 - \frac{VN_v}{n} \right) \d{P}_{n,v} D_{a_0, \infty}^*\ .\]
Straightforward algebra shows that the remainder term $R_{n,a_0,v}$ can be further decomposed as $\sum_{j=1}^5 R_{n,a_0,v,j}$.
\end{proof}
\begin{lemma}\label{lemma:emp_process_neg}
Conditions (A1) and (A2) imply that $\max_v\sup_{a_0 \in\s{A}_0} |\d{G}_{n,v} ( D_{a_0,n,v}^{\circ} - D_{a_0, \infty}^{\circ})| \inprob 0$.
\end{lemma}
\begin{proof}[\bfseries{Proof of Lemma~\ref{lemma:emp_process_neg}}]
We define $\s{F}_{n,v} := \{ D_{a_0,n,v}^{\circ} - D_{a_0, \infty}^{\circ} : a_0 \in \s{A}_0 \}$, so that we can write $\max_v\sup_{a_0 \in\s{A}_0} |\d{G}_{n,v} ( D_{a_0,n,v}^{\circ} - D_{a_0,\infty}^{\circ})| = \max_v\sup_{f \in\s{F}_{n,v} } |\d{G}_{n,v} f|$. By the tower property, we have
\[ E_0\left[ \sup_{f \in\s{F}_{n,v} } \left|\d{G}_{n,v} f \right| \right] = E_0 \left[ E_0\left( \sup_{f \in\s{F}_{n,v} } \left|\d{G}_{n,v} f \right| \Bigg| \s{T}_{n,v} \right) \right] \ .\]
The inner expectation is taken with respect to the distribution of the observations with indices in the validation sample $\s{V}_{n,v}$, while the outer expectation is with respect to the observations in the training sample $\s{T}_{n,v}$. By construction, the functions $\mu_{n,v}$ and $g_{n,v}$ depend only upon the observations in the training sample $\s{T}_{n,v}$, so that they are fixed with respect to the inner expectation. We note that $\sup_{f \in\s{F}_{n,v} } |f(y, a,w)| \leq F_{n,v}(y,a,w)$ for all $y,a,w$, where
\begin{align*}
F_{n,v}(y,a,w) &= \left[ \left(|y| + K_0\right) K_1^{-1} + K_0\right]\sup_{a_0 \in \s{A}_0} \left| F_{n,v}(a_0) - F_0(a_0)\right| \\
&\qquad+ K_1^{-2}\left( |y| + K_0\right)\left| g_{n,v}(a, w) - g_\infty(a,w) \right| + K_1^{-1}\left|\mu_{n,v}(a,w) - \mu_\infty(a,w)\right| \\
&\qquad+ \int \left| \mu_{n,v}\left(a,\tilde{w}\right) - \mu_\infty\left(a,\tilde{w}\right)\right| \, Q_0\left(d\tilde{w}\right) + \int \left| \mu_{n,v}(u, w) - \mu_\infty(u, w) \right| \, F_0(du) \ .
\end{align*}
We then have by Theorem~2.14.1 of \cite{van1996weak} that
\[ E_0\left( \sup_{f \in\s{F}_{n,v} } \left|\d{G}_{n,v} f \right| \Bigg| \s{T}_{n,v} \right) \leq C \left\{ E_{0}\left[ F_{n,v}(Y, A,W)^2\Bigg| \s{T}_{n,v}\right] \right\}^{1/2} J\left(1, \s{F}_{n,v}\right) \ , \]
for a constant $C$ not depending on $\s{F}_{n,v}$, where $J$ is the uniform entropy integral as defined in Chapter 2.14 of \cite{van1996weak}. The class $\s{F}_{n,v}$ is a convex combination of the classes (1) $\{I_{(-\infty,a_0]}(a) : a_0 \in \s{A}_0\}$, (2) $\{\int I_{(-\infty,a_0]}(a) F(da) : a_0 \in \s{A}_0\}$ for $F = F_{n,v}$ and $F = F_0$, (3) $\{\int I_{(-\infty,a_0]}(a) \mu(a,w)F_0(da) : a_0 \in \s{A}_0\}$ for $\mu = \mu_\infty$ and $\mu = \mu_{n,v}$, and various fixed functions with finite second moments. Class (1) is well-known to possess polynomial covering numbers. Classes (2) and (3) therefore do as well by Lemma~1 of \cite{westling2020isotonic}. Thus, $\max_v J\left(1, \s{F}_{n,v}\right) = O_{P_0}(1)$. Hence, we now have
\begin{align*}
E_0\left[ \sup_{f \in\s{F}_{n,v} } \left|\d{G}_{n,v} f \right| \right] &\leq C' E_0\left( \left\{ E_{0}\left[ F_{n,v}(Y, A,W)^2\Bigg| \s{T}_{n,v}\right] \right\}^{1/2}\right) = C' E_0 \left[ \|F_{n,v}\|_{P_0,2} \right] \ ,
\end{align*}
The triangle inequality and conditions (A1) and (A2) imply that $\|F_{n,v}\|_{P_0, 2} \inprob 0$ for each $v$, and also that $\|F_{n,v}\|_{P_0, 2}$ is uniformly bounded for all $n$ and $v$. This implies that $E_0 \left[ \|F_{n,v}\|_{P_0,2} \right] \longrightarrow 0$. Therefore $\sup_{f \in \s{F}_{n,v}} \left|\d{G}_{n,v} f\right| = \sup_{a_0 \in \s{A}_0} \left|\d{G}_{n,v}( D_{a_0,n,v}^{\circ} - D_{a_0, \infty}^{\circ})\right| \inprob 0$ for each $v$, which implies that \[\max_v \sup_{a_0 \in \s{A}_0} \left| \d{G}_{n,v}(D_{a_0,n,v}^{\circ} - D_{a_0, \infty}^{\circ})\right| \inprob 0\] since $V$ is fixed.
\end{proof}
\begin{lemma}\label{lemma:uprocess}
Condition (A1) implies that
\[\max_v \sup_{a_0 \in \s{A}_0} \left| \iint \left[ I_{(-\infty, a_0]}(a) - F_{n,v}(a_0)\right]\mu_{n,v}(a, w)(F_{n,v} - F_0)(da) (Q_{n,v} - Q_0)(dw) \right| = O_{P_0}(n^{-1}) \ .\]
\end{lemma}
\begin{proof}[\bfseries{Proof of Lemma~\ref{lemma:uprocess}}]
We have
\begin{align*}
&\iint \left[ I_{(-\infty, a_0]}(a) - F_{n,v}(a_0)\right]\mu_{n,v}(a, w)(F_{n,v} - F_0)(da) (Q_{n,v} - Q_0)(dw) \\
&\qquad= \iint I_{(-\infty, a_0]}(a)\mu_{n,v}(a, w)(F_{n,v} - F_0)(da) (Q_{n,v} - Q_0)(dw) \\
&\qquad\qquad- F_{n,v}(a_0)\iint \mu_{n,v}(a, w)(F_{n,v} - F_0)(da) (Q_{n,v} - Q_0)(dw) \ .
\end{align*}
The two terms are controlled almost identically, and in fact the second can be controlled by setting $a_0 = +\infty$ in the first. Therefore, we focus only on the first term.
We write
\[ \iint I_{(-\infty, a_0]}(a)\mu_{n,v}(a, w)(F_{n,v} - F_0)(da) (Q_{n,v} - Q_0)(dw) = R_{n, a_0,v, 6} + R_{n, a_0, v,7} + R_{n,a_0, v,8}\] where
\begin{align*}
R_{n,a_0,v,6} &= \frac{1}{2N_v^2} \sum_{\stackrel{i \neq j}{ i, j \in \s{V}_{n,v}}} \gamma_{\mu_{n,v}, a_0}(O_i, O_j) \\
R_{n, a_0,v ,7} &= N_v^{-3/2}\d{G}_{n,v} \omega_{\mu_{n,v}, a_0}\\
R_{n,a_0,v,8} &= N_v^{-1}E_{0} [I_{(-\infty, a_0]}(A) \mu_{n,v}(A, W)]\ ,
\end{align*}
where we have defined $\omega_{\mu, a_0}(y, a, w) := I_{(-\infty, a_0]}(a)\mu(a, w)$ and
\begin{align*}
\gamma_{\mu, a_0}(o_i, o_j) &:= I_{(-\infty, a_0]}(a_i) \mu(a_i, w_j) + I_{(-\infty, a_0]}(a_j) \mu(a_j, w_i) \\
&\qquad- \int \left[ I_{(-\infty, a_0]}(a_i) \mu(a_i, w) + I_{(-\infty, a_0]}(a_j)\mu(a_j, w)\right]Q_0(dw) \\
&\qquad- \int_{-\infty}^{a_0} \left[ \mu(a, w_i) + \mu(a, w_j)\right]F_0(da) + 2\int I_{(-\infty, a_0]}(a) \mu(a, w) F_0(da) Q_0(dw)\ .
\end{align*}
For $R_{n,a_0,v,6}$, we define $\s{G}_{n,v} := \{ \gamma_{\mu_{n,v}, a_0} : a_0 \in \s{A}_0\}$ and $S_{n,v}(\gamma) := \sum_{\stackrel{i \neq j}{ i, j \in \s{V}_{n,v}}} \gamma(O_i, O_j)$. As in the proof of Lemma~\ref{lemma:emp_process_neg}, we begin by conditioning on $\s{T}_{n,v}$ using the tower property:
\[ E_0 \left[ \sup_{\gamma \in \s{G}_{n,v}} \left| S_{n,v}(\gamma) \right| \right] = E_0 \left\{ E_0 \left[\sup_{\gamma \in \s{G}_{n,v}} \left| S_{n,v}(\gamma) \right| \Bigg| \s{T}_{n,v} \right] \right\} \ .\]
The function $\mu_{n,v}$ is fixed with respect to the inner expectation, so we apply Lemma~2 of \cite{westling2020isotonic} to bound this inner expectation. The class $\s{G}_{n,v}$ is uniformly bounded and satisfies the uniform entropy condition since it is a convex combination of the class $\{ a \mapsto I_{(-\infty, a_0]}(a) : a_0 \in \s{A}_0\}$, various fixed functions, and integrals of the two. Therefore, Lemma~2 of \cite{westling2020isotonic} implies that
\[ E_0 \left[\sup_{\gamma \in \s{G}_{n,v}} \left| S_{n,v}(\gamma) \right| \Bigg| \s{T}_{n,v} \right] \leq C \left[ N_v (N_v - 1) \right]^{1/2} \]
for some $C < \infty$ not depending on $n$. We thus have that $E_0\left[\sup_{a_0 \in \s{A}_0} \left| R_{n,a_0,v,6}\right|\right] \leq (C /2) N_v^{-1}$, and since $\max_v N_v^{-1} = O(n^{-1})$, we then have $\max_v \sup_{a_0 \in \s{A}_0} \left| R_{n,a_0,v,6}\right| = O_{P_0}(n^{-1})$.
For $R_{n,a_0,v,7}$, since the class of functions $\{\omega_{\mu_{n,v}, a_0} : a_0 \in \s{A}_0\}$ is uniformly bounded almost surely for all $n$ large enough, $\max_v \sup_{a_0 \in \s{A}_0} \left| \d{G}_{n,v}\omega_{\mu_{n,v}, a_0}\right| = O_{P_0}(1)$ by an analogous conditioning argument to that used above. Therefore, $\max_v \sup_{a_0 \in \s{A}_0} \left| R_{n, a_0,v ,7} \right| = O_{P_0}\left(n^{-3/2}\right)$.
Finally, $\max_v\sup_{a_0 \in \s{A}_0}|R_{n,a_0,v,8}| = O_{P_0}(n^{-1})$ since $\max_v |\mu_{n,v}| \leq K_0$ almost surely for all $n$ large enough. This completes the proof.
\end{proof}
\begin{proof}[\bfseries{Proof of Theorem~\ref{thm:dr_cons_omega}}]
By Lemma~\ref{lemma:first_order}, we have that
\[\sup_{a_0 \in \s{A}_0} \left| \Omega_n^\circ(a_0) - \Omega_0(a_0)\right| \leq \sup_{a_0 \in \s{A}_0} \left| \d{P}_{n}D_{a_0, \infty}^* \right| + \sum_{j=1}^5 \max_v\sup_{a_0 \in \s{A}_0}\left| R_{n, a_0, v,j} \right| \ .\]
The class $\{D_{a_0, \infty}^* : a_0 \in \d{R}\}$ is $P_0$-Donsker because it is a convex combination of the class $\{I_{(-\infty, a_0]}(a) : a_0 \in \s{A}_0\}$, which is well-known to have polynomial covering numbers, and integrals thereof, which thus also have polynomial covering numbers by Lemma~1 of \cite{westling2020isotonic}. Since $P_0 D_{a_0, \infty}^* = 0$ for all $a_0$ by (A3), we then have
\[\sup_{a_0 \in \s{A}_0} \left| \d{P}_nD_{a_0, \infty}^* \right| = n^{-1/2}\sup_{a_0 \in \s{A}_0} \left| \d{G}_nD_{a_0, \infty}^* \right| = O_{P_0}\left(n^{-1/2}\right) \ .\]
Next, we have $\max_v\sup_{a_0 \in \s{A}_0}|R_{n,a_0,v,1}| = n^{-1/2}\max_v \sup_{a_0 \in \s{A}_0} | \d{G}_{n,v}\left( D_{a_0,n,v}^{\circ} - D_{a_0, \infty}^{\circ}\right)| = o_{P_0}(n^{-1/2})$ by Lemma~\ref{lemma:emp_process_neg}. Since $\max_v\sup_{a_0 \in \s{A}_0} | F_{n,v}(a_0) - F_0(a_0)| = O_{P_0}(n^{-1/2})$ and \[\max_v \left|\gamma_{\mu_{n,v}, F_0, Q_0} - \gamma_{\mu_{n,v}, F_0, Q_{n,v}}- \gamma_{\mu_0, F_0, Q_0}+ \gamma_{\mu_\infty, F_0, Q_0} \right| = O_{P_0}(1)\ ,\] $\max_v\sup_{a_0 \in \d{R}} |R_{n,a_0,v,2}| = O_{P_0}(n^{-1/2})$. Additionally, $\max_v\sup_{a_0 \in \s{A}_0} | R_{n, a_0,v, 3}| = o_{P_0}(n^{-1/2})$ by Lemma~\ref{lemma:uprocess}.
For $R_{n,a_0,v,4}$ we first have
\begin{align*}
\max_v\sup_{a_0 \in \s{A}_0} | R_{n,a_0,v,4}| &\leq 2K_1^{-2} \max_v \iint \left|\mu_{n,v}(a, w) - \mu_0(a, w)\right| \left| g_{n,v}(a,w)- g_0(a, w)\right| dP_0(a,w) \\
&= 2K_1^{-2} r_n \ .
\end{align*}
Finally, for $R_{n,a_0, v,5}$, since $|N_v - n/V| \leq 1$, $\left| 1 - VN_v / n\right| = O(n^{-1})$, so that $\max_v\sup_{a_0 \in \s{A}_0} |R_{n,a_0, v, 5}| = o_{P_0}(n^{-1})$.
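For concreteness, the rate for the deterministic factor follows by a direct calculation: since $|N_v - n/V| \leq 1$,
\[ \left| 1 - \frac{V N_v}{n} \right| = \frac{V}{n}\left| \frac{n}{V} - N_v \right| \leq \frac{V}{n} \ , \]
which is $O(n^{-1})$ because the number of folds $V$ is fixed.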
We therefore have that
\[\sup_{a_0 \in \s{A}_0} \left| \Omega_n^\circ(a_0) - \Omega_0(a_0)\right| \leq O_{P_0}\left(n^{-1/2}\right) + 2K_1^{-2} r_n = O_{P_0}\left( \max\left\{ n^{-1/2}, r_n\right\} \right) \ . \]
This establishes the first statement of the theorem. For the second statement, it suffices to show that $r_n \inprob 0$. For this, we have by (A3) that
\begin{align*}
\frac{K_1^2}{2}r_n &\leq \max_v \int_{\s{S}_1} \left|\mu_{n,v}- \mu_\infty\right| \left| g_{n,v} - g_0\right| dP_0 + \max_v \int_{\s{S}_2} \left|\mu_{n,v} - \mu_0\right| \left| g_{n,v} - g_\infty\right| dP_0\\
&\qquad + \int_{\s{S}_3} \left|\mu_{n,v} - \mu_\infty\right| \left| g_{n,v} - g_\infty\right| dP_0 \\
&\leq \left[P_0(\mu_{n,v} - \mu_\infty)^2 P_0\left( g_{n,v} - g_0\right)^2 \right]^{1/2} + \left[ P_0(\mu_{n,v} - \mu_0)^2 P_0\left( g_{n,v} - g_\infty\right)^2 \right]^{1/2} \\
&\qquad + \left[P_0(\mu_{n,v} - \mu_\infty)^2 P_0\left(g_{n,v} - g_\infty\right)^2 \right]^{1/2} \ .
\end{align*}
Condition (A2) states that $\max_v P_0(\mu_{n,v} - \mu_\infty)^2 \inprob 0$ and $\max_v P_0(g_{n,v} - g_\infty)^2\inprob 0$, which implies in addition that $\max_v P_0(\mu_{n,v} - \mu_0)^2 = O_{P_0}(1)$, and $\max_v P_0(g_{n,v} - g_0)^2 = O_{P_0}(1)$ by the boundedness condition (A1). Therefore, (A1)--(A3) imply that $r_n \inprob 0$.
\end{proof}
\begin{proof}[\bfseries{Proof of Theorem~\ref{thm:dr_cons_test}}]
The proof proceeds in two steps. First, we show that under the stated conditions, $\|\Omega_n^\circ\|_{F_n, p} \inprob \|\Omega_0 \|_{F_0, p}$ for any $p \in [1, \infty]$. Second, we show that $T_{n,\alpha,p} / n^{1/2} \inprob 0$. Then we will have that $\|\Omega_n^\circ\|_{F_n, p} - T_{n,\alpha,p} / n^{1/2} \inprob \|\Omega_0 \|_{F_0, p}$, which is strictly positive by Proposition~\ref{prop:equivalence} since $H_A$ holds. The result follows.
To see that $\|\Omega_n^\circ\|_{F_n, p} \inprob \|\Omega_0 \|_{F_0, p}$, we first write
\begin{align*}
\left|\|\Omega_n^\circ\|_{F_n, p} - \|\Omega_0 \|_{F_0, p}\right|&\leq \left| \|\Omega_n^\circ\|_{F_n, p} -\|\Omega_0\|_{F_n, p}\right|+ \left|\|\Omega_0\|_{F_n, p} - \|\Omega_0 \|_{F_0, p}\right|
\end{align*}
The first term is bounded above by $\sup_{a \in \d{R}} \left| \Omega_n^\circ(a)- \Omega_0(a)\right|$, which by Theorem~\ref{thm:dr_cons_omega} tends to zero in probability under (A1)--(A3). For the second term, for $p < \infty$, $\|\Omega_0\|_{F_n, p}^p \inprob \|\Omega_0 \|_{F_0, p}^p$ by the law of large numbers since $|\Omega_0|^p$ is bounded, which implies by the continuous mapping theorem that $\left|\|\Omega_0\|_{F_n, p} - \|\Omega_0 \|_{F_0, p}\right| \inprob 0$. For $p = \infty$, we have $\|\Omega_0\|_{F_n, p} = \sup_{a\in\s{A}_n} |\Omega_0| \leq \|\Omega_0\|_{F_0, p} = \sup_{a\in\s{A}_0} |\Omega_0|$ for all $n$. Let $\varepsilon > 0$, and let $a_0 \in \s{A}_0$ be such that $|\Omega_0(a_0)| > \sup_{a \in \s{A}_0} |\Omega_0(a)| - \varepsilon / 2$. If $a_0$ is a mass point of $F_0$, then $a_0 \in \s{A}_n$ with probability tending to one, so that \[P_0\left(\sup_{a\in\s{A}_n} |\Omega_0| > \sup_{a\in\s{A}_0} |\Omega_0| - \varepsilon / 2\right) \to 1 \ ,\] which implies that $P_0\left( \left|\|\Omega_0\|_{F_n, \infty} - \|\Omega_0 \|_{F_0, \infty}\right| < \varepsilon\right) \to 1$. If $a_0$ is not a mass point of $F_0$, then $\Omega_0$ must be continuous at $a_0$, so that there exists a $\delta > 0$ such that $|\Omega_0(a) - \Omega_0(a_0)| < \varepsilon / 2$ for all $a$ such that $|a - a_0| < \delta$. Then $|\Omega_0(a)| > \|\Omega_0\|_{F_0, \infty} - \varepsilon$ for all such $a$. Since $P_0(\s{A}_n \cap (a_0 - \delta, a_0 + \delta) = \emptyset) \to 0$, we then have $P_0\left( \left|\|\Omega_0\|_{F_n, \infty} - \|\Omega_0 \|_{F_0, \infty}\right| < \varepsilon\right) \to 1$. In either case, since $\varepsilon$ was arbitrary, we have that $\left|\|\Omega_0\|_{F_n, \infty} - \|\Omega_0 \|_{F_0, \infty}\right| \inprob 0$. Combining the bounds on the two terms yields $\left|\|\Omega_n^\circ\|_{F_n, p} - \|\Omega_0 \|_{F_0, p}\right| \inprob 0$ for any $p \in [1, \infty]$.
We have now shown that $\|\Omega_n^\circ\|_{F_n, p} \inprob \|\Omega_0 \|_{F_0, p}$, and it remains to show that $T_{n,\alpha,p} / n^{1/2} \inprob 0$. We recall that $T_{n,\alpha,p}$ is defined as $T_{n,\alpha,p} := \inf\left\{t : P_0\left( \|Z_n\|_{F_n,p} \leq t \mid O_1, \dotsc, O_n\right) \geq 1 - \alpha\right\}$, where $Z_n$ is a mean-zero Gaussian process on $\s{A}_n := \{A_1, \dotsc, A_n\}$ with covariance given by
\[ \Sigma_n(s,t) := E_0 \left[ Z_n(s) Z_n(t) \mid O_1, \dotsc, O_n\right] = \frac{1}{V}\sum_{v=1}^V \d{P}_{n,v} \left( D_{s, n, v}^* D_{t, n, v}^* \right) \ .\]
(The dependence on $O_1, \dotsc, O_n$ in the probability is due to $\Sigma_n$ depending on $O_1, \dotsc, O_n$.) Therefore, $T_{n,\alpha, p} / n^{1/2} > \varepsilon$ implies that $P_0\left( \|Z_n\|_{F_n,p} / n^{1/2} > \varepsilon \mid O_1, \dotsc, O_n\right) \geq \alpha$, which further implies that $P_0\left( \sup_{a \in \s{A}_n} \left|Z_n(a) / n^{1/2} \right| > \varepsilon \mid O_1, \dotsc, O_n\right) \geq \alpha$ since $\sup_{a \in \s{A}_n} \left|Z_n(a) \right| \geq \|Z_n\|_{F_n,p}$ for all $p \in [1, \infty]$. By Markov's inequality, we then have
\[P_0\left( T_{n,\alpha, p} / n^{1/2} > \varepsilon\right) \leq P_0\left( E_0\left[ \sup_{a \in \s{A}_n} \left|Z_n(a) / n^{1/2} \right| \Bigg| O_1, \dotsc, O_n\right] \geq \varepsilon\alpha\right) \ . \]
We define $\rho_n(s,t) := \left[ \Sigma_n(s,s) - 2\Sigma_n(s,t) + \Sigma_n(t,t)\right]^{1/2} / n^{1/2}$. Then, since $Z_n / n^{1/2}$ is a Gaussian process with covariance $\Sigma_n / n$, it is sub-Gaussian with respect to its intrinsic semimetric $\rho_n$, so that
\begin{align*}
E_0\left[ \sup_{a \in \s{A}_n} \left|Z_n(a) / n^{1/2} \right| \Bigg| O_1, \dotsc, O_n\right] & \leq C\left\{\Sigma_n(a_0, a_0)^{1/2} / n^{1/2} + \int_0^\infty \left[ \log N(\varepsilon, \s{A}_n, \rho_n) \right]^{1/2} \, d\varepsilon\right\}
\end{align*}
for any $a_0 \in \s{A}_n$ by Corollary~2.2.8 of \cite{van1996weak}. Here, $N(\varepsilon, \s{A}_n, \rho_n)$ is the minimal number of $\rho_n$ balls of radius $\varepsilon$ required to cover $\s{A}_n$. We note that for $\varepsilon \geq \left(\|\Sigma_n\|_{\infty} / n\right)^{1/2}$, $N(\varepsilon, \s{A}_n, \rho_n) = 1$, since it only takes one $\rho_n$ ball of radius $\left(\|\Sigma_n\|_\infty / n\right)^{1/2}$ to cover $\s{A}_n$. For $\varepsilon \leq \left(\|\Sigma_n\|_\infty / n\right)^{1/2}$, we have the trivial inequality $N(\varepsilon, \s{A}_n, \rho_n) \leq n$, since $|\s{A}_n| \leq n$. Thus, we have almost surely for all $n$ large enough that
\begin{align*}
\int_0^\infty \left[ \log N(\varepsilon, \s{A}_n, \rho_n) \right]^{1/2} \, d\varepsilon &\leq \int_0^{ \left(M / n\right)^{1/2}}\left[ \log n \right]^{1/2} \, d\varepsilon = \left(M n^{-1} \log n \right)^{1/2} \ .
\end{align*}
Therefore,
\begin{align*}
& P_0\left( E_0\left[ \sup_{a \in \s{A}_n} \left|Z_n(a) / n^{1/2} \right| \Bigg| O_1, \dotsc, O_n\right] \geq \varepsilon\alpha\right) \\
&\qquad\leq P_0\left( C\Sigma_n(a_0, a_0)^{1/2} / n^{1/2} + C\left(\|\Sigma_n\|_\infty n^{-1} \log n \right)^{1/2} \geq \varepsilon \alpha \right) \ .
\end{align*}
It is straightforward to see that condition (A1) implies that $\sup_{s, t \in \s{A}_0}|\Sigma_n(s,t)| = O_{P_0}(1)$, so that the last probability tends to zero for any $\varepsilon, \alpha > 0$. Therefore, $P_0\left( T_{n,\alpha, p} / n^{1/2} > \varepsilon\right) \longrightarrow 0$ for any $\varepsilon > 0$, so that $ T_{n,\alpha, p} / n^{1/2} \inprob 0$. This completes the proof.
\end{proof}
\begin{proof}[\bfseries{Proof of Theorem~\ref{thm:weak_conv_omega}}]
As in the proof of Theorem~\ref{thm:dr_cons_omega}, $\max_v\sup_{a_0 \in \s{A}_0}|R_{n,a_0,v,1}| = o_{P_0}(n^{-1/2})$ by Lemma~\ref{lemma:emp_process_neg}, $\max_v\sup_{a_0 \in \s{A}_0} | R_{n, a_0,v, 3}| = o_{P_0}(n^{-1/2})$ by Lemma~\ref{lemma:uprocess}, $\max_v\sup_{a_0 \in \s{A}_0} | R_{n,a_0,v,4}| = O_{P_0}(r_n) = o_{P_0}(n^{-1/2})$ by assumption, and $\max_v\sup_{a_0 \in \s{A}_0} | R_{n,a_0,v,5}| = o_{P_0}(n^{-1})$. For $R_{n,a_0,v,2}$, since $\mu_\infty = \mu_0$, we have
\[ R_{n,a_0,v,2} = \left(\gamma_{\mu_{n,v}, F_0, Q_0} - \gamma_{\mu_{n,v}, F_0, Q_{n,v}}\right) \left[ F_{n,v}(a_0) - F_0(a_0)\right] = \left(N_v^{-1/2} \d{G}_{n,v} \eta_{\mu_{n,v}, F_0}\right) O_{P_0}(n^{-1/2}) \ ,\]
where we define $\eta_{\mu, F}(w) := \int \mu(a,w) \, F(da)$. Since $\eta_{\mu_{n,v}, F_0}$ is a fixed function relative to $\s{V}_{n,v}$, $\max_v \d{G}_{n,v} \eta_{\mu_{n,v}, F_0} = O_{P_0}(1)$, so that $\max_v \sup_{a_0 \in \s{A}_0} |R_{n,a_0,v,2}| = O_{P_0}(n^{-1})$.
We now have $ \Omega_n^\circ(a) - \Omega_0(a) = \d{P}_n D_{a, 0}^* + R_{n,a},$
where $\sup_{a \in \s{A}_0} |R_{n,a}| = o_{P_0}(n^{-1/2})$. Since $\{ D_{a,0}^* : a \in \s{A}_0\}$ is a $P_0$-Donsker class, the result follows.
\end{proof}
Before proving Theorem~\ref{thm:test_size}, we introduce several additional lemmas. First, we demonstrate that $\Sigma_n$ is a uniformly consistent estimator of the limiting covariance $\Sigma_0$.
\begin{lemma}\label{lemma:covar_cons}
If the conditions of Theorem~\ref{thm:test_size} hold, then $E_0\left[\sup_{(s,t) \in \s{A}_0^2} | \Sigma_n(s,t) - \Sigma_0(s,t)| \right]\longrightarrow 0$.
\end{lemma}
\begin{proof}[\bfseries{Proof of Lemma~\ref{lemma:covar_cons}}]
We recall that $\Sigma_n(s,t) := \frac{1}{V}\sum_{v=1}^V \d{P}_{n,v} [D_{s,n,v}^* D_{t,n,v}^*]$ and $\Sigma_0(s,t) := P_0[ D_{s,0}^* D_{t,0}^*]$. We can thus write
\begin{align*}
\nonumber\Sigma_n(s,t) - \Sigma_0(s,t) &= \frac{1}{V} \sum_{v=1}^V \left[ (\d{P}_{n,v} - P_0)(D_{s,n,v}^* D_{t,n,v}^*) + P_0(D_{s,n,v}^* D_{t,n,v}^* - D_{s,0}^* D_{t,0}^*) \right]
\end{align*}
Therefore,
\begin{align}
E_0\left[\sup_{(s,t) \in \s{A}_0^2} | \Sigma_n(s,t) - \Sigma_0(s,t)| \right] &\leq \max_v N_v^{-1/2} E_0 \left[\sup_{(s,t) \in \s{A}_0^2}\left| \d{G}_{n,v}(D_{s,n,v}^* D_{t,n,v}^*)\right| \right] \nonumber\\
&\qquad + \max_v E_0 \left[\sup_{(s,t) \in \s{A}_0^2}\left| P_0(D_{s,n,v}^* D_{t,n,v}^* - D_{s,0}^* D_{t,0}^*) \right| \right] \label{eq:decomp_sigma}\ .
\end{align}
For the first term, a conditioning argument analogous to that in the proof of Lemma~\ref{lemma:emp_process_neg} in conjunction with Theorem~2.14.1 of \cite{van1996weak} implies that \[\max_v E_0 \left[\sup_{(s,t) \in \s{A}_0^2}\left| \d{G}_{n,v}(D_{s,n,v}^* D_{t,n,v}^*)\right| \right] = O(1) \ ,\] since $\{ D_{s,n,v}^* D_{t,n,v}^* : (s,t) \in \s{A}_0^2\}$ satisfies a suitable entropy bound conditional upon the nuisance function estimators by permanence properties of entropy bounds. Therefore, the first term is $O_{P_0}(n^{-1/2})$, and in particular is $o_{P_0}(1)$.
For the second term, we note that
\begin{align*}
P_0 \left| D_{s,n,v}^* D_{t,n,v}^* - D_{s,0}^* D_{t,0}^*\right| &\leq P_0 \left|\left(D_{s,n,v}^* - D_{s,0}^*\right) D_{t,0}^* \right| +P_0 \left|\left(D_{t,n,v}^* - D_{t,0}^*\right) D_{s,n,v}^* \right| \\
&\leq \left\{ P_0 \left( D_{s,n,v}^* - D_{s,0}^*\right)^2P_0 \left(D_{t,0}^*\right)^2 \right\}^{1/2} \\
&\qquad+\left\{P_0 \left(D_{t,n,v}^* - D_{t,0}^*\right)^2 P_0 \left( D_{s,n,v}^* \right)^2\right\}^{1/2} \ .
\end{align*}
Since $P_0 (D_{s,n,v}^*)^2$ and $P_0 (D_{t,0}^*)^2$ are uniformly bounded for all $n$ large enough by condition (A1), the preceding display is bounded up to a constant by $P_0 \left(F_{n,v}^2\right)$ for $F_{n,v}$ defined in the proof of Lemma~\ref{lemma:emp_process_neg}. This tends to zero in expectation uniformly over $v$ by an argument analogous to that in the proof of Lemma~\ref{lemma:emp_process_neg} and the assumption that $r_n = o_{P_0}(n^{-1/2})$. This implies that the second term in \eqref{eq:decomp_sigma} tends to zero.
\end{proof}
Given $O_1, \dotsc, O_n$, let $Z_n$ be distributed according to a mean-zero Gaussian process with covariance $\Sigma_n$ as defined in the main text. The next lemma shows that $Z_n$ converges weakly in $\ell^\infty(\s{A}_0)$ to the limiting Gaussian process $Z_0$ with covariance $\Sigma_0$.
\begin{lemma}\label{lemma:Zn_conv}
If the conditions of Theorem~\ref{thm:test_size} hold, then $Z_n$ converges weakly in $\ell^\infty(\s{A}_0)$ to the limiting Gaussian process $Z_0$.
Furthermore, $\|Z_n\|_{F_n,p} - \|Z_n\|_{F_0, p} \inprob 0$, so that $\|Z_n\|_{F_n,p} \indist \|Z_0\|_{F_0, p}$ for any $p \in [1, \infty]$.
\end{lemma}
\begin{proof}[\bfseries{Proof of Lemma~\ref{lemma:Zn_conv}}]
We first demonstrate that the finite-dimensional marginals of $Z_n$ converge in distribution to the finite-dimensional marginals of $Z_{0}$. We fix $(a_1, \dotsc, a_m) \in \s{A}_0^m$ and write $Z_{n,a} := (Z_n(a_1), \dotsc, Z_n(a_m))$ and $Z_{0,a} := (Z_0(a_1), \dotsc, Z_0(a_m))$. We let $\Sigma_{n,a}$ be the covariance matrix of $Z_{n,a}$ conditional on $O_1, \dotsc, O_n$ and $\Sigma_{0,a}$ be the covariance matrix of $Z_{0,a}$. Since $Z_n$ is a mean-zero Gaussian process conditional on $O_1, \dotsc, O_n$ and $Z_0$ is a mean-zero Gaussian process, we then have that
\begin{align*}
\left|E_0\left[ \exp\{ i t^T Z_{n,a} \} \right] - E_0\left[\exp\{ i t^T Z_{0,a} \} \right]\right| &= \left|E_0\left\{ E\left[ \exp\{ i t^T Z_{n,a} \} \mid O_1, \dotsc, O_n\right] \right\}- E_0\left[\exp\{ i t^T Z_{0,a} \} \right]\right|\\
& =\left|E_0\left[ \exp\left\{ -\tfrac{1}{2} t^T \Sigma_{n,a} t\right\} -\exp\left\{ -\tfrac{1}{2} t^T \Sigma_{0,a} t\right\} \right]\right| \\
&\leq E_0 \left| \tfrac{1}{2} t^T \left(\Sigma_{n,a}- \Sigma_{0,a}\right) t\right| \\
&\leq E_0\left[ \sup_{s,t} |\Sigma_{n}(s,t)- \Sigma_{0}(s,t)| \right] \sum_{i,j} |t_i t_j| \ ,
\end{align*}
which tends to zero for every $t$ by Lemma~\ref{lemma:covar_cons}; here the first inequality uses that $u \mapsto e^{-u}$ is 1-Lipschitz on $[0, \infty)$ and that $\Sigma_{n,a}$ and $\Sigma_{0,a}$ are positive semi-definite. Therefore,
\[(Z_n(a_1), \dotsc, Z_n(a_m)) \indist (Z_0(a_1), \dotsc, Z_0(a_m))\]
for any $(a_1, \dotsc, a_m) \in \s{A}_0^m$ and $m \in \{1, 2, \dotsc\}$.
In order to show that $Z_n$ converges weakly in $\ell^\infty(\s{A}_0)$ to the limiting Gaussian process $Z_0$, we need also to demonstrate asymptotic uniform mean-square equicontinuity, meaning that for all $\varepsilon$ and $\eta > 0$, there exists $\delta > 0$ such that
\[ P_0 \left( \sup_{d_0(s,t) < \delta} |Z_n(s) - Z_n(t)| > \varepsilon \right) < \eta \ ,\]
where $d_0(s,t) := |F_0(s) - F_0(t)|^{1/2}$. We define $d_n(s, t):= |F_n(s) - F_n(t)|^{1/2}$. Then $\sup_{(s,t) \in \s{A}_0^2} |d_n(s,t) - d_0(s,t)| \inprob 0$. We note that since $Z_n$ is a Gaussian process conditional on $O_1,\dotsc,O_n$ with covariance $\Sigma_n$, it is sub-Gaussian with respect to the semi-metric $\rho_n$ given by $\rho_n(s,t) := [\Sigma_n(s,s) + \Sigma_n(t,t) - 2\Sigma_n(s,t)]^{1/2}$. Furthermore, it is straightforward to verify that condition (A1) implies that $\rho_n(s,t) \leq Cd_n(s,t)$ for all $(s,t) \in \s{A}_0^2$ and some $C < \infty$ not depending on $n$, so that $Z_n$ is sub-Gaussian with respect to $d_n$ as well. Therefore, by Corollary~2.2.8 of \cite{van1996weak},
\[ E\left[ \sup_{d_n(s,t) < \delta} |Z_n(s) - Z_n(t)| \mid O_1, \dotsc O_n \right] \leq C' \int_0^\delta \left[ \log N(x, \s{A}_0, d_n)\right]^{1/2} \, dx \]
for every $\delta > 0$ and some $C' < \infty$ not depending on $n$ or $\delta$, where, as before, $N(x, \s{A}_0, d)$ is the minimal number of $d$-balls of radius $x$ required to cover $\s{A}_0$. For $x < n^{-1/2}$, $N(x, \s{A}_0, d_n) \leq n$, and $N(x, \s{A}_0, d_n) \leq x^{-2}$ otherwise, so that
\[ E\left[ \sup_{d_n(s,t) < \delta} |Z_n(s) - Z_n(t)| \mid O_1, \dotsc O_n \right] \leq C''\left[ \left(\log n\right)^{1/2} n^{-1/2} + h(\delta)\right]\ , \]
where $h(x) = x \left[ \log (1/x)\right]^{1/2}$, which tends to zero as $x \to 0$. Thus, for any $\alpha > 0$ we have
\begin{align*}
&P_0\left( \sup_{d_0(s,t) < \delta} |Z_n(s) - Z_n(t)| > \varepsilon\right) = E_0\left[ P_0\left( \sup_{d_0(s,t) < \delta} |Z_n(s) - Z_n(t)| > \varepsilon \mid O_1, \dotsc O_n\right) \right]\\
&\qquad\qquad\leq E_0\left[ P_0\left( \sup_{d_0(s,t) < \delta} |Z_n(s) - Z_n(t)| > \varepsilon \mid \| d_n - d_0 \|_\infty < \alpha, O_1, \dotsc O_n\right)\right]\\
&\qquad\qquad \qquad+ P_0\left(\| d_n - d_0 \|_\infty \geq \alpha \right) \\
&\qquad\qquad\leq E_0\left[ P_0\left( \sup_{d_n(s,t) < \delta + \alpha} |Z_n(s) - Z_n(t)| > \varepsilon \mid \| d_n - d_0 \|_\infty < \alpha, O_1, \dotsc O_n\right)\right] \\
&\qquad\qquad \qquad+ P_0\left(\| d_n - d_0 \|_\infty \geq \alpha \right) \\
&\qquad\qquad\leq \varepsilon^{-1} C''\left[ \left(\log n\right)^{1/2}\min\{\delta + \alpha, n^{-1/2}\} + h(\delta + \alpha) \right] + P_0\left(\| d_n - d_0 \|_\infty \geq \alpha \right) \ .
\end{align*}
We can choose $\delta$ and $\alpha$ such that $C'' h(\delta + \alpha) /\varepsilon < \eta / 3$. For any such fixed $\delta$ and $\alpha$, $n^{-1/2} < \delta + \alpha$ and $\varepsilon^{-1} C'' \left( n^{-1} \log n\right)^{1/2} < \eta/ 3$ for all $n$ large enough. Finally, for any $\alpha > 0$, $P_0\left(\| d_n - d_0 \|_\infty \geq \alpha \right) < \eta /3$ for all $n$ large enough since $\| d_n - d_0 \|_\infty \inprob 0$. We thus have that the limit superior as $n \to \infty$ of the preceding display is smaller than $\eta$.
For the claim that $\|Z_n\|_{F_n,p} - \|Z_n\|_{F_0, p} \inprob 0$, we first note that $Z_n(s) = Z_n(t)$ almost surely for any $s,t$ such that $F_n(s) = F_n(t)$ since $\rho_n(s,t)^2 = E \left\{ \left[Z_n(s) - Z_n(t)\right]^2\right\} \leq C |F_n(s) - F_n(t)|$. Therefore, $Z_n$ is almost surely a right-continuous step function with steps at $A_1, \dotsc, A_n$, so that $\|Z_n\|_{F_n, \infty} = \|Z_n\|_{F_0, \infty}$ almost surely. For the case that $p \in [1, \infty)$, we let $\varepsilon > 0$. Then, for any $\delta, \gamma > 0$ we have
\begin{align*}
P_0\left(\left| \|Z_n\|_{F_n,p}-\|Z_n\|_{F_0,p} \right| > \varepsilon \right) &\leq P_0\left(\left| \|Z_n\|_{F_n,p}-\|Z_n\|_{F_0,p} \right| > \varepsilon \, \Bigg| \, \sup_{d_0(s,t) < \delta} |Z_n(s) - Z_n(t)| \leq \gamma \right) \\
&\qquad \qquad + P_0\left( \sup_{d_0(s,t) < \delta} |Z_n(s) - Z_n(t)| > \gamma \right) \ .
\end{align*}
The second term tends to zero by the above. For the first term, we let $\s{A}_1^+, \dotsc \s{A}_m^+$ be intervals covering $\s{A}_0$ such that $\s{A}_0 \cap \s{A}_j^+ \neq \emptyset$ and such that $\max_{1 \leq j\leq m}F_0\left(\s{A}_j^+\right) \leq \delta^2$. This can be done with $m \leq 2 \delta^{-2}$ intervals. We let $a_j \in \s{A}_j^+ \cap \s{A}_0$ for each $j$. We then define $Z_n^+$ as the stochastic process on $\s{A}_0$ such that $Z_n^+(a) = Z_n(a_j)$ for all $a \in \s{A}_j^+$ for each $j \in \{1, \dotsc, m\}$. Given that $\sup_{d_0(s,t) < \delta} |Z_n(s) - Z_n(t)| \leq \gamma$, we then have
\begin{align*}
\left| \|Z_n\|_{F_n, p} - \|Z_n^+\|_{F_n, p} \right| &\leq \|Z_n - Z_n^+\|_{F_n, p} = \left[ \sum_{j=1}^m \int_{\s{A}_j^+} \left| Z_n(a) - Z_n^+(a_j)\right|^p \, dF_n(a) \right]^{1/p} \leq \gamma
\end{align*}
and
\begin{align*}
\left| \|Z_n\|_{F_0, p} - \|Z_n^+\|_{F_0, p} \right| &\leq \|Z_n - Z_n^+\|_{F_0, p} = \left[ \sum_{j=1}^m \int_{\s{A}_j^+} \left| Z_n(a) - Z_n^+(a_j)\right|^p \, dF_0(a) \right]^{1/p} \leq \gamma \ .
\end{align*}
Therefore, if $\sup_{d_0(s,t) < \delta} |Z_n(s) - Z_n(t)| \leq \gamma$, then
\begin{align*}
\left| \|Z_n\|_{F_n,p}-\|Z_n\|_{F_0,p} \right| &\leq 2\gamma + \left| \|Z_n^+\|_{F_n, p} - \|Z_n^+\|_{F_0, p} \right| \leq 2 \gamma + \left| \sum_{j=1}^m \int_{\s{A}_j^+} \left|Z_n^+(a)\right|^p (F_n - F_0)(da) \right|^{1/p} \ .
\end{align*}
Now, since $Z_n^+(a) = Z_n(a_j)$ for all $a \in \s{A}_j^+$,
\[\int_{\s{A}_j^+} \left|Z_n^+(a)\right|^p (F_n - F_0)(da) = |Z_n(a_j)|^p \left[ F_n\left(\s{A}_j^+\right) - F_0\left(\s{A}_j^+\right)\right] \ .\]
Furthermore, $\left|F_n\left(\s{A}_j^+\right) - F_0\left(\s{A}_j^+\right) \right| \leq 2 \| F_n-F_0\|_\infty$ since $\s{A}_j^+$ is an interval. Therefore,
\[ \left| \sum_{j=1}^m \int_{\s{A}_j^+} \left|Z_n^+(a)\right|^p (F_n - F_0)(da) \right|^{1/p} \leq \left( 2 m\| Z_n\|_{\infty}^p \| F_n-F_0\|_\infty \right)^{1/p} \leq \left(4 \delta^{-2}\right)^{1/p} \| Z_n\|_{\infty} \| F_n-F_0\|_\infty^{1/p} \ , \]
which implies that
\[ \left| \|Z_n\|_{F_n,p}-\|Z_n\|_{F_0,p} \right| \leq 2 \gamma + 4\delta^{-2}\| Z_n\|_{\infty} \| F_n-F_0\|_\infty^{1/p} \ . \]
Hence, setting $\gamma = \varepsilon / 4$ and $\delta = \varepsilon$, we have that
\begin{align*}
&P_0\left(\left| \|Z_n\|_{F_n,p}-\|Z_n\|_{F_0,p} \right| > \varepsilon \, \Bigg| \, \sup_{d_0(s,t) < \delta} |Z_n(s) - Z_n(t)| \leq \gamma\right) \\
&\qquad \leq P_0\left(\| Z_n\|_{\infty} \| F_n-F_0\|_\infty^{1/p} > (\varepsilon/2)^{1+2/p}\, \Bigg| \, \sup_{d_0(s,t) < \delta} |Z_n(s) - Z_n(t)| \leq \gamma\right)\ .
\end{align*}
Since $\|Z_n \|_{\infty} = O_{P_0}(1)$ and $\| F_n-F_0\|_\infty^{1/p} = o_{P_0}(1)$, this tends to zero for any $\varepsilon > 0$ and $p \in [1, \infty)$, which establishes that $\|Z_n\|_{F_n,p} - \|Z_n\|_{F_0, p} \inprob 0$.
Finally, since $\|\cdot\|_{F_0,p}$ is a continuous mapping on $\ell^\infty(\s{A}_0)$, $\|Z_n\|_{F_0,p} \indist \|Z_0\|_{F_0, p}$ for any $p \in [1, \infty]$. Therefore, $\|Z_n\|_{F_n,p} \indist \|Z_0\|_{F_0, p}$ as well, which completes the proof.
\end{proof}
\begin{lemma}\label{lemma:Omegan_norm}
If the conditions of Theorem~\ref{thm:test_size} hold, then $\|\Omega_n^\circ\|_{F_n,p} - \|\Omega_n^\circ\|_{F_0,p} = o_{P_0}\left( n^{-1/2}\right)$ for any $p \in [1, \infty]$.
\end{lemma}
\begin{proof}[\bfseries{Proof of Lemma~\ref{lemma:Omegan_norm}}]
We first note that $\Omega_n^\circ$ is a right-continuous step function with steps at $A_1, \dotsc, A_n$, and that $\Omega_n^\circ(a) = 0$ for $a < \min_i A_i$. Therefore, since each $A_i \in \s{A}_0$, $\|\Omega_n^\circ\|_{F_n, \infty} = \|\Omega_n^\circ\|_{F_0, \infty}$. For $p < \infty$, we have
\begin{align*}
n^{1/2}\left| \|\Omega_n^\circ\|_{F_n, p} - \|\Omega_n^\circ\|_{F_0, p}\right| &= n^{1/2}\left| \left( \int |\Omega_n^\circ|^p \, dF_n \right)^{1/p} - \left( \int |\Omega_n^\circ|^p \, dF_0 \right)^{1/p} \right| \\
&\leq \left| n^{p/2} \int |\Omega_n^\circ|^p \, d(F_n-F_0) \right|^{1/p} = \left| n^{\frac{p-1}{2}}\d{G}_n |\Omega_n^\circ|^p \right|^{1/p} \ .
\end{align*}
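The inequality in the second line above uses the elementary bound
\[ \left| a^{1/p} - b^{1/p} \right| \leq |a - b|^{1/p} \quad \text{for all } a, b \geq 0 \text{ and } p \geq 1 \ , \]
which follows from the subadditivity of $t \mapsto t^{1/p}$ on $[0, \infty)$; here it is applied with $a = n^{p/2}\int |\Omega_n^\circ|^p \, dF_n$ and $b = n^{p/2}\int |\Omega_n^\circ|^p \, dF_0$.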
Therefore, if we can demonstrate that $ |\Omega_n^\circ|^p$ is contained in a class of functions $\s{G}_{n,p}$ such that $E_0 \left[ \sup_{g \in \s{G}_{n,p}} | \d{G}_n g| \right] = o_{P_0}\left( n^{-\frac{p-1}{2}}\right)$, then we will have that $n^{1/2}\left| \|\Omega_n^\circ\|_{F_n, p} - \|\Omega_n^\circ\|_{F_0, p}\right| \inprob 0$. In order to show this, we will need a boundedness property that holds only in probability, rather than almost surely for all $n$ large enough. Thus, for any $\eta > 0$, we write
\begin{align*}
P_0 \left( \left| n^{\frac{p-1}{2}} \d{G}_n |\Omega_n^\circ|^p \right| > \eta\right) &\leq P_0 \left( \left| n^{\frac{p-1}{2}}\d{G}_n |\Omega_n^\circ|^p \right| > \eta \Bigg| \left\| \Omega_n^\circ \right\|_\infty \leq n^{-\alpha}, \frac{1}{n} \sum_{i=1}^n |Y_i| \leq E_0[|Y|] + 1\right) \\
&\qquad + P_0\left( \left\| \Omega_n^\circ \right\|_\infty > n^{-\alpha}\right) + P_0 \left( \frac{1}{n} \sum_{i=1}^n |Y_i| > E_0[|Y|] + 1 \right) \ .
\end{align*}
The final probability on the right tends to zero since $\frac{1}{n} \sum_{i=1}^n |Y_i| \inprob E_0[ |Y| ]$. The second probability on the right side tends to zero for any fixed $\alpha \in [0,1/2)$ since $\left\|n^{\alpha}\Omega_n^\circ \right\|_\infty = n^{\alpha - 1/2} \left\| n^{1/2} \Omega_n^\circ \right\|_\infty = O_{P_0}\left(n^{\alpha - 1/2}\right)$. Now we can focus on the first probability. We note that, with some rearranging, we can write $\Omega_n^\circ(a_0) = \sum_{i=1}^n \omega_{n,i} I_{[A_i, \infty)}(a_0)$, where
\begin{align*}
\omega_{n,i} &:= \frac{1}{V N_{v_i}} \frac{Y_i - \mu_{n,v_i}(A_i, W_i)}{ g_{n,v_i}(A_i ,W_i) } - \sum_{O_j \in \s{T}_{n, v_i}} \frac{1}{V N_{v_j} (n - N_{v_j})} \frac{Y_j - \mu_{n, v_j}(A_j, W_j)}{g_{n,v_j}(A_j, W_j)} \\
&\qquad + \frac{1}{V N_{v_i}^2} \sum_{j \in \s{V}_{n,v_i}} \mu_{n, v_i}(A_i, W_j) - \sum_{O_k \in \s{T}_{n,v_i}} \sum_{j \in \s{V}_{n,v_k}} \frac{\mu_{n, v_k}(A_k, W_j) }{V N_{v_k}^2 (n - N_{v_k})} \ ,
\end{align*}
where $v_i$ is the unique element of $\{1, \dotsc, V\}$ such that $i \in \s{V}_{n,v_i}$. Using the boundedness condition (A1) and the fact that $\frac{1}{V N_v} \leq \frac{2}{n}$ for each $v$, it is straightforward to see that
\[ | \omega_{n,i}| \leq \frac{2}{n} \left[ K_1^{-1} |Y_i| + \left(K_1^{-1} + 1\right) K_0 + \sum_{O_j \in \s{T}_{n,v_i}}\frac{ K_1^{-1} |Y_j| + K_0}{n - N_{v_j}} \right]\ , \]
which, if $\frac{1}{n} \sum_{i=1}^n |Y_i| \leq E_0[|Y|] + 1$, implies that
\[ \sum_{i=1}^n | \omega_{n,i}| \leq 2\left[ 2K_1^{-1} \left( E_0[|Y|] + 1\right) + \left(K_1^{-1} + 2\right) K_0 \right] =: C \ .\]
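To verify this constant, note that on the event $\frac{1}{n}\sum_{i=1}^n |Y_i| \leq E_0[|Y|] + 1$, summing the first two terms of the bound on $|\omega_{n,i}|$ over $i$ gives at most $2K_1^{-1}\left(E_0[|Y|] + 1\right) + 2\left(K_1^{-1} + 1\right) K_0$. For the third term, exchanging the order of summation and using that each $O_j$ belongs to $\s{T}_{n,v_i}$ for exactly $n - N_{v_j}$ indices $i$ (namely, the indices outside fold $v_j$) yields
\[ \sum_{i=1}^n \frac{2}{n} \sum_{O_j \in \s{T}_{n,v_i}} \frac{K_1^{-1} |Y_j| + K_0}{n - N_{v_j}} = \frac{2}{n}\sum_{j=1}^n \left( K_1^{-1} |Y_j| + K_0\right) \leq 2K_1^{-1}\left(E_0[|Y|] + 1\right) + 2K_0 \ , \]
and adding the two contributions gives exactly the constant $C$ above.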
We then have that $C^{-1} \Omega_n^\circ(a_0) = \sum_{i=1}^n \lambda_{n,i} I_{[A_i, \infty)}(a_0)$, where $\lambda_{n,i} := \omega_{n, i} / C$ satisfy $\sum_{i=1}^n |\lambda_{n,i}| \leq 1$. Thus, if $\frac{1}{n} \sum_{i=1}^n |Y_i| \leq E_0[|Y|] + 1$, then $C^{-1} \Omega_n^\circ$ is contained in the symmetric convex hull $\s{F}$ of the class $\{ x \mapsto I_{[a, \infty)}(x) : a \in \d{R}\}$. Since this latter class has VC index 2, by Theorem~2.6.9 of \cite{van1996weak}, $\s{F}$ satisfies $\log N(\varepsilon, \s{F}, L_2(Q)) \leq D \varepsilon^{-1}$ for all probability measures $Q$ and for a constant $D$ not depending on $\varepsilon$ or $Q$. Thus, if both $\frac{1}{n} \sum_{i=1}^n |Y_i| \leq E_0[|Y|] + 1$ and $\left\| \Omega_n^\circ \right\|_\infty \leq n^{-\alpha}$, we have that $\Omega_n^\circ$ is contained in the class $\s{F}_{n} := \{ Cf : f \in \s{F}, \|Cf\|_\infty \leq n^{-\alpha} \}$ with envelope function $R_n(x) = n^{-\alpha}$. Since $\s{F}_n \subseteq C\s{F}$, $\s{F}_n$ satisfies the same entropy bound as $\s{F}$ up to the constant $D$. Hence $|\Omega_n^\circ|^p$ is contained in $\left|\s{F}_{n}\right|^p := \{ |f|^p : f \in \s{F}_n\}$ with envelope $R_{n}^p = n^{-p\alpha}$. Since the function $x \mapsto |x|^p$ is convex for $p \geq 1$, we have $\left| |f|^p - |g|^p\right| \leq |f - g| p R_{n}^{p-1}$ for $f,g \in \s{F}_n$. By Theorem~2.10.20 of \cite{van1996weak} (or Lemma~5.1 of \cite{vaart2006survival}), we then have
\[\sup_Q \log N\left(\varepsilon \|p R_{n}^{p} \|_{Q,2}, \left| \s{F}_n\right|^p, L_2(Q)\right) \leq \sup_Q \log N\left(\varepsilon \| R_n \|_{Q,2},\s{F}_n , L_2(Q)\right) \leq D \left(\varepsilon n^{-\alpha}\right)^{-1}\ . \]
Theorem~2.14.1 of \cite{van1996weak} then implies that for $n$ large enough
\begin{align*}
E_0 \left| \d{G}_n \left| \Omega_n^\circ\right|^p \right| &\leq E_0\left[ \sup_{g \in |\s{F}_n|^p} \left| \d{G}_n g\right| \right] \leq C' \| R_n^p \|_{P,2} \int_0^1 \left[ 1 + \sup_Q \log N\left(\varepsilon \|R_n^p \|_{Q,2}, \left| \s{F}_n\right|^p, L_2(Q)\right)\right]^{1/2} \, d\varepsilon \\
&\leq C' n^{-p\alpha} \int_0^1 \left[ 1 + D \left(\varepsilon n^{-\alpha}/p\right)^{-1}\right]^{1/2} \, d\varepsilon = C' p n^{(1-p)\alpha} \int_0^{n^{-\alpha}/p} \left[ 1 + D / \varepsilon\right]^{1/2} \, d\varepsilon \\
&\leq C' p n^{(1-p)\alpha} \int_0^{n^{-\alpha}/p} \left[ 2D / \varepsilon\right]^{1/2} \, d\varepsilon\\
&= C'' n^{(1-p)\alpha} n^{-\alpha / 2} = O_{P_0}\left(n^{(1/2 - p)\alpha}\right) \ .
\end{align*}
Thus, we have $n^{\frac{p-1}{2}}\d{G}_n |\Omega_n^\circ|^p = O_{P_0}\left(n^{(p-1)/2+ (1/2 - p)\alpha}\right)$. Since $(p-1)/2 + (1/2 - p)\alpha < 0$ for any $\alpha > \frac{p-1}{2p-1}$ and $\frac{p-1}{2p-1} < \frac{1}{2}$ for all $p \geq 1$, we can choose an $\alpha$ to get $n^{\frac{p-1}{2}}\d{G}_n |\Omega_n^\circ|^p = o_{P_0}(1)$ as desired.
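To illustrate the choice of $\alpha$: for $p = 2$, the requirement $\alpha > (p-1)/(2p-1)$ reads $\alpha > 1/3$, so any $\alpha \in (1/3, 1/2)$ suffices; taking $\alpha = 2/5$, for instance, gives
\[ n^{\frac{p-1}{2}} \d{G}_n |\Omega_n^\circ|^p = O_{P_0}\left(n^{1/2 - 3\alpha/2}\right) = O_{P_0}\left(n^{-1/10}\right) = o_{P_0}(1) \ . \]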
\end{proof}
We can now prove Theorem~\ref{thm:test_size}.
\begin{proof}[\bfseries{Proof of Theorem~\ref{thm:test_size}}]
Since $\Omega_0 = 0$ under $H_0$, Theorem~\ref{thm:weak_conv_omega} implies that $n^{1/2} \Omega_n^\circ$ converges weakly as a process in $\ell^\infty(\s{A}_0)$ to $Z_0$. Thus, $n^{1/2}\|\Omega_n^\circ\|_{F_0, p} \indist \|Z_0\|_{F_0, p}$ by the continuous mapping theorem. By Lemma~\ref{lemma:Omegan_norm}, we have $n^{1/2}\|\Omega_n^\circ\|_{F_n, p} \indist \|Z_0\|_{F_0, p}$ as well. By Lemma~\ref{lemma:Zn_conv}, $\|Z_n\|_{F_n,p} \indist \|Z_0\|_{F_0, p}$, and since by assumption the distribution function of $\|Z_0\|_{F_0, p}$ is strictly increasing in a neighborhood of $T_{0,\alpha,p}$, the quantile function of $\|Z_0\|_{F_0, p}$ is continuous at $1-\alpha$. Therefore, $T_{n,\alpha,p}$, which is by definition the $1-\alpha$ quantile of $\|Z_n\|_{F_n,p}$, converges in probability to the $1-\alpha$ quantile of $\|Z_0\|_{F_0, p}$. Therefore, $n^{1/2}\|\Omega_n^\circ\|_{F_n, p} - T_{n,\alpha,p} + T_{0,\alpha,p} \indist \|Z_0\|_{F_0, p}$. Hence,
\begin{align*}
P_0 \left( n^{1/2}\|\Omega_n^\circ\|_{F_n, p} > T_{n,\alpha,p}\right) &= P_0 \left( n^{1/2}\|\Omega_n^\circ\|_{F_n, p} - T_{n,\alpha,p} + T_{0,\alpha,p} > T_{0,\alpha,p}\right) \\
&\longrightarrow P_0 \left( \|Z_0\|_{F_0, p} > T_{0,\alpha,p}\right) \leq \alpha\ .
\end{align*}
Since by assumption the distribution function of $\|Z_0\|_{F_0,p}$ is continuous at $T_{0,\alpha,p}$, $P_0 \left( \|Z_0\|_{F_0, p} > T_{0,\alpha,p}\right) = \alpha$, which completes the proof.
\end{proof}
\begin{proof}[\bfseries{Proof of Theorem~\ref{thm:weak_conv_local}}]
By Theorem~\ref{thm:weak_conv_omega} and since $\Omega_0(a) = 0$ for all $a$,
\[ \sup_{a \in \s{A}_0} \left| n^{1/2} \Omega_n(a) - \d{G}_n D_{a,0}^* \right| \inprob 0\ .\]
The distribution $P_n$ is contiguous to $P_0$ by Lemma~3.10.11 of \cite{van1996weak}. Therefore, by Theorem~3.10.5 of \cite{van1996weak},
\[ \sup_{a \in \s{A}_0} \left| n^{1/2} \Omega_n(a) - \d{G}_n D_{a,0}^* \right| \stackrel{\mathrm{P_n}}{\longrightarrow} 0\ .\]
Since $\{D_{a, 0}^* : a \in \s{A}_0\}$ is a $P_0$-Donsker class and $\sup_{a \in \s{A}_0} | P_0 D_{a, 0}^*|< \infty$, Theorem~3.10.12 of \cite{van1996weak} implies that $\left\{ \d{G}_nD_{a,0}^* : a \in \s{A}_0 \right\}$ converges weakly in $\ell^\infty(\s{A}_0)$ to $\{ Z_0(a) + P_0( h D_{a, 0}^*) : a \in \s{A}_0 \}$. The result follows.
\end{proof}
\begin{proof}[\bfseries{Proof of Theorem~\ref{thm:local_alt_power}}]
By Lemma~\ref{lemma:Omegan_norm}, $n^{1/2}\left( \| \Omega_n^\circ\|_{F_n, p} - \| \Omega_n^\circ\|_{F_0, p}\right) \inprob 0$. Therefore, since $P_n$ is contiguous to $P_0$ by Lemma~3.10.11 of \cite{van1996weak}, Theorem~3.10.5 of \cite{van1996weak} implies that $n^{1/2}\left( \| \Omega_n^\circ\|_{F_n, p} - \| \Omega_n^\circ\|_{F_0, p}\right) \stackrel{\mathrm{P_n}}{\longrightarrow} 0$. Hence, by the continuous mapping theorem and Theorem~\ref{thm:weak_conv_local}, $n^{1/2} \| \Omega_n^\circ\|_{F_n, p}$ converges in distribution under $P_n$ to $\|\bar{Z}_{0,h}\|_{F_0,p}$. In addition, since $T_{n,\alpha,p} \inprob T_{0,\alpha,p}$ (as demonstrated in the proof of Theorem~\ref{thm:test_size}), $T_{n,\alpha,p} \stackrel{\mathrm{P_n}}{\longrightarrow} T_{0,\alpha,p}$. Therefore, $n^{1/2} \| \Omega_n^\circ\|_{F_n, p} - T_{n,\alpha,p} + T_{0,\alpha,p}$ converges in distribution under $P_n$ to $\|\bar{Z}_{0,h}\|_{F_0,p}$. Thus,
\begin{align*}
P_n\left( n^{1/2} \| \Omega_n^\circ\|_{F_n, p} > T_{n,\alpha,p}\right) &= P_n\left( n^{1/2} \| \Omega_n^\circ\|_{F_n, p} - T_{n,\alpha,p} + T_{0,\alpha,p} > T_{0,\alpha,p}\right)\\
&\longrightarrow P\left( \|\bar{Z}_{0,h}\|_{F_0,p} > T_{0,\alpha,p}\right) \ .
\end{align*}
\end{proof}
\section*{Additional simulation results}
Here, we present the results of the first simulation study using parametric nuisance estimators. Figure~\ref{fig:sim_size} displays the empirical type I error rate (i.e.\ the fraction of tests with $p < 0.05$) of nominal $\alpha = 0.05$ level tests using the parametric nuisance estimators across the two types of null hypotheses and three sample sizes. The tests with correctly-specified parametric outcome regression estimators of the nuisances (first and third columns from the left) had empirical error rates within Monte Carlo error of the nominal rate, or slightly below it, at all sample sizes and under both the strong and weak nulls. The fact that the tests with both nuisance estimators correctly specified yield correct size empirically validates the large-sample theoretical guarantee of Theorem~\ref{thm:test_size}. That the test achieved valid size when the propensity score estimator was mis-specified was not guaranteed by our theory, and we would not expect this to always be the case. The tests with $\mu_n$ based on an incorrectly specified parametric model and $g_n$ based on a correctly specified parametric model (second column of Figure~\ref{fig:sim_size}) had empirical type I error rates below the nominal rate. The tests with both $\mu_n$ and $g_n$ based on incorrectly specified parametric models (fourth column of Figure~\ref{fig:sim_size}) had empirical type I error rates far above the nominal rate and converging to 1. These results align with our expectation that, in general, no guarantees with regard to type I error can be made when the nuisance estimators are inconsistent.
Figure~\ref{fig:sim_power} displays the empirical power (i.e.\ the fraction of tests with $p < 0.05$) of nominal $\alpha = 0.05$ level tests using the parametric nuisance estimators across the four types of alternative hypotheses and three sample sizes. Power increased with sample size in all cases. Given Theorem~\ref{thm:dr_cons_test}, this was expected for the first three columns, but not necessarily for the last column, in which both nuisance estimators were inconsistent. Under other data-generating mechanisms, the power under inconsistent estimation of both nuisance parameters need not increase to one as the sample size increases. The power of the test was generally higher further from the null hypothesis, except when $\mu_n$ was based on a correctly specified parametric model and $g_n$ was based on an incorrectly specified parametric model (third column).
\begin{figure}
\caption{Empirical type I error rate (i.e.\ the fraction of tests with $p < 0.05$) of nominal $\alpha = 0.05$ level tests using the parametric nuisance estimators across the two types of null hypotheses and seven sample sizes. Panels in the top row were generated under the weak null where $\mu_0$ depends on $a$, but $\theta_0$ does not. Panels in the bottom row were generated under the strong null where neither $\mu_0$ nor $\theta_0$ depends on $a$. Columns indicate the type of nuisance estimators used. Horizontal wide-dash lines indicate the nominal 0.05 test size, and horizontal dotted lines indicate sampling error bounds were the true size 0.05. In the third and fourth columns from the left, the empirical sizes are off the scale of the figure, and in particular are larger than 0.5 in all cases.}
\label{fig:sim_size}
\end{figure}
\begin{figure}
\caption{Empirical power (i.e.\ the fraction of tests with $p < 0.05$) of nominal $\alpha = 0.05$ level tests using the parametric nuisance estimators across the four types of alternative hypotheses and three sample sizes. Columns indicate the type of nuisance estimators used.}
\label{fig:sim_power}
\end{figure}
Figure~\ref{fig:sim_power_disc} displays the empirical power of our tests in the second numerical study. We note that the number $k$ of levels of the exposure had little impact on the power of the tests for any sample size or under any alternative hypothesis. In each case, the power increased with the sample size $n$.
\begin{figure}
\caption{Empirical power of nominal $\alpha = 0.05$ level tests in the second numerical study using our method with cross-fitted, correctly-specified parametric nuisance estimators. The $x$-axis is the ratio of $k$, the number of levels of the exposure, to $n$, the sample size. Columns indicate the alternative hypothesis used to generate the data.}
\label{fig:sim_power_disc}
\end{figure}
\section*{Additional results from analysis of the effect of BMI on immune response}\label{sec:bmi}
Figure~\ref{fig:bmi_omega} shows the estimated primitive functions $\Omega_n$ and 95\% uniform confidence bands as a function of BMI for the analysis of CD4+ responses (left) and CD8+ responses (right).
\begin{figure}
\caption{Estimated functions $\Omega_n$ (solid line) and 95\% uniform confidence band (dashed lines) for the analysis of CD4+ responses (left) and CD8+ responses (right) presented in the article.}
\label{fig:bmi_omega}
\end{figure}
\end{document}
\begin{document}
\title[Quasi-minimal Lorentz Surfaces with Pointwise 1-type Gauss Map]
{Quasi-minimal Lorentz Surfaces with Pointwise 1-type Gauss Map in Pseudo-Euclidean 4-Space}
\author{Velichka Milousheva, Nurettin Cenk Turgay}
\address{Institute of Mathematics and Informatics, Bulgarian Academy of Sciences,
Acad. G. Bonchev Str. bl. 8, 1113, Sofia, Bulgaria; ``L. Karavelov'' Civil Engineering Higher School, 175 Suhodolska Str., 1373 Sofia, Bulgaria}
\email{[email protected]}
\address{Istanbul Technical University, Faculty of Science and Letters, Department of Mathematics,
34469 Maslak, Istanbul, Turkey}
\email{[email protected]}
\subjclass[2010]{Primary 53B30, Secondary 53A35, 53B25}
\keywords{Pseudo-Euclidean space, Lorentz surface, quasi-minimal surface, finite type Gauss map, parallel mean curvature vector field}
\begin{abstract}
A Lorentz surface in the four-dimensional pseudo-Euclidean space with neutral metric is called quasi-minimal if
its mean curvature vector is lightlike at each point. In the present paper we obtain the complete classification of quasi-minimal Lorentz surfaces with pointwise 1-type Gauss map.
\end{abstract}
\maketitle
\section{Introduction}
In the present paper we study Lorentz surfaces in pseudo-Euclidean space $\mathbb E^4_2$.
A surface is called \textit{minimal} if its mean curvature vector vanishes identically.
Minimal surfaces are important in differential geometry as well as in physics.
Minimal Lorentz surfaces in $\mathbb{C}^2_1$ have been classified recently by B.-Y. Chen \cite{Chen-TaJM}.
Several classification results for minimal Lorentz surfaces in indefinite space forms are obtained in \cite{Chen3}. In particular, a complete classification of all minimal Lorentz surfaces in a pseudo-Euclidean space $\mathbb E^m_s$ with arbitrary dimension $m$ and arbitrary index $s$ is given.
A natural extension of minimal surfaces are quasi-minimal surfaces. A surface in a pseudo-Riemannian manifold is called \textit{quasi-minimal} (also pseudo-minimal or marginally trapped) if its mean curvature vector is lightlike at each point of the surface \cite{Rosca}.
Quasi-minimal surfaces in pseudo-Euclidean space have been very actively studied in the last few years.
In \cite{Chen-JMAA} B.-Y. Chen classified
quasi-minimal Lorentz flat surfaces in $\mathbb E^4_2$ and gave a complete classification of biharmonic Lorentz surfaces
in $\mathbb E^4_2$ with lightlike mean curvature vector. Several other
families of quasi-minimal surfaces have also been classified. For
example, quasi-minimal surfaces with constant Gauss curvature in
$\mathbb E^4_2$ were classified in \cite{Chen-HMJ, Chen-Yang}. Quasi-minimal
Lagrangian surfaces and quasi-minimal slant surfaces in complex
space forms were classified, respectively, in \cite{Chen-Dillen} and
\cite{Chen-Mihai}. The classification of quasi-minimal surfaces with parallel mean
curvature vector in $\mathbb E^4_2$ is obtained
in \cite{Chen-Garay}.
In \cite{GM5} the classification of quasi-minimal rotational
surfaces of elliptic, hyperbolic or parabolic type is given.
For an up-to-date survey on quasi-minimal
surfaces, see also \cite{Chen-TJM}.
Another basic class of surfaces in Riemannian and pseudo-Riemannian geometry are the surfaces with parallel mean curvature vector field, since they are critical points of some natural functionals and play important role in differential geometry, the theory of harmonic maps, as well as in physics.
Surfaces with parallel mean curvature vector field in Riemannian space forms were classified in the early 1970s by Chen \cite{Chen1} and Yau \cite{Yau}. Recently, spacelike surfaces with parallel mean
curvature vector field in arbitrary indefinite space forms were classified in \cite{Chen1-2} and \cite{Chen1-3}.
A complete classification of Lorentz surfaces with parallel mean curvature vector field in arbitrary pseudo-Euclidean space $\mathbb E^m_s$ is given in \cite{Chen-KJM,Fu-Hou,HouYang2010}. A survey on classical and recent results concerning
submanifolds with parallel mean curvature vector in Riemannian manifolds
as well as in pseudo-Riemannian manifolds is presented in \cite{Chen-survey}.
The study of submanifolds of Euclidean or pseudo-Euclidean
space via the notion of finite type immersions began in the late
1970s with the papers \cite{Ch1,Ch2} of B.-Y. Chen. An isometric immersion $x:M \rightarrow \mathbb E^{m}$ of a submanifold $M$ in Euclidean
$m$-space $\mathbb E^{m}$ (or pseudo-Euclidean space $\mathbb E^m_s$) is said
to be of \emph{finite type} \cite{Ch1}, if $x$ identified with the
position vector field of $M$ in $\mathbb E^{m}$ (or $\mathbb E^m_s$) can be
expressed as a finite sum of eigenvectors of the Laplacian $\Delta
$ of $M$, i.e.
\begin{equation*}
x=x_{0}+\sum_{i=1}^{k}x_{i},
\end{equation*}
where $x_{0}$ is a constant map, $x_{1},x_{2},...,x_{k}$ are
non-constant maps such that $\Delta x_i=\lambda _{i}x_{i},$
$\lambda _{i}\in \mathbb{R}$, $1\leq i\leq k.$ More precisely, if $\lambda
_{1},\lambda _{2},...,\lambda _{k}$ are different, then $M$ is
said to be of \emph{$k$-type}. Many results on finite type
immersions have been collected in the survey paper \cite{Ch3}. The newest results on
submanifolds of finite type are collected in \cite{Chen-book}.
The notion of finite type immersion is naturally extended to the
Gauss map $G$ on $M$ by B.-Y. Chen and P. Piccinni in \cite{CP}, where they introduced
the problem ``\textit{To what extent does the type of the Gauss map of a submanifold of $\mathbb E^m$ determine the submanifold?''}. A submanifold $M$ of an Euclidean (or pseudo-Euclidean)
space is said to have \emph{1-type Gauss map} $G$, if $G$
satisfies $\Delta G=a (G+C)$ for some $a \in \mathbb{R}$ and some
constant vector $C$.
A submanifold $M$ is
said to have \emph{pointwise 1-type Gauss map} if its Gauss map
$G$ satisfies
\begin{equation} \label{PW1typeEq1}
\Delta G=\phi (G+C)
\end{equation}
for some non-zero smooth function $\phi $ on $M$ and some
constant vector $C$ \cite{CCK}. A pointwise 1-type Gauss map is called
\emph{proper} if the function $\phi $ is non-constant. A
submanifold with pointwise 1-type Gauss map is said to be of \emph{first kind} if the vector $C$ is zero. Otherwise, it is said
to be of \emph{second kind}.
Classification results on surfaces with pointwise 1-type Gauss map
in Minkowski space have been obtained in the last few years. For
example, in \cite{KY3} Y. Kim and D. Yoon studied ruled surfaces with
1-type Gauss map in Minkowski space $\mathbb E^m_1$ and gave a complete
classification of null scrolls with 1-type Gauss map. The
classification of ruled surfaces with pointwise 1-type Gauss map
of first kind in Minkowski space $\mathbb E^3_1$ is given in
\cite{KY2}. Ruled surfaces with pointwise 1-type Gauss map of
second kind in Minkowski 3-space were classified in \cite{CKY}.
The complete classification of flat rotation surfaces with pointwise
1-type Gauss map in the 4-dimensional pseudo-Euclidean space
$\mathbb E^4_2$ is given in \cite{KY2-a}. A classification of
flat Moore type rotational surfaces in terms of the type of their Gauss map is obtained in \cite{Aksoyak15}.
Recently, Arslan and the first author have obtained
a classification of meridian surfaces with pointwise 1-type Gauss map \cite{Arslanetall2014}.
The classification of marginally trapped surfaces with pointwise 1-type Gauss
map in Minkowski 4-space is given in \cite{Mil} and \cite{Turg}.
In the present paper we study quasi-minimal Lorentz surfaces in $\mathbb E^4_2$ with pointwise 1-type Gauss
map. First we describe the quasi-minimal surfaces with harmonic Gauss map proving that each such surface is a flat surface with parallel mean curvature vector field.
Next we give explicitly all flat quasi-minimal surfaces with pointwise 1-type Gauss map (Theorem \ref{PropFlatQuasi}).
Further, we obtain that a non-flat quasi-minimal surface with flat normal connection has
pointwise 1-type Gauss map if and only if it has parallel mean
curvature vector field (Theorem \ref{THMNonFlat}).
We give necessary and sufficient conditions for a quasi-minimal surface with non-flat normal connection to have pointwise 1-type Gauss map.
In Theorem \ref{THMNONFLATNORMALMainTheo} we present the complete classification of quasi-minimal surfaces with non-flat normal connection and pointwise 1-type Gauss map.
At the end of the paper we give an explicit example of a quasi-minimal surface with non-flat normal connection and pointwise 1-type Gauss map. This is also an example of a quasi-minimal surface with non-parallel mean curvature vector field.
\section{Preliminaries}
Let $\mathbb E^m_s$ be the pseudo-Euclidean $m$-space endowed with the
canonical pseudo-Euclidean metric of index $s$ given by
$$g_0 = \sum_{i=1}^{m-s} dx_i^2 - \sum_{j=m-s+1}^{m} dx_j^2,$$
where $x_1, x_2, \hdots, x_m$ are rectangular coordinates of the
points of $\mathbb E^m_s$. As usual, we denote by $\langle \, , \rangle$ the
indefinite scalar product with respect to $g_0$.
A non-zero vector $v$ is said to be \emph{spacelike} (respectively, \emph{timelike}) if $\langle v, v \rangle > 0$ (respectively, $\langle v, v \rangle < 0$).
A vector $v$ is called \emph{lightlike} if it is nonzero and satisfies $\langle v, v \rangle = 0$.
We use the following denotations:
$$\mathbb{S}^{m-1}_s(1) = \{ v \in \mathbb E^m_s: \langle v,v \rangle = 1\},$$
$$\mathbb{H}^{m-1}_{s-1}(-1) = \{ v \in \mathbb E^m_s: \langle v,v \rangle = - 1\}.$$
$\mathbb{S}^{m-1}_s(1)$ and $\mathbb{H}^{m-1}_{s-1}(-1)$ ($m\geq 3$) are complete pseudo-Riemannian manifolds with constant sectional curvatures
$1$ and $-1$, respectively.
The pseudo-Euclidean space $\mathbb E^m_1$ is known as the \emph{Minkowski $m$-space},
the space $\mathbb{S}^{m-1}_1(1)$ is known as the \emph{de Sitter space}, and the space $\mathbb{H}^{m-1}_1(-1)$ is the \emph{hyperbolic space}
(or the \emph{anti-de Sitter space}) \cite{O'N}.
\vskip 1mm The Gauss map $G$ of a submanifold $M^n$ of
$\mathbb E^m_s$ is defined as follows. Let $G(n,m)$ be the Grassmannian
manifold consisting of all oriented $n$-planes through the origin
of $\mathbb{E}^m_s$ and $\wedge ^{n}\mathbb{E}^m_s$ be the vector
space obtained by the exterior product of $n$ vectors in
$\mathbb{E}^{m}_s$.
Let $e_{i_1} \wedge \dots \wedge e_{i_n}$ and $f_{j_1} \wedge \dots \wedge f_{j_n}$ be two vectors of $\wedge^{n}\mathbb{E}^{m}_s$.
The indefinite
inner product on the Grassmannian manifold is defined by
\begin{equation}\notag
\langle e_{i_1} \wedge \dots \wedge e_{i_n}, f_{j_1} \wedge \dots \wedge f_{j_n} \rangle =
\det \left( \langle e_{i_k}, f_{j_l} \rangle \right).
\end{equation}
Thus, in a natural way, we can identify $\wedge
^{n}\mathbb{E}^{m}_s$ with some pseudo-Euclidean space $\mathbb{E}^{N}_k$,
where $N=\binom{m}{n}$, and $k$ is a positive integer.
Let $\left\{ e_{1},...,e_{n},e_{n+1},\dots,e_{m}\right\} $ be a
local orthonormal frame field in $\mathbb{E}^{m}_s$ such that $e_{1},e_{2},\dots,$ $e_{n}$ are tangent to $M^n$ and
$e_{n+1},e_{n+2},\dots,e_{m}$ are
normal to $M^n$.
The map $G:M^n \rightarrow G(n,m)$ defined by $
G(p)=(e_{1}\wedge e_{2}\wedge \dots \wedge $ $e_{n})(p)$ is called the \emph{Gauss
map} of $M^n$. It is a smooth map which carries a point $p$ of $M^n$ into the
oriented $n$-plane in $\mathbb{E}^{m}_s$ obtained by the parallel translation
of the tangent space of $M^n$ at $p$ in $\mathbb{E}^{m}_s$ \cite{KY2-a}.
See also \cite{Turg} for detailed information about definition and geometrical interpretation of the Gauss map of submanifolds.
For any real valued function $\varphi$ on $M^n$ the Laplacian $\Delta \varphi$ of $\varphi$ is given by the formula
\begin{equation}\notag
\Delta \varphi =-\sum_{i} \varepsilon_i (\widetilde\nabla_{e_{i}}\widetilde\nabla_{e_{i}}\varphi -\widetilde\nabla_{\nabla_{e_{i}}e_{i}}\varphi ),
\end{equation}
where $\varepsilon_i = \langle e_i , e_i \rangle = \pm 1$, $\widetilde\nabla$ is the Levi-Civita connection of $\mathbb E^m_s$ and $\nabla$ is the induced connection on $M^n$.
\vskip 2mm
In the present paper we consider the pseudo-Euclidean 4-dimensional space $\mathbb E^4_2$ with the canonical pseudo-Euclidean metric of index 2. In this case, the metric $g_0$ becomes $g_0 = dx_1^2 + dx_2^2 - dx_3^2 - dx_4^2$.
A surface $M^2_1$ in $\mathbb E^4_2$ is called \emph{Lorentz} if the
induced metric $g$ on $M^2_1$ is Lorentzian. So, at each point $p\in M^2_1$ we have the following decomposition
$$\mathbb E^4_2 = T_pM^2_1 \oplus N_pM^2_1$$
with the property that the restriction of the metric
onto the tangent space $T_pM^2_1$ is of
signature $(1,1)$, and the restriction of the metric onto the normal space $N_pM^2_1$ is of signature $(1,1)$.
We denote by $\nabla$ and $\widetilde\nabla$ the Levi-Civita connections of $M^2_1$ and $\mathbb E^4_2$, respectively.
For vector fields $X$, $Y$ tangent to $M^2_1$ and a vector field $\xi$ normal to $M^2_1$, the formulas of Gauss and Weingarten,
giving a decomposition of the vector fields $\widetilde\nabla_XY$ and
$\widetilde\nabla_X \xi$ into tangent and normal components, are given respectively by \cite{Chen1}:
$$\begin{array}{l}
\widetilde\nabla_XY = \nabla_XY + h(X,Y);\\
\widetilde\nabla_X \xi = - A_{\xi} X + D_X \xi.
\end{array}$$
These formulas define the second fundamental form $h$, the normal
connection $D$, and the shape operator $A_{\xi}$ with respect to
$\xi$. For each normal vector field $\xi$, the shape operator $A_{\xi}$ is a symmetric endomorphism of the tangent space $T_pM^2_1$ at $p \in M^2_1$. In general, $A_{\xi}$ is not diagonalizable.
It is well known that the shape operator and the second fundamental form are related by the formula
$$\langle h(X,Y), \xi \rangle = \langle A_{\xi} X, Y \rangle$$
for $X$, $Y$ tangent to $M^2_1$ and $\xi$ normal to $M^2_1$.
The mean curvature vector field $H$ of $M^2_1$ in $\mathbb E^4_2$
is defined as $H = \displaystyle{\frac{1}{2}\, \mathrm{tr}\, h}$.
The surface $M^2_1$ is called \emph{minimal} if its mean curvature vector vanishes identically, i.e. $H =0$.
The surface $M^2_1$ is called \emph{quasi-minimal} if its
mean curvature vector is lightlike at each point, i.e. $H \neq 0$ and $\langle H, H \rangle =0$.
Obviously, quasi-minimal surfaces are always non-minimal.
A normal vector field $\xi$ on $M^2_1$ is called \emph{parallel in the normal bundle} (or simply \emph{parallel}) if $D{\xi}=0$ holds identically \cite{Chen2}.
The surface $M^2_1$ is said to have \emph{parallel mean curvature vector field} if its mean curvature vector $H$
satisfies $D H =0$ identically.
\section{Classification of quasi-minimal surfaces with pointwise 1-type Gauss map} \label{S:Classification}
In this section we study quasi-minimal surfaces in pseudo-Euclidean space $\mathbb E^4_2$. We obtain complete classification of quasi-minimal surfaces with pointwise 1-type Gauss map.
\subsection{Moving frame on a quasi-minimal surface} \label{SubSect:ConForm}
Let $M^2_1$ be a Lorentz surface in $\mathbb E^4_2$. Then, locally there exists a coordinate system $(u,v)$ on $M^2_1$ such that the metric tensor is given by
$$g=-f^2(u,v)(du \otimes dv +dv \otimes du)$$ for some positive function $f(u,v)$ \cite{Chen2}.
Thus, putting $x=f^{-1}\frac{\partial}{\partial u}$ and $y=f^{-1}\frac{\partial}{\partial v}$, we obtain a pseudo-orthonormal frame field $\{x, y\}$ of the tangent bundle of $M^2_1$ such that $\langle x, x\rangle = 0$, $\langle y, y\rangle = 0$, $\langle x, y\rangle = -1$.
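Indeed, a direct computation with the above form of the metric confirms these relations; for example,
\[ \langle x, y \rangle = f^{-2}\, \langle \tfrac{\partial}{\partial u}, \tfrac{\partial}{\partial v} \rangle = f^{-2}\left( -f^2 \right) = -1, \qquad \langle x, x \rangle = f^{-2}\, \langle \tfrac{\partial}{\partial u}, \tfrac{\partial}{\partial u} \rangle = 0 \ , \]
and similarly $\langle y, y \rangle = 0$.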
Then the mean curvature vector field $H$ is given by
$$H = - h(x,y).$$
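This expression follows directly from the definition $H = \frac{1}{2}\, \mathrm{tr}\, h$: with respect to the frame $\{x, y\}$ the induced metric has components $g(x,x) = g(y,y) = 0$, $g(x,y) = -1$, so the inverse metric satisfies $g^{xx} = g^{yy} = 0$, $g^{xy} = g^{yx} = -1$, and hence
\[ H = \frac{1}{2}\left( g^{xy}\, h(x,y) + g^{yx}\, h(y,x) \right) = - h(x,y) \ . \]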
Now, let $M^2_1$ be quasi-minimal, i.e. its mean curvature vector is lightlike at each point. Then there exists a pseudo-orthonormal frame field $\{n_1,n_2\}$ of the normal bundle such that $n_1 = -H$, $\langle n_1, n_1 \rangle = 0$, $\langle n_2, n_2 \rangle = 0$, $\langle n_1, n_2 \rangle = -1$.
By a direct computation we obtain the following derivative formulas:
\begin{subequations}\label{LeviCivitaConnection1ALL}
\begin{eqnarray}
\label{LeviCivitaConnection1a} \widetilde\nabla_xx=\gamma_1x+an_1+bn_2, & \qquad &\widetilde\nabla_yx=-\gamma_2x+n_1,\\
\label{LeviCivitaConnection1b} \widetilde\nabla_xy=-\gamma_1y+n_1, &\qquad &\widetilde\nabla_yy=\gamma_2y+cn_1+dn_2,\\
\label{LeviCivitaConnection1c} \widetilde\nabla_xn_1=-by+\beta_1n_1, &\qquad &\widetilde\nabla_yn_1=-dx+\beta_2n_1,\\
\label{LeviCivitaConnection1d} \widetilde\nabla_xn_2=-x-ay-\beta_1n_2, &\qquad &\widetilde\nabla_yn_2=-cx-y-\beta_2n_2
\end{eqnarray}
\end{subequations}
for some smooth functions $a,b,c,d$, $\beta_1$, and $\beta_2$, where $\gamma_1=\frac{f_u}{f^2}$ and $\gamma_2=\frac{f_v}{f^2}.$
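For instance, the expression for $\gamma_1$ can be checked directly: in the coordinates $(u,v)$ the only non-vanishing Christoffel symbols of the metric $g$ are $\Gamma^u_{uu} = 2f_u/f$ and $\Gamma^v_{vv} = 2f_v/f$, so
\[ \nabla_x x = f^{-1}\nabla_{\frac{\partial}{\partial u}}\!\left( f^{-1}\tfrac{\partial}{\partial u} \right) = f^{-1}\left( -\frac{f_u}{f^2}\,\tfrac{\partial}{\partial u} + \frac{2f_u}{f^2}\,\tfrac{\partial}{\partial u} \right) = \frac{f_u}{f^2}\, x \ , \]
which is the tangential part of $\widetilde\nabla_x x$ in \eqref{LeviCivitaConnection1a}.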
Thus, the Gaussian curvature $K$ and the normal curvature $\varkappa$ of $M^2_1$ are
\begin{eqnarray}
\label{GaussianEQ} K = -R(x,y,y,x)= x(\gamma_2) + y(\gamma_1) +2\gamma_1 \gamma_2, \notag\\
\label{kappaeq} \varkappa = -R^D(x,y,n_1,n_2)= x(\beta_2) - y(\beta_1) + \gamma_1 \beta_2 - \gamma_2 \beta_1,
\end{eqnarray}
where $R$ and $R^D$ are the curvature tensors associated with the connections $\nabla$ and $D$, respectively.
\begin{lem}\label{Lemma-parallel H}
If $M^2_1$ is a quasi-minimal surface with parallel mean curvature vector field, then $M^2_1$ has flat normal connection.
\end{lem}
\begin{proof}
It follows from \eqref{LeviCivitaConnection1c} that $D_xH=\beta_1H$, $D_yH=\beta_2H$. Hence, the mean curvature vector field $H$ is parallel if and only if $\beta_1 = \beta_2 = 0$. Now, under the assumption $\beta_1 = \beta_2 = 0$, equality \eqref{kappaeq} implies $\varkappa = 0$. Therefore, if $H$ is parallel then the surface has flat normal connection.
\end{proof}
Using the equations of Gauss and Codazzi, from formulas \eqref{LeviCivitaConnection1ALL}
we obtain the following integrability conditions:
\begin{subequations}\label{IntEqAll}
\begin{eqnarray}
\label{IntEq4} x(c)=-c\beta_1-2c\gamma_1+\beta_2,\\
\label{IntEq5} x(d)=d\beta_1-2d\gamma_1,\\
\label{IntEq2} y(a)=-a\beta_2-2a\gamma_2+\beta_1,\\
\label{IntEq3} y(b)=b\beta_2-2b\gamma_2,\\
\label{IntEq1} x(\gamma_2) + y(\gamma_1) + 2\gamma_1\gamma_2 = ad +bc,\\
\label{IntEq6} x(\beta_2) - y(\beta_1) - \beta_1 \gamma_2 + \beta_2 \gamma_1 = ad - bc.
\end{eqnarray}
\end{subequations}
Equalities \eqref{IntEq1} and \eqref{IntEq6} imply that the Gauss curvature $K$ and the normal curvature $\varkappa$ are expressed as follows:
\begin{eqnarray}
\label{GaussianEQ-a} K = ad + bc,\\
\label{kappaeq-a} \varkappa = ad - bc.
\end{eqnarray}
\begin{rem}\label{Casebd0}
If $b=d=0$, then \eqref{GaussianEQ-a} implies $K=0$, i.e. $M^2_1$ is flat. In \cite[Theorem 4.1]{Chen-JMAA}, B.-Y. Chen obtained a complete classification of flat quasi-minimal surfaces in $\mathbb E^4_2$. From the proof of this theorem, one can see that a quasi-minimal surface in $\mathbb E^4_2$ satisfying $b=d=0$ is congruent to a surface parametrized by
\begin{equation}\label{PropSpecSurfPosVec}
z(u,v)=\left(\theta(u,v),\frac{u-v}{\sqrt2},\frac{u+v}{\sqrt2},\theta(u,v)\right)
\end{equation}
for a smooth function $\theta$.
\end{rem}
\begin{lem}\label{LemmaConstGauss}
Let $M^2_1$ be a quasi-minimal surface in $\mathbb E^4_2$ with parallel mean curvature vector field. If $M^2_1$ has constant Gauss curvature, then $M^2_1$ is flat.
\end{lem}
\begin{proof}
Let $M^2_1$ be a quasi-minimal surface with parallel mean curvature vector field and constant Gaussian curvature $K_0$.
Assume that $K_0 \neq 0$. Since $M^2_1$ has parallel mean curvature vector, according to Lemma
\ref{Lemma-parallel H} it has flat normal connection. Therefore, because of \eqref{kappaeq-a}, we have
$ad = bc$ and hence the Gauss curvature is $K_0 = 2 ad$. Since $K_0 \neq 0$ we have that
$a,b,c,d$ do not vanish and satisfy
\begin{equation}\label{LemmaConstGaussEq2}
a=\alpha c, \quad b=\alpha d
\end{equation}
for a smooth function $\alpha(u,v)$. Note that $\alpha$ does not vanish and
$\alpha cd=\mbox{const}$.
In addition, since $M^2_1$ has parallel mean curvature vector field, we have $\beta_1=\beta_2=0$. Therefore, equalities \eqref{IntEq2}, \eqref{IntEq4}, \eqref{IntEq5} take the form
\begin{subequations}\label{IntEqaaaAll}
\begin{eqnarray}
\label{IntEq4a} x(c)=-2c\gamma_1,\\
\label{IntEq5a} x(d)=-2d\gamma_1,\\
\label{IntEq2a} y(a)=-2a\gamma_2.
\end{eqnarray}
\end{subequations}
Equalities \eqref{IntEq4a}, \eqref{IntEq5a} together with \eqref{LemmaConstGaussEq2} imply that
\begin{equation}\label{LemmaConstGaussEq3}
x(\alpha)=4\alpha\gamma_1.
\end{equation}
Applying $x$ to the first equality of \eqref{LemmaConstGaussEq2} and using \eqref{IntEq4a} and \eqref{LemmaConstGaussEq3}
we obtain $x(a)=2 a \gamma_1$. Having in mind that $\gamma_1 = \frac{f_u}{f^2}$, we get
\begin{equation*}\label{LemmaConstGaussEq4}
\frac{a_u}{a}=2\frac{f_u}{f},
\end{equation*}
which implies $a(u,v)= \varphi(v) f^2(u,v)$ for a smooth function $\varphi(v)$. Now using \eqref{IntEq2a} we get
$\displaystyle \frac{\varphi'(v)}{4\varphi(v)}= -\frac{f_v}{f}$. Since $\varphi$ is a function of $v$ only, we obtain $\displaystyle \frac{\partial}{\partial u}\left(\frac{f_v}{f}\right)=0$. Solving this equation, we get $f(u,v)=f_1(u) f_2(v)$ for some smooth functions $f_1(u)$, $f_2(v)$. On the other hand, the Gauss curvature is expressed in terms of the function $f(u,v)$ by the formula
$K =\frac{2f f_{uv} - 2 f_u f_v}{f^4}$, and for $f=f_1 f_2$ we have $f f_{uv}= f_u f_v$. Hence $K_0=0$, which contradicts the assumption $K_0 \neq 0$.
\end{proof}
\subsection{Gauss map of quasi-minimal surfaces} \label{SubSect:MainPARTY}
Let $M^2_1$ be a quasi-minimal surface in $\mathbb E^4_2$. The Gauss map of $M^2_1$ is defined by
\begin{equation*}\label{MinkGaussTasvTanim}
\begin{array}{rcl}
G: M &\rightarrow & \mathbb H^{5}_3(-1)\subset \mathbb E^6_4\\
p & \mapsto & G(p)=(x\wedge y)(p).
\end{array}
\end{equation*}
We shall use the frame field $\{x,y,n_1,n_2\}$ defined in the previous subsection. This frame field generates the following frame of the Grassmannian manifold:
$$\{x \wedge y, x \wedge n_1, x \wedge n_2,
y \wedge n_1, y \wedge n_2, n_1 \wedge n_2\},$$
for which we have
\begin{equation} \notag
\begin{array}{lll}
\langle x \wedge y, x \wedge y \rangle = -1, & \qquad \langle x \wedge n_1, x \wedge n_1 \rangle = 0, & \qquad \langle x \wedge n_2, x \wedge n_2 \rangle = 0,\\
\langle y \wedge n_1, y \wedge n_1 \rangle = 0, & \qquad \langle y \wedge n_2, y \wedge n_2 \rangle = 0, & \qquad \langle n_1 \wedge n_2, n_1 \wedge n_2 \rangle = - 1,\\
\langle x \wedge n_1, y\wedge n_2 \rangle = 1, & \qquad \langle x \wedge n_2, y \wedge n_1 \rangle = 1, &
\end{array}
\end{equation}
and all other scalar products are equal to zero.
Since $\langle x, x\rangle = \langle y, y\rangle = 0, \langle x, y\rangle = -1$,
the Laplace operator $\Delta:C^\infty(M^2_1)\rightarrow C^\infty(M^2_1)$ of $M^2_1$ takes the form
$$\Delta \varphi = \widetilde\nabla_x \widetilde\nabla_y \varphi + \widetilde\nabla_y \widetilde\nabla_x \varphi - \widetilde\nabla_{\nabla_x y} \varphi - \widetilde\nabla_{\nabla_y x} \varphi$$
for any real valued function $\varphi$.
Hence, the Laplacian of the Gauss map is given by the
formula
\begin{equation*}\label{Eq-7}
\Delta G = \widetilde\nabla_x \widetilde\nabla_y G + \widetilde\nabla_y \widetilde\nabla_x G - \widetilde\nabla_{\nabla_x y} G - \widetilde\nabla_{\nabla_y x} G.
\end{equation*}
By a direct computation, using \eqref{LeviCivitaConnection1ALL}, \eqref{IntEqAll}, \eqref{GaussianEQ-a}, and \eqref{kappaeq-a}, we obtain
\begin{equation}\label{E42MargTrapGMLap}
\Delta G = -2 K x\wedge y + 2\varkappa \,n_1\wedge n_2 + 2\beta_2 x\wedge n_1 - 2\beta_1 y\wedge n_1.
\end{equation}
The next proposition follows directly from formula \eqref{E42MargTrapGMLap}.
\begin{prop}\label{THMGMHarmonic}
Let $M^2_1$ be a quasi-minimal surface in the pseudo-Euclidean space $\mathbb E^4_2$. Then, $M^2_1$ has harmonic Gauss map if and only if $M^2_1$ is a flat surface with parallel mean curvature vector field.
\end{prop}
\begin{rem}
See \cite{HouYang2010} for the classification of Lorentzian surfaces with parallel mean curvature vector in $\mathbb E^4_2$.
\end{rem}
Further we shall study quasi-minimal surfaces with pointwise 1-type Gauss map, i.e. the Laplacian of $G$ satisfies \eqref{PW1typeEq1} for a smooth non-vanishing function $\phi$ and a constant vector $C$.
Having in mind Lemma \ref{Lemma-parallel H} and Lemma \ref{LemmaConstGauss}, from \eqref{E42MargTrapGMLap} we have the following proposition.
\begin{prop}\label{THMNonFlat1type}
Let $M^2_1$ be a quasi-minimal surface in the pseudo-Euclidean space $\mathbb E^4_2$. If $M^2_1$ is a non-flat surface with parallel mean curvature vector field, then it has proper pointwise 1-type Gauss map of first kind. In this case, \eqref{PW1typeEq1} is satisfied for the smooth function $\phi=-2K$.
\end{prop}
Now, we shall give the complete classification of quasi-minimal surfaces with pointwise 1-type Gauss map.
Assume that $M^2_1$ has pointwise 1-type Gauss map. Then from \eqref{PW1typeEq1} and \eqref{E42MargTrapGMLap} we get the equality
$$-2Kx\wedge y+2\varkappa n_1\wedge n_2+2\beta_2 x\wedge n_1-2\beta_1 y\wedge n_1=\phi(G+C), \quad \phi \neq 0,$$
which implies
\begin{subequations}\label{PW1TypeEq1ALL}
\begin{eqnarray}
\label{PW1TypeEq1a}\langle C, x\wedge y\rangle&=&1+\frac{2K}\phi,\\
\label{PW1TypeEq1b}\langle C, n_1\wedge n_2\rangle&=&-\frac{2\varkappa}\phi,\\
\label{PW1TypeEq1c}\langle C, x\wedge n_1\rangle&=&0,\\
\label{PW1TypeEq1d}\langle C, y\wedge n_1\rangle&=&0,\\
\label{PW1TypeEq1e}\langle C, x\wedge n_2\rangle&=&-\frac{2\beta_1}\phi,\\
\label{PW1TypeEq1f}\langle C, y\wedge n_2\rangle&=&\frac{2\beta_2}\phi.
\end{eqnarray}
\end{subequations}
\subsection{Flat quasi-minimal surfaces with pointwise 1-type Gauss map}
In this subsection we give the classification of flat quasi-minimal surfaces in $\mathbb E^4_2$ with pointwise 1-type Gauss map.
Let $M^2_1$ be a flat quasi-minimal surface with pointwise 1-type Gauss map, i.e., \eqref{PW1typeEq1} is satisfied for a non-zero function $\phi$ and a constant vector $C$. Applying $x$ and $y$ to \eqref{PW1TypeEq1a} and using $K = 0$, we obtain
$\beta_2 b = \beta_1 d = 0$.
Note that if $M^2_1$ has parallel mean curvature vector, then according to Proposition \ref{THMGMHarmonic} the Gauss map is harmonic. Since we consider the non-harmonic case, we have $\beta_1^2+\beta_2^2\neq0$. Hence, $b \,d=0$, i.e., at least one of the functions $b$ and $d$ vanishes. If we assume that $b = 0$ and $d \neq 0$, then we get $\varkappa=0$, $\beta_1 = 0$, $\beta_2 = 0$. Applying $y$ to \eqref{PW1TypeEq1e} we obtain $-1 = 0$, which is a contradiction. The case $d = 0$, $b \neq 0$ is treated similarly. Hence, the only possible case is $b = d =0$.
Thus, according to Remark \ref{Casebd0}, $M^2_1$ is congruent to the surface given by \eqref{PropSpecSurfPosVec} for some function $\theta(u,v)$. Now, using the condition that the surface has pointwise 1-type Gauss map, we will obtain conditions on $\theta(u,v)$.
We put $\eta_0=\left(1,0,0,1\right)$, $\displaystyle \eta_1=\left(0,\frac{1}{\sqrt2},\frac{1}{\sqrt2},0\right)$ and $\displaystyle \eta_2=\left(0,-\frac{1}{\sqrt2},\frac{1}{\sqrt2},0\right)$.
\begin{thm}\label{PropFlatQuasi}
Let $M^2_1$ be a flat quasi-minimal surface in the pseudo-Euclidean space $\mathbb E^4_2$. Then, $M^2_1$ has pointwise 1-type Gauss map if and only if it is congruent to the surface given by \eqref{PropSpecSurfPosVec} for a smooth function $\theta$ satisfying
\begin{equation}\label{PropSpecSurfSufNecEq}
\frac{\partial^2\theta(u,v)}{\partial u\partial v}=(F\circ\psi)(u,v), \quad\psi(u,v)=\theta(u,v)+c_1u+c_2v,
\end{equation}
where $F$ is a non-constant smooth function and $c_1$, $c_2$ are constants. In this case, \eqref{PW1typeEq1} is satisfied for the smooth function
\begin{equation}\label{PropSpecSurfFuncPHI}
\phi=(F'\circ\psi)
\end{equation}
and the non-zero constant vector
\begin{equation}\label{PropSpecSurfC}
C=c_1\eta_0\wedge\eta_2-c_2\eta_0\wedge\eta_1-\eta_1\wedge\eta_2.
\end{equation}
\end{thm}
\begin{proof}
By a direct computation from \eqref{PropSpecSurfPosVec} we get
\begin{align}\nonumber
\begin{split}
x=&z_u=\theta_u\eta_0+\eta_1,\\
y=&z_v=\theta_v\eta_0+\eta_2,
\end{split}
\end{align}
which imply
\begin{equation}\label{PropSpecSurfG}
G=\eta_0\wedge(\theta_u\eta_2-\theta_v\eta_1)+\eta_1\wedge\eta_2
\end{equation}
and
\begin{equation}\label{PropSpecSurfDG}
\Delta G=\eta_0\wedge(2\zeta_u\eta_2-2\zeta_v\eta_1),
\end{equation}
where we put $\zeta=\frac{\partial^2\theta}{\partial u\partial v}.$
From \eqref{PW1typeEq1}, \eqref{PropSpecSurfG} and \eqref{PropSpecSurfDG} we obtain
\begin{align}\nonumber
\begin{split}
\eta_0\wedge\left(\left(2\frac{\zeta_u}{\phi}-\theta_u\right)\eta_2-\left(2\frac{\zeta_v}{\phi}-\theta_v\right)\eta_1\right)=C+\eta_1\wedge\eta_2.
\end{split}
\end{align}
Since the right hand side of this equation is a constant vector, the vector field
$$\eta_0\wedge\left(\left(2\frac{\zeta_u}{\phi}-\theta_u\right)\eta_2-\left(2\frac{\zeta_v}{\phi}-\theta_v\right)\eta_1\right)$$
is also constant. So, we obtain that the system of equations
\begin{align}\nonumber
\begin{split}
2\frac{\zeta_u}{\phi}-\theta_u = & c_1,\\
2\frac{\zeta_v}{\phi}-\theta_v = & c_2
\end{split}
\end{align}
is satisfied for some constants $c_1$ and $c_2$. The last two equations imply
\begin{align*}\label{PropSpecSurfEq1}
\phi =2\frac{\zeta_u}{c_1+\theta_u}=2\frac{\zeta_v}{c_2+\theta_v}.
\end{align*}
From this equation, one can see that the function $\zeta$ remains constant along the level curves $\psi=\mathrm{const}$, where $\psi$ is the function given by the second equality in \eqref{PropSpecSurfSufNecEq}. Hence $\zeta=F\circ\psi$ for some smooth function $F$, i.e., we have proved that $\theta$ satisfies the first equality in \eqref{PropSpecSurfSufNecEq}.
Conversely, by a straightforward computation one can see that \eqref{PW1typeEq1} is satisfied for the smooth non-vanishing function $\phi$ and the constant vector $C$ given by \eqref{PropSpecSurfFuncPHI} and \eqref{PropSpecSurfC}, respectively.
\end{proof}
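As a simple illustration, choose $c_1=c_2=0$ and $F(w)=e^{w}$ in \eqref{PropSpecSurfSufNecEq}; then the condition on $\theta$ becomes the Liouville equation $\theta_{uv}=e^{\theta}$, which admits the explicit solution
\begin{equation*}
\theta(u,v)=\ln\frac{2}{(u+v)^{2}}, \qquad u+v>0.
\end{equation*}
The corresponding surface \eqref{PropSpecSurfPosVec} is a flat quasi-minimal surface with pointwise 1-type Gauss map, for which $\phi=e^{\theta}=2/(u+v)^{2}$ and $C=-\eta_1\wedge\eta_2$.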
\subsection{Quasi-minimal surfaces with flat normal connection}
In this subsection, we focus on quasi-minimal surfaces with $\varkappa=0$.
Let $M^2_1$ be a quasi-minimal surface with flat normal connection and pointwise 1-type Gauss map. Then, \eqref{PW1TypeEq1b} implies $\langle C, n_1\wedge n_2\rangle=0$. Applying $x$ and $y$ to the last equation and using \eqref{PW1TypeEq1e} and \eqref{PW1TypeEq1f} we obtain
\begin{equation*}\label{PW1TypeLastEq}
b\beta_2=d\beta_1=0.
\end{equation*}
On the other hand, from \eqref{GaussianEQ-a} and \eqref{kappaeq-a} we have $K=2bc$.
If the Gauss curvature does not vanish, then $\beta_1 = \beta_2 = 0$ and hence $M^2_1$ has parallel mean curvature vector field. Combining this with Proposition \ref{THMNonFlat1type}, we get the following result.
\begin{thm}\label{THMNonFlat}
Let $M^2_1$ be a quasi-minimal surface in the pseudo-Euclidean space $\mathbb E^4_2$ with flat normal connection and non-vanishing Gauss curvature. Then, $M^2_1$ has pointwise 1-type Gauss map if and only if it has parallel mean curvature vector field. In this case, $M^2_1$ has proper pointwise 1-type Gauss map of first kind.
\end{thm}
\begin{rem}
We would like to note that a classification of Lorentz surfaces with parallel mean curvature vector field in the pseudo-Euclidean space $\mathbb E^4_2$ is given in \cite[Theorem 3.1]{HouYang2010}. Considering this theorem and its proof, one can see that there exist two families of quasi-minimal surfaces with proper pointwise 1-type Gauss map of first kind:
\begin{enumerate}
\item[(i)] A non-flat CMC-surface lying in $\mathbb S^3_2(r^2)$ for some $r>0$ such that the
mean curvature vector $H'$ of $M$ in $\mathbb S^3_2(r^2)$ satisfies $\langle H',H'\rangle=-r^2$;
\item[(ii)] A non-flat CMC-surface lying in $\mathbb H^3_1(-r^2)$ for some $r>0$ such that the
mean curvature vector $H'$ of $M$ in $\mathbb H^3_1(-r^2)$ satisfies $\langle H',H'\rangle=r^2$.
\end{enumerate}
Conversely, any quasi-minimal surface with proper pointwise 1-type Gauss map of first kind belongs to one of the above two families.
\end{rem}
\subsection{Quasi-minimal surfaces with non-flat normal connection}
In this subsection we focus on quasi-minimal surfaces with non-flat normal connection and pointwise 1-type Gauss map. Before we proceed, we would like to note that recent results show that there are no such surfaces if the ambient space is $\mathbb E^4_1$ or $\mathbb S^4_1(1)$ (see \cite{Mil,Turg,Turg2}).
First we prove the following proposition.
\begin{prop}\label{THMNONFLATNORMAL}
Let $M^2_1$ be a quasi-minimal surface in $\mathbb E^4_2$ with non-vanishing normal curvature. Then, $M^2_1$ has pointwise 1-type Gauss map if and only if there exists a local coordinate system $(s,t)$ such that $\bar x=\partial_s$ and $\bar y=\widetilde f\partial_s+\partial_t$ form a pseudo-orthonormal frame field of the tangent bundle of $M^2_1$, $\{n_1, n_2\}$ is a pseudo-orthonormal frame field of the normal bundle, and the Levi-Civita connection satisfies
\begin{subequations}\label{LeviCivitaConnectionnonflat1ALL}
\begin{eqnarray}
\label{LeviCivitaConnectionnonflat1a} \widetilde\nabla_{\bar x}\bar x=\widetilde an_1, & \qquad &\widetilde\nabla_{\bar y}{\bar x}=-\widetilde f_s{\bar x}+n_1,\\
\label{LeviCivitaConnectionnonflat1b} \widetilde\nabla_{\bar x}{\bar y}=n_1, &\qquad &\widetilde\nabla_{\bar y}\bar y=\widetilde f_s{\bar y}+\widetilde c n_1+\widetilde dn_2,\\
\label{LeviCivitaConnectionnonflat1c} \widetilde\nabla_{\bar x}n_1=0, &\qquad &\widetilde\nabla_{\bar y}n_1=-\widetilde d{\bar x}+\widetilde\beta_2n_1,\\
\label{LeviCivitaConnectionnonflat1d} \widetilde\nabla_{\bar x}n_2=-{\bar x}-\widetilde a{\bar y}, &\qquad & \widetilde\nabla_{\bar y}n_2=-\widetilde c{\bar x}-{\bar y}-\widetilde \beta_2n_2
\end{eqnarray}
\end{subequations}
where $\widetilde a$, $\widetilde c$, $\widetilde d$, $\widetilde \beta_2$ and $\widetilde f$ are smooth functions given by
\begin{subequations}\label{LeviCivitaConnectionnonflatInvariants1ALL}
\begin{eqnarray}
\label{LeviCivitaConnectionnonflatInvariants1ad} \widetilde a&=&\lambda_3\left(-\frac 23 \lambda_1s-\frac 23 \lambda_2\right)^{-3/2}, \qquad\qquad \widetilde d=\frac{1}{\lambda_3},\\
\label{LeviCivitaConnectionnonflatInvariants1cFinal} \widetilde c&=& -\frac{9}{\lambda_1^2}\left(-\frac 23 \lambda_1s-\frac 23 \lambda_2\right)^{1/2},\\
\label{LeviCivitaConnectionnonflatInvariants1beta} \widetilde\beta_2&=&\frac{3}{\lambda_1}\left(-\frac 23 \lambda_1s-\frac 23 \lambda_2\right)^{-1/2},\\
\label{LeviCivitaConnectionnonflatInvariants1fFinal} \widetilde f&=&-\frac{9}{\lambda_1^2}\left(-\frac 23 \lambda_1s-\frac 23 \lambda_2\right)^{1/2}-\left(2\frac{\lambda_3'}{\lambda_3}-3\frac{\lambda_1'}{\lambda_1}\right)s-\frac{\lambda_2'}{\lambda_1}-2\frac {\lambda_2}{\lambda_1}\frac{\lambda_3'}{\lambda_3}+4\frac {\lambda_2}{\lambda_1}\frac{\lambda_1'}{\lambda_1}
\end{eqnarray}
\end{subequations}
for some non-vanishing smooth functions $\lambda_1,\lambda_3$ and a smooth function $\lambda_2$.
In this case, $M^2_1$ has Gauss curvature
\begin{eqnarray}
\label{NewCoordPW1Cond1Gaussian} K&=&\left(-\frac 23 \lambda_1s-\frac 23 \lambda_2\right)^{-3/2}.
\end{eqnarray}
Moreover, \eqref{PW1typeEq1} is satisfied for
\begin{equation}\label{TheoremnonflatEQUfC}
\phi=-4K \quad\mbox{ and}\quad C=-\frac12\left(\bar x \wedge \bar y +n_1\wedge n_2+\frac {\widetilde \beta_2}{K}\bar x\wedge n_1\right).
\end{equation}
\end{prop}
\begin{proof}
Assume that the normal curvature $\varkappa$ is non-vanishing and $M^2_1$ has pointwise 1-type Gauss map. Then, \eqref{PW1typeEq1} is satisfied for a constant vector $C$ and a smooth non-vanishing function $\phi$ such that formulas \eqref{PW1TypeEq1ALL} hold true.
Applying $x$ to \eqref{PW1TypeEq1c} and $y$ to \eqref{PW1TypeEq1d}, we obtain
\begin{subequations}\label{PW1TypeEq2ALL}
\begin{eqnarray}
\label{PW1TypeEq2a}b(\langle C, x\wedge y\rangle+\langle C, n_1\wedge n_2\rangle)&=&0,\\
\label{PW1TypeEq2b}d(\langle C, x\wedge y\rangle-\langle C, n_1\wedge n_2\rangle)&=&0.
\end{eqnarray}
\end{subequations}
Note that if $b^2+d^2=0$ at a point $p$, then \eqref{kappaeq-a} implies $\varkappa(p)=0$ which contradicts the assumption that $\varkappa$ is non-vanishing.
If both $b$ and $d$ are non-zero, then $\langle C, x\wedge y\rangle = \langle C, n_1\wedge n_2\rangle = 0$ and from \eqref{PW1TypeEq1b} we get $\varkappa = 0$, a contradiction. Hence, $b = 0, d \neq 0$ or $b \neq 0, d = 0$.
Therefore, replacing $x$ and $y$ if necessary, we may assume that $d\neq 0$ and $b=0$. Thus, \eqref{PW1TypeEq2b} gives $\langle C, x\wedge y\rangle=\langle C, n_1\wedge n_2\rangle$ and equations \eqref{GaussianEQ-a}, \eqref{kappaeq-a} imply $K=\varkappa = ad$. Therefore, \eqref{PW1TypeEq1a} and \eqref{PW1TypeEq1b} imply
\begin{equation}\label{TheoremnonflatEq1}
\langle C, x\wedge y\rangle=\langle C, n_1\wedge n_2\rangle=\frac 12\quad\mbox{ and }\quad \phi=-4K.
\end{equation}
On the other hand, from $y(\langle C, x\wedge y\rangle)=0$ we obtain $d \langle C, x\wedge n_2\rangle = 0$ which gives $\beta_1=0$ because of \eqref{PW1TypeEq1e}. Therefore, combining \eqref{PW1TypeEq1ALL} and \eqref{TheoremnonflatEq1}, we obtain
\begin{equation}\label{TheoremnonflatEQUfCPre}
\quad C=-\frac12\left( x \wedge y +n_1\wedge n_2+\frac {\beta_2}{K} x\wedge n_1\right).
\end{equation}
Next, we define a local coordinate system $(s,t)$ in the following way:
$$s=s(u,v)=\int_{u_0}^uf^2(\tau,v)d\tau,\quad t=v.$$
Note that we have
\begin{equation*}\label{MinimalcaseII8}
\partial_u=f^2\partial_s\quad\mbox{and}\quad \partial_v=\widetilde f\partial_s+\partial_t,
\end{equation*}
where $\widetilde f=\frac{\partial}{\partial v}\left(\int_{u_0}^u f^2(\tau,v)d\tau\right)$,
which give
$$\langle\partial_s,\partial_s\rangle=0,\quad \langle\partial_s,\partial_t\rangle=-1,\quad \langle\partial_t,\partial_t\rangle=2\widetilde f.$$
Thus, the metric tensor of $M^2_1$ with respect to the new coordinate system $(s,t)$ takes the form
\begin{equation*}\label{SurfaceQMinimalInducedM}
g=-(ds \otimes dt+dt \otimes ds) + 2 \widetilde{f} dt \otimes dt.
\end{equation*}
It is easy to calculate that
\begin{subequations}\label{QMinimalS421LeviCivita1ALL}
\begin{eqnarray}
\label{QMinimalS421LeviCivita1a} \nabla_{\partial_s}\partial_s&=&0,\\
\label{QMinimalS421LeviCivita1b} \nabla_{\partial_s}\partial_t= \nabla_{\partial_t}\partial_s&=& -\widetilde f_s\partial_s,\\
\label{QMinimalS421LeviCivita1c} \nabla_{\partial_t}\partial_t&=& \widetilde f_s\partial_t+(2\widetilde f\widetilde f_s-\widetilde f_t)\partial_s.
\end{eqnarray}
\end{subequations}
Moreover, $\bar x = \partial_s =\frac 1fx$ and $\bar y = \widetilde f\partial_s+\partial_t = fy$. Hence, $\{\bar x, \bar y \}$ form a pseudo-orthonormal frame field of the tangent bundle of $M^2_1$. Using \eqref{QMinimalS421LeviCivita1ALL} and taking into account that $b=\beta_1=0$, we get that the Levi-Civita connection satisfies \eqref{LeviCivitaConnectionnonflat1ALL}
for the functions $\widetilde a=a/f^2$, $\widetilde c=f^2c$, $\widetilde d=f^2d$ and $\widetilde \beta_2=f\beta_2$. Now, equality \eqref{TheoremnonflatEQUfCPre} gives the second equality in \eqref{TheoremnonflatEQUfC}.
With respect to the new coordinate system $(s,t)$ the integrability conditions \eqref{IntEqAll} become
\begin{subequations}\label{NewIntEqAll}
\begin{eqnarray}
\label{NewIntEq4} \widetilde c_s&=&\widetilde \beta_2,\\
\label{NewIntEq5} \widetilde d_s&=&0,\\
\label{NewIntEq2} \widetilde f \widetilde a_s+\widetilde a_t&=&-\widetilde a\widetilde \beta_2-2\widetilde a\widetilde f_s,\\
\label{NewIntEq6} (\widetilde \beta_2)_s&= & \widetilde a\widetilde d =K=\varkappa,\\
\label{NewIntEq1} \widetilde f_{ss} &=& \widetilde a\widetilde d =K=\varkappa.
\end{eqnarray}
\end{subequations}
Since $d \neq 0$, from \eqref{NewIntEq5} we get the second
equality in \eqref{LeviCivitaConnectionnonflatInvariants1ad} for a
non-vanishing smooth function $\lambda_3=\lambda_3(t)$.
Furthermore, combining \eqref{NewIntEq4}, \eqref{NewIntEq6}, and \eqref{NewIntEq1}, we get
\begin{equation}
\label{LeviCivitaConnectionnonflatInvariants1c} \widetilde c= \widetilde f+\lambda_4s+\lambda_5
\end{equation}
and
\begin{eqnarray}
\label{NewIntEqAraEq1} \widetilde{\beta}_2- \widetilde f_s=\lambda_4
\end{eqnarray}
for some smooth functions $\lambda_4=\lambda_4(t)$ and $\lambda_5=\lambda_5(t)$ .
Next, using \eqref{QMinimalS421LeviCivita1ALL} we obtain
\begin{equation*}
\begin{array}{lll}
\widetilde{\nabla}_{\bar x} C & = & -\frac{1}{2} \left(2+ \bar x\left(\frac{\widetilde\beta_2}K\right) \right) \bar x \wedge n_1,\\
\widetilde{\nabla}_{\bar y} C & = & -\frac{1}{2} \left( 2\widetilde c+\bar y\left(\frac{\widetilde\beta_2}K\right)+\frac{\widetilde\beta_2}K\left(\widetilde\beta_2- \widetilde f_s\right) \right) \bar x \wedge n_1.
\end{array}
\end{equation*}
Since $C$ is a constant vector, we have $\widetilde{\nabla}_{\bar x} C =0$ and $\widetilde{\nabla}_{\bar y} C=0$.
So, we get the following two equations:
\begin{subequations}\label{NewCoordPW1Cond1ALL}
\begin{eqnarray}
\label{NewCoordPW1Cond1a} \bar x\left(\frac{\widetilde\beta_2}K\right)&=&-2,\\
\label{NewCoordPW1Cond1b} \bar y\left(\frac{\widetilde\beta_2}K\right)+\frac{\widetilde\beta_2}K\left(\widetilde\beta_2- \widetilde f_s\right) + 2\widetilde c &=&0.
\end{eqnarray}
\end{subequations}
From \eqref{NewCoordPW1Cond1a} using \eqref{NewIntEq6} we get $\widetilde\beta_2 K_s = 3 K^2$. Differentiating the last equality with respect to $s$, we obtain the equation
$$3 K K_{ss} = 5 K_s^2,$$
whose solution is given by
$$K = \left(-\frac 23 \lambda_1(t) s-\frac 23 \lambda_2(t) \right)^{-3/2}$$
for some smooth functions $\lambda_1 = \lambda_1(t) \neq 0$ and $\lambda_2 = \lambda_2(t)$.
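Indeed, since $\widetilde\beta_2 K_s=3K^2\neq 0$, we have $K_s\neq 0$, and the equation $3KK_{ss}=5K_s^2$ can be rewritten as $\frac{K_{ss}}{K_s}=\frac 53\,\frac{K_s}{K}$; integrating twice with respect to $s$ gives first $K_s=c(t)\,K^{5/3}$ and then $K^{-2/3}=-\frac 23\big(\lambda_1(t)\,s+\lambda_2(t)\big)$, which is the expression above.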
The function $\widetilde\beta_2$ is expressed as $\widetilde\beta_2 = \frac{3}{\lambda_1(t)} K^{1/3}$, which implies
\eqref{LeviCivitaConnectionnonflatInvariants1beta}.
Combining \eqref{NewCoordPW1Cond1Gaussian} and \eqref{NewIntEq1} with the second equality in \eqref{LeviCivitaConnectionnonflatInvariants1ad},
we obtain that $\widetilde a$ is expressed as given by the first equality in \eqref{LeviCivitaConnectionnonflatInvariants1ad}.
Furthermore, from \eqref{LeviCivitaConnectionnonflatInvariants1beta} and \eqref{NewIntEqAraEq1} we get
\begin{equation}
\label{LeviCivitaConnectionnonflatInvariants1f} \widetilde f=-\frac{9}{\lambda_1^2}\left(-\frac 23 \lambda_1(t) s-\frac 23 \lambda_2(t) \right)^{1/2}-\lambda_4(t) s+\lambda_6(t)
\end{equation}
for a smooth function $\lambda_6=\lambda_6(t)$.
Next, using \eqref{NewCoordPW1Cond1b} and taking into consideration \eqref{LeviCivitaConnectionnonflatInvariants1beta}, \eqref{LeviCivitaConnectionnonflatInvariants1c}, \eqref{NewIntEqAraEq1} and \eqref{NewCoordPW1Cond1Gaussian},
we get that the function $\lambda_5(t)$ is expressed as follows:
\begin{equation}
\label{LeviCivitaConnectionnonflatInvariantsCond1c} \lambda_5=\left(\frac{\lambda_2}{\lambda_1}\right)'+\frac{\lambda_2\lambda_4}{\lambda_1}.
\end{equation}
Further, using \eqref{LeviCivitaConnectionnonflatInvariants1ad}, \eqref{LeviCivitaConnectionnonflatInvariants1beta},
\eqref{LeviCivitaConnectionnonflatInvariants1f}, and \eqref{NewIntEq2}, we obtain
$$s\left(\frac 13 \lambda_1\lambda_3\lambda_4+\lambda_1'\lambda_3-\frac 23 \lambda_1\lambda_3'\right)+\left(\lambda_1\lambda_3\lambda_6+\lambda_2'\lambda_3-\frac 23 \lambda_2\lambda_3'+\frac 43\lambda_2\lambda_3\lambda_4\right) = 0,$$
which implies the following expressions for the functions $\lambda_4(t)$ and $\lambda_6(t)$:
\begin{eqnarray}
\label{LeviCivitaConnectionnonflatInvariantsCond1b} \lambda_4&=&2\frac{\lambda_3'}{\lambda_3}-3\frac{\lambda_1'}{\lambda_1},\\
\label{LeviCivitaConnectionnonflatInvariantsCond1d} \lambda_6&=&-\frac{\lambda_2'}{\lambda_1} - \frac {2\lambda_2\lambda_3'}{\lambda_1\lambda_3}+ \frac {4\lambda_1'\lambda_2}{\lambda_1^2}.
\end{eqnarray}
Now, from \eqref{LeviCivitaConnectionnonflatInvariants1c}, \eqref{LeviCivitaConnectionnonflatInvariants1f}, \eqref{LeviCivitaConnectionnonflatInvariantsCond1c}, and \eqref{LeviCivitaConnectionnonflatInvariantsCond1b} we obtain \eqref{LeviCivitaConnectionnonflatInvariants1cFinal}.
Finally, taking into consideration \eqref{LeviCivitaConnectionnonflatInvariants1f}, \eqref{LeviCivitaConnectionnonflatInvariantsCond1b}, and \eqref{LeviCivitaConnectionnonflatInvariantsCond1d} we obtain that the function $\widetilde f$ is expressed as given in \eqref{LeviCivitaConnectionnonflatInvariants1fFinal}.
Hence, the proof of the necessary condition is completed.
In order to prove the sufficient condition we assume that there exists a coordinate system $(s,t)$ such that \eqref{LeviCivitaConnectionnonflat1ALL} and \eqref{LeviCivitaConnectionnonflatInvariants1ALL} are satisfied. By a straightforward computation using \eqref{E42MargTrapGMLap} one can check that $G$ satisfies \eqref{PW1typeEq1} for the non-constant function
$\phi = -4 \left(-\frac 23 \lambda_1s-\frac 23 \lambda_2\right)^{-3/2}$ and vector $C = -\frac12\left(\bar x \wedge \bar y +n_1\wedge n_2+\frac {\widetilde \beta_2}{K}\bar x\wedge n_1\right)$. A further calculation yields that $C$ is a non-zero constant vector. Hence, $M^2_1$ has proper pointwise 1-type Gauss map of second kind.
\end{proof}
\begin{cor}\label{ACorollaryKDneq0}
Let $M^2_1$ be a quasi-minimal surface with non-flat normal connection in the pseudo-Euclidean space $\mathbb E^4_2$. If the Gauss map of $M^2_1$ satisfies \eqref{PW1typeEq1}, then $M^2_1$ has proper pointwise 1-type Gauss map of second kind.
\end{cor}
In the following theorem, we obtain a parametrization for the surface described in Proposition \ref{THMNONFLATNORMAL}. This completes the classification of quasi-minimal surfaces in $\mathbb E^4_2$ with non-flat normal connection and pointwise 1-type Gauss map.
\begin{thm}\label{THMNONFLATNORMALMainTheo}
Let $M^2_1$ be a quasi-minimal surface in $\mathbb E^4_2$ with non-vanishing normal curvature. Then, $M^2_1$ has pointwise 1-type Gauss map if and only if it is congruent to the surface given by
\begin{align}\label{PW1TYPEPAra}
z(s,t)=- s\lambda_3(t) n_1'(t)-\frac{3 \sqrt{6} \lambda_3(t)\sqrt{-s \lambda_1(t)}}{\lambda_1^2(t)}n_1(t)+\xi(t)
\end{align}
for some smooth functions $\lambda_1 =\lambda_1(t),\ \lambda_3=\lambda_3(t)$ and some $\mathbb E^4_2$-valued smooth functions $n_1(t)$, $\xi(t)$ satisfying the equations
\begin{eqnarray}\label{MainTheLastEqs}
\langle n_1,n_1\rangle=\langle n_1',n_1'\rangle=\langle n_1,\xi'\rangle=\langle \xi',\xi'\rangle&=0,&\quad
\langle n_1',\xi'\rangle =\frac{1}{\lambda_3},\\
\label{PW1TYPEEqu1} n_1''-\left(\frac{\lambda _3'}{\lambda _3 }-\frac{3 \lambda _1'}{\lambda _1}\right)n_1'+\frac{1}{\lambda _3 }n_1&=0,&
\end{eqnarray}
and
\begin{align}\label{PW1TYPEEqu2}
\begin{split}
\xi'''+\left(\frac{3 \lambda _3'}{\lambda _3} -\frac{3\lambda _1' }{\lambda _1 }\right)\xi''+ \frac{-3 \lambda _1 \left(\lambda _1' \lambda _3' +\lambda _3 \lambda _1'' \right)+3 \lambda _3 {\lambda _1'} ^2+\lambda _1 ^2 \left(2 \lambda _3'' +1\right)}{\lambda _1 ^2\lambda _3} \, \xi'=\zeta,
\end{split}
\end{align}
where $\zeta=\zeta(t)$ is the $\mathbb E^4_2$-valued function given by
\begin{align}\label{PW1TYPEEqu2b}
\begin{split}
\zeta=& 81\frac{8{\lambda _1'}^2 \lambda_3^2 - 2 \lambda _1 \lambda _1'' \lambda_3^2 + \lambda _1^2 \lambda_3 \lambda _3'' - 7\lambda_1 \lambda _1' \lambda_3 \lambda _3' +\lambda _1^2 {\lambda _3'}^2}{\lambda _1^5 \lambda_3} \,n_1+ 162 \frac{\lambda _1 \lambda _3' -2 \lambda _3 \lambda _1'}{\lambda _1^4} \,n_1'.
\end{split}
\end{align}
\end{thm}
\begin{proof}
Let $M^2_1$ be the surface described in Proposition \ref{THMNONFLATNORMAL} for some smooth functions $\lambda_1(t)$ and $\lambda_3(t)$. Then, the first equation in \eqref{LeviCivitaConnectionnonflat1c} implies $n_1=n_1(t)$. Thus, the first equation in \eqref{LeviCivitaConnectionnonflat1a} and \eqref{LeviCivitaConnectionnonflatInvariants1ad} give the differential equation
$$z_{ss}=\lambda_3\left(-\frac 23 \lambda_1s-\frac 23 \lambda_2\right)^{-3/2}n_1(t)$$
which implies
$$z_s= \frac{3\lambda_3}{\lambda_1} \left(-\frac 23 \lambda_1s-\frac 23 \lambda_2\right)^{-1/2}n_1(t) + \xi _1(t)$$
for some $\mathbb E^4_2$-valued smooth function $\xi_1(t)$. Hence, the position vector $z(s,t)$ is given by
\begin{equation}\label{MainTheoEqu1}
z(s,t)=-\frac{3 \sqrt{6} \lambda _3\sqrt{-s \lambda _1-\lambda _2}}{\lambda_1^2} \,n_1(t) +s \,\xi_1(t)+\xi(t)
\end{equation}
where $\xi(t)$ is an $\mathbb E^4_2$-valued smooth function. Without loss of generality, we may assume that $\lambda _2=0$ by re-defining $s$.
Now, using \eqref{LeviCivitaConnectionnonflat1c}, \eqref{LeviCivitaConnectionnonflatInvariants1ad} and \eqref{LeviCivitaConnectionnonflatInvariants1beta} we obtain
$\xi_1(t) = -\lambda_3 n_1'(t)$. Combining the last equality with \eqref{MainTheoEqu1} we get that $M^2_1$ is congruent to the surface parametrized by \eqref{PW1TYPEPAra}.
Now, from the first equation in \eqref{LeviCivitaConnectionnonflat1b} we have
$$\frac{\partial}{\partial s}\left(\widetilde f z_s+z_t\right)=n_1.$$
The last equation together with \eqref{LeviCivitaConnectionnonflatInvariants1f} and \eqref{PW1TYPEPAra} imply \eqref{PW1TYPEEqu1}. Note that, since $\bar x=z_s$ and $\bar y=\widetilde f z_s+z_t$ form a pseudo-orthonormal frame field of the tangent bundle of $M^2_1$, \eqref{PW1TYPEPAra} and \eqref{PW1TYPEEqu1} imply \eqref{MainTheLastEqs}.
On the other hand, from the second equality in \eqref{LeviCivitaConnectionnonflat1b} we have
$$n_2=\frac1{\widetilde d}\left(\widetilde\nabla_{\bar y}\bar y-\widetilde f_s{\bar y}-\widetilde c n_1\right).$$
Using \eqref{LeviCivitaConnectionnonflatInvariants1ALL}, \eqref{PW1TYPEEqu1} and $\lambda_2=0$, we compute the right-hand side of this equality and obtain
\begin{align*}
\begin{split}
n_2=& \frac{1}{2 s \lambda_1^2}\left(2s \lambda_1 \left(2 \lambda_3' \lambda_1 - 3 \lambda_1' \lambda_3\right)+ 3 \sqrt{6} \sqrt{-s \lambda _1} \lambda_3 \right)\xi'+\lambda _3 \xi''+ \frac{\lambda _3}{\lambda_1^3} \left(s \lambda_1^3 - 27 \lambda_3\right)n_1'
\\&
+ \frac{3\lambda _3}{2 s \lambda _1^5}\left( 54 s \lambda_1\left(2 \lambda _3 \lambda _1' - \lambda _1\lambda _3' \right)+ \sqrt{6} \sqrt{-s \lambda _1} \left(s \lambda _1^3-27 \lambda _3 \right)\right)n_1.
\end{split}
\end{align*}
The last equality together with the second equality in \eqref{LeviCivitaConnectionnonflat1d} and \eqref{PW1TYPEEqu1} imply
\begin{align}\notag
\begin{split}
\xi'''+ 3\left(\frac{\lambda_3'}{\lambda _3} -\frac{\lambda _1' }{\lambda _1 }\right)\xi''+ \left(\frac{2\lambda_3''}{\lambda_3} - \frac{3\lambda_3'\lambda_1'}{\lambda_1\lambda_3} - \frac{3 \lambda_1''}{\lambda_1} + \frac{3\lambda_1'^2}{\lambda_1^2}+\frac{1}{\lambda_3} \right)\, \xi' =\\
81 \left(\frac{8 \lambda_1'^2 \lambda_3}{\lambda_1^5} - \frac{2\lambda_3 \lambda _1''} {\lambda_1^4} + \frac{\lambda _3''}{\lambda_1^3} - \frac{7 \lambda _1' \lambda _3'}{\lambda_1^4} + \frac{\lambda_3'^2}{\lambda_1^3 \lambda_3} \right)\,n_1+ 162 \left(\frac{\lambda _3'}{\lambda_1^3} - \frac{2 \lambda _3 \lambda _1'}{\lambda _1^4}\right) \,n_1'.
\end{split}
\end{align}
Denoting by $\zeta$ the vector field given in \eqref{PW1TYPEEqu2b} we obtain \eqref{PW1TYPEEqu2}.
Hence, the proof of the necessary condition is completed.
The converse follows by a direct computation.
\end{proof}
Below we present an explicit example of a quasi-minimal surface with non-flat normal connection and pointwise 1-type Gauss map.
\begin{Example}
Let $\mathcal{M}$ be the surface given by
\begin{align*}
\begin{split}
z(s,t)=&\left(-4s^{1/2}\cos t+s\sin t+\frac 12\cos t,-4s^{1/2}\sin t-s\cos t+\frac 12\sin t,\right.\\
&-4s^{1/2}\sin t-s\cos t-\frac 12\sin t,\left.-4s^{1/2}\cos t+s\sin t-\frac 12\cos t\right)
\end{split}
\end{align*}
in the pseudo-Euclidean space $\mathbb E^4_2$. Calculating the tangent vector fields $z_s$ and $z_t$ we get $\langle z_s, z_s \rangle = 0$, $\langle z_s, z_t \rangle = -1$, $\langle z_t, z_t \rangle = -8 s^{1/2}$. Hence, $\mathcal{M}$ is a Lorentz surface in $\mathbb E^4_2$.
We consider the following normal vector field $n_1= (\cos t,\sin t,\sin t,\cos t)$.
Note that $\langle n_1, n_1 \rangle = 0$. Hence, there exists a normal vector field $n_2$ such that $\langle n_2, n_2 \rangle = 0$ and $\langle n_1, n_2 \rangle = -1$.
Now, we consider the following pseudo-orthonormal tangent frame field
$$\bar x = z_s,\quad \bar y = -4 s^{1/2} z_s + z_t.$$
By a straightforward computation
one can see that the mean curvature vector field is
$H=-n_1$. Hence, $\mathcal{M}$ is a quasi-minimal surface.
Direct computations show that
$$ \widetilde\nabla_{\bar x}n_1=0, \qquad \widetilde\nabla_{\bar y}n_1=-\bar x - 2s^{-1/2} n_1,$$
which imply $\widetilde \beta_1 = 0$, $\widetilde \beta_2=-2s^{-1/2}$. So, $\mathcal{M}$ is a surface with non-parallel mean curvature vector field. The Gauss curvature $K$ and the normal curvature $\varkappa$ are given by the expressions:
$$K = s^{-3/2}, \quad \varkappa = s^{-3/2}.$$
Hence, $\mathcal{M}$ is a quasi-minimal surface with non-flat normal connection.
By a straightforward computation we obtain that
\eqref{PW1typeEq1} is satisfied for the function $\phi = -4 s^{-3/2}$ and the constant vector
$C = -\frac12\left(\bar x \wedge \bar y - 2 s \,\bar x\wedge n_1 + n_1\wedge n_2 \right)$. So, $\mathcal{M}$ has proper pointwise 1-type Gauss map of second kind.
Note that the Levi-Civita connection of $\mathcal{M}$ satisfies \eqref{LeviCivitaConnectionnonflat1ALL}
for the functions $\widetilde a=s^{-3/2}$, $\widetilde d=1$, $\widetilde \beta_2=-2s^{-1/2}$, $\widetilde c=\widetilde f=-4s^{1/2}$,
which can be obtained by putting $\lambda_1=-\frac 32$, $\lambda_2=0$, $\lambda_3=1$ in \eqref{LeviCivitaConnectionnonflatInvariants1ALL}.
\end{Example}
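The computations in this example can also be verified symbolically. Below is a minimal sketch using the SymPy library (the scalar product of $\mathbb E^4_2$ is taken with signature $(+,+,-,-)$, the convention consistent with the scalar products listed above; the variable names are ours).
\begin{verbatim}
import sympy as sp

s, t = sp.symbols('s t', positive=True)

def ip(a, b):
    # scalar product of E^4_2 with signature (+,+,-,-)
    return sp.simplify(a[0]*b[0] + a[1]*b[1] - a[2]*b[2] - a[3]*b[3])

z = sp.Matrix([-4*sp.sqrt(s)*sp.cos(t) + s*sp.sin(t) + sp.cos(t)/2,
               -4*sp.sqrt(s)*sp.sin(t) - s*sp.cos(t) + sp.sin(t)/2,
               -4*sp.sqrt(s)*sp.sin(t) - s*sp.cos(t) - sp.sin(t)/2,
               -4*sp.sqrt(s)*sp.cos(t) + s*sp.sin(t) - sp.cos(t)/2])
zs, zt = z.diff(s), z.diff(t)
n1 = sp.Matrix([sp.cos(t), sp.sin(t), sp.sin(t), sp.cos(t)])

print(ip(zs, zs), ip(zs, zt), ip(zt, zt))   # expect 0, -1, -8*sqrt(s)
print(ip(n1, zs), ip(n1, zt), ip(n1, n1))   # expect 0, 0, 0

# the ambient derivative of ybar along z_s equals n1 (cf. the frame
# equations above), which gives H = -n1
ybar = -4*sp.sqrt(s)*zs + zt
print((ybar.diff(s) - n1).applyfunc(sp.simplify))   # expect the zero vector
\end{verbatim}
Since $\langle n_1,n_1\rangle=0$ and $H=-n_1\neq 0$, this confirms that $\mathcal{M}$ is quasi-minimal.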
\begin{rem}
We would like to note that in the Minkowski space $\mathbb E^4_1$ all
marginally trapped surfaces with pointwise 1-type Gauss map have flat normal connection, while in the pseudo-Euclidean space $\mathbb E^4_2$ there exist quasi-minimal surfaces with non-flat normal connection and pointwise 1-type Gauss map.
\end{rem}
\begin{rem}
As far as we know, all examples of quasi-minimal surfaces in $\mathbb E^4_2$ known in the literature so far are surfaces with parallel mean curvature vector field. Here we give an explicit example of a quasi-minimal surface with non-parallel mean curvature vector field.
\end{rem}
\textbf{Acknowledgments:}
The first author is partially supported by the National Science Fund,
Ministry of Education and Science of Bulgaria under contract
DFNI-I 02/14. The second author is supported by T\"UB\.ITAK (Project Name: Y\_EUCL2TIP, Project Number: 114F199).
This work was done during the second author's visit at the Bulgarian Academy of Sciences in June 2015. He would like to express his deepest gratitude to Professors G. Ganchev and V. Milousheva for their warm hospitality during his stay.
\vskip 5mm
\end{document}
\begin{document}
\title{Cohen-Macaulayness of bipartite graphs, revisited}
\author{Rashid Zaare-Nahandi\\ Institute for Advanced Studies in Basic Sciences,
Zanjan 45195, Iran \\ E-mail: [email protected]}
\date{}
\maketitle
\begin{abstract}
Cohen-Macaulayness of bipartite graphs has been investigated by several mathematicians and has been characterized combinatorially. In this note, we give some different combinatorial conditions for a bipartite graph which are equivalent to Cohen-Macaulayness of the graph. The conditions in previous works depend on an appropriate ordering of the vertices of the graph, whereas the conditions presented in this paper do not depend on any ordering. Finally, we present a fast algorithm to check Cohen-Macaulayness of a given bipartite graph.
\noindent{\bf Key words:} edge ideal of a graph, Cohen-Macaulay, bipartite graph.
\noindent{\bf 2010 MR Subject Classification:} 13F55, 05C25, 05E45.
\end{abstract}
Characterization and classification of Cohen-Macaulay graphs, especially bipartite graphs, have been extensively studied in the last decades. For instance, see \cite{EsVil}, \cite{HH}, \cite{HHZ}, \cite{Vill1} and \cite{HYZ}.
Complete prerequisites for the subject are nicely presented in the mentioned references and \cite{St}. To make this note self-contained, we review here some basic definitions.
Throughout this paper, $G$ is a finite simple graph with no vertex of degree zero. For two vertices $v$ and $w$ which are adjacent in $G$, we write $v\sim w$. The set of all vertices of $G$ adjacent to a vertex $v$ is denoted by $N(v)$.
A subset $P$ of the set of edges is called a perfect matching if no two distinct edges in $P$ have a common vertex and every vertex of $G$ belongs to one of the edges in $P$.
Let $[n] = \{1,2,\ldots,n\}$. A (finite) simplicial complex $\Delta$ on $n$ vertices, is a collection of subsets of
$[n]$ such that the following conditions hold:\\
i) $\{i\}\in\Delta$ for any $i\in [n]$,\\
ii) if $E\in\Delta$ and $F\subseteq E$, then $F\in\Delta$.\\
An element of $\Delta$ is called a face and a maximal face with respect to inclusion is called a facet. The dimension of a face $F\in\Delta$ is defined to be $|F|-1$ and the dimension of $\Delta$ is the maximum of the dimensions of its faces. Faces of dimension $0$ are called vertices.
Let $\Delta$ be a simplicial complex on
$[n]$. Let $S=K[x_1,\ldots,x_n]$ be the polynomial ring in $n$
variables with coefficients in a field $K$. Let $I_{\Delta}$
be the ideal of $S$ generated by all square-free monomials
$x_{i_1}\cdots x_{i_s}$ provided that $\{i_1,\ldots,i_s\}\not\in\Delta$. The quotient ring
$K[\Delta]=S/I_{\Delta}$ is called the Stanley-Reisner ring of the simplicial complex $\Delta$.
Let $G$ be a graph on the vertex set $V = \{v_1,\ldots ,v_n\}$ and let $S=K[x_1,\ldots,x_n]$. The edge ideal $I(G)$ is defined to be the ideal of $S$ generated by all square-free monomials $x_ix_j$ such that $v_i$ is adjacent to $v_j$ in $G$. The quotient ring $R(G)=S/I(G)$ is called the edge ring of $G$. We say that a set $F\subseteq V$ is an independent set in $G$ if no two of its vertices are adjacent. The independence complex of $G$ is the simplicial complex $\Delta_G$ defined by
$$\Delta_G = \{F \subseteq V : F \ \mbox{is an independent set in} \ G\} .$$
Let $S$ be the polynomial ring and let $I$ be a homogeneous ideal of $S$. The depth of $S/I$, denoted by $\mbox{depth}(S/I)$, is the largest integer $r$ such that there is a sequence $f_1, \ldots, f_r$ of homogeneous elements with $f_i$ not a zero-divisor in $S / (I, f_1, \ldots, f_{i-1})$ for all $1\leq i \leq r$, where $f_0$ is assumed to be $0$, and such that $(I, f_1, \ldots, f_r)\neq S$. Such a sequence is called a regular sequence. Depth is an important invariant of a ring. It is bounded above by another important invariant, the Krull dimension of the ring, that is, the length of the longest chain of prime ideals. The ring $S/I$ is called Cohen-Macaulay if $\mbox{depth}(S/I) = \dim(S/I)$. A graph $G$ (a simplicial complex $\Delta$, respectively) is called Cohen-Macaulay if the ring $R(G)$ (the ring $K[\Delta]$, respectively) is Cohen-Macaulay.
A simplicial complex $\Delta$ is called pure if all its facets have the same cardinality. A graph $G$ is called unmixed if all maximal independent sets of vertices of $G$ have the same cardinality. It is clear that a graph $G$ is unmixed if and only if the simplicial complex $\Delta_G$ is pure. It is well known that a Cohen-Macaulay simplicial complex is pure, but the converse is not true, i.e., there are pure simplicial complexes which are not Cohen-Macaulay.
A pure simplicial complex $\Delta$ with vertex set $V$ is called completely balanced if there is a partition of $V$ as $C_1,\ldots,C_r$ such that each facet of $\Delta$ has exactly one vertex in common with each $C_i$. Here a partition means that $C_1\cup\cdots\cup C_r=V$ and $C_i\cap C_j=\varnothing$ for each $i\neq j$. R. Stanley studied such simplicial complexes in \cite{St1}. He proved that, in a completely balanced simplicial complex with partition $C_1, \ldots, C_r$, the elements $\theta_1,\ldots,\theta_r$ form a homogeneous system of parameters, where
$$
\theta_i=\sum_{x\in C_i}x .
$$
Here by a homogeneous system of parameters in a standard graded ring $R$, we mean a set of homogeneous elements $\theta_1,\ldots,\theta_r$ of nonzero degrees such that $\dim(R/\langle \theta_1,\ldots,\theta_r\rangle)=0$.
R. H. Villarreal proved in \cite{Vill2} that a bipartite graph $G$ with parts $V_1$ and $V_2$ is unmixed if and only if $|V_1|=|V_2|$ and the vertices of $V_1$ and $V_2$ can be ordered as $x_1, \ldots, x_n$ and $y_1, \ldots, y_n$, respectively, such that:\\
1) $x_i\sim y_i$ for $i=1, \ldots, n$,\\
2) for each $1\leq i<j<k\leq n$ if $x_i\sim y_j$ and $x_j\sim y_k$, then $x_i\sim y_k$.
Then, M. Estrada and R. H. Villarreal showed in \cite{EsVil} that Cohen-Macaulayness and shellability of a bipartite graph $G$ coincide, and that if $G$ is Cohen-Macaulay, then there is a vertex $v$ in $G$ such that $G\setminus\{v\}$ is again Cohen-Macaulay.
Finally, J. Herzog and T. Hibi proved in \cite{HH} that a bipartite graph $G$ is Cohen-Macaulay if and only if $|V_1|=|V_2|$ and the vertices of $V_1$ and $V_2$ can be ordered as $x_1, \ldots, x_n$ and $y_1, \ldots, y_n$, respectively, such that:\\
1) $x_i\sim y_i$ for $i=1, \ldots, n$,\\
2) if $x_i\sim y_j$, then $i\leq j$, \\
3) for each $1\leq i<j<k\leq n$ if $x_i\sim y_j$ and $x_j\sim y_k$, then $x_i\sim y_k$.
In the above criteria, one needs to find an appropriate ordering of the vertices of $G$, which makes it more complicated to check Cohen-Macaulayness of a given bipartite graph in practice. Here, we show that no ordering is needed and that Cohen-Macaulayness of a given bipartite graph can be checked in a quite short time.
\begin{theorem}\label{main}
Let $G$ be a bipartite graph with parts $V_1$ and $V_2$. Then, $G$ is Cohen-Macaulay if and only if there is a perfect matching $\{x_1,y_1\}, \ldots, \{x_n,y_n\}$ in $G$, with $x_i\in V_1$ and $y_i\in V_2$ for $i=1,\ldots,n$, such that the following two conditions hold.\\
1) The induced subgraph on $N(x_i)\cup N(y_i)$ is a complete bipartite graph, for $i=1,\ldots,n$. \\
2) If $x_i\sim y_j$ for $i\neq j$, then $x_j\not\sim y_i$.
\end{theorem}
Before proving the theorem, we prove some lemmas.
\begin{lemma}\label{cm}
Let $G$ be an unmixed bipartite graph with a perfect matching $\{x_1,y_1\}, \ldots, \{x_n,y_n\}$. Then, $G$ is Cohen-Macaulay if and only if the sequence $x_1+y_1, \ldots, x_n+y_n$ is a regular sequence in $R(G)$.
\end{lemma}
\paragraph{Proof.} The sets $\{x_1,y_1\}, \ldots, \{x_n,y_n\}$ form a partition of the vertices of $G$, and any maximal independent set intersects each of these sets in exactly one vertex. Thus, the simplicial complex $\Delta_{G}$ is completely balanced. By Corollary 4.2 and its Remark in \cite{St1}, $x_1+y_1, \ldots, x_n+y_n$ is a system of parameters in $R(G)$. By Theorem 17.4 in \cite{Mat} (using the graded ring instead of the local ring), $R(G)$ is Cohen-Macaulay if and only if every system of parameters is a regular sequence in $R(G)$.
$\Box$
\begin{lemma}\label{zdivisor}
Let $I$ be an ideal of $S=K[x_1,\ldots,x_n]$ generated by quadratic monomials, and suppose that for some $i,j$ with $1\leq i<j\leq n$ we have $x_i^2\not\in I$ and $x_j^2\not\in I$. Then, $\bar{x}_i+\bar{x}_j$ is a zero-divisor in $S/I$ if and only if one of the following conditions holds, where $\bar{x}_i$ denotes the image of $x_i$ in $S/I$.\\
i) There is $x_k$ with $k\not\in\{i,j\}$ such that $\bar{x}_k(\bar{x}_i+\bar{x}_j)=0$, or\\
ii) there are integers $k,l$, $1\leq k<l\leq n$, both distinct from $i$ and $j$, such that $x_kx_l\not\in I$ and $\bar{x}_k\bar{x}_l(\bar{x}_i + \bar{x}_j)=0$.
\end{lemma}
\paragraph{Proof.}
Without loss of generality, we may assume that $i=1$ and $j=2$.
It is well known that a polynomial $f$ in $S$ belongs to a monomial ideal $I$ if and only if all monomials of $f$ belong to $I$. Let $\prec$ be the lexicographic order on monomials of $S$ induced by $x_1\succ x_2 \succ \cdots \succ x_n$.
Let $\bar{x}_1+\bar{x}_2$ be a zero-divisor in $S/I$. Then, there is a polynomial $h$ in $S$ such that $\bar{h}$ is nonzero in $S/I$ and $\bar{h}(\bar{x}_1+\bar{x}_2)=0$, or equivalently, $f=h(x_1+x_2)\in I$. Write $h=h_1+h_2+\cdots + h_r$, where the $h_i$'s are monomials and $h_1\succ h_2\succ\cdots\succ h_r$. We may assume that $h_1\not\in I$. Now, $h_1x_1$ is the greatest monomial of $f$ with respect to the order $\prec$ and cannot be canceled by other monomials. Therefore, $h_1x_1\in I$ and there is a quadratic monomial in the generating set of $I$ which divides $h_1x_1$ and does not divide $h_1$. This monomial must be of the form $x_1x_k$ for some $k$, $1\leq k\leq n$. On the other hand, $k\neq 1$ because $x_1^2\not\in I$, and $x_1\nmid h_1$ because $x_k\mid h_1$ and $h_1\not\in I$. By the choice of the lexicographic order, $x_1$ does not divide any other monomial of $h$. In the polynomial $hx_1+hx_2$, no monomial of the part $hx_2$ is divisible by $x_1$. In this part, $h_1x_2$ is the greatest monomial with respect to $\prec$ and cannot be canceled by other monomials; therefore, $h_1x_2\in I$. As before, there is a quadratic monomial in the generating set of $I$ which divides $h_1x_2$ but not $h_1$. This monomial must be of the form $x_2x_l$ for some $2<l\leq n$, and moreover $x_2\nmid h_1$. Now, $x_k\mid h_1$ and $x_l\mid h_1$; if $k=l$, then $x_k(x_1+x_2)\in I$, and if $k\neq l$, then $x_kx_l\not\in I$ because $x_kx_l\mid h_1$, and $x_kx_l(x_1+x_2)\in I$. This completes the proof in one direction. The converse is trivial.
$\Box$
\paragraph{Proof of Theorem \ref{main}.} The proof is in three steps. First we prove that a bipartite graph $G$ is unmixed if and only if there is a perfect matching in $G$ satisfying condition 1. Then, in Step 2, we prove that for an unmixed bipartite graph, condition 2 is necessary for Cohen-Macaulayness, and finally, in Step 3, we prove that condition 2 is also sufficient for Cohen-Macaulayness of such a graph. \\
{\it Step 1.} Let $G$ be unmixed. There is no isolated vertex, so every vertex in $V_1$ is adjacent to some vertex in $V_2$. Therefore, no vertex of $V_1$ is independent of the set $V_2$. This means that $V_2$ is a maximal independent set in $G$ and, similarly, $V_1$ is a maximal independent set. Then, by unmixedness of $G$, $|V_1|=|V_2|$. Let $A\subseteq V_1$ be a nonempty set and let $N(A)$ be the set of all vertices in $V_2$ which are adjacent to some vertex in $A$. Suppose $|N(A)|< |A|$. There is no edge between $A$ and $V_2\setminus N(A)$. Therefore, $A\cup (V_2\setminus N(A))$ is an independent set whose size is strictly greater than the size of $V_2$, which contradicts the unmixedness of $G$. Therefore, $|N(A)| \geq |A|$ for each nonempty subset $A$ of $V_1$, and by Hall's theorem \cite{Hall} there is a system of distinct representatives (SDR) for the family $\{N(v) : v\in V_1\}$, which determines a perfect matching between $V_1$ and $V_2$.
Now, let $V_1=\{x_1,\ldots,x_n\}$, $V_2=\{y_1,\ldots,y_n\}$ and let $\{x_1,y_1\}, \ldots, \{x_n,y_n\}$ be a perfect matching in $G$. Since $G$ is unmixed, any maximal independent set of vertices in $G$ has cardinality $n$. Therefore, any maximal independent set intersects each edge of the perfect matching in exactly one vertex. Suppose that for some $j$, $1\leq j\leq n$, the induced subgraph on $N(x_j)\cup N(y_j)$ is not a complete bipartite graph. Then, there are $x\in N(y_j)$ and $y\in N(x_j)$ such that $x\not\sim y$. The set $\{x,y\}$ is independent and so there is a maximal independent set containing it. This maximal independent set does not meet the edge $\{x_j,y_j\}$, which is a contradiction. Therefore, condition 1 holds.
Conversely, suppose there is a perfect matching $\{x_1,y_1\}, \ldots, \{x_n,y_n\}$ in $G$ which satisfies condition 1. Let $A$ be a maximal independent set in $G$. Then $A$ meets each edge in the perfect matching in at most one vertex. Suppose that for some $j$, $1\leq j\leq n$, $A\cap \{x_j,y_j\}=\varnothing$. Then, neither $x_j$ nor $y_j$ is independent of $A$, so there are $x,y\in A$ such that $x\sim y_j$ and $y\sim x_j$. But $x$ and $y$ are not adjacent, so the induced subgraph on $N(x_j)\cup N(y_j)$ is not a complete bipartite graph, which is a contradiction. Therefore, $A$ meets every edge in the perfect matching and has cardinality $n$. This means that $G$ is unmixed. \\
{\it Step 2.} Let $G$ be a bipartite graph with a perfect matching which satisfies condition 1 but not condition 2. That is, for some $i$ and $j$, $1\leq i<j\leq n$, we have $x_i\sim y_j$ and $x_j\sim y_i$. Then, in the quotient ring $R(G)/\langle x_i + y_i\rangle$, the element $\bar{x}_i$ is not zero and $\bar{x}_i(\bar{x}_j+\bar{y}_j)=0$ because $\bar{x}_i=-\bar{y}_i$. Therefore, $\bar{x}_j+\bar{y}_j$ is a zero-divisor in $R(G)/\langle x_i + y_i\rangle$. This means that the sequence $\bar{x}_1+\bar{y}_1, \ldots, \bar{x}_n+\bar{y}_n$ is not a regular sequence in $R(G)$ and, by Lemma \ref{cm}, $R(G)$ is not Cohen-Macaulay. \\
{\it Step 3.} Let $G$ be a bipartite graph with a perfect matching satisfying conditions 1 and 2. In this case, $\dim(R(G))=n$ and, to prove that $R(G)$ is Cohen-Macaulay, it is enough to show that the sequence $\bar{x}_1+\bar{y}_1, \ldots, \bar{x}_n+\bar{y}_n$ is a regular sequence in $R(G)$ (Lemma \ref{cm}). For an integer $i$, $1\leq i\leq n$, the ring $R(G)/\langle x_1+y_1,\ldots,x_{i-1}+y_{i-1} \rangle$ can be considered as the ring $R'(G)$ obtained from $R(G)$ by identifying the variable $x_j$ with $-y_j$ for $j=1,\ldots,i-1$. By Lemma \ref{zdivisor} and its proof, the only possibility for $\bar{x}_i+\bar{y}_i$ to be a zero-divisor in $R'(G)$ is that there is $j$, $1\leq j\leq i-1$, such that $\bar{x}_j(\bar{x}_i+\bar{y}_i)=0$. Therefore, $\bar{x}_j\bar{y}_i=0$ and $\bar{x}_j\bar{x}_i=0$, or equivalently, $\bar{y}_j\bar{x}_i=0$. Therefore, $x_j\sim y_i$ and $y_j\sim x_i$. But in this case condition 2 fails. This completes the proof.
$\Box$
\begin{proposition}
Condition 1 in Theorem \ref{main}, which is equivalent to unmixedness of a bipartite graph, is also equivalent to the statement that none of the polynomials $x_1+y_1, \ldots, x_n+y_n$ is a zero-divisor in $R(G)$.
\end{proposition}
\paragraph{Proof.} It is clear by Lemma \ref{zdivisor} and Theorem \ref{main}.
$\Box$
\begin{remark}\label{connect} Condition 2 in Theorem \ref{main} is equivalent to saying that, for each $i$ and $j$, $1\leq i<j\leq n$, the induced subgraph on the vertices $\{x_i,y_i,x_j,y_j\}$ has connected complement.
\end{remark}
\begin{corollary}
Let $G$ be a bipartite Cohen-Macaulay graph and $\{x_i,y_i\}$ be any edge in the perfect matching mentioned in Theorem \ref{main}. Then, $G\setminus\{x_i,y_i\}$ is again Cohen-Macaulay.
\end{corollary}
\paragraph{Proof.} Here, by $G\setminus\{x_i,y_i\}$ we mean the graph obtained by deleting the vertices $x_i$ and $y_i$ and all edges passing through one of these vertices. It is clear that if condition 1 or condition 2 of Theorem \ref{main} holds for $G$, then it also holds for $G\setminus\{x_i,y_i\}$ for each $i=1,\ldots,n$.
$\Box$
\begin{proposition}
Let $G$ be a bipartite Cohen-Macaulay graph with parts $V_1$ and $V_2$. Then, there is at least one vertex of degree one in each part.
\end{proposition}
\paragraph{Proof.} Let $y$ be a vertex in $V_2$ such that $\deg(y')\leq\deg(y)$ for every vertex $y'\in V_2$. Let $x\in V_1$ be the vertex such that $\{x,y\}$ belongs to a perfect matching in $G$. We have $\deg(x)\geq 1$. If $\deg(x)>1$, then there is a vertex $y'\in V_2\setminus \{y\}$ such that $x\sim y'$. Let $x'$ be the vertex in $V_1\setminus\{x\}$ such that $\{x',y'\}$ is in the perfect matching. Since $G$ is Cohen-Macaulay, the induced subgraph on $N(x)\cup N(y)$ is a complete bipartite graph and $x'\not\in N(y)$. Then, $y'$ is adjacent to each vertex in $N(y)\cup \{x'\}$. Therefore, $\deg(y')>\deg(y)$, which is a contradiction. Therefore, $\deg(x)=1$. The same argument with the roles of $V_1$ and $V_2$ interchanged gives a vertex of degree one in $V_2$.
$\Box$
Let $G$ be a Cohen-Macaulay bipartite graph. There are vertices of degree one in both parts. If we remove a vertex of degree one together with its unique neighbor, we remove an edge of the perfect matching, and the remaining graph is also Cohen-Macaulay.
\begin{corollary}\label{unique}
Let $G$ be a Cohen-Macaulay bipartite graph. There is a unique perfect matching in $G$.
\end{corollary}
\paragraph{Proof.} By Theorem \ref{main}, there is a perfect matching. Let $V_1$ and $V_2$ be the two parts of $G$ and let $P$ be a perfect matching in $G$. By the above proposition, there is a vertex of degree one in $V_1$. Let $x_1$ be such a vertex and let $y_1\in V_2$ be the unique vertex adjacent to $x_1$. Then $\{x_1,y_1\}\in P$. The graph $G\setminus\{x_1,y_1\}$ is again Cohen-Macaulay and $V_1\setminus\{x_1\}$ contains a vertex of degree one, say $x_2$. Let $y_2\in V_2\setminus\{y_1\}$ be the unique vertex adjacent to $x_2$. Then, $\{x_2,y_2\}\in P$. Continuing this process determines $P$ uniquely.
$\Box$
\begin{corollary}
Let $G$ be an unmixed bipartite graph. Then, the following conditions are equivalent.\\
i) $G$ is Cohen-Macaulay. \\
ii) There is a unique perfect matching in $G$.\\
iii) For any two edges $e_1, e_2$ in a perfect matching, the complement of the induced subgraph on the vertices of $e_1$ and $e_2$ is connected.
\end{corollary}
\paragraph{Proof.} (i$\to$ii) is proved in Corollary \ref{unique}. Let $G$ be unmixed but not Cohen-Macaulay. Then, there is a perfect matching and two edges $\{x_i,y_i\}$ and $\{x_j,y_j\}$ in it such that $x_i\sim y_j$ and $x_j\sim y_i$. Replacing $\{x_i,y_i\}$ and $\{x_j,y_j\}$ by $\{x_i,y_j\}$ and $\{x_j,y_i\}$, we get a different perfect matching. This proves (ii$\to$i). The equivalence of (i) and (iii) is clear by Theorem \ref{main} and Remark \ref{connect}.
$\Box$
For a given bipartite graph $G$, we present a fast polynomial-time algorithm to check whether $G$ is Cohen-Macaulay or not.
\begin{algorithm}
Let $G$ be a given bipartite graph with $m$ vertices.
\begin{itemize}
\item[]{\em Step~1.} Take $i=0$. If $m$ is not even, then go to Step~7; otherwise set $n=m/2$.
\item[]{\em Step~2.} If there is no vertex of degree one in $G$, go to Step~7.
\item[]{\em Step~3.} Take $i=i+1$. Choose a vertex of degree one and name it $x_i$; name its unique neighbor $y_i$. Take $G=G\setminus\{x_i,y_i\}$. If $i<n$, go to Step~2.
\item[]{\em Step~4.} If there is $j$, $1\leq j\leq n$, such that a vertex in $N(x_j)$ and a vertex in $N(y_j)$ are not adjacent, then go to Step~7.
\item[]{\em Step~5.} If there are $i, j$, $1\leq i<j\leq n$, such that $x_i\sim y_j$ and $x_j\sim y_i$, then go to Step~7.
\item[]{\em Step~6.} Write "G is Cohen-Macaulay" and end the algorithm.
\item[]{\em Step~7.} Write "G is not Cohen-Macaulay" and end the algorithm.
\end{itemize}
\end{algorithm}
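For convenience, a direct implementation of this algorithm is sketched below in Python; the graph is assumed to be given as an adjacency dictionary, and the function name and data format are illustrative choices rather than part of the algorithm. Steps~4 and~5 are checked against the original adjacency lists, and the test in Step~5 counts the edges joining two matched pairs, which detects exactly the forbidden configuration of condition 2 (cf.\ Remark \ref{connect}).
\begin{verbatim}
def is_cohen_macaulay(adj):
    """adj: dict mapping each vertex to the set of its neighbors."""
    m = len(adj)
    if m % 2 != 0:                              # Step 1
        return False
    # Steps 2-3: repeatedly peel off a vertex of degree one together with
    # its unique neighbor; this recovers the perfect matching {x_i, y_i}.
    work = {v: set(nbrs) for v, nbrs in adj.items()}
    matching = []
    for _ in range(m // 2):
        deg_one = [v for v, nbrs in work.items() if len(nbrs) == 1]
        if not deg_one:                         # Step 2
            return False
        x = deg_one[0]
        y = next(iter(work[x]))
        matching.append((x, y))
        for v in (x, y):
            for w in work[v]:
                work[w].discard(v)
        del work[x], work[y]
    # Step 4: the induced subgraph on N(x_i) u N(y_i) (neighborhoods in
    # the original graph) must be a complete bipartite graph.
    for x, y in matching:
        if any(b not in adj[a] for a in adj[y] for b in adj[x]):
            return False
    # Step 5: no two matched edges may be joined by both cross edges.
    for i, (a, b) in enumerate(matching):
        for c, d in matching[i + 1:]:
            if sum(1 for u in (a, b) for v in (c, d) if v in adj[u]) == 2:
                return False
    return True                                 # Step 6

# Example: the 4-cycle is unmixed but not Cohen-Macaulay; the path on
# four vertices is Cohen-Macaulay.
c4 = {'x1': {'y1', 'y2'}, 'x2': {'y1', 'y2'},
      'y1': {'x1', 'x2'}, 'y2': {'x1', 'x2'}}
p4 = {'x1': {'y1'}, 'y1': {'x1', 'x2'},
      'x2': {'y1', 'y2'}, 'y2': {'x2'}}
print(is_cohen_macaulay(c4), is_cohen_macaulay(p4))   # False True
\end{verbatim}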
In Step~3, $G\setminus\{x_i,y_i\}$ is the induced subgraph of $G$ on the vertex set $V(G)\setminus\{x_i,y_i\}$. In Steps~4 and~5, the neighborhoods $N(x_j)$, $N(y_j)$ and the adjacency relation refer to the original graph.
Note that the assumption that there is no vertex of degree zero in $G$ is not really a restriction for checking Cohen-Macaulayness in the class of all bipartite graphs: any graph with only one vertex is Cohen-Macaulay, and a disjoint union of two graphs is Cohen-Macaulay if and only if both of them are. Therefore, in a given bipartite graph $G$ we may omit all isolated vertices and check Cohen-Macaulayness of the remaining graph.
Some of the results of this paper were already known. For example, the equivalence of unmixedness of a bipartite graph with condition 1 in Theorem \ref{main} is proved in \cite{Riv}. However, the aim of this work was to gather these results together and to reformulate and reprove them in a constructive way so that an algorithm can be obtained. We also hope that the proofs in this paper give some ideas for obtaining analogous results in the larger class of $r$-partite graphs which have pairwise disjoint maximal cliques covering all the vertices.
\end{document}
\begin{document}
\begin{abstract}
The formation of singularities and the breakdown of classical solutions to the three-dimensional
compressible viscoelasticity and inviscid elasticity are considered.
For the compressible inviscid elastic fluids, the finite-time formation of singularities in classical solutions is proved for certain initial data.
For the compressible viscoelastic fluids, a criterion in terms of the temporal integral of the velocity gradient is obtained for the breakdown of smooth solutions.
\end{abstract}
\maketitle
\section{Introduction}
We are concerned with the formation of singularities in smooth solutions to the multi-dimensional partial differential equations of viscoelasticity, especially of viscoelastic fluids.
Viscoelastic fluids exhibit a combination of both fluid and solid characteristics,
and keep memory of their past deformations due to their ``elastic'' nature.
Viscoelastic fluids have a wide range of applications and hence have received
a great deal of interest. Examples and applications of
viscoelastic fluids range from oil and liquid polymers
to bioactive fluids and viscoelastic blood flow past valves; see
\cite{KM} for more applications. For the viscoelastic
materials, the competition between the kinetic energy and the
internal elastic energy through the special transport properties
of their respective internal elastic variables makes the materials
more intractable in understanding their behavior, since any
distortion of microstructures, patterns or configurations in the
dynamical flow will involve the deformation tensor. For classical
simple fluids, the internal energy can be determined solely by
the determinant of the deformation tensor; however, the internal
energy of complex fluids carries all the information of the
deformation tensor. The interaction between the
microscopic elastic properties and the macroscopic fluid motions
leads to the rich and complicated rheological phenomena in
viscoelastic fluids, and also causes formidable analytic and
numerical challenges in mathematical analysis.
For viscoelastic materials with significant viscosities, the equations of the compressible viscoelastic fluids of Oldroyd type (\cite{Oldroyd1, Oldroyd2}) in three spatial dimensions take the following form (\cite{CD, Gurtin, Joseph}):
\begin{equation}\label{cve}
\begin{cases}
\rho_t+{\rm div}(\rho{\bf u})=0,\\
(\rho{\bf u})_t+{\rm div}(\rho{\bf u}\otimes{\bf u})+\nabla P=\mu\Delta {\bf u}+(\mu+\lambda)\nabla{\rm div}\,{\bf u}+{\rm div}(\rho{\mathtt F}{\mathtt F}^\top),\\
{\mathtt F}_t+{\bf u}\cdot\nabla{\mathtt F}=\nabla{\bf u}\,{\mathtt F},
\end{cases}
\end{equation}
where $\rho$ stands for the density, ${\bf u}\in{\mathbb R}^3$ the velocity, ${\mathtt F}\in M^{3\times 3}$ (the set of $3\times 3$ matrices) the deformation gradient, and
$$P=A\rho^\gamma, \quad A>0, \; \gamma>1,$$
is the pressure.
The viscosity coefficients $\mu, \lambda$ are two constants satisfying
\begin{equation}\label{v2}
\mu\ge 0,\quad 3\lambda+2\mu\ge 0,
\end{equation}
which ensure that the operator $-\mu\Delta {\bf u}-(\lambda+\mu)\nabla{\rm div}\,{\bf u}$ is a strongly elliptic operator. The notation ${\bf u}\cdot\nabla{\mathtt F}$ is understood as $({\bf u}\cdot\nabla){\mathtt F}$, and ${\mathtt F}^\top$ denotes the transpose of the matrix ${\mathtt F}$.
As usual, we call the first equation in \eqref{cve} the continuity equation. For system \eqref{cve}, the corresponding elastic energy is chosen to be the special form of the Hookean linear elasticity:
$$W({\mathtt F})=\frac{1}{2}|{\mathtt F}|^2,$$
which, however, does not reduce the essential difficulties of the analysis. The methods and results of this paper can be applied to more general cases.
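To fix ideas, we recall that for a general smooth stored energy $W({\mathtt F})$ the elastic stress in models of this type is usually written as ${\rm div}\big(\rho\,W_{{\mathtt F}}({\mathtt F})\,{\mathtt F}^\top\big)$ (see the references cited above); with the Hookean choice one has
$$W_{{\mathtt F}}({\mathtt F})={\mathtt F},\qquad {\rm div}\big(\rho\,W_{{\mathtt F}}({\mathtt F})\,{\mathtt F}^\top\big)={\rm div}\big(\rho{\mathtt F}{\mathtt F}^\top\big),$$
which is exactly the elastic term appearing in \eqref{cve}; in particular, the elastic stress $\rho{\mathtt F}{\mathtt F}^\top$ is then symmetric and positive semidefinite.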
In the physical regime of negligible viscosities, \eqref{cve} becomes the system of inviscid compressible elasticity:
\begin{equation}\label{1}
\begin{cases}
\rho_t+{\rm div}(\rho{\bf u})=0,\\
(\rho{\bf u})_t+{\rm div}(\rho{\bf u}\otimes{\bf u})+\nabla P={\rm div}(\rho{\mathtt F}{\mathtt F}^\top),\\
{\mathtt F}_t+{\bf u}\cdot\nabla{\mathtt F}=\nabla{\bf u}\,{\mathtt F}.
\end{cases}
\end{equation}
We refer the readers to \cite{CD, Gurtin, Joseph, LW, RHN} for more discussions and physical background on viscoelasticity.
This paper is devoted to the study of the formation of singularities for both the inviscid elastic flow \eqref{1} and the viscoelastic flow \eqref{cve}.
Without the deformation gradient ${\mathtt F}$, the system \eqref{cve} becomes the compressible Navier-Stokes equations. There is a huge literature on solutions to the compressible Navier-Stokes equations; see Danchin \cite{Danchin}, Feireisl \cite{Feireisl}, Lions \cite{Lions}, and the references therein. In particular, global weak solutions were constructed in \cite{Feireisl, Lions} for $\gamma>\frac{3}{2}$, while in \cite{Danchin} the global existence of strong solutions in critical spaces was proved when the solution is a small perturbation of the equilibrium.
When the deformation gradient ${\mathtt F}$ does appear, the system \eqref{cve} is much more complicated than the Navier-Stokes equations, although the equation satisfied by ${\mathtt F}$ is a transport equation similar to the continuity equation. Fortunately, \eqref{cve} exhibits a list of local conservation laws which make the analysis of the global existence of strong solutions possible.
Local strong solutions to the compressible viscoelastic flow \eqref{cve} with large data were obtained in Hu-Wang \cite{HW1, HW3}, and global strong solutions to \eqref{cve} with small data in the Besov spaces were established in Hu-Wang \cite{HW2, HW4} and Qiang-Zhang \cite{Zhang}.
We remark that, for these local and global existence results, the local conservation laws are crucial for the dissipation of the deformation gradient, and the fact that the curl of the deformation gradient is of higher order (Lemma 2.1 in \cite{HW1}) is also very helpful.
For large initial data, the global existence of strong or weak solutions to \eqref{cve} is still an outstanding open problem.
Regarding the global existence of weak solutions to \eqref{cve} with large data, among all the difficulties, the rapid oscillation of the density and the incompatibility between the quadratic form and weak convergence are the main issues; namely, the local conservation laws are not enough for passing to the limit in the nonlinear terms, especially in the term related to the deformation gradient, even though those terms fit well into the div-curl framework, and higher integrability of the density or of the deformation gradient is not available up to now.
For a strong solution to \eqref{cve} with large data, although local existence was proved in \cite{HW1}, we expect that it will break down in finite time, as for strong solutions to the compressible Navier-Stokes equations (\cite{BKM, Ponce, HX}). The breakdown of smooth solutions is due to the lack of control of the hydrodynamic variables, for instance the $L^\infty$ norm of the gradient of the velocity or the $L^\infty$ norm of the density. As is well known, the $L^\infty$ norm of the gradient of the velocity controls the $L^\infty$ norms of the density and of the deformation gradient for the compressible viscoelastic fluids \eqref{cve}. In this paper, we will provide such a criterion and justify the blowup phenomena. Compared with the compressible Navier-Stokes equations, the viscoelastic flow \eqref{cve} is more complicated, and thus more delicate and new estimates are needed for the analysis of strong solutions. More precisely, the main difficulty lies in the estimates of the gradients of the density and of the deformation gradient, and the $L^\infty_t H^1_x$ bounds of $\nabla\rho$ and $\nabla{\mathtt F}$ are crucial.
We find a criterion for the breakdown of strong solutions of \eqref{cve} in terms of the temporal integral of the $L^\infty$ norm of the velocity gradient.
For the inviscid flow \eqref{1}, similar to the compressible Euler equations (Sideris \cite{ST2}), we expect that smooth solutions to the system \eqref{1} will develop singularities in finite time. We will first reformulate the system \eqref{1} as a symmetric hyperbolic system so that a local smooth solution can be obtained from \cite{Kato, Majda}. Then we will prove that the smooth solution cannot exist globally in time under some restrictions on the initial data; that is, the finite-time formation of singularities is essentially inevitable provided that the initial velocity in some region near the origin is supersonic relative to the sound speed at infinity. The proof follows the idea of Sideris \cite{ST2} with more subtle estimates on the deformation gradient.
For the incompressible viscoelastic flows and related models, there are many papers in the literature on classical solutions (cf.\ \cite{CM, CZ, KP, LLZH2, LZP} and the references therein). On the other hand, the global existence of weak solutions to the incompressible viscoelastic flows with large initial data is also an outstanding open question, although there has been some progress in that direction (\cite{LLZH, LM, LW}). For the inviscid elastodynamics, see \cite{ST} and the references therein for the global existence of classical solutions.
The rest of this paper is organized as follows. In Section 2, we explain the mechanism which ensures local existence of smooth solutions to the inviscid compressible elastic fluid and also provide a proof of the finite-time formation of singularities. In Section 3, we consider the compressible viscoelastic fluids and prove a blowup criterion in terms of the $L^\infty$ norm of the gradient of the velocity.
\section{The Inviscid Case}
In this section, we consider the formation of singularities in smooth solutions of the inviscid flow \eqref{1} in ${\mathbb R}^3$ with sufficiently smooth initial data:
\begin{equation}\label{in}
(\rho, {\bf u}, {\mathtt F})|_{t=0}=(\rho_0(x), {\bf u}_0(x), {\mathtt F}_0(x)),\qquad x\in{\mathbb R}^3.
\end{equation}
We assume that
\begin{equation}
\rho_0(x)>0 \quad \textrm{for all}\quad x\in{\mathbb R}^3,
\end{equation}
there exist positive constants $\bar{\rho}_0$ and $R>0$ such that
\begin{equation}
(\rho_0(x), {\bf u}_0(x), {\mathtt F}_0(x))=(\bar{\rho}_0,0, I) \quad \textrm{for all}\quad |x|\ge R,
\end{equation}
where $I$ is the $3\times 3$ identity matrix, and
\begin{equation}\label{a}
{\rm div}(\rho_0 {\mathtt F}^\top_0)=0.
\end{equation}
One useful property of the deformation gradient ${\mathtt F}$ is the following (see Lemma 6.1 in \cite{HW2}):
\begin{Lemma}\label{div}
If $(\rho,{\bf u},{\mathtt F})$ is a smooth solution of \eqref{1}, and $\rho,{\mathtt F}$ initially satisfy \eqref{a}, then the following identity holds for any time:
\begin{equation}\label{div1}
{\rm div}(\rho{\mathtt F}^\top)=0.
\end{equation}
\end{Lemma}
Under the assumption \eqref{a} and using Lemma \ref{div}, we can rewrite the system \eqref{1} for smooth solutions $(\rho, {\bf u}, {\mathtt F})$ with $\rho>0$ as:
\begin{equation}\label{111}
\begin{cases}
\displaystyle \frac{1}{\rho c^2}\frac{d \hat{P}}{dt}+{\rm div}\,{\bf u}=0,\\
\displaystyle \frac{d{\bf u}}{dt}+\nabla \hat{P}={\mathtt F}_{jk}\nabla_{x_j}{\mathtt F}_{ik}:=\sum_{k=1}^3({\mathtt F}_k\cdot\nabla{\mathtt F}_k),\\
\displaystyle \frac{d{\mathtt F}_{i}}{dt}=\nabla{\bf u}\,{\mathtt F}_i,
\end{cases}
\end{equation}
where
$$\frac{d}{dt}=\frac{\partial}{\partial t}+{\bf u}\cdot\nabla,\quad
c^2=A\gamma(\gamma-1) \rho^{\gamma-2}, \quad
\hat{P}=\frac{A\gamma}{\gamma-1}\rho^{\gamma-1},$$
and ${\mathtt F}_i$ is the $i$-th column of the matrix ${\mathtt F}$.
Set
$$
V=
\begin{bmatrix}
\hat{P}\\ {\bf u}\\ {\mathtt F}_1\\ {\mathtt F}_2\\ {\mathtt F}_3
\end{bmatrix}, \quad
A_0=
\begin{bmatrix}
\frac{1}{\rho c ^2}& 0\\
0&I_{12}
\end{bmatrix},
$$
and
$$
A_i=\begin{bmatrix}
\frac{{\bf u}_i}{\rho c^2}& e_i& 0& 0& 0\\
e_i^\top& \rho{\bf u}_i I& -{\mathtt F}_{i1}I& -{\mathtt F}_{i2}I& -{\mathtt F}_{i3}I\\
0& -{\mathtt F}_{i1}I& {\bf u}_i I& 0& 0\\
0& -{\mathtt F}_{i2}I& 0&{\bf u}_i I& 0\\
0& -{\mathtt F}_{i3}I& 0& 0& {\bf u}_i I
\end{bmatrix}, \quad i=1, 2, 3,
$$
where $\{e_1,e_2, e_3\}$ is the standard basis of ${\mathbb R}^3$, $I_{12}$ is the $12\times 12$ identity matrix, $e_i^\top$ is the transpose of $e_i$, and $I$ is again the $3\times 3$ identity matrix.
Then, in view of \eqref{111}, the system \eqref{1} can be written as a symmetric hyperbolic system of the form
\begin{equation}\label{sym}
A_0V_t+\sum_{i=1}^3A_i V_{x_i}=0,
\end{equation}
with $A_0(V)>0$ and with $A_i(V)$ a $13\times 13$ symmetric matrix for each $i=1,2,3$. According to the well-known results in \cite{Kato, Majda}, the hyperbolic system of conservation laws \eqref{sym} admits a local $C^1$ solution on some time interval $[0,T)$, provided the initial data are sufficiently regular; moreover, $\rho>0$ on ${\mathbb R}^3\times[0,T)$.
Since $\rho_0(x)=\bar{\rho}_0$, ${\bf u}_0(x)=0$, and ${\mathtt F}_0(x)=I$ for all $|x|\ge R$, the sound speed at infinity, $\sigma$, is given by
$$\sigma=\left(\frac{\partial P}{\partial\rho}(\bar{\rho}_0)\right)^{\frac{1}{2}}
=\left(A\gamma\bar{\rho}_0^{\gamma-1}\right)^{\frac{1}{2}}.$$
The following proposition is an immediate consequence of local energy estimates (cf.\ \cite{ST1}). It simply states that the maximum speed of propagation of the front of a smooth disturbance is governed by $\sigma$.
\begin{Proposition}\label{p1}
If $(\rho,{\bf u}, {\mathtt F})\in C^1({\mathbb R}^3\times[0,T))$ is a solution of \eqref{1} and \eqref{in}, then
$$(\rho,{\bf u}, {\mathtt F})=(\bar{\rho}_0, 0, I) \text{ for all } |x|\ge \sigma t+R \text{ and } 0\le t<T.$$
\end{Proposition}
As for the compressible Euler equations (Sideris \cite{ST2}), we expect that the smooth solution to the system \eqref{1} will develop singularities in finite time. The result in the following theorem shows that the $C^1$ solution to \eqref{1} and \eqref{in} does not exist globally in time under some restrictions on the initial data; more precisely, it states that the finite-time formation of singularities in the three-dimensional inviscid compressible elastic fluid is essentially inevitable provided that the initial flow velocity, in some region near the origin, is supersonic relative to the sound speed at infinity.
In order to state our result, we define
$$m(t)=\int_{{\mathbb R}^3}(\rho(x,t)-\bar{\rho}_0)dx,$$
$${\mathcal F}(t)=\int_{{\mathbb R}^3}\rho(x,t)\,x\cdot{\bf u}(x,t)dx,$$
$${\mathcal E}(t)=\int_{{\mathbb R}^3}
\left(\frac{1}{2}\rho|{\bf u}|^2+\frac{1}{2}\rho|{\mathtt F}-I|^2+\frac{P-P_0}{\gamma-1}\right)dx,$$
with
$$P_0=P(\bar\rho_0)=A\bar\rho_0^\gamma,$$
and
$$D(t)=\{x\in {\mathbb R}^3: |x|\le \sigma t+R\}.$$
\begin{Remark}\label{p1b}
Proposition \ref{p1} implies that the integrands of $m(t)$, ${\mathcal F}(t)$, and ${\mathcal E}(t)$ are identically equal to zero outside $D(t)$.
\end{Remark}
For two $3\times 3$ matrices $A$ and $B$, the following notations will be used:
$$A:B=\sum_{i,j=1}^3A_{ij}B_{ij},\quad|A|^2=\sum_{i,j=1}^3A_{ij}^2.$$
\begin{Lemma}\label{ei}
Let $(\rho,{\bf u}, {\mathtt F})$ be a $C^1$ solution of \eqref{1} with initial data \eqref{in}-\eqref{a}. Then ${\mathcal E}(t)$ is conserved; that is,
\begin{equation}\label{EE}
{\mathcal E}'(t)=0,\qquad {\mathcal E}(t)={\mathcal E}(0),
\end{equation}
for all $t>0$.
\end{Lemma}
\begin{proof}
Multiplying the second equation in \eqref{1} by ${\bf u}$ and using the first equation in \eqref{1} yield
\begin{equation}\label{222}
\begin{split}
\frac{d}{dt}\int_{{\mathbb R}^3}\left(\frac{1}{2}\rho|{\bf u}|^2+\frac{P-P_0}{\gamma-1}\right)dx
=-\int_{{\mathbb R}^3}\rho{\mathtt F}{\mathtt F}^\top:\nabla{\bf u}\, dx.
\end{split}
\end{equation}
On the other hand, from the third and then the first equations of \eqref{1}, one deduces that
\begin{equation*}
\begin{split}
\frac{\partial}{\partial t}\left(\rho |{\mathtt F}|^2\right)&=\frac{\partial \rho}{\partial t}|{\mathtt F}|^2+2\rho {\mathtt F}:\frac{\partial {\mathtt F}}{\partial t}\\
&=\frac{\partial \rho}{\partial t}|{\mathtt F}|^2+2\rho {\mathtt F}:(\nabla{\bf u} \,{\mathtt F}-{\bf u}\cdot\nabla {\mathtt F})\\
&=\frac{\partial \rho}{\partial t}|{\mathtt F}|^2+2\rho {\mathtt F}:(\nabla{\bf u} \,{\mathtt F})-\rho{\bf u}\cdot\nabla |{\mathtt F}|^2\\
&=\frac{\partial \rho}{\partial t}|{\mathtt F}|^2+2\rho {\mathtt F}:(\nabla{\bf u} \,{\mathtt F})+{\rm div}(\rho{\bf u})|{\mathtt F}|^2-{\rm div}(\rho{\bf u}|{\mathtt F}|^2)\\
&=2\rho {\mathtt F}:(\nabla{\bf u} \,{\mathtt F})-{\rm div}(\rho{\bf u}|{\mathtt F}|^2).
\end{split}
\end{equation*}
Since
\begin{equation*}
\begin{split}
\rho|{\mathtt F}-I|^2&=\rho\sum_{i\neq j}{\mathtt F}_{ij}^2+\rho\sum_{i=1}^3({\mathtt F}_{ii}-1)^2\\
&=\rho\sum_{i,j}{\mathtt F}_{ij}^2-2\rho\sum_{i=1}^3{\mathtt F}_{ii}+3\rho\\
&=\rho|{\mathtt F}|^2-2\rho\,\mathrm{tr}({\mathtt F}-I)-3\rho,
\end{split}
\end{equation*}
where $\mathrm{tr}({\mathtt F}-I)$ denotes the trace of the matrix ${\mathtt F}-I$, then
$$\rho|{\mathtt F}|^2=\rho |{\mathtt F}-I|^2+2\rho\,\mathrm{tr}({\mathtt F}-I)+3(\rho-\bar\rho_0)+3\bar\rho_0,$$
thus
$$\frac{\partial}{\partial t}\left(\rho |{\mathtt F}-I|^2+2\rho\,\mathrm{tr}({\mathtt F}-I)+3(\rho-\bar\rho_0)\right)=2\rho {\mathtt F}:(\nabla{\bf u} \,{\mathtt F})-{\rm div}(\rho{\bf u}|{\mathtt F}|^2).$$
Integrating the above equality, we arrive at
\begin{equation}\label{2222}
\begin{split}
&\frac{1}{2}\frac{d}{dt}\int_{{\mathbb R}^3}
\left(\rho |{\mathtt F}-I|^2+2\rho\,\mathrm{tr}({\mathtt F}-I)+3(\rho-\bar\rho_0)\right)dx\\
&=\int_{{\mathbb R}^3}\rho {\mathtt F}:(\nabla{\bf u}\,{\mathtt F})dx=\int_{{\mathbb R}^3}\rho{\mathtt F}{\mathtt F}^\top:\nabla{\bf u}\, dx.
\end{split}
\end{equation}
Adding \eqref{222} and \eqref{2222} together, one has
\begin{equation}\label{22222}
\frac{d}{dt}\int_{{\mathbb R}^3}
\left(\frac{1}{2}\rho|{\bf u}|^2+\frac{1}{2}\rho|{\mathtt F}-I|^2
+\rho\,\mathrm{tr}({\mathtt F}-I)+\frac32(\rho-\bar\rho_0)
+\frac{P-P_0}{\gamma-1}\right)dx=0.
\end{equation}
We note from the first equation of \eqref{1} that
\begin{equation}\label{TT1}
m'(t)=\frac{d}{dt}\int_{{\mathbb R}^3}\left(\rho-\bar\rho_0\right)dx
=-\int_{{\mathbb R}^3}{\rm div} (\rho{\bf u})dx=0,
\end{equation}
thus,
\begin{equation}\label{TT2}
m(t)=m(0).
\end{equation}
Due to Lemma \ref{div}, we have, using integration by parts,
$$\int_{{\mathbb R}^3}\rho{\mathtt F}^\top:\nabla{\bf u}\, dx=0.$$
From the first and third equations in \eqref{1}, it is easy to deduce that
$$\partial_t(\rho\,\mathrm{tr}({\mathtt F}-I))
+{\rm div}(\rho\,\mathrm{tr}({\mathtt F}-I)\,{\bf u})=\rho{\mathtt F}^\top:\nabla{\bf u}.$$
Integrating the above equality yields
\begin{equation}\label{TT3}
\frac{d}{dt}\int_{{\mathbb R}^3}\rho\,\mathrm{tr}({\mathtt F}-I)\,dx=0,
\end{equation}
and then
\begin{equation}\label{TT4}
\int_{{\mathbb R}^3}\rho\,\mathrm{tr}({\mathtt F}-I)\,dx=\int_{{\mathbb R}^3}\rho_0\,\mathrm{tr}({\mathtt F}_0-I)\,dx.
\end{equation}
Substituting \eqref{TT1} and \eqref{TT3} into \eqref{22222}, we have
\begin{equation}\label{22222b}
\frac{d}{dt}\int_{{\mathbb R}^3}
\left(\frac{1}{2}\rho|{\bf u}|^2+\frac{1}{2}\rho|{\mathtt F}-I|^2
+\frac{P-P_0}{\gamma-1}\right)dx=0.
\end{equation}
Therefore, \eqref{EE} follows from \eqref{22222b}. The proof is complete.
\end{proof}
Now we state and prove the finite-time formation of singularities for \eqref{1}.
\begin{Theorem}\label{T1}
Let $(\rho,{\bf u}, {\mathtt F})\in C^1({\mathbb R}^3\times[0,T))$ be a solution of \eqref{1} with initial data \eqref{in}-\eqref{a}. If
\begin{gather}
m(0)\ge 0,\label{FF1}\\
{\mathcal F}(0)>\frac{16\pi}{3}\sigma R^4\|\rho_0\|_{L^\infty}, \label{FF}
\end{gather}
and
\begin{equation}\label{a2}
\int_{{\mathbb R}^3}\rho_0\,\mathrm{tr}(I-{\mathtt F}_0)dx \ge 2{\mathcal E}(0),
\end{equation}
then $T$ is necessarily finite.
\end{Theorem}
\begin{proof}
Let $(\rho,{\bf u},{\mathtt F})$ be a $C^1$ solution of \eqref{1} and \eqref{in}. Then, ${\mathcal F}\in C^1[0,T)$ and
$${\mathcal F}'(t)=\int_{{\mathbb R}^3}x\cdot(\rho_t{\bf u}+\rho{\bf u}_t)dx.$$
It follows from Proposition \ref{p1}, the first two equations in \eqref{1}, and integration by parts, that
\begin{equation}\label{5}
\begin{split}
{\mathcal F}'(t)=\int_{D(t)}\rho|{\bf u}|^2dx+3\int_{D(t)}(P-P_0)dx+\int_{D(t)}x\cdot{\rm div}\left(\rho{\mathtt F}{\mathtt F}^\top\right)dx.
\end{split}
\end{equation}
Using the H\"{o}lder inequality, \eqref{TT2} and \eqref{FF1}, we have
\begin{equation*}
\begin{split}
\int_{D(t)}Pdx&=A\int_{D(t)}\rho^\gamma dx\\
&\ge A(\mathrm{vol}\,D(t))^{1-\gamma}\left(\int_{D(t)}\rho dx\right)^\gamma\\
&=A(\mathrm{vol}\,D(t))^{1-\gamma}\left(m(0)+\mathrm{vol}\,D(t)\,\bar\rho_0\right)^\gamma\\
&\ge \mathrm{vol}\,D(t)\,A\bar\rho_0^\gamma =\int_{D(t)}P_0 dx,
\end{split}
\end{equation*}
thus
\begin{equation}\label{7}
\int_{D(t)}(P-P_0)dx\ge 0.
\end{equation}
Note that Lemma \ref{div} implies
$${\rm div}(\rho{\mathtt F}{\mathtt F}^\top)={\rm div}(\rho({\mathtt F}-I)({\mathtt F}-I)^\top)+{\rm div}(\rho({\mathtt F}-I)).$$
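Indeed, this identity follows by expanding $({\mathtt F}-I)({\mathtt F}-I)^\top={\mathtt F}{\mathtt F}^\top-{\mathtt F}-{\mathtt F}^\top+I$, so that
$$\rho{\mathtt F}{\mathtt F}^\top=\rho({\mathtt F}-I)({\mathtt F}-I)^\top+\rho({\mathtt F}-I)+\rho({\mathtt F}^\top-I)+\rho I,$$
and by observing that, by Lemma \ref{div}, ${\rm div}(\rho({\mathtt F}^\top-I))={\rm div}(\rho{\mathtt F}^\top)-\nabla\rho=-\nabla\rho$, which cancels ${\rm div}(\rho I)=\nabla\rho$.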
Then the divergence theorem yields, using Lemma \ref{ei}, \eqref{TT4}, \eqref{a2} and \eqref{7},
\begin{equation}\label{6}
\begin{split}
&\int_{D(t)}x\cdot{\rm div}(\rho{\mathtt F}{\mathtt F}^\top)dx\\
&=\int_{D(t)}x\cdot{\rm div}(\rho({\mathtt F}-I)({\mathtt F}-I)^\top)dx+\int_{D(t)}x\cdot{\rm div}(\rho({\mathtt F}-I))dx\\
&=-\int_{D(t)}I:\rho({\mathtt F}-I)({\mathtt F}-I)^\top dx-\int_{D(t)}I:\rho({\mathtt F}-I) dx\\
&=-\int_{D(t)}\rho|{\mathtt F}-I|^2 dx-\int_{D(t)}\rho\,\mathrm{tr}({\mathtt F}-I) dx\\
&\ge-2{\mathcal E}(t)-\int_{D(t)}\rho\,\mathrm{tr}({\mathtt F}-I) dx\\
&=-2{\mathcal E}(0)+\int_{{\mathbb R}^3}\rho_0\,\mathrm{tr}(I-{\mathtt F}_0)dx\ge 0.
\end{split}
\end{equation}
Therefore, \eqref{5}-\eqref{6} yield
\begin{equation}\label{8}
{\mathcal F}'(t)\ge\int_{D(t)}\rho|{\bf u}|^2dx.
\end{equation}
On the other hand, \eqref{8} and the Cauchy-Schwarz inequality lead to
\begin{equation}\label{9}
\begin{split}
{\mathcal F}'(t)&\ge {\mathcal F}(t)^2\left(\int_{D(t)}|x|^2\rho dx\right)^{-1}\\
&\ge {\mathcal F}(t)^2(\sigma t+R)^{-2}\left(\int_{D(t)}\rho dx\right)^{-1}\\
&={\mathcal F}(t)^2(\sigma t+R)^{-2}\left(\int_{D(t)}\rho_0(x)dx\right)^{-1}\\
&\ge\left(\frac{4\pi}{3}(\sigma t+R)^5\|\rho_0\|_{L^\infty}\right)^{-1}{\mathcal F}(t)^2,
\end{split}
\end{equation}
where
$$\|\rho_0\|_{L^\infty}=\sup_{x\in{\mathbb R}^3}\rho_0(x).$$
Since ${\mathcal F}(0)>0$, \eqref{9} implies that ${\mathcal F}(t)>0$ for $0\le t<T$, and that
\begin{equation*}
\begin{split}
{\mathcal F}(0)^{-1}&\ge {\mathcal F}(0)^{-1}-{\mathcal F}(T)^{-1}\\
&\ge \frac{3}{16\pi \sigma\|\rho_0\|_{L^\infty}}\left(\frac{1}{R^4}-\frac{1}{(\sigma T+R)^4}\right),
\end{split}
\end{equation*}
which shows that $T$ cannot become arbitrarily large without contradicting the assumption \eqref{FF}. Therefore the proof is complete.
\end{proof}
\begin{Remark}
Taking $\rho_0(x)=\bar{\rho}_0$, we see that \eqref{FF1}-\eqref{a2} can be satisfied if ${\bf u}_0$ is supersonic relative to the sound speed $\sigma$, the diagonal entries satisfy $[I-{\mathtt F}_0]_{ii}\in (0,1)$, $i=1, 2, 3$, and $\bar\rho_0$ is not very large.
\end{Remark}
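As a purely illustrative aside, the size condition \eqref{FF} is easy to evaluate numerically. The following short Python sketch, with hypothetical values of $A$, $\gamma$, $\bar\rho_0$ and $R$ chosen only for demonstration, computes the sound speed at infinity and the threshold that ${\mathcal F}(0)$ must exceed when $\rho_0\equiv\bar\rho_0$.
\begin{verbatim}
import math

def sound_speed(A, gamma, rho_bar):
    # sigma = sqrt(dP/drho) at rho_bar, with P = A * rho**gamma
    return math.sqrt(A * gamma * rho_bar ** (gamma - 1))

def threshold_F0(A, gamma, rho_bar, R):
    # right-hand side of (FF): (16*pi/3) * sigma * R**4 * ||rho_0||_Linf,
    # here with rho_0 = rho_bar, so that ||rho_0||_Linf = rho_bar
    return 16.0 * math.pi / 3.0 * sound_speed(A, gamma, rho_bar) * R ** 4 * rho_bar

if __name__ == "__main__":
    A, gamma, rho_bar, R = 1.0, 1.4, 1.0, 1.0   # hypothetical values
    print("sigma =", sound_speed(A, gamma, rho_bar))
    print("F(0) must exceed", threshold_F0(A, gamma, rho_bar, R))
\end{verbatim}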
\section{The Viscous Case}
This section is devoted to the study of the formation of singularities and breakdown, especially of blowup criteria, for smooth solutions of the compressible viscoelastic fluids \eqref{cve} in ${\mathbb R}^3$ with sufficiently smooth initial data:
\begin{equation}\label{in2}
(\rho, {\bf u}, {\mathtt F})|_{t=0}=(\rho_0(x), {\bf u}_0(x), {\mathtt F}_0(x)),\qquad x\in{\mathbb R}^3.
\end{equation}
The goal is to obtain a blowup criterion in terms of the $L^\infty$ norm of the gradient of the velocity. In other words, the $L^\infty$ norm of the velocity gradient controls the blowup for the compressible viscoelastic fluids. For this purpose, we need to assume that
\begin{equation}\label{va}
7\mu>\lambda.
\end{equation}
Obviously, this condition is fulfilled, in particular, if the viscosities $\mu$ and $\lambda$ satisfy the physical condition \eqref{v2} and $\lambda\le 0$ simultaneously.
Denote $D^k$ and $D^k_0$ as:
$$D^k({\mathbb R}^3)
:=\{f\in L^1_{loc}({\mathbb R}^3): \, \|\nabla^k f\|_{L^2({\mathbb R}^3)}<\infty\}$$ and
$$D_0^k({\mathbb R}^3):=\{f\in L^6({\mathbb R}^3): \, \|\nabla^k f\|_{L^2({\mathbb R}^3)}<\infty\}.$$
Now, our blowup result for the system \eqref{cve} takes the following form:
\begin{Theorem}\label{vMT}
Assume that the initial data satisfy
\begin{equation}\label{v3}
0\le\rho_0\in H^3({\mathbb R}^3),\quad {\bf u}_0\in D_0^1({\mathbb R}^3)\cap D^3({\mathbb R}^3),\quad {\mathtt F}_0\in H^3({\mathbb R}^3),\quad{\rm div}(\rho_0{\mathtt F}_0^\top)=0
\end{equation}
and
\begin{equation}\label{v4}
-\mu\Delta {\bf u}_0-(\lambda+\mu)\nabla{\rm div}\,{\bf u}_0+A\nabla\rho_0^\gamma=\rho_0 g
\end{equation}
for some $g\in H^1({\mathbb R}^3)$ with $\sqrt{\rho_0}g\in L^2({\mathbb R}^3)$.
Let $(\rho,{\bf u},{\mathtt F})$ be a classical solution to the system \eqref{cve} satisfying
\begin{equation}\label{v5}
\begin{cases}
(\rho, {\mathtt F})\in C([0,T^*], H^3({\mathbb R}^3)),\\
{\bf u}\in C([0,T^*], D_0^1({\mathbb R}^3)\cap D^3({\mathbb R}^3))\cap L^2(0,T^*; D^4({\mathbb R}^3)),\\
{\bf u}_t\in L^\infty(0,T^*; D_0^1({\mathbb R}^3))\cap L^2(0, T^*; D^2({\mathbb R}^3)),\\
\sqrt{\rho}{\bf u}_t\in L^\infty(0,T^*; L^2({\mathbb R}^3)),
\end{cases}
\end{equation}
and let $T^*$ be the maximal existence time. If $T^*<\infty$ and \eqref{va} holds, then
\begin{equation}\label{v6}
\lim_{T\rightarrow T^*}\int_0^T\|\nabla{\bf u}\|_{L^\infty({\mathbb R}^3)}dt=\infty.
\end{equation}
\end{Theorem}
The local existence up to a possible finite time $T^*$ can be established in the functional framework \eqref{v5} with the compatibility condition \eqref{v4}, in the spirit of the corresponding results for the compressible Navier-Stokes equations (see for example \cite{CK}); thus we omit the details in this paper and focus on the breakdown of smooth solutions.
To prove Theorem \ref{vMT}, the main difficulty lies in the estimates of the gradients of the density and the deformation gradient provided that the quantity in \eqref{v6} is finite. In fact, the key estimates in our analysis are the $L^\infty_t H^1_x$ bounds of $\nabla\rho$ and $\nabla{\mathtt F}$.
\subsection{Regularity of solutions}
In this subsection, we will derive a number of regularity estimates for solutions $(\rho,{\bf u},{\mathtt F})$ to the system \eqref{cve} provided
\begin{equation}\label{v7}
\lim_{T\rightarrow T^*}\int_0^T\|\nabla{\bf u}\|_{L^\infty({\mathbb R}^3)}dt<\infty.
\end{equation}
The standard energy estimate gives
$$\sup_{0\le t\le T}\Big(\|\sqrt{\rho}{\bf u}\|^2_{L^2}+\|\rho\|_{L^\gamma}
+\|\sqrt{\rho}{\mathtt F}\|_{L^2}^2\Big)+\int_0^T\|\nabla{\bf u}\|_{L^2}^2dt\le C,\quad 0\le T\le T^*.$$
Moreover, from the two transport equations for the density and the deformation gradient, the assumption \eqref{v7} yields the $L^\infty$ bounds for the density and the deformation gradient, respectively.
The first step for the regularity of solutions is to improve the integrability of the velocity. In fact, we have
\begin{Lemma}
Under the assumption \eqref{va}, there exists a small $\delta>0$ such that
$$\sup_{0\le t\le T}\int_{{\mathbb R}^3}\rho|{\bf u}|^{3+\delta}dx\le C, \quad 0<T<T^*,$$
where $C$ is a positive constant depending only on $\|\rho\|_{L^\infty}$ and $\|{\mathtt F}\|_{L^\infty}$.
\end{Lemma}
\begin{proof}
The argument is similar to that of \cite{HX}. For $q>3$, we multiply the second equation in \eqref{cve} by $q|{\bf u}|^{q-2}{\bf u}$ and integrate over ${\mathbb R}^3$ to obtain, using the conservation of mass,
\begin{equation}\label{v8}
\begin{split}
&\frac{d}{dt}\int_{{\mathbb R}^3}\rho|{\bf u}|^qdx+\int_{{\mathbb R}^3}\Big(q|{\bf u}|^{q-2}\big(\mu|\nabla{\bf u}|^2+(\lambda+\mu)|{\rm div}\,{\bf u}|^2+\mu(q-2)|\nabla|{\bf u}||^2\big)\\
&\qquad\qquad\qquad\qquad\qquad+q(\lambda+\mu)(\nabla|{\bf u}|^{q-2})\cdot{\bf u}\,{\rm div}\,{\bf u}\Big)dx\\
& =Aq\int_{{\mathbb R}^3}{\rm div}(|{\bf u}|^{q-2}{\bf u})\rho^\gamma dx-q\int_{{\mathbb R}^3}\rho{\mathtt F}{\mathtt F}^\top:\nabla(|{\bf u}|^{q-2}{\bf u}) dx\\
& \le C\int_{{\mathbb R}^3}\sqrt{\rho}|{\bf u}|^{q-2}|\nabla{\bf u}|dx\le
\varepsilon\int_{{\mathbb R}^3}|{\bf u}|^{q-2}|\nabla{\bf u}|^2dx+C\int_{{\mathbb R}^3}\rho|{\bf u}|^{q-2}dx\\
& \le\varepsilon\int_{{\mathbb R}^3}|{\bf u}|^{q-2}|\nabla{\bf u}|^2dx+C\left(\int_{{\mathbb R}^3}\rho|{\bf u}|^{q}dx\right)^{\frac{q-2}{q}},
\end{split}
\end{equation}
for $\varepsilon>0$.
As in \cite{HX}, if $7\mu>\lambda$, one has
\begin{equation}\label{v9}
\begin{split}
&q|{\bf u}|^{q-2}\Big(\mu|\nabla{\bf u}|^2+(\lambda+\mu)|{\rm div}\,{\bf u}|^2+\mu(q-2)|\nabla|{\bf u}||^2\Big)\\
&\qquad+q(\lambda+\mu)(\nabla|{\bf u}|^{q-2})\cdot{\bf u}\,{\rm div}\,{\bf u}\\
&\ge C|{\bf u}|^{q-2}|\nabla{\bf u}|^2.
\end{split}
\end{equation}
Substituting \eqref{v9} into \eqref{v8}, and taking $\varepsilon$ small enough, we get by Gronwall's inequality,
$$\sup_{0\le t\le T}\int_{{\mathbb R}^3}\rho|{\bf u}|^{3+\delta}dx\le C, \quad 0<T<T^*,$$
for some small $\delta>0$.
\end{proof}
The second step for the regularity concerns the integrability of the material derivative of the velocity, which is useful for the bounds on $\nabla\rho$. To this end, we have
\begin{Lemma}
Let $$G=\rho{\bf u}_t+\rho{\bf u}\cdot\nabla{\bf u}.$$ Then,
$$\int_0^T\!\!\int_{{\mathbb R}^3}G^2dxdt+\sup_{0\le t\le T}\int_{{\mathbb R}^3}|\nabla{\bf u}|^2dx\le
C\int_0^T\!\!\int_{{\mathbb R}^3}(|\nabla\rho|^2+|\nabla{\mathtt F}|^2)dxdt+C$$ for all $0\le T\le T^*$.
\end{Lemma}
\begin{proof}
From the $L^\infty$ bound of $\rho$, we have
\begin{equation}\label{v10}
\int_0^T\!\!\int_{{\mathbb R}^3}G^2dxdt\le
C\int_0^T\!\!\int_{{\mathbb R}^3}\rho{\bf u}_t^2dxdt+2\int_0^T\!\!\int_{{\mathbb R}^3}|\rho{\bf u}\cdot\nabla{\bf u}|^2dxdt.
\end{equation}
We proceed to estimate the right-hand side of \eqref{v10} term by term as follows. For the last term of \eqref{v10}, we have
\begin{equation}\label{v11}
\begin{split}
&\int_0^T\!\!\int_{{\mathbb R}^3}\rho^2|{\bf u}|^2|\nabla{\bf u}|^2dxdt \\
&\le C\int_0^T\|\sqrt{\rho}{\bf u}\|_{L^4}^2\|\nabla{\bf u}\|_{L^4}^2dt\\
&\le C\int_0^T\|\sqrt{\rho}{\bf u}\|^\alpha_{L^{3+\delta}}\|\sqrt{\rho}{\bf u}\|_{L^6}^{2-\alpha}\|\nabla{\bf u}\|_{L^4}^2dt\\
&\le C\int_0^T\|\nabla{\bf u}\|_{L^2}^{3-\alpha}\|\nabla{\bf u}\|_{L^\infty}dt\\
&\le C\sup_{0\le T\le T^*}\|\nabla{\bf u}\|_{L^2}^{3-\alpha}\int_0^T\|\nabla{\bf u}\|_{L^\infty}dt\\
&\le C\sup_{0\le T\le T^*}\|\nabla{\bf u}\|_{L^2}^{3-\alpha},
\end{split}
\end{equation}
where
$$\frac{\alpha}{3+\delta}+\frac{2-\alpha}{6}=\frac12, \quad
1<\alpha=\frac{3+\delta}{3-\delta}<2.$$
For the first term, we multiply the second equation of \eqref{cve} by ${\bf u}_t$ and integrate over ${\mathbb R}^3$ to obtain
\begin{equation}\label{v12}
\begin{split}
&\int_{{\mathbb R}^3}\rho{\bf u}_t^2dx+\int_{{\mathbb R}^3}\rho{\bf u}\cdot\nabla{\bf u}\cdot{\bf u}_t dx+\frac{d}{dt}\int_{{\mathbb R}^3}\Big(\frac{\mu}{2}|\nabla{\bf u}|^2+\frac{\lambda+\mu}{2}|{\rm div}\,{\bf u}|^2\Big)dx\\
&\quad=A\int_{{\mathbb R}^3}\rho^\gamma{\rm div}\,{\bf u}_t dx-\int_{{\mathbb R}^3}\rho{\mathtt F}{\mathtt F}^\top:\nabla{\bf u}_tdx.
\end{split}
\end{equation}
From the first equation and the third equation in \eqref{cve}, we deduce that
\begin{equation}\label{v13}
\begin{split}
&\int_{{\mathbb R}^3}\rho^\gamma{\rm div}\,{\bf u}_t dx\\
&=\frac{d}{dt}\int_{{\mathbb R}^3}\rho^\gamma{\rm div}\,{\bf u} dx-\gamma\int_{{\mathbb R}^3}\rho_t\rho^{\gamma-1}{\rm div}\,{\bf u} dx\\
&=\frac{d}{dt}\int_{{\mathbb R}^3}\rho^\gamma{\rm div}\,{\bf u} dx+\gamma\int_{{\mathbb R}^3}{\rm div}(\rho{\bf u})\rho^{\gamma-1}{\rm div}\,{\bf u} dx\\
&=\frac{d}{dt}\int_{{\mathbb R}^3}\rho^\gamma{\rm div}\,{\bf u} dx-\int_{{\mathbb R}^3}\rho^{\gamma}{\bf u}\cdot\nabla{\rm div}\,{\bf u} dx+(\gamma-1)\int_{{\mathbb R}^3}\rho^\gamma|{\rm div}\,{\bf u}|^2dx\\
&\le\frac{d}{dt}\int_{{\mathbb R}^3}\rho^\gamma{\rm div}\,{\bf u} dx+C\|\sqrt{\rho}{\bf u}\|_{L^2}\|\nabla^2{\bf u}\|_{L^2({\mathbb R}^3)}+C\|\nabla{\bf u}\|_{L^2({\mathbb R}^3)}^2\\
&\le\frac{d}{dt}\int_{{\mathbb R}^3}\rho^\gamma{\rm div}\,{\bf u} dx+C+\varepsilon\|G\|_{L^2({\mathbb R}^3)}^2+C\|\nabla\rho\|_{L^2({\mathbb R}^3)}^2\\
&\quad+C\|\nabla{\mathtt F}\|_{L^2({\mathbb R}^3)}^2+C\|\nabla{\bf u}\|_{L^2({\mathbb R}^3)}^2
\end{split}
\end{equation}
and
\begin{equation}\label{v14}
\begin{split}
&\int_{{\mathbb R}^3}\rho{\mathtt F}{\mathtt F}^\top:\nabla{\bf u}_tdx\\
&=\frac{d}{dt}\int_{{\mathbb R}^3}\rho{\mathtt F}{\mathtt F}^\top:\nabla{\bf u} dx-\int_{{\mathbb R}^3}\partial_t(\rho{\mathtt F}{\mathtt F}^\top):\nabla{\bf u} dx\\
&\le\frac{d}{dt}\int_{{\mathbb R}^3}\rho{\mathtt F}{\mathtt F}^\top:\nabla{\bf u} dx-\int_{{\mathbb R}^3}\rho{\mathtt F}_{ik}{\mathtt F}_{jk}{\bf u}\cdot\nabla\partial_j{\bf u}^i dx\\
&\quad +C\int_{{\mathbb R}^3}\rho|{\mathtt F}|^2|\nabla{\bf u}|^2dx\\
&\le \frac{d}{dt}\int_{{\mathbb R}^3}\rho{\mathtt F}{\mathtt F}^\top:\nabla{\bf u} dx+C\|\sqrt{\rho}{\bf u}\|_{L^2}\|\nabla^2{\bf u}\|_{L^2({\mathbb R}^3)}+C\|\nabla{\bf u}\|_{L^2({\mathbb R}^3)}^2\\
&\le \frac{d}{dt}\int_{{\mathbb R}^3}\rho{\mathtt F}{\mathtt F}^\top:\nabla{\bf u} dx+C+\varepsilon\|G\|_{L^2({\mathbb R}^3)}^2+C\|\nabla\rho\|_{L^2({\mathbb R}^3)}^2\\
&\quad+C\|\nabla{\mathtt F}\|_{L^2({\mathbb R}^3)}^2+C\|\nabla{\bf u}\|_{L^2({\mathbb R}^3)}^2,
\end{split}
\end{equation}
where we used
$$\partial_t(\rho{\mathtt F}{\mathtt F}^\top)+{\bf u}\cdot\nabla(\rho{\mathtt F}{\mathtt F}^\top)=\nabla{\bf u}(\rho{\mathtt F}{\mathtt F}^\top)+\rho{\mathtt F}{\mathtt F}^\top(\nabla{\bf u})^\top-\rho{\mathtt F}{\mathtt F}^\top{\rm div}\,{\bf u}.$$
Note that
\begin{equation*}
\begin{split}
&\left|\int_0^T\!\!\int_{{\mathbb R}^3}\rho{\bf u}\cdot\nabla{\bf u}\cdot{\bf u}_tdxdt\right|\\
&\le \frac12\int_0^T\!\!\int_{{\mathbb R}^3}\rho{\bf u}_t^2dxdt+\int_0^T\!\!\int_{{\mathbb R}^3}\rho|{\bf u}\cdot\nabla{\bf u}|^2dxdt\\
&\le\frac12\int_0^T\!\!\int_{{\mathbb R}^3}\rho{\bf u}_t^2dxdt+C\sup_{0\le T<T^*}\|\nabla{\bf u}\|_{L^2}^{3-\alpha}.
\end{split}
\end{equation*}
Substituting \eqref{v13} and \eqref{v14} into \eqref{v12}, using Gronwall's inequality and the Cauchy-Schwarz inequality, we have
$$\sup_{0\le t\le T^*}\|\nabla{\bf u}\|_{L^2({\mathbb R}^3)}\le C$$
and
\begin{equation}\label{v15}
\begin{split}
&\int_0^T\!\!\int_{{\mathbb R}^3}\rho{\bf u}_t^2dxdt\\
&\le C\int_0^T\!\!\int_{{\mathbb R}^3}(|\nabla\rho|^2+|\nabla{\mathtt F}|^2)dxdt+\varepsilon\int_0^T\!\!\int_{{\mathbb R}^3}G^2dxdt\\
&\quad +C\sup_{0\le T<T^*}\|\nabla{\bf u}\|_{L^2}^{3-\alpha}+C.
\end{split}
\end{equation}
Here we used the following estimates: for all $0\le t\le T$,
$$\left|\int_{{\mathbb R}^3}\rho{\mathtt F}{\mathtt F}^\top:\nabla{\bf u}\, dx\right|\le C\|\sqrt{\rho}{\mathtt F}\|_{L^2}\|\nabla{\bf u}\|_{L^2}\le C+\frac{\mu}{8}\|\nabla{\bf u}\|_{L^2}^2$$
and
$$\left|\int_{{\mathbb R}^3}\rho^\gamma{\rm div}\,{\bf u}\, dx\right|\le C\|\rho^\gamma\|_{L^1}\|\nabla{\bf u}\|_{L^2}\le C+\frac{\mu}{8}\|\nabla{\bf u}\|_{L^2}^2.$$
Choosing a sufficiently small $\varepsilon$, from \eqref{v10} and \eqref{v15}, one has
$$\int_0^T\!\!\int_{{\mathbb R}^3}G^2dxdt+\sup_{0\le t\le T}\int_{{\mathbb R}^3}|\nabla{\bf u}|^2dx\le
C\int_0^T\!\!\int_{{\mathbb R}^3}(|\nabla\rho|^2+|\nabla{\mathtt F}|^2)dxdt+C.$$
The proof is complete.
\end{proof}
With the aid of the estimate of the material derivative of the velocity, we can obtain the $L^\infty(0,T; L^2)$ estimates of $\nabla\rho$ and $\nabla{\mathtt F}$; we actually have
\begin{Lemma}
Under the assumption \eqref{va}, the following estimates hold for $0\le T\le T^*$:
\begin{equation}\label{v16}
\sup_{0\le T\le T^*}\int_{{\mathbb R}^3}|\nabla\rho|^2dx\le C,
\end{equation}
\begin{equation}\label{v17}
\sup_{0\le T\le T^*}\int_{{\mathbb R}^3}|\nabla{\mathtt F}|^2dx\le C,
\end{equation}
\begin{equation}\label{v18}
\int_0^T\!\!\int_{{\mathbb R}^3}\rho{\bf u}_t^2dxdt+\sup_{0\le T\le T^*}\int_{{\mathbb R}^3}|\nabla{\bf u}|^2dx+\int_0^T\|{\bf u}\|_{H^2({\mathbb R}^3)}^2dt\le C.
\end{equation}
\end{Lemma}
\begin{proof}
The arguments for \eqref{v16} and \eqref{v18} are similar to Proposition 2.4 in \cite{HX} provided \eqref{v17} holds, and thus we omit them. For \eqref{v17}, we can proceed as follows.
Differentiating the third equation in \eqref{cve} with respect to $x_i$ (denoting $\partial_{x_i}$ by $\partial_i$) and multiplying the resulting identity by $2\partial_{i}{\mathtt F}$ yield
\begin{equation*}
\partial_t|\partial_i{\mathtt F}|^2+{\bf u}\cdot\nabla|\partial_i{\mathtt F}|^2=2\nabla\partial_i{\bf u}\,{\mathtt F}:\partial_i{\mathtt F}+2\nabla{\bf u}\,\partial_i{\mathtt F}:\partial_i{\mathtt F}.
\end{equation*}
Integrating the above identity over ${\mathbb R}^3$, one has
\begin{equation}\label{v20}
\begin{split}
\frac{d}{dt}\int_{{\mathbb R}^3}|\partial_i{\mathtt F}|^2dx&=\int_{{\mathbb R}^3}|\partial_i{\mathtt F}|^2{\rm div}\,{\bf u} dx+2\int_{{\mathbb R}^3}\left(\nabla\partial_i{\bf u}\,{\mathtt F}:\partial_i{\mathtt F}+\nabla{\bf u}\,\partial_i{\mathtt F}:\partial_i{\mathtt F}\right)dx\\
&\le C\|\nabla{\bf u}\|_{L^\infty}\|\partial_i{\mathtt F}\|_{L^2}^2+C\|\partial_i{\mathtt F}\|_{L^2}\|\nabla^2{\bf u}\|_{L^2}\\
&\le C\|\nabla{\bf u}\|_{L^\infty}\|\partial_i{\mathtt F}\|_{L^2}^2+C\|\partial_i{\mathtt F}\|_{L^2}^2+C\|\nabla^2{\bf u}\|^2_{L^2}\\
&\le C\|\nabla{\bf u}\|_{L^\infty}\|\partial_i{\mathtt F}\|_{L^2}^2+C\|\partial_i{\mathtt F}\|_{L^2}^2+C\|\partial_i\rho\|_{L^2}^2+C\|G\|_{L^2}^2
\end{split}
\end{equation}
since
$$-\mu\Delta {\bf u}-(\lambda+\mu)\nabla{\rm div}\,{\bf u}=-G-A\nabla\rho^\gamma-{\rm div}(\rho{\mathtt F}{\mathtt F}^\top).$$
From \eqref{v16}, \eqref{v20} and Gronwall's inequality, we obtain \eqref{v17}. The proof is complete.
\end{proof}
The next step is to obtain the following estimates for ${\bf u}_t$, uniform in time:
\begin{Lemma}
Under the assumption \eqref{va}, the following estimates hold for all $0\le T\le T^*$:
\begin{equation}\label{v21}
\sup_{0\le t\le T}\|\sqrt{\rho}{\bf u}_t\|_{L^2}^2+\int_0^T\!\!\int_{{\mathbb R}^3}|\nabla{\bf u}_t|^2dxdt\le C,
\end{equation}
\begin{equation}\label{v221}
\sup_{0\le T\le T^*}\|{\bf u}\|_{H^2}\le C.
\end{equation}
\end{Lemma}
\begin{proof}
The argument for \eqref{v221} is similar to Proposition 2.5 in \cite{HX}, and thus we omit it.
To prove \eqref{v21}, we differentiate the second equation in \eqref{cve} with respect to $t$ to obtain
\begin{equation}\label{v23}
\begin{split}
&\rho{\bf u}_{tt}+\rho{\bf u}\cdot\nabla{\bf u}_t-\mu\Delta {\bf u}_t-(\lambda+\mu)\nabla{\rm div}\,{\bf u}_t+A\nabla(\rho^\gamma)_t\\
&=-\rho_t({\bf u}_t+{\bf u}\cdot\nabla{\bf u})-\rho{\bf u}_t\cdot\nabla{\bf u}+{\rm div}(\rho{\mathtt F}{\mathtt F}^\top)_t.
\end{split}
\end{equation}
Taking the inner product of the above equation with ${\bf u}_t$ in $L^2({\mathbb R}^3)$ and integrating by parts, one obtains
\begin{equation}\label{v24}
\begin{split}
&\frac12\frac{d}{dt}\int_{{\mathbb R}^3}\rho{\bf u}_t^2dx+\int_{{\mathbb R}^3}(\mu|\nabla{\bf u}_t|^2+(\lambda+\mu)|{\rm div}\,{\bf u}_t|^2)dx\\
&\quad-A\int_{{\mathbb R}^3}(\rho^\gamma)_t{\rm div}\,{\bf u}_tdx+\int_{{\mathbb R}^3}(\rho{\mathtt F}{\mathtt F}^\top)_t:\nabla{\bf u}_t\,dx\\
&=-\int_{{\mathbb R}^3}\left(\rho{\bf u}\cdot\nabla\left[\left(\frac12{\bf u}_t+{\bf u}\cdot\nabla{\bf u}\right){\bf u}_t\right]+\rho{\bf u}_t\cdot\nabla{\bf u}\cdot{\bf u}_t\right)dx.
\end{split}
\end{equation}
From the first and the third equations in \eqref{cve}, we have
\begin{equation}\label{v25}
\begin{split}
&\int_{{\mathbb R}^3}(\rho^\gamma)_t{\rm div}\,{\bf u}_t dx\\
&=-\int_{{\mathbb R}^3}{\rm div}(\rho^\gamma{\bf u}){\rm div}\,{\bf u}_tdx-(\gamma-1)\int_{{\mathbb R}^3}\rho^\gamma{\rm div}\,{\bf u}\,{\rm div}\,{\bf u}_tdx\\
&=\int_{{\mathbb R}^3}(\rho^\gamma{\rm div}\,{\bf u}+\nabla\rho^\gamma\cdot{\bf u}){\rm div}\,{\bf u}_tdx-(\gamma-1)\int_{{\mathbb R}^3}\rho^\gamma{\rm div}\,{\bf u}\,{\rm div}\,{\bf u}_tdx\\
&\le C\|\nabla{\bf u}_t\|_{L^2}\|\nabla{\bf u}\|_{L^2}+C\|{\bf u}\|_{L^\infty}\|\nabla\rho\|_{L^2}\|\nabla{\bf u}_t\|_{L^2}\\
&\le\varepsilon\|\nabla{\bf u}_t\|_{L^2}^2+C\|{\bf u}\|_{H^2}^2
\end{split}
\end{equation}
since
$$(\rho^\gamma)_t+{\rm div}(\rho^\gamma{\bf u})+(\gamma-1)\rho^\gamma{\rm div}\,{\bf u}=0;$$
and
\begin{equation}\label{v26}
\begin{split}
&\int_{{\mathbb R}^3}(\rho{\mathtt F}{\mathtt F}^\top)_t:\nabla{\bf u}_tdx\\
&=-\int_{{\mathbb R}^3}{\bf u}\cdot\nabla(\rho{\mathtt F}{\mathtt F}^\top):\nabla{\bf u}_tdx+\int_{{\mathbb R}^3}\nabla{\bf u}\,\rho{\mathtt F}{\mathtt F}^\top:\nabla{\bf u}_tdx\\
&\quad+\int_{{\mathbb R}^3}\rho{\mathtt F}{\mathtt F}^\top(\nabla{\bf u})^\top:\nabla{\bf u}_tdx+\int_{{\mathbb R}^3}\rho{\mathtt F}{\mathtt F}^\top{\rm div}\,{\bf u}:\nabla{\bf u}_tdx\\
&\le C\|{\bf u}\|_{L^\infty}(\|\nabla\rho\|_{L^2}+\|\nabla{\mathtt F}\|_{L^2})\|\nabla{\bf u}_t\|_{L^2}+C\|\nabla{\bf u}\|_{L^2}\|\nabla{\bf u}_t\|_{L^2}\\
&\le \varepsilon\|\nabla{\bf u}_t\|_{L^2}^2+C\|{\bf u}\|_{H^2}^2.
\end{split}
\end{equation}
For the right-hand side of \eqref{v24}, we have
\begin{equation}\label{v27}
\begin{split}
&\left|-\int_{{\mathbb R}^3}\left(\rho{\bf u}\cdot\nabla\left[\left(\frac12{\bf u}_t+{\bf u}\cdot\nabla{\bf u}\right){\bf u}_t\right]+\rho{\bf u}_t\cdot\nabla{\bf u}\cdot{\bf u}_t\right)dx\right|\\
&\quad\le
\int_{{\mathbb R}^3}\Big(\rho|{\bf u}||{\bf u}_t||\nabla{\bf u}_t|+\rho|{\bf u}||{\bf u}_t||\nabla{\bf u}|^2+\rho|{\bf u}|^2|{\bf u}_t||\nabla^2{\bf u}|\\
&\qquad\qquad\qquad +\rho|{\bf u}|^2|\nabla{\bf u}||\nabla{\bf u}_t|+\rho|{\bf u}_t|^2|\nabla{\bf u}|\Big)dx\\
&\quad\le
C\Big(\|{\bf u}\|_{L^6}\|\sqrt{\rho}{\bf u}_t\|_{L^3}\|\nabla{\bf u}_t\|_{L^2}+\|{\bf u}\|_{L^6}\|{\bf u}_t\|_{L^6}\|\nabla{\bf u}\|_{L^3}^2\\
&\qquad +\|{\bf u}^2\|_{L^3}\|{\bf u}_t\|_{L^6}\|\nabla^2{\bf u}\|_{L^2}
+\|\nabla{\bf u}_t\|_{L^2}\|\nabla{\bf u}\|_{L^6}\|{\bf u}^2\|_{L^3}\\
&\qquad +\|\sqrt{\rho}{\bf u}_t\|^2_{L^2}\|\nabla{\bf u}\|_{L^\infty}\Big)\\
&\quad\le
C\Big(\|\sqrt{\rho}{\bf u}_t\|_{L^2}^{\frac12}\|\nabla{\bf u}_t\|_{L^2}^{\frac32}+\|\nabla{\bf u}_t\|_{L^2}\|\nabla{\bf u}\|_{L^6}+\|{\bf u}_t\|_{L^6}\|\nabla^2{\bf u}\|_{L^2}\\
&\qquad+\|\nabla{\bf u}\|_{L^6}\|\nabla{\bf u}_t\|_{L^2}+\|\sqrt{\rho}{\bf u}_t\|^2_{L^2}\|\nabla{\bf u}\|_{L^\infty}\Big)\\
&\quad\le
\varepsilon\|\nabla{\bf u}_t\|^2_{L^2}+C(1+\|\nabla{\bf u}\|_{L^\infty})\|\sqrt{\rho}{\bf u}_t\|_{L^2}^2+C\|{\bf u}\|_{H^2}^2.
\end{split}
\end{equation}
Substituting \eqref{v25}, \eqref{v26} and \eqref{v27} into \eqref{v24}, we get
\begin{equation}\label{v28}
\begin{split}
&\frac{d}{dt}\int_{{\mathbb R}^3}\frac12\rho{\bf u}_t^2dx+\mu\int_{{\mathbb R}^3}|\nabla{\bf u}_t|^2dx\\
& \le \varepsilon\int_{{\mathbb R}^3}|\nabla{\bf u}_t|^2dx+C\big((1+\|\nabla{\bf u}\|_{L^\infty})\|\sqrt{\rho}{\bf u}_t\|_{L^2}^2+\|{\bf u}\|_{H^2}^2\big).
\end{split}
\end{equation}
Thanks to the compatibility condition, it holds that
$$\sqrt{\rho_0}{\bf u}_t(0,x)\in L^2({\mathbb R}^3),$$
and thus, for sufficiently small $\varepsilon$, \eqref{v28} gives us \eqref{v21}. The proof is complete.
\end{proof}
Finally, we can now obtain bounds for the first order derivatives of the density and the deformation gradient, and for the second order derivatives of the velocity.
\begin{Lemma}
Under the assumption \eqref{va}, the following estimates hold for $0\le T\le T^*$:
\begin{equation}\label{v29}
\sup_{0\le t\le T}(\|\rho_t(t)\|_{L^6}+\|\rho\|_{W^{1,6}})\le C,
\end{equation}
\begin{equation}\label{v30}
\sup_{0\le t\le T}(\|{\mathtt F}_t(t)\|_{L^6}+\|{\mathtt F}\|_{W^{1,6}})\le C,
\end{equation}
\begin{equation}\label{v31}
\int_0^T\|{\bf u}(t)\|_{W^{2,6}}^2dt\le C.
\end{equation}
\end{Lemma}
\begin{proof}
The arguments for \eqref{v29} and \eqref{v31} are similar to those in \cite{HX} provided \eqref{v30} is verified, and hence are omitted here.
To establish \eqref{v30}, we first deduce from the previous estimates that
$${\bf u}_t\in L^2(0,T; L^6({\mathbb R}^3)),\quad G\in L^2(0,T; L^6({\mathbb R}^3)).$$
Differentiating the third equation in \eqref{cve} with respect to $x_i$, and multiplying the resulting identity by $6|\partial_i{\mathtt F}|^4\partial_i{\mathtt F}$, one has
\begin{equation}\label{v32}
\begin{split}
&\frac{d}{dt}\int_{{\mathbb R}^3}|\partial_i{\mathtt F}|^6dx\\
&=-6\int_{{\mathbb R}^3}|\partial_i{\mathtt F}|^6{\rm div}\,{\bf u} dx-6\int_{{\mathbb R}^3}|\partial_i{\mathtt F}|^4\partial_i{\mathtt F}\,\partial_i\nabla{\bf u}\,{\mathtt F} dx-6\int_{{\mathbb R}^3}|\partial_i{\mathtt F}|^4\nabla{\bf u}\,\partial_i{\mathtt F}\,\partial_i{\mathtt F} dx\\
&\le C\Big(\|\nabla{\bf u}\|_{L^\infty}\|\nabla{\mathtt F}\|^6_{L^6}+\|\nabla{\mathtt F}\|_{L^6}^5(\|\nabla\rho\|_{L^6}+\|\nabla{\mathtt F}\|_{L^6}+\|G\|_{L^6})\Big).
\end{split}
\end{equation}
The above inequality and Gronwall's inequality imply
$$\sup_{0\le t\le T}\|\nabla{\mathtt F}\|_{L^6}\le C.$$
This estimate, together with the third equation in \eqref{cve}, gives the bound of ${\mathtt F}_t$ in \eqref{v30}. The proof is complete.
\end{proof}
\subsection{Proof of Theorem \ref{vMT}}
To begin with, we have the following higher order estimates for the density and the deformation gradient:
\begin{Lemma}\label{y1}
For all $0\le T\le T^*$, the following estimates hold:
$$\|\rho\|_{L^\infty(H^2)}+\|\rho_t\|_{L^\infty(H^1)}+\|\rho_{tt}\|_{L^2}\le C,$$
$$\|\rho^\gamma\|_{L^\infty(H^2)}+\|(\rho^\gamma)_t\|_{L^\infty(H^1)}+\|(\rho^\gamma)_{tt}\|_{L^2}\le C,$$
and
$$\|{\mathtt F}\|_{L^\infty(H^2)}+\|{\mathtt F}_t\|_{L^\infty(H^1)}+\|{\mathtt F}_{tt}\|_{L^2}\le C.$$
\end{Lemma}
\begin{proof}
The arguments for $\rho$ and $\rho^\gamma$ are similar to those in \cite{HX}, and thus we will focus on the estimates for ${\mathtt F}$.
For this purpose, we first observe from the second equation in \eqref{cve} that
\begin{equation*}
\begin{split}
\|{\bf u}\|_{H^3}&\le C\Big(\|G\|_{H^1}+\|\nabla\rho^\gamma\|_{H^1}+\|{\rm div}(\rho{\mathtt F}{\mathtt F}^\top)\|_{H^1}\Big)\\
&\le C\Big(\|G\|_{H^1}+\|\nabla^2\rho^\gamma\|_{L^2}+C+\|\nabla{\mathtt F}\|_{L^4}^2+\|\nabla\rho\|_{L^4}^2+\|\nabla^2{\mathtt F}\|_{L^2}\Big)\\
&\le C\Big(\|G\|_{H^1}+\|\nabla^2\rho^\gamma\|_{L^2}+C+\|\nabla^2{\mathtt F}\|_{L^2}\Big),
\end{split}
\end{equation*}
since
$$\|\nabla\rho\|_{L^4}^2\le \|\nabla\rho\|_{L^2}^{\frac12}\|\nabla\rho\|_{L^6}^{\frac32}\le C$$
and
$$\|\nabla{\mathtt F}\|_{L^4}^2\le \|\nabla{\mathtt F}\|_{L^2}^{\frac12}\|\nabla{\mathtt F}\|_{L^6}^{\frac32}\le C.$$
Applying $\nabla^2$ to the third equation of \eqref{cve} yields
\begin{equation*}
\begin{split}
&(\nabla_{ij}{\mathtt F})_t+{\bf u}\cdot\nabla(\nabla_{ij}{\mathtt F})+\nabla_i{\bf u}\cdot\nabla\nabla_j{\mathtt F}+\nabla_{ij}{\bf u}\cdot\nabla{\mathtt F}\\
&\quad=(\nabla_{ij}\nabla{\bf u}){\mathtt F}+\nabla{\bf u}\nabla_{ij}{\mathtt F}+\nabla_i\nabla{\bf u}\nabla_j{\mathtt F}+\nabla_j\nabla{\bf u}\nabla_i{\mathtt F}.
\end{split}
\end{equation*}
Multiplying the above identity by $2\nabla_{ij}{\mathtt F}$ and integrating over ${\mathbb R}^3$, one obtains
\begin{equation*}
\begin{split}
&\frac{d}{dt}\int_{{\mathbb R}^3}|\nabla^2{\mathtt F}|^2dx\\
&\le C\int_{{\mathbb R}^3}\Big(|\nabla^3{\bf u}||{\mathtt F}||\nabla^2{\mathtt F}|+|\nabla{\bf u}||\nabla^2{\mathtt F}|^2+|\nabla^2{\bf u}||\nabla{\mathtt F}||\nabla^2{\mathtt F}|\Big)dx
+\int_{{\mathbb R}^3}{\bf u}\cdot\nabla|\nabla^2{\mathtt F}|^2dx\\
&\le C\int_{{\mathbb R}^3}\Big(|\nabla^3{\bf u}||{\mathtt F}||\nabla^2{\mathtt F}|+|\nabla{\bf u}||\nabla^2{\mathtt F}|^2+|\nabla^2{\bf u}||\nabla{\mathtt F}||\nabla^2{\mathtt F}|\Big)dx
-\int_{{\mathbb R}^3}{\rm div}\,{\bf u}|\nabla^2{\mathtt F}|^2dx\\
&\le C\int_{{\mathbb R}^3}\Big(|\nabla^3{\bf u}||{\mathtt F}||\nabla^2{\mathtt F}|+|\nabla{\bf u}||\nabla^2{\mathtt F}|^2+|\nabla^2{\bf u}||\nabla{\mathtt F}||\nabla^2{\mathtt F}|\Big)dx\\
&\le C\Big(\|\nabla{\bf u}\|_{L^\infty}\|\nabla^2{\mathtt F}\|_{L^2}^2+\|\nabla^2{\mathtt F}\|_{L^2}^2+\|G\|_{H^1}^2+\|\nabla^2\rho^\gamma\|_{L^2}^2+C+\|\nabla^2{\bf u}\|_{L^6}^2\Big)\\
&\le C(\|\nabla{\bf u}\|_{L^\infty}+1)\|\nabla^2{\mathtt F}\|_{L^2}^2+C(\|G\|_{H^1}^2+\|\nabla^2{\bf u}\|_{L^6}^2+1),
\end{split}
\end{equation*}
since
$$\|\nabla{\mathtt F}\|_{L^3}\le \|\nabla{\mathtt F}\|_{L^2}^{\frac12}\|\nabla{\mathtt F}\|_{L^6}^{\frac12}\le C.$$
Because ${\bf u}\in L^2_tW^{2,6}_x$, by Gronwall's inequality, the above inequality yields the bound
$$\|{\mathtt F}\|_{L^\infty_t H^2_x}\le C.$$
The estimates of ${\mathtt F}_t$ and ${\mathtt F}_{tt}$ follow from similar energy methods and the third equation of \eqref{cve}. The proof is complete.
\end{proof}
When we establish the $H^3$ regularity of the solution $(\rho,{\bf u},{\mathtt F})$, the following estimate is useful.
\begin{Lemma}\label{y2}
For $0\le T\le T^*$, we have
$$\int_0^T\!\!\int_{{\mathbb R}^3}\rho{\bf u}_{tt}^2dxdt+\sup_{0\le t\le T}\int_{{\mathbb R}^3}|\nabla{\bf u}_t|^2dx\le C.$$
\end{Lemma}
\begin{proof}
Differentiating the second equation in \eqref{cve} with respect to $t$, multiplying the resulting equation by ${\bf u}_{tt}$, and then integrating over ${\mathbb R}^3$, one obtains
\begin{equation}\label{v22}
\begin{split}
&\int_{{\mathbb R}^3}\rho{\bf u}_{tt}^2dx+\int_{{\mathbb R}^3}\rho{\bf u}\cdot\nabla{\bf u}_t\cdot{\bf u}_{tt}dx+\frac{d}{dt}\int_{{\mathbb R}^3}\left(\frac{\mu}{2}|\nabla{\bf u}_t|^2+\frac{\lambda+\mu}{2}|{\rm div}\,{\bf u}_t|^2\right)dx\\
& =A\int_{{\mathbb R}^3}(\rho^\gamma)_t{\rm div}\,{\bf u}_{tt}dx-\int_{{\mathbb R}^3}(\rho{\mathtt F}{\mathtt F}^\top)_t:\nabla{\bf u}_{tt}dx-\int_{{\mathbb R}^3}\rho_t({\bf u}_t+{\bf u}\cdot\nabla{\bf u}){\bf u}_{tt}dx\\
&\qquad-\int_{{\mathbb R}^3}\rho{\bf u}_t\cdot\nabla{\bf u}\cdot{\bf u}_{tt}dx\\
& =A\frac{d}{dt}\int_{{\mathbb R}^3}(\rho^\gamma)_t{\rm div}\,{\bf u}_{t}dx-A\int_{{\mathbb R}^3}(\rho^\gamma)_{tt}{\rm div}\,{\bf u}_tdx-\frac{d}{dt}\int_{{\mathbb R}^3}(\rho{\mathtt F}{\mathtt F}^\top)_{t}:\nabla{\bf u}_{t}dx\\
&\qquad +\int_{{\mathbb R}^3}(\rho{\mathtt F}{\mathtt F}^\top)_{tt}:\nabla{\bf u}_t dx
-\int_{{\mathbb R}^3}\rho{\bf u}_t\cdot\nabla{\bf u}\cdot{\bf u}_{tt}dx-\frac{d}{dt}\int_{{\mathbb R}^3}\frac12\rho_t|{\bf u}_t|^2dx\\
&\qquad+\frac12\int_{{\mathbb R}^3}\rho_{tt}|{\bf u}_t|^2dx-\frac{d}{dt}\int_{{\mathbb R}^3}\rho_t({\bf u}\cdot\nabla{\bf u}){\bf u}_tdx+\int_{{\mathbb R}^3}\rho_{tt}{\bf u}\cdot\nabla{\bf u}\cdot{\bf u}_tdx\\
&\qquad
+\int_{{\mathbb R}^3}\rho_t{\bf u}_t\cdot\nabla{\bf u}\cdot{\bf u}_tdx+\int_{{\mathbb R}^3}\rho_t{\bf u}\cdot\nabla{\bf u}_t\cdot{\bf u}_tdx.
\end{split}
\end{equation}
Observe that
$$\left|\int_{{\mathbb R}^3}\rho{\bf u}\cdot\nabla{\bf u}_t\cdot{\bf u}_{tt}dx\right|\le\varepsilon\|\sqrt{\rho}{\bf u}_{tt}\|_{L^2}^2+C\|\nabla{\bf u}_t\|_{L^2}^2;$$
\begin{equation*}
\begin{split}
\left|\int_{{\mathbb R}^3}\rho{\bf u}_t\cdot\nabla{\bf u}\cdot{\bf u}_{tt}dx\right|&\le
\varepsilon\|\sqrt{\rho}{\bf u}_{tt}\|_{L^2}^2+C\|\sqrt{\rho}{\bf u}_t\|_{L^3}^2\|\nabla{\bf u}\|_{L^6}^2\\
&\le
\varepsilon\|\sqrt{\rho}{\bf u}_{tt}\|_{L^2}^2+C\|\sqrt{\rho}{\bf u}_t\|_{L^2}\|{\bf u}_t\|_{L^6}\\
&\le \varepsilon\|\sqrt{\rho}{\bf u}_{tt}\|_{L^2}^2+C\|\nabla{\bf u}_t\|_{L^2};
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\left|\int_{{\mathbb R}^3}\rho_{tt}|{\bf u}_t|^2dx\right|&=\left|\int_{{\mathbb R}^3}{\rm div}(\rho_t{\bf u}+\rho{\bf u}_t)|{\bf u}_t|^2dx\right|\\
&=\left|\int_{{\mathbb R}^3}(\rho_t{\bf u}+\rho{\bf u}_t)\cdot\nabla|{\bf u}_t|^2dx\right|\\
&\le
C\|\rho_t\|_{L^6}\|{\bf u}\|_{L^6}\|{\bf u}_t\|_{L^6}\|\nabla{\bf u}_t\|_{L^2}+\int_{{\mathbb R}^3}\rho|{\bf u}_t|^2|\nabla{\bf u}_t|dx\\
&\le
C\|\nabla{\bf u}_t\|_{L^2}^2+C\|\sqrt{\rho}{\bf u}_t\|_{L^2}\|\sqrt{\rho}{\bf u}_t\|_{L^6}\|\nabla{\bf u}_t\|_{L^3}\\
&\le
C\|\nabla{\bf u}_t\|_{L^2}^2+C\|\nabla{\bf u}_t\|_{L^2}\|{\bf u}_t\|_{H^2}\\
&\le
C\|\nabla{\bf u}_t\|_{L^2}^2+C\|\nabla{\bf u}_t\|_{L^2}(\|\sqrt{\rho}{\bf u}_{tt}\|_{L^2}+\|\nabla{\bf u}_t\|_{L^2}+1)\\
&\le C\|\nabla{\bf u}_t\|_{L^2}^2+\varepsilon\|\sqrt{\rho}{\bf u}_{tt}\|_{L^2}^2+C,
\end{split}
\end{equation*}
since
$$\mu\Delta {\bf u}_t+(\lambda+\mu)\nabla{\rm div}\,{\bf u}_t=G_t+A\nabla(\rho^\gamma)_t-{\rm div}(\rho{\mathtt F}{\mathtt F}^\top)_t$$
implies that
\begin{equation*}
\begin{split}
\|{\bf u}_t\|_{H^2}&\le
C(\|G_t\|_{L^2}+\|\nabla(\rho^\gamma)_t\|_{L^2}+\|{\rm div}(\rho{\mathtt F}{\mathtt F}^\top)_t\|_{L^2})\\
&\le C(\|\sqrt{\rho}{\bf u}_{tt}\|_{L^2}+\|\nabla{\bf u}_t\|_{L^2}+1);
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\left|\int_{{\mathbb R}^3}\rho_{tt}{\bf u}\cdot\nabla{\bf u}\cdot{\bf u}_{t}dx\right|&\le
\|\rho_{tt}\|_{L^2}\|{\bf u}\cdot\nabla{\bf u}\|_{L^3}\|{\bf u}_t\|_{L^6}\\
&\le C\|\rho_{tt}\|_{L^2}^2+C\|\nabla{\bf u}_t\|_{L^2}^2;
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\left|\int_{{\mathbb R}^3}\rho_t{\bf u}_t\cdot\nabla{\bf u}\cdot{\bf u}_tdx\right|&\le
\|\rho_t\|_{L^2}\||{\bf u}_t|^2\|_{L^3}\|\nabla{\bf u}\|_{L^6}\le
C\|\nabla{\bf u}_t\|_{L^2}^2;
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\left|\int_{{\mathbb R}^3}\rho_t{\bf u}\cdot\nabla{\bf u}_t\cdot{\bf u}_tdx\right|&\le
\|\rho_t\|_{L^3}\|{\bf u}\|_{L^\infty}\|\nabla{\bf u}_t\|_{L^2}\|{\bf u}_t\|_{L^6}\\
&\le C\|\nabla{\bf u}_t\|_{L^2}\|{\bf u}_t\|_{L^6}\le
C\|\nabla{\bf u}_t\|_{L^2}^2.
\end{split}
\end{equation*}
Substituting the above estimates back into \eqref{v22} and then
integrating over $[0,t]$ with $0\le t\le T$, one has
\begin{equation}\label{v23b}
\begin{split}
&\int_0^t\int_{{\mathbb R}^3}\rho|{\bf u}_{tt}|^2dxds+\frac{\mu}{4}\sup_{0\le t \le T}\int_{{\mathbb R}^3}|\nabla{\bf u}_t|^2dx\\
&\le C\varepsilon\int_0^T\!\!\int_{{\mathbb R}^3}\rho|{\bf u}_{tt}|^2dxds +C\int_0^T\!\!\int_{{\mathbb R}^3}|\nabla{\bf u}_t|^2dxds+C.
\end{split}
\end{equation}
Here we used the following estimates: for all $0\le t\le T$,
\begin{equation*}
\begin{split}
\left|\int_{{\mathbb R}^3}(\rho^\gamma)_t{\rm div}\,{\bf u}_{t}dx\right|
&\le\|(\rho^\gamma)_t\|_{L^2}\|\nabla{\bf u}_t\|_{L^2}\\
&\le C+\frac{\mu}{8}\sup_{0\le t\le T}\int_{{\mathbb R}^3}|\nabla{\bf u}_t|^2dx;
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\left|\int_{{\mathbb R}^3}(\rho{\mathtt F}{\mathtt F}^\top)_{t}:\nabla{\bf u}_{t}dx\right|
&\le (\|\rho_t\|_{L^2}+\|{\mathtt F}_t\|_{L^2})\|\nabla{\bf u}_t\|_{L^2}\\
&\le C+\frac{\mu}{8}\sup_{0\le t\le T}\int_{{\mathbb R}^3}|\nabla{\bf u}_t|^2dx;
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\left|\int_{{\mathbb R}^3}\rho_t|{\bf u}_t|^2dx\right|&=\left|\int_{{\mathbb R}^3}{\rm div}(\rho{\bf u})|{\bf u}_t|^2dx\right|=\left|\int_{{\mathbb R}^3}\rho{\bf u}\cdot\nabla|{\bf u}_t|^2dx\right|\\
&\le C\|\sqrt{\rho}{\bf u}_t\|_{L^2}\|\nabla{\bf u}_t\|_{L^2}\le
C+\frac{\mu}{8}\sup_{0\le t\le T}\int_{{\mathbb R}^3}|\nabla{\bf u}_t|^2dx;
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\left|\int_{{\mathbb R}^3}\rho_t({\bf u}\cdot\nabla{\bf u})\cdot{\bf u}_tdx\right|
&\le \|\rho_t\|_{L^6}\|{\bf u}\|_{L^6}\|\nabla{\bf u}\|_{L^6}\|\nabla{\bf u}_t\|_{L^2}\\
&\le C+\frac{\mu}{8}\sup_{0\le t\le T}\int_{{\mathbb R}^3}|\nabla{\bf u}_t|^2dx.
\end{split}
\end{equation*}
Choosing a sufficiently small $\varepsilon$, the estimate \eqref{v23b}, combined with
Gronwall's inequality, gives
$$\int_0^T\!\!\int_{{\mathbb R}^3}\rho|{\bf u}_{tt}|^2dxdt+\sup_{0\le t\le
T}\int_{{\mathbb R}^3}|\nabla{\bf u}_t|^2dx\le C.$$
The proof is complete.
\end{proof}
With the aid of the above estimates, we can obtain the $H^3$ regularity as
follows.
\begin{Lemma}
$$\|\rho\|_{L^\infty_tH^3_x}+\|{\mathtt F}\|_{L^\infty_tH^3_x}+\|{\bf u}\|_{L^\infty_tH^3_x}\le C.$$
\end{Lemma}
\begin{proof}
It follows from Lemma \ref{y1} and Lemma \ref{y2} that
$$G\in L^\infty_t H^1_x,\quad \nabla\rho^\gamma\in L^\infty_t H^1_x,\quad
{\rm div}(\rho{\mathtt F}{\mathtt F}^\top)\in L^\infty_t H^1_x,$$
which gives
$$\mu\Delta {\bf u}+(\lambda+\mu)\nabla{\rm div}\,{\bf u}=G+A\nabla\rho^\gamma-{\rm div}(\rho{\mathtt F}{\mathtt F}^\top)\in L^\infty_t H^1_x.$$
By the standard elliptic estimates, we have
$$\|{\bf u}\|_{L^\infty_t H^3_x}\le C.$$
From the second equation in \eqref{cve}, Lemma \ref{y1} and Lemma
\ref{y2}, we have
$$\mu\Delta {\bf u}_t+(\lambda+\mu)\nabla{\rm div}\,{\bf u}_t=G_t+A\nabla(\rho^\gamma)_t-{\rm div}(\rho{\mathtt F}{\mathtt F}^\top)_t\in L^2((0,T)\times {{\mathbb R}^3}).$$
Therefore, again by the standard elliptic estimates, we have
$${\bf u}_t\in L^2_t H^2_x,\quad G\in L^2_t H^2_x.$$
With these bounds, we can apply an argument similar to that in Lemma
\ref{y1} to deduce from the first and the third equations in
\eqref{cve} that
$$\|\rho\|_{L^\infty_t H^3_x}+\|{\mathtt F}\|_{L^\infty_t H^3_x}\le C.$$
The proof is complete.
\end{proof}
Now we are able to give the proof of Theorem \ref{vMT}.
\begin{proof}[Proof of Theorem \ref{vMT}]
By the previous lemmas, the functions $(\rho, {\mathtt F}, {\bf u})|_{t=T^*}$
satisfy the same conditions as the initial data. Moreover,
$$G=\rho{\bf u}_t+\rho{\bf u}\cdot\nabla{\bf u}\in L^\infty_t H^1_x$$
and
$$(-\mu\Delta {\bf u}-(\lambda+\mu)\nabla{\rm div}\,{\bf u}+A\nabla\rho^\gamma)|_{t=T^*}=\rho (T^*)g$$
with $$g\in H^1({\mathbb R}^3) \quad\text{and}\quad \sqrt{\rho(T^*)}\,g\in L^2.$$ Therefore, we
can take $(\rho,{\mathtt F},{\bf u})(T^*)$ as the initial data and apply the local
existence theorem to extend the local classical solution beyond
$T^*$, which contradicts the definition of $T^*$.
The proof is complete.
\end{proof}
\section*{Acknowledgments}
X. Hu's research was supported in part by the National Science Foundation.
D. Wang's research was supported in part by the National Science
Foundation and the Office of Naval Research.
\end{document}
\begin{document}
\date{}
\title{Quantum Cat's Dilemma}
\markboth{Marcin Makowski and~Edward W.~Piotrowski}{Quantum cat's dilemma}
\author{Marcin Makowski\footnote{[email protected]} \,and~Edward W.~Piotrowski\footnote{[email protected]}\\[1ex]\small
Institute of Mathematics, University of Bia\l ystok,\\\small
Lipowa 41, Pl 15424 Bia\l ystok,
Poland}
\maketitle
\begin{abstract}
We study a quantum version of the sequential game illustrating problems connected with making rational decisions. We compare the results that the two models (quantum and classical) yield. In the quantum model, intransitivity gains significantly in importance. We argue that the quantum model describes our spontaneously displayed preferences more precisely than the classical model, as these preferences are often intransitive.
\end{abstract}
Keywords: quantum strategies; quantum games; intransitivity; sequential game.
\section{Introduction}
A fundamental scientific theory is marked by its ability to solve
the widest possible range of problems. In the 20th century, it was
quantum mechanics \cite{r2} that became such an effective panacea
for the problems that could not be either understood or solved
with the use of the traditional methods. Quantum mechanics
describes the fascinating structure of
elements of which the world is composed and explains such
phenomena as radioactivity, antimatter, stability of molecular
structures, stars evolution etc. Quantum theory allows
us to abandon the traditional paradigms of perceiving the world. Quantum-like ideas are used in various fields of research and in this way they contribute to the unification of modern
science. Some of the mechanisms characteristic for living nature
may find their reflection in quantum theory \cite{r4}. Presently,
quantum information theory is being built at the meeting point of
quantum mechanics and theory of information \cite{r5,r6,r7}. The
concept of a quantum computer highlights the
qualitative limitations of orthodox Turing machines,
which may in the future be superseded by
quantum computers whose computational power will substantially exceed
that of present machines \cite{r8}. This
raises the threat that quantum technology will be used to break the
contemporary methods guaranteeing the confidentiality of data
transfer \cite{r32}. It seems, however, that the methods of quantum
cryptography currently being developed will remain secure
even in the era of quantum computers \cite{r9}. The
combination of the research methods of information theory and game
theory has given rise to a new and intriguing field, quantum
game theory, in which the subtle quantum rules characterizing the
material world determine the ways in which information is
controlled and transformed \cite{r10, r11, r12, r13, r14}. In the quantum game formalism, pure strategies correspond to
the vectors of Hilbert space (to be more precise: the projective
operators on subspaces determined by these vectors). The
mixed strategies are represented by the convex combinations of
vectors projected on these directions. In comparison with the sets
of the traditional strategies, quantum strategies provide
players with much more possibilities which they can use while
making the most beneficial decision for themselves. This
characteristic feature of quantum game theory is the reason why
its results go beyond the traditional boundaries \cite{r33}.
Plenty of quantum variants of problems analysed by
traditional classical game theory (see \cite{r15, r30, 14})
have already been put forward. First attempts at creating
a quantum economy, by applying quantum game theory to selected
economic problems, have been made as well \cite{r16}. It is assumed
that there exists a market where financial transactions are made
with the help of quantum computers operating on quantum strategies
\cite{r16, r17, 16}. It is worth recalling here that game
theory in its traditional form was formulated in the
context of economic issues.\newline The quantum game
formalism has already been used to describe the idea of the
evolutionarily stable strategy (ESS) \cite{r31}. Perhaps further
research in this direction will help explain a number of
phenomena that are now being studied in evolutionary biology.
In our work we concentrate on the quantitative analysis of the
quantum version of a very simple game against Nature which was
presented and analyzed in \cite{r1}. To illustrate the problem,
we will use the story about Pitts's experiments with cats,
mentioned in the Steinhaus diary \cite{r21}. Let us assume (as
in \cite{r1}) that a cat (we will call it the {\em
quantum cat}\/) is offered three types of food (no.~1, no.~2 and
no.~3), each time in pairs of two types, where the food
portions are equally attractive in terms of calories and each
one contains unique components that are necessary for the cat's good
health. Let us assume that the quantum cat reaches its optimal
strategy under conditions of constant frequencies of
appearance of the particular food pairs in its diet, and that it
will never refrain from making a choice. The ability of the quantum
cat to find the optimal strategy may reflect the principle of
least action formulated by Ernst Mach \cite{r20}. It is possible
that some of our psychological processes are subject to some
variant of this principle, linked with Ockham's razor.
A non-orthodox quantum description of decision algorithms makes it possible to extend the results of Ref. \cite{r1}. In the following paragraphs, we compare the quantum and the classical variants of the model we are interested in.
\section{Intransitivity}
However, before we start analyzing all possible behavioral patterns of the quantum cat, it would be advisable to explain what an intransitive order is.
Any relation $\succ$ existing between the elements of a certain set is called \emph{transitive}\/ if $A\succ C$ results from the fact that $A\succ B$ and $B\succ C$ for any three elements $A$, $B$, $C$. If this condition is not fulfilled then the relation will be called \emph{intransitive}\/.
The best known example of intransitivity is the children's game
``Rock, Scissors, Paper'' (for a quantum analysis of this game see
\cite{r19}). Another interesting example of an intransitive
order is Condorcet's voting paradox. Considerations regarding this
paradox led Arrow, in the twentieth century, to prove the theorem
stating that there is no successful choice procedure that
meets the democratic assumptions \cite{r26} (some other problems
with intransitive options can be found in \cite{r25,r27}).
Intransitive orders are still regarded with surprising suspicion by many
researchers. Economists have long held the view that people
should choose between things they like in a specific, linear order
\cite{r28}. But what we actually prefer often depends on how
the choice is being offered \cite{r22,r23}. Pitts, mentioned in
Steinhaus's diary, noticed that a cat facing a choice between
fish, meat and milk prefers fish to meat, meat to milk, and milk
to fish! Pitts's cat, thanks to the above-mentioned food
preferences, provided itself with a balanced diet.
Let us have a closer look at the problem that Pitts was trying to tackle, considered in the language of quantum game theory.
\section{Properties of cat's optimal strategies}
There is the following relation between the frequencies $\omega_k$\/, $k=0,1,2$ of appearance of the particular foods in a diet and the conditional probabilities which we are interested in (\,see \cite{r1}):
\begin{equation}
\omega_k:=P(C_k)=\sum_{j=0}^{2}P(C_{k}|B_{j})P(B_{j}),\,\, k=0,1,2\,,
\end{equation}
where $P(C_{k} | B_{j})$ denotes the probability of choosing food number $k$\/ when the
offered food pair does not contain food number $j$\/, and $P(B_{j})=:q_j$ denotes the frequency
of occurrence of the food pair that does not contain food number $j$\/.
The most valuable way for the cat to choose its food occurs for those six conditional probabilities
($P(C_1|B_0)$, $P(C_2|B_0)$, $P(C_0|B_1)$, $P(C_2|B_1)$, $P(C_0|B_2)$, $P(C_1|B_2)$)
which fulfil the following condition:
\begin{equation}
\label{maximum}
\omega_0=\omega_1=\omega_2=\tfrac{1}{3}.
\end{equation}
Any six conditional probabilities that, for a fixed triple $(q_0,q_1,q_2)$, fulfil (\ref{maximum})
will be called a cat's \emph{optimal strategy}\/.
The system of equations (\ref{maximum}) has the following matrix form:
\begin{equation}\label{system}
\left( \begin{array}{ccc} P(C_0|B_2) & P(C_0|B_1) & 0 \\ P(C_1|B_2) & 0 &
P(C_1|B_0) \\ 0 & P(C_2|B_1) & P(C_2|B_0)
\end{array} \right)
\left( \begin{array}{ccc} q_2 \\ q_1 \\ q_0
\end{array} \right)=\tfrac{1}{3}
\left( \begin{array}{ccc} 1 \\ 1 \\ 1
\end{array} \right).
\end{equation}
Its solution,
\begin{eqnarray}\label{solution}
q_2&=&\tfrac{1}{d}\bigg(\frac{P(C_0|B_1)+P(C_1|B_0)}{3}-P(C_0|B_1)P(C_1|B_0)\negthinspace\bigg),\nonumber\\
q_1&=&\tfrac{1}{d}\bigg(\frac{P(C_0|B_2)+P(C_2|B_0)}{3}-P(C_0|B_2)P(C_2|B_0)\negthinspace\bigg),\\
q_0&=&\tfrac{1}{d}\bigg(\frac{P(C_1|B_2)+P(C_2|B_1)}{3}-P(C_1|B_2)P(C_2|B_1)\negthinspace\bigg),\nonumber
\end{eqnarray}
defines a mapping $\mathcal{A}_0:D_3\rightarrow T_2$ of the three-dimensional cube ($D_3$\/)
into a triangle ($T_2$\/)\,(the two-dimensional simplex $q_0+q_1+q_2=1$, $q_i\geq 0$), where
$d$ is the determinant of the matrix of parameters $P(C_j|B_k)$.
The barycentric coordinates of a point of this triangle are interpreted as the probabilities
$q_0, q_1$ and $q_2$.
Thus we obtain a relation between the cat's optimal strategy and the frequencies $q_j$ of appearance
of the food pairs.
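For readers who wish to experiment with this mapping, the following short Python sketch (our own illustration, not part of the original analysis; the function names are ours) solves the linear system (\ref{system}) numerically and checks whether the resulting triple $(q_0,q_1,q_2)$ lies in the simplex $T_2$, i.e.\ whether an optimal strategy exists for the given conditional probabilities. We solve (\ref{system}) directly rather than evaluating (\ref{solution}) term by term.
\begin{verbatim}
import numpy as np

def simplex_point(P02, P12, P01, P21, P10, P20):
    """Solve the linear system for (q0, q1, q2).

    Pkj stands for P(C_k|B_j); note P02+P12 = P01+P21 = P10+P20 = 1.
    Returns None when the coefficient matrix is singular.
    """
    M = np.array([[P02, P01, 0.0],
                  [P12, 0.0, P10],
                  [0.0, P21, P20]])
    if abs(np.linalg.det(M)) < 1e-12:
        return None
    q2, q1, q0 = np.linalg.solve(M, np.full(3, 1.0 / 3.0))
    return q0, q1, q2

def is_optimal(q):
    # The q_i automatically sum to 1, so an optimal strategy exists
    # for admissible frequencies iff all q_i are nonnegative.
    return q is not None and all(qi >= 0.0 for qi in q)

# Example: the symmetric choice P(C_k|B_j) = 1/2 gives q0 = q1 = q2 = 1/3.
q = simplex_point(0.5, 0.5, 0.5, 0.5, 0.5, 0.5)
print(q, is_optimal(q))
\end{verbatim}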
\section{Quantum cat}
We start by presenting the formalism which is indispensable for the quantum description of the variant of the game presented in \cite{r1}.
Let us denote three different bases of a two-dimensional Hilbert space as
$\{\,| 1 \rangle\negthinspace_{\,\,0}, | 2 \rangle\negthinspace_{\,\,0}\,\}$,
$\{\,| 0 \rangle\negthinspace_{\,\,1}$, $| 2 \rangle\negthinspace_{\,\,1}\,\}$,
$\{\,| 0 \rangle\negthinspace_{\,\,2}$, $| 1 \rangle\negthinspace_{\,\,2}\,\}=\{\,
(1,0)^{T},(0,1)^{T}\,\}$.
The bases should be chosen such that the bases
\{\,$| 0 \rangle\negthinspace_{\,\,1}$, $| 2 \rangle\negthinspace_{\,\,1}$\,\} and
\{\,$| 1 \rangle\negthinspace_{\,\,0}$, $| 2 \rangle\negthinspace_{\,\,0}$\,\}
are the images of
\{\,$| 0 \rangle\negthinspace_{\,\,2}$, $| 1 \rangle\negthinspace_{\,\,2}$\,\}
under the transformations $H$
and $K$, respectively:\footnote{$H$ is called the Hadamard matrix.}
\begin{displaymath}
\label{matrix maximal}
H=\frac{1}{\sqrt{2}}\left(\begin{array}{cr}1 & 1 \\ 1 & -1
\end{array}\right)\/,\quad
K=\frac{1}{\sqrt{2}}\left(\begin{array}{cr}1 & 1 \\ i & -i
\end{array}\right).
\end{displaymath}
It is worth mentioning here that the set of so-called conjugate bases presented above
allowed Wiesner (before asymmetric-key cryptography was invented!) to begin research into quantum cryptography. These bases also play an important role in the universality of quantum market games \cite{1}.
Let us denote the strategy of choosing food number $k$\/, when the offered food pair does not contain
food number $l$\/, by $| k \rangle\negthinspace_{\,\,l}$ ($k, l=0,1,2$, $k\ne l$).\newline
The family $\{|z\rangle\negthinspace\,\/\}$ (\,$z \in \overline{\mathbb{C}}$\,) of vectors
\begin{displaymath}
| z \rangle\negthinspace:=| 0 \rangle\negthinspace_{\,\,2}+z
|1\rangle\negthinspace_{\,\,2}=| 0 \rangle\negthinspace_{\,\,1}+\frac{1-z}{1+z}
|2\rangle\negthinspace_{\,\,1}=| 1 \rangle\negthinspace_{\,\,0}+\frac{1+iz}{1-iz}
|2\rangle\negthinspace_{\,\,0},
\end{displaymath}
defined by the inhomogeneous coordinate of the projective space $\mathbb{C}P^{1}$, represents all of the quantum cat's strategies spanned by the basis vectors.
The coordinates of the same strategy $| z \rangle\negthinspace$ read (measured) in the various bases define the quantum cat's preferences toward the food pair represented by the basis vectors.
The squares of their moduli, after normalization, measure the conditional probabilities of the quantum cat's decision to choose a particular product when the choice is restricted to the suggested food pair (the choice of the way the strategy is measured).
In this way, the quantum cat makes its choice from the offered food pair with the following probabilities:
\begin{align}\label{derek}
P(C_0|B_2) & =\frac{1}{1+|z|^{2}},& P(C_1|B_2) &=\frac{|z|^{2}}{1+|z|^{2}}, \nonumber \\
P(C_0|B_1)&=\frac{1}{1+|\frac{1-z}{1+z}|^2},
& P(C_2|B_1) &=
\frac{|\frac{1-z}{1+z}|^2}{1+|\frac{1-z}{1+z}|^2}, \\
P(C_1|B_0)&=\frac{1}{1+|\frac{1+iz}{1-iz}|^2},&
P(C_2|B_0)&= \frac{|\frac{1+iz}{1-iz}|^2}{1+|\frac{1+iz}{1-iz}|^2}\,.\nonumber
\end{align}
Strategies $| z \rangle\negthinspace$\, can be parameterized by the sphere $S_2\backsimeq \overline{\mathbb{C}}$\, using the stereographic projection, which establishes a correspondence (bijection) between the elements of $\overline{\mathbb{C}}$ and
the points of $S_2$ (\,the north pole of the sphere corresponds to the point at infinity, $|\infty\rangle\negthinspace$ :=$| 1 \rangle\negthinspace_{\,\,2}$\,). Eq. $(\ref{derek})$ leads to the mapping $\mathcal{A}_1:S_2\rightarrow D_3$ of the strategies, defined by the parameters of the sphere points, onto the three-dimensional cube of conditional probabilities:
\begin{align}\label{prop}
P(C_0|B_2) &=\frac{1-x_3}{2}, & P(C_1|B_2) &=\frac{1+x_3}{2},\nonumber \\
P(C_0|B_1) &=\frac{1+x_1}{2},& P(C_2|B_1) &=\frac{1-x_1}{2}, \\
P(C_1|B_0) &=\frac{1+x_2}{2},& P(C_2|B_0) &=\frac{1-x_2}{2}\,.\nonumber
\end{align}
Combining the above mapping with (\ref{solution}) results in the mapping\newline $\mathcal{A}:S_2\rightarrow T_2$, $\mathcal{A}:=\mathcal{A}_0\circ\mathcal{A}_1$,
of the two-dimensional sphere $S_2$ into the triangle $T_2$.
Knowledge of $\mathcal{A}$ allows us to compare the measure of the sets of possible strategies of the quantum cat and of the classical cat having the characteristics we are interested in.
\section{Quantum cat vs. Classical cat}
In this section, we compare the model described above, in which
the quantum cat can adopt any of the strategies just described,
with the quantitative results of Ref. \cite{r1}.
In order to present the range of the map $\mathcal{A}$ of
interest to us, we illustrate it with the values of this
map for 10,000 points selected at random with respect to
the uniform probability distribution on the sphere $S_2$\/. This
way of sampling the quantum cat's strategies is
justified by the fact that the uniform probability distribution
corresponds to the Fubini-Study measure on $\mathbb{C}P^{1}$
\cite{r34}, which is the only measure invariant under
any change of the quantum cat's decision regarding the chosen strategy
(a so-called quantum tactic, a homography on $\mathbb{C}P^{1}$).
Changes of the quantum cat's strategies therefore do not
influence the model discussed below.
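The sampling procedure described above is easy to reproduce. The following Python sketch (our own illustration; the function names, the random seed and the crude bin-counting estimate of the covered area are ours) draws points uniformly on $S_2$, maps them through (\ref{prop}) and (\ref{system}), and keeps the images that land inside the simplex, producing the kind of scatter shown in the right panel of Figure \ref{qhex}. The bin count gives only a rough estimate of the covered area, which the text below quotes as about 63$\%$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_sphere_points(n):
    # Uniform points on S_2 (the Fubini-Study measure on CP^1),
    # obtained by normalizing Gaussian vectors.
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def conditional_probs(x):
    # The mapping A_1: a point (x1, x2, x3) on S_2 gives the six P(C_k|B_j).
    x1, x2, x3 = x
    return ((1 - x3) / 2, (1 + x3) / 2,     # P(C0|B2), P(C1|B2)
            (1 + x1) / 2, (1 - x1) / 2,     # P(C0|B1), P(C2|B1)
            (1 + x2) / 2, (1 - x2) / 2)     # P(C1|B0), P(C2|B0)

def q_of(x):
    # The mapping A_0: solve the linear system for (q0, q1, q2).
    P02, P12, P01, P21, P10, P20 = conditional_probs(x)
    M = np.array([[P02, P01, 0.0], [P12, 0.0, P10], [0.0, P21, P20]])
    if abs(np.linalg.det(M)) < 1e-12:
        return None
    q2, q1, q0 = np.linalg.solve(M, np.full(3, 1.0 / 3.0))
    return np.array([q0, q1, q2])

pts = random_sphere_points(10_000)
qs = [q_of(x) for x in pts]
inside = np.array([q for q in qs if q is not None and np.all(q >= 0)])

# Rough estimate of the fraction of the simplex covered by the image points.
N = 40
cells = {(int(q[0] * N), int(q[1] * N)) for q in inside}
print("covered fraction (rough):", len(cells) / (N * (N + 1) / 2))
\end{verbatim}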
\subsection{Optimal strategies}
Figure \ref{qhex} presents, for both models, the regions of frequencies $q_m$ of appearance of the individual
choice alternatives between two types of food for which optimal strategies exist.
\begin{figure}
\caption{Optimal strategies: classical
(left) and quantum (right).}
\label{qhex}
\end{figure}
Let us observe that in the quantum case the area of the simplex corresponding to the optimal strategies is slightly smaller than in the classical model.
The difference lies in the disappearance of areas at three boundaries of the regular hexagon which correspond to the arc-bounded surfaces.\footnote{As a curious detail, let us provide the precise size of such an area: $((\frac{1}{3}((9(-17+38\sqrt{2}\cdot3^{\frac{1}{4}}+10\sqrt{3}-22\sqrt{2}\cdot3^{\frac{3}{4}})+
(-939+2082\sqrt{2}\cdot3^{\frac{1}{4}}+542\sqrt{3}-1202\sqrt{2}\cdot3^\frac{3}{4})\pi)^{2}+(9(-72-8\sqrt{2}\cdot3^{\frac{1}{4}}+40\sqrt{3}+6\sqrt{2}\cdot3^{\frac{3}{4}})+(-3864-504\sqrt{2}\cdot3^{\frac{1}{4}}+2232\sqrt{3}+290\sqrt{2}\cdot3^{\frac{3}{4}}
)\pi)^2))^\frac{1}{2})/(324((-3+\sqrt{2}\cdot3^{\frac{1}{4}}+2\sqrt{3}-\sqrt{2}
\cdot3^{\frac{3}{4}})^2+(-2-2\sqrt{2}\cdot3^{\frac{1}{4}}+2\sqrt{3}+\sqrt{2}\cdot3^\frac{3}{4})^2))\approx$ 0.0120471.}
Assuming a uniform measure on the possible proportions of the three food pairs, we may say that the situations where optimal strategies can be used in the quantum model make up about 63$\%$ of all possibilities.
In the classical variant, the area representing the optimal strategies makes up 67$\%$ of the simplex.
This difference will be significant for the analysis of intransitivity, discussed more precisely in the next subsection.
It is also worth mentioning that in the classical model we observe a sort of condensation of optimal strategies in the central part of the picture, in the region of balanced frequencies of all food pairs. In the quantum case, they are more evenly spread, although they also appear less frequently towards the sides of the triangle.
\subsection{Intransitive orders}
In the quantum model, we deal with an intransitive choice if one of the following conditions is fulfilled (\,see \cite{r1}; a short numerical check of these conditions is sketched after the list):
\begin{itemize}
\item $P(C_2|B_1)=\frac{1-x_1}{2}<\frac{1}{2}$,
$P(C_1|B_0)=\frac{1+x_2}{2}<\frac{1}{2}$,
$P(C_0|B_2)=\frac{1-x_3}{2}<\frac{1}{2}$\,.
\item $P(C_2|B_1)=\frac{1-x_1}{2}>\frac{1}{2}$,
$P(C_1|B_0)=\frac{1+x_2}{2}>\frac{1}{2}$,
$P(C_0|B_2)=\frac{1-x_3}{2}>\frac{1}{2}$\,.
\end{itemize}
They form two spherical equilateral triangles, each with three right angles ($\frac{\pi}{2}$).
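In terms of the sphere coordinates of (\ref{prop}), the two conditions take a particularly simple form; the following small Python helper (ours, for illustration only) can be combined with the sampling sketch of the previous section to estimate the share of intransitive strategies.
\begin{verbatim}
def intransitive(x1, x2, x3):
    # The two intransitivity conditions above, written in terms of a point
    # (x1, x2, x3) on the strategy sphere S_2.
    first  = x1 > 0 and x2 < 0 and x3 > 0   # all three probabilities < 1/2
    second = x1 < 0 and x2 > 0 and x3 < 0   # all three probabilities > 1/2
    return first or second
\end{verbatim}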
\begin{figure}
\caption{Optimal intransitive strategies: classical
(left) and quantum (right).}
\label{qstar}
\end{figure}
Figure \ref{qstar} shows in which part of the simplex of parameters $(q_0,q_1,q_2)$ intransitive strategies may be used in the two models. They form a six-armed star composed of two triangles\footnote{Each
of them corresponds to one of the two possible intransitive orders.}
in both the quantum and the classical model. As in the previous figure, one can notice that the quantum variant is characterized by higher regularity; the star has clearly marked boundaries.
In both cases, 33$\%$\footnote{Measured by the area of an equilateral triangle inscribed
into a regular hexagon.}
of the conditions allow intransitive optimal strategies to be used in a fixed order, and 44$\%$ of the conditions allow intransitive strategies with an arbitrary order.
However, it is important to remember that in the quantum model the number of all optimal strategies has decreased in relation to the classical variant. This, with the number of intransitive optimal strategies being equal, means that intransitive orders gain more importance in the quantum model. It is not the only reason leading to such a conclusion (see the next subsection).
\subsection{Transitive orders}
Let us have a closer look at Figure \ref{qtrans}. It presents a simplex area for which there exist transitive optimal strategies in both models.
\begin{figure}
\caption{Optimal transitive strategies: classical
(left) and quantum (right).}
\label{qtrans}
\end{figure}
In the classical case, optimal transitive strategies cover the same
area of the simplex as all optimal strategies; however, they occur
less often in the center of the simplex (near the point
$q_0=q_1=q_2=\frac{1}{3}$). The quantum version is essentially
different: transitive optimal strategies do not appear
within the boundaries of the hexagon in the central part of the
picture (thus, they make up about 41$\%$). Let us observe that
this is the area where the two different intransitive orders
superimpose (22$\%$).\footnote{The area of the regular six-armed
star is twice the area of the hexagon inscribed
into it.} Therefore, one cannot define for each intransitive
order a transitive order whose effects are identical.
Moreover, the transitive strategies appear much less frequently
within the arms of the star formed by the intransitive orders. The above
remarks point to the fact that in the quantum model (within the
realm of pure strategies) intransitive preferences
gain significantly in importance. To make the analysis clear,
let us sum up our quantitative discussion by gathering the
rounded results into a table:
\begin{table}[htbp]
\caption{Comparison of achievability of various types of optimal
strategies in both models. }
\centering\footnotesize
\begin{tabular}{|c|c|c|c|} \hline
\vphantom{$\int^1$}&
All & Intransitive & Transitive
\\[1pt]\hline
\vphantom{$\int^1$}Classical model \vphantom{$F^K$} & 67 $\%$ & 44 $\%$ & 67 $\%$
\\[1pt]\hline \vphantom{$\int^1$} Quantum model\vphantom{$F^K$}
& 63 $\%$ & 44 $\%$ & 41 $\%$ \\[1pt]\hline
\end{tabular}
\end{table}
\subsection{Remark about quantum mixed strategies}
Any quantum cat's mixed strategy $\rho$
can be identified with a point $p$ inside a ball whose boundary is a set of pure strategies represented by a Bloch sphere $S_2$.
A line passing through a point $p$ and the centre of the ball cuts the sphere in two antipodal points $-\vec{v}$ and $\vec{v}$.
The point $p$ divides the segment [$-\vec{v}$,\,$\vec{v}$] in the same ratio as the ratio of the weights $w_{v}$ and $w_{-v}$
in the representation of the mixed strategy $\rho$ as a convex combination of two pure strategies:
\begin{displaymath}
\rho = w_v |z_v\rangle\langle z_v|+ w_{-v} |z_{-v}\rangle\langle z_{-v}|.
\end{displaymath}
Two antipodal points $-\vec{v}$ and $\vec{v}$ of the sphere represent pure cat's strategies with the same property
(intransitive or transitive).\footnote{If the coordinates of a vector $\vec{v}$ satisfy one of the conditions of intransitivity (see Section 5.2), then the coordinates of $-\vec{v}$ satisfy the other one.}
Since the formulas (\ref{prop}) are linear, each point lying on the segment [$-\vec{v}$,\,$\vec{v}$] represents a strategy with the same property as the points at the ends of this segment. \newline
The randomized model, in which the player uses mixed
strategies, has a unique property: the preferences of mixed
strategies do not differ from the preferences of the corresponding pure
strategies lying on the line passing through the centre of the
sphere and the point inside the sphere that specifies the mixed strategy.
\section{Conclusion}
The aim of this work has been to present some methods of
quantitative analysis of, among other things, intransitive orders
within the framework of quantum game theory. We compared the results that the two models (quantum and classical) yield. The geometrical interpretation presented in
this article can turn out to be very helpful in
understanding various quantum models in use.
It turns out that the order imposed by the player's rational
preferences can be intransitive. The quantum model gives
considerable weight to intransitive orders. They constitute
a larger share of all optimal strategies than in the classical case.
Moreover, for some frequencies of appearance of the pairs of food, the
quantum cat is able to achieve optimal results only thanks to an
intransitive strategy. This is a significant difference with respect
to the classical cat's situation. However, it must be admitted
that this refers only to simple patterns of the cat's behavior.
Perhaps more advanced research into quantum game theory
will confirm the validity of intransitive decision algorithms,
which often contradict our intuition. A thorough
analysis of this problem would be of great importance to those
who investigate the performance of our minds or to the construction of
thinking machines.
Mathematics has often been inspired by games. This has given rise
to new fields of research (for instance, studies of games of chance gave
rise to a large branch of mathematics called probability theory).
In our everyday lives, we encounter various situations of
conflict and cooperation where we have to make particular
decisions. Many problems in the fields of economics and political
science can be expressed in the language of quantum
game theory. In physics, the problem of measurement can be
considered as a game against Nature: the observer tries to
gain as much information as possible about the observed object. Other
experiments can be modelled in the same way. Due to the
wide range of possible applications, quantum game theory may shed
new light on contemporary physics \cite{r29}. It may also
considerably influence the development of science and should
prepare us for the coming era of quantum computers. Therefore,
it is vital to carry on research into this new field.
\begin{center}
{\bf Acknowledgements}
\end{center}
This paper has been supported
by the {\bf Polish Ministry of Scientific Research and Information
Technology} under the (solicited) grant No {\bf
PBZ-MIN-008/P03/2003}.
\end{document}
\begin{document}
\title{Smith Normal Form in Combinatorics}
\date{\today}
\author{Richard P. Stanley}
\email{[email protected]}
\address{Department of Mathematics, University of Miami, Coral Gables,
FL 33124}
\thanks{Partially supported by NSF grant DMS-1068625.}
\begin{abstract}
This paper surveys some combinatorial aspects of Smith normal form,
and more generally, diagonal form. The discussion includes general
algebraic properties and interpretations of Smith normal form,
critical groups of graphs, and Smith normal form of random integer
matrices. We then give some examples of Smith normal form and diagonal
form arising from (1) symmetric functions, (2) a result of Carlitz, Roselle,
and Scoville, and (3) the Varchenko matrix of a hyperplane arrangement.
\end{abstract}
\maketitle
\section{Introduction}
Let $A$ be an $m\times n$ matrix over a field $K$. By means of
elementary row and column operations, namely:
\begin{enumerate}\item
add a multiple of a row
(respectively, column) to another row (respectively, column), or
\item multiply a row or column by a unit (nonzero element) of $K$,
\end{enumerate}
we can transform $A$ into a matrix that vanishes off the main diagonal
(so $A$ is a diagonal matrix if $m=n$) and whose main diagonal
consists of $k$ 1's followed by $m-k$ 0's. Moreover, $k$ is uniquely
determined by $A$ since $k=\mathrm{rank}(A)$.
What happens if we replace $K$ by another ring $R$ (which we always
assume to be commutative with identity 1)? We allow the same row and
column operations as before. Condition (2) above is ambiguous since a
unit of $R$ is not the same as a nonzero element. We want the former
interpretation, i.e., we can multiply a row or column by a unit
only. Equivalently, we transform $A$ into a matrix of the form $PAQ$,
where $P$ is an $m\times m$ matrix and $Q$ is an $n\times n$ matrix,
both invertible over $R$. In other words, $\det P$ and $\det Q$ are
units in $R$. Now the situation becomes much more complicated.
We say that $PAQ$ is a \emph{diagonal form} of $A$ if it vanishes off
the main diagonal. (Do not confuse the diagonal form of a square
matrix with the matrix $D$ obtained by diagonalizing $A$. Here
$D=XAX^{-1}$ for some invertible matrix $X$, and the diagonal entries
are the eigenvalues of $A$.) If $A$ has a diagonal form $B$ whose main
diagonal is $(\alpha_1,\dots,\alpha_r,0,\dots,0)$, where $\alpha_i$
divides $\alpha_{i+1}$ in $R$ for $1\leq i\leq r-1$, then we call $B$
a \emph{Smith normal form} (SNF) of $A$. If $A$ is a nonsingular
square matrix, then taking determinants of both sides of the equation
$PAQ=B$ shows that $\det A=u\alpha_1\cdots \alpha_n$ for some unit
$u\in R$. Hence an SNF of
$A$ yields a factorization of $\det A$. Since there is a huge
literature on determinants of combinatorially interesting matrices
(e.g., \cite{kratt1}\cite{kratt2}), finding an SNF of such matrices
could be a fruitful endeavor.
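As a concrete illustration (ours, not part of the survey), the following Python sketch computes an SNF of a small integer matrix with SymPy; we assume a reasonably recent SymPy, whose \texttt{smith\_normal\_form} routine works over a principal ideal domain such as $\mathbb{Z}$. The product of the diagonal entries recovers $\det A$ up to a unit, as noted above.
\begin{verbatim}
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# A small integer matrix; the SNF diagonal (alpha_1, alpha_2, alpha_3)
# satisfies alpha_1 | alpha_2 | alpha_3, and its product is det(A) up to sign.
A = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, 4, 16]])
S = smith_normal_form(A, domain=ZZ)
print(S)        # diagonal matrix with divisibility along the diagonal
print(A.det())  # equals the product of the diagonal entries up to sign
\end{verbatim}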
In the next section we review the basic properties of SNF, including
questions of existence and uniqueness, and some algebraic aspects. In
Section~\ref{sec:cgp} we discuss connections between SNF and the
abelian sandpile or chip-firing process on a graph. The distribution
of the SNF of a random integer matrix is the topic of
Section~\ref{sec:random}. The remaining sections deal with some
examples and open problems related to the SNF of combinatorially
defined matrices.
We will state most of our results with no proof or just the
hint of a proof. It would take a much longer paper to summarize all
the work that has been done on computing SNF for special matrices. We
therefore will sample some of this work based on our own interests and
research. We will include a number of open problems which we hope will
stir up some further interest in this topic.
\section{Basic properties}
In this section we summarize without proof the
basic properties of SNF. We will use the following notation. If $A$ is
an $m\times n$ matrix over a ring $R$, and $B$ is the matrix with
$(\alpha_1,\dots,\alpha_m)$ on the main diagonal and 0's elsewhere
then we write
$A\stackrel{\mathrm{SNF}}{\longrightarrow}(\alpha_1,\dots,\alpha_m)$ to indicate that $B$ is an SNF of
$A$.
\subsection{Existence and uniqueness} For
connections with combinatorics we are primarily interested in the ring
$\mathbb{Z}$ or in polynomial rings over a field or over $\mathbb{Z}$. However, it
is still interesting to ask over what rings $R$ does a matrix always
have an SNF, and how unique is the SNF when it exists. For this
purpose, define an \emph{elementary divisor ring} $R$ to be a ring
over which every matrix has an SNF. Also define a
\emph{B\'ezout ring} to be a commutative ring for
which every finitely generated ideal is principal. Note that a
noetherian B\'ezout ring is (by definition) a principal ideal ring,
i.e., a ring (not necessarily an integral domain) for which every
ideal is principal. An important example of a principal ideal ring
that is not a domain is $\mathbb{Z}/k\mathbb{Z}$ (when $k$ is not prime). Two
examples of non-noetherian B\'ezout domains are the ring of entire
functions and the ring of all algebraic integers.
\begin{thm} \label{thm:rings}
Let $R$ be a commutative ring with identity.
\begin{enumerate}\item If every rectangular matrix over $R$ has an SNF, then $R$ is
a B\'ezout ring. In fact, if $I$ is an ideal with a minimum size
generating set $a_1,\dots,a_k$, where $k\geq 2$, then the $1\times 2$ matrix
$[a_1,a_2]$ does not have an SNF. See \cite[p.~465]{kap}.
\item Every diagonal matrix over $R$ has an SNF if and only if $R$ is
a B\'ezout ring \cite[(3.1)]{l-l-s}.
\item A B\'ezout domain $R$ is an elementary divisor domain if and only
if it satisfies:
$$ \mbox{For all $a,b,c\in R$ with $(a,b,c)=R$, there exists $p,q\in
R$ such that $(pa,pb+qc)=R$.} $$
See \cite[{\textsection}5.2]{kap}\cite[{\textsection}6.3]{f-s}.
\item Every principal ideal ring is an elementary divisor ring. This
is the classical existence result (at least for principal ideal
domains), going back to Smith \cite{smith} for the integers.
\item Suppose that $R$ is an \emph{associate ring}, that is, if two
elements $a$ and $b$ generate the same principal ideal there is a
unit $u$ such that $ua=b$. (Every integral domain is an associate
ring.) If a matrix $A$ has an SNF $PAQ$ over $R$, then $PAQ$ is
unique (up to multiplication of each diagonal entry by a
unit). This result is immediate from \cite[{\textsection}IV.5,
Thm.~5.1]{l-q}.
\end{enumerate}
\end{thm}
It is open whether every B\'ezout domain is an elementary divisor
domain. For a recent paper on this question, see Lorenzini
\cite{lorenzini}.
Let us give a simple example where SNF does not exist.
\begin{ex}
Let $R=\mathbb{Z}[x]$, the polynomial ring in one variable over $\mathbb{Z}$, and
let $A=\left[ \begin{array}{cc} 2 & 0\\ 0 & x \end{array}
\right]$. Clearly $A$ has a diagonal form (over $R$) since it is
already a diagonal matrix. Suppose that $A$ has an SNF $B=PAQ$. The
only possible SNF (up to units $\pm 1$) is $\mathrm{diag}(1,2x)$, since $\det
B = \pm 2x$. Setting $x=2$ in $B=PAQ$ yields the SNF $\mathrm{diag}(1,4)$
over $\mathbb{Z}$, but setting $x=2$ in $A$ yields the SNF $\mathrm{diag}(2,2)$.
\end{ex}
Let us remark that there is a large literature on the computation of
SNF over a PID (or sometimes more general rings) which we will not
discuss. We are unaware of any literature on deciding whether a given
matrix over a more general ring, such as $\mathbb{Q}[x_1,\dots,x_n]$ or
$\mathbb{Z}[x_1,\dots,x_n]$, has an SNF.
\subsection{Algebraic interpretation}
Smith normal form, or more generally diagonal form, has a simple
algebraic interpretation. Suppose that the $m\times n$ matrix $A$ over
the ring $R$ has a diagonal form with diagonal entries
$\alpha_1,\dots,\alpha_m$. The rows $v_1,\dots,v_m$ of $A$ may be
regarded as elements of the free $R$-module $R^n$.
\begin{thm}
We have
$$ R^n/(v_1,\dots,v_m) \cong (R/\alpha_1 R)\oplus\cdots \oplus
(R/\alpha_m R). $$
\end{thm}
\begin{proof}
It is easily seen that the allowed row and column operations do not
change the isomorphism class of the quotient of $R^n$ by the rows of
the matrix. Since the conclusion is tautological for diagonal
matrices, the proof follows.
\end{proof}
The quotient module $R^n/(v_1,\dots,v_m)$ is called the
\emph{cokernel} (or sometimes the \emph{Kasteleyn cokernel}) of the
matrix $A$, denoted $\mathrm{coker}(A)$.
Recall the basic result from algebra that a finitely-generated module
$M$ over a PID $R$ is a (finite) direct sum of cyclic modules
$R/\alpha_i R$. Moreover, we can choose the $\alpha_i$'s so that
$\alpha_i|\alpha_{i+1}$ (where $\alpha|0$ for all $\alpha\in R$). In
this case the $\alpha_i$'s are unique up to multiplication by
units. In the case $R=\mathbb{Z}$, this result is the ``fundamental theorem
for finitely-generated abelian groups.'' For a general PID $R$, this
result is equivalent to the PID case of Theorem~\ref{thm:rings}(4).
\subsection{A formula for SNF}
Recall that a \emph{minor} of a matrix $A$ is the determinant of some
square submatrix.
\begin{thm} \label{thm:minors}
Let $R$ be a unique factorization domain (e.g., a PID), so that any
two elements have a greatest common divisor (gcd). Suppose that the
$m\times n$ matrix $M$ over $R$ satisfies $M\stackrel{\mathrm{SNF}}{\longrightarrow}
(\alpha_1,\dots,\alpha_m)$. Then for $1\leq k\leq m$ we have that
$\alpha_1 \alpha_2\cdots \alpha_k$ is equal to the gcd of all $k\times
k$ minors of $M$, with the convention that if all $k\times k$ minors
are 0, then their gcd is 0.
\end{thm}
\noindent \emph{Sketch of proof.} The assertion is easy to check if
$M$ is already in Smith normal form, so we have to show that the
allowed row and column operations preserve the gcd of the $k\times k$
minors. For $k=1$ this is easy. For $k>1$ we can apply the $k=1$ case
to the matrix $\wedge^k M$, the $k$th exterior power of $M$. For
details, see \cite[Prop.~8.1]{m-r}.
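The statement of Theorem~\ref{thm:minors} is easy to test numerically. The sketch below (ours; it assumes SymPy as in the earlier example) compares the running products $\alpha_1\cdots\alpha_k$ with the gcds of the $k\times k$ minors for a small matrix.
\begin{verbatim}
from itertools import combinations
from math import gcd
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def gcd_of_k_minors(A, k):
    # gcd of all k x k minors of an integer matrix (0 if they all vanish).
    m, n = A.shape
    g = 0
    for rows in combinations(range(m), k):
        for cols in combinations(range(n), k):
            g = gcd(g, abs(int(A.extract(list(rows), list(cols)).det())))
    return g

A = Matrix([[12, 6, 4], [3, 9, 6], [2, 16, 14]])
S = smith_normal_form(A, domain=ZZ)
alphas = [abs(S[i, i]) for i in range(3)]
prod = 1
for k in range(1, 4):
    prod *= alphas[k - 1]
    print(k, prod, gcd_of_k_minors(A, k))  # the last two columns agree
\end{verbatim}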
\section{The critical group of a graph} \label{sec:cgp}
Let $G$ be a finite graph on the vertex set $V$. We allow multiple
edges but not loops (edges from a vertex to itself). (We could allow
loops, but they turn out to be irrelevant.) Write $\mu(u,v)$ for the
number of edges between vertices $u$ and $v$, and $\deg v$ for the
degree (number of incident edges) of vertex $v$. The \emph{Laplacian
matrix} $\bm{L}=\bm{L}(G)$ is the matrix with rows and columns
indexed by the elements of $V$ (in some order), with
$$ \bm{L_{uv}} =\left\{ \begin{array}{rl}
-\mu(u,v), & \mathrm{if}\ u\neq v\\
\deg(v), & \mathrm{if}\ u=v. \end{array} \right. $$
The matrix $\bm{L}(G)$ is always singular since its rows sum to
0.\ \ Let $\bm{L_0}=\bm{L_0}(G)$ be $\bm{L}$ with the last row and
column removed. (We can just as well remove any row and column.) The
well-known \emph{Matrix-Tree Theorem} (e.g., \cite[Thm.~5.6.8]{ec2})
asserts that $\det \bm{L_0}=\kappa(G)$, the number of spanning trees
of $G$. Equivalently, if $\#V=n$ and $\bm{L}$ has eigenvalues
$\theta_1,\dots, \theta_n$, where $\theta_n=0$, then $\kappa(G) =
\theta_1\cdots \theta_{n-1}/n$. We are regarding $\bm{L}$ and
$\bm{L_0}$ as matrices over $\mathbb{Z}$, so they both have an SNF. It is
easy to see that $\bm{L_0}\stackrel{\mathrm{SNF}}{\longrightarrow} (\alpha_1,\dots,\alpha_{n-1})$ if and
only if $\bm{L}\stackrel{\mathrm{SNF}}{\longrightarrow} (\alpha_1,\dots, \alpha_{n-1},0)$.
Let $G$ be connected. The group coker$(\bm{L_0})$ has an interesting
interpretation in terms of chip-firing, which we explain below. For
this reason there has been a lot of work on finding the SNF of
Laplacian matrices $\bm{L}(G)$.
A \emph{configuration} is a finite collection $\sigma$ of
indistinguishable chips distributed among the vertices of the graph
$G$. Equivalently, we may regard $\sigma$ as a function $\sigma \colon
V\to \mathbb{N}=\{0,1,2,\dots\}$. Suppose that for some vertex $v$ we have
$\sigma(v)\geq \deg(v)$. The \emph{toppling} or \emph{firing} $\tau$ of
vertex $v$ is the configuration obtained by sending a chip from $v$ along
each incident edge to the vertex at the other end of the edge. Thus
$$ \tau(u) = \left\{ \begin{array}{rl}
\sigma(v)-\deg(v), & u=v\\
\sigma(u)+\mu(u,v), & u\neq v. \end{array} \right. $$
Now choose a vertex $w$ of $G$ to be a \emph{sink}, and ignore chips
falling into the sink. (We never topple the sink.) This dynamical system
is called the \emph{abelian sandpile} model. A \emph{stable}
configuration is one for which no vertex can topple, i.e.,
$\sigma(v)<\deg(v)$ for all vertices $v\neq w$. It is easy to see that
after finitely many topples a stable configuration will be reached,
which is independent of the order of topples. (This independence of
order accounts for the word ``abelian'' in ``abelian sandpile.'')
Let $M$ denote the set of all stable configurations. Define a binary
operation $\oplus$ on $M$ by vertex-wise addition followed by
stabilization. An \emph{ideal} of $M$ is a subset $J\subseteq M$
satisfying $\sigma \oplus J\subseteq J$ for all $\sigma\in M$. The
\emph{sandpile group} or \emph{critical group} $K(G)$ is the minimal
ideal of $M$, i.e., the intersection of all ideals. (Following the
survey \cite{l-p} of Levine and Propp, the reader is encouraged to
prove that the minimal ideal of any finite commutative monoid is a
group.) The group $K(G)$ is independent of the choice of sink up to
isomorphism.
An equivalent but somewhat less abstract definition of $K(G)$ is the
following. A configuration $u$ is called \emph{recurrent} if, for all
configurations $v$, there is a configuration $y$ such that $v\oplus
y=u$. A configuration that is both stable and recurrent is called
\emph{critical}. Given critical configurations $C_1$ and $C_2$,
define $C_1+C_2$ to be the unique critical configuration reachable
from the vertex-wise sum of $C_1$ and $C_2$.
This operation turns the set of critical configurations into an
abelian group isomorphic to the critical group $K(G)$.
The basic result on $K(G)$ \cite{biggs}\cite{dhar} is the following.
\begin{thm}
We have $K(G)\cong \mathrm{coker}(\bm{L_0}(G))$. Equivalently, if $\bm{L_0}(G)
\stackrel{\mathrm{SNF}}{\longrightarrow} (\alpha_1,\dots,\alpha_{n-1})$, then
$$ K(G) \cong \mathbb{Z}/\alpha_1\mathbb{Z} \oplus \cdots \oplus
\mathbb{Z}/\alpha_{n-1} \mathbb{Z}. $$
\end{thm}
Note that by the Matrix-Tree Theorem we have $\#K(G)=\det \bm{L_0(G)}
= \kappa(G)$. Thus the critical group $K(G)$ gives a canonical
factorization of $\kappa(G)$. When $\kappa(G)$ has a ``nice''
factorization, it is especially interesting to determine $K(G)$. The
simplest case is $G=K_n$, the complete graph on $n$ vertices. We have
$\kappa(K_n)=n^{n-2}$, a classic result going back to Sylvester and
Borchardt. There is a simple trick for computing $K(K_n)$ based on
Theorem~\ref{thm:minors}. Let $\bm{L_0}(K_n)\stackrel{\mathrm{SNF}}{\longrightarrow}
(\alpha_1,\dots,\alpha_{n-1})$. Since $\bm{L_0}(K_n)$ has an entry
equal to $-1$, it follows from Theorem~\ref{thm:minors} that
$\alpha_1=1$. Now the $2\times 2$ submatrices (up to row and column
permutations) of $\bm{L_0}(K_n)$ are given by
$$ \left[ \begin{array}{cc} n-1 & -1\\ -1 & n-1 \end{array} \right],
\quad
\left[ \begin{array}{cc} n-1 & -1\\ -1 & -1\end{array} \right],
\quad
\left[ \begin{array}{cc} -1 & -1\\ -1 & -1 \\ \end{array} \right],
$$
with determinants $n(n-2)$, $-n$, and 0. Hence $\alpha_2=n$ by
Theorem~\ref{thm:minors}. Since $\prod \alpha_i = \pm n^{n-2}$ and
$\alpha_i|\alpha_{i+1}$, we get $K(K_n)\cong (\mathbb{Z}/n\mathbb{Z})^{n-2}$.
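The computation of $K(K_n)$ can be checked directly for small $n$; the sketch below (ours, again assuming SymPy) builds the reduced Laplacian of $K_n$ and reads off its SNF, whose diagonal should be $(1,n,\dots,n)$.
\begin{verbatim}
from sympy import ZZ, eye, ones
from sympy.matrices.normalforms import smith_normal_form

def reduced_laplacian_Kn(n):
    # Laplacian of K_n is n*I - J (degrees n-1, off-diagonal entries -1);
    # delete the last row and column to obtain L_0.
    L = n * eye(n) - ones(n, n)
    return L[: n - 1, : n - 1]

n = 6
S = smith_normal_form(reduced_laplacian_Kn(n), domain=ZZ)
print([abs(S[i, i]) for i in range(n - 1)])   # expect [1, n, n, ..., n]
\end{verbatim}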
\textsc{Note.} A similar trick works for the matrix $M=\left[
\binom{2(i+j)}{i+j}\right]_{i,j=0}^{n-1}$, once it is known that
$\det M=2^{n-1}$ (e.g., \cite[Thm.~9]{e-r-r}). Every entry of $M$ is
even except for $M_{00}$, so $2|\alpha_2$, yielding $M\stackrel{\mathrm{SNF}}{\longrightarrow}
(1,2,2,\dots,2)$. The matrix
$\left[\binom{3(i+j)}{i+j}\right]_{i,j=0}^{n-1}$ is much more
complicated. For instance, when $n=8$ the diagonal elements of the
SNF are
$$ 1,\ 3,\ 3,\ 3,\ 3,\ 6,\ 2\cdot 3\cdot 29\cdot 31,\ 2\cdot
3^2\cdot 11\cdot
29\cdot 31\cdot 37\cdot 41. $$
It seems that if $d_n$ denotes the number of diagonal entries of the
SNF that are equal to 3, then $d_n$ is close to $\frac 23n$. The least
$n$ for which $|d_n-\lfloor \frac 23n\rfloor|>1$ is $n=224$.
For the determinant of $M$, see \cite[(10)]{g-x}. If
$M=\left[ \binom{a(i+j)}{i+j}\right]_{i,j=0}^{n-1}$ for $a\geq 4$,
then $\det M$ does not seem ``nice'' (it doesn't factor into small
factors).
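For small $n$ the claims in the Note are also easy to verify by machine; the following sketch (ours) computes the SNF of $\left[\binom{2(i+j)}{i+j}\right]_{i,j=0}^{n-1}$, and replacing the $2$ by a $3$ reproduces the more complicated diagonal entries quoted above.
\begin{verbatim}
from sympy import Matrix, ZZ, binomial
from sympy.matrices.normalforms import smith_normal_form

n = 6
M = Matrix(n, n, lambda i, j: binomial(2 * (i + j), i + j))
S = smith_normal_form(M, domain=ZZ)
print(M.det())                            # 2^(n-1)
print([abs(S[i, i]) for i in range(n)])   # [1, 2, 2, ..., 2]
\end{verbatim}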
The critical groups of many classes of graphs have been computed. As a
couple of nice examples, we mention threshold graphs (work of
B. Jacobson \cite{jac}) and Paley graphs (D. B. Chandler, P. Sin, and
Q. Xiang \cite{c-s-x}). Critical groups have been generalized in
various ways. In particular, A. M. Duval, C. J. Klivans, and
J. L. Martin \cite{d-k-m} consider the critical group of a simplicial
complex.
\section{Random matrices} \label{sec:random}
There is a huge literature on the distribution of eigenvalues and
eigenvectors of a random matrix. Much less has been done on the
distribution of the SNF of a random matrix. We will restrict our
attention to the situation where $k\geq 0$ and $M$ is an $m\times n$
integer matrix with independent entries uniformly distributed in the
interval $[-k,k]$, in the limit as $k\to\infty$. We write
$P_k^{(m,n)}(\mathcal{E})$ for the probability of some event under
this model (for fixed $k$). To illustrate that the distribution of
SNF in such a model might be interesting, suppose that $M\xrightarrow{\mathrm{SNF}}
(\alpha_1,\dots,\alpha_m)$. Let $j\geq 1$. The probability
$P_k^{(m,n)}(\alpha_1=j)$ that $\alpha_1=j$ is equal to the
probability that $mn$ integers between $-k$ and $k$ have gcd equal to
$j$. It is then a well-known, elementary result that when $mn>1$,
\begin{equation} \lim_{k\to\infty}P_k^{(m,n)}(\alpha_1=j) =
\frac{1}{j^{mn}\zeta(mn)}, \label{eq:a1ej} \end{equation}
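As an illustration, for a $2\times 2$ matrix ($mn=4$),
equation~\eqref{eq:a1ej} gives
$\lim_{k\to\infty}P_k^{(2,2)}(\alpha_1=1)=1/\zeta(4)=90/\pi^4\approx 0.924$
and $\lim_{k\to\infty}P_k^{(2,2)}(\alpha_1=2)=1/(2^4\zeta(4))\approx 0.058$.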
where $\zeta$ denotes the Riemann zeta function. This suggests
looking, for instance, at such numbers as
$$ \lim_{k\to\infty}P_k^{(m,n)}(\alpha_1=1,
\alpha_2=2, \alpha_3=12). $$
In fact, it turns out that if $m<n$ and we specify
the values $\alpha_1,\dots, \alpha_m$ (subject of course to
$\alpha_1|\alpha_2|\cdots |\alpha_m$), then the probability as
$k\to\infty$ exists and is strictly between 0 and 1. For $m=n$ the
same is true for specifying $\alpha_1,\dots,\alpha_{n-1}$. However, for
any $j\geq 1$, we have $\lim_{k\to\infty}P_k^{(n,n)}(\alpha_n=j)=0$.
The first significant result of this nature is due to Ekedahl
\cite[{\textsection}3]{ekedahl}, namely, let
$$ \sigma(n) = \lim_{k\to\infty} P_k^{(n,n)}(\alpha_{n-1}=1). $$
Note that this number is just the probability (as $k\to\infty$) that
the cokernel of the $n\times n$ matrix $M$ is cyclic (has one
generator). Then
\begin{equation} \sigma(n) = \frac{\prod_p
\left(1+\frac{1}{p^2}+\frac{1}{p^3}+\cdots+\frac{1}{p^n}\right)}
{\zeta(2)\zeta(3)\cdots\zeta(n)}, \label{eq:eke1} \end{equation}
where $p$ ranges over all primes. It is not hard to deduce that
\begin{eqnarray} \lim_{n\to\infty}\sigma(n) & = &
\frac{1}{\zeta(6)\prod_{j\geq 4}\zeta(j)} \label{eq:eke}\\[.5em] &
= & 0.84693590173\cdots. \nonumber\end{eqnarray}
At first sight it seems surprising that this latter probability is not
1.\ \ It is the probability (as $k\to\infty$, $n\to\infty$) that the
$n^2$ $(n-1)\times (n-1)$ minors of $M$ are relatively prime. Thus the
$(n-1)\times (n-1)$ minors do not behave at all like $n^2$ independent
random integers.
Further work on the SNF of random integer matrices appears in
\cite{wood2} and the references cited there. These papers are
concerned with powers of a fixed prime $p$
dividing the $\alpha_i$'s. Equivalently, they are working (at least
implicitly) over the $p$-adic integers $\mathbb{Z}_p$. The first paper to
treat systematically SNF over $\mathbb{Z}$ is by Wang and Stanley
\cite{w-s}. One would expect the behavior of the prime power
divisors to be independent for different primes as $k\to\infty$. This
is indeed the case, though it takes some work to prove.
In particular, for any positive integers $h\leq m\leq n$ and
$a_1|a_2|\cdots|a_h$, Wang and Stanley determine
$$ \lim_{k\to\infty}P_k^{(m,n)}(\alpha_1=a_1,\dots,\alpha_h=a_h). $$
A typical result is the following:
\begin{eqnarray*} \lim_{k\to\infty}P_k^{(n,n)}(\alpha_1=2,\alpha_2=6) & = &
2^{-n^2}\left(1-\sum_{i=(n-1)^2}^{n(n-1)}2^{-i}+
\sum_{i=n(n-1)+1}^{n^2-1}2^{-i}\right)\\ & & \cdot
\frac 32\cdot 3^{-(n-1)^2}(1-3^{-(n-1)^2})(1-3^{-n})^2\\
& & \cdot \prod_{p>3}\left(1-\sum_{i=(n-1)^2}^{n(n-1)}p^{-i}+
\sum_{i=n(n-1)+1}^{n^2-1}p^{-i}\right). \end{eqnarray*}
A further result in \cite{w-s} is an extension of Ekedahl's formula
\eqref{eq:eke1}. The authors obtain explicit formulas for
$$ \rho_j(n)\mathrel{\mathop:}=\lim_{k\to\infty} P_k^{(n,n)}(\alpha_{n-j}=1), $$
i.e., the probability (as $k\to\infty$) that the cokernel of $M$ has
at most $j$ generators. Thus \eqref{eq:eke1} is the case
$j=1$. Write $\rho_j=\lim_{n\to\infty} \rho_j(n)$. Numerically we have
\begin{eqnarray*} \rho_1 & = & 0.846935901735\\
\rho_2 & = & 0.994626883543\\
\rho_3 & = & 0.999953295075\\
\rho_4 & = & 0.999999903035 \\
\rho_5 & = & 0.999999999951. \end{eqnarray*}
The convergence $\rho_n\to 1$ looks very rapid. In fact
\cite[(4.38)]{w-s},
$$ \rho_n = 1 -
c\,2^{-(n+1)^2}(1-2^{-n}+O(4^{-n})), $$
where
$$ c = \frac{1}{(1-\frac 12)(1-\frac 14)(1-\frac 18)\cdots}
= 3.46275\cdots. $$
A major current topic related to eigenvalues and eigenvectors of
random matrices is \emph{universality} (e.g., \cite{t-v}). A
certain distribution of eigenvalues (say) occurs for a large class of
probability distributions on the matrices, not just for a special
distribution like the GUE model on the space of $n\times n$ Hermitian
matrices. Universality of SNF over the rings $\mathbb{Z}_p$ of $p$-adic
integers and over $\mathbb{Z}/n\mathbb{Z}$ was considered by Kenneth Maples
\cite{maples}.
On the other hand, Clancy, Kaplan, Leake, Payne and Wood \cite{cklpw}
make some conjectures for the SNF distribution of the Laplacian matrix
of an Erd\H{o}s-R\'enyi random graph that differs from the
distribution obtained in \cite{w-s}. (It is clear, for instance, that
$\alpha_1=1$ for Laplacian matrices, in contradistinction to
equation~\eqref{eq:a1ej}, but conceivably equation~\eqref{eq:eke} could
carry over.) Some progress on these conjectures was made by Wood
\cite{wood}.
\section{Symmetric functions}
\subsection{An up-down linear transformation}
Many interesting matrices arise in the theory of symmetric
functions. We will adhere to notation and terminology on this subject
from \cite[Chap.~7]{ec2}. For our first example, let $\Lambda_\mathbb{Q}^n$
denote the $\mathbb{Q}$-vector space of homogeneous symmetric functions
of degree $n$ in the variables $x=(x_1,x_2,\dots)$ with rational
coefficients. One basis for $\Lambda_\mathbb{Q}^n$ consists of the
\emph{Schur functions} $s_\lambda$ for $\lambda\vdash n$. Define a
linear transformation $\psi_n\colon\Lambda_\mathbb{Q}^n\to\Lambda_\mathbb{Q}^n$
by
$$ \psi_n(f) = \frac{\partial}{\partial p_1}p_1f. $$
Here $p_1=s_1=\sum x_i$, the first power sum symmetric function. The
notation $\frac{\partial}{\partial p_1}$ indicates that we
differentiate with respect to $p_1$ after writing the argument as a
polynomial in the $p_k$'s, where $p_k=\sum x_i^k$. It is a standard
result \cite[Thm.~7.15.7, Cor.~7.15.9, Exer.~7.35]{ec2} that
for $\lambda\vdash n$,
\begin{eqnarray*} p_1 s_\lambda & = & \sum_{\substack{\mu\vdash
n+1\\ \mu\supset\lambda}}s_\mu\\
\frac{\partial}{\partial p_1}s_\lambda & = & s_{\lambda/1}\ = \
\sum_{\substack{\mu\vdash n-1\\ \mu\subset\lambda}} s_\mu. \end{eqnarray*}
Note that the power sum $p_\lambda$, $\lambda\vdash n$, is an
eigenvector for $\psi_n$ with eigenvalue
$m_1(\lambda)+1$, where $m_1(\lambda)$ is the number of 1's in
$\lambda$. Hence
$$ \det \psi_n = \prod_{\lambda\vdash n}
(m_1(\lambda)+1). $$
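For instance, for $n=3$ the partitions $(3)$, $(2,1)$, and $(1,1,1)$ give
eigenvalues $1$, $2$, and $4$, so $\det\psi_3=8$.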
The factorization of $\det\psi_n$ suggests looking at the SNF of
$\psi_n$ with respect to the basis $\{s_\lambda\}$. We denote this
matrix by $[\psi_n]$. Since the matrix transforming the $s_\lambda$'s
to the $p_\mu$'s is not invertible over $\mathbb{Z}$, we cannot simply
convert the diagonal matrix with entries $m_1(\lambda)+1$ to SNF. As
a special case of a more general conjecture Miller and Reiner
\cite{m-r} conjectured the SNF of $[\psi_n]$, which was then proved by
Cai and Stanley \cite{c-s}. Subsequently Nie \cite{nie} and Shah
\cite{shah} made some further progress on the conjecture of Miller and
Reiner. We state two equivalent forms of the result of Cai and
Stanley.
\begin{thm} \label{thm:cai}
Let $[\psi_n]\xrightarrow{\mathrm{SNF}} (\alpha_1,\dots,\alpha_{p(n)})$, where $p(n)$
denotes the number of partitions of $n$.
\begin{enumerate}\item[(a)] The $\alpha_i$'s are as follows:
\begin{itemize}
\item $(n+1)(n-1)!$, with multiplicity $1$
\item $(n-k)!$, with multiplicity $p(k+1)-2p(k)+p(k-1)$, $3\leq
k\leq n-2$
\item $1$, with multiplicity $p(n)-p(n-1)+p(n-2)$.
\end{itemize}
\item[(b)] Let $\mathcal{M}_1(n)$ be the multiset of all numbers
$m_1(\lambda)+1$, for $\lambda\vdash n$. Then $\alpha_{p(n)}$ is the
product of the \emph{distinct} elements of $\mathcal{M}_1(n)$;
$\alpha_{p(n)-1}$ is the product of the remaining \emph{distinct}
elements of $\mathcal{M}_1(n)$, etc.
\end{enumerate}
\end{thm}
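As an illustration of Theorem~\ref{thm:cai}, let $n=4$. The values
$m_1(\lambda)+1$ for $\lambda=(4),(3,1),(2,2),(2,1,1),(1,1,1,1)$ are
$1,2,1,3,5$, so $\mathcal{M}_1(4)=\{1,1,2,3,5\}$. Part~(b) gives
$\alpha_5=1\cdot 2\cdot 3\cdot 5=30$ and
$\alpha_4=\alpha_3=\alpha_2=\alpha_1=1$, i.e.,
$[\psi_4]\xrightarrow{\mathrm{SNF}}(1,1,1,1,30)$, in agreement with
part~(a), since $(n+1)(n-1)!=30$ and $p(4)-p(3)+p(2)=4$.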
In fact, the following stronger result than Theorem~\ref{thm:cai} is
actually proved.
\begin{thm} \label{thm:cai2}
Let $t$ be an indeterminate. Then the matrix $[\psi_n+tI]$
has an SNF over $\mathbb{Z}[t]$.
\end{thm}
To see that Theorem~\ref{thm:cai2} implies Theorem~\ref{thm:cai}, use
the fact that $[\psi_n]$ is a symmetric matrix (and therefore
semisimple), and for each eigenvalue $\lambda$ of $\psi_n$ consider
the rank of the matrices obtained by substituting $t=-\lambda$ in
$[\psi_n+tI]$ and its SNF over $\mathbb{Z}[t]$. For details and further
aspects, see \cite[{\textsection}8.2]{m-r}.
The proof of Theorem \ref{thm:cai2} begins by working with the basis
$\{h_\lambda\}$ of complete symmetric functions rather than with the
Schur functions, which we can do since the transition matrix between
these bases is an integer unimodular matrix. The proof then consists
basically of describing the row and column operations to achieve SNF.
The paper \cite{c-s} contains a conjectured generalization of
Theorem~\ref{thm:cai2} to the operator $\psi_{n,k}\mathrel{\mathop:}=
k\frac{\partial}{\partial
p_k}p_k\colon \Lambda_\mathbb{Q}^n\to \Lambda_\mathbb{Q}^n$ for any $k\geq 1$.
Namely, the matrix $[\psi_{n,k}+tI]$ with
respect to the basis $\{s_\lambda\}$ has an SNF over $\mathbb{Z}[t]$. This
implies that if $[\psi_{n,k}]\xrightarrow{\mathrm{SNF}} (\alpha_1,\dots,\alpha_{p(n)})$ and
$\mathcal{M}_k(n)$ denotes the multiset of all numbers
$k(m_k(\lambda)+1)$, for $\lambda\vdash n$, then $\alpha_{p(n)}$ is
the product of the \emph{distinct} elements of $\mathcal{M}_k(n)$;
$\alpha_{p(n)-1}$ is the product of the remaining \emph{distinct}
elements of $\mathcal{M}_k(n)$, etc. This conjecture was proved in
2015 by Zipei Nie (private communication).
There is a natural generalization of the SNF of $\psi_{n,k}$, namely,
we can look at operators like $(\prod
\lambda_i)\frac{\partial^\ell}{\partial p_\lambda}p_\lambda$. Here
$\lambda$ is a partition of $n$ with $\ell$ parts and
$$ \frac{\partial^\ell}{\partial p_\lambda} =
\frac{\partial^\ell}{\partial p_1^{m_1}\partial p_2^{m_2}\cdots}, $$
where $\lambda$ has $m_i$ parts equal to $i$. Even more generally, if
$\lambda, \mu\vdash n$ where $\lambda$ has $\ell$ parts, then we could
consider $(\prod \lambda_i)\frac{\partial^\ell}{\partial
p_\lambda}p_\mu$. No conjecture is known for the SNF (with respect
to an integral basis), even when $\lambda=\mu$.
\subsection{A specialized Jacobi-Trudi matrix}
A fundamental identity in the theory of symmetric functions is the
\emph{Jacobi-Trudi identity}. Namely, if $\lambda$ is a partition with
at most $t$ parts, then the \emph{Jacobi-Trudi
matrix} $\mathrm{JT}_\lambda$ is defined by
$$ \mathrm{JT}_\lambda=\left[ h_{\lambda_i+j-i}\right]_{i,j=1}^t, $$
where $h_i$ denotes the complete symmetric function of degree $i$
(with $h_0=1$ and $h_{-i}=0$ for $i\geq 1$).
The \emph{Jacobi-Trudi identity} \cite[{\textsection}7.16]{ec2}
asserts that
$\det \mathrm{JT}_\lambda =s_\lambda$, the Schur function indexed by $\lambda$.
For a symmetric function $f$, let $\varphi_n f$ denote the
specialization $f(1^n)$, that is, set $x_1=\cdots=x_n=1$ and all other
$x_i=0$ in $f$. It is easy to see \cite[Prop.~7.8.3]{ec2} that
\begin{equation} \varphi_n h_i =\binom{n+i-1}{i}, \label{eq:phihi} \end{equation}
a polynomial in $n$ of degree $i$. Identify $\lambda$ with its (Young)
diagram, so the squares of $\lambda$ are indexed by pairs $(i,j)$,
$1\leq i\leq \ell(\lambda)$, $1\leq j\leq \lambda_i$. The
\emph{content} $c(u)$ of the square $u=(i,j)$ is defined to be
$c(u)=j-i$. A standard result \cite[Cor.~7.21.4]{ec2} in the theory of
symmetric functions states that
\begin{equation} \varphi_n s_\lambda =\frac{1}{H_\lambda}\prod_{u\in\lambda}
(n+c(u)), \label{eq:hc} \end{equation}
where $H_\lambda$ is a positive integer whose value is irrelevant here
(since it is a unit in $\mathbb{Q}[n]$). Since this polynomial factors a lot
(in fact, into linear factors) over $\mathbb{Q}[n]$, we are motivated to
consider the SNF of the matrix
$$ \varphi_n\mathrm{JT}_\lambda = \left[ \binom{n+\lambda_i+j-i-1}{\lambda_i+j-i}
\right]_{i,j=1}^t. $$
Let $D_k$ denote the $k$th \emph{diagonal hook} of $\lambda$, i.e.,
all squares $(i,j)\in\lambda$ such that either $i=k$ and $j\geq k$, or
$j=k$ and $i\geq k$. Note that $\lambda$ is a disjoint union of its
diagonal hooks. If $r=\mathrm{rank}(\lambda)\mathrel{\mathop:}= \max\{
i\,:\, \lambda_i\geq i\}$, then note also that $D_k=\emptyset$ for
$k>r$. The following result was proved in \cite{rs:jt}.
\begin{thm} \label{thm:jt}
Let $\varphi_n\mathrm{JT}_\lambda\xrightarrow{\mathrm{SNF}} (\alpha_1,\alpha_2,\dots,\alpha_t)$, where
$t\geq \ell(\lambda)$. Then we can take
$$ \alpha_i = \prod_{u\in D_{t-i+1}} (n+c(u)). $$
\end{thm}
An equivalent statement to Theorem~\ref{thm:jt} is that the $\alpha_i$'s
are squarefree (as polynomials in $n$), since $\alpha_t$ is the
largest squarefree factor of $\varphi_n s_\lambda$, $\alpha_{t-1}$ is
the largest squarefree factor of $(\varphi_n s_\lambda)/\alpha_t$, etc.
\begin{ex}
Let $\lambda=(7,5,5,2)$. Figure~\ref{fig1} shows the diagram of
$\lambda$ with the content of each square. Let $t=\ell(\lambda)=4$. We
see that
\begin{eqnarray*} \alpha_4 & = & (n-3)(n-2)\cdots (n+6)\\
\alpha_3 & = & (n-2)(n-1)n(n+1)(n+2)(n+3)\\
\alpha_2 & = & n(n+1)(n+2)\\
\alpha_1 & = & 1. \end{eqnarray*}
\begin{figure}
\centering
\centerline{\includegraphics[width=6cm]{contents.eps}}
\caption{The contents of the partition $(7,5,5,2)$}
\label{fig1}
\end{figure}
\end{ex}
The problem of computing the SNF of a suitably specialized
Jacobi-Trudi matrix was raised by Kuperberg \cite{kup}. His Theorem~14
has some overlap with our Theorem~\ref{thm:jt}. Propp
\cite[Problem~5]{propp} mentions a two-part question of Kuperberg. The
first part is equivalent to our Theorem~\ref{thm:jt} for rectangular
shapes. (The second part asks for an interpretation in terms of
tilings, which we do not consider.)
Theorem~\ref{thm:jt} is proved not by the more usual method of row and
column operations. Rather, the gcd of the $k\times k$ minors is
computed explicitly so that Theorem~\ref{thm:minors} can be
applied. Let $M_k$ be the bottom-left $k\times k$ submatrix of
$\mathrm{JT}_\lambda$. Then $M_k$ is itself the Jacobi-Trudi matrix of a certain
partition $\mu^k$, so $\det\varphi_n M_k$ can be explicitly evaluated. One
then shows using the Littlewood-Richardson rule that every $k\times k$
minor of $\varphi_n\mathrm{JT}_\lambda$ is divisible by $\det\varphi_n M_k$. Hence
$\det\varphi_n M_k$ is the gcd of the $k\times k$ minors of
$\varphi_n\mathrm{JT}_\lambda$, after which the proof is a routine computation.
There is a natural $q$-analogue of the specialization $f(x) \to
f(1^n)$, namely, $f(x)\to f(1,q,q^2,\dots,q^{n-1})$. Thus we can ask
for a $q$-analogue of Theorem~\ref{thm:jt}. This can be done using the
same proof technique, but some care must be taken in order to get a
$q$-analogue that reduces directly to Theorem~\ref{thm:jt} by setting
$q=1$. When this is done we get the following result
\cite[Thm.~3.2]{rs:jt}.
\begin{thm} \label{thm:jtq}
For $k\geq 1$ let
$$ f(k) =\frac{n(n+\boldsymbol{(1)})(n+\boldsymbol{(2)})\cdots
(n+\boldsymbol{(k-1)})}{\boldsymbol{(1)}\boldsymbol{(2)}
\cdots \boldsymbol{(k)}}, $$
where $\boldsymbol{(j)}=(1-q^j)/(1-q)$ for any $j\in\mathbb{Z}$.
Set $f(0)=1$ and $f(k)=0$ for $k<0$. Define
$$ \mathrm{JT}_\lambda(q)=\left[f(\lambda_i-i+j)\right]_{i,j=1}^t, $$
where $\ell(\lambda)\leq t$. Let $\mathrm{JT}_\lambda(q)\xrightarrow{\mathrm{SNF}}
(\gamma_1,\gamma_2,\dots,\gamma_t)$ over the ring
$\mathbb{Q}(q)[n]$. Then we can take
$$ \gamma_i = \prod_{u\in D_{t-i+1}}(n+\boldsymbol{c(u)}). $$
\end{thm}
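Note that setting $q=1$ gives $\boldsymbol{(j)}=j$, so that
$f(k)=\binom{n+k-1}{k}=\varphi_n h_k$ by equation~\eqref{eq:phihi} and
$n+\boldsymbol{c(u)}$ becomes $n+c(u)$; thus $\mathrm{JT}_\lambda(q)$
specializes to $\varphi_n\mathrm{JT}_\lambda$ and Theorem~\ref{thm:jtq}
reduces to Theorem~\ref{thm:jt}.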
\section{A multivariate example}
In this section we give an example where the SNF exists over a
multivariate polynomial ring over $\mathbb{Z}$. Let $\lambda$ be a partition,
identified with its Young diagram regarded as a set of squares; we fix
$\lambda$ for all that follows. Adjoin to $\lambda$ a border strip
extending from the end of the first row to the end of the first column
of $\lambda$, yielding an \emph{extended partition} $\lambda^*$. Let
$(r,s)$ denote the square in the $r$th row and $s$th column of
$\lambda^*$. If $(r,s)\in\lambda^*$, then let $\lambda(r,s)$ be the
partition whose diagram consists of all squares $(u,v)$ of $\lambda$
satisfying $u\geq r$ and $v\geq s$. Thus $\lambda(1,1)=\lambda$, while
$\lambda(r,s)=\emptyset$ (the empty partition) if
$(r,s)\in\lambda^*\setminus\lambda$. Associate with the square $(i,j)$
of $\lambda$ an indeterminate~$x_{ij}$. Now for each square $(r,s)$
of~$\lambda^*$, associate a polynomial $P_{rs}$ in the
variables~$x_{ij}$, defined as follows:
\begin{equation}
P_{rs} = \sum_{\mu\subseteq
\lambda(r,s)}\prod_{(i,j)\in\lambda(r,s)\setminus \mu} x_{ij},
\label{eq:prsdef}
\end{equation}
where $\mu$ runs over all partitions contained in $\lambda(r,s)$.
In particular, if $(r,s)\in\lambda^*\setminus \lambda$ then $P_{rs}=1$.
Thus for $(r,s)\in\lambda$, $P_{rs}$ may be regarded as a generating
function for the squares
of all skew diagrams $\lambda(r,s)\setminus \mu$. For instance, if
$\lambda=(3,2)$ and we set $x_{11}=a$, $x_{12}=b$, $x_{13}=c$,
$x_{21}=d$, and $x_{22}=e$, then Figure~\ref{fig2} shows the extended
diagram $\lambda^*$ with the polynomial $P_{rs}$ placed in the
square~$(r,s)$.
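For example, $\lambda(2,1)=(2)$, consisting of the squares labeled $d$
and $e$; the partitions $\mu\subseteq (2)$ are $\emptyset$, $(1)$, and
$(2)$, contributing $de$, $e$, and $1$ respectively, so $P_{21}=de+e+1$.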
\begin{figure}
\centerline{\includegraphics[width=8cm]{32ex.eps}}
\caption{The polynomials $P_{rs}$ for $\lambda=(3,2)$}
\label{fig2}
\end{figure}
Write
$$ A_{rs}=\prod_{(i,j)\in\lambda(r,s)} x_{ij}. $$
Note that $A_{rs}$ is simply the leading term of $P_{rs}$.
Thus for $\lambda=(3,2)$ as in Figure~\ref{fig2} we have
$A_{11}=abcde, A_{12}=bce$, $A_{13}=c$, $A_{21}=de$, and
$A_{22}=e$.
For each square $(i,j)\in\lambda^*$ there will be a unique subset of
the squares of $\lambda^*$ forming an $m\times m$ square $S(i,j)$ for
some $m\geq 1$, such that the upper left-hand corner of $S(i,j)$ is
$(i,j)$, and the lower right-hand corner of $S(i,j)$ lies in
$\lambda^*\setminus \lambda$. In fact, if $\rho_{ij}$ denotes the
\emph{rank} of $\lambda(i,j)$ (the number of squares on the main
diagonal, or equivalently, the largest $k$ for which
$\lambda(i,j)_k\geq k$), then $m=\rho_{ij}+1$. Let $M(i,j)$ denote the
matrix obtained by inserting in each square $(r,s)$ of $S(i,j)$ the
polynomial $P_{rs}$. For instance, for the partition $\lambda=(3,2)$
of Figure~\ref{fig2}, the matrix $M(1,1)$ is given by
$$ M(1,1) = \left[ \begin{array}{ccc}
P_{11} & bce+ce+c+e+1 & c+1\\ de+e+1 & e+1 & 1\\
1 & 1 & 1 \end{array} \right], $$
where $P_{11}=abcde+bcde+bce+cde+ce+de+c+e+1$. Note that for this
example we have
$$ \det M(1,1)= A_{11}A_{22}A_{33}=abcde\cdot e\cdot 1=abcde^2. $$
The main result on the matrices $M(i,j)$ is the following. For
convenience we state it only for $M(1,1)$, but it applies to any
$M(i,j)$ by replacing $\lambda$ with $\lambda(i,j)$.
\begin{thm} \label{thm:bessen}
Let $\rho=\mathrm{rank}(\lambda)$. The matrix $M(1,1)$ has an SNF over
$\mathbb{Z}[x_{ij}]$, given explicitly by
$$ M(1,1)\xrightarrow{\mathrm{SNF}}
(A_{11},A_{22},\dots,A_{\rho+1,\rho+1}). $$
Hence $\det M(1,1)=A_{11}A_{22}\cdots A_{\rho\rho}$ (since
$A_{\rho+1,\rho+1}=1$).
\end{thm}
Theorem~\ref{thm:bessen} is proved by finding row and column
operations converting $M(1,1)$ to SNF. In \cite{b-s} this is done in
two ways: an explicit description of the row and column operations,
and a proof by induction that such operations exist without stating
them explicitly.
Another way to describe the SNF of $M(1,1)$ is to replace its
nondiagonal entries with 0 and a diagonal entry with its leading term
(unique monomial of highest degree). Is there some conceptual reason
why the SNF has this simple description?
If we set each $x_{ij}=1$ in $M(1,1)$ then we get $\det
M(1,1)=1$. This formula is equivalent to a result of Carlitz, Roselle,
and Scoville \cite{c-r-s}, which answers a question posed by Berlekamp
\cite{berl1}\cite{berl2}. If we set each $x_{ij}=q$ in $M(1,1)$ and
take $\lambda=(m-1,m-2,\dots,1)$, then the entries of $M(1,1)$ are
certain $q$-Catalan numbers, and $\det M(1,1)$ was determined by
Cigler \cite{cigler1}\cite{cigler2}. This determinant (and some
related ones) was a primary motivation for \cite{b-s}. Miller and
Stanton \cite{m-s} have generalized the $q$-Catalan result to Hankel
matrices of moments of orthogonal polynomials and some other similar
matrices.
Di Francesco \cite{difran} shows that the polynomials $P_{rs}$ satisfy
the ``octahedron recurrence'' and are related to cluster algebras,
integrable systems, dimer models, and other topics.
\section{The Varchenko matrix}
Let $\mathcal{A}$ be a finite arrangement (set) of affine hyperplanes in
$\mathbb{R}^n$. The complement $\mathbb{R}^n-\bigcup_{H\in\mathcal{A}}H$ consists of a
disjoint union of finitely many open \emph{regions}. Let $\mathcal{R}(\mathcal{A})$
denote the
set of all regions. For each hyperplane $H\in\mathcal{A}$ associate an
indeterminate $a_H$. If $R,R'\in \mathcal{R}(\mathcal{A})$ then let $\mathrm{sep}(R,R')$
denote the set of $H\in\mathcal{A}$ separating $R$ from $R'$, that is, $R$ and
$R'$ lie on different sides of $H$. Now define a matrix $V(\mathcal{A})$ as
follows. The rows and columns are indexed by $\mathcal{R}(\mathcal{A})$ (in some
order). The $(R,R')$-entry is given by
$$ V_{RR'} = \prod_{H\in \mathrm{sep}(R,R')} a_H. $$
If $x$ is any nonempty intersection of a set of hyperplanes
in $\mathcal{A}$, then define $a_x=\prod_{H\supseteq x}a_H$. Varchenko
\cite{varchenko} showed that
\begin{equation} \det V(\mathcal{A}) = \prod_x (1-a_x^2)^{n(x)p(x)}, \label{eq:vardet}
\end{equation}
for certain nonnegative integers $n(x),p(x)$ which we will not define
here.
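For example, if $\mathcal{A}$ consists of a single hyperplane $H$ with
associated variable $a$, then there are two regions and
$$ V(\mathcal{A}) = \left[ \begin{array}{cc} 1 & a\\ a & 1 \end{array}
\right], \qquad \det V(\mathcal{A})=1-a^2. $$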
\textsc{Note.} We include the intersection $x$ over the empty set of
hyperplanes, which is the ambient space $\mathbb{R}^n$. This gives an
irrelevant factor of 1 in the determinant above, but it also accounts
for an essential diagonal entry of 1 in Theorem~\ref{thm:g-z} below.
Since $\det V(\mathcal{A})$ has such a nice factorization, it is natural to ask
about its diagonal form or SNF. Since we are working over the
polynomial ring $\mathbb{Z}[a_H\,:\, H\in\mathcal{A}]$ or $\mathbb{Q}[a_H\,:\, H\in\mathcal{A}]$, there
is no reason for a diagonal form to exist. Gao and
Zhang \cite{g-z} found the condition for this property to hold. We say
that $\mathcal{A}$ is \emph{semigeneric} or \emph{in semigeneral form} if for
any $k$ hyperplanes $H_1,\dots,H_k\in\mathcal{A}$ with intersection
$x=\bigcap_{i=1}^k H_i$, either codim$(x)=k$ or $x=\emptyset$. (Note
that $x$ is an affine subspace of $\mathbb{R}^n$ so has a well-defined
codimension.) In particular, $x=\emptyset$ if $k>n$.
\begin{thm} \label{thm:g-z}
The matrix $V(\mathcal{A})$ has a diagonal form if and only if $\mathcal{A}$ is
semigeneric. In this case, the diagonal entries of this diagonal form are given by
$\prod_{H\supseteq x}(1-a_H^2)$, where $x$ is a nonempty intersection
of the hyperplanes in some subset of $\mathcal{A}$.
\end{thm}
Gao and Zhang actually prove their result for pseudosphere
arrangements, which are a generalization of hyperplane
arrangements. Pseudosphere arrangements correspond to oriented
matroids.
\begin{ex}
Let $\mathcal{A}$ be the arrangement of three lines in $\mathbb{R}^2$ shown in
Figure~\ref{fig:arrex}, with the hyperplane variables $a,b,c$ as in
the figure. This arrangement is semigeneric. The diagonal entries
of the diagonal form of $V(\mathcal{A})$
are
$$ 1,\ 1-a^2,\ 1-b^2,\ 1-c^2,\ (1-a^2)(1-c^2),\ (1-b^2)(1-c^2). $$
\end{ex}
\begin{figure}
\centerline{\includegraphics[width=6cm]{arrex.eps}}
\caption{An arrangement of three lines}
\label{fig:arrex}
\end{figure}
Now define the \emph{$q$-Varchenko matrix} $V_q(\mathcal{A})$ of $\mathcal{A}$ to be
the result of substituting $a_H=q$ for all $H\in\mathcal{A}$. Equivalently,
$V_q(\mathcal{A})_{RR'} = q^{\#\mathrm{sep}(R,R')}$. The SNF of $V_q(\mathcal{A})$
exists over the PID $\mathbb{Q}[q]$, and it seems to be a very interesting
and little studied problem to determine this SNF. Some special cases
were determined by Cai and Mu \cite{c-m-s}. A generalization related to
distance matrices of graphs was considered by Shiu \cite{shiu}. Note
that by equation~\eqref{eq:vardet} the diagonal entries of the SNF of
$V_q(\mathcal{A})$ will be products of cyclotomic polynomials $\Phi_d(q)$.
The main paper to date on the SNF of $V_q(\mathcal{A})$ is by Denham and
Hanlon \cite{d-h}. In particular, let
$$ \chi_\mathcal{A}(t)=\sum_{i=0}^n (-1)^i c_i t^{n-i} $$
be the \emph{characteristic polynomial} of
$\mathcal{A}$, as defined for instance in \cite[\textsection
1.3]{rs:hyp}\cite[\textsection 3.11.2]{ec1}.
Denham and Hanlon show the following in their Theorem~3.1.
\begin{thm}
Let $N_{d,i}$ be the number of diagonal entries of the SNF of
$V_q(\mathcal{A})$ that are exactly divisible by $\Phi_d(q)^i$. Then $N_{1,i}=
c_i$.
\end{thm}
It is easy to see that $N_{1,i}=N_{2,i}$. Thus the next step would be
to determine $N_{3,i}$ and $N_{4,i}$.
An especially interesting hyperplane arrangement is the \emph{braid
arrangement} $\mathcal B_n$ in $\mathbb{R}^n$, with hyperplanes $x_i=x_j$ for
$1\leq i<j\leq n$. The determinant of $V_q(\mathcal B_n)$, originally due to
Zagier \cite{zagier}, is given by
$$ \det V_q(\mathcal B_n) = \prod_{j=2}^n\left( 1-q^{j(j-1)}\right)
^{\binom nj(j-2)!\,(n-j+1)!}. $$
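As a check, for $n=2$ the two regions are separated by the single
hyperplane $x_1=x_2$, so $V_q(\mathcal B_2)=\left[\begin{array}{cc} 1 & q\\
q & 1\end{array}\right]$ and $\det V_q(\mathcal B_2)=1-q^2$, in agreement
with the formula (the only factor is $j=2$, with exponent
$\binom 22\,0!\,1!=1$).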
An equivalent description of $V_q(\mathcal B_n)$ is the following. Let $\mathfrak{S}_n$
denote the symmetric group of all permutations of $1,2,\dots,n$, and
let $\mathrm{inv}(w)$ denote the number of inversions of $w\in\mathfrak{S}_n$, i.e.,
$\mathrm{inv}(w) =\#\{(i,j)\,:\, 1\leq i<j\leq n,\ w(i)>w(j)\}$. Define
$\Gamma_n(q) =\sum_{w\in\mathfrak{S}_n}q^{\mathrm{inv}(w)}w$, an element of the
group algebra $\mathbb{Q}[q]\mathfrak{S}_n$. The element $\Gamma_n(q)$ acts on
$\mathbb{Q}[q]\mathfrak{S}_n$ by left multiplication, and $V_q(\mathcal B_n)$ is the matrix of
this linear transformation (with a suitable indexing of rows and
columns) with respect to the basis $\mathfrak{S}_n$. The SNF of $V_q(\mathcal B_n)$
(over the PID $\mathbb{Q}[q]$) is not known. Denham and Hanlon
\cite[\textsection 5]{d-h} compute it for $n\leq 6$.
Some simple representation theory allows us to refine the SNF of
$V_q(\mathcal B_n)$. The complex irreducible representations
$\varphi_\lambda$ of $\mathfrak{S}_n$ are indexed by partitions $\lambda\vdash
n$. Let $f^\lambda=\dim \varphi_\lambda$. The action of $\mathfrak{S}_n$ on
$\mathbb{Q}\mathfrak{S}_n$ by right multiplication commutes with the action of
$\Gamma_n(q)$. It follows (since every irreducible representation of
$\mathfrak{S}_n$ can be defined over $\mathbb{Z}$) that by a unimodular change of basis
we can write
$$ V_q(\mathcal B_n) = \bigoplus_{\lambda\vdash n} f^\lambda V_\lambda, $$
for some integral matrices $V_\lambda$ of size $f^\lambda\times
f^\lambda$. Thus computing $\det V_\lambda$ and the SNF of $V_\lambda$
is a refinement of computing $\det V_q(\mathcal B_n)$ and the SNF of
$V_q(\mathcal B_n)$. (Computing the SNF of each $V_\lambda$ would give a
diagonal form of $V_q(\mathcal B_n)$, from which it is easy to determine the
SNF.) The problem of computing $\det V_\lambda$ was solved by Hanlon
and Stanley \cite[Conj.~3.7]{h-s}. Of course the SNF of $V_\lambda$
remains open since the same is true for $V_q(\mathcal B_n)$. Denham and
Hanlon have computed the SNF of $V_\lambda$ for
$\lambda\vdash n\leq 6$ and published the results for $n\leq 4$ in
\cite[\textsection 5]{d-h}. For instance, for the partitions
$\lambda\vdash 4$ we have the following diagonal elements of the SNF
of $V_\lambda$:
$$ \begin{array}{rl}
(4): & \Phi_2^2\Phi_3\Phi_4\\
(3,1): & \Phi_1\Phi_2,\ \Phi_1^2\Phi_2^2\Phi_3,\ \Phi_1^3\Phi_2^3
\Phi_3^2\\
(2,2): & \Phi_1^2\Phi_2^2,\ \Phi_1^2\Phi_2^2\Phi_{12}\\
(2,1,1): & \Phi_1\Phi_2,\ \Phi_1^2\Phi_2^2\Phi_6,\
\Phi_1^3\Phi_2^3\Phi_6^2\\
(1,1,1,1): & \Phi_1\Phi_2\Phi_4\Phi_6, \end{array} $$
where $\Phi_d$ denotes the cyclotomic polynomial whose zeros are the
primitive $d$th roots of unity. For a nice survey of this topic see
Denham and Hanlon \cite{d-h2}.
The discussion above of $\Gamma_n$ suggests that it might be
interesting to consider the SNF of other elements of $R\mathfrak{S}_n$ for
suitable rings $R$ (or possibly $RG$ for other finite groups $G$). One
intriguing example is the \emph{Jucys-Murphy element} (though it first
appears in the work of Alfred Young \cite[\textsection 19]{young})
$X_k\in\mathbb{Q} \mathfrak{S}_n$, $1\leq k\leq n$. It is defined by $X_1=0$ and
$$ X_k = (1,k)+(2,k)+\cdots+(k-1,k),\ \ 2\leq k\leq n, $$
where $(i,k)$ denotes the transposition interchanging $i$ and $k$.
Just as for $\Gamma_n(q)$, we can choose an integral basis for
$\mathbb{Q}\mathfrak{S}_n$ (that is, a $\mathbb{Z}$-basis for $\mathbb{Z}\mathfrak{S}_n$) so that the action of
$X_k$ on $\mathbb{Q}\mathfrak{S}_n$ with respect to this basis has a matrix of the form
$\bigoplus_{\lambda\vdash n} f^\lambda W_{\lambda,k}$. The
eigenvalues of $W_{\lambda,k}$ are known to be the contents of the
positions occupied by $k$ in all standard Young tableaux of shape
$\lambda$. For instance, when $\lambda=(5,1)$ the standard Young
tableaux are
$$ \begin{array}{lllll}
1\,2\,3\,4\,5 & 1\,2\,3\,4\,6 & 1\,2\,3\,5\,6 & 1\,2\,4\,5\,6 &
1\,3\,4\,5\,6\\ 6 & 5 & 4 & 3 & 2
\end{array}. $$
The positions occupied by 5 are $(1,5)$, $(2,1)$, $(1,4)$, $(1,4)$,
$(1,4)$. Hence the eigenvalues of $W_{(5,1),5}$ are $5-1=4$,
$1-2=-1$, and $4-1=3$ (three times). Darij Grinberg (private
communication) computed the SNF of the matrices $W_{\lambda,k}$ for
$\lambda\vdash n\leq 7$. On the basis of this data we make the
following conjecture.
\begin{conj}
Let $\lambda\vdash n$, $1\leq k\leq n$, and $W_{\lambda,k}\xrightarrow{\mathrm{SNF}}
(\alpha_1,\dots,\alpha_{f^\lambda}) $. Fix $1\leq r\leq f^\lambda$. Let
$S_r$ be the set of positions $(i,j)$ that $k$ occupies in at least
$r$ of the SYT's of shape $\lambda$.
Then $\alpha_{f^\lambda-r+1}=\pm\prod_{(i,j)\in S_r}(j-i)$.
\end{conj}
Note in particular that every SNF diagonal entry is (conjecturally) a
product of some of the eigenvalues of $W_{\lambda,k}$.
For example, when $\lambda=(5,1)$ and $k=5$ we have $f^{(5,1)}=5$ and
$S_1=\{(1,5),(2,1),(1,4)\}$, $S_2=S_3=\{(1,4)\}$, $S_4=S_5=
\emptyset$. Hence $W_{(5,1),5}\xrightarrow{\mathrm{SNF}} (1,1,3,3,12)$.
\begin{thebibliography}{99}
\bibitem{berl1} E. R. Berlekamp, A class of convolutional codes,
\emph{Information and Control} \textbf{6} (1963), 1--13.
\bibitem{berl2} E. R. Berlekamp, Unimodular arrays, \emph{Computers
and Mathematics with Applications} \textbf{20} (2000), 77--83.
\bibitem{b-s} C. Bessenrodt and R. P. Stanley, Smith normal form of a
multivariate matrix associated with partitions, \emph{J. Alg.\
Combin.}\ \textbf{41} (2015), 73--82.
\bibitem{biggs} N. L. Biggs, Chip-firing and the critical group of a
graph, \emph{J.\ Alg.\ Combin.}\ \textbf{9} (1999), 25--45.
\bibitem{b-v} T. Brylawski and A. Varchenko, The determinant formula
for a matroid bilinear form, \emph{Adv.\ Math.}\ \textbf{129}
(1997), 1--24.
\bibitem{c-m-s} T. W. Cai and L. Mu, On the Smith normal form of the
$q$-Varchenko matrix of a real hyperplane arrangement, preprint.
\bibitem{c-s} T. W. X. Cai and R. P. Stanley, The Smith normal form of
a matrix associated with Young's lattice,
\emph{Proc.\ Amer.\ Math.\ Soc.}\ \textbf{143} (2015), 4695--4703.
\bibitem{c-r-s} L. Carlitz, D. P. Roselle, and R. A. Scoville, Some
remarks on ballot-type sequences, \emph{J. Combinatorial Theory}
\textbf{11} (1971), 258--271.
\bibitem{c-s-x} D. B. Chandler, P. Sin, and Q. Xiang, The Smith and
critical groups of Paley graphs, \emph{J. Algebraic
Combin.}\ \textbf{41} (2015), 1013--1022.
\bibitem{cigler1} J.\ Cigler, $q$-Catalan und $q$-Motzkinzahlen,
\emph{Sitzungsber.\ \"OAW} \textbf{208} (1999), 3--20.
\bibitem{cigler2} J.\ Cigler, $q$-Catalan numbers and $q$-Narayana
polynomials, preprint; \texttt{arXiv:math.CO/0507225}.
\bibitem{cklpw} J. Clancy, N. Kaplan, T. Leake, S. Payne, and
M. M. Wood, On a Cohen-Lenstra heuristic for Jacobians of random
graphs, \emph{J.\ Alg.\ Combin.}\ (online) (2015).
\bibitem{d-h} G. Denham and P. Hanlon, On the Smith normal form of
the Varchenko bilinear form of a hyperplane arrangement,
\emph{Pacific J.\ Math.}\ \textbf{181} (1997), 123--146.
\bibitem{d-h2} G. Denham and P. Hanlon, Some algebraic properties of
the Schechtman-Varchenko bilinear forms, in \emph{New Perspectives
in Algebraic Combinatorics (Berkeley, CA, 1996--97)}, Math.\ Sci.
Res.\ Inst.\ Publ.\ \textbf{38}, Cambridge University Press,
Cambridge, 1999, pp.~149--176.
\bibitem{dhar} D. Dhar, Theoretical studies of self-organized
criticality, \emph{Physica A} \textbf{369} (2006), 29--70.
\bibitem{d-k-m} A. M. Duval, C. J. Klivans, and J. L. Martin, Critical
groups of simplicial complexes, \emph{Ann.\ Comb.}\ \textbf{7}
(2013), 53--70.
\bibitem{difran} P. Di Francesco, Bessenrodt-Stanley polynomials and
the octahedron recurrence, \emph{Electron.\ J. Combin.}\
\textbf{22} (2015) \#P3.35.
\bibitem{e-r-r} \"O.\ E\u{g}ecio\u{g}lu, T. Redmond, and C. Ryavec,
From a polynomial Riemann hypothesis to alternating sign matrices,
\emph{Electron.\ J. Combin.}\ \textbf{8} (2001), \#R36.
\bibitem{ekedahl} T. Ekedahl, An infinite version of the Chinese
remainder theorem, \emph{Comment. Math. Univ. St. Paul.}\
\textbf{40}(1) (1991), 53--59.
\bibitem{f-s} L. Fuchs and L. Salce, \emph{Modules over non-Noetherian
domains}, Mathematical Surveys and Monographs \textbf{84}, American
Mathematical Society, Providence, RI, 2001.
\bibitem{g-z} Y. Gao and A. Y. Zhang, Diagonal form of the Varchenko
matrices of oriented matroids, in preparation.
\bibitem{g-x} I. M. Gessel and G. Xin, The generating function of
ternary trees and continued fractions,
\emph{Electron.\ J.\ Combin.}\ \textbf{13} (2006), \#R53.
\bibitem{h-s} P. Hanlon and R. P. Stanley, A $q$-deformation of a
trivial symmetric group action,
\emph{Trans.\ Amer.\ Math.\ Soc.}\ \textbf{350} (1998), 4445--4459.
\bibitem{jac} B. Jacobson, Critical groups of graphs, Honors Thesis,
University of Minnesota, 2003;
\texttt{http://www.math.umn.edu/$\sim$reiner/HonorsTheses/Jacobson\_thesis.pdf}.
\bibitem{kap} I. Kaplansky, Elementary divisors and modules,
\emph{Trans.\ Amer.\ Math.\ Soc.}\ \textbf{66} (1949), 464--491.
\bibitem{kratt1} C. Krattenthaler, Advanced determinant calculus,
\emph{S\'em.\ Lothar.\ Combin.}\ \textbf{42} (1999), Art.\ B42q, 67
pp.
\bibitem{kratt2} C. Krattenthaler, Advanced determinant calculus: a
complement, \emph{Linear Algebra Appl.}\ \textbf{411} (2005),
68--166.
\bibitem{kup} G. Kuperberg, Kasteleyn cokernels,
\emph{Electron.\ J.\ Combin.}\ \textbf{9} (2002), \#R29.
\bibitem{l-l-s} M.\,D.\ Larsen, W.\,J.\ Lewis, and T.\,S.\ Shores,
Elementary divisor rings and finitely presented modules,
\emph{Trans.\ Amer.\ Math.\ Soc.}\ \textbf{187} (1974), 231--248.
\bibitem{l-p} L. Levine and J. Propp, WHAT IS a sandpile?,
\emph{Notices Amer.\ Math.\ Soc.}\ \textbf{57} (2010), 976--979.
\bibitem{l-q} H. Lombardi and C. Quitt\'e, \emph{Commutative algebra:
Constructive methods: Finite projective modules}, Algebra and Its
Applications \textbf{20}, Springer, Dordrecht, 2015.
\bibitem{lorenzini} D. Lorenzini, Elementary divisor domains and
B\'ezout domains, \emph{J. Algebra} \textbf{371} (2012), 609--619.
\bibitem{maples} K. Maples, Cokernels of random matrices satisfy the
Cohen-Lenstra heuristics, \texttt{arXiv:1301.1239}.
\bibitem{m-r} A. R. Miller and V. Reiner, Differential posets and
Smith normal forms, \emph{Order} \textbf{26} (2009), 197--228.
\bibitem{m-s} A. R. Miller and D. Stanton, Orthogonal polynomials and
Smith normal form, in preparation.
\bibitem{nie} Z. Nie, On nice extensions and Smith normal form, in
preparation.
\bibitem{propp} J. Propp, Enumeration of matchings: problems and
progress, in \emph{New Perspectives in Algebraic Combinatorics}
(L. J. Billera, A. Bj\"orner, C. Greene, R. E. Simion, and
R. P. Stanley, eds.), Math.\ Sci.\ Res.\ Inst.\ Publ.\ \textbf{38},
Cambridge University Press, Cambridge, 1999, pp.~255--291.
\bibitem{shah} S.\,W.\,A. Shah, Smith normal form of matrices associated
with differential posets, \texttt{arXiv:1510.00588}.
\bibitem{shiu} W.\,C. Shiu, Invariant factors of graphs associated with
hyperplane arrangements, \emph{Discrete Math.}\ \textbf{288}
(2004), 135--148.
\bibitem{smith} H.\,J.\,S. Smith, On systems of linear indeterminate
equations and congruences, \emph{Phil.\ Trans.\ R.\ Soc.\ London}
\textbf{151} (1861), 293--326.
\bibitem{ec1} R.\,P. Stanley, \emph{Enumerative Combinatorics},
vol.\ 1, second edition, Cambridge Studies in Advanced Mathematics,
vol.\ 49, Cambridge University Press, Cambridge, 2012.
\bibitem{ec2} R.\,P. Stanley, \emph{Enumerative Combinatorics},
vol.~2, Cambridge Studies in Advanced Mathematics, vol.\ 62,
Cambridge University Press, Cambridge, 1999.
\bibitem{rs:hyp} R.\,P. Stanley, An introduction to hyperplane
arrangements, in \emph{Geometric Combinatorics} (E. Miller,
V. Reiner, and B. Sturmfels, eds.), IAS/Park City Mathematics
Series, vol.~13, American Mathematical Society, Providence, RI,
2007, pp.~389--496.
\bibitem{rs:jt} R.\,P. Stanley, The Smith normal form of a specialized
Jacobi-Trudi matrix, preprint; \texttt{arXiv:1508.04746}.
\bibitem{t-v} T. Tao and V.\,H. Vu, Random matrices: The universality
phenomenon for Wigner ensembles, in \emph{Modern Aspects of Random
Matrix Theory} (V. H. Vu, ed.), Proc.\ Symp.\ Applied
Math.\ \textbf{72}, American Mathematical Society, Providence, RI,
2014, pp.~121--172.
\bibitem{varchenko} A. Varchenko, Bilinear form of real configuration
of hyperplanes, \emph{Advances in Math.}\ \textbf{97} (1993),
110--144.
\bibitem{w-s} Y. Wang and R.\,P. Stanley, The Smith normal form
distribution of a random integer matrix, \texttt{arXiv:1506.00160}.
\bibitem{wood} M.\,M. Wood, The distribution of sandpile groups of random
graphs, \texttt{arXiv:1402.5149}.
\bibitem{wood2} M. M. Wood, Random integral matrices and the Cohen
Lenstra heuristics, \texttt{arXiv:1504.0439}.
\bibitem{young} A. Young, On quantitative substitutional analysis II,
\emph{Proc.\ London Math.\ Soc.}\ \textbf{33} (1902), 361--397;
\emph{The Collected Papers of Alfred Young} (G. de B. Robinson,
ed.), Mathematical Expositions \textbf{21}, University of Toronto
Press, Toronto, 1977, pp.~92--128.
\bibitem{zagier} D. Zagier, Realizability of a model in infinite
statistics, \emph{Comm.\ Math.\ Phys.}\ \textbf{147} (1992),
199--210.
\end{thebibliography}
\end{document}
\begin{document}
\title{Cryogenic properties of optomechanical silica microcavities}
\author{O. Arcizet}
\affiliation{Max-Planck-Institut f\"ur Quantenoptik, 85748 Garching, Germany}
\author{R.~Rivi\`ere}
\affiliation{Max-Planck-Institut f\"ur Quantenoptik, 85748 Garching, Germany}
\author{A.~Schliesser}
\affiliation{Max-Planck-Institut f\"ur Quantenoptik, 85748 Garching, Germany}
\author{T.~J.~Kippenberg}
\affiliation{Max-Planck-Institut f\"ur Quantenoptik, 85748 Garching, Germany}
\affiliation{Ecole F\'ed\'erale Polytechnique de Lausanne (EPFL), 1015 Lausanne, Switzerland}
\begin{abstract}
We present the optical and mechanical properties of high-Q fused silica microtoroidal resonators at cryogenic temperatures (down to 1.6\,K). A thermally induced optical multistability is observed and theoretically described; it serves to characterize quantitatively the static heating induced by light absorption. Moreover, the influence of structural defect states in glass on the toroid's mechanical properties is observed, and the resulting implications of cavity optomechanical systems for the study of mechanical dissipation are discussed.
\end{abstract}
\maketitle
\textit{Introduction.---}
Owing to their ability to combine high optical and mechanical quality in one and the same device, micro-toroidal optomechanical cavities can be considered a promising system for future progress in optomechanics at low phonon number \cite{Kippenberg2008}.
In particular, these resonators support high-frequency and low-mass mechanical eigenmodes (in the range of 60\,MHz and 10\,ng, respectively), whose clamping losses can be strongly suppressed with a suitable geometric design \cite{Anetsberger2008}, as well as simultaneously ultra-high finesse ($10^6$) optical resonances. This feature allows one to enter deeply into the resolved sideband regime \cite{Schliesser2008}, a prerequisite for cooling their radial breathing mode (a macroscopic mechanical oscillator) down to its quantum ground state. Cryogenic cooling of the resonator lowers the initial mean phonon occupancy of the mode of interest, which can be further reduced by optical cooling \cite{Arcizet2006,Gigan2006,Schliesser2006}.
However, the specificity of the device, based on an amorphous material which confines the photons within the dielectric resonator, induces non-trivial and hitherto unobserved behavior that is presented in this letter. Moreover, we discuss the system's suitability for further progress towards quantum optomechanics. In particular, the phonon coupling to the structural defect states of glass opens promising perspectives for amorphous optomechanical microcavities, which may represent a new and powerful tool for probing mechanical decoherence at low temperatures \cite{Blencowe2008}.
\textit{Experimental setup.---} A tunable 1550-nm diode laser is coupled by means of a tapered fiber to an optical resonance of a microcavity, hosted in a $^4$He exchange gas cryostat. The mechanical vibrations of the toroid are imprinted on the phase of the transmitted laser field and detected via a Pound-Drever-Hall technique, providing a high-sensitivity measurement of the toroid's displacement noise \cite{Arcizet2006a,Schliesser2008b}.
No significant modification of the toroid's optomechanical properties was observed when working with optical powers below $1\,\rm \mu W$.
To overcome photodetector dark noise, it is possible to take advantage of a low-noise erbium-doped fiber amplifier (EDFA), which, when combined with a low-noise Koheras fiber laser, yields a typical displacement sensitivity of $3 \times 10^{-18}\,\rm m/\sqrt{Hz}$ at $60\,\rm MHz$, achieved with $1\,\rm \mu W$ of laser power injected into a $10^5$-finesse optical resonance.
\begin{figure}
\caption{(color online). (a): Experimental setup. PFC: polarization fiber controller, EDFA: erbium-doped fiber amplifier. (b): Typical displacement noise spectrum obtained at low temperature (1.6\,K, 4\,mbar) for the radial breathing mode of a 30-$\rm \mu m$-radius toroid oscillating at 63\,MHz. The measured rms amplitude of its Brownian motion (ca. 4\,fm) serves as a toroid temperature sensor, proving its proper thermalization ensured by the exchange gas.
(c): optical resonance frequency $\nu_{o}$ as a function of cryostat temperature. (d): inferred temperature dependence of the effective refractive index (see text).}
\label{fig1B}
\end{figure}
In this manner, the resonator's Brownian motion can be monitored down to 1.6\,K (corresponding to an average occupancy of 600 phonons) with a signal-to-noise ratio ($>10\,\rm dB$ at 1.6\,K) sufficient for measuring the mechanical quality factor $Q$, the resonance frequency $\Omega_{\rm m}/2\pi$, and the effective temperature of the radial breathing mode. The latter was compared to the cryostat temperature, determined with semiconductor sensors, and no difference (at the level of 0.5\,K) could be observed, indicating that the displacement read-out does not induce any significant temperature increase via light absorption or dynamical back-action. The proper thermalization of the toroid (even for the micro-structures weakly coupled to the silicon substrate \cite{Anetsberger2008}) requires a Helium gas pressure of ca.\ 5\,mbar, which is sufficiently low not to increase its mechanical losses by acoustic damping.
\textit{Temperature dependent optical resonance frequency---}
The high optical quality factors are preserved at low temperature, and no significant change (at the MHz level) could be observed.
The frequency $\nu_o(T)$ of the microcavity optical resonance varies with the cryostat temperature (cf. Fig.\ref{fig1B}c), presenting a $-135\,\rm MHz/K$ shift at 30\,K that reverses \cite{Park2007} around $T^\star=13.3\,\rm K$ and reaches $+100\,\rm MHz/K$ at 2\,K. The measurements were performed at low light intensity ($<100\,\rm nW$) in order to avoid inducing any thermal or optical (e.g. Kerr \cite{Treussart1998} or radiation pressure) non-linearities that could have altered the optical resonances, and at a pressure of 10\,mbar which moreover ensures a good thermalization.
The frequency shift originates both from a mechanical expansion (thermal expansion coefficient $\alpha$) and from a change in the effective refractive index $n_{\rm eff}$ of the resonator, according to the relation $-\frac{1}{\nu_o}\frac{d\nu_o}{dT}=\alpha +\frac{1}{n_{\rm eff}}\frac{dn_{\rm eff}}{dT}$. Both silica and the surrounding medium contribute to the effective refractive index: $n_{\rm eff}=(1-\eta)n_{\rm SiO_2}+\eta n_{\rm ext}$, where $\eta$ is the evanescent fraction of the optical mode and ranges between ca. 0.1 and $5\,\%$ depending on the spatial shape of the mode.
The inferred variation of the effective refractive index $\frac{dn_{\rm eff}}{dT}$ is shown in Fig. \ref{fig1B}d. This estimation is in agreement at high temperature (30\,K) with measurements reported in Ref. \cite{Leviton2006}.
Below 7\,K, the temperature dependence of the effective refractive index becomes negative. This is a consequence of the presence of the surrounding exchange gas, since additional measurements performed at 50\,mbar presented an increased negative shift, of $-520\,\rm MHz/K$ at 2\,K. From the recorded pressure evolution in the experimental chamber, the change in the Helium refractive index has been estimated \cite{Luther1996} at a level of ca.\ $-10\,\rm ppm/K$ at 5\,K, which is in agreement with the measured value of ($-.15\,\rm ppm/K$). A quantitative study of the surrounding Helium contribution will make it possible to measure for the first time silica's optical refractive index at low temperatures (note that the temperature dependence of silica's microwave dielectric constant is known to reverse \cite{Schickfus1976}).
The inversion is of high practical interest since it renders the cooling side of the resonance thermally stable \cite{Carmon2004}, as opposed to the room-temperature situation.
\begin{figure}
\caption{(color online). (a) Above: normalized intracavity intensity $\tilde I$ versus normalized laser detuning $\varphi_l$ for increasing input powers (resp. $13$, $131$ and $260\,\rm \mu W$) presenting a multistable behavior (finesse $\mathcal{F}$); the remaining panels are described in the text.}
\label{fig3}
\end{figure}
\textit{Optical multistability.---}
The measured temperature dependence is responsible for a thermally induced multistability. A fraction of the light is absorbed inside the silica resonator whose temperature increases and induces an optical frequency shift that distorts the Lorentzian profile of the resonances. If the optical resonance is shifted by more than half its linewidth, a bistability appears, as can be observed in Fig.\,\ref{fig3}a ($\it i$).
It can only be observed for resonances whose half linewidth is smaller than the optical frequency shift between the cryostat temperature $T_0$ and $T^\star$ (i.e. $\mathcal{F} > 10\,000$ at 4\,K for a radius of $30 \,\rm\mu m$).
For increasing input intensities, it is possible to observe a regime where the cavity transmission presents two turning points. This is a consequence of the inversion of the optical frequency shift that creates additional working points if the light induced temperature increase is strong enough to reach the inversion temperature ($T^\star$).
To model the multistability, one has to take into account the coupling between the optical field $\alpha$ and the effective resonator temperature $T$. The normalized complex intracavity optical field $\tilde\alpha\equiv \alpha/\sqrt{I_{\rm max}}$, where $I_{\rm max}\equiv I_{\rm in} K \frac{\mathcal{F}}{\pi}$ is the maximum intracavity photon flux expected for an input flux $I_{\rm in}$ and coupling efficiency $K$ ($K=1$ in case of impedance matching), follows the dynamical equation:
$
\frac{1}{\Omega_{\rm c}}\frac{d\tilde\alpha}{dt}= \left(-1+i \varphi(t,T)\right)\tilde\alpha + 1$,
where the laser-to-cavity detuning $\varphi$ is normalized to the cavity bandwidth $\Omega_{\rm c}/2\pi=c/(4\pi n R \mathcal{F})$ and can be written as $\varphi(t,T)=\varphi_l(t)-\frac{4\pi n R\mathcal{F}}{c}\nu_o(T_0+\delta T(t))$, where $\varphi_l$ stands for the laser frequency scan and $\delta T$ is the effective temperature increase induced by light absorption. The dynamical equivalent temperature response depends on the mechanism considered (expansion or refraction); we therefore define a global thermal response function $\chi_{\rm th}$ for the effective temperature increase, $\delta T (t)= h \nu I_{\rm max} \int_{-\infty}^t{\chi_{\rm th}(t-t')\tilde I (t') dt'}$, where $\tilde I= |\alpha|^2 /I_{\rm max}$ is the normalized intracavity intensity.
The static working point of the system is then solution of $1+ (\varphi_l-\frac{4\pi n R \mathcal{F}}{c}\nu_o(T_0+ \mu \tilde I))^2= 1/{\tilde I}$,
with $\mu\equiv h\nu\bar I_{\rm max} \bar\chi_{\rm th}^{\rm stat} $ being the maximum light induced heating expected, $\bar\chi_{\rm th}^{\rm stat}=\int_{-\infty}^\infty{ \chi_{\rm th}(t)dt}$, in $\rm K/W$ representing the induced static heating per intracavity power.
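Indeed, setting $d\tilde\alpha/dt=0$ in the dynamical equation gives $\tilde\alpha=1/(1-i\varphi)$ and hence $\tilde I=|\tilde\alpha|^2=1/(1+\varphi^2)$, while in steady state the absorption-induced temperature increase reduces to $\delta T=h\nu I_{\rm max}\bar\chi_{\rm th}^{\rm stat}\tilde I=\mu\tilde I$; combining both relations yields the implicit equation above for the working points.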
A fit (with a $7^{\rm th}$-order polynomial) of the experimental $\nu_o(T)$ is used for the numerical resolution of the non-linear equation. This allows one to calculate the normalized intracavity intensity for each laser detuning $\varphi_l$ and for various input fluxes $I_{\rm in}$ by varying the parameter $\mu$.
The results of the simulation are shown in Fig.\,\ref{fig3}a and present a remarkable agreement with experimental traces. The dynamically unstable branches are marked with dashed lines. Compared to higher temperature behavior, the upper stable branch does not present the typical triangular shape \cite{Carmon2004} (also observed for Kerr or radiation pressure non-linearities) but is progressively curved as the temperature approaches $T^\star$ ({\it i}, {\it ii}).
When the input power is sufficient to heat the resonator beyond $T^\star$, two new branches appear, allowing the onset of a novel multistability ({\it iii}). At even higher powers, the upper branch can be found at a laser detuning lower than the ``blue side'' lower turning point (cf. Fig.\,\ref{fig3}d inset), inducing a characteristic double turning point feature when reducing the laser frequency.
This mechanism also allows one to quantify the temperature increase induced by light absorption (i.e.\ the parameter $\bar\chi_{\rm th}^{\rm stat}$). A simple estimation can be obtained by measuring the injected power $P_{\rm in}^{\rm thres}$ required to observe the multistability (or $\mu = T^\star-T_0$) and the cavity optical parameters (coupling and internal losses). A value of $\bar\chi_{\rm th}^{\rm stat} \approx 4.5\,\rm K/W $ is extracted for an exchange gas pressure of $5\,\rm mbar$. For lower pressures of exchange gas, it increases up to $8.6\,\rm K/W$ at $0.5\,\rm mbar$ (assuming the same static optical shift), emphasizing the crucial role of the exchange gas for thermalization.
The static heating of the resonator is of particular relevance in the context of ground state cooling of mechanical modes that requires high optical cooling power (up to ca. $1\,\rm mW$), since it competes with the laser cooling. It can however be strongly circumvented in the resolved sideband regime where the laser is far detuned from the optical resonance.
For completeness, it is important to mention that at higher exchange gas pressure, it is possible to enter the $^4$He superfluid phase, where a thin layer of Helium appears on the surface of the microcavity. A clear signature of this phenomenon is imprinted on the microcavity's optical transmission (cf. Fig. \ref{fig3}e). At low light intensities, the cavity resonances are undeformed and simultaneously slightly red-shifted (ca. -40\,MHz). This corresponds to the optical build-up in the presence of the superfluid layer, whose refractive index temperature dependence reverses below the lambda point \cite{Edwards1956}, and underlines the increased heat extraction efficiency. If the intracavity intensity exceeds 6.3\,mW, the optical resonances are strongly deformed, presenting a blue shift of the maximum build-up that is again found to strongly depend on the injected power.
Note also that the total optical frequency shift observed ($>1\,\rm GHz$) at 70\,mbar is higher than the $350\,\rm MHz$ measured at 10\,mbar, underlining the role of the surrounding Helium in the observed static frequency shift. Microcavities constitute a promising tool for non-destructive studies of Helium films, which will be the subject of a further report.
\textit{Mechanical properties.---} The amorphous nature of the micro-resonator strongly influences its mechanical behavior.
Indeed, the experiment provides a novel way of probing phonon propagation properties (speed of sound and attenuation) in glass at low temperatures \cite{Blencowe2008}, which can be deduced from the micro-resonator's Brownian motion (respectively from its mechanical frequency and quality factor).
\begin{figure}
\caption{Mechanical damping $Q^{-1}$ of the radial breathing mode as a function of temperature; see text for details.}
\label{fig4}
\end{figure}
The results are presented in Fig.\,\ref{fig4} and show a non-trivial temperature dependence that is characteristic of amorphous materials.
Indeed, most of their acoustic and dielectric properties, ranging from kHz to GHz frequencies, which have been widely studied in bulk materials, can be explained \cite{Enss2005,Jackle1972} by considering a coupling of the strain and electromagnetic fields to the structural defects of glass (whose origin is still under investigation), modeled as an assembly of two-level systems (TLS) with a wide distribution of energy parameters.
Interestingly, our approach, consisting of a high-sensitivity detection of the thermal displacement noise in a micrometric acoustic resonator (the least perturbing measurement), gives results in good agreement with direct studies of sound propagation in bulk material, which rely on an external acoustic drive.
The mechanical damping presents a maximum at ca. 50\,K of $Q\approx 500$, corresponding to the thermally activated relaxation regime. It is fitted here with the parameters taken from \cite{Vacher2005}. Below 10\,K the relaxation mechanism is dominated by tunneling assisted transitions between the two levels of the defects. They are responsible for both the plateau ($Q\approx 1200$ at 5\,K) and the further improvement of the mechanical quality factor observed at lower temperatures and higher frequencies ($Q\propto \Omega_{\rm m}/ T^{3}$).
In the context of ground state cooling, it is important to maintain a high mechanical quality factor at low temperatures for facilitating the readout and providing an efficient temperature reduction with optical cooling techniques. In this view, a promising mechanical Q factor above 30\,000 for 90\,MHz oscillators can be expected at 600\,mK \cite{Enss2005}, a temperature at which the $\rm ^3He$ vapor pressure is still high enough (ca. 1\,mbar) to ensure a proper thermalization of the resonator in the resolved sideband regime.
It is important to mention here that the phonon coupling to the assembly of two-level systems opens promising perspectives for optomechanical devices based on amorphous materials. Indeed, it has been shown that, in addition to the relaxation mechanisms described above, there also exist processes in which phonons resonantly interact with the TLS presenting the right energy splitting \cite{Jackle1972}. This mechanism dominates the phonon propagation properties at lower temperatures and higher frequencies (and is responsible for the anomalous electromagnetic dispersion previously mentioned). At 1\,K and 500\,MHz, both contributions have similar magnitudes \cite{Jackle1972,Enss2005}. Our system has the potential to enter this resonant regime, since higher-frequency oscillators can be obtained by reducing the toroid size and working on higher-order radial breathing modes, which could still be thermalized and monitored in $^3$He cryostats.
There exists a finite density of TLS, and it has been shown that they can be saturated at high electromagnetic or acoustic intensities \cite{Hunklinger1972} ($J_{\rm ac}^{\rm sat}\approx 10^{-3}\,\rm W/m^2$ at 1\,K). The latter would require a mechanical drive of $\delta x = \sqrt{ \frac{2 J_{\rm ac}^{\rm sat}}{\rho c_s\Omega_{\rm m}^2}}\approx 2\,\rm fm$, which is experimentally feasible via the radiation pressure force and well within the detection capacity offered by the optical readout.
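As a rough cross-check of this order of magnitude, the short script below evaluates the expression for $\delta x$ above. It is a minimal sketch only: the fused-silica parameters ($\rho\approx2200\,\rm kg/m^3$, $c_s\approx5900\,\rm m/s$) and the mechanical frequency (a few hundred MHz) are generic, assumed values rather than the exact device parameters.
\begin{verbatim}
import numpy as np

# Illustrative (assumed) parameters -- not the exact device values
rho = 2200.0                   # fused-silica density [kg/m^3]
c_s = 5900.0                   # speed of sound [m/s]
J_sat = 1e-3                   # saturation acoustic intensity at 1 K [W/m^2]
Omega_m = 2 * np.pi * 500e6    # assumed mechanical frequency [rad/s]

# delta_x = sqrt(2 J_sat / (rho * c_s * Omega_m^2))
delta_x = np.sqrt(2 * J_sat / (rho * c_s * Omega_m**2))
print(f"saturation displacement ~ {delta_x * 1e15:.1f} fm")  # a few femtometres
\end{verbatim}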
Importantly, the TLS can be coupled simultaneously to strain and electromagnetic fields, and cross-couplings have been demonstrated \cite{Laermans1977} at microwave intensities ($20\,\rm W/m^2$ at 1\,K) that could feasibly be implemented. This would allow a possible control and read-out of the resonator's mechanical state by means of an external radio-frequency field, a promising step towards mechanical quantum state engineering. Furthermore, this interaction has allowed the generation of phonon echoes \cite{Golding1976} at temperatures and frequencies where the equivalent mechanical oscillator would be found at occupancies close to unity, and could be exploited to measure mechanical decoherence at low phonon number \cite{Armour2008}.
This work was supported by an Independent Max Planck Junior Research Group of the Max Planck Society, the Deutsche Forschungsgemeinschaft (DFG-GSC) and a Marie Curie Excellence Grant. O.A. acknowledges funding from a Marie Curie Grant (QUOM).
\begin{thebibliography}{25}
\expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi
\expandafter\ifx\csname bibnamefont\endcsname\relax
\def\bibnamefont#1{#1}\fi
\expandafter\ifx\csname bibfnamefont\endcsname\relax
\def\bibfnamefont#1{#1}\fi
\expandafter\ifx\csname citenamefont\endcsname\relax
\def\citenamefont#1{#1}\fi
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\defURL {URL }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2][]{\url{#2}}
\bibitem[{\citenamefont{Kippenberg et~al.}(2008)\citenamefont{Kippenberg and Vahala}}]{Kippenberg2008}
\bibinfo{author}{\bibfnamefont{T.~J.} \bibnamefont{Kippenberg}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{K.~J.} \bibnamefont{Vahala}},
\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{321}},
\bibinfo{pages}{1172} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Anetsberger et~al.}(2008)\citenamefont{Anetsberger,
Rivi\`{e}re, Schliesser, Arcizet, and Kippenberg}}]{Anetsberger2008}
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Anetsberger}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Rivi\`{e}re}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Schliesser}},
\bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{Arcizet}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{T.~J.} \bibnamefont{Kippenberg}},
\bibinfo{journal}{Nature Photon.} \textbf{\bibinfo{volume}{2}},
\bibinfo{pages}{627} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Schliesser
et~al.}(2008{\natexlab{a}})\citenamefont{Schliesser, Rivi\`{e}re,
Anetsberger, Arcizet, and Kippenberg}}]{Schliesser2008}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Schliesser}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Rivi\`{e}re}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Anetsberger}},
\bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{Arcizet}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{T.~J.} \bibnamefont{Kippenberg}},
\bibinfo{journal}{Nat. Phys.} \textbf{\bibinfo{volume}{4}},
\bibinfo{pages}{415} (\bibinfo{year}{2008}{\natexlab{a}}).
\bibitem[{\citenamefont{Arcizet
et~al.}(2006{\natexlab{a}})\citenamefont{Arcizet, Cohadon, Briant, Pinard,
and Heidmann}}]{Arcizet2006}
\bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{Arcizet}},
\bibinfo{author}{\bibfnamefont{P.-F.} \bibnamefont{Cohadon}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Briant}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Pinard}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Heidmann}},
\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{444}}, \bibinfo{pages}{71}
(\bibinfo{year}{2006}{\natexlab{a}}).
\bibitem[{\citenamefont{Gigan et~al.}(2006)\citenamefont{Gigan, Bohm,
Paternostro, Blaser, Langer, Hertzberg, Schwab, Bauerle, Aspelmeyer, and
Zeilinger}}]{Gigan2006}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Gigan}}
\bibnamefont{{\emph et al}},
\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{444}}, \bibinfo{pages}{67}
(\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Schliesser et~al.}(2006)\citenamefont{Schliesser,
Del'Haye, Nooshi, Vahala, and Kippenberg}}]{Schliesser2006}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Schliesser}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Del'Haye}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Nooshi}},
\bibinfo{author}{\bibfnamefont{K.~J.} \bibnamefont{Vahala}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{T.~J.}
\bibnamefont{Kippenberg}}, \bibinfo{journal}{Phys. Rev. Lett.}
\textbf{\bibinfo{volume}{97}}, \bibinfo{pages}{243905}
(\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Blencowe}(2008)}]{Blencowe2008}
\bibinfo{author}{\bibfnamefont{M.~P.} \bibnamefont{Blencowe}},
\bibinfo{journal}{Nature Physics} \textbf{\bibinfo{volume}{4}},
\bibinfo{pages}{753} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Arcizet
et~al.}(2006{\natexlab{b}})\citenamefont{Arcizet, Cohadon, Briant, Pinard,
Heidmann, Mackowski, Michel, Pinard, Fran\c{c}ais, and
Rousseau}}]{Arcizet2006a}
\bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{Arcizet}}
\bibnamefont{{\emph et al}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{97}},
\bibinfo{eid}{133601} (\bibinfo{year}{2006}{\natexlab{b}}).
\bibitem[{\citenamefont{Schliesser
et~al.}(2008{\natexlab{b}})\citenamefont{Schliesser, Anetsberger, Rivi\`ere,
Arcizet, and Kippenberg}}]{Schliesser2008b}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Schliesser}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Anetsberger}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Rivi\`ere}},
\bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{Arcizet}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{T.~J.} \bibnamefont{Kippenberg}},
\bibinfo{journal}{New J. Phys.} \textbf{\bibinfo{volume}{10}},
\bibinfo{pages}{095015} (\bibinfo{year}{2008}{\natexlab{b}}).
\bibitem[{\citenamefont{White}(1975)}]{White1975}
\bibinfo{author}{\bibfnamefont{G.~K.} \bibnamefont{White}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{34}},
\bibinfo{pages}{204} (\bibinfo{year}{1975}).
\bibitem[{\citenamefont{Leviton and Frey}(2006)}]{Leviton2006}
\bibinfo{author}{\bibfnamefont{D.~B.} \bibnamefont{Leviton}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{B.~J.} \bibnamefont{Frey}},
\bibinfo{journal}{Proc. SPIE} \textbf{\bibinfo{volume}{6273}},
\bibinfo{pages}{62732K} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Park and Wang}(2007)}]{Park2007}
\bibinfo{author}{\bibfnamefont{Y.-S.} \bibnamefont{Park}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Wang}},
\bibinfo{journal}{Opt. Lett.} \textbf{\bibinfo{volume}{32}},
\bibinfo{pages}{3104} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Treussart et~al.}(1998)\citenamefont{Treussart,
Ilchenko, Roch, Hare, Lef\`{e}vre-Seguin, Raimond, and
Haroche}}]{Treussart1998}
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Treussart}}
\bibnamefont{{\emph et al}},
\bibinfo{journal}{Eur. Phys. J. D} \textbf{\bibinfo{volume}{1}},
\bibinfo{pages}{235} (\bibinfo{year}{1998}).
\bibitem[{\citenamefont{Luther et~al.}(1996)\citenamefont{Luther, Grohmann, and
Fellmuth}}]{Luther1996}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Luther}},
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Grohmann}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Fellmuth}},
\bibinfo{journal}{Metrologia} \textbf{\bibinfo{volume}{33}},
\bibinfo{pages}{341 } (\bibinfo{year}{1996}).
\bibitem[{\citenamefont{von Schickfus and Hunklinger}(1976)}]{Schickfus1976}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{von Schickfus}}
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Hunklinger}},
\bibinfo{journal}{J. Phys. C} \textbf{\bibinfo{volume}{9}},
\bibinfo{pages}{L439} (\bibinfo{year}{1976}).
\bibitem[{\citenamefont{Carmon et~al.}(2004)\citenamefont{Carmon, Yang, and
Vahala}}]{Carmon2004}
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Carmon}},
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Yang}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Vahala}},
\bibinfo{journal}{Opt. Express} \textbf{\bibinfo{volume}{12}},
\bibinfo{pages}{4742} (\bibinfo{year}{2004}).
\bibitem[{\citenamefont{Edwards}(1956)}]{Edwards1956}
\bibinfo{author}{\bibfnamefont{M.~H.} \bibnamefont{Edwards}},
\bibinfo{journal}{Can. J. Phys.} \textbf{\bibinfo{volume}{34}},
\bibinfo{pages}{898} (\bibinfo{year}{1956}).
\bibitem[{\citenamefont{Vacher et~al.}(2005)\citenamefont{Vacher, Courtens, and
Foret}}]{Vacher2005}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Vacher}},
\bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Courtens}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Foret}},
\bibinfo{journal}{Phys. Rev. B} \textbf{\bibinfo{volume}{72}},
\bibinfo{eid}{214205} (\bibinfo{year}{2005}).
\bibitem[{\citenamefont{Bartell and Hunklinger}(1982)}]{Bartell1982}
\bibinfo{author}{\bibfnamefont{U.}~\bibnamefont{Bartell}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Hunklinger}},
\bibinfo{journal}{J. Phys. (Paris) Colloq.} \textbf{\bibinfo{volume}{43}},
\bibinfo{pages}{C9–} (\bibinfo{year}{1982}).
\bibitem[{\citenamefont{Enss and Hunklinger}(2005)}]{Enss2005}
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Enss}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Hunklinger}},
\emph{\bibinfo{title}{Low-Temperature Physics}}
(\bibinfo{publisher}{Springer}, \bibinfo{place}{Heidelberg}, \bibinfo{year}{2005}).
\bibitem[{\citenamefont{J\"{a}ckle}(1972)}]{Jackle1972}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{J\"{a}ckle}},
\bibinfo{journal}{Z. Phys.} \textbf{\bibinfo{volume}{257}},
\bibinfo{pages}{212} (\bibinfo{year}{1972}).
\bibitem[{\citenamefont{Hunklinger et~al.}(1972)\citenamefont{Hunklinger,
Arnold, Stein, Nava, and Dransfeld}}]{Hunklinger1972}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Hunklinger}},
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Arnold}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Stein}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Nava}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Dransfeld}},
\bibinfo{journal}{Phys. Lett.} \textbf{\bibinfo{volume}{42A}},
\bibinfo{pages}{253} (\bibinfo{year}{1972}).
\bibitem[{\citenamefont{Laermans and Hunklinger}(1977)}]{Laermans1977}
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Laermans}},
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Arnold}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Hunklinger}},
\bibinfo{journal}{J. Phys. C} \textbf{\bibinfo{volume}{10}},
\bibinfo{pages}{L161} (\bibinfo{year}{1977}).
\bibitem[{\citenamefont{Golding and Graebner}(1976)}]{Golding1976}
\bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Golding}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.~E.} \bibnamefont{Graebner}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{37}},
\bibinfo{pages}{852} (\bibinfo{year}{1976}).
\bibitem[{\citenamefont{Armour and Blencowe}(2008)}]{Armour2008}
\bibinfo{author}{\bibfnamefont{A.~D.} \bibnamefont{Armour}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.~P.} \bibnamefont{Blencowe}},
\bibinfo{journal}{New J. Phys.} \textbf{\bibinfo{volume}{10}},
\bibinfo{pages}{095004} (\bibinfo{year}{2008}).
\end{thebibliography}
\end{document}
|
\begin{document}
\title{Neuro CROSS exchange: Learning to CROSS exchange to solve realistic vehicle routing problems}
\def\thefootnote{$\dagger$}\footnotetext{Equal contribution}\def\thefootnote{\arabic{footnote}}
\def\thefootnote{*}\footnotetext{Corresponding author}\def\thefootnote{\arabic{footnote}}
\doparttoc
\faketableofcontents
\begin{abstract}
CROSS exchange (CE), a meta-heuristic that solves various vehicle routing problems (VRPs), improves the solutions of VRPs by swapping the sub-tours of the vehicles. Inspired by CE, we propose Neuro CE (NCE), a fundamental operator of a \textit{learned} meta-heuristic, to solve various VRPs while overcoming the limitations of CE (i.e., the expensive $\mathcal{O}(n^4)$ search cost). NCE employs a graph neural network to predict the cost-decrements (i.e., results of CE searches) and utilizes the predicted cost-decrements as guidance for the search, decreasing the search cost to $\mathcal{O}(n^2)$. As the learning objective of NCE is to predict the cost-decrement, training can be done simply in a supervised fashion, with training samples that can be prepared effortlessly. Despite the simplicity of NCE, numerical results show that NCE trained on the flexible multi-depot VRP (FMDVRP) outperforms the meta-heuristic baselines. More importantly, it significantly outperforms the neural baselines when solving distinctive special cases of FMDVRP (e.g., MDVRP, mTSP, CVRP) without additional training.
\end{abstract}
\section{Introduction}
The field of neural combinatorial optimization (NCO), an emerging research area intersecting operations research and artificial intelligence, aims to train an effective solver for various combinatorial optimization problems, such as the traveling salesman problem (TSP) \citep{bello2016neural, khalil2017learning, nazari2018reinforcement, kool2018attention,kwon2020pomo}, vehicle routing problems (VRPs) \citep{bello2016neural, khalil2017learning, nazari2018reinforcement, kool2018attention,kwon2020pomo,hottung2019neural, lu2019learning, da2021learning}, and vertex covering problems \citep{khalil2017learning,li2018combinatorial,guo2019solving}. As NCO tackles NP-hard problems using various state-of-the-art (SOTA) deep learning techniques, it is considered an important research area in artificial intelligence. At the same time, NCO is an important field from a practical point of view because it can solve complex real-world problems.
Most NCO methods learn an operator that improves the current solution to obtain a better solution (i.e., improvement heuristics) \citep{hottung2019neural, lu2019learning, da2021learning} or constructs a solution sequentially (i.e., construction heuristics) \citep{bello2016neural, khalil2017learning, nazari2018reinforcement,kool2018attention,kwon2020pomo,park2021schedulenet,cao2021dan}. To learn such operators, NCO methods either employ supervised learning (SL), which imitates the solutions of verified solvers, or reinforcement learning (RL), which necessitates the design of an effective representation, architecture or learning method, making them less trainable for complex and realistic VRPs. Moreover, most NCO research in recent years has focused extensively on improving performance on benchmark CO problems while overlooking the applicability of NCO to more realistic problems.
Noting that \textit{improvement} (meta-)heuristics are applicable to various VRPs with only minor problem-specific modifications, we aim to learn a fundamental and universal improvement operator that overcomes the limitation of CROSS exchange (CE) \cite{taillard1997tabu}, a generalization of various hand-crafted improvement operators of meta-heuristics. CE improves the solution of a VRP by updating the tours of two vehicles. To be specific, it chooses a sub-tour from each tour and swaps the sub-tours to generate the updated tours. In practice, to find the (best) \textit{improving} sub-tours (i.e., the sub-tours that decrease the cost value of the VRP), CE performs a brute-force search that costs $\mathcal{O}(N^4)$, which makes CE unsuitable for large-scale VRPs.
In this paper, we propose Neuro CE (NCE) that effectively conducts the CE operation with significantly less computational complexity. NCE amortizes the search for ending nodes of the sub-tours by employing a
graph neural network (GNN) that predicts the best cost decrement, given two starting nodes from the given two trajectories. By using the predictions, NCE searches over the promising starting nodes only. Hence, the proposed NCE has $\mathcal{O}(N^2)$ search complexity. Furthermore, unlike other SL or RL approaches, the prediction target of NCE is not the entire solution of the VRP but the cost decrements of the CE operations, which lowers the difficulty of the prediction task. This allows the training data to be prepared effortlessly.
The contributions of this study are summarized as follows:
\begin{itemize}[leftmargin=0.5cm]
\item \textbf{Generalizability/Transferability:} As NCE learns a fundamental and universal operator, it can solve various complex VRPs without retraining for each type of VRP.
\item \textbf{Trainability:} The NCE operator is trained in a supervised manner with the dataset comprised of the tour pairs and cost decrements, which are easy to obtain.
\item \textbf{Practicality/Performance:}
We evaluate NCE on various types of VRPs, including flexible multi-depot VRP (FMDVRP), multi-depot VRP (MDVRP), the multiple traveling salesman problem (mTSP), and capacitated VRP (CVRP). Extensive numerical experiments validate the strong empirical performance of NCE compared to the SOTA meta-heuristics and NCO baselines, even though NCE is only trained to solve FMDVRP.
\end{itemize}
\section{Preliminaries}
\begin{figure}
\caption{The overall procedure of improvement heuristic that uses CE as the inter-operation.}
\label{fig:ce_overall}
\end{figure}
This section introduces the target problem, the flexible multi-depot VRP (FMDVRP), and CE, which is one of the possible approaches to solving FMDVRP.
\subsection{Min-max flexible multi-depot VRP}
\label{subsec:fmdvrp}
Min-max FMDVRP is a generalization of VRP that aims to find the coordinated routes of multiple vehicles with multiple depots. The flexibility allows each vehicle to return to any depot regardless of its starting depot. FMDVRP is formulated as follows:
\begin{align}
\label{eqn:vrp_obj}
\min_{\pi \in \sS(P)}\max_{i \in \sV}{C(\tau_i)}
\end{align}
where $P$ is the description of the FMDVRP instance that is composed of a set of vehicles $\sV$, $\sS(P)$ is the set of solutions that satisfy the constraints of FMDVRP (i.e., feasible solutions), and $\pi=\{\tau_i\}_{i \in \sV}$ is a solution of the min-max FMDVRP. The tour $\tau_i=[N_1, N_2, ..., N_{l(i)}]$ of vehicle $i$ is the ordered collection of cities visited by vehicle $i$, and $C(\tau_i)$ is the cost of $\tau_i$. FMDVRP models VRPs in which the vehicles are shared and may be picked up and returned at arbitrary locations (e.g., shared rental-car services). For the mixed integer linear programming (MILP) formulation of FMDVRP, please refer to \cref{appendix:fmdvrp}.
Classical VRPs are special cases of FMDVRP. \emph{TSP} is a VRP with a single vehicle and depot, \emph{mTSP} is a VRP with multiple vehicles and a single depot, and \emph{MDVRP} is a VRP with multiple vehicles and depots. Since FMDVRP is a general problem class, we learn a solver for FMDVRP and employ it to solve other specific problems (i.e., MDVRP, mTSP, and CVRP) without retraining or fine-tuning. We demonstrate that the proposed method can solve the special cases without retraining in \cref{sec:experiments}.
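To make the objective in \eqref{eqn:vrp_obj} concrete, the snippet below is a minimal sketch of how a candidate solution is scored under the min-max criterion; it assumes Euclidean travel costs and tours represented simply as lists of 2-D coordinates, which is a simplification of the full MILP formulation in \cref{appendix:fmdvrp}.
\begin{verbatim}
import numpy as np

def tour_cost(tour):
    """Euclidean length of one vehicle's tour (an (l, 2) array of coordinates)."""
    tour = np.asarray(tour, dtype=float)
    return float(np.linalg.norm(np.diff(tour, axis=0), axis=1).sum())

def minmax_cost(solution):
    """Min-max objective: cost of the most expensive tour in the solution."""
    return max(tour_cost(tau) for tau in solution)

# toy example with two vehicles (first/last entries play the role of depots)
solution = [
    [(0, 0), (1, 2), (3, 1), (4, 0)],   # tour of vehicle 1
    [(5, 5), (4, 3), (2, 4), (0, 5)],   # tour of vehicle 2
]
print(minmax_cost(solution))
\end{verbatim}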
\input{algorithms/00-NCE}
\subsection{CROSS exchange}
CE is a solution updating operator that iteratively improves the solution until it reaches a satisfactory result \cite{taillard1997tabu}. CE reduces the overall cost by \textit{exchanging} the sub-tours in two tours. The CE operator is defined as:
\begin{align}
\label{eqn:cross-exchange}
\tau_1', \tau_2' = \text{\fontfamily{lmtt}\selectfont CROSS}(a_1, b_1, a_2, b_2;\tau_1,\tau_2) \\
\tau_1' \triangleq \tau_1[:a_1] \oplus \tau_2[a_2:b_2] \oplus \tau_1[b_1:] \\
\tau_2' \triangleq \tau_2[:a_2] \oplus \tau_1[a_1:b_1] \oplus \tau_2[b_2:]
\end{align}
where $\tau_i$ and $\tau_i'$ are the input and updated tours of vehicle $i$, respectively. $\tau_i[a:b]$ represents the sub-tour of $\tau_i$ ranging from node $a$ to node $b$. $\tau \oplus \tau'$ represents the concatenation of tours $\tau$ and $\tau'$. For brevity, we assume that node $a_1$ precedes node $b_1$ in $\tau_1$ and that node $a_2$ precedes node $b_2$ in $\tau_2$.
CE selects the sub-tours (i.e., $\tau_1[a_1:b_1], \tau_2[a_2:b_2]$) from $\tau_1,\tau_2$ and swaps the sub-tours to generate new tours $\tau_1', \tau_2'$. CE seeks to find the four points $(a_1, b_1, a_2, b_2)$ to reduce the cost of the tours (i.e., $\max(C(\tau_1'), C(\tau_2')) \leq \max(C(\tau_1), C(\tau_2))$). When the full search method is naively employed, the search cost is $\mathcal{O}(n^4)$, where $n$ is the number of nodes in a tour.
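The exchange defined above and its brute-force search can be written down directly. The following is a minimal sketch, reusing the \texttt{tour\_cost} helper and coordinate-list tour representation from the sketch in \cref{subsec:fmdvrp} and using the min-max criterion as the acceptance test; depot handling and the feasibility checks of a full implementation are deliberately omitted.
\begin{verbatim}
# reuses tour_cost from the previous sketch
def cross(a1, b1, a2, b2, tau1, tau2):
    """CROSS exchange: swap the sub-tours tau1[a1:b1] and tau2[a2:b2]."""
    new1 = tau1[:a1] + tau2[a2:b2] + tau1[b1:]
    new2 = tau2[:a2] + tau1[a1:b1] + tau2[b2:]
    return new1, new2

def best_cross_full_search(tau1, tau2):
    """Brute-force O(n^4) search for the exchange minimizing max(C(tau1'), C(tau2'))."""
    best = (max(tour_cost(tau1), tour_cost(tau2)), tau1, tau2)
    for a1 in range(1, len(tau1)):
        for b1 in range(a1, len(tau1)):
            for a2 in range(1, len(tau2)):
                for b2 in range(a2, len(tau2)):
                    t1, t2 = cross(a1, b1, a2, b2, tau1, tau2)
                    cost = max(tour_cost(t1), tour_cost(t2))
                    if cost < best[0]:
                        best = (cost, t1, t2)
    return best
\end{verbatim}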
\cref{fig:ce_overall} illustrates how improvement heuristics utilize CE to solve FMDVRP. The improvement heuristics start by generating the initial feasible tours using simple heuristics. Then, they repeatedly (1) select two tours, (2) apply inter-operation to generate improved tours by CE, and (3) apply intra-operation to improve the tours independently. The application of the inter-operation makes the heuristics more suitable for solving the multi-vehicle routing problems as it considers the interactions among the vehicles while \textit{improving} the solutions. The improvement heuristics terminate when no more (local) improvement is possible.
\section{Neuro CROSS Exchange}
\label{sec:nce}
In this section, we introduce Neuro CROSS exchange (NCE) to solve FMDVRP and its special cases. The overall procedure of NCE is summarized in \cref{alg:NCE-overall}. We briefly explain
\model{GetInitialSolution}, \model{SelectTours}, \model{NeuroCROSS}, and \model{IntraOperation}, and then provide the details of the proposed {\fontfamily{lmtt}\selectfont NeuroCROSS} operation in the following subsections. NCE is particularly designed to enhance CE to improve the solution quality and solving speed. Each component of NCE is as follows:
\begin{itemize}[leftmargin=0.5cm]
\item \textbf{\model{GetInitialSolution} } We use a multi-agent extended version of the greedy assignment heuristic to obtain the initial feasible solutions. The heuristic first clusters the cities into $|\sV|$ clusters and then applies the greedy assignment to each cluster to get the initial solution.
\item \textbf{\model{SelectTours} } Following the common practice, we set $\tau_1, \tau_2$ as the tours of the largest and smallest cost (i.e., $\tau_1 = \argmax_{\tau}({C(\tau_i)}_{i\in\sV})$, $\tau_2 = \argmin_{\tau}({C(\tau_i)}_{i\in\sV})$).
\item \textbf{\model{NeuroCROSS} } We utilize the cost-decrement prediction model $f_\theta(\cdot)$ and a two-stage search method to find a cost-improving tour pair $(\tau_1',\tau_2')$ within a $2\mathcal{O}(n^2)$ budget. The details of the NCE operation are given in \cref{subsec:NCE,subsec:cost-decr-pred}; a simplified sketch of the prediction-guided search is shown after this list.
\item \textbf{\model{IntraOperation} } For our targeting VRPs, the intra operation is equivalent to solving traveling salesman problem (TSP). We utilize {\fontfamily{lmtt}\selectfont elkai} \cite{elkai} to solve TSP.
\end{itemize}
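The sketch below illustrates how the predicted cost-decrements can prune the search. It is a simplified, hypothetical variant of the procedure detailed in \cref{subsec:NCE,subsec:cost-decr-pred}: $f_\theta$ is assumed to be a callable that scores a starting-node pair $(a_1, a_2)$, only the top-scoring pairs are kept, and the endpoints $(b_1, b_2)$ are then resolved exactly for those pairs. It reuses the \texttt{tour\_cost} and \texttt{cross} helpers from the earlier sketches.
\begin{verbatim}
import numpy as np

def neuro_cross_sketch(tau1, tau2, f_theta, top_k=1):
    """Prediction-guided CROSS search (simplified sketch).

    f_theta(tau1, tau2, a1, a2) is assumed to return the predicted best
    cost-decrement over all endpoint choices (b1, b2) for this start pair.
    """
    # Stage 1: score all O(n^2) starting-node pairs with the learned model.
    pairs = [(a1, a2) for a1 in range(1, len(tau1)) for a2 in range(1, len(tau2))]
    scores = [f_theta(tau1, tau2, a1, a2) for (a1, a2) in pairs]
    candidates = [pairs[i] for i in np.argsort(scores)[::-1][:top_k]]

    # Stage 2: resolve the endpoints exactly, but only for the retained pairs
    # (O(n^2) work per retained pair), keeping the best improving exchange.
    best = (max(tour_cost(tau1), tour_cost(tau2)), tau1, tau2)
    for a1, a2 in candidates:
        for b1 in range(a1, len(tau1)):
            for b2 in range(a2, len(tau2)):
                t1, t2 = cross(a1, b1, a2, b2, tau1, tau2)
                cost = max(tour_cost(t1), tour_cost(t2))
                if cost < best[0]:
                    best = (cost, t1, t2)
    return best
\end{verbatim}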
\subsection{Neuro CROSS exchange operation}
\label{subsec:NCE}
\input{sections/03-1-NCE-operation}
\subsection{Cost-decrement prediction model}
\label{subsec:cost-decr-pred}
\input{sections/03-2-NCE-cost-pred-new}
\section{Related works}
\paragraph{Supervised learning (SL) approach to solve VRPs} SL approaches \cite{joshi2019efficient, vinyals2015pointer, xin2021neurolkh,li2021learning,li2018combinatorial} utilize the supervision from VRP solvers as the training labels. \cite{vinyals2015pointer, joshi2019efficient} imitate TSP solvers using PointerNet and a graph convolution network (GCN), respectively. \cite{joshi2019efficient} trains a GCN to predict the edge occurrence probabilities in TSP solutions. Even though SL approaches often offer a faster solving speed than existing solvers, their use is limited to the problems for which solvers are available. This property limits the applicability of SL to general and realistic VRPs.
\paragraph{Reinforcement learning (RL) approach to solve VRPs}
RL approaches \citep{bello2016neural, khalil2017learning, nazari2018reinforcement, kool2018attention, kwon2020pomo, park2021schedulenet, cao2021dan, guo2019solving, wu2019learning, wu2021learning, falkner2020learning, chen2019learning} exhibit promising performances that are comparable to existing solvers, as they learn solvers from problem-solving simulations. \citep{bello2016neural, nazari2018reinforcement, kool2018attention,guo2019solving} utilize an encoder-decoder structure to generate routing schedules sequentially, while \citep{park2021schedulenet, khalil2017learning} use graph-based embeddings to determine the next assignment action. However, RL approaches often require a problem-specific Markov decision process and network design. NCE does not require the simulation of the entire problem-solving process. Instead, it only requires computing the swapping operation (i.e., the results of CE). This property allows NCE to be trained easily to solve various routing problems with one scheme.
\paragraph{Neural network-based (meta) heuristic approach} Combining machine learning (ML) components with existing (meta) heuristics shows strong empirical performance when solving VRPs \cite{hottung2019neural, xin2021neurolkh, li2021learning, lu2019learning, da2021learning, kool2021deep}. These methods often employ ML to learn to solve difficult NP-hard sub-problems of VRPs. For example, L2D \cite{li2021learning} learns to predict the objective value of CVRP, NLNS \cite{hottung2019neural} learns a TSP solver when solving VRPs, and DPDP \cite{kool2021deep} learns to boost dynamic programming algorithms. To learn such solvers, these methods apply SL or RL. Instead, NCE learns the fundamental operator of meta-heuristics rather than predicting or generating a solution. Hence, NCE trained on FMDVRP generalizes well to the special cases of FMDVRP. Furthermore, the training data for NCE can be prepared effortlessly.
\section{Experiments}
\label{sec:experiments}
This section provides the experimental results that validate the effectiveness of the proposed NCE in solving FMDVRP and the various VRPs. To train $f_\theta(\cdot)$, we use the input $(\tau_1, \tau_2, a_1, a_2)$ and output $y^*$ pairs obtained from 50,000 random FMDVRP instances. The details regarding the training data generation are described in \cref{appendix:train-detail}. The cost-decrement model $f_\theta(\cdot)$ is parametrized by a GNN that contains five attentive embedding layers. The details of the $f_\theta(\cdot)$ architecture and the computing infrastructure used to train $f_\theta(\cdot)$ are discussed in \cref{appendix:train-detail}.
We emphasize that we use a single $f_\theta(\cdot)$ that is trained on FMDVRP for all experiments. We found that $f_\theta(\cdot)$ effectively solves the three special cases (i.e., MDVRP, mTSP, and CVRP) without retraining, demonstrating the effectiveness of NCE as a universal operator for VRPs.
\subsection{FMDVRP experiments}
\label{subsec:FMDVRP-results}
\input{sections/05-1-FMDVRP}
\subsection{mTSP experiments}
\label{subsec:mTSP-results}
\input{sections/05-2-mTSP}
\subsection{CVRP experiments}
\label{subsec:CVRP-results}
\input{sections/05-3-CVRP}
\subsection{Ablation studies}
\label{subsec:Parametric study}
\input{sections/05-4-ablation}
\section{Conclusion}
We propose Neuro CROSS exchange (NCE), a neural network-enhanced CE operator, to learn a fundamental and universal operator that can be used to solve various types of practical VRPs. NCE learns to predict the best cost-decrements of the CE operation and utilizes the prediction to amortize the costly search process of CE. As a result, NCE reduces the search cost of CE from $\mathcal{O}(N^4)$ to $\mathcal{O}(N^2)$. Furthermore, the NCE operator can be trained with data that are relatively easy to obtain, which reduces the training difficulty. We validated that NCE can solve various VRPs without training for each specific problem, exhibiting strong empirical performance.
Although NCE addresses more realistic VRPs (i.e., FMDVRP) than existing NCO solvers, NCE does not yet consider complex constraints such as pickup and delivery, and time windows. Our future research will focus on solving more complex VRPs by considering such constraints during the NCE operation.
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{}
\item Did you describe the limitations of your work?
\answerYes{}
\item Did you discuss any potential negative societal impacts of your work?
\answerNA{}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{}
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerNA{}
\item Did you include complete proofs of all theoretical results?
\answerNA{}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerNo{} We will publicize the code after the decision.
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerNA{} We follow the convention of our targeting research community which reports the mean values.
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{}
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{}
\item Did you mention the license of the assets?
\answerYes{}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerYes{}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerNA{}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNA{}
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{}
\end{enumerate}
\end{enumerate}
\appendix
\renewcommand \thepart{}
\renewcommand \partname{}
\rule[0pt]{\columnwidth}{3pt}
\begin{center}
\huge{\bf{Neuro CROSS exchange} \\
\emph{Supplementary Material}}
\end{center}
\vspace*{3mm}
\rule[0pt]{\columnwidth}{1pt}
\vspace*{-.5in}
\appendix
\addcontentsline{toc}{section}{}
\part{}
\parttoc
\renewcommand{A.\arabic{equation}}{A.\arabic{equation}}
\setcounter{equation}{0}
\input{appendix/07-01-MILP}
\input{appendix/07-02-MDVRP-results}
\input{appendix/07-03-ablation}
\input{appendix/07-04-training-detail}
\input{appendix/07-05-evaluation-of-the-cost-decrement-model}
\input{appendix/07-06-comparison-with-full-search}
\input{appendix/07-07-example-solutions}
\end{document}
|
\begin{document}
\vskip0.5cm
\begin{center}\textbf{Lower Bound for the Discrete Norm of a Polynomial on the
Circle}\end{center}
\begin{center}\textbf{V. N. Dubinin}\end{center}
\textit{Keywords: discrete norm of a polynomial, uniform grid,
uniform norm on a set, Schwarz lemma, conformal mapping, analytic
continuation, maximum principle.}
\begin{center}1. INTRODUCTION AND STATEMENT OF THE RESULT\end{center}
For the approximation of functions, a uniform grid of values of
the arguments is often chosen. In this connection, it is natural
to pose the question of how the discrete norm on a given grid
relates to the uniform norm of the corresponding function on a
given set. In the comparatively recent paper [1], Sheil-Small
showed that, for algebraic polynomials $P$ of degree $n$ and
natural $N>n$, the following estimate holds:
$$
\max\limits_{\omega^N=1}|P(\omega)|\geq\sqrt{\frac{N-n}{N}}
\max\limits_{|z|=1}|P(z)|.\eqno(1)
$$
Earlier Rakhmanov and Shekhtman [2] proved the inequality
$$
\max\limits_{\omega^N=1}|P(\omega)|\geq \left(1+C
\log\frac{N}{N-n}\right)^{-1} \max\limits_{|z|=1}|P(z)|,\eqno(2)
$$
where the absolute constant $C$ can be estimated by the number 16
(see [2, p. 3, 5]). This result generalizes an estimate due to
Marcinkiewicz [3], who obtained (2) for the case $N = n+ 1$.
Inequality (2) is better than inequality (1) for $n/N$ close to 1,
but worse than the Sheil-Small estimate for small values of $n/N$.
In the present paper, we prove the following statement.
\vskip0.5cm\textbf{Theorem.} \textit{Let $P$ be a polynomial of
degree $n$, and let $N$ be a natural number, $N\geq 2n$. Then, for
the discrete norm of the polynomial $P$, the following inequality
holds:}
$$
\max\limits_{\omega^N=1}|P(\omega)|\geq\cos\frac{\pi
n}{2N}\max\limits_{|z|=1}|P(z)|.\eqno(3)
$$
\textit{The equality in (3) is attained in the case
$P(z)=(z\exp(i\pi/N))^n+1$ and for any $N$ which is a multiple of
$n$.}
\vskip0.5cm Estimate (3) holds for all $N >n$. However, for $n < N
< 2n$, it is worse than estimate (1). In the case $N = 2n$,
estimates (1) and (3) coincide and, for $N >2n$, inequality (3)
strengthens inequalities (1) and (2). Moreover, for numbers $N$
which are multiples of $n$, inequality (3) is sharp, and we obtain
the equality
$$
\sup\left\{\left(\max\limits_{|z|=1}|P(z)|\right)/\left(\max\limits_{\omega^N=1}
|P(\omega)|\right)\right\}=\left(\cos\left(\frac{\pi
n}{2N}\right)\right)^{-1},
$$
where $N$ is a multiple of $n$, and the upper bound is taken over
all polynomials $P$ of degree $n$ (see [2, Theorem 1]).
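The relative strength of the two bounds can also be tabulated numerically. The short script below is only an illustrative sketch: it compares the factors appearing in (1) and (3) as functions of $n/N$ and, as a sanity check, tests inequality (3) on a random polynomial by comparing the maximum over the $N$th roots of unity with the maximum over a fine grid on the circle.
\begin{verbatim}
import numpy as np

# compare the lower-bound factors in (1) and (3) as functions of n/N
for ratio in [0.10, 0.25, 0.50, 0.75, 0.90]:
    factor1 = np.sqrt(1.0 - ratio)            # factor in (1)
    factor3 = np.cos(np.pi * ratio / 2.0)     # factor in (3)
    print(f"n/N = {ratio:4.2f}:  (1) {factor1:.4f}   (3) {factor3:.4f}")

# sanity check of (3) for a random polynomial of degree n with N = 2n
rng = np.random.default_rng(0)
n, N = 5, 10
coeffs = rng.normal(size=n + 1) + 1j * rng.normal(size=n + 1)
roots_of_unity = np.exp(2j * np.pi * np.arange(N) / N)
circle = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False))
discrete_max = np.abs(np.polyval(coeffs, roots_of_unity)).max()
uniform_max = np.abs(np.polyval(coeffs, circle)).max()
assert discrete_max >= np.cos(np.pi * n / (2 * N)) * uniform_max
\end{verbatim}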
\vskip0.5cm Note that, as far back as 1931, Bernstein [4] obtained
inequalities for trigonometric sums close to inequalities (1)–(3)
(see also [5, pp. 147, 149, 154]). In particular, the corollary on
p. 154 of [5] implies inequality (3) in the case of even degrees
$n$. Our proof is different from the proofs in [1]–[5]. It is
based, essentially, only on the maximum principle of the modulus
of the suitable analytic function. Following [6], we can
strengthen inequality (3) by taking the constant term and the
leading coefficient of the polynomial $P$ into account.
\begin{center}2. AUXILIARY RESULT\end{center}
We introduce the notation
$$
m(P)=\min\limits_{|z|=1}|P(z)|,\qquad\quad
M(P)=\max\limits_{|z|=1}|P(z)|.
$$
We shall need the following analog of the Schwarz lemma in one of
its particular cases. \vskip0.5cm\textbf{Lemma.} \textit{Let $P$
be a polynomial of degree $n$ for which $P(0)\neq 0$ and $m(P)\neq
M(P)$, and let the function $\zeta=\Phi(w)$ conformally and
univalently map the exterior of the closed interval
$\gamma=[m^2(P),M^2(P)]$ onto the disk $|\zeta|<1$ so that
$\Phi(\infty)=0$ and $\Phi(m^2(P))=-1$. Then the function}
$$
f(z)=\Phi\left( \overline{P(\overline{z})}P(\frac{1}{z})\right)
$$
\textit{is regular on the set}
$$
G=\left\{z:|z|<1,\;
\overline{P(\overline{z})}P(\frac{1}{z})\not\in\gamma\right\},
$$
\textit{analytically continuable to the set}
$$
E=\left\{z:|z|=1,\; |P(\overline{z})|\neq
m(P),\;|P(\overline{z})|\neq M(P)\right\},
$$
\textit{and, at the points of the set $E$, the following
inequality holds:}
$$
|f'(z)|\leq n.\eqno(4)
$$
\textbf{Proof.} The smoothness of the function $f$ on the sets $G$
and $E$ can easily be verified. Further, in a neighborhood of the
origin, the following expansion is valid:
$$
f(z)=\frac{M^2(P)-m^2(P)}{4\overline{c_0}c_n}z^n+\ldots\;,
$$
where $c_0$ is the constant term and $c_n$ is the leading
coefficient of the polynomial $P$. In addition, $f(z)\neq 0$ in
$G\setminus\{0\}$. Therefore, the function $z^n/f(z)$ is regular
on the open set $G$. At the points of the boundary of this set,
the modulus of this function does not exceed 1. By the maximum
principle for the modulus, we find that the inequality
$$
|f(z)|\geq|z^n|\eqno(5)
$$
holds everywhere on the set $G$. Now, let $z$ be an arbitrary
fixed point of the set $E$. Taking inequality (5) into account, we
obtain
$$
|f'(z)|=\frac{\partial|f(z)|}{\partial|z|}=\lim\limits_{r\to
1}\frac{|f(z)|-|f(rz)|}{1-r}\leq\lim\limits_{r \to
1}\frac{1-r^n}{1-r}=n.
$$
The lemma is proved.
\begin{center}3. PROOF OF THE THEOREM\end{center}
We can assume that $M(P)=1$ and $P(0)\neq 0$. Under these
conditions, $m(P)<M(P)$. Indeed, otherwise, the polynomial $P$
maps the circle $|z|=1$ into itself and, in view of the equality
$P(\infty)=\infty$, the symmetry principle leads to a
contradiction: $P(0)=0$. Let us show that, for any point
$z=e^{i\varphi}$ on the circle $|z|=1$, the following inequality
holds:
$$
\left|\left(|P(z)|^2\right)'_{\varphi}\right|\leq
n\sqrt{|P(z)|^2(1-|P(z)|^2)}\eqno(6)
$$
(see [6, Theorem 2]). In view of the continuity, it suffices to
verify this inequality for all points $z$ such that $\overline{z}$
belongs to the set $E$ from the lemma. Suppose that $z\in E$.
Then, for the function $f$ from the lemma, we have
$$
|f'(z)|=\left|\Phi'\left(\overline{P(\overline{z})}P(\frac1z)\right)\right|\left|\overline{P'
(\overline{z})}P(\frac1z)-\frac{1}{z^2}\overline{P(\overline{z})}P'(\frac1z)\right|=
$$
$$
=\left|\Phi'\left(|P(\overline{z})|^2\right)\right|\left|\frac{P^2(\overline{z})}{z}\right|\left|
\frac{\overline{\overline{z}P'(\overline{z})}}{\overline{P(\overline{z})}}-\frac{\dfrac1zP'(\dfrac1z)}{
P(\dfrac1z)}\right|=
$$
$$
=\left|\Phi'\left(|P(\overline{z})|^2\right)\right||P^2(\overline{z})|\left|
2{\rm
Im}\frac{\overline{z}P'(\overline{z})}{P(\overline{z})}\right|=
\left|\Phi'\left(|P(\overline{z})|^2\right)\right|\left|\left(|P(\overline{z})|^2\right)'_{\varphi}\right|.
$$
Before calculating the derivative $\Phi'$, note that the inverse
function $\Phi^{-1}(\zeta)$ is of the form
$$
\Phi^{-1}(\zeta)=\frac14\left(\zeta+\frac{1}{\zeta}\right)
(M^2(P)-m^2(P))+ \frac12(M^2(P)+m^2(P)).
$$
Hence
$$
\left|\Phi'\left(|P(\overline{z})|^2\right)\right|^{-1}=
\left|\frac14 \left(1-e^{-i2\theta}\right)(M^2(P)-m^2(P))\right|=
\frac12|\sin \theta|(M^2(P)-m^2(P)),
$$
where $\Phi^{-1}(e^{i\theta})=|P(\overline{z})|^2$ and, therefore,
$$
\cos
\theta=\frac{2|P(\overline{z})|^2-(M^2(P)+m^2(P))}{M^2(P)-m^2(P)}.
$$
Finally,
$$
|f'(z)|=\frac{\left|\left(|P(\overline{z})|^2\right)'_{\varphi}\right|}{
\sqrt{(|P(\overline{z})|^2-m^2(P))(M^2(P)-|P(\overline{z})|^2)}}.
$$
Using inequality (4), we obtain inequality (6) for $\overline{z}$.
Let us now pass to the proof of inequality (3). Let
$z_0=e^{i\varphi_0}$ denote one of the points at which the maximum
$M(P)=|P(z_0)|=1$ is attained, and let $\omega_k$ be the $N$th
root of 1 for which the arc of the circle
$$
\left\{z:|z|=1,\;|\arg z-\arg\omega_k|\leq\frac{\pi}{N}\right\}
$$
contains the point $z_0$. Suppose that, for some branch of the
argument, the following inequality holds:
$$
\arg\omega_k-\frac{\pi}{N}\leq\varphi_0\leq\arg \omega_k.
$$
Dividing both sides of inequality (6) by the square root on the
right and integrating the resulting relation on the interval
$(\varphi_0,\arg\omega_k)$ with the replacement
$|P(z)|^2=u(\varphi)$, we obtain
$$
n(\arg\omega_k-\varphi_0)\geq\int\limits_{\varphi_0}^{\arg\omega_k}
\frac{-u'_{\varphi}d\varphi}{\sqrt{u(1-u)}}=-\int\limits_{1}^{u_k}
\frac{du}{\sqrt{u(1-u)}}=
$$
$$
=-2\int\limits_{1}^{\sqrt{u_k}}\frac{dt}{\sqrt{1-t^2}}=-2\arcsin\sqrt{u_k}+\pi,
$$
where $u_k=u(\arg\omega_k)$. Hence
$$
2\arcsin|P(\omega_k)|\geq\pi-n\frac{\pi}{N}>0,
$$
and
$$
|P(\omega_k)|\geq\sin\left(\frac{\pi}{2}-\frac{n\pi}{2N}\right)=\cos\frac{\pi
n}{2N}.
$$
Passing to the maximum, we obtain inequality (3). In the case
$\arg\omega_k\leq\varphi_0$, similar arguments yield the same
inequality.
If $P(z)=(z\exp(i\pi/N))^n+1$ and $N=nl$, where $l\geq 2$ is a
natural number, then
$$
\max\limits_{|z|=1}|P(z)|=2.
$$
On the other hand, direct calculations yield
$$
\max\limits_{\omega^N=1}|P(\omega)|=\max\limits_{0\leq k\leq
l-1}|P(\omega_k)|= \max\limits_{0\leq k\leq l-1}
2\left|\cos(\frac{\pi}{l}k+\frac{\pi}{2l})\right| =
2\cos\frac{\pi}{2l}.
$$
Thus, for the given polynomial $P$, we have the equality sign in
(3). The theorem is proved.
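For a concrete numerical illustration of the equality case, the short sketch below (with an arbitrary choice of $n$ and $l$) evaluates $P(z)=(z\exp(i\pi/N))^n+1$ on the $N$th roots of unity and compares the result with $2\cos(\pi/2l)$.
\begin{verbatim}
import numpy as np

n, l = 4, 3
N = n * l
omega = np.exp(2j * np.pi * np.arange(N) / N)      # Nth roots of unity
P = lambda z: (z * np.exp(1j * np.pi / N))**n + 1
print(np.abs(P(omega)).max(), 2 * np.cos(np.pi / (2 * l)))  # both ~1.732
\end{verbatim}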
\begin{center}REFERENCES\end{center}
\begin{enumerate}
\item T. Sheil-Small, Bull. London Math. Soc. \textbf{40} (6), 956 (2008).
\item E. Rakhmanov and B. Shekhtman, J. Approx. Theory \textbf{139} (1-2), 2
(2006).
\item J. Marcinkiewicz, Acta Litt. Sci. Szeged \textbf{8}, 131 (1937).
\item S. N. Bernstein, Izv. Akad. Nauk SSSR OMEN \textbf{9}, 1151 (1931).
\item S. N. Bernstein, Collected Works, Vol. 2: The Constructive Theory
of Functions (Izdat. Akad. Nauk SSSR, Moscow, 1954) [in Russian].
\item V. N. Dubinin, Mat. Sb. \textbf{191} (12), 51 (2000) [Russian Acad. Sci.
Sb. Math. \textbf{191} (12), 1797 (2000)].
\end{enumerate}
\end{document}
|
\begin{document}
\title{Explicit Caplet Implied Volatilities for Quadratic Term-Structure Models}
\author{
Matthew Lorig
\thanks{Department of Applied Mathematics, University of Washington. \textbf{e-mail}: \url{[email protected]}}
\and
Natchanon Suaysom
\thanks{Department of Applied Mathematics, University of Washington. \textbf{e-mail}: \url{[email protected]}}
}
\date{This version: \today}
\maketitle
\begin{abstract}
We derive an explicit asymptotic approximation for implied volatilities of caplets under the assumption that the short-rate is described by a generic quadratic term-structure model. In addition to providing an asymptotic accuracy result, we perform experiments in order to gauge the numerical accuracy of our approximation.
\end{abstract}
\noindent
\textsc{Keywords}: quadratic term-structure, simple forward rate, implied volatility, caplet.
\\[0.5em]
\textsc{MSC Codes}: 60G99, 35C20, 91-08.
\\[0.5em]
\textsc{JEL Classification}: C600 , C630 , C650, G190, G100.
\section{Introduction}
In a general \textit{term-structure} framework, the instantaneous short-rate of interest is given by an explicit function of time and some auxiliary factors, which are typically modeled as the solution of a (multi-dimensional) stochastic differential equation. By far the most well-known class of term-structure models are the \textit{affine term-structure} (ATS) models which, as the name suggests, model the short rate as an affine function of the auxiliary factor process. Notable ATS models include the Vasicek \cite{vasicek1977equilibrium}, Cox-Ingersoll-Ross {(CIR)} \cite{cox2005theory} and Hull-White \cite{hull1990pricing} models. ATS models have enjoyed widespread popularity because they allow for zero-coupon bond prices to be written explicitly as exponential affine functions of the auxiliary factors.
\\[0.5em]
A somewhat lesser-known class of term-structure models are the \textit{quadratic term-structure} (QTS) models, which, as the name suggests, model the short rate as a quadratic function of the auxiliary factor process.
QTS models include some ATS models as special cases and also offer additional modeling flexibility due to the fact that the zero-coupon bond price can be written as an exponential quadratic function of the auxiliary factors. Moreover, empirical results from \cite{ahn2002quadratic} indicate that QTS models capture historical bond prices better than ATS models.
\\[0.5em]
The purpose of this paper is to derive an explicit approximation for implied volatilities of options written on simple forward rates, assuming the underlying short-rate is given by a general QTS model. The implied volatility approximation we obtain is based on the coefficient polynomial expansion method that was introduced by \cite{pagliarani2011analytical} in order to derive approximate option prices in a scalar setting and later extended in \cite{lorig-pagliarani-pascucci-2} in order to derive approximate implied volatilities in a multi-factor local-stochastic volatility (LSV) setting. Our work is similar in some senses to \cite{lorig2022options}, who also employ the polynomial expansion method to derive approximate implied volatilities. However, there are two important differences between that work and ours: (i) \cite{lorig2022options} derive implied volatilities for options on bonds rather than on simple forward rates, and (ii) \cite{lorig2022options} focus on ATS rather than QTS models. For related work on implied volatility in a Heath-Jarrow-Morton (HJM) setting, we refer the reader to \cite{angelini2006notes}.
\\[0.5em]
The rest of the paper proceeds as follows:
in Section \ref{sec:model} we introduce a financial market in which the short-rate of interest is described by the class of QTS models. In Section \ref{sec:pricing} we provide a concise review of how to explicitly price options on bonds and simple forward rates (including caplets) in a QTS setting using Fourier transforms.
In Section \ref{sec:lsv-connection} we provide a precise link (see Proposition \ref{prop:connection} and Remark \ref{rmk:lsv}) between simple forward rates and classical multi-factor LSV models. We use this result in Section \ref{sec:price-asymptotics}
to develop an explicit approximation for caplet prices.
In Section \ref{sec:imp-vol}, we translate the price approximation into a corresponding approximation of implied volatility.
Lastly, in Section \ref{sec:examples}, {we perform experiments to gauge the numerical accuracy of our implied volatility approximation.}
\section{Quadratic term-structure models}
\label{sec:model}
We fix a time horizon $\topb < \infty$ and consider a continuous-time financial market, defined on a filtered probability space $(\Om,\Fc,\Fb,\Pb)$ with no arbitrages and no transaction costs. The probability measure $\Pb$ represents the market's chosen pricing measure taking the \textit{money market account} $M = (M_t)_{0 \leq t \leq \topb}$ as num\'eraire. The filtration $\Fb = (\Fc_t)_{0 \leq t \leq \topb}$ represents the history of the market.
\\[0.5em]
We suppose that the money market account $M$ is strictly positive, continuous and non-decreasing. As such, there exists a non-negative $\Fb$-adapted \textit{short-rate} process $R = (R_t)_{0 \leq t \leq \topb}$ such that
\begin{align}
\dd M_t
&= R_t M_t \, \dd t , &
M_0
&> 0 . \label{eq:dM}
\end{align}
We will focus on the case in which the dynamics of the short-rate $R$ are described by a QTS model.
Specifically, let $Y = (Y_t^{(1)}, Y_t^{(2)}, \ldots, Y_t^{(d)})^\top_{0\leq t \leq \topb}$ be the unique strong solution of a stochastic differential equation (SDE) of the following form
\begin{align}
\dd Y_t
&= ( \lam + \Lam \, Y_t ) \, \dd t + \Sig \, \dd W_t , \label{eq:dY}
\end{align}
where $\lam \in \Rb_+^{d \times 1}$ is a column vector, the matrix $\Lam \in \Rb^{d \times d}$ is diagonalizable with eigenvalues having negative real parts, the matrix $\Sig \in \Rb^{d \times d}$ is constant, and $W = (W_t^{(1)}, W_t^{(2)}, \ldots, W_t^{(d)})_{0 \leq t \leq \topb}^\top$ is a $d$-dimensional $(\Pb,\Fb)$-Brownian motion. Then, following \cite[Section 2]{ahn2002quadratic}, every equivalence class of QTS models has a unique \textit{canonical representation} of the form
\begin{align}
R_t
&= r(Y_t)
:= q + Y_t^{\top} \Xi \, Y_t , \label{eq:r-def}
\end{align}
for some constant $q \in \Rb_+$ and some matrix $\Xi \in \Rb^{d \times d}$ that is positive semidefinite and satisfies $\Xi_{i,i} = 1$ for $i = 1, 2, \ldots, d$. Note that the restrictions on $q$ and $\Xi$ guarantee that the short-rate $R$ is non-negative.
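For intuition, the dynamics \eqref{eq:dY} together with the canonical representation \eqref{eq:r-def} are straightforward to simulate. The following is a minimal Euler-Maruyama sketch in dimension $d=2$ with illustrative (assumed) parameter values; it merely checks that the simulated short rate remains non-negative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# illustrative (assumed) parameters of a d = 2 canonical QTS model
lam = np.array([0.1, 0.2])
Lam = np.array([[-1.0, 0.0], [0.0, -0.5]])   # eigenvalues with negative real parts
Sig = np.array([[0.2, 0.0], [0.05, 0.1]])
q = 0.01
Xi = np.array([[1.0, 0.2], [0.2, 1.0]])      # positive semidefinite, unit diagonal

T, n_steps = 1.0, 1000
dt = T / n_steps
Y = np.zeros(2)
R = []
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=2)
    Y = Y + (lam + Lam @ Y) * dt + Sig @ dW   # Euler-Maruyama step for dY
    R.append(q + Y @ Xi @ Y)                  # short rate r(Y) = q + Y' Xi Y

print(min(R), max(R))                         # the short rate stays non-negative
\end{verbatim}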
\section{Pricing options on bonds and simple forward rates in a QTS setting}
\label{sec:pricing}
In this section we provide a formal review of how to explicitly price options on bonds and simple forward rates in the QTS setting. For a rigorous treatment of the results presented below, we refer the reader to \cite[Section 4.3]{chen2004quadratic}. To begin, for any $T \leq \topb$, column vector $\nu \in \Cb^{d \times 1}$ and matrix $\Omega \in \Cb^{d\times d}$, let us define $\Gam(\,\cdot\,,\,\cdot\,;T,\nu,\Omega) : [0,T] \times \Rb^{d \times 1} \to \Cb$ by
\begin{align}
\Gam(t,Y_t;T,\nu, \Omega)
&:= \Eb_t \exp \Big( -\int_t^T \dd s \,r(Y_s) + \nu^\top Y_T + Y_T^\top \Omega \, Y_T \Big) , \label{eq:Gamma-def}
\end{align}
where $\Eb_t$ denotes the $\Fc_t$-conditional expectation under $\Pb$. The existence of the function $\Gam$ follows from the Markov property of $Y$. Formally, the function $\Gam$ satisfies the Kolmogorov backward partial differential equation (PDE)
\begin{align}
(\d_t + \Ac - r ) \Gam(t,\, \cdot \, ; T,\nu, \Omega)
&= 0 , &
\Gam(T,y;T,\nu,\Omega)
&= \exp \Big( \nu^{\top} y + y^{\top} \Omega \, y \Big) , \label{eq:Gamma-pde}
\end{align}
where the operator $\Ac$ is the generator of $Y$ under $\Pb$. Explicitly, the generator $\Ac$ is given by
\begin{align}
\Ac
&= ( \lam + \Lam y )^\top \nabla_y + \tfrac{1}{2} \text{Tr}(\Sig \Sig^\top \nabla_y \nabla_y^\top ) , \label{eq:A}
\end{align}
where $\nabla_y = (\d_{y_1}, \d_{y_2}, \ldots, \d_{y_d})^\top$, and ``$\text{Tr}$'' denotes the trace operator.
The solution to \eqref{eq:Gamma-pde} is given by
\begin{align}
\Gam(t,y;T,\nu,\Omega)
&= \exp \Big( - F(t;T,\nu,\Omega) - G^\top(t;T,\nu,\Omega) y - y^{\top} H(t;T,\nu,\Omega)y \Big) , \label{eq:Gamma-explicit}
\end{align}
where, from \cite[Theorem 3.6]{chen2004quadratic}, the scalar-valued function $F(\, \cdot \, ;T,\nu,\Om) : [0,T] \to \Cb$, the vector-valued function $G(\, \cdot \, ;T,\nu,\Om) : [0,T] \to \Cb^{d \times 1}$ and the matrix-valued function $H(\, \cdot \, ;T,\nu,\Om) : [0,T] \to \Cb^{d \times d}$ solve the following system of ordinary differential equations (ODEs)
\begin{align}
\d_t F(t;T,\nu,\Omega) & = \tfrac{1}{2}G^{\top}(t;T,\nu,\Omega)\Sig \Sig ^{\top}G(t;T,\nu,\Omega)-\text{Tr}(\Sig \Sig^{\top}H(t;T,\nu,\Omega))-G^{\top}(t;T,\nu,\Omega)\lam-q,
\\
F(T;T,\nu,\Omega) & = 0, \label{eq:Fode}
\\
\d_t G(t;T,\nu,\Omega) & = 2H^\top(t;T,\nu,\Omega)\Sig \Sig^\top G(t;T,\nu,\Omega)- \Lam G(t;T,\nu,\Omega)-2H^\top(t;T,\nu,\Omega) \lam ,
\\
G(T;T,\nu,\Omega) & = -\nu, \label{eq:Gode}
\\
\d_t H(t;T,\nu,\Omega) & = 2H^\top(t;T,\nu,\Omega)\Sig\Sig^\top H(t;T,\nu,\Omega)-\Lam H(t;T,\nu,\Omega)- H^\top(t;T,\nu,\Omega) \Lam^\top - \Xi,
\\
H(T;T,\nu,\Omega) & = -\Omega, \label{eq:Hode}
\end{align}
The solution to the system \eqref{eq:Fode},\eqref{eq:Gode} and \eqref{eq:Hode} exists and is unique.
\\[0.5em]
As $F(t;T,0,0)$, $G(t;T,0,0)$ and $H(t;T,0,0)$ will appear frequently throughout this paper, it will be convenient to define
\begin{align}
\mathfrak{F}(t;T)
&:=F(t;T,0,0) , &
\mathfrak{G}(t;T)
&:=G(t;T,0,0) , &
\mathfrak{H}(t;T)
&:=H(t;T,0,0) . \label{eq:F0-G0-H0}
\end{align}
Now, for any $T \leq \topb$, let us denote by $B^T = (B_t^T)_{0 \leq t \leq T}$ the value of a \textit{zero-coupon bond} that pays one unit of currency at time $T$.
In the absence of arbitrage, the process $B^T/M$ must be a $(\Pb,\Fb)$-martingale. As such, we have
\begin{align}
\frac{B_t^T}{M_t}
&= \Eb_t \Big( \frac{B_T^T}{M_T} \Big) = \Eb_t \Big( \frac{1}{M_T} \Big) ,
\end{align}
where we have used $B_T^T = 1$.
Solving for $B_t^T$, we obtain
\begin{align}
B_t^T
&= \Eb_t \Big( \frac{M_t}{M_T} \Big) = \Eb_t \Big( \ee^{- \int_t^T \dd s \, r(Y_s)} \Big) = \Gam(t,Y_t;T,0,0) \\
&= \exp \Big(-\mathfrak{F}(t;T) - \mathfrak{G}^\top(t;T) Y_t - Y^{\top}_t \mathfrak{H}(t;T) Y_t \Big) , \label{eq:B-explicit}
\end{align}
where the third equality follows from \eqref{eq:Gamma-def} and the fourth equality follows from \eqref{eq:Gamma-explicit} and \eqref{eq:F0-G0-H0}.
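In the scalar case $d=1$, the system \eqref{eq:Fode}, \eqref{eq:Gode} and \eqref{eq:Hode} with $\nu=0$ and $\Omega=0$ can be integrated numerically in a few lines, after which the bond price follows from \eqref{eq:B-explicit}. The sketch below uses assumed, purely illustrative parameter values.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# illustrative (assumed) scalar (d = 1) parameters
lam, Lam, Sig, q, Xi = 0.1, -1.0, 0.2, 0.01, 1.0

def riccati_rhs(t, x):
    """Right-hand sides for (F, G, H) with nu = 0 and Omega = 0."""
    F, G, H = x
    dF = 0.5 * Sig**2 * G**2 - Sig**2 * H - G * lam - q
    dG = 2.0 * H * Sig**2 * G - Lam * G - 2.0 * H * lam
    dH = 2.0 * Sig**2 * H**2 - 2.0 * Lam * H - Xi
    return [dF, dG, dH]

t, T, y0 = 0.0, 5.0, 0.3
# integrate backwards in time from the terminal condition F(T) = G(T) = H(T) = 0
sol = solve_ivp(riccati_rhs, [T, t], [0.0, 0.0, 0.0], dense_output=True, rtol=1e-8)
F, G, H = sol.sol(t)
bond_price = np.exp(-F - G * y0 - H * y0**2)   # B_t^T for Y_t = y0
print(bond_price)
\end{verbatim}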
\subsection{Pricing options on zero-coupon bonds}
Let $U = (U_t)_{0 \leq t \leq T}$ denote the value of a European option that pays $\psi( \log B_T^\topb )$ at time $T$ for some function $\psi : \Rb_- \to \Rb$. With the aim of finding $U_t$, let $\psih:\Cb \to \Cb$ denote the generalized Fourier transform of $\psi$, which {is defined} as follows
\begin{align}
\psih(\om)
&:= \int_{-\infty}^{\infty} \dd x \, \ee^{-\ii \om x} \psi(x) , &
\om
&= \om_r + \ii \om_i , &
\om_r, \om_i
&\in \Rb . \label{eq:ft}
\end{align}
We can recover $\psi$ from $\psih$ using the inverse Fourier transform
\begin{align}
\psi(x)
&= \frac{1}{2\pi} \int_{-\infty}^{\infty} \dd \om_r \, \ee^{\ii \om x} \psih(\om) . \label{eq:ift}
\end{align}
Now, noting that, in the absence of arbitrage, the process $U/M$ must be a $(\Pb,\Fb)$-martingale, we have
\begin{align}
\frac{U_t}{M_t}
&= \Eb_t \Big( \frac{U_T}{M_T} \Big)
= \Eb_t \Big( \frac{\psi(\log B_T^\topb)}{M_T} \Big) .
\end{align}
Solving for $U_t$, we obtain
\begin{align}
U_t
&= \Eb_t \exp \Big( -\int_t^T \dd s \, r(Y_s) \Big) \psi( \log B_T^\topb ) \label{eq:conditional-expectation} \\
&= \frac{1}{2\pi} \int_{-\infty}^{\infty} \dd \om_r \, \psih(\om) \Eb_t \ee^{ -\int_t^T \dd s \, r(Y_s) + \ii \om \log B_T^\topb } \\
&= \frac{1}{2\pi} \int_{-\infty}^{\infty} \dd \om_r \, \psih(\om) \Eb_t \ee^{ -\int_t^T \dd s \, r(Y_s) } \Eb_T \ee^{ \ii \om \log B_T^\topb } \\
&= \frac{1}{2\pi} \int_{-\infty}^{\infty} \dd \om_r \, \psih(\om) \ee^{ - \ii \om \mathfrak{F}(T;\topb) }
\Eb_t \ee^{ -\int_t^T \dd s \, r(Y_s) - \ii \om \mathfrak{G}^\top(T;\topb) Y_T - \ii \om Y^\top_T \mathfrak{H}(T;\topb) Y_T } \\
&= \frac{1}{2\pi} \int_{-\infty}^{\infty} \dd \om_r \, \psih(\om) \ee^{ - \ii \om \mathfrak{F}(T;\topb) }
\Gam\big(t,Y_t;T,-\ii \om \mathfrak{G}(T;\topb), -\ii \om \mathfrak{H}(T;\topb) \big) \label{eq:u-integral} \\
&=: u(t,Y_t;T,\topb) . \label{eq:u-def}
\end{align}
In general, the inverse Fourier integral \eqref{eq:u-integral} that defines $u$ must be computed numerically.
\begin{remark}
As $\log B_T^\topb \leq 0$ $\Pb$-a.s., values of $\psi(x)$ for $x > 0$ do not affect the conditional expectation \eqref{eq:conditional-expectation} and thus do not affect the value $U_t$ of the option. The values of $\psi(x)$ for $x > 0$ do, however, affect convergence properties of the Fourier transform \eqref{eq:ft} and inverse Fourier transform \eqref{eq:ift}. As such, it makes sense to choose values of $\psi(x)$ for $x>0$ so that these integrals converge for some value of $\om_i \in \Rb$.
\end{remark}
\subsection{Pricing options on simple forward rates}
\label{sec:options-on-forward-rates}
The \emph{simple forward rate from $T$ to $\topb$} is a process $L^{T,\topb} = (L_t^{T,\topb})_{0 \leq t \leq T}$, which is defined as follows
\begin{align}
L^{T,\topb}_{t}
& := \frac{1}{\tau }\Big(\frac{B^{T}_t}{B^{\topb}_t}-1 \Big), &
\tau
&:= \topb - T . \label{eq:L-def}
\end{align}
Let $V = (V_t)_{0 \leq t \leq T}$ denote the value of a European forward rate option with \emph{reset date} $T$ and \emph{settlement date} $\topb$ that pays $\phi(\log L^{T,\topb}_T)$ at time $\topb$ for some function $\phi : \Rb \to \Rb$. Because the payoff $\phi(\log L^{T,\topb}_T)$ to be made at time $\topb$ is known at time $T$, we have
\begin{align}
V_T
&= B_T^\topb \phi( \log L_T^{T,\topb} ) . \label{eq:V=B-phi}
\end{align}
To see this, simply note that $V_\topb = B_\topb^\topb \phi( \log L_T^{T,\topb} ) = \phi( \log L_T^{T,\topb} )$.
Using \eqref{eq:L-def} and $B_T^T = 1$, we can express $V_T$ as a function of $B_T^\topb$ as follows
\begin{align}
V_T
&= B_T^\topb \phi \Big( \log \Big[ \frac{1}{\tau} \Big(\frac{1}{B_T^\topb}-1 \Big) \Big] \Big)
= \ee^{ \log B_T^\topb } \phi \Big( \log \Big[ \frac{1}{\tau} \Big( \ee^{ - \log B_T^\topb}-1 \Big) \Big] \Big)
=: \psi( \log B_T^\topb ) . \label{eq:psi-def}
\end{align}
Thus, we can view a forward rate option on $L^{T,\topb}$ with reset date $T$, settlement date $\topb$ and payoff $\phi(\log L_T^{T,\topb})$ as a European option on $B^T$ with expiration date $T$ and payoff $\psi( \log B_T^\topb )$, where $\psi$ is defined in \eqref{eq:psi-def}.
It follows that
\begin{align}
V_t
&= u(t,Y_t;T,\topb) , &
&\text{where}&
\psi(x)
&= \ee^{ x } \phi \Big( \log \Big[ \frac{1}{\tau} \Big( \ee^{ - x}-1 \Big) \Big] \Big) , \label{eq:V-explicit}
\end{align}
with $u$ given by \eqref{eq:u-def}.
\begin{example}
An important example of a European forward rate option is a \textit{caplet}, which has a payoff
\begin{align}
\phi( \log L_T^{T,\topb} )
&= \tau ( \ee^{\log L_T^{T,\topb}} - \ee^{k} )^+ .
\end{align}
Here, $k := \log K$ is the $\log$ strike of the caplet. We have from \eqref{eq:V-explicit} that
\begin{align}
\psi( x )
&= ( 1 + \tau \ee^k) \Big( \frac{1}{1 + \tau \ee^k} - \ee^{ x } \Big)^+ ,
\end{align}
and thus, from \eqref{eq:ft}, the generalized Fourier transform of $\psi$ is given by
\begin{align}
\psih(\om)
&= \frac{ - \, (1 + \tau \ee^k )^{\ii \om} }{\om^2 + \ii\om } , &
\om_i
&> 0 . \label{eq:caplet-ft}
\end{align}
The caplet price $V_t$ can now be computed by inserting the expression \eqref{eq:caplet-ft} for $\psih$ into \eqref{eq:u-integral} and evaluating the integral numerically.
\end{example}
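As an illustration of how \eqref{eq:u-integral} and \eqref{eq:caplet-ft} could be evaluated in practice, the following sketch (in Python; not part of the model specification) computes the caplet price by quadrature along the shifted contour $\om = \om_r + \ii \om_i$ for a fixed $\om_i > 0$. The names \texttt{Gamma}, \texttt{Fr}, \texttt{Gr} and \texttt{Hr} are hypothetical placeholders for user-supplied implementations of $\Gam$, $\mathfrak{F}$, $\mathfrak{G}$ and $\mathfrak{H}$; the contour shift \texttt{om\_i} and the truncation level \texttt{om\_max} are tuning parameters.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def psi_hat(om, tau, k):
    # Generalized Fourier transform of the caplet payoff; valid for Im(om) > 0.
    return -(1.0 + tau * np.exp(k)) ** (1j * om) / (om ** 2 + 1j * om)

def caplet_price(t, y, T, Tbar, k, Gamma, Fr, Gr, Hr, om_i=1.5, om_max=200.0):
    # Price V_t = u(t, y; T, Tbar) by numerical inversion of the Fourier integral.
    # Gamma(t, y, T, nu, Omega), Fr(T, Tbar), Gr(T, Tbar), Hr(T, Tbar) are
    # user-supplied (hypothetical interface).
    tau = Tbar - T

    def integrand(om_r):
        om = om_r + 1j * om_i
        val = (psi_hat(om, tau, k)
               * np.exp(-1j * om * Fr(T, Tbar))
               * Gamma(t, y, T, -1j * om * Gr(T, Tbar), -1j * om * Hr(T, Tbar)))
        return val.real   # imaginary parts cancel over the symmetric contour

    integral, _ = quad(integrand, -om_max, om_max, limit=400)
    return integral / (2.0 * np.pi)
\end{verbatim}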
\section{Relation between QTS models and LSV models}
\label{sec:lsv-connection}
While \eqref{eq:u-def} and \eqref{eq:V-explicit} in conjunction with \eqref{eq:caplet-ft} can be used to compute caplet prices explicitly, the resulting expression tells us very little about the corresponding implied volatilities. In this section, we will establish a precise relation between QTS models and LSV models. This relation will be used in subsequent sections to find an explicit approximation for caplet prices and implied volatilities.
\\[0.5em]
We begin by deriving the dynamics of $B^T/M$. Using \eqref{eq:dM} and \eqref{eq:B-explicit}, we have by It\^o's Lemma that
\begin{align}
\dd \Big( \frac{B_t^T}{M_t} \Big)
&= \Big( \frac{B_t^T}{M_t} \Big) \gam^\top(t,Y_t;T) \dd W_t , \label{eq:BoverM}
\end{align}
where we have introduced the vector-valued function $\gam(\, \cdot \, , \, \cdot \, ;T) : [0,T] \times \Rb^{d \times 1} \to \Rb^{d \times 1}$, which is defined as follows
\begin{align}
\gam(t,Y_t;T)
&:= \Sig^\top \nabla_y \log \Gam(t,Y_t;T,0,0) \\
&= - \Sig^\top \Big( \mathfrak{G}(t;T) + \Big( \mathfrak{H}(t;T) + \mathfrak{H}^\top(t;T) \Big) Y_t \Big). \label{eq:gamma-def}
\end{align}
Now, let us denote by $\Pbt$ the \textit{$\topb$-forward probability measure}, whose relation to $\Pb$ is given by the following Radon-Nikodym derivative
\begin{align}
\frac{\dd \Pbt}{\dd \Pb}
&:= \frac{M_0 B_\topb^\topb}{B_0^\topb M_\topb}
= \exp \Big( - \frac{1}{2} \int_0^\topb \| \gam(t,Y_t;\topb) \|^2 \dd t + \int_0^\topb \gam^\top(t,Y_t;\topb) \dd W_t \Big) , \label{eq:measure-change}
\end{align}
where $\| \gam \|^2 = \gam^\top \gam$. Observe that the last equality follows from \eqref{eq:BoverM}. By Girsanov's theorem and \eqref{eq:measure-change}, the process $\Wt = (\Wt_t^{(1)}, \Wt_t^{(2)}, \ldots, \Wt_t^{(d)})_{0 \leq t \leq \topb}^\top$, defined as follows
\begin{align}
\Wt_t
&:= - \int_0^t \gam(s,Y_s;\topb) \dd s + W_t , \label{eq:W_tilde}
\end{align}
is a $d$-dimensional $(\Pbt,\Fb)$-Brownian motion. The following lemma will be useful.
\begin{lemma}
\label{lemma:forward-price}
Let $\Pi = (\Pi_t)_{0 \leq t \leq \topb}$ denote the value of a self-financing portfolio
and let $\Pi^\topb = (\Pi_t^\topb)_{0 \leq t \leq \topb}$,
defined by $\Pi_t^\topb := \Pi_t/B_t^\topb$, be the \textit{$\topb$-forward price of $\Pi$}. Then the process $\Pi^\topb$ is a $(\Pbt,\Fb)$-martingale.
\end{lemma}
\begin{proof}
Define the \textit{Radon-Nikodym derivative process} $Z = (Z_t)_{0 \leq t \leq \topb}$ by $Z_t := \Eb_t (\dd \Pbt/ \dd \Pb)$. Using the fact that $\Pi/M$ is a $(\Pb,\Fb)$-martingale as well as \cite[Lemma 5.2.2]{shreve2004stochastic} we have for any $0 \leq t \leq T \leq \topb$ that
\begin{align}
\frac{\Pi_t}{M_t}
&= \Eb_t \Big( \frac{\Pi_T}{M_T} \Big)
= Z_t \Ebt_t \Big( \frac{1}{Z_T} \frac{\Pi_T}{M_T} \Big)
= \frac{B_t^\topb}{M_t} \Ebt_t \Big( \frac{M_T}{B_T^\topb} \frac{\Pi_T}{M_T} \Big) , \label{eq:1}
\end{align}
where $\Ebt_t$ denotes the $\Fc_t$-conditional expectation under $\Pbt$. Dividing both sides of equation \eqref{eq:1} by $B_t^\topb$ and canceling common factors of $M_t$ and $M_T$, we obtain
\begin{align}
\Pi_t^\topb
&= \frac{\Pi_t}{B_t^\topb}
= \Ebt_t \frac{\Pi_T}{B_T^\topb}
= \Ebt_t \Pi_T^\topb ,
\end{align}
which establishes that $\Pi^\topb$ is a $(\Pbt,\Fb)$-martingale, as claimed.
\end{proof}
\noindent
Note from \eqref{eq:L-def} that $L^{T,\topb}$ is the $\topb$-forward price of a static portfolio consisting of $1/\tau$ shares of $B^T$ and $-1/\tau$ shares of $B^\topb$. As such, we have from Lemma \ref{lemma:forward-price} that $L^{T,\topb}$ is a $(\Pbt,\Fb)$-martingale.
\\[0.5em]
It will be helpful to write the dynamics of $L^{T,\topb}$ under the $\topb$-forward measure $\Pbt$.
Using It\^o's Lemma, \eqref{eq:L-def}, \eqref{eq:BoverM} and \eqref{eq:W_tilde}, we obtain
\begin{align}
\dd L^{T,\topb}_{t}
&= \frac{1}{\tau } \dd \Big( \frac{B^{T}_t}{B^{\topb}_t} \Big)
= \frac{1}{\tau } \dd \Big( \frac{B^{T}_t/M_t}{B^{\topb}_t/M_t} \Big) \\
&= \frac{1}{\tau } \Big( \frac{B^{T}_t}{B^{\topb}_t} \Big) \Big( \gam^\top(t,Y_t;T) - \gam^\top(t,Y_t;\topb) \Big) \dd \Wt_t \\
&= \Big( L^{T,\topb}_{t}+\frac{1}{\tau } \Big) \Big( \gam^\top(t,Y_t;T) - \gam^\top(t,Y_t;\topb) \Big) \dd \Wt_t . \label{eq:L-dynamics}
\end{align}
Now, let us denote by $X = (X_t)_{0 \leq t \leq T}$ the $\log$ of the simple forward rate from $T$ to $\topb$, that is
\begin{align}
X_t
& := \log L_t^{T,\topb} . \label{eq:X-def}
\end{align}
We are now in a position to state the main result of this section.
\begin{proposition}
\label{prop:connection}
As in Section \ref{sec:options-on-forward-rates}, let $V = (V_t)_{0 \leq t \leq T}$ denote the value of a European forward rate option with {reset date} $T$ and {settlement date} $\topb$ that pays $\phi(\log L^{T,\topb}_T) = \phi(X_T)$ at time $\topb$ for some function $\phi : \Rb \to \Rb$.
Let $V^\topb = (V_t^\topb)_{0 \leq t \leq T}$ denote the $\topb$-forward price of $V$. Then, there exists a function
$v(\, \cdot \, ,\, \cdot \, ,\, \cdot \, ;T,\topb) : [0,T] \times \Rb^{} \times \Rb^{d \times 1} \to \Rb$ such that
\begin{align}
V_t^\topb
&= v(t,X_t,Y_t;T,\topb) .
\end{align}
Moreover, the function $v$ satisfies the following PDE
\begin{align}
( \d_t + \Act(t) ) v(t,\cdot,\cdot;T,\topb)
&= 0 , &
v(T,x,y;T,\topb)
&= \phi(x) ,
\label{eq:v-pde}
\end{align}
where the operator $\Act$ is given by
\begin{align}
\Act(t)
&= \frac{1}{2} \Big( 1 + \frac{\ee^{-x}}{\tau} \Big)^2 \| \gam(t,y;T) - \gam(t,y;\topb) \|^2 (\d_x^2 - \d_x) \\ & \quad
+ \Big( \lam + \Lam y + \Sig \, \gam(t,y;\topb) \Big)^\top \nabla_y + \frac{1}{2} \textup{Tr}(\Sig \Sig^\top \nabla_y \nabla_y^\top ) \\ &\quad
+ \Big( 1 + \frac{\ee^{-x}}{\tau} \Big) \Big( \Sig \, \gam(t,y;T) - \Sig \, \gam(t,y;\topb) \Big)^\top \nabla_y \d_x . \label{eq:A-tilde}
\end{align}
\end{proposition}
\begin{proof}
We begin by writing the dynamics of $X$ and $Y$ under the $\topb$-forward probability measure $\Pbt$.
First, using It\^o's Lemma and \eqref{eq:L-dynamics}, we obtain
\begin{align}
\dd X_t
&= - \frac{1}{2} \Big( 1 + \frac{\ee^{-X_t}}{\tau} \Big)^2 \| \gam(t,Y_t;T) - \gam(t,Y_t;\topb) \|^2 \dd t \\ &\quad
+ \Big( 1 + \frac{\ee^{-X_t}}{\tau} \Big) \Big( \gam^\top(t,Y_t;T) - \gam^\top(t,Y_t;\topb) \Big) \dd \Wt_t .
\end{align}
Next, using \eqref{eq:dY} and \eqref{eq:W_tilde}, we find
\begin{align}
\dd Y_t
&= \Big( \lam + \Lam \, Y_t + \Sig \, \gam(t,Y_t;\topb) \Big) \dd t + \Sig \, \dd \Wt_t . \label{eq:dY-forward}
\end{align}
The pair $(X,Y)$ is a Markov process whose generator $\Act$ under $\Pbt$ is given by \eqref{eq:A-tilde}.
Now, using the fact that $\topb$-forward prices are $(\Pbt,\Fb)$-martingales and the fact that the process $(X,Y)$ is Markov, there exists a function $v$ such that
\begin{align}
V_t^\topb
&= \frac{V_t}{B_t^\topb}
= \Ebt_t \frac{V_T}{B_T^\topb}
= \Ebt_t \phi(X_T)
= v(t,X_t,Y_t;T,\topb) , \label{eq:v-def}
\end{align}
where, in the third equality, we have used \eqref{eq:V=B-phi}. We have from \eqref{eq:v-def} that the function $v$ satisfies the Kolmogorov backward PDE \eqref{eq:v-pde}.
\end{proof}
\noindent
A few important remarks are in order.
\begin{remark}
\label{rmk:lsv}
As $L^{T,\topb} = \ee^X$ is a positive $(\Pbt,\Fb)$-martingale, the process $(X,Y)$ can be seen as a classical LSV model, where $Y$ represents non-local factors of volatility. For example, when $d=1$ we have from \eqref{eq:A-tilde} that
\begin{align}
\Act(t)
&= c(t,x,y) (\d_x^2 - \d_x ) + f(t,x,y) \d_y + g(t,x,y) \d_y^2 + h(t,x,y) \d_x \d_y, \label{eq:A-1d}
\end{align}
where the functions $c$, $f$, $g$ and $h$ are given by
\begin{align}
c(t,x,y)
&= \tfrac{1}{2} \Sig^2 \Big( 1 + \frac{\ee^{-x}}{\tau} \Big)^2 \Big( \mathfrak{G}(t;\topb) - \mathfrak{G}(t;T) + 2 \Big( \mathfrak{H}(t;\topb) - \mathfrak{H}(t;T) \Big) y \Big)^2, \\
f(t,x,y)
& = \lam + \Lam y - \Sig^2\Big( \mathfrak{G}(t;\topb) +2 \mathfrak{H}(t;\topb) y \Big), \\
g(t,x,y)
&= \tfrac{1}{2} \Sig^2 , \\
h(t,x,y)
&= \Sig^2 \Big( 1 + \frac{\ee^{-x}}{\tau} \Big) \Big( \mathfrak{G}(t;\topb) - \mathfrak{G}(t;T) + 2 \Big( \mathfrak{H}(t;\topb) - \mathfrak{H}(t;T) \Big) y \Big) .
\end{align}
\end{remark}
\begin{remark}
\label{remark:elliptic}
The instantaneous covariance matrix of the process $(X,Y)$ is singular due to the fact that $X_t$ can be written as an explicit function of $t$ and $Y_t$. Indeed, using \eqref{eq:B-explicit}, \eqref{eq:L-def} and \eqref{eq:X-def}, we have
\begin{align}
X_t
&= \log \Big[ \frac{1}{\tau} \Big( \ee^{ \mathfrak{F}(t;\topb) - \mathfrak{F}(t;T) + \Big( \mathfrak{G}^\top(t;\topb) - \mathfrak{G}^\top(t;T) \Big) Y_t
+ Y^{\top}_t \Big( \mathfrak{H}(t;\topb) - \mathfrak{H}(t;T) \Big) Y_t} - 1 \Big) \Big]
=: \xi(t,Y_t;T,\topb) . \label{eq:xi-def}
\end{align}
As a result, the generator $\Act$ defined in \eqref{eq:A-tilde} is not uniformly elliptic. The setting here is similar to the settings in \cite{LETF} and \cite{VIX} where the authors use the approximation methods described in Sections \ref{sec:price-asymptotics} and \ref{sec:imp-vol} of this paper to find explicit approximations of implied volatility for options on leveraged exchange traded funds and the VIX, respectively. As the authors of those papers point out, the lack of a uniformly elliptic generator does \textit{not} complicate the construction of a formal implied volatility approximation.
\end{remark}
\begin{remark}
Let $\Yt^{(-j)} := (Y_t^{(1)}, \ldots, Y_t^{(j-1)},Y_t^{(j+1)}, \ldots, Y_t^{(d)})_{0 \leq t \leq T}^\top$. Had the function $\xi(t,y;T,\topb)$ defined in \eqref{eq:xi-def} been invertible with respect to $y_j$ for some $j \in \{1,2,\ldots,d\}$, we could have written $Y_t^{(j)} = \xi_j^{-1}(t,X_t,\Yt_t^{(-j)};T,\topb)$ where $\xi_j^{-1}$ is the inverse of $\xi$ with respect to $y_j$. The process $(X,\Yt^{(-j)})$ would have been a $d$-dimensional Markov process with a non-singular instantaneous covariance matrix and thus, a uniformly elliptic generator. This was the approach taken in \cite{lorig2022options}, where the authors found explicit approximations of implied volatilities for options on bonds in an \textit{affine} (as opposed to \textit{quadratic}) term-structure setting.
\end{remark}
\begin{remark}
Clearly, because $V_t^\topb = V_t/B_t^\topb$, we have from \eqref{eq:B-explicit} and \eqref{eq:V-explicit} that
\begin{align}
v(t,\xi(t,y;T;\topb),y;T,\topb)
&= \frac{u(t,y;T,\topb)}{\Gam(t,y;\topb,0,0)} . \label{eq:v-explicit}
\end{align}
However, as mentioned previously, the explicit expression \eqref{eq:v-explicit} for $v$ does not tell us anything about implied volatilities of caplets, which is the aim of this paper.
\end{remark}
\section{Option price asymptotics}
\label{sec:price-asymptotics}
Let $z := (x, y_1, \ldots, y_d)$. We have from \eqref{eq:v-pde} that the function $v$ satisfies a parabolic PDE of the form
\begin{align}
( \d_t + \Act(t) ) v(t, \,\cdot \,)
&= 0 , &
\Act(t)
&= \sum_{|\alpha| \leq 2} a_\alpha(t,z) \d_z^\alpha , &
v(T, \, \cdot \,)
&= \phi , \label{eq:pde-form}
\end{align}
where, for brevity, we have omitted the dependence on $T$ and $\topb$. Note that we have introduced standard multi-index notation
\begin{align}
\alpha
&= (\alpha_1, \alpha_2, \dots, \alpha_{d+1}) , &
\d_z^\alpha
&= \prod_{i=1}^{d+1} \d_{z_i}^{\alpha_i} , &
z^\alpha
&= \prod_{i=1}^{d+1} {z_i}^{\alpha_i} , &
| \alpha |
&= \sum_{i=1}^{d+1} \alpha_i , &
\alpha!
&= \prod_{i=1}^{d+1} \alpha_i! .
\end{align}
In general, there is no explicit solution to PDEs of the form \eqref{eq:pde-form}. In this section, we will show in a formal manner how an explicit approximation of $v$ can be obtained by using a simple Taylor series expansion of the coefficients $(a_\alpha)_{|\alpha|\leq 2}$ of $\Act$. The method described below was introduced for scalar diffusions in \cite{pagliarani2011analytical} and subsequently extended to $d$-dimensional diffusions in \cite{lorig-pagliarani-pascucci-2,lorig-pagliarani-pascucci-4}.
\\[0.5em]
To begin, for any $\eps \in [0,1]$ and $\zb: [0,T] \to \Rb^{d+1}$, let $v^\eps$ be the unique classical solution to
\begin{align}
0
&= ( \d_t + \Act^\eps(t) ) v^\eps(t, \,\cdot \,) , &
v^\eps(T, \, \cdot \,)
&= \phi , \label{eq:v-eps-pde}
\end{align}
where the operator $\Act^\eps$ is defined as follows
\begin{align}
\Act^\eps(t)
&:= \sum_{|\alpha| \leq 2} a_\alpha^\eps(t,z) \d_z^\alpha , &
&\text{with}&
a_\alpha^\eps
&:= a_\alpha(t,\zb(t) + \eps(z - \zb(t))) . \label{eq:A-eps}
\end{align}
Observe that $\Act^\eps |_{\eps = 1} = \Act$ and thus $v^\eps |_{\eps=1} = v$. We will seek an approximate solution of \eqref{eq:v-eps-pde} by expanding $v^\eps$ and $\Act^\eps$ in powers of $\eps$. Our approximation for $v$ will be obtained by setting $\eps = 1$ in our approximation for $v^\eps$. We have
\begin{align}
v^\eps
&= \sum_{n=0}^\infty \eps^n v_n , &
\Act^\eps(t)
&= \sum_{n=0}^\infty \eps^n \Act_n(t) , \label{eq:expansion}
\end{align}
where the functions $(v_n)_{n \geq 0}$ are, at the moment, unknown, and the operators $(\Act_n)_{n \geq 0}$ are given by
\begin{align}
\Act_n(t)
&= \frac{\dd^n }{\dd \eps^n} \Act^\eps |_{\eps=0}
= \sum_{|\alpha| \leq 2} a_{\alpha,n}(t,z) \d_z^\alpha , &
a_{\alpha,n}
&= \sum_{|\eta|=n} \frac{1}{\eta!} (z - \zb(t))^\eta \d_z^\eta a_\alpha(t,\zb(t)) . \label{eq:an-taylor}
\end{align}
Note that $a_{\alpha,n}(t, \, \cdot \,)$ is the sum of the $n$th order terms in the Taylor series expansion of $a_\alpha(t,\,\cdot\,)$ about the point $\zb(t)$. Inserting the expansions from \eqref{eq:expansion} for $v^\eps$ and $\Act^\eps$ into PDE \eqref{eq:v-eps-pde} and collecting terms of like order in $\eps$ we obtain
\begin{align}
&\Oc(\eps^0):&
0
&= ( \d_t + \Act_0(t) ) v_0(t, \,\cdot \,) , &
v_0(T, \, \cdot \,)
&= \phi , \label{eq:v0-pde} \\
&\Oc(\eps^n):&
0
&= ( \d_t + \Act_0(t) ) v_n(t, \,\cdot \,) + \sum_{k=1}^n \Act_k(t) v_{n-k}(t,\,\cdot\,) , &
v_n(T, \, \cdot \,)
&= 0 . \label{eq:vn-pde}
\end{align}
Now, observe that the coefficients $(a_{\alpha,0})_{|\alpha|\leq 2}$ of $\Act_0$ do not depend on $z$. Thus, $\Act_0$ is the generator of a $(d+1)$-dimensional Brownian motion with a time-dependent drift vector and covariance matrix. As such, $v_0$ is given by
\begin{align}
v_0(t,z)
&= \Pc_0(t,T)\phi(z)
= \int_{\Rb^{d+1}} \dd z' \, p_0(t,z;T,z') \phi(z') , \label{eq:v0-explicit}
\end{align}
where $\Pc_0$ is the semigroup generated by $\Act_0$ and $p_0$ is the associated transition density (i.e., the solution to \eqref{eq:v0-pde} with $\phi = \del_{\zeta}$). Explicitly, we have
\begin{align}
p_0(t,z;T,\zeta)
&= \frac{1}{\sqrt{(2\pi)^{d+1}|\Cv(t,T)|}}
{\exp\left(-\tfrac{1}{2} (\zeta-z-\mv(t,T))^\top \Cv^{-1}(t,T) (\zeta-z-\mv(t,T)) \right)} , \label{eq:p0}
\end{align}
where $\mv$ and $\Cv$ are given by
\begin{align}
\mv(t,T)
&:= \int_t^T \dd s \, m(s) , &
\Cv(t,T)
&:= \int_t^T \dd s \, A(s) , \label{eq:m-and-C}
\end{align}
and $m$ and $A$ are, respectively, the instantaneous drift vector and covariance matrix
\begin{align}
m(s)
&:= \begin{pmatrix}
a_{(1,0,\cdots,0),0}(s) \\ a_{(0,1,\cdots,0),0}(s) \\ \vdots \\ a_{(0,0,\cdots,1),0}(s)
\end{pmatrix} , &
A(s)
&:= \begin{pmatrix}
2a_{(2,0,\cdots,0),0}(s) & a_{(1,1,\cdots,0),0}(s) & \ldots & {a_{(1,0,\cdots,1),0}(s)} \\
a_{(1,1,\cdots,0),0}(s) & 2a_{(0,2,\cdots,0),0}(s) & \ldots & a_{(0,1,\cdots,1),0}(s) \\
\vdots & \vdots & \ddots & \vdots \\
a_{(1,0,\cdots,1),0}(s) & a_{(0,1,\cdots,1),0}(s) & \ldots & 2 a_{(0,0,\cdots,2),0}(s) \\
\end{pmatrix} .
\end{align}
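For a numerical implementation, the mean vector $\mv(t,T)$, the covariance matrix $\Cv(t,T)$ and the Gaussian kernel $p_0$ can be assembled by quadrature. The sketch below (Python; hypothetical interface) assumes user-supplied callables \texttt{m\_inst} and \texttt{A\_inst} returning the instantaneous drift vector $m(s)$ and covariance matrix $A(s)$ built from the frozen coefficients $a_{\alpha,0}(s)$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.stats import multivariate_normal

def mean_and_cov(t, T, m_inst, A_inst, dim):
    # Time integrals m(t,T) and C(t,T) of eq. (m-and-C), computed entrywise.
    mv = np.array([quad(lambda s, i=i: m_inst(s)[i], t, T)[0] for i in range(dim)])
    Cv = np.array([[quad(lambda s, i=i, j=j: A_inst(s)[i, j], t, T)[0]
                    for j in range(dim)] for i in range(dim)])
    return mv, Cv

def p0(t, z, T, zeta, m_inst, A_inst):
    # Gaussian transition kernel p_0 of eq. (p0).
    z, zeta = np.asarray(z, float), np.asarray(zeta, float)
    mv, Cv = mean_and_cov(t, T, m_inst, A_inst, len(z))
    return multivariate_normal(mean=z + mv, cov=Cv).pdf(zeta)
\end{verbatim}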
By Duhamel's principle, the solution $v_n$ of \eqref{eq:vn-pde} is
\begin{align}
v_n(t,z)
&= \sum_{k=1}^n \int_t^T \dd t_1 \, \Pc_0(t,t_1) \Act_k(t_1) v_{n-k}(t_1,z) \\
&= \sum_{k=1}^n \sum_{i \in I_{n,k}}
\int_{t}^T \dd t_1 \int_{t_1}^T \dd t_2 \cdots \int_{t_{k-1}}^T \dd t_k \\ & \qquad
\Pc_0(t,t_1) \Ac_{i_1}(t_1)
\Pc_0(t_1,t_2) \Ac_{i_2}(t_2) \cdots
\Pc_0(t_{k-1},t_k) \Ac_{i_k}(t_k)
\Pc_0(t_k,T)\phi(z) , \label{eq:vn-explicit} \\
I_{n,k}
&= \{ i = (i_1, i_2, \cdots , i_k ) \in \mathds{N}^k : i_1 + i_2 + \cdots + i_k = n \} .
\label{eq:Ink}
\end{align}
While the expression \eqref{eq:vn-explicit} for $v_n$ is explicit, it is not easy to compute as written because operating on a function with $\Pc_0$ requires performing a $(d+1)$-dimensional integral. The following proposition establishes that $v_n$ can be expressed as a differential operator acting on $v_0$.
\begin{proposition}
\label{thm:vn}
The solution $v_n$ of PDE \eqref{eq:vn-pde} is given by
\begin{align}
v_n(t,z)
&= \Lc_n(t,T) v_0(t,z) , \label{eq:un}
\end{align}
where $\Lc$ is a linear differential operator, which is given by
\begin{align}
\Lc_n(t,T)
&= \sum_{k=1}^n \sum_{i \in I_{n,k}}
\int_{t}^T \dd t_1 \int_{t_1}^T \dd t_2 \cdots \int_{t_{k-1}}^T \dd t_k
\Gc_{i_1}(t,t_1)
\Gc_{i_2}(t,t_2) \cdots
\Gc_{i_k}(t,t_k) , \label{eq:Ln}
\end{align}
the index set $I_{n,k}$ is as defined in \eqref{eq:Ink} and the operator $\Gc_i$ is given by
\begin{align}
\Gc_i(t,t_k)
&:= \sum_{|\alpha |\leq 2} a_{\alpha,i}(t_k,\Zc(t,t_k)) \d_z^\alpha , &
\Zc(t,t_k)
&:= z + \mv(t,t_k) + \Cv(t,t_k) \nabla_z . \label{eq:Gc.def}
\end{align}
\end{proposition}
\begin{proof}
The proof, which is given in \cite[Theorem 2.6]{lorig-pagliarani-pascucci-2}, relies on the fact that, for any $0 \leq t \leq t_k < \infty$ the operator $\Gc_i$ in \eqref{eq:Gc.def} satisfies
\begin{align}
\Pc_0(t,t_k) \Ac_{i}(t_k)
&= \Gc_i(t,t_k) \Pc_0(t,t_k) . \label{eq:PA=GP}
\end{align}
Using \eqref{eq:PA=GP}, as well as the semigroup property ${\Pc_0}(t_1,t_2) {\Pc_0}(t_2,t_3) = {\Pc_0}(t_1,{t_3})$, we have that
\begin{align}
&\Pc_0(t,t_1) \Ac_{i_1}(t_1) \Pc_0(t_1,t_2) \Ac_{i_2}(t_2) \cdots \Pc_0(t_{k-1},t_k) \Ac_{i_k}(t_k) \Pc_0(t_k,T) \phi(z) \\
&= \Gc_{i_1}(t,t_1) \Gc_{i_2}(t,t_2) \cdots \Gc_{i_k}(t,t_k)
\Pc_0(t,t_1) \Pc_0(t_1,t_2) \cdots \Pc_0(t_{k-1},t_k) \Pc_0(t_k,T) \phi \\
&= \Gc_{i_1}(t,t_1) \Gc_{i_2}(t,t_2) \cdots \Gc_{i_k}(t,t_k) \Pc_0(t,T) \phi \\
&= \Gc_{i_1}(t,t_1) \Gc_{i_2}(t,t_2) \cdots \Gc_{i_k}(t,t_k) v_0(t,\,\cdot\,) , \label{eq:result}
\end{align}
where, in the last equality we have used $\Pc_0(t,T) \phi = v_0(t,\,\cdot\,)$. Inserting \eqref{eq:result} into \eqref{eq:vn-explicit} yields \eqref{eq:un}.
\end{proof}
\noindent
Having obtained expressions for the functions $(v_n)_{n \geq 0}$ as differential operators $(\Lc_n)_{n \geq 0}$ acting on $v_0$, we define $\bar{v}$, the \textit{$n$th order approximation of $v$}, as follows
\begin{align}
\bar{v}_n
&:= \sum_{k=0}^n {v_k} . \label{eq:def-vbar}
\end{align}
Note that $\bar{v}_n$ depends on the choice of $\zb$. In general, if one is interested in the value of $v(t,z)$, a good choice for $\zb$ is $\zb(t) = z$.
\section{Implied volatility asymptotics}
\label{sec:imp-vol}
In this section, we show how to translate the price approximation developed in Section \ref{sec:price-asymptotics} into an approximation of Black implied volatilities associated with caplets. The derivation below closely follows the derivation in \cite{lorig-pagliarani-pascucci-2}, where the authors develop an approximation for Black-Scholes implied volatilities associated with call options on equity.
\\[0.5em]
Throughout this section, we fix a QTS model \eqref{eq:dY}-\eqref{eq:r-def}, an initial date $t$, a reset date $T > t$, a settlement date $\topb > T$, the initial values $(X_t = \log L_t^{T,\topb}, Y_t) = (x, y)$ and a caplet payoff $\phi(X_T) = \tau (\ee^{X_T} - \ee^k)^+$. Our goal is to find an approximation of implied volatility for \textit{this particular caplet}. To ease notation, we will sometimes hide the dependence on $(t, x, y; T, \topb, k)$. However, the reader should keep in mind that the implied volatility of the caplet under consideration does depend on $(t, x, y; T, \topb, k)$, even if this is not explicitly indicated. Below, we remind the reader of the \textit{Black model} and provide definitions of the \textit{Black price} and \textit{Black implied volatility}, which will be used throughout this section.
\\[0.5em]
In the \textit{Black model}, the dynamics of the simple forward rate $L^{T,\topb} = \ee^X$ are given by
\begin{align}
\dd L^{T,\topb}_t
&= \sig L^{T,\topb}_t \dd \Wt_t , &
&\text{and thus}&
\dd X_t
&= -\tfrac{1}{2}\sig^2 \dd t + \sig \dd \Wt_t , \label{eq:black-model}
\end{align}
where $\sig > 0$ is the \textit{Black volatility} and $\Wt$ is a scalar $(\Pbt,\Fb)$-Brownian motion. Equation \eqref{eq:black-model} leads to the following definitions.
\begin{definition}
The $\topb$-forward \textit{Black price} of a caplet, denoted $v^\text{B}$, is defined as follows
\begin{align}
v^\text{B}(t,x;T,\topb,k,\sig)
&:= \tau \, \Ebt[ ( \ee^{X_T} - \ee^k )^+ | X_t = x ]
= \tau \, \Big( \ee^x \Phi(d_+) - \ee^{k } \Phi(d_-) \Big) , \label{eq:black-price}
\end{align}
where the dynamics of $X$ are given by \eqref{eq:black-model} and
\begin{align}
d_\pm
&:= \frac{1}{\sig \sqrt{T-t}} \left(x-k \pm \frac{\sig^2 (T-t)}{2} \right) , &
\Phi(d)
&:= \int_{-\infty}^d \dd x \, \frac{1}{\sqrt{2\pi}} \ee^{-x^2/2}.
\end{align}
\end{definition}
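The Black price \eqref{eq:black-price} is straightforward to implement; a minimal sketch (in Python, using the standard normal distribution function) is given below.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def black_caplet_forward_price(t, x, T, tau, k, sig):
    # Tbar-forward Black caplet price of eq. (black-price); x = log L, k = log K.
    s = sig * np.sqrt(T - t)
    d_plus = (x - k) / s + 0.5 * s
    d_minus = d_plus - s
    return tau * (np.exp(x) * norm.cdf(d_plus) - np.exp(k) * norm.cdf(d_minus))
\end{verbatim}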
\begin{definition}
The \textit{Black implied volatility} corresponding to the $\topb$-forward price $v$ of a caplet is the unique positive solution $\sig$ of the equation
\begin{align}
v^{\text{B}}(t,x;T,\topb,k,\sig)
&= v , \label{eq:iv-def}
\end{align}
where the Black price $v^\text{B}$ is given by \eqref{eq:black-price}.
\end{definition}
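Numerically, \eqref{eq:iv-def} can be solved by a bracketing root search, since $v^\text{B}$ is strictly increasing in $\sig$. The sketch below (Python) reuses the Black-price function from the previous sketch; the bracket $[10^{-8},5]$ is an assumption and may need to be widened for extreme inputs.
\begin{verbatim}
from scipy.optimize import brentq

def black_implied_vol(v, t, x, T, tau, k, lo=1e-8, hi=5.0):
    # Solve v_B(sig) = v for sig (eq. iv-def) by Brent's method on [lo, hi].
    f = lambda sig: black_caplet_forward_price(t, x, T, tau, k, sig) - v
    return brentq(f, lo, hi)
\end{verbatim}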
\noindent
Now, suppose that $v \equiv v(t,x,y;T,\topb,k)$ is the $\topb$-forward price of a caplet corresponding to a QTS model, where we have now indicated the dependence on the $\log$ strike $k$ explicitly.
As in Section \ref{sec:price-asymptotics}, we will seek an approximation of the implied volatility $\sig^\eps$ corresponding to $v^\eps$ by expanding $\sig^\eps$ in powers of $\eps$. Our approximation of $\sig$ will then be obtained by setting $\eps = 1$. We have
\begin{align}
\sig^\eps
&= \sig_0 + \del \sig^\eps , &
\del \sig^\eps
&= \sum_{n=1}^\infty \eps^n \sig_n , \label{eq:I-expand}
\end{align}
where $(\sig_n)_{n \geq 0}$ are, at the moment, unknown.
Expanding the Black price $v^\text{B}(\sig^\eps)$ in powers of $\eps$ we obtain
\begin{align}
v^\text{B}(\sig^\eps)
&= v^\text{B}(\sig_0 + \del \sig^\eps) \\
&= \sum_{k=0}^\infty \frac{1}{k!}(\del \sig^\eps \d_\sig )^k v^\text{B}(\sig_0) \\
&= v^\text{B}(\sig_0) +
\sum_{k=1}^\infty \frac{1}{k!}
\sum_{n=1}^\infty \eps^n \sum_{I_{n,k}} \Big( \prod_{j=1}^k \sig_{i_j} \Big) \d_\sig^k v^\text{B}(\sig_0) \\
&= v^\text{B}(\sig_0) +
\sum_{n=1}^\infty \eps^n \sum_{k=1}^\infty \frac{1}{k!}
\sum_{ I_{n,k}} \Big( \prod_{j=1}^k \sig_{i_j} \Big) \d_\sig^k v^\text{B}(\sig_0) \\
&= v^\text{B}(\sig_0) +
\sum_{n=1}^\infty \eps^n \bigg( \sig_n \d_\sig + \sum_{k=2}^\infty \frac{1}{k!}
\sum_{ I_{n,k}} \Big( \prod_{j=1}^k \sig_{i_j} \Big) \d_\sig^k \bigg) v^\text{B}(\sig_0) ,
\end{align}
where $I_{n,k}$ is given by \eqref{eq:Ink}. Inserting the expansions for $v^\eps$ and $v^\text{B}(\sig^\eps)$ into the equation $v^\eps = v^\text{B}(\sig^\eps)$ and collecting terms of like order in $\eps$ we obtain
\begin{align}
&\Oc(\eps^0)&
v_0
&= v^\text{B}(\sig_0) , \label{eq:v0=expression} \\
&\Oc(\eps^n)&
v_n
&= \bigg( \sig_n \d_\sig + \sum_{k=2}^\infty \frac{1}{k!} \sum_{ I_{n,k}} \Big( \prod_{j=1}^k \sig_{i_j} \Big) \d_\sig^k \bigg) v^\text{B}(\sig_0) . \label{eq:vn=expression}
\end{align}
Now, from \eqref{eq:v0-explicit} and \eqref{eq:black-price} we have
\begin{align}
v_0
&= v^\text{B} \left( \sqrt{ \Cv_{1,1}(t,T)/(T-t) } \right) ,
\end{align}
where $\Cv$ is defined in \eqref{eq:m-and-C}. Thus, it follows from \eqref{eq:v0=expression} that
\begin{align}
\sig_0
&= \sqrt{ \Cv_{1,1}(t,T)/(T-t) } .\label{eq:sig-0}
\end{align}
Having identified $\sig_0$, we can use \eqref{eq:vn=expression} to obtain $\sig_n$ recursively for every $n \geq 1$. We have
\begin{align}
\sig_n
&= \frac{1}{\d_\sig v^\text{B}(\sig_0)} \bigg( v_n - \sum_{k=2}^\infty \frac{1}{k!} \sum_{ I_{n,k}} \Big( \prod_{j=1}^k \sig_{i_j} \Big) \d_\sig^k v^\text{B}(\sig_0) \bigg) . \label{eq:sig-n}
\end{align}
Using the expression given in \eqref{eq:un} for $v_n$, one can show that $\sig_n$ is an $n$th order polynomial in $\log$-moneyness $k-x$ with coefficients that depend on $(t,T)$; see \cite[Section 3]{lorig-pagliarani-pascucci-2} for details. We provide explicit expressions for $\sig_0$, $\sig_1$, and $\sig_2$ for $d = 1$ in Appendix \ref{sec:explicit-expressions}.
\\[0.5em]
We now define our $n$\textit{th order approximation of implied volatility} as
\begin{align}
\bar{\sig}_n
&:= \sum_{j=0}^n \sig_j , \label{eq:sigma-bar}
\end{align}
where $\sig_j$ is given by \eqref{eq:sig-n}.
If we set $\zb(t) = (x,y)$ in the price approximation, then the corresponding implied volatility approximation \eqref{eq:sigma-bar} satisfies the following asymptotic accuracy result
\begin{align}
|\sig(t,x,y;T,\topb,k)-\bar{\sig}_n(t,x,y;T,\topb,k)|
&= \Oc((T-t)^{(n+1)/2}), &
&\text{as}&
(T,k) \to (t,x), \label{eq:accuracy}
\end{align}
within the parabolic region $\{(T,k) : |k-x| \leq \ell \sqrt{T-t} \}$ for some $\ell > 0$. The proof of \eqref{eq:accuracy} is a direct consequence of \cite[Theorem 3.10]{VIX}.
\section{Numerical example: Quadratic Ornstein-Uhlenbeck model}
\label{sec:examples}
Throughout this section, we consider a QTS model, whose dynamics are as follows
\begin{align}
\dd Y_t
&= \kappa ( \theta - Y_t) \dd t + \del \dd W_t , &
R_t = r(Y_t)
&= q + Y_t^2, \label{eq:quadratic-vasicek-def}
\end{align}
where the constants {$\kappa,\del$ are positive and $\theta,q$ are nonnegative}.
Noting that $Y$ is an Ornstein-Uhlenbeck process,
we refer to the model \eqref{eq:quadratic-vasicek-def} as the \textit{Quadratic Ornstein-Uhlenbeck} (QOU) model.
\begin{remark}
If we consider the special case $\theta = q = 0$,
then we have by It\^o's lemma that
\begin{align}
\dd R_t
& = 2\kappa \Big( \frac{\del^2}{2\kappa}- R_t \Big) \dd t + 2\del \sqrt{R_t} \dd W_t . \label{eq:cir}
\end{align}
Note that \eqref{eq:cir} is a \textit{Cox-Ingersoll-Ross} (CIR) process with mean $\frac{\del^2}{2\kappa}$, rate of mean-reversion $2 \kappa$ and volatility $2 \del$. Thus, the QOU model contains, as special cases, some (but not all) CIR short-rate models.
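For completeness, here is the short It\^o computation behind \eqref{eq:cir}: with $\theta = q = 0$ we have $R_t = Y_t^2$ and $\dd Y_t = -\kappa Y_t \dd t + \del \dd W_t$, so that
\begin{align}
\dd R_t
&= 2 Y_t \dd Y_t + \dd \langle Y \rangle_t
= \big( \del^2 - 2 \kappa Y_t^2 \big) \dd t + 2 \del Y_t \dd W_t
= 2\kappa \Big( \frac{\del^2}{2\kappa} - R_t \Big) \dd t + 2 \del Y_t \dd W_t .
\end{align}
Since $|Y_t| = \sqrt{R_t}$, the martingale term can be written as $2 \del \sqrt{R_t} \, \dd B_t$ with $\dd B_t := \mathrm{sgn}(Y_t) \dd W_t$, which is again a Brownian motion by L\'evy's characterization, and \eqref{eq:cir} follows.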
\end{remark}
\noindent
Comparing \eqref{eq:quadratic-vasicek-def} with \eqref{eq:dY} and \eqref{eq:r-def} we obtain
\begin{align}
\lam & = \kappa \theta, & \Lam & = -\kappa, & \Sig & = \delta, & \Xi &= 1.
\end{align}
Next, we can obtain from \eqref{eq:Fode}, \eqref{eq:Gode}, and \eqref{eq:Hode} that $(F,G,H)$ satisfies the following system of ODEs
\begin{align}
\left.
\begin{aligned}
\d_t F(t;T,\nu,\Omega) & = \tfrac{1}{2}\del^2 G^{2}(t;T,\nu,\Omega)-\del^2H(t;T,\nu,\Omega)-\kappa \theta G(t;T,\nu,\Omega)-q,
&
F(T;T,\nu,\Omega) & = 0 ,
\\
\d_t G(t;T,\nu,\Omega) & = \big(2\del^2 H(t;T,\nu,\Omega)+ \kappa\big) G(t;T,\nu,\Omega)-2\kappa\theta H(t;T,\nu,\Omega) ,
&
G(T;T,\nu,\Omega) & = -\nu ,
\\
\d_t H(t;T,\nu,\Omega) & = 2\del^2 H^2(t;T,\nu,\Omega)+2\kappa H(t;T,\nu,\Omega) - 1 ,
&
H(T;T,\nu,\Omega) & = -\Omega .
\end{aligned}
\right \}
\label{eq:qts-vasicek-ode}
\end{align}
Solving \eqref{eq:qts-vasicek-ode}, we obtain
\begin{align}
F(t;T,\nu,\Omega) & = \int_{T}^{t} \dd s \, \Big( \tfrac{1}{2}\del^2 G^2(s;T,\nu,\Omega) -\del^2 H(s;T,\nu,\Omega) -\kappa\theta G(s;T,\nu,\Omega) -q \Big), \label{eq:F-cir-qts}
\end{align}
\begin{align}
G(t;T,\nu,\Omega) & = -\frac{Q_1(T-t) \nu + Q_2(T-t) \Omega + Q_3(T-t)}{Q_4(T-t)\Omega+Q_5(T-t)},
&
H(t;T,\nu,\Omega) & = -\frac{Q_6(T-t)\Omega+Q_7(T-t)}{Q_4(T-t)\Omega+Q_5(T-t)}, \label{eq:GH-cir-qts}
\end{align}
where the functions $Q_i(t)$ for $i\in \{1,2,\ldots,7\}$ are given by
\begin{align}
Q_1(t) & := 2 \mu \ee^{\tfrac{1}{2}\mu t},
&
Q_2(t) &:= \frac{8\del^2}{\mu } \left(\ee^{\tfrac{1}{2}\mu t}-1\right)^2 \left(\frac{- \kappa^2\theta }{\del^2 }\right)-\frac{\kappa\theta Q_4(t)}{\del^2 },
\\
Q_3(t) & := -\frac{\kappa\theta }{\del^2}\left(\frac{\kappa}{\mu } Q_7\left(\frac{t}{2}\right)Q_5\left(\frac{t}{2}\right)-Q_1(t)+Q_5(t)\right),
&
Q_4(t)& := 4\del^2 (1-\ee^{\mu t}),
\\
Q_5(t) & := \mu (\ee^{\mu t}+1)+2\kappa (\ee^{\mu t}-1),
&
Q_6(t) & := \mu (\ee^{\mu t}+1)-2\kappa (\ee^{\mu t}-1),
\\
Q_7(t) & := 2 (1-\ee^{\mu t}),
&
\mu & := 2 \sqrt{\kappa^2 + 2\del^2 }.
\end{align}
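As a sanity check on the closed forms \eqref{eq:F-cir-qts}--\eqref{eq:GH-cir-qts}, the Riccati system \eqref{eq:qts-vasicek-ode} can also be integrated numerically. The sketch below (Python; restricted to real terminal data $(\nu,\Omega)$ for simplicity) integrates backwards from the terminal condition at $T$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def solve_FGH(t, T, nu, Omega, kappa, theta, delta, q):
    # Integrate (F, G, H) of eq. (qts-vasicek-ode) backwards from time T to t.
    def rhs(s, u):
        F, G, H = u
        dF = 0.5 * delta**2 * G**2 - delta**2 * H - kappa * theta * G - q
        dG = (2.0 * delta**2 * H + kappa) * G - 2.0 * kappa * theta * H
        dH = 2.0 * delta**2 * H**2 + 2.0 * kappa * H - 1.0
        return [dF, dG, dH]

    sol = solve_ivp(rhs, (T, t), [0.0, -nu, -Omega], rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]   # (F, G, H) evaluated at time t
\end{verbatim}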
\noindent
Next, from \eqref{eq:A-1d} we have the form of the generator
\begin{align}
\Act(t)
&= c(t,x,y) (\d_x^2 - \d_x ) + f(t,x,y) \d_y + g(t,x,y) \d_y^2 + h(t,x,y) \d_x \d_y,
\end{align}
where the functions $c$, $f$, $g$ and $h$ are given by
\begin{align}
c(t,x,y)& = \tfrac{1}{2} \del^2 \Big( 1 + \frac{\ee^{-x}}{\tau} \Big)^2 \Big( \mathfrak{G}(t;\topb) - \mathfrak{G}(t;T) + 2 \Big( \mathfrak{H}(t;\topb) - \mathfrak{H}(t;T) \Big) y \Big)^2,
\\
f(t,x,y)
& = \kappa\theta-\kappa y - \del^2\Big( \mathfrak{G}(t;\topb) +2 \mathfrak{H}(t;\topb) y \Big), \\
g(t,x,y)
&= \tfrac{1}{2}\del^2 , \\
h(t,x,y)
&= \del^2 \Big( 1 + \frac{\ee^{-x}}{\tau} \Big) \Big( \mathfrak{G}(t;\topb) - \mathfrak{G}(t;T) + 2 \Big( \mathfrak{H}(t;\topb) - \mathfrak{H}(t;T) \Big) y \Big).
\end{align}
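For numerical work, the coefficients above can be coded directly. The sketch below (Python; hypothetical interface) assumes user-supplied callables \texttt{Gfrak} and \texttt{Hfrak} returning $\mathfrak{G}(t;\cdot)$ and $\mathfrak{H}(t;\cdot)$.
\begin{verbatim}
import numpy as np

def make_coefficients(kappa, theta, delta, tau, T, Tbar, Gfrak, Hfrak):
    # Returns the QOU coefficients c, f, g, h of the generator in eq. (A-1d).
    def dGH(t):
        # y-affine factor G(t;Tbar) - G(t;T) + 2 (H(t;Tbar) - H(t;T)) y
        return (Gfrak(t, Tbar) - Gfrak(t, T),
                2.0 * (Hfrak(t, Tbar) - Hfrak(t, T)))

    def c(t, x, y):
        a, b = dGH(t)
        return 0.5 * delta**2 * (1.0 + np.exp(-x) / tau)**2 * (a + b * y)**2

    def f(t, x, y):
        return (kappa * theta - kappa * y
                - delta**2 * (Gfrak(t, Tbar) + 2.0 * Hfrak(t, Tbar) * y))

    def g(t, x, y):
        return 0.5 * delta**2

    def h(t, x, y):
        a, b = dGH(t)
        return delta**2 * (1.0 + np.exp(-x) / tau) * (a + b * y)

    return c, f, g, h
\end{verbatim}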
Introducing the notation
\begin{align} \chi_{i,j}(t,x,y) & := \frac{1}{i! j!}\d_x^i \d_{y}^j \chi(t,x,y) &
&\text{where}& \chi & \in \{c,f,g,h\},
\end{align}
the explicit implied volatility approximation $\bar{\sig}_n$ can now be computed up to order $n=2$ using the formulas in Appendix \ref{sec:explicit-expressions}. We have
\begin{align}
\sig_0 & = \sqrt{\frac{2}{T-t}\int_{t}^T \dd s \, c_{0,0}(s,x,y_2)},
\\
\sig_1 & = \frac{(k-x)}{(T-t)^2\sig^3_0}\Big(2\int_{t}^T\dd s \, c_{1,0}(s,x,y_2)\int_{t}^s \dd q \, c_{0,0}(q,x,y_2)+ \int_{t}^T \dd s \, c_{0,1}(s,x,y_2)\int_{t}^s \dd q \, h_{0,0}(q,x,y_2)\Big)
\\
&\quad + \frac{1}{2(T-t)\sig_0}\int_{t}^T \dd s \, c_{0,1}(s,x,y_2)\Big(2\int_{t}^s \dd q \, f_{0,0}(q,x,y_2)+ \int_{t}^s \dd q \, h_{0,0}(q,x,y_2)\Big) ,
\end{align}
where we have omitted the 2nd order term $\sig_2$ due to its considerable length.
\\[0.5em]
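A sketch of how $\sig_0$ and $\sig_1$ above could be evaluated numerically is given below (Python; nested quadrature, coefficients evaluated at the initial point $(x,y)$, partial derivatives approximated by central differences; the callables \texttt{c}, \texttt{f}, \texttt{h} are assumed to have the signature of the previous sketch).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def deriv(chi, i, j, t, x, y, eps=1e-4):
    # chi_{i,j} = (1/(i! j!)) d_x^i d_y^j chi, for i, j in {0, 1},
    # approximated by central differences.
    if i == 1:
        return (deriv(chi, 0, j, t, x + eps, y)
                - deriv(chi, 0, j, t, x - eps, y)) / (2 * eps)
    if j == 1:
        return (chi(t, x, y + eps) - chi(t, x, y - eps)) / (2 * eps)
    return chi(t, x, y)

def sigma01(t, T, x, y, k, c, f, h):
    c00 = lambda s: deriv(c, 0, 0, s, x, y)
    c10 = lambda s: deriv(c, 1, 0, s, x, y)
    c01 = lambda s: deriv(c, 0, 1, s, x, y)
    f00 = lambda s: deriv(f, 0, 0, s, x, y)
    h00 = lambda s: deriv(h, 0, 0, s, x, y)
    I = lambda fun, a, b: quad(fun, a, b)[0]

    sig0 = np.sqrt(2.0 / (T - t) * I(c00, t, T))
    sig1 = ((k - x) / ((T - t)**2 * sig0**3)
            * (2.0 * I(lambda s: c10(s) * I(c00, t, s), t, T)
               + I(lambda s: c01(s) * I(h00, t, s), t, T))
            + 1.0 / (2.0 * (T - t) * sig0)
            * I(lambda s: c01(s) * (2.0 * I(f00, t, s) + I(h00, t, s)), t, T))
    return sig0, sig1
\end{verbatim}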
{In Figures \ref{fig:qts-vasicek-iv} and \ref{fig:qts-cir-iv}}, using different parameters for $(\kappa,\theta,\del,q,y)$, we plot our explicit approximation of implied volatility $\bar{\sig}_n$ up to order $n=2$ as a function of $\log$-moneyness $k-x$ with $t=0$ and $\topb = 2$ fixed and with reset date ranging over $T = \{\frac{1}{64},\frac{1}{32},\frac{1}{16},\frac{1}{8}\}$.
{For comparison, we also plot the ``exact'' implied volatility $\sig$, which can be computed by obtaining $\topb$-forward caplet prices from \eqref{eq:v-explicit} and inverting the Black formula \eqref{eq:black-price} numerically. }
In both figures, we observe that the second order approximation $\bar{\sig}_2$ accurately matches the level, slope, and convexity of the exact implied volatility $\sig$ near-the-money for all four reset dates.
\\[0.5em]
{In Figures \ref{fig:qts-vasicek-err} and \ref{fig:qts-cir-err}}, using the same values for $(\kappa,\theta,\del,q,y)$ as in Figures \ref{fig:qts-vasicek-iv} and \ref{fig:qts-cir-iv}, respectively, we plot the absolute value of the relative error of our second order approximation $|\bar{\sig}_2-\sig|/\sig$ as a function of $\log$-moneyness $k-x$ and reset date $T$.
Consistent with the asymptotic accuracy result \eqref{eq:accuracy}, we observe that the errors decrease as we approach the origin in both the $k-x$ and $T$ directions.
\appendix
\section{Explicit expressions for \texorpdfstring{$\sig_0$}{}, \texorpdfstring{$\sig_1$}{} and \texorpdfstring{$\sig_2$}{}}
\label{sec:explicit-expressions}
In this appendix we give explicit expressions, up to second order and for $d=1$, for the implied volatility approximation obtained from \eqref{eq:sig-0} and \eqref{eq:sig-n}, in terms of the coefficients $c$, $f$, $g$, and $h$ of $\Act$ given in \eqref{eq:A-1d}, by performing a Taylor series expansion of the coefficients around $\zb(t) = (x,y)$. To ease the notation, we define
\begin{align}
\chi_{i,j}(t) \equiv \chi_{i,j}(t,x,y) & = \frac{\d^i_x\d^j_y\chi(t,x,y)}{i!j!}, & \chi & \in \{c,f,g,h\}. \label{eq:chi-ij}
\end{align}
The zeroth order term $\sig_0$ is given by
\begin{align}
\sig_0 & = \sqrt{\frac{2}{T-t}\int_{t}^T \dd s \, c_{0,0}(s) } .
\end{align}
Next, let us define
\begin{align}
\mathscr{H}_n(\topheta) &:= \Big(\frac{-1}{\sigma_0\sqrt{2(T-t)}}\Big)^n \mathsf{H}_n(\topheta), &
\topheta &:= \frac{x-k-\frac{1}{2}\sigma^2_0(T-t)}{\sig_0\sqrt{2(T-t)}},
\end{align}
where $\mathsf{H}_n(\topheta)$ is the $n$th-order \emph{Hermite polynomial}.
Then the first order term $\sig_1$ is given by
\begin{align}
\sig_1 & = \sig_{1,0}+\sig_{0,1},
\end{align}
where $ \sig_{1,0}$ and $\sig_{0,1}$ are given by
\begin{align}
\sig_{1,0} & = \frac{1}{(T-t)\sig_0}\int_{t}^T\dd s \, c_{1,0}(s)\int_{t}^s \dd q \, c_{0,0}(q) \Big(2\mathscr{H}_1(\topheta)-1\Big), \\
\sig_{0,1} & = \frac{1}{(T-t)\sig_0}\int_{t}^T \dd s \, c_{0,1}(s)\Big(\int_{t}^s \dd q \, f_{0,0}(q)+ \int_{t}^s \dd q \, h_{0,0}(q) \mathscr{H}_1(\topheta)\Big) .
\end{align}
Lastly, the second order term $\sig_2$ is given by
\begin{align}
\sig_{2}
& = \sig_{2,0}+\sig_{1,1}+\sig_{0,2},
\end{align}
where the terms $\sig_{2,0}$, $\sig_{1,1}$, $\sig_{0,2}$ are given by
\begin{align}
\sig_{2,0} & = \frac{1}{(T-t)\sig_0}\bigg(\frac{1}{2}\int_{t}^T \dd s \, c_{2,0}(s) \Big((\int_{t}^s \dd q \, c_{0,0}(q) )^2 (4\mathscr{H}_2(\topheta)-4\mathscr{H}_1(\topheta)+1) + 2 \int_{t}^s \dd q \, c_{0,0}(q) \Big)
\\
& \quad + \int_{t}^T \dd s_1 \, \int_{s_1}^T \dd s_2 \, c_{1,0}(s_1)c_{1,0}(s_2)\Big(\int_{t}^{s_1} \dd q_1 \, c_{0,0}(q_1) \int_{t}^{s_2} \dd q_2 \, c_{0,0}(q_2)
\\
& \quad \times
\big(4\mathscr{H}_4(\topheta)-8\mathscr{H}_3(\topheta)+5\mathscr{H}_2(\topheta)-\mathscr{H}_1(\topheta)\big) + \int_{t}^{s_1} \dd q_1 \, c_{0,0}(q_1) \Big(6\mathscr{H}_2(\topheta)-6\mathscr{H}_1(\topheta)+1)\Big)\bigg)
\\
& \quad -\frac{\sig^2_{1,0}}{2}\Big((T-t)\sig_0 (\mathscr{H}_2(\topheta)-\mathscr{H}_1(\topheta))
+ \frac{1}{\sig_0} \Big), \\
\sig_{1,1} & = \frac{1}{(T-t)\sig_0}\Bigg(\frac{1}{2}\int_{t}^T \dd s \, c_{1,1}(s) \bigg(2\int_{t}^{s} \dd q_1 \, c_{0,0}(q_1) \int_{t}^{s} \dd q_2 \, h_{0,0}(q_2) \mathscr{H}_2(\topheta)
\\
& \quad + \int_{t}^{s} \dd q_1 \, c_{0,0}(q_1)(2\int_{t}^{s} \dd q_2 \, f_{0,0}(q_2)-\int_{t}^{s} \dd q_2 \, h_{0,0}(q_2))\mathscr{H}_1(\topheta)
\\
& \quad -\int_{t}^{s} \dd q_1 \, c_{0,0}(q_1)\int_{t}^{s} \dd q_2 \, f_{0,0}(q_2)+\int_{t}^{s} \dd q_1 \, h_{0,0}(q_1)\bigg)
\\
& \quad + \int_{t}^T \dd s_1 \,\int_{s_1}^{T} \dd s_2 \, c_{1,0}(s_1)c_{0,1}(s_2) \bigg( 2\int_{t}^{s_1} \dd q_1 \, c_{0,0}(q_1)\int_{t}^{s_2} \dd q_2 \, h_{0,0}(q_2)\mathscr{H}_4(\topheta)
\\
& \quad +\int_{t}^{s_1} \dd q_1 \, c_{0,0}(q_1) \Big(2\int_{t}^{s_2} \dd q_2 \, f_{0,0}(q_2) - 3\int_{t}^{s_2} \dd q_2 \, h_{0,0}(q_2)\Big)\mathscr{H}_3(\topheta) \\
& \quad +(\int_{t}^{s_1} \dd q \, c_{0,0}(q)(\int_{t}^{s_2} \dd q \, h_{0,0}(q)-3 \int_{t}^{s_2} \dd q \, f_{0,0}(q))+\int_{t}^{s_1} \dd q \, h_{0,0}(q)\Big) \mathscr{H}_2(\topheta)
\\
& \quad + \Big(\int_{t}^{s_1} \dd q_1 \, c_{0,0}(q_1)\int_{t}^{s_2} \dd q_2 \, f_{0,0}(q_2)-\int_{t}^{s_1} \dd q_1 \, h_{0,0}(q_1)\Big)\mathscr{H}_1(\topheta) \bigg)
\\
& \quad + \int_{t}^T \dd s_1 \,\int_{s_1}^{T} \dd s_2 \, c_{0,1}(s_1)c_{1,0}(s_2) \bigg(2 \int_{t}^{s_1} \dd q_1 \, h_{0,0}(q_1) \int_{t}^{s_2} \dd q_2 \, c_{0,0}(q_2) \mathscr{H}_4(\topheta)
\\
& \quad + \Big(2\int_{t}^{s_1} \dd q_1 \, f_{0,0}(q_1)-3\int_{t}^{s_1} \dd q_1 \, h_{0,0}(q_1)\Big)\int_{t}^{s_2} \dd q_2 \, c_{0,0}(q_2)\mathscr{H}_3(\topheta)
\\
& \quad + \Big(\big(\int_{t}^{s_1} \dd q_1 \, h_{0,0}(q_1)-3 \int_{t}^{s_1} \dd q_1 \, f_{0,0}(q_1)\big)\int_{t}^{s_2} \dd q_2 \, c_{0,0}(q_2) + 3 \int_{t}^{s_1} \dd q_1 \, h_{0,0}(q_1)\Big)\mathscr{H}_2(\topheta)
\\
& \quad + \Big(\int_{t}^{s_1} \dd q_1 \, f_{0,0}(q_1)(2 +\int_{t}^{s_2} \dd q_2 \, c_{0,0}(q_2))-2\int_{t}^{s_1} \dd q_1 \, h_{0,0}(q_1)\Big) \mathscr{H}_1(\topheta) - \int_{t}^{s_1} \dd q_1 \, f_{0,0}(q_1) \bigg)
\\
& \quad + \int_{t}^T \dd s_1 \,\int_{s_1}^{T} \dd s_2 \,f_{1,0}(s_1)c_{0,1}(s_2)\int_{t}^{s_1} \dd q_1 \, c_{0,0}(q_1)\Big(2\mathscr{H}_1(\topheta) -1\Big)
\\
& \quad + 2\int_{t}^T \dd s_1 \,\int_{s_1}^{T} \dd s_2 \,h_{1,0}(s_1)c_{0,1}(s_2)\int_{t}^{s_1} \dd q_1 \, c_{0,0}(q_1)\Big(2\mathscr{H}_2(\topheta) -\mathscr{H}_1(\topheta)\Big) \Bigg)
\\
& \quad -\sig_{1,0}\sig_{0,1}\Big((T-t)\sig_0 (\mathscr{H}_2(\topheta)-\mathscr{H}_1(\topheta))
+ \frac{1}{\sig_0} \Big), \\
\sig_{0,2} & = \frac{1}{(T-t)\sig_0}\Bigg(\frac{1}{2}\int_{t}^T \dd s \, c_{0,2}(s) \bigg(\Big(\int_{t}^{s} \dd q \, h_{0,0}(q)\big)^2 \mathscr{H}_2(\topheta) + 2\int_{t}^{s} \dd q_1 \, h_{0,0}(q_1)\int_{t}^{s} \dd q_2 \, f_{0,0}(q_2)\mathscr{H}_1(\topheta)
\\
& \quad + (\int_{t}^{s} \dd q \, f_{0,0}(q))^2 + 2\int_{t}^{s} \dd q \, g_{0,0}(q)\bigg)
\\
& \quad + \int_{t}^T \dd s_1 \,\int_{s_1}^{T} \dd s_2 \, c_{0,1}(s_1)c_{0,1}(s_2) \bigg( \int_{t}^{s_1} \dd q_1 \, h_{0,0}(q_1) \int_{t}^{s_2} \dd q_2 \, h_{0,0}(q_2) \mathscr{H}_4(\topheta)
\\
& \quad + \Big(\int_{t}^{s_1} \dd q_1 \, f_{0,0}(q_1) \int_{t}^{s_2} \dd q_2 \, h_{0,0}(q_2) + \int_{t}^{s_1} \dd q_1 \, h_{0,0}(q_1)\int_{t}^{s_2} \dd q_2 \, f_{0,0}(q_2) \\
& \quad -\int_{t}^{s_1} \dd q_1 \, h_{0,0}(q_1) \int_{t}^{s_2} \dd q_2 \, h_{0,0}(q_2) \Big) \mathscr{H}_3(\topheta)
\\
& \quad + \Big(2\int_{t}^{s_1} \dd q \, g_{0,0}(q) + \int_{t}^{s_1} \dd q_1 \, f_{0,0}(q_1) \int_{t}^{s_2} \dd q_2 \, f_{0,0}(q_2) -\int_{t}^{s_1} \dd q_1 \, f_{0,0}(q_1) \int_{t}^{s_2} \dd q_2 \, h_{0,0}(q_2)
\\
& \quad - \int_{t}^{s_2} \dd q_1 \, f_{0,0}(q_1) \int_{t}^{s_1} \dd q_2 \, h_{0,0}(q_2)\Big)\mathscr{H}_2(\topheta)
\\
& \quad -\Big(2\int_{t}^{s_1} \dd q \, g_{0,0}(q)+\int_{t}^{s_1} \dd q_1 \, f_{0,0}(q_1) \int_{t}^{s_2} \dd q_2 \, f_{0,0}(q_2)\Big) \mathscr{H}_1(\topheta)\bigg)
\\
& \quad + \int_{t}^T \dd s_1 \,\int_{s_1}^{T} \dd s_2 \, f_{0,1}(s_1)c_{0,1}(s_2)\Big(\int_{t}^{s_1} \dd q \, h_{0,0}(q)\mathscr{H}_1(\topheta)+\int_{t}^{s_1} \dd q \, f_{0,0}(q)\Big)
\\
& \quad + \int_{t}^T \dd s_1 \,\int_{s_1}^{T} \dd s_2 \, h_{0,1}(s_1)c_{0,1}(s_2)\Big(\int_{t}^{s_1} \dd q \, h_{0,0}(q) \mathscr{H}_2(\topheta) + \int_{t}^{s_1} \dd q \, f_{0,0}(q)\mathscr{H}_1(\topheta)\Big)\Bigg)
\\
& \quad -\frac{\sig^2_{0,1}}{2}\Big((T-t)\sig_0 (\mathscr{H}_2(\topheta)-\mathscr{H}_1(\topheta))
+ \frac{1}{\sig_0} \Big).
\end{align}
Note that, although $\mathscr{H}_3(\topheta)$ and $\mathscr{H}_4(\topheta)$ appear in the expressions for $\sig_{2,0}$, $\sig_{1,1}$ and $\sig_{0,2}$, the 3rd and 4th order terms in $k-x$ that they produce cancel against the 3rd and 4th order terms arising from $\{\sig_{1,0}^2\mathscr{H}_2(\topheta), \sig_{1,0}^2\mathscr{H}_1(\topheta) \}$,
$\{\sig_{0,1}\sig_{1,0}\mathscr{H}_2(\topheta), \sig_{0,1}\sig_{1,0}\mathscr{H}_1(\topheta)\}$, and $\{ \sig_{0,1}^2\mathscr{H}_2(\topheta),\sig_{0,1}^2\mathscr{H}_1(\topheta)\}$, respectively, so that the second order implied volatility expansion is quadratic in $k-x$.
\begin{figure}
\caption{For the QOU model described in Section \ref{sec:examples}: implied volatility approximations $\bar{\sig}_n$, $n \le 2$, and the exact implied volatility $\sig$ as functions of $\log$-moneyness $k-x$ for reset dates $T \in \{\frac{1}{64},\frac{1}{32},\frac{1}{16},\frac{1}{8}\}$.}
\label{fig:qts-vasicek-iv}
\end{figure}
\begin{figure}
\caption{For the QOU model described in Section \ref{sec:examples}: implied volatility approximations $\bar{\sig}_n$, $n \le 2$, and the exact implied volatility $\sig$ as functions of $\log$-moneyness $k-x$ for reset dates $T \in \{\frac{1}{64},\frac{1}{32},\frac{1}{16},\frac{1}{8}\}$.}
\label{fig:qts-cir-iv}
\end{figure}
\begin{figure}
\caption{For the QOU model described in Section \ref{sec:examples}: absolute value of the relative error $|\bar{\sig}_2-\sig|/\sig$ of the second order approximation as a function of $\log$-moneyness $k-x$ and reset date $T$.}
\label{fig:qts-vasicek-err}
\end{figure}
\begin{figure}
\caption{For the QOU model described in Section \ref{sec:examples}: absolute value of the relative error $|\bar{\sig}_2-\sig|/\sig$ of the second order approximation as a function of $\log$-moneyness $k-x$ and reset date $T$.}
\label{fig:qts-cir-err}
\end{figure}
\end{document}
\begin{document}
\title{On interrelations between strongly, weakly and chord separated set-systems
(a geometric approach)}
\author{V.I.~Danilov\thanks{Central Institute of Economics and
Mathematics of the RAS, 47, Nakhimovskii Prospect, 117418 Moscow, Russia;
email: [email protected].}
\and
A.V.~Karzanov\thanks{Institute for System Analysis at FRC Computer Science and
Control of the RAS, 9, Prospect 60 Let Oktyabrya, 117312 Moscow, Russia;
emails: [email protected]; [email protected]. Corresponding author.
}
\and
G.A.~Koshevoy\thanks{Central Institute of Economics and Mathematics of the RAS,
47, Nakhimovskii Prospect, 117418 Moscow, Russia; email:
[email protected].}
}
\date{}
\maketitle
\begin{quote}
{\bf Abstract.} \small We consider three types of set-systems that have
interesting applications in algebraic combinatorics and representation theory:
maximal collections of the so-called \emph{strongly separated}, \emph{weakly
separated}, and \emph{chord separated} subsets of a set $[n]=\{1,2,\ldots,n\}$.
These collections are known to admit nice geometric interpretations; namely,
they are bijective, respectively, to rhombus tilings on the zonogon
$Z(n,2)$, combined tilings on $Z(n,2)$, and fine zonotopal tilings (or
``cubillages'') on the 3-dimensional zonotope $Z(n,3)$. We describe
interrelations between these three types of set-systems in $2^{[n]}$, by
studying interrelations between their geometric models. In particular, we
completely characterize the sets of rhombus and combined tilings properly
embeddable in a fixed cubillage, explain that they form distributive lattices,
give efficient methods of extending a given rhombus or combined tiling to a
cubillage, and so on.
{\em Keywords}\,: strongly separated sets, weakly separated sets, chord
separated sets, rhombus tiling, cubillage, higher Bruhat order
{\em AMS Subject Classification}\, 05E10, 05B45
\end{quote}
\baselineskip=15pt
\parskip=2pt
\section{Introduction} \label{sec:intr}
For a positive integer $n$, the set $\{1,2,\ldots,n\}$ with the usual order is
denoted by $[n]$. For a set $X\subseteq[n]$ of elements $x_1<x_2<\ldots<x_k$,
we may write $x_1x_2\cdots x_k$ for $X$, $\min(X)$ for $x_1$, and $\max(X)$ for
$x_k$ (where $\min(X)=\max(X):=0$ if $X=\emptyset$). We use three
binary relations on the set $2^{[n]}$ of all subsets of $[n]$. Namely, for
subsets $A,B\subseteq[n]$, we write:
\begin{numitem1}
\begin{itemize}
\item[(i)] $A<B$ if $\max(A)<\min(B)$ (\emph{global dominating});
\item[(ii)]
$A\lessdot B$ if $(A-B)<(B-A)$, where $A'-B'$ denotes $\{i'\colon A'\ni
i'\not\in B'\}$ (\emph{global dominating after cancelations});
\item[(iii)] $A\rhd B$ if $A-B\ne\emptyset$, and $B-A$
can be expressed as a union of nonempty subsets $B',B''$ so that $B'<(A-B)<B''$
(\emph{splitting}).
\end{itemize}
\label{eq:2relat}
\end{numitem1}
We also say that $A$ \emph{surrounds} $B$ if there are no elements $i<j<k$ of
$[n]$ such that $i,k\in B-A$ and $j\in A-B$ (equivalently, if either $A=B$ or
$A\lessdot B$ or $B\lessdot A$ or $B\rhd A$). The above relations are used in
the following notions.
\noindent \textbf{Definitions.} ~Following Leclerc and Zelevinsky~\cite{LZ},
sets $A,B\subseteq[n]$ are called \emph{strongly separated} (from each other)
if $A\lessdot B$ or $B\lessdot A$ or $A=B$, and called \emph{weakly separated}
if either $|A|\le |B|$ and $A$ surrounds $B$, or $|B|\le|A|$ and $B$ surrounds
$A$ (or both). Following terminology of Galashin~\cite{gal}, $A,B\subseteq [n]$
are called \emph{chord separated} if one of $A,B$ surrounds the other
(equivalently, if there are no elements $i<j<k<\ell$ of $[n]$ such that $i,k$
belong to one, and $j,\ell$ to the other set among $A-B$ and $B-A$).
(The third notion for $A,B$ is justified by the observation that if $n$ points
labeled $1,2,\ldots,n$ are placed on a circumference $O$ in this cyclic order,
then $A$ and $B$ are chord separated if and only if there exists a chord of $O$
separating $A-B$ from $B-A$.)
Accordingly, a collection (set-system) ${\cal F}\subseteq 2^{[n]}$ is called
strongly (resp. weakly, chord) separated if any two of its members are such.
For brevity, we refer to strongly, weakly, and chord separated collections as
\emph{s-}, \emph{w-}, and \emph{c-collections}, respectively. In the hierarchy
of these collections, any s-collection is a w-collection, and any w-collection
is a c-collection, but the converse need not hold. Such collections are
encountered in interesting applications (in particular, w-collections appeared
in~\cite{LZ} in connection with the problem of quasi-commuting flag minors of a
quantum matrix). Also they admit impressive geometric-combinatorial
representations (which will be discussed later).
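To make the three notions concrete, the following sketch (in Python; not part of the original constructions) tests a pair of subsets of $[n]$, represented as Python sets, for strong, weak and chord separation directly from the definitions.
\begin{verbatim}
def lt(X, Y):
    # X < Y in the sense of (i): max(X) < min(Y), with max/min of empty set = 0.
    return (max(X) if X else 0) < (min(Y) if Y else 0)

def lessdot(A, B):
    # A lessdot B in the sense of (ii): (A - B) < (B - A).
    return lt(A - B, B - A)

def surrounds(A, B):
    # A surrounds B: no i < j < k with i, k in B - A and j in A - B.
    D = B - A
    return not any(any(i < j for i in D) and any(k > j for k in D) for j in A - B)

def strongly_separated(A, B):
    return A == B or lessdot(A, B) or lessdot(B, A)

def weakly_separated(A, B):
    return (len(A) <= len(B) and surrounds(A, B)) or \
           (len(B) <= len(A) and surrounds(B, A))

def chord_separated(A, B):
    return surrounds(A, B) or surrounds(B, A)
\end{verbatim}
For instance, these predicates confirm that $\{1,4\}$ and $\{2,3\}$ are weakly but not strongly separated, that $\{1,3\}$ and $\{2\}$ are chord but not weakly separated, and that $\{1,3\}$ and $\{2,4\}$ are not even chord separated.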
An important fact is that these three sorts of collections possess the property
of \emph{purity}. More precisely, we say that a domain ${\cal D}\subseteq 2^{[n]}$
is \emph{s-pure} (\emph{w-pure}, \emph{c-pure}) if all inclusion-wise maximal
s-collections (resp. w-collections, c-collections) in ${\cal D}$ have the same
cardinality, which in this case is called the \emph{s-rank} (resp.
\emph{w-rank}, \emph{c-rank}) of ${\cal D}$. We will rely on the following results
on the full domain $2^{[n]}$.
\begin{numitem1} \label{eq:s-pure}
\cite{LZ} ~$2^{[n]}$ is s-pure and its s-rank is equal to $\binom{n}{2}+
\binom{n}{1}+\binom{n}{0}$ ($=\frac12 n(n+1)+1$).
\end{numitem1}
\begin{numitem1} \label{eq:w-pure}
\cite{DKK2} ~$2^{[n]}$ is w-pure and its w-rank is equal to $\binom{n}{2}+
\binom{n}{1}+\binom{n}{0}$.
\end{numitem1}
\begin{numitem1} \label{eq:c-pure}
\cite{gal} ~$2^{[n]}$ is c-pure and its c-rank is equal to
$\binom{n}{3}+\binom{n}{2}+ \binom{n}{1}+\binom{n}{0}$.
\end{numitem1}
(The phenomenon of w-purity has also been established for some other important
domains, see~\cite{OPS,DKK3,OS}; however, those results are beyond the main
stream of our paper.)
As is seen from~\refeq{s-pure}--\refeq{c-pure}, the c-rank of $2^{[n]}$ is
$O(n)$ times larger than its s- and w-ranks (which are equal), and we address
the following issue: given a maximal c-collection $C\subset 2^{[n]}$, what can
one say about the sets ${\bf S}(C)$ and ${\bf W}(C)$ of inclusion-wise maximal
s-collections and w-collections, respectively, contained in $C$?
It turns out that a domain $C$ of this sort need not be s-pure or w-pure in
general, as we show by an example in Sect.~\SSEC{example}. Nevertheless, the
sets of s-collections and w-collections contained in $C$ and having the maximal
\emph{size} (equal to $\frac12 n(n+1)+1$), denoted as ${\bf S}^\ast(C)$ and ${\bf W}^\ast(C)$,
respectively, have nice structural properties, and presenting these properties
is the main purpose of this paper.
Along the way, we rely on the following known geometric-combinatorial
constructions for s-, w-, and c-collections. As follows from results
in~\cite{LZ}, each maximal s-collection in $2^{[n]}$ corresponds to the vertex
set of a \emph{rhombus tiling} on the $n$-zonogon in the plane, and vice versa.
A somewhat more sophisticated planar structure, namely, the so-called
\emph{combined tilings}, or \emph{combies}, on the $n$-zonogon are shown to
represent the maximal w-collections in $2^{[n]}$, see~\cite{DKK3}. As to the maximal
c-collections, Galashin~\cite{gal} recently showed that they are bijective to
subdivisions of the 3-dimensional zonotope $Z(n,3)$ into parallelotopes. For
brevity, we liberally refer to such subdivisions as \emph{cubillages}, and their
elements (parallelotopes) as \emph{cubes}.
In this paper, we first discuss interrelations between strongly and chord
separated set-systems. A brief outline is as follows.
(a) For a maximal c-collection $C\subset 2^{[n]}$, let $Q=Q(C)$ be its
associated cubillage (where the elements of $C$ correspond to the 0-dimensional
cells, or \emph{vertices}, of $Q$ regarded as a complex). Then for each
$S\in{\bf S}^\ast(C)$, its associated rhombus tiling $T(S)$ is viewed (up to a
piecewise linear deformation) as a 2-dimensional subcomplex of $Q$, called an
\emph{s-membrane} in it. Furthermore, these membranes (and therefore the
members of ${\bf S}^\ast(C)$) form a distributive lattice with the minimal and
maximal elements to be the ``front side'' $Z^{\,\rm fr}$ and ``rear side'' $Z^{\,\rm re}$ of
the boundary subcomplex of $Z(n,3)$, respectively. This lattice is ``dense'',
in the sense that any two s-collections whose s-membranes are neighboring in
the lattice are obtained from each other by a standard \emph{flip}, or
\emph{mutation} (which involves a hexagon, or, in terminology of Leclerc and
Zelevinsky~\cite{LZ}, is performed ``in the presence of six witnesses'').
(b) It is natural to raise a ``converse'' issue: given a maximal s-collection
$S\subset 2^{[n]}$, what can one say about the set ${\bf C}(S)$ of maximal
c-collections containing $S$? One can efficiently construct an instance of such
c-collections, by embedding the tiling $T(S)$ (as an s-membrane) into the
``empty'' zonotope $Z(n,3)$ and then by growing, step by step (or cube by
cube), a required cubillage containing $T(S)$. In fact, the set of
cubillages for ${\bf C}(S)$ looks like a ``direct product'' of two sets ${\bf Q}^-$
and ${\bf Q}^+$, where the former (latter) is formed by partial
cubillages consisting of ``cubes'' filling the volume of $Z(n,3)$
between the surfaces $Z^{\,\rm fr}$ and $T(S)$ (resp. between $T(S)$ and $Z^{\,\rm re}$).
\Xcomment{
We explain that each of ${\bf Q}^-, {\bf Q}^+$ is connected by its own system of
3-flips, where a \emph{3-flip} in $Q$ consists in choosing a sub-zonotope $Z'$
isomorphic to $Z(4,3)$ subdivided into four cubes contained in $Q$, and then
replacing this configuration by the other subdivision of $Z'$ into four cubes
(which is unique). Also other properties are discussed.
}
A somewhat similar programme is carried out for w-collections, and along this
way we obtain the main results of this paper. We consider a maximal c-collection
$C\subset 2^{[n]}$ and cut each cube of the cubillage $Q$ associated with $C$
into two tetrahedra and one octahedron, forming a subdivision of $Z(n,3)$ into
smaller pieces, denoted as $Q^{\boxtimes}$ and called the \emph{fragmentation} of $Q$.
We show that each combi $K(W)$ associated with a maximal by size w-collection
$W\subset {\bf W}^\ast(C)$ is related to a set of 2-dimensional subcomplexes of
$Q^{\boxtimes}(C)$, called \emph{w-membranes}. Like s-membranes, the set of all
w-membranes of $Q^{\boxtimes}$ are shown to form a distributive lattice with the
minimal element $Z^{\,\rm fr}$ and the maximal element $Z^{\,\rm re}$, and any two
neighboring w-membranes in the lattice are linked by either a \emph{tetrahedral
flip} or an \emph{octahedral flip} (the latter corresponds, for a w-collection,
to a ``mutation in the presence of four witnesses'', in terminology
of~\cite{LZ}). As to the ``converse direction'', we consider a fixed maximal
w-collection $W\subset 2^{[n]}$ and develop an efficient geometric method to
construct a cubillage containing the combi $K(W)$.
We also present additional results on interrelations between s- and
w-collections on one side, and c-collections on the other.
This paper is organized as follows. Section~\SEC{backgr} recalls definitions of
rhombus tilings and combined tilings on a zonogon and fine zonotopal tilings
(``cubillages'') on a 3-dimensional zonotope, and reviews their relations to
maximal s-, w-, and c-collections in $2^{[n]}$. Section~\SEC{smembr} starts
with an example of a maximal c-collection in $2^{[n]}$ that is neither s-pure
nor w-pure. Then it introduces s-membranes in a cubillage, discusses their
relation to rhombus tilings, and describes transformations of cubillages on
$Z(n,3)$ to ones on $Z(n-1,3)$ and back, which are needed for further purposes.
Section~\SEC{lattice_s} studies the structure of the set of s-membranes in a
fixed cubillage and, as a consequence, describes the lattice ${\bf S}^\ast(C)$.
Section~\SEC{embed_rt} discusses the task of constructing a cubillage
containing one or two prescribed rhombus tilings. Then we start studying
interrelations between maximal w- and c-collections. In Section~\SEC{w-membr}
we introduce w-membranes in the fragmentation $Q^{\boxtimes}$ of a fixed cubillage
$Q$, explain that they form a lattice, demonstrate a relationship to combined
tilings, and more. The concluding Section~\SEC{embed_combi} is devoted to the
task of extending a given combi to a cubillage, which results in an efficient
algorithm of finding a maximal c-collection in $2^{[n]}$ containing a given
maximal w-collection.
\section{Backgrounds} \label{sec:backgr}
In this section we recall the geometric representations for s-, w-, and
c-collections that we are going to use. For disjoint subsets $A$ and
$\{a,\ldots,b\}$ of $[n]$, we use the abbreviated notation $Aa\ldots b$ for
$A\cup\{a,\ldots,b\}$, and write $A-c$ for $A-\{c\}$ when $c\in A$.
\subsection{Rhombus tilings} \label{ssec:rhomb_til}
Let $\Xi=\{\xi_1,\ldots,\xi_n\}$ be a system of $n$ pairwise non-collinear vectors in the
upper half-plane ${\mathbb R}\times {\mathbb R}_{\ge 0}$ that follow in this order
clockwise around $(0,0)$. The \emph{zonogon} generated by $\Xi$ is the
$2n$-gon that is the Minkowski sum of segments $[0,\xi_i]$, $i=1,\ldots,n$,
i.e., the set
$$
Z=Z_\Xi:=\{\lambda_1\xi_1+\ldots+ \lambda_n\xi_n\colon \lambda_i\in{\mathbb R},\;
0\le\lambda_i\le 1,\; i=1,\ldots,n\},
$$
also denoted as $Z(n,2)$. A tiling that we deal with is a subdivision $T$ of
$Z$ into \emph{tiles}, each being a parallelogram of the form $\sum_{k\in X} \xi_k
+\{\lambda\xi_i+\lambda'\xi_j\colon 0\le \lambda,\lambda'\le 1\}$ for
some $i<j$ and some subset $X\subseteq[n]-\{i,j\}$. In other words, the tiles
are not overlapping (have no common interior points) and their union is $Z$. A
tile determined by $X,i,j$ as above is called an $ij$-\emph{tile} and denoted
as $\rho(X|ij)$.
We identify each subset $X\subseteq[n]$ with the point $\sum_{i\in X} \xi_i$ in
$Z$ (assuming that the generators $\xi_i$ are ${\mathbb Z}$-independent). Depending
on the context, we may think of $T$ as a 2-dimensional complex and associate to
it the planar directed graph $(V_T,E_T)$ in which each vertex (0-dimensional
cell) is labeled by the corresponding subset of $[n]$ and each edge
(1-dimensional cell) that is a parallel translate of $\xi_i$ for some $i$ is
called an $i$-\emph{edge}, or an edge of \emph{type} (or \emph{color}) $i$. In
particular, the \emph{left boundary} of the zonogon is the directed path
$(v_0,e_1,v_1,\ldots,e_n,v_n)$ in which each vertex $v_i$ is the set $[i]$ (and
$e_i$ is an $i$-edge), whereas the \emph{right boundary} of $Z$ is the directed
path $(v'_0,e'_1,v'_1,\ldots, e'_n,v'_n)$ with $v'_i=[n]-[n-i]$ (and $e'_i$
being an $(n-i+1)$-edge).
We call the vertex set $V_T$ (regarded as a set-system in $2^{[n]}$) the
\emph{spectrum} of $T$. In fact, the graphic structure of $T$ (and therefore
its spectrum) does not depend on the choice of generating vectors $\xi_i$ (by
keeping their ordering clockwise). In the literature one often takes vectors
of the same euclidean length, in which case each tile becomes a rhombus and $T$
is called a \emph{rhombus tiling}. In what follows we will liberally use this
term whatever generators $\xi_i$ are chosen.
One easily shows that for any $1\le i<j\le n$, there exists
a unique $ij$-tile, or $ij$-\emph{rhombus}, in $T$. The central property of
rhombus tilings is as follows.
\begin{theorem} {\rm \cite{LZ}} \label{tm:LZ}
The correspondence $T\mapsto V_T$ gives a bijection between the set ${\bf
RT}_n$ of rhombus tilings on $Z(n,2)$ and the set ${\bf S}_n$ of maximal
s-collections in $2^{[n]}$.
\end{theorem}
In particular, each maximal s-collection $S$ determines a unique rhombus tiling
$T$ with $V_T=S$, and this $T$ is constructed easily: each pair of vertices of
the form $X,Xi$ is connected by (straight line) edge from $X$ to $Xi$; then the
resulting graph is planar and all its faces are rhombi, giving $T$. Two rhombus
tilings play an especial role. The spectrum of one, called the \emph{standard
tiling} and denoted as $T^{\rm st}_n$, is formed by all \emph{intervals} in $[n]$,
i.e., the sets $I_{ij}:=\{i,i+1,\ldots,j\}$ for $1\le i\le j\le n$, plus the
``empty interval'' $\emptyset$. The other one, called the \emph{anti-standard
tiling} and denoted as $T^{\rm ant}_n$, has the spectrum consisting of all
\emph{co-intervals}, the sets of the form $[n]-I_{ij}$. These two tilings for
$n=4$ are illustrated in the picture below.
\begin{center}
\includegraphics{swc1}
\end{center}
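\noindent\textbf{Computational illustration.} The spectra of $T^{\rm st}_n$ and $T^{\rm ant}_n$ are easy to generate explicitly. The following small Python sketch (given for illustration only and not used in the formal development; subsets of $[n]$ are encoded as frozensets) produces both spectra and checks that each consists of $n(n+1)/2+1$ sets.
\begin{verbatim}
# Spectrum of the standard tiling T^st_n: all intervals {i,...,j} of [n]
# together with the empty set; the anti-standard spectrum consists of the
# complementary sets (co-intervals).
def intervals(n):
    S = {frozenset()}
    S.update(frozenset(range(i, j + 1))
             for i in range(1, n + 1) for j in range(i, n + 1))
    return S

def co_intervals(n):
    full = frozenset(range(1, n + 1))
    return {full - I for I in intervals(n)}

n = 4
print(sorted(map(sorted, intervals(n))))     # spectrum of T^st_4
print(sorted(map(sorted, co_intervals(n))))  # spectrum of T^ant_4
assert len(intervals(n)) == len(co_intervals(n)) == n * (n + 1) // 2 + 1
\end{verbatim}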
Next, as follows from results in~\cite{LZ}, ${\bf RT}_n$ is endowed with a
poset structure. In this poset, $T^{\rm st}_n$ and $T^{\rm ant}_n$ are the unique minimal
and maximal elements, respectively, and a tiling $T$ immediately precedes a
tiling $T'$ if $T'$ is obtained from $T$ by one \emph{strong} (or hexagonal)
\emph{raising flip} (and accordingly $T$ is obtained from $T'$ by one
\emph{strong lowering flip}). This means that
\begin{numitem1} \label{eq:strong_flip}
there exist $i<j<k$ and $X\subseteq [n]-\{i,j,k\}$ such that: $T$ contains the
vertices $X,Xi,Xj,Xk,Xij,Xjk,Xijk$, and the set $V_{T'}$ is obtained from $V_T$ by
replacing $Xj$ by $Xik$.
\end{numitem1}
(This transformation is called in~\cite{LZ} a ``mutation in the presence of six
witnesses'', namely, $X,Xi,Xk,Xij,Xjk,Xijk$.) See the picture.
\begin{center}
\includegraphics[scale=0.9]{swc2}
\end{center}
We denote the corresponding hexagon in $T$ as $H=H(X|ijk)$ and say that $H$ has
$Y$-\emph{configuration} ($\Lambda$-\emph{configuration}) if the three rhombi
spanning $H$ are as illustrated in the left (resp. right) fragment of the above
picture.
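\noindent\textbf{Computational illustration.} At the level of spectra, a strong raising flip is a purely set-theoretic operation: one checks the vertices listed in~\refeq{strong_flip} and replaces $Xj$ by $Xik$. The following Python sketch (an illustration only; subsets of $[n]$ are encoded as frozensets) performs one such flip.
\begin{verbatim}
def strong_raising_flip(S, X, i, j, k):
    """Perform the strong raising flip at (X,i,j,k) on the spectrum S (a set
    of frozensets): check that the seven sets X, Xi, Xj, Xk, Xij, Xjk, Xijk
    are present and replace Xj by Xik."""
    assert i < j < k and not (X & {i, j, k})
    witnesses = [X, X | {i}, X | {j}, X | {k},
                 X | {i, j}, X | {j, k}, X | {i, j, k}]
    assert all(w in S for w in witnesses), "no Y-configuration here"
    return (S - {X | {j}}) | {X | {i, k}}

# Example (n = 3): flipping the standard spectrum at X = {}, (i,j,k) = (1,2,3)
# replaces the vertex {2} by {1,3}.
S = {frozenset(t) for t in [(), (1,), (2,), (3,), (1, 2), (2, 3), (1, 2, 3)]}
print(sorted(map(sorted, strong_raising_flip(S, frozenset(), 1, 2, 3))))
\end{verbatim}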
\subsection{Combined tilings} \label{ssec:combi}
For tilings of this sort, the system $\Xi$ generating the zonogon is required
to satisfy the additional condition of \emph{strict convexity}, namely: for any
$1\le i<j<k\le n$,
\begin{equation} \label{eq:strict_xi}
\xi_j=\lambda\xi_i+\lambda'\xi_k,\quad \mbox{where $\lambda,\lambda'\in
{\mathbb R}_{>0}$ and $\lambda+\lambda'>1$}.
\end{equation}
Besides, we use vectors $\epsilon_{ij}:=\xi_j-\xi_i$ for $1\le i<j\le n$. A
\emph{combined tiling}, or simply a \emph{combi}, is a subdivision $K$ of
$Z_\Xi$ into certain polygons specified below. As in the case of rhombus
tilings, a combi $K$ may be regarded as a complex and we associate to it a
planar directed graph $(V_K,E_K)$ in which each vertex corresponds to some
subset of $[n]$ and each edge is a parallel translate of either $\xi_i$ or
$\epsilon_{ij}$ for some $i,j$. In the latter case we say that the edge has
\emph{type} $ij$. We call $V_K$ the \emph{spectrum} of $K$.
There are three sorts of tiles in $K$: $\Delta$-tiles, $\nabla$-tiles, and
lenses. A $\Delta$-\emph{tile} ($\nabla$-\emph{tile}) is a triangle with
vertices $A,B,C\subseteq[n]$ and edges $(B,A),(C,A),(B,C)$ (resp.
$(A,C),(A,B),(B,C)$) of types $i$, $j$ and $ij$, respectively, where $i<j$. For
purposes of Sect.~\SEC{w-membr}, we denote this tile as $\Delta(A|ji)$
(resp. $\nabla(A|ij)$), call $(B,C)$ its \emph{base} edge and call $A$ its
\emph{top} (resp. \emph{bottom}) vertex. See the left and middle fragments of
the picture.
\begin{center}
\includegraphics{swc3}
\end{center}
In a \emph{lens} $\lambda$, the boundary is formed by two directed paths
$U_\lambda$ and $L_\lambda$, with at least two edges in each, having the same
beginning vertex $\ell_\lambda$ and the same end vertex $r_\lambda$; see the
right fragment of the above picture. The \emph{upper boundary}
$U_\lambda=(v_0,e_1,v_1,\ldots,e_p,v_p)$ is such that $v_0=\ell_\lambda$,
$v_p=r_\lambda$, and $v_k=Xi_k$ for $k=0,\ldots,p$, where $p\ge 2$,
$X\subset[n]$ and $i_0<i_1<\cdots <i_p$ (so the $k$-th edge $e_k$ is of type
$i_{k-1}i_k$). The \emph{lower boundary}
$L_\lambda=(u_0,e'_1,u_1,\ldots,e'_q,u_q)$ is such that $u_0=\ell_\lambda$,
$u_q=r_\lambda$, and $u_m=Y-j_m$ for $m=0,\ldots,q$, where $q\ge 2$,
$Y\subseteq [n]$ and $j_0>j_1>\cdots>j_q$ (so the $m$-th edge $e'_m$ is of type
$j_mj_{m-1}$). Then $Y=Xi_0j_0=Xi_pj_q$, implying $i_0=j_q$ and $i_p=j_0$, and
we say that the lens $\lambda$ has \emph{type} $i_0j_0$. Note that $X$ as well
as $Y$ need not be a vertex in $K$; we refer to $X$ and $Y$ as the \emph{lower}
and \emph{upper root} of $\lambda$, respectively. Due to
condition~\refeq{strict_xi}, each lens $\lambda$ is a convex polygon whose
vertices are exactly the vertices occurring in $U_\lambda$ and $L_\lambda$.
\noindent\textbf{Remark 1.} In the definition of a combi introduced
in~\cite{DKK3}, the generators $\xi_i$ are assumed to have the same Euclidean
length. However, taking arbitrary (cyclically ordered) generators subject
to~\refeq{strict_xi} does not affect, in essence, the structure of the combi
and its spectrum, and in what follows we will vary the choice of generators
when needed. Next, to simplify visualizations, it will be convenient to think
of edges of type $i$ as ``almost vertical'', and of edges of type $ij$ as
``almost horizontal''; following terminology of~\cite{DKK3}, we refer to the
former edges as \emph{V-edges}, and to the latter ones as \emph{H-edges}. Note
that any rhombus tiling turns into a combi without lenses in a natural way:
each rhombus is subdivided into two ``semi-rhombi'' $\Delta$ and $\nabla$ by
drawing the ``almost horizontal'' diagonal in it.
The picture below illustrates a combi $K$ having one lens $\lambda$ for $n=4$;
here the V-edges and H-edges are drawn by thick and thin lines, respectively.
\begin{center}
\includegraphics{swc4}
\end{center}
We will rely on the following central result on combies.
\begin{theorem} {\rm\cite{DKK3}} \label{tm:combi}
The correspondence $K\mapsto V_K$ gives a bijection between the set ${\bf K}_n$
of combined tilings on $Z(n,2)$ and the set ${\bf W}_n$ of maximal
w-collections in $2^{[n]}$.
\end{theorem}
In particular, each maximal w-collection $W$ determines a unique combi $K$ with
$V_K=W$, and~\cite{DKK3} explains how to construct this $K$ efficiently.
By results in~\cite{DKK1,DKK2,DKK3}, the set ${\bf K}_n$ forms a poset in which
$T^{\rm st}_n$ and $T^{\rm ant}_n$ are the unique minimal and maximal elements,
respectively, and a combi $K$ immediately precedes a combi $K'$ if $K'$ is
obtained from $K$ by one \emph{weak raising flip} (and accordingly $K$ is
obtained from $K'$ by one \emph{weak lowering flip}). This means that
\begin{numitem1} \label{eq:weak_flip}
there exist $i<j<k$ and $X\subseteq [n]-\{i,j,k\}$ such that: $K$ contains the
vertices $Xi,Xj,Xk,Xij,Xjk$, and the set $V_{K'}$ is obtained from $V_K$ by
replacing $Xj$ by $Xik$.
\end{numitem1}
(Using terminology of~\cite{LZ}, one says that $V_K$ and $V_{K'}$ are linked by a
``mutation in the presence of four witnesses'', namely, $Xi,Xk,Xij,Xjk$.)
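\noindent\textbf{Computational illustration.} A weak raising flip differs from a strong one only in that the vertices $X$ and $Xijk$ need not be present. A Python sketch analogous to the one given in Sect.~\SSEC{rhomb_til} (again an illustration only):
\begin{verbatim}
def weak_raising_flip(S, X, i, j, k):
    """Perform the weak raising flip at (X,i,j,k) on the spectrum S (a set of
    frozensets): only the four witnesses Xi, Xk, Xij, Xjk (besides Xj itself)
    are required; Xj is replaced by Xik."""
    assert i < j < k and not (X & {i, j, k})
    witnesses = [X | {i}, X | {k}, X | {i, j}, X | {j, k}]
    assert (X | {j}) in S and all(w in S for w in witnesses)
    return (S - {X | {j}}) | {X | {i, k}}
\end{verbatim}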
\subsection{Cubillages} \label{ssec:cubil}
Now we deal with the zonotope generated by a special cyclic configuration
$\Theta$ of vectors in the space ${\mathbb R}^3$ with coordinates $(x,y,z)$. It
consists of $n$ vectors $\theta_i=(x_i,y_i,1)$, $i=1,\ldots,n$, with the
following strict convexity condition:
\begin{numitem1} \label{eq:cyc_conf}
$x_1<x_2<\cdots<x_n$, and each $(x_i,y_i)$ is a vertex of the convex hull $H$
of the points $(x_1,y_1),\ldots,(x_n,y_n)$ in the plane $z=1$.
\end{numitem1}
\noindent An example with $n=5$ is illustrated in the picture (where
$y_i=x_i^2$ and $x_i=-x_{6-i}$).
\begin{center}
\includegraphics{swc5}
\end{center}
The \emph{zonotope} $Z$ generated by $\Theta$, also denoted as $Z(n,3)$, is the
sum of the segments $[0,\theta_i]$, $i=1,\ldots,n$. A \emph{fine zonotopal tiling}
of $Z$ is a subdivision $Q$ of $Z$ into parallelotopes, any two of which
intersect only in a common face, and each face of the boundary of $Z$ is a face of
some parallelotope. This is possible only if each parallelotope is of the form
$\sum_{i\in X}\theta_i +\{\lambda\theta_i+\lambda'\theta_j+
\lambda''\theta_k\colon 0\le \lambda,\lambda',\lambda''\le 1\}$ for some
$i<j<k$ and $X\subseteq [n]-\{i,j,k\}$. (For more aspects of fine zonotopal
tilings on zonotopes generated by cyclic configurations, see, e.g.,
\cite{GP}.) For brevity, we liberally refer to parallelotopes as \emph{cubes},
and to $Q$ as a \emph{cubillage}.
Depending on the context, we also may think of a cubillage $Q$ as a polyhedral
complex or as the corresponding set of cubes. In particular, (in the former
case) by a vertex, edge, rhombus in $Q$ we mean, respectively, (the closure of)
a 0-, 1-, 2-dimensional cell of this complex, and (in the latter case) when
writing $\zeta\in Q$, we mean that $\zeta$ is a cube of $Q$.
As in the case of zonogons and rhombus tilings, each subset $X\subseteq [n]$ is
identified with the point $\sum_{i\in X}\theta_i$ in $Z$ (assuming that the
generators $\theta_i$ are ${\mathbb Z}$-independent), and we use the terms
$i$-\emph{edge}, $ij$-\emph{rhombus}, and $ijk$-\emph{cube} (where $i<j<k$)
for the edges, rhombi, and cubes in $Q$ accordingly. Also we say that such a rhombus
(cube) is of \emph{type} $ij$ (resp. $ijk$). The edges are directed according
to the generating vectors. An $ij$-rhombus ($ijk$-cube) with the bottom vertex
$X$ is denoted as $\rho(X|ij)$ (resp. $\zeta(X|ijk)$). As a specialization to
$d=3$ of a well-known fact about fine zonotopal tilings on zonotopes $Z(n,d)$
generated by cyclic vector configurations in ${\mathbb R}^d$ with an arbitrary
dimension $d$, the following is true:
\begin{numitem1} \label{eq: one_ijk}
for any $1\le i<j<k\le n$, a cubillage $Q$ has exactly one $ijk$-cube.
\end{numitem1}
The directed graph formed by the vertices and edges occurring in $Q$ is denoted
by $G_Q=(V_Q,E_Q)$ and we call the vertex set $V_Q$ regarded as a set-system in
$2^{[n]}$ the \emph{spectrum} of $Q$. The following property is of most
importance for us.
\begin{theorem} {\rm\cite{gal}} \label{tm:galash}
The correspondence $Q\mapsto V_Q$ gives a bijection between the set ${\bf Q}_n$
of cubillages on $Z(n,3)$ and the set ${\bf C}_n$ of maximal c-collections in
$2^{[n]}$.
\end{theorem}
Next, in our study of interrelations of s- and c-collections, we will use the
projection $\pi:{\mathbb R}^3\to{\mathbb R}^2$ along the second coordinate vector, i.e.,
given by $\pi(x,y,z):=(x,z)$. Then $\pi(Z)$ is the zonogon generated by the
vectors $\pi(\theta_1),\ldots, \pi(\theta_n)$ (which lie in the ``upper
half-plane'' and are numbered clockwise, in view of~\refeq{cyc_conf}). Let us
represent the boundary ${\rm bd}(Z)$ of $Z$ as the union $Z^{\,\rm fr}\cup Z^{\,\rm re}$ of its
\emph{front} and \emph{rear} sides, i.e., $Z^{\,\rm fr}$ ($Z^{\,\rm re}$) is formed by the
points $(x,y,z)\in Z$ with $y$ minimal (resp. maximal) in $\pi^{-1}(x,z)$. Then
$Z^{\,\rm rim}:=Z^{\,\rm fr}\cap Z^{\,\rm re}$ is the closed piecewise linear curve formed by the union
of two directed paths connecting the vertices $\emptyset$
and $[n]$ in $G_Q$. We call $Z^{\,\rm rim}$ the \emph{rim} of $Z$.
Condition~\refeq{cyc_conf} ensures that
\begin{numitem1} \label{eq:fr-rear}
the maximal affine sets in $Z^{\,\rm fr}$ and $Z^{\,\rm re}$ are rhombi which are projected
by $\pi$ to the standard and antistandard tilings in $\pi(Z)$, respectively
(defined in Sect.~\SSEC{rhomb_til}).
\end{numitem1}
We identify $Z^{\,\rm fr}$ and $Z^{\,\rm re}$ with the corresponding polyhedral complexes.
Finally, for $h=0,1,\ldots,n$, the intersection of $Z$ with the horizontal
plane $z=h$ is denoted as $\Sigma_h$ and called the $h$-th \emph{section} of
$Z$; the definition of $\Theta$ implies that $\Sigma_h$ contains all vertices
$X$ of size $|X|=h$ in $Q$.
\section{S-membranes} \label{sec:smembr}
This section starts with an example of cubillages whose spectra are neither
s-pure nor w-pure. Then we consider a fixed cubillage $Q$ on $Z(n,3)$,
introduce a class of 2-dimensional subcomplexes in it, called
\emph{s-membranes}, explain that each of them is isomorphic to a rhombus tiling
$T$ on $Z(n,2)$ such that $V_T\subset V_Q$, and vice versa (thus obtaining a
geometric description of ${\bf S}^\ast(V_Q)$), and demonstrate some other
structural properties.
\subsection{An example} \label{ssec:example}
Consider the zonotope $Z=Z(4,3)$. The vertices of its boundary ${\rm bd}(Z)$ are the
intervals and co-intervals on the set $[4]$ (cf.~\refeq{fr-rear}), and there
are exactly two subsets of $[4]$ that are neither intervals nor co-intervals,
namely, $13$ and $24$. So $13$ and $24$ are just those ``points'' from
$2^{[4]}$ that are contained in the interior of $Z$. Since they are not chord
separated, there are exactly two cubillages on $Z$: one containing $13$ and the
other containing $24$ (taking into account that the vertices of ${\rm bd}(Z)$ belong
to any cubillage and that each cubillage is determined by its spectrum, by
Theorem~\ref{tm:galash}).
\begin{lemma} \label{lm:example}
For the cubillage $Q$ on $Z(4,3)$ that contains $13$, the set $V_Q$ is neither
s-pure nor w-pure.
\end{lemma}
\begin{proof}
Let $R,V_1,V_2$ be the sets of vertices occurring in the rim, front side, and rear side of
$Z(4,3)$, respectively (for definitions, see the end of Sect.~\SSEC{cubil}).
Then $R$ consists of the eight intervals of the form $[i]$ or $[4]-[i]$ ($0\le
i\le 4$); $V_1$ is $R$ plus the intervals $2,3,23$; and $V_2$ is $R$ plus the
co-intervals $14, 124, 134$. Note that the vertices (intervals) of the rim of
any zonotope $Z(n,3)$ are strongly separated from any subset of $[n]$.
Consider the set $S:=R\cup\{2,124\}$. It is a subset of
$$
V_Q=V_1\cup V_2\cup \{13\}= R\cup\{2,3,23,14,124,134,13\}.
$$
Observe that $S$ is an s-collection (since $2\subset 124$) but not an
s-collection of maximum size in $2^{[4]}$ (since $|S|=10$ but $|V_1|=11$). We
have $3,23\rhd 124$ but $|3|,|23|<|124|$, and $2\rhd 14,134,13$ but $|2|<
|14|,|134|,|13|$. Thus, $S$ is a maximal s-collection and a maximal
w-collection in $V_Q$. On the other hand, the set $V_1$ of all intervals in $[4]$ is
contained in $V_Q$ and, being the spectrum of the standard tiling (regarded also
as a combi), is a maximal s-collection and a maximal w-collection, of size 11.
So $V_Q$ contains maximal s-collections (respectively, w-collections) of different
sizes, yielding the result.
\end{proof}
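\noindent\textbf{Computational illustration.} The counting in this proof is small enough to be verified mechanically. The Python sketch below (an illustration only) checks that $S=R\cup\{2,124\}$ is an s-collection of size 10 while $V_1$, the set of all intervals in $[4]$, is an s-collection of size 11; here strong separation of two sets is tested via the standard condition that one of the differences $A-B$, $B-A$ lies entirely below the other (or is empty).
\begin{verbatim}
from itertools import combinations

def strongly_separated(A, B):
    # A and B are strongly separated if A-B lies entirely below B-A,
    # or vice versa (in particular, if one of the differences is empty).
    D1, D2 = A - B, B - A
    return not D1 or not D2 or max(D1) < min(D2) or max(D2) < min(D1)

def is_s_collection(S):
    return all(strongly_separated(A, B) for A, B in combinations(S, 2))

f = frozenset
R  = {f(), f({1}), f({1, 2}), f({1, 2, 3}), f({1, 2, 3, 4}),
      f({2, 3, 4}), f({3, 4}), f({4})}           # vertices of the rim of Z(4,3)
S  = R | {f({2}), f({1, 2, 4})}
V1 = R | {f({2}), f({3}), f({2, 3})}             # all intervals in [4]
assert is_s_collection(S)  and len(S)  == 10
assert is_s_collection(V1) and len(V1) == 11
\end{verbatim}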
In fact, using results on s- and w-membranes given later, one can strengthen
the above lemma by showing that for $n\ge 4$, the spectrum $V_Q$ of \emph{any}
cubillage $Q$ on $Z(n,3)$ is neither s-pure nor w-pure (we omit a proof here).
\subsection{S-membranes} \label{ssec:smembr}
Like the definitions of $Z^{\,\rm fr}$ and $Z^{\,\rm re}$ from Sect.~\SSEC{cubil}, for
$S\subseteq Z=Z(n,3)$, let $S^{\,\rm fr}$ ($S^{\,\rm re}$) denote the set of points
$(x,y,z)\in S$ with $y$ minimum (resp. maximum) in $S\cap\pi^{-1}(x,z)$,
called the \emph{front} (resp. \emph{rear}) part of $S$. (In other words,
$S^{\,\rm fr}$ and $S^{\,\rm re}$ are what is seen in $S$ in the directions $(0,1,0)$ and
$(0,-1,0)$, respectively.) This is extended in a natural way when we deal with
a subcomplex of a cubillage in $Z$.
\noindent \emph{Example.} In view of~\refeq{cyc_conf}, for a cube
$\zeta=\zeta(X|ijk)$ (where $i<j<k$), ~$\zeta^{\,\rm fr}$ is formed by the rhombi
$\rho(X|ij), \rho(X|jk), \rho(Xj|ik)$, while $\zeta^{\,\rm re}$ is formed by
$\rho(X|ik), \rho(Xi|jk), \rho(Xk|ij)$. See the picture.
\begin{center}
\includegraphics[scale=0.9]{swc6}
\end{center}
\noindent\textbf{Definition.} A 2-dimensional subcomplex $M$ of a cubillage $Q$
is called an \emph{s-membrane} if $\pi$ is injective on $M$ and sends it
to a rhombus tiling on the zonogon $Z(n,2)$. In other words, $M$ is a disk
(i.e., a shape homeomorphic to a closed 2-dimensional disc) whose boundary coincides with $Z^{\,\rm rim}$
and such that $M=M^{\,\rm fr}$.
In particular, both $Z^{\,\rm fr}$ and $Z^{\,\rm re}$ are s-membranes. Therefore, up to a
piecewise linear deformation, we may think of $M$ as a rhombus tiling whose
spectrum is contained in $V_Q$. So the vertex set $V_M$ of $M$ belongs to ${\bf
S}^\ast(V_Q)$. Moreover, a sharper property holds (which can be deduced
from general results on higher Bruhat orders and related aspects
in~\cite{MS,VK,zieg}; yet we prefer to give a direct and shorter proof).
\begin{theorem} \label{tm:smembr-scoll}
The correspondence $M\mapsto V_M$ gives a bijection between the s-membranes in
a cubillage $Q$ on $Z(n,3)$ and the set ${\bf S}^\ast(V_Q)$ of maximum by size
s-collections contained in $V_Q$.
\end{theorem}
In light of explanations above, it suffices to prove the following
\begin{prop} \label{pr:T-smembr}
For any rhombus tiling $T$ on $Z(n,2)$ with $V_T\subset V_Q$, there exists an
s-membrane $M$ in $Q$ isomorphic to $T$.
\end{prop}
This proposition will be proved in Sect.~\SSEC{appl_contr}, based on a more
detailed study of structural features of cubillages and operations on them
given in the next subsection.
\subsection{Pies in a cubillage} \label{ssec:pies}
Given a cubillage $Q$ on $Z=Z(n,3)$, let $\Pi_i=\Pi_i(Q)$ be the part of $Z$
covered by cubes of $Q$ having edges of color $i$, or, let us say,
$i$-\emph{cubes}. When it is not confusing, we also think of $\Pi_i$ as the set
of $i$-cubes or as the corresponding subcomplex of $Q$. We refer to $\Pi_i$ as
the $i$-th \emph{pie} of $Q$. When $i=n$ or $i=1$, the pie structure becomes rather
transparent, which will enable us to apply some useful reductions.
To clarify the structure of $\Pi_n$, we first consider the set $U$ of $n$-edges
lying in ${\rm bd}(Z)$. Since the tilings on the sides $Z^{\,\rm fr}$ and $Z^{\,\rm re}$ of $Z$
are isomorphic to $T^{\rm st}_n$ and $T^{\rm ant}_n$, respectively (cf.~\refeq{fr-rear}),
one can see that
\begin{numitem1} \label{eq:C-Cp}
the beginning vertices of edges of $U$ are precisely those contained in the
cycle $C=P'\cup P''$, where $P'$ is the subpath of the left path of $Z^{\,\rm rim}$ from
the bottom vertex $\emptyset$ to $[n-1]$, and $P''$ is the path in $Z^{\,\rm fr}$
passing through the vertices $[n-1]-[i]$ for $i=n-1,n-2,\ldots,0$; in other words, $C$
is the rim of the zonotope $Z(n-1,3)$ generated by
$\theta_1,\ldots,\theta_{n-1}$.
\end{numitem1}
\noindent Accordingly, the end vertices of edges of $U$ lie on the cycle
$C':=C+\theta_n$; this $C'$ is viewed as the rim of the zonotope $Z(n-1,3)$
shifted by $\theta_n$. The area of ${\rm bd}(Z)$ between $C$ and $C'$ is subdivided
into $2(n-1)$ rhombi of types $\ast n$ (where $\ast$ means an element of
$[n-1]$); we call this subdivision the \emph{belt} of $\Pi_n$. See the picture
with $n=5$.
\begin{center}
\includegraphics[scale=1]{swc7}
\end{center}
Now fix an $n$-edge $e=(X,Xn)$ not on ${\rm bd}(Z)$ and consider the set $S$ of
cubes in $\Pi_n$ containing $e$. Each cube $\zeta\in S$ is viewed as the
(Minkowski) sum of some rhombus $\rho$ of type $ij$ with $i<j<n$ (this rhombus contains the vertex $X$) and the segment
$[0,\theta_n]$, and an important fact is that $\rho$ belongs to the front side
of $\zeta$ (in view of~\refeq{cyc_conf} and $n>i,j$). Gluing together such
rhombi $\rho$, we obtain a disk lying on the front side of the shape $\widehat
S:=\bigcup_{\zeta\in S}\zeta$ and containing $X$ as an interior point, and $\widehat S$ is
just the sum of this disk and $[0,\theta_n]$. Based on this local behavior, one
can conclude that
\begin{numitem1} \label{eq:Pin}
$\Pi_n$ is the sum of a disk $D$ and the segment $[0,\theta_n]$; this disk lies
in $\Pi^{\,\rm fr}_n$ and its boundary is formed by the cycle $C$ as in~\refeq{C-Cp}.
\end{numitem1}
Then $D':=D+\theta_n$ is a disk in $\Pi^{\,\rm re}_n$ whose boundary is the cycle $C'$
as above.
The facts that $D^{\rm fr}=D$ and that $C$ is the rim of the zonotope $Z(n-1,3)$ imply that $D$ is subdivided
into rhombi which (being projected by $\pi$) form a rhombus tiling on
$Z(n-1,2)$; similarly for $D'$.
In what follows we write $\Pi^-_n$ for $D$, $\Pi^+_n$ for $D'$, $Z^-_n$
($Z^+_n$) for the (closed) subset of $Z$ between $Z^{\,\rm fr}$ and $\Pi^-_n$ (resp.
between $\Pi^+_n$ and $Z^{\,\rm re}$), and $Q^-_n$ ($Q^+_n$) for the portion (\emph{partial
cubillage}) of $Q$ lying in $Z^-_n$ (resp. $Z^+_n$). One can see that
\begin{numitem1} \label{eq:Z-Z+}
the edges of $G_Q$ connecting $Z^-_n$ and $Z^+_n$ are directed from the former
to the latter and are exactly the $n$-edges of $Q$; as a consequence, each
vertex of $Q^-_n$ is a subset of $[n-1]$ and each vertex of $Q^+_n$ is of the form $Xn$,
where $X\subseteq [n-1]$.
\end{numitem1}
The following operation is of importance.
\noindent\textbf{$n$-Contraction.} Delete the interior of $\Pi_n$ and shift
$Z^+_n$ together with the cubillage $Q^+_n$ filling it by the vector
$-\theta_n$. As a result, the disks $\Pi^-_n$ and $\Pi^+_n$ merge and we obtain
a cubillage on the zonotope $Z(n-1,3)$; it is denoted by $Q^{\,\rm con}_n$ and called
the \emph{contraction} of $Q$ by (the color) $n$.
Note that $\Pi^-_n$ becomes an s-membrane of $Q^{\,\rm con}_n$. Also the following is
obvious:
\begin{numitem1} \label{eq:contr}
each cube $\zeta=\zeta(X|ijk)$ of $Q$ with $k<n$ (i.e. not contained in
$\Pi_n$) one-to-one corresponds to a cube $\zeta'$ of $Q^{\,\rm con}_n$; this $\zeta'$
is of the form $\zeta(X|ijk)$ if $\zeta\in Q^-_n$, and $\zeta(X-n|ijk)$ if
$\zeta\in Q^+_n$.
\end{numitem1}
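\noindent\textbf{Computational illustration.} At the level of spectra the $n$-contraction acts very simply: by~\refeq{Z-Z+}, a vertex of $Q$ either does not contain $n$ (and is kept as it is) or contains $n$ (and loses it when shifted by $-\theta_n$). A Python sketch (an illustration only):
\begin{verbatim}
def contract_spectrum(V, n):
    """Spectrum of the n-contraction: every vertex containing n loses it,
    all other vertices are kept (the disks Pi^-_n and Pi^+_n merge)."""
    return {X - {n} for X in V}

# Example: the spectrum of the cubillage on Z(4,3) containing 13 (all subsets
# of [4] except 24); its 4-contraction has all 8 subsets of [3], i.e., the
# spectrum of the single cube forming a cubillage on Z(3,3).
f = frozenset
V = {f(t) for t in [(), (1,), (2,), (3,), (4,), (1, 2), (1, 3), (1, 4), (2, 3),
                    (3, 4), (1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4),
                    (1, 2, 3, 4)]}
print(sorted(map(sorted, contract_spectrum(V, 4))))
\end{verbatim}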
Next we introduce a converse operation.
\noindent\textbf{$n$-Expansion.} Let $M$ be an s-membrane in a cubillage $Q'$
on the zonotope $Z'=Z(n-1,3)$. Define $Z^-(M)$ ($Z^+(M)$) to be the part of
$Z'$ between $(Z')^{\rm fr}$ and $M$ (resp. between $M$ and $(Z')^{\rm re}$),
and define $Q^-(M)$ ($Q^+(M)$) to be the subcubillage of $Q'$ contained in
$Z^-(M)$ (resp. $Z^+(M)$). The $n$-\emph{expansion} operation for $(Q,M)$
consists in shifting $Z^+(M)$ together with $Q^+(M)$ by $\theta_n$ and filling
the space ``between'' $M$ and $M+\theta_n$ by the corresponding set of
$n$-cubes, denoted as $Q^0_n(M)$. More precisely, each rhombus $\rho(X|ij)$ in
$M$ (where $i<j<n$ and $X\subset [n-1]-\{i,j\}$) generates the cube
$\zeta(X|ijn)$ of $Q^0_n(M)$. A fragment of the operation is illustrated in the
picture.
\begin{center}
\includegraphics[scale=1]{swc8}
\end{center}
Using the facts that $M^{\,\rm fr}=M$ and that the boundary cycle of $M$ is the rim of
$Z'$, one can see that
\begin{numitem1} \label{eq:Q-Q+}
taken together, the sets of cubes in $Q^-(M)$, $Q^0_n(M)$ and
$\{\zeta+\theta_n\colon \zeta\in Q^+(M)\}$ form a cubillage in $Z=Z(n,3)$.
\end{numitem1}
We denote this cubillage as $Q(M)=Q_n(Q',M)$ and call it the
$n$-\emph{expansion} of $Q'$ using $M$. There is a natural relation between the
$n$-contraction and $n$-expansion operations, as follows. (A proof is
straightforward and left to the reader as an exercise.)
\begin{prop} \label{pr:contr-exp}
The correspondence $(Q',M)\mapsto Q(M)$, where $Q'$ is a cubillage on
$Z(n-1,3)$, $M$ is an s-membrane in $Q'$, and $Q(M)$ is the $n$-expansion of
$Q'$ using $M$, gives a bijection between the set of such pairs $(Q',M)$ in
$Z(n-1,3)$ and the set of cubillages in $Z(n,3)$. Under this correspondence,
$Q'$ is the $n$-contraction of $Q=Q(M)$ and $M$ is the image of the $n$-pie in
$Q$ under the $n$-contraction operation. \ \vrule width.1cm height.3cm depth0cm
\end{prop}
We will also take advantage of handling the 1-pie of a cubillage $Q$ on
$Z(n,3)$ and applying the corresponding \emph{1-contraction} and
\emph{1-expansion} operations, which are symmetric to those concerning the
color $n$ as above. More precisely, if we make a mirror reflection of $\Theta$
by replacing each generator $\theta_i=(x_i,y_i,1)$ by $(-x_i,y_i,1)$, denoted
as $\theta'_{n+1-i}$, then the 1-edges of $Q$ turn into $n$-edges of the
corresponding cubillage $Q'$ on $Z(\theta'_1,\ldots,\theta'_n)$, and the 1-pie
of $Q$ turns into the $n$-pie of $Q'$. This leads to the corresponding
counterparts of~\refeq{C-Cp}--\refeq{Q-Q+} and Proposition~\ref{pr:contr-exp}.
\noindent\textbf{Remark 2.} The usage of $j$-pies and operations on them is less
advantageous when $1<j<n$. The trouble is that if a cube $\zeta$ containing a
$j$-edge is of the form $\zeta(X|ijk)$, where $i<j<k$, then $\zeta$ is the sum
of the rhombus $\rho=\rho(X|ik)$ and segment $[0,\theta_j]$, but $\rho$ lies on
the rear side of $\zeta$. For this reason, the relation between $j$-pies and
rhombus tilings is harder to visualize. However, we will not use $j$-pies
with $1<j<n$ in this paper.
\subsection{Applications of the contraction and expansion operations} \label{ssec:appl_contr}
We start with the following assertion, using terminology and notation as above.
\begin{prop} \label{pr:edge_rh_cube}
Let $Q$ be a cubillage on $Z=Z(n,3)$.
\begin{itemize}
\item[\rm(i)] If $Q$ contains vertices $X$ and $Xi$, then it also contains the edge
$(X,Xi)$.
\item[\rm(ii)] If $Q$ contains vertices $X,Xi,Xj,Xij$ ($i<j$), then it also contains the
rhombus $\rho(X|ij)$.
\item[\rm(iii)] If $Q$ contains a set $S$ of eight vertices
$X,Xi,Xj,Xk,Xij,Xik,Xjk,Xijk$ ($i<j<k$), then it also contains the cube $\zeta(X|ijk)$.
\end{itemize}
\end{prop}
\begin{proof}
We use induction on $n$. Let us prove~(iii), denoting by $Q'$ the cubillage on
$Z(n-1,3)$ that is the $n$-contraction of $Q$. Three cases are possible.
(a) Suppose that $k<n$ and $n\notin X$. Then $S$ belongs to the vertex set of
the subcubillage $Q^-_n$ (cf.~\refeq{Z-Z+}) and, therefore, to the vertex set
of $Q'$. By induction, $Q'$ contains the cube on $S$, namely,
$\zeta=\zeta(X|ijk)$. From~\refeq{Q-Q+} and Proposition~\ref{pr:contr-exp} it
follows that under the $n$-expansion operation for $Q'$ using the s-membrane
$M:=\Pi^-_n$, $\zeta$ becomes a cube in $Q$, as required. (Recall that
$\Pi^-_n$ is the corresponding disk in $\Pi^{\,\rm fr}_n$, defined in the paragraph
before~\refeq{Z-Z+}.)
(b) Suppose that $n\in X$. Then $k<n$ and $S$ belongs to the vertex set of
$Q^+_n$. Therefore, $S':=\{Y-n\colon Y\in S\}$ is contained in $V_{Q'}$ and,
moreover, in the vertex set of the subcubillage $Q^+(M)$ of $Q'$ (where $M$ is
as in~(a)). So, by induction, $Q^+(M)$ contains the cube $\zeta'=\zeta(X-n|ijk)$.
The $n$-expansion operation for $Q'$ using $M$ transfers $\zeta'$ to the
desired cube $\zeta(X|ijk)$ in $Q$.
(c) Now let $n\notin X$ and $k=n$. Then the set $S^-:=\{X,Xi,Xj,Xij\}$ belongs
to $\Pi^-_n$ (and $Q^-_n$), and the set $S^+:=\{Xn,Xin,Xjn,Xijn\}$ to $\Pi^+_n$
(and $Q^+_n$). The $n$-contraction operation shifts $S^+$ by $-\theta_n$ and
merges it with $S^-$ (which lies in $M$). By induction, $Q'$ contains the rhombus
$\rho=\rho(X|ij)$. The $n$-expansion operation for $Q'$ using $M$ transforms
$\rho$ into the cube $\zeta(X|ijk)$ in $Q^0_n(M)$, and therefore, in $Q$
(cf.~\refeq{Q-Q+}), as required.
The assertions in (i) and (ii) are proved in a similar (and even simpler) fashion.
\end{proof}
Based on this proposition, we now prove Proposition~\ref{pr:T-smembr}.
Let $Q$ be a cubillage on $Z(n,3)$, and $T$ a rhombus tiling on $Z(n,2)$ with
$V_T\subset V_Q$ (regarding vertices as subsets of $[n]$). For each rhombus
$\rho=\rho(X|ij)$ in $T$, the vertices of the form $X,Xi,Xj,Xij$ belong to $V_Q$
as well, and by~(ii) in Proposition~\ref{pr:edge_rh_cube}, $Q$ contains
a rhombus $\rho'$ on these vertices. Then $\rho=\pi(\rho')$. Combining such
rhombi $\rho'$ in $Q$ determined by the rhombi $\rho$ on $T$, we obtain a
2-dimensional subcomplex $M$ in $Q$ which is bijectively mapped by $\pi$ onto
$T$. Hence $M$ is an s-membrane in $Q$ isomorphic to $T$, yielding
Proposition~\ref{pr:T-smembr} and Theorem~\ref{tm:smembr-scoll}.
\section{The lattice of s-membranes} \label{sec:lattice_s}
As mentioned in the Introduction, the set ${\bf S}^\ast(C)$ of maximal by size
strongly separated collections $S\subset 2^{[n]}$ that are contained in a fixed
maximal chord separated collection $C\subset 2^{[n]}$ has nice structural
properties. Due to
Theorems~\ref{tm:LZ},\,\ref{tm:galash},\,\ref{tm:smembr-scoll}, it is more
enlightening to deal with equivalent geometric objects, by considering a
cubillage $Q$ on the zonotope $Z=Z(n,3)$ and the set ${\cal M}(Q)$ of s-membranes
in $Q$.
Using notation as in Sect.~\SSEC{pies}, for an s-membrane $M\in {\cal M}(Q)$, we
write $Z^-(M)$ ($Z^+(M)$) for the (closed) region of $Z$ bounded by the front
side $Z^{\,\rm fr}$ of $Z$ and $M$ (resp. by $M$ and the rear side $Z^{\,\rm re}$) and write
$Q^-(M)$ ($Q^+(M)$) for the set of cubes of $Q$ contained in $Z^-(M)$ (resp.
$Z^+(M)$). The sets $Q^-(M)$ and $Q^+(M)$ are important in our analysis and we
call them the \emph{front heap} and the \emph{rear heap} of $M$, respectively.
Consider two s-membranes $M,M'\in{\cal M}(Q)$ and form the sets $N:=(M\cup M')^{\rm
fr}$ and $N':=(M\cup M')^{\rm re}$. Then both $N$ and $N'$ are mapped by $\pi$ bijectively onto the zonogon $Z(n,2)$. Also one
can see that for any rhombus $\rho$ in $M$, if some interior point of $\rho$
belongs to $N$ ($N'$), then the entire $\rho$ lies in $N$ (resp. $N'$), and
similarly for $M'$. These observations imply that:
\begin{numitem1} \label{eq:NNp}
\begin{itemize}
\item[(i)] both $N$ and $N'$ are s-membranes in $Q$;
\item[(ii)] the front heap $Q^-(N)$ of $N$ is equal to $Q^-(M)\cap Q^-(M')$,
and the front heap $Q^-(N')$ of $N'$ is equal to $Q^-(M)\cup Q^-(M')$.
\end{itemize}
\end{numitem1}
(Accordingly, the rear heaps of $N$ and $N'$ are $Q^+(N)=Q^+(M)\cup Q^+(M')$
and $Q^+(N')=Q^+(M)\cap Q^+(M')$.) By~\refeq{NNp}, the front heaps of
s-membranes constitute a distributive lattice, which gives rise to a similar
property for the s-membranes themselves.
\begin{prop} \label{pr:latticeMQ}
The set ${\cal M}(Q)$ of s-membranes in $Q$ is endowed with the structure of
distributive lattice in which the meet and join operations for
$M,M'\in{\cal M}(Q)$ produce the s-membranes $M\wedge M'$ and $M\vee M'$ such that
$Q^-(M\wedge M')=Q^-(M)\cap Q^-(M')$ and $Q^-(M\vee M')=Q^-(M)\cup Q^-(M')$.
\ \vrule width.1cm height.3cm depth0cm
\end{prop}
It is useful to give an alternative description for this lattice, which reveals
an intrinsic structure and a connection with flips in rhombus tilings. It is
based on a natural partial order on $Q$ defined below. Recall that
for a cube $\zeta$, the front side $\zeta^{\,\rm fr}$ and the rear side $\zeta^{\,\rm re}$ are
formed by the rhombi as indicated in the Example in Sect.~\SSEC{smembr}.
\noindent\textbf{Definition.} For $\zeta,\zeta'\in Q$, we say that $\zeta$
\emph{immediately precedes} $\zeta'$ if $\zeta^{\,\rm re}\cap(\zeta')^{\rm fr}$
contains a rhombus.
\begin{lemma} \label{lm:acyclic}
The directed graph $\Gamma_Q$ whose vertices are the cubes of $Q$ and whose
edges are the pairs $(\zeta,\zeta')$ such that $\zeta$ immediately precedes
$\zeta'$ is acyclic.
\end{lemma}
\begin{proof}
Consider a directed path $P=(\zeta_0,e_1,\zeta_1,\ldots,e_p,\zeta_p)$ in
$\Gamma_Q$ (where $e_r$ is the edge $(\zeta_{r-1},\zeta_r)$). We show that $P$
is not a cycle (i.e., $\zeta_0\ne\zeta_p$ when $p>0$) by using induction on
$n$. This is trivial if $n=3$.
We know that for any $n$-cube $\zeta=\zeta(X|ijn)$ of $Q$, its front rhombus
$\rho(X|ij)$ belongs to the front side $\Pi^{\,\rm fr}_n$ of the $n$-pie $\Pi_n$ of $Q$,
its rear rhombus $\rho(Xn|ij)$ belongs to the rear side $\Pi^{\,\rm re}_n$, and the
other rhombi of $\zeta$, namely, $\rho(X|in),\,\rho(X|jn), \, \rho(Xj|in), \,
\rho(Xi|jn)$ lie in the interior or belt of $\Pi_n$ (for definitions, see
Sect.~\SSEC{pies}). This implies that if for some $r$, the cubes
$\zeta_{r-1}$ and $\zeta_r$ belong to different sets among $Q^-_n,\, Q^+_n,\,
\Pi_n$, then the edge $e_r$ goes either from $Q^-_n$ to $\Pi_n$, or from
$\Pi_n$ to $Q^+_n$. Therefore, $P$ crosses each of the disks $\Pi^{\,\rm fr}_n$ and
$\Pi^{\,\rm re}_n$ at most once, implying that $\zeta_0=\zeta_p$ would be possible
only if the vertices of $P$ are entirely contained in exactly one of $Q^-_n, \,
Q^+_n,\, \Pi_n$.
Let $Q'$ be the $n$-contraction of $Q$. We assume by induction that
$\Gamma_{Q'}$ is acyclic. Then the cases $\zeta_r\in Q^-_n$ and $\zeta_r\in
Q^+_n$ are impossible (assuming $\zeta_0=\zeta_p$), taking into account that
$Q'$ is obtained by combining $Q^-_n$ and the cubes of $Q^+_n$ shifted by
$-\theta_n$.
It remains to show that the subgraph $\Gamma'$ of $\Gamma_Q$ induced by the
cubes of $\Pi_n$ is acyclic. To see this, observe that for an $n$-cube
$\zeta=\zeta(X|ijn)$ of $Q$, its rear rhombi lying in the interior or belt of
$\Pi_n$ are $\rho_1:=\rho(X|in)$ and $\rho_2:=\rho(Xi|jn)$. So if $\zeta$ and
another $n$-cube $\zeta'$ are connected by an edge $(\zeta,\zeta')$ in $\Gamma'$, then
$\zeta'$ shares with $\zeta$ either $\rho_1$ or $\rho_2$. Let us associate with
$\zeta,\zeta'$ the corresponding rhombi $\rho,\rho'$ on $\Pi^{\,\rm fr}_n$,
respectively. Then $\rho=\rho(X|ij)$ and $\rho'=\rho(X'|i'j')$ for some $X'$ and
$i'<j'$. These rhombi have an edge in common; namely, the edge $e=(X,Xi)$ if
$\zeta,\zeta'$ share the rhombus $\rho_1$, and the edge $e'=(Xi,Xij)$ if
$\zeta,\zeta'$ share $\rho_2$. Note that (under the projection by $\pi$) both
$e,e'$ belong to the \emph{left} boundary of $\rho$, in view of
$i<j$.
These observations show that the subgraph $\Gamma'$ is isomorphic to the graph
$\Gamma''$ whose vertices are the rhombi in $\Pi^{\,\rm fr}_n$ and whose edges are the
pairs $(\rho,\rho')$ such that $\rho,\rho'$ share an edge lying in the left
boundary of $\rho$ (and in the right boundary of $\rho'$). This $\Gamma''$ is
acyclic. (Indeed, consider the rhombus tiling $T:=\pi(\Pi^{\,\rm fr}_n)$ on $Z(n-1,2)$.
Then any directed path from $\emptyset$ to $[n-1]$ in $T$ may be crossed by
an edge of $\Gamma''$ only from right to left, not back,
whence $\Gamma''$ is acyclic.)
\end{proof}
\begin{corollary} \label{cor:part_order}
The graph $\Gamma_Q$ induces a partial order $\prec$ on the cubes of $Q$.
Moreover, the ideals of $(Q,\prec)$ (i.e., the subsets $Q'\subseteq Q$
satisfying $(\zeta\in Q',\; \zeta'\prec \zeta \Longrightarrow \zeta'\in Q')$)
are exactly the front heaps $Q^-(M)$ of s-membranes $M\in{\cal M}(Q)$.
\ \vrule width.1cm height.3cm depth0cm
\end{corollary}
Here the second assertion can be concluded from the fact that the ideals of
$(Q,\prec)$ are the sets of cubes $Q'\subseteq Q$ such that $\Gamma_Q$ has no
edge going from $Q-Q'$ to $Q'$.
Using Corollary~\ref{cor:part_order}, we now explain a relation to strong
flips in rhombus tilings. For convenience we identify an s-membrane
$M\in{\cal M}(Q)$ with the corresponding rhombus tiling $\pi(M)$ on $Z(n,2)$. In
particular, the minimal s-membrane $Z^{\,\rm fr}$ is identified with the standard
tiling $T^{\rm st}_n$, and the maximal s-membrane $Z^{\,\rm re}$ with the
antistandard tiling $T^{\rm ant}_n$.
Let $M\in{\cal M}(Q)$ be different from $T^{\rm st}_n$. Then the heap $J:=Q^-(M)$
is nonempty. Since $\Gamma_Q$ is acyclic, $J$ has a maximal element
$\zeta=\zeta(X|ijk)$ (i.e., there is no $\zeta'\in J$ with $\zeta\prec
\zeta'$). Then $M$ contains all rear rhombi of $\zeta$, namely, $\rho(X|ik),\;
\rho(Xi|jk),\; \rho(Xk|ij)$. They span the hexagon $H(X|ijk)$ having
$\Lambda$-configuration and we observe that
\begin{numitem1} \label{eq:flip_in_M}
for $M,\,J,\, \zeta$ as above, the set $J':=J-\{\zeta\}$
is an ideal of $(Q,\prec)$ as well, and the s-membrane (rhombus tiling) $M'$
with $Q^-(M')=J'$ is obtained from $M$ by replacing the rhombi of
$\zeta^{\,\rm re}$ by the rhombi forming $\zeta^{\,\rm fr}$ (namely, $\rho(X|ij),\;
\rho(X|jk),\;\rho(Xj|ik)$), or, in other words, by the lowering flip involving the
hexagon $H(X|ijk)$ (see the picture in the end of Sect.~\SSEC{rhomb_til}).
\end{numitem1}
(Of especial interest are \emph{principal} ideals of $(Q,\prec)$; each of them
is determined by a cube $\zeta\in Q$ and consists of all $\zeta'\in Q$ from
which $\zeta$ is reachable by a directed path in $\Gamma_Q$. The s-membrane
corresponding to such an ideal admits only one lowering flip within $Q$,
namely, that determined by the rhombi of $\zeta$. Symmetrically: considering
$M\in{\cal M}(Q)$ different from $T^{\rm ant}_n$ and its rear heap $R:=Q^+(M)$, and
choosing an element $\zeta\in R$ that admits no $\zeta'\in R$ with $\zeta'\prec
\zeta$, we can make the raising flip by replacing in $M$ the rhombi of
$\zeta^{\,\rm fr}$ by the ones of $\zeta^{\,\rm re}$. When $R$ is formed by some $\zeta\in Q$
and all $\zeta'\in Q$ reachable from $\zeta$ by a directed path in $\Gamma_Q$,
then $M$ admits only one raising flip within $Q$, namely, that
determined by the rhombi of $\zeta$.)
In terms of maximal s-collections, \refeq{flip_in_M} together with
Proposition~\ref{pr:latticeMQ} implies the following
\begin{corollary} \label{cor:flip_in_SC}
Let $C$ be a maximal chord separated collection in $2^{[n]}$. The set
${\bf S}^\ast(C)$ of maximal by size strongly separated collections in $C$ is a
distributive lattice with the minimal element ${\cal I}_n$ and the maximal element
{\rm co-}${\cal I}_n$ (being the set of intervals and the set of co-intervals in
$[n]$, respectively) in which $S\in{\bf S}^\ast(C)$ immediately precedes
$S'\in{\bf S}^\ast(C)$ if and only if $S'$ is obtained from $S$ by one raising
flip (``in the presence of six witnesses'').
\end{corollary}
\noindent\textbf{Remark 3.} Note that the set of all maximal s-collections in
$2^{[n]}$ is a poset (with the unique minimal element ${\cal I}_n$ and the unique
maximal element {\rm co-}${\cal I}_n$ and with the immediate precedence relation
given by strong flips as well); however, in contrast to ${\bf S}^\ast(C)$, this
poset is not a lattice already for $n=6$, as is shown in Ziegler~\cite{zieg}.
Note also that a triple $\tau$ of rhombi in an s-membrane $M\in{\cal M}(Q)$ that
spans a hexagon need not belong to one cube of $Q$ (in contrast to~(iii) in
Proposition~\ref{pr:edge_rh_cube} where $Q$ contains a cube if all \emph{eight}
vertices of this cube belong to $V_Q$). In this case, $(M,\tau)$ determines a
flip in the variety of all rhombus tilings on $Z(n,2)$ but not within
${\cal M}(Q)$.
\section{Embedding rhombus tilings in cubillages} \label{sec:embed_rt}
In this section we study cubillages on $Z(n,3)$ containing one or more fixed
s-membranes.
\subsection{Extending an s-membrane to a cubillage} \label{ssec:memb-to-cube}
We start with the following issue. Given a maximal strongly separated
collection $S\subset 2^{[n]}$, let ${\bf C}(S)$ be the set of maximal chord
separated collections containing $S$. How can one explicitly construct one such
c-collection? A naive method consists in choosing, step by step, a
new subset $X$ of $[n]$ and adding it to the current collection including $S$
whenever $X$ is chord separated from all its members. However, this method is
expensive as it may take exponentially many (w.r.t. $n$) steps.
An efficient approach involving geometric interpretations and using flips in
s-membranes consists in the following. We build in the ``empty'' zonotope
$Z=Z(n,3)$ the abstract s-membrane $M=M(S)$ with $V_M=S$, by embedding $S$ (as
the corresponding set of points) in $Z$ and forming the rhombus $\rho(X|ij)$
for each quadruple of the form $\{X,Xi,Xj,Xij\}$ in $S$. Then we construct a
cubillage $Q$ containing this s-membrane (thus obtaining $S\subset
V_Q\in{\bf C}(S)$, as required).
This is performed in two stages. At the first stage, assuming that $M$ is
different from $Z^{\,\rm fr}$ (equivalently, $\pi(M)\ne T^{\rm st}_n$), we grow, step by
step, a partial cubillage $Q'$ filling the region $Z^-(M)$ between $Z^{\,\rm fr}$ and
$M$, starting with $Q':=\emptyset$. At each step, the current $Q'$ is such that
$(Q')^{\rm re}=M$ and $(Q')^{\rm fr}$ forms an s-membrane $M'$. If $M'=Z^{\,\rm fr}$,
we are done. Otherwise $\pi(M')\ne T^{\rm st}_n$ implies that $M'$ contains at least one
triple of rhombi spanning a hexagon having $\Lambda$-configuration (see the end
of Sect.~\SSEC{rhomb_til}). We choose one hexagon $H=H(X|ijk)$ of this sort,
add to $Q'$ the cube $\zeta=\zeta(X|ijk)$ induced by $H$, and update $M'$
accordingly by replacing the rhombi of $H$ by the other three rhombi in $\zeta$
(which form $\zeta^{\,\rm fr}$); we say that the updated $M'$ is obtained from the
previous one by the \emph{lowering flip using} $\zeta$. And so on until we
reach $Z^{\,\rm fr}$.
At the second stage, acting in a similar way, we construct a partial cubillage
$Q''$ filling the region $Z^+(M)$ between $M$ and $Z^{\,\rm re}$. Namely, a current
$Q''$ is such that $(Q'')^{\rm fr}=M$, and $(Q'')^{\rm re}$ forms an s-membrane
$M''$. Unless $M''=Z^{\,\rm re}$, we choose in $M''$ a hexagon $H$ having
$Y$-configuration, add to $Q''$ the cube $\zeta$ induced by $H$ and update
$M''$ accordingly, thus making the \emph{raising flip using} $\zeta$. And so
on.
Eventually, $Q:=Q'\cup Q''$ becomes a complete cubillage in $Z$ containing $M$,
as required. Since the partial cubillages $Q',Q''$ are constructed
independently,
\begin{numitem1} \label{eq:direct_union}
the set ${\bf Q}(M)$ of cubillages on $Z=Z(n,3)$ containing a fixed s-membrane
$M$ is represented as the ``direct union'' of the sets ${\bf Q}^-(M)$ and
${\bf Q}^+(M)$ of partial cubillages filling $Z^-(M)$ and $Z^+(M)$,
respectively, i.e., ${\bf Q}(M)=\{Q'\cup Q''\colon Q'\in{\bf Q}^-(M),\; Q''\in
{\bf Q}^+(M)\}$.
\end{numitem1}
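\noindent\textbf{Computational illustration.} The two-stage construction above is straightforward to sketch at the level of spectra. In the Python fragment below (an illustration only, not an optimized implementation), flips are detected via the witness condition~\refeq{strong_flip} and its lowering counterpart, each flip records the cube it creates as a quadruple $(X,i,j,k)$, and the union of the vertex sets of the recorded cubes together with $S$ gives the spectrum of a cubillage containing the s-membrane $M(S)$.
\begin{verbatim}
from itertools import combinations

def find_flip(S, n, lowering):
    """Find a hexagon in the tiling with spectrum S (a set of frozensets) that
    admits a strong lowering flip (Lambda-configuration) or raising flip
    (Y-configuration), using the six-witness condition; return (X,i,j,k) or None."""
    for X in S:
        rest = [c for c in range(1, n + 1) if c not in X]
        for i, j, k in combinations(rest, 3):            # i < j < k
            wit = [X | {i}, X | {k}, X | {i, j}, X | {j, k}, X | {i, j, k}]
            mid = X | ({i, k} if lowering else {j})      # Xik resp. Xj
            if mid in S and all(w in S for w in wit):
                return X, i, j, k
    return None

def extend_to_c_collection(S, n):
    """Flip the maximal s-collection S down to the intervals and up to the
    co-intervals, recording the cube created by each flip; return the union
    of S with the vertex sets of the recorded cubes."""
    cubes = []
    for lowering in (True, False):
        cur = set(S)
        while True:
            flip = find_flip(cur, n, lowering)
            if flip is None:                     # reached T^st (resp. T^ant)
                break
            X, i, j, k = flip
            cubes.append((X, i, j, k))
            old, new = ((X | {i, k}, X | {j}) if lowering
                        else (X | {j}, X | {i, k}))
            cur.remove(old); cur.add(new)
    spec = set(S)
    for X, i, j, k in cubes:                     # the 8 vertices of each cube
        for r in range(4):
            for extra in combinations((i, j, k), r):
                spec.add(X | set(extra))
    return spec
\end{verbatim}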
\noindent\textbf{Remark 4.} When $M=Z^{\,\rm fr}$ ($M=Z^{\,\rm re}$), ~${\bf Q}^+(M)$ (resp.
${\bf Q}^-(M)$) becomes the set ${\bf Q}_n$ of all cubillages on $Z(n,3)$. The
latter set is connected by 3-flips (defined in the Introduction); moreover, a
similar property is valid for fine tilings on zonotopes of any dimension, as a
consequence of results in~\cite{MS}.
In light of this, one can consider an arbitrary s-membrane $M$ and ask about the
connectedness of the set ${\bf Q}(M)$ of cubillages w.r.t. 3-flips that preserve $M$.
This is equivalent to asking whether or not for any two partial cubillages
$Q,Q'\in{\bf Q}^-(M)$, there exists a sequence $Q_0,Q_1,\ldots,Q_p\in
{\bf Q}^-(M)$ such that $Q_0=Q$, $Q_p=Q'$, and each $Q_r$ ($r=1,\ldots,p$) is
obtained by a 3-flip from $Q_{r-1}$; and similarly for ${\bf Q}^+(M)$. The answer
to this question is affirmative (a proof is left to a forthcoming paper).
\subsection{Cubillages for two s-membranes} \label{ssec:two_smembr}
One can address the following issue. Suppose we are given two abstract
s-membranes $M,M'$ properly embedded in $Z=Z(n,3)$. When does there exist a
cubillage $Q$ on $Z$ containing both $M$ and $M'$? The answer is clear: if and
only if the set $V_M\cup V_{M'}$ is chord separated. However, one can ask: how
to construct such a $Q$ efficiently?
For simplicity, consider the case of ``non-crossing'' s-membranes, assuming
that $M$ is situated in $Z$ before $M'$, i.e., $M$ lies in $Z^-(M')$.
A partial cubillage $Q'$ filling $Z^-(M)$ and a partial cubillage $Q''$ filling
$Z^+(M')$ always exist and can be constructed by the method as in
Sect.~\SSEC{memb-to-cube}. So the only problem is to construct a partial
cubillage $\widetilde Q$ filling the space between $M$ and $M'$, i.e.,
$Z(M,M'):=Z^+(M)\cap Z^-(M')$; then $Q:=Q'\cup\widetilde Q\cup Q''$ is as required.
Conditions under which a required $\widetilde Q$ exists are given in the proposition
below.
We need some definitions. Consider a rhombus tiling $T$ on the zonogon
$Z'=Z(n,2)$ and a color $i\in[n]$. For each $i$-edge $e$ in $T$, let $m(e)$ be
the middle point on $e$, and for each $j\in [n]-\{i\}$, let $c(\rho)$ be the
central point of the $\{i,j\}$-rhombus $\rho$ in $T$ (where $\rho$ is the
$ij$-rhombus when $i<j$, and the $ji$-rhombus when $j<i$). For such a $\rho$
and the $i$-edges in it, say, $e$ and $e'$, connect $c(\rho)$ by straight-line
segments with each of $m(e)$ and $m(e')$. One easily shows that the union of
these segments over all $j$ produces a non-self-intersecting piecewise linear
curve connecting the middle points of the two $i$-edges on the left and right
boundaries of $Z'$, denoted as $D_i$ and called the $i$-th (undirected) \emph{dual
path} for $T$. (The set $\{D_1,\ldots, D_n\}$ forms, in a sense, a pseudo-line
arrangement.)
\noindent\textbf{Definitions.} Let $1\le i<j<k\le n$ and let $\rho$ be the
$ik$-rhombus in $T$. The triple $ijk$ is called \emph{normal} if $\rho$ lies
above $D_j$, and an \emph{inversion} for $T$ if $\rho$ lies below $D_j$. The
set of inversions for $T$ is denoted by ${\rm Inv}(T)$. Also we say that a triple
$ijk$ in $T$ is \emph{elementary} if the rhombi of types $ij$, $ik$ and $jk$ in $T$
span a hexagon (which has $Y$-configuration if $ijk$ is normal, and
$\Lambda$-configuration if $ijk$ is an inversion).
See the picture where a normal triple (an inversion) $ijk$ is illustrated in
the left (resp. right) fragment and the corresponding dual paths are drawn by
dotted lines.
\begin{center}
\includegraphics[scale=1]{swc9}
\end{center}
\begin{prop} \label{pr:two_smbr}
Let $M,M'$ be two s-membranes in $Z=Z(n,3)$ such that $M\subset Z^-(M')$. Then
a partial cubillage $\widetilde Q$ filling $Z(M,M')$ (and therefore a cubillage
on $Z$ containing both $M,M'$) exists if and only if
${\rm Inv}(M)\subseteq{\rm Inv}(M')$. Such a $\widetilde Q$ consists of
$|{\rm Inv}(M')|-|{\rm Inv}(M)|$ cubes and can be constructed efficiently.
\end{prop}
One direction in this proposition is rather easy. Indeed, suppose that a
partial cubillage $\widetilde Q$ filling $Z(M,M')$ does exist. Take a minimal
(w.r.t. the order $\prec$ as in Sect.~\SEC{lattice_s}) cube
$\zeta=\zeta(X|ijk)$ in $\widetilde Q$. Then the rhombi of $\zeta^{\,\rm fr}$ belong to $M$
and span the hexagon $H=H(X|ijk)$ having $Y$-configuration. Hence the triple
$ijk$ in $M$ is normal and elementary (using terminology for $M$ as that for
the tiling $\pi(M)$). By making the flip in $M$ using $\zeta$, we obtain an
s-membrane in which $ijk$ becomes an inversion, and the fact that $ijk$ is
elementary implies that no other triple $i'j'k'$ changes its status. Also the
new s-membrane becomes closer to $M'$. Applying the procedure $|\widetilde Q|$
times, we reach $M'$. This shows the ``only if'' part of
Proposition~\ref{pr:two_smbr}.
As to the ``if'' part, its proof is less trivial and relies on a result by Felsner and
Weil. Answering an open question by Ziegler~\cite{zieg}, they proved the
following assertion (stated in~\cite{FW} in equivalent terms of pseudo-line
arrangements).
\begin{theorem} {\rm\cite{FW}} \label{tm:FW}
Let $T,T'$ be rhombus tilings on $Z(n,2)$ and let ${\rm Inv}(T)\subset{\rm Inv}(T')$.
Then $T$ has an elementary triple contained in ${\rm Inv}(T')-{\rm Inv}(T)$.
\end{theorem}
\noindent(This is a 2-dimensional analog of the well-known fact that for two
permutations $\sigma,\sigma'\in S_n$ with ${\rm Inv}(\sigma)\subset
{\rm Inv}(\sigma')$, $\sigma$ admits an adjacent transposition whose inversion lies in
${\rm Inv}(\sigma')-{\rm Inv}(\sigma)$. Ziegler~\cite{zieg} showed that the
corresponding assertion in dimension 3 or more is false.)
Now Theorem~\ref{tm:FW} implies that if $M,M'$ are s-membranes with
${\rm Inv}(M)\subset{\rm Inv}(M')$, then there exists a cube $\zeta=\zeta(X|ijk)$
such that $\zeta^{\,\rm fr}\subset M$ and $ijk\in{\rm Inv}(M')-{\rm Inv}(M)$. The flip
in $M$ using $\zeta$ increases the set of inversions by $ijk$.
This enables us to recursively construct a partial cubillage filling $Z(M,M')$
starting with $\zeta$, and the ``if'' part of Proposition~\ref{pr:two_smbr} follows.
\section{W-membranes and quasi-combies} \label{sec:w-membr}
In this section we deal with a maximal c-collection $C$ in $2^{[n]}$ and its
associated cubillage $Q$ on the zonotope $Z=Z(n,3)$ (i.e., with $V_Q=C$), and
consider the class ${\bf W}^\ast(C)$ of maximal by size weakly separated
collections contained in $C$. (Recall that $C$ need not be w-pure, by
Lemma~\ref{lm:example}.) Since each $W\in{\bf W}^\ast(C)$ is the spectrum of a
combi on the zonogon $Z'=Z(n,2)$ (cf.~Theorem~\ref{tm:combi}), a reasonable
question is how a combi $K$ with $V_K\subset V_Q$ (regarding vertices as
subsets of $[n]$) relates to the structure of $Q$. We have seen that maximal by
size s-collections in $C$ and their associated rhombus tilings on $Z'$ are
represented by s-membranes, which are special 2-dimensional subcomplexes in $Q$.
In the case of weak separation, we will represent combies via \emph{w-membranes},
which are subcomplexes of a certain subdivision, or fragmentation, of $Q$. Also,
along with a combi $K$ with $V_K\subset V_Q$, we will be forced to deal with
the set of so-called \emph{quasi-combies} accompanying $K$, which were
introduced in~\cite{DKK3} and have a nice geometric interpretation in terms of
$Q$ as well.
\subsection{Fragmentation of a cubillage and quasi-combies} \label{ssec:fragment}
The \emph{fragmentation} $Q^{\boxtimes}$ of a cubillage $Q$ on $Z=Z(n,3)$ is the
complex obtained by cutting $Q$ by the horizontal planes through the vertices
of $Q$, i.e., the planes $z=h$ for $h=0,\ldots,n$. This subdivides each cube
$\zeta=\zeta(X|ijk)$ into three pieces: the lower tetrahedron $\zeta^\nabla$,
the middle octahedron $\zeta^\square$, and the upper tetrahedron
$\zeta^\Delta$, called the $\nabla$-, $\square$-, and $\Delta$-\emph{fragments}
of $\zeta$, respectively. Depending on the context, we also may think of
$Q^{\boxtimes}$ as the set of such fragments over all cubes. We say that a fragment
has \emph{height} $h+\frac12$ if it lies between the planes $z=h$ and $z=h+1$.
It is convenient to visualize faces of $Q^{\boxtimes}$ as though looking at them from
the front and slightly from below, i.e., along a vector $(0,1,\epsilon)$, and
accordingly use the projection $\pi_\epsilon: {\mathbb R}^3\to{\mathbb R}^2$ defined by
$\pi_\epsilon(x,y,z)=(x,z-\epsilon y)$ for a sufficiently small $\epsilon>0$. One can see
that $\pi_\epsilon$ transforms the generators $\theta_1, \ldots,\theta_n$ for $Z$
as in~\refeq{cyc_conf} into generators for $Z'=Z(n,2)$ which are adapted for
combies, i.e., satisfy the strict convexity condition~\refeq{strict_xi}.
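\noindent\textbf{Computational illustration.} The last claim is easy to check numerically for the configuration from Sect.~\SSEC{cubil} with $y_i=x_i^2$: the Python sketch below (an illustration only, for one particular choice of the $x_i$ and of $\epsilon$) solves $\pi_\epsilon(\theta_j)=\lambda\,\pi_\epsilon(\theta_i)+\lambda'\,\pi_\epsilon(\theta_k)$ for each $i<j<k$ and verifies that $\lambda,\lambda'>0$ and $\lambda+\lambda'>1$.
\begin{verbatim}
from itertools import combinations

eps = 0.1
xs  = [-2.0, -1.0, 0.0, 1.0, 2.0]                 # x_1 < ... < x_5
theta = [(x, x * x, 1.0) for x in xs]             # cyclic configuration Theta
xiv = [(x, z - eps * y) for (x, y, z) in theta]   # xi_i = pi_eps(theta_i)

for a, b, c in combinations(range(5), 3):         # positions of i < j < k
    (ax, ay), (bx, by), (cx, cy) = xiv[a], xiv[b], xiv[c]
    det  = ax * cy - cx * ay                      # solve lam*xi_i + lam2*xi_k = xi_j
    lam  = (bx * cy - cx * by) / det
    lam2 = (ax * by - bx * ay) / det
    assert lam > 0 and lam2 > 0 and lam + lam2 > 1   # strict convexity holds
\end{verbatim}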
For $S\subset Z$, let $S^{\rm fr}_\epsilon$ ($S^{\rm re}_\epsilon$) denote the set of
points of $S$ seen from the front (from the rear) in the direction related to
$\pi_\epsilon$, i.e., the points $(x,y,z)\in S\cap\pi_\epsilon^{-1}(\alpha,\beta)$ with
$y$ minimum (resp. maximum), for all $(\alpha,\beta)\in{\mathbb R}^2$. In particular,
when replacing the previous projection $\pi$ by $\pi_\epsilon$, all facets
(triangles) of the fragments of a cube become fully seen from the front or
rear; see the picture.
\begin{center}
\includegraphics[scale=1]{swc10}
\end{center}
Thus, all 2-dimensional faces in $Q^{\boxtimes}$ are triangles, and we conditionally
refer to those of them that lie in horizontal sections $z=h$ as \emph{horizontal}
triangles, and to the other ones (halves of rhombi in $Q$) as \emph{vertical}
ones. Horizontal triangles $\tau$ are divided into two groups. Namely, $\tau$
is called \emph{upper} (\emph{lower}) if it has vertices of the form $Xi,Xj,Xk$
(resp. $Y-k,Y-j,Y-i$) for $i<j<k$, and therefore its ``obtuse'' vertex $Xj$
(resp. $Y-j$) is situated above the edge $(Xi,Xk)$ (resp. below the edge
$(Y-k,Y-i)$), called the \emph{longest} edge of $\tau$ (which is justified when
$\epsilon$ is small). Equivalently, an upper (lower) horizontal $\tau$ belongs to
a $\nabla$-fragment (resp. a $\Delta$-fragment).
Accordingly, we refer to the edges in horizontal sections as horizontal ones,
or \emph{H-edges}, and to the other edges as vertical ones, or \emph{V-edges}
(adapting terminology for combies from Sect.~\SSEC{combi}).
For $h\in[n]$, let $Q^{\boxtimes}_h$ denote the section of $Q$ at height $h$
(consisting of horizontal triangles). The triangulation $Q^{\boxtimes}_h$ partitioned
into upper and lower triangles will be of use in what follows. (A nice property
of $Q^{\boxtimes}_h$ pointed out in~\cite{gal} is that its spectrum (the set of
vertices regarded as subsets of $[n]$) constitutes a maximal w-collection in
$\binom{[n]}{h}$.) For example, if $Q$ is the cubillage on $Z(4,3)$ formed by
four cubes $\zeta(\emptyset|123),\; \zeta(\emptyset|134),\; \zeta(1|234),\;
\zeta(3|124)$, then the triangulations $Q^{\boxtimes}_1,\, Q^{\boxtimes}_2,\,Q^{\boxtimes}_3$ are as
illustrated in the picture, where the sections of these cubes are labeled by
$a,b,c,d$, respectively.
\begin{center}
\includegraphics[scale=0.9]{swc11}
\end{center}
\subsection{W-membranes} \label{ssec:wmembran}
\noindent\textbf{Definition.} A 2-dimensional subcomplex $M$ of the
fragmentation $Q^{\boxtimes}$ is called a \emph{w-membrane} if $M$ is a disk which is
bijectively projected by $\pi_\epsilon$ onto $Z(n,2)$; equivalently, the boundary of
the disk $M$ is the rim $Z^{\,\rm rim}$ of $Z=Z(n,3)$ and $M=M^{\,\rm fr}_\epsilon$.
Arguing as in Sect.~\SEC{lattice_s} for s-membranes, one shows
that the set ${\cal M}(Q^{\boxtimes})$ of w-membranes in $Q^{\boxtimes}$ constitutes a
distributive lattice.
More precisely, associate with a w-membrane $M$: (a) the part $Z^-(M)$
($Z^+(M)$) of $Z$ between $Z^{\,\rm fr}$ and $M$ (resp. between $M$ and $Z^{\,\rm re}$); and (b)
the subcomplex $Q^{\bullet-}(M)$ ($Q^{\bullet+}(M)$) of $Q^{\boxtimes}$ contained in
$Z^-(M)$ (resp. $Z^+(M)$), called the \emph{front heap} (resp. \emph{rear
heap}) when it is regarded as the corresponding set of $\nabla$-, $\square$-,
and $\Delta$-fragments.
Then (similar to~\refeq{NNp}) for two w-membranes $M,M'\in{\cal M}(Q^{\boxtimes})$, we
have:
\begin{numitem1} \label{eq:NNpeps}
\begin{itemize}
\item[(i)] both $N:=(M\cup M')^{\rm fr}_\epsilon$ and $N':=(M\cup M')^{\rm re}_\epsilon$
are w-membranes;
\item[(ii)] $Q^{\bullet-}(N)=Q^{\bullet-}(M)\cap Q^{\bullet-}(M')$ and
$Q^{\bullet-}(N')=Q^{\bullet-}(M)\cup Q^{\bullet-}(M')$.
\end{itemize}
\end{numitem1}
\begin{prop} \label{pr:lattfragm}
${\cal M}(Q^{\boxtimes})$ is a distributive lattice in which operations $\wedge$ and
$\vee$ applied to $M,M'\in{\cal M}(Q^{\boxtimes})$ produce w-membranes $M\wedge M'$ and $M\vee
M'$ such that $Q^{\bullet-}(M\wedge M')=Q^{\bullet-}(M)\cap Q^{\bullet-}(M')$ and
$Q^{\bullet-}(M\vee M')=Q^{\bullet-}(M)\cup Q^{\bullet-}(M')$.
\ \vrule width.1cm height.3cm depth0cm
\end{prop}
Next, for fragments $\tau,\tau'$ in $Q^{\boxtimes}$, we say that $\tau$
\emph{immediately precedes} $\tau'$ if $\tau^{\,\rm re}_\epsilon \cap (\tau')^{\rm
fr}_\epsilon$ consists of a (vertical or horizontal) triangle. Accordingly, we
define the directed graph $\Gamma_{Q^{\boxtimes}}$ whose vertices are the fragments in
$Q^{\boxtimes}$ and whose edges are the pairs $(\tau,\tau')$ such that $\tau$
immediately precedes $\tau'$.
\begin{lemma} \label{lm:acycfrag}
The graph $\Gamma_{Q^{\boxtimes}}$ is acyclic.
\end{lemma}
\begin{proof}
Consider a directed path $P=(\tau_0,e_1,\tau_1,\ldots,e_p,\tau_p)$ in
$\Gamma_{Q^{\boxtimes}}$. We show that $P$ is not a cycle as follows.
If consecutive fragments $\tau=\tau_{i-1}$ and $\tau'=\tau_i$ share a
horizontal triangle $\sigma$ of height $h$ (i.e., lying in the plane $z=h$),
then the construction of $\pi_\epsilon$ together with the equality
$\sigma=\tau^{\,\rm re}_\epsilon \cap (\tau')^{\rm fr}_\epsilon$ implies that $\tau$ lies
below and $\tau'$ lies above the plane $z=h$.
On the other hand, if $\tau$ and $\tau'$ share a vertical triangle, then both
$\tau,\tau'$ have the same height.
Thus, it suffices to show that if all fragments $\tau_i$ in $P$ have the same
height, then $P$ is not a cycle. This assertion follows from
Lemma~\ref{lm:acyclic} and the observation that if fragments $\tau,\tau'$ of
$Q^{\boxtimes}$ share a vertical triangle $\sigma$, and $\tau$ immediately precedes
$\tau'$, then the cubes $\zeta,\zeta'$ of $Q$ containing these fragments
(respectively) share the rhombus $\rho$ including $\sigma$ and such that
$\rho=\zeta^{\,\rm re}\cap(\zeta')^{\rm fr}$.
\end{proof}
\begin{corollary} \label{cor:ideal_frag}
The graph $\Gamma_{Q^{\boxtimes}}$ induces a partial order $\prec$ on the fragments of
$Q^{\boxtimes}$. The ideals of $(Q^{\boxtimes},\prec)$ are exactly the front heaps
$Q^{\bullet-}(M)$ of w-membranes $M\in{\cal M}(Q^{\boxtimes})$.
\ \vrule width.1cm height.3cm depth0cm
\end{corollary}
When a w-membrane $M$ is different from the minimal w-membrane $Z^{\,\rm fr}$, the
ideal $F:=Q^{\bullet-}(M)$ has at least one maximal element, i.e., a fragment
$\tau\in F$ such that there is no $\tau'\in F-\{\tau\}$ with $\tau\prec\tau'$.
Equivalently, the rear side $\tau^{\,\rm re}_\epsilon$ is entirely contained in $M$.
The \emph{lowering flip} in $M$ using $\tau$ replaces the triangles of
$\tau^{\,\rm re}_\epsilon$ by the ones of $\tau^{\,\rm fr}_\epsilon$, producing a w-membrane $M'$
closer to $Z^{\,\rm fr}$, namely, such that $Q^{\bullet-}(M')=F-\{\tau\}$. Note that this
flip preserves the set of vertices (i.e., $V_{M'}=V_M$) if $\tau$ is a
$\nabla$- or $\Delta$-fragment, in which case we refer to this as a \emph{tetrahedral}
(lowering) flip. See the picture.
\begin{center}
\includegraphics[scale=1]{swc12}
\end{center}
In contrast, if $\tau$ is a $\square$-fragment, then the set of vertices does
change, namely, $V_{M'}=(V_M-\{Xik\})\cup\{Xj\}$, where $\tau$ belongs to the
cube $\zeta(X|ijk)$; we refer to such a flip as \emph{octahedral} or
\emph{essential}. See the picture.
\begin{center}
\includegraphics[scale=1]{swc13}
\end{center}
Symmetrically, when $M\neZ^{\,\rm re}$, its rear heap $R:=Q^{\bullet+}(M)$ has at least
one minimal fragment $\tau$, i.e., such that there is no $\tau'\in R-\{\tau\}$
with $\tau'\prec \tau$. Equivalently, $\tau^{\,\rm fr}_\epsilon$ is entirely contained in
$M$. The \emph{raising flip} in $M$ using $\tau$ produces a w-membrane $M'$
closer to $Z^{\,\rm re}$. Such flips, referred to as tetrahedral and octahedral (or
essential) as before, are illustrated in the above two pictures as well.
Making all possible lowering or raising \emph{tetrahedral} flips starting with
a given w-membrane $M$, we obtain a set of w-membranes with the same spectrum
$V_M$, denoted as ${\cal E}(M)$ and called the \emph{escort} of $M$. Of
especial interest is a w-membrane $L\in{\cal E}(M)$ that has the maximum number
of V-edges. Such an $L$ admits neither a $\nabla$-fragment $\tau$ with
$\tau^{\,\rm re}_\epsilon\subset L$, nor a $\Delta$-fragment $\tau'$ with $(\tau')^{\rm
fr}_\epsilon\subset L$, since a lowering flip in the former case and a raising flip
in the latter case would increase the number of V-edges. We call $L$ a
\emph{fine} w-membrane.
We shall see later that the w-membranes correspond to the so-called
\emph{non-expensive quasi-combies}, and the fine w-membranes to the
combies, which are \emph{compatible} with $Q^{\boxtimes}$.
The following auxiliary statement will be of use.
\begin{numitem1} \label{eq:vert-hor}
\begin{itemize}
\item[(i)] Let $Q^{\boxtimes}$ contain a vertical $\Delta$-triangle $\Delta$ and a
lower horizontal triangle $\sigma$ sharing an edge $e$ that is the longest
edge of $\sigma$ (and the base edge of $\Delta$). Then $\Delta$ and $\sigma$
belong to the same $\Delta$-fragment $\tau$ of $Q^{\boxtimes}$ (thus forming
$\tau^{\,\rm fr}_\epsilon$).
\item[(ii)] Symmetrically, if a vertical $\nabla$-triangle $\nabla$ and an
upper horizontal triangle $\sigma$ share an edge that is the longest edge of
$\sigma$, then $\nabla\cup\sigma= \tau^{\,\rm re}_\epsilon$ for some $\nabla$-fragment
$\tau$ of $Q^{\boxtimes}$.
\end{itemize}
\end{numitem1}
Indeed, let $\rho$ be the rhombus in $Q$ containing the triangle $\Delta$ as
in~(i). This $\rho$ is a facet of one or two cubes of $Q$ and $\sigma$ lies in
the section of one of them, $\zeta$ say, by the horizontal plane containing
$e$. Since $\sigma$ is lower, the only possible case is when $\Delta$ and
$\sigma$ form the front side of the $\Delta$-fragment of $\zeta$, as required.
The case~(ii) is symmetric.
A useful consequence of~\refeq{vert-hor} is:
\begin{numitem1} \label{eq:fine_wmembr}
for any horizontal triangle $\sigma$ of a fine w-membrane $L$, the longest edge
of $\sigma$ belongs to one more (lower or upper) horizontal triangle of $L$.
\end{numitem1}
Indeed, if $\sigma$ is lower, then its longest edge belongs to neither a
vertical $\nabla$-triangle (since $\pi_\epsilon$ is injective on $L$), nor a
vertical $\Delta$-triangle (otherwise $\sigma\cup\Delta$ would be as
in~\refeq{vert-hor}(i) and one could make a lowering flip increasing the number
of V-edges). When $\sigma$ is upper, the argument is similar
(using~\refeq{vert-hor}(ii)).
\subsection{Quasi-combies and w-membranes} \label{ssec:combi-membr}
We assume that the zonogon $Z':=Z(n,2)$ is generated by the vectors
$\xi_i=\pi_\epsilon(\theta_i)$, $i=1,\ldots,n$, where the $\theta_i$ are as
in~\refeq{cyc_conf}; then the $\xi_i$ satisfy~\refeq{strict_xi}. Speaking of
combies etc., we use terminology and notation as in Sect.~\SSEC{combi}.
A \emph{quasi-combi} on $Z'$ is defined in the same way as a combi, with the
only difference that the requirement that for any lens $\lambda$, the lower
boundary $L_\lambda$, as well as the upper boundary $U_\lambda$, has at least
two edges is now withdrawn; so one of $L_\lambda$ and $U_\lambda$ is allowed to
have only one edge. When all vertices of $\lambda$ are contained in
$L_\lambda$, and therefore $U_\lambda$ has a unique edge, namely,
$(\ell_\lambda,r_\lambda)$, we say that $\lambda$ is a \emph{lower semi-lens}.
Symmetrically, when the set $V_\lambda$ of vertices of $\lambda$ belongs to
$U_\lambda$, $\lambda$ is called an \emph{upper semi-lens}. An important
special case of a semi-lens $\lambda$ is a (lower or upper) triangle.
We refer to the $\Delta$- and $\nabla$-tiles of a quasi-combi $K$ as
\emph{vertical} ones, and to the lenses and semi-lenses in it as
\emph{horizontal} ones. This is justified by the fact that all vertices $A$ of
a horizontal tile have the same size, or, let us say, lie in the same
\emph{level} $h=|A|$, whereas a vertical tile has vertices in two levels.
A quasi-combi is called \emph{fully triangulated} if all its tiles are
triangles. An immediate observation is that
\begin{numitem1} \label{eq:fully_triang}
$\pi_\epsilon$ maps any w-membrane $M$ of $Q^{\boxtimes}$ to a fully triangulated
quasi-combi (regarding $M$ as a 2-dimensional complex).
\end{numitem1}
In what follows we will liberally identify $M$ with $\pi_\epsilon(M)$ and speak of
a w-membrane as a quasi-combi. A property converse to~\refeq{fully_triang}, in
a sense, is valid in a more general situation. Before stating it, we introduce
four simple operations on a quasi-combi $K$.
\noindent\textbf{(S) Splitting a horizontal tile.} For a chosen lens $\lambda$
of $K$ and non-adjacent vertices $u,v$ in $L_\lambda$ or in $U_\lambda$, the
operation cuts $\lambda$ into two pieces (either one lens and one semi-lens or
two semi-lenses) by connecting $u,v$ by the line-segment $[u,v]$. When $\lambda$ is
a lower (upper) semi-lens and $u,v$ is a pair of non-adjacent vertices in
$L_\lambda$ (resp. $U_\lambda$), the operation acts similarly.
\noindent\textbf{(M) Merging two horizontal tiles.} Suppose that $\lambda'$ and
$\lambda''$, which are either two semi-lenses or one lens and one semi-lens,
have a common edge $e$ that is the longest edge of at least one of them,
$\lambda'$ say, i.e., $e=(\ell_{\lambda'},r_{\lambda'})$. The operation merges
$\lambda',\lambda''$ into one piece $\lambda:=\lambda'\cup\lambda''$.
One can see that both operations result in correct quasi-combies. Two examples
are illustrated in the picture.
\begin{center}
\includegraphics[scale=1]{swc14}
\end{center}
The next two operations involve semi-lenses and vertical triangles and
resemble, to some extent, tetrahedral flips in w-membranes. Here by a
\emph{lower} (\emph{upper}) \emph{fan} in a quasi-combi $K$ we mean a sequence
of $\nabla$-tiles $\nabla_r=\nabla(X|i_{r-1}i_r)$ (resp. $\Delta$-tiles
$\Delta_r=\Delta(Y|i_{r-1}i_r)$), $r=1,\ldots,p$, where $i_0<\cdots<i_p$ (resp.
$i_0>\cdots >i_p$); i.e., these triangles have the same bottom vertex $X$ (resp.
the same top vertex $Y$) and two consecutive triangles share a vertical edge.
\noindent\textbf{(E) Eliminating a semi-lens.} Suppose that the longest edge
$e=(\ell_\lambda,r_\lambda)$ of a lower semi-lens $\lambda$ belongs to a
$\Delta$-tile $\Delta=\Delta(Y|ji)$ ($j>i$). Then $e$ is the base edge
$(Y-j,Y-i)$ of $\Delta$, and $\lambda$ has type $ij$ and the upper root just at
$Y$. The operation of eliminating $\lambda$ replaces $\lambda$ and $\Delta$ by
the corresponding upper fan $(\Delta_r\colon r=1,\ldots,p)$, where each
$\Delta_r$ has the top vertex $Y$ and its base edge is the $r$-th edge in
$L_\lambda$. Symmetrically, if an upper semi-lens $\lambda$ and a $\nabla$-tile
$\nabla$ share an edge $e$ (which is the longest edge of $\lambda$ and the base
edge of $\nabla$), then the operation replaces $\lambda$ and $\nabla$ by the
corresponding lower fan $(\nabla_r\colon r=1,\ldots,p)$, where the base edge of
$\nabla_r$ is the $r$-th edge in $U_\lambda$.
\noindent\textbf{(C) Creating a semi-lens.} This operation is converse to~(E).
It deals with a lower or upper fan of vertical triangles and replaces them by
the corresponding pair consisting of either an upper semi-lens and a
$\nabla$-tile, or a lower semi-lens and a $\Delta$-tile.
Again, it is easy to check that (E) and (C) result in correct quasi-combies.
These operations are illustrated in the picture (where $p=3$).
\begin{center}
\includegraphics[scale=0.7]{swc15}
\end{center}
Now, given a quasi-combi $K$, we consider the set $\Omega(K)$ of all
quasi-combies $K'$ on $Z'$ with the same spectrum $V_K$, called the
\emph{escort} of $K$ (note that when $K$ matches a w-membrane $M$, $\Omega(K)$
can be larger than ${\cal E}(M)$). We observe the following
\begin{lemma} \label{lm:OrbitK}
{\rm(i)} ~$\Omega(K)$ contains exactly one combi. {\rm(ii)} ~$\Omega(K)$ is the set of
quasi-combies that can be obtained from $K$ by use of operations
(S),(M),(E),(C). In particular, $V_K$ is a maximal w-collection in $2^{[n]}$.
\end{lemma}
\begin{proof}
Choosing an arbitrary quasi-combi $K'\in\Omega(K)$ and applying to $K'$ a
series of operations~(M) and (E), one can produce $K^\ast$ having no
semi-lenses at all (since each application of (M) or (E) decreases the number
of semi-lenses). Therefore, $K^\ast$ is a combi with $V_{K^\ast}=V_K=:S$.
Moreover, $K^\ast$ is the unique combi with the given spectrum $S$, by
Theorem~\ref{tm:combi} (see also~\cite[Th.~3.5]{DKK3}). This gives~(i). Now (i)
implies~(ii) (since any $K'\in \Omega(K)$ can be obtained from $K^\ast$ using
(S) and (C), which are converse to (M) and (E)).
\end{proof}
As a consequence of~\refeq{fully_triang} and Lemma~\ref{lm:OrbitK}, we obtain
\begin{corollary} \label{cor:wmembr-maxwc}
The spectrum of any w-membrane is a maximal w-collection in $2^{[n]}$.
\end{corollary}
\noindent\textbf{Definition.} A quasi-combi $K$ is called \emph{compatible}
with a cubillage $Q$ if each edge of $K$ is (the image by $\pi_\epsilon$ of) an edge
of $Q^{\boxtimes}$. (In particular, $V_K\subset V_Q$.)
\begin{prop} \label{pr:compat_quasi}
Let $K$ be a quasi-combi on $Z'=Z(n,2)$ compatible with a cubillage $Q$ on
$Z(n,3)$. Then the horizontal tiles (lenses and semi-lenses) of $K$ can be
triangulated so as to turn $K$ into (the image by $\pi_\epsilon$ of) a
w-membrane in $Q^{\boxtimes}$.
\end{prop}
\begin{proof}
Let $\tau$ be a $\Delta$- or $\nabla$-tile in $K$; then the edges of $\tau$
belong to $Q^{\boxtimes}$. Arguing as in the proof of
Proposition~\ref{pr:edge_rh_cube} (using induction on $n$ and considering the
$n$-contraction of $Q$ and its fragmentation), one can show that $\tau$ is a
face (a vertical triangle) of $Q^{\boxtimes}$. Now consider a lens
or semi-lens $\lambda$ of $K$ lying in level $h$, say. Since all edges of
$\lambda$ belong to $Q^{\boxtimes}$, the polygon $\lambda$ must be subdivided into a
set of triangles in the section of $Q^{\boxtimes}$ by the plane $z=h$. Combining such
sets and vertical triangles $\tau$ as above, we obtain a disk bijective to $Z'$
by $\pi_\epsilon$, yielding a w-membrane $M$ in $Q^{\boxtimes}$ with $V_K\subset V_M$. Now
the fact that both $V_M$ and $V_K$ are maximal w-collections implies $V_K=V_M$,
and the result follows.
\end{proof}
Let us say that a quasi-combi $K$ is \emph{non-expensive} if all semi-lenses in
it are triangles and there is no semi-lens $\lambda$ whose longest edge
$(\ell_\lambda,r_{\lambda})$ is simultaneously either an edge of a lens or the
longest edge of another semi-lens. In particular, any combi is non-expensive.
Note that~\refeq{fully_triang} and Proposition~\ref{pr:compat_quasi} imply that
each w-membrane $M$ one-to-one corresponds (via $\pi_\epsilon$) to a fully
triangulated quasi-combi compatible with $Q$ and having the same spectrum
$V_M$. One more correspondence following from Proposition~\ref{pr:compat_quasi}
concerns non-expensive quasi-combies.
\begin{corollary} \label{cor:non-expens}
Each w-membrane $M$ one-to-one corresponds to a non-expensive quasi-combi $K$
compatible with $Q$ and such that $V_K=V_M$. Non-expensive quasi-combies with
the same escort have the same set of lenses.
\end{corollary}
Indeed, for a non-expensive quasi-combi $K$, the corresponding w-membrane $M$
is obtained by subdividing each lens of $K$ into triangles of $Q^{\boxtimes}$. We also
use the fact that each application of~(E) matches a tetrahedral flip in the
corresponding w-membrane (since each semi-lens is a triangle), and a series of
such operations results in a combi with the same set of lenses.
A sharper version of the above results is obtained by weakening the requirement of
compatibility.
\begin{theorem} \label{tm:combi-wmembr}
For each w-collection $W$ of maximum size contained in the spectrum $V_Q$ of a
cubillage $Q$, there exists a w-membrane $M$ in $Q^{\boxtimes}$ with $V_M=W$.
\end{theorem}
\begin{proof}
Let $K$ be the combi with $V_K=W$. In light of reasonings in the proof of
Proposition~\ref{pr:compat_quasi}, it suffices to show that
\begin{numitem1} \label{eq:vert_trian}
each vertical triangle $\tau$ of $K$ is extended to a rhombus of $Q$ (and
therefore $\tau$ is a face of $Q^{\boxtimes}$).
\end{numitem1}
To see this, we rely on the following fact (which is interesting in its own
right).
\noindent\textbf{Claim.} \emph{Let a set $Y\subset[n]$ be chord separated from
each of $X,X1,Xn$ for some $X\subseteq [n]-\{1,n\}$. Then $Y$ is chord
separated from $X1n$ as well.}
\noindent\textbf{Proof of the Claim.} Let $1,\ldots,n$ be disposed in this
order on a circumference $O$. Let $Y':=Y-X$ and $X':=X-Y$. One may assume that
$1,n\notin Y'$ (otherwise the chord separation of $Y$ and $X1n$ immediately
follows from that of $Y,X,X1,Xn$).
If $Y$ and $X1n$ are not chord separated, then there are elements $x,x'\in
X'1n$ and $y,y'\in Y'$ such that the corresponding chords $e=[x,x']$ and
$e'=[y,y']$ ``cross'' each other. Then $\{x,x'\}\ne\{1,n\}$ (since $1,n$ are
neighboring in $O$). So one may assume that $x\in X'$ (and $x'\in X'1n$). But
in each possible case $(x'\in X'$, $x'=1$ or $x'=n$), the chord $e$ crossing
$e'$ connects two elements of either $X'$ or $X'1$ or $X'n$; a contradiction.
\ \vrule width.1cm height.3cm depth0cm
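(As a quick sanity check, the Claim can also be confirmed by brute force for small $n$. The following Python sketch is our own illustration; it assumes the crossing-chords form of the definition, namely that $A,B\subseteq[n]$ are chord separated when no chord joining two elements of $A\setminus B$ crosses a chord joining two elements of $B\setminus A$, with $1,\ldots,n$ placed on a circle in this order, and the helper names are ours.)
\begin{verbatim}
from itertools import combinations

def crossing(c1, c2):
    # chords {a,a'}, {b,b'} on a circle labelled 1..n cross iff their
    # endpoints strictly interleave in the cyclic order
    a, a2 = sorted(c1); b, b2 = sorted(c2)
    return (a < b < a2) != (a < b2 < a2)

def chord_separated(A, B):
    A, B = set(A), set(B)
    return not any(crossing(c1, c2)
                   for c1 in combinations(sorted(A - B), 2)
                   for c2 in combinations(sorted(B - A), 2))

def check_claim(n):
    ground = list(range(1, n + 1))
    for r in range(0, n - 1):
        for X in combinations(range(2, n), r):   # X inside [n]-{1,n}
            X = set(X)
            for s in range(0, n + 1):
                for Y in combinations(ground, s):
                    Y = set(Y)
                    if all(chord_separated(Y, S)
                           for S in (X, X | {1}, X | {n})):
                        assert chord_separated(Y, X | {1, n})

check_claim(6)   # no assertion error for small n
\end{verbatim}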
Now consider a $\nabla$-tile $\nabla=\nabla(X|ij)$ of $K$ (having the vertices
$X,Xi,Xj$ with $i<j$). If $\{i,j\}=\{1,n\}$, then, by the Claim (and
Theorem~\ref{tm:galash}), $Xij$ is chord separated from all vertices of $Q$,
and the maximality of $V_Q$ implies that $Xij$ is a vertex of $Q$ as well.
Hence, by Proposition~\ref{pr:edge_rh_cube}(ii), $Q$ contains the rhombus
$\rho(X|ij)$, as required.
So we may assume that at least one of $j<n$ and $1<i$ holds. Assuming the
former, we use induction on $n$ and argue as follows.
Let $Q'$ be the $n$-contraction of $Q$, and $M$ the s-membrane in $Q'$ that is
the image of the $n$-pie in $Q$ (for definitions, see Sect.~\SSEC{pies}).
Besides $Q'$, we need to consider the reduced set $W':=\{A\subseteq[n-1]\colon
A$ or $An$ or both belong to $W\}$. Then $W'$ is a maximal w-collection in
$2^{[n-1]}$, and as is shown in~\cite{DKK3},
\begin{numitem1} \label{eq:reduc_tau}
if $\tau$ is a vertical triangle of $K$ having type $ij$ with $j<n$, if $A,B,C$
are the vertices of $\tau$, and if $K'$ is the combi on $Z(n-1,2)$ with
$V_{K'}=W'$, then $K'$ has a vertical triangle with the vertices
$A-n,\,B-n,\,C-n$.
\end{numitem1}
Now consider two cases: $n\notin X$ and $n\in X$.
If $n\notin X$, then $X,Xi,Xj$ are vertices of $Q'$ and simultaneously
vertices of the reduced combi $K'$. By~\refeq{reduc_tau}, $K'$ has the tile
$\nabla'=\nabla(X|ij)$. By induction, the vertices of $\nabla'$ are extended to
a rhombus $\rho'$ of $Q'$. This $\rho'$ is lifted to $Q$, as required.
If $n\in X$, then $Q'$ and $K'$ have vertices $X',X'i,X'j$ for $X':=X-n$, ~$K'$
has the triangle $\nabla'=\nabla(X'|ij)$ (by~\refeq{reduc_tau}), the vertices
of $\nabla'$ are extended to a rhombus $\rho'$ of $Q'$, and $\rho'$ is lifted
to the desired rhombus $\rho(X|ij)$ in $Q$.
The case of a $\Delta$-tile $\Delta=\Delta(Y|ji)$ of $K$ with $i<j<n$ is symmetric.
Finally, if $1<i<j=n$, we act in a similar fashion, but applying to $Q$ the
1-contraction operation, rather than the $n$-contraction one (this is just the
place where we use the 1-contraction mentioned in Sect.~\SSEC{pies}); the
details are left to the reader.
This completes the proof of the theorem.
\end{proof}
\section{Extending a combi to a cubillage} \label{sec:embed_combi}
The purpose of this section is to explain how to efficiently extend a fixed
maximal w-collection in $2^{[n]}$ to a maximal c-collection, working with their
geometric interpretations: combies and cubillages. Our construction will imply the
following
\begin{theorem} \label{tm:embed_combi}
Given a maximal weakly separated collection $W\subset 2^{[n]}$, one can find,
in polynomial time, a maximal chord separated collection $C\subset 2^{[n]}$
including $W$.
\end{theorem}
\begin{proof}
It is convenient to work with an arbitrary fully triangulated quasi-combi $K$
with $V_K=W$. The goal is to construct a cubillage $Q$ on $Z=Z(n,3)$ whose
fragmentation $Q^{\boxtimes}$ contains $K$ as a w-membrane. (Note that it is routine
to construct the (unique) combi with the spectrum $W$ (see~\cite{DKK3} for
details), and to form $K$, we subdivide each lens of the combi into the pair of upper and
lower semi-lenses and then triangulate them arbitrarily. The resulting
cubillage $Q$ will depend on the choice of such triangulations.)
We start with properly embedding $K$ into the ``empty'' zonotope $Z$, and our
method consists of two phases. At the first (second) phase, we construct a
partial fragmentation $F^-$ (resp. $F^+$) consisting of $\nabla$-, $\square$-,
and $\Delta$-fragments of some cubes $\zeta(X|ijk)$ (where, as usual, $i<j<k$
and $X\subseteq [n]-\{i,j,k\}$) filling the region $Z^-(K)$ of $Z$ between
$Z^{\,\rm fr}$ and $K$ (resp. the region $Z^+(K)$ between $K$ and $Z^{\,\rm re}$). (For
definitions, see Sect.~\SSEC{fragment}.) Then $F:=F^+ \cup F^-$ is a
subdivision of $Z$ into such fragments, and it is not difficult to realize that
$F$ is just the fragmentation $Q^{\boxtimes}$ of some cubillage $Q$; so $Q^{\boxtimes}$ is as
required for the given $K$.
Next we describe the first phase. At each step in it, we deal with one more
w-membrane $M$ such that
\begin{itemize}
\item[$(\ast)$]
$M$ lies entirely in $Z^-(K)$, and there is a partial fragmentation $F'$
filling the region $Z(M,K)$ between $M$ and $K$ (i.e., $F'$ is a
subdivision of $Z(M,K)$ into $\nabla$-, $\square$-, and $\Delta$-fragments).
\end{itemize}
If $M$ (regarded as a fully triangulated quasi-combi) has no horizontal
triangle (semi-lens), then $M$ is, in essence, a rhombus tiling in which
each rhombus $\rho(X|ij)$ is cut into two vertical triangles, namely,
$\nabla(X|ij)$ and $\Delta(Xij|ji)$. So $M$ can be identified with the
corresponding s-membrane, and we can construct a partial cubillage $Q'$ filling
the region $Z^-(M)$ (between $Z^{\,\rm fr}$ and $M$) by acting as in
Sect.~\SSEC{memb-to-cube}. Combining $Q'$ and $F'$, we obtain the desired
fragmentation $F^-$ filling $Z^-(K)$.
Now assume that $M$ has at least one semi-lens, and let $h$ be minimum so that
the set $\Lambda$ of semi-lenses in the level $z=h$ is nonempty. Choose
$\lambda\in\Lambda$ such that no edge in its lower boundary $L_\lambda$ belongs
to another semi-lens. (The existence of such a $\lambda$ is provided by the
acyclicity of the directed graph whose vertices are the elements of $\Lambda$
and whose edges are the pairs $(\lambda,\lambda')$ such that $U_\lambda$ and
$L_{\lambda'}$ share an edge, which follows, e.g., from
Lemma~\ref{lm:acyclic}.) Two cases are possible.
\emph{Case 1}: $\lambda$ is an upper triangle, i.e., $L_\lambda$ consists of a
single edge, namely, $e=(\ell_\lambda,r_\lambda)$. Let $U_\lambda$ have vertices
$Xi=\ell_\lambda$, $Xj$ and $Xk=r_\lambda$ ($i<j<k$). Then $e$ belongs to a
$\nabla$-tile in $M$, namely, $\nabla=\nabla(X|ik)$. Form the $\nabla$-fragment
$\tau=\zeta^\nabla(X|ijk)$ (the lower tetrahedron with the vertices
$X,Xi,Xj,Xk)$. We add $\tau$ to $F'$ and accordingly make the lowering flip in
$M$ using $\tau$ (which replaces the triangles $\lambda,\nabla$ forming
$\tau^{\,\rm re}_\epsilon$ by $\nabla(X|ij)$ and $\nabla(X|jk)$ forming $\tau^{\,\rm fr}_\epsilon$; see
Sect.~\SSEC{wmembran}). The new $M$ is a correct fully triangulated quasi-combi
(embedded as a w-membrane in $Z$), which is closer to $Z^{\,\rm fr}$.
\emph{Case 2}: $\lambda$ is a lower triangle. Then $L_\lambda$ consists of two
edges: $e=(\ell_\lambda=Y-k,Y-j)$ and $e'=(Y-j,Y-i=r_\lambda)$, where $i<j<k$.
Also by the choice of $h$, the edges $e,e'$ belong to $\nabla$-tiles of $M$,
namely, those of the form $\nabla=\nabla(X|jk)$ and $\nabla'=\nabla(X'|ij)$,
respectively, where $X:=Y-\{j,k\}$ and $X':=Y-\{i,j\}$. See the left fragment
of the picture.
\begin{center}
\includegraphics[scale=0.85]{swc16}
\end{center}
Let $A:=Y-j$ ($=Xk=X'i$). Note that the ``sector'' between the edges $(X,A)$
and $(X',A)$ is filled by an upper fan $(\Delta_1,\ldots,\Delta_p)$, where
$\Delta_r=\Delta(A|i_{r-1}i_r)$ and $k=i_0>i_1>\cdots>i_p=i$ (cf.
Sect.~\SSEC{wmembran}). Consider two possibilities.
\emph{Subcase 2a}: $p=1$, i.e., the fan consists of only one tile, namely,
$\Delta=\Delta(A|ki)$. Observe that the vertices $X,X',Xj,X'j,A$ belong
to an octahedron, namely, $\tau=\zeta^\square(\widetilde X|ijk)$, where $\widetilde X$
denotes $X-i=X'-k$. Moreover, the triangles $\lambda,\Delta,\nabla,\nabla'$
form the rear side of $\tau$. We add $\tau$ to $F'$ and accordingly make the
octahedral flip in $M$ using $\tau$, which replaces $\tau^{\,\rm re}_\epsilon$ by the front side
$\tau^{\,\rm fr}_\epsilon$ formed by four triangles sharing the new vertex $A':=\widetilde Xj$.
(See the middle fragment of the above picture where the new triangles are
indicated by solid lines.) The new $M$ is again a correct w-membrane
closer to $Z^{\,\rm fr}$. Note that under the flip, the semi-lens $\lambda$ is replaced
by an upper semi-lens $\lambda'$ in level $h-1$ (this $\lambda'$ has the
longest edge $(X,X')$ and the top $A'$).
\emph{Subcase 2b}: $p>1$. Then $X$ and $X'$ are connected in $M$ by the path
$P$ that passes through the vertices $X=A-i_0,A-i_1,\ldots, A-i_p=X'$. We make two
transformations. First we connect $X$ and $X'$ by the line-segment $\widetilde e$.
Since $\widetilde e$ lies in the region $Z^-(M)$ (in view of~\refeq{cyc_conf}), so
does the entire truncated polyhedral cone $\Sigma$ with the top vertex $A$ and
the base polygon $B$ bounded by $P\cup\widetilde e$. See the right fragment of the
above picture (where $p=3$). We subdivide $B$ into $p-1$ triangles
$\sigma_1,\ldots,\sigma_{p-1}$ (having vertices on $P$) and extend each
$\sigma_r$ to a tetrahedron $\tau_r$ with the top $A$. These
$\tau_1,\ldots,\tau_{p-1}$ subdivide $\Sigma$ into $\Delta$-fragments (each
being of the form $\zeta^\Delta(A|i_\alpha i_\beta i_\gamma)$ for some $0\le
\alpha<\beta<\gamma\le p$). Observe that the rear side $\Sigma^{\rm re}_\epsilon$
of $\Sigma$ is formed by the fan $(\Delta_1,\ldots,\Delta_p)$, whereas
$\Sigma^{\rm fr}_\epsilon$ consists of the lower horizontal triangles
$\sigma_1,\ldots,\sigma_{p-1}$ plus the vertical triangle with the top $A$ and
the base $\widetilde e$, denoted as $\widetilde \Delta$.
We add the fragments $\tau_1,\ldots,\tau_{p-1}$ to $F'$ and accordingly update
$M$ by replacing the triangles of $\Sigma^{\rm re}_\epsilon$ by the ones of
$\Sigma^{\rm fr}_\epsilon$ (as though making $p-1$ lowering tetrahedral flips). The
new w-membrane has the upper fan at $A$ consisting of a unique $\Delta$-tile,
namely, $\widetilde \Delta$, and now we make the second transformation, by applying
the octahedral flip as in Subcase~2a (involving the triangles
$\lambda,\widetilde\Delta,\nabla,\nabla'$ on the same vertices $X,X',Xj,X'j,A$).
Doing so, we eventually get rid of semi-lenses in the current $M$; so $M$
becomes an s-membrane in essence, which enables us to extend the current $F'$
to the desired fragmentation $F^-$ filling $Z^-(K)$ (by acting as in
Sect.~\SSEC{memb-to-cube}).
At the second phase, we act ``symmetrically'', starting with $M:=K$ and moving
toward $Z^{\,\rm re}$, in order to obtain a fragmentation $F^+$ filling $Z^+(K)$.
Then $F^-\cup F^+$ is as required, and the theorem follows.
\end{proof}
\end{document}
\begin{document}
\maketitle
\section{Introduction and statement of results}
The study of representations of integers as sums of polygonal numbers has a long and storied history. For $m\in\mathbb N_{\geq 3}$ and $\ell\in\mathbb N_0$, let $p_m(\ell)$ be the \begin{it}$\ell$-th $m$-gonal number\end{it}
\begin{equation*}
p_m(\ell):=\frac12 (m-2)\ell^2-\frac12 (m-4)\ell,
\end{equation*}
which counts the number of points in a regular $m$-gon with side length $\ell$. Fermat famously conjectured in 1638 that every positive integer may be written as the sum of at most $m$ $m$-gonal numbers, or equivalently that for every $n\in\mathbb N_0$
\begin{equation}\label{eqn:Fermatsum}
\sum_{1 \leq j \leq m} p_m(\ell_j) = n
\end{equation}
is solvable. Lagrange proved the four-squares theorem in 1770, resolving the case $m=4$ of Fermat's conjecture. The case $m=3$ of triangular numbers was solved by Gauss in 1796 and is sometimes called the Eureka Theorem because Gauss famously marked in his diary ``EYPHKA! num=$\triangle+\triangle+\triangle$''. Cauchy \cite{Cauchy} finally completed the full proof of the conjecture in 1813, and Nathanson \cite{Nathanson} shortened Cauchy's proof in 1987; he also provided some additional history.
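As a quick numerical sanity check (an illustration of ours, not part of the original argument), the following Python sketch verifies Fermat's claim for small $m$ and a modest range of $n$; all function names are hypothetical.
\begin{verbatim}
def p(m, l):
    # the l-th m-gonal number
    return ((m - 2) * l * l - (m - 4) * l) // 2

def fermat_check(m, N):
    # every 0 <= n <= N should be a sum of at most m m-gonal numbers
    polys = []
    l = 1
    while p(m, l) <= N:
        polys.append(p(m, l))
        l += 1
    reachable = {0}
    for _ in range(m):                      # allow up to m summands
        reachable |= {s + q for s in reachable for q in polys if s + q <= N}
    return all(n in reachable for n in range(N + 1))

assert all(fermat_check(m, 500) for m in range(3, 9))
\end{verbatim}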
More generally, given $\bm{\alpha}\in \mathbb N^d$ (throughout we write vectors in bold letters) and $n\in\mathbb N$ one may consider Diophantine equations of the type
\begin{equation}\label{eqn:genpolysum}
\sum_{1 \leq j \leq d} \alpha_j p_m\left(\ell_j\right)=n.
\end{equation}
It is natural to ask for a classification of those $n\in\mathbb N$ for which \eqref{eqn:genpolysum} is solvable with $\bm{\ell}\in \mathbb N_0^{d}$.
The case $m=4$ is well-understood: by applying the theory of modular forms (see \cite[Proposition 11]{Zagier123}), for $m=4$ one not only knows the existence of a solution to \eqref{eqn:Fermatsum} but has a precise formula for the number of such solutions. Namely,
Jacobi showed in 1834 (see e.g. \cite[p. 119]{Williams}) that
\begin{equation}\label{eqn:sum4squares}
\#\left\{ \bm{\ell}\in \mathbb Z^4: \sum_{1\leq j \leq 4}\ell_j^2=n\right\} = 8\sum_{\substack{d\mid n\\ 4\nmid d}} d.
\end{equation}
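For instance, \eqref{eqn:sum4squares} is easy to confirm numerically for small $n$; the following brute-force Python sketch (our own illustration, with hypothetical function names) does so.
\begin{verbatim}
from math import isqrt

def r4(n):
    # number of (l1,...,l4) in Z^4 with l1^2+...+l4^2 = n
    L, count = isqrt(n), 0
    for a in range(-L, L + 1):
        for b in range(-L, L + 1):
            for c in range(-L, L + 1):
                d2 = n - a * a - b * b - c * c
                if d2 < 0:
                    continue
                d = isqrt(d2)
                if d * d == d2:
                    count += 2 if d > 0 else 1
    return count

def jacobi(n):
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)

assert all(r4(n) == jacobi(n) for n in range(1, 150))
\end{verbatim}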
Although formulas like \eqref{eqn:sum4squares} are rare, they are often ``almost true'' in the sense that the number of solutions to equations like \eqref{eqn:genpolysum} with $\bm{\ell}\in\mathbb Z^d$ may be written in the shape of \eqref{eqn:sum4squares} up to an error term. For example, in the case $\bm{\alpha}=\bm{1}$ with arbitrary even $d$ and $m=4$, Ramanujan stated \cite[(146)]{Ramanujan} a formula for the number of solutions to \eqref{eqn:genpolysum} which was later proven by Mordell \cite{Mordell}.
Set
\[
r_{2k}(n):=\#\left\{\bm{\ell}\in\mathbb Z^{2k}: \sum_{1\leq j \leq 2k}\ell_j^2=n\right\}
\]
and suppose for simplicity that $k\geq 10$ is even. Ramanujan's claim \cite[(146)]{Ramanujan} together with \cite[(143)]{Ramanujan} implies that there exists $\delta>0$ such that for $n\in\mathbb N$
\begin{equation}\label{eqn:r2k}
r_{2k}(n)=\frac{2k(-1)^{n+1}}{\left(2^{k}-1\right)B_{k}} \sum_{d\mid n} (-1)^{d+\frac{n}{d}\frac{k}{2}} d^{k-1}+O\left(n^{k-1-\delta}\right),
\end{equation}
where $B_{k}$ is the $k$-th Bernoulli number.
More generally, Kloosterman \cite[(I.3I)]{Kloosterman} applied the Circle Method to show formulas resembling \eqref{eqn:r2k} (where the main term is the singular series from the Circle Method) in the case $m=4$ and $d=4$ of \eqref{eqn:genpolysum}.
The goal of this paper is to obtain formulas resembling \eqref{eqn:r2k} for the number of solutions
\begin{equation*}
r_{m,\bm{\alpha}}(n):=\#\left\{\bm{\ell}\in \mathbb N_0^d: \sum_{1\leq j \leq d}\alpha_j p_m\left(\ell_j\right)=n\right\}.
\end{equation*}
Note that in \eqref{eqn:sum4squares} and \eqref{eqn:r2k}, we are counting solutions with $\ell_j\in\mathbb Z$, while the goal in this paper is to restrict to solutions with $\ell_j\in\mathbb N_0$. The reason for this restriction is the connection with regular polygons. Although the formula defining $p_m(\ell_j)$ is still well-defined for $\ell_j\in\mathbb Z$, their interpretation as the number of points in a regular $m$-gon with side lengths $\ell_j$ is lost when $\ell_j<0$ because side-lengths cannot be negative. For $m\in\{3,4\}$, the restriction $\ell_j\in\mathbb N_0$ does not lead to a fundamentally different question than taking $\ell_j\in\mathbb Z$. Indeed, using that $p_3(-\ell-1)=p_3(\ell)$, we obtain for $m=3$ a bijection between solutions with $\ell_j\geq 0$ and those with $\ell_j<0$. Similarly, since $p_4(-\ell)=p_4(\ell)$, we have for $m=4$ a bijection between solutions with $\ell_j\geq 0$ and those with $\ell_j\leq 0$. The case $\ell_j=0$ is double-counted, but formulas for solutions with $\ell_j=0$ may be obtained by taking $d\mapsto d-1$ and removing $\alpha_j$ in \eqref{eqn:genpolysum}. Thus for $m\in\{3,4\}$, finding the number of solutions to \eqref{eqn:genpolysum} with $\ell_j\in\mathbb N_0$ is equivalent to finding the number of solutions with $\ell_j\in\mathbb Z$, and we hence assume $m\geq 5$ throughout. To the best of our knowledge, in this case formulas like \eqref{eqn:r2k} for the number of solutions to \eqref{eqn:genpolysum} if $\bm{\ell}\in\mathbb N_0^d$ are not known.
However, standard techniques yield formulas of this type for $\bm{\ell}\in\mathbb Z^d$. Completing the square in \eqref{eqn:genpolysum}, solutions to \eqref{eqn:genpolysum} are in one-to-one correspondence with solutions to certain sums of squares with fixed congruence conditions. Using this relationship, one finds that studying
\[
r_{m,\bm{\alpha}}^*(n):=\#\left\{\bm{x}\in\mathbb Z^d: \sum_{1\leq j \leq d}\alpha_jp_m(x_j) = n\right\}
\]
is equivalent to evaluating $s_{r,M,\bm{\alpha}}^*(An+B)$ (for some appropriate $A$, $B$, $r$, and $M$), where
\begin{equation*}\label{eqn:s*def}
s_{r,M,\bm{\alpha}}^*(n):=\#\left\{\bm{x}\in\mathbb Z^d: \sum_{1\leq j\leq d}\alpha_jx_j^2 = n,\ x_j\equiv r\pmod{M}\right\}.
\end{equation*}
The generating function ($q:=e^{2\pi i \tau}$ with $\tau\in\mathbb{H}:=\{\tau\in\mathbb C: \im{\tau}>0\}$)
\[
\Theta_{r,M,\bm{\alpha}}^*(\tau):=\sum_{n\geq 0}s_{r,M,\bm{\alpha}}^*(n) q^{\frac{n}{M}}
\]
is a modular form of weight $\frac{d}{2}$ for some congruence subgroup (see, e.g., \cite[Proposition 2.1]{Shimura}). Using the theory of modular forms, formulas like \eqref{eqn:r2k} may be obtained by splitting $\Theta_{r,M,\bm{\alpha}}^*$ into an Eisenstein series and a cusp form and using a result of Deligne \cite{Deligne} to bound the coefficients of the cusp form as an error term. As noted above, although one obtains formulas like \eqref{eqn:r2k} for $r_{m,\bm{\alpha}}^*(n)$ due to its connection with modular forms, one loses the interpretation of $p_{m}(\ell_j)$ in terms of regular $m$-gons.
The aim of this paper is to link the study of $r_{m,\bm{\alpha}}(n)$ to modular forms while simultaneously preserving the connection with regular $m$-gons by restricting to $\ell_j\in\mathbb N_0$.
However, the restriction of $\ell_j$ to $\mathbb N_0$ breaks an important symmetry and as a result the generating function for $r_{m,\bm{\alpha}}(n)$ is unfortunately not a modular form, so one cannot employ standard methods to obtain a formula for $r_{m,\bm{\alpha}}(n)$.
Indeed, in his last letter to Hardy in 1920, Ramanujan commented that ``unlike the `False' theta functions'', the mock theta functions that he discovered ``enter into mathematics as beautifully as the ordinary theta functions''. However, contrary to Ramanujan's claims about the false theta functions, recent work by Nazaroglu and the first author \cite{BringmannNazaroglu} shows that the generating function has some modular properties and in particular can be \enquote{completed} to a function transforming like a modular form. This gives that the generating function has some explicit ``obstruction to modularity''. The investigation of this obstruction to modularity plays a fundamental role in this paper and causes most of the technical difficulties.
Given the results in \cite{BringmannNazaroglu}, one approach to obtaining formulas like \eqref{eqn:r2k} would be to establish structure theorems or to generalize results on modular forms so that they extend to functions with this type of obstruction to modularity. In this paper, we instead link $r_{m,\bm{\alpha}}(n)$ and $r_{m,\bm{\alpha}}^*(n)$, showing that they are essentially equal up to an error term. As above, by completing the square, one finds that this is equivalent to relating $s_{r,M,\bm{\alpha}}^*(An+B)$ to $s_{r,M,\bm{\alpha},C}(An+B)$ (for some $A,B,C$), where
\[
s_{r,M,\bm{\alpha},C}(n):=\#\left\{\bm{x}\in\mathbb Z^d: \sum_{1\leq j\leq d}\alpha_jx_j^2 = n,\ x_j\equiv r\pmod{M},\ x_j\geq C\right\}.
\]
If $C=1$ (i.e., if $\bm{x}\in\mathbb N^d$), then we omit it in the notation.
Heuristically, one would expect that solutions with $\varepsilon_j x_j>0$ are equally distributed independent of the choice of $\varepsilon_j\in \{\pm 1\}$. Our main theorem shows that this is indeed the case.
\begin{theorem}\label{thm:rNrZ}
Let $\bm{\alpha}\in \mathbb N^4$ and $r,M\in\mathbb N$ be given.
\noindent
\begin{enumerate}[leftmargin=*,label=\rm(\arabic*)]
\item We have
\begin{equation*}\label{eqn:sNsZ}
s_{r,M,\bm{\alpha}}(n)=\frac{1}{16}s_{r,M,\bm{\alpha}}^*(n)+O\left(n^{\frac{15}{16}+\varepsilon}\right).
\end{equation*}
\item
For $m>4$ we have
\begin{equation*}\label{eqn:rNrZ}
r_{m,\bm{\alpha}}(n) = \frac{1}{16} r_{m,\bm{\alpha}}^*(n) + O\left(n^{\frac{15}{16}+\varepsilon}\right).
\end{equation*}
\end{enumerate}
\end{theorem}
\noindent
\begin{remark}
The main term of $s_{r,M,\bm{\alpha}}^*(n)$ comes from the Eisenstein component of $\Theta_{r,M,\bm{\alpha}}^*$. The computation of the corresponding Eisenstein series appears throughout the literature in a variety of different shapes. In one direction, Kloosterman \cite{Kloosterman} computed this component as the singular series coming from the Circle Method. On the other hand, the corresponding Eisenstein series appears in the work of Siegel \cite{Siegel1,Siegel2} and follow-up work of Weil \cite{Weil}, van der Blij \cite{vanderBlij}, and Shimura \cite{ShimuraCongruence} in two different forms. Firstly, the Eisenstein series may be realized as a certain weighted average of the solutions over the genus of the given sum of squares with congruence conditions. Secondly, Siegel computed its coefficients as certain $p$-adic limits. Finally, since the space of modular forms of a given weight and congruence subgroup is a finite-dimensional vector space, one may explicitly construct a basis and determine the Eisenstein series component using Linear Algebra.
\end{remark}
As noted above, combining Theorem \ref{thm:rNrZ} with known techniques from the theory of modular forms yields formulas resembling \eqref{eqn:r2k}. As a first corollary, we obtain such a formula for the number of representations of $n$ as a sum of four hexagonal numbers; the main term is given in terms of the \emph{sum of divisors function} $\sigma(n):=\sum_{d\mid n} d$.
\begin{corollary}\label{cor:hexagonal}
We have
\[
r_{6,(1,1,1,1)}(n)=\frac{1}{16}\sigma(2n+1)+O\left(n^{\frac{15}{16}+\varepsilon}\right).
\]
\end{corollary}
\begin{remark}
Since $\sigma(2n+1)\geq 2n+1$, Corollary \ref{cor:hexagonal} implies that $r_{6,\bm{\alpha}}(n)>0$ for $n$ sufficiently large. Guy \cite{Guy} proposed a study of the numbers which are not the sum of four polygonal numbers. Moreover, Corollary \ref{cor:hexagonal} implies that the number of such representations is $\gg n$.
\end{remark}
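To illustrate Corollary \ref{cor:hexagonal} (and the factor $\frac{1}{16}$ in Theorem \ref{thm:rNrZ}) numerically, the following Python sketch, an illustration of ours and not part of any proof, compares $r_{6,(1,1,1,1)}(n)$, its analogue over $\mathbb Z^4$, and $\sigma(2n+1)$ for a few values of $n$; all function names are hypothetical.
\begin{verbatim}
from math import isqrt
from collections import Counter

def p6(l):
    return 2 * l * l - l            # hexagonal numbers (any integer l)

def count_reps(n, domain):
    # number of (l1,...,l4) in domain^4 with p6(l1)+...+p6(l4) = n,
    # via a meet-in-the-middle count of pair sums
    vals = [p6(l) for l in domain if 0 <= p6(l) <= n]
    pair = Counter(a + b for a in vals for b in vals if a + b <= n)
    return sum(pair[s] * pair[n - s] for s in pair if n - s in pair)

def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

for n in [200, 400, 800]:
    L = isqrt(n) + 2
    r_nonneg = count_reps(n, range(0, L))       # l_j >= 0
    r_integer = count_reps(n, range(-L, L))     # l_j in Z
    print(n, r_nonneg, r_integer, sigma(2 * n + 1),
          round(16 * r_nonneg / sigma(2 * n + 1), 3))
\end{verbatim}
In these ranges the count over $\mathbb Z^4$ agrees with $\sigma(2n+1)$, as predicted by Cho's Eisenstein series formula, while the ratio $16\, r_{6,(1,1,1,1)}(n)/\sigma(2n+1)$ stays close to $1$.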
Another example is given by sums of five hexagonal numbers in which the last hexagonal number occurs twice.
To state the result, let $(\frac{\cdot}{\cdot})$ be the generalized Legendre symbol.
\begin{corollary}\label{cor:hexagonal2}
For $\bm{\alpha}=\left(1,1,1,2\right)$ and $m=6$, we have
\[
r_{6,\bm{\alpha}}(n)=-\frac{1}{64}\sum_{d\mid (8n+5)} \left(\frac{8}{d}\right)d +O\left(n^{\frac{15}{16}+\varepsilon}\right).
\]
In particular, for $n$ sufficiently large
\[
r_{6,\bm{\alpha}}(n)>0.
\]
\end{corollary}
The proofs of Corollaries \ref{cor:hexagonal} and \ref{cor:hexagonal2} rely on formulas of Cho \cite{Cho} which use the fact that $\Theta_{-1,4,\bm{\alpha}}^*$ is an Eisenstein series in the cases $\bm{\alpha}=(1,1,1,1)$ and $\bm{\alpha}=(1,1,1,2)$; indeed, as pointed out by Cho in \cite[Examples 3.3 and 3.4]{Cho}, the space of modular forms containing them is spanned by Eisenstein series in these cases. However, to obtain similar corollaries from Theorem \ref{thm:rNrZ}, we do not require the corresponding theta function to be an Eisenstein series. In order to exhibit how to use Theorem \ref{thm:rNrZ}, we give one such example.
\begin{corollary}\label{cor:pentagonal}
For $\bm{\alpha}=\left(1,1,1,1\right)$ and $m=5$, we have
\[
r_{5,(1,1,1,1)}(n)= \frac{1}{24}\sigma(6n+1) + O\left(n^{\frac{15}{16}+\varepsilon}\right).
\]
\end{corollary}
The paper is organized as follows. In Section \ref{sec:setup}, we connect sums of squares and polygonal numbers, introduce partial theta functions, and relate them to theta functions and false theta functions. In Section \ref{sec:Farey}, we recall some facts about Farey fractions that are used for the Circle Method. In Section \ref{sec:modular}, we give modular transformation properties of the theta functions and the false theta functions in a shape that is useful for our application of the Circle Method. Section \ref{sec:IntBound} is devoted to studying the obstruction to modularity of the false theta functions and bounding them in a suitable way to use in the Circle Method. In Section \ref{sec:CircleMethod}, we apply the Circle Method to prove Theorem \ref{thm:rNrZ}. Finally, we prove Corollaries \ref{cor:hexagonal}, \ref{cor:hexagonal2}, and \ref{cor:pentagonal} in Section \ref{sec:Corollaries} to demonstrate how to apply Theorem \ref{thm:rNrZ} to obtain identities resembling \eqref{eqn:r2k}.
\section{Sums of squares with congruence conditions and polygonal numbers} \label{sec:setup}
In this section, we relate sums of polygonal numbers and sums of squares and give a relationship between $s_{r,M,\bm{\alpha}}$ and $s_{r,M,\bm{\alpha}}^*$.
Without loss of generality, we pick the ordering $\alpha_j\geq \alpha_{j+1}$ for $j\in\{1,2,3\}$ in \eqref{eqn:genpolysum}. As noted in the introduction, we investigate sums of polygonal numbers via a connection with sums of squares satisfying certain congruence conditions.
Writing
\begin{equation}\label{pml}
p_m(\ell)=\frac12 (m-2)\left(\ell-\frac{m-4}{2(m-2)}\right)^2-\frac{(m-4)^2}{8(m-2)},
\end{equation}
one sees directly that
\[
r_{m,\bm{\alpha}}(n) = s_{-(m-4),2(m-2),\bm{\alpha},-(m-4)}\left(8(m-2)n+\sum_{1\leq j \leq 4} \alpha_j(m-4)^2\right).
\]
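This correspondence amounts to the substitution $x_j=2(m-2)\ell_j-(m-4)$ and can be checked by brute force; the following Python sketch (our own illustration, with hypothetical names) confirms it for four hexagonal numbers and small $n$.
\begin{verbatim}
from itertools import product

def p(m, l):
    return ((m - 2) * l * l - (m - 4) * l) // 2

def r_poly(m, alpha, n):
    # count l in N_0^4 with sum_j alpha_j p_m(l_j) = n
    rng = []
    for a in alpha:
        l = 0
        while a * p(m, l) <= n:
            l += 1
        rng.append(range(l))
    return sum(1 for ls in product(*rng)
               if sum(a * p(m, l) for a, l in zip(alpha, ls)) == n)

def s_count(m, alpha, n):
    # count x in Z^4 with x_j = -(m-4) mod 2(m-2), x_j >= -(m-4),
    # and sum_j alpha_j x_j^2 = 8(m-2) n + (m-4)^2 sum(alpha)
    step, start = 2 * (m - 2), -(m - 4)
    target = 8 * (m - 2) * n + (m - 4) ** 2 * sum(alpha)
    choices = []
    for a in alpha:
        xs, x = [], start
        while a * x * x <= target:
            xs.append(x)
            x += step
        choices.append(xs)
    return sum(1 for xs in product(*choices)
               if sum(a * x * x for a, x in zip(alpha, xs)) == target)

assert all(r_poly(6, (1, 1, 1, 1), n) == s_count(6, (1, 1, 1, 1), n)
           for n in range(60))
\end{verbatim}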
Using \eqref{pml}, we have the generating function
\begin{equation*}
\sum_{n\geq 0}r_{m, \bm{\alpha}}(n)q^n=\sum_{\bm{\ell}\in\mathbb N_0^4}q^{\sum_{j=1}^4\alpha_j p_m\left(\ell_j\right)}=q^{-\sum_{j=1}^4\alpha_j\frac{(m-4)^2}{8(m-2)}}\prod_{j=1}^4 \sum_{\ell\in\mathbb N_0}q^{\alpha_j\frac{m-2}{2}\left(\ell-\frac{m-4}{2(m-2)}\right)^2}.
\end{equation*}
We restrict our investigation of solutions to \eqref{eqn:genpolysum} to the case $d=4$ and $\bm{\ell}\in\mathbb N_0^4$. We claim that most of the solutions to \eqref{eqn:genpolysum} come from solutions with $\ell_j\neq 0$, i.e., sums of precisely four polygonal numbers instead of at most four polygonal numbers. Defining $r_{m,\bm{\alpha}}^{+}(n)$ via the generating function
\begin{equation}\label{eqn:r+def}
\sum_{n\geq 0}r_{m, \bm{\alpha}}^{+}(n)q^n:=\sum_{\bm{\ell}\in\mathbb N^4}q^{\sum_{j=1}^4\alpha_j p_m\left(\ell_j\right)}=q^{-\sum_{j=1}^4\alpha_j\frac{(m-4)^2}{8(m-2)}}\prod_{j=1}^4 \sum_{\ell\in\mathbb N}q^{\alpha_j\frac{m-2}{2}\left(\ell-\frac{m-4}{2(m-2)}\right)^2},
\end{equation}
a direct calculation using \cite[Lemma 4.1(a)]{Blomer} shows the following.
\begin{lemma}\label{lem:rexactly}
For $\bm{\alpha}\in\mathbb N^4$, we have
\[
r_{m,\bm{\alpha}}(n)=r_{m,\bm{\alpha}}^{+}(n)+O\left(n^{\frac{1}{2}+\varepsilon}\right).
\]
\end{lemma}
Define the partial theta function
\[
\Theta_{r,M,\bm{\alpha}}^+(\tau):=\sum_{n\geq 0}s_{r,M,\bm{\alpha}}(n) q^{\frac{n}{M}},
\]
which is closely related to the generating function of $r_{m,\bm{\alpha}}^+(n)$ by \eqref{eqn:r+def}.
\begin{lemma}\label{lem:r+s+rel}
For $m\geq 5$ and $\bm{\alpha}\in \mathbb N^4$, we have
\[
\sum_{n\geq 0} r_{m,\bm{\alpha}}^+(n) q^n =q^{-\sum_{j=1}^4\alpha_j\frac{(m-4)^2}{8(m-2)}}\Theta_{m,2(m-2),\bm{\alpha}}^+\left(\frac{\tau}{4}\right).
\]
\end{lemma}
By Lemma \ref{lem:r+s+rel} and Lemma \ref{lem:rexactly}, to prove Theorem \ref{thm:rNrZ} it suffices to approximate the Fourier coefficients of $\Theta_{r,M,\bm{\alpha}}^+(\tau)$. These functions $\Theta_{r,M,\bm{\alpha}}^+$ are closely related to the usual (unary) theta functions $\vartheta(r,M;\tau)$ and false theta functions $F_{r,M}(\tau)$, defined for $0\leq r \leq M-1$, $M\in \mathbb N$ by
\begin{equation}\label{eqn:thetaFdef}
F_{r,M}(\tau):=\sum_{\nu\equiv r\pmod{2M}}\operatorname{sgn}(\nu)q^{\frac{\nu^2}{4M}},\qquad \vartheta(r,M; \tau):=\sum_{\nu\equiv r\pmod{M}}q^{\frac{\nu^2}{2M}}.
\end{equation}
A direct calculation shows the following.
\begin{lemma}\label{lem:ThetaFalse}
For $M\in\mathbb N$, $0<r<2M$, and $\bm{\alpha}\in\mathbb N^4$, we have
\[
\Theta_{r,2M,\bm{\alpha}}^+(\tau)=\frac{1}{16}\sum_{J\subseteq\{1,2,3,4\}} \prod_{j\in J}\vartheta\left(r, 2M; 2\alpha_{j}\tau\right)\prod_{\ell\in \{1,2,3,4\}\setminus J} F_{r,M}\left(2\alpha_{\ell}\tau\right).
\]
\end{lemma}
By Lemmas \ref{lem:r+s+rel} and \ref{lem:ThetaFalse}, for $J\subseteq\{1,2,3,4\}$ it is natural to define
\[
F_{r,M,\bm{\alpha},J}(q):=q^{-\frac{r^2}{2M}\sum_{j=1}^4 \alpha_j}\prod_{j\in J}\vartheta\left(r, 2M; 2\alpha_{j}\tau\right)\prod_{\ell\notin J} F_{r,M}\left(2\alpha_{\ell}\tau\right),
\]
where hereafter $\ell \notin J$ means $\ell\in \{1,2,3,4\}\setminus J$.
Then for each $J\subseteq \{1,\dots,4\}$ we set
\begin{equation*}
F_{r,M,\bm{\alpha},J}(q)=: \sum_{n\geq 0} c_{r,M,\bm{\alpha},J}(n)q^n.
\end{equation*}
If $J=\{1,2,3,4\}$, then we omit $J$ in the notation. A straightforward calculation yields the following.
\begin{lemma}\label{lem:mainterm}
\noindent
\begin{enumerate}[leftmargin=*,label=\rm(\arabic*)]
\item
For $M\in\mathbb N$, $0<r<2M$, and $\bm{\alpha}\in \mathbb N^4$, we have
\[
F_{r,M,\bm{\alpha}}(q)= q^{-\frac{r^2}{2M}\sum_{j=1}^4 \alpha_j}\Theta_{r,2M,\bm{\alpha}}^*(\tau).
\]
In particular, for every $n\in\mathbb N_0$
\[
c_{r,M,\bm{\alpha}}(n)=s_{r,2M,\bm{\alpha}}^*\left(2Mn+r^2\sum_{1\leq j \leq 4}\alpha_j\right).
\]
\item
For $m\geq 5$ and $\bm{\alpha}\in\mathbb N^4$, we have
\[
c_{m,m-2,\bm{\alpha}}\left(4\left(n-\sum_{1\leq j \leq 4} \alpha_j\right)\right)=r_{m,\bm{\alpha}}^*(n).
\]
\end{enumerate}
\end{lemma}
\section{Basic facts on Farey fractions}\label{sec:Farey}
The {\it Farey sequence of order $N\in \mathbb N$} is the sequence of reduced fractions in $[0,1)$ whose denominator does not exceed $N$. If $\frac{h}{k}$, $\frac{h_1}{k_1}$ are adjacent elements in the Farey sequence, then their {\it mediant} is $\frac{h+h_1}{k+k_1}$.
When computing mediants below, we consider $\frac{N-1}{N}$ to be adjacent to $\frac{0}{1}$ and take the mediant between $\frac{N-1}{N}$ and $\frac{1}{1}$.
The Farey sequence of order $N$ is then iteratively defined by placing the mediant between two adjacent Farey fractions of order $N-1$ if the denominator of the mediant in reduced terms is at most $N$. We see that two Farey fractions $\frac{h_1}{k_1}<\frac{h}{k}$ of order $N$ are adjacent if and only if the mediant in reduced terms has denominator larger than $N$. This implies that
\begin{equation}\label{eqn:adjacent}
h k_1-h_1k=1,
\end{equation}
and the converse is also true; if $hk_1-h_1k=1$, then $\frac{h}{k}$ and $\frac{h_1}{k_1}$ are adjacent Farey fractions of order $\max(k,k_1)$. For three adjacent Farey fractions $\frac{h_1}{k_1}<\frac{h}{k}<\frac{h_2}{k_2}$, we set for $j\in\{1,2\}$ (note that $k_j$ depends on $h$)
\begin{equation}\label{eqn:rhohdef}
\varrho_{k,j}(h):=k+k_j-N.
\end{equation}
Since the mediant between adjacent terms has denominator larger than $N$ and $k_j\leq N$, we have
\begin{equation}\label{eqn:varrhojbnd}
1\leq \varrho_{k,j}(h)\leq k.
\end{equation}
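These properties are easy to confirm computationally; the following Python sketch (an illustration of ours, with hypothetical names) builds the Farey sequence of a given order directly from the definition and checks \eqref{eqn:adjacent} and \eqref{eqn:varrhojbnd} for all adjacent pairs.
\begin{verbatim}
from math import gcd
from fractions import Fraction

def farey(N):
    # Farey sequence of order N as a sorted list of fractions in [0,1)
    return sorted(Fraction(h, k) for k in range(1, N + 1)
                  for h in range(k) if gcd(h, k) == 1)

N = 25
F = farey(N)
# adjacency relation: h*k_1 - h_1*k = 1 for consecutive h_1/k_1 < h/k
for f1, f in zip(F, F[1:]):
    assert f.numerator * f1.denominator - f1.numerator * f.denominator == 1
# bound 1 <= k + k_j - N <= k for the right neighbour, treating 1/1 as
# the neighbour of (N-1)/N as in the text
F_ext = F + [Fraction(1, 1)]
for f, f2 in zip(F_ext, F_ext[1:]):
    k, k2 = f.denominator, f2.denominator
    assert 1 <= k + k2 - N <= k
\end{verbatim}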
The following lemma is straightforward to prove.
\begin{lemma}\label{lem:AdjacentNeighbours}
If $\frac{h}{k}<\frac{h_2}{k_2}$ are adjacent Farey fractions of order $N$, then $1-\frac{h_2}{k_2}<1-\frac{h}{k}$ are also adjacent Farey fractions of order $N$ and
\[
\varrho_{k,2}(h)=\varrho_{k,1}(k-h).
\]
\end{lemma}
For $n\in \mathbb N$, set $N:=\lfloor\sqrt{n}\rfloor$ and define arcs along the circle of radius $e^{-\frac{2\pi}{N^2}}$ through $e^{2\pi i \tau}$ with $\tau=\frac{h}{k}+\frac{iz}{k}\in \mathbb{H}$.
Note that $\tau\in\mathbb{H}$ is equivalent to $\operatorname{Re}(z)>0$. Specifically, we choose Farey fractions $\frac{h}{k}$ of order $N$ with $0\leqslant h<k\leqslant N$ and $\gcd(h,k)=1$ and set $z:=k(\frac{1}{N^2}-i\Phi)$ with $-\vartheta'_{h,k}\leqslant \Phi \leqslant \vartheta''_{h,k}$. Here, for adjacent Farey fractions $\frac{h_1}{k_1}<\frac{h}{k}<\frac{h_2}{k_2}$ in the Farey sequence of order $N$, set
\begin{equation*}
\vartheta'_{h,k}:=\frac{1}{k(k+k_1)}, \quad \vartheta''_{h,k}:=\frac{1}{k(k+k_2)}.
\end{equation*}
By \eqref{eqn:varrhojbnd}, we have
\begin{equation}\label{eqn:PhiBound}
|\Phi|\leq \max\left\{\vartheta'_{h,k},\vartheta''_{h,k}\right\}<\frac{1}{kN}.
\end{equation}
\section{Modular Transformations}\label{sec:modular}
Kloosterman's version of the Circle Method \cite{Kloosterman} plays a fundamental role in the proof of Theorem \ref{thm:rNrZ}. In order to integrate along arcs from $-\vartheta_{h,k}'$ to $\vartheta_{h,k}''$, one requires the asymptotic behaviour of $F_{r,M,\bm{\alpha},J}(\tau)$ near $\frac{h}{k}$. Transformation properties relating the cusp $\frac{h}{k}$ to $i\infty$ thus play a pivotal role in determining the asymptotic growth near the cusp $\frac{h}{k}$.
To state these, for $a,b\in\mathbb Z$, $c\in\mathbb N$ define the \textit{Gauss sum}
\begin{equation*}
G(a,b;c):= \sum_{\ell\pmod{c}} e^{\frac{2\pi i}{c} \left(a\ell^2+b\ell\right)}.
\end{equation*}
\subsection{The theta functions}
We use the following modular transformation properties.
\begin{lemma}\label{lem:thetatrans}
We have
\begin{equation*}
\vartheta\left( r,2M;2\alpha_j\left(\frac{h}{k}+\frac{iz}{k}\right)\right)
=\frac{e^{\frac{\pi i \alpha_jhr^2}{Mk}}}{2\sqrt{Mk\alpha_jz}}\sum_{\nu\in\mathbb Z}e^{-\frac{\pi \nu^2}{4Mk\alpha_jz}+\frac{\pi i r \nu}{Mk}} G(2M\alpha_jh,2r\alpha_jh+\nu;k).
\end{equation*}
\end{lemma}
\begin{proof}
Writing $\nu=r+2M\alpha+2Mk\ell$ with $\alpha\pmod{k}$ and $\ell\in\mathbb Z$ in definition \eqref{eqn:thetaFdef}, we obtain
\begin{align*}
\vartheta\left( r,2M;2\alpha_j\left(\frac{h}{k}+\frac{iz}{k}\right)\right)
= \sum_{\alpha\pmod k} e^{\frac{2\pi i \alpha_j h}{2Mk}(r+2M\alpha)^2} \sum_{\ell\in\mathbb Z}
e^{\frac{2\pi i \alpha_j}{2Mk}(r+2M\alpha+2Mk\ell)^2 iz}.
\end{align*}
Using the modular inversion formula (see \cite[(2.4)]{Shimura})
\begin{equation*}
\vartheta\left( r,M;-\frac 1 \tau\right) = M^{-\frac 12} (-i\tau)^{\frac 12} \sum_{k\pmod{M}} e^{\frac{2\pi i rk}{M}}
\vartheta(k,M;\tau)
\end{equation*}
on the inner sum, the claim easily follows.
\end{proof}
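Lemma \ref{lem:thetatrans} can also be checked numerically by truncating both sides; the following Python sketch does so for one arbitrarily chosen (hypothetical) set of small parameters, using the principal branch of the square root and truncations chosen generously.
\begin{verbatim}
import cmath

def gauss_sum(a, b, c):
    return sum(cmath.exp(2j * cmath.pi * (a * l * l + b * l) / c)
               for l in range(c))

def theta(r, Mod, tau, cut=60):
    # truncated theta(r, Mod; tau) = sum_{nu = r mod Mod} q^{nu^2/(2 Mod)}
    return sum(cmath.exp(2j * cmath.pi * tau * nu * nu / (2 * Mod))
               for nu in range(r - cut * Mod, r + cut * Mod + 1, Mod))

r, M, alpha, h, k, z = 1, 3, 2, 1, 5, 0.3 + 0.1j     # test values
lhs = theta(r, 2 * M, 2 * alpha * (h / k + 1j * z / k))
rhs = cmath.exp(1j * cmath.pi * alpha * h * r ** 2 / (M * k)) \
      / (2 * cmath.sqrt(M * k * alpha * z)) \
      * sum(cmath.exp(-cmath.pi * nu * nu / (4 * M * k * alpha * z)
                      + 1j * cmath.pi * r * nu / (M * k))
            * gauss_sum(2 * M * alpha * h, 2 * r * alpha * h + nu, k)
            for nu in range(-200, 201))
print(abs(lhs - rhs))   # expected to be numerically negligible
\end{verbatim}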
\subsection{The false theta functions}
We next establish analogous modular properties for the false theta functions.
For $\mu\in\mathbb Z\setminus\{0\}$ set
\begin{equation}\label{defineint}
\mathcal{I}(\mu,k;z)=\mathcal{I}_{M,\alpha_j}(\mu,k;z):=\lim_{\varepsilon\to 0^+}\int_{-\infty}^\infty \frac{e^{-\frac{\pi x^2}{4Mk\alpha_jz}}}{x-(1+i\varepsilon)\mu}dx.
\end{equation}
Throughout we write $\sum^{*}_{\nu\geq 0}$ for the sum in which the $\nu=0$ term is counted with a factor $\frac{1}{2}$, and moreover abbreviate
\[
\sideset{}{^*}\sum_{\bm{\nu}\in \mathbb N_0^4}:= \prod_{j=1}^4\sideset{}{^*}\sum_{\nu_j\geq 0}.
\]
For $d\in\mathbb N$, set
\[
\mathcal{L}_{d}:=[1-d,-1]\cup[1,d].
\]
\begin{lemma}\label{lem:Fmodular}
We have
\begin{multline*}
F_{r,M}\left( 2\alpha_j\left( \frac{h}{k}+\frac{iz}{k}\right)\right)=\frac{1}{2\sqrt{Mk\alpha_jz}}e^{\frac{\pi i \alpha_jhr^2}{Mk}}\sum_{\nu\in\mathbb Z}\operatorname{sgn}(\nu) e^{-\frac{\pi \nu^2}{4Mk\alpha_jz}+\frac{\pi i r \nu}{Mk}}G(2M\alpha_jh,2\alpha_jhr+\nu;k) \\
+ \frac{ie^{\frac{\pi i \alpha_jh r^2}{Mk}}}{2\sqrt{Mk\alpha_jz}\pi}
\sum_{\ell\in\mathcal{L}_{Mk}}
\sideset{}{^*}\sum_{\nu\geq 0}\sum_{\pm} e^{\frac{\pi i r \ell}{Mk}}G(2M\alpha_jh,2\alpha_jhr+\ell;k)\mathcal{I}(\ell\pm 2Mk\nu,k;z).
\end{multline*}
\end{lemma}
\begin{proof}
We have, writing $\nu=r+2M\alpha +2Mk\ell$ ($0\leq\alpha\leq k-1$, $\ell\in\mathbb Z$),
\begin{align*}
F_{r,M}\left(2\alpha_j\left(\frac{h}{k}+\frac{iz}{k}\right)\right)
&=\sum_{\alpha=0}^{k-1} e^{\frac{\pi i \alpha_jh}{Mk}(r+2M\alpha)^2} F_{r+2M\alpha,Mk}(2\alpha_j i z).
\end{align*}
Choosing the $+$-sign in \cite[two displayed formulas after (4.5)]{BringmannNazaroglu} implies that
\begin{equation*}
F_{\beta,M}\left(-\frac 1\tau\right) -\tau^{\frac 12} \sum_{r=1}^{M-1}\psi_{\beta,r}\left( \begin{matrix}0& -1\\ 1 & 0\end{matrix}\right) F_{r,M}(\tau)=
\sqrt{2M} \int_{0}^{-\frac 1\tau +i\infty+\varepsilon} \frac{f_{\beta,M}(\mathfrak{z})}{\sqrt{i\left(\mathfrak{z}+\frac 1\tau\right)}}d\mathfrak{z},
\end{equation*}
where
\begin{equation*}
f_{r,M}(\tau) := \frac{1}{2M} \sum_{\nu\equiv r\pmod{2M}}\nu q^{\frac{\nu^2}{4M}},\qquad
\psi_{\beta,r}\left(\begin{matrix}0&-1\\ 1 &0\end{matrix}\right):= e^{-\frac{3\pi i}{4}}\sqrt{\frac{2}{M}}\sin\left(\frac{\pi \beta r}{M}\right).
\end{equation*}
Changing $\tau\mapsto-\frac 1\tau$, and using
\begin{equation*}
F_{0,M}(\tau)=F_{M,M}(\tau)=0, \qquad F_{2M-r,M}(\tau)=-F_{r,M}(\tau),
\end{equation*}
and
\begin{equation*}
\sum_{\beta \pmod{2M}} e^{\frac{2\pi i}{2M} (\ell+r)\beta} = \begin{cases}
0 & \text{ if } r \not\equiv -\ell \pmod{2M}, \\
2M & \text{ if } r \equiv -\ell \pmod{2M},
\end{cases}
\end{equation*}
we obtain, after a short calculation,
\begin{multline}\label{eqn:Finverse}
F_{\ell,M}\left(-\frac{1}{\tau}\right)= e^{\frac{\pi i}{4}} \sqrt{-\frac{\tau}{2M}}\sum_{\beta\pmod{2M}} e^{\frac{2\pi i \ell \beta}{2M}} F_{\beta,M}(\tau)\\
+ e^{-\frac{3\pi i}{4}} \sqrt{-\tau}\sum_{\beta\pmod{2M}} e^{\frac{2\pi i \ell \beta}{2M}} \int_{0}^{\tau+i\infty+\varepsilon}\frac{f_{\beta,M}(\mathfrak{z})}{\sqrt{i(\mathfrak{z}-\tau)}}d\mathfrak{z}.
\end{multline}
Thus
\begin{multline*}
F_{r+2M\alpha,Mk}(2\alpha_j iz) = \frac{e^{\frac{\pi i}{4}}}{2\sqrt{Mk\alpha_j iz}}
\sum_{\beta\pmod{2Mk}} e^{\frac{2\pi i (r+2M\alpha)\beta}{2Mk}} F_{\beta,Mk}\left( \frac{i}{2\alpha_jz}\right)\\
+ \frac{e^{-\frac{3\pi i}{4}}}{\sqrt{2\alpha_j iz}}\sum_{\beta\pmod{2Mk}} e^{\frac{2\pi i (r+2M\alpha)\beta}{2Mk}} \int_{0}^{\frac{i}{2\alpha_j z}+i\infty+\varepsilon}\frac{f_{\beta,Mk}(\mathfrak{z})}{\sqrt{i\left(\mathfrak{z}-\frac{i}{2\alpha_jz}\right)}}d\mathfrak{z}.
\end{multline*}
The first term can easily be rewritten, giving the first summand claimed in the lemma.
In the second term of \eqref{eqn:Finverse}, $f_{0,Mk}=0$ and for $\beta\neq 0$ and $\tau=\frac{i}{2\alpha_j z}$ we write the integral as
\begin{align*}
\frac{i}{2M}\lim_{\delta\rightarrow 0^+}
\sum\limits_{\nu\equiv \beta \pmod{2M}} \nu e^{\frac{\pi i \nu^2\tau}{2M}} \int_{i\tau+\delta}^{\infty-i\varepsilon} \frac{e^{-\frac{\pi \nu^2\mathfrak{z}}{2M}}}{\sqrt{-\mathfrak{z}}} d\mathfrak{z}.
\end{align*}
We split up the integral in a way that allows $\delta=0$ to be directly plugged in termwise by Abel's Theorem. For this, we use \cite[displayed formula after (3.4)]{BringmannNazaroglu} to obtain that
\begin{equation*}
\int_{i\tau+\delta}^{\infty-i\varepsilon}\frac{e^{-\frac{\pi \nu^2\mathfrak{z}}{2M}}}{\sqrt{-\mathfrak{z}}}d\mathfrak{z}=-\frac{i\sqrt{2M}}{\nu}\left(\operatorname{sgn}(\nu)+\operatorname{erf}\left(i \nu \sqrt{\frac{\pi}{2M} (-i\tau-\delta)}\right)\right).
\end{equation*}
We split the error function as
\begin{equation}\label{eqn:errorsplit}
\left( \operatorname{erf}\left( i\nu \sqrt{\frac{\pi}{2M}(-i\tau-\delta)}\right) -\frac{ie^{\frac{\pi \nu^2}{2M} (-i\tau-\delta)}}{\sqrt{2M}\pi \nu \sqrt{-i\tau-\delta}} \right) + \frac{ie^{\frac{\pi \nu^2}{2M} (-i\tau-\delta)}}{\sqrt{2M}\pi \nu \sqrt{-i\tau-\delta}}.
\end{equation}
Plugging in the asymptotic expansion of the error function at $\infty$, one finds that the series in $\nu$ of $\operatorname{sgn}(\nu)$ plus the first term of \eqref{eqn:errorsplit} converges absolutely for $\delta\geq 0$, and hence we may just take the limit $\delta\to 0^+$.
For the second term, we need to compute
\begin{align*}
\lim_{\delta\rightarrow 0^+} \frac{1}{\sqrt{-i\tau-\delta}}
\sum\limits_{\nu\equiv \beta \pmod{2M}}
\frac{e^{-\frac{\pi \nu^2\delta}{2M}}}{\nu} =\lim_{\delta\rightarrow 0^+} \frac{1}{\sqrt{-i\tau-\delta}}\left(
\sum_{\nu\geq 1} \sum_{\pm}\frac{e^{-\frac{\pi}{2M} \left(\beta \pm 2M\nu\right)^2\delta}}{\beta \pm 2M \nu}
+\frac{e^{-\frac{\pi \beta^2}{2M}\delta}}{\beta}
\right).
\end{align*}
Using the fact that ${\sum\limits_{\pm}} \frac{1}{\beta\pm 2M\nu} = \frac{2\beta}{\beta^2-4M^2\nu^2}$, the above series converges absolutely for $\delta \geq 0$ and hence by Abel's Theorem we have, for $\beta\neq 0$,
\begin{multline*}
\int_{0}^{\tau+i\infty+\varepsilon} \frac{f_{\beta,M}(\mathfrak{z})}{\sqrt{i(\mathfrak{z}-\tau)}}d \mathfrak{z}\\
= \frac{1}{\sqrt{2M}}
\sum_{\nu\geq 1}\sum_{\pm}
\left(\operatorname{sgn}(\beta\pm 2 M\nu)
+\operatorname{erf}\left(i(\beta \pm 2M\nu)\sqrt{-\frac{\pi i\tau}{2M}}\right)\right) e^{\frac{\pi i}{2M} (\beta \pm 2M\nu)^2\tau}\\
+\frac{1}{\sqrt{2M}}
\left(\operatorname{sgn}(\beta)+
\operatorname{erf}
\left(i\beta \sqrt{-\frac{\pi i\tau}{2M}}\right)
\right)
e^{\frac{\pi i}{2M} \beta^2\tau}.
\end{multline*}
We now use the following identity from \cite[(3.8)]{BringmannNazaroglu} ($s\in\mathbb R\setminus\{0\}$, $\operatorname{Re}(V)> 0$)
\begin{equation*}
\left(\operatorname{sgn}(s)+\operatorname{erf}\left(is\sqrt{\pi V}\right)\right)e^{-\pi s^2 V}
= -\frac{i}{\pi} \lim_{\varepsilon\to 0^+} \int_{-\infty}^{\infty} \frac{e^{-\pi V x^2}}{x-s(1+i\varepsilon)}dx,
\end{equation*}
to obtain that
\begin{multline*}
\int_{0}^{\tau+i\infty+\varepsilon} \frac{f_{\beta,M}(\mathfrak{z})}{\sqrt{i(\mathfrak{z}-\tau)}}d \mathfrak{z} = -\frac{i}{\sqrt{2M}\pi} \sum_{\nu\geq 1}\sum_\pm \lim_{\varepsilon\to 0^+}
\int_{-\infty}^{\infty} \frac{e^{\frac{\pi i \tau x^2}{2M}}}{x-(1+i\varepsilon)(\beta \pm2M\nu)}dx\\
-\frac{i}{\sqrt{2M}\pi}
\lim_{\varepsilon\to 0^+}\int_{-\infty}^{\infty} \frac{e^{\frac{\pi i \tau x^2}{2M}}}{x-(1+i\varepsilon)\beta}dx.
\end{multline*}
From this the second claimed term in the lemma may directly be obtained.
\end{proof}
\section{Bounding $\mathcal{I}(\mu,k;z)$}\label{sec:IntBound}
\subsection{Rewriting $\mathcal{I}(\mu,k;z)$}
In the following lemma, we rewrite $\mathcal{I}(\mu,k;z)$. To state the lemma, set
\[
g(x):= e^{-\frac{\pi (x+\mu)^2}{4Mk\alpha_jz}},\qquad R_g(x):=\re{g(x)},\qquad I_g(x):=\im{g(x)}.
\]
\begin{lemma}\label{lem:inteval}
For every $\delta>0$ and $\mu\in\mathbb Z\setminus\{0\}$, we have
\begin{equation*}
\mathcal{I}(\mu,k;z)= \operatorname{sgn}(\mu)\pi i e^{-\frac{\pi \mu^2}{4Mk\alpha_j z}} +\int_{-\delta}^{\delta} \left(R_g'\left(y_{1,x}\right)+ iI_g'\!\left(y_{2,x}\right)\right)dx+\operatorname{sgn}(\mu)\sum_{\pm}\pm\int_{\delta}^{\infty}\frac{1}{x} e^{-\frac{\pi\left(x\pm|\mu|\right)^2}{4Mk\alpha_jz}}dx
\end{equation*}
for some $y_{1,x},y_{2,x}$ between $0$ and $x$ (in particular, $y_{\ell,x}\in (-\delta,\delta)$).
\end{lemma}
\begin{proof}
We make the change of variables $x\mapsto x+\mu$ in \eqref{defineint} to rewrite the integral as
\begin{equation*}
\int_{-\infty}^{\infty}\frac{e^{-\frac{\pi x^2}{4Mk\alpha_j z}}}{x-(1+i\varepsilon)\mu} dx
=
\int_{-\infty}^{\infty}\frac{e^{-\frac{\pi \left(x+\mu\right)^2}{4Mk\alpha_j z}}}{x-i\varepsilon\mu} dx.
\end{equation*}
We then split the integral into three pieces as
\[
\int_{-\infty}^{\infty} =\int_{-\delta}^{\delta} + \int_{\delta}^{\infty}+ \int_{-\infty}^{-\delta}=:\mathcal{I}_1+\mathcal{I}_2+\mathcal{I}_3.
\]
To evaluate $\mathcal{I}_1$, we note that by Taylor's Theorem, there exist $y_{1,x}$ and $y_{2,x}$ between $0$ and $x$ such that
\[
R_g(x)=R_g(0)+R_g'\!\left(y_{1,x}\right)x \qquad \text{ and }\qquad I_g(x)=I_g(0)+I_g'\!\left(y_{2,x}\right)x.
\]
Therefore
\[
g(x)= e^{-\frac{\pi\mu^2}{4Mk\alpha_jz}}+\left(R_g'\!\left(y_{1,x}\right) + i I_g'\!\left(y_{2,x}\right)\right)x.
\]
Thus
\begin{align}\nonumber
\lim_{\varepsilon\to 0^+}\mathcal{I}_1&=\lim_{\varepsilon\to 0^+}\int_{-\delta}^{\delta} \frac{e^{-\frac{\pi\mu^2}{4Mk\alpha_jz}}+\left(R_g'\!\left(y_{1,x}\right) + i I_g'\!\left(y_{2,x}\right)\right)x}{x-i\varepsilon\mu}dx\\
\label{eqn:I1split}& = \lim_{\varepsilon\to 0^+}e^{-\frac{\pi \mu^2 }{4Mk\alpha_jz}} \int_{-\delta}^{\delta} \frac{1}{x-i\varepsilon\mu }dx+ \int_{-\delta}^{\delta}\left( R_g'\!\left(y_{1,x}\right) + i I_g'\!\left(y_{2,x}\right) \right) dx.
\end{align}
The second term on the right-hand side of \eqref{eqn:I1split} is precisely the second term in the claim.
Evaluating the integral explicitly, the first term in \eqref{eqn:I1split} equals
\begin{equation}\label{eqn:Logs}
e^{-\frac{\pi \mu^2}{4Mk\alpha_jz}} \lim_{\varepsilon\to 0^+}\int_{-\delta}^{\delta}\frac{1}{x-i\varepsilon\mu}dx =e^{-\frac{\pi\mu^2}{4Mk\alpha_jz}}\lim_{\varepsilon\to 0^+}\left(\operatorname{Log}\left(\delta-i\varepsilon\mu\right)-\operatorname{Log}\left(-\delta-i\varepsilon\mu\right)\right).
\end{equation}
Here and throughout, $\operatorname{Log}$ denotes the principal branch of the complex logarithm.
We then evaluate, using the fact that $\mu\neq 0$,
\begin{align*}
\lim_{\varepsilon\to 0^+}
\operatorname{Log}\left(\delta-i\varepsilon\mu\right)&=\log(\delta),\\
\lim_{\varepsilon\to 0^+}
\operatorname{Log}\left(-\delta-i\varepsilon\mu\right)&=
\begin{cases}
\operatorname{Log}(-\delta)=\log(\delta) +\pi i&\text{if }\mu<0,\\
\operatorname{Log}(-\delta)-2\pi i=\log(\delta)-\pi i&\text{if }\mu>0.
\end{cases}
\end{align*}
Therefore \eqref{eqn:Logs} becomes
\[
\pi i \operatorname{sgn}(\mu)e^{-\frac{\pi\mu^2}{4Mk\alpha_jz}}.
\]
Since the paths of integration in $\mathcal{I}_2$ and $\mathcal{I}_3$ do not go through zero, we can plug in $\varepsilon=0$ to obtain
\begin{equation}\label{eqn:I2+I3}
\lim_{\varepsilon\to 0^+}\left(\mathcal{I}_2+\mathcal{I}_3\right) = \int_{\delta}^{\infty} \frac{1}{x}e^{-\frac{\pi (x+\mu)^2}{4Mk\alpha_j z}} dx + \int_{-\infty}^{-\delta} \frac{1}{x}e^{-\frac{\pi(x+\mu)^2 }{4Mk\alpha_j z}} dx.
\end{equation}
Making the change of variables $x\mapsto -x$ in the second integral, we see that \eqref{eqn:I2+I3} becomes
\[
\int_{\delta}^{\infty} \frac{1}{x}e^{-\frac{\pi(x+\mu)^2}{4Mk\alpha_j z}}dx
-\int_{\delta}^{\infty} \frac{1}{x} e^{-\frac{\pi(x-\mu)^2}{4Mk\alpha_j z}} dx
=\operatorname{sgn}(\mu)\sum_{\pm}\pm\int_{\delta}^{\infty}\frac{1}{x} e^{-\frac{\pi\left(x\pm|\mu|\right)^2}{4Mk\alpha_jz}}dx.\qedhere
\]
\end{proof}
\subsection{Asymptotics for $\mathcal{I}(\mu,k;z)$}
The main result in this subsection is the following approximation of $\mathcal{I}(\mu,k;z)$.
\begin{proposition}\label{prop:intbound}
If $1\leq k\leq N$ and $|\Phi|\leq \frac{1}{kN}$, then for $0<\delta<\frac{|\mu|}{2}$ we have, for some $c>0$
\[
\mathcal{I}(\mu,k;z)=- \frac{2\sqrt{Mk\alpha_jz}}{\mu}+O\left(\frac{k^{\frac 32}|z|^{\frac 32}}{\mu^3}+\left(1 + \frac{|\mu| \delta}{k|z|}+\log\left(\frac{|\mu|}{\delta}\right)\right)e^{-\frac{c\mu^2}{k}\re{\frac{1}{z}}}\right).
\]
\end{proposition}
Before proving Proposition \ref{prop:intbound}, we approximate the third term from Lemma \ref{lem:inteval}. We set
$
A:=\frac{\pi\mu^2}{4Mk\alpha_j|z|}
$
and make the change of variables $x\mapsto |\mu|x$ to obtain that the third term in Lemma \ref{lem:inteval} equals
\begin{equation}\label{eqn:thirdterm2}
\operatorname{sgn}(\mu)\sum_{\pm}\pm \int_{\frac{\delta}{|\mu|}}^{\infty}\frac{1}{x} e^{-\frac{A|z|}{z}(x\pm 1)^2}dx.
\end{equation}
We split the integral at $x=\frac 12$. To approximate the contribution from $x\geq \frac{1}{2}$, we define for $d\in\mathbb N_0$
\begin{equation}\label{eqn:Jddef}
\mathcal{J}_{d,\pm}:=C_d\left(\frac{z}{2A|z|}\right)^{d-1}\int_{\frac{1}{2}}^{\infty} \frac{1}{x^d}e^{-A\frac{|z|}{z} (x\pm 1)^2} dx,
\end{equation}
where
\begin{equation*}
C_d:= \begin{cases} (d-1)!&\text{if }d\geq1,\\ 1&\text{if }d=0.\end{cases}
\end{equation*}
Note that $\mathcal{J}_{1,\pm}$ is the contribution from $x\geq \frac{1}{2}$ to the integral in \eqref{eqn:thirdterm2}. The following trivial bound for
$\mathcal{J}_{d,\pm}$ follows immediately
by bringing the absolute value inside the integral.
\begin{lemma}\label{lem:Jbndtrivial}
For $d\in\mathbb N_0$, we have
\[
\left|\mathcal{J}_{d,\pm}\right|\leq \frac{2\sqrt{\pi}C_d A^{\frac{1}{2}-d}}{\sqrt{|z|\re{\frac{1}{z}}}}.
\]
\end{lemma}
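For completeness, here is a brief sketch of the computation behind this bound: since $\left|\frac{z}{2A|z|}\right|=\frac{1}{2A}$ and $\left|e^{-\frac{A|z|}{z}(x\pm 1)^2}\right|=e^{-A|z|\re{\frac{1}{z}}(x\pm 1)^2}$, bounding $x^{-d}\leq 2^d$ for $x\geq\frac{1}{2}$ and extending the range of integration to all of $\mathbb{R}$ gives
\[
\left|\mathcal{J}_{d,\pm}\right|\leq C_d(2A)^{1-d}\,2^d\int_{-\infty}^{\infty}e^{-A|z|\re{\frac{1}{z}}u^2}du=\frac{2\sqrt{\pi}C_d A^{\frac{1}{2}-d}}{\sqrt{|z|\re{\frac{1}{z}}}}.
\]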
To obtain a better approximation for $\mathcal{J}_{d,\pm}$, we next relate
$\mathcal{J}_{d,\pm}$
with
$\mathcal{J}_{d+1,\pm}$ and $\mathcal{J}_{d-1,\pm}$.
\begin{lemma}\label{lem:Jdtrick}
For $d\in\mathbb N$, we have
\[
\mathcal{J}_{d,\pm} =\mp\left(
-(d-1)!
\left(\frac{z}{A|z|}\right)^{d} e^{-\frac{A|z|}{z}\left(\frac{1}{2}\pm 1\right)^2} +\mathcal{J}_{d+1,\pm} +
\max(d-1,1)
\frac{z}{2A|z|}\mathcal{J}_{d-1,\pm}\right).
\]
\end{lemma}
\begin{proof}
We first rewrite
\begin{equation}\label{eqn:Jdtrick}
\mathcal{J}_{d,\pm} = \mathcal{J}_{d,\pm} \pm \frac{C_{d}}{C_{d-1}} \frac{z}{2A|z|}\mathcal{J}_{d-1,\pm}\mp \frac{C_{d}}{C_{d-1}} \frac{z}{2A|z|}\mathcal{J}_{d-1,\pm}.
\end{equation}
Using integration by parts, the first two terms in \eqref{eqn:Jdtrick} equal
\begin{equation*}
\pm C_d\left(\frac{z}{2A|z|}\right)^{d-1}\int_{\frac{1}{2}}^{\infty}\frac{x\pm 1}{x^d}\,e^{-A\frac{|z|}{z} (x\pm 1)^2}dx=\pm C_d\left(\frac{z}{A|z|}\right)^{d} e^{-\frac{A|z|}{z}\left(\frac{1}{2}\pm 1\right)^2} \mp \mathcal{J}_{d+1,\pm}.
\end{equation*}
Plugging back into \eqref{eqn:Jdtrick} and using $C_d=(d-1)!$ and $\frac{C_{d}}{C_{d-1}}=\max(d-1,1)$ yields the claim.
\end{proof}
We also require an approximation for $\mathcal{J}_{0,\pm}$.
\begin{lemma}\label{lem:J0eval}
There exists $c>0$ such that
\[
\mathcal{J}_{0,\pm} =2\delta_{\pm 1=-1} \sqrt{\frac{\pi A|z|}{z}}+O\left(\frac{\sqrt{A}\, e^{-c A|z|\re{\frac{1}{z}}}}{\sqrt{|z|\re{\frac{1}{z}}}}\right).
\]
\end{lemma}
\begin{proof}
We first make the change of variables $x\mapsto x\mp 1$ in \eqref{eqn:Jddef} to obtain that
\[
\mathcal{J}_{0,\pm}=\frac{2A|z|}{z}\int_{\frac{1}{2}\pm 1}^{\infty}e^{-\frac{A|z|}{z}x^2}dx.
\]
For $\pm 1=-1$, we rewrite this as
\begin{align*}
\mathcal{J}_{0,-}&=\frac{2A|z|}{z}\left(\int_{-\infty}^{\infty}e^{-\frac{A|z|}{z}x^2}dx-\int_{\frac{1}{2}}^{\infty}e^{-\frac{A|z|}{z}x^2}dx\right)
\\&=\frac{2A|z|}{z}\int_{-\infty}^{\infty}e^{-\frac{A|z|}{z}x^2}dx+O\left(A\int_{\frac{1}{2}}^{\infty}e^{-A|z|\re{\frac{1}{z}} x^2}dx\right).
\end{align*}
Hence we have
\[
\mathcal{J}_{0,\pm}=2\delta_{\pm 1=-1} \frac{A|z|}{z}\int_{-\infty}^{\infty}e^{-\frac{A|z|}{z}x^2}dx+O\left(A\int_{1\pm \frac{1}{2}}^{\infty}e^{-A|z|\re{\frac{1}{z}} x^2}dx\right).
\]
Noting that $\operatorname{Re}\left(\frac{1}{z}\right)>0$, we then bound
\begin{align*}
\int_{1\pm \frac{1}{2}}^{\infty}e^{-A|z|\re{\frac{1}{z}} x^2}dx&\leq
\frac{\sqrt{\pi}e^{-\frac{1}{4}A|z|\re{\frac{1}{z}}}}{2\sqrt{A|z|\re{\frac{1}{z}}}}.
\end{align*}
The claim follows, evaluating
\[
\int_{-\infty}^{\infty} e^{-\frac{A|z|}{z} x^2} dx = \sqrt{\frac{\pi z}{A |z|}}. \qedhere
\]
\end{proof}
\noindent
We next combine Lemmas \ref{lem:Jdtrick} and \ref{lem:J0eval} to obtain an approximation for $\mathcal{J}_{1,\pm}$. To compare the asymptotic growth of different terms, we note that by \eqref{eqn:PhiBound} and the fact that $k\leq N$,
\begin{align}\label{realbound}
\sqrt{\frac{\re{\frac{1}{z}}}{k}}&=\frac{1}{kN\sqrt{\frac{1}{N^4}+\Phi^2}}\geq \frac{1}{\sqrt{2}},\\
\label{eqn:k|z|bound}
\frac{k^2}{N^2}&\leq k|z|=k^2\left(\frac{1}{N^4}+\Phi^2\right)^{\frac{1}{2}}\leq \sqrt{2}.
\end{align}
\begin{lemma}\label{lem:J1bnd}
If $1\leq k\leq N$ and $|\Phi|<\frac{1}{kN}$, then we have
\[
\mathcal{J}_{1,\pm}=
\delta_{\pm 1 =-1}
\sqrt{\frac{\pi z}{A|z|}}+O\left(A^{-\frac{3}{2}}+e^{-cA|z|\re{\frac{1}{z}}}\right).
\]
\end{lemma}
\begin{proof}
By Lemma \ref{lem:Jdtrick} with $d=1$, we have
\begin{equation*}\label{eqn:J1evalstart}
\mathcal{J}_{1,\pm} =\mp \left(-\frac{z}{A|z|} e^{-\frac{A|z|}{z}\left(\frac{1}{2}\pm 1\right)^2} +\mathcal{J}_{2,\pm} + \frac{z}{2A|z|}\mathcal{J}_{0,\pm}\right).
\end{equation*}
We then plug in Lemma \ref{lem:Jdtrick}
again twice (once with $d=2$ and then once with $d=1$)
to obtain that
\begin{align*}
\mathcal{J}_{1,\pm}
&=\mp\Bigg( \left(-\frac{z}{A|z|} +\left(-\frac{1}{2}\pm 1\right)\left(\frac{z}{A|z|}\right)^2\right) e^{-\frac{A|z|}{z}\left(\frac{1}{2}\pm 1\right)^2} \mp \mathcal{J}_{3,\pm}\\
&\hspace{2.5in}+ \frac{z}{2A|z|} \mathcal{J}_{2,\pm} +\left(\frac{z}{2A|z|}+\left(\frac{z}{2A|z|}\right)^2\right)\mathcal{J}_{0,\pm}\Bigg).
\end{align*}
The first term can be bounded against
\[
O\left(\left(\frac{1}{A}+\frac{1}{A^2}\right)e^{-c A|z|\re{\frac{1}{z}}}\right)=O\left(\frac{1}{A}e^{-c A|z|\re{\frac{1}{z}}}\right),
\]
using that $A\gg 1$ by \eqref{eqn:k|z|bound}. Moreover, by Lemma \ref{lem:Jbndtrivial}, we have
\[
\left|\mathcal{J}_{3,\pm}\right|,\
\frac{1}{2A} \left|\mathcal{J}_{2,\pm}\right|
\ll \frac{A^{-2}}{\sqrt{A|z|\re{\frac{1}{z}}}}.
\]
For the terms with $\mathcal{J}_{0,\pm}$, we use Lemma \ref{lem:J0eval} to approximate these by
\begin{align*}
\mp\delta_{\pm 1=-1}\sqrt{\frac{\pi z}{A|z|}}+O\left(A^{-\frac 32}+
\frac{e^{-cA|z|\re{\frac 1z}}}{\sqrt{A|z|\re{\frac{1}{z}}}}\right).
\end{align*}
Noting that $\mp\delta_{\pm 1=-1}= \delta_{\pm 1=-1}$, this gives
\[
\mathcal{J}_{1,\pm}= \delta_{\pm 1=-1}\sqrt{\frac{\pi z}{A|z|}} +O\left(A^{-\frac{3}{2}} + \left(\frac{1}{A}+\frac{1}{\sqrt{A|z|\re{\frac{1}{z}}}}\right)e^{-c A|z|\re{\frac{1}{z}}} + \frac{A^{-2}}{\sqrt{A|z|\re{\frac{1}{z}}}}\right).
\]
We then use \eqref{realbound} and the trivial bound $|z|\operatorname{Re}\left(\frac{1}{z}\right)\leq 1$ to compare the $O$-terms, obtaining
\[
\frac{A^{-2}}{\sqrt{A|z|\re{\frac{1}{z}}}}\ll A^{-\frac{3}{2}}\qquad\text{ and }\qquad \frac{1}{A}\ll \frac{1}{\sqrt{A|z|\re{\frac{1}{z}}}}\ll 1.
\]
This gives the claim.
\end{proof}
We are now ready to prove Proposition \ref{prop:intbound}.
\begin{proof}[Proof of Proposition \ref{prop:intbound}]
The first term in Lemma \ref{lem:inteval} yields the second error term in Proposition \ref{prop:intbound}. For the second term in Lemma \ref{lem:inteval}, we note that
\begin{align*}
R_g(y)&=e^{-\frac{\pi (y+\mu)^2}{4Mk\alpha_j}\re{\frac{1}{z}}}\cos\left(-\frac{\pi(y+\mu)^2}{4Mk\alpha_j}\mathrm{Im}\hspace{-.1cm}\left(\frac 1z\right)\right),\\
I_g(y)&=e^{-\frac{\pi (y+\mu)^2}{4Mk\alpha_j}\re{\frac{1}{z}}}\sin\left(-\frac{\pi(y+\mu)^2}{4Mk\alpha_j}\mathrm{Im}\hspace{-.1cm}\left(\frac 1z\right)\right),
\end{align*}
and then explicitly take the derivatives and bound $|\operatorname{Re}(\frac{1}{z})|\leq\frac{1}{|z|}$, $|\operatorname{Im}(\frac{1}{z})|\leq\frac{1}{|z|}$, and the absolute value of the sines and cosines that occur against $1$. This yields
\[
\left|R_g'\left(y_{1,x}\right)+iI_g'\left(y_{2,x}\right)\right|\leq \frac{\pi}{Mk\alpha_j|z|}\sum_{\ell=1}^2 \left|y_{\ell,x}+\mu\right|e^{-\frac{\pi (y_{\ell,x}+\mu)^2}{4Mk\alpha_j}\re{\frac{1}{z}}}.
\]
To bound the right-hand side, we use $|y_{\ell,x}|<\delta<\frac{|\mu|}{2}$ to conclude that $\frac{|\mu|}{2}\leq |y_{\ell,x}+\mu|\leq \frac{3|\mu|}{2}$. Noting that $\operatorname{Re}(\frac{1}{z})>0$ yields that the second term in Lemma \ref{lem:inteval} contributes the third error term in Proposition \ref{prop:intbound}. We rewrite the third term in Lemma \ref{lem:inteval} as in \eqref{eqn:thirdterm2} and split the integral in \eqref{eqn:thirdterm2} at $\frac{1}{2}$. For $\frac{\delta}{|\mu|}\leq x\leq \frac{1}{2}$, we bring the absolute value inside and note that for $x\leq \frac{1}{2}$ we have $|x\pm 1|\geq \frac{1}{2}$ to bound
\begin{equation}\label{eqn:smallxintparts}
\int_{\frac{\delta}{|\mu|}}^{\frac{1}{2}} \frac{1}{x}e^{-\frac{A|z|}{z} (x\pm 1)^2}dx\leq e^{-\frac{A|z|}{4} \re{\frac{1}{z}}}\int_{\frac{\delta}{|\mu|}}^{\frac{1}{2}} \frac{1}{x} dx \ll \left( 1+\log\left(\frac{|\mu|}{\delta}\right)\right)e^{-\frac{A|z|}{4} \re{\frac{1}{z}}}.
\end{equation}
We next turn to the contribution from $x\geq \frac 12$. By Lemma \ref{lem:J1bnd}, we have \begin{equation}\label{eqn:pluginJ0}
\operatorname{sgn}(\mu)\sum_{\pm}\pm\mathcal{J}_{1,\pm}=-\operatorname{sgn}(\mu)\sqrt{\frac{\pi z}{A|z|}} +O\left(A^{-\frac{3}{2}} + e^{-c A|z|\re{\frac{1}{z}}}\right).
\end{equation}
As noted below \eqref{eqn:Jddef}, $\mathcal{J}_{1,\pm}$ is precisely the contribution from $x\geq \frac{1}{2}$ to the integral in \eqref{eqn:thirdterm2}. Therefore, combining \eqref{eqn:pluginJ0} with \eqref{eqn:smallxintparts} yields
\[
\operatorname{sgn}(\mu)\sum_{\pm}\pm\int_{\frac{\delta}{|\mu|}}^{\infty} \frac{1}{x}e^{-\frac{A|z|}{z}(x\pm 1)^2}dx = -\operatorname{sgn}(\mu)\sqrt{\frac{\pi z}{A|z|}} +O\left(A^{-\frac{3}{2}} + \left(1+\log\left(\frac{|\mu|}{\delta}\right)\right) e^{-c A|z|\re{\frac{1}{z}}}\right).
\]
Plugging in $A=\frac{\pi \mu^2}{4Mk\alpha_j|z|}$ gives that this equals
\begin{align*}
-\frac{2\sqrt{Mk\alpha_j z}}{\mu} +O\left(\frac{k^{\frac 32}|z|^{\frac 32}}{|\mu|^3}+\left(1+\log\left(\frac{|\mu|}{\delta}\right)\right)e^{-\frac{c\mu^2}{k}\re{\frac{1}{z}}}\right),
\end{align*}
where the value of $c$ is changed from the previous line.
These correspond to the main term and the first, second, and fourth error terms in Proposition \ref{prop:intbound}.
\end{proof}
We directly obtain the following corollary by choosing $\delta:=\frac{k|z|}{2\sqrt{2}|\mu|}$ in Proposition \ref{prop:intbound}.
\begin{corollary}\label{cor:intbound}
We have, for some $c>0$,
\[
\mathcal{I}(\mu,k;z)=- \frac{2\sqrt{Mk\alpha_jz}}{\mu}+O\left(\frac{k^{\frac 32}|z|^{\frac 32}}{|\mu|^{3}}+\log\left(\frac{\mu^2}{k|z|}\right)e^{-\frac{c\mu^2}{k}\re{\frac{1}{z}}}\right).
\]
\end{corollary}
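For completeness, here is a short sketch of why this choice of $\delta$ is admissible and of how the error terms collapse: by \eqref{eqn:k|z|bound} we have $k|z|\leq\sqrt{2}$, so $\delta=\frac{k|z|}{2\sqrt{2}|\mu|}\leq\frac{1}{2|\mu|}\leq\frac{|\mu|}{2}$; moreover $\frac{|\mu|\delta}{k|z|}=\frac{1}{2\sqrt{2}}$ is bounded, and
\[
\log\left(\frac{|\mu|}{\delta}\right)=\log\left(\frac{2\sqrt{2}\,\mu^2}{k|z|}\right)\ll 1+\log\left(\frac{\mu^2}{k|z|}\right).
\]
The remaining bounded contributions can be absorbed into the constant $c$ in the exponential, using that $\frac{\mu^2}{k}\re{\frac{1}{z}}\geq\frac{1}{2}$ by \eqref{realbound}.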
\subsection{Summing $\mathcal{I}(\mu,k;z)$}
We next approximate the sum over $\nu$ in the second term of Lemma \ref{lem:Fmodular}.
\begin{lemma}\label{lem:intsumbound}
There exists $c>0$ such that for all $0<k\leq N$ and $\ell\in\mathcal{L}_{Mk}$ we have
\begin{align}
\label{eqn:intsumboundmain}
\sideset{}{^*}\sum_{\nu\geq 0}\sum_{\pm} \mathcal{I}\left(\ell\pm 2Mk\nu,k;z\right)&=-\pi \sqrt{\tfrac{\alpha_jz}{Mk}}\cot\left(\tfrac{\pi\ell}{2Mk}\right)\!+\! O\left(\tfrac{k^{\frac{3}{2}}|z|^{\frac{3}{2}}}{|\ell|^3}\right)\!+\!O\left(\log(k|z|)e^{-\frac{c\ell^2}{k}\re{\frac{1}{z}}}\right)\\
\label{eqn:intsumboundO2}
&=O\left(\tfrac{\sqrt{k|z|}}{|\ell|}+\log(k|z|)e^{-\frac{c\ell^2}{k}\re{\frac{1}{z}}}\right)\\
\label{eqn:intsumboundO3}
&=O\left(\tfrac{n^{\varepsilon}}{|\ell|}\right).
\end{align}
\end{lemma}
\begin{remark}
Note that the first term on the right-hand side
of \eqref{eqn:intsumboundmain}
is always finite because $1-Mk\leq\ell \leq Mk$ with $\ell\neq 0$ implies that the argument of the cotangent is never an integer multiple of $\pi$.
\end{remark}
\begin{proof}[Proof of Lemma \ref{lem:intsumbound}]
Plugging Corollary \ref{cor:intbound} with $\mu=\ell\pm 2Mk\nu$ into the left-hand side of Lemma \ref{lem:intsumbound} and using
\begin{equation*}\label{415}
\pi \cot(\pi x) = \lim\limits_{N \rightarrow \infty} \left(\frac{1}{x} + \sum_{n=1}^{N} \left(\frac{1}{x+n}+\frac{1}{x-n}\right)\right),
\end{equation*}
the main terms from Corollary \ref{cor:intbound} sum to the main term claimed in \eqref{eqn:intsumboundmain}.
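In more detail (a brief sketch, under the convention used above that the $\nu=0$ term of the starred sum is counted once): applying the partial fraction expansion with $x=\frac{\ell}{2Mk}$ gives
\[
\sideset{}{^*}\sum_{\nu\geq 0}\sum_{\pm}\frac{1}{\ell\pm 2Mk\nu}=\frac{1}{\ell}+\sum_{\nu\geq 1}\left(\frac{1}{\ell+2Mk\nu}+\frac{1}{\ell-2Mk\nu}\right)=\frac{\pi}{2Mk}\cot\left(\frac{\pi\ell}{2Mk}\right),
\]
so the main terms $-\frac{2\sqrt{Mk\alpha_jz}}{\ell\pm 2Mk\nu}$ indeed sum to $-\pi\sqrt{\frac{\alpha_jz}{Mk}}\cot\left(\frac{\pi\ell}{2Mk}\right)$.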
To obtain \eqref{eqn:intsumboundmain}, we are left to bound the error terms. Note that since
$1-Mk\leq \ell \leq Mk$ (with $\ell\neq 0$), we have $\frac{2Mk}{|\ell|}\geq 2$. We conclude that since $|\frac{2Mk}{\ell}\nu-1|\geq 2\nu-1\geq \nu$ for $\nu\geq 1$, the sum of the first $O$-term in Corollary \ref{cor:intbound} is
\[
\sideset{}{^*}{\sum}_{\nu\geq 0}\sum_{\pm} \frac{1}{|\ell\pm 2Mk\nu|^3}\leq \frac{1}{|\ell|^3}+ \frac{2}{|\ell|^3} \sum_{\nu\geq 1} \frac{1}{\nu^3}\ll \frac{1}{|\ell|^3},
\]
yielding the first error term in the lemma.
For the final error term, we write
\begin{multline}\label{eqn:absconv}
\sideset{}{^*}\sum_{\nu\geq 0}\sum_{\pm}\log\left(\frac{|\ell\pm 2Mk\nu|^2}{k|z|}\right)e^{-\frac{c(\ell\pm 2Mk\nu)^2}{k}\re{\frac{1}{z}}}\\
=-\log\left(k|z|\right)\sideset{}{^*}\sum_{\nu\geq 0}\sum_{\pm} e^{-\frac{c(\ell\pm 2Mk\nu)^2}{k}\re{\frac{1}{z}}} +2 \sideset{}{^*}\sum_{\nu\geq 0}\sum_{\pm}\log\left(|\ell\pm 2Mk\nu|\right) e^{-\frac{c(\ell\pm 2Mk\nu)^2}{k}\re{\frac{1}{z}}}.
\end{multline}
Since $1-Mk\leq \ell\leq Mk$, we have $d:=|\ell\pm 2Mk\nu|\geq |\ell|$ for every $\nu$ and the terms in all sums in \eqref{eqn:absconv} are non-negative. Hence we may bound \eqref{eqn:absconv} against a constant multiple of
\begin{align*}
e^{-\frac{c\ell^2}{2k}\re{\frac{1}{z}}}\log(k|z|)\sum_{d\geq |\ell|}e^{-\frac{cd^2}{2\sqrt{2}}} + e^{-\frac{c\ell^2}{2k}\re{\frac{1}{z}}}\sum_{d\geq |\ell|}\log(d) e^{-\frac{cd^2}{2\sqrt{2}}}.
\end{align*}
Each of the sums is absolutely convergent and may be bounded by the sum with $|\ell|=1$, giving a uniform bound independent of $\ell$. The first term is dominant because $\log(k|z|)\gg 1$,
yielding \eqref{eqn:intsumboundmain}.
\noindent The approximation \eqref{eqn:intsumboundO2} follows by showing that
\begin{align*}
\frac{k^{\frac{3}{2}}|z|^{\frac{3}{2}}}{|\ell|^3}, \qquad
\sqrt{\frac{|z|}{k}}\left|\cot\left(\frac{\pi \ell}{2Mk}\right)\right|\ll \frac{\sqrt{k|z|}}{|\ell|}.
\end{align*}
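One way to see these two estimates (a sketch): the first is equivalent to $k|z|\ll \ell^2$, which holds by \eqref{eqn:k|z|bound} since $|\ell|\geq 1$; for the second, we have $0<\frac{\pi|\ell|}{2Mk}\leq\frac{\pi}{2}$, so the elementary bound $\cot(x)\leq\frac{1}{x}$ on $\left(0,\frac{\pi}{2}\right]$ gives
\[
\sqrt{\frac{|z|}{k}}\left|\cot\left(\frac{\pi \ell}{2Mk}\right)\right|\leq \frac{2Mk}{\pi|\ell|}\sqrt{\frac{|z|}{k}}=\frac{2M}{\pi}\,\frac{\sqrt{k|z|}}{|\ell|}.
\]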
Finally \eqref{eqn:intsumboundO3} follows by \eqref{realbound}, \eqref{eqn:k|z|bound}, and \eqref{eqn:intsumboundO2}.
\end{proof}
\section{Proof of Theorem \ref{thm:rNrZ}}\label{sec:CircleMethod}
\subsection{Kloosterman's Fundamental Lemma}
To describe Kloosterman's Fundamental Lemma \cite[Lemma 6]{Kloosterman}, we note that for each $0\leq h<k$ with $\gcd(h,k)=1$, there exists a unique $\varrho(h)=\varrho_{k}(h)$ with $0<\varrho(h)\leq k$ for which
\begin{equation*}\label{cong}
h\left(N+\varrho(h)\right)\equiv -1\pmod{k}.
\end{equation*}
\begin{lemma}\label{lem:KloostermanFundamental}
For any $\bm{\nu}\in\mathbb Z^4$, $k\in\mathbb N$, $0<\varrho< k$, and $n\in\mathbb Z$, we have
\[
\left|\sum_{\substack{0\leq h<k\\ \gcd(h,k)=1\\ \varrho(h)\leq \varrho}} e^{-\frac{2\pi i n h}{k}} \prod_{j=1}^{4} G(2M\alpha_jh,\nu_j;k)\right|=O\left(k^{2+\frac{7}{8}+\varepsilon}\gcd(n,k)^{\frac{1}{4}}\right).
\]
Here the $O$-constant is absolute (and in particular independent of $\bm{\nu}$ and $\varrho$).
\end{lemma}
One obtains the value of $\varrho(h)$ by \eqref{eqn:adjacent} and \eqref{eqn:varrhojbnd}.
\begin{lemma}\label{lem:rhoh}
We have
\[
\varrho(h) =\varrho_{k,1}(h).
\]
\end{lemma}
\subsection{Setting up the Circle Method}
Fix $J\subseteq\{1,2,3,4\}$ and write $F(q):=F_{r,M,\bm{\alpha},J}(q)$.
By Cauchy's Theorem, we have
\begin{equation*}
c(n):=c_{r,M,\bm{\alpha},J}(n)=\frac{1}{2\pi i}\int_{\mathcal{C}}\frac{F(q)}{q^{n+1}}dq,
\end{equation*}
where $\mathcal{C}$ is an arbitrary path inside the unit circle that loops around zero in the counterclockwise direction. We choose the circle with radius $e^{-\frac{2\pi}{N^2}}$ with $N:=\lfloor\sqrt{n}\rfloor$ and the parametrization $q=e^{-\frac{2\pi}{N^2}+2\pi i t}$ with $0\leqslant t\leqslant 1$. Thus
\begin{equation*}
c(n)=\int_{0}^{1}F\left(e^{-\frac{2\pi}{N^2}+2\pi i t}\right)e^{\frac{2\pi n}{N^2}-2\pi i n t} dt.
\end{equation*}
Decomposing the path of integration along the Farey arcs $-\vartheta'_{h,k}\leqslant \Phi \leqslant \vartheta''_{h,k}$ with $\Phi=t-\frac{h}{k}$, we obtain
\begin{equation}\label{eqn:c(n)}
c(n)=\sum_{\substack{0\leqslant h<k\leqslant N\\ \gcd(h,k)=1}}
e^{-\frac{2\pi i n h}{k}}\int_{-\vartheta'_{h,k}}^{\vartheta''_{h,k}} F\left(e^{\frac{2\pi i}{k}(h+iz)}\right)e^{\frac{2\pi n z}{k}} d\Phi,
\end{equation}
where $z=k\left(\frac{1}{N^2}-i\Phi\right)$ as above.
Since for $J=\{1,2,3,4\}$ we may use Lemma \ref{lem:mainterm}, we consider the case that $J\neq \{1,2,3,4\}$. For $1\leq\ell \leq 4$, $\bm{\nu}\in\mathbb N_0^4$, $\bm{\lambda}\in \mathcal{L}_{Mk}^4$, and $\bm{\varepsilon}\in\{\pm\}^4$, set
\begin{align*}
\mathcal{I}_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},\ell}(z)&=\mathcal{I}_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},\ell,M,k}(z):=\begin{cases}
\frac{1}{2Mk-1}&\text{if }\ell\in J,\\
\frac{\varepsilon_{\ell}}{2Mk-1}&\text{if }\ell\notin J\text{ and }\nu_{\ell}\neq 0,\\
\displaystyle{\sideset{}{^*}\sum_{\nu\geq 0}\sum_{\pm}}\mathcal{I}(\lambda_{\ell} \pm 2Mk \nu,k;z)&\text{if }\ell\notin J \text{ and }\nu_{\ell}=0,
\end{cases} \\
d_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},\ell}&:=\varepsilon_{\ell} \nu_{\ell}+\delta_{\nu_{\ell}=0}\delta_{\ell\notin J}\lambda_{\ell}.
\end{align*}
\noindent
By Lemmas \ref{lem:thetatrans} and \ref{lem:Fmodular}, we have
\begin{align}\label{eqn:expandnus}
&16M^2 F\left(e^{\frac{2\pi i}{k}(h+iz)}\right)\prod_{j=1}^{4}\sqrt{\alpha_j}\\
&\notag =\frac{e^{\frac{\pi z r^2}{Mk}\sum_{j=1}^4\alpha_j}}{k^2z^2}\sideset{}{^*}\sum_{\bm{\nu}\in \mathbb N_0^4}\sum_{\bm{\lambda}\in \mathcal{L}_{Mk}^4}\sum_{\bm{\varepsilon}\in\{\pm\}^4} \prod_{j=1}^{4} e^{-\frac{\pi \nu_j^2}{4Mk\alpha_{j} z}+\varepsilon_j\frac{\pi i r \nu_j}{Mk}}G\!\left(2M\alpha_j h,2r\alpha_jh+d_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j};k\right)\mathcal{I}_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j}(z).
\end{align}
Plugging \eqref{eqn:expandnus} back into \eqref{eqn:c(n)}, we see that the contribution to $c(n)$ from a given $\bm{\nu}\in\mathbb N_0^4$ is $\frac{1}{16M^2\prod_{j=1}^4\sqrt{\alpha_j}}\, \frac{1}{2^{\sum_{j=1}^4 \delta_{\nu_j=0}}}$ times
\begin{align*}
\bm{I}_{\bm{\nu}}(n)=\bm{I}_{\bm{\nu},\bm{\alpha},M,J}(n):=&
\sum_{\substack{0\leq h<k\leq N \\ \gcd(h,k)=1}} \frac{e^{-\frac{2\pi i n h}{k}}}{k^2}\sum_{\bm{\lambda}\in\mathcal{L}_{Mk}^4}\sum_{\bm{\varepsilon}\in\{\pm\}^4}\prod_{j=1}^4 e^{\varepsilon_j\frac{\pi i r \nu_j}{Mk}}G\!\left(2M\alpha_j h,2r\alpha_jh+d_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j};k\right)
\\
&\times \int_{-\vartheta'_{h,k}}^{\vartheta''_{h,k}} \frac{1}{z^2}e^{\frac{2\pi}{k} \left(n+\frac{r^2}{2M}\sum_{j=1}^4\alpha_j\right)z -\sum_{j=1}^4\frac{\pi \nu_j^2}{4Mk\alpha_jz}}\prod_{j=1}^4
\mathcal{I}_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j}(z)\,
d\Phi.
\end{align*}
\subsection{Bounding $\sum_{\bm{\nu}\in\mathbb N_0^4\setminus\{\bm{0}\}}\bm{I}_{\bm{\nu}}(n)$}\label{sec:nointegral}
The following lemma proves useful for bounding the sum of $\bm{I}_{\bm{\nu}}(n)$ with $\bm{\nu}\neq \bm{0}$.
\begin{lemma}\label{lem:CircleNoIntegral}
Suppose that $0\leq \varrho_1\leq \varrho_2\leq \infty$, $c>0$, and for each $0<k\leq N$ let a subset $\Lambda_k\subseteq \mathbb Z^4\setminus \{\bm{0}\}$ be given. Then, with $\|\bm{\nu}\|^2:=\sum_{1\leq j\leq 4} \nu_j^2$,
\begin{align*}
\sum_{0<k\leq N} \frac{1}{k^2}
\sum_{\bm{\nu}\in \Lambda_k}\sum_{\bm{\lambda}\in \mathcal{L}_{Mk}^4}
\prod_{j=1}^4\frac{1}{\lambda_j}
&\sum_{\bm{\varepsilon}\in \{\pm\}^4}
\sum_{\varrho=\varrho_1}^{\varrho_2} \int_{\frac{1}{k(N+\varrho+1)}}^{\frac{1}{k(N+\varrho)}}\frac{1}{|z|^2}e^{-c\|\bm{\nu}\|^2 \frac{\re{\frac{1}{z}}}{k}} d\Phi \\
&\times\left|\sum_{\substack{0<h<k\\ \gcd(h,k)=1\\ \varrho(h)\leq \varrho }} e^{-\frac{2\pi inh}{k}} \prod_{j=1}^4 G\!\left(2M\alpha_jh,2\alpha_j h\pm
d_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j};k\right)\right| \ll n^{\frac{15}{16}+\varepsilon}.
\end{align*}
\end{lemma}
\begin{proof}
We first use
Lemma \ref{lem:KloostermanFundamental}
and the fact that
\begin{equation}\label{eqn:lambdasum}
\sum_{\lambda_j\in\mathcal{L}_{Mk}}\frac{1}{\lambda_j} \ll \log(k)\ll k^{\varepsilon}.
\end{equation}
Uniformly bounding against the cases $\varrho_1=0$ and $\varrho_2=\infty$ in the lemma, the left-hand side of the lemma may be bounded against
\begin{equation}\label{eqn:CircleAfterFundamental}
\ll
\sum_{0<k\leq N} k^{\frac{7}{8}+\varepsilon}\gcd(n,k)^{\frac{1}{4}}
\sum_{\bm{\nu}\in \Lambda_k}
\int_{0}^{\frac{1}{kN}}\frac{1}{|z|^2}e^{ -c\|\bm{\nu}\|^2\frac{ \re{\frac{1}{z}}}{k}}d\Phi.
\end{equation}
\noindent
By assumption, for every $\bm{\nu}\in \Lambda_k$ we have $\|\bm{\nu}\|\geq 1$ and using \eqref{realbound} we obtain that \eqref{eqn:CircleAfterFundamental} may be bounded against
\begin{equation}\label{eqn:CircleAfterFundamental2}
\ll \sum_{0<k\leq N} k^{\frac{7}{8}+\varepsilon}\gcd(n,k)^{\frac{1}{4}}
\sum_{\bm{\nu}\in \Lambda_k}e^{-\frac{c}{4}\|\bm{\nu}\|^2}
\int_{0}^{\frac{1}{kN}}\frac{1}{|z|^2}e^{ -\frac{c}{2}\frac{ \re{\frac{1}{z}}}{k}}d\Phi.
\end{equation}
It remains to show that \eqref{eqn:CircleAfterFundamental2} is $O\left(n^{\frac{15}{16}+\varepsilon}\right)$.
Since $\Lambda_k\subseteq\mathbb Z^4\setminus \{\bm{0}\}$, we may bound the sum over $\bm{\nu}$ uniformly by
\[
\sum_{\bm{\nu}\in \Lambda_k} e^{-\frac{c}{4}\|\bm{\nu}\|^2}\leq \sum_{\bm{\nu}\in \mathbb Z^4\setminus\{\bm{0}\}} e^{-\frac{c}{4}\|\bm{\nu}\|^2}\ll 1.
\]
We then split the sum and integral in \eqref{eqn:CircleAfterFundamental2} into three pieces:
\begin{equation*}
\sum\nolimits_1 : \quad \sum_{0<k\leq N^{1-\ell}} \int_{0}^{\frac{1}{kN^{1+\ell}}} ,\qquad
\sum\nolimits_2 : \quad \sum_{0<k\leq N^{1-\ell}} \int_{\frac{1}{kN^{1+\ell}}}^{\frac{1}{kN}} ,\qquad
\sum\nolimits_3 : \quad \sum_{ N^{1-\ell}<k\leq N} \int_{0}^{\frac{1}{kN}}
\end{equation*}
for $\ell$ some (arbitrarily small) number.
We first consider $\sum\nolimits_1$. Plugging $0<|\Phi|<\frac{1}{kN^{1+\ell}}$ into the right-hand side of the equality in \eqref{realbound}, we have
\begin{equation*}
\operatorname{Re}\left(\frac{1}{z}\right)> \frac{k N^{2\ell}}{2}.
\end{equation*}
Combining this with the first inequality in \eqref{eqn:k|z|bound},
the contribution from $\sum\nolimits_1$ to \eqref{eqn:CircleAfterFundamental2} is $O\left(e^{-\frac{c}{8}N^{2\ell}}\right)$.
We next turn to $\sum\nolimits_2$. Using
the fact that $\operatorname{Re}\left(\frac{1}{z}\right)>0$,
we bound
\begin{equation*}
\sum\nolimits_2\ll \sum_{0<k\leq N^{1-\ell}} k^{\frac{7}{8}+\varepsilon} \gcd(n,k)^{\frac{1}{4}} \int_{\frac{1}{kN^{1+\ell}}}^{\frac{1}{kN}}\frac{1}{|z|^2}d\Phi.
\end{equation*}
One can show that the integral is $O\left( \frac{N^{1+\ell}}{k}\right)$, yielding
$
\sum\nolimits_2 \ll N^{\frac{15}{8}+\frac{\ell}{8}+\varepsilon}.
$
Choosing $\ell$ sufficiently small (depending on $\varepsilon$), we obtain $\sum\nolimits_2 = O\left( n^{\frac{15}{16}+\varepsilon} \right)$.
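For the integral bound used here, one can argue as follows (a sketch): since $|z|^2=k^2\left(\frac{1}{N^4}+\Phi^2\right)\geq k^2\Phi^2$, we have
\[
\int_{\frac{1}{kN^{1+\ell}}}^{\frac{1}{kN}}\frac{d\Phi}{|z|^2}\leq \frac{1}{k^2}\int_{\frac{1}{kN^{1+\ell}}}^{\infty}\frac{d\Phi}{\Phi^2}=\frac{N^{1+\ell}}{k}.
\]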
We finally turn to $\sum\nolimits_3$.
We bound, choosing $\ell \leq 8\varepsilon$
\begin{equation*}
\sum\nolimits_3 \ll N^2\sum_{ N^{1-\ell}<k\leq N} k^{-\frac{9}{8}+\varepsilon} \gcd(n,k)^{\frac{1}{4}}
\arctan\left(\frac{N}{k}\right)\ll n^{\frac{15}{16}+\varepsilon}. \qedhere
\end{equation*}
\end{proof}
We next bound the contribution, summed over all $h$ and $k$, of the terms with $\bm{\nu}\neq \bm{0}$ in \eqref{eqn:expandnus}.
\begin{proposition}\label{prop:CircleNoIntegral}
If $J\neq \{1,2,3,4\}$, then
\[
\sideset{}{^*}\sum_{\bm{\nu}\in\mathbb N_0^4\setminus \{\bm{0}\}}\bm{I}_{\bm{\nu}}(n)=O\left(n^{\frac{15}{16}+\varepsilon}\right).
\]
\end{proposition}
\begin{proof}
Writing $k+k_j=N+\varrho_{k,j}(h)$ as in \eqref{eqn:rhohdef},
we split the integral in $\bm{I}_{\bm{\nu}}(n)$ as
\begin{equation}\label{eqn:thetasplit}
\int_{-\vartheta'_{h,k}}^{\vartheta''_{h,k}}=
\int_{-\frac{1}{k\left(N+\varrho_{k,1}(h)\right)}}^{0} +\int_{0}^{\frac{1}{k\left(N+\varrho_{k,2}(h)\right)}}
=\sum_{\varrho=\varrho_{k,1}(h)}^{\infty}
\int_{-\frac{1}{k(N+\varrho)}}^{-\frac{1}{k(N+\varrho+1)}}
+\sum_{\varrho=\varrho_{k,2}(h)}^{\infty}
\int_{\frac{1}{k(N+\varrho+1)}}^{\frac{1}{k(N+\varrho)}}.
\end{equation}
\noindent
Interchanging the sums on $h$ and $\varrho$ for the first sum in \eqref{eqn:thetasplit}, its contribution to $\bm{I}_{\bm{\nu}}(n)$ equals
\begin{multline}\label{eqn:NoIntegralFirstSum1}
\sum_{0<k\leq N} \frac{1}{k^2}\sum_{\bm{\lambda} \in \mathcal{L}_{Mk}^4}\sum_{\bm{\varepsilon} \in\{\pm\}^4} e^{\frac{\pi i r}{Mk}\sum_{j=1}^4\varepsilon_j\nu_j} \sum_{\varrho=0}^{\infty}
\int_{-\frac{1}{k(N+\varrho)}}^{-\frac{1}{k(N+\varrho+1)}}\frac{1}{z^2} e^{\frac{2\pi}{k} \left(n+\frac{r^2}{2M}\sum_{j=1}^4\alpha_j\right)z -\sum_{j=1}^4\frac{\pi \nu_j^2}{4Mk\alpha_j z}}\\
\times \prod_{j=1}^4
\mathcal{I}_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j}(z)\,
d\Phi\sum_{\substack{0\leq h<k \\ \gcd(h,k)=1\\ \varrho_{k,1}(h)\leq \varrho}} e^{-\frac{2\pi i n h}{k}}\prod_{j=1}^4
G\!\left(2M\alpha_jh,2\alpha_jh +d_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j};k\right).
\end{multline}
Similarly, interchanging the sums over $h$ and $\varrho$ in the second sum in \eqref{eqn:thetasplit} and then applying Lemma \ref{lem:AdjacentNeighbours} yields a contribution to $\bm{I}_{\bm{\nu}}(n)$ of
\begin{multline}\label{eqn:NoIntegralSecondSum}
\sum_{0<k\leq N} \frac{1}{k^2}\sum_{\bm{\lambda}\in \mathcal{L}_{Mk}^4}\sum_{\bm{\varepsilon}\in\{\pm\}^4} e^{\frac{\pi i r}{Mk}\sum_{j=1}^4\varepsilon_j\nu_j} \sum_{\varrho=0}^{\infty}
\int_{\frac{1}{k(N+\varrho+1)}}^{\frac{1}{k(N+\varrho)}}\frac{1}{z^2} e^{\frac{2\pi}{k} \left(n+\frac{r^2}{2M}\sum_{j=1}^4\alpha_j\right)z -\sum_{j=1}^4\frac{\pi \nu_j^2}{4Mk\alpha_j z}}\\
\times \prod_{j=1}^4
\mathcal{I}_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j}(z)\,
d\Phi\sum_{\substack{0\leq h<k \\ \gcd(h,k)=1\\ \varrho_{k,1}(k-h)\leq \varrho}} e^{-\frac{2\pi i n h}{k}}
\prod_{j=1}^4
G\!\left(2M\alpha_jh,2\alpha_jh +
d_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j}
;k\right).
\end{multline}
Making the change of variables $h\mapsto k-h$, the inner sum becomes
\begin{equation*}\label{eqn:conjG}
\overline{\sum_{\substack{0\leq h<k\leq N \\ \gcd(h,k)=1\\ \varrho_{k,1}(h)\leq \varrho}} e^{-\frac{2\pi i n h}{k}}\prod_{j=1}^{4}G\!\left(2M\alpha_jh,2\alpha_jh -d_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j}
;k\right)}.
\end{equation*}
We then take the absolute value inside all of the sums except the sum on $h$ in both \eqref{eqn:NoIntegralFirstSum1} and \eqref{eqn:NoIntegralSecondSum}. Noting that $|z|^2$ and $\operatorname{Re}\left(\frac{1}{z}\right)$ are the same for $\Phi$ and $-\Phi$, we may make the change of variables $\Phi\mapsto -\Phi$ in \eqref{eqn:NoIntegralFirstSum1} to bound both \eqref{eqn:NoIntegralFirstSum1} and \eqref{eqn:NoIntegralSecondSum} against
\begin{multline}\label{eqn:NoIntegralFirstSum}
\ll \sum_{0<k\leq N}\frac{1}{k^2} \sum_{\bm{\lambda}\in \mathcal{L}_{Mk}^4}\sum_{\bm{\varepsilon}\in \{\pm\}^4}
\sum_{\varrho=0}^{\infty}
\int_{\frac{1}{k(N+\varrho+1)}}^{\frac{1}{k(N+\varrho)}}
\frac{1}{|z|^2} e^{\frac{2\pi}{k} \left(n+\frac{r^2}{2M}\sum_{j=1}^4\alpha_j\right)\re{z} -\sum_{j=1}^4\frac{\pi \nu_j^2}{4Mk\alpha_j}\re{\frac{1}{z}}} \\
\times \prod_{j=1}^4\left|\mathcal{I}_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j}(z)\right|
d\Phi
\left|\sum_{\substack{0\leq h<k \\ \gcd(h,k)=1\\ \varrho_{k,1}(h)\leq \varrho}} e^{-\frac{2\pi i n h}{k}}\prod_{j=1}^4 G\!\left(2M\alpha_jh,2\alpha_jh \pm d_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j};k\right)\right|,
\end{multline}
\noindent where $\pm$ is chosen as ``$+$'' for \eqref{eqn:NoIntegralFirstSum1} and ``$-$'' for \eqref{eqn:NoIntegralSecondSum}.
We note that since $\operatorname{Re}(z)=\frac{k}{N^2}\sim \frac{k}{n}$, we have
\begin{equation}\label{eqn:trivialexponential}
e^{\frac{2\pi}{k}\left(n+\frac{r^2}{2M}\sum_{j=1}^{4} \alpha_j\right)\re{z}}\ll 1.
\end{equation}
We next bound $\left|\mathcal{I}_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j}(z)\right|$. In the case that $j\in J$ or $\nu_j\neq 0$, we trivially bound (using $\lambda_j\in\mathcal{L}_{Mk}$)
\[
\left|\mathcal{I}_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j}(z)\right|=\frac{1}{2Mk-1}<\frac{1}{|\lambda_j|}.
\]
If both $j\notin J$ and $\nu_j=0$, then we use \eqref{eqn:intsumboundO3} to bound
\begin{equation}\label{eqn:Itrivial}
\left|\mathcal{I}_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j}(z)\right|
\ll \frac{n^{\varepsilon}}{|\lambda_j|}.
\end{equation}
Hence, setting $c:=\frac{\pi}{4M\min_{j}(|\alpha_j|)}$, \eqref{eqn:NoIntegralFirstSum} may be bounded against
\begin{multline*}
\ll n^{\varepsilon}
\sum_{0<k\leq N}\frac{1}{k^2}\sum_{\bm{\lambda}\in \mathcal{L}_{Mk}^4}\prod_{j=1}^{4}\frac{1}{|\lambda_{j}|}\sum_{\bm{\varepsilon}\in \{\pm\}^4}
\sum_{\varrho=0}^{\infty}\int_{\frac{1}{k(N+\varrho+1)}}^{\frac{1}{k(N+\varrho)}}\frac{1}{|z|^2} e^{-c \|\bm{\nu}\|^2\frac{\re{\frac{1}{z}}}{k}} d\Phi\\
\times \left|\sum_{\substack{0\leq h<k\leq N \\ \gcd(h,k)=1\\ \varrho_{k,1}(h)\leq \varrho}} e^{-\frac{2\pi i n h}{k}}
\prod_{j=1}^{4}
G\!\left(2M\alpha_jh,2\alpha_jh \pm
d_{\bm{\nu},\bm{\lambda},\bm{\varepsilon},j}
;k\right)\right|.
\end{multline*}
By Lemma \ref{lem:rhoh}, we have $\varrho(h)=\varrho_{k,1}(h)$.
Summing over $\bm{\nu}\in \mathbb N_0^4\setminus\{\bm{0}\}$, we may therefore use Lemma \ref{lem:CircleNoIntegral} with $\Lambda_k=\mathbb N_0^4\setminus\{\bm{0}\}$, $\varrho_1=0$, and $\varrho_2=\infty$ to conclude that $\sideset{}{^*}\sum_{\bm{\nu}\neq \bm{0}} \bm{I}_{\bm{\nu}}(n)$ is $O\left(n^{\frac{15}{16}+\varepsilon}\right)$, giving the bound claimed in the proposition.
\end{proof}
\subsection{Bounding $\bm{I}_{\bm{0}}(n)$}
This subsection is devoted to bounding $\bm{I}_{\bm{0}}(n)$.
\begin{proposition}\label{prop:CircleConstantTerms}
If $J\neq \{1,2,3,4\}$, then
\[
\bm{I}_{\bm{0}}(n)=O\left(n^{\frac{15}{16}+\varepsilon}\right).
\]
\end{proposition}
\begin{proof}
As in the proof of Proposition \ref{prop:CircleNoIntegral}, we first split the sum as in \eqref{eqn:thetasplit} and interchange the sums on $h$ and $\varrho$ and then take the absolute value inside all of the sums other than the sum on $h$. Since $J\neq \{1,2,3,4\}$, without loss of generality we have $4\notin J$. For $1\leq j\leq 3$, we use \eqref{eqn:Itrivial} and we bound $\mathcal{I}_{\bm{0},\bm{\lambda},\bm{\varepsilon},4}(z)$ with \eqref{eqn:intsumboundO2}.
Plugging in \eqref{eqn:trivialexponential}, we hence obtain
\begin{multline}\label{eqn:oneintegral2}
\bm{I}_{\bm{0}}(n)\ll n^{\varepsilon}\sum_{0<k\leq N} \frac{1}{k^2} \sum_{\varrho=0}^{\infty}\int_{\frac{1}{k(N+\varrho+1)}}^{\frac{1}{k(N+\varrho)}}\!\frac{1}{|z|^2}
\sum_{\bm{\lambda}\in \mathcal{L}_{Mk}^4}\!O\left(\frac{\sqrt{k|z|}}{|\lambda_4|}+\log(k|z|)e^{-\frac{c\lambda_4^2\re{\frac{1}{z}}}{k}}\right)d \Phi\\
\times
\prod_{j=1}^3\frac{1}{|\lambda_j|}
\left| \sum_{\substack{0\leq h<k \\ \gcd(h,k)=1\\
\varrho_{k,1}(h)\leq \varrho}} e^{-\frac{2\pi inh}{k}} \prod_{j=1}^{4} G\!\left(2M\alpha_jh, 2\alpha_j h\pm \delta_{j\notin J}\lambda_{j}
;k\right) \right|.
\end{multline}
Plugging in Lemma \ref{lem:rhoh}, the contribution to $\bm{I}_{\bm{0}}(n)$ from the first term in the $O$-term in \eqref{eqn:oneintegral2} is bounded by
\[
\ll n^{\varepsilon}\hspace{-1pt}\sum_{0<k\leq N} \frac{1}{k^{\frac{3}{2}}} \sum_{\bm{\lambda}\in\mathcal{L}_{Mk}^4}\prod_{j=1}^4\frac{1}{|\lambda_j|}
\sum_{\varrho=0}^{\infty} \int_{\frac{1}{k(N+\varrho+1)}}^{\frac{1}{k(N+\varrho)}}
\hspace{-2pt} \frac{d\Phi}{|z|^{\frac{3}{2}}}
\left|\sum_{\substack{0\leq h<k \\ \gcd(h,k)=1\\
\varrho(h)\leq \varrho}} \hspace{-4pt} e^{-\frac{2\pi inh}{k}} \prod_{j=1}^{4} G\!\left(2M\alpha_jh, 2\alpha_j h\pm \delta_{j\notin J}\lambda_j;k\right)\right|.
\]
Using Lemma \ref{lem:KloostermanFundamental} and \eqref{eqn:lambdasum}, we can bound this against
\begin{align}\label{eqn:mainintbound}
\nonumber &\ll n^{\varepsilon} \sum_{0<k\leq N} k^{\frac{11}{8}+\varepsilon}\gcd(n,k)^{\frac{1}{2}}
\sum_{\bm{\lambda}\in\mathcal{L}_{Mk}^4}\prod_{j=1}^4\frac{1}{|\lambda_j|}
\sum_{\varrho=0}^{\infty}
\int_{\frac{1}{k(N+\varrho+1)}}^{\frac{1}{k(N+\varrho)}}
\frac{d\Phi}{|z|^{\frac{3}{2}}} \\
&\ll n^{\varepsilon} \sum_{0<k\leq N} k^{\frac{11}{8}+\varepsilon}\gcd(n,k)^{\frac{1}{2}}
\int_{0}^{\frac{1}{kN}}
\frac{d\Phi}{|z|^{\frac{3}{2}}}.
\end{align}
We split the integral in \eqref{eqn:mainintbound} into the ranges $\Phi<\frac{1}{N^2}$ and $\Phi\geq \frac{1}{N^2}$. Using that for $\Phi\geq \frac{1}{N^2}$ we have $|z|^{\frac{3}{2}}\gg k^{\frac{3}{2}}\Phi^{\frac{3}{2}}$, the contribution from $\Phi\geq \frac{1}{N^2}$ to \eqref{eqn:mainintbound} may be bounded against
\[
\ll n^{\varepsilon}\sum_{0<k\leq N} k^{-\frac{1}{8}+\varepsilon}\gcd(n,k)^{\frac{1}{2}}
\int_{\frac{1}{N^2}}^{\infty}
\Phi^{-\frac{3}{2}}d\Phi
\ll n^{\varepsilon} N\sum_{0<k\leq N} k^{-\frac{1}{8}+\varepsilon}\gcd(n,k)^{\frac{1}{2}}.
\]
For $0<\Phi<\frac{1}{N^2}$, we use the trivial bound $|z|^{\frac{3}{2}}\gg \frac{k^{\frac{3}{2}}}{N^3}$
to obtain that the contribution from $0<\Phi<\frac{1}{N^2}$ to \eqref{eqn:mainintbound} is
\[
\ll n^{\varepsilon} N\sum_{0<k\leq N} k^{-\frac{1}{8}+\varepsilon}\gcd(n,k)^{\frac{1}{2}}.
\]
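To see that these sums are of the claimed size, one can use the standard divisor-sum estimate (a sketch): writing $k=dm$ with $d\mid n$,
\[
\sum_{0<k\leq N} k^{-\frac{1}{8}+\varepsilon}\gcd(n,k)^{\frac{1}{2}}\leq \sum_{d\mid n} d^{\frac{1}{2}}\sum_{\substack{0<k\leq N\\ d\mid k}}k^{-\frac{1}{8}+\varepsilon}\ll \sum_{d\mid n} d^{\frac{3}{8}+\varepsilon}\left(\frac{N}{d}\right)^{\frac{7}{8}+\varepsilon}\ll N^{\frac{7}{8}+\varepsilon}n^{\varepsilon},
\]
so, after multiplying by the outer factor $n^{\varepsilon}N$, each contribution above is $\ll N^{\frac{15}{8}+\varepsilon}n^{\varepsilon}\ll n^{\frac{15}{16}+\varepsilon}$ since $N=\lfloor\sqrt{n}\rfloor$.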
Therefore \eqref{eqn:mainintbound} is $O\left(n^{\frac{15}{16}+\varepsilon}\right)$.
We next consider the contribution to \eqref{eqn:oneintegral2} coming from the second term in the $O$-term. Using \eqref{eqn:k|z|bound} to bound $\log(k|z|)\ll n^{\varepsilon}$, this contribution is
\begin{align*}
\ll n^{\varepsilon}\sum_{0<k\leq N} \frac{1}{k^2}
\sum_{\bm{\lambda}\in\mathcal{L}_{Mk}^4}\prod_{j=1}^3\frac{1}{|\lambda_j|}\sum_{\bm{\varepsilon}\in \{\pm\}^4}
&\sum_{\varrho=0}^{\infty}
\int_{\frac{1}{k(N+\varrho+1)}}^{\frac{1}{k(N+\varrho)}}
\frac{1}{|z|^2} e^{-\frac{c \lambda_4^2\re{\frac{1}{z}}}{k}} d\Phi\\
& \times\left| \sum_{\substack{0\leq h<k \\ \gcd(h,k)=1\\ \varrho_{k,1}(h)\leq \varrho}} e^{-\frac{2\pi inh}{k}} \prod_{j=1}^{4} G\!\left(2M\alpha_jh, 2\alpha_j h+ \delta_{j\notin J}\lambda_j;k\right) \right|,
\end{align*}
and using Lemma \ref{lem:CircleNoIntegral} with $\varrho_1=0$, $\varrho_2=\infty$, and $\Lambda_k=\{(0\ 0\ 0\ \lambda_4)^T: \lambda_4\in \mathcal{L}_{Mk}\}$ shows that this is $O\left(n^{\frac{15}{16}+\varepsilon}\right)$.
\end{proof}
\subsection{Proof of Theorem \ref{thm:rNrZ}}
We are now ready to prove the main theorem.
\begin{proof}[Proof of Theorem \ref{thm:rNrZ}]
(1)
We first use Lemma \ref{lem:ThetaFalse}. If $M$ is odd, then we use the fact that
\begin{equation*}\label{eqn:ThetaModd}
\Theta_{r,M,\bm{\alpha}}^+(\tau)=\Theta_{2r,2M,\bm{\alpha}}^+\left(\frac{\tau}{4}\right).
\end{equation*}
Thus we may assume without loss of generality that $M$ is even. We treat the terms from Lemma \ref{lem:ThetaFalse} individually for each $J\subseteq\{1,2,3,4\}$.
Plugging Propositions \ref{prop:CircleNoIntegral} and \ref{prop:CircleConstantTerms} into \eqref{eqn:expandnus}, we conclude that for $J\neq \{1,2,3,4\}$ we have
\[
c_{r,M,\bm{\alpha},J}(n)=O\left(n^{\frac{15}{16}+\varepsilon}\right).
\]
Thus by Lemma \ref{lem:ThetaFalse}, we have
\[
s_{r,2M,\bm{\alpha}}\left(2Mn+r^2\sum_{1\leq j \leq 4}\alpha_j\right)= \frac{1}{16}c_{r,M,\bm{\alpha}}(n)+ O\left(n^{\frac{15}{16}+\varepsilon}\right).
\]
Plugging in Lemma \ref{lem:mainterm} (1) then yields
\[
s_{r,2M,\bm{\alpha}}\left(2Mn+r^2\sum_{1\leq j \leq 4}\alpha_j\right)= \frac{1}{16}s_{r,2M,\bm{\alpha}}^*\left(2Mn+r^2\sum_{1 \leq j \leq 4}\alpha_j\right)+ O\left(n^{\frac{15}{16}+\varepsilon}\right).
\]
Since
\[
s_{r,2M,\bm{\alpha}}(n)=s_{r,2M,\bm{\alpha}}^*(n)=0
\]
if $n\not\equiv r^2\sum_{j=1}^4\alpha_j\pmod{2M}$, the claim follows.
\noindent (2) By Lemma \ref{lem:rexactly}, we have
\[
r_{m,\bm{\alpha}}(n)=r_{m,\bm{\alpha}}^+(n)+O\left(n^{\frac{1}{2}+\varepsilon}\right).
\]
Lemma \ref{lem:r+s+rel} then yields
\[
r_{m,\bm{\alpha}}^+(n)=s_{m,2(m-2),\bm{\alpha}}\left(8(m-2)\left(n-\sum_{1\leq j \leq 4}\alpha_j\right)+m^2\sum_{1 \leq j \leq 4}\alpha_j\right).
\]
Thus by part (1) and Lemma \ref{lem:mainterm} we have
\begin{align*}
r_{m,\bm{\alpha}}^+(n)&=\frac{1}{16}s_{m,2(m-2),\bm{\alpha}}^*\left(8(m-2)\left(n-\sum_{1\leq j\leq 4}\alpha_j\right)+m^2\sum_{1\leq j \leq 4}\alpha_j\right)+O\left(n^{\frac{15}{16}+\varepsilon}\right)\\
&=\frac{1}{16}r_{m,\bm{\alpha}}^*(n)+ O\left(n^{\frac{15}{16}+\varepsilon}\right). \qedhere
\end{align*}
\end{proof}
\section{Proofs of Corollaries \ref{cor:hexagonal}, \ref{cor:hexagonal2}, and \ref{cor:pentagonal}}\label{sec:Corollaries}
In this section, we prove Corollaries \ref{cor:hexagonal}, \ref{cor:hexagonal2}, and \ref{cor:pentagonal}.
\begin{proof}[Proof of Corollary \ref{cor:hexagonal}]
By Theorem \ref{thm:rNrZ} (2), we have
\begin{equation}\label{eqn:hexagonal1}
r_{6,\bm{\alpha}}(n)=\frac{1}{16} r_{6,\bm{\alpha}}^*(n)+O\left(n^{\frac{15}{16}+\varepsilon}\right).
\end{equation}
Completing the square in the special case $\bm{\alpha}=(1,1,1,1)$, we obtain
\[
r_{6,(1,1,1,1)}^*(n)= s_{3,4,(1,1,1,1)}^*(8n+4).
\]
Note that by the change of variables $x_j\mapsto \varepsilon_j x_j$ with $\bm{\varepsilon}\in \{\pm\}^4$, we have
\[
s_{3,4,(1,1,1,1)}^*(8n+4)=\frac{1}{16}s_{1,2,(1,1,1,1)}^*(8n+4).
\]
Cho \cite[Example 3.3]{Cho} computed
\[
s_{1,2,(1,1,1,1)}^*(8n+4)=16 \sigma(2n+1).
\]
Thus
\[
r_{6,(1,1,1,1)}^*(n)= \sigma(2n+1).
\]
Plugging this back into \eqref{eqn:hexagonal1} yields the claim.
\end{proof}
We next prove Corollary \ref{cor:hexagonal2}.
\begin{proof}[Proof of Corollary \ref{cor:hexagonal2}]
Using \cite[Example 3.4]{Cho}, the argument is essentially identical to the proof of Corollary \ref{cor:hexagonal}, except that in this case it is not immediately obvious that the main term is always positive. For this we use multiplicativity to bound
\[
-\sum_{d\mid (8n+5)} \left(\frac{8}{d}\right) d\geq \varphi(8n+5)\gg n^{1-\varepsilon},
\]
where $\varphi$ denotes the Euler totient function.
\end{proof}
We finally prove Corollary \ref{cor:pentagonal}.
\begin{proof}[Proof of Corollary \ref{cor:pentagonal}]
By Theorem \ref{thm:rNrZ} (2), we have
\begin{equation}\label{eqn:pentagonal}
r_{5,(1,1,1,1)}(n)=\frac{1}{16} r_{5,(1,1,1,1)}^*(n)+O\left(n^{\frac{15}{16}+\varepsilon}\right).
\end{equation}
Completing the square, we obtain
\begin{equation}\label{eqn:rspentagonal}
r_{5,(1,1,1,1)}^*(n)= s_{5,6,(1,1,1,1)}^*(24n+4).
\end{equation}
Using \cite[Proposition 2.1]{Shimura}, it is not hard to show that the generating function $\Theta_{5,6,(1,1,1,1)}^*$ for $s_{5,6,(1,1,1,1)}^*$ is a modular form of weight two on $\Gamma_0(144)$.
We next claim that
\begin{equation}\label{eqn:Thetamin16split}
\Theta_{5,6,(1,1,1,1)}^*(\tau)= \frac{2}{3}E(4\tau)+\frac{1}{3}\eta^4(24\tau),
\end{equation}
where
\begin{align*}
E(\tau)&:=\sum_{n\equiv 1\pmod{6}} \sigma(n) q^n,& \eta(\tau)&:=q^{\frac{1}{24}}\prod_{n\geq 1}\left(1-q^n\right).
\end{align*}
For this we note first that $\tau\mapsto\eta^4(24\tau)$ is a cusp form of weight two on $\Gamma_0(144)$. We next recall that for a translation-invariant function $f$ with Fourier expansion
$
f(\tau)=\sum_{n\geq 0} c_f(v;n) q^n,
$
the \emph{quadratic twist of $f$ with a character $\chi$} is given by
\[
f\otimes \chi(\tau):=\sum_{n\geq 0} \chi(n) c_{f}(v;n) q^n.
\]
For $\delta\in\mathbb N$, one also defines the \emph{$V$-operator} and \emph{$U$-operator} by
\begin{equation*}
f\big| V_{\delta}(\tau):=\sum_{n\geq 0} c_{f}\left(\delta v;n\right)q^{\delta n},\qquad
f\big| U_{\delta}(\tau):=\sum_{n\geq 0} c_{f}\left(\frac{v}{\delta};\delta n\right)q^{n}.
\end{equation*}
A straightforward generalization of the proof for holomorphic modular forms (see \cite[Proposition 17 (b) of Section 3]{Koblitz} and \cite[Lemma 1]{LiWinnie}) yields that if $f$ satisfies weight $k\in\mathbb Z$ modularity on $\Gamma_0(N)$ and $\chi$ is a character with modulus $M$, then $f\otimes \chi$ satisfies weight $k$ modularity on $\Gamma_0(\operatorname{lcm}(N,M^2))$ with character $\chi^2$, $f|U_{\delta}$ satisfies weight $k$ modularity on
$\Gamma_0(\operatorname{lcm}(\frac{N}{\gcd(N,\delta)},\delta))$, and $f|V_{\delta}$ satisfies weight $k$ modularity on $\Gamma_0(\delta N)$.
Recall the weight two Eisenstein series
\[
E_2(\tau):=1-24\sum_{n\geq 1} \sigma(n) q^n
\]
and set $\chi_D(n):=\left(\frac{D}{n}\right)$. We see that
\[
E=-\frac{1}{48}\left(E_2\otimes \chi_{-3}+E_2\otimes \chi_{-3}^2\right)\big|\left(1-U_2V_2\right).
\]
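A brief coefficient check of this identity may be helpful (a sketch, with $\chi_{-3}$ the quadratic character of conductor $3$): for $n\geq 1$, the $n$-th coefficient of $E_2\otimes \chi_{-3}+E_2\otimes\chi_{-3}^2$ equals $-24\sigma(n)\left(\chi_{-3}(n)+\chi_{-3}(n)^2\right)$, which is $-48\sigma(n)$ if $n\equiv 1\pmod{3}$ and $0$ otherwise, while the constant term of $E_2$ is annihilated by the twists. Dividing by $-48$ and applying $1-U_2V_2$, which removes the coefficients at even indices, leaves exactly $\sum_{n\equiv 1\pmod{6}}\sigma(n)q^n=E(\tau)$.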
Letting $\widehat{E}_2(\tau):=E_2(\tau)-\frac{3}{\pi v}$ be the completed weight two Eisenstein series, we easily conclude
\[
E=-\frac{1}{48}\left(\widehat{E}_2\otimes \chi_{-3}+E_2\otimes\chi_{-3}^2\right)\big|\left(1-U_2V_2\right).
\]
Since $\widehat{E}_2$ is modular of weight two on $\operatorname{SL}_2(\mathbb Z)$, we see that $E$ is modular of weight two on $\Gamma_0(36)$. Since it is holomorphic, we conclude that the right-hand side of \eqref{eqn:Thetamin16split} is a weight two modular form on $\Gamma_0(144)$. By the valence formula, \eqref{eqn:Thetamin16split} is true as long as it is true for the first 48 Fourier coefficients, which is easily checked with a computer.
By work of Deligne \cite{Deligne}, we know that the $n$-th coefficient of $\eta^4(24\tau)$ is $\ll n^{\frac{1}{2}+\varepsilon}$. Therefore, writing the $n$-th coefficient of $E$ as $c_E(n)$, we conclude from \eqref{eqn:Thetamin16split} that
\[
s_{5,6,(1,1,1,1)}^*(24n+4)=\frac{2}{3}c_E(6n+1)+O\left(n^{\frac{1}{2}+\varepsilon}\right)= \frac{2}{3}\sigma(6n+1) + O\left(n^{\frac{1}{2}+\varepsilon}\right).
\]
Plugging back into \eqref{eqn:rspentagonal} and then plugging this into \eqref{eqn:pentagonal} implies that
\[
r_{5,(1,1,1,1)}(n)=\frac{1}{16}r_{5,(1,1,1,1)}^*(n)+O\left(n^{\frac{15}{16}+\varepsilon}\right) = \frac{1}{24}\sigma(6n+1) + O\left(n^{\frac{15}{16}+\varepsilon}\right).\qedhere
\]
\end{proof}
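For readers who wish to reproduce the coefficient check behind \eqref{eqn:Thetamin16split}, the following short Python sketch (added here for convenience; it is not part of the original argument) computes the first Fourier coefficients of the right-hand side $\frac{2}{3}E(4\tau)+\frac{1}{3}\eta^4(24\tau)$ directly from the definitions above. The routine \texttt{theta\_coefficient} is only a placeholder for the coefficients of $\Theta_{5,6,(1,1,1,1)}^*$, whose definition appears earlier in the paper.
\begin{verbatim}
from fractions import Fraction

N = 49  # compare the first 48 coefficients (plus the constant term)

def sigma(n):
    # sum of positive divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

# E(4*tau): E(tau) = sum_{n = 1 mod 6} sigma(n) q^n, so E(4*tau) has
# coefficient sigma(m) at exponent 4*m whenever m = 1 mod 6.
E4 = [0] * N
for m in range(1, N):
    if m % 6 == 1 and 4 * m < N:
        E4[4 * m] = sigma(m)

# eta(24*tau)^4 = q^4 * prod_{n >= 1} (1 - q^{24 n})^4, truncated at q^N.
eta4 = [0] * N
eta4[4] = 1
for n in range(1, N // 24 + 1):
    for _ in range(4):  # multiply by (1 - q^{24 n}) four times
        new = eta4[:]
        for k in range(N - 24 * n):
            new[k + 24 * n] -= eta4[k]
        eta4 = new

rhs = [Fraction(2, 3) * E4[n] + Fraction(1, 3) * eta4[n] for n in range(N)]
# for n in range(N):
#     assert theta_coefficient(n) == rhs[n]  # placeholder comparison
print(rhs[:30])
\end{verbatim}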
\begin{thebibliography}{99}
\bibitem{Blomer} V. Blomer, \begin{it}Uniform bounds for Fourier coefficients of theta-series with arithmetic applications\end{it}, Acta Arith. \textbf{114} (2004), 1--21.
\bibitem{BringmannNazaroglu} K. Bringmann and C. Nazaroglu, \begin{it}A framework for modular properties of false theta functions\end{it}, Research in the Mathematical Sciences \textbf{6:30} (2019).
\bibitem{Cauchy} A.-L. Cauchy, \begin{it}D\'emonstration du th\'eor\`eme g\'en\'eral de Fermat sur les nombres polygones\end{it}, M\'em. Sci. Math. Phys. Inst. France \textbf{14} (1813--1815), 177--220; Oeuvres compl\`etes \textbf{VI} (1905), 320--353.
\bibitem{Cho} B. Cho, \begin{it}On the number of representations of integers by quadratic forms with congruence conditions\end{it}, J. Math. Anal. Appl. \textbf{462} (2018), 999--1013.
\bibitem{Deligne} P. Deligne, \begin{it}La conjecture de Weil I\end{it}, Inst. Hautes \'Etudes Sci. Publ. Math. \textbf{43} (1974), 273--307.
\bibitem{Guy} R. Guy, \begin{it}Every number is expressible as the sum of how many polygonal numbers?\end{it}, Amer. Math. Monthly \textbf{101} (1994), 169--172.
\bibitem{Kloosterman} H. Kloosterman, \begin{it}On the representation of numbers of the form $ax^2+by^2+cz^2+dt^2$\end{it}, Acta Math. \textbf{49} (1926), 407--464.
\bibitem{Koblitz} N. Koblitz, \begin{it}Introduction to elliptic curves and modular forms\end{it}, Graduate Texts in Math. \textbf{97}, Springer, 1984.
\bibitem{LiWinnie} W. Li, \begin{it}Newforms and functional equations\end{it}, Math. Ann. \textbf{212} (1975), 285--315.
\bibitem{Mordell} L. Mordell, \begin{it}On the representations of numbers as sums of $2r$ squares\end{it}, Quart. J. Pure and Appl. Math., Oxford \textbf{48} (1917), 93--104.
\bibitem{Nathanson} M. Nathanson, \begin{it}A short proof of Cauchy's polygonal number theorem\end{it}, Proc. Amer. Math. Soc. \textbf{99} (1987), 22--24.
\bibitem{Ramanujan} S. Ramanujan, \begin{it}On certain arithmetical functions\end{it}, Trans. Cambridge Phil. Soc. \textbf{9} (1916), 159--184.
\bibitem{Shimura} G. Shimura, \begin{it}On modular forms of half integral weight\end{it}, Ann. Math. \textbf{97} (1973), 440--481.
\bibitem{ShimuraCongruence} G. Shimura, \begin{it}Inhomogeneous quadratic forms and triangular numbers\end{it}, Amer. J. Math. \textbf{126} (2004), 191--214.
\bibitem{Siegel1} C. Siegel, \begin{it}Indefinite quadratische Formen und Funktionentheorie, I\end{it}, Math. Ann. \textbf{124} (1951), 17--54.
\bibitem{Siegel2} C. Siegel, \begin{it}Indefinite quadratische Formen und Funktionentheorie, II\end{it}, Math. Ann. \textbf{124} (1951), 364--387.
\bibitem{vanderBlij} F. van der Blij, \begin{it}On the theory of quadratic forms\end{it}, Ann. Math. \textbf{50} (1949), 875--883.
\bibitem{Weil} A. Weil, \begin{it}Sur la formule de Siegel dans la th\'eorie des groupes classiques\end{it}, Acta Math. \textbf{113} (1965), 1--87.
\bibitem{Williams} K. Williams, \begin{it}Number theory in the spirit of Liouville\end{it}, London Math. Soc. Student Texts \textbf{76}, Cambridge University Press, 2011.
\bibitem{Zagier123} D. Zagier, \begin{it}Elliptic modular forms and their applications\end{it}, in ``The 1-2-3 of modular forms: lectures at a Summer School in Nordfjordeid, Norway'' (ed. K. Ranestad), Universitext, Springer-Verlag, Berlin-Heidelberg-New York (2008), 1--103.
\end{thebibliography}
\end{document}
|
\begin{document}
\title[On normal subgroups of $D^*$]{On normal subgroups of $D^*$ whose elements are periodic modulo the center of $D^*$ of \\bounded order}
\dedicatory{Dedicated to Professor Hendrik W. Lenstra for his 65th birthday}
\author[Mai Hoang Bien]{Mai Hoang Bien}
\address{Mathematisch Instituut, Leiden Universiteit, The Netherlands and Dipartimento di Matematica, Universit\`{a} degli Studi di Padova, Italy.}
\curraddr{Mathematisch Instituut, Leiden Universiteit, Niels Bohrweg 1, 2333 CA Leiden, The Netherlands.}
\email{[email protected]}
\keywords{Division ring, normal subgroup, radical, central. \\
\protect \indent 2010 {\it Mathematics Subject Classification.} 16K20.}
\maketitle
\begin{abstract} Let $D$ be a division ring with the center $F=Z(D)$. Suppose that $N$ is a normal subgroup of $D^*$ which is radical over $F$, that is, for any element $x\in N$, there exists a positive integer $n_x$, such that $x^{n_x}\in F$. In \cite{Her1}, Herstein conjectured that $N$ is contained in $F$. In this paper, we show that the conjecture is true if there exists a positive integer $d$ such that $n_x\le d$ for any $x\in N$. \end{abstract}
\section{Introduction}
Let $D$ be a division ring with the center $F=Z(D)$. For an element $x\in D$, if there exists a positive integer $n_x$ such that $x^{n_x}\in F$ and $x^{m}\notin F$ for any positive integer $m<n_x$ then $x$ is called {\it $n_x$-central}. If $n_x=1$, $x$ is said to be {\it central}. A subgroup $N$ of the unit group $D^*$ of $D$ is called {\it radical} over $F$ if for any element $x\in N$, there exists $n_x>0$ such that $x$ is $n_x$-central. Such a subgroup $N$ is called {\it central} if $n_x=1$ for any $x\in N$. In other words, $N$ is central if and only if $N$ is contained in $F$.
In 1978, Herstein \cite{Her1} conjectured that if a subnormal subgroup $N$ of $D^*$ is radical over $F$ then it is central. Two years later, he considered the conjecture again and proved that the assumption ``subnormal" in this conjecture is equivalent to ``normal" (see \cite[Lemma 1]{Her2}). That is, he asked whether a normal subgroup of $D^*$ is central if it is radical over $F$. In \cite{Her1}, Herstein proved that the conjecture holds if $N$ is torsion. As a consequence, one can see that the conjecture is also true if $D$ is centrally finite. We notice that in \cite{HH}, there is a different proof of this fact. Recall that a division ring $D$ with the center $F$ is called {\it centrally finite} if $D$ is a finite dimensional vector space over $F$ \cite[Definition 14.1]{Lam}. In \cite{Her2}, by using the Pigeon-Hole Principle, Herstein also showed that the conjecture holds if $F$ is uncountable.
Recently, there have been some efforts to answer this conjecture. In \cite{HDB1} and \cite{HDB2}, we proved that the conjecture holds if $D$ is either of type $2$ or weakly locally finite. In fact, we obtained a more general result: if a normal subgroup of $D^*$ is radical over a proper division subring $K$ of $D$, then it is central, provided $D$ is either of type $2$ or weakly locally finite. Recall that a division ring $D$ is of {\it type $2$} if $\dim_FF(x,y)<\infty$ for any $x, y\in D^*$. If $F(S)$ is a centrally finite division ring for any finite subset $S$ of $D$, then $D$ is called {\it weakly locally finite}. Here, $F(S)$ denotes the division subring of $D$ generated by $F\cup S$. In general, the conjecture remains open.
In this paper, we give a positive answer to this conjecture in a particular case. In fact, we prove the following theorem.
\begin{Th} Let $D$ be a division ring and $N$ be a normal subgroup of $D^*$. If there exists a positive integer $d$ such that every element $x\in N$ is $n_x$-central for some positive integer $n_x\le d$ then $N$ is central. \end{Th}
\section{The proof of the Theorem}
The main technique we use in this paper is that of generalized rational expressions. For later use, we recall some definitions and prove some lemmas.
First, based on the structure of twisted Laurent series rings, we construct a division ring which will be used in the next lemmas. Let $R$ be a ring and $\phi$ be a ring automorphism of $R$. We write $\mathcal{R}=R((t,\phi))$ for the ring of formal Laurent series $\sum\limits_{i = n}^\infty {{a_i}{t^i}}$, where $n\in \mathbb{Z}, a_i\in R$, with the multiplication defined by the twist equation $ta=\phi(a)t$ for every $a\in R$. In case $\phi(a)=a$ for any $a\in R$, we write $R((t))=R((t,\phi))$. If $R=D$ is a division ring then $\mathcal{D}=D((t,\phi))$ is also a division ring (see \cite[Example 1.8]{Lam}). Moreover, we have the following.
\begin{Lemma}\label{2.1} Let $R=D$ be a division ring, $\mathcal{D}=D((t,\phi))$ be as above, $F=Z(D)$ be the center of $D$, and $L=\{\, a\in D\mid \phi(a)=a\}$ be the fixed division ring of $\phi$ in $D$. If the center $k=Z(L)$ of $L$ is contained in $F$, then the center of $\mathcal{D}$ is
$$Z(\mathcal{D})=\left\{ \begin{array}{ll}
k & \text{ if } \phi \text{ has infinite order, }\\
k((t^s)) & \text{ if } \phi \text{ has finite order } s.
\end{array} \right.$$
\end{Lemma}
\begin{Proof} The proof is similar to \cite[Proposition 14.2]{Lam}. It suffices to prove that $Z(\mathcal{D})\subseteq k$ if $\phi$ has infinite order, and $Z(\mathcal{D})\subseteq k((t^s))$ in case $\phi$ has finite order $s$, since it is easy to check that $k((t^s))\subseteq Z(\mathcal{D})$ if $\phi$ has finite order $s$. Let $\alpha=\sum\limits_{i = n}^\infty {{a_i}{t^i}}$ be in $Z(\mathcal{D})$. We first prove that $a_i\in k$ for every $i\ge n$. One has $\sum\limits_{i = n}^\infty {{a_i}{t^{i+1}}}=(\sum\limits_{i = n}^\infty {{a_i}{t^i}})t=t\sum\limits_{i = n}^\infty {{a_i}{t^i}}=\sum\limits_{i = n}^\infty {{\phi(a_i)}{t^{i+1}}}$. Hence, $\phi(a_i)=a_i$ for every $i\ge n$, which means $a_i\in L$ for every $i\ge n$. Moreover, for any $a\in L$, $\sum\limits_{i = n}^\infty {{aa_i}{t^{i}}}=(\sum\limits_{i = n}^\infty {{a_i}{t^i}})a=\sum\limits_{i = n}^\infty {{a_i\phi(a)}{t^i}}=\sum\limits_{i = n}^\infty {{a_ia}{t^{i}}}$. Therefore, $aa_i=a_ia$ for every $i\ge n$, and hence $a_i\in k$ for every $i\ge n$. Now for any $b\in D$, $\sum\limits_{i = n}^\infty {{ba_i}{t^{i}}}=(\sum\limits_{i = n}^\infty {{a_i}{t^i}})b=\sum\limits_{i = n}^\infty {{a_i\phi^i(b)}{t^i}}=\sum\limits_{i = n}^\infty {{\phi^i(b)a_i}{t^i}}$, so that $ba_i=\phi^i(b)a_i$ for every $i\ge n$.
{\bf Case 1.} The automorphism $\phi$ has infinite order. For each $i\ne 0$ the automorphism $\phi^i$ is not the identity, so we may choose $b\in D$ with $\phi^i(b)\ne b$; the relation $(b-\phi^i(b))a_i=0$ then forces $a_i=0$. Hence $\alpha=a_0\in k$.
{\bf Case 2.} The automorphism $\phi$ has finite order $s$. For any $i$ which is not divisible by $s$, we have $\phi^i\ne\mathrm{id}$, so choosing $b\in D$ with $\phi^i(b)\ne b$ in the relation $(b-\phi^i(b))a_i=0$ gives $a_i=0$. Therefore, $\alpha=\sum\limits_{i = m}^\infty {{a_{si}}{t^{si}}}\in k((t^s))$ for some $m\in\mathbb{Z}$.
\end{Proof}
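As a quick illustration of Lemma~\ref{2.1} (an example added here for the reader; it is not used in the sequel), take $D=\mathbb{C}$ and let $\phi$ be complex conjugation. Then $L=\mathbb{R}$, so $k=Z(L)=\mathbb{R}\subseteq \mathbb{C}=F$, and $\phi$ has finite order $s=2$; the lemma therefore gives
$$Z\big(\mathbb{C}((t,\phi))\big)=\mathbb{R}((t^{2})).$$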
Let $\{\,t_i\mid i\in \mathbb{Z}\,\}$ be a countable set of indeterminates and $D$ be a division ring. We construct a family of division rings in the following way. Set $$D_0=D((t_0)), D_1 =D_0((t_1)),$$ $$D_{-1}=D_1((t_{-1})), D_2=D_{-1}((t_{2})),$$
for any $n>1,$ $$ D_{-n}=D_n((t_{-n})),D_{n+1}=D_{-n}((t_{n+1})).$$ Now put $D_{\infty}=\bigcup\limits_{n=-\infty}^{+\infty} {{D_n}}$. Then $D_\infty$ is a division ring. Assume that $F$ is the center of $D$. By Lemma~\ref{2.1}, it is elementary to prove by induction on $n\ge 0$ that the center of $D_0$ is $F_0=F((t_0))$, the center of $D_{n+1}$ is $F_{n+1}=F_{-n}((t_{n+1}))$ and the center of $D_{-n}$ is $F_{-(n+1)}=F_{n+1}((t_{-(n+1)}))$. In particular, $F$ is contained in $Z(D_\infty)$. Consider an automorphism $f$ on $D_\infty$ defined by $f(a)=a$ for any $a$ in $D$ and $f(t_i)=t_{i+1}$ for every $i\in \mathbb{Z}$.
\begin{Prop}\label{2.2} Let $D, D_\infty$ and $f$ be as above. Then $\mathcal{D}=D_\infty((t,f))$ is a division ring whose center coincides with the center $F$ of $D$.
\end{Prop}
\begin{Proof} Note that $D$ is the fixed division ring of $f$ in $D_\infty$. Since the center $F$ of $D$ is contained in the center of $D_\infty$ and $f$ has infinite order, Lemma~\ref{2.1} yields $Z(\mathcal{D})=F.$\end{Proof}
Recall that a {\it generalized rational expression} of a division ring $D$ is an expression constructed from $D$ and a set of noncommutative indeterminates using addition, subtraction, multiplication and division. A generalized rational expression over $D$ is called a {\it generalized rational identity} if it vanishes on all permissible substitutions from $D$. A generalized rational expression $f$ of $D$ is called nontrivial if there exists an extension division ring $D_1$ of $D$ such that $f$ is not a generalized rational identity of $D_1$. The details of generalized rational identities can be found in \cite{Rowen}.
Given a positive integer $n$ and $n+1$ noncommutative indeterminates $x,y_1,\cdots, y_n$, put $$g_n(x,y_1,y_2,\cdots, y_n)=\sum\limits_{\delta \in {S_{n + 1}}} {\mbox{\rm sign}(\delta ).{x^{\delta (0)}}{y_1}{x^{\delta (1)}}{y_2}{x^{\delta (2)}} \ldots {y_n}{x^{\delta (n)}}}, $$ where $S_{n+1}$ is the symmetric group of $\{\,0,1,\cdots, n\,\}$ and $\mbox{\rm sign}(\delta)$ is the sign of the permutation $\delta$. This is the generalized rational expression defined in \cite{BMM} to relate algebraic elements of degree $n$ to polynomials. We record a first property of this expression.
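To make the definition concrete (this small expansion is added here and is not taken from \cite{BMM}), consider the case $n=1$: the sum runs over the two permutations of $\{0,1\}$, and
$$g_1(x,y_1)=x^{0}y_1x^{1}-x^{1}y_1x^{0}=y_1x-xy_1,$$
so $g_1(a,r)$ is simply the commutator $[r,a]$.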
\begin{Lemma}\label{3.1} Let $D$ be a division ring with the center $F$. For any element $a\in D$, the following are equivalent:
\begin{enumerate}
\item The element $a$ is algebraic over $F$ of degree less than $n$.
\item $g_n(a,r_1,r_2,\cdots, r_n)=0$ for any $r_1, r_2,\cdots, r_n\in D$.
\end{enumerate}
\end{Lemma}
\begin{Proof} See \cite[Corollary 2.3.8]{BMM}.
\end{Proof}
Let $D$ be a division ring with center $F$ and $a$ be an element of $D$. Then, by definition, $g_n(axa^{-1}x^{-1},y_1,y_2,\cdots, y_n)$ is also a generalized rational expression of $D$. Notice that, in general, the expression $g_n(x,y_1,\cdots, y_n)$ is a polynomial, but $g_n(axa^{-1}x^{-1},y_1,y_2,\cdots, y_n)$ is not necessarily a polynomial. If $a$ is algebraic of degree less than $n$ over $F$ then $g_n(a,y_1,y_2,\cdots, y_n)$ is a trivial generalized rational expression according to Lemma~\ref{3.1}. However, the following lemma shows that $g_n(axa^{-1}x^{-1},y_1,y_2,\cdots, y_n)$ is always nontrivial if $a$ is not in $F$.
\begin{Lemma}\label{1.1} Let $D$ be a division ring with center $F$. If $a\in D\backslash F$ then the generalized rational expression $g_n(axa^{-1}x^{-1},y_1,y_2,\cdots, y_n)$ is nontrivial.
\end{Lemma}
\begin{Proof} Let $D_\infty$, $\mathcal{D}=D_\infty ((t, f))$ and $F$ be as in Proposition~\ref{2.2}. Since $a\notin F$, there exists $b\in D$ such that $c:=aba^{-1}b^{-1}\ne 1$. Because $a,b,c$ commute with $t$, $$(c-1)(1+b^{-1}t)^{-1}+1=a(b+t)a^{-1}(b+t)^{-1}.$$ If $a(b+t)a^{-1}(b+t)^{-1}$ is algebraic over $F$ then so is $(c-1)(1+b^{-1}t)^{-1}$. Hence, $(c-1)^{-1}+b^{-1}(c-1)^{-1}t=((c-1)(1+b^{-1}t)^{-1})^{-1}$ is algebraic over $F$. Let $p(x)=x^m+a_{m-1}x^{m-1}+\cdots +a_1x+a_0$, with $m>0$, be the minimal polynomial of $(c-1)^{-1}+b^{-1}(c-1)^{-1}t$ over $F$. This means $$ 0=((c-1)^{-1}+b^{-1}(c-1)^{-1}t)^m+\cdots +a_1((c-1)^{-1}+b^{-1}(c-1)^{-1}t)+a_0.$$ Comparing the coefficients of $t^m$ on both sides, we obtain $(b^{-1}(c-1)^{-1})^m=0$, a contradiction. Therefore, $a(b+t)a^{-1}(b+t)^{-1}$ is not algebraic over $F$. Using Lemma~\ref{3.1}, we have $$g_n(a(b+t)a^{-1}(b+t)^{-1},r_1,r_2,\cdots, r_n)\ne 0,$$ for some $r_1,r_2,\cdots, r_n\in \mathcal{D}$. This means $g_n(axa^{-1}x^{-1},y_1,y_2,\cdots, y_n)$ is nontrivial.
\end{Proof}
A polynomial identity ring is a ring $R$ with a non-zero polynomial $P$ vanishing on all permissible substitutions from $R$. In this case, $P$ is called {\it polynomial identity} of $R$ or we say that $R$ {\it satisfies} $P$. There is a well-known result: a division ring is a polynomial identity division ring if and only if it is centrally finite (see \cite[Theorem 6.3.1]{Her3}). We have a similar property for generalized rational identity division rings.
\begin{Lemma}\label{3.3} Let $D$ be a division ring with the center $F$. If there exists a nontrivial generalized rational identity of $D$ then either $D$ is centrally finite or $F$ is finite.
\end{Lemma}
\begin{Proof} See \cite[Theorem 8.2.15]{Rowen}.
\end{Proof}
Now we are ready to prove our Theorem.\\
\noindent
{\bf Proof of Theorem 1.1}\\
Suppose that $N$ is not contained in $F$. Then, there exists $a\in N\backslash F$. For any $d+1$ elements $r, r_1,r_2,\cdots, r_d$ of $D$ with $r\ne 0$, since $ara^{-1}r^{-1}\in N$ is an $n_{a,r}$-central element for some $0<n_{a,r}\le d$, Lemma~\ref{3.1} gives $$g_d(ara^{-1}r^{-1},r_1,r_2,\cdots, r_d)=0.$$ Hence $g_d(axa^{-1}x^{-1},y_1,y_2,\cdots, y_d)$ is a generalized rational identity of $D$, and it is nontrivial by Lemma~\ref{1.1}. Now, in view of Lemma~\ref{3.3}, either $D$ is centrally finite or $F$ is finite. If $D$ is centrally finite then $N\subseteq F$ by \cite[Theorem 3.1]{HDB1}. If $F$ is finite then $N$ is torsion, so $N\subseteq F$ by \cite[Theorem 8]{Her1}. Thus, in both cases we have $N\subseteq F$, a contradiction.
\subsection*{Acknowledgment} The author is very thankful to the referee for carefully reading the paper and making useful comments.
\end{document}
|
\begin{document}
\title{Maximally homogeneous para-CR manifolds of semisimple type}
\author{D.V. Alekseevsky, C. Medori\and A. Tomassini}
\date{}
\address{Department of Mathematics\\ Hull University\\
UK}
\email{[email protected]}
\address{Dipartimento di Matematica\\ Universit\`a di Parma\\ Viale G.\,P. Usberti, 53/A\\ 43100
Parma\\ Italy}
\email{[email protected]}
\address{Dipartimento di Matematica\\ Universit\`a di Parma\\ Viale G.\,P. Usberti, 53/A\\ 43100
Parma\\ Italy}
\email{[email protected]}
\subjclass{53C15, 53D99, 58A14}
\thanks{This work was supported by the Project M.U.R.S.T. ``Geometric Properties of
Real and Complex Manifolds'' and by G.N.S.A.G.A.
of I.N.d.A.M. The first author was also supported by Grant FWI
Project P17108-N04 (Wien) and the Leverhulme Trust, EM/9/2005/0069.}
\begin{abstract}
An almost para-CR structure on a
manifold $M$ is given by a distribution $HM {\mathfrak {su}}bset TM$ together with a
field $K \in \Gamma({\rm End}(HM))$ of involutive endomorphisms of
$HM$. If $K$ satisfies an integrability condition, then $(HM,K)$ is
called a para-CR structure. The notion of maximally homogeneous
para-CR structure of a semisimple type is given. A
classification of such maximally homogeneous para-CR
structures is given in terms of appropriate gradations of real
semisimple Lie algebras.
\end{abstract}
\maketitle
\tableofcontents
\def{\mathfrak A}} \def\ga{{\mathfrak a}} \def\cA{{\mathcal A}{{\mathfrak A}} \def\ga{{\mathfrak a}} \def\cA{{\mathcal A}}
\def{\mathfrak B}} \def\gb{{\mathfrak b}} \def\cB{{\mathcal B}{{\mathfrak B}} \def\gb{{\mathfrak b}} \def\cB{{\mathcal B}}
\def{\mathfrak C}} \def\gc{{\mathfrak c}} \def\cC{{\mathcal C}{{\mathfrak C}} \def\gc{{\mathfrak c}} \def\cC{{\mathcal C}}
\def{\mathfrak D}} \def\gd{{\mathfrak d}} \def\cD{{\mathcal D}{{\mathfrak D}} \def\gd{{\mathfrak d}} \def\cD{{\mathcal D}}
\def{\mathfrak E}} \def\gge{{\mathfrak e}} \def\cE{{\mathcal E}{{\mathfrak E}} \def\gge{{\mathfrak e}} \def\cE{{\mathcal E}}
\def{\mathfrak F}} \def\gf{{\mathfrak f}} \def\cF{{\mathcal F}{{\mathfrak F}} \def\gf{{\mathfrak f}} \def\cF{{\mathcal F}}
\def{\mathfrak G}} \def\ggg{{\mathfrak g}} \def\cG{{\mathcal G}{{\mathfrak G}} \def\ggg{{\mathfrak g}} \def\cG{{\mathcal G}}
\def{\mathfrak H}} \def\gh{{\mathfrak h}} \def\cH{{\mathcal H}{{\mathfrak H}} \def\gh{{\mathfrak h}} \def\cH{{\mathcal H}}
\def{\mathfrak I}} \def\gi{{\mathfrak i}} \def\cI{{\mathcal I}{{\mathfrak I}} \def\gi{{\mathfrak i}} \def\cI{{\mathcal I}}
\def{\mathfrak J}} \def\gj{{\mathfrak j}} \def\cJ{{\mathcal J}{{\mathfrak J}} \def\gj{{\mathfrak j}} \def\cJ{{\mathcal J}}
\def{\mathfrak K}} \def\gk{{\mathfrak k}} \def\cK{{\mathcal K}{{\mathfrak K}} \def\gk{{\mathfrak k}} \def\cK{{\mathcal K}}
\def{\mathfrak L}} \def\gl{{\mathfrak l}} \def\cL{{\mathcal L}{{\mathfrak L}} \def\gl{{\mathfrak l}} \def\cL{{\mathcal L}}
\def{\mathfrak M}} \def\gm{{\mathfrak m}} \def\cM{{\mathcal M}{{\mathfrak M}} \def\gm{{\mathfrak m}} \def\cM{{\mathcal M}}
\def{\mathfrak N}} \def\gn{{\mathfrak n}} \def\cN{{\mathcal N}{{\mathfrak N}} \def\gn{{\mathfrak n}} \def\cN{{\mathcal N}}
\def{\mathfrak O}} \def\go{{\mathfrak o}} \def\cO{{\mathcal O}{{\mathfrak O}} \def\go{{\mathfrak o}} \def\cO{{\mathcal O}}
\def{\mathfrak P}} \def\gp{{\mathfrak p}} \def\cP{{\mathcal P}{{\mathfrak P}} \def\gp{{\mathfrak p}} \def\cP{{\mathcal P}}
\def{\mathfrak Q}} \def\gq{{\mathfrak q}} \def\cQ{{\mathcal Q}{{\mathfrak Q}} \def\gq{{\mathfrak q}} \def\cQ{{\mathcal Q}}
\def{\mathfrak R}} \def\gr{{\mathfrak r}} \def\cR{{\mathcal R}{{\mathfrak R}} \def\gr{{\mathfrak r}} \def\cR{{\mathcal R}}
\def{\mathfrak S}} \def\gs{{\mathfrak s}} \def\cS{{\mathcal S}{{\mathfrak S}} \def\gs{{\mathfrak s}} \def\cS{{\mathcal S}}
\def{\mathfrak T}} \def\gt{{\mathfrak t}} \def\cT{{\mathcal T}{{\mathfrak T}} \def\gt{{\mathfrak t}} \def\cT{{\mathcal T}}
\def{\mathfrak U}} \def\gu{{\mathfrak u}} \def\cU{{\mathcal U}{{\mathfrak U}} \def\gu{{\mathfrak u}} \def\cU{{\mathcal U}}
\def{\mathfrak V}} \def\gv{{\mathfrak v}} \def\cV{{\mathcal V}{{\mathfrak V}} \def\gv{{\mathfrak v}} \def\cV{{\mathcal V}}
\def{\mathfrak W}} \def\gw{{\mathfrak w}} \def\cW{{\mathcal W}{{\mathfrak W}} \def\gw{{\mathfrak w}} \def\cW{{\mathcal W}}
\def{\mathfrak X}} \def\gx{{\mathfrak x}} \def\cX{{\mathcal X}{{\mathfrak X}} \def\gx{{\mathfrak x}} \def\cX{{\mathcal X}}
\def{\mathfrak Y}} \def\gy{{\mathfrak y}} \def\cY{{\mathcal Y}{{\mathfrak Y}} \def\gy{{\mathfrak y}} \def\cY{{\mathcal Y}}
\def{\mathfrak Z}} \def\gz{{\mathfrak z}} \def\cZ{{\mathcal Z}{{\mathfrak Z}} \def\gz{{\mathfrak z}} \def\cZ{{\mathcal Z}}
\def{\mathfrak {gl}}{{\mathfrak {gl}}}
\def{\mathfrak {sl}}{{\mathfrak {sl}}}
\def{\mathfrak {so}}{{\mathfrak {so}}}
\def{\mathfrak {su}}{{\mathfrak {su}}}
\def{\mathfrak {sp}}{{\mathfrak {sp}}}
\noindentewcommand{{\rm I}}{{\rm I}}
\noindentewcommand{{\bf GL}}{{\bf GL}}
\noindentewcommand{{\bf SO}}{{\bf SO}}
\noindentewcommand{{\bf SU}}{{\bf SU}}
\noindentewcommand{{\bf G \!\bf r}}{{\bf G \!\bf r}}
\noindentewcommand{{\hbox{rad}}}{{\hbox{rad}}}
\noindentewcommand{{\mathcal R}}{{\mathcal R}}
\noindentewcommand{{\hbox{tr}}}{{\hbox{tr}}}
\newcommand\C{{\mathbb C}}
\newcommand\R{{\mathbb R}}
\newcommand\Z{{\mathbb Z}}
\newcommand\T{{\mathbb T}}
\newcommand\pCR{{para-CR\,\,}}
\newcommand\bC{{\bf C}}
\section{Introduction and notation} Let $M$ be a $2n$-dimensional manifold. An
\emph{almost paracomplex structure}\index{paracomplex structure} on $M$ is a field of
endomorphisms $K \in {\rm End}(TM)$ of the tangent bundle $TM$ of
$M$ such that $K^2=\hbox{\rm id}$. It is called an (almost) paracomplex
structure in the {\em strong sense} if its $\pm 1$-eigenspace
distributions
$$
T^\pm M=\{X\pm KX\,\,\vert\,\,X\in\Gamma(M,TM)\}
$$
have the same rank (see e.g. \cite{L}, \cite{CFG}). An almost
paracomplex structure $K$ is called a \emph{paracomplex
structure}\index{paracomplex structure}, if it is
\emph{integrable}, i.e.
$$
S(X,Y)=[X,Y]+[KX,KY] - K[X,KY] - K[KX,Y]=0\
$$
for any vector fields $X,\,Y\in \Gamma(TM)$.\newline
This is equivalent to saying that the distributions
$T^\pm M$ are involutive.
\par
Recall that an \emph{almost}
CR-\emph{structure}\index{CR-structure} of codimension $k$ on a $2n+k$-dimensional
manifold $M$ is a distribution $HM\subset TM$ of rank $2n$
together with a field of endomorphisms $J \in {\rm End}(HM)$ such
that $J^2=-\hbox{\rm id}$.
An almost CR-structure is called
CR-\emph{structure}\index{CR-structure}, if the $\pm i$-eigenspace subdistributions
$H_\pm M$ of the complexified tangent bundle $T^\C M$ are
involutive.\newline We define an almost para-CR structure in a
similar way.
\begin{definition}
An \emph{almost para-CR structure}\index{para-CR structure}
of codimension $k$ on a $2n+k$-dimensional
manifold $M$ (in the weak sense) is a pair $(HM,K)$, where $HM \subset TM$ is a rank
$2n$ distribution and $ K \in {\rm End}(HM)$ is a field of
endomorphisms such that $K^2=\hbox{\rm id}$ and $K\neq\pm\hbox{\rm
id}$.\newline
An almost para-CR structure is said to be a
{\rm para-CR structure}\index{para-CR structure}, if the
eigenspace subdistributions $H_\pm M \subset HM$
are integrable or equivalently if the following integrability conditions hold:
\begin{eqnarray}
&{}&[KX,KY]+[X,Y]\in\Gamma(HM)\,,\label{integrability1}\\[3pt]
&{}& S(X,Y):=[X,Y]+[KX,KY] -
K([X,KY]+[KX,Y])=0\label{integrability2}
\end{eqnarray}
for all $X,\,Y\in\Gamma(HM)$.
\end{definition}
If the eigenspace distributions
$$
H_\pm M =\{X\pm KX\,\,\vert\,\, X\in\Gamma(M,HM)\}
$$
of an almost para-CR structure have the same rank, then $(HM,K)$ is called an
almost para-CR structure in the {\em strong sense}.
\newline
A straightforward computation shows that the
integrability condition is equivalent to the involutivity of the
distributions $H_+M$ and $H_-M$.
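A simple example (added here for illustration; it is not drawn from the references): on $M=\R^3$ with coordinates $(x,y,z)$, take $HM={\rm span}\{\partial_x,\partial_y\}$ and define $K\partial_x=\partial_x$, $K\partial_y=-\partial_y$. Then $H_+M={\rm span}\{\partial_x\}$ and $H_-M={\rm span}\{\partial_y\}$ are both involutive, all brackets of these frame fields vanish, and conditions (\ref{integrability1}) and (\ref{integrability2}) hold, so $(HM,K)$ is a para-CR structure of codimension one.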
\newline A manifold $M$, endowed with an (almost) para-CR
structure, is called an (\emph{almost})
\emph{para-{\rm CR} manifold}\index{para-CR manifold}.\newline
Note that a direct product of (almost) para-CR manifolds is
an (almost) para-CR manifold.\newline
One can associate with a point $x \in M$ of a para-CR manifold $(M,HM,K)$
a fundamental graded Lie algebra $\gm$. A para-CR structure is said to be {\em regular} if these Lie algebras
$\gm_x$ do not depend on $x$. In this case, a para-CR structure can be considered as a Tanaka structure
(see \cite{AS} and section 4). A regular para-CR structure is called a structure of {\em semisimple type}
if the full prolongation
$$
\ggg = \gm^{\infty} = \gm^{-d} + \cdots + \gm^{-1} + \ggg^0 + \ggg^1 + \cdots
$$
of the
associated non-positively graded Lie algebra $\ggg^{-d} + \cdots + \ggg^{-1} + \ggg^0$
(which is an analogue of the generalized Levi form
of a CR structure) is a semisimple Lie algebra. Such a para-CR
structure defines a parabolic geometry and its group of
automorphisms ${\rm Aut}(M,HM,K)$ is a Lie group of dimension $\leq \dim
\ggg$.\newline
Recently, in \cite{NS}, P. Nurowski and G. A. J. Sparling considered the natural para-CR structure which arises
on the $3$-dimensional space $M$ of solutions of a second order ordinary differential
equation $y''=Q(x,y,y')$. Using the Cartan method of prolongation, they constructed the full prolongation
$\mathcal G\to M$ of $M$ with a ${\mathfrak {sl}}(3,\R)$-valued Cartan connection and a quotient line bundle over $M$
with a conformal metric of signature $(2,2)$. This is a para-analogue of the Fefferman bundle of a CR-structure. They
applied these bundles to the initial ODE and obtained interesting applications. \newline
In \cite{AMT} we proved that a para-CR structure of semisimple type
on a simply connected manifold $M$ with the automorphism group of maximal dimension $\dim \ggg$
can be identified with a (real) flag manifold $M = G/P$ where $G$ is the simply connected Lie group
with the Lie algebra $\ggg$ and $P$ the parabolic subgroup
generated by the parabolic subalgebra $\gp = \ggg^0 + \ggg^1 + \cdots +\ggg^d.$
We gave a classification of maximally homogeneous para-CR
structures of semisimple type such that the associated graded semisimple Lie algebra $\ggg$
has depth $d=2$. In the present paper we classify all maximally homogeneous para-CR structures of
semisimple type in terms of graded real semisimple Lie algebras.
\section{Graded Lie algebras associated with para-CR structures}
\subsection{Gradations of a Lie algebra} Recall that a \emph{gradation}\index{gradation}
(more precisely a $\Z$-gradation) of
\emph{depth} $k$ of a Lie algebra $\ggg$ is a direct sum decomposition
\begin{equation}\label{gradation}
\ggg =\sum_{i\in\Z}\ggg^{i}= \ggg^{-k}+\ggg^{-k+1}+\cdots+\ggg^{0}+
\cdots+\ggg^{j}+\cdots
\end{equation}
such that $[\ggg^{i},\ggg^{j}]\subset \ggg^{i+j}$,
for any $i,j\in \Z$, and $\ggg^{-k}\neq \{0\}$. Note that $\ggg^0$ is a
subalgebra of $\ggg$ and each $\ggg^i$ is a $\ggg^0$-module.\newline
We say that an element $x\in\ggg^{j}$
has \emph{degree} $j$ and we write $d(x)=j$. The
endomorphism $\delta$ of $\ggg$ defined by
$$
\delta_{\vert_{\ggg_{j}}}=j\cdot id
$$
is a semisimple derivation of $\ggg$ (with integer eigenvalues), whose
eigenspaces determine the gradation. Conversely, any semisimple
derivation $\delta$ of $\ggg$ with integer eigenvalues defines a
gradation where the grading space $\ggg^j$ is the eigenspace of
$\delta$ with eigenvalue $j$. If $\ggg$ is a semisimple Lie
algebra, then any derivation $\delta$ is inner, i.e. there exists $d\in\ggg$ such that
$\delta=\hbox{\rm ad}_d$. The element $d\in\ggg$ is called the \emph{grading element}\index{grading element}.
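As a simple illustration (a standard example added here, not specific to this paper), let $\ggg={\mathfrak {sl}}(2,\C)$ with the usual basis $E,H,F$, where $[H,E]=2E$, $[H,F]=-2F$, $[E,F]=H$. The grading element $d=\frac{1}{2}H$ defines the depth one gradation
$$
\ggg^{-1}=\C F\,,\qquad \ggg^{0}=\C H\,,\qquad \ggg^{1}=\C E\,,
$$
since ${\rm ad}_d$ acts on $F$, $H$, $E$ with the integer eigenvalues $-1$, $0$, $1$, respectively.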
\begin{definition} A gradation $\ggg = \sum \ggg^i$ of a Lie
algebra {\rm (}or a graded Lie algebra $\ggg${\rm )}
is called
\begin{enumerate}
\item {\rm fundamental}\index{graded Lie algebra!fundamental}, if the negative part
$\gm = \sum_{i< 0} \ggg^i $ is
generated by $\ggg^{-1}$;
\item {\rm (almost) effective} or {\rm transitive}\index{graded Lie algebra!transitive},
if the non-negative part
$$
\ggg^{\geq 0}=\gp=\ggg^0+\ggg^1+\cdots
$$
contains no non-trivial ideal of $\ggg$;
\item { \rm non-degenerate}\index{graded Lie algebra!non-degenerate}, if
$$
X\in\ggg^{-1}\,,\,\,[X,\ggg^{-1}]=0\,\,\Longrightarrow\,\,X=0\,.
$$
\end{enumerate}
\end{definition}
\subsection{Fundamental algebra associated with a distribution}
Let ${\mathcal H}$ be a distribution on a manifold $M$. We recall that to any point $x\in M$ it is possible
to associate a Lie algebra $\gm(x)$ in the following way.\newline
First of all, we consider a filtration of the Lie algebra ${\mathcal X}(M)$ of
vector fields defined inductively by
\begin{eqnarray*}
\Gamma({\mathcal H})_{-1}&=&\Gamma({\mathcal H})\,,\\
\Gamma({\mathcal H})_{-i}&=&\Gamma({\mathcal H})_{-i+1}+
[\Gamma({\mathcal H}),\Gamma({\mathcal H})_{-i+1}]\,\,,\hbox{\rm for}\,\,\,i>1.
\end{eqnarray*}
Then evaluating vector fields at a point $x\in M$, we get a flag
$$
T_xM\supset\mathcal{H}_{-d-1}(x)=\mathcal{H}_{-d}(x)
\supsetneq \mathcal{H}_{-d+1}(x)
\supset \cdots
\supset {\mathcal H}_{-2}(x)\supset \mathcal{H}_{-1}(x)=\mathcal{H}_x
$$
in $T_xM$, where
$$
{\mathcal
H}_{-i}(x)=\{X_{\vert_x}\,\,\vert\,\,X\in\Gamma({\mathcal
H})_{-i}\}.
$$
Let us assume that $\mathcal{H}_{-d}(x)=T_xM$. The commutators of vector fields induce a structure of
fundamental negatively graded Lie algebra on the associated graded
space
$$
{\gm}(x)={\rm gr}(T_xM)={\gm}^{-d}(x)+{\gm}^{-d+1}(x)+\cdots + {\gm}^{-1}(x)\,,
$$
where ${\gm}^{-j}(x)={\mathcal H}_{-j}(x)/{\mathcal H}_{-j+1}(x)$. Note that
${\gm}^{-1}(x)={\mathcal H}_x$. \newline
A distribution ${\mathcal H}$ is called a \emph{regular distribution}\index{regular distribution} of
\emph{depth} $d$ and \emph{type}
$\gm$ if all graded Lie algebras ${\gm}(x)$ are isomorphic to a given
graded fundamental Lie algebra
$$
{\gm}={\gm}^{-d}+ {\gm}^{-d+1}+\cdots + {\gm}^{-1}\,.
$$
In this case ${\gm}$ is called the \emph{Lie algebra associated} with the
distribution ${\mathcal H}$.
A regular distribution $\mathcal{H}$ is called \emph{non-degenerate}
if the associated Lie algebra is non-degenerate.
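A standard example (added here for concreteness; it is not taken from the text): on $M=\R^3$ with coordinates $(x,y,z)$, the contact distribution ${\mathcal H}={\rm span}\{\partial_x,\,\partial_y+x\partial_z\}$ satisfies ${\mathcal H}_{-2}(p)=T_pM$ for every $p$, since $[\partial_x,\partial_y+x\partial_z]=\partial_z$. The associated graded Lie algebra is
$$
\gm(p)=\gm^{-2}(p)+\gm^{-1}(p),\qquad \dim\gm^{-2}(p)=1,\quad \dim\gm^{-1}(p)=2,
$$
i.e.\ the three-dimensional Heisenberg algebra, so ${\mathcal H}$ is a regular non-degenerate distribution of depth $2$.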
\subsection{Para-CR algebras and regular para-CR structures} We recall the
following
\begin{definition} A pair $(\gm,K_o)$, where $\gm = \gm^{-d} + \cdots +
\gm^{-1}$ is a negatively graded fundamental Lie algebra and $K_o$
is an involutive endomorphism of $\gm^{-1}$, is called a
\emph{para-{\rm CR} algebra}\index{para-CR algebra}
of \emph{depth} $d$. If, moreover, the $\pm
1$-eigenspaces $\gm^{-1}_{\pm}$ of $K_o$ on $\gm^{-1}$ are
commutative subalgebras of $\gm$, then $(\gm,K_o)$ is called an
{\em integrable para-CR algebra}\index{para-CR algebra!integrable}.
\end{definition}
\begin{definition}\label{regularstructure}
Let $(\gm, K_o)$ be a para-CR algebra of depth $d$.
An almost para-CR structure $(HM,K)$
on $M$ is called \emph{regular} of
\emph{type} $(\gm, K_o)$ and \emph{depth} $d$ if, for any $x\in
M$, the pair $({\gm}(x),K_x)$ is isomorphic to $({\gm},K_o)$. We
say that the regular almost para-CR structure is non-degenerate
if the graded algebra $\gm$ is non-degenerate.
\end{definition}
Note that a regular almost para-CR structure of type $(\gm ,K_0)$ is integrable
if and only if the Lie algebra $(\gm ,K_0)$ is integrable.
\section{Prolongations of graded Lie algebras}
\subsection{Prolongations of negatively graded Lie algebras}
The \emph{full prolongation}\index{full prolongation} of a negatively graded fundamental Lie algebra
$\gm=\gm^{-d}+\cdots + \gm^{-1}$ is defined as a maximal
graded Lie algebra
$$
\ggg(\gm)=\ggg^{-d}(\gm)+\cdots +\ggg^{-1}(\gm)+
\ggg^0(\gm)+\ggg^1(\gm)+\cdots
$$
with the negative part
$$
\ggg^{-d}(\gm)+\cdots +\ggg^{-1}(\gm)=\gm
$$
such that the following transitivity condition holds: \newline
$$
\mbox{if}\,\, X\in\ggg^k(\gm)\,,\,k\geq 0\,,\,\,
[X,\ggg^{-1}(\gm)]=\{0\}\,,\,\,\mbox{then}\,\,\, X=0\,.
$$
In \cite{T1}, N. Tanaka proved that the full prolongation $\ggg(\gm)$
always exists and it is unique up to isomorphisms. Moreover, it can be
defined inductively by
$$
\ggg^i(\gm)=
\begin{cases}
\gm^i & \hbox{\rm if}\,\, i<0\,,\\
\{A\in\hbox{\rm Der}(\gm,\gm)\,:\, A(\gm^j)\subset \gm^j\,,\forall j<0\}
& \hbox{\rm if}\,\, i=0\,,\\
\{A\in\hbox{\rm Der}(\gm,\sum_{h< i}\ggg^h(\gm))\,:\,
A(\gm^j)\subset \ggg(\gm)^{i+j}\,,\forall j<0\}
& \hbox{\rm if}\,\, i>0\,,
\end{cases}
$$
where ${\rm Der}(\gm, V)$ denotes the space of derivations of the
Lie algebra $\gm$ with values in the $\gm$-module $V$.
Note that
\begin{eqnarray}
\ggg^i(\gm)&\!\!=\!\!&
\Big\{A\in\hbox{\rm Hom}_\R(\gm,\sum_{h<i}\ggg^h(\gm))\,\Big\vert\,
A(\ggg^h(\gm))\subset\ggg^{h+i}(\gm)\,\,\forall h<0\,,\,\,\,\\
\nonumber &{}&\mbox{and}\,\, [A(Y),Z]+ [Y,A(Z)]=A([Y,Z])\,\,\forall Y,Z\in\gm
\Big\}\,.
\end{eqnarray}
\subsection{Prolongations of non-positively graded Lie algebras}
Consider now a non-positively graded Lie algebra $\gm
+\ggg^0=\gm^{-d}+\cdots +\gm^{-1}+\ggg^0$. The \emph{full
prolongation} of $\gm +\ggg^0$ is the subalgebra
$$
(\gm+\ggg^0)^\infty
=\gm^{-d}+\cdots +\gm^{-1} +\ggg^0+\ggg^1+\ggg^2+\cdots
$$
of $\ggg(\gm)$, defined inductively by
$$
\ggg^i=\{X\in\ggg(\gm)^i\,:\,[X,\gm^{-1}]\subset\ggg^{i-1}\}\,,\,\,\,
\hbox{\rm for any}\,\, i\geq 1\,.
$$
\begin{definition}
A graded Lie algebra ${\gm}+{\ggg}^0$ is called of {\rm finite
type}\index{graded Lie algebra!of finite type} if its full prolongation
$\ggg=({\gm}+{\ggg}^0)^\infty$ is a
finite dimensional Lie algebra and it is called of {\rm semisimple
type}\index{graded Lie algebra!of semisimple type} if $\ggg$ is a
finite dimensional semisimple Lie algebra.
\end{definition}
We have the following criterion (see \cite{T2}, \cite{AS})
\begin{lemma}\label{criterion}
Let $(\gm =\sum_{i<0}\gm^{i}, K_o)$ be an integrable para-CR
algebra and $\ggg^0$ the subalgebra of $\ggg^0(\gm)$ consisting
of all $A\in\ggg^0(\gm)$ such that $A\vert_{\gm^{-1}}$ commutes
with $K_o$. Then the graded Lie algebra $(\gm + \ggg^0)$ is of
finite type if and only if $\gm$ is non-degenerate.
\end{lemma}
The following result will be used in the last section (see e.g. \cite{MN1}, Theorem 3.21)
\begin{lemma}
Let $\ggg =\sum_i \ggg_i$ be a fundamental effective semisimple
graded Lie algebra such that $\gm +\ggg^0$ is of finite type. Then
$\ggg$ coincides with the full prolongation $(\gm +\ggg^0)^\infty$
of $\gm +\ggg^0$.
\end{lemma}
\section{Standard almost para-CR manifolds}
\subsection{Maximally homogeneous Tanaka structures} A regular para-CR
structure of type $(\gm ,K_0)$ is of
{\em finite type}\index{para-CR structure!of finite type} or,
respectively, of {\em semisimple type}\index{para-CR structure!of semisimple type},
if the Lie algebra $(\gm
+ \ggg^0)^{\infty}$ is finite-dimensional or, respectively,
semisimple. Recall that $\ggg^0 = Der(\gm, K_0)$ is the Lie
algebra of
Lie group ${\rm Aut}(\gm,K_0)$. \\
We recall the following (see \cite{AS})
\begin{definition} Let $\gm = \gm^{-d}+ \cdots + \gm^{-1} $ be a
negatively graded Lie algebra generated by $\gm^{-1}$ and $G^0$
a closed Lie subgroup of (grading preserving) automorphisms of
$\gm$. A \emph{Tanaka structure}\index{Tanaka structure} of \emph{type}
$(\gm, G^0)$ on a manifold $M$ is a regular distribution
$\mathcal H \subset TM$ of type $\gm$ together with a principal
$G^0$-bundle $\pi : Q \to M$ of adapted coframes of $\mathcal H$.
A coframe $\varphi : {\mathcal H}_x \to \gm^{-1}$ is called
\emph{adapted} if it can be extended to an isomorphism $\varphi :
\gm_x \to \gm$ of Lie algebra.
\end{definition}
We say that the Tanaka structure of type $(\gm , G^0)$ is of
\emph{finite type}\index{Tanaka structure!of finite type}
(respectively \emph{semisimple type}\index{Tanaka structure!of semisimple type}
$(\gm, G^0)$),
if the graded Lie algebra ${\gm}+\ggg^0$ is of finite type
(respectively semisimple type). Let $P$ be a Lie subgroup of a
connected Lie group $G$ and $\gp$
(respectively $\ggg$) the Lie algebra of $P$ (respectively $G$).
\begin{thm}
Let $(\pi : Q\to M,{\mathcal H})$ be a Tanaka structure on $M$ of
semisimple type $(\gm,G^0)$. Then the Tanaka prolongation of
$(\pi,{\mathcal H})$ is a $P$-principal bundle ${\mathcal G}\to M$,
with the parabolic structure group $P$, equipped with a Cartan
connection $\kappa: T{\mathcal G}\to \ggg$, where $\ggg$ is the full
prolongation of $\gm +\ggg^0$ and ${\rm Lie}P = \gp =\sum_{i\geq 0}
\ggg_i$. Moreover, $\hbox{\rm Aut} ({\mathcal H},\pi)$ is a Lie
group and
$$
\dim\hbox{\rm Aut} ({\mathcal H},\pi)\leq \dim \ggg\,.
$$
\end{thm}
Let $({\mathcal H}, \pi:Q\to M)$ be a Tanaka structure of
semisimple type $(\gm,G^0)$ and $\ggg=(\gm+\ggg^0)^\infty=\gm
+\gp$ be the full prolongation of the non-positively graded Lie
algebra $\gm+\ggg^0$.
\begin{definition} A semisimple Tanaka structure $({\mathcal H}, \pi:Q\to M)$
is called {\rm maximally homogeneous}\index{maximally homogeneous} if the dimension of its
automorphism
group $ {\rm Aut}({\mathcal H}, \pi)$ is equal to $\dim \ggg$.\\
\end{definition}
\subsection{Tanaka structures of semisimple type}
We construct maximally
homogeneous Tanaka structures of semisimple type $(\gm, G^0)$ as
follows. Let $G=\hbox{\rm Aut}(\ggg)$ be the Lie group of
automorphisms of the graded Lie algebra $\ggg$. Recall that $G^0$ is a
closed subgroup of the automorphism group of the graded Lie algebra
$\ggg^-=\gm$. Since the Lie algebra $\ggg$ is canonically associated
with $\gm$, we can canonically extend the action of $G^0$ on $\gm$
to the action of $G^0$ on $\ggg$ by automorphisms. In other
words, we have an embedding $G^0\hookrightarrow \hbox{\rm
Aut}(\ggg)=G$ as a closed subgroup. We denote by $G^+$ the connected
(closed) subgroup of $G$ with Lie algebra $\ggg_+=\sum_{p>0}\ggg^p$.
Then $P=G^0\cdot G^+ \subset G$ is a (closed) parabolic subgroup of
$G$. Let $G/P$ be the corresponding flag manifold. We have a
decomposition $\ggg =\gm +\gp$ and we identify $\gm$ with the
tangent space $T_o(G/P)$. Then the natural action of $G^0$ on $\gm$ is
the isotropy representation of $G^0$. We have a natural Tanaka
structure $({\mathcal H},\pi :Q\to G/P)$ of type $(\gm,G^0)$,
where ${\mathcal H}$ is the $G$-invariant distribution defined by
$\gm^{-1}$ and $Q$ is the $G^0$-bundle of coframes on ${\mathcal H}$.\newline
\begin{comment}
which is generated by coframes
$$
Q_{\vert_o}=G^0\cdot \hbox{\rm id}\,,\quad \hbox{id}:\gm^{-1}\to \gm^{-1}\,.
$$
\end{comment}
Hence, the flag manifold $G/P$ carries a natural maximally
homogeneous Tanaka structure $(\mathcal H, \pi: Q \to G/P)$.\\
The universal covering $F$ of the manifold $G/P$ also has the
induced
Tanaka structure $(H_F, \pi_F: Q_F \to F)$ of type $(\gm,G^0 )$
and the simply connected (connected) Lie group $\tilde G$ with
the Lie algebra $\ggg$ acts transitively and almost effectively on $F$ as a group
of automorphisms of this Tanaka structure. Moreover, the
stabilizer in $\tilde G$ of an appropriate point $o \in F$ is the
(connected) parabolic subgroup $\tilde P$
generated by the subalgebra $\gp = \ggg^0 + \ggg^1 + \cdots +
\ggg^d$.
The Tanaka structure $(\mathcal{H}, \pi: Q \to F = \tilde G/\tilde P)$ is
obviously maximally homogeneous and it is called the
\emph{standard (simply connected maximally homogeneous) Tanaka structure} of type
$(\gm ,G^0)$. We can state the following (see e.g. \cite[Theor. 4.8]{AMT})
\begin{thm} \label{maxhomogeneous thm}
Any maximally homogeneous Tanaka structure of semisimple type
$(\mathfrak{m}, G_0)$ is isomorphic to the standard Tanaka
structure on the simply connected flag manifold $F = \tilde G / \tilde P$
where $\tilde G$ is the simply connected semisimple Lie group
with the Lie algebra $\ggg = (\gm + \ggg^0)^\infty$ and $\tilde P$
is the parabolic subgroup generated by the subalgebra
$\gp = \ggg^0 + \ggg^1 + \cdots +
\ggg^d$.
\end{thm}
Let $(HM,K)$ be a regular almost para-CR structure of
type $(\gm,K_0)$. Assume that it has finite type, i.e. $m=\dim
(\gm + \ggg^0)^{\infty} < \infty$. According to the above
definition, $(HM,K)$ is { \it maximally homogeneous},
if it admits a (transitive) Lie group of automorphisms of dimension $m$.\\
By Theorem \ref{maxhomogeneous thm}, a maximally homogeneous
almost para-CR structure of semisimple type is locally equivalent
to the standard structure associated with a gradation of a semisimple
Lie algebra. In the following subsection we describe this
correspondence in more details.
\subsection{Models of almost para-CR manifolds}
Let $\ggg = \sum_{-d}^{d} \ggg^i = \ggg^{-} + \ggg^0 + \ggg^+$ be
an effective fundamental gradation of a semisimple Lie algebra
$\ggg$ with negative part $\gm = \ggg^{-}=\sum_{i<0}\ggg^i$ and positive part
$\ggg^+=\sum_{i>0}\ggg^i$.\newline\noindent Denote by $F = \tilde G/\tilde P$
the simply connected real flag
manifold associated with the graded Lie algebra $\ggg$ where $\tilde G$
is the simply connected Lie group with Lie algebra $\ggg$ and
$\tilde P = G^0 G^+$ is the connected subgroup generated by the
Lie subalgebra $\ggg^0 + \ggg^+$.\newline\noindent
We will identify
the tangent space $T_oF$ at the point $o=eP$ with the subspace
$$
\ggg/\gp\simeq\gm\,.
$$
Since the subspace $(\ggg^{-1}+\gp)/\gp\subset T_oF$ is invariant
under the isotropy representation of $P$, it defines an invariant
distribution $\mathcal{H}$ on $F$. Since the gradation is
fundamental, one can easily check that, for any $x\in F$, the
negatively graded Lie algebra $\gm(x)$ associated with ${\mathcal H}$
is isomorphic to the Lie algebra $\gm$. Moreover, let
\begin{equation}\label{decomposition(-1)}
\ggg^{-1} = \ggg^{-1}_+ + \ggg^{-1}_-
\end{equation}
be a decomposition of the
$G^0$-module $\ggg^{-1}$ into a sum of two submodules and $K_0$
the associated ${\rm ad}_{\ggg_0}$-invariant endomorphism such that
$\ggg^{-1}_{\pm}$ are $\pm 1$-eigenspaces of $K_0$.\newline
The decomposition (\ref{decomposition(-1)}) defines two
invariant complementary subdistributions
${\mathcal H}_\pm$ of the distribution ${\mathcal H}\subset TF$
associated with $\ggg^{-1}$, and $K_0$ defines a $\tilde G$-invariant
para-CR structure $(HF, K)$ on $F$. It is the
standard para-CR structure associated with the graded Lie
algebra $\ggg$ and the decomposition (\ref{decomposition(-1)}). We
get the following theorem (see also \cite[Theor. 5.1]{AMT})
\begin{thm}\label{reductiontheorem}
Let $F=\tilde G/\tilde P$ be the simply connected flag manifold associated
with a (real) semisimple
effective fundamental graded Lie algebra $\ggg$. A decomposition
$\ggg^{-1}=\ggg^{-1}_+ +\ggg^{-1}_-$ of $\ggg^{-1}$ into
complementary $G^0$-submodules $\ggg^{-1}_\pm$ determines an
invariant almost para-CR structure $(HM,K)$ such that $\pm1
$-eigenspaces $H_\pm M $ of $K$ are subdistributions of $ HM$
associated with $\ggg^{-1}_\pm$. Conversely, any standard
almost para-CR structure $( HM, K)$ on $F$ can be obtained in such a way.\\
Moreover, $(HM,K)$ is:
\begin{enumerate}
\item an almost para-CR structure if $\ggg^{-1}_+$
and $\ggg^{-1}_-$ have the same dimensions,
\item a para-CR
structure if and only if $\ggg^{-1}_+$ and $\ggg^{-1}_-$ are
commutative subalgebras of $\ggg$,
\item non-degenerate if and only if $\ggg$ has no graded ideals
of depth one.
\end{enumerate}
\end{thm}
By Theorem \ref{reductiontheorem},
the classification of maximally homogeneous para-CR
structures of semisimple type, up to local isomorphisms (i.e. up
to coverings), reduces to the description of all
gradations of semisimple Lie algebras $\ggg$ and to the decomposition of
the $\ggg^0$-module $\ggg^{-1}$ into irreducible submodules. We
will give such a description for complex and real semisimple Lie
algebras in the next two sections.
\section{Fundamental gradations of a complex semisimple Lie algebra} We recall here the
construction of a gradation of a complex semisimple Lie algebra $\ggg$.
Let $\gh$ be a Cartan subalgebra of a semisimple Lie algebra
$\ggg$
and
$$
\ggg =\gh \oplus\sum_{\alpha\in R}\ggg_\alpha
$$
be the root decomposition of $\ggg$ with respect to $\gh$. We denote by
$$
\Pi=\{\alpha_1,\ldots ,\alpha_\ell\}\subset R
$$
a system of simple roots of the root system $R$ and
associate to each simple root $\alpha_i$ (or corresponding vertex
of the Dynkin diagram) a non-negative integer $d_i$.
Using the {\em label vector}\index{label vector} $\vec{d}=(d_1,\ldots , d_\ell)$, we define the
\emph{degree} of a root $\alpha =\sum_{i=1}^\ell k_i\alpha_i$ by
$$
d(\alpha)=\sum_{i=1}^\ell k_id_i\,.
$$
This defines a gradation of $\ggg$ by the conditions
$$
d(\gh)=0\,,\qquad
d(\ggg_\alpha)=d(\alpha),\quad \forall\alpha\in R\,,
$$
which is called the \emph{gradation associated with the label
vector} $\vec{d}$. \newline
We denote by $d\in\gh$ the
corresponding grading element. Then $d(\alpha)=\alpha(d)$. Any
gradation of a complex semisimple Lie algebra $\ggg$ is conjugated
to a gradation of such a type (see \cite{GOV}). In particular, it
has the form
$$
\ggg=\ggg^{-k}+\cdots +\ggg^0+\cdots + \ggg^{k}\,,
$$
where $\ggg^0$ is a reductive subalgebra of $\ggg$ and the grading
spaces $\ggg^{-i}$ and $\ggg^{i}$ are dual with respect to the
Killing form. It is clear now that any graded semisimple Lie
algebra is a direct sum of graded simple Lie algebras. Hence, it
is sufficient to
describe gradations of simple Lie algebras.\\
We need the following (see \cite{Y})
\begin{lemma}
The gradation of a complex semisimple Lie algebra $\ggg$
associated with a label vector $\vec{d}=(d_1,\ldots , d_\ell)$ is fundamental
if and only if all labels $d_i\in\{0,1\}$.
\end{lemma}
Let $\Pi^1\subset\Pi$ be a set of simple roots. We denote by
$\vec{d}_{\Pi^1}$ the label vector which associates label one
to the roots in $\Pi^1$ and label zero to the other simple
roots.\newline Now we describe the depth of a fundamental
gradation.\newline
Let $\mu$ be the maximal root with respect to
the fundamental system $\Pi$. It can be written as a linear
combination
\begin{equation}\label{maximalroot}
\mu = m_1\alpha_1 +\cdots + m_\ell\alpha_\ell
\end{equation}
of fundamental roots, where the coefficient $m_i$ is a positive
integer called the
\emph{Dynkin mark associated} with $\alpha_i$.
\begin{lemma}\label{depth}
Let $\Pi^1=\{\alpha_{i_1},\ldots , \alpha_{i_s}\}\subset\Pi$ be a
set of simple roots.
Then the depth $k$ of the fundamental gradation defined by the label vector
$\vec{d}_{\Pi^1}$ is given by
$$
k=m_{i_1}+m_{i_2}+\cdots + m_{i_s}\,.
$$
\end{lemma}
\noindent {\bf Proof}. The depth $k$ of the gradation is equal to the maximal degree $d(\alpha)$,
$\alpha$ being a root. If $\alpha =k_1\alpha_1+\cdots +k_\ell\alpha_\ell$ is the
decomposition of a root $\alpha$ with respect to simple roots, then
$$
d(\alpha)=k_{i_1}+\cdots +k_{i_s}\leq d(\mu)=m_{i_1}+\cdots +m_{i_s}=k\,.
$$
$\qquad\Box$
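For example (a standard illustration added here, not part of the original text), for $\ggg={\mathfrak {sl}}(\ell+1,\C)$ (type $A_\ell$) all Dynkin marks are equal to one, since $\mu=\alpha_1+\cdots+\alpha_\ell$; hence the fundamental gradation defined by $\Pi^1=\{\alpha_{i_1},\ldots,\alpha_{i_s}\}$ has depth $k=s$, and in particular choosing a single simple root gives a depth one gradation.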
\par\noindent
{\bf Irreducible submodules of the $\ggg^0$-module $\ggg^1$.}\ \
Let $ \ggg=\sum\ggg^i $ be a fundamental gradation of a complex
semisimple Lie algebra $ \ggg $, defined by a label vector
$ \vec{d}$. Following \cite{GOV}, we describe the decomposition of
a $ \ggg^0 $-module into irreducible submodules. Set
$$
R^i=\{\alpha\in R\,\,\vert\,\, d(\alpha)=i\}=\{\alpha\in R\,\,\vert\,\,
\ggg_\alpha\subset\ggg^i\}
$$
and
$$
\Pi^i=\Pi\cap R^i =\{\alpha\in\Pi\,\,\vert\,\,d(\alpha)=i\}\,.
$$
For any simple root $ \gamma\in\Pi $, we put
$$
R(\gamma)= \{\gamma +(R^0 \cup \{0 \})\}\cap R =
\{\alpha =\gamma + \phi^0\in R,\,\,\,\phi^0\in R^0 \cup \{0 \} \}.
$$
We associate to any set of roots $ Q \subset R $ a subspace
$$
\ggg (Q)=\sum_{\alpha\in Q}\ggg_\alpha\subset \ggg \,.
$$
\begin{prop}{\rm (}\cite{GOV}{\rm )}\label{decomposizione}
The decomposition of a $\ggg^0$-module $\ggg^1$ into irreducible
submodules is given by
$$
\ggg^1=\sum_{\gamma\in\Pi^1}\ggg(R(\gamma))\,.
$$
Moreover, $\gamma$ is a lowest weight of the irreducible
submodule $\ggg(R(\gamma))$. In particular, the number of the irreducible
components is equal to the number $\#\Pi^1$ of the simple roots of degree
{\rm 1}.
\end{prop}
Since the $\ggg^0$-modules $\ggg^i$ and $\ggg^{-i}$ are dual,
Proposition \ref{decomposizione} gives also the decomposition of
the $\ggg^0$-module $\ggg^{-1}$ into irreducible submodules.
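As a small worked example (added for illustration; it is not part of the original text), take $\ggg={\mathfrak {sl}}(3,\C)$ with simple roots $\Pi=\{\alpha_1,\alpha_2\}$ and labels $\Pi^1=\{\alpha_1\}$, $\Pi^0=\{\alpha_2\}$. Then $R^0=\{\pm\alpha_2\}$ and $R^1=\{\alpha_1,\alpha_1+\alpha_2\}=R(\alpha_1)$, so Proposition \ref{decomposizione} gives a single irreducible $\ggg^0$-submodule $\ggg^1=\ggg(R(\alpha_1))$ with lowest weight $\alpha_1$, in agreement with $\#\Pi^1=1$.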
\section{Fundamental gradations of a real semisimple Lie algebra}
\subsection{Real forms of a complex semisimple Lie algebra}
Now we recall the description of a real form of a
complex semisimple Lie algebra in terms of Satake diagrams. It is
sufficient to do this for complex simple Lie algebras.
Any real form of a complex semisimple Lie algebra $\ggg$ is the
fixed points set $\ggg^\sigma$ of an antilinear involution $\sigma$,
that is, an antilinear map $\sigma:\ggg\to \ggg$, which is an
automorphism of $\ggg$ as a real algebra, such that
$\sigma^2=\hbox{\rm id}$. We fix a Cartan decomposition
$$
\ggg^\sigma =\gk +\gm
$$
of the real form $\ggg^\sigma$, where $\gk$ is a maximal compact subalgebra of
$\ggg^\sigma$ and $\gm$ is its orthogonal complement
with respect to the Killing form $B$. Let
$$
\gh^\sigma =\gh_{\gk} +\gh_{\gm}
$$
be a Cartan subalgebra of $\ggg^\sigma$ which is consistent
with this decomposition and such that $\gh_{\gm}=\gh\cap\gm$
has maximal dimension. Then the root decomposition of
$\ggg^\sigma$, with respect to the subalgebra
$\gh^\sigma $, can be written as
$$
\ggg^\sigma =\gh^\sigma + \sum_{\lambda\in\Sigma}\ggg^\sigma_\lambda\,,
$$
where $\Sigma\subset (\gh^\sigma)^*$ is a (non-reduced) root system. The
number $m_\lambda =\dim\ggg_\lambda$ is the \emph{multiplicity}
of a root $\lambda\in\Sigma$.
Denote by $\gh =(\gh^\sigma)^\C$ the complexification of
$\gh^\sigma$ which is a $\sigma$-invariant Cartan subalgebra.
We denote by $\sigma^*$ the induced antilinear action of
$\sigma$ on $\gh^*$ given by
$$
\sigma^*\alpha =\overline{\alpha\circ\sigma}\,,\quad\alpha\in\gh^*\,.
$$
Consider the root space decomposition
$$
\ggg =\gh+\sum_{\alpha\in R} \ggg_\alpha
$$
of the Lie algebra $\ggg$ with respect to the Cartan subalgebra $\gh$. Note
that $\sigma^*$ preserves the root system $R$, i.e. $\sigma^*R=R$.
Now we relate the root space decomposition of $\ggg^\sigma$ and $\ggg$. We
define the subsystem of compact roots $R_\bullet$ by
$$
R_\bullet =\{\alpha\in R\,\,\vert\,\, \sigma^*\alpha = -\alpha \}=
\{\alpha\,\,\vert\,\,\alpha(\gh_\gm )=0\}
$$
and denote by $R'=R\setminus R_\bullet$ the complementary set of
non-compact roots. We can choose a system $\Pi$ of simple roots of
$R$ such that the corresponding system of positive roots $R_+$
satisfies the condition: $R'_+=R'\cap R_+$ is $\sigma$-invariant.
In this case, $\Pi$ is called a $\sigma$-\emph{fundamental system}
of roots. We denote by $\Pi_\bullet =\Pi\cap R_\bullet$ the set of
compact simple roots (which are also called black) and by $\Pi'
=\Pi\setminus \Pi_\bullet$ the non-compact simple roots (called
white). The action of $\sigma^*$ on white roots satisfies the
following property:
for any $\alpha\in\Pi'$ there exists a unique $\alpha'\in\Pi'$
such that\ $\sigma^*\alpha-\alpha'$\ is a linear combination of
black roots, i.e.
$$
\sigma^*\alpha = \alpha' +\sum_{\beta\in\Pi_\bullet}k_\beta\beta,
\quad k_\beta\in{\mathbb N}\,.
$$
In this case, we say that the roots $\alpha, \,\alpha'$ are
$\sigma$-\emph{equivalent} and we will write $\alpha\sim\alpha'$.
The information about fundamental system ($\Pi =\Pi_\bullet \cup
\Pi'$) together with the $\sigma$-equivalence can be visualized in
terms of the \emph{Satake diagram}\index{Satake diagram}, which is defined as follows.
\newline On the Dynkin diagram of the system of simple roots $\Pi$,
we paint the vertices which correspond to black roots into black and
we join the vertices which correspond to
$\sigma$-equivalent roots $\alpha,\,\alpha'$ by a curved arrow.\newline
By a slight abuse of notation, we will refer to the $\sigma$-fundamental system
$\Pi=\Pi_\bullet\cup\Pi'$, together with the $\sigma$-equivalence
$\sim$, as the \emph{Satake diagram}. This diagram is determined
by the real form $\ggg^\sigma$ of a complex simple Lie algebra $\ggg$
and does not depend on the choice of a Cartan subalgebra and a
$\sigma$-fundamental system. The list of Satake diagram of real
simple Lie algebras is known (see e.g. \cite{GOV}).\newline
Conversely, the Satake diagram ($\Pi=\Pi_\bullet\cup\Pi',\sim$) allows one
to reconstruct the action of $\sigma^*$ on $\Pi$, hence on $\gh^*$.
This action can be canonically extended to the antilinear involution
$\sigma$ of the complex Lie algebra $\ggg$. Hence,
{ \it there is a
natural $1-1$ correspondence between Satake diagrams subordinated to
the Dynkin diagram of a complex semisimple Lie algebra $\ggg$, up to
isomorphisms, and real forms $\ggg^\sigma$ of $\ggg$,
up to conjugations.} \newline
\subsection{Gradations of a real semisimple Lie algebra}
Let $\ggg$ be a complex simple Lie algebra and $\ggg^\sigma$ be a real form of
$\ggg$ with a Satake diagram ($\Pi=\Pi_\bullet\cup\Pi',\sim)$.
\begin{comment}
We identify
$\Pi=\{\alpha_1,\ldots ,\alpha_\ell\}$ with a $\sigma$-fundamental system,
which is a system of simple roots of $\ggg$ with respect to a
Cartan subalgebra $\gh$ and $\Pi_\bullet$ and $\Pi'$ with the
set of black and white roots respectively.\\
\end{comment}
Let $\vec{d}=(d_1,\ldots ,d_\ell)$ be a label vector of the simple roots
system $\Pi$ and $\ggg =\sum_{i\in\Z}\ggg^{i}$ be the corresponding
gradation of $\ggg$, with the grading element $d\in\gh\subset\ggg$.
\newline The following theorem gives necessary and sufficient
conditions in order that this gradation induces a gradation
$$
\ggg^\sigma =\sum_{i\in\Z}\ggg^\sigma\cap \ggg^i
$$
of the real form $\ggg^\sigma$. This means that
the grading element $d$ belongs to $\ggg^\sigma$. We denote by
$\Pi^0\subset\Pi$ the set of simple roots with label zero.
\begin{thm} $($\cite{D}$)$\label{djokovic}
A gradation of a complex semisimple Lie algebra $\ggg$,
associated with a label vector $\vec{d}=(d_1,\ldots ,d_\ell)$,
induces a gradation of the real form $\ggg^\sigma$, which
corresponds to a
Satake diagram $(\Pi = \Pi_{\bullet}\cup \Pi',\sim)$ if and only if
the following two conditions hold:
\begin{enumerate}
\item[i{\rm )}] $\Pi_\bullet \subset \Pi^0$, i.e. any black vertex
of the Satake diagram has label zero;
\item[ii{\rm )}] if
$\alpha\sim\alpha'$ for
$\alpha,\,\alpha'\in\Pi\setminus\Pi_\bullet$, then $d(\alpha)
=d(\alpha')$, i.e. white vertices of the Satake diagram which are
joined by a curved arrow have the same label.
\end{enumerate}
\end{thm}
\par
A label vector $\vec{d}=(d_1,\ldots ,d_\ell)$ of a Satake diagram
$(\Pi=\{\alpha_1,\ldots ,\alpha_\ell\}=\Pi_\bullet\cup\Pi',\sim)$
and the corresponding gradation of $\ggg$ are called of
{\em real type}
if they satisfy conditions i) and ii) of the theorem above, that is
black vertices have label zero and vertices related by a curved
arrow have the same label. Hence, we can restate
Theorem \ref{djokovic} as follows.
\begin{cor}
There exists a natural $1-1$ correspondence between label vectors
$\vec{d}$ of real type of a Satake diagram of a real semisimple Lie
algebra $\ggg^\sigma$ and gradations of $\ggg^\sigma$. The gradation
of $\ggg^\sigma$ is fundamental if and only if the corresponding
gradation of $\ggg$ is fundamental, i.e. $\vec{d} =
\vec{d_{\Pi^1}}$.
\end{cor}
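\par\noindent
For example, for a split real form of $\ggg$ the Satake diagram has no black vertices and no curved
arrows, so conditions i) and ii) of Theorem \ref{djokovic} are empty and every label vector is of real
type. For $\ggg^\sigma =\mathfrak{su}(p,q)$ with $p<q$, a label vector $\vec{d}=(d_1,\ldots ,d_{p+q-1})$
is of real type if and only if $d_{p+1}=\cdots =d_{q-1}=0$ and $d_i=d_{p+q-i}$ for $1\leq i\leq p$.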
\noindent
{\bf Irreducible submodules of the $\ggg^0$-module $\ggg^1$.}
Let $\ggg=\sum\ggg^i$ be a gradation of a complex
semisimple Lie algebra $\ggg$ with grading element $d$ and
$\ggg^\sigma =\sum(\ggg^\sigma)^i=\sum\ggg^i\cap\ggg^\sigma$ be a real form of
$\ggg$, consistent with this gradation. We denote by
$(\Pi=\Pi_\bullet
\cup\Pi',\,\sim)$ the Satake diagram of $\ggg^\sigma$.\newline
By Proposition \ref{decomposizione}, the decomposition of $\ggg^1$ into irreducible
$\ggg^0$-submodules is given by
$
\ggg^1=\sum_{\gamma\in\Pi^1}\ggg(R(\gamma))\,,
$
where $\Pi^1$ is the set of simple roots of label one. The
following obvious proposition describes the decomposition of
$(\ggg^\sigma)^0$-module $(\ggg^\sigma)^1$ into irreducible
submodules.
\begin{prop}
For any simple root $\gamma \in\Pi^1$ of label one,
there are two possibilities:
\begin{enumerate}
\item[i)] $\sigma^*\gamma=\gamma+\sum_{\beta\in\Pi_\bullet}k_\beta\beta$.
Then $\sigma^*\gamma \in R(\gamma)$ and the
$\ggg^0$-module $\ggg(R(\gamma))$ is $\sigma$-invariant;
\item[ii)] $\sigma^*\gamma =\gamma'+\sum_{\beta\in\Pi_\bullet}k_\beta\beta$,
where $\gamma\neq\gamma'\in \Pi^1$. Then, $\sigma^*R(\gamma)=R(\gamma')$ and
the two irreducible $\ggg^0$-modules
$\ggg(R(\gamma))$ and $\ggg(R(\gamma'))$ determine
one irreducible submodule
$
\ggg^\sigma\cap(\ggg(R(\gamma))+\ggg(R(\gamma')))
$
of $\ggg^\sigma$.
\end{enumerate}
\end{prop}
\begin{cor} \label{IrreducibleSubmodules}
Let $\ggg^\sigma =\sum(\ggg^\sigma)^i$ be the gradation of a real
semisimple Lie algebra $\ggg^{\sigma}$, associated with a
label vector $\vec d $ of real type. Then the irreducible
submodules of the $(\ggg^\sigma)^0$-module $(\ggg^\sigma)^{-1}$
correspond to vertices $\gamma$ with label one without a curved
arrow and to pairs $(\gamma,\gamma')$ of vertices with label one
related by a curved arrow.
In particular, a decomposition of the $(\ggg^\sigma)^0$-module
$(\ggg^\sigma)^{-1}$ is determined by a decomposition
of the set $\Pi^1$ of vertices with label 1 into a disjoint union
$\Pi^1 = \Pi^1_+ \cup \Pi^1_- $ such that equivalent vertices belong
to the same component. The corresponding submodules $(\ggg^\sigma)^{-1}_+$
and $(\ggg^\sigma)^{-1}_-$ are given by
\begin{equation}\label{decomposition}
(\ggg^\sigma)^{-1}_{\pm} = \ggg^\sigma \cap \sum_{\gamma\in \Pi^1_{\pm}} \ggg
(R(-\gamma)).
\end{equation}
\end{cor}
We will always assume that a decomposition of $\Pi^1$ satisfies
the above property.
\section{Classification of maximally homogeneous para-CR manifolds}
Let $\ggg^{\sigma}$ be a real semisimple Lie algebra associated with a
Satake diagram $(\Pi = \Pi_{\bullet}\cup \Pi', \sim)$ with the fundamental gradation
defined by a subset $\Pi^1 \subset \Pi'$ and $F= \tilde G/\tilde P$ be the
associated flag manifold.
By Theorem \ref{reductiontheorem}, an almost para-CR structure on $F = \tilde G/\tilde P$ associated with
a decomposition $\Pi^1 = \Pi^1_+ \cup \Pi^1_-$ is integrable (i.e. a para-CR structure)
if and only if
the $(\ggg^\sigma)^0$-submodules $(\ggg^\sigma)^{-1}_+$ and $(\ggg^\sigma)^{-1}_-$ given by
(\ref{decomposition}) are Abelian subalgebras
of $\ggg^\sigma$. In order to give an integrability criterion, we introduce the following definitions.
\begin{definition}\label{admissible} Let $R$ be a system of roots and $\Pi$ be a system of simple roots. A subset
$\Pi^1 \subset \Pi$ is said to be {\rm admissible} if $\Pi^1$ contains at least two roots and
there are no roots of $R$ of the form
\begin{equation}\label{twoalphacondition}
2\alpha + \sum k_i \phi_i\,, \,\,\,\mbox{with}\,\, \alpha \in \Pi^1\,,\, \phi_i \in \Pi^0
= \Pi \setminus \Pi^1.
\end{equation}
\end{definition}
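For example, in $B_3$ the subset $\Pi^1=\{\alpha_1,\alpha_3\}$ is not admissible, since
$\alpha_2+2\alpha_3$ is a root of the form (\ref{twoalphacondition}) with $\alpha =\alpha_3$ and
$\phi_1 =\alpha_2\in\Pi^0$, whereas $\Pi^1=\{\alpha_2,\alpha_3\}$ is admissible; this agrees with the
condition $i_k=i_{k-1}+1$ of Proposition \ref{lista-complessa} below.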
\begin{definition} Let $\ggg^\sigma$ be a real semisimple Lie algebra
with a fundamental gradation defined by a subset $\Pi^1 \subset \Pi'$.
We say that a decomposition $\Pi^1 = \Pi^1_+ \cup \Pi^1_-$
is {\em alternate} if the following conditions hold:
\begin{itemize}
\item[i)] if $\alpha\in\Pi^1_\pm$ and $\alpha'\sim\alpha$, then $\alpha'\in\Pi^1_\pm$;
\item[ii)] the vertices in $\Pi^1_+$ and $\Pi^1_-$ appear in the Satake diagram
in alternate order. This means that each connected component of the graph obtained
by deleting the vertices in $\Pi^1_+$ $($respectively in $\Pi^1_-$$)$ contains at most one vertex in
$\Pi^1_-$ $($respectively in $\Pi^1_+$$)$.
\end{itemize}
\end{definition}
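For example, for the split real form of $A_3$ with $\Pi^1=\{\alpha_1,\alpha_2,\alpha_3\}$ the
decomposition $\Pi^1_+=\{\alpha_1,\alpha_3\}$, $\Pi^1_-=\{\alpha_2\}$ is alternate, while
$\Pi^1_+=\{\alpha_1,\alpha_2\}$, $\Pi^1_-=\{\alpha_3\}$ is not: deleting the only vertex of $\Pi^1_-$
leaves a connected component containing the two vertices $\alpha_1,\alpha_2\in\Pi^1_+$.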
We are ready to state the following
\begin{prop} \label{alternateprop}Let $\ggg^\sigma$ be a semisimple real Lie algebra
with the fundamental gradation associated with a subset $\Pi^1 \subset \Pi$ and $F = \tilde G/\tilde P$
the associated flag manifold.
A decomposition $\Pi^1 = \Pi^1_+ \cup \Pi^1_-$
defines a
para-CR structure on the flag manifold $F$ if and
only if the subset $\Pi^1$ is admissible and the decomposition of $\Pi^1$ is alternate.
\end{prop}
For the proof we need the following lemma.
\begin{lemma}\label{lemmaAbelCond} The subspace $\ggg^{1}_+ = \sum_{\gamma \in \Pi^1_+}\ggg(R(\gamma))$
$($hence also the subspace $(\ggg^\sigma)^{1}_+ = \ggg^\sigma \cap \ggg^1_+)$ which corresponds to a
subset $\Pi^1_+ \subset \Pi^1$ is an Abelian subalgebra if and only if
there is no root $\beta$ of the form
\begin{equation}\label{AbelianCondition}
\beta= \alpha + \alpha' + \sum k_i\phi_i
\end{equation}
where $\alpha, \alpha' \in \Pi^1_+$ and $\phi_i \in \Pi^0.$
The case $\alpha = \alpha'$ is allowed.
\end{lemma}
\noindent {\bf Proof.} If such a root $\beta$ exists, then $ [\ggg(R(\alpha)), \ggg (R(\alpha'))] \neq 0$
and $\ggg^1_+$ is not an Abelian subalgebra. The converse is also clear.
$\Box$
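\par\noindent
For instance, if two adjacent white vertices $\alpha_1,\alpha_2$ of an $A_\ell$ diagram both belong to
$\Pi^1_+$, then $\beta =\alpha_1+\alpha_2$ is a root of the form (\ref{AbelianCondition}) and
$[\ggg_{\alpha_1},\ggg_{\alpha_2}]=\ggg_{\alpha_1+\alpha_2}\neq 0$, so $\ggg^1_+$ is not Abelian;
this is precisely the obstruction excluded by the alternate condition.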
\par\noindent
{\bf Proof of Proposition} \ref{alternateprop}.
Let $ \Pi^1= \Pi^1_+ \cup \Pi^1_- $ be a decomposition of $\Pi^1$. There is no root of the form
(\ref{AbelianCondition}) with $\alpha = \alpha'$ if and only if
$\Pi^1$ is admissible. Assume now that two different vertices
$\alpha, \alpha'$ in $\Pi^1_+$ are
connected in the Satake diagram by vertices in $\Pi^0=\Pi\setminus\Pi^1$. Then there is
a root of the form (\ref{AbelianCondition}) and $\ggg^1_+$ is not a
commutative subalgebra. This shows that the decomposition which defines a para-CR structure
on $F$ must be alternate.\\
Conversely, assume that the decomposition is alternate. Then any two
vertices $\alpha, \alpha' \in \Pi^1_+$ belong to different
connected components of the graph obtained from the Satake diagram by deleting
$\Pi^1_-$. This implies that there is no root of the form
(\ref{AbelianCondition}) for $\alpha \neq \alpha'$. Then
Lemma \ref{lemmaAbelCond} shows that $(\ggg^\sigma)^1_+$ is a commutative
subalgebra.
The same argument applies to $(\ggg^\sigma)^1_-$.
$\Box$
\par\noindent
\begin{comment}
The following Proposition describes admissible subsystems $\Pi^1$
of a system $\Pi$ of simple roots for any indecomposable root
system $R$.\\
\end{comment}
We enumerate the simple roots of the complex simple Lie algebras $\ggg$
as in \cite{Bou}. Let $\Pi =\{\alpha_1, \ldots , \alpha_{\ell}\}$ be
the simple roots of $\ggg$, which are
identified with the vertices of the corresponding Dynkin diagram. We denote the elements of
a subset $\Pi^1 \subset \Pi$ (respectively $\Pi^1 \subset \Pi'$) which defines a fundamental gradation
of $\ggg$ (respectively $\ggg^\sigma$) by
$$
\alpha_{i_1}, \ldots , \alpha_{i_k},\,\,\,\,\, i_1 < i_2 < \cdots < i_k.
$$
\begin{prop}\label{lista-complessa} Let $\Pi$ be a system of simple roots of a root system $R$ of a complex
simple Lie algebra $\ggg$. Then
a subset $\Pi^1 \subset \Pi$ of at least two elements is admissible $($see Definition \ref{admissible}$)$ in the following cases:
\begin{itemize}
\item for $ \ggg = A_\ell$, in all cases;
\item for $\ggg = B_\ell$, under the
condition: $i_{k}=i_{k-1}+1$;
\item for $\ggg = C_\ell$, under the
condition: $i_{k}=\ell$;
\item for $\ggg = D_\ell$, under the condition: if $i_k < \ell-1$,
then $i_k=i_{k-1}+1$;
\item for $\ggg = E_6$, in all cases except the following ones:
$$
\{\alpha_1,\alpha_4\}\,,\,\{\alpha_1,\alpha_5\}\,,\,\{\alpha_3,\alpha_6\}\,,\,\{\alpha_4,\alpha_6\}\,,\,
\{\alpha_1,\alpha_4,\alpha_6\}\,;
$$
\item for $\ggg = E_7$, in all cases except the following ones:
\begin{eqnarray*}
&\{\alpha_1,\alpha_4\}\,,\,\{\alpha_1,\alpha_5\}\,,\,\{\alpha_3,\alpha_6\}\,,\,
\{\alpha_4,\alpha_6\}\,,\,\{\alpha_1,\alpha_6\}\,,\,&\\
&\{\alpha_2,\alpha_7\}\,,\,\{\alpha_3,\alpha_7\}\,,\,\{\alpha_4,\alpha_7\}\,,\,
\{\alpha_5,\alpha_7\}\,,\,&\\
&\{\alpha_1,\alpha_4,\alpha_6\}\,,\,\{\alpha_1,\alpha_4,\alpha_7\}\,,\,\{\alpha_1,\alpha_5,\alpha_7\}\,,\,
\{\alpha_3,\alpha_6,\alpha_7\}\,,\,\{\alpha_4,\alpha_6,\alpha_7\}\,,\,&\\
&\{\alpha_1,\alpha_4,\alpha_6,\alpha_7\}\,;&
\end{eqnarray*}
\item for $\ggg = E_8$, in all cases except the following ones:
\begin{eqnarray*}
&\{\alpha_1,\alpha_4\}\,,\,\{\alpha_1,\alpha_5\}\,,\,\{\alpha_3,\alpha_6\}\,,\,
\{\alpha_4,\alpha_6\}\,,\,\{\alpha_1,\alpha_6\}\,,\,&\\
&\{\alpha_2,\alpha_7\}\,,\,\{\alpha_3,\alpha_7\}\,,\,\{\alpha_4,\alpha_7\}\,,\,
\{\alpha_5,\alpha_7\}\,,\,&\\
&\{\alpha_1,\alpha_4,\alpha_6\}\,,\,\{\alpha_1,\alpha_4,\alpha_7\}\,,\,\{\alpha_1,\alpha_5,\alpha_7\}\,,\,
\{\alpha_3,\alpha_6,\alpha_7\}\,,\,\{\alpha_4,\alpha_6,\alpha_7\}\,,\,&\\
&\{\alpha_1,\alpha_4,\alpha_6,\alpha_7\}\,,\,&\\
&\{\alpha_1,\alpha_7\}\,,\,\{\alpha_1,\alpha_8\}\,,\,\{\alpha_2,\alpha_8\}\,,\,\{\alpha_3,\alpha_8\}\,,\,\{\alpha_4,\alpha_8\}
\,,\,\{\alpha_5,\alpha_8\}\,,\,\{\alpha_6,\alpha_8\}\,,&\\
&\{\alpha_1,\alpha_4,\alpha_8\}\,,\,\{\alpha_1,\alpha_5,\alpha_8\}\,,\,\{\alpha_3,\alpha_6,\alpha_8\}\,,\,
\{\alpha_4,\alpha_6,\alpha_8\}\,,\,\{\alpha_1,\alpha_6,\alpha_8\}\,,\,&\\
&\{\alpha_2,\alpha_7,\alpha_8\}\,,\,\{\alpha_3,\alpha_7,\alpha_8\}\,,\,\{\alpha_4,\alpha_7,\alpha_8\}\,,\,
\{\alpha_5,\alpha_7,\alpha_8\}\,,\,&\\
&\{\alpha_1,\alpha_4,\alpha_6,\alpha_8\}\,,\,\{\alpha_1,\alpha_4,\alpha_7,\alpha_8\}\,,\,
\{\alpha_1,\alpha_5,\alpha_7,\alpha_8\}\,,\,
\{\alpha_3,\alpha_6,\alpha_7,\alpha_8\}\,,&\\
&\{\alpha_4,\alpha_6,\alpha_7,\alpha_8\}\,,\,&\\
&\{\alpha_1,\alpha_4,\alpha_6,\alpha_7,\alpha_8\}\,;&
\end{eqnarray*}
\item for $ \ggg = F_4 $, in all cases except the following ones:
$$
\{\alpha_1,\alpha_3\}\,,\,\{\alpha_1,\alpha_4\}\,,\,\{\alpha_2,\alpha_4\}\,,\,\{\alpha_3,\alpha_4\}\,,\,
\{\alpha_1,\alpha_3,\alpha_4\}\,;
$$
\item for $ \ggg = G_2$, in the case $\{\alpha_1,\alpha_2\}\,.$
\end{itemize}
In cases different from $D_\ell$, $E_6$, $E_7$ and $E_8$, for any $\Pi^1$
given as above it is possible to give an alternate decomposition $\Pi^1=\Pi^1_+\cup\Pi^1_-$.\newline
For $D_\ell$, an alternate decomposition of $\Pi^1$ can be given in the following cases:
\begin{itemize}
\item $\alpha_{\ell-2}\in \Pi^1$,
\item $\Pi^1$ is contained in at most two of the branches issuing from $\alpha_{\ell-2}$.
\end{itemize}
For $E_6$, $E_7$ and $E_8$, an alternate decomposition of $\Pi^1$ can be given in the following cases:
\begin{itemize}
\item $\alpha_{4}\in \Pi^1$,
\item $\Pi^1$ is contained in at most two of the branches issuing from $\alpha_{4}$.
\end{itemize}
\end{prop}
\noindent {\bf Proof.} We have to describe all subsets $\Pi^1$
of $\Pi$ for which there is no root of the form
(\ref{twoalphacondition}). This condition can be
reformulated as follows. For any $\alpha \in \Pi^1$, denote by $\Pi_{\alpha}$ the
connected component of the subdiagram of
the Dynkin diagram $\Pi$ obtained by deleting vertices in $\Pi^1 \setminus
\{\alpha\}$ and containing $\alpha$. Then the root system associated with $\Pi_\alpha$
has no roots of the form
$$ \beta = 2\alpha + \sum_{\phi \in \Pi_\alpha \setminus \{\alpha\}}k_{\phi}\phi.$$
Using this condition and the decomposition of any root
into a linear combination of simple roots, one can prove the
proposition. \\
In the case of $A_\ell$, any root has coefficients $0$ or $1$ in its
decomposition into simple roots, so no root of the form (\ref{twoalphacondition}) exists.
Hence, any subset $\Pi^1$ with at least two elements is admissible.\par
\noindent
In the case of $B_\ell$, any root which has coefficient 2 has the
form
$$
\sum_{i\leq h< j}\alpha_h +2\sum_{j\leq h\leq\ell}\alpha_h\,,\qquad(1\leq i<j\leq\ell)\,.
$$
Hence $\Pi^1$ is admissible if and only
if the last two roots in $\Pi^1$ are consecutive, i.e. $i_{k-1}+1=i_k$.\par
\noindent
In the case of $C_\ell$, the roots with a coefficient 2 are given by
\begin{eqnarray*}
&{}&\sum_{i\leq h<j}\alpha_h +2\sum_{j\leq h<\ell}\alpha_h+\alpha_\ell\,,\qquad(1\leq i<j\leq\ell)\,,\\
&{}&2\sum_{i\leq h<\ell}\alpha_h +\alpha_\ell\,,\qquad(1\leq i<\ell)\,.
\end{eqnarray*}
The second formula implies that there are no roots of the form given in \eqref{twoalphacondition}
if and only if $i_k=\ell$.\par
\noindent
In the case of $D_\ell$, the roots with a coefficient $2$ are
$$
\sum_{i\leq h<j}\alpha_h +2\sum_{j\leq h<\ell-1}\alpha_h+\alpha_{\ell-1}+\alpha_\ell\,,\qquad(1\leq
i<j<\ell-1)\,.
$$
Hence admissibility fails if and only if
the last two roots $\alpha_{i_{k-1}}\,, \alpha_{i_k}$ satisfy $i_{k-1}< i_k -1$ and $i_k <
\ell-1$.\par
\noindent
The case of exceptional Lie algebras can be treated in a similar way, by using tables in \cite{Bou}.
$\Box$
\par\noindent
Let $\Pi^1\subset\Pi'$ be an admissible subset which defines a fundamental gradation of $\ggg^\sigma$.
An alternate decomposition of $\Pi^1=\Pi^1_+\cup\Pi^1_-$ can be given if the conditions of
Proposition \ref{lista-complessa} are satisfied and, in addition, the following ones hold:
\begin{itemize}
\item for ${\mathfrak{su}}(p,q)$, one must have $q=p$ and $\alpha_p\in\Pi^1$;
\item for ${\mathfrak{so}}(\ell-1,\ell+1)$, one must have $\Pi^1\cap\{\alpha_{\ell -1},\alpha_\ell\}=\emptyset$ or
$\{\alpha_{\ell -2},\alpha_{\ell -1},\alpha_\ell\}\subset\Pi^1$;
\item for $E_6$II, one must have $\alpha_4\in\Pi^1$ and, if $\alpha_2\notin\Pi^1$, then
$\{\alpha_3,\alpha_5\}\subset\Pi^1$;
\end{itemize}
while for ${\mathfrak{so}}^*(2\ell)$ and $E_6$III there is no alternate decomposition of $\Pi^1$.
\par\noindent
Proposition \ref{alternateprop} implies the following final theorem.
\begin{thm} Let $(\Pi = \Pi_{\bullet} \cup \Pi', \sim)$ be a
Satake diagram of a simple real Lie algebra $\ggg^\sigma $ and $\Pi^1 \subset \Pi'$
be an admissible subset as described above.
Let $\tilde G$ be the simply connected Lie group
with the Lie algebra $\ggg^\sigma$ and $\tilde P$ be the parabolic
subgroup of $\tilde G$ generated by the non-negatively graded subalgebra
$$
\gp = \sum_{i \geq 0} (\ggg^\sigma)^i
$$
associated with the label vector $\vec d_{\Pi^1}$. Then the alternate decomposition
$\Pi^1 = \Pi^1_+ \cup \Pi^1_-$ defines a decomposition
$$(\ggg^\sigma)^1 = (\ggg^\sigma)^1_+ + (\ggg^\sigma)^1_- $$
of the $(\ggg^\sigma)^0$-module $(\ggg^\sigma)^1$ into a sum of two
commutative subalgebras. This decomposition determines an
invariant para-CR structure on the
simply connected flag manifold $F = \tilde G/\tilde P$. Moreover, any simply
connected maximally homogeneous para-CR manifold of semisimple
type is a direct product of such manifolds.
\end{thm}
\par
\noindent {\bf Acknowledgement.} The authors would like to thank P. Nurowski for bringing the paper \cite{NS}
to their attention and for useful discussions.
\printindex
\end{document}
\begin{document}
\title{The Brownian Castle}
\author{G. Cannizzaro$^{1,2}$ and M. Hairer$^1$}
\institute{Imperial College London, SW7 2AZ, UK \and University of Warwick, CV4 7AL, UK\\
\email{[email protected], [email protected]}}
\maketitle
\begin{abstract}
We introduce a $1+1$-dimensional temperature-dependent model such that the classical ballistic
deposition model is recovered as its zero-temperature limit.
Its $\infty$-temperature version, which we refer to as the $0$-Ballistic Deposition ($0$-BD) model,
is a randomly evolving interface which, surprisingly enough, does {\it not} belong to either
the Edwards--Wilkinson (EW) or the Kardar--Parisi--Zhang (KPZ) universality class.
We show that $0$-BD has a scaling limit,
a new stochastic process that we call {\it Brownian Castle} (BC) which, although it
is ``free'', is distinct from EW and,
like any other renormalisation fixed point, is scale-invariant, in this case
under the $1:1:2$ scaling (as opposed
to $1:2:3$ for KPZ and $1:2:4$ for EW).
In the present article, we not only derive its finite-dimensional
distributions, but also provide a ``global'' construction of the Brownian Castle
which has the advantage of highlighting the fact that it admits backward characteristics given by
the (backward) Brownian Web (see~\cite{TW,FINR}).
Among others, this characterisation enables us to establish fine pathwise properties of BC and
to relate these to special points of the Web. We prove that the Brownian Castle is
a (strong) Markov and Feller process on a suitable space of c\`adl\`ag functions
and determine its long-time behaviour.
Finally, we give a glimpse of its universality by proving the convergence of $0$-BD to BC in a rather strong sense.
\end{abstract}
\setcounter{tocdepth}{2}
\tableofcontents
\section{Introduction}
The starting point for the investigation presented in this article is the one-dimensional
Ballistic Deposition (BD) model \cite{BDOriginal}
and the long-standing open question concerning its large-scale behaviour.
Ballistic Deposition (whose precise definition will be given below)
is an example of a random interface in $(1+1)$ dimensions, i.e.\ a
map $h\colon\mathbb{R}_+\times A\to\mathbb{R}$, $A$ being a subset of $\mathbb{R}$, whose evolution is driven by a stochastic forcing.
In this context, two universal behaviours, so-called universality classes,
have generally been considered, the Kardar--Parisi--Zhang (KPZ) class, to which BD is
presumed to belong \cite{MR1600794,Jeremy}, and the Edwards--Wilkinson (EW) class.
Originally introduced in~\cite{KPZOrig}, the first is conjectured to capture the large-scale fluctuations of all those models exhibiting some smoothing mechanism, slope-dependent growth speed,
and short range randomness.
The ``strong KPZ universality conjecture'' states that for height functions $h$ in this loosely defined class,
the limit as $\delta\to0$ of
$\delta\, h(t/\delta^{3}, x/\delta^2)-Ct/\delta^2$, where $C$ is a model-dependent constant, exists
(meaning in particular that the scaling exponents
of the KPZ class are $1:2:3$), and is given by $h_{\mathrm{KPZ}}$,
a universal (model-independent) stochastic process referred to as the ``KPZ fixed point''
(see~\cite{KPZfp} for the recent construction of this process as the scaling limit of TASEP).
If an interface model satisfies these features but does not display any slope-dependence,
then it is conjectured to belong to the EW universality class~\cite{EW}, whose scaling exponents are $1:2:4$
and whose universal fluctuations are Gaussian, given by the solutions $h_\mathbb{E}W$ to the (additive)
stochastic heat equation
\begin{equ}[e:EW]
\d_t h_\mathbb{E}W = {1\over 2} \d_x^2 h_\mathbb{E}W + \xi\;,
\end{equ}
with $\xi$ denoting space-time white noise\footnote{The choice of constants $1/2$ and $1$ appearing in \eqref{e:EW}
entails no loss of generality, as it can be enforced by a simple fixed rescaling.}.
That said, there is a paradigmatic model in the KPZ universality class which plays a distinguished role.
This model is a singular stochastic PDE, the KPZ equation, which can be formally written as
\begin{equ}[e:KPZ]
\partial_t h =\frac12\d_x^2 h+ \frac14(\partial_x h)^2 + \xi\,.
\end{equ}
(The proof that it does indeed converge to the KPZ fixed point under the KPZ scaling
was recently given in \cite{QS1,Virag}.)
The importance of~\eqref{e:KPZ} lies in the fact that its solution is expected to be universal itself in view of
the so-called ``weak KPZ universality conjecture'' \cite{MR1462228,KPZJeremy}
which, loosely speaking, can be stated as follows.
Consider any (suitably parametrised) continuous
one-parameter family $\eps \mapsto h_\eps$ of interface growth models with the following properties
\begin{itemize}[noitemsep, label=-]
\item the model $h_0$ belongs to the EW universality class,
\item for $\eps > 0$, the model $h_\eps$ belongs to the KPZ universality class.
\end{itemize}
Then it is expected that there exists a choice of constants $C_\eps$ such that
\begin{equ}[e:weakKPZ]
\lim_{\eps \to 0} \eps^{1/2} h_\eps(\eps^{-2} t, \eps^{-1} x) - C_\eps = h\;,
\end{equ}
with $h$ solving~\eqref{e:KPZ}.
One can turn this conjecture on its head and take a result of the form \eqref{e:weakKPZ}
as suggestive evidence that the models $h_\eps$ are indeed in the KPZ universality class for
fixed $\eps$.
What we originally intended to do with the Ballistic Deposition model was exactly this:
introduce a one-parameter family
of interface models to which BD belongs and prove a limit of the type~\eqref{e:weakKPZ}.
\begin{figure}\label{fig:BD}
\end{figure}
The BD model is a Markov process $h$ taking values in $\mathbb{Z}^\mathbb{Z}$
and informally defined as follows. Take a family of i.i.d.\ exponential clocks (with rate $1$)
indexed by $x \in \mathbb{Z}$ and, whenever the clock located at $x$ rings, the value
of $h(x)$ is updated according to the rule
\begin{equ}[e:BD]
h(x) \mapsto \max\{h(x-1), h(x)+1, h(x+1)\}\;.
\end{equ}
This update rule is usually interpreted as a ``brick'' falling down at site $x$ and then either sticking
to the topmost existing brick at sites $x-1$ or $x+1$, or coming to rest on top of the
existing brick at site $x$. See Figure~\ref{fig:BD} for an example illustrating two steps of
this dynamic.
The result of a typical medium-scale simulation is shown in Figure~\ref{fig:BDsim},
suggesting that $x \mapsto h(x)$ is locally Brownian.
\begin{figure}\label{fig:BDsim}
\end{figure}
A natural one-parameter family containing ballistic deposition is given
by interpreting the maximum appearing in \eqref{e:BD} as a ``zero-temperature'' limit and, for $\beta\geq 0$, to consider
instead the update rule
\begin{equ}[e:betaBD]
h(x) \mapsto y \in \{h(x-1), h(x)+1, h(x+1)\}\;, \quad \mathbb{P}(y = \bar y) \propto e^{\beta \bar y}\;.
\end{equ}
As $\beta \to \infty$, this does indeed reduce to \eqref{e:BD}, while $\beta = 0$ corresponds
to a natural uniform reference measure for ballistic deposition. It is then legitimate
to ask whether \eqref{e:weakKPZ} holds if we take for $h_\eps$ the process just described with
a suitable choice of $\beta=\beta(\eps)$.
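For the reader who wishes to experiment numerically, one clock ring of the rule \eqref{e:betaBD} takes
only a few lines of code. The following sketch (in Python; the function name and the choice of a plain
list for the height profile are ours and purely illustrative, and boundary effects are ignored) draws
the new height at a site $x$ with weights proportional to $e^{\beta \bar y}$, recovering \eqref{e:BD}
as $\beta\to\infty$ and the uniform $0$-BD rule at $\beta=0$.
\begin{verbatim}
import math, random

def beta_bd_update(h, x, beta):
    # One clock ring at site x: the new height h[x] is one of the three
    # candidates, drawn with probability proportional to exp(beta * candidate).
    candidates = [h[x - 1], h[x] + 1, h[x + 1]]
    # Subtracting the maximum only improves numerical stability for large beta.
    weights = [math.exp(beta * (y - max(candidates))) for y in candidates]
    h[x] = random.choices(candidates, weights=weights, k=1)[0]
\end{verbatim}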
\begin{figure}\label{fig:BC}
\end{figure}
Surprisingly, this is \textit{not} the case. The reason, however, is not that ballistic deposition
isn't in the KPZ universality class, but that its $\beta = 0$ version (which we will refer to as the $0$-Ballistic Deposition model)
does not belong to the Edwards--Wilkinson universality class. Indeed, Figure~\ref{fig:BC} shows
what a typical large-scale simulation of this process looks like. It is clear from this picture
that its large-scale behaviour is not Brownian; in fact, it does not even appear to be continuous!
The aim of this article is to describe the scaling limit of this process, which we
denote by $h_{\mathrm{BC}}$ and call the ``Brownian Castle'' in view of the turrets and crannies apparent in the simulation.
\subsection{The $0$-Ballistic Deposition and its scaling limit}
In order to understand how to characterise the Brownian Castle, it is convenient to take a step back
and examine more closely the $0$-Ballistic Deposition model.
In view of~\eqref{e:betaBD}, the dynamics of $0$-BD is driven by three independent
Poisson point processes $\mu^L$, $\mu^R$ and $\mu^\bullet$ on $\mathbb{R}\times\mathbb{Z}$, whose intensity is
$\lambda/2$ for $\mu^R$ and $\mu^L$ and $\lambda$ for $\mu^\bullet$, $\lambda$ being
such that for every $k\in\mathbb{Z}$, $\lambda(\mathrm{d} t, k)$ is the Lebesgue measure on $\mathbb{R}$.\footnote{This is
actually a slightly different model from that described in~\eqref{e:betaBD} where
the three processes have the same intensity. The present choice is so that as many constants as possible take
the value $1$ in
the limit, but other (symmetric) choices yield the same limit modulo a simple rescaling. }
Each event of $\mu^L$, $\mu^R$ and $\mu^\bullet$ is responsible for one of the three
possible updates $h_{{0\text{-}\mathrm{bd}}}(x) \mapsto y$ of the height function, namely $\mu^L$ yields
$y=h_{{0\text{-}\mathrm{bd}}}(x+1)$, $\mu^R$ yields $y=h_{{0\text{-}\mathrm{bd}}}(x-1)$, and $\mu^\bullet$ yields $y= h_{{0\text{-}\mathrm{bd}}}(x)+1$.
\begin{figure}\label{f:PPEvalMap}
\end{figure}
Given a realisation of these processes, we can graphically represent them as in Figure~\ref{f:PPEvalMap}, i.e.\
events of $\mu^L$ and $\mu^R$
are drawn as left / right pointing arrows, while those of $\mu^\bullet$ are drawn as dots on $\mathbb{R}\times\mathbb{Z}$.
Assuming that the configuration of $0$-BD at time $0$ is $h_0\in\mathbb{Z}^\mathbb{Z}$,
it is easy to see that for any $z=(t,y)\in\mathbb{R}_+\times\mathbb{Z}$,
the value $h_{{0\text{-}\mathrm{bd}}}(z)$ can be obtained by going backwards following the arrows along
the unique path $\pi^{\downarrow}_z$ starting at $z$ and
ending at a point in $\{0\}\times\mathbb{Z}$, say $(0,x)$ (the red line in Figure~\ref{f:PPEvalMap}),
and adding to $h_0(x)$ the number of dots that are met along the way
(in Figure~\ref{f:PPEvalMap}, $h_{{0\text{-}\mathrm{bd}}}(t,y)=h_0(x)+4$).
In order to obtain order one large-scale fluctuations for $h_{{0\text{-}\mathrm{bd}}}$, we clearly need
to rescale space and time diffusively to ensure convergence of the random walks $\pi^{\downarrow}_z$ to Brownian motions.
The size of the fluctuations should equally be scaled in a diffusive relation with time in order to have a
limit for the fluctuations of the Poisson processes obtained by ``counting the number of dots''.
In other words, the scaling exponents governing the large-scale
fluctuations should indeed be $1:1:2$.
The previous considerations immediately enable us to deduce the finite-dimensional distributions
of the scaling limit of $0$-BD and consequently lead to the following definition of the Brownian Castle.
\begin{definition}\label{def:bcFD}
Given $\mathfrak{h}_0\in D(\mathbb{R},\mathbb{R})$\footnote{Here $D(\mathbb{R},\mathbb{R})$ denotes all c\`adl\`ag functions from $\mathbb{R}$ to $\mathbb{R}$.},
we define the Brownian Castle (BC) starting from $\mathfrak{h}_0$ as the process $h_\mathrm{bc}:\mathbb{R}_+\times\mathbb{R}\to\mathbb{R}$ with
finite-dimensional distributions at space-time points $\{(t_i,x_i)\}_{i\le k}$ with $t_i > 0$ given as follows.
Consider $k$ coalescing Brownian motions $B_i$, running backwards in time and such that each $B_i$ is defined on $[0,t_i]$
with terminal value $B_i(t_i) = x_i$. For each edge $e$ of the resulting rooted forest, consider independent
Gaussian random variables $\xi_e$ with variance equal to the length $\ell_e$ of the time interval corresponding to $e$.
We then set $h_\mathrm{bc}(t_i,x_i) = \mathfrak{h}_0(B_i(0)) + \sum_{e \in E_i} \xi_e$, where $E_i$ denotes the set of
edges that are traversed when joining the $i$th leaf to the root of the corresponding coalescence tree.
See Figure~\ref{fig:exBC} for a graphical description.
\end{definition}
\begin{figure}\label{fig:exBC}
\end{figure}
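To illustrate the definition, take $k=2$ points $(t,x_1)$ and $(t,x_2)$ at the same time $t>0$ and let
$\tau$ denote the time at which the two backward Brownian motions coalesce. On the event $\{\tau>0\}$,
the initial condition and the edges below the coalescence point give the same contribution to both
values and therefore cancel in the difference, so that, conditionally on $\tau$,
\begin{equ}
h_\mathrm{bc}(t,x_1)-h_\mathrm{bc}(t,x_2)\sim\mathcal{N}\big(0,2(t-\tau)\big)\,,
\end{equ}
the variance being nothing but the ancestral distance between the two points (see~\eqref{e:AncestralD} below).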
The existence of such a process $h_\mathrm{bc}$ is guaranteed by Kolmogorov's extension theorem.
One goal of the present article is to provide a finer description of the Brownian Castle which
allows us to deduce some of its pathwise properties and to show that $0$-BD converges to it in a topology
that is significantly stronger than just convergence of $k$-point distributions.
\subsection{The Brownian Castle: a global construction and main results}
As Definition~\ref{def:bcFD} and the above discussion suggest, any global construction of the Brownian Castle must
comprise two components. On the one hand, since we want to define it at {\it all} points simultaneously,
we need a family of backward coalescing trajectories
starting from every space-time point
and each distributed as a Brownian motion. On the other, we need a stochastic process indexed by the points
on these trajectories whose increment between, say, $z_1,\,z_2\in\mathbb{R}^2$, is a Gaussian random variable with variance
given by their {\it ancestral distance}, i.e. by the time it takes for the trajectories
started from $z_1$ and $z_2$ to meet.
The first of these components is the (backward) Brownian Web
which was originally constructed in~\cite{TW} and further studied in~\cite{FINR}.
In the present context, it turns out to be more convenient to work with the
characterisation provided by~\cite{CHbwt}. The latter has the advantage of highlighting
the {\it tree structure} of the Web
which in turn determines the distribution of the increments of the Gaussian process indexed by it.
The starting point in our analysis is to construct a random couple
$\chi_\mathrm{bc}\eqdef(\zeta^\downarrow_\mathrm{bw}, B_\mathrm{bc})$ (and an appropriate Polish space in which it lives),
whose first component $\zeta^\downarrow_\mathrm{bw}=(\mathscr{T}^\downarrow_\mathrm{bw},\ast^\downarrow_\mathrm{bw},d^\downarrow_\mathrm{bw}, M^\downarrow_\mathrm{bw})$
is the Brownian Web Tree of~\cite{CHbwt}. The terms in $\chi_\mathrm{bc}$ can heuristically be described as follows.
\begin{itemize}[noitemsep,label=-]
\item $(\mathscr{T}^\downarrow_\mathrm{bw},\ast^\downarrow_\mathrm{bw}, d^\downarrow_\mathrm{bw})$ is a pointed $\mathbb{R}$-tree (see Definition~\ref{def:Rtree})
which should be thought of as the set of ``placeholders'' for
the points in the trajectories, and whose elements are morally of the form $( s, \pi^\downarrow_z)$,
where $\pi^\downarrow_z$ is a backward path in the Brownian Web $\mathcal{W}$ from $z=(t,x)\in\mathbb{R}^2$ (there can be more than $1$!)
and $s<t$ is a time, and in which the distance $d^\downarrow_\mathrm{bw}$ is the {\it ancestral distance} given by
\begin{equ}[e:AncestralD]
d_\mathrm{bw}^\downarrow((t,\pi^\da_{z}),(s,\pi^\da_{z'}))\eqdef (t+s)-2\tau^\downarrow_{t,s}(\pi^\da_{z},\pi^\da_{z'})\;,
\end{equ}
where the coalescence time $\tau^\downarrow_{t,s}$ is given by $\tau^\downarrow_{t,s}(\pi^\da_{z},\pi^\da_{z'})\eqdef\sup\{r<t\wedge s\,:\, \pi^\da_{z}(r)=\pi^\da_{z'}(r)\}$,
\item $M^\downarrow_\mathrm{bw}$ is the {\it evaluation map} which associates to the abstract placeholder in $\mathscr{T}^\downarrow_\mathrm{bw}$
the actual space-time point
in $\mathbb{R}^2$ to which it corresponds, i.e. $M^\downarrow_\mathrm{bw}\colon (s, \pi^\downarrow_z)\mapsto (s, \pi^\downarrow_z(s))$,
\item $B_\mathrm{bc}$ is the {\it branching map}, which corresponds to
the Gaussian process indexed by $\mathscr{T}^\downarrow_\mathrm{bw}$ and such that
\begin{equ}
\mathbb{E}[(B_\mathrm{bc}(t,\pi^\da_{z})-B_\mathrm{bc}(s,\pi^\da_{z'}))^2]=d_\mathrm{bw}^\downarrow((t,\pi^\da_{z}),(s,\pi^\da_{z'}))\,.
\end{equ}
\end{itemize}
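Note in particular that taking $\pi^\da_{z'}=\pi^\da_z$ and $s<t$ in~\eqref{e:AncestralD} gives
$\tau^\downarrow_{t,s}(\pi^\da_{z},\pi^\da_{z})=s$ and hence
$d_\mathrm{bw}^\downarrow((t,\pi^\da_{z}),(s,\pi^\da_{z}))=t-s$, so that along any single backward
trajectory the branching map $B_\mathrm{bc}$ evolves as a standard Brownian motion in the time variable.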
With such a couple at hand, we would like to define the Brownian Castle starting at $\mathfrak{h}_0$ by setting
\begin{equ}[e:FormalBC]
\mathfrak{h}_\mathrm{bc}(z)\eqdef \mathfrak{h}_0(\pi_z^\downarrow(0))+B_\mathrm{bc}(t,\pi^\downarrow_z)-B_\mathrm{bc}(0,\pi^\downarrow_z)\,,\qquad\text{for all $z\in\mathbb{R}_+\times\mathbb{R}$.}
\end{equ}
The above definition implicitly relies on the fact that
we are assigning to every point $z\in\mathbb{R}^2$ a point in $\mathscr{T}^\downarrow_\mathrm{bw}$ (and consequently a path $\pi^\da_z\in\mathcal{W}$), but,
as it turns out (see~\cite[Theorem 3.24]{CHbwt}), in the Brownian Web Tree
there are ``special points'' from which more than one path originates.
Since this number is in any case finite (at most $3$), we can always pick
the right-most trajectory and ensure well-posedness of~\eqref{e:FormalBC}
(see the definition of the tree map in Section~\ref{sec:TreeMap} that makes this assignment rigorous).
The special points of the Brownian Web Tree are particularly relevant for the Brownian Castle because
they are the points at which the discontinuities generating the turrets and crannies
in Figure~\ref{fig:BC} can be located. Thanks to~\eqref{e:FormalBC}, we will not only be able to
detect these points but also track the space-time behaviour
of the (dense) set of discontinuities of $h_{\mathrm{BC}}$ (see Section~\ref{sec:BC}).
In the following theorem, we loosely state some of the main results concerning the Brownian Castle
which can be obtained by virtue of the construction above.
\begin{theorem}\label{thm:main}
The Brownian Castle in Definition~\ref{def:bcFD} admits a version $\mathfrak{h}_\mathrm{bc}$ such that for all $\mathfrak{h}_0\in D(\mathbb{R},\mathbb{R})$,
$t\mapsto \mathfrak{h}_\mathrm{bc}(t,\cdot)$ is a right-continuous map with values in $D(\mathbb{R},\mathbb{R})$
endowed with the Skorokhod topology $d_\mathrm{Sk}$ in~\eqref{e:Sk}, which is continuous except for
a countable subset of $\mathbb{R}_+$,
but admits no version which is c\`adl\`ag in both space and time.
$\mathfrak{h}_\mathrm{bc}$ is a time-homogeneous $D(\mathbb{R},\mathbb{R})$-valued Markov process,
satisfying both the strong Markov and the Feller properties,
which is invariant under the $1:1:2$ scaling,
i.e. if $\mathfrak{h}_\mathrm{bc}^{i}$ with $i \in \{1,2\}$ are two instances of the Brownian Castle
with possibly different initial conditions at time $0$, then, for all $\lambda>0$, one has the equality in law
\begin{equ}[e:eqlawBC]
\mathfrak{h}_\mathrm{bc}^{1}(t,x) \eqlaw \lambda^{-1} \mathfrak{h}_\mathrm{bc}^{2}(\lambda^2 t, \lambda x),\qquad t \ge 0,\,x\in \mathbb{R},
\end{equ}
(viewed as an equality between space-time processes) provided that \eqref{e:eqlawBC}
holds as an equality between spatial processes at time $t=0$.
Moreover, when quotiented by vertical translations, $\mathfrak{h}_\mathrm{bc}(t,\cdot)$ converges in law, as $t \to \infty$,
to a stationary process whose increments are Cauchy but which is singular with respect to the Cauchy process.
\end{theorem}
A more precise formulation of this theorem, together with its proof, is split in the various statements
contained in Section~\ref{s:bc}.
\begin{remark}
When we say that ``a process $h$ admits a version having property $P$'', we mean that there exists a standard probability
space $\Omega$ endowed with a collection of random variables $\mathbf{h}(z)$ such that
for any finite collection $\{z_1,\ldots ,z_k\}$, the laws of $(h(z_i))_{i\le k}$ and
$(\mathbf{h}(z_i))_{i\le k}$ coincide and furthermore $\mathbf{h}^{-1}(P) \mathfrak{s}ubset \Omega$
is measurable and of full measure.
\end{remark}
The second task of the present article is to show that the $0$-Ballistic Deposition model indeed converges to it.
Thanks to the heuristics presented above, in order to expect any meaningful limit, we need to recentre and
rescale the $0$-BD height function $h_{{0\text{-}\mathrm{bd}}}$ according to the $1:1:2$ scaling, so we set
\begin{equ}[e:Scaled]
h^\delta_{{0\text{-}\mathrm{bd}}}(t,x)\eqdef \delta\Big(h_{{0\text{-}\mathrm{bd}}}\Big(\frac{t}{\delta^2},\frac{x}{\delta}\Big)-\frac{t}{\delta^2}\Big)\,,\qquad\text{for all $z\in\mathbb{R}_+\times\mathbb{R}$.}
\end{equ}
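To see that this is the right centring, note that the number of events of $\mu^\bullet$ collected along
a backward path over a macroscopic time span $t$ is a Poisson random variable $P_{t/\delta^2}$ with mean
$t/\delta^2$, and the classical central limit theorem gives
\begin{equ}
\delta\Big(P_{t/\delta^2}-\frac{t}{\delta^2}\Big)\;\Longrightarrow\;\mathcal{N}(0,t)\,,\qquad\text{as }\delta\to 0\,,
\end{equ}
which matches the Gaussian increments of variance $t$ postulated in Definition~\ref{def:bcFD}.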
Now, given the way in which Definition~\ref{def:bcFD} was derived, convergence of $h_{{0\text{-}\mathrm{bd}}}^\delta$ to $h_\mathrm{bc}$
in the sense of finite-dimensional distributions
should not come as a surprise since it is an almost immediate consequence of Donsker's invariance principle.
That said, we aim at investigating a stronger form of convergence which relates $0$-BD and BC as
space-time processes. The major obstacle here is that
Theorem~\ref{thm:main} explicitly asserts that the Brownian Castle $h_\mathrm{bc}$
does not live in any ``reasonable'' space which is Polish and in which point evaluation is a measurable operation,
so that {\it a priori} it is not even clear in what sense such a convergence should be stated.
It is at this point that our construction, summarised by the expression in~\eqref{e:FormalBC}, comes once more into play.
As we have seen above, the version of the Brownian Castle $\mathfrak{h}_\mathrm{bc}$ given in~\eqref{e:FormalBC},
is fully determined by the couple $\chi_\mathrm{bc}\eqdef(\zeta^\downarrow_\mathrm{bw}, B_\mathrm{bc})$,
which in turn was inspired by the graphical representation
of the $0$-Ballistic Deposition model illustrated in Figure~\ref{f:PPEvalMap}.
For any realisation of the Poisson random measures
$\mu^L$, $\mu^R$ and $\mu^\bullet$ suitably rescaled and (for the latter) compensated,
it is possible to build $\chi^\delta_{{0\text{-}\mathrm{bd}}}\eqdef(\zeta^\downarrow_\delta, N_\delta)$,
in which $\zeta^\downarrow_\delta\eqdef(\mathscr{T}_\delta^\downarrow,\ast^\downarrow_\delta, d_\delta^\downarrow, M_\delta^\downarrow)$
is the Discrete Web Tree of~\cite[Definition 4.1]{CHbwt} and
encodes the family of coalescing
backward random walks $\pi^{\downarrow,\delta}$ naturally associated to the random measures $\mu^L$ and $\mu^R$,
while $N_\delta$ is the compensated Poisson point process indexed by $\mathscr{T}_\delta^\downarrow$ and induced by $\mu^\bullet$
(the precise construction of $\chi^\delta_{{0\text{-}\mathrm{bd}}}$ can be found in Section~\ref{sec:graphical}).
Given any initial condition $\mathfrak{h}^\delta_0\in D(\mathbb{R},\mathbb{R})$, we now set, analogously to~\eqref{e:FormalBC},
\begin{equ}[e:FormalBD]
\mathfrak{h}^\delta_{{0\text{-}\mathrm{bd}}}(z)\eqdef \mathfrak{h}^\delta_0(\pi_{\bar z}^{\downarrow,\delta}(0))+N_\delta(t,\pi_{\bar z}^{\downarrow,\delta})-N_\delta(0,\pi_{\bar z}^{\downarrow,\delta})\,,\qquad\text{for all $z\in\mathbb{R}_+\times\mathbb{R}$,}
\end{equ}
where, for $z=(t,x)$, $\bar z=(t,\delta\lfloor x/\delta\rfloor)\in\mathbb{R}_+\times(\delta\mathbb{Z})$. Notice that $\mathfrak{h}^\delta_{{0\text{-}\mathrm{bd}}}$
is a c\`adl\`ag (in both space and time) version of $h_{{0\text{-}\mathrm{bd}}}^\delta$ started from $\mathfrak{h}_0^\delta$, in the sense that
all of their $k$-point (in space-time) marginals coincide.
Thanks to the previous construction, we are able to state the following theorem, whose proof can be found at the
end of Section~\ref{sec:0BD}.
\begin{theorem}\label{thm:convTime}
Let $\{\mathfrak{h}_0^\delta\}_\delta,\,\mathfrak{h}_0\subset D(\mathbb{R},\mathbb{R})$ be such that $d_{\mathrm{Sk}}(\mathfrak{h}_0,\mathfrak{h}_0^\delta)\to 0$.
Then, for every sequence $\delta_n \to 0$ there exists a version of $\mathfrak{h}^{\delta_n}_{{0\text{-}\mathrm{bd}}}$ and $\mathfrak{h}_\mathrm{bc}$
for which, almost surely, there exists a countable set of times $D$ such that for every $T\in\mathbb{R}_+\setminus D$
\begin{equ}[e:ConvDC]
\lim_{n\to\infty} d_{\mathrm{Sk}}(\mathfrak{h}^{\delta_n}_{{0\text{-}\mathrm{bd}}}(T,\cdot),\mathfrak{h}_\mathrm{bc}(T,\cdot))=0\,.
\end{equ}
Here, $\mathfrak{h}^\delta_{{0\text{-}\mathrm{bd}}}$ starts from $\mathfrak{h}_0^\delta$ and is defined in~\eqref{e:FormalBD}
while $\mathfrak{h}_\mathrm{bc}$ is the Brownian Castle started from $\mathfrak{h}_0$.
\end{theorem}
This theorem also provides information concerning the nature
and evolution of the discontinuities of the Brownian Castle. Indeed, for~\eqref{e:ConvDC} to hold, it
cannot be the case that many small discontinuities of $0$-BD add up and
ultimately create a large discontinuity for BC. Instead, the statement shows that
the major discontinuities of the former converge to those of the latter.
To see this phenomenon and prove Theorem~\ref{thm:convTime},
the main ingredient is the convergence
of $\chi_{{0\text{-}\mathrm{bd}}}^\delta$ to $\chi_\mathrm{bc}$ (and that of the dual of the Discrete Web Tree to the dual of
the Brownian Web Tree as stated in~\cite[Theorem 4.5]{CHbwt}).
The topology in which such a convergence holds (see Section~\ref{sec:Trees})
is chosen in such a way that
the convergence of the evaluation maps morally provides a control over the sup norm
distance of discrete and continuous backward trajectories and is therefore similar in spirit to that in, e.g.,~\cite{FINR},
while that of the trees guarantees that couples of distinct discrete and continuous paths which are close
also coalesce approximately at the same time. This is a crucial point (which moreover distinguishes our work
from the previous ones) since it is at the basis of the convergence of $N_\delta$ to $B_\mathrm{bc}$ and ultimately
ensures that of $\mathfrak{h}^\delta_{{0\text{-}\mathrm{bd}}}$ to $\mathfrak{h}_\mathrm{bc}$.
\subsection{The BC universality class and further remarks}
Over the last two decades, the KPZ (and EW) universality class has been at the heart
of an intense mathematical interest because of the challenges it posed and
the numerous physical systems which obey its laws (see~\cite{QS} for a review and~\cite{ACQ,SS, KPZfp, Virag, QS1} among many other recent results).
This article and the results stated herein establish the existence of a {\it new} universality class,
which we will refer to as the {\it BC universality class}, by characterising a novel scale-invariant stochastic
process, the Brownian Castle, which encodes the fluctuations of models in this class and arises as the
scaling limit of a microscopic random system,
the $0$-Ballistic Deposition model. It is natural to wonder what features a model should
exhibit in order to belong to this class.
Given the analysis of the Brownian Castle outlined above, it is reasonable to expect that
any interface model which displays both {\it horizontal} and {\it vertical fluctuations} but {\it no smoothing}
is an element of the BC universality class. The first type of fluctuations is responsible for the
(coalescing) Brownian characteristics in the limit, while the second determines the Brownian motion indexed by them.
A model which possesses these features and is somewhat paradigmatic for the class
is the random transport equation given by
\begin{equ}[e:BCeq]
\partial_t h=\eta\,\partial_x h +\mu\;,
\end{equ}
where $\eta$ and $\mu$ are two space-time stationary random fields,
the first being responsible for the horizontal / lateral fluctuations and the second for the vertical ones.
We conjecture that, provided the noises are sufficiently mixing so that some form of functional central limit theorem
applies, under the $1:1:2$ scaling the solution of~\eqref{e:BCeq} converges (in a weak sense) to the Brownian Castle.
We conclude this introduction by pointing out some aspects of the construction of the Brownian Castle
and the description of the $0$-Ballistic Deposition model, commenting on their relation with the existing literature.
The importance of the Brownian Web lies in its connection with many interesting
physical and biological systems (population genetics models, drainage networks, random walks in random environment...)
and a thorough account of the research behind it can be found in~\cite{SSS}.
One of the most notable generalisations of the Brownian Web
is the Brownian Net~\cite{SS} (and the stochastic flows therein~\cite{SSSflows}), which arises as the scaling limit
of a collection of coalescing random walks that in addition have a small probability to branch.
It is then natural
to wonder if a construction similar to that carried out here is still possible starting from the Brownian Net,
and what the corresponding ``Castle'' would be in this context. We believe that such considerations allow
one to build crossover processes connecting the BC and EW universality classes, namely such that
their small scale statistics are BC, while their large scale fluctuations are EW.
From the perspective of discrete interacting systems, let us also mention
that the graphical representation of the $0$-Ballistic Deposition is in itself not new.
A picture analogous to Figure~\ref{f:PPEvalMap} can be found in~\cite{NoisyVoter}, where the authors introduce
the so-called noisy voter model. The latter can be obtained from the usual voter model (see~\cite{Li} for the definition),
whose graphical representation is the same as that in Figure~\ref{f:PPEvalMap} but without {\color{blue}$\bullet$},
by adding spontaneous flipping, illustrated by the realisation of $\mu^\bullet$.
Let us remark that not only the meaning but also the limiting procedure involving $\mu^\bullet$ is different in the two cases.
In the present setting the intensity of $\mu^\bullet$ is fixed, while
it is sent to $0$ at a suitable rate in~\cite{FINRb}.
\subsection{Outline of the paper}
In Section \ref{sec:Trees}, we recall the main definitions related to $\mathbb{R}$-trees and
the topology and construction of the Brownian Web Tree $\zeta^\downarrow_\mathrm{bw}$ given in~\cite{CHbwt}.
In Section~\ref{sec:BST}, we build the couple $\chi_\mathrm{bc}$.
We introduce, for $\alpha,\,\beta\in(0,1)$,
the space $\mathbb{T}^{\alpha,\beta}_\mathrm{bsp}$ (and its ``characteristic'' subset
$\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$) in which the couple $\mathfrak{c}hi_\mathrm{bc}$ lives, and
define the metric that makes it Polish (Section~\ref{sec:bsp}).
We then provide conditions under which a stochastic process $X$
indexed by a spatial tree $\zeta$ admits a H\"older continuous modification, construct both Gaussian
and Poisson processes indexed by a generic spatial tree, and prove that the map
$\zeta\mapsto\mathrm{Law}(\zeta,X)$ is continuous. (Sections~\ref{sec:MEC}-\ref{sec:MECc}.)
At last, combining this with the results of Section~\ref{sec:Trees} we construct the law of
$\chi_\mathrm{bc}$ on $\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$ in Section~\ref{sec:BCM}.
Section \ref{s:bc} is devoted to the Brownian Castle and its periodic counterpart, and
contains the proof of Theorem~\ref{thm:main}. In Section~\ref{sec:BC},
we first define the version formally given in~\eqref{e:FormalBC} and
determine its continuity properties (among which finite $p$-variation for every $p>1$),
then we study the location and structure of its discontinuities
and analyse their relation with the special points of the Brownian Web.
Afterwards, in Section~\ref{sec:BCprocess}, we prove the Markov, strong Markov, Feller and strong Feller properties
(the latter holds only in the periodic case) and study its long-time behaviour.
In Section \ref{sec:BCdist}, we derive the distributional
properties of the Brownian Castle (scale invariance and multipoint distributions) and show that
although its invariant measure has increments that are Cauchy distributed and has finite
$p$-variation for any $p>1$, it is singular with
respect to the law of the Cauchy process.
In Section~\ref{sec:0BD}, we turn our attention to the $0$-Ballistic Deposition model. At first,
we associate the couple $\chi^\delta_{{0\text{-}\mathrm{bd}}}$ to it and show that the latter converges to $\chi_\mathrm{bc}$
(Sections~\ref{sec:graphical} and~\ref{sec:0BDconv}) and then (Section~\ref{sec:Conv}),
we prove Theorem~\ref{thm:convTime}. Finally, the appendix collects a number of relatively straightforward
technical results.
\subsection*{Notations}
We will denote by $|\cdot|_e$ the usual Euclidean norm on $\mathbb{R}^d$, $d\geq 1$, and
adopt the short-hand notations $|x|\eqdef|x|_e$ and $\|x\|\eqdef|x|_e$ for $x\in\mathbb{R}$ and $\mathbb{R}^2$ respectively.
Let $(\mathscr{T},d)$ be a metric space. We define the Hausdorff distance $d_H$ between two non-empty subsets $A,\,B$ of $\mathscr{T}$ as
\begin{equ}[e:Hausdorff]
d_H(A,B)\eqdef\inf\{\eps\colon A\subset B^\eps\text{ and } B\subset A^\eps\}
\end{equ}
where $A^\eps$ is the $\eps$-fattening of $A$, i.e. $A^\eps=\{\mathfrak{z}\in\mathscr{T}\colon \exists\, \mathfrak{w}\in A\text{ s.t. } d(\mathfrak{z},\mathfrak{w})< \eps\}$.
Let $(\mathscr{T},d,\ast)$ be a pointed metric space, i.e. $(\mathscr{T},d)$ is as above and $\ast \in \mathscr{T}$,
and let $M\colon \mathscr{T}\to\mathbb{R}^d$ be a map. For $r>0$ and $\alpha\in(0,1)$, we define the $\sup$-norm
and $\alpha$-H\"older norm of $M$ restricted to a ball of radius $r$ as
\begin{equ}
\|M\|^{(r)}_\infty\eqdef\sup_{\mathfrak{z}\in B_d(\ast,r]}|M(\mathfrak{z})|_e\,,\qquad \|M\|^{(r)}_\alpha\eqdef\sup_{\substack{\mathfrak{z},\mathfrak{w}\in B_d(\ast,r]\\d(\mathfrak{z},\mathfrak{w})\leq 1}}\frac{|M(\mathfrak{z})-M(\mathfrak{w})|_e}{d(\mathfrak{z},\mathfrak{w})^\alpha}\,.
\end{equ}
where $B_d(\ast,r]\mathfrak{s}ubset\mathscr{T}$ is the closed ball of radius $r$ centred at $\ast$, and, for $\delta>0$,
the modulus of continuity as
\begin{equation}\label{e:MC}
\omega^{(r)}(M,\delta)\eqdef \sup_{\substack{\mathfrak{z},\mathfrak{w}\in B_d(\ast,r]\\d(\mathfrak{z},\mathfrak{w})\leq \delta}} |M(\mathfrak{z})-M(\mathfrak{w})|_e\,.
\end{equation}
In case $\mathscr{T}$ is compact, in all the quantities above, the suprema are taken over the whole space $\mathscr{T}$ and
the dependence on $r$ of the notation will be suppressed.
Moreover, we say that a function $M$ is (locally) {\it little $\alpha$-H\"older continuous} if for all $r>0$,
$\lim_{\delta\to 0} \delta^{-\alpha} \omega^{(r)}(M,\delta)=0$.
Let $I\subseteq \mathbb{R}$ be an interval and $(\mathbb{C}X,d)$ be a complete separable metric space.
We denote the space of c\`adl\`ag functions on $I$ with values in $\mathbb{C}X$ as $D(I,\mathbb{C}X)$ and, for $f\in D(I,\mathbb{C}X)$,
the set of discontinuities of $f$ by $\mathbb{D}isc(f)$.
We will need two different metrics on $D(I,\mathbb{C}X)$, corresponding to the so-called J1 (or Skorokhod) and M1
topologies. For the first, let $\Lambda(I)$ be the space of strictly increasing continuous homeomorphisms on
$I$ such that
\begin{equ}
\gamma(\lambda)\eqdef \sup_{t\in I} |\lambda(t)-t|\vee \sup_{\substack{s,t\in I\\ s<t}}\left|
\log\left(\frac{\lambda(t)-\lambda(s)}{t-s}\right)\right|<\infty\,.
\end{equ}
Then, for $\lambda\in\Lambda(I)$ and $f,g\in D(I,\mathbb{C}X)$ we set $d^I_\lambda(f,g)\eqdef 1\wedge \sup_{s\in I}d(f(s),g(\lambda(s)))$,
so that the Skorokhod metric is given by
\begin{equ}[e:Sk]
d_\mathrm{Sk}(f,g)\eqdef\inf_{\lambda\in\Lambda(I)} \gamma(\lambda)\vee d^I_\lambda(f,g)\;,\quad d_\mathrm{Sk}(f,g)\eqdef\inf_{\lambda\in\Lambda} \gamma(\lambda)\vee \int_0^\infty e^{-t}\,d^{[-t,t]}_\lambda(f,g)\, \mathrm{d} t\;,
\end{equ}
where in the first case $I$ is assumed to be bounded.
For the M1 metric instead, we restrict to the case of $\mathbb{C}X=\mathbb{R}_+\eqdef[0,\infty)$. Given $f\in D(I,\mathbb{R}_+)$, denote by
$\Gamma_f$ its completed graph, i.e. the graph of $f$ to which all the vertical segments joining the points
of discontinuity are added, and order it by saying that $(x_1,t_1)\leq (x_2,t_2)$ if either $t_1<t_2$
or $t_1=t_2$ and $|f(t_1^-)-x_1|\leq |f(t_1^-)-x_2|$.
Let $P_f$ be the set of all parametric representations of $\Gamma_f$, which is the set of all non-decreasing
(with respect to the order on $\Gamma_f$) functions $\sigma_f\colon I\to \Gamma_f$.
Then, if $I$ is bounded, we set
\begin{equ}
\hat d^\mathrm{c}_{\mathrm{M1}}(f,g)\eqdef 1\wedge\inf_{\sigma_{f},\sigma_g} \|\sigma_{f}-\sigma_{g}\|
\end{equ}
and $d^\mathrm{c}_{\mathrm{M1}}(f,g)$ to be the topologically equivalent metric with respect to which
$D(I,\mathbb{R}_+)$ is complete (see~\mathfrak{c}ite[Section 8]{Whitt} for more details). If instead $I=[-1,\infty)$, we define
\begin{equ}[def:M1metric]
d_{\mathrm{M1}}(f,g)\eqdef \int_0^\infty e^{-t} \big( 1\wedge d_\mathrm{M1}^{\mathrm{c}}(f^{(t)},g^{(t)})\big)\,\mathrm{d} t
\end{equ}
where $f^{(t)}$ is the restriction of $f$ to $[-1, t]$.
For $p>0$, we say that a function $f\colon \mathbb{R}\to\mathbb{C}X$, $(\mathbb{C}X,d)$ being a metric space, has
finite $p$-variation if for every bounded interval $I\subset \mathbb{R}$
\begin{equ}[e:pvar]
\|f\|_{p\mathrm{\mhyphen var},I}\eqdef \left(\sup_{D_I}\sum_{t_k\in D_I} d(f(t_k),f(t_{k-1}))^p\right)^{1/p}<\infty
\end{equ}
where $D_I$ ranges over all finite partitions of $I$.
The Wasserstein distance of two probability measures $\mu,\nu$ on a complete separable metric space $(\mathscr{T},d)$ is defined as
\begin{equ}[def:Wass]
\mathcal{W}(\mu,\nu)\eqdef \inf_{\gamma\in\Gamma(\mu,\nu)} \mathbb{E}_\gamma[d(X,Y)]
\end{equ}
where $\Gamma(\mu,\nu)$ denotes the set of couplings of $\mu$ and $\nu$, the expectation is taken with respect to $\gamma$
and $X,Y$ are two $\mathscr{T}$-valued random variables distributed according to $\mu$ and $\nu$ respectively.
At last, we will write $a\lesssim b$ if there exists a constant $C>0$ such that $a\leq C b$
and $a\approx b$ if $a\lesssim b$ and $b\lesssim a$.
\mathfrak{s}ubsection*{Acknowledgements}
{\mathfrak{s}mall
The authors would like to thank Rongfeng Sun and Jon Warren for useful discussions.
GC would like to thank the Hausdorff Institute in Bonn for the kind hospitality during the programme
``Randomness, PDEs and Nonlinear Fluctuations'', where he carried out part of this work.
GC gratefully acknowledges financial support via the EPSRC grant EP/S012524/1.
MH gratefully acknowledges financial support from the Leverhulme trust via a Leadership Award,
the ERC via the consolidator grant 615897:CRITICAL, and the Royal Society via a research professorship.
}
\section{The Brownian Web Tree and its topology}\label{sec:Trees}
In this section, we provide the basic definitions and notations on (characteristic) $\mathbb{R}$-trees
and outline the construction and main properties of the (Double) Brownian Web Tree derived in~\cite{CHbwt}.
\subsection{Characteristic $\mathbb{R}$-trees in a nutshell}\label{sec:SpTrees}
We begin by recalling the notion of $\mathbb{R}$-tree (see~\cite[Definition 2.1]{DL}),
degree of a point, segment, ray and end.
\begin{definition}\label{def:Rtree}
A metric space $(\mathscr{T},d)$ is an $\mathbb{R}$-tree if for every $\mathfrak{z}_1,\mathfrak{z}_2\in\mathscr{T}$
\begin{enumerate}[noitemsep]
\item there is a unique isometric map $f_{\mathfrak{z}_1,\mathfrak{z}_2} :[0, d(\mathfrak{z}_1,\mathfrak{z}_2)]\to\mathscr{T}$ such that $f_{\mathfrak{z}_1,\mathfrak{z}_2}(0)=\mathfrak{z}_1$ and
$f_{\mathfrak{z}_1,\mathfrak{z}_2}(d(\mathfrak{z}_1,\mathfrak{z}_2))=\mathfrak{z}_2$,
\item for every continuous injective map $q\colon [0,1] \to \mathscr{T}$ such that $q(0)=\mathfrak{z}_1$ and $q(1)=\mathfrak{z}_2$, one has
\begin{equ}
q([0,1])=f_{\mathfrak{z}_1,\mathfrak{z}_2}([0,d(\mathfrak{z}_1,\mathfrak{z}_2)])\,.
\end{equ}
\end{enumerate}
A {\it pointed $\mathbb{R}$-tree} is a triple $(\mathscr{T},\ast,d)$ such that $(\mathscr{T},d)$ is an $\mathbb{R}$-tree and $\ast \in \mathscr{T}$.
\end{definition}
Given $\mathfrak{z}\in\mathscr{T}$, the number of connected components of $\mathscr{T}\setminus\{\mathfrak{z}\}$ is the {\it degree of $\mathfrak{z}$},
$\deg(\mathfrak{z})$ in short.
A point of degree $1$ is an {\it endpoint}, of degree $2$,
an {\it edge point} and if the degree is $3$ or higher, a {\it branch point}.
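For instance, the ``tripod'' obtained by gluing three copies of $[0,1]$ along their left endpoints is an
$\mathbb{R}$-tree in which the common point has degree $3$ (a branch point), the three extremities have
degree $1$ (endpoints) and every other point has degree $2$ (an edge point).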
\begin{definition}\label{def:end}
Let $(\mathscr{T},d)$ be an $\mathbb{R}$-tree and, for any $\mathfrak{s}z_1,\,\mathfrak{s}z_2\in\mathscr{T}$, $f_{\mathfrak{s}z_1,\mathfrak{s}z_2}$ the isometric
map in Definition~\ref{def:Rtree}.
We call the range of $f_{\mathfrak{s}z_1,\mathfrak{s}z_2}$ the {\it segment joining $\mathfrak{s}z_1$ and $\mathfrak{s}z_2$} and denote it by $\llbracket\mathfrak{s}z_1,\mathfrak{s}z_2\mathrm{r}b$.
For $\mathfrak{s}z\in\mathscr{T}$, a segment having $\mathfrak{s}z$ as an endpoint is said to be a {\it $\mathscr{T}$-ray from $\mathfrak{s}z$}
if it is maximal for inclusion.
The {\it ends} of $\mathscr{T}$ are the equivalence classes of $\mathscr{T}$-rays with respect to the equivalence
relation according to which different $\mathscr{T}$-rays are equivalent if their intersection is again a $\mathscr{T}$-ray.
An end is {\it closed} if it is an endpoint of $\mathscr{T}$ and {\it open} otherwise, and, for $\downarrowgger$ an open end,
we indicate by $\llbracket\mathfrak{s}z,\downarrowgger\rangle$ the unique $\mathscr{T}$-ray from $\mathfrak{s}z$ representing $\downarrowgger$.
$\downarrowgger$ is said to be an {\it open end with (un-)bounded rays} if for every $\mathfrak{s}z\in\mathscr{T}$, the map
$\iota_\mathfrak{s}z: \llbracket\mathfrak{s}z,\downarrowgger\rangle\to\mathbb{R}_+$ given by
\begin{equation}\label{e:iota}
\iota_\mathfrak{s}z(\mathfrak{s}w)=d(\mathfrak{s}z,\mathfrak{s}w)\,,\qquad\mathfrak{s}w\in\llbracket\mathfrak{s}z,\downarrowgger\rangle
\end{equation}
is (un-)bounded.
\end{definition}
Throughout the article, we will work with (subsets of) the space of {\it spatial $\mathbb{R}$-trees},
consisting of $\mathbb{R}$-trees embedded into $\mathbb{R}^2$ via a map, called the evaluation map.
\begin{definition}\label{def:CharTree}
Let $\alpha\in(0,1)$ and consider the quadruplet $\zeta=(\mathscr{T},\ast, d, M)$ where
\begin{itemize}[noitemsep, label=-]
\item $(\mathscr{T},\ast,d)$ is a complete and locally compact pointed $\mathbb{R}$-tree,
\item $M$, the {\it evaluation map}, is a locally little $\alpha$-H\"older continuous proper\footnote{Namely such that $\lim_{\eps \to 0} \mathfrak{s}up_{\mathfrak{s}z \in K} \mathfrak{s}up_{d(\mathfrak{s}z,\mathfrak{s}z') \le \eps} \|M(\mathfrak{s}z)-M(\mathfrak{s}z')\| / d(\mathfrak{s}z,\mathfrak{s}z')^\alpha = 0$ for every compact $K$ and the preimage of every compact set is compact. } map
from $\mathscr{T}$ to $\mathbb{R}^2$.
\end{itemize}
The space of {\it pointed $\alpha$-spatial $\mathbb{R}$-trees} $\mathbb{T}^\alpha_\mathrm{sp}$ is the set of equivalence
classes of quadruplets as above with respect to the equivalence relation that identifies $\zeta$ and $\zeta'$
if there exists a bijective isometry $\varphi:\mathscr{T}\to\mathscr{T}'$ such that $\varphi(\ast)=\ast'$ and $M'\mathfrak{c}irc\varphi\equiv M$, in short
(with a slight abuse of notation) $\varphi(\zeta)=\zeta'$.
Elements $\zeta=(\mathscr{T},\ast, d, M)\in\mathbb{T}^\alpha_\mathrm{sp}$ are further said to be {\it characteristic},
and their space denoted by $\mathbb{C}^\alpha_\mathrm{sp}$, if the evaluation map $M$ satisfies
\begin{enumerate}[noitemsep, label=(\arabic*)]
\item\label{i:Back} {\it (Monotonicity in time)} for every $\mathfrak{s}z_0,\mathfrak{s}z_1\in\mathscr{T}$ and $s \in [0,1]$ one has
\begin{equation}\label{e:Back}
M_t(\mathfrak{s}z_s)=
\bigl(M_t(\mathfrak{s}z_0)-s \,d(\mathfrak{s}z_0,\mathfrak{s}z_1)\bigr) \vee \bigl(M_t(\mathfrak{s}z_1)-(1-s) \,d(\mathfrak{s}z_0,\mathfrak{s}z_1)\bigr)\;.
\end{equation}
where $\mathfrak{s}z_s$ is the unique element of $\llbracket\mathfrak{s}z_0,\mathfrak{s}z_1\mathrm{r}b$ with
$d(\mathfrak{s}z_0,\mathfrak{s}z_s) = s\,d(\mathfrak{s}z_0,\mathfrak{s}z_1)$,
\item\label{i:MonSpace} {\it (Monotonicity in space)} for every $s< t$, interval $I = (a,b)$
and any four elements $\mathfrak{s}z_0,\bar \mathfrak{s}z_0, \mathfrak{s}z_1, \bar \mathfrak{s}z_1$
such that $M_t(\mathfrak{s}z_0)=M_t(\bar \mathfrak{s}z_0) = t$, $M_t(\mathfrak{s}z_1)=M_t(\bar \mathfrak{s}z_1) = s$, $M_x(\mathfrak{s}z_0)< M_x(\bar \mathfrak{s}z_0)$,
and $M(\llbracket\mathfrak{s}z_0,\mathfrak{s}z_1\mathrm{r}b), M(\llbracket\bar \mathfrak{s}z_0,\bar\mathfrak{s}z_1\mathrm{r}b) \mathfrak{s}ubset [s,t] \times (a,b)$, we have
\begin{equation}\label{e:MonSpace}
M_x(\mathfrak{s}z_u)\leq M_x(\bar \mathfrak{s}z_{u})
\end{equation}
for every $u \in [0,1]$, where $\mathfrak{s}z_u$ and $\bar\mathfrak{s}z_u$ are defined from the pairs $(\mathfrak{s}z_0,\mathfrak{s}z_1)$ and $(\bar\mathfrak{s}z_0,\bar\mathfrak{s}z_1)$ as in~\ref{i:Back},
\item\label{i:Spread} for all $z=(t,x)\in\mathbb{R}^2$, $M^{-1}(\{t\}\times[x-1,x+1])\neq\emptyset$.
\end{enumerate}
The space of those characteristic trees for which~\eqref{e:Back} is replaced by
$M_t(\mathfrak{s}z_s)=\bigl(M_t(\mathfrak{s}z_0)+s \,d(\mathfrak{s}z_0,\mathfrak{s}z_1)\bigr) \wedge \bigl(M_t(\mathfrak{s}z_1)+(1-s) \,d(\mathfrak{s}z_0,\mathfrak{s}z_1)\bigr)$,
i.e.\ with $\wedge$ in place of $\vee$ and $-$ replaced by $+$, will be denoted by $\hat\mathbb{C}^\alpha_\mathrm{sp}$.
\end{definition}
\begin{remark}\label{rem:Periodic}
Let $\mathbb{T}\eqdef\mathbb{R}/\mathbb{Z}$ be the torus of size $1$ endowed with the usual periodic metric
$d(x,y)= \inf_{k \in \mathbb{Z}} |x-y+k|$.
When $M$ is $\mathbb{R}\times\mathbb{T}$-valued, we will say that $\zeta$ is {\it periodic} and denote the space
of periodic (characteristic) pointed $\alpha$-spatial $\mathbb{R}$-trees by $\mathbb{T}^\alpha_{\mathrm{sp},\mathrm{per}}$ (or $\mathbb{C}^\alpha_{\mathrm{sp},\mathrm{per}}$).
As in~\mathfrak{c}ite[Definition 2.19]{CHbwt}, we point out that~\ref{i:MonSpace} makes sense also in this
case provided we restrict to intervals $(a,b)$ that do not wrap around the torus.
\end{remark}
In order to introduce a metric on $\mathbb{T}^\alpha_\mathrm{sp}$ (and $\mathbb{C}^\alpha_\mathrm{sp}$),
recall that a correspondence $\mathbb{C}C$ between two metric spaces $(\mathscr{T},d)$, $(\mathscr{T}',d')$ is a subset of
$\mathscr{T}\times\mathscr{T}'$ such that for all $\mathfrak{s}z\in\mathscr{T}$ there exists at least one $\mathfrak{s}z'\in\mathscr{T}'$ for which $(\mathfrak{s}z,\mathfrak{s}z')\in\mathbb{C}C$ and vice versa.
Its {\it distortion} is given by
\begin{equ}
\mathop{\mathrm{dis}} \mathbb{C}C\eqdef \mathfrak{s}up\{|d(\mathfrak{s}z,\mathfrak{s}w)-d'(\mathfrak{s}z',\mathfrak{s}w')|\,:\, (\mathfrak{s}z,\mathfrak{s}z'),\,(\mathfrak{s}w,\mathfrak{s}w')\in\mathbb{C}C\}\;.
\end{equ}
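The distortion of a finite correspondence can be computed by brute force, as in the following snippet (ours, purely illustrative); the quantity $\boldsymbol{\Delta}^{\mathrm{c},\mathbb{C}C}_\mathrm{sp}$ defined below then adds to it the two suprema involving the evaluation maps, before infimising over $\mathbb{C}C$.
\begin{verbatim}
import itertools

def distortion(corr, d1, d2):
    """Distortion of a correspondence `corr` (a list of pairs (z, z')) between
    two finite metric spaces with distance functions d1 and d2."""
    return max(
        abs(d1(z, w) - d2(zp, wp))
        for (z, zp), (w, wp) in itertools.product(corr, repeat=2)
    )
\end{verbatim}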
Let $\zeta=(\mathscr{T},\ast,d,M)$ and $\zeta'=(\mathscr{T}',\ast',d',M')\in\mathbb{T}^\alpha_\mathrm{sp}$ be such that
both $\mathscr{T}$ and $\mathscr{T}'$ are compact, and $\mathbb{C}C$ be a correspondence between them.
We set
\begin{equation}\label{e:MetC}
\begin{split}
\boldsymbol{\Delta}^{\mathrm{c},\mathbb{C}C}_\mathrm{sp}(\zeta,\zeta') &\eqdef \frac{1}{2}\mathop{\mathrm{dis}} \mathbb{C}C+\mathfrak{s}up_{(\mathfrak{s}z,\mathfrak{s}z')\in\mathbb{C}C}\|M(\mathfrak{s}z)-M'(\mathfrak{s}z')\|\\
&\quad +\mathfrak{s}up_{n\in\mathbb{N}}\,\,2^{n\alpha}\mathfrak{s}up_{\mathfrak{s}ubstack{(\mathfrak{s}z,\mathfrak{s}z'),(\mathfrak{s}w,\mathfrak{s}w')\in\mathbb{C}C\\d(\mathfrak{s}z,\mathfrak{s}w),d'(\mathfrak{s}z',\mathfrak{s}w')\in\mathbb{C}A_n}}
\|\delta_{\mathfrak{s}z,\mathfrak{s}w}M-\delta_{\mathfrak{s}z',\mathfrak{s}w'}M'\|
\end{split}
\end{equation}
where $\mathbb{C}A_n\eqdef(2^{-n},2^{-(n-1)}]$ for $n\in\mathbb{N}$, and $\delta_{\mathfrak{s}z,\mathfrak{s}w}M\eqdef M(\mathfrak{s}z)-M(\mathfrak{s}w)$. In the above,
we adopt the convention that if there exists no pair of couples
$(\mathfrak{s}z,\mathfrak{s}z'),(\mathfrak{s}w,\mathfrak{s}w')\in\mathbb{C}C$ such that $d(\mathfrak{s}z,\mathfrak{s}w)\in\mathbb{C}A_n$, then the increment of $M$ is removed
and the supremum is taken among all $\mathfrak{s}z',\mathfrak{s}w'$ such that $d'(\mathfrak{s}z',\mathfrak{s}w')\in\mathbb{C}A_n$, and vice versa\footnote{If instead we
adopted the more natural convention $\mathfrak{s}up \emptyset = 0$, then the triangle inequality might fail, e.g. when comparing a generic
spatial tree to the trivial tree made of only one point.}.
We can now define
\begin{equation}\label{e:MetricCompact}
\mathbb{D}elta^\mathrm{c}_\mathrm{sp}(\zeta,\zeta')\eqdef \boldsymbol{\Delta}^\mathrm{c}_\mathrm{sp}(\zeta,\zeta')+d_\mathrm{M1}(b_\zeta,b_{\zeta'})
\end{equation}
where
\begin{itemize}[noitemsep,label=-]
\item the first term is
\begin{equation}\label{e:MetricC}
\boldsymbol{\Delta}^\mathrm{c}_\mathrm{sp}(\zeta,\zeta')\eqdef\inf_{\mathbb{C}C\,:\,(\ast,\ast')\in\mathbb{C}C} \boldsymbol{\Delta}^{\mathrm{c},\mathbb{C}C}_\mathrm{sp}(\zeta,\zeta')\;,
\end{equation}
\item the map $b_\zeta$ is the {\it properness map}, which is $0$ for $r<0$, while
for $r\ge 0$ it is defined as
\begin{equation}\label{def:PropMap}
b_\zeta(r)\eqdef \mathfrak{s}up_{\mathfrak{s}z\,:\,M(\mathfrak{s}z)\in\Lambda_r} d(\ast,\mathfrak{s}z)\;,
\end{equation}
for $\Lambda_r\eqdef [-r,r]^2\mathfrak{s}ubset\mathbb{R}^2$ and $\Lambda_r=\Lambda_r^\mathrm{per}\eqdef [-r,r]\times\mathbb{T}$, in the periodic case,
\item $d_{\mathrm{M1}}$ is the metric on the space of c\`adl\`ag functions given in~\eqref{def:M1metric}.
\end{itemize}
Let us point out that by~\mathfrak{c}ite[Lemma 2.9]{CHbwt}, the properness map is non-decreasing and
c\`adl\`ag so that the second summand in~\eqref{e:MetricCompact} is meaningful.
Since the elements of $\mathbb{T}^\alpha_\mathrm{sp}$ are locally compact, by~\mathfrak{c}ite[Theorem 2.6(a)]{CHbwt} we can
generalise the definition of $\mathbb{D}elta_\mathrm{sp}^\mathrm{c}$ to the non-compact case. For $\zeta,\,\zeta'\in\mathbb{T}^\alpha_\mathrm{sp}$,
we set
\begin{equation}\label{e:Metric}
\begin{split}
\mathbb{D}elta_\mathrm{sp}(\zeta,\zeta')&\eqdef\int_0^{+\infty} e^{-r}\, \Big[1\wedge \boldsymbol{\Delta}^{\mathrm{c}}_\mathrm{sp}(\zeta^{(r)},\zeta'^{\,(r)})\Big]\,\mathrm{d} r+ d_\mathrm{M1}(b_\zeta,b_{\zeta'})\\
&=:\boldsymbol{\Delta}_\mathrm{sp}(\zeta,\zeta')+d_\mathrm{M1}(b_\zeta,b_{\zeta'})\,,
\end{split}
\end{equation}
where, for $r>0$,
\begin{equation}\label{e:rRestr}
\zeta^{(r)}\eqdef (\mathscr{T}^{(r)}, \ast, d, M)
\end{equation}
$\mathscr{T}^{(r)}\eqdef B_d(\ast,r]$ being the closed ball of radius $r$ in $\mathscr{T}$.
\begin{proposition}\label{p:Compactness}
For any $\alpha\in(0,1]$,
\begin{enumerate}[noitemsep, label=(\roman*)]
\item the space $(\mathbb{T}_\mathrm{sp}^\alpha,\mathbb{D}elta_\mathrm{sp})$ is a complete, separable metric space,
\item a subset $\mathbb{C}A=\{\zeta_a=(\mathscr{T}_a,\ast_a,d_a,M_a)\,:\,a\in A\}$, $A$ being an index set, of $\mathbb{T}^\alpha_\mathrm{sp}$
is relatively compact if and only if for every $r>0$ and $\eps>0$ there exist
\begin{enumerate}[noitemsep]
\item a finite integer $N(r;\eps)$ such that
\begin{equ}[e:EpsNet]
\mathfrak{s}up_a \mathcal{N}_{d_a}(\mathscr{T}_a^{(r)},\eps)\leq N(r;\eps)
\end{equ}
where $\mathcal{N}_{d_a}(\mathscr{T}_a^{(r)},\eps)$ is the cardinality of the minimal $\eps$-net in $\mathscr{T}_a^{(r)}$
with respect to the metric $d_a$,
\item a finite constant $C=C(r)>0$ and $\delta=\delta(r,\eps)>0$ such that
\begin{equation}\label{e:Equicont}
\mathfrak{s}up_{a\in A}\|M_a\|_{\infty}^{(r)}\leq C\qquad\text{and}\qquad\mathfrak{s}up_{a\in A}\delta^{-\alpha}\omega^{(r)}(M_a,\delta)<\eps\,,
\end{equation}
\item a finite constant $C'=C'(r)>0$ such that
\begin{equation}\label{e:Comb}
\mathfrak{s}up_a b_{\zeta_a}(r)\leq C'\,,
\end{equation}
\end{enumerate}
\item $\mathbb{C}^\alpha_\mathrm{sp}$ is closed in $\mathbb{T}^\alpha_\mathrm{sp}$.
\end{enumerate}
\end{proposition}
\begin{proof}
The first two points were shown in~\mathfrak{c}ite[Theorem 2.13 and Proposition 2.16]{CHbwt},
while the last is a consequence of~\mathfrak{c}ite[Lemma 2.22]{CHbwt}.
\end{proof}
An important feature satisfied by any characteristic tree $\zeta=(\mathscr{T},\ast,d,M)\in\mathbb{C}^\alpha_\mathrm{sp}$
and shown in~\mathfrak{c}ite[Proposition 2.23]{CHbwt},
is that $\mathscr{T}$ possesses a
unique open end $\downarrowgger$ with unbounded rays (see Definition~\ref{def:end}) such that
for every $\mathfrak{s}z\in\mathscr{T}$ and every $\mathfrak{s}w\in\llbracket\mathfrak{s}z,\downarrowgger\rangle$, one has
\begin{equation}\label{e:Ray}
M_t(\mathfrak{s}w)=M_t(\mathfrak{s}z)-d(\mathfrak{s}z,\mathfrak{s}w)\;.
\end{equation}
It is by virtue of the previous property that we can introduce the {\it radial map},
which allows us to move along rays in the $\mathbb{R}$-tree.
\begin{definition}\label{def:RadMap}
Let $\alpha\in(0,1]$, $\zeta=(\mathscr{T},\ast,d,M)\in\mathbb{C}^\alpha_\mathrm{sp}$ and $\downarrowgger$ the open end with unbounded
rays such that~\eqref{e:Ray} holds. The {\it radial map} $\rho$ associated to $\zeta$
is defined as
\begin{equation}\label{e:RadMap}
\rho(\mathfrak{s}z,s)\eqdef \iota_\mathfrak{s}z^{-1}(M_t(\mathfrak{s}z)-s)\,,\qquad\text{for $\mathfrak{s}z\in\mathscr{T}$ and $s\in(-\infty, M_t(\mathfrak{s}z)]$}
\end{equation}
where $\iota_\mathfrak{s}z$ was given in~\eqref{e:iota}. If instead $\zeta\in\hat\mathbb{C}^\alpha_\mathrm{sp}$,
the radial map $\hat\rho$ satisfies
$\hat\rho(\mathfrak{s}z,s)\eqdef \iota_\mathfrak{s}z^{-1}(s-M_t(\mathfrak{s}z))$, where this time $s\in[M_t(\mathfrak{s}z),+\infty)$.
\end{definition}
\mathfrak{s}ubsection{The tree map}\label{sec:TreeMap}
In this subsection, we introduce a map, the {\it tree map}, which serves as an inverse of the evaluation map.
Since the evaluation map is not necessarily bijective, we need a way to assign to each point of $\mathbb{R}^2$
a point in the tree, in such a way that certain continuity properties can be deduced from those of the evaluation map.
We begin with the following definition.
\begin{definition}\label{def:Right-most}
Let $\alpha\in(0,1)$, $\zeta=(\mathscr{T},\ast,d,M)\in\mathbb{C}^\alpha_\mathrm{sp}$ and
$\rho$ be the radial map associated to $\zeta$ given as in Definition~\ref{def:RadMap}.
For $z=(t,x)\in\mathbb{R}^2$, we say that
$\mathfrak{s}z$ is a {\it right-most point} for $z$ if $M_t(\mathfrak{s}z) = t$ and
\begin{equ}[e:Right-most]
M_x(\rho(\mathfrak{s}z,s))=\mathfrak{s}up\{M_x(\rho(\mathfrak{s}w,s))\,:\,M_t(\mathfrak{s}w) = t\;\&\; M_x(\mathfrak{s}w) \le x\}\,,
\end{equ}
for all $s<t$. {\it Left-most points} are defined as in~\eqref{e:Right-most} but replacing $\mathfrak{s}up$ with $\inf$ and
$M_x(\mathfrak{s}w) \le x$ with $M_x(\mathfrak{s}w) \ge x$. If there is a unique
right-most (resp. left-most) point we will denote it by $\mathfrak{s}z_\mathrm{r}$ (resp. $\mathfrak{s}z_\mathrm{l}$).
\end{definition}
\begin{remark}\label{rem:Right-mostUA}
For $\zeta\in\hat\mathbb{C}^\alpha_\mathrm{sp}$,
a point is said to be a right-most (or left-most) point if~\eqref{e:Right-most} holds for all $s>t$.
\end{remark}
For $z\in\mathbb{R}^2$ and an arbitrary characteristic tree, right-most and left-most points are not necessarily unique.
As we will see below, for elements of the measurable subset of $\mathbb{C}^\alpha_\mathrm{sp}$ given in~\mathfrak{c}ite[Definition 2.29]{CHbwt},
uniqueness does hold.
\begin{definition}\label{def:TreeCond}
Let $\alpha\in(0,1)$. We say that $\zeta=(\mathscr{T},\ast,d,M)\in\mathbb{C}^\alpha_\mathrm{sp}$ satisfies the {\it tree condition}
if
\begin{enumerate}[label=($\mathfrak{t}$)]
\item\label{i:TreeCond} for all $\mathfrak{s}z_1,\mathfrak{s}z_2\in\mathscr{T}$, if $M(\mathfrak{s}z_1)=M(\mathfrak{s}z_2)=(t,x)$ and there exists $\eps>0$ such that
$M(\rho(\mathfrak{s}z_1,s))=M(\rho(\mathfrak{s}z_2,s))$ for all $s\in[t-\eps,t]$, then $\mathfrak{s}z_1=\mathfrak{s}z_2$.
\end{enumerate}
We denote by $\mathbb{C}^\alpha_\mathrm{sp}(\mathfrak{t})$, the subset of $\mathbb{C}^\alpha_\mathrm{sp}$ whose elements satisfy~\ref{i:TreeCond}.
\end{definition}
\begin{lemma}\label{l:Right-most}
Let $\alpha\in(0,1)$ and $\zeta=(\mathscr{T},\ast,d,M)\in\mathbb{C}^\alpha_\mathrm{sp}(\mathfrak{t})$.
Then, for all $z \in\mathbb{R}^2$ there exist unique left-most and right-most points.
\end{lemma}
\begin{proof}
Since left-most and right-most points are exchanged under $M_x \mapsto -M_x$, we only need to
consider right-most points.
Let $z=(t,x)\in\mathbb{R}^2$, $\downarrowgger$ be the unique open end such that~\eqref{e:Ray} holds and $\rho$ be
$\zeta$'s radial map given in~\eqref{e:RadMap}.
Note first that we can assume without loss of generality that $z \in M(\mathscr{T})$ since the
right-most points for $z$ equal those of $\bar z = (t, \bar x)$, where
$\bar x = \mathfrak{s}up\{y \le x\,:\, (t,y) \in M(\mathscr{T})\}$. Since $M$ is proper, it is closed and therefore
$\bar z \in M(\mathscr{T})$.
Let $\{s_n\}_n\mathfrak{s}ubset\mathbb{R}$ be a sequence such that $s_n\uparrow t$ and
$A_n\eqdef\{\rho(\mathfrak{s}w,s_n)\,:\,\mathfrak{s}w\in M^{-1}(z)\}$. $A_n$ is finite since
$M^{-1}(z)$ is totally disconnected, thanks to~\eqref{e:Ray}, and is compact by properness of the evaluation map.
In particular, this also implies that the number of paths connecting points in $A_n$ with those in $A_{n+1}$ is finite.
We inductively construct a sequence $\{\mathfrak{s}w_n\}_n\mathfrak{s}ubset M^{-1}(z)$ as follows.
Let $\mathfrak{s}w_1$ be one of the points for which $M_x(\rho(\mathfrak{s}w_1,s))\geq M_x(\rho(\mathfrak{s}w,s))$
for all $s\in[s_0,s_1]$ and $\mathfrak{s}w\in M^{-1}(z)$.
Assume we picked $\mathfrak{s}w_{n}$. If $M_x(\rho(\mathfrak{s}w_{n},s))\geq M_x(\rho(\mathfrak{s}w,s))$ for all $s\in[s_n,s_{n+1}]$ and
all $\mathfrak{s}w\in M^{-1}(z)$ then set $\mathfrak{s}w_{n+1}\eqdef\mathfrak{s}w_{n}$. Otherwise choose any $\mathfrak{s}w_{n+1}$ so that
$M_x(\rho(\mathfrak{s}w_{n+1},s))$ coincides with the right hand side of~\eqref{e:Right-most} for all $s\in[s_n,s_{n+1}]$.
Notice that in the first case $d(\mathfrak{s}w_{n},\mathfrak{s}w_{n+1})=0$. In the other instead, there exists $\bar s\in [s_n,s_{n+1}]$ such that
$M_x(\rho(\mathfrak{s}w_n,\bar s))<M_x(\rho(\mathfrak{s}w_{n+1},\bar s))$, hence by monotonicity in space
$M_x(\rho(\mathfrak{s}w_n,s))\leq M_x(\rho(\mathfrak{s}w_{n+1},s))$ for any $s\leq \bar s$.
Moreover, for $s\in[s_{n-1},s_n]$, $M_x(\rho(\mathfrak{s}w_n,s))\geq M_x(\rho(\mathfrak{s}w_{n+1},s))$ by construction,
and therefore $M_x(\rho(\mathfrak{s}w_n,s))= M_x(\rho(\mathfrak{s}w_{n+1},s))$ for $s\in[s_{n-1},s_n]$.~\ref{i:TreeCond} then implies that
$\rho(\mathfrak{s}w_n,s_n)= \rho(\mathfrak{s}w_{n+1},s_n)$ and we conclude that $d(\mathfrak{s}w_{n},\mathfrak{s}w_{n+1})\leq2(t-s_n)$.
Hence, the sequence $\{\mathfrak{s}w_n\}_n$ is Cauchy and converges to a unique limit $\mathfrak{s}z\in M^{-1}(z)$. Since, for any
$n$, $d(\mathfrak{s}w_{n},\mathfrak{s}z)\leq2(t-s_n)$, we necessarily have $\rho(\mathfrak{s}z,s)=\rho(\mathfrak{s}w_{n},s)$ for all $s\leq s_n$, in particular for $s\in[s_{n-1},s_n]$, which
implies that $\mathfrak{s}z$ is a right-most point. Now, if there existed another one, say $\bar \mathfrak{s}z$, then by definition
$M(\rho(\bar\mathfrak{s}z,\mathfrak{c}dot))\equiv M(\rho(\mathfrak{s}z,\mathfrak{c}dot))$ on $(-\infty,t)$,
so that, by~\ref{i:TreeCond}, $d(\mathfrak{s}z,\bar\mathfrak{s}z)=0$.
\end{proof}
Thanks to the above lemma, we are ready for the following definition.
\begin{definition}\label{def:TreeM}
Let $\alpha\in(0,1)$ and $\mathbb{C}^\alpha_\mathrm{sp}(\mathfrak{t})$ (resp. $\hat\mathbb{C}^\alpha_\mathrm{sp}(\mathfrak{t})$)
be the subset of $\mathbb{C}^\alpha_\mathrm{sp}$ (resp. $\hat\mathbb{C}^\alpha_\mathrm{sp}$) whose elements satisfy~\ref{i:TreeCond}.
For $\zeta\in\mathbb{C}^\alpha_\mathrm{sp}(\mathfrak{t})$, we define
the {\it tree map} $\mathfrak{T}$ associated to $\zeta$ as
\begin{equ}[e:TreeM]
\mathfrak{T}(z)\eqdef \mathfrak{s}z_\mathrm{r}\,,\qquad\text{for all $z\in \mathbb{R}^2$}
\end{equ}
where $\mathfrak{s}z_\mathrm{r}$ is the unique right-most point (see Definition~\ref{def:Right-most}).
\end{definition}
\begin{remark}
In the previous definition we could have analogously picked the left-most point. The choice above was made so that,
under suitable assumptions on the evaluation map (see Proposition~\ref{p:ContT}), the tree map is c\`adl\`ag.
\end{remark}
The following proposition determines the continuity properties of the tree map.
\begin{proposition}\label{p:ContT}
Let $\alpha\in(0,1)$ and $\zeta\in\mathbb{C}^\alpha_\mathrm{sp}(\mathfrak{t})$.
Then, for every $t\in\mathbb{R}$, $x\mapsto \mathfrak{T}(t,x)$ is c\`adl\`ag.
\end{proposition}
Before proving the previous proposition, we state and show the following lemma, which contains more precise information
regarding the roles of left-most and right-most points in the continuity properties of the tree map.
\begin{lemma}\label{l:ContT}
Let $\alpha\in(0,1)$ and $\zeta\in\mathbb{C}^\alpha_\mathrm{sp}(\mathfrak{t})$.
Let $z=(t,x)\in\mathbb{R}^2\mathfrak{c}ap M(\mathscr{T})$ and assume there exists a sequence $\{z_n=(t,x_n)\}_n\mathfrak{s}ubset\mathbb{R}^2\mathfrak{c}ap M(\mathscr{T})$
converging to it. If
\begin{enumerate}[noitemsep]
\item $x_n\downarrow x$ then $\lim_n \mathfrak{T}(z_n)=\mathfrak{T}(z)$,
\item $x_n\uparrow x$ then $\lim_n \mathfrak{T}(z_n)$ exists and coincides with the left-most point $\mathfrak{s}z_\mathrm{l}$ of $z$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\zeta=(\mathscr{T},\ast,d,M)\in\mathbb{C}^\alpha_\mathrm{sp}$. We will prove only 1. since the other can be shown similarly.
Let $z$ and $\{z_n\}_n$ be as in the statement.
Since $M$ is proper and continuous, the sequence $\{\mathfrak{s}z^n_\mathrm{r}\eqdef \mathfrak{T}(z_n)\}_n$ converges
along subsequences and any limit point $\mathfrak{s}z$ is necessarily in $M^{-1}(z)$.
But now, monotonicity in space and~\eqref{e:Ray} imply that
\begin{equ}
d(\mathfrak{s}z,\mathfrak{s}z_\mathrm{r}^n)= d(\mathfrak{s}z,\mathfrak{s}z_\mathrm{r})\vee d(\mathfrak{s}z_\mathrm{r},\mathfrak{s}z^n_\mathrm{r})
\end{equ}
hence, by uniqueness of $\mathfrak{s}z_\mathrm{r}$, the result follows.
\end{proof}
\begin{proof}[of Proposition~\ref{p:ContT}]
Let $\zeta=(\mathscr{T},\ast,d,M)\in\mathbb{C}^\alpha_\mathrm{sp}$ and recall that by definition $M$ is both proper and continuous.
Assume first that $z=(t,x)\notin M(\mathscr{T})$. Then,
there exists $\eps>0$ such that $\{t\}\times[x-\eps,x+\eps]\mathfrak{c}ap M(\mathscr{T})=\emptyset$.
Hence, all $w\in \{t\}\times[x-\eps,x+\eps]$ have the same right-most point so that $\mathfrak{T}$ is constant there.
If $z=(t,x)\in M(\mathscr{T})$ and, for some $\eps>0$, $\{t\}\times(x,x+\eps)\mathfrak{c}ap M(\mathscr{T})=\emptyset$ then $\mathfrak{T}$
is constantly equal to $\mathfrak{T}(z)$ on $\{t\}\times[x,x+\eps)$ and therefore it is continuous from the right.
If $z=(t,x)\in M(\mathscr{T})$ and, for some $\eps>0$, $\{t\}\times(x-\eps,x)\mathfrak{c}ap M(\mathscr{T})=\emptyset$, since by property~\ref{i:Spread}
of characteristic trees $M^{-1}(\{t\}\times[x-\eps-2,x-\eps])\neq\emptyset$ and $M^{-1}(\{t\}\times[x-\eps-2,x-\eps])$
is closed, there exists $\bar z\eqdef \mathfrak{s}up\{w\in \{t\}\times[x-\eps-2,x-\eps]\mathfrak{c}ap M(\mathscr{T})\}$. But then,
$\mathfrak{T}$ is constantly equal to $\mathfrak{T}(\bar z)$ on $\{t\}\times(x-\eps,x)$ which implies that $\lim_{y\uparrow x}\mathfrak{T}(t,y)$ exists.
The case of $z$ being an accumulation point in $\{t\}\times[x-1,x+1]\mathfrak{c}ap M(\mathscr{T})$ was covered in Lemma~\ref{l:ContT},
so that the proof is concluded.
\end{proof}
\begin{remark}\label{rem:ContT}
Notice that in view of the proof of Proposition~\ref{p:ContT} and Lemma~\ref{l:ContT}, for
$\zeta=(\mathscr{T},\ast,d,M)\in\mathbb{C}^\alpha_\mathrm{sp}$, $\mathfrak{T}$ is continuous {\it only} at those points $z$ for which the cardinality of
$M^{-1}(z)$ is at most $1$.
\end{remark}
\mathfrak{s}ubsection{The (Double) Brownian Web tree}\label{sec:BW}
As pointed out in the introduction, a major role in the definition of the Brownian Castle
is played by the Brownian Web and, in particular, by its characterisation as a
characteristic spatial $\mathbb{R}$-tree.
In this subsection, we recall the main statements in~\mathfrak{c}ite{CHbwt},
where such a characterisation was derived, focusing
on those results which are instrumental to the present paper.
We begin with the following theorem (see~\mathfrak{c}ite[Theorem 3.8 and Remark 3.9]{CHbwt})
which establishes the existence of
the Brownian Web Tree and uniquely identifies its law in $\mathbb{C}^\alpha_\mathrm{sp}$.
\begin{theorem}\label{thm:BW}
Let $\alpha<\tfrac12$. There exists a $\mathbb{C}^\alpha_\mathrm{sp}$-valued random variable
$\zeta^\downarrow_\mathrm{bw}=(\mathscr{T}^\downarrow_\mathrm{bw},\ast^\downarrow_\mathrm{bw},d^\downarrow_\mathrm{bw},M^\downarrow_\mathrm{bw})$ with radial map $\rho^\downarrow$,
whose law is uniquely characterised by the following properties
\begin{enumerate}
\item for any deterministic point $w=(s,y)\in\mathbb{R}^2$ there exists almost surely a unique point
$\mathfrak{s}w\in\mathscr{T}^\downarrow_\mathrm{bw}$ such that $M^{\downarrow}_\mathrm{bw}(\mathfrak{s}w)=w$,
\item for any deterministic $n\in\mathbb{N}$ and $w_1=(s_1,y_1),\dots,w_n=(s_n,y_n)\in\mathbb{R}^2$, the joint distribution
of $( M^{\downarrow}_{\mathrm{bw},x}(\rho^\downarrow(\mathfrak{s}w_i,\mathfrak{c}dot)))_{i=1,\dots,n}$,
where $\mathfrak{s}w_1,\dots,\mathfrak{s}w_n$
are the points determined in 1.,
is that of $n$ coalescing backward Brownian motions starting at $w_1,\dots, w_n$,
\item for any deterministic countable dense set $\mathcal{D}$ such that $0\in\mathcal{D}$, let $\mathfrak{s}w$ be the point determined in 1.
associated to $w\in\mathcal{D}$ and $\tilde\ast^\downarrow$ that associated to $0$. Define
$\tilde\zeta_\infty^\downarrow(\mathcal{D})=(\tilde\mathscr{T}_\infty^\downarrow(\mathcal{D}),\tilde\ast^\downarrow,d^\downarrow,\tilde M_\infty^{\,\downarrow,\mathcal{D}})$ as
\begin{equation}\label{e:Coupling}
\begin{split}
\tilde\mathscr{T}_\infty^\downarrow(\mathcal{D})&\eqdef\{\rho^\downarrow(\mathfrak{s}w,t)\,:\,w=(s,y)\in\mathcal{D}\,,\,t\leq s\}\\
\tilde M_\infty^{\,\downarrow,\mathcal{D}}(\rho^\downarrow(\mathfrak{s}w,t))&\eqdef M^\downarrow_\mathrm{bw}(\rho^\downarrow(\mathfrak{s}w,t))
\end{split}
\end{equation}
and $d^\downarrow$ to be the ancestral metric in~\eqref{e:AncestralD}.
Let $\tilde\mathscr{T}^\downarrow(\mathcal{D})$ be the completion of $\tilde\mathscr{T}_\infty^\downarrow(\mathcal{D})$ under $d^\downarrow$,
$\tilde M^{\,\downarrow,\mathcal{D}}$ be the unique little $\alpha$-H\"older continuous extension of $\tilde M_\infty^{\,\downarrow,\mathcal{D}}$ and
$\tilde\zeta^\downarrow(\mathcal{D})\eqdef(\tilde\mathscr{T}^\downarrow(\mathcal{D}),\tilde\ast^\downarrow,d^\downarrow, \tilde M^{\,\downarrow,\mathcal{D}})$. Then,
$\tilde \zeta^\downarrow(\mathcal{D})\eqlaw\zeta^\downarrow_\mathrm{bw}$.
\end{enumerate}
The same statement holds upon taking
the periodic version of all objects and spaces above and
replacing the properties $1.$-$3.$ with $1_\mathrm{per}.$, $2_\mathrm{per}.$ and $3_\mathrm{per}.$
obtained from the former by adding the word
``periodic'' before any instance of ``Brownian motion''.
\end{theorem}
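Property 2.\ above can be visualised with the following discretisation (ours, purely illustrative): finitely many Brownian motions, here all started at the same time for simplicity, are evolved on a time grid and two paths are glued at the first grid time at which they cross, a crude stand-in for coalescence; running the time variable downwards gives the backward picture of Theorem~\ref{thm:BW}.
\begin{verbatim}
import numpy as np

def coalescing_bm(y0, n_steps, dt=1e-3, seed=0):
    """Discrete-time sketch of coalescing Brownian motions started from the
    positions y0 at a common time.  Paths in the same coalescence class share
    their noise, so once glued they stay together; gluing happens at the first
    grid time at which two neighbouring paths cross."""
    rng = np.random.default_rng(seed)
    y = np.array(sorted(y0), dtype=float)
    labels = np.arange(len(y))                 # coalescence class of each path
    trajectory = [y.copy()]
    for _ in range(n_steps):
        noise = np.sqrt(dt) * rng.standard_normal(len(y))
        y = y + noise[labels]                  # one increment per class
        for i in range(1, len(y)):             # glue neighbours that crossed
            if y[i] < y[i - 1]:
                labels[labels == labels[i]] = labels[i - 1]
                y[labels == labels[i - 1]] = y[i - 1]
        trajectory.append(y.copy())
    return np.array(trajectory), labels
\end{verbatim}
The returned \texttt{labels} encode which initial points have coalesced, i.e.\ the (discrete) tree structure underlying the family of paths.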
Thanks to the previous result, we can define the (periodic) Brownian Web Tree.
\begin{definition}\label{def:BW}
Let $\alpha<\tfrac12$. We define the {\it backward Brownian Web tree} and the {\it periodic backward Brownian Web tree}
to be the $\mathbb{C}^\alpha_\mathrm{sp}$- and $\mathbb{C}^\alpha_{\mathrm{sp},\mathrm{per}}$-valued
random variables $\zeta^\downarrow_\mathrm{bw}=(\mathscr{T}^\downarrow_\mathrm{bw},\ast^\downarrow_\mathrm{bw},d^\downarrow_\mathrm{bw},M^\downarrow_\mathrm{bw})$ and
$\zeta^{\mathrm{per},\downarrow}_\mathrm{bw}=(\mathscr{T}_\mathrm{bw}^{\mathrm{per},\downarrow},\ast_\mathrm{bw}^{\mathrm{per},\downarrow}, d_\mathrm{bw}^{\mathrm{per},\downarrow},M_\mathrm{bw}^{\mathrm{per},\downarrow})$
whose distributions are uniquely characterised by properties $1.$-$3.$ and
$1_\mathrm{per}.$, $2_\mathrm{per}.$ and $3_\mathrm{per}.$ in Theorem~\ref{thm:BW}, respectively.
\end{definition}
The following proposition states some important quantitative and qualitative properties of both the Brownian
Web Tree and its periodic counterpart.
\begin{proposition}\label{p:BW}
Let $\alpha<\tfrac12$ and $\zeta^\downarrow_\mathrm{bw}$ be the Brownian Web Tree given as in Definition~\ref{def:BW}.
Then, almost surely, for any fixed $\theta>\tfrac32$ and all $r>0$ there exists a constant $c=c(r)>0$
depending only on $r$ such that for all $\eps>0$
\begin{equ}[e:nMEC]
\mathcal{N}_{d_\mathrm{bw}^\downarrow}(\mathscr{T}_\mathrm{bw}^{\downarrow,\,(r)}, \eps)\leq c \eps^{-\theta}
\end{equ}
where $\mathcal{N}_{d_\mathrm{bw}^\downarrow}(\mathscr{T}^{\downarrow,\,(r)}, \eps)$ is defined as in~\eqref{e:EpsNet}.
Moreover, almost surely $M^\downarrow_\mathrm{bw}$ is surjective and~\ref{i:TreeCond} holds for $\zeta^\downarrow_\mathrm{bw}$.
All the properties above remain true in the periodic case.
\end{proposition}
\begin{proof}
See~\mathfrak{c}ite[Proposition 3.2 and Theorem 3.8]{CHbwt}.
\end{proof}
A key aspect of the Brownian Web Tree is that it comes naturally associated with a dual,
which consists of a spatial $\mathbb{R}$-tree whose rays, when embedded into $\mathbb{R}^2$,
are distributed as a family of coalescing {\it forward} Brownian motions.
In the following theorem and the subsequent definition (see~\mathfrak{c}ite[Theorem 3.1, Remark 3.18 and Definition 3.19]{CHbwt}),
we introduce the (periodic) Double Brownian Web Tree, a random couple of characteristic $\mathbb{R}$-trees
made of the Brownian Web Tree and its dual, and clarify the relation between the two.
\begin{theorem}\label{thm:DBW}
Let $\alpha<1/2$. There exists a $\mathbb{C}^\alpha_\mathrm{sp}\times\hat\mathbb{C}^\alpha_\mathrm{sp}$-valued random variable
$\zeta^{\downarrow\mathrel{\mspace{-1mu}}\uparrow}_\mathrm{bw}\eqdef(\zeta^\downarrow_\mathrm{bw},\zeta^\uparrow_\mathrm{bw})$,
$\zeta^{\bigcdot}_\mathrm{bw}=(\mathscr{T}^{\bigcdot}_\mathrm{bw},\ast^{\bigcdot}_\mathrm{bw},d^{\bigcdot}_\mathrm{bw},M^{\bigcdot}_\mathrm{bw})$, $\bigcdot\in\{\downarrow,\uparrow\}$, whose
law is uniquely characterised by the following properties
\begin{enumerate}[label=(\roman*)]
\item\label{i:Dist} Both $-\zeta^\uparrow_\mathrm{bw}\eqdef (\mathscr{T}^\uparrow_\mathrm{bw},\ast^\uparrow_\mathrm{bw},d^\uparrow_\mathrm{bw},-M^\uparrow_\mathrm{bw})$ and $\zeta^\downarrow_\mathrm{bw}$ are
distributed as the backward Brownian Web tree in Definition~\ref{def:BW}.
\item\label{i:Cross} Almost surely, for any $\mathfrak{s}z^\downarrow\in\mathscr{T}^\downarrow_\mathrm{bw}$ and $\mathfrak{s}z^\uparrow\in\mathscr{T}^\uparrow_\mathrm{bw}$, the paths
$M^\downarrow_{\mathrm{bw}}(\rho^\downarrow(\mathfrak{s}z^\downarrow,\mathfrak{c}dot))$ and $M^\uparrow_{\mathrm{bw}}(\rho^\uparrow(\mathfrak{s}z^\uparrow,\mathfrak{c}dot))$ do not cross,
where $\rho^\downarrow$ (resp. $\rho^\uparrow$) is the radial map of $\zeta^\downarrow_\mathrm{bw}$ (resp. $\zeta^\uparrow_\mathrm{bw}$).
\end{enumerate}
Moreover, almost surely $\zeta^{\downarrow\mathrel{\mspace{-1mu}}\uparrow}_\mathrm{bw}\in\mathbb{C}^\alpha_\mathrm{sp}(\mathfrak{t})\times\hat\mathbb{C}^\alpha_\mathrm{sp}(\mathfrak{t})$ and
$\zeta^\uparrow_\mathrm{bw}$ is determined by $\zeta^\downarrow_\mathrm{bw}$ and vice-versa, meaning that
the conditional law of $\zeta^\uparrow_\mathrm{bw}$ given $\zeta^\downarrow_\mathrm{bw}$ is almost surely given
by a Dirac mass.
The above statement remains true in the periodic setting upon replacing every object and space
with their periodic counterpart.
\end{theorem}
\begin{definition}\label{def:DBW}
Let $\alpha<\tfrac12$. We define the {\it double Brownian Web tree} and {\it double periodic Brownian Web tree}
as the $\mathbb{C}^\alpha_\mathrm{sp}\times\hat\mathbb{C}^\alpha_\mathrm{sp}$ and $\mathbb{C}^\alpha_{\mathrm{sp},\mathrm{per}}\times\hat\mathbb{C}^\alpha_{\mathrm{sp},\mathrm{per}}$-valued
random variables $\zeta^{\downarrow\mathrel{\mspace{-1mu}}\uparrow}_\mathrm{bw}\eqdef(\zeta^{\downarrow}_\mathrm{bw},\zeta^{\uparrow}_\mathrm{bw})$ and
$\zeta^{\mathrm{per},\downarrow\mathrel{\mspace{-1mu}}\uparrow}_\mathrm{bw}\eqdef(\zeta^{\mathrm{per},\downarrow}_\mathrm{bw}, \zeta^{\mathrm{per},\uparrow}_\mathrm{bw})$ given by
Theorem~\ref{thm:DBW}.
We will refer to $\zeta^\uparrow_\mathrm{bw}$ and $ \zeta^{\mathrm{per},\uparrow}_\mathrm{bw}$ as the {\it forward} (or dual) and {\it forward periodic,
Brownian Web trees}.
We denote their laws
by $\mathbb{T}heta^{\downarrow\mathrel{\mspace{-1mu}}\uparrow}_\mathrm{bw}(\mathrm{d}(\zeta^\downarrow\times\zeta^\uparrow))$ and $\mathbb{T}heta^{\mathrm{per},\downarrow\mathrel{\mspace{-1mu}}\uparrow}_\mathrm{bw}(\mathrm{d}(\zeta^\downarrow\times\zeta^\uparrow))$,
with marginals $\mathbb{T}heta^{\downarrow}_\mathrm{bw}(\mathrm{d}\zeta)$, $\mathbb{T}heta^{\uparrow}_\mathrm{bw}(\mathrm{d}\zeta)$ and
$\mathbb{T}heta^{\mathrm{per},\downarrow}_\mathrm{bw}(\mathrm{d}\zeta)$, $\mathbb{T}heta^{\mathrm{per},\uparrow}_\mathrm{bw}(\mathrm{d}\zeta)$ respectively.
\end{definition}
As we will see in the next section, the continuity properties of the Brownian Castle are crucially
connected to the cardinality of the preimages $(M^{\bigcdot}_\mathrm{bw})^{-1}(z)$ and $(M^{\mathrm{per},\bigcdot}_\mathrm{bw})^{-1}(z)$, for $\bigcdot\in\{\uparrow,\downarrow\}$
and $z$ in $\mathbb{R}^2$ or $\mathbb{R}\times\mathbb{T}$, and to the degrees of the points they contain.
Based on these two features, it is possible to classify all points of $\mathbb{R}^2$ or $\mathbb{R}\times\mathbb{T}$
as was shown in~\mathfrak{c}ite[Theorem 3.24]{CHbwt} and we now summarise this classification.
\begin{definition}\label{def:Type}
Let $\zeta_\mathrm{bw}^{\downarrow\mathrel{\mspace{-1mu}}\uparrow}=(\zeta^\downarrow_\mathrm{bw},\zeta^\uparrow_\mathrm{bw})$ be the double Brownian Web tree.
For $\bigcdot\in\{\uparrow,\downarrow\}$, the type of a point $z\in\mathbb{R}^2$ for $\zeta^{\bigcdot}_\mathrm{bw}$ is $(i,j)\in\mathbb{N}^2$, where
\begin{equ}
i=\mathfrak{s}um_{k=1}^{|(M^{\bigcdot}_\mathrm{bw})^{-1}(z)|}(\deg(\mathfrak{s}z_k^{\bigcdot})-1)\quad\text{and}\quad j=|(M^{\bigcdot}_\mathrm{bw})^{-1}(z)|\,.
\end{equ}
Above, $\{\mathfrak{s}z^{\bigcdot}_k\,:\,k\in\{1,\dots, |(M^{\bigcdot}_\mathrm{bw})^{-1}(z)|\}\}=(M^{\bigcdot}_\mathrm{bw})^{-1}(z)$.
We define $S^\downarrow_{i,j}$ (resp. $S^\uparrow_{i,j}$) as the subset of $\mathbb{R}^2$ containing all points of type $(i,j)$
for the backward (resp. forward) Brownian Web tree.
For the periodic Brownian Web $\zeta_\mathrm{bw}^{\mathrm{per},\downarrow\mathrel{\mspace{-1mu}}\uparrow}=(\zeta^{\mathrm{per},\downarrow}_\mathrm{bw},\zeta^{\mathrm{per},\uparrow}_\mathrm{bw})$,
the definition is the same as above and the set of all points in $\mathbb{R}\times\mathbb{T}$ of type $(i,j)$
for the backward (resp. forward)
periodic Brownian Web tree will be denoted by
$S^{\mathrm{per},\downarrow}_{i,j}$ (resp. $S^{\mathrm{per},\uparrow}_{i,j}$).
\end{definition}
\begin{theorem}\label{thm:Types}
For the backward and backward periodic Brownian Web trees $\zeta^\downarrow_\mathrm{bw}$ and $\zeta^{\mathrm{per},\downarrow}_\mathrm{bw}$,
almost surely, every $z\in\mathbb{R}^2$ (resp. $\mathbb{R}\times\mathbb{T}$) is of one of the following types,
all of which occur: $(0,1),\,(1,1),\,(2,1),\,(0,2),\,(1,2)$ and $(0,3)$.
Moreover, almost surely, $S^\downarrow_{0,1}$ has full Lebesgue measure, $S^\downarrow_{2,1}$ and $S^\downarrow_{0,3}$
are countable and dense and for every $t\in\mathbb{R}$
\begin{itemize}[noitemsep,label=-]
\item $S^\downarrow_{0,1}\mathfrak{c}ap\{t\}\times\mathbb{R}$ has full Lebesgue measure in $\{t\}\times\mathbb{R}$,
\item
$S^\downarrow_{1,1}\mathfrak{c}ap\{t\}\times\mathbb{R}$ and $S^\downarrow_{0,2}\mathfrak{c}ap\{t\}\times\mathbb{R}$ are both countable
and dense in $\{t\}\times\mathbb{R}$,
\item $S^\downarrow_{2,1}\mathfrak{c}ap\{t\}\times\mathbb{R}$, $S^\downarrow_{1,2}\mathfrak{c}ap\{t\}\times\mathbb{R}$
and $S^\downarrow_{0,3}\mathfrak{c}ap\{t\}\times\mathbb{R}$ have each cardinality at most $1$.
\end{itemize}
Finally, for every deterministic $t$, $S^\downarrow_{2,1}\mathfrak{c}ap\{t\}\times\mathbb{R}$, $S^\downarrow_{1,2}\mathfrak{c}ap\{t\}\times\mathbb{R}$
and $S^\downarrow_{0,3}\mathfrak{c}ap\{t\}\times\mathbb{R}$ are almost surely empty.
Moreover, $S^\downarrow_{i,j}=S^\uparrow_{i',j'}$ for $(i,j)/(i',j')=(0,1)/(0,1)$, $(1,1)/(0,2)$, $(2,1)/(0,3)$, $(0,2)/(1,1)$,
$(1,2)/(1,2)$ and $(0,3)/(0,3)$, and the same holds in the periodic case.
\end{theorem}
\begin{proof}
The first part of the statement corresponds to~\mathfrak{c}ite[Theorem 3.24]{CHbwt}, while the last
is a consequence of~\mathfrak{c}ite[Proposition 3.22]{CHbwt}.
\end{proof}
\mathfrak{s}ection{Branching Spatial Trees and the Brownian Castle measure}\label{sec:BST}
As mentioned in the introduction, the Brownian Castle is not just given by an $\mathbb{R}$-tree $\mathscr{T}$
realised on $\mathbb{R}^2$ by a map $M$, but furthermore comes with a stochastic process $X$ indexed by $\mathscr{T}$,
whose distribution depends on the metric structure of $\mathscr{T}$. (In our case, we want $X$ to be a Brownian motion
indexed by $\mathscr{T}$.)
How to do this in such a way that $X$ admits a (H\"older) continuous modification is the topic of Section~\ref{sec:MEC},
but first we want to introduce the space in which such an object lives.
\mathfrak{s}ubsection{Branching Spatial $\mathbb{R}$-trees}\label{sec:bsp}
The space of branching spatial $\mathbb{R}$-trees corresponds to spatial $\mathbb{R}$-trees endowed
with an additional (H\"older) continuous map, from the tree to $\mathbb{R}$, which, for us,
will encode a realisation of a suitable stochastic process. The term {\it branching} is chosen because
this extra map (read, process) should be thought of as {\it branching} at the points at which the branches of the tree coalesce.
\begin{definition}\label{def:BPST}
Let $\alpha,\beta\in(0,1)$. The space of {\it $(\alpha,\beta)$-branching spatial pointed $\mathbb{R}$-trees}
$\mathbb{T}^{\alpha,\beta}_\mathrm{bsp}$ is the set of couples $\mathfrak{c}hi=(\zeta, X)$ with $\zeta\in\mathbb{T}^\alpha_\mathrm{sp}$ and
$X\mathfrak{c}olon \mathscr{T}\to \mathbb{R}$, the {\it branching map}, a locally little $\beta$-H\"older continuous map.
In $\mathbb{T}^{\alpha,\beta}_\mathrm{bsp}$ two couples
$\mathfrak{c}hi=(\zeta,X),\,\mathfrak{c}hi'=(\zeta',X')$ are indistinguishable if there exists a bijective
isometry $\varphi:\mathscr{T}\to\mathscr{T}'$ such that $\varphi(\ast)=\ast'$, $M'\mathfrak{c}irc\varphi\equiv M$
and $X'\mathfrak{c}irc\varphi\equiv X$, in short $\varphi\mathfrak{c}irc\mathfrak{c}hi=\mathfrak{c}hi'$.
If furthermore $\zeta\in\mathbb{C}^\alpha_\mathrm{sp}$, we say that $\mathfrak{c}hi=(\zeta, X)$ is a {\it characteristic $(\alpha,\beta)$-branching
spatial $\mathbb{R}$-tree} and denote the subset of $\mathbb{T}^{\alpha,\beta}_\mathrm{bsp}$ containing such $\mathbb{R}$-trees
by $\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$.
\end{definition}
Similarly to what was done in Section~\ref{sec:SpTrees}, we endow the space $\mathbb{T}^{\alpha,\beta}_\mathrm{bsp}$ with the metric
\begin{equation}\label{def:bspMetric}
\begin{split}
\mathbb{D}elta_\mathrm{bsp}(\mathfrak{c}hi,\mathfrak{c}hi')&\eqdef\int_0^{+\infty} e^{-r}\, \big[1\wedge \boldsymbol{\Delta}_\mathrm{bsp}^\mathrm{c}(\mathfrak{c}hi^{(r)},\mathfrak{c}hi'^{(r)})\big]\mathrm{d} r+d_\mathrm{M1}(b_\zeta,b_{\zeta'})\\
&=:\boldsymbol{\Delta}_\mathrm{bsp}(\mathfrak{c}hi,\mathfrak{c}hi')+d_\mathrm{M1}(b_\zeta,b_{\zeta'})\,.
\end{split}
\end{equation}
for $\mathfrak{c}hi,\,\mathfrak{c}hi'\in\mathbb{T}^{\alpha,\beta}_\mathrm{bsp}$.
Above, given $r>0$, $\mathfrak{c}hi^{(r)}$, $\mathfrak{c}hi'^{\,(r)}$ are defined as in~\eqref{e:rRestr} and
\begin{equ}
\boldsymbol{\Delta}^\mathrm{c}_\mathrm{bsp}(\mathfrak{c}hi^{(r)},\mathfrak{c}hi'^{\,(r)})\eqdef\inf_{\mathbb{C}C:(\ast,\ast')\in\mathbb{C}C} \boldsymbol{\Delta}_\mathrm{bsp}^{\mathrm{c},\mathbb{C}C}(\mathfrak{c}hi^{(r)},\mathfrak{c}hi'^{\,(r)})\;,
\end{equ}
the infimum being taken over all correspondences $\mathbb{C}C\mathfrak{s}ubset\mathscr{T}^{(r)}\times\mathscr{T}'^{(r)}$ such that $(\ast,\ast')\in\mathbb{C}C$,
and for such a correspondence $\mathbb{C}C$
\begin{equation}\label{e:MetbC}
\begin{split}
\boldsymbol{\Delta}^{\mathrm{c},\mathbb{C}C}_\mathrm{bsp}(\mathfrak{c}hi^{(r)},\mathfrak{c}hi'^{\,(r)})\eqdef& \boldsymbol{\Delta}^{\mathrm{c},\mathbb{C}C}_\mathrm{sp}(\zeta^{(r)},\zeta'^{\,(r)})+\mathfrak{s}up_{(\mathfrak{s}z,\mathfrak{s}z')\in\mathbb{C}C}|X(\mathfrak{s}z)-X'(\mathfrak{s}z')|\\
&+\mathfrak{s}up_{n\in\mathbb{N}}2^{n\beta}\mathfrak{s}up_{\mathfrak{s}ubstack{(\mathfrak{s}z,\mathfrak{s}z'),(\mathfrak{s}w,\mathfrak{s}w')\in\mathbb{C}C\\d(\mathfrak{s}z,\mathfrak{s}w),d'(\mathfrak{s}z',\mathfrak{s}w')\in\mathbb{C}A_n}}
|\delta_{\mathfrak{s}z,\mathfrak{s}w}X-\delta_{\mathfrak{s}z',\mathfrak{s}w'}X'|\,.
\end{split}
\end{equation}
The following lemma determines the metric properties of $\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$ and gives a
compactness criterion for its subsets. Both the statement and the proof are completely
analogous to those for $\mathbb{T}^{\alpha}_\mathrm{sp}$ given in~\mathfrak{c}ite[Theorem 2.13 and Proposition 2.16]{CHbwt},
so we refer the reader to the aforementioned reference for further details.
\begin{lemma}\label{l:Comp}
For $\alpha,\beta\in(0,1)$, $(\mathbb{T}^{\alpha,\beta}_\mathrm{bsp},\mathbb{D}elta_\mathrm{bsp})$ is a complete separable metric space and
$\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$ is closed in it.
Moreover, a subset $\mathbb{C}A$ of $\mathbb{T}^{\alpha,\beta}_\mathrm{bsp}$ is relatively compact if and only if
\begin{enumerate}[noitemsep]
\item the projection of $\mathbb{C}A$ onto $\mathbb{T}^\alpha_\mathrm{sp}$ is relatively compact and
\item for every $r>0$ and $\eps > 0$ there exist constants $K=K(r)>0$ and $\delta=\delta(r,\eps)>0$ such that
\begin{equation}\label{e:Equicont2}
\mathfrak{s}up_{\mathfrak{c}hi\in\mathbb{C}A}\|X_\mathfrak{c}hi\|^{(r)}_{\infty}\leq K\qquad\text{and}\qquad\mathfrak{s}up_{\mathfrak{c}hi\in\mathbb{C}A}\delta^{-\beta}\omega^{(r)}(X_\mathfrak{c}hi,\delta)<\eps\,.
\end{equation}
\end{enumerate}
\end{lemma}
The following lemma will be needed in some of the proofs below. It gives a way to estimate the distance between
a branching spatial $\mathbb{R}$-tree and one of its subsets, provided the Hausdorff distance
(see~\eqref{e:Hausdorff}) between the two metric spaces is known.
\begin{lemma}\label{l:Approx}
Let $\alpha,\,\beta\in(0,1)$, $\mathfrak{c}hi=(\mathscr{T},\ast,d,M,X)\in\mathbb{T}_\mathrm{bsp}^{\alpha,\beta}$ and assume $\mathscr{T}$ is compact.
Let $\delta>0$, $T \mathfrak{s}ubset \mathscr{T}$ be such that $\ast \in T$ and
the Hausdorff distance between $T$ and $\mathscr{T}$ is bounded above
by $\delta$ and define $\bar\mathfrak{c}hi=(T,\ast,d,M \mathord{\upharpoonright} T, X \mathord{\upharpoonright} T)$. Then
\begin{equ}[e:Approx]
\boldsymbol{\Delta}_\mathrm{bsp}^\mathrm{c}(\mathfrak{c}hi,\bar\mathfrak{c}hi)\lesssim (2\delta)^{-\alpha}\omega(M,2\delta)+ (2\delta)^{-\beta}\omega(X,2\delta)
\end{equ}
\end{lemma}
\begin{proof}
The proof follows the very same steps of~\mathfrak{c}ite[Lemma 2.12]{CHbwt}.
\end{proof}
\mathfrak{s}ubsection{Stochastic Processes on trees}\label{sec:MEC}
We want to understand how to realise the branching map $X$ as a {\it H\"older continuous} real-valued
stochastic process indexed by a pointed locally compact complete $\mathbb{R}$-tree $(\mathscr{T},d,\ast)$,
whose covariance structure is suitably related to the metric $d$.
In this section, we will mostly consider the case of a {\it fixed} (but generic) $\mathbb{R}$-tree,
and provide conditions for the process $X$ to admit a $\beta$-H\"older continuous modification, for some $\beta\in(0,1)$.
We will always view the law of the process $X$ on a given $\mathscr{T}$, $(\mathscr{T},\ast,d,M)\in\mathbb{T}^\alpha_\mathrm{sp}$, as a
probability measure on the space of branching spatial $\mathbb{R}$-trees $\mathbb{T}^{\alpha,\beta}_\mathrm{bsp}$.
A function $\varphi:\mathbb{R}_+\to\mathbb{R}_+$ is said to be a {\it Young function} if it is convex, increasing and such that
$\lim_{t\to\infty}\varphi(t)=+\infty$ and $\varphi(0)=0$.
From now on, fix a standard probability space $(\Omega, \mathbb{C}A,\mathbb{P})$. Given a Young function $\varphi$,
the Orlicz space on $\Omega$ associated to $\varphi$ is
the set $L^\varphi$ of random variables $Z \mathfrak{c}olon \Omega \to \mathbb{R}$ such that
\begin{equation}\label{def:Orlicz}
\|Z\|_\varphi\eqdef\inf\{c>0\,:\,\mathbb{E}[\varphi(|Z|/c)]\leq 1\}<\infty\,.
\end{equation}
Notice that if $Z$ is a positive random variable such that $\|Z\|_{\varphi}\leq C$ for some Young function $\varphi$
and some finite $C>0$,
then Markov's inequality yields
\begin{equ}[e:Tail]
\mathbb{P}(Z>u)\leq 1 / \varphi(u/C)\;,\quad \forall \,u > 0\;.
\end{equ}
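Since $c\mapsto\mathbb{E}[\varphi(|Z|/c)]$ is non-increasing, the norm in~\eqref{def:Orlicz} can also be approximated by Monte Carlo and bisection; the following sketch (ours, for $\varphi=\varphi_q$ with $\varphi_q$ as in Proposition~\ref{p:Holder} below) is meant only as an illustration of the definition, and the function name is not part of the construction.
\begin{verbatim}
import numpy as np

def orlicz_norm(samples, q=1.0, tol=1e-3):
    """Monte Carlo estimate of inf{c>0 : E[exp((|Z|/c)^q) - 1] <= 1}.

    `samples` are i.i.d. draws of Z; bisection over c is justified since the
    map c -> E[phi_q(|Z|/c)] is non-increasing.  (Overflow warnings for very
    small c simply mean that the constraint is violated there.)"""
    z = np.abs(np.asarray(samples, dtype=float))
    def phi_expectation(c):
        return np.mean(np.expm1((z / c) ** q))
    lo, hi = 1e-8, 1.0
    while phi_expectation(hi) > 1.0:   # grow the upper bracket until feasible
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi_expectation(mid) > 1.0 else (lo, mid)
    return hi
\end{verbatim}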
The following proposition shows that if $\varphi$ is of exponential type, $\varphi_q(x) = e^{x^q}-1$,
and $\mathcal{N}_{d}(\mathscr{T},\eps)$
grows at most polynomially as $\eps \to 0$, then we obtain a modulus of continuity of order
$d\, |\log d|^{1/q}$.
\begin{proposition}\label{p:Holder}
Let $q\geq 1$ and $\varphi_q(x)\eqdef e^{x^q}-1$.
Let $(\mathscr{T},\ast,d)$ be a pointed complete proper metric space
and $\{X(\mathfrak{s}z):\mathfrak{s}z\in\mathscr{T}\}$ a stochastic process indexed by $\mathscr{T}$. Assume that there exist
$\nu \in (0,1]$ and $\theta >0$ such that, for every $r>0$,
there exists a constant $c >0$ for which
\begin{equ}[e:nMEC]
\mathcal{N}_{d}(\mathscr{T}^{(r)},\eps)\leq c\,\eps^{-\theta}\;,\qquad \text{for all $\eps\in(0,1)$}
\end{equ}
and
\begin{equ}[e:Holder1]
\|X(\mathfrak{s}z)-X(\mathfrak{s}z')\|_{\varphi_q}\leq c\, d(\mathfrak{s}z,\mathfrak{s}z')^\nu\;,\qquad \text{for all $\mathfrak{s}z,\mathfrak{s}z' \in \mathscr{T}^{(r)}$,}
\end{equ}
where $\mathscr{T}^{(r)}$ is defined as in~\eqref{e:rRestr}.
Then, $X$ admits a continuous version such that for every $r>0$, there exists a random variable $K=K(\omega,r)$ such that
\begin{equ}
|X(\mathfrak{s}z)-X(\mathfrak{s}z')|\leq K \,d(\mathfrak{s}z,\mathfrak{s}z')^\nu \,|\log d(\mathfrak{s}z,\mathfrak{s}z')|^{1/q}\;,\quad \text{for all $\mathfrak{s}z,\mathfrak{s}z' \in \mathscr{T}^{(r)}$.}
\end{equ}
Furthermore, one has the bound $\mathbb{P}(K \ge u) \le C_1 \exp(-C_2 u^q)$ for some constants $C_i > 0$ depending only on $r$,
$\nu$, $c$ and $\theta$.
\end{proposition}
\begin{proof}
We closely follow the argument and notations of \mathfrak{c}ite[Sec.~1.2]{Talagrand}. We also note that,
since $d^\nu$ is again a metric, it suffices to consider the case $\nu = 1$, so we do this now.
Set $\eps_n = 2^{-n}$,
$N_n = \mathcal{N}_d(\mathscr{T},\eps_n)$, and let $\pi_n(\mathfrak{s}z)$ denote a point of a minimal $\eps_n$-net closest to $\mathfrak{s}z$. Noting
that in our case $d(\pi_n(\mathfrak{s}z), \pi_{n+1}(\mathfrak{s}z)) \le 2\eps_n$, \eqref{e:Tail} yields
\begin{equ}
\mathbb{P}(|X(\pi_n(\mathfrak{s}z))-X(\pi_{n+1}(\mathfrak{s}z))| \ge u n^{1/q} \eps_n) \lesssim \exp(-c u^q n)\;,
\end{equ}
for some constant $c > 0$. Note furthermore that in our case $N_n N_{n-1}
\lesssim 2^{2\theta n}$ by assumption. Proceeding similarly to \mathfrak{c}ite[Sec.~1.2]{Talagrand},
we consider the event $\Omega_u$ on which
$|X(\pi_n(\mathfrak{s}z))-X(\pi_{n+1}(\mathfrak{s}z))| \le u n^{1/q} \eps_n$ for every $n \ge 0$, so that
\begin{equ}
\mathbb{P}(\Omega_u^c) \lesssim \mathfrak{s}um_{n \ge 0} 2^{2\theta n} \exp(-c u^q n) \lesssim \exp(-c u^q)\;.
\end{equ}
Furthermore, on $\Omega_u$, one has
\begin{equs}
|X(\mathfrak{s}z)-X(\mathfrak{s}z')| &\le \mathfrak{s}um_{n \ge n_0} |X(\pi_n(\mathfrak{s}z))-X(\pi_{n+1}(\mathfrak{s}z))| + \mathfrak{s}um_{n > n_0} |X(\pi_n(\mathfrak{s}z'))-X(\pi_{n+1}(\mathfrak{s}z'))| \\
&\qquad + |X(\pi_{n_0}(\mathfrak{s}z)) - X(\pi_{n_0+1}(\mathfrak{s}z'))|\\
&\le 4 u \mathfrak{s}um_{n \ge n_0} n^{1/q} \eps_n \approx u\, d(\mathfrak{s}z,\mathfrak{s}z') |\log d(\mathfrak{s}z,\mathfrak{s}z')|^{1/q}\;,
\end{equs}
where $n_0$ is such that $\eps_{n_0+1} \le d(\mathfrak{s}z,\mathfrak{s}z') \le \eps_{n_0}$. The claim then follows at once.
\end{proof}
Condition~\eqref{e:nMEC} on the size of the $\eps$-nets will always be met by the $\mathbb{R}$-trees we consider;
that this is the case for the Brownian Web Tree is stated in Proposition~\ref{p:BW}.
Therefore, we introduce a subset of the space of characteristic spatial trees whose elements satisfy it locally uniformly.
Let $\alpha\in(0,1)$, $\theta>0$ and let $c \mathfrak{c}olon \mathbb{R}_+ \to \mathbb{R}_+$ be an increasing function.
We define $\tilde\mathfrak{E}_\alpha(c,\theta)$, $\tilde\mathfrak{E}_\alpha(\theta)$ and $\tilde\mathfrak{E}_\alpha$ respectively as
\begin{equs}[def:MeasSet]
\tilde\mathfrak{E}_\alpha(c,\theta)&\eqdef \{\zeta=(\mathscr{T},\ast,d,M)\in \mathbb{T}^\alpha_\mathrm{sp}\,:\,\forall r,\eps>0\,,\,\,\mathcal{N}_d(\mathscr{T}^{(r)},\eps)\leq c(r)\eps^{-\theta}\}\,,\\
\tilde\mathfrak{E}_\alpha(\theta)&\eqdef \bigcup_{c} \tilde\mathfrak{E}_\alpha(c,\theta)\qquad\text{and}\qquad \tilde\mathfrak{E}_\alpha\eqdef \bigcup_\theta\tilde\mathfrak{E}_\alpha(\theta)\,.
\end{equs}
We further set $\mathfrak{E}_\alpha(c,\theta)\eqdef \tilde\mathfrak{E}_\alpha(c,\theta)\mathfrak{c}ap\mathbb{C}^\alpha_\mathrm{sp}$, and define $\mathfrak{E}_\alpha(\theta)$ and $\mathfrak{E}_\alpha$ accordingly.
Thanks to~\mathfrak{c}ite[Proposition 7.4.12]{BBI}, it is not difficult to verify that for every given $c$ and $\theta$ as above
$\tilde\mathfrak{E}_\alpha(c,\theta)$ is
closed in $\mathbb{T}^\alpha_\mathrm{sp}$, and consequently $\tilde\mathfrak{E}_\alpha(\theta)$ and $\tilde\mathfrak{E}_\alpha$ are measurable with respect
to the Borel $\mathfrak{s}igma$-algebra induced by the metric $\mathbb{D}elta_\mathrm{sp}$ (and so are $\mathfrak{E}_\alpha(\theta)$ and $\mathfrak{E}_\alpha$).
In the next proposition we show how~\eqref{e:Holder1} can be used to prove tightness for the laws,
on the space of branching spatial
$\mathbb{R}$-trees, of a family of stochastic processes
indexed by different spatial $\mathbb{R}$-trees that uniformly belong to $\tilde\mathfrak{E}_\alpha(c,\theta)$, for some $\theta$ and $c$.
\begin{proposition}\label{p:TightnessTree}
Let $q\geq 1$, $\alpha\in(0,1)$ and let $\mathfrak{K} \mathfrak{s}ubset \tilde\mathfrak{E}_\alpha(c,\theta)$, for some increasing $c\mathfrak{c}olon\mathbb{R}_+\to\mathbb{R}_+$ and $\theta>0$, be relatively compact.
For every $\zeta = (\mathscr{T},\ast,d,M)\in\mathfrak{K}$, let $X_\zeta$ be a stochastic process indexed by $\mathscr{T}$
and denote by $\mathbb{C}Q_{\zeta}$ the law of $(\zeta,X_\zeta)$.
Assume that there exists $\nu\in(0,1)$ such that for all $\eps>0$ and $r>0$, there are constants
$c_\infty=c_\infty(\eps)>0$ and $c_\nu=c_\nu(r)>0$ such that
\begin{equ}\label{e:Infty-tightness}
\inf_{\zeta\in\mathfrak{K}}\mathbb{C}Q_{\zeta}\left(|X_\zeta(\ast)|\leq c_\infty\right)\geq 1-\eps
\end{equ}
and
\begin{equation}\label{e:Holder-tightness}
\| X_\zeta(\mathfrak{s}z)-X_\zeta(\mathfrak{s}w)\|_{\varphi_q}\leq c_\nu d(\mathfrak{s}z,\mathfrak{s}w)^\nu\,,\qquad\text{for all $\mathfrak{s}z,\mathfrak{s}w\in\mathscr{T}^{(r)}$ and $\zeta \in \mathfrak{K}$.}
\end{equation}
Then, the family of probability measures $\{\mathbb{C}Q_{\zeta}\}_{\zeta \in \mathfrak{K}}$ is
tight in $\mathbb{T}_\mathrm{bsp}^{\alpha,\beta}$ for any $\beta<\nu$.
\end{proposition}
\begin{proof}
By Lemma~\ref{l:Comp}, we only need to focus on the maps $X_\zeta$ and, more specifically, on their restriction to the
$r$-neighbourhoods of $\ast$.
Since $\{X_\zeta(\ast)\}_{\zeta\in \mathfrak{K}}$ is tight by~\eqref{e:Infty-tightness}, it remains to argue that for every $\eps>0$
\begin{equation}\label{e:TightCond}
\lim_{\delta\to 0}\mathfrak{s}up_{\zeta \in \mathfrak{K}} \mathbb{C}Q_{\zeta}\Big( \delta^{-\beta}\mathfrak{s}up_{d(\mathfrak{s}z,\mathfrak{s}w)\leq\delta} |X_\zeta(\mathfrak{s}z)-X_\zeta(\mathfrak{s}w)|>\eps\Big)=0\,.
\end{equation}
This in turn is immediate from Proposition~\ref{p:Holder} and our assumption.
\end{proof}
Our main example is that of a Brownian motion indexed by a pointed $\mathbb{R}$-tree.
Let $(\mathscr{T},\ast,d)$ be a pointed locally compact complete $\mathbb{R}$-tree, and let $\{B(\mathfrak{s}z)\,:\,\mathfrak{s}z\in\mathscr{T}\}$
be the centred Gaussian process such that $B(\ast)\eqdef 0$ and such that
\begin{equation}\label{e:GP}
\mathbb{E}[(B(\mathfrak{s}z)-B(\mathfrak{s}z'))^2] = d(\mathfrak{s}z,\mathfrak{s}z')\;,
\end{equation}
for all $\mathfrak{s}z,\,\mathfrak{s}z'\in\mathscr{T}$. We call $B$ the Brownian motion on $\mathscr{T}$.
\begin{remark}\label{rem:GPExistence}
The existence of a Gaussian process whose covariance matrix is as above is guaranteed by the fact that any
$\mathbb{R}$-tree $(\mathscr{T},d)$ is of strictly negative type, see~\mathfrak{c}ite[Cor.~7.2]{HLM}.
\end{remark}
\begin{remark}\label{rem:contBM}
If $\zeta = (\mathscr{T},d,\ast,M)\in\tilde\mathfrak{E}_\alpha$,
then it follows from Proposition~\ref{p:Holder} that, for $B$ a Brownian motion on $\mathscr{T}$,
one has $(\zeta,B) \in \mathbb{T}^{\alpha,\beta}_\mathrm{bsp}$ almost surely, for every $\beta < \beta_\mathrm{Gau}\eqdef1/2$.
We will denote by $\mathbb{C}Q_\zeta^\mathrm{Gau}$ its law on $\mathbb{T}^{\alpha,\beta}_\mathrm{bsp}$.
\end{remark}
In the study of $0$-Ballistic Deposition, we will also consider Poisson processes indexed by an $\mathbb{R}$-tree.
The Poisson process
is clearly not continuous so, in order to fit it in our framework, we introduce a smoothened version of it.
First, recall that for any locally compact complete $\mathbb{R}$-tree $\mathscr{T}$, the skeleton of $\mathscr{T}$, $\mathscr{T}^o$, is defined
as the subset of $\mathscr{T}$ obtained by removing all its endpoints, i.e.
\begin{equation}\label{def:Skeleton}
\mathscr{T}^o\eqdef \bigcup_{\mathfrak{s}z\in\mathscr{T}}\llbracket\ast,\mathfrak{s}z\llbracket\,.
\end{equation}
For any $\mathscr{T}$ as above, there exists a unique $\mathfrak{s}igma$-finite measure $\ell=\ell_\mathscr{T}$, called the {\it length measure}
such that $\ell(\mathscr{T}\mathfrak{s}etminus\mathscr{T}^o)=0$ and
\begin{equation}\label{def:LengthMeasure}
\ell\big(\llbracket\mathfrak{s}z,\mathfrak{s}z'\mathrm{r}b\big)=d(\mathfrak{s}z,\mathfrak{s}z')\,,
\end{equation}
for all $\mathfrak{s}z,\mathfrak{s}z'\in\mathscr{T}$.
\begin{remark}\label{rem:disjoint}
One important property of the Brownian motion $B$ is that, given any four points $\mathfrak{s}z_i$, one has
\begin{equ}
\mathbb{E}\big[(B(\mathfrak{s}z_1)-B(\mathfrak{s}z_2))(B(\mathfrak{s}z_3)-B(\mathfrak{s}z_4))\big] = \ell \bigl(\llbracket\mathfrak{s}z_1,\mathfrak{s}z_2\mathrm{r}b \mathfrak{c}ap \llbracket\mathfrak{s}z_3,\mathfrak{s}z_4\mathrm{r}b\bigr)\;,
\end{equ}
where the equality follows immediately by polarisation and the tree structure of $\mathscr{T}$.
In particular, increments of $B$ are independent on any two disjoint
subtrees of $\mathscr{T}$.
\end{remark}
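Taking $\mathfrak{s}z_2=\mathfrak{s}z_4=\ast$ in the identity of Remark~\ref{rem:disjoint} and recalling $B(\ast)=0$ gives $\mathbb{E}[B(\mathfrak{s}z)B(\mathfrak{s}w)]=\tfrac12\bigl(d(\ast,\mathfrak{s}z)+d(\ast,\mathfrak{s}w)-d(\mathfrak{s}z,\mathfrak{s}w)\bigr)$, the Gromov product of $\mathfrak{s}z$ and $\mathfrak{s}w$ with respect to $\ast$. On a finite subtree this covariance matrix can be fed to a Cholesky factorisation to sample $B$, as in the following sketch (ours, purely illustrative; the small diagonal jitter is only there for numerical stability).
\begin{verbatim}
import numpy as np

def sample_tree_bm(dist, root=0, seed=0):
    """Sample the Brownian motion B indexed by a *finite* R-tree.

    `dist` is the matrix of tree distances d(z_i, z_j); `root` plays the role
    of the base point *, so B(root) = 0.  The covariance is the Gromov product
    Cov(B(z), B(w)) = (d(*,z) + d(*,w) - d(z,w)) / 2."""
    d = np.asarray(dist, dtype=float)
    n = d.shape[0]
    cov = 0.5 * (d[root][:, None] + d[root][None, :] - d)
    keep = [i for i in range(n) if i != root]   # drop the zero-variance root
    L = np.linalg.cholesky(cov[np.ix_(keep, keep)] + 1e-12 * np.eye(n - 1))
    rng = np.random.default_rng(seed)
    b = np.zeros(n)
    b[keep] = L @ rng.standard_normal(n - 1)
    return b
\end{verbatim}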
Let $\alpha\in(0,1)$, let $\zeta=(\mathscr{T},\ast,d,M)\in\mathbb{C}^\alpha_\mathrm{sp}$ be a characteristic tree
and let $\downarrowgger$ be the open end for which~\eqref{e:Ray} holds.
For $\gamma>0$, let $\mu_\gamma$ be the Poisson random measure on $\mathscr{T}$ with intensity $\gamma\ell$ and, for $a>0$,
let $\psi_a$ be a smooth non-negative real-valued function on $\mathbb{R}$,
compactly supported in $[0,a]$ and such that $\int_\mathbb{R} \psi_a(x)\,\mathrm{d} x=1$.
We define the smoothened Poisson random measure on $\mathscr{T}$ as
\begin{equation}\label{def:PRM}
\mu^{a}_\gamma(\mathfrak{s}w)\eqdef \int_{\llbracket\mathfrak{s}w,\downarrowgger\rangle}\psi_a(d(\mathfrak{s}w,\bar\mathfrak{s}z))\mu_\gamma(\mathrm{d}\bar\mathfrak{s}z)\,,
\qquad\mathfrak{s}w\in\mathscr{T}\,.
\end{equation}
In other words, we are smoothening the Poisson random measure $\mu_\gamma$ by fattening its points along the rays
from the endpoints to the open end $\downarrowgger$, so that
the value at a given point $\mathfrak{s}w$ depends only on Poisson points in
$\llbracket\mathfrak{s}w, \mathfrak{s}w(a)\mathrm{r}b$, where $\mathfrak{s}w(a)$ is the unique point on the ray $\llbracket\mathfrak{s}w,\downarrowgger\rangle$ such that
$d(\mathfrak{s}w,\mathfrak{s}w(a))=a$.
\begin{definition}\label{def:Poisson}
Let $\alpha\in(0,1)$ and $\zeta=(\mathscr{T},\ast,d,M)\in \mathbb{C}^\alpha_\mathrm{sp}$ with length measure
$\ell$. Let $\gamma>0$ and $\mu_\gamma$ be the Poisson random measure on
$\mathscr{T}$ with intensity measure $\gamma \ell$. For $a>0$, let $\psi_a$ be a smooth non-negative real-valued function on $\mathbb{R}$,
compactly supported in $[0,a]$ and such that $\int_\mathbb{R} \psi_a(x)\,\mathrm{d} x=1$.
We define the {\it rescaled compensated smoothened} (RCS in short) {\it Poisson process on $\mathscr{T}$} as
\begin{equation}\label{def:RCSPPtree}
N^{a}_\gamma(\mathfrak{s}z)\eqdef {1\over \mathfrak{s}qrt \gamma} \int_{\llbracket\ast,\mathfrak{s}z\mathrm{r}b}\bigl(\mu^{a}_\gamma(\mathfrak{s}w) - \gamma\bigr)\ell(\mathrm{d} \mathfrak{s}w)\;,\qquad\text{for $\mathfrak{s}z\in\mathscr{T}$,}
\end{equation}
where $\mu^a_\gamma$ is the smoothened Poisson random measure given in~\eqref{def:PRM}.
\end{definition}
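To illustrate Definition~\ref{def:Poisson}, the following sketch (ours, purely illustrative) simulates $N^{a}_\gamma$ along a single ray parametrised by arclength, with the choice $a=\gamma^{-p}$ of Lemma~\ref{l:PPtree} below; for simplicity the smooth bump $\psi_a$ is replaced by the flat kernel $a^{-1}\mathbf{1}_{[0,a]}$, which should not affect the qualitative picture.
\begin{verbatim}
import numpy as np

def rcs_poisson_on_ray(L, gamma, p=2.0, seed=0):
    """Sketch of the RCS Poisson process N^a_gamma along one ray of length L,
    parametrised by arclength s in [0, L], with a = gamma**(-p).

    With the flat kernel, mu^a_gamma(s) = (1/a) #{Poisson points in [s, s+a]}
    and N^a_gamma(s) = gamma^{-1/2} * int_0^s (mu^a_gamma(u) - gamma) du."""
    rng = np.random.default_rng(seed)
    a = gamma ** (-p)
    dx = a / 5.0                                   # resolve the smoothing window
    grid = np.arange(0.0, L + dx, dx)
    pts = np.sort(rng.uniform(0.0, L + a, size=rng.poisson(gamma * (L + a))))
    counts = (np.searchsorted(pts, grid + a, side="right")
              - np.searchsorted(pts, grid, side="left"))
    mu_a = counts / a
    return grid, np.cumsum((mu_a - gamma) * dx) / np.sqrt(gamma)
\end{verbatim}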
In case the $\mathbb{R}$-tree has (locally) finitely many endpoints and $\gamma$ and $a$ are fixed, it is easy
to see that the smoothened Poisson process defined above is Lipschitz.
That said, we want to obtain more quantitative information about its regularity and how the latter relates to the parameters
$\gamma$ and $a$, in order to be able to identify a regime in which a family of
RCS Poisson processes on a given tree converges weakly.
In the following Lemma (and the rest of the paper), for $\zeta=(\mathscr{T},\ast,d,M)\in\mathfrak{E}_\alpha$ and
$X$ a stochastic process indexed by $\mathscr{T}$, which admits a $\beta$-H\"older
continuous modification, we will denote by $\mathbb{C}Q_\zeta(\mathrm{d} X)$
the law of $(\zeta,X)$ in the space of $(\alpha,\beta)$-characteristic
branching spatial $\mathbb{R}$-trees and by $\mathcal{M}(\mathbb{C}^{\alpha,\beta}_\mathrm{bsp})$ the space of probability measures
on $\mathbb{C}_\mathrm{bsp}^{\alpha,\beta}$ endowed with the topology of weak convergence.
\begin{lemma}\label{l:PPtree}
Let $\alpha\in(0,1)$, $\zeta=(\mathscr{T},\ast,d,M)\in\mathfrak{E}_\alpha$, $N^{a}_\gamma$ be as in Definition~\ref{def:Poisson}
and set $\beta_\mathrm{Poi}\eqdef\frac{1}{2p}$.
If $a=\gamma^{-p}$ for some $p>1$, then for any $\beta<\beta_\mathrm{Poi}$, $(\zeta,N^{a}_\gamma)\in\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$ almost surely. Furthermore, denoting its law by $\mathbb{C}Q_\zeta^{\mathrm{Poi}_\gamma}$, the family $\{\mathbb{C}Q_\zeta^{\mathrm{Poi}_\gamma}\}_{\gamma\ge 1}$ is tight.
\end{lemma}
\begin{proof}
Thanks to Propositions~\ref{p:Holder} and~\ref{p:TightnessTree},
it suffices to show that there exists a constant $C$ depending only on $r$ such that
\begin{equation}\label{e:IncrementPP}
\| N^{a}_\gamma(\mathfrak{s}z)-N^{a}_\gamma(\mathfrak{s}w)\|_{\varphi_1}\leq C d(\mathfrak{s}z,\mathfrak{s}w)^{\frac{1}{2p}}\,,
\end{equation}
for all $\mathfrak{s}z,\mathfrak{s}w\in \mathscr{T}^{(r)}$, which in turn is essentially a consequence of
Lemma~\ref{l:SmoothPoisson} in the appendix. Indeed, if the points
$\mathfrak{s}z,\mathfrak{s}w$ belong to the same ray, then the increment $N^{a}_\gamma(\mathfrak{s}z)-N^{a}_\gamma(\mathfrak{s}w)$
coincides in distribution with that of $P^{a}_\gamma( d(\mathfrak{s}z,\mathfrak{s}w))$ in~\eqref{def:SmoothRCPP}, so
that~\eqref{e:IncrementPP} follows from~\eqref{b:SmoothRCPP}.
If $\mathfrak{s}z$ and $\mathfrak{s}w$ lie on different branches, let $\mathfrak{s}z_\dagger$ be the unique point for which
$\llbracket\mathfrak{s}z,\dagger\rangle\mathfrak{c}ap\llbracket\mathfrak{s}w,\dagger\rangle=\llbracket\mathfrak{s}z_\dagger,\dagger\rangle$. Then, the triangle
inequality for Orlicz norms yields
\begin{equ}
\| N^{a}_\gamma(\mathfrak{s}z)-N^{a}_\gamma(\mathfrak{s}w)\|_{\varphi_1}
\le
\| N^{a}_\gamma(\mathfrak{s}z)-N^{a}_\gamma(\mathfrak{s}z_\dagger)\|_{\varphi_1} + \| N^{a}_\gamma(\mathfrak{s}z_\dagger)-N^{a}_\gamma(\mathfrak{s}w)\|_{\varphi_1}\;,
\end{equ}
and, since $\mathfrak{s}z,\mathfrak{s}z_\dagger$ and $\mathfrak{s}z_\dagger,\mathfrak{s}w$ lie on common rays and both $d(\mathfrak{s}z,\mathfrak{s}z_\dagger)$ and $d(\mathfrak{s}z_\dagger,\mathfrak{s}w)$ are bounded by $d(\mathfrak{s}z,\mathfrak{s}w)$, the required bound follows by applying the previous case to each of the two terms on the right-hand side.
\end{proof}
\subsection{Probability measures on the space of branching spatial trees}\label{sec:MECc}
The results in the previous section identify suitable conditions on a spatial $\mathbb{R}$-tree
$\zeta=(\mathscr{T},\ast,d,M)$ and the distribution of the increments of a stochastic process $X$ indexed by $\mathscr{T}$, under which
the couple $(\zeta,X)$ is (almost surely) a branching spatial $\mathbb{R}$-tree.
We now want to let $\zeta$ vary and understand the behaviour of the map $\zeta\mapsto \mathbb{C}Q_\zeta = \mathop{\mathrm{Law}}(\zeta,X)$.
Since in the rest of the paper we will only deal with characteristic $\mathbb{R}$-trees, we will directly work with
these, even though some of our statements remain true for general spatial $\mathbb{R}$-trees.
We now write $\mathbb{C}Q_\zeta^{\gamma,p} = \mathop{\mathrm{Law}}(\zeta,X)$ for $X = N_\gamma^a$ as in \eqref{def:RCSPPtree}
with the choice $a = \gamma^{-p}$, and $\mathbb{C}Q_\zeta^{\mathrm{Gau}}$ for $X = B$ as in \eqref{e:GP}.
We then have the following continuity property.
\begin{proposition}\label{p:MeasG}
Let $\alpha\in(0,1)$, $c,\theta>0$, and $\mathbb{C}Q_\zeta = \mathbb{C}Q_\zeta^\mathrm{Gau}$, respectively $\mathbb{C}Q_\zeta = \mathbb{C}Q_\zeta^{\gamma,p}$
for some $\gamma > 0$ and $p > 1$. Then, the map
\begin{equ}[e:MapTreetoPro]
\mathfrak{E}_\alpha(c,\theta)\ni\zeta\mapsto\mathbb{C}Q_\zeta\in\mathcal{M}(\mathbb{C}^{\alpha,\beta}_\mathrm{bsp})
\end{equ}
is continuous, provided that
$\beta<\beta_\mathrm{Gau} = \f12$, respectively $\beta < \beta_{\mathrm{Poi}}$ as in Lemma~\ref{l:PPtree}.
\end{proposition}
In view of Lemma~\ref{l:PPtree}, Proposition~\ref{p:TightnessTree}, and the central limit theorem,
it is clear that, for a {\it fixed} tree $\zeta$, $\mathbb{C}Q_\zeta^{\gamma,p}$ converges weakly to $\mathbb{C}Q^\mathrm{Gau}_\zeta$
as $\gamma\uparrow\infty$, for any fixed $p$.
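To see, informally, where the Gaussian limit comes from, fix $\mathfrak{s}z\in\mathscr{T}$ and $\mathfrak{s}w\in\llbracket\ast,\mathfrak{s}z\mathrm{r}b$.
By~\eqref{def:PRM},~\eqref{def:RCSPPtree} and Fubini's theorem,
\begin{equ}
N^{a}_\gamma(\mathfrak{s}z)-N^{a}_\gamma(\mathfrak{s}w)={1\over \sqrt \gamma}\Bigl(\int_\mathscr{T} g\,\mathrm{d}\mu_\gamma - \gamma\, d(\mathfrak{s}z,\mathfrak{s}w)\Bigr)\,,
\qquad
g(\bar\mathfrak{s}z)\eqdef\int_{\llbracket\mathfrak{s}w,\mathfrak{s}z\mathrm{r}b}\psi_a(d(\mathfrak{s}u,\bar\mathfrak{s}z))\mathbbm{1}_{\bar\mathfrak{s}z\in\llbracket\mathfrak{s}u,\dagger\rangle}\,\ell(\mathrm{d}\mathfrak{s}u)\,,
\end{equ}
where $g$ takes values in $[0,1]$, vanishes outside an $a$-neighbourhood of $\llbracket\mathfrak{s}w,\mathfrak{s}z\mathrm{r}b$ and equals $1$
on $\llbracket\mathfrak{s}w,\mathfrak{s}z\mathrm{r}b$ except on a set of length at most $a$. By Campbell's formula, the variance of the increment
equals $\int_\mathscr{T} g^2\,\mathrm{d}\ell$, which differs from $d(\mathfrak{s}z,\mathfrak{s}w)$ by at most a quantity of order $a=\gamma^{-p}$,
consistently with the covariance structure~\eqref{e:GP} of the limiting Gaussian process.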
In the next statement, we show that such a convergence is locally uniform in $\zeta$.
\begin{proposition}\label{p:PPtoGau}
Let $\alpha\in(0,1)$, $p>1$, and $c,\theta>0$. Then, for any $\beta<\beta_\mathrm{Poi}$,
$\lim_{\gamma\to\infty}\mathbb{C}Q_\zeta^{\gamma,p}=\mathbb{C}Q_\zeta^\mathrm{Gau}$ in $\mathcal{M}(\mathbb{C}^{\alpha,\beta}_\mathrm{bsp})$,
uniformly over compact subsets of $\fE_\alpha(c,\theta)$.
\end{proposition}
The remainder of this subsection is devoted to the proof of the previous two statements.
We will first focus on Proposition~\ref{p:MeasG} since some tools from its proof will be needed in that
of Proposition~\ref{p:PPtoGau}. Before delving into the details we need to make
some preliminary considerations which apply to both.
Let $c,\,\theta>0$ be fixed and $\mathfrak{K}$ be a compact subset of $\mathfrak{E}_\alpha(c,\theta)$.
Since the constant $C$ in~\eqref{e:IncrementPP} is independent of both $\gamma$ and the
specific features of the tree,
Proposition~\ref{p:TightnessTree} implies that the families
$\{\mathbb{C}Q^{\gamma,p}_\zeta\,:\gamma>0,\,\zeta\in \mathfrak{K}\}$ and $\{\mathbb{C}Q^\mathrm{Gau}_\zeta\,:\,\zeta\in \mathfrak{K}\}$ are tight in
$\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$ for any $\beta<\beta_{\mathrm{Poi}}$ and $\beta<\beta_\mathrm{Gau}$ respectively,
and jointly, for $\beta<\beta_{\mathrm{Poi}}\wedge\beta_{\mathrm{Gau}}$.
Then, the proof of both Propositions~\ref{p:MeasG} and~\ref{p:PPtoGau} boils down to showing
that if $\{\zeta_n\}_n\mathfrak{s}ubset \mathfrak{K}$ is a sequence
converging to $\zeta$ with respect to $\mathbb{D}elta_\mathrm{sp}$ then there exists a coupling between
$(\zeta_n, X_n)$ and $(\zeta,X)$ such that $\mathbb{D}elta_\mathrm{bsp}((\zeta_n,X_n),(\zeta,X))$ converges to $0$
locally uniformly over $\zeta\in \mathfrak{K}$ and the H\"older norms of $X_n$ and $X$.
If we denote by $\mathbb{C}Q_n$ and $\mathbb{C}Q_\zeta$ the laws of $(\zeta_n, X_n)$ and $(\zeta,X)$ then for the first statement,
we need to pick $\mathbb{C}Q_n$ and $\mathbb{C}Q_\zeta$ to be either
$\mathbb{C}Q^{\gamma,p}_{\zeta_n}$ and $\mathbb{C}Q^{\gamma,p}_{\zeta}$, for $\gamma>0$ fixed,
or $\mathbb{C}Q^{\mathrm{Gau}}_{\zeta_n}$ and $\mathbb{C}Q^\mathrm{Gau}_{\zeta}$, while for the second, $\zeta_n=\zeta$ for all $n$,
$\mathbb{C}Q_n=\mathbb{C}Q^{\gamma_n,p}_\zeta$, with $\gamma_n\to\infty$ and $\mathbb{C}Q_\zeta=\mathbb{C}Q^\mathrm{Gau}_\zeta$.
The problem in the first case is that,
since $X$ and $X_n$ are indexed by different spaces, it is not {\it a priori} clear how to build these couplings.
In the next subsection, which represents the core of the proof, we construct one.
\subsubsection{Coupling processes on different trees}\label{sec:Coupling}
Fix $\alpha\in(0,1)$ and $\eta > 0$, and consider characteristic trees
$\zeta=(\mathscr{T},\ast,d,M),\,\zeta'=(\mathscr{T}',\ast',d',M')\in\mathbb{C}^\alpha_\mathrm{sp}$ such that $\mathscr{T}$ and $\mathscr{T}'$
are \textit{compact}.
As a shorthand, we set $\delta = \boldsymbol{\Delta}^{\mathrm{c}}_\mathrm{sp}(\zeta^{(r)},\zeta'^{\,(r)})$
and we fix a correspondence $\mathbb{C}C$ between $\mathscr{T}^{(r)}$ and $\mathscr{T}'^{\,(r)}$ such that
\begin{equ}[e:InitialBound]
\boldsymbol{\Delta}^{\mathrm{c},\mathbb{C}C}_\mathrm{sp}(\zeta^{(r)},\zeta'^{\,(r)})\le 2\delta\,.
\end{equ}
We will always assume that our two trees are sufficiently close so that $\delta \le \eta$.
(We typically think of the case $\delta \ll \eta$.)
Let then $B$ be the Gaussian process on $\mathscr{T}$ such that~\eqref{e:GP} holds and, for $\gamma>0$, let
$\mu_\gamma$ be the Poisson random measure on $\mathscr{T}$ with intensity $\gamma\ell$ and $N^a_\gamma$ be
the RCS Poisson process of Definition~\ref{def:Poisson} and Lemma~\ref{l:PPtree}.
The aim of this subsection is to first inductively construct subtrees $T$ and $T'$ of $\mathscr{T}$ and $\mathscr{T}'$
respectively that
are close to each other and whose distance from the original trees is easily quantifiable. Simultaneously,
we build a bijection $\phi \mathfrak{c}olon T \to T'$ which preserves the length measure and has small distortion.
This provides a natural coupling between $\mu_\gamma$ and a Poisson random measure $\mu_\gamma'$ on
$T'$ by $\mu_\gamma'(A) = \phi^*\mu_\gamma(A)\eqdef\mu_\gamma(\phi^{-1}(A))$, and similarly for the white noise
underlying $B$.
To start our inductive construction, we simply set
\begin{equ}
T_0 = \{\ast\}\;,\qquad T_0' = \{\ast'\}\;,\qquad \phi(\ast) = \ast'\;.
\end{equ}
Assume now that, for some $m\in\mathbb{N}$, we are given subtrees $T_{m-1}$ and $T_{m-1}'$ as well as a
bijection $\phi\mathfrak{c}olon T_{m-1}\to T_{m-1}'$ which preserves the length measure.
Let then $\mathfrak{s}v \in\mathscr{T} \mathfrak{s}etminus T_{m-1}$ be a point whose distance from $T_{m-1}$ is maximal and denote by
$\mathfrak{b}_m$ the projection of $\mathfrak{s}v$ onto $T_{m-1}$. We also set
\begin{equ}
\mathbb{C}C_{m-1}\eqdef \mathbb{C}C \mathfrak{c}up \phi_{m-1} = \mathbb{C}C \mathfrak{c}up \{(\mathfrak{s}z,\phi(\mathfrak{s}z))\,:\, \mathfrak{s}z \in T_{m-1}\}\;,
\end{equ}
where we have identified the bijection $\phi_{m-1} = \phi\mathord{\upharpoonright} T_{m-1}$ with the natural
correspondence induced by it.
If $d(\mathfrak{s}v,\mathfrak{b}_m)\leq 2(\eta \vee \mathop{\mathrm{dis}}\mathbb{C}C_{m-1}')$, we terminate our construction and set
\minilab{e:construction}
\begin{equ}[e:approxTree]
T\eqdef T_{m-1}\;,\quad T'\eqdef T_{m-1}'\;,\quad
Z\eqdef (T,\ast,d, M\mathord{\upharpoonright} T)\;,\quad Z'\eqdef (T',\ast',d', M'\mathord{\upharpoonright} T')\;.
\end{equ}
Otherwise, let $\mathfrak{s}v'\in\mathscr{T}'$ be such that $(\mathfrak{s}v,\mathfrak{s}v')\in\mathbb{C}C$ and $\mathfrak{b}_m'$ be its
projection onto $T_{m-1}'$. If $d(\mathfrak{s}v,\mathfrak{b}_m)\geq d'(\mathfrak{s}v',\mathfrak{b}_m')$ then we set $\mathfrak{s}v_m' = \mathfrak{s}v'$ and denote by
$\mathfrak{s}v_m \in \llbracket\mathfrak{b}_m,\mathfrak{s}v\mathrm{r}b$ the unique point such that $d(\mathfrak{s}v_m,\mathfrak{b}_m)= d'(\mathfrak{s}v_m',\mathfrak{b}_m')$. Otherwise,
we set $\mathfrak{s}v_m = \mathfrak{s}v$ and define $\mathfrak{s}v_m'$ correspondingly.
We then set
\minilab{e:construction}
\begin{equ}[e:approxTree2]
T_m\eqdef T_{m-1}\mathfrak{c}up\llbracket \mathfrak{b}_m,\mathfrak{s}v_m\mathrm{r}b\,,\qquad T_m'\eqdef T_{m-1}'\mathfrak{c}up\llbracket \mathfrak{b}_m',\mathfrak{s}v_m'\mathrm{r}b\;,
\end{equ}
and we extend $\phi$ to $\llbracket \mathfrak{b}_m,\mathfrak{s}v_m\mathrm{r}b \mathfrak{s}etminus \{\mathfrak{b}_m\}$ to be the unique isometry such that
$\phi(\mathfrak{s}v_m)=\mathfrak{s}v_m'$. We also write $\ell_m = d(\mathfrak{b}_m,\mathfrak{s}v_m) = d'(\mathfrak{b}_m',\mathfrak{s}v_m')$.
The following shows that this construction terminates after finitely
many steps.
\begin{lemma}\label{l:GrowingT'}
Let $N$ be the minimal number of balls of radius $\eta/8$ required to cover $\mathscr{T}$.
Then, the construction described above terminates after at most $N$ steps and,
until it does, one has $\ell_m \ge \eta/2$ so that in particular $\mathfrak{s}v_m'\notin T_{m-1}'$.
\end{lemma}
\begin{proof}
We start by showing the second claim. Assuming the construction has not terminated yet, we
only need to consider the case $d'(\mathfrak{s}v',\mathfrak{b}_m') < d(\mathfrak{s}v,\mathfrak{b}_m)$ so that $\mathfrak{s}v_m' = \mathfrak{s}v'$. Take $j<m$ such that
$\mathfrak{b}_m'\in\llbracket\mathfrak{b}_j',\mathfrak{s}v_j'\mathrm{r}b$, then
\begin{equs}
\ell_m &= d'(\mathfrak{s}v_m',\mathfrak{b}_m')=\frac{1}{2}(d'(\mathfrak{s}v_m',\mathfrak{s}v_j')+d'(\mathfrak{s}v_m',\mathfrak{b}_j')-d'(\mathfrak{s}v_j',\mathfrak{b}_j'))\\
&\geq \frac{1}{2}(d(\mathfrak{s}v,\mathfrak{s}v_j)+d(\mathfrak{s}v,\mathfrak{b}_j)-d(\mathfrak{s}v_j,\mathfrak{b}_j))-\frac{3}{2}\mathop{\mathrm{dis}}\mathbb{C}C_{m-1}' \label{e:PointOut}\\
&\geq d(\mathfrak{s}v,\mathfrak{b}_m)-\frac{3}{2}\mathop{\mathrm{dis}}\mathbb{C}C_{m-1}'\ge {\eta\over 2}\;.
\end{equs}
The passage from the first to the second line is a consequence of the fact that
$(\mathfrak{s}v,\mathfrak{s}v_m'),\,(\mathfrak{s}v_j,\mathfrak{s}v_j'),\,(\mathfrak{b}_j,\mathfrak{b}_j')\in\mathbb{C}C_{m-1}$, and the last bound follows
from the fact that $d(\mathfrak{s}v,\mathfrak{b}_m) \ge \f32\mathop{\mathrm{dis}}\mathbb{C}C_{m-1} + \f\eta2$ by assumption.
It remains to note that since $\ell_m > \eta/2$, the points $\mathfrak{s}v_j$ are all at distance at least $\eta/2$
from each other, so there can only be at most $N$ of them.
\end{proof}
Thanks to Lemma~\ref{l:GrowingT'}, we can also define
\begin{align}
B'(\mathfrak{s}z')-B'(\mathfrak{b}_m')&\eqdef B(\phi_m^{-1}(\mathfrak{s}z'))-B(\mathfrak{b}_m)\,,\qquad\text{for $\mathfrak{s}z'\in\llbracket \mathfrak{b}_m',\mathfrak{s}v_m'\mathrm{r}b$}\label{e:Gauss}\\
\mu_\gamma'\mathord{\upharpoonright} \llbracket \mathfrak{b}_m',\mathfrak{s}v_m'\mathrm{r}b&\eqdef \phi_m^*\mu_\gamma\,,\label{e:Poisson}
\end{align}
and $N'^{\,a}_\gamma$ accordingly.
In the following lemma, we denote by $X$ and $X'$ the processes on $T$ and $T'$, which correspond to
either $B$ and $B'$ or to $N^a_\gamma$ and $N'^{\,a}_\gamma$.
\begin{lemma}\label{l:Coupling}
In the setting above, let $Z,\,Z'$ be as in~\eqref{e:approxTree} and $X$ and $X'$ be constructed inductively via~\eqref{e:Gauss}
or~\eqref{e:Poisson}. If~\eqref{e:InitialBound} holds, then $(Z,X)$ and $(Z',X')$ belong to $\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$
and
\begin{equ}[b:Coupling]
\boldsymbol{\Delta}^{\mathrm{c},\phi}_\mathrm{bsp}((Z,X),(Z',X'))\lesssim (5^{N}\delta)^{-\alpha}\omega(M, 5^{N}\delta)+N5^{N\kappa}\|X\|_{\beta+\kappa} \delta^\kappa\,.
\end{equ}
for any $\kappa\in(0,1)$ sufficiently small, where $N$ is as in Lemma~\ref{l:GrowingT'}.
Moreover, the Hausdorff distance between both $T$ and $\mathscr{T}$, and $T'$ and $\mathscr{T}'$, is bounded above by
$2( 5^{N}\delta) + \eta$.
\end{lemma}
\begin{proof}
We will first bound the distortion $\mathop{\mathrm{dis}}\phi$ between $T$ and $T'$ by induction on $m$.
We begin by showing that $\mathop{\mathrm{dis}}(\mathbb{C}C_{m-1}\mathfrak{c}up\{(\mathfrak{s}v_m,\mathfrak{s}v_m'),(\mathfrak{b}_m,\mathfrak{b}_m')\})\leq\f72\mathop{\mathrm{dis}}\mathbb{C}C_{m-1}$.
Assume without loss of generality
that $d(\mathfrak{s}v,\mathfrak{b}_m)> d'(\mathfrak{s}v',\mathfrak{b}_m')$ and let $(\mathfrak{s}w,\mathfrak{s}w')\in\mathbb{C}C_{m-1}$.
By the triangle inequality and the fact that, by construction, $(\mathfrak{s}v,\mathfrak{s}v_m'),\,(\mathfrak{b}_m,\phi_{m-1}(\mathfrak{b}_m))\in\mathbb{C}C_{m-1}$
we have
\begin{equs}
|d(\mathfrak{s}v_m,\mathfrak{s}w)-d'(\mathfrak{s}v'_m,\mathfrak{s}w')|&\leq d(\mathfrak{s}v,\mathfrak{s}v_m)+\mathop{\mathrm{dis}}\mathbb{C}C_{m-1}\,,\\
|d(\mathfrak{b}_m,\mathfrak{s}w)-d'(\mathfrak{b}'_m,\mathfrak{s}w')|&\leq \mathop{\mathrm{dis}}\mathbb{C}C_{m-1}+d'(\phi_{m-1}(\mathfrak{b}_m),\mathfrak{b}_m')\,,
\end{equs}
where we added and subtracted $d(\mathfrak{s}v,\mathfrak{s}w)$ in the first and $d'(\phi_{m-1}(\mathfrak{b}_m),\mathfrak{s}w')$ in the second.
Now, as a consequence of~\eqref{e:PointOut}
\begin{equ}
d(\mathfrak{s}v,\mathfrak{s}v_m)=d(\mathfrak{s}v,\mathfrak{b}_m)-d(\mathfrak{s}v_m,\mathfrak{b}_m) =d(\mathfrak{s}v,\mathfrak{b}_m)- d'(\mathfrak{s}v',\mathfrak{b}_m')<\tfrac{3}{2}\mathop{\mathrm{dis}} \mathbb{C}C_{m-1}
\end{equ}
while
\begin{equs}
d'(\phi_{m-1}&(\mathfrak{b}_m),\mathfrak{b}_m')=d'(\mathfrak{s}v_m',\phi_{m-1}(\mathfrak{b}_m))-d'(\mathfrak{s}v_m',\mathfrak{b}'_m)\\
&\leq \mathop{\mathrm{dis}} \mathbb{C}C_{m-1}+d(\mathfrak{s}v,\mathfrak{b}_m)-d(\mathfrak{s}v_m,\mathfrak{b}_m)= \mathop{\mathrm{dis}} \mathbb{C}C_{m-1}+d(\mathfrak{s}v,\mathfrak{s}v_m)\leq \tfrac{5}{2}\mathop{\mathrm{dis}} \mathbb{C}C_{m-1}
\end{equs}
and both hold since, by construction, $d(\mathfrak{s}v_m,\mathfrak{b}_m)=d'(\mathfrak{s}v_m',\mathfrak{b}_m')$.
Therefore the claim $\mathop{\mathrm{dis}}(\mathbb{C}C_{m-1}\mathfrak{c}up\{(\mathfrak{s}v_m,\mathfrak{s}v_m'),(\mathfrak{b}_m,\mathfrak{b}_m')\})\leq\f72\mathop{\mathrm{dis}}\mathbb{C}C_{m-1}$, follows at once.
Now, for any $\mathfrak{s}z\in T_m\mathfrak{s}etminus T_{m-1}$, let $\tilde\mathfrak{s}z\in\mathscr{T}$ be such that $(\tilde \mathfrak{s}z,\phi_m(\mathfrak{s}z))\in\mathbb{C}C_{m-1}$.
We clearly have
\begin{equ}
|d(\mathfrak{s}z,\mathfrak{s}w)-d'(\phi_m(\mathfrak{s}z),\mathfrak{s}w')|\leq d(\mathfrak{s}z,\tilde\mathfrak{s}z)+\mathop{\mathrm{dis}} \mathbb{C}C_{m-1}\,.
\end{equ}
Denote by $\tilde\mathfrak{b}\in T_m$ the projection of $\tilde\mathfrak{s}z$ onto $T_m$. In order to bound $d(\mathfrak{s}z,\tilde\mathfrak{s}z)$,
it suffices to exploit the fact that if $\tilde\mathfrak{b}\in\llbracket\mathfrak{s}z,\mathfrak{s}v_m\mathrm{r}b$, then $d(\mathfrak{s}z,\tilde\mathfrak{s}z)=d(\mathfrak{b}_m,\tilde\mathfrak{s}z)-d(\mathfrak{b}_m,\mathfrak{s}z)$,
while if $\tilde\mathfrak{b}\in T_m\mathfrak{s}etminus\llbracket\mathfrak{s}z,\mathfrak{s}v_m\mathrm{r}b$, then $d(\mathfrak{s}z,\tilde\mathfrak{s}z)=d(\mathfrak{s}v_m,\tilde\mathfrak{s}z)-d(\mathfrak{s}v_m,\mathfrak{s}z)$.
Let $(\mathfrak{s}y,\mathfrak{s}y')$ be either $(\mathfrak{s}v_m,\mathfrak{s}v_m')$ or $(\mathfrak{b}_m,\mathfrak{b}_m')$, so that
\begin{equs}
d(\mathfrak{s}z,\tilde\mathfrak{s}z)=d(\mathfrak{s}y,\tilde\mathfrak{s}z)-d(\mathfrak{s}y,\mathfrak{s}z)&\leq d'(\mathfrak{s}y',\phi_m(\mathfrak{s}z))+\mathop{\mathrm{dis}}(\mathbb{C}C_{m-1}\mathfrak{c}up\{(\mathfrak{s}v_m,\mathfrak{s}v_m'),(\mathfrak{b}_m,\mathfrak{b}_m')\})-d(\mathfrak{s}y,\mathfrak{s}z)\\
&=\mathop{\mathrm{dis}}(\mathbb{C}C_{m-1}\mathfrak{c}up\{(\mathfrak{s}v_m,\mathfrak{s}v_m'),(\mathfrak{b}_m,\mathfrak{b}_m')\})\leq \tfrac{7}{2}\mathop{\mathrm{dis}}\mathbb{C}C_{m-1}
\end{equs}
where we used that, by construction, $d'(\mathfrak{s}y',\phi_m(\mathfrak{s}z))=d(\mathfrak{s}y,\mathfrak{s}z)$. Hence,
$\mathop{\mathrm{dis}}\phi_m\leq\mathop{\mathrm{dis}}\mathbb{C}C_m\leq \tfrac92 \mathop{\mathrm{dis}}\mathbb{C}C_{m-1}$ and since $\mathop{\mathrm{dis}}\mathbb{C}C_0=\mathop{\mathrm{dis}}\mathbb{C}C$,
we conclude that $\mathop{\mathrm{dis}}\phi_m\leq \mathop{\mathrm{dis}}\mathbb{C}C_m\leq (\tfrac92)^m\mathop{\mathrm{dis}}\mathbb{C}C$
and therefore $\mathop{\mathrm{dis}}\phi\leq \mathop{\mathrm{dis}}(\mathbb{C}C\mathfrak{c}up\phi)\lesssim 5^N\delta$.
Concerning the evaluation maps, take $\mathfrak{s}z_1, \mathfrak{s}z_2 \in T$ and choose
$\mathfrak{s}w_1, \mathfrak{s}w_2\in\mathscr{T}$ such that $(\mathfrak{s}w_i,\phi(\mathfrak{s}z_i)) \in\mathbb{C}C$. One has
\begin{equation}\label{e:Points}
d(\mathfrak{s}z_i,\mathfrak{s}w_i)=|d(\mathfrak{s}z_i,\mathfrak{s}w_i)-d'(\phi(\mathfrak{s}z_i),\phi(\mathfrak{s}z_i))|\leq \mathop{\mathrm{dis}} (\phi\mathfrak{c}up\mathbb{C}C)\lesssim 5^{N}\delta\;,
\end{equation}
so that
\begin{equs}[e:InftyBoundFinal]
\|M(\mathfrak{s}z_1)-M'(\phi(\mathfrak{s}z_1))\|&\leq \|\delta_{\mathfrak{s}z_1,\mathfrak{s}w_1} M\|+\|M(\mathfrak{s}w_1)-M'(\phi(\mathfrak{s}z_1))\|\\
&\leq \omega(M,5^N\delta) + 2\delta\;,
\end{equs}
where we used the little H\"older continuity of $M$ and~\eqref{e:InitialBound}.
For the H\"older part of the distance instead, let $n\in\mathbb{N}$ and assume further that
$d(\mathfrak{s}z_1,\mathfrak{s}z_2),\,d(\phi(\mathfrak{s}z_1),\phi(\mathfrak{s}z_2))\in\mathcal{A}_n$.
If there exist no $\mathfrak{s}w_1,\,\mathfrak{s}w_2\in\mathscr{T}$ such that $(\mathfrak{s}w_i,\phi(\mathfrak{s}z_i)) \in\mathbb{C}C$ and also $d(\mathfrak{s}w_1,\mathfrak{s}w_2)\in\mathcal{A}_n$, then
for $n\in\mathbb{N}$ such that $2^{-n}>5^N\delta$, we exploit~\eqref{e:InftyBoundFinal} to obtain
\begin{equ}
\|\delta_{\mathfrak{s}z_1,\mathfrak{s}z_2}M-\delta_{\phi(\mathfrak{s}z_1),\phi(\mathfrak{s}z_2)}M'\|\lesssim 2^{-n\alpha}((5^N\delta)^{-\alpha}\omega(M, 5^N\delta))
\end{equ}
while for $n$ such that $2^{-n}\leq 5^N\delta$, by definition of $\mathbb{D}elta^\mathrm{c}_\mathrm{sp}$ (see below~\eqref{e:MetC}), we get
\begin{equ}
\|\delta_{\mathfrak{s}z_1,\mathfrak{s}z_2}M-\delta_{\phi(\mathfrak{s}z_1),\phi(\mathfrak{s}z_2)}M'\|\leq \|\delta_{\mathfrak{s}z_1,\mathfrak{s}z_2}M\|+\|\delta_{\phi(\mathfrak{s}z_1),\phi(\mathfrak{s}z_2)}M'\|\lesssim 2^{-n\alpha}(2^{n\alpha}\omega(M, 2^{-n})+\delta)
\end{equ}
which in turn is bounded by $2^{-n\alpha}((5^N\delta)^{-\alpha}\omega(M, 5^N\delta))$.
In case instead $d(\mathfrak{s}w_1,\mathfrak{s}w_2)\in\mathcal{A}_n$, then we simply apply the triangle inequality to write
\begin{equ}
\|\delta_{\mathfrak{s}z_1,\mathfrak{s}z_2}M-\delta_{\phi(\mathfrak{s}z_1),\phi(\mathfrak{s}z_2)}M'\|\leq \|\delta_{\mathfrak{s}z_1,\mathfrak{s}z_2}M-\delta_{\mathfrak{s}w_1,\mathfrak{s}w_2}M\|+\|\delta_{\mathfrak{s}w_1,\mathfrak{s}w_2}M-\delta_{\phi(\mathfrak{s}z_1),\phi(\mathfrak{s}z_2)}M'\|
\end{equ}
Thanks to the estimate on the distortion of $\mathbb{C}C\mathfrak{c}up\phi$ and~\eqref{e:InftyBoundFinal},
the first summand can be controlled as in the proof
of~\cite[Lemma 2.12]{CHbwt}, while the second is bounded by $2\delta$ thanks to~\eqref{e:InitialBound}.
We focus now on the branching maps, for which we proceed once again by induction on the iteration step $m$.
Notice that for $m=0$, there is nothing to prove.
We now assume that for some $m<N$, there exist $K,\,K'>0$ such that
\begin{gather}
\mathfrak{s}up_{(\mathfrak{s}z,\mathfrak{s}z')\in\phi_{m-1}}|X(\mathfrak{s}z)-X'(\mathfrak{s}z')|\leq K \delta^\rho\label{e:IndInfty}\\
\|X'\mathord{\upharpoonright} T'_{m-1}\|_\rho\leq K'\|X\|_\rho\label{e:HolderNorm}
\end{gather}
for $\rho<\beta$, but arbitrarily close to it. Let $(\mathfrak{s}z,\mathfrak{s}z')\in\phi_m\mathfrak{s}etminus\phi_{m-1}$
be such that $\mathfrak{s}z\in\llbracket\mathfrak{b}_m,\mathfrak{s}v_m\mathrm{r}b$ and $\mathfrak{s}w$ be the point in $T_{m-1}$ for which
$(\mathfrak{s}w,\mathfrak{b}_{m}')\in\mathbb{C}C_{m-1}$. Then,
\begin{equs}
|X(\mathfrak{s}z)-X'(\mathfrak{s}z')|&=|X(\mathfrak{b}_{m})-X'(\mathfrak{b}'_{m})|\leq |\delta_{\mathfrak{b}_m,\mathfrak{s}w}X|+|X(\mathfrak{s}w)-X'(\mathfrak{b}'_{m})|\\
&\leq \|X\|_\rho d(\mathfrak{b}_{m},\mathfrak{s}w)^\rho+K\delta^\rho\leq (5^{N\rho}\|X\|_\rho+K)\delta^\rho
\end{equs}
where the passage from the first to the second line is a consequence of the H\"older regularity of $X$ and~\eqref{e:IndInfty},
while in the last inequality we exploited the fact that $(\mathfrak{b}_m,\mathfrak{b}_m')\in\phi_m$ and $(\mathfrak{s}w,\mathfrak{b}_m')\in\mathbb{C}C_{m-1}$, together with the same
bound as in~\eqref{e:Points}.
Concerning the H\"older norm of $X'$, let $\mathfrak{s}z'\in T_m'\mathfrak{s}etminus T_{m-1}'$ and $\mathfrak{s}w'\in T_{m-1}'$, then by triangle inequality
we have
\begin{equ}
|\delta_{\mathfrak{s}z',\mathfrak{s}w'}X'|\leq |\delta_{\mathfrak{s}z',\mathfrak{b}_m'}X'|+|\delta_{\mathfrak{b}_m',\mathfrak{s}w'}X'|\leq \|X\|_\rho(1+K') d'(\mathfrak{s}z',\mathfrak{s}w')^\rho
\end{equ}
where we used~\eqref{e:HolderNorm}. Hence, the $\rho$-H\"older norm of $X'$ on $T'$ is bounded above by $N\|X\|_\rho$.
For the second summand in~\eqref{e:MetbC}, let $(\mathfrak{s}z,\mathfrak{s}z'),(\mathfrak{s}w,\mathfrak{s}w')\in\phi$ be such that $d(\mathfrak{s}z,\mathfrak{s}w),d'(\mathfrak{s}z',\mathfrak{s}w')\in\mathbb{C}A_n$.
Then, we have
\begin{equ}
|\delta_{\mathfrak{s}z,\mathfrak{s}w}X-\delta_{\mathfrak{s}z',\mathfrak{s}w'}X'|\lesssim 2^{-n\rho}(\|X\|_\rho+\|X'\|_\rho)\lesssim 2^{-n\rho} N \|X\|_\rho
\end{equ}
as well as
\begin{equ}
|\delta_{\mathfrak{s}z,\mathfrak{s}w}X-\delta_{\mathfrak{s}z',\mathfrak{s}w'}X'|\lesssim N5^{N\rho}\|X\|_\rho \delta^\rho\,.
\end{equ}
Hence, by geometric interpolation,~\eqref{b:Coupling} follows.
For the last part of the statement, notice that the Hausdorff distance between $\mathscr{T}$ and $T$
is bounded above by $2(\eta \vee \mathop{\mathrm{dis}}\mathbb{C}C') \lesssim \eta + 5^N \delta$ by the definition of our halting condition.
Concerning the Hausdorff distance between $\mathscr{T}'$ and $T'$, let $\mathfrak{s}z'\in\mathscr{T}'$.
Take $\mathfrak{s}z\in\mathscr{T}$ such that $(\mathfrak{s}z,\mathfrak{s}z')\in\mathbb{C}C$, $\mathfrak{s}w\in T$ such that $d(\mathfrak{s}z,\mathfrak{s}w)\lesssim \eta + 5^N \delta$, which
exists since $d_H(\mathscr{T}, T)\lesssim \eta + 5^N \delta$, and $\mathfrak{s}w'\in T'$ such that $(\mathfrak{s}w,\mathfrak{s}w')\in\phi$. Then
\begin{equ}
d'(\mathfrak{s}z',\mathfrak{s}w')\leq|d'(\mathfrak{s}z',\mathfrak{s}w')-d(\mathfrak{s}z,\mathfrak{s}w)|+ d(\mathfrak{s}z,\mathfrak{s}w)\lesssim \mathop{\mathrm{dis}}(\mathbb{C}C\mathfrak{c}up\phi)+\eta + 5^N \delta
\end{equ}
from which the claim follows at once.
\end{proof}
We are now ready for the proofs of Propositions~\ref{p:MeasG} and~\ref{p:PPtoGau}.
\begin{proof}[of Proposition~\ref{p:MeasG}]
Let $\{\zeta_n\}_n\,,\zeta\mathfrak{s}ubset\mathfrak{E}_\alpha(c,\theta)$ be such that $\mathbb{D}elta_\mathrm{sp}(\zeta_n,\zeta)$ converges to $0$.
Let $\eps>0$ be fixed. Since the family $\{\mathbb{C}Q_n\}_n$ is tight, there exists $\mathcal{K}_\eps \mathfrak{s}ubset \mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$ compact such that
\begin{equ}[e:TightFamily]
\inf_{n\in\mathbb{N}} \mathbb{C}Q_n(\mathcal{K}_\eps)\geq 1-\eps\,.
\end{equ}
We want to bound the Wasserstein distance between $\mathbb{C}Q_n$ and $\mathbb{C}Q_\zeta$. In view of~\eqref{e:TightFamily}, we have
\begin{equs}[e:Wass1]
\mathcal{W}(\mathbb{C}Q_{n},\mathbb{C}Q_{\zeta})&=\inf_{g\in\Gamma(\mathbb{C}Q_{\zeta_n},\mathbb{C}Q_{\zeta})}\mathbb{E}_g\left[\mathbb{D}elta_\mathrm{bsp}((\zeta_n,X_n),(\zeta,X))\right]\\
&\leq \inf_{g\in\Gamma(\mathbb{C}Q_{\zeta_n},\mathbb{C}Q_{\zeta})}\mathbb{E}_g\left[\mathbb{D}elta_\mathrm{bsp}((\zeta_n,X_n),(\zeta,X))\mathbbm{1}_{\mathcal{K}_\eps}\right]+\eps\\
&\leq \inf_{g\in\Gamma(\mathbb{C}Q_{\zeta_n},\mathbb{C}Q_{\zeta})}\mathbb{E}_g\left[\mathbb{D}elta^\mathrm{c}_\mathrm{bsp}((\zeta_n^{(r)},X_n^{(r)}),(\zeta^{(r)},X^{(r)}))\mathbbm{1}_{\mathcal{K}_\eps}\right]+2\eps\;,
\end{equs}
where in the last passage we used the definition of metric $\mathbb{D}elta_\mathrm{bsp}$ in~\eqref{def:bspMetric} and chose $r$ so that
$e^{-r}<\eps$. We now apply the construction \eqref{e:construction} with $\zeta$ replaced by
$\zeta^{(r)}$ and $\zeta'$ replaced by $\zeta_n^{(r)}$.
The triangle inequality then yields
\begin{align}
\mathbb{D}elta^\mathrm{c}_\mathrm{bsp}&((\zeta_n^{(r)},X_n^{(r)}),(\zeta^{(r)},X^{(r)}))\leq \mathbb{D}elta^\mathrm{c}_\mathrm{bsp}((Z_n,X_n^{(r)}),(Z,X^{(r)}))\label{e:Summ1}\\
&+\mathbb{D}elta^\mathrm{c}_\mathrm{bsp}((\zeta_n^{(r)},X_n^{(r)}),(Z_n,X_n^{(r)}))+\mathbb{D}elta^\mathrm{c}_\mathrm{bsp}((\zeta^{(r)},X^{(r)}),(Z,X^{(r)}))\;.\label{e:Summ2}
\end{align}
Since the summands in~\eqref{e:Summ2} only depend on one of the two probability measures, their
coupling is irrelevant. By the last point of Lemma~\ref{l:Coupling}, the Hausdorff distance of both $\mathscr{T}_n^{(r)}$ and
$\mathscr{T}^{(r)}$ from $T'$ and $T$ is at most of order $\eta + 5^N\delta$ so that we can apply Lemma~\ref{l:Approx}.
If we now first choose $\eta \approx \eps$
small enough, then we can guarantee that each of the two terms in~\eqref{e:Summ2} is less than $\eps$,
provided that $n$
is sufficiently large (and therefore $\delta$ sufficiently small). Note here that even though $N$
depends (badly) on $\eta$, it is independent of $n$.
Finally, upon possibly choosing $\delta$ even smaller and exploiting the coupling of the previous section,
we can use~\eqref{b:Coupling} to control~\eqref{e:Summ1} by $\eps$,
so that the proof is concluded.
\end{proof}
\begin{proof}[of Proposition~\ref{p:PPtoGau}]
Throughout the proof we will write $\mathbb{C}Q^\gamma_\zeta$ for $\mathbb{C}Q^{\gamma,p}_\zeta$
and $\mathbb{C}Q_\zeta$ for $\mathbb{C}Q^\mathrm{Gau}_\zeta$ and we fix a compact set $\mathfrak{K} \mathfrak{s}ubset \fE_\alpha(c,\theta)$.
Let $\eps>0$ be fixed. Since the family $\{\mathbb{C}Q^{\gamma}_\zeta,\,\mathbb{C}Q_\zeta\,:\gamma>0,\,\zeta\in \mathfrak{K}\}$
is tight in $\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$ for any $\beta<\beta_{\mathrm{Poi}}$, there exists $\mathcal{K}_\eps\mathfrak{s}ubset\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$ compact such that~\eqref{e:TightFamily} holds with the infimum taken over $\zeta\in \mathfrak{K}$ and $\gamma>0$.
Then, proceeding as in~\eqref{e:Wass1} and following the strategy used to control~\eqref{e:Summ2}
in the previous proof, we see that we can choose $r,\,\eta>0$ in such a way that
\begin{equ}[e:WassDistCoup]
\mathfrak{s}up_{\zeta\in \mathfrak{K}}\mathcal{W}(\mathbb{C}Q^\gamma_\zeta,\mathbb{C}Q_\zeta)\leq\mathfrak{s}up_{\zeta\in \mathfrak{K}} \inf_{g\in\Gamma(\mathbb{C}Q^\gamma_{\zeta},\mathbb{C}Q_{\zeta})}\mathbb{E}_g\left[\mathbb{D}elta^\mathrm{c}_\mathrm{bsp}((Z,X_\gamma^{(r)}),(Z,X^{(r)}))\mathbbm{1}_{\mathcal{K}_\eps}\right]+ 4\eps\;,
\end{equ}
where $Z = Z_\zeta$ is again constructed from $\zeta$ as in \eqref{e:construction}.
We are left to determine a coupling under which the first term is small.
Let $W$ be a standard Brownian motion and $P_\gamma$
a rescaled compensated Poisson process of intensity $\gamma$ on $\mathbb{R}_+$, coupled in such a way that, with
probability at least $1-\eps$, one has
\begin{equ}[e:dlambda]
\mathfrak{s}up_{t\in [0,L]} |W(t)-P_\gamma(t)| \le (1+L) \gamma^{-1/5} \eqdef d_\gamma\;,
\end{equ}
provided that $\gamma$ is sufficiently large.
Here $L\eqdef \mathfrak{s}up_{\zeta\in \mathfrak{K}}\ell_\zeta(T)$ with $\ell_\zeta$ the length measure on $\mathscr{T}$
is finite by Lemma~\ref{l:GrowingT'} and the existence of
such a coupling (with $1/5$ replaced by any exponent less than $1/4$) is guaranteed by the quantitative
form of Donsker's invariance principle. Similarly, we have
$\mathfrak{s}up_{\zeta\in \mathfrak{K}} \#\{\mathfrak{s}z\in T_\zeta\,:\,\deg(\mathfrak{s}z)=1\}<\infty$ and we denote this supremum by $N$.
For $\zeta\in \mathfrak{K}$, we order the endpoints and the respective branching points
of $T$ according to the procedure of Section~\ref{sec:Coupling} and recursively define the subtrees $T_j$,
$j\leq N$, of $T$ as in~\eqref{e:approxTree}. For every $j\leq N$, denote by $\phi_j$ the unique bijective isometry
from $\llbracket\mathfrak{b}_j,\mathfrak{s}v_j\mathrm{r}b$ to $[\ell_\zeta( T_{j-1}), \ell_\zeta(T_{j-1})+d(\mathfrak{b}_j,\mathfrak{s}v_j)]=[\ell_\zeta(T_{j-1}),\ell_\zeta(T_j)]$ with $\phi_j(\mathfrak{b}_j) = \ell_\zeta( T_{j-1})$, so that the intervals associated to distinct branches have disjoint interiors.
Given any function $Y \mathfrak{c}olon [0,L] \to \mathbb{R}$, we then define $\tilde Y \mathfrak{c}olon T \to \mathbb{R}$ to be the unique function
such that $\tilde Y(\ast) = 0$ and
\begin{equ}[e:defYtilde]
\tilde Y(\mathfrak{s}z) =
Y(\phi_j(\mathfrak{s}z))-Y(\phi_{j}(\mathfrak{b}_{j}))+\tilde Y(\mathfrak{b}_j)\;,\quad \text{for $\mathfrak{s}z\in T_j\mathfrak{s}etminus T_{j-1}$. }
\end{equ}
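Purely as an illustration of~\eqref{e:defYtilde} (and under the assumption, automatically satisfied by the construction above,
that each branch is attached to a previously constructed one), the following Python sketch lifts a function $Y$ on $[0,L]$ to a
function on a tree whose branches have been laid out as consecutive subintervals of $[0,L]$:
\begin{verbatim}
def lift_to_tree(Y, branches):
    # Y        : callable on [0, L], e.g. a sampled Brownian path.
    # branches : list of (s_j, parent); branch j is carried by the interval of
    #            [0, L] starting at s_j, and parent is either None (branch
    #            attached to the root) or a pair (k, u) meaning that branch j
    #            is attached to the point of branch k sitting at position u.
    #            Parents are assumed to be listed before their children.
    base = []                            # value of tilde-Y at each attachment point
    for s, parent in branches:
        if parent is None:
            base.append(0.0)             # tilde-Y(root) = 0
        else:
            k, u = parent
            s_k = branches[k][0]
            base.append(base[k] + Y(u) - Y(s_k))
    # tilde-Y at position u along branch j, following (e:defYtilde):
    return lambda j, u: base[j] + Y(u) - Y(branches[j][0])
\end{verbatim}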
This then allows us to construct the desired coupling by setting
$B = \tilde W$ and $N_\gamma = \tilde P_\gamma$, as well as
$N^a_\gamma$ to be the smoothened version of $N_\gamma$.
It follows from \eqref{e:defYtilde} that, setting $\delta_j = \mathfrak{s}up_{\mathfrak{s}z \in T_j} |B(\mathfrak{s}z)-N_\gamma(\mathfrak{s}z)|$, we have
$\delta_{j+1} \le \delta_j + 2 d_\gamma$ on the event \eqref{e:dlambda}.
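Indeed, for $\mathfrak{s}z\in T_{j+1}\mathfrak{s}etminus T_{j}$, \eqref{e:defYtilde} gives
\begin{equ}
B(\mathfrak{s}z)-N_\gamma(\mathfrak{s}z)=\bigl(W-P_\gamma\bigr)(\phi_{j+1}(\mathfrak{s}z))-\bigl(W-P_\gamma\bigr)(\phi_{j+1}(\mathfrak{b}_{j+1}))+\bigl(B(\mathfrak{b}_{j+1})-N_\gamma(\mathfrak{b}_{j+1})\bigr)\,,
\end{equ}
and, on the event in~\eqref{e:dlambda}, each of the first two terms is bounded by $d_\gamma$ in absolute value, while the last
one is bounded by $\delta_j$ since $\mathfrak{b}_{j+1}\in T_j$.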
We now remark that, for any integer $k > 0$, we can guarantee that
\begin{equ}[e:SmoothNonSmooth]
\mathbb{P}\Big(\mathfrak{s}up_{\mathfrak{s}z\in T}|N_\gamma(\mathfrak{s}z)-N^a_\gamma(\mathfrak{s}z)|\ge k\gamma^{-\frac{1}{2}}\Big)
\le L N (2\gamma)^{p+k(1-p)}\;,
\end{equ}
so that, choosing $k$ sufficiently large so that $k(p-1) > p$ and then choosing $\gamma$ sufficiently
large, one has
\begin{equ}[e:CouplingSup]
\mathbb{P}\Big(\mathfrak{s}up_{\mathfrak{s}z\in T}|B(\mathfrak{s}z)-N^a_\gamma(\mathfrak{s}z)| > 2N d_\gamma + k\gamma^{-\frac{1}{2}}\Big) \le 2\eps\;.
\end{equ}
The claim now follows at once by combining this with Lemma~\ref{l:PPtree}.
\end{proof}
\subsection{The Brownian Castle measure}\label{sec:BCM}
In this section, we collect the results obtained so far and show how to define
a measure, which we will refer to as the {\it Brownian Castle measure},
on the space of branching spatial trees which encodes the inner structure of the Brownian Castle.
We begin with the following proposition, which determines the existence and the continuity properties of
the Gaussian process defined via~\eqref{e:GP} on the Brownian Web Tree of Section~\ref{sec:BW}.
\begin{proposition}\label{p:GPDBW}
Let $\zeta_\mathrm{bw}^\downarrow$ and $\zeta_\mathrm{bw}^{\mathrm{per},\downarrow}$ be the
backward and backward periodic Brownian Web trees in Definition~\ref{def:DBW}. There exist
Gaussian processes $B_\mathrm{bc}$ and $B^\mathrm{per}_\mathrm{bc}$ indexed by $\mathscr{T}^\downarrow_\mathrm{bw}$, $\mathscr{T}^{\mathrm{per},\downarrow}_\mathrm{bw}$
that satisfy~\eqref{e:GP}. Moreover, they admit a version whose realisations are
locally little $\beta$-H\"older continuous for any $\beta<1/2$.
\end{proposition}
\begin{proof}
The existence part of the statement is due to the fact that $\mathscr{T}^\downarrow_\mathrm{bw}$ and $\mathscr{T}_\mathrm{bw}^{\mathrm{per},\downarrow}$
are almost surely $\mathbb{R}$-trees (see Remark~\ref{rem:GPExistence}), while, since by Proposition~\ref{p:BW}
$\zeta_\mathrm{bw}^\downarrow$ and $\zeta_\mathrm{bw}^{\mathrm{per},\downarrow}$ belong to $\mathfrak{E}_\alpha$ almost surely,
the H\"older regularity is a direct consequence of Proposition~\ref{p:Holder}.
\end{proof}
The previous proposition represents the last ingredient to define the Brownian Castle measure. This is given by
the law of the couple $\mathfrak{c}hi_\mathrm{bc}\eqdef(\zeta^\downarrow_\mathrm{bw}, B_\mathrm{bc})$ in the space of characteristic branching
spatial trees.
\begin{theorem}\label{thm:BSPT}
Let $\zeta_\mathrm{bw}^\downarrow$ and $\zeta_\mathrm{bw}^{\mathrm{per},\downarrow}$ be the backward and backward periodic Brownian Web trees
in Definition~\ref{def:DBW}, and
$B_\mathrm{bc}$ and $B^\mathrm{per}_\mathrm{bc}$ be the Gaussian processes built in Proposition~\ref{p:GPDBW}.
Then, almost surely the couple $\mathfrak{c}hi_\mathrm{bc}\eqdef(\zeta^\downarrow_\mathrm{bw}, B_\mathrm{bc})$ is a
characteristic $(\alpha,\beta)$-branching
spatial pointed $\mathbb{R}$-tree according to Definition~\ref{def:BPST}, for any $\alpha,\beta<1/2$. We call its law on
$\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$ the {\bf Brownian Castle Measure}. The latter can be written as
\begin{equation}\label{e:LawBC}
\mathbb{C}P_\mathrm{bc}(\mathrm{d}\mathfrak{c}hi)\eqdef \int\mathbb{C}Q^\mathrm{Gau}_\zeta(\mathrm{d} X)\,\mathbb{T}heta^\downarrow_\mathrm{bw}(\mathrm{d}\zeta)
\end{equation}
where
$\mathbb{C}Q^\mathrm{Gau}_\zeta(\mathrm{d} X)$ denotes the law of the Gaussian process $B_\mathrm{bc}$ on $\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$.
Analogously, for the same range of the parameters $\alpha,\beta$,
$\mathfrak{c}hi^\mathrm{per}_\mathrm{bc}\eqdef(\zeta^{\mathrm{per},\downarrow}_\mathrm{bw}, B^\mathrm{per}_\mathrm{bc})$ almost surely belongs to $\mathbb{C}^{\alpha,\beta}_{\mathrm{bsp}, \mathrm{per}}$
and we define its law on it, $\mathbb{C}P^\mathrm{per}_\mathrm{bc}(\mathrm{d}\mathfrak{c}hi)$, as in~\eqref{e:LawBC}.
\end{theorem}
\begin{proof}
That $\mathfrak{c}hi_\mathrm{bc}$ almost surely belongs to $\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$ is a direct consequence of Theorem~\ref{thm:BW} and
Proposition~\ref{p:GPDBW}. The measurability properties needed to define~\eqref{e:LawBC} follow from
Proposition~\ref{p:MeasG}.
\end{proof}
\section{The Brownian Castle}\label{s:bc}
The aim of this section is to rigorously define the ``nice'' version of the Brownian Castle (and its periodic
version, see Remark~\ref{rem:DistPBC} for its definition) sketched in~\eqref{e:FormalBC},
and prove Theorem~\ref{thm:main} along with other properties.
\begin{remark}\label{rem:DistPBC}
The periodic Brownian Castle $h^\mathrm{per}_\mathrm{bc}$ is defined similarly to $h_\mathrm{bc}$.
We require it to start from a periodic c\`adl\`ag function in $D(\mathbb{T},\mathbb{R})$ and its finite-dimensional
distributions to be characterised as in Definition~\ref{def:bcFD}, but
$z_1,\dots,z_n\in(0,+\infty)\times\mathbb{T}$ and
the coalescing backward Brownian motions $B_k$'s are periodic.
\end{remark}
\subsection{Pathwise properties of the Brownian Castle}\label{sec:BC}
Let $\alpha,\beta<\frac{1}{2}$, let $\mathfrak{c}hi=(\zeta,X)$, with $\zeta=(\mathscr{T},\ast,d,M)$, be a (periodic) $(\alpha,\beta)$-branching
characteristic spatial $\mathbb{R}$-tree according to Definition~\ref{def:BPST}
satisfying~\ref{i:TreeCond} (see Definition~\ref{def:TreeCond}),
and let $\rho$ be its radial map as in~\eqref{e:RadMap}.
We introduce the following maps
\begin{equation}\label{e:SBC}
\mathfrak{h}_\mathfrak{c}hi^\mathfrak{s}t(z)\eqdef X(\mathfrak{T}(z))\qquad z\in\mathbb{R}^2
\end{equation}
and, for $\mathfrak{h}_0\in D(\mathbb{R},\mathbb{R})$ (or in $D(\mathbb{T},\mathbb{R})$),
\begin{equation}\label{e:BC}
\mathfrak{h}_\mathfrak{c}hi^{\mathfrak{h}_0}(z)\eqdef \mathfrak{h}_0(M_x(\rho(\mathfrak{T}(z),0)))+ \mathfrak{h}_\mathfrak{c}hi^\mathfrak{s}t(z)- X(\rho(\mathfrak{T}(z),0))\,
\end{equation}
for $z=(t,x)\in\mathbb{R}_+\times\mathbb{R}$ (resp. $\mathbb{R}_+\times\mathbb{T}$).
In words, $\mathfrak{h}_\mathfrak{c}hi^{\mathfrak{h}_0}(z)$ is obtained by following the tree point $\mathfrak{T}(z)$ radially down to time $0$,
evaluating the initial condition $\mathfrak{h}_0$ at the spatial location of the resulting point, and adding the increment of $X$
between that point and $\mathfrak{T}(z)$.
We are now ready for the following theorem and the consequent definition, which identify the version of the
Brownian Castle we will be using throughout the rest of the paper.
\begin{theorem}\label{thm:BC}
Let $\mathfrak{c}hi_\mathrm{bc}$ be the $\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$-valued random variable introduced in Theorem~\ref{thm:BSPT} and
$\mathcal{P}_\mathrm{bc}$ be its law given by~\eqref{e:LawBC}.
Then, for every c\`adl\`ag function $\mathfrak{h}_0\in D(\mathbb{R},\mathbb{R})$, the map $\mathfrak{h}_{\mathfrak{c}hi_\mathrm{bc}}^{\mathfrak{h}_0}$ in~\eqref{e:BC}
is $\mathcal{P}_\mathrm{bc}$-almost surely well-defined and is a version of the
Brownian Castle $h_\mathrm{bc}$, i.e. it starts at $\mathfrak{h}_0$ at time $0$ and its finite-dimensional distributions
are as in Definition~\ref{def:bcFD}.
In the periodic setting, the same statement holds, i.e. for every periodic c\`adl\`ag function $\mathfrak{h}_0\in D(\mathbb{T},\mathbb{R})$
$\mathfrak{h}_{\mathfrak{c}hi^\mathrm{per}_\mathrm{bc}}^{\mathfrak{h}_0}$ is $\mathcal{P}^\mathrm{per}_\mathrm{bc}$-almost surely well-defined, starts at $\mathfrak{h}_0$ at time $0$ and
has the same finite-dimensional distributions as $h_\mathrm{bc}^\mathrm{per}$ in Remark~\ref{rem:DistPBC}.
\end{theorem}
\begin{proof}
The proof is a direct consequence of our construction in Sections~\ref{sec:BW} and~\ref{sec:BCM}
and follows by Proposition~\ref{p:BW}, Lemma~\ref{l:Right-most} and Theorems~\ref{thm:BW} and~\ref{thm:BSPT}.
\end{proof}
\begin{definition}\label{def:BC}
We define the {\bf stationary (periodic) Brownian Castle}, $\mathfrak{h}^\mathfrak{s}t_\mathrm{bc}$ (resp. $\mathfrak{h}^{\mathrm{per},\mathfrak{s}t}_\mathrm{bc}$), as the field
$\mathfrak{h}^\mathfrak{s}t_{\mathfrak{c}hi_\mathrm{bc}}$ on $\mathbb{R}^2$ (resp. $\mathfrak{h}^\mathfrak{s}t_{\mathfrak{c}hi^\mathrm{per}_\mathrm{bc}}$ on $\mathbb{R}\times\mathbb{T}$) given by~\eqref{e:SBC},
while, for $\mathfrak{h}_0\in D(\mathbb{R},\mathbb{R})$ (resp. $\mathfrak{h}_0\in D(\mathbb{T},\mathbb{R})$),
we define the {\bf (periodic) Brownian Castle starting at $\mathfrak{h}_0$}, $\mathfrak{h}_\mathrm{bc}$ (resp. $\mathfrak{h}^\mathrm{per}_\mathrm{bc}$), as the map
$\mathfrak{h}^{\mathfrak{h}_0}_{\mathfrak{c}hi_\mathrm{bc}}$ (resp. $\mathfrak{h}^{\mathfrak{h}_0}_{\mathfrak{c}hi^\mathrm{per}_\mathrm{bc}}$) in~\eqref{e:BC}.
\end{definition}
\begin{remark}\label{rem:Stationary}
Since we require the Gaussian process $B_\mathrm{bc}$ to start from $0$ at $\ast$, $\mathfrak{h}^\mathfrak{s}t_\mathrm{bc}$ is not, strictly speaking,
stationary but its increments are. As a consequence, writing $\tilde\mathfrak{h}^\mathfrak{s}t_\mathrm{bc}$ for the projection of $\mathfrak{h}^\mathfrak{s}t_\mathrm{bc}$
onto a space of functions in which two elements are identified if they differ by a fixed constant, we see that
$\tilde\mathfrak{h}^\mathfrak{s}t_\mathrm{bc}$ is truly stationary in time.
\end{remark}
The previous theorem guarantees that, thanks to $\mathfrak{c}hi_\mathrm{bc}$, it is possible to provide a construction of the Brownian Castle
which highlights its inner structure. We will now see how to exploit this construction in order to prove certain
continuity properties that the Brownian Castle $\mathfrak{h}_\mathrm{bc}$ and its periodic counterpart $\mathfrak{h}^\mathrm{per}_\mathrm{bc}$ enjoy.
\begin{proposition}\label{p:BCcadlag}
$\mathbb{C}P_\mathrm{bc}$-almost surely, for every initial condition $\mathfrak{h}_0\in D(\mathbb{R},\mathbb{R})$, the map $t\mapsto\mathfrak{h}_\mathrm{bc}(t,\mathfrak{c}dot)$
takes values in $D(\mathbb{R},\mathbb{R})$ and is right-continuous, i.e. for every $t\in\mathbb{R}_+$
$\lim_{s\downarrow t} d_\mathrm{Sk}(\mathfrak{h}_\mathrm{bc}(s,\mathfrak{c}dot),\mathfrak{h}_\mathrm{bc}(t,\mathfrak{c}dot))=0$. Moreover, $\mathbb{C}P_\mathrm{bc}$-almost surely, for every $t\in\mathbb{R}_+$ such that
there is no $x\in\mathbb{R}$ for which $(t,x)\in S^\downarrow_{(0,3)}$, $t\mapsto \mathfrak{h}_\mathrm{bc}(t,\mathfrak{c}dot)$ is continuous at $t$, i.e.
$\lim_{s\to t} d_\mathrm{Sk}(\mathfrak{h}_\mathrm{bc}(s,\mathfrak{c}dot),\mathfrak{h}_\mathrm{bc}(t,\mathfrak{c}dot))=0$.
The same holds in the periodic setting $\mathcal{P}^\mathrm{per}_\mathrm{bc}$-almost surely.
\end{proposition}
\begin{proof}
The definition of $\mathfrak{h}_\mathrm{bc}$, together with Proposition~\ref{p:ContT} and the continuity of $B_\mathrm{bc}$
immediately imply that almost surely for every $t\in\mathbb{R}$,
$\mathbb{R}\ni x\mapsto \mathfrak{h}_\mathrm{bc}(t,x)$ is c\`adl\`ag and therefore belongs to $D(\mathbb{R},\mathbb{R})$.
In order to prove the second part of the statement, fix $t>0$ and let $s> t$.
By definition of the Skorokhod distance, it suffices to exhibit a $\lambda_s\in\Lambda$ such that
$\gamma(\lambda_s)<\eps$ for $s$ sufficiently close to $t$ and $\mathfrak{s}up_{x\in[-R,R]}|\mathfrak{h}_\mathrm{bc}(s,\lambda_s(x))-\mathfrak{h}_\mathrm{bc}(t,x)|<\eps$ for $R$
large enough.
Since $\mathfrak{h}_\mathrm{bc}(t,\mathfrak{c}dot)\in D(\mathbb{R},\mathbb{R})$,~\cite[Lemma 1.12.3]{Bil99} implies that
there exist $-R=x_1<\dots<x_n=R$ such that for all $i=1,\dots,n-1$
\begin{equation}\label{e:PointsD}
\mathfrak{s}up_{x,y\in[x_i,x_{i+1})}|\mathfrak{h}_\mathrm{bc}(t,x)-\mathfrak{h}_\mathrm{bc}(t,y)|<\eps/2\,.
\end{equation}
We assume, without loss of generality, that, for every $i$, the points $\rho^\downarrow(\mathfrak{T}_\mathrm{bw}(t,x),0)$, $x\in[x_i,x_{i+1})$, all coincide.
Indeed, if this is not the case, it suffices to add a finite number of points $x_i$ and~\eqref{e:PointsD} would still hold.
Now, for each of the $z_i=(x_i,t)$, we consider $\mathfrak{s}z_\mathrm{l}^i\in\mathscr{T}^\uparrow_\mathrm{bw}$, i.e. the left-most point
(see Remark~\ref{rem:Right-mostUA}) in the preimage of $z_i$ for the forward Brownian Web tree (which,
by Theorem~\ref{thm:DBW} is deterministically fixed by $\zeta^\downarrow_\mathrm{bw}$).
Now, let $\kappa>0$ be small enough so that for $s\in(t,t+\kappa)$
\begin{equ}
M^\uparrow_{\mathrm{bw},x}(\rho^\uparrow(\mathfrak{s}z^1_\mathrm{l},s))<\dots<M^\uparrow_{\mathrm{bw},x}(\rho^\uparrow(\mathfrak{s}z^n_\mathrm{l},s))\,,
\end{equ}
set
\begin{equ}
\lambda_s(x_i)\eqdef M^\uparrow_{\mathrm{bw},x}(\rho^\uparrow(\mathfrak{s}z^i_\mathrm{l},s))
\end{equ}
and define $\lambda_s(x)$ for $x\neq x_i$ by linear interpolation.
Clearly, $\gamma(\lambda_s)$ converges to $0$ as $s\downarrow t$,
so that we can choose $\tilde s$ sufficiently close to $t$ for which $\gamma(\lambda_s)<\eps$ for all $s\in(t,\tilde s)$.
Now, by the non-crossing property (see point~\ref{i:Cross} in Theorem~\ref{thm:DBW}),
$y\eqdef M_{\mathrm{bw},x}^\downarrow(\rho^\downarrow(\mathfrak{T}_\mathrm{bw}(\lambda_s(x),s),t))\in[x_i,x_{i+1})$ for $s\in(t,\tilde s)$ and $x\in[x_i,x_{i+1})$
and clearly $d_\mathrm{bw}^\downarrow(\mathfrak{T}_\mathrm{bw}(\lambda_s(x),s), \rho^\downarrow(\mathfrak{T}_\mathrm{bw}(\lambda_s(x),s),t))= s-t$.
Recall that $B_\mathrm{bc}$ is locally $\beta$-H\"older continuous so that
upon taking $\bar s\eqdef \tilde s\wedge(t+\eps^{1/\beta}/2)$, we obtain
\begin{equs}
|\mathfrak{h}_\mathrm{bc}(s,\lambda_s(x))-\mathfrak{h}_\mathrm{bc}(t,x)|&\leq |\mathfrak{h}_\mathrm{bc}(s,\lambda_s(x))-\mathfrak{h}_\mathrm{bc}(t, y)|+|\mathfrak{h}_\mathrm{bc}(t,y)-\mathfrak{h}_\mathrm{bc}(t,x)|\\
&<|B_\mathrm{bc}(\mathfrak{T}_\mathrm{bw}(s,\lambda_s(x)))-B_\mathrm{bc}(\mathfrak{T}_\mathrm{bw}(t,y))|+\frac{\eps}{2}<\eps
\end{equs}
for all $x\in[-R,R)$ and $s\leq\bar s$, and from this the result follows.
It remains to prove the last part of the statement.
Let $t\in\mathbb{R}_+$ be such that $(\{t\}\times\mathbb{R})\mathfrak{c}ap S^\downarrow_{(0,3)}=\emptyset$ and let $\eps>0$.
We now consider a finite subset of
$\{t-\eps^{1/\beta}\}\times\mathbb{R}$, $\tilde \mathbb{X}i_{[-R,R]}^\downarrow$
(which is the image via $M^\downarrow_\mathrm{bw}$ of $\mathbb{X}i_R$ in~\eqref{e:CPS}), given by
\begin{equ}[e:Xitilde]
\tilde\mathbb{X}i^\downarrow_{[-R,R]}(t,t-\eps^{1/\beta}) \eqdef \{M^\downarrow_{\mathrm{bw},x}(\rho^\downarrow(\mathfrak{s}z,t-\eps^{1/\beta}))
\,:\, M^\downarrow_{\mathrm{bw}}(\mathfrak{s}z) \in\{t\}\times[-R,R]\}\,.
\end{equ}
Order the elements in the previous set in increasing order, i.e. $\tilde\mathbb{X}i^\downarrow_{[-R,R]}(t,t-\eps^{1/\beta})=\{x_i\,:\,i=1,\dots, N\}$
and $x_1\eqdef \min \tilde\mathbb{X}i^\downarrow_{[-R,R]}(t,t-\eps^{1/\beta})$. Now, for any $x_i\in\tilde\mathbb{X}i^\downarrow_{[-R,R]}(t,t-\eps^{1/\beta})$, let
\begin{equs}[e:yis]
y_i&\eqdef\inf\{y\in\mathbb{R}\,:\,\rho^\downarrow (\mathfrak{T}_\mathrm{bw}(t,y),t-\eps^{1/\beta})=x_i\}\;,\qquad i=1,\dots, N\text{ and }\\
y_{N+1}&\eqdef\mathfrak{s}up\{y\in\mathbb{R}\,:\,\rho^\downarrow (\mathfrak{T}_\mathrm{bw}(t,y),t-\eps^{1/\beta})=x_N\}
\end{equs}
then, since by Theorem~\ref{thm:DBW}~\ref{i:Cross} forward and backward paths do not cross, we know that
$\{y_i\}$ coincides with $\tilde \mathbb{X}i^\uparrow_{[x_1,x_N]}(t-\eps^{1/\beta},t)$ (defined as $\tilde\mathbb{X}i^\downarrow$ but with all arrows reversed).
By duality $S^\downarrow_{(0,3)}=S^\uparrow_{(2,1)}$, hence $\{(y_i,t)\}\mathfrak{c}ap S^\uparrow_{(2,1)}=\emptyset$. Therefore, there exists
a time $\tilde t\in(t-\eps^{1/\beta},t)$ such that no pair of forward paths started before $t-\eps^{1/\beta}$
and passing through $[x_1,x_N]$ at time $t-\eps^{1/\beta}$, coalesces at a time $s\in(\tilde t,t]$. In other words,
the cardinality of $\tilde \mathbb{X}i^\uparrow_{[x_1,x_N]}(t-\eps^{1/\beta},t)$ coincides with that of $\tilde \mathbb{X}i^\uparrow_{[x_1,x_N]}(t-\eps^{1/\beta},s)$
for any $s\in (\tilde t,t]$.
For $i\leq N$, let $\mathfrak{s}z_i$ be the unique point in $\mathscr{T}^\uparrow_\mathrm{bw}$
such that for all $\mathfrak{s}z\in\mathscr{T}^\uparrow_\mathrm{bw}$ for which $M^\uparrow_{\mathrm{bw}}(\mathfrak{s}z)\in\{t-\eps^{1/\beta}\}\times[x_i,x_{i+1}]$,
$\rho^\uparrow(\mathfrak{s}z, \tilde t)=\mathfrak{s}z_i$. We define the map $\lambda_{s}$, $s\in(\tilde t, t)$, as
\begin{equ}
\lambda_s(x_i)\eqdef M^\uparrow_{\mathrm{bw},x}(\mathfrak{s}z_i)
\end{equ}
and for $x\neq x_i$ we extend it by linear interpolation. Clearly, $\gamma(\lambda_s)$ converges to $0$, so that we can choose
$\tilde s>\tilde t$ sufficiently close to $t$ so that $\gamma(\lambda_s)<\eps$.
Now, notice that, by construction (and Theorem~\ref{thm:DBW}~\ref{i:Cross}),
for all $x\in[-R,R]$ and $s\in(\tilde s, t)$, $\mathfrak{T}_\mathrm{bw}(t,x)$ and $\mathfrak{T}_\mathrm{bw} (s,\lambda_s(x))$ must be such that
$d^\downarrow(\mathfrak{T}_\mathrm{bw}(t,x), \mathfrak{T}_\mathrm{bw} (s,\lambda_s(x)))<\eps^{1/\beta}$ which,
by the (local) $\beta$-H\"older continuity of $B_\mathrm{bc}$, guarantees that
\begin{equ}
|\mathfrak{h}_\mathrm{bc}(t,x)-\mathfrak{h}_\mathrm{bc}(s,\lambda_s(x))|=|B_\mathrm{bc}(\mathfrak{T}_\mathrm{bw}(t,x))-B_\mathrm{bc}(\mathfrak{T}_\mathrm{bw} (s,\lambda_s(x)))|\lesssim \eps
\end{equ}
which concludes the proof in the non-periodic setting.
The periodic case follows the same steps but, as spatial interval, one can directly take the whole of $\mathbb{T}$ instead of $[-R,R]$.
\end{proof}
\begin{remark}
By Proposition~\ref{p:ContT}, the fact that $\mathfrak{h}_\mathrm{bc}(t,\bigcdot)$ is c\`adl\`ag simply follows from its description
in terms of an element of $\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$. The fact that it is right-continuous
as a function of time, however, uses specific properties of the Brownian Castle itself and would not be true
for $\mathfrak{h}_\mathrm{bc}$ built from an arbitrary element of $\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$.
\end{remark}
In the next proposition, we show that it is possible to obtain a finer control over the fluctuations of
the Brownian Castle.
\begin{proposition}\label{p:pvar}
$\mathbb{C}P_\mathrm{bc}$-almost surely, for every $t>0$, $\mathfrak{h}_\mathrm{bc}(t,\mathfrak{c}dot)$
has finite $p$-variation on any bounded interval of $\mathbb{R}$, for every $p>1$.
\end{proposition}
\begin{proof}
Let $t,\,R>0$ and consider $\mathfrak{h}_\mathrm{bc}(t,\mathfrak{c}dot)$ restricted to the interval $[-R,R]$.
We first approximate $\mathfrak{h}_\mathrm{bc}(t,\mathfrak{c}dot)$ by piecewise constant functions.
For any $n \ge 0$, let $\tilde\mathbb{X}i(t,t-2^{-n})$ be the set defined according to~\eqref{e:Xitilde}
and let $N_n\eqdef\eta_R(t,2^{-n})$ be its cardinality, which we recall
satisfies the bound $N_n \lesssim 2^{\varsigma n}$ (for some random proportionality constant
independent of $n$) given in~\eqref{e:CPSbound}.
We order the points of $\tilde\mathbb{X}i(t,t-2^{-n})$ as in the proof of Proposition~\ref{p:BCcadlag},
denote them by $x_1^{(n)}<\dots<x_{N_n}^{(n)}$, and set $x_0^{(n)}\eqdef -R$ and $x_{N_n+1}^{(n)}\eqdef R$.
We define the piecewise constant function $\mathfrak{h}^n_\mathrm{bc}(t,\mathfrak{c}dot)$ by
\begin{equ}
\mathfrak{h}^n_\mathrm{bc}(t,x)=\mathfrak{h}_\mathrm{bc}(t,x_i^{(n)})\qquad\text{for $x\in[x_i^{(n)},x_{i+1}^{(n)})$.}
\end{equ}
We then note that, for any $x\in[-R,R]$ we have the identity
\begin{equ}
\mathfrak{h}_\mathrm{bc}(t,x)-\mathfrak{h}^0_\mathrm{bc}(t,x)=\mathfrak{s}um_{n\ge 0} \big(\mathfrak{h}^{n+1}_\mathrm{bc}(t,x)-\mathfrak{h}^{n}_\mathrm{bc}(t,x)\big)
\end{equ}
so that in particular, for any $p\ge 1$,
\begin{equ}[e:Trpvar]
\|\mathfrak{h}_\mathrm{bc}(t,\mathfrak{c}dot)-\mathfrak{h}^0_\mathrm{bc}(t,\mathfrak{c}dot)\|_{p\mathrm{\mhyphen var}}\leq \mathfrak{s}um_{n\ge 0} \|\mathfrak{h}^{n+1}_\mathrm{bc}(t,\mathfrak{c}dot)-\mathfrak{h}^{n}_\mathrm{bc}(t,\mathfrak{c}dot)\|_{p\mathrm{\mhyphen var}}\;,
\end{equ}
the $p$-variation norm $\|\mathfrak{c}dot\|_{p\mathrm{\mhyphen var}}$ being defined as in~\eqref{e:pvar}.
Thanks to the $\beta$-H\"older continuity of $B_\mathrm{bc}$, we then have
\begin{equ}
\|\mathfrak{h}^{n+1}_\mathrm{bc}(t,\mathfrak{c}dot)-\mathfrak{h}^{n}_\mathrm{bc}(t,\mathfrak{c}dot)\|_{p\mathrm{\mhyphen var}}\lesssim 2^{-\beta n} N_{n+1}^{1/p}\;,
\end{equ}
since $\mathfrak{h}^{n+1}_\mathrm{bc}(t,\mathfrak{c}dot)-\mathfrak{h}^{n}_\mathrm{bc}(t,\mathfrak{c}dot)$ is a piecewise constant
function with sup-norm over $[-R,R]$ bounded by $C 2^{-\beta n}$ (for some random $C$
independent of $n$) and at most $N_{n+1}$ jumps.
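Indeed, if $f\colon[-R,R]\to\mathbb{R}$ is piecewise constant with at most $K$ jumps and $\|f\|_\infty\le M$, then any partition of
$[-R,R]$ produces at most $K$ non-zero increments of $f$, each bounded by $2M$ in absolute value, so that its $p$-variation is
at most $K(2M)^p$, i.e. $\|f\|_{p\mathrm{\mhyphen var}}\lesssim MK^{1/p}$.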
Inserting this bound into~\eqref{e:Trpvar} and exploiting the bound on $N_{n+1}$
provided by Lemma~\ref{l:CPScard}, we obtain
\begin{equ}
\|\mathfrak{h}_\mathrm{bc}(t,\mathfrak{c}dot)-\mathfrak{h}^0_\mathrm{bc}(t,\mathfrak{c}dot)\|_{p\mathrm{\mhyphen var}}\lesssim \mathfrak{s}um_{n\ge 0} 2^{(\varsigma/p - \beta)n}\;,
\end{equ}
which is finite for any $p>\varsigma/\beta$. Since both $\varsigma$ and $\beta$ can be chosen
arbitrarily close to $1/2$, the statement follows.
\end{proof}
Combining the (H\"older) continuity of the map $B_\mathrm{bc}$
(or of $B^\mathrm{per}_\mathrm{bc}$) with Proposition~\ref{p:ContT}, we conclude
that the set of discontinuities of $\mathfrak{h}_\mathrm{bc}$ is contained in $S^\downarrow_{0,2} \mathfrak{c}up S^\downarrow_{1,2} \mathfrak{c}up S^\downarrow_{0,3}$ (see Definition~\ref{def:Type}) or, by duality (see Theorem~\ref{thm:Types}),
in the image through $M^\uparrow_\mathrm{bw}$ of the {\it skeleton} of the forward Brownian Web
$\mathscr{T}^{o,\uparrow}_\mathrm{bw}$\footnote{In~\cite{CHbwt}, it was shown that the skeleton is given by $\mathscr{T}^{\uparrow}_\infty(\mathcal{D})$ (resp. $\mathscr{T}^{\mathrm{per},\uparrow}_\infty(\mathcal{D})$), $\mathcal{D}$ being any countable dense subset of $\mathbb{R}^2$
(resp. $\mathbb{T}\times\mathbb{R}$), but with the endpoints removed.}
given by~\eqref{def:Skeleton}, and the same holds for $\mathfrak{h}^\mathrm{per}_\mathrm{bc}$.
This means that we can identify specific events in the spatio-temporal evolution of the (periodic) Brownian Castle with
special points of the (periodic) Brownian Web.
Let us define the {\it basin of attraction} for the shock at $z=(t,x)\in\mathbb{R}_+\times\mathbb{R}$ as
\begin{equation}\label{def:boa}
\begin{split}
A_z\eqdef\{z'=(t',x')&\in\mathbb{R}^2\,:\,t'<t\text{ and there exists}\\
&\text{$\mathfrak{s}z'\in\mathscr{T}^\uparrow_\mathrm{bw}$ s.t. $M^\uparrow_\mathrm{bw}(\mathfrak{s}z')=z'$ and $M^\uparrow_\mathrm{bw}(\rho^\uparrow(\mathfrak{s}z',t))=z$}\}
\end{split}
\end{equation}
and the {\it age} of the shock as
\begin{equation}\label{e:age}
a_z \eqdef t - \mathfrak{s}up\{t' < t\,:\, (\{t'\}\times \mathbb{R}) \mathfrak{c}ap A_z = \emptyset\}\,.
\end{equation}
We define, {\it mutatis mutandis}, $A_z^\mathrm{per}$ and $a_z^\mathrm{per}$ as the basin of attraction of a shock at $z\in\mathbb{R}_+\times\mathbb{T}$ and its age
in the periodic setting.
In the following proposition, we show properties of the age of a point $z$ and characterise its basin of attraction.
\begin{proposition}\label{p:BoA}
\begin{enumerate}[noitemsep]
\item In both the periodic and non-periodic case, the set of points with strictly positive age coincides with the union of
points of type $(i,j)$ for $j>1$.
\item In the non-periodic case, almost surely, for every $z$, $a_z<\infty$ and there exists a unique
$z' = (t',x') \in A_z$ that realises the
supremum in~\eqref{e:age}, i.e.\ such that $a_z = t-t'$. In the periodic case, for every $t\in\mathbb{R}$, the
previous holds for all $z\in\{t\}\times\mathbb{T}$ except for exactly one value $z_\mathrm{per}=(t,x_\mathrm{per})$, which is such that $a_{z_\mathrm{per}}=\infty$.
\item In the non-periodic case, if $z=(t,x)$ is such that $a_z > 0$, then the unique $z' = (t',x') \in A_z$
(determined in the previous point) such that $a_z=t-t'$, belongs to $S^\uparrow_{0,3}$. Moreover,
the left-most and right-most points at $z$, $\mathfrak{s}z_\mathrm{l},\,\mathfrak{s}z_\mathrm{r}\in (M^\downarrow_\mathrm{bw})^{-1}(z)$ are such that
$\mathfrak{s}z_\mathrm{l} \neq \mathfrak{s}z_\mathrm{r}$ and
\begin{equation}\label{e:intBoA}
\mathring A_z = \bigcup_{s<t}\, \{s\}\times\bigl(M_{\mathrm{bw},x}^\downarrow(\rho^\downarrow(\mathfrak{s}z_\mathrm{l},s)),M_{\mathrm{bw},x}^\downarrow(\rho^\downarrow(\mathfrak{s}z_\mathrm{r},s))\bigr)\;,
\end{equation}
where $\mathring A_z$ denotes the interior of $A_z$ and $A_z$ is compact.
\end{enumerate}
\end{proposition}
\begin{proof}
Point 1. is an immediate consequence of Theorem~\ref{thm:Types}. Indeed, if $z$ is such that $a_z>0$,
then there exists a point in $(M^\uparrow_\mathrm{bw})^{-1}(z)$ whose degree is strictly greater than~$1$,
which implies that $|(M^\downarrow_\mathrm{bw})^{-1}(z)|\geq 2$ so that $z$ belongs to the union of $S^\downarrow_{i,j}$ for $j>1$.
Vice versa, if $z$ belongs to one of the $S^\downarrow_{i,j}$ for $j>1$ then, by duality,
it belongs to one of the $S^\uparrow_{i,j}$ for $(i,j)=(1,1),\,(2,1)$ or $(1,2)$,
so that there exists at least one $\mathfrak{s}z'\in\mathscr{T}^\uparrow_\mathrm{bw}$ such that $M^\uparrow_{\mathrm{bw},t}(\mathfrak{s}z')<t$
and $M^\uparrow_\mathrm{bw}(\rho^\uparrow(\mathfrak{s}z',t))=z$. Hence, $a_z\geq t-M^\uparrow_{\mathrm{bw},t}(\mathfrak{s}z')>0$.
For point 2., we first show that if $a_z$ is finite then the point realising the supremum is unique,
the proof being the same in the periodic
and non-periodic setting. Assume there exist
$z'=(t',x'),\,z''=(t',x'')\in A_z$ realising the supremum in~\eqref{e:age}. Then,
by the coalescing property, every point in $\{t'\}\times[x',x'']$ (or $\{t'\}\times(\mathbb{T}\mathfrak{s}etminus[x',x''])$) belongs to $A_z$.
But, according to Theorem~\ref{thm:Types} almost surely
for every $s\in\mathbb{R}$, $S^\uparrow_{1,1}\mathfrak{c}ap \{s\}\times\mathbb{R}$ is dense in $\{s\}\times\mathbb{R}$, hence,
there is $\tilde z\in S^\uparrow_{1,1}\mathfrak{c}ap (\{t'\}\times[x',x''])$ and $\mathfrak{s}z\in\mathscr{T}^\uparrow_\mathrm{bw}$ such that $M^\uparrow_{\mathrm{bw},t}(\mathfrak{s}z)<t'$ and
$M^\uparrow_{\mathrm{bw}}(\rho^\uparrow(\mathfrak{s}z,t'))=\tilde z$. But then $a_z\geq t-M^\uparrow_{\mathrm{bw},t}(\mathfrak{s}z)>t-t'$, which is a contradiction.
Since, by~\cite[Proposition 3.21]{CHbwt}, $\mathscr{T}^\uparrow_\mathrm{bw}$ has a {\it unique} open end with unbounded rays,
for every $z$, $a_z<\infty$.
This is not true anymore for $\mathscr{T}^{\mathrm{per},\uparrow}_\mathrm{bw}$ which has exactly two open ends with unbounded rays, but
since there exists a {\it unique} bi-infinite edge connecting them (see ~\mathfrak{c}ite[Proposition 3.25]{CHbwt}), it follows that
for every $t$ there is a unique $x_\mathrm{per}\in\mathbb{T}$ such that $(t,x_\mathrm{per})$ has infinite age.
Let us now focus on 3. Let $z=(t,x)$ be such that $a_z>0$. From~2.,
there exists a unique point $z'\in\mathbb{R}^2$ and a point $\mathfrak{s}z'\in\mathscr{T}^\uparrow_\mathrm{bw}$
such that $M^\uparrow_\mathrm{bw}(\mathfrak{s}z')=z'$ and $M^\uparrow_\mathrm{bw}(\rho^\uparrow(\mathfrak{s}z',t))=z$, hence
$z\in S^\uparrow_{1,1}\mathfrak{c}up S^\uparrow_{2,1}\mathfrak{c}up S^\uparrow_{1,2}$.
Thanks to Theorem~\ref{thm:Types}, the right-most and left-most points in $(M_\mathrm{bw}^\downarrow)^{-1}(z)$,
$\mathfrak{s}z_\mathrm{r},\mathfrak{s}z_\mathrm{l}\in\mathscr{T}^\downarrow_\mathrm{bw}$ must be distinct. By Theorem~\ref{thm:DBW}~\ref{i:Cross},
forward and backward trajectories cannot cross, therefore for every $s\in(t',t)$,
$M^\downarrow_\mathrm{bw}(\rho^\downarrow(\mathfrak{s}z_\mathrm{l},s))<M^\uparrow_\mathrm{bw}(\rho^\uparrow(\mathfrak{s}z',s))<M^\downarrow_\mathrm{bw}(\rho^\downarrow(\mathfrak{s}z_\mathrm{r},s))$. In particular,
the backward paths starting from $z$ cannot coalesce before $t'$. They cannot coalesce after $t'$ either since,
if this
were to be the case, then for the same reasons as above the path in the forward web starting from any point in
$\{s\}\times(M^\downarrow_\mathrm{bw}(\rho^\downarrow(\mathfrak{s}z_\mathrm{l},s)), M^\downarrow_\mathrm{bw}(\rho^\downarrow(\mathfrak{s}z_\mathrm{r},s)))$, $s<t'$, would be contained
in $A_z$, contradicting point~2. It follows that the point at which the two backward paths coalesce is exactly $z'$,
which implies that $z'\in S^\downarrow_{2,1}=S^\uparrow_{0,3}$. Moreover the previous argument also shows that~\eqref{e:intBoA}
holds (with $\mathfrak{s}z_1=\mathfrak{s}z_\mathrm{r}$ and $\mathfrak{s}z_2=\mathfrak{s}z_\mathrm{l}$).
\end{proof}
\begin{remark}
Proposition~\ref{p:BoA} and its proof underline one of the main visible differences between the Brownian Castle and its
periodic counterpart. Indeed, only $\mathscr{T}^{\mathrm{per},\uparrow}_\mathrm{bw}$ possesses a bi-infinite edge $\beta^\uparrow$,
which implies that $\mathfrak{h}^\mathrm{per}_\mathrm{bc}$ exhibits a
``master shock'' starting back at $-\infty$ and running along $M^{\mathrm{per},\uparrow}_\mathrm{bw}(\beta^\uparrow(\mathfrak{c}dot))$.
Indeed, as we have seen above, for every $s\in\mathbb{R}$
there exist two backward paths starting in or passing through $M^{\mathrm{per},\uparrow}_\mathrm{bw}(\beta^\uparrow(s))$ that before meeting need to
traverse the whole torus. On the other hand, all the discontinuities of $\mathfrak{h}_\mathrm{bc}$ have a finite origin that can be tracked with
the methods shown in Proposition~\ref{p:BoA}.
\end{remark}
The following proposition collects the most important connections between certain events we witness on the Brownian Castle
and special points in the Web.
\begin{proposition}
\begin{enumerate}[noitemsep]
\item Shocks for $\mathfrak{h}_\mathrm{bc}$ and $\mathfrak{h}_\mathrm{bc}^\mathrm{per}$ correspond to the trajectories of the forward and periodic forward
Brownian Web trees respectively, i.e. they are points of type $(1,1)$ or $(1,2)$ for $\zeta^\uparrow_\mathrm{bw}$ (resp. $\zeta^{\mathrm{per},\uparrow}_\mathrm{bw}$).
\item If two shocks merge at $z$, then $z$ is of type $(0,3)$ for the backward (periodic) Brownian Web.
\end{enumerate}
\end{proposition}
\begin{proof}
The result follows from the fact that, by construction, the paths of the backward Brownian Web tree represent the backward
characteristics of the Brownian Castle, and by duality.
\end{proof}
The previous proposition provides the reason why there is no chance for the Brownian Castle $\mathfrak{h}_\mathrm{bc}(s,\mathfrak{c}dot)$
to admit a limit as $s\uparrow t$ for all $t\in\mathbb{R}_+$, in the Skorokhod topology
(or any of the $M_1$, $J_2$ and $M_2$-topologies on this space, see~\mathfrak{c}ite[Section 12]{Whitt}),
{\it independently} of the specifics of the proof of Proposition~\ref{p:BCcadlag} or our construction.
Indeed the Skorokhod topology allows for discontinuities to evolve {\it continuously} and to merge only if their difference
{\it continuously converges to $0$}. This is not necessarily the case here.
Indeed, if $z=(t,x)\in S^\uparrow_{2,1}$, then there are two paths in the forward Web that coalesce at $z$,
i.e. two discontinuities merging there. According to Proposition~\ref{p:ContT}, for $s$ sufficiently close to $t$
these discontinuities evolve continuously up to the time at which they merge but there is no reason for their difference
to vanish. The pointwise limit of $\mathfrak{h}_\mathrm{bc}(s,\mathfrak{c}dot)$ as $s\uparrow t$ would then need to encode three different values
at the spatial point $x$, but the resulting object is not an element of $D(\mathbb{R},\mathbb{R})$. Furthermore, according to Theorem~\ref{thm:Types}
$S^\uparrow_{2,1}$ is a countable yet {\it dense} subset of $\mathbb{R}^2$ so that points at which c\`adl\`ag continuity
fails are very common!
In the following proposition, whose proof is based on the above heuristics, we show that for any choice
of initial condition, there is no version of the Brownian Castle $\mathfrak{h}_\mathrm{bc}$
(defined by simply specifying its finite dimensional distributions) which is c\`adl\`ag in time and space.
\begin{proposition}
Given any initial condition $\mathfrak{h}_0\in D(\mathbb{R},\mathbb{R})$ and $T >0$, the Brownian Castle starting at $\mathfrak{h}_0$
does not admit a
version in $D([0,T],D(\mathbb{R},\mathbb{R}))$. The same is true for the periodic Brownian Castle.
\end{proposition}
\begin{proof}
Since a right-continuous function with values in $D(\mathbb{R},\mathbb{R})$ is uniquely determined by its values at space-time
points with rational coordinates (for example), it suffices to show that there exists a (random) time
for which $\mathfrak{h}_\mathrm{bc}$ admits
no left limit in $D(\mathbb{R},\mathbb{R})$. For this, it suffices to find a point $(t,x)$ and three
sequences $(t_k,x^{(i)}_k)_{k \ge 0}$ (here $i \in \{1,2,3\}$) with $t_k \uparrow t$,
$x^{(i)}_k \to x$, and $\lim_{k \to \infty} \mathfrak{h}_\mathrm{bc}(t_k,x^{(i)}_k) = L_i$ with all three limits
$L_i$ different from each other.
Now, notice that, almost surely, one can find two elements $\mathfrak{s}x_0, \mathfrak{s}x_1 \in \mathscr{T}_\mathrm{bw}^\uparrow$
with $M_\mathrm{bw}^\uparrow(\mathfrak{s}x_i) = (0, x_i)$ and $x_0$, $x_1$ in $[-1,1]$
such that, for the forward Brownian Web tree, one has $\rho^\uparrow(\mathfrak{s}x_0,T) = \rho^\uparrow(\mathfrak{s}x_1,T)$.
Writing $t = \inf\{s > 0\,:\, \rho^\uparrow(\mathfrak{s}x_0,s) = \rho^\uparrow(\mathfrak{s}x_1,s)\}$
and $x = M_\mathrm{bw}^\uparrow(\rho^\uparrow(\mathfrak{s}x_0,t))$, we then
necessarily have $(t,x) \in S_{0,3}^\downarrow$ by duality and, furthermore, the three trajectories emanating
from $(t,x)$ in the backwards Brownian Web cannot coalesce before time $0$ by the
non-crossing property. Since further, the Gaussian process $B_\mathrm{bc}$ is locally H\"older continuous and,
with high probability, $t\geq c>0$ for some positive constant $c$, the claim then follows by taking for
$(t_k, x_k^{(i)})$, sequences accumulating at
$(t,x)$ and belonging to these three trajectories.
\end{proof}
\mathfrak{s}ubsection{The Brownian Castle as a Markov process}\label{sec:BCprocess}
We are now interested in studying the properties of the (periodic) Brownian Castle as a random interface evolving in time, i.e.\ as a
stochastic process with values in $D(\mathbb{R},\mathbb{R})$ (resp. $D(\mathbb{T},\mathbb{R})$).
To do so, we need to introduce a suitable filtration on the probability space $(\Omega,\mathcal{F},\mathcal{P}_\mathrm{bc})$
(resp.\ $(\Omega_\mathrm{per},\mathcal{F}_\mathrm{per}, \mathcal{P}^\mathrm{per}_\mathrm{bc})$) on which the Castle is defined\footnote{For example $\Omega$ can be taken to be $\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$ and $\mathcal{F}$ the Borel $\mathfrak{s}igma$-algebra induced by the metric in~\eqref{def:bspMetric}}.
From now on, we assume that all
sub-$\mathfrak{s}igma$-algebras of $\mathcal{F}$ (resp. $\mathcal{F}_\mathrm{per}$) that we consider contain all null events.
We will make use of the following construction. Given a metric space $\mathbb{C}X$, we write $\mathbb{C}X^{2c} \mathfrak{s}ubset \mathbb{C}X^2$
for the set of all pairs $(x,y)$ such that $x$ and $y$ are in the same path component of $\mathbb{C}X$.
Given $B \mathfrak{c}olon \mathbb{C}X \to \mathbb{R}$ (or into any abelian group), we then write $\delta B \mathfrak{c}olon \mathbb{C}X^{2c} \to \mathbb{R}$
for the map given by $\delta B(x,y) = B(y) - B(x)$.
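Note that $\delta B$ retains all the information about $B$ up to an additive constant on each path component: it is antisymmetric and additive, in the sense that
\begin{equ}
\delta B(x,y)=-\delta B(y,x)\;,\qquad \delta B(x,z)=\delta B(x,y)+\delta B(y,z)\;,
\end{equ}
for all $x$, $y$, $z$ lying in the same path component of $\mathbb{C}X$.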
Let $\mathfrak{c}hi=(\mathscr{T},\ast,d,M,B)\in \Omega=\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$ (or $\Omega_\mathrm{per}$) and, for $-\infty\leq s\leq t<+\infty$,
define $\mathscr{T}_{s,t}\eqdef M^{-1}((s,t]\times\mathbb{R})$ (or $\mathscr{T}_{s,t}\eqdef M^{-1}((s,t]\times\mathbb{T})$).
Let $\mathrm{eval}_{s,t}$ be the map given by
\begin{equation}\label{e:eval}
\mathrm{eval}_{s,t}(\mathfrak{c}hi)\eqdef \big(\zeta_{s,t}, \delta \big(B\mathord{\upharpoonright} \mathscr{T}_{s,t}\big)\big)
\end{equation}
where
$\zeta_{s,t}\eqdef(\mathscr{T}_{s,t},d,M\mathord{\upharpoonright} \mathscr{T}_{s,t})$. We use the notations
\begin{equ}[e:filt]
\mathcal{F}_{s,t} \eqdef \mathfrak{s}igma(\mathrm{eval}_{s,t})\;,\qquad \mathcal{F}_t\eqdef \mathfrak{s}igma(\mathrm{eval}_{-\infty,t})\;,
\end{equ}
for the $\mathfrak{s}igma$-algebras that they generate. The following property is crucial.
\begin{lemma}\label{lem:indep}
If the intervals $(s,t]$ and $(u,v]$ are disjoint, then $\mathcal{F}_{s,t}$ and $\mathcal{F}_{u,v}$ are
independent.
\end{lemma}
\begin{proof}
The fact that $\mathscr{T}_{s,t}$ and $\mathscr{T}_{u,v}$ are independent under the law of the Brownian Web tree
was shown for example in \mathfrak{c}ite[Prop.~2]{HoW} (this is for a slightly
different representation of the Brownian web, but the topological space $\mathscr{T}$ can be recovered from it in a measurable way).
It remains to note that, conditionally on $\mathscr{T}_{s,t}$ and $\mathscr{T}_{u,v}$, the joint law of $\delta \big(B\mathord{\upharpoonright} \mathscr{T}_{s,t}\big)$ and
$\delta \big(B\mathord{\upharpoonright} \mathscr{T}_{u,v}\big)$ is of product form with the two factors being $\mathscr{T}_{s,t}$ and $\mathscr{T}_{u,v}$-measurable
respectively. This follows immediately from the independence properties of Brownian increments as formulated in
Remark~\ref{rem:disjoint}.
\end{proof}
One almost immediate consequence is that both $\mathfrak{h}_\mathrm{bc}$ and $\mathfrak{h}_\mathrm{bc}^\mathrm{per}$ are time-homogeneous strong Markov processes
satisfying the Feller property.
\begin{proposition}
The (periodic) Brownian Castle $\mathfrak{h}_\mathrm{bc}$ (resp. $\mathfrak{h}_\mathrm{bc}^\mathrm{per}$) is a time-homogeneous
$D(\mathbb{R},\mathbb{R})$ (resp. $D(\mathbb{T},\mathbb{R})$)-valued Markov process on the
complete probability space $(\Omega, \mathcal{F}, \mathcal{P}_\mathrm{bc})$ (resp. $(\Omega_\mathrm{per},\mathcal{F}_\mathrm{per}, \mathcal{P}^\mathrm{per}_\mathrm{bc})$),
with respect to the filtration $\{\mathcal{F}_t\}_{t\geq0}$ introduced in~\eqref{e:filt}.
Moreover, both $\{\mathfrak{h}_\mathrm{bc}(t,\bigcdot)\}_{t\geq0}$ and $\{\mathfrak{h}^\mathrm{per}_\mathrm{bc}(t,\bigcdot)\}_{t\geq0}$ are strong Markov and Feller.
\end{proposition}
\begin{proof}
The proof works {\it mutatis mutandis} for both the periodic and non-periodic case
so we will focus on the latter.
We have already shown that $\mathcal{P}_\mathrm{bc}$-almost surely for every $\mathfrak{h}_0$ and $t\geq 0$, $\mathfrak{h}_\mathrm{bc}(t,\bigcdot)\in D(\mathbb{R},\mathbb{R})$
(see Proposition~\ref{p:BCcadlag}). Moreover, by construction, $\mathfrak{h}_\mathrm{bc}(t,\bigcdot)$ only depends on $\mathrm{eval}_{-\infty,t}(\mathfrak{c}hi_\mathrm{bc})$ so
that it is clearly $\mathcal{F}_t$-measurable.
Notice that, by definition, for every $0\leq s< t$ and $x\in\mathbb{R}$, we can write
\begin{equ}
\mathfrak{h}_\mathrm{bc}(t,x)=\mathfrak{h}_\mathrm{bc}(s,M^\downarrow_\mathrm{bw}(\rho^\downarrow(\mathfrak{T}_\mathrm{bw}(t,x),s)))+ B_\mathrm{bc}(\mathfrak{T}_\mathrm{bw}(t,x))-B_\mathrm{bc}(\rho^\downarrow(\mathfrak{T}_\mathrm{bw}(t,x),s))\;,
\end{equ}
so that $\mathfrak{h}_\mathrm{bc}(t,\bigcdot)$ is measurable with respect to $\mathfrak{s}igma(\mathfrak{h}_\mathrm{bc}(s,\bigcdot)) \vee \mathcal{F}_{s,t}$.
Since $\mathcal{F}_{s,t}$ and $\mathcal{F}_s$ are independent by Lemma~\ref{lem:indep}, the Markov property follows,
while the time homogeneity is an immediate consequence of the stationarity of $(\mathscr{T}^\downarrow_\mathrm{bw}, M^\downarrow_\mathrm{bw}, B_\mathrm{bc})$.
Stochastic continuity was already shown in Proposition~\ref{p:BCcadlag}, so, if
we show that the law of $\mathfrak{h}_\mathrm{bc}(t,\bigcdot)$ depends continuously (in the topology of weak convergence) on $\mathfrak{h}_0$,
then the Feller property holds.
By the definition of the Skorokhod topology, it is sufficient to show that, if $\{\mathfrak{h}_0^n\}_{n\in\mathbb{N}}\mathfrak{s}ubset D(\mathbb{R},\mathbb{R})$
is a sequence converging to $\mathfrak{h}_0$ in $D(\mathbb{R},\mathbb{R})$ then, for every $R > 0$, one has
$\mathfrak{s}up_{|x|\le R} |\mathfrak{h}^n_{\mathrm{bc}}(t,x) - \mathfrak{h}_{\mathrm{bc}}(t,x)| \to 0$ in probability, where
we write $\mathfrak{h}^n_{\mathrm{bc}}$ for the Brownian Castle with initial condition $\mathfrak{h}_0^n$.
Note that
\begin{equ}[e:FellerRHS]
\mathfrak{s}up_{|x| \le R}|\mathfrak{h}^n_{\mathrm{bc}}(t,x)-\mathfrak{h}_{\mathrm{bc}}(t,x)|\leq \mathfrak{s}up_{|x| \le R}|\mathfrak{h}_0^n(y(x))-\mathfrak{h}_0(y(x))|
\end{equ}
where $y(x)\eqdef M^\downarrow_\mathrm{bw}(\rho^\downarrow(\mathfrak{T}_\mathrm{bw}(t,x),0))$. With probability one, the set $\{y(x)\,:\,x\in [-R,R]\}$
is finite and has empty intersection with the set of discontinuities of $\mathfrak{h}_0$.
Hence, the right-hand side of \eqref{e:FellerRHS} converges to $0$ (almost surely and therefore also in probability)
by~\mathfrak{c}ite[Prop.~3.5.2]{EK86}.
Since the Brownian Castle almost surely admits right continuous trajectories and is Feller, the same proof as
in~\mathfrak{c}ite[Theorem III.3.1]{RevYor} guarantees that it is strong Markov (even though its state space is not locally compact).
\end{proof}
The periodic Brownian Castle $\mathfrak{h}_\mathrm{bc}^\mathrm{per}$ is not only Feller but also strong Feller, namely its
Markov semigroup maps bounded functions to continuous functions.
It will be convenient to write $\mathscr{T}[t] = M^{-1}(\{t\} \times \mathbb{T})$ for the time-$t$ ``slice''
of a spatial $\mathbb{R}$-tree.
\begin{proposition}
The periodic Brownian Castle satisfies the strong Feller property.
\end{proposition}
\begin{proof}
Let $\mathbb{P}hi\in\mathbb{C}B_b(D(\mathbb{T},\mathbb{R}))$ bounded by $1$, $\mathfrak{h}_0\in D(\mathbb{T},\mathbb{R})$ and $t>0$.
We aim to show that,
for every $\eps>0$ there exists $\delta>0$ such that whenever
$d_\mathrm{Sk}(\bar\mathfrak{h}_0,\mathfrak{h}_0)<\delta$
\begin{equ}[e:wantedSF]
| \mathbb{E}_\mathrm{bc}[\mathbb{P}hi(\mathfrak{h}^\mathrm{per}_\mathrm{bc}(t,\mathfrak{c}dot))|\mathfrak{h}_0]-\mathbb{E}_\mathrm{bc}[\mathbb{P}hi(\mathfrak{h}^\mathrm{per}_\mathrm{bc}(t,\mathfrak{c}dot))|\bar\mathfrak{h}_0]|<\eps\,.
\end{equ}
Fix $\eps>0$. Let $\nu\in(0,t)$ be sufficiently small and $\bar N$ large enough so that each of the events
\begin{equs}
A_1&\eqdef\left\{\#\{\rho^\downarrow(\mathfrak{s}z,\nu)\,:\,\mathfrak{s}z\in\mathscr{T}^{\mathrm{per},\downarrow}_\mathrm{bw}[t]\} = \#\{\rho^\downarrow(\mathfrak{s}z,0)\,:\,\mathfrak{s}z\in\mathscr{T}^{\mathrm{per},\downarrow}_\mathrm{bw}[t]\}\right\}\\
A_2&\eqdef\left\{\#\{\rho^\downarrow(\mathfrak{s}z,0)\,:\,\mathfrak{s}z\in\mathscr{T}^{\mathrm{per},\downarrow}_\mathrm{bw}[t]\} \le \bar N\right\}
\end{equs}
has probability at least $1-{\eps\over 3}$. This is certainly possible since, as $\nu$ goes to $0$ and
$\bar N$ tends to $\infty$, the probabilities of both $A_1$ and $A_2$ tend to $1$.
On $A_1\mathfrak{c}ap A_2$, let $\mathfrak{s}z_1,\dots,\mathfrak{s}z_N$ be the list of all distinct points of
$\{\rho^\downarrow(\mathfrak{s}z,\nu)\,:\,\mathfrak{s}z\in\mathscr{T}^{\mathrm{per},\downarrow}_\mathrm{bw}[t]\}$ (clearly, $N\leq\bar N$)
and set $y_i= M^{\mathrm{per},\downarrow}_{\mathrm{bw},x}(\mathfrak{s}y_i)$ where $\mathfrak{s}y_i = \rho^\downarrow(\mathfrak{s}z_i,0)$.
As before, the probability that one of the $y_i$'s is a discontinuity point for $\mathfrak{h}_0$ is $0$.
Hence, by~\mathfrak{c}ite[Prop.~3.5.2]{EK86}, for every $\hat \eps > 0$ it is possible to choose $\delta>0$ small enough so that whenever
$d_\mathrm{Sk}(\bar\mathfrak{h}_0,\mathfrak{h}_0)<\delta$ then $\delta_i\eqdef \mathfrak{h}_0(y_i)-\bar\mathfrak{h}_0(y_i)$
satisfies $|\delta_i|<\hat \eps$ for all $i$.
Write simply $B$ instead of $B^{\mathrm{per},\downarrow}_\mathrm{bc}$ as a shorthand.
Note now that for every $x \in \mathbb{T}$ there exists $i\le N$ such that one can write
\begin{equ}[e:reprBC]
\mathfrak{h}^\mathrm{per}_\mathrm{bc}(t,x) = \mathfrak{h}_0(y_i) + \delta B(\mathfrak{s}y_i,\mathfrak{s}z_i) + \delta B(\mathfrak{s}z_i, \mathfrak{T}_\mathrm{bw}(t,x))\;,
\end{equ}
and, conditional on $\mathscr{T}^{\mathrm{per},\downarrow}_\mathrm{bw}$, the collection of random variables
$\{\delta B(\rho^\downarrow(\mathfrak{s}z,\nu), \mathfrak{s}z)\,:\, \mathfrak{s}z \in \mathscr{T}^{\mathrm{per},\downarrow}_\mathrm{bw}[t]\}$
is independent of the collection $\{\delta B(\mathfrak{s}y_i,\mathfrak{s}z_i)\}_{i \le N}$.
Conditional on $\mathscr{T}^{\mathrm{per},\downarrow}_\mathrm{bw}$ and restricted to $A_1 \mathfrak{c}ap A_2$,
the law of the latter is $\mathbb{C}N(0,\nu \mathrm{Id}_N)$ for some $N \le \bar N$. We now choose $\hat \eps$
small enough so that $\|\mathbb{C}N(0,\nu \mathrm{Id}_N) - \mathbb{C}N(h,\nu \mathrm{Id}_N)\|_\mathbb{T}V \le \eps/3$, uniformly over all
$N \le \bar N$ and all $h \in \mathbb{R}^{N}$ with $|h_i| \le \hat \eps$.
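Such a choice of $\hat\eps$ is indeed possible: as a rough sketch (with $\|\mathfrak{c}dot\|_\mathbb{T}V$ normalised to take values in $[0,1]$), Pinsker's inequality gives, for every $N\leq\bar N$ and every $h\in\mathbb{R}^N$ with $|h_i|\leq\hat\eps$,
\begin{equ}
\|\mathbb{C}N(0,\nu \mathrm{Id}_N) - \mathbb{C}N(h,\nu \mathrm{Id}_N)\|_\mathbb{T}V \leq \frac{|h|}{2\sqrt{\nu}}\leq \frac{\sqrt{\bar N}\,\hat\eps}{2\sqrt{\nu}}\;,
\end{equ}
so that any $\hat\eps\leq \tfrac{2}{3}\eps\sqrt{\nu/\bar N}$ suffices.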
Writing $\bar\mathfrak{h}^\mathrm{per}_\mathrm{bc}$ for the Brownian Castle with initial condition $\bar \mathfrak{h}_0$, it
immediately follows from the properties of the total variation distance that
we can couple $\bar\mathfrak{h}^\mathrm{per}_\mathrm{bc}$ and $\mathfrak{h}^\mathrm{per}_\mathrm{bc}$ in such a way that
$\mathbb{P}(\bar\mathfrak{h}^\mathrm{per}_\mathrm{bc}(t,\bigcdot) = \mathfrak{h}^\mathrm{per}_\mathrm{bc}(t,\bigcdot)) \ge 1-\eps$, uniformly over
$\bar \mathfrak{h}_0$ with $d_\mathrm{Sk}(\bar\mathfrak{h}_0,\mathfrak{h}_0)<\delta$, and \eqref{e:wantedSF} follows.
\end{proof}
We now want to study the large time behaviour of the Brownian Castle and its periodic counterpart.
Notice at first that for any sublinearly growing initial condition $\mathfrak{h}_0$, the variance of $\mathfrak{h}_\mathrm{bc}(t,0)$ grows like $t$
since, by Definition~\ref{def:bcFD}, $\mathfrak{h}_\mathrm{bc}(t,0)$ conditioned on $\mathscr{T}_\mathrm{bw}^\downarrow$ is Gaussian with variance $t$ and
mean given by $\mathfrak{h}_0$, evaluated at the point where the backward Brownian motion starting at $(t,0)$ hits $\{0\}\times\mathbb{R}$.
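To spell this out, writing $y_t$ for that (random) hitting location, which is a centred Gaussian of variance $t$, the law of total variance gives
\begin{equ}
\mathrm{Var}\big(\mathfrak{h}_\mathrm{bc}(t,0)\big)=\mathbb{E}\big[\mathrm{Var}\big(\mathfrak{h}_\mathrm{bc}(t,0)\,\big|\,\mathscr{T}_\mathrm{bw}^\downarrow\big)\big]+\mathrm{Var}\big(\mathbb{E}\big[\mathfrak{h}_\mathrm{bc}(t,0)\,\big|\,\mathscr{T}_\mathrm{bw}^\downarrow\big]\big)= t+\mathrm{Var}\big(\mathfrak{h}_0(y_t)\big)\;,
\end{equ}
and the last term is $o(t)$ whenever $\mathfrak{h}_0$ grows sublinearly.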
On the other hand, it is immediate that the Brownian Castle is equivariant under the action of $\mathbb{R}$ by vertical translations
in the sense that one has $\mathfrak{h}_\mathrm{bc}^{\mathfrak{h}_0 + a} = \mathfrak{h}_\mathrm{bc}^{\mathfrak{h}_0} + a$. As a consequence, writing
$\tilde D(\mathbb{R},\mathbb{R}) = D(\mathbb{R},\mathbb{R}) / \mathbb{R}$ for the quotient space, the canonical projection of the Brownian Castle
onto $\tilde D$ is still a Markov process.
We henceforth write $\tilde\mathfrak{h}_\mathrm{bc}$ (respectively $\tilde\mathfrak{h}^\mathrm{per}_\mathrm{bc}$) for this Markov process.
Recall the stationary Brownian Castle $\mathfrak{h}_\mathrm{bc}^\mathfrak{s}t$ given in Definition~\ref{def:BC}.
As above, we write $\tilde \mathfrak{h}_\mathrm{bc}^\mathfrak{s}t$ for its canonical projection to $\tilde D$ and similarly for its
periodic version, which, according to Remark~\ref{rem:Stationary}, are truly
stationary. With these notations, we then have the following result.
\begin{proposition}\label{p:LargeTime}
There exists a stopping time $\tau$ with exponential tails such that, for $t \ge\tau$, one has
$\tilde \mathfrak{h}_\mathrm{bc}^\mathrm{per}(t,\bigcdot) = \tilde \mathfrak{h}_\mathrm{bc}^{\mathrm{per},\mathfrak{s}t}(t,\bigcdot)$ independently of the initial condition
$\mathfrak{h}_0$.
\end{proposition}
\begin{proof}
It suffices to take for $\tau$ the first time such that all the backward paths starting from
$\{t\}\times\mathbb{T}$ coalesce before hitting time $0$, namely
\begin{equ}
\tau \eqdef\inf\Big\{t \ge 0: d^\downarrow_\mathrm{bw}(\mathfrak{s}z,\mathfrak{s}z')\leq 2t\,,\quad\forall\,\mathfrak{s}z,\mathfrak{s}z'\in\mathscr{T}^{\mathrm{per},\downarrow}_\mathrm{bw}[t]\Big\}\,.
\end{equ}
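Indeed, heuristically (and in analogy with the discrete formula~\eqref{e:defdd} below), the backward tree distance between two points of the time-$t$ slice is twice the time one has to run their backward paths before they coalesce: writing $s_{\mathrm{coal}}(\mathfrak{s}z,\mathfrak{s}z')$ for that coalescence time,
\begin{equ}
d^\downarrow_\mathrm{bw}(\mathfrak{s}z,\mathfrak{s}z')=2\big(t-s_{\mathrm{coal}}(\mathfrak{s}z,\mathfrak{s}z')\big)\;,\qquad \mathfrak{s}z,\mathfrak{s}z'\in\mathscr{T}^{\mathrm{per},\downarrow}_\mathrm{bw}[t]\;,
\end{equ}
so that the condition $d^\downarrow_\mathrm{bw}(\mathfrak{s}z,\mathfrak{s}z')\leq 2t$ is precisely the requirement that $s_{\mathrm{coal}}(\mathfrak{s}z,\mathfrak{s}z')\geq 0$.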
Notice that $\tau$ coincides in distribution with $T^\uparrow(0)$ introduced in~\mathfrak{c}ite[Sec.~3.1]{CMT}. (This is by duality:
the non-crossing property guarantees that all backwards trajectories starting from $t$ coalesce before time $0$
precisely when all forward trajectories starting from $0$ have coalesced.)
It follows immediately from the definitions that, for all $t \ge \tau$, $\tilde \mathfrak{h}_\mathrm{bc}^\mathrm{per}(t,\bigcdot)$ is
independent of $\mathfrak{h}_0$ and therefore equal to $\tilde \mathfrak{h}_\mathrm{bc}^{\mathrm{per},\mathfrak{s}t}(t,\bigcdot)$.
Exponential integrability of $\tau$ then follows from~\mathfrak{c}ite[Prop.~3.11 (ii)]{CMT}.
\end{proof}
In the non-periodic case, one cannot expect such a strong statement of course, but the following bound still holds.
\begin{proposition}\label{prop:convergenceFull}
The bound
\begin{equ}
\mathbb{E}_\mathrm{bc}[d_\mathrm{Sk}(\tilde\mathfrak{h}_\mathrm{bc}(t,\mathfrak{c}dot), \tilde\mathfrak{h}^\mathfrak{s}t_\mathrm{bc}(t,\mathfrak{c}dot))] \lesssim {\log t \over \mathfrak{s}qrt t}\;,
\end{equ}
holds independently of the initial condition $\mathfrak{h}_0$.
\end{proposition}
\begin{proof}
The definition of $d_\mathrm{Sk}$ implies that if $\tilde\mathfrak{h}_\mathrm{bc}(t,x) = \tilde\mathfrak{h}^\mathfrak{s}t_\mathrm{bc}(t,x)$ for all $x$ with
$|x| \le R$, then $d_\mathrm{Sk}(\tilde\mathfrak{h}_\mathrm{bc}(t,\mathfrak{c}dot), \tilde\mathfrak{h}^\mathfrak{s}t_\mathrm{bc}(t,\mathfrak{c}dot)) \le e^{-R}$ and that, in any case,
$d_\mathrm{Sk}$ is bounded by $1$. It follows that
\begin{equ}
\mathbb{E} d_\mathrm{Sk}(\tilde\mathfrak{h}_\mathrm{bc}(t,\mathfrak{c}dot), \tilde\mathfrak{h}^\mathfrak{s}t_\mathrm{bc}(t,\mathfrak{c}dot)) \le \mathbb{P}(A_R^c) + e^{-R}\mathbb{P}(A_R)\;,
\end{equ}
for any $R>0$ and any event $A_R$ implying that $\tilde\mathfrak{h}_\mathrm{bc}(t,\mathfrak{c}dot)$ and $\tilde\mathfrak{h}^\mathfrak{s}t_\mathrm{bc}(t,\mathfrak{c}dot)$ agree on $[-R,R]$.
It suffices to take for $A_R$ the event that the two backwards trajectories starting at $(t,\pm R)$ coalesce before time $0$.
Since $\mathbb{P}(A_R^c) \lesssim R/\mathfrak{s}qrt t$, choosing $R = \log t$ yields the claim.
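To spell out the arithmetic, for $t\geq 3$ say, this choice gives
\begin{equ}
\mathbb{E}_\mathrm{bc}[d_\mathrm{Sk}(\tilde\mathfrak{h}_\mathrm{bc}(t,\mathfrak{c}dot), \tilde\mathfrak{h}^\mathfrak{s}t_\mathrm{bc}(t,\mathfrak{c}dot))]\lesssim \frac{R}{\sqrt{t}}+e^{-R}=\frac{\log t}{\sqrt{t}}+\frac{1}{t}\lesssim\frac{\log t}{\sqrt{t}}\;,
\end{equ}
since $e^{-R}=t^{-1}\leq \log t/\sqrt{t}$ in this regime.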
\end{proof}
\mathfrak{s}ubsection{Distributional properties}\label{sec:BCdist}
In this section, we will focus on the distributional properties of the stationary Brownian Castle which,
as shown in the previous section, describes the long-time behaviour of
$\mathfrak{h}_\mathrm{bc}$ (resp. $\mathfrak{h}^\mathrm{per}_\mathrm{bc}$), at least modulo vertical translations.
We begin with the following proposition which shows that $\mathfrak{h}_\mathrm{bc}$ is indeed invariant with respect to the $1{:}1{:}2$ scaling,
i.e.\ its scaling exponents are those characterising its own universality class.
\begin{proposition}\label{p:Scaling}
Let $\mathfrak{h}^\mathfrak{s}t_\mathrm{bc}$ be the stationary Brownian Castle defined according to~\eqref{e:SBC}. Then, for any $\lambda>0$,
\begin{equ}[e:eqlaw]
\lambda\mathfrak{h}^\mathfrak{s}t_\mathrm{bc}(\bigcdot/\lambda^2, \bigcdot/\lambda)\eqlaw\mathfrak{h}_\mathrm{bc}^\mathfrak{s}t(\bigcdot,\bigcdot)\,.
\end{equ}
\end{proposition}
\begin{proof}
The claim clearly holds for all finite-dimensional distributions from the scaling properties of Brownian motion.
Since these characterise the law of $\mathfrak{h}^\mathfrak{s}t_\mathrm{bc}$,~\eqref{e:eqlaw} follows at once.
\end{proof}
\begin{remark}
Note that \eqref{e:eqlaw} holds without having to quotient by vertical shifts, while this is necessary to have space-time
translation invariance.
\end{remark}
Although Definition~\ref{def:bcFD} provides a graphic description of the finite dimensional distributions of the Brownian Castle,
we would like to obtain more explicit formulas characterising them.
In the next proposition, we begin our analysis by studying the distribution of the increments
at fixed time and as time goes to $+\infty$.
\begin{proposition}\label{prop:2pointDist}
Let $\mathfrak{h}_0\in D(\mathbb{R},\mathbb{R})$, $t>0$ and $\mathfrak{h}_\mathrm{bc}$ be the Brownian Castle with initial condition $\mathfrak{h}_0$.
Then, as $|x-y|$ converges to $0$
\begin{equ}[e:smallDist]
\frac{\mathfrak{h}_\mathrm{bc}(t,x)-\mathfrak{h}_\mathrm{bc}(t,y)}{x-y}\overset{\mathrm{\tiny law}}{\longrightarrow} \mathbb{C}auchy(0,2)
\end{equ}
where, for $a\in\mathbb{R}$ and $\gamma>0$, $\mathbb{C}auchy(a,\gamma)$ denotes a Cauchy random variable
with location parameter $a$ and scale parameter $\gamma$.
Moreover, for any $x,y\in\mathbb{R}$,
\begin{equ}[e:largeTime]
\mathfrak{h}_\mathrm{bc}(t,x)-\mathfrak{h}_\mathrm{bc}(t,y)\overset{\mathrm{\tiny law}}{\longrightarrow} \mathbb{C}auchy(0,2|x-y|)
\end{equ}
as $t\uparrow+\infty$. In particular, for any $x,y\in\mathbb{R}$ the stationary Brownian Castle $\mathfrak{h}_\mathrm{bc}^\mathfrak{s}t$ satisfies
$\mathfrak{h}_\mathrm{bc}^\mathfrak{s}t(t,x)-\mathfrak{h}_\mathrm{bc}^\mathfrak{s}t(t,y)\eqlaw\mathbb{C}auchy(0,2|x-y|)$ for any $t\geq 0$.
\end{proposition}
\begin{proof}
The claim for $\mathfrak{h}_\mathrm{bc}^\mathfrak{s}t$ is clearly true since from its definition we have
\begin{equ}
\mathfrak{h}_\mathrm{bc}^\mathfrak{s}t(t,x)-\mathfrak{h}_\mathrm{bc}^\mathfrak{s}t(t,y) \eqlaw \mathbb{C}N(0,2\tau_{y-x})\;,
\end{equ}
where $\tau_r$ denotes the first hitting time of $r$ by a standard Brownian motion started at $0$.
Now, $\tau_r \eqlaw \mathop{\mathrm{Levy}}(0,r^2)$ and $\mathbb{C}auchy(t) \eqlaw \mathbb{C}N\big(0,\mathop{\mathrm{Levy}}(0,t^2/2)\big)$, the latter denoting a centred Gaussian with random variance, which implies the result.
\eqref{e:largeTime} follows from Proposition~\ref{prop:convergenceFull}, while
\eqref{e:smallDist} can be reduced to \eqref{e:largeTime} by Proposition~\ref{p:Scaling}.
\end{proof}
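As a consistency check, note that the statement for the stationary Brownian Castle is compatible with the $1{:}1{:}2$ scaling of Proposition~\ref{p:Scaling}: since $c\,\mathbb{C}auchy(0,\gamma)\eqlaw\mathbb{C}auchy(0,|c|\gamma)$ for every $c\neq 0$, one has
\begin{equ}
\lambda\big(\mathfrak{h}_\mathrm{bc}^\mathfrak{s}t(t/\lambda^2,x/\lambda)-\mathfrak{h}_\mathrm{bc}^\mathfrak{s}t(t/\lambda^2,y/\lambda)\big)\eqlaw\lambda\,\mathbb{C}auchy\big(0,2|x-y|/\lambda\big)\eqlaw\mathbb{C}auchy(0,2|x-y|)\;,
\end{equ}
as it must be in view of~\eqref{e:eqlaw}.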
We now turn our attention to the $n$-point distribution for $n\geq 3$.
Given $x_1,\dots,x_n \in \mathbb{R}$, we aim at deriving an expression for the characteristic function of
$(\mathfrak{H}_\mathrm{bc}(x_1),\dots,\mathfrak{H}_\mathrm{bc}(x_n))$, where we use the shorthand $\mathfrak{H}_\mathrm{bc} = \mathfrak{h}_\mathrm{bc}^\mathfrak{s}t(0,\bigcdot)$.
By the definition of $\mathfrak{H}_\mathrm{bc}$ (and Definition~\ref{def:bcFD}),
once the full ancestral structure of $n$ independent coalescing (backward)
Brownian motions starting from $x_1,\dots, x_n$ respectively, is known, the conditional joint distribution
of $(\mathfrak{H}_\mathrm{bc}(x_1),\dots,\mathfrak{H}_\mathrm{bc}(x_n))$ (modulo vertical shifts) is Gaussian and therefore easily accessible.
In order to get our hands on the aforementioned ancestral structure we will proceed inductively using the strong Markov property
of a finite family of coalescing Brownian motions. This will be possible if we are able to simultaneously keep
track of the first time at which any two of the Brownian motions meet, of which pair of motions meets at that time, and
of the positions of all of them at that time.
Let $x_1<\dots<x_n$ and let $(y_i)_{i \le n}$ be independent standard Brownian motions starting at
$x_i$. Denote by $Z=(\tau,\iota,\mathbb{P}i^{n-1})$ the $\mathbb{R}_+\times [n-1] \times\mathbb{R}^{n-1}$-valued
random variable in which $\tau$, $\iota$ and $\mathbb{P}i^{n-1}$ are defined by
\begin{equation}\label{def:Z}
\begin{split}
\tau&\eqdef\inf\left\{t>0\,:\,\exists\,\iota \in [n-1] \text{ such that } y_\iota(t)=y_{\iota+1}(t)\right\}\;,\\
\mathbb{P}i^{n-1}&\eqdef \left(y_1(\tau),\dots,y_{\iota-1}(\tau), y_{\iota+1}(\tau),\dots,y_n(\tau)\right)\;,
\end{split}
\end{equation}
where $\iota$ is implicitly defined as the (almost surely unique) value appearing in the definition of $\tau$.
The random variable $Z$ admits a density with respect to the product of the one-dimensional Lebesgue measure on $\mathbb{R}_+$,
the counting measure on $[n-1]$ and the $(n-1)$-dimensional Lebesgue measure,
as the following variant of the Karlin--McGregor formula \mathfrak{c}ite{McGregor} shows.
\begin{lemma}\label{l:Density}
Let $Z$ be the $\mathbb{R}_+\times [n-1]\times\mathbb{R}^{n-1}$-valued random variable defined
in~\eqref{def:Z}. Then, with the usual abuse of notation,
\begin{equation}\label{e:DensityZ}
\mathbb{P}_x \left( \tau\in \mathrm{d} t,\iota=j,\mathbb{P}i^{n-1}\in \mathrm{d} y\right)=\det M^j_t (x,y)\,\mathrm{d} y\,\mathrm{d} t
\end{equation}
where the $n\times n$ matrix $M^j$ is defined by
\begin{equation}\label{e:Kernel}
M^j_t (x,y)_{i,k}\eqdef
\begin{cases}
p_t(y_k-x_i)\,,& \text{for $k< j$,}\\
p_t'(y_j-x_i)\,,& \text{for $k= j$, }\\
p_t(y_{k-1}-x_i)\,,& \text{for $k> j$,}
\end{cases}
\end{equation}
where $p$ denotes the heat kernel and $p'$ its spatial derivative.
\end{lemma}
\begin{proof}
This is an immediate corollary of Theorem~\ref{thm:Density}, combined with the Karlin--McGregor formula.
\end{proof}
In the following proposition, we derive a recursive formula for the characteristic function of the $n$-point distribution
of $\mathfrak{H}_\mathrm{bc}$.
\begin{proposition}\label{prop:FDist}
Let $\mathfrak{H}_\mathrm{bc}$ be as above.
For $n\in\mathbb{N}$, $\alpha=(\alpha_1,\dots,\alpha_n), \,x=(x_1,\dots,x_n) \in\mathbb{R}^n$ such that $\mathfrak{s}um_j\alpha_j=0$ and
$x_1<\dots<x_n$, let $F_n(\alpha,x)$ be the characteristic function of $(\mathfrak{H}_\mathrm{bc}(x_1),\dots,\mathfrak{H}_\mathrm{bc}(x_n))$ evaluated at $\alpha$.
Then, $F_n$ satisfies the recursion
\begin{equation}\label{e:RecF}
F_n(\alpha,x)=\mathfrak{s}um_{j=1}^{n-1}\int_{\mathbb{R}_+} \int_{y_1<\dots<y_{n-1}} e^{-\frac{1}{2}|\alpha|^2t} F_{n-1}(\mathfrak{c}_j\alpha,y) \det M_t^j(x,y) \,\mathrm{d} y\,\mathrm{d} t
\end{equation}
where the $n\times n$ matrix $M^j$ was given in~\eqref{e:Kernel} and $\mathfrak{c}_j\alpha\in\mathbb{R}^{n-1}$ is the vector defined by
\begin{equ}
(\mathfrak{c}_j\alpha)_l\eqdef
\begin{cases}
\alpha_l\,,& l<j\\
\alpha_j+\alpha_{j+1}\,,& l=j\\
\alpha_{l+1}\,,&l>j\,.
\end{cases}
\end{equ}
\end{proposition}
\begin{proof}
Fix $x_1<\dots<x_n$ and consider the stochastic process $(y_i, B_{ij})_{i,j=1}^n$ where the $y_i$ are
coalescing Brownian motions with initial conditions $y_i(0) = x_i$ and the $B_{ij}$ are given by
\begin{equ}[e:defBij]
dB_{ij}(t) = \mathbbm{1}_{y_i \neq y_j} \bigl(dW_i(t) - dW_j(t)\bigr)\;, \qquad B_{ij}(0) = 0\;,
\end{equ}
the $W_i$ being i.i.d.\ standard Wiener processes independent of the $y_i$'s.
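To make the link with the conditionally Gaussian description above explicit, note that, given the $y_i$'s, the right-hand side of~\eqref{e:defBij} accumulates quadratic variation at rate $2$ until $y_i$ and $y_j$ coalesce; hence, writing $\tau_{ij}$ for their coalescence time,
\begin{equ}
B_{ij}(t)\,\big|\,(y_i)_{i\leq n}\;\eqlaw\;\mathbb{C}N\big(0,2(t\wedge\tau_{ij})\big)\;,\qquad B_{ij}(\infty)\,\big|\,(y_i)_{i\leq n}\;\eqlaw\;\mathbb{C}N(0,2\tau_{ij})\;.
\end{equ}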
This process is strong Markov since so is the family of $y_i$'s (see \mathfrak{c}ite{TW}). Furthermore, since the $y_i$'s all coalesce at some time,
the limit $B_{ij}(\infty) = \lim_{t \to \infty} B_{ij}(t)$ is well-defined and, by construction,
one has
\begin{equ}
(\mathfrak{H}_\mathrm{bc}(x_i) - \mathfrak{H}_\mathrm{bc}(x_j))_{ i,j=1}^n
\eqlaw (B_{ij}(\infty))_{ i,j=1}^n
\end{equ}
We now combine the strong Markov property with the fact that $B$, as defined by \eqref{e:defBij},
depends on its initial condition in an affine way with unit slope.
This implies that, writing $\tilde \mathfrak{H}_\mathrm{bc}$ for a copy of $\mathfrak{H}_\mathrm{bc}$ that is independent
of the process $(y,B)$ and $\tau$ for any stopping time, one has the identity in law
\begin{equ}
\big(B_{ij}(\infty)\big)_{i,j} \eqlaw \big(B_{ij}(\tau) + \tilde \mathfrak{H}_\mathrm{bc}(y_i(\tau)) - \tilde \mathfrak{H}_\mathrm{bc}(y_j(\tau))\big)_{i,j}\;.
\end{equ}
Write now $(\mathbb{C}F_t)$ for the filtration generated by the $y_i$ and the $W_i$
and $\tau$ for the first time at which any two of the $y$'s coalesce. Using furthermore the shorthand
$\mathfrak{H}_\mathrm{bc}(x)$ for $(\mathfrak{H}_\mathrm{bc}(x_1),\ldots,\mathfrak{H}_\mathrm{bc}(x_n))$ we have
\begin{equs}
F_n&(\alpha,x)=\mathbb{E} \left[\mathbb{E} \left[e^{i\mathfrak{s}cal{\alpha, \mathfrak{H}_\mathrm{bc}(x)}}\Big|\mathbb{C}F_\tau\right]\right] = \mathbb{E}\left[ e^{i\mathfrak{s}cal{\alpha, \tilde \mathfrak{H}_\mathrm{bc}(y(\tau))}-\frac{1}{2}|\alpha|^2\tau}\right]\\
&=\mathfrak{s}um_{l=1}^{n-1}\int_0^\infty\int_{y_1<\dots<y_{n-1}}e^{-\frac{|\alpha|^2}{2}t} \mathbb{E} \left[ e^{i\mathfrak{s}cal{\alpha, \mathfrak{H}_\mathrm{bc}(y)}}\right]\mathbb{P}_x(\tau\in\mathrm{d} t,\iota=l,\mathbb{P}i^{n-1}\in\mathrm{d} y)\\
&=\mathfrak{s}um_{l=1}^{n-1}\int_0^\infty\int_{y_1<\dots<y_{n-1}}e^{-\frac{|\alpha|^2}{2}t} \mathbb{E} \left[ e^{i\mathfrak{s}cal{\mathfrak{c}_l\alpha, \mathfrak{H}_\mathrm{bc}(y)}}\right] \det M_t^l(x,y) \,\mathrm{d} y\,\mathrm{d} t
\end{equs}
where $\mathbb{P}_x$ denotes the measure appearing in Lemma~\ref{l:Density}.
The required identity~\eqref{e:RecF} follows at once.
\end{proof}
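As an illustrative sanity check of~\eqref{e:RecF}, consider the lowest-order cases. For $n=1$ the constraint on $\alpha$ forces $\alpha_1=0$, so that $F_1(\alpha_1,x_1)=1$; for $n=2$ and $\alpha=(a,-a)$ one has $|\alpha|^2=2a^2$ and the recursion collapses to
\begin{equ}
F_2(\alpha,x)=\int_0^{\infty}\int_{\mathbb{R}} e^{-a^2 t}\,\det M^1_t(x,y) \,\mathrm{d} y\,\mathrm{d} t=\mathbb{E}_x\big[e^{-a^2\tau}\big]\;,
\end{equ}
i.e.\ the Laplace transform of the coalescence time $\tau$ of the two backward characteristics started from $x_1$ and $x_2$, which is indeed the characteristic function at $(a,-a)$ of a pair whose difference, conditionally on $\tau$, is a centred Gaussian of variance $2\tau$.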
Thanks to the results in Sections~\ref{sec:BC} and~\ref{sec:BCprocess}, and Proposition~\ref{prop:2pointDist}, we know that
$\mathfrak{H}_\mathrm{bc}$ has increments which are stationary and distributed according to a Cauchy
random variable with scale parameter proportional to their length, and admits a version whose trajectories are
c\`adl\`ag and have (locally) finite $p$-variation for any $p>1$ (see Proposition~\ref{p:pvar}).
If moreover we knew that the increments were independent,
we could conclude that $\mathfrak{H}_\mathrm{bc}$ is nothing but a Cauchy process.
The lack of independence is already evident by formula~\eqref{e:RecF} in Proposition~\ref{prop:FDist}, therefore
the $k$-point distributions of $\mathfrak{H}_\mathrm{bc}$ with $k > 2$ are different from those of the Cauchy process.
In the next proposition, we actually show more, namely that the law of
$\mathfrak{H}_\mathrm{bc}$ is a {\it genuinely new} measure on $D(\mathbb{R},\mathbb{R})$ since it is
singular with respect to that of the Cauchy process.
\begin{proposition}\label{prop:Singularity}
Let $\mathfrak{H}_\mathrm{bc}$ be as above and $\mathfrak{C}$ be the standard Cauchy process on
$\mathbb{R}_+$. Then, when restricted to $[0,1]$, the laws of $\mathfrak{H}_\mathrm{bc}$ and $\mathfrak{C}$ are mutually singular.
\end{proposition}
\begin{proof}
We want to exhibit an almost sure property that distinguishes the laws of $\mathfrak{H}_\mathrm{bc}$ and $\mathfrak{C}$ on
$D([0,1],\mathbb{R})$.
Let $1\geq x_0>\dots>x_m\geq \f12$, $\varphi:\mathbb{R}^m\to\mathbb{R}$ be a bounded function and $\{\lambda_n\}_n$
an increasing sequence in $[1,\infty)$ such that $\lambda_{n+1} \ge 4\lambda_n$ for all $n\in\mathbb{N}$. For $n\geq 1$, define the functional
$\mathbb{P}hi_n: D([0,1],\mathbb{R})\to\mathbb{R}$ by
\begin{equ}
\mathbb{P}hi_n(h)\eqdef\frac{1}{n}\mathfrak{s}um_{k=1}^n \varphi(I_k(h))\,,\qquad\text{where}\qquad (I_k(h))_i\eqdef \lambda_k\left(h(x_i/\lambda_k)-h(x_{0}/\lambda_k)\right)
\end{equ}
for $h\in D([0,1],\mathbb{R})$. By scaling invariance and the independence properties of the Cauchy process, $\{I_k(\mathfrak{C})\}_k$ is i.i.d., while,
by Proposition~\ref{p:Scaling}, $\{I_k(\mathfrak{H}_\mathrm{bc})\}_k$ is a sequence of identically distributed (but not independent!)
random variables. Since furthermore $\varphi$ is bounded,
the classical strong law of large numbers holds for
$\{\varphi(I_k(\mathfrak{C}))\}_k$, which implies that almost surely
\begin{equation}\label{e:SLLN}
\lim_{n\to\infty} \mathbb{P}hi_n(\mathfrak{C})=\mathbb{E}[\mathbb{P}hi_1(\mathfrak{C})]\,.
\end{equation}
We claim that, provided that we choose a sequence $\{\lambda_n\}_n$
that increases sufficiently fast,~\eqref{e:SLLN} holds also for $\mathfrak{H}_\mathrm{bc}$.
Before proving the claim, notice that, assuming it holds, we are done. Indeed, it suffices to
take $m\geq 2$, and determine a function $\varphi$ such that $\mathbb{E}[\mathbb{P}hi_1(\mathfrak{C})]\neq \mathbb{E}_\mathrm{bc}[\mathbb{P}hi_1(\mathfrak{H}_\mathrm{bc})]$. Such a function
clearly exists since by Proposition~\ref{prop:FDist} $\mathfrak{C}$ and $\mathfrak{H}_\mathrm{bc}$ have different $n$-point distributions for $n\geq 3$.
We now turn to the proof of the claim.
We will construct two sequences $\{\hat J_k\}_k$ and $\{J_k\}_k$ of $\mathbb{R}^m$-valued random variables such that
\begin{equ}
\hat J_1 = J_1\;,\qquad
\{J_k\}_{k \in \mathbb{N}} \eqlaw \{I_k(\mathfrak{H}_\mathrm{bc})\}_{k \in \mathbb{N}}\;,
\end{equ}
and the sequence $\hat J_k$ is i.i.d.
Arguing as for the Cauchy process,~\eqref{e:SLLN} holds for $\{\varphi(\hat J_k)\}_k$ hence the claim follows if we can
build $\{\hat J_k\}_k$ and $\{J_k\}_k$ in such a way that, almost surely, $\hat J_k=J_k$ for all but finitely many values of $k$.
Let $\{W_{k,i}\,:\, k \in \mathbb{N}, i \in \{0,\ldots,m\}\}$ be a collection
of i.i.d.\ standard Wiener processes and $z_{k,i} = (0, x_{k,i})$ be points with $x_{k,i} = x_i/\lambda_k$.
We use them in two
different ways. First, we apply the construction given at the start of Section~\ref{sec:BW}
to each of the groups $\{(W_{k,i}, z_{k,i})\}_{i \le m}$ separately, which
yields a collection $\{\zeta_k = (\mathscr{T}_k,\ast_k,d_k, M_k)\}_{k \in \mathbb{N}}$ of characteristic spatial $\mathbb{R}$-trees
with each $\zeta_k$ representing coalescing Brownian motions starting from $\{z_{k,i}\}_{i \le m}$
and $M_k(\ast_k) = z_{k,0}$. We then apply the construction to the whole collection at once,
taken in lexicographical order, so as to obtain one ``big'' spatial $\mathbb{R}$-tree $\zeta = (\mathscr{T},d,\ast, M)$.
Let $\hat\mathfrak{s}z_{k,i} \in \mathscr{T}_k$ and $\mathfrak{s}z_{k,i} \in \mathscr{T}$ be the unique points such that $M_k(\hat\mathfrak{s}z_{k,i}) = z_{k,i}$
and $M(\mathfrak{s}z_{k,i}) = z_{k,i}$, respectively. Write furthermore
\begin{equ}
\tau_k = \mathfrak{s}up\{t < 0\,:\, \rho(\mathfrak{s}z_{k,0},t) = \rho(\mathfrak{s}z_{k-1,m},t)\}\;.
\end{equ}
Since both $\{\zeta_k\}_k$ and $\zeta$ are built via the same Brownian motions, we clearly have
$M_k(\rho(\hat\mathfrak{s}z_{k,i},t)) = M(\rho(\mathfrak{s}z_{k,i},t))$ for all $i \le m$ and
all $t \in [\tau_k,0]$. Denote by $\bar \mathscr{T} \mathfrak{s}ubset \mathscr{T}$ the subspace given by
\begin{equ}
\bar \mathscr{T} = \{\rho(\mathfrak{s}z_{k,i},t)\,:\, t \in [\tau_k,0],\; i \le m,\; k \in \mathbb{N}\}\;,
\end{equ}
and similarly for $\bar \mathscr{T}_k \mathfrak{s}ubset \mathscr{T}_k$, so that there is a canonical bijection
$\iota \mathfrak{c}olon \bigcup_{k \in \mathbb{N}} \bar \mathscr{T}_k \to \bar \mathscr{T}$.
We now turn to the branching maps.
Fix independent Brownian motions $B_k$ on each of the $\mathscr{T}_k$ and
write $\tilde B \mathfrak{c}olon \bigcup_{k \in \mathbb{N}} \bar \mathscr{T}_k \to \mathbb{R}$ for the map that restricts to $B_k$ on each $\mathscr{T}_k$.
We then construct a Brownian motion $B$ on
$\mathscr{T}$ such that
\begin{claim}
\item Writing $\bar B$ for the restriction of $B$ to $\bar \mathscr{T}$, one has $\delta \bar B(\mathfrak{s}z,\mathfrak{s}z') = \delta \tilde B(\iota \mathfrak{s}z,\iota \mathfrak{s}z')$.
\item Conditionally on the $W$'s, all the increments $B(\mathfrak{s}z) - B(\mathfrak{s}z')$ are independent of all the $B_k$'s
for any $\mathfrak{s}z,\mathfrak{s}z' \in \mathscr{T} \mathfrak{s}etminus \bar \mathscr{T}$.
\end{claim}
The independence properties of Brownian motion mentioned in Remark~\ref{rem:contBM} guarantee that such a construction is
possible and uniquely determines the law of $B$, conditional on the $W_{k,i}$'s and the $B_k$'s.
We claim that setting
\begin{equ}
J_{k,i} = \lambda_k \big(B(\mathfrak{s}z_{k,i})- B(\mathfrak{s}z_{k,0})\big)\;,\quad
\hat J_{k,i} = \lambda_k \big(B_k(\hat \mathfrak{s}z_{k,i})- B_k(\hat \mathfrak{s}z_{k,0})\big)\;,
\end{equ}
the sequences $\{\hat J_k\}_k$ and $\{J_k\}_k$ satisfy all the desired properties mentioned above.
Clearly the $\hat J_k$ are independent and they are identically distributed by Brownian scaling.
The fact that the $J_{k,i}$ are distributed like $I_k(\mathfrak{H}_\mathrm{bc})$ is also immediate from the construction, so it remains to
show that $\hat J_k = J_k$ for all but finitely many values of $k$.
Writing
\begin{equ}
\hat \tau_k = \mathfrak{s}up\{t < 0\,:\, \rho(\hat\mathfrak{s}z_{k,0},t) = \rho(\hat\mathfrak{s}z_{k,m},t)\}\;,
\end{equ}
we have that $\hat J_k = J_k$ as soon as $|\hat \tau_k| \le |\tau_k|$.
For $k > 0$,
\begin{equs}
|\tau_k| &\eqlaw \Bigl({x_m \over \lambda_{k-1}} - {x_0 \over \lambda_{k}}\Bigr)^2\mathop{\mathrm{Levy}}(1)
\ge (4\lambda_{k-1})^{-2}\mathop{\mathrm{Levy}}(1)\;,\\
|\hat \tau_k| &\eqlaw \Bigl({x_0-x_m\over \lambda_k}\Bigr)^2 \mathop{\mathrm{Levy}}(1) \le \lambda_k^{-2}\mathop{\mathrm{Levy}}(1)\;,
\end{equs}
where we use the fact that $\lambda_k \ge 4 \lambda_{k-1}$ in the first inequality.
If we choose $0 < c_k < C_k < \infty$ such that
\begin{equ}
\mathbb{P}(\mathop{\mathrm{Levy}}(1) < c_k) \le k^{-2}\;,\qquad
\mathbb{P}(\mathop{\mathrm{Levy}}(1) > C_k) \le k^{-2}\;,
\end{equ}
we can conclude that $\mathbb{P}(|\hat \tau_k| \ge |\tau_k|) \le 2k^{-2}$ provided that we choose
the $\lambda_k$'s in such a way that $c_k \lambda_k^2 \ge 16 C_k \lambda_{k-1}^2$.
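Indeed, with this choice, a union bound over the two $\mathop{\mathrm{Levy}}(1)$ variables above shows that, outside an event of probability at most $2k^{-2}$,
\begin{equ}
|\hat \tau_k| \leq \frac{C_k}{\lambda_k^{2}} \leq \frac{c_k}{16\,\lambda_{k-1}^{2}} \leq |\tau_k|\;.
\end{equ}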
Hence, by Borel-Cantelli we have $\hat J_k = J_k$ for all but finitely many $k$'s and the proof is concluded.
\end{proof}
\mathfrak{s}ection{Convergence of $\boldsymbol{0}$-Ballistic Deposition}\label{sec:0BD}
In this last section, we show that the $0$-Ballistic Deposition model does indeed converge to the
Brownian Castle. In order to prove Theorem~\ref{thm:convTime}, we will begin by associating
to $0$-BD a branching spatial $\mathbb{R}$-tree and
prove that, when suitably rescaled, the
law of the latter converges to $\mathcal{P}_\mathrm{bc}$ defined in~\eqref{e:LawBC}.
\mathfrak{s}ubsection{The Double Discrete Web tree}
\label{sec:graphical}
We begin our analysis by recalling the construction and the results obtained in~\mathfrak{c}ite[Section 4]{CHbwt},
concerning the spatial tree representation of a family of coalescing backward random walks and its dual.
Let $\delta\in(0,1]$ and $(\Omega,\mathcal{A},\mathbb{P}_\delta)$ be a standard probability space supporting
four Poisson random measures, $\mu_{\gamma}^L$, $\mu_{\gamma}^R$,
$\hat\mu_{\gamma}^L$ and $\hat\mu_{\gamma}^R$.
The first two, $\mu_{\gamma}^L$ and $\mu_{\gamma}^R$, live on $\mathbb{D}^\downarrow_\delta\eqdef\mathbb{R}\times\delta\mathbb{Z}$,
are independent and
both have intensity $\gamma\lambda$,
where, for every $k\in\delta\mathbb{Z}$, $\lambda(\mathrm{d} t, \{k\})$ is a copy of the Lebesgue measure on $\mathbb{R}$ and
throughout the section
\begin{equ}[e:gamma]
\gamma=\gamma(\delta)\eqdef\frac{1}{2\delta^2}\,.
\end{equ}
The other two live on $\mathbb{D}^\uparrow_\delta\eqdef\mathbb{R}\times\delta(\mathbb{Z}+1/2)$ and are obtained from the former by setting,
for every measurable $A\mathfrak{s}ubset\mathbb{D}^\uparrow_\delta$
\begin{equ}[e:DualPoisson]
\hat\mu^L_\gamma(A)\eqdef\mu^R_\gamma(A-\delta/2)\qquad\text{and}\qquad
\hat\mu^R_\gamma(A)\eqdef\mu^L_\gamma(A+\delta/2)\,.
\end{equ}
Here, $A\pm\delta/2\eqdef\{z\pm(0,\delta/2)\,:\,z\in A\}$.
Representing the previous Poisson processes via arrows as in Figure~\ref{f:PPEvalMap}, it
is not hard to define two families of coalescing random walks $\{\pi^{\downarrow,\delta}_z\}_{z\in\mathbb{D}^\downarrow_\delta}$
and $\{\pi^{\uparrow,\delta}_z\}_{z\in\mathbb{D}^\uparrow_\delta}$, the first running backward in time and the second forward,
in such a way that they never cross (see~\mathfrak{c}ite[Section 4.1]{CHbwt} and in particular Figure 1 therein).
Thanks to these, we can state the following definition, which is taken from~\mathfrak{c}ite[Definition 4.1]{CHbwt}.
\begin{definition}\label{def:DWT}
Let $\delta\in(0,1]$, $\gamma$ as in~\eqref{e:gamma},
$\mu_{\gamma}^L$ and $\mu_{\gamma}^R$ be two independent Poisson random measures on
$\mathbb{D}^\downarrow_\delta$ of intensity $\gamma\lambda$, $\hat\mu^L$ and $\hat\mu^R$ be given as in~\eqref{e:DualPoisson}
and $\{\pi^\dad_z\}_{z\in\mathbb{D}^\downarrow_\delta}$ and $\{\pi^\uad_{\hat z}\}_{\hat z\in \mathbb{D}^\uparrow_\delta} $ be the
families of coalescing random walks introduced above. We define the {\it Double Discrete Web Tree} as
the couple $\zeta^{\downarrow\mathrel{\mspace{-1mu}}\uparrow}_\delta\eqdef(\zeta^\downarrow_\delta, \zeta^\uparrow_\delta)$, in which
\begin{itemize}[noitemsep,label=-]
\item $\zeta^{\downarrow}_\delta\eqdef(\mathscr{T}^{\downarrow}_\delta, \ast^{\downarrow}_\delta, d^{\downarrow}_\delta, M^{\downarrow}_\delta)$ is given by setting $\mathscr{T}^{\downarrow}_\delta = \mathbb{D}^\downarrow_\delta$, $\ast^{\downarrow}_\delta = (0,0)$, $M^{\downarrow}_\delta$ the canonical inclusion, and
\begin{equ}[e:defdd]
d^{\downarrow}_\delta(z,\bar z) = t + t' - 2 \mathfrak{s}up \{ s \le t \wedge t' \,:\, \pi^\dad_z(s) = \pi^\dad_{\bar z}(s)\}\;.
\end{equ}
\item $\zeta^{\uparrow}_\delta\eqdef(\mathscr{T}^{\uparrow}_\delta, \ast^{\uparrow}_\delta, d^{\uparrow}_\delta, M^{\uparrow}_\delta)$ is built
similarly, but with $\ast^\uparrow_{\delta} = (0,\delta/2)$
and the supremum in \eqref{e:defdd} replaced by $\inf \{ s \ge t \vee t' \,:\, \pi^\uad_z(s) = \pi^\uad_{\bar z}(s)\}$.
\end{itemize}
\end{definition}
As was pointed out in~\mathfrak{c}ite{CHbwt}, the Discrete Web Tree and its dual are not {\it spatial} trees since
the random walks $\pi^{\downarrow,\delta}$, $\pi^{\uparrow,\delta}$ are not continuous.
To overcome this issue, a suitable modification
$\tilde\zeta^{\downarrow\mathrel{\mspace{-1mu}}\uparrow}_\delta$ of $\zeta^{\downarrow\mathrel{\mspace{-1mu}}\uparrow}_\delta$, called the {\it Interpolated Double Discrete Tree}, was introduced in~\mathfrak{c}ite[Section 4.1]{CHbwt}
(see~\mathfrak{c}ite[Definition 4.2]{CHbwt}).
Simply speaking, the latter was obtained by keeping the same pointed $\mathbb{R}$-trees
$(\mathscr{T}^{\bigcdot}_\delta, \ast^{\bigcdot}_\delta, d^{\bigcdot}_\delta)$, $\bigcdot\in\{\uparrow,\downarrow\}$,
as those of the Double Discrete Tree
and defining new evaluation maps $\tilde M^\downarrow_\delta$ and $\tilde M^\uparrow_\delta$ by interpolating
the discontinuities of $M^\downarrow_\delta$ and $M^\uparrow_\delta$ in such a way that
the properties~\ref{i:Back},~\ref{i:MonSpace} (and~\ref{i:Spread}) of Definition~\ref{def:CharTree} were kept
and continuity was restored.
The following result is a consequence of Propositions 4.3 and 4.4, and Theorem 4.5 in~\mathfrak{c}ite{CHbwt}.
\begin{theorem}\label{thm:DWTConv}
For any $\delta\in(0,1)$ and $\alpha\in(0,1/2)$, the Interpolated Double Discrete Web Tree $\tilde\zeta^{\downarrow\mathrel{\mspace{-1mu}}\uparrow}_\delta$
introduced above almost surely belongs to $\mathbb{C}^\alpha_\mathrm{sp}\times\hat\mathbb{C}^\alpha_\mathrm{sp}$ and satisfies~\ref{i:TreeCond}
of Definition~\ref{def:TreeCond}.
Let $\mathbb{T}heta^{\downarrow\mathrel{\mspace{-1mu}}\uparrow}_\delta$ be the law of $\tilde\zeta^{\downarrow\mathrel{\mspace{-1mu}}\uparrow}_\delta$ on $\mathbb{C}^\alpha_\mathrm{sp}\times \hat\mathbb{C}^\alpha_\mathrm{sp}$,
with marginals $\mathbb{T}heta^{\downarrow}_\delta$ and $\mathbb{T}heta^{\uparrow}_\delta$.
Then, $\mathbb{T}heta^\downarrow_\delta$ is tight in $\fE^\alpha(\theta)$ (see~\eqref{def:MeasSet})
for any $\theta>\tfrac32$ and, as $\delta\downarrow 0$,
$\mathbb{T}heta^{\downarrow\mathrel{\mspace{-1mu}}\uparrow}_\delta$ converges to the law of the Double Web Tree
$\mathbb{T}heta^{\downarrow\mathrel{\mspace{-1mu}}\uparrow}_\mathrm{bw}$ of Definition~\ref{def:DWT} weakly on $\mathbb{C}^\alpha_\mathrm{sp}\times \hat\mathbb{C}^\alpha_\mathrm{sp}$.
At last, almost surely, for $\bigcdot\in\{\uparrow,\downarrow\}$
\begin{equ}[e:Dist_p_Mp]
\mathfrak{s}up_{\mathfrak{s}z\in\mathscr{T}^{\bigcdot}_\delta}\|\tilde M^{\bigcdot}_\delta(\mathfrak{s}z)-M^{\bigcdot}_\delta(\mathfrak{s}z)\|\leq \delta
\end{equ}
where $M^{\bigcdot}_\delta$ are the evaluation maps of the double Discrete Web Tree in Definition~\ref{def:DWT}.
\end{theorem}
\mathfrak{s}ubsection{The $\boldsymbol{0}$-BD measure and convergence to BC measure}\label{sec:0BDconv}
We are now ready to introduce the missing ingredient in the construction of the $0$-BD tree,
namely the Poisson random measure responsible for increasing the height function by $1$.
Let $\delta\in(0,1]$, $\gamma$ as in~\eqref{e:gamma},
$\mu_{\gamma}^L$ and $\mu_{\gamma}^R$ be as in the previous section and
$\mu_{\gamma}^\bullet$ be a Poisson random measure on $\mathbb{D}_\delta^\downarrow$ of intensity
$2\gamma\lambda$ and independent of both $\mu_{\gamma}^L$ and $\mu_{\gamma}^R$.
For a typical realisation of $\mu_{\gamma}^L$ and $\mu_{\gamma}^R$,
consider the Discrete Web Tree $\zeta^\downarrow=(\mathscr{T}^\downarrow_\delta,\ast_\delta,d^\downarrow_\delta, M^\downarrow_\delta)$
in Definition~\ref{def:DWT}. Let $\mu_{\gamma}$ be the measure on $\mathscr{T}^\downarrow_\delta$ induced by
$\mu_{\gamma}^\bullet$ via
\begin{equ}[e:numeasure]
\mu_{\gamma}(A)\eqdef \mu_{\gamma}^\bullet( M^\downarrow_\delta(A))
\end{equ}
for any $A$ Borel subset of $\mathscr{T}^\downarrow_\delta$.
Notice that $M^\downarrow_\delta$ is bijective on $\mathbb{D}_\delta^\downarrow$
so that $\mu_{\gamma}$ is well-defined and, since $\mu_{\gamma}^\bullet$ is independent over
disjoint sets, $\mu_{\gamma}$ is distributed according to
a Poisson random measure on $\mathscr{T}^\downarrow_\delta$ of intensity $2\gamma\ell$,
$\ell$ being the length measure on $\mathscr{T}^\downarrow_\delta$ (see~\eqref{def:LengthMeasure}).
\begin{definition}\label{def:0-BDtree}
Let $\delta\in(0,1)$, $\gamma$ as in~\eqref{e:gamma},
$\mu_{\gamma}^L$, $\mu_{\gamma}^R$ and $\mu_{\gamma}^\bullet$ be
three independent Poisson random measures on $\mathbb{D}^\downarrow_\delta$ of respective intensities
$\gamma\lambda$, $\gamma\lambda$, and $2\gamma\lambda$. We define the {\it $0$-BD Tree} as the couple
$\mathfrak{c}hi^\delta_{{0\text{-}\mathrm{bd}}}\eqdef(\zeta^\downarrow_\delta, N_{\gamma})$ in which $\zeta^\downarrow_\delta$ is as
in Definition~\ref{def:DWT} while $N_{\gamma}$ is the rescaled compensated Poisson process given by
\begin{equ}[e:RCPP]
N_{\gamma}(\mathfrak{s}z)\eqdef \delta \big(\mu_\gamma(\llbracket\ast,\mathfrak{s}z\rrbracket) -2\gamma d_\delta^\downarrow(\mathfrak{s}z,\ast)\big)\;,
\end{equ}
and $\mu_{\gamma}$ is the measure given in~\eqref{e:numeasure}.
\end{definition}
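As a heuristic for the normalisation in~\eqref{e:RCPP}, note that, conditionally on $\zeta^\downarrow_\delta$, the random variable $\mu_\gamma(\llbracket\ast,\mathfrak{s}z\rrbracket)$ is Poisson with mean $2\gamma\, d_\delta^\downarrow(\mathfrak{s}z,\ast)$, so that, by~\eqref{e:gamma},
\begin{equ}
\mathrm{Var}\big(N_{\gamma}(\mathfrak{s}z)\,\big|\,\zeta^\downarrow_\delta\big)=\delta^2\,2\gamma\, d_\delta^\downarrow(\mathfrak{s}z,\ast)=d_\delta^\downarrow(\mathfrak{s}z,\ast)\;,
\end{equ}
i.e.\ the branching map has conditional variance given by the tree distance, in agreement with the Brownian motion appearing in the limit.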
As before, the $0$-BD tree also fails to be a branching spatial tree since neither
the evaluation map nor the branching map is continuous. The remedy here was already
presented in Section~\ref{sec:MEC} where we introduced the smoothened RC Poisson process.
\begin{remark}\label{rem:Borel}
We can view the triple $(\mu_{\gamma}^L,\mu_{\gamma}^R,\mu_{\gamma}^\bullet)$ as an element of
the space of locally finite integer-valued measures endowed with the topology of vague convergence.
All functions of $\mathfrak{c}hi^\delta_{{0\text{-}\mathrm{bd}}}$ mentioned later on are Borel measurable with respect to
this topology.
\end{remark}
\begin{definition}\label{def:Smooth0-BDtree}
In the setting of Definition~\ref{def:0-BDtree} and for $p>2$ and $a=(2\gamma)^{-p}$,
we define the {\it smoothened $0$-BD Tree} as the couple
$\tilde\mathfrak{c}hi^\delta_{{0\text{-}\mathrm{bd}}}\eqdef(\tilde\zeta^\downarrow_\delta, N^a_{\gamma})$ in which $\tilde\zeta^\downarrow_\delta$ is the
Interpolated Discrete Web Tree of Section~\ref{sec:graphical} while $N^a_{\gamma}$ is the RCS
Poisson process of Definition~\ref{def:Poisson} associated to the Poisson random measure
$\mu_{\gamma}$ given in~\eqref{e:numeasure}.
\end{definition}
\begin{proposition}\label{p:0-BDTreeisChar}
For any $\delta\in(0,1]$ and $\alpha,\,\beta\in(0,1)$ the smoothened $0$-BD Tree
$\tilde\mathfrak{c}hi^\delta_{{0\text{-}\mathrm{bd}}}=(\tilde\zeta^\downarrow_\delta, N^a_{\gamma})$ in Definition~\ref{def:Smooth0-BDtree}
is almost surely a characteristic $(\alpha,\beta)$-branching spatial pointed $\mathbb{R}$-tree.
Its law $\mathcal{P}^\delta_{{0\text{-}\mathrm{bd}}}$ on $\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$, which we call the {\bf $\boldsymbol{0}$-BD measure},
can be written as
\begin{equ}[e:0-BDMeasure]
\mathcal{P}^\delta_{{0\text{-}\mathrm{bd}}}(\mathrm{d}\mathfrak{c}hi)\eqdef \int \mathbb{C}Q^{\mathrm{Poi}_{\gamma}}_\zeta(\mathrm{d} \mathfrak{c}hi)\mathbb{T}heta^\downarrow_\delta(\mathrm{d}\zeta)
\end{equ}
where $\mathbb{T}heta^\downarrow_\delta$ denotes the law of $\tilde\zeta^\downarrow_\delta$ in Theorem~\ref{thm:DWTConv} on
$\mathbb{C}^\alpha_\mathrm{sp}$ and $\mathbb{C}Q^{\mathrm{Poi}_{\gamma}}_\zeta$ that of the RCS Poisson process $N^a_{\gamma}$ on
$\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$.
Moreover, almost surely~\eqref{e:Dist_p_Mp} holds and for every $r>0$ there exists a
constant $C=C(r)>0$ such that for all $\delta$ small enough and $k>p/(p-2)$
\begin{equ}[e:DistBM]
\mathbb{P}_\delta\Big(\mathfrak{s}up_{\mathfrak{s}z\in(\mathscr{T}^\downarrow_\delta)^{(r)}}|N^a_{\gamma}(\mathfrak{s}z)-N_{\gamma}(\mathfrak{s}z)|> k \delta\Big)\leq C\delta\;,
\end{equ}
where $N_{\gamma}$ is the branching map of $\tilde\mathfrak{c}hi^\delta_{{0\text{-}\mathrm{bd}}}$ in Definition~\ref{def:0-BDtree}.
\end{proposition}
\begin{proof}
The first part of the statement is a direct consequence of the fact that almost surely
$\tilde\zeta^\downarrow_\delta\in\fE_\alpha$ since $\mathscr{T}^\downarrow_\delta$ is almost surely locally finite,
so that Lemma~\ref{l:PPtree} applies,
while the measurability conditions required to make sense of~\eqref{e:0-BDMeasure}
are implied by Proposition~\ref{p:MeasG}.
Moreover,~\eqref{e:Dist_p_Mp} follows by Theorem~\ref{thm:DWTConv} so that we are left to show~\eqref{e:DistBM}.
Let us recall the definition of the event $E_R^\delta$ given in~\mathfrak{c}ite[Proposition 4.4]{CHbwt}
(see also Proposition 3.2 Eq. (3.4) in the same reference).
For $r\geq 1$ and $R>r$, let $Q_R^\pm$ be two squares of side $1$ centred at $(r+1,\pm(2R+1))$,
$z^\pm=(t^\pm,x^\pm)$ be two points in the interior of $Q_R^\pm$ and
$\{z^{\pm}_\delta\}_\delta\mathfrak{s}ubset Q^\pm_R\mathfrak{c}ap (\mathbb{D}_\delta)$ be sequences converging to $z^\pm$.
We set
\begin{equ}
E_R^\delta\eqdef\{\mathfrak{s}up_{0\geq s\geq -r}|\pi^{\downarrow,\delta}_0(s)|\leq R\,,\,\mathfrak{s}up_{t_\delta^\pm\geq s\geq -r}|\pi^{\downarrow,\delta}_{z_\delta^\pm}(s)-x_\delta^\pm|\leq R\}
\end{equ}
Notice that, as was shown in~\mathfrak{c}ite[Eq. (4.9) and (3.5)]{CHbwt},
\begin{equ}[e:RP]
\liminf_{\delta\downarrow 0} \mathbb{P}_\delta(E_R^\delta)\geq1-\frac{\mathfrak{s}qrt{r}}{R} e^{-\frac{R^2}{2r}}\,.
\end{equ}
Moreover, on $E_R^\delta$,
$\tilde M^\downarrow_\delta((\mathscr{T}^\downarrow_\delta)^{(r)})\mathfrak{s}ubset\Lambda_{r,R}^\delta\eqdef([-r,r]\times[-3R-1,3R+1])\mathfrak{c}ap \mathbb{D}^\downarrow_\delta$.
Now, for any positive integer $k$, $\mathfrak{s}up_\mathfrak{s}z |N^a_{\gamma}(\mathfrak{s}z)-N_{\gamma}(\mathfrak{s}z)|>k (2\gamma)^{-1/2}$
only if there exists $\mathfrak{s}z\in (\mathscr{T}^\downarrow_\delta)^{(r)}$ and a neighbourhood of $\mathfrak{s}z$ of size $a=(2\gamma)^{-p}$
which contains more than $k$ $\mu_\gamma$-points. By the definition of $\mu_\gamma$ in~\eqref{e:numeasure},
this implies that there must exist $i=0,\dots, \lceil 2r a^{-1}\rceil$ such that the rectangle
$([t_{i+1},t_i]\times\mathbb{R})\mathfrak{c}ap \Lambda_{r,R}^\delta$, $t_i\eqdef r-2 ia$, contains at least
$k$ $\mu^\bullet_\gamma$-points.
These considerations together with~\eqref{e:RP}, lead to the bound
\begin{equs}
\mathbb{P}\Big(\mathfrak{s}up_{\mathfrak{s}z\in(\mathscr{T}^\downarrow_\delta)^{(r)}}|N^a_{\gamma}(\mathfrak{s}z)-N_{\gamma}(\mathfrak{s}z)|>k\gamma^{-\tfrac12}\Big)\lesssim
\frac{\mathfrak{s}qrt{r}}{R} e^{-\frac{R^2}{2r}} + 2r \delta^{-2p}(4\delta^{2p-3} R)^k\,.
\end{equs}
Therefore, taking $R=\delta^{-1}$, choosing $k$ sufficiently large so that $k>p/(p-2)$, and
then $\delta$ sufficiently small,~\eqref{e:DistBM} follows at once.
\end{proof}
We are now ready to show that the law of the smoothened $0$-BD tree converges to the Brownian Castle measure
$\mathcal{P}_\mathrm{bc}$ of Theorem~\ref{thm:BSPT}.
\begin{theorem}\label{thm:0-BDConv}
Let $\alpha\in(0,1/2)$ and, for $\delta\in(0,1]$ and $p>1$, $a=(2\gamma)^{-p}$ and
$\mathcal{P}^\delta_{{0\text{-}\mathrm{bd}}}$ be the law of the smoothened $0$-BD tree
given in~\eqref{e:0-BDMeasure}. Then, as $\delta\downarrow 0$, $\mathcal{P}^\delta_{{0\text{-}\mathrm{bd}}}$ converges to $\mathcal{P}_\mathrm{bc}$ weakly on
$\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$, for any $\beta<\beta_\mathrm{Poi}=\frac{1}{2p}$.
\end{theorem}
\begin{proof}
Since $\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$ is a metric space,
it suffices to show convergence when testing against any Lipschitz continuous bounded function $F$.
By~\eqref{e:LawBC} and~\eqref{e:0-BDMeasure},
we see that $|\int F(\chi)(\mathcal{P}^\delta_{{0\text{-}\mathrm{bd}}}-\mathcal{P}_\mathrm{bc})(d\chi)| \le I_1 + I_2$ with
\begin{equs}
I_1&\eqdef\Big|\int\int F(\chi)\Big(\mathbb{C}Q^{\mathrm{Poi}_{\gamma}}_\zeta(\mathrm{d} \chi)-\mathbb{C}Q^{\mathrm{Gau}}_\zeta(\mathrm{d} \chi)\Big)\Theta^\downarrow_\delta(\mathrm{d}\zeta)\Big|\\
I_2&\eqdef\Big|\int \Big(\int F(\chi)\mathbb{C}Q^{\mathrm{Gau}}_\zeta(\mathrm{d} \chi)\Big)\Big(\Theta^\downarrow_\delta(\mathrm{d}\zeta)-\Theta^\downarrow_\mathrm{bw}(\mathrm{d}\zeta)\Big)\Big|\,.
\end{equs}
Since $\Theta^\downarrow_\delta$ converges by Theorem~\ref{thm:DWTConv}, for every $\eps>0$ we can find
a compact subset $K_\eps \subset \mathbb{C}^\alpha_\mathrm{sp}$ with
$\sup_{\delta>0}\Theta^\downarrow_\delta(K_\eps)\geq 1-\eps$.
Hence,
\begin{equs}
I_1&\leq \Big|\int_{K_\eps}\int F(\chi)\Big(\mathbb{C}Q^{\mathrm{Poi}_{\gamma}}_\zeta(\mathrm{d} \chi)-\mathbb{C}Q^{\mathrm{Gau}}_\zeta(\mathrm{d} \chi)\Big)\Theta^\downarrow_\delta(\mathrm{d}\zeta)\Big|+2\|F\|_\infty\eps\\
&\leq \sup_{\zeta\in K_\eps}\Big|\int F(\chi)\Big(\mathbb{C}Q^{\mathrm{Poi}_{\gamma}}_\zeta(\mathrm{d} \chi)-\mathbb{C}Q^{\mathrm{Gau}}_\zeta(\mathrm{d} \chi)\Big)\Big| +2\|F\|_\infty\eps\,.
\end{equs}
As $\delta\to 0$ the first term converges to $0$ by Proposition~\ref{p:PPtoGau} and,
since the left hand side is independent of $\eps$, we conclude that $I_1\to 0$.
Finally, Proposition~\ref{p:MeasG} and the Lipschitz continuity of $F$ imply that the map
$\zeta\mapsto \int F(\chi)\mathbb{C}Q^{\mathrm{Gau}}_\zeta(\mathrm{d} \chi)$
is continuous so that $I_2 \to 0$ by Theorem~\ref{thm:DWTConv}.
\end{proof}
\subsection{The $\boldsymbol{0}$-BD model converges to the Brownian Castle}\label{sec:Conv}
In order to establish the convergence of the $0$-Ballistic Deposition model to the Brownian Castle,
let $\delta>0$ and let $\chi^\delta_{{0\text{-}\mathrm{bd}}}$ be the $0$-BD tree given in Definition~\ref{def:0-BDtree}.
Let $\mathfrak{h}_0^\delta\in D(\mathbb{R},\mathbb{R})$ and, as in~\eqref{e:BC}, set
\begin{equ}[e:Version0BD]
\mathfrak{h}_{{0\text{-}\mathrm{bd}}}^\delta(z)\eqdef \mathfrak{h}_0^\delta(M^\downarrow_{\delta,x}(\rho^\downarrow_\delta(\mathfrak{T}_\delta(z),0)))+ N_\gamma(\mathfrak{T}_\delta(z))- N_\gamma(\rho^\downarrow_\delta(\mathfrak{T}_\delta(z),0))
\end{equ}
for all $z\in\mathbb{R}_+\times\mathbb{R}$, where $\mathfrak{T}_\delta$ is the tree map associated to $\zeta^\downarrow_\delta$ of Definition~\ref{def:TreeM}.
Even though $\chi_{{0\text{-}\mathrm{bd}}}^\delta$ is not
a characteristic branching spatial tree,~\eqref{e:Version0BD} still makes sense and provides a version
(say, in $D(\mathbb{R}_+,D(\mathbb{R},\mathbb{R}))$) of the rescaled and centred $0$-BD in the sense that its $k$-point
distributions agree with those of $h_{{0\text{-}\mathrm{bd}}}^\delta$ in~\eqref{e:Scaled}.
Before proving Theorem~\ref{thm:convTime}, let us state the following lemma which will be needed in the proof.
\begin{lemma}\label{l:Points0}
Let $\zeta^\downarrow_\mathrm{bw}$ be the backward Brownian Web tree
of Definition~\ref{def:BW} and $A \subset \mathbb{R}$ be a fixed subset of measure $0$. Then, with probability $1$,
\begin{equ}
\{M^\downarrow_{\mathrm{bw},x}(\rho^\downarrow(\mathfrak{s}z,0))\colon M^\downarrow_{\mathrm{bw},t}(\mathfrak{s}z)>0\}\cap A=\emptyset\,.
\end{equ}
\end{lemma}
\begin{proof}
It suffices to note that, by Theorem~\ref{thm:BW}, $\zeta^\downarrow_\mathrm{bw}\eqlaw \tilde\zeta^\downarrow(\mathbb{Q}^2)$
and that,
for a Brownian motion $B$, one has $\mathbb{P}(B_t \in A) = 0$ for any fixed $t > 0$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:convTime}]
By Theorem~\ref{thm:0-BDConv} and Skorokhod's representation theorem, there exists a probability
space supporting the random variables $\chi_{{0\text{-}\mathrm{bd}}}^{\delta_n}$, $\tilde \chi_{{0\text{-}\mathrm{bd}}}^{\delta_n}$, $\tilde\zeta^\uparrow_{\delta_n}$, $\chi_\mathrm{bc}$ and $\zeta^\uparrow_\mathrm{bw}$ in such a way that the following
properties hold.
\begin{enumerate}[noitemsep]
\item The random variables $\tilde \chi_{{0\text{-}\mathrm{bd}}}^{\delta_n}$, $\chi_{{0\text{-}\mathrm{bd}}}^{\delta_n}$ and $\tilde\zeta^\uparrow_{\delta_n}$
are related by the constructions in Section~\ref{sec:graphical} and
Definitions~\ref{def:0-BDtree} and~\ref{def:Smooth0-BDtree}.
\item Similarly, the random variables $\chi_\mathrm{bc}$ and $\zeta^\uparrow_\mathrm{bw}$ are related by
the construction of Definition~\ref{def:DBW}.
\item One has $\tilde \chi_{{0\text{-}\mathrm{bd}}}^{\delta_n} \to \chi_\mathrm{bc}$ and $\tilde\zeta^\uparrow_{\delta_n} \to \zeta^\uparrow_\mathrm{bw}$
almost surely in $\mathbb{C}^{\alpha,\beta}_\mathrm{bsp}$ and $\mathbb{C}^\alpha_\mathrm{sp}$ respectively, for all
$\alpha\in(0,1/2)$ and $\beta<\beta_{\mathrm{Poi}}$.
\end{enumerate}
We consider this choice of random variables fixed from now on and, in order to shorten notations, we
will henceforth
replace $\delta_n$ by $\delta$ with the understanding that we only ever consider values of $\delta$
belonging to the fixed sequence.
We now define the countable set $D \mathfrak{s}ubset \mathbb{R}$ appearing in the statement of the theorem as
the set of times $t\in\mathbb{R}_+$ for which there is $x\in\mathbb{R}$ with $(t,x)\in S^\downarrow_{(0,3)}$
(see Definition~\ref{def:Type}). Our goal is then to exhibit a set of full measure
such that, for every $T\notin D$, every $R>0$ and every $\eps>0$, there exists $\delta>0$ and
$\lambda\in\Lambda([-R,R])$
for which $\gamma(\lambda)\vee d^{[-R,R]}_\lambda(\mathfrak{h}_\mathrm{bc}(T,\cdot), \mathfrak{h}_{{0\text{-}\mathrm{bd}}}^\delta(T,\cdot))<\eps$.
The proof will be divided into four steps, but before delving into the details we will need some
preliminary considerations.
We henceforth consider a sample of the random variables mentioned above as given and we fix
some arbitrary $T \notin D$ and $R, \eps > 0$. Since
the sets $\{\chi_\mathrm{bc},\,\chi_{{0\text{-}\mathrm{bd}}}^\delta\}_\delta$ and $\{\tilde\zeta^\uparrow_\mathrm{bw},\tilde\zeta^\uparrow_\delta\}_{\delta}$
are compact, point 3.\ of Proposition~\ref{p:Compactness} and~\eqref{e:Dist_p_Mp} imply
that if we choose
$r>2 \sup_{\delta}b_{\tilde\zeta^\downarrow_\delta}(2 (R\vee T))\vee b_{\tilde\zeta^\uparrow_\delta}(2(R\vee T))$
then, for all $\delta$ and $\bigcdot\in\{\downarrow,\uparrow\}$,
\begin{equ}
(M^{\bigcdot}_\delta)^{-1}(\{T\}\times[-R,R])\subset \mathscr{T}^{\bigcdot,\,(r)}_\delta\,,
\end{equ}
where $M^{\bigcdot}_\delta$ are the evaluation maps in Definition~\ref{def:DWT} (for the non-interpolated trees).
Invoking once more Proposition~\ref{p:Compactness}, we also know that the constant $C_r>0$ given by
\begin{equ}
C_r\eqdef \sup\{\|M^{\bigcdot}_\mathrm{bw}\|_\alpha^{(r)},\, \|\tilde M^{\bigcdot}_\delta\|_\alpha^{(r)},\,\|B_\mathrm{bc}\|_\beta^{(r)},\,\|N^a_\gamma\|_\beta^{(r)}\colon \delta\in(0,1]\}\vee 1
\end{equ}
is finite.
\noindent {\bf Step 1.} As a first step in our analysis, we want to determine a set of distinct points $y_1<\dots<y_{N+1}$
for which the modulus of continuity of $\mathfrak{h}_\mathrm{bc}$ on $\{T\}\times[y_i,y_{i+1})$ can be easily controlled.
Let $0<\eta_1<\eps$ be sufficiently small and $\tilde\Xi_{[-R,R]}^\downarrow(T,T-\eta_1)$ be defined
according to~\eqref{e:Xitilde}.
We order its elements in increasing order, i.e. $\tilde\Xi_{[-R,R]}^\downarrow(T,T-\eta_1)=\{x_1,\dots,x_N\}$ with
$x_1\eqdef\min \tilde\Xi_{[-R,R]}^\downarrow(T,T-\eta_1)$ and let $\{y_i\,:\,i=1,\dots,N+1\}$ be as in~\eqref{e:yis}.
Since $T\notin S^\downarrow_{(0,3)}=S^\uparrow_{(2,1)}$, arguing as in the proof of Proposition~\ref{p:BCcadlag},
there exists $t_\mathrm{c}\in(T-\eta_1,T)$ such that no pair of forward paths
starting before $T-\eta_1$ and passing through $\{T-\eta_1\}\times[x_1,x_N]$ coalesces at a time $s\in(t_\mathrm{c},T]$.
For each $i=1,\dots,N$, let $\mathfrak{s}x_i^+,\,\mathfrak{s}x_i^-$ be the points in $(M^\uparrow_\mathrm{bw})^{-1}(T-\eta_1,x_i)$ from which
the right-most and left-most forward paths from $(T-\eta_1,x_i)$ depart and such that $M^\uparrow_\mathrm{bw}(\rho^\uparrow(\mathfrak{s}x_i^-,T))=y_i$ and
$M^\uparrow_\mathrm{bw}(\rho^\uparrow(\mathfrak{s}x_i^+,T))=y_{i+1}$. Notice that these coincide with the right-most and
left-most point from $(T,x_i)$ defined in Remark~\ref{rem:Right-mostUA} unless $(T,x_i)\in S^\uparrow_{(0,3)}$.
\noindent {\bf Step 2.} As a second step, we would like to determine a sufficiently small $\delta$ and points $y_1^\delta<\dots<y_{N+1}^\delta$
which play the same role as the $y_i$'s, but for $\mathfrak{h}_{{0\text{-}\mathrm{bd}}}^\delta$, and are close to them.
Let $\eta<\tfrac12 \eta_1$ and let $M\geq 1$ be the number of endpoints of $\mathscr{T}_\mathrm{bw}^{\uparrow,(r),\eta}$, which is
finite by points 2.\ and 3.\ of Lemma~\ref{l:Trim}. Let $\eta_2>0$ be such that
\begin{equ}[e:eta2]
12C_r\eta_2^\alpha<\min\{|y_i-y_{i+1}|\colon i=1,\dots, N\}\wedge\frac{|T-t_\mathrm{c}|}{10 M}\,.
\end{equ}
Thanks to the fact that $\Delta_\mathrm{bsp}(\chi_\mathrm{bc}, \tilde\chi_{{0\text{-}\mathrm{bd}}}^\delta)\vee \Delta_\mathrm{sp}(\tilde\zeta^\uparrow_\delta, \zeta^\uparrow_\mathrm{bw})\to0$
and Lemma~\ref{l:Trim}, there exists $\delta=\delta(\eta_2)>0$ and correspondences
$\mathbb{C}C^{\downarrow}$, between $\mathscr{T}_\mathrm{bw}^{\downarrow,(r)}$ and $\mathscr{T}^{\downarrow,(r)}_\delta$, and $\mathbb{C}C^{\uparrow}$,
between $\mathscr{T}_\mathrm{bw}^{\uparrow,(r),\eta}$ and $\mathscr{T}_\delta^{\uparrow,(r),\eta}$ (see \eqref{def:Trim+} below), such that
\begin{equ}[e:DistBound]
\Delta^{\mathrm{c},\mathbb{C}C^{\downarrow}}_\mathrm{bsp}(\chi_\mathrm{bc}^{(r)}, \tilde\chi_{{0\text{-}\mathrm{bd}}}^{\delta,(r)})\vee \Delta_\mathrm{sp}^{\mathrm{c},\mathbb{C}C^{\uparrow}}(\zeta_\mathrm{bw}^{\uparrow,(r),\eta}, \tilde\zeta_\delta^{\uparrow,(r),\eta})<\eta_2\,.
\end{equ}
Let us define the subtrees $T^\uparrow_\mathrm{bw}$ and $T^\uparrow_\delta$
according to~\eqref{e:PathTree} and the corresponding spatial trees $Z^\uparrow_\mathrm{bw}$ and $\tilde Z^\uparrow_\delta$
as in~\eqref{e:PathSpTrees}.
For $1\le i \le N$ and $\bigcdot\in\{+,-\}$, define $\mathfrak{s}w_i^{\bigcdot}\eqdef\rho^\uparrow(\mathfrak{s}x_i^{\bigcdot}, T-\eta_1+\eta+\eta_2)$.
Applying Lemma~\ref{l:PathTrees}, it follows from~\eqref{e:Incl} that
$\llbracket \mathfrak{s}w_i^{\bigcdot},\rho^\uparrow(\mathfrak{s}w_i^{\bigcdot},T)\rrbracket\subset T^\uparrow_\mathrm{bw}$ and, by the definition of $T^\uparrow_\mathrm{bw}$, $T^\uparrow_\delta$
and of the path correspondence $\mathbb{C}C^{\uparrow}_\mathrm{p}$ of~\eqref{e:PathCorr},
there exists $\mathfrak{s}w_i^{\bigcdot,\delta}\in T^\uparrow_\delta$ such that
$(\mathfrak{s}w_i^{\bigcdot}, \mathfrak{s}w_i^{\bigcdot,\delta})\in\mathbb{C}C^{\uparrow}_\mathrm{p}$.
In the following lemma, we determine the $y_i^\delta$'s and complete the second step of the proof.
\begin{lemma}\label{l:Step2}
For $\eta_2$ as in~\eqref{e:eta2},
the set $Y_\delta\eqdef \{M^\uparrow_\delta(\rho^\uparrow_\delta(\mathfrak{s}w_i^{\bigcdot,\delta},T))\,:\,1\le i \le N\,,\bigcdot\in\{+,-\}\}$
contains exactly
$N+1$ points $y_1^\delta<\dots<y_{N+1}^\delta$ which satisfy
$|y_{i+1}^\delta-y_i^\delta|\geq \tfrac13\min_i\{|y_i-y_{i+1}|\}$. Moreover,
there exists no point $\mathfrak{s}z_\delta\in\mathscr{T}^\uparrow_\delta$ such that
$M^\uparrow_{\delta,t}(\mathfrak{s}z_\delta)<T-\eta_1-5M \eta_2$ and, for some $i$,
$y_i^\delta<M^\downarrow_{\delta,x}(\rho^\uparrow_\delta(\mathfrak{s}z_\delta,T))<y_{i+1}^\delta$.
\end{lemma}
\begin{proof}
In order to verify that $Y_\delta$ has at most $N+1$ points, it suffices to show that,
for all $i \in \{1,\ldots,N-1\}$, the rays starting at $\mathfrak{s}w_i^{+,\delta}$ and $\mathfrak{s}w_{i+1}^{-,\delta}$
coalesce before time $T$. By Lemma~\ref{l:PathTrees}, the distance between $\mathfrak{s}w_i^{+,\delta}$ and $\mathfrak{s}w_{i+1}^{-,\delta}$ is bounded by
\begin{equs}
d^\uparrow_\delta(\mathfrak{s}w_i^{+,\delta}, \mathfrak{s}w_{i+1}^{-,\delta}) \leq d^\uparrow_\mathrm{bw}(\mathfrak{s}w_i^{+}, \mathfrak{s}w_{i+1}^{-})+4M\eta_2\leq 2(t_\mathrm{c}-(T-\eta_1+\eta+\eta_2)+2M\eta_2)
\end{equs}
so that if $\bar s$ is the first time at which $\rho_\delta^\uparrow(\mathfrak{s}w_i^{+,\delta},\bar s)=\rho_\delta^\uparrow(\mathfrak{s}w_{i+1}^{-,\delta},\bar s)$
then, by~\eqref{e:eta2},
\begin{equ}
\bar s= T-\eta_1+\eta+\eta_2+\tfrac12 d^\uparrow_\delta(\mathfrak{s}w_i^{+,\delta}, \mathfrak{s}w_{i+1}^{-,\delta})\leq t_\mathrm{c}+(4M+1)\eta_2<T\;.
\end{equ}
Hence, the cardinality of $Y_\delta$ is not bigger than $N+1$ and we can order its elements as
$y_1^\delta\leq\dots\leq y_{N+1}^\delta$.
To show that the inequalities are strict, notice that, again by Lemma~\ref{l:PathTrees} and~\eqref{e:Dist_p_Mp}, we have
\begin{equs}[e:boundyydelta]
|y_i&-y_i^\delta|= |M^\uparrow_\mathrm{bw}(\rho^\uparrow(\mathfrak{s}w_i^{-},T))-M^\uparrow_{\delta,x}(\rho^\uparrow_\delta(\mathfrak{s}w_i^{-,\delta},T))|\\
\leq& |M^\uparrow_\mathrm{bw}(\rho^\uparrow(\mathfrak{s}w_i^{-},T))-\tilde M^\uparrow_{\delta,x}(\rho^\uparrow_\delta(\mathfrak{s}w_i^{-,\delta},T))|\\
&+|\tilde M^\uparrow_{\delta,x}(\rho^\uparrow_\delta(\mathfrak{s}w_i^{-,\delta},T))-M^\uparrow_{\delta,x}(\rho^\uparrow_\delta(\mathfrak{s}w_i^{-,\delta},T))| \lesssim C_r\eta_2^\alpha+\delta\leq \frac{1}{6}\min_i\{|y_i-y_{i+1}|\}\,.
\end{equs}
The lower bound on $|y_i^\delta-y_{i+1}^\delta|$ follows at once.
For the second part of the statement, we argue by contradiction and assume $\mathfrak{s}z_\delta\in\mathscr{T}^\uparrow_\delta$
is such that $M^\uparrow_{\delta,t}(\mathfrak{s}z_\delta)<T-\eta_1-5M\eta_2$ and
$y_i^\delta<M^\downarrow_{\delta,x}(\rho^\uparrow_\delta(\mathfrak{s}z_\delta,T))<y_{i+1}^\delta$.
Note that $I_{\mathfrak{s}z_\delta}\eqdef\llbracket\rho^\uparrow_\delta(\mathfrak{s}z_\delta,T-\eta_1+\eta),\rho^\uparrow_\delta(\mathfrak{s}z_\delta,T)\rrbracket\subset T_\delta^\uparrow$
since all the points in the segment are at distance at least $\eta+5M\eta_2$
from $\mathfrak{s}z_\delta$ and, by~\eqref{e:Incl}, $R_{\eta+5M\eta_2}(\mathscr{T}^{\uparrow,(r)}_\delta)\subset T_\delta^\uparrow$.
Hence, there exists $\mathfrak{s}w\in T_\mathrm{bw}^\uparrow$ such that
for all $s\in[T-\eta_1+\eta,T]$, $(\rho^\uparrow(\mathfrak{s}w,s),\rho^\uparrow_\delta(\mathfrak{s}z_\delta,s))\in\mathbb{C}C^{\uparrow}_\mathrm{p}$.
Now, $\mathfrak{s}w\in T_\mathrm{bw}^\uparrow$ and the latter is contained in $\mathscr{T}^{\uparrow,(r),\eta}_\mathrm{bw}$ by~\eqref{e:Incl},
therefore there must be a point $\bar\mathfrak{s}w\in\mathscr{T}^{\uparrow,(r)}_\mathrm{bw}$ such that $M_{\mathrm{bw},t}^\uparrow(\bar\mathfrak{s}w)\leq T-\eta_1$ and
$\rho^\uparrow(\bar\mathfrak{s}w,T-\eta_1+\eta)=\mathfrak{s}w$.
Since, by construction, all the rays in $\mathscr{T}^\uparrow_\mathrm{bw}$ starting before $T-\eta_1$ must coalesce before time $t_\mathrm{c}$
and the tree is characteristic,
$\mathfrak{s}w$ must be such that either $M^\uparrow_{\mathrm{bw},x}(\rho^\uparrow(\mathfrak{s}w,T-\eta_1+\eta+\eta_2))\geq M^\uparrow_{\mathrm{bw},x}(\mathfrak{s}w_i^+)$
or $M^\uparrow_{\mathrm{bw},x}(\rho^\uparrow(\mathfrak{s}w,T-\eta_1+\eta+\eta_2))\leq M^\uparrow_{\mathrm{bw},x}(\mathfrak{s}w_i^-)$.
Assume the first (the other case is analogous),
then, by the coalescing property, for all $s\geq t_\mathrm{c}$, $\rho^\uparrow(\mathfrak{s}w,s)=\rho^\uparrow(\mathfrak{s}w_i^+,s)$, which means that
$(\rho^\uparrow(\mathfrak{s}w_i^+,s), \rho^\uparrow_\delta(\mathfrak{s}z_\delta,s))\in\mathbb{C}C^{\uparrow}_\mathrm{p}$.
Therefore,
\begin{equs}
d^\uparrow_\delta(\rho^\uparrow_\delta(\mathfrak{s}z^i_\delta,T-t_\mathrm{c}), &\rho^\uparrow_\delta(\mathfrak{s}w_i^{+,\delta},T-t_\mathrm{c}))=|d^\uparrow_\delta(\rho^\uparrow_\delta(\mathfrak{s}z_\delta,T-t_\mathrm{c}), \rho^\uparrow_\delta(\mathfrak{s}w_i^{+,\delta},T-t_\mathrm{c}))\\
&-d^\uparrow(\rho^\uparrow(\mathfrak{s}w_i^+,T-t_\mathrm{c}),\rho^\uparrow(\mathfrak{s}w_i^+,T-t_\mathrm{c}))|<4M\eta_2\leq T-t_\mathrm{c}\,.
\end{equs}
However, the segment $I_{\mathfrak{s}z_\delta}$ cannot intersect either
$\llbracket\mathfrak{s}w_i^{-,\delta},\rho^\uparrow_\delta(\mathfrak{s}w_i^{-,\delta},T)\rrbracket$ or
$\llbracket\mathfrak{s}w_{i}^{+,\delta},\rho^\uparrow_\delta(\mathfrak{s}w_{i}^{+,\delta},T)\rrbracket$, since otherwise
$M^\downarrow_{\delta,x}(\rho^\uparrow_\delta(\mathfrak{s}z_\delta,T))=y_i^\delta$ or $y_{i+1}^\delta$.
This implies that $d^\uparrow_\delta(\rho^\uparrow_\delta(\mathfrak{s}z^i_\delta,T-t_\mathrm{c}), \rho^\uparrow_\delta(\mathfrak{s}w_i^{+,\delta},T-t_\mathrm{c}))>T-t_\mathrm{c}$,
which is a contradiction thus completing the proof.
\end{proof}
Before proceeding, let us introduce, for all $i=1,\dots,N$, the following trapezoidal regions
$\Delta_i$ and $\Delta^\delta_i$ in $\mathbb{R}^2$
\begin{equs}[e:Trap]
\Delta_i&\eqdef\bigcup_{s\in[T-\eta_1+\eta+\eta_2,T]}\{s\}\times[M^\uparrow_{\mathrm{bw},x}(\rho^\uparrow(\mathfrak{s}w_i^{-},s)), M^\uparrow_{\mathrm{bw},x}(\rho^\uparrow(\mathfrak{s}w_i^{+},s))]\\
\Delta^\delta_i&\eqdef\bigcup_{s\in[T-\eta_1+\eta+\eta_2,T]}\{s\}\times[M^\uparrow_{\delta,x}(\rho_\delta^\uparrow(\mathfrak{s}w_i^{-,\delta},s)), M^\uparrow_{\delta,x}(\rho_\delta^\uparrow(\mathfrak{s}w_i^{+,\delta},s))]\,.
\end{equs}
By Lemma~\ref{l:Step2} and the non-crossing property of forward and backward trajectories
(see Theorem~\ref{thm:DBW}\ref{i:Cross} and the construction of the double Discrete Web Tree in Definition~\ref{def:DWT}),
every pair of points $\mathfrak{s}z_1,\,\mathfrak{s}z_2\in\mathfrak{T}_\mathrm{bw}(\Delta_i)$ and $\mathfrak{s}z^\delta_1,\,\mathfrak{s}z^\delta_2\in\mathfrak{T}_\delta(\Delta_i^\delta)$
satisfies
$d^\downarrow_\mathrm{bw}(\mathfrak{s}z_1,\,\mathfrak{s}z_2)\leq 2\eta_1$ and $d^\downarrow_\delta(\mathfrak{s}z^\delta_1,\,\mathfrak{s}z^\delta_2)\leq 2(\eta_1+5M\eta_2)$.
Indeed, if there existed points $\mathfrak{s}z^\delta_1,\,\mathfrak{s}z^\delta_2\in\mathfrak{T}_\delta(\Delta_i^\delta)$,
for which $d^\downarrow_\delta(\mathfrak{s}z^\delta_1,\,\mathfrak{s}z^\delta_2)> 2(\eta_1+5M\eta_2)$,
then the paths $M^\downarrow_\delta(\rho^\downarrow_\delta(\mathfrak{s}z^\delta_i,\cdot))$ would coalesce before $T-\eta_1-5M\eta_2$.
This in turn would imply the existence of a forward path starting before
$T-\eta_1-5M\eta_2$ at a position $x$ lying in between
the two trajectories, which,
because of the non-crossing property, at time $T$ would be located strictly between $y_i^\delta$ and $y_{i+1}^\delta$,
thus contradicting the above lemma.
\noindent {\bf Step 3.} In this third step, we want to show that for every $i$ we can find a couple
$(\mathfrak{s}z^i,\mathfrak{s}z^i_\delta)\in\mathbb{C}C^{\downarrow}$ such that $\mathfrak{s}z^i_\delta\in \mathfrak{T}_\delta(\Delta^\delta_i)$ and
$\rho^\downarrow(\mathfrak{s}z^i,\bar s)\in \mathfrak{T}_\mathrm{bw}(\Delta_i)$ for some
$\bar s$ sufficiently close to $T$.
Let $i\in \{1,\dots,N\}$ and $\mathfrak{s}z_\delta^i\in \mathscr{T}^\downarrow_\delta$ be such that $M^\downarrow_\delta(\mathfrak{s}z^i_\delta)=(T,x)$ and
$x\in(y^\delta_i+6C_r\eta_2^\alpha, y_{i+1}^\delta-6C_r\eta_2^\alpha)$,
which exists thanks to Lemma~\ref{l:Step2} if we choose $\eta_2$ as in~\eqref{e:eta2}.
Clearly, $\mathfrak{s}z^i_\delta\in\mathfrak{T}_\delta(\Delta_i^\delta)$.
Let $\mathfrak{s}z^i\in\mathscr{T}^\downarrow_\mathrm{bw}$ be such that $(\mathfrak{s}z^i,\mathfrak{s}z^i_\delta)\in\mathbb{C}C^{\downarrow}$.
If $M^\downarrow_{\mathrm{bw},t}(\mathfrak{s}z^i)>T$,
since $d^\downarrow_\mathrm{bw}(\mathfrak{s}z^i,\rho^\downarrow(\mathfrak{s}z^i,T))=|M^\downarrow_{\mathrm{bw},t}(\mathfrak{s}z^i)-M^\downarrow_{\delta,t}(\mathfrak{s}z_\delta^i)|<\eta_2$
(the last inequality being a consequence of~\eqref{e:DistBound})
we have $|M^\downarrow_{\mathrm{bw},x}(\mathfrak{s}z^i)-M^\downarrow_{\mathrm{bw},x}(\rho^\downarrow(\mathfrak{s}z^i,T))|\leq C_r\eta_2^\alpha$. Hence
\begin{equs}
|M^\downarrow_{\mathrm{bw},x}(\rho^\downarrow(\mathfrak{s}z^i,T))-y_i|&\geq |y_i^\delta-M^\downarrow_{\delta,x}(\mathfrak{s}z_\delta^i)|-|M^\downarrow_{\delta,x}(\mathfrak{s}z_\delta^i)-M^\downarrow_{\mathrm{bw},x}(\mathfrak{s}z^i)|\\
&\quad -|M^\downarrow_{\mathrm{bw},x}(\mathfrak{s}z^i)-M^\downarrow_{\mathrm{bw},x}(\rho^\downarrow(\mathfrak{s}z^i,T))|\\
&\geq |y_i^\delta-M^\downarrow_{\delta,x}(\mathfrak{s}z^i_\delta)|-\eta_2-\|M\|_\alpha^{(r)}\eta_2^\alpha>0
\end{equs}
where the last passage holds thanks to our choice of $\eta_2$ in~\eqref{e:eta2},
and the same result can be shown with
$y_{i+1}$ in place of $y_i$.
If instead $M^\downarrow_{\mathrm{bw},t}(\mathfrak{s}z^i)\leq T$, by the H\"older continuity of $M^\uparrow_\mathrm{bw}$,
\begin{equs}
\sup_{s\in[T-\eta_2,T]}|M^\uparrow_\mathrm{bw}(\rho^\uparrow(\mathfrak{s}w_i^{-},s))-y_i|\vee |M^\uparrow_\mathrm{bw}(\rho^\uparrow(\mathfrak{s}w_i^{+},s))-y_{i+1}|&\leq C_r \eta_2^\alpha\,,
\end{equs}
so that we can argue as above and show
$|M^\downarrow_{\mathrm{bw},x}(\mathfrak{s}z)-M^\uparrow_\mathrm{bw}(\rho^\uparrow(\mathfrak{s}w_i^{-},t))|\wedge |M^\downarrow_{\mathrm{bw},x}(\mathfrak{s}z)-M^\uparrow_\mathrm{bw}(\rho^\uparrow(\mathfrak{s}w_i^{+},t))|>0$.
As a consequence of the coalescing property and the previous bounds,
for all points $\mathfrak{s}w\in \mathfrak{T}_\mathrm{bw}(\Delta_i)$ and $\mathfrak{s}w_\delta\in \mathfrak{T}_\delta(\Delta^\delta_i)$ we have
\begin{equ}[e:DistanceBounds]
d^\downarrow_\mathrm{bw}(\mathfrak{s}w,\mathfrak{s}z^i)\leq \eta_2 +2\eta_1\,,\qquad d^\downarrow_\delta(\mathfrak{s}w_\delta,\mathfrak{s}z_\delta^i)\leq 2(\eta_1+5M\eta_2)\,.
\end{equ}
\noindent{\bf Step 4.} We can now exploit what we obtained so far, go back to the height functions $\mathfrak{h}_\mathrm{bc}$ and $\mathfrak{h}_{{0\text{-}\mathrm{bd}}}^\delta$,
and complete the proof.
First, let $\lambda:\mathbb{R}\to\mathbb{R}$ be the continuous function such that
$\lambda(y_i)=y_i^\delta$ for all $i$,
interpolating linearly between these points, and $\lambda'(x) = 1$ for $x \not\in [y_1,y_{N+1}]$.
In particular, one has $\lambda(x)\in[y_i^\delta,y_{i+1}^\delta)$ if $x\in[y_i,y_{i+1})$.
Note that as a consequence of \eqref{e:boundyydelta}, we can choose $\eta_1$ and $\delta$
sufficiently small so that $\gamma(\lambda) \le \eps$.
Let $x\in [y_i,y_{i+1})$. Then,
\begin{equs}
|\mathfrak{h}_\mathrm{bc}(T,x)&-\mathfrak{h}_{{0\text{-}\mathrm{bd}}}^\delta(T,\lambda(x))|\leq|\mathfrak{h}_0(M^\downarrow_{\mathrm{bw},x}(\rho^\downarrow(\mathfrak{s}z^i,0)))-\mathfrak{h}_0^{\delta}(M^\downarrow_{\delta,x}(\rho^\downarrow_\delta(\mathfrak{s}z_\delta^i,0)))| \label{e:FinalB}\\
&+ |B_\mathrm{bc}(\mathfrak{T}(T,x))-N_\gamma(\mathfrak{T}_\delta(T,\lambda(x)))|
+|B_\mathrm{bc}(\rho^\downarrow(\mathfrak{s}z^i,0))-N_\gamma(\rho^\downarrow_\delta(\mathfrak{s}z^i_\delta,0))|\,,
\end{equs}
where we chose $\eta_1$ and $\eta_2$ sufficiently small, so that $T-\eta_1-5M\eta_2>0$ and consequently,
by~\eqref{e:DistanceBounds}, $\rho^\downarrow(\mathfrak{T}(T,x),0)=\rho^\downarrow(\mathfrak{s}z^i,0)$ and
$\rho^\downarrow_\delta(\mathfrak{T}_\delta(T,\lambda(x)),0)=\rho^\downarrow_\delta(\mathfrak{s}z^i_\delta,0)$.
For the second term in~\eqref{e:FinalB} we exploit the H\"older continuity of $B_\mathrm{bc}$
and $N^a_\gamma$,~\eqref{e:DistBM},~\eqref{e:DistanceBounds} and~\eqref{e:DistBound} which give
\begin{equs}
|B_\mathrm{bc}(\mathfrak{T}(T,x))&-N_\gamma(\mathfrak{T}_\delta(T,\lambda(x)))| \leq |B_\mathrm{bc}(\mathfrak{T}(T,x))-B_\mathrm{bc}(\mathfrak{s}z^i)| +|B_\mathrm{bc}(\mathfrak{s}z^i)-N^a_\gamma(\mathfrak{s}z_\delta^i)|\\
&+|N^a_\gamma(\mathfrak{s}z_\delta^i)-N^a_\gamma(\mathfrak{T}_\delta(T,\lambda(x)))|
+|N^a_\gamma(\mathfrak{T}_\delta(T,\lambda(x)))-N_\gamma(\mathfrak{T}_\delta(T,\lambda(x)))|\\
&\lesssim (\eta_2+2\eta_1)^\beta+\eta_2+(\eta_1+5M\eta_2)^\beta+\delta\,.\label{e:BranchBound}
\end{equs}
For the last term in~\eqref{e:FinalB}, arguing as in the proof of~\cite[Lemma 2.28]{CHbwt} (replacing
$M_1$ and $M_2$ by $B$ and $N^a_\gamma$ in the statement) and using~\eqref{e:DistBM}, we have
\begin{equs}
|B_\mathrm{bc}(\rho^\downarrow(\mathfrak{s}z^i,0))-N_\gamma(\rho^\downarrow_\delta(\mathfrak{s}z^i_\delta,0))|\leq& |B_\mathrm{bc}(\rho^\downarrow(\mathfrak{s}z^i,0))-N^a_\gamma(\rho^\downarrow_\delta(\mathfrak{s}z^i_\delta,0))| \label{e:Branch0}\\
&+|N^a_\gamma(\rho^\downarrow_\delta(\mathfrak{s}z^i_\delta,0))-N_\gamma(\rho^\downarrow_\delta(\mathfrak{s}z^i_\delta,0))|\lesssim C_r\eta_2^\beta+\delta\,.
\end{equs}
It remains to treat the initial condition. To do so, we make use of~\eqref{e:DistBound},~\cite[Lemma 2.28]{CHbwt}
and~\eqref{e:Dist_p_Mp}, which give
\begin{equs}[e:initial]
|M^\downarrow_{\mathrm{bw},x}(\rho^\downarrow(\mathfrak{s}z^i,0))-&M^\downarrow_{\delta,x}(\rho^\downarrow_\delta(\mathfrak{s}z_\delta^i,0))|\leq |M^\downarrow_{\mathrm{bw},x}(\rho^\downarrow(\mathfrak{s}z^i,0))-\tilde M^\downarrow_{\delta,x}(\rho^\downarrow_\delta(\mathfrak{s}z_\delta^i,0))|\\
&+|\tilde M^\downarrow_{\delta,x}(\rho^\downarrow_\delta(\mathfrak{s}z_\delta^i,0))- M^\downarrow_{\delta,x}(\rho^\downarrow_\delta(\mathfrak{s}z_\delta^i,0))|\lesssim C_r\eta_2^\alpha+\delta\,.
\end{equs}
Now, by Lemma~\ref{l:Points0}, with probability $1$ we have
\begin{equ}
\{M^\downarrow_{\mathrm{bw},x}(\rho^\downarrow(\mathfrak{s}z,0))\colon M^\downarrow_{\mathrm{bw},t}(\mathfrak{s}z)>0\}\cap\mathrm{Disc}(\mathfrak{h}_0)=\emptyset\,.
\end{equ}
In particular, for all $i$, $M^\downarrow_{\mathrm{bw},x}(\rho^\downarrow(\mathfrak{s}z^i,0))$ is a continuity point of
$\mathfrak{h}_0$ and by assumption $d_\mathrm{Sk}(\mathfrak{h}_0^\delta, \mathfrak{h}_0)\to 0$.
By choosing $\eta_1$, $\eta_2$ and $\delta$ sufficiently small,
we can guarantee on the one hand that each of~\eqref{e:BranchBound} and~\eqref{e:Branch0} is smaller than $\eps/3$,
while on the other that the distance between
$M^\downarrow_{\mathrm{bw},x}(\rho^\downarrow(\mathfrak{s}z^i,0))$ and $M^\downarrow_{\delta,x}(\rho^\downarrow_\delta(\mathfrak{s}z_\delta^i,0))$ is arbitrarily small.
This, together with the fact that by assumption $d_\mathrm{Sk}(\mathfrak{h}_0^\delta, \mathfrak{h}_0)\to 0$ and that $M^\downarrow_{\mathrm{bw},x}(\rho^\downarrow(\mathfrak{s}z^i,0))$
is a continuity point for $\mathfrak{h}_0$, implies that also the third term in~\eqref{e:FinalB} can be made smaller than $\eps/3$.
We conclude that $\gamma(\lambda)\vee d^{[-R,R]}_\lambda(\mathfrak{h}_\mathrm{bc}(T,\cdot), \mathfrak{h}_{{0\text{-}\mathrm{bd}}}^\delta(T,\cdot))<\eps$ as required to complete the proof.
\end{proof}
\begin{appendix}
\section{The smoothened Poisson process}\label{app:SmoothPoisson}
In this appendix, we derive bounds on the Orlicz norm of the increment of a smoothened version of the Poisson process.
Let $a>0$ and $\psi_a$ be a smooth non-negative function supported in $[-a,0]$ or $[0,a]$ such that
$\int \psi_a(x)\mathrm{d} x=1$. For $\lambda>0$ let $\mu_\lambda$ be a Poisson random measure on $\mathbb{R}_+$ with
intensity measure $\lambda \ell$, where $\ell$ is the Lebesgue measure on $\mathbb{R}_+$, and define the
{\it rescaled compensated smoothened Poisson process} $P^{a}_\lambda$ and
the rescaled compensated Poisson process $P_\lambda$ respectively by
\begin{equation}\label{def:SmoothRCPP}
P^{a}_\lambda(t) \eqdef \frac{1}{\sqrt{\lambda}}\Big(\int_0^t\psi_a\ast\mu_\lambda(s)\mathrm{d} s-\lambda t\Big) \;,\qquad P_\lambda(t) \eqdef \frac{1}{\sqrt{\lambda}}\big(\mu_\lambda([0,t])-\lambda t\big)\;.
\end{equation}
Then, the following lemma holds.
\begin{lemma}\label{l:SmoothPoisson}
In the setting above, let $P^{a}_\lambda$ be the rescaled compensated smoothened Poisson process on $[0,T]$
defined in~\eqref{def:SmoothRCPP}.
Let $p>1$ and assume $a\lambda^p=1$. Then, there exists a positive constant $C$ depending only on $T$
such that for every $0\leq s<t\leq T$, we have
\begin{equation}\label{b:SmoothRCPP}
\| P^{a}_\lambda(t)-P^{a}_\lambda(s)\|_{\varphi_1}\leq C (t-s)^{\frac{1}{2p}}\,,
\end{equation}
where the norm appearing on the left hand side is the Orlicz norm defined in~\eqref{def:Orlicz} with
$\varphi_1(x)\eqdef e^x-1$.
\end{lemma}
\begin{proof}
We prove the result for $\psi_a$ supported in $[-a,0]$, the other case being analogous.
Also, writing $P(t) = P_1(t)$, we have $\mathbb{E} e^{P(t)/c} = \exp(t(e^{1/c}-1-1/c))$ by the moment
generating function of the Poisson distribution, so that
\begin{equ}[e:boundPt]
\|P(t)\|_{\varphi_1} \lesssim 1 + \sqrt t\;.
\end{equ}
Fix $0\leq s<t\leq T$ and consider first the case $t-s\geq a$. Notice that we have
\begin{equs}
P^{a}_\lambda(t)-P^{a}_\lambda(s)&=\frac{1}{\mathfrak{s}qrt{\lambda}}\Big(\int_\mathbb{R} \left(\psi_a(t-u)-\psi_a(s-u)\right)\mu_\lambda([0,u])\mathrm{d} u -\lambda (t-s)\Big)\\
&\leq P_\lambda(t+a)-P_\lambda(s) +\sqrt{\lambda}a \eqlaw {1\over \sqrt \lambda}P\big(\lambda(a+t-s)\big) + \sqrt \lambda a\;.
\end{equs}
It follows from \eqref{e:boundPt} and the triangle inequality that
\begin{equ}
\| P^{a}_\lambda(t)-P^{a}_\lambda(s)\|_{\varphi_1}
\lesssim \sqrt{t-s+a} + {1\over \sqrt \lambda} + \sqrt \lambda a
\lesssim \sqrt{t-s} + {1\over \sqrt \lambda}\lesssim (t-s)^{1\over 2p}\;.
\end{equ}
For $t-s<a$, we bound the increment of $P^a_\lambda$ by
\begin{equs}
P^{a}_\lambda(t)- P^{a}_\lambda(s)&=\frac{1}{\mathfrak{s}qrt{\lambda}}\Big(\int_s^t\int_u^{u+a} \psi_a(u-r) \mu_\lambda(\mathrm{d} r)\mathrm{d} u-\lambda(t-s)\Big)\\
&\leq \frac{1}{\sqrt{\lambda}}\Big(\frac{1}{a}\int_s^t \mu_\lambda([u,u+a])\mathrm{d} u-\lambda(t-s)\Big)\eqlaw\frac{t-s}{\sqrt \lambda a} P(\lambda a)\;.
\end{equs}
Since $\lambda a \le 1$, it follows that in this case
$\| P^{a}_\lambda(t)-P^{a}_\lambda(s)\|_{\varphi_1} \lesssim \frac{t-s}{\sqrt \lambda a} \lesssim (t-s)^{1\over 2p}$ as claimed.
\end{proof}
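Although not needed for the proof, the bound~\eqref{b:SmoothRCPP} can be probed numerically.
The following Python sketch is purely illustrative: the box mollifier, the parameter values and
the Monte Carlo estimator of the $\varphi_1$-Orlicz norm are our own choices and are not taken
from the text. It samples increments of $P^a_\lambda$ as defined in~\eqref{def:SmoothRCPP} and
compares their estimated Orlicz norm with $(t-s)^{1/(2p)}$, i.e.\ with the right-hand side
of~\eqref{b:SmoothRCPP} up to the constant $C$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_increment(lam, a, s, t, T=1.0):
    """One sample of P^a_lam(t) - P^a_lam(s) for the box mollifier
    psi_a = a^{-1} 1_{[-a,0]}, so that (psi_a * mu_lam)(u) = mu_lam([u, u+a]) / a."""
    n = rng.poisson(lam * (T + a))
    pts = rng.uniform(0.0, T + a, size=n)       # Poisson points of intensity lam
    # int_s^t (psi_a * mu_lam)(u) du = sum over points r of |[s,t] cap [r-a,r]| / a
    overlap = np.clip(np.minimum(t, pts) - np.maximum(s, pts - a), 0.0, None)
    return (overlap.sum() / a - lam * (t - s)) / np.sqrt(lam)

def orlicz_norm(samples):
    """Monte Carlo estimate of the varphi_1-Orlicz norm, varphi_1(x) = e^x - 1."""
    lo, hi = 1e-2, 1e2                           # bisection on the scale parameter c
    while hi / lo > 1.001:
        c = np.sqrt(lo * hi)
        if np.mean(np.expm1(np.abs(samples) / c)) <= 1.0:
            hi = c
        else:
            lo = c
    return np.sqrt(lo * hi)

p, lam = 2.0, 200.0
a = lam ** (-p)                                  # so that a * lam^p = 1
for dt in (0.5, 0.1, 0.02):
    incs = np.array([sample_increment(lam, a, 0.2, 0.2 + dt) for _ in range(2000)])
    print(dt, orlicz_norm(incs), dt ** (1.0 / (2.0 * p)))
\end{verbatim}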
\section{On the cardinality of the coalescing point set of the Brownian Web}
The aim of this appendix is to derive a uniform bound on the cardinality of the coalescing point set of
the Brownian Web Tree $\zeta^\downarrow_\mathrm{bw}$ of Theorem~\ref{thm:BW}.
Let $R,\,\eps>0$ and $t\in\mathbb{R}$, and set
\begin{align}
\Xi_R(t,t-\eps)&\eqdef\{\rho^\downarrow(\mathfrak{s}z,t-\eps)\colon M^\downarrow_{\mathrm{bw}}(\mathfrak{s}z)\in [t,\infty)\times[-R,R]\}\label{e:CPS}\\
\eta_R(t,\eps)&\eqdef \#\Xi_R(t,t-\eps)\label{e:CPScard}
\end{align}
where, for a set $A$, $\#A$ denotes its cardinality. Then, the following lemma holds.
\begin{lemma}\label{l:CPScard}
Almost surely, for any $\varsigma>1/2$ and all $r,\,R>0$ there exists a constant $C=C(r,R)$ such that
\begin{equ}[e:CPSbound]
\eta_R(t,\eps)\leq C\eps^{-\varsigma}\,,
\end{equ}
for all $t\in[-r,r]$ and $\eps\in(0,1]$.
\end{lemma}
\begin{proof}
Notice first that, by a simple duality argument, $\eta_R(t,\eps)$ is equidistributed with
$\hat\eta(t,t+\eps;-R,R)$ of~\cite[Definition 2.1]{FINR}. According to~\cite[Lemma C.2]{GSW},
for any $R,\,t,\,\eps$, $\eta_R(t,\eps)$ is a negatively correlated point process with intensity measure
$\tfrac{2R}{\sqrt{\eps}}\lambda$, where $\lambda$ is the Lebesgue measure on $\mathbb{R}$.
In particular,~\mathfrak{c}ite[Lemma C.5]{GSW} implies that, for any $p>1$
\begin{equ}[e:MB]
\mathbb{E}[\eta_R(t,\eps)^p]\lesssim_p \left(\frac{2R}{\sqrt{\eps}}\right)^{p}\,,
\end{equ}
where the hidden constant depends only on $p$. Moreover, the random variables $\eta_R(t,\eps)$
are monotone in the following sense, for any $R,\,t,\,\eps$ we have
\begin{equ}[e:Mono]
\eta_R(t,\eps)\leq \eta_{R'}(t',\eps')
\end{equ}
for all $R'\geq R$, $t'\in(t-\eps,t]$ and $\eps'\in(0,\eps-(t-t')]$.
Let $R,\,r>0$ and, for $m\in\mathbb{N}$ and $k=0,\dots, 2\lceil r\rceil 2^m$, set $t_{k,m}\eqdef -r+k 2^{-m}$
and consider the event
\begin{equ}
E_m\eqdef\{\forall k=0,\dots,2\lceil r\rceil 2^m,\,n\in\mathbb{N},\quad \eta_R(t_{k,m},2^{-n})\leq 2^{m+1}R 2^{n\varsigma}\}\,.
\end{equ}
By Markov's inequality and~\eqref{e:MB}, we have
\begin{equs}
\mathbb{P}(E_m^c)&\leq \sum_{k=0}^{2\lceil r\rceil 2^m}\sum_n \mathbb{P}(\eta_R(t_{k,m},2^{-n})> 2^{m+1}R 2^{n\varsigma})\lesssim_p \lceil r\rceil 2^{m(1-p)} \sum_n 2^{-np(\varsigma-1/2)}
\end{equs}
which, for $\varsigma>1/2$, is finite and $O_r(2^{m(1-p)})$. Therefore, upon choosing $p>1$ and
applying Borel-Cantelli and~\eqref{e:Mono}, the statement follows at once.
\end{proof}
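The $\eps^{-1/2}$ growth in~\eqref{e:CPSbound} can be illustrated by a discrete caricature:
coalescing random walks started from every site of a large interval and merged whenever two of them
occupy the same site. The Python sketch below is not the dual discrete web used in the body of the
paper (walkers take independent steps and may cross without meeting), so only the square-root decay
of the number of surviving walkers is meant to be comparable; the last column is the heuristic
prediction $2R/\sqrt{\pi n}$, with the number of steps $n$ playing the role of $\eps^{-1}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def surviving_walkers(R_sites, n_steps, reps=100):
    """Average number of distinct positions after n_steps steps of random walks
    started from every site of {-R_sites, ..., R_sites}, merged whenever two
    walkers occupy the same site."""
    counts = []
    for _ in range(reps):
        pos = np.arange(-R_sites, R_sites + 1)
        for _ in range(n_steps):
            pos = np.unique(pos)                           # merge coalesced walkers
            pos = pos + rng.choice((-1, 1), size=pos.size)
        counts.append(np.unique(pos).size)
    return float(np.mean(counts))

R_sites = 200
for n in (25, 100, 400):
    print(n, surviving_walkers(R_sites, n), (2 * R_sites + 1) / np.sqrt(np.pi * n))
\end{verbatim}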
\section{Exit law of Brownian motion from the Weyl chamber}\label{app:Density}
For $n \ge 2$, we define the Weyl chamber $W_n$ as
\begin{equ}
W_n = \{x \in \mathbb{R}^n \,:\, x_1<\dots<x_n\}\;.
\end{equ}
Let $(B_t^x)_{t\ge 0}$ be a standard $n$-dimensional Brownian motion and,
given a sufficiently regular domain $W \subset \mathbb{R}^n$, let $\tau_W=\inf\{t>0\,:\, B^x_t\in \partial W\}$
and
\begin{equ}
P_t^W(x,y)\,\mathrm{d} y \eqdef \mathbb{P}(\tau_W > t, B_t^x \in \mathrm{d} y)\;,\qquad x,y \in W\;.
\end{equ}
We then have the following result.
\begin{theorem}\label{thm:Density}
Let $(B_t^x)_{t\ge 0}$ with $x \in W_n$ be as above and let $\tau=\tau_{W_n}$.
Then,
\begin{equation}
\mathbb{P}\left(\tau\in \mathrm{d} t, B_{\tau}^x\in \mathrm{d} y\right) = \partial_{n_y} P_t^{W_n}(x,y)\,\sigma_{W_n}(\mathrm{d} y) \,\mathrm{d} t \eqdef \nu_x(\mathrm{d} t,\mathrm{d} y)\;,
\end{equation}
where $\partial_{n_y}$ is the derivative in the inward normal direction at
$y\in\partial W_n$ and $\sigma_{W_n}$ is the surface measure on $\partial W_n$.
\end{theorem}
\begin{proof}
For smooth cones, the claim was shown for example in \cite[Thm~1.3]{BDB}, so it remains to perform an approximation argument.
Choose a sequence of smooth cones $W_n^{(\eps)}$ such that, for every $\eps > 0$, one has
$W_n^{(\eps)} \subset W_n$ and furthermore $W_n^{(\eps)} \cap C_\eps^c = W_n \cap C_\eps^c$,
where $C_\eps$ denotes those ``corner'' configurations where at least two distinct
pairs of points are at distance less than $\eps$ from each other.
Writing $\tau_\eps = \tau_{W_n^{(\eps)}}$, it follows immediately from these two
properties that
$P_t^{W_n^{(\eps)}}(x,y) \le P_t^{W_n}(x,y)$ for all $t \ge 0$ and $x,y \in W_n^{(\eps)}$, so that in particular,
~\cite[Thm~1.3]{BDB} implies
\begin{equ}
\nu_x^{(\eps)}(\mathrm{d} t,\mathrm{d} y) \eqdef \mathbb{P}\left(\tau_\eps \in \mathrm{d} t, B_{\tau_\eps}^x\in \mathrm{d} y\right)= \partial_{n_y} P_t^{W_n^{(\eps)}}(x,y)\,\sigma_{W_n}(\mathrm{d} y) \,\mathrm{d} t \le \nu_x(\mathrm{d} t,\mathrm{d} y)\;,
\end{equ}
for all $y \in \partial W_n \cap C_\eps^c$. Here, we also used the fact that $P_t^{W_n^{(\eps)}}$ and $P_t^{W_n}$ both
vanish on $\partial W_n \cap C_\eps^c$. We also note that $\nu_x$ is a probability measure, as can be seen by combining the
divergence theorem (on the space-time domain $\mathbb{R}_+\times W_n$) with the fact that $P_t^{W_n}$ solves the heat equation
on $W_n$ with Dirichlet boundary conditions and initial condition $\delta_x$.
On the other hand, writing $\mu_x^{(\eps)}$ for the (positive) measure such that
\begin{equ}
\mathbb{P}\left(\tau\in \mathrm{d} t, B_{\tau}^x\in \mathrm{d} y\right) = \nu_x^{(\eps)}(\mathrm{d} t,\mathrm{d} y \cap C_\eps^c) + \mu_x^{(\eps)}(\mathrm{d} t, \mathrm{d} y)\;,
\end{equ}
one has the bound
\begin{equ}
c_\eps \eqdef \mu_x^{(\eps)}(\mathbb{R}_+ \times \partial W_n) \le \mathbb{P}(\hat \tau_\eps \le \tau)\;,\qquad \hat \tau_\eps = \inf\{t \,:\, B_t^x \in C_\eps\}\;.
\end{equ}
Since $\tau < \infty$ almost surely and Brownian motion does not hit subspaces of codimension~$2$, we have
$\lim_{\eps \to 0} c_\eps = 0$.
For any two measurable sets $I \subset \mathbb{R}_+$ and $A \subset \partial W_n$ such that $A \cap C_\delta = \emptyset$ for some
$\delta > 0$, we then have
\begin{equ}
\mathbb{P}\left(\tau\in I, B_{\tau}^x\in A\right) \le \nu_x^{(\eps)}(I,A \cap C_\eps^c) + c_\eps
\le \nu_x(I,A) + c_\eps\;,
\end{equ}
for all $\eps \le \delta$, so that
\begin{equ}[e:almostGood]
\mathbb{P}\left(\tau\in \mathrm{d} t, B_{\tau}^x\in \mathrm{d} y\right) \le \nu_x(\mathrm{d} t, \mathrm{d} y) + \hat \mu(\mathrm{d} t, \mathrm{d} y)\;,
\end{equ}
where $\hat \mu$ is supported on $\mathbb{R}_+ \times \bigcap_{\eps > 0}C_\eps$. As before, one must have
$\hat \mu = 0$ since Brownian motion does not hit subspaces of codimension~$2$, so that the desired identity
follows from the fact that both $\nu_x$ and the left-hand side of \eqref{e:almostGood} are probability measures.
\end{proof}
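As a sanity check of Theorem~\ref{thm:Density}, the law of $\tau$ can also be estimated by direct
simulation. The Python sketch below is purely illustrative (all numerical parameters are arbitrary):
it uses a crude Euler scheme that only checks the ordering at grid times, hence slightly
overestimates survival probabilities, and for $n=2$ it compares the estimate of
$\mathbb{P}(\tau>t)$ with the exact value $\mathrm{erf}\big((x_2-x_1)/(2\sqrt{t})\big)$ obtained from
the reflection principle applied to the gap process.
\begin{verbatim}
import numpy as np
from math import erf

rng = np.random.default_rng(2)

def survival_probability(x, t, dt=1e-3, reps=4000):
    """Monte Carlo estimate of P(tau_{W_n} > t) for Brownian motion started at x
    in the Weyl chamber.  The ordering is only checked at grid times, so paths
    that leave and re-enter the chamber between checks are missed (upward bias)."""
    x = np.asarray(x, dtype=float)
    b = np.tile(x, (reps, 1))
    alive = np.ones(reps, dtype=bool)
    for _ in range(int(t / dt)):
        b += np.sqrt(dt) * rng.standard_normal(b.shape)
        alive &= np.all(np.diff(b, axis=1) > 0.0, axis=1)
    return alive.mean()

# n = 2: the gap is a one-dimensional BM of variance 2t started at x_2 - x_1,
# so the exact value is P(tau > t) = erf((x_2 - x_1) / (2 sqrt(t))).
x, t = (0.0, 1.0), 0.5
print(survival_probability(x, t), erf((x[1] - x[0]) / (2.0 * np.sqrt(t))))
\end{verbatim}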
\section{Trimming and path correspondence}\label{app:Trim}
In this appendix we introduce some further tools in the context of spatial trees, which play a major role
in the proof of Theorem~\ref{thm:convTime}.
Let $(\mathscr{T},\ast,d)$ be a pointed locally compact complete $\mathbb{R}$-tree and fix $\eta>0$.
We define the {\it $\eta$-trimming} of $\mathscr{T}$ as
\begin{equation}\label{def:trim}
R_\eta(\mathscr{T})\eqdef\{\mathfrak{s}z\in\mathscr{T}\,:\,\exists\text{ $\mathfrak{s}w\in\mathscr{T}$ such that $\mathfrak{s}z\in\llbracket\ast,\mathfrak{s}w\rrbracket$ and $d(\mathfrak{s}z,\mathfrak{s}w)\geq \eta$} \}\cup\{\ast\}\,.
\end{equation}
(The explicit inclusion of $\ast$ is only there to guarantee that $ R_\eta(\mathscr{T})$ is non-empty if $\mathscr{T}$ is
of diameter less than $\eta$.)
$ R_\eta(\mathscr{T})$ is clearly closed in $\mathscr{T}$ and furthermore, it is
a locally finite $\mathbb{R}$-tree.
With a slight abuse of notation, we denote again by $R_\eta$ the trimming of a spatial $\mathbb{R}$-tree, i.e. the map
$R_\eta\colon\mathbb{T}^\alpha_\mathrm{sp}\to\mathbb{T}^\alpha_\mathrm{sp}$ defined on $\zeta=(\mathscr{T},\ast,d,M)\in\mathbb{T}^\alpha_\mathrm{sp}$ as
$R_\eta(\zeta)= ( R_\eta(\mathscr{T}),\ast,d,M)$.
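For a rooted tree with finitely many vertices and prescribed edge lengths, the $\eta$-trimming can be
computed by keeping exactly those vertices that have a descendant at distance at least $\eta$, together
with the root. The following Python sketch implements this vertex-level analogue of~\eqref{def:trim};
it is only meant as an illustration, since $R_\eta$ acts on all points of the $\mathbb{R}$-tree and not
only on a finite vertex set, and the toy input at the end is our own example.
\begin{verbatim}
from collections import defaultdict

def eta_trim(parent, length, eta):
    """Vertex-level analogue of the eta-trimming R_eta of a rooted tree.

    parent[v] is the parent of vertex v (the root r has parent[r] = None) and
    length[v] is the length of the edge joining v to its parent.  A vertex is
    kept iff some vertex in its subtree lies at distance at least eta from it;
    the root is always kept."""
    children = defaultdict(list)
    root = None
    for v, p in parent.items():
        if p is None:
            root = v
        else:
            children[p].append(v)

    height = {}                 # max distance from v to a vertex in its subtree

    def compute(v):
        h = 0.0
        for c in children[v]:
            h = max(h, length[c] + compute(c))
        height[v] = h
        return h

    compute(root)
    return {v for v in parent if height[v] >= eta} | {root}

# Toy example: root -- a (length 1), a -- b (length 2), a -- c (length 0.1).
parent = {"r": None, "a": "r", "b": "a", "c": "a"}
length = {"r": 0.0, "a": 1.0, "b": 2.0, "c": 0.1}
print(eta_trim(parent, length, eta=0.5))    # {'r', 'a'}: 'b' and 'c' are trimmed away
\end{verbatim}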
In the following lemma we summarise further properties of the trimming map.
\begin{lemma}\label{l:Trim}
Let $\alpha\in(0,1)$. For all $\eta>0$, $R_\eta$ is continuous on $\mathbb{T}^\alpha_\mathrm{sp}$.
Moreover, if $\{(\mathscr{T}_a,\ast_a,d_a)\}_{a\in A}$, $A$ being an index set, is a family of compact pointed $\mathbb{R}$-trees then
\begin{enumerate}[noitemsep]
\item the Hausdorff distance between $ R_\eta(\mathscr{T}_a)$ and $\mathscr{T}_a$ is bounded above by $\eta$,
\item the number of endpoints of $ R_\eta(\mathscr{T}_a)$ is bounded above by $(c\eta)^{-1}\ell_a( R_{c\eta}(\mathscr{T}_a))<\infty$,
for any $c\in(0,1)$,
\item the family is relatively compact if and only if $\sup_{a\in A} \ell_a( R_{\eta}(\mathscr{T}_a))<\infty$ for all $\eta>0$.
\end{enumerate}
\end{lemma}
\begin{proof}
The continuity of the trimming map is an easy consequence of the definition of the metric
$\Delta_\mathrm{sp}$ in~\eqref{e:Metric},~\cite[Lemma 2.6(ii)]{EPW}
and~\cite[Lemma 2.17]{CHbwt}. Points 1., 2. and 3. were respectively shown in~\cite[Lemma 2.6(iv)]{EPW},
the proof of~\cite[Lemma 2.7]{EW06} and~\cite[Lemma 2.6]{EW06}.
\end{proof}
Let $\alpha\in(0,1)$ and $\zeta_1,\,\zeta_2\in\hat\mathbb{C}^\alpha_\mathrm{sp}$
(see Definition~\ref{def:CharTree}) be such that
\begin{equ}[e:ast]
M_{1,t}(\ast_1)=0=M_{2,t}(\ast_2)\,.
\end{equ}
For $j=1,2$, denote by $\rho_j$ the radial map of $\zeta_j$. For $r,\eta>0$, set
\begin{equ}[def:Trim+]
\mathscr{T}_j^{(r),\,\eta}\eqdef R_\eta \big(\mathscr{T}_j^{(r)}\cup\llbracket\ast_j,\rho_j(\ast_j,r+\eta)\rrbracket\big)
\end{equ}
and $\zeta_j^{(r),\,\eta}\eqdef (\mathscr{T}_j^{(r),\,\eta},\ast_j,d_j,M_j)$.
Assume there exists a correspondence $\mathbb{C}C$ between
$\zeta^{(r),\,\eta}_1$ and $\zeta^{(r),\,\eta}_2$ for which
\begin{equ}[e:InitialDist]
\mathbb{D}elta^{\mathrm{c}, \mathbb{C}C}_\mathrm{sp}(\zeta^{(r),\,\eta}_1,\zeta^{(r),\,\eta}_2 )<\eps\;,
\end{equ}
for some $\eps>0$.
Let $N_\eta$ be the number of endpoints of
$\zeta^{(r),\,\eta}_1$, which is finite by
Lemma~\ref{l:Trim} points 2.\ and 3.
We now number the endpoints of $\mathscr{T}^{(r),\,\eta}_1$ and denote them by $\{\tilde\mathfrak{s}v^1_i\,:\,i=0,\dots, N_\eta\}$,
where $\tilde\mathfrak{s}v^1_0\eqdef\mathfrak{s}v^1_0= \ast_1$.
Let $\mathfrak{s}v_0^2\eqdef\ast_2$ and for every $i=1,\dots, N_\eta$,
let $\tilde\mathfrak{s}v_i^2\in \mathscr{T}_2^{(r),\,\eta}$ be such that $(\tilde\mathfrak{s}v^1_i,\tilde\mathfrak{s}v_i^2)\in\mathbb{C}C$.
If $M_{1,t}(\tilde\mathfrak{s}v^1_i)\geq M_{2,t}(\tilde\mathfrak{s}v_i^2)$,
set $\mathfrak{s}v_i^1\eqdef\tilde\mathfrak{s}v^1_i$ and $\mathfrak{s}v_i^2\eqdef \rho_2(\tilde\mathfrak{s}v_i^2, M_{1,t}(\mathfrak{s}v^1_i))$,
otherwise set $\mathfrak{s}v^2_i\eqdef\tilde\mathfrak{s}v^2_i$ and
$\mathfrak{s}v^1_i\eqdef \rho_1(\tilde\mathfrak{s}v_i^1, M_{2,t}(\mathfrak{s}v^2_i))$.
Setting $\ast_j^{(r)} = \rho_j(\ast_j,r)$, we define the subtree $T_j \subset \mathscr{T}^{(r),\,\eta}_j$ by
\begin{equ}[e:PathTree]
T_j\eqdef \bigcup_{i \le N_\eta} \llbracket \mathfrak{s}v_i^j, \ast_j^{(r)}\rrbracket \,.
\end{equ}
We also write $Z_j$ for the corresponding spatial tree
\begin{equ}[e:PathSpTrees]
Z_j\eqdef (T_j,\ast_j,d_j, M_j)\,.
\end{equ}
Finally, we define the {\it path correspondence} between $T_1$ and $T_2$ by
\begin{equ}[e:PathCorr]
\mathbb{C}C_\mathrm{p}\eqdef \bigcup_{i=0}^{N_\eta}\{(\rho_1(\mathfrak{s}v_i^1,t),\rho_2(\mathfrak{s}v^2_i,t)) \colon M_{1,t}(\mathfrak{s}v_i^1)\leq t\leq r\}\,.
\end{equ}
\begin{lemma}\label{l:PathTrees}
Let $\alpha\in(0,1)$ and $\zeta_1,\,\zeta_2\in\hat\mathbb{C}^\alpha_\mathrm{sp}$, $r,\,\eta, \,\eps>0$
be such that~\eqref{e:ast} and~\eqref{e:InitialDist} hold.
Then,
\begin{equ}[b:PathTrees]
\mathop{\mathrm{dis}}\mathbb{C}C_\mathrm{p}+\sup_{(\mathfrak{s}z_1,\mathfrak{s}z_2)\in\mathbb{C}C_\mathrm{p}}\|M_1(\mathfrak{s}z_1)-M_2(\mathfrak{s}z_2)\|\lesssim 4N_\eta\eps+\|M_1\|_\alpha^{(r)}\eps^\alpha\,.
\end{equ}
Moreover, the Hausdorff distance between $T_1$ and $\mathscr{T}^{(r),\,\eta}_1$ is bounded above by $\eps$,
while that between $T_2$ and $\mathscr{T}^{(r),\,\eta}_2$ is bounded by $5N_\eta\eps$ and
the following inclusions hold
\begin{equ}[e:Incl]
R_{\eta+\eps}(\mathscr{T}^{(r)}_1)\mathfrak{s}ubset T_1\mathfrak{s}ubset \mathscr{T}^{(r),\,\eta}_1\,,\qquad R_{\eta+5N_\eta\eps}(\mathscr{T}^{(r)}_2)\mathfrak{s}ubset T_2\mathfrak{s}ubset \mathscr{T}_2^{(r),\,\eta}\,.
\end{equ}
\end{lemma}
\begin{proof}
The statement follows by iteratively applying~\cite[Lemma 2.28]{CHbwt}.
Indeed, for $m\leq N_\eta$, let $\tilde\mathbb{C}C_{m}\eqdef\mathbb{C}C\cup\mathbb{C}C_{m-1}$, where $\mathbb{C}C_{m-1}$ is defined as
the right hand side of~\eqref{e:PathCorr} but the union runs from $0$ to $m-1$
(so that in particular $\mathbb{C}C_{N_\eta}=\mathbb{C}C_\mathrm{p}$).
Then, for all $m$, $\tilde\mathbb{C}C_{m}=C_{\tilde\mathbb{C}C_{m-1}}$,
the right hand side being defined according to~\cite[Eq. (2.29)]{CHbwt}. Hence, we immediately see that
$\mathop{\mathrm{dis}}\mathbb{C}C_\mathrm{p}\leq \mathop{\mathrm{dis}} \tilde \mathbb{C}C_{N_\eta}\lesssim 4N_\eta\eps$ and the bound
on the evaluation map can be similarly shown.
For the last part of the statement, notice that the Hausdorff distance between $T_1$ and $\mathscr{T}^{(r),\eta}_1$
is bounded by $\eps$ by construction,
while, arguing as in the proof of Lemma~\ref{l:Coupling}, it is immediate to show that
the Hausdorff distance between $T_2$ and $\mathscr{T}^{(r),\eta}_2$ is controlled by $4N_\eta\eps+\eps$.
These bounds together with the definition of the trimming map, guarantee that, for any $a>0$,
all the endpoints of $R_{\eta+\eps}(\mathscr{T}^{(r)}_1)$ and
$R_{\eta+5N_\eta\eps+a}(\mathscr{T}_2^{(r)})$ must belong to $T_1$ and $T_2$ respectively, which in turn implies
\begin{equ}
R_{\eta+\eps+a}(\mathscr{T}^{(r)}_1)\mathfrak{s}ubset T_1\,,\qquad R_{\eta+5N_\eta\eps+a}(\mathscr{T}_2^{(r)})\mathfrak{s}ubset T_2\,.
\end{equ}
By letting $a\to 0$ using Lemma~\ref{l:Trim} point 1., the conclusion follows.
\end{proof}
\end{appendix}
\end{document}
\begin{document}
\title{Supplementary Information for ``Non-classical microwave-optical photon pair generation with a chip-scale transducer"}
\author{Srujan Meesala}
\thanks{These authors contributed equally}
\affiliation{Kavli Nanoscience Institute and Thomas J. Watson, Sr., Laboratory of Applied Physics, California Institute of Technology, Pasadena, California 91125, USA}
\affiliation{Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, California 91125, USA}
\author{Steven Wood}
\thanks{These authors contributed equally}
\affiliation{Kavli Nanoscience Institute and Thomas J. Watson, Sr., Laboratory of Applied Physics, California Institute of Technology, Pasadena, California 91125, USA}
\affiliation{Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, California 91125, USA}
\author{David Lake}
\thanks{These authors contributed equally}
\affiliation{Kavli Nanoscience Institute and Thomas J. Watson, Sr., Laboratory of Applied Physics, California Institute of Technology, Pasadena, California 91125, USA}
\affiliation{Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, California 91125, USA}
\author{Piero Chiappina}
\affiliation{Kavli Nanoscience Institute and Thomas J. Watson, Sr., Laboratory of Applied Physics, California Institute of Technology, Pasadena, California 91125, USA}
\affiliation{Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, California 91125, USA}
\author{Changchun Zhong}
\affiliation{Pritzker School of Molecular Engineering, The University of Chicago, Chicago, IL 60637, USA}
\author{Andrew D. Beyer}
\affiliation{Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr, Pasadena, California 91109, USA}
\author{Matthew D. Shaw}
\affiliation{Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr, Pasadena, California 91109, USA}
\author{Liang Jiang}
\affiliation{Pritzker School of Molecular Engineering, The University of Chicago, Chicago, IL 60637, USA}
\author{Oskar~Painter}
\email{[email protected]}
\homepage{http://copilot.caltech.edu}
\affiliation{Kavli Nanoscience Institute and Thomas J. Watson, Sr., Laboratory of Applied Physics, California Institute of Technology, Pasadena, California 91125, USA}
\affiliation{Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, California 91125, USA}
\affiliation{Center for Quantum Computing, Amazon Web Services, Pasadena, California 91125, USA}
\date{\today}
\maketitle
\section{Notation}
\begin{table}[ht]
\centering
\begin{tabular}{p{0.35\linewidth} | p{0.6\linewidth}}
Symbol & Meaning \\ \hline
$\hat{a}, \hat{b}, \hat{c}$ & operators for optical, acoustic, and microwave modes of the transducer \newline \\
$\hat{c}_+, \hat{c}_-$ & operators for hybridized electromechanical modes \newline \\
$\hat{a}_{\mathrm{in}}, \hat{c}_{\mathrm{in}}, (\hat{a}_{\mathrm{out}}, \hat{c}_{\mathrm{out}})$ & operators for optical and microwave input (output) modes in coupling waveguides \newline \\
$\hat{A},\hat{C}$ & operators for optical and microwave temporal modes in coupling waveguides \newline \\
$\bar{C}_{mn}$ & moments of the temporal microwave mode \newline \\
$\bar{C}_{mn}|_{\mathrm{click}}$ & moments of the temporal microwave mode conditioned on an optical click \\
\end{tabular}
\caption{Summary of the various modes and related quantities defined in the main text.}
\label{tab:symbols}
\end{table}
\section{Device fabrication process}
The fabrication process for the transducer chip is illustrated in Fig.~\ref{si:fabrication_fig:wide} and described in the caption. Masks for all steps are patterned in ZEP-520A resist via electron-beam lithography on a Raith EBPG 5200 tool. All dry etching is performed in Oxford Plasmalab 100 inductively coupled plasma reactive ion etching (ICP RIE) tools. The process flow can be sub-divided into sections used to define various portions of the transducer device. Steps (i)-(vi) complete the definition of the AlN box essential for the piezo-acoustic cavity. The combination of dry and wet etch steps ensures that the dimensions of the AlN box are precisely defined while the silicon device layer is undamaged on most of the chip. This is important to achieve optical, mechanical, and microwave modes with high quality factors. Steps (vii)-(ix) define the NbN resonator and step (x) defines the OMC. Steps (xi)-(xiii) are used to define aluminum electrodes on the piezo-resonator and galvanically connect them to the NbN resonator using bandaid steps. The bandaid steps involve in situ Ar milling for two-minute and six-minute durations, respectively, to clear the surface of NbN and Al prior to Al bandaid evaporation. To provide optical fiber access to coupler sections at the end of the silicon photonic waveguides, we clear a portion of the SOI substrate up to a depth of 150~$\mathrm{\mu}$m using a deep reactive ion etch at the edge of the chip. Finally, the buried oxide (BOX) layer is etched in anhydrous vapor HF to release the device membrane.
\begin{figure*}
\caption{Device fabrication process. Images are not to scale. \textbf{(i)}
\label{si:fabrication_fig:wide}
\end{figure*}
\section{Microwave circuit design}
The kinetic inductance resonator used in our transducer is fabricated from an NbN film of 10~nm thickness and 50~pH/sq sheet inductance. The meandering ladder geometry described in the main text is comprised of 2~$\mathrm{\mu m} \times 1~\mathrm{\mu m}$ rectangular loops formed from traces of width 130~nm. We use extended electrical terminals of length 200~$\mathrm{\mu m}$ and width 1$~\mathrm{\mu m}$ to spatially separate the high kinetic inductance section from the OMC. The resonator is designed to achieve a target frequency of 5.0~GHz for the fundamental mode with a capacitance, $C_{\mathrm{res}}=7.1$~fF, which includes a small contribution of 0.27~fF from the electrodes on the piezo-acoustic resonator. Nearly the entire inductance of the resonator mode is due to the kinetic inductance of the superconducting film. The use of closed superconducting loops in the resonator allows for tuning of the kinetic inductance, $L_k$, via a DC supercurrent, $I$, induced by an external magnetic field according to the relation $L_{k} \approx L_{k,0}[1 + (I/I_{*})^2]$ as shown in \cite{Xu2019}. Here $L_{k,0}$ is the kinetic inductance at zero magnetic field. $I_{*} \gg I$ is a characteristic current on the order of the critical current of the nanowire \cite{Zmuidzinas2012}. This relation leads to quadratic tuning of the resonator frequency in response to an external magnetic field as observed in Fig.~2b of the main text. The resonator is capacitively coupled to an on-chip 50~$\mathrm{\Omega}$ coplanar waveguide (CPW) patterned in NbN to achieve an external coupling rate, $\kappa_{e,c}/2\pi=1.3$~MHz. On the chip used for the experiments in this work, sixteen transducer devices were laid out in groups of four with each group addressed by a separate CPW. Adjacent resonators on the same CPW were designed to be detuned from each other by a frequency spacing of 100~MHz, much larger than the cross-coupling, which was estimated to be less than 1~MHz from simulation. This frequency multiplexing approach allows us to increase the number of transducers available for testing on the chip while ensuring minimal microwave crosstalk.
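As an illustration of the quadratic tuning described above, the following Python snippet computes the resonance frequency from $L_{k}(I) = L_{k,0}[1 + (I/I_{*})^2]$ and $\omega_{r} = 1/\sqrt{L_{k} C_{\mathrm{res}}}$ using the capacitance and target frequency quoted in this section. The value of $I_{*}$ is not specified in the text; the number used below is purely illustrative, and the snippet is not part of any analysis used for the reported data.
\begin{verbatim}
import numpy as np

C_res = 7.1e-15        # resonator capacitance [F], quoted above
f0 = 5.0e9             # target fundamental frequency [Hz], quoted above
L_k0 = 1.0 / ((2.0 * np.pi * f0) ** 2 * C_res)   # zero-field kinetic inductance

I_star = 1.0e-3        # characteristic current [A]; NOT quoted in the text, illustrative only

def resonance_frequency(I):
    """Resonance frequency for an induced DC supercurrent I, with
    L_k(I) = L_k0 * (1 + (I / I_star)**2)."""
    L_k = L_k0 * (1.0 + (I / I_star) ** 2)
    return 1.0 / (2.0 * np.pi * np.sqrt(L_k * C_res))

print("L_k0 = %.0f nH" % (L_k0 * 1e9))
for frac in (0.0, 0.05, 0.1, 0.2):
    I = frac * I_star
    print("I = %.2f I_* -> f = %.4f GHz" % (frac, resonance_frequency(I) / 1e9))
\end{verbatim}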
\section{Piezo-optomechanics design}
The piezo-optomechanical transducer in our work is realized by attaching a half-wavelength AlN piezo-acoustic cavity to a silicon OMC as shown in Fig.~\ref{si:design_fig}a. Figure~\ref{si:design_fig}b,c shows the optical and acoustic modes used for transduction. The energy of the acoustic mode is split between the piezo-acoustic and OMC regions, which are designed independently to support acoustic resonances at 5~GHz with large piezo-electric and optomechanical coupling respectively. Similar to the device demonstrated in \cite{Mirhosseini2020}, hybridization of acoustic modes in these two portions is achieved through the connecting OMC section, whose band structure is designed to provide a bandgap for optical photons and waveguiding for acoustic phonons at the frequencies of interest. The primary difference compared to the previous design is the change to an ultra-low mode volume piezo-acoustic cavity to reduce energy participation in the piezo region, which is lossier for acoustics when compared to silicon. This design change is possible without sacrificing piezoelectric coupling since the impedance of the kinetic inductance resonator is roughly an order of magnitude larger than that of the transmon qubit used in \cite{Mirhosseini2020}.
\begin{figure*}
\caption{Piezo-optomechanics design. \textbf{a.}
\label{si:design_fig}
\end{figure*}
Figures~\ref{si:design_fig}d,e show the hybridization of the piezo-acoustic and OMC modes via a sweep of the length of the piezo box. We observe strong hybridization of two distinct branches that are piezo-like and OMC-like as highlighted by the dashed lines with maximal hybridization occurring at a piezo length of 880 nm. On the transducer chip used in our experiments, we fabricate devices with piezo length swept over a range of 20 nm about this nominal value to compensate for fabrication disorder. Clamping loss of the piezo-acoustic resonator is minimized by using tethers with modulated width as shown in Figs.~\ref{si:design_fig}a,b. The tethers are designed to support a 1.5 GHz wide acoustic bandgap centered around the transducer acoustic modes.
The OMC is evanescently coupled to a suspended silicon waveguide, which terminates in a coupler section at the edge of the chip. We use a millimeter length scale waveguide to distance the NbN circuit from scattering of pump light generated near the optical coupler section. The waveguide is designed with alternating curved sections in order to increase robustness against buckling from intrinsic stress in the silicon device layer. Adiabatic tethers are used to anchor the waveguide in order to reduce scattering loss and maintain high optical collection efficiency.
\section{Measurement Setup}
\begin{figure*}
\caption{Experimental setup. \textbf{a.}
\label{si:measurement_fig}
\end{figure*}
The measurement setup used in this work is detailed in Fig.~\ref{si:measurement_fig}. For SPDC experiments, trigger signals from a master digital delay generator are used to synchronize optical pump pulses with the timing window used for optical and microwave readout. Optical pump pulses with $>120$ dB extinction are generated using two analog acousto-optic modulators (G\&H Photonics) and are routed via a circulator into the `Optics in/out' path towards the device in the dilution fridge as shown in Fig.~\ref{si:measurement_fig}a. The optical emission from the device passes through the same circulator and is directed to a pump filtering setup prior to single photon detection (SPD) along the `SPD in' path in Fig.~\ref{si:measurement_fig}a. The filtering setup comprises two tunable Fabry–Perot filter cavities (Stable Laser Systems) in series and provides 104~dB extinction for a pump detuning of 5~GHz along with a transmission bandwidth of 2.7~MHz. This transmission bandwidth naturally excludes emission due to optomechanical scattering by other transducer modes besides the two hybridized electromechanical modes of interest, $\hat{c}_\pm$. During the experiment, transmission through the filters is checked every four minutes and a lock sequence is initiated if the transmission drops below a set threshold. At the beginning of the transmission check, the optical path is set to bypass the fridge via a MEMS switch. An electro-optic phase modulator ($\mathrm{\phi}$-m in Fig.~\ref{si:measurement_fig}a) is used to generate a sideband on the pump tone at the target frequency to which the filters are locked. In our SPDC experiments, this target frequency corresponds to a pump detuning, $\Delta_{\mathrm{a}} = (\omega_+ + \omega_-)/2$. If required, the locking algorithm adjusts the filter cavities to maximize transmission at the target frequency by monitoring the output of each cavity on a separate photodetector. Additionally, during long measurements, we periodically monitor the polarization of the pump light sent to the device and compensate for long-term polarization drifts along the excitation path. Active polarization control is performed by using an electronic polarization controller (Phoenix Photonics) and maximizing the optical pump power reflected by the device.
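For a rough idea of how the quoted filter parameters relate to pump rejection, the snippet below evaluates the idealized extinction of two cascaded Lorentzian cavities at a 5~GHz detuning, assuming that the 2.7~MHz figure is the combined transmission bandwidth of the cascade. This is only a textbook estimate: real cavities have finite out-of-band leakage and imperfect mode matching, so the measured 104~dB extinction quoted above should not be expected to coincide with the idealized value.
\begin{verbatim}
import numpy as np

def cascaded_lorentzian_extinction_dB(detuning, fwhm_total, n_cavities=2):
    """Idealized power extinction of n identical cascaded Lorentzian filter
    cavities whose combined transmission FWHM is fwhm_total."""
    # single-cavity linewidth giving the requested combined bandwidth
    fwhm_single = fwhm_total / np.sqrt(2.0 ** (1.0 / n_cavities) - 1.0)
    t_single = 1.0 / (1.0 + (2.0 * detuning / fwhm_single) ** 2)
    return -10.0 * n_cavities * np.log10(t_single)

# 5 GHz pump detuning, 2.7 MHz assumed combined bandwidth, two cavities
print(cascaded_lorentzian_extinction_dB(5.0e9, 2.7e6))
\end{verbatim}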
Optical photon counting is achieved using a tungsten silicide (WSi) superconducting nanowire single photon detector (SNSPD) mounted on the still plate of the dilution refrigerator (BlueFors LD-250) maintained at a temperature of 770~mK. As shown in Fig.~\ref{si:measurement_fig}b, electrical pulses from the SNSPD generated by single optical photon detection events are amplified and split into two separate paths to record the detection time on a time correlated single photon counter (TCSPC) and to trigger microwave readout conditioned on an optical click. For conditional microwave readout, we first generate a logical bit for every optical click by performing a logic level translation of the SNSPD electrical pulse to a TTL signal. We then perform a logical AND operation between this `optical click bit' and a trigger bit provided by the master delay generator defining the duration of the optical gating window. On the other hand, unconditional microwave readout is triggered directly by the master delay generator. An RF switch (MiniCircuits ZASWA-2-50DRA+) is used to switch between unconditional and conditional readout of the microwave output signal.
The microwave output chain shown along the `MW out' path in Fig.~\ref{si:measurement_fig}c begins with a Josephson traveling-wave parametric amplifier (TWPA, MIT Lincoln Labs) \cite{Macklin2015twpa} mounted on the mixing plate as the first amplification stage. For TWPA operation, we use a CW pump tone at a frequency of $\sim$6.07~GHz added to the amplifier input using a 20~dB directional coupler (not shown in the figure). A dual junction circulator with 40 dB isolation is placed between the directional coupler and the sample to shield the transducer from back-reflected pump. The TWPA is followed by a high electron mobility transistor (HEMT, Low Noise Factory LNF-LNC4\_8C) amplifier mounted on the 4K plate. In the setup outside the fridge as shown in Fig.~\ref{si:measurement_fig}b, we perform additional amplification of the signal and filtering of the TWPA pump tone using a tunable notch filter (Micro Lambda Wireless MLBFR-0212). The microwave signal is then down-converted to an intermediate frequency (IF) of $\sim$100~MHz after mixing with a local oscillator on an IQ mixer (Marki IQ-4509). The I and Q outputs of the mixer are subsequently bandpass-filtered, amplified and recorded independently on two channels of an analog to digital converter (ADC, Alazartech ATS 9360). These two digitized voltage signals correspond to measurement of the real and imaginary quadratures of the output mode of the amplification chain, $\hat{s}_{\mathrm{out}}$. Based on the calibrated gain of our amplification chain, we measured that our heterodyne detection setup has an added noise of $\sim$2.5~quanta referred to the output of the transducer at a signal frequency of $\sim$5.0~GHz. This near-quantum-limited heterodyne readout, enabled by the TWPA, is essential for measuring microwave-optical correlations from our transducer on a reasonable experimental timescale. The setup shown in Fig.~\ref{si:measurement_fig}b additionally allows for pulsed microwave excitation of the transducer as well as spectroscopy with a vector network analyzer (VNA).
The transducer sample is wire-bonded to a printed circuit board (PCB) with coaxial connectors and is housed in an oxygen-free high thermal conductivity (OFHC) copper package. The microwave resonance frequency of the NbN resonators can be tuned by an external magnetic field generated from an Nb-Ti coil mounted over the sample package. Individual devices on the chip are optically addressed using a lensed fiber (OZ Optics) affixed to the top of a three-axis piezo stepper stack (Attocube Systems) placed in line with the on-chip tapered optical couplers. The entire assembly is enclosed in a cylindrical magnetic shield and is mounted to the mixing plate of the dilution refrigerator cooled to a base temperature $T_\text{f}\approx 20~\text{mK}$.
\section{Optical heralding rate contributions}\label{si:heralding_rate_contributions}
\begin{figure}
\caption{\textbf{a.}
\label{si:opticalHR_fig}
\end{figure}
The optical heralding events in our SPDC experiments have a finite noise contribution from pump-induced heating of the hybridized electromechanical modes as well as technical sources in our experimental setup, namely detector dark counts and pump laser leakage. In Fig.~\ref{si:opticalHR_fig}a, we plot the optical photon count rate due to these noise sources as determined from independent measurements. The blue trace corresponds to the total optical signal as shown in Fig.~3a of the main text. Dashed lines bound the gating duration used to select photon clicks that trigger conditional microwave readout. The dark count rate (DCR) of the SNSPD in our experiment is 1.5~Hz and is limited by black-body radiation at wavelengths outside the telecom band guided through the optical fiber. Stray photon flux from laser leakage through the pump filtering setup as indicated by the purple trace in Fig.~\ref{si:opticalHR_fig} is determined by detuning the filter cavity by 100~MHz from the acoustic mode used in the transduction experiment. To determine the noise contribution from the thermal occupation of the acoustic mode, we perform a measurement of optomechanical sideband asymmetry \cite{SMeenehan}. We excite the transducer with the same pump pulse sequence used in the SPDC experiment but with the pump laser tuned to the red mechanical sideband of the optical cavity ($\Delta_{\mathrm{a}} = -(\omega_+ + \omega_-)/2$). The resultant photon flux from this measurement is shown by the red trace in Fig.~\ref{si:opticalHR_fig}a. Integrating the photon counts from these independent measurements over the gating window, we obtain the contributions of each noise source to the total probability of a heralding event, $p_{\mathrm{click}} = 2.7\times10^{-6}$ as shown in Fig.~\ref{si:opticalHR_fig}b. The corresponding fractional contributions are enumerated in Tab.~\ref{tab:op_clicks}. Further, the difference between the count rate under red and blue detuned laser excitation allows us to infer an average thermal occupation of $0.097 \pm 0.019$ for the acoustic mode over the duration of the optical gating window.
\begin{table}[]
\centering
\begin{tabular}{|c|c|}
\hline
Source of a photon click & Fraction of clicks \\
\hline
SPDC signal & 0.727 \\
\hline
Thermal & 0.069 \\
\hline
DCR & 0.171 \\
\hline
Pump leakage & 0.033 \\
\hline
\end{tabular}
\caption{Contributions to the optical photon count rate.}
\label{tab:op_clicks}
\end{table}
\section{Microwave moment inversion}\label{sec: moment inversion}
To measure the statistical moments of the microwave emission from the transducer, we adopt the method originally demonstrated in Ref.~\cite{Eichler2011}, which has been widely applied to perform tomography of non-classical microwave radiation emitted by superconducting qubits \cite{Bozyigit2011, Kannan2020, Ferreira2022}. The microwave signal emitted from the device undergoes a series of amplification steps before a record is captured as a digitized, complex-valued voltage signal on the heterodyne setup. This phase-insensitive amplification chain inevitably adds noise and is modelled as \cite{Caves1982,daSilva2010},
\begin{equation}\label{si:amp_flux_eq_exact}
\hat{s}_\text{out} = \sqrt{G}\hat{c}_\text{out} + \sqrt{G-1}\hat{h}^\dagger,
\end{equation}
where $\hat{c}_\text{out}$, $\hat{s}_\text{out}$ and $\hat{h}^\dagger$ are the bosonic mode operators corresponding to the transducer output, the heterodyne setup output and added noise respectively, with dimensions of $T^{-1/2}$. In the limit of large amplifier gain ($G\gg1$), this relation may be approximated as
\begin{equation}\label{si:amp_flux_eq}
\hat{s}_\text{out} \approx \sqrt{G}\left(\hat{c}_\text{out} + \hat{h}^\dagger\right).
\end{equation}
Once recorded, we take the inner product of the heterodyne traces with an emission envelope function, $f$ in software. A particular choice of the envelope function corresponds to a measurement of the quadratures of the temporal mode $\hat{S}(t) := \int \hat{s}_\text{out}(t+t')f^{*}(t') dt'$. With this choice Eq.~(\ref{si:amp_flux_eq}) becomes,
\begin{equation}
\hat{S} \approx \sqrt{G}\left(\hat{C} + \hat{H}^\dagger\right),
\label{si:eq:io}
\end{equation}
where $\hat{C}(t) := \int\hat{c}_\text{out}(t+t')f^{*}(t')dt' $ and $\hat{H}(t) := \int \hat{h}(t+t')f^{*}(t')dt' $. Note that $\hat{C}$ is a temporal mode in the output waveguide, not the internal microwave mode, $\hat{c}$ of the transducer. Thus from a sequence of $N$ heterodyne measurements, a dataset of complex voltage samples $\{S_1,S_2,...S_N\}$ is generated and the statistical moments are numerically calculated as \cite{daSilva2010, Eichler2011},
\begin{equation}
\bar{S}_{mn} := \langle \hat{S}^{\dagger m} \hat{S}^n \rangle = \frac{1}{N} \sum_{i=1}^N (S_i^*)^m (S_i)^n.
\end{equation}
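For illustration, a minimal sketch of this estimator is given below (Python with NumPy); the array name and the preprocessing that projects each trace onto the envelope are hypothetical placeholders rather than the analysis code used in the experiment.
\begin{verbatim}
import numpy as np

def empirical_moments(S, max_order=2):
    # S: 1D array of complex voltages S_i, one per heterodyne record, obtained
    # by projecting each recorded trace onto the envelope f (not shown here).
    # Returns the matrix of moments <S^dag^m S^n> estimated as sample means.
    M = np.zeros((max_order + 1, max_order + 1), dtype=complex)
    for m in range(max_order + 1):
        for n in range(max_order + 1):
            M[m, n] = np.mean(np.conj(S)**m * S**n)
    return M
\end{verbatim}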
The results in the main text require us to determine the statistical moments of $\hat{C}$, which we organize into a moments matrix with elements $\bar{C}_{mn} = \langle \hat{C}^{\dagger m} \hat{C}^n \rangle$. This is accomplished using additional reference measurements in which the transducer is not optically pumped so that the input to the amplifier is a vacuum state. This reference measurement directly gives the anti-normally ordered moments of the added noise as $\bar{H}_{mn} := \langle \hat{H}^m\hat{H}^{\dagger n} \rangle =\bar{S}_{mn}/G^{\frac{n+m}{2}}$.
We experimentally verified that $\bar{H}_{mn}$ is consistent with a thermal state for $m,n\leq2$. Knowledge of the mean occupancy ($n_{\text{th}, H}$) of the state constitutes full knowledge of the statistics for a thermal state. Accordingly, we extract $n_{\text{th}, H}$ from the reference measurements and calculate $\bar{H}_{mn}$ as \cite{Barnett2018},
\begin{equation}
\bar{H}_{mn} =
\begin{cases}
m!(n_{\text{th}, H}+1)^m &m=n, \\
0 & \text{otherwise}.
\end{cases}
\end{equation}
Finally, under the assumption that there are no correlations between $\hat{C}$ and $\hat{H}$, Eq.~(\ref{si:eq:io}) can be used to derive the following relation between the moment matrix elements \cite{Eichler2011},
\begin{equation}\label{si:allmoments_eq}
\bar{S}_{ij} = \bar{T}_{ijmn}\bar{C}_{mn},
\end{equation}
where
\begin{equation}
\bar{T}_{ijmn}=
\begin{cases}
G^{\frac{i+j}{2}}{i \choose m}{j \choose n} \bar{H}_{i-m,j-n} & m\leq i, n\leq j, \\
0 & \text{otherwise}.
\end{cases}
\end{equation}
By inverting Eq.~(\ref{si:allmoments_eq}), we recover $\bar{C}_{mn}$.
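A minimal numerical sketch of this inversion is given below, assuming illustrative values of the gain and added-noise occupation. Since $\bar{T}$ is triangular in the index pairs $(m,n)\leq(i,j)$ with nonzero diagonal entries $G^{(i+j)/2}$, the linear system is always solvable.
\begin{verbatim}
import numpy as np
from math import comb, factorial

def invert_moments(S_bar, G, n_th_H, kmax=2):
    # S_bar: (kmax+1, kmax+1) array of measured moments <S^dag^m S^n>.
    # G: amplifier gain; n_th_H: added-noise occupation from the reference runs.
    d = kmax + 1
    H = np.zeros((d, d))
    for m in range(d):
        H[m, m] = factorial(m)*(n_th_H + 1)**m      # thermal added-noise moments

    T = np.zeros((d, d, d, d), dtype=complex)
    for i in range(d):
        for j in range(d):
            for m in range(i + 1):
                for n in range(j + 1):
                    T[i, j, m, n] = (G**((i + j)/2)*comb(i, m)*comb(j, n)
                                     *H[i - m, j - n])

    # Flatten the (i,j) and (m,n) index pairs and solve the linear system.
    C = np.linalg.solve(T.reshape(d*d, d*d), S_bar.reshape(d*d))
    return C.reshape(d, d)
\end{verbatim}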
\section{Microwave emission envelope function}\label{si:MW_EM_Section}
\begin{figure}
\caption{\textbf{a.}
\label{si:evelope_fig}
\end{figure}
We define the two hybridized electro-mechanical modes in our experiment as
\begin{equation}
\hat{c}_\pm = \frac{1}{\sqrt{2}}\left(\hat{b} \pm \hat{c}\right),
\end{equation}
with center frequencies $\omega_\pm = \omega_b \pm g_\text{pe}$.
The ideal two-mode squeezed state generated under constant optical pumping on the blue optomechanical sideband ($\Delta = \omega_a - \omega_p = \omega_c$) can be written in the form \cite{Zhong2020FreqBin},
\begin{equation}
\ket{\psi} = \left[1 + \sqrt{p}\left(\hat{a}_+^{\dag}\hat{c}_+^{\dag} + \hat{a}_-^{\dag}\hat{c}_-^{\dag}\right) + \mathcal{O}(p) \right] \ket{\mathrm{vac}},
\end{equation}
up to a normalization factor. Here $p$ is the probability of an optomechanical scattering event, $\hat{a}_\pm$ are the optical modes which match the frequencies of the hybridized modes $\hat{c}_\pm$ as required by energy conservation. Performing single photon detection on the optical state coupled out of the cavity naturally allows us to discard the vacuum component of $\ket{\psi}$. Further, we can neglect higher order terms, $\mathcal{O}(p)$ in the expansion since $p \ll 1$ for the optical pump power level used in our experiments. In our experiment, the SNSPD has nanosecond timing jitter, which is much smaller than the period of the beat note between the modes $\hat{a}_\pm$ given by $2\pi/2g_{\text{pe}} = 625$~ns. As a result, detection of an SNSPD click in an experimental trial erases frequency information of the optical state. This measurement can be described as a projective measurement onto the state, $ \left(e^{i\phi(t)/2}\hat{a}_+ + e^{-i\phi(t)/2}\hat{a}_-\right)\ket{\text{vac}}$, where the relative phase $\phi(t)=\phi_o+2g_\text{pe}t$ for an optical click received at time $t$. Here $\phi_o$ is the constant relative phase acquired between the two frequency bins along the optical path and $t$ is the emission time of the photon pair defined relative to the beginning of the pump pulse. An optical click at time $t$ can therefore be used to herald the microwave state $\ket{\psi}_\text{click} = \left(e^{-i\phi(t)/2}\hat{c}_+^{\dag} + e^{i\phi(t)/2}\hat{c}_-^{\dag}\right)\ket{\mathrm{vac}}$.
This picture becomes slightly more complicated when considering the pulsed optical drive used in this work as well as the parameters of the device. The large bandwidth of the pump pulse of 2.75 MHz in comparison to the mode splitting of 1.6 MHz means that the frequency bins cannot be individually well resolved in the optical emission. Further, given that the splitting between the frequency bins is comparable to their individual linewidths of 1.0 MHz, our device is not deep in the strong piezoelectric coupling regime. This is evinced by the two peaks with finite overlap in the experimentally measured power spectrum of the conditional microwave emission, $\langle \hat{c}^\dagger_\text{out}(\omega)\hat{c}_\text{out}(\omega) \rangle|_\text{click}$ shown with the blue trace in Fig.~\ref{si:evelope_fig}b. However, the theoretical picture discussed above can guide the parametrization of the envelope function used for microwave readout. Accordingly, we define the microwave emission envelope function as a coherent superposition of two frequency bins centered at $\omega_\pm$,
\begin{equation}
f(t) = \frac{g(t)}{\sqrt{2}}\left( e^{-i(\omega_+ t+\phi_o/2)} + e^{-i(\omega_- t - \phi_o/2)}\right).
\label{eq:f_phi}
\end{equation}
The relative phase, $\phi_o$ is assumed to be fixed since the optical pulse duration used in our experiment is short compared to the beat period between the frequency bins. The function, $g(t)$ accounts for the finite linewidth of the frequency bins and is obtained by numerically solving the master equation for our system using the QuTiP software package \cite{Johansson2013}. For this calculation, we use the Hamiltonian in Eq.~(1) of the main text along with experimentally determined coupling and decay rates for the transducer modes. The resulting microwave emission intensity is shown in Fig.~\ref{si:evelope_fig}a. The intensity is parameterized as a skewed Gaussian to capture the faster timescale of the rising edge relative to the decay. The corresponding voltage envelope, $g(t)$ is given by the square root of a skewed Gaussian and is written as
\begin{equation}
g(t) = \frac{1}{\sqrt[4]{{2\pi}T_\text{g}^2}}\mathrm{exp}\left(- \frac{t^2}{4T_\text{g}^2} \right)\left[ 1 + \mathrm{erf}\left( \frac{\alpha t}{\sqrt{2}T_\text{g}} \right)\right]^{1/2},
\end{equation}
up to a normalization factor involving the gain of the microwave amplification chain. Here the Gaussian standard deviation, $T_\text{g}$ = 230~ns and skew factor, $\alpha = 2.0$ are obtained from fitting to the simulated result. We note that for a microwave photon instantaneously loaded into the resonator, $g(t)$ is expected to follow an exponential decay. However, in our transducer, due to the finite conversion rate of the phonon from SPDC into a microwave photon, $g(t)$ has a finite rise time. As a result, we find that choosing the skewed Gaussian parametrization instead of an exponential decay provides $\sim 4\%$ higher overlap between the emission envelope function and the simulated wavepacket as quantified by their normalized inner product. The power spectrum of the resultant envelope function, $|f(\omega)|^2$ is shown as the orange trace in Fig.~\ref{si:evelope_fig}b. This is observed to be well-matched to the power spectrum of the conditional microwave emission from the transducer obtained from experimentally recorded heterodyne voltage traces.
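For concreteness, the envelope of Eq.~(\ref{eq:f_phi}) together with $g(t)$ can be evaluated as in the short sketch below; the bin frequencies and the value of $\phi_o$ are placeholder values, not the experimentally calibrated ones.
\begin{verbatim}
import numpy as np
from scipy.special import erf

T_g, alpha = 230e-9, 2.0                  # fit values quoted in the text
omega_p = 2*np.pi*(5.0e9 + 0.8e6)         # placeholder bin frequencies omega_+/-
omega_m = 2*np.pi*(5.0e9 - 0.8e6)
phi_o = 0.0                               # relative bin phase (assumed)

def g(t):
    # Square root of a skewed Gaussian; the overall gain factor is omitted.
    gauss = np.exp(-t**2/(4*T_g**2))/(2*np.pi*T_g**2)**0.25
    return gauss*np.sqrt(1 + erf(alpha*t/(np.sqrt(2)*T_g)))

def f(t):
    # Coherent superposition of the two frequency bins defined above.
    return g(t)/np.sqrt(2)*(np.exp(-1j*(omega_p*t + phi_o/2))
                            + np.exp(-1j*(omega_m*t - phi_o/2)))
\end{verbatim}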
\section{Microwave gain calibration}
We calibrate the gain of our microwave heterodyne detection setup by operating the transducer as a frequency converter with the optical pump laser on the red mechanical sideband. The calibration procedure is performed at zero magnetic field, when the acoustic and microwave modes, $\hat{b}, \hat{c}$ are detuned by a frequency separation, $\omega_b - \omega_c = 2\pi\times 12~\mathrm{MHz} \gg 2g_{\mathrm{pe}}$ and are weakly hybridized. An independent measurement of optomechanical sideband asymmetry provides a meter for phonon occupation in the acoustic mode, $\hat{b}$ via the single phonon count rate, $R_o$ \cite{SMeenehan}. We then resonantly excite the acoustic mode via the microwave port and measure the transduced optical photon count rate, $R$. Simultaneously, we record the reflected microwave signal from the transducer, which produces an output power, $P_{\mathrm{det}}$ at the output of the heterodyne setup. The gain, $G$ of the amplification chain can be determined using the series of equations below.
\begin{eqnarray}
R &=& n_{b,\mathrm{sig}}R_o, \\
n_{b,\mathrm{sig}}\hbar\omega_b &=& P_{\mathrm{in}}\frac{4\kappa_{e,b}}{\kappa_{i,b}^2}, \\
P_{\mathrm{det}} &=& G \left( \frac{\kappa_{e,b} - \kappa_{i,b}}{\kappa_{e,b} + \kappa_{i,b}} \right)^2 P_{\mathrm{in}}.
\end{eqnarray}
Here, $n_{b,\text{sig}}$ is the occupation of the acoustic mode due to the microwave drive obtained after subtracting the thermal component of the transduced signal. $P_\text{in}$ is the input microwave power to the transducer. $\kappa_{e,b}$ is the coupling rate of the acoustic mode to the microwave waveguide due to weak hybridization with the microwave resonator and $\kappa_{i,b}$ is the intrinsic linewidth of the acoustic mode. Both rates are determined via a VNA measurement of the acoustic mode in the presence of the optical pump pulses. Following this procedure, we estimate a gain of $G$ = 103~dB between the microwave output port of the transducer and the output of the heterodyne setup. While knowledge of the gain allows us to estimate the microwave intensity at the transducer output, we note that the values of the normalized correlation functions measured in our experiment are independent of the absolute accuracy of this calibration.
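For concreteness, the calibration algebra reduces to the following short calculation, shown here with placeholder numbers standing in for the measured quantities.
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34

# Placeholder values standing in for the measured quantities.
R, R_o   = 2.0e3, 1.0e2        # transduced and single-phonon count rates (Hz)
omega_b  = 2*np.pi*5.0e9       # acoustic mode frequency (rad/s)
kappa_eb = 2*np.pi*50e3        # external coupling of the acoustic mode (rad/s)
kappa_ib = 2*np.pi*1.0e6       # intrinsic acoustic linewidth (rad/s)
P_det    = 1.0e-9              # power measured at the heterodyne output (W)

n_b_sig = R/R_o                                           # first relation
P_in = n_b_sig*hbar*omega_b*kappa_ib**2/(4*kappa_eb)      # invert second relation
G = P_det/(((kappa_eb - kappa_ib)/(kappa_eb + kappa_ib))**2*P_in)
print("Gain = %.1f dB" % (10*np.log10(G)))
\end{verbatim}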
\section{TWPA optimization}
The operating point of the TWPA was optimized prior to data acquisition. In order to ensure fast acquisition and unbiased statistics, our operational criteria were to (1) maximize the gain and (2) ensure linearity of the response. In our optimization procedure, the frequency and power of the pump tone driving the TWPA were swept, and coherent pulses with amplitude matched to the conditional microwave signal in our experiment were sent to the transducer input port. The result of the heterodyne measurement of the reflected signal from the transducer was used to compute $g^{(2)}_{CC}$. To ensure linearity, we chose a region in the TWPA pump frequency and power space where $g^{(2)}_{CC}=1$ to within one standard deviation. The final operating point was then chosen as the one with maximum gain from this cohort. Over the course of the experiment, linearity of the TWPA was periodically verified by measuring $g^{(2)}_{CC}$ for a weak coherent input. In case of a deviation of $g^{(2)}_{CC}$ from unity by more than one standard deviation, the experiment was halted and the pump optimization routine was re-run.
\section{Transducer heating dynamics}
To investigate the heating dynamics of the transducer, we performed a series of microwave measurements under pulsed optical excitation. In Fig.~\ref{si:heating_dynamics}a, we show the time-resolved power spectrum of the unconditional microwave emission at the transducer output port. For this measurement, we use the optical pulse sequence ($T_{\mathrm{p}}$ = 160 ns, $50$ kHz repetition rate) used in the SPDC experiments in the main text, albeit at a higher pump power, $n_a=12$, which generates higher thermal noise. We observe that the majority of the heating occurs in the emission bandwidth of the hybridized electro-mechanical modes. The peak of this thermal emission is noticeably delayed from the optical pump pulse. Additionally, we observe a small but non-zero noise contribution far from the transducer resonances, which we attribute to heating of the microwave waveguide. These resonator and waveguide contributions to the added noise are separated by fitting the power spectrum at each delay to a double Lorentzian function with a constant floor. The fit result is shown in Fig.~\ref{si:heating_dynamics}b, and indicates that resonator heating significantly dominates waveguide heating and has a stronger time dependence. This points to parasitic optical absorption in the intrinsic baths of the acoustic and microwave modes as the dominant source of transducer heating. Additionally, we measured the scaling of the intensity of this thermal emission with optical pump power and observed it to be sub-linear as shown by the results plotted in Fig.~\ref{si:heating_dynamics}c. Finally, from the time dynamics of heating in Fig.~\ref{si:heating_dynamics}a, we observe that while most of the thermal emission decays on the timescale of a few microseconds, a small component persists as steady state heating and is observed prior to the arrival of the optical pump pulse. We attribute this to a slower component of the hot bath generated by optical absorption and characterize it by varying the repetition rate of the optical pulses. The result shown in Fig.~\ref{si:heating_dynamics}d confirms slow decay of this component of heating with an exponential decay time of $33\pm 6~\mathrm{\mu s}$. The measurements described in this section were performed on the same device used for the SPDC experiments in the main text, but after a partial warm up and cooldown to base temperature. After this thermal cycle, we observed frequency shifts of the microwave and acoustic modes by a few MHz and an increase in the normal mode splitting. However, the thermal noise quanta generated by the optical pulse sequence used for the experiments in the main text remained nearly the same.
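The separation of resonator and waveguide contributions amounts to a standard least-squares fit at each pump-pulse delay; a minimal sketch with synthetic data and illustrative parameter values is shown below.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def two_lorentzians(f, a1, f1, w1, a2, f2, w2, floor):
    # Two Lorentzian peaks on a constant background (model at one pulse delay).
    l1 = a1*(w1/2)**2/((f - f1)**2 + (w1/2)**2)
    l2 = a2*(w2/2)**2/((f - f2)**2 + (w2/2)**2)
    return l1 + l2 + floor

# freqs, psd: measured spectrum at one delay (synthetic stand-ins here).
freqs = np.linspace(-5e6, 5e6, 401)
psd = two_lorentzians(freqs, 1.0, -0.8e6, 1e6, 1.0, 0.8e6, 1e6, 0.05)
psd = psd + 0.02*np.random.default_rng(0).normal(size=freqs.size)
p0 = [1.0, -0.8e6, 1e6, 1.0, 0.8e6, 1e6, 0.0]
popt, _ = curve_fit(two_lorentzians, freqs, psd, p0=p0)
resonator_noise = popt[0] + popt[3]       # peak amplitudes (resonator heating)
waveguide_noise = popt[6]                 # constant floor (waveguide heating)
\end{verbatim}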
\begin{figure}
\caption{Transducer heating dynamics. \textbf{a.}
\label{si:heating_dynamics}
\end{figure}
\section{Simulation of the conditional microwave state}
As detailed in Eq.\,1 of the main text, the dynamics of the transducer are governed by the Hamiltonian,
\begin{equation}
\hat{H}_{abc}(t) = \hat{H}_o + \hat{H}_\text{om}(t) + \hat{H}_\text{pe},
\end{equation}
where $\hat{H}_o$, $\hat{H}_\text{om}(t)$, and $\hat{H}_\text{pe}$ contain the mode frequencies, optomechanical interaction, and piezoelectric interaction terms respectively. These are defined as
\begin{subequations}
\begin{align}
\hat{H}_o &= -\hbar\Delta_a\hat{a}^\dag\hat{a} + \hbar\omega_b\hat{b}^\dag\hat{b} + \hbar\omega_c\hat{c}^\dag\hat{c},\\
\hat{H}_{\mathrm{om}}(t) &= \hbar G_{\mathrm{om}}(t)(\hat{a}^\dag\hat{b}^\dag + \hat{a}\hat{b}),\\
\hat{H}_{\mathrm{pe}} &= \hbar g_{\mathrm{pe}}(\hat{b}^\dag\hat{c} + \hat{b}\hat{c}^\dag).
\end{align}
\end{subequations}
We have explicitly included the time dependence of the optomechanical coupling $G_{\mathrm{om}}(t) = \sqrt{n_a(t)} g_{\mathrm{om}}$ due to the temporal shape of the intra-cavity pump photon number, $n_a(t) = {\kappa_{\text{e},a}}/({\Delta_a^2+\kappa_{a}^2/4}) P_\text{in}(t)$. $P_\text{in}(t)$, the optical pump power at the device, follows a Gaussian shape with parameters described in the main text. In our experiment, we use a blue detuned pump with $\Delta_a=\omega_b$. Further, our device is in the sideband resolved regime with $\omega_b \gg \kappa_a$.
In addition to unitary evolution due to couplings among the internal modes of the transducer, the system is also subject to dissipation and heating due to coupling to the environment. This is captured by the master equation,
\begin{equation}\label{si:eq_full_Lim}
\dot{\hat{\rho}}(t) = \mathcal{L}(t)\hat{\rho}(t).
\end{equation}
Defining the Lindblad superoperator as $\mathcal{D}(\hat{o}) \hat{\rho} = \hat{o}\hat{\rho}\hat{o}^\dagger - \frac{1}{2} \left\{ \hat{o}^\dagger \hat{o}, \hat{\rho} \right\}$, the action of $\mathcal{L}(t)$ on the density matrix is written as
\begin{equation}
\mathcal{L}(t)\hat{\rho}(t) = -\frac{i}{\hbar}\left[ \hat{H}_{abc}, \hat{\rho} (t)\right] + \left(\mathcal{L}_a + \mathcal{L}_b(t) + \mathcal{L}_c(t)\right) \hat{\rho} (t),
\end{equation}
where the superoperators, $\mathcal{L}_a, \mathcal{L}_b, \mathcal{L}_c$ describe the coupling of the respective transducer modes to the environment according to the relations below.
\begin{subequations}
\begin{align}
\mathcal{L}_a\hat{\rho}(t) =&\, \kappa_{a} \mathcal{D}(\hat{a})\hat{\rho}(t), \\
\mathcal{L}_b(t)\hat{\rho}(t) =&\, n_{\text{th},b}(t) \kappa_{i,b} \mathcal{D}(\hat{b}^\dagger)\hat{\rho}(t) \nonumber\\
&\, + \left[n_{\text{th},b}(t)+1\right]\kappa_{i,b} \mathcal{D}(\hat{b})\hat{\rho}(t), \label{SI:eq_diss_mech}\\
\mathcal{L}_c(t)\hat{\rho}(t) =&\, n_{\text{th},c}(t)\kappa_{i,c} \mathcal{D}(\hat{c}^\dagger)\hat{\rho}(t) \nonumber\\
&\, + \left[n_{\text{th},c}(t)+1\right]\kappa_{i,c} \mathcal{D}(\hat{c})\hat{\rho}(t)\nonumber\\
&\,+n_{\text{th},w}\kappa_{e,c} \mathcal{D}(\hat{c}^\dagger)\hat{\rho}(t) \nonumber\\
&\, + \left[n_{\text{th},w}+1\right]\kappa_{e,c} \mathcal{D}(\hat{c})\hat{\rho}(t).
\end{align}
\end{subequations}
Here the total dissipation rate of the optical mode, $\kappa_a=\kappa_{e,a}+\kappa_{i,a}$. To capture heating of the transducer modes due to parasitic absorption of the optical pump, we assume that the acoustic and microwave modes are coupled to intrinsic baths with time dependent thermal occupation, $n_{\text{th},b}(t)$ and $n_{\text{th},c}(t)$ respectively. $n_{\text{th},w}$, the thermal occupation of the microwave waveguide is set to a constant value much smaller than $n_{\text{th},b}(t)$ and $n_{\text{th},c}(t)$ based on the measurements detailed in Section 11. We note that while these baths may be significantly more complex and involve several components possessing different, time-dependent coupling strengths, our model is aimed at capturing the total influx of thermal excitations into the transducer modes generated by the optical pulse. With this endeavor in mind, we assume that the acoustic (microwave) mode couples to a single intrinsic bath at a fixed dissipation rate, $\kappa_{i,b}$ ($\kappa_{i,c}$) and ascribe the time-dependence of the heating dynamics entirely to the thermal occupation of the bath. This phenomenological approach to heating induced by optical absorption has previously been used to model photon correlations in optomechanics experiments \cite{Hong2017}.
While Eq.~(\ref{si:eq_full_Lim}) can be numerically solved in principle, this is computationally intensive owing to the large state space involving the three modes of the transducer. Instead, we take advantage of the fact that $\kappa_{a} \gg G_\text{om}$ to adiabatically eliminate the optical mode \cite{Wilson-Rae2008}. Moving into a frame rotating with the mechanical and microwave modes and defining the optomechanical scattering rate, $\Gamma_\text{om}(t) = 4|G_\text{om}(t)|^2/\kappa_{a}$, we arrive at the simplified master equation,
\begin{equation}\label{SI:eq_ad_elim}
\dot{\hat{\rho}}_r(t) = \mathcal{L}_r(t)\hat{\rho}_r(t),
\end{equation}
where $\hat{\rho}_r = \mathrm{Tr}_a\{ \hat{\rho}\}$ denotes the reduced density matrix spanning only the acoustic and microwave modes. $\mathcal{L}_r$ is defined by its action on the reduced density matrix,
\begin{align}
\mathcal{L}_r(t)\hat{\rho}_r(t) =&\, -\frac{i}{\hbar}\left[ \hat{H}_\text{pe}, \hat{\rho}_r (t)\right] \nonumber\\
&\,+\left(\mathcal{L}_{\text{om}}(t) + \mathcal{L}_b(t) + \mathcal{L}_c(t)\right) \hat{\rho}_r (t),
\end{align}
where we have introduced optomechanical scattering as a coupling of the reduced system to an effective bath according to the relation,
\begin{equation}
\mathcal{L}_{\text{om}}(t) \hat{\rho}_r(t) = \Gamma_\text{om}(t) \mathcal{D}(\hat{b}^\dagger)\hat{\rho}_r(t)\label{SI:eq_optical_loss} .
\end{equation}
The Lindblad superoperator, $\mathcal{D}(\hat{b}^\dagger)$ contains a non-number preserving quantum jump operator which adds a phonon to the system due to optomechanical SPDC. This process is naturally correlated with the creation of an optical photon, which is routed with finite collection efficiency to a single photon detector. Given that optical decay ($\kappa_a/2\pi = 1.3$ GHz) occurs on a much faster timescale than the dynamics of the rest of the system ($\kappa_a \gg \kappa_b, g_\text{pe}, \kappa_c$), optical detection is assumed to be instantaneous. Under this approximation, in the event of a click on the optical detector caused by a quantum jump at time $t_\text{J}$, the resultant conditional state of the reduced system is,
\begin{equation}
\hat{\rho}_\text{J}(t_\text{J}) = \frac{ \hat{b}^\dagger \hat{\rho}_r(t_\text{J}) \hat{b}}{\mathrm{Tr} \left\{ \hat{b}^\dagger \hat{\rho}_r(t_\text{J}) \hat{b} \right\}}. \label{si:eq_jump}
\end{equation}
This state then evolves according to Eq.~(\ref{SI:eq_ad_elim}). Since we operate in the weak pump regime where the integrated jump probability over the optical pulse duration, $\int \Gamma_\text{om}(t) dt \ll 1$, we neglect two-fold SPDC events.
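A condensed QuTiP sketch of this model is given below. The coupling rates, bath occupations and pump profile used here are illustrative placeholders rather than the calibrated device parameters; time-dependent Lindblad rates enter through coefficients equal to the square root of the corresponding rate.
\begin{verbatim}
import numpy as np
from qutip import destroy, qeye, tensor, basis, ket2dm, mesolve

# Illustrative parameters (rad/s and s); the calibrated device values differ.
g_pe = 2*np.pi*0.8e6           # piezoelectric coupling (half the mode splitting)
k_ib = 2*np.pi*0.15e6          # intrinsic acoustic linewidth (assumed)
k_ic = 2*np.pi*0.3e6           # intrinsic microwave linewidth (assumed)
k_ec = 2*np.pi*0.7e6           # external microwave coupling (assumed)
n_w  = 0.0                     # waveguide occupation (taken negligible)
T_p  = 160e-9                  # optical pump pulse duration

N = 10                                   # Fock truncation per mode
b = tensor(destroy(N), qeye(N))          # acoustic mode
c = tensor(qeye(N), destroy(N))          # microwave mode
H_pe = g_pe*(b.dag()*c + b*c.dag())      # rotating frame at omega_b = omega_c

def Gamma_om(t):                         # optomechanical rate (Gaussian pump)
    return 2*np.pi*2e3*np.exp(-0.5*((t - 0.5e-6)/T_p)**2)

def n_b(t):                              # phenomenological hot-bath occupations
    return 0.10*(1 - np.exp(-t/1e-6))
def n_c(t):
    return 0.05*(1 - np.exp(-t/1e-6))

# Time-dependent Lindblad terms: each coefficient is sqrt of its rate.
c_ops = [
    [b.dag(), lambda t, args: np.sqrt(Gamma_om(t))],      # SPDC phonon addition
    [b.dag(), lambda t, args: np.sqrt(n_b(t)*k_ib)],
    [b,       lambda t, args: np.sqrt((n_b(t) + 1)*k_ib)],
    [c.dag(), lambda t, args: np.sqrt(n_c(t)*k_ic + n_w*k_ec)],
    [c,       lambda t, args: np.sqrt((n_c(t) + 1)*k_ic + (n_w + 1)*k_ec)],
]

tlist = np.linspace(0.0, 5e-6, 501)
rho0 = ket2dm(tensor(basis(N, 0), basis(N, 0)))   # both modes start in vacuum

# Unconditional evolution and mode occupations.
res = mesolve(H_pe, rho0, tlist, c_ops=c_ops, e_ops=[b.dag()*b, c.dag()*c])

# Conditional state after an optical click (quantum jump) at time t_J.
i_J = np.argmin(np.abs(tlist - 0.6e-6))
rho_tJ = mesolve(H_pe, rho0, tlist[:i_J + 1], c_ops=c_ops).states[-1]
rho_J = b.dag()*rho_tJ*b
rho_J = rho_J/rho_J.tr()
res_cond = mesolve(H_pe, rho_J, tlist[i_J:], c_ops=c_ops, e_ops=[c.dag()*c])
\end{verbatim}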
While the treatment above describes the evolution of the internal modes of the transducer, we experimentally measure the emission in the temporal mode, $\hat{C}$ in the transducer microwave output port. This temporal mode is linearly related to the internal microwave mode of the transducer as $\hat{C}(t) := \int \hat{c}_\text{out}(t+t')f^{*}(t') dt' = \sqrt{\kappa_{e,c}} \int \hat{c}(t+t')f^{*}(t') dt'$. We can thus compute the occupation of the temporal mode as
\begin{eqnarray}
& & \langle\hat{C}^\dag (t) \hat{C} (t) \rangle \notag \\
& = & \kappa_{e,c} \langle \int\hat{c}^\dag (t+t'')f(t'') dt'' \int\hat{c} (t+t')f^{*}(t') dt' \rangle \notag \\
& = & \kappa_{e,c} \int \langle \hat{c}^\dag (t+t'') \hat{c} (t+t') \rangle f(t'') f^{*}(t') dt''dt' \notag \\
& = & \kappa_{e,c} \int \langle \hat{c}^\dag (t'+\tau ') \hat{c} (t') \rangle f(t'+\tau ' - t) f^{*}(t' - t) dt'd\tau ', \notag\\
& & \label{eq:CdagC}
\end{eqnarray}
where the correlator, $\langle \hat{c}^\dag (t'+\tau ') \hat{c}(t') \rangle$ inside the integral on the final line can be numerically computed using the quantum regression theorem \cite{GardinerZoller2004}. Further, to model the conditional state due to a quantum jump at time $t_\text{J}$, this correlator is computed using $\hat{\rho}_\text{J}(t_\text{J})$ in Eq.~(\ref{si:eq_jump}) for times, $t>t_{\mathrm{J}}$. For times $t<t_{\mathrm{J}}$, we use the value for the unconditional state. Performing this calculation for a sequence of jump times, $\{t_{\mathrm{J}}\}$ associated with detector clicks received within the optical gating window, we obtain a sequence of conditional time traces for the occupation of the temporal mode, $\{ \langle\hat{C}^\dag \hat{C} \rangle | _{t_{\mathrm{J}}} \}$. The final conditional trace, $\langle\hat{C}^\dag\hat{C} \rangle | _{\mathrm{click}}$ is given by a weighted average over this sequence with weights determined by the infinitesimal jump probability, $\delta p_{\mathrm{J}} \propto \Gamma_\text{om}(t_{\mathrm{J}})dt$ in an interval $dt$ at time $t_\text{J}$. The effects of detector dark counts as well as the optical pump filter in our experiment are incorporated via appropriate modifications to these weights.
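The weighted average over jump times can be sketched as follows; the array names and the simple mixing used to account for false heralds are illustrative simplifications rather than the exact analysis code.
\begin{verbatim}
import numpy as np

def conditional_trace(traces, t_J, Gamma_om, p_noise=0.0, uncond=None):
    # traces[k, :] holds <C^dag C>(t) simulated for a quantum jump at t_J[k];
    # the weights follow the infinitesimal jump probability Gamma_om(t_J) dt.
    # Mixing with the unconditional trace to model false heralds (dark counts,
    # pump leakage) is an illustrative simplification.
    w = Gamma_om(t_J)
    w = w/w.sum()
    avg = np.tensordot(w, traces, axes=(0, 0))
    if uncond is not None and p_noise > 0:
        avg = (1 - p_noise)*avg + p_noise*uncond
    return avg
\end{verbatim}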
\begin{figure}
\caption{Simulation of transducer heating dynamics \textbf{a.}
\label{si:bath_sim}
\end{figure}
\begin{figure}
\caption{Simulation of the conditional microwave state. Occupation of the temporal mode, $\hat{C}
\label{si:g2ac_sim}
\end{figure}
We perform a numerical simulation of our system by implementing the above model using the QuTiP software package \cite{Johansson2013}. Solutions to the master equation as well as the required correlators are evaluated in a $10\times10$ Fock space of the acoustic and microwave modes. To incorporate effects of heating of the microwave and acoustic baths separately in our model, we measured the unconditional microwave emission from the transducer in response to optical pump pulses under two conditions: (i) mechanics far detuned from the microwave mode, $\omega_c - \omega_b \gg 2 g_{\text{pe}}$, (ii) mechanics on resonance with the microwave mode, $\omega_b = \omega_c$. The results of the measurement are shown with the dotted traces in Fig.~\ref{si:bath_sim}d. Under condition (i), which is achieved at zero magnetic field, the microwave emission is almost entirely due to heating of the microwave bath alone. On the other hand, condition (ii) corresponds to maximal electro-mechanical hybridization and is relevant to the SPDC experiments described in the main text. In this setting, heating of both microwave and acoustic baths is expected to contribute to the measured microwave output signal. Using measurements of the microwave emission under both conditions, we invert the master equation to determine $n_{\text{th},b}(t)$ and $n_{\text{th},c}(t)$, each approximated as a piece-wise linear function over coarse samples as shown in Fig.~\ref{si:bath_sim}b. The inversion is performed by choosing an ansatz for $n_{\text{th},b}(t)$ and $n_{\text{th},c}(t)$, iteratively solving the master equation and performing least mean square optimization with respect to the experimentally measured thermal microwave emission. The resulting occupations of the acoustic mode, $\hat{b}$ and of the unconditional temporal microwave mode, $\hat{C}$ obtained by solving the master equation are shown in Fig.~\ref{si:bath_sim}c and with the solid traces in Fig.~\ref{si:bath_sim}d respectively. While the model for the hot baths is determined entirely using microwave measurements, we observe that the simulated occupation of the acoustic mode is in reasonable agreement with an experimental measurement shown by the gray data point in Fig.~\ref{si:bath_sim}c. This data point corresponds to the optomechanical sideband asymmetry measurement described in Section 6, and represents the thermal occupation of the acoustic mode averaged over the duration of the optical pump pulse. Finally, the conditional microwave emission determined using our model shown with the solid purple trace in Fig.~\ref{si:g2ac_sim} is in good agreement with the corresponding experimental result shown via the dotted purple trace. The simulated conditional and unconditional intensity traces plotted in this figure predict a value of $g^{(2)}_{AC}(\tau_o) = 3.97$, which is in good agreement with the experimentally observed value of 3.90.
Simulation of $g^{(2)}_{CC}|_{\text{click}}$ requires evaluation of $\langle\hat{C}^{\dag 2} \hat{C}^2\rangle|_{\text{click}}$. Extending the approach of Eq.~(\ref{eq:CdagC}) to the second order moment is a nontrivial computational endeavor as it requires calculation of the four-time correlator, $ \langle \hat{c}^\dag (t+t'''') \hat{c}^\dag (t+t''') \hat{c} (t+t'') \hat{c} (t+t') \rangle$. For the purpose of this work, we pursue the more tractable task of placing bounds on $g^{(2)}_{CC}|_{\text{click}}$. First, we consider the conditional state of the acoustic mode immediately after the addition of a phonon as triggered by an SPDC event. Since the extraction of this acoustic state into the microwave output port is accompanied by loss and addition of noise, we expect the function, $g^{(2)}_{bb}|_{\text{click}} = \langle \hat{b}^{\dag 2}\hat{b}^2 \rangle|_{\text{click}}/(\langle \hat{b}^{\dag}\hat{b} \rangle|_{\text{click}} )^2$ to provide a lower bound for $g^{(2)}_{CC}|_{\text{click}}$. Our model estimates $g^{(2)}_{bb}|_{\text{click}}(0) = 0.24$. Next, we consider a choice of the emission envelope function, $f(t) = \delta(t-T_o-\tau_o)$, where $T_o$ is the time corresponding to the peak intensity of the pump pulse. Physically, employing this function results in collecting the microwave emission at all frequencies from the transducer at a fixed time. This is strictly less optimal than the choice in our measurements, namely a function coherently matched to the theoretically expected single photon wavepacket as discussed in Sec.~\ref{si:MW_EM_Section}. In the event of this sub-optimal choice, moments of the temporal mode, $\hat{C}$ are proportional to the corresponding moments of the internal microwave mode, $\hat{c}$ and the function, $g^{(2)}_{cc}|_{\text{click}} = \langle \hat{c}^{\dag 2}\hat{c}^2 \rangle|_{\text{click}}/(\langle \hat{c}^{\dag}\hat{c} \rangle|_{\text{click}})^2$, serves as an upper bound for $g^{(2)}_{CC}|_{\text{click}}$. Our model estimates $g^{(2)}_{cc}|_{\text{click}}(\tau_o) = 0.77$. In summary, the above arguments allow us to use numerical simulations to bound $g^{(2)}_{CC}|_{\text{click}}(\tau_o)$ to the interval (0.24, 0.77), which has substantial overlap with the experimental observation of $g^{(2)}_{CC}|_{\text{click}}(\tau_o)=0.42^{+0.27}_{-0.28}$.
\section{Data analysis for correlation functions}
For measurements of $g^{(2)}_{AC}$ and $g^{(2)}_{CC}|_\text{click}$ presented in Fig.~3 of the main text, $9.1\times10^4$ heterodyne voltage traces were acquired over a one month period. For every $\sim$16 minutes of acquisition of conditional microwave data, we interleaved the acquisition of voltage samples of amplifier noise and the unconditional microwave signal. The unconditional microwave intensity signal, $\bar{C}_{11}(\tau)$ used to evaluate the normalized cross-correlation function, $g^{(2)}_{AC}$ was obtained from $1.4\times10^7$ heterodyne voltage traces. Likewise, the measurement of the normalized second order intensity correlation function, $g^{(2)}_{CC} (\tau_\mathrm{o})$ used $3.2\times 10^7$ heterodyne voltage samples.
\begin{figure}
\caption{\textbf{a.}
\label{si:moments_fig}
\end{figure}
To minimize the effect of long term fluctuations in the added noise and gain of the TWPA, we divide the dataset into chunks corresponding to acquisition over one day, and invert each separately to extract the moments $\bar{C}_{mn}^{(k)}, \bar{C}_{mn}^{(k)}|_\text{click}$, where the index $k$ runs over the daily chunks. This inversion process follows the methods detailed in Sec.~\ref{sec: moment inversion} and assumes that no correlations exist between $\hat{H}$ and $\hat{A}$ as well as $\hat{H}$ and $\hat{C}$. We then take a weighted average to compute the entries of the moments matrix, $\bar{C}_{mn} = \sum_k w^{(k)} \bar{C}_{mn}^{(k)}$ and $\bar{C}_{mn}|_\text{click} = \sum_k w^{(k)}_\text{click} \bar{C}_{mn}^{(k)}|_\text{click}$, where $w^{(k)}$, $w^{(k)}_\text{click}$ denote the fraction of records in the $k^\text{th}$ chunk of the unconditional and conditional datasets respectively. The moments matrices constructed from this process are shown in Fig.~\ref{si:moments_fig}b.
The acquisition rate for the moments of the conditional heterodyne output, $\bar{S}_{mn}|_\text{click}$ is determined by the optical heralding rate, $R_{\mathrm{click}}=0.14$ Hz. This is much slower than the rate for the amplifier noise moments, $\bar{H}_{mn}$, which can be acquired at the 50~kHz repetition rate of the experiment and determined with high accuracy. As a result, the error in $g^{(2)}_{CC}|_\text{click}$ is dominated by uncertainty in $\bar{S}_{mn}|_\text{click}$. To calculate the error in $g^{(2)}_{CC}$ and $g^{(2)}_{CC}|_\text{click}$, we employ bootstrapping with replacement using $10^5$ bootstraps to construct the probability density functions shown in Fig.~\ref{si:convergence_fig}a. Finally, error estimates were calculated by numerical integration of the distributions such that they cover a 34.1\% confidence interval above and below the mean. The error estimates calculated using this process are shown as a function of the number of bootstraps in Fig.~\ref{si:convergence_fig}b-c for the conditional and unconditional datasets. We find that the use of $10^5$ bootstraps yields acceptable convergence in the error estimate.
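A minimal sketch of the bootstrap procedure is given below; here \texttt{invert\_fn} is a hypothetical placeholder for the full pipeline of moment estimation and amplifier-noise inversion, and the quoted percentiles approximate the integration described above.
\begin{verbatim}
import numpy as np

def bootstrap_g2(S_click, invert_fn, n_boot=10**5, seed=0):
    # S_click: complex heterodyne samples of the conditional dataset.
    # invert_fn: placeholder for the pipeline mapping a set of samples to g2
    # (moment estimation followed by amplifier-noise inversion).
    rng = np.random.default_rng(seed)
    n = len(S_click)
    g2 = np.empty(n_boot)
    for k in range(n_boot):
        g2[k] = invert_fn(S_click[rng.integers(0, n, size=n)])
    lo, hi = np.percentile(g2, [15.9, 84.1])   # ~34.1% below/above the center
    return g2.mean(), g2.mean() - lo, hi - g2.mean()
\end{verbatim}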
To mitigate the effects of long term drifts in frequencies of the microwave modes, we periodically measured the unconditional microwave power spectrum and monitored the resonance frequencies, $\omega_\pm$. This additionally allowed us to characterize the spectral diffusion of the microwave modes. In our data analysis, we excluded intervals of the measurement where the mode frequencies drifted by more than twice the standard deviation associated with spectral diffusion, which represented $3\%$ of all recorded traces.
\begin{figure}
\caption{\textbf{a.}
\label{si:convergence_fig}
\end{figure}
\section{Classical bound on the conditional second order intensity correlation}
In the main text, we claim that the conditional microwave autocorrelation function is expected to satisfy the inequality, $g^{(2)}_{CC}|_{\mathrm{click}} \geq 1$ for classical microwave-optical states. This can be proved by rewriting the conditional quantities defined in the main text in terms of correlators of the joint microwave-optical state,
\begin{align}\label{si:jointprop_eq}
g^{(2)}_{CC}(\tau)|_\text{click}
&= \frac{\braket{\hat{C}^{\dagger 2}(\tau) \hat{C}^2(\tau)} |_\text{click}}
{\left( \braket{\hat{C}^\dagger(\tau) \hat{C}(\tau)}|_\text{click}\right)^2 } \nonumber \\
&= \frac{\braket{\hat{A}^\dagger\hat{C}^{\dagger 2}(\tau) \hat{C}^2(\tau) \hat{A}} \braket{\hat{A}^\dagger\hat{A}}}
{ \left(\braket{\hat{A}^\dagger \hat{C}^\dagger(\tau) \hat{C}(\tau) \hat{A}} \right)^2},
\end{align}
where the above correlators are explicitly written in normal order. Using the optical equivalence theorem, we can recast the expectation values of the above correlators in phase space using the Sudarshan-Glauber P representation \cite{Mandel1995,Kuzmich2003} as
\begin{align}
\braket{\hat{A}^{\dagger m} \hat{C}^{\dagger n}(\tau) \hat{A}^m \hat{C}^n(\tau) }
& =
\int|\alpha|^{2m} |\gamma|^{2n} P_\tau(\alpha,\gamma) d^2 \alpha d^2\gamma \nonumber \\
& := \braket{|\alpha|^{2m} |\gamma|^{2n}}_{P_\tau},
\end{align}
where $P_\tau(\alpha,\gamma)$ is the joint phase space density corresponding to modes $\hat{A}$ and $\hat{C}$ with relative delay $\tau$. Applying this directly to Eq.~(\ref{si:jointprop_eq}) gives,
\begin{equation}
g^{(2)}_{CC}(\tau)|_\text{click} = \frac{\braket{|\alpha|^2 |\gamma|^4}_{P_\tau} \braket{|\alpha|^2}_{P_\tau}}{\braket{|\alpha|^2|\gamma|^2}^2_{P_\tau}} \geq 1,
\end{equation}
which follows directly from the Cauchy-Schwarz inequality.
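For completeness, the Cauchy-Schwarz step can be made explicit. For a classical microwave-optical state, $P_\tau$ is a non-negative probability density, so applying the Cauchy-Schwarz inequality to the functions $|\alpha||\gamma|^2$ and $|\alpha|$ with respect to $P_\tau$ gives
\begin{equation}
\braket{|\alpha|^2|\gamma|^2}_{P_\tau}^2 \leq \braket{|\alpha|^2|\gamma|^4}_{P_\tau}\braket{|\alpha|^2}_{P_\tau},
\end{equation}
which rearranges to the stated bound. The inequality relies only on the non-negativity of $P_\tau$, so a measured value of $g^{(2)}_{CC}|_{\mathrm{click}}<1$ certifies that the joint state admits no non-negative $P$ representation.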
\section{Convergence of $g^{(2)}_{AC}$ to the classical bound}
The results of pump power dependent measurements of the normalized intensity cross-correlation function, $g^{(2)}_{AC}$ are shown in Fig.~\ref{si:cc_powerdep}.
\begin{figure}
\caption{Maximum value of $g^{(2)}
\label{si:cc_powerdep}
\end{figure}
All measurements were performed with the same pump pulse duration and repetition rate used to acquire data in the main text. We observe that at higher pump powers, the value of $g^{(2)}_{AC}$ approaches the classical upper bound of 2 as expected from the increased thermal noise added to the photon pair generated in SPDC. The data in this figure was collected from the same device used for the experiments in the main text albeit after a partial warm up and cooldown to base temperature. This thermal cycle led to modified device parameters, which produced an increase in $g^{(2)}_{AC}$ at the lowest pump power of $n_a=0.8$ compared to the result presented in the main text.
\end{document}
\begin{document}
\title{A Radon-type transform arising in photoacoustic tomography with circular detectors}
\begin{abstract}
Photoacoustic tomography is the most well-known example of a hybrid imaging method.
In this article, we define a Radon-type transform arising in a version of photoacoustic tomography that uses integrating circular detectors and describe how the Radon transform integrating over all circles with a fixed radius is determined from this Radon-type transform.
Here we consider three situations: when the centers of the circular detectors are located on a cylinder, on a plane, and on a sphere.
This transform is similar to a toroidal Radon transform, which maps a given function to its integrals over a set of tori.
We also study this object.
\end{abstract}
\section{Introduction}
Photoacoustic Tomography (PAT) is a noninvasive medical imaging technique based on the reconstruction of an internal photoacoustic source. Its principle is based on the excitation of high bandwidth acoustic waves with pulsed non-ionizing electromagnetic energy \cite{xuw06,zangerls09,zangerlsh09}. Ultrasound imaging often has high resolution but displays low contrast. Optical or radio-frequency EM illumination, on the other hand, gives an enormous contrast between unhealthy and healthy tissues, although it has low resolution. The photoacoustic effect, which was discovered by A.G. Bell in 1880, is the underlying physical principle of PAT.
PAT can provide information about the chemical composition as well as the optical structure of an object.
In PAT, one induces an acoustic wave inside an object of interest by delivering optical energy~\cite{kuchmentk08,xuw06}, and then one measures the acoustic wave on a surface outside of the object of interest~\cite{burgholzerbmghp07,xuw06,zangerls09}.
The initial data of the three dimensional wave equation contain diagnostic information.
{\color{black}One of the mathematical problems of PAT} boils down to recovering this initial pressure field.
The type of detector most often studied is a point transducer, which approximately measures the pressure at a given point.
{\color{black}However, it is} difficult to manufacture small detectors with high bandwidth and sensitivity.
Hence, various other types of detectors to measure the acoustic data have been introduced, such as line detectors, planar detectors, cylindrical detectors and circular detectors.
Measurements are modeled by the integrals of pressure over the shape of the detectors.
{\color{black}Works~\cite{zangerls09,zangerls10,zangerlsh09} have} dealt with PAT with the circular detectors.
{They showed that the data from PAT with circular detectors is the solution of a certain initial value problem, and they converted the problem of recovering the initial pressure field into an inversion problem for the circular Radon transform using this fact.}
Also, they assumed that the circular detectors are centered on a cylinder in~\cite{zangerls09,zangerlsh09}, or that the circular detectors have different radii and lie on the surface of a sphere in~\cite{zangerls10}.
In our approach, we define a new Radon-type transform arising in this version of PAT, and consider the situation when the set of the {centers of the circular detectors} is a cylinder (only this situation is discussed in previous works~\cite{zangerls09,zangerlsh09}) and two more situations: when the set of the {centers of the circular detectors} is a plane or a sphere (this spherical geometry is different from that in~\cite{zangerls10}).
This transform is similar to a toroidal Radon transform, which maps a given function into {the set of its integrals over tori with respect to a certain non-standard measure}; we also study this mathematically similar object.
This paper is organized as follows.
Section~\ref{defiandwork} is devoted to a Radon-type transform arising in PAT with circular detectors. We reduce this Radon-type transform to the Radon transform on circles with a fixed radius.
In section~\ref{sec:torus}, we define a toroidal Radon transform and reduce this transform to the circular Radon transform.
\section{PAT with circular integrating detectors}\label{defiandwork}
In PAT, the acoustic pressure $p(\mathbf{x},t)$ satisfies the following initial value problem:
\begin{equation}\label{eq:pdeofpat}
\begin{array}{cc}
\partial^2_tp(\mathbf{x},t)=\triangle_\mathbf{x} p(\mathbf{x},t)\qquad&(\mathbf{x},t)=(x_1,x_2,x_3,t)\in\mathbb{R}^3\times(0,\infty),\\
p(\mathbf{x},0)=f(\mathbf{x})&\mathbf{x}\in\mathbb{R}^3,\\
\partial_tp(\mathbf{x},0)=0 &\mathbf{x}\in\mathbb{R}^3.
\end{array}
\end{equation}
(We assume that the sound speed is equal to one everywhere including the interior of the object.) The goal of PAT is to recover the initial pressure $f$ from measurements of $p$ outside the support of $f$.
\begin{figure}
\caption{The {centers of the circular detectors}
\label{fig:circleTAT}
\end{figure}
Throughout this section, it is assumed that the initial pressure field $f$ is smooth and circular detectors are parallel to the $x_1x_2$-plane.
As mentioned before, three geometries will be studied.
Let the {centers of the circular detectors} be located on a subset $A$ of $\mathbb{R}^3$.
We consider three cases: $A$ is the cylinder $\partial B_R^2(0)\times\mathbb{R}$, the $x_2x_3$-plane, or the sphere $\partial B^3_R(0)$ (see figure~\ref{fig:circleTAT}).
Here $B_R^k(\mathbf{x})=B^k(\mathbf{x},R)$ is a ball in $\mathbb{R}^k$ centered at $\mathbf{x}\in\mathbb{R}^k$ with radius $R$.
In other words, it is assumed that the acoustic signals are measured by a stack of parallel circular detectors where these circles are centered on a cylinder $\partial B_R^2(0)\times\mathbb{R}$, on the $x_2x_3$-plane, or on the sphere $\partial B^3_R(0)$ and their radii are a constant $r_{det}$.
The measured data $P(\mathbf a,t)$ for $(\mathbf a,t)\in A\times(0,\infty)$ can be written as
$$
P(\mathbf a,t)=\frac{1}{2\pi}\int\limits^{2\pi}_0p(\mathbf a+(r_{det}\vec\alpha,0),t)d\alpha,
$$
where $\vec\alpha=(\cos\alpha,\sin\alpha)\in S^1$.
Also, it is a well-known fact that
$$
p(\mathbf{x},t)=\partial_t\left(\frac{1}{4\pi t}\int\limits_{\partial B^3(\mathbf{x},t)}f(\vec\beta)dS(\vec\beta)\right)
$$
is a solution of the IVP~\eqref{eq:pdeofpat}.
Here $dS$ is the measure on the sphere and
$$
\vec\beta=(\cos\beta_1\sin\beta_2,\sin\beta_1\sin\beta_2,\cos\beta_2)\in S^2.
$$
Hence $P(\mathbf a,t)$ becomes
$$
\begin{array}{ll}
P(\mathbf a,t)&\displaystyle=\frac{1}{2\pi}\int\limits^{2\pi}_0\partial_t\left(\frac{1}{4\pi t}\int\limits_{\partial B^3(\mathbf a+(r_{det}\vec\alpha,0),t)}f(\vec\beta)dS(\vec\beta) \right)d\alpha\\
&\displaystyle=\frac{1}{8\pi^2}\partial_t\left(t\int\limits^{2\pi}_0\int\limits^{\pi}_0\int\limits^{2\pi}_0
f(\mathbf a+(r_{det}\vec\alpha,0)+t\vec\beta)\sin\beta_2 d\beta_1 d\beta_2 d\alpha\right).
\end{array}
$$
Let us define a transform $\mathcal R_P$ by
$$
\begin{array}{l}
\displaystyle \mathcal{R}_Pf(\mathbf a,t):=\int\limits^{2\pi}_0\int\limits^{2\pi}_0\int\limits^{\pi}_0 f(\mathbf a+(r_{det}\vec\alpha,0)+t\vec\beta)\sin\beta_2 d\beta_2 d\beta_1d\alpha.
\end{array}
$$
We will demonstrate a relation between $\mathcal{R}_Pf$ and the Radon transform on circles with a fixed radius, which is a well-studied problem.
This will allow us to recover $f$ from $P$.
\subsection{Reconstruction}\label{recon}
Let a transform $M_{r_{det}}f$ be defined by
$$
M_{r_{det}}f(\mathbf{x}):=\int\limits^{2\pi}_0f(r_{det}\vec\alpha+(x_1,x_2),x_3)d\alpha,
$$
where $\mathbf{x}=(x_1,x_2,x_3)\in\mathbb{R}^3$.
We will show that $M_{r_{det}}f$ can be obtained from $\mathcal{R}_Pf$ when $A$ is a cylinder, a plane or a sphere.
We can easily find the inversion of the Radon transform $M_{r_{det}}f$ over all circles with a fixed radius as follows:
Taking the 2-dimensional Fourier transform of $M_{r_{det}}f$ with respect to $(x_1,x_2)$, we have for $\boldsymbol\xi=(\xi_1,\xi_2)\in\mathbb{R}^2,$
$$
\widehat{M_{r_{det}}f}(\boldsymbol\xi,x_3)=2\pi\hat{f}(\boldsymbol\xi,x_3)J_0(r_{det}|\boldsymbol\xi|),
$$
where $J_0(s)$ is the Bessel function of order zero, and
$\widehat {M_{r_{det}}f}$ and $\hat f$ are the 2-dimensional Fourier transforms of $M_{r_{det}}f$ and $f$ with respect to $(x_1,x_2)$.
Hence we can reconstruct $f$ through
$$
f(\mathbf{x})=\frac1{4\pi^2}\int\limits_{\mathbb{R}^2} \widehat{M_{r_{det}}f}(\boldsymbol\xi,x_3)/J_0(r_{det}|\boldsymbol\xi|) e^{i\boldsymbol\xi\cdot(x_1,x_2)}d\boldsymbol\xi.
$$
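For a single slice in $x_3$, this reconstruction can be sketched numerically as follows (Python with NumPy/SciPy); the regularization near the zeros of $J_0$ is a practical choice on our part and is not part of the formula above.
\begin{verbatim}
import numpy as np
from scipy.special import j0

def invert_fixed_radius_transform(Mf, dx, r_det, eps=1e-3):
    # Mf: samples of M_{r_det}f(., ., x_3) for one fixed x_3 on a uniform grid
    # with spacing dx in x_1 and x_2.  The relation FT(Mf) = 2 pi J_0(r|xi|) FT(f)
    # holds for any linear Fourier convention, so the discrete version below
    # recovers f by division; frequencies near the zeros of J_0 are discarded.
    n1, n2 = Mf.shape
    xi1 = 2*np.pi*np.fft.fftfreq(n1, d=dx)
    xi2 = 2*np.pi*np.fft.fftfreq(n2, d=dx)
    K = np.hypot(xi1[:, None], xi2[None, :])
    denom = 2*np.pi*j0(r_det*K)
    filt = np.zeros_like(denom)
    mask = np.abs(denom) > eps
    filt[mask] = 1.0/denom[mask]
    return np.real(np.fft.ifft2(np.fft.fft2(Mf)*filt))
\end{verbatim}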
\begin{rmk}\label{rmk:pompeiu}
When we have two circular detectors with different radii, say $r_1,r_2$, we have two different values $M_{r_{1}}f,M_{r_{2}}f$ for each $\mathbf{x}$, i.e., two Radon transforms on circles with different fixed radii.
Some works~\cite{berensteingy90,thangavelu94,zalcman80} show how $f$ can be reconstructed from $M_{r_{1}}f,M_{r_{2}}f$ under a certain assumption.
\end{rmk}
\subsubsection{Cylindrical geometry}
Let $f$ be a smooth function supported in the solid cylinder $B^2_R(0)\times\mathbb{R}=\{\mathbf{x}=(x_1,x_2,x_3)\in\mathbb{R}^3:x_1^2+x_2^2\leq R^2\}$. Let the centers of the circular detectors be located on the cylinder $A=\partial B^2_R(0)\times\mathbb{R}$.
We can represent $\mathbf a\in A$ by $(R\vec\theta,z)\in\partial B^2_R(0)\times\mathbb{R}$ for $(\vec\theta,z)\in S^1\times\mathbb{R}$.
Then $\mathcal R_Pf(R\vec\theta,z,t)$ is equal to
$$
\int\limits^{2\pi}_0\int\limits^{2\pi}_0\int\limits^{\pi}_0
f((R\vec\theta+r_{det}\vec\alpha,z)+t\vec\beta)\sin\beta_2 d\beta_2 d\beta_1d\alpha.
$$
Consider the definition of $\mathcal R_Pf$.
The inner integral with respect to $\beta_2$ in the definition formula of $\mathcal R_Pf$ can be thought of as the circular Radon transform with weight $\sin\beta_2$.
We will first remove this integral by applying the technique previously used to derive an inversion formula for the circular Radon transform.
Let us define the operator
$\mathcal{R}^\#_P$ for an integrable function $g$ on $\partial B_R^2(0)\times\mathbb{R}\times[0,\infty)$ by
$$
\mathcal{R}^\#_Pg(R\vec\theta,x_3,\rho)=\displaystyle\int\limits_{\mathbb{R}} g(R\vec\theta,z,\sqrt{(z-x_3)^2+\rho^2})|(z-x_3,\rho)|dz.
$$
\begin{lem}\label{lem:relation}
Let $f\in C^\infty_c(B^2_R(0)\times\mathbb{R})$.
Then we have
\begin{equation}\label{eq:circularcylinder}
\int\limits^{2\pi}_0\int\limits^{2\pi}_0f(R\vec\theta+r_{det}\vec\alpha+t\vec\beta_1,x_3) d\beta_1 d\alpha=-\frac{1}{\pi^2 t}\mathcal H_t\partial_t\mathcal{R}^\#_P\mathcal{R}_Pf(R\vec\theta,x_3,t).
\end{equation}
Here $\mathcal H_th$ is the Hilbert transform of $h$ with respect to $t$ defined by
$$
\mathcal H_th(t)=\frac1\pi P.V.\int\limits_{\mathbb{R}} h(\tau)\frac{d\tau}{t-\tau},
$$
where $P.V.$ means the principal value.
\end{lem}
To prove this lemma, we follow a similar method to the one used in~\cite{andersson88,fawcett85,nattererw01,nilsson97}.
\begin{proof}
By definition, $\mathcal{R}_Pf(R\vec\theta,z,t)$ can be written as
$$
-\int\limits^{2\pi}_0\int\limits^{2\pi}_0\int\limits^1_{-1}f(R\vec\theta+r_{det}\vec\alpha+t\sqrt{1-s^2}\vec\beta_1, z+ts)dsd\beta_1 d\alpha.
$$
Taking the Fourier transform of $\mathcal{R}_Pf$ with respect to $z$ yields
$$
\widehat{\mathcal{R}_Pf}(R\vec\theta,\xi_1,t)=-\int\limits^{2\pi}_0\int\limits^{2\pi}_0\int\limits^1_{-1}\hat{f}(R\vec\theta+r_{det}\vec\alpha+t\sqrt{1-s^2}\vec\beta_1, \xi_1)e^{its\xi_1}dsd\beta_1 d\alpha,
$$
where $\hat{f}$ and $\widehat{\mathcal{R}_Pf}$ are the 1-dimensional Fourier transforms of $f$ and $\mathcal{R}_Pf$ with respect to $x_3$ and $z$, respectively.
Taking the Hankel transform of order zero of $t\widehat{\mathcal{R}_Pf}$ with respect to $t$, we have
$$
\begin{array}{ll}
H_0(t\widehat{\mathcal{R}_Pf})(R\vec\theta,\xi_1,\eta)\\
=-\displaystyle\int\limits^\infty_0 \int\limits^{2\pi}_0\int\limits^{2\pi}_0\int\limits^1_{-1}\hat{f}(R\vec\theta+r_{det}\vec\alpha+t\sqrt{1-s^2}\vec\beta_1, \xi_1)e^{its\xi_1}dsd\beta_1 d\alpha \;t^2J_0(t\eta)dt\\
=-2\displaystyle\int\limits^\infty_0 \int\limits^{2\pi}_0\int\limits^{2\pi}_0\int\limits^1_0\hat{f}(R\vec\theta+r_{det}\vec\alpha+t\sqrt{1-s^2}\vec\beta_1, \xi_1)t^2J_0(t\eta)\cos(ts\xi_1)dsd\beta_1 d\alpha dt\\
=-2\displaystyle\int\limits^{2\pi}_0\int\limits^{2\pi}_0\int\limits^\infty_0 \int\limits^\infty_0 \hat{f}(R\vec\theta+r_{det}\vec\alpha+b\vec\beta_1, \xi_1)b\cos(\rho\xi_1) J_0(\eta\sqrt{\rho^2+b^2})d\rho dbd\beta_1d\alpha,
\end{array}
$$
where in the last line, we made a change of variables $(t,s)\rightarrow (\rho,b)$ where $t=\sqrt{\rho^2+b^2}$ and $s=\rho/\sqrt{\rho^2+b^2}$.
We use the following identity:
\begin{equation}\label{eq:batemann}
\displaystyle\int\limits^\infty_0 J_0(a\sqrt{\rho^2+b^2})\cos(\rho\xi_1)d\rho=\left\{\begin{array}{ll}\dfrac{\cos(b\sqrt{a^2-\xi_1^2})}{\sqrt{a^2-\xi_1^2}} &\mbox{ if } 0<\xi_1<a,\\
0&\mbox{ otherwise}\end{array}\right.
\end{equation}
(see~\cite[p.55 (35) vol.1]{batemann}). Using this identity, $H_0(t\widehat{\mathcal{R}_Pf})(R\vec\theta,\xi_1,\eta)$ is equal to
\begin{equation*}
-2\left\{\begin{array}{ll}\displaystyle\int\limits^{2\pi}_0\int\limits^{2\pi}_0\int\limits^\infty_0 \hat{f}(R\vec\theta+r_{det}\vec\alpha+b\vec\beta_1, \xi_1)b\dfrac{\cos(b\sqrt{\eta^2-\xi_1^2})}{\sqrt{\eta^2-\xi_1^2}}dbd\beta_1d\alpha\;&\mbox{ if } 0<\xi_1<\eta,\\
0&\mbox{ otherwise.}\end{array}\right.
\end{equation*}
Substituting $\eta=\sqrt{\xi_1^2+\xi_2^2}$ yields
\begin{equation}\label{eq:hankel}
H_0(t\widehat{\mathcal{R}_Pf})(R\vec\theta,\xi_1,|\boldsymbol\xi|)=-2\displaystyle\int\limits^{2\pi}_0\int\limits^{2\pi}_0\int\limits^\infty_0 \hat{f}(R\vec\theta+r_{det}\vec\alpha+b\vec\beta_1, \xi_1)\dfrac{b}{\xi_2}\cos(b\xi_2)dbd\beta_1d\alpha.
\end{equation}
The inner integral on the right hand side of the last equation is a Fourier cosine transform with respect to $b$, so taking the inverse Fourier cosine transform of~\eqref{eq:hankel}, we get
\begin{equation}\label{relationhankelandfourier}
\begin{array}{l}
\displaystyle\int\limits^{2\pi}_0\int\limits^{2\pi}_0 \hat{f}(R\vec\theta+r_{det}\vec\alpha+s\vec\beta_1, \xi_1)sd\beta_1d\alpha\\
\displaystyle= -\pi^{-1}\int\limits^\infty_0 H_0(t\widehat{\mathcal{R}_Pf})(R\vec\theta,\xi_1,|\boldsymbol\xi|)\cos(s\xi_2)\xi_2 d\xi_2,
\end{array}
\end{equation}
where $\hat{f}$ is the Fourier transform of $f$ with respect to the last variable $x_3$.
Next, we rewrite the right hand side of~\eqref{relationhankelandfourier} as a term containing the operator $\mathcal{R}_P ^\#$.
Taking the Fourier transform of $\mathcal{R}^\#_Pg$ on $\partial B^2_R(0)\times\mathbb{R}^2$ with respect to $(x_3,\rho)$ yields
\begin{equation}\label{eq:relationhankelandback}
\begin{array}{ll}
\widehat{\mathcal{R}^\#_Pg}(R\vec\theta, \boldsymbol\xi)=\displaystyle\int\limits_{\mathbb{R}} \int\limits_{\mathbb{R}} e^{-i(x_3,\rho)\cdot \boldsymbol\xi} \mathcal{R}^\#_Pg(R\vec\theta,x_3,\rho)dx_3d\rho\\
=\displaystyle\int\limits_{\mathbb{R}} \int\limits_{\mathbb{R}} e^{-i(x_3,\rho)\cdot \boldsymbol\xi} \int\limits_{\mathbb{R}} |(x_3-z,\rho)|g(R\vec\theta,z,\sqrt{(z-x_3)^2+\rho^2} )dzdx_3d\rho\\
=\displaystyle\int\limits_{\mathbb{R}} e^{-i\xi_1\cdot z} \int\limits_{\mathbb{R}} \int\limits_{\mathbb{R}} e^{-i(x_3-z,\rho)\cdot\boldsymbol\xi} |(x_3-z,\rho)| g(R\vec\theta,z,\sqrt{(z-x_3)^2+\rho^2} )dx_3d\rho dz\\
=\displaystyle\int\limits_{\mathbb{R}} e^{-i\xi_1\cdot z}\int\limits_{\mathbb{R}} \int\limits_{\mathbb{R}} e^{-i(x_3,\rho)\cdot \boldsymbol\xi} |(x_3,\rho)|g(R\vec\theta,z,|(x_3,\rho)| )dx_3d\rho dz\\
=2\pi\displaystyle\int\limits_{\mathbb{R}} e^{-i\xi_1\cdot z}H_0(tg)(R\vec\theta,z,|\boldsymbol\xi| )dz\\
=2\pi H_0(t\hat{g})(R\vec\theta,\xi_1,|\boldsymbol\xi| ),
\end{array}
\end{equation}
where $\widehat{\mathcal{R}^\#_Pg}$ is the Fourier transform with respect to the last variable $(x_3,\rho)$.
Combining~\eqref{eq:relationhankelandback} with~\eqref{relationhankelandfourier}, we have for $g=\mathcal{R}_Pf$,
\begin{equation*}\label{relationfourierandback}
\begin{array}{ll}
\displaystyle\int\limits^{2\pi}_0\int\limits^{2\pi}_0\hat{f}(R\vec\theta+r_{det}\vec\alpha+s\vec\beta_1, \xi_1)d\beta_1d\alpha &\displaystyle
=-\frac{1}{2\pi^2 s} \int\limits^\infty_0 \widehat{\mathcal{R}^\#_Pg}(R\vec\theta, \boldsymbol\xi) \cos(s\xi_2)\xi_2 d\xi_2\\
&\displaystyle=-\frac{1}{\pi^2 s} \int\limits_{\mathbb{R}} \widehat{\mathcal{R}^\#_Pg}(R\vec\theta, \boldsymbol\xi) e^{is\xi_2}|\xi_2| d\xi_2\\
&\displaystyle=\frac{1}{\pi^2 s} \int\limits_{\mathbb{R}} \widehat{\partial_t\mathcal{R}^\#_Pg}(R\vec\theta, \boldsymbol\xi) e^{is\xi_2}(i\operatorname{sgn}(\xi_2)) d\xi_2.
\end{array}
\end{equation*}
The fact that $\widehat{\mathcal H_th}(\xi)=(-i\operatorname{sgn}(\xi))\hat h(\xi)$ completes our proof.
\end{proof}
Again, after interchanging the order of integration, the left hand side of~\eqref{eq:circularcylinder} is the circular Radon transform of $M_{r_{det}}f(\cdot,x_3)$ with centers on $\partial B^2_R(0)$ and radius $t$.
Hence, by applying an inversion formula for the circular Radon transform, we recover $M_{r_{det}}f(\mathbf{x})$.
\begin{thm}\label{thm:circle}
Let $f$ be a smooth function supported in $B^2_R(0)\times\mathbb{R}$.
Then for any $\mathbf{x}\in\mathbb{R}^3$, we have
$$
M_{r_{det}}f(\mathbf{x})=\displaystyle\frac{1}{\pi R}\triangle_{x_1,x_2}\int\limits^{2\pi}_0 \mathcal{R}^\#_P\mathcal{R}_Pf(R\vec\theta,x_3, |(x_1,x_2)-R\vec\theta|)d\theta.
$$
\end{thm}
To prove this theorem, we follow the method discussed in~\cite{finchhr07}.
\begin{proof}
It is computed in~\cite{finchhr07} that
$$
\begin{array}{l}
\displaystyle\int\limits^{2\pi}_0\log \left||(x_1,x_2)-R\vec\theta|^2-|(y_1,y_2)-R\vec\theta|^2\right|d\theta\\
=2\pi R\log|(x_1,x_2)-(y_1,y_2)|+2\pi R\log R.
\end{array}
$$
For any measurable function $q$ on $\mathbb{R}$, it is easily shown that
$$
\begin{array}{l}
\displaystyle\int\limits^{2(R+r_{det})}_0t\int\limits^{2\pi}_0\int\limits^{2\pi}_0
f(R\vec\theta+r_{det}\vec\alpha+t\vec\beta_1,x_3)d\beta_1d\alpha \;q(t)dt\\
\displaystyle=\int\limits^{2\pi}_0\int\limits_{\mathbb{R}^2} f(R\vec\theta+r_{det}\vec\alpha+\mathbf w,x_3)q(|\mathbf w|)d\mathbf wd\alpha.
\end{array}
$$
Applying this with $q(t)=\log\left|t^2-|(x_1,x_2)-R\vec\theta|^2\right|$ and making the change of variables $\mathbf y=(y_1,y_2)=R\vec\theta+t\vec\beta_1\in\mathbb{R}^2$ give
$$
\begin{array}{l}
\displaystyle\int\limits^{2\pi}_0\int\limits^{2(R+r_{det})}_0\int\limits^{2\pi}_0\int\limits^{2\pi}_0
tf(R\vec\theta+r_{det}\vec\alpha+t\vec\beta_1,x_3)\log\left|t^2-|(x_1,x_2)-R\vec\theta|^2\right|d\beta_1d\alpha dt d\theta\\
=\displaystyle\int\limits^{2\pi}_0\int\limits^{2\pi}_0\int\limits_{\mathbb{R}^2} f(r_{det}\vec\alpha+\mathbf y,x_3)\log \left||(x_1,x_2)-R\vec\theta|^2-|\mathbf y-R\vec\theta|^2\right| d\mathbf y d\alpha d\theta\\
=\displaystyle 2\pi R\int\limits^{2\pi}_0\int\limits_{\mathbb{R}^2} f(r_{det}\vec\alpha+\mathbf y,x_3) (\log|(x_1,x_2)-\mathbf y|+\log R)d\mathbf y d\alpha,
\end{array}
$$
where in the last line, we used the Fubini--Tonelli theorem and the logarithmic integral identity above.
Since $(2\pi)^{-1}\log|(x_1,x_2)-(y_1,y_2)|+\log R$ is a fundamental solution of the Laplacian in $\mathbb{R}^2$, we have
$$
\begin{array}{l}
\displaystyle\triangle\int\limits^{2\pi}_0\int\limits^{2(R+r_{det})}_0\int\limits^{2\pi}_0\int\limits^{2\pi}_0t
f(R\vec\theta+r_{det}\vec\alpha+t\vec\beta_1,x_3)\log\left|t^2-|(x_1,x_2)-R\vec\theta|^2\right|d\beta_1d\alpha dt d\theta\\
=R\displaystyle\int\limits^{2\pi}_0f((x_1,x_2)+r_{det}\vec\alpha,x_3)d\alpha,
\end{array}
$$
where $\triangle$ is the Laplacian on $(x_1,x_2)$.
Applying Lemma~\ref{lem:relation}, $M_{r_{det}}f(\mathbf{x})$ is equal to
$$
-\frac{1}{\pi^2 R}\triangle_{x_1,x_2}\int\limits^{2\pi}_0\int\limits^{2(R+r_{det})}_0\mathcal H_t\partial_t\mathcal{R}^\#_P\mathcal{R}_Pf(R\vec\theta,x_3, t) \log\left|t^2-|(x_1,x_2)-R\vec\theta|^2\right|dt d\theta.
$$
We note that $\mathcal H_t\partial_th=\partial_t\mathcal H_t h$,
$$ \log\left|t^2-|(x_1,x_2)-R\vec\theta|^2\right|= \log\left|t-|(x_1,x_2)-R\vec\theta|\right|+ \log\left|t+|(x_1,x_2)-R\vec\theta|\right|,
$$
and that the distributional derivative of $\log|t|$ is $P.V.\frac1t$.
By integration by parts, we have
$$
\begin{array}{ll}
M_{r_{det}}f(\mathbf{x})=&\displaystyle\frac{1}{\pi^2 R}\triangle_{x_1,x_2}\int\limits^{2\pi}_0P.V.\int\limits^{2(R+r_{det})}_0\frac{\mathcal H_t\mathcal{R}^\#_P\mathcal{R}_Pf(R\vec\theta,x_3, t)}{t-|(x_1,x_2)-R\vec\theta|}dt d\theta\\
&+\displaystyle\frac{1}{\pi^2 R}\triangle_{x_1,x_2}\int\limits^{2\pi}_0\int\limits^{2(R+r_{det})}_0\frac{\mathcal H_t\mathcal{R}^\#_P\mathcal{R}_Pf(R\vec\theta,x_3, t)}{t+|(x_1,x_2)-R\vec\theta|}dt d\theta.
\end{array}
$$
Since $\mathcal{R}^\#_P\mathcal{R}_Pf$ is even in $t$, $\mathcal H_t\mathcal{R}^\#_P\mathcal{R}_Pf$ is odd in $t$.
Substituting $t=-t$ in the second term gives
$$
M_{r_{det}}f(\mathbf{x})=\displaystyle\frac{1}{\pi^2 R}\triangle_{x_1,x_2}\int\limits^{2\pi}_0P.V.\int\limits^{2(R+r_{det})}_{-2(R+r_{det})}\frac{\mathcal H_t\mathcal{R}^\#_P\mathcal{R}_Pf(R\vec\theta,x_3, t)}{t-|(x_1,x_2)-R\vec\theta|}dt d\theta.
$$
The fact that $\mathcal H_t\mathcal H_th(t)=-h(t)$ completes our proof.
\end{proof}
\begin{rmk}\label{rmk:line}
When $A$ is a cylinder, we can reconstruct $f$ from $\mathcal{R}_Pf$ by applying Theorem~\ref{thm:circle} and the argument below Remark~\ref{rmk:pompeiu}.
\end{rmk}
\subsubsection{Planar geometry}
Let the {centers of the circular detectors} be located on the $x_2x_3$-plane.
Then we can denote $\mathbf a\in A$ by $(y,z)\in \mathbb{R}^2$.
Also, $\mathcal{R}_Pf$ is equal to zero if $f$ is an odd function in $x_1$.
We thus assume that $f$ is even in $x_1$.
Then $\mathcal{R}_Pf$ can be written as
$$
\mathcal{R}_Pf(y,z,t)=\int\limits^{2\pi}_0\int\limits_{S^2}
f((0,y,z)+r_{det}(\vec\alpha,0)+t\vec\beta) dS(\vec\beta) d\alpha.
$$
Let $M_P$ be the spherical Radon transform mapping a locally integrable function $f$ on $\mathbb{R}^3$ into its integrals over spheres centered on the $x_2x_3$-plane:
$$
M_Pf(y,z,t)=\int\limits_{S^2}f((0,y,z)+t\vec\beta)dS(\vec\beta).
$$
Then we have
$$
\mathcal{R}_Pf(y,z,t)=\int\limits_{S^2}M_{r_{det}}f((0,y,z)+t\vec\beta)dS(\vec\beta)=M_P(M_{r_{det}}f)(y,z,t).
$$
It is well-known (see, e.g.~\cite{nattererw01,nilsson97}) that for $\boldsymbol\xi=(\xi_1,\xi_2,\xi_3)\in\mathbb{R}^3$,
$$
\hat f(\boldsymbol\xi)=\frac{|\boldsymbol\xi||\xi_1|}{4\pi^3}\mathcal F (M_P^*M_Pf)(\boldsymbol\xi),
$$
where $\mathcal F f=\hat f$ is the $3$-dimensional Fourier transform of $f$, and
for an integrable function $g$ on $\mathbb{R}^2\times[0,\infty)$,
$$
M_P^*g(\mathbf{x})=\int\limits_{\mathbb{R}^2} g(y,z,\sqrt{x_1^2+(y-x_2)^2+(z-x_3)^2})dydz.
$$
\begin{thm}
Let $f\in C^\infty_c(\mathbb{R}^3)$ be even in $x_1$.
Then we have
$$
M_{r_{det}}f(\mathbf{x})=\frac{1}{2^5\pi^6}\int\limits_{\mathbb{R}^3}|\boldsymbol\xi||\xi_1|\mathcal F (M_P^*\mathcal R_Pf)(\boldsymbol\xi)e^{i\boldsymbol\xi\cdot\mathbf{x}}d\boldsymbol\xi.
$$
\end{thm}
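For the reader's convenience, we indicate how the constant arises; this is a routine check, assuming the Fourier inversion convention $f(\mathbf{x})=(2\pi)^{-3}\int_{\mathbb{R}^3}\hat f(\boldsymbol\xi)e^{i\boldsymbol\xi\cdot\mathbf{x}}d\boldsymbol\xi$. Since $f$ is even in $x_1$, so is $M_{r_{det}}f$, and $\mathcal{R}_Pf=M_P(M_{r_{det}}f)$. Applying the displayed identity to $M_{r_{det}}f$ gives
$$
\widehat{M_{r_{det}}f}(\boldsymbol\xi)=\frac{|\boldsymbol\xi||\xi_1|}{4\pi^3}\mathcal F (M_P^*\mathcal R_Pf)(\boldsymbol\xi),
$$
and Fourier inversion then produces the constant $\frac{1}{4\pi^3(2\pi)^3}=\frac{1}{2^5\pi^6}$ in the theorem.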
Now $f$ can be determined by applying the argument below Remark~\ref{rmk:pompeiu}.
\begin{rmk}
Redding and Newsam derived another inversion formula for the spherical Radon transform $M_P$ in~\cite{reddingn01}. Using this inversion formula, we can also reconstruct $M_{r_{det}}f$ from $\mathcal R_Pf$.
\end{rmk}
\subsubsection{Spherical geometry}
Let the {centers of the circular detectors} be located on the sphere $\partial B^3_R(0)$.
Then we can denote $\mathbf a\in A$ by $R\vec\omega\in \partial B^3_R(0)$ for $\vec\omega\in S^2$.
In this case, $\mathcal{R}_Pf$ can be written as
$$
\mathcal{R}_Pf(R\vec\omega,t)=\int\limits^{2\pi}_0\int\limits_{S^2}
f(R\vec\omega+r_{det}(\vec\alpha,0)+t\vec\beta) dS(\vec\beta) d\alpha.
$$
Let $M_S$ be the spherical Radon transform mapping a locally integrable function $f$ on $\mathbb{R}^3$ into its integrals over spheres centered on $\partial B_R^3(0)$:
$$
M_Sf(\vec\omega,t)=\int\limits_{S^2}f(R\vec\omega+t\vec\beta)dS(\vec\beta).
$$
Then we have
$$
\mathcal{R}_Pf(R\vec\omega,t)=\int\limits_{S^2}M_{r_{det}}f(R\vec\omega+t\vec\beta)dS(\vec\beta)=M_S(M_{r_{det}}f)(\vec\omega,t).
$$
It is well-known (see, e.g.~\cite{finchpr04}) that
$$
f(\mathbf{x})=-\frac{R }{2^3\pi^2 }\int\limits_{S^2}\frac{\left.\partial^2_tt^2M_Sf( \vec\omega,t)\right|_{t=|R\vec\omega-\mathbf{x}|}}{|R\vec\omega-\mathbf{x}|}dS(\vec\omega).
$$
\begin{thm}
Let $f\in C^\infty_c(B^3_R(0))$.
Then we have
$$
M_{r_{det}}f(\mathbf{x})=-\frac{R }{2^3\pi^2}\int\limits_{S^2}\frac{\left.\partial^2_tt^2\mathcal R_Pf(R\vec\omega,t)\right|_{t=|R\vec\omega-\mathbf{x}|}}{|R\vec\omega-\mathbf{x}|}dS(\vec\omega).
$$
\end{thm}
Again, $f$ can be determined by applying the argument below Remark~\ref{rmk:pompeiu}.
\begin{rmk}
Kunyansky derived two other inversion formulas for the spherical Radon transform $M_S$ in~\cite{kunyansky07,kunyansky071}. Using these inversion formulas, we can reconstruct $M_{r_{det}}f$ from $\mathcal R_Pf$.
\end{rmk}
\section{A toroidal Radon transform}\label{sec:torus}
As mentioned before, we study the toroidal Radon transform, which is a mathematically similar object to $\mathcal R_P$, in this section.
(When integrating over the tori, the standard area measure is not used.)
Although we have not been able to establish a direct link between PAT with circular detectors and the toroidal Radon transform, studying the toroidal Radon transform is an interesting geometric problem in its own right.
We assume that all tori are parallel to the $x_1x_2$-plane and consider two geometries: the centers of tori are located on a cylinder, or on a plane.
\begin{defi}
Let $u>0$ be the radius of the central circles of the tori.
Let $A\times\mathbb{R}\subset\mathbb{R}^2\times\mathbb{R}$ be the set of the centers of the tori.
The toroidal Radon transform $R_T$ maps $f\in C^\infty_c(\mathbb{R}^3)$ into
\begin{equation}\label{eq:torusradon}
R_Tf(\boldsymbol\mu,p,r)=\displaystyle\frac{1}{2\pi }\int\limits^{2\pi}_0\int\limits^{2\pi}_0 f(\boldsymbol\mu+(u-r\cos\beta)\vec\alpha,p+r\sin\beta)d\beta d \alpha,
\end{equation}
for $(\boldsymbol\mu,p,r)\in A\times\mathbb{R}\times(0,\infty)$.
Here $\alpha$ is the angular parameter along the central circle, $(\boldsymbol\mu,p)$ is the center of the torus, and $\beta$ and $r$ are the polar angle and the radius of the tube of the torus, respectively.
\end{defi}
We consider two situations: $A$ is the circle $\partial B^2_R(0)$ or the line $x_1=0$. Thus the set of the centers of tori is a cylinder $\partial B^2_R(0)\times\mathbb{R}$ or the $x_2x_3$-plane.
We then present the relation between the circular Radon transform and the toroidal Radon transform.
This relation leads naturally to an inversion formula, if one uses an inversion formula for the circular Radon transform (already discussed in~\cite{finchhr07,kunyansky07} or \cite{andersson88,fawcett85,nattererw01,nilsson97,reddingn01}).
\begin{defi}
Let $f$ be a compactly supported function in $\mathbb{R}^3$.
The circular Radon transform $M_{}$ maps a function $f$ into
$$
M_{}f(\boldsymbol\mu,x_3,r)=\int\limits^{2\pi}_0 f(\boldsymbol\mu+r\vec\alpha,x_3)d\alpha \quad\mbox{ for } (\boldsymbol\mu,x_3,r)\in A\times\mathbb{R}\times(0,\infty).
$$
\end{defi}
\subsection{Reconstruction}\label{recontorus}
The inner integral with respect to $\beta$ in \eqref{eq:torusradon} can be thought of as the circular Radon transform with radius $r$.
As in subsection~\ref{recon}, we will first invert this transform.
Let us define the operator
$R^*_T$ for $g\in C^\infty_c(A\times\mathbb{R}\times[0,\infty))$ by
$$
R^*_Tg(\boldsymbol\mu,z,\rho)=\displaystyle\int\limits_{\mathbb{R}} g(\boldsymbol\mu,p,\sqrt{(z-p)^2+\rho^2})dp,
$$
where $(\boldsymbol\mu,z,\rho)\in A\times\mathbb{R}^2$.
The following two lemmas show the relation between the circular and the toroidal Radon transforms.
Let us define the linear operator $I_2^{-1}$ by $\widehat{I^{-1}_2h}(\boldsymbol\mu,\boldsymbol\xi)=|\xi_2|\hat{h}(\boldsymbol\mu,\boldsymbol\xi)$,
where $h$ is a function on $A\times\mathbb{R}^2$ and $\hat h$ is its Fourier transform in the last two-dimensional variable.
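We note in passing that, under the same Fourier and Hilbert transform conventions as in Lemma~\ref{lem:relation}, the operator $I^{-1}_2$ can equivalently be written as $I^{-1}_2h=\mathcal H_\rho\partial_\rho h$, where both operators act in the last variable of $h$; indeed, the corresponding Fourier multipliers satisfy $(-i\operatorname{sgn}(\xi_2))(i\xi_2)=|\xi_2|$.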
\begin{lem}\label{lem:andersson}
Let $f\in C^\infty_c(\mathbb{R}^3)$.
Then we have
\begin{equation}\label{eq:invtorus}
\displaystyle \frac{1}{2} I^{-1}_2R^*_TR_Tf(\boldsymbol\mu,x_3, r)=\left\{\begin{array}{ll}M_{}f(\boldsymbol\mu,x_3,u-r)+M_{}f(\boldsymbol\mu,x_3,u+r)&\mbox{ if } u>r,\\\\
M_{}f(\boldsymbol\mu,x_3,r-u)+M_{}f(\boldsymbol\mu,x_3,u+r)&\mbox{ otherwise.}\end{array}\right.
\end{equation}
\end{lem}
To prove this lemma, we follow the method discussed in~\cite{andersson88,nattererw01,nilsson97}.
\begin{proof}
By definition, we have
$$
R_Tf(\boldsymbol\mu,p,r)
=\displaystyle\frac{1}{2\pi }\sum^2_{j=1}\int\limits^{2\pi}_0\int\limits^1_{-1} f(\boldsymbol\mu+(u+(-1)^jr\sqrt{1-s^2})\vec\alpha,p+rs)\frac{ds}{\sqrt{1-s^2}}d\alpha.
$$
We take the Fourier transform of $R_Tf$ with respect to $p$ and the Hankel transform of order zero of $\widehat{R_Tf}$ with respect to $r$.
Then $H_0\widehat{R_Tf}(\boldsymbol\mu,\xi_1,\eta)$ can be written as
\begin{equation}\label{hankelrtftorus}
\begin{array}{ll}
\displaystyle\frac{1}{2\pi }\sum^2_{j=1}\int\limits^{2\pi}_0\int\limits^\infty_0 \int\limits^\infty_0 \hat{f}(\boldsymbol\mu+(u+(-1)^jb)\vec\alpha, \xi_1)\cos(\rho\xi_1) J_0(\eta\sqrt{\rho^2+b^2})d\rho dbd\alpha,
\end{array}
\end{equation}
where $\hat{f}$ and $\widehat{R_Tf}$ are the 1-dimensional Fourier transforms of $f$ and $R_Tf$ with respect to $x_3$ and $p$, respectively.
Lastly, we change variables $(r,s)\rightarrow (\rho,b)$, where $r=\sqrt{\rho^2+b^2}$ and $s=\rho/\sqrt{\rho^2+b^2}$.
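As a quick check, the Jacobian of this substitution satisfies $\frac{r}{\sqrt{1-s^2}}\,dr\,ds=d\rho\, db$, so the weight $\frac{r\,ds}{\sqrt{1-s^2}}$ coming from the Hankel transform and the substitution $s=\sin\beta$ is absorbed completely, which explains why no extra weight appears in~\eqref{hankelrtftorus}.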
Applying~\eqref{eq:batemann} to~\eqref{hankelrtftorus}, we get
\begin{equation*}
H_0\widehat{R_Tf}(\boldsymbol\mu,\xi_1,|\boldsymbol\xi|)=\displaystyle\frac{1}{2\pi }\sum^2_{j=1}\int\limits^{2\pi}_0\int\limits^\infty_0 \hat{f}(\boldsymbol\mu+(u+(-1)^jb)\vec\alpha, \xi_1)\dfrac{\cos(b\xi_2)}{\xi_2}dbd\alpha.
\end{equation*}
The inner integral in the right hand side of the last equation is the Fourier cosine transform with respect to $b$, so taking the inverse Fourier cosine transform of the above formula, we get
\begin{equation}\label{relationhankelandfouriertorus}
\displaystyle\sum^2_{j=1}\int\limits^{2\pi}_0 \hat{f}(\boldsymbol\mu+(u+(-1)^js)\vec\alpha, \xi_1)d\alpha= 4\int\limits^\infty_0 H_0\widehat{R_Tf}(\boldsymbol\mu,\xi_1,|\boldsymbol\xi|)\cos(s\xi_2)\xi_2 d\xi_2.
\end{equation}
For fixed $\xi_1$, one recognizes on the left-hand side the sum of two circular Radon transforms of $\hat{f}(\cdot,\xi_1)$, with radii $|u-s|$ and $u+s$.
Similarly to~\eqref{eq:relationhankelandback}, we can change the right hand side of~\eqref{relationhankelandfouriertorus} into a term containing the operator $R_T ^*$, i.e.,
\begin{equation}\label{eq:relationhankelandbacktorus}
\widehat{R^*_Tg}(\boldsymbol\mu, \boldsymbol\xi)=2\pi H_0\hat{g}(\boldsymbol\mu,\xi_1,|\boldsymbol\xi| ).
\end{equation}
Here $\widehat{R^*_Tg}$ is the 2-dimensional Fourier transform with respect to the variables $(z,\rho)$.
Combining~\eqref{eq:relationhankelandbacktorus} with~\eqref{relationhankelandfouriertorus}, we have for $g=R_Tf$,
\begin{equation*}\label{relationfourierandback}
\begin{array}{ll}
\displaystyle \sum^2_{j=1}\int\limits_{0}^{2\pi} \hat{f}(\boldsymbol\mu+(u+(-1)^js)\vec\alpha, \xi_1)d\alpha &\displaystyle=\frac{2}{\pi} \int\limits^\infty_0 \widehat{R^*_Tg}(\boldsymbol\mu, \boldsymbol\xi) \cos(s\xi_2)\xi_2 d\xi_2\\
&\displaystyle=\frac{1}{\pi} \int\limits_{\mathbb{R}} \widehat{R^*_Tg}(\boldsymbol\mu, \boldsymbol\xi) e^{is\xi_2}|\xi_2| d\xi_2,
\end{array}
\end{equation*}
since $\widehat{R^*_Tg}$ is even in $\xi_2$.
\end{proof}
\begin{lem}\label{lem:redding}
Let $f\in C^\infty_c(\mathbb{R}^3)$.
Then we have
$$
\begin{array}{l}
\displaystyle\frac{2}{\pi}\int\limits_{\mathbb{R}} \int\limits_{\mathbb{R}} \int\limits^\infty_0 rsR_Tf(\boldsymbol\mu,-\eta,s) e^{-i(s^2+2x_3\eta+x_3^2-\eta^2+r^2)\xi}\xi ds d\eta d\xi\\
=\left\{\begin{array}{ll}M_{}f(\boldsymbol\mu,x_3,u-r)+M_{}f(\boldsymbol\mu,x_3,u+r)&\mbox{ if } u>r,\\\\
M_{}f(\boldsymbol\mu,x_3,r-u)+M_{}f(\boldsymbol\mu,x_3,u+r)&\mbox{ otherwise}.
\end{array}\right.
\end{array}
$$
\end{lem}
To prove this lemma, we follow the method discussed in~\cite{reddingn01}.
\begin{proof}
Let $G$ be defined by
$$
G(\boldsymbol\mu,p,\xi):=\displaystyle\int\limits^\infty_0 rR_Tf(\boldsymbol\mu,p,r) e^{-ir^2\xi}dr.
$$
Then we have
$$
\begin{array}{ll}
G(\boldsymbol\mu,p,\xi)&=\displaystyle\frac{1}{2\pi}\int\limits^\infty_0 \int\limits^{2\pi}_0\int\limits^\pi_{-\pi}rf(\boldsymbol\mu+(u-r\cos\beta)\vec\alpha,p+r\sin\beta)e^{-ir^2\xi}d\beta d\alpha dr\\
&=\displaystyle\frac{1}{2\pi}\int\limits^{2\pi}_0\int\limits_{\mathbb{R}} \int\limits_{\mathbb{R}} f(\boldsymbol\mu+(u-y)\vec\alpha,p+z) e^{-i(y^2+z^2)\xi}dydzd \alpha\\
&=\displaystyle\frac{1}{2\pi}\int\limits^{2\pi}_0\int\limits_{\mathbb{R}} \int\limits_{\mathbb{R}} f(\boldsymbol\mu+(u-y)\vec\alpha,z) e^{-i(y^2+(z-p)^2)\xi}dydzd \alpha\\
&=\displaystyle \frac{e^{-ip^2\xi}}{2\pi}\int\limits^{2\pi}_0\int\limits_{\mathbb{R}} \int\limits_{\mathbb{R}} f(\boldsymbol\mu+(u-y)\vec\alpha,z) e^{-i(y^2+z^2)\xi}e^{2ipz\xi}dydzd\alpha,
\end{array}
$$
where in the second line, we switched from the polar coordinates $(r,\beta)$ to the Cartesian coordinates $(y,z)\in\mathbb{R}^2$.
Making the change of variables $r=y^2+z^2$ gives that $G(\boldsymbol\mu,p,\xi)$ is equal to
$$
\begin{array}{l}
\displaystyle \frac{e^{-ip^2\xi}}{2\pi}\sum^2_{j=1}\int\limits^{2\pi}_0\int\limits_{\mathbb{R}} \int\limits_{\mathbb{R}} f(\boldsymbol\mu+(u+(-1)^j\sqrt{r-z^2})\vec\alpha,z) \frac{e^{-ir\xi}e^{2ipz\xi}}{2\sqrt{r-z^2}}drdzd\alpha.
\end{array}
$$
Let us define the function
$$
k_{\boldsymbol\mu}(\alpha,z,r):=\left\{\begin{array}{ll}\displaystyle\sum^2_{j=1}f(\boldsymbol\mu+(u+(-1)^j\sqrt{r-z^2})\vec\alpha,z)/\sqrt{r-z^2} \qquad &\mbox{if } 0<z^2<r,\\
0\qquad &\mbox{otherwise.}
\end{array}\right.
$$
Then we have
$$
\begin{array}{ll}
G(\boldsymbol\mu,p,\xi)&=\displaystyle\frac{e^{-ip^2\xi}}{4\pi} \int\limits^{2\pi}_0\int\limits_{\mathbb{R}} \int\limits_{\mathbb{R}} k_{\boldsymbol\mu}(\alpha,z,r) e^{-ir\xi}e^{2ipz\xi}drdzd \alpha\\
&=\displaystyle\frac{e^{-ip^2\xi}}{4\pi} \int\limits^{2\pi}_0\widehat{k_{\boldsymbol\mu}}(\alpha,-2p\xi,\xi)d \alpha,
\end{array}
$$
where $\widehat{k_{\boldsymbol\mu}}$ is the 2-dimensional Fourier transform of $k_{\boldsymbol\mu}$ with respect to the variables $(z,r)$.
Also, we have
\begin{equation*}\label{reddingandradon}
\begin{array}{ll}
\displaystyle\sum^2_{j=1}\int\limits^{2\pi}_0f(\boldsymbol\mu+(u+(-1)^js)\vec\alpha,x_3)d\alpha=\displaystyle \int\limits^{2\pi}_0 sk_{\boldsymbol\mu}(\alpha,x_3,x_3^2+s^2)d\alpha\\
=\displaystyle\frac{1}{4\pi^2}\int\limits_{\mathbb{R}} \int\limits_{\mathbb{R}} \int\limits^{2\pi}_0 s\widehat{k_{\boldsymbol\mu}}(\alpha,\eta,\xi)e^{-i(x_3\eta+(x_3^2+s^2)\xi)}d\alpha d\eta d\xi\\
=\displaystyle\frac{1}{\pi}\int\limits_{\mathbb{R}} \int\limits_{\mathbb{R}} se^{i\frac{\eta^2}{4\xi}}G(\boldsymbol\mu,-\frac{\eta}{2\xi},\xi)e^{-i(x_3\eta+(x_3^2+s^2)\xi)}d\eta d\xi\\
=\displaystyle\frac{2}{\pi}\int\limits_{\mathbb{R}} \int\limits_{\mathbb{R}} sG(\boldsymbol\mu,-\eta,\xi)e^{-i(2x_3\eta+(x_3^2+s^2)-\eta^2)\xi}\xi d\eta d\xi,
\end{array}
\end{equation*}
where in the last line, we changed the variable $\eta/(2\xi)$ to $\eta$.
\end{proof}
\subsubsection{Cylindrical geometry}
Let the centers of the central circles be located on the cylinder $\partial B^2_R(0)\times\mathbb{R}=A\times\mathbb{R}$.
That is, $A$ is the circle centered at the origin with radius $R$.
The next two results show that the circular Radon transform can be recovered from the toroidal Radon transform.
Both theorems are easily obtained using Lemma~\ref{lem:andersson}.
\begin{thm}\label{thm:inversionwithcondition}
If $R/2<u<R$ and $f\in C^\infty_c(B^2_{R-u}(0)\times\mathbb{R})$, then
\begin{equation*}
\displaystyle M_{} f(\boldsymbol\mu,x_3,r)=\left\{\begin{array}{ll}2^{-1}I^{-1}_2R^*_TR_Tf(\boldsymbol\mu,x_3, r-u) \quad&\mbox{ if } r>u,\\
0&\mbox{otherwise.}
\end{array}\right.
\end{equation*}
\end{thm}
\begin{thm}\label{thm:inversion}
Let $f\in C^\infty_c(B^2_R(0)\times\mathbb{R})$.
Then
\begin{equation*}
\displaystyle M_{}f(\boldsymbol\mu,x_3,r)=\left\{\begin{array}{ll}\displaystyle\frac{1}{2}\sum^{[\frac Ru+\frac12]}_{j=0}(-1)^jI^{-1}_2R^*_TR_Tf(\boldsymbol\mu,x_3, (2j+1)u-r)&\mbox{ if }r\leq u,\\
\displaystyle\frac{1}{2}\sum^{[R/u]}_{j=0}(-1)^{j}I^{-1}_2R^*_TR_Tf(\boldsymbol\mu,x_3, (2j+1)u+r)&\mbox{ otherwise.}
\end{array}\right.
\end{equation*}
\end{thm}
\begin{rmk}
One can obtain other relations similar to Theorems~\ref{thm:inversionwithcondition} and~\ref{thm:inversion}, by using Lemma~\ref{lem:redding} instead of Lemma~\ref{lem:andersson}.
\end{rmk}
\begin{rmk}
When the set of {centers of the circular detectors} is a cylinder (i.e., $A$ is a circle), one can recover $f$ from its toroidal transform $R_Tf$ by applying inversion formulas for the circular Radon transform with centers on the circle (see e.g.~\cite{finchhr07,kunyansky07,kunyansky071}) to the left hand sides of the equations in Theorems~\ref{thm:inversionwithcondition} and \ref{thm:inversion}.
\end{rmk}
\begin{rmk}
If $u>2R$ (i.e., the radius of the central circles is bigger than the diameter of the cylinder $B^2_R(0)\times\mathbb{R}$), then
$$
M_{}f(\boldsymbol\mu,x_3,r)=\left\{\begin{array}{ll}2^{-1}I^{-1}_2R^*_TR_Tf(\boldsymbol\mu,x_3,u-r)&\mbox{ if } r\leq u,\\\\
2^{-1}I^{-1}_2R^*_TR_Tf(\boldsymbol\mu,x_3,u+r)&\mbox{ otherwise.}
\end{array}\right.
$$
\end{rmk}
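For completeness, we sketch why this holds. Since $f$ is supported in $B^2_R(0)\times\mathbb{R}$ and $|\boldsymbol\mu|=R$, we have $M_{}f(\boldsymbol\mu,x_3,\rho)=0$ whenever $\rho\geq 2R$. Applying \eqref{eq:invtorus} with radius $u-r$ (if $r\leq u$) or $u+r$ (if $r>u$) therefore produces $M_{}f(\boldsymbol\mu,x_3,r)$ plus a term with radius $2u-r$ or $2u+r$, respectively, and both of these radii exceed $2R$ because $u>2R$ (using $r\leq u$ in the first case).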
\subsubsection{Planar geometry}
Let $A\subset\mathbb{R}^2$ be the $x_1=0$ line (i.e., the centers of tori are located on the $x_2x_3$-plane in $\mathbb{R}^3$).
Then $R_Tf(\boldsymbol\mu,x_3,r)$ is equal to zero if $f$ is an odd function in $x_1$.
We thus assume the function $f$ to be even in $x_1$.
\begin{thm}\label{thm:inversioninline}
Let $f\in C^\infty_c(B^3_R(0))$ be even in $x_1$.
Then we have
\begin{equation}\label{eq:toriii}
\displaystyle M_{}f(\boldsymbol\mu,x_3,r)=\left\{\begin{array}{ll}\displaystyle\frac{1}{2}\sum^{[\frac {R+u}{2u}]}_{j=0}(-1)^jI^{-1}_2R^*_TR_Tf(\boldsymbol\mu,x_3, (2j+1)u-r)&\mbox{ if }r\leq u,\\
\displaystyle\frac{1}{2}\sum^{[R/2u]}_{j=0}(-1)^{j}I^{-1}_2R^*_TR_Tf(\boldsymbol\mu,x_3, (2j+1)u+r)&\mbox{ otherwise.}
\end{array}\right.
\end{equation}
\end{thm}
\begin{rmk}
When $A$ is a line, we can determine $f$ from $R_Tf$ by applying inversion formulas for the circular Radon transform with centers on a line~\cite{andersson88,fawcett85,nattererw01,nilsson97,reddingn01} to the left hand side of~\eqref{eq:toriii}.
\end{rmk}
\begin{rmk}
If $u>R$ (i.e., the radius of the central circles is bigger than the radius of the ball containing $\operatorname{supp} f$), then
$$
M_{}f(\boldsymbol\mu,x_3,r)=\left\{\begin{array}{ll}2^{-1}I^{-1}_2R^*_TR_Tf(\boldsymbol\mu,x_3,u-r)&\mbox{ if } r<u,\\\\
2^{-1}I^{-1}_2R^*_TR_Tf(\boldsymbol\mu,x_3,u+r)&\mbox{ otherwise.}
\end{array}\right.$$
\end{rmk}
\section{Conclusion}
Here we studied a Radon-type transform arising in PAT with circular detectors, as well as the toroidal Radon transform.
We showed that these transforms can be reduced to well-studied transforms: the circular (or spherical) Radon transform and the Radon transform over circles with a fixed radius.
\end{document}
\begin{document}
\begin{frontmatter}
\title{Integral representation of random variables with respect to
Gaussian processes}
\runtitle{Integral representation}
\begin{aug}
\author{\inits{L.}\fnms{Lauri}~\snm{Viitasaari}\corref{}\ead[label=e1]{[email protected]}}
\address{Department of Mathematics and Systems Analysis, Aalto
University School of Science, Helsinki,
P.O. Box 11100, FIN-00076 Aalto, Finland. \printead{e1}}
\end{aug}
\received{\smonth{3} \syear{2014}}
\revised{\smonth{6} \syear{2014}}
\begin{abstract}
It was shown in Mishura \textit{et al.}
(\textit{Stochastic Process. Appl.} \textbf{123} (2013) 2353--2369),
that any random variable can be represented as improper pathwise
integral with respect to fractional Brownian motion.
In this paper, we extend this result to cover a wide class of Gaussian
processes. In particular, we consider processes that
are H\"{o}lder continuous of order $\alpha>1/2$ and show that only
local properties of the covariance function play a role in such results.
\end{abstract}
\begin{keyword}
\kwd{F\"ollmer integral}
\kwd{Gaussian processes}
\kwd{generalised Lebesgue--Stieltjes integral}
\kwd{integral representation}
\end{keyword}
\end{frontmatter}
\section{Introduction}
In stochastic analysis and its applications such as financial
mathematics, it is an interesting question what kind of random
variables one can
replicate with stochastic integrals. In order to answer this question,
first one needs to consider in which sense the stochastic integral
exists. In particular, if the driving process $X$ is not a
semimartingale it is not clear how to define integrals with respect to
$X$ and what kind of integrands can be integrated with the given
definition of the integral.
The motivation for our work goes back to Dudley \cite{d}, who
showed that any functional $\xi$ of a standard Brownian motion $W$ can
be replicated as an
It\^o integral $\int_0^1 \Psi(s)\,\mathrm{d} W_s$, where $\Psi$ is an adapted
process satisfying $\int_0^1 \Psi^2(s)\,\mathrm{d} s < \infty$ a.s. Moreover,
under the additional assumption $\int_0^1 \mathbb{E}[\Psi^2(s)]\,\mathrm{d} s < \infty$
one can cover only centered random variables with finite variance. On
the other hand, in this case the process $\Psi$ is unique.
Later on, Mishura \textit{et al.} \cite{m-s-v} considered the same problem with the
standard Brownian motion $W$ replaced by fractional Brownian
motion (fBm) $B^H$ with Hurst index $H>\frac{1}{2}$. In this case the
authors considered generalised Lebesgue--Stieltjes integrals with
respect to fBm, which can be defined, thanks to results of Azmoodeh
\textit{et al.} \cite{a-m-v}, for integrands of the form $f(B^H_u)$ where $f$ is a
function of locally bounded variation. As an application of the results
in \cite{m-s-v},\vadjust{\goodbreak} the authors considered financial implications of these
results and gave a negative answer to the problem of zero integral:
does $\int_0^1 \psi(s) \,\mathrm{d} B^H_s=0$ imply that $\psi(s)=0$? This
problem was open for fBm for some time, and previously the result was
known only for Brownian motion.
It is interesting to note that while the stochastic integrals are
defined in different ways, the results for standard Brownian motion and
fBm are quite similar. On the other hand, the key idea for obtaining a
representation of an arbitrary random variable as an integral with respect to
some given process is to use the idea of ``tracking'': first define a
sequence which obviously converges and then track that sequence. The
simplest way to do this is to define an integrand on a given time
interval which diverges in the limit and then use stopping times. This
idea was first used by Dudley \cite{d} for Brownian motion and then by
Mishura \textit{et al.} \cite{m-s-v} for fBm.
In this article, motivated by these two contributing works, we study
the problem for a more general class of Gaussian processes.
In particular, we also consider generalised Lebesgue--Stieltjes
integrals and show that the brilliant construction introduced in \cite{m-s-v} for fBm applies, with small modifications, to more general
Gaussian processes. We also note that the integrals exist also as
forward integrals in the sense of F\"{o}llmer \cite{Follmer}. Our
class of Gaussian processes consists of a wide class of processes which
have versions that are H\"{o}lder continuous of order $\alpha>\frac
{1}{2}$. More precisely, our class consists of H\"older
continuous Gaussian processes $X$ which also satisfy several mild extra
conditions on the corresponding covariance function $R$. In
particular, the class includes many stationary and stationary increment
processes that are H\"older continuous of sufficient order. In order to
obtain such a result for a general class of Gaussian processes, we show
that for the construction introduced in \cite{m-s-v} the only required
facts are local properties of the corresponding covariance function.
Moreover, we show that the replication can be done in an arbitrarily small
amount of time, which has significant implications for finance. As
such, this article is a hybrid of a review paper and an
original research article: we prove results similar to those for fBm and use
the same idea of tracking, so the proofs are quite similar, with only
minor changes needed and no unnecessary complexity added. On the
other hand, the results are extended to a much wider class of processes,
the properties needed for such results are identified, and it is also
shown that the replication can be done in any time interval. We also
discuss applications such as implications for finance and the problem of
zero integral. In particular, the results of this paper indicate that
for pathwise integrals the answer to the problem of zero integral is
usually negative.
The rest of the paper is organised as follows. We start Section~\ref{sec:aux} by recalling the findings obtained in \cite{m-s-v} for fBm.
Moreover, we introduce the key properties of fBm under which the
authors in \cite{m-s-v} obtained their results. We end Section~\ref{sec:aux} by introducing our notation and assumptions. We also
recall basic facts on generalised Lebesgue--Stieltjes integrals and F\"
{o}llmer integrals. In Section~\ref{sec:main}, we introduce and prove
the main results for our general class of processes. We end the paper
with a discussion in Section~\ref{sec:app}, where we briefly discuss
financial applications, uniqueness of the representation and the
problem of zero integral.
\section{Auxiliary facts}\label{sec:aux}
\subsection*{Key properties for fractional Brownian motion}
In \cite{m-s-v}, the authors proved the following:
\begin{itemize}
\item
For any distribution function $F$ there exists an adapted process $\Phi
$ such that $\int_0^1 \Phi(s)\,\mathrm{d} B_s^H$ is well-defined (in the sense
of generalised Lebesgue--Stieltjes integral) and has distribution~$F$.
\item
Any measurable random variable $\xi$ can be represented as an improper
integral, that is, $\xi= \lim_{t\rightarrow1-}\int_0^t \Psi(s)\,\mathrm{d} B^H_s$.
\item
A measurable random variable $\xi$ which is an end value of some H\"
{o}lder continuous process can be represented as a proper integral.
\end{itemize}
Our aim is to establish similar results for a general class of Gaussian
processes. By studying the paper \cite{m-s-v}, one can see that in a
sense the following facts are the main ingredients for such results:
\begin{enumerate}[3.]
\item
It\^o's formula: for every locally bounded variation function $f$ we have
\[
F\bigl(B_T^H\bigr) = \int_0^T
f\bigl(B_u^H\bigr)\,\mathrm{d} B^H_u,
\]
where $F(x) = \int_0^x f(y)\,\mathrm{d} y$,
\item
fBm has stationary increments,
\item
a crossing bound at zero: there exists a constant $C$ such that for
every $0<s<t\leq T$ we have
\[
\mathbb{P}\bigl(B_s^H < 0 < B_t^H\bigr)
\leq C(t-s)^H t^{-H},
\]
\item
small ball probability: there exists a constant $C$ such that for every
$T$ and $\varepsilon$ we have
\[
\mathbb{P}\Bigl(\sup_{0\leq t\leq T}\bigl |B_t^H\bigr | \leq\varepsilon
\Bigr) \leq\exp \bigl(-CT\varepsilon^{-\fraca{1}{H}} \bigr)
\]
provided that $\varepsilon\leq T^H$.
\end{enumerate}
For our purposes, we have results similar to conditions 1 and 3 for a
more general class of processes, obtained by Sottinen and Viitasaari
\cite{s-v2}
(see the subsection below). Conditions 2 and 4 we replace with
weaker assumptions on the covariance structure of the Gaussian process $X$.
\subsection*{Definitions and auxiliary results}
Throughout the paper, we restrict ourselves to a bounded interval $[0,T]$,
which is usually omitted in the notation.
\begin{defn}
Let $X$ be a centered Gaussian process. We denote by $R_X(t,s)$,
$W_X(t,s)$, and $V_X(t)$ its covariance, incremental variance and
variance, that is,
\begin{eqnarray*}
R_X(t,s) &=& \mathbb{E}[X_tX_s],
\\
W_X(t,s) &=& \mathbb{E}\bigl[(X_t - X_s)^2
\bigr],
\\
V_X(t) &=& \mathbb{E}\bigl[X_t^2\bigr].
\end{eqnarray*}
We denote by $w^*_X(t)$ the ``worst case'' incremental variance
\[
w^*_X(t) = \sup_{0\le s\le T-t} W_X(s,s+t).
\]
\end{defn}
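For orientation, for fBm $B^H$ with Hurst index $H$ these quantities take the standard form
\[
R_{B^H}(t,s)=\tfrac{1}{2}\bigl(t^{2H}+s^{2H}-|t-s|^{2H}\bigr),\qquad
W_{B^H}(t,s)=|t-s|^{2H},\qquad V_{B^H}(t)=w^*_{B^H}(t)=t^{2H}.
\]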
Let now $\alpha\in (\frac{1}{2},1 )$. We consider the
following class of processes.
\begin{defn}\label{defn:Xalpha}
A centered continuous Gaussian process $X=(X_t)_{t\in[0,T]}$ with
covariance $R_X$ belongs to the \emph{class} $\mathcal{X}^\alpha_T$
if there is a constant $\delta$ such that for every $u\in[T-\delta
,T)$ the process $Y_t = X_{t+u}-X_u$ for $t\in[0,T-u]$ satisfies:
\begin{enumerate}[3.]
\item
$R_Y(s,t)> 0$ for every $s,t>0$,
\item
the ``worst case'' incremental variance satisfies
\[
w^*_Y(t) = \sup_{0\le s\le T-t-u}W_Y(s,s+t)
\leq Ct^{2\alpha},
\]
where $C> 0$,
\item
there exist $c,\hat{\delta}>0$ such that
\[
V_Y(s) \geq cs^{2}
\]
provided $s\leq\hat{\delta}$,
\item
there exists a $\hat{\delta}>0$ such that
\[
\sup_{0< t<2\hat{\delta}}\sup_{\fraca{t}{2}\leq s\leq t}\frac
{R_Y(s,s)}{R_Y(t,s)}<
\infty.
\]
\end{enumerate}
\end{defn}
The class depends also on the parameter $\delta$, which will be omitted from
the notation.
Note that the definition is quite technical. However, the conditions
are needed in order to have the It\^o formula and the crossing bound for
the incremental process $Y$ close to time $T$. Moreover, the results for
fBm rely on the fact that $B^H$ has stationary increments. For our
class we simply need a certain structure of the covariance close to $T$. The
idea behind the results is that before some point $t=T-\delta$ we simply
wait and do nothing. Moreover, the following remarks and examples show
that the assumptions are not very restrictive and are satisfied by
many Gaussian processes. For further discussion and details, see \cite{s-v2},
where the class was first introduced with the covariance of
$X$ itself satisfying properties 1--4.\vadjust{\goodbreak}
\begin{rmk}
\begin{enumerate}[3.]
\item
Note that the first condition means that the increments of the process
are positively correlated close to time $T$. More precisely, we need
\[
R_X(t+u,s+u)+R_X(u,u) > R_X(t+u,u) +
R_X(u,s+u).
\]
In other words, the covariance function should have positive increments
on rectangles.
\item
The second condition implies that $Y$ has a version which is H\"{o}lder
continuous of any order $a<\alpha$. For the rest of the paper, we
assume that this version is chosen.
\item
A special subclass of $\mathcal{X}^\alpha_T$ are processes with
stationary increments. In this case, we have
\begin{eqnarray*}
R_Y(t,s) &=& R_X(t,s)=\tfrac{1}{2}
\bigl[V(t)+V(s)-V(t-s) \bigr],
\\
W_Y(t,s) &=& W_X(t,s)=V_X(t-s),
\\
w^*_Y(t) &=& w^*_X(t)=V_X(t).
\end{eqnarray*}
Especially, stationary increment processes with $W_X(t,s)\sim
|t-s|^{2\alpha}$ at zero with $\alpha>\frac{1}{2}$ belong to
$\mathcal{X}^\alpha_T$ for every $T$. In particular, fBm with Hurst
index $H>\frac{1}{2}$ belongs to $\mathcal{X}^\alpha_T$.
\item
Another special subclass of $\mathcal{X}^\alpha_T$ are stationary
processes. In this case, we have
\begin{eqnarray*}
R_X(t,s) &=& r(t-s),
\\
W_X(t,s) &=& 2 \bigl[r(0)-r(t-s) \bigr],
\\
V_X(t) &=& r(0),
\\
w^*_X(t) &=& 2 \bigl[r(0)-r(t) \bigr]
\end{eqnarray*}
and
\begin{eqnarray*}
R_Y(t,s) &=& r(t-s)+r(0)-r(t)-r(s),
\\
W_Y(t,s) &=& W_X(t,s),
\\
V_Y(t) &=& W_X(t+u,u) = w^*_X(t),
\\
w^*_Y(t) &=& w^*_X(t).
\end{eqnarray*}
Consequently, for a stationary process $X$ with covariance function
$r(t)$ we have $X\in\mathcal{X}^\alpha_T$ if
$r(t)$ satisfies
\begin{eqnarray*}
r(t-s)+r(0) &>& r(t) + r(s),
\\
ct^2&\leq& r(0)-r(t) \leq Ct^{2\alpha}
\end{eqnarray*}
and
\[
\sup_{0< t<2\hat{\delta}}\sup_{\fraca{t}{2}\leq s\leq t} \frac
{r(0)-r(s)}{r(t-s)+r(0)-r(t)-r(s)} <
\infty.
\]
Especially, processes with strictly decreasing covariance at zero
satisfy assumptions 1 and~4. In particular, stationary
processes with strictly decreasing covariance and $W_X(t,s)\sim
|t-s|^{2\alpha}$ at zero with $\alpha>\frac{1}{2}$ belong to
$\mathcal{X}^\alpha_T$ for every $T$. As an example, the process $X$
with covariance function
\[
r(t) = \exp \bigl(-|t|^{2\alpha} \bigr)
\]
with $\frac{1}{2}<\alpha<1$
belongs to $\mathcal{X}^\alpha_T$. We will use this process as a
motivating example throughout the paper, and we will denote this
process by $\tilde{X}$.
\end{enumerate}
\end{rmk}
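As a quick check for the motivating example, note that $\tilde{X}$ has $r(0)-r(t)=1-\mathrm{e}^{-|t|^{2\alpha}}\sim|t|^{2\alpha}$ as $t\to0$ and that $r$ is strictly decreasing on $(0,\infty)$; hence $W_{\tilde{X}}(t,s)=2(1-\mathrm{e}^{-|t-s|^{2\alpha}})\sim2|t-s|^{2\alpha}$ near the diagonal, in line with the conditions discussed above.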
The following statement derived in Sottinen and Viitasaari \cite{s-v2}
is one of the main ingredients for our study.
\begin{them}
\label{thm:ito}
Let $X\in\mathcal{X}^{\alpha}_T$ with $\alpha>\frac{1}{2}$ and let
$f$ be a function of locally bounded variation. Set $F(x) =\int_0^x
f(y)\,\mathrm{d} y$. Then
\begin{equation}
\label{ito} F(X_T - X_u) = \int_u^T
f(X_s - X_u) \,\mathrm{d} X_s
\end{equation}
provided $u\in[T-\delta,T)$, where the integral can be understood as
a generalised Lebesgue--Stieltjes integral or as a F\"{o}llmer integral.
\end{them}
\begin{rmk}
In the original paper \cite{s-v2}, the authors considered only convex
functions. However, by examining the proof it is evident that the
result holds also for functions of locally bounded variation.
\end{rmk}
Furthermore, we make the following assumption for small ball
probabilities. The examples are discussed in the next subsection.
\begin{Assumption}\label{assu:smallball}
There exist constants $C,\delta>0$ such that for every $s,t\in
[T-\delta,T]$ with $t=s+\Delta$ it holds
\begin{equation}
\label{smallball} \mathbb{P}\Bigl(\sup_{s\leq u \leq t} |X_u-
X_s| \leq\varepsilon\Bigr) \leq \exp \bigl(-C\Delta\varepsilon^{-\fraca{1}{\alpha}}
\bigr)
\end{equation}
provided that $\varepsilon\leq\Delta^\alpha$.
\end{Assumption}
\subsection*{Which processes satisfy the Assumption \texorpdfstring{\protect\ref{assu:smallball}}{2.6}?}
In this subsection, we briefly review what kind of processes $X\in
\mathcal{X}^\alpha_T$ satisfy Assumption~\ref{assu:smallball}.
In general, the small ball probabilities are an interesting subject of
study and a survey on small ball probabilities is given by Li and Shao
\cite{l-s}, where the following theorem can also be found.
\begin{them}
Let $\{X_t,t\in[0,1]\}$ be a centered Gaussian process with $X_0=0$.
Assume that there is a function $\sigma^2(h)$ such that
\[
\forall0\leq s,t\leq1,\qquad \mathbb{E}(X_s-X_t)^2
\leq\sigma^2\bigl(|t-s|\bigr),
\]
and that there are $0<c_1\leq c_2<1$ such that $c_1\sigma(2h\wedge
1)\leq\sigma(h) \leq c_2 \sigma(2h\wedge1)$ for $0< h<1$. Then
there exists $K>0$ depending only on $c_1$ and $c_2$ such that
\[
\mathbb{P} \Bigl(\sup_{0\leq t\leq1}|X_t| \leq\sigma(\varepsilon)
\Bigr) \geq\exp \biggl(-\frac{K}{\varepsilon} \biggr).
\]
\end{them}
\begin{exm}
It is straightforward that fBm satisfies the assumptions for any $H\in(0,1)$.
\end{exm}
As a direct consequence, we obtain the following statement.
\begin{cor}
Let $X\in\mathcal{X}^\alpha_T$. Then for every $t\in[0,T]$ there
exist $\Delta>0$ and $K>0$ such that
\[
\mathbb{P} \Bigl(\sup_{s\leq u\leq t}|X_u-X_s| \leq
\varepsilon \Bigr) \geq \exp \bigl(-K\Delta\varepsilon^{-\fraca{1}{\alpha}} \bigr),
\]
provided that $|t-s|\leq\Delta$.
\end{cor}
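In brief (a sketch only): by condition 2 of Definition~\ref{defn:Xalpha}, the incremental variance of the rescaled process $v\mapsto X_{s+\Delta v}-X_s$, $v\in[0,1]$, is bounded by $\sigma^2(h)=C(\Delta h)^{2\alpha}$ on the relevant range, and the preceding theorem applies to this choice of $\sigma$; solving $\sigma(\varepsilon')=\varepsilon$ then turns the lower bound $\exp(-K/\varepsilon')$ into a bound of the form $\exp(-K\Delta\varepsilon^{-\fraca{1}{\alpha}})$.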
According to this corollary the bound given in
Assumption~\ref{assu:smallball} is the best possible in terms of $\Delta$ and
$\varepsilon$. The upper bound is more difficult to obtain. Moreover, it
is pointed out in \cite{l-s} that the incremental variance is not an
appropriate tool for the upper bound. However, in many cases of
interest we can have the required upper bound. In particular, many
cases of interest have stationary increments or
are stationary processes. For processes with stationary increments, the
following theorem can be used to study the upper bound. For the proof,
we refer to \cite{k-l-s} where a slightly more general setup was considered.
\begin{them}
\label{thm:stat_inc_smallball}
Assume that the centered process $X$ has stationary increments and the
incremental variance $W(t,s) = W(0,t-s)$ satisfies:
\begin{enumerate}[2.]
\item
There exists $\theta\in(0,4)$ such that for every $x\in
[0,\frac{1}{2} ]$ we have
\[
W(0,2x) \leq\theta W(0,x).
\]
\item
For every $0<x<1$ and $2\leq j\leq\frac{1}{x}-2$, we have
\begin{eqnarray}
\label{hassu-konditio}
&&6W(0,jx) + W\bigl(0,(j+2)x\bigr)+W\bigl(0,(j-2)x
\bigr)
\nonumber
\\[-8pt]
\\[-8pt]
&&\quad\geq4W\bigl(0,(j+1)x\bigr) + 4W\bigl(0,(j-1)x\bigr).
\nonumber
\end{eqnarray}
\end{enumerate}
Then there exists a constant $K>0$ such that for every $\varepsilon\in
(0,1)$ we have
\[
\mathbb{P} \Bigl(\sup_{0\leq t\leq1}|X_t-X_0| \leq
\sqrt{W(0,\varepsilon )} \Bigr) \leq\exp \biggl(-\frac{K}{\varepsilon} \biggr).
\]
\end{them}
\begin{rmk}
In the original theorem, it was stated that instead of
(\ref{hassu-konditio}) it is also sufficient that the
incremental variance $W(t,s)$ is concave. Note that in our case usually
$W(0,t)\sim t^{2\alpha}$ with $\alpha>\frac{1}{2}$. Hence, $W(t,s)$
cannot be concave.
\end{rmk}
\begin{rmk}
We remark that the result holds also for stationary Gaussian processes.
\end{rmk}
\begin{cor}
Assume that $X\in\mathcal{X}^\alpha_T$ has stationary increments or
is stationary such that $W(0,t)\sim t^{2\alpha}$. Then
Assumption~\ref{assu:smallball} is satisfied.
\end{cor}
\begin{pf}
It is straightforward to see that a function $W(0,x)=x^{2\alpha}$
satisfies (\ref{hassu-konditio})
provided \mbox{$\alpha> \frac{1}{2}$}. It remains to note that with $\delta
$ small enough, we have $W(0,t-s) \sim C|t-s|^{2\alpha}$ provided
\mbox{$|t-s|\leq\Delta$}.
\end{pf}
\begin{exm}
As special examples we note that fBm $B^H$ with $H>\frac{1}{2}$ and
the process $\tilde{X}$ satisfy Assumption~\ref{assu:smallball}.
\end{exm}
For general processes $X\in\mathcal{X}^\alpha_T$, it is not clear
when Assumption~\ref{assu:smallball} is satisfied.
In principle, one can derive a result similar to
Theorem~\ref{thm:stat_inc_smallball} under similar conditions.
However, in this case the incremental variance function $W(t+s,s)$
depends also on the starting point $s$.
Consequently, one needs to check the condition when $s$ is close to
$T$. Hence in this case, the structure of the covariance function is
more important.
\subsection*{Pathwise integrals}
In this section, we briefly introduce two kinds of pathwise integrals.
\subsubsection*{Generalized Lebesgue--Stieltjes Integral}
The generalized Lebesgue--Stieltjes integral is based on fractional
integration and fractional Besov spaces. For details on these topics,
we refer to \cite{s-k-m} and \cite{n-r}.
Recall first the definitions for fractional Besov norms and
Lebesgue--Liouville fractional integrals and derivatives.
\begin{defn}
Fix $ 0 <\beta< 1 $.
\begin{enumerate}[2.]
\item
The \emph{fractional Besov space} $W^{\beta}_1 = W^{\beta}_1
([0,T])$ is the space of real-valued measurable functions $ f \dvtx [0,T]
\to\mathbb{R}$ such that
\[
{\Vert f \Vert}_{1,\beta} = \sup_{0 \le s < t \le T} \biggl(
\frac
{|f(t) - f(s)|}{(t-s)^\beta} + \int_{s}^{t} \frac{|f(u) - f(s)
|}{(u-s)^ {1+\beta}}
\,\mathrm{d} u \biggr) < \infty.
\]
\item
The \emph{fractional Besov space} $W^{\beta}_2 = W^{\beta}_2
([0,T])$ is the space of real-valued measurable functions $ f \dvtx [0,T]
\to\mathbb{R}$ such that
\[
{\Vert f \Vert}_{2,\beta} = \int_{0}^{T}
\frac{|f(s)|}{s^ \beta} \,\mathrm{d} s + \int_{0}^{T}\int
_{0}^{s} \frac{|f(u) - f(s) |}{(u-s)^
{1+\beta}} \,\mathrm{d} u \,\mathrm{d} s < \infty.
\]
\end{enumerate}
\end{defn}
In this paper, we study the norm ${\Vert f \Vert}_{2,\beta}$ on
different intervals $[0,t]$. Hence we use the short notation ${\Vert f
\Vert}_{t,\beta}$.
\begin{rmk}\label{r:rmk1}
Let $C^{\alpha}=C^{\alpha}([0,T])$ denote the space of H\"{o}lder
continuous functions of order $\alpha$ on $[0,T]$ and let $ 0<
\varepsilon< \beta\wedge(1- \beta)$. Then
\[
C^{\beta+ \varepsilon} \subset W^{\beta}_{1} \subset
C^{\beta-
\varepsilon} \quad\mbox{and}\quad C^{\beta+ \varepsilon} \subset
W^{\beta}_{2}.
\]
\end{rmk}
\begin{defn}
Let $t\in[0,T]$. The \emph{Riemann--Liouville fractional integrals}
$I^\beta_{0+}$ and $I^\beta_{t-}$ of order $\beta> 0$ on $[0,T]$ are
\begin{eqnarray*}
\bigl(I^\beta_{0+} f\bigr) (s) &=& \frac{1}{\Gamma(\beta)} \int
_0^s f(u) (s-u)^{\beta-1} \,\mathrm{d} u,
\\
\bigl(I^\beta_{t-} f\bigr) (s) &=& \frac{\mathrm{e}^{\mathrm{i}\uppi\beta}}{\Gamma(\beta)} \int
_s^t f(u) (u-s)^{\beta-1} \,\mathrm{d} u,
\end{eqnarray*}
where $\Gamma$ is the Gamma-function. The \emph{Riemann--Liouville
fractional derivatives} $D^{\beta}_{0+}$ and $D^{\beta}_{t-}$ are the
left-inverses of the corresponding integrals $I^\beta_{0+}$ and
$I^\beta_{t-}$. They can also be defined via the \emph{Weyl
representation} as
\begin{eqnarray*}
\bigl(D^{\beta}_{0+} f\bigr) (s) &=& \frac{1}{\Gamma(1-\beta)} \biggl(
\frac{f(s)}{s^\beta} + \beta \int_{0}^{s}
\frac{f(s) - f(u)}{(s-u)^{\beta+ 1}} \,\mathrm{d} u \biggr),
\\
\bigl(D^{\beta}_{t-} f\bigr) (s) &=& \frac{\mathrm{e}^{\mathrm{i}\uppi\beta}}{\Gamma(1-\beta)} \biggl(
\frac
{f(s)}{(t-s)^\beta} + \beta\int_{s}^{t}
\frac{f(s) -
f(u)}{(u-s)^{\beta+ 1}} \,\mathrm{d} u \biggr)
\end{eqnarray*}
if $f\in I^\beta_{0+}(L^1)$ or $f\in I^\beta_{t-}(L^1)$, respectively.
\end{defn}
Denote $g_{t-}(s) = g(s)-g(t-)$.
The generalized Lebesgue--Stieltjes integral is defined in terms of
fractional derivative operators according to the next proposition.
\begin{prop}[(\cite{n-r})]\label{pr:n-r}
Let $0<\beta<1$ and let $f \in W^{\beta}_2$ and $g\in W^{1-\beta
}_1$. Then for any $t \in(0,T]$ the \emph{generalized
Lebesgue--Stieltjes integral} exists as the following Lebesgue integral
\[
\int_0^t f(s) \,\mathrm{d} g(s) = \int
_{0}^{t} \bigl(D^{\beta}_{0+}
f_{0+}\bigr) (s) \bigl(D^{1- \beta}_{t-} g_{t-}
\bigr) (s) \,\mathrm{d} s
\]
and is independent of $\beta$.
\end{prop}
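In the present paper, this proposition is applied with $g$ a path of the Gaussian process: since the paths have versions that are H\"{o}lder continuous of any order $a<\alpha$ with $\alpha>\frac{1}{2}$, Remark~\ref{r:rmk1} guarantees that $g\in W^{1-\beta}_1$ for any $\beta\in(1-\alpha,\frac{1}{2})$, and it then remains to verify that the integrand belongs to $W^{\beta}_2$.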
We will use the following estimate to prove the existence of F\"{o}llmer integrals.
\begin{them}[(\cite{n-r})]\label{t:n-r}
Let $ f \in W^{\beta}_2$ and $ g \in W^{1- \beta}_1$. Then we have
the bound
\[
\biggl\vert \int_{0}^t f(s) \,\mathrm{d} g(s) \biggr
\vert \le\sup_{0\le s < t \le T} \bigl | D^{1 - \beta}_{t-}
g_{t-}(s)\bigr | {\Vert f \Vert}_{2,\beta}.
\]
\end{them}
\subsubsection*{F\"{o}llmer integral}
We also recall the definition of a forward-type Riemann--Stieltjes
integral due to F\"{o}llmer \cite{Follmer} (for English translation,
see \cite{Sondermann}).
\begin{defn}\label{defn:follmer-integral}
Let $(\pi_n)_{n=1}^{\infty}$ be a sequence of partitions
$\pi_n=\{0=t_0^n<\cdots<t_{k(n)}^n=T\}$ such that $|\pi_n|=\max_{j=1,\ldots,k(n)}|t_j^n-t_{j-1}^n|\rightarrow0$ as $n\to\infty$.
Let $X$ be a continuous process. The \emph{F\"{o}llmer integral along
the sequence} $(\pi_n)_{n=1}^{\infty}$ of $Y$ with respect to $X$ is
defined as
\[
\int_0^t Y_u \,\mathrm{d}
X_u = \lim_{n\rightarrow\infty} \sum
_{t_j^n\in
\pi_n \cap(0,t]}Y_{t_{j-1}^n}(X_{t_j^n}-X_{t_{j-1}^n}),
\]
if the limit exists a.s.
\end{defn}
The F\"{o}llmer integral is a natural choice for applications such as
finance. However, it is usually difficult to prove the existence of the
F\"{o}llmer integral. For instance, for finite quadratic variation
processes the existence of the integral is a consequence of It\^o's
formula. On the other hand, generalised Lebesgue--Stieltjes integrals
provide a tool to obtain the existence of the F\"{o}llmer integral. For
instance, in \cite{s-v2} the authors first proved the existence of a
generalised Lebesgue--Stieltjes integral and then obtained the existence
of the F\"{o}llmer integral by applying Theorem~\ref{t:n-r}.
\section{Main results}\label{sec:main}
We begin with the following technical lemma which gives the diverging
integrand. In our case, it can be defined similarly to the fBm case. Hence,
we only present the key points of the proof.
\begin{lma}\label{lma:aux}
Let $X\in\mathcal{X}^\alpha_T$ such that
Assumption~\ref{assu:smallball} is satisfied.
Then one can construct an $\mathbb{F}$-adapted process $\phi_T$ on
$[0,T]$ such that the integral
\[
\int_0^s \phi_T(u)\,\mathrm{d}
X_u
\]
exists for every $s<T$ and
\begin{equation}
\label{rep:aux-lemma} \lim_{s\rightarrow T-} \int_0^s
\phi_T(u)\,\mathrm{d} X_u = \infty
\end{equation}
a.s.\vadjust{\goodbreak}
\end{lma}
\begin{pf}
Fix numbers $\gamma\in (1,\frac{1}{\alpha} )$ and $\eta
\in (0,\frac{1}{\gamma\alpha}-1 )$. Furthermore,
set $t_0=0$ and $t_n= \sum_{k=1}^n \Delta_k$, $n\geq1$ where $\Delta
_n = \frac{Tn^{-\gamma}}{\sum_{k=1}^\infty k^{-\gamma}}$,
and define a function $f_{\eta}(x)=(1+\eta)|x|^{\eta}\operatorname{sign}(x)$.
Note that we can assume without loss of generality that the
conditions of Definition~\ref{defn:Xalpha} hold on the whole interval. Otherwise, set
$t_1=T-\delta$ and start after $t_1$.
\[
\tau_n = \min \bigl\{t\geq t_{n-1} \dvt |X_t -
X_{t_{n-1}}| \geq n^{-\frace{1}{1+\eta}} \bigr\} \wedge t_n
\]
and
\[
\phi_T(s) = \sum_{n=1}^\infty
f_\eta(X_s - X_{t_{n-1}})\mathbf {1}_{[t_{n-1},\tau_n)}(s).
\]
In order to complete the proof, we have to show that $\Vert \phi
_T\Vert _{s,\beta}<\infty$ a.s. for every $s<T$ and that
(\ref{rep:aux-lemma}) holds. The fact that $\Vert \phi_T\Vert _{s,\beta
}<\infty$ can be proved similarly as in the fBm case in \cite{m-s-v}
together with Theorem~\ref{thm:ito}. Hence, it remains to show that
(\ref{rep:aux-lemma}) holds.
First by Theorem~\ref{thm:ito}, we get that for every
$s\in[t_{n-1},t_n)$
\[
\int_0^s \phi_T(u)\,\mathrm{d}
X_u = \sum_{k=1}^{n-1}|X_{\tau
_k}-X_{t_{k-1}}|^{1+\eta}
+ |X_{s\wedge\tau_n} - X_{t_{n-1}}|^{1+\eta}.
\]
Now, as in the case of fBm, it is enough to show that only finitely
many of the events $A_n$ occur, where $A_n$ is defined by
\[
A_n = \Bigl\{\sup_{t_{n-1}\leq t\leq t_n}|X_t -
X_{t_{n-1}}|< n^{-\frace{1}{\eta+1}} \Bigr\}.
\]
Indeed, on the complement of $A_k$ we have $\tau_k\leq t_k$ and, by continuity, $|X_{\tau_k}-X_{t_{k-1}}|^{1+\eta}=k^{-1}$, so if only finitely many of the $A_k$ occur, the above sum dominates a tail of the harmonic series and diverges.
But now, by Assumption~\ref{assu:smallball}, we have
\[
\mathbb{P}(A_n) \leq \mathrm{e}^{-Cn^{-\gamma+ \frace{1}{\alpha(\eta+1)}}}
\]
for $n$ large enough. By our choices of $\gamma$ and $\eta$ (note that $\eta<\frac{1}{\gamma\alpha}-1$ gives $\frac{1}{\alpha(\eta+1)}>\gamma$, so the exponent is positive), we
obtain $\sum_{n\geq1}\mathbb{P}(A_n) < \infty$, and thus the result follows
from the Borel--Cantelli lemma.
\end{pf}
\begin{rmk}
The same result can be obtained for integrals over any interval
$[s,t]\subset[T-\delta,T]$.
\end{rmk}
\begin{rmk}
It was remarked in the paper by Mishura \textit{et al.} \cite{m-s-v} that for fBm
it is easy to see
that $\Vert \phi_T\Vert _{t,\beta}<\infty$ even for random times $t<T$. This
is indeed natural, since It\^o's formula (\ref{ito}) also holds
for any bounded random time $\tau$ (see \cite{s-v2} for details).
\end{rmk}
\begin{rmk}
It was shown in \cite{a-m-v} that for fBm one can approximate the
integral of It\^o's
formula (\ref{ito}) with Riemann--Stieltjes sums along a uniform
partition, that is, the integral exists also as a F\"{o}llmer integral.
Moreover, it was pointed out in \cite{s-v2} that this is true for more
general processes $X\in\mathcal{X}^\alpha_T$ and any partition.
Hence for any $n$, the integral
\[
\int_{t_{n-1}}^{t_n} f_{\eta}(X_s -
X_{t_{n-1}})\,\mathrm{d} X_s
\]
exists also as F\"{o}llmer integral. Now by noting that
$\phi_T(s)$ is defined as a linear combination of functions of this
form it is evident that the integral
\[
\int_0^t \phi_T(s)\,\mathrm{d}
X_s
\]
exists also as F\"{o}llmer integral for every $t<T$. The same
conclusion holds true also for other results presented in this paper.
\end{rmk}
As direct corollaries, we obtain that an integral with respect to $X$
can have any distribution and that any measurable random variable can
be represented as an improper integral; these are the same results as for fBm. For
the sake of completeness, we present the results.
\begin{cor}
\label{cor:distribution_rep}
For any cdf $F$ one can construct an adapted process $\psi_T(s)$ such
that $\int_0^T \psi_T(s) \,\mathrm{d} X_s$ has distribution $F$.
\end{cor}
\begin{pf}
The proof follows the same arguments as for fBm in \cite{m-s-v} except
that, since we do not know how the process $X$ behaves before some time
close to $T$, we have to choose some point $v<T$ such that $X_v$ has
non-vanishing variance. The rest follows by the same arguments with
obvious changes.\vadjust{\goodbreak}
\end{pf}
\begin{rmk}
Note that the result remains true if we replace the process $X$ with $Y =
h(X)$, where $h$ is a
strictly monotone $C^1$ function. In this case integrals of the form
\[
\int_0^T \psi_T(s) \,\mathrm{d}
Y_s
\]
are well defined by results in \cite{s-v2}. We remark that the result
is still valid even if the function $h$ is uniformly bounded.
\end{rmk}
\begin{them}
\label{thm:arb_rv-rep}
Let $(\Omega,\mathcal{F},\mathbb{P})$ be a complete probability space with
left-continuous filtration $\mathbb{F}=\{\mathcal{F}_t\}_{t\in
[0,T]}$ and let $X\in\mathcal{X}^\alpha_T$ such that
Assumption~\ref{assu:smallball} is satisfied.
Then for any $\mathcal{F}_T$-measurable random variable $\xi$ one can
construct an
$\mathbb{F}$-adapted process $\Psi_T$ on $[0,T]$ such that the integral
\[
\int_0^s \Psi_T(u)\,\mathrm{d}
X_u
\]
exists for every $s<T$ and
\[
\lim_{s\rightarrow T-} \int_0^s
\Psi_T(u)\,\mathrm{d} X_u = \xi
\]
a.s.
\end{them}
\begin{pf}
As in the proof of Lemma~\ref{lma:aux}, we can assume that the assumptions
of Definition~\ref{defn:Xalpha} are satisfied on the whole interval.
First, put $Y_t = \tan\mathbb{E}[\operatorname{arctan}\xi|\mathcal{F}_t]$. Now
$Y_t$ is adapted, and we have
$Y_t \rightarrow\xi$ as $t\rightarrow T-$ a.s. by the martingale
convergence theorem and the left-continuity of $\mathbb{F}$. Next, for a
sequence $t_n$ increasing to $T$, set
$\delta_n = Y_{t_n} - Y_{t_{n-1}}$ and $\tau_n= \inf \{t \geq
t_n \dvt Z_t^n = |\delta_n| \}$, where $Z_t^n = \int_{t_n}^t \phi
_{t_{n+1}}(s)\,\mathrm{d} X_s$, and $\phi_{t_{n+1}}(s)$ is the process
constructed in Lemma~\ref{lma:aux} such that $Z_t^n \rightarrow\infty
$ as $t\rightarrow t_{n+1}$. By setting
\[
\mathbb{P}si_T(s) = \sum_{n\geq1}
\phi_{t_{n+1}}(s)\mathbf{1}_{[t_n,\tau
_n]}(s)\operatorname{sign}(
\delta_n)
\]
we can repeat the arguments in \cite{m-s-v} to conclude that
\[
V_t := \lim_{t\rightarrow T-}\int_0^t
\mathbb{P}si_T(s)\,\mathrm{d} X_s = \xi.
\]
\upqed\end{pf}
\begin{rmk}
Consider an arbitrary $\mathbb{F}$-adapted process $Y_t$. If for
every $t\in(0,T]$ we have $X\in\mathcal{X}^\alpha_t$, then by
Theorem~\ref{thm:arb_rv-rep} we have that for
every $t$ there is a process $\Psi_t(u)$ such that the process
\[
V_t := \lim_{s\rightarrow t-}\int_0^s
\Psi_t(u)\,\mathrm{d} X_u
\]
is a version of $Y_t$.
\end{rmk}
For the proof of our main theorem we also need a bound for the
probability that a Gaussian process $X$ crosses a zero level. The bound
is a consequence of the following more general result proved in \cite{s-v2}.
\begin{lma}
\label{lma:crossing2}
Let $X$ be a centered Gaussian process with strictly positive and
bounded covariance function $R$, $0 < s < t \le T$
and $ a \in\mathbb{R}$. Then there exists a universal constant $C=C(T)$ such that
\begin{eqnarray*}
&&{\mathbb{P} ( X_s < a < X_t )}
\\
&&\quad \le C \frac{\sqrt{W(t,s)}}{\sqrt{V(s)}} \biggl[1+\frac
{R(s,s)}{R(t,s)}+ \frac{|a|\mathrm{e}^{-\frace{a^2}{2V^*}}}{\sqrt{V(s)}}
\max \biggl(1,\frac{R(s,s)}{R(t,s)} \biggr) \biggr],
\end{eqnarray*}
where
\[
V^* = \sup_{s\leq T}V(s).
\]
\end{lma}
\begin{cor}
\label{lma:crossing}
Let $X$ be a centered Gaussian process with positive and bounded
covariance function $R(s,t)$, and let $0<s\leq t\leq T$ be fixed. Then
there exists a constant $C=C(T)$ such that
\[
\mathbb{P}(X_s < 0 < X_t) \leq C \sqrt{\frac{W(t,s)}{V(s)}}
\biggl[1+\frac
{R(s,s)}{R(t,s)} \biggr].
\]
\end{cor}
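For the reader's convenience, we indicate how the corollary follows from the lemma; this is only a sketch and uses nothing beyond the statement of Lemma~\ref{lma:crossing2}. Taking $a=0$ there, the last summand inside the brackets vanishes and we are left with
\[
\mathbb{P}(X_s < 0 < X_t) \leq C \frac{\sqrt{W(t,s)}}{\sqrt{V(s)}} \biggl[1+\frac{R(s,s)}{R(t,s)}\biggr],
\]
which is the claimed bound with the same constant $C=C(T)$.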
In \cite{m-s-v} the authors also studied when a random variable $\xi$
can be viewed as a proper integral, that is,
\[
\xi= \int_0^1 \Psi(s)\,\mathrm{d} B^H_s
\]
for some process $\Psi(s)$. It was shown in \cite{m-s-v}
that this is true if $\xi$ can be viewed as an endpoint of some
stochastic process which is H\"{o}lder continuous of some order $a>0$.
Moreover, under the assumption that $\Psi$ is continuous the authors also
proved that the conditions are necessary. As the proof is based on
similar arguments as the proofs of the previous theorems, it is not a
surprise that we can derive similar results for our general class of
processes. However, in our general case we have to modify the proof
accordingly by choosing the parameters differently. Consequently, we can
only cover random variables $\xi$ which are endpoints of H\"{o}lder continuous processes of order
$a>1-\alpha$. For extensions, see Remark~\ref{rmk:H} below.
\begin{them}
\label{thm:H_rv-rep}
Let $X\in\mathcal{X}^\alpha_T$ such that Assumption~\ref{assu:smallball} is satisfied, and
let $\xi$ be an $\mathcal{F}_T$-measurable random variable. If there
exists a H\"{o}lder continuous process $Z_s$ of order
$a>1-\alpha$ such that $Z_T = \xi$, then one can construct an $\mathbb
{F}$-adapted process $\Psi_T$ on $[0,T]$ such that
the integral
\[
\int_0^T \Psi_T(s)\,\mathrm{d}
X_s
\]
exists and
\[
\int_0^T \Psi_T(s)\,\mathrm{d}
X_s = \xi
\]
a.s.
\end{them}
As in the proof of Lemma~\ref{lma:aux} and without loss of generality,
we assume that the conditions of Definition~\ref{defn:Xalpha} are
satisfied for the whole interval. Otherwise we simply choose $t_1$
large enough so that we are close to $T$.
\begin{pf*}{Proof of Theorem~\ref{thm:H_rv-rep}}
Without loss of generality, we can assume $a<\alpha$. Let $\beta
\in (1-\alpha,a\wedge\frac{1}{2} )$ and fix $\gamma>
\frac{1}{a-\beta}\vee1$. We
put $\Delta_n = \frac{Tn^{-\gamma}}{\sum_{k=1}^{\infty}k^{-\gamma
}}$ and set $t_0=0$, $t_n = \sum_{k=1}^{n-1}\Delta_k$, $n\geq2$.
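As a quick sanity check of the time grid (this is only an illustration and is not used in the proof), note that $\sum_{k\geq1}\Delta_k=T$, so that $t_n\uparrow T$; for instance, if the value $\gamma=2$ were admissible for the given $\alpha$, $a$, and $\beta$, one would have
\[
\Delta_n=\frac{6T}{\pi^2 n^2},\qquad t_n=T-\sum_{k\geq n}\frac{6T}{\pi^2 k^2}.
\]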
Note that with our choice of $\gamma$ and $\beta$ we have $\gamma
(\alpha-\beta)-1> \gamma(\alpha-a)$. Hence, we can choose some
$\kappa\in(\gamma(\alpha- a), \gamma(\alpha-\beta)-1)$. Next, we
proceed as in the fBm case and
divide the proof into three steps:
\begin{enumerate}[3.]
\item Set $\Psi_T(t) = 0$ on the interval $[t_0,t_1]$. To proceed, the
construction is done recursively on the intervals $(t_n,t_{n+1}]$ and is
divided into two cases depending on whether we have
$Y_{t_{n-1}} = Z_{t_{n-2}}$ (Case A) or $Y_{t_{n-1}} \neq Z_{t_{n-2}}$
(Case B).
For the sake of completeness and clarity, we present the steps.
Put $Y_t = \int_0^t \Psi_T(s)\,\mathrm{d} X_s$ and assume that $\Psi_T(s)$ is
constructed on $[0,t_{n-1}]$ for some $n\geq2$. If we have Case A,
then we set
\[
\tau_n = \inf \bigl\{t\geq t_{n-1} \dvt
n^{\kappa}|X_t- X_{t_{n-1}}|= |Z_{t_{n-1}}-Z_{t_{n-2}}|
\bigr\} \wedge t_n
\]
and for $s\in[t_{n-1},t_n)$,
\[
\Psi_T(s) = n^{\kappa}\operatorname{sign}(X_s -
X_{t_{n-1}})\operatorname{sign}(Z_{t_{n-1}}-Z_{t_{n-2}})
\mathbf{1}_{[0,\tau_n]}(s).
\]
Now if $\tau_n < t_n$, we obtain by It\^o's formula (\ref{ito}) that
\[
Y_{t_n} = Z_{t_{n-1}}.
\]
Assume next that we have Case B. Then we proceed as in
Theorem~\ref{thm:arb_rv-rep} and set
\[
Y_t^n = \int_{t_{n-1}}^{t}
\phi_{t_n}(s)\,\mathrm{d} X_s,
\]
where $\phi_{t_n}(s)$ is the process constructed in
Lemma~\ref{lma:aux} such that $Y_t^n\rightarrow\infty$ as $t\rightarrow t_n$,
\[
\tau_n = \inf \bigl\{t\geq t_{n-1} \dvt
Y_t^n= |Z_{t_{n-1}}-Y_{t_{n-1}}| \bigr\},
\]
and for $s\in[t_{n-1},t_n)$,
\[
\Psi_T(s) = \phi_{t_n}(s) \operatorname{sign}(Z_{t_{n-1}}-Y_{t_{n-1}})
\mathbf{1}_{[0,\tau_n]}(s).
\]
Then $Y_{t_n} = Z_{t_{n-1}}$.
\item
Next, note that for a fixed $n$, the only possibility that $Y_{t_n}
\neq Z_{t_{n-1}}$ is that we have Case A and $\tau_n \geq t_n$. Hence,
it suffices to show that the event
\[
C_n = \Bigl\{\sup_{t_{n-1}\leq t\leq t_n}n^{\kappa}|X_t
- X_{t_{n-1}}|\leq|Z_{t_{n-1}}-Z_{t_{n-2}}| \Bigr\}
\]
happens only a finite number of times. For this we take $b\in
(\alpha- \frac{\kappa}{\gamma}, a )$, and the arguments in
\cite{m-s-v} imply that it is sufficient to show that only a finite
number of the events
\[
D_n = \Bigl\{\sup_{t_{n-1}\leq t\leq t_n}n^{\kappa}|X_t
- X_{t_{n-1}}|\leq\Delta_n^b \Bigr\}
\]
occur. Recall that now we have $b>\alpha-\frac{\kappa}{\gamma}$,
which can be written as $\gamma b +\kappa> \gamma\alpha$. Hence we
can apply the small ball estimate (\ref{smallball}) of
Assumption~\ref{assu:smallball} together with the Borel--Cantelli lemma to obtain the result.
\item
To complete the proof, we have to show that $\Vert \Psi_T\Vert _{T,\beta} <
\infty$ a.s. For this, we go through the main steps which
differ from the fBm case. We write
\[
A_n = \bigl\{\mbox{We have Case A on }(t_{n-1},t_n]
\bigr\},\qquad B_n = A_n^C,
\]
and
\begin{eqnarray*}
\Psi_T(s) &=& \sum_{n\geq2}
\Psi_T(s)\mathbf{1}_{(t_{n-1},t_n]}(s)\mathbf{1}_{A_n}
\\
&&{}+ \sum_{n\geq2}\Psi_T(s)
\mathbf{1}_{(t_{n-1},t_n]}(s)\mathbf{1}_{B_n}
\\
&=:& \Psi_T^A(s) + \Psi_T^B(s).
\end{eqnarray*}
As in the fBm case, it is evident that $\Vert \Psi_T^B(s)\Vert _{T,\beta}<\infty$
since only a finite number of the events $B_n$ occur. Furthermore, we can write
\begin{eqnarray*}
\mathbb{E}\bigl[\bigl \Vert \Psi_T^A(s)\bigr \Vert _{T,\beta}\bigr]
&=& \int_0^T \frac{\mathbb{E}|\Psi_T^A(s)|}{s^{\beta}}\,\mathrm{d} s
\\
&&{}+ \sum_{n=2}^{\infty} \int
_{t_{n-1}}^{t_n} \int_0^{t_{n-1}}
\frac
{\mathbb{E}|\Psi_T^A(t) - \Psi_T^A(s)|}{(t-s)^{\beta+1}}\,\mathrm{d} s \,\mathrm{d} t
\\
&&{}+\sum_{n=2}^{\infty} \int
_{t_{n-1}}^{t_n} \int_{t_{n-1}}^t
\frac
{\mathbb{E}|\Psi_T^A(t) - \Psi_T^A(s)|}{(t-s)^{\beta+1}}\,\mathrm{d} s \,\mathrm{d} t
\\
&=:& I_1+ I_2 + I_3.
\end{eqnarray*}
The finiteness of $I_1$ and $I_2$ is easy to show and we omit the
details. For $I_3$ we set $\lambda_n(t)=\operatorname{sign}(X_t -
X_{t_{n-1}})$ and obtain
\begin{eqnarray*}
I_3 &=& \sum_{n=2}^{\infty} \int
_{t_{n-1}}^{t_n} \int_{t_{n-1}}^t
\frac{\mathbb{E}|\Psi_T^A(t) - \Psi_T^A(s)|}{(t-s)^{\beta
+1}}\,\mathrm{d} s \,\mathrm{d} t
\\
&=& \sum_{n=2}^{\infty} n^{\kappa}\int
_{t_{n-1}}^{t_n} \int_{t_{n-1}}^t
\frac{\mathbb{E}|\lambda_n(t)\mathbf{1}_{t\leq\tau_n}-\lambda
_n(s)\mathbf{1}_{s\leq\tau_n}|\mathbf{1}_{A_n}}{(t-s)^{\beta
+1}}\,\mathrm{d} s\,\mathrm{d} t
\\
&\leq&\sum_{n=2}^{\infty} n^{\kappa}\int
_{t_{n-1}}^{t_n} \int_{t_{n-1}}^t
\frac{\mathbb{E} [|\lambda_n(t)-\lambda_n(s)|+\mathbf
{1}_{s\leq\tau_n< t} ]}{(t-s)^{\beta+1}}\,\mathrm{d} s\,\mathrm{d} t.
\end{eqnarray*}
Now note that
\[
\bigl |\lambda_n(t)-\lambda_n(s)\bigr | = \mathbf{1}_{\{X_s-X_{t_{n-1}}\leq0
\leq X_t-X_{t_{n-1}}\}}
+ \mathbf{1}_{\{X_s-X_{t_{n-1}}\geq0 \geq
X_t-X_{t_{n-1}}\}},
\]
and, by taking expectations and using symmetry, it is sufficient to
consider the probability
\[
\mathbb{P}(X_s-X_{t_{n-1}}\leq0 \leq X_t-X_{t_{n-1}}).
\]
Let us study the integral
\[
\int_{t_{n-1}}^{t_n} \int_{t_{n-1}}^t
\frac{\mathbb{P}(X_s-X_{t_{n-1}}\leq0
\leq X_t-X_{t_{n-1}})}{(t-s)^{\beta+1}}\,\mathrm{d} s\,\mathrm{d} t.
\]
By a change of variables, we obtain that it is sufficient to study
\begin{eqnarray*}
&&\int_0^{t_n-t_{n-1}} \int_0^t
\frac{\mathbb{P}
(X_{s+t_{n-1}}-X_{t_{n-1}}\leq0 \leq
X_{t+t_{n-1}}-X_{t_{n-1}})}{(t-s)^{\beta+1}}\,\mathrm{d} s\,\mathrm{d} t
\\
&&\quad=\int_0^{t_n-t_{n-1}} \int_0^{\fraca{t}{2}}
\frac{\mathbb{P}
(X_{s+t_{n-1}}-X_{t_{n-1}}\leq0 \leq
X_{t+t_{n-1}}-X_{t_{n-1}})}{(t-s)^{\beta+1}}\,\mathrm{d} s\,\mathrm{d} t
\\
&&\qquad{}+\int_0^{t_n-t_{n-1}} \int
_{\fraca{t}{2}}^t\frac{\mathbb{P}
(X_{s+t_{n-1}}-X_{t_{n-1}}\leq0 \leq
X_{t+t_{n-1}}-X_{t_{n-1}})}{(t-s)^{\beta+1}}\,\mathrm{d} s\,\mathrm{d} t
\\
&&\quad=: J_1 + J_2.
\end{eqnarray*}
For $J_1$ we can bound the probability with one and get
\[
J_1 \leq C \Delta_n^{1-\beta}.
\]
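For the reader's convenience, we record the short computation behind this bound; it uses only that $t-s\geq t/2$ on the inner domain of integration and that $t_n-t_{n-1}$ is comparable to $\Delta_n$:
\[
J_1 \leq \int_0^{t_n-t_{n-1}} \int_0^{\fraca{t}{2}} (t-s)^{-\beta-1}\,\mathrm{d} s\,\mathrm{d} t
\leq 2^{\beta}\int_0^{t_n-t_{n-1}} t^{-\beta}\,\mathrm{d} t
= \frac{2^{\beta}}{1-\beta}(t_n-t_{n-1})^{1-\beta}
\leq C \Delta_n^{1-\beta}.
\]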
Consider next the term $J_2$. By assumption 1 of
Definition~\ref{defn:Xalpha}, the covariance of the Gaussian processes
$X_{s+t_{n-1}}-X_{t_{n-1}}$ and $X_{t+t_{n-1}}-X_{t_{n-1}}$ is positive
for every $n$ and every $s,t\in[0,t_n-t_{n-1}]$.
Thus we can apply Corollary~\ref{lma:crossing} and assumption 4 to obtain
\[
\mathbb{P}(X_s-X_{t_{n-1}}\leq0 \leq X_t-X_{t_{n-1}})
\leq C\frac{\sqrt
{W_n(t,s)}}{\sqrt{\mathbb{E}(X_s - X_{t_{n-1}})^2}},
\]
where
\begin{eqnarray*}
W_n(t,s) &=& \mathbb{E}\bigl(X_{t+t_{n-1}} -X_{t_{n-1}} -
(X_{s+t_{n-1}} - X_{t_{n-1}})\bigr)^2
\\
&\leq& C(t-s)^{2\alpha},
\end{eqnarray*}
and
\[
\mathbb{E}(X_{s+t_{n-1}} - X_{t_{n-1}})^2 \geq Cs^2
\]
by the assumptions. Hence, by the symmetry of the probabilities
$\mathbb{P}(X_s-X_{t_{n-1}}\leq0 \leq X_t-X_{t_{n-1}})$ and
$\mathbb{P}(X_s-X_{t_{n-1}}\geq0 \geq X_t-X_{t_{n-1}})$, we obtain
\begin{eqnarray*}
J_2 &\leq& C\int_0^{t_n-t_{n-1}} \int
_{\fraca{t}{2}}^t\frac
{(t-s)^{\alpha-\beta-1}}{s}\,\mathrm{d} s\,\mathrm{d} t
\\
&\leq& C\int_0^{t_n-t_{n-1}} t^{\alpha-\beta-1}\,\mathrm{d} t
\\
&\leq& C \Delta_n^{\alpha-\beta}.
\end{eqnarray*}
To conclude, we note that
\[
\int_{t_{n-1}}^{t_n} \int_{t_{n-1}}^t
\frac{\mathbf{1}_{s\leq\tau
_n < t}}{(t-s)^{\beta+1}}\,\mathrm{d} s \,\mathrm{d} t \leq C\Delta_n^{1-\beta},
\]
and hence
\[
I_3 \leq C \sum_{n=2}^{\infty}
n^{\kappa- \gamma(\alpha-\beta)} < \infty
\]
by our choice of $\kappa$, $\gamma$, and $\beta$: indeed, $\Delta_n\leq Cn^{-\gamma}$ and $\kappa-\gamma(\alpha-\beta)<-1$.\qed
\end{enumerate}
\noqed\end{pf*}
\begin{rmk}\label{rmk:H}
With our general assumptions, we can only cover H\"{o}lder continuous
variables of order $a>1-\alpha$.
However, under the additional assumption that for $s$ close to $T$ and
small enough $\Delta$ the incremental variance satisfies
\[
\mathbb{E}(X_{s+\Delta} - X_s)^2 \geq C
\Delta^{2\theta}
\]
with some constant $C$ and some parameter $\theta\in(\alpha,1)$, we
can cover more. More precisely, we can cover H\"{o}lder continuous
processes of order $a>\theta- \alpha$. This is especially the case if
the process $X$ is stationary or has stationary increments with
$W_X(0,t)\sim t^{2\alpha}$. In the particular case of fBm one can cover
H\"{o}lder continuous processes of any order $a>0$. Similarly, with a
process $\tilde{X}$ one can cover H\"older continuous processes of any
order $a>0$.
\end{rmk}
\begin{rmk}
In \cite{m-s-v}, the authors also proved that, under the additional
assumption that $\Psi$ is continuous, the assumption of Theorem~\ref{thm:H_rv-rep} is also necessary. The proof is based only on the
H\"older continuity of fBm and well-known properties of Young
integrals. Consequently, the same conclusion remains valid for our general class
of processes.
\end{rmk}
\begin{cor}
Let $Z_t$ be an a.s. H\"{o}lder continuous process of order $a>1-\alpha$
and assume that for every $t\in(0,T]$ we have $X\in\mathcal{X}^\alpha_t$. Then
for every $t$ there
exists an $\mathbb{F}$-adapted process $\Psi_t$ such that, a.s.,
\[
\int_0^t \Psi_t(s)\,\mathrm{d}
X_s = Z_t,
\]
i.e., the integral $\int_0^t \Psi_t(s)\,\mathrm{d} X_s$ is a version of $Z_t$.
\end{cor}
\section{Applications and discussions}
\label{sec:app}
In the paper \cite{m-s-v}, the authors considered the financial
implications of their results for a model in which the stock is driven by
geometric fBm. In particular, the results indicate one more reason why
geometric fBm is not a proper model in finance.
Evidently, we could state similar results in our general setting and, as
a consequence, we can argue that processes $X\in\mathcal{X}^\alpha
_T$ do not fit
well as driving processes of stock prices. This is also discussed
in detail in \cite{s-v2}, where the authors proved the pathwise It\^
o--Tanaka formula for processes in our class. For further details, we
refer to \cite{m-s-v} and \cite{s-v2}, and the repetition of the
arguments presented in \cite{m-s-v} for more general processes $X\in
\mathcal{X}^\alpha_T$ is left to the reader. However, we wish to
give one remark on the financial implications of our results. In \cite{m-s-v}, the authors proved that if the stock is driven by geometric
fractional Brownian motion, then one can replicate essentially all
interesting derivatives. On the other hand, we can never know whether
the process driving the stock is geometric fBm or not. The benefit of
our results is that, in addition to the fact that the replication can be
done with a much more general class of processes, the replication can also be
done in an arbitrarily small amount of time. This means that one can
wait and observe the process up to some time arbitrarily close to the
maturity, and start the replication procedure after that point.
This is especially useful if there is no information on the stock
dynamics. Assuming that the driving process is Gaussian, one can use this
time to estimate the covariance structure of the process and use the
estimate for the replication.
\subsection*{On the uniqueness of representation}
In the case of standard Brownian motion, every centered random variable
$\xi$ with finite variance can be represented as
\[
\xi= \int_0^1 \Psi(s)\,\mathrm{d} W_s,
\]
where $\int_0^1 \mathbb{E}[\Psi(s)^2]\,\mathrm{d} s<\infty$.
Moreover, a direct consequence of the It\^o isometry is that in
this case the process $\Psi$ is unique. However, for generalised
Lebesgue--Stieltjes integrals the representation is not unique. As an
example, consider the fractional Ornstein--Uhlenbeck process given by
\[
U_t^\theta= \int_0^t
\mathrm{e}^{-\theta(t-s)}\,\mathrm{d} B^H_s.
\]
On the other hand, by Theorem~\ref{thm:H_rv-rep} we know that
\[
U_t^\theta= \int_0^t
\Psi_t(s)\,\mathrm{d} B^H_s,
\]
where $\Psi_t(s)$ is identically zero on the interval $[0,t_1]$, and
$t_1$ can be chosen arbitrarily close to $t$. Hence, the representation
is clearly not unique in general for pathwise integrals. On the other
hand, for Skorokhod integrals with respect to fBm the representation is
unique (see \cite{bender}).
\subsection*{The problem of zero integral}
Another application which was considered in \cite{m-s-v} for fBm was
the problem of zero integral, and we wish to end the paper by giving
some remarks on the zero integral problem for our general class of processes.
Recall that the zero integral problem refers to the question of whether we
have the implication
\begin{equation}
\label{zero_integral_conc} \int_0^1 u_s \,\mathrm{d}
X_s = 0, \qquad\mbox{a.s.}\quad\Rightarrow\quad u_s = 0,
\qquad \mathbb{P}\otimes\operatorname{Leb}\bigl([0,1]\bigr) \qquad\mbox{a.e.}
\end{equation}
For standard Brownian motion this is true under the assumption $\int_0^1
\mathbb{E}[u_s^2]\,\mathrm{d} s < \infty$, and the result is a direct consequence of
the It\^o isometry. On the other hand, if we only have that
$\int_0^1 u_s^2 \,\mathrm{d} s < \infty$ a.s., then the conclusion is false.
In particular, one can construct an adapted process such that $\int_0^{\fraca{1}{2}} u_s\,\mathrm{d} W_s = 1$ and $\int_{\fraca{1}{2}}^1 u_s\,\mathrm{d}
W_s = -1$.
Similarly for fBm, the authors in \cite{m-s-v} explained that one can
construct an adapted process such that $\int_0^{\fraca{1}{2}} u_s\,\mathrm{d}
B^H_s = 1$ and $\int_{\fraca{1}{2}}^1 u_s\,\mathrm{d} B^H_s = -1$. Now the
results presented in this paper indicate that the same conclusion
remains true if we replace the fBm $B^H$ with a more general Gaussian process
$X$. This suggests that the problem of zero integral is not interesting
in the first place, since the conclusion is false in most of the
interesting cases unless one imposes some extra assumptions. We also note
that a negative answer to the question of zero integral is a direct
consequence of the fact that the representation is not unique. As
another example of this, consider the random variable $(X_1 - K)^+$.
Clearly this random variable is the end value of a H\"{o}lder continuous
process, and thus Theorem~\ref{thm:H_rv-rep} implies that there is a
process $\Psi_1(s)$ such that
\[
(X_1 - K)^+ = \int_0^1
\Psi_1(s)\,\mathrm{d} X_s.
\]
Moreover, by the construction of the process $\Psi_1(s)$ we have
$\Psi_1(s) = 0$ on the interval $s\in[0,t_1]$. On the other hand, by
the pathwise It\^o--Tanaka formula of \cite{s-v2} (assuming that the covariance $R_X$ of the
process $X$ itself satisfies 1--4) we have
\[
(X_1 - K)^+ = (X_0 - K)^+ +\int_0^1
\mathbf{1}_{X_s>K}\,\mathrm{d} X_s.
\]
If now $X_0\leq K$ a.s., subtracting the first equation from the second
one, we obtain that
\[
0 = \int_0^1 \bigl(\Psi_1(s) -
\mathbf{1}_{X_s > K}\bigr)\,\mathrm{d} X_s.
\]
Now $\Psi_1(s)=0$ a.s. on $[0,t_1]$, and clearly the same is not true for
the process $\mathbf{1}_{X_s>K}$. This is another argument showing that
$\int_0^1 u_s\,\mathrm{d} X_s=0$ does not imply $u_s=0$ a.s. in general.
\printhistory
\end{document}
\begin{document}
\thanks{The first author is supported by the DFG grants AB 584/1-1 and AB 584/1-2.
The second author is supported by the FIR project 2013 ``Geometrical and Qualitative aspects of PDEs'', and by the GNAMPA.
The third author is supported by 19901/GERM/15, Fundaci\'on S\'eneca, CARM, Programa de Ayudas a Grupos de Excelencia de la Regi\'on de Murcia.}
\date{\today}
\subjclass[2010]{Primary 52B45,
52A40.
Secondary
52A20,
52A39.
}
\keywords{Minkowski valuation, Rogers-Shephard inequality, affine isoperimetric inequality, difference body}
\begin{abstract}
We provide a description of the space of continuous and translation invariant Minkowski valuations $\Phi:\mathcal{K}^n\to\mathcal{K}^n$ for which there is an upper and a lower bound for the volume of $\Phi(K)$ in terms of the volume of the convex body $K$ itself. Although no invariance with respect to a group acting on the space of convex bodies is imposed, we prove that only two types of operators appear: a family of operators having only cylinders over $(n-1)$-dimensional convex bodies as images, and a second family consisting essentially of 1-homogeneous operators. Using this description, we give improvements of some known characterization results for the difference body.
\end{abstract}
\title{Minkowski valuations under volume constraints}
\section{Introduction}
An inequality between two geometric quantities associated to a convex body is called \emph{affine isoperimetric inequality} if the ratio of these two quantities is invariant under the action of all affine transformations of the convex body.
Affine isoperimetric inequalities have always constituted an important part of convex geometry and have found numerous applications to different areas, such as functional analysis, partial differential equations, or geometry of numbers (see \cite{lutwak.handbook}). Moreover, affine isoperimetric inequalities are usually stronger than their Euclidean counterparts.
Three of the best known affine isoperimetric inequalities associated to operators between convex bodies are: the Rogers-Shephard inequality, associated to the difference body; the Busemann-Petty centroid inequality, associated to the centroid body; and the Petty projection and Zhang inequalities, associated to the projection body.
One of the first and most relevant applications of these inequalities was given by Zhang \cite{zhang}, who obtained an affine version of the Sobolev inequality from (an extension of) the Petty projection inequality. Ten years later, Haberl and Schuster \cite{haberl.schuster1,haberl.schuster2} generalized it to an asymmetric affine $L_p$-Sobolev inequality by using the characterization of the $L_p$-projection bodies previously obtained by Ludwig \cite{ludwig05} in the context of the so-called $L_p$-Minkowski valuations. For further results in this direction we refer to \cite[Section 10.15]{schneider.book14}, \cite{haberl.schuster.xiao,ludwig.xiao.zhang,lutwak86,cianchi_lyz_2009,lyz_2010,lyz_2000,lyz_2002,haddad.jimenez.montenegro,wang}, and references therein.
In the present paper, we initiate a study aiming at a deeper understanding of the relationship between affine isoperimetric inequalities and characterization results for Minkowski valuations, by taking the converse direction of Haberl and Schuster~\cite{haberl.schuster1} and classifying, given an affine isoperimetric inequality, all continuous (and translation invariant) Minkowski valuations by which it is satisfied. In this paper, we focus on the affine isoperimetric inequality associated to the difference body operator.
We denote by $\mathcal{K}^n$ the space of convex and compact sets (convex bodies) in $\mathbb{R}^n$.
The \emph{difference body operator} $D:\mathcal{K}^n\longrightarrow\mathcal{K}^n$ is defined by
\begin{equation}\label{diff_body}
DK := K + (-K),
\end{equation}
where $-K:=\{x\in\mathbb{R}^n\,:\,-x\in K\}$ and $+$ denotes the Minkowski or vectorial sum. Notice that the ratio
$$
\frac{V_n(DK)}{V_n(K)}
$$
is invariant under affine transformations of $\mathbb{R}^n$ (here $V_n$ denotes the $n$-dimensional volume).
The affine isoperimetric inequalities associated to the difference body read as follows
\begin{equation}\tag{RS}
2^nV_n(K)\leq V_n(DK)\leq\binom{2n}{n}V_n(K),\quad\forall K\in\mathcal{K}^n.
\end{equation}
For convex bodies with non-empty interior, equality holds in the upper inequality exactly if $K$ is a simplex, and convex bodies symmetric with respect to the origin (i.e., $K=-K$) are the only optimizers of the lower inequality.
The lower bound follows from a direct application of the Brunn-Minkowski inequality (see \cite{schneider.book14}) and the upper bound was proved by Rogers and Shephard in \cite{rogers.shephard} (see also \cite{chakerian,rogers.shephard58} for other proofs and related inequalities).
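To illustrate the two extremes in dimension $n=2$ (this standard example is included only as an illustration): for an $o$-symmetric body $K=-K$ one has $DK=2K$, so $V_2(DK)=4V_2(K)$ and the lower bound is attained, whereas for the triangle $T$ with vertices $(0,0)$, $(1,0)$, $(0,1)$ the difference body $DT$ is a hexagon and
\[
V_2(DT)=3=\binom{4}{2}V_2(T),
\]
so the upper bound is attained.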
We study in this paper the operators $\Phi:\mathcal{K}^n\longrightarrow\mathcal{K}^n$ satisfying an (RS) type inequality, that is, operators such that the volume of the image of a convex body $K$ is bounded uniformly, from above and from below, by a multiple of the volume of $K$ (see Definition~\ref{def_VC_intro}).
We will always assume these operators to be continuous and Minkowski valuations.
In the framework of convex geometry, an operator $Z:\mathcal{K}^n\longrightarrow(\mathcal{A},+)$ is a \emph{valuation} if
\begin{equation}\label{def_val}
Z(K)+Z(L)=Z(K\cup L)+Z(K\cap L)
\end{equation}
for every $K,L\in\mathcal{K}^n$ such that $K\cup L\in\mathcal{K}^n$. Here $(\mathcal{A},+)$ denotes an Abelian semigroup.
Valuations have developed as particularly important objects in convex geometry since Dehn's solution to the Third Hilbert problem. Probably the best known result in the theory of valuations is the characterization by Hadwiger~\cite{hadwiger} of the intrinsic volumes as a basis of the space of continuous and motion invariant valuations taking values in $\mathbb{R}$.
The interested reader is referred to \cite{alesker.survey,bernig.survey,fu.survey,mcmullen93,mcmullen.schneider}
and \cite[Chapter 6]{schneider.book14} for valuable and detailed surveys about the state of the art of the theory of real-valued valuations. We refer also to \cite{alesker.faifman,bernig.fu.solanes,bernig.fu,ludwig.reitzner} for further recent results in this area.
Apart from the real-valued case, there has been an increasing interest in valuations having other spaces as codomain. Examples of these are, among others, the space of matrices, tensors, area and curvature measures, and various function spaces (see e.g.\! \cite{bernig.hug,haberl.parapatits,ludwig_matrix,ludwig_sob,wannerer1,wannerer2}).
\emph{Minkowski valuations} are those taking values in $\mathcal{K}^n$ endowed with the Minkowski addition. In other words, $\Phi:\mathcal{K}^n\longrightarrow\mathcal{K}^n$ is a Minkowski valuation if
\eqref{def_val} holds for every pair of convex bodies $K$ and $L$ such that $K\cup L\in\mathcal{K}^n$, and the Minkowski addition is taken on both sides of the equality. As stated before, they will be the main object of study of this work.
One of the most pursued goals in the theory of valuations is to characterize classical and new objects from the realm of convex geometry as valuations with specific additional properties. These additional properties are usually of two types:
\begin{itemize}
\item[(i)] topological: continuity or semi-continuity with respect to the standard topology on $\mathcal{K}^n$;
\item[(ii)] algebraic-geometrical: covariance or contravariance with respect to the action of some group of transformations of $\mathcal{K}^n$, such as the group of
translations, $\GL(n)$ or $\SO(n)$.
\end{itemize}
In this paper we aim to use a property of a different nature: a {\it metric-geometrical} property, namely the fulfillment of the following volume constraint, recently introduced in \cite{abardia.colesanti.saorin1} (see also \cite{abardia.saorin2,colesanti.hug.saorin}).
\begin{definition}\label{def_VC_intro}
Let $\Phi:\mathcal{K}^n\longrightarrow\mathcal{K}^n$ be an operator. We say that $\Phi$ \emph{satisfies a volume constraint (VC)} if there are constants $c_\Phi,C_\Phi>0$ such that
\begin{equation}\tag{VC}
c_\Phi V_n(K)\leq V_n(\Phi(K))\leq C_\Phi V_n(K),\quad\forall K\in\mathcal{K}^n.
\end{equation}
\end{definition}
In \cite{abardia.saorin2}, the following characterization result for the difference body operator was obtained, based on (VC).
\begin{teor}[\cite{abardia.saorin2}]\label{thm_as} Let $n\geq 2$. An operator $\Phi: \mathcal{K}^n\longrightarrow\mathcal{K}^n$ is continuous, $\GL(n)$-covariant, and satisfies the upper bound in the (VC) condition if and only if there are $a,b\geq 0$ such that $\Phi(K)=aK+b(-K)$ for every $K\in\mathcal{K}^n$.
\end{teor}
This theorem belongs to a very recent and rapidly developing theory of classification results in convex geometry, without the notion of Minkowski valuation but under other natural, and very general, properties such as symmetrization. Some of these general results yield as a corollary a characterization of the difference body operator. We highlight in the following the first of them and refer the reader to \cite{bianchi.gardner.gronchi,gardner.hug.weil1,gardner.hug.weil2,milman.rotem} for more details.
\begin{teor}[\cite{gardner.hug.weil1}]\label{thm_ghw} Let $n\geq 2$. An operator $\Phi: \mathcal{K}^n\longrightarrow\mathcal{K}^n$ is continuous, translation invariant, and $\GL(n)$-covariant if and only if there is a $\lambda\geq 0$ such that $\Phi(K)=\lambda DK$ for every $K\in\mathcal{K}^n$.
\end{teor}
We would like to stress that Theorem~\ref{thm_ghw} does not require the property of being a Minkowski valuation, but requires $\GL(n)$-covariance.
We recall that $\Phi$ is said to be {\em covariant} with respect to a group $G$ of transformations of
$\mathbb{R}^n$ if
$$\Phi(g(K))=g(\Phi(K)),\quad\forall\, K\in\mathcal{K}^n,\;\forall\, g\in G.$$
The first works about characterization of Minkowski valuations were obtained by Schneider in \cite{schneider74} and \cite{schneider74.dim2}. He obtained significant classification results for a special type of Minkowski valuations, called {\em Minkowski endomorphism}, which are defined as the continuous Minkowski valuations that are homogeneous of degree 1, commute with rotations, and are translation invariant. The difference body operator constitutes the fundamental example of a Minkowski endomorphism.
Ludwig's works \cite{ludwig02, ludwig05} represent the starting point for a systematic study of characterization results in the theory of Minkowski valuations.
Concerning the difference body operator, she obtained the following fundamental characterization result.
\begin{teor}[\cite{ludwig05}]\label{t: ludwig class DK}Let $n\geq 2$. An operator $\Phi: \mathcal{K}^n\longrightarrow\mathcal{K}^n$ is a continuous, translation invariant, and
$\SL(n)$-covariant Minkowski valuation if and only if there is a $\lambda\geq 0$ such that $\Phi(K)=\lambda DK$ for every $K\in\mathcal{K}^n$.
\end{teor}
After the seminal results of Ludwig, an intensive investigation of Minkowski valuations has been launched, which has led to characterization results, for other groups of transformations or for certain subfamilies of $\mathcal{K}^n$. The corresponding results can be found in
\cite{abardia.bernig,abardia,boroczky.ludwig,kiderlen,schuster.wannerer.smooth,schuster.wannerer16,haberl,schuster.10,schuster.wannerer12,wannerer.equiv} and references therein.
\subsection{Results of the present paper}
We denote by $\MVal$ the space of continuous and translation invariant Minkowski valuations and by $\MVal^s$
the subspace of $\MVal$ consisting of Minkowski valuations with symmetric image. The Steiner point of $K$ is denoted by $s(K)$. We refer the reader to Section~\ref{preliminaries} for further notation and definitions.
As described above, in the present paper, we consider the general question of describing the operators in $\MVal$ satisfying the (VC) condition {\it without any further hypothesis}. Our main result can be stated as follows:
\begin{theorem}\label{teo}
Let $n\geq 2$ and consider $\Phi\in\MVal$ satisfying (VC).
Then exactly one of the following possibilities occurs:
\begin{enumerate}\itemsep10pt
\item[\emph{(i)}] there exist $\Phi_1\in\MVal$ homogeneous of degree 1 and a continuous and translation invariant valuation $p:\mathcal{K}^n\longrightarrow\mathbb{R}^n$
such that
$$
\Phi(K)=\Phi_1(K)+p(K),\quad\forall\,K\in\mathcal{K}^n;
$$
\item[\emph{(ii)}] there exist a segment $S$, an $(n-1)$-dimensional convex body $L$ with $\dim (L+S)=n$, and a continuous and
translation invariant valuation $p:\mathcal{K}^n\longrightarrow\mathbb{R}^n$ such that
$$
\Phi(K)= L+V_n(K)S+p(K),\quad\forall\,K\in\mathcal{K}^n.
$$
\end{enumerate}
\end{theorem}
In Sections~\ref{n-1} and~\ref{1} the operator $p:\mathcal{K}^n\longrightarrow\mathbb{R}^n$ is described more explicitly, as a sum of valuations of fixed degree.
If we additionally assume that the operator $\Phi$ has symmetric images, then $p(K)$ is the origin for every $K\in\mathcal{K}^n$, and we obtain the following.
\begin{theorem}\label{cor}
Let $n\geq 2$ and consider $\Phi\in\MVal^s$ satisfying (VC).
Then exactly one of the following possibilities occurs:
\begin{enumerate}\itemsep10pt
\item[\emph{(i)}] $\Phi$ is homogeneous of degree one;
\item[\emph{(ii)}] there exist a centered segment $S$ and an $o$-symmetric $(n-1)$-dimensional convex body $L$ with $\dim (L+S)=n $ such that
$$\Phi(K)= L+V_n(K)S,\quad\forall K\in\mathcal{K}^n.$$
\end{enumerate}
\end{theorem}
We would like to remark that Theorem~\ref{teo} constitutes, to the best of our knowledge, the first characterization result in the theory of Minkowski valuations which does not assume the operator to be invariant, covariant or contravariant with respect to some subgroup of $\GL(n)$.
A description of the 1-homogeneous Minkowski valuations appearing in Theorem~\ref{cor}(i) was given in~\cite{abardia.colesanti.saorin1} in the context of Minkowski additive operators (i.e., continuous, 1-homoge\-ne\-ous, and translation invariant Minkowski valuations). There, the Minkowski endomorphisms satisfying the (VC) condition, and the Minkowski additive operators that satisfy (VC) and are monotonic were classified.
Theorem~\ref{teo} allows us to improve these results by removing the homogeneity hypothesis and obtain the following.
\begin{theorem}\label{+mon intro2}
Let $n\geq 2$. An operator $\Phi\in\MVal$ satisfies (VC) and is monotonic if and only if exactly one of the following possibilities occurs:
\begin{enumerate}
\item[\emph{(i)}] there are $g\in\GL(n)$ and $p\in\mathbb{R}^n$ such that $\Phi(K)=gDK+p$ for every $K\in\mathcal{K}^n$;
\item[\emph{(ii)}] there are $L,S\in\mathcal{K}^n$ with $0\in S$, $\dim S=1$, $\dim L=n-1$, and $\dim(L+S)=n$ such that $\Phi(K)=L+V_n(K)S$ for every $K\in\mathcal{K}^n$.
\end{enumerate}
\end{theorem}
\begin{theorem}\label{+On_dim_geq_32}
Let $n\geq 3$.
\begin{enumerate}
\item[\emph{(i)}] An operator $\Phi\in\MVal$ satisfies (VC) and is $\SO(n)$-covariant if and only if there are $a,b\geq 0$ with $a+b>0$ such that
$$
\Phi(K)=a(K-s(K))+b(-K+s(K)),\quad\forall K\in\mathcal{K}^n.
$$
\item[\emph{(ii)}] An operator $\Phi\in\MVal^{s}$ satisfies (VC) and is $\SO(n)$-covariant if and only if there is a $\lambda>0$ such that $\Phi(K)=\lambda DK$ for every $K\in\mathcal{K}^n$.
\end{enumerate}
\end{theorem}
To prove our results we rely upon recent developments from the theory of real-valued valuations, which will be recalled in Section~\ref{preliminaries} for the reader's convenience.
In addition, we need to develop new techniques, since our assumption of satisfying (VC) is of a different nature than typical covariance or contravariance with respect to some subgroup of $\GL(n)$.
For Theorem~\ref{teo} we perform a careful study of the image of zonotopes under $\Phi:\mathcal{K}^n\longrightarrow\mathcal{K}^n$, since the lack of covariance does not allow us to use the standard technique of exploiting the image of a few simplices.
For the proof of Theorems~\ref{+mon intro2} and \ref{+On_dim_geq_32}, we use Theorem~\ref{teo} and classical results in the theory of real-valued valuations.
The paper is organized as follows. In Section~\ref{preliminaries} we collect the known results, especially about valuations, that will be used along the paper, and we introduce the notation used throughout. In Sections~\ref{sec: dependence a point} to \ref{1} we prove Theorem~\ref{teo}. More precisely, Section~\ref{sec: dependence a point} is devoted to showing that we actually have a dichotomy under the hypotheses of Theorem~\ref{teo}. This leads to either the 1-homogeneous case, or to case (ii)
of Theorem \ref{teo}. In the next two sections we study each case, giving the proof of Theorems~\ref{teo} and~\ref{cor} in Section~\ref{1}. Finally, in Section~\ref{on_mon}, we prove Theorem~\ref{+mon intro2} and Theorem~\ref{+On_dim_geq_32} together with its analogue for $n=2$.
We end the paper with some examples to illustrate the necessity of our assumptions in Theorem~\ref{teo}.
\section{Preliminaries}\label{preliminaries}
\subsection{Notation}
As usual, we denote by $\mathbb{R}^n$ the $n$-dimensional Euclidean space, equipped with the standard scalar product $\langle\cdot,\cdot\rangle$.
If $A\subset\mathbb{R}^n$ is a measurable set, $V_n(A)$ denotes its volume, that is, its $n$-dimensional Lebesgue measure. If $A\subset\mathbb{R}^n$, the \emph{span} of $A$, $\spa A$, is the vector subspace of $\mathbb{R}^n$ parallel to the minimal affine subspace in $\mathbb{R}^n$ containing $A$. The \emph{dimension of} $A$ is defined as $\dim A:=\dim (\spa A)$.
The unit sphere of $\mathbb{R}^n$ is denoted by $\mathbb{S}^{n-1}$ and we denote by $B^n$ the Euclidean unit ball with volume $\kappa_n$.
For $p,q\in\mathbb{R}^n$ we write $[p,q]$ for the line segment joining the points $p$ and $q$, and $S_v:=[-v,v]$, $v\in\mathbb{R}^n$, for the line segment joining $-v$ and $v$.
The general linear group in $\mathbb{R}^n$ is denoted by $\GL(n)$, the special linear group by $\SL(n)$, the group of orthogonal transformations of $\mathbb{R}^n$ by $\mathrm{O}(n)$ and by $\SO(n)\subset\mathrm{O}(n)$ the group of the orthogonal transformations which preserve orientation.
\subsection{Convex bodies}
For the basics on convex geometry and on the theory of valuations, we refer the reader to the books \cite{gardner.book06,gruber.book,klain.rota,schneider.book14,artstein.giannopoulos.milman}.
Let $\mathcal{K}^n$ denote the set of convex bodies (compact and
convex sets) in $\mathbb{R}^n$ endowed with the Hausdorff metric, and let $\mathcal{K}^n_s$ denote the set of convex bodies in $\mathbb{R}^n$ which are symmetric with respect to the origin. The elements of $\mathcal{K}_s^n$ are called \emph{$o$-symmetric} convex bodies.
We endow $\mathcal{K}^n$ with the Minkowski addition:
$$K+L:=\{x+y\,:\,x\in K,\,y\in L\}.$$
The \emph{support function} of a convex body $K\in\mathcal{K}^n$, $h_K:\mathbb{R}^n\longrightarrow\mathbb{R}$, is given by
$$h(K, u) = h_K(u)=\max \{ \langle u, x\rangle : x \in K \},\quad u\in\mathbb{R}^n,$$
and it determines $K$ uniquely (\cite[Theorem 1.7.1]{schneider.book14}).
For every $u\in\mathbb{R}^n$, the function $K\mapsto h(K,u)$ is linear with respect to the Minkowski addition and multiplication by non-negative reals:
\begin{equation}\label{add_sup}
h(\alpha K+\beta L,\cdot)=\alpha h(K,\cdot)+\beta h(L,\cdot),\quad\forall\, K,L\in\mathcal{K}^n,\, \forall\, \alpha,\beta\ge0.
\end{equation}
A \emph{zonotope} is a convex body obtained as the finite sum of line segments and a \emph{zonoid} is a convex body that can be approximated, in the
Hausdorff metric, by zonotopes (see e.g. \cite[p.~191]{schneider.book14}).
Zonotopes will play a prominent role in the proof of Theorem~\ref{teo}.
\subsection{Mixed volumes}\label{sec_mixed_volumes}
We will thoroughly use the notion of mixed volumes of convex bodies, for which we refer to Chapter 5 of \cite{schneider.book14}.
The mixed volume of $n$ convex bodies $K_1,\dots,K_n$ from $\mathcal{K}^n$ will be denoted by the usual notation:
$$
V(K_1,\dots,K_n).
$$
Mixed volumes are multilinear functionals $(\mathcal{K}^n)^n\longrightarrow\mathbb{R}$. In each entry, they are continuous, translation invariant, and satisfy the valuation property (see \eqref{val_prel} for the definition).
Brackets $[i]$ next to an entry of a mixed volume mean that the entry is repeated $i$ times.
Mixed volumes can be extended to the vector space spanned by restrictions of support functions on $\mathbb{S}^{n-1}$ (see \cite[p.~291]{schneider.book14}).
For the proof of Theorem~\ref{teo}, we will use the existence of this extension.
In view of this, we will use both notations, $K$ and $h_K$, as arguments in a mixed volume involving the convex body $K$. In other words, we write equivalently
$$
V(K,K_2,\dots,K_n)\quad\mbox{or}\quad V(h_K,K_2,\dots,K_n)
$$
and interpret the support function as a function restricted to $\mathbb{S}^{n-1}$.
Throughout the paper, and especially in Section 3, we will use the following result, containing conditions under which a mixed volume does not vanish.
\begin{theorem}[Theorem 5.1.8 in \cite{schneider.book14}]\label{mix_volumes_Schneider}
For $K_1,\dots,K_n\in\mathcal{K}^n$, the following assertions are equivalent:
\begin{enumerate}
\item[\emph{(a)}] $V(K_1,\dots,K_n)>0;$
\item[\emph{(b)}] there are segments $S_i\subset K_i$ $(i=1,\dots,n)$ having linearly independent directions;
\item[\emph{(c)}] $\dim(K_{i_1}+\dots+K_{i_k})\geq k$ for each choice of indices $1\leq i_1<\dots<i_k\leq n$ and for all $k\in\{1,\dots,n\}$.
\end{enumerate}
\end{theorem}
\subsection{Valuations}
Let $(\mathcal{A},+)$ be an Abelian semigroup. A map $\varphi:\mathcal{K}^n\longrightarrow(\mathcal{A},+)$ is called \emph{valuation} if
\begin{equation}\label{val_prel}
\varphi(K)+\varphi(L)=\varphi(K\cup L)+\varphi(K\cap L),
\end{equation}
for every $K,L\in\mathcal{K}^n$ such that $K\cup L\in\mathcal{K}^n$.
We say that $\varphi$ is \emph{translation invariant} if $\varphi(K+t)=\varphi(K)$ for all $K\in\mathcal{K}^n$ and $t\in\mathbb{R}^n$. If $\mathcal{A}$ is a topological space, we say that $\varphi:\mathcal{K}^n\longrightarrow\mathcal{A}$ is \emph{continuous} if it is continuous with respect to the Hausdorff topology on $\mathcal{K}^n$.
If there is a multiplication between the positive real numbers and the elements in $\mathcal{A}$, then we say that $\varphi:\mathcal{K}^n\longrightarrow\mathcal{A}$ is \emph{homogeneous of degree} $j$ if $\varphi(\lambda K)=\lambda^{j}\varphi(K)$ for all $\lambda\in(0,\infty)$. If $\mathcal{A}$ is ordered, then $\varphi$ is \emph{monotonic} (increasing with respect to set inclusion) if $\varphi(K)\leq\varphi(L)$ for all $K,L\in\mathcal{K}^n$ such that $K\subset L$. A valuation is called \emph{even} if $\varphi(-K)=\varphi(K)$ for all $K\in\mathcal{K}^n$. If $(-1)\cdot\varphi(K)=:-\varphi(K)$ is defined for all $K$, then we say that $\varphi$ is \emph{odd} if $\varphi(-K)=-\varphi(K)$ for every $K\in\mathcal{K}^n$, and $\varphi$ is called an \emph{$o$-symmetrization} if $\varphi(K)\in\mathcal{K}^n_s$ for every $K\in\mathcal{K}^n$, that is,
\[
\varphi(K)=-\varphi(K) \text{ for all } K\in\mathcal{K}^n.
\]
Finally, if a group of transformations $G$ acts on $\mathcal{K}^n$ and on $\mathcal{A}$, we say that a valuation $\varphi:\mathcal{K}^n\longrightarrow\mathcal{A}$ is \emph{$G$-covariant} if, for any $K\in\mathcal{K}^n$,
\[
\varphi(g K)=g\varphi(K) \text{ for all } g\in \,G.
\]
\subsubsection{Real-valued valuations}
These are the valuations $\mu$ on $\mathcal{K}^n$ having $(\mathbb{R},+)$, the real numbers with the usual addition, as target space. We denote by
$\Val$ the space of real-valued valuations, which are additionally continuous and translation invariant; this is in fact a Banach space. The subspace of valuations
homogeneous of degree $j$, $j\in\{0,\dots,n\}$, is denoted by $\Val_j$.
McMullen proved the following fundamental decomposition result of the space $\Val$.
\begin{theorem}[\cite{mcmullen77}]\label{mcmullen_dec:teo} For every $\mu\in\Val$ there exist unique $\mu_j$,
$j=0,\dots,n$, with $\mu_j\in\Val_j$ for every $j$, such that
$$
\mu=\sum_{j=0}^n \mu_j.
$$
In other words
$$
\Val=\bigoplus_{j=0,\ldots,n} \Val_j.
$$
\end{theorem}
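For instance (a standard example, recalled here only as an illustration), the valuation $\mu(K)=V_n(K+B^n)$ belongs to $\Val$, and by the Steiner formula its homogeneous components are
\[
\mu_j(K)=\kappa_{n-j}V_j(K),\qquad j=0,\dots,n,
\]
where $V_j$ denotes the $j$-th intrinsic volume.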
The next result provides useful information on the image of a homogeneous valuation in $\Val_j$ which vanishes on convex bodies of certain dimensions.
\begin{theorem}[\cite{klain00,schneider_schuster06}] \label{th_lemma}
Let $\mu\in\Val_j$, $j\in\{0,1,\dots,n-1\}$.
\begin{enumerate}
\item[\emph{(i)}]
If $\mu(K)=0$ for every $K\in\mathcal{K}^n$ with $\dim K= j$, then $\mu(-K)=-\mu(K)$ for every $K\in\mathcal{K}^n$. In particular, $\mu$ is odd.
\item[\emph{(ii)}] If $\mu(K)=0$ for every $K\in\mathcal{K}^n$ with $\dim K= j+1$, then $\mu\equiv 0$.
\end{enumerate}
\end{theorem}
\subsubsection{Minkowski valuations}
An operator $\Phi:\mathcal{K}^n\longrightarrow\mathcal{K}^n$ is called a \emph{Minkowski valuation} if \eqref{val_prel} holds for $\Phi$ and $(\mathcal{A},+)=(\mathcal{K}^n,+)$ with $+$ the Minkowski addition of convex bodies.
The space of continuous and translation invariant Minkowski valuations is denoted by $\MVal$. By $\MVal_j\subset\MVal$ (resp.~$\MVal_j^s\subset\MVal^s$), we denote the $j$-homogeneous Minkowski valuations (resp.~that are $o$-symmetrizations).
We will often use the following construction to pass from Minkowski valuations to real-valued valuations: Let $\Phi$ be a Minkowski valuation and fix $u\in\mathbb{R}^n$. The map $\Phi_u:\mathcal{K}^n\longrightarrow\mathbb{R}$ defined by
$$\Phi_u(K)=h(\Phi(K),u),\quad\forall K\in\mathcal{K}^n,$$
is a real-valued valuation which inherits the properties of $\Phi$ such as continuity, translation invariance, $j$-homogeneity, and monotonicity.
Let $\Phi:\mathcal{K}^n\longrightarrow\mathcal{K}^n$ be a continuous and translation invariant Minkowski valuation, i.e., $\Phi\in\MVal$. Using the support function of $\Phi(K)$ as just described, the decomposition in Theorem~\ref{mcmullen_dec:teo} yields
\begin{equation}\label{h-decomp1}
h(\Phi (K),u)=\sum_{j=0}^{n}f_{j}(K,u),\quad u\in\mathbb{R}^n,
\end{equation}
where every $f_{j}(K,u)$ is continuous in both variables and positively homogeneous of bi-degree $(j,1)$, i.e., it satisfies
$$f_{j}(\lambda K, \mu u)=\lambda^{j} \mu f_{j}(K,u),\quad\forall\lambda,\mu>0.$$
Notice that, by the McMullen decomposition, each $f_j$ has the valuation property with respect to $K$, for every fixed $u$.
Moreover, $K\mapsto f_j(K,u)$ is translation invariant for every $u\in\mathbb{R}^n$.
Since we will use the above decomposition very often, we will refer to it as the {\it McMullen decomposition} of $\Phi\in\MVal$ instead of the McMullen decomposition of $h(\Phi(\cdot),u)\in\Val$. We would like to remark that in the literature the term ``McMullen decomposition of a Minkowski valuation'' has been used with a stronger meaning, namely,
a Minkowski valuation $\Phi$ has a McMullen decomposition if there exist $\Phi_j\in\MVal_j$, $0\leq j\leq n$, such that
$$\Phi=\sum_{j=0}^n\Phi_j.$$ This turns out to be equivalent to the fact that every function $u\mapsto f_j(K,u)$ in \eqref{h-decomp1} is convex for every $0\leq j\leq n$ (cf.~\cite{dorrek}), which is, in general, not the case. This was first shown in \cite{parapatits.wannerer} (see also \cite{dorrek}).
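As a simple illustration (included only for orientation, and not used later), consider the difference body operator: since $h(D(\lambda K),u)=\lambda\bigl(h(K,u)+h(-K,u)\bigr)$, in \eqref{h-decomp1} the only non-zero term is $f_1(K,u)=h(DK,u)$, which is convex in $u$, so in this case the stronger decomposition above holds trivially with $\Phi_1=D$.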
The following two results give conditions in order that some of the functions $f_j(K,u)$ are support functions.
\begin{lemma}[\cite{schneider_schuster06}]\label{support}Let $\Phi\in\MVal$. If a convex body $K\in\mathcal{K}^n$ satisfies
$$h(\Phi(\lambda K),\cdot)=\sum_{j=k}^lf_{j}(\lambda K,\cdot),$$
for $\lambda>0$, with some $k,l\in\{0,\dots,n\}$, $k\leq l$, then $f_{k}(K,\cdot)$ and $f_{l}(K,\cdot)$ are support functions.
\end{lemma}
By Lemma~\ref{support} the functions $f_0(K,\cdot)$ and $f_n(K,\cdot)$ in the McMullen decomposition~\eqref{h-decomp1} are always support functions. Moreover, since for every $u\in\mathbb{R}^n$, the function $f_0(\cdot,u)$ is a continuous, translation invariant, and homogeneous of degree $0$ real-valued valuation, it is a multiple of the Euler characteristic and, hence, independent of the convex body $K$; notice, however, that this multiple may depend on $u$. Analogously, $f_n(K,u)$ is a multiple of the volume of $K$ (see \cite{hadwiger}), which may depend on $u$. In the following, we denote by $L_0$ (resp.~$L_n$) the convex body with support function $f_0(\{0\},\cdot)$ (resp.\! $f_n(\kappa_n^{-1/n}B_n,\cdot)$
) and write the McMullen decomposition of $\Phi$ as
\begin{equation}\label{h-decomp}
h(\Phi(K),u)=h(L_0,u)+\sum_{j=1}^{n-1}f_j(K,u)+V_n(K)h(L_n,u).
\end{equation}
\begin{remark}\label{image of a point} Let $\Phi\in\MVal$ and let $p\in\mathbb{R}^n$ be a point. Then
$
\Phi(\{p\})=L_0.
$
\end{remark}
Another particular case, where the functions $u\mapsto f_j(K,u)$ are known to be convex, was given in \cite{schuster.parapatits}. Parapatits and Schuster proved there that, restricted to zonoids $Z\in\mathcal{K}^n$, each function $u\mapsto f_j(Z,u)$ in the McMullen decomposition for $\Phi\in\MVal$
is a support function.
\begin{theorem}[\cite{schuster.parapatits}]\label{suppZonoid} Let $\Phi\in\MVal$ and let $Z\in\mathcal{K}^n$ be a zonoid. Then there exist convex bodies $L_0$, $\Phi_1(Z),\dots,\Phi_{n-1}(Z), L_n$ such that
\begin{equation}\label{h-decomp convex bodies}
\Phi(\lambda Z)=L_0+\lambda\Phi_1(Z)+\dots+\lambda^{n-1}\Phi_{n-1}(Z)+\lambda^nV_n(Z)L_n,
\end{equation}
for every $\lambda>0$.
for every $\mathrm{length}ambda>0$.
\end{theorem}
In view of the previous result we fix the following notation.
\begin{definition}\label{notation_fi_dii}
Let $\Phi\in\MVal$, let $u\in\mathbb{R}^n$, and let $Z\in\mathcal{K}^n$ be a zonoid.
\begin{enumerate}[(i)]
\item For $j\in\{0,1,\dots,n\}$, the function $f_j(\cdot,u):\mathcal{K}^n\longrightarrow\mathbb{R}$ will be called the \emph{$j$-homogeneous function of the McMullen decomposition of $\Phi$}, in \eqref{h-decomp1}.
\item For $j\in\{1,2,\dots,n-1\}$, $\Phi_j(Z)\in\mathcal{K}^n$ will be referred to as the \emph{convex body $\Phi_j(Z)$ of the McMullen decomposition of $\Phi$} in \eqref{h-decomp convex bodies}.
\noindent To simplify the notation in this case, we also write $\Phi_0(Z)$ for $L_0$ and
$\Phi_n(Z)$ for $V_n(Z)L_n$.
\end{enumerate}
\end{definition}
For every $1\leq j\leq n-1$, the function $u\mapsto f_j(K,u)$ of the McMullen decomposition of $\Phi\in\MVal$ defined in \eqref{h-decomp1} inherits many invariance properties of $\Phi$.
In particular, we easily deduce the following.
\begin{lemma}\label{even}Let $n\geq 2$, let $j\in\{0,\dots,n\}$, and let $\Phi\in\MVal^s$. Then the $j$-homogeneous function of the McMullen decomposition of $\Phi$, $u\mapsto f_j(K,u)$, is even for every $K\in\mathcal{K}^n$.
\end{lemma}
\begin{proof}
Let $K\in\mathcal{K}^n$ and let $\Phi\in\MVal^s$. Since $\Phi(K)\in\mathcal{K}^n_s$, for every $K\in\mathcal{K}^n$, we have $h(\Phi(K),u)=h(\Phi(K),-u)$ for every $u\in\mathbb{R}^n$.
For $\lambda>0$, the McMullen decomposition for $\Phi$ in \eqref{h-decomp1} with $\lambda K$ instead of $K$, applied once with $u$ and once with $-u$, yields
$$
f_{0}(K,u)+\sum_{j=1}^n\lambda^{j}f_j(K,u)=f_{0}(K,-u)+\sum_{j=1}^n\lambda^{j}f_j(K,-u).
$$
By comparing the coefficients of the above polynomial expression in $\lambda$ we get $f_{j}(K,u)=f_{j}(K,-u)$ for every $0\leq j\leq n$.
\end{proof}
In the next lemma we collect some facts about the functions involved in the McMullen decomposition of $\Phi\in\MVal$ which will be used throughout the rest of the work.
\begin{lemma}\label{r: facts on f_js}
Let $\Phi\in\MVal$, $K\in\mathcal{K}^n$, $u\in\mathbb{R}^n$, and $j\in\{0,1,\dots,n\}$. Then:
\begin{enumerate}
\item[\emph{(i)}] the function $u\mapsto f_j(K,u)$ is a difference of support functions of convex bodies;
\item[\emph{(ii)}] the function $K\mapsto f_j(K,u)$ is a valuation homogeneous of degree $j$;
\item[\emph{(iii)}] if $\dim K\leq j-1$, then $f_j(K,u)=0$;
\item[\emph{(iv)}] if $\dim K=j$, then $u\mapsto f_j(K,u)$ is a support function;
\item[\emph{(v)}] if $f_j(K,u)=0$ for every $K\in\mathcal{K}^n$ with $\dim K=j+1$, then $f_j(\cdot,u)\equiv 0$;
\item[\emph{(vi)}] if $j_0\in\{0,\dots,n-1\}$ and $f_j(K,\cdot)$ is linear for every $j>j_0$, then $f_{j_0}(K,\cdot)$ is a support function.
\end{enumerate}
\end{lemma}
\begin{proof}
The statement of item (i) was proved in \cite{schuster.parapatits}. Item (ii) follows directly from the McMullen decomposition of $\Phi$. Item (iii) follows, for instance, from Corollary~6.3.2 in \cite{schneider.book14}.
Item (iv) is deduced from Lemma~\ref{support} and items (ii) and (iii). Item (v) follows from Theorem~\ref{th_lemma}(ii).
For item (vi), we first note that since $\Phi(K)$ is a convex body for every $K\in\mathcal{K}^n$, we have
$0\geq h(\Phi(\lambda K),u+v)-h(\Phi(\lambda K),u)-h(\Phi(\lambda K),v)$ for every $u,v\in\mathbb{R}^n$ and $\lambda>0$. By the McMullen decomposition of $\Phi$ and the linearity of $f_j(K,\cdot)$ for $j>j_0$, we have
\begin{align*}
0&\geq \lambda^n(f_n(K,u+v)-f_n(K,u)-f_n(K,v))+\dots\\
&\quad\dots+\lambda^{j_0+1}(f_{j_0+1}(K,u+v)-f_{j_0+1}(K,u)-f_{j_0+1}(K,v))+
\\&\quad+\lambda^{j_0}(f_{j_0}(K,u+v)-f_{j_0}(K,u)-f_{j_0}(K,v))
+O(\lambda^{j_0-1})
\\&=\lambda^{j_0}(f_{j_0}(K,u+v)-f_{j_0}(K,u)-f_{j_0}(K,v))
+O(\lambda^{j_0-1}).
\end{align*}
If $j_0\geq 1$, then as $\lambda\to\infty$, we get that the inequality can be satisfied only if
$$f_{j_0}(K,u+v)-f_{j_0}(K,u)-f_{j_0}(K,v)\leq 0,$$ that is,
$f_{j_0}(K,\cdot)$ is a support function for every $K\in\mathcal{K}^n$. If $j_0=0$, we obtain the latter directly.
\end{proof}
To finish this section, we state the following technical result, which can be obtained from Theorem~6.3.6 in \cite{schneider.book14}. For completeness, we give a proof of the result, which will be essential in Section~\ref{sec: dependence a point}.
Let $C(n,k)$ denote the set of all ordered subsets of $k$ elements among $\{1,\dots,n\}$ and let $\sigma_j$ be the $j$-th element of $\sigma$, $1\leq j\leq k$.
\begin{theorem}[Corollary of Theorem 6.3.6 in \cite{schneider.book14}]
\label{lem_polarization}
Let $1\leq k\leq n$, let $\Phi\in\MVal_k$, and let $S_1,\dots,S_n$ be segments in $\mathbb{R}^n$. Then
$$\Phi(S_1+\dots+S_n)=\sum_{\sigma\in C(n,k)}\Phi(S_{\sigma_1}+\dots+S_{\sigma_k}).$$
\end{theorem}
\begin{proof}
Let $\Phi\in\MVal_k$ and let $S_1,\dots,S_n$ be segments in $\mathbb{R}^n$.
Consider $u\in\mathbb{R}^n$ and define the continuous and translation invariant real-valued valuation $\Phi_u(K):=h(\Phi(K),u)$. From Theorem 6.3.6 in \cite{schneider.book14}, we have that there exists a continuous and translation invariant operator $\overline{\Phi_u}:(\mathcal{K}^n)^k\longrightarrow\mathbb{R}$ that is Minkowski additive in each variable and such that
\begin{equation}\label{636_schneider_u}
\Phi_u(S_1+\dots+S_n)=\sum_{r_1,\dots,r_n=0}^{k}\binom{k}{r_1\dots r_n}\overline{\Phi_u}(S_1[r_1],\dots,S_n[r_n]),
\end{equation}
with $\sum_{j=1}^nr_j=k$.
Moreover, by Theorem 6.3.6 in \cite{schneider.book14}, the mapping $K\mapsto{\overline{\Phi_u}}(K[r],M_{r+1},\dots,M_k)$ is a continuous and translation invariant valuation, homogeneous of degree $r$ for each fixed $r\in\{1,\dots,k\}$ and for every fixed tuple of convex bodies $M_{r+1},\dots,M_k$. In particular, for every $r_1,\dots,r_n$ with $r_1+\dots+r_n=k$, we have that
$$K\mapsto{\overline{\Phi_u}}(K[r_1],S_2[r_2],\dots,S_n[r_n])$$
is a continuous and translation invariant real-valued valuation, homogeneous of degree $r_1$. Hence, if $\dim K<r_1$, then $\overline{\Phi_u}(K[r_1],S_2[r_2],\dots,S_n[r_n])=0$ (see \cite[Corollary 6.3.2]{schneider.book14}). Since in \eqref{636_schneider_u} we are taking $K=S_1$, a segment, if $r_1\geq 2$, the summand vanishes. Since the same argument can be done for $r_2,\dots,r_n$, we obtain that $r_i\in\{0,1\}$ for every $1\leq i\leq n$ and the sum in \eqref{636_schneider_u} can be taken over $C(n,k)$. Hence,
\begin{align*}
\Phi_u(S_1+\dots+S_n)&=\sum_{r_1,\dots,r_n=0}^{k}\binom{k}{r_1\dots r_n}\overline{\Phi_u}(S_1[r_1],\dots,S_n[r_n])
\\&=\sum_{\sigma\in C(n,k)}\overline{\Phi_u}(S_{\sigma_1},\dots,S_{\sigma_k})
=\sum_{\sigma\in C(n,k)}\Phi_u(S_{\sigma_1}+\dots+S_{\sigma_k}).
\end{align*}
The last equality holds by applying the same argument as before but with $\Phi_u(S_{\sigma_1}+\dots+S_{\sigma_k})$ instead of $\Phi_u(S_1+\dots+S_n)$.
Since $\Phi_u(K)=h(\Phi(K), u)$, by using \eqref{add_sup}, we have proven that
\[
h(\Phi(S_1+\dots+S_n),u)=h(\sum_{\sigma\in C(n,k)}\Phi(S_{\sigma_1}+\dots+S_{\sigma_k}),u),
\]
for every $u\in\mathbb{R}^n$.
Since the support function uniquely describes a convex body (\cite[Theorem 1.7.1]{schneider.book14}), the statement of the theorem follows.
\end{proof}
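For orientation, we record two special cases of Theorem~\ref{lem_polarization}; they are immediate from the statement and are mentioned only as an illustration. For $k=1$ the theorem reduces to Minkowski additivity on sums of segments,
\[
\Phi(S_1+\dots+S_n)=\Phi(S_1)+\dots+\Phi(S_n),
\]
while for $k=n$ the set $C(n,n)$ contains only $\{1,\dots,n\}$ and the identity is trivial.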
\subsection{Volume constraints}\label{sec:volume_constraints}
As described in the introduction, the main objective of this paper is to describe Minkowski valuations satisfying certain volume constraints.
\begin{definition} An operator $\mathcal{P}hi:\mathcal{K}^n\mathrm{length}ongrightarrow\mathcal{K}^n$ satisfies a lower volume constraint (LVC) if there exists a constant $c_\mathcal{P}hi>0$ such that
$$V_n(\mathcal{P}hi(K))\ge c_\mathcal{P}hiV_n(K),\quad\forall K\in\mathcal{K}^n.$$
Analogously, we say that $\mathcal{P}hi$ satisfies an upper volume constraint (UVC) if there exists $C_\mathcal{P}hi>0$ such that
$$V_n(\mathcal{P}hi(K))\mathrm{length}eq C_{\mathcal{P}hi}V_n(K),\quad\forall K\in\mathcal{K}^n.$$
\end{definition}
Throughout the paper we will refer to these properties simply writing (LVC) and (UVC), respectively. We will mostly consider
valuations that satisfy both (LVC) and (UVC), which corresponds to Definition \ref{def_VC_intro}. If $\mathcal{P}hi$ is of this type, we will say that $\mathcal{P}hi$ satisfies the {\em volume constraint}, briefly, $\mathcal{P}hi$ satisfies (VC) or $\mathcal{P}hi$ satisfies the (VC) condition.
The identity operator on $\mathcal{K}^n$ trivially satisfies (VC), but a more interesting example, which motivated the previous definition in \cite{abardia.colesanti.saorin1},
is the difference body operator, defined in \eqref{diff_body}, which satisfies (RS).
The operators in Theorem~\ref{teo}(ii) are also examples of Minkowski valuations satisfying (VC). Indeed, for a segment $S$ and an $(n-1)$-dimensional convex body $L$ with $\mathcal{P}him(L+S)=n$ we have, by the linearity and positivity of mixed volumes (see Theorem~\ref{mix_volumes_Schneider}),
$$V_n(L+V_n(K)S)=V(L[n-1],V_n(K)S)=V_n(K)V(L[n-1],S)=V_n(K)V_n(L+S).$$
Hence, the (VC) condition is satisfied with $c_{\mathcal{P}hi}=C_{\mathcal{P}hi}=V_n(L+S)\neq 0$.
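To illustrate the previous computation with a minimal (and purely illustrative) choice of $L$ and $S$, let $n=2$, $L=[-e_2,e_2]$ and $S=[0,e_1]$, where $\{e_1,e_2\}$ denotes the standard basis. For $\Phi(K)=L+V_2(K)S$, the body $\Phi(K)$ is a rectangle with side lengths $2$ and $V_2(K)$, so that
$$V_2(\Phi(K))=2V_2(K)=V_2(L+S)V_2(K),$$
in accordance with the constants $c_\Phi=C_\Phi=V_2(L+S)=2$ obtained above.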
\section{Dichotomy for the image of a point}\label{sec: dependence a point}
The aim of this section is to prove that, if a Minkowski valuation $\Phi\in\MVal$ satisfies the (VC) condition, then the image of a
point is either a point or an $(n-1)$-dimensional convex body. That is, we prove the following result.
\begin{theorem}\label{L0Ln} Let $p\in\mathbb{R}^n$.
If $n\geq 2$ and $\Phi\in\MVal$ satisfies the (VC) condition, then either $\dim(\Phi(\{p\}))=0$ or $\dim(\Phi(\{p\}))=n-1$.
\end{theorem}
For the proof of Theorem~\ref{L0Ln}, we need to exploit in a more specific way the information given by the McMullen decomposition \eqref{h-decomp} of $\Phi$, which we
can use since $\Phi$ is a continuous and translation invariant Minkowski valuation.
We consider $V_n(\Phi(\lambda K))$ for $\lambda>0$. By \eqref{h-decomp}, Lemma~\ref{r: facts on f_js}(i), and the extension of mixed volumes to differences of support
functions (see Section~\ref{sec_mixed_volumes}), we have
$$
V_n(\Phi (\lambda K))=V_n(h(\Phi (\lambda K),\cdot)[n])=V_n\Big(\big(h(L_0,\cdot)+\sum_{j=1}^{n-1}\lambda^j f_j(K,\cdot)+\lambda^nV_n(K)h(L_n,\cdot)\big)[n]\Big).
$$
The multilinearity of the extension of mixed volumes to differences of support functions provides us with a polynomial expansion of $V_n(\Phi(\lambda K))$ in $\lambda$: each of the $n$ entries contributes a term of degree between $0$ and $n$, so the expansion may contain terms of degree from
$0$ up to $n^2$.
Moreover, each of the coefficients of the polynomial is a sum of mixed volumes of the support functions of $L_0$ and $L_n$, and the
functions $f_j(K,\cdot)$, $1\leq j\leq n-1$, involved in the McMullen decomposition of $\Phi$.
As each of these functions depends only on $K$, for the sake of brevity, we will write
\begin{equation}\label{vi di}
V_n(\Phi(\lambda K))=\sum_{j=0}^{n^2} v^{\Phi}_j(K) \lambda^j,
\end{equation}
where $v_j^{\Phi}(K)$ denotes the coefficient of degree $j$ in this polynomial expansion of $V_n(\Phi(\lambda K))$, $0\leq j \leq n^2$.
We note that, in general, $v_j^{\Phi}(K)$ may be negative.
\begin{remark}\label{f_i in v_j}
Let $\Phi\in\MVal$ satisfy (VC) and let $f_i$, $0\leq i\leq n$, be the functions appearing in its McMullen decomposition. Let $K\in\mathcal{K}^n$ and let $v_j^{\Phi}(K)$ be as above. Then, for every $0\leq j \leq n^2$, the mixed volumes involved in the coefficient $v_j^{\Phi}(K)$ contain only functions $f_i(K,\cdot)$ with $0\leq i\leq j$.
\end{remark}
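To make the structure of the coefficients $v_j^{\Phi}(K)$ explicit in the smallest case, consider for instance $n=2$ and write $h_0=h(L_0,\cdot)$, $h_2=h(L_2,\cdot)$. Expanding the (bilinear) mixed volume gives
\begin{align*}
V_2(\Phi(\lambda K))&=V_2(L_0)+2\lambda V(h_0,f_1(K,\cdot))+\lambda^2\big(V(f_1(K,\cdot),f_1(K,\cdot))+2V_2(K)V(h_0,h_2)\big)\\
&\quad+2\lambda^3 V_2(K)V(f_1(K,\cdot),h_2)+\lambda^4 V_2(K)^2V_2(L_2),
\end{align*}
a polynomial of degree at most $n^2=4$ whose coefficient of degree $j$ only involves $f_i(K,\cdot)$ with $i\leq j$, as stated in Remark~\ref{f_i in v_j}.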
We next state a fact whose proof is a simple observation, but which will play an important role in what follows.
If $\Phi\in\MVal$ satisfies the (VC) condition, then there exist positive constants $c_\Phi$ and $C_\Phi$,
independent of $K$ and $\lambda$, for which
$$
c_\Phi\lambda^nV_n(K)=c_\Phi V_n(\lambda K)\leq V_n(\Phi(\lambda K))\leq C_\Phi V_n(\lambda K)=C_\Phi\lambda^nV_n(K).
$$
Comparing these inequalities with \eqref{vi di}, we immediately get that the only possibly non-vanishing term in the sum in~\eqref{vi di} is the one containing $\lambda^n$. In other words,
$V_n(\Phi(\lambda K))$ is necessarily a monomial of degree $n$ in $\lambda$.
The following corollaries collect the important consequences of this fact.
\begin{corollary}\label{mixedZero}
Let $\Phi\in \MVal$ satisfy the (VC) condition.
Then:
\begin{enumerate}
\item[\emph{(i)}] if $\dim K<n$, then $v^{\Phi}_l(K)=0$ for all $0\leq l \leq n^2$;
\item[\emph{(ii)}] if $\dim K=n$, then $v_n^{\Phi}(K)\neq 0$;
\item[\emph{(iii)}] if $\dim K=n$, then $v^{\Phi}_l(K)=0$ for every $l\neq n$.
\end{enumerate}
\end{corollary}
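As an illustration, these statements can be checked directly for the difference body operator $\Phi=D$, which satisfies (VC) by (RS). Its McMullen decomposition has $L_0=L_n=\{0\}$ and $f_1(K,\cdot)=h(DK,\cdot)$ as the only non-trivial term, so that
$$V_n(D(\lambda K))=\lambda^n V_n(DK),$$
a monomial of degree $n$ whose coefficient $v_n^{D}(K)=V_n(DK)=V(h(DK,\cdot)[n])$ is non-zero precisely when $\dim K=n$.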
\begin{corollary}\label{each_zero}
Let $\Phi\in \MVal$ satisfy the (VC) condition and let $K\in\mathcal{K}^n$ be fixed.
If, for every $1\leq j\leq n-1$, the functions $u\mapsto f_j(K,u)$ are convex,
then a coefficient $v^{\Phi}_j(K)$ in \eqref{vi di} vanishes if and only if
each of the mixed volumes involved in its explicit expression does.
\end{corollary}
\begin{proof}
Let $\Phi\in \MVal$ satisfy the (VC) condition and let $K\in\mathcal{K}^n$.
If each function $u\mapsto f_j(K,u)$ is a convex function, $1\leq j\leq n-1$, then the coefficients $v^{\Phi}_j(K)$ are sums of mixed volumes of convex bodies. Thus, these summands are all non-negative and the statement holds.
\end{proof}
We now proceed to prove Theorem~\ref{L0Ln}.
\begin{proof}[Proof of Theorem~\ref{L0Ln}]
Let $\Phi\in\MVal$ satisfy the (VC) condition. We consider its McMullen decomposition, as described in \eqref{h-decomp}, and use the notation of Definition~\ref{notation_fi_dii}.
We first prove that $\dim L_0\neq n$.
Indeed, since $L_0=\Phi(\{p\})$, by Remark~\ref{image of a point} and \eqref{h-decomp}, $\dim L_0=n$ would imply $V_n(\Phi(\{p\}))>0$, contradicting the (UVC) condition. From now on, we assume $0\leq \dim L_0\leq n-1$.
Let $Z$ be a fixed $n$-dimensional zonotope. By Theorem~\ref{suppZonoid}, $u\mapsto f_k(Z,u)$ is the support function of a convex body $\Phi_k(Z)$,
for every $1\leq k \leq n-1$.
On the other hand, by Corollary~\ref{mixedZero}(ii) we have $v_n^{\Phi}(Z)\neq 0$, where $v_n^{\Phi}(Z)$ is the coefficient of degree $n$ in $\lambda$ of the polynomial $V_n(\Phi(\lambda Z))$ given in \eqref{vi di}. Therefore, $v_n^{\Phi}(Z)$ is a
sum of mixed volumes of the form
$$
V(\Phi_0(Z)[a_0],\Phi_1(Z)[a_1],\dots,\Phi_{n-1}(Z)[a_{n-1}],\Phi_n(Z)[a_n]),
$$
where $a_0,\dots,a_n\in\{0,1,\dots,n\}$ satisfy the following two conditions.
\renewcommand\labelitemi{\tiny$\bullet$}
\begin{itemize}\itemsep9pt
\item As the sum of the multiplicities of the entries of a mixed volume is $n$,
\begin{equation}\label{sum=n}
\sum_{k=0}^na_k=n;
\end{equation}
\item by \eqref{vi di} and the fact that $V(\Phi_0(Z)[a_0],\Phi_1(Z)[a_1],\dots,\Phi_{n-1}(Z)[a_{n-1}],\Phi_n(Z)[a_n])$ is a summand of $v_n^{\Phi}(Z)$,
\begin{equation}
\label{sumk=n}\sum_{k=0}^nka_k=n.
\end{equation}
\end{itemize}
Using the described notation, we prove the following claim.
\noindent{\bf Claim 1.} \emph{$v_n^{\Phi}(Z)$ has only one non-zero summand:
\begin{equation}\label{gen_term}
V(\Phi_0(Z)[a_0],\Phi_1(Z)[a_1],\dots,\Phi_{n-1}(Z)[a_{n-1}],\Phi_n(Z)[a_n])>0,
\end{equation}
with $a_0,\dots,a_n\in\{0,1,\dots,n\}$ satisfying \eqref{sum=n} and \eqref{sumk=n}. Moreover,
$$
\dim(\Phi_k(Z))=a_k,\quad\forall\, k\in\{0,\dots,n\}.
$$}
We note that, a priori, the $a_k$, $1\leq k\leq n-1$, may depend on the zonotope $Z$.
At the end of the proof, we will show that this summand is one of the following two:
\begin{enumerate}\itemsep6pt
\item[(A)] $V(L_0[n-1],L_n)$,
\item[(B)] $V(\Phi_1(Z)[n])$,
\end{enumerate}
and that this fact implies Theorem~\ref{L0Ln}. We note that the mixed volume in (A) (resp.\ (B)) corresponds to $a_0=n-1$, $a_n=1$ and $a_j=0$, $1\leq j\leq n-1$ (resp.\ $a_1=n$, $a_0=0$ and $a_j=0$, $2\leq j\leq n$), which are the trivial solutions of \eqref{sum=n} and \eqref{sumk=n}.
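To get a feeling for how restrictive conditions \eqref{sum=n} and \eqref{sumk=n} are, consider for instance $n=3$: the system reads $a_0+a_1+a_2+a_3=3$ and $a_1+2a_2+3a_3=3$, whose solutions in non-negative integers are
$$(a_0,a_1,a_2,a_3)\in\{(2,0,0,1),\,(0,3,0,0),\,(1,1,1,0)\};$$
the first two correspond to (A) and (B), while the third one will be excluded by Claim~4 below.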
We next prove Claim~1. By Corollary~\ref{mixedZero}(ii), there exist $a_0,\dots,a_n$ for which \eqref{gen_term} holds.
We show that $\dim(\Phi_k(Z))=a_k$ for every $k\in\{0,\dots,n\}$. This means, in particular, that there is only one possible choice for the numbers $(a_0,a_1,\dots,a_n)$, which implies the whole claim.
For $a_0,\dots,a_n$ such that \eqref{gen_term} holds, Theorem~\ref{mix_volumes_Schneider} yields that
$\dim(\Phi_k(Z))\geq a_k$ and, hence, there exist $a_k$ linearly independent segments $S_1,\dots,S_{a_k}\subset\Phi_k(Z)$, for every $k\in\{0,\dots,n\}$.
Assume that for some $k\in\{0,\dots,n\}$, $\dim(\Phi_k(Z))>a_k$. Then, by condition (b) in Theorem~\ref{mix_volumes_Schneider}, there exists $j\ne k$ such that
$a_j\ge 1$ and
$$
V(\Phi_0(Z)[a_0],\dots,\Phi_k(Z)[a_k+1],\dots,\Phi_j(Z)[a_j-1],\dots,\Phi_n(Z)[a_n])>0.
$$
However, this mixed volume is one of the summands of the coefficient $v_{n+k-j}^{\Phi}(Z)$, which has to be zero by Corollaries~\ref{each_zero} and \ref{mixedZero}(iii). Thus, $\dim(\Phi_k(Z))=a_k$, which concludes the proof of Claim~1.
In the second step of the proof, we will apply Claim~1 to cubes. To do so, we need to introduce some notation.
Let $\{w_1,\dots,w_n\}$ be a fixed basis of $\mathbb{R}^n$ and define
$S_i:=[-w_i,w_i]$ and
\begin{equation}\label{Cn}
C_n:=S_1+\dots+S_n.
\end{equation}
Clearly $C_n$ is an $n$-dimensional zonotope and, since $\Phi$ satisfies (VC), we have $\dim(\Phi (C_n))=n$.
We define $a_k:=\dim(\Phi_k(C_n))$. By Claim~1, $a_k$ coincides with the multiplicity of $\Phi_k(Z)$ in the mixed volume appearing in \eqref{gen_term}, for $Z=C_n$.
Let $C(n,k)$ denote, as in Theorem \ref{lem_polarization}, the set of all subsets of $\{1,\dots,n\}$ with $k$ elements and, for $\sigma\in C(n,k)$ and $j=1,\dots,k$,
let $\sigma_j$ denote the $j$-th element of $\sigma$. By Theorem~\ref{lem_polarization}, we have
\begin{align}\label{polarization}
\Phi_k(C_n)&=\sum_{\sigma\in C(n,k)}\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k})\\\nonumber
&=\sum_{\sigma\in C'(n,k)}\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k})+\sum_{\sigma
\in C(n,k)\setminus C'(n,k)}\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k}),
\end{align}
where $C'(n,k)$ contains those elements $\sigma\in C(n,k)$ for which $\dim(\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k}))\neq 0$.
For every $1\leq k\leq n$, we can choose a subset $\Sigma_k\subset C'(n,k)$ which is {\em minimal} in the following sense: first,
\begin{equation}\label{minimal_elements}
\dim(\Phi_k(C_n))=\dim\left(\sum_{\sigma\in\Sigma_k}\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k})\right),
\end{equation}
and, secondly, this equality fails as soon as one of the terms of $\Sigma_k$ is omitted from the sum on the right-hand side.
We note that the number of elements in $\Sigma_k$ is at most $a_k$, and that this bound is attained if $\dim(\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k}))=1$ for every $\sigma\in\Sigma_k$. Moreover, for every $\sigma\in C'(n,k)$ there exists a minimal subset $\Sigma_k$ which contains $\sigma$. Equation \eqref{minimal_elements} will be used to prove Claim~2 below. For simplicity, we say that \emph{a segment $S_j$ has index in $\Sigma_k$} if there is a $\sigma\in\Sigma_k$ such that $\sigma_l=j$ for some $1\leq l\leq k$.
Equation \eqref{polarization} together with the McMullen decomposition \eqref{h-decomp} yields
\begin{equation}\label{pol2}
\Phi (C_n)=\Phi_0(C_n)+\sum_{k=1}^n\Phi_k(C_n)=L_0+\sum_{k=1}^{n}\sum_{\sigma\in C'(n,k)}\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k})
+q,
\end{equation}
where the point $q\in\mathbb{R}^n$ is given by $\sum_{k=1}^n\sum_{\sigma
\in C(n,k)\setminus C'(n,k)}\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k})$.
We will next focus on the following sum of convex bodies:
\begin{equation}\label{pol3}
\sum_{k=1}^{n}\sum_{\sigma\in C'(n,k)}\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k}).
\end{equation}
For $1\leq i\leq n$, let $\tau_i$ be the number of subsets $\sigma\in C'(n,k)$, over all possible choices of $k$ between $1$ and $n$, having $i$ as one of their elements.
In other words, $\tau_i$ is the number of summands in \eqref{pol3} in which the segment $S_i$ appears.
We define $I:=(\tau_1,\dots,\tau_n)\in\mathbb{N}^n$.
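To illustrate this bookkeeping with a hypothetical configuration (not claimed to arise from an actual valuation), suppose that $n=3$ and that the only non-trivial summands in \eqref{pol3} are $\Phi_1(S_1)$ and $\Phi_2(S_2+S_3)$, i.e., $C'(3,1)=\{\{1\}\}$, $C'(3,2)=\{\{2,3\}\}$, and $C'(3,3)=\emptyset$. Then $\tau_1=\tau_2=\tau_3=1$ and $I=(1,1,1)$; if instead $\Phi_2(S_1+S_2)$ were also a non-trivial summand, we would have $I=(2,2,1)$.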
\noindent{\bf Claim 2.}
\begin{enumerate}
{\em
\item[\emph{(a)}] For every $1\leq k\leq n$ and $\sigma\in C'(n,k)$,
\begin{equation}\label{0or1}
\dim(\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k}))=1.
\end{equation}
\item[\emph{(b)}] For every $1\leq k\leq n$, there are exactly $a_k$ elements $\sigma\in C'(n,k)$ for which \eqref{0or1} holds.
\item[\emph{(c)}] $I=(1,\dots,1)$.}
\end{enumerate}
First we prove that
\begin{equation}\label{tau_i_geq_1}\tau_k\geq 1, \quad 1\leq k\leq n,\end{equation}
arguing by contradiction. Without loss of generality we assume that $\tau_1=0$, i.e.,
$$
\dim(\Phi_k(S_1+S_{\sigma_2}+\dots+S_{\sigma_k}))=0,\quad\forall\, \sigma=(1,\sigma_2,\dots,\sigma_k)\in C(n,k),\;\forall\, k\in\{1,\dots,n\}.
$$
Then, by \eqref{polarization}, the summands containing $S_1$ contribute only points, so that
$$
0<V_n(\Phi(S_1+\dots+S_n))=V_n(\Phi(S_2+\dots+S_n)),
$$
which contradicts (UVC), since $V_n(S_2+\dots+S_n)=0$. Hence, $\tau_k\geq 1$, $1\leq k\leq n$, and each segment appears in at least one summand of
\eqref{pol3}.
For the proof of (a) and (b) in Claim~2, we will repeatedly use the following argument: if the claim were not satisfied, we could construct, according to the considerations given in each case, appropriate zonotopes for which (UVC) fails to hold.
We prove next that \eqref{0or1} holds. If $k=n$, then \eqref{0or1} is directly satisfied. Indeed, if $a_n\neq 0$, then $a_n=1$ by
\eqref{sumk=n} and, by Claim~1, $\dim(\Phi_n(C_n))=\dim(\Phi_n(S_1+\dots+S_n))=1$.
For $1\leq k\leq n-1$, we prove \eqref{0or1} by contradiction.
Assume that there are $k\in\{1,\dots,n-1\}$ and $\widetilde\sigma\in C'(n,k)$ such that
$$
\dim(\Phi_k(S_{\widetilde\sigma_1}+\dots+S_{\widetilde\sigma_k}))\geq 2.
$$
Let $\Sigma_k\subset C'(n,k)$ be a minimal set containing $\widetilde\sigma$, as defined in \eqref{minimal_elements}. In this situation, because of the minimality of $\Sigma_k$ and Claim~1, the number $m$ of elements of $\Sigma_k$ is at most $a_k-1$. We denote by $s\leq mk$ the dimension of the linear span of the segments with index in $\Sigma_k$ and by $P_s$ the zonotope obtained as the sum of all these segments. For every $1\leq l\leq n$, $l\neq k$, let $\Sigma_l$ be a fixed minimal set. Denote by $s'$ the dimension of the linear span of the segments with index in some element of the family $\{\Sigma_{l}\}_{1\leq l\leq n,\, l\neq k}$, and let $P_{s'}$ be the zonotope obtained as the sum of all these segments. We have
$$
s'\le\sum_{l=1,\dots,n;\;l\neq k}la_l=n-ka_k.
$$
Let $P$ be the zonotope given by $P=P_s+P_{s'}$. By construction,
$$\dim P\leq s+s'\leq mk+n-ka_k\le (a_k-1)k+n-ka_k=n-k<n$$
and, hence, $V_n(P)=0$. On the other hand, since every segment with index in $\Sigma_k$ or in some $\Sigma_l$, $l\neq k$, is a summand of $P$, recalling \eqref{pol2} and \eqref{minimal_elements} we have
$V_n(\Phi(P))\neq 0$. This contradicts (UVC). Thus, \eqref{0or1} holds.
Hence, \eqref{tau_i_geq_1} and Claim~2(a) show that each of the segments $S_1,\dots,S_n$ appears in at least one summand of \eqref{pol3} and that each summand of
\eqref{pol3} has dimension $1$. This implies that, in the sum in \eqref{polarization}, there are at least $a_k$ summands indexed by
$C'(n,k)$ for every $1\leq k\leq n-1$. In particular, any minimal set $\Sigma_k$ contains exactly $a_k$ elements.
We show next Claim~2(b), i.e., we show that for every $1\leq k\leq n$ there are exactly $a_k$ subsets $\sigma\in C'(n,k)$ for which $\dim(\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k}))=1$ holds.
For $k=n$, this follows immediately since $C(n,n)$ contains only one element and either $a_n=0$ or $a_n=1$; Claim~1 yields the result.
We prove the statement for $1\leq k\leq n-1$ arguing by contradiction. For $1\leq l\leq n-1$, let $\Sigma_l$ be minimal and denote its elements by
$\sigma^1,\dots,\sigma^{a_l}$.
Fix $k\in\{1,\dots,n-1\}$ and assume that Claim~2(b) does not hold for $k$.
Then there is $\sigma\in C'(n,k)$ such that $\sigma\neq\sigma^{j}$ for every $j\in\{1,\dots,a_k\}$.
Without loss of generality, since \eqref{0or1} holds, we may assume that
\begin{equation}\label{eq_exchange}
\dim(\Phi_{k}(S_{\sigma_1}+\dots+S_{\sigma_k})+\Phi_{k}(S_{\sigma^2_1}+\dots+S_{\sigma^2_k})+\dots+\Phi_{k}(S_{\sigma^{a_k}_1}+\dots+S_{\sigma^{a_k}_k}))=a_k.
\end{equation}
Let $Q$ be the sum of the $n-ka_k$ segments whose index is in some $\Sigma_l$, $1\leq l\leq n$, $l\neq k$ (cf.\ Claim~2(a)). We claim that $\dim Q=n-k a_k$. Indeed, if these segments were not linearly independent, i.e., if $\dim Q<n-ka_k$, then the zonotope
$$
P:=\sum_{j=1}^{a_k}\sum_{i=1}^kS_{\sigma_i^j}+Q
$$
would satisfy $\dim P <n$, while, by \eqref{minimal_elements}, $V_n(\Phi(P))\neq 0$. This contradicts the (UVC) condition.
Thus, we may assume in the following that $\dim Q=n-ka_k$.
Consider the at most $(a_k+1)k$ segments $S_{\sigma_1},\dots,S_{\sigma_k},S_{\sigma_1^1},\dots,S_{\sigma_k^1},\dots,S_{\sigma_1^{a_k}},\dots,S_{\sigma_k^{a_k}}$. We distinguish the following mutually exclusive cases and define an appropriate zonotope $P'$ in each of them:
\begin{enumerate}[i)]
\item Some segment with index in $\Sigma_k$ is already a summand of $Q$. We set
$$
P'=\sum_{j=1}^{a_k}\sum_{i=1}^k S_{\sigma_i^j}.
$$
\item No segment with index in $\Sigma_k$ is a summand of $Q$, but some segment among $S_{\sigma_1},\dots,S_{\sigma_k}$ is already a summand of $Q$. In this case we set
$$
P'=S_{\sigma_1}+\dots+S_{\sigma_k}+\sum_{j=2}^{a_k}\sum_{i=1}^kS_{\sigma_{i}^j}.
$$
\item Otherwise, all segments $S_{\sigma_1},\dots,S_{\sigma_k}$ have index in $\Sigma_k$, that is,
$$
\dim\Big(S_{\sigma_1}+\dots+S_{\sigma_k}+\sum_{j=1}^{a_k}\sum_{i=1}^kS_{\sigma_{i}^j}\Big)=ka_k.
$$
Since $\sigma\neq \sigma^1$, there is an $l$ with $2\leq l\leq a_k$ for which $S_{\sigma_i}=S_{\sigma^l_r}$ for some $1\leq i,r\leq k$. We set
$$
P'=S_{\sigma_1}+\dots+S_{\sigma_k}+\sum_{l=2}^{a_k}\sum_{i=1}^kS_{\sigma_{i}^l}.
$$
\end{enumerate}
Define the zonotope $P:=P'+Q$. By construction, we have $\dim P<n$. On the other hand, by \eqref{pol2}, \eqref{minimal_elements}, and \eqref{eq_exchange},
$\dim(\Phi(P))=n$. This contradicts the (UVC) condition, and Claim~2(b) holds also for $1\leq k\leq n-1$.
Now, the assertion $I=(1,\dots,1)$, which completes the proof of Claim~2, follows immediately from \eqref{sumk=n}, Claim~2(a) and~(b), and the
fact that $\tau_i\geq1$, $1\leq i\leq n$. Indeed, by Claim~2(b), $C'(n,k)$ contains exactly $a_k$ elements, for every $k=1,\dots,n$. This means that
there are exactly $ka_k$ indices corresponding to $C'(n,k)$ and, in total, we have $\sum_{k=1}^n ka_k=n$ indices. As each $\tau_i$ is at least $1$, we have
$\tau_i=1$ for every $i=1,\dots,n$, and Claim~2 is proved.
In the next claim we study the relation between the sets $C'(n,k)$ associated to a generalized cube $C_n$ and those associated to another generalized cube $P$ which differs from $C_n$ in exactly one segment; that is, we compare the distribution of the segments appearing in \eqref{pol3} among the different $\Phi_k$ for the generalized cubes $C_n$ and $P$.
\noindent\textbf{Claim 3.}
\emph{Let $C_n=S_1+\dots+S_n$ be as before and let $S$ be a segment such that $\spa S\neq \spa S_i$ for every $1\leq i\leq n$. Define $P:=S+S_2+\dots+S_n$. Let $j\in\{1,\dots,n-1\}$. If
$$\dim(\Phi_{j}(S_1+\dots+S_{j}))=1,$$
then }
\begin{enumerate}\itemsep6pt
\item[(a)] $\dim(\Phi_j(S+S_2+\dots+S_j))=1$ \emph{and}
\item[(b)] \emph{$\dim(\Phi_k(S+S_{\sigma_2}+\dots+S_{\sigma_k}))=0$ for every $k\in\{1,\dots,n\}$ and $S_{\sigma_2},\dots,S_{\sigma_k}$ such that $\{2,\dots,j\}\neq\{{\sigma_2},\dots,{\sigma_k}\}$.}
\end{enumerate}
Let $C_n=S_1+\cdots+S_n$ and let $P=S+S_2+\cdots+S_n$ be as in the statement.
Using \eqref{pol2} for $C_n$ and $P$, we can write
\[
\Phi(C_n)=L_0+\sum_{k=1}^n\sum_{\sigma\in C'(n,k,C_n)}\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k}) + q \quad\text{ and}
\]
\[
\Phi(P)=L_0+\sum_{k=1}^n\sum_{\sigma\in C'(n,k,P)}\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k}) + q',
\]
where $C'(n,k,C_n)$ (resp.\ $C'(n,k,P)$) denotes the subset of elements $\sigma\in C(n,k)$ for which $\dim(\Phi_k(S_{\sigma_1}+\cdots+S_{\sigma_k}))\neq 0$, in the above sum for $C_n$ (resp.\ $P$). For $P$, we make an abuse of notation and denote also by $1$ the index associated to $S$.
We compare the central sum
$$A=\sum_{k=1}^n\sum_{\sigma\in C'(n,k,C_n)}\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k})$$
with the sum
$$B=\sum_{k=1}^n\sum_{\sigma\in C'(n,k,P)}\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k}).$$
First, using Claim~2(c), we split $A$ and $B$ as follows:
$$A=\Phi_j(S_1+\cdots+S_j)+\sum_{k=1}^n\sum_{\substack{\sigma\in C'(n,k,C_n) \\ \sigma_l\neq 1,\, 1\leq l \leq k}}\Phi_k(S_{\sigma_1}+\dots+S_{\sigma_k})=\Phi_j(S_1+\cdots+S_j)+C,$$
$$B=\Phi_i(S+S_{\beta_2}+\cdots+S_{\beta_i})+\sum_{k=1}^n\sum_{\substack{\gamma\in C'(n,k,P) \\ \gamma_l\neq 1,\, 1\leq l \leq k}}\Phi_k(S_{\gamma_1}+\dots+S_{\gamma_k})=\Phi_i(S+S_{\beta_2}+\cdots+S_{\beta_i})+D,$$
for some $1\leq i\leq n$ and $\beta_2,\dots,\beta_i\in\{2,\dots,n\}$.
By Claim~2(c), $S_1$ does not appear in $C$ and, likewise, $S$ does not appear in $D$. Thus, every summand in $C$ is a summand in $D$ and vice versa; that is, the sums $C$ and $D$ coincide and contain the same segments.
Therefore, using again Claim~2(c), we obtain that $\beta_m\in\{2,\dots,j\}$ for $2\leq m\leq i$. Since every segment $S_m$, $2\leq m\leq j$, appears exactly once in $B$, we necessarily have $i=j$. Hence, the proof of (a) is completed. Now (b) follows directly from (a) and Claim~2(c).
\noindent\textbf{Claim 4.} \emph{Let $C_n=S_1+\dots+S_n$ be as in \eqref{Cn}. Then the unique non-zero summand of $v_n^{\Phi}(C_n)$ (given by Claim~1) is necessarily one of the following:
\begin{enumerate}\itemsep6pt
\item[\emph{(A)}] $V(L_0[n-1],L_n)V_n(C_n)$,
\item[\emph{(B)}] $V(\Phi_1(C_n)[n])$.
\end{enumerate}}
In the notation of Claim~1, this is equivalent to saying that, for $C_n$, either
\begin{enumerate}\itemsep4pt
\item[$(\widetilde{A})$] $(a_0,\dots,a_n)=(n-1,0,\dots,0,1)$ or
\item[$(\widetilde B)$] $(a_0,\dots,a_n)=(0,n,0,\dots,0)$.
\end{enumerate}
Notice that this yields the statement of Theorem~\ref{L0Ln}. Indeed, by Remark~\ref{image of a point}, $L_0=\Phi(\{p\})$ for every $p\in\mathbb{R}^n$ and, by Claim~1, $\dim(\Phi_0(C_n))=\dim L_0=a_0$. Hence, we have that either $\dim(\Phi(\{p\}))=n-1$ or $\dim(\Phi(\{p\}))=0$.
First notice that the case $n=2$ is trivial, since by \eqref{sum=n} and \eqref{sumk=n} the only possibilities for $(a_0,a_1,a_2)$ are $(1,0,1)$ and $(0,2,0)$. We assume in the following that $n\geq 3$.
If $a_n=1$, by \eqref{sumk=n} we obtain that $a_j=0$ for every $1\leq j\leq n-1$. Furthermore, using \eqref{sum=n}, $a_0=n-1$ and we are in case $(\widetilde A)$.
If $a_n\neq 1$, then \eqref{sumk=n} yields $a_n=0$. Moreover, either $a_1=n$ and we are in case $(\widetilde B)$, or $a_1\neq n$, which the following argument shows to be impossible.
Let $C_n=S_1+\cdots+S_n$ and assume that $a_n=0$ and $a_1\neq n$.
Define $$k_0:=\min\{k : a_k\neq 0,\ 1\le k\leq n\}.$$ Notice that, as proved at the beginning of the proof of Theorem~\ref{L0Ln}, $\dim L_0=a_0<n$. Hence, by \eqref{sumk=n}, there exists $k\geq 1$ such that $a_k\neq 0$; in particular, $k_0$ is well-defined. If $k_0a_{k_0}\neq n$, define $k_1:=\min\{k : a_k\neq 0,\ k> k_0\}$, while
if $k_0a_{k_0}=n$, set $k_1=k_0$. The existence of $k_1$ is guaranteed by \eqref{sumk=n}.
We claim that $k_1>1$. Indeed, if $k_1=1$, then $1=k_0=k_1$ and thus, by the definition of $k_1$, $k_0 a_{k_0}=a_1=n$, but this contradicts the assumption that $a_1\neq n$.
We claim also that $k_0<n$. Indeed, $k_0=n$ means that $a_n\neq 0$, which by \eqref{sumk=n} is equivalent to $a_n=1$, but we are assuming $a_n=0$.
We next claim that
\begin{equation}\label{aggiunta 2}
k_0+k_1\le n.
\end{equation}
Indeed, if $k_0<k_1$, by \eqref{sumk=n} we have $k_0+k_1\leq k_0a_{k_0}+k_1a_{k_1}\leq n$. Assume now that $k_0=k_1$; then we have
$k_0 a_{k_0}=n$. We study the quantity $2k_0=k_0+k_1$, depending on $a_{k_0}$. If $a_{k_0}=1$, then $k_0=n$, but this is not possible, as we have
$a_n=0$. If $a_{k_0}=2$, then $2k_0=a_{k_0}k_0=n$ and \eqref{aggiunta 2} holds. Finally, if $a_{k_0}>2$, then $2k_0<a_{k_0}k_0=n$. Inequality \eqref{aggiunta 2} is proved.
Using \eqref{0or1}, \eqref{aggiunta 2}, and Claim~2(c), we may assume without loss of generality that
\begin{equation}\label{not_zero}
\dim(\Phi_{k_0}(S_1+\dots+S_{k_0}))=1\quad\textrm{and}\quad\dim(\Phi_{k_1}(S_{k_0+1}+\dots+S_{k_0+k_1}))=1.
\end{equation}
We will apply Claim~3 to suitable generalized cubes to obtain a contradiction. Consider the bases of $\mathbb{R}^n$ given by $\{w_1,\dots,w_{k_0},w_{k_0}+w_{k_0+1},w_{k_0+2},\dots,w_n\}$ and \\ $\{w_1,\dots,w_{k_0-1},w_{k_0}+w_{k_0+1},w_{k_0+1},\dots,w_n\}$, and the associated zonotopes
$$\widetilde C_n:=S_1+\dots+S_{k_0}+S_{k_0,k_0+1}+S_{k_0+2}+\dots+S_n
$$
and
$$\overline C_n:=S_1+\dots+S_{k_0-1}+S_{k_0,k_0+1}+S_{k_0+1}+\dots+S_n,$$
where we denote $S_{k_0,k_0+1}:=[-(w_{k_0}+w_{k_0+1}),w_{k_0}+w_{k_0+1}]$. Thus, $\widetilde C_n$ is obtained from $C_n$ by replacing $S_{k_0+1}$ by $S_{k_0,k_0+1}$, while $\overline C_n$ is obtained by replacing $S_{k_0}$ by $S_{k_0,k_0+1}$.
Observe that the first basis is not available if $k_0=n$, i.e., if $a_n=1$, which corresponds to case $(\widetilde A)$ of Claim~4.
By~\eqref{not_zero}, we have
$$\dim(\Phi_{k_1}(S_{k_0+1}+\dots+S_{k_0+k_1}))=1.$$
Hence, if $k_0\geq 2$, applying Claim~3(b) to the cubes $C_n$ and $\widetilde C_n$ (with $S_{k_0+1}$ in the role of the replaced segment) and noting that $\{1,\dots,k_0-1\}\neq\{k_0+2,\dots,k_0+k_1\}$, we obtain
\begin{equation}\label{basisZero}
\dim(\Phi_{k_0}(S_1+\dots+S_{k_0-1}+S_{k_0,k_0+1}))=0.
\end{equation}
If $k_0=1$ and $k_1\geq 2$, the same application of Claim~3(a) and (b) to $C_n$ and $\widetilde C_n$ gives
\begin{equation}\label{basisZero_k0=1}
\dim(\Phi_{k_1}(S_{1,2}+S_{3}+\dots+S_{k_1+1}))=1\quad\textrm{and}\quad\dim(\Phi_1(S_{1,2}))=0.
\end{equation}
Applying Claim~3(a) to the cubes $C_n$ and $P=\overline C_n$, and using \eqref{not_zero}, we have that
$$\dim(\Phi_{k_0}(S_1+\dots+S_{k_0}))=1$$
implies
\begin{equation}\label{basisNonZero}
\dim(\Phi_{k_0}(S_1+\dots+S_{k_0-1}+S_{k_0,k_0+1}))=1,
\end{equation}
which for $k_0=1$ means
\begin{equation}\label{basisNonZero_k0=1}
\dim(\Phi_{1}(S_{1,2}))=1.
\end{equation}
Hence, if $k_0\geq 2$, then \eqref{basisZero} together with \eqref{basisNonZero} yields a contradiction. If $k_0=1$ and $k_1\geq 2$, then \eqref{basisZero_k0=1} together with \eqref{basisNonZero_k0=1} also yields a contradiction.
We note that there is no contradiction if $k_0=k_1=1$, which corresponds to case $(\widetilde B)$ of Claim~4, since in this case we have
$$\dim(\Phi_{1}(S_1))=\dim(\Phi_1(S_{1,2}))=\dim(\Phi_1(S_2))=1.$$
Thus, we have proved Claim~4, which concludes the proof of the theorem.
\end{proof}
From the proof of the previous theorem, especially from Claim~1, and by approximation of arbitrary zonoids by $n$-dimensional zonotopes, we deduce the following result.
\begin{corollary}\label{cor:dimfi}
Let $n\geq 2$ and let $\Phi\in\MVal$ satisfy (VC).
\begin{enumerate}
\item[\emph{(i)}] If $\dim(\Phi(\{0\}))=\dim L_0=n-1$, then $\dim L_n=1$ and $\dim(L_0+L_n)=n$.
\item[\emph{(ii)}] If $\dim L_0=n-1$ and $Z$ is a zonoid, then $\dim(\Phi_j(Z))=0$ for every $j=1,\dots,n-1$.
\item[\emph{(iii)}] If $\dim L_0=0$ and $Z$ is a zonoid, then $\dim(\Phi_j(Z))=0$
for every $j=2,\dots,n$. In particular, $L_n$ is a point.
\end{enumerate}
\end{corollary}
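Both alternatives in Corollary~\ref{cor:dimfi} are realized by the examples already mentioned in Section~\ref{sec:volume_constraints}: for $\Phi(K)=L+V_n(K)S$, with $L$ an $(n-1)$-dimensional convex body and $S$ a segment with $\dim(L+S)=n$, we have $L_0=L$, $L_n=S$, and $f_j\equiv 0$ for $1\leq j\leq n-1$, so that (i) and (ii) hold; for the difference body $\Phi=D$, we have $L_0=\{0\}$, $f_1(K,\cdot)=h(DK,\cdot)$, and $L_n=\{0\}$, in accordance with (iii).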
\section{On the McMullen decomposition of valuations satisfying (VC)}\label{n-1}
In this section we investigate more deeply the properties of the homogeneous functions appearing in the McMullen decomposition \eqref{h-decomp} of Minkowski valuations satisfying (VC).
The next two lemmas recall standard facts which will be used often in what follows.
\begin{lemma}\label{standard facts} Let $n\geq 2$ and $j\in\{0,1,\dots,n\}$.
\begin{enumerate}
\item[\emph{(i)}]
If $\mu\in\Val_j$ vanishes on $j$-dimensional simplices, then $\mu$ vanishes on every $j$-dimensional convex body.
\item[\emph{(ii)}] If $\Psi\in\MVal_j$ satisfies that $\dim(\Psi(T))=0$ for every $j$-dimensional simplex $T$, then $\dim(\Psi(K))=0$ for every $j$-dimensional convex body $K$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof of both statements follows by standard approximation arguments. Indeed, every convex body can be approximated in the Hausdorff distance by polytopes \cite[Theorem 1.8.16]{schneider.book14}. Moreover, every polytope can be decomposed into a finite number of simplices (a simplicial
decomposition) whose pairwise intersections are either empty or lower-dimensional simplices (see e.g.\ \cite[Proof of Theorem 6.3.1]{schneider.book14}). The statements
follow by using the valuation property and the continuity of the valuation.
\end{proof}
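As a minimal instance of the argument in (i), if $P=T_1\cup T_2$ is a $j$-dimensional polytope decomposed into two $j$-dimensional simplices with $\dim(T_1\cap T_2)\leq j-1$, then the valuation property gives
$$\mu(P)=\mu(T_1)+\mu(T_2)-\mu(T_1\cap T_2)=0,$$
since $\mu$ vanishes on $j$-dimensional simplices and, being homogeneous of degree $j$, also on convex bodies of dimension smaller than $j$ (\cite[Corollary 6.3.2]{schneider.book14}).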
\begin{lemma}\label{standard fact ii}
Let $n\geq 2$ and let $T$ be a $j$-dimensional simplex, $2\leq j\leq n$. Then there exists a convex polytope $P$ such that $T\cup P$ is a convex zonotope and $T\cap P$ has
dimension $j-1$.
\end{lemma}
\begin{proof}
Let $T$ be a $j$-dimensional simplex, $2\leq j\leq n$. Without loss of generality we may assume that one vertex of $T$ is the origin. Let $g\in \GL(n)$ be such that $g(T)$ is the standard $j$-dimensional
simplex of the subspace
$
H=\{(x_1,\dots,x_n)\,:\,x_{j+1}=\dots=x_n=0\}.
$
Let $C_j$ be the standard unit cube in $H$. The set $C_j\setminus g(T)$ is convex (as the intersection of $C_j$ with an open half-space of $H$) and its closure is a polytope $P'$. Then $C_j=g(T) \cup P'$ and $\dim(g(T)\cap P')=j-1$, which shows that the desired statement holds for $g(T)$.
Applying now $g^{-1}$, and noting that linear images of zonotopes are zonotopes, we obtain the statement for $T$ with $P=g^{-1}(P')$.
\end{proof}
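For instance, in the plane ($j=2$), if $T$ is the triangle with vertices $(0,0)$, $(1,0)$, and $(0,1)$, one may take $P$ to be the triangle with vertices $(1,0)$, $(0,1)$, and $(1,1)$: then $T\cup P=[0,1]^2$ is a zonotope and $T\cap P$ is the segment joining $(1,0)$ and $(0,1)$, which has dimension $1=j-1$.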
\begin{lemma}\label{f_i=0 esencia}
Let $n\geq 2$, $\Phi\in\MVal$, and $1\leq j\leq n-1$.
Suppose that the mapping $u\mapsto f_j(Z,u)$, associated to $\Phi$ as defined in \eqref{h-decomp}, is a linear function for every zonotope $Z$ in $\mathcal{K}^n$. Then:
\begin{itemize}\itemsep9pt
\item[(i)] $u\mapsto f_j(K,u)$ is a linear function for every $K\in\mathcal{K}^n$ with $\dim K=j$;
\item[(ii)] if $u\mapsto f_j(K,u)$ is a support function for every $K\in\mathcal{K}^n$ with $\dim K=j+1$, then $u\mapsto f_j(K,u)$ is a linear function for every $K\in\mathcal{K}^n$ with
$\dim K=j+1$.
\end{itemize}
\end{lemma}
\begin{proof}
Let $\Phi\in\MVal$, let $1\leq j\leq n-1$, and let $u\mapsto f_j(Z,u)$ be a linear function for every zonotope $Z$ in $\mathcal{K}^n$.
We note that, by Theorem~\ref{suppZonoid}, $f_j(Z,\cdot)$ is a support function for every zonotope $Z$ and hence we can
write $\Phi_j(Z)$ for the convex body whose support function is $f_j(Z,\cdot)$. Moreover, $\dim \Phi_j(Z)=0$, since a convex body with linear support function is a point.
\begin{enumerate}\itemsep9pt
\item[(i)] If $j=1$, every $1$-dimensional convex body is a segment, hence a zonotope, and the statement follows directly from the hypothesis. Let then $j\geq 2$, let $T$ be a $j$-dimensional simplex, and let $P$ be a polytope given by Lemma~\ref{standard fact ii}. Then $T\cup P$ is a zonotope and, by hypothesis, $\dim(\Phi_j(T\cup P))=0$.
Furthermore, since $\dim (T\cap P)=j-1$, Lemma~\ref{r: facts on f_js}(iii) yields $f_j(T\cap P,\cdot)\equiv 0$. Hence, $f_j(T\cap P,\cdot)$ is the support function of $\Phi_j(T\cap P)=\{0\}$.
Moreover, by Lemma~\ref{r: facts on f_js}(iv), for every $j$-dimensional convex body $K$, $u\mapsto f_j(K,u)$ is the support function of a convex body $\Phi_j(K)$. Thus, if $f_j(T,\cdot)$ and $f_j(P,\cdot)$ are the support functions of $\Phi_j(T)$ and $\Phi_j(P)$, respectively, the valuation property yields
\begin{equation*}\Phi_j(T\cup P)=\Phi_j(T\cup P)+\Phi_j(T\cap P)=\Phi_j(T)+\Phi_j(P).\end{equation*}
Hence, $\dim(\Phi_j(T))=0$ for every simplex $T$ of dimension $j$.
The statement follows by Lemma~\ref{standard facts}(ii).
\item[(ii)] Similarly to the argument in (i), we let $T$ be a $(j+1)$-dimensional simplex and let $P$ be given by Lemma~\ref{standard fact ii}. Since $\dim(T\cap P)=j$, from the previous item we have that $\dim(\Phi_j(T\cap P))=0$.
On the other hand, since $T\cup P$ is a zonotope, by hypothesis we have $\dim(\Phi_j(T\cup P))=0$.
Using now that $f_{j}(K,\cdot)$ is a support function for any $K\in\mathcal{K}^n$ with $\dim K=j+1$, we obtain
$$
\dim(\Phi_j(T\cup P)+\Phi_j(T\cap P))=\dim(\Phi_j(T)+\Phi_j(P))=0.
$$
Therefore, $\dim(\Phi _j(T))=0$, and Lemma~\ref{standard facts}(ii) yields the result.
\end{enumerate}
\end{proof}
\begin{lemma}\label{513}Let $n\geq 2$ and let $\Phi\in\MVal$ satisfy (VC) and $\dim(\Phi(\{0\}))=n-1$. Then:
\begin{enumerate}
\item[\emph{(i)}]
for every $1\leq j\leq n-1$ and $K\in\mathcal{K}^n$ with $\dim K=j$, $f_j(K,\cdot)$ is a linear function;
\item[\emph{(ii)}] if, furthermore, $\Phi\in\MVal^s$, then $f_j(K,\cdot)\equiv 0$ for every $1\leq j\leq n-1$ and $K\in\mathcal{K}^n$ with $\dim K=j$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\Phi$ be as in the statement.
\begin{enumerate}
\item[(i)]
By Corollary~\ref{cor:dimfi}(ii), each function $u\mapsto f_j(Z,u)$, $1\leq j\leq n-1$, associated to $\Phi$ is the support function of a point for every zonoid $Z$, and hence it is a linear function. Lemma~\ref{f_i=0 esencia}(i) yields the statement.
\item[(ii)] Let now $\Phi$ also be an $o$-symmetrization. By Lemma~\ref{even}, the function $u\mapsto f_j(K,u)$ is even for every $1\leq j\leq n-1$ and $K\in\mathcal{K}^n$. On the other hand, by item (i), we know that $u\mapsto f_j(K,u)$ is a linear function for every $K\in\mathcal{K}^n$ with $\dim K=j$. Both conditions together imply that $f_j(K,u)=0$ for every $u\in\mathbb{R}^n$ and $K\in\mathcal{K}^n$ with $\dim K=j$.
\end{enumerate}
\end{proof}
\begin{lemma}\label{l: fi(K,u)=0}
Let $n\geq 2$ and let $\Phi\in\MVal^s$ satisfy (VC) and
$
\dim(\Phi(\{0\}))=n-1.
$
Then $f_{j}(K,u)=0$ for every $u\in\mathbb{R}^n$, $1\leq j\leq n-1$, and $K\in\mathcal{K}^n$.
\end{lemma}
\begin{proof}
Corollary~\ref{cor:dimfi}(i) together with Lemma~\ref{even} yields $\mathcal{P}him L_n=1$ and $L_n\in\mathcal{K}^n_s$.
Let $\{e_1,\dots,e_n\}$ be a basis of $\mathbb{R}^n$
such that $L_n=[-e_1,e_1]$ and denote by $H$ the $(n-1)$-dimensional subspace orthogonal to $\spa\{e_1\}$.
We divide the proof in three steps.
\noindent{\bf Step 1.} \emph{Let $K\in\mathcal{K}^n$, $1\mathrm{length}eq j\mathrm{length}eq n-1$, $u',v'\in H$, and $u=ae_1+u'$ and $v=be_1+v'$ with $ab\geq 0$.
Then
$$f_j(K,u+v)=f_j(K,u)+f_j(K,v).$$ Moreover, $f_j(K,w)=0$ for every $w\in H$.}
We prove the claim by backward induction. First we prove it for $j=n-1$.
For simplicity, we write $f(K,u)$ instead of $f_{n-1}(K,u)$. For every $a\in\mathbb{R}^n$,
\begin{equation}\mathrm{length}abel{first_l}
h(L_n,ae_1+u')=|\mathrm{length}angle e_1,ae_1+u'\rangle|=|a|\|e_1\|=h(L_n,ae_1)+h(L_n,u').
\end{equation}
Since $\mathcal{P}hi(K)\in\mathcal{K}^n$ for every convex body $K$, for a fixed $\mathrm{length}ambda>0$ we can write, using \eqref{h-decomp} and \eqref{first_l},
\begin{align*}
0&\geq h(\mathcal{P}hi (\mathrm{length}ambda K),ae_1+v')-h(\mathcal{P}hi(\mathrm{length}ambda K),ae_1)-h(\mathcal{P}hi(\mathrm{length}ambda K),v')
\\&=\mathrm{length}ambda^n(h(L_n,ae_1+v')-h(L_n,ae_1)-h(L_n,v'))
\\&\quad+\mathrm{length}ambda^{n-1}(f(K,ae_1+v')-f(K,ae_1)-f(K,v'))+O(\mathrm{length}ambda^{n-2})
\\&=\mathrm{length}ambda^{n-1}(f(K,ae_1+v')-f(K,ae_1)-f(K,v'))+O(\mathrm{length}ambda^{n-2}).
\end{align*}
Thus, as $\mathrm{length}ambda\to\infty$, we obtain
\begin{equation}\mathrm{length}abel{ineq1}
f(K,ae_1+v')\mathrm{length}eq f(K,ae_1)+f(K,v')
\end{equation}
for every convex body $K\in\mathcal{K}^n$.
Since $K\mapsto f(K,u)$ is a continuous, translation invariant, and $(n-1)$-homogeneous real-valued valuation, and, by Lemma~\ref{513}(ii), it vanishes when restricted to
$(n-1)$-dimensional convex bodies, Lemma~\ref{th_lemma}(i) yields
\begin{equation}\mathrm{length}abel{impar}
f(K,u)+f(-K,u)=0,\quad\forall K\in\mathcal{K}^n,\,u\in\mathbb{R}^n.
\end{equation}
Combining this fact with \eqref{ineq1}, in which $K$ is replaced by $-K$, we obtain
$$f(K,a e_1+v')=f(K,a e_1)+f(K,v') \text{ for every } K\in\mathcal{K}^n, a\in\mathbb{R}, v'\in H.$$
From the fact that $h(L_n,u')=0$ we also obtain
$$h(L_n,u+v)=h(L_n,(a+b)e_1)=|a+b|\|e_1\|$$
and
$$h(L_n,u)+h(L_n,v)=h(L_n,ae_1)+h(L_n,be_1)=(|a|+|b|)\|e_1\|,$$
which yields, for $a,b\in\mathbb{R}$ with the same sign,
\begin{equation}\mathrm{length}abel{second_l}
h(L_n,u+v)=h(L_n,u)+h(L_n,v).
\end{equation}
Using, as above, that $\mathcal{P}hi(\mathrm{length}ambda K)\in\mathcal{K}^n$ for every $K\in\mathcal{K}^n$ and $\mathrm{length}ambda>0$ together with \eqref{second_l} and \eqref{impar},
we obtain
$$f(K,u+v)=f(K,u)+f(K,v).$$
If we apply this equality to $u=e_1+w$ and $v=e_1-w$, $w\in H$, using \eqref{first_l} and that, by Lemma \ref{even}, $f$ is even,
we have
\begin{align*}
2f(K,e_1)&=f(K,u+v)
\\&=f(K,u)+f(K,v)
\\&=2f(K,e_1)+f(K,w)+f(K,-w)
\\&=2(f(K,e_1)+f(K,w)),
\end{align*}
which implies $f(K,w)=0$ for every convex body $K\in\mathcal{K}^n$ and $w\in H$.
Hence, we have proved Step~1 for $j=n-1$.
In order to proceed with the (backward) induction, we assume that the claim holds for $j> j_0$ and prove it for $j=j_0$. By the induction hypothesis, and the McMullen decomposition in \eqref{h-decomp}, we can argue for $f_{j_0}$ as we have just done with $f$ to prove the statement.
\noindent{\bf Step 2.} \emph{For every $K\in\mathcal{K}^n$ and $1\mathrm{length}eq j\mathrm{length}eq n-1$, either the function $u\mapsto f_j(K,u)$ or $u\mapsto f_j(-K,u)$ is a support function and
\begin{equation}\mathrm{length}abel{aggiunta1}
f_j(K,u)=(-1)^{\varepsilon_j(K)}\alpha_j(K)h([-e_1,e_1],u)=(-1)^{\varepsilon_j(K)}\alpha_j(K)|\mathrm{length}angle e_1,u\rangle|
\end{equation}
where $\varepsilon_j(K)\in\{0,1\}$ and $\alpha_j(K)\geq 0$.}
Let $u=ae_1+u',v=be_1+v'\in\mathbb{R}^n$ with $a,b\in\mathbb{R}$ and $u',v'\in H$. Step~1 and the evenness of $u\mapsto f_j(K,u)$ yield
\[
\begin{split}
\quad f_j(K,u+v)&=f_j(K,(a+b)e_1)+f_j(K,u'+v')\\&=|a+b|f_j(K,\mathrm{sign}(a+b)e_1)=|a+b|f_j(K,e_1) \quad\text{and}
\\ f_j(K,u)+&f_j(K,v)=(|a|+|b|)f_j(K,e_1).
\end{split}
\]
Let $K\in\mathcal{K}^n$ be such that $f_j(K,e_1)\geq 0$. As a consequence of the previous equalities,
$$
f_j(K,u+v)\mathrm{length}eq f_j(K,u)+f_j(K,v),\quad\forall\, u,v\in\mathbb{R}^n.
$$
This means that $u\mapsto f_j(K,u)$ is a support function. If $f_j(K,e_1)<0$, we can use
Lemmas~\ref{513}(ii) and \ref{th_lemma}(i) to obtain that
$f_j(-K,e_1)> 0$. Now, applying the
previous argument to $-K$, we get that $u\mapsto f_j(-K,u)$ is a support function.
Let $1\mathrm{length}eq j\mathrm{length}eq n-1$ and let $K\in\mathcal{K}^n$ be such that $u\mapsto f_j(K,u)$ is a support function. Let $\mathcal{P}hi_j(K)\in\mathcal{K}^n$ be such that $f_j(K,\cdot)=h(\mathcal{P}hi_j(K),\cdot)$. From Step~1, $\mathcal{P}hi_j(K)$ lies on the line orthogonal to $H$ passing through the origin. Since $\mathcal{P}hi_j(K)\in\mathcal{K}^n_s$ is a centered convex body, it is a centered segment on the line
spanned by $e_1$. Thus, there exists $\alpha_j(K)\geq 0$ (depending on $K$ and $j$) such that $\mathcal{P}hi_j(K)=\alpha_j(K)[-e_1,e_1]$.
Using $f_j(K,u)=-f_j(-K,u)$, we get \eqref{aggiunta1}.
\noindent{\bf Step 3.} \emph{For every $K\in\mathcal{K}^n$ and $j\in\{1,\dots, n-1\}$, $f_j(K,\cdot)\equiv 0$.}
We prove it by induction on $j$.
Let $j=1$. Let $K\in\mathcal{K}^n$ be so that $\mathcal{P}him K\geq 1$ and $f_1(K,\cdot)$ is a support function. By Corollary~\ref{mixedZero}(i), the mixed volume $V(L_0[n-1],f_1(K,\cdot))$ vanishes since $V(L_0[n-1],f_1(K,\cdot))=v_1^{\mathcal{P}hi}(K)$, that is, $V(L_0[n-1],f_1(K,\cdot))$ is the coefficient of the 1-homogeneous term of the polynomial in \eqref{vi di}. On the other hand,
$$
V(L_0[n-1],f_1(K,\cdot))=(-1)^{\varepsilon(K)}\alpha_1(K)V(L_0[n-1],[-e_1,e_1]),
$$
which, by Corollary~\ref{cor:dimfi}(i), vanishes if and only if $\alpha_1(K)=0$. If $\mathcal{P}him K=0$, then $f_1(K,u)=0$, for every $u\in\mathbb{R}^n$, by Lemma~\ref{r: facts on f_js}(iii). Hence, we have $f_1(K,\cdot)\equiv 0$ for every $K\in\mathcal{K}^n$.
Assume that $f_j(K,\cdot)$ vanishes for every $j<j_0\mathrm{length}e n-1$ and for every $K\in\mathcal{K}^n$. Let $K\in\mathcal{K}^n$ have $\mathcal{P}him K\geq j_0$ and consider
$v_{j_0}^{\mathcal{P}hi}(K)$, i.e., the coefficient of degree $j_0$ in the polynomial expansion \eqref{vi di}.
Remark~\ref{f_i in v_j} yields that the only possible entries of each mixed volume summand of $v_{j_0}^{\mathcal{P}hi}(K)$ are $f_j(K,\cdot)$ with $0\mathrm{length}eq j\mathrm{length}eq j_0$. Therefore,
by the induction hypothesis, $v_{j_0}^{\mathcal{P}hi}(K)$ is given only by the summand $V(L_0[n-1],f_{j_0}(K,\cdot))$. As $j_0<n$, by Corollary~\ref{mixedZero}(i),
$$
v_{j_0}^{\mathcal{P}hi}(K)=V(L_0[n-1],f_{j_0}(K,\cdot))=0.
$$
The latter is true if and only if $\alpha_{j_0}(K)=0$, by a similar argument as for $j=1$. Hence, the statement of Step~3 and so, also Lemma~\ref{l: fi(K,u)=0} are proved.
\end{proof}
\section{Proof of Theorems \ref{teo} and \ref{cor}}\label{1}
We start with the following theorem, in which we give an explicit expression for the image of an operator $\Phi\in\MVal$ satisfying (VC) and such that $\dim(\Phi(\{0\}))=0$.
\begin{theorem}\label{di_1Z=n}Let $n\geq 2$. An operator $\Phi\in\MVal$ satisfies (VC) and $\dim(\Phi(\{0\}))=0$ if and only if
$$
\Phi(K)=p+\Phi_1(K)+p_2(K)+\dots+p_{n-1}(K)+V_n(K)q,\quad\forall K\in\mathcal{K}^n,
$$
with $p,q\in\mathbb{R}^n$, $p_j:\mathcal{K}^n\longrightarrow\mathbb{R}^n$, $2\leq j\leq n-1$, continuous, translation invariant, and $j$-homogeneous valuations, and
$\Phi_1\in\MVal_1$ satisfying (VC).
\end{theorem}
\begin{proof}
Assume first that $\Phi\in\MVal$ satisfies the hypotheses of the statement.
We show that $f_j(K,\cdot)$ is a linear function for every $K\in\mathcal{K}^n$ and $2\leq j\leq n-1$, by backward induction on $j$.
By Corollary~\ref{cor:dimfi}(iii), if $K$ is a zonoid, then $f_j(K,\cdot)$ is a linear function for every $2\leq j\leq n$. Thus, Lemma~\ref{f_i=0 esencia}(i) ensures that if $K$ is a convex body with $\dim K=j$, $2\leq j\leq n-1$, then $f_j(K,\cdot)$ is linear. Furthermore, since
$h(L_n,\cdot)$ is a linear function, Lemma~\ref{r: facts on f_js}(vi) and Lemma~\ref{f_i=0 esencia}(ii) yield $\dim(\Phi_{n-1}(K))=0$ for every $K\in\mathcal{K}^n$ of dimension $n$, that is, $f_{n-1}(K,\cdot)$ in the McMullen decomposition of $\Phi(K)$ is also a linear function.
We now proceed with the backward induction argument. Assume that $f_{j}(K,\cdot)$ is a linear function for every $j_0< j\leq n-1$ and $K\in\mathcal{K}^n$. By Lemma~\ref{r: facts on f_js}(vi) and Lemma~\ref{f_i=0 esencia}(ii),
$f_{j_0}(K,\cdot)$ is linear for every $K\in \mathcal{K}^n$ with $\dim K=j_0+1$. Theorem~\ref{th_lemma} then yields that $f_{j_0}(K,\cdot)$ is linear for every $K\in\mathcal{K}^n$, and hence $f_j(K,\cdot)$ is linear for every $K\in\mathcal{K}^n$ and $2\leq j\leq n-1$. (Notice that, in general, $f_1(K,\cdot)$ need not be linear, since it need not be linear for zonotopes.)
Now, again by Lemma~\ref{r: facts on f_js}(vi), we have that $u\mapsto f_1(K,u)$ is a support function for every $K\in\mathcal{K}^n$. We denote by $\Phi_1(K)$ the convex body such that $h(\Phi_1(K),\cdot)=f_1(K,\cdot)$. The map $K\mapsto\Phi_1(K)$ is a continuous and translation invariant Minkowski valuation that satisfies $V_n(\Phi(K))=V_n(\Phi_1(K))$. Therefore, $\Phi_1\in\MVal_1$ satisfies~(VC).
The converse is clear.
\end{proof}
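A simple operator of the form described in Theorem~\ref{di_1Z=n} is, for instance, $\Phi(K)=DK+V_n(K)q$ for a fixed $q\in\mathbb{R}^n$: here $\Phi_1=D$ satisfies (VC) by (RS), the term $V_n(K)q$ only translates the image, and $\Phi(\{0\})=\{0\}$ is a point.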
Next we give the explicit expression for the image of an operator $\Phi\in\MVal$ satisfying (VC) and such that $\dim(\Phi(\{0\}))=n-1$.
\begin{theorem}\label{di_1Z=0_noOSym}Let $n\geq 2$. An operator $\Phi\in\MVal$ satisfies (VC) and $\dim(\Phi(\{0\}))=n-1$
if and only if there exist $L\in \mathcal{K}^n$ with $\dim L=n-1$ and
a segment $S$ with $\dim (L+S)=n$ such that
\begin{equation}\label{di_K}
\Phi(K)=L+p_1(K)+\dots+p_{n-1}(K)+V_n(K)S,\quad\forall K\in\mathcal{K}^n,
\end{equation}
where $p_j:\mathcal{K}^n\longrightarrow\mathbb{R}^n$ is a continuous, translation invariant valuation, homogeneous of degree $j$, $1\leq j\leq n-1$.
\end{theorem}
In order to prove Theorem \ref{di_1Z=0_noOSym}, we will first prove its symmetric version, namely, the following result.
\begin{theorem}\label{di_1Z=0}Let $n\geq 2$. An operator $\Phi\in\MVal^s$ satisfies (VC) and $\dim(\Phi(\{0\}))=n-1$ if and only if there exist $L\in \mathcal{K}^n_s$ with $\dim L=n-1$ and a
centered segment $S$ with $\dim(L+S)=n$ such that
$$
\Phi(K)=L+V_n(K)S,\quad\forall K\in\mathcal{K}^n.
$$
\end{theorem}
\begin{proof}
By Lemma~\ref{l: fi(K,u)=0} and the McMullen decomposition \eqref{h-decomp}, there exist
$L_0, L_n\in\mathcal{K}^n$ such that
$$
\Phi(K)=L_0+V_n(K)L_n,\quad\forall\, K\in\mathcal{K}^n.
$$
On the other hand, by Corollary~\ref{cor:dimfi}(i) we have that $\dim L_0=n-1$,
$\dim L_n=1$, and $\dim(L_0+L_n)=n$.
The converse clearly holds, since $K\mapsto L+V_n(K)S$, with $L$ and $S$ as assumed, satisfies all the conditions (cf.\ Section~\ref{sec:volume_constraints}).
\end{proof}
Now we proceed with the proof of Theorem~\ref{di_1Z=0_noOSym}.
\begin{proof}[Proof of Theorem~\ref{di_1Z=0_noOSym}] Let $\mathcal{P}hi\in\MVal$ satisfy (VC) and $\mathcal{P}him(\mathcal{P}hi(\{0\}))=n-1$. Define the operator $\mathcal{P}hib:\mathcal{K}^n\mathrm{length}ongrightarrow\mathcal{K}^n_s$ by
$$
\mathcal{P}hib(K):=D(\mathcal{P}hi(K)).
$$
It is clear that $\mathcal{P}hib$ is a continuous and translation invariant Minkowski valuation which satisfies (VC) as a consequence of (RS) and the assumption that $\mathcal{P}hi$ satisfies (VC). Moreover, the image of $K$ under $\mathcal{P}hib$ is an $o$-symmetric convex body, since the
difference body operator has this property. Notice also that $\mathcal{P}him(\mathcal{P}hib(\{0\}))=\mathcal{P}him(D(\mathcal{P}hi(\{0\})))=\mathcal{P}him(\mathcal{P}hi(\{0\}))=n-1$, since the difference body operator preserves the dimension of any convex body.
Hence, we can apply Theorem~\ref{di_1Z=0} to $\mathcal{P}hib$ and obtain the existence of an $(n-1)$-dimensional $o$-symmetric convex
body $L$ and a centered segment $S$ such that
\begin{equation}\mathrm{length}abel{dii_L_S}
\mathcal{P}hib(K)=L+V_n(K)S,\quad\forall K\in\mathcal{K}^n.
\end{equation}
If we write it in terms of the support function, and for a $\mathrm{length}ambda>0$, we have
\begin{equation}\mathrm{length}abel{sup_psi}
h(\mathcal{P}hib(\mathrm{length}ambda K),u)=h(L,u)+\mathrm{length}ambda^nV_n(K)h(S,u).
\end{equation}
The support function of $\mathcal{P}hib(K)$ can be also written in terms of the support function of $\mathcal{P}hi(K)$. By the homogeneity of each summand in the McMullen
decomposition of $\mathcal{P}hi$ in \eqref{h-decomp}, we have
\begin{align}\mathrm{length}abel{sup_psi2}\nonumber
h(\mathcal{P}hib(\mathrm{length}ambda K),u)&=h(\mathcal{P}hi(\mathrm{length}ambda K),u)+h(\mathcal{P}hi(\mathrm{length}ambda K),-u)\\&=\,h(L_0,u)+h(L_0,-u)+\mathrm{length}ambda(f_1(K,u)+f_1(K,-u))+\dots+\nonumber\\
&\,\,\,\,\,\,+\mathrm{length}ambda^{n-1}(f_{n-1}(K,u)+f_{n-1}(K,-u))+\mathrm{length}ambda^nV_n(K)(h(L_n,u)+h(L_n,-u)),
\end{align}
for every $u\in\mathbb{R}^n$. Comparing the coefficients of the polynomials in \eqref{sup_psi} and \eqref{sup_psi2}, we obtain that $L_n$ is a segment in the same
direction as $S$ and that $L_0$ is an $(n-1)$-dimensional convex body lying in a parallel hyperplane to $\spa L$. Moreover,
\begin{equation}\mathrm{length}abel{f_i=0}
f_j(K,u)+f_j(K,-u)=0,\quad\forall\, 1\mathrm{length}eq j\mathrm{length}eq n-1,\,u\in\mathbb{R}^n,\,K\in\mathcal{K}^n.
\end{equation}
Our aim is to show
\begin{equation}\mathrm{length}abel{linear_K}
f_j(K,u+v)=f_j(K,u)+f_j(K,v),\quad \forall\, 1\mathrm{length}eq j\mathrm{length}eq n-1, u,v\in\mathbb{R}^n, K\in\mathcal{K}^n.
\end{equation}
Once it is proved, we have that $\mathcal{P}hi(K)$ is given as in~\eqref{di_K}, since the functions $u\mapsto f_j(K,u)$ are linear functions, i.e., $f_j(K,u)=h(\{p_j(K)\},u)$, as we want to show.
To prove \eqref{linear_K}, we use the following two claims.
\noindent{\bf Claim 1.} \emph{Let $1\mathrm{length}eq j\mathrm{length}eq n-1$. Let $T$ be a $(j+1)$-dimensional simplex and let $P$ be a polytope given by Lemma~\ref{standard fact ii}. Then
$u\mapsto f_{j}(T,u)+f_{j}(P,u)$ is a linear function in $\mathbb{R}^n$.}
By Lemma~\ref{standard fact ii}, $T\cup P$ is a zonotope. Thus, by Corollary~\ref{cor:dimfi}(ii), $\mathcal{P}him(\mathcal{P}hi_j(T\cup P))=0$, that is,
$u\mapsto f_j(T\cup P,u)=h(\mathcal{P}hi_j(T\cup P),u)$ is a linear function. By Lemma~\ref{r: facts on f_js}(iv), $f_j(T\cap P,\cdot)$ is a support function. Now Lemma~\ref{513} yields $\mathcal{P}him(\mathcal{P}hi_j(T\cap P))=0$. Hence, as $K\mapsto f_j(K,u)$ is a valuation for every $u\in\mathbb{R}^n$,
$$
f_j(T,u)+f_j(P,u)=h(\mathcal{P}hi_j(T\cup P),u)+h(\mathcal{P}hi_j(T\cap P),u)=\mathrm{length}angle q_{_{T,P}},u\rangle,
$$
for some $q_{_{T,P}}\in\mathbb{R}^n$. In other words, $u\mapsto f_j(T,u)+f_j(P,u)$ is a linear function.
\noindent{\bf Claim 2.}
\emph{Let $\{e_1,\dots,e_n\}$ be a basis of $\mathbb{R}^n$ such that $S=[-e_1,e_1]$ and denote by $H$ the hyperplane orthogonal to $S$.
Then, for every $K\in\mathcal{K}^n$, $u',v'\in H$, and $1\mathrm{length}eq j\mathrm{length}eq n-1$,
\begin{equation}\mathrm{length}abel{eq_K}
f_j(K,u'+v')=f_j(K,u')+f_j(K,v')
\end{equation}
and
\begin{equation}\mathrm{length}abel{eq_K_a}
f_j(K,ae_1+u')=f_j(K,ae_1)+f_j(K,u'), \quad\forall\,a\in\mathbb{R}.
\end{equation}}
We prove \eqref{eq_K} by backward induction on $j$. Assume first $j=n-1$.
We argue as in Step~1 of Lemma~\ref{l: fi(K,u)=0}. Since $\mathcal{P}hi(K)\in\mathcal{K}^n$ for every convex body $K$, for $\mathrm{length}ambda>0$, we have
\begin{align*}
0&\geq h(\mathcal{P}hi (\mathrm{length}ambda K),u'+v')-h(\mathcal{P}hi(\mathrm{length}ambda K),u')-h(\mathcal{P}hi(\mathrm{length}ambda K),v')
\\&=\mathrm{length}ambda^{n-1}(f_{n-1}(K,u'+v')-f_{n-1}(K,u')-f_{n-1}(K,v'))+O(\mathrm{length}ambda^{n-2}).
\end{align*}
As $\mathrm{length}ambda\to\infty$, we obtain
\begin{equation}\mathrm{length}abel{ineq_n-1}
f_{n-1}(K,u'+v')\mathrm{length}eq f_{n-1}(K,u')+f_{n-1}(K,v'),\quad \forall\, K\in\mathcal{K}^n,u',v'\in H.
\end{equation}
In order to obtain equality in \eqref{ineq_n-1}, we first apply \eqref{ineq_n-1} to a simplex $T$ and a polytope $P$ satisfying the condition of the previous claim with $j=n-1$ and add both expressions, to obtain
\begin{equation}\mathrm{length}abel{ineq_eq}f_{n-1}(T,u'+v')+f_{n-1}(P,u'+v')\mathrm{length}eq f_{n-1}(T,u')+f_{n-1}(T,v')+ f_{n-1}(P,u')+f_{n-1}(P,v').
\end{equation}
Now Claim~1 yields that both sides of the above inequality are the same linear function. Hence, we have equality in~\eqref{ineq_eq}, which together with inequality~\eqref{ineq_n-1}
yields
$$f_{n-1}(T,u'+v')= f_{n-1}(T,u')+f_{n-1}(T,v')$$
for every $(n-1)$-dimensional simplex $T$. As $K\mapsto f_{n-1}(K,u'+v')-f_{n-1}(K,u')-f_{n-1}(K,v')$ is a continuous and translation invariant real-valued valuation for every $u'\in H$, Lemma~\ref{standard facts}(i) yields \eqref{eq_K} for $j=n-1$.
Assuming next that \eqref{eq_K} holds for every $j> j_0$, we show it for $j=j_0$. In this case, we obtain, similarly to the previous case,
$f_{j_0}(K,u'+v')\mathrm{length}eq f_{j_0}(K,u')+f_{j_0}(K,v')$ for every $K\in\mathcal{K}^n$ and $u',v'\in H$. Hence, applying again Claim~1, now for $j=j_0$, and Lemma~\ref{standard facts}(i), we get \eqref{eq_K} for every $1\mathrm{length}eq j\mathrm{length}eq n-1$.
The proof of \eqref{eq_K_a} follows, similarly, by a backward induction argument on $j$. Indeed, since we have proven that $L_n$ is a segment in the same direction as $S$ (see~\eqref{dii_L_S}), we have $h(L_n,ae_1+v)=h(L_n,ae_1)+h(L_n,v)$. Since $\mathcal{P}hi(K)$ is a convex body,
$$f_{n-1}(K,ae_1+u')\mathrm{length}eq f_{n-1}(K,ae_1)+f_{n-1}(K,u')$$ for every $K\in\mathcal{K}^n$, $u'\in H$ and $a\in\mathbb{R}$ (arguing as we did for \eqref{ineq_n-1}). Now, exactly in the same way we performed the proof of \eqref{eq_K}, Claim~1 and Lemma~\ref{standard facts}(i) ensure \eqref{eq_K_a}. Thus, Claim~2 is proved.
Now we proceed to prove \eqref{linear_K}.
Let $u=ae_1+u'$ and $v=be_1+v'$, $a,b\in\mathbb{R}$, $u',v'\in H$. We compute, by using \eqref{eq_K_a},
$$
f_j(K,u+v)=f_j(K,(a+b)e_1+u'+v')=|a+b|f_j(K,\mathrm{sgn}(a+b)e_1)+f_j(K,u'+v')
$$
and
$$
f_j(K,u)+f_j(K,v)=|a|f_j(K,\mathrm{sgn}(a)e_1)+f_j(K,u')+|b|f_j(K,\mathrm{sgn}(b)e_1)+f_j(K,v').
$$
Assume that $a+b>0$, $a>0$, and $b<0$. By using the above equations, \eqref{eq_K}, and \eqref{f_i=0}, we get
\begin{align*}
f_j(K,u+v)&=(a-|b|)f_j(K,e_1)+f_j(K,u'+v')
\\&=af_j(K,e_1)-|b|f_j(K,e_1)+f_j(K,u')+f_j(K,v')
\\&=af_j(K,e_1)+f_j(K,u')+|b|f_j(K,-e_1)+f_j(K,v')
\\&=f_j(K,u)+f_j(K,v),
\end{align*}
for every $1\leq j\leq n-1$ and $K\in\mathcal{K}^n$. The equality for the remaining cases (different signs of $a,b$) is obtained in a similar way.
Hence, \eqref{linear_K} is proved.
The converse is clear, since the operator $K\mapsto L+p_1(K)+\dots+p_{n-1}(K)+V_n(K)S$ satisfies all the stated conditions (cf. Section~\ref{sec:volume_constraints}).
\end{proof}
\begin{proof}[Proof of Theorems \ref{teo} and \ref{cor}]
First we note that Theorem~\ref{cor} follows from Theorem~\ref{teo} and the assumption that $\Phi$ is an $o$-symmetrization, since the only point which is $o$-symmetric is the origin.
The proof of Theorem~\ref{teo} follows from Theorem~\ref{L0Ln}, together with Theorem~\ref{di_1Z=n}, for the case (i), and Theorem~\ref{di_1Z=0_noOSym}, for the case (ii).
\end{proof}
We note that the operators in Theorem~\ref{cor}(ii) are $\SL(n)$-invariant. It is well-known that the continuous and translation invariant real-valued valuations, which are $\SL(n)$-invariant, are linear combinations of the Euler characteristic and the volume. From this fact, it easily follows that the continuous and translation invariant Minkowski valuations, which are $\SL(n)$-invariant, are of the form $K\mapsto M_1+V_n(K)M_2$, where $M_1,M_2\in\mathcal{K}^n$ are fixed. The above result characterizes the $\SL(n)$-invariant Minkowski valuations satisfying (VC).
\begin{corollary}
Let $n\geq 2$. An operator $\Phi:\mathcal{K}^n\longrightarrow \mathcal{K}^n_s$ is a continuous, $\SL(n)$-invariant, and translation invariant Minkowski valuation satisfying (VC) if and only if
there exist a centered segment $S$ and an $o$-symmetric $(n-1)$-dimensional convex body $L$ with $\dim(L+S)=n$ such that for every $K\in\mathcal{K}^n$,
$$\Phi(K)= L+V_n(K)S.$$
\end{corollary}
\section{Proof of Theorems~\ref{+mon intro2} and \ref{+On_dim_geq_32}}\label{on_mon}
In this section, we apply Theorem~\ref{teo} to obtain Theorems~\ref{+mon intro2} and \ref{+On_dim_geq_32}. These are improvements of the following results from~\cite{abardia.colesanti.saorin1}, as the homogeneity hypothesis is removed.
\begin{theorem}[\cite{abardia.colesanti.saorin1}]\label{mon}
Let $n\geq 2$. An operator $\Phi\in\MVal$ is $1$-homogeneous, monotonic, and satisfies (VC) if and only if there is a $g\in\GL(n)$ such that
$$
\Phi(K)=g(DK),\quad\forall\, K\in\mathcal{K}^n.
$$
\end{theorem}
\begin{theorem}[\cite{abardia.colesanti.saorin1}]\label{+On_dim_geq_3}
Let $n\geq 3$.
\begin{enumerate}
\item[\emph{(i)}] An operator $\Phi\in\MVal$ is $1$-homogeneous, $\SO(n)$-covariant, and satisfies (VC) if and only if there are $a,b\geq 0$ with $a+b>0$ such that
$$
\Phi(K)=a(K-s(K))+b(-K+s(K)),\quad\forall K\in\mathcal{K}^n.
$$
\item[\emph{(ii)}] An operator $\Phi\in\MVal^{s}$ is $1$-homogeneous, $\SO(n)$-covariant, and satisfies (VC) if and only if there is a $\lambda >0$ such that $\Phi(K)=\lambda DK$ for every $K\in\mathcal{K}^n$.
\end{enumerate}
\end{theorem}
We consider first Theorem~\ref{mon} and prove that the homogeneity property can be removed, that is, we prove Theorem~\ref{+mon intro2}.
\begin{proof}[Proof of Theorem~\ref{+mon intro2}]
By Theorem~\ref{teo}, we have that $\Phi$ is either of the form
\begin{equation}\label{+On_1}
\Phi(K)=p+\Phi_1(K)+p_2(K)+\dots+p_{n-1}(K)+V_n(K)q,\quad\forall\,K\in\mathcal{K}^n,
\end{equation}
with $\Phi_1\in\MVal_1$, $p,q\in\mathbb{R}^n$ and $p_j:\mathcal{K}^n\longrightarrow\mathbb{R}^n$, $2\leq j\leq n-1$, continuous, translation invariant, and $j$-homogeneous valuations;
or
\begin{equation}\label{+On_2}
\Phi(K)= L+p_1(K)+\dots+p_{n-1}(K)+V_n(K)S,\quad\forall\,K\in\mathcal{K}^n,
\end{equation}
with $S$ a non-degenerate segment, $L$ an $(n-1)$-dimensional convex body such that $\dim(L+S)=n$ and $p_j:\mathcal{K}^n\longrightarrow\mathbb{R}^n$, $1\leq j\leq n-1$, continuous,
translation invariant, and $j$-homogeneous valuations.
We observe first that the monotonicity condition implies that for every $K\in\mathcal{K}^n$ and for every $\lambda\geq 1$,
$\Phi(K)\subset\Phi(\lambda K)$, and that for every $0<\lambda \leq 1$, we have $\Phi(\lambda K)\subset\Phi(K)$ (notice that, by translation invariance, we may assume
that $K$ contains the origin, so that $K\subset \lambda K$ for $\lambda\ge1$ and $K\supset\lambda K$ for $0<\lambda\le1$).
First we deal with the case of $\Phi$ being given as in \eqref{+On_1}. Following the notation in \eqref{+On_1}, we set $p_n(K)=q$. Let $\lambda\geq 1$ and $K\in\mathcal{K}^n$. Applying the support function to both sides of \eqref{+On_1}, since $h(\{p\},u)=\langle p,u\rangle$ for any $p\in\mathbb{R}^n$, the monotonicity condition $\Phi(K)\subset\Phi(\lambda K)$ implies that
$$h(\Phi_1(K),u)+\sum_{j=2}^n\langle p_j(K),u\rangle\leq \lambda h(\Phi_1(K),u)+\sum_{j=2}^n\lambda^j\langle p_j(K),u\rangle,
$$
for any $u\in\mathbb{R}^n$, $\lambda\geq 1$, and $K\in\mathcal{K}^n$.
As $\lambda\to\infty$, we obtain that $\langle p_n(K),u\rangle\geq 0$ for every $u\in\mathbb{R}^n$, which is possible only if $q=p_n(K)=0$ for every $K\in\mathcal{K}^n$.
By backward induction, letting $\lambda\to\infty$ at each step, we obtain that $p_j(K)=0$ for every $K\in\mathcal{K}^n$ and $2\leq j\leq n-1$.
Hence, $\Phi$ as given in \eqref{+On_1} is monotonic if and
only if $\Phi=\Phi_1+p$, where $\Phi_1\in\MVal_1$ is monotonic. Theorem~\ref{mon} yields the first statement of Theorem~\ref{+mon intro2}.
We now deal with the operators of the form \eqref{+On_2}. Again taking the support function, we get, for every $u\in\mathbb{R}^n$, $0\leq\lambda\leq 1$, and $K\in\mathcal{K}^n$,
$$\sum_{j=1}^{n-1}\langle p_j(K),u\rangle+V_n(K)h(S,u)\geq \sum_{j=1}^{n-1}\lambda^j\langle p_j(K),u\rangle+\lambda^nV_n(K)h(S,u).
$$
Applying this inequality to $\mu K$ in place of $K$, $\mu>0$, using the $j$-homogeneity of the $p_j$, dividing by $\mu$, and letting $\mu\to 0^+$, we obtain $(1-\lambda)\langle p_1(K),u\rangle\geq 0$; taking $\lambda<1$, this gives $\langle p_1(K),u\rangle\geq 0$ for every $u\in\mathbb{R}^n$, which implies $p_1(K)=0$ for every $K\in\mathcal{K}^n$. Induction on $j$ yields $p_j(K)=0$ for every
$1\leq j\leq n-1$. Hence, for $K$ with $V_n(K)>0$, the above inequality reduces to $\lambda^nh(S,u)\leq h(S,u)$ for every $u\in\mathbb{R}^n$ and $0\leq \lambda\leq 1$. This is possible only if $h(S,u)\geq 0$ for every
$u\in\mathbb{R}^n$, that is, if $S$ contains the origin. The result follows after observing that $K\mapsto L+V_n(K)S$ is monotonic if $S$ contains the origin.
\end{proof}
Next we prove Theorem~\ref{+On_dim_geq_32}. For that we apply Theorem~\ref{+On_dim_geq_3} and the following characterization, by Schneider \cite{schneider72}, of the Steiner point. We recall that the \emph{Steiner point} $s(K)$ of $K\in\mathcal{K}^n$ is defined as
$$s(K)=\frac{1}{\kappa_n}\int_{\mathbb{S}^{n-1}}h(K,u)u\,du$$ and that an operator $\phi:\mathcal{K}^n\longrightarrow \mathbb{R}^n$ is translation covariant if
$\phi(K+t)=\phi(K)+t$ for every $t\in\mathbb{R}^n$.
We observe that the Steiner point is a continuous, translation covariant, $\SO(n)$-covariant, and 1-homogeneous vector-valued valuation (see \cite[p.~50]{schneider.book14}).
Schneider first proved in~\cite{schneider71} that this list of conditions characterizes the Steiner point. In \cite{schneider72}, he removed the homogeneity hypothesis and proved the following result.
\begin{theorem}[\cite{schneider72}]\label{schneider_vector} An operator $\phi:\mathcal{K}^n\longrightarrow\mathbb{R}^n$ is a continuous, $\SO(n)$-covariant, and translation covariant valuation if and only if there is a $\lambda\geq 0$ such that $\phi(K)=\lambda\,s(K)$ for every $K\in\mathcal{K}^n$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{+On_dim_geq_32}]
We will show that every operator $\Phi\in\MVal$ satisfying (VC) and being $\SO(n)$-covariant is also 1-homogeneous, so that we can apply Theorem~\ref{+On_dim_geq_3}.
By Theorem \ref{teo}, we know that $\Phi$ is either of the form \eqref{+On_1} or \eqref{+On_2}.
We study which of those operators are $\SO(n)$-covariant.
Assume first that $\Phi$ is given as in \eqref{+On_2}. Applying \eqref{+On_2} to $\lambda K$, for $K\in\mathcal{K}^n$ and $\lambda>0$, taking support functions in \eqref{+On_2}, and using the $\SO(n)$-covariance, we have for every
$g\in\SO(n)$ and $u\in\mathbb{R}^n$,
\begin{eqnarray*}
&&h(L,u)+\sum_{j=1}^{n-1}\lambda^j h(p_j(g(K)),u)+\lambda^nV_n(K)h(S,u)\\
&&\quad\qquad=h(g(L),u)+\sum_{j=1}^{n-1}\lambda^j h(g(p_j(K)),u)+\lambda^nV_n(K)h(g(S),u).
\end{eqnarray*}
As $\lambda\to 0^+$, we obtain $h(L,u)=h(g(L),u)$ for every $g\in\SO(n)$, where $L\in\mathcal{K}^n$ is a fixed $(n-1)$-dimensional convex body. Since $h(L,u)=h(g(L),u)$ holds for every $g\in\SO(n)$ only if $L=\{0\}$ or $L=rB^n$, $r>0$, and none of these are $(n-1)$-dimensional convex bodies, we obtain that the case given by \eqref{+On_2} does not contain any $\SO(n)$-covariant valuation.
Similarly, from \eqref{+On_1}, we have, for every $u\in\mathbb{R}^n$, $g\in\SO(n)$, $\lambda>0$, and $K\in\mathcal{K}^n$,
\begin{eqnarray*}
&&h(\{p\},u)+\lambda h(\Phi_1(g(K)),u)+\sum_{j=2}^{n}\lambda^j h(\{p_j(g(K))\},u)\\
&&\quad\qquad=h(\{g(p)\},u)+\lambda h(g(\Phi_1(K)),u)+\sum_{j=2}^{n}\lambda^j h(\{g(p_j(K))\},u),
\end{eqnarray*}
which implies
$$
h(\{p_j(g(K))\},u)=h(\{g(p_j(K))\},u),\quad\forall u\in\mathbb{R}^n,\, g\in\SO(n),\, K\in\mathcal{K}^n,\, 2\leq j\leq n.
$$
Hence, since the support functions of $p_j(K)\in\mathbb{R}^n$ and $g(p_j(K))\in\mathbb{R}^n$ coincide, we have
$p_j(g(K))=g p_j(K)$, $2\leq j\leq n$. By Theorem \ref{schneider_vector} applied to $K\mapsto p_j(K)-s(K)$, which is translation covariant, this is possible only if $p_j(K)=\lambda_j\,s(K)$ for every $K\in\mathcal{K}^n$ and some $\lambda_j\geq 0$. As $K\mapsto p_j(K)$ is homogeneous of degree $j$, $2\leq j\leq n$, and $K\mapsto s(K)$ is homogeneous of degree 1, this is the case only if
$\lambda_j=0$ for every $2\leq j\leq n$.
Similarly, $h(\{p\},u)=h(\{g(p)\},u)$
for every $g\in\SO(n)$ and $u\in\mathbb{R}^n$, implies $p=0$. Therefore, the only operators of the form in \eqref{+On_1} which are $\SO(n)$-covariant are 1-homogeneous.
We can now apply Theorem~\ref{+On_dim_geq_3} to obtain the result.
\end{proof}
In a similar manner, we can use Theorem~\ref{teo} and Theorem~6.1 in \cite{abardia.colesanti.saorin1} to show the 2-dimensional version of Theorem~\ref{+On_dim_geq_32}.
\begin{theorem}
Let $n=2$. An operator $\Phi\in\MVal$ satisfies (VC) and is $\SO(2)$-covariant if and only if there are $g\in\SO(2)$ and $a,b\geq 0$ with $a+b>0$ such that
$$
\Phi(K)=ag(K-s(K))+bg(-K+s(K)),\quad\forall K\in\mathcal{K}^n.
$$
\end{theorem}
\section{Examples}\label{examples}
We provide some examples of operators satisfying all but one of the hypotheses of Theorem~\ref{teo}, hence showing that the result is best possible, in the sense that other operators appear if one of the hypotheses is removed, with the exception of continuity. We are not aware of a translation invariant Minkowski valuation satisfying (VC) which is not continuous.
\begin{example}
Let $L$ be an $(n-1)$-dimensional convex body and let $S$ be a segment such that $\dim(L+S)=n$. Then
\[
K\mapsto DK+s(K)
\]
and
\[
K\mapsto L+V_n(K) S +s(K)
\]
are continuous Minkowski valuations which satisfy (VC). However, they are not translation invariant,
since the Steiner point is not.
\end{example}
\begin{example}
The operator \[
K\mapsto {\rm conv}\left((K-s(K))\cup (-K+s(K))\right)
\]
is continuous, translation invariant, and satisfies (VC).
It is also an $o$-symmetrization. However, it is not a Minkowski valuation.
\end{example}
\begin{example}
Let $L$ be an $(n-1)$-dimensional convex body and let $S$ be a segment such that $\dim(L+S)=n$.
Then
\[
K\mapsto L+V_n(DK) S
\]
is a continuous and translation invariant operator satisfying (VC). However, it is not a Min\-kows\-ki valuation.
\end{example}
\begin{example}
For $n\geq 2$, the complex difference body introduced in \cite{abardia}, $D_C:\mathcal{K}^{2n}\longrightarrow \mathcal{K}^{2n}$,
with $C$ an $o$-symmetric planar convex body
provides a continuous and translation invariant Minkowski valuation that satisfies (LVC) and is an $o$-symmetrization.
However, it does not satisfy (UVC).
\end{example}
\begin{example}
Let $L$ be an $(n-1)$-dimensional convex body and let $S$ be a segment with $\dim(L+S)=n$.
Then the operator
\[
K\mapsto L+V_n(K) S+ DK
\]
is a continuous and translation invariant Minkowski valuation satisfying (LVC). However, it does not satisfy (UVC).
\end{example}
\begin{example}
Let $L$ be an $(n-1)$-dimensional symmetric convex body and let $S$ be a segment with $\dim(L+S)<n$.
Then the operator
\[
K\mapsto L+V_n(K) S
\]
is a continuous and translation invariant Minkowski valuation satisfying (UVC). However, it does not satisfy (LVC).
\end{example}
\end{document}
\begin{document}
\title{Sequence Set Design With Good Correlation Properties via Majorization-Minimization}
\author{Junxiao Song, Prabhu Babu, and Daniel P. Palomar, \IEEEmembership{Fellow, IEEE}\thanks{This work was supported by the Hong Kong RGC 16206315 research grant.
Junxiao Song, Prabhu Babu, and Daniel P. Palomar are with the Hong
Kong University of Science and Technology (HKUST), Hong Kong. E-mail:
\{jsong, eeprabhubabu, palomar\}@ust.hk.}}
\maketitle
\begin{abstract}
Sets of sequences with good correlation properties are desired in
many active sensing and communication systems, e.g., multiple-input\textendash multiple-output
(MIMO) radar systems and code-division multiple-access (CDMA) cellular
systems. In this paper, we consider the problems of designing complementary
sets of sequences (CSS) and also sequence sets with both good auto-
and cross-correlation properties. Algorithms based on the general
majorization-minimization method are developed to tackle the optimization
problems arising from the sequence set design problems. All the proposed
algorithms can be implemented by means of the fast Fourier transform
(FFT) and thus are computationally efficient and capable of designing
sets of very long sequences. A number of numerical examples are provided
to demonstrate the performance of the proposed algorithms.\end{abstract}
\begin{IEEEkeywords}
Autocorrelation, CDMA sequences, complementary sets, cross-correlation,
majorization-minimization, unimodular sequences.
\end{IEEEkeywords}
\section{Introduction}
Sequences with good correlation properties play an important role
in many active sensing and communication systems\cite{he2012waveform,levanon2004radar}.
The design of a single sequence with good autocorrelation properties
(e.g., small autocorrelation sidelobes) has been studied extensively,
e.g., see \cite{stoica2009new,MISL,WISL_song2015} and the references
therein. In this paper, we focus on the design of sets of sequences
with good correlation properties. We consider both the design of complementary
sets of sequences (CSS) and the design of sequence sets with good
auto- and cross-correlation properties. In addition, in order to avoid
non-linear side effects and make full use of the transmission power
available in the system, we restrict our design to unimodular sequences.
Let $\{\mathbf{x}_{m}\}_{m=1}^{M}$ denote a set of $M$ complex unimodular
sequences each of length $N$, i.e., $\mathbf{x}_{m}=[x_{m}(1),\ldots,x_{m}(N)]^{T}$,
$m=1,\ldots,M$. Then the aperiodic cross-correlation of $\mathbf{x}_{i}$
and $\mathbf{x}_{j}$ at lag $k$ is defined as
\begin{eqnarray}
r_{i,j}(k) & = & \sum_{n=1}^{N-k}x_{i}(n+k)x_{j}^{*}(n)=r_{j,i}^{*}(-k),\nonumber \\
& & i,j=1,\ldots,M,k=1-N,\ldots,N-1.\label{eq:cross_corr-1}
\end{eqnarray}
When $i=j,$ \eqref{eq:cross_corr-1} reduces to the autocorrelation
of $\mathbf{x}_{i}$.
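For concreteness, the correlations in \eqref{eq:cross_corr-1} can be evaluated with zero-padded FFTs. The following Python/NumPy sketch is illustrative only (the helper name \texttt{aperiodic\_xcorr} is not from the literature); it returns $r_{i,j}(k)$ for all lags $k=1-N,\ldots,N-1$ and checks one lag against the definition.
\begin{verbatim}
import numpy as np

def aperiodic_xcorr(xi, xj):
    # r_{i,j}(k) = sum_n xi[n+k] * conj(xj[n]) for k = 1-N, ..., N-1
    N = len(xi)
    r = np.fft.ifft(np.fft.fft(xi, 2 * N) * np.conj(np.fft.fft(xj, 2 * N)))
    return np.concatenate((r[N + 1:], r[:N]))  # lags 1-N, ..., -1, 0, ..., N-1

rng = np.random.default_rng(0)
N = 8
xi = np.exp(1j * 2 * np.pi * rng.random(N))
xj = np.exp(1j * 2 * np.pi * rng.random(N))
k = 3
direct = sum(xi[n + k] * np.conj(xj[n]) for n in range(N - k))
assert np.isclose(aperiodic_xcorr(xi, xj)[N - 1 + k], direct)
\end{verbatim}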
The motivation of CSS design comes from the difficulties in designing
a single unimodular sequence with impulse-like autocorrelation. For
instance, it can be easily observed that the autocorrelation sidelobe
at lag $N-1$ of a unimodular sequence is always equal to 1, no matter
how we design the sequence. The difficulties have encouraged researchers
to consider the idea of CSS, and the set of sequences $\{\mathbf{x}_{m}\}_{m=1}^{M}$
is called complementary if and only if the autocorrelations of $\{\mathbf{x}_{m}\}$
sum up to zero at any out-of-phase lag, i.e.,
\begin{equation}
\sum_{m=1}^{M}r_{m,m}(k)=0,\,1\leq\left|k\right|\leq N-1.
\end{equation}
CSS have been applied in many active sensing and communication systems,
for instance, multiple-input\textendash multiple-output (MIMO) radars
\cite{compleSet_Searle2008}, radar pulse compression \cite{compleSet_Levanon2009},
orthogonal frequency-division multiplexing (OFDM) \cite{compleSet_Schmidt2007},
ultra wide-band (UWB) communications \cite{compleSet_Garcia2010},
code-division multiple-access (CDMA) \cite{compleSet_Tseng2000},
and channel estimation \cite{compleSet_Spasojevic2001}. Owing to
the practical importance, a lot of effort has been devoted to the
construction of CSS. The majority of research results on CSS at the
early stage have been concerned with the analytical construction of
CSS for restricted sequence length $N$ and set cardinality $M$.
More recently, computational methods have also been proposed for the
design of CSS, see \cite{SequenceSet_Mojtaba2013} for example. In
contrast to analytical constructions, computational methods are more
flexible in the sense that they do not impose any restriction on the
length of sequences or the set cardinality.
In CSS design, only the autocorrelation properties of the sequences
have been considered. But some applications require a set of sequences
with not only good autocorrelation properties but also good cross-correlations
among the sequences, for example, in CDMA cellular networks or in
MIMO radar systems. Good autocorrelation indicates that a sequence
is nearly uncorrelated with its own time-shifted versions, while good
cross-correlation means that any sequence is nearly uncorrelated with
all other time-shifted sequences. Good correlation properties in the
above sense ensure that matched filters at the receiver end can easily
separate the users in a CDMA system \cite{lowCorrSet_Oppermann1997}
or extract the signals backscattered from the range of interest while
attenuating signals backscattered from other ranges in MIMO radar
\cite{SequenceSet_He2009}.
Extending the approaches in \cite{WISL_song2015}, we present in this
paper several new algorithms for the design of complementary sets
of sequences and sequence sets with both good auto- and cross-correlation
properties. The sequence set design problems are first formulated
as optimization problems and they include the single sequence design
problems considered in \cite{MISL,WISL_song2015} as special cases.
Then several efficient algorithms are developed based on the general
majorization-minimization (MM) method via successively majorizing
the objective functions twice. All the proposed algorithms can be
implemented by means of the fast Fourier transform (FFT) and are thus
very efficient in practice. The convergence properties and an acceleration
scheme, which can be used to further accelerate the proposed MM algorithms,
are also briefly discussed.
The remaining sections of the paper are organized as follows. In Section
\ref{sec:Problem-Formulation}, the problem formulations are presented.
In Section \ref{sec:Seq-Design-viaMM}, an MM algorithm is derived
for the CSS design problem, followed by the derivations of two MM
algorithms for designing sequence sets with good auto- and cross-correlations
in Sections \ref{sec:Set_Auto_Cross_Wei} and \ref{sec:Adaptive-MM},
respectively. Convergence analysis and an acceleration scheme are
introduced in Section \ref{sec:Convergence-Acc}. Finally, Section
\ref{sec:Numerical-Experiments} presents some numerical results,
and the conclusions are given in Section \ref{sec:Conclusion}.
\emph{Notation}: Boldface upper case letters denote matrices, boldface
lower case letters denote column vectors, and italics denote scalars.
$\mathbb{R}$ and $\mathbb{C}$ denote the real field and the complex
field, respectively. $\mathrm{Re}(\cdot)$ and $\mathrm{Im}(\cdot)$
denote the real and imaginary part, respectively. ${\rm arg}(\cdot)$
denotes the phase of a complex number. The superscripts $(\cdot)^{T}$,
$(\cdot)^{*}$ and $(\cdot)^{H}$ denote transpose, complex conjugate,
and conjugate transpose, respectively. $X_{i,j}$ denotes the (\emph{i}-th,
\emph{j}-th) element of matrix $\mathbf{X}$ and $x_{i}$ ($x(i)$)
denotes the \emph{i}-th element of vector $\mathbf{x}$. $\mathbf{X}_{i,:}$
denotes the \emph{i}-th row of matrix $\mathbf{X}$, $\mathbf{X}_{:,j}$
denotes the \emph{j}-th column of matrix $\mathbf{X}$, and $\mathbf{X}_{i:j,k:l}$
denotes the submatrix of $\mathbf{X}$ from $X_{i,k}$ to $X_{j,l}$.
$\circ$ denotes the Hadamard product. $\otimes$ denotes the Kronecker
product. $\mathrm{Tr}(\cdot)$ denotes the trace of a matrix. ${\rm diag}(\mathbf{X})$
is a column vector consisting of all the diagonal elements of $\mathbf{X}$.
${\rm Diag}(\mathbf{x})$ is a diagonal matrix formed with $\mathbf{x}$
as its principal diagonal. ${\rm vec}(\mathbf{X})$ is a column vector
consisting of all the columns of $\mathbf{X}$ stacked. $\mathbf{I}_{n}$
denotes an $n\times n$ identity matrix.
\section{Problem Formulation and MM Primer\label{sec:Problem-Formulation}}
The problems of interest in this paper are the design of complementary
sets of sequences (CSS) and the design of sequence sets with good
auto- and cross-correlation properties. In the following, we first
provide criteria to measure the complementarity of a sequence set
and also the goodness of auto- and cross-correlation properties respectively,
and then formulate the sequence set design problems as optimization
problems. The MM method is also briefly introduced, which will be
applied to tackle the optimization problems later.
\subsection{Design of Complementary Set of Sequences}
We are interested in developing efficient optimization methods for
the design of complementary sets of sequences. Consequently, to measure
the complementarity of a sequence set $\{\mathbf{x}_{m}\}_{m=1}^{M}$,
we consider the complementary integrated sidelobe level (CISL) metric
of a set of sequences, which is defined as
\begin{equation}
{\rm CISL}=\sum_{k=1}^{N-1}\left|\sum_{m=1}^{M}r_{m,m}(k)\right|^{2}.\label{eq:ISL_set}
\end{equation}
Then a natural idea to generate complementary sets of unimodular sequences
is to minimize the CISL metric in \eqref{eq:ISL_set}, i.e., solving
the following optimization problem:
\begin{equation}
\begin{array}{ll}
\underset{\{\mathbf{x}_{m}\}_{m=1}^{M}}{\mathsf{minimize}} & {\displaystyle \sum_{k=1}^{N-1}}\left|{\displaystyle \sum_{m=1}^{M}}r_{m,m}(k)\right|^{2}\\
\mathsf{subject\;to} & \left|x_{m}(n)\right|=1,\\
& n=1,\ldots,N,\,m=1,\ldots,M.
\end{array}\label{eq:CSS_prob}
\end{equation}
Note that if the objective of problem \eqref{eq:CSS_prob} can be
driven to zero, then the corresponding solution is a complementary
set of sequences. But the problem may also be used to find almost
complementary sets of sequences for $(N,M)$ values for which no CSS
exists.
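As an aside (not part of the developments below), the CISL in \eqref{eq:ISL_set} of a candidate set can itself be evaluated with a few FFTs, which is convenient for monitoring the progress of any design method; a minimal Python/NumPy sketch, assuming one sequence per row of the array \texttt{X}, is the following.
\begin{verbatim}
import numpy as np

def cisl(X):
    # complementary integrated sidelobe level of the rows of X (M x N)
    M, N = X.shape
    F = np.fft.fft(X, 2 * N, axis=1)
    r = np.fft.ifft(np.abs(F) ** 2, axis=1).sum(axis=0)[:N]  # sum_m r_{m,m}(k)
    return float(np.sum(np.abs(r[1:]) ** 2))                 # exclude lag k = 0

rng = np.random.default_rng(1)
X = np.exp(1j * 2 * np.pi * rng.random((4, 64)))  # random unimodular set
print(cisl(X))
\end{verbatim}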
\subsection{Design of Sequence Set with Good Auto- and Cross-correlation Properties}
To design sequence sets with both good auto- and cross-correlation
properties, we consider the goodness measure used in \cite{SequenceSet_He2009},
which is defined as
\begin{equation}
\Psi={\displaystyle \sum_{m=1}^{M}}{\displaystyle \sum_{\substack{k=1-N\\
k\neq0
}
}^{N-1}}\left|r_{m,m}(k)\right|^{2}+{\displaystyle \sum_{i=1}^{M}}{\displaystyle \sum_{\substack{j=1\\
j\neq i
}
}^{M}}{\displaystyle \sum_{k=1-N}^{N-1}}\left|r_{i,j}(k)\right|^{2}.\label{eq:obj_auto_cross}
\end{equation}
In this criterion, the first term contains the autocorrelation sidelobes
of all the sequences and the cross-correlations are involved in the
second term. Then, to design unimodular sequence sets with good correlation
properties, we consider the following optimization problem:
\begin{equation}
\begin{array}{ll}
\underset{\{\mathbf{x}_{m}\}_{m=1}^{M}}{\mathsf{minimize}} & {\displaystyle \sum_{m=1}^{M}}{\displaystyle \sum_{\substack{k=1-N\\
k\neq0
}
}^{N-1}}\left|r_{m,m}(k)\right|^{2}+{\displaystyle \sum_{i=1}^{M}}{\displaystyle \sum_{\substack{j=1\\
j\neq i
}
}^{M}}{\displaystyle \sum_{k=1-N}^{N-1}}\left|r_{i,j}(k)\right|^{2}\\
\mathsf{subject\;to} & \left|x_{m}(n)\right|=1,\,n=1,\ldots,N,\,m=1,\ldots,M.
\end{array}\label{eq:seqset_corr}
\end{equation}
Since $r_{m,m}(0)=N$, $m=1,\ldots,M$, due to the unimodular constraints,
problem \eqref{eq:seqset_corr} can be written more compactly as
\begin{equation}
\begin{array}{ll}
\underset{\{\mathbf{x}_{m}\}_{m=1}^{M}}{\mathsf{minimize}} & {\displaystyle \sum_{i=1}^{M}}{\displaystyle \sum_{\substack{j=1}
}^{M}}{\displaystyle \sum_{k=1-N}^{N-1}}\left|r_{i,j}(k)\right|^{2}-N^{2}M\\
\mathsf{subject\;to} & \left|x_{m}(n)\right|=1,\\
& n=1,\ldots,N,\,m=1,\ldots,M.
\end{array}\label{eq:seqset_prob}
\end{equation}
As has been shown in \cite{he2012waveform}, the criterion $\Psi$
defined in \eqref{eq:obj_auto_cross} is lower bounded by $N^{2}M(M-1)$
and thus cannot be made very small. This unveils the fact that it
is not possible to design a set of sequences with all auto- and cross-correlation
sidelobes very small. Therefore, we also consider the following more
general weighted formulation:
\begin{equation}
\begin{array}{ll}
\underset{\{\mathbf{x}_{m}\}_{m=1}^{M}}{\mathsf{minimize}} & {\displaystyle \sum_{i=1}^{M}}{\displaystyle \sum_{\substack{j=1}
}^{M}}{\displaystyle \sum_{k=1-N}^{N-1}}w_{k}\left|r_{i,j}(k)\right|^{2}-w_{0}N^{2}M\\
\mathsf{subject\;to} & \left|x_{m}(n)\right|=1,\,n=1,\ldots,N,\,m=1,\ldots,M,
\end{array}\label{eq:seqset_prob_weight}
\end{equation}
where $w_{k}=w_{-k}\geq0$, $k=0,\ldots,N-1$ are nonnegative weights
assigned to different time lags. It is easy to see that if we choose
$w_{k}=1$ for all $k,$ then problem \eqref{eq:seqset_prob_weight}
reduces to \eqref{eq:seqset_prob}. But problem \eqref{eq:seqset_prob_weight}
provides more flexibility in the sense that we can assign different
weights to different correlation lags, so that we can minimize the
correlations only within a certain time lag interval. Also note that
when $M=1$, problem \eqref{eq:seqset_prob_weight} becomes the weighted
integrated sidelobe level minimization problem considered in \cite{WISL_song2015}.
Two algorithms named CAN and WeCAN were proposed in \cite{SequenceSet_He2009}
to tackle problems \eqref{eq:seqset_prob_weight} and \eqref{eq:seqset_prob},
respectively. But the authors of \cite{SequenceSet_He2009} resorted
to solving ``almost equivalent'' problems that seem to work well
in practice. In this paper, we develop algorithms to directly tackle
the sequence set design formulations in \eqref{eq:seqset_prob_weight}
and \eqref{eq:seqset_prob}.
\subsection{The MM Method\label{sub:MM-Method}}
The MM method refers to the majorization-minimization method, which
is an approach to solve optimization problems that are too difficult
to solve directly. The principle behind the MM method is to transform
a difficult problem into a series of simple problems. Interested readers
may refer to \cite{hunter2004MMtutorial,MM_Stoica,razaviyayn2013unified}
and references therein for more details.
Suppose we want to minimize $f(\mathbf{x})$ over $\mathcal{X}\subseteq\mathbb{C}^{n}$.
Instead of minimizing the cost function $f(\mathbf{x})$ directly,
the MM approach optimizes a sequence of approximate objective functions
that majorize $f(\mathbf{x})$. More specifically, starting from a
feasible point $\mathbf{x}^{(0)},$ the algorithm produces a sequence
$\{\mathbf{x}^{(k)}\}$ according to the following update rule:
\begin{equation}
\mathbf{x}^{(k+1)}\in\underset{\mathbf{x}\in\mathcal{X}}{\arg\min}\,\,u(\mathbf{x},\mathbf{x}^{(k)}),\label{eq:major_update}
\end{equation}
where $\mathbf{x}^{(k)}$ is the point generated by the algorithm
at iteration $k,$ and $u(\mathbf{x},\mathbf{x}^{(k)})$ is the majorization
function of $f(\mathbf{x})$ at $\mathbf{x}^{(k)}$. Formally, the
function $u(\mathbf{x},\mathbf{x}^{(k)})$ is said to majorize the
function $f(\mathbf{x})$ at the point $\mathbf{x}^{(k)}$ if
\begin{eqnarray}
u(\mathbf{x},\mathbf{x}^{(k)}) & \geq & f(\mathbf{x}),\quad\forall\mathbf{x}\in\mathcal{X},\label{eq:major1}\\
u(\mathbf{x}^{(k)},\mathbf{x}^{(k)}) & = & f(\mathbf{x}^{(k)}).\label{eq:major2}
\end{eqnarray}
In other words, function $u(\mathbf{x},\mathbf{x}^{(k)})$ is an upper
bound of $f(\mathbf{x})$ over $\mathcal{X}$ and coincides with $f(\mathbf{x})$
at $\mathbf{x}^{(k)}$.
It is easy to show that with this scheme, the objective value is monotonically
decreasing (nonincreasing) at every iteration, i.e.,
\begin{equation}
f(\mathbf{x}^{(k+1)})\leq u(\mathbf{x}^{(k+1)},\mathbf{x}^{(k)})\leq u(\mathbf{x}^{(k)},\mathbf{x}^{(k)})=f(\mathbf{x}^{(k)}).\label{eq:descent-property}
\end{equation}
The first inequality and the third equality follow from the properties
of the majorization function, namely \eqref{eq:major1} and \eqref{eq:major2},
respectively, and the second inequality follows from \eqref{eq:major_update}.
To derive MM algorithms in practice, the key step is to find a majorization
function of the objective such that the majorized problem is easy
to solve. For that purpose, the following result on quadratic upper-bounding
will be useful later when constructing simple majorization functions.
\begin{lem}[\cite{MISL}]
\label{lem:majorizer}Let $\mathbf{L}$ be an $n\times n$ Hermitian
matrix and $\mathbf{M}$ be another $n\times n$ Hermitian matrix
such that $\mathbf{M}\succeq\mathbf{L}.$\textup{ }Then for any point
$\mathbf{x}_{0}\in\mathbb{C}^{n}$, the quadratic function $\mathbf{x}^{H}\mathbf{L}\mathbf{x}$
is majorized by $\mathbf{x}^{H}\mathbf{M}\mathbf{x}+2{\rm Re}\left(\mathbf{x}^{H}(\mathbf{L}-\mathbf{M})\mathbf{x}_{0}\right)+\mathbf{x}_{0}^{H}(\mathbf{M}-\mathbf{L})\mathbf{x}_{0}$
at\textup{ $\mathbf{x}_{0}$.}
\end{lem}
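As a quick sanity check of Lemma \ref{lem:majorizer} (our remark, not part of the cited result), the gap between the majorization function and $\mathbf{x}^{H}\mathbf{L}\mathbf{x}$ equals $(\mathbf{x}-\mathbf{x}_{0})^{H}(\mathbf{M}-\mathbf{L})(\mathbf{x}-\mathbf{x}_{0})\geq0$, which is easy to verify numerically:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 8
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
L = (A + A.conj().T) / 2                    # Hermitian test matrix
M = np.linalg.eigvalsh(L)[-1] * np.eye(n)   # M >= L in the Loewner order
x, x0 = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))

quad = lambda z, B: np.real(z.conj() @ B @ z)
u = quad(x, M) + 2 * np.real(x.conj() @ (L - M) @ x0) + quad(x0, M - L)
gap = u - quad(x, L)
assert gap >= -1e-9 and np.isclose(gap, quad(x - x0, M - L))
\end{verbatim}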
\section{Design of Complementary Set of Sequences via MM\label{sec:Seq-Design-viaMM}}
To tackle problem \eqref{eq:CSS_prob} via majorization-minimization,
we first perform some reformulations. Let us define an auxiliary
sequence of length $M(2N-1)$ as follows \cite{SequenceSet_Mojtaba2013}:
\begin{equation}
\mathbf{z}=[\mathbf{x}_{1}^{T},\mathbf{0}_{N-1}^{T},\ldots,\mathbf{x}_{M}^{T},\mathbf{0}_{N-1}^{T}]^{T},
\end{equation}
then the first $N$ aperiodic autocorrelation lags of $\mathbf{z}$
(denoted by $\{r_{z}(k)\}$) can be written as
\begin{equation}
r_{z}(k)=\sum_{m=1}^{M}r_{m,m}(k),\,0\leq k\leq N-1.
\end{equation}
Then the sequence set $\{\mathbf{x}_{m}\}_{m=1}^{M}$ is complementary
if and only if $\mathbf{z}$ has a zero correlation zone (ZCZ) for
lags in the interval $1\leq k\leq N-1$, and the CSS design problem
\eqref{eq:CSS_prob} can be reformulated as
\begin{equation}
\begin{array}{ll}
\underset{\{\mathbf{x}_{m}\}_{m=1}^{M}}{\mathsf{minimize}} & {\displaystyle \sum_{k=1}^{N-1}}\left|r_{z}(k)\right|^{2}\\
\mathsf{subject\;to} & \mathbf{z}=[\mathbf{x}_{1}^{T},\mathbf{0}_{N-1}^{T},\ldots,\mathbf{x}_{M}^{T},\mathbf{0}_{N-1}^{T}]^{T},\\
& \left|x_{m}(n)\right|=1,\,n=1,\ldots,N,\,m=1,\ldots,M.
\end{array}\label{eq:CSS_WISL_prob}
\end{equation}
The objective in \eqref{eq:CSS_WISL_prob} can be viewed as the weighted
ISL metric in \cite{WISL_song2015} of the sequence $\mathbf{z}$
(i.e., $\sum_{k=1}^{M(2N-1)-1}w_{k}\left|r_{z}(k)\right|^{2}$) with
weights chosen as
\begin{equation}
w_{k}=\begin{cases}
1, & 1\leq k\leq N-1\\
0, & N\leq k\leq M(2N-1)-1.
\end{cases}\label{eq:weights}
\end{equation}
However, in problem \eqref{eq:CSS_WISL_prob}, the sequence $\mathbf{z}$
has some special structures and the original weighted ISL minimization
algorithm proposed in \cite{WISL_song2015} for designing unimodular
sequences cannot be directly applied due to the zeros. But the algorithm
can be adapted to take the sequence structure into account and in
the following we give a brief derivation of the modified algorithm,
which mainly follows from Section III.B in \cite{WISL_song2015}.
Similar to Section III.B in \cite{WISL_song2015}, we perform two
successive majorization steps to problem \eqref{eq:CSS_WISL_prob}.
Let $L=M(2N-1)$ be the length of $\mathbf{z}$, and $\mathbf{U}_{k},\,k=1-L,\ldots,L-1$
be $L\times L$ Toeplitz matrices with the $k$th diagonal elements
being $1$ and $0$ elsewhere, i.e.,
\begin{equation}
\left[\mathbf{U}_{k}\right]_{i,j}=\begin{cases}
1 & \textrm{if }j-i=k\\
0 & \textrm{if }j-i\neq k,
\end{cases}\quad i,j=1,\ldots,L.\label{eq:U_k}
\end{equation}
Then the autocorrelations $\{r_{z}(k)\}$ of $\mathbf{z}$ can be
written in terms of $\mathbf{U}_{k}$ as
\begin{equation}
r_{z}(k)=\mathbf{z}^{H}\mathbf{U}_{k}\mathbf{z},\,k=1-L,\ldots,L-1.
\end{equation}
Then given $\mathbf{z}^{(l)}=[\mathbf{x}_{1}^{(l)T},\mathbf{0}_{N-1}^{T},\ldots,\mathbf{x}_{M}^{(l)T},\mathbf{0}_{N-1}^{T}]^{T}$
at iteration $l$, by using Lemma \ref{lem:majorizer} we can majorize
the objective of \eqref{eq:CSS_WISL_prob} by a quadratic function
as in \cite{WISL_song2015} and the majorized problem after the first
majorization step is given by
\begin{equation}
\begin{array}{ll}
\underset{\{\mathbf{x}_{m}\}_{m=1}^{M}}{\mathsf{minimize}} & \mathbf{z}^{H}\left(\mathbf{R}-(L-1)\mathbf{z}^{(l)}(\mathbf{z}^{(l)})^{H}\right)\mathbf{z}\\
\mathsf{subject\;to} & \mathbf{z}=[\mathbf{x}_{1}^{T},\mathbf{0}_{N-1}^{T},\ldots,\mathbf{x}_{M}^{T},\mathbf{0}_{N-1}^{T}]^{T},\\
& \left|x_{m}(n)\right|=1,\,n=1,\ldots,N,\,m=1,\ldots,M,
\end{array}\label{eq:CSS_major1_prob}
\end{equation}
where
\begin{eqnarray}
\mathbf{R} & = & {\displaystyle \sum_{\substack{k=1-L\\
k\neq0
}
}^{L-1}}w_{k}r_{z}^{(l)}(-k)\mathbf{U}_{k}
\end{eqnarray}
is a Hermitian Toeplitz matrix and $w_{k}=w_{-k}$, $k=1,\ldots,L-1$
are given in \eqref{eq:weights}.
To perform the second majorization step, we first bound the maximum
eigenvalue of the matrix $\mathbf{R}-(L-1)\mathbf{z}^{(l)}(\mathbf{z}^{(l)})^{H}$
as in \cite{WISL_song2015}, i.e.,
\begin{equation}
\lambda_{{\rm max}}\left(\mathbf{R}-(L-1)\mathbf{z}^{(l)}(\mathbf{z}^{(l)})^{H}\right)\le\lambda_{u},\label{eq:eig_bound_L}
\end{equation}
where
\begin{eqnarray}
\lambda_{u} & = & \frac{1}{2}\left(\max_{1\leq i\leq L}\mu_{2i}+\max_{1\leq i\leq L}\mu_{2i-1}\right),\\
\boldsymbol{\mu} & = & \mathbf{F}\mathbf{c},\label{eq:F_mu}\\
\mathbf{c} & = & [0,w_{1}r_{z}^{(l)}(1),\ldots,w_{L-1}r_{z}^{(l)}(L-1),\nonumber \\
& & 0,w_{L-1}r_{z}^{(l)}(1-L),\ldots,w_{1}r_{z}^{(l)}(-1)]^{T},
\end{eqnarray}
and the matrix $\mathbf{F}$ in \eqref{eq:F_mu} is the $2L\times2L$
FFT matrix with $F_{m,n}=e^{-j\frac{2mn\pi}{2L}},0\leq m,n<2L$. Then
by applying Lemma \ref{lem:majorizer} with $\mathbf{M=}\lambda_{u}\mathbf{I},$
we can obtain the majorized problem of \eqref{eq:CSS_major1_prob}
given by
\begin{equation}
\begin{array}{ll}
\underset{\{\mathbf{x}_{m}\}_{m=1}^{M}}{\mathsf{minimize}} & {\rm Re}\left(\mathbf{z}^{H}\big(\mathbf{R}\!-\!(L-1)\mathbf{z}^{(l)}(\mathbf{z}^{(l)})^{H}\!-\!\lambda_{u}\mathbf{I}\big)\mathbf{z}^{(l)}\right)\\
\mathsf{subject\;to} & \mathbf{z}=[\mathbf{x}_{1}^{T},\mathbf{0}_{N-1}^{T},\ldots,\mathbf{x}_{M}^{T},\mathbf{0}_{N-1}^{T}]^{T},\\
& \left|x_{m}(n)\right|=1,\,n=1,\ldots,N,\,m=1,\ldots,M,
\end{array}
\end{equation}
which can be rewritten as
\begin{equation}
\begin{array}{ll}
\underset{\{\mathbf{x}_{m}\}_{m=1}^{M}}{\mathsf{minimize}} & \left\Vert \mathbf{z}-\mathbf{y}\right\Vert _{2}^{2}\\
\mathsf{subject\;to} & \mathbf{z}=[\mathbf{x}_{1}^{T},\mathbf{0}_{N-1}^{T},\ldots,\mathbf{x}_{M}^{T},\mathbf{0}_{N-1}^{T}]^{T},\\
& \left|x_{m}(n)\right|=1,\,n=1,\ldots,N,\,m=1,\ldots,M,
\end{array}\label{eq:CSS_major2_prob}
\end{equation}
where
\begin{eqnarray}
\mathbf{y} & = & -\big(\mathbf{R}-(L-1)\mathbf{z}^{(l)}(\mathbf{z}^{(l)})^{H}-\lambda_{u}\mathbf{I}\big)\mathbf{z}^{(l)}\nonumber \\
& = & \left((L-1)MN+\lambda_{u}\right)\mathbf{z}^{(l)}-\mathbf{R}\mathbf{z}^{(l)}.
\end{eqnarray}
Problem \eqref{eq:CSS_major2_prob} admits the following closed form
solution
\begin{eqnarray}
x_{m}(n) & = & e^{j{\rm arg}(y_{(m-1)(2N-1)+n})},\nonumber \\
& & n=1,\ldots,N,m=1,\ldots,M.
\end{eqnarray}
The overall algorithm for the CSS design problem \eqref{eq:CSS_prob}
is summarized in Algorithm \ref{alg:CSS-MM}. Note that the algorithm
can be implemented by means of FFT (IFFT) operations, since $\mathbf{R}$
is Hermitian Toeplitz and it can be decomposed as
\begin{equation}
\mathbf{R}=\frac{1}{2L}\mathbf{F}_{:,1:L}^{H}{\rm Diag}(\boldsymbol{\mu})\mathbf{F}_{:,1:L},
\end{equation}
according to Lemma 4 in \cite{WISL_song2015}.
\begin{algorithm}[tbh]
\begin{algor}[1]
\item [{Require:}] \begin{raggedright}
number of sequences $M$, sequence length $N$
\par\end{raggedright}
\item [{{*}}] \begin{raggedright}
Set $l=0$ and initialize $\{\mathbf{x}_{m}^{(0)}\}_{m=1}^{M}$.
\par\end{raggedright}
\item [{{*}}] $L=M(2N-1)$
\item [{repeat}]~
\begin{algor}[1]
\item [{{*}}] $\mathbf{z}^{(l)}=[\mathbf{x}_{1}^{(l)T},\mathbf{0}_{N-1}^{T},\ldots,\mathbf{x}_{M}^{(l)T},\mathbf{0}_{N-1}^{T}]^{T}$
\item [{{*}}] $\mathbf{f}=\mathbf{F}[\mathbf{z}^{(l)T},\mathbf{0}_{1\times L}]^{T}$
\item [{{*}}] $\mathbf{r}=\frac{1}{2L}\mathbf{F}^{H}\left|\mathbf{f}\right|^{2}$
\item [{{*}}] $\mathbf{c}=\mathbf{r}\circ[0,\mathbf{1}_{N-1}^{T},\mathbf{0}_{2(L-N)+1}^{T},\mathbf{1}_{N-1}^{T}]^{T}$
\item [{{*}}] $\boldsymbol{\mu}=\mathbf{F}\mathbf{c}$
\item [{{*}}] $\lambda_{u}=\frac{1}{2}\big(\underset{1\leq i\leq L}{\max}\mu_{2i}+\underset{1\leq i\leq L}{\max}\mu_{2i-1}\big)$
\item [{{*}}] $\mathbf{y}=\left((L-1)MN+\lambda_{u}\right)\mathbf{z}^{(l)}-\frac{1}{2L}\mathbf{F}_{:,1:L}^{H}(\boldsymbol{\mu}\circ\mathbf{f})$
\item [{{*}}] $x_{m}^{(l+1)}(n)=e^{j{\rm arg}(y_{(m-1)(2N-1)+n})},n=1,\ldots,N,m=1,\ldots,M.$
\item [{{*}}] $l\leftarrow l+1$
\end{algor}
\item [{until}] convergence
\end{algor}
\protect\caption{\label{alg:CSS-MM}The MM Algorithm for CSS design problem \eqref{eq:CSS_prob}.}
\end{algorithm}
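As a reading aid, one iteration of Algorithm \ref{alg:CSS-MM} can be transcribed into Python/NumPy roughly as follows (a direct, unoptimized transcription of the pseudocode; \texttt{np.fft.ifft} absorbs the $1/2L$ factor and the function name is purely illustrative).
\begin{verbatim}
import numpy as np

def css_mm_iteration(X):
    # X is M x N with unimodular entries, one sequence per row
    M, N = X.shape
    L = M * (2 * N - 1)
    # z = [x_1^T, 0_{N-1}^T, ..., x_M^T, 0_{N-1}^T]^T
    z = np.concatenate([np.concatenate((X[m], np.zeros(N - 1)))
                        for m in range(M)])
    f = np.fft.fft(z, 2 * L)                 # f = F [z^T, 0_{1xL}]^T
    r = np.fft.ifft(np.abs(f) ** 2)          # r = (1/2L) F^H |f|^2
    mask = np.zeros(2 * L)
    mask[1:N] = 1.0                          # lags 1, ..., N-1
    mask[2 * L - N + 1:] = 1.0               # mirrored negative lags
    mu = np.fft.fft(r * mask).real           # mu = F c (real up to round-off)
    lam_u = 0.5 * (mu[1::2].max() + mu[0::2].max())
    y = ((L - 1) * M * N + lam_u) * z - np.fft.ifft(mu * f)[:L]
    # phase-only update at the unimodular positions of z
    idx = np.arange(M)[:, None] * (2 * N - 1) + np.arange(N)[None, :]
    return np.exp(1j * np.angle(y[idx]))
\end{verbatim}
Iterating this map until the CISL stops decreasing mimics Algorithm \ref{alg:CSS-MM} (without the acceleration scheme discussed later).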
\section{Design of Sequence Set with Good Auto- and Cross-correlation Properties
via MM\label{sec:Set_Auto_Cross_Wei}}
In this section, we consider the problem of designing sequence sets
for both good auto- and cross-correlation properties. We first consider
the more general problem formulation with weights involved, i.e.,
problem \eqref{eq:seqset_prob_weight}, and derive an MM algorithm
for the problem in the following.
Let us first stack the sequences $\mathbf{x}_{m},m=1,\ldots,M$ together
and denote it by $\mathbf{x}$, i.e.,
\begin{equation}
\mathbf{x}=[\mathbf{x}_{1}^{T},\ldots,\mathbf{x}_{M}^{T}]^{T},
\end{equation}
then we have
\begin{equation}
\mathbf{x}_{m}=\mathbf{S}_{m}\mathbf{x},\,m=1,\ldots,M,\label{eq:x_m_x}
\end{equation}
where $\mathbf{S}_{m}$ is an $N\times NM$ block selection matrix
defined as
\begin{equation}
\mathbf{S}_{m}=[\mathbf{0}_{N\times(m-1)N},\mathbf{I}_{N},\mathbf{0}_{N\times(M-m)N}].
\end{equation}
We then note that \eqref{eq:cross_corr-1} can be written more compactly
as
\begin{equation}
r_{i,j}(k)=\mathbf{x}_{j}^{H}\mathbf{U}_{k}\mathbf{x}_{i},\,k=1-N,\ldots,N-1,\,i,j=1,\ldots,M,\label{eq:cross_corr_mat}
\end{equation}
where $\mathbf{U}_{k}$ is defined as in \eqref{eq:U_k} but is of
size $N\times N$ now. By combining \eqref{eq:cross_corr_mat} and
\eqref{eq:x_m_x}, we have
\begin{eqnarray}
r_{i,j}(k) & = & \mathbf{x}^{H}\mathbf{S}_{j}^{H}\mathbf{U}_{k}\mathbf{S}_{i}\mathbf{x},\\
& & k=1-N,\ldots,N-1,\,i,j=1,\ldots,M,\nonumber
\end{eqnarray}
and then
\begin{equation}
\begin{aligned}\left|r_{i,j}(k)\right|^{2} & =\left|\mathbf{x}^{H}\mathbf{S}_{j}^{H}\mathbf{U}_{k}\mathbf{S}_{i}\mathbf{x}\right|^{2}\\
& =\left|{\rm Tr}\left(\mathbf{x}\mathbf{x}^{H}\mathbf{S}_{j}^{H}\mathbf{U}_{k}\mathbf{S}_{i}\right)\right|^{2}\\
& =\left|{\rm vec}(\mathbf{x}\mathbf{x}^{H})^{H}{\rm vec}(\mathbf{S}_{j}^{H}\mathbf{U}_{k}\mathbf{S}_{i})\right|^{2}.
\end{aligned}
\label{eq:r_ij_square}
\end{equation}
By using \eqref{eq:r_ij_square}, problem \eqref{eq:seqset_prob_weight}
can be rewritten as
\begin{equation}
\begin{array}{ll}
\underset{\mathbf{x}\in\mathbb{C}^{NM}}{\mathsf{minimize}} & {\rm vec}(\mathbf{x}\mathbf{x}^{H})^{H}\mathbf{L}{\rm vec}(\mathbf{x}\mathbf{x}^{H})-w_{0}N^{2}M\\
\mathsf{subject\;to} & \left|x_{n}\right|=1,\,n=1,\ldots,NM,
\end{array}\label{eq:prob_weight_vec}
\end{equation}
where
\begin{equation}
\mathbf{L}={\displaystyle \sum_{i=1}^{M}}{\displaystyle \sum_{\substack{j=1}
}^{M}}{\displaystyle \sum_{k=1-N}^{N-1}}w_{k}{\rm vec}(\mathbf{S}_{j}^{H}\mathbf{U}_{k}\mathbf{S}_{i}){\rm vec}(\mathbf{S}_{j}^{H}\mathbf{U}_{k}\mathbf{S}_{i})^{H}.\label{eq:L_mat}
\end{equation}
Since $w_{k}\geq0,$ it is easy to see that $\mathbf{L}$ is a nonnegative
real symmetric matrix and it can be shown (see Lemma 5 in \cite{WISL_song2015})
that
\begin{equation}
\mathbf{L}\preceq{\rm Diag}(\mathbf{b}),
\end{equation}
where $\mathbf{b}=\mathbf{L}\mathbf{1}$. Then given $\mathbf{x}^{(l)}$
at iteration $l$, by using Lemma \ref{lem:majorizer}, we know that
the objective of problem \eqref{eq:prob_weight_vec} is majorized
by the following function at $\mathbf{x}^{(l)}$:
\begin{equation}
\begin{aligned} & \,\,u_{1}(\mathbf{x},\mathbf{x}^{(l)})\\
= & \,\,{\rm vec}(\mathbf{x}\mathbf{x}^{H})^{H}{\rm Diag}(\mathbf{b}){\rm vec}(\mathbf{x}\mathbf{x}^{H})\\
& +2{\rm Re}\big({\rm vec}(\mathbf{x}\mathbf{x}^{H})^{H}(\mathbf{L}-{\rm Diag}(\mathbf{b})){\rm vec}(\mathbf{x}^{(l)}\mathbf{x}^{(l)H})\big)\\
& +{\rm vec}(\mathbf{x}^{(l)}\mathbf{x}^{(l)H})^{H}({\rm Diag}(\mathbf{b})-\mathbf{L}){\rm vec}(\mathbf{x}^{(l)}\mathbf{x}^{(l)H})-w_{0}N^{2}M.
\end{aligned}
\label{eq:major_u1}
\end{equation}
Since the elements of $\mathbf{x}$ are of unit modulus, it is easy
to see that the first term of \eqref{eq:major_u1} is just a constant.
After ignoring the constant terms, the majorized problem of \eqref{eq:prob_weight_vec}
is given by
\begin{equation}
\begin{array}{ll}
\underset{\mathbf{x}\in\mathbb{C}^{NM}}{\mathsf{minimize}} & {\rm Re}\big({\rm vec}(\mathbf{x}\mathbf{x}^{H})^{H}(\mathbf{L}-{\rm Diag}(\mathbf{b})){\rm vec}(\mathbf{x}^{(l)}\mathbf{x}^{(l)H})\big)\\
\mathsf{subject\;to} & \left|x_{n}\right|=1,\,n=1,\ldots,NM.
\end{array}\label{eq:prob_major1}
\end{equation}
By substituting $\mathbf{L}$ in \eqref{eq:L_mat} back, we have
\begin{equation}
\begin{aligned} & {\rm Re}\big({\rm vec}(\mathbf{x}\mathbf{x}^{H})^{H}\mathbf{L}{\rm vec}(\mathbf{x}^{(l)}\mathbf{x}^{(l)H})\big)\\
= & {\displaystyle \sum_{i=1}^{M}}{\displaystyle \sum_{\substack{j=1}
}^{M}}{\displaystyle \sum_{k=1-N}^{N-1}}{\rm Re}\bigg(w_{k}{\rm Tr}\left(\mathbf{x}\mathbf{x}^{H}\mathbf{S}_{j}^{H}\mathbf{U}_{k}\mathbf{S}_{i}\right)\\
& \qquad\qquad\qquad\quad\times{\rm Tr}\left(\mathbf{x}^{(l)}\mathbf{x}^{(l)H}\mathbf{S}_{i}^{H}\mathbf{U}_{-k}\mathbf{S}_{j}\right)\bigg)\\
= & {\displaystyle \sum_{i=1}^{M}}{\displaystyle \sum_{\substack{j=1}
}^{M}}{\displaystyle \sum_{k=1-N}^{N-1}}{\rm Re}\left(w_{k}r_{j,i}^{(l)}(-k)\mathbf{x}^{H}\mathbf{S}_{j}^{H}\mathbf{U}_{k}\mathbf{S}_{i}\mathbf{x}\right),
\end{aligned}
\label{eq:u1_term1}
\end{equation}
and the second term of the objective can also be rewritten as
\begin{equation}
\begin{aligned} & {\rm Re}\big({\rm vec}(\mathbf{x}\mathbf{x}^{H})^{H}{\rm Diag}(\mathbf{b}){\rm vec}(\mathbf{x}^{(l)}\mathbf{x}^{(l)H})\big)\\
= & {\rm Re}\left({\rm vec}(\mathbf{x}\mathbf{x}^{H})^{H}\left(\mathbf{b}\circ{\rm vec}(\mathbf{x}^{(l)}\mathbf{x}^{(l)H})\right)\right)\\
= & {\rm Re}\left({\rm Tr}\left(\mathbf{x}\mathbf{x}^{H}{\rm mat}\left(\mathbf{b}\circ{\rm vec}(\mathbf{x}^{(l)}\mathbf{x}^{(l)H})\right)\right)\right)\\
= & {\rm Re}\left(\mathbf{x}^{H}\left({\rm mat}(\mathbf{b})\circ(\mathbf{x}^{(l)}\mathbf{x}^{(l)H})\right)\mathbf{x}\right),
\end{aligned}
\label{eq:u1_term2}
\end{equation}
where ${\rm mat}(\cdot)$ is the inverse operation of ${\rm vec}(\cdot)$.
It is clear that both \eqref{eq:u1_term1} and \eqref{eq:u1_term2}
are quadratic in $\mathbf{x}$ and problem \eqref{eq:prob_major1}
can be rewritten as
\begin{equation}
\begin{array}{ll}
\underset{\mathbf{x}\in\mathbb{C}^{NM}}{\mathsf{minimize}} & \mathbf{x}^{H}\left(\mathbf{R}-\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)\right)\mathbf{x}\\
\mathsf{subject\;to} & \left|x_{n}\right|=1,\,n=1,\ldots,NM,
\end{array}\label{eq:prob_quad}
\end{equation}
where
\begin{equation}
\mathbf{R}={\displaystyle \sum_{i=1}^{M}}{\displaystyle \sum_{\substack{j=1}
}^{M}}{\displaystyle \sum_{k=1-N}^{N-1}}w_{k}r_{j,i}^{(l)}(-k)\mathbf{S}_{j}^{H}\mathbf{U}_{k}\mathbf{S}_{i},\label{eq:R_mat}
\end{equation}
\begin{equation}
\begin{aligned}\mathbf{B} & ={\rm mat}(\mathbf{b})\\
& ={\rm mat}(\mathbf{L}\mathbf{1})\\
& ={\rm mat}\left({\displaystyle \sum_{i=1}^{M}}{\displaystyle \sum_{\substack{j=1}
}^{M}}{\displaystyle \sum_{k=1-N}^{N-1}}w_{k}{\rm vec}(\mathbf{S}_{j}^{H}\mathbf{U}_{k}\mathbf{S}_{i}){\rm vec}(\mathbf{S}_{j}^{H}\mathbf{U}_{k}\mathbf{S}_{i})^{H}\mathbf{1}\right)\\
& ={\rm mat}\left({\displaystyle \sum_{i=1}^{M}}{\displaystyle \sum_{\substack{j=1}
}^{M}}{\displaystyle \sum_{k=1-N}^{N-1}}w_{k}(N-\left|k\right|){\rm vec}(\mathbf{S}_{j}^{H}\mathbf{U}_{k}\mathbf{S}_{i})\right)\\
& ={\displaystyle \sum_{i=1}^{M}}{\displaystyle \sum_{\substack{j=1}
}^{M}}{\displaystyle \sum_{k=1-N}^{N-1}}w_{k}(N-\left|k\right|)\mathbf{S}_{j}^{H}\mathbf{U}_{k}\mathbf{S}_{i}\\
& =\mathbf{1}_{M\times M}\otimes\mathbf{W},
\end{aligned}
\label{eq:B_mat}
\end{equation}
and
\[
\begin{aligned}\mathbf{W} & ={\displaystyle \sum_{k=1-N}^{N-1}}w_{k}(N-\left|k\right|)\mathbf{U}_{k}\\
& =\begin{bmatrix}w_{0}N & w_{1}(N-1) & \ldots & w_{N-1}\\
w_{1}(N-1) & w_{0}N & \ddots & \vdots\\
\vdots & \ddots & \ddots & w_{1}(N-1)\\
w_{N-1} & \ldots & w_{1}(N-1) & w_{0}N
\end{bmatrix}.
\end{aligned}
\]
Note that in \eqref{eq:prob_quad} we have removed the ${\rm Re}(\cdot)$
operator since the matrices $\mathbf{R}$ and $\mathbf{B}$ are Hermitian.
Since the majorized problem \eqref{eq:prob_quad} is still hard to
solve directly, we propose to majorize the objective function at $\mathbf{x}^{(l)}$
again to further simplify the problem that we need to solve at each
iteration. Similarly, to construct a majorization function of the
quadratic objective in \eqref{eq:prob_quad}, we need to find a matrix
$\mathbf{M}$ such that $\mathbf{M}\succeq\mathbf{R}-\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)$
and a straightforward choice may be $\mathbf{M}=\lambda_{{\rm max}}\left(\mathbf{R}-\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)\right)\mathbf{I}$.
But computing the maximum eigenvalue requires an iterative algorithm and,
since it has to be recomputed at every iteration, this would
be computationally expensive. To maintain the computational efficiency
of the algorithm, here we propose to use some upper bound of $\lambda_{{\rm max}}\left(\mathbf{R}-\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)\right)$
that can be easily computed. To derive such an upper bound, we first
introduce several results that will be useful. The first result reveals
a fact regarding the eigenvalues of the matrix $\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)$,
which follows from \cite{WISL_song2015}.
\begin{lem}
\label{lem:eig_set}Let $\mathbf{B}$ be an $N\times N$ matrix and
$\mathbf{x}\in\mathbb{C}^{N}$ with $\left|x_{n}\right|=1,\,n=1,\ldots,N$.
Then $\mathbf{B}\circ(\mathbf{x}\mathbf{x}^{H})$ and $\mathbf{B}$
share the same set of eigenvalues.
\end{lem}
The second result indicates some relations between the eigenvalues
of the Kronecker product of two matrices and the eigenvalues of the
two individual matrices \cite{roger1994topics}.
\begin{lem}
\label{lem:eig_kron}Let $\mathbf{A}$ and $\mathbf{B}$ be square
matrices of size $M$ and $N$, respectively. Let $\lambda_{1},\ldots,\lambda_{M}$
be the eigenvalues of $\mathbf{A}$ and $\mu_{1},\ldots,\mu_{N}$
be those of $\mathbf{B}$. Then the eigenvalues of $\mathbf{A}\otimes\mathbf{B}$
are $\lambda_{i}\mu_{j},i=1,\ldots,M$, $j=1,\ldots,N$ (including
algebraic multiplicities in all three cases).
\end{lem}
The third result regards bounds of the extreme eigenvalues of Hermitian
Toeplitz matrices, which can be computed by using FFTs \cite{eig_localization}.
\begin{lem}
\label{lem:eig_bounds}Let $\mathbf{T}$ be an $N\times N$ Hermitian
Toeplitz matrix defined by $\{t_{k}\}_{k=0}^{N-1}$ as follows:
\[
\mathbf{T}=\begin{bmatrix}t_{0} & t_{1}^{*} & \ldots & t_{N-1}^{*}\\
t_{1} & t_{0} & \ddots & \vdots\\
\vdots & \ddots & \ddots & t_{1}^{*}\\
t_{N-1} & \ldots & t_{1} & t_{0}
\end{bmatrix}
\]
and $\mathbf{F}$ be a $2N\times2N$ FFT matrix with $F_{m,n}=e^{-j\frac{2mn\pi}{2N}},0\leq m,n<2N$.
Let $\mathbf{c}=[t_{0},t_{1},\cdots,t_{N-1},0,t_{N-1}^{*},\cdots,t_{1}^{*}]^{T}$
and $\boldsymbol{\mu}=\mathbf{F}\mathbf{c}$ be the discrete Fourier
transform of $\mathbf{c}$. Then
\begin{eqnarray}
\lambda_{{\rm max}}(\mathbf{T}) & \leq & \frac{1}{2}\left(\max_{1\leq i\leq N}\mu_{2i}+\max_{1\leq i\leq N}\mu_{2i-1}\right),\label{eq:eig_up}\\
\lambda_{{\rm min}}(\mathbf{T}) & \geq & \frac{1}{2}\left(\min_{1\leq i\leq N}\mu_{2i}+\min_{1\leq i\leq N}\mu_{2i-1}\right).\label{eq:eig_low}
\end{eqnarray}
\end{lem}
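The bounds of Lemma \ref{lem:eig_bounds} are cheap to evaluate; the following sketch (illustrative only, with \texttt{scipy.linalg.toeplitz} used merely to build a test matrix) compares them with the exact extreme eigenvalues of a random Hermitian Toeplitz matrix.
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)
N = 32
t = np.concatenate(([rng.standard_normal()],      # t_0 must be real
                    rng.standard_normal(N - 1) + 1j * rng.standard_normal(N - 1)))
T = toeplitz(t)                                   # Hermitian Toeplitz (row = conj(t))
c = np.concatenate((t, [0], t[:0:-1].conj()))     # [t_0,...,t_{N-1},0,t*_{N-1},...,t*_1]
mu = np.fft.fft(c).real                           # real since c is conjugate-symmetric
ub = 0.5 * (mu[1::2].max() + mu[0::2].max())
lb = 0.5 * (mu[1::2].min() + mu[0::2].min())
w = np.linalg.eigvalsh(T)                         # exact eigenvalues (ascending)
assert lb <= w[0] + 1e-9 and w[-1] <= ub + 1e-9
\end{verbatim}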
Based on these results, we can now obtain an upper bound of $\lambda_{{\rm max}}\left(\mathbf{R}-\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)\right)$
given in the following lemma.
\begin{lem}
\label{lem:eig_upperBound}Let $\mathbf{R}$ and $\mathbf{B}$ be
matrices defined in \eqref{eq:R_mat} and \eqref{eq:B_mat}, respectively.
Let $\mathbf{w}=[w_{0}N,w_{1}(N-1),\ldots,w_{N-1},0,w_{N-1},\ldots,w_{1}(N-1)]^{T},$
$\boldsymbol{\mu}=\mathbf{F}\mathbf{w}$ and $\lambda_{W}=\frac{1}{2}\left(\min_{1\leq i\leq N}\mu_{2i}+\min_{1\leq i\leq N}\mu_{2i-1}\right)$.
Then
\begin{equation}
\lambda_{{\rm max}}\left(\mathbf{R}-\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)\right)\leq\left\Vert \mathbf{R}\right\Vert -\lambda_{B},
\end{equation}
where
\begin{equation}
\lambda_{B}=\begin{cases}
\min\left\{ M\lambda_{W},0\right\} , & M\geq2\\
\lambda_{W}, & M=1,
\end{cases}
\end{equation}
and $\left\Vert \cdot\right\Vert $ can be any submultiplicative matrix
norm.\end{lem}
\begin{IEEEproof}
See Appendix \ref{sec:Proof-of-Lemma-eigUB}.
\end{IEEEproof}
In our case, for computational efficiency, we choose the induced $\ell_{\infty}$-norm
(also known as max-row-sum norm) in Lemma \ref{lem:eig_upperBound},
which is defined as
\begin{equation}
\left\Vert \mathbf{R}\right\Vert _{\infty}=\max_{i=1,\ldots,NM}\sum_{j=1}^{NM}\left|R_{i,j}\right|.
\end{equation}
Now, by choosing $\mathbf{M}=\left(\left\Vert \mathbf{R}\right\Vert _{\infty}-\lambda_{B}\right)\mathbf{I}$
in Lemma \ref{lem:majorizer}, the objective in \eqref{eq:prob_quad}
is majorized by
\[
\begin{aligned} & \,\,u_{2}(\mathbf{x},\mathbf{x}^{(l)})\\
= & \,\,(\left\Vert \mathbf{R}\right\Vert _{\infty}-\lambda_{B})\mathbf{x}^{H}\mathbf{x}\\
& +2{\rm Re}\left(\mathbf{x}^{H}\big(\mathbf{R}\!-\!\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)\!-\!(\left\Vert \mathbf{R}\right\Vert _{\infty}\!-\!\lambda_{B})\mathbf{I}\big)\mathbf{x}^{(l)}\right)\\
& +(\mathbf{x}^{(l)})^{H}((\left\Vert \mathbf{R}\right\Vert _{\infty}-\lambda_{B})\mathbf{I}-\mathbf{R}\!+\!\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big))\mathbf{x}^{(l)}.
\end{aligned}
\]
Again after ignoring the constant terms, the majorized problem of
\eqref{eq:prob_quad} is given by
\begin{equation}
\begin{array}{ll}
\underset{\mathbf{x}\in\mathbb{C}^{NM}}{\mathsf{minimize}} & {\rm Re}\left(\mathbf{x}^{H}\mathbf{y}\right)\\
\mathsf{subject\;to} & \left|x_{n}\right|=1,\,n=1,\ldots,NM,
\end{array}\label{eq:prob_linear}
\end{equation}
where
\begin{equation}
\mathbf{y}=\big(\mathbf{R}\!-\!\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)\big)\mathbf{x}^{(l)}\!-\!(\left\Vert \mathbf{R}\right\Vert _{\infty}\!-\!\lambda_{B})\mathbf{x}^{(l)}.\label{eq:y}
\end{equation}
It is clear that problem \eqref{eq:prob_linear} is separable in the
elements of $\mathbf{x}$ and the solution of the problem is given
by
\begin{equation}
x_{n}=e^{j{\rm arg}(-y_{n})},n=1,\ldots,NM.\label{eq:x_closed}
\end{equation}
According to the general steps of the majorization-minimization method,
we can now implement the algorithm in a straightforward way, that
is, at each iteration, we compute $\mathbf{y}$ according to \eqref{eq:y}
and update $\mathbf{x}$ via \eqref{eq:x_closed}. Clearly, the computational
cost is dominated by the computation of $\mathbf{y}$. To obtain an
efficient implementation, here we further explore the special structure
of the matrices involved in the computation of $\mathbf{y}$.
We first note that the matrix $\mathbf{R}$ in \eqref{eq:R_mat} can
be written as the following block matrix:
\begin{equation}
\mathbf{R}=\begin{bmatrix}\mathbf{R}_{11} & \mathbf{R}_{12} & \cdots & \mathbf{R}_{1M}\\
\mathbf{R}_{21} & \mathbf{R}_{22} & \cdots & \mathbf{R}_{2M}\\
\vdots & \vdots & \ddots & \vdots\\
\mathbf{R}_{M1} & \cdots & \cdots & \mathbf{R}_{MM}
\end{bmatrix},
\end{equation}
where each block is defined as
\begin{equation}
\mathbf{R}_{ij}={\displaystyle \sum_{k=1-N}^{N-1}}w_{k}r_{i,j}^{(l)}(-k)\mathbf{U}_{k},\,i,j=1,\ldots,M.
\end{equation}
It is easy to see that the building blocks $\mathbf{R}_{ij},i,j=1,\ldots,M$,
are Toeplitz matrices and when $i=j,$ they are also Hermitian. In
the following, we introduce a simple result regarding Toeplitz matrices
(not necessarily Hermitian) that can be used to perform the matrix
vector multiplication $\mathbf{R}\mathbf{x}^{(l)}$ more efficiently
via FFT (IFFT).
\begin{lem}
\label{lem:diagonal}Let $\mathbf{T}$ be an $N\times N$ Toeplitz
matrix defined as follows:
\[
\mathbf{T}=\begin{bmatrix}t_{0} & t_{1} & \ldots & t_{N-1}\\
t_{-1} & t_{0} & \ddots & \vdots\\
\vdots & \ddots & \ddots & t_{1}\\
t_{1-N} & \ldots & t_{-1} & t_{0}
\end{bmatrix}
\]
and $\mathbf{F}$ be a $2N\times2N$ FFT matrix with $F_{m,n}=e^{-j\frac{2mn\pi}{2N}},0\leq m,n<2N$.
Then $\mathbf{T}$ can be decomposed as $\mathbf{T}=\frac{1}{2N}\mathbf{F}_{:,1:N}^{H}{\rm Diag}(\mathbf{F}\mathbf{c})\mathbf{F}_{:,1:N}$,
where $\mathbf{c}=[t_{0},t_{-1},\cdots,t_{1-N},0,t_{N-1},\cdots,t_{1}]^{T}$.\end{lem}
\begin{IEEEproof}
See Appendix \ref{sec:Proof-of-Lemma-diag}.
\end{IEEEproof}
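In code, Lemma \ref{lem:diagonal} is what allows a Toeplitz matrix-vector product to be evaluated with three length-$2N$ FFTs instead of forming the matrix; a minimal Python/NumPy sketch (illustrative only; \texttt{scipy.linalg.toeplitz} is used just for the brute-force check) is given below.
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(col, row, x):
    # T x for T with first column col = [t_0, t_{-1}, ..., t_{1-N}]
    # and first row row = [t_0, t_1, ..., t_{N-1}], via the circulant embedding
    N = len(x)
    c = np.concatenate((col, [0], row[:0:-1]))    # c as in the lemma
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, 2 * N))[:N]

rng = np.random.default_rng(4)
N = 16
col = rng.standard_normal(N) + 1j * rng.standard_normal(N)
row = np.concatenate(([col[0]],
                      rng.standard_normal(N - 1) + 1j * rng.standard_normal(N - 1)))
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
assert np.allclose(toeplitz(col, row) @ x, toeplitz_matvec(col, row, x))
\end{verbatim}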
According to Lemma \ref{lem:diagonal}, by defining $\mathbf{H}$
to be the $2N\times N$ matrix composed of the first $N$ columns
of the $2N\times2N$ FFT matrix, i.e.,
\begin{equation}
\mathbf{H}=\mathbf{F}_{:,1:N},\label{eq:H_mat}
\end{equation}
we know that
\begin{equation}
\mathbf{R}_{ij}=\frac{1}{2N}\mathbf{H}^{H}{\rm Diag}(\mathbf{F}\mathbf{c}_{ij})\mathbf{H},
\end{equation}
where
\begin{equation}
\begin{aligned}\mathbf{c}_{ij} & =[w_{0}r_{i,j}^{(l)}(0),w_{1}r_{i,j}^{(l)}(1),\ldots,w_{N-1}r_{i,j}^{(l)}(N-1),\\
& \quad\,\,\,0,w_{N-1}r_{i,j}^{(l)}(1-N),\ldots,w_{1}r_{i,j}^{(l)}(-1)]^{T}.
\end{aligned}
\label{eq:c_ij}
\end{equation}
Thus, the matrix vector multiplication $\mathbf{R}\mathbf{x}^{(l)}$
can be performed as
\begin{equation}
\mathbf{R}\mathbf{x}^{(l)}=\frac{1}{2N}\tilde{\mathbf{H}}^{H}\begin{bmatrix}{\rm Diag}(\mathbf{F}\mathbf{c}_{11}) & \cdots & {\rm Diag}(\mathbf{F}\mathbf{c}_{1M})\\
\vdots & \ddots & \vdots\\
{\rm Diag}(\mathbf{F}\mathbf{c}_{M1}) & \cdots & {\rm Diag}(\mathbf{F}\mathbf{c}_{MM})
\end{bmatrix}\tilde{\mathbf{H}}\mathbf{x}^{(l)},\label{eq:Rx_compute}
\end{equation}
where $\tilde{\mathbf{H}}$ is a $2MN\times MN$ block diagonal matrix
given by
\begin{equation}
\tilde{\mathbf{H}}=\begin{bmatrix}\mathbf{H} & \mathbf{0} & \cdots & \mathbf{0}\\
\mathbf{0} & \mathbf{H} & \ddots & \vdots\\
\vdots & \ddots & \ddots & \mathbf{0}\\
\mathbf{0} & \cdots & \mathbf{0} & \mathbf{H}
\end{bmatrix}.
\end{equation}
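As an illustration of \eqref{eq:Rx_compute}, the following NumPy sketch (ours; the vectors $\mathbf{c}_{ij}$ in \eqref{eq:c_ij} are assumed to be given) carries out the block-wise multiplication with $M$ forward FFTs for $\tilde{\mathbf{H}}\mathbf{x}^{(l)}$, $M^{2}$ FFTs for the $\mathbf{F}\mathbf{c}_{ij}$, and $M$ inverse FFTs, i.e., $M^{2}+2M$ operations in total.
\begin{verbatim}
import numpy as np

def R_times_x(C, X):
    # C[i][j]: length-2N vector c_ij; X: N x M array whose j-th column
    # is the block x_j of the stacked vector x^{(l)}
    N, M = X.shape
    HX = np.fft.fft(X, n=2 * N, axis=0)     # \tilde{H} x^{(l)}, block by block (M FFTs)
    Z = np.zeros((2 * N, M), dtype=complex)
    for i in range(M):
        for j in range(M):                  # Diag(F c_ij) acting on the j-th block
            Z[:, i] += np.fft.fft(C[i][j], n=2 * N) * HX[:, j]
    Y = np.fft.ifft(Z, axis=0)[:N, :]       # (1/2N) H^H applied block-wise (M IFFTs)
    return Y.reshape(-1, order='F')         # re-stack the M blocks into R x^{(l)}
\end{verbatim}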
From \eqref{eq:Rx_compute}, we can see that the multiplication $\mathbf{R}\mathbf{x}^{(l)}$
takes $M^{2}+2M$ FFT (IFFT) operations once all the vectors $\mathbf{c}_{ij},i,j=1,\ldots,M,$
are available. To form these vectors,
all the autocorrelations and cross-correlations, i.e., $r_{i,j}^{(l)}(k),i,j=1,\ldots,M,\,k=1-N,\ldots,N-1,$
are needed, which requires another $M^{2}$ FFT (IFFT) operations.
Similarly, $\left\Vert \mathbf{R}\right\Vert _{\infty}$ can also
be computed with $M^{2}+2M$ FFT (IFFT) operations, since it can be
obtained by taking the largest element of the vector $\tilde{\mathbf{R}}\mathbf{1}$,
where $\tilde{\mathbf{R}}$ is the matrix with each element being
the modulus of the corresponding element of $\mathbf{R}$, i.e., $\tilde{R}_{i,j}=\left|R_{i,j}\right|,i,j=1,\ldots,NM.$
Finally, to compute $\big(\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)\big)\mathbf{x}^{(l)}$
we first conduct some transformations as follows:
\begin{equation}
\begin{aligned} & \,\,\big(\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)\big)\mathbf{x}^{(l)}\\
= & \,\,{\rm diag}\big(\mathbf{B}{\rm Diag}(\mathbf{x}^{(l)})\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)^{T}\big)\\
= & \,\,{\rm diag}\big(\mathbf{B}\big(\mathbf{x}^{(l)}\circ\big(\mathbf{x}^{(l)}\big)^{*})\big(\mathbf{x}^{(l)}\big)^{T}\big)\\
= & \,\,{\rm diag}\big(\mathbf{B}\mathbf{1}_{NM\times1}\big(\mathbf{x}^{(l)}\big)^{T}\big)\\
= & \,\,(\mathbf{B}\mathbf{1}_{NM\times1})\circ\mathbf{x}^{(l)}\\
= & \,\,\left((\mathbf{1}_{M\times M}\otimes\mathbf{W})\mathbf{1}_{NM\times1}\right)\circ\mathbf{x}^{(l)}\\
= & \,\,\left(M\mathbf{1}_{M\times1}\otimes(\mathbf{W}\mathbf{1}_{N\times1})\right)\circ\mathbf{x}^{(l)}.
\end{aligned}
\end{equation}
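The chain of identities above can be checked numerically. The following small sketch (our own sanity check, with an arbitrary symmetric Toeplitz stand-in for $\mathbf{W}$) verifies that $\big(\mathbf{B}\circ\big(\mathbf{x}\mathbf{x}^{H}\big)\big)\mathbf{x}=\left(M\mathbf{1}_{M\times1}\otimes(\mathbf{W}\mathbf{1}_{N\times1})\right)\circ\mathbf{x}$ for a unimodular $\mathbf{x}$ and $\mathbf{B}=\mathbf{1}_{M\times M}\otimes\mathbf{W}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 3
w = rng.random(N)
W = np.array([[w[abs(i - j)] for j in range(N)] for i in range(N)])  # symmetric Toeplitz
B = np.kron(np.ones((M, M)), W)
x = np.exp(1j * 2 * np.pi * rng.random(N * M))                        # unimodular vector
lhs = (B * np.outer(x, x.conj())) @ x
rhs = np.kron(M * np.ones(M), W @ np.ones(N)) * x
print(np.allclose(lhs, rhs))                                          # True
\end{verbatim}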
Since $\mathbf{W}$ is Toeplitz, we know from Lemma \ref{lem:diagonal}
that it can be decomposed as
\begin{equation}
\mathbf{W}=\frac{1}{2N}\mathbf{H}^{H}{\rm Diag}(\mathbf{F}\mathbf{w})\mathbf{H},
\end{equation}
where $\mathbf{w}$ is the same as the one defined in Lemma \ref{lem:eig_upperBound}.
Thus, $\big(\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)\big)\mathbf{x}^{(l)}$
can be computed with $3$ FFT (IFFT) operations.
In summary, to compute $\mathbf{y}$ as in \eqref{eq:y}, around $3M^{2}+4M+3$
$2N$-point FFT (IFFT) operations are needed. Since the computational
complexity of one FFT (IFFT) is $\mathcal{O}(N\log N),$ the per iteration
computational complexity of the proposed algorithm is of order $\mathcal{O}(M^{2}N\log N)$.
The overall algorithm is summarized in Algorithm \ref{alg:MWISL-Set}.
\begin{algorithm}[tbh]
\begin{algor}[1]
\item [{Require:}] \begin{raggedright}
number of sequences $M$, sequence length $N$, weights
$\{w_{k}\geq0\}_{k=0}^{N-1}$
\par\end{raggedright}
\item [{{*}}] Set $l=0$, initialize $\mathbf{x}^{(0)}$ of length $MN$.
\item [{{*}}] \begin{raggedright}
$\mathbf{w}=[w_{0}N,w_{1}(N-1),\ldots,w_{N-1},0,w_{N-1},\ldots,w_{1}(N-1)]^{T}$
\par\end{raggedright}
\item [{{*}}] $\boldsymbol{\mu}=\mathbf{F}\mathbf{w}$
\item [{{*}}] \begin{raggedright}
$\lambda_{W}=\frac{1}{2}\left({\displaystyle \min_{1\leq i\leq N}}\mu_{2i}+{\displaystyle \min_{1\leq i\leq N}}\mu_{2i-1}\right)$
\par\end{raggedright}
\item [{{*}}] \begin{raggedright}
$\lambda_{B}=\begin{cases}
\min\left\{ M\lambda_{W},0\right\} , & M\geq2\\
\lambda_{W}, & M=1
\end{cases}$
\par\end{raggedright}
\item [{repeat}]~
\begin{algor}[1]
\item [{{*}}] \begin{raggedright}
Compute $r_{i,j}^{(l)}(k),i,j=1,\ldots,M,k=1-N,\dots,N-1$.
\par\end{raggedright}
\item [{{*}}] \begin{raggedright}
Compute $\mathbf{c}_{ij},i,j=1,\ldots,M$ according to \eqref{eq:c_ij}.
\par\end{raggedright}
\item [{{*}}] Compute $\mathbf{R}\mathbf{x}^{(l)}$ according to \eqref{eq:Rx_compute}.
\item [{{*}}] \begin{raggedright}
Compute $\left\Vert \mathbf{R}\right\Vert _{\infty}$ based
on $\left|\mathbf{c}_{ij}\right|,i,j=1,\ldots,M$.
\par\end{raggedright}
\item [{{*}}] $\mathbf{p}=\frac{M}{2N}\mathbf{1}_{M\times1}\otimes\left(\mathbf{H}^{H}\left(\boldsymbol{\mu}\circ(\mathbf{H}\mathbf{1})\right)\right)$
\item [{{*}}] $\mathbf{y}=\frac{\mathbf{R}\mathbf{x}^{(l)}-\mathbf{p}\circ\mathbf{x}^{(l)}}{\left\Vert \mathbf{R}\right\Vert _{\infty}\!-\!\lambda_{B}}\!-\!\mathbf{x}^{(l)}$
\item [{{*}}] $x_{n}^{(l+1)}=e^{j{\rm arg}(-y_{n})},\,n=1,\ldots,MN$
\item [{{*}}] $l\leftarrow l+1$
\end{algor}
\item [{until}] convergence
\end{algor}
\protect\caption{\label{alg:MWISL-Set}The MM Algorithm for problem \eqref{eq:seqset_prob_weight}.}
\end{algorithm}
\section{Simplified MM for the Case without Weights\label{sec:Adaptive-MM}}
In the previous section, we developed an algorithm for problem \eqref{eq:seqset_prob_weight}.
By simply choosing weights $w_{k}=1,k=1-N,\ldots,N-1$, the algorithm
can be readily applied to solve problem \eqref{eq:seqset_prob}. However,
as analyzed in the previous section, the algorithm requires about
$3M^{2}+4M$ $2N$-point FFT (IFFT) operations at every iteration.
In this section, we will derive an algorithm for problem \eqref{eq:seqset_prob},
which requires only $2M$ $2N$-point FFT (IFFT) operations per iteration.
Let us denote the sequence covariance matrix at lag $k$ by $\mathbf{R}_{k}$,
i.e.,
\begin{eqnarray}
\mathbf{R}_{k} & = & \begin{bmatrix}r_{1,1}(k) & r_{1,2}(k) & \ldots & r_{1,M}(k)\\
r_{2,1}(k) & r_{2,2}(k) & & r_{2,M}(k)\\
\vdots & & \ddots & \vdots\\
r_{M,1}(k) & \cdots & \cdots & r_{M,M}(k)
\end{bmatrix}\\
 & & k=1-N,\ldots,N-1.\nonumber
\end{eqnarray}
By using \eqref{eq:cross_corr_mat}, it is easy to see that
\begin{equation}
\mathbf{R}_{k}=\left(\mathbf{X}^{H}\mathbf{U}_{k}\mathbf{X}\right)^{T}=\mathbf{R}_{-k}^{H},\,k=0,\ldots,N-1,
\end{equation}
where
\begin{equation}
\mathbf{X}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{M}].
\end{equation}
With the above matrix notation, problem \eqref{eq:seqset_prob} can
be rewritten as
\begin{equation}
\begin{array}{ll}
\underset{\mathbf{X}\in\mathbb{C}^{N\times M}}{\mathsf{minimize}} & {\displaystyle \sum_{k=1-N}^{N-1}}\left\Vert \mathbf{X}^{H}\mathbf{U}_{k}\mathbf{X}\right\Vert _{F}^{2}-N^{2}M\\
\mathsf{subject\;to} & \left|X_{i,j}\right|=1,\,i=1,\ldots,N,\,j=1,\ldots,M.
\end{array}\label{eq:prob_set_mat}
\end{equation}
Since
\begin{eqnarray*}
\left\Vert \mathbf{X}^{H}\mathbf{U}_{k}\mathbf{X}\right\Vert _{F}^{2} & = & {\rm Tr}\left(\mathbf{X}^{H}\mathbf{U}_{k}^{H}\mathbf{X}\mathbf{X}^{H}\mathbf{U}_{k}\mathbf{X}\right)\\
& = & {\rm Tr}\left(\mathbf{X}\mathbf{X}^{H}\mathbf{U}_{k}^{H}\mathbf{X}\mathbf{X}^{H}\mathbf{U}_{k}\right)\\
& = & {\rm vec}\left(\mathbf{X}\mathbf{X}^{H}\right)^{H}\left(\mathbf{U}_{k}^{H}\otimes\mathbf{U}_{k}^{H}\right){\rm vec}\left(\mathbf{X}\mathbf{X}^{H}\right),
\end{eqnarray*}
we have
\begin{equation}
{\displaystyle \sum_{k=1-N}^{N-1}}\left\Vert \mathbf{X}^{H}\mathbf{U}_{k}\mathbf{X}\right\Vert _{F}^{2}={\rm vec}\left(\mathbf{X}\mathbf{X}^{H}\right)^{H}\tilde{\mathbf{L}}{\rm vec}\left(\mathbf{X}\mathbf{X}^{H}\right),\label{eq:obj_vec}
\end{equation}
where
\begin{eqnarray}
\tilde{\mathbf{L}} & = & \sum_{k=1-N}^{N-1}\left(\mathbf{U}_{k}^{H}\otimes\mathbf{U}_{k}^{H}\right).\label{eq:L_tilde}
\end{eqnarray}
Let us define
\begin{eqnarray}
\mathbf{h}_{p} & = & [1,e^{j\omega_{p}},\cdots,e^{j\omega_{p}(N-1)}]^{T},\,p=1,\ldots,2N,
\end{eqnarray}
where $\omega_{p}=\frac{2\pi}{2N}(p-1),\:p=1,\cdots,2N.$ Since $\mathbf{U}_{k}$
is Toeplitz and can be written in terms of $\mathbf{h}_{p},p=1,\ldots,2N$
according to Lemma \ref{lem:diagonal}, it can be shown that the matrix
$\tilde{\mathbf{L}}$ defined in \eqref{eq:L_tilde} can also be written
as
\begin{equation}
\tilde{\mathbf{L}}=\frac{1}{2N}\sum_{p=1}^{2N}{\rm vec}(\mathbf{h}_{p}\mathbf{h}_{p}^{H}){\rm vec}(\mathbf{h}_{p}\mathbf{h}_{p}^{H})^{H},
\end{equation}
and then we have
\begin{equation}
\begin{aligned} & {\displaystyle \sum_{k=1-N}^{N-1}}\left\Vert \mathbf{X}^{H}\mathbf{U}_{k}\mathbf{X}\right\Vert _{F}^{2}\\
= & \frac{1}{2N}\sum_{p=1}^{2N}\left|{\rm vec}\left(\mathbf{X}\mathbf{X}^{H}\right)^{H}{\rm vec}(\mathbf{h}_{p}\mathbf{h}_{p}^{H})\right|^{2}\\
= & \frac{1}{2N}\sum_{p=1}^{2N}{\rm Tr}(\mathbf{X}\mathbf{X}^{H}\mathbf{h}_{p}\mathbf{h}_{p}^{H})^{2}\\
= & \frac{1}{2N}\sum_{p=1}^{2N}\left\Vert \mathbf{X}^{H}\mathbf{h}_{p}\right\Vert _{2}^{4}.
\end{aligned}
\end{equation}
Thus, problem \eqref{eq:prob_set_mat} can be further reformulated
as
\begin{equation}
\begin{array}{ll}
\underset{\mathbf{X}\in\mathbb{C}^{N\times M}}{\mathsf{minimize}} & \frac{1}{2N}{\displaystyle \sum_{p=1}^{2N}}\left\Vert \mathbf{X}^{H}\mathbf{h}_{p}\right\Vert _{2}^{4}-N^{2}M\\
\mathsf{subject\;to} & \left|X_{i,j}\right|=1,\,i=1,\ldots,N,\,j=1,\ldots,M.
\end{array}\label{eq:prob_quartic}
\end{equation}
To construct a majorization function of the objective in \eqref{eq:prob_quartic},
we propose to majorize each $\left\Vert \mathbf{X}^{H}\mathbf{h}_{p}\right\Vert _{2}^{4}$
according to the following lemma.
\begin{lem}
\label{lem:quartic_major}Let $f(x)=x^{4},$ $x\in[0,t]$. Then for
given $x_{0}\in[0,t)$, $f(x)$ is majorized at $x_{0}$ over the
interval $[0,t]$ by the following quadratic function:
\begin{equation}
ax^{2}+(4x_{0}^{3}-2ax_{0})x+ax_{0}^{2}-3x_{0}^{4},\label{eq:quad_major}
\end{equation}
where
\begin{equation}
a=t^{2}+2x_{0}t+3x_{0}^{2}.\label{eq:a_val}
\end{equation}
\end{lem}
\begin{IEEEproof}
See Appendix \ref{sec:Proof-of-Lemma-quartic-major}.
\end{IEEEproof}
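As a quick numerical illustration (our own check, not part of the paper), one can verify on a grid that the quadratic function \eqref{eq:quad_major} with $a$ as in \eqref{eq:a_val} indeed dominates $x^{4}$ on $[0,t]$ and touches it at $x_{0}$:
\begin{verbatim}
import numpy as np

t, x0 = 2.0, 0.7
a = t**2 + 2 * x0 * t + 3 * x0**2                                    # eq. (a_val)
x = np.linspace(0.0, t, 1001)
g = a * x**2 + (4 * x0**3 - 2 * a * x0) * x + a * x0**2 - 3 * x0**4  # eq. (quad_major)
print(np.min(g - x**4))   # ~ 0 up to rounding: g dominates x^4 and touches it at x0
\end{verbatim}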
Given $\mathbf{X}^{(l)}$ at iteration $l$, by taking $\left\Vert \mathbf{X}^{H}\mathbf{h}_{p}\right\Vert _{2}$
as a whole, we know from Lemma \ref{lem:quartic_major} that each
$\left\Vert \mathbf{X}^{H}\mathbf{h}_{p}\right\Vert _{2}^{4}$ (for
any $p\in\{1,\ldots,2N\}$) is majorized by
\begin{equation}
a_{p}\left\Vert \mathbf{X}^{H}\mathbf{h}_{p}\right\Vert _{2}^{2}+b_{p}\left\Vert \mathbf{X}^{H}\mathbf{h}_{p}\right\Vert _{2}+a_{p}\left\Vert \mathbf{X}^{(l)H}\mathbf{h}_{p}\right\Vert _{2}^{2}-3\left\Vert \mathbf{X}^{(l)H}\mathbf{h}_{p}\right\Vert _{2}^{4},
\end{equation}
where
\begin{eqnarray}
a_{p} & = & t^{2}+2t\left\Vert \mathbf{X}^{(l)H}\mathbf{h}_{p}\right\Vert _{2}+3\left\Vert \mathbf{X}^{(l)H}\mathbf{h}_{p}\right\Vert _{2}^{2},\label{eq:a_bound}\\
b_{p} & = & 4\left\Vert \mathbf{X}^{(l)H}\mathbf{h}_{p}\right\Vert _{2}^{3}-2a_{p}\left\Vert \mathbf{X}^{(l)H}\mathbf{h}_{p}\right\Vert _{2},
\end{eqnarray}
and $t$ is an upper bound of $\left\Vert \mathbf{X}^{H}\mathbf{h}_{p}\right\Vert _{2}$
over the set of interest at the current iteration. Since the objective
is nonincreasing over the iterations in the MM framework, at the current iteration
$l$ it is sufficient to consider the set on which the objective
is no larger than the current objective evaluated at $\mathbf{X}^{(l)}$;
on this set, $\left\Vert \mathbf{X}^{H}\mathbf{h}_{p}\right\Vert _{2}^{4}\leq{\displaystyle \sum_{q=1}^{2N}}\left\Vert \mathbf{X}^{H}\mathbf{h}_{q}\right\Vert _{2}^{4}\leq{\displaystyle \sum_{q=1}^{2N}}\left\Vert \mathbf{X}^{(l)H}\mathbf{h}_{q}\right\Vert _{2}^{4}$ for every $p$.
Hence we can choose $t=\left({\displaystyle \sum_{p=1}^{2N}}\left\Vert \mathbf{X}^{(l)H}\mathbf{h}_{p}\right\Vert _{2}^{4}\right)^{1/4}$
here. Then the majorized problem of \eqref{eq:prob_quartic} is given
by (ignoring the constant terms and the scaling factor $\frac{1}{2N}$)
\begin{equation}
\begin{array}{ll}
\underset{\mathbf{X}\in\mathbb{C}^{N\times M}}{\mathsf{minimize}} & {\displaystyle \sum_{p=1}^{2N}}\left(a_{p}\left\Vert \mathbf{X}^{H}\mathbf{h}_{p}\right\Vert _{2}^{2}+b_{p}\left\Vert \mathbf{X}^{H}\mathbf{h}_{p}\right\Vert _{2}\right)\\
\mathsf{subject\;to} & \left|X_{i,j}\right|=1,\,i=1,\ldots,N,\,j=1,\ldots,M.
\end{array}\label{eq:quartic_major1}
\end{equation}
Let us first take a look at the first term of the objective. It can
be rewritten as follows:
\begin{equation}
\begin{aligned}{\displaystyle \sum_{p=1}^{2N}}a_{p}\left\Vert \mathbf{X}^{H}\mathbf{h}_{p}\right\Vert _{2}^{2} & ={\displaystyle \sum_{p=1}^{2N}}a_{p}{\rm Tr}\left(\mathbf{X}^{H}\mathbf{h}_{p}\mathbf{h}_{p}^{H}\mathbf{X}\right)\\
& ={\rm Tr}\left(\mathbf{X}^{H}\left(\sum_{p=1}^{2N}a_{p}\mathbf{h}_{p}\mathbf{h}_{p}^{H}\right)\mathbf{X}\right)\\
& ={\rm Tr}\left(\mathbf{X}^{H}\mathbf{H}^{H}{\rm Diag}(\mathbf{a})\mathbf{H}\mathbf{X}\right),
\end{aligned}
\label{eq:first_term}
\end{equation}
where $\mathbf{H}=[\mathbf{h}_{1},\ldots,\mathbf{h}_{2N}]^{H}$ is
the matrix defined in \eqref{eq:H_mat} and $\mathbf{a}=[a_{1},\ldots,a_{2N}]^{T}$.
From Lemma \ref{lem:diagonal} and Lemma \ref{lem:eig_bounds}, we
can see that the matrix $\mathbf{H}^{H}{\rm Diag}(\mathbf{a})\mathbf{H}$
is Hermitian Toeplitz and its maximum eigenvalue is bounded above
as follows:
\begin{equation}
\lambda_{{\rm max}}(\mathbf{H}^{H}{\rm Diag}(\mathbf{a})\mathbf{H})\leq N\left(\max_{1\leq i\leq N}a_{2i}+\max_{1\leq i\leq N}a_{2i-1}\right).
\end{equation}
Let us define
\begin{equation}
\lambda_{a}=N\left(\max_{1\leq i\leq N}a_{2i}+\max_{1\leq i\leq N}a_{2i-1}\right),
\end{equation}
then by choosing $\mathbf{M}=\lambda_{a}\mathbf{I}$ in Lemma \ref{lem:majorizer},
the function in \eqref{eq:first_term} is majorized by
\begin{equation}
\begin{aligned} & \,\,\lambda_{a}{\rm Tr}(\mathbf{X}^{H}\mathbf{X})\\
& +2{\rm Re}\left({\rm Tr}\left(\mathbf{X}^{H}\left(\mathbf{H}^{H}{\rm Diag}(\mathbf{a})\mathbf{H}-\lambda_{a}\mathbf{I}\right)\mathbf{X}^{(l)}\right)\right)\\
& +{\rm Tr}\left(\mathbf{X}^{(l)H}\left(\lambda_{a}\mathbf{I}-\mathbf{H}^{H}{\rm Diag}(\mathbf{a})\mathbf{H}\right)\mathbf{X}^{(l)}\right).
\end{aligned}
\label{eq:first_term_major}
\end{equation}
Note that ${\rm Tr}(\mathbf{X}^{H}\mathbf{X})=MN,$ so the first term
of \eqref{eq:first_term_major} is just a constant.
For the second term of the objective in \eqref{eq:quartic_major1},
we have
\begin{equation}
\begin{array}{cl}
& {\displaystyle \sum_{p=1}^{2N}}b_{p}\left\Vert \mathbf{X}^{H}\mathbf{h}_{p}\right\Vert _{2}\\
= & {\displaystyle \sum_{p=1}^{2N}}\left(4\left\Vert \mathbf{X}^{(l)H}\mathbf{h}_{p}\right\Vert _{2}^{2}-2a_{p}\right)\left\Vert \mathbf{X}^{(l)H}\mathbf{h}_{p}\right\Vert _{2}\left\Vert \mathbf{X}^{H}\mathbf{h}_{p}\right\Vert _{2}\\
\text{\ensuremath{\leq}} & {\displaystyle \sum_{p=1}^{2N}}\left(4\left\Vert \mathbf{X}^{(l)H}\mathbf{h}_{p}\right\Vert _{2}^{2}-2a_{p}\right){\rm Re}\left(\mathbf{h}_{p}^{H}\mathbf{X}^{(l)}\mathbf{X}^{H}\mathbf{h}_{p}\right)\\
= & {\rm Re}\left({\rm Tr}\left(\tilde{\mathbf{Y}}\mathbf{X}^{H}\right)\right)
\end{array}\label{eq:bX_major}
\end{equation}
where
\begin{equation}
\tilde{\mathbf{Y}}=\left({\displaystyle \sum_{p=1}^{2N}}\left(4\left\Vert \mathbf{X}^{(l)H}\mathbf{h}_{p}\right\Vert _{2}^{2}-2a_{p}\right)\mathbf{h}_{p}\mathbf{h}_{p}^{H}\right)\mathbf{X}^{(l)}
\end{equation}
and the inequality follows from the Cauchy-Schwarz inequality and
the fact
\begin{equation}
\begin{aligned}4\left\Vert \mathbf{X}^{(l)H}\mathbf{h}_{p}\right\Vert _{2}^{2}-2a_{p} & =-2\left(\left\Vert \mathbf{X}^{(l)H}\mathbf{h}_{p}\right\Vert _{2}+t\right)^{2}\leq0.\end{aligned}
\end{equation}
Since the inequality in \eqref{eq:bX_major} holds with equality when
$\mathbf{X}=\mathbf{X}^{(l)}$, ${\rm Re}\left({\rm Tr}\left(\tilde{\mathbf{Y}}\mathbf{X}^{H}\right)\right)$
majorizes the second term of the objective in \eqref{eq:quartic_major1}
at $\mathbf{X}^{(l)}$.
By adding the two majorization functions, i.e., \eqref{eq:first_term_major}
and \eqref{eq:bX_major}, we get the majorized problem of \eqref{eq:quartic_major1}
(ignoring the constant terms):
\begin{equation}
\begin{array}{ll}
\underset{\mathbf{X}\in\mathbb{C}^{N\times M}}{\mathsf{minimize}} & {\rm Re}\left({\rm Tr}\left(\mathbf{Y}\mathbf{X}^{H}\right)\right)\\
\mathsf{subject\;to} & \left|X_{i,j}\right|=1,\,i=1,\ldots,N,\,j=1,\ldots,M,
\end{array}\label{eq:prob_linear_X}
\end{equation}
where
\begin{equation}
\begin{aligned}\mathbf{Y} & =\tilde{\mathbf{Y}}+2\left(\mathbf{H}^{H}{\rm Diag}(\mathbf{a})\mathbf{H}-\lambda_{a}\mathbf{I}\right)\mathbf{X}^{(l)}\\
& =4\left({\displaystyle \sum_{p=1}^{2N}}\left\Vert \mathbf{X}^{(l)H}\mathbf{h}_{p}\right\Vert _{2}^{2}\mathbf{h}_{p}\mathbf{h}_{p}^{H}\right)\mathbf{X}^{(l)}-2\lambda_{a}\mathbf{X}^{(l)}.
\end{aligned}
\label{eq:Y_mat}
\end{equation}
It is easy to see that problem \eqref{eq:prob_linear_X} can be rewritten
as
\begin{equation}
\begin{array}{ll}
\underset{\mathbf{X}\in\mathbb{C}^{N\times M}}{\mathsf{minimize}} & {\displaystyle \sum_{i=1}^{N}\sum_{j=1}^{M}{\rm Re}\left(X_{i,j}^{*}Y_{i,j}\right)}\\
\mathsf{subject\;to} & \left|X_{i,j}\right|=1,\,i=1,\ldots,N,\,j=1,\ldots,M,
\end{array}
\end{equation}
which is separable in the elements of $\mathbf{X}$ and the solution
of the problem is given by
\begin{equation}
X_{i,j}=e^{j{\rm arg}(-Y_{i,j})},i=1,\ldots,N,\,j=1,\ldots,M.\label{eq:X_closed}
\end{equation}
Then at every iteration of the algorithm, we just compute the matrix
$\mathbf{Y}$ given in \eqref{eq:Y_mat} and update $\mathbf{X}$
according to \eqref{eq:X_closed}. It is worth noting that the matrix
$\mathbf{Y}$ in \eqref{eq:Y_mat} can be computed efficiently via
FFT (IFFT), since it can be rewritten as
\begin{equation}
\mathbf{Y}=4\mathbf{H}^{H}{\rm Diag}(\mathbf{q})\mathbf{H}\mathbf{X}^{(l)}-2\lambda_{a}\mathbf{X}^{(l)},\label{eq:Y_mat_FFT}
\end{equation}
where
\begin{equation}
\mathbf{q}=\left|\mathbf{H}\mathbf{X}^{(l)}\right|^{2}\mathbf{1}_{M\times1}
\end{equation}
and $\left|\cdot\right|^{2}$ denotes the element-wise absolute-squared
value. The overall algorithm is then summarized in Algorithm \ref{alg:MWISL-Set-adaptive}
and we can see that $2M$ $2N$-point FFT (IFFT) operations are needed
at each iteration.
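To make the update concrete, the following NumPy sketch (ours; the experiments in Section \ref{sec:Numerical-Experiments} use a Matlab implementation) performs one iteration of Algorithm \ref{alg:MWISL-Set-adaptive} with $2M$ $2N$-point FFT/IFFT operations:
\begin{verbatim}
import numpy as np

def mm_corr_iteration(X):
    # X: N x M array with unimodular entries (the current iterate X^{(l)})
    N, M = X.shape
    HX = np.fft.fft(X, n=2 * N, axis=0)             # H X^{(l)}  (M FFTs)
    q = np.sum(np.abs(HX) ** 2, axis=1)             # q = |H X^{(l)}|^2 1
    t = np.sum(q ** 2) ** 0.25
    a = t**2 + 2 * t * np.sqrt(q) + 3 * q
    lam_a = N * (a[1::2].max() + a[0::2].max())     # a_{2i} and a_{2i-1} (1-based)
    Y = 4 * (2 * N) * np.fft.ifft(q[:, None] * HX, axis=0)[:N, :] \
        - 2 * lam_a * X                             # Y = 4 H^H Diag(q) H X - 2 lam_a X
    return np.exp(1j * np.angle(-Y))                # element-wise closed-form update
\end{verbatim}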
\begin{algorithm}[tbh]
\begin{algor}[1]
\item [{Require:}] \begin{raggedright}
number of sequences $M$, sequence length $N$
\par\end{raggedright}
\item [{{*}}] Set $l=0$, initialize $\mathbf{X}^{(0)}$ of size $N\times M$.
\item [{repeat}]~
\begin{algor}[1]
\item [{{*}}] $\mathbf{q}=\left|\mathbf{H}\mathbf{X}^{(l)}\right|^{2}\mathbf{1}_{M\times1}$
\item [{{*}}] $t=\left(\mathbf{1}^{T}\left(\mathbf{q}\circ\mathbf{q}\right)\right)^{\frac{1}{4}}$
\item [{{*}}] \begin{raggedright}
$a_{i}=t^{2}+2t\sqrt{q_{i}}+3q_{i},i=1,\ldots,2N$
\par\end{raggedright}
\item [{{*}}] $\lambda_{a}=N\left({\displaystyle \max_{1\leq i\leq N}}a_{2i}+{\displaystyle \max_{1\leq i\leq N}}a_{2i-1}\right)$
{}
\item [{{*}}] $\mathbf{Y}=4\mathbf{H}^{H}{\rm Diag}(\mathbf{q})\mathbf{H}\mathbf{X}^{(l)}-2\lambda_{a}\mathbf{X}^{(l)}$
\item [{{*}}] \begin{raggedright}
$X_{i,j}^{(l+1)}\negthinspace=\negthinspace e^{j{\rm arg}(-Y_{i,j})},i=1,\ldots,N,j=1,\ldots,M$
\par\end{raggedright}
\item [{{*}}] $l\leftarrow l+1$
\end{algor}
\item [{until}] convergence
\end{algor}
\protect\caption{\label{alg:MWISL-Set-adaptive}The MM Algorithm for problem \eqref{eq:seqset_prob}.}
\end{algorithm}
\section{Convergence Analysis and Acceleration Scheme\label{sec:Convergence-Acc}}
\subsection{Convergence Analysis}
The algorithms developed in the previous sections are all based on
the general majorization-minimization method and according to subsection
\ref{sub:MM-Method} we know that the sequences of objective values
generated by the algorithms at every iteration are nonincreasing.
Since it is easy to see that the objective functions of problems \eqref{eq:CSS_prob},
\eqref{eq:seqset_prob} and \eqref{eq:seqset_prob_weight} are all
bounded below by $0,$ the sequences of objective values are guaranteed
to converge to finite values.
In the following, we establish the convergence of the solution sequences
generated by the algorithms to stationary points. Let $f(\mathbf{x})$
be a differentiable function and $\mathcal{X}$ be an arbitrary constraint
set, then a point $\mathbf{x}^{\star}\in\mathcal{X}$ is said to be
a stationary point of the problem
\begin{equation}
\begin{array}{ll}
\underset{\mathbf{x}\in\mathcal{X}}{\mathsf{minimize}} & f(\mathbf{x})\end{array}
\end{equation}
if it satisfies the following first-order optimality condition \cite{Bertsekas2003}:
\[
\nabla f(\mathbf{x}^{\star})^{T}\mathbf{z}\geq0,\,\forall\mathbf{z}\in T_{\mathcal{X}}(\mathbf{x}^{\star}),
\]
where $T_{\mathcal{X}}(\mathbf{x}^{\star})$ denotes the tangent cone
of $\mathcal{X}$ at $\mathbf{x}^{\star}.$ The convergence property
of the CSS design algorithm in Algorithm \ref{alg:CSS-MM} can be
stated as follows.
\begin{thm}
\label{thm:MM-converge}Let $\{\mathbf{x}_{m}^{(l)}\}_{m=1}^{M},l=0,1,\ldots$
be the sequence of iterates generated by Algorithm \ref{alg:CSS-MM}.
Then the sequence has at least one limit point and every limit point
of the sequence is a stationary point of problem \eqref{eq:CSS_prob}. \end{thm}
\begin{IEEEproof}
The proof is similar to that given in \cite{WISL_song2015} and we
omit it here.
\end{IEEEproof}
Note that the convergence results of Algorithms \ref{alg:MWISL-Set}
and \ref{alg:MWISL-Set-adaptive} can be stated similarly and the
sequences generated by the two algorithms converge to stationary points
of problems \eqref{eq:seqset_prob_weight} and \eqref{eq:seqset_prob},
respectively.
\subsection{Acceleration Scheme \label{sec:Acceleration-Schemes}}
The popularity of the MM method stems from its simplicity and numerical
stability (monotonicity), but these advantages usually come at the expense
of slow convergence. Owing to the successive majorization steps carried out
in the derivation of the majorization functions,
the convergence of the proposed algorithms can be expected to be slow. To address
this issue, we can apply an acceleration scheme, and in this subsection
we briefly introduce one that can be easily applied to speed
up the proposed MM algorithms, namely the squared iterative method
(SQUAREM) \cite{SQUAREM}, which was originally proposed to accelerate
Expectation\textendash Maximization (EM) algorithms. It seeks
to approximate Newton\textquoteright s method for finding a fixed
point of the EM algorithm map and generally achieves superlinear convergence.
Since SQUAREM only requires the EM updating map, it can be readily
applied to any EM-type algorithm. In \cite{WISL_song2015}, it was
applied to accelerate some MM algorithms, with some modifications
made to maintain the monotonicity of the original MM algorithm and
to ensure the feasibility of the solution after every iteration. The
modified scheme is summarized in Algorithm 3 in \cite{WISL_song2015}
and we apply it to accelerate the proposed MM algorithms in this
paper.
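The following Python snippet is a rough sketch of one common SQUAREM variant wrapped around a generic MM update map, with a simple monotonicity safeguard and a projection back onto the unimodular constraint; it is only an illustration of the idea and not the exact scheme of Algorithm 3 in \cite{WISL_song2015}.
\begin{verbatim}
import numpy as np

def squarem_step(x0, mm_update, obj):
    # mm_update: the MM fixed-point map (e.g., one iteration of Algorithm 1 or 2)
    # obj: the objective function to be minimized
    x1 = mm_update(x0)
    x2 = mm_update(x1)
    r = x1 - x0
    v = (x2 - x1) - r
    alpha = -np.linalg.norm(r) / max(np.linalg.norm(v), 1e-16)
    x_acc = x0 - 2 * alpha * r + alpha**2 * v
    x_acc = x_acc / np.abs(x_acc)        # project back onto the unimodular constraint
    x_new = mm_update(x_acc)
    # simple safeguard: fall back to the plain MM point if the objective increased
    return x_new if obj(x_new) <= obj(x2) else x2
\end{verbatim}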
\section{Numerical Experiments\label{sec:Numerical-Experiments}}
To show the performance of the proposed algorithms in designing sets
of sequences for various scenarios, we present some experimental results
in this section. For clarity, the MM algorithms proposed for problems
\eqref{eq:CSS_prob}, \eqref{eq:seqset_prob} and \eqref{eq:seqset_prob_weight},
i.e., Algorithms \ref{alg:CSS-MM}, \ref{alg:MWISL-Set-adaptive}
and \ref{alg:MWISL-Set}, will be referred to as MM-CSS, MM-Corr and
MM-WeCorr, respectively. The acceleration scheme described in
Section \ref{sec:Acceleration-Schemes} was applied in our implementation
of the algorithms. All experiments were performed in Matlab on a PC
with a 3.20 GHz i5-3470 CPU and 8 GB RAM.
\subsection{CSS Design}
In this subsection, we give an example of applying the proposed MM-CSS
algorithm to design (almost) complementary sets of sequences (CSS).
We consider the design of unimodular CSS of length $N=128$ and with
$M=1,2,3.$ For all cases, the initial sequence set $\{\mathbf{x}_{m}^{(0)}\}_{m=1}^{M}$
was generated randomly with each sequence being $\{e^{j2\pi\theta_{n}}\}_{n=1}^{N}$,
where $\{\theta_{n}\}_{n=1}^{N}$ are independent random variables
uniformly distributed in $[0,1]$. The stopping criterion was set
to be $\left|{\rm ISL}^{(l+1)}-{\rm ISL}^{(l)}\right|/\max\left(1,{\rm ISL}^{(l)}\right)\leq10^{-15}$
to allow enough iterations. The complementary autocorrelation levels
of the output sequence sets with $M=1,2,3$ sequences are shown in
Fig. \ref{fig:CSS_corr}, where the complementary autocorrelation
level is the normalized autocorrelation sum in dB defined as
\begin{equation}
20\log_{10}\frac{\left|\sum_{m=1}^{M}r_{m,m}(k)\right|}{\sum_{m=1}^{M}r_{m,m}(0)},\,k=1-N,\ldots,N-1.
\end{equation}
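This quantity can be computed directly from a designed set via FFTs; the following brief NumPy sketch (ours, using the standard zero-padded-FFT computation of aperiodic autocorrelations) returns the levels at the nonzero lags:
\begin{verbatim}
import numpy as np

def comp_autocorr_level_db(X):
    # X: N x M array, the designed sequence set (one sequence per column)
    N, M = X.shape
    F = np.fft.fft(X, n=2 * N, axis=0)
    r = np.fft.ifft(np.abs(F) ** 2, axis=0)          # aperiodic autocorrelations r_{m,m}(k)
    rsum = r.sum(axis=1)                              # complementary sum over the M sequences
    lags = np.concatenate((rsum[1:N], rsum[N + 1:]))  # k = 1,...,N-1 and k = 1-N,...,-1
    return 20 * np.log10(np.abs(lags) / np.abs(rsum[0]))
\end{verbatim}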
From the figure, we can see that as $M$ increases, the complementary
autocorrelation level decreases, which is easily understood since a
larger $M$ provides more degrees of freedom for the CSS design. In
particular, when $M=3$ the autocorrelation sums of the sequences
are very close to zero and the sequences can be viewed as complementary
in practice.
\begin{figure}
\caption{\label{fig:CSS_corr}Complementary autocorrelation levels of the output sequence sets designed by MM-CSS with $M=1,2,3$ sequences of length $N=128$.}
\end{figure}
\subsection{Approaching the Lower Bound of $\Psi$}
As has been mentioned earlier, the criterion $\Psi$ defined in \eqref{eq:obj_auto_cross}
is lower bounded by $N^{2}M(M-1)$. Then a natural question is whether
we can achieve that bound. In this subsection, we apply the proposed
MM-Corr and MM-WeCorr algorithms to minimize the criterion $\Psi$,
i.e., solving problem \eqref{eq:seqset_prob}, and compare the performance
with the CAN algorithm \cite{SequenceSet_He2009}.
In the experiment, we consider sequence sets with $M\in\{2,3,4\}$
sequences and each sequence of length $N\in\{256,1024\}$. For all
algorithms, the initial sequence set was generated randomly as in
the previous subsection, and the stopping criterion was set to be
$\left|\Psi^{(l+1)}-\Psi^{(l)}\right|/\Psi^{(l)}\leq10^{-8}$.
For each $(M,N)$ pair, the algorithms were repeated 10 times and
the minimum and average values of $\Psi$ achieved by the three algorithms,
together with the corresponding lower bound, are shown in Table \ref{tab:Corr_bound}.
The average running time of the three algorithms was also recorded
and is provided in Table \ref{tab:avg-running-time}. From Table
\ref{tab:Corr_bound}, we can see that all the three algorithms can
get reasonably close to the lower bound of $\Psi$, which means the
sequence sets generated by the algorithms are almost optimal for the
$(M,N)$ pairs that have been considered. Another point we notice
is that, for all $(M,N)$ pairs and all algorithms, the average values
over 10 random trials are quite close to the minimum values, which
implies that the three algorithms are not sensitive to the initial
points. From Table \ref{tab:avg-running-time}, we can see that for
each $(M,N)$ pair, the MM-Corr algorithm is the fastest and the CAN
algorithm is the slowest among the three algorithms. Since the per-iteration
computational complexity of MM-Corr and CAN is almost the
same ($2M$ $2N$-point FFT (IFFT) operations), this implies that MM-Corr
takes far fewer iterations to converge than CAN. Another
observation is that, for the same sequence length $N$, the cases with
larger $M$ values take less time than the cases with smaller
$M$ values; for example, the running time of the algorithms for the
pair $(M=4,N=256)$ is less than that for $(M=2,N=256)$. Since a
larger $M$ value means a higher per-iteration computational complexity,
this observation implies that when $M$ becomes larger, the algorithms
need far fewer iterations to converge. It probably further indicates
that it is easier for a larger set of sequences to approach the lower
bound than for a smaller one.
\begin{table*}[t]
\protect\caption{\label{tab:Corr_bound}The lower bound of $\Psi$ in \eqref{eq:obj_auto_cross}
and the values achieved by different algorithms.}
\begin{centering}
\begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}lccccccc}
\hline
& \multicolumn{2}{c}{CAN} & \multicolumn{2}{c}{MM-WeCorr} & \multicolumn{2}{c}{MM-Corr} & Lower Bound\tabularnewline
\hline
& minimum & average & minimum & average & minimum & average & \tabularnewline
\hline
$M=2,N=256$ & 131082 & 131089 & 131083 & 131093 & \textbf{131079} & 131093 & 131072\tabularnewline
$M=3,N=256$ & 393220 & 393222 & \textbf{393217} & 393220 & 393219 & 393222 & 393216\tabularnewline
$M=4,N=256$ & 786436 & 786439 & \textbf{786433} & 786436 & \textbf{786433} & 786436 & 786432\tabularnewline
$M=2,N=1024$ & 2097336 & 2097394 & 2097426 & 2098298 & \textbf{2097335} & 2097453 & 2097152\tabularnewline
$M=3,N=1024$ & 6291553 & 6291580 & \textbf{6291486} & 6291556 & 6291504 & 6291548 & 6291456\tabularnewline
$M=4,N=1024$ & 12582992 & 12583019 & \textbf{12582937} & 12582989 & 12582939 & 12582992 & 12582912\tabularnewline
\hline
\end{tabular*}
\par\end{centering}
\end{table*}
\begin{table}[tbh]
\protect\caption{\label{tab:avg-running-time}The average running time (in seconds)
of different algorithms over 10 random trials.}
\centering{}
\begin{tabular*}{0.95\columnwidth}{@{\extracolsep{\fill}}lccc}
\hline
& CAN & MM-WeCorr & MM-Corr\tabularnewline
\hline
$M=2,N=256$ & 9.3342 & 0.6765 & 0.2435\tabularnewline
$M=3,N=256$ & 2.3461 & 0.3813 & 0.1000\tabularnewline
$M=4,N=256$ & 1.3562 & 0.3822 & 0.0844\tabularnewline
$M=2,N=1024$ & 33.8459 & 1.2011 & 0.6137\tabularnewline
$M=3,N=1024$ & 8.0584 & 1.0797 & 0.2750\tabularnewline
$M=4,N=1024$ & 4.9846 & 1.0298 & 0.2242\tabularnewline
\hline
\end{tabular*}
\end{table}
\subsection{Sequence Set Design with Zero Correlation Zone }
As can be seen from the previous subsection, it is impossible to design
a set of sequences whose auto- and cross-correlation sidelobes are all
very small, since $\Psi$ is bounded below by $N^{2}M(M-1)$. However, in some applications it is enough to minimize the
correlations only within a certain time-lag interval, so in this subsection
we present an example of applying the proposed MM-WeCorr algorithm
to design a set of sequences with low correlation sidelobes only at
required lags and compare the performance with the WeCAN algorithm
in \cite{SequenceSet_He2009}. The Matlab code of the WeCAN algorithm
was downloaded from the website\footnote{http://www.sal.ufl.edu/book/}
of the book \cite{he2012waveform}.
Suppose we want to design a sequence set with $M=3$ sequences each
of length $N=256$ and with low auto- and cross-correlations only
at lags $k=51,\ldots,80$. To tackle the problem, we apply the MM-WeCorr
and WeCAN algorithms from random initial sequence sets generated as
in the previous subsections. For the MM-WeCorr algorithm, we choose
the weights $\{w_{k}\}_{k=0}^{N-1}$ as follows:
\begin{equation}
w_{k}=\begin{cases}
1, & k\in\{51,\dots,80\}\\
0, & {\rm otherwise},
\end{cases}\label{eq:weights-sim}
\end{equation}
so that only the correlations at the required lags will be minimized.
For both algorithms, we do not stop until the objective in \eqref{eq:seqset_prob_weight}
drops below $10^{-10}$ or the running time exceeds 10000 seconds. The evolution curves
of the objective with respect to the running time are shown in Fig.
\ref{fig:obj_vs_time}. From the figure we can see that the proposed
MM-WeCorr algorithm drives the objective to $10^{-10}$ within $1$
second, while the objective is still above $10^{2}$ after 10000 seconds
for WeCAN. This is because the proposed MM-WeCorr algorithm requires
about $3M^{2}+4M$ $2N$-point FFTs per iteration, while each iteration
of WeCAN requires $2MN$ $2N$-point FFT computations and also
$2N$ SVDs of $M\times N$ matrices. The slower
convergence of WeCAN may be another reason. Fig. \ref{fig:ZCZ_set}
shows the auto- and cross-correlations (normalized by $N$) of the
sequence sets generated by the two algorithms. We can see in Fig.
\ref{fig:ZCZ_set} that the correlation sidelobes of the MM-WeCorr
sequence set are suppressed to almost zero (about $-175$ dB) at the
required lags, while those of the WeCAN sequence set are much higher.
Another observation is that the cross-correlations at lag $k=0$ for
the WeCAN sequence set are very low, although we did not try to suppress
them. The reason is that in WeCAN the weight at lag $0$ must always be
positive and, in fact, large enough to ensure that a certain weight matrix
is positive semidefinite. Thus the ``0-lag'' correlations are
in fact emphasized the most in WeCAN. Note that in MM-WeCorr the
weight at lag $0$, i.e., $w_{0}$, can take any nonnegative value,
and thus the proposed algorithm is more flexible in this respect.
\begin{figure}
\caption{\label{fig:obj_vs_time}Evolution of the objective in \eqref{eq:seqset_prob_weight} with respect to the running time for the MM-WeCorr and WeCAN algorithms ($M=3$, $N=256$).}
\end{figure}
\noindent \begin{center}
\begin{figure*}
\caption{\label{fig:ZCZ_set}Auto- and cross-correlations (normalized by $N$) of the sequence sets generated by the MM-WeCorr and WeCAN algorithms.}
\end{figure*}
\par\end{center}
\section{Conclusion\label{sec:Conclusion}}
In this paper, we have developed several efficient MM algorithms which
can be used to design unimodular sequence sets with almost complementary
autocorrelations or with both good auto- and cross-correlations. The
proposed algorithms can be viewed as extensions of some single sequence
design algorithms in the literature and share the same convergence
properties, i.e., the convergence to a stationary point. In addition,
all the algorithms can be implemented via FFT and thus are computationally
very efficient. Numerical experiments show that the proposed CSS design
algorithm can generate an almost complementary set of sequences as
long as the cardinality of the set is not too small. In the case of
sequence set design for both good auto- and cross-correlation properties,
the proposed algorithms can get as close to the lower bound of the
correlation criterion as the state-of-the-art method and are much
faster. It has also been observed that the proposed weighted correlation
minimization algorithm can produce sets of unimodular sequences with
virtually zero auto- and cross-correlations at specified time lags.
\appendices{}
\section{Proof of Lemma \ref{lem:eig_upperBound}\label{sec:Proof-of-Lemma-eigUB}}
\begin{IEEEproof}
First, with Lemma \ref{lem:eig_set}, we have
\begin{equation}
\begin{aligned} & \,\,\lambda_{{\rm max}}\left(\mathbf{R}-\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)\right)\\
\leq & \,\,\lambda_{{\rm max}}(\mathbf{R})-\lambda_{{\rm min}}\left(\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)\right)\\
= & \,\,\lambda_{{\rm max}}(\mathbf{R})-\lambda_{{\rm min}}\left(\mathbf{B}\right).
\end{aligned}
\end{equation}
where the last equality holds because, for unimodular $\mathbf{x}^{(l)}$, $\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)={\rm Diag}(\mathbf{x}^{(l)})\,\mathbf{B}\,{\rm Diag}(\mathbf{x}^{(l)})^{H}$ is unitarily similar to $\mathbf{B}$ and thus has the same eigenvalues. Then, according to Lemma \ref{lem:eig_kron}, it is easy to see that
\begin{equation}
\lambda_{{\rm min}}\left(\mathbf{B}\right)=\begin{cases}
\min\{M\lambda_{{\rm min}}(\mathbf{W}),0\}, & M\geq2\\
\lambda_{{\rm min}}(\mathbf{W}), & M=1,
\end{cases}
\end{equation}
and noticing the fact that $\mathbf{W}$ is symmetric Toeplitz, we
know from Lemma \ref{lem:eig_bounds} that
\begin{equation}
\lambda_{{\rm min}}(\mathbf{W})\geq\lambda_{W}.
\end{equation}
Thus,
\begin{equation}
\lambda_{{\rm min}}\left(\mathbf{B}\right)\geq\lambda_{B}=\begin{cases}
\min\left\{ M\lambda_{W},0\right\} , & M\geq2\\
\lambda_{W}, & M=1,
\end{cases}
\end{equation}
and we have
\begin{equation}
\lambda_{{\rm max}}\left(\mathbf{R}-\mathbf{B}\circ\big(\mathbf{x}^{(l)}(\mathbf{x}^{(l)})^{H}\big)\right)\leq\left\Vert \mathbf{R}\right\Vert -\lambda_{B},
\end{equation}
where $\left\Vert \mathbf{R}\right\Vert $ can be any submultiplicative
matrix norm of $\mathbf{R}$.
\end{IEEEproof}
\section{Proof of Lemma \ref{lem:diagonal}\label{sec:Proof-of-Lemma-diag}}
\begin{IEEEproof}
The $N\times N$ Toeplitz matrix $\mathbf{T}$ can be embedded in
a circulant matrix $\mathbf{C}$ of dimension $2N\times2N$ as follows:
\begin{equation}
\mathbf{C}=\left[\begin{array}{cc}
\mathbf{T} & \mathbf{W}\\
\mathbf{W} & \mathbf{T}
\end{array}\right],
\end{equation}
where
\begin{equation}
\mathbf{W}=\begin{bmatrix}0 & t_{1-N} & \cdots & t_{-1}\\
t_{N-1} & 0 & \ddots & \vdots\\
\vdots & \ddots & \ddots & t_{1-N}\\
t_{1} & \cdots & t_{N-1} & 0
\end{bmatrix}.
\end{equation}
The circulant matrix $\mathbf{C}$ can be diagonalized by the FFT
matrix \cite{Gray2006}, i.e.,
\begin{equation}
\mathbf{C}=\frac{1}{2N}\mathbf{F}^{H}{\rm Diag}(\mathbf{F}\mathbf{c})\mathbf{F},\label{eq:circ_decomp}
\end{equation}
where $\mathbf{c}$ is the first column of $\mathbf{C},$ i.e., $\mathbf{c}=[t_{0},t_{-1},\cdots,t_{1-N},0,t_{N-1},\cdots,t_{1}]^{T}.$
Since the matrix $\mathbf{T}$ is just the upper left $N\times N$
block of $\mathbf{C}$, we can easily obtain $\mathbf{T}=\frac{1}{2N}\mathbf{F}_{:,1:N}^{H}{\rm Diag}(\mathbf{F}\mathbf{c})\mathbf{F}_{:,1:N}$.
\end{IEEEproof}
\section{Proof of Lemma \ref{lem:quartic_major} \label{sec:Proof-of-Lemma-quartic-major}}
\begin{IEEEproof}
For any given $x_{0}\in[0,t)$, let us consider the quadratic function
of the following form:
\begin{equation}
g(x|x_{0})=x_{0}^{4}+4x_{0}^{3}(x-x_{0})+a(x-x_{0})^{2},\label{eq:g_x}
\end{equation}
where $a>0$. It is easy to check that $f(x_{0})=g(x_{0}|x_{0}).$
So to make $g(x|x_{0})$ be a majorization function of $f(x)$ at
$x_{0}$ over the interval $[0,t]$, we need to further have $f(x)\leq g(x|x_{0})$
for all $x\in[0,t]$. Equivalently, we must have
\begin{equation}
\begin{aligned}a & \geq\frac{x^{4}-x_{0}^{4}-4x_{0}^{3}(x-x_{0})}{(x-x_{0})^{2}}\\
& =x^{2}+2x_{0}x+3x_{0}^{2}
\end{aligned}
\label{eq:a_inequal}
\end{equation}
for all $x\in[0,t]$. Let us define the function
\begin{equation}
A(x|x_{0})=x^{2}+2x_{0}x+3x_{0}^{2},
\end{equation}
then condition \eqref{eq:a_inequal} is equivalent to
\begin{equation}
\begin{aligned}a & \geq\max_{x\in[0,t]}\,A(x|x_{0}).\end{aligned}
\label{eq:a_inequal2}
\end{equation}
Since the derivative of $A(x|x_{0})$, given by
\begin{equation}
A^{\prime}(x|x_{0})=2x+2x_{0},
\end{equation}
is nonnegative for all $x\in[0,t],$ we know that $A(x|x_{0})$ is
nondecreasing on the interval $[0,t]$ and the maximum is achieved
at $x=t.$ Thus, condition \eqref{eq:a_inequal2} becomes
\begin{equation}
\begin{aligned}a & \geq A(t|x_{0})\\
& =t^{2}+2x_{0}t+3x_{0}^{2}.
\end{aligned}
\end{equation}
Finally, by appropriately rearranging the terms of $g(x|x_{0})$ in
\eqref{eq:g_x}, we can obtain the function in \eqref{eq:quad_major}.
The proof is complete.
\end{IEEEproof}
\end{document}
\begin{document}
\title{A fibered description of the vector-valued spectrum}
\author[V. Dimant]{Ver\'onica Dimant}
\author[J. Singer]{Joaqu\'{\i}n Singer}
\thanks{Partially supported by Conicet PIP 11220130100483 and ANPCyT PICT 2015-2299 }
\subjclass[2010]{46J15, 46E50, 47B48, 32A38}
\keywords{spectrum, algebras of holomorphic functions, homomorphisms of algebras}
\address{Departamento de Matem\'{a}tica y Ciencias, Universidad de San
Andr\'{e}s, Vito Dumas 284, (B1644BID) Victoria, Buenos Aires,
Argentina and CONICET} \email{[email protected]}
\address{Departamento de Matem\'{a}tica, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, (1428) Buenos Aires,
Argentina and IMAS-CONICET} \email{[email protected]}
\begin{abstract}
For Banach spaces $X$ and $Y$ we study the vector-valued spectrum $\mathcal {M}_\infty(B_X,B_Y)$, that is, the set of nonzero algebra homomorphisms from $\mathcal {H}^\infty(B_X)$ to $\mathcal {H}^\infty(B_Y)$, which is naturally projected onto the closed unit ball of $\mathcal {H}^\infty(B_Y, X^{**})$. The aim of this article is to describe the fibers defined by this projection, searching for analytic balls and considering Gleason parts.
\end{abstract}
\maketitle
\section{Introduction}
Let $\mathcal {H}^\infty(B_X)$ be the space of bounded holomorphic functions on the open unit ball of a complex Banach space $X$. The study of the spectrum of this uniform algebra, $\mathcal {M}(\mathcal {H}^\infty(B_X))$, began with the seminal work of Aron, Cole and Gamelin \cite{AronColeGamelin}, where this set was fibered over $\overline B_{X^{**}}$, the closed unit ball of the bidual of $X$. For $X$ infinite dimensional, unlike what happens in the one-dimensional case, it was proved in the same article that each fiber is quite large. Along these lines, the description of the fibers and the study of conditions that ensure the existence of analytic balls inside the fibers were addressed in several articles, such as \cite{ColeGamelinJohnson, Farmer, AronFalcoGarciaMaestre}.
Inspired by what is known about the spectrum $\mathcal {M}(\mathcal {H}^\infty(B_X))$ our aim here is to study, for Banach spaces $X$ and $Y$, the \textit{vector-valued spectrum} $\mathcal {M}_\infty(B_X,B_Y)$ defined by
\[
\mathcal {M}_\infty(B_X,B_Y)=\{ \Phi: \mathcal {H}^\infty(B_X)\to \mathcal {H}^\infty(B_Y) \textrm{ algebra homomorphisms}\}\setminus \{0\}.
\]
Even though homomorphisms between uniform algebras are a typical object of study (see, for instance, \cite{Gamelin, GalindoGamelinLindstrom, GalindoLindstrom, GorkinMortini, AronGalindoLindstrom,MacCluerOhnoZhao}), the treatment of this set as a whole only started in \cite{DiGaMaSe}. Here, we continue that work with a slight change of perspective and focus, while maintaining a structure modeled on the ideas of \cite{AronColeGamelin}.
As was noticed in \cite{AronColeGamelin}, in order to obtain information about the spectrum of $\mathcal {H}^\infty(B_X)$ it is useful to first study the spectrum of $\mathcal {H}_b(X)$, the Fr\'{e}chet algebra of holomorphic functions of bounded type on $X$ (that is, holomorphic functions which are bounded on bounded sets, with the topology of uniform convergence on bounded sets). The same idea leads our work here: with the goal of describing the vector-valued spectrum $\mathcal {M}_\infty(B_X,B_Y)$ we begin by focusing on the set $\mathcal {M}_{b,\infty}(X,B_Y)$ given by
\[
\mathcal {M}_{b,\infty}(X,B_Y)=\{ \Phi: \mathcal {H}_b(X)\to \mathcal {H}^\infty(B_Y) \textrm{ continuous algebra homomorphisms}\}\setminus \{0\}.
\]
As in the scalar-valued case, this set has a rich analytic structure in the form of a Riemann domain over the space $\mathcal{H}^{\infty}(B_Y,X^{**}) $, when $X$ is symmetrically regular (see Section \ref{Section-Riemann domain}). The study of the fibers of $\mathcal {M}_{b,\infty}(X,B_Y)$ over $\mathcal{H}^{\infty}(B_Y,X^{**}) $ is developed in Section \ref{Section-Fiber Mb}. Following the Aron-Cole-Gamelin program, a radius function can be defined for the homomorphisms in $\mathcal {M}_{b,\infty}(X,B_Y)$ and then extended to $\mathcal {M}_\infty(B_X,B_Y)$ giving a way to relate both spectra. This is presented in Section \ref{Section-Radius function}.
Each function $g\in \mathcal{H}^{\infty}(B_Y,X^{**}) $ with $g(B_Y)\subset B_{X^{**}}$ naturally produces a composition homomorphism $ C_g\in \mathcal {M}_\infty(B_X,B_Y)$ given by
\[
C_g(f)=\widetilde f\circ g,\quad\textrm{ for all }f\in\mathcal {H}^\infty(B_X),
\] where $\widetilde f\in \mathcal {H}^\infty(B_{X^{**}})$ is the canonical extension of $f$ (see reference below). Conversely, as in \cite{DiGaMaSe}, we can define a projection
\begin{align*}
\xi \colon \mathcal {M}_\infty(B_X,B_Y) &\to \mathcal{H}^\infty(B_Y,X^{**}), \\
\Phi & \mapsto \left[ y \mapsto [x^* \mapsto \Phi(x^*)(y)] \right].
\end{align*}
The image of this projection is the closed unit ball of $\mathcal{H}^\infty(B_Y,X^{**})$ (see explanation in Section \ref{Section-Minf}).
One of the goals of this article is to describe the fibers over this closed ball, that is, for each $g\in \overline B_{\mathcal {H}^\infty(B_Y,X^{**})} $, the set of homomorphisms $\Phi\in\mathcal {M}_\infty(B_X,B_Y)$ such that $\xi(\Phi)=g$.
Finally, Section \ref{Section-Gleason parts} looks into the notion of Gleason parts for the vector-valued spectrum $\mathcal {M}_\infty(B_X,B_Y)$.
We begin by recalling some usual definitions and properties about polynomials and holomorphic functions in Banach spaces. For general theory on the topic we refer the reader to the books of Dineen \cite{Dineen}, Mujica \cite{MujicaLibro} and Chae \cite{Chae}.
Given Banach spaces $X$ and $Y$ we say that a function $P:X\to Y$ is a {\em continuous $m$-homogeneous polynomial} if there exists a unique continuous symmetric $m$-linear mapping $\overset\vee{P}$ such that $P(x) = \overset\vee{P}(x,\dots, x)$. If $U\subset X$ is an open set, a mapping $f:U\to Y$ is said to be {\em holomorphic} if for every $x_0\in U$ there exists a sequence $(P_mf(x_0))$, with each $P_mf(x_0)$ a continuous $m$-homogeneous polynomial, such that the series
\[
f(x)=\sum_{m=0}^\infty P_mf(x_0)(x-x_0)
\] converges uniformly in some neighborhood of $x_0$ contained in $U$.
We say that an $m$-homogeneous polynomial $P:X\to\mathbb C$ is {\em of finite type} if there are linear forms $x_1^*, \dots, x_n^*$ in $X^*$ such that
\[
P(x)=\sum_{k=1}^n (x_k^*(x))^m.
\]
The set
\[
\mathcal {H}^\infty(B_X)=\{f:B_X\to\mathbb C:\, f \textrm{ is holomorphic and bounded}\}
\] is a Banach algebra (endowed with the supremum norm). Analogously, the notation $\mathcal {H}^\infty(B_Y,X^{**})$ refers to the Banach space of bounded holomorphic functions from $B_Y$ to $X^{**}$ (endowed with the supremum norm).
A holomorphic function $f:X\to \mathbb C$ is said to be {\em of bounded type} if it maps bounded subsets of $X$ into bounded subsets of $\mathbb C$. The set
\[
\mathcal {H}_b(X)=\{f:X\to\mathbb C:\, f \textrm{ is a holomorphic function of bounded type}\}
\] is a Fr\'{e}chet algebra if we endow it with the family of (semi)norms $\{\sup_{\|x\|<R}|f(x)|\}_{R>0}$.
By \cite{AronBerner, DavieGamelin}, there is a canonical extension $[f\mapsto \widetilde f]$ from $\mathcal {H}^\infty(B_X)$ to $\mathcal {H}^\infty(B_{X^{**}})$ which is an isometry and a homomorphism of Banach algebras. The extension is also defined from $\mathcal {H}_b(X)$ to $\mathcal {H}_b(X^{**})$ and satisfies, for any $f\in\mathcal {H}_b(X)$ and $R>0$,
\[
\|f\|_{RB_X} = \|\widetilde f\|_{RB_{X^{**}}}.
\]
Recall that a Banach space $X$ is said to be {\em symmetrically regular} if every continuous linear mapping $T:X\to X^*$ which is symmetric (i. e. $T(x_1)(x_2)=T(x_2)(x_1)$ for all $x_1, x_2\in X$) turns out to be weakly compact.
The (scalar-valued) spectrum of a Banach or Fr\'{e}chet algebra $\mathcal A$ is the set
\[
\mathcal {M}(\mathcal{A})=\{\varphi:\mathcal{A}\to\mathbb C \textrm{ algebra homomorphisms}\}\setminus \{0\}.
\]
As in \cite{AronColeGamelin}, we will denote the scalar-valued spectrum of $\mathcal {H}_b(X)$ by $\mathcal {M}_b(X)$. Analogously, $\mathcal {M}_\infty(B_X)$ will denote $\mathcal {M}(\mathcal {H}^\infty(B_X))$.
\textbf{Duality and compactness.} We quote some observations about duality and compactness for future reference.
\begin{enumerate}[(1)]
\item As mentioned in \cite{GGL}, the algebra $\mathcal {H}^\infty (B_Y)$ is a weak-star closed subalgebra of $\ell_\infty(B_Y)$ and hence it is the dual of the quotient Banach space $\ell_1(B_Y)/\mathcal {H}^\infty(B_Y)^\perp$. We chose to denote this predual space by $\mathcal G^\infty (B_Y)$, as in \cite{Mujica}, for simplicity but we recall its description as a quotient of $\ell_1(B_Y)$.
\item \label{compact} The vector-valued spectrum $\mathcal {M}_\infty(B_X,B_Y)$ is a subset of the unit sphere of
\[\mathcal L(\mathcal {H}^\infty(B_X), \mathcal {H}^\infty(B_Y)) = \left(\mathcal {H}^\infty(B_X)\widehat\otimes_\pi \mathcal G^\infty (B_Y)\right)^*.\] As the closed unit ball of a dual Banach space is weak-star compact and $\mathcal {M}_\infty(B_X,B_Y)$ is closed with respect to this topology, it follows that $\mathcal {M}_\infty(B_X,B_Y)$ is a weak-star compact set (i. e. a compact set with respect to the topology given by $\mathcal {H}^\infty(B_X)\widehat\otimes_\pi \mathcal G^\infty (B_Y)$). Note also that, by the definition of $\mathcal G^\infty (B_Y)$, a bounded net $(\Phi_\alpha)$ is weak-star convergent to $\Phi$ in $\left(\mathcal {H}^\infty(B_X)\widehat\otimes_\pi \mathcal G^\infty (B_Y)\right)^*$ if and only if $\Phi_\alpha (f)(y)\to \Phi(f)(y)$, for all $f\in \mathcal {H}^\infty(B_X)$ and $y\in B_Y$. The weak-star compactness of $\mathcal {M}_\infty(B_X,B_Y)$ is shown with a different argument in \cite[Theorem 11]{DiGaMaSe}.
\item \label{comment3} For each $R>0$, the norm $\|f\|_R=\sup\{|f(x)|:\ x\in RB_X\}$ gives to $\mathcal {H}_b(X)$ the structure of a normed space (not complete). In this way the set
\[
\mathcal {M}_{b,\infty}(X,B_Y)_R =\{\Phi\in \mathcal {M}_{b,\infty}(X,B_Y):\ \Phi\textrm{ is continuous with respect to the norm } \|\cdot\|_R\}
\]
is contained in the unit sphere of
\[\mathcal L((\mathcal {H}_b(X),\|\cdot\|_R), \mathcal {H}^\infty(B_Y)) = \left((\mathcal {H}_b(X),\|\cdot\|_R)\otimes_\pi \mathcal G^\infty (B_Y)\right)^*.\]
Arguing as in the previous item, for a given $R>0$, the set $\mathcal {M}_{b,\infty}(X,B_Y)_R$ is weak-star compact; or equivalently, it is compact with respect to the topology given by
\[
\Phi_\alpha\to \Phi \quad\textrm{ whenever } \quad \Phi_\alpha (f)(y)\to \Phi(f)(y), \textrm{ for all } f\in \mathcal {H}_b(X) \textrm{ and } y\in B_Y.
\]
\end{enumerate}
\section{Riemann domain over $ \mathcal{H}^{\infty}(B_Y,X^{**}) $}\label{Section-Riemann domain}
For a symmetrically regular Banach space $X$ it is shown in \cite{DiGaMaSe} that $\mathcal{M}_{b,\infty}(X,B_Y)$ can be endowed with a structure of a Riemann domain over $\mathcal L(X^*, Y^*)$. Now, to achieve a fibered description of $\mathcal {M}_\infty(B_X,B_Y)$ we find it more suitable to define a Riemann domain structure over $ \mathcal{H}^{\infty}(B_Y,X^{**}) $ as opposed to $\mathcal L(X^*, Y^*)$. In this way we are choosing a more complex underlying space, but we are dealing with a simpler projection, and the behaviour of the fibers is more akin to what happens in the scalar-valued case.
As in \cite[Equation (81)]{DiGaMaSe}, for each $g \in \mathcal{H}^{\infty}(B_Y,X^{**}) $ there is an associated composition homomorphism $ C_g \in \mathcal{M}_{b,\infty}(X,B_Y) $ given by
\[ C_g (f) (y) = \widetilde{f} \circ g(y),\]
where $\widetilde{f}$ denotes the canonical extension of $f$. It is easily verified that $ C_g$ is well defined and that the assignment $g\mapsto C_g$ gives an inclusion
\begin{align*}
j:\mathcal{H}^{\infty}(B_Y,X^{**}) &\to \mathcal{M}_{b,\infty}(X,B_Y), \\
g &\mapsto C_g.
\end{align*}
Also, as in \cite[Equation (83)]{DiGaMaSe}, there is a projection
\begin{align*}
\xi \colon \mathcal{M}_{b,\infty}(X,B_Y) &\to \mathcal{H}^\infty(B_Y,X^{**}), \\
\Phi & \mapsto \left[ y \mapsto [x^* \mapsto \Phi(x^*)(y)] \right].
\end{align*}
The fact that this mapping is well-defined can be seen as follows: in order to prove that $\xi(\Phi):B_Y \to X^{**}$ is holomorphic, it is enough to check that it is weak-star holomorphic \cite[Exercise 8.D]{MujicaLibro}. This is true since, for each $x^*\in X^*$, we have that
\begin{align*}
\widetilde{x^*}\circ \xi(\Phi) = \Phi(x^*) \in \mathcal{H}^\infty(B_Y).
\end{align*}
To see that it is bounded recall that as $\Phi$ belongs to $\mathcal{M}_{b,\infty}(X,B_Y)$, there exists $r>0$ such that $\|\Phi(h)\|\le \|h\|_{rB_X}$, for all $h\in\mathcal {H}_b(X)$. Therefore,
\begin{align*}
\sup_{y\in B_Y} \|\xi(\Phi)(y)\| &= \sup_{y\in B_Y} \sup_{\|x^*\| \le 1} |\xi(\Phi)(y)(x^*)| \\
&=\sup_{\|x^*\| \le 1} \sup_{y\in B_Y} |\Phi(x^*)(y)|\\
&\leq \sup_{\|x^*\| \le 1}\|x^*\|_{rB_X} \leq r.
\end{align*}
Thus, we conclude that $\xi(\Phi)$ belongs to $\mathcal{H}^\infty(B_Y,X^{**})$ (and hence $\xi$ is well defined). Note also that $\xi(j(g))=g$, for all $g \in \mathcal{H}^{\infty}(B_Y,X^{**}) $.
Now, the construction of the Riemann domain structure is analogous to what was done in the scalar-valued case \cite{AronColeGamelin, AronGalindoGarciaMaestre, Dineen} and in the already mentioned vector-valued case \cite{DiGaMaSe}. Anyway, we present it adapted to our particular setting for future reference throughout the article.
For $x^{**} \in X^{**}$, $\tau_{x^{**}}$ is the translation mapping given by $\tau_{x^{**}} (x) = J_Xx +x^{**}$. This induces, as usual, a mapping $\tau^*_{x^{**}} \colon \mathcal{H}_b(X) \to \mathcal{H}_b(X)$ where $\tau^*_{x^{**}} (f) (x) = \widetilde{f}(J_Xx +x^{**})$. By \cite[Proposition 6.30]{Dineen} for any fixed $f \in \mathcal{H}_b(X)$, we have that the mapping $[x^{**}\mapsto \tau_{x^{**}}^*(f)]$ is a holomorphic function of bounded type from $X^{**}$ into $\mathcal{H}_b(X)$. Then, we can define for each $g \in \mathcal{H}^\infty(B_Y,X^{**})$ and $\Phi \in \mathcal{M}_{b,\infty}(X,B_Y)$ the homomorphism $\Phi^{g}$ in $\mathcal{M}_{b,\infty}(X,B_Y)$ by
\begin{align*}
\Phi^g(f)(y) &= \Phi(\tau^*_{g(y)}(f)) (y) \\
&=\Phi [x \mapsto \widetilde{f}(J_Xx+g(y)) ] (y),
\end{align*}
for all $f \in \mathcal{H}_b(X)$ and $y \in B_Y$. In order to see that $\Phi^g$ is well-defined we need to check that $\Phi^g(f)$ belongs to $\mathcal {H}^\infty (B_Y)$, for every $f \in \mathcal{H}_b(X)$.
To derive that $\Phi^g(f)$ is holomorphic, we consider the following function of two variables
\begin{align*}
B_Y \times B_Y &\to \mathbb{C} \\
(y,z) &\mapsto \Phi (\tau^*_{g(z)}(f)) (y).
\end{align*}
Clearly, the mapping is holomorphic in the first variable. The same is true for the second one, as it is the result of applying $\delta_y \circ \Phi$ to the composition of $[x^{**}\mapsto \tau_{x^{**}}^*(f)]$ with $[y \mapsto g(y)]$. By Hartogs' theorem, this mapping is holomorphic when considering both variables simultaneously and thus it remains so when restricted to the set $\{(y,y):\, y\in B_Y\}$. This gives that the mapping $\Phi^g(f)$ is holomorphic. As before, recall that there exists $r>0$ such that $\|\Phi(h)\|\le \|h\|_{rB_X}$, for all $h\in\mathcal {H}_b(X)$, implying that $\Phi^g(f)$ is bounded:
\begin{align}
\sup_{y\in B_Y} |\Phi^g(f)(y)| &= \sup_{y\in B_Y} |\Phi(\tau^*_{g(y)}(f)) (y)| \leq \sup_{ y,z\in B_Y } |\Phi(\tau^*_{g(z)}(f))(y)| \nonumber \\
& = \sup_{z\in B_Y} \|\Phi(\tau^*_{g(z)}(f))\|_{B_Y}\leq \sup_{z\in B_Y}\|\tau^*_{g(z)}(f)\|_{rB_X} \label{bounded} \\
&= \sup_{\substack{z\in B_Y \\ x\in rB_X}} |\widetilde{f}(g(z)+J_Xx)| \leq \|f\|_{(\|g\|+r)B_X} < \infty. \nonumber
\end{align}
This shows that $\Phi^g$ is well defined. An easy computation shows that $\Phi^g$ is an algebra homomorphism from which we obtain that $\Phi^g \in \mathcal{M}_{b,\infty}(X,B_Y)$.
It is also worth noting that the projection $\xi$ satisfies
\begin{equation} \label{xi-phig} \xi(\Phi^g) = \xi(\Phi) + g.
\end{equation}
Indeed, this is clear since for all $y\in B_Y$ and all $x^*\in X^*$, $\tau_{g(y)}^*(x^*)=x^*+g(y)(x^*)$.
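In more detail, for every $y\in B_Y$ and every $x^*\in X^*$, since $\Phi$ is linear and maps the constant function $1$ to $1$ (as $\Phi$ is a nonzero homomorphism and $B_Y$ is connected),
\begin{align*}
\xi(\Phi^g)(y)(x^*) &= \Phi^g(x^*)(y) = \Phi\big(x^* + g(y)(x^*)\big)(y)\\
&= \Phi(x^*)(y) + g(y)(x^*) = \big(\xi(\Phi)+g\big)(y)(x^*).
\end{align*}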
We now arrive at the statement of the Riemann domain structure on $\mathcal{M}_{b,\infty}(X,B_Y)$.
\begin{proposition}
If $X$ is a symmetrically regular Banach space and $Y$ is any Banach space, $(\mathcal{M}_{b,\infty}(X,B_Y),\xi)$ is a Riemann domain over $\mathcal{H}^\infty(B_Y,X^{**})$ with each connected component homeomorphic to $\mathcal{H}^\infty(B_Y,X^{**})$.
\end{proposition}
\begin{proof}
For $\Phi \in \mathcal {M}_{b,\infty}(X,B_Y)$ and $\varepsilon > 0$, consider the sets
\[ V_{\Phi,\varepsilon} = \{ \Phi^g \colon g\in \mathcal{H}^\infty(B_Y,X^{**}),\ \|g\| < \varepsilon \}. \]
These sets form a neighborhood basis for a Hausdorff topology in $\mathcal{M}_{b,\infty}(X,B_Y)$.
First of all, given $\Psi \in V_{\Phi,\varepsilon}$ we have that $\Psi = \Phi^g$ for a certain $g$ with $\|g\| < \varepsilon$. Since $X$ is symmetrically regular, by \cite[Lemma 6.28]{Dineen}, we have that $\tau^*_{g(y)} \circ \tau^*_{h(y)}=\tau^*_{(g+h)(y)}$ for all $y\in B_Y$, $h \in \mathcal{H}^\infty (B_Y,X^{**})$. It follows that
\begin{align*}
\Psi^h(f)(y) &= (\Phi^g)^h(f)(y) = \Phi^g(\tau^*_{h(y)} (f)) (y)= \Phi (\tau^*_{g(y)} \circ \tau^*_{h(y)} (f)) (y) \\
& = \Phi(\tau^*_{(g+h)(y)} (f)) (y)= \Phi^{g+h}(f) (y).
\end{align*}
Therefore, for $\delta = \varepsilon - \|g\|$ we have that $V_{\Psi,\delta} \subset V_{\Phi, \varepsilon}$.
That this is in fact a Hausdorff topology follows as usual: given $\Psi \not= \Phi \in \mathcal{M}_{b,\infty}(X,B_Y)$ there are two possibilities, either $\xi(\Psi) = \xi(\Phi)$ or $\xi(\Psi) \not = \xi(\Phi)$. In the former case, a simple argument using \eqref{xi-phig} implies that $V_{\Phi,\varepsilon} \cap V_{\Psi,\varepsilon'} = \varnothing$ for every $\varepsilon, \varepsilon' > 0$. If, otherwise, $\xi(\Phi) \not = \xi (\Psi)$, the easily obtained conclusion is that $V_{\Phi,\varepsilon} \cap V_{\Psi,\varepsilon} = \varnothing$ for $ \varepsilon = \frac{\|\xi(\Phi) - \xi(\Psi)\|}{2}$.
Additionally, note that if we consider, for $\Phi \in \mathcal{M}_{b,\infty}(X,B_Y)$, the set
\[ V_\Phi = \cup_{\varepsilon > 0} V_{\Phi,\varepsilon}=\{\Phi^g \colon g\in \mathcal{H}^\infty(B_Y,X^{**})\}, \]
we obtain exactly the connected component of $\Phi$, which is homeomorphic to $\mathcal {H}^\infty(B_Y,X^{**})$ as stated.
\end{proof}
As in \cite[Proposition 10]{DiGaMaSe} each function $f \in \mathcal{H}_b(X)$ can be extended to a function on $\mathcal{M}_{b,\infty}(X,B_Y)$ by means of a sort of Gelfand transform:
\begin{align*}
\widehat{f} \colon \mathcal{M}_{b,\infty}(X,B_Y) &\to \mathcal{H}^\infty(B_Y) \\
\Phi &\mapsto \Phi (f),
\end{align*}
and this function, when restricted to each connected component, is a holomorphic function of bounded type. Even though the connected components here are not the same as those in \cite{DiGaMaSe}, the proof developed in that paper works in our context with slight modifications. However, we choose to present here another, simpler argument.
\begin{proposition} \label{gelfand-transform}
Let $X$ be a symmetrically regular Banach space and let $Y$ be any Banach space. Given a function $f \in \mathcal{H}_b(X)$ we have that the extension $\widehat{f}\colon \mathcal{M}_{b,\infty}(X,B_Y) \to \mathcal{H}^\infty(B_Y)$ is a holomorphic function of bounded type, when restricted to each connected component of the spectrum. That is, $\widehat{f} \circ (\xi|_{V_\Phi})^{-1} \in \mathcal{H}_b(\mathcal{H}^\infty(B_Y,X^{**}),\mathcal{H}^\infty(B_Y))$ for every $\Phi\in\mathcal {M}_{b,\infty}(X, B_Y)$.
\end{proposition}
The proof follows readily by using the following lemma, which is surely known. We include the proof as we could not find a proper reference.
\begin{lemma} \label{holomorphic mapping}
Given Banach spaces $X$ and $Y$ and an open set $U\subset X$, let $F:U\to \mathcal{H}^\infty(B_Y)$ be a locally bounded mapping. Then, $F$ is holomorphic if and only if $\delta_y\circ F$ is holomorphic for all $y\in B_Y$.
\end{lemma}
\begin{proof}
Since $ \mathcal{H}^\infty(B_Y)$ is the dual of $\mathcal{G}^\infty(B_Y)$, we have that $F$ is holomorphic if and only if it is so when applied to any element of $\mathcal{G}^\infty(B_Y)$. To derive the conclusion we have to prove that it is enough to consider just the evaluations $\delta_y\in \mathcal{G}^\infty(B_Y)$. Any $v\in \mathcal{G}^\infty(B_Y)$ can be represented in the quotient $\ell_1(B_Y)/\mathcal {H}^\infty(B_Y)^\perp$ by a sum $\sum_k \lambda_k \delta_{y_k}$, where $(\lambda_k)\in \ell_1$ and $y_k\in B_Y$ for all $k$. So we need to prove that the mapping $[x\mapsto F(x)(v)=\sum_k \lambda_k F(x)(y_k)]$ is holomorphic. Since, by hypothesis, the mappings $[x\mapsto \sum_{k=1}^n \lambda_k F(x)(y_k)]$ are holomorphic for all $n$, to derive the result by means of \cite[Theorem 14.16]{Chae}, we only need to check that these mappings are locally uniformly bounded. Given $x_0\in U$ take $r>0$ and $C>0$ such that $B_X(x_0,r)\subset U$ and $F$ is bounded by $C$ in $B_X(x_0,r)$. Then, for every $x\in B_X(x_0,r)$,
\[
\left| \sum_{k=1}^n \lambda_k F(x)(y_k)\right|\le \sum_{k=1}^n |\lambda_k| |F(x)(y_k)|\le C \|(\lambda_k)\|_{\ell_1}.
\]
\end{proof}
Now we proceed with the proof of the holomorphic property of the Gelfand transform.
\begin{proof}[Proof of Proposition \ref{gelfand-transform}]
We want to prove that the function
\begin{align*}
\mathcal{H}^\infty(B_Y,X^{**}) &\to \mathcal{H}^\infty(B_Y) \\
g &\mapsto \Phi^g (f)
\end{align*}
is holomorphic of bounded type. As in equation \eqref{bounded} we obtain that there exists $r>0$ such that
\[
\|\Phi^g (f)\| \leq \|f\|_{(\|g\|+r)B_X}
\] and hence our target function is bounded on bounded sets; in particular, it is locally bounded. Now, appealing to the previous lemma, it remains to prove that, for all $y\in B_Y$, the mapping $[g\mapsto \Phi^g (f)(y)]$ is holomorphic. This is true since it is the composition of the following two holomorphic mappings:
\[\begin{array}{rclrcl}
\mathcal {H}^\infty(B_Y, X^{**}) &\to & X^{**} &\qquad\qquad X^{**} &\to &\mathbb C\\
g &\mapsto & g(y) &\qquad\qquad x^{**} &\mapsto & \Phi(\tau^*_{x^{**}}(f))(y),
\end{array}
\] and the proof is finished.
\end{proof}
\section{The fibering of $\mathcal{M}_{b,\infty}(X,B_Y)$ over $\mathcal{H}^\infty(B_Y,X^{**})$ } \label{Section-Fiber Mb}
We now focus on the set of elements in $\mathcal{M}_{b,\infty}(X,B_Y)$ that are projected to the same function $g$ of $\mathcal{H}^\infty(B_Y,X^{**})$. This is called the {\em fiber} over $g$ and is defined by
\[ \mathscr{F}(g) = \{\Phi \in \mathcal{M}_{b,\infty}(X,B_Y) \colon \xi(\Phi) = g \}. \]
Our aim in this section is to study the size of these sets.
In the scalar-valued spectrum the usual projection is $\pi:\mathcal {M}_b(X)\to X^{**}$, given by $\pi(\varphi)(x^*)=\varphi(x^*)$, for all $x^*\in X^*$. The fiber over each $z\in X^{**}$ is the set of all $\varphi$ such that $\pi(\varphi)=z$. Clearly, the fiber over $z$ contains at least the evaluation homomorphism $\delta_z$. When finite type polynomials are dense in $\mathcal{H}_b(X)$, the fiber over $z$ is just $\{\delta_z\}$ \cite[Theorem 3.3]{AronColeGamelin}.
Analogously, for every $g \in \mathcal{H}^\infty(B_Y,X^{**})$ we can define the corresponding composition homomorphism $ C_g$. Since $ C_g$ verifies that $\xi( C_g) = g$ we have that the sets $\mathscr{F}(g)$ are non-empty. Moreover, as in the scalar-valued spectrum, the density of finite type polynomials on $X$ implies that the homomorphisms $ C_g$ should be all we find in each fiber. This similarity between scalar and vector-valued spectra is made clear through the following remark.
\begin{remark}\rm \label{Composition_morphism}
For each $\Phi \in \mathcal{M}_{b,\infty}(X,B_Y)$ and each $y\in B_Y$, we denote by $\delta_y\circ \Phi\in\mathcal {M}_b(X)$ the mapping given by $[f\in\mathcal {H}_b(X)\mapsto \Phi(f)(y)]$. Then, it is clear that $\Phi$ is the composition homomorphism $ C_g$ if and only if for every $y\in B_Y$, $\delta_y\circ \Phi$ is the evaluation homomorphism $\delta_{g(y)}$. Also, $\Phi\in\mathscr{F}(g)$ if and only if for every $y\in B_Y$, $\delta_y\circ \Phi$ is in the fiber (relative to the spectrum $\mathcal {M}_b(X)$) over $g(y)\in X^{**}$.
\end{remark}
Now, we easily obtain the following which was previously observed in \cite[page 10]{DiGaMaSe}.
\begin{proposition}
Let $X$ and $Y$ be Banach spaces. If finite type polynomials are dense in $\mathcal{H}_b(X)$ then for each $g \in \mathcal{H}^\infty(B_Y,X^{**})$ we have that $\mathscr{F}(g)$ consists solely of the corresponding $ C_g$.
\end{proposition}
Whenever finite type polynomials are not dense in $\mathcal{H}_b(X)$ we might find more elements in the fibers over elements in $\mathcal{H}^\infty(B_Y,X^{**})$. For instance, the following theorem shows that if there is a polynomial in $X$ which is not weakly continuous on bounded sets there is a {\em disk} of homomorphisms in each fiber. The proof is inspired by an analogous result for the scalar-valued spectrum \cite[Theorem 3.1]{AronFalcoGarciaMaestre}.
\begin{theorem} \label{inyectar disco}
If $X$ is a Banach space such that there exists a polynomial on $X$ which is not weakly continuous on bounded sets, then for each $g \in \mathcal{H}^\infty(B_Y,X^{**})$ we can inject the complex disk $\mathbb{D}$ analytically into the fiber $\mathscr{F}(g)$.
\end{theorem}
\begin{proof} If there exists a polynomial on $X$ which is not weakly continuous on bounded sets, then (for a certain $m$) there is an $m$-homogeneous polynomial $P$ such that its canonical extension $\widetilde{P}$ is not weak-star continuous at any $x^{**} \in X^{**}$ (see \cite[Corollary 2]{BoydRyan} or \cite[Proposition 1]{AronDimant}).
Given $g \in \mathcal{H}^\infty(B_Y,X^{**})$, denoting $x^{**}_0 = g(0)$
we can find an $\varepsilon > 0$ and a bounded net $(x^{**}_{\alpha})$, weak-star convergent to $x^{**}_0$, such that $|\widetilde{P}(x^{**}_\alpha) - \widetilde{P}(x^{**}_0)| > \varepsilon$ for every $\alpha$.
We now fix an ultrafilter $\mathscr{U}$ containing the sets $\{ \alpha \colon \alpha \geq \alpha_0 \}$ and define for $t \in \mathbb D$ the mapping $\Phi_t: \mathcal{H}_b(X) \to \mathcal{H}^\infty(B_Y)$ by
\[
\Phi_t(f)(y) = \lim_{\mathscr{U}} \widetilde{f}(g(y) + t(x^{**}_\alpha - x^{**}_0 )).
\]
Note that the limit along the ultrafilter exists because, for each $t$ and each $f$, if $M$ is a bound for the net $(x^{**}_{\alpha})$ we have
\[
\left\|\widetilde{f}(g(y) + t(x^{**}_\alpha - x^{**}_0 ))\right\|\le\|f\|_{(\|g\|+M+\|x_0^{**}\|)B_X},
\] and the set $(\mathcal{M}_{b,\infty}(X,B_Y))_{\|g\|+M+\|x_0^{**}\|}$ is weak-star compact (see item \ref{comment3} of the comment about duality and compactness in the Introduction).
The previous inequality also shows that, for all $\alpha$, the mappings $[y\mapsto \widetilde{f}(g(y) + t(x^{**}_\alpha - x^{**}_0 ))]$ are in a ball of $\mathcal {H}^\infty(B_Y)=\left(\mathcal{G}^\infty(B_Y)\right)^*$ and by weak-star compactness we obtain that $\Phi_t(f)\in\mathcal {H}^\infty(B_Y)$.
Also, it is easy to see that, for each $t \in \mathbb{D}$, $\Phi_t$ is a homomorphism in $\mathcal{M}_{b,\infty}(X,B_Y)$ and $\xi(\Phi_t) = g$. To assert that the mapping $[t\mapsto \Phi_t]$ is analytic, we need to check that for every $f \in \mathcal{H}_b(X)$ the following mapping is analytic:
\begin{align*}
\Phi(f): \mathbb{D} &\to \mathcal{H}^\infty(B_Y) \\
t&\mapsto \Phi_t(f) = \lim_{\mathscr{U}} \widetilde{f}(g(y) + t(x^{**}_\alpha - x^{**}_0 )).
\end{align*}
Fix $f \in \mathcal{H}_b(X)$ and define $f_\alpha: \mathbb{D} \to\mathcal{H}^\infty(B_Y)$ by
\[ f_\alpha (t) (y) = \widetilde{f}(g(y) + t(x^{**}_\alpha - x^{**}_0 )). \]
The set $\{f_\alpha\}_\alpha$ is contained in $\|f\|_{KB_X}\overline{B}_{\mathcal{H}^\infty(\mathbb{D},\mathcal{H}^\infty(B_Y))}$, where $K=\|g\|+M+\|x_0^{**}\|$. Since, by \cite[Theorem 2.1]{Mujica},
\[ \mathcal{H}^\infty(\mathbb{D},\mathcal{H}^\infty(B_Y)) = \mathcal{L}(\mathcal G^\infty(\mathbb{D}),\mathcal{H}^\infty(B_Y)) = (\mathcal G^\infty(\mathbb{D})\widehat{\otimes}_\pi \mathcal G^\infty(B_Y))^*, \]
the set $\|f\|_{KB_X}\overline{B}_{\mathcal{H}^\infty(\mathbb{D},\mathcal {H}^\infty(B_Y))}$ is a weak-star compact set, which tells us that the limit of the $f_\alpha$'s can be taken analytically on $t$.
This proves the analyticity of the mapping
\begin{align*}
t&\mapsto \Phi_t(f) = \lim_{\mathscr{U}} \widetilde{f}(g(y) + t(x^{**}_\alpha - x^{**}_0 )).
\end{align*}
Moreover, since $P$ is an $m$-homogeneous polynomial, we can write
\[ \widetilde{P}(g(y) + t(x^{**}_\alpha - x^{**}_0 )) = \widetilde{P}(g(y)) + \sum_{j=1}^m t^j \binom{m}{j} \overset\vee{\widetilde{P}}(g(y)^{m-j},(x^{**}_\alpha - x^{**}_0)^j). \]
In particular, taking $y=0$ we obtain
\[ \Phi_t(P)(0) = \sum_{j=0}^m a_j t^j. \]
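Explicitly (a bookkeeping remark, not needed in what follows), taking the ultrafilter limit term by term in the previous expansion and recalling that $g(0)=x^{**}_0$, each limit exists because the net $(x^{**}_\alpha)$ is bounded, and the coefficients are
\[
a_0 = \widetilde{P}(x^{**}_0), \qquad a_j = \binom{m}{j}\, \lim_{\mathscr{U}} \overset\vee{\widetilde{P}}\big((x^{**}_0)^{m-j},(x^{**}_\alpha - x^{**}_0)^j\big), \quad 1\le j\le m.
\]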
Now, since
\[ |\Phi_1(P)(0) - \Phi_0(P)(0)| = \lim_{\mathscr{U}}|\widetilde{P}(x^{**}_\alpha) - \widetilde{P}(x^{**}_0)| \geq \overset\veearepsilon, \]
we have a non-constant polynomial in $t$ of degree $\leq m$, so we can find $t_0 \in \mathbb D$ and $s > 0$ such that the mapping $[t\mapsto\Phi_t(P)(0)]$ is injective on $\mathbb{D}(t_0,s)\subset\mathbb{D}$.
Finally, through the composition with the mapping $\gamma :\mathbb D \to \mathbb D(t_0,s)$ given by $[t\mapsto t_0+st]$ we obtain that $\Phi\circ\gamma:\mathbb D\to \mathscr{F}(g)$ is the desired analytic injection.
\end{proof}
\begin{remark} \rm \textbf{Fibers over constant functions.} \label{Remark: constant} The scalar-valued spectrum $\mathcal {M}_b(X)$ is naturally seen inside $\mathcal {M}_{b,\infty}(X,B_Y)$ through the inclusion mapping $[\varphi \mapsto \varphi\cdot1_Y]$, where each element of $\mathcal {M}_b(X)$ lies in a fiber over a constant function. Then, for a constant function $g\in \mathcal{H}^\infty(B_Y,X^{**})$ it is natural to wonder whether there are homomorphisms in $\mathscr{F}(g)$ not belonging to $\mathcal {M}_b(X)$. It is worth noting that the previous theorem does not provide examples of that kind. Indeed, if we take $g(y)=x_0^{**}$ for all $y$, we have that
\begin{align*}
\Phi_t (f) (y) = \lim_{\mathscr{U}} \widetilde{f} (x_0^{**} + t(x_\alpha^{**} - x_0^{**})),
\end{align*}
which is a constant function of $y$ and hence identified with an element of $\mathcal {M}_b(X)$. However, building on the previous result we obtain in the next theorem an analytic injection of the ball $B_{\mathcal {H}^\infty(B_Y)}$ in each fiber over a constant function, providing examples of non scalar-valued homomorphisms in those fibers.
\end{remark}
\begin{theorem} \label{Prop:constant_function}
If $X$ is a Banach space such that there exists a polynomial on $X$ which is not weakly continuous on bounded sets, then for each constant function $g \in \mathcal{H}^\infty(B_Y,X^{**})$ we can inject the ball $B_{\mathcal {H}^\infty(B_Y)}$ analytically in the fiber $\mathscr{F}(g)$. Moreover, through this inclusion each non-constant function in $B_{\mathcal {H}^\infty(B_Y)}$ is mapped into a non scalar-valued homomorphism of $\mathscr{F}(g)$.
\end{theorem}
\begin{proof} Given $g(y)=x_0^{**}$ for all $y$, from the proof of Theorem \ref{inyectar disco} we have an analytic injection $\Phi\circ\gamma:\mathbb D\to \mathscr{F}(g)$. Now consider $\Psi:B_{\mathcal {H}^\infty(B_Y)}\to\mathscr{F}(g)$ given by
\[
\Psi(h)(f)(y)= \Phi\circ\gamma (h(y))(f), \quad\textrm{ for all } h\in B_{\mathcal {H}^\infty(B_Y)},\ f\in\mathcal {H}_b(X),\ y\in B_Y.
\] Note that this definition makes sense because, by the previous remark, for each $h$ and $y$, the homomorphism $\Phi\circ\gamma (h(y))$ is scalar-valued.
Let us check that $\Psi$ is well defined, analytic and injective. First, note that, for all $f\in\mathcal {H}_b(X)$ and $h\in B_{\mathcal {H}^\infty(B_Y)}$, the mapping $\Psi(h)(f)$ is analytic since it is the composition of two holomorphic mappings: $[y\mapsto h(y)]$ and $[t\mapsto \Phi\circ\gamma (t)(f)]$. As before, it is bounded:
\[
\sup_{y\in B_Y}|\Psi(h)(f)(y)|\le \|f\|_{(\|g\|+M+\|x_0^{**}\|)B_X}.
\]
Now, it is readily seen that $\Psi(h)$ belongs to $\mathcal {M}_{b,\infty}(X,B_Y)$ and that in fact it is in the fiber over $g$, so $\Psi$ is well defined.
Secondly, for each $f\in\mathcal {H}_b(X)$, the mapping from $B_{\mathcal {H}^\infty(B_Y)}$ to $\mathcal {H}^\infty(B_Y)$ given by $[h\mapsto \Psi(h)(f)]$ is analytic. Indeed, by Lemma \ref{holomorphic mapping}, it is enough to see that the mapping $[h\mapsto\Psi(h)(f)(y)]$ is analytic, which again can be done by writing it as the composition of a linear and a holomorphic mapping: $[h\mapsto h(y)]$ and $[t\mapsto \Phi\circ\gamma (t)(f)]$.
Thirdly, $\Psi$ is injective because $\Phi\circ\gamma$ has the same property.
Finally, note that $\Psi$ maps each non-constant function in $B_{\mathcal {H}^\infty(B_Y)}$ into a non scalar-valued homomorphism. If $h\in B_{\mathcal {H}^\infty(B_Y)}$ is non-constant then there exist $y_1$ and $y_2$ in $B_Y$ such that $h(y_1)\not = h(y_2)$ and thus $\Phi\circ\gamma (h(y_1))\not = \Phi\circ\gamma (h(y_2))$. So $\Psi(h)$ cannot be of the form $\varphi\cdot1_Y$ for a scalar-valued $\varphi$.
\end{proof}
\section{The radius function} \label{Section-Radius function}
Aron, Cole and Gamelin \cite{AronColeGamelin} introduced a radius function on $\mathcal {M}_b(X)$ and proved several properties. Then, they extended this definition to homomorphisms in $\mathcal {M}_\infty(B_X)$ establishing a relationship between both spectra. We now follow the same plan in the vector-valued case.
Given a homomorphism $\Phi \in \mathcal{M}_{b,\infty}(X,B_Y)$ we define its radius as
\[ R(\Phi) = \inf \{ r > 0 \colon \|\Phi(f)\|_{B_Y} \leq \|f\|_{rB_X}, f \in \mathcal{H}_b(X) \}.\]
It is worth noting that since the homomorphisms in $\mathcal{M}_{b,\infty}(X,B_Y)$ are continuous we have $ 0 \leq R(\Phi) < \infty$. Furthermore, the following result regarding the continuity of $\Phi$ in $R(\Phi)B_X$ holds. Note that this is a vector-valued version of \cite[Lemma 2.1]{AronColeGamelin}. We omit the proof, as it is identical.
\begin{lemma}
For every $\Phi \in \mathcal{M}_{b,\infty}(X,B_Y)$ and $f \in \mathcal{H}_b(X)$ we have
\[ \|\Phi(f)\|_{B_Y} \leq \|f\|_{R(\Phi)B_X}.
\]
\end{lemma}
For $\Phi \in \mathcal{M}_{b,\infty}(X,B_Y)$ we denote by $\Phi_m$ its restriction to $\mathcal P(^mX)$, that is $\Phi_m$ is a linear operator from $\mathcal P(^mX)$ into $\mathcal {H}^\infty(B_Y)$. As in the scalar-valued case we have
\begin{proposition}
\label{RadioLimsup}
The radius function $R$ on $\mathcal{M}_{b,\infty}(X,B_Y)$ is given by
\begin{equation*}
R(\Phi) = \limsup_{m \to \infty} \|\Phi_m\|^{1/m}=\sup_{m \ge 1} \|\Phi_m\|^{1/m}.
\end{equation*}
\end{proposition}
\begin{proof}
The first equality follows the lines of the proof of \cite[Theorem 2.3]{AronColeGamelin}. For the second one we need a slight change in the argument. It is observed in \cite[page 55]{AronColeGamelin} that
\[
\|\varphi_m\|^2\le\|\varphi_{2m}\|\qquad\textrm{ for all } \varphi\in\mathcal {M}_b(X) \textrm{ and }m\in\mathbb N.
\]
Thus the same is true for vector-valued homomorphisms. Hence,
\[
\|\Phi_m\|^{1/m}\le\|\Phi_{2m}\|^{1/2m}\qquad\textrm{ for all } \Phi\in\mathcal {M}_{b,\infty}(X,B_Y) \textrm{ and }m\in\mathbb N,
\]
which implies that the limit superior must coincide with the supremum.
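For completeness, we sketch the one-line argument behind the quoted scalar inequality, using only the multiplicativity of $\varphi$: if $P\in\mathcal P(^mX)$ with $\|P\|_{B_X}\le 1$, then $P^2\in\mathcal P(^{2m}X)$ and $\|P^2\|_{B_X}\le 1$, so
\[
|\varphi(P)|^2 = |\varphi(P^2)| \le \|\varphi_{2m}\|\,\|P^2\|_{B_X} \le \|\varphi_{2m}\|,
\]
and taking the supremum over all such $P$ gives $\|\varphi_m\|^2 \le \|\varphi_{2m}\|$.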
\end{proof}
Note that the limit superior above is not necessarily a limit. In \cite{Deghoul} Deghoul exhibits an example of a homomorphism $\varphi$ in $\mathcal {M}_b(\ell_2)$ with $R(\varphi)\not=0$ and $\|\varphi_m\|=0$ for every odd $m$.
\begin{remark}\rm
As we have already observed, the role played by the evaluation homomorphisms $\displaystyleelta_z$ in the scalar-valued spectrum is performed here by the composition homomorphisms $ C_g$. It is easy to see that vector-valued versions of \cite[Lemma 3.1 and Lemma 3.2]{AronColeGamelin} are valid:
\[\|\xi(\Phi)\| \leq R(\Phi), \ \Phi \in \mathcal{M}_{b,\infty}(X,B_Y), \]
and also
\[R( C_g) = \|g\|, \ g \in \mathcal{H}^{\infty}(B_Y,X^{**}). \]
\end{remark}
Let us now translate the radius function to the spectrum $\mathcal{M}_\infty(B_X,B_Y)$. For that, first we consider the natural projection
\[ \varrho \colon \mathcal{M}_\infty(B_X,B_Y) \to \mathcal{M}_{b,\infty}(X,B_Y), \]
defined so that $\varrho(\Psi)$ is the restriction of $\Psi \in \mathcal{M}_\infty(B_X,B_Y)$ to $\mathcal{H}_b(X).$
We then extend the radius function $R$ to $\Psi \in \mathcal{M}_\infty(B_X,B_Y)$ by declaring $R(\Psi)$ to be the smallest value of $r$, $0 \leq r \leq 1$, such that $\Psi$ is continuous with respect to the norm of uniform convergence on the ball $rB_X$. Applying the previous results of this section and the fact that $\mathcal{M}_\infty(B_X,B_Y)$ is weak-star compact (as noted in item \ref{compact} of the observation regarding duality and compactness in the Introduction), the proof of \cite[Theorem 10.1]{AronColeGamelin} can be easily adapted to our setting, arriving at the following result.
\begin{theorem}
The image $\varrho(\mathcal{M}_\infty(B_X,B_Y))$ of the projection $\varrho$ consists of precisely the set of $\Phi \in \mathcal{M}_{b,\infty}(X,B_Y)$ such that $R(\Phi) \leq 1$. Moreover, the projection $\varrho$ establishes a one-to-one correspondence between the set of $\Psi \in \mathcal{M}_{\infty}(B_X,B_Y)$ satisfying $R(\Psi) < 1$ and the set of $\Phi \in \mathcal{M}_{b,\infty}(X,B_Y)$ satisfying $R(\Phi) < 1$.
\end{theorem}
\section{The fibering of $\mathcal{M}_{\infty}(B_X,B_Y)$ over $\overline B_{\mathcal{H}^\infty(B_Y,X^{**})}$ }\label{Section-Minf}
As in the case of $\mathcal{M}_{b,\infty}(X,B_Y)$, we can define a natural projection from the vector-valued spectrum $\mathcal{M}_{\infty}(B_X,B_Y)$ into $\mathcal{H}^\infty(B_Y,X^{**})$, by composing $\xi \colon \mathcal{M}_{b,\infty}(X,B_Y) \to \mathcal{H}^\infty(B_Y,X^{**})$ with $\varrho \colon \mathcal{M}_\infty(B_X,B_Y) \to \mathcal{M}_{b,\infty}(X,B_Y)$. In order to simplify the notation we choose to denote this projection again by $\xi$ (instead of $\xi\circ \varrho$). In this setting, $\xi$ is defined by:
\begin{align*}
\xi \colon \mathcal {M}_\infty(B_X,B_Y) &\to \mathcal{H}^\infty(B_Y,X^{**}), \\
\Phi & \mapsto \left[ y \mapsto (x^* \mapsto \Phi(x^*)(y)) \right].
\end{align*}
The image of $\xi$ is clearly contained in the closed unit ball of $\mathcal{H}^\infty(B_Y,X^{**})$. Also, for each $g\in \mathcal{H}^\infty(B_Y,X^{**})$ such that $g(B_Y)\subset B_{X^{**}}$ we can consider the composition homomorphism $ C_g\in \mathcal {M}_\infty(B_X,B_Y)$ given by $ C_g(f)=\widetilde f\circ g$, for all $f\in\mathcal {H}^\infty(B_X)$. Since $\xi( C_g)=g$ the following inclusions hold:
\[
B_{\mathcal{H}^\infty(B_Y,X^{**})}\subset \{g\in \mathcal{H}^\infty(B_Y,X^{**}):\ g(B_Y)\subset B_{X^{**}}\}\subset Im(\xi).
\]
Note that, as we have already mentioned, by \cite[Theorem 2.1]{Mujica} the space $\mathcal{H}^\infty(B_Y,X^{**})$ is isometric to $\mathcal L(\mathcal G^\infty(B_Y), X^{**})$ and so it is the dual of $\mathcal G^\infty(B_Y)\widehat\otimes_\pi X^*$.
Now, as $\mathcal{M}_{\infty}(B_X,B_Y)$ is weak-star compact and $\xi$ is weak-star to weak-star continuous, the image of $\xi$ is weak-star compact in $\mathcal{H}^\infty(B_Y,X^{**})$. Hence
\[
Im(\xi)=\overline {B}_{\mathcal{H}^\infty(B_Y,X^{**})}.
\]
Now, we turn our attention to the fibers defined by this projection. For $g\in \overline B_{\mathcal{H}^\infty(B_Y,X^{**})}$, the fiber over $g$ is the set
\[
\mathscr{F}(g) = \{\Phi \in \mathcal{M}_{\infty}(B_X,B_Y) \colon \xi(\Phi) = g \}.
\]
For the scalar-valued spectrum $\mathcal{M}_{\infty}(B_X)$, to study the fibers over $\overline B_{X^{**}}$, the distinction between points $z$ in the interior of the ball (for which the evaluation $\delta_z$ is in the fiber) and points $z$ in the boundary (where $\delta_z$ cannot be defined) is relevant. In the vector-valued case recall that for a holomorphic function $g\in \overline B_{\mathcal{H}^\infty(B_Y,X^{**})}$, if $\|g(y_0)\|=1$ for a certain $y_0\in B_Y$ then $g(y)$ belongs to $S_{X^{**}}$ (the unit sphere of $X^{**}$) for all $y\in B_Y$. Thus, to distinguish the fibers over $g\in \overline B_{\mathcal{H}^\infty(B_Y,X^{**})}$ in terms of whether $ C_g$ is or is not defined, we get the following two possibilities for $g$:
\begin{enumerate}
\item[(i)] $g(B_Y)\subset B_{X^{**}}$ (where $ C_g\in \mathscr{F}(g)$).
\item[(ii)] $g(B_Y)\subset S_{X^{**}}$ (where $ C_g$ cannot be defined).
\end{enumerate}
Note that whenever $X^{**}$ is strictly convex, the only functions $g\in \overline B_{\mathcal{H}^\infty(B_Y,X^{**})}$ with $g(B_Y)\subset S_{X^{**}}$ are the constant functions. Then, in this case the condition (ii) changes to:
\begin{enumerate}
\item[(ii')] There exists $x_0^{**}\in S_{X^{**}}$ such that $g(y)=x_0^{**}$, for all $y\in B_Y$ (where $ C_g$ cannot be defined).
\end{enumerate}
Also, as we commented for the spectrum $\mathcal {M}_{b,\infty}(X,B_Y)$ in Remark \ref{Remark: constant}, there are other {\em special } fibers in $\mathcal{M}_{\infty}(B_X,B_Y)$ which are those over constant functions $g\in \overline B_{\mathcal{H}^\infty(B_Y,X^{**})}$. Each of these fibers includes the scalar-valued fiber of $\mathcal {M}_\infty(B_X)$ over the same constant.
Recall that an element $x_0^{**}\in S_{X^{**}}$ is said to be {\em norm attaining} if there exists $x_0^*\in S_{X^*}$ such that $x_0^{**}(x_0^*)=1$.
Now we show that, if $x_0^{**}\in S_{X^{**}}$ is norm attaining, the fiber over the constant function $g(y)=x_0^{**}$ contains many elements that do not arise from the scalar-valued spectrum. The proof is built on the following proposition from \cite{AronFalcoGarciaMaestre}.
\begin{proposition} \cite[Proposition 2.1]{AronFalcoGarciaMaestre}
\label{inj-falco}
Given a Banach space $X$ and a norm attaining element $x_0^{**}\in S_{X^{**}}$, there is an analytic injection
\[ F: \mathbb{D} \hookrightarrow \mathcal{M}_{x_0^{**}}(\mathcal {H}^\infty(B_X)). \]
\end{proposition}
Note that this clearly holds for every $x_0\in S_X$. Now, to transfer this construction to the vector-valued spectrum recall that in Theorem \ref{Prop:constant_function} we have proved a similar result regarding the fibers over constant functions in the spectrum $\mathcal {M}_{b,\infty}(X,B_Y)$ with the additional hypothesis of the existence of a polynomial that is not weakly continuous on bounded sets. The proof of the following result follows the lines of the proof of Theorem \ref{Prop:constant_function}, and so we omit it.
\begin{proposition}
\label{FibrasS_X}
Given Banach spaces $X$ and $Y$ and a norm attaining element $x_0^{**}\in S_{X^{**}}$, let $g(y)=x_0^{**}$, for all $y\in B_Y$. Then there is an analytic injection
\begin{align*}
\Psi: B_{\mathcal {H}^\infty(B_Y)} &\hookrightarrow \mathscr{F}(g) \subset \mathcal {M}_\infty(B_X,B_Y)\\
\Psi(h)(f)(y) &= F(h(y)) (f),
\end{align*}
where $F$ is the mapping of the previous proposition.
Moreover, each non-constant function in $B_{\mathcal {H}^\infty(B_Y)}$ is mapped into a non scalar-valued homomorphism of $\mathscr{F}(g)$.
\end{proposition}
For a finite dimensional $X$ we quote a conjecture from \cite[page 88]{AronColeGamelin}: {\em ``one expects that the fiber over $x_0$ consists of only the evaluation homomorphism $\delta_{x_0}$ for $x_0\in B_{X}$''}. We do not know whether this is true in general, but it is certainly the case for $B_X=\mathbb D$ and for each finite dimensional ball $B_X$ such that the Gleason problem is solved for $\mathcal {H}^\infty(B_X)$ (see \cite[6.6]{Rudin-FunctionTheoryCn} or \cite{carlsson, lemmers} and references therein), for instance, $X=\ell_p^n$, with $1<p<\infty$. In the context of a strictly convex finite dimensional Banach space $X$ where the Gleason problem is solved for $\mathcal {H}^\infty(B_X)$ we have an almost complete depiction of the fibers of $\mathcal {M}_\infty(B_X,B_Y)$ which resembles the description of the fibers of $\mathcal {M}_\infty(B_X)$. The result, stated in the next theorem, is obtained by simply combining the above comments with Proposition \ref{FibrasS_X}. We point out that item $(i)$ was previously proved for the ball of $\ell_2^n$ in \cite[Theorem 6.6.5]{Rudin-FunctionTheoryCn} and for the disk $\mathbb D$ in \cite[Proposition 15]{GalindoLindstrom}.
\begin{theorem} \label{propDisk} If $X$ is a strictly convex finite dimensional Banach space such that the Gleason problem is solved for $\mathcal {H}^\infty(B_X)$ then for any given $g \in \overline{B}_{\mathcal {H}^\infty(B_Y, X)}$ there are two alternatives for the fiber $\mathscr{F}(g)$:
\begin{enumerate}
\item[(i)] If $g(B_Y) \subset B_X$, then $\mathscr{F}(g) = \{ C_g\}.$
\item[(ii)] If $g \equiv x_0$ with $x_0\in S_{X}$, then $B_{\mathcal {H}^\infty(B_Y)}$ can be analytically injected in $\mathscr{F}(g).$
\end{enumerate}
\end{theorem}
\begin{proof} First, recall that $X$ being strictly convex implies that the only two possibilities for a given $g\in \overline{B}_{\mathcal {H}^\infty(B_Y,X)}$ are those in the previous items.
Now, if $g$ satisfies $(i)$, for every $\Phi \in \mathscr{F}(g)$ and each $y\in B_Y$, we have that $\delta_y \circ \Phi \in \mathcal {M}_\infty(B_X)$ is in the fiber over $g(y)\in B_X$. Since this fiber is the singleton $\{\delta_{g(y)}\}$, we easily infer that $\mathscr{F}(g) = \{ C_g\}$.
If, instead, $g \equiv x_0$ with $x_0\in S_{X}$, the result follows from Proposition \ref{FibrasS_X}.
\end{proof}
There is a large literature on the basic case $X=Y=\mathbb C$ (that is, homomorphisms from $\mathcal {H}^\infty(\mathbb D)$ into $\mathcal {H}^\infty(\mathbb D)$). Nevertheless, through our construction we can give a slightly different description of the spectrum $\mathcal {M}_\infty(\mathbb D, \mathbb D)$ resembling the classical scalar-valued situation. Indeed, this vector-valued spectrum is projected onto $\overline B_{\mathcal {H}^\infty(\mathbb D)}$, the projection being one-to-one over the set $\{g\in \overline B_{\mathcal {H}^\infty(\mathbb D)}:\ g(\mathbb D)\subset \mathbb D\}$. The remaining fibers (i.e., those over constant functions $g$ of modulus 1) are large: they contain plenty of non scalar-valued homomorphisms and each one contains an analytic copy of $B_{\mathcal {H}^\infty(\mathbb D)}$.
For any infinite dimensional Banach space $X$ we know from \cite[Theorem 11.1]{AronColeGamelin} that each fiber of the spectrum $\mathcal {M}_\infty(B_X)$ contains a homeomorphic copy of $\beta(\mathbb N)$. This canonically translates to fibers over constant functions $g\in \overline{B}_{\mathcal {H}^\infty(B_Y,X^{**})}$ in the spectrum $\mathcal {M}_\infty(B_X,B_Y)$. We can extend this result to fibers over (non-constant) functions $g$ of constant norm 1. Recall that $\beta(\mathbb{N})\setminus \mathbb N$ contains a homeomorphic copy of $\beta(\mathbb N)$ so it is enough to obtain a homeomorphic copy of $\beta(\mathbb{N})\setminus \mathbb N$ inside the fiber.
\begin{proposition}
If $X$ is an infinite dimensional Banach space and $g \in \overline{B}_{\mathcal {H}^\infty(B_Y,X^{**})}$ is a function of constant norm $1$, then the fiber in $\mathcal {M}_\infty(B_X,B_Y)$ over $g$ contains a homeomorphic copy of $\beta(\mathbb{N})$.
\end{proposition}
\begin{proof}
Let $g \in \overline{B}_{\mathcal {H}^\infty(B_Y,X^{**})}$ be a function of constant norm 1 and fix $y_0 \in B_Y$. Since $\|g(y_0)\| = 1$, by \cite[Theorem 10.5]{AronColeGamelin}, for each sequence $(r_n)_n\subset \mathbb D$ with $r_n \to 1$ the sequence $(r_ng(y_0))$ has an interpolating subsequence for $\mathcal {H}^\infty(B_X)$ (which we still call $(r_ng(y_0))$).
We now write $g_n = r_ng$ and consider the mapping
\begin{align*}
I:\mathbb{N} &\to \overline{\{C_{g_n}\}}^{w^*} \subseteq \mathcal{M}_\infty(B_X,B_Y) \\
m &\mapsto C_{g_m}.
\end{align*}
By the universal property of $\beta\mathbb{N}$, there is a continuous extension $\beta I: \beta \mathbb{N} \to \overline{\{C_{g_n}\}}^{w^*}$ such that $\beta I|_{\mathbb{N}} = I$. Since $(r_ng(y_0))$ is an interpolating sequence, the composition $\delta_{y_0} \circ \beta I$ is injective, and hence so is $\beta I$.
Finally, a straightforward computation shows that for every $\eta\in \beta(\mathbb{N})\setminus \mathbb N$, the image $\beta I (\eta)$ lies in the fiber over $g$.
\end{proof}
From the above, we know that, when $X$ is infinite dimensional, fibers of $\mathcal {M}_\infty(B_X,B_Y)$ over constant functions or over functions of constant norm 1 are large. It is thus natural to ask whether the same is true for the remaining fibers, that is, fibers over non-constant functions $g\in \overline{B}_{\mathcal {H}^\infty(B_Y,X^{**})}$ with $g(B_Y)\subset B_{X^{**}}$. We have no general answer to this question. We can only say something under the hypothesis that there exists a polynomial which is not weakly continuous on bounded sets, and even in this case we can only reach the fibers over functions $g \in B_{\mathcal {H}^\infty(B_Y,X^{**})}$, as we present below. We do not know whether this result can be extended to fibers over functions $g$ of norm 1 such that $g(B_Y)\subset B_{X^{**}}$.
\begin{theorem} \label{Inyectar disco Minf}
If $X$ is a Banach space such that there exists a polynomial on $X$ which is not weakly continuous on bounded sets, then for each $g \in B_{\mathcal {H}^\infty(B_Y,X^{**})}$ we can inject the complex disk $\mathbb{D}$ analytically in the fiber $\mathscr{F}(g)$.
\end{theorem}
The proof is quite similar to that of Theorem \ref{inyectar disco}, so we omit it. A slight change arises while choosing the net $(x_\alpha^{**})$ which should be taken in the ball $B(x_0^{**},1-\|g\|)$.
Also, mimicking the arguments of Theorem \ref{Prop:constant_function} we have an analogous result for the fibers over constant functions of norm smaller than 1.
\begin{theorem} \label{Inyectar bola Minf}
If $X$ is a Banach space such that there exists a polynomial on $X$ which is not weakly continuous on bounded sets, then for each constant function $g \in B_{\mathcal{H}^\infty(B_Y,X^{**})}$ we can inject the ball $B_{\mathcal {H}^\infty(B_Y)}$ analytically in the fiber $\mathscr{F}(g)$. Moreover, through this inclusion each non-constant function in $B_{\mathcal {H}^\infty(B_Y)}$ is mapped into a non scalar-valued homomorphism of $\mathscr{F}(g)$.
\end{theorem}
A typical example of a space where all the polynomials are weakly continuous on bounded sets is the sequence space $c_0$. The study of the algebra $\mathcal {H}^\infty(B_{c_0})$ is interesting also because $B_{c_0}$ is the natural infinite dimensional extension of the polydisk $\mathbb D^n$. Even though the previous results do not apply to $X=c_0$, it is anyway possible in this case to insert analytic copies of balls into the fibers of the vector-valued spectrum. This result and a thorough study about $\mathcal {M}_\infty(B_{c_0}, B_{c_0})$ will appear in a forthcoming article \cite{DimantSinger2}.
\section{Gleason parts for $\mathcal{M}_{\infty}(B_X,B_Y)$ }\label{Section-Gleason parts}
The study of Gleason parts in the spectrum of uniform algebras was motivated by the search for analytic structure. This is justified by the fact that the image of an analytic mapping from an open convex set into the spectrum should be contained in a single Gleason part. A thorough description of Gleason parts for the spectrum $\mathcal {M}_\infty(\mathbb D)$ was made by K. Hoffman in \cite{Hoffman} and later on deeply studied by several authors (see, for instance, \cite{Gorkin, mortini, Suarez}). For an infinite dimensional Banach space $X$ the study of Gleason parts for $\mathcal {M}_\infty(B_X)$ (with special emphasis on the case $X=c_0$) was initiated in \cite{AronDimantLassalleMaestre}. Let us recall this notion.
For a uniform algebra $\mathcal{A}$ with spectrum $\mathcal {M}(\mathcal{A})$, the {\em pseudo-hyperbolic distance} in the spectrum is given by $\rho(\varphi, \psi) =
\sup\{|\varphi(f)|: \ f \in \mathcal{A}, \|f\| \leq 1, \psi(f) = 0 \}$. Note that $\rho(\varphi, \psi)\le 1$ and that this notion is related to the usual metric in the spectrum by the following known equality \cite[Theorem~2.8]{Bear}:
\begin{equation}\label{Gleason-metric}
\|\varphi - \psi\| = \frac{2 - 2\sqrt{1 - \rho(\varphi, \psi)^2}}{\rho(\varphi,\psi)}.
\end{equation}
This means that $\|\varphi - \psi\| = 2 $ if and only if $\rho(\varphi, \psi) = 1$, motivating
for each $\varphi \in \mathcal M(\mathcal A)$, the definition of the {\em Gleason part} of $\varphi$ as
the set
\[\mathcal {GP}(\varphi) = \{ \psi : \ \rho(\varphi,\psi) < 1\}=\{\psi : \ \|\varphi - \psi\| < 2\}.\]
An interesting aspect here is that these sets form a partition of $\mathcal {M}(\mathcal{A})$ into equivalence classes.
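For orientation, we recall the classical scalar picture (not needed for the proofs below): for $\mathcal{A} = \mathcal{H}^\infty(\mathbb D)$ and evaluations $\delta_z, \delta_w$ at points $z,w\in\mathbb D$, the Schwarz--Pick lemma yields
\[
\rho(\delta_z,\delta_w) = \left|\frac{z-w}{1-\overline{z}\,w}\right|,
\]
so all the evaluations at points of $\mathbb D$ lie in one and the same Gleason part. This is the formula used in the example at the end of this section.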
We can extend the notion of Gleason part to the vector-valued spectrum. Indeed, for $\Phi\in\mathcal {M}_\infty(B_X, B_Y)$ we set
\[\mathcal {GP}(\Phi) = \{\Psi : \ \|\Phi - \Psi\| < 2\}=\{ \Psi : \ \sigma(\Phi, \Psi) =\sup_{y\in B_Y}\rho(\delta_y\circ\Phi,\delta_y\circ\Psi) < 1\}.\]
The equality between the above sets is clear from the analogous statement in the scalar-valued spectrum (recall that $\delta_y\circ\Phi$ belongs to $\mathcal {M}_\infty(B_X)$ for each $\Phi\in\mathcal {M}_\infty(B_X, B_Y)$ and $y\in B_Y$). It is also readily seen, appealing again to the scalar-valued result, that Gleason parts lead to a partition of $\mathcal {M}_\infty(B_X, B_Y)$ into equivalence classes. This notion, without the specific name of {\em Gleason parts}, was previously considered in several articles (see, for instance, \cite{AronGalindoLindstrom, chu-hugli-mackey,GalindoGamelinLindstrom,GorkinMortiniSuarez, HozokawaIzuchiZheng,MacCluerOhnoZhao}).
In \cite{AronDimantLassalleMaestre} the relationship between fibers and Gleason parts for $\mathcal {M}_\infty(B_X)$ was addressed.
Some of the results of that article have a vector-valued counterpart that we now present. We begin by proving a version of \cite[Proposition 1.1]{AronDimantLassalleMaestre} that shows which fibers might share Gleason parts.
Here the notation $C_{\mathbf 0}$ refers to the composition homomorphism by the constant function $g\equiv 0$.
\begin{proposition}{\label{basic 1}}
Let $X$ and $Y$ be Banach spaces.
\begin{enumerate}[\upshape (a)]
\item For every $g \in B_{\mathcal{H}^\infty(B_Y,X^{**})}$, the composition homomorphism $ C_g$ is contained in the Gleason part $\mathcal{GP}(C_{\mathbf 0})$. In fact, $\sigma( C_g,C_{\mathbf 0}) = \|g\|$.
\item Let $g \in S_{\mathcal{H}^\infty(B_Y,X^{**})}$ and $h\in B_{\mathcal{H}^\infty(B_Y,X^{**})}$. For any $\Phi \in \mathscr{F}(g)$ and $\Psi \in \mathscr{F}(h)$ we have that $\Phi$ and $\Psi$ lie in different Gleason parts.
\item Let $g,\, h\in \mathcal{H}^\infty(B_Y,X^{**})$ with $g(B_Y)\subset B_{X^{**}}$ and $h(B_Y)\subset S_{X^{**}}$. For any $\Phi \in \mathscr{F}(g)$ and $\Psi \in \mathscr{F}(h)$ we have that
$\Phi$ and $\Psi$ lie in different Gleason parts.
\end{enumerate}
\end{proposition}
\begin{proof}
(a) By the Schwarz lemma and the fact that $S_{X^*}$ is a norming set for $X^{**}$ we obtain
\begin{eqnarray*}
\rho(\delta_y\circ C_g,\delta_y\circ C_{\mathbf 0}) &= &\sup\{|\delta_y\circ C_g(f)|: f\in\mathcal {H}^\infty(B_X),\ \|f\| \leq 1, \delta_y\circ C_{\mathbf 0}(f) = 0 \}\\
&= & \sup\{|\widetilde f(g(y))|: f\in\mathcal {H}^\infty(B_X),\ \|f\| \leq 1, f(0) = 0 \}= \|g(y)\|.
\end{eqnarray*}
Hence, $\sigma( C_g,C_{\mathbf 0}) = \|g\|$.
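Spelling out the two ingredients just quoted (a sketch; recall that $\widetilde f$ denotes the canonical extension and that, by the Davie--Gamelin theorem, $\sup_{B_{X^{**}}}|\widetilde f|=\|f\|$): for $f\in\mathcal{H}^\infty(B_X)$ with $\|f\|\le 1$ and $f(0)=0$, the Schwarz lemma applied to $\lambda\mapsto \widetilde f\big(\lambda\, g(y)/\|g(y)\|\big)$ gives $|\widetilde f(g(y))|\le \|g(y)\|$ (the case $g(y)=0$ being trivial), while taking $f=x^*$ with $x^*\in S_{X^*}$ and using that $S_{X^*}$ norms $X^{**}$ yields
\[
\sup\{|\widetilde f(g(y))| \colon \|f\|\le 1,\ f(0)=0\} \ \ge\ \sup_{x^*\in S_{X^*}} |g(y)(x^*)| \ =\ \|g(y)\|;
\]
combining both bounds gives the equality above.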
(b) Since $g \in S_{\mathcal{H}^\infty(B_Y,X^{**})}$ there exist sequences $(y_n)\subset B_Y$ and $(x_n^*)\subset S_{X^*}$ such that $g(y_n)(x_n^*)\to 1$. Let us take $\lambda_n= h(y_n)(x_n^*)$ and note that $|\lambda_n|\le \|h\|<1$, for every $n$. We consider, for each $n$ and $m$, the following function defined on $B_X$:
\[f_{n,m}(\cdot) = \frac{(x^*_n(\cdot))^m - \lambda_n^m}{\| (x^*_n)^m - \lambda_n^m \|}.\]
It is clear that $f_{n,m}$ belongs to $\mathcal {H}^\infty(B_X)$ and $\|f_{n,m}\|=1$; moreover, by the multiplicativity of $\Psi$ and the fact that $\xi(\Psi)=h$, we get $\Psi(f_{n,m})(y_n)=\frac{h(y_n)(x_n^*)^m - \lambda_n^m}{\| (x^*_n)^m - \lambda_n^m \|}=0$. Thus,
\begin{eqnarray*}
\sigma(\Phi, \Psi) & = & \sup_{y\in B_Y} \sup\{ |\Phi(f)(y)|:\ f\in \mathcal {H}^\infty(B_X),\, \|f\|\le 1,\, \Psi(f)(y)=0\}\\
&\ge & \sup_{n,m} |\Phi(f_{n,m})(y_n)|=\sup_{n,m} \frac{|g(y_n)(x_n^*)^m-\lambda_n^m|}{\| (x^*_n)^m - \lambda_n^m \|}\\
&\ge & \sup_{n,m}\frac{|g(y_n)(x_n^*)|^m-\|h\|^m}{1+\|h\|^m}=1.
\end{eqnarray*}
Here the last supremum equals $1$ because $\|h\|<1$ and $g(y_n)(x_n^*)\to 1$: for fixed $m$, letting $n\to\infty$ shows that it is at least $\frac{1-\|h\|^m}{1+\|h\|^m}$, and this lower bound tends to $1$ as $m\to\infty$. Consequently, $\Phi$ and $\Psi$ lie in different Gleason parts.
(c) For any $y\in B_Y$ we know that $\delta_y\circ\Phi$ and $\delta_y\circ\Psi$ are in the fibers (with respect to the scalar-valued spectrum $\mathcal {M}_\infty(B_X)$) over $g(y)\in B_{X^{**}}$ and $h(y)\in S_{X^{**}}$, respectively. By \cite[Proposition 1.1]{AronDimantLassalleMaestre}, $\rho(\delta_y\circ\Phi, \delta_y\circ\Psi)=1$ and hence $\sigma(\Phi, \Psi)=1$.
\end{proof}
The statement of item (a) of the previous proposition, for functions $g \in B_{\mathcal{H}^\infty(B_Y,X)}$, was proved in \cite[Proposition 3]{AronGalindoLindstrom}. Also, item (b), in the particular case where $\Phi$ and $\Psi$ are composition homomorphisms, appeared in \cite[Proposition 5]{AronGalindoLindstrom}.
Recall that in the previous section we separated the fibers over functions $g\in\overline B_{\mathcal {H}^\infty(B_Y,X^{**})}$ into two cases:
\begin{enumerate}
\item[(i)] $g(B_Y)\subset B_{X^{**}}$.
\item[(ii)] $g(B_Y)\subset S_{X^{**}}$.
\end{enumerate}
Now, in light of the above proposition, to study Gleason parts it is relevant to split the first condition to distinguish whether the norm of $g$ is either 1 or smaller. Hence, the possible fibers to consider (with no intersection of Gleason parts) are:
\begin{enumerate}[(i)]
\item Fibers over functions $g$ with $\|g\|<1$. Referred to as {\em interior fibers}.
\item Fibers over functions $g$ with $g(B_Y)\subset B_{X^{**}}$ and $\|g\|=1$. Referred to as {\em middle fibers}.
\item Fibers over functions $g$ with $g(B_Y)\subset S_{X^{**}}$. Referred to as {\em edge fibers}.
\end{enumerate}
Note also that, from (a), we have $\{ C_g:\ g\in B_{\mathcal{H}^\infty(B_Y,X^{**})}\}\subset \mathcal{GP}(C_{\mathbf 0})$. This inclusion could be strict, for instance, when there is a polynomial on $X$ which is not weakly continuous on bounded sets, as the following result shows. This is a vector-valued version of \cite[Proposition 1.2 and Corollary 1.3]{AronDimantLassalleMaestre}.
\begin{proposition}{\label{basic 2}}
Let $X$ and $Y$ be Banach spaces.
\begin{enumerate}[\upshape (a)]
\item Let $(g_\alpha)$ be a net in $B_{\mathcal{H}^\infty(B_Y,X^{**})}$ with $\|g_\alpha\|\le r<1$, for all $\alpha$. If the net of composition homomorphisms $(C_{g_\alpha})$ is weak-star convergent to an element $\Phi$ in $\mathcal {M}_\infty(B_X, B_Y)$ then $\Phi$ is contained in the Gleason part $\mathcal{GP}(C_{\mathbf 0})$.
\item If there exists a polynomial on $X$ which is not weakly continuous on bounded sets, then $\{ C_g:\ g\in B_{\mathcal{H}^\infty(B_Y,X^{**})}\}$ is a proper subset of $\mathcal{GP}(C_{\mathbf 0})$.
\end{enumerate}
\end{proposition}
\begin{proof}
(a) Take any $f \in \mathcal {H}^\infty(B_X)$ such that $\|f\| = 1$ and $f(0) = 0$, and an element $y\in B_Y$. By the weak-star convergence, for any fixed $\varepsilon>0$ such that $r + \varepsilon <1$ we can find $\alpha$ such that $|C_{g_\alpha} (f)(y) - \Phi(f)(y)| < \varepsilon.$
Then, \[|\Phi(f)(y) - C_{\mathbf 0}(f)(y)| \leq \varepsilon + |C_{\mathbf 0}(f)(y) - C_{g_\alpha} (f)(y)|\le \varepsilon + \sigma(C_{g_\alpha},C_{\mathbf 0})=\varepsilon + \|g_\alpha\| < \varepsilon + r.\] Thus, $\sigma(\Phi,C_{\mathbf 0}) < 1,$ which concludes the proof.
(b) Working as in Theorem \ref{Inyectar disco Minf} (which, in turn, refers to Theorem \ref{inyectar disco}) we can construct a net $(C_{g_\alpha})$, as in item (a), that is weak-star convergent to a homomorphism $\Phi$ which is in the Gleason part of $C_{\mathbf 0}$ but is not of composition type.
\end{proof}
Observe that the vector-valued spectrum $\mathcal {M}_\infty(B_X,B_Y)$ is a metric space when viewed as a subset of $\mathcal L(\mathcal {H}^\infty(B_X), \mathcal {H}^\infty(B_Y))$. Since its metric is given by $\|\Phi-\Psi\|$ we refer to it as the {\em Gleason metric}. The following proposition, which is a version of \cite[Proposition 1.6]{AronDimantLassalleMaestre}, gives conditions under which there is an isometry in the spectrum that maps each fiber onto another fiber.
\begin{proposition}
Let $X,Y$ be Banach spaces and $\theta: B_X \to B_X$ be an automorphism. Then the mapping
\begin{align*}
\Lambda_{\theta}:\mathcal {M}_\infty(B_X,B_Y) &\to \mathcal {M}_\infty(B_X,B_Y)\\
\Phi &\mapsto (f \mapsto \Phi(f\circ \theta))
\end{align*}
is an isometry with respect to the Gleason metric. Moreover, if $X$ is symmetrically regular and for every $x^* \in X^*$ both $x^*\circ \theta$ and $x^*\circ\theta^{-1}$ are uniform limits of finite type polynomials, then for every $g \in \overline{B}_{\mathcal {H}^\infty(B_Y,X^{**})}$ we have that $\Lambda_{\theta}(\mathscr{F}(g)) = \mathscr{F}(\widetilde{\theta}\circ g).$
\end{proposition}
\begin{proof}
For $\Phi$ and $\Psi$ in $\mathcal {M}_\infty(B_X,B_Y)$ we have that
\begin{equation*}
\| \Lambda_{\theta}(\Phi) - \Lambda_{\theta}(\Psi)\| = \sup_{\substack {f\in \mathcal {H}^\infty(B_X)\\\|f\| \le 1}} \|\Phi(f\circ \theta) - \Psi(f \circ \theta) \| \leq \|\Phi - \Psi\|.
\end{equation*}
Applying the same inequality to $\Lambda_{\theta^{-1}}$ and noting that $\Lambda_{\theta^{-1}} \circ \Lambda_{\theta} = Id$, we obtain the desired isometry.
Assume now that $X$ is symmetrically regular and for every $x^* \in X^*$ we have that both $x^* \circ \theta$ and $x^* \circ \theta^{-1}$ lie in the closure of finite type polynomials. Fix $g \in \overline{B}_{\mathcal {H}^\infty(B_Y,X^{**})}$. For any $\Phi \in \mathscr{F}(g)$, $y \in B_Y$, $x^* \in X^*$ we have
\begin{align*}
\Lambda_{\theta}(\Phi)(x^*)(y) &= \Phi(x^*\circ \theta)(y).\\
\intertext{Since $x^*\circ \theta$ is a uniform limit of finite type polynomials, the same is true of $\widetilde{x^*\circ \theta}$, which implies that there is a unique extension of $\widetilde\theta$ to $\overline{B}_{X^{**}}$ through weak-star continuity. Thus, we can compute}
\Phi(x^*\circ \theta)(y) &= \widetilde{x^* \circ \theta} (g(y))=\widetilde{\theta}(g(y))(x^*).
\end{align*}
This means that $\Lambda_{\theta}(\mathscr{F}(g))$ is contained in $\mathscr{F}(\widetilde{\theta} \circ g)$. Also, since $X$ is symmetrically regular, arguing as in the proof of \cite[Corollary 2.2]{Choi} we can see that
$\widetilde{\theta^{-1}}\circ \widetilde\theta=Id$,
and so repeating the same argument as above for $\theta^{-1}$ instead of $\theta$ we obtain that
\[
\mathscr{F}(\widetilde{\theta} \circ g)=\Lambda_{\theta}\left[ \Lambda_{\theta^{-1}}(\mathscr{F}(\widetilde \theta\circ g))\right]\subset \Lambda_{\theta}(\mathscr{F}(\widetilde{\theta^{-1}}\circ\widetilde \theta\circ g))=\Lambda_{\theta}(\mathscr{F}( g)).
\]
Hence, the desired equality between the fibers is proved.
\end{proof}
Examples of automorphisms of the ball satisfying the conditions of the previous proposition are shown in \cite[Examples 1.7 and 1.8]{AronDimantLassalleMaestre} for $X=c_0$ and $X=\ell_2$. The conclusion about the fibers of the vector-valued spectrum then holds in these cases for any Banach space $Y$.
For the scalar-valued spectrum of a uniform algebra it is known that the image of an open convex set through an analytic injection is contained in a single Gleason part (see, for instance, \cite[Lemma 2.1]{Hoffman} or \cite[Proposition 3.4]{AronDimantLassalleMaestre}). By Lemma \ref{holomorphic mapping} the same result is valid for the vector-valued spectrum.
\begin{proposition}
Given Banach spaces $X$, $Y$ and $Z$ and an open convex set $U\subset Z$, let $F:U\to \mathcal {M}_\infty(B_X,B_Y)$ be an analytic injection. Then, $F(U)$ is contained in a single Gleason part.
\end{proposition}
This proposition combined with some of the results of Section \ref{Section-Minf} provides examples of situations where a ball is contained in the intersection of a fiber and a Gleason part. More precisely we have:
\begin{itemize}
\item If $x_0^{**}\in S_{X^{**}}$ is a norm attaining element and $g(y)=x_0^{**}$, for all $y\in B_Y$, consider the analytic injection $\Psi$ of Proposition \ref{FibrasS_X}. Then, $\Psi(B_{\mathcal {H}^\infty(B_Y)})$ is contained in the intersection of a Gleason part and $\mathscr{F}(g)$.
\item If there exists a polynomial on $X$ which is not weakly continuous on bounded sets, by Theorem \ref{Inyectar disco Minf}, for each $g \in B_{\mathcal {H}^\infty(B_Y,X^{**})}$ there is a copy of the complex disk $\mathbb{D}$ in the intersection of a Gleason part and $\mathscr{F}(g)$.
\item If there exists a polynomial on $X$ which is not weakly continuous on bounded sets, by Theorem \ref{Inyectar bola Minf}, for each constant function $g \in B_{\mathcal {H}^\infty(B_Y,X^{**})}$ there is a copy of the unit ball $B_{\mathcal {H}^\infty(B_Y)}$ in the intersection of a Gleason part and $\mathscr{F}(g)$.
\end{itemize}
The above examples and Proposition \ref{basic 1} (a) show situations in which Gleason parts contain balls; so we can say that they are {\em large} Gleason parts. Yet, in this spectrum, there also exist singleton Gleason parts. On the one hand, it is easily seen that any singleton Gleason part of the scalar-valued spectrum $\mathcal {M}_\infty(B_X)$ is also a singleton Gleason part of $\mathcal {M}_\infty(B_X, B_Y)$. On the other hand, singleton Gleason parts which are not in fibers over constant functions surely exist in $\mathcal {M}_\infty(B_X, B_X)$. For instance, the identity mapping forms a singleton Gleason part. This was addressed in \cite{chu-hugli-mackey}, where it was shown that no composition operator can lie in the same Gleason part, answering a conjecture stated in \cite{AronGalindoLindstrom}, where a particular case was studied. A complete proof of the statement can be found in \cite{GalindoGamelinLindstrom}. Also, if $g:B_Y\to B_X$ is biholomorphic then the Gleason part containing $ C_g\in \mathcal {M}_\infty(B_X, B_Y)$ is a singleton. Indeed, it is readily seen that the mapping from $\mathcal{M}_\infty(B_Y)$ to $\mathcal{M}_\infty(B_X)$ given by $[\varphi \mapsto \varphi \circ C_g]$ maps strong boundary points to strong boundary points. The result then follows from \cite[Theorem 6.2]{GalindoGamelinLindstrom}.
\begin{example} \textbf{Relationship between fibers and Gleason parts.}\\
\rm The case $B_X=\mathbb D$ allows us to show how the relationship between fibers and Gleason parts differs depending on whether we consider interior, middle or edge fibers.
We take into account the description of the fibers of the spectrum $\mathcal {M}_\infty(\mathbb D, B_Y)$ made in Theorem \ref{propDisk} along with what we know from Proposition \ref{basic 1} about Gleason parts.
\begin{itemize}
\item Interior fibers of $\mathcal {M}_\infty(\mathbb D, B_Y)$ only contain the corresponding composition homomorphism. Then, $\mathcal{GP}(C_{\mathbf 0})=\{ C_g:\ g\in B_{\mathcal {H}^\infty(B_Y)}\}$, so there is only one Gleason part through all the interior fibers.
\item For edge fibers, take $\lambda\not=\mu$ with $|\lambda|=|\mu|=1$ and $\Phi\in\mathscr{F}(g)$, $\Psi\in\mathscr{F}(h)$, where $g(y)= \lambda$ and $h(y)= \mu$, for all $y\in B_Y$. Then, for any $y$, we have that $\delta_y\circ \Phi$ and $\delta_y\circ \Psi$ are homomorphisms in the scalar-valued spectrum $\mathcal {M}_\infty(\mathbb D)$ belonging to the fibers over $\lambda$ and $\mu$, respectively. Thus, it is known that they belong to different Gleason parts. Hence, $1=\rho(\delta_y\circ \Phi, \delta_y\circ \Psi)\le \sigma(\Phi, \Psi)$ and so $\mathcal{GP}(\Phi)\not= \mathcal{GP}(\Psi)$. Therefore, no Gleason part can have elements from different edge fibers.
\end{itemize}
The transition between one Gleason part containing all the fibers (interior case) and Gleason parts inside the fibers (edge case) is made by the middle fibers:
\begin{itemize}
\item Any middle fiber only contains the corresponding composition homomorphism, yet several (but not all) middle fibers might belong to the same Gleason part.
\end{itemize}
We show this last situation with the following example, which is partially adapted from \cite[Example 2]{MacCluerOhnoZhao}.
For $y^*\in S_{Y^*}$, consider the functions $g(y)=y^*(y)$, $h(y)=\frac{y^*(y)+1}{2}$ and $i(y)=\frac{y^*(y)+1}{2}+k(y^*(y)-1)^2$, for every $y\in B_Y$, where $0<k<\frac18$. They are all {\em middle} functions in $\mathcal {H}^\infty(B_Y)$ and the composition homomorphisms associated to them satisfy $\mathcal{GP}(C_g)\not= \mathcal{GP}(C_h)=\mathcal{GP}(C_i)$. Indeed, take a sequence $(y_n)$ in $B_Y$ such that $y^*(y_n)\to -1$. Then
\[
\rho(\delta_{y_n}\circ C_g, \delta_{y_n}\circ C_h)= \rho(\delta_{g(y_n)},\delta_{h(y_n)})=\left|\frac{g(y_n)-h(y_n)}{1-\overline {g(y_n)} h(y_n)}\right| \to 1,
\] since $g(y_n)\to -1$ while $h(y_n)\to 0$. This implies that $\mathcal{GP}(C_g)\not= \mathcal{GP}(C_h)$. On the other hand we have
\begin{eqnarray*}
\sigma(C_h, C_i) & =&\sup_{y\in B_Y}\rho(\delta_{y}\circ C_h, \delta_{y}\circ C_i)= \sup_{y\in B_Y} \rho(\delta_{h(y)},\delta_{i(y)})\\
&=& \sup_{y\in B_Y} \left|\frac{h(y)-i(y)}{1-\overline {h(y)} i(y)}\right| = \sup_{y\in B_Y} \left|\frac{k(y^*(y)-1)^2}{1- \frac{\overline{y^*(y)}+1}{2} (\frac{y^*(y)+1}{2}+k(y^*(y)-1)^2)}\right|\\
&=& \sup_{z\in\mathbb D} \left|\frac{k(z-1)^2}{1- \frac{\overline{z}+1}{2} (\frac{z+1}{2}+k(z-1)^2)}\right| <1,
\end{eqnarray*} where the last inequality is proved in \cite[Example 2]{MacCluerOhnoZhao}.
Therefore, $\mathcal{GP}(C_h)= \mathcal{GP}(C_i)$.
\end{example}
\textbf{Acknowledgements.} We would like to thank Daniel Carando for helpful conversations.
\end{document}
\begin{document}
\title{Structured Latent Factor Analysis for Large-scale Data: Identifiability, Estimability, and Their Implications}
\doublespacing
\begin{abstract}
Latent factor models are widely used to measure unobserved latent traits in social and behavioral sciences, including psychology, education, and marketing. When used in a confirmatory manner, design information is incorporated as zero constraints on corresponding parameters, yielding structured (confirmatory) latent factor models.
In this paper, we study how such design information affects the identifiability and the estimation of a structured latent factor model.
Insights are gained through both asymptotic and non-asymptotic analyses. Our asymptotic results are established under a regime where both the number of manifest variables and the sample size diverge, motivated by applications to large-scale data. Under this regime, we define the structural identifiability of the latent factors and establish necessary and sufficient conditions that ensure structural identifiability. In addition, we propose an estimator which is shown to be consistent and rate optimal when structural identifiability holds. Finally, a non-asymptotic error bound is derived for this estimator, through which the effect of design information is further quantified. Our results shed light on the design of large-scale measurement in education and psychology and have important implications for measurement validity and reliability.
\end{abstract}
\noindent
KEY WORDS:
High-dimensional latent factor model, confirmatory factor analysis, identifiability of latent factors, structured low-rank matrix, large-scale psychological measurement
\section{Introduction}
Latent factor models are one of the major statistical tools for multivariate data analysis
and have found many applications in social and behavioral sciences \citep{anderson2003introduction,bartholomew2011latent}.
Such models capture and interpret the common dependence among multiple manifest variables through the introduction of low-dimensional latent factors, where the latent factors are often given substantive interpretations. For example, in educational and psychological measurement, the manifest variables may be a test-taker's responses to test items, and the latent factors are often interpreted as his/her cognitive abilities and psychological traits, respectively. In marketing, the manifest variables may be an audience's ratings on movies, and the latent factors may be interpreted as his/her preferences on multiple characteristics of movies.
In many applications, latent factor models are used in a confirmatory manner, where design information on the relationship between manifest variables and latent factors is specified a priori.
This information is translated into zero constraints on the corresponding loading parameters of the model. We call such a model a \textit{structured latent factor model}. For example, consider a
mathematics test consisting of $J$ items that measures students' abilities on calculus and algebra. Then a structured latent factor model with two factors may be specified to model its item response data, where the two factors may be
interpreted as the calculus and algebra factors, respectively.
The relationship between the manifest variables and the two factors is coded by a $J\times 2$ binary matrix. The $j$th row being $(1, 0)$, $(0, 1)$, or $(1, 1)$ means that the $j$th item can be solved using only calculus skills, only algebra skills, or both, respectively.
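As a purely illustrative (hypothetical) example, a four-item test of this kind could be coded by the binary matrix
\[
\begin{pmatrix}
1 & 0\\
0 & 1\\
1 & 1\\
0 & 1
\end{pmatrix},
\]
whose rows indicate that item 1 requires only calculus, items 2 and 4 require only algebra, and item 3 requires both; each zero entry is translated into a zero constraint on the corresponding loading parameter.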
Identifiability is an important property of a structured latent factor model for ensuring the substantive interpretation of latent factors.
When the model is not identifiable, certain latent factors cannot be uniquely extracted from data and thus their substantive interpretations may not be valid.
In particular, if no design information is incorporated, then it is well known that a latent factor model is not identifiable due to rotational indeterminacy, in which case one can simultaneously rotate the latent factors and the loading matrix of the model, without changing the distribution of data. It is thus important to study the relationship between design information of manifest variables and identifiability of latent factors.
That is,
when is a latent factor identifiable and when is it not?
A related problem is the estimation of a structured latent factor model. Under what notion of consistency can the latent factors be consistently estimated? In that case, what is the convergence rate? These problems will be investigated in this paper. Similar identifiability problems have been considered in the context of linear factor models \citep{anderson1956statistical,shapiro1985identifiability,grayson1994identification} and restricted latent class models \citep{xu2016identifiability,Xu2017,gu2018sufficient}.
This paper considers the identifiability and estimability of structured latent factor models.
We study this problem under a generalized latent factor modeling framework, which includes latent factor models for continuous, categorical, and count data.
Unlike many existing works that take an empirical Bayes framework treating the latent factors as random variables, we treat them as unknown model parameters. This treatment has been taken in many works on latent factor models \citep{haberman1977maximum,holland1990sampling, bai2012statistical, owen2016bi}. Under this formulation, the identification of a latent factor becomes equivalent to the identification of the direction of a vector in $\mathbb R^{\infty}$, where each entry of the infinite-dimensional vector corresponds to a sample (e.g., a test-taker or a customer) in a population. As will be shown in the sequel, to identify such a direction, we need an infinite number of manifest variables (e.g., measurements). Under such an asymptotic regime, we provide a necessary and sufficient condition on the design of manifest variables for the identifiability of latent factors. This condition can be very easily interpreted from a repeated measurement perspective. In the rest of the paper, this notion of identifiability is called \textit{structural identifiability}. Compared with existing works \citep[e.g.,][]{anderson1956statistical,shapiro1985identifiability,grayson1994identification}, the current development applies to a more general family of latent factor models, and avoids distributional assumptions on the latent variables by adopting the double asymptotic regime in which the sample size $N$ and the number of manifest variables $J$ simultaneously grow to infinity.
Under the double asymptotic regime and when structural identifiability holds, we propose an estimator and establish its consistency and convergence rate. Under suitable conditions, this convergence rate is shown to be optimal through an asymptotic lower bound for this estimation problem.
Finally, a non-asymptotic error bound is derived for this estimator, which complements the asymptotic results by quantifying the estimation accuracy under a given design with finite $N$ and $J$.
To establish these results,
we prove useful probabilistic error bounds and develop perturbation bounds on the intersection of linear subspaces which are of independent value for the theoretical analysis of low-rank matrix estimation.
The rest of the paper is organized as follows.
In Section~\ref{SLFA}, we introduce a generalized latent factor modeling framework, within which our research questions are formulated. In Section~\ref{sec:iden},
we discuss the structural identifiability of latent factors,
establish the relationship between structural identifiability and estimability,
and provide an estimator for which asymptotic results and a non-asymptotic error bound are established.
Further implications of our theoretical results for large-scale measurement
are provided in Section~\ref{sec:implication}, and extensions of our results to more complex settings are discussed in
Section~\ref{sec:extension}.
A new perturbation bound on linear subspaces, which is key to our main results, is presented in Section~\ref{sec:proof-strategy}.
Numerical results, containing two simulation studies and an application to a personality assessment dataset, are presented in Section~\ref{sec:sim}.
Finally, concluding remarks are provided in Section~\ref{sec:conc}.
The proofs of all the technical results are provided as supplementary material.
\section{Structured Latent Factor Analysis}\label{SLFA}
\subsection{Generalized Latent Factor Model}\label{subsec:model}
Suppose that there are $N$ individuals (e.g., $N$ test-takers) and $J$ manifest variables (e.g., $J$ test items). Let $Y_{ij}$ be a random variable denoting the $i$th individual's value on the $j$th manifest variable and let $y_{ij}$ be its realization. For example, in educational tests, $Y_{ij}$s could be binary responses from the examinees, indicating whether the answers are correct or not.
We further assume that each individual $i$ is associated with a $K$-dimensional latent vector, denoted as $\boldsymbol \theta_i = (\theta_{i1}, ..., \theta_{iK})^{\top}$ and each manifest variable $j$ is associated with $K$ parameters ${\boldsymbol \alpha}a_j = (a_{j1}, ..., a_{jK})^{\top}$.
We give two concrete contexts. Consider an educational test of mathematics, with
$K = 3$ dimensions of ``algebra", ``geometry", and ``calculus". Then $\theta_{i1}$, $\theta_{i2}$, and $\theta_{i3}$
represent individual $i$'s proficiency levels on algebra, geometry, and calculus, respectively.
In the measurement of Big Five personality factors \citep{goldberg1993structure}, $K = 5$ personality factors are considered, including ``openness to experience", ``conscientiousness", ``extraversion", ``agreeableness", and ``neuroticism". Then $\theta_{i1}$, ..., $\theta_{i5}$ represent individual $i$'s levels on the continua of the five personality traits.
The manifest variable parameters ${\boldsymbol \alpha}a_j$ can be understood as the regression coefficients obtained when regressing the $Y_{ij}$s on the $\boldsymbol \theta_i$s, $i = 1, ..., N$.
In many applications of latent factor models, especially in psychology and education,
the estimations of $\boldsymbol \theta_i$s and ${\boldsymbol \alpha}a_j$s are both of interest.
Our development is under a generalized latent factor model framework \citep{skrondal2004generalized}, which extends
the generalized linear model framework \citep{mccullagh1989generalized} to latent factor analysis.
Specifically, we assume that the distribution of $Y_{ij}$ given $\boldsymbol \theta_i$ and ${\boldsymbol \alpha}a_j$ is a member of the exponential family with natural parameter
\begin{equation}\label{eq:inner}
m_{ij} = {\boldsymbol \alpha}a_j^\top \boldsymbol \theta_i = a_{j1}\theta_{i1} + \cdots + a_{jK}\theta_{iK},
\end{equation}
and possibly a scale (i.e. dispersion) parameter $\phi$. More precisely, the density/probability mass function takes the form:
\begin{equation}\label{eq:model}
f(y \vert {\boldsymbol \alpha}a_j, \boldsymbol \theta_i, \phi) = \exp\left(\frac{ym_{ij} - b(m_{ij})}{\phi} + c(y, \phi)\right),
\end{equation}
where $b(\cdot)$ and $c(\cdot)$ are pre-specified functions that depend on the member of the exponential family. Given
$\boldsymbol \theta_i$ and ${\boldsymbol \alpha}a_j$, $i = 1, ..., N$ and $j = 1, ..., J$, we assume that all $Y_{ij}$s are independent. Consequently, the likelihood function,
in which $\boldsymbol \theta_i$s and ${\boldsymbol \alpha}a_j$s are treated as fixed effects, can be written as
\begin{equation}\label{eq:lik}
L(\boldsymbol \theta_1, ..., \boldsymbol \theta_N, {\boldsymbol \alpha}a_1, ..., {\boldsymbol \alpha}a_J, \phi) = \prod_{i=1}^N\prod_{j=1}^J \exp\left(\frac{y_{ij} m_{ij} - b(m_{ij})}{\phi} + c(y_{ij}, \phi)\right).
\end{equation}
This likelihood function is known as the \textit{joint likelihood} function in the literature of latent variable models \citep{skrondal2004generalized}.
We remark that
in the existing literature of latent factor models,
there is often an intercept term indexed by $j$ in the specification of \eqref{eq:inner},
which can be easily realized under our formulation
by constraining $\theta_{i1} = 1$, for all $i = 1, 2, ..., N$. In that case, $a_{j1}$ serves as the intercept term.
This framework provides a large class of models, including linear factor models for continuous data, as well as logistic and Poisson models for multivariate binary and count data, as special cases. These special cases are listed below.
\begin{enumerate}
\item Linear factor model: $$Y_{ij} \vert \boldsymbol \theta_i, {\boldsymbol \alpha}a_j \sim N({\boldsymbol \alpha}a_j^\top \boldsymbol \theta_i, \sigma^2),$$ where the scale parameter $\phi = \sigma^2$.
\item Multidimensional Item Response Theory (MIRT) model: $$Y_{ij} \vert \boldsymbol \theta_i, {\boldsymbol \alpha}a_j \sim Bernoulli\left(\frac{\exp({\boldsymbol \alpha}a_j^\top \boldsymbol \theta_i)}{1+\exp({\boldsymbol \alpha}a_j^\top \boldsymbol \theta_i)}\right),$$ where the scale parameter $\phi = 1$.
\item Poisson factor model: $$Y_{ij} \vert \boldsymbol \theta_i, {\boldsymbol \alpha}a_j \sim Poisson\left(\exp({\boldsymbol \alpha}a_j^\top \boldsymbol \theta_i)\right),$$ where the scale parameter $\phi = 1$.
\end{enumerate}
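To make the three special cases above concrete, the following sketch (ours, not part of the original model specification; all dimensions and parameter values are arbitrary) simulates data $Y_{ij}$ under each of them, given fixed $\boldsymbol \theta_i$s and ${\boldsymbol \alpha}a_j$s.
\begin{verbatim}
# Illustrative simulation of the three special cases (linear, MIRT, Poisson),
# given fixed person parameters Theta (N x K) and item parameters A (J x K).
import numpy as np

rng = np.random.default_rng(0)
N, J, K = 500, 100, 3
Theta = rng.normal(size=(N, K))           # theta_i, i = 1, ..., N
A = rng.uniform(0.5, 1.5, size=(J, K))    # a_j, j = 1, ..., J
M = Theta @ A.T                           # natural parameters m_ij = a_j' theta_i

Y_linear  = rng.normal(loc=M, scale=1.0)                  # linear factor model
Y_mirt    = rng.binomial(1, 1.0 / (1.0 + np.exp(-M)))     # MIRT (logistic) model
Y_poisson = rng.poisson(np.exp(np.clip(M, None, 10.0)))   # Poisson factor model
\end{verbatim}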
The analysis of this paper is based on the joint likelihood function~\eqref{eq:lik} where both $\boldsymbol \theta_i$s and ${\boldsymbol \alpha}a_j$s are treated as fixed effects, though in the literature the person parameters $\boldsymbol \theta_i$ are often treated as random effects and integrated out in the likelihood function \citep{holland1990sampling}.
This fixed effect point of view allows us to straightforwardly treat the measurement problem as an estimation problem.
For ease of exposition, we assume the scale parameter is known in the rest of the paper, while pointing out that
it is straightforward to extend all the results to the case where it is unknown.
\mathbf subsection{Confirmatory Structure}
In this paper, we consider a confirmatory setting where the relationship between the manifest variables and the latent factors is known a priori. Suppose that there are $J$ manifest variables and $K$ latent factors. Then the confirmatory information is recorded by a $J\times K$ matrix, denoted by $Q = (q_{jk})_{J\times K}$, whose entries take value zero or one. In particular, $q_{jk} = 0$ means that manifest variable $j$ is not directly associated with latent factor $k$. Such design information is often available in many applications of latent factor models, including in education, psychology, economics, and marketing \citep{thompson2004exploratory,reckase2009multidimensional,gatignon2003statistical}.
The design matrix $Q$ incorporates domain knowledge into the statistical analysis by imposing zero constraints on model parameters. For the generalized latent factor model, the loading parameter $a_{jk}$ is constrained to zero if $q_{jk}$ is zero.
The constraints induced by the design matrix play an important role in the identifiability and the interpretation of the latent factors.
Intuitively, suitable zero constraints on loading parameters will anchor the latent factors by preventing rotational indeterminacy,
a major problem for the identifiability of latent factor models. In the rest of this paper, we formalize this intuition by studying how the design matrix $Q$ affects the identifiability and estimability of generalized latent factor models.
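As a small illustration (with hypothetical values), the zero constraints induced by $Q$ amount to an elementwise multiplication of a loading matrix with $Q$:
\begin{verbatim}
# Hypothetical 3-item, 2-factor example: a_jk is forced to zero wherever q_jk = 0.
import numpy as np

Q = np.array([[1, 0],    # item 1 measures factor 1 only
              [0, 1],    # item 2 measures factor 2 only
              [1, 1]])   # item 3 measures both factors
A_unconstrained = np.array([[0.8, 0.3],
                            [0.2, 1.1],
                            [0.9, 0.7]])
A = A_unconstrained * Q  # loading matrix respecting the design matrix Q
\end{verbatim}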
\subsection{A Summary of Main Results}
In this paper, we investigate how the design information given by the $\mbox{$\mathbf q$}_j$s, the rows of the design matrix $Q$, affects the quality of measurement. This problem is tackled through both asymptotic analysis and a non-asymptotic error bound, under the generalized latent factor modeling framework.
Our asymptotic analysis focuses on the identifiability and estimability of the latent factors, under a setting where both $N$ and $J$ grow to infinity.
To define identifiability, consider a population of people where $N = \infty$
and a universe of manifest variables where $J = \infty$.
A latent factor $k$ is a hypothetical construct, defined by the person population.
More precisely, it is determined by
the individual latent factor scores of the entire person population, denoted by
$(\theta_{1k}^*, \theta_{2k}^* ...) \in \mathbb{R}^{\mathbb Z_+}$, where $\theta_{ik}^*$ denotes the true latent factor score of person $i$ on latent factor $k$ and $\mathbb{R}^{\mathbb Z_+}$ denotes the set of vectors with countably infinite real number components.
The identifiability
of the $k$th latent factor then is equivalent to the identifiability of
a vector in $\mathbb R^{\mathbb Z_+}$ under the distribution of an infinite dimensional random matrix, $\{Y_{ij}: i = 1, 2, ..., j = 1, 2, ...\}$. This setting is natural in the context of large-scale measurement where both $N$ and $J$ are large.
Under the above setting, this paper addresses three research questions.
First, how should the identifiability of latent factors be suitably formalized? Second, under what design
are the latent factors identifiable? Third, what is the relationship between identifiability and estimability? In other words, can we recover the scores of an identifiable latent factor from data and, if so, to what extent?
We further provide a non-asymptotic error bound to complement the asymptotic results. For finite $N$ and $J$, the effect of design information on the estimation of a latent factor
is reflected by a multiplicative coefficient in the bound.
\subsection{Preliminaries}\label{subsec:notation}
In this section, we fix the notation used throughout this paper.
\paragraph{Notation.}
\begin{enumerate}[label=\alph*.]
\item $\mathbb{Z}_+$: the set of all positive integers.
\item $\mathbb{R}^{\mathbb{Z}_+}$: the set of vectors with countably infinite real components.
\item $\mathbb{R}^{\mathbb{Z}_+\times \{1, ..., K\}}$: the set of all real matrices with countably infinite rows and $K$ columns.
\item $\{0, 1\}^{\mathbb{Z}_+\times \{1, ..., K\}}$: the set of all binary matrices with countably infinite rows and $K$ columns.
\item $\Theta$: the parameter matrix for the person population, $\Theta \in \mathbb{R}^{\mathbb{Z}_+\times \{1, ..., K\}}$.
\item $A$: the parameter matrix for the manifest variable population, $A \in \mathbb{R}^{\mathbb{Z}_+\times \{1, ..., K\}}$.
\item $Q$: the design matrix for the manifest variable population, $Q \in \{0, 1\}^{\mathbb{Z}_+\times \{1, ..., K\}}$.
\item $\mathbf 0$: the vector or matrix with all components being 0.
\item $P_{\Theta,A}$: the probability distribution of $(Y_{ij}, i,j\in \mathbb{Z}_+)$, given person and item parameters $\Theta$ and $A$.
\item $\mathbf v_{[1:m]}$: the first $m$ components of a vector $\mathbf v$.
\item $W_{[S_1,S_2]}$: the submatrix of a matrix $W$ formed by rows $S_1$ and columns $S_2$, where $S_1, S_2 \subset \mathbb{Z}_+$.
\item $W_{[1:m, k]}$: the first $m$ components of the $k$-th column of a matrix $W$.
\item $W_{[k]}$: the $k$-th column of a matrix $W$.
\item $\Vert \mathbf v\Vert$: the Euclidean norm of a vector $\mathbf v$.
\item $\sin\angle(\mathbf u, \mathbf v)$: the sine of the angle between two vectors,
{$$\sin\angle(\mathbf u, \mathbf v) = \sqrt{1 - \frac{(\mathbf u^{\top} \mathbf v)^2}{\Vert \mathbf u\Vert^2 \Vert \mathbf v\Vert^2}},$$}
where $\mathbf u, \mathbf v\in \mathbb{R}^{m}$, $\mathbf u, \mathbf v \neq \mathbf 0$. {We point out that the angles between vectors are assumed to belong to $[0,\pi]$, so that $\sin\angle(\mathbf u, \mathbf v)\geq 0$ for all vectors $\mathbf u, \mathbf v$.}
\item $\Vert W\Vert_F$: the Frobenius norm of a matrix $W = (w_{ij})_{m\times n}$, $\Vert W\Vert_F \triangleq \sqrt{\sum_{i=1}^m \sum_{j = 1}^n w_{ij}^2}$.
\item $\Vert W\Vert_2$: the spectral norm of a matrix $W$, i.e., its largest singular value.
\item $\sigma_1(W)\geq \sigma_2(W)\geq...\geq \sigma_n(W)$: the singular values of a matrix $W\in \mathbb{R}^{m\times n}$, in descending order.
\item $\gamma(W)$: a function mapping from $\mathbb{R}^{\mathbb{Z}_+\times n}$ to $\mathbb{R}$, defined as
\begin{equation}\label{eq:gamma}
\gamma(W) \triangleq \liminf_{m\to\infty}\frac{\sigma_n(W_{[1:m,1:n]})}{\sqrt{m}}.
\end{equation}
\item $|S|$: the cardinality of a set $S$.
\item $N\wedge J$: the minimum value between $N$ and $J$.
\end{enumerate}
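Two of the quantities above recur throughout the paper: the sine of the angle between two vectors and the $\gamma$-type ratio $\sigma_n(W_{[1:m,1:n]})/\sqrt{m}$. They can be computed for finite truncations as in the following sketch (ours; the infinite-dimensional limits are of course not computable, and the finite $m$ below is an arbitrary illustrative choice).
\begin{verbatim}
# sin-angle between two vectors and sigma_n(W_[1:m,1:n]) / sqrt(m) for a finite m.
import numpy as np

def sin_angle(u, v):
    cos_sq = (u @ v) ** 2 / ((u @ u) * (v @ v))
    return np.sqrt(max(0.0, 1.0 - cos_sq))

def gamma_ratio(W):
    m = W.shape[0]
    smallest_sv = np.linalg.svd(W, compute_uv=False)[-1]
    return smallest_sv / np.sqrt(m)

u = np.array([1.0, 0.0, 1.0])
v = np.array([1.0, 0.2, 0.9])
W = np.random.default_rng(1).normal(size=(1000, 3))
print(sin_angle(u, v), gamma_ratio(W))
\end{verbatim}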
\section{Main Results}\label{sec:iden}
\subsection{Structural Identifiability}\label{sec:struct-ident}
We first formalize the definition of structural identifiability.
For two vectors with countably infinite components
$\mathbf w=(w_1,w_2,...)^{\top}, \mathbf z=(z_1,z_2,...)^{\top}\in \mathbb{R}^{\mathbb{Z}_+}$,
we define
\begin{equation}
\mathbf sinp\angle(\mathbf w,\mathbf z)=\limsup_{n\to\infty}\mathbf sin\angle(\mathbf w_{[1:n]}, \mathbf z_{[1:n]}),
\end{equation}
which quantifies the angle between two vectors $\mathbf w$ and $\mathbf z$ in $\mathbb{R}^{\mathbb{Z}_+}$. In particular, we say the angle between $\mathbf w$ and $\mathbf z$ is zero when $\mathbf sinp\angle(\mathbf w, \mathbf z)$ is zero.
\begin{definition}[Structural identifiability of a latent factor]\label{def:ident}
Consider the $k$th latent factor, where $k \in \{1, ..., K\}$,
and a nonempty parameter space $\mathcal{S} \mathbf subset \mathbf Real^{\mathbb{Z}_+\times \{1, ..., K\}} \times \mathbf Real^{\mathbb{Z}_+\times \{1, ..., K\}}$ for $(\Theta, A)$.
We say the $k$-th latent factor is structurally identifiable in the parameter space $\mathcal{S}$ if for any $(\Theta,A), ({\Theta}',{A}')\in\mathcal{S}$, $P_{\Theta,A}=P_{{\Theta}',{A}'}$ implies $\mathbf sinp\angle(\Theta_{[k]},{\Theta}'_{[k]})=0$.
\end{definition}
We point out that the parameter space $\mathcal{S}$, {which will be specified later in this section,} is essentially determined by the design information $q_{jk}$s.
As will be shown shortly,
a good design imposes suitable constraints on the parameter space, which further ensures the structural identifiability of the latent factors.
This definition of identifiability avoids the consideration of the scale of the latent factor, which is not uniquely determined as the distribution of data only depends on $\{\boldsymbol \theta_i^{\top}{\boldsymbol \alpha}a_j: i,j\in \mathbb{Z}_+\}$.
Moreover, the sine measure is a canonical way to quantify the distance between two linear spaces that has been used in, for example, the well-known sine theorems for matrix perturbation \citep{davis1963rotation,wedin1972perturbation}.
As will be shown in the sequel, this definition of structural identifiability naturally leads to a relationship between identifiability and estimability and has important implications on psychological measurement.
We now characterize the structural identifiability under suitable regularity conditions.
We consider a design matrix $Q$ for the manifest variable population, where
$Q\in\{0,1\}^{\mathbb{Z}_+\times \{1, ..., K\}}$.
Our first regularity assumption is about the stability of the $Q$ matrix.
\begin{itemize}
\item [A1] The limit
$$p_Q(S)=\lim_{J\to\infty}\frac{|\{j: q_{jk} = 1, \mbox{~if~} k \in S \mbox{~and~} q_{jk} = 0, \mbox{~if~} k \notin S, 1\leq j\leq J\}|}{J}$$ exists for any subset
$S\mathbf subset\{1,...,K\}$. In addition, $p_Q({\emptyset})=0$.
\end{itemize}
The above assumption requires that the frequency of manifest variables associated with and only with latent factors in $S$ converges to a limit proportion $p_Q(S)$.
In addition, $p_Q({\emptyset})=0$ requires that the proportion of manifest variables associated with no latent factor is asymptotically negligible.
{We point out that this assumption is adopted mainly to simplify the statements in Theorem~\mathbf ref{thm:identifiability} and Theorem~\mathbf ref{thm:consistency}. As discussed later, this assumption is automatically satisfied if $\mbox{$\mathbf q$}_j$s are generated under a stochastic design. This assumption will be further relaxed in Section~\mathbf ref{sec:a1} where a non-asymptotic error bound is established.}
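For a finite design matrix, the empirical counterparts of the proportions in Assumption A1 can be computed directly, as in the following sketch (ours; the repeated design below is hypothetical).
\begin{verbatim}
# Empirical proportions p_Q(S): the fraction of rows of Q whose support equals S.
import numpy as np
from collections import Counter

def empirical_pQ(Q):
    Q = np.asarray(Q)
    counts = Counter(frozenset((np.flatnonzero(row) + 1).tolist()) for row in Q)
    return {S: c / Q.shape[0] for S, c in counts.items()}

Q = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]] * 10)   # J = 30, K = 3
print(empirical_pQ(Q))   # each of {1,2}, {1,3}, {2,3} has proportion 1/3
\end{verbatim}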
We also make the following assumption on the generalized latent factor model, which is satisfied under most of the widely used models, including the linear factor model, MIRT model, and the Poisson factor model listed above.
\begin{itemize}
\item [A2] The natural parameter space $\{\nu: |b(\nu)|<\infty \}=\mathbb{R}$.
\end{itemize}
Under the above assumptions, Theorem~\mathbf ref{thm:identifiability} provides a necessary and sufficient condition on the design matrix $Q$ for the structural identifiability of the $k$th latent factor.
This result is established within the parameter space
$\mathcal{S}_Q\mathbf subset \mathbb{R}^{\mathbb{Z}_+\times \{1, ..., K\} }\times \mathbb{R}^{\mathbb{Z}_+\times \{1, ..., K\}}$,
\begin{equation}
\mathcal{S}_Q= \mathcal{S}_Q^{(1)}\times \mathcal{S}_Q^{(2)}.
\end{equation}
Here, we define
\begin{equation}
\mathcal{S}_Q^{(1)}=\left\{
\Theta\in \mathbb{R}^{\mathbb{Z}_+\times \{1, ..., K\} }: \|\boldsymbol \theta_i\|\leq C \text{ and } \mathbf gamma(\Theta)>0
\mathbf right\}
\end{equation}
and
\begin{equation}\label{eq:s-q-a}
\begin{split}
\mathcal{S}_Q^{(2)}=\Big\{
A\in \mathbb{R}^{\mathbb{Z}_+\times \{1, ..., K\}}: \|{\boldsymbol \alpha}a_j\|\leq C, A_{[R_Q(S),S^c]}=\mathbf{0} \mbox{~for all~} S \mathbf subset \{1, ..., K\},\\\text{ and } \mathbf gamma(A_{[R_Q(S),S]})>0 \text{ for all } S, \mbox{ s.t. } p_Q(S)>0
\Big\},
\end{split}
\end{equation}
where $C$ is a positive constant, the $\mathbf gamma$ function is defined in \eqref{eq:gamma}, and
\begin{equation}
R_Q(S)=\{
j: q_{jk'} = 1, \mbox{~for all~} k' \in S \mbox{~and~} q_{jk'} = 0, \mbox{~for all~} k' \notin S
\}
\end{equation}
denotes the set of manifest variables that are associated {with and only with} latent factors in $S$.
Discussions on the parameter space are provided after the statement of Theorem~\mathbf ref{thm:identifiability}.
\begin{theorem}\label{thm:identifiability}
Under Assumptions A1 and A2, the $k$-th latent {factor} is structurally identifiable
in $\mathcal{S}_Q$
if and only if
\begin{equation}\label{eq:condition-k}
\{k\}=\bigcap_{k\in S, p_Q(S)>0} S,
\end{equation}
where we define $\bigcap_{k\in S, p_Q(S)>0} S=\emptyset$ if $p_Q(S)=0$ for all $S$ that contains $k$.
\end{theorem}
The following proposition guarantees that the parameter space is nontrivial.
\begin{proposition}\label{lemma:empty}
For any $Q$ satisfying A1, $\mathcal{S}_Q\neq \emptyset$.
\end{proposition}
We further remark on the parameter space $\mathcal{S}_Q$.
First, $\mathcal{S}_Q$ requires some regularity on each $\boldsymbol \theta_i$ and ${\boldsymbol \alpha}a_j$ (i.e., $\|\boldsymbol \theta_i\|\leq C,\|{\boldsymbol \alpha}a_j\|\leq C$) and requires the $A$-matrix to satisfy the constraints imposed by $Q$ (i.e., $A_{[R_Q(S),S^c]}=\mathbf{0}$ for all $S$). It further requires that there is enough variation among people, quantified by
$\mathbf gamma(\Theta)>0$, where the $\mathbf gamma$ function is defined in \eqref{eq:gamma}. Note that this requirement is mild, in the sense that
if $\boldsymbol \theta_i$s are independent and identically distributed (i.i.d.) with a strictly positive definite covariance matrix, then $\mathbf gamma(\Theta)>0$ a.s., according to the strong law of large numbers. Furthermore, $\mathbf gamma(A_{[R_Q(S),S]})>0 \text{ for } S \mbox{ satisfying } p_Q(S)>0$ requires that each type of manifest variable (categorized by $S$)
contains sufficient information if it appears frequently ($p_Q(S)>0$). Similar to the justification for $\Theta$,
$\mathbf gamma(A_{[R_Q(S),S]})>0$ can
also be justified by considering that ${\boldsymbol \alpha}a_j$s are i.i.d. following a certain distribution for $j \in R_Q(S)$.
We provide an example to facilitate the understanding of Theorem~\mathbf ref{thm:identifiability}. Suppose that assumptions A1 and A2 hold.
If $K=2$ and $p_Q({\{
1\}})= p_Q({\{1,2\}})=1/2$, then the second latent factor is not structurally identifiable, even if it is associated with infinitely many manifest variables. In addition, having many manifest variables with a simple structure ensures the structural identifiability of a latent factor. That is, if $p_Q(\{k\}) > 0$, then the $k$th factor is structurally identifiable.
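Condition \eqref{eq:condition-k} is easy to check mechanically. The following sketch (ours) takes the limiting proportions $p_Q(S)$ as a dictionary and verifies the example just discussed, in which the first latent factor is structurally identifiable but the second is not.
\begin{verbatim}
# Check {k} = intersection of all item types S with p_Q(S) > 0 that contain k.
def structurally_identifiable(k, p_Q):
    supports = [set(S) for S, p in p_Q.items() if p > 0 and k in S]
    if not supports:
        return False          # empty intersection, by the convention in the theorem
    return set.intersection(*supports) == {k}

p_Q = {frozenset({1}): 0.5, frozenset({1, 2}): 0.5}   # K = 2 example from the text
print(structurally_identifiable(1, p_Q))   # True
print(structurally_identifiable(2, p_Q))   # False
\end{verbatim}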
Finally, we briefly discuss Assumption A1.
The next proposition implies that if we have a stochastic design where $Q$ has i.i.d. rows, then Assumption A1 is satisfied almost surely.
\begin{proposition}\label{prop:Q-stoch}
Assume that the design matrix $Q$ has i.i.d. rows and $\mbox{$\mathbf q$}_i\neq (0,...,0)$ for all $i=1,2,..$.
Then, Assumption A1 is satisfied almost surely. Moreover, let
\begin{equation}
w_S=P\left(q_{1k}=1 \text{ for all } k\in S, \text{ and }q_{1k}=0 \text{ for all } k\notin S\mathbf right)
\end{equation}
for each $S\mathbf subset\{1,...,K\}$.
Then, $Q$ satisfies Assumption A1 with $p_Q(S)=w_S$ for all $S\subset\{1,...,K\}$ almost surely.
\end{proposition}
\subsection{Identifiability and Estimability}\label{sec:iden-and-est}
It is well known that for a fixed dimensional parametric model with i.i.d. observations, the
identifiability of a model parameter is necessary for the existence of a consistent estimator.
We extend this result to the infinite-dimensional parameter space under the current setting.
We start with a generalized definition for the consistency of estimating a latent factor.
An estimator given $N$ individuals and $J$ manifest variables is denoted by
$(\hat{\Theta}^{(N,J)},\hat{A}^{(N,J)})$, which only depends on $Y_{[1:N,1:J]}$ for all $N,J\in\mathbb{Z}_+$.
\begin{definition}[Consistency for estimating latent factor $k$]
The sequence of estimators
$\{
(\hat{\Theta}^{(N,J)},\hat{A}^{(N,J)}), N,J\in \mathbb{Z}_+
\}$ is said to
consistently estimate the latent factor $k$ if
\begin{equation}
\mathbf sin\angle(\hat{\Theta}^{(N,J)}_{[k]},\Theta_{[1:N,k]})\overset{P_{\Theta,A}}{\to} 0, \text{ as } N,J\to\infty,
\end{equation}
for all $(\Theta,A)\in \mathcal{S}_Q$.
\end{definition}
The next proposition establishes the necessity of the structural identifiability of a latent factor for its estimability.
\begin{proposition}\label{lemma:identifiable-estimable}
If latent factor $k$ is not structurally identifiable in $\mathcal{S}_Q$, then there does not exist a consistent estimator for latent factor $k$.
\end{proposition}
\begin{remark}
The above results of identifiability and estimability are all established under
a double asymptotic regime that both $N$ and $J$ grow to infinity, which
is suitable for large-scale applications where both $N$ and $J$ are large. It differs from the classical asymptotic setting \citep[e.g.,][]{anderson1956statistical,bing2017adaptive} where $J$ is fixed and $N$ grows to infinity.
The classical setting treats the latent factors as random effects and focuses on the identifiability of the loading parameters.
There are several reasons for adopting the double asymptotic regime. First, the double asymptotic regime allows us to directly focus on the
identifiability and the estimation of factor scores $\boldsymbol \theta_i$, which is of more interest in many applications (e.g, psychological and educational measurement) than the loading parameters. To consistently estimate the factor scores, naturally, we need the number of measurements $J$ to grow to infinity. {Second, having a varying $J$ allows us to take the number of manifest variables into consideration in the research design and data collection when measuring certain traits with substantive meaning through a latent factor model.} Third, even if we only focus on the loading parameters, the classical asymptotic regime does not lead to consistency on the estimation of the loading parameters.
For example, for the MIRT model for binary data, one cannot consistently estimate the loading parameters, when $J$ is fixed and $N$ grows to infinity, unless we assume $\boldsymbol \theta_i$s to be i.i.d. samples from a certain parametric distribution and use this parametric assumption in the estimation procedure. Finally, this double asymptotic regime is not entirely new. In fact, this regime has been adopted in, for example, \cite{haberman1977maximum}, \cite{bai2012statistical}, and \cite{owen2016bi} among others,
for the asymptotic analysis of linear and nonlinear factor models.
\end{remark}
\subsection{Estimation and Its Consistency}\label{sec:consistency}
We further show that the structural identifiability and estimability are equivalent under our setting. For ease of exposition, let $Q\in \{0, 1\}^{\mathbb{Z}_+\times \{1, ..., K\}}$ be a design matrix satisfying Assumption A1.
In addition, let $(\Theta^*,A^*)\in \mathcal{S}_{Q}$ be the true parameters for the person and the manifest variable populations. We provide
an estimator $(\hat{\Theta}^{(N, J)},\hat{A}^{(N, J)})$
such that
$$
\sin \angle (\hat{\Theta}_{[k]}^{(N, J)}, \Theta_{[1:N,k]}^*) \overset{P_{\Theta^*,A^*}}{\to} 0, ~~ N, J \rightarrow \infty,$$
when $Q$ satisfies \eqref{eq:condition-k} which leads to
the structural identifiability of latent factor $k$
according to Theorem~\mathbf ref{thm:identifiability}.
Specifically, we consider the following estimator
\begin{equation}\label{eq:mle}
\begin{aligned}
(\hat{\Theta}^{(N, J)},\hat{A}^{(N, J)})\in\operatornamewithlimits{arg\,min}& - l\left(\boldsymbol \theta_1, ..., \boldsymbol \theta_N, {\boldsymbol \alpha}a_1, ..., {\boldsymbol \alpha}a_J\mathbf right),\\
s.t.~ & \|\boldsymbol \theta_i\|\leq C', \|{\boldsymbol \alpha}a_j\|\leq C',\\
& {\boldsymbol \alpha}a_j \in \mathcal{D}_j, i = 1, ..., N, j = 1, ..., J, \\
\end{aligned}
\end{equation}
where $l(\boldsymbol \theta_1, ..., \boldsymbol \theta_N, {\boldsymbol \alpha}a_1, ..., {\boldsymbol \alpha}a_J)= \mathbf sum_{i=1}^N\mathbf sum_{j=1}^Jy_{ij}(\boldsymbol \theta_i^\top {\boldsymbol \alpha}a_j) -b(\boldsymbol \theta_i^\top {\boldsymbol \alpha}a_j)$,
$C'$ is any constant greater than $C$ in the definition of $\mathcal{S}_Q$,
and $\mathcal{D}_j = \{{\boldsymbol \alpha}a = (a_1, ..., a_K)^\top \in \mathbb{R}^{K}: a_{k}= 0 \mbox{~if~} q_{jk} = 0\}$ imposes the zero constraints on ${\boldsymbol \alpha}a_j$. Note that maximizing $l\left(\boldsymbol \theta_1, ..., \boldsymbol \theta_N, {\boldsymbol \alpha}a_1, ..., {\boldsymbol \alpha}a_J\right)$ is equivalent to maximizing the joint likelihood \eqref{eq:lik}, due to the natural exponential family form.
The next theorem provides an error bound on $(\hat{\Theta}^{(N, J)},\hat{A}^{(N, J)})$.
\begin{theorem}\label{thm:consistency}
Under assumptions A1-A2 and $(\Theta^*, A^*)\in\mathcal{S}_{Q}$, there exists a constant ${\kappa_1}$ (independent of $N$ and $J$, depending on the function $b$, the constant $C'$, and $K$) such that,
\begin{equation}\label{eq:error-bound-m}
\frac{1}{\mathbf sqrt{NJ}}E\left\|\hat{\Theta}^{(N, J)}(\hat{A}^{(N, J)})^{\top}-\Theta^*_{[1:N, 1:K]} A^{*\top}_{[1:J, 1:K]}\mathbf right\|_F\leq \frac{{\kappa_1}}{\mathbf sqrt{N\wedge J}}.
\end{equation}
{Moreover}, if $Q$ satisfies
\eqref{eq:condition-k} and thus latent factor $k$ is structurally identifiable,
then there exists ${\kappa_2}>0$ such that
\begin{equation}\label{eq:error-bound-dim-k}
E \mathbf sin\angle(\Theta^*_{[1:N, k]},\hat{\Theta}_{[k]}^{(N, J)}) \leq \frac{{\kappa_2}}{\mathbf sqrt{N\wedge J}}.
\end{equation}
\end{theorem}
Proposition~\mathbf ref{lemma:identifiable-estimable}, and Theorems~\mathbf ref{thm:identifiability} and \mathbf ref{thm:consistency} together imply that the structural identifiability and estimability over $\mathcal{S}_Q$ are equivalent, which is
summarized in the following corollary.
\begin{corollary}
Under Assumptions A1 and A2, there exists an estimator $(\hat{\Theta}^{(N, J)},\hat{A}^{(N, J)})$ such that
$\sin\angle(\hat{\Theta}_{[k]}^{(N, J)},\Theta_{[1:N,k]})\to 0$ in probability under $P_{\Theta,A}$, as $N,J\to\infty$, for all $(\Theta,A)\in\mathcal{S}_Q$ if and only if the design matrix $Q$ satisfies \eqref{eq:condition-k}.
\end{corollary}
\begin{remark}
The estimator \eqref{eq:mle} is closely related to the joint likelihood estimator in the literature of econometrics and psychometrics.
When $J$ is fixed and $N$ grows to infinity, this estimator is shown to be inconsistent, due to the simultaneous growth of the sample size and the parameter space \citep{neyman1948consistent,andersen1970asymptotic,ghosh1995inconsistent}. However, if $N$ and $J$ simultaneously grow to infinity, \cite{haberman1977maximum} shows that the joint maximum likelihood estimator is consistent under the Rasch model, a
simple unidimensional item response theory model. Under a general family of multidimensional item response theory models and under an exploratory factor analysis setting, an estimator similar to \eqref{eq:mle} is considered in \cite{chen2017joint}, but without the zero constraints given by the $Q$-matrix. As a result, the estimator obtained in \cite{chen2017joint} is rotationally indeterminate.
\end{remark}
\begin{remark}
The error bound \eqref{eq:error-bound-m} holds even when one or more latent factors are not structurally identifiable. In particular, \eqref{eq:error-bound-m} holds when removing the constraint ${\boldsymbol \alpha}a_j \in\mathcal{D}_j$ from \eqref{eq:mle}, which corresponds to the exploratory factor analysis setting where no design matrix $Q$ is pre-specified \citep[or in other words, $q_{jk} = 1$ for all $j$ and $k$; see the setting of][]{chen2017joint}.
In that case, the best one can achieve is to recover the linear space spanned by the column vectors of $\Theta^*$ and similarly
the linear space spanned by the column vectors of $A^*$. To make sense of such exploratory factor analysis results, one needs an additional rotation step to find an approximately sparse estimate of $A^*$ \citep{chen2017joint}.
\end{remark}
\begin{remark}
{The proposed estimator \eqref{eq:mle} and its error bound are related to exact low-rank matrix completion \citep{bhaskar20151} and approximate low-rank matrix completion \citep[e.g.][]{candes2010matrix,davenport20141,cai2013max}, where a bound similar to \eqref{eq:error-bound-m} can typically be derived.}
The key differences are (a)
the research on matrix completion is only interested in the estimation of $\Theta^* A^{*\top}$, while the current paper focuses on the estimation of $\Theta^*$, which is a fundamental problem in psychological measurement, and
(b) our results are derived under a more general family of models.
\end{remark}
To further evaluate the efficiency of the proposed estimator,
we provide the following lower bounds.
\begin{theorem}\label{thm:lower-bound}
Suppose that Assumptions A1 and A2 hold.
For $N, J \in \mathbb{Z}_+$, let $\bar{M}^{(N,J)}$ be an arbitrary estimator which maps data $Y_{[1:N,1:J]}$ to $\mathbb R^{N\times J}$.
Then there exist ${\varepsilon_1}>0$ and $N_0,J_0>0$ such that for $N\geq N_0, J\geq J_0$, there exists $(\Theta^*,A^*)\in\mathcal{S}_{Q}$ such that
\begin{equation}\label{eq:lower-bound-m}
P_{\Theta^*,A^*}\left(\frac{1}{\mathbf sqrt{NJ}}\left\|\bar{M}^{(N,J)}-\Theta^*_{[1:N, 1:K]} (A^{*}_{[1:J, 1:K]})^{\top}\mathbf right\|_F\mathbf geq \frac{{\varepsilon_1}}{\mathbf sqrt{N\wedge J}}\mathbf right)\mathbf geq \frac{1}{2}.
\end{equation}
Moreover, let $\bar{\Theta}^{(N,J)}_{[k]}$ be an arbitrary estimator which maps data $Y_{[1:N,1:J]}$ to $\mathbb R^{N}$.
Then for each $A^*\in\mathcal{S}_Q^{(2)}$ defined in \eqref{eq:s-q-a}, there exists ${\varepsilon_2}>0$ and $N_0,J_0>0$ such that for $N\mathbf geq N_0, J\mathbf geq J_0$, there exists $\Theta^*\in\mathcal{S}_Q^{(1)}$ such that
\begin{equation}\label{eq:lower-bound-theta}
P_{\Theta^*,A^*}\left(\mathbf sin\angle(\bar{\Theta}^{(N, J)}_{[k]},\Theta^*_{[1:N, k]})\mathbf geq \frac{{\varepsilon_2}}{\mathbf sqrt{J}}\mathbf right)\mathbf geq \frac{1}{2}.
\end{equation}
\end{theorem}
Based on the asymptotic upper bounds and lower bounds obtained in Theorems~\mathbf ref{thm:consistency} and \mathbf ref{thm:lower-bound}, we have the following findings. First, the proposed estimator $\hat{\Theta}^{(N, J)}(\hat{A}^{(N, J)})^{\top}$ is rate optimal for the estimation of $\Theta^*_{[1:N, 1:K]} (A^{*}_{[1:J, 1:K]})^{\top}$, because the upper bound \eqref{eq:error-bound-m} matches the lower bound \eqref{eq:lower-bound-m}. Second, for estimating $\Theta^{*}_{[1:N,k]}$, the proposed estimator $\hat{\Theta}_{[k]}$ is rate optimal when
$\limsup_{N, J\mathbf rightarrow \infty} J/N < \infty$. This assumption on the rate $J/N$ seems reasonable in many
applications of confirmatory generalized latent factor models, as $N$ is typically larger than $J$.
Finally, if $N=o(J)$, then the asymptotic upper bound \eqref{eq:error-bound-dim-k} and lower bound \eqref{eq:lower-bound-theta} do not match, in which case the proposed estimator may not be rate optimal. In particular, when $N=o(J)$, it is known that the lower bound for the estimation of $\Theta^{*}_{[1:N,k]}$ can be achieved under a linear factor model \citep[see][]{bai2012statistical}. However, this lower bound may not be achievable under a generalized latent factor model for categorical data, such as the MIRT model for binary data.
This problem is worth further investigation.
We end this section by providing an alternating minimization algorithm (Algorithm~\mathbf ref{alg:jmle}) for solving the optimization program~\eqref{eq:mle}, which is computationally efficient through our parallel computing implementation using Open Multi-Processing
\citep[OpenMP;][]{dagum1998openmp}.
{Specifically, to handle the constraints,
we adopt a projected gradient descent algorithm \citep[e.g.][]{parikh2014proximal} for solving \eqref{eq:opt1} and \eqref{eq:opt2} in each iteration, where the projections have closed-form solutions.} Similar algorithms have been considered in other works, such as
\cite{udell2016generalized}, \cite{zhu2016personalized} and \cite{bi2017group},
for solving optimization problems with respect to low-rank matrices. Following Corollary 2 of \cite{grippo2000convergence}, we obtain Lemma~\mathbf ref{lemma:algorithm} below on the convergence property of the proposed algorithm.
\begin{lemma}\label{lemma:algorithm}
Any limit point of the sequence $\{\boldsymbol \theta_i^{(l)},{\boldsymbol \alpha}a_j^{(l)}, i=1,...,N, j=1,...,J, l=1,2,...\}$ obtained from Algorithm~\mathbf ref{alg:jmle} is a critical point of the optimization program \eqref{eq:mle}.
\end{lemma}
\begin{algorithm}[!th]
\SetAlgoLined
1. \KwIn{Data $(y_{ij}: 1\leq i\leq N, 1\leq j\leq J)$, dimension $K$, constraint parameter $C'$,
initial iteration number $l = 1$, and initial value $\boldsymbol \theta_i^{(0)}$ and ${\boldsymbol \alpha}a_j^{(0)} \in \mathcal D_j$, $i = 1, ..., N$, $j = 1, ..., J$.}
2. {\bf Alternating minimization:}
\For{$l = 1, 2, ...$}{
\For{$i = 1, 2, ..., N$}{
\begin{equation}\label{eq:opt1}
\boldsymbol \theta_i^{(l)} \in \operatornamewithlimits{arg\,min}_{\Vert \boldsymbol \theta\Vert \leq C'} - l_i(\boldsymbol \theta),
\end{equation}
where
$l_i(\boldsymbol \theta) = \mathbf sum_{j=1}^J y_{ij}(\boldsymbol \theta^\top {\boldsymbol \alpha}a_j^{(l-1)}) -b(\boldsymbol \theta^\top {\boldsymbol \alpha}a_j^{(l-1)})$.
}
\For{$j = 1, 2, ..., J$}{
\begin{equation}\label{eq:opt2}
{\boldsymbol \alpha}a_j^{(l)} \in \operatornamewithlimits{arg\,min}_{{\boldsymbol \alpha}a \in \mathcal{D}_j, \Vert {\boldsymbol \alpha}a\Vert \leq C'} - \tilde l_j({\boldsymbol \alpha}a),
\end{equation}
where
$\tilde l_j({\boldsymbol \alpha}a) = \mathbf sum_{i=1}^N y_{ij}((\boldsymbol \theta_i^{(l)})^\top {\boldsymbol \alpha}a) -b((\boldsymbol \theta_i^{(l)})^\top {\boldsymbol \alpha}a)$.
}
}
3. \KwOut{Iteratively perform Step 2 until convergence. Output $\hat \boldsymbol \theta_i = \boldsymbol \theta^{(L)}_i$, $\hat {\boldsymbol \alpha}a_j = {\boldsymbol \alpha}a_j^{(L)}$, $i = 1, ..., N$, $j = 1, ..., J$,
where $L$ is the last iteration number.}
\caption{Alternating minimization algorithm}
\label{alg:jmle}
\end{algorithm}
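The following Python sketch (our simplification, not the authors' implementation) illustrates the structure of Algorithm~\ref{alg:jmle} for the MIRT model. It replaces the exact minimizations in \eqref{eq:opt1} and \eqref{eq:opt2} by single projected gradient steps; the projection onto $\{\|x\|\leq C'\}$ combined with the zero pattern of $\mathcal D_j$ has the closed form used below.
\begin{verbatim}
# Simplified alternating projected-gradient sketch of Algorithm 1, MIRT model.
import numpy as np

def project(X, C, mask=None):
    if mask is not None:
        X = X * mask                              # zero constraints from Q
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.minimum(1.0, C / np.maximum(norms, 1e-12))
    return X * scale                              # rescale rows with norm > C

def joint_mle_mirt(Y, Q, C=5.0, n_iter=200, step=0.01, seed=0):
    rng = np.random.default_rng(seed)
    N, J = Y.shape
    K = Q.shape[1]
    Theta = project(rng.normal(scale=0.1, size=(N, K)), C)
    A = project(rng.normal(scale=0.1, size=(J, K)), C, Q)
    for _ in range(n_iter):
        P = 1.0 / (1.0 + np.exp(-Theta @ A.T))    # b'(m_ij) for the logistic link
        Theta = project(Theta + step * (Y - P) @ A, C)      # gradient ascent in theta_i
        P = 1.0 / (1.0 + np.exp(-Theta @ A.T))
        A = project(A + step * (Y - P).T @ Theta, C, Q)     # gradient ascent in a_j
    return Theta, A
\end{verbatim}
In the actual algorithm each inner problem is solved to (approximate) optimality; the single-step version above only conveys the structure of the alternating updates.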
\subsection{A Non-asymptotic Error Bound}\label{sec:a1}
We further provide a non-asymptotic error bound as a complement to the asymptotic results. Through this error bound, the effect of design information on the estimation of latent factors is quantified for finite $N$ and $J$, without requiring $N$ and $J$ to grow to infinity.
We introduce the following definition on collections of manifest variable types.
\begin{definition}[Feasible collection of subsets]
For a given $k\in\{1,...,K\}$, we say a collection $\mathcal{A}$ of subsets of $\{1,...,K\}$ is feasible for the $k$th dimension if
\begin{equation}
\{k\}=\bigcap_{S\in \mathcal A} S.
\end{equation}
\end{definition}
The concept of feasible collection is closely related to the necessary and sufficient condition \eqref{eq:condition-k} for the structural identifiability of the $k$th factor.
Given the concept of feasible collection of manifest variable types, we then define an index as follows,
\begin{equation}\label{eq:sigmanj}
\mathbf sigma_{N,J}=\min\left\{\frac{\max\limits_{\mathcal{A}: \mathcal{A}\text{ is feasible}}
\min\limits_{S\in\mathcal{A}} \mathbf sigma_{|S|}( A^*_{[R_Q(S)\cap\{1,...,J\},S]} )
}{\mathbf sqrt{J}},\frac{\mathbf sigma_K(\Theta^*_{[1:N,1:K]})}{\mathbf sqrt{N}}\mathbf right\},
\end{equation}
which serves as a measure of the signal strength on dimension $k$.
We elaborate on this index. First, for a given feasible collection $\mathcal{A}$, the first quantity in the large brackets of \eqref{eq:sigmanj}, ${\min\limits_{S\in\mathcal{A}} \mathbf sigma_{|S|}( A^*_{[R_Q(S)\cap\{1,...,J\},S]} )}/{\mathbf sqrt{J}}$, measures the amount of information contained in the manifest variables associated with each set of factors $S\in\mathcal{A}$. In particular, if for every feasible collection $\mathcal{A}$ there exists an $S \in \mathcal{A}$ such that manifest variables of type $S$ do not exist, then this quantity becomes zero and thus $\mathbf sigma_{N, J} = 0$, as the second term in the brackets of \eqref{eq:sigmanj} is nonnegative. In that case, the manifest variables essentially contain no information about the $k$th dimension. Second, the second term in the brackets, ${\mathbf sigma_K(\Theta^*_{[1:N,1:K]})}/{\mathbf sqrt{N}}$, measures the lack of collinearity among the columns of $\Theta^*_{[1:N,1:K]}$: the less collinear the latent dimensions, the larger this term.
In summary, $\mathbf sigma_{N,J}$ is a nonnegative index, with
a larger value suggesting that $\Theta^*_{[1:N,k]}$ is easier to estimate.
An error bound is then established in the next theorem.
\begin{theorem}\label{thm:finite-bound}
Suppose that Assumption A2 holds and $\|\boldsymbol \theta_i^*\|\leq C, \|{\boldsymbol \alpha}a_j^*\|\leq C$ for all $1\leq i\leq N$ and $1\leq j\leq J$.
Then, there is a constant ${\kappa_1}finite$, independent of $N$ and $J$, such that
\begin{equation}\label{eq:error-bound-m-finite}
\frac{1}{\mathbf sqrt{NJ}}E\left\|\hat{\Theta}^{(N, J)}(\hat{A}^{(N, J)})^{\top}-\Theta^*_{[1:N, 1:K]} A^{*\top}_{[1:J, 1:K]}\mathbf right\|_F\leq \frac{{\kappa_1}finite}{\mathbf sqrt{N\wedge J}}.
\end{equation}
Moreover, if $\mathbf sigma_{N,J}> 4{\kappa_1}finite^{1/2}{({N\wedge J})^{-1/2}}$, then there exists a monotone decreasing function ${\kappa_2}function(\cdot):\mathbb{R}_+\to \mathbb{R}_+$ (independent of $N$ and $J$, but possibly depending on $C'$, $K$, and the function $b$) such that for $N\geq N_0$ and $J\geq J_0$,
\begin{equation}\label{eq:error-bound-k-finite}
E \mathbf sin\angle(\Theta^*_{[1:N, k]},\hat{\Theta}_{[k]}^{(N, J)}) \leq \frac{{\kappa_2}function(\mathbf sigma_{N,J}) }{\mathbf sqrt{N\wedge J}}.
\end{equation}
The precise forms of the constant ${\kappa_1}finite$ and the function ${\kappa_2}function(\cdot)$ are given in the Appendix.
\end{theorem}
The above theorem does not require $N$ and $J$ to grow to infinity and thus is referred to as a non-asymptotic result.
Compared with Theorem~\ref{thm:consistency}, the assumptions of Theorem~\ref{thm:finite-bound} are weaker.
That is,
Theorem~\mathbf ref{thm:finite-bound} does not require any limit-type assumptions as required in Theorem \mathbf ref{thm:consistency}, including Assumption A1, the requirement $(\Theta^*,A^*)\in\mathcal{S}_Q$, and condition \eqref{eq:condition-k}.
Instead, it quantifies the effect of
design information on the estimation of latent factors
under a non-asymptotic setting, only requiring that the signal level $\mathbf sigma_{N,J}$ be higher than the noise level $4{\kappa_1}finite^{1/2}{({N\wedge J})^{-1/2}}$. According to the proposition below, this assumption is implied by the limit-type assumptions of Theorem~\ref{thm:consistency} and is thus weaker.
\begin{proposition}\label{prop:new-assump-old-assump}
If Assumption A1 and condition \eqref{eq:condition-k} are satisfied and $(\Theta^*,A^*)\in\mathcal{S}_Q$, then
$$\liminf\limits_{N,J\to\infty}\mathbf sigma_{N,J}>0.$$
\end{proposition}
\section{Further Implications}\label{sec:implication}
In this section, we discuss the implications of the above results in the applications of generalized latent factor models to large-scale educational and psychological measurement. In such applications, each manifest variable typically corresponds to an item in a test, each
individual is a test-taker, and both $N$ and $J$ are large. The latent factors are often interpreted as individuals' cognitive abilities, psychological traits, etc., depending on the context of the test. It is often of interest to
estimate the latent factor scores $\boldsymbol \theta_i$.
\subsection{On the Design of Tests}
According to Theorems~\mathbf ref{thm:identifiability} and \mathbf ref{thm:consistency}, the key to the structural identifiability and consistent estimation of factor $k$ is
\begin{equation}\label{eq:imp}
\{k\}=\bigcap_{k\in S, p_Q(S)>0} S,
\end{equation}
which provides insights on the measurement design. First, it implies that the ``simple structure" design advocated in psychological measurement is a safe design. Under the simple structure design, each manifest variable is associated with one and only one factor. If each latent factor $k$ is associated with many manifest variables that only measure factor $k$, or more precisely $p_Q({\{k\}}) > 0$, \eqref{eq:imp} is satisfied.
Second, our result implies that a simple structure is not necessary for a good measurement design. A latent factor can still be identified even when it is always measured together with some other factors. For example, consider the $Q$-matrix in Table~\mathbf ref{tab:Q}. Under this design,
all three factors satisfy \eqref{eq:imp} even when there is no item measuring a single latent factor.
Third, \eqref{eq:imp} is not satisfied when there exists a $k' \neq k$ and $k'\in \bigcap_{k\in S, p_Q(S)>0} S$.
That is,
almost all manifest variables that are associated with factor $k$ are also associated with factor $k'$, in the asymptotic sense.
Consequently, one cannot distinguish factor $k$ from factor $k'$, making factor $k$ structurally unidentifiable. We point out that in this case, factor $k'$ may still be structurally identifiable, as $p_Q({\{k'\}}) > 0$ is still possible.
Finally, \eqref{eq:imp} is also not satisfied when $\bigcap_{k\in S, p_Q(S)>0} S=\emptyset$.
This implies that factor $k$ is not structurally identifiable when it is not measured by a sufficient number of manifest variables.
\begin{table}
\centering
\begin{tabular}{c|ccc|ccc|c}
\hline
& 1 & 2 & 3 & 4 & 5 & 6 & $\cdots$ \\
\hline
& 1 & 1 & 0 & 1 & 1 & 0 & $\cdots$\\
$Q^\top$& 1 & 0 & 1 & 1 & 0 & 1 & $\cdots$\\
& 0 & 1 & 1 & 0 & 1 & 1 & $\cdots$\\
\hline
\end{tabular}
\caption{An example of the design matrix $Q$, which has infinitely many rows and $K = 3$ columns. The rows of $Q$ are obtained by repeating the first $3\times 3$ submatrix infinitely many times. }\label{tab:Q}
\end{table}
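Using the same check as in the sketch of Section~\ref{sec:struct-ident} (our illustration), the design of Table~\ref{tab:Q} yields $p_Q(\{1,2\})=p_Q(\{1,3\})=p_Q(\{2,3\})=1/3$, and all three factors satisfy \eqref{eq:imp}:
\begin{verbatim}
# Design of Table 1: every factor is identifiable although no item is
# simple-structured; reuses the structurally_identifiable helper sketched above.
p_Q = {frozenset({1, 2}): 1/3, frozenset({1, 3}): 1/3, frozenset({2, 3}): 1/3}
print([structurally_identifiable(k, p_Q) for k in (1, 2, 3)])   # [True, True, True]
\end{verbatim}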
\subsection{Properties of Estimated Factor Scores}
\paragraph{A useful result.}
Let $(\Theta^*,A^*)\in \mathcal{S}_{Q}$ be the true parameters for the person and the manifest variable populations. We start with a lemma connecting sine angle consistency and $L_2$ consistency.
\begin{lemma}\label{lemma:sine}
Let $\mathbf w,\mathbf w'\in \mathbb{R}^n$ with $\mathbf{w}\neq \mathbf{0}$ and $\mathbf{w}'\neq\mathbf{0}$, and let $c=\text{sign}(\cos\angle(\mathbf w,\mathbf w'))$.
Then,
\begin{equation}
\Big\|\frac{\mathbf w}{\|\mathbf w\|}-c\frac{\mathbf w'}{\|\mathbf w'\|}\Big\|^2=2-2\mathbf sqrt{1-\mathbf sin^2\angle(\mathbf w,\mathbf w')}.
\end{equation}
\end{lemma}
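A quick numerical sanity check of Lemma~\ref{lemma:sine} (illustrative only, on randomly generated vectors) is as follows.
\begin{verbatim}
# Verify ||w/||w|| - c w'/||w'||||^2 = 2 - 2 sqrt(1 - sin^2 angle) numerically.
import numpy as np

rng = np.random.default_rng(2)
w, wp = rng.normal(size=5), rng.normal(size=5)
c = np.sign(w @ wp)
lhs = np.sum((w / np.linalg.norm(w) - c * wp / np.linalg.norm(wp)) ** 2)
cos_sq = (w @ wp) ** 2 / ((w @ w) * (wp @ wp))
rhs = 2.0 - 2.0 * np.sqrt(1.0 - (1.0 - cos_sq))   # 1 - cos^2 = sin^2
print(np.isclose(lhs, rhs))                        # True
\end{verbatim}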
Combining Theorem~\mathbf ref{thm:consistency} and Lemma~\mathbf ref{lemma:sine}, we have the
following corollary which
establishes a relationship between the true person parameters and their estimates.
This result is the key to the rest of the results in this section.
\begin{corollary}\label{cor:similar}
Under Assumptions A1-A2, if \eqref{eq:condition-k} is satisfied for some $k$, then there exists a sequence of random variables $c_{N,J}\in\{-1,1\}$ such that
\begin{equation}\label{eq:cn}
\left\|\frac{\Theta^*_{[1:N, k]}}{\|\Theta^*_{[1:N, k]}\|}-c_{N,J}\frac{\hat{\Theta}^{(N, J)}_{[k]}}{\|\hat{\Theta}^{(N, J)}_{[k]}\|}\mathbf right\|
\overset{P_{\Theta^*, A^*}}{\to} 0, ~~ N, J \rightarrow \infty.
\end{equation}
\end{corollary}
\begin{remark}
Corollary~\mathbf ref{cor:similar} follows directly from \eqref{eq:error-bound-dim-k}. It provides an alternative view on how $\hat{\Theta}_{[k]}^{(N, J)}$ approximates $\Theta^*_{[1:N, k]}$. Since the likelihood function depends on $\Theta_{[1:N, 1:K]}$ and $A_{[1:J, 1:K]}$ only through $\Theta_{[1:N, 1:K]} A_{[1:J, 1:K]}^\top$, the scale of $\Theta_{[1:N, k]}$ is not identifiable even when it is structurally identifiable. This phenomenon is intrinsic to latent variable models \citep[e.g.][]{skrondal2004generalized}. Corollary~\mathbf ref{cor:similar} {states}
that $\Theta^*_{[1:N, k]}$ and $\hat{\Theta}_{[k]}^{(N, J)}$ are close in Euclidean distance after proper normalization.
{The}
normalized vectors $\Theta^*_{[1:N, k]}/\Vert \Theta^*_{[1:N, k]}\Vert$ and $c_{N,J} \hat \Theta_{[k]}^{(N, J)}/\Vert \hat \Theta_{[k]}^{(N, J)}\Vert$ are both of unit length. The value of $c_{N,J}$ depends on the angle between $\Theta^*_{[1:N, k]}$ and $\hat \Theta_{[k]}^{(N, J)}$. Specifically, $c_{N,J} = 1$ if $\cos \angle(\Theta^*_{[1:N, k]}, \hat \Theta_{[k]}^{(N, J)}) > 0$ and
$c_{N,J} = -1$ otherwise. In practice, especially in psychological measurement, $c_{N,J}$ can typically be determined by additional domain knowledge.
\end{remark}
\paragraph{On the distribution of the person population.} In psychological measurement, the distribution of true factor scores
is typically of interest, which may provide an overview of the population on the constructs being measured.
Corollary~\mathbf ref{cor:similar} implies the following proposition on the empirical distribution of the factor scores.
\begin{proposition}\label{prop:empdist}
Suppose assumptions A1-A2 are satisfied
and furthermore \eqref{eq:condition-k} is satisfied for factor $k$. We normalize
$\theta_{ik}^*$ and $\hat{\theta}_{ik}^{(N, J)}$ by
\begin{equation}\label{eq:normalized-theta}
v_i=\frac{\mathbf sqrt{N}\theta_{ik}^*}{\|\Theta^{*}_{[1:N, k]}\|} \text{ and } \hat{v}_i= \frac{c_{N,J}\mathbf sqrt{N}\hat{\theta}_{ik}^{(N, J)}}{\|\hat{\Theta}_{[k]}^{(N, J)}\|}, ~~ i=1,...,N,
\end{equation}
where $c_{N,J}$ is defined and discussed in Corollary~\mathbf ref{cor:similar}.
Let $F_{N}$ and $\hat{F}_{N,J}$ be the empirical measures of $v_1,...,v_N$ and $\hat{v}_1,...,\hat{v}_N$, respectively.
Then,
$$Wass(F_{N},\hat{F}_{N,J}) \overset{P_{\Theta^*,A^*}}{\to} 0, ~~ N, J \rightarrow \infty,$$ where $Wass(\cdot,\cdot)$ denotes the Wasserstein distance between two probability measures,
$$Wass(\mu,\nu)=\sup_{h\text{ is 1-Lipschitz}}\Big|\int h \,d\mu-\int h\,d\nu\Big|.$$
\end{proposition}
We point out that the normalization in \eqref{eq:normalized-theta} is reasonable. Consider a random design setting where
$\theta^*_{ik}$s are i.i.d. samples from some distribution with a finite second moment. Then $F_N$ converges weakly to the distribution of $\eta / \mathbf sqrt{E\eta^2}$, where $\eta$ is a random variable following the same distribution.
Proposition~\mathbf ref{prop:empdist} then implies that when factor $k$ is structurally identifiable and both $N$ and $J$ are large,
the empirical distribution of $\hat{\theta}_{1k}^{(N, J)}, \hat{\theta}_{2k}^{(N, J)}, ..., \hat{\theta}_{Nk}^{(N, J)}$
approximates the empirical distribution of ${\theta}^*_{1k}, {\theta}^{*}_{2k}, ..., {\theta}^{*}_{Nk}$ accurately, up to a scaling. Specifically, for any 1-Lipschitz function $h$,
$\int h(x)\hat{F}_{N, J}(dx)$ is a consistent estimator of $\int h(x)F_N(dx)$ by the definition of the Wasserstein distance. Furthermore, Corollary~\ref{cor:similar} states that, under the regularity conditions, $\sum_{i=1}^N (v_i - \hat v_i)^2/N \to 0$ in probability as $N, J \rightarrow \infty$, which implies that $\sum_{i=1}^N 1_{\{(v_i - \hat v_i)^2 \geq \epsilon\}}/N \to 0$ in probability for all $\epsilon > 0$. That is, most of the $\hat v_i$s fall into a small neighborhood of the corresponding $v_i$s.
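In one dimension, the Wasserstein distance between two empirical measures with the same number of atoms reduces to the mean absolute difference of the sorted samples, so Proposition~\ref{prop:empdist} can be examined numerically as in the following sketch (ours; \texttt{theta\_true} and \texttt{theta\_hat} stand for a column of $\Theta^*$ and of $\hat\Theta^{(N,J)}$ obtained elsewhere, e.g., from the estimation sketch given earlier).
\begin{verbatim}
# Normalization as in the proposition above and the 1-Wasserstein distance
# between the empirical measures of v_i and hat v_i.
import numpy as np

def normalize(theta_col, sign=1.0):
    theta_col = np.asarray(theta_col)
    return sign * np.sqrt(len(theta_col)) * theta_col / np.linalg.norm(theta_col)

def wasserstein_1(x, y):
    # equal numbers of atoms: W1 is the mean absolute difference of sorted samples
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

# v, v_hat = normalize(theta_true), normalize(theta_hat, sign=c_NJ)
# print(wasserstein_1(v, v_hat))
\end{verbatim}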
\paragraph{On ranking consistency.} The estimated factor scores may also be used to rank individuals along a certain trait. In particular, in educational testing, the ranking provides an ordering of the students' proficiency in a certain ability (e.g., calculus, algebra, etc.). Our results also imply the validity of the ranking along a latent factor when it is structurally identifiable and $N$ and $J$ are sufficiently large. More precisely, we have the following proposition.
\begin{proposition}\label{prop:ranking}
Suppose assumptions A1-A2 are satisfied
and furthermore \eqref{eq:condition-k} is satisfied for factor $k$.
Consider $v_i$ and $\hat v_{i}$, the normalized versions of $\theta_{ik}^*$ and $\hat \theta_{ik}^{(N, J)}$ as defined in \eqref{eq:normalized-theta}.
In addition, assume that there exists a constant $\kappa_R$ such that for any sufficiently small $\epsilon>0$ and sufficiently large $N$,
\begin{equation}\label{eq:assump-ranking}
\frac{\mathbf sum_{i\neq i'} I{\{\vert v_i-v_{i'} \vert \leq \epsilon \}}}{N(N-1)/2} \leq \kappa_R \epsilon.
\end{equation}
Then,
\begin{equation}\label{eq:ranking}
\frac{\tau(\mathbf v,\hat{\mathbf v})}{N(N-1)/2} \overset{P_{(\Theta^*,A^*)}}{\to} 0, ~~ N, J \rightarrow \infty,
\end{equation}
where $\tau(\mathbf v,\hat{\mathbf v})=\mathbf sum_{i\neq i'}I(v_i>v_{i'},\hat{v}_i<\hat{v}_{i'})+I(v_i<v_{i'},\hat{v}_i>\hat{v}_{i'})$ is the number of inconsistent pairs according to the ranks of $\mathbf v = (v_1, ..., v_N)$ and $\hat {\mathbf v} = (\hat v_1, ..., \hat v_N)$.
\end{proposition}
We point out that \eqref{eq:assump-ranking} is a mild regularity condition on the empirical distribution $F_N$. It requires that the probability mass under $F_N$ does not concentrate in any small $\epsilon$-neighborhood, which further implies that the pairs of individuals who are difficult to distinguish along factor $k$, i.e., pairs $(i, i')$ for which $v_i$ and $v_{i'}$ are close, take up only a small proportion of all $N(N-1)/2$ pairs. In fact, it can be shown that \eqref{eq:assump-ranking} holds with probability tending to 1 as $N$ grows to infinity, when the $\theta_{ik}^*$s are i.i.d. samples from a distribution with a bounded density function.
Proposition~\mathbf ref{prop:ranking} then implies that if we rank the individuals using $\hat v_i$ (assuming $c_{N,J}$ can be consistently estimated based on other information), the proportion of incorrectly ranked pairs converges to 0.
Note that $\tau(\mathbf v,\hat{\mathbf v})$ is known as the Kendall's tau distance \citep{kendall1990rank}, a widely used measure for ranking consistency.
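The proportion of incorrectly ranked pairs in \eqref{eq:ranking} can be computed directly; the following $O(N^2)$ sketch (ours) suffices for moderate $N$.
\begin{verbatim}
# Normalized Kendall tau distance: proportion of pairs ranked inconsistently.
import numpy as np

def kendall_tau_distance(v, v_hat):
    v, v_hat = np.asarray(v), np.asarray(v_hat)
    n = len(v)
    discordant = sum(
        (v[i] - v[j]) * (v_hat[i] - v_hat[j]) < 0
        for i in range(n) for j in range(i + 1, n)
    )
    return discordant / (n * (n - 1) / 2)
\end{verbatim}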
\paragraph{On classification consistency.} Another common practice of utilizing estimated factor scores is to classify individuals into two or more groups along a certain construct.
For example, in an educational mastery test, it is of interest to classify examinees into ``mastery" and ``nonmastery" groups according to their proficiency in a certain ability \citep{lord1980applications,bartroff2008modern}. In measuring psychopathology, it is common to classify respondents into ``diseased" and ``non-diseased" groups with respect to a mental health disorder. We justify the validity of making such classifications based on the estimated factor scores.
\begin{proposition}\label{prop:classify}
Suppose assumptions A1-A2 are satisfied
and furthermore \eqref{eq:condition-k} is satisfied for factor $k$.
Consider $v_i$ and $\hat v_{i}$, the normalized versions of $\theta_{ik}^*$ and $\hat \theta_{ik}^{(N, J)}$ as defined in \eqref{eq:normalized-theta}.
Let $\tau_-<\tau_+$ be the classification thresholds. Then
\begin{equation}\label{eq:classificatin}
\frac{\sum_{i=1}^N I{\left\{\hat{v}_i\geq \tau_+, v_i\leq \tau_- \right\}} + {I}{\left\{\hat{v}_i\leq \tau_-, v_i \geq \tau_+ \right\}}}{N} \overset{P_{\Theta^*,A^*}}{\to} 0, ~~ N, J \rightarrow \infty.
\end{equation}
\end{proposition}
Considering two pre-specified thresholds $\tau_-$ and $\tau_+$ is the well-known indifference-zone formulation of an educational mastery test \citep[e.g.,][]{bartroff2008modern}. In that context, examinees with $v_i \geq \tau_+$ are classified into the ``mastery" group and those with $v_i\leq \tau_-$ are classified into the ``nonmastery" group.
The interval $(\tau_-, \tau_+)$ is known as the indifference zone, within which no decision is made.
Proposition~\mathbf ref{prop:classify} then implies that when factor $k$ is structurally identifiable, the classification error tends to 0 as both $N$ and $J$ grow to infinity.
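The classification error in \eqref{eq:classificatin} can likewise be computed directly; the sketch below (ours, with the thresholds passed in as hypothetical arguments) counts gross misclassifications across the indifference zone.
\begin{verbatim}
# Proportion of gross misclassifications across the indifference zone (tau_-, tau_+).
import numpy as np

def classification_error(v, v_hat, tau_minus, tau_plus):
    v, v_hat = np.asarray(v), np.asarray(v_hat)
    errors = ((v_hat >= tau_plus) & (v <= tau_minus)) | \
             ((v_hat <= tau_minus) & (v >= tau_plus))
    return errors.mean()
\end{verbatim}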
\section{Extensions}\label{sec:extension}
\subsection{Generalized Latent Factor Models with Intercepts}
As mentioned in Section~\mathbf ref{subsec:model}, intercepts can be easily incorporated into the generalized latent factor model by restricting $\theta_{i1}=1$. Then, $a_{j1}$s are the intercept parameters and $q_{j1}=1$ for all $j$.
Consequently, for any $S$ satisfying $p_Q(S) > 0$, $1\in S$ and thus
the latent factors $2, \ldots, K$ are not structurally identifiable according to Theorem~\ref{thm:identifiability}.
Interestingly, these factors are still structurally identifiable if we
restrict to the following parameter space
\begin{equation}
\begin{split}
\mathcal{S}_{Q,-}
=
\Big\{&
(\Theta,A)\in \mathcal{S}_Q:
\lim_{N\to\infty}\frac{1}{N}\mathbf{1}_{N}^{\top}\Theta_{[1:N,m]}=0 \text{ for } m\mathbf geq 2, \text{ and } \theta_{i1}=1 \text{ for } i\in\mathbb{Z}_+
\Big\},
\end{split}
\end{equation}
which requires that $\Theta_{[k]}$ and $\Theta_{[1]}$ are asymptotically orthogonal, for all $k \mathbf geq 2$.
\begin{proposition}\label{thm:intercept}
Under Assumptions A1-A2, and assuming that $q_{j1}=1$ for all $j\in\mathbb{Z}_+$ and $K\geq 2$, the $k$th latent factor, $k\geq 2$, is structurally identifiable in $\mathcal{S}_{Q,-}$ if and only if
\begin{equation}\label{eq:condition-k-intercept}
\{1, k\}=\bigcap_{k\in S, p_Q(S)>0} S.
\end{equation}
\end{proposition}
The next proposition guarantees that $\mathcal{S}_{Q,-}$ is also non-empty.
\begin{proposition}\label{lemma:empty-intercept}
For any $Q$ satisfying A1 with $q_{j1}=1$ for all $j\in\mathbb{Z}_+$, and any constant $C>1$, we have $\mathcal{S}_{Q,-}\neq\emptyset$.
\end{proposition}
\begin{remark}
When having intercepts in the model, similar results of consistency {and a non-asymptotic error bound} can be established for the estimator
\begin{equation}\label{eq:mle2}
\begin{aligned}
(\hat{\Theta}^{(N, J)},\hat{A}^{(N, J)})\in\operatornamewithlimits{arg\,min}& - l\left(\boldsymbol \theta_1, ..., \boldsymbol \theta_N, \boldsymbol a_1, ..., \boldsymbol a_J\right),\\
s.t.~ & \|\boldsymbol \theta_i\|\leq C', \|\boldsymbol a_j\|\leq C', \boldsymbol a_j \in \mathcal{D}_j, \\
& \theta_{i1} = 1, \sum_{i'=1}^N \theta_{i'k} = 0,\\
& i = 1, ..., N, j = 1, ..., J, k = 2, ..., K.
\end{aligned}
\end{equation}
\end{remark}
\subsection{Extension to Missing Values}
Our estimator can also handle missing data, which are often encountered in practice.
Let $\Omega = (\omega_{ij})_{N\times J}$ be the indicator matrix of nonmissing values, where
$\omega_{ij} = 1$ if $Y_{ij}$ is observed and $\omega_{ij} = 0$ if $Y_{ij}$ is missing. When data are completely missing at random, the joint likelihood function becomes
$$L^\Omega(\boldsymbol \theta_1, ..., \boldsymbol \theta_N, \boldsymbol a_1, ..., \boldsymbol a_J, \phi) = \prod_{i,j: \omega_{ij} = 1} \exp\left(\frac{y_{ij} m_{ij} - b(m_{ij})}{\phi} + c(y_{ij}, \phi)\right)$$
and our estimator becomes
\begin{equation}\label{eq:mle3}
\begin{aligned}
(\hat{\Theta}^{(N, J)},\hat{A}^{(N, J)})\in \operatornamewithlimits{arg\,min}& - l^\Omega\left(\boldsymbol \theta_1, ..., \boldsymbol \theta_N, \boldsymbol a_1, ..., \boldsymbol a_J\right),\\
s.t.~ & \|\boldsymbol \theta_i\|\leq C', \|\boldsymbol a_j\|\leq C',\\
& \boldsymbol a_j \in \mathcal{D}_j, i = 1, ..., N, j = 1, ..., J, \\
\end{aligned}
\end{equation}
where $l^{\Omega}(\Theta A^{\top})= \sum_{i, j: \omega_{ij} = 1} y_{ij}m_{ij}-b(m_{ij})$.
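For concreteness, the following Python sketch evaluates $l^\Omega$ for the MIRT model, for which $b(m)=\log(1+e^m)$ and $\phi = 1$. The matrix names are ours, and the routine only illustrates the masked likelihood; it does not reproduce the constrained optimization.
\begin{verbatim}
import numpy as np

def masked_loglik_mirt(Theta, A, Y, Omega):
    """Masked joint log-likelihood l^Omega(Theta A^T) for the MIRT model,
    i.e. the sum of y_ij m_ij - b(m_ij) with b(m) = log(1 + exp(m)),
    taken only over the observed entries (omega_ij = 1)."""
    M = Theta @ A.T                      # natural parameters m_ij
    ll = Y * M - np.logaddexp(0.0, M)    # log(1 + e^m) evaluated stably
    return np.sum(ll[Omega == 1])
\end{verbatim}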
Moreover, results similar to Theorem~\ref{thm:consistency} can be established even in the presence of missing data.
Specifically, we assume
\begin{enumerate}
\item[A3] $\omega_{ij}$s in $\Omega$ are independent and identically distributed Bernoulli random variables with
$$P(\omega_{ij} = 1) = \frac{n}{NJ}.$$
\end{enumerate}
This assumption implies that data are completely missing at random and only about $n$ entries of $(Y_{ij})_{N\times J}$
are observed. We have the following result.
\begin{proposition}\label{prop:missing}
Under assumptions A1-A3 and $(\Theta^*, A^*)\in\mathcal{S}_{Q}$, there exists $\kappa_{1,\mathrm{missing}}>0$ such that
\begin{equation}\label{eq:error-bound-m-missing}
\begin{aligned}
&\frac{1}{\sqrt{NJ}} E\left(\|\hat{\Theta}^{(N, J)}(\hat{A}^{(N, J)})^{\top}-\Theta^*_{[1:N, 1:K]} (A_{[1:J, 1:K]}^{*})^\top\|_F\right)\\
\leq &\kappa_{1,\mathrm{missing}}\max\left\{
\sqrt{\frac{N\vee J}{n}},\frac{(NJ)^{1/2}}{n^{3/4}}
\right\}.
\end{aligned}
\end{equation}
Moreover, if $Q$ satisfies
\eqref{eq:condition-k} and thus latent factor $k$ is structurally identifiable,
then there exists a constant $\kappa_{2,\mathrm{missing}}$ such that
\begin{equation}\label{eq:error-bound-dim-k-missing}
E \sin\angle(\Theta^*_{[1:N, k]},\hat{\Theta}_{[k]}^{(N, J)}) \leq \kappa_{2,\mathrm{missing}} \max\left\{
\sqrt{\frac{N\vee J}{n}},\frac{(NJ)^{1/2}}{n^{3/4}}
\right\}.
\end{equation}
\end{proposition}
\begin{remark}
Results similar to \eqref{eq:error-bound-m-missing} have also
been derived in the matrix completion literature \citep[e.g.][]{candes2010matrix,davenport20141,cai2013max,bhaskar20151} under specific statistical models with an underlying low rank structure.
Proposition~\ref{prop:missing} extends the existing results on matrix completion to a generalized latent factor model.
\end{remark}
\section{A Useful Perturbation Bound on Linear Subspaces}\label{sec:proof-strategy}
The standard approach (see, e.g., \cite{davenport20141}) for bounding the error of the maximum likelihood estimator makes use of the strong/weak convexity of the log-likelihood function.
However, in the generalized latent factor model, the log-likelihood function is not convex in $(\Theta_{[1:N,1:K]},A_{[1:J,1:K]})$. Thus, the standard approach is not applicable for proving \eqref{eq:error-bound-dim-k} in Theorem~\ref{thm:consistency} and the similar results on the estimation of $\Theta_{[1:N,k]}$ in Theorem~\ref{thm:finite-bound} and Proposition~\ref{prop:missing}.
New technical tools are developed to handle this problem. Specifically, there are two major steps in proving Theorem~\ref{thm:consistency}.
In the first step, we establish \eqref{eq:error-bound-m}, which bounds the error in the estimation of $\Theta_{[1:N, 1:K]}^* A^{*\top}_{[1:J, 1:K]}$ as a whole. This step extends the error bound for exact low-rank matrix completion \citep{bhaskar20151} to the generalized latent factor model setting. A small estimation error for $\Theta_{[1:N, 1:K]}^* A^{*\top}_{[1:J, 1:K]}$ implies that the linear space spanned by the column vectors of $\Theta_{[1:N, 1:K]}^*$ can be recovered up to a small perturbation.
In the second step, given the result from the first step and the design information, we show that the direction of the vector $\Theta_{[1:N, k]}^*$ can be recovered up to a small perturbation. This step is technically challenging and is tackled by a new perturbation bound for the intersection of linear spaces. This perturbation bound, introduced below, may be of independent value for the theoretical analysis of low-rank matrix estimation.
Let ${\mathcal{R}}(W)$ denote the column space of a matrix $W$. \sloppy
Under the conditions of Theorem~\ref{thm:consistency}, the result of \eqref{eq:error-bound-m}
combined with the Davis-Kahan-Wedin sine theorem \citep[see e.g.][]{stewart1990matrix} allows us to bound
$\sin\angle({\mathcal{R}}(\Theta^*_{[1:N, S]}),{\mathcal{R}}(\hat{\Theta}^{(N, J)}_{[S]}))$, for any $S$ satisfying $p_Q(S) > 0$,
where $\angle(L,M)$ denotes the largest principal angle between two linear spaces $L$ and $M$, i.e.,
$\sin \angle(L,M) = \max_{\mathbf u\in M, \mathbf u\neq \mathbf 0} \min_{\mathbf v\in L, \mathbf v\neq \mathbf 0} \sin \angle(\mathbf u,\mathbf v).$
Our strategy is to bound $$ \sin\angle(\Theta^*_{[1:N, k]},\hat{\Theta}_{[k]}^{(N, J)}) = \sin\angle({\mathcal{R}}(\Theta^*_{[1:N, k]}),{\mathcal{R}}(\hat{\Theta}_{[k]}^{(N, J)}))$$ by $\sin\angle({\mathcal{R}}(\Theta^*_{[1:N, 1:K]}),{\mathcal{R}}(\hat{\Theta}^{(N, J)}))$ under the assumptions of Theorem~\ref{thm:consistency}.
Note that ${\mathcal{R}}(\Theta^*_{[1:N, k]}) = \bigcap_{k\in S, p_Q(S) > 0} {\mathcal{R}}(\Theta^*_{[1:N, S]})$ and similarly
${\mathcal{R}}(\hat{\Theta}_{[k]}^{(N, J)}) = \bigcap_{k\in S, p_Q(S) > 0} {\mathcal{R}}(\hat{\Theta}_{[S]}^{(N, J)})$.
Consequently, it remains to show that
if the linear spaces are perturbed slightly,
then their intersection does not change much.
To this end, we establish a new perturbation bound on the intersection of general linear spaces in the next proposition.
\begin{proposition}[Perturbation bound for intersection of linear spaces]\label{prop:perturb}
Let $L$, $M$, $L'$, $M'$ be linear subspaces of a finite dimensional vector space. Then,
\begin{equation}\label{eq:perturbation-bound}
\|{\mathbf{P}}_{L'\cap M'}-{\mathbf{P}}_{L\cap M}\|\leq 8 \max\{
\alpha(\theta_{\min,+}(L,M)),\alpha(\theta_{\min,+}( L', M'))
\}(\|{\mathbf{P}}_L-{\mathbf{P}}_{L'}\|+\|{\mathbf{P}}_{M}-{\mathbf{P}}_{M'}\|),
\end{equation}
where we define $\theta_{\min,+}(L,M)$ as the smallest positive principal angle between $L$ and $M$ (defined as $0$ if all the principal angles are $0$), ${\mathbf{P}}_M$ denotes the orthogonal projection onto a linear space $M$, and $
\alpha(\theta)={{2(1+\cos \theta)}/{(1-\cos\theta)^3}}.$
Here, the norm $\|\cdot\|$ could be any unitary invariant, uniformly generated and normalized matrix norm.
In particular, if we take $\|\cdot\|$ to be the spectral norm $\|\cdot\|_2$, then we have
\begin{equation}
\begin{aligned}
&\sin\angle(L'\cap M',L\cap M)\\
\leq & 8 \max\{
\alpha(\theta_{\min,+}(L,M)),\alpha(\theta_{\min,+}( L', M'))
\}(\sin\angle(L,L')+\sin\angle({M},{M'})).
\end{aligned}
\end{equation}
\end{proposition}
We refer the readers to \cite{stewart1990matrix} for more details on principal angles between two linear spaces and on matrix norms. In particular, the spectral norm is a unitary invariant, uniformly generated and normalized matrix norm.
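Numerically, the quantities entering this bound can be obtained from the principal angles between column spaces, which standard software computes from singular values of products of orthonormal bases. The Python sketch below (our own illustration using SciPy's \texttt{subspace\_angles}; not part of the theoretical development) evaluates $\sin\angle(\cdot,\cdot)$, $\theta_{\min,+}$, and the factor $\alpha(\theta)$ of Proposition~\ref{prop:perturb}.
\begin{verbatim}
import numpy as np
from scipy.linalg import subspace_angles

def sin_largest_angle(W1, W2):
    """sin of the largest principal angle between the column spaces of W1 and W2."""
    return np.sin(np.max(subspace_angles(W1, W2)))

def smallest_positive_angle(W1, W2, tol=1e-10):
    """theta_{min,+}: the smallest positive principal angle (0 if all angles vanish)."""
    angles = subspace_angles(W1, W2)
    positive = angles[angles > tol]
    return positive.min() if positive.size else 0.0

def alpha(theta):
    """The factor alpha(theta) = 2(1 + cos theta)/(1 - cos theta)^3 in the bound."""
    c = np.cos(theta)
    return 2.0 * (1.0 + c) / (1.0 - c) ** 3
\end{verbatim}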
The result in Proposition~\ref{prop:perturb} holds for all linear subspaces.
The right-hand side of \eqref{eq:perturbation-bound} is finite if and only if $\theta_{\min,+}(L,M)\neq 0$ and $\theta_{\min,+}(L',M')\neq0$.
In our problem, $L={\mathcal{R}}(\Theta_{[1:N,S_1]})$ and $M={\mathcal{R}}(\Theta_{[1:N,S_2]})$ for $S_1,S_2\subset \{1,...,K\}$ and $S_1\neq S_2$. The next lemma further bounds
$\alpha(\theta_{\min,+}(L,M))$ when $L$ and $M$ are column spaces of submatrices of a matrix, which is a key step in proving \eqref{eq:error-bound-dim-k}.
\begin{lemma}\label{lemma:angle-lower}
Let $W\in \mathbb{R}^{N\times K}\setminus\{\mathbf{0}\}$ for some positive integers $N$ and $K$, and let $S_1,S_2\subset\{1,...,K\}$ be such that $S_1\setminus S_2\neq \emptyset$ and $S_2\setminus S_1 \neq \emptyset$. Then
\begin{equation}\label{eq:angle-lower}
\cos (\theta_{\min,+}({\mathcal{R}}(W_{[S_1]}),{\mathcal{R}}(W_{[S_2]})))\leq 1-\frac{\sigma_{|S_1\cup S_2|}^2(W_{[S_1\cup S_2]})}{\|W\|_2^2}.
\end{equation}
\end{lemma}
\section{Numerical Experiments}\label{sec:sim}
\subsection{Simulation Study I} We first verify Theorem~\ref{thm:consistency} and its implications when all latent factors are structurally identifiable. Specifically, we consider $K = 5$ under the three models discussed in Section~\ref{subsec:model}, including the linear, the MIRT and the Poisson models. Two design structures are considered, including (1) a simple structure, where $p_Q(\{k\}) = 1/5$, $k = 1, ..., 5$, and (2) a mixed structure, where $p_Q(S)= 1/5$, $S = \{1, 2, 3\}, \{2, 3, 4\}, \{3, 4, 5\}, \{4, 5, 1\}$, and $\{5, 1, 2\}$. The true person parameters
$\boldsymbol \theta_i^*$s and the true manifest parameters
$\boldsymbol a_j^*$s
are generated i.i.d.\ from distributions over the ball $\{\mathbf x \in \mathbb{R}^{K}: \Vert \mathbf x\Vert \leq 2.5\}$ (i.e., $C = 2.5$ in $\mathcal{S}_Q$). Under these settings, all the latent factors are structurally identifiable.
For each model and each design structure, a range of $J$ values is considered and we let $N = 25J$. Specifically, we consider $J = 100, 200, ..., 1000$ for the linear, the MIRT, and the Poisson models. For each combination of a model, a design structure, and a $J$ value, 50 independent datasets are generated. For each dataset, we apply Algorithm~\ref{alg:jmle} to solve \eqref{eq:mle}, where $C' = 1.2C$.
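For readers who wish to replicate this setting, the following Python sketch generates one dataset under the mixed design and the Poisson model. The paper only requires the true parameters to lie in the ball of radius $C$; the particular (uniform) draws below, and all function names, are our own illustrative choices, and the constrained optimization of Algorithm~\ref{alg:jmle} is not reproduced here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
K, J = 5, 100
N, C = 25 * J, 2.5

# Mixed design: each item measures one of five overlapping triples of factors.
triples = [(1, 2, 3), (2, 3, 4), (3, 4, 5), (4, 5, 1), (5, 1, 2)]
Q = np.zeros((J, K), dtype=int)
for j in range(J):
    for k in triples[j % 5]:
        Q[j, k - 1] = 1

def sample_ball(n, dim, radius, rng):
    """Uniform draws from the ball of the given radius (one admissible choice)."""
    x = rng.normal(size=(n, dim))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    r = radius * rng.uniform(size=(n, 1)) ** (1.0 / dim)
    return r * x

Theta = sample_ball(N, K, C, rng)      # true person parameters
A = sample_ball(J, K, C, rng) * Q      # true loadings, zeroed outside the design
Y = rng.poisson(np.exp(Theta @ A.T))   # Poisson model: E[Y_ij] = exp(theta_i' a_j)
\end{verbatim}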
Results are shown in Figures~\ref{fig:1}-\ref{fig:5}.
\begin{figure}
\caption{The value of $\frac{1}{NJ}\|\hat{\Theta}^{(N, J)}(\hat{A}^{(N, J)})^{\top}-\Theta^*_{[1:N, 1:K]} A^{*\top}_{[1:J, 1:K]}\|_F^2$ versus $J$, under the linear factor model, the MIRT model, and the Poisson factor model (one panel per model), for Simulation Study I.}
\label{fig:1}
\end{figure}
Figure~\ref{fig:1} shows the trend of $\frac{1}{NJ}\|\hat{\Theta}^{(N, J)}(\hat{A}^{(N, J)})^{\top}-\Theta^*_{[1:N, 1:K]} A^{*\top}_{[1:J, 1:K]}\|_F^2$ ($y$-axis) as $J$ increases ($x$-axis), where each panel corresponds to a model. This result verifies \eqref{eq:error-bound-m} in Theorem~\ref{thm:consistency}.
According to these plots, the normalized squared Frobenius norm between
$\Theta^*_{[1:N, 1:K]} A^{*\top}_{[1:J, 1:K]}$ and its estimate $\hat{\Theta}^{(N, J)}(\hat{A}^{(N, J)})^{\top}$ decays towards zero as $J$ and $N$ increase.
Figures~\ref{fig:2}-\ref{fig:5} present results for the first latent factor; the results for the other latent factors are essentially the same.
\begin{figure}
\caption{The value of $\sin\angle(\Theta^*_{[1:N, 1]},\hat{\Theta}_{[1]}^{(N, J)})$ versus $J$, under the linear factor model, the MIRT model, and the Poisson factor model (one panel per model), for Simulation Study I.}
\label{fig:2}
\end{figure}
Figure~\ref{fig:2} is used to verify \eqref{eq:error-bound-dim-k}
in Theorem~\ref{thm:consistency}, showing the pattern that $\sin\angle(\Theta^*_{[1:N, k]},\hat{\Theta}_{[k]}^{(N, J)})$ decreases as $J$ and $N$ increase.
\begin{figure}
\caption{Comparison between the histogram of the estimated scores $\hat v_i$ (panel ``Estimated'') and that of the true scores $v_i$ (panel ``True'') for the first latent factor under the Poisson model and the simple structure.}
\label{fig:3}
\end{figure}
Moreover, Figure~\ref{fig:3} provides evidence for the result of Proposition~\ref{prop:empdist}.
Displayed in Figure~\ref{fig:3} are the histograms of $v_i$s and $\hat v_i$s, respectively,
based on a randomly selected dataset with $J = 1000$ under the Poisson model and the simple structure.
According to this figure, little difference is observed between the empirical distribution of the $v_i$s and that of the $\hat v_i$s. Similar results are observed for other datasets under all three models when $J$ and $N$ are large.
\begin{figure}
\caption{The Kendall's tau ranking error calculated from $\Theta^*_{[1:N, 1]}$ and $\hat{\Theta}_{[1]}^{(N, J)}$ versus $J$, under the linear factor model, the MIRT model, and the Poisson factor model (one panel per model), for Simulation Study I.}
\label{fig:4}
\end{figure}
\begin{figure}
\caption{The classification error calculated from $\Theta^*_{[1:N, 1]}$ and $\hat{\Theta}_{[1]}^{(N, J)}$ versus $J$, under the linear factor model, the MIRT model, and the Poisson factor model (one panel per model), for Simulation Study I.}
\label{fig:5}
\end{figure}
Finally,
Figures~\ref{fig:4} and \ref{fig:5} show the results of ranking and classification based on the estimated factor scores, for which the theoretical results are given in Propositions~\ref{prop:ranking} and \ref{prop:classify}.
The $y$-axes of the two figures show the normalized Kendall's tau distance in \eqref{eq:ranking}
and the classification error in \eqref{eq:classificatin}, respectively.
Specifically, $\tau_-$ and $\tau_+$ are chosen as
0.14 and 0.43, which are the $55\%$ and $65\%$ quantiles of the $v_i$s, respectively.
From these plots, both the ranking and the classification errors tend to zero as $J$ and $N$ grow large.
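A minimal sketch of how the ranking error could be computed from the true and estimated scores is given below; it assumes that \eqref{eq:ranking} is the usual normalized Kendall's tau distance (the fraction of discordant pairs) and uses SciPy's \texttt{kendalltau}. Together with the classification error of \eqref{eq:classificatin} sketched earlier, this is all that is needed to produce curves such as those in Figures~\ref{fig:4} and \ref{fig:5} once the factor scores have been estimated.
\begin{verbatim}
import numpy as np
from scipy.stats import kendalltau

def normalized_kendall_distance(v, v_hat):
    """Fraction of discordant pairs between the rankings induced by v and v_hat
    (with continuous scores, ties are negligible)."""
    tau, _ = kendalltau(v, v_hat)
    return (1.0 - tau) / 2.0
\end{verbatim}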
\subsection{Simulation Study II}
We then provide an example in which a latent factor is not structurally identifiable. Specifically, we consider $K = 2$ and the same latent factor models as in Study I.
The design structure is given by $p_Q(\{1\}) = 1/2$ and $p_Q(\{1, 2\}) = 1/2$.
The true person parameters
$\boldsymbol \theta_i^*$s and the true manifest parameters
$\boldsymbol a_j^*$s
are generated i.i.d.\ from distributions over the ball $\{\mathbf x \in \mathbb{R}^{K}: \Vert \mathbf x\Vert \leq 3\}$ (i.e., $C = 3$ in $\mathcal{S}_Q$). Under these settings, the first latent factor is structurally identifiable and the second factor is not.
For each model, we consider $J = 100, 200, ..., 1000$. The rest of the simulation setting is the same as in Study I.
\begin{figure}
\caption{The value of $\frac{1}{NJ}\|\hat{\Theta}^{(N, J)}(\hat{A}^{(N, J)})^{\top}-\Theta^*_{[1:N, 1:K]} A^{*\top}_{[1:J, 1:K]}\|_F^2$ versus $J$, under the linear, the MIRT, and the Poisson models, for Simulation Study II.}
\label{fig:6}
\end{figure}
\begin{figure}
\caption{The value of $\sin\angle(\Theta^*_{[1:N, k]},\hat{\Theta}_{[k]}^{(N, J)})$ versus $J$ for the two latent factors, under the linear factor model, the MIRT model, and the Poisson factor model (one panel per model), for Simulation Study II.}
\label{fig:7}
\end{figure}
Results are shown in Figures~\ref{fig:6} and \ref{fig:7}. First, Figure~\ref{fig:6} presents the pattern that $\frac{1}{NJ}\|\hat{\Theta}^{(N, J)}(\hat{A}^{(N, J)})^{\top}-\Theta^*_{[1:N, 1:K]} A^{*\top}_{[1:J, 1:K]}\|_F^2$
decays to 0 as $J$ increases, even when a latent factor is not structurally identifiable. This is consistent with the first part of Theorem~\ref{thm:consistency}. Second, Figure~\ref{fig:7} shows the
trend of $\sin\angle(\Theta^*_{[1:N, k]},\hat{\Theta}_{[k]}^{(N, J)})$ as $J$ increases. In particular,
the value of $\sin\angle(\Theta^*_{[1:N, k]},\hat{\Theta}_{[k]}^{(N, J)})$ stays above 0.3 for most of the datasets for the structurally unidentifiable factor, while it still decays towards 0 for the identifiable one.
\subsection{Real Data Example}\label{sec:real}
As an illustration, we apply the proposed method to analyze a personality assessment dataset based on an International Personality Item Pool (IPIP) NEO personality inventory \citep{johnson2014measuring}. This inventory is a public-domain version of the widely used NEO personality inventory \citep{costa1985neo}, which is designed to measure the Big Five personality factors ($K = 5$), including Neuroticism (N), Agreeableness (A), Extraversion (E), Openness to experience (O), and Conscientiousness (C).
The dataset contains 20,993 individuals and 300 items. We use a subset of the dataset which contains 7,325 individuals who have answered all 300 items. The measurement design matrix has a simple structure, which is a safe design according to our identifiability theory. Under this design, each item measures only one personality factor and each factor is measured by 60 items.
All the items are on a five-category rating scale, where reverse-worded items were reverse-scored
(1 $\rightarrow$ 5, 2 $\rightarrow$ 4, 4 $\rightarrow$ 2, 5 $\rightarrow$ 1) at the time the respondent completed the inventory.
In this analysis we dichotomize them by merging categories $\{1, 2, 3\}$ and $\{4, 5\}$, respectively, and then fit the MIRT model.
\begin{figure}
\caption{The histograms of the estimated factor scores $\hat{\Theta}_{[k]}^{(N, J)}$, $k = 1, ..., 5$, for the IPIP-NEO dataset.}
\label{fig:real_hist_theta}
\end{figure}
\begin{table}
\centering
\begin{tabular}{c|ccccc}
\hline
& N & A & E & O & C \\
\hline
N & 1.00 & -0.23 & -0.35 & -0.04 & -0.34\\
A & -0.23 & 1.00 & 0.22 & 0.19 & 0.31\\
E & -0.35 & 0.22 & 1.00 & 0.20 & 0.18\\
O & -0.04 & 0.19 & 0.20 & 1.00 & -0.01\\
C & -0.34 & 0.31 & 0.18 & -0.01 & 1.00\\
\hline
\end{tabular}
\caption{The correlation matrix between the estimated factor scores for the IPIP-NEO dataset.}\label{tab:real_cor_theta}
\end{table}
\begin{figure}
\caption{The boxplot of the 60 estimated unconstrained loadings for each of the five factors for the IPIP-NEO dataset.}
\label{fig:real_boxplot_A}
\end{figure}
\begin{table}
\centering
\footnotesize
\begin{tabular}{c|rcl}
\hline
Factor &Items & Loading & Content \\
\hline
& 23(N$+$) & 2.81 & Am often down in the dumps. \\
& 4(N$+$) & 2.66 & Get stressed out easily. \\
N& 58(N$-$) & 2.51 & Know how to cope. \\
& 25(N$+$) & 2.38 & Have frequent mood swings. \\
& 13(N$+$) & 2.31 & Get upset easily. \\
\hline
& 79(A$-$) & 2.02 & Take advantage of others. \\
& 86(A$-$) & 1.88 & Look down on others. \\
A& 84(A$+$) & 1.82 & Am concerned about others. \\
& 98(A$-$) & 1.80 & Insult people. \\
& 67(A$-$) & 1.80 & Distrust people. \\
\hline
& 123(E$+$) & 2.93 & Feel comfortable around people. \\
& 128(E$-$) & 2.54 & Avoid contacts with others. \\
E& 124(E$+$) & 2.48 & Act comfortably with others. \\
& 139(E$-$) & 2.25 & Avoid crowds. \\
& 132(E$+$) & 2.19 & Talk to a lot of different people at parties. \\
\hline
& 196(O$-$) & 2.28 & Do not like art. \\
& 191(O$+$) & 2.25 & Believe in the importance of art. \\
O& 226(O$-$) & 2.11 & Am not interested in abstract ideas. \\
& 225(O$+$) & 2.07 & Enjoy thinking about things. \\
& 229(O$-$) & 2.05 & Am not interested in theoretical discussions. \\
\hline
& 286(C$-$) & 2.23 & Find it difficult to get down to work. \\
& 272(C$+$) & 2.21 & Work hard. \\
C& 283(C$+$) & 2.15 & Start tasks right away. \\
& 289(C$-$) & 2.11 & Have difficulty starting tasks. \\
& 285(C$+$) & 2.10 & Carry out my plans. \\
\hline
\end{tabular}
\caption{The content of the top five items with highest estimated loadings on each factor for the IPIP-NEO dataset.}\label{tab:real_top_5}
\end{table}
The results are shown in Figures~\ref{fig:real_hist_theta} through \ref{fig:real_boxplot_A} and Tables~\ref{tab:real_cor_theta} and \ref{tab:real_top_5}. In Figure~\ref{fig:real_hist_theta}, the histograms of $\hat{\Theta}_{[k]}^{(N, J)}$ are given for $k = 1, ..., 5$, corresponding to the N, A, E, O, and C factors, respectively. As we can see, the estimated factor scores are approximately normally distributed, especially for the first three factors. For the O and C factors, the distributions of the estimated factor scores are slightly right skewed. In Table~\ref{tab:real_cor_theta}, the correlations between the factors are calculated using the estimated factor scores. These correlations are relatively small, which
is largely consistent with the existing findings in the literature on the Big Five personality factors \citep{digman1997higher}. Figure~\ref{fig:real_boxplot_A} shows the boxplot of the 60 unconstrained loadings for each of the five factors, according to which all the estimated loadings take values between 0 and 3 and the majority of them take values between 0.5 and 2. Table~\ref{tab:real_top_5} lists the content of the top five items with the highest estimated loadings on each factor. These items are indeed representative of the Big Five factors.
\section{Concluding Remarks}\label{sec:conc}
In this paper, we study how design information affects the identifiability and estimability of a general family of structured latent factor models, through both asymptotic and non-asymptotic analyses.
In particular, under a double asymptotic regime where both the number of individuals and the number of manifest variables grow to infinity, we define the concept of structural identifiability for latent factors.
Necessary and sufficient conditions are then established for the structural identifiability of a given latent factor. Moreover, an estimator is proposed that consistently recovers all the structurally identifiable latent factors. A non-asymptotic error bound is developed
to characterize the effect of design information on the estimation of latent factors, which complements the asymptotic results. In establishing these results, new perturbation bounds on the intersection of linear subspaces, as well as some other technical tools, are developed, which may be of independent theoretical interest. As shown in Section~\ref{sec:implication},
our results have significant implications for the use of generalized latent factor models in large-scale educational and psychological measurement.
There are several directions for future work. First, it is of interest to develop methods, such as information criteria, for model comparison based on the proposed estimator. Such methods can be used to
select a design matrix $Q$ that best describes the data structure when there are multiple candidates,
or to determine the underlying latent dimensions.
Second, the current results may be further generalized by considering more general latent factor models beyond the exponential family. For example,
similar identifiability and estimability results may be established when the distribution of $Y_{ij}$ is a
more complicated, or even unknown, function of $\boldsymbol \theta_i$ and $\boldsymbol a_j$.
\section*{Acknowledgement}
We would like to thank the editors and two referees for their helpful and constructive comments. We would also like to thank Prof. Zhiliang Ying for his valuable comments.
Xiaoou Li's research is partially supported by the NSF grant DMS-1712657. Yunxiao Chen's research is supported in part by NAEd/Spencer postdoctoral fellowship.
\end{document}
\begin{document}
\draft
\title{Wigner functions, squeezing properties and slow decoherence of
\\
atomic Schr\"{o}dinger cats}
\author{
{M. G. Benedict$^{1,}$}\cite{MGBemail}
and
{A. Czirj\'{a}k$^{1,2,}$}\cite{ACemail}
}
\address{
$^{1}$Department of Theoretical Physics, Attila J\'{o}zsef University,
\\
H-6720 Szeged, Tisza Lajos krt. 84-86, Hungary
\\
$^{2}$Department of Quantum Physics, University of Ulm,
D-89069 Ulm, Germany
}
\date{Submitted to Physical Review A: March 26, 1999.}
\maketitle
\begin{abstract}
We consider a class of states in an ensemble of two-level atoms:
a superposition of two distinct atomic coherent states, which can
be regarded as atomic analogues of the states usually called Schr\"{o}dinger
cat states in quantum optics. According to the relation of the constituents
we define polar and nonpolar cat states. The properties of these are
investigated by the aid of the spherical Wigner function. We show that
nonpolar cat states generally exhibit squeezing, the measure of which
depends on the separation of the components of the cat, and also on
the number of the constituent atoms.
By solving the master equation for the polar cat state
embedded in an external environment, we determine the characteristic times
of decoherence, dissipation and also the characteristic time of a new
parameter, the non-classicality of the state. This latter one is introduced
by the help of the Wigner function, which is used also to visualize the
process. The dependence of the characteristic times on the number of atoms
of the cat and on the temperature of the environment shows that the
decoherence of polar cat states is surprisingly slow.
\end{abstract}
\pacs{PACS number(s):
42.50.-p,
42.50.Fx,
03.65.Bz
}
\section{Introduction}
The question why macroscopic superpositions are not observable in everyday
life has been raised most strikingly by Schr\"{o}dinger in his famous cat
paradox. Recent experiments \cite{MMKW,BrHR}, however, show that at least
mesoscopic superpositions can be observed in quantum-optical systems. In
quantum optics one usually speaks of a Schr\"{o}dinger cat (SC) state if
one has a superposition of two different coherent states of a harmonic
oscillator. In one of the experiments \cite{MMKW} a superposition of two
different coherent states have been created for an ion oscillating in a
harmonic potential. In the other one \cite{BrHR} two coherent states of a
cavity mode were superposed, and also the process of the decoherence between
these states could be followed by monitoring the field with resonant atoms.
The unusual properties of such states have been discussed theoretically
in several publications, see e.g. \cite{YS86,JJ90,SPL91,BMKP92}.
A different type of Schr\"{o}dinger cat like state can be created
in principle in a collection of two-level atoms, as first proposed in
\cite{CZ94}.
The terminology we use is the following: the individual
two-level atoms can be regarded as the ``cells'' of the cat, and the cat is
definitely alive if all of its cells are alive, i.e. they are in the
$|+\rangle $ state, and it is definitely dead if all the cells are in the
ill, $|-\rangle $ state. In the case of $N$ atoms a prototype of a SC like
state is then:
\begin{equation}
|\Psi _{\text{SC}}\rangle ={\frac{1}{\sqrt{2}}}
(|{+,+,\ldots ,+}\rangle +|{-,-,\ldots ,-}\rangle ),
\label{SC0}
\end{equation}
where the first term contains $N$ pluses and the second $N$ minuses. We shall call
this state the polar cat state, because the two components are in the
farthest possible distance from each other. This state is in the totally
symmetric $N+1$ dimensional subspace of the whole $2^{N}$ dimensional
Hilbert space, and if such states are manipulated by a resonant
electromagnetic field mode with dipole interaction, then the atomic system
will remain in this subspace. This is the arena of the collective
interaction of the atoms and the electromagnetic field, called superradiance
\cite{D54,GH82,BM96}.
In this work we present results concerning the
properties and dynamics of polar cat states (\ref{SC0}), and also of more
general collective atomic states, the generation of which have also been
considered recently \cite{AgPS,G98}.
Our approach of discussing the properties of quantum states like
$|\Psi _{\text{SC}}\rangle $ is based mainly on the method of the Wigner
function, which is one of the possible quasi-probability distributions.
It has become a customary tool for investigating quantum states of an
electromagnetic mode oscillator, or an ion oscillating in an appropriate
trapping field \cite{FBLSS97,LMKMIW96}. The method of Wigner function is
much less exploited, however, in the description of atomic states like
(\ref{SC0}).
That is why we first summarize the essentials of this method, and
then turn to the determination of the Wigner function for the cat state
(\ref{SC0})
in Section II. Next, in Section III. we consider more general cat
like states, which we call ``nonpolar cats'', and determine their squeezing
properties. Finally, in Section IV. we write down and solve the master
equation for a cat state in an environment with finite temperature.
We define and determine the dissipation and decoherence times of the system,
and the characteristic time when the system becomes essentially classical.
\section{The Wigner function of the polar cat state}
The $N$-atom dipole interaction with the electromagnetic field is
equivalent to the dynamics of
a spin of $j=N/2$, and the phase space of the atomic subsystem is the
surface of a sphere of radius $\sqrt{j(j+1)}$, ($\hbar =1$), which is
sometimes called the Bloch sphere. This phase space and quasiprobability
distributions corresponding to various operators
acting in the $2j+1$ dimensional Hilbert space
have been introduced first by Stratonovich \cite{St}. Similar constructions
have been considered independently by several authors
\cite{ACGT,Ag,G76,VG,SW94}.
We use here the construction and notation introduced
by Agarwal \cite{Ag}. Similarly to the case of oscillator quasidistributions
\cite{CG,AgW},
the quasiprobability functions for angular momentum states
are not unique either. Beyond the natural requirements that the possible
quasiprobability distribution functions have to satisfy, there is a special
property, called the product rule, that distinguishes the most natural
choice among the possible quasiprobability distributions. This rule requires
that the expectation value of a product of two operators could be calculated
by integrating the product of the corresponding quasiprobabilities. This
choice is essentially unique, and in accordance with most authors we call it
the Wigner function for spin $j$. We note that the construction can be
extended to include several values of $j$
\cite{FBC98,Wolf},
and in the same
spirit Wigner functions can be defined for arbitrary Lie groups \cite{Brif}.
We also note that it is possible to define joint Wigner functions
for atom-field interactions, and then a fully phase space description of
atom-field dynamics can be considered \cite{CB96}. Here we restrict ourselves
to the problem of angular momentum with a fixed value of $j$.
Using the procedure proposed in \cite{Ag} we shortly summarize here the
method of quasiprobability functions in the $2j+1$ dimensional Hilbert
space. One first chooses an operator basis in this space, and the most
straightforward set of operators is the set of the spherical tensor
operators $T_{KQ}$, which, among others, transform irreducibly under the action
of the rotation operators \cite{BL}. Their explicit expression is:
\begin{eqnarray}
T_{KQ}=\sum_{m=-j}^{j}(-1)^{j-m}(2K+1)^{1/2}\left(
\begin{array}{ccc}
j & K & j \\
-m & Q & m-Q
\end{array}
\right) |j,m\rangle \langle j,m-Q|
\end{eqnarray}
where
$\left(
\begin{array}{ccc}
j & K & j \\
-m & Q & m-Q
\end{array}
\right) $
is the Wigner $3j$ symbol.
They form a basis in the sense that any operator
of the Hilbert space can be expanded in terms of them and
they fulfil the Hilbert-Schmidt orthonormality condition
$\text{Tr} \left( T^{\dagger}_{KQ} T_{K'Q'} \right)=\delta_{KK'}\delta_{QQ'}$.
Introducing the characteristic
matrix of the density operator $\rho $ with respect to this operator basis
as:
\begin{equation}
\varrho_{KQ}=\text{Tr} \left( \rho \,T_{KQ}^{\dagger} \right), \label{charf}
\end{equation}
the Wigner function of the state $\rho$ is defined as:
\begin{equation}
W_{\rho }(\theta ,\phi )=\sqrt{\frac{2j+1}{4\pi }}\sum_{K=0}^{2j}
\sum_{Q=-K}^{K}\varrho _{KQ} \, Y_{KQ}(\theta ,\phi ). \label{WFU}
\end{equation}
The factor in front of the sum ensures normalization.
We note that in a similar way one can associate a Wigner function
$W_{A}(\theta ,\phi )$ to any operator $A$, by introducing its characteristic
matrix:
$A_{KQ}=\text{Tr} \left( A\,T_{KQ}^{\dagger} \right)$,
and then forming the sum as in Eq. (\ref{WFU}).
It can easily be seen that this is a procedure very similar to the one by
which one introduces the quasidistributions of oscillator states and
operators with the help of characteristic functions of the translation operator
basis: $D(\alpha )=\exp (\alpha a^{\dagger }-\alpha ^{*}a)$ \cite{CG,AgW}.
The construction of Eq. (\ref{WFU}) can be shown to satisfy the product rule
mentioned above, giving the following result for the expectation
value of an operator $A$:
\begin{eqnarray}
\text{Tr}
(\rho A)= \sqrt{\frac{4\pi }{2j+1}} \int W_{\rho}(\theta ,\phi )
W_{A}(\theta ,\phi )\sin \theta
\,\text{d}\theta \,\text{d}\phi .
\end{eqnarray}
For other types of quasidistributions of angular momentum, like the analogs
of the oscillator $P$ and $Q$ functions see \cite{Ag}.
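For completeness we note that the characteristic matrix (\ref{charf}) and the Wigner function (\ref{WFU}) can be evaluated numerically in a few lines. The Python sketch below is our own illustration (not part of the original development): it assumes SymPy's exact \texttt{wigner\_3j} for the $3j$ symbols (which accepts half-integer $j$) and SciPy's \texttt{sph\_harm} for the spherical harmonics, with the $|j,m\rangle$ basis ordered $m=j,j-1,\ldots,-j$.
\begin{verbatim}
import numpy as np
from sympy import Rational
from sympy.physics.wigner import wigner_3j
from scipy.special import sph_harm

def wigner_function(rho, theta, phi):
    """Spherical Wigner function W_rho(theta, phi) of the definition above.
    rho: (2j+1) x (2j+1) density matrix in the |j,m> basis, rows/columns
    ordered m = j, j-1, ..., -j; theta is the polar, phi the azimuthal angle."""
    dim = rho.shape[0]
    j = Rational(dim - 1, 2)
    ms = [j - r for r in range(dim)]
    W = 0.0
    for K in range(dim):                  # K = 0, ..., 2j
        for Q in range(-K, K + 1):
            rho_KQ = 0.0                  # characteristic matrix element Tr(rho T_KQ^dagger)
            for a, m in enumerate(ms):
                for b, mp in enumerate(ms):
                    if m - mp != Q:       # T_KQ only connects |j,m><j,m-Q|
                        continue
                    coef = (-1) ** int(j - m) * np.sqrt(2 * K + 1) \
                        * float(wigner_3j(j, K, j, -m, Q, mp))
                    rho_KQ += np.conj(coef) * rho[a, b]
            # SciPy's sph_harm takes (order, degree, azimuthal, polar) angles
            W = W + rho_KQ * sph_harm(Q, K, phi, theta)
    return np.sqrt(dim / (4 * np.pi)) * np.real(W)

# Example: the polar cat state of N = 5 atoms (j = 5/2).
N = 5
rho = np.zeros((N + 1, N + 1))
rho[0, 0] = rho[-1, -1] = rho[0, -1] = rho[-1, 0] = 0.5
print(wigner_function(rho, theta=np.pi / 2, phi=0.0))
\end{verbatim}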
Similarly to the case of the oscillator, the Wigner function allows one to
visualize the properties of the state in question. In the work of Dowling et
al. \cite{DAS} graphical representations of the Wigner function of the
number, coherent and squeezed atomic states were presented. The Wigner
function of a cat state like (\ref{SC0}) has been considered first in \cite
{BCAS}.
The characteristic matrix of the state given by (\ref{SC0}) can now be
calculated according to the definition, Eq. (\ref{charf}), taking into
account that the density operator corresponding to
$|\Psi _{\text{SC}}\rangle $ is
\begin{eqnarray}
\rho _{\text{SC}}=\frac{1}{2}\left( \left| j,j\right\rangle \left\langle
j,j\right| +\left| j,-j\right\rangle \left\langle j,-j\right| +\left|
j,j\right\rangle \left\langle j,-j\right| +\left| j,-j\right\rangle
\left\langle j,j\right| \right)
\end{eqnarray}
in the standard basis, with $j=N/2.$ The characteristic matrix has the form:
\begin{eqnarray}
(\varrho _{\text{SC}})_{K,Q} = \frac{\sqrt{2K+1}}{2}
\left\{ \left(
\begin{array}{ccc}
j & K & j \\
-j & 0 & j
\end{array}
\right) (1+(-1)^{K})\delta _{Q,0} +
\left(
\begin{array}{ccc}
j & K & j \\
-j & 2j & -j
\end{array}
\right) (\delta _{Q,2j}+(-1)^{K}\delta _{Q,-2j})\right\} ,
\end{eqnarray}
and from Eq. (\ref{WFU}) one obtains the following result for the Wigner
function:
\begin{eqnarray}
W_{\text{SC}}(\theta ,\phi )={\frac{1}{2}}\sqrt{\frac{N+1}{4\pi }}
\left\{ \sum_{l=0}^{N}{\frac{{\sqrt{2l+1}N!}}{{\sqrt{(N-l)!(N+l+1)!}}}}
[Y_{l0}(\theta )+Y_{l0}(\pi -\theta )]
+2\sqrt{\frac{{(2N+1)!}}{{4\pi }}}{
\frac{{(\sin \theta )^{N}\cos (N\phi )}}{{2^{N}N!}}}\right\} . \label{WFSC}
\end{eqnarray}
The first term, containing the sums of two spherical harmonics,
corresponds to the individual states $|+,+,\ldots,+\rangle $, and
$|-,-,\ldots,-\rangle $, while the last term arises from the interference
term between the ``living'' and ``dead'' parts of (\ref{SC0}) (the last two
terms of the density operator).
Fig. 1 shows the polar diagram of this Wigner function for $N=5$ atoms. The
two bumps to the ``north'' and ``south'' correspond to the quasiclassical
coherent constituents, while the ripples along the equator -- where the
function takes periodically positive and negative values -- are the result
of interference between the two kets of Eq. (\ref{SC0}).
The factor $\cos (N\phi )$ in (\ref{WFSC}) shows that
the number of negative ``wings''
along the equator is equal to the number of atoms.
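The closed-form expression (\ref{WFSC}) is also straightforward to evaluate directly. The short Python sketch below (our own illustration, with illustrative names) evaluates it on a $(\theta,\phi)$ grid, from which a polar plot such as Fig. 1 can be produced.
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import sph_harm

def w_polar_cat(theta, phi, N):
    """Closed-form Wigner function of the polar cat state (the formula above)."""
    coh = sum(np.sqrt(2 * l + 1) * factorial(N)
              / np.sqrt(factorial(N - l) * factorial(N + l + 1))
              * (sph_harm(0, l, 0.0, theta).real
                 + sph_harm(0, l, 0.0, np.pi - theta).real)
              for l in range(N + 1))
    interference = 2.0 * np.sqrt(factorial(2 * N + 1) / (4 * np.pi)) \
        * np.sin(theta) ** N * np.cos(N * phi) / (2 ** N * factorial(N))
    return 0.5 * np.sqrt((N + 1) / (4 * np.pi)) * (coh + interference)

theta, phi = np.meshgrid(np.linspace(0, np.pi, 91), np.linspace(0, 2 * np.pi, 181))
W = w_polar_cat(theta, phi, N=5)    # the surface shown in Fig. 1
\end{verbatim}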
\begin{figure}
\caption{
Wigner function for a polar cat state, Eq. (\ref{WFSC}), of $N=5$ atoms.}
\label{fig:polcat}
\end{figure}
\section{Nonpolar cat states, and their squeezing properties}
\subsection{Nonpolar cat states}
One can also construct more general SC states by taking the superposition of
any two atomic coherent states. An atomic coherent state (a quasiclassical
state) \cite{ACGT}, $\left| \tau \right\rangle $ is an eigenstate
with the highest eigenvalue $m=j$
of the component of the angular momentum operator
`pointing in the direction ${\bf n}$':
\begin{eqnarray}
\left( {\bf J\cdot n}\right) |\tau \rangle =j|\tau \rangle .
\end{eqnarray}
The notation $\tau $ refers to a specific parametrization of the unit vector
${\bf n}$ by its stereographic projection to the complex plane. It is
connected with the polar angle $\beta $ and the azimuth $\alpha $ of the
direction ${\bf n}$ as $\tau =\tan (\beta /2)e^{-i\alpha }$. The
atomic coherent state can be expanded in terms of the eigenstates
$\left|j,m\right\rangle $ of $J_{z}$ \cite{ACGT}:
\begin{eqnarray}
\nonumber
|\tau \rangle &=&\left( \frac{1}{1+|\tau |^{2}}\right) ^{j}e^{\tau
J_{+}}\left| j,-j\right\rangle
= \sum_{m=-j}^{j}
{2j \choose j+m}^{1/2}
\frac{\tau ^{j+m}}{(1+|\tau |^{2})^{j}}\left| j,m\right\rangle
\nonumber \\
&=&\sum_{m=-j}^{j}
{2j \choose j+m}^{1/2}
\sin ^{j+m}(\beta /2)\cos ^{j-m}(\beta /2)e^{-i(j+m)\alpha }
\left| j,m \right\rangle . \label{COHS}
\end{eqnarray}
The superposition of two quasiclassical coherent states is given by the ket:
\begin{equation}
|\Psi _{12}\rangle ={\frac{{|\tau _{1}\rangle +|\tau _{2}\rangle }}
{\sqrt{2(1+\text{Re}\langle \tau _{1}|\tau _{2}\rangle )}}}. \label{NPSC}
\end{equation}
Recently Agarwal, Puri and Singh \cite{AgPS} and Gerry and Grobe \cite
{G98} have proposed methods to generate such states in a cavity, via a
dispersive interaction with the cavity mode.
\begin{figure}
\caption{
The phase space scheme of a nonpolar cat state, Eq.~(\ref{NPSC}).}
\label{fig:scheme}
\end{figure}
We choose here $\tau _{1}=\tan (\beta /2)$, $\tau _{2}=-\tau _{1}$. Then
$\beta $ is the polar angle of the classical Bloch vector corresponding to
the atomic coherent state ${|\tau _{1}\rangle }$ ($\beta $ is measured
from the south pole), see Fig. 2. This means that the $x$ component of the
expectation value of the dipole moment in these states is proportional to
$\pm (N/2)\sin \beta $, respectively, and the $y$ component is zero. Any
other equal weight superposition of two atomic coherent states can be
obtained from this special choice by an appropriate rotation. The polar cat
state of the previous section corresponds to the special case when the two
points are the northern and southern poles of the Bloch sphere.
If the centres of the two coherent states in question are not
at opposite points of the sphere, then we call their superposition
a ``nonpolar'' cat state.
The corresponding
quasiprobability distribution functions can still be calculated explicitly.
For the Wigner function of the cat state $|\Psi _{12}\rangle $ one gets the
following expression:
\begin{eqnarray}
W(\theta ,\phi ) =
\sqrt{\frac{N+1}{4\pi}}
&& \sum_{K=0}^{2j}\sum_{Q=-K}^{K}{\frac{\sqrt{2K+1}{(2j)!}}
{{2(1+(\cos \beta )^{2j})}}}{\ }
\sum_{m=-j}^{j}{\frac{
(-1)^{j-Q-m}+(-1)^{3j+m}+(-1)^{2j}+(-1)^{2j-Q}}{\sqrt{
(j+m)!(j-m)!(j+Q+m)!(j-Q-m)!}}}
\nonumber \\
&& \times \left(
\begin{array}{ccc}
j & \ K & \ j \\
-m-Q & Q & m
\end{array}
\right) (\sin {\beta /2})^{2(j+m)+Q}(\cos {\beta /2})^{2(j-m)-Q}\
Y_{KQ}(\theta ,\phi ). \label{WFNP}
\end{eqnarray}
We present polar plots of this Wigner function in Fig. 3. for $N=5$ atoms
and for several values of $\beta$.
\begin{figure}
\caption{
Wigner functions for the state $|\Psi_{12}\rangle$ of Eq. (\ref{NPSC}), for $N=5$ atoms and several values of $\beta$.}
\label{fig:genpolcat}
\end{figure}
For small $\beta $ values, the interference is weak and the maximum of the
Wigner function is around $\theta =0$. For larger $\beta $-s the function
has two maxima around $\theta =\pm \beta $, and the interference gets more
pronounced. When $\beta =\pi /2$, the two maxima corresponding to the
individual coherent states point in the $x$ and $-x$ directions,
respectively. In this case we get back the Wigner function of the SC state
of Eq. (\ref{SC0}), rotated around the $y$ axis by $\pi /2$.
\subsection{Squeezing properties}
The expectation values of the dipole operators $J_{x}$ and $J_{y}$ are zero
in the state (\ref{NPSC}) with $\tau _{1}=\tan (\beta /2)$, $\tau _{2}=-\tau
_{1}$, which is a consequence of the mirror symmetry of this state with
respect of both the $x-z$ and the $y-z$ planes. As it is known, the
variances of the dipole operators, $J_{x}$ and $J_{y}$ are equal to each
other in an atomic coherent state:
\begin{equation}
(\Delta J_{x})^{2}|_{\left| \tau \right\rangle }=(\Delta J_{y})^{2}|_{\left|
\tau \right\rangle }=j/2. \label{VCOH}
\end{equation}
In order to calculate the variances in the state (\ref{NPSC}), one can use
directly expansion (\ref{COHS}) and the known matrix elements of
$J_{x}$ and $J_{y}$, but the summations that occur
are rather cumbersome to evaluate. A more effective procedure is to apply
the method of generating functions \cite{ACGT}. All the necessary
expectation values in a cat state can be calculated by the formula:
\begin{eqnarray}
\left[ \left( \frac{\partial }{\partial \xi }\right) ^{a}\left( \frac{
\partial }{\partial \eta }\right) ^{b}\left( \frac{\partial }{\partial \zeta
}\right) ^{c}X_{A}\right] _{\xi =\eta =\zeta =0}=\left\langle \tau
_{1}\right| J_{-}^{a}J_{z}^{b}J_{+}^{c}\left| \tau _{2}\right\rangle ,
\end{eqnarray}
where
\begin{eqnarray}
\nonumber
X_{A}(\xi ,\eta ,\zeta ) & \equiv & \left\langle \tau _{1}\right| e^{\xi
J_{-}}e^{\eta J_{z}}e^{\zeta J_{+}}\left| \tau _{2}\right\rangle
=\frac{
\left\langle -j\right| e^{(\tau _{1}^{*}+\xi )J_{-}}e^{\eta J_{z}}e^{(\tau
_{2}+\zeta )J_{+}}\left| -j\right\rangle }{\{(1+|\tau _{1}|^{2})(1+|\tau
_{2}|^{2})\}^{j}} \nonumber \\
&=&\frac{\{e^{-\eta /2}+e^{\eta /2}(\tau _{1}^{*}+\xi )(\tau _{2}+\zeta )\}^{2j}}
{\{(1+|\tau _{1}|^{2})(1+|\tau _{2}|^{2})\}^{j}}
\end{eqnarray}
is the (antinormally ordered) generating function.
Inserting the necessary operators, we obtain the following expressions
for the variances in the state given by (\ref{NPSC}):
\begin{eqnarray}
(\Delta J_{x})^{2}={\frac{j}{2}}\left( 1+{\frac{(2j-1)\sin ^{2}\beta }{{
1+(\cos \beta )^{2j}}}}\right) ,
\end{eqnarray}
\begin{eqnarray}
(\Delta J_{y})^{2}={\frac{j}{2}}\left( 1-{\frac{(2j-1)(\cos \beta
)^{2j-2}\sin ^{2}{\beta }}{{1+(\cos \beta )^{2j}}}}\right) .
\label{YC}
\end{eqnarray}
Comparing these results with Eq. (\ref{VCOH}), we see that except for some
special cases the $J_{y}$ quadrature is squeezed while the $J_{x}$
quadrature is stretched in this state. The reason for this asymmetry is,
of course, that in the superposition (\ref{NPSC}) we have chosen
states that are both centered at points lying in the $x$-$z$ plane.
One exceptional case in which there is no squeezing is that of a single
atom, $j=1/2$. As is easily seen, for $j=1/2$ any state in the
two-dimensional Hilbert space is a coherent state, and therefore it does not
show squeezing. The two other exceptions are $\beta =0$ for any $j$, because
then the two coherent states coincide, and $\beta =\pi /2$, which is the
rotated version of the polar cat state.
Writing Eq.(\ref{YC}) in the form $(\Delta J_{y})^{2}={j}(1-{\cal{S}})/2$,
we can define the quantity ${\cal{S}}$ as the measure of squeezing.
Analysis shows that if $N$ is large enough, then
the maximum value of ${\cal{S}}$ is $0.56$ and it is achieved around
$\beta _{m}=1.6/\sqrt{N}$. Figure 4 shows the dependence of ${\cal{S}}$
on $\beta $ for several values of $N=2j$.
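The squeezing measure is easy to evaluate and to cross-check numerically. The Python sketch below is our own illustration: it computes ${\cal{S}}(\beta)$ from Eq. (\ref{YC}) and, independently, rebuilds the cat state (\ref{NPSC}) from the expansion (\ref{COHS}) and evaluates $(\Delta J_y)^2$ with standard angular momentum matrices; the two routes should agree.
\begin{verbatim}
import numpy as np
from math import comb

def S_closed_form(beta, N):
    """Squeezing measure S(beta) defined via (Delta J_y)^2 = j(1 - S)/2."""
    j = N / 2.0
    return (2 * j - 1) * np.cos(beta) ** (2 * j - 2) * np.sin(beta) ** 2 \
        / (1.0 + np.cos(beta) ** (2 * j))

def S_from_state(beta, N):
    """The same quantity computed directly from the nonpolar cat state."""
    j = N / 2.0
    k = np.arange(N + 1)                  # k = j + m, with m = -j, ..., j
    m = k - j
    binom = np.array([comb(N, int(kk)) for kk in k], dtype=float)
    def coherent(tau):                    # expansion of |tau> in the |j,m> basis
        return np.sqrt(binom) * tau ** k / (1.0 + abs(tau) ** 2) ** j
    tau1 = np.tan(beta / 2.0)
    psi = coherent(tau1) + coherent(-tau1)
    psi = psi / np.linalg.norm(psi)
    Jplus = np.diag(np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1)), -1)
    Jy = (Jplus - Jplus.T) / 2.0j
    mean = np.real(psi.conj() @ Jy @ psi)
    var = np.real(psi.conj() @ Jy @ Jy @ psi) - mean ** 2
    return 1.0 - 2.0 * var / j

N = 20
beta = 1.6 / np.sqrt(N)
print(S_closed_form(beta, N), S_from_state(beta, N))   # the two values should agree
\end{verbatim}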
\begin{figure}
\caption{
The $\beta$ dependence of the squeezing measure ${\cal{S}}$ for several values of $N=2j$.}
\label{fig:sq}
\end{figure}
\section{Decoherence and dissipation}
As we mentioned in the Introduction, there have already been
realizable methods proposed for the experimental generation of atomic
SC states in a collection of two-level atoms \cite{AgPS,G98}. However,
such an atomic ensemble can never be perfectly isolated from the surrounding
environment. Further, any observation of these states necessarily leads
to the interaction of the atomic system with a measuring apparatus.
In both of these cases the atomic system interacts with a system containing
a large number of degrees of freedom. A possible and successful approach
to this problem \cite{Zu} considers that the static environment continuously
influences the dynamics of the atomic subsystem, which besides exchanging
energy with the environment loses the coherence of its quantum
superpositions and evolves into a classical statistical mixture.
In this section we investigate the decoherence and dissipation of the atomic
Schr\"{o}dinger cat states embedded in an environment with many degrees of
freedom, by writing down the master equation for the reduced
density operator of the atomic subsystem.
We provide the solution for the polar cat states (\ref{SC0}).
\subsection{Model and solution}
We couple our ensemble of two-level atoms to the environment
which is supposed to be a multimode electromagnetic radiation
with photon annihilation and creation operators $a_k$ and $a_k^\dagger$.
Then the interacting system can be described by the following well known
model Hamiltonian which considers dipole interaction and uses the rotating
wave approximation:
\begin{equation}
H=\omega _{\text{a}}J_{z}+ \hbar \sum_{k}\omega _{k}a_{k}^{\dagger
}a_{k}+ \sum_{k}g_{k}\left( a_{k}^{\dagger }J_{-}+a_{k}J_{+}\right) ,
\label{model_ham}
\end{equation}
where $\omega_{\text{a}}$ is the transition frequency between the two atomic
energy levels, the $\omega_k$ denote the frequencies of the modes of the
environment and $g_k$ are the coupling constants.
If we suppose the environment to be in thermal
equilibrium at temperature $T$,
then the time evolution of the atomic subsystem is determined by a master
equation for its reduced density operator $\rho(t)$
\cite{ASp,WM}:
\begin{eqnarray}
\hbar^2
\frac{\text{d}\rho (t) }{\text{d}t} = -\frac{\gamma }{2}
\ (\langle n \rangle+1)\ (J_{+}J_{-}\rho (t)
+\rho (t) J_{+}J_{-}-2J_{-}\rho (t) J_{+})
-\frac{\gamma }{2}
\ \langle n \rangle\ (J_{-}J_{+}\rho (t)
+\rho (t) J_{-}J_{+}-2J_{+}\rho (t) J_{-}) \label{mastereq}
\end{eqnarray}
which involves the usual Born-Markov approximation and is written
in the interaction picture. Here
$\langle n \rangle=\left( \exp \left( \hbar \omega _{\text{a}}/
(k_{\text{B}} T)\right) -1\right) ^{-1}$
is the mean number of photons in the environment and
$\gamma = (g(\omega_{\text{a}})\sigma(\omega_{\text{a}}))^2$ denotes the
damping rate, where $\sigma$ is the mode density of the environment.
Eq. (\ref{mastereq}) can be obtained also in a somewhat different
context, as described in \cite{BSH}. Then one assumes the atomic subsystem
to be placed in a resonant cavity with low quality mirrors causing
the damping of the cavity mode at a rate $\kappa$. Under certain reasonable
assumptions one can get Eq. (\ref{mastereq}) with
$\gamma = 2 \, g(\omega_{\text{a}})^2/\kappa$.
From Eq. (\ref{mastereq})
one can easily deduce the following equations for the matrix elements of the
density operator $\rho _{m,l}(t)\equiv \left\langle j,m\left| \rho
(t)\right| j,l \right\rangle $:
\begin{eqnarray}
\frac{\text{d}\rho _{m,l}(t)}{\text{d}t}& = &
-\frac{\gamma}{2} \, [ \, \langle n \rangle
\left( 2j(j+1)-m(m+1)-l(l+1)\right)
+(\langle n \rangle+1)\left( 2j(j+1)-m(m-1)-l(l-1)
\right) \, ] \, \rho _{m,l} (t)
\nonumber \\
& & + \,\gamma \, \langle n \rangle\, \sqrt{\left(
j(j+1)-m(m-1)\right) \left( j(j+1)-l(l-1)\right) }\ \rho
_{m-1,l-1} (t)
\nonumber \\
& & + \,\gamma \, (\langle n \rangle+1)\, \sqrt{\left( j(j+1)-m(m+1)\right)
\left( j(j+1)-l(l+1)\right) }\ \rho _{m+1,l+1} (t) . \label{dens_mtrx_dyn}
\end{eqnarray}
Thus the time evolution of a particular density matrix element is coupled
only to the two neighbouring elements in the corresponding diagonal for
$\langle n \rangle > 0$,
and only to the neighbour with larger index at zero temperature.
In the case of a polar
cat state (consisting of $N=2j$ atoms), the elements of the density
matrix have zero initial values except for $\rho _{-j,-j},\
\rho _{j,j},\ \rho _{-j,j}$ and $\rho _{j,-j}\ (=\rho _{-j,j}^{*}).$ This
implies that the density matrix elements, except for those in the main
diagonal and for $\rho _{-j,j}$ and $\rho _{j,-j}$, remain identically zero
for any time. Setting $\gamma =1$ (i.e. the time unit is $1/\gamma $) the
equations for the elements in the main diagonal of the density matrix are
the following:
\begin{eqnarray}
\frac{\text{d}\rho _{m,m} (t)}{\text{d}t} &=&
-\left[ \langle n \rangle\, \left( j(j+1)-m(m+1)\right)
+(\langle n \rangle+1)\left( j(j+1)-m(m-1)\right)
\right] \ \rho _{m,m} (t) \nonumber \\
& & + \ \langle n \rangle \, \left( j(j+1)-m(m-1)\right) \
\rho _{m-1,m-1} (t)
\nonumber \\
& & + \ (\langle n \rangle+1)\, \left( j(j+1)-m(m+1)\right)
\ \rho _{m+1,m+1} (t) \label{main_diag_dyn}
\end{eqnarray}
with the initial values $\rho _{m,m}(t=0)=\frac{1}{2}(\delta _{m,j}+\delta
_{m,-j})$ (cf. equation (\ref{SC0})). The dynamics of $\rho _{-j,j}$
is governed by the particularly simple equation:
\begin{equation}
\frac{\text{d}\rho _{-j,j} (t)}{\text{d}t}=
-j(2 \langle n \rangle + 1)\rho _{-j,j} (t) ,
\label{decoh_dyn}
\end{equation}
yielding immediately the following solution with the initial value
$\rho_{-j,j}(0)=1/2$ corresponding to the polar cat state:
\begin{equation}
\rho _{-j,j}(t)=\frac{1}{2}\exp \left( -j(2 \langle n \rangle + 1)t\right).
\label{decoh}
\end{equation}
As expected, the stationary solution of Eq.~(\ref{main_diag_dyn}) is the
Boltzmann distribution of the stationary values $\bar{\rho}_{m,m}$:
\begin{eqnarray}
\bar{\rho}_{m,m} & = &
\exp \left( -(m+j) \frac{\hbar \omega_{\text{a}}}{k_{\text{B}} T} \right)
\frac{ 1-\exp \left(-\hbar \omega_{\text{a}}/(k_{\text{B}} T) \right) }{
1-\exp \left(- (2j+1) \hbar \omega_{\text{a}}/(k_{\text{B}} T) \right) }
= \frac{
\left( \langle n \rangle / (\langle n \rangle + 1 ) \right)^{m+j}
}{ (\langle n \rangle+1)\ \left(1-\left(\langle n \rangle /
(\langle n \rangle + 1)\right)^{2 j+1} \right)}.
\label{stac}
\end{eqnarray}
Approximate analytical time dependent solutions of Eq. (\ref{main_diag_dyn})
can be found in \cite{DG}, especially for the case of superradiance, when
$\rho_{j,j}(0)=1$; see also \cite{BM96} and references therein.
For the initial conditions corresponding to the polar cat state, the time
dependent solution of equations (\ref{main_diag_dyn}) at zero temperature
($\langle n \rangle=0$) can be obtained by the following recursive
integration:
\begin{eqnarray}
\rho _{j,j}(t) &=&\frac{1}{2}\exp \left( -2jt\right) , \nonumber
\\ \label{recursion}
\\
\rho _{m,m}(t) &=&\exp \left( -b_{m}t\right) \left( \frac{1}{2}\ \delta
_{m,-j}+b_{m+1}\int\limits_{0}^{t}\exp \left( b_{m}t^{\prime }\right) \ \rho
_{m+1,m+1}(t^{\prime })\ \text{d}t^{\prime }\right) ,\hspace{1cm}-j\leq m<j
\nonumber
\end{eqnarray}
where $b_{m}=j(j+1)-m(m-1)$. These equations show rather explicitly how
the initial excitation cascades down to the zero temperature stationary state.
For non-zero temperatures ($\langle n \rangle>0$) we have solved equations
(\ref{main_diag_dyn}) numerically. We are going to analyze the solutions
in the next subsection.
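Such a numerical solution requires nothing beyond a standard ODE integrator, since Eq. (\ref{main_diag_dyn}) is a tridiagonal linear system. The following Python sketch is our own illustration (in units of $1/\gamma$, with illustrative names): it integrates the main-diagonal dynamics for the polar-cat initial condition and also evaluates the closed-form coherence (\ref{decoh}).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def polar_cat_dynamics(N, nbar, t_max, num=200):
    """Integrate the equations for rho_{m,m}(t), m = -j,...,j (gamma = 1),
    starting from the polar cat state; also return the closed-form
    coherence rho_{-j,j}(t)."""
    j = N / 2.0
    m = np.arange(-j, j + 1)
    down = j * (j + 1) - m * (m - 1)      # couples rho_{m,m} to rho_{m-1,m-1}
    up = j * (j + 1) - m * (m + 1)        # couples rho_{m,m} to rho_{m+1,m+1}

    def rhs(t, p):
        dp = -(nbar * up + (nbar + 1.0) * down) * p
        dp[1:] += nbar * down[1:] * p[:-1]          # gain from the m-1 element
        dp[:-1] += (nbar + 1.0) * up[:-1] * p[1:]   # gain from the m+1 element
        return dp

    p0 = np.zeros_like(m)
    p0[0] = p0[-1] = 0.5                  # rho_{-j,-j}(0) = rho_{j,j}(0) = 1/2
    ts = np.linspace(0.0, t_max, num)
    sol = solve_ivp(rhs, (0.0, t_max), p0, t_eval=ts, rtol=1e-8, atol=1e-10)
    coherence = 0.5 * np.exp(-j * (2.0 * nbar + 1.0) * ts)
    return ts, sol.y, coherence

ts, diag, coh = polar_cat_dynamics(N=5, nbar=1.0, t_max=3.0)
energy = np.sum(np.arange(-2.5, 3.5)[:, None] * diag, axis=0)  # E(t)/(hbar omega_a)
\end{verbatim}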
\subsection{Characteristic times}
Figure \ref{fig:dmplot} shows the time evolution of the relevant density
matrix elements $\rho_{m,m} (t)$, $m=-j,-j+1,\ldots,j$
(solid lines) and $\rho_{-j,j} (t)$ (dashed line),
in the case of initial polar cat states consisting of 5 and 50 atoms,
for $\langle n \rangle=0,\ 1,\ 10$.
The actual value of
$\rho_{-j,j}(t)$ characterizes the coherence of the corresponding state,
since $\rho_{-j,j}$ and $\rho_{j,-j}$ ($=\rho_{-j,j}^*$) are the only
nonzero matrix elements outside the main diagonal. Their
exponential decay (cf. Eq.~(\ref{decoh})) is the decoherence, shown by the
dashed lines in the plots of Fig. \ref{fig:dmplot}.
Thus it is reasonable to define the characteristic time of the decoherence
by
\begin{eqnarray}
t_{\text{dec}} = \frac{2}{N(2 \langle n \rangle + 1)},
\label{tdec}
\end{eqnarray}
implying
$\rho_{-j,j} (t_{\text{dec}})=\rho_{-j,j}(0)/e$.
In contrast to the simple time dependence of $\rho_{-j,j}(t)$,
the dynamics of the main diagonal elements $\rho_{m,m} (t)$ depend
on the actual value of $\langle n \rangle$ and $N$ rather sensitively.
The zero temperature cases, Fig. \ref{fig:dmplot} (a) and (d), clearly show
the initial excitation, contained in $\rho_{j,j} (0)= 1/2$, cascading
down to $\rho_{-j,-j} (\infty)=1$ as given by Eqs. (\ref{recursion}).
At nonzero temperatures ($\langle n \rangle > 0$)
the time evolution of the $\rho_{m,m} (t)$-s is more complicated because of
the coupling to both neighbours, cf. Eq. (\ref{dens_mtrx_dyn}).
More information can be extracted from the time evolution of the
$\rho_{m,m} (t)$-s by calculating the energy of the atomic subsystem
as the function of time:
\begin{eqnarray}
E(t) \equiv \langle \omega_{\text{a}} \, J_z \rangle (t)
= \omega_{\text{a}} \text{Tr} \left( \rho (t) \, J_z \right)
= \hbar \omega_{\text{a}} \sum_{m=-j}^{j} \, m \, \rho_{m,m} (t).
\label{energy}
\end{eqnarray}
\begin{figure}
\caption{
Plots of the density matrix elements $\rho_{m,m}(t)$ (solid lines), of the coherence $\rho_{-j,j}(t)$ (dashed line), and of the shifted normalized energy $1+E(t)/(-j \hbar \omega_{\text{a}})$ (dotted line), for initial polar cat states of $N=5$ and $N=50$ atoms and $\langle n \rangle=0,\ 1,\ 10$.}
\label{fig:dmplot}
\end{figure}
The process of dissipation (i.e. the change of the energy of the
atomic subsystem in time) can be very easily followed by studying $E(t)$.
This function, normalized to the zero temperature stationary energy and
shifted to vary from 1 to its stationary value:
$1+E(t)/(-j \hbar \omega_{\text{a}})$,
is shown in the plots of Fig.\ \ref{fig:dmplot} by the dotted lines.
Since its asymptotic behavior is exponential-like, it is reasonable to
define the characteristic time of dissipation $t_{\text{diss}}$ by
requiring
\begin{equation}
|E(t_{\text{diss}})-E(\infty)|= |E(0)-E(\infty)|/e.
\end{equation}
In order to ensure that $E(t)$ achieves its stationary
value with a good accuracy in the plots of Fig. \ref{fig:dmplot}
we have set the time range to $5 \, t_{\text{diss}}$.
It is seen that the value of $r=t_{\text{diss}}/t_{\text{dec}}$ grows
with both the temperature and the number of atoms.
A more detailed analysis of this question follows later in this section.
The initial state of the process, the polar cat state,
has sharply non-classical features. On the other hand,
at non-zero temperature the final stationary
state of the present model is a thermal state,
which is classical in its nature. (At zero temperature the stationary
state is also non-classical, since it is the state $|j,-j\rangle$.)
It is natural to ask, when does the transition from the non-classical to
the classical stage occur? What is a good measure of non-classicality
reflecting the change of non-classical nature of the corresponding
state?
The spherical Wigner function (\ref{WFU}) provides a good answer to
both of these questions.
Quantum states are generally considered essentially
non-classical if the corresponding Wigner function also takes on negative
values.
Therefore to answer the second question,
for the measure of the degree of non-classicality we propose to use the
quantity $\nu = 1 - (I_+ - I_-)/(I_+ + I_-)$, where $I_+$ is the integral
of the Wigner function over those domains where it is positive while $I_-$
is the absolute value of the integral of the Wigner function over
the domains where it is negative. Since the integral of the
Wigner function over the sphere is 1, $I_+ - I_- = 1$, thus
$\nu = 2 I_- /(2 I_- + 1)$, and it is easy to see that $0 \leq \nu < 1$.
According to this definition, the bigger the value of $\nu$,
the more non-classical the state, and for all classical states
one has $\nu=0$.
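In practice $\nu$ can be estimated by sampling the Wigner function on a $(\theta,\phi)$ grid and integrating its negative part with the surface element $\sin\theta\,\text{d}\theta\,\text{d}\phi$. The Python sketch below is our own illustration using a crude grid quadrature; \texttt{W} stands for any vectorized routine returning $W(\theta,\phi)$, such as the sketch given in Section II.
\begin{verbatim}
import numpy as np

def non_classicality(W, n_theta=200, n_phi=400):
    """Estimate nu = 2 I_- / (2 I_- + 1) by a simple grid quadrature, where
    I_- is minus the integral of the negative part of W over the sphere.
    W must be a vectorized callable W(theta, phi)."""
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    vals = W(TH, PH) * np.sin(TH)          # integrand including the surface element
    cell = (theta[1] - theta[0]) * (phi[1] - phi[0])
    i_minus = -np.sum(vals[vals < 0.0]) * cell
    return 2.0 * i_minus / (2.0 * i_minus + 1.0)

# e.g. nu = non_classicality(lambda th, ph: wigner_function(rho_t, th, ph)),
# with wigner_function the sketch of Section II and rho_t the density matrix at time t
\end{verbatim}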
Regarding now the first question, namely for how long is the state of the
atomic system non-classical,
we introduce a third kind of characteristic time $t_{\text{ncl}}$.
We define $t_{\text{ncl}}$ to be the time instant when the corresponding
spherical Wigner function becomes non-negative everywhere on the sphere,
i.e. $\nu$ becomes 0.
We will return to this question in connection with the
time evolution of the Wigner function, which we will present in
the next subsection in more detail.
Based on the information provided by the three kinds of
characteristic times, we consider here the dependence of the
process on the number of atoms and on the temperature.
In Figure \ref{fig:cht} we plot $t_{\text{diss}}$ (dashed line),
$t_{\text{dec}}$ (solid line) and $t_{\text{ncl}}$ (dotted line)
as the function of the number of constituent atoms of the polar cat $N$,
for several temperatures, on a log-log scale.
\begin{figure}
\caption{
Plots of the characteristic times of decoherence, $t_{\text{dec}}$ (solid lines), of dissipation, $t_{\text{diss}}$ (dashed lines), and of non-classicality, $t_{\text{ncl}}$ (dotted lines), as functions of the number $N$ of constituent atoms of the polar cat, for several temperatures, on a log-log scale.}
\label{fig:cht}
\end{figure}
It is seen that the characteristic time of decoherence $t_{\text{dec}}$
is inversely proportional to the number of
atoms (the straight solid lines in Fig. \ref{fig:cht}), according to
the definition (\ref{tdec}).
Compared to this, the characteristic time of non-classicality
$t_{\text{ncl}}$ decreases less rapidly with increasing number of atoms.
The characteristic time of dissipation $t_{\text{diss}}$ first slightly
increases at non-zero temperature then it achieves a maximum which depends
on $\langle n \rangle$ and finally it decreases nearly inversely
proportional to the number of atoms.
The values of $t_{\text{diss}}$ at different temperatures seem to converge
slowly beyond a certain number of atoms.
It is, however, rather surprising that the ratio
$t_{\text{diss}}/t_{\text{dec}}$ is not as large as such a quantity is
usually expected to be \cite{Zu,large}:
for $N=1000$ it is 4.04 at $\langle n \rangle=0$, and
still only around 350 at $\langle n \rangle=100$.
(Note that $\langle n \rangle=100$
corresponds to a temperature of 250~K in the case of typical
experiments \cite{BrHR}.)
Moreover, the ratio $t_{\text{diss}}/t_{\text{dec}}$ hardly varies
with increasing $N$ beyond the maximum of $t_{\text{diss}}$
mentioned above.
Thus the process of decoherence is extremely slow in the case of a polar
cat state which is coupled to the environment by an interaction leading to
the master equation (\ref{mastereq}).
Similar effects have already been reported for other physical systems
earlier \cite{similar}.
In a recent work Braun, Braun and Haake \cite{BBH}
investigated the decoherence of an atomic SC state
$| \tau_1 \rangle + | \tau_2 \rangle$ based on Eq. (\ref{mastereq})
for zero temperature.
By evaluating a certain quantity characterising the decoherence rate
at the {\it initial} time, and applying a semiclassical procedure for
finite times
they concluded that for atomic SC states with $\tau_1 \tau_2^* = 1$
the decoherence slows down.
Our initial state, the polar cat state, fulfils this condition.
The results presented in Fig. \ref{fig:cht} derive from the
solution of the master equation for the whole process.
They are in agreement with the statements of Ref.~\cite{BBH}, where the
initial stage of the decoherence is analyzed for the case of zero
temperature.
\subsection{Wigner functions}
We illustrate now the process of decoherence and dissipation
using the spherical Wigner function (\ref{WFU}).
\begin{figure}
\caption{
Polar plots of the temporal change of the Wigner function representing the
decoherence and
dissipation of the initial polar cat state (\ref{SC0}) of $N=5$ atoms at
$\langle n \rangle = 0$.}
\label{fig:wfN5n0}
\end{figure}
In order to obtain its time dependence
we have to calculate first the characteristic matrix
$\varrho _{K,Q}(t)=\text{Tr} \left( \rho (t)T_{K,Q}^\dagger \right) $
from the matrix elements $\rho_{m,l}(t)$ according to
\begin{equation}
\varrho _{K,Q}(t)=\sqrt{2K+1}\sum_{m=-j}^{j}(-1)^{j-m}\left(
\begin{array}{ccc}
j & K & j \\
-m & Q & m-Q
\end{array}
\right) \rho _{m,m-Q}(t).
\label{rhotrf}
\end{equation}
From Eq. (\ref{rhotrf}) it can be seen that only
$\varrho _{K,0}$ ($K=0,1,\ldots,N$) and
$\varrho _{N,N}=(-1)^{N} (\varrho _{N,-N})^{*}$ are nonzero.
This fact (which is due to the initial conditions specified by the
polar cat state)
ensures that the azimuthal dependence of the spherical Wigner function
is determined only by the real part of the spherical harmonic $Y_{N,N}
(\theta,\phi)$ which is proportional to $\cos (N \phi)$.
Therefore the Wigner function keeps its initial azimuthal symmetry
during the whole process.
Further, since $\varrho _{N,N} (t) =(-1)^{N} \rho _{j,-j} (t)$,
the azimuthal modulation of the spherical Wigner function explicitly
shows the degree of the coherence of the actual state.
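As an illustration of Eq. (\ref{rhotrf}), the following short Python sketch
(ours) evaluates $\varrho_{K,Q}$ with SymPy's \texttt{wigner\_3j}, taking as
input an equal superposition of the Dicke states $|j,j\rangle$ and
$|j,-j\rangle$ for $N=2$ atoms (our illustrative stand-in for a polar cat
state); for this state the printed $\varrho_{N,N}$ equals
$\rho_{j,-j}=1/2$, in line with the relation quoted above.
\begin{verbatim}
from sympy import Rational, sqrt, N
from sympy.physics.wigner import wigner_3j

# Illustrative state of n_atoms = 2 two-level atoms (j = 1):
# rho = |psi><psi| with |psi> = (|j,j> + |j,-j>)/sqrt(2).
n_atoms = 2
j = Rational(n_atoms, 2)
ms = [j - a for a in range(n_atoms + 1)]
rho = {(m, mp): 0 for m in ms for mp in ms}
for m in (j, -j):
    for mp in (j, -j):
        rho[(m, mp)] = Rational(1, 2)

def varrho(K, Q):
    """Characteristic matrix element varrho_{K,Q} = Tr(rho T_{K,Q}^dagger)."""
    total = 0
    for m in ms:
        if abs(m - Q) > j:
            continue              # rho_{m, m-Q} lies outside the Dicke ladder
        total += ((-1)**int(j - m) * wigner_3j(j, K, j, -m, Q, m - Q)
                  * rho[(m, m - Q)])
    return N(sqrt(2 * K + 1) * total)

print(varrho(0, 0), varrho(1, 0), varrho(n_atoms, n_atoms))
\end{verbatim}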
Figures \ref{fig:wfN5n0} and \ref{fig:wfN5n10} show the polar plots of
the spherical Wigner function transforming its shape in time
at $\langle n \rangle = 0$ and $\langle n \rangle = 10$,
respectively. The initial state is a polar cat made of $N=5$
atoms and its Wigner function is shown in Fig.~\ref{fig:polcat}.
\begin{figure}
\caption{
Polar plots of the temporal change of the Wigner function representing the
decoherence and
dissipation of the initial polar cat state (\ref{SC0}) of $N=5$ atoms at
$\langle n \rangle = 10$.}
\label{fig:wfN5n10}
\end{figure}
In Figures \ref{fig:wfN5n0} and \ref{fig:wfN5n10} the following main
characteristics of the
process can be identified. The decoherence is shown by the
decreasing and finally disappearing ripples along the equator.
The vanishing of non-classicality, i.e. the decrease of the parameter
$\nu$, can be easily recognized as the decrease of the negative (dark)
wings. At nonzero temperature they disappear exactly at $t_{\text{ncl}}$,
as shown in Fig. \ref{fig:wfN5n10}(c).
The dissipation is represented by the approach of the initial upper and
lower bumps to each other.
At zero temperature (Fig. \ref{fig:wfN5n0}) the upper bump disappears,
merging into the lower one. This stationary shape of the Wigner
function corresponds to the lowest coherent state $|j,-j\rangle$ \cite{DAS}.
For $\langle n \rangle = 10$, when
the stationary energy is close to the initial energy, not only the upper
bump moves downwards but also the lower one lifts upwards. The stationary
Wigner function is nearly spherically symmetric, although its center
is not at the origin.
In agreement with Fig. \ref{fig:cht}, the plots of Figures \ref{fig:wfN5n0}
and \ref{fig:wfN5n10} show that the timescales of decoherence and
dissipation are very close to each other in the case of 5 atoms;
at zero temperature they are practically the same. Consequently, the
spherical Wigner function still exhibits considerable azimuthal modulation
(ripples) at $t_{\text{diss}}$.
Let us finally return to the question of determining the characteristic
time of non-classicality $t_{\text{ncl}}$.
According to the arguments given after Eq. (\ref{rhotrf}), it is sufficient
to study the Wigner function within a $\phi$-range
of length $2\pi/N$, e.g. $0\leq \phi \leq 2\pi/N$, because it is invariant
with respect to rotations by $\phi=k {2\pi\over N}$ $(k=1,2, \dots, N)$,
i.e. it has $C_{N}$ symmetry at all times. Therefore the spherical Wigner
function
of a polar cat state, while subject to dissipation and decoherence, always
has its minimum value at $\phi = \pi/N$.
Thus in order to calculate $t_{\text{ncl}}$,
it is sufficient to follow the
time evolution of the section $W(\theta,\phi=\pi/N)$.
Further, in connection with the calculation of the measure of
non-classicality $\nu$, it is sufficient to consider the above mentioned
$\phi$-range when evaluating the integrals $I_+$ and $I_-$.
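The following toy Python sketch (ours; the function below is a purely
illustrative stand-in, not the Wigner function of the present model) shows
how $t_{\text{ncl}}$ can be read off numerically from the section
$W(\theta,\phi=\pi/N)$: one records the first time at which the section is
non-negative for all $\theta$.
\begin{verbatim}
import numpy as np

def W_section(theta, t):
    # Toy stand-in for W(theta, phi = pi/N) at time t: a negative dip
    # that decays exponentially (illustrative only).
    return 0.3 * np.sin(theta) - 0.5 * np.exp(-2.0 * t) * np.sin(theta)**4

def t_ncl(t_grid, theta_grid):
    # First grid time at which the section is non-negative everywhere.
    for t in t_grid:
        if np.min(W_section(theta_grid, t)) >= 0.0:
            return t
    return None

theta = np.linspace(0.0, np.pi, 721)
print(t_ncl(np.linspace(0.0, 5.0, 2001), theta))
\end{verbatim}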
\section{Conclusions}
We have considered a class of states in an ensemble of two-level atoms,
a superposition of two distinct atomic coherent states which are called
atomic Schr\"{o}dinger cat states.
According to the relative positions of the constituents we have defined
polar and nonpolar cat states. We have investigated their properties
based on the spherical Wigner function, which has proven to be a
convenient tool for investigating quantum interference effects.
We have shown that nonpolar cat states generally exhibit squeezing,
for which we have introduced the measure ${\cal S}$.
The squeezing depends on the separation of the components
of the cat and on the number of atoms the cat consists of.
By solving the master equation of this system embedded in an external
environment we have determined the characteristic times of decoherence,
dissipation and non-classicality of an initial polar cat state.
We have shown how these depend on the number of the microscopic
elements the cat consists of, and on the temperature of the environment.
Our results show that the decoherence of the polar cat state is
surprisingly slow: $t_{\text{diss}}/t_{\text{dec}}$
is less than half an order of magnitude for zero temperature,
making these states potentially significant in several areas of quantum
physics, e.g. experimental studies of decoherence, quantum computing and
cryptography.
We have visualised the process, governed by the interaction with
the external environment, using the spherical Wigner function.
Its transformation in time reflects the characteristics of the behaviour
of the atomic subsystem in a suggestive way.
\section*{Acknowledgments}
The authors thank L.~Di\'{o}si, T.~Geszti, F.~Haake, J.~Janszky and
W.~P.~Schleich for enlightening discussions on the subject,
and Cs.~Benedek for his help in figure plotting.
One of the authors, A.~C. is grateful to the DAAD for financial
support.
This work was supported by the Hungarian Scientific Research Fund
(OTKA) under contracts T022281, F023336 and M028418.
\begin{references}
\bibitem[*]{MGBemail}E-mail address: [email protected]
\bibitem[\star]{ACemail}E-mail address: [email protected]
\bibitem{MMKW} C. Monroe, D. M. Meekhof, B. E. King, D. J. Wineland,
{Science} {\bf 272}, 1131 (1996)
\bibitem{BrHR} M. Brune, E. Hagley, J. Dreyer, X. Ma\^{\i}tre, A. Maali,
C. Wunderlich, J. M. Raimond, and S. Haroche, Phys. Rev. Lett. {\bf 77},
4887 (1996)
\bibitem{YS86} B. Yurke, D. Stoler, Phys. Rev. Lett. {\bf 57}, 13 (1986)
\bibitem{JJ90} {J. Janszky, A. Vinogradov,} {Phys. Rev. Lett} {\bf 64},
{2771} (1990)
\bibitem{SPL91} W. Schleich, M. Pernigo, F. LeKien, Phys Rev. A {\bf 44},
2172 (1991)
\bibitem{BMKP92} V. Buzek, H. Moya-Cessa, P. L. Knight, S. D. L. Phoenix,
Phys. Rev A {\bf 45}, 5193 (1992)
\bibitem{CZ94} {J. Cirac, P. Zoller,} {Phys. Rev. A} {\bf 50}, {R2799}
(1994)
\bibitem{D54} {R. H. Dicke,} {Phys. Rev.} {\bf 93}, {99} (1954)
\bibitem{GH82} M. Gross, S. Haroche, Phys. Rep. {\bf 93}, 301 (1982)
\bibitem{BM96} {M. G. Benedict, A. M. Ermolaev, V. A. Malyshev,
I. V. Sokolov, E. D. Trifonov}, {\it Superradiance} (IOP, Bristol, 1996)
\bibitem{AgPS} G. S. Agarwal, R. R. Puri, R. P. Singh, Phys. Rev A
{\bf 56}, 2249 (1997)
\bibitem{G98} C. C. Gerry, R. Grobe, Phys. Rev A {\bf 57}, 2247 (1998)
\bibitem{FBLSS97} M. Freyberger, P. Bardroff, C. Leichtle, G. Schrade, W.
Schleich, Physics World {\bf 10}, (11) 41 (1997)
\bibitem{LMKMIW96} D. Leibfried, D. M. Meekhof, B. E. King, C. Monroe, W.
M. Itano, D. J. Wineland, Phys. Rev. Lett. {\bf 77}, 4281 (1996)
\bibitem{St} {R. L. Stratonovich,} {Sov. Phys. JETP} {\bf 31}, 1012 (1956)
\bibitem{ACGT} {F. Arecchi, E. Courtens, R. Gilmore, H. Thomas,} {Phys.
Rev. A} {\bf 6}, {2221} (1972)
\bibitem{Ag} {G. S. Agarwal,} {Phys. Rev. A} {\bf 24}, {2889} (1981)
\bibitem{G76} {R. Gilmore} {J. Phys. A: Math. Gen.} {\bf 9}, {L65} (1976)
\bibitem{VG} {J. C. V\'{a}rilly, J. M. Gracia-Bondia,} {Ann. Phys. (NY)}
{\bf 190}, {107} (1989)
\bibitem{SW94} M. O. Scully, K. W\'{o}dkiewicz, {Found. Phys.} {\bf 24},
{85} (1994)
\bibitem{CG} K. E. Cahill, R. J. Glauber, Phys. Rev. {\bf 177}, 1857 and
1882 (1969)
\bibitem{AgW} G. S. Agarwal, E. Wolf, Phys. Rev. D {\bf 2} 2161, 2187,
and 2206 (1970)
\bibitem{FBC98} P. F\"{o}ldi, M. G. Benedict and A. Czirj\'{a}k, Acta Phys.
Slovaca {\bf 48}, 335 (1998)
\bibitem{Wolf} N. M. Atakishiyev, S. M. Chumakov, K. B. Wolf,
J. Math. Phys. {\bf 39} 6247 (1998)
\bibitem{Brif} C. Brif, A. Mann, J. Phys. A: Math. Gen. {\bf 31}, L9
(1998);
Phys. Rev. A. {\bf 59}, 971 (1999)
\bibitem{CB96} A. Czirj\'{a}k, M. G. Benedict, Quantum Semiclass. Opt.
{\bf 8}, 975 (1996)
\bibitem{BL} {L. C. Biedenharn, J. D. Louck,} {\it Angular Momentum in
Quantum Physics} (Addison-Wesley, Reading, MA, 1981)
\bibitem{DAS} {J. Dowling, G. S. Agarwal, W. P. Schleich,} {Phys. Rev. A}
{\bf 49}, {4101} (1994)
\bibitem{BCAS} {M. G. Benedict, A. Czirj\'{a}k, Cs. Benedek,} Acta Physica
Slovaca {\bf 47}, 259 (1997)
\bibitem{ASp} G. S. Agarwal, in Springer Tracts in Modern Physics, Vol 70.
(Springer, Berlin, 1974)
\bibitem{WM} {D. F. Walls, G. J. Milburn,} {\it Quantum Optics} (Springer,
Berlin, 1994)
\bibitem{BSH} R. Bonifacio, P. Schwendimann and F. Haake, Phys. Rev. A
{\bf 4}, 302 and 854 (1971)
\bibitem{DG} V. Degiorgio, F. Ghielmetti, Phys. Rev. A {\bf 4}, 2415
(1971);
S. Haroche, in {\it New trends in atomic physics},
Les Houches Summer School Lecture Notes, session 38.
Eds. R. Stora, G. Grynberg,
(North Holland, Amsterdam, 1984)
\bibitem{Zu}
W. H. Zurek, Phys. Rev. D {\bf 24}, 1516 (1981);
\bibitem{large}
A. O. Caldeira, A. J. Leggett, Ann. Phys. (NY) {\bf 149}, 374 (1983);
D. F. Walls and G. J. Milburn, Phys. Rev. A {\bf 31}, 2403 (1985);
F. Haake, D. F. Walls, Phys. Rev. A {\bf 36}, 730 (1987);
F. Haake, M. \.{Z}ukowski, Phys. Rev. A {\bf 47}, 2506 (1993);
W. T. Strunz, J. Phys. A: Math. Gen. {\bf 30}, 4053 (1997);
\bibitem{similar}
C. C. Gerry, E. E. Hach III, Phys. Lett. A {\bf 174}, 185 (1993);
B. R. Garraway, P. L. Knight, Phys. Rev. A {\bf 49}, 1266 (1994);
Phys. Rev. A {\bf 50}, 2548 (1994);
R. L. de Matos Filho, W. Vogel, Phys. Rev. Lett. {\bf 76}, 608 (1996);
J. F. Poyatos, J. I. Cirac, P. Zoller, Phys. Rev. Lett. {\bf 77}, 4728 (1996);
D. A. Lidar, I. L. Chuang, K. B. Whaley, Phys. Rev. Lett. {\bf 81}, 2594 (1998)
\bibitem{BBH} D. Braun, P. A. Braun and F. Haake,
{\it Slow Decoherence of Superpositions of Macroscopically Distinct States,}
in Proceedings of the 1998 Bielefeld conference on
'Decoherence: Theoretical, Experimental, and Conceptual Problems',
(Springer, Berlin); see also the Los Alamos e-print: quant-ph/9903040
\end{references}
\end{document}
|
\begin{document}
\title{On $q$-scale functions of spectrally negative compound Poisson processes}
\author{Anita Behme\thanks{Technische Universit\"at
Dresden, Institut f\"ur Mathematische Stochastik, Zellescher Weg 12-14, 01069 Dresden, Germany, \texttt{[email protected]} and \texttt{[email protected]}, phone: +49-351-463-32425, fax: +49-351-463-37251.}\; and David Oechsler$^\ast$}
\date{\today}
\maketitle
\begin{abstract}
Scale functions play a central role in the fluctuation theory of spectrally negative Lévy processes. For spectrally negative compound Poisson processes with positive drift, a new representation of the $q$-scale functions in terms of the characteristics of the process is derived. Moreover, similar representations of the derivatives and the primitives of the $q$-scale functions are presented. The obtained formulae for the derivatives allow for a complete exposure of the smoothness properties of the considered $q$-scale functions. Some explicit examples of $q$-scale functions are given for illustration.
\end{abstract}
2020 {\sl Mathematics subject classification.} 60G51, 60J45 (primary), 91B05 (secondary)\\
{\sl Keywords:} compound Poisson process; q-scale functions; spectrally one-sided Lévy process; two-sided exit problem
\section{Introduction}\leftabel{S0}
\setcounter{equation}{0}
Among Lévy processes those with spectrally negative jump measures play a special role in applied mathematics. On the one hand this is due to the fact that for many real life scenarios in queuing, risk, finance, etc. it is intrinsically reasonable to allow only jumps in one direction. On the other hand spectrally negative Lévy processes offer mathematical advantages when considering exit problems and the related \(q\)-scale functions. Recall that for a spectrally negative Lévy process \((L_t)_{t\geq 0}\) with Laplace exponent \(\psi\) and for \(q\geq0\) the \(q\)-scale function \(W^{(q)}:\mathbb{R}_+\to\mathbb{R}\) is defined as the unique function such that
\begin{equation}\leftabel{eq-def-Wq}
{i\mkern1mu}nt_0^{i\mkern1mu}nfty \e^{-\beta x} W^{(q)}(x) \diff x = \frac1{\psi(\beta)-q}
\end{equation}
holds for all \(\beta>\sup\{y:\psi(y)=q\}=:\mathbb{P}hi(q)\). These \(q\)-scale functions get their name from the related scale functions of regular diffusions and indeed, just as for their counterparts, many fluctuation identities of spectrally one-sided Lévy processes may be expressed in terms of \(q\)-scale functions, see e.g. \cite{Bertoin97, emery, rogers1, rogers2}, or \cite{doney, kuznetsov2011theory} and references therein.
However, being defined as an inverse Laplace transform, analytic or even closed-form expressions of $q$-scale functions are available only in exceptional cases; see e.g. \cite{hubalek2010} for a collection of such cases.
Although there are practical ways to evaluate \(q\)-scale functions numerically, as exposed e.g. in \cite{kuznetsov2011theory}, or to approximate them, as done e.g. in \cite{Egami2010PhasetypeFO}, it is not only of theoretical interest to expand the library of cases with analytic expressions. Since \(q\)-scale functions are intermediate objects, in the sense that one is usually interested not in the \(q\)-scale functions per se but rather in expressions derived from them, evaluating them numerically both hinders error analysis and amplifies error propagation, e.g. when the jump measure has to be estimated, which is the case in most applications.
In this article, after setting the stage with some preliminaries given in Section \rightef{S2}, in Section \rightef{S3} we provide and prove our main Theorem \rightef{main}. This states that the \(q\)-scale functions of a spectrally negative compound Poisson process with drift $c >0$, intensity $\leftambda>0$ and jump distribution $\mathbb{P}i(-\diff s)$ can be represented as
\begin{align}\leftabel{mainform}
W^{(q)}(x)=\frac1c \sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt_{[0,x]} g_k(s,x) ~\mathbb{P}i^{*k}(\diff s), \quad x\geq 0,
\end{align}
where
\begin{equation}\leftabel{g}
g_k(s,x)=\frac{(\leftambda/c)^k }{k!}\e^{-\frac{q+\leftambda}{c}(s-x)}(s-x)^k,\quad s, x\geq 0,
\end{equation}
and $\mathbb{P}i^{\ast k}$ denotes the $k$-th convolution power of $\mathbb{P}i$.
Note that \eqref{mainform} reduces to a closed-form expression if the jump measure is discrete. In that particular case our result may be regarded as a generalization of the \(0\)-scale function
\begin{align}\leftabel{Erlang}
W^{(0)}(x)=\frac{1}{c}\sum_{s=0}^{\leftfloor x \rightfloor} \frac{\e^{-\frac\leftambda c (s-x)} }{s!}\left(\frac{\leftambda}{c}\right)^s\left(s-x\right)^s, \quad x\geq 0,
\end{align}
of \(L_t=c t - N_t\), $t\geq 0$, where \(c>0\) and \(N_t\) is a Poisson point process with intensity \(\leftambda>0\). Formula \eqref{Erlang} can be found in \cite{hubalek2010} or \cite{asmussenalbrecher} and dates all the way back to Erlang \cite{Erlang1909}, even though the notions of neither scale functions nor Lévy processes were present then. \\
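For a quick numerical illustration, the following Python sketch (ours; the values of $c$ and $\leftambda$ are arbitrary) tabulates the right-hand side of \eqref{Erlang}.
\begin{verbatim}
import math

def W0(x, c=1.0, lam=0.5):
    """0-scale function (Erlang) of L_t = c*t - N_t, Poisson rate lam."""
    if x < 0:
        return 0.0
    return sum(math.exp(-(lam / c) * (s - x)) * (lam / c)**s * (s - x)**s
               / math.factorial(s)
               for s in range(int(math.floor(x)) + 1)) / c

for x in (0.0, 0.5, 1.0, 2.5, 5.0):
    print(x, W0(x))
\end{verbatim}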
At first glance the representation \eqref{mainform} resembles the form of the $0$-scale function one obtains via the so-called Pollaczek-Khinchine formula (cf. \cite[IV (2.2)]{asmussenalbrecher}, \cite[Thm. 1.8]{Kyprianou2014} or \cite[Thm. 1.3]{KyprianouGS}), a prominent statement in actuarial mathematics. Namely, as long as the jumps $\{\xi_i,i{i\mkern1mu}n \mathbb{N}N\}$ of the considered compound Poisson process have finite mean $\mu$ such that $\leftambda \mu>c$, one can show that (see e.g. \cite[Eq. (4.12)]{KyprianouGS})
\begin{equation*}
W(x)= \frac{1}{c} \sum_{k=0}^{i\mkern1mu}nfty \left( \frac{\leftambda \mu}{c} \right)^k \mathbb{P}i_I^{\ast k}(x), \quad x\geq 0,
\end{equation*}
where $\mathbb{P}i_I(\diff x)=\mu^{-1}\mathbb{P}i((x,{i\mkern1mu}nfty)) \diff x$ is the integrated tail function of the jump distribution. However, besides being valid for all $q\geq 0$, \eqref{mainform} does not require an assumption like $\leftambda \mu>c$. Moreover, it relies on convolutions of the jump distribution itself instead of its integrated tail function.
For many purposes it is essential to know not only the \(q\)-scale functions \(W^{(q)}\) but also their primitives and - in case they exist - their derivatives, see e.g. \cite[Sec. 1]{kuznetsov2011theory} for a collection of applications that stem from various areas of applied probability theory.
The question of smoothness is of theoretical interest as well (cf. \cite{chankypsavov, kyprivsong} and \cite[Sec. 3.5]{kuznetsov2011theory} for more on that topic) as one can interpret \(W^{(q)}\) as an eigen-function of the infinitesimal generator \(\mathcal{A}\) of \((L_t)_{t\geq0}\), i.e.
\[
(\mathcal{A}-q)W^{(q)} = 0
\]
in some sense. However, this equation is to be treated cautiously as it is not clear whether \(W^{(q)}\) is in the domain of \(\mathcal{A}\) or not. In Section \rightef{S41} we present representation formulae for all existing derivatives of $W^{(q)}$. These immediately confirm Doney's conjecture \cite[Conj. 3.13]{kuznetsov2011theory}, which states in our setting (i.e. for spectrally negative compound Poisson processes) that
\begin{align}\leftabel{equivalence}
W^{(q)}{i\mkern1mu}n\mathcal{C}^{n+1}(0,{i\mkern1mu}nfty) \Leftrightarrow \overline{\mathbb{P}i}{i\mkern1mu}n\mathcal{C}^{n}(0,{i\mkern1mu}nfty)
\end{align}
for \(n{i\mkern1mu}n\mathbb{N}\), where \(\overline{\mathbb{P}i}(x)=\mathbb{P}i((-{i\mkern1mu}nfty, x])\) is the cumulative distribution function of \(\mathbb{P}i\). Moreover, we show that for absolutely continuous measures $\mathbb{P}i$ the equivalence \eqref{equivalence} is a local property, i.e. \( (W^{(q)})^{(n+1)}(x_0)\) exists (and is continuous) for \(x_0{i\mkern1mu}n\mathbb{R}_+\) if and only if \(\overline{\mathbb{P}i}^{(n)}(x_0)\) exists (and is continuous). With this we slightly improve the corresponding result \cite[Thm. 3]{chankypsavov}. \newline
A representation formula for the primitive of \(W^{(q)}\) is derived in Section \rightef{S42}, while in Section \rightef{S43} we shall rewrite our representations of $W^{(q)}$, its first derivative, and its primitive in the form of $\mathbb{E}E^x[f_{q,\leftambda, c}(L_1, N_1)]$ for appropriate functions $f_{q,\leftambda, c}$ and with $N_1$ being the number of jumps of $(L_t)_{t\geq 0}$ up to time $1$.
The final Section \rightef{S5} is devoted to some new examples of explicitly computable $q$-scale functions.
\section{Preliminaries} \leftabel{S2}
\setcounter{equation}{0}
Throughout the paper $(L_t)_{t\geq 0}$ denotes a spectrally negative Lévy process with Laplace exponent
\begin{equation}\leftabel{eq-laplaceexp}
\psi(u)= \leftog\mathbb{E}E[\e^{uL_1}] = \tilde{c} u + \frac12 \sigma^2 u^2 + {i\mkern1mu}nt_{(-{i\mkern1mu}nfty,0)} (\e^{ux}-1- ux\mathds{1}_{x> -1})\tilde{\mathbb{P}i}(\diff x),
\end{equation}
with $\tilde{c}{i\mkern1mu}n\mathbb{R}R$, $\sigma^2\geq 0$ and a Lévy measure $\tilde{\mathbb{P}i}$ concentrated on $(-{i\mkern1mu}nfty,0)$.
We first note that rescaling the given Lévy process corresponds to a rescaling of the associated $q$-scale functions, as shown by the following lemma, which follows immediately from the definition of the $q$-scale functions in \eqref{eq-def-Wq} and the form of the Laplace exponent \eqref{eq-laplaceexp}.
\begin{lemma}\leftabel{scaling}
Let \(\mathbb{V}\mathrm{ar}\,epsilon>0\). For any \(q\geq0\) the \(q\)-scale function of \((L_t)_{t\geq 0}\) is given by
\[
W^{(q)}(x)=\frac1\mathbb{V}\mathrm{ar}\,epsilon \hat{W}^{(q)} \left(\frac x\mathbb{V}\mathrm{ar}\,epsilon\right), \quad x\geq 0,
\]
where \(\hat{W}^{(q)}\) is the \(q\)-scale function of the Lévy process $(\hat{L}_t)_{t\geq 0}$ given by \(\hat{L}_t:=\frac1\mathbb{V}\mathrm{ar}\,epsilon L_t,~t\geq0\).
\end{lemma}
In a similar manner the next lemma allows us to adjust the drift by a proper rescaling of the scale functions. It is an immediate consequence of the definition of the $q$-scale function in \eqref{eq-def-Wq} and the fact that $\leftog \mathbb{E}E[\e^{uL_t}]=t\psi(u)$.
\begin{lemma}\leftabel{scaling1}
Assume $\tilde{c}> 0$. Then for any \(q\geq0\) the \(q\)-scale function of \((L_t)_{t\geq 0}\) is given by
\[
W^{(q)}(x)= \frac{1}{\tilde{c}} \hat{W}^{( q/\tilde{c})}(x),\quad x\geq 0,
\]
where \( \hat{W}^{(q/\tilde{c})}(x), x\geq 0, \) is the \(q/\tilde{c}\)-scale function of the subordinated process $(\hat{L}_t)_{t\geq 0}$ given by \(\hat L_t := L_{t/\tilde{c}},~t\geq0\).
\end{lemma}
We are especially interested in spectrally negative compound Poisson processes for which \eqref{eq-laplaceexp} simplifies to
\begin{equation*}
\psi(u)= \leftog\mathbb{E}E[\e^{uL_1}] = c u + \leftambda {i\mkern1mu}nt_{(0,{i\mkern1mu}nfty)} (\e^{-ux}-1)\mathbb{P}i(\diff x),
\end{equation*}
where we call $c$ the \emph{drift}, $\leftambda\geq 0$ the \emph{intensity}, and $\mathbb{P}i(\diff x)$ the \emph{jump measure} describing the negative of the jumps of $(L_t)_{t\geq 0}$. We will thus consider
\begin{equation}\leftabel{eq-specialtype}
L_t=c t- \sum_{i=1}^{N_t} \xi_i, \quad t\geq 0,
\end{equation}
for a Poisson process $(N_t)_{t\geq 0}$ with intensity $\leftambda>0$ and i.i.d. non-negative random variables $\xi_i,i\geq1$ with distribution $\mathbb{P}i$. In view of Lemma \rightef{scaling1} we will often set $c=1$.
Whenever we consider a shifted version of the Lévy process starting in $x{i\mkern1mu}n\mathbb{R}R$ we use the notation $\mathbb{P}^x$ for the induced probability measure and $\mathbb{E}E^x$ for the corresponding expectation. For $x=0$ we set $\mathbb{P}:=\mathbb{P}^0$ and $\mathbb{E}E=\mathbb{E}E^0$.
We abbreviate the natural numbers including $0$ as $\mathbb{N}N=\{0,1,2,\leftdots\}$, and the real half-line as $\mathbb{R}R_+=[0,{i\mkern1mu}nfty)$.
Integrals of the form ${i\mkern1mu}nt_a^b$ are to be understood as ${i\mkern1mu}nt_{[a,b]}$. Partial derivatives are abbreviated as $\partial_x:=\frac{\partial}{\partial x}$, while $\partial_{x,\pm}$ denote left- and right derivatives with respect to $x$. In case of only one variable, we may omit the $x$ and simply use $\partial$, or $\partial_{\pm}$, for the (directional) derivative.
Finally, throughout the paper we shall use the standard floor and ceiling functions $\leftfloor x \rightfloor := \max \{y{i\mkern1mu}n\mathbb{Z}~:~y\lefteq x \}$ and $\leftceil x \rightceil := \min \{y{i\mkern1mu}n\mathbb{Z} ~:~ y\geq x\}$.
\section{A representation of \(W^{(q)}\)}\leftabel{S3}
\setcounter{equation}{0}
As mentioned in the introduction, our main result reads as follows.
\begin{theorem}\leftabel{main}
For any \(q\geq0\) the \(q\)-scale function of the spectrally negative compound Poisson process in \eqref{eq-specialtype} is given by \eqref{mainform}.
\end{theorem}
To prove this theorem in the forthcoming subsections, we choose a dense subclass of processes for which we can explicitly compute the scale functions. The expressions we obtain will then be extended to the general setting of the theorem by approximation arguments.
\subsection{A recursion formula}
We start by proving a recursion formula for $q$-scale functions of spectrally negative compound Poisson processes whose jump measures are bounded away from zero.
\begin{lemma}\leftabel{recursivformula1}
Let \((L_t)_{t\geq 0}\) be a spectrally negative compound Poisson process as in \eqref{eq-specialtype} with drift $c=1$. Let further \(J:=\mathrm{supp}(\mathbb{P}i) \subset [1,{i\mkern1mu}nfty)\) and fix \(q\geq0\). Then the \(q\)-scale function \(W^{(q)}\) of \((L_t)_{t\geq 0}\) fulfils
\begin{align}\leftabel{recurse}
W^{(q)}(x) = \e^{(q+\leftambda) x} \left( \e^{-(q+\leftambda)\leftfloor x\rightfloor} W^{(q)}( \leftfloor x \rightfloor) - \leftambda{i\mkern1mu}nt_J {i\mkern1mu}nt^x_{\leftfloor x\rightfloor} \e^{-(q+\leftambda) z}W^{(q)}(z-y) \diff z ~\mathbb{P}i(\diff y)\right)
\end{align}
for \(x\geq0\), if we set \(W^{(q)}(x)=0\) for \(x<0\). Moreover, for any solution \(V\) of \eqref{recurse} with \(V(x)=0\) for \(x<0\) it holds
\begin{align}\leftabel{unique}
V(x)=V(0)W^{(q)}(x)
\end{align}
for all \(x{i\mkern1mu}n\mathbb{R}\).
\end{lemma}
\begin{proof}
Note first that, if some function solves \eqref{recurse}, then so does any multiple of it. \\
To prove \eqref{recurse}, fix \(q\geq 0\) and let \(a>0\). Define the first passage times
\begin{align*}
\tau_a^+&:= {i\mkern1mu}nf\{t>0: L_t\geq a\},\\
\tau_0^-&:= {i\mkern1mu}nf\{t>0: L_t\lefteq0\}.
\end{align*}
It is well known (cf. \cite[Thm. 1.2]{kuznetsov2011theory}) that
\begin{equation}\leftabel{scaleviapassage}
\mathbb{E}^x[\e^{-q\tau_a^+}\mathds{1}_{\{\tau_a^+<\tau_0^-\}}] = \frac{W^{(q)}(x)}{W^{(q)}(a)}.
\end{equation}
Scale functions are defined on \([0,{i\mkern1mu}nfty)\) and in this particular setting we are able to compute them directly on \([0,1]\) due to \(J\subset [1,{i\mkern1mu}nfty)\).\\
To do so, let \(a=1\) and let \(x{i\mkern1mu}n(0,1)\) be arbitrary.
Under \(\mathbb{P}^x\) the event \(\{\tau_1^+<\tau_0^-\}\) is equivalent to the event that no jump occurs until \(T={1-x}\), the time it takes for the pure drift to exit the interval. Thus \(\mathbb{P}^x (\tau_1^+<\tau_0^-)= \e^{\leftambda (x-1)}\), since the number of jumps in an interval \([0,T]\) follows a Poisson distribution with parameter \(\leftambda T\). In that case \(\tau_1^+={1-x}\) and we find
\begin{align}
\mathbb{E}^x[\e^{-q\tau_1^+} \mathds{1}_{\{\tau_1^+<\tau_0^-\}}] &= \e^{ q (x-1)}\mathbb{P}^x (\tau_1^+<\tau_0^-) = \e^{(q+\leftambda)(x-1)}.\leftabel{firstinterval}
\end{align}
Thus for \(x{i\mkern1mu}n(0,1)\),
\[
W^{(q)}(x) = \e^{(q+\leftambda)(x-1)} W^{(q)}(1),
\]
and hence, \eqref{recurse} is fulfilled on \((0,1)\) because the integral in \eqref{recurse} vanishes and \(\e^{-(q+\leftambda)}W^{(q)}(1)\) is just a constant factor. Since scale functions are continuous (cf. \cite[Lem. 2.3]{kuznetsov2011theory}) on \([0,{i\mkern1mu}nfty)\) the formula holds true for \(x=0\) and \(x=1\) as well.
However, from \cite[Lem. 3.1]{kuznetsov2011theory}, we know that \(W^{(q)}(0)=\frac{1}{c}\), and as we agreed on \(c=1\), we have
\begin{align}\leftabel{on1}
W^{(q)}(x)=\e^{(q+\leftambda)x}, \quad x{i\mkern1mu}n[0,1].
\end{align}
Clearly, if we prove \eqref{recurse} for \(x{i\mkern1mu}n\mathbb{R}_+\setminus\mathbb{N}\), we may close any gap by continuity. Therefore let \(x{i\mkern1mu}n\mathbb{R}_+\setminus\mathbb{N}\) with \(x>1\).
As we want to use \eqref{scaleviapassage} again, we are interested in the expression \(\mathbb{E}^x[\e^{-q\tau_{\leftceil x\rightceil}^+}\mathds{1}_{\{\tau_{\leftceil x\rightceil}^+<\tau_0^-\}}] \) which can be written as
\begin{equation}\leftabel{onrwithoutn}
\mathbb{E}^x[\e^{-q\tau_{\leftceil x\rightceil}^+}\mathds{1}_{\{\tau_{\leftceil x\rightceil}^+<\tau_0^-\}}] = \mathbb{E}^x[\e^{-q\tau_{\leftceil x\rightceil}^+}\mathds{1}_{\{\tau_{\leftceil x\rightceil}^+<\sigma\}}]+
\mathbb{E}^x[\e^{-q\tau_{\leftceil x\rightceil}^+}\mathds{1}_{\{\tau_{\leftceil x\rightceil}^+>\sigma\}\cap\{L_{\sigma}>0\}\cap\{\hat\tau_{\leftceil x\rightceil}^+<\hat\tau_{0}^-\}}],
\end{equation}
where \(\sigma\) denotes the time of the first jump of $(L_t)_{t\geq 0}$ and
\begin{align*}
\hat\tau^+_{\leftceil x\rightceil} &:={i\mkern1mu}nf\{t\geq\sigma: L_t\geq\leftceil x\rightceil \} - \sigma, \\
\hat\tau_{0}^- &:= {i\mkern1mu}nf\{t\geq\sigma: L_t\lefteq0 \} - \sigma
\end{align*}
denote the respective first passage times after the first jump. The first term in \eqref{onrwithoutn} is
\[
\mathbb{E}^x[\e^{-q\tau_{\leftceil x\rightceil}^+}\mathds{1}_{\{\tau_{\leftceil x\rightceil}^+<\sigma\}}]=\mathbb{E}^{x-\leftfloor x\rightfloor}[\e^{-q\tau_1^+}\mathds{1}_{\{\tau_1^+<\sigma\}}]=\mathbb{E}^{x-\leftfloor x\rightfloor}[\e^{-q\tau_1^+}\mathds{1}_{\{\tau_1^+<\tau^-_0\}}]=
\e^{(q+\leftambda)(x-\leftceil x\rightceil)}
\]
due to the space homogeneity of Lévy processes, $J\subseteq[1,{i\mkern1mu}nfty)$ and formula \eqref{firstinterval}. For the second expression in \eqref{onrwithoutn} note that on \(\{\tau_{\leftceil x\rightceil}^+>\sigma\}\) it holds \(\tau_{\leftceil x\rightceil}^+ = \sigma + \hat\tau_{\leftceil x\rightceil}^+\) by definition and \(\sigma=\tau_{\leftfloor x \rightfloor}^-\) since \(J\subset [1,{i\mkern1mu}nfty)\). We condition on $\sigma\sim \mathrm{Exp}(\leftambda)$ and $\xi_1=-\Delta L_\sigma$ and since under $\mathbb{P}^x$ we have $L_\sigma=x+\sigma-\xi_1$, we obtain
\begin{align*}
\mathbb{E}^x\left[\e^{-q\tau_{\leftceil x\rightceil}^+}\mathds{1}_{\{\tau_{\leftceil x\rightceil}^+>\sigma\}\cap\{L_{\sigma}>0\}\cap\{\hat\tau_{\leftceil x\rightceil}^+ < \hat\tau_{0}^-\}}\right]
&= \mathbb{E}^x\left[\mathbb{E}^x\left[\e^{-q\tau_{\leftceil x\rightceil}^+} \mathds{1}_{\{\tau_{\leftceil x\rightceil}^+>\sigma\}\cap\{L_{\sigma}>0\}\cap\{\hat\tau_{\leftceil x\rightceil}^+ < \hat\tau_{0}^-\}}\middle| \sigma, \xi_1 \right] \right]\\
&= \mathbb{E}^x\left[\e^{-q\sigma} \mathds{1}_{\{\tau_{\leftceil x\rightceil}^+>\sigma\}\cap\{L_{\sigma}>0\}} \mathbb{E}^{L_{\sigma}}\left[\e^{-q\hat\tau_{\leftceil x\rightceil}^+} \mathds{1}_{\{\hat\tau_{\leftceil x\rightceil}^+ <\hat\tau_{0}^-\}}\right] \right] \\
&= \mathbb{E}^x\left[\e^{-q\sigma} \mathds{1}_{\{\tau_{\leftceil x\rightceil}^+>\sigma\}\cap\{L_{\sigma}>0\}} \frac{W^{(q)}(L_{\sigma})}{W^{(q)}(\leftceil x\rightceil)} \right] \\
&= \frac1{W^{(q)}({\leftceil x\rightceil})} \leftambda{i\mkern1mu}nt_J {i\mkern1mu}nt_0^{{\leftceil x\rightceil}-x}\e^{-(q+\leftambda)t}W^{(q)}(x+ t-y)\diff t ~ \mathbb{P}i(\diff y).
\end{align*}
Note that in the above the indicator function \(\mathds{1}_{\{L_\sigma>0\}}\) can be omitted due to \(W^{(q)}(x)=0\) if \(x<0\). \newline
Via \eqref{scaleviapassage} we obtain now by combination of the two terms in \eqref{onrwithoutn}
\begin{align*}
W^{(q)}(x)&= \mathbb{E}^x[\e^{-q\tau_{\leftceil x\rightceil}^+}\mathds{1}_{\{\tau_{\leftceil x\rightceil}^+<\tau_0^-\}}] \cdot W^{(q)}({\leftceil x\rightceil})\\
&=\e^{(q+\leftambda)(x-\leftceil x\rightceil)}W^{(q)}(\leftceil x\rightceil) + \leftambda{i\mkern1mu}nt_J {i\mkern1mu}nt_0^{\leftceil x\rightceil-x}\e^{-(q+\leftambda)t}W^{(q)}(x+ t-y)\diff t~ \mathbb{P}i(\diff y),\\
&=\e^{(q+\leftambda)(x-\leftceil x\rightceil)}W^{(q)}(\leftceil x\rightceil) + \leftambda{i\mkern1mu}nt_J{i\mkern1mu}nt_{x}^{\leftceil x\rightceil}\e^{(q+\leftambda)(x-z)}W^{(q)}(z-y)\diff z ~ \mathbb{P}i(\diff y).\end{align*}
Recall that \(x{i\mkern1mu}n\mathbb{R}_+\setminus\mathbb{N}\) and choose a decreasing sequence \(\{x_n\}_{n{i\mkern1mu}n\mathbb{N}}\) with $x_n\to\leftfloor x\rightfloor$ as $n\to{i\mkern1mu}nfty$. Taking the limit we obtain
\[
W^{(q)}(\leftfloor x\rightfloor)=\e^{-(q+\leftambda)}W^{(q)}({\leftceil x\rightceil}) + \leftambda{i\mkern1mu}nt_J {i\mkern1mu}nt_{\leftfloor x\rightfloor}^{\leftceil x\rightceil}\e^{({q+\leftambda})(\leftfloor x\rightfloor-z)}W^{(q)}(z-y)\diff z~\mathbb{P}i(\diff y)
\]
by continuity. If we now combine the last two formulae we get
\begin{align*}
W^{(q)}(x)=\e^{(q+\leftambda) x}\left(\e^{-(q+\leftambda)\leftfloor x\rightfloor}W^{(q)}(\leftfloor x\rightfloor)-\leftambda{i\mkern1mu}nt_J {i\mkern1mu}nt_{\leftfloor x\rightfloor}^x \e^{-(q+\leftambda) z}W^{(q)}(z-y)\diff z ~ \mathbb{P}i(\diff y)\right),
\end{align*}
the desired result. As mentioned above this implies \eqref{recurse} for \(x{i\mkern1mu}n\mathbb{N}\) as well.\\
Finally, given a solution $V$ of \eqref{recurse} with $V(x)=0$ for $x<0$, it follows directly from \eqref{recurse} that $V(x)=\e^{(q+\leftambda)x} V(0)= W^{(q)}(x) V(0)$ on $[0,1)$. Recursively, \eqref{unique} now follows for all $x{i\mkern1mu}n\mathbb{R}R$.
\end{proof}
Formula \eqref{recurse} can be solved inductively for lattice-distributed jumps. Before doing so in the next subsection, we show a simple corollary of the above which simplifies our notation.
\begin{corollary}\leftabel{auxiliary}
In the setting of Lemma \rightef{recursivformula1} for all \(q\geq0\) there exist continuous functions \(w_q: \mathbb{R}\to\mathbb{R}\) such that for \(x{i\mkern1mu}n\mathbb{R}\)
\begin{align}\leftabel{ansatz}
W^{(q)}(x)=\e^{(q+\leftambda) x}w_q(x),
\end{align}
and \(w_q\) fulfils the recursive formula
\begin{align}\leftabel{recurse2}
w_q(x)=w_q(\leftfloor x\rightfloor) - \leftambda{i\mkern1mu}nt_J \e^{-(q+\leftambda) y}{i\mkern1mu}nt^x_{\leftfloor x\rightfloor} w_q(z-y) \diff z ~\mathbb{P}i(\diff y)
\end{align}
for \(x\geq0\), while \(w_q(x)=0\) for \(x<0\). In particular
\begin{equation}\leftabel{recurse_start}
w_q(x)=1 \quad \text{for } x{i\mkern1mu}n[0,1].
\end{equation}
\end{corollary}
\begin{proof}
We obtain \eqref{recurse2} by plugging \eqref{ansatz} into formula \eqref{recurse}. Formula \eqref{recurse_start} follows from \eqref{on1}.
\end{proof}
\subsection{Lattice-distributed jumps}
The recursion \eqref{recurse2} becomes particularly easy for lattice-distributed jumps as the outer integral reduces to a finite sum. Here we call $\mathbb{P}i$ a \emph{(positive) lattice distribution} if $\mathbb{P}i$ is a discrete probability measure of the form
\[\mathbb{P}i (\diff x) =\sum_{y{i\mkern1mu}n J} p_y \delta_y (\diff x)
\]
with \(\sum_{y{i\mkern1mu}n J} p_y = 1\) and \(J:=\mathrm{supp}(\mathbb{P}i)\subset \mathbb{V}\mathrm{ar}\,epsilon\mathbb{N}\setminus\{0\}\) for some \(\mathbb{V}\mathrm{ar}\,epsilon>0\). The parameter \(\mathbb{V}\mathrm{ar}\,epsilon\) is called the \emph{step} of the lattice measure.
\begin{remark}
Other definitions of lattice measures exist in the literature. A more general lattice would be
\[
\mathrm{supp}(\mathbb{P}i)=a+\mathbb{V}\mathrm{ar}\,epsilon\mathbb{Z},
\]
for some \(a{i\mkern1mu}n\mathbb{R}\) and \(\mathbb{V}\mathrm{ar}\,epsilon>0\). Also general lattice distributions need not be finite measures. However, we restrict ourselves to the cases where the definition above applies.
\end{remark}
By Lemma \rightef{scaling} and Lemma \rightef{scaling1} it suffices to consider \(\mathbb{V}\mathrm{ar}\,epsilon=1\) and we shall do so for the moment. By Corollary \rightef{auxiliary} the functions $w_q$ thus fulfil
\begin{equation}\leftabel{recurse3} \begin{aligned}
w_q(x)&=w_q(\leftfloor x\rightfloor) - \leftambda\sum_{y{i\mkern1mu}n J} p_y \e^{-(q+\leftambda) y}{i\mkern1mu}nt^x_{\leftfloor x\rightfloor} w_q(z-y) \diff z, \quad x\geq 0,\\
\text{with} \quad w_q(x)&=1, \quad x{i\mkern1mu}n[0,1], \quad \text{and} \quad w_q(x)=0, \quad x<0.
\end{aligned}\end{equation}
We shall use the following slight abuse of notation for the sake of brevity. Let \(v{i\mkern1mu}n\mathbb{N}^J\), i.e. \(v: J \to \mathbb{N} , y\mapsto v_y\) is a function which maps every possible jump height to a non-negative integer. Then we define for any \(v{i\mkern1mu}n\mathbb{N}^J\)
\begin{align*}
|v|:=\sum_{y{i\mkern1mu}n J} v_y, \quad \text{and} \quad
\leftangle v, J \rightangle := \sum_{y{i\mkern1mu}n J} v_y y.
\end{align*}
We will interpret \(v\) as a vector that keeps track of how many jumps of which size occur. Then \(|v|\) is the total number and \(\leftangle v,J \rightangle\) the accumulated size of all occurring jumps.
\begin{lemma}\leftabel{lattice_explicit}
Assume $\mathbb{P}i=\sum_{y{i\mkern1mu}n J} p_y \delta_y $ is a positive lattice distribution with $J\subset \mathbb{N}N\setminus\{0\}$ and set \(m_y={\leftambda p_y} \e^{-(q+\leftambda) y}, y{i\mkern1mu}n J\). Then the unique solution \(w_q\) to \eqref{recurse3} is given by
\begin{align}\leftabel{latsol}
w_q(x)= \sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ \leftangle v, J\rightangle\lefteq\leftfloor x\rightfloor}} \left(\prod_{\ell{i\mkern1mu}n J} \frac{m_\ell^{v_\ell}}{v_\ell !}\right)\Big(\leftangle v, J\rightangle-x\Big)^{|v|},~x\geq0.
\end{align}
\end{lemma}
\begin{proof}
First note that both the sum and the product in \eqref{latsol} are always finite: since \(J\) is bounded away from zero, for any fixed $x\geq 0$ the condition on \(v{i\mkern1mu}n\mathbb{N}^{J}\) implies that only those \(v{i\mkern1mu}n\mathbb{N}^{J}\) with \(|v|\lefteq \leftfloor x\rightfloor\) may appear, and the entries of \(v\) being in \(\mathbb{N}\) implies that only finitely many of them are non-zero. \newline
Consider \(x{i\mkern1mu}n[0,1)\). The sum in \eqref{latsol} then consists of only one summand, namely the one generated by \(v\equiv 0\), and we obtain \(w_q(x)=1\). The second summand is generated by \(v=(1,0,\leftdots)\) and appears for \(x\geq1\). However, it vanishes for \(x=1\) as \(\leftangle v, J\rightangle-x=0\) and, hence, \(w_q(1)=1\) as well which coincides with \eqref{recurse3}. Indeed, any new summand is zero at its first appearance and this guarantees continuity of \(w_q\). \newline
Now assume \eqref{latsol} is true on \([0,n]\) for some \(n{i\mkern1mu}n\mathbb{N}\) and let \(x{i\mkern1mu}n(n,n+1)\). By \eqref{recurse3} we find
\[
w_q(x)=w_q(n) - \sum_{y{i\mkern1mu}n J} m_y {i\mkern1mu}nt^x_{n} \sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ \leftangle v, J\rightangle\lefteq \leftfloor z-y \rightfloor}} \left(\prod_{\ell{i\mkern1mu}n J} \frac{m_\ell^{v_\ell}}{v_\ell !}\right)\Big(\leftangle v, J\rightangle-z+y\Big)^{|v|} \diff z.
\]
To swap the integral with the sum to its right we fix \(y{i\mkern1mu}n J\) for a moment. Since we integrate over \(z{i\mkern1mu}n[n,x]\) and \(y{i\mkern1mu}n J \subset \mathbb{N}\) it holds \(\leftfloor z-y\rightfloor = n-y\). Thus, the summation area is independent of \(z\) and we can compute further
\begin{align*}
w_q(x)&=w_q(n) - \sum_{y{i\mkern1mu}n J} m_y \sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ \leftangle v, J\rightangle\lefteq n-y}}\left(\prod_{\ell{i\mkern1mu}n J} \frac{m_\ell^{v_\ell}}{v_\ell !}\right) {i\mkern1mu}nt^{x}_{n} \Big(\leftangle v, J\rightangle-z+y\Big)^{|v|} \diff z \\
&=w_q(n) + \sum_{y{i\mkern1mu}n J} m_y \sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ \leftangle v, J\rightangle\lefteq n-y}}\left(\prod_{\ell{i\mkern1mu}n J} \frac{m_\ell^{v_\ell}}{v_\ell !}\right)
\left(\frac{(\leftangle v, J\rightangle-x+y)^{|v|+1} }{|v|+1}-\frac{(\leftangle v, J\rightangle-n+y)^{|v|+1} }{|v|+1}\right).
\end{align*}
Denoting by \(e_y{i\mkern1mu}n\mathbb{N}^{J}\) the unique element with $|e_y|=1$ and \(\leftangle e_y,J\rightangle= y \) we can write
\begin{align*}
w_q(x)&=w_q(n) + \sum_{y{i\mkern1mu}n J}\sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ \leftangle v+e_y, J\rightangle\lefteq n}}\left(\prod_{\ell{i\mkern1mu}n J} \frac{m_\ell^{(v+e_y)_\ell}}{v_\ell !}\right)
\left(\frac{(\leftangle v+e_y, J\rightangle-x)^{|v+e_y|} }{|v+e_y|}-\frac{(\leftangle v+e_y, J\rightangle-n)^{|v+e_y|} }{|v+e_y|}\right).
\end{align*}
Although not obvious at first, it is possible to combine the two sums. To achieve that we need to count how often each summand appears. Recall that only finitely many entries of the appearing $v$ are non-zero. Thus by first multiplying with \(\frac{|v|!}{|v|!} = |v|!\cdot \frac{|v+e_y|}{|v+e_y|!}\) we may insert a multinomial coefficient and obtain
\begin{align*}
w_q(x)=w_q(n) + \sum_{y{i\mkern1mu}n J}\sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ \leftangle v+e_y, J\rightangle\lefteq n}}&{|v| \choose v_1,v_2, \cdots}\left(\frac{\prod_{\ell{i\mkern1mu}n J} m_\ell^{(v+e_y)_\ell}}{|v+e_y|!}\right)\cdot\\
& \cdot\left((\leftangle v+e_y, J\rightangle-x)^{|v+e_y|}-(\leftangle v+e_y, J\rightangle-n)^{|v+e_y|} \right),
\end{align*}
where we set $v_x=0$ for $x{i\mkern1mu}n \mathbb{N}N\setminus J$.
Further, set
\[
f(v)= \left(\frac{\prod_{\ell{i\mkern1mu}n J} m_\ell^{v_\ell}}{|v|!}\right)\left((\leftangle v, J\rightangle-x)^{|v|}-(\leftangle v, J\rightangle-n)^{|v|} \right), \quad v{i\mkern1mu}n\mathbb{N}^{J},
\]
then
\begin{align*}
w_q(x)=w_q(n) + \sum_{y{i\mkern1mu}n J}\sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ \leftangle v+e_y, J\rightangle\lefteq n}}{|v| \choose v_1,v_2, \cdots}f(v+e_y).
\end{align*}
Observe that \(\bar v{i\mkern1mu}n\mathbb{N}^{J}\) appears in the argument of \(f\) if it fulfils \(\leftangle \bar v,J\rightangle \lefteq n\) and if there exists a vector \(v{i\mkern1mu}n\mathbb{N}^{J}\) such that \(\bar v = v+e_y\) for some \(y{i\mkern1mu}n J\).
Thus \(\bar v\equiv 0\) is not part of the sum, since it is obviously the only vector that cannot be written as \(v+e_y\) with \(v{i\mkern1mu}n\mathbb{N}^{J}\) and \(y{i\mkern1mu}n J\), and hence,
\begin{align*}
w_q(x)&=w_q(n) + \sum_{y{i\mkern1mu}n J}\sum_{\substack{\bar{v}{i\mkern1mu}n\mathbb{N}^{J} \\ 1\lefteq \leftangle \bar{v}, J\rightangle\lefteq n}}{|\bar{v}-1| \choose \bar v_1, \cdots, \bar v_{y-1}, \bar v_y-1, \bar v_{y+1},\cdots}f(\bar{v})\\
&=w_q(n) +\sum_{\substack{\bar{v}{i\mkern1mu}n\mathbb{N}^{J} \\ 1\lefteq \leftangle \bar{v}, J\rightangle\lefteq n}} f(\bar{v}) \sum_{y{i\mkern1mu}n J} {|\bar{v}-1| \choose \bar v_1, \cdots, \bar v_{y-1}, \bar v_y-1, \bar v_{y+1},\cdots}\\
&=w_q(n) +\sum_{\substack{\bar{v}{i\mkern1mu}n\mathbb{N}^{J} \\ 1\lefteq \leftangle \bar{v}, J\rightangle\lefteq n}} f(\bar{v}) {|\bar v| \choose \bar v_1, \bar v_2, \cdots},
\end{align*}
by the recurrence relation for multinomial coefficients.\\
In the following we again replace \(\bar v\) by \(v\) and plug in \(w_q(n)\) as well. Then everything comes neatly together as
\begin{align*}
w_q(x)&= \sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ \leftangle v, J\rightangle\lefteq n}} \left(\prod_{\ell {i\mkern1mu}n J} \frac{m_\ell^{v_\ell}}{v_\ell !}\right)\Big(\leftangle v, J\rightangle-n\Big)^{|v|}
+\sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ 1\lefteq\leftangle v, J\rightangle\lefteq n}} \left(\prod_{\ell{i\mkern1mu}n J} \frac{m_\ell^{v_\ell}}{v_\ell !}\right)\left((\leftangle v, J\rightangle-x)^{|v|}-(\leftangle v, J\rightangle-n)^{|v|} \right) \\
&=\sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ \leftangle v, J\rightangle\lefteq\leftfloor x\rightfloor}} \left(\prod_{\ell {i\mkern1mu}n J} \frac{m_\ell^{v_\ell}}{v_\ell !}\right)\Big(\leftangle v, J\rightangle-x\Big)^{|v|},
\end{align*}
which completes the proof upon realising that the continuity argument we utilised for \(x=1\) is applicable for \(x=n+1\) as well.
\end{proof}
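A small Python sketch (ours; the two-point jump distribution and the parameter values are purely illustrative) makes the combinatorics of \eqref{latsol} explicit by enumerating the admissible vectors \(v\) and then recovering \(W^{(q)}\) via \eqref{ansatz} for \(c=1\).
\begin{verbatim}
import math
from itertools import product

def w_q(x, J, p, lam, q):
    """Evaluate w_q(x) of (latsol) for Pi = sum_y p[y] delta_y, c = 1."""
    n = int(math.floor(x))
    m = {y: lam * p[y] * math.exp(-(q + lam) * y) for y in J}
    total = 0.0
    for v in product(*(range(n // y + 1) for y in J)):   # candidate vectors
        vJ = sum(vy * y for vy, y in zip(v, J))
        if vJ > n:
            continue                                     # <v, J> <= floor(x)
        coeff = math.prod(m[y]**vy / math.factorial(vy)
                          for vy, y in zip(v, J))
        total += coeff * (vJ - x)**sum(v)
    return total

J, p = (1, 2), {1: 0.7, 2: 0.3}        # illustrative lattice jump measure
lam, q = 0.8, 0.0
for x in (0.5, 1.5, 3.0, 4.25):
    print(x, math.exp((q + lam) * x) * w_q(x, J, p, lam, q))  # W^{(q)}(x)
\end{verbatim}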
As mentioned above we may treat lattices with arbitrary step \(\mathbb{V}\mathrm{ar}\,epsilon>0\), i.e. not only integer jumps, by the scaling arguments of Lemma \rightef{scaling} and Lemma \rightef{scaling1}. We use the notation
\begin{align}\leftabel{gaussepsilon}
\leftfloor x \rightfloor_\mathbb{V}\mathrm{ar}\,epsilon:= \mathbb{V}\mathrm{ar}\,epsilon \leftfloor \tfrac{x}{\mathbb{V}\mathrm{ar}\,epsilon} \rightfloor =\max\{z{i\mkern1mu}n \mathbb{V}\mathrm{ar}\,epsilon\mathbb{Z} ~:~ z \lefteq x\}\quad \text{for all }x{i\mkern1mu}n\mathbb{R}R, \mathbb{V}\mathrm{ar}\,epsilon>0.
\end{align}
\begin{proposition}\leftabel{general_lattice}
Let $(L_t)_{t\geq 0}$ be a spectrally negative compound Poisson process with drift \(c>0\), intensity \(\leftambda>0\) and jump measure \(\mathbb{P}i=\sum_{y{i\mkern1mu}n J} p_y \delta_{y}\) such that \(J:=\mathrm{supp}(\mathbb{P}i)\subset \mathbb{V}\mathrm{ar}\,epsilon\mathbb{N}\setminus\{0\}\) for some \(\mathbb{V}\mathrm{ar}\,epsilon>0\). Then for \(q\geq0\) the \(q\)-scale function \(W^{(q)}\) of $(L_t)_{t\geq 0}$ is given by
\begin{align}\leftabel{smalllatsol1}
W^{(q)}(x)=\frac{1}{c} \e^{\frac{ q+\leftambda}{c} x}\sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ \leftangle v, J \rightangle\lefteq\leftfloor x \rightfloor_\mathbb{V}\mathrm{ar}\,epsilon}} \left(\prod_{\ell{i\mkern1mu}n J} \frac{m_{\ell}^{v_\ell}}{v_\ell !}\right)\Big(\leftangle v, J\rightangle-x\Big)^{|v|},\quad x\geq0,
\end{align}
where \(m_y=\frac{\leftambda p_y}{c} \e^{-\frac{q+\leftambda}{c} y}\), $y{i\mkern1mu}n J$.
\end{proposition}
\begin{proof}
Recall from \eqref{eq-specialtype} that $L_t = c t - \sum_{i=1}^{N_t} \xi_i,$ and define the auxiliary Lévy process $(\tilde{L}_t)_{t\geq 0}$ by
\begin{align*}
\tilde L_t = t-\sum_{i=1}^{\tilde N_{t}} \frac{\xi_i}\mathbb{V}\mathrm{ar}\,epsilon, \quad t\geq 0,
\end{align*}
where \(\tilde N_t:=N_{\mathbb{V}\mathrm{ar}\,epsilon t/c}\), i.e. \(\tilde L_t\) has intensity \(\tilde \leftambda = \mathbb{V}\mathrm{ar}\,epsilon \leftambda/c\). We easily convince ourselves that the jump measure of \(\tilde L_t\) is supported on \(\tilde{J}=\frac{J}{\mathbb{V}\mathrm{ar}\,epsilon}\subseteq \mathbb{N}\setminus\{0\}\). Thus, Lemma \rightef{lattice_explicit} is applicable and we can compute the $q$-scale function of $(\tilde{L}_t)_{t\geq 0}$ via Corollary \rightef{auxiliary} as
\begin{align*}
\tilde W^{(q)}(x)&= \e^{(q+\tilde{\leftambda})x} \tilde{w}_q(x)
\\
&=\e^{(q+\mathbb{V}\mathrm{ar}\,epsilon\leftambda/c)x}\sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J/\mathbb{V}\mathrm{ar}\,epsilon} \\ \leftangle v, \frac J\mathbb{V}\mathrm{ar}\,epsilon \rightangle\lefteq\leftfloor x\rightfloor}} \left(\prod_{\ell {i\mkern1mu}n J/\mathbb{V}\mathrm{ar}\,epsilon} \frac{ ({\frac{\mathbb{V}\mathrm{ar}\,epsilon\leftambda}{c} p_{\mathbb{V}\mathrm{ar}\,epsilon\ell}} \e^{-(q+\mathbb{V}\mathrm{ar}\,epsilon\leftambda/c) \ell})^{v_\ell}}{v_\ell !}\right)\Big(\leftangle v, \frac{J}{\mathbb{V}\mathrm{ar}\,epsilon}\rightangle-x\Big)^{|v|},~x\geq0,
\end{align*}
for any \(q\geq0\).
Now we consecutively apply Lemma \rightef{scaling1} and Lemma \rightef{scaling} to obtain for all \(q\geq0\) and \(x\geq0\) the relationship
\[
W^{(q)}(x)=\frac{1}{c} \tilde W^{(\mathbb{V}\mathrm{ar}\,epsilon q/c)} \Big(\frac{x}{\mathbb{V}\mathrm{ar}\,epsilon}\Big),
\]
where \(W^{(q)}\) is the scale function of \(L_t\).
Thus, we find
\begin{align*}
W^{(q)}(x)&=\frac{1}{c} \e^{(\mathbb{V}\mathrm{ar}\,epsilon q/c+\mathbb{V}\mathrm{ar}\,epsilon\leftambda/c)\frac{x}{\mathbb{V}\mathrm{ar}\,epsilon}}\sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J/\mathbb{V}\mathrm{ar}\,epsilon} \\ \leftangle v, \frac J\mathbb{V}\mathrm{ar}\,epsilon \rightangle\lefteq\leftfloor \frac{x}{\mathbb{V}\mathrm{ar}\,epsilon} \rightfloor}} \left(\prod_{\ell{i\mkern1mu}n J/\mathbb{V}\mathrm{ar}\,epsilon}\frac{ ({\frac{\mathbb{V}\mathrm{ar}\,epsilon\leftambda}{c} p_{\mathbb{V}\mathrm{ar}\,epsilon\ell}} \e^{-(\mathbb{V}\mathrm{ar}\,epsilon q/c+\mathbb{V}\mathrm{ar}\,epsilon\leftambda/c) \ell})^{v_\ell}}{v_\ell !}\right)\Big(\leftangle v, \frac{J}{\mathbb{V}\mathrm{ar}\,epsilon}\rightangle-\frac{x}{\mathbb{V}\mathrm{ar}\,epsilon}\Big)^{|v|}\\
&=\frac{1}{c} \e^{\frac{ q+\leftambda}{c} x}\sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ \leftangle v, J \rightangle\lefteq\leftfloor x \rightfloor_\mathbb{V}\mathrm{ar}\,epsilon}} \left(\prod_{\ell{i\mkern1mu}n J} \frac{ ({\frac{\leftambda}{c} p_\ell} \e^{- \frac{q+\leftambda}{c} \ell})^{v_\ell}}{v_\ell !}\right)\Big(\leftangle v, J \rightangle-x \Big)^{|v|},
\end{align*}
as had to be shown.
\end{proof}
Our strategy's next step is approximation, i.e. we consider converging sequences of spectrally negative compound Poisson processes and investigate the limit of the respective sequence of scale functions. \newline
However, the formula obtained in Proposition \rightef{general_lattice} is still too cumbersome for that purpose. Fortunately, we can reorder the sum in \eqref{smalllatsol1} in such a way that the parameters of the process, i.e. drift, intensity and jump measure, become visible.
\begin{corollary}\leftabel{ordering}
In the setting of Proposition \rightef{general_lattice} it holds for all \(q\geq0\)
\begin{align}\leftabel{neworder}
W^{(q)}(x)= \frac{1}{c} \sum_{k=0}^{\leftfloor \frac x\mathbb{V}\mathrm{ar}\,epsilon\rightfloor} \frac{(\leftambda/c)^k}{k!} {i\mkern1mu}nt_0^{\leftfloor x\rightfloor_\mathbb{V}\mathrm{ar}\,epsilon} \e^{-\frac{q+\leftambda}{c} ( s - x)} ( s -x)^{k}~
\mathbb{P}i^{*k}(\diff s), \quad x\geq 0.
\end{align}
\end{corollary}
\begin{proof}
Recall that \eqref{smalllatsol1} is a finite sum due to the restriction \(\leftangle v,J\rightangle \lefteq \leftfloor x \rightfloor_\mathbb{V}\mathrm{ar}\,epsilon\). But this implies \(|v|\lefteq \frac{1}{\mathbb{V}\mathrm{ar}\,epsilon} \leftangle v,J\rightangle \lefteq \leftfloor \frac x\mathbb{V}\mathrm{ar}\,epsilon\rightfloor\) via \eqref{gaussepsilon} since \(\min\{y :~ y{i\mkern1mu}n J\} \geq \mathbb{V}\mathrm{ar}\,epsilon\). Thus we may reorder \eqref{smalllatsol1} according to the value of \(|v|\). We obtain for \(x\geq0\)
\begin{align*}
W_q(x)&=\frac{1}{c} \e^{\frac{ q+\leftambda}{c} x} \sum_{k=0}^{\leftfloor \frac x\mathbb{V}\mathrm{ar}\,epsilon\rightfloor} \sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ \leftangle v, J \rightangle\lefteq\leftfloor x \rightfloor_\mathbb{V}\mathrm{ar}\,epsilon \\ |v|=k}} \left(\prod_{\ell{i\mkern1mu}n J} \frac{m_{\ell}^{v_\ell}}{v_\ell !}\right)\Big(\leftangle v, J\rightangle-x\Big)^{k}\\
&= \frac{1}{c} \e^{\frac{ q+\leftambda}{c} x} \sum_{k=0}^{\leftfloor \frac x\mathbb{V}\mathrm{ar}\,epsilon\rightfloor} \sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ \leftangle v, J \rightangle\lefteq \mathbb{V}\mathrm{ar}\,epsilon \leftfloor \frac{x}{\mathbb{V}\mathrm{ar}\,epsilon} \rightfloor \\ |v|=k}} {k\choose v_{\mathbb{V}\mathrm{ar}\,epsilon},v_{2\mathbb{V}\mathrm{ar}\,epsilon},\cdots} \frac{(\leftambda/c)^k}{k!} \left(\prod_{\ell {i\mkern1mu}n J} p_\ell^{v_\ell}\right) \e^{-\frac{q+\leftambda}{c} \leftangle v, J\rightangle} \Big(\leftangle v, J\rightangle-x\Big)^{k},
\end{align*}
where we inserted a multinomial coefficient by multiplying with \(\frac{k!}{k!}\) and set $v_x=0$ for $x\notin J$. The inner sum can be ordered even further, this time according to the value of \(\frac{1}{\mathbb{V}\mathrm{ar}\,epsilon}\leftangle v,J\rightangle\). It holds
\begin{align*}
W_q(x)&=\frac{1}{c} \e^{\frac{ q+\leftambda}{c} x} \sum_{k=0}^{\leftfloor \frac x\mathbb{V}\mathrm{ar}\,epsilon\rightfloor} \sum_{j=k}^{\leftfloor \frac x\mathbb{V}\mathrm{ar}\,epsilon\rightfloor} \sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ \leftangle v, J \rightangle = \mathbb{V}\mathrm{ar}\,epsilon j \\ |v|=k}} {k\choose v_{\mathbb{V}\mathrm{ar}\,epsilon},v_{2\mathbb{V}\mathrm{ar}\,epsilon},\cdots} \frac{(\leftambda/c)^k}{k!} \left(\prod_{\ell {i\mkern1mu}n J} p_\ell^{v_\ell}\right) \e^{-\frac{q+\leftambda}{c} \mathbb{V}\mathrm{ar}\,epsilon j} \Big(\mathbb{V}\mathrm{ar}\,epsilon j -x\Big)^{k}\\
&= \frac{1}{c} \e^{\frac{ q+\leftambda}{c} x} \sum_{k=0}^{\leftfloor \frac x\mathbb{V}\mathrm{ar}\,epsilon\rightfloor} \sum_{j=k}^{\leftfloor \frac x\mathbb{V}\mathrm{ar}\,epsilon\rightfloor} \frac{(\leftambda/c)^k}{k!} \e^{-\frac{q+\leftambda}{c} \mathbb{V}\mathrm{ar}\,epsilon j} \Big(\mathbb{V}\mathrm{ar}\,epsilon j -x\Big)^{k}
\sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J} \\ \leftangle v, J \rightangle = \mathbb{V}\mathrm{ar}\,epsilon j \\ |v|=k}} {k\choose v_{\mathbb{V}\mathrm{ar}\,epsilon},v_{2\mathbb{V}\mathrm{ar}\,epsilon},\cdots} \left(\prod_{\ell {i\mkern1mu}n J} p_\ell^{v_\ell}\right).
\end{align*}
We easily convince ourselves that
\[
\mathbb{P}i^{*k}(\mathbb{V}\mathrm{ar}\,epsilon j)=\sum_{\substack{v{i\mkern1mu}n\mathbb{N}^{J}\\ \leftangle v, J\rightangle=\mathbb{V}\mathrm{ar}\,epsilon j \\|v|=k}}
{k\choose v_\mathbb{V}\mathrm{ar}\,epsilon,v_{2\mathbb{V}\mathrm{ar}\,epsilon},\cdots}
\left(\prod_{\ell{i\mkern1mu}n J} p_\ell^{v_\ell}\right)
\]
for \(j,k{i\mkern1mu}n\mathbb{N}\). Thereby the proof is concluded since $\mathbb{P}i$ is discrete and $\mathbb{P}i^{\ast k}([0,k\mathbb{V}\mathrm{ar}\,epsilon))=0$.
\end{proof}
To simplify notation in the following, recall the functions $g_k(s,x)$, $s, x\geq 0$, \(k{i\mkern1mu}n\mathbb{N}\), from \eqref{g}.
Then \eqref{neworder} reads
\begin{align}\leftabel{almostready}
W^{(q)}(x)=\frac{1}{c} \sum_{k=0}^{\leftfloor \frac x\mathbb{V}\mathrm{ar}\,epsilon\rightfloor} {i\mkern1mu}nt^{\leftfloor x\rightfloor_\mathbb{V}\mathrm{ar}\,epsilon}_0 g_k(s,x) ~\mathbb{P}i^{*k}(\diff s).
\end{align}
Further, for \(k>\leftfloor \frac{x}{\mathbb{V}\mathrm{ar}\,epsilon} \rightfloor\) it holds \(\mathrm{supp}(\mathbb{P}i^{*k})\subset (\leftfloor x \rightfloor_\mathbb{V}\mathrm{ar}\,epsilon,{i\mkern1mu}nfty)\). Moreover, for any \(x{i\mkern1mu}n\mathbb{R}_+\) the interval \((\leftfloor x \rightfloor_\mathbb{V}\mathrm{ar}\,epsilon,x]\) contains no point of \(\mathbb{V}\mathrm{ar}\,epsilon\mathbb{N}\) and is therefore a \(\mathbb{P}i^{*k}\)-null set for all \(k{i\mkern1mu}n\mathbb{N}\). Hence, we may write in the setting above
\begin{align}\leftabel{representation}
W^{(q)}(x)=\frac{1}{c} \sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt^{x}_0 g_k(s,x) ~\mathbb{P}i^{*k}(\diff s), \quad x\geq 0,
\end{align}
which is \eqref{mainform}.
Now we have an expression at hand where the parameters of the Lévy process are clearly visible. Due to the isolation of the jump measure investigating limits is a straightforward task.
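To make the formula concrete, the following Python sketch (ours; the lattice jump measure and all parameter values are illustrative) evaluates \eqref{representation} for a jump measure supported on \(\mathbb{V}\mathrm{ar}\,epsilon\mathbb{N}\setminus\{0\}\) by building the convolution powers \(\mathbb{P}i^{*k}\) iteratively; the series is truncated at \(k\lefteq\leftfloor x/\mathbb{V}\mathrm{ar}\,epsilon\rightfloor\), which is justified by the support argument above.
\begin{verbatim}
import math

def scale_function(x, c, lam, q, eps, p):
    """W^{(q)}(x) via (representation) for Pi = sum_j p[j] delta_{j*eps}."""
    kmax = int(math.floor(x / eps))
    conv = {0: 1.0}                      # Pi^{*0} = delta_0 on the grid eps*Z
    total = 0.0
    for k in range(kmax + 1):
        for site, mass in conv.items():
            s = site * eps
            if s <= x:                   # integral over [0, x]
                total += (mass * (lam / c)**k / math.factorial(k)
                          * math.exp(-(q + lam) / c * (s - x)) * (s - x)**k)
        nxt = {}                         # next convolution power Pi^{*(k+1)}
        for a, ma in conv.items():
            for b, mb in p.items():
                nxt[a + b] = nxt.get(a + b, 0.0) + ma * mb
        conv = nxt
    return total / c

p = {1: 0.6, 3: 0.4}                     # jumps of size eps and 3*eps
print(scale_function(2.7, c=1.0, lam=0.5, q=0.2, eps=0.5, p=p))
\end{verbatim}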
\subsection{Arbitrary jump distributions}\leftabel{S33}
In this section we make use of the fact that probability measures can be approximated by lattice measures, as stated in the following lemma.
\begin{lemma}\leftabel{dense}
The set of positive lattice distributions is dense in the set of probability distributions on $\mathbb{R}R_+$ with respect to the topology of weak convergence.
\end{lemma}
\begin{proof}
By definition, a sequence \(\{\mu_n\}_{n{i\mkern1mu}n\mathbb{N}}\) of probability measures converges weakly to a probability measure \(\mu\), if and only if the sequence \(\{F_n\}_{n{i\mkern1mu}n\mathbb{N}}\) of their respective cumulative distribution functions converges pointwise to the distribution function \(F\) of \(\mu\), wherever \(F\) is continuous. \newline
Hence, it suffices to prove that any distribution function with $F(0)=0$ can be approximated by a sequence of step functions corresponding to lattice distributions with successively smaller steps \(\mathbb{V}\mathrm{ar}\,epsilon_n\searrow 0\). Such a sequence may be constructed in the following way.\newline
For fixed \(n{i\mkern1mu}n\mathbb{N}\) we set
\begin{align*}
F_n(x)=\sum_{k=0}^{i\mkern1mu}nfty F(k\mathbb{V}\mathrm{ar}\,epsilon_n)\mathds{1}_{[k\mathbb{V}\mathrm{ar}\,epsilon_n,(k+1)\mathbb{V}\mathrm{ar}\,epsilon_n)}
\end{align*}
for all \(x{i\mkern1mu}n\mathbb{R}_+\). By construction \(F_n\) is the cumulative distribution function of a lattice distribution as it fulfils \(F_n(0)=0\), is right-continuous and it holds \(\leftim_{x\to{i\mkern1mu}nfty}F_n(x)=1\) due to \(\leftim_{x\to{i\mkern1mu}nfty}F(x)=1\). Choose \(x{i\mkern1mu}n\mathbb{R}_+\) arbitrary such that \(F\) is continuous at \(x\). For all \(n{i\mkern1mu}n\mathbb{N}\) it then holds \(F_n(x)=F(\leftfloor x \rightfloor_{\mathbb{V}\mathrm{ar}\,epsilon_n})\). But then \(\leftim_{n\to{i\mkern1mu}nfty} \leftfloor x \rightfloor_{\mathbb{V}\mathrm{ar}\,epsilon_n} = x\) implies \(\leftim_{n\to{i\mkern1mu}nfty} F(\leftfloor x \rightfloor_{\mathbb{V}\mathrm{ar}\,epsilon_n}) = F(x)\) and the proof is finished.
\end{proof}
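The construction used in the proof is easily implemented; the following Python sketch (ours) approximates an exponential distribution function, chosen purely for illustration, by the step functions \(F_n\). The corresponding lattice measure places mass \(F(k\mathbb{V}\mathrm{ar}\,epsilon)-F((k-1)\mathbb{V}\mathrm{ar}\,epsilon)\) on the point \(k\mathbb{V}\mathrm{ar}\,epsilon\) for \(k\geq1\).
\begin{verbatim}
import math

def F(x):
    """Target c.d.f. on [0, infinity): exponential, rate 1 (illustrative)."""
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

def F_n(x, eps):
    """Step approximation F_n(x) = F(eps * floor(x / eps)), cf. the proof."""
    return F(eps * math.floor(x / eps)) if x >= 0 else 0.0

x = 1.37
for eps in (1.0, 0.1, 0.01):
    print(eps, F_n(x, eps), F(x))
\end{verbatim}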
The idea is now clear. Denote by \(L:=(L_t)_{t\geq 0}\) a spectrally negative compound Poisson process with drift \(c>0\), intensity \(\leftambda>0\) and (spectrally one-sided) jump measure \(\mathbb{P}i\) as in \eqref{eq-specialtype}. Choose a sequence \(\{\mathbb{P}i_n\}_{n{i\mkern1mu}n\mathbb{N}}\) of lattice distributions which converges weakly to \(\mathbb{P}i\) and for \(n{i\mkern1mu}n\mathbb{N}\) denote by \(L^{(n)}:=(L^{(n)}_t)_{t\geq 0}\) the compound Poisson process with the same drift and intensity as \(L\) but with jump measure \(\mathbb{P}i_n\) instead. We keep this notation throughout this section.
By Corollary \rightef{ordering} for fixed \(q\geq0\) and \(n{i\mkern1mu}n\mathbb{N}\) the \(q\)-scale function \(W^{(q)}_n\) of \(L^{(n)}\) is
\begin{align*}
W^{(q)}_n(x)=\frac{1}{c} \sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt^{x}_0 g_k(s,x) ~\mathbb{P}i_n^{*k}(\diff s), \quad x\geq 0.
\end{align*}
To prove that such a representation holds for the \(q\)-scale functions \(W^{(q)}\) of \((L_t)_{t\geq0}\) as well we need to show three things: First, that weak convergence of the jump measures \(\{\mathbb{P}i_n\}_{n{i\mkern1mu}n\mathbb{N}}\) implies pointwise convergence of the respective scale functions \(\{W^{(q)}_n\}_{n{i\mkern1mu}n\mathbb{N}}\), second, that the right-hand side of \eqref{representation} is well-defined for arbitrary jump measures and third, that the limits are consistent, i.e.
\begin{align*}
\leftim_{n\to{i\mkern1mu}nfty} \frac{1}{c} \sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt^{x}_0 g_k(s,x) ~\mathbb{P}i_n^{*k}(\diff s) = \frac{1}{c} \sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt^{x}_0 g_k(s,x) ~\mathbb{P}i^{*k}(\diff s)
\end{align*}
for every \(x\geq0\).
For the first issue we use the notion of equicontinuity. Recall that a family of continuous functions \(\mathcal{F}\subset \mathcal{C}(\mathbb{R}_+)\) is said to be \emph{equicontinuous} if and only if for all \(x\in\mathbb{R}_+\) and all \(\varepsilon>0\) there exists \(\delta=\delta(x,\varepsilon)>0\) such that for all functions \(f\in\mathcal{F}\) and all \(y\in\mathbb{R}_+\) it holds
\[
|x-y|<\delta \implies |f(x)-f(y)|<\varepsilon.
\]
\begin{lemma}\leftabel{equic}
For fixed \(q\geq0\) and fixed \(c,\leftambda>0\) let \(\mathcal{W}^{(q)}_{c,\leftambda}\) be the set of \(q\)-scale functions of spectrally negative compound Poisson processes with parameters \(c\) and \(\leftambda\) and lattice distributed jumps. Then \(\mathcal{W}^{(q)}_{c,\leftambda}\) is equicontinuous.
\end{lemma}
\begin{proof}
As simultaneous scaling does not affect equicontinuity we may, once again, set \(c=1\). Let \(q,\leftambda\geq0\) and consider a function \(W^{(q)}{i\mkern1mu}n\mathcal{W}^{(q)}_{1,\leftambda}\), i.e. \(W^{(q)}\) is the \(q\)-scale function of a compound Poisson process with drift \(c=1\), intensity \(\leftambda>0\) and lattice jump measure \(\mathbb{P}i\) on $(0,{i\mkern1mu}nfty)$. We begin by proving
\begin{align}\leftabel{less_e}
W^{(q)}(x) \lefteq \e^{(q+\leftambda) x}, \quad \text{for all } x\geq 0.
\end{align}
Let $\mathbb{V}\mathrm{ar}\,epsilon>0$ be the step of the lattice distribution $\mathbb{P}i$ and \(x>\mathbb{V}\mathrm{ar}\,epsilon\). As in the proof of Lemma \rightef{recursivformula1} write $\sigma$ for the time of the first jump of the corresponding Lévy process, then it follows
\begin{align*}
\mathbb{E}^\mathbb{V}\mathrm{ar}\,epsilon[\e^{-q\tau_x^+}\mathds{1}_{\{\tau_x^+<\tau_0^-\}}]
&\geq \mathbb{E}^\mathbb{V}\mathrm{ar}\,epsilon[\e^{-q\tau_x^+} \mathds{1}_{\{\tau_x^+<\sigma\}}]
= \e^{(q+\leftambda) (\mathbb{V}\mathrm{ar}\,epsilon-x)},
\end{align*}
by arguments as in the proof of Lemma \rightef{recursivformula1}. Thus via \eqref{scaleviapassage}
\begin{align*}
W^{(q)}(x) \lefteq \e^{(q+\leftambda) (x-\mathbb{V}\mathrm{ar}\,epsilon)}W^{(q)}(\mathbb{V}\mathrm{ar}\,epsilon) = \e^{(q+\leftambda) x}, \quad x>\mathbb{V}\mathrm{ar}\,epsilon.
\end{align*}
For \(x<\mathbb{V}\mathrm{ar}\,epsilon\) we see from \eqref{smalllatsol1} that \eqref{less_e} is clearly fulfilled and hence by continuity we have shown \eqref{less_e}.\\
Set \(W^{(q)}(x)=\e^{(q+\leftambda)x}w_q(x)\) for some continuous function \(w_q\) as in Corollary \rightef{auxiliary}. The inequality \eqref{less_e} then implies \(w_q(x)\lefteq1\) for all \(x\geq0\). Moreover, from \cite[Cor. 2.5]{kuznetsov2011theory} we know that \(w_q{i\mkern1mu}n \mathcal{C}^1((0,{i\mkern1mu}nfty) \setminus \mathrm{supp}(\mathbb{P}i))\).\\ We are going to show that
\begin{align*}
\left| \frac{w_q(x_2)-w_q(x_1)}{x_2-x_1}\right| \lefteq \leftambda
\end{align*}
holds for all \(x_1,x_2{i\mkern1mu}n\mathbb{R}_+\) sufficiently close. From \eqref{recurse3} (but used here for general step size $\mathbb{V}\mathrm{ar}\,epsilon>0$) we know
$$w_q(x)= w_q(\lfloor x\rfloor_\varepsilon) - \lambda\sum_{y\in J} p_y \e^{-(q+\lambda) y}\int^x_{\lfloor x\rfloor_\varepsilon} w_q(z-y) \diff z.$$
Thus for any \(x_1,x_2{i\mkern1mu}n\mathbb{R}_+\) such that \(\leftfloor x_1 \rightfloor_\mathbb{V}\mathrm{ar}\,epsilon=\leftfloor x_2 \rightfloor_\mathbb{V}\mathrm{ar}\,epsilon\)
\begin{align*}
\left|\frac{w_q(x_2)-w_q(x_1)}{x_2-x_1}\right| &=
\left|\leftambda \sum_{y{i\mkern1mu}n J} p_y \e^{-(q+\leftambda) y}\frac1{x_2-x_1}{i\mkern1mu}nt^{x_2}_{x_1} w_q(z-y) \diff z \right| \\
&\lefteq \leftambda \sum_{y{i\mkern1mu}n J} p_y \e^{-(q+\leftambda) y}\frac1{x_2-x_1}{i\mkern1mu}nt^{x_2}_{x_1} | w_q(z-y)|\diff z \\
&\lefteq \leftambda \sum_{y{i\mkern1mu}n J} p_y \e^{-(q+\leftambda) y} \max \{w_q(z)| z \lefteq x_2\}\\
&\lefteq \leftambda,
\end{align*}
i.e. $|w_q(x_2)-w_q(x_1)|\leq \lambda |x_2-x_1|$ (here we have used that $0\leq w_q\leq1$, which holds by \eqref{less_e} and the nonnegativity of $W^{(q)}$), where the Lipschitz constant $\lambda$ does not depend on the step $\varepsilon$.
Moreover, since \(w_q{i\mkern1mu}n \mathcal{C}^1((0,{i\mkern1mu}nfty) \setminus \mathrm{supp}(\mathbb{P}i))\), the left derivatives \(\partial_-w_q(x)\) on lattice elements \(x{i\mkern1mu}n\mathbb{V}\mathrm{ar}\,epsilon\mathbb{N}\setminus\{0\}\) are given by
\begin{align*}
\bigl|\partial_-w_q(x)\bigr|=\lim_{x_n\nearrow x} \bigl|w_q'(x_n)\bigr|\leq \lambda.
\end{align*}
Hence, the one-sided derivatives of \(w_q\) are bounded by $\lambda$ everywhere, and therefore the family $\{w_q: w_q(x)=\e^{-(q+\lambda)x} W^{(q)}(x),\ W^{(q)}\in \mathcal{W}^{(q)}_{1,\lambda}\}$ is equicontinuous.\\
Now, since for any $x_1, x_2\in \mathbb{R}_+$
\begin{align*}
|W^{(q)}(x_1)-W^{(q)}(x_2)|&= \e^{(q+\lambda)x_2} | \e^{(q+\lambda)(x_1-x_2)} w_q(x_1) - w_q(x_2)|\\
&\leq \e^{(q+\lambda)x_2} \left( |(\e^{(q+\lambda)(x_1-x_2)} -1)w_q(x_1)| + | w_q(x_1) - w_q(x_2)|\right)\\
&\leq \e^{(q+\lambda)x_2} \left( |\e^{(q+\lambda)(x_1-x_2)} -1| + | w_q(x_1) - w_q(x_2)|\right)
\end{align*}
this implies equicontinuity of \(\mathcal{W}^{(q)}_{1,\leftambda}\) as well.
\end{proof}
The lemma above now allows us to prove pointwise convergence of scale functions of compound Poisson processes with drift in the case that the jump distributions converge weakly.
\begin{lemma}\leftabel{scale_consistent}
For all \(x\geq0\) it holds
\begin{align*}
\leftim_{n\to{i\mkern1mu}nfty}W_n^{(q)}(x)= W^{(q)}(x).
\end{align*}
\end{lemma}
\begin{proof}
By Lemma \rightef{equic} the sequence \(\{W^{(q)}_n\}_{n{i\mkern1mu}n\mathbb{N}}\) is equicontinuous.
Thus, by \cite[Thm. 1.7.5]{arendt2011vector}, if there are constants \(M,\omega>0\) such that for every \(n{i\mkern1mu}n\mathbb{N}\) it holds \(|W^{(q)}_n(x)|\lefteq M\e^{\omega x}\) and if there exists \(c_0\geq\omega\) such that the sequence $\{ \mathcal{L}(W^{(q)}_n)(\beta) \}_{n{i\mkern1mu}n\mathbb{N}}$
converges pointwise as $n\to{i\mkern1mu}nfty$ for every \(\beta>c_0\), then the sequence \(\{W^{(q)}_n\}_{n{i\mkern1mu}n\mathbb{N}}\) converges uniformly on every compact subset of \(\mathbb{R}_+\) as $n\to{i\mkern1mu}nfty$ and for every \(\beta>c_0\)
\begin{align}\leftabel{laplace}
\leftim_{n\to{i\mkern1mu}nfty} \mathcal{L}\left(W^{(q)}_n\right)(\beta) = \mathcal{L}\left(\leftim_{n\to{i\mkern1mu}nfty}W^{(q)}_n\right)(\beta).
\end{align}
Let \(\psi\) and \(\psi_n\) denote the Laplace exponents of the L\'evy processes \(L\) and \(L^{(n)}\), respectively. Then as \(\mathbb{P}i_n\to \mathbb{P}i\), $n\to{i\mkern1mu}nfty$, weakly, it follows that \(\psi_n(\beta) \to \psi(\beta)\), $n\to{i\mkern1mu}nfty$, for every \(\beta>\sup\{y:\psi(y)=q\}=\mathbb{P}hi(q)\).
Further, from the definition of \(q\)-scale functions we know that its Laplace transforms are given by
\[
\mathcal{L}\left(W^{(q)}\right)(\beta) = \frac1{\psi(\beta)-q}, \quad \beta>\mathbb{P}hi(q),
\]
such that the above implies pointwise convergence of $\{ \mathcal{L}(W^{(q)}_n)(\beta) \}_{n{i\mkern1mu}n\mathbb{N}}$ for all $\beta>\mathbb{P}hi(q)$ as $n\to{i\mkern1mu}nfty$. \\
Let \(\omega>\mathbb{P}hi(q)\). Since \(\mathcal{L}(W^{(q)}_n)(\omega){i\mkern1mu}n\mathbb{R}\) for \(n\) large enough there exist constants \(M>0\) and \(n_0{i\mkern1mu}n\mathbb{N}\) such that \(|W_n^{(q)}(x)|\lefteq M\e^{\omega x}\) holds for every \(n>n_0\) and for all \(x{i\mkern1mu}n\mathbb{R}_+\).\\
Thus \(\{W^{(q)}_n\}_{n{i\mkern1mu}n\mathbb{N}}\) converges uniformly on every compact subset of \(\mathbb{R}_+\) as $n\to{i\mkern1mu}nfty$, and \eqref{laplace} holds for \(\beta>\omega\). But as \(\omega>\mathbb{P}hi(q)\) is arbitrary, \eqref{laplace} holds for all \(\beta>\mathbb{P}hi(q)\), and hence the assertion follows.
\end{proof}
We are now ready to give the proof of our main theorem as follows.
\begin{proof}[Proof of Theorem \rightef{main}.]
In view of Lemma \ref{scale_consistent}, it remains only to establish well-definedness and consistency of the limits.\\
To see the well-definedness of the right hand side of \eqref{mainform}, i.e. of
\begin{align}\leftabel{candidate}
\frac{1}{c} \sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt^{x}_0 g_k(s,x) ~\mathbb{P}i^{*k}(\diff s),
\end{align}
note that for fixed \(x\geq0\) a sufficient condition for the infinite series in \eqref{candidate} to converge is
\begin{align*}
\sum_{k=0}^{{i\mkern1mu}nfty} \max_{0\lefteq s\lefteq x} \left| g_k(s,x) \right| <{i\mkern1mu}nfty
\end{align*}
due to \(\mathbb{P}i^{*k}\) being probability measures for every \(k{i\mkern1mu}n\mathbb{N}\). However, by the definition of \(g_k\) in \eqref{g} we easily find constants \(C_1,C_2>0\) such that
\begin{align*}
\max_{0\lefteq s\lefteq x} \left| g_k(s,x) \right| < C_1\frac{C_2^k}{k!}.
\end{align*}
Hence, the expression \eqref{candidate}
is well-defined for all \(x\geq0\), and since we know
\begin{align}\leftabel{w_approx}
W^{(q)}(x)=\leftim_{n\to{i\mkern1mu}nfty} \frac{1}{c} \sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt^{x}_0 g_k(s,x) ~\mathbb{P}i_n^{*k}(\diff s),
\end{align}
from Lemma \rightef{scale_consistent} it remains to prove that \eqref{candidate} coincides with the limit in \eqref{w_approx}. \\
For fixed \(k{i\mkern1mu}n\mathbb{N}\) we know that \(\mathbb{P}i_n^{*k}\to\mathbb{P}i^{*k}~(n\to{i\mkern1mu}nfty)\) weakly due to Lévy's continuity theorem. Indeed, the characteristic functions \(\hat\mathbb{P}i_n\) of \(\mathbb{P}i_n\) converge pointwise to the characteristic function \(\hat\mathbb{P}i\) of \(\mathbb{P}i\). Hence, we have \(\leftim_{n\to{i\mkern1mu}nfty} \hat\mathbb{P}i_n^k(x) = \hat\mathbb{P}i^k(x)\) for every \(x{i\mkern1mu}n\mathbb{R}\) and thereby \(\mathbb{P}i_n^{*k}\to\mathbb{P}i^{*k}~(n\to{i\mkern1mu}nfty)\) weakly. Thus, it follows
\begin{align*}
\leftim_{n\to{i\mkern1mu}nfty} \frac{1}{c} {i\mkern1mu}nt^{x}_0 g_k(s,x) ~\mathbb{P}i_n^{*k}(\diff s) = \frac{1}{c}{i\mkern1mu}nt^{x}_0 g_k(s,x) ~\mathbb{P}i^{*k}(\diff s)
\end{align*}
for all \(x{i\mkern1mu}n\mathbb{R}\) and every \(k{i\mkern1mu}n\mathbb{N}\) since \(g_k\) is continuous and bounded.
But this, the fact that \(C_1\) and \(C_2\) do not depend on the jump measure, and the convergence $\Pi^{\ast k}([0,x]) \to0$ as $k\to \infty$, allow us, for any $\varepsilon>0$, to find \(n,m\in\mathbb{N}\) large enough such that
\begin{align*}
\leftefteqn{\Bigg| \frac{1}{c} \sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt^{x}_0 g_k(s,x) ~\mathbb{P}i_n^{*k}(\diff s) - \frac{1}{c} \sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt^{x}_0 g_k(s,x) ~\mathbb{P}i^{*k}(\diff s)\Bigg|}\\
& \lefteq \left| \frac{1}{c} \sum_{k=0}^{m} {i\mkern1mu}nt^{x}_0 g_k(s,x) ~\mathbb{P}i_n^{*k}(\diff s) - \frac{1}{c} \sum_{k=0}^{m} {i\mkern1mu}nt^{x}_0 g_k(s,x) ~\mathbb{P}i^{*k}(\diff s)\right| \\
&\quad +
\left| \frac{1}{c} \sum_{k=m+1}^{{i\mkern1mu}nfty} {i\mkern1mu}nt^{x}_0 g_k(s,x) ~\mathbb{P}i_n^{*k}(\diff s)\right| +\left|\frac{1}{c} \sum_{k=m+1}^{{i\mkern1mu}nfty} {i\mkern1mu}nt^{x}_0 g_k(s,x) ~\mathbb{P}i^{*k}(\diff s)\right| \\
&\lefteq \mathbb{V}\mathrm{ar}\,epsilonilon+\mathbb{V}\mathrm{ar}\,epsilonilon+\mathbb{V}\mathrm{ar}\,epsilonilon,
\end{align*}
which finishes the proof.
\end{proof}
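The approximation scheme behind the proof can also be used numerically. The following Python sketch (for illustration only; it is not part of the proofs and the parameter values are arbitrary) approximates an $\mathrm{Exp}(1)$ jump law by lattice laws of decreasing step $\varepsilon$ and evaluates the corresponding scale functions $W^{(q)}_n$ via \eqref{representation}, using the explicit form of $g_k$ written out in the examples of Section \ref{S5}; the computed values stabilise as $\varepsilon\to0$, in accordance with Lemma \ref{scale_consistent}.
\begin{verbatim}
import numpy as np
from math import factorial, floor

def W_lattice(x, c, lam, q, eps, masses):
    """(1/c) sum_k int_0^x g_k(s,x) Pi_n^{*k}(ds) for a lattice law with
    Pi_n({j*eps}) = masses[j-1] and
    g_k(s,x) = (lam/c)^k (s-x)^k exp(-(q+lam)(s-x)/c)/k!."""
    n_pts = floor(x / eps) + 1                 # only atoms in [0, x] matter
    pi = np.zeros(n_pts)
    L = min(len(masses), n_pts - 1)
    pi[1:L + 1] = masses[:L]
    s = np.arange(n_pts) * eps
    total = np.exp((q + lam) * x / c)          # k = 0 term, Pi^{*0} = delta_0
    conv = np.zeros(n_pts); conv[0] = 1.0
    for k in range(1, n_pts):                  # Pi_n^{*k}([0,x]) = 0 for k*eps > x
        conv = np.convolve(conv, pi)[:n_pts]
        g_k = (lam / c) ** k * (s - x) ** k \
              * np.exp(-(q + lam) * (s - x) / c) / factorial(k)
        total += np.sum(g_k * conv)
    return total / c

c, lam, q, x = 1.0, 1.0, 0.1, 2.0
for eps in [0.2, 0.1, 0.05, 0.02]:
    j = np.arange(1, int(40 / eps) + 1)        # lattice approximation of Exp(1)
    masses = np.exp(-(j - 1) * eps) - np.exp(-j * eps)
    print(eps, W_lattice(x, c, lam, q, eps, masses))
\end{verbatim}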
\section{Primitives and Derivatives}\leftabel{S4}
\setcounter{equation}{0}
\subsection{Smoothness of scale functions}\leftabel{S41}
Recall that we denote by \(\overline{\mathbb{P}i}\) the cumulative distribution function of the jump measure \(\mathbb{P}i\) and that we abbreviate differentiation as $\partial$ and use $\partial_x:=\frac{\partial}{\partial x}$.
\begin{proposition}\leftabel{smoothness}
For \(q\geq0\) let \(W^{(q)}\) be the \(q\)-scale function of the spectrally negative compound Poisson process in \eqref{eq-specialtype}.
\begin{enumerate}
{i\mkern1mu}tem[(a)] [First order derivatives] We have
\begin{align}\leftabel{D+}
\partial_{+} W^{(q)}(x)&=\frac{1}{c} \sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt_{[0,x]} \partial_x g_k(s,x) ~\mathbb{P}i^{*k}(\diff s), \quad x\geq 0,\\\leftabel{D-}
\partial_{-} W^{(q)}(x)&=\frac{1}{c}\sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt_{[0,x)} \partial_x g_k(s,x) ~\mathbb{P}i^{*k}(\diff s), \quad x>0.
\end{align}
In particular $W^{(q)}$ is (continuously) differentiable in $x_0\geq 0$ if and only if $\overline{\mathbb{P}i}$ is continuous in $x_0$, and $W^{(q)}{i\mkern1mu}n \mathcal{C}^1(0,{i\mkern1mu}nfty)$ if and only if $\overline{\mathbb{P}i}{i\mkern1mu}n \mathcal{C}^0(0,{i\mkern1mu}nfty)$.
{i\mkern1mu}tem[(b)] [Higher order derivatives]
For \(n\geq 1\) and all $x \geq 0$ such that the derivatives appearing on the right-hand side exist,
\begin{align}\leftabel{derivative}
\partial^{n+1} W^{(q)}(x)&=\frac{1}{c}\left( \sum_{k=1}^{n}\sum_{j=k}^{n}C(k,j) \partial^{n+1-j} \overline{\mathbb{P}i}^{*k}(x) + \sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt_{[0,x]} \partial_x^{n+1} g_k(s,x)\,\diff \overline{\mathbb{P}i}^{*k}( s)\right)
\end{align}
with constants
\begin{equation} \leftabel{propconstants}
C(k,j)=(\partial_x^j g_k(s,x))\Big|_{s=x}=\begin{cases}
\left(\frac{\leftambda }{c}\right)^k{j \choose k}\left(\frac{q+\leftambda}{c}\right)^{j-k}(-1)^k, &j\geq k, \\
0,& j<k,
\end{cases}, \quad k,j{i\mkern1mu}n\mathbb{N}.
\end{equation}
In particular $W^{(q)}$ is twice (continuously) differentiable in $x_0\geq 0$ if and only if $\overline{\mathbb{P}i}$ is (continuously) differentiable in $x_0$. For $n>1$, if $\overline{\mathbb{P}i}{i\mkern1mu}n\mathcal{C}^1(0,{i\mkern1mu}nfty)$, then $W^{(q)}$ is \((n+1)\) times (continuously) differentiable at \(x_0\geq 0\)
if and only if \(\overline{\mathbb{P}i}\) is \(n\) times (continuously) differentiable at \(x_0\).\\
Hence \(W^{(q)} {i\mkern1mu}n \mathcal{C}^{n+1}(0,{i\mkern1mu}nfty)\) if and only if $\overline{\mathbb{P}i}{i\mkern1mu}n\mathcal{C}^n(0,{i\mkern1mu}nfty)$, $n\geq 1$.
\end{enumerate}
\end{proposition}
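Before turning to the proof, part (a) can be illustrated numerically. The following sketch (for illustration only; the two-atom jump law is an arbitrary choice and $g_k$ is taken in the explicit form of Section \ref{S5}) evaluates $W^{(q)}$ from \eqref{representation} and compares one-sided difference quotients at an atom of $\overline{\Pi}$ and at a continuity point.
\begin{verbatim}
import numpy as np
from math import factorial, floor

c, lam, q = 1.0, 1.0, 0.1
pi = np.array([0.0, 0.6, 0.4])    # toy lattice law Pi = 0.6*delta_1 + 0.4*delta_2

def W(x):
    total, conv = np.exp((q + lam) * x / c), np.array([1.0])   # k = 0 term
    for k in range(1, floor(x) + 1):
        conv = np.convolve(conv, pi)
        s = np.arange(len(conv), dtype=float)
        g = (lam / c) ** k * (s - x) ** k \
            * np.exp(-(q + lam) * (s - x) / c) / factorial(k)
        total += np.sum((g * conv)[s <= x])
    return total / c

h = 1e-6
for x0 in [1.0, 1.5]:             # 1.0 is an atom of Pi, 1.5 is not
    print(x0, (W(x0 + h) - W(x0)) / h, (W(x0) - W(x0 - h)) / h)
# at x0 = 1.0 the two one-sided quotients differ, at x0 = 1.5 they agree,
# in accordance with part (a) of the proposition
\end{verbatim}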
To prove Proposition \rightef{smoothness} we will swap differentiation and summation. By an application of the Moore-Osgood Theorem (cf. \cite[Thm. 7.17]{rudin}) this is possible if the summands themselves are differentiable and if the series converges uniformly. These points will therefore be treated in the upcoming three preparatory lemmas.
\begin{lemma}\leftabel{diffg} Consider $g(s,x)$, $0\lefteq s, x$, from \eqref{g}. Then for all $k,n{i\mkern1mu}n\mathbb{N}N$ it holds
\begin{align} \leftabel{dgdx}
\partial_x^n g_k(s,x)=\left(\frac{\leftambda}{c}\right)^k \e^{-\frac{q+\leftambda}{c}(s-x)}\sum_{i=0}^{n\wedge k} {n\choose i} \left(\frac{q+\leftambda}{c}\right)^{n-i}\frac{(s-x)^{k-i}}{(k-i)!}(-1)^i, \quad s,x\geq 0,
\end{align}
such that in particular $(\partial_x^n g_k(s,x))\big|_{s=x}$ is given by \eqref{propconstants} and is independent of \(x\).
\end{lemma}
\begin{proof}
This is immediate from the definition of $g$.
\end{proof}
\begin{lemma}\leftabel{abcd}
Let \(F\) be a cumulative distribution function with \(F(0)=0\).
\begin{enumerate}
{i\mkern1mu}tem[(a)] [First order derivatives] For all \(k{i\mkern1mu}n\mathbb{N}\)
\begin{align}\leftabel{lem_b1}
\partial_{+} {i\mkern1mu}nt_{[0,x]} g_k(s,x)\diff F^{*k}(s)={i\mkern1mu}nt_{[0,x]}\partial_x g_k(s,x) ~\diff F^{*k}(s),\quad x\geq0.
\end{align}
The left derivative differs from the right derivative at \(x_0{i\mkern1mu}n\mathbb{R}_+\) if and only if \(k=1\) and \(F(x_0-)\neq F(x_0)\). In this case it is given by
\begin{align}\leftabel{lem_b2}
\partial_{-} {i\mkern1mu}nt_{[0,x_0]} g_1(s,x_0)\diff F(s)={i\mkern1mu}nt_{[0,x_0)}\partial_x g_1(s,x_0) ~\diff F(s).
\end{align}
{i\mkern1mu}tem[(b)][Higher order derivatives] If \(F{i\mkern1mu}n\mathcal{C}^0\), then for all \(k{i\mkern1mu}n\mathbb{N}\), $n{i\mkern1mu}n\mathbb{N}N$, with $C(k,n)$ as in \eqref{propconstants},
\begin{align}\leftabel{lem_c}
\partial_{\pm} {i\mkern1mu}nt_{[0,x]} \partial_x^n g_k(s,x)\diff F^{*k}(s)&= {i\mkern1mu}nt_{[0,x]}\partial^{n+1}_x g_k(s,x) ~\diff F^{*k}(s) + C(k,n) \partial_{\pm} F^{*k}(x),
\end{align}
for all $x\geq 0$ such that $\partial_{\pm} F^{\ast k}(x)$ exists. In particular for \(x_0{i\mkern1mu}n\mathbb{R}_+\) such that \(\partial F^{\ast k}(x_0)\) does not exist, the function \(x\mapsto {i\mkern1mu}nt_{[0,x]} \partial_x^n g_k(s,x)\diff F^{\ast k}(s)\) is not differentiable at \(x_0\).
\end{enumerate}
\end{lemma}
\begin{proof}
Note first that for all \(n{i\mkern1mu}n\mathbb{N}\) it holds
\begin{align}\leftabel{lem_a}
\partial_x {i\mkern1mu}nt_{[0,x]} \partial^n_x g_0(s,x)\diff F^{*0}(s)&= {i\mkern1mu}nt_{[0,x]}\partial_x^{n+1} g_0(s,x) ~\diff F^{*0}(s), \quad x\geq0,
\end{align} due to \eqref{dgdx} and \(F^{*0}=\mathds{1}_{[0,{i\mkern1mu}nfty)}\). But since \eqref{lem_b1} - \eqref{lem_c} are all covered by \eqref{lem_a} if \(k=0\) we may assume \(k>0\) in the following. \\
To prove (a) consider
\begin{align}
\leftefteqn{\partial_+ {i\mkern1mu}nt_{[0,x]} g_k(s,x)\diff F^{*k}(s)} \nonumber\\
&=\leftim_{h\to0+}\frac1{h}\left( {i\mkern1mu}nt_{[0,x+h]} g_k(s,x+h) ~ \diff F^{*k}(s) -{i\mkern1mu}nt_{[0,x]} g_k(s,x) ~ \diff F^{*k}(s)\right) \nonumber \\
&=\leftim_{h\to0+}\frac1{h}\left( {i\mkern1mu}nt_{[0,x+h]} g_k(s,x+h) ~ \diff F^{*k}(s) -{i\mkern1mu}nt_{[0,x]} g_k(s,x+h) ~ \diff F^{*k}(s) \right) \nonumber \\
&\quad +\leftim_{h\to0+} {i\mkern1mu}nt_{[0,x]} \frac{ g_k(s,x+h)- g_k(s,x) }{h} ~ \diff F^{*k}(s) \nonumber \\
&= \leftim_{h\to0+} \frac1h{i\mkern1mu}nt_{(x,x+h]} g_k(s,x+h)~\diff F^{*k}(s) + {i\mkern1mu}nt_{[0,x]}\partial_x g_k(s,x) ~\diff F^{*k}(s). \leftabel{diffdecomp}
\end{align}
The first term vanishes since we may estimate for \(h<1\), $k\geq 1$
\begin{align*}
\leftim_{h\to0+} \frac1h{i\mkern1mu}nt_{(x,x+h]}|g_k(s,x+h)|~\diff F^{*k}(s)&\lefteq \leftim_{h\to0+} \frac{h^k}h{i\mkern1mu}nt_{(x,x+h]}\frac{(\leftambda/c)^k }{k!}~\diff F^{*k}(s) =0,
\end{align*}
where for $k=1$ we use the fact that \(F^{*k}\) is right-continuous. This proves \eqref{lem_b1}. An analogous argument for the left derivative leads to
\begin{align*}
\partial_- {i\mkern1mu}nt_{[0,x]} g_k(s,x)\diff F^{*k}(s)&= \leftim_{h\to0+} \frac1h{i\mkern1mu}nt_{(x-h,x]}g_k(s,x-h)~\diff F^{*k}(s) + {i\mkern1mu}nt_{[0,x]}\partial_x g_k(s,x) ~\diff F^{*k}(s).
\end{align*}
Again the first term vanishes for \(k>1\) regardless of the continuity of \(F\).
However for \(k=1\) it holds
\begin{align*}
\leftim_{h\to0+} \frac1h{i\mkern1mu}nt_{(x-h,x]}g_1(s,x-h)~\diff F(s)& =\frac{\leftambda }{c}(F(x)-F(x-)) = -(\partial_x g_1(s,x))\Big|_{s=x} (F(x)-F(x-))
\end{align*}
by \eqref{propconstants}, which proves \eqref{lem_b2}.\\
For (b), by the same decomposition as in \eqref{diffdecomp}
\begin{align*}
\partial_+ {i\mkern1mu}nt_{[0,x]} \partial_x^n g_k(s,x)\diff F^{*k}(s)
&= \leftim_{h\to0+} \frac1h{i\mkern1mu}nt_{(x,x+h]} \partial_x^n g_k(s,x+h)~\diff F^{*k}(s) + {i\mkern1mu}nt_{[0,x]}\partial_x^{n+1} g_k(s,x) ~\diff F^{*k}(s).\end{align*}
To determine the limit we use partial integration and obtain
\begin{align*}
\lefteqn{ \lim_{h\to0+} \frac1h\int_{(x,x+h]} \partial_x^n g_k(s,x+h)~\diff F^{\ast k}(s)} \\
&=\lim_{h\to0+} \frac1h \left( \left[\partial_x^n g_k(s,x+h) F^{\ast k}(s) \right]_{s=x}^{x+h} + \int_{(x,x+h]} \partial^{n+1}_x g_k(s,x+h) F^{\ast k}(s) \diff s \right)\\
&=\lim_{h\to0+} \frac{\partial_x^n g_k(x+h,x+h)\cdot F^{\ast k}(x+h) - \partial_x^n g_k(x,x+h)\cdot F^{\ast k}(x) }{h} + \partial^{n+1}_x g_k(x,x) F^{\ast k}(x) \\
&= C(k,n) \lim_{h\to0+} \frac{F^{\ast k}(x+h) - F^{\ast k}(x)}{h} \\
&\quad + F^{\ast k}(x)\left(\lim_{h\to 0+} \frac{ \partial_x^n g_k(x+h,x+h) - \partial_x^n g_k(x,x+h) }{h} + \partial^{n+1}_x g_k(x,x) \right) \\
&= C(k,n)\, \partial_{+} F^{\ast k}(x),
\end{align*}
since $\partial_s \partial^n_x g_k(s,x) = -\partial^{n+1}_x g_k(s,x)$, as can be seen from \eqref{dgdx}, and since $\partial_x^n g_k(x+h,x+h)=C(k,n)$ does not depend on $h$. The same computation for the left derivative finishes the proof.
\end{proof}
\begin{lemma}\leftabel{uniformconv}
Let \(F\) be a cumulative distribution function with \(F(0)=0\) and let \(n{i\mkern1mu}n\mathbb{N}\). Then the series
\begin{align*}
\sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt_{[0,x]} \partial_x^n g_k(s,x) ~\diff F^{*k}(s)
\end{align*}
converges uniformly on every compact subset of \(\mathbb{R}_+\).
\end{lemma}
\begin{proof}
This follows immediately from the form of $\partial_x^n g_k(s,x)$ determined in Lemma \ref{diffg} and the fact that $F^{\ast k}$ is a cumulative distribution function for every $k\in\mathbb{N}$.
\end{proof}
We are now ready to prove Proposition \ref{smoothness} by simply bringing the above arguments into the right order.
\begin{proof}[Proof of Proposition \rightef{smoothness}.]
Note first that \eqref{D+} and \eqref{D-} follow from \eqref{lem_b1} and \eqref{lem_b2} together with Lemma \rightef{uniformconv} and \cite[Thm. 7.17]{rudin}. Further, from \eqref{D+} and \eqref{D-} it is clear that $W^{(q)}{i\mkern1mu}n \mathcal{C}^1(0,{i\mkern1mu}nfty)$ if and only if $\overline{\mathbb{P}i}{i\mkern1mu}n \mathcal{C}^0(0,{i\mkern1mu}nfty)$. \\
To prove \eqref{derivative} we use induction on $n{i\mkern1mu}n\mathbb{N}N$.
Let \(n=1\). Then using first Equations \eqref{D+}, \eqref{D-}, then \cite[Thm. 7.17]{rudin}, and Lemma \rightef{uniformconv}, and finally \eqref{lem_c},
\begin{align*}
\partial_x^2 W^{(q)}(x) &= \partial_x \lefteft(\frac{1}{c} \sum_{k=0}^{i\mkern1mu}nfty {i\mkern1mu}nt_{[0,x]} \partial_x g_k(s,x) \,\mathbb{P}i^{\ast k} (\diff s) \rightight)\\
&= \frac{1}{c} \sum_{k=0}^{i\mkern1mu}nfty \partial_x\left( {i\mkern1mu}nt_{[0,x]} \partial_x g_k(s,x) \,\mathbb{P}i^{\ast k} (\diff s)\right)\\
&= \frac{1}{c} \lefteft(\sum_{k=0}^{i\mkern1mu}nfty{i\mkern1mu}nt_{[0,x]} \partial_x^2 g_k(s,x) \mathbb{P}i^{\ast k} (\diff s) + C(1,1) \, \partial \overline{\mathbb{P}i}(x) \rightight),
\end{align*}
since $C(k,1)=0$ for all $k>1$. This is \eqref{derivative} for $n=1$. Furthermore it follows immediately from the computation above, that $W^{(q)}$ is twice (continuously) differentiable in $x_0$ if and only if $\overline{\mathbb{P}i}$ is (continuously) differentiable in $x_0$. Hence $W^{(q)}{i\mkern1mu}n \mathcal{C}^2(0,{i\mkern1mu}nfty)$ if and only if $\overline{\mathbb{P}i}{i\mkern1mu}n \mathcal{C}^1(0,{i\mkern1mu}nfty)$.\\
Now let $n>1$ and assume that \eqref{derivative} holds for $n-1$. Then by differentiating the two terms separately and assuming existence of all appearing derivatives
\begin{align*}
\partial^{n+1} W^{(q)}(x)&=\frac{1}{c}\left(\partial_x\left( \sum_{k=1}^{n-1}\sum_{j=k}^{n-1}C(k,j) \partial^{n-j} \overline{\mathbb{P}i}^{*k}(x)\right) + \partial_x\left(\sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt_{[0,x]} \partial_x^n g_k(s,x) ~\diff \overline{\mathbb{P}i}^{*k}( s)\right)\right),
\end{align*}
where we obtain for the first term simply
\begin{align*}
\partial_x \sum_{k=1}^{n-1}\sum_{j=k}^{n-1}C(k,j) \partial^{n-j} \overline{\mathbb{P}i}^{*k}(x) = \sum_{k=1}^{n-1}\sum_{j=k}^{n-1}C(k,j) \partial^{n-j+1} \overline{\mathbb{P}i}^{*k}(x).
\end{align*}
For the second term using \cite[Thm. 7.17]{rudin}, Lemma \rightef{uniformconv}, and \eqref{lem_c} as before we obtain
\begin{align*}
\partial_x\sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt_{[0,x]} \partial_x^n g_k(s,x) ~\diff \overline{\mathbb{P}i}^{*k}( s) &= \sum_{k=0}^{{i\mkern1mu}nfty} \left( {i\mkern1mu}nt_{[0,x]} \partial_x^{n+1} g_k(s,x) ~\diff \overline{\mathbb{P}i}^{*k}( s)+ C(k,n)\, \partial \overline{\mathbb{P}i}^{*k}(x) \right)\\
&= \sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt_{[0,x]} \partial_x^{n+1} g_k(s,x) ~\diff \overline{\mathbb{P}i}^{*k}( s)+ \sum_{k=1}^{n} C(k,n) \,\partial \overline{\mathbb{P}i}^{*k}(x),
\end{align*}
since the second term vanishes for all but finitely many \(k{i\mkern1mu}n\mathbb{N}\) due to \eqref{propconstants}.
Adding the two derivatives together we finally obtain
\begin{align*}
\partial^{n+1} W^{(q)}(x)&=\frac{1}{c}\left(\sum_{k=1}^{n}\sum_{j=k}^{n}C(k,j) \partial^{n+1-j} \overline{\mathbb{P}i}^{*k}(x) + \sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt_{[0,x]} \partial_x^{n+1} g_k(s,x) ~\diff \overline{\mathbb{P}i}^{*k}( s)\right).
\end{align*}
This proves \eqref{derivative} for $n$ and exactly all $x\geq 0$ such that $\partial^{j} \overline{\mathbb{P}i}^{*k}(x)$ exists for $k= 1,\leftdots, n$, $j=1, \leftdots, n-k+1$. \\
Whenever $n>1$ and $\overline{\mathbb{P}i} {i\mkern1mu}n \mathcal{C}^1(0,{i\mkern1mu}nfty)$ then there exists a continuous density $\pi$ of $\overline{\mathbb{P}i}$ such that $\mathbb{P}i^{\ast k}(\diff x) = \pi^{\ast k}(x) \diff x$ and $\overline{\mathbb{P}i}^{\ast k}(x)= {i\mkern1mu}nt_0^x \pi^{\ast k}(y) dy$.
This, however, implies that $\overline{\mathbb{P}i}^{\ast k}$, $k\geq 1$, is $j$ times (continuously) differentiable in $x_0$ if $\overline{\mathbb{P}i}$ is $j$ times (continuously) differentiable in $x_0$, since
$$\partial^{j} \overline{\mathbb{P}i}^{*k}(x_0) = \partial^{j-1} \pi^{\ast k} (x_0) = \pi^{\ast (k-1)} \ast ( \partial^{j-1}\pi ) (x_0),\quad k\geq 1.$$
Together with the above we observe that in this case $W^{(q)}$ is \((n+1)\) times (continuously) differentiable at \(x_0\geq 0\) if and only if \(\overline{\mathbb{P}i}\) is \(n\) times (continuously) differentiable at \(x_0\).\\
The final conclusion $W^{(q)}{i\mkern1mu}n \mathcal{C}^{n+1}(0,{i\mkern1mu}nfty)$ if and only if $\overline{\mathbb{P}i}{i\mkern1mu}n \mathcal{C}^n(0,{i\mkern1mu}nfty)$ is now immediate.
\end{proof}
\subsection{Primitives of q-scale functions}\leftabel{S42}
For lattice distributed jumps it is easy to compute the primitives of the \(q\)-scale functions from the representation \eqref{representation}, since it consists of only finitely many summands, as shown in Lemma \ref{lem_primderiv}. The subsequent approximation arguments, which lead to the general formula given in Proposition \ref{diff_and_int} below, are almost the same as the ones used in Section \ref{S33}, and we confine ourselves to pointing out the differences.
\begin{lemma}\leftabel{lem_primderiv}
In the setting of Proposition \rightef{general_lattice} it holds
\begin{align}\leftabel{prim}
{i\mkern1mu}nt_0^x W^{(q)}(y) \diff y &=\frac{1}{c} \sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt_{[0,x)}{i\mkern1mu}nt_s^x g_k(s,y) \diff y ~\mathbb{P}i^{*k}(\diff s), \quad x\geq 0.
\end{align}
\end{lemma}
\begin{proof}
We prove \eqref{prim} by induction. For \(x{i\mkern1mu}n[0,\mathbb{V}\mathrm{ar}\,epsilon)\) it holds
\[
W^{(q)}(x)= g_0(0,x).
\]
Thus, \eqref{prim} is valid for \(x{i\mkern1mu}n[0,\mathbb{V}\mathrm{ar}\,epsilon)\) and by continuity for \(x=\mathbb{V}\mathrm{ar}\,epsilon\) as well. Suppose that it holds on \([0,z]\) for some \(z{i\mkern1mu}n\mathbb{V}\mathrm{ar}\,epsilon\mathbb{N}\) and let \(x{i\mkern1mu}n(z,z+\mathbb{V}\mathrm{ar}\,epsilon)\). We begin by decomposing
\[
{i\mkern1mu}nt_0^x W^{(q)}(y) \diff y = {i\mkern1mu}nt_0^z W^{(q)}(y) \diff y+{i\mkern1mu}nt_z^x W^{(q)}(y) \diff y.
\]
Using \eqref{almostready} we may write the second term as
\begin{align*}
{i\mkern1mu}nt_z^x W^{(q)}(y) \diff y &= {i\mkern1mu}nt_z^x \sum_{k=0}^{z/\mathbb{V}\mathrm{ar}\,epsilon} {i\mkern1mu}nt_{[0,z]} g_k(s,y) ~\mathbb{P}i^{*k}(\diff s) \diff y= \sum_{k=0}^{z/\mathbb{V}\mathrm{ar}\,epsilon} {i\mkern1mu}nt_{[0,z]} {i\mkern1mu}nt_z^x g_k(s,y)\diff y ~\mathbb{P}i^{*k}(\diff s).
\end{align*}
Together with the induction hypothesis we obtain
\begin{align*}
{i\mkern1mu}nt_0^x W^{(q)}(y) \diff y &= \sum_{k=0}^{z/\mathbb{V}\mathrm{ar}\,epsilon} {i\mkern1mu}nt_{[0,z)}{i\mkern1mu}nt_s^z g_k(s,y) \diff y ~\mathbb{P}i^{*k}(\diff s)+\sum_{k=0}^{z/\mathbb{V}\mathrm{ar}\,epsilon} {i\mkern1mu}nt_{[0,z]} {i\mkern1mu}nt_z^x g_k(s,y)\diff y ~\mathbb{P}i^{*k}(\diff s)\\
&=\sum_{k=0}^{z/\mathbb{V}\mathrm{ar}\,epsilon}{i\mkern1mu}nt_{[0,z]}{i\mkern1mu}nt_s^x g_k(s,y) \diff y ~\mathbb{P}i^{*k}(\diff s) \\
&= \sum_{k=0}^{{i\mkern1mu}nfty}{i\mkern1mu}nt_{[0,x)}{i\mkern1mu}nt_s^x g_k(s,y) \diff y ~\mathbb{P}i^{*k}(\diff s).
\end{align*}
Thus, \eqref{prim} holds on \([0,z+\mathbb{V}\mathrm{ar}\,epsilon]\) by continuity and by induction on \(\mathbb{R}_+\).
\end{proof}
The obtained formula is also valid in the general context as shown by the following proposition.
\begin{proposition}\leftabel{diff_and_int}
For \(q\geq0\) let \(W^{(q)}\) be the \(q\)-scale function of the spectrally negative compound Poisson process in \eqref{eq-specialtype}. Then its primitive is given by \eqref{prim}.
\end{proposition}
\begin{proof}
As mentioned above, the proof of \eqref{prim} for arbitrary jump distributions is almost identical to the approximation part of the proof of Theorem \rightef{main}. Indeed, it remains to prove that weak convergence of the jump measures \(\{\mathbb{P}i_n\}_{n{i\mkern1mu}n\mathbb{N}}\) implies pointwise convergence of the respective functions \({i\mkern1mu}nt W_n^{(q)}(y) \diff y\) since both well-definedness and consistency can be shown in exactly the same way as for \(W^{(q)}\) itself.\\
Let \(L, W^{(q)}\) and \(L^{(n)}, W^{(q)}_n, n\in\mathbb{N}\), be as in Section \ref{S33}. By Lemma \ref{scale_consistent} we already know that \(W^{(q)}_n \to W^{(q)}\) uniformly on any compact subset of \(\mathbb{R}_+\) as \(n\to\infty\). That, however, immediately implies
\begin{align*}
\leftim_{n\to{i\mkern1mu}nfty} {i\mkern1mu}nt_0^x W_n^{(q)}(y) \diff y = {i\mkern1mu}nt_0^x W^{(q)}(y) \diff y
\end{align*}
for \(x\geq0\) by dominated convergence, and hence \eqref{prim} holds for arbitrary jump measures.
\end{proof}
\subsection{Further representations}\leftabel{S43}
The formulae for $W^{(q)}$ in Theorem \rightef{main}, its directional first derivatives in Proposition \rightef{smoothness} and its primitive in Proposition \rightef{diff_and_int} share the form
\begin{align*}
V(x):=\sum_{k=0}^{{i\mkern1mu}nfty} \frac{\leftambda^k}{k!} {i\mkern1mu}nt_0^{i\mkern1mu}nfty f_k(s,x) ~\mathbb{P}i^{*k}(\diff s)
\end{align*}
for some functions \(f_k\). Since for \(k{i\mkern1mu}n\mathbb{N}\) it holds \(\sum_{i=0}^{k}\xi_i \sim \mathbb{P}i^{*k}\) and \(\mathbb{P}(N_{t}=k)=\frac{(\leftambda t)^k}{k!}\e^{-\leftambda t}\) it thus follows that
\begin{align*}
V(x)&= \e^{\leftambda t}\sum_{k=0}^{i\mkern1mu}nfty \mathbb{E}\left[t^{-k}f_{N_t}\left(\sum_{i=0}^{N_t}\xi_i,x\right)~\middle|~N_t=k \right]\mathbb{P}(N_t=k) =\e^{\leftambda t}\mathbb{E}\left[t^{-N_t}f_{N_t}\left(\sum_{i=0}^{N_t}\xi_i ,x\right)\right].
\end{align*}
For example, to obtain a representation for the $q$-scale function itself we have to choose $$f_k(s,x)=\frac{1}{c} \frac{k!}{\lambda^k} g_k(s,x) \mathds{1}_{[0,x]}(s) =\frac{1}{c^{k+1}} \e^{-\frac{q+\lambda}{c}(s-x)} (s-x)^k \mathds{1}_{[0,x]}(s), $$ which yields
\begin{align*}
W^{(q)}(x)&= \frac{\e^{\leftambda t}}{c} \mathbb{E}E\lefteft[ (c t)^{-N_t} \e^{-\frac{q+\leftambda}{c}(\sum_{i=0}^{N_t}\xi_i-x)} \left(\sum_{i=0}^{N_t}\xi_i-x\right)^{N_t} \mathds{1}_{\sum_{i=0}^{N_t}\xi_i \lefteq x} \rightight]\\
&=\frac{\e^{\leftambda t}}{c} \mathbb{E}^x \left[ \e^{-\frac{q+\leftambda}{c}(c t-L_t)}\left(\frac{c t-L_t}{c t}\right)^{N_t} \mathds{1}_{\{L_t \geq c t \}} \right],
\end{align*}
for every \(q,x,t\geq0\). In particular we may choose $t=1$ to get
\begin{equation}\leftabel{eq-qscaleexpvalue}
W^{(q)}(x) = \frac{\e^{-q}}{c} \mathbb{E}^x\left[ \e^{\frac{q+\leftambda}{c} L_1}\left(1-\frac{L_1}{c}\right)^{N_1} \mathds{1}_{\{L_1\geq c\}} \right], \quad q,x\geq 0.
\end{equation}
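Formula \eqref{eq-qscaleexpvalue} immediately suggests a Monte Carlo evaluation of $W^{(q)}$. The following sketch is an illustration only; the exponential jump law, sample size and parameter values are arbitrary choices.
\begin{verbatim}
import numpy as np

def scale_function_mc(x, c, lam, q, sample_jump, n_paths=200000, seed=0):
    """Monte Carlo estimate of W^{(q)}(x) via (eq-qscaleexpvalue):
    under P^x, L_1 = x + c - (sum of N_1 i.i.d. jumps), N_1 ~ Poisson(lam)."""
    rng = np.random.default_rng(seed)
    N = rng.poisson(lam, size=n_paths)
    jump_sums = np.array([sample_jump(n, rng).sum() if n > 0 else 0.0 for n in N])
    L1 = x + c - jump_sums
    payoff = np.where(L1 >= c,
                      np.exp((q + lam) * L1 / c) * (1.0 - L1 / c) ** N,
                      0.0)
    return np.exp(-q) / c * payoff.mean()

exp_jumps = lambda size, rng: rng.exponential(1.0, size=size)
print(scale_function_mc(x=2.0, c=1.0, lam=1.0, q=0.1, sample_jump=exp_jumps))
\end{verbatim}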
The following corollary collects the corresponding representations for the derivatives and primitives of the $q$-scale functions. The proof works as the one given for \eqref{eq-qscaleexpvalue} and is omitted.
\begin{corollary}\leftabel{expected_values}
For \(q\geq0\) let \(W^{(q)}\) be the \(q\)-scale function of the spectrally negative compound Poisson process in \eqref{eq-specialtype}, then for all $x\geq 0$
\begin{align*}
\partial_{+} W^{(q)}(x)&= \frac{\e^{-q}}{c^2} \mathbb{E}^x\left[\e^{\frac{q+\leftambda}{c} L_1} \left(1-\frac{L_1}{c}\right)^{N_{1}-1}\left( (q+\leftambda)\left(1-\frac{L_1}{c}\right) - N_1\right)\mathds{1}_{\{L_1 \geq c \}}\right],\\
\partial_{-} W^{(q)}(x)&=\frac{\e^{-q}}{c^2} \mathbb{E}^x\left[\e^{\frac{q+\leftambda}{c} L_1} \left(1-\frac{L_1}{c}\right)^{N_{1}-1}\left( (q+\leftambda)\left(1-\frac{L_1}{c}\right) - N_1\right)\mathds{1}_{\{L_1 > c \}}\right], \\
{i\mkern1mu}nt_0^x W^{(q)}(y)\diff y &= \e^\leftambda \mathbb{E}^x\left[ {i\mkern1mu}nt_0^{\frac{L_1}{c}-1} \e^{(q+\leftambda)y} (-y)^{N_1} \diff y \mathds{1}_{\{L_1 >c\}} \right].\\
\end{align*}
\end{corollary}
\section{Examples}\leftabel{S5}
\setcounter{equation}{0}
As discussed in \cite{hubalek2010} it is of great use to have examples of spectrally negative Lévy processes at hand for which the \(q\)-scale functions can be computed explicitly. In this section we apply our results to expand the library of those cases.\\
In the following let \((L_t)_{t\geq0}\) be as in \eqref{eq-specialtype}. We begin by presenting two discrete examples for which the convolutions are easy to find, namely geometrically distributed and zero-truncated Poisson distributed jumps.
\begin{example}[Geometrically distributed jumps]
For fixed \(p{i\mkern1mu}n(0,1)\) let
$\mathbb{P}i(\cdot)= \sum_{n=1}^{i\mkern1mu}nfty (1-p)^{n-1}p\, \delta_{n}(\cdot)$
be a geometric distribution, then it is well-known that the measure \(\mathbb{P}i^{*k}\) describes a negative binomial distribution, i.e.
\begin{align*}
\mathbb{P}i^{\ast k}(\cdot)=\sum_{n=k}^{i\mkern1mu}nfty { n-1 \choose n-k} (1-p)^{n-k}p^k\, \delta_n(\cdot), \quad k{i\mkern1mu}n \mathbb{N}N.
\end{align*}
Hence for \(q\geq0\) the \(q\)-scale function of \((L_t)_{t\geq0}\) follows from Theorem \rightef{main} as
\begin{align*}
W^{(q)}(x)&= \frac{1}{c} \sum_{k=0}^{{i\mkern1mu}nfty} {i\mkern1mu}nt_{[0,x]} g_k(s,x) ~\mathbb{P}i^{*k}(\diff s)
= \frac{1}{c} \sum_{k=0}^{\leftfloor x \rightfloor} \sum_{n=k}^{\leftfloor x \rightfloor} g_k(n,x) { n-1 \choose n-k} (1-p)^{n-k}p^k \\
&= \frac{1}{c} \sum_{k=0}^{\leftfloor x \rightfloor} \sum_{n=k}^{\leftfloor x \rightfloor}\frac{(\leftambda/c)^k }{k!} { n-1 \choose n-k} \e^{-\frac{q+\leftambda}{c}(n-x)} (1-p)^{n-k}(p(n-x))^k,\qquad x\geq 0.
\end{align*}
\end{example}
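A direct numerical transcription of this double sum reads as follows (an illustrative sketch only; the parameter values are arbitrary). Note that $\binom{n-1}{n-k}=\binom{n-1}{k-1}$ for $k\geq1$, and the $k=0$ term reduces to $\e^{\frac{q+\lambda}{c}x}$ since $\Pi^{*0}=\delta_0$.
\begin{verbatim}
import numpy as np
from math import comb, factorial, floor, exp

def W_geometric(x, c, lam, q, p):
    """q-scale function for drift c, intensity lam and Geometric(p) jumps,
    via the double sum above (the k = 0 term equals exp((q+lam)x/c))."""
    K = floor(x)
    total = exp((q + lam) * x / c)
    for k in range(1, K + 1):
        for n in range(k, K + 1):
            nb = comb(n - 1, k - 1) * (1 - p) ** (n - k) * p ** k
            total += (lam / c) ** k / factorial(k) \
                     * exp(-(q + lam) * (n - x) / c) * (n - x) ** k * nb
    return total / c

print(W_geometric(x=3.7, c=1.0, lam=0.8, q=0.05, p=0.4))
\end{verbatim}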
\begin{example}[Zero-truncated Poisson distributed jumps]
Here we denote by \(\mathbb{P}i\) the distribution of a Poisson random variable \(\xi\) with parameter \(\mu>0\), given \(\xi>0\). To apply Theorem \rightef{main} the probability mass functions \(f_k\) of \(\mathbb{P}i^{*k}, k{i\mkern1mu}n\mathbb{N}\), are needed. For \(k\neq0\) they are given by
\begin{align}\leftabel{ztp}
f_k(n)=\frac{\mathbb{P}\left(\sum_{i=1}^k \xi_i = n \cap \forall i \lefteq k : \xi_i>0 \right)}{\mathbb{P}\left(\forall i \lefteq k : \xi_i >0\right)}=: \frac{z(n,k)}{(1-\e^{-\mu})^k} , \quad n{i\mkern1mu}n\mathbb{N}N\setminus\{0\},
\end{align}
where \(\xi_i, i\in\mathbb{N}\setminus\{0\},\) are i.i.d.\ copies of \(\xi\). For \(k=0\) we have by definition \(\Pi^{*0}=\delta_0\). The numerator \(z(n,k)\) in \eqref{ztp} can be computed recursively as follows. Obviously it holds \(z(n,k)=0\) if \(n<k\in\mathbb{N}\setminus\{0\}\) and \(z(n,1)=\frac{\mu^n}{n!}\e^{-\mu}\) for \(n\in\mathbb{N}\setminus\{0\}\). Everywhere else we have
\begin{align*}
z(n,k)&=\mathbb{P}\Bigg(\sum_{i=1}^k \xi_i = n\Bigg)- \sum_{\ell=1}^{k-1}{k\choose \ell} \mathbb{P}\Bigg(\sum_{i=1}^{\ell} \xi_i =0\Bigg)z(n,k-\ell)\\
&= \frac{(k\mu)^n}{n!}\e^{-k\mu} - \sum_{\ell=1}^{k-1} {k \choose \ell}\e^{-\ell\mu}z(n,k-\ell).
\end{align*}
Further setting $z(0,0)=1$ and $z(n,0)=0, n{i\mkern1mu}n\mathbb{N}N\setminus\{0\}$, the $q$-scale functions of $(L_t)_{t\geq 0}$ are then given by
$$W^{(q)}(x)=\frac{1}{c} \sum_{k=0}^{\leftfloor x \rightfloor} \sum_{n=k}^{\leftfloor x \rightfloor}\frac{(\leftambda/c)^k }{k!} \e^{-\frac{q+\leftambda}{c}(n-x)} \left(\frac{n-x}{1-\e^{-\mu}}\right)^k z(n,k),\qquad x\geq 0.$$
\end{example}
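The recursion for $z(n,k)$ is easily implemented; the following sketch (illustration only, arbitrary parameter values) evaluates the resulting formula for $W^{(q)}$.
\begin{verbatim}
import numpy as np
from math import comb, factorial, exp, floor
from functools import lru_cache

def W_zt_poisson(x, c, lam, q, mu):
    """q-scale function for zero-truncated Poisson(mu) jumps via the
    recursion for z(n,k) given above."""
    @lru_cache(maxsize=None)
    def z(n, k):
        if k == 0:
            return 1.0 if n == 0 else 0.0
        if n < k:
            return 0.0
        if k == 1:
            return mu ** n * exp(-mu) / factorial(n)
        out = (k * mu) ** n * exp(-k * mu) / factorial(n)
        return out - sum(comb(k, l) * exp(-l * mu) * z(n, k - l)
                         for l in range(1, k))
    K = floor(x)
    total = 0.0
    for k in range(0, K + 1):
        for n in range(k, K + 1):
            total += (lam / c) ** k / factorial(k) \
                     * exp(-(q + lam) * (n - x) / c) \
                     * ((n - x) / (1.0 - exp(-mu))) ** k * z(n, k)
    return total / c

print(W_zt_poisson(x=3.2, c=1.0, lam=0.7, q=0.05, mu=1.5))
\end{verbatim}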
This section would be incomplete without illustrating the applicability of our results for continuous jump distributions. In \cite{Egami2010PhasetypeFO} explicit representations of the $q$-scale functions have been derived for Lévy processes with phase-type distributed jumps, thus including compound Poisson processes with Erlang distributed jumps. Theorem \rightef{main} now admits a representation for those processes where the jumps follow an arbitrary Gamma distribution.
\begin{example}[Gamma distributed jumps]
For \(i=1,2,\leftdots\) let \(\xi_i\sim\Gamma(\alpha,\beta)\) with parameters \(\alpha,\beta>0\), i.e. it holds
\[
\mathbb{P}i(\diff s)= \frac{\beta^\alpha}{\Gamma(\alpha)}s^{\alpha-1}\e^{-\beta s} \mathds{1}_{\{s\geq0\}}\diff s,
\]
and
\[
\mathbb{P}i^{*k}(\diff s)=\frac{\beta^{k\alpha}}{\Gamma(k\alpha)}s^{k\alpha-1}\e^{-\beta s} \mathds{1}_{\{s\geq0\}} \diff s, \quad k{i\mkern1mu}n\mathbb{N}\setminus\{0\},
\]
and \(\mathbb{P}i^{*0}=\delta_0\). Applying Theorem \rightef{main} to obtain the \(q\)-scale function of \((L_t)_{t\geq0}\) we compute
\begin{align*}
W^{(q)}(x)&=\frac1c\,\e^{\frac{q+\lambda}{c} x}+\frac1c\sum_{k=1}^{\infty} \int_0^x \frac{(\lambda/c)^k}{k!}\e^{-\frac{q+\lambda}{c}(s-x)}(s-x)^k \frac{\beta^{k\alpha}}{\Gamma(k\alpha)} s^{k\alpha-1}\e^{-\beta s} \diff s \\
&=\frac{\e^{\frac{q+\lambda}{c} x}}{c}\Big( 1+\sum_{k=1}^{\infty}\frac{(\lambda/c)^k}{k!}\frac{\beta^{k\alpha}}{\Gamma(k\alpha)} \int_0^x \e^{-(\frac{q+\lambda}{c} +\beta)s}(s-x)^k s^{k\alpha-1} \diff s\Big).
\end{align*}
In the following we denote \(C_k:=\frac{(\leftambda/c)^k}{k!}\frac{\beta^{k\alpha}}{\Gamma(k\alpha)}\) and \(\rightho:=\frac{q+\leftambda}{c}+\beta\). Applying the binomial theorem we obtain
\begin{align*}
W^{(q)}(x)&=\frac{\e^{\frac{q+\lambda}{c} x}}{c}\left(1+\sum_{k=1}^{\infty}C_k \sum_{\ell=0}^{k}{k\choose \ell}(-x)^{k-\ell}\int_0^x \e^{-\rho s} s^{k\alpha+\ell-1} \diff s\right) \\
&=\frac{\e^{\frac{q+\lambda}{c} x}}{c}\left(1+\sum_{k=1}^{\infty}C_k \sum_{\ell=0}^{k}{k\choose \ell}(-x)^{k-\ell} \rho^{-(k\alpha+\ell)}\gamma(k\alpha+\ell,\rho x)\right), \quad x\geq 0,
\end{align*}
where \(\gamma(\cdot,\cdot)\) is the lower incomplete gamma function.
\end{example}
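Numerically, the series can be truncated after a few terms since, for fixed $x$, the summands decay factorially in $k$. The following sketch is an illustration only; it uses SciPy's regularized incomplete gamma function and arbitrary parameter values.
\begin{verbatim}
import numpy as np
from math import comb, factorial
from scipy.special import gammainc, gamma as Gamma

def W_gamma_jumps(x, c, lam, q, alpha, beta, kmax=25):
    """q-scale function for Gamma(alpha, beta) jumps via the
    incomplete-gamma series above, truncated at kmax."""
    rho = (q + lam) / c + beta
    series = 1.0
    for k in range(1, kmax + 1):
        Ck = (lam / c) ** k * beta ** (k * alpha) / (factorial(k) * Gamma(k * alpha))
        inner = sum(comb(k, l) * (-x) ** (k - l) * rho ** (-(k * alpha + l))
                    * gammainc(k * alpha + l, rho * x) * Gamma(k * alpha + l)
                    for l in range(k + 1))
        series += Ck * inner
    return np.exp((q + lam) * x / c) * series / c

print(W_gamma_jumps(x=2.0, c=1.0, lam=1.0, q=0.1, alpha=2.0, beta=1.5))
\end{verbatim}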
\end{document}
\begin{document}
\noindent{\large\bf Rate of approximation by logarithmic
derivatives of polynomials whose zeros lie on a
circle}\footnote{This work supported by RFBR project
18-31-00312 mol\underline{\ \ }a.} \\
\noindent{\bf M. A. Komarov}\footnote{Department of Functional
Analysis and Its Applications, Vladimir State University,
Gor$'$kogo street 87,
600000 Vladimir, Russia\\ e-mail: [email protected]}\\
\noindent{\bf Abstract} \ We obtain an estimate for uniform
approximation rate of bounded analytic in the unit disk functions
by logarithmic derivatives of $C$-polynomials, i.e., polynomials,
all of whose zeros lie on the unit circle $C:|z|=1$.
\noindent{\bf Keywords} \ logarithmic derivatives of polynomials,
simple partial fractions, \\ $C$-polynomials, uniform
approximation
\noindent{\bf Mathematics Subject Classification} \ 41A25, 41A20, 41A29, 30E10 \\
{\bf 1.} Let $C$ denote the unit circle $|z|=1$ and $D$ denote the
unit disk $|z|<1$. It is proved in \cite{Thompson} for any function $f$
bounded and analytic in $D$, and in \cite{Rub-Suff} for any function
analytic in $D$, that there is a sequence of rational
functions $S_n(z)=S_n(f;z)$ of the form
\[S_n(z)=\sum_{k=1}^{m_n}(z-z_{n,k})^{-1}, \qquad |z_{n,k}|=1,\]
which converges to $f(z)$ uniformly on every closed subset of $D$.
Obviously, $S_n$ is the logarithmic derivative of the $C$-polynomial
$(z-z_{n,1})\dots(z-z_{n,m_n})$ ($C$-{\it polynomials} are defined in the
abstract). Analogous problems with more general constraints on the
poles (for example, when the $z_{n,k}$ belong to a rectifiable Jordan
curve) were investigated in \cite{Korev,Chui,Chui-Shen}. Approximation
by sums $\sum_{k=1}^n(z-z_k)^{-1}$ with {\it free} poles was studied
in \cite{DD,Kos,K-IzvRAN-2017} (see also the bibliography in
\cite{K-IzvRAN-2017}).
We study the rate of approximation of bounded analytic functions in $D$
by logarithmic derivatives of $C$-polynomials on closed
subsets of $D$.
\noindent{\bf Theorem.} {\it For any bounded analytic in $D$
function $f(z)$ there is a sequence of $C$-po\-ly\-no\-mi\-als
$P_{N}(z)$, $N\ge N_0$, such that ${\rm deg}\,P_N(z)=N$ and
\begin{equation}\label{result}
\left|\frac{P_N'(z)}{P_N(z)}-f(z)\right|<
\frac{(a+\varepsilon)^{n+1}}{\varepsilon(1-a-\varepsilon)}
(1+o(1)), \qquad n=[N/2], \qquad {\rm as} \quad N\to \infty
\end{equation}
in any disk $K_a=\{|z|\le a\}$ $(a<1)$ for every $\varepsilon\in
(0,1-a)$.}
Let $d_n(f,K_a)$ denote the error in best approximation to $f$ on
the disk $K_a$ by logarithmic derivatives $Q'/Q$ of polynomials
$Q$ of degree at most $n$ with {\it free} zeros. It is interesting
that, generally speaking, the convergence of $d_n(f,K_a)$ to zero is also
geometric (cf. (\ref{result})):
\[\limsup_{n\to \infty} \sqrt[n]{d_n(f,K_a)}\le a.\]
This estimate follows from \cite{Kos} (see also \cite{DD}), where
an analog of Walsh's polynomial theorem was proved for the
problem of approximation by such fractions $Q'/Q$.
{\bf 2.} To construct the polynomials $P_N(z)$ we use the approach
of \cite{Rub}. We need the following lemma, stated in \cite{Rub} (for the
case $m=0$ see \cite[p.\,108]{Polya-Szego}). In what follows
$\overline{D}=D\cup C$.
\noindent{\bf Lemma} \cite{Rub}. {\it Let
$Q(z)=a_0(z-z_1)\dots(z-z_q)$, $a_0\ne 0$, be a polynomial of
degree $q\ge 1$ and
$Q^*(z):=z^q\overline{Q}(1/z)=\overline{a}_0(1-\overline{z}_1
z)\dots (1-\overline{z}_q z)$. If $Q(z)$ is zero free in
$\overline{D}$, then $Q(z)+z^m Q^*(z)$ is $C$-polynomial for every
$m=0,1,2,\dots$, and $|Q^*(z)|\le |Q(z)|$ in $\overline{D}$.}
To prove this lemma it is sufficient to consider the equation
\begin{equation}\label{Phi=...}
\Phi(z)=-a_0/\overline{a}_0, \qquad
\Phi(z):=\frac{a_0}{\overline{a}_0}\frac{z^m Q^*(z)}{Q(z)}\equiv
z^m \prod_{j=1}^q \frac{1-\overline{z}_j z}{z-z_j}, \qquad
|z_j|>1.
\end{equation}
The absolute values of all factors in the last product are less than, equal
to, or greater than 1 according as $|z|<1$, $|z|=1$ or $|z|>1$, respectively;
therefore all roots of equation (\ref{Phi=...}) (and hence all zeros
of $Q(z)+z^m Q^*(z)$) lie on $C$, and in the disk $\overline{D}$
we have $|\Phi(z)|\le 1$ and $|Q^*(z)|\le |Q(z)|$.
\noindent{\it Remark 1.} If $Q(z)\equiv a_0={\rm const}\ne 0$,
then $Q^*(z)\equiv \overline{a}_0$, consequently, $Q(z)+z^m
Q^*(z)\equiv a_0+z^m \overline{a}_0$ is $C$-polynomial for $m>0$
only.
\noindent{\it Remark 2.} The lemma remains true if some (but not all) of the
zeros of $Q(z)$ lie on $C$, because $\overline{z}_j=1/z_j$
and $|1-\overline{z}_j z|/|z-z_j|\equiv 1$ when $|z_j|=1$. But if
$Q(z)$ is a $C$-polynomial, then $Q^*(z)\equiv tQ(z)$,
$t=\overline{Q(0)}/a_0$, and we again need to assume $m>0$ if
$1+t=0$.
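The lemma is easy to verify numerically. The following sketch (an illustration, not part of the argument; the random polynomial is an arbitrary choice) checks that all zeros of $Q(z)+z^mQ^*(z)$ lie on $C$ and that $|Q^*(z)|\le|Q(z)|$ in $\overline{D}$.
\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(0)
# Q with all zeros outside the closed unit disk (moduli between 1.2 and 2.2)
zeros = (1.2 + rng.random(5)) * np.exp(2j * np.pi * rng.random(5))
Q = P.polyfromroots(zeros)             # coefficients, lowest degree first
Qstar = np.conj(Q)[::-1]               # Q*(z) = z^q conj(Q)(1/z)

for m in range(0, 3):
    R = P.polyadd(Q, np.concatenate([np.zeros(m), Qstar]))   # Q(z) + z^m Q*(z)
    print(m, np.max(np.abs(np.abs(P.polyroots(R)) - 1.0)))   # ~ 0: zeros on C

w = rng.random(2000) * np.exp(2j * np.pi * rng.random(2000)) # points in D
print(np.all(np.abs(P.polyval(w, Qstar)) <= np.abs(P.polyval(w, Q)) + 1e-12))
\end{verbatim}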
{\bf 3.} {\it Proof of the theorem.} Set $g(z)=\exp\left(\int_0^z
f(\zeta)d\zeta\right)$,
\[g(z)=s_n(z)+R_{n}(z), \qquad s_n(z)=1+\sum_{k=1}^n g_k z^k, \qquad
R_n(z)=\sum_{k=n+1}^{\infty} g_k z^k.\] The derivative $g'\equiv gf$
is bounded and analytic in $D$. In particular, $g'$ belongs to the
Hardy space $H^1(D)$, hence the series $\sum |g_k|$ converges
\cite[Theorem\,15]{Hardy}.
The function $\varphi(x,y)={\rm Re} \int_0^z f(\zeta)d\zeta$ ($z=x+iy$) is
bounded in $D$, so $\varphi>-\infty$ and $\inf_{D} |g(z)|=\inf_{D}
\exp \varphi(x,y)=M_0>0$. Choose $n_0\in \mathbb{N}$ such that
$\sum_{k=n+1}^{\infty} |g_k|\le M_0/2$ for $n\ge n_0$. Hence the
polynomials $s_n(z)$ are zero free in $\overline{D}$ for $n\ge
n_0$.
Further $N\ge 2n_0$ and $n:=[N/2]\ge n_0$. Set
$p(z)=s_n^*(z)=z^{q}\overline{s}_n(1/z)$, where $q=\deg s_n(z)$,
$0\le q\le n$. By the lemma and Remark 1 we have $|p(z)|\le |s_n(z)|$
in $\overline{D}$, and sums
\[P(z)=P_{q+m}(z):=s_n(z)+z^{m}p(z) \qquad {\rm as} \quad m=1,2,\dots\]
are $C$-polynomials. Rewrite $P$ as $P(z)\equiv
g(z)+z^{m}p(z)-R_n(z)$. We now have
\[P'(z)=g(z)f(z)+m z^{m-1}p(z)+z^{m}p'(z)-R_n'(z),\]
\begin{equation}\label{P'/P-f}
\frac{P'(z)}{P(z)}-f(z)= \frac{z^{m-1}(m
-zf(z))p(z)+z^{m}p'(z)-R_n'(z)+f(z)R_n(z)}{P(z)}.
\end{equation}
Denote $M_1=\max\{1,|g_1|,\dots,|g_n|\}$. Since $|g_k|<M_0$ (as
$k\ge n_0+1$), we have
\begin{equation}\label{|pn|,|Rn|}
|p(z)|\le |s_n(z)|<M_1/(1-a), \qquad |R_n(z)|<M_0 a^{n+1}/(1-a)
\qquad {\rm as} \quad |z|\le a,
\end{equation}
\begin{equation}\label{|P|>}
|P(z)|\ge |g(z)|-|z^{m}p(z)-R_n(z)|>
M_0-(M_1a^m+M_0a^{n+1})/(1-a) \qquad {\rm as} \quad |z|\le a.
\end{equation}
If $|z|<r<1$ and function $F$ is analytic in $D$, then
\[2\pi|F'(z)|=\left|\int_{|\zeta|=r}\frac{F(\zeta)d\zeta}{(\zeta-z)^2}\right|
\le \max_{|\zeta|=r}|F(\zeta)|
\int_{|\zeta|=r}\frac{|d\zeta|}{|\zeta-z|^2}=
\max_{|\zeta|=r}|F(\zeta)|\frac{2\pi r}{r^2-|z|^2}\] (we apply
Poisson's integral), and hence if $r=a+\varepsilon<1$, then
\[|F'(z)|<\varepsilon^{-1}\max\nolimits_{|\zeta|=a+\varepsilon}|F(\zeta)|
\qquad {\rm as} \quad |z|\le a.\] Thus, by this and
(\ref{|pn|,|Rn|}) we have
\begin{equation}\label{|p'|<}
|p'(z)|<\frac{M_1}{\varepsilon(1-a-\varepsilon)}, \qquad
|R_n'(z)|<\frac{M_0(a+\varepsilon)^{n+1}}{\varepsilon(1-a-\varepsilon)}
\qquad {\rm as} \quad |z|\le a.
\end{equation}
We put $m=N-q\ge n$ and obtain (\ref{result}) from
(\ref{P'/P-f})--(\ref{|p'|<}), and the theorem follows.
\noindent{\it Remark 3.} $P_N'(z)/P_N(z)-f(z)=O(z^l)$ as $l\ge
n-1$ (see (\ref{P'/P-f})).
\noindent{\it Remark 4.} It follows from $g'\in H^1(D)$ that
$g(z)$ is continuous in $\overline{D}$ and absolutely continuous on
$C$ \cite[Ch.II, \S5(5.7)]{Priwalow}. In particular,
$g(e^{i\theta})$ has bounded variation and $|g_k|=O(1/k)$, so we
can write $O(1/n)$ instead of $1+o(1)$ in (\ref{result}). If $g$ is a
polynomial of degree $q\ge 0$ that is zero free in $\overline{D}$ and
$f=g'/g$, then $R_n(z)\equiv 0$ for $n\ge q$, and the approximation rate is
higher. For example, if $f(z)\equiv 0$, then $P_N(z)=1+z^N$ and
$\sup_{K_a}|{P}_N'/{P}_N-f|=Na^{N-1}/(1-a^N)$.
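The construction of $P_N$ from the proof is also easy to test numerically. In the sketch below (an illustration only) we take $f(z)=1/(2-z)$, which is bounded and analytic in $D$ and for which the Taylor sections $s_n$ of $g$ are zero free in $\overline{D}$; the maximal error of $P_N'/P_N-f$ on $|z|=a$ decays roughly geometrically in $N$, in agreement with the theorem.
\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as P

def taylor_of_g(f_coeffs, n):
    """First n+1 Taylor coefficients of g = exp(int_0^z f) from g' = f g."""
    g = np.zeros(n + 1, dtype=complex); g[0] = 1.0
    for k in range(n):
        g[k + 1] = sum(f_coeffs[j] * g[k - j]
                       for j in range(min(k, len(f_coeffs) - 1) + 1)) / (k + 1)
    return g

a = 0.5
f = lambda z: 1.0 / (2.0 - z)
f_coeffs = np.array([2.0 ** (-(k + 1)) for k in range(80)])
z = a * np.exp(1j * np.linspace(0, 2 * np.pi, 400))      # test points on |z| = a

for N in [10, 20, 30, 40]:
    n = N // 2
    s = taylor_of_g(f_coeffs, n)                          # s_n
    p = np.conj(s)[::-1]                                  # p = s_n^*
    m = N - (len(s) - 1)
    coeffs = P.polyadd(s, np.concatenate([np.zeros(m), p]))  # P_N = s_n + z^m p
    err = np.max(np.abs(P.polyval(z, P.polyder(coeffs))
                        / P.polyval(z, coeffs) - f(z)))
    print(N, err)
\end{verbatim}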
\end{document}
\begin{document}
\title{On eigenfunctions corresponding to a small resurgent eigenvalue.}
\begin{abstract} The paper is devoted to some foundational questions in resurgent analysis. As a main technical result, it is shown that under appropriate conditions the infinite sum of endlessly continuable majors commutes with the Laplace transform. A similar statement is proven for compatibility of a convolution and of an infinite sum of majors. We generalize the results of Candelpergher-Nosmas-Pham and prove a theorem about substitution of a small (extended) resurgent function into a holomorphic parameter of another resurgent function. Finally, we discuss an application of these results to the question of resurgence of eigenfunctions of a one-dimensional Schr\"odinger operator corresponding to a small resurgent eigenvalue. \\
Keywords: resurgence, Laplace transform, linear ODEs.
\end{abstract}
\section{Introduction}
A resurgent function $f(h)$ is, roughly, a function admitting a nice enough hyperasymptotic expansion
$$\sum_k e^{-c_k/h}(a_{k,0}+a_{k,1}h+a_{k,2}h^2+...), \ \ \ \ h\to 0,$$
and resurgent analysis is a way of working with such expansions by means of representing $f(h)$ as a Laplace integral of a ramified analytic function in the complex plane.
Applications of the theory of resurgent functions to quantum-mechanical problems (~\cite{V83},~\cite{DDP97},~\cite{DP99}) have demonstrated its elegance and computational power, which brings forward the question of a fully rigorous mathematical justification of these methods.
This article is devoted to some foundational questions of resurgent analysis as applied to the Schr\"odinger equation in one dimension. Suppose, as we explicitly mention in ~\cite{G} or as it implicitly happens, say, in ~\cite{DDP97}, that using methods of ~\cite{ShSt} one can construct a resurgent solution of the Schr\"odinger equation in one dimension
$$ [-h^2\partial_q^2 + V(q,h)]\psi(q,h) = 0 $$
for the potential $V(q,h)$ analytic with respect to $q$ and polynomial in $h$. This situation includes, in particular, the eigenvalue problem
\begin{equation}
[-h^2\partial_q^2 + V(q,h)]\psi(E_1,q,h) = hE_1\psi(E_1,q,h), \label{eq1}
\end{equation}
where $\psi$ is required or expected to depend holomorphically on $q$ and $E_1$ and to be a resurgent function of $h$ for any fixed $q$ and $E_1$. (We say ``expected to be'' because a complete justification of these facts is still an open problem.)
Then, imposing appropriate boundary conditions on $\psi$ and using methods of asymptotic matching (see e.g. ~\cite{DP99,DDP97}) one can write the quantization conditions and obtain by solving them some energy level $E_1(h)$ which is expected to be a resurgent function of $h$ which is almost never a polynomial; thus, substituting $E_1(h)$ into \eqref{eq1} brings us outside of the class of equations polynomial in $h$. In this paper we show that if $E_1$ is a small resurgent function (see below for a precise definition), then one can substitute $E_1$ into a resurgent solution $\psi(E_1,q,h)$ of \eqref{eq1} and obtain another resurgent function of $h$ that would solve the corresponding differential equation (see section {\rm Re \ }f{ApplicnToExistence}).
In order to work out the detail of substituting a small resurgent function of $h$ into a holomorphic parameter of another resurgent function, the following more technical issues are addressed.
A resurgent function is given as a Laplace integral of a ramified analytic function (called ``major") along some infinite contour. In order for the Laplace integral to converge, one needs to correct that major by an entire function so that it stays bounded along the infinite branches of the contour; the details of this procedure are discussed in ~\cite{CNP}. If we are to deal with an infinite sum of majors, we need to choose the corresponding corrections by entire functions in a coherent way, in the precise sense of section {\rm Re \ }f{CaseOfConvergSeriesOfMajors}. Then one can show that the Laplace transform is compatible, under certain conditions, with an infinite sum of majors.
A similar statement is shown for a convolution and for the reconstruction homomorphism (the reconstruction of a major from its decomposition into microfunctions).
We also consider the question of substituting of a small resurgent function into a holomorphic function and generalize the argument of ~\cite{CNP} to the case of what they would call an extended resurgent function. This case is reduced to the case considered in ~\cite{CNP} by means of interchanging the infinite sums with convolutions and Laplace transforms as above.
The proof from ~\cite{CNP} will also be recalled in a slightly generalized form that allows us to accommodate the case of substituting $k$ small resurgent functions into a holomorphic function of $k$ variables.
Although the results of this article are perfectly natural, they have not been, to our knowledge, properly documented in the literature, and we felt it was important to write up their detailed proofs.
\section{Preliminaries from Resurgent Analysis}
We need to combine the setup of ~\cite{CNP} (the resurgence is with respect to the semiclassical parameter $h$ rather than the coordinate $q$) and mathematical clarity of ~\cite{ShSt} and therefore have to mix their terminology and to somewhat change their notation.
\subsection{Laplace transform and its inverse. Definition of a resurgent function.}
Morally speaking, we will be studying analytic functions $\varphi(h)$ admitting asymptotic expansions $e^{-c/h}(a_0+a_1 h + a_2 h^2+...)$ for $h\to 0$ and $\arg h$ constrained to lie in an arc $A$ of the circle of directions on the complex plane; respectively, the inverse asymptotic parameter $x=1/h$ will tend to infinity and $\arg x$ will belong to the complex conjugate arc $A^*$. Such functions can be represented as Laplace transforms of functions ${\bf \Phi}(\xi)$ where the complex variable $\xi$ is Laplace-dual to $x=1/h$ and ${\bf \Phi}(\xi)$ is analytic in $\xi$ for $|\xi|$ large enough and $\arg \xi \in {\check A}$, the copolar arc to $A$. The concept of a resurgent function will be defined by imposing conditions on the analytic behavior of ${\bf \Phi}$.
\subsubsection{``Strict'' Laplace isomorphism}
For details see ~\cite[Pr\'e I.2]{CNP}.
Let $A$ be a small (i.e. of aperture $<\pi$) arc in the circle of directions $S^1$. Denote by $\check A$ its copolar arc, $\check A = \cup_{\alpha\in A} {\check \alpha}$, where $\check \alpha$ is the open arc of length $\pi$ consisting of directions forming an obtuse angle with $\alpha$.
Denote by ${\cal O}^\infty(A)$ the space of sectorial germs at infinity in direction $A$ of holomorphic functions and by ${\cal E}(A)$ the subspace of those that are of exponential type in the direction $A$, i.e. bounded by $e^{K|t|}$ as the complex argument $t$ goes to infinity in the direction $A$.
We want to construct the isomorphism
$$ {\cal L} \ : \ {\cal E}({\check A})/{\cal O}({\mathbb C})^{exp} \ \longleftrightarrow \ {\cal E}(A^*) \ : \ {\bar{\cal L}}$$
where ${\cal O}({\mathbb C})^{exp}$ denotes the space of functions of exponential type in all directions.
{\bf Construction of ${\cal L}$}. Let ${\bf {\mathbb P}hi}$ be a function holomorphic in a sectorial neighborhood ${\cal O}mega$ of infinity in direction $\check A$. For any small arc $A'{\mathfrak {su}}bset{\mathfrak {su}}bset A$ we can choose a point $\xi_0\in{\mathbb C}$ such that ${\cal O}mega$ contains a sector $\xi_0 {\check A}'$ with the vertex $\xi_0$ opening in the direction ${\check A}'$. Define the Laplace transform
$$ {\mathbb P}hi_\gamma (x) := \int_\gamma e^{-x\xi} {\bf {\mathbb P}hi}(\xi) d\xi $$
with $\gamma=-\partial (\xi_0 {\check A}')$. Then ${\mathbb P}hi_\gamma$ is holomorphic of exponential type in a sectorial neighborhood of infinity in direction $A^*$. Cauchy theorem shows that if ${\bf {\mathbb P}hi}$ is entire of exponential type, then ${\mathbb P}hi_\gamma = 0$, which allows us to define
$$ {\cal L}({\bf {\mathbb P}hi} \ \rm{mod \ } {{\cal O}({\mathbb C})^{exp}} ) \ = \ {\mathbb P}hi_\gamma . $$
The construction of ${\bar {\cal L}}={\cal L}^{-1}$ will not be used in this paper.
\subsubsection{``Large'' Laplace isomorphism}
The Laplace transform ${\cal L}$ defined in the previous subsection can be applied only to a function ${\bf {\mathbb P}hi}(\xi)$ satisfying a growth condition at infinity. At a price of changing the target space of ${\cal L}$ this restriction can be removed as follows.
For details see ~\cite[Pr\'e I.3]{CNP}.
Let ${\check A}=(\alpha_0,\alpha_1)$ be the copolar of a small arc, where $\alpha_0,\alpha_1\in S^1$ are two directions in the complex plane, and let $\gamma: {\mathbb R}\to {\mathbb C}$ be an endless continuous path. We will say that $\gamma$ is \underline{adapted} to ${\check A}$ if $\lim_{t\to -\infty} \gamma(t)/|\gamma(t)|\to \alpha_0$, $\lim_{t\to \infty} \gamma(t)/|\gamma(t)|\to \alpha_1$, and if the length of the part of $\gamma$ contained in a ring $\{ z: R \le |z|\le R+1\}$ is bounded by a constant independent of $R$.
For a small arc $A$ there are two mutually inverse isomorphisms
$$ {\cal L} \ : \ {\cal O}^\infty({\check A})/{\cal O}({\mathbb C}) \ \longleftrightarrow \ {\cal E}(A^*) / {\cal E}^{-\infty}(A^*) \ : \ {\bar{\cal L}},$$
where ${\cal E}^{-\infty}(A^*)$ is the set of sectorial germs at infinity that decay faster than any function of exponential type (cf. ~\cite[Pr\'e I.0]{CNP}).
{\bf Construction of ${\cal L}$}. Let ${\bf {\mathbb P}si}$ be holomorphic in ${\cal O}mega$, a sectorial neighborhood of infinity of direction $\check A$. Let $\gamma$ be a path adapted to ${\check A}$ in ${\cal O}mega$. As we will see in lemma {\rm Re \ }f{Lemma21} below, there is a function ${\bf {\mathbb P}hi}$ bounded on $\gamma$ such that ${\bf {\mathbb P}hi}-{\bf {\mathbb P}si} \in {\cal O}({\mathbb C})$; define
\begin{equation} {\cal L}({\bf {\mathbb P}si} \mod {\cal O}({\mathbb C})) \ := \ \int_\gamma e^{-x\xi} {\bf {\mathbb P}hi}(\xi)d\xi \ \mod {\cal E}^{-\infty}(A^*). \label{defLfla} \end{equation}
{\bf Definition.} Any of the functions ${\bf {\mathbb P}si}(\xi)$ satisfying ${\cal L}({\bf {\mathbb P}si} {\rm\ mod \ } {\cal O}({\mathbb C})) = \psi(x) \mod {\cal E}^{-\infty}(A^*)$ is called a \underline{major} of the function $\psi(x)$.
An equivalence class of functions defined on a subset of ${\mathbb C}$ modulo adding an entire function is also called an \underline{integrality class}.
{\mathfrak {su}}bsubsection{Resurgent functions.}
Resurgent functions are usually understood to be functions of a large parameter $x$. For brevity we will speak of resurgent functions of $h$ to mean resurgent functions of $1/h$.
{\bf Definition.} A germ $f(\xi)\in {\cal O}_{\xi_0}$ is \underline{endlessly continuable} if for any $L>0$ there is a finite set ${\cal O}mega_L{\mathfrak {su}}bset {\mathbb C}$ such that $f(\xi)$ has an analytic continuation along any path of length $<L$ avoiding ${\cal O}mega_L$.
{\bf Definition.} (cf. ~\cite[p.122]{ShSt}) Let $A{\mathfrak {su}}bset S^1{\mathfrak {su}}bset {\mathbb C}$ be a small arc and let $A^*$ be obtained from $A$ by complex conjugation. A \underline{resurgent function} $f(x)$ (of the variable $x\to\infty$) in direction $A^*$ is an element of ${\cal E}(A^*)/{\cal E}^{-\infty}$ such that any (hence all) of its majors is endlessly continuable.
{\bf Remark.} ~\cite{CNP} calls the same kind of object an "extended resurgent function".
Let (cf. ~\cite[R\'es I]{CNP}) ${\pmb{\cal R}(A)}$ denote the set of endlessly continuable sectorial germs of analytic functions ${\mathbb P}hi(\xi)$ defined in a neighborhood of infinity in the direction $\check A$. Then a resurgent function of $x$ in the direction $A^*$ has a major in ${\pmb{\cal R}}(A)$.
When we mean a resurgent function $g(h)$ of a variable $h\to 0$, under the correspondence $x=1/h$ the sectorial neighborhood of infinity in the direction $A^*$ becomes a sectorial neighborhood of the origin in the direction $A$, and we will talk about a resurgent function $g(h)$ (for $h\to 0$) in the direction $A$.
{\mathfrak {su}}bsubsection{Examples of resurgent functions.}
\paragraph{Example 1.} $h^\alpha$
The major corresponding to $h^{\nu}$ for $\nu\ne 1,2,3,..$ is
$\frac{-1}{2i\sin(\pi\nu)} \frac{(-\xi)^{\nu-1}}{\Gamma(\nu)}$, and $\xi^{\nu-1}\frac{{\rm Log \ } \xi}{2\pi i \Gamma(\nu)}$ for $\nu=1,2,...$.
\paragraph{Example 2.} $\log h$.
\paragraph{Example 3.} ${\mathbb P}hi(h):=e^{1/h}$, ${\mathbb P}hi(h):=e^{-1/\sqrt{h}}$. (~\cite[R\'es II.3.4]{CNP})
\paragraph{Example 4.} $e^{-1/h^2}$ is zero as a resurgent function on the arc $(-\pi/4,\pi/4)$, but does not give a resurgent function on any larger arc because there it is no longer bounded by any function $e^{c/|h|}$ for $c\in {\mathbb R}$.
{\mathfrak {su}}bsection{Decomposition theorem for a resurgent function} \label{DecompThm}
Making rigorous sense of a formula of the type
$$\phi(h) \ \sim \ {\mathfrak {su}}m_k e^{-c_k/h}(a_{k,0}+a_{k,1}h+a_{k,2}h^2+...), \ \ \ \ h\to 0,$$
may be done by explicitly writing an estimate of an error that occurs if we truncate the $k$-th power series on the right at the $N_k$-th term. Resurgent analysis offers the following attractive alternative: if $\phi(h)$ is resurgent, then the numbers $c_k$ can be seen as locations of the first sheet singularities of the major ${\bf{\mathbb P}hi}(\xi)$, which is expressed in terms of a decomposition of ${\bf{\mathbb P}hi}(\xi)$ in a formal sum of microfunctions that we are going to discuss now. Microfunctions, or the singularities of ${\bf{\mathbb P}hi}(\xi)$ at $c_k$ are related to power series $a_{k,0}+a_{k,1}h+a_{k,2}h^2+...$ through the concept of Borel summation, see section {\rm Re \ }f{BorelSum}.
{\mathfrak {su}}bsubsection{Microfunctions and resurgent symbols} \label{MicrofAndResSymb}
{\bf Definition.} (see ~\cite[Pr\'e II.1]{CNP}) A \underline{microfunction} at $\omega\in{\mathbb C}$ in the direction ${\check A}{\mathfrak {su}}bset {\mathbb S}^1$ is the datum of a sectorial germ at $\omega$ in direction $\check A$ modulo holomorphic germs in $\omega$; the set of such microfunctions is denoted by
$${\pmb{\cal C}}^\omega(A) = {\cal O}^\omega({\check A}) / {\cal O}_\omega . $$
We will write ${\pmb{\cal C}}(A)$ to mean ${\pmb{\cal C}}^0(A)$.
A microfunction is said to be \underline{resurgent} if it has an endlessly continuable representative. The set of resurgent microfunctions at $\omega$ in direction $\check A$ is denoted in ~\cite[R\'es I.3.0, p.178]{CNP} by ${\pmb{\cal R}}^\omega (A)$.
{\bf Definition.} (~\cite[R\'es I.3.3, p.183]{CNP}) A \underline{resurgent symbol} in direction $\check A$ is a collection $\dot\phi = (\phi^\omega \in {\pmb{\cal R}}^\omega (A))_{\omega\in{\mathbb C}}$ such that $\phi^\omega$ is nonzero only for $\omega$ in a discrete subset ${\cal O}mega\in{\mathbb C}$ called the \underline{support} of $\dot\phi$, and for any $\alpha\in A$ the set ${\mathbb C} \backslash {\cal O}mega\alpha$ is a sectorial neighborhood of infinity in direction $\check A$, where ${\cal O}mega\alpha=\bigcup_{\omega\in{\cal O}mega}\omega\alpha$ be the union of closed rays in the direction $\alpha$ emanating from the points of ${\cal O}mega$, Fig.{\rm Re \ }f{ResDRWp8}.
\begin{figure}
\caption{Singularities of a major and corresponding cuts}
\label{ResDRWp8}
\end{figure}
The set of such resurgent symbols is denoted $\dot{\pmb{\cal R}}(A)$, and resurgent symbols themselves can be written as $\dot \phi = (\phi^\omega)_{\omega\in {\cal O}mega} \in \dot{\pmb {\cal R}}(A)$ or as $\dot \phi = {\mathfrak {su}}m_{\omega\in{\cal O}mega} \phi^\omega \in \dot{\pmb {\cal R}}(A)$.
{\bf Definition.} A resurgent symbol is \underline{elementary} if its support ${\cal O}mega$ consists of one point. It is \underline{elementary simple} if that point is the origin.
{\mathfrak {su}}bsubsection{Decomposition isomorphism.} \label{DecomposnIsomsm}
The correspondence between resurgent symbols in the direction $A$ and majors of resurgent functions in direction $A$ depends on a {\it resummation direction} $\alpha\in A$, which we will fix once and for all. The direction $\alpha$ can more concretely be thought of as $\arg h$ or as the the direction of the cuts in the $\xi$-plane
that we are going to draw.
Suppose (~\cite[p.182-183]{CNP}) ${\cal O}mega$ is a discrete set (of singularities) in the complement of a sectorial neighborhood of infinity in direction $\check A$, and let a holomorphic function ${\bf {\mathbb P}hi}$ be defined on ${\mathbb C}\backslash {\cal O}mega\alpha$ and endlessly continuable. Take $\omega\in {\cal O}mega \alpha$. Let $D_\omega$ be a small disc centered at $\omega$. Its diameter in the direction $\alpha$ cuts $D_\omega$ into the left and right open half-discs $D^{-}_\omega$ and $D^{+}_\omega$ (or top and bottom if $\alpha$ is the positive real direction). If $D_\omega$ is small enough, the function ${\bf{\mathbb P}hi}|D^{+}_\omega$, resp ${\bf{\mathbb P}hi}|D^{-}_\omega$, can be analytically continued to the whole split disc $D_\omega \backslash \omega \alpha$. Denote by $sing^{\omega}_{\alpha+} {\bf {\mathbb P}hi}$, resp. $sing^{\omega}_{\alpha-} {\bf {\mathbb P}hi}$, the microfunction at $\omega$ of direction $\check \alpha$ defined by the class modulo ${\cal O}_\omega$ of this analytic continuation.
\begin{Thm} (\cite[R\'es I.4]{CNP}) Let $\dot \phi = (\phi^\omega)_{\omega\in {\cal O}mega} \in \dot{\pmb{ \cal R}}(A)$ be a resurgent symbol with the singular support ${\cal O}mega$, and $\alpha \in A$. There is an endlessly continuable function ${\bf {\mathbb P}hi}\in {\cal O}({\mathbb C}\backslash {\cal O}mega\alpha)$ such that
$$ sing^\omega_{\alpha+} {\bf {\mathbb P}hi} = \left\{
\begin{array}{ll} \phi^\omega & \text{if $\omega\in{\cal O}mega$,} \\ 0 & \text{if not.}
\end{array} \right. $$
This defines a bijective linear map
$$ {\bf s}_{\alpha_+} \ : \ \dot{\pmb{\cal R}}(A) \ \to \ {\pmb{\cal R}(A)}/{\cal O}({\mathbb C}) \ : \ {\bf s}_{\alpha+}{\dot \phi} = {\bf {\mathbb P}hi} $$ whose inverse $({\bf s}_{\alpha+})^{-1}$ is called the \underline{decomposition isomorphism}. \end{Thm}
An endlessly continuable function ${\bf s}_{\alpha-}{\dot \phi}$ can be defined analogously.
The maps ${\bf s}_{\alpha+}$,${\bf s}_{\alpha-}$ respect convolution products (cf. ~\cite[p.185, R\'es I.4]{CNP}).
The map defined in ~\cite[R\'es I]{CNP} as
\begin{equation} ({\bf s}_{\alpha+})^{-1}\circ {\bf s}_{\alpha-} \ : \ {\dot {\pmb{\cal R}}}(A) \ \to \ {\dot{\pmb{\cal R}}}(A) \label{HomPassage} \end{equation}
is not identity and gives rise to the concept of alien derivatives which is central in resurgent analysis but will not be discussed in this article.
Further, if a resurgent function ${\rm var \ }phi(h)={\rm var \ }phi(h,t)$ and its major ${\bf{\mathbb P}hi}(\xi)={\bf{\mathbb P}hi}(\xi,t)$ depend, say, continuously in some appropriate sense, on an auxiliary parameter $t$, the decompositon into microfunctions $({\bf s}_{\alpha+})^{-1}{\bf {\mathbb P}hi}$ will depend on $t$ ``discontinuously" -- an effect referred to as {\it Stokes phenomenon} and discussed, e.g., in ~\cite{DP99}.
{\mathfrak {su}}bsubsection{Mittag-Leffler sum} \label{MLS}
The concept of a Mittag-Leffler sum formalizes the idea of an infinite sum of resurgent functions ${\mathfrak {su}}m_j {\rm var \ }phi_j(h)$ where ${\rm var \ }phi_j(h)$ have smaller and smaller exponential type, e.g., ${\rm var \ }phi_j = O(e^{-c_j/h})$ for $c_j\to \infty$ as $j\to \infty$.
Rephrasing ~\cite[Pr\'e I.4.1]{CNP}, let ${\mathbb P}hi_j$, $j=1,2,...$ be endlessly continuable holomorphic functions (thought of as majors of ${\rm var \ }phi_j$), ${\mathbb P}hi_j\in{\cal O}({\cal O}mega_j)$, where ${\cal O}mega_j$ are sectorial neighborhoods of infinity satisfying ${\cal O}mega_j{\mathfrak {su}}bset {\cal O}mega_{j+1}$ and $\bigcup_j {\cal O}mega_j = {\mathbb C}$. Suppose ${\cal S}_j$ together with the projection $p_j:{\cal S}_j\to {\mathbb C}$ and with the choice of the first sheet (which contains ${\cal O}mega_j$), is the endlessly continuable Riemann surface of ${\mathbb P}hi_j$. The following statement seems to be implicitly used in ~\cite{CNP}; we will use it, too, although we are not aware of a detailed treatment of this question in the literature:
\begin{Statement} \label{RiemSurfMLS} Under above assumptions, there exists an endlessly continuable Riemann surface ${\cal S}_{ML}$ together with the choice of the first sheet and locally biholomorphic maps $\pi: {\cal S}_{ML} \to {\mathbb C}$ and $\pi_j:{\cal S}_{ML}\to {\cal S}_j$, so that: \\
(i) $\pi=p_j \circ \pi_j$ and $\pi_j$ maps the first sheet of ${\cal S}$ to the first sheet of ${\cal S}_j$, for all $j$; \\
(ii) for any point $\xi\in{\cal S}_{ML}$, there is an $N\in {\mathbb N}$ so that $\pi_j(\xi)\in {\cal O}mega_j$ (and hence is on the first sheet of ${\cal S}_j$) for all $j\ge N$.
\end{Statement}
Then, similarly to ~\cite{CNP}, take an exhaustive sequence of discs $D_n{\mathfrak {su}}bset {\cal O}mega_n$, $\bigcup_n D_n={\mathbb C}$, and find entire functions $E_j$ so that ${\mathfrak {su}}p_{\xi\in D_j} |{\mathbb P}hi_j(\xi) -E_j(\xi)| \le 1/j!$. Then, thanks to the condition (ii) of the Statement {\rm Re \ }f{RiemSurfMLS}, the series ${\mathfrak {su}}m_j \pi_j^* ({\mathbb P}hi_j-p_j^* E_j)$ is factorially convergent on compact subsets of ${\cal S}_{ML}$.
By ${\text{\large \rm ML$\Sigma$}}_j {\mathbb P}hi_j$ we will mean any such sum ${\mathfrak {su}}m_j \pi_j^* ({\mathbb P}hi_j-p_j^* E_j)$ whose terms are {\it factorially convergent} on compact subsets of ${\cal S}_{ML}$; being specific about the rate of convergence will be important later on. We will call ${\mathbb P}hi$ a \underline{Mittag-Leffler sum} of ${\mathbb P}hi_1,{\mathbb P}hi_2,...$ and write
$$ {\mathbb P}hi \ = \ {\text{\large \rm ML$\Sigma$}}_j {\mathbb P}hi_j. $$
{\mathfrak {su}}bsection{Borel summation. Resurgent asymptotic expansions.} \label{BorelSum}
{\bf Definition.} A \underline{resurgent hyperasymptotic expansion} is a formal sum
$${\mathfrak {su}}m_k e^{-c_k/h}(a_{k,0}+a_{k,1}h+a_{k,2}h^2+...),$$ where: \\
i) $c_k$ form a discrete subset in ${\mathbb C}$ in the complement to some sectorial neighborhood of infinity in direction $\check A$;\\
ii) the power series of every summand satisfies the Gevrey condition, and \\
iii) each infinite sum $a_{k,0}+a_{k,1}h+a_{k,2}h^2+...$ defines, under formal Borel transform
$$ {\cal B} \ : \ e^{-c_k/h }h^\ell \ \mapsto \ (\xi-c_k)^{\ell-1} \frac{\log (\xi-c_k)}{2\pi i \Gamma(\ell)} \ \ \text{if} \ \ell\in {\mathbb N},$$
$$ {\cal B} \ : \ e^{-c_k/h } \ \mapsto \ \frac{1}{2\pi i(\xi-c_k)}, $$
an endlessly continuable microfunction centered at $c_k$.
The authors of ~\cite{CNP} denote by $\dot{{\cal R}} (A)$ (regular, as opposed to the bold-faced, ${\cal R}$) the algebra of resurgent hyperasymptotic expansions.
The right and left summations of resurgent asymptotic expansions are defined in ~\cite{DP99} or ~\cite{CNP} as follows. Given a Gevrey power series ${\mathfrak {su}}m_{k=1}^{\infty} a_k h^k$, replace it by a function (the corresponding {\it``minor''}) ${\pmb f}(\xi)={\mathfrak {su}}m_{k=1}^{\infty} a_k \frac{\xi^{k-1}}{(k-1)!}$, assume that ${\pmb f}(\xi)$ has only a discrete set of singularities, and consider the Laplace integrals $\int_{[0,\alpha)}e^{-\xi/h}{\pmb f}(\xi)d\xi$ along a ray from $0$ to infinity in the direction $\alpha$ deformed to avoid the singularities from the right or from the left, as on the figure {\rm Re \ }f{nuthesisp3}.
\begin{figure}
\caption{Integration contours in the definition of left and right summations.}
\label{nuthesisp3}
\end{figure}
After some technical discussion, this procedure defines a resurgent function of $h$ which ~\cite{CNP} denote
${\rm S}_{\alpha \pm} \left({\mathfrak {su}}m_{k=0}^{\infty} a_k h^k \right)$.
Note that comparing the results of the left and right resummations is related the study of the map \eqref{HomPassage}.
The diagram on figure {\rm Re \ }f{Diagram} helps to visualize the logical relationship of the concepts that have been introduced.
\begin{figure}
\caption{Logical relationship between concepts of resurgent analysis}
\label{Diagram}
\end{figure}
\section{Majors exponentially decreasing along a path.} \label{ExpDecreaseAlongPath}
As we have seen, the definition of the Laplace isomorphism ${\cal L}$ involves choosing a representative of a class $\mod {\cal O}({\mathbb C})$ of a major that is bounded along infinite branches of a contour. A result on the existence of such a representative is recalled in section {\rm Re \ }f{OneMajorExpDecr}; it might be helpful to look at the proofs in ~\cite[Pr\'e I.3]{CNP} before proceding to a generalization of this result to the case of an infinite sum of majors presented in section {\rm Re \ }f{CaseOfConvergSeriesOfMajors}.
We write $D_R=\{ z\in {\mathbb C} : |z|\le R\}$.
{\mathfrak {su}}bsection{Case of a single major.} \label{OneMajorExpDecr}
\begin{Lemma} (\cite[Pr\'e I.3, Lemma 3.0]{CNP}) Let $\Gamma{\mathfrak {su}}bset {\mathbb C}$ be an embedded curve (i.e. a closed submanifold of dimension 1) transverse to circles $|\zeta|=R$ for all $R\ge R_0$. Let ${\bf {\mathbb P}hi}$ be a holomorphic function in a neighborhood of $\Gamma$. Then for any function $m:{\mathbb R}_+\to {\mathbb R}_+$ satisfying $\inf_{x\le N} m(x) >0$ for any $N>0$, there is an entire function $E$ such that $|({\bf {\mathbb P}hi} + {\bf E})(\zeta)|\le m(|\zeta|) $ for any $\zeta\in\Gamma$. \label{Lemma21} \end{Lemma}
The following two auxiliary results used by ~\cite{CNP} in the proof of lemma {\rm Re \ }f{Lemma21} will be needed later on.
\begin{figure}
\caption{Notation in Lemma {\rm Re \ }
\label{ResDRWp105}
\end{figure}
\begin{Lemma} \label{L321} Let (cf. Fig.{\rm Re \ }f{ResDRWp105}) $\Gamma{\mathfrak {su}}bset {\mathbb C}$ be a finite disjoint union of curvilinear segments $\Gamma_i = [\zeta_i,\zeta'_i]$ with $i=1,...,r$, joining transversally the boundaries of the annulus $R\le |\zeta| \le R'$.
Let ${\bf {\mathbb P}hi}$ be a function holomorphic in a neighborhood of $\Gamma$ and zero at $\zeta_1,...,\zeta_r$.
Then for any ${\rm var \ }epsilon>0$ there is a polynomial function ${\bf E}$ such that\\
i) $|{\bf E}|\le {\rm var \ }epsilon$ on $D_R = \{ |\zeta|<R\}$; \\
ii) $|{\bf {\mathbb P}hi}+{\bf E}|\le {\rm var \ }epsilon$ on $\Gamma$; \\
iii) $({\bf {\mathbb P}hi}+{\bf E})(\zeta'_i)=0$ for $i=1,...,r$. \end{Lemma}
\begin{Lemma} Given $r$ points $\zeta_1,...,\zeta_r$ on a compact set $K{\mathfrak {su}}bset {\mathbb C}$, then for all ${\rm var \ }epsilon>0$ there is ${\rm var \ }epsilon''>0$ such that for any choice of the interpolation data $a_1,...,a_r$ satisfying $|a_i|\le{\rm var \ }epsilon''$, the Lagrange interpolation polynomial defined by $Q(\zeta_i)=a_i$ is estimated by ${\rm var \ }epsilon$ on $K$. \end{Lemma}
{\mathfrak {su}}bsection{Case of a convergent series of majors.} \label{CaseOfConvergSeriesOfMajors}
In order to be able to work with a Laplace integral of an infinite sum of majors, it is useful to choose a representative of each summand of that series that would be bounded on the infinite branches of the integration path, and to do it in a way that such choices would be consistent with taking the infinite sum. The main issue is to show that the entire correction functions to each of the summands form themselves a series convergent on compact subsets of ${\mathbb C}$. More precisely,
\begin{Prop} \label{Prop32} Suppose ${\mathfrak {su}}m_{j=1}^{\infty} {\mathbb P}hi^{(j)}$ is an infinite series of majors defined on the same (unbounded) domain ${\cal S}{\mathfrak {su}}bset {\mathbb C}$ which converges uniformly and faster than some geometric series with a ratio $q$ (i.e., for every disc $D_\ell$ there is a constant $M_\ell$ so that $|{\mathbb P}hi^{(j)}|\le M_\ell q^j$ on $D_\ell\cap {\cal S}$)
on compact subsets to a function ${\mathbb P}hi$, $\Gamma$ a contour transversal to circles $\partial D_R$ for $R\ge R_0$ and $m:{\mathbb R}_+\to {\mathbb R}_+$ a function as in Lemma {\rm Re \ }f{Lemma21}. Then for any number $Q$, $q<Q<1$, one can choose entire functions $F^{(j)}$ so that: \\
i) ${\mathfrak {su}}m_{j=1}^{\infty} F^{(j)}$ converges uniformly on compact sets of ${\mathbb C}$ faster than geometric series of ratio $Q$ to an entire function $F$;\\
ii) for all $j$ the function $|{\mathbb P}hi^{(j)}(\xi) + F^{(j)}(\xi)|\le (1-q) Q^j m(|\xi|)$ along $\Gamma$;\\
iii) the function $|{\mathbb P}hi(\xi)+ F(\xi)|\le \frac{1-q}{1-Q}m(|\xi|)$ along $\Gamma$; \\
iv) $|{\mathbb P}hi^{(j)} + F^{(j)}|$ can be, on every compact subset of ${\cal S}$, estimated by a geometric series with the ratio $Q$. \end{Prop}
\textsc{Proof.} Only the parts i) and ii) need to be proven in detail, the parts iii) and iv) will then follow as easy consequences.
Suppose, to simplify notation, that $R_0=1$, that $m(|\zeta|)\le 2^{-|\zeta|}$ and $m(|\zeta|)=m_k$ (constant) for $k-1<|\zeta|\le k$.
There are positive numbers $b_{k\ell}$ satisfying the following property: \\
{\it For any interpolation data on the the finite set $\Gamma\cap \partial D_k$, namely, any function $d:\Gamma\cap \partial D_k \to {\mathbb C}$ with $\max_{p\in \Gamma\cap \partial D_k}|d(p)|\le 1$, the supremum of the corresponding degree $|\Gamma\cap \partial D_k|-1$ interpolation polynomial is $<b_{k\ell}$ on $D_\ell$.} \\
Clearly we can choose $b_{k\ell}\le b_{kk}$ for $\ell\le k$ and $b_{k\ell}\ge 1$ for all $k,\ell$.
Fix numbers $\{Q_s\}_{s\in{\mathbb N}}$ such that $q<Q_1<Q_2<...<Q_s<...<Q<1$.
We will construct by induction entire functions $F^{(j)}_1,..., F^{(j)}_{s}$ such that for each fixed $s$, the series ${\mathfrak {su}}m_{j=1}^\infty [{\mathbb P}hi^{(j)}+F_1^{(j)}+...+F^{(j)}_{s}]$ converges uniformly on compact sets faster than a geometric series of ratio $Q_{s}$.
\underline{For $s=1$},
find a number $N_1$ so that for $\forall j> N_1$ one has $|{\mathbb P}hi^{(j)}(\xi)| < \frac{(1-Q_1)Q_1^j}{2b_{11}} m_1$ on $\Gamma\cap D_1$.
Using lemma {\rm Re \ }f{Lemma21}, choose entire functions $E^{(j)}_1$ so that $|{\mathbb P}hi^{(j)}+E^{(j)}_1|<\frac{m_1(1-Q_1)Q_1^j}{b_{11} 2}$ on $D_1\cap \Gamma$ for $j\le N_1$ and $E^{(1)}_j=0$ for $j> N_1$.
Now choose $G^{(j)}_1$ as interpolation polynomials so that ${\mathbb P}hi^{(j)}+E^{(j)}_1+G^{(j)}_1=0$ on $\Gamma \cap \partial D_1$. Then
\begin{equation} {\mathfrak {su}}p_{D_\ell} |G^{(j)}_1| < \frac{b_{1\ell} m_1(1-Q_1)Q_1^j}{2b_{11}} \ \ \ \text{for all} \ j\in {\mathbb N} . \label{eq818} \end{equation}
Put $F^{(j)}_1:=E^{(j)}_1+G^{(j)}_1$. Then on $D_1\cap \Gamma$
\begin{equation} | {\mathbb P}hi^{(j)} + F^{(j)}_1 | \ \le \ |{\mathbb P}hi^{(j)}+E^{(j)}_1| + |G^{(j)}_1| \le \frac{m_1 (1-Q_1)Q_1^j}{2b_{11}} + \frac{m_1(1-Q_1)Q_1^j}{2}\ \le\ m_1(1-Q_1)Q_1^j \label{eq241} \end{equation}
On the other hand, on $D_\ell$ for $j\ge N_1 +1$,
$$ |{\mathbb P}hi^{(j)}+F_1^{(j)}| = |{\mathbb P}hi^{(j)}+G^{(j)}| \le |{\mathbb P}hi^{(j)}|+|G^{(j)}|, $$
where the first summand decreases faster then some geometric series of ratio $Q_1$ by assumptions of the proposition, and the second does so by \eqref{eq818}.
\underline{For $s\ge 2$,} suppose that we have constructed $F^{(j)}_1,...,F^{(j)}_{s-1}$ for all $j$, let us construct $F^{(j)}_s$. Choose $N_s$ so that on $\Gamma \cap (D_s\backslash D_{s-1})$ we have $\forall j>N_s$ the inequality $|{\mathbb P}hi^{(j)}+F_1^{(j)}+...+F_{s-1}^{(j)}| < \frac{m_s(1-Q_s)Q_s^j}{2b_{ss}}$. Then, by lemma {\rm Re \ }f{L321}, there are entire functions $E^{(j)}_s$ such that\\
a)
\begin{equation} |E^{(j)}_s|<\frac{m_s(1-Q_s)Q_s^j}{2b_{ss}}\ \ \text{on} \ D_{s-1} \ \ \text{for} \ j\le N_s \label{eq900} \end{equation}
and \\
b) $|E^{(j)}_s+F^{(j)}_1+...+F_{s-1}^{(j)} + {\mathbb P}hi^{(j)}|<\frac{m_s(1-Q_s)Q_s^j}{2b_{ss}}$ on $(D_s\backslash D_{s-1}) \cap \Gamma$. \\
Choose $G_s^{(j)}$ as the interpolation polynomial such that $F^{(j)}_{s-1}+...+ F^{(j)}_1+ {\mathbb P}hi^{(j)}+E^{(j)}_s+G^{(j)}_s = 0$ on $\Gamma \cap \partial D_s$ and put $F^{(j)}_s:= E^{(j)}_s+G^{(j)}_s$. Then \begin{equation} {\mathfrak {su}}p_{D_\ell} |G_s^{(j)}| < \frac{b_{s\ell} m_s (1-Q_s)Q_s^j}{2b_{ss}} \ \ \text{ for all } \ j \label{eq819} \end{equation}
Combining \eqref{eq900} and \eqref{eq819}, we see that
\begin{equation} {\mathfrak {su}}p_{D_{s-1}} |F^{(j)}_s| \le m_s (1-Q_s) Q_s^j \label{eq902} \end{equation}
Analogously to the \eqref{eq241},
\begin{equation} |{\mathbb P}hi^{(j)}+F_1^{(j)}+...+F_s^{(j)}|\le m_s(1-Q_s)Q_s^j \label{eq242} \end{equation}
on $\Gamma\cap (D_s\backslash D_{s-1}).$
The series ${\mathfrak {su}}m_j ({\mathbb P}hi^{(j)}+F_1^{(j)}+...+F_s^{(j)})$ still converges uniformly and faster than some geometric series of ratio $Q_s$ on compact sets; indeed, on $D_\ell$ such an estimate follows from the analogous property for $(s-1)$, the fact that for $j$ large $F^{(j)}_s=G^{(j)}_s$ , and the formula \eqref{eq902}.
This finishes the inductive construction.
Now put $ F^{(j)} := {\mathfrak {su}}m_{k=1}^\infty F^{(j)}_k $.
This sum is uniformly convergent on compact sets because of the estimate $|F^{(j)}_k|\le m_k\le \frac{1}{2^k}$ that holds on $D_\ell$ for $k\ge \ell+1$ by \eqref{eq902}. Hence $F^{(j)}$ is a well-defined entire function.
Let us now find a geometric series with ratio $Q$ that bounds $F^{(j)}$ on $D_\ell$, at least for $j>\max\{ N_1,...,N_\ell\}$. We get
$$ {\mathfrak {su}}p_{D_\ell} |F^{(j)}| \ \le \ {\mathfrak {su}}p_{D_\ell} {\mathfrak {su}}m_{k=1}^\infty |F^{(j)}_k| \ \le \ {\mathfrak {su}}m_{k=1}^{\ell} {\mathfrak {su}}p_{D_\ell} |F^{(j)}_k| + {\mathfrak {su}}m_{k=\ell+1}^{\infty} {\mathfrak {su}}p_{D_\ell} |F^{(j)}_k| \ \stackrel{(!)}{\le} \ $$
$$ \ \le \ {\mathfrak {su}}m_{k=1}^{\ell} b_{k\ell}m_k (1-q) Q^j + {\mathfrak {su}}m_{k=\ell+1}^{\infty} m_k (1-q) Q^j \ \le \
\left[ {\mathfrak {su}}m_{k=1}^{\ell} b_{k\ell}m_k (1-q) + {\mathfrak {su}}m_{k=\ell+1}^{\infty} m_k (1-q) \right] Q^j, $$
where the inequality (!) holds because in the first sum in the given range we can use the equality $F^{(j)}_k=G^{(j)}_k$ and \eqref{eq819}, and for the second sum we use can \eqref{eq902}. This proves that $F^{(j)}$ satisfies (i).
The statement (ii) follows from \eqref{eq241} and \eqref{eq242}. $\Box$
{\mathfrak {su}}bsection{Interchangeability of infinite sum and ${\cal L}$.}
\begin{Prop} \label{Prop35} Suppose ${\mathbb P}hi^{(j)}$ is a sequence of majors defined on a common sectorial neighborhood of infinity and suppose that on each compact subset of this neighborhood the sum ${\mathfrak {su}}m_j |{\mathbb P}hi^{(j)}|$ uniformly converges faster than a geometric series with a ratio $q<1$. Then
$${\cal L} \left\{ \left( {\mathfrak {su}}m_j {\mathbb P}hi^{(j)}\right) \mod {\cal O}({\mathbb C}) \right\} \ = \ {\mathfrak {su}}m_j {\cal L} ({\mathbb P}hi^{(j)} \mod {\cal O}({\mathbb C})).$$
\end{Prop}
\textsc{Proof.} Taking a number $Q$, $q<Q<1$, a path $\gamma$ as in \eqref{defLfla}, the representatives ${\mathbb P}hi^{(j)}$ of the corresponding integrality classes and the number $Q<1$ provided by proposition {\rm Re \ }f{Prop32} for $m(t)=2^{-t}$, we can write ${\cal L}({\mathbb P}hi^{(j)} \mod {\cal O}({\mathbb C}))$ as $\int_\gamma e^{-\xi/h} {\mathbb P}hi^{(j)} d\xi$, and similarly for ${\cal L}\left\{ \left( {\mathfrak {su}}m_j {\mathbb P}hi^{(j)}\right) \mod {\cal O}({\mathbb C}) \right\}$. Then the question reduces to showing that
$$ \int_\gamma {\mathfrak {su}}m_j e^{-\xi/h} {\mathbb P}hi^{(j)}(\xi) d\xi \ = \ {\mathfrak {su}}m_j \int_\gamma e^{-\xi/h} {\mathbb P}hi^{(j)}(\xi) d\xi.$$
By Fubini's theorem we need to check:\\
i) For any $\xi\in \gamma$ the sum ${\mathfrak {su}}m_j e^{-\xi/h} |{\mathbb P}hi^{(j)}(\xi)|$ converges -- because $|{\mathbb P}hi^{(j)}(\xi)|<m(|\xi|) Q^j$; \\
ii) the integral of such a sum clearly converges for small $h$; \\
iii) For any $j$ the integral $\int_\gamma e^{-\xi/h} |{\mathbb P}hi^{(j)}(\xi)| d\xi$ converges -- in fact it is less than $\frac{1}{Q^j} \int_\gamma |e^{-\xi/h}| d|\xi|$; \\
iv) the sum of these integrals is then clearly convergent, too. $\Box$
\begin{Cor} \label{CorLMLS} Given a sequence of majors ${\mathbb P}si_j$ as in section {\rm Re \ }f{MLS}, then
$$ {\cal L} \left( {\text{\large \rm ML$\Sigma$}}_j {\mathbb P}si_j \right) = {\mathfrak {su}}m_j {\cal L} {\mathbb P}si_j $$ $\Box$ \end{Cor}
\section{Convolution products and majors bounded in a neighborhood of infinity.} \label{ConvolutionSectn}
Let $A$ be a small arc in the circle of directions which for simplicity of language will be assumed symmetric with respect to the real axis. Let us study the convolution product of majors that will be assumed holomorphic on some sectorial neighborhood of infinity in the direction ${\check A}$. This convolution of majors is known to correspond to multiplication of resurgent functions.
Recall ~\cite{CNP} that the convolution of two integrality classes of majors $[{\mathbb P}hi]$ and $[{\mathbb P}si]$ along a path $\Gamma$ adopted to a sectorial neighborhood of infinity in direction $\check A$ is defined by choosing two representatives: ${\mathbb P}hi$ that is exponentially decreasing along the infinite branches of $\Gamma$, and ${\mathbb P}si$ that is $\le const$ on the neighborhood of infinity bounded by $\Gamma$ (see lemma {\rm Re \ }f{PreI342} below), and considering the integral
$$ ({\mathbb P}hi*_\Gamma {\mathbb P}si)(\xi) = \int_\Gamma {\mathbb P}hi(\eta){\mathbb P}si(\xi-\eta) d\eta. $$
This integral defines a sectorial germ of an analytic function at infinity and can be analytically continued to some Riemann surface by deforming the integration contour.
It is discussed in ~\cite{CNP} that the result of the convolution is independent of choices modulo ${\cal O}({\mathbb C})$.
Let us look a little closer at the deformation of the contour. Suppose for simplicity that $\Gamma$ consists of two rays starting at $\eta=0$. Then for $\xi$ to the left of $\Gamma$ the above formula defines an analytic function ``on the nose"; this is the case a) on the figure {\rm Re \ }f{ExNote3p3}.
\begin{figure}
\caption{Deformation of the convolution contour and analytic continuation of the convolution product.}
\label{ExNote3p3}
\end{figure}
In order to analytically continue the convolution to other values of $\xi$, to the right of the contour $\Gamma$, we need to continuously deform the convolution contour (cases b), c) of the fugure {\rm Re \ }f{ExNote3p3}) to obtain a contour $\Gamma_\xi$ so that $\Gamma_\xi$ avoids singularities of ${\mathbb P}hi$ and $\xi-\Gamma_\xi$ avoids singularities of ${\mathbb P}si$. The singularities of the convolution appear for those $\xi$ for which this deformation becomes impossible.
It is shown in ~\cite{CNP} that if ${\mathbb P}hi$ and ${\mathbb P}si$ are defined on endless Riemann surfaces ${\cal S}_{\mathbb P}hi$ and ${\cal S}_{\mathbb P}si$ respectively, then ${\mathbb P}hi*{\mathbb P}si$ can be analytically continued to an endless Riemann surface denoted ${\cal S}_{\mathbb P}hi * {\cal S}_{\mathbb P}si$.
Note that if $\xi$ stays within a compact subset $K$ ($K$ is a subset of ${\cal S}_{\mathbb P}hi *{\cal S}_{\mathbb P}si$, but we thought it helpful to superimpose it on the Riemann surface ${\cal S}_{\mathbb P}hi$ on the picture), then the deformation of the contour $\Gamma_\xi$ will be confined to a compact subset $K'$ of ${\cal S}_{\mathbb P}hi$.
As has been mentioned, the definition of the convolution product involves a choice in an integrality class of a major, ${\mathbb P}si \mod {\cal O}({\mathbb C})$, of a representative that is bounded on sectorial neighborhood of infinity in the direction $\check A$. Let us begin by recalling how this choice is made for a single major, and then prove that this choice can be made compatible with an infinte sum of majors (section {\rm Re \ }f{CaseOfConvergSeriesOfMajors42}). Finally, we will show that under appropriate assumptions convolution is interchangeable with an infinite sum of majors (section {\rm Re \ }f{InfiniteSumConvoRev}).
{\mathfrak {su}}bsection{Case of a single major} \label{CaseSingleM}
\begin{Lemma} \label{PreI342} (cf. ~\cite[Pr\'e I.3.4.2]{CNP}) Every integrality class $[{\mathbb P}hi]$ in the direction $\check A$ for a small arc $A$ has a representative ${\mathbb P}si$ bounded in a (possibly smaller) sectorial neighborhood in the direction $\check A$. \end{Lemma}
\begin{figure}
\caption{Integration contours in the proof of Lemma {\rm Re \ }
\label{ExNote3p2}
\end{figure}
\textsc{Proof.} Let ${\cal O}mega$ be a sectorial neighborhood of infinity in the direction ${\check A}$ and suppose ${\mathbb P}hi$ is defined in ${\cal O}mega$. Let $\gamma$ be an infinite path contained in ${\cal O}mega$ and adapted to ${\check A}$ consisting of two rays coming together, for simplicity of notation, at the origin, fig.{\rm Re \ }f{ExNote3p2}. Without loss of generality assume ${\mathbb P}hi$ to be exponentially decreasing along the branches of $\gamma$.
Then
$${\mathbb P}hi(\xi) \ = \ {\mathbb P}si(\xi) \ + \ E(\xi),$$
where
$$ {\mathbb P}si(\xi) \ = \frac{1}{2\pi i} \int_\gamma \frac{{\mathbb P}hi(\eta)}{\xi-\eta} d\eta $$
and where $E\in {\cal O}({\mathbb C})$ can be defined as follows: \\
Given $\xi\in D_R{\mathfrak {su}}bset {\mathbb C}$, construct the contour $\gamma_R$ consisting of an arc of radius $R+1$ and two infinite branches of $\gamma$ as shown on the fig.{\rm Re \ }f{ExNote3p2} and put
$$ E(\xi) = \frac{1}{2\pi i}\int_{\gamma_R} \frac{{\mathbb P}hi(\eta)}{\xi-\eta}d\eta. $$
Finally, notice that ${\mathbb P}si$ is bounded in any subset of ${\cal O}mega$ where ${\mathfrak {su}}p_{\eta\in\gamma} \frac{1}{|\xi-\eta|}$ is bounded. $\Box$
We have recalled this proof in order to make the following
{\bf Remark {\rm Re \ }f{PreI342}.A.} If $\xi\in D_R$, and if ${\mathbb P}hi$ is bounded by an exponentially decreasing function $m(|\xi|)$ on $\gamma$ and by $M_{R+1}$ on ${\rm Sec}(0,\check A)\cap D_{R+1}$, we have an estimate
$$ |E(\xi)| \le \frac{1}{2\pi} \int_{arc} |{\mathbb P}hi(\eta) | |d\eta| + \int_{\gamma} m(|\xi|) |d\xi| \le (R+1) M_R + \int_{\gamma} m(|\xi|) |d\xi|. $$
That means that if ${\mathbb P}hi^{(j)}$ are bounded by $q^j m(|\xi|)$ on the infinite branches of $\gamma$ and converge faster than some geometric series with ratio $q$ on $D_{R+1}$, then on $D_R$ the values of $E(\xi)$ are bounded by some geometric series with ratio $q$.
{\mathfrak {su}}bsection{Case of a convergent series of majors} \label{CaseOfConvergSeriesOfMajors42}
The following proposition will be used in the proof of Prop.{\rm Re \ }f{Prop9} later.
\begin{Prop} \label{Prop331} Given a series of majors ${\mathbb P}hi^{(j)}$, all of them analytic on ${\rm Sec}(0,\check A)$, converging uniformly and faster than a geometric series of ratio $q<1$ on compact subsets of their common Riemann surface ${\cal S}$. Let $\gamma$ be a contour contained in a sector ${\rm Sec}(p_0,\check A)$ and adapted to ${\check A}$, and let $p \in {\rm Sec}(p_0,\check A)$ be such that the distance from ${\rm Sec}(p,\check A)$ to $\gamma$ is ${\rm var \ }epsilon>0$. Then for any number $Q$, $q<Q<1$, we can choose entire functions $F^{(j)}$ so that: \\
i) In ${\rm Sec}(p,\check A)$ and on the compact subsets $K$ of ${\cal S}$, $|{\mathbb P}hi^{(j)}-F^{(j)}|<M_K Q^j$. \\
ii) ${\mathfrak {su}}m_j F^{(j)}$ converges uniformly on compact subsets.
\end{Prop}
\textsc{Proof.}
Indeed, begin by choosing ${\mathbb P}hi_1^{(j)}$ such that ${\mathbb P}hi_1^{(j)}(\xi)\le Q^j m(|\xi|) $ along $\gamma$ with $m(|\xi|)=e^{-|\xi|}$ and such that ${\mathbb P}hi^{(j)}-{\mathbb P}hi_1^{(j)}$ are holomorphic and converge faster than geometric series of ratio $Q$ on compact subsets.
Choose $E^{(j)}$ similarly to the proof of lemma {\rm Re \ }f{PreI342},
$$E^{(j)}:= -\frac{1}{2\pi i} \int_\gamma \frac{{\mathbb P}hi_1^{(j)}(\eta)d\eta}{\xi-\eta} \ + \ {\mathbb P}hi_1^{(j)}(\xi); $$
then by remark {\rm Re \ }f{PreI342}.A, ${\mathfrak {su}}m_j E^{(j)}$ converges as geometric series of ratio $Q$ on compact subsets of ${\mathbb C}$ and hence defines an entire function. Take $F^{(j)} = ({\mathbb P}hi^{(j)}-{\mathbb P}hi^{(j)}_1)+E^{(j)}$.
Then we need to show that $|{\mathbb P}hi_1^{(j)}-E^{(j)}|<{\tilde M}_K Q^j$ on a compact subset $K$. ( Since, as noted before, a similar inequality holds for $|{\mathbb P}hi^{(j)}-{\mathbb P}hi_1^{(j)}|$, the differences $|{\mathbb P}hi^{(j)}-E^{(j)}|$ will also be estimated by a geometric series of ratio $Q$.) I.e., we need to show that $\int_\gamma \frac{{\mathbb P}hi_1^{(j)}(\eta)}{\xi-\eta}d\eta$ with $\gamma$ chosen as in the statement, converges uniformly of compact subsets.
Without loss of generality suppose $p=0$ and let $U_-={\rm Sec}(0,\check A)$ on the first sheet of ${\cal S}$, $U_+={\cal S}\backslash U_-$. For a compact subset $K{\mathfrak {su}}bset {\cal S}$ let $K_{\pm}=K\cap U_{\pm}$.
Fix $K{\mathfrak {su}}bset{\mathfrak {su}}bset {\cal S}$ and let us check for $\xi\in K$ the inequality $\left| \int_\gamma \frac{{\mathbb P}hi_1^{(n)}(\eta)}{\xi-\eta} d\eta \right| < C Q^n$.
Suppose first that $\xi\in U_{-}$. We know that $|{\mathbb P}hi_1^{(n)}(\xi)|<C_1 Q^n e^{-|\xi|}$ on $\gamma$. When $\xi\in U_-$ and $\eta\in \gamma$, then $\xi-\eta\in U_-$ and so $\frac{1}{\xi-\eta}< C_2$. Then $\left| \int \frac{{\mathbb P}hi_1^{(n)}(\eta)}{\xi-\eta} d\eta \right| <C_3 Q^n$ for yet another constant $C_3$.
Suppose now $\xi\in K_+$. In this case $\gamma$ gets deformed to a path $\gamma_\xi$ so that both $\gamma_\xi$ and $\xi-\eta$ are fully contained in $K'\cup U_-$ for some compact set $K'$ and the length of $\gamma_\xi \cap K'$ is $\le L_K$. Use the following estimates: for $\eta\in K'$ we have $|\frac{1}{\xi-\eta}|<C_4$ (note that $\xi$ is never equal to $\eta$ because $\xi\not\in \gamma_\xi$), $|{\mathbb P}hi_1^{(n)}(\eta)|<C_5 Q^n$ (because ${\mathbb P}hi_1^{(n)}$ converges faster than some geometric series on compact subsets, cf. Proposition {\rm Re \ }f{Prop32},iv) \ ), so $\left| \int_{\gamma_\xi \cap K'} \frac{{\mathbb P}hi_1^{(n)}(\eta)}{\xi-\eta} d\eta \right|< L_K C_4 C_5 Q^n$.
For the part of the integral along $\gamma_\xi \backslash K'$ proceed as in the case of $\xi\in U_-$. $\Box$
{\mathfrak {su}}bsection{Interchanging infinite sum and a convolution.} \label{InfiniteSumConvoRev}
\begin{Prop} \label{Prop43rev} Suppose ${\mathbb P}si^{(j)}$ is a sequence of majors defined on a common Riemann surface containing ${\rm Sec}(p_0,\check A)$ and converging faster than a geometric series on compact subsets. Let $p\in {\rm Int} ({\rm Sec}(p_0,\check A))$ and $\Gamma$ be the contour consisting of two rays on the boundary of ${\rm Sec}(p,\check A)$. Then
$$ {\mathbb P}hi *_{\Gamma} \left( {\mathfrak {su}}m_j {\mathbb P}si^{(j)} \right) \ = \ {\mathfrak {su}}m_j {\mathbb P}hi *_\Gamma {\mathbb P}si^{(j)} \ \ \ \ \mod {\cal O}({\mathbb C}). $$
Moreover, with the appropriate choice of the representatives, the series in the right hand side converges uniformly on compact sets faster than a geometric series of some ratio $Q<1$.
\end{Prop}
\textsc{Proof.} Without loss of generality $p=0$; denote $U_- ={\rm Sec}(0,\check A)$.
We are studying the integrals $\int_\Gamma {\mathbb P}hi(\xi-\eta){\mathbb P}si^{(n)}(\eta) d\eta$. Choose representatives ${\mathbb P}si_n$ of ${\mathbb P}si^{(n)}$ provided by Proposition {\rm Re \ }f{Prop32} such that for some $Q<1$ we have $|{\mathbb P}si^{(n)}(\xi)|\le M e^{-|\xi|} Q^n$ along $\Gamma$.
Choose a representative ${\mathbb P}hi$ so that $|{\mathbb P}hi(\xi)|<C_1 $ on $U_-\cup \Gamma$.
Let us show that with this choice of representatives the equality holds exactly, not just modulo entire functions. By Fubini's theorem, we need to show that $\int$, $\int{\mathfrak {su}}m$, ${\mathfrak {su}}m$, ${\mathfrak {su}}m\int$ are absolutely convergent. Let us show that sum of analytic continuations of $\int_\Gamma {\mathbb P}hi(\xi-\eta) {\mathbb P}si_n (\eta) d\eta $ is convergent uniformly on compact sets of the Riemann surface ${\cal T} = {\cal S}_{{\mathbb P}hi} * {\cal S}_{\mathbb P}si$.
Let us show that ${\mathfrak {su}}m_{n=0}^\infty \int_\Gamma {\mathbb P}si_n(\eta){\mathbb P}hi(\xi-\eta)d\eta$ converges uniformly on compact subsets. Fix $K{\mathfrak {su}}bset {\cal T}$ (${\cal T}={\cal S}_{\mathbb P}si *{\cal S}_{{\mathbb P}hi^{(j)}}$ and let us check for $\xi\in K$ the inequality $\left| \int_\Gamma {\mathbb P}si(\eta){\mathbb P}hi_n(\xi-\eta)d\eta \right| < C\alpha^n$. For our compact set $K$, let $K_-=K\cap U_-$.
Suppose first that $\xi\in U_{-}$. When $\xi\in U_-$ and $\eta\in \Gamma$, then $\xi-\eta\in U_-$; for $\eta\in\Gamma$ we also have $|{\mathbb P}si_n(\eta)| < C_2 Q^n e^{-|\eta|}$. In this case we can take $\Gamma_\xi=\Gamma$ and then $|\int_{\Gamma_\xi} {\mathbb P}hi(\xi-\eta) {\mathbb P}si_n(\eta) d\eta | <C_3 Q^n$ for yet another constant $C_3$.
Suppose now $\xi\in K_+ := K\backslash K_-$. In this case $\Gamma$ gets deformed to a path $\Gamma_\xi$ that both $\Gamma_\xi$ and $\xi-\eta$ are fully contained in $K'\cup U_-$ for some compact set $K'$ and the length of $\Gamma_\xi \cap K'$ is $\le L_K$. We will use the following estimates: for $\eta\in K'$ we have $|{\mathbb P}si_n(\eta)|<C_4Q^n$, $|{\mathbb P}hi(\xi-\eta)|<C_5$, so $\left| \int_{\Gamma_\xi \cap K'} {\mathbb P}si(\eta) {\mathbb P}hi_n(\xi-\eta) d\eta \right|< L_K C_4 C_5 Q^n$.
For the part of the integral along $\Gamma_\xi \backslash K'$ proceed as in the case of $\xi\in U_-$; together this will dispose of the case $\xi\in K_+$.
$\Box$
We thank the anonymous referee for a suggestion that led to streamlining of this proof.
\section{Interchanging infinite sums and the reconstruction isomorphism.}
In {\rm Re \ }f{DecompThm} we have reminded the correspondence between resurgent symbols and endlessly continuable majors. This correspondence respects infinite sums in the following sense.
\begin{Prop} \label{Prop341} (Inspired by ~\cite[R\'es 3.2.5]{CNP}) Given an infinite series of resurgent microfunctions at $0$ whose representatives $\phi_j(\zeta)$ converge uniformly and faster than geometric series with ratio $q$ on compact subsets of their common Riemann surface. Then for any $Q$, $q<Q<1$, and any direction $\alpha$ one can choose majors ${\mathbb P}si_j \in {\bf s}_{\alpha+} [\phi_j] + {\cal O}({\mathbb C})$ defined on a common endlessly continuable Riemann surface ${\cal S}$, so that the series ${\mathfrak {su}}m_j{\mathbb P}si_j$ converges faster than a geometric series with ratio $Q$ on compact subsets of ${\cal S}$, and ${\bf s}_{\alpha+} {\mathfrak {su}}m_j \phi_j = {\mathfrak {su}}m_j {\mathbb P}si_j \ {\rm mod} \ {\cal O}({\mathbb C})$).
\end{Prop}
{\bf Proof.} Choose $\Gamma$ as in the construction of ${\bf s}_{\alpha+}$ (~\cite[p.186]{CNP}) and choose, using Prop. {\rm Re \ }f{Prop32}, representatives ${\mathbb P}hi_j$ of microfunctions $\phi_j$ satifsfying ${\mathbb P}hi_j(\zeta)< Q^j e^{-|\zeta|}$ ($Q<1$) along the infinite branches of $\Gamma$ and bounded by some geometric series of ratio $Q$ on every compact set of their common Riemann surface.
Then also the integrals ${\mathbb P}si_j \ = \ \frac{1}{2\pi i} \int_\Gamma \frac{ {\mathbb P}hi_j(\eta)}{\xi-\eta} d\eta$ are bounded by some geometric series of ratio $Q$ on every compact set of ${\cal S}$.
This is shown by an obvious modification of the argument from sections {\rm Re \ }f{CaseOfConvergSeriesOfMajors42}-{\rm Re \ }f{InfiniteSumConvoRev}.
$\Box$
\section{Small resurgent functions.} \label{SmallResFunSec}
{\bf Definition.} (~\cite[Pr\'e II.4, p.157]{CNP}) A microfunction ${\rm var \ }phi \in {\pmb{\cal C}}(A)$ is said to be a \underline{small microfunction } if it has a representative ${\bf{\mathbb P}hi}$ such that ${\bf{\mathbb P}hi} = o (\frac{1}{|\zeta|})$ uniformly in any sectorial neighborhood of direction $\check A$ for $A' {\mathfrak {su}}bset{\mathfrak {su}}bset A$. \\ E.g., $h^\alpha$ for $\alpha>0$ satisfies that property.
The following definition has been somewhat modified compared to (\cite[R\'es II.3.2, p.219]{CNP}).
{\bf Definition.} For a given arc of direction $A=(\theta-\Delta\theta,\theta+\Delta\theta)$, a \underline{small resurgent function} in the direction $A$ is such a resurgent function that all first-sheet singularities of one of (hence any of) its majors $\omega_\alpha$ satisfy ${\rm Re \ } e^{-i\theta}\omega_\alpha > 0$, except maybe for one $\omega_0=0$, and if $\omega_0=0$ then the corresponding microfunction is small in the direction of a large (i.e. $>2\pi$) arc $B$ with $\hat B {\mathfrak {su}}pset A$, see figure {\rm Re \ }f{ExNote3p6}.
\begin{figure}
\caption{Arcs in the definition of a small resurgent microfunction.}
\label{ExNote3p6}
\end{figure}
In the notation of this definition, we have:
\begin{Lemma} \label{MajorSmallResurgFctn} A small resurgent function in the direction $A$ can be represented by a major that is $o(1/|\xi|)$ in the direction $B'{\mathfrak {su}}bset{\mathfrak {su}}bset B$ around the origin.
\end{Lemma}
\textsc{Proof.} It is enough to prove the lemma for a small resurgent function whose decomposition consists of only one small resurgent microfunction at the origin, because the formal sum of all other microfunctions corresponds, via the decomposition isomorphism of section {\rm Re \ }f{DecomposnIsomsm}, to a major that is holomorphic at the origin and is therefore automatically $o(1/|\zeta|)$ . This means that starting with a microfunction $[\phi]$ represented by an endlessly continuable sectorial germ $\phi\in {\cal O}^0(B)$, $\phi(\xi)=o(1/|\xi|)$ at the origin, we must construct a major ${\mathbb P}hi(\xi)$ of ${\bf s}_{\alpha+} [\phi]$ satisfying ${\mathbb P}hi(\xi)=o(1/|\xi|)$ for $\xi\to 0$ in the direction $B'$. Here $\alpha\in \hat B'$ will for definiteness be chosen as the positive real direction.
The major ${\mathbb P}hi$ will be given as an integral
\begin{equation} {\mathbb P}hi(\xi)=\frac{1}{2\pi i}\int_{\Gamma_\xi} \frac{\phi(\eta)}{\xi-\eta}d\eta, \label{MajorSmallMinor} \end{equation}
where the contour $\Gamma_\xi$ will be described presently. Let $\Delta\beta>0$ and $B'=\{ \theta: \ -\beta < \theta < 2\pi + \beta \} $, $B''=\{ \theta: \ -\beta-\Delta \beta < \theta < 2\pi + \beta +\Delta \beta \} $, $B=\{ \theta: \ -\beta-2\Delta \beta < \theta < 2\pi + \beta +2\Delta \beta \} $. Assume without loss of generality the length of $B$ to be $<\frac{5}{2}\pi$; otherwise cover $B$ by smaller arcs. Let $0<\beta'<\beta$ such that $\sin \beta'=\frac{1}{3}\sin (\beta+\Delta\beta)$. Choose a number $T>0$ so that $\phi(\xi)$ has no singularities for $\arg \xi\in B$ and $0<|\xi|<3T$; and let $\xi$ belong to the sectorial neighborhood ${\cal U}$ of $0$ of radius $T$ in the direction $B'$. (Remark that ${\mathbb P}hi(\xi)$ can be analytically continued beyong ${\cal U}$ by deforming the contour, but the specific way of this deformation is not important at the moment.)
For $\xi\in {\cal U}$, the contour $\Gamma_\xi$ on the Riemann surface of $\phi$ will consist of a contour $\Gamma_\xi^0$ starting at $a_{2T}$ and ending at $b_{2T}$,
and the union $\Gamma^{\infty}$ of two infinite branches independent of $\xi$, one from infinity to $a_{2T}$ and the other from $b_{2T}$ to infinity. Points $a_{2T}$ and $b_{2T}$ project to the point $2T$ on the complex plane, and to come from $a_{2T}$ to $b_{2T}$ one should go once counterclockwise around the origin.
If $\beta \le \arg \xi \le 2\pi-\beta$, then the (projection of the) path $\Gamma_\xi^0$ will come from $\xi=2T$ along the positive real axis until point $|\xi|/3$,
go once counterclockwise around the circle $\{ \zeta \ : \ |\zeta|=|\xi|/3 \}$, and return along $[|\xi|/3, 2T]$. If $2\pi -\beta' < \arg \xi \le 2\pi $, then $\Gamma_\xi$ starts horizontally from $\xi=2T$, goes counterclockwise around the arc of the circle $S_\xi=\{ \zeta: | \zeta-\xi|=|\xi|\sin \beta' \}$, back to the real line, one full circle counterclockwise around $\{ \zeta : |\zeta|=|\xi|/3 \}$, and retraces its path back to $2T$, fig.{\rm Re \ }f{Paper1p4},a).
\begin{figure}
\caption{Integration contour $\Gamma^0_\xi$ in Lemma {\rm Re \ }
\label{Paper1p4}
\end{figure}
If $2\pi \le \arg \xi \le 2\pi + \beta $, then instead of the arc along the circle $S_{\xi}$, the path contains two line segment $[{\mathbb R}e \xi \pm r_\xi; \xi \pm r_\xi]$ and the upper arc of the circle $|\zeta-\xi|=r_\xi$, where $r_\xi= \frac{1}{3}|\xi|\sin (2\pi+\beta+\Delta\beta-\arg \xi)$.
Denote the horizontal parts of the path $L_1$, $L_2$, $L_3$, $L_4$,the vertical parts $V_{1\pm}$, $V_{3\pm}$ and the arcs $A_1,A_2,A_3$, fig.{\rm Re \ }f{Paper1p4},b).
For $-\beta<\arg \xi\le \beta$, $\Gamma^0_\xi$ can be defined similarly.
Two branches of $\Gamma^\infty$ on the Riemann surface of $\phi$ should be chosen to go in the postive real direction, slightly dispaced from the real axis so that their projections to the complex plane do not intersect except at $2T$, avoiding singularities of $\phi$ in some way, and satisfying conditions of Lemma {\rm Re \ }f{Lemma21}. Then, by adding to it an entire function, we can assume $\phi(\xi)$ to be exponentially decreasing along $\Gamma^\infty$ and hence $\int_{\Gamma^\infty} \frac{\phi(\eta)}{\xi-\eta} d\eta$ is uniformly bounded for $\xi\in {\cal U}$.
We will prove the estimate ${\mathbb P}hi(\xi)=o(1/|\xi|)$ for $2\pi \le \arg \xi \le 2\pi + \beta$, other cases are similar but simpler.
Denote by $I_{L_1}, I_{V_{1+}}, I_{A_1}, ..., I_{L_4}$ the parts of the integral \eqref{MajorSmallMinor} over the corresponding closed line segment or arc. Choose a function $\mu:{\mathbb R}_{\ge 0}\to {\mathbb R}_{\ge 0}$, $\lim_{t\to 0+} \mu(t)=0$, such that $|\phi(\xi)|\le \frac{\mu(t)}{|\xi|}$ for $|\xi|<t$, $\arg \xi\in B''$. Then
$$ I_{A_1} \le \frac{ length(A_1) \cdot \max_{\eta\in A_1} ( \phi(\eta) ) }{ r_\xi } \le 2\pi\max_{\eta\in A_1} ( \phi(\eta) ) \ \le $$
$$ \le 2\pi\frac{\mu(2|\xi|)}{(|\xi|/2)} \ = \ o(\frac{1}{|\xi|}), $$
and similarly for $A_3$.
Now notice that on all other arcs and segments $|\xi-\eta|>|\xi|r_\xi \ge \frac{1}{3}|\xi| \sin\Delta\beta$. \\
Hence, e.g.,
$$ I_{L_2} \le \frac{ length(L_2)\cdot \max_{\eta\in L_2}{|\phi(\eta)|} }{\frac{1}{3}|\xi| \sin\Delta\beta } . $$
Since $length(L_2)< |\xi|$, and for $\eta\in L_2$ we have $|\phi(\eta)|\le \mu(|\xi|)/|\eta|$ and $|\eta|\ge |\xi|/3$, we get
$$ I_{L_2} \le \frac{3}{\sin\Delta\beta } \frac{\mu(|\xi|) }{ |\xi|/3} = \frac{9}{\sin\Delta\beta } \frac{\mu(|\xi|) }{ |\xi|} = o(\frac{1}{|\xi|}), $$
and analogously for $V_{1\pm}$, $A_2$, $L_3$, $V_{3\pm}$.
As for $I_{L_1}$ (and similarly for $I_{L_4}$), split $L_1$ into the union of two intervals $[{\mathbb R}e \xi + r_\xi, \frac{3}{2}|\xi|]$ (which is of length $O(\xi)$) and $[\frac{3}{2}|\xi|; 2T]$. The integral over the former interval can be estimated as above, and let us show that
$$ |\xi|\int_{\frac{3}{2}|\xi|}^{2T} \frac{\phi(\eta)}{\xi-\eta} d\eta \to 0 , \ \ \text{as} \ \xi\to 0. $$
Analogously to \cite[Pr\'e II.5.2, p.166]{CNP}, change the variable $\eta=|\xi| t$, writing $\xi=|\xi|e^{i\theta}$, we can rewrite the statement as
$$ \int_{\frac{3}{2}}^{2T/|\xi|} \frac{|\xi| \phi(|\xi|t)}{(e^{i\theta}-t)} dt \to 0 , \ \ \text{as} \ \xi\to 0, $$
which follows by the dominated convergence theorem.
Adding together these estimates, we obtain the lemma. $\Box$
\begin{Lemma} \label{TrulySmall} Let ${\rm var \ }phi(h)$ be any representative of a small resurgent function in the direction $A$. Then, for any arc $A'{\mathfrak {su}}bset{\mathfrak {su}}bset A$ and any ${\rm var \ }epsilon>0$ there is a sectorial neighborhood $U$ of $0$ in the direction $A'$ such that $|{\rm var \ }phi(h)|<{\rm var \ }epsilon$, $\forall h \in U$. \ $\Box$ \end{Lemma}
\textsc{Proof.} After reducing the question to the case of only one singularity at the origin, use the proof of ~\cite[Pr\'e II.5.2, p.166]{CNP}. Ingredients of that proof have been detailed here, see lemma {\rm Re \ }f{MajorSmallResurgFctn} and proposition {\rm Re \ }f{Prop351}. $\Box$
{\mathfrak {su}}bsection{Minors} \label{SubsectionMinors}
Following ~\cite[Pr\'e II.4]{CNP},
denote ${}^\flat {\pmb{\cal C}}(A)$ the subalgebra (with respect to convolution) of small mictofunctions in ${\pmb{\cal C}}(A)$. (It is indeed a subalgebra with respect to convolution. )
From now on consider microfunctions defined on a big arc $B$ . For an arc $A$ denote by ${\cal O}^0(A)$ the space sectorial germs of analytic functions at $0$ in the direction $A$. We can define a variation
$$ {\rm var} \ : \ {\pmb {\cal C}}(B) \to {\cal O}^0(\hat B)$$
as follows: take a microfunction $\phi$ at $0$, analytically continue it once counterclockwise around zero, obtain a microfunction $\tilde \phi$, and put ${\rm var} \ \phi := \phi - \tilde \phi$.
We will show that small microfunctions are specified by their variation (or its ``minor").
Denote by ${}^{\rm min} {\cal O}^{0}(\hat B)$ the space of germs of holomorphic functions in a sectorial neighborhood $V$ of direction $\hat B$ whose primitive tend to a finite limit when $\zeta\to 0$ in $B$. Denoting by $G(\zeta)$ this primitive and considering the family of continuous functions $f_r(e^{i\tau})=G(re^{i\tau})$ of $\tau$ indexed by $0<r<<1$, we see that for $\tau$ in a compact subinterval of $B$ the convergence for $r\to 0+$ can be made uniform.
\begin{Prop} \label{Prop351} (\cite[Pr\'e II.4.2.1]{CNP}) The map ${\rm var \ }$ is an isomorphism ${}^\flat{\pmb{\cal C}}(B) \to {}^{\rm min}{\cal O}^0(\hat B)$. Denote by b\'emol its inverse $g\mapsto {}^\flat g$.
\end{Prop}
\textsc{Proof.} The proof we are going to present is ~\cite{CNP}'s proof changed and clarified using suggestions of this paper's two anonymous referees.
\textsc{Part I.} Let us show that ${\rm var}$ send ${}^\flat{\pmb{\cal C}}(B)$ to ${}^{\rm min}{\cal O}^0(\hat B)$. Indeed, let ${\rm var \ }phi$ be represented by a function ${\bf {\mathbb P}hi}$ holomorphic in a sectorial neighborhood of $0$ in direction $B$ and satisfying ${\mathbb P}hi(\xi)=o(1/|\xi|)$ for $\xi\to 0$.
Let $V$ be a sectorial neighborhood of $0$ in direction $\hat B$, let $\zeta_0\in V$, let $\gamma$ be a contour starting at $\zeta_0$ and going around $0$, as on the figure {\rm Re \ }f{ResDRWp80}.
\begin{figure}
\caption{Contours in the proof of Proposition {\rm Re \ }
\label{ResDRWp80}
\end{figure}
We have
$$\int_\gamma {\bf {\mathbb P}hi}(\xi)d\xi \ = \ \int_{C_\zeta} {\bf{\mathbb P}hi}(\xi) d\xi \ - \ \int_{[\zeta_0,\zeta]} ({\rm var \ }{\rm var \ }phi)(\xi) d\xi, $$
where $C_\zeta$ is a small circular contour starting from a point $\zeta$ close to $0$.
As $\int_\gamma {\bf {\mathbb P}hi}(\xi)d\xi$ is independent of $\zeta$ and the integral over $C_\zeta$ tends to zero when $\zeta\to 0$, the function $\zeta\mapsto \int_{C_\zeta} {\bf {\mathbb P}hi}(\xi)d\xi$ is a primitive of ${\rm var \ }{\rm var \ }phi$ that is finite at zero.
\textsc{Part II.} Construction of the inverse map ``b\'emol".
Let $g\in {}^{\rm min} {\cal O}^0(\hat B)$. To construct a representative ${\pmb{\mathbb P}hi}$ of a microfunction ${}^\flat g$, let $G(\eta)$ be the primitive of $g$ that tends to $0$ when $\eta\to 0$ and choose $T>0$ so that $G(\xi)$ has no singularities for $0<|\xi|<3T$ and $\arg \xi\in B$. Let $\eta_1\in {\mathbb C}$, $\arg \eta_1\in \hat B$ and $|\eta_1|=2T$.
Put
$$ {\pmb {\mathbb P}hi}(\zeta) \ = \ \frac{1}{2\pi i} \int_{[0,\eta_1]} \frac{G(\eta)}{(\eta-\zeta)^2}d\eta $$
for $\zeta$ away from the line segment $[0,\eta_1]$; for other values of $\zeta$ define the analytic continuation by the deformation of the integration path.
We need to show that ${\pmb{\mathbb P}hi}(\zeta)$ is $o(1/|\zeta|)$ for $\zeta\to 0$ uniformly in every angular sector compactly contained in $B$.
For simplicity of notation let $\eta_1\in {\mathbb R}_{\ge 0}$.
Similarly to the proof of lemma {\rm Re \ }f{MajorSmallResurgFctn}, take subsectors $B'{\mathfrak {su}}bset{\mathfrak {su}}bset B''{\mathfrak {su}}bset{\mathfrak {su}}bset B$. For $|\zeta|<T$, $\arg \zeta\in B'$ choose a path $\Gamma_\zeta$ on the complex plane of the variable $\eta$ it will consist from a path $\Gamma_\zeta^0$ from $\eta=0$ to $\eta=2|\zeta|$ contained in $\{ \eta: \ \arg\eta\in B''\}$ and staying a distance of order $|\zeta|$ from $\zeta$ (one can choose such a path similarly to $\Gamma_\xi$ from the proof of lemma {\rm Re \ }f{MajorSmallResurgFctn}); and the line segment $[2|\zeta|,\eta_1]$. Similarly to the proof of lemma {\rm Re \ }f{MajorSmallResurgFctn}, one shows that $\int_{\Gamma_\zeta^0}\frac{G(\eta)}{(\eta-\zeta)^2}d\eta$ is $o(1/|\zeta|)$ and it remains to show that
$$ \lim_{|\zeta|\to 0}\left[ \zeta \int_{2|\zeta|}^{\eta_1} \frac{G(\eta)}{(\eta-\zeta)^2} d\zeta \right] \ = \ 0. $$
As in ~\cite[Pr\'e II.5.2]{CNP}, changing the variable by $\eta=|\zeta| t$ transforms the question to showing
$$ \lim_{|\zeta|\to 0} \left[ \int_{2}^{\eta_1/|\zeta|} \frac{G(|\zeta|t)}{(t-2)^2} dt \right] = 0 $$
which follows by the dominated convergence theorem.
Finally, one checks using the Cauchy theorem that ${\rm var \ }[{\pmb{\mathbb P}hi}]=g$. $\Box$
It is easy to see from the above formulas that ${\mathbb P}hi$ is defined Riemann surface obtained from the Riemann surface of $g$ by adding a branch point on its every sheet over $\eta_1$. \footnote{ \cite{CNP} say it is defined on the same Riemann surface as $g$.} The function ${\mathbb P}hi$ is called the \underline{adapted major} in \cite{CNP}, or more to the author's taste, the \underline{adapted representative} of our microfunction.
Suppose $[\phi],[\psi]$ are resurgent microfunctions, and their variation has no singularities on the line segment $[0,\eta]$. Then ~\cite{CNP} write that for $t\in(0,\eta]$
\begin{equation} [{\rm var \ } (\phi * \psi)](t) \ = \ \int_{(0,t)} ({\rm var \ } \phi(\tau))({\rm var \ } \psi(t-\tau)) d\tau \label{CNPminorCon} \end{equation}
and for $t$ farther away from $0$ we might need to use analytic continuation. Following a suggestion of the referee, since we do not know that $\phi$ and $\psi$ are integrable at the origin, we will take {\rm Re \ }f{CNPminorCon} to mean the following:
$$ [{\rm var \ } (\phi * \psi)](t) \ = \ \frac{d}{dt} \left( \int_{(0,t)} G(\tau) H(t-\tau) d\tau \right), $$
where $G$ and $H$ are primitives of ${\rm var \ } \phi$ and ${\rm var \ } \psi$, respectively.
\section{Substitution of a small resurgent function into a holomorphic function}
The goal of this section is to prove the following theorem:
\begin{Thm} \label{substituteThm} If $g(z_1,...,z_k)={\mathfrak {su}}m a_{j_1...j_k}z_1^{j_1}...z_k^{j_k}$ is a complex analytic function given around the origin by a convergent power series, and ${\rm var \ }phi_1(h),...,{\rm var \ }phi_k(h)$ are small resurgent functions, then the composition $g({\rm var \ }phi_1,...,{\rm var \ }phi_k)$ is a resurgent function.
\end{Thm}
In ~\cite{CNP}
an analogous result is proven for $k=1$ and for the case of a resurgent function ${\rm var \ }phi_1(h)$ representable by a major with a single singularity at the origin; recall the difference between our and ~\cite{CNP}'s definition of a small resurgent function explained in section {\rm Re \ }f{SmallResFunSec}. Our definition, in contrast to ~\cite{CNP}'s, includes, e.g., ${\rm var \ }phi(h)=h^2+e^{-1/h}$ as a small resurgent function; this function is not fully determined by the singularity of its major at the origin or by the minor corresponding to $h^2$; therefore ~\cite{CNP}'s method of proof by estimating iterated integrals of minors remains an important special case, but is no longer sufficient for the proof theorem {\rm Re \ }f{substituteThm}.
A sketch of ~\cite{CNP}'s proof is presented in section {\rm Re \ }f{kvarMinors}, together with the changes necessary to pass to the $k$ variable case. In section {\rm Re \ }f{ReduceToMinor} we reduce our, more general, situation to the one treated by ~\cite{CNP}.
We remark that theorem {\rm Re \ }f{substituteThm} can be also derived by induction on the number of variables from a parameter-dependent version of proposition {\rm Re \ }f{Prop9}.
{\mathfrak {su}}bsection{Generalization of ~\cite{CNP}'s construction of a composite function to the case of $k$ variables.}
\label{kvarMinors}
First let us mention that ~\cite{CNP} work with a weaker definition of an endlessly continuable function. For them a function is endlessly continuable if it can be analytically continued along any path of length $L$ and angle variation $\delta$ that avoids a finite set ${\cal O}mega_{L,\delta}$. To our knowledge, resurgent functions in this weaker sense possess all useful properties of resurgent functions in the stronger sense.
Following \cite[R\'es II.3]{CNP}, for a resurgent function consider the Riemann surface ${\cal S}$ of its major and the discrete filtered set ${\cal O}mega_*=\{ {\cal O}mega_{L} \}$ of its singularities, accessible along paths $\gamma$ of length $L$ with fixed starting point $\gamma(0)$.
Conversely, for any discrete filtered set ${\cal O}mega_*$, ~\cite{CNP} construct a Riemann surface ${\cal S}({\cal O}mega_*)$.
We feel, however, that a clear exposition of the conditions that should be imposed on ${\cal O}mega_*$ in order for ${\cal S}({\cal O}mega_*)$ to be an {\it endlessly continuable} Riemann surface is still absent from the literature.
The sum of two discrete filtered sets ${\cal O}mega'_*$ and ${\cal O}mega''_*$, denoted by $({\cal O}mega'+{\cal O}mega'')_*$, is defined as follows:\\
$({\cal O}mega'+{\cal O}mega'')_{L}$ is the set of all $\omega'+\omega''$ where $\omega'\in {\cal O}mega'_{L'}$, $\omega''\in {\cal O}mega''_{L''}$ and $L'+L''=L$.
Let ${}^n{\cal O}mega_*$ denote the sum of $n$ copies of ${\cal O}mega_*$; let ${}^\infty{\cal O}mega_* := \bigcup_n {}^n{\cal O}mega_*$.
If two resurgent functions have ${\cal O}mega'_*$ and ${\cal O}mega''_*$ as the discrete filtered sets of their singularities, then the singularities of their convolution are included in $({\cal O}mega'+{\cal O}mega'')_*$, cf. ~\cite[R\'es II.3.1.5]{CNP}.
If ${}^\flat{\pmb f}$ is a small microfunction, denote by ${\pmb f}^{(-1)}$ the primitive of ${\pmb f}$ that vanishes at $0$.
We will be using the concept of the convolution of minors from ~\cite[Pr\'e II.4.2.2]{CNP}.
Given small resurgent microfunctions ${}^\flat{\pmb f}_1,..., {}^\flat{\pmb f}_n$, the microfunction ${}^\flat{\pmb f}_1*...* {}^\flat{\pmb f}_n=:{}^\flat{\pmb g}$ is also small, and for $\zeta$ close to $0$ we have
$$ {\pmb g}^{(-n-1)}(\zeta) = \int {\pmb f}_1^{(-1)}(s_1){\pmb f}_2^{(-1)}(s_2)...{\pmb f}_n^{(-1)}(s_n) ds_1 ds_2 ... ds_n, $$
where the integral is taken over the $n$-simplex defined by
$$ \arg s_1 = ... = \arg s_n = \arg \zeta, \ \ \ \ |s_1|+|s_2|+...+|s_n|\le |\zeta|. $$
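Let us note in passing that for $\zeta$ in a small neighborhood of $0$ this representation immediately yields the elementary estimate
$$ |{\pmb g}^{(-n-1)}(\zeta)| \ \le \ \frac{|\zeta|^n}{n!}\ \prod_{j=1}^n\ {\mathfrak {su}}p_{|s|\le |\zeta|,\ \arg s=\arg\zeta} |{\pmb f}_j^{(-1)}(s)|, $$
since the simplex of integration has $n$-dimensional volume $|\zeta|^n/n!$; the point of the Key lemma below is that bounds of this type survive analytic continuation along allowed paths.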
In order to obtain some estimates on the growth of ${\pmb g}^{(-n-1)}$ it will be shown that it is possible to define the continuation of ${\pmb g}^{(-n-1)}$ along any allowed path
as an integral over an $n$-simplex obtained by a deformation of the initial simplex.
Let ${\cal S}$ be an endless Riemann surface and ${\cal O}mega_*$ the discrete filtered set of its singularities. A sectorial neighborhood of $0$ in ${\cal S}$ is said to be {\it small} if all its points are $L$-accessible from $0$, with $L$ so small that ${\cal O}mega_{L}=\{ 0\}$;\\
a sectorial neighborhood of $0$ is {\it bounded} if it is the union of a small neighborhood of $0$ and a relatively compact subset of ${\cal S}$.
If ${\pmb f}$ is a minor of a small resurgent function, then its primitive ${\pmb f}^{(-1)}$ is bounded on any bounded neighborhood of $0$.
Let ${\cal S}:={\cal S}({\cal O}mega_*)$ be the Riemann surface on which ${\pmb f}_1,...,{\pmb f}_k$ are simultaneously defined. Let ${}^\infty{\cal S}:={\cal S}({}^\infty {\cal O}mega_*)$.
The following corresponds to \cite{CNP}'s Key lemma 2.
\begin{Lemma} \label{KeyLemma2} There is an exhaustive family of bounded neighborhoods $(V_{\alpha,\infty} {\mathfrak {su}}bset {}^\infty{\cal S})$, a family of bounded neighborhoods $(V_\alpha{\mathfrak {su}}bset {\cal S})$, and constants $C_\alpha >0$, such that for any minors ${\pmb f}_1,...,{\pmb f}_k$ of small resurgent microfunctions which are analytic on ${\cal S}$ there is a family of functions $\epsilon_\alpha$ defined on ${\mathbb N}$, with $\epsilon_\alpha(n)>0$ and $\epsilon_\alpha(n)\to 0$ as $n\to +\infty$, so that if ${\pmb g}_{j_1...j_k}$ denotes the minor of $({}^\flat{\pmb f}_1)^{*j_1}*...*({}^\flat{\pmb f}_k)^{*j_k}$ with $j_1+...+j_k=n$, one has the estimate
\begin{equation} |{\pmb g}_{j_1...j_k}^{(-n-1)}|_{V_{\alpha,\infty}} \ \le \ \frac{1}{n!} [ C_{\alpha} ( \max_j |{\pmb f}^{(-1)}_j|_{V_{\alpha}})^{1/2} \epsilon_{\alpha} (n)]^n \label{eq20} \end{equation}
\end{Lemma}
The proof is an easy modification of the one given in ~\cite{CNP}. Once we have established this counterpart of the Key lemma, the rest of the proof goes along the same lines as in ~\cite{CNP}.
In particular, by the Cauchy integral formula, on a compact subset $K{\mathfrak {su}}bset V_{\alpha,\infty}$ that is separated by a distance $r_K$ from the branching points of ${}^\infty{\cal S}$, we have the estimate
\begin{equation*} |{\pmb g}_{j_1...j_k}^{(-1)}|_{K} \ \le \ [ C_{\alpha} r_K^{-1} ( \max_j |{\pmb f}^{(-1)}_j|_{V_{\alpha}})^{1/2} \epsilon_{\alpha} (n)]^n. \end{equation*}
As $\epsilon_\alpha(n)\to 0$, the sequence of primitives ${\pmb g}_{j_1...j_k}^{(-1)}$ converges to zero on compact sets faster than a geometric series with any positive ratio.
Now, using ~\cite[R\'es 3.2.5]{CNP} or proposition {\rm Re \ }f{Prop341}, we can find a sequence of majors $G_{j_1...j_k}$ corresponding to the minors ${\pmb g}_{j_1...j_k}$ that also converges to zero on compact sets of their Riemann surface faster than a geometric series with any positive ratio. Let us formulate this for future use.
{\bf Lemma {\rm Re \ }f{KeyLemma2}.A.} {\it Let $A$ be a small arc, $\alpha\in A$, ${\rm var \ }phi(h)$ a small resurgent function in the direction $A$ such that ${\bf s}_{\alpha+}^{-1} \bar{\cal L}{\rm var \ }phi$ consists of a single microfunction at the origin. Then for the powers $[{\rm var \ }phi(h)]^n$, $n\ge 1$, there is a choice of majors $G_n$ that are bounded by a geometric series with any a priori chosen ratio on compact subsets of their common Riemann surface.
}
{\mathfrak {su}}bsection{Reduction to the case of a small resurgent function with only one singularity of the major} \label{ReduceToMinor}
To simplify our notation, we will restrict ourselves to the case $k=1$ in this subsection.
Let $A$ be a small arc, $Sec(0,\check A)$ a closed sector with vertex $0$ in the direction $\check A$ containing the positive real direction; the isomorphism ${\bf s}_{0+}$ used in the proof below can just as well be consistently replaced with ${\bf s}_{0-}$. All convolutions of majors in this subsection will be taken with respect to a fixed contour $\Gamma$ adapted to $Sec(0,\check A)$; convolutions are then endlessly continued by deforming bounded segments of $\Gamma$ as explained in section {\rm Re \ }f{ConvolutionSectn}.
Let $g(x)={\mathfrak {su}}m_{j=0}^\infty a_j x^j$ be a convergent series representing a holomorphic function $g(x)$ near the origin, and let ${\rm var \ }phi(h)$ be a small resurgent function represented by a major $F(\xi)+R(\xi)$ holomorphic in $Sec(0,\check A)$, where ${\bf s}_{0+} F(\xi)$ consists of a single microfunction at $\xi=0$ and ${\bf s}_{0+}R(\xi)$ is supported inside the half-plane ${\mathbb R}e \xi>0$. Respectively, ${\rm var \ }phi(h)={\rm var \ }phi_0(h)+r(h)$, where ${\rm var \ }phi_0={\cal L}F$, $r={\cal L}R$.
The construction we are going to describe has been greatly simplified using an idea of the anonymous referee. Notice that if the series ${\mathfrak {su}}m_{j=0}^\infty a_j x^j$ has a nonzero radius of convergence, then so does ${\mathfrak {su}}m_{j=k}^\infty C_j^k a_j x^{j-k}$ for any $k\ge 1$. Let ${\pmb f}$ be the minor ${\rm var \ } F$; then by section {\rm Re \ }f{kvarMinors} one obtains endlessly continuable minors ${\pmb h}_k={\mathfrak {su}}m_j C_j^k a_j [{\pmb f}^{*(j-k)}]$ for $k\ge 0$, and hence the corresponding endlessly continuable majors $H_k={\bf s}_{0+}({}^\flat{\pmb h}_k)$. Then one can define the Mittag-Leffler sum of the majors $H_k * R^{*k}$ as in section {\rm Re \ }f{MLS}: ${\mathbb P}si(\xi)={\text{\large \rm ML$\Sigma$}}_k H_k*R^{*k}$.
This ${\mathbb P}si(\xi)$ is our candidate for the major of $g({\rm var \ }phi(h))$. Now we need to calculate the Laplace transform of ${\mathbb P}si$ and show that the result is equal to the sum of the power series ${\mathfrak {su}}m_j a_j [{\rm var \ }phi(h)]^j$. Indeed, we have the following equalities of resurgent functions:
$$ \begin{aligned} {\cal L}\left[ {\text{\large \rm ML$\Sigma$}}_k H_k * R^{*k} \right]
\ & \stackrel{(1)}{=} \ {\mathfrak {su}}m_k {\cal L}\left[ H_k *R^{*k} \right] \ = \ {\mathfrak {su}}m_k ({\cal L} H_k) \cdot ({\cal L} R)^{k} \cr
& \stackrel{(2)}{=} \ {\mathfrak {su}}m_{k=0}^\infty \left({\mathfrak {su}}m_{j=0}^\infty C_j^k a_j [{\rm var \ }phi_0(h)]^{j-k} \right) [r(h)]^k \cr
& \stackrel{(3)}{=} \ {\mathfrak {su}}m_{j,k} a_j C_j^k [{\rm var \ }phi_0(h)]^{j-k} [r(h)]^k \ = \ {\mathfrak {su}}m_j a_j \left({\rm var \ }phi_0(h) + r(h)\right)^j.
\end{aligned}
$$
Here (1) holds by corollary {\rm Re \ }f{CorLMLS}, (2) by ~\cite{CNP}'s construction recalled in section {\rm Re \ }f{kvarMinors}, and (3) by Weierstrass' argument (cf., e.g., \cite[p.22]{Hi}), applicable since ${\rm var \ }phi_0(h)\to 0$ and $r(h)\to 0$ when $h\to 0$ in the given sector $A$ by lemma {\rm Re \ }f{TrulySmall}; the double sum on the right-hand side of (3) is absolutely convergent.
This finishes the proof. The argument can be easily generalized to the case of $k$ variables.
{\mathfrak {su}}bsection{Parameter-dependent version.}
The following is an easy parameter-dependent version of theorem {\rm Re \ }f{substituteThm}.
\begin{Thm} Suppose $r(E,h)$ is a small resurgent function such that its major $R(E,\xi)$ is defined on one and the same Riemann surface and depends analytically on $E$ for $E\in U$, a neighborhood of $0$. Suppose $f\in {\cal O}(D_\rho)$. Then we can choose a major ${\mathbb P}hi(E,\xi)$ of $f(r(E,h))$ for which the same is true. \end{Thm}
\textsc{Proof} is analogous to the one given in the parameter-independent case. The only modification is that in {\rm Re \ }f{eq20} one has to replace $|{\pmb f}_j^{(-1)}|_{V_\alpha}$ by $|{\pmb f}_j^{(-1)}|_{V_\alpha\times \overline{D_{\rho'}}}$ for any $\rho'<\rho$. $\Box$
Here is an application of this theorem. A major of a solution of an $h$-differential equation $P\psi=E_r h \psi$ can be chosen to depend analytically on $E_r$, and the same is therefore true for microfunctions -- formal solutions of the above equation. We can apply the above theorem to conclude that quotients of such microfunctions, in particular formal monodromies, can be represented by majors that depend holomorphically on $E_r$, and the same is then also true for the formal monodromy exponents. So, we have:
\begin{Cor} Formal monodromy exponents for an $h$-differential equation can be represented by majors holomorphically dependent on $E_r$. \end{Cor}
\section{Substitution of a small resurgent function for a holomorphic parameter of another resurgent function.}
Let us fix a small arc $A$ containing, for definiteness, the positive real direction. In this section by resurgent or small resurgent functions we mean (small) resurgent functions in the direction $A$.
\begin{Prop} \label{Prop9} Suppose ${\rm var \ }phi(E,h)$ is a resurgent function defined for every $E\in \Delta$, where $\Delta{\mathfrak {su}}bset {\mathbb C}$ is a disc of radius $r>0$ around the origin, and satisfying the following property:\\
For all $E\in \Delta$ the majors ${\mathbb P}hi(E,\xi)$ are defined on the same Riemann surface ${\cal S}$, and ${\mathbb P}hi$ is a holomorphic function on $\Delta\times {\cal S}$. \\
Then for any small resurgent function $E(h)$, the function ${\rm var \ }phi(E(h),h)$ is resurgent. \end{Prop}
\textsc{Proof.} Construct the Riemann surface ${\cal T}$ for the major of ${\rm var \ }phi(E(h),h)$ as ${\cal S}*{}^{\infty}{\cal S}_E$, where ${\cal S}_E$ is the Riemann surface of ${\tilde E}(\xi)$.
Let us construct the major for ${\rm var \ }phi(E(h),h)$. Let ${\tilde E}(\xi)=G + R$, where $G$ is represented by only one small resurgent microfunction ${}^\flat {\pmb g}$ and $R$ has all its first-sheet singularities to the right of the imaginary axis.
Without loss of generality, assume that ${\mathbb P}hi(E,\xi)$ is holomorphic in the sector $Sec(0,\check A)$.
a) By lemma {\rm Re \ }f{KeyLemma2}.A, we can choose representatives $G_n \in G^{*n} \mod {\cal O}({\mathbb C})$ that go to zero faster than any geometric series (here convolutions are taken along any path adapted to $Sec(0,\check A)$). Therefore, by Proposition {\rm Re \ }f{Prop331} we can take representatives $G_n \in G^{*n} \mod {\cal O}({\mathbb C})$ and a contour $\Gamma$ consisting of two half-lines and adapted to $Sec(0,\check A)$, so that $G_n$ are bounded ``outside of $\Gamma$'' by a geometric series, i.e.\ $|G_n|\le C q^{n}$ in that domain, and such that $|G_n|\le M_K q^{n}$ on any compact set $K{\mathfrak {su}}bset{\cal T}$, where $q=\frac{1}{2}\min\{1,r\}$.
The path $\Gamma$ splits the Riemann surface ${\cal T}$ into two closed parts: $U_{-}$ which is the first sheet of ${\cal T}$ outside of $\Gamma$, i.e. containing a sectorial neighborhood of infinity in the direction $\check A$, and the rest, denoted $U_+$, so that $U_{-}\cap U_{+}=\Gamma$. For a compact set $K{\mathfrak {su}}bset {\cal T}$ denote $K\cap U_{\pm} = K_{\pm}$.
b) By using a parameter-dependent version of lemma {\rm Re \ }f{Lemma21}, after possibly shrinking $\Delta$, we can assume that $|{\mathbb P}hi(E,\xi)|<e^{-|\xi|}$ along $\Gamma$. Then ${\mathbb P}hi_n(\xi)=\frac{1}{n!}\left.\frac{\partial^n {\mathbb P}hi(E,\xi)}{\partial E^n}\right|_{E=0}$ will satisfy $\left| {\mathbb P}hi_n \right| < C Q^n e^{-|\xi|}$ on $\Gamma$ for some $0<Q<1$.
Further, for compact subsets $K{\mathfrak {su}}bset {\cal S}$, ${\mathfrak {su}}p_{\xi\in K_+\cup U_-} |{\mathbb P}hi_n| < C_K r^{-n}$.
c) To show that ${\mathfrak {su}}m_{n=0}^\infty C_n^j \int_\Gamma G_n(\xi-\eta){\mathbb P}hi_n(\eta)d\eta$ converges uniformly on compact subsets faster than a geometric series, one imitates the proof of Proposition ~{\rm Re \ }f{Prop331}.
d) Now we need to show that this major is really the major of the composite function, i.e. we need to verify the equality
$$ {\cal L} \left[{\text{\large \rm ML$\Sigma$}}_{j=0}^\infty R^{*j} * \left({\mathfrak {su}}m_{n=0}^\infty C^j_n \int_{\Gamma} G_n(\xi-\eta) * {\mathbb P}hi_n(\eta) d\eta
\right) \right] \ = \ {\mathfrak {su}}m \frac{1}{n!} \frac{\partial^n \phi}{\partial E^n} [E(h)]^n. $$
In the first sum ${\text{\large \rm ML$\Sigma$}}$ and ${\cal L}$ can be interchanged and we get
$$ {\mathfrak {su}}m_{j=0}^\infty {\cal L}[R]^{j} \cdot {\cal L} \left[{\mathfrak {su}}m_{n=0}^\infty C^j_n \int_{\Gamma} G_n(\xi-\eta) * {\mathbb P}hi_n(\eta) d\eta \right].$$
Since the majors inside the sum converge uniformly and faster than geometric series with some fixed ratio on compact sets, ${\cal L}$ and the infinite sum can be interchanged, after which the proposition is proven. $\Box$
\section{Application to quantum resurgence.} \label{ApplicnToExistence}
Consider the Schr\"odinger operator
$$ P \ = \ -h^2\partial_q^2 + V(q,h), $$
where $V(q,h)$ is analytic in $q$ on the whole complex plane and polynomial in $h$. Denote by $\tilde P$ the Laplace-transformed operator, i.e.,
$$ \tilde P \ = \ -\partial_\xi^{-2}\partial_q^2 + V(q,\partial_\xi^{-1}), $$
and by $\hat h$ a major of $h$, e.g.\ $\hat h(\xi)=\frac{1}{2\pi i} \log \xi$.
Suppose we know that $\psi$ is a resurgent solution of the differential equation
$$ P\psi(E_1,q) = hE_1\psi(E_1,q) \ \ \ \mod {\cal E}^{-\infty}$$
for every $E_1\in {\mathbb C}$.
Here ${\cal E}^{-\infty}$ stands for functions of $h$ that are $<C_{a,K}e^{-a/h}$ for small $|h|$ when the other parameters range over a compact set $K$.
We will assume that the majors ${\mathbb P}si(E_1,q,\xi)$ are holomorphic with respect to $E_1$, are defined on the same Riemann surface, and satisfy
\begin{equation} {\tilde P}{\mathbb P}si(E_1,q,\xi) = {\hat h}*E_1{\mathbb P}si(E_1,q,\xi) \ \ \ \mod{{\cal O}}(U_1\times U_2\times {\mathbb C}), \label{whatSolusSatisfy} \end{equation}
where $U_1$ is an open neighborhood of $0$ in ${\mathbb C}$ and $q\in U_2$, an open subset of the universal cover $\tilde {\mathbb C}$ of ${\mathbb C}\backslash\{\text{turning pts}\}$, and we will assume $U_2$ to be relatively compact. (Turning points are those $q$ for which $V(q,h)=O(h)$.) Such are the properties we expect of solutions produced, say, by the Shatalov--Sternin method ~\cite{ShSt}.
Differentiating both sides of \eqref{whatSolusSatisfy} $n$ times with respect to $E_1$ at $E_1=0$, we obtain
$$ {\tilde P} \left.\frac{\partial^n{\mathbb P}si}{\partial E_1^n}\right|_{E_1=0} \ = \ {\hat h}*\left.n\frac{\partial^{n-1}\psi}{\partial E_1^{n-1}}\right|_{E_1=0} \ \ \ \mod {{\cal O}}(U_1\times U_2\times {\mathbb C}).$$
As ${\mathbb P}si(E_1,q,\xi)={\mathfrak {su}}m E_1^n \left.\frac{1}{n!}\frac{\partial^n {\mathbb P}si}{\partial E_1^n}\right|_{E_1=0}$ converges uniformly on compact sets of the Riemann surface of ${\mathbb P}si$ and is holomorphic with respect to $E_1$, we see that $\frac{1}{n!}\frac{\partial^n {\mathbb P}si}{\partial E_1^n}$ can be estimated on compact sets of that Riemann surface by geometric series with the same ratio. Hence, by the previous section, we may substitute a small resurgent function $E(h)$ for $E_1$ and obtain a resurgent function.
We know that $P\psi(E_1,q) = hE_1\psi(E_1,q) \mod {\cal E}^{-\infty}$ for every $E_1\in {\mathbb C}$. If the two sides were equal as functions, and not merely as classes modulo ${\cal E}^{-\infty}$, then there would be no issue in substituting $E(h)$ into this equality. As it is, we need an additional argument in order to show:
$$ P\psi(E(h),q) = hE(h) \psi(E(h),q) \mod {\cal E}^{-\infty}.$$
On the level of majors it boils down to showing
\begin{equation} {\tilde P} {\mathfrak {su}}m {\tilde E}^{*n} * \frac{1}{n!} \left.\frac{\partial^n {\mathbb P}si}{\partial E_1^n}\right|_{E_1=0} \ = \ {\hat h}*{\tilde E} * {\mathfrak {su}}m {\tilde E}^{*n} * \frac{1}{n!} \left.\frac{\partial^n {\mathbb P}si}{\partial E_1^n}\right|_{E_1=0} \mod {{\cal O}}(U_2\times {\mathbb C}). \label{eq22} \end{equation}
We can interchange ${\tilde P}$ and the infinite sum on the left. Indeed, for the convolution with ${\hat h}^2$ and $V(q,\hat h)$ this follows by proposition {\rm Re \ }f{Prop43rev}. The infinite sum and $\frac{\partial}{\partial q}$ can be interchanged because on compact sets with respect to $q$ the terms of the series can be assumed bounded by $Q^j e^{-|\xi|}$, for some $0<Q<1$, along the infinite branches of $\Gamma$, and since everything is analytic in $q$, similar bounds hold for the $q$-derivatives of the majors.
We can interchange ${\tilde E}*$ and the infinite sum on the right. Indeed, it is shown in the proof of proposition {\rm Re \ }f{Prop9} that majors of ${\tilde E}^{*n}*\frac{1}{n!}\frac{\partial^n{\mathbb P}si}{\partial E_1^n}$ have representatives converging faster than a geometric series on compact subsets. Therefore, by Prop. {\rm Re \ }f{Prop43rev}, we can interchange ${\tilde E}*$ and the infinite sum.
So, to show \eqref{eq22}, since interchanging $\tilde P$ and ${\tilde E}*$ with the infinite sums in question is legal, it is enough to show that
$$ {\mathfrak {su}}m {\tilde E}^{*n} * \frac{1}{n!}{\tilde P}\frac{\partial^n{\mathbb P}si}{\partial E_1^n} \ = \ {\mathfrak {su}}m {\hat h}*{\tilde E}^{*(n+1)}*\frac{1}{n!}\frac{\partial^n{\mathbb P}si}{\partial E_1^n} \mod {{\cal O}}(U\times {\mathbb C})$$
which follows because
$$ {\tilde P}\frac{\partial^n{\mathbb P}si}{\partial E_1^n} = n {\hat h}*\frac{\partial^{n-1}{\mathbb P}si}{\partial E_1^{n-1}} \mod {{\cal O}}(U\times {\mathbb C}) . $$
$\Box$
{\bf Acknowledgements}
The author would like to thank his advisor Boris Tsygan for a wonderful graduate experience and his dissertation committee members Dmitry Tamarkin and Jared Wunsch for their constant support in his study and research.
We greatly appreciate the thoughtful remarks of this paper's two anonymous referees.
Valuable suggestions were also made by M.Aldi, J.E.Andersen, K.Burns, K.Costello, E.Delabaere, S.Garoufalidis, E.Getzler, B.Helffer, A.Karabegov, S.Koshkin, Yu.I.Manin, G.Masbaum, D.Nadler, W.Richter, K.Vilonen, F.Wang, E.Zaslow, and M.Zworski.
This work was partially supported by the NSF grant DMS-0306624 and the WCAS Dissertation and Research Fellowship.
\end{document}
\begin{document}
\title[On some properties of weak solutions]{On some properties of weak solutions \\ to elliptic equations with divergence-free drifts}
\author{Nikolay Filonov}
\address{V.A. Steklov Mathematical Institute, St.-Petersburg, Fontanka 27, 191023, Russia}
\email{[email protected]}
\thanks{Both authors are supported by RFBR grant 17-01-00099-a.}
\author{Timofey Shilkin}
\address{V.A. Steklov Mathematical Institute, St.-Petersburg, Fontanka 27, 191023, Russia}
\email{[email protected]}
\thanks{The research of the second author
leading to these results has received funding from the People
Programme (Marie Curie Actions) of the European Union's Seventh
Framework Programme FP7/2007-2013/ under REA grant agreement n°
319012 and from the Funds for International Co-operation under
Polish Ministry of Science and Higher Education grant agreement n°
2853/7.PR/2013/2. The author also thanks the Technische Universit\"
at of Darmstadt for its hospitality.}
\subjclass{35B65}
\date{July 12, 2017.}
\keywords{elliptic equations, weak solutions, regularity}
\begin{abstract}
We discuss the local properties of weak solutions to the equation
$-\Delta u + b\cdot\nabla u=0$. The corresponding theory is
well-known in the case $b\in L_n$, where $n$ is the dimension of the
space. Our main interest is focused on the case $b\in L_2$. In this
case the structure assumption $\operatorname{div} b=0$ turns out to
be crucial.
\end{abstract}
\maketitle
\section{Introduction and Notation}
Assume $n\ge 2$, $\Omega\subset \mathbb R^n$ is a smooth bounded
domain, $b: \Omega\to \mathbb R^n$, $f: \Omega\to \mathbb R$. In
this paper we investigate the properties of weak solutions $u:
\Omega\to \mathbb R$ to the following scalar equation
\begin{equation}
-\Delta u + b\cdot \nabla u \ = \ f \qquad \mbox{in} \quad \Omega.
\label{Equation}
\end{equation}
This equation describes the diffusion in a stationary incompressible
flow. If it is not stated otherwise, we always impose the following
conditions
\begin{equation*}
b\in L_2(\Omega) , \qquad f\in W^{-1}_2(\Omega)
\end{equation*}
(see the list of notation at the end of this section). We use the
following
\begin{definition}
\label{Def1} Assume $b\in L_2(\Omega)$, $f\in W^{-1}_2(\Omega)$. The
function $u\in W^1_2(\Omega)$ is called a weak solution to the
equation \eqref{Equation} if the following integral identity holds:
\begin{equation}
\int\limits_\Omega \nabla u\cdot ( \nabla \eta + b \eta)~dx \ = \
\langle f, \eta\rangle, \qquad \forall~\eta\in C_0^\infty(\Omega).
\label{Integral_Identity_1}
\end{equation}
\end{definition}
Together with the equation \eqref{Equation} one can consider the
formally conjugate (up to the sign of the drift) equation
\begin{equation}
-\Delta u + \operatorname{div}(bu) \ = \ f \qquad \mbox{in} \quad
\Omega. \label{Equation1}
\end{equation}
\begin{definition}
\label{Def2} Assume $b\in L_2(\Omega)$, $f\in W^{-1}_2(\Omega)$. The
function $u\in W^1_2(\Omega)$ is called a weak solution to the
equation \eqref{Equation1} if
\begin{equation}
\int\limits_\Omega (\nabla u -bu)\cdot \nabla \eta ~dx \ = \
\langle f, \eta\rangle, \qquad \forall~\eta\in C_0^\infty(\Omega).
\label{Integral_Identity_2}
\end{equation}
\end{definition}
The advantage of the equation \eqref{Equation1} is that it allows
one to define weak solutions for a drift $b$ belonging to a weaker
class than $L_2(\Omega)$. Namely, Definition \ref{Def2} makes sense
for $u\in W^1_2(\Omega)$ if
\begin{equation}
b\in L_s(\Omega) \qquad \mbox{where} \qquad s \ = \ \left\{ \
\begin{array}{cl}\frac {2n}{n+2}, \qquad & n\ge 3, \\
1+\varepsilon, \ \varepsilon>0, & n=2 .\end{array}\right.
\label{Weak_drift}
\end{equation}
Nevertheless, it is clear that for a divergence-free drift $b\in
L_2(\Omega)$ the Definitions \ref{Def1} and \ref{Def2} coincide.
Together with the equation \eqref{Equation} we discuss boundary
value problems with Dirichlet boundary conditions:
\begin{equation}
\left\{ \quad \gathered -\Delta u +b\cdot \nabla u = f \quad \mbox{in} \quad \Omega , \\
u|_{\partial \Omega} = \varphi .\qquad \endgathered \right.
\label{Dirichlet}
\end{equation}
For weak solutions the boundary condition is understood in the sense
of traces. Assume $f$ is ``good enough'' and $b \in L_2(\Omega)$,
$\operatorname{div} b = 0$. Our main observation is that the
regularity of solution $u$ inside $\Omega$ can depend on the
behaviour of its boundary values. If the function $\varphi$ is
bounded, then the solution $u$ is also bounded (see Theorem
\ref{Week_Max_Principle_2} below). If the function $\varphi$ is
unbounded on $\partial\Omega$, then the solution $u$ can become
infinite at interior points of $\Omega$ (see Example \ref{Example_1}
below). So, we distinguish between two cases: the case of general
boundary data $\varphi \in W^{1/2}_2(\partial \Omega)$, and the case
of bounded boundary data
\begin{equation}
\varphi \in L_\infty(\partial \Omega)\cap W^{1/2}_2(\partial \Omega)
\label{BC_regular} .
\end{equation}
Discussing the properties of weak solutions to the problem
\eqref{Dirichlet} we also distinguish between another two cases: in
Section 2 we consider sufficiently regular drifts, namely, $b\in
L_n(\Omega)$, and in Section 3 we focus on the case of drifts $b$
from $L_2(\Omega)$ satisfying $\operatorname{div} b = 0$. Section 4
is devoted to possible ways of relaxing the condition $b\in
L_n(\Omega)$ within the framework of the regularity theory. In the Appendix,
for the reader's convenience, we gather some proofs (most of which are either
known or straightforward).
Together with the elliptic equation \eqref{Equation} it is possible to consider its
parabolic analogue
\begin{equation}
\label{113} \partial_t u -\Delta u + b\cdot \nabla u \ = \ f \qquad
\mbox{in} \quad \Omega\times (0,T),
\end{equation}
but this should be the subject of a separate survey. We refer the interested
reader to the related papers \cite{Zhang}, \cite{Lis_Zhang},
\cite{NU}, \cite{Sem}, \cite{SSSZ}, \cite{Vicol}, \cite{SVZ} and the references therein.
Throughout the paper we use the following notation. For any $a$, $b\in
\mathbb R^n$ we denote by $a\cdot b$ their scalar product in $\mathbb
R^n$. We denote by $L_p(\Omega)$ and $W^k_p(\Omega)$ the usual
Lebesgue and Sobolev spaces. The space
$\overset{\circ}{W}{^1_p}(\Omega)$ is the closure of
$C_0^\infty(\Omega)$ in $W^1_p(\Omega)$ norm. The negative Sobolev
space $W^{-1}_p(\Omega)$, $p\in (1, +\infty)$, is the set of all
distributions which are bounded functionals on
$\overset{\circ}{W}{^1_{p'}}(\Omega)$ with $p':=\frac{p}{p-1}$. For
any $f\in W^{-1}_p(\Omega)$ and $w\in
\overset{\circ}{W}{^1_{p'}}(\Omega)$ we denote by $\langle
f,w\rangle$ the value of the distribution $f$ on the function $w$.
We use the notation $W^{1/2}_2(\partial \Omega)$ for the
Slobodetskii--Sobolev space. By $C(\bar \Omega)$ and $C^\alpha(\bar
\Omega)$, $\alpha\in (0,1)$ we denote the spaces of continuous and
H\" older continuous functions on $\bar \Omega$. The space
$C^{1+\alpha}(\bar \Omega)$ consists of functions $u$ whose
gradient $\nabla u$ is H\" older continuous. The index ``loc'' in
notation of the functional spaces $L_{\infty,loc}(\Omega)$,
$C^\alpha_{loc}( \Omega)$, $C^{1+\alpha}_{loc}( \Omega)$ etc.\ means
that the function belongs to the corresponding functional class over
every compact set which is contained in $\Omega$. The symbols
$\rightharpoonup$ and $\to $ stand for the weak and strong
convergence respectively. We denote by $B_R(x_0)$ the ball in
$\mathbb R^n$ of radius $R$ centered at $x_0$ and write $B_R$ if
$x_0=0$. We write also $B$ instead of $B_1$.
\section{Regular drifts}\label{sreg}
\subsection{Local properties}
For sufficiently regular drifts we have the local H\" older
continuity of a solution.
\begin{theorem}
\label{Holder} Assume
\begin{equation}
\label{210} b \in L_n (\Omega) \quad \text{if} \quad n \ge 3, \qquad
\int\limits_{\Omega} |b|^2\ln(2+|b|^2)~dx \ < \infty \quad \text{if}
\quad n = 2 .
\end{equation}
Let $u\in W^1_2(\Omega)$ be a weak solution to \eqref{Equation} with
$f$ satisfying
\begin{equation*}
f\in L_p(\Omega), \qquad p>\frac n2\ .
\end{equation*}
Then
$$
u\in C^\alpha_{loc}(\Omega) \qquad \mbox{with} \qquad \left\{ \
\begin{array}{cl} \alpha= 2-\frac np, & p<n, \\ \forall~\alpha < 1, & p\ge n.
\end{array}\right.
$$
\end{theorem}
The local H\" older continuity of weak solutions in Theorem
\ref{Holder} with some $\alpha\in (0,1)$ is well-known, see
\cite[Theorem 7.1]{Stampacchia} or \cite[Corollary 2.3]{NU} in the
case $f\equiv 0$. The H\" older continuity with arbitrary $\alpha\in
(0,1)$ was proved in the case $f\equiv 0$, for example, in
\cite{Filonov}. The extension of this result for non-zero right hand
side is routine.
If $b$ possesses more integrability then the first gradient of a
weak solution is locally H\" older continuous.
\begin{theorem}
Let $b\in L_p(\Omega)$ with $p>n$, and $u\in W^1_2(\Omega)$ be a
weak solution to \eqref{Equation} with $f\in L_p(\Omega)$. Then
$u\in C^{1+\alpha}_{loc}(\Omega)$ with $\alpha=1-\frac np$.
\end{theorem}
For the proof see \cite[Chapter III, Theorem 15.1]{LU}.
\subsection{Boundary value problem}
We consider the second term $\int\limits_\Omega \nabla u\cdot
b~\eta~dx$ in the equation \eqref{Integral_Identity_1} as a bilinear
form in $\overset{\circ}{W}{^1_2}(\Omega)$. It defines a linear
operator $T: \overset{\circ}{W}{^1_2}(\Omega) \to
\overset{\circ}{W}{^1_2}(\Omega) $ by the relation
\begin{equation}
\label{212} \int\limits_\Omega \nabla (T u) \cdot \nabla \eta~dx =
\int\limits_\Omega \nabla u\cdot b~\eta~dx, \quad \forall~u, \eta\in
\overset{\circ}{W}{^1_2}(\Omega) .
\end{equation}
The following result is well-known.
\begin{theorem}
\label{Operator_is_compact} Let $b$ satisfy \eqref{210}. Then the
operator $T:\overset{\circ}{W}{^1_2}(\Omega)\to
\overset{\circ}{W}{^1_2}(\Omega)$ defined by \eqref{212} is compact.
\end{theorem}
Indeed, if $n\ge 3$ then the estimate
\begin{equation}
\label{2125} \left| \int\limits_\Omega \nabla u\cdot
b~\eta~dx\right| \le C_b \|\nabla u\|_{L_2(\Omega)} \|\nabla
\eta\|_{L_2(\Omega)} \quad \forall~u, \eta\in
\overset{\circ}{W}{^1_2}(\Omega)
\end{equation}
follows by the imbedding theorem and the H\" older inequality. In
the case $n=2$ such estimate can be found for example in \cite[Lemma
4.3]{Filonov}. Next, the operator $T$ can be approximated in the
operator norm by compact linear operators $T_\varepsilon$ generated
by the bilinear forms $\int\limits_\Omega \nabla u\cdot
b_\varepsilon~\eta~dx$ where $b_\varepsilon\in
C^\infty(\bar\Omega)$.
\begin{remark}
The condition $b \in L_2(\Omega)$ in the case $n=2$ is not
sufficient. For example, one can take $\Omega = B_{1/3}$,
$$
b(x) = \frac {x}{|x|^2 \left| \ln |x|\right|^{3/4}}, \qquad u(x) =
\eta(x) = \left| \ln |x|\right|^{3/8} - (\ln 3)^{3/8} .
$$
Then $\int\limits_\Omega \nabla u\cdot b~\eta~dx = \infty$, and
therefore, the corresponding operator $T$ is unbounded.
\end{remark}
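Let us indicate the elementary computation behind the preceding remark. A direct calculation gives
$$
\nabla u(x) \ = \ -\frac 38~\frac{x}{|x|^2 \left|\ln |x|\right|^{5/8}}, \qquad \nabla u(x)\cdot b(x)~\eta(x) \ \sim \ -\frac 38~\frac 1{|x|^2 \left|\ln|x|\right|} \quad \mbox{as} \quad x\to 0 ,
$$
and the last function is not integrable near the origin, since
$$
\int\limits_{B_{1/3}} \frac{dx}{|x|^2 \left|\ln|x|\right|} \ = \ 2\pi \int\limits_0^{1/3} \frac{dr}{r\left|\ln r\right|} \ = \ +\infty .
$$
At the same time $|\nabla u|^2$ and $|b|^2$ behave like $|x|^{-2}\left|\ln|x|\right|^{-5/4}$ and $|x|^{-2}\left|\ln|x|\right|^{-3/2}$ respectively, so $u\in W^1_2(\Omega)$ and $b\in L_2(\Omega)$.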
\begin{remark}
The issue of boundedness and compactness of the operator $T$ in the
case of the whole space, $\Omega = {\mathbb R}^n$, is investigated
in full generality in \cite{MV}, see Theorem \ref{t41} below. In
this section we restrict ourselves by considering assumptions on $b$
only in $L_p$--scale.
\end{remark}
Now, the problem \eqref{Dirichlet} with $\varphi\equiv0$ reduces to
the equation $u + T u = h$ in $\overset{\circ}{W}{^1_2}(\Omega)$
with an appropriate right hand side $h$. The solvability of the last
equation follows from the Fredholm theory. Roughly speaking, ``the
existence follows from the uniqueness''.
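For instance, one can take as $h$ the unique function $h\in \overset{\circ}{W}{^1_2}(\Omega)$ satisfying
$$
\int\limits_\Omega \nabla h\cdot \nabla \eta~dx \ = \ \langle f, \eta\rangle , \qquad \forall~\eta\in \overset{\circ}{W}{^1_2}(\Omega)
$$
(it exists by the Riesz representation theorem). Then, by \eqref{212}, the integral identity \eqref{Integral_Identity_1} for $u\in \overset{\circ}{W}{^1_2}(\Omega)$ (extended to test functions $\eta \in \overset{\circ}{W}{^1_2}(\Omega)$ with the help of \eqref{2125}) is equivalent to
$$
\int\limits_\Omega \nabla ( u + Tu - h)\cdot \nabla \eta~dx \ = \ 0 , \qquad \forall~\eta\in \overset{\circ}{W}{^1_2}(\Omega),
$$
i.e.\ to the equation $u + Tu = h$ in $\overset{\circ}{W}{^1_2}(\Omega)$.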
The uniqueness in the case $b \in L_n(\Omega)$, $n\ge 3$, and
$\operatorname{div} b = 0$ is especially simple. In this situation
$$
\int\limits_\Omega b \cdot \nabla u~ u~dx = 0 \qquad \forall~u\in
\overset{\circ}{W}{^1_2}(\Omega),
$$
and the uniqueness for the problem \eqref{Dirichlet} follows: if $u_1$, $u_2$ are two weak solutions, then $w:=u_1-u_2\in \overset{\circ}{W}{^1_2}(\Omega)$ satisfies $\int\limits_\Omega |\nabla w|^2~dx + \int\limits_\Omega b\cdot \nabla w~ w~dx =0$, so that $\nabla w\equiv 0$ and $w\equiv 0$. In the
general case of drifts satisfying \eqref{210} without the condition
$\operatorname{div} b = 0$ the proof of the uniqueness is more
sophisticated. It requires the maximum principle which can be found,
for example, in \cite{NU}, see Corollary 2.2 and remarks at the end
of Section 2 there.
\begin{theorem}
\label{Strong_max_principle} Let $b$ satisfy \eqref{210}. Assume
$u\in W^1_2(\Omega)$ is a weak solution to the problem
\eqref{Dirichlet} with $f\equiv 0$ and $\varphi \in
L_\infty(\partial \Omega)\cap W^{1/2}_2(\partial \Omega)$. Then
either $u\equiv const $ in $\Omega$ or the following estimate holds:
$$
\operatorname{essinf}\limits_{\partial \Omega} \varphi \ < \ u(x) \
< \ \operatorname{esssup}\limits_{\partial \Omega} \varphi, \qquad
\forall~x\in \Omega.
$$
\end{theorem}
\begin{corollary}
\label{Unique} Let $b$ satisfy \eqref{210}. Then a weak solution to
the problem \eqref{Dirichlet} is unique in the space
$W^1_2(\Omega)$.
\end{corollary}
Now, the solvability of the problem \eqref{Dirichlet} is
straightforward.
\begin{theorem}
\label{Existence_weak_smooth} Let $b$ satisfy \eqref{210}. Then for
any $f \in W^{-1}_2(\Omega)$ and $\varphi \in W^{1/2}_2(\partial
\Omega)$ the problem \eqref{Dirichlet} has the unique weak solution
$u\in W^1_2(\Omega)$, and
$$
\|u\|_{W^1_2(\Omega)} \le C \left(\|f\|_{W^{-1}_2(\Omega)} +
\|\varphi\|_{W^{1/2}_2(\partial\Omega)}\right) .
$$
\end{theorem}
\begin{proof} For $\varphi\equiv0$ Theorem
\ref{Existence_weak_smooth} follows from Fredholm's theory. In the
general case the problem \eqref{Dirichlet} can be reduced to the
corresponding problem with homogeneous boundary conditions for the
function $v:=u-\tilde \varphi$, where $\tilde \varphi$ is some
extension of $\varphi$ from $\partial \Omega$ to $\Omega$ with the
control of the norm $\| \tilde \varphi\|_{W^1_2(\Omega)}\le c\|
\varphi\|_{W^{1/2}_2(\partial \Omega)}$. The function $v$ can be
determined as a weak solution to the problem
\begin{equation}
\left\{ \quad \gathered -\Delta v +b\cdot \nabla v = f +
\Delta\tilde \varphi - b \cdot \nabla \tilde \varphi
\quad \mbox{in} \quad \Omega, \\
v|_{\partial \Omega} = 0 \qquad \endgathered \right.
\label{Dirichlet_homogeneous}
\end{equation}
Under assumption \eqref{210} the right hand side belongs to
$W^{-1}_2(\Omega)$ due to Theorem \ref{Operator_is_compact}.
\end{proof}
Note that for $n\ge 3$ the problems \eqref{Dirichlet} and
\eqref{Dirichlet_homogeneous} are equivalent only in the case of
``regular'' drifts $b\in L_n(\Omega)$. If $b \in L_2(\Omega)$ and
additionally $\operatorname{div} b = 0$, then $b \cdot \nabla \tilde
\varphi \in W^{-1}_{n'}(\Omega)$, $n'=\frac n{n-1}$, and the
straightforward reduction of the problem \eqref{Dirichlet} to the
problem with homogeneous boundary data is not possible.
Finally, to investigate in Section 3 the problem \eqref{Dirichlet}
with divergence-free drifts from $L_2(\Omega)$ we need the following
maximum estimate.
\begin{theorem}
\label{Weak_Max_Principle} Let $b$ satisfy \eqref{210}. Assume
$\varphi$ satisfies \eqref{BC_regular} and let $u\in W^1_2(\Omega)$
be a weak solution to \eqref{Dirichlet} with some $f \in
L_p(\Omega)$, $p>n/2$. Then
{\rm 1)} $u\in L_\infty(\Omega)$ and
\begin{equation}
\| u\|_{L_\infty(\Omega)} \ \le \ \|\varphi\|_{L_\infty(\partial
\Omega)} + C~\|f\|_{L_p(\Omega)}. \label{Max_Pr_2}
\end{equation}
{\rm 2)} If $\operatorname{div} b = 0$ then $C=C(n,p,\Omega)$ does
not depend on $b$.
\end{theorem}
We believe Theorem \ref{Weak_Max_Principle} is known, though it is
difficult for us to identify a precise reference for the statement
we need. So, we present its proof in the Appendix.
\begin{remark}
\label{r211} For $n \ge 3$ consider the following example:
$$
\Omega =B, \qquad u(x) = \ln|x|, \qquad b(x) = (n-2)\frac{x}{|x|^2}
.
$$
The statements of Theorem \ref{Holder}, Theorem
\ref{Strong_max_principle} and Corollary \ref{Unique} are violated
for these functions. On the other hand, $-\Delta u + b\cdot \nabla u
= 0$, $u \in \overset{\circ}{W}{^1_2}(\Omega)$ and $b \in
L_p(\Omega)$ for any $p<n$. It means that for non-divergence free
drifts the condition $b \in L_n(\Omega)$ in \eqref{210} is sharp.
\end{remark}
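The verification behind Remark \ref{r211} is a direct computation, which we record for convenience: away from the origin
$$
\Delta u \ = \ \frac{n-2}{|x|^2}, \qquad \nabla u \ = \ \frac{x}{|x|^2}, \qquad b\cdot \nabla u \ = \ (n-2)\,\frac{x\cdot x}{|x|^4} \ = \ \frac{n-2}{|x|^2},
$$
so that $-\Delta u + b\cdot \nabla u = 0$ in $B\setminus\{0\}$. Moreover, $u|_{\partial B}=0$ and $|\nabla u|^2=|x|^{-2}\in L_1(B)$ for $n\ge 3$, so $u \in \overset{\circ}{W}{^1_2}(B)$, while $\int\limits_B |b|^p~dx \sim \int\limits_0^1 r^{n-1-p}~dr$ is finite exactly for $p<n$.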
\begin{remark}
\label{r212} For $n=2$ the condition $b \in L_2(\Omega)$ is not
sufficient. The statements of Theorem \ref{Holder}, Theorem
\ref{Strong_max_principle} and Corollary \ref{Unique} are violated
for the functions
$$
u(x)=\ln\left| \ln |x|\right|, \qquad b(x) = -\frac{x}{|x|^2\ln|x|}
$$
in a ball $\Omega = B_{1/e}$, nevertheless $b \in L_2(\Omega)$.
Conversely, if in the case $n=2$ we assume that $b \in L_2(\Omega)$
and $\operatorname{div} b = 0$, then the estimate \eqref{2125} is
fulfilled (see \cite{MV} or \cite{Filonov}), and all statements of
this section (Theorems \ref{Holder}, \ref{Operator_is_compact},
\ref{Strong_max_principle}, \ref{Existence_weak_smooth} and
\ref{Weak_Max_Principle}) hold true, see \cite{Filonov} or
\cite{NU}. So, this case can be considered as the regular one. See
also Remark \ref{r43} below.
\end{remark}
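For the pair $(u,b)$ from the first part of Remark \ref{r212} the computation is again direct: away from the origin
$$
\Delta u \ = \ -\frac 1{|x|^2\ln^2|x|}, \qquad \nabla u \ = \ \frac{x}{|x|^2\ln|x|}, \qquad b\cdot \nabla u \ = \ -\frac 1{|x|^2\ln^2|x|},
$$
so $-\Delta u + b\cdot\nabla u = 0$ in $B_{1/e}\setminus\{0\}$, while $u$ is unbounded at the origin. At the same time $|\nabla u|^2=|b|^2 = |x|^{-2}\ln^{-2}|x| \in L_1(B_{1/e})$, so $u\in W^1_2(B_{1/e})$ and $b\in L_2(B_{1/e})$. Note also that here $\operatorname{div} b = |x|^{-2}\ln^{-2}|x|\not\equiv 0$.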
\section{Non-regular divergence-free drifts}
In this section we always assume that
$\operatorname{div} b = 0$. It turns out that this assumption plays
a crucial role in the local boundedness of weak solutions if one
considers drifts $b\in L_p(\Omega)$ with $p<n$, $n\ge 3$. Recall
that the case $n=2$, $b \in L_2(\Omega)$ and $\operatorname{div} b =
0$ can be considered as a regular case, see Remark \ref{r212}. Thus,
below we restrict ourselves to the case $n\ge 3$.
\subsection{Boundary value problem}
We have the following approximation result.
\begin{theorem}
\label{Approximation} Assume $b\in L_2(\Omega)$, $\operatorname{div}
b=0$, $f\in W^{-1}_2(\Omega)$, and let $u\in W^1_2(\Omega)$ be a
weak solution to \eqref{Equation}. Let also $b_k \in L_n(\Omega)$,
$\operatorname{div} b_k=0$, be an arbitrary sequence
satisfying
$$
b_k \to b \quad \mbox{in} \quad L_2(\Omega),
$$
and let $u_k\in W^1_2(\Omega)$ be the unique weak solution to the
problem
\begin{equation}
\label{30}
\left\{ \quad \gathered -\Delta u_k +b_k\cdot \nabla u_k = f, \\
u_k|_{\partial \Omega} = \varphi , \endgathered \right.
\end{equation}
where $\varphi = u|_{\partial \Omega}$. Then
\begin{equation}
u_k\to u \quad \mbox{in} \quad
L_q(\Omega) \quad \mbox{for any} \quad q<\frac n{n-2}.
\label{L_1-conv}
\end{equation}
Moreover, if $\varphi \in L_\infty (\partial\Omega)$ then
\begin{equation}
u_k\rightharpoonup u \quad \mbox{in} \quad W^1_2(\Omega).
\label{Weak_W^1_2}
\end{equation}
Finally, if
$\varphi\equiv 0$ then the energy inequality holds:
\begin{equation}
\int\limits_\Omega |\nabla u|^2 ~dx \ \le \ \langle f, u\rangle .
\label{Energy_inequality}
\end{equation}
\end{theorem}
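Let us also comment on \eqref{Energy_inequality}. For the approximations $u_k$ (with regular $b_k$ and $\varphi\equiv 0$) one has the energy identity
$$
\int\limits_\Omega |\nabla u_k|^2~dx \ = \ \langle f, u_k\rangle
$$
(take $\eta = u_k$ in the integral identity and use $\operatorname{div} b_k = 0$), and one possible way to read \eqref{Energy_inequality} is as the result of passing to the limit here with the help of \eqref{Weak_W^1_2} and of the weak lower semicontinuity of the Dirichlet integral; in general only the inequality survives this limiting procedure.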
The convergence \eqref{L_1-conv} is proved (in its parabolic
version) for $q=1$ in \cite[Proposition 2.4]{Zhang}. Note that the
proof in \cite{Zhang} uses the uniform Gaussian upper bound of the
Green functions of the operators $\partial_t u-\Delta u +b_k\cdot
\nabla u$ (cf. \cite{Aronson}). In the Appendix we present an elementary
proof of Theorem \ref{Approximation} based on the maximum estimate
in Theorem \ref{Weak_Max_Principle} and duality arguments.
Theorem \ref{Approximation} has several consequences. The first of
them is the uniqueness of weak solutions, see \cite{Zhang} and
\cite{Zhikov}:
\begin{theorem}
Let $b\in L_2(\Omega)$, $\operatorname{div} b = 0$. Then a weak
solution to the problem \eqref{Dirichlet} is unique in the class
$W{^1_2}(\Omega)$.
\end{theorem}
Indeed, $u$ is an $L_q$-limit of the approximating sequence $u_k$,
and such a limit is unique. An alternative proof of the uniqueness
(which is in a sense ``direct'', i.e.\ it does not rely upon the
approximation result of Theorem \ref{Approximation}) for $b\in
L_2(\Omega)$, $\operatorname{div} b = 0$, can be found in
\cite{Zhikov} (see also some development in
\cite{Surnachev}). Note that in \cite{Zhikov} it was also shown that
the uniqueness can fail for weak solutions to the equation
\eqref{Equation1} if $b$ satisfies \eqref{Weak_drift} (actually a
condition slightly stronger than \eqref{Weak_drift}) and $\operatorname{div} b =
0$, but $b \notin L_2(\Omega)$.
Another consequence of Theorem \ref{Approximation} is the existence
of weak solution.
\begin{theorem}
\label{Existence} Let $b\in L_2(\Omega)$, $\operatorname{div} b =
0$. Then for any $f \in W^{-1}_2 (\Omega)$ and any $\varphi$
satisfying \eqref{BC_regular} there exists a weak solution to the
problem \eqref{Dirichlet}.
\end{theorem}
Theorem \ref{Existence} is proved in Appendix.
Finally, Theorem \ref{Approximation} allows one to establish the
global boundedness of weak solutions whenever the boundary data are
bounded.
\begin{theorem}
\label{Week_Max_Principle_2} Let $b\in L_2(\Omega)$,
$\operatorname{div} b = 0$, $f \in L_p(\Omega)$, $p>n/2$, and
$\varphi $ satisfies \eqref{BC_regular}. Assume $u\in
{W}{^1_2}(\Omega)$ is a weak solution to \eqref{Dirichlet}. Then
$u\in L_\infty(\Omega)$ and
\begin{equation}
\| u\|_{L_\infty(\Omega)} \ \le \ \|\varphi\|_{L_\infty(\partial
\Omega)} + C~ \|f\|_{L_p(\Omega)}, \label{Max_Pr_4}
\end{equation}
where the constant $C=C(n,p,\Omega)$ is independent of $b$.
\end{theorem}
Theorem \ref{Week_Max_Principle_2} is proved in Appendix.
\subsection{Local properties}
Note that any weak solution to \eqref{Equation} belonging to the
class $W^1_2(\Omega)$ can be viewed as a weak solution to the
problem \eqref{Dirichlet} with some $\varphi \in W^{1/2}_2(\partial\Omega)$.
\begin{theorem}
\label{Local_Boundedness} Assume $\operatorname{div} b = 0$ and
$$
b \in L_p (B) \quad \mbox{where} \quad p=2 \quad \text{if} \quad n =
3 \quad \mbox{and} \quad p \ > \ \frac n2 \quad \text{if} \quad n
\ge 4 .
$$
Let $u\in W^1_2(B)$ be a weak solution to \eqref{Equation} in $B$
with some $f \in L_q (B)$, $q>n/2$.
Then $ u\in L_\infty(B_{1/2})$ and
$$
\| u\|_{L_\infty(B_{1/2})} \ \le \ C~\Big( \| u\|_{W^1_2(B)} +
\|f\|_{L_q(B)}\Big)
$$
where the constant $C$ depends only on $n$, $p$, $q$ and
$\|b\|_{L_p(B)}$.
\end{theorem}
Theorem \ref{Local_Boundedness} was proved (in the parabolic
version) in \cite{Zhang}. For the reader's convenience we present
the proof of this theorem in Appendix.
Let us consider the following
\begin{example}\label{Example_1} Assume $n\ge 4$ and put
$$
u(x) = \ln r, \qquad b = (n-3) \left(~\frac 1r ~{\bf e}_r -
(n-3)~\frac{z}{ r^2} ~{\bf e}_z~\right),
$$
where $r^2 = x_1^2 + ... + x_{n-1}^2$, $z=x_n$, and ${\bf e}_r$,
${\bf e}_z$ are the basis vectors of the corresponding cylindrical
coordinate system in $\mathbb R^n$. Then $u \in
\overset{\circ}{W}{^1_2}(\Omega)$, and
$$
-\Delta u + b\cdot \nabla u \ = 0.
$$
Next, $\operatorname{div} b = 0$, $b(x) = O(r^{-2})$ near the axis
of symmetry, and hence
$$
b\in L_p(B) \quad \mbox{for any} \quad p \ < \ \frac {n-1}2 .
$$
\end{example}
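All claims of Example \ref{Example_1} can be checked directly. For functions of $(r,z)$ one has $\Delta = \partial_r^2 + \frac{n-2}{r}\,\partial_r + \partial_z^2$ and $\operatorname{div}\big( v_r\,{\bf e}_r + v_z\,{\bf e}_z \big) = \frac 1{r^{n-2}}\,\partial_r\big(r^{n-2} v_r\big) + \partial_z v_z$, whence, away from the axis $\{r=0\}$,
$$
\Delta u \ = \ \frac{n-3}{r^2}, \qquad b\cdot \nabla u \ = \ (n-3)\,\frac 1r\cdot \frac 1r \ = \ \frac{n-3}{r^2}, \qquad \operatorname{div} b \ = \ \frac{(n-3)^2}{r^2} - \frac{(n-3)^2}{r^2} \ = \ 0 .
$$
Moreover, $|\nabla u|^2 = r^{-2} \in L_1(B)$ for $n\ge 4$, and $|b| = O(r^{-2})$ near the axis, so that $\int\limits_B |b|^p~dx$ is finite precisely when $-2p+n-2>-1$, i.e.\ for $p<\frac{n-1}2$.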
Clearly, in this example the requirement $b\in L_2(\Omega)$ forces the restriction
$n\ge 6$. So, for divergence-free drifts $b\in L_2(\Omega)$ we have
the following picture. Assume $u\in W^1_2(\Omega)$ is a weak
solution to \eqref{Dirichlet} with $f\in L_p (\Omega)$, $p>n/2$.
Theorem \ref{Week_Max_Principle_2} means that
$$
\varphi \in L_{\infty}(\partial\Omega) \cap
W^{1/2}_2(\partial\Omega) \qquad \Longrightarrow \qquad u\in
L_{\infty}(\Omega) \quad \mbox{for any \ $n\ge 2$} .
$$
The Example \ref{Example_1} shows that for general $\varphi$ we have
$$
\varphi \in W^{1/2}_2(\partial\Omega) \quad \Longrightarrow \quad
\left\{ \ \
\begin{array}l \mbox{if $n\le 3$, then } u\in L_{\infty, loc}(\Omega),
\\ \mbox{if $n\ge 6$, then it is possible that } u\not\in L_{\infty, loc}(\Omega),
\\ \mbox{if $n= 4,5$, then the question is open}. \ \
\end{array}\right.
$$
Theorem \ref{Week_Max_Principle_2} and Example \ref{Example_1}
together establish an interesting phenomenon: for drifts $b\in
L_2(\Omega)$, $\operatorname{div} b = 0$, the property of the
elliptic operator in \eqref{Equation} to improve the ``regularity''
of weak solutions (in the sense that every weak solution is locally
bounded) depends on the behavior of a weak solution on the boundary
of the domain. If the values of $\varphi:=u|_{\partial \Omega}$ on
the boundary are bounded then this weak solution must be bounded as
Theorem \ref{Week_Max_Principle_2} says. On the other hand, if the
function $\varphi$ is unbounded on $\partial \Omega$ then the weak
solution can be unbounded even near internal points of the domain
$\Omega$ as Example \ref{Example_1} shows. In our opinion, such a
behavior of solutions to an elliptic equation is unexpected.
Allowing some abuse of language we can say that {\it non-regularity
of the drift can destroy the hypoellipticity of the operator}.
Theorem \ref{Week_Max_Principle_2} imposes some restrictions on the
structure of the set of singular points of weak solutions. Namely,
let us define a {\it singular point} of a weak solution as a point
such that the weak solution is unbounded in every neighborhood of it,
and then define the {\it singular set} of a weak solution as the set
of all its singular points. It is clear that the singular set is
closed. Theorem \ref{Week_Max_Principle_2} shows that if for some
weak solution its singular set is non-empty then its 1-dimensional
Hausdorff measure must be positive.
\begin{theorem}
\label{Structure} Let $b\in L_2(\Omega)$, $\operatorname{div} b =
0$, and let $u\in W^1_2(\Omega)$ be a weak solution to
\eqref{Equation} with $f \in L_p(\Omega)$, $p>n/2$. Denote by
$\Sigma\subset \bar \Omega$ the singular set of $u$ and assume
$\Sigma\cap\Omega\not=\emptyset$. Then no point of the set
$\Sigma\cap \Omega$ can be surrounded by a smooth closed
$(n-1)$-dimensional surface $S\subset \bar \Omega$ such that
$u|_{S}\in L_\infty(S)$. In particular, this means that
\begin{equation}
\mathcal
H^1(\Sigma)>0, \quad \Sigma\cap\partial \Omega\not=\emptyset,
\label{Hausdorff}
\end{equation}
where $\mathcal H^1$ is one-dimensional Hausdorff measure in
$\mathbb R^n$.
\end{theorem}
\begin{proof} The first assertion is clear. Let us prove
\eqref{Hausdorff}. Assume $\Sigma\cap \Omega\not=\emptyset$ and
$x_0\in \Sigma\cap \Omega$. Denote $d:=\operatorname{dist}\{ x_0,
\partial \Omega\}$. Let $z_0\in \partial \Omega$ be a point such
that $|z_0-x_0|=d$ and denote by $[x_0,z_0]$ the straight line
segment connecting $x_0$ with $z_0$. Let us take arbitrary
$\delta>0$ and consider any countable covering of $\Sigma$ by open
balls $\{B_{\rho_i}(y_i)\}$ such that $\rho_i\le \delta$. For any
$i$ denote $r_i:=|x_0-y_i|$. If $r_i\le d$ then denote $z_i:=
[x_0,z_0]\cap
\partial B_{r_i} (x_0)$. By Theorem \ref{Week_Max_Principle_2}, for
any $r\le d$ we have $\Sigma\cap \partial B_r(x_0)\not=\emptyset$; indeed,
otherwise $u$ would be bounded on $\partial B_r(x_0)$ and hence, by Theorem
\ref{Week_Max_Principle_2} applied in the ball $B_r(x_0)$, bounded in
$B_r(x_0)$, which contradicts $x_0\in \Sigma$.
Therefore,
$$
[x_0,z_0] \ \subset \ \bigcup\limits_{r_i\le d} B_{\rho_i}(z_i).
$$
This inclusion means that
$$
\mathcal H^1(\Sigma) \ \ge \ \mathcal H^1\left([x_0,z_0]\right) = d
> 0.
$$
\end{proof}
Theorem \ref{Structure} in particular implies that no isolated
singularity is possible. This is exactly what Example \ref{Example_1}
demonstrates: the singular set in this case is the axis of symmetry.
Note that the divergence free condition brings significant
improvements into the local boundedness results. Without the
condition $\operatorname{div} b = 0$ one can prove local boundedness
of weak solutions to \eqref{Equation} only for $b\in L_n(\Omega)$
($n\ge 3$), while if $\operatorname{div} b = 0$ the local
boundedness is valid for any $b\in L_p(\Omega)$ with $p>\frac n2$.
Note also that at the moment of writing of this paper we can say
nothing about analogues of either Theorem \ref{Local_Boundedness}
or Example \ref{Example_1} if $p\in [\frac{n-1}2, \frac n2]$. {\it
We state this problem as an open question.}
The final issue we need to discuss is the problem of further
regularity of solutions to the equation \eqref{Equation}. The
example of a bounded weak solution which is not locally continuous
was constructed originally in \cite{SSSZ} for $n=3$ and $b \in L_1
(\Omega)$, $\operatorname{div} b = 0$ (actually the method of
\cite{SSSZ} allows one to extend their example to $b \in L_p$, $p\in
[1,2)$). Later the first author in \cite{Filonov} generalized this
example to all $n\ge 3$ and all $p\in [1,n)$.
\begin{theorem}
\label{Filonov_2013} Assume $n\ge 3$, $p < n$. Then there exist
$b\in L_p(B)$ satisfying $\operatorname{div} b = 0$ and a weak
solution $u$ to \eqref{Equation} with $f\equiv 0$ such that $u\in
W^1_2(B)\cap L_\infty(B)$ but $u\not \in C(\bar B_{1/2})$.
\end{theorem}
The latter result shows that if one is interested in the local
continuity of weak solutions then the assumption $b\in L_n(\Omega)$
cannot be weakened in the Lebesgue scale, and the structure
condition $\operatorname{div} b = 0$ does not help in this
situation.
It is not difficult to construct also a weak solution to
\eqref{Equation} which is continuous but not H\" older continuous.
\begin{example} \label{Example_2} Assume $n\ge 4$ and take
$$
u(x) = \frac 1{\ln r}, \quad b = \left(\frac {n-3}r - \frac 2{r\ln
r}\right) {\bf e}_r \ + \ \left( \frac {(n-3)^2}{r^2} -
\frac{2(n-3)}{r^2 \ln r} -\frac{2}{r^2\ln^2 r}~\right)z~{\bf e}_z .
$$
Here $r^2 = x_1^2 + ... + x_{n-1}^2$, $z=x_n$, and ${\bf e}_r$,
${\bf e}_z$ are the basis vectors of the cylindrical coordinate
system. Then $u\in W^1_2(B_{1/2})\cap C(B_{1/2})$, $-\Delta u+b\cdot
\nabla u = 0 $, $\operatorname{div} b=0$ in $B_{1/2}$ and $b\in
L_p(B_{1/2})$ for any $p<\frac {n-1}2$.
\end{example}
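We do not reproduce the (similar, but longer) verification of the equation here and only indicate the two properties relevant for the discussion: near the axis $\{r=0\}$ one has $|b| = O(r^{-2})$, so, as in Example \ref{Example_1}, $b\in L_p(B_{1/2})$ exactly for $p<\frac{n-1}2$; and at the points of the axis the modulus of continuity of $u$ is $1/|\ln r|$, which tends to zero slower than any power $r^\alpha$, so $u$ is continuous but not H\" older continuous there.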
Thus, for weak solutions of \eqref{Equation} with $b\in
L_2(\Omega)$, $\operatorname{div} b = 0$, in large space dimensions
(at least for $n\ge 6$) the following sequence of implications can
break at any step:
$$
u\in W^1_2(\Omega) \ \ \ \not\!\Longrightarrow \ \ u\in L_{\infty,
loc}(\Omega) \ \ \ \not\!\Longrightarrow \ \ u\in C_{loc}(\Omega) \
\ \ \not\!\Longrightarrow \ \ u\in C^\alpha_{loc}(\Omega).
$$
\section{Beyond the $\boldsymbol{L_p}$--scale}
Theorem \ref{Filonov_2013} shows that in order to obtain the local
continuity of weak solutions to \eqref{Equation} for drifts weaker
than $b\in L_n(\Omega)$ one needs to go beyond the Lebesgue scale.
We start with the question of the boundedness of the operator $T$
defined by the formula \eqref{212}. The necessary and sufficient
condition on $b$ is obtained in \cite{MV} in the case $\Omega =
{\mathbb R}^n$.
\begin{theorem}
\label{t41} The inequality \eqref{2125} holds true if and only if
the drift $b$ can be represented as a sum $b = b_0 + b_1$, where the
function $b_0$ is such that
\begin{equation}
\label{41} \int\limits_{{\mathbb R}^n} |b_0|^2 |\eta|^2~dx \ \le \
C~\int\limits_{{\mathbb R}^n} |\nabla\eta|^2~dx, \qquad
\forall~\eta\in C_0^\infty({\mathbb R}^n) ,
\end{equation}
$b_1$ is divergence-free, $\operatorname{div} b_1 = 0$, and $b_1 \in
BMO^{-1}(\mathbb R^n)$. It means that $b_1(x) = \operatorname{div}
A(x)$, $A(x)$ is a skew-symmetric matrix, $A_{ij} = - A_{ji}$, and
$A_{ij} \in BMO ({\mathbb R}^n)$.
\end{theorem}
Here $BMO (\Omega)$ is the space of functions $f$ with {\it bounded
mean oscillation}, i.e.
$$
\sup_{{\tiny \begin{array}c x\in \Omega \\ 0<r< \infty\end{array}}}
\frac1{r^n} \int\limits_{B_r(x)\cap \Omega} |f(y) - (f)_{B_r(x)\cap
\Omega}|~dy < \infty, \quad \text{where} \quad (f)_{\omega} =
\frac1{|\omega|} \int\limits_{\omega} f(y)~dy.
$$
Clearly, each divergence-free vector $b_1$ can be represented as
$b_1 = \operatorname{div} A$ with a skew-symmetric matrix $A(x)$.
This theorem shows that the behaviour of the bilinear form
$\int\limits_\Omega \nabla u\cdot b~\eta~dx$ already distinguishes
between general drifts and divergence-free drifts. First, let us
discuss general drifts. If $b$ satisfies \eqref{210} then it
satisfies the estimate \eqref{41} too. But we cannot use the
condition \eqref{41} instead of \eqref{210} in the regularity theory,
as the example of Remark \ref{r211} shows. Indeed, for functions
satisfying
\begin{equation}
\label{42} |b(x)| \ \le \ \frac{C}{|x|}
\end{equation}
the estimate \eqref{41} is fulfilled by the Hardy inequality.
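For the reader's convenience we recall that for $n\ge 3$ the Hardy inequality reads
$$
\int\limits_{{\mathbb R}^n} \frac{|\eta|^2}{|x|^2}~dx \ \le \ \left( \frac 2{n-2}\right)^2 \int\limits_{{\mathbb R}^n} |\nabla \eta|^2~dx, \qquad \forall~\eta\in C_0^\infty({\mathbb R}^n),
$$
so for any $b_0$ satisfying \eqref{42} the estimate \eqref{41} holds with the constant $\left(\frac{2C}{n-2}\right)^2$, where $C$ is the constant from \eqref{42}.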
On the other hand, the case of a drift $b$ having a one-point
singularity (say, at the origin) with asymptotics including
functions homogeneous of degree $-1$, as in \eqref{42}, is also
interesting. There are several papers, see \cite{Lis_Zhang},
\cite{Sem}, \cite{SSSZ} and \cite{NU}, dealing with different
classes of divergence-free drifts which cover \eqref{42}. All these
papers contain also the results for parabolic equation \eqref{113},
but we discuss only (simplified) elliptic versions of them. We
address the interested readers to the original papers.
The approach of \cite{SSSZ} seems to be the most general one. Assume
$b \in BMO^{-1}(\Omega)$ and $\operatorname{div} b = 0$. In this
case we understand the equation $-\Delta u + b\cdot \nabla u \ = 0$
in the sense of the integral identity
\begin{equation}
\label{43} \int_\Omega \left(\nabla u \cdot \nabla \eta + A \nabla u
\cdot \nabla \eta\right) dx = 0 \qquad \forall~\eta\in
C_0^\infty(\Omega),
\end{equation}
where the skew-symmetric matrix $A \in BMO (\Omega)$ is defined via
$\operatorname{div} A(x) = b(x)$.
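Let us explain, at a formal level, how \eqref{43} arises; in this computation we assume $A$ and $u$ smooth, sum over repeated indices, and use the convention $(\operatorname{div} A)_i = \partial_j A_{ij}$. For $\eta\in C_0^\infty(\Omega)$ integration by parts gives
$$
\int\limits_\Omega (b\cdot \nabla u)~\eta~dx \ = \ \int\limits_\Omega \partial_j A_{ij}~\partial_i u~\eta~dx \ = \ - \int\limits_\Omega A_{ij}~\partial_j\partial_i u~\eta~dx \ - \ \int\limits_\Omega A_{ij}~\partial_i u~\partial_j \eta~dx \ = \ \int\limits_\Omega A\nabla u\cdot \nabla \eta~dx ,
$$
where the integral containing the second derivatives of $u$ vanishes (a skew-symmetric matrix is paired with the symmetric matrix $\partial_j\partial_i u$), while $-A_{ij}~\partial_i u~\partial_j\eta = (A\nabla u)\cdot \nabla\eta$ by the skew-symmetry of $A$. Note that the right hand side makes sense for $A\in BMO(\Omega)$, $u\in W^1_2(\Omega)$ and $\eta\in C_0^\infty(\Omega)$, since $BMO(\Omega)\subset L_{p,loc}(\Omega)$ for every $p<\infty$.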
\begin{theorem}
\label{t42} Let $b \in BMO^{-1}(\Omega)$ and $\operatorname{div} b =
0$. Then
{\rm 1)} The maximum principle holds. If $u\in W^1_2(\Omega)$
satisfies \eqref{43} and $\varphi :=
\left.u\right|_{\partial\Omega}$ is bounded, then
$\|u\|_{L_\infty(\Omega)} \le
\|\varphi\|_{L_\infty(\partial\Omega)}$. In particular, the weak
solution to \eqref{Dirichlet} is unique.
{\rm 2)} Any weak solution $u$ to \eqref{Equation} is H\" older
continuous, $u \in C^\alpha_{loc} (\Omega)$ for some $\alpha > 0$.
\end{theorem}
For the proof see \cite{NU} or \cite{SSSZ}. The regularity theory
developed in Section \ref{sreg} is slightly better, as it guarantees
that weak solutions are locally H\" older continuous with {\it any}
exponent $\alpha < 1$. Nevertheless, Theorem \ref{t42} means that
divergence-free drifts from $BMO^{-1}$ can be also considered as
regular ones.
\begin{remark}
\label{r43} Note that the case $n=2$, $b \in L_2(\Omega)$,
$\operatorname{div} b = 0$, is the particular case of this
situation. Indeed, such drifts can be represented as a
vector-function with components $b_1 = \partial_2 h$, $b_2 = -
\partial_1 h$, where $h$ is a scalar function $h \in W^1_2(\Omega)$.
By the imbedding theorem $W^1_2(\Omega) \subset BMO(\Omega)$ we have
$$
A(x) = \left( \begin{array}{cc}
0 & -h(x) \\
h(x) & 0 \end{array} \right) \in BMO(\Omega) .
$$
\end{remark}
\section{Appendix}
First we prove Theorem
\ref{Weak_Max_Principle}.
\begin{proof} We present the proof in the case $n\ge 3$ only. The case $n=2$
differs from it by routine technical details.
1) The statement similar to our estimate \eqref{Max_Pr_2} (for more
general equations) can be found in \cite{Stampacchia}. In
particular, in \cite[Theorem 4.2]{Stampacchia} the following
estimate for weak solutions to the problem
\begin{equation}
\left\{ \quad \gathered -\Delta u +b\cdot \nabla u = f \quad \mbox{in} \quad \Omega, \\
u|_{\partial \Omega} = 0, \qquad \endgathered \right.
\label{Dirichlet_3}
\end{equation}
was proved:
\begin{equation}
\| u\|_{L_\infty(\Omega)} \ \le \ C~\Big( ~\| f\|_{L_p(\Omega)} +
\|u\|_{L_2(\Omega)}~\Big). \label{Stamp}
\end{equation}
On the other hand,
\begin{equation}
\|u\|_{W^1_2(\Omega)} \ \le \ C~\|f\|_{W^{-1}_2(\Omega)}
\label{Zero_RHS}
\end{equation}
due to Theorem \ref{Existence_weak_smooth}. Hence we can exclude the
weak norm of $u$ from the right hand side of \eqref{Stamp} and
obtain the estimate \eqref{Max_Pr_2} in the case $\varphi\equiv 0$.
In the general case we can split a weak solution $u$ of the problem
\eqref{Dirichlet} as $u=u_1+u_2$, where $u_1$ is a weak solution of
\eqref{Dirichlet_3} and $u_2$ is a weak solution to the problem
\eqref{Dirichlet} with the boundary data $\varphi$ and zero right
hand side. For $u_1$ we have \eqref{Zero_RHS} and for $u_2$ we have
$\|u_2\|_{L_\infty(\Omega)} \ \le \ \|\varphi\|_{L_\infty(\partial
\Omega)}$ by Theorem \ref{Strong_max_principle}.
2) As $b \in L_n(\Omega)$ we can complete the integral identity
\eqref{Integral_Identity_1} up to the test functions $\eta\in
\overset{\circ}{W}{^1_2}(\Omega)$. Denote
$k_0:=\|\varphi\|_{L_\infty(\partial\Omega)}$ and assume $k\ge k_0$.
Take in \eqref{Integral_Identity_1} $\eta=(u-k)_+$, where we denote
$(u)_+:=\max\{ u,0\}$. As $k\ge k_0$ we have $\eta\in
\overset{\circ}{W}{^1_2}(\Omega)$ and $\nabla \eta =
\chi_{A_k}\nabla u$ where $\chi_{A_k}$ is the characteristic
function of the set
$$A_k \ := \ \{ ~x\in \Omega: ~u(x)>k~\}.$$
We obtain the identity
$$
\int\limits_{A_k} |\nabla u|^2~dx \ + \ \int\limits_{A_k} b\cdot
(u-k)\nabla u ~dx \ = \ \int\limits_{A_k} f (u-k)~dx .
$$
The second term vanishes
$$
\int\limits_{A_k} b\cdot (u-k)\nabla u ~dx \ = \ \frac
12~\int\limits_\Omega b\cdot \nabla |(u-k)_+|^2~dx \ = \ 0,
$$
as $\operatorname{div} b = 0$, and hence
$$
\int\limits_{A_k} |\nabla u|^2~dx \ = \ \int\limits_{A_k} f
(u-k)~dx, \qquad \forall~k\ge k_0.
$$
The rest of the proof goes as in the usual elliptic theory. Applying
the imbedding theorem we obtain
$$
\left(~\int\limits_{A_k} |\nabla u|^2~dx\right)^{\frac 12} \ \le \
C(n)~\left(~ \int\limits_{A_k} |f|^{\frac
{2n}{n+2}}~dx\right)^{\frac {n+2}{2n}},
$$
and using the H\" older inequality we get
$$
\|f\|_{L_{\frac {2n}{n+2}}(A_k)} \ \le \ |A_k|^{\frac {n+2}{2n} -
\frac 1p}~ \|f\|_{L_p(A_k)} .
$$
So we arrive at
$$
\int\limits_{A_k} |\nabla u|^2~dx \ \le \ C(n)~
\|f\|_{L_p(\Omega)}^2~|A_k|^{1-\frac 2n+\varepsilon} , \qquad
\forall~k\ge k_0,
$$
where $\varepsilon:=2\left(\frac 2n-\frac 1p\right)>0$. This
inequality yields the following estimate, see \cite[Chapter II,
Lemma 5.3]{LU},
$$
\operatorname{esssup}\limits_\Omega (u-k_0)_+ \ \le \ C(n,p,
\Omega)~\|f\|_{L_p(\Omega)} .
$$
The estimate of $\operatorname{essinf}\limits_\Omega u$ can be
obtained in a similar way if we replace $u$ by $-u$. \end{proof}
In order to prove Theorem \ref{Approximation} we need some auxiliary
results.
\begin{theorem}
\label{L_1-theorem} Assume $n\ge 3$, $b\in C^\infty(\bar \Omega)$,
$\operatorname{div} b=0$ in $\Omega$, $f\in L_1(\Omega)$, and assume
$u\in \overset{\circ}{W}{^1_2}(\Omega)$
is a weak solution of \eqref{Dirichlet} with $\varphi\equiv 0$.
Then for any $q\in \big[1,\frac n{n-2}\big)$
the following estimate holds:
\begin{equation}
\| u\|_{L_q(\Omega)} \ \le \ C(n,q,\Omega)~\|f\|_{L_1(\Omega)}.
\label{L_1-estimate}
\end{equation}
\end{theorem}
\begin{proof} Assume first that $q\in \big(1,\frac n{n-2}\big)$; the case
$q=1$ then follows by the H\" older inequality, since $\Omega$ is bounded. By
duality we have
$$
\| u\|_{L_q(\Omega)} \ = \ \sup\limits_{g\in L_{q'}(\Omega), \
\|g\|_{L_{q'}(\Omega)}\le 1} \ \int\limits_\Omega ug~dx ,
$$
where $q':=\frac q{q-1}$, $q'>\frac n2$. For any $g\in
L_{q'}(\Omega)$ denote by $w_g \in W^2_{q'}(\Omega)$ a solution to
the problem
$$
\left\{ \ \ \gathered -\Delta w_g - b\cdot \nabla w_g \ = \
g\qquad \mbox{in} \quad \Omega, \\ w_g|_{\partial \Omega} \ = \ 0.
\qquad \qquad
\endgathered \right.
$$
From Theorem \ref{Weak_Max_Principle} we conclude that for $w_g$
the following estimate holds:
$$
\| w_g\|_{L_\infty(\Omega)} \ \le \ C(n, q,
\Omega)~\|g\|_{L_{q'}(\Omega)}.
$$
Integrating by parts, and using $\operatorname{div} b = 0$ together with the zero boundary values of $u$ and $w_g$, we obtain
$$
\int\limits_\Omega ug~dx \ = \ \int\limits_\Omega u(-\Delta w_g -
b\cdot \nabla w_g)~dx \ = \ \int\limits_\Omega \nabla u \cdot
(\nabla w_g + bw_g)~dx \ = \ \int\limits_\Omega fw_g~dx .
$$
Then for any $g\in L_{q'}(\Omega)$ such that
$\|g\|_{L_{q'}(\Omega)}\le 1$ we get
$$
\int\limits_\Omega ug~dx \ \ = \
\int\limits_\Omega fw_g~dx \ \le \
\|f\|_{L_1(\Omega)}\| w_g\|_{L_\infty(\Omega)} \ \le \ C(n, q,
\Omega)~\|f\|_{L_1(\Omega)} .
$$
Hence we obtain \eqref{L_1-estimate}. \end{proof}
Another auxiliary result we need is the following extension theorem.
\begin{theorem}
\label{Extension} Assume $\Omega\subset \mathbb R^n$ is a bounded
domain of class $C^1$. Then there exists a bounded linear extension
operator $T: L_\infty(\partial \Omega)\cap W^{1/2}_2(\partial
\Omega) \to L_\infty(\Omega)\cap W^{1}_2( \Omega)$ such that
$$
T\varphi|_{\partial \Omega} \ = \ \varphi, \qquad \forall~\varphi
\in L_\infty(\partial \Omega)\cap W^{1/2}_2(\partial \Omega),
$$
$$
\| T\varphi\|_{W^1_2(\Omega)} \ \le \
C(\Omega)~\|\varphi\|_{W^{1/2}_2(\partial \Omega)}, \qquad
\|T\varphi\|_{L_\infty(\Omega)} \ \le \
C(\Omega)~\|\varphi\|_{L_\infty(\partial\Omega)} .
$$
\end{theorem}
\begin{proof} For the sake of completeness we briefly
recall the proof of Theorem \ref{Extension}. After the localization
and flattening of the boundary it is sufficient to construct the
extension operator from $\mathbb R^{n-1}$ to $\mathbb
R^n_+:=\mathbb R^{n-1}\times (0, +\infty)$. Then we can take the
standard operator
$$
(T\varphi)(x', x_n) \ = \ \eta(x_n)~\int\limits_{\mathbb R^{n-1}}
\varphi(x'-x_n\xi')\psi(\xi')~d\xi', \qquad (x',x_n)\in \mathbb
R^n_+,
$$
where $x':=(x_1, \ldots, x_{n-1}) \in \mathbb R^{n-1}$, $\eta\in
C_0^\infty(\mathbb R)$, $\eta(0)=1$, $\psi\in C_0^\infty(\mathbb
R^{n-1})$, $\int\limits_{\mathbb R^{n-1}}\psi(\xi')~d\xi' = 1$.
This operator is bounded from $W^{1/2}_2(\mathbb R^{n-1})$ to
$W^1_2(\mathbb R^n_+)$ and also from $L_\infty(\mathbb R^{n-1})$
to $L_\infty(\mathbb R^n_+)$. More details can be found in
\cite{BIN}.
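Let us note that, at least for continuous $\varphi$, the trace property is immediate
from this formula: since $\eta(0)=1$ and $\int_{\mathbb R^{n-1}}\psi(\xi')~d\xi'=1$,
$$
(T\varphi)(x', 0) \ = \ \eta(0)~\int\limits_{\mathbb R^{n-1}}
\varphi(x')\psi(\xi')~d\xi' \ = \ \varphi(x'), \qquad x'\in \mathbb R^{n-1},
$$
while for general $\varphi\in L_\infty(\partial\Omega)\cap W^{1/2}_2(\partial\Omega)$
the identity $T\varphi|_{\partial \Omega}=\varphi$ is understood in the sense of traces.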
\end{proof}
Now we can give an elementary proof of Theorem \ref{Approximation}.
\begin{proof} The function
$v_k:=u_k-u\in \overset{\circ}{W}{^1_2}(\Omega)$ is a weak solution
to the problem
$$
\left\{ \quad \gathered -\Delta v_k +b_k\cdot \nabla v_k \ = \ f_k
\quad \mbox{in}\quad \Omega ,\\ v_k|_{\partial \Omega} \ = \ 0
,\qquad \qquad
\endgathered \right.
$$
where
$$
f_k:=(b-b_k)\cdot \nabla u, \qquad f_k\in L_1(\Omega), \qquad
\|f_k\|_{L_1(\Omega)} \ \to \ 0.
$$
Assume $q\in \big[1, \frac n{n-2}\big)$. By Theorem
\ref{L_1-theorem} we have
$$
\| v_k\|_{L_q(\Omega)} \ \le \ C(n, q, \Omega)~\|f_k\|_{L_1(\Omega)} \
\to \ 0 ,
$$
and hence \eqref{L_1-conv} follows.
Now assume additionally $\varphi\in L_\infty(\partial\Omega)$.
Denote $\tilde \varphi:=T\varphi$ where $T$ is the extension
operator from Theorem \ref{Extension}. Taking in the integral
identity \eqref{Integral_Identity_2} for $u_k$ and $b_k$ the test
function $\eta = u_k-\tilde \varphi \in
\overset{\circ}{W}{^1_2}(\Omega)$ we obtain
$$
\int\limits_\Omega |\nabla u_k|^2~dx \ - \ \int\limits_\Omega u_k
b_k\cdot \nabla (u_k-\tilde \varphi)~dx \ = \ \int\limits_\Omega
\nabla u_k \cdot \nabla \tilde \varphi~dx \ + \ \langle f,
u_k-\tilde \varphi\rangle .
$$
Using the condition $\operatorname{div} b_k=0$ we get
$$
\int\limits_\Omega u_k b_k\cdot \nabla (u_k-\tilde \varphi)~dx \ = \
\int\limits_\Omega \tilde \varphi b_k\cdot \nabla (u_k-\tilde
\varphi)~dx .
$$
Therefore,
$$
\gathered \|\nabla u_k\|_{L_2(\Omega)}^2 \ \le \ \Big( \|\tilde
\varphi\|_{L_\infty( \Omega)}\|b_k \|_{L_2(\Omega)} \ + \ \|
f\|_{W^{-1}_2(\Omega)}\Big) \Big(\|u_k\|_{W^1_2(\Omega)}+ \|\tilde
\varphi\|_{W^1_2(\Omega)}\Big) \ + \\ + \ \|\nabla
u_k\|_{L_2(\Omega)}\|\nabla \tilde \varphi\|_{L_2(\Omega)} .
\endgathered
$$
Applying Friedrichs' and Young's inequalities we obtain the
estimate
\begin{equation}
\| u_k\|_{W^1_2(\Omega)} \ \le \ C, \label{W^1_2-estimate}
\end{equation}
with a constant $C$ independent of $k$. As the convergence
\eqref{L_1-conv} is already established, from \eqref{W^1_2-estimate}
we derive \eqref{Weak_W^1_2}.
Finally, if $\varphi\equiv 0 $ then we have the energy identities
for $u_k$
$$
\int\limits_\Omega |\nabla u_k|^2 \ = \ \langle f, u_k\rangle ,
$$
and using the weak convergence \eqref{Weak_W^1_2} we arrive at
\eqref{Energy_inequality}. \end{proof}
Now we turn to the proof of Theorem \ref{Existence}.
\begin{proof}
We take a sequence $b_k\in C^\infty(\bar \Omega)$,
$\operatorname{div} b_k=0$, such that $b_k\to b$ in $L_2(\Omega)$.
Let $u_k\in W^1_2(\Omega)$ be a weak solution to the problem
\eqref{30}. Repeating the arguments in the proof of Theorem
\ref{Approximation}, we obtain the estimate \eqref{W^1_2-estimate}
with a constant $C$ independent of $k$. Using this estimate we can
extract a subsequence satisfying \eqref{Weak_W^1_2} for some $u\in
W^1_2(\Omega)$. The weak convergence \eqref{Weak_W^1_2} and the
strong convergence $b_k\to b$ in $L_2(\Omega)$ allow us to pass to
the limit in the integral identities \eqref{Integral_Identity_1}
corresponding to $u_k$ and $b_k$. Therefore, $u$ is a weak solution
to \eqref{Dirichlet}. \end{proof}
Now we present the proof of Theorem \ref{Week_Max_Principle_2}.
\begin{proof}
Let $b_k$ be smooth divergence-free vector fields such that $b_k\to
b$ in $L_2(\Omega)$. Denote by $u_k$ the weak solution to the
problem \eqref{30}. By Theorem \ref{Weak_Max_Principle}
\begin{equation}
\| u_k\|_{L_\infty(\Omega)} \ \le \ \|\varphi\|_{L_\infty(\partial
\Omega)} + C~\|f\|_{L_p(\Omega)} \label{Max_Pr_3}
\end{equation}
with the constant $C$ depending only on $n$, $p$ and $\Omega$. From
Theorem \ref{Approximation} we have the convergence $u_k\to u$ in
$L_1(\Omega)$ and hence we can extract a subsequence (for which we
keep the same notation) such that
$$
u_k \to u \quad \mbox{a.e. in} \quad \Omega.
$$
Passing to the limit in \eqref{Max_Pr_3} we obtain \eqref{Max_Pr_4}.
\end{proof}
Finally we give the proof of Theorem \ref{Local_Boundedness}.
\begin{proof}
To simplify the presentation we give the proof only in the case
$f\equiv 0$. The extension of the result to a non-zero right-hand
side can be done by standard methods, see \cite[Theorem
4.1]{Han_Lin} or \cite{Preprint_Darmstadt}. First we derive the
estimate
\begin{equation}
\| u\|_{L_{\infty}(B_{1/2})} \ \le \ C
\left(1+\|b\|_{L_p(B)}\right)^\mu~ \| u\|_{L_{2p'}(B)}, \qquad
p':=\frac p{p-1} \label{We_want}
\end{equation}
(with some positive constants $C$ and $\mu$ depending only on $n$
and $p$) under the additional assumption $u\in C^\infty(B)$. We
employ Moser's iteration technique, see \cite{Moser}. Let $\beta
\ge 0$ be arbitrary and let $\zeta\in C_0^\infty(B)$ be a cut-off
function. Take the test function $ \eta = \zeta^2|u|^{\beta} u $ in
the identity \eqref{Integral_Identity_1}. Denote $ w \ := \
|u|^{\frac {\beta+2} 2}$. Then after integration by parts and some
routine calculations we obtain the inequality
\begin{equation}
\gathered \int\limits_B |\nabla(\zeta w)|^2~dx \ \le \ C ~
\int\limits_B |w|^2~\Big(|\nabla \zeta|^2 +
|b|~ |\nabla \zeta|\Big)
~dx
\endgathered
\label{Instead}
\end{equation}
Applying the imbedding theorem and the H\" older
inequality, and choosing the cut-off function $\zeta$ in an appropriate
way, we arrive at the inequality
$$
\| w\|_{L_{\frac {2n}{n-2}}(B_r)} \ \le \ C\left( \frac{1 }{R-r} \ +
\ \|b\|_{L_p(B_R)} \right)~ \| w\|_{L_{2p'}(B_R)},
$$
which holds for any $\frac 12\le r<R\le 1$. Note that $\frac
{2n}{n-2}>2p'$ as $p>\frac n2$ if $n\ge 4$ and $p=2$ if $n=3$. The
latter inequality gives us the estimate
\begin{equation}
\| u\|_{L_{\frac{n\gamma}{n-2} }(B_r)} \ \le \ C^{\frac{2}\gamma}
\left(\frac{1}{R-r} + \|b\|_{L_p(B_R)} \right)^{\frac 2{ \gamma}}~
\| u\|_{L_{p'\gamma}(B_R)} \label{Iterate}
\end{equation}
where $\gamma:=\beta+2\ge 2$ is arbitrary. Denote
$s_0=2p'$, $s_m:=\chi s_{m-1} $, where
$\chi:=\frac{n(p-1)}{p(n-2)}$, and denote also $R_m= \frac 12
+\frac1{2^{m+1}}$. Taking in \eqref{Iterate} $r=R_m$, $R=R_{m-1}$,
$\gamma=\frac{s_{m-1}}{p'}$ we obtain
$$
\gathered \| u\|_{L_{s_m }(B_{R_m})} \ \le \ \Big( C~2^{m+1}
\ + \ C\|b\|_{L_p(B)}
\Big)^{\frac 1{ \chi^{m-1}}}~ \|
u\|_{L_{s_{m-1}}(B_{R_{m-1}})}
\endgathered
$$
Iterating this inequality we arrive at \eqref{We_want}.
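Indeed, since $\chi>1$ we have $s_m=\chi^m s_0\to \infty$, $R_m\searrow \frac 12$, and
the series $\sum\limits_{m\ge 1}\chi^{-(m-1)}=\frac \chi{\chi-1}$ and
$\sum\limits_{m\ge 1} m\,\chi^{-(m-1)}$ converge. Hence, letting $m\to\infty$ in the
iterated inequality, we obtain
$$
\| u\|_{L_{\infty}(B_{1/2})} \ \le \
\prod\limits_{m=1}^{\infty}\Big( C~2^{m+1} \ + \ C\|b\|_{L_p(B)}\Big)^{\frac
1{\chi^{m-1}}}~ \| u\|_{L_{2p'}(B)} \ \le \ C(n,p)~
\left(1+\|b\|_{L_p(B)}\right)^{\mu}~\| u\|_{L_{2p'}(B)}
$$
with, for instance, $\mu=\frac \chi{\chi-1}$.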
Now we need to get rid of the assumption $u\in C^\infty(B)$.
Let $u\in W^1_2(B)$ be an arbitrary weak solution to
\eqref{Equation}. Let $\zeta\in C_0^\infty(B)$ be a cut-off function
such that $\zeta\equiv 1$ on $B_{5/6}$ and denote $v:=\zeta u$. Then
$v$ is a weak solution to the boundary value problem
$$
\left\{ \quad \gathered -\Delta v +b\cdot \nabla v = g \quad \mbox{in} \quad B \\
v|_{\partial B} = 0 \qquad \endgathered \right.
$$
where
$$
g:= -u\Delta \zeta - 2\nabla u \cdot \nabla \zeta+ bu\cdot \nabla
\zeta.
$$
Note that $g \equiv 0$ and $v\equiv u$ on $B_{5/6}$. As $b\in
L_p(B)$ with $p>\frac n2$ we have $g\in W^{-1}_2(B)$. Now we take
a sequence $b_k\in C^\infty(\bar B)$, $\operatorname{div} b_k=0$,
such that $b_k\to b$ in $L_p(B)$ and let $v_k$ be the weak
solution to the problem
$$
\left\{ \quad \gathered -\Delta v_k +b_k\cdot \nabla v_k = g \quad \mbox{in} \quad B \\
v_k|_{\partial B} = 0 \qquad \endgathered \right.
$$
From Theorem \ref{Approximation} we have $v_k\rightharpoonup v$ in
$W^1_2(B)$ and as $p>\frac n2$ we can extract a subsequence (for
which we keep the same notation) such that $v_k\to v$ a.e. in $B$
and $v_k\to v$ in $L_{2p'}(B)$. As $g\equiv 0$ on $B_{5/6}$ from the
usual elliptic theory (see \cite{LU}) we conclude that $v_k\in
C^\infty(B_{5/6})$. Applying \eqref{We_want} (with the obvious
modification in radius) we obtain the estimate
$$
\| v_k \|_{L_{\infty}(B_{1/2})} \ \le \ C
\left(1+\|b_k\|_{L_p(B)}\right)^\mu~ \| v_k\|_{L_{2p'}(B_{3/4})}.
$$
Hence the functions $v_k$ are uniformly bounded on $B_{1/2}$. Passing to the limit in
the above inequality and taking into account that $v=u$ on $B_{
5/6}$ we obtain
$$
\| u \|_{L_{\infty}(B_{1/2})} \ \le \ C
\left(1+\|b\|_{L_p(B)}\right)^\mu~\| u\|_{L_{2p'}(B_{ 3/4})}.
$$
To conclude the proof we remark that for $p>\frac n2$ from the
imbedding theorem we have
$$
\| u\|_{L_{2p'}(B)} \ \le \ C(n,p)~\|u\|_{W^1_2(B)}.
$$
\end{proof}
\end{document}
\begin{document}
\pagestyle{plain}
\title{Multiplication of Weak Equivalence Classes May Be Discontinuous}
\begin{abstract}
For a countably infinite group $\Gamma$, let ${\mathcal{W}}_\Gamma$ denote the space of all weak equivalence classes of measure\-/preserving actions of $\Gamma$ on atomless standard probability spaces, equipped with the compact metrizable topology introduced by Ab\'ert and Elek. There is a natural multiplication operation on ${\mathcal{W}}_\Gamma$ (induced by taking products of actions) that makes ${\mathcal{W}}_\Gamma$ an Abelian semigroup. Burton, Kechris, and Tamuz showed that if $\Gamma$ is amenable, then ${\mathcal{W}}_\Gamma$ is a topological semigroup, i.e., the product map ${\mathcal{W}}_\Gamma \times {\mathcal{W}}_\Gamma \to {\mathcal{W}}_\Gamma \colon (\mathfrak{a}, \mathfrak{b}) \mapsto \mathfrak{a} \times \mathfrak{b}$ is continuous. In contrast to that, we prove that if $\Gamma$ is a Zariski dense subgroup of $\mathrm{SL}_d(\mathbb{Z})$ for some $d \geqslant 2$ (for instance, if $\Gamma$ is a non-Abelian free group), then multiplication on ${\mathcal{W}}_\Gamma$ is discontinuous, even when restricted to the subspace ${\mathcal{W}}Free_\Gamma$ of all free weak equivalence classes.
\end{abstract}
\section{Introduction}
\noindent Throughout, $\Gamma$ denotes a countably infinite group with identity element $\mathbf{1}$. In this paper, we study \emphd{probability measure\-/preserving \ep{{p.m.p.}\xspace} actions} of $\Gamma$, i.e., actions of the form $\alpha \colon \Gamma \curvearrowright (X, \mu)$, where $(X, \mu)$ is a standard probability space and $\mu$ is preserved by $\alpha$. We say that a {p.m.p.}\xspace action $\alpha \colon \Gamma \curvearrowright (X, \mu)$ is \emphd{free} if the stabilizer of $\mu$-almost every point $x \in X$ is trivial, i.e., if
\[
\mu(\set{x \in X \,:\, \gamma \cdot x \neq x \text{ for all } \mathbf{1} \neq \gamma \in \Gamma}) = 1.
\]
The concepts of \emph{weak containment} and \emph{weak equivalence} of {p.m.p.}\xspace actions of $\Gamma$ were introduced by Kechris in~\cite[Section 10(C)]{K_book}. They were inspired by the analogous notions for unitary representations and are closely related to the so-called \emph{local\-/global convergence} in the theory of graph limits~\cite{LocalGlobal}. Roughly speaking, a {p.m.p.}\xspace action $\alpha \colon \Gamma \curvearrowright (X, \mu)$ is weakly contained in another {p.m.p.}\xspace action $\beta \colon \Gamma \curvearrowright (Y, \nu)$, in symbols $\alpha \preccurlyeq \beta$, if the interaction between any finite measurable partition of $X$ and a finite collection of elements of $\Gamma$ can be simulated, with arbitrarily small error, by a measurable partition of $Y$ (see \S\ref{subsec:weak_cont_defn} for the precise definition). If both $\alpha \preccurlyeq \beta$ and $\beta \preccurlyeq \alpha$, then $\alpha$ and $\beta$ are said to be weakly equivalent, in symbols $\alpha \simeq \beta$.
The relation of weak equivalence is much coarser than the conjugacy relation, which makes it relatively well-behaved. On the other hand, several interesting parameters associated with {p.m.p.}\xspace actions---such as their cost, type, etc.---turn out to be invariants of weak equivalence. Due to these favorable properties, the relations of weak containment and weak equivalence have attracted a considerable amount of attention in recent years. For a survey of the topic, see \cite{BK}.
We denote the weak equivalence class of a {p.m.p.}\xspace action $\alpha$ by $\wec{\alpha}$. Define
\[
{\mathcal{W}}_\Gamma \coloneqq \set{\wec{\alpha} \,:\, \alpha \colon \Gamma \curvearrowright (X, \mu), \text{ where } (X, \mu) \text{ is atomless}}.
\]
A weak equivalence class $\mathfrak{a} \in {\mathcal{W}}_\Gamma$ is \emphd{free} if $\mathfrak{a} = \wec{\alpha}$ for some free {p.m.p.}\xspace action $\alpha$. Let
\[
{\mathcal{W}}Free_\Gamma \coloneqq \set{\mathfrak{a} \in {\mathcal{W}}_\Gamma \,:\, \mathfrak{a} \text{ is free}}.
\]
Freeness is an invariant of weak equivalence \cite[Theorem~3.4]{BK}; in other words, if $\mathfrak{a} \in {\mathcal{W}}Free_\Gamma$, then all {p.m.p.}\xspace actions $\alpha$ with $\wec{\alpha} = \mathfrak{a}$ are free.
Ab\'ert and Elek introduced a natural topology on ${\mathcal{W}}_\Gamma$ and proved that it is compact and metrizable \cite[Theorem~1]{AbertElek} (see also \cite[Theorem~10.1]{BK}). Under this topology, ${\mathcal{W}}Free_\Gamma$ becomes a closed subset of ${\mathcal{W}}_\Gamma$ \cite[Corollary~10.7]{BK}. We review the definition of this topology in \S\ref{subsec:top_defn}.
The \emphd{product} of two {p.m.p.}\xspace actions $\alpha \colon \Gamma \curvearrowright (X, \mu)$ and $\beta \colon \Gamma \curvearrowright (Y, \nu)$ is the action
\[
\alpha \times \beta \colon \Gamma \curvearrowright (X \times Y, \mu \times \nu), \qquad \text{given by} \qquad \gamma \cdot (x, y) \coloneqq (\gamma \cdot x, \gamma \cdot y).
\]
It is easily seen that the weak equivalence class of $\alpha \times \beta$ is determined by the weak equivalence classes of $\alpha$ and $\beta$ (for completeness, we include a proof of this fact---see Corollary~\ref{corl:mult}), hence there is a well-defined multiplication operation on ${\mathcal{W}}_\Gamma$, namely
\[
\wec{\alpha} \times \wec{\beta} \coloneqq \wec{\alpha \times \beta}.
\]
Equipped with this operation, ${\mathcal{W}}_\Gamma$ is an Abelian semigroup and ${\mathcal{W}}Free_\Gamma$ is a subsemigroup (in fact, an ideal) in ${\mathcal{W}}_\Gamma$. We are interested in the following natural question:
\begin{ques}[{\cite[Problem 10.36]{BK}}]\label{ques:main}
Is ${\mathcal{W}}_\Gamma$ a topological semigroup? In other words, is the map ${\mathcal{W}}_\Gamma \times {\mathcal{W}}_\Gamma \to {\mathcal{W}}_\Gamma \colon (\mathfrak{a}, \mathfrak{b}) \mapsto \mathfrak{a} \times \mathfrak{b}$ continuous?
\end{ques}
Burton, Kechris, and Tamuz answered Question~\ref{ques:main} positively when the group $\Gamma$ is amenable \cite[Theorem~10.37]{BK}. A crucial role in their argument is played by the identification of the space ${\mathcal{W}}_\Gamma$ for amenable $\Gamma$ with the space of the so-called \emph{invariant random subgroups} of $\Gamma$ \cite[Theorem~10.6]{BK}. Note that the continuity of multiplication on the subspace ${\mathcal{W}}Free_\Gamma$ for amenable $\Gamma$ is a triviality, since if $\Gamma$ is amenable, then ${\mathcal{W}}Free_\Gamma$ contains only a single point \cite[15]{BK}. On the other hand, if $\Gamma$ is nonamenable, then ${\mathcal{W}}Free_\Gamma$ has cardinality continuum \cite[Remark 4.3]{T-D}.
The goal of this paper is to give a \emph{negative} answer to Question~\ref{ques:main} for a certain class of nonamenable groups $\Gamma$, including the non-Abelian free groups:
\begin{theo}\label{theo:main}
Let $d \geqslant 2$ and let $\Gamma \leqslant \mathrm{SL}_d(\mathbb{Z})$ be a subgroup that is Zariski dense in $\mathrm{SL}_d(\mathbb{R})$.
\begin{enumerate}[label={\ep{\normalfont{}\arabic*}}]
\item\label{item:squaring} The map
$
{\mathcal{W}}Free_\Gamma \to {\mathcal{W}}Free_\Gamma \colon \mathfrak{a} \mapsto \mathfrak{a} \times \mathfrak{a}
$
is discontinuous.
\item\label{item:fiber} There is $\mathfrak{b} \in {\mathcal{W}}Free_\Gamma$ such that the map
$
{\mathcal{W}}Free_\Gamma \to {\mathcal{W}}Free_\Gamma \colon \mathfrak{a} \mapsto \mathfrak{a} \times \mathfrak{b}
$
is discontinuous.
\end{enumerate}
\end{theo}
As observed in \cite[\S10.2]{BK}, part \ref{item:squaring} of Theorem~\ref{theo:main} yields the following corollary:
\begin{corl}
There exists a countable group $\Delta$ with a normal subgroup $\Gamma \lhd \Delta$ of index $2$ such that the co\-/induction map ${\mathcal{W}}_\Gamma \to {\mathcal{W}}_\Delta$ is discontinuous.
\end{corl}
\begin{proof}[\textsc{Proof}]
Let $d \geqslant 2$ and let $\Gamma \leqslant \mathrm{SL}_d(\mathbb{Z})$ be any Zariski dense subgroup. Set $\Delta \coloneqq \Gamma \times (\mathbb{Z}/2\mathbb{Z})$ and identify $\Gamma$ with a normal subgroup of $\Delta$ of index $2$ in the obvious way. Then, for any {p.m.p.}\xspace action $\alpha$ of $\Gamma$, the restriction of the co-induced action $\mathrm{CInd}_\Gamma^\Delta(\alpha)$ back to $\Gamma$ is isomorphic to $\alpha \times \alpha$. Since the restriction map ${\mathcal{W}}_\Delta \to {\mathcal{W}}_\Gamma$ is continuous \cite[Proposition 10.10]{BK}, Theorem~\ref{theo:main}\ref{item:squaring} forces the co-induction map ${\mathcal{W}}_\Gamma \to {\mathcal{W}}_\Delta$ to be discontinuous. For details, see \cite[\S10.2]{BK}.
\end{proof}
In view of Theorem~\ref{theo:main} and the result of Burton, Kechris, and Tamuz, it is tempting to conjecture that ${\mathcal{W}}_\Gamma$ is a topological semigroup \emph{if and only if} $\Gamma$ is amenable. However, at this point we do not even know whether multiplication of weak equivalence classes is discontinuous for every countable group that contains a non-Abelian free subgroup.
Our proof of Theorem~\ref{theo:main} provides explicit examples of sequences of {p.m.p.}\xspace actions that witness the discontinuity of multiplication on ${\mathcal{W}}_\Gamma$. We describe one such example here. Let $d \geqslant 2$ and let $\Gamma \leqslant \mathrm{SL}_d(\mathbb{Z})$ be a Zariski dense subgroup. For a prime $p$, let $\mathbb{Z}_p$ denote the ring of $p$-adic integers. Then $\mathrm{SL}_d(\mathbb{Z}_p)$ is an infinite profinite group. Since $\mathrm{SL}_d(\mathbb{Z})$ naturally embeds in $\mathrm{SL}_d(\mathbb{Z}_p)$, we may identify $\Gamma$ with a subgroup of $\mathrm{SL}_d(\mathbb{Z}_p)$ and consider the left multiplication action $\alpha_p \colon \Gamma \curvearrowright \mathrm{SL}_d(\mathbb{Z}_p)$, which we view as a {p.m.p.}\xspace action by putting the Haar probability measure on $\mathrm{SL}_d(\mathbb{Z}_p)$. Let $\mathfrak{a}_p$ denote the weak equivalence class of $\alpha_p$. Using the compactness of ${\mathcal{W}}_\Gamma$, we can pick an increasing sequence of primes $p_0$, $p_1$, \ldots{} such that the sequence $(\mathfrak{a}_{p_i})_{i\in\mathbb{N}}$ converges in ${\mathcal{W}}_\Gamma$ to some weak equivalence class $\mathfrak{a}$. Then it follows from our results that the sequence $(\mathfrak{a}_{p_i} \times \mathfrak{a}_{p_i})_{i \in \mathbb{N}}$ does \emph{not} converge to $\mathfrak{a} \times \mathfrak{a}$, thus demonstrating that multiplication on ${\mathcal{W}}_\Gamma$ is discontinuous.
The main tools that we use to prove Theorem~\ref{theo:main} come from the study of expansion properties in finite groups of Lie type, specifically the groups $\mathrm{SL}_d(\mathbb{Z}/n\mathbb{Z})$ for $n \in \mathbb{N}^+$. Our primary reference for this subject is the book \cite{Tao_book}.
This paper is organized as follows. Section~\ref{sec:prelim} contains some basic definitions (such as the definition of the weak equivalence relation and the topology on the space ${\mathcal{W}}_\Gamma$) and a few preliminary results. In Section~\ref{sec:step}, we introduce the terminology pertaining to step functions and use it in Section~\ref{sec:criterion} to prove Theorem~\ref{theo:criterion}, an explicit criterion for continuity of multiplication, which is of some independent interest. Finally, the proof of Theorem~\ref{theo:main} is presented in Section~\ref{sec:proof}.
\section{Preliminaries}\label{sec:prelim}
\subsection{Basic notation, conventions, and terminology}
We use $\mathbb{N}$ to denote the set of all nonnegative integers and let $\mathbb{N}^+ \coloneqq \mathbb{N} \setminus \set{0}$. Each $k \in \mathbb{N}$ is identified with the set $\set{i \in \mathbb{N} \,:\, i < k}$.
For a set $S$, we use $\fins{S}$ to denote the set of all finite subsets of $S$.
For a standard probability space $(X, \mu)$ and $k \in \mathbb{N}^+$, we use ${\mathrm{Meas}}_k(X,\mu)$ to denote the space of all measurable maps $f \colon X \to k$, equipped with the pseudometric
\[
{\mathrm{dist}}_\mu(f, g) \coloneqq \mu(\set{x \in X \,:\, f(x) \neq g(x)}).
\]
\subsection{The relations of weak containment and weak equivalence}\label{subsec:weak_cont_defn}
A number of equivalent definitions of weak containment exist, and several of them can be found in~\cite[\S\S2.1, 2.2]{BK}. We use the definition given in \cite[\S2.2(1)]{BK}, as it is particularly well-suited for introducing the topology on the space of weak equivalence classes.
Let $\alpha \colon \Gamma \curvearrowright (X,\mu)$ be a {p.m.p.}\xspace action of $\Gamma$. For $S \in \fins{\Gamma}$, $k \in \mathbb{N}^+$, and $f \in {\mathrm{Meas}}_k(X,\mu)$, define
\[
\wecSetFun \alpha S k f \colon S \times k \times k \to [0;1]
\]
by setting, for all $\gamma \in S$ and $i$, $j < k$,
\[
\wecSetFun \alpha S k f (\gamma, i, j) \coloneqq \mu(\set{x \in X \,:\, f(x) = i,\ f(\gamma \cdot x) = j}).
\]
Thus, $\wecSetFun \alpha S k f$ is a vector in the unit cube $\mathbb{I}_{S,k} \coloneqq [0;1]^{S \times k \times k}$. For $F \subseteq {\mathrm{Meas}}_k(X, \mu)$, let
\[
\wecSetFun \alpha S k F \coloneqq \set{\wecSetFun \alpha S k f \,:\, f \in F},
\]
and define $\wecSet \alpha S k $ to be the closure of the set $\wecSetFun \alpha S k {{\mathrm{Meas}}_k(X, \mu)}$ in $\mathbb{I}_{S,k}$.
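Although the paper works with atomless standard probability spaces, the defining formula for $\wecSetFun \alpha S k f$ makes sense verbatim for actions on finite sets equipped with the uniform measure, and in that toy setting it can be computed directly. The following short Python sketch is purely illustrative and is not used anywhere in the proofs; the function name and the encoding of the action by permutations are our own choices.
\begin{verbatim}
import numpy as np

def wec_vector(perms, f, k):
    # perms: one permutation per gamma in S, given as an integer array
    #        with perms[s][x] = gamma . x
    # f:     integer array of length |X| with values in {0, ..., k-1}
    # Returns the array of shape (|S|, k, k) whose (s, i, j) entry is the
    # uniform measure of the set {x : f(x) = i and f(gamma . x) = j}.
    X = len(f)
    out = np.zeros((len(perms), k, k))
    for s, perm in enumerate(perms):
        for x in range(X):
            out[s, f[x], f[perm[x]]] += 1.0 / X
    return out
\end{verbatim}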
\begin{defn}\label{defn:weak}
Let $\alpha$ and $\beta$ be {p.m.p.}\xspace actions of $\Gamma$. We say that $\alpha$ is \emphd{weakly contained} in $\beta$, in symbols $\alpha \preccurlyeq \beta$, if for all $S \in \fins{\Gamma}$ and $k \in \mathbb{N}^+$, we have \[\wecSet \alpha S k \subseteq \wecSet \beta S k.\] If simultaneously $\alpha \preccurlyeq \beta$ and $\beta \preccurlyeq \alpha$, i.e., if for all $S \in \fins{\Gamma}$ and $k \in \mathbb{N}^+$, we have \[\wecSet \alpha S k = \wecSet \beta S k,\] then $\alpha$ and $\beta$ are said to be \emphd{weakly equivalent}, in symbols $\alpha \simeq \beta$.
\end{defn}
In view of Definition~\ref{defn:weak}, we refer to the sequence
\[
\wec{\alpha} \coloneqq (\wecSet \alpha S k )_{S,k},
\]
where $S$ and $k$ run over $\fins{\Gamma}$ and $\mathbb{N}^+$ respectively, as the \emphd{weak equivalence class} of $\alpha$. Let
\[
{\mathcal{W}}_\Gamma \coloneqq \set{\wec{\alpha} \,:\, \alpha \colon \Gamma \curvearrowright (X, \mu), \text{ where } (X, \mu) \text{ is atomless}}.
\]
A weak equivalence class $\mathfrak{a} \in {\mathcal{W}}_\Gamma$ is \emphd{free} if $\mathfrak{a} = \wec{\alpha}$ for some free {p.m.p.}\xspace action $\alpha$. Define
\[
{\mathcal{W}}Free_\Gamma \coloneqq \set{\mathfrak{a} \in {\mathcal{W}}_\Gamma \,:\, \mathfrak{a} \text{ is free}}.
\]
\begin{theo}[{\cite[Theorem~3.4]{BK}}]
Let $\alpha$ and $\beta$ be {p.m.p.}\xspace actions of~$\Gamma$. If $\alpha$ is free and $\alpha \preccurlyeq \beta$, then $\beta$ is also free. In particular, if $\mathfrak{a} \in {\mathcal{W}}Free_\Gamma$, then all {p.m.p.}\xspace actions $\alpha$ with $\wec{\alpha} = \mathfrak{a}$ are free.
\end{theo}
\subsection{The space of weak equivalence classes}\label{subsec:top_defn}
We now proceed to define the topology on ${\mathcal{W}}_\Gamma$. For $S \in \fins{\Gamma}$ and $k \in \mathbb{N}^+$, the cube $\mathbb{I}_{S, k} = [0;1]^{S \times k \times k}$ is equipped with the \emphd{$\infty$-metric}:
\[
{\mathrm{dist}}_\infty(u, v) = \|u - v\|_\infty \coloneqq \max_{\gamma, i, j} |u(\gamma, i, j) - v(\gamma,i,j)|.
\]
Let ${\mathcal{K}}(\mathbb{I}_{S, k})$ denote the set of all nonempty compact subsets of $\mathbb{I}_{S, k}$. For $C \in {\mathcal{K}}(\mathbb{I}_{S, k})$, let
\[
\mathrm{Ball}_\varepsilon(C) \coloneqq \set{u \in \mathbb{I}_{S, k} \,:\, {\mathrm{dist}}_\infty(u, C) < \varepsilon}.
\]
Define the \emphd{Hausdorff metric} on ${\mathcal{K}}(\mathbb{I}_{S, k})$ by
\[
{\mathrm{dist}}_H(C_1, C_2) \coloneqq \inf \set{\varepsilon > 0 \,:\, C_1 \subseteq \mathrm{Ball}_\varepsilon(C_2) \text{ and } C_2 \subseteq \mathrm{Ball}_\varepsilon(C_1)}.
\]
This metric makes ${\mathcal{K}}(\mathbb{I}_{S, k})$ into a compact space \cite[Theorem 4.26]{K_DST}. By definition, for any {p.m.p.}\xspace action $\alpha$, we have $\wecSet {\alpha} S k \in {\mathcal{K}}(\mathbb{I}_{S, k})$, so ${\mathcal{W}}_\Gamma$ is a subset of the compact metrizable space \[\prod_{S, k} {\mathcal{K}}(\mathbb{I}_{S, k}),\] where the product is over all $S \in \fins{\Gamma}$ and $k \in \mathbb{N}^+$, and as such, ${\mathcal{W}}_\Gamma$ inherits a relative topology.
The following fundamental result is due to Ab\'ert and Elek:
\begin{theo}[{Ab\'ert--Elek \cite[Theorem~1]{AbertElek}; see also \cite[Theorem~10.1]{BK}}]
The set ${\mathcal{W}}_\Gamma$ is closed in $\prod_{S, k} {\mathcal{K}}(\mathbb{I}_{S,k})$. In other words, the space ${\mathcal{W}}_\Gamma$ is compact.
\end{theo}
The subspace ${\mathcal{W}}Free_\Gamma$ is also compact:
\begin{theo}[{\cite[Corollary~10.7]{BK}}]
The set ${\mathcal{W}}Free_\Gamma$ is closed in ${\mathcal{W}}_\Gamma$.
\end{theo}
\subsection{The map $\wecSetFun \alpha S k -$ is Lipschitz}
The purpose of this short subsection is to record the following simple observation:
\begin{prop}\label{prop:Lipschitz}
Let $\alpha \colon \Gamma \curvearrowright (X, \mu)$ be a {p.m.p.}\xspace action of $\Gamma$. If $k \in \mathbb{N}^+$ and $f$, $g \in {\mathrm{Meas}}_k(X,\mu)$, then, for any $S \in \fins{\Gamma}$,
\[
{\mathrm{dist}}_\infty(\wecSetFun \alpha S k f, \wecSetFun \alpha S k {g}) \,\leqslant\, 2 \cdot {\mathrm{dist}}_\mu(f, g).
\]
\end{prop}
\begin{proof}[\textsc{Proof}]
Take any $\gamma \in S$ and $i$, $j < k$ and let
\[
A \coloneqq \set{x \in X \,:\, f(x) = i, \ f(\gamma \cdot x) = j} \qquad \text{and} \qquad B \coloneqq \set{x \in X \,:\, g(x) = i, \ g(\gamma \cdot x) = j}.
\]
Then, by definition,
\[
|\wecSetFun \alpha S k f (\gamma, i, j) \,-\, \wecSetFun \alpha S k {g} (\gamma, i, j) | \,=\, |\mu(A) \,-\, \mu(B)| \,\leqslant\, \mu(A \bigtriangleup B).
\]
If $x \in A \bigtriangleup B$, then $f(x) \neq g(x)$ or $f(\gamma \cdot x) \neq g(\gamma \cdot x)$, so $\mu(A \bigtriangleup B) \leqslant 2 \cdot {\mathrm{dist}}_\mu(f,g)$, as desired.
\end{proof}
\section{Step functions}\label{sec:step}
\noindent In this section we establish some basic facts pertaining to step functions on products of probability spaces. In particular, we show that multiplication is a well-defined operation on ${\mathcal{W}}_\Gamma$.
To begin with, we need a few definitions. Let $(X, \mu)$ and $(Y, \nu)$ be standard probability spaces and let $k$, $N \in \mathbb{N}^+$. We call a map $f \in {\mathrm{Meas}}_k(X \times Y, \mu \times \nu)$ an \emphd{$N$-step function} if there exist
\[
g \in {\mathrm{Meas}}_N(X, \mu), \qquad h \in {\mathrm{Meas}}_N(Y, \nu), \qquad \text{and} \qquad \varphi \colon N \times N \to k,
\]
such that $f = \varphi \circ (g, h)$, i.e., we have
\[
f(x, y) = \varphi(g(x), h(y)) \qquad \text{for all } x \in X \text{ and } y \in Y.
\]
Let ${\mathrm{St}}ep_{k,N}(X, \mu; Y, \nu) \subseteq {\mathrm{Meas}}_k(X \times Y, \mu \times \nu)$ denote the set of all $N$-step functions and let
\begin{equation}\label{eq:step}
{\mathrm{St}}ep_k(X, \mu; Y, \nu) \coloneqq \bigcup_{N \in \mathbb{N}^+} {\mathrm{St}}ep_{k,N}(X, \mu; Y, \nu).
\end{equation}
The maps in ${\mathrm{St}}ep_k(X, \mu; Y, \nu)$ are called \emphd{step functions}. Note that the union in~\eqref{eq:step} is increasing. It is a basic fact in measure theory that the set ${\mathrm{St}}ep_k(X, \mu; Y, \nu)$ is dense in ${\mathrm{Meas}}_k(X \times Y, \mu \times \nu)$.
It will be useful to have a concrete description of the vectors of the form $\wecSetFun {\alpha \times \beta} S k f$, where $f$ is a step function. To that end, we introduce the following operation:
\begin{defn}
Let $k$, $N \in \mathbb{N}^+$ and $\varphi \colon N \times N \to k$. Given $S \in \fins{\Gamma}$ and vectors $u$, $v \in \mathbb{R}^{S\times N \times N}$, the \emphd{$\varphi$-convolution} $u \ast_\varphi v \in \mathbb{R}^{S \times k \times k}$ of $u$ and $v$ is given by the formula
\[
(u \ast_\varphi v)(\gamma, i, j) \coloneqq \sum_{(a, b) \in \varphi^{-1}(i)}\, \sum_{(c, d) \in \varphi^{-1}(j)} \, u(\gamma, a, c) \cdot v(\gamma, b, d).
\]
\end{defn}
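The $\varphi$-convolution is a finite combinatorial operation, and it may be helpful to see it written out explicitly. The following Python sketch is illustrative only (the array layout and the function name are our own choices); it evaluates $u \ast_\varphi v$ by direct summation over $\varphi^{-1}(i) \times \varphi^{-1}(j)$.
\begin{verbatim}
import numpy as np

def phi_convolution(u, v, phi):
    # u, v: arrays of shape (|S|, N, N)
    # phi:  integer array of shape (N, N) with values in {0, ..., k-1}
    # Returns the array u *_phi v of shape (|S|, k, k).
    S, N, _ = u.shape
    k = int(phi.max()) + 1
    w = np.zeros((S, k, k))
    for s in range(S):
        for a in range(N):
            for b in range(N):
                for c in range(N):
                    for d in range(N):
                        w[s, phi[a, b], phi[c, d]] += u[s, a, c] * v[s, b, d]
    return w
\end{verbatim}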
The next proposition is an immediate consequence of the definitions:
\begin{prop}\label{prop:conv}
Let $\alpha \colon \Gamma \curvearrowright (X, \mu)$, $\beta \colon \Gamma \curvearrowright (Y, \nu)$ be {p.m.p.}\xspace actions of~$\Gamma$. Let $k$, $N \in \mathbb{N}^+$ and
\[
g \in {\mathrm{Meas}}_N(X, \mu), \qquad h \in {\mathrm{Meas}}_N(Y, \nu), \qquad \text{and} \qquad \varphi \colon N \times N \to k.
\]
Set $f \coloneqq \varphi \circ (g, h)$. Then, for any $S \in \fins{\Gamma}$,
\[
\wecSetFun {\alpha \times \beta} S k f \,=\, \wecSetFun \alpha S N g \ast_\varphi \wecSetFun \beta S N h.
\]
\end{prop}
It is useful to note that the $\varphi$-convolution operation is Lipschitz on $\mathbb{I}_{S,N}$:
\begin{prop}\label{prop:conv_Lip}
Let $k$, $N \in \mathbb{N}^+$ and $\varphi \colon N \times N \to k$. For all $S \in \fins{\Gamma}$ and $u$, $v$, $\tilde{u}$, $\tilde{v} \in \mathbb{I}_{S,N}$,
\[
{\mathrm{dist}}_\infty (u \ast_\varphi v, \, \tilde{u} \ast_\varphi \tilde{v}) \,\leqslant\, N^4 \cdot ({\mathrm{dist}}_\infty(u, \tilde{u}) + {\mathrm{dist}}_\infty (v, \tilde{v})).
\]
\end{prop}
\begin{proof}[\textsc{Proof}]
Since
\[
{\mathrm{dist}}_\infty (u \ast_\varphi v, \, \tilde{u} \ast_\varphi \tilde{v}) \,\leqslant\, {\mathrm{dist}}_\infty (u \ast_\varphi v, \, \tilde{u} \ast_\varphi v) + {\mathrm{dist}}_\infty (\tilde{u} \ast_\varphi v, \, \tilde{u} \ast_\varphi \tilde{v}),
\]
it suffices to prove the inequality when, say, $v = \tilde{v}$. To that end, take $\gamma \in S$ and $i$, $j < k$. We have
\begin{align*}
|(u \ast_\varphi v)(\gamma, i, j) \,-\, (\tilde{u} \ast_\varphi v)&(\gamma, i, j)| \,\leqslant\, \sum_{(a, b) \in \varphi^{-1}(i)}\, \sum_{(c, d) \in \varphi^{-1}(j)} \, |u(\gamma, a, c) \,-\, \tilde{u}(\gamma, a, c)| \cdot v(\gamma, b, d)\\
&\leqslant\, |\varphi^{-1}(i)|\cdot |\varphi^{-1}(j)| \cdot {\mathrm{dist}}_\infty(u, \tilde{u}) \,\leqslant\, N^4 \cdot {\mathrm{dist}}_\infty(u, \tilde{u}). \qedhere
\end{align*}
\end{proof}
\begin{corl}\label{corl:mult}
If $\alpha$, $\tilde{\alpha}$, and $\beta$ are {p.m.p.}\xspace actions of $\Gamma$ and $\alpha \preccurlyeq \tilde{\alpha}$, then $\alpha \times \beta \preccurlyeq \tilde{\alpha} \times \beta$. In particular, the multiplication operation on ${\mathcal{W}}_\Gamma$ is well-defined.
\end{corl}
\begin{proof}[\textsc{Proof}]
Let $\alpha \colon \Gamma \curvearrowright (X, \mu)$, $\tilde{\alpha} \colon \Gamma \curvearrowright (\tilde{X}, \tilde{\mu})$, and $\beta \colon \Gamma \curvearrowright (Y, \nu)$ be {p.m.p.}\xspace actions of $\Gamma$ and suppose that $\alpha \preccurlyeq \tilde{\alpha}$. Take any $S \in \fins{\Gamma}$ and $k \in \mathbb{N}^+$. By Proposition~\ref{prop:Lipschitz} and since ${\mathrm{St}}ep_k(X, \mu; Y, \nu)$ is dense in ${\mathrm{Meas}}_k(X \times Y, \mu \times \nu)$, it suffices to show that for all $f \in {\mathrm{St}}ep_{k}(X, \mu; Y, \nu)$ and $\varepsilon > 0$, there is $\tilde{f} \in {\mathrm{Meas}}_k(\tilde{X} \times Y, \tilde{\mu} \times \nu)$ such that
\[
{\mathrm{dist}}_\infty(\wecSetFun {\alpha \times \beta} S k f, \, \wecSetFun {\tilde{\alpha} \times \beta} S k {\tilde{f}}) \,< \,\varepsilon.
\]
Let $N \in \mathbb{N}^+$, $g \in {\mathrm{Meas}}_N(X, \mu)$, $h \in {\mathrm{Meas}}_N(Y, \nu)$, and $\varphi \colon N \times N \to k$ be such that $f = \varphi \circ (g, h)$. Since $\alpha \preccurlyeq \tilde{\alpha}$, there is a map $\tilde{g} \in {\mathrm{Meas}}_N(\tilde{X}, \tilde{\mu})$ with
\[
{\mathrm{dist}}_\infty(\wecSetFun \alpha S N g,\, \wecSetFun {\tilde{\alpha}} S N {\tilde{g}}) \,<\, \varepsilon N^{-4}.
\]
From Propositions~\ref{prop:conv} and \ref{prop:conv_Lip}, it follows that the map $\tilde{f} \coloneqq \varphi \circ (\tilde{g}, h)$ is as desired.
\end{proof}
\section{A criterion of continuity}\label{sec:criterion}
\noindent The purpose of this section is to establish an explicit necessary and sufficient condition for the continuity of multiplication on the space of weak equivalence classes.
Recall that a subset $Y$ of a metric space $X$ is called an \emphd{$\varepsilon$-net} if for every $x \in X$, there is $y \in Y$ such that the distance between $x$ and $y$ is less than $\varepsilon$.
\begin{lemdef}
Let $\alpha \colon \Gamma \curvearrowright (X, \mu)$ and $\beta \colon \Gamma \curvearrowright (Y, \nu)$ be {p.m.p.}\xspace actions of~$\Gamma$. For any $S \in \fins{\Gamma}$, $k \in \mathbb{N}^+$, and $\varepsilon > 0$, there exists $N \in \mathbb{N}^+$ such that the set
\[
\wecSetFun {\alpha \times \beta} S k {{\mathrm{St}}ep_{k,N}(X, \mu; Y, \nu)}
\]
is an $\varepsilon$-net in $\wecSet {\alpha \times \beta} S k$. We denote the smallest such $N$ by $N_{S,k}(\alpha, \beta, \varepsilon)$.
Furthermore, the value $N_{S, k}(\alpha, \beta, \varepsilon)$ is determined by the weak equivalence classes of $\alpha$ and $\beta$, so we can define $N_{S, k}(\wec{\alpha}, \wec{\beta}, \varepsilon) \coloneqq N_{S,k}(\alpha, \beta, \varepsilon)$.
\end{lemdef}
\begin{proof}[\textsc{Proof}]
By Proposition~\ref{prop:Lipschitz} and since ${\mathrm{St}}ep_k(X, \mu; Y, \nu)$ is dense in ${\mathrm{Meas}}_k(X \times Y, \mu \times \nu)$, the set
\[
\wecSetFun {\alpha \times \beta} S k {{\mathrm{St}}ep_{k}(X, \mu; Y, \nu)}
\]
is dense in $\wecSet {\alpha \times \beta} S k$. The existence of $N_{S,k}(\alpha, \beta, \varepsilon)$ then follows since $\wecSet {\alpha \times \beta} S k$ is compact.
To prove the ``furthermore'' part, let $\tilde{\alpha} \colon \Gamma \curvearrowright (\tilde{X}, \tilde{\mu})$ and $\tilde{\beta} \colon \Gamma \curvearrowright (\tilde{Y}, \tilde{\nu})$ be {p.m.p.}\xspace actions of $\Gamma$ such that $\alpha \simeq \tilde{\alpha}$ and $\beta \simeq \tilde{\beta}$. Set $N \coloneqq N_{S,k}(\alpha, \beta, \varepsilon)$. We have to show that
\[
N_{S,k}(\tilde{\alpha}, \tilde{\beta}, \varepsilon) \leqslant N.
\]
Take any $u \in \wecSet {\tilde{\alpha} \times \tilde{\beta}} S k$. Corollary~\ref{corl:mult} implies that $\wecSet {\tilde{\alpha} \times \tilde{\beta}} S k = \wecSet {\alpha \times \beta} S k$, so, by the choice of $N$, there is $f \in {\mathrm{St}}ep_{k, N}(X, \mu; Y, \nu)$ such that
\[
\delta \coloneqq \varepsilon - {\mathrm{dist}}_\infty(u,\wecSetFun {\alpha \times \beta} S k f) \,>\, 0.
\]
Let $g \in {\mathrm{Meas}}_N(X, \mu)$, $h \in {\mathrm{Meas}}_N(Y, \nu)$, and $\varphi \colon N \times N \to k$ be such that $f = \varphi \circ (g, h)$. Since we have $\alpha \simeq \tilde{\alpha}$ and $\beta \simeq \tilde{\beta}$, there exist maps $\tilde{g} \in {\mathrm{Meas}}_N(\tilde{X}, \tilde{\mu})$ and $\tilde{h} \in {\mathrm{Meas}}_N(\tilde{Y}, \tilde{\nu})$ with
\[
{\mathrm{dist}}_\infty(\wecSetFun {\tilde{\alpha}} S N {\tilde{g}},\, \wecSetFun \alpha S N g) + {\mathrm{dist}}_\infty(\wecSetFun {\tilde{\beta}} S N {\tilde{h}},\, \wecSetFun \beta S N h) \,<\, \delta N^{-4}.
\]
Set $\tilde{f} \coloneqq \varphi \circ (\tilde{g}, \tilde{h})$. Then $\tilde{f} \in {\mathrm{St}}ep_{k, N} (\tilde{X}, \tilde{\mu}; \tilde{Y}, \tilde{\nu})$, and, from Propositions~\ref{prop:conv} and \ref{prop:conv_Lip}, it follows that
\[
{\mathrm{dist}}_\infty(u, \wecSetFun {\tilde{\alpha} \times \tilde{\beta}} S k {\tilde{f}}) \,<\, \varepsilon.
\]
Since $u$ was chosen arbitrarily, this concludes the proof.
\end{proof}
Now we can state the main result of this section:
\begin{theo}\label{theo:criterion}
Let $\mathcal{C} \subseteq {\mathcal{W}}_\Gamma \times {\mathcal{W}}_\Gamma$ be a closed set. The following statements are equivalent:
\begin{enumerate}[label={\ep{\normalfont{}\arabic*}}]
\item\label{item:continuous} the map $\mathcal{C} \to {\mathcal{W}}_\Gamma \colon (\mathfrak{a}, \mathfrak{b}) \mapsto \mathfrak{a} \times \mathfrak{b}$ is continuous;
\item\label{item:bound_on_steps} for all $S \in \fins{\Gamma}$, $k \in \mathbb{N}^+$, and $\varepsilon > 0$, there is $N \in \mathbb{N}^+$ such that for all $(\mathfrak{a}, \mathfrak{b}) \in \mathcal{C}$,
\[
N_{S, k}(\mathfrak{a}, \mathfrak{b}, \varepsilon) \leqslant N.
\]
\end{enumerate}
\end{theo}
\begin{proof}[\textsc{Proof}]
We start with the implication \ref{item:continuous} $\Longrightarrow$ \ref{item:bound_on_steps}. Suppose that \ref{item:continuous} holds and assume that for some $S \in \fins{\Gamma}$, $k \in \mathbb{N}^+$, and $\varepsilon > 0$, there is a sequence of pairs $(\mathfrak{a}_n, \mathfrak{b}_n) \in \mathcal{C}$ with $N_{S, k} (\mathfrak{a}_n, \mathfrak{b}_n, \varepsilon) \longrightarrow \infty$. Since $\mathcal{C}$ is compact, we may pass to a subsequence so that $(\mathfrak{a}_n, \mathfrak{b}_n) \longrightarrow (\mathfrak{a}, \mathfrak{b}) \in \mathcal{C}$. By \ref{item:continuous}, we then also have $\mathfrak{a}_n \times \mathfrak{b}_n \longrightarrow \mathfrak{a} \times \mathfrak{b}$. Set $N \coloneqq N_{S, k} (\mathfrak{a}, \mathfrak{b}, \varepsilon/3)$.
Let $\alpha_n \colon \Gamma \curvearrowright (X_n, \mu_n)$, $\beta_n \colon \Gamma \curvearrowright (Y_n, \nu_n)$, $\alpha \colon \Gamma \curvearrowright (X, \mu)$, and $\beta \colon \Gamma \curvearrowright (Y, \nu)$ be representatives of the weak equivalence classes $\mathfrak{a}_n$, $\mathfrak{b}_n$, $\mathfrak{a}$, and $\mathfrak{b}$ respectively. We claim that $N_{S, k}(\alpha_n, \beta_n, \varepsilon) \leqslant N$ for all sufficiently large $n \in \mathbb{N}$, contradicting the choice of $(\mathfrak{a}_n, \mathfrak{b}_n)$. Indeed, take any $u \in \wecSet {\alpha_n \times \beta_n} S k$. If $n$ is large enough, then there is $v \in \wecSet {\alpha \times \beta} S k$ such that \[{\mathrm{dist}}_\infty(u, v) \,<\, \varepsilon/3.\] By the choice of $N$, there is a step function $f \in {\mathrm{St}}ep_{k, N}(X, \mu; Y, \nu)$ such that
\[
{\mathrm{dist}}_\infty(v, \wecSetFun {\alpha \times \beta} S k f) \,<\, \varepsilon/3.
\]
Let $g \in {\mathrm{Meas}}_N(X, \mu)$, $h \in {\mathrm{Meas}}_N(Y, \nu)$, and $\varphi \colon N \times N \to k$ be such that $f = \varphi \circ (g, h)$. If $n$ is large enough, then there exist maps $\tilde{g} \in {\mathrm{Meas}}_N(X_n, \mu_n)$ and $\tilde{h} \in {\mathrm{Meas}}_N(Y_n, \nu_n)$ satisfying
\[
{\mathrm{dist}}_\infty(\wecSetFun {\alpha_n} S N {\tilde{g}}, \wecSetFun \alpha S N g) + {\mathrm{dist}}_\infty(\wecSetFun {\beta_n} S N {\tilde{h}}, \wecSetFun \beta S N h) \,<\, \varepsilon N^{-4}/3.
\]
Let $\tilde{f} \coloneqq \varphi \circ (\tilde{g}, \tilde{h})$. From Propositions~\ref{prop:conv} and \ref{prop:conv_Lip}, it follows that
\[
{\mathrm{dist}}_\infty(u, \wecSetFun {\alpha_n \times \beta_n} S k {\tilde{f}}) \,<\, \varepsilon/3 + \varepsilon/3 + N^4 \cdot (\varepsilon N^{-4}/3) \,=\, \varepsilon,
\]
as desired.
Now we proceed to the implication \ref{item:bound_on_steps} $\Longrightarrow$ \ref{item:continuous}. Suppose that \ref{item:bound_on_steps} holds and let $(\mathfrak{a}_n, \mathfrak{b}_n)$, $(\mathfrak{a}, \mathfrak{b}) \in \mathcal{C}$ be such that $(\mathfrak{a}_n, \mathfrak{b}_n) \longrightarrow (\mathfrak{a}, \mathfrak{b})$. We have to show that $\mathfrak{a}_n \times \mathfrak{b}_n \longrightarrow \mathfrak{a} \times \mathfrak{b}$. Let $\alpha_n \colon \Gamma \curvearrowright (X_n, \mu_n)$, $\beta_n \colon \Gamma \curvearrowright (Y_n, \nu_n)$, $\alpha \colon \Gamma \curvearrowright (X, \mu)$, and $\beta \colon \Gamma \curvearrowright (Y, \nu)$ be representatives of the weak equivalence classes $\mathfrak{a}_n$, $\mathfrak{b}_n$, $\mathfrak{a}$, and $\mathfrak{b}$ respectively. We must argue that for any $S \in \fins{\Gamma}$, $k \in \mathbb{N}^+$, and $\varepsilon > 0$ and for all sufficiently large $n \in \mathbb{N}$,
\begin{align}\label{eq:lower}
&\wecSet {\alpha \times \beta} S k \,\subseteq\, \mathrm{Ball}_\varepsilon(\wecSet {\alpha_n \times \beta_n} S k);\\
&\wecSet {\alpha_n \times \beta_n} S k \,\subseteq\, \mathrm{Ball}_\varepsilon(\wecSet {\alpha \times \beta} S k).\label{eq:upper}
\end{align}
To prove \eqref{eq:lower}, let $N \coloneqq N_{S, k} (\alpha, \beta, \varepsilon/2)$ and consider any $u \in \wecSet {\alpha \times \beta} S k$. By the choice of $N$, there is a step function $f \in {\mathrm{St}}ep_{k, N}(X, \mu; Y, \nu)$ such that
\[
{\mathrm{dist}}_\infty(u, \wecSetFun {\alpha \times \beta} S k f) \,<\, \varepsilon/2.
\]
Let $g \in {\mathrm{Meas}}_N(X, \mu)$, $h \in {\mathrm{Meas}}_N(Y, \nu)$, and $\varphi \colon N \times N \to k$ be such that $f = \varphi \circ (g, h)$. If $n$ is large enough, then there exist maps $\tilde{g} \in {\mathrm{Meas}}_N(X_n, \mu_n)$ and $\tilde{h} \in {\mathrm{Meas}}_N(Y_n, \nu_n)$ satisfying
\[
{\mathrm{dist}}_\infty(\wecSetFun {\alpha_n} S N {\tilde{g}}, \wecSetFun \alpha S N g) + {\mathrm{dist}}_\infty(\wecSetFun {\beta_n} S N {\tilde{h}}, \wecSetFun \beta S N h) \,<\, \varepsilon N^{-4}/2.
\]
Let $\tilde{f} \coloneqq \varphi \circ (\tilde{g}, \tilde{h})$. From Propositions~\ref{prop:conv} and \ref{prop:conv_Lip}, it follows that
\[
{\mathrm{dist}}_\infty(u, \wecSetFun {\alpha_n \times \beta_n} S k {\tilde{f}}) \,<\, \varepsilon/2 + N^4 \cdot (\varepsilon N^{-4}/2) \,=\, \varepsilon,
\]
i.e., $u \in \mathrm{Ball}_\varepsilon(\wecSet {\alpha_n \times \beta_n} S k)$, as desired. Notice that this argument did not involve assumption \ref{item:bound_on_steps}.
To prove \eqref{eq:upper}, we use \ref{item:bound_on_steps} and choose $N$ so that $N_{S, k}(\alpha_n, \beta_n, \varepsilon) \leqslant N$ for all $n \in \mathbb{N}$. Consider any $u \in \wecSet {\alpha_n \times \beta_n} S k$. Then there is a step function $f \in {\mathrm{St}}ep_{k, N}(X_n, \mu_n; Y_n, \nu_n)$ such that
\[
{\mathrm{dist}}_\infty(u, \wecSetFun {\alpha_n \times \beta_n} S k f) \,<\, \varepsilon/2.
\]
Let $g \in {\mathrm{Meas}}_N(X_n, \mu_n)$, $h \in {\mathrm{Meas}}_N(Y_n, \nu_n)$, and $\varphi \colon N \times N \to k$ be such that $f = \varphi \circ (g, h)$. If $n$ is large enough, then there exist maps $\tilde{g} \in {\mathrm{Meas}}_N(X, \mu)$ and $\tilde{h} \in {\mathrm{Meas}}_N(Y, \nu)$ satisfying
\[
{\mathrm{dist}}_\infty(\wecSetFun {\alpha} S N {\tilde{g}}, \wecSetFun {\alpha_n} S N g) + {\mathrm{dist}}_\infty(\wecSetFun {\beta} S N {\tilde{h}}, \wecSetFun {\beta_n} S N h) \,<\, \varepsilon N^{-4}/2.
\]
Let $\tilde{f} \coloneqq \varphi \circ (\tilde{g}, \tilde{h})$. From Propositions~\ref{prop:conv} and \ref{prop:conv_Lip}, it follows that
\[
{\mathrm{dist}}_\infty(u, \wecSetFun {\alpha \times \beta} S k {\tilde{f}}) \,<\, \varepsilon/2 + N^4 \cdot (\varepsilon N^{-4}/2) \,=\, \varepsilon,
\]
i.e., $u \in \mathrm{Ball}_\varepsilon(\wecSet {\alpha \times \beta} S k)$, and we are done.
\end{proof}
\section{Proof of Theorem~\ref{theo:main}}\label{sec:proof}
\subsection{Expansion in $\mathrm{SL}_d(\mathbb{Z}/n\mathbb{Z})$}
For $n \in \mathbb{N}^+$, we use $\pi_n$ to indicate reduction modulo $n$ in various contexts. That is, we slightly abuse notation and give the same name to the residue maps
\[
\pi_n \colon \mathbb{Z}\to \mathbb{Z}/n\mathbb{Z}, \qquad \pi_n \colon \mathrm{SL}_d(\mathbb{Z}) \to \mathrm{SL}_d(\mathbb{Z}/n\mathbb{Z}), \qquad \text{etc}.
\]
Let $G$ be a nontrivial finite group. For $A$, $S \subseteq G$, the \emphd{boundary}\footnote{For our purposes it will be more convenient to consider the vertex rather than the edge boundary.} of $A$ with respect to $S$ is
\[
\partial (A, S) \coloneqq \set{a \in A \,:\, Sa \not \subseteq A}.
\]
The \emphd{Cheeger constant} $h(G, S)$ of $G$ with respect to $S$ is given by
\[
h(G, S) \coloneqq \min_A \frac{|\partial(A, S)|}{|A|},
\]
where the minimum is taken over all nonempty subsets $A \subseteq G$ of size at most $|G|/2$. Notice that we have $h(G, S) > 0$ if and only if $S$ generates $G$. Indeed, let $\langle S \rangle$ be the subgroup of $G$ generated by $S$. If $\langle S \rangle \neq G$, then $|\langle S \rangle| \leqslant |G|/2$, while $\partial(\langle S \rangle, S) = \varnothing$, hence $h(G, S) = 0$. Conversely, if $h(G, S) = 0$, then there is a nonempty proper subset $A \varsubsetneq G$ closed under left multiplication by the elements of $S$. This means that $A$ is a union of right cosets of $\langle S \rangle$, and thus $\langle S \rangle \neq G$.
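For very small groups the Cheeger constant can be computed by brute force directly from the definition. The following Python sketch is illustrative only and plays no role in the proofs; it treats the cyclic group $\mathbb{Z}/8\mathbb{Z}$ (written additively, so that $Sa$ becomes $S+a$) with $S=\{\pm 1\}$ and returns $h=1/2$, attained on an arc of length $4$.
\begin{verbatim}
from itertools import combinations

n = 8                       # the cyclic group Z/8Z, written additively
G = range(n)
S = [1, n - 1]              # the symmetric set {+1, -1}

def boundary(A, S):
    A = set(A)
    return {a for a in A if any((s + a) % n not in A for s in S)}

h = min(len(boundary(A, S)) / len(A)
        for r in range(1, n // 2 + 1)
        for A in combinations(G, r))
print(h)                    # prints 0.5
\end{verbatim}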
\begin{theo}[{Bourgain--Varj\'u \cite[Theorem~1]{BV}}]\label{theo:BV}
Let $d \geqslant 2$ and let $S \in \fins{\mathrm{SL}_d(\mathbb{Z})}$ be a finite symmetric subset such that the subgroup $\langle S \rangle$ of $\mathrm{SL}_d(\mathbb{Z})$ generated by $S$ is Zariski dense in $\mathrm{SL}_d(\mathbb{R})$. Then there exist $n_0 \in \mathbb{N}^+$ and $\varepsilon > 0$ such that for all $n \geqslant 2$, if $\operatorname{gcd}(n, n_0) = 1$, then
\[
h(\mathrm{SL}_d(\mathbb{Z}/n\mathbb{Z}), \pi_n(S)) \geqslant \varepsilon.
\]
\end{theo}
Theorem~\ref{theo:BV} is an outcome of a long series of contributions by a number of researchers; for more background, see \cite{BV, Tao_book} and the references therein.
A finite group $G$ is called \emphd{$D$-quasirandom}, where $D \geqslant 1$, if every nontrivial unitary representation of $G$ has dimension at least $D$ (a representation $\rho$ of $G$ is nontrivial if $\rho(a) \neq 1$ for some $a \in G$). This notion was introduced by Gowers \cite{Gowers}. For a map $\zeta \colon G \to \mathbb{C}$, we write
\[
\mathbf{E} \zeta \coloneqq \frac{1}{|G|}\sum_{x \in G} \zeta(x), \qquad \|\zeta\|_\infty \coloneqq \max_{x \in G} |\zeta(x)|, \qquad \text{and} \qquad \|\zeta\|_2 \coloneqq \sqrt{\sum_{x \in G} |\zeta(x)|^2}.
\]
Given $\zeta$, $\eta \colon G \to \mathbb{C}$, define the \emphd{convolution} $\zeta \ast \eta \colon G \to \mathbb{C}$ of $\zeta$ and $\eta$ by the formula
\[
(\zeta \ast \eta)(x) \coloneqq \sum_{ab \,=\, x} \zeta(a)\eta(b),
\]
where the sum is taken over all pairs of $a$, $b \in G$ such that $ab=x$.
\begin{theo}[{\cite[Proposition 1.3.7]{Tao_book}}]\label{theo:weak_mixing}
Let $G$ be a finite group and let $\zeta$, $\eta ^{\mathsf{c}}olon G \to \mathbb{C}$. Suppose that $G$ is $D$-quasirandom. If $\mathbf{E}\zeta = \mathbf{E}\eta = 0$, then
\[
\|\zeta \ast \eta\|_2 \leqslant \sqrt{\frac{|G|}{D}}\|\zeta\|_2 \|\eta\|_2.
\]
\end{theo}
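Theorem~\ref{theo:weak_mixing} is easy to test numerically on small examples. The following Python sketch is illustrative only and is not used in the proofs: it enumerates $G=\mathrm{SL}_2(\mathbb{Z}/5\mathbb{Z})$, which is $2$-quasirandom by Proposition~\ref{prop:Frobenius} below, and checks the stated bound for two randomly generated mean-zero functions.
\begin{verbatim}
import itertools, random

p = 5                                 # SL_2(Z/5Z); here D = (p-1)/2 = 2
G = [M for M in itertools.product(range(p), repeat=4)
     if (M[0] * M[3] - M[1] * M[2]) % p == 1]   # (a, b, c, d) with det = 1

def mult(M, N):
    a, b, c, d = M
    e, f, g, h = N
    return ((a*e + b*g) % p, (a*f + b*h) % p,
            (c*e + d*g) % p, (c*f + d*h) % p)

# two random functions on G, recentred to have mean zero
zeta = {x: random.gauss(0, 1) for x in G}
eta = {x: random.gauss(0, 1) for x in G}
for func in (zeta, eta):
    mean = sum(func.values()) / len(G)
    for x in G:
        func[x] -= mean

# (zeta * eta)(x) = sum over ab = x of zeta(a) eta(b)
conv = {x: 0.0 for x in G}
for a in G:
    for b in G:
        conv[mult(a, b)] += zeta[a] * eta[b]

norm2 = lambda func: sum(v * v for v in func.values()) ** 0.5
lhs = norm2(conv)
rhs = (len(G) / 2) ** 0.5 * norm2(zeta) * norm2(eta)
print(lhs <= rhs + 1e-9)              # expected output: True
\end{verbatim}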
\iffalse
Let $G$ be a finite group. For $A$, $B$, $S \subseteq G$, let
\[
e(A, B; S) ^{\mathsf{c}}oloneqq |\set{(a,b) \in A \times B \,:\, b \in Sa}|.
\]
We will use Theorem~\ref{theo:weak_mixing} in the form of the following corollary:
\begin{corl}[{^{\mathsf{c}}ite[Exercise 1.3.15]{Tao_book}}]\label{corl:expansion}
Let $G$ be a finite group. Suppose that $G$ is $D$-quasirandom. Then, for all $A$, $B$, $S \subseteq G$, we have
\[
\left|e(A, B; S) \,-\, \frac{|S|}{|G|}|A||B| \right| \,\leqslant\, \sqrt{\frac{|S||G||A||B|}{D}}.
\]
\end{corl}
\begin{proof}[\textsc{Proof}]
First, we show that for any $f$, $g$, $h ^{\mathsf{c}}olon G \to \mathbb{C}$,
\begin{equation}\label{eq:triple_convolution}
\| (f \ast g \ast h) \,-\, (\mathbf{E} f) (\mathbf{E} g) (\mathbf{E} h) |G|^2 \|_\infty \,\leqslant \, D^{-1/2} |G|^{1/2} \|f\|_2 \|g\|_2 \|h\|_2.
\end{equation}
(This is essentially ^{\mathsf{c}}ite[Exercise 1.3.12]{Tao_book}.) Let $f_0 ^{\mathsf{c}}oloneqq f - \mathbf{E} f$, $g_0 ^{\mathsf{c}}oloneqq g - \mathbf{E}g$, and $h_0 ^{\mathsf{c}}oloneqq h - \mathbf{E} h$, so $\mathbf{E}f_0 = \mathbf{E}g_0 = \mathbf{E}h_0 = 0$. A straightforward calculation shows that
\[
f_0 \ast g_0 \ast h_0 \,=\, (f \ast g \ast h) \,-\, (\mathbf{E} f) (\mathbf{E} g) (\mathbf{E} h) |G|^2.
\]
Hence,
\begin{align*}
\| (f \ast g \ast h) \,-\, (\mathbf{E} f) (\mathbf{E} g) (\mathbf{E} h) |G|^2 \|_\infty \,&=\, \|f_0 \ast g_0 \ast h_0 \|_\infty \\
[\text{By Cauchy--Schwarz}] \qquad &\leqslant\, \|f_0 \ast g_0\|_2 \|h_0\|_2 \\
[\text{By Theorem \ref{theo:weak_mixing}}] \qquad &\leqslant\, D^{-1/2} |G|^{1/2} \|f_0\|_2 \|g_0\|_2 \|h_0\|_2 \\
[\text{Since $\|f_0\|_2 \leqslant \|f\|_2$, $\|g_0\|_2 \leqslant \|g\|_2$, and $\|h_0\|_2 \leqslant \|h\|_2$}] \qquad &\leqslant\, D^{-1/2} |G|^{1/2} \|f\|_2 \|g\|_2 \|h\|_2,
\end{align*}
as claimed. Now, for each $F \subseteq G$, let $\mathbbm{1}_F ^{\mathsf{c}}olon G \to 2$ denote the indicator function of the set $F$. Then, given any $A$, $B$, $S \subseteq G$, we have
\[
e(A, B; S) = (\mathbbm{1}_{B^{-1}} \ast \mathbbm{1}_{S} \ast \mathbbm{1}_A)(\mathbf{1}_G),
\]
where $\mathbf{1}_G$ is the identity element of $G$, so the desired result follows by applying \eqref{eq:triple_convolution} with $f = \mathbbm{1}_{B^{-1}}$, $g = \mathbbm{1}_S$, and $h = \mathbbm{1}_A$.
\end{proof}
\fi
In order to apply Theorem~\ref{theo:weak_mixing}, we will need the following variation of Frobenius's lemma:
\begin{prop}[{cf. \cite[Lemma 1.3.3]{Tao_book}}]\label{prop:Frobenius}
Let $d$, $n \geqslant 2$ and let $p$ be the smallest prime divisor of $n$. Then the group $\mathrm{SL}_d(\mathbb{Z}/n\mathbb{Z})$ is $(p-1)/2$-quasirandom.
\end{prop}
\begin{proof}[\textsc{Proof}]
The statement is trivial for $p = 2$, so assume that $p$ is odd. Write $n$ as a product of powers of distinct primes: $n = p_1^{k_1} \cdots p_r^{k_r}$. Then, by the Chinese remainder theorem,
\[
\mathrm{SL}_d(\mathbb{Z}/n\mathbb{Z}) \,\cong\, \mathrm{SL}_d(\mathbb{Z}/p_1^{k_1}\mathbb{Z}) \times \cdots \times \mathrm{SL}_d(\mathbb{Z}/p_r^{k_r}\mathbb{Z}).
\]
Since the product of $D$-quasirandom groups is again $D$-quasirandom \cite[Exercise 1.3.2]{Tao_book}, it is enough to consider the case when $r = 1$ and $n = p^k$.
Let $\rho$ be a nontrivial finite-dimensional unitary representation of $\mathrm{SL}_d(\mathbb{Z}/p^k\mathbb{Z})$. By \cite[Theorem~4.3.9]{H-OM}, the group $\mathrm{SL}_d(\mathbb{Z}/p^k\mathbb{Z})$ is generated by the \emphd{elementary} matrices, i.e., those that differ from the identity matrix in precisely one off-diagonal entry. Thus, there exists an elementary matrix $e \in \mathrm{SL}_d(\mathbb{Z}/p^k\mathbb{Z})$ such that $\rho(e) \neq 1$. Without loss of generality, we may assume that $e$ is of the form
\[
e = \left(\begin{array}{cccc}
1 & a & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{array}\right),
\]
where $0 \neq a \in \mathbb{Z}/p^k\mathbb{Z}$. Choose $e$ so as to maximize the power of $p$ that divides $a$. Let $\lambda$ be an arbitrary eigenvalue of $\rho(e)$ not equal to $1$ (such $\lambda$ exists since $\rho(e) \neq 1$ and is unitary). We have
\[
e^p = \left(\begin{array}{cccc}
1 & pa & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{array}\right),
\]
so, by the choice of $e$, $\rho(e)^p = \rho(e^p) = 1$. Hence, $\lambda^p = 1$, so, as $\lambda \neq 1$ and $p$ is prime, the values $\lambda$, $\lambda^2$, \ldots, $\lambda^{p-1}$ are pairwise distinct. Let $b \in \mathbb{N}^+$ be an integer coprime to $p$ and let $c \coloneqq b^2$. Since $b$ is invertible in $\mathbb{Z}/p^k\mathbb{Z}$, we can form a diagonal matrix $h \in \mathrm{SL}_d(\mathbb{Z}/p^k\mathbb{Z})$ with entries $(b, b^{-1}, 1, \ldots, 1)$. Then $h^{-1}$ is the diagonal matrix with entries $(b^{-1}, b, 1, \ldots, 1)$, and we have
\[
heh^{-1} = \left(\begin{array}{cccc}
1 & ca & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{array}\right) = e^c.
\]
This shows that $e$ and $e^c$ are conjugate in $\mathrm{SL}_d(\mathbb{Z}/p^k\mathbb{Z})$, and hence $\rho(e)$ and $\rho(e)^c$ are conjugate as well. Since $\lambda^c$ is an eigenvalue of $\rho(e)^c$, it must also be an eigenvalue of $\rho(e)$. It remains to notice that there exist $(p-1)/2$ choices for $c$ that are distinct modulo $p$ (corresponding to the $(p-1)/2$ nonzero quadratic residues modulo $p$), so $\rho(e)$ must have at least $(p-1)/2$ distinct eigenvalues, which is only possible if the dimension of $\rho$ is at least $(p-1)/2$.
\end{proof}
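The conjugation identity $heh^{-1} = e^{c}$ and the behaviour of $e^p$ used above are easy to confirm numerically in small cases. The following Python sketch is an illustration only and is not part of the proof; the parameters $d$, $p$, $k$, $a$, $b$ are arbitrary test values.
\begin{verbatim}
import numpy as np

def mat_pow_mod(m, exp, n):
    """Square-and-multiply for integer matrices modulo n."""
    result = np.eye(m.shape[0], dtype=np.int64)
    base = m % n
    while exp:
        if exp & 1:
            result = (result @ base) % n
        base = (base @ base) % n
        exp >>= 1
    return result

def check(d=3, p=5, k=2, a=1, b=2):
    n = p**k
    e = np.eye(d, dtype=np.int64); e[0, 1] = a % n     # elementary matrix
    h = np.eye(d, dtype=np.int64)                      # h = diag(b, b^{-1}, 1, ..., 1)
    h[0, 0], h[1, 1] = b % n, pow(b, -1, n)
    h_inv = np.eye(d, dtype=np.int64)                  # h^{-1} = diag(b^{-1}, b, 1, ..., 1)
    h_inv[0, 0], h_inv[1, 1] = pow(b, -1, n), b % n
    conj = (h @ e @ h_inv) % n
    assert np.array_equal(conj, mat_pow_mod(e, b * b, n))   # h e h^{-1} = e^{b^2}
    assert mat_pow_mod(e, p, n)[0, 1] == (p * a) % n        # e^p has off-diagonal entry pa

check()
\end{verbatim}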
\subsection{The main lemma}
For the rest of Section~\ref{sec:proof}, fix $d \geqslant 2$ and let $\Gamma$ be a subgroup of $\mathrm{SL}_d(\mathbb{Z})$ that is Zariski dense in $\mathrm{SL}_d(\mathbb{R})$.
For $n \geqslant 2$, define $G_n \coloneqq \mathrm{SL}_d(\mathbb{Z}/n\mathbb{Z})$. Let $\alpha_n \colon \Gamma \curvearrowright G_n$ be the action given by
\[
\gamma \cdot x \coloneqq \pi_n(\gamma)x \qquad \text{for all } \gamma \in \Gamma \text{ and } x \in G_n.
\]
We view $\alpha_n$ as a {p.m.p.}\xspace action by equipping $G_n$ with the uniform probability measure (to simplify notation, we will avoid mentioning this measure explicitly).
The group $\Gamma$ has a Zariski dense finitely generated subgroup (by Tits's theorem~\cite[Theorem~3]{Tits}, such a subgroup can be chosen to be free of rank $2$), so fix an arbitrary finite symmetric set $S \in \fins{\Gamma}$ such that the group $\langle S \rangle$ is Zariski dense in $\mathrm{SL}_d(\mathbb{R})$. Fix $n_0 \in \mathbb{N}^+$ and $\varepsilon >0$ provided by Theorem~\ref{theo:BV} applied to $S$ and let \[\delta \coloneqq \frac{\varepsilon}{32|S|}.\]
Define $u \in [0;1]^{S \times 2 \times 2}$ by setting, for all $\gamma \in S$ and $i$, $j <2$,
\[
u(\gamma, i, j) \coloneqq \begin{cases}
1/2 &\text{if } i = j;\\
0 &\text{if } i \neq j.
\end{cases}
\]
The heart of the proof of Theorem~\ref{theo:main} lies in the following lemma:
\begin{lemma}\label{lemma:finite}
Let $n$, $m \geqslant 2$ be such that $n$ divides $m$ and $\operatorname{gcd}(m, n_0) = 1$. Let $p$ be the smallest prime divisor of $n$ and let
\[
N \coloneqq \left\lfloor \frac{1}{25}\sqrt{p-1}\right\rfloor.
\]
Assume that $N \geqslant 1$. Then $u \in \wecSet {\alpha_n \times \alpha_m} S 2$, yet for all $f \in {\mathrm{St}}ep_{2, N} (G_n; G_m)$, we have
\[
{\mathrm{dist}}_\infty(u, \wecSetFun {\alpha_n \times \alpha_m} S 2 f) \geqslant \delta.
\]
In particular, $N_{S, 2}(\alpha_n, \alpha_m, \delta) > N$.
\end{lemma}
\begin{proof}[\textsc{Proof}]
Let $\mathrm{proj}_2 \colon G_n \times G_m \to G_m$ denote the projection on the second coordinate.
Note that, by definition, the map $\mathrm{proj}_2$ is equivariant. Since $n$ divides $m$, there is a well-defined reduction modulo $n$ map $\pi_n \colon G_m \to G_n$, and it is surjective. For $z \in G_n$, define
\[
\mathcal{O}_z \coloneqq \set{(x, y) \in G_n \times G_m \,:\, x = \pi_n(y)z}.
\]
Evidently, the set $\mathcal{O}_z$ is $(\alpha_n \times \alpha_m)$-invariant. Furthermore, the map $\mathrm{proj}_2$ establishes an equivariant bijection between $\mathcal{O}_z$ and $G_m$. Since $\operatorname{gcd}(m,n_0) = 1$, Theorem~\ref{theo:BV} implies that the action $\alpha_m$ is transitive, and hence so is the restriction of the action $\alpha_n \times \alpha_m$ to $\mathcal{O}_z$. Therefore, the orbits of $\alpha_n \times \alpha_m$ are precisely the sets $\mathcal{O}_z$ for $z\in G_n$.
Given a subset $Z \subseteq G_n$, define $f_Z \colon G_n \times G_m \to 2$ by
\[
f_Z(x, y) \coloneqq \begin{cases}
0 &\text{if } \pi_n(y)^{-1}x \not\in Z;\\
1 &\text{if } \pi_n(y)^{-1}x \in Z.
\end{cases}
\]
The functions of the form $f_Z$ for $Z \subseteq G_n$ are precisely the $(\alpha_n \times \alpha_m)$-invariant maps $G_n \times G_m \to 2$.
Now we can show that $u \in \wecSet {\alpha_n \times \alpha_m} S 2$. The group $G_n$ contains an element of order $2$, namely the diagonal matrix with entries $(-1,-1,1,\ldots, 1)$, so $|G_n|$ is even. Hence, for any set $Z \subset G_n$ of size exactly $|G_n|/2$, we have $\wecSetFun {\alpha_n \times \alpha_m} S 2 {f_Z} = u$, as desired.
For $z \in G_n$ and $A \subseteq \mathcal{O}_z$, define the \emphd{boundary} of $A$ by
\[
\partial A \coloneqq \set{(x,y) \in A \,:\, S \cdot (x,y) \not \subseteq A}.
\]
Suppose that $|A| \leqslant |G_m|/2$ (note that $|G_m| = |\mathcal{O}_z|$). Then, since $\mathrm{proj}_2$ establishes an equivariant bijection between $\mathcal{O}_z$ and $G_m$, Theorem~\ref{theo:BV} yields
\begin{equation}\label{eq:boundary_in_orbit}
|\partial A| \geqslant \varepsilon |A|.
\end{equation}
\begin{smallclaim}\label{claim:invariant}
Let $f \colon G_n \times G_m \to 2$ be such that
\begin{equation}\label{eq:almost_inv}
{\mathrm{dist}}_\infty(u, \wecSetFun {\alpha_n \times \alpha_m} S 2 f) < \delta.
\end{equation}
Then there is a set $Z \subseteq G_n$ such that ${\mathrm{dist}}(f, f_Z) < 1/16$.
\end{smallclaim}
\begin{claimproof}
For each $\gamma \in S$, let
\[
B_\gamma \coloneqq \set{(x,y) \in G_n \times G_m \,:\, f(x,y) \neq f(\gamma \cdot x, \gamma \cdot y)},
\]
and define $B \coloneqq \bigcup_{\gamma \in S} B_\gamma$. By \eqref{eq:almost_inv}, for any $\gamma \in S$, we have
\[
\frac{|B_\gamma|}{|G_n| |G_m|} \,=\, \wecSetFun {\alpha_n \times \alpha_m} S 2 f (\gamma, 0, 1) + \wecSetFun {\alpha_n \times \alpha_m} S 2 f (\gamma, 1, 0) \,<\, 2\delta,
\]
and therefore \[|B| \,<\, 2\delta |S| |G_n| |G_m| \,=\, \frac{\varepsilon}{16}|G_n||G_m|.\] We will show that the set
\[
Z \coloneqq \set{z \in G_n \,:\, f(x, y) = 1 \text{ for at least } |G_m|/2 \text{ pairs } (x,y) \in \mathcal{O}_z}
\]
is as desired. Define
\[
A \coloneqq \set{(x, y) \in G_n \times G_m \,:\, f(x,y) \neq f_Z(x,y)},
\]
so ${\mathrm{dist}} (f, f_Z) = |A|/(|G_n||G_m|)$. Take any $z \in G_n$. By the definition of $Z$, we have $|A \cap \mathcal{O}_z| \leqslant |G_m|/2$, and hence, by \eqref{eq:boundary_in_orbit},
\[
|\partial(A \cap \mathcal{O}_z)| \geqslant \varepsilon |A \cap \mathcal{O}_z|.
\]
Note that $\partial (A \cap \mathcal{O}_z) \subseteq B \cap \mathcal{O}_z$, so we have
\[
|B \cap \mathcal{O}_z| \geqslant |\partial (A \cap \mathcal{O}_z)| \geqslant \varepsilon|A \cap \mathcal{O}_z|.
\]
Hence,
\[
|A| \,=\, \sum_{z \in G_n} |A \cap \mathcal{O}_z| \,\leqslant\, \sum_{z \in G_n} \varepsilon^{-1} |B \cap \mathcal{O}_z| \,=\, \varepsilon^{-1} |B| \,<\, \frac{1}{16}|G_n||G_m|.
\]
In other words, ${\mathrm{dist}}(f, f_Z) < 1/16$, as claimed.
\end{claimproof}
For $\zeta \colon G_n \to \mathbb{C}$ and $\xi \colon G_m \to \mathbb{C}$, define $\zeta \circledast \xi \colon G_n \to \mathbb{C}$ by the formula
\[
(\zeta \circledast \xi)(x) \coloneqq \sum_{a\pi_n(b) \,=\, x} \zeta(a) \xi(b),
\]
where the sum is taken over all pairs of $a \in G_n$ and $b \in G_m$ such that $a\pi_n(b)=x$. We will need the following corollary of Theorem~\ref{theo:weak_mixing} and Proposition~\ref{prop:Frobenius}:
\begin{smallclaim}\label{claim:triple_convolution}
Let $\zeta$, $\eta \colon G_n \to \mathbb{C}$ and $\xi \colon G_m \to \mathbb{C}$. Then
\[
\|(\zeta \ast \eta) \circledast \xi \,-\, (\mathbf{E}\zeta) (\mathbf{E}\eta) (\mathbf{E} \xi) |G_n||G_m|\|_\infty \,\leqslant\, \sqrt{\frac{2|G_m|}{p-1}} \|\zeta\|_2 \|\eta\|_2 \|\xi\|_2.
\]
\end{smallclaim}
\begin{claimproof}
This is a variant of \cite[Exercise 1.3.12]{Tao_book}. After subtracting its expectation from each function, we may assume that $\mathbf{E}\zeta = \mathbf{E}\eta = \mathbf{E}\xi = 0$. \iffalse Consider any $x \in G_n$. Let
\[
R_x \coloneqq \set{(a,b,c) \in G_n \times G_n \times G_m \,:\, ab\pi_n(c) = x}.
\]
Then, by definition, we have
\begin{align*}
((\zeta_0 \ast \eta_0) \circledast \xi_0)(x) \,&=\, \sum_{(a,b,c) \in R_x} \zeta_0(a)\eta_0(b) \xi_0(c)\\
&=\, \sum_{(a,b,c) \in R_x} \zeta(a)\eta_0(b) \xi_0(c) \,-\, (\mathbf{E}\zeta) \sum_{(a,b,c) \in R_x} \eta_0(b) \xi_0(c).
\end{align*}
For each $b \in G_n$ and $c \in G_m$, there is a unique $a \in G_n$ such that $(a,b,c) \in R_x$, so
\[
\sum_{(a,b,c) \in R_x} \eta_0(b) \xi_0(c) \,=\, \sum_{b \in G_n} \eta_0(b)\sum_{c \in G_m} \xi_0(c) \,=\, (\mathbf{E} \eta_0)(\mathbf{E}\xi_0)|G_n||G_m| \,=\, 0.
\]
Now we write
\[
\sum_{(a,b,c) \in R_x} \zeta(a)\eta_0(b) \xi_0(c)\,=\, \sum_{(a,b,c) \in R_x} \zeta(a)\eta(b) \xi_0(c) \,-\, (\mathbf{E}\eta) \sum_{(a,b,c) \in R_x} \zeta(a) \xi_0(c).
\]
Again, we have
\[
\sum_{(a,b,c) \in R_x} \zeta(a) \xi_0(c) \,=\, \sum_{a \in G_n} \zeta(a) \sum_{c \in G_m} \xi_0(c) \,=\, (\mathbf{E} \zeta)(\mathbf{E}\xi_0)|G_n||G_m| \,=\, 0,
\]
and the remaining summand equals
\begin{align*}
\sum_{(a,b,c) \in R_x} \zeta(a)\eta(b) \xi_0(c)\,&=\, \sum_{(a,b,c) \in R_x} \zeta(a)\eta(b) \xi(c) \,-\, (\mathbf{E}\xi) \sum_{(a,b,c) \in R_x} \zeta(a) \eta(b) \\
&=\, ((\zeta \ast \eta) \circledast \xi)(x) \,-\, (\mathbf{E}\xi) \sum_{(a,b,c) \in R_x} \zeta(a) \eta(b).
\end{align*}
For any $a$, $b \in G_n$, there are precisely $|G_m|/|G_n|$ choices of $c \in G_m$ such that $(a,b,c) \in R_x$. Hence,
\[
\sum_{(a,b,c) \in R_x} \zeta(a) \eta(b) \,=\, \frac{|G_m|}{|G_n|} \sum_{a \in G_n} \zeta(a) \sum_{b \in G_n} \eta(b) \,=\, (\mathbf{E}\zeta)(\mathbf{E} \eta) |G_n||G_m|.
\]
To summarize, we have shown that\fi
By the Cauchy--Schwarz inequality, we have
\[
\|(\zeta \ast \eta) \circledast \xi \|_\infty
\,\leqslant\, \sqrt{\frac{|G_m|}{|G_n|}}\|\zeta \ast \eta\|_2 \|\xi\|_2,
\]
while Theorem \ref{theo:weak_mixing} and Proposition~\ref{prop:Frobenius} yield
\[
\|\zeta \ast \eta\|_2
\,\leqslant\, \sqrt{\frac{2|G_n|}{p-1}} \|\zeta\|_2 \|\eta\|_2. \qedhere
\]
\end{claimproof}
We use Claim~\ref{claim:triple_convolution} to prove that invariant maps are hard to approximate by step functions:
\begin{smallclaim}\label{claim:not_invariant}
Let $f \in {\mathrm{St}}ep_{2, N} (G_n; G_m)$ and $Z \subseteq G_n$. Suppose that
$\min \set{|Z|, |G_n| - |Z|} \geqslant |G_n|/4$.
Then ${\mathrm{dist}} (f, f_Z) \geqslant 1/8$.
\end{smallclaim}
\begin{claimproof}
Let $g \colon G_n \to N$, $h \colon G_m \to N$, and $\varphi \colon N \times N \to 2$ be such that $f = \varphi \circ (g,h)$. For $i < N$, set \[X_i \coloneqq g^{-1}(i).\] Thus, $\set{X_i \,:\, i < N}$ is a partition of $G_n$ into $N$ pieces. Given $i < N$ and $j < 2$, let
\[
Y_{i,j} \,\coloneqq\, \set{y \in G_m \,:\, \varphi(i, h(y)) = j} \,=\, \set{y \in G_m \,:\, f(x, y) = j \text{ for all } x \in X_i}.
\]
Note that $Y_{i,0} \cup Y_{i,1} = G_m$. Define
\[
A \coloneqq \set{(x, y) \in G_n \times G_m \,:\, f(x,y) \neq f_Z(x,y)},
\]
so ${\mathrm{dist}}(f, f_Z) = |A|/(|G_n||G_m|)$. Let $\mathbf{1}_{G_n}$ be the identity element of $G_n$, and for each set $F \subseteq G_n$, let $\mathbbm{1}_F \colon G_n \to 2$ denote the indicator function of $F$. Then, for any $i < N$, we have
\begin{align*}
|(X_i \times Y_{i,0}) \cap A| \,&=\, |\set{(x, y) \in X_i \times Y_{i,0} \,:\, f_Z(x,y) = 1}| \\
&=\, |\set{(z, x, y) \in Z \times X_i \times Y_{i,0} \,:\, z x^{-1} \pi_n(y) = \mathbf{1}_{G_n} }|\\
&=\, ((\mathbbm{1}_Z \ast \mathbbm{1}_{X^{-1}_i}) \circledast \mathbbm{1}_{Y_{i,0}})(\mathbf{1}_{G_n}).
\end{align*}
By Claim~\ref{claim:triple_convolution}, the last expression is at least
\[
\frac{|Z||X_i||Y_{i,0}|}{|G_n|} \,-\, \sqrt{\frac{2|G_m||Z||X_i||Y_{i,0}|}{p-1}} \,\geqslant\, \frac{|X_i||Y_{i,0}|}{4} \,-\, \sqrt{\frac{2}{p-1}} |G_n||G_m|.
\]
Similarly, we have
\[
|(X_i \times Y_{i,1}) \cap A| \,\geqslant\, \frac{|X_i||Y_{i,1}|}{4} \,-\, \sqrt{\frac{2}{p-1}} |G_n||G_m|,
\]
and hence
\[
|(X_i \times G_m) \cap A| \,\geqslant\, \frac{|X_i||G_m|}{4} \,-\, \sqrt{\frac{8}{p-1}} |G_n||G_m|.
\]
Therefore,
\[
|A| \,=\, \sum_{i < N} |(X_i \times G_m) \cap A|
\,\geqslant\, \left(\frac{1}{4} - N\sqrt{\frac{8}{p-1}}\right)|G_n||G_m| \,>\, \frac{1}{8}|G_n||G_m|,
\]
and thus ${\mathrm{dist}}(f, f_Z) > 1/8$, as desired.
\end{claimproof}
It remains to combine Claims~\ref{claim:invariant} and \ref{claim:not_invariant}. Suppose that $f \in {\mathrm{St}}ep_{2,N}(G_n;G_m)$ satisfies
\[
{\mathrm{dist}}_\infty(u, \wecSetFun {\alpha_n \times \alpha_m} S 2 f) < \delta.
\]
By Claim~\ref{claim:invariant}, there is a set $Z \subseteq G_n$ such that ${\mathrm{dist}}(f, f_Z) <1/16$. By Proposition~\ref{prop:Lipschitz}, we have
\[
{\mathrm{dist}}_\infty(u, \wecSetFun {\alpha_n \times \alpha_m} S 2 {f_Z}) \,<\, \delta + 2 \cdot (1/16) \,<\,1/4.
\]
In particular, for any $\gamma \in S$,
\[
\frac{|Z|}{|G_n|} \,=\, \wecSetFun {\alpha_n \times \alpha_m} S 2 {f_Z} (\gamma, 1, 1) \,>\, u(\gamma, 1, 1) - 1/4 \,=\, 1/4,
\]
i.e., $|Z| \geqslant |G_n|/4$, and, similarly, $|G_n| - |Z| \geqslant |G_n|/4$. Therefore, by Claim~\ref{claim:not_invariant}, ${\mathrm{dist}}(f, f_Z) \geqslant 1/8$, which is a contradiction. The proof of Lemma~\ref{lemma:finite} is complete.
\end{proof}
\subsection{Finishing the proof}
We say that $\mathcal{N} \subseteq \mathbb{N}^+$ is a \emphd{directed set} if $\mathcal{N}$ is infinite and for any two elements $n_1$, $n_2 \in \mathcal{N}$, there is some $m \in \mathcal{N}$ divisible by both $n_1$ and $n_2$. Each directed set $\mathcal{N} \subseteq \mathbb{N}^+$ gives rise to an inverse system consisting of the groups $(G_n)_{n \in \mathcal{N}}$ together with the homomorphisms $\pi_n \colon G_m \to G_n$ for every pair of $n$, $m \in \mathcal{N}$ such that $n$ divides $m$. The inverse limit of this system is an infinite profinite group, which we denote by $G_\mathcal{N}$. For example, if we let
\[
\mathcal{N}(p) \coloneqq \set{p, p^2, p^3, \ldots}
\]
for some prime $p$, then $G_{\mathcal{N}(p)} \cong \mathrm{SL}_d(\mathbb{Z}_p)$, where $\mathbb{Z}_p$ is the ring of $p$-adic integers.
If $\mathcal{N} \subseteq \mathbb{N}^+$ is a directed set, then $\mathrm{SL}_d(\mathbb{Z})$ naturally embeds into $G_\mathcal{N}$, so we can identify $\Gamma$ with a subgroup of $G_\mathcal{N}$. This allows us to consider the left multiplication action $\alpha_\mathcal{N} \colon \Gamma \curvearrowright G_\mathcal{N}$. As the group $G_\mathcal{N}$ is compact, we can equip $G_\mathcal{N}$ with the Haar probability measure and view $\alpha_\mathcal{N}$ as a {p.m.p.}\xspace action. Clearly, the action $\alpha_\mathcal{N}$ is free. Note that for each $n \in \mathcal{N}$, there is a well-defined reduction modulo $n$ map $\pi_n \colon G_\mathcal{N} \to G_n$, which is equivariant and pushes the Haar measure on $G_\mathcal{N}$ forward to the uniform probability measure on $G_n$. In particular, $\alpha_n$ is a factor of $\alpha_\mathcal{N}$, and hence $\alpha_n \preccurlyeq \alpha_\mathcal{N}$.
The following is a direct consequence of Lemma~\ref{lemma:finite}:
\begin{lemma}\label{lemma:infinite}
Let $\mathcal{N}$, $\mathcal{M} \subseteq \mathbb{N}^+$ be directed sets such that $\mathcal{N} \subseteq \mathcal{M}$ and $\operatorname{gcd}(m, n_0) = 1$ for all $m \in \mathcal{M}$. Let $p$ be the smallest prime number that divides an element of $\mathcal{N}$ and let
\[
N \coloneqq \left\lfloor \frac{1}{25}\sqrt{p-1}\right\rfloor.
\]
Assume that $N \geqslant 1$. Then $u \in \wecSet {\alpha_\mathcal{N} \times \alpha_\mathcal{M}} S 2$, yet for all $f \in {\mathrm{St}}ep_{2, N} (G_\mathcal{N}; G_\mathcal{M})$, we have
\[
{\mathrm{dist}}_\infty(u, \wecSetFun {\alpha_\mathcal{N} \times \alpha_\mathcal{M}} S 2 f) \geqslant \delta.
\]
In particular, $N_{S, 2}(\alpha_\mathcal{N}, \alpha_\mathcal{M}, \delta) > N$.
\end{lemma}
\begin{proof}[\textsc{Proof}]
To prove that $u \in \wecSet {\alpha_\mathcal{N} \times \alpha_\mathcal{M}} S 2$, take any $n \in \mathcal{N}$, $n \geqslant 2$. Since $\alpha_n \preccurlyeq \alpha_\mathcal{N}$, $\alpha_\mathcal{M}$, it follows from Corollary~\ref{corl:mult} that $\alpha_n \times \alpha_n \preccurlyeq \alpha_\mathcal{N} \times \alpha_\mathcal{M}$, and, by Lemma~\ref{lemma:finite}, we obtain \[u \in \wecSet {\alpha_n \times \alpha_n} S 2 \subseteq \wecSet {\alpha_\mathcal{N} \times \alpha_\mathcal{M}} S 2.\]
Now suppose that some $f \in {\mathrm{St}}ep_{2, N} (G_\mathcal{N}; G_\mathcal{M})$ satisfies
\begin{equation}\label{eq:too_close}
{\mathrm{dist}}_\infty(u, \wecSetFun {\alpha_\mathcal{N} \times \alpha_\mathcal{M}} S 2 f) < \delta.
\end{equation}
Let $g \in {\mathrm{Meas}}_N(G_\mathcal{N})$, $h \in {\mathrm{Meas}}_N(G_\mathcal{M})$, and $\varphi \colon N \times N \to 2$ be such that $f = \varphi \circ (g, h)$. After modifying the maps $g$ and $h$ on sets of arbitrarily small measure, we can arrange that there exist integers $n \in \mathcal{N}$ and $m \in \mathcal{M}$ and functions $\tilde{g} \colon G_n \to N$ and $\tilde{h} \colon G_m \to N$ such that
\[
g = \tilde{g} \circ \pi_n \qquad \text{and} \qquad h = \tilde{h} \circ \pi_m.
\]
Using Propositions~\ref{prop:conv} and \ref{prop:conv_Lip}, we can ensure that inequality \eqref{eq:too_close} is still valid after this modification. We may furthermore assume that $n \geqslant 2$ and $n$ divides $m$ (the last part uses that $\mathcal{N} \subseteq \mathcal{M}$ and $\mathcal{M}$ is a directed set). Let $\tilde{f} \coloneqq \varphi \circ (\tilde{g}, \tilde{h})$. Then $\tilde{f} \in {\mathrm{St}}ep_{2,N}(G_n;G_m)$ and \[\wecSetFun {\alpha_n \times \alpha_m} S 2 {\tilde{f}} = \wecSetFun {\alpha_\mathcal{N} \times \alpha_\mathcal{M}} S 2 f.\]
But the existence of such $\tilde{f}$ contradicts Lemma~\ref{lemma:finite}.
\end{proof}
Now we can complete the proof of Theorem~\ref{theo:main}:
\begin{proof}[\textsc{Proof of Theorem~\ref{theo:main}}]
Recall that $\Gamma$ is a Zariski dense subgroup of $\mathrm{SL}_d(\mathbb{Z})$ with $d \geqslant 2$; $S$ is a finite symmetric subset of $\Gamma$ such that the group $\langle S \rangle$ is still Zariski dense; $n_0 \in \mathbb{N}^+$ and $\varepsilon > 0$ are given by Theorem~\ref{theo:BV} applied to $S$; and $\delta = \varepsilon/(32|S|)$.
\ref{item:squaring} By Lemma~\ref{lemma:infinite}, we have
\[
\lim_{p\text{ prime}}N_{S, 2}(\alpha_{\mathcal{N}(p)}, \alpha_{\mathcal{N}(p)}, \delta) \,=\, \infty.
\]
The desired conclusion follows by applying Theorem~\ref{theo:criterion} to the set $\mathcal{C} \coloneqq \set{(\mathfrak{a}, \mathfrak{a}) \,:\, \mathfrak{a} \in {\mathcal{W}}Free_\Gamma}$.
\ref{item:fiber} Let $\mathcal{M} \coloneqq \set{m \in \mathbb{N}^+ \,:\, \operatorname{gcd}(m, n_0) = 1}$. Then $\mathcal{M}$ is a directed set, and we claim that $\mathfrak{b} \coloneqq \wec{\alpha_\mathcal{M}}$ is as desired. Indeed, by Lemma~\ref{lemma:infinite}, we have
\[
\lim_{p\text{ prime}}N_{S, 2}(\alpha_{\mathcal{N}(p)}, \alpha_{\mathcal{M}}, \delta) \,=\, \infty,
\]
so it remains to apply Theorem~\ref{theo:criterion} to the set $\mathcal{C} \coloneqq \set{(\mathfrak{a}, \mathfrak{b}) \,:\, \mathfrak{a} \in {\mathcal{W}}Free_\Gamma}$.
\end{proof}
\printbibliography
\end{document}
|
\begin{document}
\maketitle
\date{\today}
\begin{abstract}
We study the automorphism group of the algebraic closure of a
substructure $A$ of a pseudo-finite field $F$. We show that the
behavior of this group, even when $A$ is large, depends essentially
on the roots of unity in $F$. For almost all completions of the
theory of pseudofinite fields, we show that over $A$, algebraic
closure agrees with definable closure, as soon as $A$ contains the
relative algebraic closure of the prime field.
\end{abstract}
\section{Introduction}
A pseudofinite field is an infinite model of the theory of finite fields.
By Ax \cite{Ax}, we know that a field $F$ is pseudofinite if and only if it is 1) perfect, 2) PAC and 3) has a unique (and so
necessarily Galois and cyclic) extension in the algebraic closure $F^a$ of $F$ of degree $n$ for every $n\in \nn\setminus\{0\}$.
See \cite{jardenfried} for the PAC property; it will play almost no role in this paper.
We are interested in definable and algebraic closure in $F$, over a substructure $A$ containing an elementary submodel $M$. Surprisingly, the answer depends
intimately on embeddings of number fields, or finite fields, into $M$. In characteristic zero for instance, we show that model-theoretic Galois groups over $M$
have odd order, unless $M$ contains all even-order roots of unity.
Real closed fields provide a geometrically comprehensible way of symmetry-breaking in algebraic geometry; a Galois cover of an algebraic variety splits into semi-algebraic sections. Our results imply that ``almost all" pseudo-finite fields
give an alternative geometric approach to such a splitting: the Galois cover splits into definable sections, or in model theoretic terms, definable and algebraic closures coincide. See \S 5, Corollary 8.
These symmetry-breaking results are in fact valid for quasi-finite fields in the
sense of \cite{Serre}, p. 188, i.e. perfect fields with absolute
Galois group $\hat{{\mathbb Z}}$, the profinite completion of ${\mathbb Z}$. We use
pseudo-finiteness only in order to demonstrate the converse, that if
all $p^n$'th roots of unity are contained in $F$, then Galois groups of order divisible by $p$ do occur as $Aut(B/A)$
with $M \leq A \leq B \leq N$, $M \prec N$.
In \S\ref{tourn}, we describe (in characteristic prime to $p$) an explicit structure incompatible with symmetries of order $p$.
We thank the referee for a close reading and two very useful reports.
\section{Quasi-finite Fields}
We write $\leq$ for the substructure relation; in particular, for fields $A,B$, $A\leq B$ means that $A$ is a subfield of $B$. The algebraic closure of any field $F$ will be denoted $F^a$.
By definition, a quasi-finite field $F$ has a unique
extension in $F^a$ of degree $n$ for every $n\in \nn\setminus\{0\}$. Let $F_n$ denote the unique extension of $F$ in $F^a$ of degree $n$. This extension is easily seen to be interpretable in $F$ using
parameters from $F$. Indeed, as $F$ is perfect, $F_n=F(\alpha)$ for some
$\alpha \in F_n$. Let $X^n+a_1X^{n-1}+\cdots +a_n$ be the minimal
polynomial of $\alpha$ over $F$. Then $F_n$, which is an $n$-dimensional
vector space over $F$ with basis $\{1,\alpha,\alpha^2,\ldots,\alpha^{n-1}\}$, is
definably isomorphic to $F^n$ via this basis as a vector space. Also
any linear endomorphism of $F_n$ translates into a definable (with
parameters) linear endomorphism of $F^n$ (coded by an $n\times n$
matrix over $F$). In particular, the $\alpha$-multiplication in $F_n$,
the multiplicative structure of $F_n$ and the action of
$\mathrm{Gal}(F_n/F)$ on $F_n$ can all be definably (with parameters) coded
in $F^n$.
Note that to interpret $F_n$ in $F$ we only need $a_1,\,\ldots, a_n$ as parameters, but to interpret the action of an element $\tau$ of
$\Gammaal(F_n/F)$ in $F$, apart from these $n$ parameters, we also need $b_0, \ldots, b_{n-1} \in F$ where $\tau(\alpha) = b_0 + b_1\alpha + \cdots +
b_{n-1}\alpha^{n-1}$, which makes up a total of $2n$ parameters. Note also that any other choice of the parameters $a_1,\,\ldots, a_n$ for which the
polynomial $X^n+a_1X^{n-1}+\ldots +a_n$ is irreducible gives rise to an isomorphic structure $F_n$; on the other hand, different choices of the
parameters $b_0,\,\ldots, b_{n-1}$ may define different field automorphisms.
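For concreteness, the coding just described can be carried out by hand in the smallest example. The following Python sketch is purely illustrative (the choices $F=\mathrm{GF}(2)$ and minimal polynomial $X^2+X+1$ are ours, not taken from the text): it realises $F_2=F(\alpha)$ inside $F^2$ via the basis $\{1,\alpha\}$ and codes multiplication by an element of $F_2$ as a $2\times 2$ matrix over $F$, in the spirit of the interpretation above.
\begin{verbatim}
import numpy as np

p = 2                          # the prime field F = GF(2)
# alpha has minimal polynomial X^2 + X + 1 over GF(2); in the basis {1, alpha},
# multiplication by alpha is the companion matrix C (columns: alpha*1, alpha*alpha).
C = np.array([[0, 1],
              [1, 1]]) % p

def mult(a, b):
    """Product (a0 + a1*alpha)*(b0 + b1*alpha), via the matrix coding inside F^2."""
    B = (b[0] * np.eye(2, dtype=int) + b[1] * C) % p   # matrix of "multiply by b"
    return tuple((B @ np.array(a)) % p)

assert mult((0, 1), (0, 1)) == (1, 1)   # alpha^2 = 1 + alpha
assert mult((1, 1), (0, 1)) == (1, 0)   # (1 + alpha)*alpha = 1, so alpha^{-1} = 1 + alpha
\end{verbatim}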
\begin{lemma} \label{commute}Let $F$ be a quasi-finite field, let $\sigma$ be a topological generator of $\mathop{\rm Aut}\nolimits(F^a/F)$, and let $M$ be an elementary submodel of $F$. Let $\mu$ be in $\mathop{\rm Aut}\nolimits(F/M)$. Then any extension of $\mu$ to
$F^a$ commutes with $\sigma$.
\end{lemma}
\noindent {\bf Proof:} $\,$ It is enough to show that $\sigma$ and $\mu$ commute on $F_n$, where $F_n$ is the unique extension of $F$ of degree $n$. Since $M$ is an elementary submodel of $F$,
$F_n=F(\alpha)$ where $\alpha$ is a root of an irreducible polynomial of degree $n$ with coefficients in $M$. We will show that $\sigma \mu (\alpha)= \mu \sigma (\alpha)$. We know that $\sigma(\alpha)=b_0+b_1\alpha+\ldots +b_{n-1}\alpha^{n-1}$ for some $b_0,\ldots, b_{n-1}$ in $M$. Then $\mu(\sigma(\alpha))=b_0+b_1\mu(\alpha)+\ldots +b_{n-1}\mu(\alpha)^{n-1}$. On the other hand, since $\mu$ is a field automorphism fixing the minimal polynomial of $\alpha$, $\mu(\alpha)=\sigma^r(\alpha)$ for some $r<n$, hence $\sigma(\mu(\alpha))=\sigma(\sigma^r(\alpha))=\sigma^r(\sigma(\alpha))=\sigma^r(b_0+b_1\alpha+\ldots +b_{n-1}\alpha^{n-1})=b_0+b_1\sigma^r(\alpha)+\ldots +b_{n-1}\sigma^r(\alpha)^{n-1}=b_0+b_1\mu(\alpha)+\ldots +b_{n-1}\mu(\alpha)^{n-1}=\mu(\sigma(\alpha))$.
We include here also a lemma of homological flavor, that will be essential in the main theorem.
\begin{lemma} \label{essential}Let $D$ be an abelian group with three commuting endomorphisms $P,S,T$. Let $\Omega=\cup_n \ker(P^n)$. Assume:
\begin{enumerate}
\item $P$ is surjective.
\item $T|\Omega =0$.
\item $\Omega\cap \ker(S)\subseteq \ker(P^r)$ for some $r \in {\mathbb N}$.
\end{enumerate}
Then: if $a\in \ker(S)$ and $P(a)\in \ker(T)$, then $a\in \ker(T)$.
\end{lemma}
\noindent {\bf Proof:} $\,$ Let $P(a)=b$ and $C=\{x\in D : \, P^n(x)=b\ \hbox{for some
} \, n > 0\}$. Since $S(b)=T(b)=0$ we have $S(C)$, $T(C)\subseteq
\Omega$. By (ii), $TS | _C=0$, i.e. $T(C)\subseteq \ker(S)$. By (iii), $T(C) \subseteq
\ker(P^r)$. But $P(C)=C$ by (i) and the definition of $C$, so
$PT(C)=T(C)$, hence $P^rT(C)=T(C)$; so $T(C)=0$. Since $P(a)=b$, we have $a\in C$, and therefore $T(a)=0$, i.e. $a\in \ker(T)$.
\endproof
\subsection{Notation: maximal $p$-extensions, roots of unity}
Let $k$ be a prime field, $p$ a prime. If $p\neq \mathop{\rm char}\nolimits(k)$, we let $\mu_{p^n}$ denote the multiplicative subgroup of
$k^a$ of $p^n$-th roots of unity; and $\mu_{p^\infty}=\bigcup\limits_{n<\omega} \mu_{p^n}$. Also write $\Omega_p = \mu_{p^\infty}$.
On the other hand, if $p= \mathop{\rm char}\nolimits(k)$, we let $\Omega_p$ be the maximal $p$-extension
of the prime field; in other words $\Omega_p = \cup_n \mathop{\rm Ker}\nolimits(P^n)$, where $P(x)=x^p-x$
is the Artin-Schreier operator.
In
either case it is clear that any field $M$ either contains $\Omega_p$, or intersects
$\Omega_p$ in a finite group.
\section{Geometric Representation}
In this section we will state and prove our main theorem on automorphism groups of pseudofinite fields. We begin with the main definitions.
Let $G$ be a pro-finite group.
\begin{definition} We say that the group $G$ is \emph{geometrically represented} in the theory $T$ if there exists $M_0\prec M \vDash T$ and $M_0 \leq A \leq B \leq M$, such that $B \subseteq \mathop{\rm acl}\nolimits(A)$ and $\mathop{\rm Aut}\nolimits(B/A) \cong G$. We say that a prime number $p$ is geometrically represented in the theory $T$ if $p$ divides the order of some finite group $G$ geometrically represented in $T$.
\end{definition}
In this definition, $A,B$ are substructures of $M$ containing $M_0$.
$\mathop{\rm Aut}\nolimits(B/A)$ must be interpreted as the set of permutations of $B$
over $A$ preserving the truth value of all formulas (computed in
$M$.) If one takes $M$ to be saturated and of greater cardinality than $B$, this can also be described as the
set of permutations of $B$ fixing $A$ that extend to automorphisms of $M$.
Compare \cite{H}.
For a theory of perfect fields, by Galois theory, $p$ is geometrically represented in $T$ iff the group ${\mathbb Z}/p{\mathbb Z}$ is geometrically represented in $T$. As the referee pointed out, for general theories this may not hold, and one may prefer a definition allowing $A,B$ to consist of imaginary elements. Theories of pseudo-finite fields admit elimination of finite imaginaries
over a model $M$ (cf. \cite{PAC}), so for such theories the two options are the same. For simplicity and as we are only concerned with fields, we will use the definition above.
\begin{Remark} \label{product1} If a finite group $G$ is geometrically represented in the complete theory $T$
over a model $M_0$, then $G$ is also geometrically represented over any elementary extension $M$ of $M_0$. {\rm Indeed we may assume
$G=\mathop{\rm Aut}\nolimits(B/A)$ where $B=M_0(ab),A=M_0(a)$. For any enumeration $m$ of $M$, $tp(m/M_0)$ has an extension to $M_0(a,b)$
which is finitely satisfiable in $M_0$; in this situation one says that $tp(ab/M)$ is an heir of $tp(ab/M_0)$. So we may take
$tp(ab/M)$ to have this property. It follows that the multiplicity of $b$ over $M(a)$ cannot be smaller than that of $b$ over $M_0(a)$.
So $tp(b/M_0(a)) \vdash tp(b/M(a))$. Hence $G=\mathop{\rm Aut}\nolimits(M(a,b)/ M(a))$.} \end{Remark}
\begin{theorem}\label{aut} Let $F$ be a quasifinite field, $p$ a prime; if $p \neq \mathop{\rm char}\nolimits(F)$, assume $F$ contains a primitive $p$'th root of unity. Assume
$p$ is geometrically represented in $Th(F)$. Then $F$ contains the group $\Omega_p$ (the $p^n$'th roots of unity if $p \neq \mathop{\rm char}\nolimits(F)$, the maximal $p$-extension of the prime field if $p = \mathop{\rm char}\nolimits(F)$).
\end{theorem}
\noindent {\bf Proof:} $\,$ By assumption there exist $M \prec N \models Th(F)$,
$M \leq A \leq B$, such that $p$ divides
$|\mathop{\rm Aut}\nolimits(B/A)|$. Replacing $A$ by $Fix(\tau_B)$,
where $\tau_B$ is some element in $\mathop{\rm Aut}\nolimits(B/A)$ of order $p$, we
may assume $B/A$ is a Galois extension
of degree $p$, generated by $\tau_B$. We may take $N$ to be $|B|^+$-saturated.
Let $\tau_N$ be an extension of $\tau_B$ to $\mathop{\rm Aut}\nolimits(N)$.
Since $M$ is an elementary submodel of $N$, $M$ is relatively algebraically closed in $N$, so $N$ and $M^a$ are
linearly disjoint over $M$; hence we may extend $\tau_N$ to a field automorphism
${\tau}$ of $N^a$, in such a way that ${\tau}$ fixes $M^a$.
Since $F$ is quasi-finite, we have $\mathop{\rm Aut}\nolimits(M^a/ M) =\mathop{\rm Aut}\nolimits(N^a/N) \cong \widehat{{\mathbb Z}}$;
there exists an automorphism $\sigma$ of $N^a/N$ generating the Galois group $\mathop{\rm Aut}\nolimits(N^a/N)$; the restriction $\sigma |M^a$ generates $\mathop{\rm Aut}\nolimits(M^a/M)$. By Lemma \ref{commute},
${\tau}$ commutes with $\sigma$.
Let $G$ be the multiplicative group if $p$ is not the characteristic of $F$, and let $G$ be the additive
group otherwise. Let $(D,+)= G(N^a)$, written additively. $End(D)$ is also written additively. Let $S = \sigma-Id$ and $T=\tau - Id$;
these are commuting endomorphisms of $D$. We define an additional endomorphism $P$ commuting with both, and an element $a \in A$, according to cases:
\begin{itemize}
\item If $p=\mathop{\rm char}\nolimits(F)$, let $P(x)$ be the Artin-Schreier operator $Fr(x)-x$, where $Fr$ is the Frobenius $p$'th power map on $G$.
In this case, by Artin-Schreier theory, $B=A(b)$ for some $b$ with $P(b) \in A$.
\item If $p \neq \mathop{\rm char}\nolimits(F)$ and $M$ contains the $p$'th roots of $1$, let $P=p \in End(D)$, $P(x)=px$. By Kummer theory, $B=A(b)$
for some $b$ with $P(b)\in A$.
\end{itemize}
Let $\Omega$ be as in Lemma \ref{essential}; so $\Omega=\Omega_p$. As $\sigma, {\tau}$
are field automorphisms, they commute with $P$; so $P,S,T$ commute.
(i) $P$ is
surjective on $N^a$ by algebraic closedness.
(ii) $T|\Omega_p=0$ since $\tau$ fixes $M^{a}$.
(iii) Suppose for contradiction that $\Omega_p$ is {\em not} contained
in $M$. Then $M \cap \Omega_p$ is a finite subgroup of $\Omega_p$,
and for some $r \in {\mathbb N}$, $P^r$ vanishes on $M \cap \Omega_p = \ker(S) \cap \Omega_p$.
By Lemma \ref{essential}, $T(b)=0$, i.e., $\tau(b)=b$; so $\tau |B=Id_B$. This contradicts the choice of $\tau$.
Thus $\Omega_p$ is contained in $M$, and so in $F$.
$\Box$
\endproof
\vskip 0.5cm
\begin{corollary} \label{aut-c} Let $F$ be a quasifinite field, $p$ a prime; assume $F[\mu]$ does not contain $\Omega_p$, where $\mu$ is a primitive $p$'th root of $1$.
Then $p$ is not geometrically represented in $Th(F)$.
\end{corollary}
\noindent {\bf Proof:} $\,$ Let $F'=F[\mu]$, and suppose towards a contradiction that $p$ is geometrically represented in $Th(F)$. Since $[F':F] $ divides $p-1$, it is clear that $p$ remains geometrically represented in $Th(F')$.
By Theorem \ref{aut}, $F'$ contains $\Omega_p$, contradicting the hypothesis.
$\Box$
\endproof
\vskip 0.5cm
We prove a converse to Theorem \ref{aut} when $\mathop{\rm char}\nolimits(F) \neq p$, and $F$ is pseudo-finite.
\begin{theorem}\label{aut-conv} Let $p$ be a prime, $F$ a pseudofinite field not of characteristic $p$. Assume
$F$ contains $\mu_{p^\infty}$. Then
$p$ is geometrically represented in $Th(F)$.
\end{theorem}
\noindent {\bf Proof:} $\,$
As $Th(F)$ is pseudo-finite, it is the restriction to $Fix(\sigma)$
of a completion $T$ of the theory ACFA of algebraically closed fields with an automorphism $\sigma$.
We refer to \cite{CH} for basic facts about ACFA. In particular,
if $A$ is a substructure of a model of $T$ and $\mathop{\rm acl}\nolimits(A)=A$, then any automorphism $\tau$ of $(A,\sigma)$ is elementary; so $\tau$ restricts to an automorphism of $Fix(\sigma)$, elementary
in the sense of $Th(F)$.
Let $K=Fix(\sigma)$ where $(M,\sigma) \models T$. (One may choose $M$ countable, if desired.)
Let $N$ be the field of generalized power series in $x$ with $\qq$-exponents
with coefficients in $M$. By \cite{Hahn} this is an algebraically
closed field, see \cite{kedlaya}. Extend $\sigma$ to $N$ by mapping
$\sum \alpha_i x^i$ to $\sum \sigma(\alpha_i) x^i$. Then $(N,\sigma)$
embeds into an elementary extension of $(M,\sigma)$.
Let $\{\omega_i\}_{i<\omega}$ be a coherent system of the $p^i$-th roots of unity in $M$, \emph{i.e.} $\omega_0=1$ and $\omega_{i+1}^p=\omega_i$ for $i\geq 0$.
Define $\tau$ to be an automorphism of $N$ fixing $M$, and acting naturally on generalized power series, via:
$$\tau:
x^{1/p^i}\mapsto \omega_i x^{1/p^i}$$ and
$$\tau: x^{1/n}\mapsto x^{1/n} \hbox{ for } p\nmid n.$$
Note that, since $\sigma(x^{1/p^i})= x^{1/p^i}$, we have $$\sigma(\tau(x^{1/p^i}))=\sigma(\omega_i x^{1/p^i})=\omega_i x^{1/p^i}$$
and $$\tau(\sigma(x^{1/p^i}))=\tau( x^{1/p^i})= \omega_i x^{1/p^i}.$$
As $\sigma,\tau$ also commute on $x^{1/n}$ for $(n,p)=1$, and on $M$, it is clear that $\sigma$ commutes with $\tau$ on $N$.
Now $\tau$ fixes $K(x)$ but not $K(x^{1/p})$; and $\tau^p$ fixes $K(x^{1/p})$.
So the group of $N$- or $K$-elementary automorphisms of $K(x^{1/p})$ over $K(x)$ includes (and hence equals) $\langle\tau | K(x^{1/p})\rangle \cong {\mathbb Z}/p$.
\section{ Automorphism Group and Tournaments} \label{tourn}
Here we give a different proof of Theorem \ref{aut}, by constructing a structure that can have no automorphisms of order $p$.
Let $p$ be a prime. By a {\em $p$-tournament} we mean a $p$-place relation $R$, such that
for any $p$-tuple of distinct elements $x_1,\ldots,x_p$,
$$R(x_{\tau(1)},\ldots,x_{\tau(p)}) \hbox{ holds for exactly one element } \tau \in \langle( 12\ldots p)\rangle$$
where $(12\ldots p)$ denotes the cyclic permutation of order $p$ over the $p$ element set $\{1,\ldots,p\}$,
and $ \langle( 12\ldots p)\rangle$ is the subgroup of $\mathop{\rm Sym}\nolimits(p)$ generated by this permutation, isomorphic to $ \mathbb{Z}/p\mathbb{Z}.$
A $p$-tournament clearly has no automorphism of order $p$, or even an automorphism $\sigma$ with a $p$-cycle
$a_0,\ldots,a_{p-1}$ with $\sigma(a_i)=a_{i+1} \ (\mathop{\rm mod}\nolimits p)$. Thus $p$ is {\em not} geometrically represented in $T$
if $T$ is 1-sorted and admits a $p$-tournament structure on the main sort. In fact no Galois group of $T$ can
have order divisible by $p$, whether or not the base contains an elementary submodel.
\begin{proposition} \label{p-tournament} Let $p$ be a prime, and $F$ a field of characteristic $\neq p$,
containing the group $\mu_p$ of $p$'th roots of unity. Let $\omega
\in \mu_p \setminus \{1\}$. Let $S$ be a set of representatives for the
cosets of $\mu_p$ in $F^*=F\setminus \{ 0\}$.
Then in the structure $(F,+,\cdot,\omega,S)$ there exists a definable
$p$-tournament on $F$. \end{proposition}
\begin{remark} \rm
When $F$ is pseudo-finite, and $\omega \in F$, we have
$[F^*:(F^*)^p]=p$ by a counting argument. The same conclusion holds
when $F$ is quasi-finite, using Galois cohomology: the cohomology
exact sequence associated with the short exact sequence $$1 \to
\mu_p \to (F^{a})^*\to_{x \mapsto x^p} (F^{a})^* \to 1$$ gives,
using Hilbert 90, $$F^* \to_{x \mapsto x^p} F^* \to
Hom(\widehat{{\mathbb Z}}, \mu_p) \to H^1(Gal, (F^{a})^*)=0.$$ We refer to
\cite{Tate} for the basics of Galois cohomology.
Assume $F$ contains
a primitive $p^n$-th root of unity $\zeta$, but not any $p$'th root of $\zeta$. Then
$F^*$ is the direct sum of $\mu_{p^n}$ and $(F^*)^{p^n}$. Let $S_0$ be a set of representatives
for $\mu_{p^n} / \mu_p$. Then $S_0 (F^*)^{p^n}$ is a set of representatives for $(F^*)/ \mu_p$.
Hence, using the Proposition, there exists a $p$-tournament definable in the field $F$ using
$\mu_{p^n}$ as parameters. This gives another proof of Theorem \ref{aut}.
\end{remark}
Before proving Proposition \ref{p-tournament}, we illustrate it with the case $p=2$.
Assume $F$ does not contain $\sqrt{-1}$.
A \emph{tournament} on a set $X$ is an irreflexive binary relation $R\subset X\times X$ such that for every $x\neq y \in X$ exactly one of
$R(x,y)$ and $R(y,x)$ holds. A pseudofinite field $F$ not containing $\sqrt{-1}$ interprets a tournament by the formula: $$(\exists z) (z^2 =
x-y).$$ The automorphism group of any field interpreting a 0-definable tournament cannot have any involutions.
We can still define a tournament in a pseudofinite field $F$ which contains all the $2^n$-th roots of unity but not all the $(2^{n+1})$-st
roots of unity.
For every $m\in \nn$ we denote the set of $2^m$-th roots of unity by $\mu_{2^m}$. Let $S\subset \mu_{2^n}$, such that $S\cap -S=\emptyset$ and
$S\cup -S=\mu_{2^n}$. Define a relation $R$ on $F\times F$ as follows:
$$R(x,y) \hbox{ if and only if } x-y \hbox{ is in } \bigcup_{c\in S}c F^{2^n}.$$
Then this defines a tournament in $F$. That is, for every $x,y \in F, \, x\neq y$, exactly one of $R(x,y)$ and $R(y,x)$ holds. Suppose $\neg
R(x,y)$; then $(x-y)\not\in \bigcup_{c\in S} c F^{2^n}$, so $(x-y)$ is in $\bigcup_{c\in -S} c F^{2^n}$. Therefore $$-(x-y)=(y-x) \in
\bigcup_{c\in S} c F^{2^n}$$ hence $R(y,x)$. Also, at most one of $R(x,y)$ and $R(y,x)$ holds since $$F^{*}=\bigsqcup_{c\in
\mu_{2^n}}c{F^{*}}^{2^n},$$ that is, $\mu_{2^n}$ is a set of representatives for the cosets of the subgroup ${F^{*}}^{2^n}$ of
the multiplicative group $F^{*}$ of $F$.
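As a sanity check, the same recipe can be run in a finite field, the prototypical quasi-finite situation. The following Python fragment is an illustration only and plays no role in the arguments; the parameters $q=13$, $n=2$ are arbitrary subject to $2^n$ dividing $q-1$ while $2^{n+1}$ does not.
\begin{verbatim}
q, n = 13, 2                 # 2^n = 4 divides q - 1 = 12, but 2^{n+1} = 8 does not
mu = [x for x in range(1, q) if pow(x, 2**n, q) == 1]    # the 2^n-th roots of unity
powers = {pow(x, 2**n, q) for x in range(1, q)}          # the subgroup (F^*)^{2^n}
S = []
for c in mu:                 # choose S with S cap -S empty and S cup -S = mu_{2^n}
    if (q - c) % q not in S:
        S.append(c)
cosets = {(c * w) % q for c in S for w in powers}        # union of the cosets c(F^*)^{2^n}
R = lambda x, y: (x - y) % q in cosets
for x in range(q):
    for y in range(q):
        if x != y:
            assert R(x, y) != R(y, x)                    # exactly one of R(x,y), R(y,x) holds
\end{verbatim}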
Now we will generalize the construction of the above tournament relation from binary to $p$-ary.
\noindent {\bf Proof:} $\,$ (of Proposition \ref{p-tournament})
Define a $p$-ary relation $R_\omega$ on $F$ as follows:
$$R_\omega(x_1,x_2,\ldots,x_p) \hbox{ if and only if } x_1+\omega x_2+
\ldots+\omega^{p-1}x_{p} \in S $$
\noindent {\bf Claim 1:} $\,$ Assume $x_1+\omega x_2+
\ldots+\omega^{p-1}x_{p} \neq 0$. Then
$$R_\omega(x_{\tau(1)},\ldots,x_{\tau(p)}) \hbox{ holds for exactly one element in } \langle( 12\ldots p)\rangle\simeq \mathbb{Z}/p\mathbb{Z}.$$
Indeed let $\pi \in \langle( 12\ldots p)\rangle$ and $k=\pi(1)$ (so $k$ determines the element $\pi$). Then we have:
$$x_{\pi(1)}+\omega x_{\pi(2)}+\ldots+\omega^{p-1}x_{\pi(p)}=\omega^{-(k-1)}(x_1+\omega x_2+\ldots+\omega^{p-1}x_{p})$$
Since $S$ is a set of representatives for $F^*/\mu_p$, and
$a:=x_1+\omega x_2+\ldots+\omega^{p-1}x_{p} \in F^*$, it is clear that
$\omega^{-(k-1)} a \in S$ for a unique value of $k$ modulo $p$.
Thus $R_\omega$ is almost a $p$-tournament, but we need to deal with certain linearly dependent $p$-tuples.
\noindent {\bf Claim 2:} $\,$ Assume $x_1 + \omega^i x_2 + \cdots + \omega^{i(p-1)} x_p = 0$ for all $i=1,\ldots,p-1$. Then
$x_1=\cdots =x_p$.
This is because the Vandermonde matrix with rows $(1,\omega,\ldots,\omega^{p-1})$,
$(1,\omega^2,\cdots,\omega^{2(p-1)})$, $\ldots$, $(1,\omega^{p-1},\ldots,\omega^{(p-1)(p-1)})$
has rank $p-1$. So the kernel of this matrix is a vector space of dimension $1$. But
$(1,\ldots,1)$ is clearly in the kernel; hence the kernel consists of scalar multiples of this vector.
Since we are only concerned with $p$-tuples of distinct elements, for each such $p$-tuple
$x=(x_1,\ldots,x_p)$ there exists a smallest $i \in \{1,\ldots,p-1\}$ such that $x_1+ \omega^i x_2 + \ldots \neq 0$.
Write $i=i(x)$, and define $R(x_1,\ldots,x_p) $ to hold iff $R_{\omega^{i(x)}}(x_1,\ldots,x_p)$ holds.
It is then clear that $R$ is a $p$-tournament.
$\Box$
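The construction also lends itself to a quick machine verification in a finite field containing $\mu_p$. The Python check below is only a sanity test of the combinatorics and is not part of the proof; the values $q=11$, $p=5$ are arbitrary test parameters with $p$ dividing $q-1$.
\begin{verbatim}
from itertools import combinations, permutations

def check(q=11, p=5):
    omega = next(x for x in range(2, q) if pow(x, p, q) == 1)   # a primitive p-th root of 1
    mu_p = [pow(omega, i, q) for i in range(p)]
    S, covered = set(), set()
    for x in range(1, q):                # greedy transversal of the mu_p-cosets in F_q^*
        if x not in covered:
            S.add(x)
            covered |= {(x * w) % q for w in mu_p}
    pw = [[pow(omega, i * j, q) for j in range(p)] for i in range(p)]

    def R(t):                            # R(t) built from R_{omega^{i(t)}} as in the proof
        for i in range(1, p):
            s = sum(t[j] * pw[i][j] for j in range(p)) % q
            if s != 0:
                return s in S            # i.e. R_{omega^i}(t) for the smallest such i
        return False                     # unreachable for distinct entries, by Claim 2

    shifts = [tuple((j + k) % p for j in range(p)) for k in range(p)]   # cyclic shifts
    for base in combinations(range(q), p):          # all p-sets of distinct elements
        for t in permutations(base):
            assert sum(R(tuple(t[s[j]] for j in range(p))) for s in shifts) == 1

check()
\end{verbatim}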
\section{Model Theoretic Consequences}
Let $T_{\mathop{\rm Psf}\nolimits}$ be the theory of pseudo-finite fields. Let $K= \mathbb{Q}$ or $K=\mathbb{F}_p$. By Ax's theorem \cite{Ax} (cf. also \cite{jardenfried}, Chapter 20) there is a one-to-one correspondence between the conjugacy classes of $\mathop{\rm Aut}\nolimits(K^a/K)$ and the set of completions of the theory $T_{\mathop{\rm Psf}\nolimits}$ of characteristic $=\mathop{\rm char}\nolimits(K)$. Namely, note that if $M$ is a model of $T$, $K^a \cap M$ is determined by $T$ up to isomorphism; call it $K^a_T$. Then
$\sigma$ corresponds to $T$ iff $Fix(\sigma) \cong K^a_T$.
The absolute Galois group $\Gamma=Gal(K^a/K)$ is a compact topological group with a unique normalized left invariant Haar measure $\mu_\Gamma$. Let $\Pi$ be the set of conjugacy
classes of $\Gamma$, and let $\pi: \Gamma \to \Pi$ be the quotient map.
$\mu_\Gamma$ induces a measure $\mu$ on $\Pi$, namely $\mu(U) = \mu_\Gamma(\pi ^{-1}(U))$. Using the 1-1 correspondence above, we identify $\Pi$ with the set of completions $\mathcal{C}$ of the theory of pseudofinite fields of characteristic $=\mathop{\rm char}\nolimits(K)$. We obtain a measure on $\mathcal{C}$. By a theorem of Jarden (cf. Theorem 20.5.1 of \cite{jardenfried}),
for almost all $\sigma \in \Gamma$, $K^a_T \models T_{\mathop{\rm Psf}\nolimits}$.
If $A \subset N \models T$, let $\mathop{\rm dcl}\nolimits(A)$ denote the definable closure of $A$ in $N$. We also write $M(a)$ for $\mathop{\rm dcl}\nolimits(M \cup \{a\})$; if $A,B$ are definably closed subsets of $N$, we will just write $AB$
for $\mathop{\rm dcl}\nolimits(A \cup B)$.
$\mathop{\rm acl}\nolimits$ denotes algebraic closure. Thus $b \in \mathop{\rm acl}\nolimits(A)$ iff there exists a formula $\phi(x)$ with parameters in $A$ such that $N \models \phi(b)$ and
$\phi(N)$ is finite. The smallest possible size $|\phi(N)|$ is called the multiplicity of $b$ over $A$.
\begin{corollary} For almost all $T$ in $\mathcal{C}$, we have $\mathop{\rm acl}\nolimits=\mathop{\rm dcl}\nolimits$ over
$K^a_T$.
\end{corollary}
\noindent {\bf Proof:} $\,$ For each prime $p\neq \mathop{\rm char}\nolimits (K)$ the set $\{\sigma \in \mathop{\rm Aut}\nolimits (K^a/K) : \sigma^{p-1} \hbox{ fixes } \, \mu_{p^{\infty}} \}$ has measure 0. So $\bigcup_{p\neq \mathop{\rm char}\nolimits (K)} \{\sigma \in \mathop{\rm Aut}\nolimits (K^a/K) : \sigma^{p-1} \hbox{ fixes } \, \mu_{p^{\infty}} \}$ has measure 0. If $\mathop{\rm char}\nolimits (K)=p_0$, $\{\sigma \in \mathop{\rm Aut}\nolimits (K^a/K) : \sigma \hbox{ fixes the maximal } p_0\hbox{-extension } L_{p_0} \}$ has measure 0. Hence by Corollary \ref{aut-c}, for almost all $T\in \mathcal{C}$, any group which is geometrically represented in $T$ is trivial; hence $\mathop{\rm acl}\nolimits=\mathop{\rm dcl}\nolimits$ over $K^a_T$.
Clearly, the same is true for Baire category in place of measure.
\begin{Remark} \rm
While $\mathop{\rm dcl}\nolimits=\mathop{\rm acl}\nolimits$ is a restricted form of
Skolemization, the theories of pseudo-finite fields are not
Skolemized. For instance, let $F_0$ be pseudo-finite, $\mathop{\rm char}\nolimits(F_0)=0$,
and let $K=F_0((t^{\qq}))$ be the field of Puiseux series over
$F_0$. Then $K$ has Galois group $\hat{{\mathbb Z}}$, and embeds into a
pseudo-finite field $F$ such that $\mathop{\rm Aut}\nolimits(F^a /F) \to \mathop{\rm Aut}\nolimits(K^a / K)$
is an isomorphism; hence $K$ is relatively algebraically closed in
$F$. But being Henselian and not separably closed it cannot be PAC,
by Corollary 11.5.6 of \cite{jardenfried}.
\end{Remark}
\begin{example} A simple theory $T$ geometrically representing $1$ and ${\mathbb Z}/2{\mathbb Z}$, but no other finite group. \rm Let $L = \{R , f,p \}$
where $R$ is a binary predicate, $p$ a unary function, $f$ a binary function symbol. Let $X$ denote the image of $p$, $Y(x):= p ^{-1}(x) \setminus \{x\}$, and $Y=\cup_{x \in X} Y(x)$. The universal theory $T_{\forall}$ then states that $R$ is a tournament on $X$, $p: Y \to X$ has fibers of size $\leq 2$,
and $f(a,b)$ chooses an element of $Y(b)$, provided that $R(a,b)$ holds. (Formally: $p(p(x))=p(x)$; $R(x,y)$ implies $px=x,py=y$, and if $px=x,py=y$ then $R(x,y)$ or $R(y,x)$ but not both; $Y(x):= p ^{-1}(x) \setminus \{x\}$ has at most two elements; $f(x,y)=f(px,py)$; $pf(x,y)=y$, and $f(x,y)=y \leftrightarrow p(y)=y$. )
It is easy to see that
the finite $T_\forall$-structures form an amalgamation class with the joint embedding property,
and hence the model completion is a complete,
$\aleph_0$-categorical theory $T$. Moreover, any type $tp(c/A)$ with $c \in X$ and $X(A) \neq \emptyset $
admits an automorphism invariant extension to a universal domain $N \geq A$. If $c \in A$ this is obvious. If $c \notin A$ then in fact, if $d \in Y(c)$ then $tp(cd /A)$ admits an invariant extension to $N$. Namely, set $R(b,c)$ for all $b \in X(N) \setminus A$; and let
$f(b,c) =d$.
Let $M \models T$, and let $a$ belong to some elementary extension $N$, with $\neg R(m,a)$ for $m \in M$. Then the two elements of $Y(a)$ have the same type over $Ma$. So ${\mathbb Z}/2{\mathbb Z}$ is geometrically represented in $T$.
We wish to show that no other groups may be geometrically represented in $T$, or even in $T^{eq}$. For the latter, we need to understand algebraic closure in (potentially) imaginary sorts.
Let $A=\mathop{\rm dcl}\nolimits(A)$ be a finite structure, with $X(A) \neq \emptyset$. We claim that $\mathop{\rm acl}\nolimits(A) \cap (X \cup Y) = \mathop{\rm acl}\nolimits^{eq}(A)$, i.e. the algebraic closure
of $A$ in the sorts $X,Y$ accounts for the algebraic imaginaries. To see this, we may add to $A$ the elements of $p ^{-1}(X(A))$, so
that $p: Y(A) \to X(A)$ is a 2-1 map. Let $e \in \mathop{\rm acl}\nolimits(A)$ be an imaginary element. Let $B$ be a finite set of elements of $X \cup Y$, with $e \in \mathop{\rm dcl}\nolimits(A \cup B)$;
say $X(B) \setminus A =\{b_1,\ldots,b_n\}$. Let $b_{0}$ be an element such that $R(b_0,b_i)$ holds for $1 \leq i \leq n$. Note that (thanks to $f$),
$Y(B) \subset \mathop{\rm dcl}\nolimits(X(B) \cup \{b_0\})$. So $e \in A(b_0,b_1,\ldots,b_n)$. But $tp(b_i/ A(b_0,\ldots,b_{i-1}))$ extends to an
$A(b_0,\ldots,b_{i-1})$-invariant type over the universal domain. Hence by reverse induction on $i$ we have $e \in A(b_0,\ldots,b_{i-1})$ for each $i$, and so for $i=0$ we have $e \in \mathop{\rm dcl}\nolimits(A)$.
On the other hand, if $M \subset A \subset N^{eq}$ then $Aut(\mathop{\rm acl}\nolimits(A)/A)$ can have no more than two elements. To see this, we may take $A$
finite. By the remark above, it suffices to consider $G=Aut( \cup_{a \in X(A)} Y(a) / A)$; this group will only grow if
$A$ is restricted to the part in $X$; so say $A = \{a_1,\ldots,a_n \}$ with $a_i \in X$. If for every $a_i$ there is some $a_j$ with $R(a_j,a_i)$,
then $G$ is trivial, because of $f$. Otherwise, for some $a_i$, for all $a_j$ with $j \neq i$ we have $R(a_i,a_j)$. In this case, for all $a_j$ with
$j \neq i$ we have $Y(a_j) \subset \mathop{\rm dcl}\nolimits(A)$, so $G$ acts faithfully on $Y(a_i)$, and hence has at most two elements.
\end{example}
We could easily modify this example so as to find a theory representing all subquotients of some fixed finite group $H$, but no other
finite group.
\begin{Remark} \label{product-3}
If two finite groups $G,H$ are geometrically represented in the complete, stable theory $T$, then so is $G \times H$. { \rm Using Remark \ref{product1}, we may take $G,H$
to be represented over the same base model $M_0$. Say $G=\mathop{\rm Aut}\nolimits(B/A)$,
$M_0 \leq A \leq B \leq N$, with $B \subset \mathop{\rm acl}\nolimits(A)$ and $B$ normal over $A$;
and similarly $H=\mathop{\rm Aut}\nolimits(D/C)$.
It follows that $Aut(BD/AC) = G \times H$. } This property is inherited by the model completion of the theory of models of $T$ with a distinguished
automorphism, if it exists; hence it holds for $ACFA$ and for the theory of pseudo-finite fields.
\end{Remark}
\begin{question} Which finite groups can be geometrically represented in theories of pseudo-finite fields? \end{question}
It follows from Remark \ref{product-3} and Theorem \ref{aut-conv} that any finite {\em Abelian} group can be geometrically represented in the theory of pseudo-finite fields containing the roots of unity. Perhaps the internal Galois groups are indeed all Abelian.
We end with some open questions. To state them algebraically, recall the standard description of the basic structure of the cyclotomic extension of $k=\mathbb{Q}$. $\mathop{\rm Aut}\nolimits(k(\mu_{p^{\infty}})/k)$ is the
inverse limit of the automorphism groups of the finite extensions
$\mathop{\rm Aut}\nolimits(k(\mu_{p^n})/k)$ and if $p\neq 2$,
$$\mathop{\rm Aut}\nolimits(k(\mu_{p^n})/k)\simeq {\mathbb Z}/p^{n-1}{\mathbb Z}\times {\mathbb Z}/(p-1){\mathbb Z}.$$
For $i\geq j$, the restriction homomorphism
$$\begin{array}{cccc}
r_{ij}:&\mathop{\rm Aut}\nolimits(k(\mu_{p^i})/k)&\longrightarrow & \mathop{\rm Aut}\nolimits(k(\mu_{p^j})/k)\\
&\phi&\longmapsto & \phi_{\mid k(\mu_{p^j})},\\
\end{array}$$
which is certainly onto, respects the decomposition. Hence
$$\mathop{\rm Aut}\nolimits(k(\mu_{p^{\infty}})/k) \simeq {\mathbb Z}_p\times {\mathbb Z}/q{\mathbb Z},$$
where $q=p-1$;
this is also valid for $p=2$, taking $q=2$.
Let $L_p$ be the subfield of $k(\mu_{p^{\infty}})$ fixed by $${\mathbb Z}/q{\mathbb Z}<{\mathbb Z}_p\times {\mathbb Z}/q{\mathbb Z}$$
and let $\omega$ be a primitive $p$-th root of unity if $p\neq 2$ and $\sqrt{-1}$ if $p=2$.
The field $L_p$ does not contain any $p^n$-th roots of unity except $\pm 1$. Suppose it does; then $L_p$ contains $\omega$, hence
the automorphism group of $L_p/k$ contains a subgroup of index $q$, but this is impossible since
$\mathop{\rm Aut}\nolimits(L_p/k)\simeq {\mathbb Z}_p$.
On the other hand, the finite extension $L_p(\omega)=k(\mu_{p^{\infty}})$ contains all $p^n$-th roots of unity.
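The splitting used here is also easy to confirm numerically: for odd $p$, $\mathop{\rm Aut}\nolimits(k(\mu_{p^n})/k)\simeq ({\mathbb Z}/p^n{\mathbb Z})^*$ is cyclic of order $p^{n-1}(p-1)$, and since the two factors are coprime this yields the decomposition ${\mathbb Z}/p^{n-1}{\mathbb Z}\times {\mathbb Z}/(p-1){\mathbb Z}$. The small Python check below is illustrative only; the values $p=5$, $n=3$ are arbitrary.
\begin{verbatim}
from math import gcd

p, n = 5, 3
m = p**n
units = [a for a in range(1, m) if gcd(a, m) == 1]

def order(a):
    o, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        o += 1
    return o

orders = [order(a) for a in units]
assert len(units) == p**(n - 1) * (p - 1)
assert max(orders) == p**(n - 1) * (p - 1)   # a generator exists, so the group is cyclic
\end{verbatim}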
\bigskip
\begin{example} \rm
Let $F$ be a pseudo-finite field whose algebraic points intersect $\mathbb Q(\mu_{p^{\infty}})$
in $L_p$. Let $F'=F(\mu_{p^{\infty}})$. By Theorem
\ref{aut-conv}, $p$ can be geometrically represented in $Th(F')$. When $p=2$,
by Theorem \ref{aut}, $p$ cannot be geometrically represented in $Th(F)$. \end{example}
We have not settled whether this phenomenon persists for $p>2$. We formalize the question algebraically:
Let $F$ be an algebraically closed field (say $\mathbb Q^a$). Let $G= \mathop{\rm Aut}\nolimits(F(t)^a / F(t))$ be the absolute Galois group of $F(t)$.
Let $p$ be a prime, and $\mu_{p^\infty}=\cup_n \mu_{p^n}$ the group of $p^n$'th roots of $1$. Let $1^{1/(p-1)}$ be a primitive $(p-1)$-st root of $1$.
Let $\sigma \in \mathop{\rm Aut}\nolimits(F)$. Let $G(\sigma)$ be the centralizer in $G$ of some lifting of $\sigma$ to $\mathop{\rm Aut}\nolimits(F(t)^a)$;
so $G(\sigma)$ is a closed subgroup of $G$, determined up to conjugacy. Let $G(\sigma,p)$ be the pro-$p$ part of $G(\sigma)$. We have:
\begin{enumerate}
\item If $\sigma$ fixes $\mu_{p^\infty}$, then $G(\sigma,p)$ is a large pro-$p$ group. (In particular, it contains ${\mathbb Z}_p$.)
\item If $Fix(\sigma) [1^{1/(p-1)}] \cap \mu_{p^\infty}$ is finite, then $G(\sigma,p)=1$. \end{enumerate}
Almost all $\sigma$ (for the Haar measure) fall into case (ii) for all $p$, and for them we have $G(\sigma)=1$.
\begin{question} What about the intermediate cases? In particular let $\sigma(x)=x^{-1}$ for $x \in \mu$. Is $G(\sigma,p)=1$? \end{question}
\begin{problem} Explain a priori why (if it is indeed the general case) $G(\sigma)$ depends only on the action
of $\sigma$ on roots of unity.
\end{problem}
\begin{thebibliography}{Abc1}
\bibitem[{\bf Ax}]{Ax} Ax, J.\ ``The elementary theory of finite fields", \emph{Ann.\ Math.}\ \textbf{88} (1968) 239--271.
\bibitem[{\bf CH}]{CH} Chatzidakis, Z.\ Hrushovski, E.\ ``Model theory of difference fields", \emph{Trans. Amer. Math. Soc.} \textbf{351} (1999), no. 8, 2997--3071.
\bibitem[{\bf FJ}]{jardenfried} Fried, M.\ Jarden, M.\ \textit{Field Arithmetic}, Erg. Math. \textbf{11},
Springer-Verlag Berlin, 2005.
\bibitem[{\bf Ha}]{Hahn} Hahn, H.\ ``\"{U}ber die nichtarchimedischen Gr\"{o}ssensysteme", Sitzungsberichte der Kaiserlichen Akademie der Wissenschaften, Wien, Mathematisch-Naturwissenschaftliche Klasse (Wien. Ber.) (1907), 116: 601--655 (reprinted in: Hahn, Hans (1995). Gesammelte Abhandlungen I. Springer-Verlag.)
\bibitem[{\bf H}]{H} Hrushovski, E., ``Finitely Axiomatizable Aleph-One
Categorical Theories", {\em Journal of Symbolic Logic}, {\bf 59} (1994)
pp. 838--845.
\bibitem[{\bf PAC}]{PAC} Hrushovski, E., Pseudo-finite fields and related structures, Model Theory and Applications, Quaderni di Matematica, vol. 11, Aracne, Rome, 2002, pp. 151--212.
\bibitem[{\bf K}]{kedlaya} Kedlaya, Kiran Sridhara, ``The algebraic closure of the power series field in positive characteristic", Proc. Amer. Math. Soc. \textbf{129} (2001) pp. 3461--3470.
\bibitem[{\bf S}]{Serre} Serre, Jean-Pierre \textit{Local Fields}. New York: Springer Verlag, 1979.
\bibitem[{\bf T}]{Tate} Tate,\ J. \textit{Galois Cohomology} [Internet], IAS/Park City Mathematics Series; 1999 [cited 2009, August 25]. Available from: http://modular.math.washington.edu/Tables/Notes/tate-pcmi.html
\end{thebibliography}
\end{document}
|
\begin{document}
\title{On the best approximation of the hierarchical matrix product}
\author{J\"urgen D\"olz}
\address{
J\"urgen D\"olz,
Technical University of Darmstadt,
Graduate School CE,
Dolivostra\ss{}e 15, 64293 Darmstadt, Germany}
\email{[email protected]}
\author{Helmut Harbrecht}
\address{
Helmut Harbrecht,
University of Basel,
Department of Mathematics and Computer Science,
Spiegelgasse 1, 4051 Basel, Switzerland}
\email{[email protected]}
\author{Michael D. Multerer${}^\dagger$}
\address{
Michael D. Multerer, born as Michael D.~Peters,
ETH Zurich,
Department of Biosystems Science and Engineering,
Mattenstrasse 26, 4058 Basel, Switzerland}
\email{[email protected]}
{\let\thefootnote\relax\footnote{${}^\dagger$~Michael D.~Multerer was born as Michael D.~Peters}}
\begin{abstract}
The multiplication of matrices is an important arithmetic operation in
computational mathematics. In the context of hierarchical matrices,
this operation can be realized by the multiplication
of structured block-wise low-rank matrices, resulting in an almost linear cost.
However, the computational efficiency of the algorithm is based on a
recursive scheme which makes the error analysis quite involved. In this
article, we propose a new algorithmic framework for the multiplication
of hierarchical matrices. It improves currently known implementations
by reducing the multiplication of hierarchical matrices to the computation of suitable
low-rank approximations of sums of matrix products. We propose several
compression schemes to address this task. As a consequence, we are able to
compute the best-approximation of hierarchical matrix products. A
cost analysis shows that, under reasonable assumptions on the
low-rank approximation method, the cost of the framework is
almost linear with respect to the size of the matrix. Numerical
experiments show that the new approach indeed produces the
best approximation of the product of hierarchical matrices for
a given tolerance. They also show that the new multiplication
can accomplish this task in less computation time than
the established multiplication algorithm without error control.
\end{abstract}
\maketitle
\section{Introduction}
Hierarchical matrices, $\mathcal{H}$-matrices for short,
historically originate from the discretization
of boundary integral operators.
They allow for a data sparse approximation in terms of a block-wise
low-rank matrix.
As first shown in \cite{Hack1}, the major advantage of the $\mathcal{H}$-matrix
representation over other data sparse formats for non-local operators is that
basic operations, like addition, multiplication and inversion, can be performed
with nearly linear cost.
This fact enormously stimulated the research on
$\mathcal{H}$-matrices, see e.g.\ \cite{buchbeb,BH,GH03,GHK,HK00,Hac15}
and the references therein, and related hierarchical matrix formats like
HSS, see \cite{HSS2,HSS3,HSS1}, HODLR, see \cite{HODLR1,HODLR2}, and
$\mathcal{H}^2$-matrices, cf.\ \cite{boerm,HKS}.
The applications for $\mathcal{H}$-matrices
are manifold: They
have been used for solving large scale algebraic matrix {R}iccati
equations, cf.\ \cite{riccati}, for solving Lyapunov equations, cf.\ \cite{baur},
for preconditioning, cf.\ \cite{precond1,precond2} and for the second moment
analysis of partial differential equations with random data, cf.\ \cite{DHP15,DHP17},
just to name a few.
In this context, the matrix-matrix multiplication of
\(\mathcal{H}\)-matrices is an essential operation.
Based on the hierarchical block structure, the matrix-matrix multiplication is performed
in a recursive fashion. To that end, in each recursive call, the two factors are considered as
block matrices and multiplied block-wise. The resulting matrix block
is then again compressed to the $\mathcal{H}$-matrix format. To limit the computational cost,
the block-wise ranks for the compression are usually bounded a priori
by a user-defined threshold.
For this thresholding procedure, no a-priori error estimates exist.
This fact and the recursive structure of the matrix-matrix multiplication render the error analysis
difficult, in particular, since there is no guarantee that intermediate results
provide the necessary low-rank structure.
To reduce the number of these time-consuming and error-introducing truncation
steps, different modifications have been proposed in the literature:
In \cite{Boe17}, instead of applying each low-rank update immediately to an
$\mathcal{H}$-matrix, multiple updates are accumulated in an auxiliary low-rank
matrix. This auxiliary matrix is propagated as the algorithm traverses the hierarchical
structure underlying the $\mathcal{H}$-matrix.
This greatly improves computation times, although the computational cost is not improved.
Still, also in this approach, multiple truncation steps are performed.
Thus, it does not
lead to the best approximation of the low-rank block under consideration.
As an alternative, in \cite{DHP15}, it has been proposed to directly
compute the low-rank approximation of the output matrix block by using
the truncated singular value decomposition, realized by means of a Krylov
subspace method. This requires only the implementation of matrix-vector
multiplications and is hence easy to realize. In particular, it yields
the best approximation of the low-rank blocks to be computed. But
contrary to expectations, it does not increase efficiency since the
eigensolver converges very slowly in case of a clustering of the
eigenvalues. Therefore, computing times have not been satisfactory.
In the present article, we pick up the idea from \cite{DHP15} and provide
an algorithm that facilitates the direct computation of any matrix block
in the product of two \(\mathcal{H}\)-matrices. This algorithm is based on
a sophisticated bookkeeping technique in combination with
a compression based on basic matrix-vector products.
This new algorithm will naturally lead to the
best approximation of the \(\mathcal{H}\)-matrix product
within the \(\mathcal{H}\)-matrix format. In particular, we cover the
cases of an optimal fixed rank truncation and of an optimal adaptively chosen rank
based on a prescribed accuracy.
Our numerical experiments show that both the fixed-rank and the
adaptive versions of the employed low-rank techniques are
significantly more efficient than the traditional arithmetic with fixed rank.
In particular, the numerical experiments also validate that the desired
error tolerance can indeed be reached when using the adaptive algorithms.
For the actual compression of a given matrix block, any compression technique
based on matrix-vector products is feasible. By way of example, we consider here the
adaptive cross approximation, see \cite{Beb,GTZ},
the Golub-Kahan-Lanczos bidiagonalization procedure,
see \cite{golub1965},
and the randomized range approximation, cf.\ \cite{HMT11}.
We will employ these methods to either compute approximations with fixed ranks
or with adaptively chosen ranks.
We remark that, in the fixed rank case,
a similar algorithm based on randomized range approximation
has successfully been applied to directly compute the approximation
of the product of two HSS or HODLR matrices in \cite{martinsson11, martinsson16}.
The rest of this article is structured as follows. In Section
\ref{sec:pre}, we briefly recall the construction and structure
of $\mathcal{H}$-matrices together with their matrix-vector
multiplication. Section \ref{sec:Hmult} is dedicated to the
new framework for the matrix-matrix multiplication. The three
example algorithms for the efficient low-rank approximation are then the topic of
Section~\ref{sec:approx}. Section~\ref{sec:cost} is concerned
with the analysis of the computational cost of the new multiplications, which shows
that the new matrix-matrix multiplication has asymptotically the same
computational cost as the
standard matrix-matrix multiplication, i.e., almost linear in
the number of degrees of freedom. Nonetheless, as the
numerical results in Section~\ref{sec:results} show,
the constants in the estimates are significantly lower for
the new approach, resulting in a remarkable speed-up.
Finally, concluding remarks are stated in Section
\ref{sec:conclusion}.
\section{Preliminaries}\label{sec:pre}
The pivotal idea of hierarchical matrices is to introduce a tree structure
on the Cartesian product $\mathcal{I}\times\mathcal{I}$, where \(\mathcal{I}\) is a
suitable index set.
The tree structure is then used
to identify sub-blocks of the matrix which are suitable for low-rank representation.
We follow the monograph \cite[Chapter 5.3 and A.2]{Hac15} and first
recall a suitable definition of a tree.
\begin{definition}\label{def:prel.tree}
Let $V$ be a non-empty finite set, call it \emph{vertex set}, and let $\operatorname{child}$
be a mapping from $V$ into the power set $\mathcal{P}(V)$, i.e., $\operatorname{child}\colon
V\to\mathcal{P}(V)$. For any $v\in V$, an
element $v'$ in $\operatorname{child}(v)$ is called \emph{child}, whereas we call $v$ the
\emph{parent} of $v'$.
We call the structure $T(V,\operatorname{child})$ a \emph{tree}, if the following properties
hold.
\begin{enumerate}
\item There is exactly one element $r\in V$ which is not a child of a vertex, i.e.,
\[
\bigcup_{v\in V}\operatorname{child}(v)=V\setminus\{r\}.
\]
We call this vertex the \emph{root} of the tree.
\item All $v\in V$ are \emph{successors} of $r$, i.e., there is a
$k\in\mathbb{N}_0$, such that $v\in\operatorname{child}^k(r)$. We define $\operatorname{child}^k(v)$
recursively as
\[
\operatorname{child}^0(v)=\{v\}\quad\text{and}\quad\operatorname{child}^k(v)=\bigcup_{v'\in\operatorname{child}^{k-1}(v)}\operatorname{child}(v').
\]
\item Any $v\in V\setminus\{r\}$ has exactly one parent.
\end{enumerate}
Moreover, we say that the number $k$ is the \emph{level} of $v$. The
\emph{depth} of a tree is the maximum of the levels of its vertices.
We define the set of \emph{leaves} of $T$ as
\[
\mathcal{L}(T)=\{v\in V\colon \operatorname{child}(v)=\emptyset\}.
\]
\end{definition}
We remark that for any $v\in T$, there is exactly one path from $r$ to $v$, see
\cite[Remark A.6]{Hac15}.
\begin{definition}\label{def:prel.clusterTree}
Let $\mathcal{I}$ be a finite index set. The \emph{cluster tree}
$\mathcal{T}_\mathcal{I}$ is a tree with the following properties.
\begin{enumerate}
\item $\mathcal{I}\in\mathcal{T}_\mathcal{I}$ is the root of the tree
$\mathcal{T}_\mathcal{I}$,
\item for all non-leaf elements $\tau\in\mathcal{T}_\mathcal{I}$ it
holds
\[
\dot{\bigcup_{\sigma\in\operatorname{child}(\tau)}}\sigma =\tau,
\]
i.e., all non-leaf clusters are the disjoint union of their children,
\item all $\tau\in\mathcal{T}_\mathcal{I}$ are non-empty.
\end{enumerate}
The vertices of the cluster tree are referred to as \emph{clusters}.
\end{definition}
To achieve almost linear cost for the following algorithms, we
shall assume that the depth of the cluster tree is bounded by
$\mathcal{O}(\log(\#\mathcal{I}))$ and that the cardinality of the leaf clusters
is bounded by $n_{\min}$. Various ways to construct a cluster tree
fulfilling this requirement along with different kinds of other properties
exist, see \cite{Hac15} and the references therein.
Obviously, by applying the second requirement of the definition recursively, it
holds $\tau\subset\mathcal{I}$ for all $\tau\in\mathcal{T}_\mathcal{I}$. Consequently,
the leaves of the cluster tree form a partition of $\mathcal{I}$.
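For illustration, the following Python sketch constructs a cluster tree by cardinality-balanced bisection of a contiguous index range; this is only one of many possible constructions, and the names \texttt{ClusterNode} and \texttt{build\_cluster\_tree} are illustrative rather than taken from an existing library. Halving the index set on every level yields a depth of order $\log_2(\#\mathcal{I})$ and leaf clusters of cardinality at most $n_{\min}$.
\begin{verbatim}
# Minimal sketch: cluster tree by cardinality-balanced bisection of [lo, hi).
class ClusterNode:
    def __init__(self, lo, hi, children=()):
        self.lo, self.hi = lo, hi        # cluster = indices lo, ..., hi - 1
        self.children = list(children)   # empty list for leaf clusters

def build_cluster_tree(lo, hi, n_min=32):
    """Split the index range recursively until at most n_min indices remain."""
    if hi - lo <= n_min:
        return ClusterNode(lo, hi)                       # leaf cluster
    mid = (lo + hi) // 2                                 # balanced bisection
    return ClusterNode(lo, hi, [build_cluster_tree(lo, mid, n_min),
                                build_cluster_tree(mid, hi, n_min)])

root = build_cluster_tree(0, 10**4)   # depth grows like log2(#I / n_min)
\end{verbatim}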
\begin{definition}\label{def:admissibility}
An \emph{admissibility condition} for $\mathcal{I}$ is a mapping
\[
\operatorname{adm}\colon\mathcal{P}(\mathcal{I})\times\mathcal{P}(\mathcal{I})\to\{\mathrm{true},\mathrm{false}\}
\]
which is symmetric, i.e., for
$\tau\times\sigma\in\mathcal{P}(\mathcal{I})\times\mathcal{P}(\mathcal{I})$
it holds
\[
\operatorname{adm}(\tau,\sigma)=\operatorname{adm}(\sigma,\tau),
\]
and monotone, i.e., if $\operatorname{adm}(\tau,\sigma)=\mathrm{true}$, it holds
\[
\operatorname{adm}(\tau',\sigma')=\mathrm{true},\quad\text{for all}~\tau'\in\operatorname{child}(\tau),\sigma'\in\operatorname{child}(\sigma).
\]
\end{definition}
Different kinds of admissibility exist, see \cite{Hac15} for a thorough
discussion and examples. Based on the admissibility condition and the
cluster tree, the block-cluster tree forms a partition of the index set
$\mathcal{I}\times\mathcal{I}$.
\begin{algorithm}
\caption{Construction of the block-cluster tree \(\mathcal{B}\), cf.~\cite[Definition 5.26]{Hac15}}
\label{alg:prel.buildFuN}
\begin{algorithmic}
\Function{BuildBlockClusterTree}{block-cluster $b=\tau\times\sigma$}
\If {$\operatorname{adm}(\tau,\sigma)=\mathrm{true}$}
\State $\operatorname{child}(b)\mathrel{\mathrel{\mathop:}=}\emptyset$
\Else
\If {$\tau$ and $\sigma$ have children}
\State $\operatorname{child}(b)\mathrel{\mathrel{\mathop:}=}
\{\tau'\times\sigma'\colon\tau'\in\operatorname{child}(\tau),\sigma'\in\operatorname{child}(\sigma)\}$
\For {$b'\in\operatorname{child}(b)$}
\State \Call{BuildBlockClusterTree}{$b'$}
\EndFor
\Else
\State $\operatorname{child}(b)\mathrel{\mathrel{\mathop:}=}\emptyset$
\EndIf
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{definition}
Given a cluster-tree $\mathcal{T}_{\mathcal{I}}$, the tree structure
$\mathcal{B}$ constructed by Algorithm~\ref{alg:prel.buildFuN} invoked
with $b=\mathcal{I}\times\mathcal{I}$ is referred to as
\emph{block-cluster tree}.
\end{definition}
For notational purposes, we write $p=\operatorname{depth}(\mathcal{B})$ and
refer to $\mathcal{N}$ as the set of
inadmissible leaves of $\mathcal{B}$ and call it the \emph{nearfield}.
In a similar fashion, we will refer to $\mathcal{F}$ as the set of
admissible leaves of $\mathcal{B}$ and call it the \emph{farfield}.
We remark that the depth of the block-cluster tree is bounded by the
depth of the cluster tree and that our definition of the
block-cluster tree coincides with the notion of a \emph{level-conserving
block-cluster tree} from the literature.
\begin{definition}
For a block-cluster $b=\tau\times\sigma$ and $k\leq\min\{\#\tau,\#\sigma\}$,
we define the set of \emph{low-rank matrices} as
\[
\mathcal{R}(\tau\times\sigma,k)=\big\{\mathbf{M}\in\mathbb{R}^{\tau\times\sigma}\colon\operatorname{rank}(\mathbf{M})\leq k\big\},
\]
where all elements $\mathbf{M}\in\mathcal{R}(\tau\times\sigma,k)$ are stored in
low-rank representation, i.e.,
\[
\mathbf{M}=\mathbf{L}_{\mathbf{M}}\mathbf{R}_{\mathbf{M}}^{\intercal}
\]
for matrices $\mathbf{L}_{\mathbf{M}}\in\mathbb{R}^{\tau\times k}$ and
$\mathbf{R}_{\mathbf{M}}\in\mathbb{R}^{\sigma\times k}$.
\end{definition}
Obviously, a matrix in $\mathcal{R}(\tau\times\sigma,k)$ requires
$k(\#\tau+\#\sigma)$ units of storage instead of \mbox{$\#\tau\cdot\#\sigma$},
which
results in a significant storage improvement if $k\ll\min\{\#\tau,\#\sigma\}$.
The same consideration holds true for the matrix-vector multiplication.
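For instance, taking the purely illustrative values $\#\tau=\#\sigma=10^4$ and $k=20$, the factorized representation requires $20\cdot(10^4+10^4)=4\cdot 10^5$ units of storage instead of $10^8$, a reduction by a factor of $250$; likewise, a matrix-vector product then costs roughly $2k(\#\tau+\#\sigma)$ instead of $2\,\#\tau\cdot\#\sigma$ operations.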
With the definition of the block-cluster tree at hand, we are
in the position to introduce hierarchical matrices.
\begin{definition}
Given a block-cluster tree $\mathcal{B}$, the set of \emph{hierarchical
matrices}, in short \emph{$\mathcal{H}$-matrices}, of maximal block-rank
$k$ is given by
\[
\mathcal{H}(\mathcal{B},k):=\Big\{{\bf H}\in\mathbb{R}^{\#\mathcal{I}\times\#\mathcal{I}}:
\mathbf{H}|_{\tau\times\sigma}\in\mathcal{R}(\tau\times\sigma,k)\text{ for all }\tau\times\sigma\in\mathcal{F}\Big\}.
\]
A tree structure is induced on each element of this set by the tree
structure of the block-cluster tree. Note that all nearfield blocks
${\bf H}|_{\tau\times\sigma}$, $\tau\times\sigma\in\mathcal{N}$, are
allowed to be dense matrices.
\end{definition}
The tree structure of the block-cluster tree provides the following useful
recursive block matrix structure on $\mathcal{H}$-matrices. Every
matrix block $\mathbf{H}|_{\tau\times\sigma}$, corresponding to a non-leaf
block-cluster $\tau\times\sigma$, has the structure
\begin{align}\label{eq:blockHMat}
\mathbf{H}\big|_{\tau\times\sigma}=
\begin{bmatrix}
\mathbf{H}\big|_{\operatorname{child}(\tau)_1\times\operatorname{child}(\sigma)_1}&\ldots &\mathbf{H}\big|_{\operatorname{child}(\tau)_1\times\operatorname{child}(\sigma)_{\#\operatorname{child}(\sigma)}}\\
\vdots&&\vdots\\
\mathbf{H}\big|_{\operatorname{child}(\tau)_{\#\operatorname{child}(\tau)}\times\operatorname{child}(\sigma)_1}&\ldots&\mathbf{H}\big|_{\operatorname{child}(\tau)_{\#\operatorname{child}(\tau)}\times\operatorname{child}(\sigma)_{\#\operatorname{child}(\sigma)}}
\end{bmatrix}.
\end{align}
If the matrix block $\mathbf{H}|_{\tau'\times\sigma'}$, $\tau'\in\operatorname{child}(\tau)$,
$\sigma'\in\operatorname{child}(\sigma)$, is a leaf of $\mathcal{B}$, the matrix block is
either a low-rank matrix, if $\tau'\times\sigma'\in\mathcal{F}$, or a dense
matrix, if $\tau'\times\sigma'\in\mathcal{N}$. If the matrix block is not
a leaf of $\mathcal{B}$, it exhibits again a similar block structure as
$\mathbf{H}|_{\tau\times\sigma}$. The required ordering of the clusters relies
on the order of the indices in the clusters. A possible block structure of an
$\mathcal{H}$-matrix is illustrated in Figure~\ref{fig:hmatrix}. Of course, the
structure may vary depending on the used cluster tree and admissibility
condition.
\begin{figure}
\caption{A possible block structure of an $\mathcal{H}$-matrix.}
\label{fig:hmatrix}
\end{figure}
Having the block structure \eqref{eq:blockHMat} available, an algorithm for
the matrix-vector multiplication, as listed in Algorithm~\ref{alg:prel.hVmult},
can easily be derived.
\begin{algorithm}
\caption{\label{alg:prel.hVmult}$\mathcal{H}$-matrix-vector multiplication $\mathbf{y}{\text{\tt+=}}\mathbf{H}\mathbf{x}$,
see \cite[Equation (7.1)]{Hac15}}
\begin{algorithmic}
\Function{$\mathcal{H}$timesV}{$\mathbf{y}|_{\tau}$, $\mathbf{H}|_{\tau\times\sigma}$, $\mathbf{x}|_{\sigma}$}
\If{$\tau\times\sigma\notin\mathcal{L}(\mathcal{B})$}
\For{$\tau'\times\sigma'\in\operatorname{child}(\tau\times\sigma)$}
\State\Call{$\mathcal{H}$timesV}{$\mathbf{y}|_{\tau'}$, $\mathbf{H}|_{\tau'\times\sigma'}$, $\mathbf{x}|_{\sigma'}$}
\EndFor
\Else
\State $\mathbf{y}|_{\tau}{\text{\tt+=}}\mathbf{H}|_{\tau\times\sigma}\mathbf{x}|_{\sigma}$
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
Note that the matrix-vector multiplication for the leaf block-clusters involves either
a dense matrix or a low-rank matrix. In accordance with \cite[Lemma 7.17]{Hac15},
the matrix-vector multiplication of an $\mathcal{H}$-matrix block
$\mathbf{H}|_{\tau\times\sigma}$ with $\mathbf{H}\in\mathcal{H}(\mathcal{B},k)$ can
be accomplished in at most
\[
N_{\mathcal{H}\cdot v}(\tau\times\sigma,k)\leq 2C_{\operatorname{sp}} \max\{k,n_{\min}\}\Big(\big(\operatorname{depth}(\tau)+1\big)\#\tau+\big(\operatorname{depth}(\sigma)+1\big)\#\sigma\Big)
\]
operations, where the constant $C_{\operatorname{sp}}=C_{\operatorname{sp}}(\mathcal{B})$ is given as
\[
C_{\operatorname{sp}}(\mathcal{B})\mathrel{\mathrel{\mathop:}=}
\max_{\tau\in\mathcal{T}_{\mathcal{I}}}\#\big\{\sigma\in\mathcal{T}_{\mathcal{I}}\colon \tau\times\sigma\in\mathcal{B}\big\}.
\]
Given a cluster $\tau\in\mathcal{T}_{\mathcal{I}}$, the quantity $C_{\operatorname{sp}}$ is an upper bound on the
number of corresponding block-clusters $\tau\times\sigma\in\mathcal{B}$. Thus, it is also an
upper bound for the number of corresponding matrix blocks $\mathbf{H}|_{\tau\times\sigma}$
in the tree structure of an $\mathcal{H}$-matrix corresponding to $\mathcal{B}$.
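To make the recursion of Algorithm~\ref{alg:prel.hVmult} concrete, the following Python sketch performs the block-recursive matrix-vector product. The container \texttt{HBlock} and its attributes are illustrative assumptions and not part of an existing library; row and column index sets are assumed to be contiguous slices so that the NumPy views write through to the full vectors.
\begin{verbatim}
import numpy as np

class HBlock:
    """Illustrative H-matrix block: subdivided, dense, or low-rank L @ R.T."""
    def __init__(self, children=None, dense=None, L=None, R=None):
        self.children = children   # list of (row_slice, col_slice, HBlock) or None
        self.dense, self.L, self.R = dense, L, R

def hmatvec(block, x, y):
    """In-place update y += H|_{tau x sigma} @ x (x, y are views of full vectors)."""
    if block.children is not None:                 # non-leaf block: recurse
        for rows, cols, child in block.children:
            hmatvec(child, x[cols], y[rows])       # slices yield writable views
    elif block.dense is not None:                  # nearfield leaf: dense product
        y += block.dense @ x
    else:                                          # farfield leaf: cost ~ k(#tau+#sigma)
        y += block.L @ (block.R.T @ x)

# Example: a single low-rank leaf of rank 2 acting on the whole index set.
leaf = HBlock(L=np.ones((6, 2)), R=np.ones((5, 2)))
x, y = np.ones(5), np.zeros(6)
hmatvec(leaf, x, y)                                # y == 10 * ones(6)
\end{verbatim}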
\section{The multiplication of $\mathcal{H}$-matrices}
\label{sec:Hmult}
Instead of restating the $\mathcal{H}$-matrix multiplication in its original
form from \cite{Hack1}, we directly introduce our new framework. The connection
between the new framework and the traditional multiplication will be discussed
later in this section.
We start by introducing the
following \emph{\texttt{sum}-expressions}, which will simplify the presentation
and the analysis of the new algorithm.
\begin{definition}\label{def:sumexpr}
Let $\tau\times\sigma\in\mathcal{B}$.
For a finite index set $\mathcal{J}_{\mathcal{R}}$, the expression
\[
\mathcal{S}_\mathcal{R}(\tau,\sigma)=\sum_{j\in\mathcal{J}_{\mathcal{R}}}\mathbf{A}_j\mathbf{B}_j^{\intercal},
\]
is called a \texttt{sum}-\emph{expression of low-rank matrices}, if it is represented
and stored as a set of factorized low-rank matrices
\[
\big\{ \mathbf{A}_j\mathbf{B}_j^{\intercal}\in\mathcal{R}(\tau\times\sigma,k_j)\colon
j\in\mathcal{J}_{\mathcal{R}}\}.
\]
Similarly, for a finite index set $\mathcal{J}_{\mathcal{H}}$, the expression
\[
\mathcal{S}_\mathcal{H}(\tau,\sigma)=\sum_{j\in\mathcal{J}_{\mathcal{H}}}\mathbf{H}_j\mathbf{K}_j
\]
is called a \texttt{sum}-\emph{expression of $\mathcal{H}$-matrices}, if it is
represented and stored as a set of pairs of $\mathcal{H}$-matrix blocks
\[
\big\{\big(\mathbf{H}_j,\mathbf{K}_j\big)=\big(\mathbf{H}|_{\tau\times\rho_j},\mathbf{K}|_{\rho_j\times\sigma}\big)\colon\tau\times\rho_j,\rho_j\times\sigma\in\mathcal{B},j\in\mathcal{J}_{\mathcal{H}}\big\},
\]
with $\mathbf{H},\mathbf{K}\in\mathcal{H}(\mathcal{B},k)$ and
$\mathbf{H}|_{\tau\times\rho_j}$, $\mathbf{K}|_{\rho_j\times\sigma}$, $j\in\mathcal{J}_\mathcal{H}$,
being either dense matrices or providing the block-matrix structure
\eqref{eq:blockHMat}.
The expression
\[
\mathcal{S}(\tau,\sigma)=\mathcal{S}_\mathcal{R}(\tau,\sigma)+\mathcal{S}_\mathcal{H}(\tau,\sigma)
\]
is called a \texttt{sum}-\emph{expression} and a combination of the two
previously introduced expressions. In particular, we
require that $\mathcal{S}_\mathcal{R}$ is stored as a \texttt{sum}-expression of low-rank matrices
and $\mathcal{S}_\mathcal{H}$ is stored as a \texttt{sum}-expression of $\mathcal{H}$-matrices.
\end{definition}
\begin{figure}
\caption{Examples of \texttt{sum}-expressions.}
\label{fig:sumexpr}
\end{figure}
Examples of \texttt{sum}-expressions
are illustrated in Figure~\ref{fig:sumexpr}. A \texttt{sum}-expression may be considered as a kind
of queuing system to store the sum of low-rank matrices and/or
$\mathcal{H}$-matrix products for subsequent operations.
We remark that the sum of two \texttt{sum}-expressions is again a \texttt{sum}-expression and
shall now make use of this fact to devise an algorithm for the multiplication
of $\mathcal{H}$-matrices. For simplicity, we assume that all
involved $\mathcal{H}$-matrices are built upon the same block-cluster tree
$\mathcal{B}$.
\subsection{Relation between $\mathcal{H}$-matrix products and \texttt{sum}-expressions}
We start with two $\mathcal{H}$-matrices $\mathbf{H},\mathbf{K}\in\mathcal{H}
(\mathcal{B},k)$ and want to represent their product
$\mathbf{L}\mathrel{\mathrel{\mathop:}=}\mathbf{H}\mathbf{K}$ in $\mathcal{H}(\mathcal{B},k)$.
To that end, we rewrite the $\mathcal{H}$-matrix product as a
\texttt{sum}-expression
\[
\mathbf{L}
=
\mathbf{H}\mathbf{K}
\mathrel{=\mathrel{\mathop:}}
\mathcal{S}_\mathcal{H}(\mathcal{I},\mathcal{I})
\mathrel{=\mathrel{\mathop:}}
\mathcal{S}(\mathcal{I},\mathcal{I}).
\]
The task is now to find a suitable low-rank approximation to
$\mathbf{L}|_{\tau\times\sigma}$ in $\mathcal{R}(\tau\times\sigma, k)$
for all admissible leaves $\tau\times\sigma$ of $\mathcal{B}$.
If $\tau\times\sigma$ is a
child of the root, i.e.,
$\tau\times\sigma\in\operatorname{child}(\mathcal{I}\times\mathcal{I})$, we have that
\begin{align*}
\mathbf{L}|_{\tau\times\sigma}={}&
\mathcal{S}_\mathcal{H}(\mathcal{I},\mathcal{I})|_{\tau\times\sigma}\\
={}&
\sum_{\rho\in\operatorname{child}(\mathcal{I})}
\mathbf{H}|_{\tau\times\rho}
\mathbf{K}|_{\rho\times\sigma}\\
={}&\sum_{\substack{
\rho\in\operatorname{child}(\mathcal{I})\colon\\
\tau\times\rho\in\mathcal{F}\\
\text{or}\\
\rho\times\sigma\in\mathcal{F}
}}
\mathbf{H}|_{\tau\times\rho}\mathbf{K}|_{\rho\times\sigma}+
\sum_{\substack{
\rho\in\operatorname{child}(\mathcal{I})\colon\\
\tau\times\rho\in\mathcal{B\setminus F}\\
\rho\times\sigma\in\mathcal{B\setminus F}
}}
\mathbf{H}|_{\tau\times\rho}\mathbf{K}|_{\rho\times\sigma},
\end{align*}
due to the block-matrix structure \eqref{eq:blockHMat} of $\mathbf{H}$ and
$\mathbf{K}$, see also Figure~\ref{fig:Hmatprod} for an illustration.
The pivotal idea is now that $\mathbf{L}|_{\tau\times\sigma}$ can be
written as a \texttt{sum}-expression itself, for which we treat the two remaining sums as follows:
\begin{itemize}
\item
The products in the first sum involve at least one low-rank matrix, such that
the product in low-rank representation can easily be computed using
matrix-vector multiplications. Having
these low-rank matrices computed, we can store the first sum as a
\texttt{sum}-expression of low-rank matrices $\mathcal{S}_\mathcal{R}(\tau,\sigma)$.
\item
Both factors of the products in the second sum correspond to inadmissible
block-clusters. Thus, they are either dense matrices or $\mathcal{H}$-matrices.
Since a dense matrix is just a special case of an $\mathcal{H}$-matrix, the
second sum can be written as a \texttt{sum}-expression of $\mathcal{H}$-matrices
$\mathcal{S}_\mathcal{H}(\tau,\sigma)$.
\end{itemize}
It follows that $\mathbf{L}|_{\tau\times\sigma}$ can be represented as a
\texttt{sum}-expression by setting
\[
\mathbf{L}|_{\tau\times\sigma}=\mathcal{S}_\mathcal{R}(\tau,\sigma)+\mathcal{S}_\mathcal{H}(\tau,\sigma)\mathrel{=\mathrel{\mathop:}}\mathcal{S}(\tau,\sigma),
\]
see also Figure~\ref{fig:Hmatprod} for an illustration.
\begin{figure}
\caption{Illustration of the representation of a matrix block of the product $\mathbf{L}=\mathbf{H}\mathbf{K}$ as a \texttt{sum}-expression.}
\label{fig:Hmatprod}
\end{figure}
We can thus represent all children of the root of the block-cluster tree
by \texttt{sum}-expressions. However, we need to represent all leaves
of the block-cluster tree as \texttt{sum}-expressions.
It thus remains to discuss how to represent matrix blocks
$\mathbf{L}|_{\tau\times\sigma}$ when $\tau\times\sigma$ is not a
child of the root.
\begin{remark}
The representation of any $\mathbf{L}|_{\tau\times\sigma}$ in terms
of \texttt{sum}-expressions is not unique. For example, assuming that
$\tau\times\sigma$ is on level $j$, one may refine the block-matrix
structure of $\mathbf{H}$ and $\mathbf{K}$ by modifying the corresponding
admissibility condition. When the admissibility condition is set to
$\mathrm{false}$ on all levels smaller or equal to $j$, one can construct a finer
partitioning for $\mathbf{H}$ and $\mathbf{K}$ to which one can apply the
same strategy as above to obtain a \texttt{sum}-expression for
$\mathbf{L}|_{\tau\times\sigma}$. In particular, the conversions to the
finer partitioning can be achieved without introducing any additional
errors.
However, we will show in Section~\ref{sec:cost} that we require the
following more sophisticated strategy to obtain an $\mathcal{H}$-matrix
multiplication in almost linear complexity.
\end{remark}
\subsection{Restrictions of \texttt{sum}-expressions}
The main difference between the \texttt{sum}-expressions for $\mathbf{L}$ and
its restriction $\mathbf{L}|_{\tau\times\sigma}$ from the previous subsection
is the presence of
$\mathcal{S}_\mathcal{R}(\tau,\sigma)$.
Given a block-cluster $\tau'\times\sigma'\in\operatorname{child}(\tau\times\sigma)$,
it then holds
\begin{align*}
\mathbf{L}|_{\tau'\times\sigma'}={}&
\big(\mathbf{L}|_{\tau\times\sigma}\big)|_{\tau'\times\sigma'}\\
={}&
\mathcal{S}(\tau,\sigma)|_{\tau'\times\sigma'}\\
={}&
\mathcal{S}_\mathcal{R}(\tau,\sigma)|_{\tau'\times\sigma'}+
\mathcal{S}_\mathcal{H}(\tau,\sigma)|_{\tau'\times\sigma'},
\end{align*}
where $\mathcal{S}_\mathcal{H}(\tau,\sigma)|_{\tau'\times\sigma'}$ can be rewritten as
\begin{align*}
\mathcal{S}_\mathcal{H}(\tau,\sigma)|_{\tau'\times\sigma'}
={}&
\sum_{\substack{
\rho\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau\times\rho\in\mathcal{B\setminus F}\\
\rho\times\sigma\in\mathcal{B\setminus F}
}}
\big(\mathbf{H}|_{\tau\times\rho}
\mathbf{K}|_{\rho\times\sigma}\big)\big|
_{\tau'\times\sigma'}\\
={}&
\sum_{\substack{
\rho\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau\times\rho\in\mathcal{B}\setminus\mathcal{F}\\
\rho\times\sigma\in\mathcal{B}\setminus\mathcal{F}
}}
\sum_{\rho'\in\operatorname{child}(\rho)}
\mathbf{H}|_{\tau'\times\rho'}\mathbf{K}|_{\rho'\times\sigma'}\\
={}&
\sum_{\substack{
\rho'\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau'\times\rho'\in\mathcal{B}\\
\rho'\times\sigma'\in\mathcal{B}
}}
\mathbf{H}|_{\tau'\times\rho'}\mathbf{K}|_{\rho'\times\sigma'}.
\end{align*}
Each of the products in the last sum can be treated in the same manner as for the root. Thus,
$\mathcal{S}_\mathcal{H}(\tau,\sigma)|_{\tau'\times\sigma'}$ can be represented as
a \texttt{sum}-expression
\[
\mathcal{S}_\mathcal{H}(\tau,\sigma)|_{\tau'\times\sigma'}=\mathcal{S}(\tau',\sigma')=\mathcal{S}_\mathcal{R}(\tau',\sigma')+\mathcal{S}_\mathcal{H}(\tau',\sigma'),
\]
where $\mathcal{S}_\mathcal{R}(\tau',\sigma')$ and $\mathcal{S}_\mathcal{H}(\tau',\sigma')$ may be both non-empty.
The restriction of $\mathcal{S}_\mathcal{R}(\tau,\sigma)$ to $\tau'\times\sigma'$ can
be accomplished by the restriction of the corresponding low-rank matrices.
In actual implementations, the restriction of the low-rank matrices can be
realized by index-shifts, and thus without further arithmetic operations.
Since the sum of
two \texttt{sum}-expressions is again a \texttt{sum}-expression, we have shown that
each matrix block $\mathbf{L}|_{\tau\times\sigma}$ can be represented as a
\texttt{sum}-expression.
A recursive algorithm for their construction is listed
in Algorithm~\ref{alg:dosumexpr}. When the algorithm is initialized with
$\mathcal{S}(\mathcal{I},\mathcal{I})=\mathcal{S}_\mathcal{H}(\mathcal{I},\mathcal{I})
=\mathbf{HK}$ and is applied recursively to all elements of
$\mathcal{B}$, it creates a \texttt{sum}-expression for each block-cluster in
$\mathcal{B}$, in particular for all leaves of the farfield and the nearfield.
\begin{algorithm}
\caption{\label{alg:dosumexpr}Given $\mathcal{S}(\tau,\sigma)$, construct $\mathcal{S}(\tau',\sigma')$ for $\tau'\times\sigma'\in\operatorname{child}(\tau\times\sigma)$.}
\begin{algorithmic}
\Function{$\mathcal{S}(\tau',\sigma')=$\,restrict}{$\mathcal{S}(\tau,\sigma)$, $\tau'\times\sigma'$}
\State{Set $\mathcal{S}_\mathcal{R}(\tau',\sigma')=\sum_i \big(\mathbf{A}_i\mathbf{B}_i^{\intercal}\big)\big|_{\tau'\times\sigma'}$,
given $\mathcal{S}_\mathcal{R}(\tau,\sigma)=\sum_i \mathbf{A}_i\mathbf{B}_i^{\intercal}$}
\State{Set $\mathcal{S}_\mathcal{H}(\tau',\sigma')$ as empty}
\For{$\mathbf{H}|_{\tau\times\rho}
\mathbf{K}|_{\rho\times\sigma}\in\mathcal{S}_\mathcal{H}(\tau,\sigma)$}
\For{$\rho'\in\operatorname{child}(\rho)$}
\If{$\tau'\times\rho'\in\mathcal{F}$ or $\rho'\times\sigma'\in\mathcal{F}$}
\State{Compute low-rank matrix
$\mathbf{A}\mathbf{B}^{\intercal}=\mathbf{H}|_{\tau'\times\rho'}\mathbf{K}|_{\rho'\times\sigma'}$}
\State{Set $\mathcal{S}_\mathcal{R}(\tau',\sigma')=\mathcal{S}_\mathcal{R}(\tau',\sigma')+
\mathbf{A}\mathbf{B}^{\intercal}$}
\Else
\State{Set $\mathcal{S}_\mathcal{H}(\tau',\sigma')=\mathcal{S}_\mathcal{H}(\tau',\sigma')+
\mathbf{H}|_{\tau'\times\rho'}\mathbf{K}|_{\rho'\times\sigma'}$}
\EndIf
\EndFor
\EndFor
\State{Set $\mathcal{S}(\tau',\sigma')=\mathcal{S}_\mathcal{R}(\tau',\sigma')
+\mathcal{S}_\mathcal{H}(\tau',\sigma')$}
\EndFunction
\end{algorithmic}
\end{algorithm}
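The bookkeeping of Algorithm~\ref{alg:dosumexpr} can be sketched in a few lines of Python. The representation is purely illustrative: a \texttt{sum}-expression is held as a list of low-rank factor pairs and a list of $\mathcal{H}$-matrix block pairs, and the two callbacks \texttt{split} and \texttt{lowrank\_product} are hypothetical helpers, the latter standing for any routine that forms the exact low-rank factors of a product in which at least one factor is a low-rank leaf (realized in practice by $k$ matrix-vector products with the other factor).
\begin{verbatim}
def restrict(sum_R, sum_H, rows, cols, split, lowrank_product):
    """Sketch of RESTRICT: build S(tau', sigma') from S(tau, sigma).

    sum_R           : list of low-rank terms (A, B), standing for A @ B.T
    sum_H           : list of pairs (Hc, Kc) of H-matrix blocks
    rows, cols      : slices selecting tau' within tau and sigma' within sigma
    split           : hypothetical callback yielding the sub-block pairs
                      (Hc', Kc', admissible) over the children rho' of rho,
                      restricted to tau' x rho' and rho' x sigma'
    lowrank_product : hypothetical callback returning factors (A, B) of Hc' @ Kc'
                      when at least one of the two sub-blocks is a low-rank leaf
    """
    new_R = [(A[rows], B[cols]) for (A, B) in sum_R]   # restriction = index shift
    new_H = []
    for Hc, Kc in sum_H:
        for Hcp, Kcp, admissible in split(Hc, Kc, rows, cols):
            if admissible:                     # product involves a low-rank factor
                new_R.append(lowrank_product(Hcp, Kcp))
            else:                              # both factors remain hierarchical
                new_H.append((Hcp, Kcp))
    return new_R, new_H
\end{verbatim}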
\subsection{$\mathcal{H}$-matrix multiplication using \texttt{sum}-expressions}
The algorithm from the previous section provides us, when applied
recursively, with exact
representations in terms of \texttt{sum}-expressions for each matrix block
$\mathbf{L}|_{\tau\times\sigma}$ for all block-clusters
$\tau\times\sigma\in\mathcal{B}$. In order to compute an $\mathcal{H}$-matrix
approximation of $\mathbf{L}$, we only have to convert these \texttt{sum}-expressions
to dense matrices or low-rank matrices. This leads to the $\mathcal{H}$-matrix
multiplication algorithm given in Algorithm~\ref{alg:theHmult}, which is
initialized with
$\mathcal{S}(\mathcal{I},\mathcal{I})=\mathcal{S}_\mathcal{H}(\mathcal{I},\mathcal{I})
=\mathbf{HK}$. The \textsc{evaluate}()-routine in the algorithm computes the
representation of the corresponding \texttt{sum}-expression as a full matrix,
whereas the $\mathfrak{T}()$-routine is a generic low-rank approximation or
\emph{truncation} operator.
\begin{algorithm}
\caption{\label{alg:theHmult}$\mathcal{H}$-matrix product: Compute
$\mathbf{L}|_{\tau\times\sigma}=(\mathbf{H}\mathbf{K})|_{\tau\times\sigma}$ from $\mathcal{S}(\tau,\sigma)$}
\begin{algorithmic}
\Function{$\mathbf{L}|_{\tau\times\sigma}=\mathcal{H}$mult}
{$\mathcal{S}(\tau,\sigma)$}
\If{$\tau\times\sigma$ is not a leaf}
\Comment{$\mathbf{L}|_{\tau\times\sigma}$ is an $\mathcal{H}$-matrix}
\For{$\tau'\times\sigma'\in\operatorname{child}(\tau\times\sigma)$}
\State{Set $\mathcal{S}(\tau',\sigma')=$ \Call{restrict}{$\mathcal{S}(\tau,\sigma)$, $\tau'\times\sigma'$}}
\State{$\mathbf{L}|_{\tau'\times\sigma'}=$ \Call{$\mathcal{H}$mult}{$\mathcal{S}(\tau',\sigma')$}}
\EndFor
\Else
\If{$\tau\times\sigma\in\mathcal{F}$}
\Comment{$\mathbf{L}|_{\tau\times\sigma}$ is low-rank}
\State{$\mathbf{L}|_{\tau\times\sigma}=\mathfrak{T}(\mathcal{S}(\tau,\sigma))$}
\Else
\Comment{$\mathbf{L}|_{\tau\times\sigma}$ is a dense matrix}
\State{$\mathbf{L}|_{\tau\times\sigma}=$ \Call{evaluate}{$\mathcal{S}(\tau,\sigma)$}}
\EndIf
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
The algorithm can be seen as a general framework for the multiplication of
$\mathcal{H}$-matrices, of which several special cases, realized by different
algorithmic implementations, are stated in the literature.
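For completeness, a minimal Python sketch of the recursion in Algorithm~\ref{alg:theHmult} is given below; \texttt{restrict}, \texttt{truncate} (the operator $\mathfrak{T}$) and \texttt{evaluate} are passed in as callables, and the block description \texttt{block} with its attributes is an illustrative assumption.
\begin{verbatim}
def hmult(sum_R, sum_H, block, restrict, truncate, evaluate):
    """Sketch of Hmult: turn the sum-expression of L|_{tau x sigma} into the
    target block; 'block' describes the block-cluster tau x sigma."""
    if block.children:                              # non-leaf block: recurse
        for rows, cols, child in block.children:
            child_R, child_H = restrict(sum_R, sum_H, rows, cols)
            hmult(child_R, child_H, child, restrict, truncate, evaluate)
    elif block.admissible:                          # farfield leaf: truncate to low rank
        block.L, block.R = truncate(sum_R, sum_H)
    else:                                           # nearfield leaf: evaluate densely
        block.dense = evaluate(sum_R, sum_H)
\end{verbatim}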
\subsubsection{No truncation}
In principle, the truncation operator could act as an identity. For this
implementation, it was shown in \cite{GH03} that the rank $\tilde{k}$ of low-rank
matrices in the product is bounded by
\[
\tilde{k}\leq C_{\operatorname{id}}C_{\operatorname{sp}}(p+1)k,
\]
where the constant $C_{\operatorname{id}} = C_{\operatorname{id}}(\mathcal{B})$ is given by
\begin{align*}
C_{\operatorname{id}}(\tau\times\sigma)
\mathrel{\mathrel{\mathop:}=}{}&
\#\{\tau'\times\sigma'\colon\tau'\in\operatorname{successor}(\tau),\sigma'\in\operatorname{successor}(\sigma)~\text{such that}\\
&\qquad\qquad\qquad\quad
\text{there exists}~\rho'\in\mathcal{T}_{\mathcal{I}}~\text{such that}\\
&\qquad\qquad\qquad\quad
\tau'\times\rho'\in\mathcal{B},\rho'\times\sigma'\in\mathcal{B}\},\\
C_{\operatorname{id}}(\mathcal{B})\mathrel{\mathrel{\mathop:}=}{}&\max_{\tau\times\sigma\in\mathcal{L}(\mathcal{B})}
C_{\operatorname{id}}(\tau\times\sigma).
\end{align*}
Although the rank of the product is bounded from above, the constants in the
bound might be large. Hence, one is interested in truncating the low-rank
matrices to a lower rank in the best possible way. Depending on the employed
truncation operator $\mathfrak{T}$, different implementations of the
multiplication arise.
\subsubsection{Truncation with a single low-rank SVD}
Traditionally, the used truncation operators are based on the singular value
decomposition, from which several implementations have evolved. The most
accurate implementation is given by computing the exact product in low-rank
format and truncating it to a lower rank by using a singular value
decomposition for low-rank matrices as given in Algorithm~\ref{alg:lrsvd}.
\begin{algorithm}[tbp]
\caption{\label{alg:lrsvd}SVD of a low-rank matrix
$\mathbf{L}\mathbf{R}^\intercal$, see \cite[Algorithm 2.17]{Hac15}}
\begin{algorithmic}
\Function{${\bf U\Sigma V}^{\intercal}$=LowRankSVD}{$\mathbf{L}\mathbf{R}^\intercal$}
\State ${\bf Q_LR_L}{}=$ QR-decomposition of ${\bf L}$, ${\bf Q_L}\in\mathbb{R} ^{\tau\times\tilde{k}}$, ${\bf R_L}\in\mathbb{R} ^{\tilde{k}\times\tilde{k}}$
\State ${\bf Q_RR_R}{}=$ QR-decomposition of ${\bf R}$, ${\bf Q_R}\in\mathbb{R} ^{\sigma\times\tilde{k}}$, ${\bf R_R}\in\mathbb{R} ^{\tilde{k}\times\tilde{k}}$
\State ${\bf \tilde{U}\Sigma\tilde{V}}^{\intercal}{}= \operatorname{SVD}({\bf R_LR_R}^{\intercal})$
\State ${\bf U}={\bf Q_L\tilde{U}}$
\State ${\bf V}={\bf Q_R\tilde{V}}$
\EndFunction
\end{algorithmic}
\end{algorithm}
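In Python, Algorithm~\ref{alg:lrsvd} amounts to two QR decompositions and one small SVD; the NumPy sketch below additionally appends the truncation to rank $k$ for convenience (all names are illustrative).
\begin{verbatim}
import numpy as np

def lowrank_svd(L, R):
    """SVD U diag(s) V^T of the low-rank matrix L @ R.T without forming it."""
    QL, RL = np.linalg.qr(L)                  # L = QL RL
    QR, RR = np.linalg.qr(R)                  # R = QR RR
    Ut, s, Vt = np.linalg.svd(RL @ RR.T)      # small (ktilde x ktilde) SVD
    return QL @ Ut, s, QR @ Vt.T              # U, singular values, V

def truncate_lowrank(L, R, k):
    """Best rank-k approximation of L @ R.T, returned again in factorized form."""
    U, s, V = lowrank_svd(L, R)
    return U[:, :k] * s[:k], V[:, :k]         # A, B with A @ B.T ~ L @ R.T
\end{verbatim}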
The number of operations for
the $\mathcal{H}$-matrix multiplication, assuming $n_{\min}\leq k$, is then
bounded by
\[
43C_{\operatorname{id}}^3C_{\operatorname{sp}}^3k^3(p+1)^3\max\{\#\mathcal{I},\#\mathcal{F}+\#\mathcal{N}\},
\]
see \cite{GH03}.
However, it turns out that, for more complex block-cluster trees of practical
relevance, the numerical effort for this implementation of the multiplication is
quite high.
\subsubsection{Truncation with multiple low-rank SVDs --- fast truncation}\label{sec:fasttrunc}
Therefore, one may replace the above truncation by the fast truncation
of low-rank matrices, which aims at accelerating the truncation of sums of
low-rank matrices by allowing a larger error margin. The basic idea is that in
many cases
\[
\mathbf{M}_n\mathbf{N}_n^{\intercal}=
\mathfrak{T}\bigg(\sum _{i=1}^n\mathbf{A}_i\mathbf{B}_i^{\intercal}\bigg)
\]
can be approximated sufficiently well by computing
\begin{align*}
\mathbf{M}_2\mathbf{N}_2^{\intercal}={}&\mathfrak{T}\big(\mathbf{A}_1\mathbf{B}_1^{\intercal}+
\mathbf{A}_2\mathbf{B}_2^{\intercal}\big)\\
\mathbf{M}_i\mathbf{N}_i^{\intercal}={}&\mathfrak{T}\big(\mathbf{M}_{i-1}\mathbf{N}_{i-1}^{\intercal}+
\mathbf{A}_i\mathbf{B}_i^{\intercal}\big),\qquad i=3,\ldots,n.
\end{align*}
If the fast truncation is used as a truncation operator, the number of operations
for the $\mathcal{H}$-matrix multiplication, assuming $n_{\min}\leq k$, is
bounded by
\[
56C_{\operatorname{sp}}^2\max\{C_{\operatorname{id}},C_{\operatorname{sp}}\}k^2(p+1)^2\#\mathcal{I}+184C_{\operatorname{sp}}C_{\operatorname{id}} k^3(p+1)(\#\mathcal{F}+\#\mathcal{N}),
\]
see \cite{GH03}.
Numerical experiments confirm that the $\mathcal{H}$-matrix multiplication using
the fast truncation is indeed faster, but also slightly less accurate than the
previous version of the multiplication.
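The fast truncation thus amounts to a left-to-right sweep over the summands; the NumPy sketch below concatenates the factors of two rank-$k$ terms in each step and truncates the resulting matrix of rank at most $2k$ back to rank $k$ via the low-rank SVD of Algorithm~\ref{alg:lrsvd} (all names are illustrative).
\begin{verbatim}
import numpy as np

def fast_truncate(terms, k):
    """Pairwise truncated addition of low-rank terms [(A_1, B_1), ..., (A_n, B_n)],
    approximating T(sum_i A_i B_i^T) by n - 1 truncations of rank at most 2k."""
    M, N = terms[0]
    for A, B in terms[1:]:
        L, R = np.hstack([M, A]), np.hstack([N, B])  # [M A][N B]^T = M N^T + A B^T
        QL, RL = np.linalg.qr(L)                     # low-rank SVD of L @ R.T ...
        QR, RR = np.linalg.qr(R)
        U, s, Vt = np.linalg.svd(RL @ RR.T)
        M = (QL @ U[:, :k]) * s[:k]                  # ... truncated back to rank k
        N = QR @ Vt[:k].T
    return M, N                                      # M @ N.T approximates the sum
\end{verbatim}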
\subsubsection{Truncation with accumulated updates}
Recently, in \cite{Boe17}, a new truncation operator was introduced, to which
we will refer as truncation with accumulated updates. To this end,
we slightly modify the definition of the \texttt{sum}-expression and denote the new
object by $\mathcal{S}^{a}$.
\begin{definition}
For a given block-cluster $\tau\times\sigma\in\mathcal{B}$, we say that the sum
of a low-rank matrix
$\mathbf{A}\mathbf{B}^{\intercal}\in\mathcal{R}(\tau\times\sigma,k)$ and a
\texttt{sum}-expression of $\mathcal{H}$-matrices $\mathcal{S}_\mathcal{H}(\tau,\sigma)$ is
a \texttt{sum}-\emph{expression with accumulated updates} and write
\[
\mathcal{S}^{a}(\tau,\sigma)=\mathbf{A}\mathbf{B}^{\intercal}+\mathcal{S}_\mathcal{H}(\tau,\sigma).
\]
In particular, we write
$\mathcal{S}_\mathcal{R}^{a}(\tau,\sigma)=\mathbf{A}\mathbf{B}^{\intercal}$ and, for a
second low-rank matrix
$\tilde{\mathbf{A}}\tilde{\mathbf{B}}^{\intercal}\in\mathcal{R}(\tau\times\sigma,k)$,
we define the sum of these expressions with a low-rank-matrix as
\begin{align*}
\mathcal{S}_\mathcal{R}^{a}(\tau,\sigma)+\tilde{\mathbf{A}}\tilde{\mathbf{B}}^{\intercal}={}&\mathfrak{T}\big(\mathbf{A}\mathbf{B}^{\intercal}+\tilde{\mathbf{A}}\tilde{\mathbf{B}}^{\intercal}\big)\\
\mathcal{S}^{a}(\tau,\sigma)+\tilde{\mathbf{A}}\tilde{\mathbf{B}}^{\intercal}={}&\mathfrak{T}\big(\mathbf{A}\mathbf{B}^{\intercal}+\tilde{\mathbf{A}}\tilde{\mathbf{B}}^{\intercal}\big)+\mathcal{S}_\mathcal{H}(\tau,\sigma),
\end{align*}
i.e., instead of adding the new low-rank matrix to the list of low-rank matrices
in $\mathcal{S}^{a}(\tau,\sigma)$, we perform an addition of low-rank matrices
with subsequent truncation.
\end{definition}
Obviously, every \texttt{sum}-expression with accumulated updates is also a
\texttt{sum}-expression in the sense of Definition~\ref{def:sumexpr}. The key point
is that the addition with low-rank matrices is treated differently. By replacing
the \texttt{sum}-expressions in Algorithm~\ref{alg:theHmult} by \texttt{sum}-expressions
with accumulated updates, we obtain the $\mathcal{H}$-matrix multiplication as
stated in \cite{Boe17}.
The number of operations for this algorithm is bounded by
\[
3C_{\operatorname{mm}}C_{\operatorname{sp}}^2k^2(p+1)^2\#\mathcal{I}.
\]
The constant $C_{\operatorname{mm}}$ consists of several other constants which
exceed the scope of this article and we refer to \cite{Boe17} for more
details. However, the numerical experiments in \cite{Boe17} indicate that the
truncation operator with accumulated updates is faster than the fast truncation
operator.
An issue with both the fast truncation and the truncation with accumulated
updates is the situation where the product of $\mathcal{H}$-matrices has to be
converted into a low-rank matrix. Here, both implementations rely on a
\emph{hierarchical approximation} of the product of the $\mathcal{H}$-matrices.
That is, the product is computed in $\mathcal{H}$-matrix format and then,
starting from the leaves, recursively converted into low-rank format, which
is a time-consuming task and requires several intermediate truncation steps.
This introduces additional truncation errors, although the truncation with
accumulated updates reduces the number of conversions to some extent.
A slightly different approach than the fast truncation with hierarchical
approximation was proposed in \cite{DHP15}.
There, the $\mathcal{H}$-matrix products have been truncated to low-rank matrices
using an iterative eigensolver based on matrix-vector multiplications
before applying the fast truncation operator. The numerical experiments
prove this approach to be computationally efficient, while providing even
a best approximation to the product of the $\mathcal{H}$-matrices
in low-rank format.
We summarize by remarking that all of the common $\mathcal{H}$-matrix
multiplication algorithms are variants of Algorithm~\ref{alg:theHmult},
employing different truncation operators. Therefore, in order to improve the
accuracy and the speed of the $\mathcal{H}$-matrix multiplication, efficient
and accurate truncation operators have to be used.
Since approaches based on the singular value decomposition of dense or
low-rank matrices
have proven to be less promising, we focus in the following on low-rank
approximation methods based on matrix-vector multiplications. The idea behind
this approach is that the multiplication of a \texttt{sum}-expression
$\mathcal{S}(\tau,\sigma)$ with a vector $\mathbf{v}$ of length $\#\sigma$
can be computed efficiently by
\[
\mathcal{S}(\tau,\sigma)\mathbf{v}
=
\sum_{j\in\mathcal{J}_{\mathcal{R}}}\mathbf{A}_j\big(\mathbf{B}_j^{\intercal}\mathbf{v}\big)+
\sum_{j\in\mathcal{J}_{\mathcal{H}}}\mathbf{H}_j\big(\mathbf{K}_j\mathbf{v}\big).
\]
Although this idea has already been mentioned in \cite{DHP15}, the
eigensolver used in \cite{DHP15} turned out to be less favourable for this task.
In the next section, we will discuss several approaches to compute low-rank
approximations to \texttt{sum}-expressions using matrix-vector multiplications.
In particular, we will discuss
\emph{adaptive} algorithms, which compute low-rank approximations to
\texttt{sum}-expressions up to a prescribed error tolerance. The adaptive
algorithms will allow us to compute the best-approximation of
$\mathcal{H}$-matrix products.
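As a minimal illustration of this building block, the following NumPy sketch applies a \texttt{sum}-expression to a vector, exactly as in the displayed formula above. For simplicity, the $\mathcal{H}$-matrix factors are represented by callables realizing $\mathbf{v}\mapsto\mathbf{H}_j\mathbf{v}$ and $\mathbf{v}\mapsto\mathbf{K}_j\mathbf{v}$ (for instance, the recursive matrix-vector product of Section~\ref{sec:pre}); all names are illustrative.
\begin{verbatim}
import numpy as np

def sumexpr_matvec(lowrank_terms, hmat_terms, v, m):
    """Evaluate S(tau, sigma) @ v = sum_j A_j (B_j^T v) + sum_j H_j (K_j v).

    lowrank_terms : list of factor pairs (A_j, B_j) given as NumPy arrays
    hmat_terms    : list of pairs (apply_H_j, apply_K_j) of matrix-vector callables
    m             : length #tau of the result
    """
    y = np.zeros(m)
    for A, B in lowrank_terms:
        y += A @ (B.T @ v)               # never form A @ B.T explicitly
    for apply_H, apply_K in hmat_terms:
        y += apply_H(apply_K(v))         # two (H-)matrix-vector products
    return y

# Example with one low-rank term and one product of (dense stand-in) blocks:
A, B = np.ones((8, 2)), np.ones((5, 2))
H, K = np.ones((8, 7)), np.ones((7, 5))
y = sumexpr_matvec([(A, B)], [(lambda w: H @ w, lambda w: K @ w)], np.ones(5), m=8)
\end{verbatim}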
\section{Low-rank approximation schemes}\label{sec:approx}
In addition to the well known hierarchical approximation for the approximation
of \(\mathcal{H}\)-matrices by low-rank matrices, we consider here three different
schemes for the low-rank approximation of a given matrix. All of them
can be implemented in terms of elementary matrix-vector products and
are therefore well suited for the use in our new \(\mathcal{H}\)-matrix multiplication.
In what follows, let \({\bf A}\mathrel{\mathrel{\mathop:}=}{\bf H}|_{\tau\times\sigma}\in\mathbb{R}^{m\times n}\), $m=\#\tau$, $n=\#\sigma$, always denote
a target matrix block, which might be implicitly given in terms of a \texttt{sum}-expression $\mathcal{S}(\tau,\sigma)$.
\subsection{Adaptive cross approximation}\label{sec:ACA}
In the context of boundary element methods, the \emph{adaptive cross
approximation} (ACA), see \cite{Beb}, is frequently used to find
$\mathcal{H}$-matrix approximations to system matrices. However,
one can prove, see \cite{DHS17}, that the same idea can also be
applied to the product of pseudodifferential operators. Since
the $\mathcal{H}$-matrix multiplication can be seen as a discrete
analogon to the multiplication of pseudodifferential operators, we
may use ACA to find the low-rank approximations for the admissible
matrix blocks. Concretely, we approximate
${\bf A}={\bf H}|_{\tau\times\sigma}$ by a partially pivoted Gaussian
elimination, see \cite{Beb} for further details.
To this end, we define the vectors
\({\boldsymbol{\ell}}_r\in\mathbb{R}^m\) and \({\bf u}_r\in\mathbb{R}^n\)
by the iterative scheme shown in Algorithm~\ref{alg:ACA}, where \({\bf A}=[a_{i,j}]_{i,j}\) is the matrix-block under consideration.
\begin{algorithm}[htb]
\caption{Adaptive cross approximation (ACA)}
\label{alg:ACA}
\begin{algorithmic}
\For{$r =1,2,\ldots$}
\State Choose the element in the \((i_r,j_r)\)-position of the Schur complement as pivot
\State $\hat{\bf u}_r=[a_{i_r,j}]_{j=1}^n-\sum_{j=1}^{r-1}
[{\boldsymbol\ell}_j]_{i_r}{\bf u}_j$
\State \({\bf u}_r=\hat{\bf u}_r/[\hat{\bf u}_r]_{j_r}\)
\State ${\boldsymbol{\ell}}_r=[a_{i,j_r}]_{i=1}^m-
\sum_{i=1}^{r-1}[u_i]_{j_r}{\boldsymbol{\ell}_i}$
\mathbb{E}ndFor
\State Set \({\bf L}_r\mathrel{\mathrel{\mathop:}=}[{\boldsymbol\ell}_1,\ldots,{\boldsymbol\ell}_r]\) and \({\bf U}_r\mathrel{\mathrel{\mathop:}=}[{\bf u}_1,\ldots,{\bf u}_r]^\intercal\)
\end{algorithmic}
\end{algorithm}
A suitable criterion that guarantees the convergence of the algorithm
is to choose the pivot element located in the \((i_{r},j_{r})\)-position
as the maximum element in modulus of the remainder
\({\bf A}-{\bf L}_{r-1}{\bf U}_{r-1}\).
Unfortunately, this would compromise the overall cost of the approximation.
Therefore, we resort to another pivoting strategy which is sufficient in
most cases: we choose \(j_r\) such that \([\hat{\bf u}_r]_{j_r}\)
is the largest element in modulus of the row \(\hat{{\bf u}}_r\).
Obviously, the cost for the computation of the
rank-$k$-approximation ${\bf L}_{k}{\bf U}_{k}$ to the block
${\bf A}$ is \(\mathcal{O}\big(k^2(m+n)\big)\) and
the storage cost is \(\mathcal{O}\big(k(m+n)\big)\).
In addition, if \({\bf A}\) is given via a \texttt{sum}-expression,
we have to compute \({\bf e}_{i_r}^\intercal{\bf A}\)
and \({\bf A}{\bf e}_{j_r}\) in each step, where \({\bf e}_i\)
denotes the \(i\)-th unit vector, in order to retrieve the row and column under consideration.
The respective computational cost for the multiplication of a \texttt{sum}-expression with a
vector is estimated in Lemma~\ref{lem:compsumexprmv}.
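A minimal NumPy sketch of Algorithm~\ref{alg:ACA} with the described column pivoting and the adaptive stopping criterion \eqref{eq:truncaca} discussed below could look as follows. The callables \texttt{row} and \texttt{col} return a single row or column of \({\bf A}\), for instance retrieved from a \texttt{sum}-expression via multiplication with unit vectors; the choice of the next row pivot and all names are illustrative, and the Frobenius norm of the current approximant is recomputed here only for clarity.
\begin{verbatim}
import numpy as np

def aca(row, col, m, n, eps=1e-6, kmax=50):
    """Partially pivoted ACA of an implicitly given m x n matrix A.
    row(i) returns A[i, :], col(j) returns A[:, j]; returns Lk (m x k), Uk (k x n)."""
    Ls, Us, used = [], [], []
    Lk, Uk = np.zeros((m, 0)), np.zeros((0, n))
    i = 0                                            # first row pivot (illustrative)
    for _ in range(min(kmax, m, n)):
        u = row(i) - sum(L[i] * U for L, U in zip(Ls, Us))    # residual row
        j = int(np.argmax(np.abs(u)))                # column pivot: largest modulus
        if abs(u[j]) < 1e-14:
            break                                    # (numerically) exact rank reached
        u = u / u[j]
        l = col(j) - sum(U[j] * L for L, U in zip(Ls, Us))    # residual column
        Ls.append(l); Us.append(u); used.append(i)
        Lk, Uk = np.column_stack(Ls), np.vstack(Us)
        if np.linalg.norm(l) * np.linalg.norm(u) <= eps * np.linalg.norm(Lk @ Uk):
            break                                    # adaptive stopping criterion
        mask = np.abs(l); mask[used] = -np.inf       # next row pivot among unused rows
        i = int(np.argmax(mask))
    return Lk, Uk
\end{verbatim}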
\subsection{Lanczos bidiagonalization}\label{sec:lanczos}
The second algorithm we consider for compressing a given matrix block is based on the Lanczos bidiagonalization (BiLanczos),
see Algorithm~\ref{alg:Bidiagonalization}.
This procedure is equivalent to the tridiagonalization of the corresponding symmetric Jordan-Wielandt matrix
\[
\begin{bmatrix}{\bf 0} & {\bf A} \\ {\bf A}^\intercal & {\bf 0}\end{bmatrix},
\]
cf.\ \cite{golub2012}.
\begin{algorithm}[htb]
\caption{Golub-Kahan-Lanczos bidiagonalization}
\label{alg:Bidiagonalization}
\begin{algorithmic}
\State Choose a random vector \({\bf w}_1\) with \(\|{\bf w}_1\|=1\) and set \({\bf q}_0={\bf 0},\beta_0=0\)
\For{$r =1,2,\ldots$}
\State\({\bf q}_r = {\bf A}{\bf w}_r-\beta_{r-1}{\bf q}_{r-1}\)
\State \(\alpha_r = \|{\bf q}_r\|_2\)
\State \({\bf q}_r = {\bf q}_r / \alpha_r\)
\State \({\bf w}_{r+1}={\bf A}^\intercal{\bf q}_r-\alpha_r{\bf w}_r\)
\State \(\beta_r=\|{\bf w}_{r+1}\|_2\)
\State \({\bf w}_{r+1}={\bf w}_{r+1}/\beta_r\)
\mathbb{E}ndFor
\end{algorithmic}
\end{algorithm}
The algorithm leads to a decomposition of the given matrix block according to
\[
{\bf Q}^\intercal{\bf A}{\bf W}={\bf B}\mathrel{\mathrel{\mathop:}=}\begin{bmatrix}
\alpha_1 & \beta_1 & & & \\
& \alpha_2 & \beta_2 & & \\
& & \ddots & \ddots & \\
& & & \alpha_{n-1} & \beta_{n-1}\\
& & & & \alpha_n
\end{bmatrix}
\]
with orthogonal matrices \({\bf Q}^\intercal{\bf Q}={\bf I}_m\) and \({\bf W}^\intercal{\bf W}={\bf I}_n\), cf.\ \cite{golub2012}.
Note that although the algorithm yields orthogonal vectors \({\bf q}_r\) and \({\bf w}_r\) by construction, in practice we have
to perform additional reorthogonalization steps to maintain numerical stability. Since the algorithm, like ACA,
only depends on matrix-vector multiplications, it is well suited to compress a given block \({\bf A}\).
Truncating the algorithm after \(k\) steps
results in a low-rank approximation
\[
{\bf A}\approx {\bf Q}_k{\bf B}_k{\bf W}_k^\intercal={\bf Q}_k\begin{bmatrix}
\alpha_1 & \beta_1 & & & \\
& \alpha_2 & \beta_2 & & \\
& & \ddots & \ddots & \\
& & & \alpha_{k-1} & \beta_{k-1}\\
& & & & \alpha_k
\end{bmatrix}{\bf W}_k^\intercal .
\]
It is then easy to compute a singular value decomposition \({\bf B}_k=\tilde{\bf U}{\bf S}\tilde{\bf V}^\intercal\) and to obtain the
separable decomposition
\[{\bf A}\approx ({\bf Q}_k\tilde{\bf U}{\bf S})({\bf W}_k\tilde{\bf V})^\intercal={\bf L}_k{\bf U}_k.\]
As in the ACA case, the algorithm only requires two matrix-vector products with the block \({\bf A}\) in each step.
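A corresponding NumPy sketch of the truncated bidiagonalization, with full reorthogonalization added for numerical stability, is shown below. The callables \texttt{matvec} and \texttt{rmatvec} realize \(\mathbf{v}\mapsto{\bf A}\mathbf{v}\) and \(\mathbf{v}\mapsto{\bf A}^\intercal\mathbf{v}\), for instance via a \texttt{sum}-expression; all names are illustrative.
\begin{verbatim}
import numpy as np

def bilanczos(matvec, rmatvec, m, n, k):
    """Golub-Kahan-Lanczos bidiagonalization truncated after k steps.
    Returns Q (m x k), B (k x k upper bidiagonal), W (n x k) with A ~ Q B W^T."""
    Q, W = np.zeros((m, k)), np.zeros((n, k))
    alphas, betas = [], []
    w = np.random.rand(n); w /= np.linalg.norm(w)
    q, beta = np.zeros(m), 0.0
    for r in range(k):
        W[:, r] = w
        q = matvec(w) - beta * q
        q -= Q[:, :r] @ (Q[:, :r].T @ q)          # reorthogonalize against previous q's
        alpha = np.linalg.norm(q); q /= alpha
        Q[:, r] = q; alphas.append(alpha)
        w = rmatvec(q) - alpha * w
        w -= W[:, :r + 1] @ (W[:, :r + 1].T @ w)  # reorthogonalize against previous w's
        beta = np.linalg.norm(w); w /= beta
        if r < k - 1:
            betas.append(beta)
    B = np.diag(alphas) + np.diag(betas, 1)       # a small SVD of B yields the factors
    return Q, B, W
\end{verbatim}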
\subsection{Randomized low-rank approximation}\label{sec:randomized}
The third algorithm we consider for finding a low-rank approximation to \({\bf A}\)
is based on successive multiplication with
Gaussian random vectors. The algorithm can be motivated as follows, cf.\ \cite{HMT11}:
Let \({\bf y}_i={\bf A}{\boldsymbol\omega}_i\) for \(i=1,\ldots,r\), where \({\boldsymbol\omega}_1,\ldots,{\boldsymbol\omega}_r\in\mathbb{R}^n\)
are independently drawn Gaussian random vectors. The collection of these random vectors is very likely to be linearly independent, whereas
it is very unlikely that any linear combination of them falls in the null space of \({\bf A}\). As a consequence, the collection \({\bf y}_1,\ldots,{\bf y}_r\)
is very likely to be linearly independent and, for \(r\) large enough, to span the range of \({\bf A}\). Thus, by orthogonalizing \([{\bf y}_1,\ldots,{\bf y}_r]={\bf L}_r{\bf R}\), with an orthogonal
matrix \({\bf L}_r\in\mathbb{R}^{m\times r}\), we obtain \({\bf A}\approx{\bf L}_r{\bf U}_r\) with \({\bf U}_r={\bf L}_r^\intercal{\bf A}\), see Algorithm~\ref{alg:randomized}.
Employing oversampling,
the quality of the approximation can be increased even further. For all the details, we refer to \cite{HMT11}.
\begin{algorithm}[htb]
\caption{Randomized low-rank approximation}
\label{alg:randomized}
\begin{algorithmic}
\State Set \({\bf L}_0 = [\,]\)
\For{$r =1,2,\ldots$}
\State Generate a Gaussian random vector \({\boldsymbol\omega}\in\mathbb{R}^n\)
\State ${\boldsymbol\ell}_r = ({\bf I}-{\bf L}_{r-1}{\bf L}_{r-1}^\intercal){\bf A}{\boldsymbol\omega}$
\State ${\boldsymbol\ell}_r = {\boldsymbol\ell}_r$ / $\|{\boldsymbol\ell}_r\|_2$
\State ${\bf L}_r=[{\bf L}_{r-1}, {\boldsymbol\ell}_r]$
\State ${\bf U}_r={\bf L}_r^\intercal{\bf A}$
\EndFor
\end{algorithmic}
\end{algorithm}
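A minimal NumPy sketch of Algorithm~\ref{alg:randomized}, truncated after $k$ steps, reads as follows. Here \texttt{matvec} and \texttt{rmatvec} apply \({\bf A}\) and \({\bf A}^\intercal\) (for instance via a \texttt{sum}-expression), and assembling \({\bf U}_k={\bf L}_k^\intercal{\bf A}\) row by row is an equivalent reformulation, since the earlier columns of \({\bf L}_r\) do not change; all names are illustrative.
\begin{verbatim}
import numpy as np

def randomized_lowrank(matvec, rmatvec, m, n, k):
    """Incremental randomized range approximation; returns Lk (m x k), Uk (k x n)."""
    L = np.zeros((m, 0))
    rows = []
    for _ in range(k):
        omega = np.random.randn(n)             # Gaussian random vector
        l = matvec(omega)
        l -= L @ (L.T @ l)                     # apply (I - L L^T)
        l /= np.linalg.norm(l)
        L = np.column_stack([L, l])
        rows.append(rmatvec(l))                # new row of Uk = l^T A
    return L, np.vstack(rows)                  # A ~ L @ Uk
\end{verbatim}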
Like the previous two algorithms, the randomized low-rank approximation requires only two matrix-vector multiplications with \({\bf A}\) in each step.
In contrast to the other two presented compression schemes, the randomized approximation allows for a blocked version
in a straightforward manner,
where instead of a single Gaussian random vector, a Gaussian matrix can be used to approximate the range of \({\bf A}\).
Although block versions of the ACA and the BiLanczos exist as well, they are known to be numerically very unstable.
Note that there exist probabilistic error estimates for the approximation of a given matrix by Algorithm~\ref{alg:randomized}, cf.\ \cite{HMT11}.
Unfortunately, these error estimates are with respect to the spectral norm and therefore only give insight into the largest singular value
of the remainder. To have control of the actual approximation quality of a certain matrix block, we are thus rather interested in an error estimate
with respect to the Frobenius norm. To that end, we propose a different, adaptive criterion which estimates the error with respect to the Frobenius norm.
\subsection{Adaptive stopping criterion}
In our implementation, the proposed compression schemes rely on the following
well-known adaptive stopping criterion, which
aims at reflecting the approximation error with respect to the Frobenius norm.
We terminate the approximation if the criterion
\begin{equation}\label{eq:truncaca}
\|{\boldsymbol\ell}_{k+1}\|_2\|{\bf u}_{k+1}\|_2
\leq\varepsilon\|{{\bf L}_k{\bf U}_k}\|_F
\end{equation}
is met for some desired accuracy \(\varepsilon>0\).
This criterion can be justified as follows. We assume that the error in each step is reduced by a constant rate
\(0<\vartheta<1\), i.e.,
\[
\|{\bf A}-{\bf L}_{k+1}{\bf U}_{k+1}\|_F\leq\vartheta
\|{\bf A}-{\bf L}_{k}{\bf U}_{k}\|_F.
\]
Then, there holds
\begin{align*}
\|{\boldsymbol\ell}_{k+1}\|_2\|{\bf u}_{k+1}\|_2&=
\|{\bf L}_{k+1}{\bf U}_{k+1}-{\bf L}_{k}{\bf U}_{k}\|_F\\
&\leq\|{\bf A}-{\bf L}_{k+1}{\bf U}_{k+1}\|_F+
\|{\bf A}-{\bf L}_{k}{\bf U}_{k}\|_F\\
&\leq (1+\vartheta)\|{\bf A}-{\bf L}_{k}{\bf U}_{k}\|_F
\end{align*}
and, vice versa,
\begin{align*}
\|{\bf L}_{k+1}{\bf U}_{k+1}-{\bf L}_{k}{\bf U}_{k}\|_F&\geq
\|{\bf A}-{\bf L}_{k}{\bf U}_{k}\|_F
-\|{\bf A}-{\bf L}_{k+1}{\bf U}_{k+1}\|_F\\
&\geq (1-\vartheta)\|{\bf A}-{\bf L}_{k}{\bf U}_{k}\|_F.
\end{align*}
Therefore, the approximation error is proportional to the norm
\[
\|{\boldsymbol\ell}_{k+1}{\bf u}_{k+1}\|_F=\|{\boldsymbol\ell}_{k+1}\|_2\|{\bf u}_{k+1}\|_2
\]
of the update vectors, i.e.,
\[
(1-\vartheta)\|{\bf A}-{\bf L}_{k}{\bf U}_{k}\|_F
\leq\|{\boldsymbol\ell}_{k+1}\|_2\|{\bf u}_{k+1}\|_2\leq(1+\vartheta)
\|{\bf A}-{\bf L}_{k}{\bf U}_{k}\|_F.
\]
Thus, together with \eqref{eq:truncaca}, we can guarantee a
relative error bound
\begin{equation}\label{eq:ACAblockerror}
\|{\bf A}-{\bf L}_{k}{\bf U}_{k}\|_F\leq\frac{\varepsilon}{1-\vartheta}\|{{\bf L}_k{\bf U}_k}\|_F\leq\frac{\varepsilon}{1-\vartheta}\|{\bf A}\|_F.
\end{equation}
Based on this blockwise error estimate, it is straightforward to assess the
overall error for the approximation of a given $\mathcal{H}$-matrix.
\begin{theorem}
Let ${\bf H}$ be the uncompressed matrix and
$\tilde{\bf H}$ be the matrix which is compressed by
one of the aforementioned compression schemes.
Then, with respect to the
Frobenius norm, there holds the error estimate
\[
\|{\bf H}-\tilde{\bf H}\|_F\lesssim\varepsilon\|{\bf H}\|_F
\]
provided that the blockwise error satisfies \eqref{eq:ACAblockerror}.
\end{theorem}
\begin{proof}
In view of \eqref{eq:ACAblockerror}, we have
\begin{align*}
\|{\bf H}-\tilde{\bf H}\|_F^2
&=\sum_{\tau\times\sigma\in\mathcal{F}}
\big\|{\bf H}|_{\tau\times\sigma}-\tilde{\bf H}|_{\tau\times\sigma}\big\|_F^2\\
&\lesssim\varepsilon^2 \sum_{\tau\times\sigma\in\mathcal{F}}
\|{\bf H}|_{\tau\times\sigma}\|_F^2\\
&\leq\varepsilon^2\|{\bf H}\|_F^2.
\end{align*}
Taking square roots on both sides yields the assertion.
\end{proof}
\subsection{Fixed rank approximation}
The traditional $\mathcal{H}$-matrix multiplication is based on low-rank approximations
to a fixed, a-priori prescribed rank. We will therefore briefly comment on the fixed rank
versions of the introduced algorithms which we will later also use in the numerical
experiments. Since ACA and the BiLanczos algorithm are intrinsically iterative methods,
we stop the iteration whenever the prescribed rank is reached. For the randomized low-rank
approximation we may use a single iteration of its blocked variant. The corresponding
algorithm, which also features $q\in\mathbb{N}_0$ additional subspace iterations to increase
the accuracy, is listed in Algorithm~\ref{alg:randomizedfixed}, cf.\ \cite{HMT11}.
\begin{algorithm}[htb]
\caption{Randomized rank-$k$ approximation with subspace iterations}
\label{alg:randomizedfixed}
\begin{algorithmic}
\State Generate a Gaussian random matrix \({\boldsymbol\Omega}\in\mathbb{R}^{n\times k}\)
\State $\mathbf{L} = {\bf A}{\boldsymbol\Omega}$
\State Orthonormalize columns of $\mathbf{L}$
\For{$\ell=1,\ldots,q$}
\State $\mathbf{U} = {\bf A}^\intercal\mathbf{L}$
\State Orthonormalize columns of $\mathbf{U}$
\State $\mathbf{L} = {\bf A}\mathbf{U}$
\State Orthonormalize columns of $\mathbf{L}$
\EndFor
\State $\mathbf{U} = {\bf A}^\intercal\mathbf{L}$
\end{algorithmic}
\end{algorithm}
For practical purposes, when the singular values of $\mathbf{A}$ decay sufficiently fast,
a value of $q=1$ is usually sufficient. We will therefore use $q=1$ in the numerical
experiments. For a detailed discussion on the number of subspace
iterations and comments on \emph{oversampling}, i.e., sampling at a higher rank with a
subsequent truncation, we refer to \cite{HMT11}.
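A NumPy sketch of Algorithm~\ref{alg:randomizedfixed} is given below; here \texttt{matvec} and \texttt{rmatvec} apply \({\bf A}\) and \({\bf A}^\intercal\) to matrices (e.g., column by column via a \texttt{sum}-expression), and all names are illustrative.
\begin{verbatim}
import numpy as np

def randomized_fixed_rank(matvec, rmatvec, n, k, q=1):
    """Blocked randomized rank-k approximation with q subspace iterations."""
    Omega = np.random.randn(n, k)              # Gaussian random test matrix
    L, _ = np.linalg.qr(matvec(Omega))         # orthonormal basis of A @ Omega
    for _ in range(q):
        U, _ = np.linalg.qr(rmatvec(L))        # subspace (power) iteration
        L, _ = np.linalg.qr(matvec(U))
    U = rmatvec(L)                             # U = A^T L, hence A ~ L @ U.T
    return L, U
\end{verbatim}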
\section{Cost of the \(\mathcal{H}\)-matrix multiplication}
\label{sec:cost}
The following section is dedicated to the cost analysis of the
$\mathcal{H}$-matrix multiplication as introduced in
Algorithm~\ref{alg:theHmult}. We first estimate the
cost for the computation of the \texttt{sum}-expressions and
then proceed by analyzing the multiplication of a \texttt{sum}-expression
with a vector. Having these estimates at our disposal, the main theorem of this
section confirms that the cost of the $\mathcal{H}$-matrix multiplication
scales almost linearly with the cardinality of the index set $\mathcal{I}$.
\begin{lemma}\label{lem:compsumexprrec}
Given a block-cluster $\tau\times\sigma\in\mathcal{B}$ with
\texttt{sum}-expression $\mathcal{S}(\tau,\sigma)$, the
\texttt{sum}-expression $\mathcal{S}(\tau',\sigma')$ for any
block-cluster
$\tau'\times\sigma'\in\operatorname{child}(\tau\times\sigma)$
can be computed in at most
\begin{align*}
N_{\mathrm{update},\mathcal{S}}(\tau',\sigma')\leq
\sum_{\substack{
\rho'\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau'\times\rho'\in\mathcal{B\setminus F}\\
\rho'\times\sigma'\in\mathcal{F}
}}
kN_{\mathcal{H}\cdot v}(\tau'\times\rho',k)+{}&
\sum_{\substack{
\rho'\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau'\times\rho'\in\mathcal{F}\\
\rho'\times\sigma'\in\mathcal{B\setminus F}
}}
kN_{\mathcal{H}\cdot v}(\rho'\times\sigma',k)\\
+{}&
\sum_{\substack{
\rho'\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau'\times\rho'\in\mathcal{F}\\
\rho'\times\sigma'\in\mathcal{F}
}}
k\big(\#\rho'+\min\{\#\tau',\#\sigma'\}
\big)
\end{align*}
operations.
\end{lemma}
\begin{proof}
We start with recalling that $\mathcal{S}(\tau',\sigma')$ is
recursively given as
\begin{align*}
\mathcal{S}(\tau',\sigma')={}&
\mathcal{S}(\tau,\sigma)|_{\tau'\times\sigma'}\\
={}&
\mathcal{S}_\mathcal{R}(\tau,\sigma)|_{\tau'\times\sigma'}+
\mathcal{S}_\mathcal{H}(\tau,\sigma)|_{\tau'\times\sigma'}\\
={}&
\mathcal{S}_\mathcal{R}(\tau,\sigma)|_{\tau'\times\sigma'}+
\sum_{\substack{
\rho'\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau'\times\rho'\in\mathcal{B}\\
\rho'\times\sigma'\in\mathcal{B}
}}
\mathbf{H}|_{\tau'\times\rho'}\mathbf{H}|_{\rho'\times\sigma'},
\numberthis\label{eq:recursion}
\end{align*}
with $\mathcal{S}_\mathcal{R}(\mathcal{I},\mathcal{I})=\emptyset$. Clearly,
the restriction of $\mathcal{S}_\mathcal{R}(\tau,\sigma)$ to
$\tau'\times\sigma'$ comes for free, and we only have to look at
the remaining sum. It can be decomposed
into
\begin{align}
\sum_{\substack{
\rho'\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau'\times\rho'\in\mathcal{B}\\
\rho'\times\sigma'\in\mathcal{B}
}}
\mathbf{H}|_{\tau'\times\rho'}\mathbf{H}|_{\rho'\times\sigma'}
={}&
\sum_{\substack{
\rho'\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau'\times\rho'\in\mathcal{B\setminus F}\\
\rho'\times\sigma'\in\mathcal{B\setminus F}
}}
\mathbf{H}|_{\tau'\times\rho'}\mathbf{H}|_{\rho'\times\sigma'}\label{eq:prodexrp1}\\
&{}\quad+
\sum_{\substack{
\rho'\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau'\times\rho'\in\mathcal{B\setminus F}\\
\rho'\times\sigma'\in\mathcal{F}
}}
\mathbf{H}|_{\tau'\times\rho'}\mathbf{H}|_{\rho'\times\sigma'}\label{eq:prodexrp2}\\
&{}\quad+
\sum_{\substack{
\rho'\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau'\times\rho'\in\mathcal{F}\\
\rho'\times\sigma'\in\mathcal{B\setminus F}
}}
\mathbf{H}|_{\tau'\times\rho'}\mathbf{H}|_{\rho'\times\sigma'}\label{eq:prodexrp3}\\
&{}\quad+
\sum_{\substack{
\rho'\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau'\times\rho'\in\mathcal{F}\\
\rho'\times\sigma'\in\mathcal{F}
}}
\mathbf{H}|_{\tau'\times\rho'}\mathbf{H}|_{\rho'\times\sigma'}\label{eq:prodexrp4}.
\end{align}
The products on the right-hand side in \eqref{eq:prodexrp1} are not computed; thus, no
numerical effort is required. The products in
\eqref{eq:prodexrp2}, \eqref{eq:prodexrp3}, and
\eqref{eq:prodexrp4} must be computed and stored as low-rank matrices.
The computational effort for this operation is
$kN_{\mathcal{H}\cdot v}(\tau'\times\rho',k)$ for
\eqref{eq:prodexrp2}, $kN_{\mathcal{H}\cdot v}(\rho'\times\sigma',k)$
for \eqref{eq:prodexrp3}, and $k^2(\#\rho'+\min\{\#\tau',\#\sigma'\})$
for \eqref{eq:prodexrp4}. Thus, given $\mathcal{S}(\tau,
\sigma)$, the computational cost to compute $\mathcal{S}(\tau',
\sigma')$ is
\begin{align*}
\sum_{\substack{
\rho'\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau'\times\rho'\in\mathcal{B\setminus F}\\
\rho'\times\sigma'\in\mathcal{F}
}}
kN_{\mathcal{H}\cdot v}(\tau'\times\rho',k)+
&{}
\sum_{\substack{
\rho'\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau'\times\rho'\in\mathcal{F}\\
\rho'\times\sigma'\in\mathcal{B\setminus F}
}}
kN_{\mathcal{H}\cdot v}(\rho'\times\sigma',k)\\
&{}\qquad\qquad+
\sum_{\substack{
\rho'\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau'\times\rho'\in\mathcal{F}\\
\rho'\times\sigma'\in\mathcal{F}
}}
k\big(\#\rho'+\min\{\#\tau',\#\sigma'\}
\big),
\end{align*}
which proves the assertion.
\end{proof}
\begin{lemma}\label{lem:compsumexprtot}
The $\mathcal{H}$-matrix multiplication as given by Algorithm~\ref{alg:theHmult}
requires at most
\[
N_{\mathcal{S}}(\mathcal{B})\leq 16C_{\operatorname{sp}}^3k\max\{k,n_{\min}\}(p+1)^2\#\mathcal{I}
\]
operations for the computation of the \texttt{sum}-expressions.
\end{lemma}
\begin{proof}
We consider a block-cluster $\tau\times\sigma\in\mathcal{B}\setminus\mathcal{F}$
on level $j$. Then, the numerical effort to compute the \texttt{sum}-expression
$\mathcal{S}(\tau',\sigma')$ for $\tau'\times\sigma'\in\operatorname{child}(\tau\times\sigma)$
is estimated by Lemma~\ref{lem:compsumexprrec}, if the
\texttt{sum}-expression $\mathcal{S}(\tau,\sigma)$ is known. Therefore, it is sufficient
to sum over all block-clusters in $\mathcal{B}$ as follows:
\begin{align*}
&\sum_{\tau\times\sigma\in\mathcal{B}}N_{\mathrm{update},\mathcal{S}}(\tau,\sigma)\\
&\qquad\leq
\sum_{\tau\times\sigma\in\mathcal{B}}\bigg(
2\sum_{\substack{
\rho\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau\times\rho\in\mathcal{B\setminus F}\\
\rho\times\sigma\in\mathcal{F}
}}
kN_{\mathcal{H}\cdot v}(\tau\times\rho,k)+
\sum_{\substack{
\rho\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau\times\rho\in\mathcal{F}\\
\rho\times\sigma\in\mathcal{F}
}}
k\big(\#\rho+\min\{\#\tau,\#\sigma\}
\big)
\bigg)\\
&\qquad=
2\sum_{\tau\times\sigma\in\mathcal{B}}
\sum_{\substack{
\rho\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau\times\rho\in\mathcal{B\setminus F}\\
\rho\times\sigma\in\mathcal{F}
}}
kN_{\mathcal{H}\cdot v}(\tau\times\rho,k)+
\sum_{\tau\times\sigma\in\mathcal{B}}
\sum_{\substack{
\rho\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau\times\rho\in\mathcal{F}\\
\rho\times\sigma\in\mathcal{F}
}}
k\big(\#\rho+\min\{\#\tau,\#\sigma\}
\big).
\end{align*}
To estimate the first sum, consider
\begin{align*}
\sum_{\tau\times\sigma\in\mathcal{B}}
\sum_{\substack{
\rho\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau\times\rho\in\mathcal{B\setminus F}\\
\rho\times\sigma\in\mathcal{F}
}}
kN_{\mathcal{H}\cdot v}(\tau\times\rho,k)
\leq{}&
\sum_{\tau\times\sigma\in\mathcal{B}}
\sum_{\substack{
\rho\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau\times\rho\in\mathcal{B\setminus F}
}}
kN_{\mathcal{H}\cdot v}(\tau\times\rho,k)\\
\leq{}&
C_{\operatorname{sp}}
\sum_{\tau\times\rho\in\mathcal{B}}
kN_{\mathcal{H}\cdot v}(\tau\times\rho,k)\\
\leq{}&
2C_{\operatorname{sp}}^2 k\max\{k,n_{\min}\}(p+1)\sum_{\tau\times\rho\in\mathcal{B}}(\#\tau+\#\rho)\\
\leq{}&4C_{\operatorname{sp}}^3k\max\{k,n_{\min}\}(p+1)^2\#\mathcal{I},
\end{align*}
due to the fact that
\[
\sum _{\tau\times\sigma\in\mathcal{B}}(\#\tau+\#\sigma)\leq 2C_{\operatorname{sp}} (p+1)\#\mathcal{I},
\]
see, e.g., \cite[Lemma 2.4]{GH03}. Since the second sum can be estimated by
\begin{align*}
\sum_{\tau\times\sigma\in\mathcal{B}}
\sum_{\substack{
\rho\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau\times\rho\in\mathcal{F}\\
\rho\times\sigma\in\mathcal{F}
}}
k\big(\#\rho+\min\{\#\tau,\#\sigma\}
\big)
\leq{}&
\sum_{\substack{
\tau\times\rho\in\mathcal{B}\\
\rho\times\sigma\in\mathcal{B}
}}
k\big(\#\rho+\min\{\#\tau,\#\sigma\}
\big)\\
\leq{}&
\sum_{\substack{
\tau\times\rho\in\mathcal{B}\\
\rho\times\sigma\in\mathcal{B}
}}
k\big(2\#\rho+\#\tau+\#\sigma
\big)\\
\leq{}&
2C_{\operatorname{sp}} k(p+1)\#\mathcal{I},
\end{align*}
summing up yields the assertion.
\end{proof}
\begin{lemma}\label{lem:compsumexprmv}
For any block-cluster $\tau\times\sigma\in\mathcal{B}$, the
multiplication of $\mathcal{S}(\tau,\sigma)$ with a vector of
length $\#\sigma$ can be accomplished in at most
\[
4C_{\operatorname{sp}} \max\{k, n_{\min}\} (p+1)\sum_{\substack{
\rho\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau\times\rho\in\mathcal{B\setminus F}\\
\rho\times\sigma\in\mathcal{B\setminus F}
}}(\#\tau+\#\rho+\#\sigma)
\]
operations.
\end{lemma}
\begin{proof}
We first estimate the number of elements in
$\mathcal{S}_\mathcal{R}(\tau,\sigma)$ and $\mathcal{S}_\mathcal{H}(\tau,\sigma)$.
To this end, we remark that, for fixed $\tau$, there are at most
$C_{\operatorname{sp}}$ block-cluster pairs
$\tau\times\rho$ in
$\mathcal{B}$ and that the same consideration holds for $\sigma$.
Thus, looking at the recursion \eqref{eq:recursion},
the recursion step from $\mathcal{S}\big(\operatorname{parent}(\tau),\operatorname{parent}(\sigma)\big)$
to $\mathcal{S}(\tau,\sigma)$ adds at most $C_{\operatorname{sp}}$ low-rank
matrices. Thus, considering that
$\mathcal{S}(\mathcal{I},\mathcal{I})=\emptyset$, we have at most
$C_{\operatorname{sp}}\operatorname{level}(\tau\times\sigma)$ low-rank matrices in
$\mathcal{S}(\tau,\sigma)$. Summing up, the multiplication
of $\mathcal{S}(\tau,\sigma)$ with a vector requires at most
\begin{align*}
&C_{\operatorname{sp}} k\operatorname{level}(\tau\times\sigma)(\#\tau+\#\sigma)+
\sum_{\substack{
\rho\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau\times\rho\in\mathcal{B\setminus F}\\
\rho\times\sigma\in\mathcal{B\setminus F}
}}\Big(N_{\mathcal{H}\cdot v}(\tau\times\rho,k)+N_{\mathcal{H}\cdot v}(\rho\times\sigma,k)\Big)\\
&\qquad\qquad\leq C_{\operatorname{sp}} k(p+1)(\#\tau+\#\sigma)\\
&\qquad\qquad\qquad\qquad+\sum_{\substack{
\rho\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau\times\rho\in\mathcal{B\setminus F}\\
\rho\times\sigma\in\mathcal{B\setminus F}
}}2C_{\operatorname{sp}} \max\{k, n_{\min}\} (p+1)(\#\tau+2\#\rho+\#\sigma)\\
&\qquad\qquad\leq 4C_{\operatorname{sp}} \max\{k, n_{\min}\} (p+1)\sum_{\substack{
\rho\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau\times\rho\in\mathcal{B\setminus F}\\
\rho\times\sigma\in\mathcal{B\setminus F}
}}(\#\tau+\#\rho+\#\sigma)
\end{align*}
operations.
\end{proof}
With the help of the previous lemmata, we are now able to state our
main result, which estimates the number of operations for the
$\mathcal{H}$-matrix multiplication from Algorithm~\ref{alg:theHmult}.
\begin{theorem}
Assuming that the range approximation scheme $\mathfrak{T}(\mathbf{M},k^\star)$
requires $\ell k^\star$, $\ell\geq 1$, matrix-vector multiplications of $\mathbf{M}$ and
additionally $N_{\mathfrak{T}}(k^{\star})$ operations to find a rank $k^\star$
approximation to a matrix $\mathbf{M}\in\mathbb{R}^{m\times n}$, the
$\mathcal{H}$-matrix multiplication as stated in Algorithm~\ref{alg:theHmult}
requires at most
\[
8C_{\operatorname{sp}}^2 (\ell k^{\star}+n_{\min}+2C_{\operatorname{sp}})k\max\{k, n_{\min}\} (p+1)^2\#\mathcal{I}+C_{\operatorname{sp}}(2\#\mathcal{I}-1) N_{\mathfrak{T}}(k^{\star})
\]
operations.
\end{theorem}
\begin{proof}
We start by estimating the number of operations for a single far-field block
$\tau\times\sigma\in\mathcal{F}$. Using Lemma~\ref{lem:compsumexprmv}, this
requires at most
\[
4C_{\operatorname{sp}} \ell k^{\star}\max\{k, n_{\min}\} (p+1)\sum_{\substack{
\rho\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau\times\rho\in\mathcal{B\setminus F}\\
\rho\times\sigma\in\mathcal{B\setminus F}
}}(\#\tau+\#\rho+\#\sigma)
+N_{\mathfrak{T}}(k^{\star})
\]
operations. Summing up over all farfield blocks yields an effort of at most
\begin{align*}
&{}\sum_{\tau\times\sigma\in\mathcal{F}}
\bigg(
4C_{\operatorname{sp}} \ell k^{\star}\max\{k, n_{\min}\} (p+1)\sum_{\substack{
\rho\in\mathcal{T}_{\mathcal{I}}\colon\\
\tau\times\rho\in\mathcal{B\setminus F}\\
\rho\times\sigma\in\mathcal{B\setminus F}
}}(\#\tau+\#\rho+\#\sigma)
+N_{\mathfrak{T}}(k^{\star})
\bigg)\\
&{}\qquad\qquad\leq
4C_{\operatorname{sp}} \ell k^{\star}\max\{k, n_{\min}\} (p+1)\sum_{\substack{
\tau\times\rho\in\mathcal{B}\\
\rho\times\sigma\in\mathcal{B}
}}(\#\tau+2\#\rho+\#\sigma)+
\sum_{\tau\times\sigma\in\mathcal{F}}N_{\mathfrak{T}}(k^{\star})\\
&{}\qquad\qquad\leq 8C_{\operatorname{sp}}^2 \ell k^{\star}k\max\{k, n_{\min}\} (p+1)^2\#\mathcal{I}+C_{\operatorname{sp}}(2\#\mathcal{I}-1) N_{\mathfrak{T}}(k^{\star}),
\end{align*}
where we have used that $\#\mathcal{F}\leq C_{\operatorname{sp}}(2\#\mathcal{I}-1)$,
cf.~\cite[Lemma 6.11]{Hac15}. Assuming that a nearfield block $\tau\times\sigma
\in\mathcal{N}$ is computed by applying its corresponding
\texttt{sum}-expression $\mathcal{S}(\tau,\sigma)$ to an identity matrix of size at
most $n_{\min}\times n_{\min}$, the nearfield blocks of the $\mathcal{H}$-matrix
product can be computed in at most
\begin{align*}
8C_{\operatorname{sp}}^2 k\max\{k, n_{\min}\}n_{\min} (p+1)^2\#\mathcal{I}
\end{align*}
operations. Summing up the operations for the farfield, the nearfield, and the
operations of the \texttt{sum}-expressions from Lemma~\ref{lem:compsumexprtot}
yields the assertion.
\end{proof}
We remark that, with the convention $p=\operatorname{depth}(\mathcal{B})=\mathcal{O}(\log(\#\mathcal{I}))$,
the previous theorem states that the $\mathcal{H}$-matrix multiplication from
Algorithm~\ref{alg:theHmult} has an almost linear cost with respect to $\#\mathcal{I}$.
For this, given $\mathcal{I}$, we require that the block-wise ranks of the
$\mathcal{H}$-matrix are bounded by a constant $k^\star$, which may depend on
$\mathcal{I}$. According to \cite{DHS17}, this is reasonable for $\mathcal{H}$-matrices
which arise from the discretization of pseudodifferential operators. In particular, it is
well known that $k^\star$ depends poly-logarithmically on $\#\mathcal{I}$ for suitable
approximation accuracies $\varepsilon$.
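To make the almost linear complexity explicit, if $C_{\operatorname{sp}}$, $\ell$, $k$,
$k^{\star}$, and $n_{\min}$ are treated as constants (or as poly-logarithmic factors in
$\#\mathcal{I}$), the bound of the previous theorem reduces to
\[
\mathcal{O}\big(\#\mathcal{I}\log^2(\#\mathcal{I})\big)
+\mathcal{O}\big(\#\mathcal{I}\,N_{\mathfrak{T}}(k^{\star})\big)
\]
operations.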
\section{Numerical examples}\label{sec:results}
The following section is dedicated to a comparison of the new
$\mathcal{H}$-matrix multiplication to the original algorithm from \cite{Hack1}, see also Section~\ref{sec:fasttrunc}. Besides the
comparison between these two algorithms, we also compare different
truncation operators. The considered configurations are listed in
Table~\ref{tab:expcases}.
Note that, in order to compute the dense SVD of a
\texttt{sum}-expression, we compute the SVD of the \texttt{sum}-expression
applied to a dense identity matrix.
\begin{table}[htb]
\begin{tabular}{|c|c|c|}
\hline
& traditional multiplication & new multiplication\\\hline
\parbox[c]{0.2\textwidth}{\centering sum of low-rank matrices} &
\parbox[c]{0.35\textwidth}{\centering truncation with SVD of low-rank matrices, see Algorithm~\ref{alg:lrsvd}} & --- \\\hline
\parbox[c]{0.2\textwidth}{\centering conversion of products of $\mathcal{H}$-matrix blocks to low rank} &
\parbox[c]{0.35\textwidth}{
\begin{itemize}[noitemsep,leftmargin=2ex]
\item Hierarchical approximation, see \cite{Hack1}
\item ACA, see Section~\ref{sec:ACA}
\item BiLanczos, see Section~\ref{sec:lanczos}
\item Randomized, see Section~\ref{sec:randomized}
\item Reference: dense SVD
\end{itemize}
} & ---\\\hline
\parbox[c]{0.2\textwidth}{\centering conversion of \texttt{sum}-expressions to low rank}&
\parbox[c]{0.35\textwidth}{
\centering combination of the operations above, see discussion in
Section~\ref{sec:Hmult}
}&
\parbox[c]{0.35\textwidth}{
\begin{itemize}[noitemsep,leftmargin=2ex]
\item ACA, see Section~\ref{sec:ACA}
\item BiLanczos, see Section~\ref{sec:lanczos}
\item Randomized, see Section~\ref{sec:randomized}
\item Reference: dense SVD
\end{itemize}
}\\\hline
\end{tabular}
\caption{\label{tab:expcases}Considered configurations of the $\mathcal{H}$-matrix multiplication for the numerical experiments.}
\end{table}
We would like to compare the different configurations in terms of
speed and accuracy. For computational efficiency, the traditional
$\mathcal{H}$-matrix arithmetic puts an upper bound $k_{\max}$ on the
truncation rank. However, for computational accuracy, one should
rather put an upper bound on the truncation error. This is also referred
to as $\varepsilon$-rank, where the truncation is with respect to
the smallest possible rank such that the relative truncation error is smaller than $\varepsilon$.
We perform the experiments for both types of bounds, with $k_{\max}=16$ and
$\varepsilon=10^{-12}$.
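For a single dense block, the $\varepsilon$-rank truncation can be realized, for instance,
by the following NumPy sketch, which selects the smallest rank meeting a relative error
tolerance in the Frobenius norm; it serves to illustrate the concept and is not the
implementation used in the experiments.
\begin{verbatim}
import numpy as np

def eps_rank_truncation(A, eps):
    """Return L, U with A ~ L @ U.T and ||A - L U^T||_F <= eps * ||A||_F,
    using the smallest possible rank."""
    Uf, s, Vt = np.linalg.svd(A, full_matrices=False)
    # errs[k] = ||A - A_k||_F when the k largest singular values are kept
    errs = np.sqrt(np.append(np.cumsum((s ** 2)[::-1])[::-1], 0.0))
    k = int(np.argmax(errs <= eps * errs[0]))
    return Uf[:, :k] * s[:k], Vt[:k].T
\end{verbatim}
The adaptive low-rank approximation schemes discussed in the previous sections aim at the
same type of relative error bound without ever forming the blocks densely.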
\subsection{Example with exponential kernels}
For our first numerical example, we consider the Galerkin discretizations
$\mathbf{K}_1$ and $\mathbf{K}_2$ of an exponential kernel
\[
k_1(\mathbf{x},\mathbf{y})=\exp(-\|\mathbf{x}-\mathbf{y}\|)
\]
and a scaled exponential kernel
\[
k_2(\mathbf{x},\mathbf{y})=x_1\exp(-\|\mathbf{x}-\mathbf{y}\|)
\]
on the unit sphere $\Gamma$, i.e., the boundary of the unit ball. This means that, for a given finite element space
$V_N=\operatorname{span}\{\varphi_1,\ldots,\varphi _N\}$ on $\Gamma$, the system matrices are given by
\[
\big[\mathbf{K}_{\ell}\big]_{ij}
=
\int_\Gamma\int_\Gamma k_\ell(\mathbf{x},\mathbf{y})\varphi_j(\mathbf{x})\varphi_i(\mathbf{y})\operatorname{d}\!\sigma_\mathbf{x}\operatorname{d}\!\sigma_\mathbf{y}
\]
for all $i,j=1,\ldots, N$ and $\ell=1,2$.
It is then well known that
$\mathbf{K}_1$, $\mathbf{K}_2$, and their product $\mathbf{K}_1\mathbf{K}_2$ are
compressible by means of $\mathcal{H}$-matrices, see \cite{DHS17,Hac15}. For our numerical experiments,
we assemble the matrices by using piecewise constant finite elements and adaptive
cross approximation as described in \cite{HP2013}.
The computations have been carried out on a single core of
a compute server with two Intel(R) Xeon(R) E5-2698v4 CPUs with a clock rate of
2.20GHz and a main memory of 756GB. The backend for the linear algebra routines
is version 3.2.8 of the software library Eigen, see \cite{eigenweb}.
Figure~\ref{fig:esetfr} depicts the computation times per degree of freedom
for the different kinds of $\mathcal{H}$-matrix multiplication for the fixed
rank truncation, whereas Figure~\ref{fig:eseter} shows the computation
times for the $\varepsilon$-rank truncation. The cost of the fixed rank
truncation seems to be $\mathcal{O}(N\log(N)^2)$, in accordance with the
theoretical cost estimates. We can also immediately infer that it pays off to
replace the hierarchical approximation by the alternative low-rank approximation
schemes to improve computation times. For the $\varepsilon$-rank truncation, no
cost estimates are known. While the traditional multiplication still seems to be in the
preasymptotic regime in this case, the new multiplication scales almost linearly.
Another important point to remark is that the new algorithm with
$\varepsilon$-rank truncation seems to outperform the frequently used traditional
$\mathcal{H}$-matrix multiplication in terms of computational efficiency.
Therefore, we shall now examine whether it is also competitive in terms of
accuracy.
\begin{figure}
\caption{Computation times per degree of freedom for the exponential kernel example with fixed rank truncation.}
\label{fig:esetfr}
\end{figure}
\begin{figure}
\caption{Computation times per degree of freedom for the exponential kernel example with $\varepsilon$-rank truncation.}
\label{fig:eseter}
\end{figure}
To estimate the error of the $\mathcal{H}$-matrix multiplication, we apply
ten subspace iterations to a subspace of size 100, using the matrix-vector
product
\[
(\mathbf{K}_1\mathbf{K}_2)\mathbf{v}-\mathbf{K}_1(\mathbf{K}_2\mathbf{v}),
\]
and compute an approximation to the Frobenius norm.
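One possible realization of this error estimate is sketched below; it assumes that
(blocked) matrix-vector products with the error operator and its transpose are available,
and the names and the orthogonalization strategy are illustrative rather than a
description of the actual implementation.
\begin{verbatim}
import numpy as np

def estimate_error_fro(apply_E, apply_Et, n, subspace=100, iters=10, rng=None):
    """Estimate ||E||_F for E = (K1*K2)_H - K1 K2 from products with E and E^T.
    apply_E / apply_Et act columnwise on an (n x subspace) block of vectors.
    The l2 norm of the captured singular values is a lower bound on ||E||_F,
    which is accurate whenever E is numerically low-rank."""
    rng = np.random.default_rng() if rng is None else rng
    X, _ = np.linalg.qr(rng.standard_normal((n, subspace)))
    for _ in range(iters):
        Y, _ = np.linalg.qr(apply_E(X))    # approximate left singular subspace
        X, _ = np.linalg.qr(apply_Et(Y))   # approximate right singular subspace
    s = np.linalg.svd(apply_E(X), compute_uv=False)
    return float(np.sqrt(np.sum(s ** 2)))
\end{verbatim}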
Looking at the corresponding errors in Figures~\ref{fig:eseefr} and \ref{fig:eseeer},
we see that the accuracies of the fixed rank arithmetic behave similarly. However,
the new multiplication reaches these accuracies in a shorter computation time.
For the $\varepsilon$-rank truncation, we observe that the traditional
multiplication cannot achieve the prescribed accuracy of
$\varepsilon=10^{-12}$. This accuracy is only achieved by the new multiplication
with an appropriate low-rank approximation method. The computation times for these
accuracies are even smaller than for the fixed-rank version of the
traditional $\mathcal{H}$-matrix multiplication. Concerning the low-rank
approximation methods, it seems as if ACA and the randomized algorithm
are more robust when applied to smaller matrix sizes, although this difference
vanishes for large matrices.
\begin{figure}
\caption{Errors of the $\mathcal{H}$-matrix multiplication for the exponential kernel example with fixed rank truncation.}
\label{fig:eseefr}
\end{figure}
\begin{figure}
\caption{Errors of the $\mathcal{H}$-matrix multiplication for the exponential kernel example with $\varepsilon$-rank truncation.}
\label{fig:eseeer}
\end{figure}
\FloatBarrier
Since the approximation quality of the low-rank methods crucially depends on the
decay of the singular values of the off-diagonal blocks, we repeat the
experiments on a different example.
\subsection{Example with matrices from the boundary element method}
The second example is concerned with the discretized boundary
integral operator
\[
[\mathbf{V}]_{ij}
=
\int_\Gamma\int_\Gamma \frac{\varphi_j(\mathbf{x})\varphi_i(\mathbf{y})}{\|\mathbf{x}-\mathbf{y}\|}\operatorname{d}\!\sigma_\mathbf{x}\operatorname{d}\!\sigma_\mathbf{y},
\]
which is frequently used in boundary element methods, see also \cite{Ste08}.
The computation times for the operation $\mathbf{V}\mathbf{V}$ can be
found in Figures~\ref{fig:vvtfr} and \ref{fig:vvter} and the corresponding
accuracies can be found in Figures~\ref{fig:vvefr} and \ref{fig:vveer}. We
can see that the
behaviour of the traditional and the new $\mathcal{H}$-matrix multiplication
is in large parts the same as in the previous example, i.e., the new
multiplication with $\varepsilon$-rank truncation can reach higher accuracies
in shorter computation time than the traditional multiplication with
an upper bound on the used rank. However,
we briefly comment on the right plot of Figure~\ref{fig:vveer}. There,
we see, as in the previous example, that the ACA and the randomized
low-rank approximation method are more robust on small matrix sizes than
the BiLanczos algorithm. We also see that the ACA and the
randomized low-rank approximation method are even more robust than assembling
the full matrix and applying a dense SVD to obtain the low-rank blocks.
\begin{figure}
\caption{Computation times per degree of freedom for the boundary element example $\mathbf{V}\mathbf{V}$ with fixed rank truncation.}
\label{fig:vvtfr}
\end{figure}
\begin{figure}
\caption{Computation times per degree of freedom for the boundary element example $\mathbf{V}\mathbf{V}$ with $\varepsilon$-rank truncation.}
\label{fig:vvter}
\end{figure}
\begin{figure}
\caption{Errors of the $\mathcal{H}$-matrix multiplication for the boundary element example $\mathbf{V}\mathbf{V}$ with fixed rank truncation.}
\label{fig:vvefr}
\end{figure}
\begin{figure}
\caption{Errors of the $\mathcal{H}$-matrix multiplication for the boundary element example $\mathbf{V}\mathbf{V}$ with $\varepsilon$-rank truncation.}
\label{fig:vveer}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
The multiplication of hierarchical matrices is a widely used algorithm in
engineering. The recursive scheme of the original implementation
and the required upper threshold for the rank make an a-priori error
analysis of the algorithm difficult. Although several approaches to reduce
the number of error-introducing operations have been proposed in the literature,
an algorithm providing a fast multiplication of $\mathcal{H}$-matrices with
a-priori error bounds has so far not been available. By introducing
\texttt{sum}-expressions, which can be seen as a queuing system of low-rank
matrices and $\mathcal{H}$-matrix products, we can postpone the
error-introducing low-rank approximation until the last stage of the
algorithm. We have discussed several adaptive low-rank approximation methods
based on matrix-vector multiplications which make an a-priori error analysis
of the $\mathcal{H}$-matrix multiplication possible. The cost analysis
shows that the cost of our new $\mathcal{H}$-matrix multiplication
algorithm is almost linear with respect to the size of the matrix. In particular,
the numerical experiments show that the new approach can compute the best
approximation of the $\mathcal{H}$-matrix product while being
computationally more efficient than the traditional product.
Parallelization is an important topic on modern computer architectures.
Therefore, we remark that the computation of the low-rank approximations
for the target blocks is easily parallelizable, once the necessary
\texttt{sum}-expression is available. We also remark that the computation
of the \texttt{sum}-expressions does not require concurrent write access,
such that the parallelization on a shared memory machine should be
comparatively easy.
\end{document}
\begin{document}
\title{Approximate solutions of the Dirac equation for the Rosen-Morse
potential including the spin-orbit centrifugal term }
\author{Sameer M. Ikhdair}
\email[E-mail: ]{[email protected]; [email protected]}
\affiliation{Department of Physics, Near East University, Nicosia, North Cyprus, Turkey}
\date{
\today
}
\begin{abstract}
We give the approximate analytic solutions of the Dirac equation for the
Rosen-Morse potential including the spin-orbit centrifugal term. In the
framework of the spin and pseudospin symmetry concept, we obtain the
analytic bound state energy spectra and corresponding two-component upper-
and lower-spinors of the two Dirac particles, in closed form, by means of
the Nikiforov-Uvarov method. The special cases of the $s$-wave $\kappa =\pm
1$ ($l=\widetilde{l}=0$) Rosen-Morse potential, the Eckart-type potential,
the PT-symmetric Rosen-Morse potential and non-relativistic limits are
briefly studied.
Keywords: Dirac equation, spin and pseudospin symmetry, bound states,
Rosen-Morse potential, Nikiforov-Uvarov method.
\end{abstract}
\pacs{03.65.Pm; 03.65.Ge; 02.30.Gp}
\maketitle
\section{Introduction}
Within the framework of the Dirac equation the spin symmetry arises if the
magnitude of the attractive scalar potential $S(r)$ and the repulsive vector
potential $V(r)$ are nearly equal, $S(r)\sim V(r),$ in the nuclei (\textit{i.e.},
when the difference potential $\Delta (r)=V(r)-S(r)=C_{s}=$ constant$).$
However, the pseudospin symmetry occurs if $S(r)$ and $-V(r)$ are nearly equal,
$S(r)\sim -V(r)$ (\textit{i.e.}, when the sum potential $\Sigma (r)=V(r)+S(r)=C_{ps}=$
constant$)$ [1-3]. The spin symmetry is relevant for mesons [4]. The
pseudospin symmetry concept has been applied to many systems in nuclear
physics and related areas [2-7] and used to explain features of deformed
nuclei [8], the super-deformation [9] and to establish an effective nuclear
shell-model scheme [5,6,10]. The pseudospin symmetry introduced in nuclear
theory refers to a quasi-degeneracy of the single-nucleon doublets and can
be characterized with the non-relativistic quantum numbers $(n,l,j=l+1/2)$
and $(n-1,l+2,j=l+3/2),$ where $n,$ $l$ and $j$ are the single-nucleon
radial, orbital and total angular momentum quantum numbers for a single
particle, respectively [5,6]. The total angular momentum is given as $j=
\widetilde{l}+\widetilde{s},$ where $\widetilde{l}=l+1$ is a pseudo-angular
momentum and $\widetilde{s}=1/2$ is a pseudospin angular momentum. In real
nuclei, the pseudospin symmetry is only an approximation and the quality of
approximation depends on the pseudo-centrifugal potential and pseudospin
orbital potential [11]. Alhaidari \textit{et al}. [12] investigated in
detail the physical interpretation of the three-dimensional Dirac equation
in the context of the spin symmetry limit $\Delta (r)=0$ and the pseudospin
symmetry limit $\Sigma (r)=0.$
Some authors have applied the spin and pseudospin symmetry on several
physical potentials, such as the harmonic oscillator [12-15], the
Woods-Saxon potential [16], the Morse potential [17,18], the Hulth\'{e}n
potential [19], the Eckart potential [20-22], the molecular diatomic
three-parameter potential [23], the P\"{o}schl-Teller potential [24], the
Rosen-Morse potential [25] and the generalized Morse potential [26].
The exact solutions of the Dirac equation for the exponential-type
potentials are possible only for the $s$-wave ($l=0$ case). However, for $l$
-states an approximation scheme has to be used to deal with the centrifugal
and pseudo-centrifugal terms. Many authors have used different methods to
study the partially exactly solvable and exactly solvable Schr\"{o}dinger,
Klein-Gordon (KG) and Dirac equations in $1D,3D$ and/or any $D$-dimensional
cases for different potentials [27-39]. In the context of
spatially-dependent mass, we have also used and applied a recently proposed
approximation scheme [40] for the centrifugal term to find a quasi-exact
analytic bound-state solution of the radial KG equation with
spatially-dependent effective mass for scalar and vector Hulth\'{e}n
potentials in any arbitrary dimension $D$ and orbital angular momentum
quantum number $l$ within the framework of the NU method [40-42].
Another physical potential is the Rosen-Morse potential [43] expressed in
the form
\begin{equation}
V(r)=-V_{1}\,\mathrm{sech}^{2}(\alpha r)+V_{2}\tanh (\alpha r),
\end{equation}
where $V_{1}$ and $V_{2}$ denote the depth of the potential and $\alpha $ is
the range of the potential. This potential is useful for describing the
interatomic interaction of linear molecules and helpful for discussing
polyatomic vibration energies such as the vibrational states of the $NH_{3}$
molecule [43]. It is shown that the Rosen-Morse potential and its
PT-symmetric version are the special cases of the five-parameter
exponential-type potential model [44,45]. The exact energy spectrum of the
trigonometric Rosen-Morse potential has been investigated by using
supersymmetric and improved quantization rule methods [46,47].
Recently, many works have been done to solve the Dirac equation to obtain
the energy equation and the two-component spinor wave functions. Jia \textit{
et al.} [48] employed an improved approximation scheme to deal with the
pseudo-centrifugal term to solve the Dirac equation with the generalized P
\"{o}schl-Teller potential for arbitrary spin-orbit quantum number $\kappa .$
Zhang \textit{et al.} [49] solved the Dirac equation with equal Scarf-type
scalar and vector potentials by the method of the supersymmetric quantum
mechanics (SUSYQM), shape invariance approach and the alternative method.
Zou \textit{et al.} [50] solved the Dirac equation with equal Eckart scalar
and vector potentials in terms of SUSYQM method, shape invariance approach
and function analysis method. Wei and Dong [51] obtained approximately the
analytical bound state solutions of the Dirac equation with the
Manning-Rosen for arbitrary spin-orbit coupling quantum number $\kappa .$
Thylwe [52] presented the approach inspired by amplitude-phase method for
analyzing the radial Dirac equation to calculate phase shifts by including
the spin- and pseudo-spin symmetries of relativistic spectra. Alhaidari [53]
solved Dirac equation by separation of variables in spherical coordinates
for a large class of non-central electromagnetic potentials. Berkdemir and
Sever [54] investigated systematically the pseudospin symmetry solution of
the Dirac equation for spin $1/2$ particles moving within the Kratzer
potential connected with an angle-dependent potential. Alberto \textit{et
al. }[55] concluded that the values of energy spectra may not depend on the
spinor structure of the particle, i.e., whether one has a spin-$1/2$ or a
spin-$0$ particle. Also, they showed that a spin-$1/2$ or a spin-$0$
particle with the same mass and subject to the same scalar $S(r)$ and vector
$V(r)$ potentials of equal magnitude, i.e., $S(r)=\pm V(r),$ will have the
same energy spectrum (isospectrality), including both bound and scattering
states.
In the present paper, our aim is to study the analytic solutions of the
Dirac equation for the Rosen-Morse potential with arbitrary spin-orbit
quantum number $\kappa $ by using a new approximation to deal with the
centrifugal term. However, we use the approximation given in Ref. [56] which
is quite different from the ones used in our previous works [39,40,42], $
\frac{1}{r^{2}}\approx \alpha ^{2}\left[ d+\frac{e^{-\alpha r}}{\left(
1-e^{-\alpha r}\right) ^{2}}\right] $ where $d=0$ or $d=\frac{1}{12}.$ The
approximation given in [56] is convenient for the Rosen-Morse type potential
because one may propose a more reasonable physical wave functions for this
system. Under the conditions of the spin symmetry $S(r)\sim V(r)$ and
pseudospin symmetry $S(r)\sim -V(r)$, we investigate the bound state energy
eigenvalues and corresponding upper and lower spinor wave functions in the
framework of the NU method. We also show that the spin and pseudospin
symmetry Dirac solutions can be reduced to the $S(r)=V(r)$ and $S(r)=-V(r)$
in the cases of the exact spin symmetry limit $\Delta (r)=0$ and the pseudospin
symmetry limit $\Sigma (r)=0,$ respectively. Furthermore, the solutions
obtained for the Dirac equation can be easily reduced to the Schr\"{o}dinger
solutions when the appropriate map of parameters is used.
The paper is structured as follows: In Sect. 2, we outline the NU method.
Section 3 is devoted to the analytic bound state solutions of the ($3+1$
)-dimensional Dirac equation for the Rosen-Morse quantum system obtained by
means of the NU method. The spin symmetry and pseudospin symmetry solutions
are investigated. In Sect. 4, we study the cases $\kappa =\pm 1$ ($l=
\widetilde{l}=0,$ i.e., $s$-wave), the Eckart-type potential, and the
PT-symmetric Rosen-Morse potential. Finally, the relevant conclusions are
given in Sect. 5.
\section{NU Method}
The NU method [41] is briefly outlined here. It was proposed to solve the
second-order differential equation of hypergeometric-type:
\begin{equation}
\psi _{n}^{\prime \prime }(r)+\frac{\widetilde{\tau }(r)}{\sigma (r)}\psi
_{n}^{\prime }(r)+\frac{\widetilde{\sigma }(r)}{\sigma ^{2}(r)}\psi
_{n}(r)=0,
\end{equation}
where $\sigma (r)$ and $\widetilde{\sigma }(r)$ are polynomials, at most, of
second-degree, and $\widetilde{\tau }(r)$ is a first-degree polynomial. In
order to find a particular solution for Eq. (2), let us decompose the
wavefunction $\psi _{n}(r)$ as follows:
\begin{equation}
\psi _{n}(r)=\phi (r)y_{n}(r),
\end{equation}
and use
\begin{equation}
\left[ \sigma (r)\rho (r)\right] ^{\prime }=\tau (r)\rho (r),
\end{equation}
to reduce Eq. (2) to the form
\begin{equation}
\sigma (r)y_{n}^{\prime \prime }(r)+\tau (r)y_{n}^{\prime }(r)+\lambda
y_{n}(r)=0,
\end{equation}
with
\begin{equation}
\tau (r)=\widetilde{\tau }(r)+2\pi (r),\text{ }\tau ^{\prime }(r)<0,
\end{equation}
where the prime denotes the differentiation with respect to $r.$ One is
looking for a family of solutions corresponding to
\begin{equation}
\lambda =\lambda _{n}=-n\tau ^{\prime }(r)-\frac{1}{2}n\left( n-1\right)
\sigma ^{\prime \prime }(r),\ \ \ n=0,1,2,\cdots .
\end{equation}
The $y_{n}(r)$ can be expressed in terms of the Rodrigues relation:
\begin{equation}
y_{n}(r)=\frac{B_{n}}{\rho (r)}\frac{d^{n}}{dr^{n}}\left[ \sigma ^{n}(r)\rho
(r)\right] ,
\end{equation}
where $B_{n}$ is the normalization constant and the weight function $\rho
(r) $ is the solution of the differential equation (4). The other part of
the wavefunction (3) must satisfy the following logarithmic equation
\begin{equation}
\frac{\phi ^{\prime }(r)}{\phi (r)}=\frac{\pi (r)}{\sigma (r)}.
\end{equation}
By defining
\begin{equation}
k=\lambda -\pi ^{\prime }(r),
\end{equation}
one obtains the polynomial
\begin{equation}
\pi (r)=\frac{1}{2}\left[ \sigma ^{\prime }(r)-\widetilde{\tau }(r)\right]
\pm \sqrt{\frac{1}{4}\left[ \sigma ^{\prime }(r)-\widetilde{\tau }(r)\right]
^{2}-\widetilde{\sigma }(r)+k\sigma (r)},
\end{equation}
where $\pi (r)$ is a polynomial of degree at most one. The expression under
the square root sign in the above equation can be arranged as a polynomial
of second order whose discriminant must vanish. Hence, an equation for $k$
is obtained; solving it determines the admissible values of $k$ within the
NU method.
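More explicitly, writing the radicand of Eq. (11) as a quadratic in $r$,
\begin{equation*}
\frac{1}{4}\left[ \sigma ^{\prime }(r)-\widetilde{\tau }(r)\right] ^{2}-\widetilde{
\sigma }(r)+k\sigma (r)=a(k)r^{2}+b(k)r+c(k),
\end{equation*}
with coefficients $a(k),$ $b(k),$ $c(k)$ depending linearly on $k,$ the requirement of a
vanishing discriminant, $b(k)^{2}-4a(k)c(k)=0,$ is itself a quadratic equation in $k;$ its
roots are the admissible values of $k,$ and for each of them the square root in Eq. (11)
becomes linear in $r,$ so that $\pi (r)$ is indeed a polynomial of degree at most one.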
In this regard, we derive a parametric generalization of the NU
method valid for any potential solvable by the method. We begin by writing
the hypergeometric equation in general parametric form as
\begin{equation}
\left[ r\left( c_{3}-c_{4}r\right) \right] ^{2}\psi _{n}^{\prime \prime }(r)+
\left[ r\left( c_{3}-c_{4}r\right) \left( c_{1}-c_{2}r\right) \right] \psi
_{n}^{\prime }(r)+\left( -\xi _{1}r^{2}+\xi _{2}r-\xi _{3}\right) \psi
_{n}(r)=0,
\end{equation}
with
\begin{equation}
\widetilde{\tau }(r)=c_{1}-c_{2}r,
\end{equation}
\begin{equation}
\sigma (r)=r\left( c_{3}-c_{4}r\right) ,
\end{equation}
\begin{equation}
\widetilde{\sigma }(r)=-\xi _{1}r^{2}+\xi _{2}r-\xi _{3},
\end{equation}
where $c_{i}$ ($i=1,2,3,4$) are constant coefficients and $
\xi _{j}$ ($j=1,2,3$) are analytic expressions determined by the potential under
consideration. Furthermore, by comparing Eq. (12) with the
counterpart Eq. (2), one obtains the appropriate analytic polynomials,
energy equation and wave functions together with the associated coefficients
expressed in general parametric form as displayed in Appendix A.
\section{Analytic Solution of the Dirac-Rosen-Morse Problem}
In spherical coordinates, the Dirac equation for fermionic massive spin-$
\frac{1}{2}$ particles interacting with arbitrary scalar potential $S(r)$
and the time-component $V(r)$ of a four-vector potential can be expressed as
[26,57-60]
\begin{equation}
\left[ c\mathbf{\alpha }\cdot \mathbf{p+\beta }\left( Mc^{2}+S(r)\right)
+V(r)-E\right] \psi _{n\kappa }(\mathbf{r})=0,\text{ }\psi _{n\kappa }(
\mathbf{r})=\psi (r,\theta ,\phi ),
\end{equation}
where $E$ is the relativistic energy of the system, $M$ is the mass of a
particle, $\mathbf{p}=-i\hbar \mathbf{\nabla }$ is the momentum operator,
and $\mathbf{\alpha }$ and $\mathbf{\beta }$ are $4\times 4$ Dirac matrices,
i.e.,
\begin{equation}
\mathbf{\alpha =}\left(
\begin{array}{cc}
0 & \mathbf{\sigma }_{i} \\
\mathbf{\sigma }_{i} & 0
\end{array}
\right) ,\text{ }\mathbf{\beta =}\left(
\begin{array}{cc}
\mathbf{I} & 0 \\
0 & -\mathbf{I}
\end{array}
\right) ,\text{ }\sigma _{1}\mathbf{=}\left(
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right) ,\text{ }\sigma _{2}\mathbf{=}\left(
\begin{array}{cc}
0 & -i \\
i & 0
\end{array}
\right) ,\text{ }\sigma _{3}\mathbf{=}\left(
\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}
\right) ,
\end{equation}
where $\mathbf{I}$ denotes the $2\times 2$ identity matrix and $\mathbf{
\sigma }_{i}$ are the three-vector Pauli spin matrices. For spherically
symmetric nuclei, the total angular momentum operator of the nucleus $
\mathbf{J}$ and the spin-orbit matrix operator $\mathbf{K}=-\mathbf{\beta }
\left( \mathbf{\sigma }\cdot \mathbf{L}+\mathbf{I}\right) $ commute with the
Dirac Hamiltonian, where $\mathbf{L}$ is the orbital angular momentum
operator. The spinor wavefunctions can be classified according to the radial
quantum number $n$ and the spin-orbit quantum number $\kappa $ and can be
written using the Pauli-Dirac representation in the following forms:
\begin{equation}
\psi _{n\kappa }(\mathbf{r})=\left(
\begin{array}{c}
f_{n\kappa }(\mathbf{r}) \\
g_{n\kappa }(\mathbf{r})
\end{array}
\right) =\frac{1}{r}\left(
\begin{array}{c}
F_{n\kappa }(r)Y_{jm}^{l}(\theta ,\phi ) \\
iG_{n\kappa }(r)Y_{jm}^{\widetilde{l}}(\theta ,\phi )
\end{array}
\right) ,
\end{equation}
where $F_{n\kappa }(r)$ and $G_{n\kappa }(r)$ are the radial wave functions
of the upper- and lower-spinor components, respectively and $
Y_{jm}^{l}(\theta ,\phi )$ and $Y_{jm}^{\widetilde{l}}(\theta ,\phi )$ are
the spherical harmonic functions coupled to the total angular momentum $j$
and its projection $m$ on the $z$ axis. The orbital and pseudo-orbital
angular momentum quantum numbers for spin symmetry $l$ and pseudospin
symmetry $\widetilde{l}$ refer to the upper- and lower-spinor components,
respectively, for which $l(l+1)=\kappa \left( \kappa +1\right) $ and $
\widetilde{l}(\widetilde{l}+1)=\kappa \left( \kappa -1\right) $. The quantum
number $\kappa $ is related to the quantum numbers for spin symmetry $l$ and
pseudospin symmetry $\widetilde{l}$ as
\begin{equation*}
\kappa =\left\{
\begin{array}{cccc}
-\left( l+1\right) =-\left( j+\frac{1}{2}\right) , & (s_{1/2},p_{3/2},\text{
\textit{etc.}}), & \text{ }j=l+\frac{1}{2}, & \text{aligned spin }\left(
\kappa <0\right) , \\
+l=+\left( j+\frac{1}{2}\right) , & \text{ }(p_{1/2},d_{3/2},\text{\textit{
etc.}}), & \text{ }j=l-\frac{1}{2}, & \text{unaligned spin }\left( \kappa
>0\right) ,
\end{array}
\right.
\end{equation*}
and the quasi-degenerate doublet structure can be expressed in terms of a
pseudospin angular momentum $\widetilde{s}=1/2$ and pseudo-orbital angular
momentum $\widetilde{l}$ which is defined as
\begin{equation*}
\kappa =\left\{
\begin{array}{cccc}
-\widetilde{l}=-\left( j+\frac{1}{2}\right) , & (s_{1/2},\text{ }p_{3/2},
\text{ \textit{etc.}}), & j=\widetilde{l}-1/2, & \text{aligned spin }\left(
\kappa <0\right) , \\
+\left( \widetilde{l}+1\right) =+\left( j+\frac{1}{2}\right) , & \text{ }
(d_{3/2},\text{ }f_{5/2},\text{ \textit{etc.}}), & \ j=\widetilde{l}+1/2, &
\text{unaligned spin }\left( \kappa >0\right) ,
\end{array}
\right.
\end{equation*}
where $\kappa =\pm 1,\pm 2,\cdots .$ For example, ($1s_{1/2},0d_{3/2}$) and
(2p$_{3/2},1f_{5/2}$) can be considered as pseudospin doublets.
Thus, the substitution of Eq. (18) into Eq. (16) leads to the following two
radial coupled Dirac equations for the spinor components
\begin{subequations}
\begin{equation}
\left( \frac{d}{dr}+\frac{\kappa }{r}\right) F_{n\kappa }(r)=\left(
Mc^{2}+E_{n\kappa }-\Delta (r)\right) G_{n\kappa }(r),
\end{equation}
\begin{equation}
\left( \frac{d}{dr}-\frac{\kappa }{r}\right) G_{n\kappa }(r)=\left(
Mc^{2}-E_{n\kappa }+\Sigma (r)\right) F_{n\kappa }(r),
\end{equation}
where $\Delta (r)=V(r)-S(r)$ and $\Sigma (r)=V(r)+S(r)$ are the difference
and sum potentials, respectively.
Under the spin symmetry (i.e., $\Delta (r)=C_{s}=$ constant), one can
eliminate $G_{n\kappa }(r)$ in Eq. (19a), with the aid of Eq. (19b), to
obtain a second-order differential equation for the upper-spinor component
as follows [16,26]:
\end{subequations}
\begin{equation*}
\left[ -\frac{d^{2}}{dr^{2}}+\frac{\kappa \left( \kappa +1\right) }{r^{2}}+
\frac{1}{\hbar ^{2}c^{2}}\left( Mc^{2}+E_{n\kappa }-C_{s}\right) \Sigma (r)
\right] F_{n\kappa }(r)
\end{equation*}
\begin{equation}
=\frac{1}{\hbar ^{2}c^{2}}\left( E_{n\kappa }^{2}-M^{2}c^{4}+C_{s}\left(
Mc^{2}-E_{n\kappa }\right) \right) F_{n\kappa }(r),
\end{equation}
where $\kappa \left( \kappa +1\right) =l\left( l+1\right) ,$ $\kappa =l$ for
$\kappa <0$ and $\kappa =-\left( l+1\right) $ for $\kappa >0.$ The spin
symmetry energy eigenvalues depend on $n$ and $\kappa ,$ \textit{i.e.}, $
E_{n\kappa }=E(n,\kappa \left( \kappa +1\right) ).$ For $l\neq 0,$ the
states with $j=l\pm 1/2$ are degenerate. Further, the lower-spinor component
can be obtained from Eq. (19a) as
\begin{equation}
G_{n\kappa }(r)=\frac{1}{Mc^{2}+E_{n\kappa }-C_{s}}\left( \frac{d}{dr}+\frac{
\kappa }{r}\right) F_{n\kappa }(r),
\end{equation}
where $E_{n\kappa }\neq -Mc^{2}$; only real positive energy states exist
when $C_{s}=0$ (exact spin symmetry).
On the other hand, under the pseudospin symmetry (i.e., $\Sigma (r)=C_{ps}=$
constant), one can eliminate $F_{n\kappa }(r)$ in Eq. (19b), with the aid of
Eq. (19a), to obtain a second-order differential equation for the
lower-spinor component as follows [16,26]:
\begin{equation*}
\left[ -\frac{d^{2}}{dr^{2}}+\frac{\kappa \left( \kappa -1\right) }{r^{2}}-
\frac{1}{\hbar ^{2}c^{2}}\left( Mc^{2}-E_{n\kappa }+C_{ps}\right) \Delta (r)
\right] G_{n\kappa }(r)
\end{equation*}
\begin{equation}
=\frac{1}{\hbar ^{2}c^{2}}\left( E_{n\kappa }^{2}-M^{2}c^{4}-C_{ps}\left(
Mc^{2}+E_{n\kappa }\right) \right) G_{n\kappa }(r),
\end{equation}
and the upper-spinor component $F_{n\kappa }(r)$ is obtained from Eq. (19b)
as
\begin{equation}
F_{n\kappa }(r)=\frac{1}{Mc^{2}-E_{n\kappa }+C_{ps}}\left( \frac{d}{dr}-
\frac{\kappa }{r}\right) G_{n\kappa }(r),
\end{equation}
where $E_{n\kappa }\neq Mc^{2}$; only real negative energy states exist when
$C_{ps}=0$ (exact pseudospin symmetry). From the above equations, the energy
eigenvalues depend on the quantum numbers $n$ and $\kappa $, and also the
pseudo-orbital angular quantum number $\widetilde{l}$ according to $\kappa
(\kappa -1)=\widetilde{l}(\widetilde{l}+1),$ which implies that $j=
\widetilde{l}\pm 1/2$ are degenerate for $\widetilde{l}\neq 0.$ The quantum
condition is obtained from the finiteness of the solution at infinity and at
the origin point, i.e., $F_{n\kappa }(0)=G_{n\kappa }(0)=0$ and $F_{n\kappa
}(\infty )=G_{n\kappa }(\infty )=0.$
At this stage, we take the vector and scalar potentials in the form of
Rosen-Morse potential model (see Eq. (1)). Equations (20) and (22) can be
solved exactly for $\kappa =0,-1$ and $\kappa =0,1,$ respectively, because
of the spin-orbit centrifugal and pseudo-centrifugal terms. Therefore, to
find approximate solution for the radial Dirac equation with the Rosen-Morse
potential, we have to use an approximation for the spin-orbit centrifugal
term. For values of $\kappa $ that are not large and for small-amplitude
vibrations about the minimum, Lu [56] has introduced an approximation to the
centrifugal term near the minimum point $r=r_{e}$ as
\begin{equation}
\frac{1}{r^{2}}\approx \frac{1}{r_{e}^{2}}\left[ D_{0}+D_{1}\frac{-\exp
(-2\alpha r)}{1+\exp (-2\alpha r)}+D_{2}\left( \frac{-\exp (-2\alpha r)}{
1+\exp (-2\alpha r)}\right) ^{2}\right] ,
\end{equation}
where
\begin{equation*}
D_{0}=1-\left( \frac{1+\exp (-2\alpha r_{e})}{2\alpha r_{e}}\right)
^{2}\left( \frac{8\alpha r_{e}}{1+\exp (-2\alpha r_{e})}-\left( 3+2\alpha
r_{e}\right) \right) ,
\end{equation*}
\begin{equation*}
D_{1}=-2\left( \exp (2\alpha r_{e})+1\right) \left[ 3\left( \frac{1+\exp
(-2\alpha r_{e})}{2\alpha r_{e}}\right) -\left( 3+2\alpha r_{e}\right)
\left( \frac{1+\exp (-2\alpha r_{e})}{2\alpha r_{e}}\right) \right] ,
\end{equation*}
\begin{equation}
D_{2}=\left( \exp (2\alpha r_{e})+1\right) ^{2}\left( \frac{1+\exp (-2\alpha
r_{e})}{2\alpha r_{e}}\right) ^{2}\left( 3+2\alpha r_{e}-\frac{4\alpha r_{e}
}{1+\exp (-2\alpha r_{e})}\right) ,
\end{equation}
and higher order terms are neglected.
\subsection{Spin symmetry solution of the Rosen-Morse Problem}
We take the sum potential in Eq. (20) as the Rosen-Morse potential model,
i.e.,
\begin{equation}
\Sigma (r)=-4V_{1}\frac{\exp (-2\alpha r)}{\left( 1+\exp (-2\alpha r)\right)
^{2}}+V_{2}\frac{\left( 1-\exp (-2\alpha r)\right) }{\left( 1+\exp (-2\alpha
r)\right) }.
\end{equation}
The choice of $\Sigma (r)=2V(r)\rightarrow V(r)$ as mentioned in Ref. [12]
enables one to reduce the resulting relativistic solutions to their
non-relativistic limit under appropriate transformations.
Using the approximation given by Eq. (24) and introducing the change of
variable $z(r)=-\exp (-2\alpha r)$, we can recast the
spin-symmetric Dirac equation (20) as the Schr\"{o}dinger-like equation in
the spherical coordinates for the upper-spinor component $F_{n\kappa }(r),$
\begin{equation}
\left[ \frac{d^{2}}{dz^{2}}+\frac{\left( 1-z\right) }{z\left( 1-z\right) }
\frac{d}{dz}+\frac{\left( -\beta _{1}z^{2}+\beta _{2}z-\varepsilon _{n\kappa
}^{2}\right) }{z^{2}\left( 1-z\right) ^{2}}\right] F_{n\kappa }(z)=0,\text{ }
F_{n\kappa }(0)=F_{n\kappa }(1)=0,
\end{equation}
with
\begin{subequations}
\begin{equation}
\text{ }\varepsilon _{n\kappa }=\frac{1}{2\alpha }\sqrt{\frac{\omega }{
r_{e}^{2}}D_{0}+\frac{1}{\hbar ^{2}c^{2}}\left( Mc^{2}+E_{n\kappa
}-C_{s}\right) \left( Mc^{2}-E_{n\kappa }+V_{2}\right) }>0,
\end{equation}
\begin{equation}
\beta _{1}=\frac{1}{4\alpha ^{2}}\left\{ \frac{\omega }{r_{e}^{2}}\left(
D_{0}+D_{1}+D_{2}\right) +\frac{1}{\hbar ^{2}c^{2}}\left( Mc^{2}+E_{n\kappa
}-C_{s}\right) \left( Mc^{2}-E_{n\kappa }-V_{2}\right) \right\} ,\text{ }
\end{equation}
\begin{equation}
\beta _{2}=\frac{1}{4\alpha ^{2}}\left\{ \frac{\omega }{r_{e}^{2}}\left(
2D_{0}+D_{1}\right) +\frac{2}{\hbar ^{2}c^{2}}\left( Mc^{2}+E_{n\kappa
}-C_{s}\right) \left( Mc^{2}-E_{n\kappa }-2V_{1}\right) \right\} ,\text{ }
\end{equation}
where $\omega =\kappa \left( \kappa +1\right) .$
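For completeness, we note the elementary step behind this transformation: with $
z=-e^{-2\alpha r}$ one has $dz/dr=-2\alpha z,$ so that
\begin{equation*}
\frac{d}{dr}=-2\alpha z\frac{d}{dz},\qquad \frac{d^{2}}{dr^{2}}=4\alpha ^{2}\left(
z^{2}\frac{d^{2}}{dz^{2}}+z\frac{d}{dz}\right) .
\end{equation*}
Inserting these relations, together with the approximation (24) and the potential (26),
into Eq. (20), dividing the resulting equation by $4\alpha ^{2}z^{2}$ and collecting all
terms over the common denominator $z^{2}\left( 1-z\right) ^{2}$ yields the
hypergeometric-type form (27) with the parameters (28a)-(28c).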
In order to solve Eq. (27) by means of the NU method, we should compare it
with Eq. (2) to obtain the following particular values for the parameters:
\end{subequations}
\begin{equation}
\widetilde{\tau }(z)=1-z,\text{\ }\sigma (z)=z\left( 1-z\right) ,\text{\ }
\widetilde{\sigma }(z)=-\beta _{1}z^{2}+\beta _{2}z-\varepsilon _{n\kappa
}^{2}.
\end{equation}
Comparing Eqs. (13)-(15) with Eq. (29), we can easily obtain the
coefficients $c_{i}$ ($i=1,2,3,4$) and the analytic expressions $\xi _{j}$ ($
j=1,2,3$). However, the values of the coefficients $c_{i}$ ($i=5,6,\cdots
,16 $) are found from the relations A1-A5 of Appendix A. Therefore, the
specific values of the coefficients $c_{i}$ ($i=1,2,\cdots ,16$) together
with $\xi _{j}$ ($j=1,2,3$) are displayed in Table 1. From the relations A6
and A7 of Appendix A together with the coefficients in Table 1, the selected
forms of $\pi (z)$ and $k$ take the following particular values
\begin{equation}
\pi (z)=\varepsilon _{n\kappa }-\left( 1+\varepsilon _{n\kappa }+\delta
\right) z,
\end{equation}
\begin{equation}
k=\beta _{2}-\left[ 2\varepsilon _{n\kappa }^{2}+\left( 2\delta +1\right)
\varepsilon _{n\kappa }\right] ,
\end{equation}
respectively, where
\begin{equation}
\delta =\frac{1}{2}\left( -1+\sqrt{1+\frac{\omega D_{2}}{\alpha ^{2}r_{e}^{2}
}+\frac{4V_{1}}{\alpha ^{2}\hbar ^{2}c^{2}}\left( Mc^{2}+E_{n\kappa
}-C_{s}\right) }\right) ,
\end{equation}
for bound state solutions. According to the NU method, the relations A8 and
A9 of Appendix A give
\begin{equation*}
\tau (z)=1+2\varepsilon _{n\kappa }-\left( 3+2\varepsilon _{n\kappa
}+2\delta \right) z,
\end{equation*}
\begin{equation}
\tau ^{\prime }(z)=-\left( 3+2\varepsilon _{n\kappa }+2\delta
\right) <0,
\end{equation}
where the prime denotes the derivative with respect to $z.$ In addition, the
relation A10 of Appendix A gives the energy equation for the Rosen-Morse
potential in the Dirac theory as
\begin{equation*}
\left( Mc^{2}+E_{n\kappa }-C_{s}\right) \left( Mc^{2}-E_{n\kappa
}+V_{2}\right) =-\frac{\omega D_{0}}{r_{e}^{2}}\hbar ^{2}c^{2}
\end{equation*}
\begin{equation}
+\alpha ^{2}\hbar ^{2}c^{2}\left[ \frac{-\frac{V_{2}}{2\alpha ^{2}\hbar
^{2}c^{2}}\left( Mc^{2}+E_{n\kappa }-C_{s}\right) +\frac{\omega \left(
D_{1}+D_{2}\right) }{4\alpha ^{2}r_{e}^{2}}}{\left( n+\delta +1\right) }
-\left( n+\delta +1\right) \right] ^{2}.
\end{equation}
Further, for the exact spin symmetry case, $V(r)=S(r)$ or $C_{s}=0$, we
obtain
\begin{equation*}
\left( Mc^{2}+E_{n\kappa }\right) \left( Mc^{2}-E_{n\kappa }+V_{2}\right) =-
\frac{\omega D_{0}}{r_{e}^{2}}\hbar ^{2}c^{2}
\end{equation*}
\begin{equation}
+\alpha ^{2}\hbar ^{2}c^{2}\left[ \frac{-\frac{V_{2}}{2\alpha ^{2}\hbar
^{2}c^{2}}\left( Mc^{2}+E_{n\kappa }\right) +\frac{\omega \left(
D_{1}+D_{2}\right) }{4\alpha ^{2}r_{e}^{2}}}{\left( n+\widetilde{\delta }
+1\right) }-\left( n+\widetilde{\delta }+1\right) \right] ^{2},
\end{equation}
with
\begin{equation}
\widetilde{\delta }=\delta (C_{s}\rightarrow 0).
\end{equation}
Let us now find the corresponding wave functions for this model. Referring
to Table 1 and the relations A11 and A12 of Appendix A, we find the
functions:
\begin{equation}
\rho (z)=z^{2\varepsilon _{n\kappa }}\left( 1-z\right) ^{2\delta +1},
\end{equation}
\begin{equation}
\phi (r)=z^{\varepsilon _{n\kappa }}\left( 1-z\right) ^{\delta +1}.
\end{equation}
Hence, the relation A13 of Appendix A gives
\begin{equation}
y_{n}(z)=A_{n}z^{-2\varepsilon _{n\kappa }}\left( 1-z\right) ^{-\left(
2\delta +1\right) }\frac{d^{n}}{dz^{n}}\left[ z^{n+2\varepsilon _{n\kappa
}}\left( 1-z\right) ^{n+2\delta +1}\right] \sim P_{n}^{\left( 2\varepsilon
_{n\kappa },2\delta +1\right) }(1-2z),\text{ }z\in \lbrack 0,1],
\end{equation}
where the Jacobi polynomial $P_{n}^{\left( \mu ,\nu \right) }(x)$ is defined
only for $\mu >-1,$ $\nu >-1,$ and for the argument $x\in \left[ -1,+1\right]
.$ By using $F_{n\kappa }(z)=\phi (z)y_{n}(z),$ we get the radial
upper-spinor wave functions from the relation A14 as
\begin{equation*}
F_{n\kappa }(z)=\mathcal{N}_{n\kappa }z^{\varepsilon _{n\kappa }}\left(
1-z\right) ^{\delta +1}P_{n}^{\left( 2\varepsilon _{n\kappa },2\delta
+1\right) }(1-2z)
\end{equation*}
\begin{equation*}
=\mathcal{N}_{n\kappa }\left( \exp (-2\alpha r)\right) ^{\varepsilon
_{n\kappa }}\left( 1-\exp (-2\alpha r)\right) ^{\delta +1}
\end{equation*}
\begin{equation}
\times
\begin{array}{c}
_{2}F_{1}
\end{array}
\left( -n,n+2\left( \varepsilon _{n\kappa }+\delta +1\right) ;2\varepsilon
_{n\kappa }+1;-\exp (-2\alpha r)\right) .
\end{equation}
The above upper-spinor component satisfies the restriction condition for the
bound states, \textit{i.e.}, $\delta >0$ and $\varepsilon _{n\kappa }>0.$
The normalization constants $\mathcal{N}_{n\kappa }$ are calculated in
Appendix B.
Before presenting the corresponding lower-component $G_{n\kappa }(r),$ let
us recall a recurrence relation of hypergeometric function, which is used to
solve Eq. (21) and present the corresponding lower component $G_{n\kappa
}(r),$
\begin{equation}
\frac{d}{dz}\left[
\begin{array}{c}
_{2}F_{1}
\end{array}
\left( a,b;c;z\right) \right] =\left( \frac{ab}{c}\right)
\begin{array}{c}
_{2}F_{1}
\end{array}
\left( a+1,b+1;c+1;z\right) ,
\end{equation}
with which the corresponding lower component $G_{n\kappa }(r)$ can be
obtained as follows
\begin{equation*}
G_{n\kappa }(r)=\frac{\mathcal{N}_{n\kappa }\left( -\exp (-2\alpha r)\right)
^{\varepsilon _{n\kappa }}(1+\exp (-2\alpha r))^{\delta +1}}{\left(
Mc^{2}+E_{n\kappa }-C_{s}\right) }\left[ -2\alpha \varepsilon _{n\kappa }-
\frac{2\alpha \left( \delta +1\right) \exp (-2\alpha r)}{\left( 1+\exp
(-2\alpha r)\right) }+\frac{\kappa }{r}\right]
\end{equation*}
\begin{equation*}
\times
\begin{array}{c}
_{2}F_{1}
\end{array}
\left( -n,n+2\left( \varepsilon _{n\kappa }+\delta +1\right) ;2\varepsilon
_{n\kappa }+1;-\exp (-2\alpha r)\right)
\end{equation*}
\begin{equation*}
+\mathcal{N}_{n\kappa }\left[ \frac{2\alpha n\left[ n+2\left( \varepsilon
_{n\kappa }+\delta +1\right) \right] \left( -\exp (-2\alpha r)\right)
^{\varepsilon _{n\kappa }+1}\left( 1+\exp (-2\alpha r)\right) ^{\delta +1}}{
\left( 2\varepsilon _{n\kappa }+1\right) \left( Mc^{2}+E_{n\kappa
}-C_{s}\right) }\right]
\end{equation*}
\begin{equation}
\times
\begin{array}{c}
_{2}F_{1}
\end{array}
\left( -n+1;n+2\left( \varepsilon _{n\kappa }+\delta +\frac{3}{2}\right)
;2\left( \varepsilon _{n\kappa }+1\right) ;-\exp (-2\alpha r)\right) ,
\end{equation}
where $E_{n\kappa }\neq -Mc^{2}$ for exact spin symmetry. Here, it should be
noted that the hypergeometric series $
\begin{array}{c}
_{2}F_{1}
\end{array}
\left( -n,n+2\left( \varepsilon _{n\kappa }+\delta +1\right) ;2\varepsilon
_{n\kappa }+1;-\exp (-2\alpha r)\right) $ does not terminate for $n=0$ and
thus does not diverge for all values of real parameters $\delta $ and $
\varepsilon _{n\kappa }.$
For $C_{s}>Mc^{2}+E_{n\kappa }$ and $E_{n\kappa }<Mc^{2}+V_{2},$ or $
C_{s}<Mc^{2}+E_{n\kappa }$ and $E_{n\kappa }>Mc^{2}+V_{2},$ we note that
the parameter given in Eq. (28a) turns out to be imaginary, i.e., $\varepsilon
_{n\kappa }^{2}<0$ in the $s$-state ($\kappa =-1$). As a result, the
conditions for the existence of bound states are $\varepsilon _{n\kappa }>0$ and $
\delta >0;$ that is to say, in the case of $C_{s}<Mc^{2}+E_{n\kappa }$ and $
E_{n\kappa }<Mc^{2}+V_{2},$ bound states do exist for some quantum numbers $
\kappa ,$ such as the $s$-state ($\kappa =-1$). Of course, if these
conditions for the existence of bound states are satisfied, the energy equation and
wave functions are the same as those given in Eq. (34) and Eqs. (40) and
(42).
\subsection{Pseudospin symmetry solution of the Rosen-Morse Problem}
Now taking the difference potential in Eq. (22) as the Rosen-Morse potential
model, i.e.,
\begin{equation}
\Delta (r)=-4V_{1}\frac{\exp (-2\alpha r)}{\left( 1+\exp (-2\alpha r)\right)
^{2}}+V_{2}\frac{\left( 1-\exp (-2\alpha r)\right) }{\left( 1+\exp (-2\alpha
r)\right) },
\end{equation}
leads us to obtain a Schr\"{o}dinger-like equation for the lower-spinor
component $G_{n\kappa }(r),$
\begin{equation}
\left[ \frac{d^{2}}{dz^{2}}+\frac{\left( 1-z\right) }{z\left( 1-z\right) }
\frac{d}{dz}+\frac{\left( -\widetilde{\beta }_{1}z^{2}+\widetilde{\beta }
_{2}z-\widetilde{\varepsilon }_{n\kappa }^{2}\right) }{z^{2}\left(
1-z\right) ^{2}}\right] G_{n\kappa }(z)=0,\text{ }G_{n\kappa }(0)=G_{n\kappa
}(1)=0,
\end{equation}
where
\begin{subequations}
\begin{equation}
\text{ }\widetilde{\varepsilon }_{n\kappa }=\frac{1}{2\alpha }\sqrt{\frac{
\widetilde{\omega }}{r_{e}^{2}}D_{0}-\frac{1}{\hbar ^{2}c^{2}}\left[
E_{n\kappa }^{2}-M^{2}c^{4}-\left( Mc^{2}+E_{n\kappa }\right) C_{ps}+\left(
Mc^{2}-E_{n\kappa }+C_{ps}\right) V_{2}\right] }>0,
\end{equation}
\begin{equation}
\widetilde{\beta }_{1}=\frac{1}{4\alpha ^{2}}\left\{ \frac{\widetilde{\omega
}}{r_{e}^{2}}\left( D_{0}+D_{1}+D_{2}\right) -\frac{1}{\hbar ^{2}c^{2}}\left[
E_{n\kappa }^{2}-M^{2}c^{4}-\left( Mc^{2}+E_{n\kappa }\right) C_{ps}-\left(
Mc^{2}-E_{n\kappa }+C_{ps}\right) V_{2}\right] \right\} ,\text{ }
\end{equation}
\begin{equation}
\widetilde{\beta }_{2}=\frac{1}{4\alpha ^{2}}\left\{ \frac{\widetilde{\omega
}}{r_{e}^{2}}\left( 2D_{0}+D_{1}\right) -\frac{2}{\hbar ^{2}c^{2}}\left[
E_{n\kappa }^{2}-M^{2}c^{4}-\left( Mc^{2}+E_{n\kappa }\right) C_{ps}-2\left(
Mc^{2}-E_{n\kappa }+C_{ps}\right) V_{1}\right] \right\} ,\text{ }
\end{equation}
and $\widetilde{\omega }=\kappa \left( \kappa -1\right) .$ To avoid
repetition in the solution of Eq. (44), a first inspection of the
relationship between the present set of parameters $(\widetilde{\varepsilon }
_{n\kappa },\widetilde{\beta }_{1},\widetilde{\beta }_{2})$ and the previous
set $(\varepsilon _{n\kappa },\beta _{1},\beta _{2})$ tells us that the
negative energy solution for pseudospin symmetry, where $S(r)=-V(r),$ can be
obtained directly from the positive energy solution above for spin
symmetry by using the parameter map [57-59]:
\end{subequations}
\begin{equation}
F_{n\kappa }(r)\leftrightarrow G_{n\kappa }(r),V(r)\rightarrow -V(r)\text{
(or }V_{1}\rightarrow -V_{1}\text{ and }V_{2}\rightarrow -V_{2}\text{)},
\text{ }E_{n\kappa }\rightarrow -E_{n\kappa }\text{ and }C_{s}\rightarrow
-C_{ps}.
\end{equation}
Following the previous results with the above transformations, we finally
arrive at the energy equation
\begin{equation*}
\left( Mc^{2}-E_{n\kappa }+C_{ps}\right) \left( Mc^{2}+E_{n\kappa
}-V_{2}\right) =-\frac{\widetilde{\omega }D_{0}}{r_{e}^{2}}\hbar ^{2}c^{2}
\end{equation*}
\begin{equation}
+\alpha ^{2}\hbar ^{2}c^{2}\left[ \frac{\frac{V_{2}}{2\alpha ^{2}\hbar
^{2}c^{2}}\left( Mc^{2}-E_{n\kappa }+C_{ps}\right) +\frac{\widetilde{\omega }
\left( D_{1}+D_{2}\right) }{4\alpha ^{2}r_{e}^{2}}}{\left( n+\delta
_{1}+1\right) }-\left( n+\delta _{1}+1\right) \right] ^{2},
\end{equation}
where
\begin{equation}
\delta _{1}=\frac{1}{2}\left( -1+\sqrt{1+\frac{\widetilde{\omega }D_{2}}{
\alpha ^{2}r_{e}^{2}}-\frac{4V_{1}}{\alpha ^{2}\hbar ^{2}c^{2}}\left(
Mc^{2}-E_{n\kappa }+C_{ps}\right) }\right) .
\end{equation}
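As a brief consistency check, which we add here for clarity, applying the above
parameter map to the factor appearing on the left-hand side of Eq. (34) gives
\begin{equation*}
\left( Mc^{2}+E_{n\kappa }-C_{s}\right) \left( Mc^{2}-E_{n\kappa
}+V_{2}\right) \;\longrightarrow \;\left( Mc^{2}-E_{n\kappa }+C_{ps}\right)
\left( Mc^{2}+E_{n\kappa }-V_{2}\right) ,
\end{equation*}
which is precisely the left-hand side of the pseudospin energy equation (47).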
By using $G_{n\kappa }(z)=\phi (z)y_{n}(z),$ we get the radial lower-spinor
wave functions as
\begin{equation}
G_{n\kappa }(r)=\widetilde{\mathcal{N}}_{n\kappa }\left( \exp (-2\alpha
r)\right) ^{\widetilde{\varepsilon }_{n\kappa }}\left( 1-\exp (-2\alpha
r)\right) ^{\delta _{1}+1}P_{n}^{\left( 2\widetilde{\varepsilon }_{n\kappa
},2\delta _{1}+1\right) }(1-2\exp (-2\alpha r)).
\end{equation}
The above lower-spinor component satisfies the restriction conditions for the
bound states, \textit{i.e.}, $\delta _{1}>0$ and $\widetilde{\varepsilon }
_{n\kappa }>0.$ The normalization constants $\widetilde{\mathcal{N}}_{n\kappa }$
are calculated in Appendix B.
\section{Discussions}
In this section, we study four special cases of the energy
eigenvalues given by Eqs. (34) and (47) for the spin and pseudospin
symmetry, respectively. First, let us study the $s$-wave case, $l=0$ ($\kappa =-1$
) and $\widetilde{l}=0$ ($\kappa =1$):
\begin{equation}
\left( Mc^{2}+E_{n,-1}-C_{s}\right) \left( Mc^{2}-E_{n,-1}+V_{2}\right)
=\alpha ^{2}\hbar ^{2}c^{2}\left[ \frac{\frac{V_{2}}{2\alpha ^{2}\hbar
^{2}c^{2}}\left( Mc^{2}+E_{n,-1}-C_{s}\right) }{n+\delta _{2}+1}+n+\delta
_{2}+1\right] ^{2},
\end{equation}
where
\begin{equation}
\delta _{2}=\frac{1}{2}\left( -1+\sqrt{1+\frac{4V_{1}}{\alpha ^{2}\hbar
^{2}c^{2}}\left( Mc^{2}+E_{n,-1}-C_{s}\right) }\right) .
\end{equation}
Setting $C_{s}=0$ in Eq. (50) and $C_{ps}=0$ in Eq. (47), we obtain,
for the spin- and pseudospin-symmetric Dirac theory,
\begin{equation}
\left( Mc^{2}+E_{n,-1}\right) \left( Mc^{2}-E_{n,-1}+V_{2}\right) =\alpha
^{2}\hbar ^{2}c^{2}\left[ \frac{\frac{V_{2}}{2\alpha ^{2}\hbar ^{2}c^{2}}
\left( Mc^{2}+E_{n,-1}\right) }{n+\delta _{-1}+1}+n+\delta _{-1}+1\right]
^{2},
\end{equation}
\begin{equation}
\delta _{-1}=\frac{1}{2}\left( -1+\sqrt{1+\frac{4V_{1}}{\alpha ^{2}\hbar
^{2}c^{2}}\left( Mc^{2}+E_{n,-1}\right) }\right) ,
\end{equation}
and
\begin{equation}
\left( Mc^{2}-E_{n,+1}\right) \left( Mc^{2}+E_{n,+1}-V_{2}\right) =+\alpha
^{2}\hbar ^{2}c^{2}\left[ -\frac{\frac{V_{2}}{2\alpha ^{2}\hbar ^{2}c^{2}}
\left( Mc^{2}-E_{n,+1}\right) }{n+\delta _{+1}+1}+n+\delta _{+1}+1\right]
^{2},
\end{equation}
\begin{equation}
\delta _{+1}=\frac{1}{2}\left( -1+\sqrt{1-\frac{4V_{1}}{\alpha ^{2}\hbar
^{2}c^{2}}\left( Mc^{2}-E_{n,+1}\right) }\right) .
\end{equation}
respectively. The above $s$-wave solutions are seen to have the same form in
the spin and pseudospin cases, $S(r)=V(r)$ and $S(r)=-V(r).$
Second, when we set $V_{1}\rightarrow -V_{1}$ and $V_{2}\rightarrow -V_{2}$
the potential reduces to the Eckart-type potential and energy eigenvalues
are given by
\begin{equation}
\left( Mc^{2}+E_{n,-1}\right) \left( Mc^{2}-E_{n,-1}-V_{2}\right) =\alpha
^{2}\hbar ^{2}c^{2}\left[ -\frac{\frac{V_{2}}{2\alpha ^{2}\hbar ^{2}c^{2}}
\left( Mc^{2}+E_{n,-1}\right) }{n+\delta _{-1}+1}+n+\delta _{-1}+1\right]
^{2},
\end{equation}
\begin{equation}
\delta _{-1}=\frac{1}{2}\left( -1+\sqrt{1-\frac{4V_{1}}{\alpha ^{2}\hbar
^{2}c^{2}}\left( Mc^{2}+E_{n,-1}\right) }\right) ,
\end{equation}
for spin symmetry and
\begin{equation}
\left( Mc^{2}-E_{n,+1}\right) \left( Mc^{2}+E_{n,+1}+V_{2}\right) =+\alpha
^{2}\hbar ^{2}c^{2}\left[ \frac{\frac{V_{2}}{2\alpha ^{2}\hbar ^{2}c^{2}}
\left( Mc^{2}-E_{n,+1}\right) }{n+\delta _{+1}+1}+n+\delta _{+1}+1\right]
^{2},
\end{equation}
\begin{equation}
\delta _{+1}=\frac{1}{2}\left( -1+\sqrt{1+\frac{4V_{1}}{\alpha ^{2}\hbar
^{2}c^{2}}\left( Mc^{2}-E_{n,+1}\right) }\right) ,
\end{equation}
for pseudospin symmetry.
Third, let us now discuss the non-relativistic limit of the energy
eigenvalues and wave functions of our solution. If we take $C_{s}=0$ and put
$S(r)=V(r)=\Sigma (r),$ then under the appropriate transformations $
\left( Mc^{2}+E_{n\kappa }\right) /\hbar ^{2}c^{2}\rightarrow 2\mu /\hbar
^{2}$ and $Mc^{2}-E_{n\kappa }\rightarrow -E_{nl}$ [26,57,42], the
non-relativistic limits of the energy equation (34) and the wave functions (40) become
\begin{equation}
E_{nl}=V_{2}+\frac{\omega D_{0}}{2\mu r_{e}^{2}}\hbar ^{2}-\frac{\hbar ^{2}}{
2\mu }\alpha ^{2}\left[ \frac{\frac{\mu }{\alpha ^{2}\hbar ^{2}}\left(
2V_{1}+V_{2}\right) -\frac{\omega D_{1}}{4\alpha ^{2}r_{e}^{2}}+\left(
n+1\right) ^{2}+\left( 2n+1\right) \widetilde{\delta }_{0}}{n+\widetilde{
\delta }_{0}+1}\right] ^{2},
\end{equation}
with
\begin{equation}
\widetilde{\delta }_{0}=\frac{1}{2}\left( -1+\sqrt{1+\frac{8\mu V_{1}}{
\alpha ^{2}\hbar ^{2}}+\frac{\omega D_{2}}{\alpha ^{2}r_{e}^{2}}}\right) ,
\end{equation}
and the associated wave functions are
\begin{equation}
F_{nl}(r)=\mathcal{N}_{nl}\left( \exp (-2\alpha r)\right) ^{\varepsilon
_{n\kappa }}\left( 1-\exp (-2\alpha r)\right) ^{1+\widetilde{\delta }
_{0}}P_{n}^{\left( 2\varepsilon _{n\kappa },2\widetilde{\delta }
_{0}+1\right) }(1-2\exp (-2\alpha r)),
\end{equation}
where
\begin{subequations}
\begin{equation}
\text{ }\varepsilon _{nl}=\frac{1}{2\alpha }\sqrt{\frac{\omega }{r_{e}^{2}}
D_{0}+\frac{2\mu }{\hbar ^{2}}\left( V_{2}-E_{nl}\right) }>0,\text{ }\omega
=l(l+1),
\end{equation}
which are identical with the corresponding solutions of the Schr\"{o}dinger
equation given in Ref. [25]. Finally, the Jacobi polynomials can be expressed in terms of the
hypergeometric function as
\end{subequations}
\begin{equation}
P_{n}^{\left( \mu ,\nu \right) }(1-2\exp (-2\alpha r))=\frac{\left( \mu
+1\right) _{n}}{n!}
\begin{array}{c}
_{2}F_{1}
\end{array}
\left( -n,1+\mu +\nu +n;\mu +1;\exp (-2\alpha r)\right) ,
\end{equation}
where $z=\exp (-2\alpha r)\in \left[ 0,1\right] ,$ so that the argument
$1-2z$ lies within or on the boundary of the interval $\left[ -1,1\right] .$
Fourth, if we choose $V_{2}\rightarrow iV_{2},$ the potential becomes the $
PT $-symmetric Rosen-Morse potential, where $P$ denotes the parity operator and $
T$ denotes time reversal. For a potential $V(r),$ if under the
transformations $r\rightarrow -r$ (or $r\rightarrow \xi -r$) and $i\rightarrow -i$
we have the relation $V(-r)=V^{\ast }(r),$ the potential $V(r)$ is said to
be $PT$-symmetric [43]. In this case we obtain for the spin-symmetric Dirac
equation
\begin{equation*}
\left( Mc^{2}+E_{n\kappa }\right) \left( Mc^{2}-E_{n\kappa }+iV_{2}\right) =-
\frac{\omega D_{0}}{r_{e}^{2}}\hbar ^{2}c^{2}
\end{equation*}
\begin{equation}
+\alpha ^{2}\hbar ^{2}c^{2}\left[ \frac{-\frac{iV_{2}}{2\alpha ^{2}\hbar
^{2}c^{2}}\left( Mc^{2}+E_{n\kappa }-C_{s}\right) +\frac{\omega \left(
D_{1}+D_{2}\right) }{4\alpha ^{2}r_{e}^{2}}}{\left( n+\widetilde{\delta }
+1\right) }-\left( n+\widetilde{\delta }+1\right) \right] ^{2}.
\end{equation}
In the non-relativistic limit, this becomes
\begin{equation}
E_{nl}=iV_{2}+\frac{\omega \hbar ^{2}D_{0}}{2\mu r_{e}^{2}}-\frac{\hbar ^{2}
}{2\mu }\alpha ^{2}\left[ \frac{\frac{\mu }{\alpha ^{2}\hbar ^{2}}\left(
2V_{1}+iV_{2}\right) -\frac{\omega D_{1}}{4\alpha ^{2}r_{e}^{2}}+\left(
n+1\right) ^{2}+\left( 2n+1\right) \widetilde{\delta }_{0}}{n+\widetilde{
\delta }_{0}+1}\right] ^{2},
\end{equation}
where $V_{1}>0$ is real; this result is identical to that of Ref. [25]. If one
sets $l=0$ in the above equation, the result coincides with that of Refs.
[44,45].
\section{Conclusions}
We have obtained analytically the energy spectra and corresponding wave
functions of the Dirac equation for the Rosen-Morse potential under the
conditions of spin symmetry and pseudospin symmetry in the context of
the Nikiforov-Uvarov method. For any spin-orbit coupling term $
\kappa ,$ we have found explicit expressions for the energy eigenvalues and
associated wave functions in closed form. The most interesting
result is that the present spin and pseudospin symmetry cases can easily be
reduced to the KG solution once $S(r)=V(r)$ and $S(r)=-V(r)$ (\textit{i.e}.,
$C_{s}=C_{ps}=0$) [55]. The resulting wave functions are
expressed in terms of the generalized Jacobi polynomials. Obviously,
the relativistic solution can be reduced to its non-relativistic limit by
the choice of appropriate mapping transformations. Also, in the case where the
spin-orbit quantum number $\kappa =0,$ the problem reduces to the $s$-wave
solution. The $s$-wave Rosen-Morse, the Eckart-type potential, the
$PT$-symmetric Rosen-Morse potential, and the non-relativistic cases are
briefly studied.
\acknowledgments The partial support provided by the Scientific and
Technological Research Council of Turkey (T\"{U}B\.{I}TAK) is highly
appreciated. The author thanks the anonymous kind referees and editors for
the very constructive comments and suggestions.
\appendix
\section{Parametric Generalization of the NU Method}
Our systematic derivation holds for any potential of this general form.
(i) The relevant coefficients $c_{i}$ ($i=5,6,\cdots ,16$) are given as
follows:
\begin{equation}
c_{5}=\frac{1}{2}\left( c_{3}-c_{1}\right) ,\text{ }c_{6}=\frac{1}{2}\left(
c_{2}-2c_{4}\right) ,\text{ }c_{7}=c_{6}^{2}+\xi _{1},
\end{equation}
\begin{equation}
\text{ }c_{8}=2c_{5}c_{6}-\xi _{2},\text{ }c_{9}=c_{5}^{2}+\xi _{3},\text{ }
c_{10}=c_{4}\left( c_{3}c_{8}+c_{4}c_{9}\right) +c_{3}^{2}c_{7},
\end{equation}
\begin{equation}
c_{11}=\frac{2}{c_{3}}\sqrt{c_{9}},\text{ }c_{12}=\frac{2}{c_{3}c_{4}}\sqrt{
c_{10}},
\end{equation}
\begin{equation}
c_{13}=\frac{1}{c_{3}}\left( c_{5}+\sqrt{c_{9}}\right) ,\text{ }c_{14}=\frac{
1}{c_{3}c_{4}}\left( \sqrt{c_{10}}-c_{4}c_{5}-c_{3}c_{6}\right) ,
\end{equation}
\begin{equation}
c_{15}=\frac{2}{c_{3}}\sqrt{c_{10}},\text{ }c_{16}=\frac{1}{c_{3}}\left(
\sqrt{c_{10}}-c_{4}c_{5}-c_{3}c_{6}\right) .
\end{equation}
(ii) The analytic results for the key polynomials:
\begin{equation}
\pi (r)=c_{5}+\sqrt{c_{9}}-\frac{1}{c_{3}}\left( c_{4}\sqrt{c_{9}}+\sqrt{
c_{10}}-c_{3}c_{6}\right) r,
\end{equation}
\begin{equation}
k=-\frac{1}{c_{3}^{2}}\left( c_{3}c_{8}+2c_{4}c_{9}+2\sqrt{c_{9}c_{10}}
\right) ,
\end{equation}
\begin{equation}
\tau (r)=c_{3}+2\sqrt{c_{9}}-\frac{2}{c_{3}}\left( c_{3}c_{4}+c_{4}\sqrt{
c_{9}}+\sqrt{c_{10}}\right) r,
\end{equation}
\begin{equation}
\tau ^{\prime }(r)=-\frac{2}{c_{3}}\left( c_{3}c_{4}+c_{4}\sqrt{c_{9}}+\sqrt{
c_{10}}\right) <0.
\end{equation}
(iii) The energy equation:
\begin{equation}
c_{2}n-\left( 2n+1\right) c_{6}+\frac{1}{c_{3}}\left( 2n+1\right) \left(
\sqrt{c_{10}}+c_{4}\sqrt{c_{9}}\right) +n\left( n-1\right) c_{4}+\frac{1}{
c_{3}^{2}}\left( c_{3}c_{8}+2c_{4}c_{9}+2\sqrt{c_{9}c_{10}}\right) =0.
\end{equation}
(iv) The wave functions:
\begin{equation}
\rho (r)=r^{c_{11}}(c_{3}-c_{4}r)^{c_{12}},
\end{equation}
\begin{equation}
\phi (r)=r^{c_{13}}(c_{3}-c_{4}r)^{c_{14}},\text{ }c_{13}>0,\text{ }c_{14}>0,
\end{equation}
\begin{equation}
y_{n\kappa }(r)=P_{n}^{\left( c_{11},c_{12}\right) }(c_{3}-2c_{4}r),\text{ }
c_{11}>-1,\text{ }c_{12}>-1,\text{ }r\in \left[
(c_{3}-1)/2c_{4},(1+c_{3})/2c_{4}\right] ,
\end{equation}
\begin{equation}
\psi _{n\kappa }(r)=\phi (r)y_{n\kappa }(r)=\mathcal{N}
_{n}r^{c_{13}}(c_{3}-c_{4}r)^{c_{14}}P_{n}^{\left( c_{11},c_{12}\right)
}(c_{3}-2c_{4}r),
\end{equation}
where $P_{n}^{\left( a,b\right) }(c_{3}-2c_{4}r)$ are the Jacobi polynomials
and $\mathcal{N}_{n}$ is a normalizing factor.
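As an illustration, which we add here, inserting the spin-symmetric constants of
Table I (so that $c_{3}=c_{4}=1,$ $c_{11}=2\varepsilon _{n\kappa },$
$c_{12}=2\delta +1,$ $c_{13}=\varepsilon _{n\kappa }$ and $c_{14}=\delta +1$)
into the general wave function above, with the variable taken as
$z=\exp (-2\alpha r),$ gives
\begin{equation*}
\psi _{n\kappa }(z)=\mathcal{N}_{n\kappa }\,z^{\varepsilon _{n\kappa }}\left(
1-z\right) ^{\delta +1}P_{n}^{\left( 2\varepsilon _{n\kappa },2\delta
+1\right) }(1-2z),
\end{equation*}
which is the upper-spinor wave function quoted in Eq. (40).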
When $c_{4}=0,$ the Jacobi polynomials reduce to the generalized Laguerre
polynomials, and the relevant limiting forms are
\begin{equation}
\lim_{c_{4}\rightarrow
0}P_{n}^{(c_{11},c_{12})}(c_{3}-2c_{4}r)=L_{n}^{c_{11}}(c_{15}r),
\end{equation}
\begin{equation}
\lim_{c_{4}\rightarrow 0}(c_{3}-c_{4}r)^{c_{14}}=\exp (-c_{16}r),
\end{equation}
\begin{equation}
\psi _{n\kappa }(r)=\mathcal{N}_{n}\exp (-c_{16}r)L_{n}^{c_{11}}(c_{15}r),
\end{equation}
where $L_{n}^{c_{11}}(c_{15}r)$ are the generalized Laguerre polynomials and
$\mathcal{N}_{n}$ is a normalizing constant.
\label{appendix}
\section{Normalization of the radial wave function}
In order to find the normalization factor $\mathcal{N}_{n\kappa }$, we start
by writing the normalization condition:
\begin{equation}
\frac{\mathcal{N}_{n\kappa }^{2}}{2\alpha }\int_{0}^{1}z^{2\varepsilon
_{n\kappa }-1}(1-z)^{2\delta +2}\left[ P_{n}^{(2\varepsilon _{n\kappa
},2\delta +1)}(1-2z)\right] ^{2}dz=1.
\end{equation}
Unfortunately, there is no closed formula available for this key
integral. Nevertheless, we can find the explicit normalization constant $
\mathcal{N}_{n\kappa }.$ For this purpose, it is not difficult to obtain the
results of the above integral by using the following formulas [61-64]
\begin{equation}
\dint\limits_{0}^{1}\left( 1-s\right) ^{\mu -1}s^{\nu -1}
\begin{array}{c}
_{2}F_{1}
\end{array}
\left( \alpha ,\beta ;\gamma ;as\right) ds=\frac{\Gamma (\mu )\Gamma (\nu )}{
\Gamma (\mu +\nu )}
\begin{array}{c}
_{3}F_{2}
\end{array}
\left( \nu ,\alpha ,\beta ;\mu +\nu ;\gamma ;a\right) ,
\end{equation}
and $
\begin{array}{c}
_{2}F_{1}
\end{array}
\left( a,b;c;z\right) =\frac{\Gamma (c)}{\Gamma (a)\Gamma (b)}
\dsum\limits_{p=0}^{\infty }\frac{\Gamma (a+p)\Gamma (b+p)}{\Gamma (c+p)}
\frac{z^{p}}{p!}.$ Hence, the normalization constants for the upper-spinor
component are
\begin{equation}
\mathcal{N}_{n\kappa }=\left[ \frac{\Gamma (2\delta +3)\Gamma (2\varepsilon
_{n\kappa }+1)}{2\alpha \Gamma (n)}\dsum\limits_{m=0}^{\infty }\frac{
(-1)^{m}\left( n+2(1+\varepsilon _{n\kappa }+\delta )\right) _{m}\Gamma (n+m)
}{m!\left( m+2\varepsilon _{n\kappa }\right) !\Gamma \left( m+2\left(
\varepsilon _{n\kappa }+\delta +\frac{3}{2}\right) \right) }f_{n\kappa }
\right] ^{-1/2}\text{ ,}
\end{equation}
with
\begin{equation}
f_{n\kappa }=
\begin{array}{c}
_{3}F_{2}
\end{array}
\left( 2\varepsilon _{n\kappa }+m,-n,n+2(1+\varepsilon _{n\kappa }+\delta
);m+2\left( \varepsilon _{n\kappa }+\delta +\frac{3}{2}\right)
;1+2\varepsilon _{n\kappa };1\right) ,
\end{equation}
where $\left( x\right) _{m}=\Gamma (x+m)/\Gamma (x).$ Also, the
normalization constants for the lower-spinor component are
\begin{equation}
\widetilde{\mathcal{N}}_{n\kappa }=\left[ \frac{\Gamma (2\delta
_{1}+3)\Gamma (2\widetilde{\varepsilon }_{n\kappa }+1)}{2\alpha \Gamma (n)}
\dsum\limits_{m=0}^{\infty }\frac{(-1)^{m}\left( n+2(1+\widetilde{
\varepsilon }_{n\kappa }+\delta _{1})\right) _{m}\Gamma (n+m)}{m!\left( m+2
\widetilde{\varepsilon }_{n\kappa }\right) !\Gamma \left( m+2\left(
\widetilde{\varepsilon }_{n\kappa }+\delta _{1}+\frac{3}{2}\right) \right) }
g_{n\kappa }\right] ^{-1/2}\text{ ,}
\end{equation}
with
\begin{equation}
g_{n\kappa }=
\begin{array}{c}
_{3}F_{2}
\end{array}
\left( 2\widetilde{\varepsilon }_{n\kappa }+m,-n,n+2(1+\widetilde{
\varepsilon }_{n\kappa }+\delta _{1});m+2\left( \widetilde{\varepsilon }
_{n\kappa }+\delta _{1}+\frac{3}{2}\right) ;1+2\widetilde{\varepsilon }
_{n\kappa };1\right) .
\end{equation}
\begin{table}[tbp]
\caption{The specific values for the parametric constants necessary for
calculating the energy eigenvalues and eigenfunctions of the spin symmetry
Dirac wave equation.}
\begin{tabular}{llll}
\tableline Constant & Analytic value & Constant & Analytic value \\
\tableline$c_{1}$ & $1$ & $c_{2}$ & $1$ \\
$c_{3}$ & $1$ & $c_{4}$ & $1$ \\
$c_{5}$ & $0$ & $c_{6}$ & $-\frac{1}{2}$ \\
$c_{7}$ & $\frac{1}{4}+\beta _{1}$ & $c_{8}$ & $-\beta _{2}$ \\
$c_{9}$ & $\varepsilon _{n\kappa }^{2}$ & $c_{10}$ & $\left( \delta +\frac{1
}{2}\right) ^{2}$ \\
$c_{11}$ & $2\varepsilon _{n\kappa }$ & $c_{12}=c_{15}$ & $2\delta +1$ \\
$c_{13}$ & $\varepsilon _{n\kappa }$ & $c_{14}=c_{16}$ & $\delta +1$ \\
$\xi _{1}$ & $\beta _{1}$ & $\xi _{2}$ & $\beta _{2}$ \\
$\xi _{3}$ & $\varepsilon _{n\kappa }^{2}$ & & \\
\tableline & & &
\end{tabular}
\end{table}
\
\end{document}
\begin{document}
\begin{frontmatter}
\title{QCMPI: A PARALLEL ENVIRONMENT FOR QUANTUM COMPUTING }
\author[pit]{Frank Tabakin}
\and
\author[spain]{Bruno Juli\'a-D\'{\i}az}
\address[pit]{Department of Physics and Astronomy \\
University of Pittsburgh\\
Pittsburgh, PA, 15260}
\address[spain]{Departament de Estructura i Constituents de la Materia\\
Universitat de Barcelona\\
Diagonal 647\\
08028 Barcelona (Spain)
}
\begin{abstract}
{\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,} \ is a quantum computer (QC) simulation package written in
Fortran 90 with parallel processing capabilities. It is an accessible
research tool that permits rapid evaluation of quantum algorithms for
a large number of qubits and for various ``noise" scenarios.
The prime motivation for developing {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ is to facilitate numerical
examination of not only how QC algorithms work, but also to include noise,
decoherence, and attenuation effects and to evaluate the efficacy of
error correction schemes. The present work builds on an earlier Mathematica
code {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}, which is mainly a pedagogic tool. In that earlier work,
although the density matrix formulation was featured, the description
using state vectors was also provided. In {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}, the stress is on
state vectors, in order to employ a large number of qubits. The parallel
processing feature is implemented by using the Message-Passing Interface
(MPI) protocol. A description of how to spread the wave function components
over many processors is provided, along with how to efficiently describe
the action of general one- and two-qubit operators on these state vectors.
These operators include the standard Pauli, Hadamard, CNOT and CPHASE gates
and also Quantum Fourier transformation. These operators make up the actions
needed in QC. Codes for Grover's search and Shor's factoring algorithms
are provided as examples. A major feature of this work is that concurrent
versions of the algorithms can be evaluated with each version subject to
alternate noise effects, which corresponds to the idea of solving a
stochastic Schr\"{o}dinger equation. The density matrix for the ensemble of
such noise cases is constructed using parallel distribution methods to
evaluate its eigenvalues and associated entropy. Potential
applications of this powerful tool include studies of the stability and
correction of QC processes using Hamiltonian based dynamics.
\end{abstract}
\end{frontmatter}
\noindent{\bf Program Summary}
{\it Title of program:} QCMPI\\
{\it Catalogue identifier:}\\
{\it Program summary URL:} http://cpc.cs.qub.ac.uk/summaries\\
{\it Program available from:} CPC Program Library, Queen's University of Belfast, N. Ireland\\
{\it Operating systems:} Any system that supports Fortran 90 and MPI;
developed and tested at the Pittsburgh Supercomputer Center, at the Barcelona
Supercomputer (BSC/CNS) and on multi-processor Macs and PCs. For cases where distributed density
matrix evaluation is invoked, the BLACS and SCALAPACK packages are needed. \\
{\it Programming language used:} Fortran 90 and MPI\\
{\it Number of bytes in distributed program, including test code and
documentation: }\\
{\it Distribution format:} tar.gz\\
{\it Nature of Problem:} Analysis of quantum computation algorithms and the effects of noise. \\
{\it Method of Solution:} A Fortran 90/MPI package is provided that contains
modular commands to create and analyze quantum circuits. Shor's factorization
and Grover's search algorithms are explained in detail. Procedures for
distributing state vector amplitudes over processors and for solving
concurrent (multiverse) cases with noise effects are implemented. Density
matrix and entropy evaluations are provided in both single and parallel versions. \\
\setlength{\parskip}{0.1cm}
\tableofcontents
\setlength{\parskip}{\baselineskip}
\section{INTRODUCTION}
Achieving a realistic Quantum Computer (QC)~\cite{Nielsen,Preskill} requires
the control, measurement, and stability of simple quantum systems called qubits.
A qubit is any system with two accessible states which can form a quantum
ensemble. That ensemble can be manipulated to store and process information.
Since quantum states can exist as superpositions of many possibilities, and
since an isolated quantum system propagates without loss of quantum phases,
a QC provides the advantage of being a ``massively parallel" device and having
enhanced probability for solving difficult, otherwise intractable,
problems. That enhancement is generated by constructive quantum interference.
This ideal situation can be disrupted by external effects, which can cause the
quantum system to lose its quantum interference capabilities--this is called
decoherence and loss of entanglement. In addition, uncontrolled random pulses
(noise~\footnote{There are various types of noise, such as thermal noise. We
use the term noise in a generic sense, although specific noise models can be
incorporated into {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}. }) could strike the QC during its controlled
performance, so that its operations or gates become less than perfect.
To gauge the efficacy of a QC, even when influenced by such external
environmental effects, and to evaluate the positive influence of error
correction~\cite{QEC} steps, it is important to have large scale QC simulations. Such
simulations can only represent a small part of the full ``massively parallel"
quantum ensemble dynamics, since a real QC goes way beyond the capabilities of
any classical computer. Nevertheless, it seems natural to invoke the best,
most parallel and largest memory computers we have available. Therefore, we
embarked on developing a Fortran 90 parallel computer QC simulation, starting
with the basic QC algorithms of quantum searching~\cite{Grover} and factorization~\cite{Shor}.
Other authors have also attacked this problem to good
effect~\cite{prevQC0,prevQC1,prevQC2,prevQC3,prevQC4}. Nevertheless, there is a need for a
generally available, well-documented, and easy to use supercomputer version,
to encourage others to contribute their own advances. In addition, we have
developed a broader range of applications~\footnote{Teleportation and superdense
coding programs are also available, but were omitted for brevity.} and
supercomputer techniques than previously available. An important feature
of our work is that we invoke the algorithms on concurrent groups of
processors, which are then subject to different noise. Then, the overall
density matrix is constructed as an ensemble average over these noise groups.
The density matrix can be stored on a grid of processors and its eigenvalues
found using parallel codes, thereby avoiding the pitfalls
of overly large matrix storage. Thus, we can evaluate the entropy, and indeed
sub-entropies, for the dynamic evolution of a QC process
in a simulation of a real world environment.
Our code is called {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,} \ to indicate that it is a QC simulation based
on the Message-Passing Interface (MPI)~\cite{MPI,openmpi}. It is a Fortran 90
simulation of a Quantum Computer that is both flexible and an
improvement over earlier such works~\cite{prevQC0,prevQC1,prevQC2,prevQC3,prevQC4}.
The flexibility is generated by a modular approach to all of the
initializations, operators, gates, and measurements, which then
can be readily used to describe the basic QC
Teleportation~\cite{Teleportation}, Superdense coding~\cite{superdense},
Grover's search~\cite{Grover} and Shor's factoring~\cite{Shor} algorithms.
We also adopt a state vector,\!~\footnote{A state vector requires arrays of
size $2^{n_q},$ whereas a density matrix has a much larger size $2^{n_q}
\times 2^{n_q}.$ Here $n_q$ denotes the number of qubits.}\! rather than a
density matrix~\cite{QDENSITY}, approach to facilitate representing a large
number of qubits in a manner that allows for general treatments, such as
handling the dynamics stipulated by realistic Hamiltonians. We include
environmental effects by introducing random stochastic interactions in
separate groups of processors, which we dub multiverses.
In section~\ref{sec2}, we introduce qubit state vectors along with various
state vector notations. We stress that a wave function component description
allows for changes induced by simple one-body operators such as local quantum
gates and also one-body parts of Hamiltonians. Examples are provided in
section~\ref{sec3} of the effect of a general one-body operator on both two
and more qubit systems. Expansion in a computational basis, using equivalent
decimal and binary labels, is used to demonstrate the role of operators on the
state vector amplitudes. It is shown how to distribute a wave function over
numerous processors and how to handle the fact that a one-body operator acts
on wave function amplitudes in a manner that not only modifies amplitudes
stored on a given processor, but also affects amplitudes seated on other
processors. Criteria for locating the processors involved in these classes
of operations are derived. Understanding this combination of effects, namely,
wave function distribution and the alteration of that distribution due to the
action of a one-body operator, is central to all subsequent developments. It
is handled by careful MPI invocations and serves as a model for the extension
to multi-qubit operations.
In section~\ref{sec4}, the MPI manipulations described earlier for the one
qubit case are generalized and then the layout for the two-qubit operator
alterations of the quantum amplitudes are clarified. With that result in
hand, the particular two-qubit gates {\bf CNOT} and {\bf CPHASE} are readily constructed,
as are two-body Hamiltonians for dynamical applications. The generalization to
three-qubit operators, in particular to the Toffoli gate, is obvious.
In section~\ref{sec5}, Grover's algorithm is discussed and it is shown how
{\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,} allows one to simulate up to 30 qubits (depending on the number of
processors and available memory) in a reasonable time.
Shor's algorithm is simulated using {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ as discussed in
section~\ref{sec5}. Several standard codes that handle large-number modular
and continued fraction manipulations are provided, but the heart of this case
is the Quantum Fourier Transform (QFT) and an associated projective measurement.
The QFT is generated by a chain of Hadamards and CNOT gates acting on a
multi-qubit register. It is shown how to do a QFT with wave function
components distributed over many processors. Here the benefit of using MPI is
dramatic.
In section~\ref{sec7}, the procedures invoked to describe parallel universes,
subject to stochastic noise, are explained for both the Grover and Shor
algorithms. For brevity, similar applications to teleportation and superdense
coding are omitted here (although they are also implemented using {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}).
Also in section~\ref{sec7}, the construction and evaluation of a density
matrix is discussed in two ways. In one way, the full density matrix is
stored on the master processor and its eigenvalues and the associated entropy
is evaluated using a linear code subroutine. In the second, more general way,
the density matrix is spread over many processors on a BLACS constructed
processor grid and eigenvalues and entropy determined using the parallel
library SCALAPACK~\cite{scala}. The latter version reduces the storage needs and enhances speed.
A brief description of the included routines is given in section~\ref{sec8},
and finally some conclusions and future developments
are discussed in section~\ref{sec9}.
\section{STATES }
\label{sec2}
\subsection{One-Qubit States }
The state of a quantum system is described by a wave function which
in general depends on the space or momentum coordinates of the
particles and on time. In Dirac's representation-independent
notation, the state of a system is a vector in an abstract Hilbert
space $\mid \Psi(t) \rangle$, which depends on time, but in that form one
makes no choice between the coordinate or momentum space
representation. The transformation between the space and momentum
representation is contained in a transformation bracket. The two
representations are thus related by Fourier transformation, which is the
way Quantum Mechanics builds localized wave packets. In this manner,
uncertainty principle limitations on our ability to measure
coordinates and momenta simultaneously with arbitrary precision are
embedded into Quantum Mechanics (QM). This fact leads to operators,
commutators, expectation values and, in the special cases when a
physical attribute can be precisely determined, eigenvalue equations
with Hermitian operators. That is the content of many quantum texts.
Our purpose is now to see how to define a state vector, to
describe systems or ensembles of qubits as needed for quantum
computing. Thus, the degrees of freedom associated with change in
location are suppressed and the focus is on the two-state aspect.
Spin, which is the most basic example of a two-valued quantum attribute, is
missing from a spatial description. This subtle degree of freedom,
whose existence is deduced, inter alia, by analysis of the Stern-Gerlach
experiment, is an additional Hilbert space vector feature. For
example, for a single spin 1/2 system the wave function including
both space and spin aspects is: \begin{equation} \Psi(\vec{r}_1, t) \mid s\
m_s\rangle, \end{equation} where $\mid s \ m_s\rangle$ denotes a state that is
simultaneously an eigenstate of the particle's total spin operator
$s^2 = s_x^2 +s_y^2+s_z^2$, and of its spin component operator
$s_z$. That is \begin{equation} s^2 \mid s m_s\rangle = \hbar^2 s (s+1) \mid s m_s\rangle
\qquad s_z \mid s m_s\rangle = \hbar m_s \mid s m_s\rangle \,. \end{equation} For a spin
1/2 system, we denote the spin up state as $\mid s m_s\rangle\rightarrow
\mid \frac{1}{2},\frac{1}{2}\rangle \equiv \mid 0\rangle$, and the spin down
state as $\mid s m_s\rangle\rightarrow \mid \frac{1}{2},-\frac{1}{2}\rangle
\equiv \mid 1\rangle.$
A simpler, equivalent representation is as a two component amplitude
\begin{equation}
\mid 0\rangle\rightarrow \left(
\begin{array}{l}
1 \\
0
\end{array}
\right) \, ,
\qquad
\mid 1 \rangle\rightarrow{\bf \left(
\begin{array}{l}
0 \\ 1
\end{array}
\right)} \, .
\label{spinor1}
\end{equation}
This matrix representation can be used to describe the two states of
any quantum system and is not restricted to the spin attribute. In
this matrix representation, the Pauli matrices $ \vec{\sigma}$ are:
\footnote{ $\vec{s} =\frac{\hbar}{2} \vec{\sigma}$ }
\begin{equation}
\sigma_z \longrightarrow
\left(
\begin{array}{lc}
1 & \ \ 0\\
0 & -1
\end{array}\right) \, ,
\qquad
\sigma_x \longrightarrow
\left(
\begin{array}{lc}
0 & \ \ 1\\
1 & 0
\end{array}\right) \, ,
\qquad
\sigma_y \longrightarrow \left(
\begin{array}{lc}
0 & -i \\
i & \ \ 0
\end{array}\right) \, .
\label{Pauli}
\end{equation}
These are all Hermitian matrices $\sigma_i =\sigma^\dagger_i$. Along with the
unit operator ${\cal I}\equiv \sigma_0$
\begin{equation}
{\cal I}\equiv \sigma_0 \longrightarrow
\left(
\begin{array}{lc}
1 & \ \ 0\\
0 & \ \ 1
\end{array}\right) \, ,
\label{unitM}
\end{equation}
any operator acting on a qubit can be expressed as a combination of Pauli operators.
Operators on multi-qubit states can be expressed as linear combinations of the
tensor product~\footnote{ A tensor product of two matrices $A\otimes B$ is
defined by the rule:
$\langle q_i , q_j \mid~ A\otimes B~ \mid~q_s, q_t\rangle \equiv \langle q_i \mid A \mid q_s \rangle\langle q_j \mid B \mid q_t\rangle,$
with obvious generalization to multi-qubit operators.} of the Pauli
operators. For example, a general operator $\Omega$ can be expressed as
\begin{equation}
\Omega = \sum_{ i_1=0 }^{3} \cdots \sum_{ i_{n_q}=0 }^{3}\
\beta_{ i_1, i_2 \cdots i_{n_q} }
\ [ \sigma_{i_1} \otimes \sigma_{i_2}
\cdots \otimes \sigma_{i_{n_q}} ],
\end{equation}
where the $ \beta_{ i_1, i_2 \cdots i_{n_q} } $ are in general complex
numbers, but are real for Hermitian $\Omega.$
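For a single qubit ($n_q=1$), for example, the expansion coefficients follow
from the standard trace orthogonality of the Pauli matrices,
${\rm Tr}(\sigma_i \sigma_j) = 2\delta_{ij}$ (a textbook identity we quote here
for clarity):
\begin{equation}
\Omega = \sum_{i=0}^{3} \beta_i \, \sigma_i , \qquad
\beta_i = \frac{1}{2} {\rm Tr}\left( \sigma_i \, \Omega \right) .
\end{equation}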
A one qubit state is a superposition of the two states associated
with the above $0$ and $1$ bits:
\begin{equation}
\mid \Psi\rangle =C_0 \mid 0\rangle+C_1 \mid 1\rangle,
\end{equation}
where $C_0 \equiv \langle0\mid \Psi\rangle$ and
$C_1 \equiv \langle1\mid \Psi\rangle$ are complex probability amplitudes
for finding the particle with spin up or spin down, respectively. The
normalization of the state $\langle\Psi\mid\Psi\rangle =1$, yields
$\mid C_0 \mid^2 + \mid C_1 \mid^2=1$. Note that the spatial aspects of
the wave function are being suppressed; which corresponds to the particles
being in a fixed location, such as at quantum dots~\footnote{When these
separated systems interact, one might need to restore the spatial aspects
of the full wave function.}. A $2 \times 1$ matrix representation of this
one-qubit state is thus:
\begin{equation}
\mid \Psi \rangle \rightarrow \left(
\begin{array}{l}
C_0 \\
C_1
\end{array}
\right) \, .
\label{amp1}
\end{equation}
An essential point is that a QM system can exist in a
superposition of these two bits; hence, the state is called a
quantum-bit or ``qubit." Although our discussion uses the notation
of a system with spin 1/2, it should be noted again that the same discussion
applies to any two distinct states that can be associated with
$\mid 0\rangle $ and $ \mid 1\rangle$.
\subsection{ Multi-qubit States}
A quantum computer involves more than one qubit; therefore, we generalize the
previous section to multi-qubit states.
For more than one qubit, a so-called computational basis of states is defined by
a product space
\begin{equation}
\mid n \rangle_{n_q} \equiv \mid q_1\rangle \cdots \mid q_{n_q}\rangle \equiv \mid \bf{Q} \rangle ,
\end{equation}
where $n_q$ denotes the total number of qubits in the system.
We use the convention that the most significant
qubit~\footnote{ An important aspect of relating the individual
qubit state to a binary representation is that one can maintain
the order of the qubits, since if a qubit hops over to another
order the decimal number is altered.} is labeled as $q_1$ and
the least significant qubit by $q_{n_q}.$ Note we use $q_{i}$ to
indicate the quantum number of the $i$th qubit. The values assumed
by any qubit are limited to either $q_i = 0$ or $1.$ The state label
$\bf{Q}$ denotes the qubit array ${ \bf{Q}} = \left( q_1, q_2, \cdots, q_{n_q} \right) ,$
which is a binary number label for the state with equivalent decimal label $n.$
This decimal multi-qubit state label is related to the equivalent binary label by
\begin{equation}
n \equiv q_1 \cdot 2^{n_q -1}+ q_{2} \cdot 2^{n_q -2} + \cdots+ q_{n_q} \cdot 2^{0} = \sum_{i=1}^{n_q} \, q_i \cdot 2^{n_q -i} \, .
\label{staten}
\end{equation}
Note that the $i$th qubit contributes a value of $ q_i \cdot 2^{n_q -i}$
to the decimal number $n.$ Later we will consider ``partner states"
($ \mid { n_0} \rangle,\ \mid { n_1} \rangle $) associated with a
given ${ n}, $ where a particular qubit $i_s$ has a value of $q_{i_s} = 0,$
\begin{equation}
{ n_0} = { n} - q_{i_s} \cdot 2^{n_q -i_s},
\label{pair0}\end{equation} or a value of
$q_{i_s} = 1,$
\begin{equation}
{ n_1} = { n} - (q_{i_s}-1) \cdot 2^{n_q -i_s}.
\label{pair1}
\end{equation}
These partner states are involved in the action of a single operator
acting on qubit $i_s ,$ as described in the next section.
A general state with $n_q$ qubits can be expanded in terms of the
above computational basis states as follows
\begin{equation}
\mid \Psi\rangle_{n_q} = \sum_{ \bf Q} C_{ \bf Q} \mid { \bf Q}
\rangle \equiv \sum_{ n=0}^{2^{n_q}-1 } C_{ n} \, \mid n \rangle\,,
\end{equation}
where the sum over ${ \bf Q}$ is really a product of $n_q$ summations
of the form $\sum_{q_i=0,1}.$ The above Hilbert space expression maps over
to an array, or column vector, of length $2^{n_q}$
\begin{eqnarray}
\qquad \mid \Psi\rangle_{n_q}&\equiv& \left(
\begin{array}{l}
C_0 \\
C_1 \\
\ \, \vdots \\
\ \, \vdots \\
C_{2^{n_q} -1}
\label{ampn}\end{array}
\right)
\qquad
{\rm or\ with\ binary\ labels}
\longrightarrow
\left(
\begin{array}{lc}
C_{0 \cdots 00} \\
C_{0 \cdots 01} \\
\ \, \vdots \\
\ \, \vdots \\
C_{1 \cdots 11}
\end{array}\right) \, .
\end{eqnarray}
The expansion coefficients $C_n$ (or $C_{{\bf Q}}$) are complex numbers with
the physical meaning that $C_n=\langle n \mid \Psi\rangle_{n_q}$ is the
probability amplitude for finding the system in the computational basis state
$\mid n\rangle, $ which corresponds to having the qubits pointing in the
directions specified by the binary array ${\bf Q}.$
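As a minimal illustration (a serial sketch we add here, not a routine taken
from {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ itself), the $2^{n_q}$ complex amplitudes $C_n$ can be held in a
single Fortran array, here initialized to the basis state $\mid 0\cdots 0\rangle$
and checked for unit norm:
\begin{verbatim}
program state_sketch
  implicit none
  integer, parameter :: nq = 3
  complex(kind=kind(1.0d0)), dimension(0:2**nq-1) :: psi
  real(kind=kind(1.0d0)) :: norm

  psi = (0.0d0, 0.0d0)
  psi(0) = (1.0d0, 0.0d0)       ! C_0 = 1, i.e. the state |000>
  norm = sum(abs(psi)**2)       ! should equal 1 for a physical state
  print *, 'norm =', norm
end program state_sketch
\end{verbatim}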
Switching between decimal $n$ and equivalent binary ${\bf Q}$ labels
is accomplished by the simple subroutines bin2dec and dec2bin in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}.
There we denote the binary number by an array $B(1) \cdots B(n_q).$ The
routines are:
\begin{center}
\fbox{ \bf call bintodec(nq,B,D) } \quad
\fbox{ \bf call dectobin(nq,D,B) }
\end{center}
where {\bf nq} is the number of qubits; {\bf D} is a real decimal number and {\bf B} is the
equivalent binary array.
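For orientation, the underlying conversion logic of Eq.~(\ref{staten}) can be
sketched as follows (a hypothetical stand-in written for this discussion, not
the {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ source, which stores the decimal label as a real number):
\begin{verbatim}
subroutine my_dectobin(nq, n, B)     ! decimal label n -> binary array B
  implicit none
  integer, intent(in)  :: nq, n
  integer, intent(out) :: B(nq)
  integer :: i, m
  m = n
  do i = nq, 1, -1                   ! fill from the least significant qubit up
     B(i) = mod(m, 2)
     m = m / 2
  end do
end subroutine my_dectobin

subroutine my_bintodec(nq, B, n)     ! binary array B -> decimal label n
  implicit none
  integer, intent(in)  :: nq, B(nq)
  integer, intent(out) :: n
  integer :: i
  n = 0
  do i = 1, nq
     n = n + B(i) * 2**(nq - i)      ! qubit i contributes q_i * 2**(nq-i)
  end do
end subroutine my_bintodec
\end{verbatim}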
\section{ONE-QUBIT OPERATORS}
\label{sec3}
An important part of quantum computation is the act of rotating a qubit.
The {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont NOT } and single qubit Hadamard ${\bf \cal{ H} }$ operators are of
particular interest:
\begin{equation}
{\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont NOT } \equiv \ \ \ \sigma_x \longrightarrow \left(
\begin{array}{lccr}
0&&& 1\\
1&&& 0
\end{array}\right) \, , \qquad
{\bf \cal{ H} } \equiv\ \ \ \frac{ \sigma_x + \sigma_z}{\sqrt{2}}
\longrightarrow \frac{1}{ \sqrt{2}} \left(
\begin{array}{lccr}
1&&& 1\\
1&&& -1
\end{array}\right) \, .
\label{NOTH}
\end{equation}
These have the following effect on the basis states
${\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont NOT } \mid 0\rangle = \mid 1\rangle$, ${\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont NOT }~\mid~1~\rangle~=~\mid 0~\rangle$, and
${\bf \cal{ H} } \mid 0\rangle = \frac{ \mid 0\rangle + \mid 1\rangle}{\sqrt{2}}\,,
{\bf \cal{ H} } \mid 1\rangle = \frac{ \mid 0\rangle - \mid 1\rangle}{\sqrt{2}}.$
General one-qubit operators can be constructed from the Pauli operators;
we denote the general one-qubit operator acting on qubit $s$ as $ {\Omega_s
}.$ Consider the action of such an operator on the multi-qubit state
$\mid \Psi\rangle_{n_q} : $
\begin{eqnarray}
{\Omega_s }\!\! \mid \Psi\rangle_{n_q}&=& \sum_{{\bf Q} } C_{{\bf Q} } \ \ {\Omega_s }\!
\! \mid {{\bf Q}} \rangle \\ \nonumber
&=&
\sum_{q_1=0,1} \cdots \sum_{q_s=0,1} \cdots \sum_{q_{n_q}=0,1}\ C_{{\bf Q} } \ \
\mid q_1 \rangle \cdots \ ( {\Omega_s }\!\! \mid q_s \rangle )\ \cdots \mid q_{n_q} \rangle.
\label{op1}
\end{eqnarray}
Here $ {\Omega_s }$ is assumed to act only on the qubit $i_s$ of value $q_s.$
The $ {\Omega_s }\!\! \mid q_s \rangle $ term can be expressed as
\begin{equation}
{\Omega_s }\!\! \mid q_s \rangle = \sum_{q'_s=0,1}\!\! \mid q'_s \rangle \langle q'_s \mid\! {\Omega_s }\! \mid q_s \rangle,
\label{closure1}
\end{equation}
using the closure property of the one qubit states.
Thus Eq.~(\ref{op1}) becomes
\begin{eqnarray}
{\Omega_s} \!\! \mid \Psi\rangle_{n_q}&=& \sum_{\bf Q} C_{\bf Q}\ {\Omega_s} \!\! \mid { \bf Q } \rangle
= \\ \nonumber
\sum_{q_1=0,1} \cdots \sum_{q_s=0,1} \cdots \!\! \sum_{q_{n_q}=0,1} \sum_{q'_s=0,1} \!\! C_{\bf Q} \!\!
&&\langle q'_s \mid \!{\Omega_s }\! \mid q_s \rangle\ \mid q_1 \rangle
\cdots \ \mid q'_s \rangle \cdots \mid q_{n_q} \rangle.
\end{eqnarray}
Now we can interchange the labels $ q_s \leftrightarrow q'_s ,$ and use the
label $ {\bf Q} $ to obtain the algebraic result for the action of a one-qubit
operator on a multi-qubit state
\begin{equation}
{\Omega_s } \mid \Psi\rangle_{n_q}= \sum_{\bf Q} {\tilde C}_{\bf Q}\ \mid { \bf Q } \rangle =
\sum_{n=0}^{ 2^{n_q}-1 } {\tilde C}_{n}\ \mid n\rangle,
\end{equation}
where
\begin{equation}
{\tilde C}_{\bf Q}= {\tilde C}_{n} = \sum_{q'_s=0,1}\!\! \langle q_s \mid \!{\Omega_s }\! \mid q'_s \rangle\
C_{\bf Q'} , \label{oneopres}
\end{equation}
where $ { \bf{Q}} = \left( q_1, q_2, \cdots q_{n_q} \right) ,$ and
$ { \bf{Q'}} = \left( q_1,\cdots q'_{s} \cdots q_{n_q} \right) .$
That is ${ \bf Q}$ and ${ \bf Q'}$ are equal except for the qubit acted
upon by the one-body operator ${\Omega_s}.$
A better way to state the above result is to consider Eq.~(\ref{oneopres})
for the case that $n$ has $q_s=0$ and thus $n\rightarrow n_0$ and to write
out the sum over $q'_s$ to get
\begin{equation}
{\tilde C}_{n_0} = \langle 0 \mid \!{\Omega_s }\! \mid 0 \rangle C_{n_0}+\langle 0 \mid \!{\Omega_s }\! \mid 1 \rangle C_{n_1},
\end{equation}
where we introduced the partner to $n_0$ namely $n_1.$ For the case that $n$
has $q_s=1$ and thus $n\rightarrow n_1$ Eq.~(\ref{oneopres}), with expansion
of the sum over $q'_s$ yields
\begin{equation}
{\tilde C}_{n_1} = \langle 1 \mid \!{\Omega_s }\! \mid 0 \rangle C_{n_0}+\langle 1
\mid \!{\Omega_s }\! \mid 1 \rangle C_{n_1} \ ,
\end{equation}
or, written as a matrix equation, we have for each $n_0, n_1$ partner pair
\begin{equation}
\left(
\begin{array}{l}
{\tilde C}_{n_0} \\
{\tilde C}_{n_1}
\end{array}
\right)
= \left(
\begin{array}{lccr}
\langle 0 \mid {\Omega_s} \mid 0 \rangle&&& \langle 0 \mid{\Omega_s} \mid 1 \rangle\\
\langle 1 \mid {\Omega_s} \mid 0 \rangle&&&\langle 1 \mid {\Omega_s} \mid 1 \rangle
\end{array} \right)
\left(
\begin{array}{l}
C_{n_0} \\
C_{n_1}
\end{array}
\right) \ .
\label{resmat1}
\end{equation}
This is not an unexpected result. Later we will denote the matrix element
$\langle 0 \mid {\Omega_s} \mid 0 \rangle$ as $ {\Omega_s}_{0 0}$, etc.
Equation~(\ref{resmat1}) above shows how a $2\times2$ one-qubit operator
${\Omega_s}$ acting on qubit $i_s$ changes the state amplitude for each value
of $n_0.$ Here, $n_0$ denotes a decimal number for a computational basis
state with qubit $i_s$ having the $q_s$ value zero and $n_1$ denotes its
partner decimal number for a computational basis state with qubit $i_s$
having the $q_s$ value one. They are related by
\begin{equation}
n_1 = n_0 + 2^{n_q-i_s}.
\end{equation}
At times, we shall call $ 2^{n_q-i_s}$ the ``stride" of the $i_s$ qubit;
it is the step in $n$ needed to get to a partner. There are $2^{n_q}/2$
values of $n_0$ and hence $2^{n_q}/2$ pairs $n_0,n_1.$ Equation
~(\ref{resmat1}) is applied to each of these pairs. In {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ that process
is included in the subroutine {\bf OneOpA}.
Note that we have replaced the full $2^{n_q}\times2^{n_q}$ one-qubit
operator by a series of $2^{n_q}/2$ sparse matrices. Thus we do not have
to store the full $2^{n_q}\times2^{n_q}$ matrix but simply provide a $2\times2$
matrix for repeated use. Each application of the $2\times2$ matrix
involves distinct amplitude partners and therefore the set of $2\times2$
operations can occur simultaneously and hence in parallel.
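In serial form (a sketch we add here under the assumption that all
$2^{n_q}$ amplitudes fit on a single processor; the distributed version is
the {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ routine {\bf OneOpA} described below), the repeated use of
Eq.~(\ref{resmat1}) over all $(n_0,n_1)$ partner pairs looks like:
\begin{verbatim}
subroutine one_qubit_serial(nq, is, Op, psi)
  implicit none
  integer, intent(in) :: nq, is
  complex(kind=kind(1.0d0)), intent(in)    :: Op(0:1,0:1)
  complex(kind=kind(1.0d0)), intent(inout) :: psi(0:2**nq-1)
  complex(kind=kind(1.0d0)) :: c0, c1
  integer :: stride, iblk, offset, n0, n1

  stride = 2**(nq - is)                  ! distance between partner labels
  do iblk = 0, 2**(is - 1) - 1
     do offset = 0, stride - 1
        n0 = iblk * 2 * stride + offset  ! member of the pair with q_is = 0
        n1 = n0 + stride                 ! its q_is = 1 partner
        c0 = psi(n0)
        c1 = psi(n1)
        psi(n0) = Op(0,0)*c0 + Op(0,1)*c1
        psi(n1) = Op(1,0)*c0 + Op(1,1)*c1
     end do
  end do
end subroutine one_qubit_serial
\end{verbatim}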
In the next section, the procedure for distributing the state vector over
several processors is described along with the changes induced by the
action of a one-body operator. Later this procedure is generalized to
multi-qubit operators, using the same concepts.
\subsection{Distribution of the State}
The state of the multi-qubit system is described at any given time by
the array of coefficients $C_n(t)$ for $n=0,\cdots 2^{n_q}-1,$ see
Eq.~(\ref{ampn}). The action of a one-qubit gate, assumed to act
instantaneously, is specified by the rules discussed in the previous
section. Now we wish to distribute these state-vector coefficients stored
in ``standard order" with increasing $n$, over a number of processors
$N_P.$ For convenience, we assume that the number of processors invoked
is a power of two, i.e. $N_P=2^p$ and thus we can distribute the $C_n$
coefficients uniformly over those processors with
$N_x=2^{n_q}/N_P = 2^{n_q - p}$ amplitudes on each processor. In the code we
denote $N_x$ as {\bf NPART}.
So, for example, we place
\begin{eqnarray}
&& C_0 \cdots C_{N_x-1}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm on\ processor \
myid=0;} \\ \nonumber
&& C_{N_x} \cdots C_{ 2 N_x-1} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm on\ processor\
myid=1;} \\ \nonumber
&& \ \ \ \ \ \ \ \ \vdots \\ \nonumber
&& C_{ (N_P-1) N_x} \cdots C_{ N_P N_x-1}
\ \ \ \ \ {\rm on\ processor\
myid=N_P-1 \ . }
\end{eqnarray}
where {\bf myid} is the processor number, running from 0 to $N_P-1$. Note that
$ N_P \cdot N_x=2^{n_q}.$ This distribution of the state over the
$N_P$ processors places a demand of $2^{n_q -p}$ on the memory of each
processor. For 64 processors $p=6$ and the memory required is down by a
factor of 64; and for 4096 processors $p=12$ and the memory required is
down by a factor of 4096, etc. As the number of available processors
increases, the memory demand placed on each individual processor decreases accordingly.
However, life is not that simple. A one-qubit operator for a given partner
pair $n_0,n_1$ often involves a $C_{n_0}$ that is seated on one processor and
a $C_{n_1}$ that is seated on another processor. We need to deal with that
situation, while respecting the scheme for standard order distribution of the
amplitude coefficients. The first question that arises is when are the pairs
$C_{n_0},C_{n_1}$ seated on the same processor? We call that being ``seated
in the same section," in analogy to theater seating. That is, we dub being
located on a particular processor as having the same section, with the
location of a particular amplitude within that section being called its ``seat."
With that language, it is simple to state the condition for an amplitude
pair $C_{n_0},C_{n_1}$ being on the same processor; namely, that the
difference (we call this the ``stride'') $n_1-n_0= 2^{n_q - i_s}$ be less than
the distance $2^{n_q }/N_P=2^{n_q - p}$ or simply $i_s>p.$ If this
condition is not satisfied, the stride is large enough to jump out of the
section and thus require inter-processor communications. This result holds
true if the number of processors is of the form $N_P=2^{ p}= 1,2,4,8,16
\cdots.$ One can prove this rule by induction.
This condition, $i_s>p$, indicates that for the larger values of $i_s,$ that is,
for qubits that are the least significant contributors to the state label $n,$ the
associated pairs of amplitudes reside on the same processor. In contrast,
the smaller values of $i_s$ correspond to qubits for which the pair amplitudes are the
furthest away in processor number. The stride ranges from a value of 1 for
$i_s=n_q$ (least significant qubit) to $2^{n_q}/2$ for $i_s=1$ (most
significant qubit). Carrying out the $2\times2$ matrix multiplication of
Eq.~(\ref{resmat1}) is simple for those pairs seated on the same processor, but for the
other pairs a suitable transfer between processors must be implemented before
that multiplication can be performed. To carry
out that process requires a way to identify the processor (i.e. the section
assignment) and the location within that processor (the seat) and to
interchange the amplitudes. The latter task is carried out using the
MPI protocol, as discussed later.
\subsection{Pair Section, Seat and MPI}
Distribution of the $2^{n_q}$ amplitudes $C_0 \cdots C_{2^{n_q} -1 }$ over
the $N_P$ processors places $N_x=2^{n_q}/N_P = 2^{n_q - p}$ amplitudes on
each processor. As the state label $n$ ranges from $0$ to $2^{n_q} -1$ one
jumps between different processors. The relationship between the $n$ label
and the processor on which the associated amplitude sits is simply:
${\rm section}= {\rm Int}(n/N_x ),$ where Int() denotes the integer part,
and the seat (i.e. the location within that processor) is
${ \rm seat}={\rm Mod}(n,N_x),$ i.e. $n$ modulo
$N_x.$ In the code $N_x$ is called {\bf NPART} and section is identical with
{\bf myid}, the processor number.
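A minimal sketch of this bookkeeping (a hypothetical helper written for this
discussion, not a {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ routine) is:
\begin{verbatim}
subroutine locate(n, NPART, section, seat)
  implicit none
  integer, intent(in)  :: n, NPART    ! NPART = 2**(nq-p) amplitudes/processor
  integer, intent(out) :: section, seat
  section = n / NPART                 ! processor (myid) holding C_n
  seat    = mod(n, NPART)             ! position of C_n within that processor
end subroutine locate
\end{verbatim}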
With the ability to identify the processor/location or section/seat
assignment associated with each index $n,$ the remaining task is to
transfer the requisite amplitudes to the ``correct" location. That task
is carried out by the Message Passing Interface (MPI) commands
{\bf MPI\_SEND} and {\bf MPI\_RECV}. We need to coordinate the various
processors and exchange data during a calculation. The main reason MPI
was developed over the last several decades is to enable efficient
communication between processors during a computation.
Why use MPI? The MPI protocol affords many advantages for
developing parallel processing codes. The main advantages are that: (1) MPI provides a standard set of routines that are easy to use and (2)
MPI is flexible and works on many platforms.~\footnote{ We have
run our codes on the Pittsburgh and Barcelona supercomputers, and also on arrays of iMacs.}
Thus MPI proved perfect for our need to develop a
user-friendly, flexible realization of the action of multi-qubit operators on
state vectors in a parallel computing environment.
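When the partners straddle two processors ($i_s \le p$), every seat on
processor {\bf myid} is paired with the same seat on the partner processor
${\rm partner\_id}={\rm ieor}({\rm myid},2^{\,p-i_s})$, so the two processors
can simply exchange their local blocks. A hedged sketch of that exchange
(our illustration, not the {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ implementation itself) is:
\begin{verbatim}
subroutine swap_with_partner(partner_id, NPART, psi_local, psi_remote, comm)
  use mpi
  implicit none
  integer, intent(in) :: partner_id, NPART, comm
  complex(kind=kind(1.0d0)), intent(in)  :: psi_local(NPART)   ! my amplitudes
  complex(kind=kind(1.0d0)), intent(out) :: psi_remote(NPART)  ! partner's amplitudes
  integer :: ierr, status(MPI_STATUS_SIZE)

  ! one combined send/receive per processor replaces separate SEND/RECV calls
  call MPI_SENDRECV(psi_local,  NPART, MPI_DOUBLE_COMPLEX, partner_id, 0, &
                    psi_remote, NPART, MPI_DOUBLE_COMPLEX, partner_id, 0, &
                    comm, status, ierr)
end subroutine swap_with_partner
\end{verbatim}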
\subsection{Action of One-Qubit Operator}
The following figures (Figs.~\ref{fig1}-~\ref{MPIrole2}) illustrate the role played by MPI in transferring distributed amplitudes to appropriate processor
locations when the one-qubit operator acts. We use the case of $n_q=3$ or $2^3=8$ components as a simple example.
The first case takes the partner labels $n=1, 3,$ which corresponds to the
binary numbers $(001) $ and $(011).$ Here we use the binary labels for
the components and consider the special case of a one-qubit operator acting
on {\bf qubit 2} and assuming two processors $p=1$ (see Fig.~\ref{fig1} ).
For that case, the two amplitudes affected by the one-qubit operator reside
on the same processor, i.e., they have just different seats in the same section.
Thus there is no need for MPI data transfer.
\begin{figure}[t]
\begin{center}
\includegraphics[width=12cm]{fig1}
\caption{A three qubit state vector is acted on by a one-qubit operator
on qubit 2 ($i_s=2$) . The case illustrated is for the partners $n=1,3,$
which correspond to the binary numbers $(001) $ and $(011).$ It is assumed
that there are just two processors $N_P=2^p,$ with $p=1.$
Thus $i_s>p$ for this case and the two coupled amplitudes reside on the same
processor and no MPI data transfer is invoked. }
\label{fig1}
\end{center}
\end{figure}
Now consider the partner labels $n=0, 4,$ which correspond to the binary
numbers $(000) $ and $(100).$ Again we use the binary labels for the
components, but now consider the special case of a one-qubit operator
acting on {\bf qubit 1} and again assuming two processors. For this
case, the two amplitudes affected by the one-qubit operator do not
reside on the same processor, i.e., they are in different sections.
Thus there is now an essential need for MPI data transfer, which
involves sending and receiving as illustrated in Fig.~\ref{MPIrole2}.
This entails two sends and two receives.
\begin{figure}[t]
\begin{center}
\includegraphics[width=12cm]{fig2}
\caption{A three qubit state vector is acted on by a one-qubit operator on
qubit 1 $(i_s=1)$ . The case illustrated is for the partners $n =0, 4,$ which
corresponds to the binary numbers $(000) $ and $(100).$ It is assumed that
there are just two processors $N_P=2^p,$ with $p=1.$ Thus the condition
$i_s>p$ is not satisfied and indeed the two coupled amplitudes reside on
different processors and MPI data transfer is invoked. We need to send
$(S_1)$ component $C_{000}$ to processor one, and it is received at
$(R_1),$ and also send $(S_2)$ component $C_{100}$ to processor zero, and
it is received at $(R_2).$ Later we will specify the send and receive commands in the MPI language. }
\label{MPIrole2}
\end{center}
\end{figure}
Of course, one needs to continue this process for the other three amplitude pairs
$n=1, 5,$ $n=2,6,$ and $n=3,7.$ In general, we have $2^{n_q}/2$ partner pairs.\
Those pairs require six more sends and six more receives.
An important issue is then to see if the time gained by invoking more processors wins
out over the time needed for all of these MPI transfers. Another important concept is
one of ``balance," which involves the extent to which the various processors
perform equally in time and storage (ideally we assume they are all exactly
equivalent and in balance).
It is important to understand the above illustrations, because for more
qubits and more processors and for two- and three- qubit operators, the
steps are simply generalizations of these basic cases. Careful manipulation
of the amplitudes, allows for spreading the amplitudes over many processors
and using MPI to do the requisite data transfers for all kinds of operators and gates.
For the one qubit case, the steps here are called by the command
\begin{center}
\fbox{ \bf call OneOpA(nq,is,Op,psi,NPART, COMM) }
\end{center}
where {\bf nq} denotes the number of qubits; ``is'' labels the qubit acted on by the
operator ``Op''; {\bf psi} is an input wave function array of length {\bf NPART}=$N_x,$
which is returned as the modified state vector. The last entry {\bf COMM} is included
to allow for later extension to separate communication channels that we refer to as
parallel universes or multiverses.
Let us emphasize that any operator acting on a single qubit is a
special case of the one described here. Thus all rotations,
all so-called local operations, including those generated by
the one-body part of Hamiltonian evolution, are covered by
the code {\bf OneOpA}.
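As a usage sketch (a fragment we add for illustration, assuming the gate is
passed to {\bf OneOpA} as a $2\times2$ complex matrix and that {\bf psi},
{\bf NPART} and the communicator {\bf COMM} have already been set up), a
Hadamard gate acting on qubit 2 would be applied as:
\begin{verbatim}
complex(kind=kind(1.0d0)) :: H(2,2)
real(kind=kind(1.0d0))    :: rt2

rt2 = sqrt(2.0d0)
H(1,1) = 1.0d0/rt2;  H(1,2) =  1.0d0/rt2
H(2,1) = 1.0d0/rt2;  H(2,2) = -1.0d0/rt2

call OneOpA(nq, 2, H, psi, NPART, COMM)   ! psi returns already updated
\end{verbatim}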
\section{MULTI-QUBIT OPERATORS}
\label{sec4}
Let us return to the main issue of how to distribute the amplitudes over
several processors and to cope with the action of operators on a quantum
state. The case of a two-qubit operator is a generalization of the steps
discussed for a one-qubit operator. Nevertheless, it is worthwhile to
present those details, as a guide to those who plan to use and perhaps
extend {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}.
We now consider a general two-qubit operator that we assume acts on
qubits $i_{s_1}$ and $i_{s_2},$ each of which ranges over the full set
$\{1, \cdots, n_q\}$ of possible qubits. General two-qubit operators can
be constructed from tensor products of two Pauli operators; we denote the
general two-qubit operator as $ {\bf \cal{ V} }.$ Consider the action of
such an operator on the multi-qubit state $\mid \Psi \rangle_{n_q} : $
\begin{eqnarray}
{\bf \cal{ V} }\!\! \mid \Psi\rangle_{n_q}&=& \sum_{{\bf Q} } C_{{\bf Q} } \ \ {\bf \cal{ V} }\!
\! \mid {{\bf Q}} \rangle \\ \nonumber
&=&
\sum_{q_1=0}^{1} \cdots \sum_{q_{s1},q_{s2}=0}^{1} \cdots \sum_{q_{n_q}=0}^{1}\ C_{{\bf Q} } \ \
\mid q_1 \rangle \cdots ( {\bf {\cal V} }\!\! \mid q_{s1} q_{s2} \rangle )\ \cdots \mid q_{n_q} \rangle.
\label{op2}
\end{eqnarray}
Here $ {\bf \cal{ V} }$ is assumed to act only on the two $i_{s1},i_{s2}$
qubits. The $( {\bf {\cal V} }\!\! \mid q_{s1}\ q_{s2} \rangle )$ term
can be expressed as
\begin{equation}
{\bf {\cal V} }\!\! \mid q_{s1}\ q_{s2} \rangle
= \sum_{q'_{s1},q'_{s2}=0}^{1} \mid q'_{s1}\ q'_{s2} \rangle \langle q'_{s1}\ q'_{s2} \mid\! {\bf {\cal V} }\!\mid q_{s1}\ q_{s2} \rangle
\label{closure2}
\end{equation}
using the closure property of the two-qubit product states. Thus Eq.~(\ref{op2}) becomes
\begin{eqnarray}
{\bf { \cal V} } \!\! \mid \Psi\rangle_{n_q}&=& \sum_{\bf Q} C_{\bf Q}\ {\bf {\cal V}} \!\! \mid { \bf Q } \rangle
=
\sum_{q_1=0}^{1} \cdots \sum_{q_{s1}=0}^{1} \cdots \sum_{q_{s2}=0}^{1} \cdots \sum_{q_{n_q}
=0}^{1} \sum_{q'_{s1},q'_{s2}=0}^{1} \\ \nonumber C_{\bf Q}
&&\langle q'_{s1} q'_{s2} \mid \!{\bf {\cal V} }\! \mid q_{s1} q_{s2} \rangle\ \mid q_1 \rangle \cdots \ \mid q'_{s1} q'_{s2} \rangle \cdots \ \mid q_{n_q} \rangle.
\end{eqnarray}
Now we can interchange the labels $ q_{s1} \leftrightarrow q'_{s1} , q_{s2}
\leftrightarrow q'_{s2} $ and use the label $ {\bf Q} $ to obtain the
algebraic result for the action of a two-qubit operator on a multi-qubit state
\begin{equation}
{\bf \cal{ V} } \mid \Psi\rangle_{n_q}= \sum_{\bf Q} {\tilde C}_{\bf Q}\ \mid { \bf Q } \rangle =
\sum_{n=0}^{ 2^{n_q}-1 } {\tilde C}_{n}\ \mid n\rangle,
\end{equation} where
\begin{equation}
{\tilde C}_{\bf Q}= {\tilde C}_{n} = \sum_{q'_{s1},q'_{s2}=0}^{1}\!\!
\langle q_{s1} q_{s2} \mid \!{\bf {\cal V} }\! \mid q'_{s1} q'_{s2} \rangle\
C_{\bf Q'},
\label{twoopA}
\end{equation}
Here $ { \bf{Q}} = \left( q_1, q_2, \cdots q_{n_q} \right) ,$ and
$ { \bf{Q'}} = \left( q_1,\cdots q'_{s1} \cdots q'_{s2} \cdots q_{n_q}
\right).$ That is, ${ \bf Q}$ and ${ \bf Q'}$ are equal except for the qubits
acted upon by the two-body operator ${\bf {\cal V}}.$
A better way to state the above result is to consider Eq.~(\ref{twoopA}) for
the following four choices
\begin{eqnarray}
n_{ 00} & \rightarrow & ( q_1 \cdots q_{s1}=0 \cdots q_{s2}=0 ,\cdots q_{n_q} )
\nonumber \\
n_{01} & \rightarrow & ( q_1 \cdots q_{s1}=0 \cdots q_{s2}=1 ,\cdots q_{n_q} )
\nonumber \\
n_{10} & \rightarrow & ( q_1 \cdots q_{s1}=1 \cdots q_{s2}=0 ,\cdots q_{n_q} )
\nonumber \\
n_{11} & \rightarrow & ( q_1 \cdots q_{s1}=1 \cdots q_{s2}=1 ,\cdots q_{n_q} ) ,
\label{twoopB}
\end{eqnarray}
where the computational basis state label $n_{q_{s1},q_{s2}}$ denotes the four
decimal numbers corresponding to ${\bf Q} = ( q_1, \cdots q_{s1} \cdots q_{s2 } \cdots q_{n_q}).$
Evaluating Eq.~(\ref{twoopA}) for the four choices Eq.~(\ref{twoopB}) and
completing the sums over $q'_{s1}, q'_{s2} ,$ the effect of a general
two-qubit operator on the multi-qubit state amplitudes is given by a $4 \times 4$ matrix
\begin{equation}
\left(
\begin{array}{c}
{\tilde C}_{n_{00}} \\
{\tilde C}_{n_{01}} \\
{\tilde C}_{n_{10}} \\
{\tilde C}_{n_{11}}
\end{array}
\right)
= \left(
\begin{array}{cccc}
{\cal V}_{00;00} & {\cal V}_{00;01} & {\cal V}_{00;10} & {\cal V}_{00;11}\\
{\cal V}_{01;00} & {\cal V}_{01;01} & {\cal V}_{01;10} & {\cal V}_{01;11}\\
{\cal V}_{10;00} & {\cal V}_{10;01} & {\cal V}_{10;10} & {\cal V}_{10;11}\\
{\cal V}_{11;00} & {\cal V}_{11;01} & {\cal V}_{11;10} & {\cal V}_{11;11}
\end{array} \right)
\left(
\begin{array}{c}
C_{n_{00}} \\
C_{n_{01}} \\
C_{n_{10}} \\
C_{n_{11}}
\end{array}
\right) \ ,
\label{resmat2}
\end{equation}
where $ {\cal V}_{ij;kl} \equiv \langle i,j \mid {\cal V} \mid k , l
\rangle.$ Equation~(\ref{resmat2}) above shows how a $4\times4$ two-qubit operator
${\cal V}$ acting on qubits $i_{s1},i_{s2} $ changes the state amplitudes for
each value of $n_{00}.$ Here, $n_{00}$ denotes the decimal number for a
computational basis state with qubits $i_{s1},i_{s2} $ both having the
value zero, while $n_{01}, n_{10},$ and $n_{11}$ denote its three partner decimal numbers
for computational basis states with qubits $i_{s1},i_{s2} $ having the values $(0,1), (1,0),$ and $(1,1),$
respectively. The four partners $n_{00},n_{01},n_{10},n_{11},$ or ``amplitude
quartet,'' coupled by the two-qubit operator are related by:
\begin{equation}
n_{01}=n_{00} + 2^{n_q - i_{s2}} \qquad n_{10}=n_{00} + 2^{n_q - i_{s1}}
\qquad n_{11}=n_{00} + 2^{n_q - i_{s1}}+2^{n_q - i_{s2}},
\end{equation}
where $ i_{s1}, i_{s2}$ label the qubits that are acted on by the two-qubit operator.
There are $2^{n_q}/4$ values of $n_{00}$ and hence $2^{n_q}/4$ amplitude
quartets $n_{00}, n_{01}, n_{10},n_{11}.$ Equation~(\ref{resmat2}) is applied
to each of these quartets for a given pair of struck qubits. In {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ that
process is included in the subroutine {\bf TwoOpA}.
In this treatment, we are essentially replacing a large sparse matrix
by a set of $2^{n_q}/4$ separate $4 \times 4 $ matrix actions, thereby saving
the storage of that large matrix.
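As an illustration of this bookkeeping, a minimal serial sketch of the quartet update (our own construction with hypothetical names; the actual {\bf TwoOpA} also handles the MPI transfers) is:
\begin{verbatim}
! Minimal serial sketch of a two-qubit gate on qubits is1 < is2.
! V is the 4x4 matrix shown in the text above; C(0:2**nq-1) are the amplitudes.
subroutine TwoOpSerial(nq, is1, is2, V, C)
  implicit none
  integer, intent(in)    :: nq, is1, is2
  complex, intent(in)    :: V(0:3,0:3)
  complex, intent(inout) :: C(0:2**nq-1)
  integer :: n, d1, d2, i, j, idx(0:3)
  complex :: old(0:3), upd(0:3)
  d1 = 2**(nq - is1)
  d2 = 2**(nq - is2)
  do n = 0, 2**nq - 1
     ! n is a quartet label n00 when both struck bits are zero
     if (iand(n/d1,1) == 0 .and. iand(n/d2,1) == 0) then
        idx = (/ n, n+d2, n+d1, n+d1+d2 /)   ! n00, n01, n10, n11
        old = C(idx)
        do i = 0, 3
           upd(i) = (0.0,0.0)
           do j = 0, 3
              upd(i) = upd(i) + V(i,j)*old(j)
           end do
        end do
        C(idx) = upd
     end if
  end do
end subroutine TwoOpSerial
\end{verbatim}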
In the next section, the procedure for distributing the state vector
over several processors is illustrated along with the changes
induced by the action of a two-body operator.
\subsection{Action of Two-Qubit Operators}
To visualize the distribution of the amplitudes over several processors and
the role played by MPI in transferring the amplitudes to the appropriate locations
when a {\it two}-qubit operator acts, the following diagrams lay out
the scheme. We again use the case of $n_q=3$ or $2^3=8$ components as a simple illustration.
The first case takes the amplitude quartet labels $n=0,1,2,3,$ which
correspond to the binary numbers $(000), (001), (010),$ and $(011).$ Here
we use the binary labels for the components and consider the special case
of a two-qubit operator acting on {\bf qubits 2 and 3.} We consider
just two processors. In this case the four amplitudes affected by the
two-qubit operator reside on the same processor, i.e., they have just
different seats in the same section. Thus there is no need for MPI data transfer.
\begin{figure}[t]
\begin{center}
\includegraphics[width=10cm]{fig3}
\caption{A three qubit state vector is acted on by a two-qubit operator on
qubits 2 and 3 ($i_{s1}=2,i_{s2}=3$). The case illustrated is for the
amplitude quartet $n =0,1,2,3,$ which corresponds to the binary numbers
$(000), (001), (010),$ and $(011).$ It is assumed that there are just
two processors $N_P=2^p,$ with $p=1.$ Thus $ i_{s2} > i_{s1}>p$ for this
case, the four coupled amplitudes reside on the same processor, and
no MPI data transfer is invoked. The dashed circles indicate that all four
amplitudes contribute to forming the values of
${\tilde C}_{000}, {\tilde C}_{001} , {\tilde C}_{010}, {\tilde C}_{011},$
which are given by Eq.~(\ref{twoopA}). }
\label{MPIrole2A}
\end{center}
\end{figure}
Now consider the amplitude quartet labels $n=0, 2, 4, 6,$ which
correspond to the binary numbers $(000), (010), (100),$ and $(110).$
Again we use the binary labels for the components, but now consider
the special case of a two-qubit operator acting on {\bf qubits 1 and 2.}
We again consider just two processors. For this case, the four amplitudes
affected by the two-qubit operator do not all reside on the same processor,
i.e., they are in different sections. Thus there is now an essential
need for MPI data transfer, which involves sending and receiving
as illustrated in Fig.~\ref{MPIrole2B}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=10cm]{fig4}
\caption{A three qubit state vector is acted on by a two-qubit operator on
qubits 1 and 2 ($i_{s1}=1,i_{s2}=2$). The case illustrated is for the
amplitude quartet $n =0,2,4,6,$ which corresponds to the binary numbers
$(000), (010), (100),$ and $(110).$ It is assumed that there are just
two processors $N_P=2^p,$ with $p=1.$ Thus $ i_{s2} > p,$ but we do {\bf not}
have $i_{s1} >p;$ thus for this case the amplitudes reside on different processors
and MPI data transfer is invoked. The dashed circles indicate that all four
amplitudes are to be sent/received from other locations. }
\label{MPIrole2B}
\end{center}
\end{figure}
For the two qubit case, the steps here are called by the command
\begin{center}
\fbox{ \bf call TwoOpA(nq,is1,is2,Op,psi,NPART,COMM) }
\end{center}
where {\bf nq} denotes the number of qubits; {\bf is1,is2} label the qubits acted on by
operator {\bf Op}, {\bf psi} is an input wave function array of length {\bf NPART}=$N_x,$ which
is returned as the modified state vector.
\subsection{CNOT and CPHASE}
The two-qubit operators {\bf CNOT} and {\bf CPHASE} are oft-used special cases
of the above two-qubit operator discussion. They are simpler than the general case because
they are given by the sparse matrices
\begin{equation}
{\cal V} \rightarrow {\rm CNOT} =
\left(
\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\\
\end{array}\right)\, , \quad
{\cal V} \rightarrow {\rm CPHASE} =
\left(
\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & -1\\
\end{array}\right) \, .
\end{equation}
{\bf CNOT} stores the rule $00 \rightarrow 00, 01 \rightarrow 01,10 \rightarrow 11,11 \rightarrow 10 ,$
where qubit 1 is the control, and qubit 2 gets acted on by $\sigma_1$
only when qubit 1 has a value of one. In {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}, a subroutine {\bf CNOT} codes
this special two-qubit operator:
\begin{center}
\fbox{ \bf call CNOT(nq,is1,is2,psi,NPART,COMM) }.
\end{center}
{\bf CPHASE} stores the rule
$00 \rightarrow 00, 01 \rightarrow 01,10 \rightarrow 10,11 \rightarrow -11 ,$
where qubit 1 is the control, and qubit 2 gets acted on by $\sigma_3$ only
when qubit 1 has a value of one. In {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}, a subroutine {\bf CPHASEA} codes this
special two-qubit operator:
\begin{center}
\fbox{ \bf call CPHASEA(nq,is1,is2,psi,NPART,COMM) }.
\end{center}
Another two-qubit operator which plays a key role in the quantum Fourier transformation,
is the {\bf CPHASEK}, given by a sparse matrix that depends on a positive integer $k$
\begin{equation}
{\cal V} \rightarrow {\rm CPHASEK} =
\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & e^{ \frac{2 \pi i}{2^k}} \\
\end{array}\right) \, .
\label{cphasek}
\end{equation}
In {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}, a subroutine {\bf CPHASEK} codes this special two-qubit operator:
\begin{center}
\fbox{ \bf call CPHASEK(nq,is1,is2,k,psi,NPART,COMM) }.
\end{center}
Note that there are no MPI commands required in this subroutine.
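To see why no communication is needed, note that {\bf CPHASEK} (like {\bf CPHASE}) is diagonal: it only multiplies the $n_{11}$ member of each quartet by a phase, so no amplitudes have to be moved between processors. A minimal serial sketch (our own illustration, not the package source) is:
\begin{verbatim}
! Serial sketch of CPHASEK: only the n11 member of each quartet
! acquires the phase exp(2*pi*i/2**k); no amplitudes change location.
subroutine CphasekSerial(nq, is1, is2, k, C)
  implicit none
  integer, intent(in)    :: nq, is1, is2, k
  complex, intent(inout) :: C(0:2**nq-1)
  integer :: n, d1, d2
  real, parameter :: pi = 3.14159265
  complex :: phase
  d1 = 2**(nq - is1)
  d2 = 2**(nq - is2)
  phase = exp( (0.0,1.0)*2.0*pi/2.0**k )
  do n = 0, 2**nq - 1
     if (iand(n/d1,1) == 1 .and. iand(n/d2,1) == 1) C(n) = C(n)*phase
  end do
end subroutine CphasekSerial
\end{verbatim}
Similarly, {\bf CNOT} merely swaps the $n_{10}$ and $n_{11}$ amplitudes of each quartet, although in the distributed case those two amplitudes may reside on different processors.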
\subsection{The Full Hadamard--Special Handling}
\label{sec5C}
An important example of a multi-qubit operation is when Hadamards act on all
of the qubits in a system--a step that is often used to initialize a QC.
One way to do that is simply to repeat the prior discussion
and use the subroutine for qubits $i_s =1 \cdots n_q.$ That procedure is implemented in the subroutine {\bf HALL}.
Hadamards acting on all qubits involves the operator
\begin{equation}
{\cal H}^{n_q} \equiv \left [ {\cal H}_1 \otimes{\cal H}_2 \cdots \otimes {\cal H}_{n_q}\right] .
\label{HALL}
\end{equation} Another way to implement this operator is based on the realization
that when acting on the column vector $(C_{0} \cdots C_{2^{n_q} -1} ) $ it forms an equally
weighted combination with particular signs $s_{n,n'},$ whereby the effect of $ {\cal H}^{n_q} $ is
\begin{equation}
C_n \rightarrow \frac{1}{2^{\frac{n_q}{2} }}
\sum_{n'=0}^{2^{n_q} -1}s_{n,n'} C_{n'} \equiv {\tilde C}_n \, .
\end{equation}
The task is to determine the signs $s_{n,n'}.$
These signs are relatively simple to pin down.
From the product structure of $ {\cal H}^{n_q} $ it is simple to show that the
signs are determined by the condition $s_{n,n'}=(-1)^\delta,$ where $\delta$
denotes the number of bit positions at which the binary expansions of $n$ and $n'$
both have the value one, i.e., how many times $B(i)=B'(i)=1.$ This condition is carried
out in the function SH:
\begin{center}
\fbox{ \bf FUNCTION SH(nq,n,np) }
\end{center}
where {\bf nq} is the number of qubits and $np=n'.$ This procedure is implemented in the subroutine {\bf HALL2}.
The user should decide which version {\bf HALL} or {\bf HALL2} works best in their context.
Ironically, although a small subroutine, {\bf HALL} or {\bf HALL2} is used repeatedly in
Grover's search and dominates the time
expended in a large-qubit quantum search. We shall refer to the operator $ {\cal H}^{n_q} $
as {\bf HALL} throughout this paper, recognizing that it can be implemented using either
{\bf CALL HALL} or by the special sign handling method {\bf CALL HALL2}.
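A minimal sketch of the sign rule (our own illustration; the actual function {\bf SH} in the package may differ in detail) is:
\begin{verbatim}
! Sketch of the HALL2 sign rule: s(n,n') = (-1)**delta, where delta is
! the number of bit positions at which n and n' both have the value 1.
integer function SHsketch(nq, n, np)
  implicit none
  integer, intent(in) :: nq, n, np
  integer :: i, delta
  delta = 0
  do i = 0, nq-1
     if (btest(n,i) .and. btest(np,i)) delta = delta + 1
  end do
  SHsketch = 1 - 2*mod(delta,2)    ! +1 for even delta, -1 for odd delta
end function SHsketch
\end{verbatim}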
\section{A SAMPLE OF RELEVANT QUANTUM ALGORITHMS}
{\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,} permits the simulation of any quantum circuit on a
parallel computing environment. In this section we describe
two well-known QC algorithms already included in the current
package and which exemplify the use of {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,} in practical
applications.
\subsection{GROVER's searching algorithm}
\label{sec5}
We now show how to apply the operators (gates), and the treatment of a
multi-qubit state, to the first of several basic QC algorithms. These
are standard procedures in QC and we examine them with {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ so
that one can describe these algorithms dynamically using basic,
realistic Hamiltonians and also subject these procedures to environmental effects.
The case of superdense coding~\cite{superdense}, which is a way to
enhance communication between Alice and Bob by means of shared
entangled states, has also been developed in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}.
Our first application presented here is Grover's search
algorithm~\cite{Grover}. In this case, we start with a state of $n_q$ qubits
all pointing up $\mid 0 0 0 \cdots 0 \rangle,$ and act on it with {\bf HALL},
see Eq.~(\ref{HALL}). Then, we need an all-knowing Oracle operator
${\cal \bf O\rm racle}$ to mark an item that is to be ferreted out by the algorithm.
The Oracle step is very simple when we use amplitude coefficients $C(n);$
we simply find the processor (section) and location on that processor
(seat) associated with the marked item $n_x$ and reverse the sign of that
amplitude $C(n_x) \rightarrow - C(n_x).$ All other amplitudes are unchanged.
\begin{equation}
{\cal \bf O\rm racle}
\left(
\begin{array}{l}
C_{0} \\
C_{1} \\
\ \vdots \\
C_{n_x} \\
\ \vdots \\
C_{2^{n_q}-1}
\end{array}
\right) \longrightarrow
\left(
\begin{array}{l}
C_{0} \\
C_{1} \\
\ \vdots \\
-C_{n_x} \\
\ \vdots \\
C_{2^{n_q}-1}
\end{array}
\right) \, .
\label{oracle1}
\end{equation}
This process can be extended to two or more marked items.
Grover's procedure, which entails using constructive interference to
make the marked item's amplitude stand out from all others, involves
acting repeatedly on the state {\bf HALL}$ \mid 0 0 0 \cdots 0 \rangle$
with the ``Grover operator"
\begin{equation}
{\cal G} \equiv {\rm HALL} \ \cdot \ {\cal Inv}\ \cdot \ {\rm HALL} \ \cdot
\ {\cal \bf O\rm racle}
\end{equation}
where ${\cal Inv}$ is an operator
\begin{equation}
{\cal Inv} = 2 \mid 0 0 0 \cdots 0 \rangle \langle 0 0 0 \cdots 0 \mid - {\bf I}
\end{equation} where ${\bf I}$ is a $2^{n_q}\times2^{n_q}$ unit matrix. The
operator $ {\cal Inv}$ is simply realized by changing the sign of
all amplitudes except for the $n=0$ one.
\begin{equation}
{\cal Inv}
\left(
\begin{array}{l}
C_{0} \\
C_{1} \\
C_{2} \\
\ \vdots \\
C_{2^{n_q}-1}
\end{array}
\right) \longrightarrow
\left(
\begin{array}{l}
+ C_{0} \\
- C_{1} \\
- C_{2} \\
\ \vdots \\
-C_{2^{n_q}-1}
\end{array}
\right) \, .
\label{inversion}
\end{equation} The combination {\bf HALL} $ \cdot \ {\cal Inv} \ \cdot $ {\bf HALL} is called an
inversion about the mean and is an essential part of Grover's algorithm.
The other essential part is to act on the initial state enough times $n_t$
to produce an amplitude ${\bf {\tilde C}_{n_x}}$ that stands out with
high probability. We take the number of iterations to be $n_t= \frac{\pi}{4} \sqrt{ 2^{n_q } }\ .$
\begin{equation}
G^{n_t} \cdot {\rm HALL} \mid 0 0 0 \cdots 0 \rangle \rightarrow
\left(
\begin{array}{l}
{\tilde C}_{0} \\
{\tilde C}_{1} \\
\ \vdots \\
{\bf {\tilde C}_{n_x}} \\
\ \vdots \\
{\tilde C}_{2^{n_q}-1}
\end{array}
\right) \, .
\end{equation}
In all of these steps, it is the ${\rm {\bf HALL}}$ operator and its
repeated application in $G^{n_t}$ that involves the most time expenditure.
The {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ Grover search for a large number of qubits is included as Guniv.f90. Instructions for
running the code and a guide to the steps invoked are incorporated directly as comments in the listing.
Generalization to include noise using a multiverse approach is discussed later.
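For readers who want to check the algebra, the whole Grover iteration collapses, on a single processor, to a few array operations: the Oracle flips the sign of the marked amplitude and {\bf HALL}$\,\cdot\,{\cal Inv}\,\cdot\,${\bf HALL} is an inversion about the mean. A minimal serial sketch (our own illustration, independent of the MPI machinery) is:
\begin{verbatim}
! Serial sketch of Grover's search on a full state vector.
program GroverSketch
  implicit none
  integer, parameter :: nq = 10
  integer :: N, nx, nt, it
  real :: C(0:2**nq-1), avg
  N  = 2**nq
  nx = 137                           ! marked item (arbitrary choice)
  nt = nint( (3.14159265/4.0)*sqrt(real(N)) )
  C  = 1.0/sqrt(real(N))             ! HALL acting on |00...0>
  do it = 1, nt
     C(nx) = -C(nx)                  ! Oracle
     avg   = sum(C)/real(N)
     C     = 2.0*avg - C             ! inversion about the mean
  end do
  print *, 'iterations =', nt, '  success probability =', C(nx)**2
end program GroverSketch
\end{verbatim}
Because the initial state and all of the operators involved are real, real amplitudes suffice for this check; the success probability printed at the end is close to one.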
\subsection{SHOR's factoring algorithm}
\label{sec6}
Shor's algorithm~\cite{Shor} is a QC method for factoring a large number.
The basic idea is to prepare a state that, when subjected to a Quantum
Fourier Transform (QFT), permits one to search for the period of a function
that reveals the requisite factors with high probability. It uses quantum
enhancement to go well beyond classical factoring procedures and yields the
factors with high probability for very large non-prime numbers, after
relatively few tries. To simulate this algorithm, where we are restricted
to numbers much smaller than will be possible on a future QC, there are several
steps implemented in the {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,} \ code subshor.f90. A pedagogic analysis
of the reasons for each step of Shor's algorithm is presented in reference~\cite{Gerjuoy}.
\subsection*{ {\bf Step 1:} Choose the number, $M,$ and set the register sizes}
Choose the number, $M,$ to be a factorable number:
$15, 21, 33, 35, 39, 55, 77, \cdots$ and determine the size of the two requisite
registers.
Preparatory tests on the input number $M$ are made so that the code
continues only if the input number is not a power of 2 or a prime and
is thus a suitable candidate for factoring. These are classical procedures
performed by standard codes. Then, based on having an acceptable input
number, an initial state of two distinct registers is prepared, with
register one having $n_1$ and register two having $n_2$ qubits. The
first register should~\cite{Shor,Gerjuoy} have enough qubits to store
in base 2 all numbers in the range $M^2$ to $2 M^2,$ i.e.
$ M^2 \leq 2^{n_1}< 2 M^2.$ Therefore the choice for $n_1$ is set as
\begin{equation}
n_1 = {\rm Ceiling}(\log_2(Q)\ ) \qquad Q=2^{{\rm Ceiling}(\log_2(M^2)\ )}
\end{equation}
where ${\rm Ceiling}( x)$ gives the smallest integer greater than or
equal to $x.$~\footnote{ For example, if $M=21,$ and $M^2=441,$ then
$n_1=9$ and $2^9=512,$ and register one includes the value $441.$ If $n_1$
were taken as $8,$ then register one would not include the value $441,$
since $2^8=256.$ If $n_1$ were taken as $10,$ then register one would exceed
the value $2 M^2=882,$ since $2^{10}=1024.$ }
The number of qubits in register two is set by
\begin{equation}
n_2 = {\rm Ceiling}(\log_2(M)\ )
\end{equation}
so that there are enough qubits to store in base 2 all numbers up to and
including the value of the input number $M.$ ~\footnote{For example,
if $M=21,$ then $n_2=5$ and $2^5=32>M,$ and register two includes the
value $21.$ If $n_2$ were taken as $4,$ then register two would not include
the value $21,$ since $2^4=16.$}
Here we invoke the minimum number of qubits for both registers. A larger
$n_1$ lengthens the computation, albeit providing higher probability of success.
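A minimal sketch of these register-size choices (our own illustration, using the $M=21$ example of the footnotes) is:
\begin{verbatim}
! Sketch of the Step 1 register sizes.
program ShorRegisters
  implicit none
  integer :: M, n1, n2
  M  = 21
  n1 = ceiling( log(real(M)**2)/log(2.0) )   ! M^2 <= 2**n1 < 2*M^2
  n2 = ceiling( log(real(M))   /log(2.0) )   ! 2**n2 >= M
  print *, 'M =', M, '  n1 =', n1, '  n2 =', n2   ! gives n1=9, n2=5
end program ShorRegisters
\end{verbatim}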
\subsection*{ {\bf Step 2 :} Load the first register}
Load the first register with all the integers less than or equal
to $2^{n_1}-1$.
This is achieved by acting with a Hadamard on all qubits in register one, that is
use {\bf HALL} on a basis state of $n_1$ qubits so that register one is set to the state
\begin{equation}
\mid \Psi_1\rangle = {\rm HALL} \mid 00 \cdots 0\rangle_{n_1}
= \frac{1}{2^{\frac{n_1}{2} }}\ \sum_{n=0}^{2^{n_1}-1}\ \mid n\rangle_{n_1}.
\label{Shor1}
\end{equation}
Thus each of the $2^{n_1}$ amplitudes appears with equal weight, which is the
quantum massively parallel processing feature.
Next we attend to setting register two.
\subsection*{ {\bf Step 3 :} Load the second register}
Select an integer $({\rm xguess})$ coprime to $M$ and load the function
$f(j) \equiv {\rm xguess}^j \ ({\rm mod}\ M)$ into the second register for a fixed
choice of ${\rm xguess}$ and all possible $j$ values: $ 0 \leq \, j \, \leq 2^{n_1}-1.$
Note that in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}, we simply compute $\mid f(n)\rangle$ and tack it on to
the $\mid n\rangle_{n_1}$ state. As discussed by Shor~\cite{Shor}, one must
actually do this crucial step by a quantum process for modular exponentiation.
In Shor's algorithm, ${\rm xguess}$ is a random choice for a number that is coprime
to M, where coprime means that $M$ and ${\rm xguess}$ have no common factor other
than 1. Euler's phi function of $M$ is used to determine the range of
integers between 1 and $M$ that are coprime to $M;$ a number in this
interval $a$ is selected and then tested using ${\rm GCD }[a,M]\equiv 1$ to
assure that it is coprime to $M.$ If it passes that test we set the
value for $a \rightarrow {\rm xguess}.$
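The classical coprimality test amounts to a greatest-common-divisor check; a minimal sketch (our own helper, not the package source; for illustration it simply takes the first acceptable value rather than a random one) is:
\begin{verbatim}
! Sketch of the classical test that a trial xguess is coprime to M.
program PickXguess
  implicit none
  integer :: M, a
  M = 21
  do a = 2, M-1
     if (gcd(a,M) == 1) then
        print *, 'acceptable xguess =', a
        exit
     end if
  end do
contains
  recursive function gcd(x, y) result(g)
    integer, intent(in) :: x, y
    integer :: g
    if (y == 0) then
       g = x
    else
       g = gcd(y, mod(x,y))
    end if
  end function gcd
end program PickXguess
\end{verbatim}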
The reason for defining the above function (aka the Shor Oracle) is that
this function $f(j) $has a characteristic period for each value of $M$ and ${\rm xguess}.$
\begin{equation}
\mid f(n+r)\rangle=\mid f(n)\rangle , \qquad f(n) = {\rm xguess}^n \ ({\rm mod}\ M).
\label{fperiod}
\end{equation}
Finding the period $r$ is a key goal.
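Classically, the period can be found by repeated modular multiplication until $f$ returns to one; a minimal sketch (our own illustration; on a QC this is precisely the step that the quantum Fourier transform replaces) is:
\begin{verbatim}
! Classical sketch of the period r of f(n) = xguess**n (mod M).
program FindPeriod
  implicit none
  integer :: M, xguess, f, r
  M = 21;  xguess = 2
  f = 1;   r = 0
  do
     f = mod(f*xguess, M)
     r = r + 1
     if (f == 1) exit
  end do
  print *, 'period r =', r     ! M=21, xguess=2 gives r=6
end program FindPeriod
\end{verbatim}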
The full state composed of registers 1 and 2 $(n_q=n_1+n_2)$ is built in the
following way:
\begin{equation}
\mid \Psi\rangle_{n_q} = \frac{1}{2^{\frac{n_1}{2} }}\
\sum_{n=0}^{2^{n_1}-1} \ \mid n\rangle_{n_1} \mid f(n) \rangle_{n_2}\,.
\label{Shor2}
\end{equation}
\subsection*{ {\bf Step 4 :} Measure register 2}
We could measure the second register next or postpone that act to
coincide with step 6 below, because step 5 involves register 1 only. It
is helpful to think of the action of measuring register 2 now to motivate
the need for step 6. For each possible measured value $k$ of register 2, one
asks if register 2 is in state $ _{n_2}\, \!\!\langle k\mid$ by projecting the
full state
\begin{equation}
_{n_2}\!\!\langle k\mid \Psi\rangle_{n_q} = \frac{1}{2^{\frac{n_1}{2} }}\
\sum_{n=0}^{2^{n_1}-1} \ \mid n\rangle_{n_1} \langle k\mid f(n)
\rangle_{n_2} \rightarrow \frac{1}{D^{\frac{1}{2} }}\
\sum_{j=0}^{D-1} \ \mid n_k + j r\rangle_{n_1} ,
\label{projreg2}
\end{equation}
where at the last stage the state is normalized after projection--the usual
Born rule for a projective measurement. Note that for every choice of
$k,$ $D$ terms of the first register appear in superposition, where
$D \approx 2^{n_1} /r . $ The integer $ D$ is
ascertained~\footnote{ The value of $D$ is constrained by the conditions
$ 0 \leq n_k + (D-1) r \leq 2^{n_1}-1 $ and $ 0 \leq n_k \leq r-1.$
Hence, the integer $D$ is constrained by
$D \leq \frac{ 2^{n_1}}{r} +\frac{r-1-n_k}{r}.$ } by the period $r$ for a
fixed ${\rm xguess}$ by
\begin{equation}
\mid f(n_k+ (D-1) r)\rangle= \mid f(n_k+ (D-2) r)\rangle=\cdots=
\mid f(n_k+ 1 r)\rangle=\mid f(n_k)\rangle=k.
\end{equation}
The next step involves acting on register 1 to search for the
period $r$ by enhancing the quantum interference using a quantum Fourier transform.
\subsection*{ {\bf Step 5 :} Quantum Fourier Transform register 1}
Register one, which is now in the state
\begin{equation}
\mid \Phi\rangle_{n_1} = \frac{1}{\sqrt{D}} \ \sum_{j=0}^{D-1} \ \mid n_k + j r \rangle_{n_1},
\label{Phistate}
\end{equation} is next acted upon by a quantum Fourier transformation operator
${\bf\rm QFT}$ (see the Appendix for a discussion of the QFT operator) which changes the above state
\begin{eqnarray}
{\rm QFT} \mid \Phi\rangle_{n_1} &=& \frac{1}{\sqrt{ 2^{n_1} D}} \
\sum_{j=0}^{D-1} \ \sum_{n=0}^{2^{n_1}-1} \
e^{ 2 \pi i n (n_k + j r)/2^{n_1}} \mid n \rangle_{n_1} \nonumber \\
&=& \frac{1}{\sqrt{ 2^{n_1} D}} \ \sum_{n=0}^{2^{n_1}-1} \
e^{ 2 \pi i n n_k /2^{n_1}} \sum_{j=0}^{D-1} \ (e^{ 2 \pi i n r/2^{n_1}} )^j \mid n \rangle_{n_1}\,.
\label{QFT1}
\end{eqnarray} The QFT is a unitary operator which switches to a basis in which the superposition
is isolated into the above exponential amplitude. The sum on $j$ can be
performed~\footnote{ The following summation rule is used here
$ \sum_{j=0}^{D-1}\ X^j = \frac{X^D -1}{X -1}.$ One can also display
this as 2-D vector additions of equal length phasors, as in Fresnel zone
plate interference.} and thus the result is
\begin{equation}
{\rm QFT} \mid \Phi\rangle_{n_1} = \frac{1}{\sqrt{ 2^{n_1} D}} \
\sum_{n=0}^{2^{n_1}-1} \ e^{ 2 \pi i n n_k /2^{n_1} } \ e^{ \pi i n (D-1) /2^{n_1} }\
\frac{ \sin( D \pi n \frac{r}{2^{n_1}} ) }{\sin(\pi n \frac{r}{2^{n_1}}
)}
\mid n \rangle_{n_1} .
\label{QFT2}
\end{equation}
The probability for finding the final state with register 1 in state $
\langle n\mid $
and register 2 in state $ \langle k \mid $ is therefore
\begin{equation}
p(n,k ) = \frac{1}{ 2^{n_1} D} \
\left[\frac{ \sin( D \pi n \frac{r}{2^{n_1}} ) }{\sin(\pi n \frac{r}{2^{n_1}} )} \right]^2,
\label{probshor} \end{equation} where the dependence on $k$ has dropped out
(except for a possible dependence of $D$ on $k$; see the earlier footnote).
Note that for the special case that $ n r/2^{n_1}$ is an integer, the
above result reduces to $p(n,k ) \rightarrow \frac{D}{ 2^{n_1} } \approx
\frac{1}{r} , $
which is understood by returning to Eq.~(\ref{QFT1}) and using
$\sum_{j=0}^{D-1} \ (1 )^j =D. $
How can one extract the period $r$ from making
such a measurement on registers 1 and 2?
\subsection*{ {\bf Step 6:} Measure register 1 and determine period and factors}
The measurement of registers 1 and 2 has a probability given by
Eq.~(\ref{probshor}). At select values of $n\rightarrow \bar{n},$ the
probability $p(n,k )$ has local maxima. Consider the associated
fraction $\frac{ \bar{n} }{2^{n_1}},$ which is extracted from a
determination of those local maxima. At these maxima
\begin{equation}
p_{max} = \frac{1}{ 2^{n_1} D} \
\left[\frac{ \sin( D \pi r \frac{\bar{n}}{2^{n_1}} ) }
{\sin(\pi r \frac{\bar{n}}{2^{n_1}} )} \right]^2.
\label{probshor2}
\end{equation}
In the arguments of the sine functions in Eq.~(\ref{probshor2}), $D$ is an
integer, so the maximum probability occurs in the vicinity of an integer value for
$r \frac{\bar{n}}{2^{n_1}} .$ We therefore seek an approximate value of the
ratio $ \frac{\bar{n}}{2^{n_1}} \approx {\rm integer}/r,$ for an even $r.$ That
ratio is found by expressing $ \frac{\bar{n}}{2^{n_1}} $ as a continued
fraction and determining its first convergent of the form ${\rm integer}/r,$
for an even $r.$\footnote{Gerjuoy~\cite{Gerjuoy} showed that the maximum
probability is not less than $ \frac{4}{ \pi^2} \approx 0.4,$ but more
likely to be $\geq \frac{8}{ r \pi^2} \approx 0.81$}
That determines the value of the period $r,$ which we require to be even
so that we can use the final step~\footnote{ Note that the periodic function
${\rm xguess}^r {\rm Mod}[M] = {\rm xguess}^0 {\rm Mod}[M] =1.$ For even period $r$ this yields
$ ({\rm xguess}^{\frac{r}{2}})^2 -1 \equiv 0\ {\rm Mod}[M]= ({\rm xguess}^{\frac{r}{2}} -1)
({\rm xguess}^{\frac{r}{2}}+1).$ As long as ${\rm xguess}^{\frac{r}{2}} \not\equiv \pm 1\ {\rm Mod}[M],$
at least one of ${\rm xguess}^{\frac{r}{2}} \pm 1$ must have a common factor
with $M,$ and therefore finding ${\rm GCD }[ {\rm xguess}^{\frac{r}{2}} \pm 1, M]$ yields
the factors of $M.$}
\begin{equation}
f_1={\rm GCD }[{\rm xguess}^{r/2} +1,M] ; \qquad
f_2={\rm GCD }[{\rm xguess}^{r/2}-1,M]
\end{equation}
to determine the factors $f_1,f_2$ of $M.$ The above process simplifies
if the ratio $\frac{2^{n_1}}{r} =D$ is already an integer.
In {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ the local probability maxima and the associated factors
are all stipulated. In an actual measurement, one of those results would
be found with that probability.
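The classical post-processing of a measured $\bar{n}$ can be sketched as follows (our own illustration; the value $\bar{n}=85$ is an assumed measurement outcome near $2^{n_1}/r$ for the $M=21,$ ${\rm xguess}=2$ example, and the helper names are ours):
\begin{verbatim}
! Sketch of Step 6: continued-fraction extraction of r and the factors.
program ShorPostprocess
  implicit none
  integer :: M, xguess, n1, Q, nbar
  integer :: a, num, den, p0, q0, p1, q1, p2, q2, r, f1, f2
  M = 21;  xguess = 2;  n1 = 9;  Q = 2**n1
  nbar = 85                        ! assumed measured local maximum near Q/6
  num = nbar;  den = Q
  p0 = 0; q0 = 1;  p1 = 1; q1 = 0  ! convergent recurrences
  r  = 0
  do while (den /= 0)
     a  = num/den                  ! next partial quotient
     p2 = a*p1 + p0;  q2 = a*q1 + q0
     p0 = p1; q0 = q1;  p1 = p2; q1 = q2
     if (q1 > 1 .and. mod(q1,2) == 0 .and. powmod(xguess,q1,M) == 1) then
        r = q1                     ! first even denominator that works
        exit
     end if
     a = mod(num,den);  num = den;  den = a
  end do
  f1 = gcd(powmod(xguess,r/2,M) + 1, M)
  f2 = gcd(powmod(xguess,r/2,M) - 1, M)
  print *, 'period r =', r, '   factors:', f1, f2   ! gives 6, 3, 7
contains
  integer function powmod(b, e, m)   ! b**e (mod m), multiplied step by step
    integer, intent(in) :: b, e, m
    integer :: i
    powmod = 1
    do i = 1, e
       powmod = mod(powmod*b, m)
    end do
  end function powmod
  recursive function gcd(x, y) result(g)
    integer, intent(in) :: x, y
    integer :: g
    if (y == 0) then
       g = abs(x)
    else
       g = gcd(y, mod(x,y))
    end if
  end function gcd
end program ShorPostprocess
\end{verbatim}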
\section{PARALLEL UNIVERSE AND NOISE}
\label{sec7}
A real quantum computer will involve the manipulation of qubits using
external fields and interactions with single qubits and between qubits.
Clearly, each physical realization has its set of Hamiltonians that
describe that system and these QC manipulations. The circuit description of
QC involves gates, which in turn should be described by the action of
Hamitonians on qubits. For example, the simple one-qubit Hadamard gate
can be realized by rotating the qubit's spin axis from the $\hat{z}$ to
the $\hat{x}$ axis by means of a $-\vec{\mu}\cdot \vec{B}$ interaction
acting for the proper time. More complicated gates involve clever design
of one and two-qubit interactions. In the future, we hope that {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\
will provide a tool for describing all requisite gates based on Hamiltonian
evolution. Dynamical evolution involves one- ( $H_1$) and two- ( $H_2 $) body
Hamiltonians
$
\mid \Psi(t + \delta t) \rangle = [ 1 - \frac{i}{\hbar} ( H_1 + H_2) \delta t ]\ \mid \Psi(t ) \rangle
.$
Their action over a small time interval $\delta t $ can be calculated
by repeated application of the {\bf OneOpA} and {\bf TwoOpA} codes provided in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}.
Such applications are the subject for future studies.
The major obstacle to the implementation of such gates required for the
success of QC algorithms is the strong possibility that random intrusions,
such as noise, will decohere the quantum system and remove the essential
feature of quantum interference. That issue behoves us to simulate the
effect of noise by considering many replications of the QC algorithm,
which ideally are identical, and then subject each of them to random
single and double one-qubit as well as single two-qubit errors. For that
task MPI is ideally suited and therefore, as a major part of this paper, we
have implemented that ``Parallel Universe" approach, for which we include
herein the Grover and Shor algorithms. Other cases (teleportation and
superdense coding) have also been implemented. Subsequent numerical
studies of the efficacy of error correction protocols can be implemented
using the framework provided by the parallel universe feature of {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}.
The next feature of this ``Parallel Universe" approach is that all of the
state vector amplitudes can be gathered together and used to construct an
ensemble average in the form of a density matrix. This process corresponds
to solving a set of stochastic Schr\"{o}dinger equations~\cite{SSE} and using
those solutions to produce a density matrix. Let us now examine the steps
needed to construct a density matrix.
\subsection{Density Matrix}
There are advantages to using a density matrix to describe QC dynamics.
The density matrix describes an ensemble average of quantum systems, with
its evolution determined not only by the system's Hamiltonian but also by
environmental terms using either Kraus operator~\cite{Kraus} or
Lindblad~\cite{Lindblad} differential equation forms. In addition, the
description of entanglement and of mixed states is handled nicely and
concepts like entropy and fidelity can be evaluated more readily. To form
a density matrix in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ and to determine the entropy affords a good
example of how to extend {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ to such ensemble averages.
For a definite state vector, the pure state density matrix\footnote{The
density matrix is Hermitian, has unit trace, and is positive definite.
In general $\rho^2 \leq \rho$, with the equal sign applied for pure states.}
is simply
\begin{equation}
\rho = \mid \Psi \rangle\langle \Psi \mid = \sum_{n=0}^{2^{n_q} -1} \ \
\sum_{n'=0}^{2^{n_q} -1} C^*(n') C(n) \mid n\rangle \langle n' \mid
\, .
\end{equation}
This large $(2^{n_q}\times 2^{n_q})$ matrix can be distributed over $N_P=2^p$
processors by placing $2^{n_q - p/2} \times 2^{n_q - p/2}$ submatrices on each
processor.~\footnote{To facilitate the parallel treatment of the density matrix, we take $p$ as even.}
Matrix multiplication, traces and eigenvalue determination can
then be implemented using MPI procedures, supplemented by a BLACS processor grid
and the parallel linear algebra SCALAPACK programs~\cite{scala}. Once the eigenvalues of $\rho$
are calculated the entropy can be determined. But for a pure state, we know
that $\rho^2 \equiv \rho,$ and since the trace of $\rho$ is one, the
eigenvalues for a pure state are 1 and $2^{n_q}-1$ zeros. Thus the entropy is
zero, as it should be for a well-defined, non-chaotic, albeit probabilistic state.
How do we go beyond a pure state density matrix within the {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ setup?
There are several options, but one overall goal. The overall goal is to
build a state $\mid \Psi_\alpha \rangle$ repeatedly as labeled by $\alpha,$
with an associated probability ${\cal P}_\alpha$ with
$\sum_\alpha\ {\cal P}_\alpha =1.$ For each case, the state $\mid \Psi_\alpha
\rangle$ could be generated in a different way. One option is to get a set
of amplitudes $C_\alpha (n)$ randomly, with each random set assigned a
probability ${\cal P}_\alpha.$ Another way is to select a few qubits and
subject them to random one and two body interactions and possible stochastic
pulses (noise), again assigning each case a probability ${\cal P}_\alpha.$
The associated mixed state density matrix would then be
\begin{equation}
\rho =\sum_\alpha {\cal P}_\alpha \mid \Psi_\alpha \rangle\langle\Psi_\alpha \mid
= \sum_{n=0}^{2^{n_q} -1} \ \ \sum_{n'=0}^{2^{n_q} -1}\ \
\sum_\alpha {\cal P}_\alpha\ C_{\alpha }^*(n') C_{\alpha }(n) \mid
n\rangle
\langle n' \mid \, .
\end{equation}
The above result can be expressed as~\footnote{ An abbreviated version
is $\rho =\sum_\alpha \! {\cal P}_\alpha\! \mid \alpha\rangle\langle\alpha\mid,$
with $C_{\alpha }(n) = \langle n \mid \alpha\rangle.$}
\begin{equation}
\langle n \mid \rho \mid n'\rangle = \sum_\alpha
{\cal P}_\alpha\ C_{\alpha }(n)\, C_{\alpha }^*(n') \, .
\end{equation}
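A minimal sketch of how this ensemble average could be assembled from the amplitude sets of the individual multiverses (our own illustration; the routines {\bf Entropy} and {\bf EntropyP} handle the distributed version) is:
\begin{verbatim}
! Sketch: ensemble-averaged density matrix from amplitude sets C(n,alpha).
subroutine BuildRho(nq, nalpha, P, C, rho)
  implicit none
  integer, intent(in)  :: nq, nalpha
  real,    intent(in)  :: P(nalpha)              ! probabilities, sum to 1
  complex, intent(in)  :: C(0:2**nq-1, nalpha)   ! amplitudes per universe
  complex, intent(out) :: rho(0:2**nq-1, 0:2**nq-1)
  integer :: n, np, alpha
  rho = (0.0,0.0)
  do alpha = 1, nalpha
     do np = 0, 2**nq - 1
        do n = 0, 2**nq - 1
           rho(n,np) = rho(n,np) + P(alpha)*C(n,alpha)*conjg(C(np,alpha))
        end do
     end do
  end do
end subroutine BuildRho
\end{verbatim}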
This is perhaps not the most general density matrix, but one can trace out
some of the ancilla qubits and/or subject the density matrix to additional
entangling operations using $\rho'= U \rho U^\dagger$ or even apply the
non-unitary Lindblad~\cite{Lindblad} process to generate an enhanced range
of density matrices. These procedures, which we outline here, are included
in this version of {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,} \ to facilitate studies of decoherence and
environmental effects. A major advantage of {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ is that the invocation of
parallel universes (aka multiverses) to describe the influence of noise on
a QC does not involve much increase in computation time compared to a single
pure run, especially since the only communication between groups is that
used to construct the density matrix.
This scheme provides an efficient use of multi-processor computers.
\subsection{Parallel universe implementation}
The above steps are implemented in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,} \ by first splitting
the overall number of processors $N_P$ (nprocU) into many groups
$N_G$; each group is referred to as a ``multiverse.'' For
convenience, we take both $N_P$ and $N_G$ to be powers of 2.
Within each multiverse, there are $N_P/N_G \equiv N_g$ (nprocM)
processors that are used to perform a distinct QC algorithm. The
MPI command {\bf MPI\_COMM\_SPLIT} is used to produce these
separate groups. Each group is specified by its group rank (rankM),
which ranges from zero to NGROUPS-1, where NGROUPS denotes the
total number of multiverses.~\footnote{There are spawning features
of MPI-2 that might be invoked to carry out this process more
efficiently, but at this stage we found MPI-1 sufficient for our needs.}
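A minimal sketch of that split (our own illustration of the standard MPI call; the variable names follow the text where possible, and COMM\_M is our name for the group communicator) is:
\begin{verbatim}
! Sketch of the multiverse split: contiguous blocks of nprocM ranks
! become separate groups, each with its own communicator COMM_M.
program SplitSketch
  implicit none
  include 'mpif.h'
  integer :: ierr, rank, nprocU, NGROUPS, nprocM, rankM, COMM_M
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocU, ierr)
  NGROUPS = 2                     ! number of multiverses N_G
  nprocM  = nprocU/NGROUPS        ! processors per multiverse N_g
  rankM   = rank/nprocM           ! group label, used as the "color"
  call MPI_COMM_SPLIT(MPI_COMM_WORLD, rankM, rank, COMM_M, ierr)
  ! ... each group now runs its own copy of the QC algorithm on COMM_M ...
  call MPI_FINALIZE(ierr)
end program SplitSketch
\end{verbatim}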
The method used to store and evaluate the density matrix is controlled by
an integer {\bf Ientropy}. For the choice {\bf Ientropy}=0, there
is no evaluation of the density matrix. For the choice {\bf Ientropy}=1,
the full density matrix is constructed on the master processor and its
eigenvalues determined by a LAPACK code. That procedure should be used
when storage space for $\rho$ is ample. For {\bf Ientropy}=2 the density matrix
is not stored on one processor, but is distributed on a BLACS generated
processor grid and the parallel eigenvalue code {\bf PCHEEVX} from
the SCALAPACK package is invoked to evaluate $\rho$'s eigenvalues. To
carry out this last task the number of processors, groups and qubits
have to be carefully monitored for consistency with the code's
conventions, as indicated directly in the listings.
\subsection{Noise scenarios}
A simple example of a ``noise scenario'' has been included~\footnote{See
subroutine Noise called in subgrover.f90 and subshor.f90.} in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,} \ to
show how the role of noise can be examined. The motivation here is to first
introduce noise and later to evaluate various error correction schemes.
The division of a large number of processors into groups was made so that
only the first group (rankM=0) functions without noise. All of the other
groups perform the algorithm with noise. The noise is introduced separately
for each group (or multiverse) where the users can design their own scenarios.
We have input noise using a one-qubit unitary operator (subroutine D2) that
we take as a $2 \times 2$ Wigner rotation function
${\cal D}^{\frac{1}{2}}(\alpha,\beta,\gamma),$ where $\alpha,\beta,\gamma$ are
three Euler angles.~\footnote{ We take random $\alpha,\beta$, and set
$\gamma=0$ for simplicity.} This can be specialized to either small
deviations or, within a phase, to one of the Pauli operators. One can introduce
one-qubit noise, acting on a random qubit, typically once within each
multiverse, but two or more one-qubit noise intrusions can be
invoked at various stages of the algorithm, by suitable placement of the subroutine ``Noise.''
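As a concrete illustration of such a noise operator, the following sketch builds a $2\times2$ rotation with random Euler angles in one common Wigner-function convention; it is our own construction, not the subroutine D2 itself, and the resulting matrix would simply be handed to {\bf OneOpA} as the operator {\bf Op}.
\begin{verbatim}
! Sketch of a random one-qubit noise rotation (gamma = 0, as in the text).
subroutine NoiseD2Sketch(Op)
  implicit none
  complex, intent(out) :: Op(0:1,0:1)
  real :: alpha, beta, u1, u2
  real, parameter :: pi = 3.14159265
  call random_number(u1);  call random_number(u2)
  alpha = 2.0*pi*u1;  beta = pi*u2
  Op(0,0) =  exp((0.0,-0.5)*alpha)*cos(beta/2.0)
  Op(0,1) = -exp((0.0,-0.5)*alpha)*sin(beta/2.0)
  Op(1,0) =  exp((0.0, 0.5)*alpha)*sin(beta/2.0)
  Op(1,1) =  exp((0.0, 0.5)*alpha)*cos(beta/2.0)
end subroutine NoiseD2Sketch
\end{verbatim}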
In addition, a two-qubit unitary operator (subroutine D4) that we take
as a $4 \times 4$ Wigner function ${\cal
D}^{\frac{3}{2}}(\alpha,\beta,\gamma),$ can also be specialized to either
small deviations or, within a phase, to one of the Pauli operator products
$\sigma_i \otimes \sigma_j .$ This allows one to introduce a single error
that acts on two qubits once, in contrast to two one-qubit errors.
The one-qubit operator is assumed to act on a randomly selected qubit (qhit)
and at selected, variable stages of the algorithm (eloc). Extension to
two-qubit noise is obvious. Of course the associated universes which allow
two one-qubit or single two-qubit errors should carry lower weight.
By using unitary operators in each universe the overall density matrix
still maintains unit trace, but of course the trace of $\rho^2$ will be
decreased by noise. The probability of success will also decrease.
Thus, {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,} \ provides a framework for introducing errors and, along
with Hamiltonian-driven gates, provides an important tool for dynamical
studies of QC with noise and in the future with error correction.
\section{FORTRAN AND MPI CODES}
\label{sec8}
Sample {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,} \ codes are provided which incorporate directions as to how to
run the code. From these examples, the user should be able to see the benefit
of being able to handle problems with a considerable number of qubits,
organized into parallel universes, in reasonable time. Some improvements could
be invoked to accelerate {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}, for example by collecting messages and
sending them as a group (collective communications). The issue here is the
standard fight between sharing the work load over the available processors
(balance) and minimizing the cost of sending messages. However, the major benefit of dividing a large number of processors
into multiverses and subjecting each one to separate noise scenarios is
in itself justification and reason to use a multi-processor supercomputer.
The list of files contained in QCMPI is:
\begin{itemize}
\item qcmpisubs.f90, contains all QCMPI subroutines
\item Guniv.f90, builds multiverse environment for Grover's search
\item subgrover.f90, Grover's search routine
\item Suniv.f90, builds multiverse setup for Shor's factoring algorithm
\item subshor.f90, Shor's factoring routine
\item makefile, sample of compiling options for several
supercomputing facilities
\item *.job, sample job submitting scripts
\item README, instructions.
\end{itemize}
\subsection{Performance}
As an indication of the performance of {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}, we have run a number
of sample cases using the multiverse Grover codes included in the package. In
table~\ref{tablep}, we show the global memory requirements together
with the wallclock time and the percentage of the latter used in MPI
operations.
\begin{table}[h]
\caption{Performance of a number of sample runs all realized using
Guniv.f90, subgrover.f90 and qcmpisubs.f90. In all cases, the
Ientropy=2 option is chosen and thus the entropy is computed making
use of the SCALAPACK routines, and two multiverses are considered.
The Gflop/sec and Gbytes refer to the total amount used by all processors.
The information presented has been obtained using IPM~\cite{ipm}.
\label{tablep}}
\begin{tabular}{l | llllrr}
\hline
NP & $nq$ & $2^{nq}$ & Gflop/sec
& Gbytes & Wallclock (sec) & \% communication \\
\hline
\hline
4 & 10 & 1024 & 1.99618 & 1.4043 & 1.25 & 26.63 \\
4 & 12 & 4096 & 3.31855 & 9.96069 & 48.98 & 5.65 \\
16 & 10 & 1024 & 1.90923 & 5.54911 & 1.07 & 62.27 \\
16 & 12 & 4096 & 10.4583 & 38.7235 & 11.29 & 39.14 \\
64 & 10 & 1024 & 0.65133 & 22.1393 & 3.21 & 31.19 \\
64 & 12 & 4096 & 15.2444 & 153.814 & 7.54 & 56.32 \\
\hline
\hline
\end{tabular}
\end{table}
\section{CONCLUSIONS AND FUTURE DEVELOPMENTS}
\label{sec9}
In conclusion, the Fortran 90 code {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,} \ provides a modular approach to
quantum computation, with an accessible implementation of quantum
algorithms. All of the gates needed for the circuit model are provided, as
well as the quantum Fourier transformation procedure. Extension to
three-qubit
operators and to the one-way model of computation is
straightforward, as is the extension to the qutrit case. Such extensions will
be provided in the future by the
authors and hopefully also by interested users.
The main features of {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,} \ are the distribution of state-vector amplitudes
over processors, to allow for an increased number of qubits, and the use of MPI to
carry out the requisite communication needed when one- and two-body operators
(gates) act on states. This task is carried out in a manner that allows
ready extension to Hamiltonian driven QC dynamics.
In addition, {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,} \ provides a multi-universe setup, which replicates
the QC algorithm over many groups, at little cost in computation time. That
procedure provides a major advantage of {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}, not generally available in
the literature: a framework for studying the role of noise on the
efficacy of QC. That is, we believe, the major task in this subject.
The methods demonstrated here for the distribution and evaluation of a large density matrix
can be generalized to the case of large unitary matrices to represent gates.
There is much to do with this tool such as studies of: Hamiltonian driven QC dynamics
using realistic Hamiltonians, along with environmental effects, influence of
random pulses, and efficacy of error correction protocols.
\section*{Acknowledgments}
We gratefully acknowledge the help and participation of Prof. C.W. Finley at
an early stage of this work. This project was supported in part by the U.S. National
Science Foundation and in part under Grants PHY070002P \& PHY070018N from
the Pittsburgh Supercomputing Center, which is supported by several federal
agencies, the Commonwealth of Pennsylvania and private industry. Circuit
graphs were prepared using codes from Ref.~\cite{drawings} which we
appreciate. Thanks to PSC staff members Dr. Roberto Gomez and Rick Costa. We
also thank the openmpi and scalapack groups, especially Julie Langou and Jeff Squyres.
\appendix
\section{The quantum Fourier transform circuit and {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}}
The quantum Fourier transform is performed using the circuit in
Fig.~\ref{QFTcircuit}. A ladder of Hadamards and two-qubit control
CPHASEK gates (Eq.~(\ref{cphasek})) are used to produce the QFT.
\begin{figure}[hb]
\begin{center}
\includegraphics[width=14cm]{fig5}
\caption{The quantum Fourier transform circuit.
Here $ \phi_k\equiv e^{ 2 \pi i /2^k}$ and register
one has $n_1$ qubits. The binary number $q_1 q_2 \cdots q_{n_1}$
corresponds to a decimal number $n$ which ranges as $0\leq n\leq 2^{n_1} -1.$
The fractional binary notation is used, where
$0.q_a\cdots q_{b} \equiv \frac{q_a}{2} + \frac{q_{a+1}}{2^2} +\cdots
+\frac{q_b}{2^{ b-a+1}}.$ To restore the qubits to standard order with the
most significant bit to the left (top of figure), an additional reversal of the
qubit order must be applied. An overall normalization of $1/2^{\frac{n_1}{2}}$ is understood.
}
\end{center}
\label{QFTcircuit}
\end{figure}
The above steps are carried out in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QCMPI\,}\ with the following code, which
includes a series of pair swaps to reorder the qubits labels.
\begin{verbatim}
Do ic =1,n1-1
call OneOpA(nq,ic,had,psi,NPART,COMM)
Do k=ic+1,n1
call CPHASEK(nq,k,ic,k+1-ic,psi,NPART,COMM)
enddo
enddo
! Final Hadamard
call OneOpA(nq,n1,had,psi,NPART,COMM)
! Reverse order using pair swaps
Do i=1,n1/2
call SWAP(nq,i,n1+1-i,Psi,NPART,COMM)
enddo
\end{verbatim}
Here {\bf had} denotes the Hadamard, {\bf psi} is the input and
then the output state vector at each stage, and {\bf NPART} denotes the
part of {\bf psi} on the current processor. The qubits are restored
to standard order by a set of pair swaps. This also demonstrates how
to use the {\bf CPHASEK}, {\bf OneOpA} (for a Hadamard case), and {\bf SWAP} subroutines.
To understand how the ladder of Hadamard and control phase gates yields a
quantum Fourier transform, note that the gates in the code above produce the state
\begin{eqnarray}
\noindent \mid q_1 q_2 \cdots q_{n_1} \rangle \rightarrow \frac{1}{\sqrt{2^{n_1}}} ( \mid 0\rangle +
e^{2 \pi i \ 0.q_1 \cdots q_{n_1} } \mid 1 \rangle ) \otimes
( \mid 0\rangle + e^{2 \pi i \ 0.q_2 \cdots q_{n_1} } \mid 1 \rangle ) && \nonumber \\
\otimes \cdots ( \mid 0\rangle + e^{2 \pi i \ 0.q_{n_1-1} q_{n_1} } \mid 1 \rangle ) \otimes
( \mid 0\rangle + e^{2 \pi i \ 0.q_{n_1} } \mid 1 \rangle )&& , \nonumber \\
&&
\end{eqnarray}
which becomes
\begin{eqnarray}
\noindent \mid q_1 q_2 \cdots q_{n_1} \rangle \rightarrow \frac{1}{\sqrt{2^{n_1}}} ( \mid 0\rangle +
e^{2 \pi i \ 0. q_{n_1} } \mid 1 \rangle ) \otimes
( \mid 0\rangle + e^{2 \pi i \ 0.q_{n_1-1} q_{n_1} } \mid 1 \rangle ) && \nonumber \\
\otimes \cdots ( \mid 0\rangle + e^{2 \pi i \ 0.q_2 \cdots q_{n_1}} \mid 1 \rangle ) \otimes
( \mid 0\rangle + e^{2 \pi i \ 0.q_1 \cdots q_{n_1} } \mid 1 \rangle )&& , \nonumber \\
&&
\end{eqnarray}
after the final qubits are reordered by a series of pair swap operations. The last result can be written as:
\begin{equation}
{\rm QFT} \mid q_1 q_2 \cdots q_{n_1} \rangle= \frac{1}{\sqrt{2^{n_1}}}
\sum_{Q'} e^{2 \pi i \ [ q'_1 0. q_{n_1} + q'_2 0. q_{n_1-1} q_{n_1} \cdots
+ q'_{n_1-1} 0.q_2 \cdots q_{n_1} + q'_{n_1} 0.q_1 \cdots q_{n_1} ] }
\ \mid Q' \rangle,
\end{equation}
where $ Q'$ denotes the binary number $ q'_1 q'_2 \cdots q'_{n_1},$
corresponding to the decimal number $n'$. The above is equivalent to
\begin{equation}
{\rm QFT} \mid n \rangle= \frac{1}{\sqrt{N}} \sum_{n'=0}^{N-1} e^{ 2 \pi i n\, n'/N} \mid n' \rangle,
\end{equation}
where $N= 2^{n_1} .$ A simple product $ n\, n'/N$ appears in the exponent
because
$ [ q'_1 0. q_{n_1} + q'_2 0. q_{n_1-1} q_{n_1} \cdots + q'_{n_1-1}
0.q_2 \cdots q_{n_1} + q'_{n_1} 0.q_1 \cdots q_{n_1} ] \rightarrow n
n'/N,$ which can be shown by noting that $ e^{ 2 \pi i 2^s} \equiv 1$ for all
integers $s \geq 0.$ Therefore, in the product $n n'/N = ( q_1 2^{n_1-1} + q_2 2^{n_1-2} \cdots q_{n_1} 2^0)( q'_1 2^{n_1-1}
+ q'_2 2^{n_1-2} \cdots q'_{n_1} 2^0),$ we can drop all cross terms that yield
a $2^s$ which suffices to prove the equivalence. Hence, one sees that the
{\rm QFT} is a unitary transformation from basis $\mid n\rangle$ to $\mid
n'\rangle$ of the form $\langle n \mid {\rm QFT} \mid n'\rangle =
\frac{1}{\sqrt{N}}\ (e^{ 2 \pi i /N} )^{n n'}. $
\section{The MPI Codes}
The
{\bf bintodec,
dectobin,
OneOpA,
TwoOpA,
EulerPhi,
splitn,
ProjA,
Randx,
QFT,
CF,
SWAP,
CPHASEK,
HALL,
HALL2,
Entropy,
EntropyP} codes are best understood by examination of
the explicit directions within the code and also by their usage in the sample algorithms.
\subsection{ Sample algorithm codes}
Teleportation and Superdense coding codes are also available. In this
paper, we present the Grover and Shor cases. The Grover code is called
Guniv.f90 and initiates the process by selecting a marked item that is to be
searched for, with that item (labelled as IR) distributed to all the
processors. There are $N_P=2^p$ processors that are split into $N_G= 2^g$ groups
(called multiverses); each multiverse then consists of $N_g=N_P/N_G=2^{p-g}$ members, where
both $N_P$ and $N_G$ are assumed to be powers of 2. Independent searches are
carried out in each multiverse and at the end (this could be done at any
preferred stage) the state-vector amplitudes for each group are used to form a
group's density matrix. An overall ensemble average of all the groups'
density matrices is then computed and either the $2^{n_q}\times 2^{n_q}$
array is located on the master processor of the first group using subroutine
{\bf Entropy} or
is distributed over a BLACS grid using subroutine {\bf EntropyP}.
The first group (rankM$=$0) is free of noise, whereas all the other groups
(rankM$>$0) are subject to various random disturbances with assigned
probabilities. This is where particular noise models could be invoked by
the user. This structure is also used for the Shor case.
In the Shor case (Suniv.f90, subshor.f90), there is an initial setup process to pick and test
the number to be factored that is broadcast to all $N_P$ processors, with
again a split into $N_G$ groups (multiverses) and separate searches done on the
$N_g$ members of each universe. Again group one is free of noise, whereas
noise is introduced on all other groups, with a subsequent build up of the
full density matrix using either the {\bf Ientropy=1} or {\bf Ientropy=2} options. Other cases and extensions all follow this same general pattern.
One can also examine the particular eigenvalues of the full density matrix,
at selected stages, and also obtain fidelities and subtraces if desired.
\begin{thebibliography}{99}
\bibitem{Nielsen} Michael A. Nielsen and Isaac L. Chuang,
``Quantum Computation and Quantum Information'', Cambridge University Press (2000).
\bibitem{Preskill} J. Preskill's remarkable Caltech lectures are available at:\\
http://www.theory.caltech.edu/people/preskill/ph229/ .
\bibitem{QEC} ``Classical Enhancement of Quantum Error-Correcting Codes,'' Isaac Kremsky, Min-Hsiu Hsieh and Todd A. Brun, Phys. Rev. A 78, 012341 (2008).
\bibitem{Shor} Peter W. Shor, SIAM J. Comput. 26 (5) 1484 (1997).
\bibitem{Grover} L. K. Grover, Phys. Rev. Lett. 79, 325-328 (1997).
\bibitem{prevQC0} G. Patz, ``A Parallel Environment for Simulating Quantum
Computation'', Ph.D. thesis, MIT (2003).
\bibitem{prevQC1} K. Obenland and A. Despain,
``A Parallel Quantum Computer Simulator,'' quant-ph/9804039. Presented at High Performance Computing 1998.
\bibitem{prevQC2} Jumpei Niwa, Keiji Matsumoto and Hiroshi Imai,
``General-purpose parallel simulator for quantum computing,'' Phys. Rev. A 66, 062317 (2002).
\bibitem{prevQC3} K. De Raedt, K. Michielsen, H. De Raedt, B. Trieu,
G. Arnold, M. Richter, Th. Lippert, H. Watanabe, and N. Ito,
``Massively parallel quantum computer simulator,'' Computer Physics
Communications Volume 176, Issue 2, 15 January 2007, Pages 121-136.
\bibitem{prevQC4} Ian Glendinning and Bernhard \"{O}mer,
``Parallelization of the QC-Lib Quantum Computer Simulator Library,''
in Roman Wyrzykowski et al., editors, Parallel Processing and Applied
Mathematics: proceedings / 5th International Conference, PPAM 2003,
Czestochowa, Poland, volume 3019 of Lecture Notes in Computer Science,
pages 461-468. Springer, 2004. See also: ``Parallelization of the
general single qubit gate and CNOT for the QC-lib quantum computer simulator
library,'' Technical Report TR 2003-01, Institute for Software Science, University of Vienna, June 2003.
\bibitem{MPI} W. Gropp, E. Lusk, N. Doss and A. Skjellum,
``A high-performance, portable implementation of the MPI message passing
interface standard,'' Parallel Computing {\bf 22}, 6, 789 (1996).
\bibitem{openmpi} For openmpi see: http://www.open-mpi.org/.
\bibitem{Teleportation} C. H. Bennett, G. Brassard, C. Cr\'epeau, R. Jozsa, A. Peres, and W. K. Wootters,
Phys. Rev. Lett. 70, 1895-1899 (1993).
\bibitem{superdense} C. H. Bennett and Stephen J. Wiesner, Phys. Rev. Lett. 69, 2881 (1992).
\bibitem{QDENSITY} Bruno Juli\'a-D\'\i az, Joseph M. Burdis and
Frank Tabakin, ``QDENSITY -- A Mathematica Quantum Computer simulation,'' Computer Physics Communications, 174 (2006) 914.
\bibitem{scala} Scalapack and Blacs packages,\\
http://www.netlib.org/scalapack/scalapack\_home.html.
\bibitem{Kraus} K. Kraus, States, Effects and Operations: Fundamental Notions of Quantum Theory, Springer Verlag 1983.
\bibitem{Lindblad} G. Lindblad, Commun. Math. Phys. 48, 119 (1976).
\bibitem{Gerjuoy} Edward Gerjuoy, Am. J. Phys. 73 (6) 521 (2005).
\bibitem{SSE} ``Decoherence and quantum trajectories,'' Todd A. Brun,
in ``Decoherence and Entropy in Complex Systems'', ed. Hans-Thomas Elze
(Springer, Berlin, 2004); ``Generalized stochastic Schr\"{o}dinger equations
for state vector collapse,'' Stephen L. Adler and Todd A. Brun, J. Phys. A 34, 4797-4809 (2001).
\bibitem{drawings} Circuit graphs have been drawn using I. Chuang's ``qasm2circ''\\
(http://www.media.mit.edu/quanta/qasm2circ/),
and Steve Flammia \& Bryan Eastin's ``Qcircuit'' (http://info.phys.unm.edu/Qcircuit/).
\bibitem{ipm} Integrated Performance Monitoring, IPM,
http://ipm-hpc.sourceforge.net/.
\end{thebibliography}
\end{document}
\begin{document}
\title{Validity of Quantum Adiabatic Theorem}
\author{Zhaoyan Wu and Hui Yang \\
Center for Theoretical Physics, Jilin University\\
Changchun, Jilin 130023, China}
\maketitle
\begin{abstract}
The consistency of the quantum adiabatic theorem has been doubted recently. It
is shown in the present paper that the difference between the adiabatic
solution and the exact solution to the Schr\"{o}dinger equation with a
slowly changing driving Hamiltonian is small, while the difference between
their time derivatives is not small. This explains why substituting the
adiabatic solution back into the Schr\"{o}dinger equation leads to the
`inconsistency' of the adiabatic theorem. Physics is determined completely
by the state vector, and not by its time derivative. Therefore the quantum
adiabatic theorem is physically correct.
\end{abstract}
\subsection{Introduction}
The quantum adiabatic theorem (QAT) dates back to the early years of quantum
mechanics[1]. It has important applications within and beyond quantum
physics. In 1984, M. Berry found that, besides the dynamical phase, there is
a geometrical phase in the adiabatically evolving wavefunction[2]. B. Simon
pointed out that Berry's phase factor is the holonomy of a Hermitian line
bundle[3]. This started a rush for geometrical phases in quantum physics[4],
which gave deeper insight into many physical phenomena, such as the
Aharonov-Bohm effect and the quantum Hall effect. Recently, the quantum
adiabatic theorem has regained importance in the context of quantum
control and quantum computation[5]. More recently, however, the consistency
of the QAT has been doubted[6]. In their paper entitled ``Inconsistency in
the application of the adiabatic theorem'', K.-P. Marzlin and B. C. Sanders
gave a proof of inconsistency of the QAT, and declared that the standard
treatment of the QAT alone does not ensure that a formal application of it
leads to correct results. This interesting suggestion has attracted attention
from the physics community[7]. The purpose of this letter is to point out
that the QAT does give approximate state-vectors (wavefunctions), but not
necessarily approximate time derivatives of state-vectors. Physics, however,
is completely determined by the state-vector and has nothing to do with its
time derivative. Therefore the QAT is physically correct. What leads to the
`inconsistency' of the QAT is neglecting the fact that the adiabatic
approximate state-vector does not necessarily give an approximate time
derivative of the state-vector ($\left\Vert \psi _{A}(t)-\psi (t)\right\Vert
\ll 1\nRightarrow \left\Vert \overset{\cdot }{\psi }_{A}(t)-\overset{\cdot }{
\psi }(t)\right\Vert \ll 1$, where $\left\Vert \varphi \right\Vert \equiv
\sqrt{\langle \varphi |\varphi \rangle }$ denotes the norm of the
state-vector $\varphi $).
\subsection{Standard treatment of QAT}
Suppose that the Hamiltonian depends on $N$ real parameters $R^{1},\ldots
,R^{N}:$
\begin{equation}
H=H(R^{1},\ldots ,R^{N})=H(R) \label{eqn1}
\end{equation}
When the representing point of the Hamiltonian slowly describes a finite
curve $C$ on the $N$-dimensional parameter manifold $\mathcal{M}$
\begin{equation}
C:R^{\sigma }=R^{\sigma }(t),\ \forall t\in \lbrack 0,T],1\leq \sigma \leq N
\end{equation}
where $T$ is the evolution time, let us study the evolution of the system.
The instantaneous Hamiltonian's eigen equation is
\begin{equation}
H(R)u_{n}(R)=E_{n}(R)u_{n}(R)
\end{equation}
Going over to the rotating representation
\begin{equation}
\psi (t)=\sum_{n\geq 0}c_{n}(t)u_{n}(R(t))\exp [\frac{-i}{\hbar }
\int_{0}^{t}E_{n}(R(t^{\prime }))dt^{\prime }]
\end{equation}
we get the Schr\"{o}dinger equation
\begin{equation}
\overset{\cdot }{c}_{m}(t)=-\sum_{n\geq 0}\left\langle u_{m}(R(t))\mid
\overset{\cdot }{u}_{n}(R(t))\right\rangle \exp \left\{ \frac{i}{\hbar }
\int_{0}^{t}[E_{m}(R(t^{\prime }))-E_{n}(R(t^{\prime }))]dt^{\prime
}\right\} c_{n}(t)
\end{equation}
To avoid confusion of infinitesimals of different orders and to show what
`rapidly oscillating' means, let's change to the dimensionless time $\tau
=t/T$
\begin{equation}
\frac{d}{d\tau }\check{c}_{m}(\tau )=-\sum_{n\geq 0}\left\langle u_{m}(
\check{R}(\tau ))\mid \frac{d}{d\tau }u_{n}(\check{R}(\tau ))\right\rangle
\exp \left\{ \frac{i}{\hbar }T\int_{0}^{\tau }[E_{m}(\check{R}(\tau ^{\prime
}))-E_{n}(\check{R}(\tau ^{\prime }))]d\tau ^{\prime }\right\} \check{c}
_{n}(\tau )
\end{equation}
where
\begin{equation}
\check{c}_{m}(\tau )=c_{m}(T\tau )=c_{m}(t),\check{R}(\tau )=R(T\tau )=R(t)
\end{equation}
The initial value problem of the above differential equations is equivalent
to the following integral equations
\begin{equation}
\check{c}_{m}(\tau )=\check{c}_{m}(0)-\sum_{n\geq 0}\int_{0}^{\tau
}\left\langle u_{m}(\check{R}(\tau _{1}))\mid \frac{d}{d\tau }u_{n}(\check{R}
(\tau _{1}))\right\rangle \exp \left\{ \frac{i}{\hbar }T\int_{0}^{\tau
_{1}}[E_{m}(\check{R}(\tau ^{\prime }))-E_{n}(\check{R}(\tau ^{\prime
}))]d\tau ^{\prime }\right\} \check{c}_{n}(\tau _{1})d\tau _{1}
\end{equation}
Let us slow down evenly the changing speed of the Hamiltonian while keeping
the finite curve $C$ fixed. Mathematically, this means letting $T\rightarrow
\infty $ while keeping the functional form of $\check{R}(\tau )$ unchanged.
The rapidly oscillating factors in the integrand then make the corresponding
integrals vanish. There is no resonance problem in the mathematical context.
For the practical physical problem, slow changing of the Hamiltonian means
that $T$ is such a long time that
\begin{equation}
\left\vert \frac{\left\langle u_{m}(R(t))\mid \overset{\cdot }{u}
_{n}(R(t))\right\rangle \hbar }{E_{m}(R(t))-E_{n}(R(t))}\right\vert \ll 1
\end{equation}
In both the mathematical and the physical context, the integral equation (8)
can be approximately rewritten as
\begin{equation}
\check{c}_{m}^{A}(\tau )=\check{c}_{m}(0)-\int_{0}^{\tau }\left\langle u_{m}(
\check{R}(\tau _{1}))\mid \frac{d}{d\tau }u_{m}(\check{R}(\tau
_{1}))\right\rangle \check{c}_{m}^{A}(\tau _{1})d\tau _{1}
\end{equation}
Solving this equation by iteration gives
\begin{equation}
\check{c}_{m}^{A}(\tau )=\exp \left\{ -\int_{0}^{\tau }\left\langle u_{m}(
\check{R}(\tau _{1}))\mid \frac{d}{d\tau }u_{m}(\check{R}(\tau
_{1}))\right\rangle d\tau _{1}\right\} \check{c}_{m}(0)
\end{equation}
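Indeed, (10) is just the integral form of the first-order linear differential
equation
\[
\frac{d}{d\tau }\check{c}_{m}^{A}(\tau )=-\left\langle u_{m}(\check{R}(\tau
))\mid \frac{d}{d\tau }u_{m}(\check{R}(\tau ))\right\rangle \check{c}
_{m}^{A}(\tau ),
\]
whose solution with initial value $\check{c}_{m}(0)$ is precisely (11);
iterating (10) builds up the exponential series term by term.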
This proves the QAT.
\subsection{Analysis of `Inconsistency' of QAT}
When we substitute the adiabatic approximate solution (11) back into the
integral equations (8), the equations approximately hold.
\begin{equation}
0\approx -\sum_{n(\neq m)}\int_{0}^{\tau }\left\langle u_{m}(\check{R}(\tau
_{1}))\mid \frac{d}{d\tau }u_{n}(\check{R}(\tau _{1}))\right\rangle \exp
\left\{ \frac{i}{\hbar }T\int_{0}^{\tau _{1}}[E_{m}(\check{R}(\tau ^{\prime
}))-E_{n}(\check{R}(\tau ^{\prime }))]d\tau ^{\prime }\right\} \check{c}
_{n}^{A}(\tau _{1})d\tau _{1}
\end{equation}
However, when we substitute the adiabatic approximate solution (11) back
into the differential equations (6) whose initial value problem is
equivalent to the integral equations (8), we obtain
\begin{equation}
0\approx -\sum_{n(\neq m)}\left\langle u_{m}(\check{R}(\tau ))\mid \frac{d}{
d\tau }u_{n}(\check{R}(\tau ))\right\rangle \exp \left\{ \frac{i}{\hbar }
T\int_{0}^{\tau }[E_{m}(\check{R}(\tau ^{\prime }))-E_{n}(\check{R}(\tau
^{\prime }))]d\tau ^{\prime }\right\} \check{c}_{n}^{A}(\tau )
\end{equation}
Considering that $\psi (0)$ can be an arbitrary state-vector, we have
\begin{equation}
0\approx \left\langle u_{m}(\check{R}(\tau ))\mid \frac{d}{d\tau }u_{n}(
\check{R}(\tau ))\right\rangle ,\forall m\neq n
\end{equation}
which is false. Notice that the right-hand side of (13) is the derivative of
the right-hand side of (12), while (12) is correct and (13) is incorrect. In
order to understand the situation we are facing, let's study the following
basic mathematical fact. Let $|\psi (t)\rangle \equiv |0\rangle e^{-i\omega
t}+\varepsilon |1\rangle e^{-it/(\varepsilon ^{2})},(0<\varepsilon \ll 1),$ $
|\varphi (t)\rangle \equiv |0\rangle e^{-i\omega t},$ where $|0\rangle
,|1\rangle $ are energy eigenvectors of the one-dimensional harmonic
oscillator.
\begin{equation}
\because \left\Vert |\psi (t)\rangle -|\varphi (t)\rangle \right\Vert
=\varepsilon \ll 1,\therefore |\psi (t)\rangle \approx |\varphi (t)\rangle
\end{equation}
However,
\begin{equation}
\because \left\Vert |\overset{\cdot }{\psi }(t)\rangle -|\overset{\cdot }{
\varphi }(t)\rangle \right\Vert =1/\varepsilon \gg 1,\therefore |\overset{
\cdot }{\psi }(t)\rangle \nsim |\overset{\cdot }{\varphi }(t)\rangle
\end{equation}
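Both norms follow from a one-line computation:
\[
|\psi (t)\rangle -|\varphi (t)\rangle =\varepsilon |1\rangle
e^{-it/\varepsilon ^{2}},\qquad
|\overset{\cdot }{\psi }(t)\rangle -|\overset{\cdot }{\varphi }(t)\rangle
=-\frac{i}{\varepsilon }|1\rangle e^{-it/\varepsilon ^{2}},
\]
so the distance between the states is $\varepsilon $ while the distance
between their time derivatives is $1/\varepsilon $.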
The above example shows that two approximately equal time-dependent
state-vectors do not necessarily have approximately equal time derivatives.
Therefore an approximate solution to the integral equations (8) does not
ensure that the equivalent differential equations (6) hold approximately.
It is the neglect of this basic mathematical fact that leads to the
`inconsistency' of the QAT in [6].
The QAT gives the approximate state-vector, not the approximate time
derivative of the state-vector. All the physics is, however, determined by
the state-vector itself, not by its time derivative. Therefore the QAT is
completely correct physically.
\subsection{An exactly solvable example}
Let us consider an exactly solvable example: the evolution of the spin
wavefunction of an electron in a slowly rotating magnetic field
$\overrightarrow{B}(t)=B_{0}(\overrightarrow{i}\cos \frac{2\pi }{T}t+
\overrightarrow{j}\sin \frac{2\pi }{T}t)$. The instantaneous Hamiltonian is
\begin{equation}
H(t)=-\overrightarrow{\mu }\cdot \overrightarrow{B}(t)=\frac{e}{m}
\overrightarrow{s}\cdot \overrightarrow{B}(t)=\frac{e\hbar }{2m}
\overrightarrow{\sigma }\cdot \overrightarrow{B}(t)=\frac{e\hbar B_{0}}{2m}
\left[
\begin{array}{cc}
0 & e^{-i2\pi t/T} \\
e^{i2\pi t/T} & 0
\end{array}
\right] =\varepsilon \left[
\begin{array}{cc}
0 & e^{-i2\pi t/T} \\
e^{i2\pi t/T} & 0
\end{array}
\right]
\end{equation}
Its eigenvalues are $E_{\pm }(t)=\pm \varepsilon $, and the corresponding
eigenvectors are
\begin{equation}
u_{\pm }(t)=\frac{1}{\sqrt{2}}\left[
\begin{array}{c}
e^{-i\pi t/T} \\
\pm e^{i\pi t/T}
\end{array}
\right]
\end{equation}
The exact general solution to the Schr\"{o}dinger equation
\begin{equation}
i\hbar \frac{d}{dt}\psi (t)=H(t)\psi (t)
\end{equation}
or
\begin{equation}
i\hbar \frac{d}{dt}\left[
\begin{array}{c}
x(t) \\
y(t)
\end{array}
\right] =\varepsilon \left[
\begin{array}{cc}
0 & e^{^{-i2\pi t/T}} \\
e^{i2\pi t/T} & 0
\end{array}
\right] \left[
\begin{array}{c}
x(t) \\
y(t)
\end{array}
\right]
\end{equation}
is
\begin{equation}
\psi (t)=\left[
\begin{array}{c}
x(t) \\
y(t)
\end{array}
\right] =\left[
\begin{array}{c}
-A^{-1}[c_{1}(B+C)e^{iCt}+c_{2}(B-C)e^{-iCt}]e^{-iBt} \\
\lbrack c_{1}e^{iCt}+c_{2}e^{-iCt}]e^{iBt}
\end{array}
\right]
\end{equation}
where $A=\varepsilon /\hbar $, $B=\pi /T$, $C=\sqrt{A^{2}+B^{2}}$, and
$c_{1}$, $c_{2}$ are the integration constants. The specific solution
determined by the initial condition
\begin{equation}
\psi (0)=\left[
\begin{array}{c}
x(0) \\
y(0)
\end{array}
\right] =\frac{1}{\sqrt{2}}\left[
\begin{array}{c}
1 \\
1
\end{array}
\right]
\end{equation}
is
\begin{equation}
\psi (t)=\frac{1}{\sqrt{2}}\left[
\begin{array}{c}
\left( \cos Ct-i\frac{A-B}{C}\sin Ct\right) e^{-iBt} \\
\left( \cos Ct-i\frac{A+B}{C}\sin Ct\right) e^{^{iBt}}
\end{array}
\right]
\end{equation}
Let us go over to the rotating representation:
\begin{equation}
\psi (t)=c_{+}(t)u_{+}(t)e^{-iAt}+c_{-}(t)u_{-}(t)e^{iAt}
\end{equation}
The exact Schr\"{o}dinger equation becomes
\begin{equation}
\overset{\cdot }{c}_{+}(t)=iBe^{i2At}c_{-}(t),\qquad
\overset{\cdot }{c}_{-}(t)=iBe^{-i2At}c_{+}(t)
\end{equation}
Its general solution is
\begin{equation}
\left[
\begin{array}{c}
c_{+}(t) \\
c_{-}(t)
\end{array}
\right] =\left[
\begin{array}{c}
e^{iAt}\left( \frac{C-A}{B}c^{\prime }e^{iCt}-\frac{C+A}{B}c^{\prime \prime
}e^{-iCt}\right) \\
e^{-iAt}\left( c^{\prime }e^{iCt}+c^{\prime \prime }e^{-iCt}\right)
\end{array}
\right]
\end{equation}
The specific solution determined by the initial condition
\begin{equation}
\left[
\begin{array}{c}
c_{+}(0) \\
c_{-}(0)
\end{array}
\right] =\left[
\begin{array}{c}
1 \\
0
\end{array}
\right]
\end{equation}
is
\begin{equation}
\left[
\begin{array}{c}
c_{+}(t) \\
c_{-}(t)
\end{array}
\right] =\left[
\begin{array}{c}
\left( \cos Ct-i\frac{A}{C}\sin Ct\right) e^{iAt} \\
(i\frac{B}{C}\sin Ct)e^{-iAt}
\end{array}
\right]
\end{equation}
The adiabatic approximation means neglecting the non-diagonal ($n\neq m$)
terms, which contain oscillating factors, on the right-hand side of the
differential equations (25). The adiabatic approximate solution determined
by the initial condition (27) is
\begin{equation}
\left[
\begin{array}{c}
c_{+}^{A}(t) \\
c_{-}^{A}(t)
\end{array}
\right] =\left[
\begin{array}{c}
1 \\
0
\end{array}
\right]
\end{equation}
Going over to the dimensionless time $\tau =t/T$, we rewrite (28) and (29) as
\begin{equation}
\left[
\begin{array}{c}
\check{c}_{+}(\tau ) \\
\check{c}_{-}(\tau )
\end{array}
\right] =\left[
\begin{array}{c}
\left( \cos \sqrt{\left( \varepsilon T/\hbar \right) ^{2}+\pi ^{2}}\,\tau -i
\frac{\varepsilon T/\hbar }{\sqrt{\left( \varepsilon T/\hbar \right)
^{2}+\pi ^{2}}}\sin \sqrt{\left( \varepsilon T/\hbar \right) ^{2}+\pi ^{2}}
\,\tau \right) e^{i\varepsilon T\tau /\hbar } \\
\left( i\frac{\pi }{\sqrt{(\varepsilon T/\hbar )^{2}+\pi ^{2}}}\sin \sqrt{
\left( \varepsilon T/\hbar \right) ^{2}+\pi ^{2}}\,\tau \right)
e^{-i\varepsilon T\tau /\hbar }
\end{array}
\right]
\end{equation}
\begin{equation}
\left[
\begin{array}{c}
\check{c}_{+}^{A}(\tau ) \\
\check{c}_{-}^{A}(\tau )
\end{array}
\right] =\left[
\begin{array}{c}
1 \\
0
\end{array}
\right]
\end{equation}
It's easy to see that
\begin{equation}
\left[
\begin{array}{c}
\check{c}_{+}(\tau ) \\
\check{c}_{-}(\tau )
\end{array}
\right] \underset{T\rightarrow \infty }{\longrightarrow }\left[
\begin{array}{c}
1 \\
0
\end{array}
\right] =\left[
\begin{array}{c}
\check{c}_{+}^{A}(\tau ) \\
\check{c}_{-}^{A}(\tau )
\end{array}
\right]
\end{equation}
The difference between (30) and (31) is small, but it oscillates rapidly
with the dimensionless time $\tau $. Therefore it is to be expected that the
derivative of this difference with respect to $\tau $ is no longer small. In
fact, letting $F\equiv \sqrt{\left( \varepsilon T/\hbar \right) ^{2}+\pi
^{2}}$, we have
\begin{equation}
\left[
\begin{array}{c}
\frac{d}{d\tau }\check{c}_{+}(\tau ) \\
\frac{d}{d\tau }\check{c}_{-}(\tau )
\end{array}
\right] =\left[
\begin{array}{c}
\left( \frac{-\pi ^{2}}{F}\sin F\tau \right) e^{i\varepsilon T\tau /\hbar }
\\
\left( \frac{\pi \varepsilon T/\hbar }{F}\sin F\tau +i\pi \cos F\tau \right)
e^{-i\varepsilon T\tau /\hbar }
\end{array}
\right] \underset{T\rightarrow \infty }{\longrightarrow }\left[
\begin{array}{c}
0 \\
i\pi e^{-i2\varepsilon T\tau /\hbar }
\end{array}
\right] \neq \left[
\begin{array}{c}
0 \\
0
\end{array}
\right] =\left[
\begin{array}{c}
\frac{d}{d\tau }\check{c}_{+}^{A}(\tau ) \\
\frac{d}{d\tau }\check{c}_{-}^{A}(\tau )
\end{array}
\right]
\end{equation}
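The formulas above are easy to check numerically. The following short script
is only a minimal illustration (it is not part of the derivation and assumes
the Python library \texttt{numpy}); it evaluates (28) and (33) for a large
$T$ and confirms that $\check{c}_{\pm }(\tau )$ stays close to the adiabatic
values $(1,0)$ while $\left\vert \frac{d}{d\tau }\check{c}_{-}(\tau
)\right\vert $ remains of order $\pi $.
\begin{verbatim}
# Check of Eqs. (28)-(33) for the rotating-field example (units hbar = 1).
import numpy as np

eps, T = 1.0, 1000.0               # slow evolution: T >> hbar/eps
A, B = eps, np.pi / T              # A = eps/hbar, B = pi/T
C = np.sqrt(A**2 + B**2)

tau = np.linspace(0.0, 1.0, 2001)  # dimensionless time tau = t/T
t = T * tau

# exact rotating-frame amplitudes, Eq. (28)
c_plus  = (np.cos(C*t) - 1j*(A/C)*np.sin(C*t)) * np.exp(1j*A*t)
c_minus = 1j*(B/C)*np.sin(C*t) * np.exp(-1j*A*t)

# distance from the adiabatic values (1, 0) of Eq. (29): both are small
print(np.max(np.abs(c_plus - 1.0)), np.max(np.abs(c_minus)))

# derivative with respect to tau, Eq. (33): stays of order pi
F = np.sqrt((eps*T)**2 + np.pi**2)
dc_minus = (np.pi*eps*T/F*np.sin(F*tau) + 1j*np.pi*np.cos(F*tau)) \
           * np.exp(-1j*eps*T*tau)
print(np.max(np.abs(dc_minus)))    # approximately pi
\end{verbatim}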
\subsection{Conclusion}
The above discussion shows: (i) The adiabatic state-vector $\psi ^{A}(t)$
does not satisfy the Schr\"{o}dinger differential equation even
approximately, but it does satisfy the equivalent integral equation
approximately. (ii) The QAT is completely correct physically. This is
ensured by $\left\Vert \psi (t)-\psi ^{A}(t)\right\Vert \ll 1$. It is,
however, not necessarily true that $\left\Vert \overset{\cdot }{\psi }(t)-
\overset{\cdot }{\psi ^{A}}(t)\right\Vert \ll 1$. (iii) Taking $\overset{
\cdot }{\psi ^{A}}(t)$ for $\overset{\cdot }{\psi }(t)$ may lead to
contradictions.
Even though we do not agree with [6], we still think it is an interesting
piece of work, because it raises an important point: in theoretical
reasoning, one has to bear in mind that approximately equal functions do
not necessarily have approximately equal derivatives.
\end{document}
\begin{document}
\title{Online Packet Scheduling with Bounded Delay and Lookahead}
\begin{abstract}
We study the \emph{online bounded-delay packet scheduling problem ({\textsf{PacketScheduling}})}, where
packets of unit size arrive at a router over time and need to be transmitted
over a network link. Each packet has two attributes:
a non-negative weight and a deadline for its transmission.
The objective is to maximize the total weight of the transmitted packets.
This problem has been well studied in the literature, yet its
optimal competitive ratio remains unknown: the best upper bound is
$1.828$~\cite{englert_suppressed_packets_07}, still quite far from the best lower bound of
$\phi \approx 1.618$~\cite{hajek_unit_packets_01,andelman_queueing_policies_03,chin_partial_job_values_03}.
In the variant of {{\textsf{PacketScheduling}}} with \emph{$s$-bounded instances}, each packet can be
scheduled in at most $s$ consecutive slots, starting at its release time.
The lower bound of $\phi$ applies even to the special case of $2$-bounded
instances, and a $\phi$-competitive algorithm for $3$-bounded
instances was given in~\cite{chin_weighted_throughput_06}.
Improving that result, and addressing a question posed by Goldwasser~\cite{goldwasser_survey_10},
we present a $\phi$-competitive algorithm for \emph{$4$-bounded} instances.
We also study a variant of {{\textsf{PacketScheduling}}} where an online
algorithm has the additional power of \emph{1-lookahead}, knowing at
time $t$ which packets will arrive at time $t+1$. For {{\textsf{PacketScheduling}}} with 1-lookahead
restricted to $2$-bounded instances, we present an online
algorithm with competitive ratio $\half (\sqrt{13} - 1) \approx 1.303$
and we prove a nearly tight lower bound of $\onefourth (1 + \sqrt{17}) \approx 1.281$.
\end{abstract}
\section{Introduction}\label{sec:intro}
\myparagraph{Background.}
Optimizing the flow of packets across an IP network gives rise to a plethora of
challenging algorithmic problems. In fact, even
scheduling packet transmissions from a router across a specific network
link can involve non-trivial tradeoffs.
Several models for such tradeoffs have been formulated, depending on the
architecture of the router, on characteristics of the packets,
and on the objective function.
In the model that we study in this paper, each packet has two attributes:
a non-negative weight and a deadline for its transmission. The time is assumed to
be discrete (slotted), and only one packet can be sent in each slot.
The objective is to maximize the total weight of the transmitted packets.
We focus on the online setting, where at each time step the router needs to choose
a pending packet for transmission, without the knowledge about future packet arrivals.
This problem, which we call \emph{online bounded-delay packet scheduling problem} ({{\textsf{PacketScheduling}}}),
was introduced by Kesselman~{\em et al.\/}~\cite{kesselman_buffer_overflow_04} as a
theoretical abstraction that captures the constraints and objectives of packet
scheduling in networks that need to provide quality of service (QoS) guarantees.
The combination of deadlines and weights is used to model packet priorities.
In the literature, the {{\textsf{PacketScheduling}}} problem is sometimes referred to as
\emph{bounded-delay buffer management in QoS switches}. It can also be formulated
as the job-scheduling problem $1|p_j = 1,r_j|\sum w_jU_j$,
where packets are represented by unit-length
jobs with deadlines, with the objective to maximize the weighted throughput.
A router transmitting packets across a link needs to make scheduling decisions
on the fly, based only on the currently available information. This motivates the
study of online competitive algorithms for {{\textsf{PacketScheduling}}}.
A simple online greedy algorithm that always schedules the heaviest pending packet is
known to be $2$-competitive~\cite{hajek_unit_packets_01,kesselman_buffer_overflow_04}.
In a sequence of papers~\cite{chrobak_improved_buffer_04,englert_buffer_management_09,li_optimal_agreeable_05,englert_suppressed_packets_07},
this ratio was gradually improved, and the best
currently known ratio is $1.828$~\cite{englert_suppressed_packets_07}.
The best lower bound, widely believed to be the optimal ratio, is
$\phi = (1 + \sqrt{5}) / 2 \approx 1.618$~\cite{hajek_unit_packets_01,andelman_queueing_policies_03,chin_partial_job_values_03}.
Closing the gap between these two bounds is one of the most intriguing open problems in
online scheduling.
\myparagraph{$s$-Bounded instances.}
In an attempt to bridge this gap, restricted models
have been studied.
In the \emph{$s$-bounded} variant of {{\textsf{PacketScheduling}}}, each packet must be scheduled
within $k$ consecutive slots, starting at its
release time, for some $k\leq s$ possibly depending on the packet.
The lower bound of $\phi$ from~\cite{hajek_unit_packets_01,andelman_queueing_policies_03,chin_partial_job_values_03}
holds even in the $2$-bounded case.
A matching $\phi$-competitive algorithm was given by
Kesselman~{{\em et al.\/}}~\cite{kesselman_buffer_overflow_04} for $2$-bounded
instances and by Chin~{{\em et al.\/}}~\cite{chin_weighted_throughput_06}
for $3$-bounded instances.
Both results are based on the algorithm ${\textrm{EDF}}_\alpha$, with $\alpha=\phi$, which always schedules the earliest-deadline
packet whose weight is at least the weight of the heaviest pending packet divided by $\alpha$
(ties are broken in favor of heavier packets). ${\textrm{EDF}}_\phi$ is not $\phi$-competitive
for $4$-bounded instances; however, a different choice of $\alpha$
yields a $1.732$-competitive algorithm for the $4$-bounded case~\cite{chin_weighted_throughput_06}.
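To fix notation, a minimal sketch of a single step of ${\textrm{EDF}}_\alpha$
(in Python; the packet representation and the function name are ours and
serve only as an illustration) reads as follows.
\begin{verbatim}
# One step of EDF_alpha: among the pending packets of weight at least
# w_h/alpha (h = heaviest pending packet), transmit the one with the
# earliest deadline, breaking ties in favour of heavier packets.
def edf_alpha_step(pending, alpha):
    # pending: non-empty list of (weight, deadline) pairs
    w_h = max(w for w, d in pending)
    candidates = [(w, d) for (w, d) in pending if w >= w_h / alpha]
    return min(candidates, key=lambda p: (p[1], -p[0]))
\end{verbatim}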
\emparagraph{Our contribution.} We present a
$\phi$-competitive online algorithm for {{\textsf{PacketScheduling}}} restricted to $4$-bounded instances, matching the
lower bound of $\phi$ (see Section~\ref{sec:4bounded}).
This improves the results from~\cite{chin_weighted_throughput_06}
and answers the question posed by Goldwasser
in his SIGACT~News survey~\cite{goldwasser_survey_10}.
\myparagraph{Algorithms with 1-lookahead.}
In Sections~\ref{sec:lookaheadalgo} and \ref{sec:lookaheadlb}, we
investigate a variant of {{\textsf{PacketScheduling}}} where an
online algorithm is able to learn at time $t$ which packets will
arrive by time $t+1$. This property is known as
\emph{1-lookahead}. From a practical point of view, 1-lookahead corresponds
to the situation in which a router can see the packets that are just
arriving to the buffer and that will be available for transmission in the next time slot.
The notion of lookahead is quite natural and it has appeared in the online algorithm
literature for paging~\cite{albers_lookahead_97}, scheduling~\cite{motwani_lookahead_assembly_lines_98}
and bin packing~\cite{grove_bin_packing_lookahead_95} since the 1990s.
Ours is the first paper, to our knowledge, that considers
lookahead in the context of packet scheduling.
\emparagraph{Our contributions.}
We provide two results about {{\textsf{PacketScheduling}}} with 1-lookahead, restricted to $2$-bounded instances.
First, in Section~\ref{sec:lookaheadalgo}, we present
an online algorithm for this problem with competitive ratio of $\half (\sqrt{13} - 1) \approx 1.303$.
Then, in Section~\ref{sec:lookaheadlb}, we give a lower bound of $\onefourth(1 +
\sqrt{17}) \approx 1.281$ on the competitive ratio of algorithms with 1-lookahead
which holds already for the $2$-bounded case.
\section{Definitions and Notation}\label{sec:definitions}
\myparagraph{Problem statement.}
Formally, we define the {{\textsf{PacketScheduling}}} problem as follows.
The instance is a set of packets, with each packet $p$ specified by a
triple $(r_p,d_p,w_p)$, where $r_p$ and $d_p\ge r_p$ are integers representing
the \emph{release time} and \emph{deadline} of $p$, and $w_p\ge 0$ is a real number
representing the \emph{weight} of $p$. Time is discrete, divided into
unit \emph{time slots}, also called \emph{steps}.
A \emph{schedule} assigns time slots to some subset of packets such that
(i) any packet $p$ in this subset is assigned a slot in the interval $[r_p,d_p]$,
and (ii) each slot is assigned to at most one packet.
The objective is to compute a schedule that maximizes the total weight of the
scheduled packets, also called the {\em profit}.
In the \emph{$s$-bounded} variant of {{\textsf{PacketScheduling}}}, we assume that each
packet $p$ in the instance satisfies $d_p\le r_p+s-1$. In other words,
this packet must be scheduled within $k_p$ consecutive slots, starting
at its release time, for some $k_p\leq s$.
\myparagraph{Online algorithms.}
In the online variant of {{\textsf{PacketScheduling}}}, which is the focus of our work, at any time $t$
only the packets released at times up to $t$ are revealed. Thus an online
algorithm needs to decide which packet to schedule at time $t$ (if any)
without any knowledge of packets released after time $t$.
As is common in the area of online optimization, we measure the
performance of an online algorithm ${\cal A}$ by its competitive
ratio. An algorithm is $R$-competitive if, for all instances, the
total weight of the optimal schedule (computed offline) is at most $R$
times the weight of the schedule computed by ${\cal A}$.
We say that a packet is \emph{pending} for an algorithm at time $t$,
if $r_p \le t \le d_p$ and $p$ is not scheduled before time $t$.
A (pending) packet $p$ is \emph{expiring} at time $t$ if $d_p = t$,
that is, it must be scheduled now or never.
A packet $p$ is \emph{tight} if $r_p = d_p$; thus $p$ is
expiring already at its release time.
\myparagraph{Algorithms with 1-lookahead.}
In Sections~\ref{sec:lookaheadalgo} and \ref{sec:lookaheadlb}, we
investigate the {{\textsf{PacketScheduling}}} problem \emph{with 1-lookahead}. With
1-lookahead, the problem definition changes so that at time $t$, an
online algorithm can also see the packets that will be released at
time $t+1$, in addition to the pending packets. Naturally, only a
pending packet can be scheduled at time $t$.
\myparagraph{Other terminology and assumptions.}
We will make several assumptions about our problem that do not affect the
generality of our results. First, we can assume that all packets have different
weights. Any instance can be transformed into an instance with distinct weights
through infinitesimal perturbation of the weights, without affecting the
competitive ratio. Second, we assume that at each step there is at least one
pending packet. (If not, we can always release a tight packet of
weight $0$ at each step.)
We define the \emph{earliest-deadline relation} on packets, or \emph{canonical
ordering}, denoted $\prec$, where $x\prec y$ means that either $d_x <
d_y$ or $d_x = d_y$ and $w_x > w_y$ (so the ties are broken in favor
of heavier packets). At any step $t$, the algorithm maintains the
earliest-deadline relation on the set of its pending packets.
Throughout the paper, ``earliest-deadline packet'' means
the earliest packet in the canonical ordering.
Regarding the adversary (optimal) schedule, we can assume that it
satisfies the following \emph{earliest-deadline property}: if packets $p$, $p'$ are
scheduled in steps $t$ and $t'$, respectively, where
$r_{p'} \le t < t' \le d_p$ (that is, $p$ and $p'$
can be swapped in the schedule without violating their
release times and deadlines), then $p\prec p'$. This can be rephrased in
the following useful way: at any step, the optimum schedule transmits
the earliest-deadline packet among all the pending packets
that it transmits in the future.
\section{An Algorithm for 4-bounded Instances}\label{sec:4bounded}
In this section, we present a $\phi$-competitive algorithm for
$4$-bounded instances. Ratio $\phi$ is of course optimal~\cite[see
also
Section~\ref{sec:intro}]{hajek_unit_packets_01,andelman_queueing_policies_03,chin_partial_job_values_03}.
Up until now, the best competitive ratio for $4$-bounded instances was
$\sqrt{3}\approx 1.732$, achieved by algorithm ${\textrm{EDF}}_{\sqrt{3}}$ in
\cite{chin_weighted_throughput_06}. Our algorithm can be seen as a
modification of ${\textrm{EDF}}_\phi$, which under certain conditions schedules
a packet lighter than $w_h/\phi$ where $h$ is the heaviest pending
packet.
We remark that our algorithm uses memory; in particular, it marks one
pending packet under certain conditions. It is an interesting question
whether there is a memoryless $\phi$-competitive algorithm for $4$-bounded
instances.
\myparagraph{Algorithm~$\textsf{ToggleH}$.}
The algorithm maintains one mark that may be assigned to one of the
pending packets. For a given step $t$, we choose the following packets from
among all pending packets:
\begin{description}
\setlength\itemsep{-1pt}
\item{$h=$} the heaviest packet,
\item{$s=$} the second-heaviest packet,
\item{$f=$} the earliest-deadline packet with $w_f\ge w_h/\phi$, and
\item{$e=$} the earliest-deadline packet with $w_e \ge w_h/\phi^2$.
\end{description}
We then proceed as follows:
\begin{tabbing}
aaa \= aaa \= aaa \= aaa \= aaa \= aaa \= \kill
\> \textbf{if} ($h$ is not marked) $\vee$ ($w_s \ge w_h/\phi$) $\vee$ ($d_e > t$)
\\
\>\> schedule $f$
\\
\> \> \textbf{if} there is a marked packet \textbf{then} unmark it
\\
\>\> \textbf{if} ($d_h =t+3$) $\wedge$ ($d_f = t+2$) \textbf{then} mark $h$
\\
\> \textbf{else} // ($h$ is marked) $\wedge$ ($w_s < w_h/\phi$) $\wedge$ ($d_e = t$)
\\
\>\> schedule $e$
\\
\> \> unmark $h$
\end{tabbing}
Note that when $f\neq h$, the algorithm always schedules $f$. This is because
in this case $f$ is a candidate for $s$, so the condition $w_s\ge w_h/\phi$ holds.
The algorithm never specifically chooses $s$ for scheduling -- it is only
used to determine if there is one more relatively heavy pending packet other than $h$. (But
$s$ \emph{may} get scheduled if it so happens that $s =f$ or
$s = e$.)
Note also that, if $e\neq f$, then $e$ is scheduled only in a very
specific scenario, when all of the following hold:
$e$ is expiring, $h$ is marked, and $w_s < w_h/\phi$.
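The following sketch summarizes one step of $\textsf{ToggleH}$ in Python (the
packet representation, the function name, and the handling of the mark as a
return value are ours, chosen only for illustration).
\begin{verbatim}
import math
PHI = (1 + math.sqrt(5)) / 2

def toggleh_step(pending, marked, t):
    # pending: list of packets, each a dict with weight 'w' and deadline 'd';
    # marked: the currently marked packet or None.
    # Returns (packet to schedule, packet marked for the next step).
    h = max(pending, key=lambda p: p['w'])            # heaviest pending packet
    others = [p for p in pending if p is not h]
    w_s = max((p['w'] for p in others), default=0.0)  # second-heaviest weight
    edf = lambda p: (p['d'], -p['w'])                 # canonical ordering
    f = min((p for p in pending if p['w'] >= h['w'] / PHI), key=edf)
    e = min((p for p in pending if p['w'] >= h['w'] / PHI**2), key=edf)

    if (marked is not h) or (w_s >= h['w'] / PHI) or (e['d'] > t):
        # f-step: schedule f; any mark is dropped, and h is marked anew
        # exactly when d_h = t+3 and d_f = t+2
        new_mark = h if (h['d'] == t + 3 and f['d'] == t + 2) else None
        return f, new_mark
    else:
        # e-step: schedule e and unmark h
        return e, None
\end{verbatim}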
\myparagraph{Intuition.}
Let us give a high-level view of the analysis using charging schemes
and an example that motivates both our algorithm and its analysis.
The example consists of four packets $j,k,f,h$ released in step $1$,
with deadlines $1,2,3,4$ and weights
$1-\varepsilon,1-\varepsilon,1,\phi$ for a small $\varepsilon>0$,
respectively. The optimum schedules all packets.
Algorithm ${\textrm{EDF}}_\phi$ performs only $f$-steps; in our example it
schedules $f$ and $h$ in steps $1$ and $2$, while $j$ and $k$ are
lost. Thus the ratio is larger than $\phi$. (In fact, after optimizing
the threshold and the weight of $h$, this is the tight example for
${\textrm{EDF}}_{\sqrt{3}}$ on 4-bounded instances.) $\textsf{ToggleH}$ avoids this
example by performing an $e$-step in step $2$ and scheduling $k$, which
has the role of $e$ and $s$ in the algorithm.
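Concretely, on this instance the optimum collects total weight
$3+\phi -2\varepsilon $, while ${\textrm{EDF}}_\phi$ collects only $1+\phi $,
so the ratio tends to $(3+\phi )/(1+\phi )\approx 1.76>\phi $ as
$\varepsilon \to 0$; $\textsf{ToggleH}$, which schedules $f$, $k$ and $h$ in
the first three steps, collects $2+\phi -\varepsilon $, giving a ratio below
$\phi $.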
This example and its variants are also important for our analysis. We
analyze the algorithms by charging schemes, where the weight of each
packet scheduled by the adversary is charged to one or more of the
slots of the algorithm's schedule. If the weight charged to each slot
is at most $R$ times the weight of the packet scheduled by the
algorithm in that slot, the algorithm is $R$-competitive. In the case
of ${\textrm{EDF}}$, we charge the weight of each packet $j$ scheduled by the
adversary at time $t$ either fully to the step where ${\textrm{EDF}}$ schedules
$j$, if it is before $t$, or fully to step $t$ otherwise. In our
example, the weight charged to step $1$ is $2-\varepsilon$ while
${\textrm{EDF}}$ schedules only weight $1$, giving the ratio $2$. Considering
steps $1$ and $2$ together leads to a better ratio and after balancing
the threshold it gives the tight analysis of ${\textrm{EDF}}_{\sqrt{3}}$.
Our analysis of $\textsf{ToggleH}$ is driven by the variants of the example
above where step~$2$ is an $f$-step. This may happen in several
cases. One case is if in step~$2$ another packet $s$ with $w_s\geq
w_h/\phi$ arrives. If $s$ is not scheduled in step $2$, then $s$ is
pending in step~$3$, thus $\textsf{ToggleH}$ schedules a relatively heavy
packet in step~$3$, and we can charge a part of the weight of $f$,
scheduled in step $3$ by the adversary, to step~$3$.
This motivates the definition of regular up and back charges
below and corresponds to Case~5.1 in the analysis. Another case is
when the weight of $k$ is changed to $1/\phi-\varepsilon$. Then $\textsf{ToggleH}$
performs an $f$-step because $k$ is not a candidate for $e$, thus the
role of $e$ is taken by the non-expiring packet $h$. However, then the
weight of the four packets charged to steps $1$ and $2$ in the way
described above is at most $\phi$ times the weight of $f$ and $h$;
this corresponds to Case~5.2 of the analysis. Lemma~\ref{lem:aux}
gives a subtle argument showing that in the 4-bounded case essentially
these two variants of our example are the only difficult
situations. Finally, in the original example, $\textsf{ToggleH}$ schedules $k$
in step $2$ which is an $e$-step. Then again $h$ is a pending heavy
packet and we can charge some weight of $f$ to step $3$. Intuitively
it is important that an $e$-step is performed only in a very specific
situation where it is guaranteed that $h$ can be scheduled in the next
two steps (as it is marked) and that there is no other packet of
comparable weight due to the condition $w_s<w_h/\phi$. Still, there is
a case to be handled: If more packets arrive in step $3$, it is also
possible that the adversary schedules $h$ already in step $2$ and we
need to redistribute its weight. This case motivates the definition of
the special up and back charges below.
\begin{theorem}\label{thm:toggle-h}
Algorithm~$\textsf{ToggleH}$ is $\phi$-competitive on $4$-bounded instances.
\end{theorem}
\begin{proof}
Fix some optimal adversary schedule. Without loss of generality, we
can assume that this schedule satisfies the earliest-deadline property
(see Section~\ref{sec:definitions}).
We have two types of packets scheduled by Algorithm~$\textsf{ToggleH}$:
\emph{f-packets}, scheduled using the first case, and
\emph{e-packets}, scheduled using the second case. Similarly, we refer
to the steps as \emph{$f$-steps} and \emph{$e$-steps}.
Let $t$ be the current step. By $h$, $f$, $e$, and $s$ we denote the
packets from the definition of $\textsf{ToggleH}$. By $j$ we denote the packet
scheduled by the adversary. By $h'$ and $h''$ we denote the heaviest
pending packets in steps $t+1$ and $t+2$, respectively. We use the
same convention for packets $f$, $e$, $s$, and $j$.
Our analysis uses a new charging scheme which we now define. The
adversary packet $j$ scheduled in step $t$ is charged according to the
first case below that applies:
\begin{enumerate}
\item
If $t$ is an $e$-step and $j=h$, we charge $w_h/\phi$ to step $t$
and $w_h/\phi^2$ to step $t-1$. We call these charges a
\emph{special up charge} and a \emph{special back charge},
respectively. Note that the total charge is equal to $w_h=w_j$.
\item
If $j$ is pending for $\textsf{ToggleH}$ in step $t$, charge $w_j$ to
step $t$. We call this charge a \emph{full up charge}.
\item
Otherwise $j$ is scheduled before step $t$. We charge
$w_h/\phi^2$ to step $t$ and $w_j-w_h/\phi^2$ to the step where
$\textsf{ToggleH}$ scheduled $j$. We call these charges a \emph{regular up
charge} and a \emph{regular back charge}, respectively. We point out that
the regular back charge may be negative, but this causes no problems in
the proof.
\end{enumerate}
We start with an easy observation that we use several times throughout the proof.
\begin{lemma}\label{l:up}
If an $f$-step $t$ receives a regular back charge, then the up charge it
receives is less than $w_h/\phi$.
\end{lemma}
\begin{proof}
For a regular up charge the lemma is trivial (with a slack of a factor
of $\phi$). For a full up charge, the existence of a back charge
implies that the adversary schedules $f$ after $j$, thus the
earliest-deadline property of the adversary schedule implies that
$j\prec f$, as both $j$ and $f$ are pending for the adversary at
$t$. Thus $\textsf{ToggleH}$ would schedule $j$ if $w_j\geq
w_h/\phi$. Finally, an $f$-step does not receive a special up charge.
\end{proof}
We examine packets scheduled by $\textsf{ToggleH}$ from left to right, that is
in order of time. For each time step $t$, if $p$ is the packet scheduled at time $t$, we
want to show that the charge to step $t$ is at most $\phi w_p$.
However, as it turns out, this will not always be true. In one
case we will also consider the next step $t+1$ and the packet $p'$ scheduled in step $t+1$,
and show that the total charge to steps $t$ and $t+1$ is at most $\phi (w_p+w_{p'})$.
Let $t$ be the current step. We consider several cases.
\noindent
\mycase{1} $t$ is an $e$-step.
By the definition of $\textsf{ToggleH}$, $w_e\geq w_h/\phi^2$ and $d_e=t$;
the latter implies that step $t$ receives no regular back charge.
We further note that the heaviest pending packet $h'$ in step $t+1$ is either
released at time $t+1$ or it coincides with $h$, which is still pending and
became unmarked by the algorithm in step $t$; in either case $h'$ is unmarked
at the beginning of step $t+1$, which implies that step $t+1$ is an $f$-step.
Thus, step $t$ receives no special back charge, which, combined with the
previous observation, implies it receives no back charge of any kind.
Now we claim that the up charge is at most $w_h/\phi$. For a special
or regular up charge this follows from its definition. For a full up
charge, the packet $j$ is pending at time $t$ for $\textsf{ToggleH}$ and $j\neq
h$ (as for $j=h$ the special charges are used). This implies
that $w_j<w_h/\phi$, as otherwise $w_s\geq w_h/\phi$ and $t$ would be
an $f$-step. Thus the full charge is $w_j\leq w_h/\phi$ as well.
Using $w_e\geq w_h/\phi^2$, the charge is at most $w_h/\phi\leq\phi
w_e$ and we are done.
\noindent
\mycase{2} $t$ is an $f$-step and $t$ does not receive a back charge.
Then $t$ can only receive an up-charge, and this
up charge is at most $w_h\leq \phi w_f$, where the inequality follows from the
definition of $f$.
\noindent
\mycase{3} $t$ is an $f$-step and $t$ receives a special back charge.
From the definition of special charges, the next step is an $e$-step,
and therefore $h'$ is marked at its beginning. Since the only
packet that may be marked after an $f$-step is $h$, we thus have
$h=h'=j'$, and the special back charge is $w_h/\phi^2$.
Since $f\prec h$, the adversary cannot schedule $f$ after step $t$, so
step $t$ cannot receive a regular back charge.
We claim that the up charge to step $t$ is at most $w_f$. Indeed,
a regular up charge is at most $w_h/\phi^2 \leq w_f$, and a special
up charge does not happen in an $f$-step. To show this bound for
a full up charge, assume for contradiction that
$w_j>w_f$. This implies that $j\neq f$ and, since
$\textsf{ToggleH}$ scheduled $f$, we have $d_j>d_f$. In particular $j$ is
pending at time $t+1$. Thus
$w_{s'}\ge w_j > w_f \ge w_h/\phi$, contradicting the fact that
$t+1$ is an $e$-step. Therefore the full charge is $w_j\le w_f$, as claimed.
As $w_h\leq \phi w_f$, the total charge to $t$ is at most
$w_f+w_h/\phi^2\leq w_f+w_f/\phi=\phi w_f$.
\noindent
\mycase{4} $t$ is an $f$-step, $t$ receives a regular back charge and no
special back charge, and
$f=h$. The up charge is at most $w_h/\phi$ by Lemma~\ref{l:up} and the back
charge is at most $w_h$, thus
the total charge is at most $w_h+w_h/\phi=\phi w_h$, and we are done.
\noindent
\mycase{5} $t$ is an $f$-step, $t$ receives a regular back charge and no
special back charge, and
$f\neq h$. Let ${\bar t}$ be the step when the adversary schedules
$f$. We distinguish two sub-cases.
\noindent
\mycase{5.1} In step ${\bar t}$, a packet of weight at least $w_h/\phi$
is pending for the algorithm.
Then the regular back charge to $t$ is at most
$w_f-(w_h/\phi)/\phi^2=w_f-w_h/\phi^3$. As the up charge to $t$ is at most
$w_h/\phi$ by Lemma~\ref{l:up}, the total charge to $t$ is at most
$w_h/\phi+w_f-w_h/\phi^3= w_f+w_h/\phi^2\leq (1+1/\phi) w_f=\phi w_f$,
and we are done.
\noindent
\mycase{5.2} In step ${\bar t}$, no packet of weight at least
$w_h/\phi$ is pending for the algorithm.
In this case we consider the charges to steps $t$ and $t+1$
together. First, we claim the following.
\begin{figure}
\caption{An illustration of the situation in Case~5.2. Up charges are denoted by solid arrows
and back charges by dashed arrows.}
\label{fig:ToggleH-case5.2}
\end{figure}
\begin{lemma}\label{lem:aux}
$\textsf{ToggleH}$ schedules $h$ in step $t+1$. Furthermore, step $t+1$
receives no special charge and it receives an up charge of at most
$w_h/\phi^2$.
\end{lemma}
\begin{proof}
Since $f\neq h$, we have $f\prec h$ and thus, using also the
definition of ${\bar t}$ and 4-boundedness, ${\bar t}\leq d_f<d_h\leq
t+3$. The case condition implies that $h$ is not pending at ${\bar t}$,
thus $\textsf{ToggleH}$ schedules $h$ before ${\bar t}$. The only
possibility is that $\textsf{ToggleH}$ schedules $h$ in step $t+1$,
${\bar t}=d_f=t+2$, and $d_h=t+3$; see Figure~\ref{fig:ToggleH-case5.2}
for an illustration. This also implies that $\textsf{ToggleH}$
marks $h$ in step $t$.
We claim that $w_{s'}<w_h/\phi$. Indeed, otherwise either $s'$ is
pending in step $t+2$, contradicting the condition of Case 5.2, or
$d_{s'}=t+1<d_h$, thus $s'$ is a better candidate for $f'$ than $h$,
which contradicts the fact that the algorithm scheduled
$f'=h$.
The claim also implies that $h'=h$, as otherwise $w_{s'}\geq
w_h$. Since $h=h'$ is scheduled in step $t+1$, there is no
marked packet in step $t+2$ and $t+2$ is an $f$-step; thus there is
no special back charge to $t+1$.
We note that step $t+1$ is also an $f$-step, since $\textsf{ToggleH}$
schedules $h$ in step $t+1$ and $d_h>t+1$. Since $h'=h$ is marked
when step $t+1$ starts and $w_{s'}< w_h/\phi$, the reason that step
$t+1$ is an $f$-step must be that $d_{e'}>t+1$.
There is no special up charge to step $t+1$ as it is an $f$-step. If
the up charge to step $t+1$ is a regular up charge, by definition it
is at most $w_{h'}/\phi^2=w_h/\phi^2$ and the lemma holds.
The only remaining case is that of a full up charge to step $t+1$ from
a packet $j'$ scheduled by the adversary in step $t+1$ and pending for
$\textsf{ToggleH}$ in step $t+1$.
Since $j'\neq h$, it is a candidate for $s'$, and thus
$w_{j'}<w_h/\phi\leq w_f$. The earliest-deadline property of the
adversary schedule implies that $j'\prec f$; together with $d_f=t+2$
and $w_{j'}<w_f$ this implies $d_{j'}=t+1$. Therefore
$w_{j'}<w_h/\phi^2$, as otherwise $j'$ is a candidate for $e'$, but we
have shown that $d_{e'}>t+1$. Thus the regular up charge is at most
$w_{j'}<w_h/\phi^2$ and the lemma holds also in the remaining case.
\end{proof}
By Lemma~\ref{lem:aux}, step $t+1$ receives no special charge and an
up charge of at most $w_h/\phi^2$ and $\textsf{ToggleH}$ schedules $h$ in step
$t+1$. Step $t+1$ thus also receives a regular back charge of at most
$w_h$. So the total charge to step $t+1$ is at most $w_h/\phi^2 + w_h
\le w_f/\phi + w_h$. Moreover, using Lemma~\ref{l:up}, the total charge
to step $t$ is at most $w_h/\phi + w_f$. Thus, the total charge to
these two steps is at most $(w_h/\phi + w_f) + (w_f/\phi + w_h) =
\phi(w_f + w_h)$, as $f$ and $h$ are the two packets scheduled by
$\textsf{ToggleH}$.
In each case we have shown that a step or a pair of
consecutive steps receive a total charge of at most $\phi$
times the weight of packets scheduled in these steps.
Thus $\textsf{ToggleH}$ is $\phi$-competitive for the $4$-bounded case.
\end{proof}
\section{An Algorithm for 2-Bounded Instances with Lookahead}\label{sec:lookaheadalgo}
In this section, we present an algorithm for \emph{$2$-bounded} {{\textsf{PacketScheduling}}} \emph{with 1-lookahead},
as defined in Section~\ref{sec:definitions}.
Consider some online algorithm ${\cal A}$. Recall that,
for a time step $t$, packets \emph{pending} for ${\cal A}$ are those that are released at or
before time $t$ and have neither expired nor been scheduled by ${\cal A}$ before time $t$.
\emph{Lookahead} packets at time $t$ are the packets with release time $t+1$.
For ${\cal A}$, we define the \emph{plan} in step $t$ to be the optimal schedule
in the time interval $[t,\infty)$ that consists
of pending and lookahead packets at time $t$ and has the
earliest-deadline property.
For $2$-bounded instances, this plan will only use slots $t$, $t+1$ and $t+2$.
We will typically denote the packets in the plan scheduled
in these slots by $p_1,p_2,p_3$, respectively. The earliest-deadline
property then implies that if both $p_1$ and $p_2$ have release time $t$
and deadline $t+1$ then $p_1$ is heavier than $p_2$ and similarly for
$p_2$ and $p_3$.
\goodbreak
\myparagraph{Algorithm {\sc CompareWithBias}$_\alpha$.}
Fix some parameter $\alpha > 1$. At any time step $t$, the algorithm proceeds as follows:
\begin{tabbing}
aaa \= aaa \= aaa \= aaa \= aaa \= aaa \= \kill
\> let $p_1,p_2,p_3$ be the plan at time $t$
\\
\> \textbf{if} $r_{p_2} = t$ \textbf{and}
$w_{p_1} < \min(\, w_{p_2} \,,\, w_{p_3} \,,\, \frac{1}{2\alpha} (w_{p_2} + w_{p_3}) \,)$
\\
\>\> \textbf{then} schedule $p_2$
\\
\> \textbf{else} schedule $p_1$
\end{tabbing}
Note that if the algorithm schedules $p_2$ then $p_1$ must be expiring,
for otherwise $w_{p_1} > w_{p_2}$ (by canonical ordering).
Also, the scheduled packet is at least as heavy
as the heaviest expiring packet $q$,
since clearly $w_{p_1}\ge w_q$ and the algorithm schedules $p_2$ only
if $w_{p_1}<w_{p_2}$.
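As an illustration, one step of {\sc CompareWithBias}$_\alpha$ on $2$-bounded
instances can be sketched in Python as follows (a brute-force computation of
the plan; the packet representation and all names are ours, and at least
three packets are assumed to be available -- zero-weight tight packets can
always be added).
\begin{verbatim}
from itertools import permutations

def canonical_key(p):
    # earliest deadline first; among equal deadlines, heavier packets first
    return (p['d'], -p['w'])

def plan(packets, t):
    # Maximum-weight assignment of three packets (pending or lookahead)
    # to the slots t, t+1, t+2, with ties broken towards the
    # earliest-deadline ordering.
    feasible = [tr for tr in permutations(packets, 3)
                if all(p['r'] <= t + i <= p['d'] for i, p in enumerate(tr))]
    return min(feasible,
               key=lambda tr: (-sum(p['w'] for p in tr),
                               [canonical_key(p) for p in tr]))

def compare_with_bias_step(packets, t, alpha):
    # Schedule p2 instead of p1 only if p2 is released at time t and
    # p1 is lighter than w(p2), w(p3) and (w(p2)+w(p3))/(2*alpha).
    p1, p2, p3 = plan(packets, t)
    if p2['r'] == t and p1['w'] < min(p2['w'], p3['w'],
                                      (p2['w'] + p3['w']) / (2 * alpha)):
        return p2
    return p1
\end{verbatim}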
\myparagraph{Analysis.}
We set the parameter $\alpha$ and constants $\delta$ and $R$
which we will use in the analysis so that they satisfy the following equalities:
\begin{align}
2-\delta - \frac{R + 2\delta - 1}{\alpha} = R \label{eq:forwardCh} \\
1 - 2\delta + 2\alpha\delta = R \label{eq:chainCharges} \\
1 + \frac{1}{2\alpha} = R \label{eq:splitCharges}
\end{align}
By solving these equations we get
$\alpha=\onefourth (\sqrt{13} + 3) \approx 1.651$,
$\delta = \onesixth (5 - \sqrt{13}) \approx 0.232$,
and $R = \half(\sqrt{13} - 1) \approx 1.303$.
In this section we will prove the following theorem:
\begin{theorem}
The algorithm {\sc CompareWithBias}$_\alpha$ is $R$-competitive for packet scheduling
on 2-bounded instances for $R = \half(\sqrt{13} - 1) \approx 1.303$ if $\alpha = \onefourth(\sqrt{13} + 3) \approx 1.651$.
\end{theorem}
We also use the following properties of these constants:
\begin{align}
2-R-3\delta = 0\label{eq:2-R-3delta}\\
2-R-2\delta > 0 \label{ineq:2-R-2delta}\\
1-\delta-\frac{R-1+2\delta}{2\alpha} > 0 \label{ineq:1-d-frac}\\
1-\frac{R}{2\alpha} > 0\label{ineq:1-R/2alpha}\\
3\alpha\delta < R \label{ineq:chainBegCh}\\
2-\frac{R}{\alpha} < R\label{ineq:fwdChFromSingleton}
\end{align}
where~(\ref{eq:2-R-3delta}) follows from~(\ref{eq:forwardCh})
and~(\ref{eq:chainCharges}), and the strict inequalities can be verified
numerically.
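For instance, the following few lines of Python (a minimal check, with
variable names chosen by us) confirm the three defining equalities and the
listed properties, including the strict inequalities.
\begin{verbatim}
import math

alpha = (math.sqrt(13) + 3) / 4   # ~ 1.651
delta = (5 - math.sqrt(13)) / 6   # ~ 0.232
R     = (math.sqrt(13) - 1) / 2   # ~ 1.303

# the three defining equalities
assert abs(2 - delta - (R + 2*delta - 1) / alpha - R) < 1e-12
assert abs(1 - 2*delta + 2*alpha*delta - R) < 1e-12
assert abs(1 + 1 / (2*alpha) - R) < 1e-12

# the listed properties, including the strict inequalities
assert abs(2 - R - 3*delta) < 1e-12
assert 2 - R - 2*delta > 0
assert 1 - delta - (R - 1 + 2*delta) / (2*alpha) > 0
assert 1 - R / (2*alpha) > 0
assert 3*alpha*delta < R
assert 2 - R/alpha < R
\end{verbatim}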
Let $\textsf{ALG}$\xspace be the schedule produced by {\sc CompareWithBias}\xspace.
Let us consider an optimal schedule $\textsf{OPT}$\xspace (a.k.a.\ schedule of the adversary) satisfying the
canonical ordering, i.e., if a packet $x$ is scheduled before a packet $y$ in {$\textsf{OPT}$\xspace}
then either $y$ is released after $x$ is scheduled or $x\prec y$.
Recall that we are assuming w.l.o.g.\ that the weights of packets are different.
The analysis of {\sc CompareWithBias}\xspace is based on a charging scheme.
First we define a few packets by their schedule times; see Figure~\ref{fig:packetdef}.
\begin{figure}
\caption{Packet definition.}
\label{fig:packetdef}
\end{figure}
\myparagraph{Informal description of charging.}
We use three types of charges. The adversary's packet $j$ in step
$t$ is charged using a \textit{full charge} either to step $t-1$ if
$\textsf{ALG}$\xspace schedules $j$ in step $t-1$ or to step $t$ if $w_f\geq w_j$
(including the case $f=j$) and $f$ is not in step $t+1$ in $\textsf{OPT}$\xspace; the
last condition assures that step $t$ does not receive two full
charges.
The second type are \textit{split charges} that occur in step $t$ if
$w_f > w_j$, $j$ is pending in step $t$ in {$\textsf{ALG}$\xspace} and $f$ is in step
$t+1$ in $\textsf{OPT}$\xspace, i.e., step $t$ receives a full back charge from $f$.
In this case, we distribute the charge from $j$ to $f$ and another
relatively large packet $f'$ scheduled in step $t+1$ or $t+2$ in
$\textsf{ALG}$\xspace; we shall prove that one of these steps satisfies $2\alpha{\thinnegspace\cdot\thinnegspace}
w_j<w_f+w_f'$. We charge to step $t+2$ only when it is necessary,
which allows us to prove that split-charge pairs are pairwise
disjoint. Also, in this case we analyze the charges to both steps
together, thus it is not necessary to fix a distribution of the weight
to the two steps.
The remaining case is when $w_f < w_j$ and $j$ is not scheduled in
$t-1$ in $\textsf{ALG}$\xspace. We analyze these steps in maximal consecutive
intervals, called \textit{chains} and the corresponding charges are
\textit{chain charges}.
Inside each chain we distribute the charge of each packet $j$
scheduled at $t$ in $\textsf{OPT}$\xspace to steps $t-1$, $t$ and $t+1$, if these steps
are also in the chain. The distribution of weights shall depend on
a parameter $\delta$. Packets at the beginning and at the end of the
chain are charged in a way that minimizes the charge to steps outside
of the chain. In particular, the step before a chain receives no charge
from the chain.
\myparagraph{Notations and the charging scheme.}
A step $t$ for which $w_f < w_j$ and $j$ is pending in step $t$ in $\textsf{ALG}$\xspace
is called \textit{a chaining step}.
A maximal sequence of successive chaining steps is called a \textit{chain}.
The chains with a single step are called \textit{singleton chains},
the chains with at least two steps are called \textit{long chains}.
The pair of steps that receive a split charge from the same packet is
called a \textit{split-charge pair}. The charging scheme does not
specify the distribution of the weight to the two steps of the
split-charge pair, as the charges to them are analyzed together.
Packet $j$ scheduled in $\textsf{OPT}$\xspace at time $t$ is charged
according to the first rule below that applies. See
Figure~\ref{fig:nonchaining} for an illustration of the first four
(non-chaining) charges and Figure~\ref{fig:chaining} for an illustration
of the chaining charges.
\begin{figure}
\caption{Non-chaining charges. Note that for split charges $f$ is
scheduled in step $t+1$ in $\textsf{OPT}$.}
\label{fig:nonchaining}
\end{figure}
\begin{figure}
\caption{On the left, a chain of length 3 starting in step $t-1$ and
ending in step $t+1$. The \emph{chain beginning charges} and the other
chaining charges are indicated.}
\label{fig:chaining}
\end{figure}
\begin{enumerate}
\item \label{ch:back} If $j$ is scheduled in step $t-1$ in $\textsf{ALG}$\xspace (that is, $e=j$),
charge $w_j$ to step $t-1$. We call this charge a \textit{full back charge}.
\item \label{ch:up} If $w_f\geq w_j$ and $f$ is not scheduled in step $t+1$ in $\textsf{OPT}$\xspace (in particular, if $j=f$),
charge $w_j$ to step $t$. We call this charge a \textit{full up charge}.
\item \label{ch:split} If $w_f > w_j$ and at least one of the following holds:
\begin{compactitem}
\item $2\alpha{\thinnegspace\cdot\thinnegspace} w_j < w_f + w_g$,
\item $g$ does not get a full back charge and $2\alpha{\thinnegspace\cdot\thinnegspace} (w_{p_1} - w_g) < w_f + w_g$ where
$p_1$ is the first packet in the plan at time $t$,
\end{compactitem}
then charge $w_j$ to the pair of steps $t$ and $t+1$. We call this charge a \textit{close split charge}.
\item \label{ch:distant} If $w_f > w_j$, then charge $w_j$ to the pair
of steps $t$ and $t+2$. We call this charge a \textit{distant split charge}.
\item \label{ch:chain} Otherwise step $t$ is a chaining step, as $w_f
< w_j$ and $\textsf{ALG}$\xspace\ does not schedule $f$ in step $t-1$ by the previous
cases. We distinguish the following subcases.
\begin{enumerate}
\item \label{ch:chainSingle} If step $t$ is (the only step of) a
singleton chain, then charge $\min(w_j, R{\thinnegspace\cdot\thinnegspace} w_f)$ to step $t$ and
$w_j - R{\thinnegspace\cdot\thinnegspace} w_f$ to step $t+1$ if $w_j>R{\thinnegspace\cdot\thinnegspace} w_f$. We call these
charges an \textit{up charge from a singleton chain} and a
\textit{forward charge from a singleton chain}.
\item \label{ch:chainBeginning} If step $t$ is the first step of a
long chain, charge $2\delta{\thinnegspace\cdot\thinnegspace} w_j$ to step $t$, and
$(1-2\delta){\thinnegspace\cdot\thinnegspace} w_j$ to step $t+1$. We call these charges
\textit{chain beginning charges}.
\item \label{ch:chainEnd} If step $t$ is the last step of a long
chain, charge $\delta{\thinnegspace\cdot\thinnegspace} w_j$ to step $t-1$, $(R - 1 +
2\delta){\thinnegspace\cdot\thinnegspace} w_f$ to step $t$, and $(1-\delta){\thinnegspace\cdot\thinnegspace} w_j - (R - 1 +
2\delta){\thinnegspace\cdot\thinnegspace} w_f$ to step $t+1$. We call these charges
\textit{chain end charges}; the charge to step $t+1$ is called a
\textit{forward charge from a chain}. (Note that we always have
$(1-\delta){\thinnegspace\cdot\thinnegspace} w_j > (R - 1 + 2\delta){\thinnegspace\cdot\thinnegspace} w_f$, since $w_j >
w_f$ and $1-\delta = R - 1 + 2\delta$ which follows
from~(\ref{eq:2-R-3delta}).)
\item \label{ch:chainInside} Otherwise, i.e., step $t$ is inside a
long chain, charge $\delta{\thinnegspace\cdot\thinnegspace} w_j$ to step $t-1$, $\delta{\thinnegspace\cdot\thinnegspace}
w_j$ to step $t$, and $(1-2\delta){\thinnegspace\cdot\thinnegspace} w_j$ to step $t+1$. We call
these charges \textit{chain link charges}.
\end{enumerate}
\end{enumerate}
To estimate the competitive ratio we need to show that each step
or a pair of steps does not receive too much charge.
We start with a useful observation about plans of Algorithm~{\sc CompareWithBias}$_\alpha$,
that will be used multiple times in our proofs.
\begin{lemma}\label{l:scheduledTightPacketNotTooLight}
Consider a time $t$, where the algorithm has two pending packets $a$, $b$ and a
lookahead packet $c$ with the following properties:
$d_a =t$, $(r_b,d_b) = (t, t+1)$, $(r_c,d_c) = (t+1, t+2)$,
and $w_a < \min (w_b, w_c)$.
If the algorithm schedules packet $a$ in step $t$ then the plan
at time $t$ is $a,b,c$, and $2\alpha{\thinnegspace\cdot\thinnegspace} w_a\ge w_b+w_c$.
\end{lemma}
\begin{proof}
We claim that there is no pending or lookahead packet $q
\notin\braced{b,c}$ heavier than $a$. Suppose for a contradiction that
such a $q$ exists. Then a schedule containing packets $q,b,c$ in some
order is feasible and has larger profit than $a,b,c$. This implies
that the plan does not contain $a$ and thus $a$ cannot be scheduled,
contradicting the assumption of the lemma.
The schedule $a,b,c$ is feasible and the claim above implies that it
is optimal, thus it is the plan. It remains to show that $2\alpha{\thinnegspace\cdot\thinnegspace}
w_a \geq w_b+w_c$, which follows easily by a contradiction: Otherwise
$2\alpha{\thinnegspace\cdot\thinnegspace} w_a < w_b+w_c$ and {\sc CompareWithBias}$_\alpha$\ would schedule $b$,
contradicting the assumption of the lemma.
\end{proof}
Next, we will provide an analysis of full, split and chain charges,
starting with full and split charges. We prove several lemmas from which the analysis follows.
We fix some time slot $t$, and use the notation from Figure~\ref{fig:packetdef} for
packets at time slots $t-1$, $t$, $t+1$ and $t+2$ in the schedule of the algorithm and
the optimal schedule.
\myparagraph{Analysis of full charges.} Using
Rules~\ref{ch:back} and~\ref{ch:up}, if step $t$
receives a full back charge, then the condition of Rule~\ref{ch:up} guarantees
that it will not receive a full up charge. This gives us the following
observation.
\begin{lemma}\label{l:oneFullCharge}
Step $t$ receives at most one full charge, i.e., a charge by Rule \ref{ch:back}
or \ref{ch:up}.
\end{lemma}
\myparagraph{Analysis of split charges.}
We now analyze close and distant split charges. The crucial property of split charges
is that, similar to full charges, each step receives at most one split charge.
Before we prove this, we establish several useful properties of split charges.
\begin{lemma}\label{obs:splitCharges}
Let the plan at time $t$ be $p_1,p_2,p_3$.
If $j$ is charged using a close or a distant split charge, then the following holds:
\begin{compactenum}[(a)]
\item $j$ is not scheduled by the algorithm in step $t-1$, i.e., $j$ is pending for the algorithm in step $t$.
\item $d_f = t+1$ and $f$ is scheduled in step $t+1$ in $\textsf{OPT}$\xspace (that is, $k=f$).
In particular, step $t$ receives a full back charge.
\item $d_j = t$ and $w_j\le w_{p_1}$.
\item $p_2 = f$.
\end{compactenum}
\end{lemma}
\begin{proof}
By Rule~\ref{ch:back}, packet $j$ would be charged using a full back
charge if it were scheduled in step $t-1$, implying (a). The case
conditions for split charges in the charging scheme imply that $\textsf{OPT}$\xspace
schedules $f$ in step $t+1$ and $w_f > w_j$. Now (b) follows from the
fact that we do not charge $j$ using a full up charge.
To show (c), note that if $j$ is not expiring, then $j$ and $f$ would
have equal deadlines. As we also have $w_f > w_j$, $f$ would be
scheduled before $j$ in $\textsf{OPT}$\xspace by the canonical ordering, a
contradiction. The inequality $w_j\le w_{p_1}$ now follows from the
definition of the plan.
It remains to prove (d). Towards contradiction, suppose that $f=p_1$.
We know that $j$ is expiring and thus it is not in the plan.
If $d_{p_2} = t+1$ then the optimality of the plan implies
$w_{p_2} > w_j$ (otherwise $j, f, p_3$ would be a better plan),
so, since $p_2$ is not in {$\textsf{OPT}$\xspace},
we could improve {$\textsf{OPT}$\xspace} by scheduling $f$ in step $t$ and $p_2$ in step $t+1$.
Next, assume that $d_{p_2} = t+2$.
The optimality of the plan implies that
$w_{p_2} > w_j$ and $w_{p_3} > w_j$.
Since both $p_2$, $p_3$ have deadline $t+2$,
at least one of them is not scheduled in $\textsf{OPT}$\xspace.
So $\textsf{OPT}$\xspace could be improved by scheduling $f$ in step $t$ and one of $p_2$ or $p_3$ in step $t+1$.
In both cases we get a contradiction with the optimality of $\textsf{OPT}$\xspace.
\end{proof}
We show a useful lemma about a distant split charge from which we derive an upper bound
on $w_j$, similar to the upper bound in the definition of the close split charge.
\begin{lemma}\label{l:distantSplitCh:g<p3}
If $j$ is charged using a distant split charge, then
$w_g < w_{p_3}$ where $p_3$ is the third packet in the plan at time $t$, and $d_g = t+1$.
\end{lemma}
\begin{proof}
Suppose that $w_g \ge w_{p_3}$.
Then, from Lemma~\ref{obs:splitCharges}(c),(d) and the choice of $p_2 = f$ in the algorithm, we have that
$2\alpha w_j \le 2\alpha w_{p_1}
< w_{p_2} + w_{p_3}
\le w_f + w_g$,
so we would use the close split charge in step $t$, not the distant one.
Thus $w_g < w_{p_3}$, as claimed.
To prove the second part, if we had $d_g = t+2$ then, since the
algorithm chose $g$ in step $t+1$ and also $d_{p_3} = t+2$,
we would also have that $w_g\ge w_{p_3}$ -- a contradiction.
\end{proof}
\begin{lemma}\label{l:distributeCharge}
If $j$ is charged using a distant split charge then $2\alpha{\thinnegspace\cdot\thinnegspace} w_j < w_f + w_h$.
(Recall that $h$ is the packet scheduled in step $t+2$ in $\textsf{ALG}$\xspace.)
\end{lemma}
\begin{proof}
Let $p_1, p_2, p_3$ be the plan in step $t$.
By Lemma~\ref{obs:splitCharges}(d) we have that $f=p_2$.
Thus $2\alpha{\thinnegspace\cdot\thinnegspace} w_{p_1} < w_{p_2} + w_{p_3}$ by the definition of the algorithm.
By Lemma~\ref{obs:splitCharges}(c), $j$ is expiring and $w_j \le w_{p_1}$.
As $g\neq p_3$ by Lemma~\ref{l:distantSplitCh:g<p3},
the algorithm has $p_3$ pending in step $t+2$ where it is expiring, implying that
$w_{p_3}\le w_h$.
Putting it all together, we get
$2\alpha w_j \le 2\alpha w_{p_1} < w_{p_2} + w_{p_3} \le w_f + w_h$.
\end{proof}
For a split charge from $j$ in step $t$, let $t'$ be the other step
that receives the split charge from $j$; that is, $t'=t+1$ for a close split
charge and $t'=t+2$ for a distant split charge.
We now show that split-charge pairs are pairwise disjoint.
\begin{lemma}\label{l:noTwoDistribCharges}
If $j$ is charged using a split charge to a pair of steps $t$ and $t'$,
then neither of $t$ and $t'$ is involved in another pair that receives
a split charge from a packet $j'\neq j$.
\end{lemma}
\begin{proof}
No matter which split charge we use for $j$,
using Lemma~\ref{obs:splitCharges}(b), step $t+1$ does not receive a
split charge from $k=f$. By a similar argument, since $j$ is not scheduled
in step $t-1$ in {$\textsf{ALG}$\xspace}, step $t$ does not receive
a close split charge from the packet scheduled in step $t-1$ in {$\textsf{OPT}$\xspace}.
It remains to prove that if $j$ is charged using a distant split charge,
then the packet $\ell$ scheduled in step $t+2$ in $\textsf{OPT}$\xspace is not charged using
a split charge. (This also ensures that step $t$ does not receive a
distant split charge
from a packet scheduled in step $t-2$ in $\textsf{OPT}$\xspace.)
For a contradiction, suppose that packet $\ell$ is charged using a split charge.
Let $p_1, p_2, p_3$ be the plan in step $t$.
Recall that $g$ and $h$ are the packets scheduled in steps $t+1$ and $t+2$ in $\textsf{ALG}$\xspace.
From Lemma~\ref{l:distantSplitCh:g<p3}, step $t+1$ does not receive a full back charge. Since we did not apply
the close split charge for $j$ in Rule~\ref{ch:split}, we must have
\begin{equation}\label{eq:no2splitChEq}
2\alpha (w_{p_1} - w_g) \ge w_f+w_g \ge w_f.
\end{equation}
By Lemma~\ref{obs:splitCharges}(b) applied to step $t+2$, we get $d_h = t+3$.
Since $d_{p_3} = t+2$, we get $w_{p_3} < w_h$.
We now use Lemma~\ref{l:scheduledTightPacketNotTooLight} for step $t+1$ with
$a=g, b=p_3$, and $c=h$. We note that all the assumptions of the lemma are
satisfied: we have $d_g = t+1$, $(r_{p_3},d_{p_3}) = (t+1,t+2)$,
$(r_h,d_h) = (t+2,t+3)$, and $w_g < w_{p_3} < w_h$.
This gives us that
$2\alpha w_g \ge w_{p_3}+w_h > w_{p_3}$.
Since the algorithm schedules $f=p_2$ in step $t$, we have
$2\alpha w_{p_1} < w_f+w_{p_3}$.
Subtracting the inequality derived in the previous paragraph, we
get $2\alpha (w_{p_1}-w_g) < (w_f + w_{p_3}) - w_{p_3} = w_f$
-- a contradiction with (\ref{eq:no2splitChEq}).
This completes the proof.
\end{proof}
The lemmas above allow us to estimate the total of full and split charges.
\begin{lemma}\label{l:splitChargeUB}
If $j$ is charged using a split charge to a pair of steps $t$ and $t'$,
then the total of full and split charges
to steps $t$ and $t'$ does not exceed $R{\thinnegspace\cdot\thinnegspace} (w_f + w_{f'})$
where $f'$ is the packet scheduled in step $t'$ in $\textsf{ALG}$\xspace.
\end{lemma}
\begin{proof}
Each of the steps $t$ and $t'$ may receive a full charge, but by
Lemma~\ref{l:oneFullCharge} and the charging rules each of them receives at most one
full charge, and this charge comes from a packet whose weight is at most the weight
of the packet that $\textsf{ALG}$\xspace schedules in that step.
If we use a distant split charge or if step $t'$ gets a full back charge, then
$2\alpha{\thinnegspace\cdot\thinnegspace} w_j < w_f + w_{f'}$ by Lemma~\ref{l:distributeCharge} or Rule~\ref{ch:split}.
Thus the total of full and split charges to steps $t$ and $t'$
is upper bounded by
\begin{equation*}
w_f + w_{f'} + w_j < w_f + w_{f'} + \frac{w_f + w_{f'}}{2\alpha}
= \left(1 + \frac{1}{2\alpha}\right){\thinnegspace\cdot\thinnegspace} (w_f + w_{f'})= R{\thinnegspace\cdot\thinnegspace} (w_f + w_{f'})
\end{equation*}
where we used~(\ref{eq:splitCharges}) in the last step.
Otherwise, i.e., if we use a close split charge and step $t'=t+1$ does
not get a full back charge,
then we have $2\alpha{\thinnegspace\cdot\thinnegspace} (w_{p_1} - w_{f'}) < w_f + w_{f'}$ by Rule~\ref{ch:split}.
Since $d_j = t$ by Lemma~\ref{obs:splitCharges}(c), we have $w_j\le w_{p_1}$
and $2\alpha{\thinnegspace\cdot\thinnegspace} (w_j - w_{f'}) < w_f + w_{f'}$.
Also, step $t+1$ does not receive a full up charge by Lemma~\ref{obs:splitCharges}(b).
We thus bound the total of full and split charges to steps $t$ and $t+1$ by
\begin{equation*}
w_f + w_j < w_f + \frac{w_f + (2\alpha + 1){\thinnegspace\cdot\thinnegspace} w_{f'}}{2\alpha}
= \left(1 + \frac{1}{2\alpha}\right){\thinnegspace\cdot\thinnegspace} (w_f + w_{f'}) = R{\thinnegspace\cdot\thinnegspace} (w_f + w_{f'})
\end{equation*}
using~(\ref{eq:splitCharges}) in the last step again.
\end{proof}
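The last two displayed bounds rest on the same elementary rearrangement. The
following small sketch (ours, not part of the proof; it assumes a Python
environment with sympy) checks both identities after substituting
$R = 1 + \frac{1}{2\alpha}$, the substitution made in the final steps
via~(\ref{eq:splitCharges}).
\begin{verbatim}
# Quick check (ours) of the two rearrangements used in the displayed bounds
# above, with R replaced by 1 + 1/(2*alpha) as in the final steps.
import sympy as sp

alpha, wf, wg = sp.symbols('alpha w_f w_fprime', positive=True)
R = 1 + 1 / (2 * alpha)
bound1 = wf + wg + (wf + wg) / (2 * alpha)               # first displayed bound
bound2 = wf + (wf + (2 * alpha + 1) * wg) / (2 * alpha)  # second displayed bound
assert sp.simplify(bound1 - R * (wf + wg)) == 0
assert sp.simplify(bound2 - R * (wf + wg)) == 0
\end{verbatim}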
\myparagraph{Analysis of chain charges.}
We now analyze chaining steps, starting with the lemma below, which collects several useful observations.
In particular, Part~(c) motivates the name ``chaining'' for such steps.
\begin{lemma}\label{obs:chainSteps}
If step $t$ is a chaining step, then the following holds:
\begin{compactenum}[(a)]
\item $d_j = t+1$,
\item $d_f = t$.
\end{compactenum}
Moreover, if step $t+1$ is also a chaining step, then
\begin{compactenum}
\item[(c)] $j$ is scheduled by the algorithm in step $t+1$, i.e., $g=j$,
\item[(d)] $2\alpha{\thinnegspace\cdot\thinnegspace} w_f \ge w_j+w_k$ (recall that $k$ is the packet
scheduled in step $t+1$ in $\textsf{OPT}$\xspace).
\end{compactenum}
\end{lemma}
\begin{proof}
Recall that Algorithm~{\sc CompareWithBias}\xspacealpha never schedules a packet lighter than
the heaviest expiring packet. As in step $t$ it schedules $f$ with
$w_f < w_j$ (by Rule~\ref{ch:chain} for chain charges), (a) follows.
Furthermore, it follows that $f$ is expiring in step $t$,
because otherwise the algorithm would schedule $j$, since both would have the same deadline and $j$ is heavier.
Thus (b) holds as well.
Now assume that step $t+1$ is also in the chain and
for a contradiction suppose that $g\neq j$.
Since $j$ is expiring and pending for the algorithm in step $t+1$,
we have $w_g > w_j$ and $w_k > w_g$ as step $t+1$ is in the chain.
Summarizing, the algorithm sees all packets $f,j,g,k$ in step $t$ (some are pending and some may be lookahead packets),
and they are all distinct packets with $w_f < w_j < w_g < w_k$, $d_f = t$, $(r_j,d_j) = (t,t+1)$, and both
$g$ and $k$ can be feasibly scheduled at time $t+1$. Thus, independently of the release times and deadlines of
$g$ and $k$, the plan at time $t$ containing $f$ would not be optimal -- a contradiction.
This proves that (c) holds.
Finally, we show (d).
Since $f$ is expiring in step $t$ by (b) and both $j$ and $k$ are considered for the plan at time $t$
and satisfy $(r_j, d_j) = (t, t+1)$, $(r_k, d_k) = (t+1, t+2)$, $w_f < w_j < w_k$,
we use Lemma~\ref{l:scheduledTightPacketNotTooLight} with $a=f, b=j,$ and $c=k$
and get the inequality in (d).
\end{proof}
First we show that a chaining step does not receive charges of other types.
\begin{lemma}\label{l:noFullorSplitChargeInAChain}
If step $t$ is a chaining step, then $t$ does not receive a full charge or a split charge.
\end{lemma}
\begin{proof}
By Lemma~\ref{obs:chainSteps}, $f$ is expiring, thus step $t$ does not receive a full back charge.
As $w_j>w_f$, the step also does not get a full up charge or a split charge from step $t$.
So it remains to show that step $t$ does not receive a split charge from another step.
First observe that step $t$ cannot receive a close split charge from step $t-1$
in $\textsf{OPT}$\xspace, because $j$ is pending in step $t$ in $\textsf{ALG}$\xspace, while
Lemma~\ref{obs:splitCharges}(b) states that a split charge
from step $t-1$ would require $j$ to be scheduled at time $t-1$ in $\textsf{ALG}$\xspace.
Finally, we show that step $t$ does not receive a distant split charge.
For a contradiction, suppose that step $t$ receives a distant split charge from the packet $x$
scheduled in step $t-2$ in $\textsf{OPT}$\xspace. Let $p_1, p_2, p_3$ be the plan in step $t-2$.
According to Lemma~\ref{obs:splitCharges}(d) and (b), $p_2$ is scheduled
in step $t-2$ in $\textsf{ALG}$\xspace and in step $t-1$ in $\textsf{OPT}$\xspace.
Moreover, by Lemma~\ref{obs:splitCharges}(c),
$x$ is pending and expiring in step $t-2$ and $w_{p_1}\ge w_x$.
As the algorithm scheduled $p_2$ in step $t-2$ we get $r_{p_2} = t-2$ and $w_{p_1} < w_{p_3}$.
Observe that $p_3$ is not scheduled in $\textsf{OPT}$\xspace, since it is expiring in step $t$
and $j$ is not expiring, by Lemma~\ref{obs:chainSteps}(a). Thus we could increase the weight of $\textsf{OPT}$\xspace
if we scheduled $p_2$ in step $t-2$ instead of $x$ and $p_3$ in step $t-1$.
This contradicts the optimality of $\textsf{OPT}$\xspace.
\end{proof}
We now analyze how much charge each chaining step gets.
\begin{lemma}\label{l:chargingToChain}
If step $t$ is a chaining step,
then it receives a charge of at most $R{\thinnegspace\cdot\thinnegspace} w_f$.
\end{lemma}
\begin{proof}
By Lemma~\ref{l:noFullorSplitChargeInAChain}, step $t$ does not receive any full or
split charges; therefore we just need to prove that the total of chain charges
to step $t$ does not exceed $R{\thinnegspace\cdot\thinnegspace} w_f$.
\noindent
\mycase{1} $t$ is the last step of a chain.
If $t$ is the only step in the chain then Rule~\ref{ch:chainSingle}
implies directly that the charge to $t$ is at most $R{\thinnegspace\cdot\thinnegspace} w_f$.
Otherwise, Lemma~\ref{obs:chainSteps}(c) implies that $f$ is scheduled in step $t-1$ in $\textsf{OPT}$\xspace, and thus
the charge from step $t-1$ is $(1-2\delta){\thinnegspace\cdot\thinnegspace} w_f$. The charge from step $t$
is at most $(R-1+2\delta){\thinnegspace\cdot\thinnegspace} w_f$ by Rule~\ref{ch:chainEnd}.
So the total charge is at most $R{\thinnegspace\cdot\thinnegspace} w_f$.
\noindent
\mycase{2} $t$ is not the last step of a chain.
Since step $t+1$ is also in the chain,
by Lemma~\ref{obs:chainSteps}(c) we have that $j$ is scheduled in step $t+1$ in $\textsf{ALG}$\xspace
and $\textsf{OPT}$\xspace has a packet $k$ with $w_k>w_j$ in step $t+1$.
From Lemma~\ref{obs:chainSteps}(d) we know that $2\alpha{\thinnegspace\cdot\thinnegspace} w_f \ge w_j+w_k$.
There are two sub-cases. If $t$ is the first step of the chain, then the charge to $t$
is at most
\begin{equation*}
2\delta{\thinnegspace\cdot\thinnegspace} w_j + \delta{\thinnegspace\cdot\thinnegspace} w_k \le \threehalves \delta{\thinnegspace\cdot\thinnegspace} (w_j+w_k)
\le 3\alpha\delta{\thinnegspace\cdot\thinnegspace} w_f
< R{\thinnegspace\cdot\thinnegspace} w_f,
\end{equation*}
where the last inequality follows from~(\ref{ineq:chainBegCh}).
Otherwise, using Lemma~\ref{obs:chainSteps}(c), $f$ is scheduled in step $t-1$ in $\textsf{OPT}$\xspace,
so the total charge to step $t$ is at most
\begin{equation*}
(1-2\delta){\thinnegspace\cdot\thinnegspace} w_f + \delta{\thinnegspace\cdot\thinnegspace} w_j + \delta{\thinnegspace\cdot\thinnegspace} w_k \le
(1-2\delta){\thinnegspace\cdot\thinnegspace} w_f + 2\alpha\delta{\thinnegspace\cdot\thinnegspace} w_f = R{\thinnegspace\cdot\thinnegspace} w_f
\end{equation*}
where the last equality follows from~(\ref{eq:chainCharges}).
\end{proof}
\myparagraph{Analysis of forward charges from chains.}
We now show that a forward charge from a chain does not cause
an overload on the step just after the chain, which may also receive
both a full charge and a split charge.
(This is the only case when a step receives charges of all three types.)
For the following lemmas we assume that step $t-1$ is a chaining
step. Recall that $i$ is
the packet scheduled in step $t-1$ in $\textsf{OPT}$\xspace and $e$ is the packet
scheduled in step $t-1$ in $\textsf{ALG}$\xspace. First, we prove some useful observations.
\begin{lemma}\label{l:fwdChargeFromChain}
If step $t$ receives a forward charge from a chain, then the following holds
\begin{compactenum}[(a)]
\item $j\neq e$ (that is, $j$ is not charged using a full back charge),
\item $w_f\geq w_j$,
\item $w_f \geq w_i$.
\end{compactenum}
Moreover, if step $t$ is not in a split-charge pair:
\begin{compactenum}
\item[(d)] $j$ is charged using a full up charge to step $t$,
\item[(e)] step $t$ does not receive a back charge.
\end{compactenum}
\end{lemma}
\begin{proof}
Part~(a) holds because $e$ is expiring in step $t-1$, by Lemma~\ref{obs:chainSteps}(b).
Part~(b) follows from (a) and the fact that step $t$ is not chaining.
To show~(c), Lemma~\ref{obs:chainSteps}(a) implies that
$d_i = t$. Also, since $i\neq e$, $i$ is pending in $\textsf{ALG}$\xspace in step $t$.
Now (c) follows, because each packet scheduled by the algorithm is at least
as heavy as the heaviest expiring packet.
Part~(d) follows from (a) and (b) and the assumption that $t$ is not in a split-charge pair.
Part~(e) follows from (d) and Lemma~\ref{l:oneFullCharge}.
\end{proof}
Note that $f$ may be the same packet as $i$ or $j$. We start with the case in which $t$ is not
in a split-charge pair.
\begin{lemma}\label{l:fwdChFromChain-NoSplitCh}
If step $t$ receives a forward charge from a chain $C$
and $t$ is not in a split-charge pair,
then the total charge to step $t$ is at most $R{\thinnegspace\cdot\thinnegspace} w_f$.
\end{lemma}
\begin{proof}
The proof is by case analysis, depending on the relative weights of $j$ and $e$, and
on whether $C$ is a singleton or a long chain.
In all cases we use Lemma~\ref{l:fwdChargeFromChain} and the charging rules
to show upper bounds on the total charge.
\noindent
\mycase{1} $w_j < w_e$.
\noindent
\mycase{1.1} The chain $C$ is long.
The charge to step $t$ is then at most
\begin{align}
w_j + (1-\delta){\thinnegspace\cdot\thinnegspace} w_i - (R-1+2\delta){\thinnegspace\cdot\thinnegspace} w_e
&< w_j + (1-\delta){\thinnegspace\cdot\thinnegspace} w_i - (R-1+2\delta){\thinnegspace\cdot\thinnegspace} w_j
\nonumber
\\
&= (2-R-2\delta){\thinnegspace\cdot\thinnegspace} w_j + (1-\delta){\thinnegspace\cdot\thinnegspace} w_i
\nonumber
\\
&\le (2-R-2\delta){\thinnegspace\cdot\thinnegspace} w_f + (1-\delta){\thinnegspace\cdot\thinnegspace} w_f
\label{eqn: fwdChFromChain-NoSplitCh 1.1}
\\
&= (3-R-3\delta){\thinnegspace\cdot\thinnegspace} w_f = w_f\,.
\label{eqn: fwdChFromChain-NoSplitCh 1.2}
\end{align}
To justify inequality~(\ref{eqn: fwdChFromChain-NoSplitCh 1.1}), note that
$2-R-2\delta\ge 0$ by~(\ref{ineq:2-R-2delta}) and
$1-\delta\ge 0$ by the choice of $\delta$, so
we can apply inequalities $w_j\le w_f$ and
$w_i\le w_f$ from Lemma~\ref{l:fwdChargeFromChain}(b) and (c).
The last step~(\ref{eqn: fwdChFromChain-NoSplitCh 1.2}) follows from equation~(\ref{eq:2-R-3delta}).
\noindent
\mycase{1.2} The chain $C$ is singleton.
We assume that $w_i > R{\thinnegspace\cdot\thinnegspace} w_e$,
otherwise there is no forward charge from the chain.
Then the charge to step $t$ is
\begin{equation*}
w_j + w_i - R{\thinnegspace\cdot\thinnegspace} w_e
\le w_j + w_i - R{\thinnegspace\cdot\thinnegspace} w_j
\le w_i
\le w_f\,,
\end{equation*}
where in the last step we used Lemma~\ref{l:fwdChargeFromChain}(c).
\noindent
\mycase{2}
$w_j > w_e$.
We claim first that $j$ is not expiring in step $t$, that is $d_j = t+1$.
Indeed, if we had $d_j = t$, then in step $t-1$ the algorithm would have pending
packets $e$ and $i$, plus packet $j$ (pending or lookahead), that need to be
scheduled in slots $t-1$ and $t$. Since $w_e < w_i$ (because step
$t-1$ is a chaining step) and $w_e < w_j$ (by the case assumption),
packet $e$ could not be in the plan in step $t-1$ which is a contradiction.
Thus $d_j = t+1$.
Recall that $e$ is expiring in step $t-1$ by Lemma~\ref{obs:chainSteps}(b)
and both $i$ and $j$ are considered for the plan in step $t-1$.
Moreover, we know that $w_i > w_e$, $w_j > w_e$, $(r_i, d_i) = (t-1, t)$ (by Lemma~\ref{obs:chainSteps}(a)),
and $(r_j, d_j) = (t, t+1)$.
We thus use Lemma~\ref{l:scheduledTightPacketNotTooLight} for step $t-1$
with $a=e, b=i,$ and $c=j$, to get that $2\alpha{\thinnegspace\cdot\thinnegspace} w_e \ge w_i+w_j$.
\noindent
\mycase{2.1} The chain $C$ is long.
The charge to step $t$ is
\begin{align}
w_j + (1-\delta){\thinnegspace\cdot\thinnegspace} w_i &- (R-1+2\delta){\thinnegspace\cdot\thinnegspace} w_e
\nonumber
\\
&\le w_j + (1-\delta){\thinnegspace\cdot\thinnegspace} w_i - (R-1+2\delta){\thinnegspace\cdot\thinnegspace} \frac{w_i + w_j}{2\alpha}
\nonumber
\\
&= \left(1-\frac{R-1+2\delta}{2\alpha}\right) {\thinnegspace\cdot\thinnegspace} w_j + \left(1-\delta-\frac{R-1+2\delta}{2\alpha}\right) {\thinnegspace\cdot\thinnegspace} w_i
\nonumber
\\
&\le \left(1-\frac{R-1+2\delta}{2\alpha}\right) {\thinnegspace\cdot\thinnegspace} w_f + \left(1-\delta-\frac{R-1+2\delta}{2\alpha}\right) {\thinnegspace\cdot\thinnegspace} w_f
\label{eqn: fwdChFromChain-NoSplitCh 2.2}
\\
&= \left(2-\delta-\frac{R-1+2\delta}{\alpha}\right) {\thinnegspace\cdot\thinnegspace} w_f = R{\thinnegspace\cdot\thinnegspace} w_f.
\label{eqn: fwdChFromChain-NoSplitCh 2.4}
\end{align}
To justify inequality~(\ref{eqn: fwdChFromChain-NoSplitCh 2.2}), we note that
$1-\delta-(R-1+2\delta)/(2\alpha)\ge 0$, by~(\ref{ineq:1-d-frac}), so we can
again apply inequalities $w_j\le w_f$ and $w_i\le w_f$ from
Lemma~\ref{l:fwdChargeFromChain}(b) and (c).
In the last step~(\ref{eqn: fwdChFromChain-NoSplitCh 2.4})
we used equation~(\ref{eq:forwardCh}).
\noindent
\mycase{2.2} The chain $C$ is singleton.
We assume that $w_i > R{\thinnegspace\cdot\thinnegspace} w_e$,
otherwise there is no forward charge from the chain. Then the charge to step $t$ is
\begin{align}
w_j + w_i - R{\thinnegspace\cdot\thinnegspace} w_e &\le
w_j + w_i - R{\thinnegspace\cdot\thinnegspace} \frac{w_i + w_j}{2\alpha}
\nonumber
\\
&=\left(1-\frac{R}{2\alpha}\right){\thinnegspace\cdot\thinnegspace} (w_i+w_j)
\nonumber
\\
&\le \left(1-\frac{R}{2\alpha}\right){\thinnegspace\cdot\thinnegspace} (2w_f)
\label{eqn: fwdChFromChain-NoSplitCh 3.3}
\\
&= \left(2-\frac{R}{\alpha}\right){\thinnegspace\cdot\thinnegspace} w_f < R{\thinnegspace\cdot\thinnegspace} w_f
\label{eqn: fwdChFromChain-NoSplitCh 3.4}
\end{align}
Inequality~(\ref{eqn: fwdChFromChain-NoSplitCh 3.3}) is valid, because
$w_i\le w_f$ and $w_j\le w_f$, by Lemma~\ref{l:fwdChargeFromChain},
and $1-R/(2\alpha)\ge 0$ by~(\ref{ineq:1-R/2alpha}).
In step~(\ref{eqn: fwdChFromChain-NoSplitCh 3.4})
we used~(\ref{ineq:fwdChFromSingleton}).
\end{proof}
We now analyze how the forward charge from a chain combines with split charges.
First we observe that only the first step from a split-charge pair
may receive a forward charge from a chain.
\begin{lemma}\label{l:distributeChargeAndFwdChTo2ndStep}
If $j$ is charged using a split charge to a pair of steps $t$ and $t'$ (where $t'$ is $t+1$ or $t+2$),
then $t'$ does not receive a forward charge from a chain.
\end{lemma}
\begin{proof}
By Lemma~\ref{obs:splitCharges}(b) we have $k=f$,
which implies that steps $t$ and $t+1$ are not chaining steps.
\end{proof}
\begin{lemma}\label{l:fwdAndDistributeCharge}
If $j$ is charged using a split charge to a pair of steps $t$ and $t'$,
$f'$ is the packet scheduled in $t'$ in $\textsf{ALG}$\xspace, and
step $t$ receives a forward charge from a chain $C$,
then the total charge to steps $t$ and $t'$ is
at most $R{\thinnegspace\cdot\thinnegspace} (w_f + w_{f'})$.
\end{lemma}
\begin{proof}
First we note that $j$ is expiring in step $t$ by Lemma~\ref{obs:splitCharges}(c)
and $f$ is not expiring in step $t$ by Lemma~\ref{obs:splitCharges}(b), so $f\neq i$.
We claim $w_j < w_e$. Indeed, if $w_j > w_e$, then in step $t-1$ the algorithm would have pending
packets $e$ and $i$, plus packet $j$ (pending or lookahead), that need to be
scheduled in slots $t-1$ and $t$. Since $w_e < w_i$ (because step
$t-1$ is a chaining step) and $w_e < w_j$,
packet $e$ could not be in the plan in step $t-1$ which is a contradiction.
Therefore $w_j < w_e$.
Let $p_1, p_2, p_3$ be the plan at time $t$.
We split the proof into two cases,
both having two subcases, one for long chains and one for singleton chains.
\noindent
\mycase{1}
$j$ is charged using a distant split charge or $f'$ gets a full back charge.
We claim that $2\alpha{\thinnegspace\cdot\thinnegspace} w_i < w_f + w_{f'}$. Indeed, since $i$ is expiring and pending in step $t$
by Lemma~\ref{obs:chainSteps}(a), we have $w_i\le w_{p_1}$.
As the algorithm scheduled $f=p_2$ by Lemma~\ref{obs:splitCharges}(d)
we get $2\alpha{\thinnegspace\cdot\thinnegspace} w_{p_1} < w_f + w_{p_3}$. To prove the claim it remains to show $w_{f'}\ge w_{p_3}$.
If $j$ is charged using a distant split charge, then by Lemma~\ref{l:distantSplitCh:g<p3}
we have $w_g < w_{p_3}$ and in particular, $g\neq p_3$. Thus $p_3$ is pending and expiring in step $t+2$,
hence $w_{p_3}\le w_{f'}$.
Otherwise, if $j$ is charged using a close split charge, then $f'=g$ gets a full back charge.
Hence $d_g = t+2$. Since also $d_{p_3}=t+2$ and the algorithm chooses the heaviest such packet,
we have $w_{p_3}\le w_{g}$.
The claim follows, since
\begin{equation}\label{eq:fwdCh+SplitCh1}
2\alpha{\thinnegspace\cdot\thinnegspace} w_i \le 2\alpha{\thinnegspace\cdot\thinnegspace} w_{p_1} < w_f + w_{p_3} \le w_f + w_{f'}\,.
\end{equation}
\noindent
\mycase{1.1} The chain $C$ is long.
We upper bound the total charge to steps $t$ and $t'$ by
\begin{align}
w_f + w_{f'} + w_j + (1-\delta){\thinnegspace\cdot\thinnegspace} w_i &- (R-1+2\delta){\thinnegspace\cdot\thinnegspace} w_e
\nonumber
\\
&\le w_f + w_{f'} + (2-R-2\delta){\thinnegspace\cdot\thinnegspace} w_e + (1-\delta){\thinnegspace\cdot\thinnegspace} w_i
\nonumber
\\
&< w_f + w_{f'} + (2-R-3\delta){\thinnegspace\cdot\thinnegspace} w_i + w_i
\label{eqn: fwdChFromChain-split 1.1.3}
\\
&= w_f + w_{f'} + w_i
\label{eqn: fwdChFromChain-split 1.1.4}
\\
&<w_f + w_{f'} + \frac{w_f + w_{f'}}{2\alpha}
\label{eqn: fwdChFromChain-split 1.1.5}
\\
& = R{\thinnegspace\cdot\thinnegspace} (w_f + w_{f'})
\nonumber
\end{align}
We can use $w_e<w_i$ in~(\ref{eqn: fwdChFromChain-split 1.1.3}), because $2-R-2\delta\ge 0$ by~(\ref{ineq:2-R-2delta}).
Equality~(\ref{eqn: fwdChFromChain-split 1.1.4}) follows from $2-R-3\delta = 0$ by~(\ref{eq:2-R-3delta})
and inequality~(\ref{eqn: fwdChFromChain-split 1.1.5}) from~(\ref{eq:fwdCh+SplitCh1}).
In the last step we use~(\ref{eq:splitCharges}).
\noindent
\mycase{1.2} The chain $C$ is singleton.
We suppose that $w_i > R{\thinnegspace\cdot\thinnegspace} w_e$,
otherwise there is no forward charge from the chain.
We upper bound the total charge to steps $t$ and $t'$ by
\begin{align}
w_f + w_{f'} + w_j + w_i - R{\thinnegspace\cdot\thinnegspace} w_e
&\le w_f + w_{f'} + (1-R){\thinnegspace\cdot\thinnegspace} w_e + w_i
\nonumber
\\
&< w_f + w_{f'} + w_i
\nonumber
\\
&< w_f + w_{f'} + \frac{w_f + w_{f'}}{2\alpha}
\label{eqn: fwdChFromChain-split 1.2.3}
\\
&= R {\thinnegspace\cdot\thinnegspace} (w_f + w_{f'})\,,
\nonumber
\end{align}
where we apply Equation~\ref{eq:fwdCh+SplitCh1} in~(\ref{eqn: fwdChFromChain-split 1.2.3}), and we use~(\ref{eq:splitCharges}) in the last step.
\noindent
\mycase{2}
$j$ is charged using a close split charge and $f' = g$ does not get a full back charge.
We have $2\alpha{\thinnegspace\cdot\thinnegspace} (w_{p_1}-w_g) < w_f + w_g$ by the definition of the close split charge.
Since $i$ is expiring and pending in step $t$
by Lemma~\ref{obs:chainSteps}(a), we have $w_i\le w_{p_1}$.
Hence $2\alpha{\thinnegspace\cdot\thinnegspace} (w_i-w_g) < w_f + w_g$.
This is equivalent to
\begin{equation}\label{eq:fwdCh+SplitCh2}
w_i < \frac{w_f + (2\alpha + 1){\thinnegspace\cdot\thinnegspace} w_g}{2\alpha}\,.
\end{equation}
\noindent
\mycase{2.1} The chain $C$ is long.
We again suppose that $w_i > R{\thinnegspace\cdot\thinnegspace} w_e$,
as otherwise there is no forward charge from the chain.
The total charge to steps $t$ and $t'=t+1$ is
\begin{align}
w_f + w_j + (1-\delta){\thinnegspace\cdot\thinnegspace} w_i &- (R-1+2\delta){\thinnegspace\cdot\thinnegspace} w_e
\nonumber
\\
&\le w_f + (2-R-2\delta){\thinnegspace\cdot\thinnegspace} w_e + (1-\delta){\thinnegspace\cdot\thinnegspace} w_i
\nonumber
\\
&< w_f + (2-R-3\delta){\thinnegspace\cdot\thinnegspace} w_i + w_i
\label{eqn: fwdChFromChain-split 2.1.3}
\\
&= w_f + w_i
\label{eqn: fwdChFromChain-split 2.1.4}
\\
&< w_f + \frac{w_f + (2\alpha + 1){\thinnegspace\cdot\thinnegspace} w_{f'}}{2\alpha}
\label{eqn: fwdChFromChain-split 2.1.5}
\\
&= w_f + w_{f'} + \frac{w_f + w_{f'}}{2\alpha}
\nonumber
\\
&= R {\thinnegspace\cdot\thinnegspace} (w_f + w_{f'})\,.
\nonumber
\end{align}
We can use $w_e<w_i$ in~(\ref{eqn: fwdChFromChain-split 2.1.3}), because $2-R-2\delta\ge 0$ by~(\ref{ineq:2-R-2delta}).
Then we use $2-R-3\delta = 0$ by~(\ref{eq:2-R-3delta}) in~(\ref{eqn: fwdChFromChain-split 2.1.4}),
Equation~\ref{eq:fwdCh+SplitCh2} in~(\ref{eqn: fwdChFromChain-split 2.1.5}),
and Equation~\ref{eq:splitCharges} in the last step.
\noindent
\mycase{2.2} The chain $C$ is singleton.
We upper bound the total charge to steps $t$ and $t+1$ by
\begin{align}
w_f + w_j + w_i - R{\thinnegspace\cdot\thinnegspace} w_e
&\le w_f + (1-R){\thinnegspace\cdot\thinnegspace} w_e + w_i
\nonumber
\\
&< w_f + w_i
\nonumber
\\
&< w_f + \frac{w_f + (2\alpha + 1){\thinnegspace\cdot\thinnegspace} w_{f'}}{2\alpha}
\label{eqn: fwdChFromChain-split 2.2.3}
\\
&=w_f + w_{f'} + \frac{w_f + w_{f'}}{2\alpha}
\nonumber
\\
&= R {\thinnegspace\cdot\thinnegspace} (w_f + w_{f'})\,,
\nonumber
\end{align}
where we apply~(\ref{eq:fwdCh+SplitCh2}) in inequality~(\ref{eqn: fwdChFromChain-split 2.2.3}), and~(\ref{eq:splitCharges}) in the last step.
\end{proof}
We now summarize our analysis of {\sc CompareWithBias}\xspacealpha. If $t$ is not in a
split-charge pair, we show upper bounds on the total charge to step
$t$. For each split-charge pair $(t,t')$, we show upper bounds on the
total charge to both steps $t$ and $t'$. This is sufficient, since
split-charge pairs are pairwise disjoint by
Lemma~\ref{l:noTwoDistribCharges}, thus summing all the bounds gives
the result.
For each step $t$, we distinguish three cases according to whether
$t$ is in a split-charge pair and whether $t$ is a chaining step.
In all cases, let $f$ be the packet scheduled at time $t$ in $\textsf{ALG}$\xspace
and let $j$ be the packet scheduled at time $t$ in $\textsf{OPT}$\xspace.
\noindent
\mycase{1} Step $t$ is not chaining
and it is not in a split-charge pair.
Then $t$ receives at most one full charge, from a packet $p$ such that $w_p \leq w_f$
(by Lemma~\ref{l:oneFullCharge} and the charging rules),
and possibly a forward charge from a chain $C$.
If there is no forward charge, the total charge to $t$ is at most $w_f\le R{\thinnegspace\cdot\thinnegspace} w_f$;
otherwise, Lemma~\ref{l:fwdChFromChain-NoSplitCh} shows that the sum of the forward charge from the chain
and the full charge is at most $R{\thinnegspace\cdot\thinnegspace} w_f$.
\noindent
\mycase{2} Step $t$ is a chaining step.
Then it does not receive a split charge
or a full charge, by Lemma~\ref{l:noFullorSplitChargeInAChain}.
Lemma~\ref{l:chargingToChain} implies that step $t$ receives a charge of at most
$R{\thinnegspace\cdot\thinnegspace} w_f$.
\noindent
\mycase{3} $(t,t')$ is a split-charge pair, i.e., $t$ is the first
step of the split-charge pair and $t'=t+1$, or $t'=t+2$. Thus $j$ is
charged using a split charge. Let $f'$ be the packet scheduled in step
$t'$ in $\textsf{ALG}$\xspace.
By Lemma~\ref{l:distributeChargeAndFwdChTo2ndStep} step $t'$ does not
receive a forward charge from a chain. If step $t$ also does not
receive a forward charge from a chain, then the total charge to steps
$t$ and $t'$ is at most $R{\thinnegspace\cdot\thinnegspace} (w_f + w_{f'})$ by
Lemma~\ref{l:splitChargeUB}. Otherwise, step $t$ receives a forward
charge from a chain and we apply Lemma~\ref{l:fwdAndDistributeCharge}
to show that the total charge to steps $t$ and $t'$ is again at most
$R{\thinnegspace\cdot\thinnegspace} (w_f + w_{f'})$.
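As a sanity check of the parameters, note that the relations invoked in the
computations above, namely $R=1+\frac{1}{2\alpha}$ used
with~(\ref{eq:splitCharges}), $1-2\delta+2\alpha\delta=R$ used
with~(\ref{eq:chainCharges}), $2-R-3\delta=0$ from~(\ref{eq:2-R-3delta}), and
$2-\delta-(R-1+2\delta)/\alpha=R$ from~(\ref{eq:forwardCh}), admit a common
solution. The small sympy sketch below (ours; the equation forms are read off
from the displayed computations rather than from the statements of the
referenced equations) solves the first three relations and verifies the
fourth.
\begin{verbatim}
# Consistency sketch (ours): solve three of the parameter relations used in
# the analysis above and check the fourth one.
import sympy as sp

R, alpha, delta = sp.symbols('R alpha delta', positive=True)
rel_split   = sp.Eq(R, 1 + 1 / (2 * alpha))                      # with (eq:splitCharges)
rel_chain   = sp.Eq(1 - 2 * delta + 2 * alpha * delta, R)        # with (eq:chainCharges)
rel_2R3d    = sp.Eq(2 - R - 3 * delta, 0)                        # (eq:2-R-3delta)
rel_forward = sp.Eq(2 - delta - (R - 1 + 2 * delta) / alpha, R)  # (eq:forwardCh)

solutions = sp.solve([rel_split, rel_chain, rel_2R3d], [R, alpha, delta], dict=True)
good = [s for s in solutions
        if sp.simplify(rel_forward.lhs.subs(s) - rel_forward.rhs.subs(s)) == 0]
assert good
print([{str(k): float(v) for k, v in s.items()} for s in good])
\end{verbatim}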
\section{A Lower Bound for 2-bounded Instances with Lookahead}\label{sec:lookaheadlb}
In this section, we prove that there is no online algorithm for
{{\textsf{PacketScheduling}}} with 1-lookahead that has competitive ratio smaller than
$\onefourth (1+\sqrt{17}) \approx 1.281$, even for $2$-bounded
instances. The idea of our proof is somewhat similar to the proof of
the lower bound of $\phi$ for
{{\textsf{PacketScheduling}}}~\cite{hajek_unit_packets_01,andelman_queueing_policies_03,chin_partial_job_values_03}.
\begin{theorem}
Let $R = \onefourth (1 + \sqrt{17})$.
For each $\varepsilon > 0$, no deterministic online algorithm for {{\textsf{PacketScheduling}}} with 1-lookahead
can be $(R - \varepsilon)$-competitive, even for $2$-bounded instances.
\end{theorem}
\begin{proof}
Fix some online algorithm ${\cal A}$ and some $\varepsilon > 0$. We will
show that, for some sufficiently large integer $n$ and sufficiently
small $\delta > 0$, there is a $2$-bounded instance of {{\textsf{PacketScheduling}}} with
1-lookahead, parametrized by $n$ and $\delta$, for which the optimal
profit is at least $(R-\varepsilon)$ times the profit of ${\cal A}$.
Our instance will consist of phases $0,\ldots,k$, for some $k\le
n$. In each phase $i<n$ we will release three packets whose weights
will grow roughly exponentially from one phase to the next. The number $k$
of phases is determined by the adversary based on the behavior of
${\cal A}$.
The adversary strategy is as follows. We start with phase $0$.
Suppose that some phase $i$, where $0\le i < n$,
has been reached. In phase $i$ the adversary releases the following three packets:
\begin{compactitem}
\item A packet $a_i$ with weight $w_{i}$, release time $2i+1$ and deadline $2i+1$, i.e., a tight packet.
\item A packet $b_i$ with weight $w_{i+1}$, release time $2i+1$ and deadline $2i+2$.
\item A packet $c_i$ with weight $w_{i+1}$, release time $2i+2$ and deadline $2i+3$.
\end{compactitem}
(The weights $w_i$ will be specified later.)
Now, if ${\cal A}$ schedules an expiring packet in step $2i+1$ (a tight packet $a_i$ or $c_{i-1}$, which may be pending from the previous phase),
then the game continues; the adversary will proceed to phase $i+1$. Otherwise,
the algorithm schedules packet $b_i$, in which case the adversary lets $k = i$ and the game ends.
Note that in step $2i+2$ the algorithm may schedule only $b_i$ or $c_i$, each having weight $w_{i+1}$.
Also, importantly, in step $2i+1$ the algorithm cannot yet see whether the packets from phase $i+1$ will arrive or not.
If phase $i=n$ is reached, then in phase $n$ the adversary releases
a single packet $a_n$ with weight $w_{n}$ and release time and deadline $2n+1$,
i.e., a tight packet.
We calculate the ratio between the weight of packets in an optimal schedule and the weight of packets sent by the algorithm.
Let $S_k = \sum_{i=0}^k w_i$.
There are two cases: either $k < n$, or $k = n$.
\noindent
\mycase{1} $k < n$. In all steps $2i+1$ for $i < k$ algorithm ${\cal A}$
scheduled an expiring packet of weight $w_{i}$
and in step $2k+1$ it scheduled packet $b_k$ of weight $w_{k+1}$. In an even step $2i+2$ for $i\leq k$ it scheduled
a packet of weight $w_{i+1}$. Note that there is no packet scheduled in step $2k+3$.
Overall, ${\cal A}$ scheduled packets of total weight $S_{k-1} + w_{k+1} + S_{k+1} - w_0 = 2S_{k+1} - w_k - w_0$.
The adversary schedules packets of weight $w_{i+1}$ in steps $2i+1$ and $2i+2$ for $i < k$ and
all packets from phase $k$ in steps $2k+1$, $2k+2$ and $2k+3$. In total,
the optimum has a schedule of weight $2S_{k+1} - 2w_0 + w_k$.
The ratio is
\begin{equation*}
R_k = \frac{2S_{k+1}+w_k-2w_0}{2S_{k+1} - w_k - w_0}.
\end{equation*}
\noindent
\mycase{2} $k = n$. As before, in all odd steps $2i+1$ for $i < n$
algorithm ${\cal A}$ scheduled an expiring packet of weight $w_{i}$
and in all even steps $2i+2$ for $i<n$ it scheduled a packet of weight $w_{i+1}$. In the last step $2n+1$ it scheduled a packet of weight $w_n$
as there is no other choice. Overall, the total weight of ${\cal A}$'s schedule is $2S_n - w_0$.
The adversary schedules packets of weight $w_{i+1}$ in steps $2i+1$ and $2i+2$ for $i < n$ and a packet of weight $w_{n}$ in the last step $2n+1$
which adds up to $2S_n - 2w_0 + w_n$. The ratio is
\begin{equation*}
\widehat{R}_n = \frac{2S_n+w_n-2w_0}{2S_n - w_0}.
\end{equation*}
We start with an intuitive explanation which leads to the optimal
setting of weights $w_i$ and the ratio $R$ for the instances of the
type described above. We normalize the instances so that $w_0=1$. We
want to set the weights so that $R_k\geq R-\varepsilon$ for all $k\geq
0$ and $\widehat{R}_n\geq R-\varepsilon$. We first find the weights depending
on $\delta$ such that $R_k=R$ for all $k\geq 1$. Using
$w_k=S_k-S_{k-1}$ for $k\geq 1$ and $w_0=1$, the condition $R_k=R$ for
$k\geq 1$ is rewritten as
\begin{equation}
\label{eq:Rk}
R=\frac{2S_{k+1} + S_k-S_{k-1}-2}{2S_{k+1} - S_k +S_{k-1}-1}\,,
\end{equation}
or equivalently as
\begin{equation}
\label{eq:recurrence}
(2R-2)S_{k+1} -(R+1)S_k + (R+1)S_{k-1} = -(2-R)\,.
\end{equation}
A general solution of this linear recurrence with $S_0=w_0=1$ and a
parameter $\delta$ is
\begin{equation}
\label{eq:solution}
S_k=(\gamma+1)\alpha^k+\delta(\beta^k-\alpha^k)-\gamma\,,
\end{equation}
where $\alpha<\beta$ are the two roots of the characteristic
polynomial of the recurrence $(2R-2)x^2-(R+1)x+(R+1)$ and
$\gamma=(2-R)/(2R-2)$. To justify~(\ref{eq:solution}), a general
solution is $A\alpha^k+B\beta^k-\gamma$ for parameters $A$ and $B$ and
a suitable constant $\gamma$. Considering $A=B=0$, the value
$\gamma=(2-R)/(2R-2)$ follows. Considering the constraint $S_0=1$, we
obtain $A+B=\gamma+1$; our parametrization by $\delta$
in~(\ref{eq:solution}) is equivalent but more convenient for further
analysis.
In our case of $R=\onefourth(1+\sqrt{17})$ a calculation gives
\begin{equation}
\label{eq:parameters}
\alpha = R+\onehalf =\onefourth (3 + \sqrt{17})\,,\quad
\beta = R+1 = \onefourth ( 5 + \sqrt{17})\, \mbox{ and }\quad
\gamma = R = \onefourth(1+\sqrt{17})\,.
\end{equation}
A calculation shows that for $\delta=0$, the solution satisfies
$R_0=R$. We choose a solution with a sufficiently small $\delta>0$
which guarantees $R_0\geq R-\varepsilon$. Since $1<\alpha<\beta$, for
large $n$, the dominating term in $S_n$ is $\delta\beta^n$. Thus
\begin{equation}
\label{eq:Rn}
\lim_{n\rightarrow\infty}\widehat{R}_n =
\lim_{n\rightarrow\infty}
\frac{2S_n+S_n-S_{n-1}}{2S_n} =
\lim_{n\rightarrow\infty}
\frac{3\delta\beta^n-\delta\beta^{n-1}}{2\delta\beta^n}=\frac{3\beta-1}{2\beta}=R\,.
\end{equation}
The last equality is verified by a direct calculation; in fact, it is
the equation that defines the optimal $R$ for our construction (once
$\beta$, the larger root of the characteristic polynomial of the recurrence,
is expressed in terms of $R$).
For a formal proof, we set $w_0=1$ and for $i=1,2,\ldots$,
\[
w_i=(\gamma+1)\alpha^{i-1}(\alpha-1)+\delta(\beta^{i-1}(\beta-1)-\alpha^{i-1}(\alpha-1))\,,
\]
where the parameters $\alpha$, $\beta$ and $\gamma$ are given by
(\ref{eq:parameters}) and $\delta>0$ is sufficiently small. By a
routine calculation we verify (\ref{eq:solution}) and
(\ref{eq:recurrence}). Thus $R_k=R$ for $k\geq 1$.
For $R_0$, we first verify that $\delta=0$ would yield $w_1=\alpha$ and
$R_0=R$. By continuity of the dependence of $w_1$ and $R_0$ on
$\delta$, for a sufficiently small $\delta>0$, we have $R_0\geq
R-\varepsilon$; fix such a $\delta>0$. Now, for $n\rightarrow\infty$,
$S_n=\delta\beta^n+O(\alpha^n)=\delta\beta^n(1+o(1))$. Thus, the
calculation (\ref{eq:Rn}) gives $\lim_{n\rightarrow\infty}\widehat{R}_n =
R$. Consequently, $\widehat{R}_n\geq R-\varepsilon$ for a sufficiently large $n$
of our choice. This defines the required instance and
completes the proof.
\end{proof}
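The construction in the proof can also be checked numerically. The following
sketch (ours, not part of the proof; $\delta=0.01$ is chosen only for
illustration and a Python environment is assumed) builds $S_k$
from~(\ref{eq:solution}) with the parameters~(\ref{eq:parameters}), recovers
the weights $w_k=S_k-S_{k-1}$, and confirms that $R_k=R$ for $k\geq 1$, that
$R_0$ is close to $R$ for small $\delta$, and that $\widehat{R}_n$ approaches
$R$.
\begin{verbatim}
# Numerical sketch (ours) of the lower-bound construction above.
from math import sqrt, isclose

R     = (1 + sqrt(17)) / 4
alpha = R + 0.5
beta  = R + 1
gamma = R
delta = 0.01          # illustration only; any sufficiently small delta > 0 works
w0    = 1.0

def S(k):             # closed form (eq:solution)
    return (gamma + 1) * alpha**k + delta * (beta**k - alpha**k) - gamma

def w(k):             # weights of the instance
    return w0 if k == 0 else S(k) - S(k - 1)

def R_ratio(k):       # Case 1 of the proof: game stops in phase k
    return (2*S(k+1) + w(k) - 2*w0) / (2*S(k+1) - w(k) - w0)

def Rhat(n):          # Case 2 of the proof: all n phases are played
    return (2*S(n) + w(n) - 2*w0) / (2*S(n) - w0)

assert all(isclose(R_ratio(k), R) for k in range(1, 30))  # R_k = R for k >= 1
assert R_ratio(0) > R - 1e-3                              # R_0 close to R
assert abs(Rhat(200) - R) < 1e-6                          # hat R_n -> R
assert isclose((3*beta - 1) / (2*beta), R)                # the equation in (eq:Rn)
print("all checks passed; R =", R)
\end{verbatim}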
\end{document}
\begin{document}
\newcounter{bnomer}
\newcounter{snomer}
\newcounter{diagram}
\setcounter{bnomer}{0} \setcounter{diagram}{0}
\renewcommand{\thesnomer}{\arabic{bnomer}.\arabic{snomer}}
\renewcommand{\thebnomer}{\arabic{bnomer}}
\newcommand{\sect}[1]{
\setcounter{snomer}{0} \refstepcounter{bnomer}
\begin{center}\large{\textbf{\S \arabic{bnomer}.{ #1}}}\end{center}}
\newcommand{\thenv}[2]{
\refstepcounter{snomer}
\par\addvspace{\medskipamount}\textbf{{#1} \arabic{bnomer}.\arabic{snomer}.}
{#2}\par\addvspace{\medskipamount}}
\renewcommand{\refname}{References}
\date{}
\title{The algebra of invariants for the adjoint action of the
unitriangular group}
\author{Victoria Sevostyanova\thanks{Partially
supported by Israel Scientific Foundation grant 797/14.}}
\maketitle
\begin{center}
\parbox[b]{330pt}{\small\textsc{Abstract.} In this paper the
algebra of invariants for the adjoint action of the unitriangular
group in the nilradical of a parabolic subalgebra is studied. We
prove that the algebra of invariants is finitely generated.}
\end{center}
\sect{Introduction}
Let $G$ be the general linear group $\mathrm{GL}(n,K)$ over an
algebraically closed field $K$ of characteristic zero. Let $B$ ($N$,
respectively) be its Borel (maximal unipotent, respectively)
subgroup, which consists of upper triangular matrices with nonzero
(unit, respectively) elements on the diagonal. We fix a parabolic
subgroup $P\supset B$. Let $\mathfrak{p}$, $\mathfrak{b}$ and
$\mathfrak{n}$ be the Lie subalgebras in $\mathfrak{gl}(n,K)$
correspon\-ding to $P$, $B$ and $N$, respectively. We represent
$\mathfrak{p}=\mathfrak{r}\oplus\mathfrak{m}$ as the direct sum of
the nilradical $\mathfrak{m}$ and a block diagonal subalgebra
$\mathfrak{r}$ with sizes of blocks $(n_1,\ldots, n_s)$. The
subalgebra $\mathfrak{m}$ is invariant relative to the adjoint
action of the group $P$:
$$\mbox{for any }g\in P\mbox{ we have
}x\in\mathfrak{m}\mapsto\mathrm{Ad}_gx=gxg^{-1}.$$
Therefore $\mathfrak{m}$ is invariant relative to the adjoint action
of the subgroups $B$ and $N$. We extend this action to the
representation in the algebra $K[\mathfrak{m}]$ and in the field
$K(\mathfrak{m})$:
$$\mbox{for any }g\in P\mbox{ we have
}f(x)\in K[\mathfrak{m}]\mapsto f(\mathrm{Ad}_{g^{-1}}x).$$
The complete description of the field of invariants
$K(\mathfrak{m})^N$ for any parabolic subalgebra is a result
of~\cite{S1}. In that paper the notion of an extended base is
introduced. The elements of the extended base correspond to a set of
algebraically independent $N$-invariants, and these invariants generate
the field of invariants $K(\mathfrak{m})^N$. Further, in~\cite{S2} the
structure of the algebra of invariants $K[\mathfrak{m}]^N$ is studied.
If the sizes of the diagonal blocks are $(2,k,2)$, $k>2$, or
$(1,2,2,1)$, then the invariants constructed on the extended base do
not generate the algebra of invariants, and the algebra of invariants
is not free. In both cases additional invariants are constructed
which, together with the invariants constructed on the extended base,
generate the algebra of invariants; the relations between these
invariants are also provided.
The aim of this paper is to prove that the algebra of invariants
$K[\mathfrak{m}]^N$ is finitely generated. We show this as follows.
Let $P=L\ltimes U$, where $L$ is the Levi subgroup and $U$ is the
unipotent radical. Then $N=U_L\ltimes U$, where $U_L$ is the maximal
unipotent subgroup of $L$. One has
$$K[\mathfrak{m}]^N=K\Big[K[\mathfrak{m}]^U\Big]^{U_L}.$$
In this paper we show that the algebra of invariants
$K[\mathfrak{m}]^U$ is a finitely generated, free algebra and we
present its generating invariants. Then by Khadzhiev's theorem (see
Theorem~\ref{Th-Khadzhiev}), we get our main result:
\thenv{Theorem}{\emph{The algebra of invariants $K[\mathfrak{m}]^N$
is finitely generated}.}
\sect{Main statements and definitions}
We begin with definitions. Let
$\mathfrak{b}=\mathfrak{n}\oplus\mathfrak{h}$ be a triangular
decomposition. Let $\Delta$ be the root system relative to
$\mathfrak{h}$ and let $\Delta^{\!+}$ be the set of positive roots.
Let $\{\varepsilon_i\}_{i=1}^{n}$ be the standard basis of
$\mathbb{C}^n$. Every positive root $\gamma$ in $\mathfrak{gl}(n,K)$
can be represented as $\gamma=\varepsilon_i-\varepsilon_j$,
$1\leqslant i<j\leqslant n$ (see \cite{GG}). We identify a root
$\gamma$ with the pair $(i,j)$ and the set of the positive roots
$\Delta^{\!+}$ with the set of pairs $(i,j)$, $i<j$. The system of
positive roots $\Delta^{\!+}_\mathfrak{r}$ of the reductive
subalgebra $\mathfrak{r}$ is a subsystem in $\Delta^{\!+}$.
Let $\{E_{i,j}:~i<j\}$ be the standard basis in $\mathfrak{n}$. Let
$E_\gamma$ denote the basis element $E_{i,j}$, where $\gamma=(i,j)$.
Let $M$ be the subset of $\Delta^{\!+}$ corresponding to
$\mathfrak{m}$, that is, $$\mathfrak{m}=\bigoplus_{\gamma\in
M}E_{\gamma}.$$ We identify the algebra $K[\mathfrak{m}]$ with the
polynomial algebra in variables $x_{i,j}$, $(i,j)\in M$.
We define a relation in $\Delta^{\!+}$ such that $\gamma'>\gamma$
whenever $\gamma'-\gamma\in\Delta^{\!+}$. Note that the relation $>$
is not an order relation.
The roots $\gamma$ and $\gamma'$ are called \emph{comparable}, if
either $\gamma'>\gamma$ or $\gamma>\gamma'$.
We will introduce a subset $S$ in the set of positive roots such
that every root from this subset corresponds to some $N$-invariant.
\thenv{Definition}{A subset $S$ in $M$ is called a \emph{base} if
the elements in $S$ are pairwise incomparable and for any
$\gamma\in M\setminus S$ there exists $\xi\in S$ such that
$\gamma>\xi$.}
Let us show that the base exists. We need the following
\thenv{Definition\label{Def-minimal}}{Let $A$ be a subset in $M$. We
say that $\gamma$ is a \emph{minimal element} in $A$ if there is no
$\xi\in A$ such that $\xi<\gamma$.}
For a given parabolic subgroup we will construct a diagram in the
form of a square array. The cell of the diagram corresponding to a
root of $S$ is labeled by the symbol $\otimes$. Symbols $\times$
will be explained below.
\thenv{Example\label{Ex-2132}}{Diagram 1 represents the parabolic
subalgebra with sizes of its diagonal blocks $(2,1,3,2)$. In this
case minimal elements in $M$ are $(2,3)$, $(3,4)$ and $(6,7)$.}
\begin{center}\refstepcounter{diagram}
{\begin{tabular}{|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|c}
\multicolumn{2}{l}{{\small 1\quad2\!\!}}&\multicolumn{2}{l}{{\small
3\quad4\!\!}}&\multicolumn{2}{l}{{\small 5\quad 6\!\!}}&
\multicolumn{2}{l}{{\small 7\quad 8\!\!}}\\
\cline{1-8} \multicolumn{2}{|l|}{1}&&&$\!\otimes$&&&&{\small 1}\\
\cline{3-8} \multicolumn{2}{|r|}{1}&$\!\otimes$&&&&&&{\small 2}\\
\cline{1-8} \multicolumn{2}{|c|}{}&1&$\!\otimes$&&&&&{\small 3}\\
\cline{3-8} \multicolumn{3}{|c|}{}&\multicolumn{3}{|l|}{1}&$\!\times$&$\!\times$&{\small 4}\\
\cline{7-8} \multicolumn{3}{|c|}{}&\multicolumn{3}{|c|}{1}&$\!\times$&$\!\otimes$&{\small 5}\\
\cline{7-8} \multicolumn{3}{|c|}{}&\multicolumn{3}{|r|}{1}&$\!\otimes$&&{\small 6}\\
\cline{4-8} \multicolumn{6}{|c|}{}&\multicolumn{2}{|l|}{1}&{\small 7}\\
\multicolumn{6}{|c|}{}&\multicolumn{2}{|r|}{1}&{\small 8}\\
\cline{1-8} \multicolumn{8}{c}{Diagram \arabic{diagram}}\\
\end{tabular}}
\end{center}
We construct the base $S$ by the following algorithm.
\textsc{Step 1.} Put $M_0=M$ and $i=1$. Let $S_1$ be the set of
minimal elements in $M_0$.
\textsc{Step 2.} Put $M_i=M_{i-1}\setminus\big(S_i\cup\{\gamma\in
M_{i-1}:\exists\xi\in S_i,\ \xi<\gamma\}\big)$. If $M_i$ is empty,
stop. Otherwise increase $i$ by 1, let $S_i$ be the set of minimal
elements of $M_{i-1}$, and repeat Step 2.
Denote $S=S_1\cup S_2\cup\ldots$ The base $S$ is unique.
We have $S_1=\{(2,3),(3,4),(6,7)\}$ and $S_2=\{(1,5),(5,8)\}$ in
Example~\ref{Ex-2132}.
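For illustration, the procedure above is easy to run by computer. The
following short Python sketch (ours, not part of the paper) carries out
Steps 1 and 2 for the block sizes $(2,1,3,2)$ of Example~\ref{Ex-2132} and
reproduces the sets $S_1$ and $S_2$ listed above.
\begin{verbatim}
# Illustrative sketch (ours): the two-step procedure for block sizes (2,1,3,2).
blocks = [2, 1, 3, 2]
n = sum(blocks)
R = [0]
for r in blocks:
    R.append(R[-1] + r)

def block_of(i):      # block number of a row/column index (1-based)
    return next(k for k in range(len(blocks)) if R[k] < i <= R[k + 1])

# M: roots (i,j) strictly above the block diagonal
M = {(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
     if block_of(i) < block_of(j)}

def less(xi, gamma):
    # xi < gamma iff gamma - xi is a positive root, i.e. gamma lies in the
    # same row to the right of xi, or in the same column above xi
    return (gamma[0] == xi[0] and gamma[1] > xi[1]) or \
           (gamma[1] == xi[1] and gamma[0] < xi[0])

S_parts, Mi = [], set(M)
while Mi:
    Si = {g for g in Mi if not any(less(xi, g) for xi in Mi)}  # minimal elements
    S_parts.append(Si)
    Mi = {g for g in Mi if g not in Si and not any(less(xi, g) for xi in Si)}
print(S_parts)  # [{(2,3), (3,4), (6,7)}, {(1,5), (5,8)}], up to ordering
\end{verbatim}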
Let $(r_1,r_2,\ldots,r_s)$ be the sizes of the diagonal blocks in
$\mathfrak{r}$. Put $$R_k=\displaystyle\sum_{i=1}^{k}r_i.$$
Let us present the $N$-invariant corresponding to a root of the base.
Consider the formal matrix $\mathbb{X}$ of variables
$$\left(\mathbb{X}\right)_{i,j}=\left\{
\begin{array}{ll}
x_{i,j}&\mbox{if }(i,j)\in M;\\
0&\mbox{otherwise.}
\end{array}\right.$$
The matrix $\mathbb{X}$ can be represented as a block matrix
$$\mathbb{X}=\left(\begin{array}{ccccc}
0&X_{1,2}&X_{1,3}&\ldots&X_{1,s}\\
0&0&X_{2,3}&\ldots&X_{2,s}\\
\ldots&\ldots&\ldots&\ldots&\ldots\\
0&0&0&\ldots&X_{s-1,s}\\
0&0&0&\ldots&0\\
\end{array}\right),$$
where the size of $X_{i,j}$ is $r_i\times r_j$,
\begin{equation}
X_{i,j}=\left(\begin{array}{cccc}
x_{R_{i-1}+1,R_{j-1}+1}&x_{R_{i-1}+1,R_{j-1}+2}&\ldots&x_{R_{i-1}+1,R_{j}}\\
x_{R_{i-1}+2,R_{j-1}+1}&x_{R_{i-1}+2,R_{j-1}+2}&\ldots&x_{R_{i-1}+2,R_{j}}\\
\ldots&\ldots&\ldots&\ldots\\
x_{R_{i},R_{j-1}+1}&x_{R_{i},R_{j-1}+2}&\ldots&x_{R_{i},R_{j}}\\
\end{array}\right).\label{X_ij}
\end{equation}
\thenv{Lemma\label{Lemma1}}{\emph{The roots corresponding to the
antidiagonal elements in} $X_{i,i+1}$ (\emph{from the lower left
element towards the upper right}) \emph{are in the base}.}
Thus the roots of the base in the blocks $X_{i,i+1}$ are as follows.
\begin{center}
{\begin{tabular}{|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|}
\cline{1-4} &&&\\
\cline{1-4} &&&\\
\cline{1-4} &&&$\!\otimes$\\
\cline{1-4} &&$\!...$&\\
\cline{1-4} &$\!\otimes$&&\\
\cline{1-4} $\!\otimes$&&&\\
\cline{1-4}
\end{tabular}\ \ \mbox{or }
\begin{tabular}{|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|}
\cline{1-6} &&&$\!\otimes$&&\\
\cline{1-6} &&$\!...$&&&\\
\cline{1-6} &$\!\otimes$&&&&\\
\cline{1-6} $\!\otimes$&&&&&\\
\cline{1-6}
\end{tabular}}
\end{center}
\textsc{Proof.} By definition~\ref{Def-minimal} for any $i$ the root
$(R_i,R_i+1)$ is minimal. Therefore $M\setminus M_1$ contains roots
corresponding to all cells in the row $R_i$ and the column $R_i+1$.
Hence $(R_i-1,R_i+2)\in S_2$ if $r_i,r_{i+1}>1$ and all roots of $M$
in the rows $R_i$, $R_i-1$ and in the columns $R_i+1$, $R_i+2$
belong to $M\setminus M_2$. Hence $(R_i-2,R_i+3)\in S_3$ if
$r_i,r_{i+1}>2$ etc.~$\Box$
There are roots in $S$ that do not correspond to antidiagonal
elements of any block $X_{i,i+1}$, for example the root
$(1,5)$ in Example~\ref{Ex-2132}.
For any root $\gamma=(a,b)\in M$ let $S_\gamma=\{(i,j)\in
S:i>a,j<b\}$. Let $S_\gamma=\{(i_1,j_1),\ldots,(i_k,j_k)\}$. Note
that if $\gamma$ is minimal in $M$, then $S_{\gamma}=\emptyset$.
Denote by $M_\gamma$ a minor $\mathbb{X}_I^J$ of the matrix
$\mathbb{X}$ with ordered systems of rows $I$ and columns $J$, where
$$I=\mathrm{ord}\{a,i_1,\ldots,i_k\},\quad J=\mathrm{ord}\{j_1,\ldots,j_k, b\}.$$
\thenv{Example}{Let us continue Example~\ref{Ex-2132}. For the root
$(1,6)$ we have $S_{(1,6)}=\{(2,3),(3,4)\}$, $I=\{1,2,3\}$,
$J=\{3,4,6\}$, and
$$M_{(1,6)}=\left|\begin{array}{ccc}
x_{1,3}&x_{1,4}&x_{1,6}\\
x_{2,3}&x_{2,4}&x_{2,6}\\
0&x_{3,4}&x_{3,6}\\
\end{array}\right|.$$
All minors $M_{\xi}$ for $\xi\in S$ are the following:
$$M_{(2,3)}=x_{2,3},\ M_{(3,4)}=x_{3,4},\ M_{(6,7)}=x_{6,7},$$$$
M_{(5,8)}=\left|\begin{array}{cc}
x_{5,7}&x_{5,8}\\
x_{6,7}&x_{6,8}\\
\end{array}\right|,\
M_{(1,5)}=\left|\begin{array}{ccc}
x_{1,3}&x_{1,4}&x_{1,5}\\
x_{2,3}&x_{2,4}&x_{2,5}\\
0&x_{3,4}&x_{3,5}\\
\end{array}\right|.$$}
\thenv{Lemma\label{L-M-inv}}{\emph{For any $\xi\in S$ the minor
$M_{\xi}$ is $N$-invariant}.}
\thenv{Notation\label{Notation}}{The group $N$ is generated by the
one-parameter subgroups
$$g_{i,j}(t)=I+tE_{i,j},\mbox{ where }1\leqslant i<j\leqslant n$$
and $I$ is the identity matrix. The adjoint action of any
$g_{i,j}(t)$ makes the following transformations of a matrix:
\begin{itemize}
\item[1)] the $j$th row multiplied by $t$ is added to the $i$th row,
\item[2)] the $i$th column multiplied by $-t$ is added to the $j$th
column, i.e. for a variable $x_{a,b}$ we have
$$\mathrm{Ad}_{g_{i,j}^{-1}(t)}x_{a,b}=\left\{\begin{array}{ll}
x_{a,b}+tx_{j,b}&\mbox{if }a=i;\\
x_{a,b}-tx_{a,i}&\mbox{if }b=j;\\
x_{a,b}&\mbox{otherwise}.
\end{array}\right.$$
\end{itemize}}
\textsc{Proof.} By Notation~\ref{Notation} it is sufficient to prove that for
any $\xi=(k,m)\in S$ the minor $M_{\xi}$ is invariant under the
adjoint action of $g_{i,j}(t)$ for any $i<j$. If $i<k$, then the
$i$th row does not belong to the minor $M_{\xi}$, so adding
the $j$th row to the $i$th row leaves $M_{\xi}$ unchanged. Let
$M_{\xi}=\mathbb{X}_I^J$ for some collections of rows $I$ and
columns $J$. If $i\geqslant k$, then, since the numbers in $I$ are
consecutive, any row $j$ that is nonzero at the intersection
with the columns $J$ has its number in $I$; hence adding the $j$th row to
the $i$th row again leaves $M_{\xi}$ unchanged. Using similar
reasoning for columns, we get that $M_{\xi}$ is
$N$-invariant.~$\Box$
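For the running example the lemma can also be verified by direct computation.
The following sympy sketch (ours, not part of the paper) builds the formal
matrix $\mathbb{X}$ for the block sizes $(2,1,3,2)$ and checks that the minors
$M_{(1,5)}$ and $M_{(5,8)}$ are unchanged under the row and column operations
of Notation~\ref{Notation}, that is, under
$x\mapsto g_{i,j}(t)\,x\,g_{i,j}(t)^{-1}$ for every generator $g_{i,j}(t)$.
\begin{verbatim}
# Verification sketch (ours, sympy): invariance of M_{(1,5)} and M_{(5,8)}.
import sympy as sp

blocks = [2, 1, 3, 2]
n = sum(blocks)
R = [0]
for r in blocks:
    R.append(R[-1] + r)
def block_of(i):
    return next(k for k in range(len(blocks)) if R[k] < i <= R[k + 1])

t = sp.Symbol('t')
X = sp.zeros(n, n)
for i in range(1, n + 1):
    for j in range(i + 1, n + 1):
        if block_of(i) < block_of(j):          # (i,j) belongs to M
            X[i - 1, j - 1] = sp.Symbol(f'x{i}{j}')

def minor(Y, rows, cols):                      # minor with 1-based index sets
    return Y.extract([r - 1 for r in rows], [c - 1 for c in cols]).det()

minors = [((1, 2, 3), (3, 4, 5)),              # M_{(1,5)}
          ((5, 6), (7, 8))]                    # M_{(5,8)}
for i in range(1, n + 1):
    for j in range(i + 1, n + 1):
        E = sp.zeros(n, n); E[i - 1, j - 1] = 1
        G, Ginv = sp.eye(n) + t * E, sp.eye(n) - t * E  # E*E = 0, so Ginv = G**-1
        Y = G * X * Ginv                                # adjoint action on x
        for rows, cols in minors:
            assert sp.expand(minor(Y, rows, cols) - minor(X, rows, cols)) == 0
print("M_{(1,5)} and M_{(5,8)} are invariant under all g_{i,j}(t)")
\end{verbatim}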
The set $\{M_{\xi},\ \xi\in S\}$ does not generate all the
$N$-invariants. There is another series of $N$-invariants. To
present it we need the following.
\thenv{Definition}{An ordered set of positive roots
$$\{\varepsilon_{i_1}-\varepsilon_{j_1},
\varepsilon_{i_2}-\varepsilon_{j_2},\ldots,
\varepsilon_{i_s}-\varepsilon_{j_s}\}$$ is called a \emph{chain} if
$j_1=i_2,j_2=i_3,\ldots,j_{s-1}=i_s$.}
\thenv{Definition}{We say that two roots $\xi,\xi'\in S$ form an
\emph{admissible pair} $q=(\xi,\xi')$ if there exists $\alpha_q$ in
the set $\Delta^{\!+}_\mathfrak{r}$ corresponding to the reductive
part $\mathfrak{r}$ such that the ordered set of roots
$\{\xi,\alpha_q,\xi'\}$ is a chain. In other words, roots
$\xi=\varepsilon_i-\varepsilon_j$ and
$\xi'=\varepsilon_k-\varepsilon_l$ are an admissible pair if
$\alpha_q=\varepsilon_j-\varepsilon_k\in\Delta^{\!+}_\mathfrak{r}$.
Note that the root $\alpha_q$ is uniquely determined by $q$.}
\thenv{Example\label{Ex-2132-admissible_pair}}{In the case of
Diagram 1 we have three admissible pairs $q_1=(\xi_1,\xi_3)$,
$q_2=(\xi_2,\xi_3)$, $q_3=(\xi_1,\xi_4)$, where $\xi_1=(3,4)$,
$\xi_2=(1,5)$, $\xi_3=(6,7)$, and $\xi_4=(5,8)$.}
Let the set $Q:=Q(\mathfrak{p})$ consist of admissible pairs. For
every admissible pair $q=(\xi,\xi')$ we construct a positive root
$\varphi_q=\alpha_q+\xi'$, where $\{\xi,\alpha_q,\xi'\}$ is a chain.
Consider the subset $\Phi=\{\varphi_q:~q\in Q\}$ in the set of
positive roots. The cell of the diagram corresponding to a root of
$\Phi$ is labeled by $\times$.
\thenv{Example}{The roots of $\Phi$ for the admissible pairs in
Example~\ref{Ex-2132-admissible_pair} are $\varphi_{q_1}=(4,7)$,
$\varphi_{q_2}=(5,7)$, $\varphi_{q_3}=(4,8)$.}
Now we are ready to present the $N$-invariant corresponding to a
root $\varphi\in\Phi$.
Let the admissible pair $q=(\xi,\xi')$ correspond to $\varphi_q\in\Phi$.
We construct the polynomial
\begin{equation}
L_{\varphi_q}=\sum_{\scriptstyle\alpha_1,\alpha_2\in\Delta^{\!+}_\mathfrak{r}\cup\{0\}
\atop\scriptstyle\alpha_1+\alpha_2=\alpha_q}
M_{\xi+\alpha_1}M_{\alpha_2+\xi'}.\label{L_q}
\end{equation}
\thenv{Example}{Continuing the previous example, we have
$$L_{(4,7)}=x_{3,4}x_{4,7}+x_{3,5}x_{5,7}+x_{3,6}x_{6,7},$$
$$L_{(4,8)}=x_{3,4}\left|\begin{array}{cc}
x_{4,7}&x_{4,8}\\
x_{6,7}&x_{6,8}\\
\end{array}\right|+x_{3,5}\left|\begin{array}{cc}
x_{5,7}&x_{5,8}\\
x_{6,7}&x_{6,8}\\
\end{array}\right|,$$
$$L_{(5,7)}=\left|\begin{array}{ccc}
x_{1,3}&x_{1,4}&x_{1,5}\\
x_{2,3}&x_{2,4}&x_{2,5}\\
0&x_{3,4}&x_{3,5}\\
\end{array}\right|x_{5,7}+\left|\begin{array}{ccc}
x_{1,3}&x_{1,4}&x_{1,6}\\
x_{2,3}&x_{2,4}&x_{2,6}\\
0&x_{3,4}&x_{3,6}\\
\end{array}\right|x_{6,7}.$$}
\thenv{Lemma}{\emph{The polynomial $L_{\varphi}$ is $N$-invariant
for any} $\varphi=\varphi_q\in\Phi$, $q=(\xi,\xi')$.}
\textsc{Proof.} By Notation~\ref{Notation} it is sufficient to check
the action of $g_{i,j}(t)$. Let $\xi=(a,b)$,~$\xi'=(a',b')$. Using
the definition of admissible pair, we have $a<b<a'<b'$,
$\alpha_q=(b,a')\in\Delta_{\mathfrak{r}}^{\!+}$, and
$\varphi=(b,b')$. If $i<b$ or $j>a'$, then using the same arguments
as in the proof of the invariance of $M_{\xi}$ for $\xi\in S$, we
have that the minors of the right part of (\ref{L_q}) are
$g_{i,j}(t)$-invariant.
Let $b\leqslant i<j\leqslant a'$. Denote $\gamma_1=(b,i)$,
$\gamma_2=(j,a')$, $\beta=(i,j)$; then
$\alpha_q=\gamma_1+\beta+\gamma_2$ and
$\gamma_1+\beta,\beta+\gamma_2\in\Delta_{\mathfrak{r}}^{\!+}\cup\{0\}$.
We have
\begin{equation}
\left\{\begin{array}{l} T_{g_{i,j}(t)}M_{\xi+\gamma_1+\beta}=
M_{\xi+\gamma_1+\beta}+tM_{\xi+\gamma_1},\\
T_{g_{i,j}(t)}M_{\beta+\gamma_2+\xi'}=
M_{\beta+\gamma_2+\xi'}-tM_{\gamma_2+\xi'}.\\
\end{array}\right.\label{T_g(M_xi)}
\end{equation}
The other minors of (\ref{L_q}) are invariant under the action of
$g_{i,j}(t)$. Combining (\ref{L_q}) and (\ref{T_g(M_xi)}), we get
$$\left(T_{g_{i,j}(t)}L_{\varphi}\right)-L_{\varphi}=M_{\xi+\gamma_1}
\left(M_{\beta+\gamma_2+\xi'}-t M_{\gamma_2+\xi'}\right)+$$
$$\left(M_{\xi+\gamma_1+\beta}+t
M_{\xi+\gamma_1}\right)M_{\gamma_2+\xi'}-
M_{\xi+\gamma_1}M_{\beta+\gamma_2+\xi'}-
M_{\xi+\gamma_1+\beta}M_{\gamma_2+\xi'}=0.~\Box$$
Thus we proved the first part of
\thenv{Theorem\label{M-L_independ}}{\emph{For an arbitrary parabolic
subalgebra}, \emph{the system of polynomials}
\begin{equation}
\{M_\xi,~\xi\in S,~L_{\varphi},~\varphi\in\Phi\}\label{system-M-L}
\end{equation}
\emph{is contained in $K[\mathfrak{m}]^N$ and is algebraically
independent over $K$}.}
To show the algebraic independence, consider the restriction
homo\-mor\-phism $f\mapsto f|_\mathcal{Y}$, where
$$\mathcal{Y}=\left\{\sum_{\xi\in S\cup\Phi}c_{\xi}E_\xi:
c_{\xi}\neq0\ \forall\xi\in S\cup\Phi\right\},$$ from
$K[\mathfrak{m}]$ to the polynomial algebra $K[\mathcal{Y}]$ of
$x_\xi$, $\xi\in S$, and of $x_\varphi$, $\varphi\in\Phi$. Direct
calculations show that the system of the images
$$\left\{M_\xi|_{\mathcal{Y}},\xi\in S,L_\varphi|_{\mathcal{Y}},
\varphi\in\Phi\right\}$$ is algebraically independent over $K$.
Therefore, the system~(\ref{system-M-L}) is algebraically
independent over $K$ (see details in~\cite{PS}).
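The $N$-invariance of the polynomials $L_{\varphi}$ can likewise be confirmed
by direct computation in the example above. The sketch below (ours, assuming
sympy) checks that $L_{(4,7)}=x_{3,4}x_{4,7}+x_{3,5}x_{5,7}+x_{3,6}x_{6,7}$
is unchanged under the adjoint action of every generator $g_{i,j}(t)$.
\begin{verbatim}
# Companion sketch (ours, sympy): N-invariance of L_{(4,7)} in the example.
import sympy as sp

blocks = [2, 1, 3, 2]
n = sum(blocks)
R = [0]
for r in blocks:
    R.append(R[-1] + r)
def block_of(i):
    return next(k for k in range(len(blocks)) if R[k] < i <= R[k + 1])

t = sp.Symbol('t')
X = sp.zeros(n, n)
for i in range(1, n + 1):
    for j in range(i + 1, n + 1):
        if block_of(i) < block_of(j):
            X[i - 1, j - 1] = sp.Symbol(f'x{i}{j}')

def L47(Y):   # L_{(4,7)} = x_{3,4}x_{4,7} + x_{3,5}x_{5,7} + x_{3,6}x_{6,7}
    return Y[2, 3]*Y[3, 6] + Y[2, 4]*Y[4, 6] + Y[2, 5]*Y[5, 6]  # 0-based indices

for i in range(1, n + 1):
    for j in range(i + 1, n + 1):
        E = sp.zeros(n, n); E[i - 1, j - 1] = 1
        Y = (sp.eye(n) + t * E) * X * (sp.eye(n) - t * E)
        assert sp.expand(L47(Y) - L47(X)) == 0
print("L_{(4,7)} is N-invariant")
\end{verbatim}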
\thenv{Definition}{The set $S\cup\Phi$ is called an \emph{extended
base}.}
\thenv{Definition}{The matrices of $\mathcal{Y}$ are called
\emph{canonical}.}
By~\cite{S1} one has the following theorems.
\thenv{Theorem\label{Exist_of_representative}}{\emph{There exists a
nonempty Zariski-open subset $W\subset\mathfrak{m}$ such that the
$N$-orbit of any $x\in W$ intersects $\mathcal{Y}$ at a unique
point}.}
\thenv{Theorem\label{Th_invariant_field}}{\emph{The field of
invariants $K(\mathfrak{m})^N$ is the field of rational functions
of} $M_\xi$, $\xi\in S$, \emph{and} $L_{\varphi}$,
$\varphi\in\Phi$.}
\sect{Invariants of the unipotent subgroup \\in the Levi
decomposition of $P$}
Let us consider the decomposition of a parabolic group $P$ into the
semi\-direct product of the Levi subgroup $L$ and the unipotent
radical $U$. Let $U_L$ be the maximal unipotent subgroup in the Levi
group $L$. One has $N=U_L\ltimes U$. The aim is to describe the
algebra of invariants $K[\mathfrak{m}]^{U}$.
As above, we will introduce some subset $T\subset\Delta^{\!+}$ and
construct a correspon\-ding invariant $N_{\xi}\in
K[\mathfrak{m}]^{U}$ for every root $\xi\in T$.
\thenv{Definition}{A root $\xi\in\Delta^{\!+}$ belongs to \emph{a
broad base} $T\subset\Delta^{\!+}$ if one of the following
conditions holds:
\begin{itemize}
\item[1)] the root $\xi$ belongs to $S$;
\item[2)] there exists a root $\gamma\in S$ such that
$\xi>\gamma$ and the variables $x_\xi$ and $x_\gamma$ are located in
the same block $X_{i,j}$.
\end{itemize}}
\thenv{Example\label{Ex-2132-T}}{The diagram presents roots of the
broad base $T$ for the diagonal blocks $(2,1,3,2)$. The cells of the
diagram corresponding to roots of $S$ (resp. $T\setminus S$) are
labeled by the symbol $\otimes$ (resp.~$\boxtimes$).}
\begin{center}\refstepcounter{diagram}
{\begin{tabular}{|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|c}
\multicolumn{2}{l}{{\small 1\quad 2\!\!}}&\multicolumn{2}{l}{{\small
3\quad 4\!\!}}&\multicolumn{2}{l}{{\small 5\quad 6\!\!}}&
\multicolumn{2}{l}{{\small 7\quad 8\!\!}}\\
\cline{1-8} \multicolumn{2}{|l|}{1}&$\!\boxtimes$&&$\!\otimes$&$\!\boxtimes$&&&{\small 1}\\
\cline{3-8} \multicolumn{2}{|r|}{1}&$\!\otimes$&&&&&&{\small 2}\\
\cline{1-8} \multicolumn{2}{|c|}{}&1&$\!\otimes$&$\!\boxtimes$&$\!\boxtimes$&&&{\small 3}\\
\cline{3-8} \multicolumn{3}{|c|}{}&\multicolumn{3}{|l|}{1}&$\!\boxtimes$&$\!\boxtimes$&{\small 4}\\
\cline{7-8} \multicolumn{3}{|c|}{}&\multicolumn{3}{|c|}{1}&$\!\boxtimes$&$\!\otimes$&{\small 5}\\
\cline{7-8} \multicolumn{3}{|c|}{}&\multicolumn{3}{|r|}{1}&$\!\otimes$&$\!\boxtimes$&{\small 6}\\
\cline{4-8} \multicolumn{6}{|c|}{}&\multicolumn{2}{|l|}{1}&{\small 7}\\
\multicolumn{6}{|c|}{}&\multicolumn{2}{|r|}{1}&{\small 8}\\
\cline{1-8} \multicolumn{8}{c}{Diagram \arabic{diagram}}\\
\end{tabular}}
\end{center}
Let $M'=\big\{\xi\in M:E_{\xi}\in\mathfrak{m}^2\big\}.$ In other
words, if an element corresponding to a root $\xi\in M$ does not
belong to blocks $X_{k,k+1}$ for any $k$, then $\xi\in M'$. We have
$$\sum_{\xi\in M'}x_{\xi}E_{\xi}=\left(\begin{array}{ccccc}
0&0&X_{1,3}&\ldots&X_{1,s}\\
\ldots&\ldots&\ldots&\ldots&\ldots\\
0&0&0&\ldots&X_{s-2,s}\\
0&0&0&\ldots&0\\
0&0&0&\ldots&0\\
\end{array}\right),$$
$$\sum_{\xi\in M\setminus M'}x_{\xi}E_{\xi}=\left(\begin{array}{ccccc}
0&X_{1,2}&0&\ldots&0\\
0&0&X_{2,3}&\ldots&0\\
\ldots&\ldots&\ldots&\ldots&\ldots\\
0&0&0&\ldots&X_{s-1,s}\\
0&0&0&\ldots&0\\
\end{array}\right),$$
where $X_{i,j}$ is the block~(\ref{X_ij}).
If $\xi\in M\setminus M'$, then $x_{\xi}$ is in some block
$X_{k,k+1}$. We have $\xi\in S$ or, by Lemma~\ref{Lemma1}, there
is $\gamma\in S$ such that $\xi$ is to the right of or above $\gamma$.
In both cases $\xi\in T$. Therefore $M\setminus M'\subset T$.
Example~\ref{Ex-2132-T} shows that $M\setminus M'\neq T$ in
general.
For $\xi=(i,j)\in T$ let $N_{\xi}\in K[\mathfrak{m}]$ be defined as
follows
$$N_{\xi}=\left\{\begin{array}{ll}
x_{i,j}&\mbox{if }\xi\in M\setminus M';\\
M_{\xi}&\mbox{if }\xi\in M'.
\end{array}\right.$$
\thenv{Example}{Let us write all $U$-invariants $N_{\xi}$ in the
case $(2,1,3,2)$ for $\xi\in T\cap M'$.
$$N_{(1,5)}=\left|\begin{array}{ccc}
x_{1,3}&x_{1,4}&x_{1,5}\\
x_{2,3}&x_{2,4}&x_{2,5}\\
0&x_{3,4}&x_{3,5}\\
\end{array}\right|,\quad
N_{(1,6)}=\left|\begin{array}{ccc}
x_{1,3}&x_{1,4}&x_{1,6}\\
x_{2,3}&x_{2,4}&x_{2,6}\\
0&x_{3,4}&x_{3,6}\\
\end{array}\right|.$$}
\thenv{Lemma\label{Th:N_xi-invariant}}{\emph{The minor $N_{\xi}$ is
invariant under the adjoint action of the unipotent group} $U$
\emph{for any} $\xi\in T$.}
\textsc{Proof.} The group $U$ is generated by the one-parameter
subgroups (see Notation~\ref{Notation})
$$g_{i,j}(t)=I+tE_{i,j},\mbox{ where }(i,j)\in M.$$
There are two cases of a root $\xi\in T$. The first case is $\xi\in
M\setminus M'$ and the second one is $\xi\in M'\cap T$.
\begin{enumerate}
\item
Suppose $\xi=(a,b)\in T$ belongs to the set $M\setminus M'$; then
$N_{\xi}=x_{a,b}$ and there is some $k$ such that the variable
$x_{a,b}$ is in the block $X_{k,k+1}$. By
Notation~\ref{Notation}, for $t\neq0$ we have
$\mathrm{Ad}_{g_{i,j}^{-1}(t)}x_{a,b}\neq x_{a,b}$ if $a=i$ and
$x_{j,b}$ is in $X_{k,k+1}$ or $b=j$ and $x_{a,i}$ is in
$X_{k,k+1}$. In both cases the root $(i,j)$ belongs to
$\Delta^{\!+}_{\mathfrak{r}}$. Therefore $(i,j)\not\in M$ and
$g_{i,j}(t)\not\in U$.
Hence $x_{a,b}$ is a $U$-invariant.
\item
If the root $\xi=(a,b)\in T$ does not belong to $M\setminus M'$,
then by definition of $T$, there exists a root $\gamma\in S$ such
that $\gamma=(i,b)$, $i>a$, or $\gamma=(a,j)$, $j<b$, and $x_\xi$
and $x_\gamma$ are in the same block $X_{l,m}$, $l+1<m$. Suppose
$\gamma=(i,b)$, $i>a$. The case $\gamma=(a,j)$ is similar. Let
$M_{\gamma}=\mathbb{X}_I^J$ be a minor of order $k$ of the formal
matrix with rows $I=\{i,i+1,\ldots,i+k-1\}$ and columns
$J=\{b-k+1,b-k+2,\ldots,b\}$, then $N_{\xi}=\mathbb{X}_{I'}^J$,
where $I'=\{a,i+1,\ldots,i+k-1\}$. Note that all rows of $N_{\xi}$
except the row $a$ and all columns are consecutive. Since a minor is
not changed by adding to a row (resp.\ column) any other of its rows
(resp.\ columns), the adjoint action of $g_{u,v}(t)$ can change
$\mathbb{X}_{I'}^J$ only if $u=a$ and $v\leqslant i$. Let
$\mathrm{Ad}_{g_{u,v}^{-1}(t)}N_{\xi}\neq N_{\xi}$ for $t\neq0$.
Since $x_\xi$ and $x_\gamma$ are in the same block $X_{l,m}$ and
$u=a$ and $v\leqslant i$, then $x_{(u,b)}$ and $x_{(v,b)}$ are in
the same block $X_{l,m}$. Hence
$(u,v)\in\Delta_{\mathfrak{r}}^{\!+}$ and $g_{u,v}(t)\not\in U$. So
we have that $N_{\xi}$ is a $U$-invariant.~$\Box$
\end{enumerate}
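The lemma can also be checked symbolically in small cases. The following
Python sketch (it is only a sanity check, not a part of the proof; the
encoding of $\mathfrak{m}$, the choice of the diagonal blocks $(2,1,3,2)$
from the example above, and all names are ad hoc) verifies that the minor
$N_{(1,5)}$ is unchanged by $\mathrm{Ad}_{g_{i,j}(t)}$ for every generator
$g_{i,j}(t)$, $(i,j)\in M$.
\begin{verbatim}
import sympy as sp

sizes = (2, 1, 3, 2)                      # diagonal blocks of the example
R = [0]
for s in sizes:
    R.append(R[-1] + s)                   # R = [0, 2, 3, 6, 8]
n = R[-1]

def blk(i):                               # number of the diagonal block of i
    return next(k for k in range(1, len(R)) if R[k - 1] < i <= R[k])

# M: roots (i, j) whose entry lies strictly above the block diagonal
M = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
     if blk(i) < blk(j)]

t = sp.Symbol('t')
X = sp.zeros(n, n)                        # a generic element of m
for (i, j) in M:
    X[i - 1, j - 1] = sp.Symbol('x_%d_%d' % (i, j))

def minor(A, rows, cols):                 # minor with 1-based indices
    return A[[r - 1 for r in rows], [c - 1 for c in cols]].det()

N15 = minor(X, [1, 2, 3], [3, 4, 5])      # the invariant N_{(1,5)}
for (i, j) in M:
    E = sp.zeros(n, n)
    E[i - 1, j - 1] = 1
    Xp = (sp.eye(n) + t * E) * X * (sp.eye(n) - t * E)   # Ad_{g_{i,j}(t)} x
    assert sp.expand(minor(Xp, [1, 2, 3], [3, 4, 5]) - N15) == 0
print('N_(1,5) is invariant under all g_{i,j}(t), (i,j) in M')
\end{verbatim}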
\thenv{Definition}{The \emph{remoteness} of a root $\gamma\in M$ is
the maximal number $s$ of roots $\gamma_i$ in $M$ such that
$\gamma=\gamma_1>\gamma_2>\ldots>\gamma_s$.}
\thenv{Example}{The remoteness of the root $(1,6)$ in
Example~\ref{Ex-2132} equals~5:
$$(1,6)>(1,5)>(2,5)>(3,5)>(3,4).$$}
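For small cases the remoteness is easy to compute mechanically. The
following Python sketch (the encoding of $M$, the order on roots, namely
$(i,j)\geqslant(i',j')$ iff $i\leqslant i'$ and $j\geqslant j'$, which
agrees with the chain displayed above, and all names are ad hoc) recovers
the value~$5$ for the root $(1,6)$ and the diagonal blocks $(2,1,3,2)$.
\begin{verbatim}
from functools import lru_cache

sizes = (2, 1, 3, 2)
R = [0]
for s in sizes:
    R.append(R[-1] + s)
n = R[-1]

def blk(i):
    return next(k for k in range(1, len(R)) if R[k - 1] < i <= R[k])

M = {(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
     if blk(i) < blk(j)}

def smaller(a, b):              # a < b: a[0] >= b[0], a[1] <= b[1], a != b
    return a != b and a[0] >= b[0] and a[1] <= b[1]

@lru_cache(maxsize=None)
def remoteness(root):           # longest chain root = g_1 > ... > g_s in M
    below = [g for g in M if smaller(g, root)]
    return 1 + max((remoteness(g) for g in below), default=0)

print(remoteness((1, 6)))       # prints 5, as in the example above
\end{verbatim}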
\thenv{Lemma\label{N_independ}}{\emph{The system of polynomials
$\{N_{\xi}, \ \xi\in T\}$ is algebraically independent over $K$}.}
\textsc{Proof.} Assume the converse, namely that the system
$\{N_{\xi}, \ \xi\in T\}$ is alge\-braically dependent. Hence there
is a polynomial $f$ such that for some $\xi_1,\ldots,\xi_k$ we have
$$f(N_{\xi_1},N_{\xi_2},\ldots,N_{\xi_k})=0.$$
Suppose that the degree of the polynomial $f$ is minimal. Let
$\xi_1$ be a root of maximal remoteness. If $\xi\in T$ has
remoteness $k$, then every root $\gamma\neq\xi$ whose variable
$x_{\gamma}$ occurs in the polynomial $N_{\xi}$ has remoteness smaller
than that of $\xi$. The variable $x_{\xi}$ is in the first row and the last
column of the minor $N_{\xi}$. Let us expand $N_{\xi}$ according to
the first row. We have $N_{\xi}=ax_{\xi}+b$ for some polynomials $a$
and $b$ and all variables in $a$ and $b$ correspond to the roots
with remoteness smaller than that of $\xi$. Hence the variable
$x_{\xi_1}$ occurs only in the minor $N_{\xi_1}$.
We have
$$0=f(N_{\xi_1},\ldots,N_{\xi_k})=$$
$$=f_m(N_{\xi_2},\ldots,N_{\xi_k})N_{\xi_1}^m+
f_{m-1}(N_{\xi_2},\ldots,N_{\xi_k})N_{\xi_1}^{m-1}+\ldots+
f_0(N_{\xi_2},\ldots,N_{\xi_k}).$$ Since $N_{\xi_1}=ax_{\xi_1}+b$
and $a\not\equiv0$, we conclude that the coefficient of the highest
power for the variable $x_{\xi_1}$ is
$f_m(N_{\xi_2},\ldots,N_{\xi_k})a^m$. Therefore
$$f_m(N_{\xi_2},\ldots,N_{\xi_k})=0.$$
This contradicts the minimality of $f$ and completes the
proof.~$\Box$
\sect{The algebra of $U$-invariants}
Let $\mathcal{Z}=\left\{\displaystyle\sum_{\xi\in
T}c_{\xi}E_{\xi}:c_{\xi}\in K\right\}$.
\thenv{Proposition\label{Exist_of_representative_for_U}}{\emph{There
exists a nonempty Zariski-open subset $V\subset\mathfrak{m}$ such
that for any $x\in V$ the $U$-orbit of the element $x$ intersects
$\mathcal{Z}$ at a unique point}.}
\textsc{Proof.} By Theorem \ref{Exist_of_representative} there
exists a nonempty Zariski-open subset $W\subset\mathfrak{m}$ such that for any
$x\in W$ there exists $g\in N$ satisfying
$\mathrm{Ad}_gx\in \mathcal{Y}$. Fix any $x\in W$ and an
element $g\in N$ corresponding to it. Since $N=U_L\ltimes U$, $g\in
N$ can be represented as the product $g=g_1g_2$, where $g_1\in U_L$
and $g_2\in U$. Then $g_1^{-1}g\in U$. Let us show that we can take
$V=W$ and $\mathrm{Ad}_{g_1^{-1}g}x\in \mathcal{Z}$.
Since $\mathcal{Y}\subset \mathcal{Z}$ and one-parameter subgroups
$g_{i,j}(t)=I+tE_{i,j}$, where
$(i,j)\in\Delta^{\!+}_{\mathfrak{r}}$, generate the group $U_L$, it
is enough to show that for any $g_{i,j}(t)\in U_L$ we have
$\mathrm{Ad}_{g_{i,j}(t)}\mathcal{Z}\subset\mathcal{Z}$. Suppose
$g_{i,j}(t)\in U_L$; then $(i,j)\in\Delta^{\!+}_{\mathfrak{r}}$.
This means that there exists $k$ such that $R_{k-1}<i<j\leqslant
R_k$. If some element is changed after the action of the
one-parameter subgroup $g_{i,j}(t)$, then this element is $(i,a)$ or
$(b,j)$ for some $a>i$ and $b<j$. In the first case the $j$th row is
added to the $i$th row
$$\mathrm{Ad}_{g_{i,j}(t)}x_{i,a}=x_{i,a}+tx_{j,a}.$$
We have that the variables $x_{(i,a)}$ and $x_{(j,a)}$ are in the
same block $X_{k,l}$. In the second case the $i$th column is added
to the $j$th column
$$\mathrm{Ad}_{g_{i,j}(t)}x_{b,j}=x_{b,j}-tx_{b,i}.$$
Similarly, the variables $x_{(b,j)}$ and $x_{(b,i)}$ are in the same
block $X_{m,k}$. By the definition of $T$, in the case $(i,a)$ we
have that if the root $(j,a)\in T$, then $(i,a)\in T$. This means
that the $g_{i,j}$-action does not change the set
$\mathcal{Z}=\displaystyle\sum_{\xi\in T}c_{\xi}E_{\xi}$. Similarly,
if $(b,i)\in T$, then $(b,j)\in T$.
By Lemmas~\ref{Th:N_xi-invariant} and~\ref{N_independ} any
$z\in\mathcal{Z}$ such that $N_{\xi}|_z\neq0$ for any $\xi\in T$ is
a representative of some $U$-orbit.~$\Box$
Let $\mathcal{S}$ be the multiplicative set of denominators generated by the
invariants $N_{\xi}$, $\xi\in T$. Denote by $K[\mathfrak{m}]^U_{\mathcal{S}}$
the localization of the algebra $K[\mathfrak{m}]^U$ at $\mathcal{S}$.
Let $$\pi:K[\mathfrak{m}]^U\rightarrow K[\mathcal{Z}]$$ be the
restriction homomorphism, $f\in K[\mathfrak{m}]^U\mapsto
f|_{\mathcal{Z}}$, where the algebra $K[\mathcal{Z}]$ is a
polynomial algebra of variables $x_{\xi}$, $\xi\in T$. Extend $\pi$
to the mapping
$\widetilde{\pi}:K[\mathfrak{m}]^U_{\mathcal{S}}\rightarrow
K[c_{\xi_1}^{\pm1},c_{\xi_2}^{\pm1},\ldots,c_{\xi_s}^{\pm1}],$ where
$\xi_1,\xi_2,\ldots,\xi_s$ are all roots in $T$.
\thenv{Proposition\label{K(m)^U}}{\emph{Let
$\{\xi_1,\xi_2,\ldots,\xi_s\}$ be a collection of roots of the broad
base} $T$. \emph{The mapping
$\widetilde{\pi}:K[\mathfrak{m}]^U_{\mathcal{S}}\rightarrow
K[c_{\xi_1}^{\pm1},c_{\xi_2}^{\pm1},\ldots,c_{\xi_s}^{\pm1}]$ is an
isomorphism and} $K[\mathfrak{m}]^U_{\mathcal{S}}=
K[N_{\xi_1}^{\pm1},N_{\xi_2}^{\pm1},\ldots,N_{\xi_s}^{\pm1}]$.}
\textsc{Proof.} Let us show that $\widetilde{\pi}$ is a
monomorphism. Indeed, if $f\in K[\mathfrak{m}]^U_{\mathcal{S}}$
satisfies $\widetilde{\pi}(f)=0$, then $f|_{\mathcal{Z}}=0$. By
Proposition~\ref{Exist_of_representative_for_U},
$\mathrm{Ad}_U\mathcal{Z}$ is dense in $\mathfrak{m}$, therefore
$f(\mathfrak{m})=0$. So $f\equiv0$ and $\widetilde{\pi}$ is a monomorphism.
To prove that $\widetilde{\pi}$ is an epimorphism, we will show that
for any $\xi\in T$ the element $c_{\xi}$ has a preimage in
$K[N_{\xi_1}^{\pm1},N_{\xi_2}^{\pm1},\ldots,N_{\xi_s}^{\pm1}]$. The
proof is by induction on the remoteness of $\xi$. Since for any root
$\xi\in M\setminus M'$ the polynomial $N_{\xi}=x_{\xi}$ is a
$U$-invariant, we have $\widetilde{\pi}(N_{\xi})=c_{\xi}$, and the base
of induction is evident. Suppose that for every root $\xi$ with remoteness
less than $k$ the element $c_{\xi}$ has a preimage in
$K[N_{\xi_1}^{\pm1},N_{\xi_2}^{\pm1},\ldots,N_{\xi_s}^{\pm1}]$. Let
us show the statement for $k$. Consider a relation $\prec$ on $T$,
defined by $\varphi_1\prec\varphi_2$ whenever $i_1>i_2$ and
$j_1<j_2$, where $\varphi_1=(i_1,j_1)$ and $\varphi_2=(i_2,j_2)$.
Let $\xi\in T$ have remoteness $k$; then
$$N_{\xi}=x_{\xi}\prod_{\varphi\prec\xi}N_{\varphi}+b,$$
where the product is taken over all roots $\varphi\prec\xi$ such that
$\varphi\in S$ and $\varphi$ is maximal in the sense of the relation
$\prec$. For Example~\ref{Ex-2132-T} we have
$$\prod_{\varphi\prec(1,6)}N_{\varphi}=N_{(2,3)}N_{(3,4)}.$$
Note that all variables $c_{\gamma}$ in the polynomial
$\widetilde{\pi}(b)$ correspond to the roots $\gamma$ with less
remoteness than the remoteness of $\xi$. Therefore by the induction
assumption, for all these roots $\gamma$ we have that $c_{\gamma}$
has a preimage in the localization
$K[N_{\xi_1}^{\pm1},N_{\xi_2}^{\pm1},\ldots,N_{\xi_s}^{\pm1}]$.
Hence there is a function $\phi(y_1,\ldots,y_s)\in
K[y_1^{\pm1},y_2^{\pm1},\ldots,y_s^{\pm1}]$ such that
$\widetilde{\pi}(b)=
\widetilde{\pi}\big(\phi(N_{\xi_1},\ldots,N_{\xi_s})\big)$. Then
$$\widetilde{\pi}^{-1}(c_{\xi})=
\frac{N_{\xi}-\phi(N_{\xi_1},\ldots,N_{\xi_s})}
{\displaystyle\prod_{\varphi\prec\xi}N_{\varphi}}\in
K[N_{\xi_1}^{\pm1},N_{\xi_2}^{\pm1},\ldots,N_{\xi_s}^{\pm1}].~\Box$$
\thenv{Theorem\label{Th-K[m]^U}}{\emph{The algebra of invariants
$K[\mathfrak{m}]^U$ is a polynomial algebra of} $N_{\xi}$, $\xi\in
T$.}
\textsc{Proof.} Let us show that for $L\in K[\mathfrak{m}]^U$ one
has
$$L\in K[N_{\xi_1},N_{\xi_2},\ldots,N_{\xi_s}],$$ where
$\{\xi_1,\xi_2,\ldots,\xi_s\}$ is a collection of roots of the broad
base $T$. By Proposi\-tion~\ref{K(m)^U}, there exists a polynomial
$f$ and integers $l_1,l_2,\ldots,l_k$ such that
\begin{equation}
L=\frac{f(N_{\xi_1},N_{\xi_2},\ldots,N_{\xi_k})}{\displaystyle
\prod_{i=1}^{k}N_{\xi_i}^{l_i}}.
\label{L=}\end{equation}
By induction on the number of factors $N_{\xi_i}$ in the denominator it is
sufficient to prove that if $LN_{\xi}\in
K[N_{\xi_1},\ldots,N_{\xi_s}]$ for some $\xi\in T$ and for some
$L\in K[\mathfrak{m}]$, then $L\in K[N_{\xi_1},\ldots,N_{\xi_s}]$.
We fix a root $\xi$. Suppose $\xi=(i,j)$ and consider the case
$\xi\in M'$. If some root $\gamma$ in the broad base $T$ has the
form $(i-1,b)$ for some $b>j$, then denote $\mu_{\gamma}=(a,b)$ for
some $a>i$ such that $\mu_{\gamma}\not\in T$. If $\gamma=(a,j+1)$
for some $a<i-1$, then denote $\mu_{\gamma}=(a,b)$ for some $b<j$
such that $\mu_{\gamma}\not\in T$. For the other roots $\gamma\in T$
and in the case $\xi\not\in M'$ we have $\mu_{\gamma}=\gamma$.
The existence of this root $\mu_\gamma$ in the case
$\mu_\gamma\neq\gamma$ is explained as follows. Since $\xi\in M'$,
then $x_\xi$ is in the block $X_{k,m}$ for some $k,m$ with $m>k+1$.
Evidently, the roots $(R_k,R_k+1)$ and $(R_{m-1},R_{m-1}+1)$ are
minimal in $M$ and belong to $S$. By definition of $T$, we have
$(R_k,u)\not\in T$ and $(v,R_{m-1}+1)\not\in T$ for $u\geqslant j$
and $v\leqslant i$. These roots can be chosen for $\mu_\gamma$.
\thenv{Example}{Let us take the root $\xi=(2,7)$. The symbol
$\bullet$ marks this root on the diagram. The roots
$\mu_\gamma=\gamma$ in $T$ are pointed out by the
symbol~$\boxtimes$. The single root $\mu_{\gamma}\neq\gamma$ is
marked by~$\odot$.}
\begin{center}\refstepcounter{diagram}
{\begin{tabular}{|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|c}
\multicolumn{2}{l}{{\small 1\quad 2\!\!}}&\multicolumn{2}{l}{{\small
3\quad 4\!\!}}&\multicolumn{2}{l}{{\small 5\quad 6\!\!}}&
\multicolumn{2}{l}{{\small 7\quad 8\!\!}}\\
\cline{1-8} \multicolumn{3}{|l|}{1}&$\!\boxtimes$&&&$\!\boxtimes$&&{\small 1}\\
\cline{4-8} \multicolumn{3}{|c|}{1}&$\!\boxtimes$&&&$\!\bullet$&$\!\boxtimes$&{\small 2}\\
\cline{4-8} \multicolumn{3}{|l|}{\qquad\ \ 1}&$\!\boxtimes$&&&&$\!\odot$&{\small 3}\\
\cline{1-8} \multicolumn{3}{|c|}{}&1&$\!\boxtimes$&&&&{\small 4}\\
\cline{4-8} \multicolumn{4}{|c|}{}&1&$\!\boxtimes$&$\!\boxtimes$&$\!\boxtimes$&{\small 5}\\
\cline{5-8} \multicolumn{5}{|c|}{}&\multicolumn{3}{|l|}{1}&{\small 6}\\
\multicolumn{5}{|c|}{}&\multicolumn{3}{|c|}{1}&{\small 7}\\
\multicolumn{5}{|c|}{}&\multicolumn{3}{|r|}{1}&{\small 8}\\
\cline{1-8} \multicolumn{8}{c}{Diagram \arabic{diagram}}\\
\end{tabular}}
\end{center}
Consider a set of matrices
$$A=\left\{\sum_{\gamma\in T}c_{\mu_\gamma}E_{\mu_\gamma},
\mbox{ where the }c_{\mu_\gamma}
\mbox{ are such that }N_{\gamma}|_A\neq0\mbox{ for }\gamma\neq\xi\mbox{
and }N_{\xi}|_A=0\right\}.$$
Consider a subset $\mathcal{X}=\{(N_{\xi_1}|_A,\ldots,
N_{\xi_s}|_A)\}$ of the vector space $K^{s}$. Evidently,
$\mathcal{X}\subset\mathrm{Ann}\,N_{\xi}$. Let us show that the
system of polynomials $$\{N_{\gamma}|_A,\ \gamma\neq\xi\}$$ is
algebraically independent. The proof is by induction on the number
of roots. Since we have $N_{\gamma}|_A=x_{\gamma}$ for any
$\gamma\in M\setminus M'$ and
$N_{\gamma}|_A=N_{\gamma}|_\mathcal{Z}$ for any $\gamma<\xi$, the
set $B=\{\gamma\in T:\ \gamma<\xi\}\cup (M\setminus M')$ is the base
of induction. Suppose that $B\subset T$ is a subset such that for
any root $\gamma\in B$ of maximal remoteness and for any
$\eta\in T$ with $\mu_{\eta}<\mu_{\gamma}$ we have $\eta\in B$.
Suppose that the polynomials $N_{\gamma}|_A$, $\gamma\in B$, are
algebraically independent. Let $\gamma\in T\setminus B$, $\gamma\neq\xi$, be
a root such that there is no $\eta\in T\setminus B$ such that
$\mu_{\eta}<\mu_{\gamma}$. Then $N_{\gamma}|_A=ax_{\mu_{\gamma}}+b$,
where polynomials $a,b$ depend on variables $x_{\mu_{\eta}}$,
$\eta<\gamma$. Therefore there is a single polynomial consisting the
variable $x_{\mu_{\eta}}$ in the list
$\{N_{\gamma}|_A,N_{\eta}|_A,\eta\in B\}$. Using the induction
hypothesis, $N_{\gamma}|_A$ and $N_{\eta}|_A$, where $\eta\in B$,
are algebraically independent.
Denote $\mathcal{I}_\mathcal{X}=\{\varphi\in
K[y_{\xi_1},\ldots,y_{\xi_s}]:\ \varphi(\mathcal{X})=0\}$ and
$\mathcal{I}=<y_{\xi}>$. Now let us prove that
$\mathcal{I}_\mathcal{X}=\mathcal{I}$. Obviously,
$\mathcal{I}_\mathcal{X}\supset\mathcal{I}$, hence
$\mathcal{X}\subset\mathrm{Ann}\,\mathcal{I}$. Since the dimension
of $\mathrm{Ann}\,\mathcal{I}$ is the degree of transcendence of the
algebra $K[y_{\xi_1},\ldots,y_{\xi_s}]/\mathcal{I}$ over the main
field $K$, we have
$$\dim\,\mathrm{Ann}\,\mathcal{I}=
\mathrm{degtr}_KK[y_{\xi_1},\ldots,y_{\xi_s}]/\mathcal{I}=s-1,$$
$$\dim\,\mathcal{X}=s-1.$$
Therefore, $\mathrm{Ann}\,\mathcal{I}=\overline{\mathcal{X}}$.
Suppose $g\in\mathcal{I}_\mathcal{X}$, then there exists $m\in
\mathbb{N}$ such that $g^m\in\mathcal{I}$ by Hilbert's
Nullstellensatz. Since $\mathcal{I}$ is a prime ideal, we obtain
$g\in \mathcal{I}$. This means
$\mathcal{I}_\mathcal{X}=\mathcal{I}=\langle y_{\xi}\rangle$. To conclude the
proof, it remains to note that there exists a polynomial
$p=p(y_{\xi_1},\ldots,y_{\xi_s})$ such that
$$LN_{\xi}=N_{\xi}p(N_{\xi_1},\ldots,N_{\xi_s}).$$
Finally, we have $L\in K[N_{\xi_1},\ldots,N_
{\xi_s}]$.~$\Box$
By~\cite{Kh} one has
\thenv{Theorem (Khadzhiev)\label{Th-Khadzhiev}}{\emph{Let $H$ be a
connected reductive group and $U$ its maximal unipotent subgroup}.
\emph{Then for any finitely generated algebraic} $H$-\emph{algebra
$A$ the algebra $A^U$ is finitely generated}.}
\thenv{Corollary}{\emph{The algebra of invariants
$K[\mathfrak{m}]^N$ is finitely generated}.}
\textsc{Proof.} By Theorem~\ref{Th-K[m]^U}, the algebra of
invariants $A=K[\mathfrak{m}]^U$ is finitely generated, and it is an
algebraic $L$-algebra, where $L$ is the Levi subgroup of the parabolic
group $P$. Therefore the algebra of invariants
$$A^{U_L}=\big(K[\mathfrak{m}]^U\big)^{U_L}=K[\mathfrak{m}]^N$$
under the adjoint action of $U_L$, the maximal unipotent subgroup of the
reductive group $L$, is finitely generated too, by Khadzhiev's theorem.~$\Box$
\textbf{Acknowledgments.} \emph{I am very grateful to the referees
for their comments and recommendations which helped me to improve
the presentation. I would also like to thank Anna Melnikov and
Alexander Panov for discussions.}
\textsc{Department of Mathematics, University of Haifa, Israel}\\
\textsc{Department of Mechanics and Mathematics, Samara University,
Russia}\\ \emph{E-mail address}: \verb"[email protected]"
\end{document}
\begin{document}
\title{In the Frame\thanks{The author's research is supported by NSERC discovery grant RGPIN-03882.}}
\author[1,2]{Douglas R.\ Stinson}
\affil[1]{David R.\ Cheriton School of Computer Science\\ University of Waterloo\\
Waterloo, Ontario, N2L 3G1, Canada
}
\affil[2]{School of Mathematics and Statistics\\
Carleton University\\
Ottawa, Ontario, K1S 5B6, Canada}
\date{
\today
}
\maketitle
\begin{abstract}
In this expository paper, I survey Room frames and Kirkman frames, concentrating on the early history of these objects. I mainly look at the basic construction techniques, but I also provide some historical remarks and discussion. I also briefly discuss some other types of frames that have been investigated as well as some applications of frames to the construction of other types of designs.
\end{abstract}
\section{Room Squares and Room Frames}
\label{intro.sec}
I begin by briefly discussing the Room square problem. I refer to the 1992 survey by Dinitz and Stinson \cite{DS92a} for a thorough summary of the history of Room squares and related designs up to that time.
\begin{definition}
A \emph{Room square of side $n$} is an $n$ by $n$ array, $F$, on a set $S$ of $n+1$ symbols, that satisfies the following properties:
\begin{enumerate}
\item every cell of $F$ either is empty or contains an unordered pair of symbols from $S$,
\item each symbol in $S$ occurs in exactly one cell in each row and each column of $F$, and
\item every unordered pair of symbols occurs in exactly one cell of $F$.
\end{enumerate}
\end{definition}
A Room square of side seven is presented in Figure \ref{RS-7}.
It is clear that $n$ must be an odd positive integer if a Room square of side $n$ exists. Although Room squares have been studied since the 1850's, it was not until 1974 that a complete existence result was given by Wallis \cite{Wa74}, as a culmination of work by Mullin, Nemeth, Wallis and others. See \cite{MW75} for a short self-contained proof of the existence of Room squares.
\begin{theorem} There exists a Room square of side $n$ if and only if $n$ is an odd positive integer and $n \neq 3,5$.
\end{theorem}
\begin{figure}
\caption{A Room square of side seven}
\label{RS-7}
\end{figure}
\begin{figure}
\caption{A Room frame of type $2^5$}
\label{type2^5}
\end{figure}
\begin{figure}
\caption{A Room frame of type $2^6$}
\label{type2^6}
\end{figure}
In this paper, I am interested in a generalization of a Room square called a Room frame. Here is a definition.
\begin{definition}
Let $t$ and $u$ be positive integers and let $S$ be a set of $tu$ symbols. Suppose that $S$ is partitioned into $u$ sets of size $t$, denoted $S_i$, $1 \leq i \leq u$. A \emph{Room frame of type $t^u$} is a $tu$ by $tu$ array, $F$, that satisfies the following properties:
\begin{enumerate}
\item the rows and columns of $F$ are indexed by $S$,
\item every cell of $F$ either is empty or contains an unordered pair of symbols from $S$,
\item the subarrays of $F$ indexed by $S_i \times S_i$ are empty, for $1 \leq i \leq u$ (these empty subarrays are called \emph{holes}),
\item if $s \in S_i$, then row or column $s$ contains each symbol in $S \setminus S_i$ exactly once, and
\item the unordered pairs of symbols occurring in $F$ are precisely the pairs
$\{x,y\}$, where $x, y$ are in different $S_i$'s (each such pair occurs in one cell of $F$).
\end{enumerate}
\end{definition}
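Small examples of Room frames are easy to check by computer. The following
Python sketch is one way to do this; the representation (a dictionary mapping
each filled cell $(r,c)$ to a two-element frozenset, together with the list of
holes $S_1,\ldots,S_u$) and all names are ad hoc and not taken from any of the
papers cited here.
\begin{verbatim}
from collections import Counter
from itertools import combinations

def is_room_frame(cells, partition):
    """cells: dict mapping filled cells (r, c) to frozensets {x, y};
    partition: list of the holes S_1, ..., S_u (sets of symbols)."""
    S = set().union(*partition)
    hole = {s: i for i, Si in enumerate(partition) for s in Si}
    # rows/columns indexed by S; filled cells contain pairs of symbols from S
    if not all(r in S and c in S and len(p) == 2 and p <= S
               for (r, c), p in cells.items()):
        return False
    # the subarrays indexed by S_i x S_i are empty (the holes)
    if any(hole[r] == hole[c] for (r, c) in cells):
        return False
    # row s and column s contain each symbol of S \ S_i exactly once
    for s in S:
        need = Counter(S - partition[hole[s]])
        row = Counter(x for (r, c), p in cells.items() if r == s for x in p)
        col = Counter(x for (r, c), p in cells.items() if c == s for x in p)
        if row != need or col != need:
            return False
    # the pairs occurring are exactly the cross-hole pairs, each exactly once
    occurring = Counter(cells.values())
    for x, y in combinations(S, 2):
        expected = 0 if hole[x] == hole[y] else 1
        if occurring[frozenset({x, y})] != expected:
            return False
    return True
\end{verbatim}
Any of the small Room frames shown in this paper can be fed to this function
once their filled cells have been transcribed.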
Room frames of types $2^5$ and $2^6$ are depicted in Figures \ref{type2^5} and \ref{type2^6}, respectively.
The Room frames defined above are \emph{uniform}, which means that all the holes have the same size. Non-uniform Room frames have also received much study, but I mainly focus on the uniform case in this paper.
One initial observation is that a Room square of side $n$ is equivalent to a Room frame of type $1^n$. Suppose $F$ is a Room square of side $n$. Pick a particular symbol $x$ and permute the rows and columns of $F$ so the cells containing $x$ are precisely the cells on the main diagonal (such a Room square is said to be \emph{standardized}). Then delete the pairs in these cells. Conversely, given a Room frame of type $1^n$, we can introduce a new symbol $x$ and place the pair $\{x,s\}$ in the cell $(s,s)$ for all $s$.
For example, suppose we start with the Room square of side seven that was presented in Figure \ref{RS-7}.
This Room square is already standardized, so we simply remove the pairs in the diagonal cells to construct a Room frame of type $1^7$. This Room frame is presented in Figure \ref{type1^7}.
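In the same ad hoc dictionary representation, the passage from a Room square
to a Room frame of type $1^n$ can be written in a few lines. The sketch below
simply relabels the rows and columns by the partner of the chosen symbol $x$,
which is exactly the standardization step described above, and then deletes
the cells that contained $x$.
\begin{verbatim}
def square_to_frame(cells, x):
    """Room square (dict of filled cells -> frozenset pairs) and a symbol x;
    returns a Room frame of type 1^n on the remaining symbols."""
    row_label, col_label = {}, {}
    for (r, c), p in cells.items():
        if x in p:
            (y,) = p - {x}          # the partner of x in this cell
            row_label[r] = y        # standardization: this cell becomes (y, y)
            col_label[c] = y
    # relabel rows and columns and delete the cells that contained x
    return {(row_label[r], col_label[c]): p
            for (r, c), p in cells.items() if x not in p}
\end{verbatim}
The holes of the resulting frame are the singletons $\{s\}$, $s \neq x$.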
\begin{figure}
\caption{A Room frame of type $1^7$}
\label{type1^7}
\end{figure}
Perhaps surprisingly, there is no Room frame of type $2^4$ (this was shown by exhaustive case analysis in \cite{St81a}). There is also
no Room frame of type $1^5$, since such a structure would be equivalent to a (nonexistent) Room square of side five. It is also clear that $u \geq 4$ and $t(u - 1)$ is even if a Room frame of type $t^u$ exists. The following theorem gives complete existence results for uniform Room frames. This result is the culmination of many papers spanning a time period of $30$ years.
\begin{theorem}
\label{uniformRoomframe}
\cite{DS81,DL93,DSZ94,DW10,GZ93}
There exists a Room frame of type $t^u$ if and only if $u \geq 4$, $t(u - 1)$ is even,
and $(t, u) \neq (1, 5)$ or $(2, 4)$.
\end{theorem}
I now define a special type of Room frame that has received considerable attention.
\begin{definition}
\label{skew.defn}
Suppose $S$ is a symbol set that is partitioned into $u$ sets of size $t$, denoted $S_i$, $1 \leq i \leq u$. A Room frame $F$ of type $t^u$ is \emph{skew} if, for any $r$ and $s$ where $r$ and $s$ are not in the same $S_i$, precisely one of the two cells $F(r,s)$ and $F(s,r)$ is empty. This skew definition also applies to Room squares.
\end{definition}
Skew Room frames are more difficult to construct than ``ordinary'' (i.e., non-skew) Room frames.
In the case of skew Room squares, the following result was shown in 1981.
\begin{theorem}
\cite{St81c}
\label{srs.thm}
There exists a skew Room square of side $n$ if and only if $n$ is an odd positive integer and $n \neq 3,5$.
\end{theorem}
Theorem \ref{srs.thm} was the consequence of a long series of papers by a number of different authors.
The paper \cite{St81c} gives a short proof of Theorem \ref{srs.thm} that is based on skew Room frames.
For skew Room frames of type $t^u$, the following theorem is the current state of knowledge. Note that this theorem
generalizes Theorem \ref{srs.thm}, which is the special case $t=1$.
\begin{theorem}
\cite{St87a,CZ96,ZG07}
There exists a skew Room frame of type $t^u$ if and only if $u \geq 4$, $t(u - 1)$ is even,
and $(t, u) \neq (1, 5)$ or $(2, 4)$, with the following possible exceptions:
\begin{enumerate}
\item $u = 4$ and $t \equiv 2 \pmod{4}$;
\item $u = 5$ and $t \in \{17, 19, 23, 29, 31\}$.
\end{enumerate}
\end{theorem}
\section{Constructions for Room Frames}
\subsection{Orthogonal Starters}
Constructions for Room frames come in two flavours: direct and recursive.
The main direct construction is based on orthogonal frame starters. (It is also possible to construct ``random-looking'' examples of Room squares and Room frames using hill-climbing algorithms; see \cite{DS87,DS92a}.) I briefly discuss the method of orthogonal (frame) starters in this section. For more information on frame starters and orthogonal frame starters, see the recent survey \cite{St22}. Note that all of our definitions will refer to additive abelian groups.
\begin{definition}
Let $G$ be an abelian group of order $g$ and let $H$ be a subgroup of order $h$. A \emph{frame starter} in $G \setminus H$ is a set of $(g-h)/2$ pairs $\{ \{x_i,y_i \} : 1 \leq i \leq (g-h)/2\}$ that satisfies the following two properties:
\begin{enumerate}
\item $\{ x_i, y_i : 1 \leq i \leq (g-h)/2 \} = G \setminus H$.
\item $\{ \pm (x_i-y_i) : 1 \leq i \leq (g-h)/2 \} = G \setminus H$.
\end{enumerate}
\end{definition}
In other words, the pairs in the frame starter form a partition of $G \setminus H$, and the differences obtained from these pairs also partition $G \setminus H$.
\begin{definition}
Suppose that $S_1 = \{ \{x_i,y_i \} : 1 \leq i \leq (g-h)/2\}$ and $S_2 = \{ \{u_i,v_i \} : 1 \leq i \leq (g-h)/2\}$
are both frame starters in $G \setminus H$. Without loss of generality, assume that
$y_i - x_i = v_i - u_i$ for $1 \leq i \leq (g-h)/2$. We say that $S_1$ and $S_2$ are
\emph{orthogonal} if the following two properties hold:
\begin{enumerate}
\item $y_i - v_i \not\in H$ for $1 \leq i \leq (g-h)/2$.
\item $y_i - v_i \neq y_j - v_j$ if $1 \leq i,j \leq (g-h)/2$, $i \neq j$.
\end{enumerate}
\end{definition}
Thus, when we match up the pairs in $S_1$ and $S_2$ according to their differences, the ``translates'' are distinct elements of $G \setminus H$. These translates are often called an \emph{adder}.
\begin{example}
Suppose $G ={\ensuremath{\mathbb{Z}}}_{10}$ and $H = \{0,5\}$. Here are two orthogonal frame starters in $G\setminus H$:
\[
\begin{array}{l}
S_1 = \{ \{3,4\}, \{7,9\}, \{8,1\}, \{2,6\} \} \\
S_2 = \{ \{6,7\}, \{1,3\}, \{9,2\}, \{4,8\} \}.
\end{array}
\]
\end{example}
\begin{example}
Suppose $G ={\ensuremath{\mathbb{Z}}}_{7}$ and $H = \{0\}$. Here are two orthogonal frame starters in $G\setminus H$:
\[
\begin{array}{l}
S_1 = \{ \{3,4\}, \{2,5\}, \{1,6\} \} \\
S_2 = \{ \{2,3\}, \{5,1\}, \{6,4\} \}.
\end{array}
\]
\end{example}
\begin{example}
\label{z15.exam}
Suppose $G ={\ensuremath{\mathbb{Z}}}_{15}$ and $H = \{0,5,10\}$. Here are two orthogonal frame starters in $G\setminus H$:
\[
\begin{array}{l}
S_1 = \{ \{1,2\}, \{9,11\}, \{3,6\}, \{8,12\}, \{13,4\}, \{7,14\} \} \\
S_2 = \{ \{2,3\}, \{11,13\}, \{9,12\} , \{4,8\}, \{1,7\}, \{14,6\}\}.
\end{array}
\]
\end{example}
\begin{example}
\label{z4z4.exam}
Suppose $G ={\ensuremath{\mathbb{Z}}}_{4} \times {\ensuremath{\mathbb{Z}}}_{4}$ and $H = \{(0,0), (0,2), (2,0), (2,2)\}$. Here are two orthogonal frame starters in $G\setminus H$:
\[
\begin{array}{ll}
S_1 = &\{ \{(1,1),(3,2)\}, \{(3,0),(3,1)\}, \{(2,1),(3,3)\}, \{(0,3),(1,3)\},\\
& \{(1,0),(2,3)\}, \{(0,1),(1,2)\} \} \\
S_2 = &\{ \{(1,2),(3,3)\}, \{(1,3),(1,0)\}, \{(1,1),(2,3)\}, \{(3,1),(0,1)\},\\
& \{(2,1),(3,0)\}, \{(3,2),(0,3)\} \} \\
\end{array}
\]
\end{example}
Orthogonal frame starters can be used to construct a Room frame of the relevant type. The resulting Room frame has $G$ in its automorphism group. In the case where $|H| = 1$, we have a \emph{starter}. Orthogonal starters can be used to generate Room squares.
Suppose that $S_1 = \{ \{x_i,y_i \} : 1 \leq i \leq (g-h)/2\}$ and $S_2 = \{ \{u_i,v_i \} : 1 \leq i \leq (g-h)/2\}$
are orthogonal frame starters in $G \setminus H$, where $y_i - x_i = v_i - u_i$ for $1 \leq i \leq (g-h)/2$.
$S_1$ and $S_2$ are \emph{skew-orthogonal} if $y_i - v_i \neq -(y_j - v_j)$ whenever $1 \leq i,j \leq (g-h)/2$ and $i \neq j$. Equivalently, the set of adders and their negatives is precisely $G \setminus H$. It can be verified that all four examples of orthogonal frame starters presented above are in fact skew-orthogonal. As one would suspect, skew-orthogonal starters give rise to skew Room frames (e.g., see Figure \ref{type1^7}).
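These conditions are easy to test by machine. The following Python sketch
checks the frame starter, orthogonality and skew-orthogonality conditions for
the ${\ensuremath{\mathbb{Z}}}_{10}$ example given earlier; the small
\texttt{adder} function matches the pairs of the two starters by their
differences, and all names are ad hoc.
\begin{verbatim}
def differences(pairs, n):
    return {d % n for x, y in pairs for d in (y - x, x - y)}

def is_frame_starter(pairs, n, H):
    elements = [z for p in pairs for z in p]
    G_minus_H = set(range(n)) - set(H)
    return (set(elements) == G_minus_H and len(elements) == len(G_minus_H)
            and differences(pairs, n) == G_minus_H)

def adder(S1, S2, n):
    # match the pairs of S1 and S2 by their differences; return the translates
    by_diff = {}
    for u, v in S2:
        by_diff[(v - u) % n] = v
        by_diff[(u - v) % n] = u
    return [(y - by_diff[(y - x) % n]) % n for x, y in S1]

def are_orthogonal(S1, S2, n, H):
    A = adder(S1, S2, n)
    return len(set(A)) == len(A) and not set(A) & set(H)

def are_skew_orthogonal(S1, S2, n, H):
    A = adder(S1, S2, n)
    return set(A) | {(-a) % n for a in A} == set(range(n)) - set(H)

n, H = 10, {0, 5}
S1 = [(3, 4), (7, 9), (8, 1), (2, 6)]
S2 = [(6, 7), (1, 3), (9, 2), (4, 8)]
print(is_frame_starter(S1, n, H), is_frame_starter(S2, n, H))  # True True
print(are_orthogonal(S1, S2, n, H))                            # True
print(are_skew_orthogonal(S1, S2, n, H))                       # True
\end{verbatim}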
It has been proven that orthogonal frame starters cannot be used to construct a Room frame of type $2^6$ (in fact, there is no frame starter in $G \setminus H$ when $|G| = 12$ and $|H| = 2$). However, a modification known as \emph{orthogonal intransitive frame starters} permits the construction of these (and many other useful) Room frames. The Room frame depicted in Figure \ref{type2^6} illustrates the basic idea. The ten by ten square in the upper left of the diagram is developed modulo $10$, similar to a Room frame obtained from orthogonal frame starters, except that there are two additional fixed points, denoted $A$ and $B$. The last two rows and the last two columns are developed modulo $10$, and there is a hole containing the two fixed points. We do not give a formal definition, but the associated orthogonal intransitive frame starters, denoted by the quadruple $(S_1,C,S_2,R)$, are defined on
${\ensuremath{\mathbb{Z}}}_{10} \cup \{A,B\}$ and they are obtained from the first row and the first column of the Room frame. Note that $R$ and $C$ refer to the last two rows and the last two columns of the Room frame, respectively.
\[
\begin{array}{l}
S_1 = \{ \{7,9\}, \{1,2\}, \{6,A\}, \{3,B\} \} \\
C = \{ \{4,8\} \}\\
S_2 = \{ \{6,8\}, \{3,4\}, \{7,A\}, \{1,B\} \}\\
R = \{ \{2,9\} \}.
\end{array}
\]
\subsection{Existence of Uniform Room Frames}
I now discuss some aspects of the proof of Theorem \ref{uniformRoomframe}.
The cases $t=1$ are of course equivalent to Room squares. Thus there exists a Room frame of type $1^u$ if and only if $u$ is odd and $u \geq 7$. The most important cases in establishing the general existence result (Theorem \ref{uniformRoomframe}) are when $t=2$ or $t=4$. I examine these cases now.
\subsubsection{The Case $t=2$}
Since a Room frame of type $2^4$ does not exist, the goal was to prove that there is a Room frame of type $2^u$ for all $u \geq 5$. Jeff Dinitz and I considered this problem in detail in \cite{DS81}.
Our approach used pairwise balanced designs (PBDs). A \emph{pairwise balanced design} is a pair
$(X,{\ensuremath{\mathcal{A}}})$, where $X$ is a finite set of \emph{points} and ${\ensuremath{\mathcal{A}}}$ is a set of subsets of $X$ (called \emph{blocks}), with the property that every pair of points is contained in a unique block. Assume that $|A| > 1$ for every $A \in {\ensuremath{\mathcal{A}}}$. We say that $(X,{\ensuremath{\mathcal{A}}})$ is a $(v,K)$-PBD if $|X| = v$ and $|A| \in K$ for every $A \in {\ensuremath{\mathcal{A}}}$. Also, a set $K$ is \emph{PBD-closed} if $v \in K$ whenever there exists a $(v,K)$-PBD. Finally, if $K = \{k\}$, then a $(v,K)$-PBD is a
$(v,k,1)$-BIBD (i.e., a \emph{balanced incomplete block design}).
For any fixed integer $t$, the set $U_t = \{ u : \text{ there exists a Room frame of type } t^u\}$ is PBD-closed.
Thus, the natural approach is to construct a sufficient number of ``small'' Room frames by various appropriate techniques and then appeal to PBD-closure. For PBDs with block sizes not less than five, the following classical result due to Hanani is relevant.
\begin{theorem}\cite[Lemma 5.18]{Han75}
Let
\[K_{\geq 5} = \{ 5, 6, \dots, 20, 22,23,24,27,28,29,32,33,34,39\}.\]
Then, for all $u \geq 5$, there is a $(u,K_{\geq 5})$-PBD.
\end{theorem}
Therefore, if we can construct Room frames of type $2^u$ for all $u \in K_{\geq 5}$, we can then conclude that there is a Room frame of type $2^u$ for all $u \geq 5$.
We took the following approach in \cite{DS81}. First, we have already exhibited a Room frame of type $2^5$ in Figure \ref{type2^5}. For odd values $u \in K_{\geq 5}$ with $u > 5$, we made use of a
``doubling construction'' which creates a Room frame of type $2^u$ from a skew Room square of side $u$.
The idea is as follows:
\begin{construction}[Doubling Construction]
{\rm \quad \\
\begin{enumerate}
\item We construct a Room frame, say $F$, of type $1^u$ from a skew Room square of side $u$. Suppose the holes are
$\{i\}$, for $0 \leq i \leq u-1$.
\item Construct a second Room frame, say $F'$, of type $1^u$ by transposing $F$ and renaming each symbol $i$ as $i'$; its holes are $\{i'\}$, for $0 \leq i \leq u-1$. Superimpose $F$ and $F'$.
\item Construct a pair of latin squares, say $L$ and $L'$, of order $u$, having a common transversal. Assume that $L$ is on symbols $\{i\}$, for $0 \leq i \leq u-1$, and $L'$ is on symbols $\{i'\}$, for $0 \leq i \leq u-1$. Superimpose $L$ and $L'$. Assume the common transversal is $(i,i')$ for $0 \leq i \leq u-1$.
\item Construct an array of side $2u$ in which the superposition of $F$ and $F'$ is in the top left corner and the superposition of $L$ and $L'$ is in the bottom right corner.
Finally, remove the common transversal from $L$ and $L'$.
The result is the desired Room frame of type $2^u$.
\end{enumerate}
}
\end{construction}
\begin{example}
\label{doubling.exam}
We construct a Room frame of type $2^7$ from a skew Room square of side seven in Figure \ref{type2^7}. One of the holes is indicated in grey.
\begin{figure}
\caption{A Room frame of type $2^7$}
\label{type2^7}
\end{figure}
\end{example}
For values $u \in K_{\geq 5}$ that are divisible by four, we constructed orthogonal frame starters in ${\ensuremath{\mathbb{Z}}}_{2u} \setminus \{0,u\}$ to obtain the relevant Room frames. The remaining values $u \in K_{\geq 5}$, for which $u \equiv 2 \bmod 4$, were handled by orthogonal intransitive frame starters.
\subsubsection{The Case $t=4 $}
The next cases to consider are for $t = 4$. This is similar to the $t=2$ case but a bit easier, because a Room frame of type $4^4$ exists while a Room frame of type $2^4$ does not exist. It is possible to use another PBD result due to Hanani:
\begin{theorem}\cite[Lemma 5.10]{Han75}
Let
\[K_{\geq 4} = \{ 4,5, \dots, 12, 14, 15,18,19,23,27\}.\]
Then, for all $u \geq 4$, there is a $(u,K_{\geq 4})$-PBD.
\end{theorem}
It is required to find a small number of Room frames of type $4^u$. The following standard result will be useful in the subsequent discussion.
\begin{theorem}[Inflation by MOLS]
\label{inflation.thm}
If there exists a Room frame of type $t^u$ and $s \neq 2,6$ is a positive integer, then there is a Room frame of type $(st)^u$.
\end{theorem}
The idea is to take $s$ copies of each point and replace each filled cell of the Room frame of type $t^u$ by an appropriate pair of orthogonal latin squares of side $s$.
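In the dictionary representation used earlier, the inflation is a direct
translation of this idea. The sketch below assumes, purely for convenience,
that $s$ is an odd prime, so that the two latin squares $L_a(i,j)=ai+j \bmod
s$, $a=1,2$, form a pair of orthogonal latin squares of order $s$; any pair of
MOLS of order $s$ would do, and nothing else here is taken from the cited
papers.
\begin{verbatim}
def inflate(cells, partition, s):
    """Room frame of type t^u (cells, partition) -> frame of type (st)^u."""
    L1 = lambda i, j: (i + j) % s          # two orthogonal latin squares of
    L2 = lambda i, j: (2 * i + j) % s      # order s (s an odd prime assumed)
    new_cells = {}
    for (r, c), pair in cells.items():
        a, b = tuple(pair)
        for i in range(s):
            for j in range(s):
                new_cells[((r, i), (c, j))] = frozenset({(a, L1(i, j)),
                                                         (b, L2(i, j))})
    new_partition = [{(x, k) for x in Si for k in range(s)}
                     for Si in partition]
    return new_cells, new_partition
\end{verbatim}
Combined with the \texttt{is\_room\_frame} sketch above, this gives a quick
sanity check of small instances of Theorem \ref{inflation.thm}.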
Let us return to our analysis of Room frames of type $4^u$. For $u$ odd, $u \geq 7$, we can start with a Room square of side $u$ (i.e., a Room frame of type $1^u$) and apply Theorem \ref{inflation.thm} (inflation by MOLS) with $s=4$.
For $u=4$, see Example \ref{z4z4.exam}. The case $u=5$ is a special case of a finite field starter-based construction from \cite{DS80}. For $u = 8,10,12,14$ and $18$, strong frame starters in cyclic groups yield the desired Room frames (see \cite{DS81}). The last case is $u=6$, which was handled in \cite{DS81} using orthogonal intransitive frame starters.
\subsubsection{General Values of $t$}
Given the existence results for $t = 1$, $2$ and $4$ that are discussed above, we can handle most other values of $t$ by using Theorem \ref{inflation.thm} (inflation by MOLS). Starting with Room frames of type $2^u$ (for $u \geq 5$) and $4^u$ (for $u \geq 5$), we immediately get Room frames of type $t^u$ for all even $t$ and all $u \geq 5$. Similarly, starting with Room frames of type $1^u$ (for $u$ odd, $u \geq 7$), we obtain Room frames of type $t^u$ for all odd $t$ and all odd $u$, $u \geq 7$.
The remaining cases are Room frames of type $t^4$ (for even $t$) and $t^5$ (for odd $t$).
As of 1981, Room frames of types $4^4$ and $8^4$ had been constructed using orthogonal frame starters. In conjunction with Theorem \ref{inflation.thm}, this showed that Room frames of type $t^4$ exist for all $t \equiv 0 \bmod 4$ (see \cite{DS81}).
Also, by 1981, Room frames of types $2^5$, $3^5$, $5^5$ and $7^5$ had been constructed using orthogonal frame starters. Using Theorem \ref{inflation.thm}, this showed that Room frames of type $t^5$ exist for all $t$ such that $\gcd(t, 210) > 1$ (see \cite{DS81}).
It was several years until these results were improved.
However, by the early 1990's, Room frames of types $6^4$ and $10^4$ were constructed (see \cite{DS93}) using the hill-climbing algorithm described in \cite{DS87}, suitably modified to construct Room frames. Using Theorem \ref{inflation.thm}, it followed that Room frames of type $t^4$ exist for all $t$ divisible by $4$, $6$ or $10$. So all the remaining unknown cases for type $t^4$ had $t \equiv 2,10 \bmod 12$.
A major advance was due to Ge and Zhu \cite{GZ93} in 1993, who utilized a sophisticated construction based on \emph{incomplete Room frames}. Their paper \cite{GZ93} solved all but a few cases of types $t^4$ and $t^5$, as stated in the following theorem.
\begin{theorem}
\cite{GZ93}
There exists a Room frame of type $t^4$ for all even $t \geq 4$, except possibly for
$ t \in \{ 14,22,26,34,38,46,62,74,82,86,98,122,134,146\}$. Also, there exists a Room frame of type $t^5$ for all $t \geq 5$, except possibly for $ t = 11, 13, 17$ or $19$.
\end{theorem}
The four possible exceptions of type $t^5$ were handled in a 1993 paper by Dinitz and Lamken \cite{DL93}, who constructed these Room frames using orthogonal frame starters. Then, in 1994, all but one of the possible exceptions of type $t^4$ were constructed by Dinitz, Stinson and Zhu \cite{DSZ94}. This was accomplished by a recursive construction that made use of starters having \emph{complete ordered transversals}. The single remaining exception was a Room frame of type $14^4$, which was finally constructed by Dinitz and Warrington \cite{DW10} in 2010. This ``last'' Room frame was constructed using the hill-climbing algorithm from \cite{DS92a}; it required over 5,000,000 trials before it completed successfully.
An observant reader will note that Room frames of types $6^4$, $10^4$ and $14^4$ were constructed using hill-climbing. This is because it is impossible to construct these Room frames from orthogonal frame starters
(see \cite{St22}).
\subsection{Some Historical Remarks}
The first Room frame can be found in the 1974 paper published by Wallis \cite{Wa74}. This was a Room frame of type $2^5$, which I presented in Figure \ref{type2^5}.
Other early examples of Room frames in the literature include a Room frame of type $8^4$ (Wallis, \cite{Wa76}), and a skew Room frame of type $2^5$ (Beaman and Wallis, \cite{BW76}). These papers appeared in 1976 and 1977, respectively.
These papers from the 1970's did not define Room frames as a specific combinatorial structure.
For example, the ``original'' Room frame, of type $2^5$, was simply a component used in a construction that created a Room square of side $5(v-w)+w$ from a Room square of side $v$ containing a Room subsquare of side $w$. This construction was used to obtain a Room square of side $257$, which completed the solution of the Room square existence problem (we provide a few more details below).
The term ``frame'' was apparently first coined in the 1981 paper by Mullin, Schellenberg, Vanstone and Wallis \cite{MSVW81}. This paper refers to the 1972 survey by Wallis \cite{Wa72} as the place where Room frames were introduced. However, although \cite[Chapter IV]{Wa72} discusses a construction that is termed ``the frame construction,'' there does not seem to be any actual use of Room frames (as we now understand the term) there.
It is clear that the paper \cite{MSVW81} was the first one where Room frames were studied as objects of interest in their own right. (However, I should point out that the Room frames studied in \cite{MSVW81} were in fact skew Room frames.) The main result proven in \cite{MSVW81} was that a skew Room frame of type $2^n$ exists for all positive integers $n \equiv 1 \bmod 4$, $n \neq 33,57,93,133$. Skew Room frames of types $2^n$ were known to exist for $n = 5,9,13,17$ and the set $\{ n : \text{there exists a skew Room frame of type $2^n$}\}$ is PBD-closed. Constructions of PBDs with blocks of sizes $5,9,13$ and $17$ given in \cite{MSVW81} completed the proof.
I became aware of the paper \cite{MSVW81} in 1978 when Ron Mullin gave me a preprint version of the paper (there was no arXiv in those days!). I found the idea of Room frames quite fascinating and they became an important technique in my research tool chest for many years. Room frames were in fact a central theme in my PhD thesis \cite{St-Phd} and I subsequently published a number of papers focussed on Room frames and their applications starting in the early 1980's.
I would like to comment briefly on direct product constructions, which have had a long history in combinatorial designs. A singular direct product construction was in fact the original motivation for Room frames (see Wallis \cite{Wa74}). In the case of Room squares, a \emph{direct product} construction creates a Room square of side $uv$ from Room squares of sides $u$ and $v$. A \emph{singular direct product} creates a Room square of side $u(v-w) + w$ from a Room square of side $u$ and a Room square of side $v$ that contains a Room subsquare of side $w > 0$.
The singular direct product requires that there exists a pair of orthogonal latin squares of order $v-w$, which rules out the ordered pair $(v,w) = (7,1)$, since a pair of orthogonal latin squares of order six does not exist. The resulting Room square of side $u(v-w) + w$ contains Room subsquares of sides $u, v$ and $w$.
Since a Room square of side five does not exist, we cannot take $u=5$ in the singular direct product.
However, the existence of a Room frame of type $2^5$ provides a clever way to circumvent this restriction.
Given a Room square of side $v$ that contains a Room subsquare of side $w > 0$, a singular direct product uses the Room frame of type $2^5$ to create a Room square of side $5(v-w)+w$, provided that $v - w \neq 12$. (The reason for the restriction $v - w \neq 12$ is that this variation of the singular direct product requires a pair of orthogonal latin squares of order $(v-w)/2$.)
The previously mentioned construction of a Room square of side 257 (from \cite{Wa74}) uses two applications of the singular direct product:
\begin{eqnarray*}
57 & = & 7(9-1) + 1\\
257 &=& 5(57-7)+7.
\end{eqnarray*}
The first equation leads to a Room square of side 57 that contains a Room subsquare of side seven.
This is then used, in the ``Room frame'' variation of the singular direct product, in the second equation.
\section{Kirkman Frames}
\subsection{Motivation: From Room frames to Kirkman Frames}
A couple of years after getting my PhD---probably around 1983---I started thinking about generalizations of Room frames. The obvious place to begin was to look at block size three rather than block size two. At the same time, it seemed simplest to start with a single resolution rather than the orthogonal resolutions that exist in Room squares and Room frames. So this led to the definition of a Kirkman frame as a ``Kirkman triple system with holes'' that I gave in my paper \cite{St87}, which was published in 1987.
There was a long delay in the publication of this paper, as it was rejected by at least three journals before it was accepted by \emph{Discrete Mathematics}. Ironically, it has turned out to be one of my most highly cited papers in design theory. The following quote is from the 2003 survey by Rees and Wallis \cite[p.\ 332]{RW03}:
\begin{quote}
\emph{``Since their introduction by Stinson, however, Kirkman Frames
have proven to be the single most valuable tool for the construction of
the various generalizations of KTSs that will be discussed in this survey.''}
\end{quote}
I should also mention the book by Furino, Miao and Yin \cite{FMY96}, which is devoted to the topic of Kirkman frames, as evidence of the importance of this topic.
In a Room frame, a hole of size $t$ intersects $t$ rows and $t$ columns of the array. This is often stated as part of the definition. In the case of a Kirkman frame, I decided to simply require that the set of blocks could be partitioned into \emph{holey parallel classes}, where each holey parallel class forms a partition of all the points not in some hole. It then can be proven using a simple counting argument that a hole of size $t$ is associated with exactly $t/2$ holey parallel classes. This of course implies that every hole has even size.
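For the uniform case (type $t^u$, so $tu$ points in all), the counting argument can be sketched as follows. A point $x$ in a hole $S_i$ is paired with each of the $tu-t$ points outside $S_i$ exactly once, and each block containing $x$ accounts for two such pairs, so $x$ lies in $t(u-1)/2$ blocks. On the other hand, $x$ lies in exactly one block of every holey parallel class associated with a hole $S_j$, $j \neq i$, and in no block of a holey parallel class associated with $S_i$. Writing $q_j$ for the number of holey parallel classes associated with $S_j$, this gives
\[ \sum_{j \neq i} q_j = \frac{t(u-1)}{2} \quad \text{for every } i; \]
comparing these equations for different $i$ shows that all the $q_j$ are equal, and hence each equals $t/2$.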
It is easy to see that a Kirkman triple system of order $v$ is equivalent to a Kirkman frame of type $2^{(v-1)/2}$.
So the smallest Kirkman frame that is not equivalent to a Kirkman triple system is the Kirkman frame of type $4^4$. Example \ref{type4^4} provides a construction of this Kirkman frame.
\begin{example}
\label{type4^4}
{\rm I construct a Kirkman frame of type $4^4$ using the technique that I described in \cite{St86}. Here is a pair of incomplete orthogonal latin squares of order six with a hole of size two, which were discovered by Euler in the eighteenth century:
\[
\begin{array}{|x{.3cm}|x{.3cm}|x{.3cm}|x{.3cm}|x{.3cm}|x{.3cm}|}
\hline
$5$ & $6$ & $3$ & $4$ & $1$ & $2$\\ \hline
$2$ & $1$ & $6$ & $5$ & $3$ & $4$\\ \hline
$6$ & $5$ & $1$ & $2$ & $4$ & $3$\\ \hline
$4$ & $3$ & $5$ & $6$ & $2$ & $1$\\ \hline
$1$ & $4$ & $2$ & $3$ & & \\ \hline
$3$ & $2$ & $4$ & $1$ & & \\ \hline
\end{array}
\hspace{1in}
\begin{array}{|x{.3cm}|x{.3cm}|x{.3cm}|x{.3cm}|x{.3cm}|x{.3cm}|}
\hline
$a$ & $b$ & $e$ & $f$ & $c$ & $d$\\ \hline
$f$ & $e$ & $a$ & $b$ & $d$ & $c$\\ \hline
$d$ & $c$ & $f$ & $e$ & $a$ & $b$\\ \hline
$e$ & $f$ & $d$ & $c$ & $b$ & $a$\\ \hline
$b$ & $d$ & $c$ & $a$ & & \\ \hline
$c$ & $a$ & $b$ & $d$ & & \\ \hline
\end{array}
\]
From these incomplete orthogonal latin squares, construct an incomplete group-divisible design. I label the rows and columns and obtain a block of size four from each of the $32$ filled cells:
\[
\begin{array}{llll}
\{r_1,c_1,5,a\} & \{r_1,c_2,6,b\} & \{r_1,c_3,3,e\} & \{r_1,c_4,4,f\}
\\ \{r_1,c_5,1,c\} & \{r_1,c_6,2,d\}\\
\{r_2,c_1,2,f\} & \{r_2,c_2,1,e\} & \{r_2,c_3,6,a\} & \{r_2,c_4,5,b\}
\\ \{r_2,c_5,3,d\} & \{r_2,c_6,4,c\}\\
\{r_3,c_1,6,d\} & \{r_3,c_2,5,c\} & \{r_3,c_3,1,f\} & \{r_3,c_4,2,e\}
\\ \{r_3,c_5,4,a\} & \{r_3,c_6,3,b\}\\
\{r_4,c_1,4,e\} & \{r_4,c_2,3,f\} & \{r_4,c_3,5,d\} & \{r_4,c_4,6,c\}
\\ \{r_4,c_5,2,b\} & \{r_4,c_6,1,a\}\\
\{r_5,c_1,1,b\} & \{r_5,c_2,4,d\} & \{r_5,c_3,2,c\} & \{r_5,c_4,3,a\} \\
\{r_6,c_1,3,c\} & \{r_6,c_2,2,a\} & \{r_6,c_3,4,b\} & \{r_6,c_4,1,d\}
\end{array}
\]
Every block contains exactly one point from $\{ 5,6,e,f,r_5,r_6,c_5,c_6\}$.
Then delete these eight points, obtaining $32$ blocks of size three. Each deleted point gives rise to a holey parallel class. The resulting Kirkman frame has holes $\{1,2,3,4\}$, $\{a,b,c,d\}$, $\{ r_1,r_2,r_3,r_4\}$
and $\{ c_1,c_2,c_3,c_4\}$. The blocks, arranged into eight holey parallel classes, are as follows:
\[
\begin{array}{l|l}
\multicolumn{2}{c}{\{1,2,3,4\}} \\ \hline
\{r_1,c_1,a\} & \{r_1,c_2,b\} \\
\{r_2,c_4,b\} & \{r_2,c_3,a\} \\
\{r_3,c_2,c\} & \{r_3,c_1,d\} \\
\{r_4,c_3,d\} & \{r_4,c_4,c\} \\
\end{array}
\hspace{1in}
\begin{array}{l|l}
\multicolumn{2}{c}{\{a,b,c,d\}} \\ \hline
\{r_1,c_3,3\} & \{r_1,c_4,4\} \\
\{r_2,c_2,1\} & \{r_2,c_1,2\} \\
\{r_3,c_4,2\} & \{r_3,c_3,1\} \\
\{r_4,c_1,4\} & \{r_4,c_2,3\} \\
\end{array}
\]
\[
\begin{array}{l|l}
\multicolumn{2}{c}{\{ r_1,r_2,r_3,r_4\}} \\ \hline
\{c_1,1,b\} & \{c_1,3,c\} \\
\{c_2,4,d\} & \{c_2,2,a\} \\
\{c_3,2,c\} & \{c_3,4,b\} \\
\{c_4,3,a\} & \{c_4,1,d\} \\
\end{array}
\hspace{1in}
\begin{array}{l|l}
\multicolumn{2}{c}{\{ c_1,c_2,c_3,c_4\}} \\ \hline
\{r_1,1,c\} & \{r_1,2,d\} \\
\{r_2,3,d\} & \{r_2,4,c\} \\
\{r_3,4,a\} & \{r_3,3,b\} \\
\{r_4,2,b\} & \{r_4,1,a\} \\
\end{array}
\]
}
$\blacksquare$
\end{example}
In \cite{St87}, I proved that there is a Kirkman frame of type $t^u$ if and only if $u \geq 4$, $t$ is even and $t(u-1) \equiv 0 \bmod 3$. I will review the main steps in the proof. The proof used PBDs and group-divisible designs (GDDs) along with a few small Kirkman frames. One useful recursive tool is the
``GDD Construction'' from \cite{St87}. This is just a standard Wilson-type GDD construction (see, e.g., \cite{Wi75}), adapted to the setting of Kirkman frames.
I recall a special case of the GDD Construction that is sufficient for our needs, but first I define
group-divisible designs (GDDs). A \emph{group-divisible design} is a triple
$(X,{\ensuremath{\mathcal{G}}},{\ensuremath{\mathcal{A}}})$, where $X$ is a finite set of \emph{points}, ${\ensuremath{\mathcal{G}}}$ is a partition of $X$ into subsets called \emph{groups\footnote{Of course these are \emph{not} algebraic groups.}}, and ${\ensuremath{\mathcal{A}}}$ is a set of subsets of $X$, with the property that every pair of points is contained in a unique group, or a unique block, but not both. Assume that $|A| > 1$ for every $A \in {\ensuremath{\mathcal{A}}}$.
Observe that a PBD is equivalent to a GDD in which every group has size $1$.
\begin{theorem}
[GDD construction]
\label{GDD.const}
Let $(X, {\ensuremath{\mathcal{G}}} , {\ensuremath{\mathcal{A}}})$ be a GDD in which $|X| = gu$ and there are $u$ groups of size $g$, and let $w\geq 1$ ($w$ is often called a \emph{weight}). For each block $A \in {\ensuremath{\mathcal{A}}}$, suppose there is a Kirkman frame of type $w^{|A|}$. Then there is a Kirkman frame of type $(gw)^u$.
\end{theorem}
\begin{remark}
If $|G| = 1$ for all $G \in {\ensuremath{\mathcal{G}}}$ (i.e., the GDD is a PBD), then we obtain a PBD-closure result. More precisely, the set
\[\{u : \text{there exists a Kirkman frame of type } t^u \}\] is PBD-closed for any fixed $t$.
\end{remark}
Now I discuss the existence proof for uniform Kirkman frames. When $t=2$, it follows that $u \equiv 1 \bmod 3$. Thus all the Kirkman frames in the case $t=2$ can immediately be obtained from Kirkman triple systems.
The next case, $t=4$, is easily handled as follows. Again, $u \equiv 1 \bmod 3$ is a necessary condition. I presented a Kirkman frame of type $4^4$ in Example \ref{type4^4}. When $u \equiv 1 \bmod 3$, $u \geq 7$, it suffices to apply Theorem \ref{GDD.const} as follows. Start with a group-divisible design having $u$ groups of size two and blocks of size four, and define $w=2$. Every block is replaced by a Kirkman frame of type $2^4$. The required GDDs are constructed in
\cite{BHS77}.
The case $t=6$ is only a bit more difficult. Here there are no congruential conditions on $u$. I split the proof into two subcases, namely $u \equiv 0,1 \bmod 4$ and $u \equiv 2,3 \bmod 4$.
When $u \equiv 0,1 \bmod 4$, start with a $(3u+1,\{4\})$-PBD (i.e., a $(3u+1,4,1)$-BIBD). Delete a point, creating a GDD with $u$ groups of size three and blocks of size four. Define $w=2$ and apply Theorem \ref{GDD.const}, filling in Kirkman frames of type $2^4$. The result is a Kirkman frame of type $6^u$.
For $u \equiv 2,3 \bmod 4$, $u \geq 7$, start with a $(3u+1,\{4,7\})$-PBD that contains a unique block of size seven (see \cite{Br79}). Delete a point that is not in the block of size seven, creating a GDD with $u$ groups of size three and blocks of size four and seven. Define $w=2$ and apply Theorem \ref{GDD.const}, filling in Kirkman frames of type $2^4$ and $2^7$. The result is again a Kirkman frame of type $6^u$.
It remains to construct a Kirkman frame of type $6^6$. A direct construction for this Kirkman frame can be found in \cite{RS92}.
Having handled the cases $t = 2,4$ and $6$, all other cases follow by ``Inflation by MOLS'' (Theorem \ref{inflation.thm}), which also works for Kirkman frames. The reader can fill in the details.
\subsection{Some Historical Remarks}
\label{hist.sec}
Kirkman frames can be used in existence proofs for Kirkman triple systems. In particular, several of the constructions for Kirkman triple systems given by Ray-Chaudhuri and Wilson in \cite{RW71} (see also \cite{HRW72}) can be recast as Kirkman frame-based constructions. For example, \cite[Theorem 1]{RW71} proves that the set
\[R_3^* = \{ r: \text{there exists a Kirkman triple system of order } 2r+1\}\] is PBD-closed.
That is, if there exists a $(v,K)$-PBD where $K \subseteq R_3^*$, then $v \in R_3^*$.\footnote{Note that this result is a special case of Theorem \ref{GDD.const}.}
I sketch the proof of this fundamental result using Kirkman frame language, making use of the fact that a Kirkman frame of type $2^r$ is equivalent to a Kirkman triple system of order $2r+1$. Basically, it suffices to delete a point from the Kirkman triple system to obtain the desired Kirkman frame (see Example
\ref{KTS-frame.exam}). Then apply Theorem \ref{GDD.const} with $w=2$. (In more detail, let $(X, {\ensuremath{\mathcal{B}}})$ be a $(v,K)$-PBD where $K \subseteq R_3^*$. Give every point in $X$ weight two. For every block $B \in {\ensuremath{\mathcal{B}}}$, construct a Kirkman frame of type $2^{|B|}$ on the points $B \times \{1,2\}$, where the holes are
$\{x\} \times \{1,2\}$, $x \in B$. The result is a Kirkman frame of type $2^v$, which is equivalent to a
Kirkman triple system of order $2v+1$.)
Interestingly, the original proof of this result, given in \cite{RW71}, is slightly different. Rather than utilizing Kirkman frames (implicitly or explicitly), the proof instead uses the notion of a \emph{completed resolvable design}. The idea is to add a new point to the blocks of each parallel class of a given Kirkman triple system of order $2r+1$. Then a new block is formed, consisting of the $r$ new points. The result is a
$(3r+1, \{4,r\})$-PBD (and this PBD contains a unique block of size $r$ if $r > 4$). Clearly this process can be reversed; see Example \ref{KTS-frame.exam}. The PBD-closure proof given
in \cite{RW71} starts with a PBD and gives every point weight 3. Every block is replaced by a suitable completed resolvable design. At the end, a completed resolvable design is constructed; this design is equivalent to a Kirkman triple system of order $2v+1$.
The following example illustrates a KTS, a Kirkman frame and a completed resolvable design, all of which are equivalent structures.
\begin{example}
\label{KTS-frame.exam}
{\rm I present a Kirkman triple system of order 15 that was originally discovered by Cayley in 1850 (see \cite[Example 19.7]{CR99}). This Kirkman triple system has the following 35 blocks, where each row of five blocks is a parallel class:
\[
\begin{array}{l@{\hspace{.2in}}l@{\hspace{.2in}}l@{\hspace{.2in}}l@{\hspace{.2in}}l}
\mathit{abc} & \mathit{d}35 & \mathit{e}17 & \mathit{f}28 & \mathit{g}46\\
\mathit{ade} & \mathit{b}26 & \mathit{c}48 & \mathit{f}15 & \mathit{g}37\\
\mathit{afg} & \mathit{b}13 & \mathit{c}57 & \mathit{d}68 & \mathit{e}24\\
\mathit{bdf} & \mathit{a}47 & \mathit{c}16 & \mathit{e}38 & \mathit{g}25\\
\mathit{beg} & \mathit{a}58 & \mathit{c}23 & \mathit{d}14 & \mathit{f}67\\
\mathit{cdg} & \mathit{a}12 & \mathit{b}78 & \mathit{e}56 & \mathit{f}34\\
\mathit{cef} & \mathit{a}36 & \mathit{b}45 & \mathit{d}27 & \mathit{g}18
\end{array}
\]
To construct the associated Kirkman frame of type $2^7$, delete a point. The holes of the Kirkman frame are formed by the blocks containing the given point. Suppose the point $\mathit{a}$ is deleted. Then we obtain the following Kirkman frame:
\[
\begin{array}{c|l@{\hspace{.2in}}l@{\hspace{.2in}}l@{\hspace{.2in}}l}
\text{hole} & \multicolumn{4}{|c}{\text{holey parallel class}}\\ \hline
\mathit{bc} & \mathit{d}35 & \mathit{e}17 & \mathit{f}28 & \mathit{g}46\\
\mathit{de} & \mathit{b}26 & \mathit{c}48 & \mathit{f}15 & \mathit{g}37\\
\mathit{fg} & \mathit{b}13 & \mathit{c}57 & \mathit{d}68 & \mathit{e}24\\
47 &\mathit{bdf} & \mathit{c}16 & \mathit{e}38 & \mathit{g}25\\
58 & \mathit{beg} & \mathit{c}23 & \mathit{d}14 & \mathit{f}67\\
12 & \mathit{cdg} & \mathit{b}78 & \mathit{e}56 & \mathit{f}34\\
36 & \mathit{cef} & \mathit{b}45 & \mathit{d}27 & \mathit{g}18
\end{array}
\]
Notice that there are seven holes of size two, and seven holey parallel classes.
Finally, I construct the associated completed resolvable design. For each of the seven parallel classes, add a new point to all the blocks in that parallel class. Then create a new block consisting of the seven new points.
\[
\begin{array}{l@{\hspace{.2in}}l@{\hspace{.2in}}l@{\hspace{.2in}}l@{\hspace{.2in}}l}
\infty_1\mathit{abc} & \infty_1\mathit{d}35 & \infty_1\mathit{e}17 & \infty_1\mathit{f}28 & \infty_1\mathit{g}46\\
\infty_2\mathit{ade} & \infty_2\mathit{b}26 & \infty_2\mathit{c}48 & \infty_2\mathit{f}15 & \infty_2\mathit{g}37\\
\infty_3\mathit{afg} & \infty_3\mathit{b}13 & \infty_3\mathit{c}57 & \infty_3\mathit{d}68 & \infty_3\mathit{e}24\\
\infty_4\mathit{bdf} & \infty_4\mathit{a}47 & \infty_4\mathit{c}16 & \infty_4\mathit{e}38 & \infty_4\mathit{g}25\\
\infty_5\mathit{beg} & \infty_5\mathit{a}58 & \infty_5\mathit{c}23 & \infty_5\mathit{d}14 & \infty_5\mathit{f}67\\
\infty_6\mathit{cdg} & \infty_6\mathit{a}12 & \infty_6\mathit{b}78 & \infty_6\mathit{e}56 & \infty_6\mathit{f}34\\
\infty_7\mathit{cef} & \infty_7\mathit{a}36 & \infty_7\mathit{b}45 & \infty_7\mathit{d}27 & \infty_7\mathit{g}18\\
\multicolumn{5}{l}{\infty_1\infty_2\infty_3\infty_4\infty_5\infty_6\infty_7}
\end{array}
\]
}
$\blacksquare$
\end{example}
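Both structures in Example \ref{KTS-frame.exam} are small enough to check by computer. The following Python sketch is included purely as an illustration (the encoding of the blocks as strings is, of course, not part of the design theory); it verifies that the 35 blocks above form a Kirkman triple system of order 15 and that deleting the point $\mathit{a}$ leaves a Kirkman frame of type $2^7$.
\begin{verbatim}
from itertools import combinations

# The seven parallel classes of Cayley's KTS(15) from the example above.
classes = [
    ["abc", "d35", "e17", "f28", "g46"],
    ["ade", "b26", "c48", "f15", "g37"],
    ["afg", "b13", "c57", "d68", "e24"],
    ["bdf", "a47", "c16", "e38", "g25"],
    ["beg", "a58", "c23", "d14", "f67"],
    ["cdg", "a12", "b78", "e56", "f34"],
    ["cef", "a36", "b45", "d27", "g18"],
]
points = set("abcdefg12345678")

# (1) Each row partitions the 15 points (a resolution) and every pair of
#     points lies in exactly one block (a Steiner triple system).
pairs = {}
for row in classes:
    assert sorted("".join(row)) == sorted(points)
    for blk in row:
        for p, q in combinations(blk, 2):
            pairs[frozenset((p, q))] = pairs.get(frozenset((p, q)), 0) + 1
assert len(pairs) == 105 and all(c == 1 for c in pairs.values())

# (2) Delete the point 'a': each class becomes a holey parallel class whose
#     hole is the pair left over from the deleted block, so the blocks not
#     containing 'a' form a Kirkman frame of type 2^7.
for row in classes:
    hole = next(set(blk) - {"a"} for blk in row if "a" in blk)
    covered = set().union(*(set(blk) for blk in row if "a" not in blk))
    assert covered == points - {"a"} - hole
print("KTS(15) and Kirkman frame of type 2^7 verified")
\end{verbatim}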
On the other hand, as Assaf and Hartman pointed out in \cite{AH89}, Hanani constructs frames with block size three and $\lambda = 2$ in his 1974 paper \cite{Han74}\footnote{We discuss frames with $\lambda > 1$ in greater detail in Section \ref{other.sec}.}. These frames are the same thing as \emph{near-resolvable $(v,3,2)$-BIBDs}. There are $v$ holey parallel classes in such a BIBD, each of which misses a single point (thus a necessary condition for existence is that $v \equiv 1 \bmod 3$). Hanani gives direct constructions for several such frames, and he also proves a PBD-closure result in \cite[Lemma 8]{Han74}. This establishes the existence of frames of type $1^{3u+1}$ (with block size three and $\lambda = 2$) for all positive integers $u$.
Assaf and Hartman \cite{AH89} also note that the paper \cite{Han74} constructs examples of frames with
holes of size $t$, for $t =3,12,24$ (again, these frames have block size three and $\lambda = 2$).
It is somewhat difficult to identify the first paper to actually include a Kirkman frame construction (i.e., one with $\lambda = 1$ that incorporates a resolution into holey parallel classes).
Perhaps it is the 1977 Baker-Wilson paper (\cite{BW77}) on nearly Kirkman triple systems (NKTS).
Here is the statement and proof of a main recursive construction, exactly as it appears in \cite{BW77}.
\begin{theorem}
\cite[Lemma 3]{BW77}
If there exists a GDD on $u$ points with group sizes from $M = \{g_1, \dots , g_m\}$ and block sizes from
$K = \{k_1, \dots , k_{\ell}\}$ such that: (i) for each $g_i$ there exists an NKTS$[2g_i+2]$, and (ii) for each $k_i$ there exists a KTS$[2k_i+1]$, then there exists an NKTS$[2u+2]$.
\end{theorem}
\begin{proof}
Let the GDD have points $Y$ and sets ${\ensuremath{\mathcal{G}}} \cup {\ensuremath{\mathcal{B}}}$. The point set $X$ of the NKTS$[2u+2]$ is to consist of
$\theta'$, $\theta''$ and two symbols $y'$, $y''$ for each $y \in Y$. The matching is to be ${\ensuremath{\mathcal{A}}}_0 = \{ \{ \theta', \theta''\}\} \cup \{\{y',y''\}: y \in Y\}$, and the parallel classes are to be ${\ensuremath{\mathcal{A}}}_y$, $y \in Y$, obtained as follows:
For each $G \in {\ensuremath{\mathcal{G}}}$, form an NKTS with matching $\{ \{ \theta', \theta''\}\} \cup \{\{y',y''\}: y \in G\}$
and parallel classes ${\ensuremath{\mathcal{A}}}_y^G$, arbitrarily indexed by the elements $y \in G$. For each $B \in {\ensuremath{\mathcal{B}}}$, form a KTS on the points $y'$, $y''$, $y \in B$ and another symbol $*$ so that $\{*, y', y''\}$ is a triple of the KTS for each $y \in B$; the other triples which belong to the parallel class containing $\{*, y', y''\}$ will be denoted by ${\ensuremath{\mathcal{A}}}_y^B$ (these triples partition the set of $z'$, $z''$ with $z \in B$, $z \neq y$). Then the parallel class ${\ensuremath{\mathcal{A}}}_y$ is to be the union of ${\ensuremath{\mathcal{A}}}_y^G$, where $G$ is the unique member of ${\ensuremath{\mathcal{G}}}$ containing $y$, and all the classes ${\ensuremath{\mathcal{A}}}_y^B$, where $B$ is a member of ${\ensuremath{\mathcal{B}}}$ containing $y$.
\end{proof}
Now where is the Kirkman frame in the above proof? Let $B \in {\ensuremath{\mathcal{B}}}$ be any block. For each $y \in B$, we observe that each ${\ensuremath{\mathcal{A}}}_y^B$ is a holey parallel class with hole $\{y', y''\}$. Further, the union of these holey parallel classes forms a Kirkman frame of type $2^{|B|}$. We can describe this construction informally, omitting details and using Kirkman frame-based language, as follows:
\begin{enumerate}
\item start with a GDD, take two copies of every point and take two new points
\item replace every block $B$ by a Kirkman frame of type $2^{|B|}$, and
\item replace every group $G$ by an NKTS$[2|G| + 2]$ containing the two new points.
\end{enumerate}
Finally, I should mention the paper by Lee and Furino (\cite{LF95}) entitled
\emph{A translation of J. X. Lu's ``an existence theory for resolvable balanced incomplete block designs''}. Lu's 1984 paper is another early example of a paper that uses Kirkman frame techniques in the context of resolvable designs.
According to Furino and Lee,
\begin{quote}
\emph{``We have remained faithful to the constructions provided by Lu but have altered the presentation to be more consistent with contemporary techniques and notation. Specifically, Lu’s Theorem 4 is presented via frames which are never explicitly mentioned in the original but which are clearly embedded in the constructions.''
}
\end{quote}
\section{Other Kinds of Frames}
\label{other.sec}
Various generalizations of frames have been studied. We briefly discuss some of the directions that have been considered.
\subsection{Frames with Larger Block Size}
Room frames have \emph{block size} two and Kirkman frames have block size three. Frames with larger block size can also be studied.
The most obvious generalization of Kirkman frames would be frames with block size four, which are often known as \emph{$4$-frames}
(analogously, a frame with block size $k$ is termed a \emph{$k$-frame}).
These were first studied systematically by Rees and Stinson in \cite{RS92}, and additional results can be found in \cite{CSZ97,WG14,ZG07}. The following theorem summarizes the current existence results for $4$-frames.
\begin{theorem}
There exists a $4$-frame of type $h^u$ if and only if $u\geq 5$, $h \equiv 0 \pmod{3}$ and
$h(u - 1) \equiv 0 \pmod{4}$, except
possibly where
\begin{enumerate}
\item $h = 36$ and $u = 12$;
\item $h \equiv 6 \pmod{12}$ and
\begin{enumerate}
\item $h = 6$ and $u \in \{7, 23, 27, 35, 39, 47\}$;
\item $h = 18$ and $u \in \{15, 23, 27\}$;
\item $h \in \{30, 66, 78, 114, 150, 174, 222, 246, 258, 282, 318, 330, 354, 534\}$ and\\
$u \in \{7, 23, 27, 39, 47\}$;
\item $h \in \{n : 42 \leq n \leq 11238\} \setminus \{66, 78, 114, 150, 174, 222, 246, 258, 282, 318, 330, 354, 534\}$ and $u \in \{23, 27\}$.
\end{enumerate}
\end{enumerate}
\end{theorem}
We should also mention that it is easy to see that a resolvable $(v,k,1)$-BIBD is equivalent to a $k$-frame of type $(k-1)^r$, where $r = (v-1)/(k-1)$. It suffices to delete a point from the BIBD to construct the frame, and the process can be reversed. (For example, a Kirkman triple system of order nine, i.e., a resolvable $(9,3,1)$-BIBD, is equivalent to a $3$-frame of type $2^4$.)
\subsection{Frames with Larger Dimension}
Room frames have \emph{dimension} two (since they have two orthogonal frame resolutions) and, analogously, Kirkman frames have dimension one. Frames of higher dimension have also received attention. For example, it was shown in \cite{DS80} that a Room frame of type $2^q$ having dimension $t$ can be constructed
if $q = 2^k t + 1$ is a prime power, $t \geq 3$ is odd and $k \geq 2$.
There have also been numerous papers addressing the case of two-dimensional Kirkman frames; see \cite{DAW15} for recent results.
\subsection{Frames for Graph Decompositions}
The ``blocks'' in Room and Kirkman frames correspond to complete graphs ($K_2$ and $K_3$, resp.). \emph{Graph decompositions} into graphs other than complete graphs are also possible in a frame setting.
\emph{Cycle frames} were introduced in \cite{ASSW89}. Here we have a decomposition of a complete multipartite graph into holey parallel classes of cycles of lengths three and five. These cycle frames were useful in solving the uniform Oberwolfach problem for odd length cycles. There has also been considerable research done on cycle frames where there is only one cycle length; see \cite{BCDT17} for a very general existence result.
Another variation is \emph{$K_{1,3}$-frames} \cite{CC17}. This is a frame-type graph decomposition into graphs isomorphic to $K_{1,3}$. And, as might be expected, a \emph{$(K_{4} - e)$-frame} (see \cite{CSZ97}) involves decompositions into graphs isomorphic to $K_4$ with an edge deleted.
\subsection{Frames with $\lambda > 1$}
Various types of frames can also be considered for $\lambda > 1$, i.e., where every pair in the underlying design occurs $\lambda > 1$ times. For example, two-dimensional Room frames having block size three and $\lambda = 2$ were first studied in \cite{La91,LV93}.
In general, we can consider frames with block size $k$ and a specified value of $\lambda$; such a frame is commonly denoted as a $(k,\lambda)$-frame. For example, necessary and sufficient existence conditions for $(3,\lambda)$-frames are determined in \cite{AH89}. There also has been considerable work done on $(4,3)$-frames; see, e.g., \cite{FKLMY,Ge01,GLL}.
Finally, a near-resolvable $(v,k,k-1)$-BIBD is the same thing as a $(k,k-1)$-frame of type $1^v$, since there are $v$ holey parallel classes in the BIBD, each missing one point
(we already mentioned this result in Section \ref{hist.sec} in the case $k=3$).
\subsection{Nonuniform Frames}
A \emph{nonuniform} frame is one in which not all the holes have the same size. Nonuniform frames are often used in constructions of uniform frames. Also, Room squares or Kirkman triple systems containing subdesigns are equivalent to certain frames where all but one of the holes have the same size. For example, a Room square of side $v$ containing a Room subsquare of side $w$ is equivalent to a Room frame of type $1^{v-w}w^1$. Additionally, various other classes of nonuniform frames have been studied, e.g., Room frames of type
$2^tu^1$ (\cite{DSZ94}).
\subsection{Frames with Special Properties}
There has been some study of frames with various special properties. We have already mentioned skew Room frames in Section \ref{intro.sec}.
Other examples of Room frames having special properties that have been studied include \emph{partitionable skew Room frames} \cite{CSZ97,ZG07}, as well as
Room frames with \emph{partitionable transversals} \cite{DL00}.
Finally, I should mention a generalization of frames known as \emph{double frames}. These objects were defined by Chang and Miao in \cite{CM02}, where they were used to unify various frame-type recursive constructions. One application of double frames is to construct frames, e.g., see \cite{WG14}.
\subsection{Equiangular Tight Frames}
It is not surprising that a term such as ``frame'' could have multiple meanings, even within combinatorial mathematics. However, it is probably not to be expected that the phrase ``Kirkman frame'' would arise in two completely different contexts.
To be specific, ``Kirkman equiangular tight frames and codes'' is the title of a 2014 paper by Jasper, Mixon and Fickus
\cite{JMF14}.
In this context, the term ``frame'' can be found in the 1992 book by Daubechies \cite{Da92}; it refers to a generalization of an orthonormal basis. The \emph{equiangular tight frames} (see \cite{StHe}) meet the Welch bound with equality. Many of the known constructions for these objects use combinatorial designs such as conference matrices, difference sets and Steiner systems.
In \cite{JMF14}, a construction utilizing Kirkman triple systems was proposed; the resulting frames were termed ``Kirkman equiangular tight frames.'' Thus we have two completely different notions of a Kirkman frame!
\section{Applications}
In this paper, I have concentrated on construction methods for Room frames and Kirkman frames. However, I would be remiss if I did not mention some applications of frames. Here are a few ``obvious'' applications. Room frames were essential in proving the existence of Room squares with subsquares (see \cite{DSZ94,DW10}), and analogously, Kirkman frames were of crucial importance in constructing Kirkman triple systems with Kirkman subsystems (\cite{RS89}). Skew Room frames were a very important tool in solving the skew Room square existence problem (\cite{St81c}), and Kirkman frames were employed in the construction of resolvable group-divisible designs with block size three (\cite{RS86}). The survey by Rees and Wallis \cite{RW03} is especially useful as it gives detailed discussion and self-contained proofs of several important applications of Kirkman frames.
Finally, frames with larger block size (in particular, $4$-frames) are utilized in similar problems relating to designs with larger block size (see, e.g., \cite{WG14}).
There are many other applications that are perhaps not immediately obvious. I will list a few representative examples now; however, I should note that this is far from being a complete list.
\begin{itemize}
\item Skew Room frames were used in \cite{LRS} and elsewhere to construct nested cycle systems.
\item Skew Room frames were used in \cite{Ro94,RWCZ} to construct weakly $3$-chromatic BIBDs with block size four.
\item Partitionable skew Room frames can be used to construct resolvable $(K_{4} - e)$-designs (see \cite{CSZ97}). For information on partitionable skew Room frames, see \cite{ZG07}.
\item $4$-frames have been used to construct three mutually orthogonal latin squares with holes (i.e., three HMOLS); see \cite{SZ91,CSZ97}.
\item Uniformly resolvable designs, which were introduced by Rees \cite{Re87}, have been studied by numerous authors.
Their construction often employs frames. Two recent papers on this topic are \cite{WG16,WG17}.
\item Ling \cite{Li03} uses splittable $4$-frames to give improved results concerning a problem in generalized Ramsey theory involving edge-colourings of $K_{n,n}$.
\end{itemize}
\end{document}
|
\begin{document}
\begin{frontmatter}
\title{New construction of graphs
with high chromatic number\\ and small clique}
\author[label1, label2]{Hamid Reza Daneshpajouh}
\address[label1]{School of Mathematics, Institute For Research In Fundamental Sciences, Niavaran Bldg, Niavaran Square, Tehran, Iran}
\address[label2]{Moscow Institute of Physics and Technology, Institutskiy per. 9, Dolgoprudny, Russia 141700
}
\ead{[email protected], [email protected]}
\begin{abstract}
In this note, we introduce a new method for constructing graphs with high chromatic number and small clique. Indeed, via this method, we present a new proof for the well-known Kneser's conjecture.
\end{abstract}
\begin{keyword}
Borsuk-Ulam theorem \sep Chromatic number \sep $G$-Tucker lemma \sep Triangle-free graphs
\end{keyword}
\end{frontmatter}
\section{Introduction}
In this note, all graphs are finite, simple and undirected. The complete graph on $n$ vertices is denoted by $\mathcal{K}_n$. The number of vertices in a largest complete subgraph of $G$, denoted by $\omega (G)$, is called the clique number of $G$. The girth of a graph is the number of edges in a shortest cycle. A proper (vertex) coloring is an assignment of labels or colors to each vertex of a graph so that no edge connects two identically colored vertices. The smallest number of colors needed for a proper coloring of a graph $G$ is its chromatic number, $\chi (G)$. For a given graph $H$, a graph $G$ is called $H$-free if no induced subgraph of $G$ is isomorphic to $H$. In particular, a $\mathcal{K}_3$-free graph is called a triangle-free graph.
It is obvious that $\chi (G)\geq\omega (G)$. The chromatic and clique numbers of a graph can be arbitrarily far apart. There are various constructions of triangle-free graphs with arbitrarily large chromatic number. Probably the best known is due to Mycielski~\cite{Myc}. In 1955, he created a construction that preserves the property of being triangle-free but increases the chromatic number. For more references, see also Blanche Descartes~\cite{descartes1947three}, John Kelly and Leroy Kelly~\cite{kelly1954paths}, and Alexander Zykov~\cite{zykov1949some}. Erd\H{o}s~\cite{Erdos}, with a deeper insight, showed by probabilistic means the existence of graphs that have high girth and still have arbitrarily large chromatic number. Indeed, exploring the relation between the clique number and other properties of graphs, such as the chromatic number, maximum degree, etc., is still an active and fascinating research area within mathematics. There are still a lot of open questions waiting to be solved in this area, such as bounding the chromatic number of triangle-free graphs with fixed maximum degree~\cite{Kostochka}, or more generally Reed's conjecture~\cite{Reed}. For more problems see~\cite{OpenProblem}. This body of work indicates that a deeper understanding of such graphs is needed, so it is of interest to have new ways of constructing them.
In~\cite{Daneshpajouh}, compatibility graphs and the $G$-Tucker lemma were introduced to obtain a new topological lower bound on the chromatic number of a special family of graphs. Indeed, via this bound, a new method for finding test graphs was proposed by the author. In this note, we give a new way, based on compatibility graphs and the $G$-Tucker lemma, of constructing graphs with high chromatic number and small clique. Finally, as a corollary of this method, we give a new proof of Kneser's famous conjecture.
The organization of the note is as follows. In Section $2$, we set up notation and terminology, and we repeat the relevant material from~\cite{Daneshpajouh} that will be needed throughout the note. Finally, in Section $3$, our main results are stated and proved.
\section{Preliminary and Notations}
In this section we review definitions and results as required for the rest of the note. Here and subsequently, $G$ stands for a non-trivial finite group, and its identity element is denoted by $e$. A partially ordered set or poset is a set $P$ together with a binary relation $\leq$ such that for all $a, b, c\in P$:
$a\leq a$ (reflexivity); $a\leq b$ and $b\leq c$ implies $a\leq c$ (transitivity); and $a\leq b$ and $b\leq a$ implies $a = b$ (anti-symmetry). A pair of elements $a, b$ of a partially ordered set are called comparable if $a\leq b$ or $b\leq a$. A subset of a poset in which any two elements are comparable is called a chain. A function $f : P\to Q$ between partially ordered sets is order-preserving or monotone if, for all $a$ and $b$ in $P$, $a\leq_{P} b$ implies $f(a)\leq_{Q} f(b)$. If $X$ is a set, then a group action of $G$ on $X$ is a function $G\times X\to X$, denoted $(g, x)\mapsto g.x$, such that $e.x = x$ and $(gh).x = g.(h.x)$ for all $g, h$ in $G$ and all $x$ in $X$. A $G$-poset is a poset together with a $G$-action on its elements that preserves the partial order, i.e., if $x<y$ then $g.x<g.y$.
To provide topological lower bounds on the chromatic numbers of graphs, the Hom complex was defined by Lov\'{a}sz. For a recent account of the theory, we refer the reader to~\cite{kozlov2007combinatorial}. We need the following version of this concept.
\begin{definition}[Hom poset]
Let $F$ be a graph with vertex set $\{1, 2, \cdots , n\}$. For a graph $H$, we define the Hom poset $Hom_{p}(F, H)$, whose elements are given by all $n$-tuples $(A_1, \cdots, A_n)$ of non-empty subsets of $V(H)$, such that for any edge $(i, j)$ of $F$ we have $A_i\times A_j\subseteq E(H)$. The partial order is defined by $A=(A_1,\cdots, A_n)\leq B=(B_1,\cdots, B_n)$ if and only if $A_i\subseteq B_i$ for all $i\in\{1,\cdots , n\}$.
\end{definition}
Let $Z_r=\{e=\omega^0,\cdots, \omega^{r-1}\}$ be the cyclic group of order $r$. The cyclic group $Z_r$ acts on the poset $Hom_{p}(\mathcal{K}_r, H)$ naturally by cyclic shift. More precisely, for each $\omega^{i}\in Z_r$ and $(A_1,\cdots, A_r)\in Hom_{p}(\mathcal{K}_r, H)$, define $\omega^{i}.(A_1,\cdots, A_r)=(A_{1+i(\text{mod} r)},\cdots , A_{r+i(\text{mod} r)})$.
To find a new bound on the chromatic number of a special family of graphs, a combinatorial analog of the Borsuk-Ulam theorem for $G$-spaces, the $G$-Tucker lemma, was introduced in~\cite{Daneshpajouh}. To recall the lemma, we need to make some definitions. Consider the $G$-poset $G\times \{1,\cdots , n+1\}$ with natural $G$-action, $h.(g, i)\to (hg, i)$, and the order defined by $(h, x) < (g, y)$ if $x < y$. Also, let ${\left(G\cup\{0\}\right)}^n\setminus\{(0,\cdots , 0)\}$ be the $G$-poset whose action is $g.(x_1, \cdots, x_n) = (g.x_1, \cdots, g.x_n)$, and whose order relation is given by:
$$x=(x_1, \cdots, x_n)\leq y=(y_1, \cdots, y_n)\quad\text{if}\quad x_i=y_i\quad\text{whenever}\quad x_i\neq 0.$$
\begin{lemma}[$G$-Tucker's lemma]
Suppose that $n$ is a positive integer, $G$ is a finite group, and
$$\lambda : {\left(G\cup\{0\}\right)
}^n\setminus\{(0,\cdots , 0)\}\to G\times\{1, \cdots, (n-1)\}$$
is a map such that $\lambda (g.x)=g.\lambda(x)$ for all $g\in G$ and all $x$ in ${\left(G\cup\{0\}\right)
}^n\setminus\{(0,\cdots , 0)\}$. Then there exist two elements $X\leq Y$ and $e\neq g\in G$ such that $\lambda (X) = g.\lambda (Y)$. Throughout this note, such a pair is called a bad pair.
\end{lemma}
It is worth noting that there is a more general result for the case $G=Z_p$, the cyclic group of prime order $p$; see the $Z_p$-Tucker lemma of Ziegler~\cite{Ziegler}.
Let us finish this section by recalling the definition of compatibility graph from~\cite{Daneshpajouh}.
\begin{definition}[Compatibility graph]
Let $P$ be a $G$-poset. The compatibility graph of $P$, denoted by $C_P$, has $P$ as vertex set, and two elements $x, y\in P$ are adjacent if there is an element $g\in G\setminus\{e\}$ such that $x$ and $g.y$ are comparable in $P$.
\end{definition}
\section{New graphs
with high chromatic number and small clique}
In this section we state and discuss the main results of this note.
\begin{theorem}
For every graph $H$ and $r\geq 2$, the graph $C_{Hom(\mathcal{K}_{r}, H)}$ is $\mathcal{K}_{r+1}$-free.
\end{theorem}
\begin{proof}
For an $r$-tuple $A=(A_1, \cdots, A_r)$, define $|A|= |\cup_{i=1}^{r} A_i|$. Suppose that vertices $A_1=(A_{1,1},\cdots , A_{1,r}), \cdots, A_{k}=(A_{k,1},\cdots , A_{k,r})$ form a
clique of size $k$ in $C_{Hom(\mathcal{K}_{r}, H)}$. Without loss of generality assume that $|A_{k}|\geq |A_{i}|$ for each $1\leq i\leq k-1$. Now, by the definition, there are $1\leq s_1,\cdots , s_{k-1}\leq r-1$ such that $\omega^{s_j}.A_{j}\subseteq A_{k}$, for each $1\leq j\leq k-1$. We claim that $s_i\neq s_j$ for each $i\neq j$. Suppose, contrary to our claim, that $s_i= s_j=a$ for some $1\leq i,j\leq k-1$. Since $A_i$ is connected to $A_j$, there is a $b$, $1\leq b\leq r-1$, such that $\omega^{b}.A_i$ is comparable to $A_j$. Without loss of generality assume that $\omega^b.A_i\subseteq A_j$. Therefore
\[
\begin{cases}
\omega^{a}.A_i\subseteq A_k\\
\omega^{a}.A_j\subseteq A_k\\
\omega^{b}.A_i\subseteq A_j\
\end{cases}
\Longrightarrow
\begin{cases}
\omega^{a}.A_i\subseteq A_k \\
\omega^{a+b}.A_i\subseteq A_k \
\end{cases}
\Longrightarrow
\begin{cases}
A_{i,a+b+1(\text{mod} r)}\subseteq A_{k, b+1} \\
A_{i,a+b+1(\text{mod} r)}\subseteq A_{k, 1} \
\end{cases}
\Longrightarrow A_{k,b+1}\cap A_{k,1}\neq\emptyset
\]
which contradicts the fact that any two distinct entries of $A_k$ are disjoint (indeed, for $i\neq j$ we have $A_{k,i}\times A_{k,j}\subseteq E(H)$ and $H$ has no loops; moreover, since $1\leq b\leq r-1$, we have $b+1\not\equiv 1 \,(\text{mod}\ r)$, so $A_{k,b+1}$ and $A_{k, 1}$ are indeed two distinct entries of $A_k$). Therefore $k\leq r$, and the proof is complete.
\end{proof}
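Before proceeding, it may help to see the above objects on a concrete instance. The following Python sketch is purely illustrative (the choice $H=C_5$ and $r=2$ is arbitrary): it enumerates the poset $Hom_{p}(\mathcal{K}_2, H)$, builds its compatibility graph under the natural $Z_2$-action, and checks that the resulting graph is triangle-free, as Theorem $3.1$ predicts.
\begin{verbatim}
import itertools

def hom_poset_K2(H):
    # All pairs (A1, A2) of non-empty vertex subsets with A1 x A2 contained
    # in E(H); this is the poset Hom_p(K_2, H), ordered by inclusion in each
    # coordinate.  H is a dict mapping a vertex to its neighbour set.
    V = sorted(H)
    E = {(u, v) for u in H for v in H[u]}
    subsets = [frozenset(S) for k in range(1, len(V) + 1)
               for S in itertools.combinations(V, k)]
    return [(A1, A2) for A1 in subsets for A2 in subsets
            if all((u, v) in E for u in A1 for v in A2)]

def leq(x, y):                        # the partial order of the Hom poset
    return x[0] <= y[0] and x[1] <= y[1]

def compatibility_graph(elems):
    # Z_2 acts by swapping the two coordinates; x and y are adjacent iff
    # x is comparable to the swap of y.
    adj = {x: set() for x in elems}
    for x, y in itertools.combinations(elems, 2):
        gy = (y[1], y[0])
        if leq(x, gy) or leq(gy, x):
            adj[x].add(y)
            adj[y].add(x)
    return adj

C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}   # the 5-cycle
P = hom_poset_K2(C5)
G = compatibility_graph(P)
triangle = any(G[u] & G[v] for u in G for v in G[u])
print(len(P), "elements,", sum(len(N) for N in G.values()) // 2, "edges,",
      "contains a triangle" if triangle else "triangle-free")
\end{verbatim}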
If $\psi : H\to K$ is a graph homomorphism, we associate it to a new map
$$C_{Hom(\mathcal{K}_r,-)}(\psi) : C_{Hom(\mathcal{K}_r, H)}\to C_{Hom(\mathcal{K}_r, K)}$$
by sending each $(A_1, \cdots, A_r)$ to $(\psi (A_1), \cdots, \psi (A_r))$. Since a complete bipartite subgraph of $H$ is sent by $\psi$ to a complete bipartite subgraph of $K$, the map $C_{Hom(\mathcal{K}_r,-)}(\psi)$ is a graph homomorphism. Moreover, the construction commutes with the composition of maps, and the identity homomorphism is mapped to the identity homomorphism. So, using the terminology of category theory, one can say that $C_{Hom(\mathcal{K}_r,-)}$ is a functor from the category of graphs to the category of $\mathcal{K}_{r+1}$-free graphs. Furthermore, we have an obvious graph homomorphism from $C_{Hom(\mathcal{K}_r, H)}$ to $H$, obtained by sending each vertex $(A_1, \cdots, A_r)$ to $\min A_1$. This gives us the following lower bound on the chromatic number.
\begin{corollary}
For any graph $H$, $\chi (C_{Hom(\mathcal{K}_r, H)})\leq\chi (H)$.
\end{corollary}
Recall that the Kneser graph $KG(n,k)$ is the graph whose vertices correspond to the $k$-element subsets of the set $\{1,\cdots , n\}$, and two vertices are adjacent if and only if the two corresponding sets are disjoint. These graphs were introduced by Lov\'{a}sz~\cite{lovasz1978kneser} in his famous proof of Kneser's conjecture~\cite{kneser}. In the language of graph theory, Kneser's conjecture states that $\chi (KG(n,k)) = n-2k+2$ whenever $n\geq 2k-1$. Besides Lov\'{a}sz's proof, there exist several other proofs of Kneser's conjecture; see for instance \cite{alishahi2015chromatic, barany1978shor, greene2002new, matouvsek2004combinatorial}. Now we are in a position to state and prove our main theorem.
\begin{theorem}
For all integers $n\geq rk$, $\chi\left(C_{Hom(\mathcal{K}_r, KG(n, k))}\right)\geq n-r(k-1)$.
\end{theorem}
\begin{proof}
For any $X=(x_1,\cdots ,x_n)\in {\left(Z_r\cup\{0\}\right)}^{n}\setminus\{(0,\cdots , 0)\}$ and each $1\leq i\leq r$, define $X_i = \{s\,|\, x_s=\omega^{i}\}$. Also, denote by $\binom{X}{k}$ the $r$-tuple $(A_1,\cdots, A_r)$ where $A_i$ is the set of all $k$-subsets of $X_i$. Now, assume that $c : C_{Hom(\mathcal{K}_r, KG(n, k))}\to\{1, \cdots , C\}$ is a proper coloring of $C_{Hom(\mathcal{K}_r, KG(n, k))}$ with $C$ colors. We now define a map $\lambda$. Let $X\in {\left(Z_r\cup\{0\}\right)}^{n}\setminus\{(0,\cdots , 0)\}$. If $|X_i|\leq k-1$ for all $1\leq i\leq r$, set
$$\lambda (X)= (\omega^{t}, \sum_{i=1}^{r}|X_i|),$$
where $\omega^t$ is the first nonzero element in $X$. If $|X_i|\geq k$ for some $i$ and $|X_j|\leq k-1$ for some $j$, set
$$\lambda (X)= (\omega^{t}, r(k-1) + |\{i : |X_i|\geq k\}|),$$
where $\omega^t$ is the first nonzero element in $X$ such that $|X_t|\geq k$.
Finally, consider the case where $|X_i|\geq k$ for all $i$. Let $\omega^t.X$ be the one of $X, \omega.X,\cdots , \omega^{r-1}.X$ with $c\left(\binom{\omega^{t}.X}{k}\right) = \min\{c\left(\binom{\omega^{i}.X}{k}\right) : 0\leq i\leq r-1\}$. Now, set
$$\lambda (X)= (\omega^{-t}, rk-1+ c\left(\binom{\omega^{t}.X}{k}\right)).$$
Note that the vertices $\{\binom{\omega^{j}.X}{k}\, :\, 0\leq j\leq r-1\}$ form a clique of size $r$ in $C_{Hom(\mathcal{K}_r, KG(n, k))}$; since $c$ is a proper coloring, the minimum of their colors is at most $C-r+1$, and hence the second coordinate of $\lambda (X)$ is at most $rk-1+C-r+1=C+r(k-1)$. Thus, the second coordinate of $\lambda$ takes its values in $\{1, \cdots , C+r(k-1)\}$. It is easy to check that $\lambda(\omega^i.X)=\omega^i.\lambda(X)$ for all $X\in {\left(Z_r\cup\{0\}\right)}^{n}\setminus\{(0,\cdots , 0)\}$ and all $\omega^i\in Z_r$. So, to apply the $Z_r$-Tucker lemma, it is enough to show that $\lambda$ cannot have a bad pair. Let $X\leq Y$, $\lambda (X)=(\omega^{i}, a)$ and $\lambda (Y)=(\omega^{j}, a)$. We consider three cases.
\begin{itemize}
\item The first case is $a\leq r(k-1)$. In this case, we have $X_i\subseteq Y_i$ for all $i$ and $\sum_{i=1}^{r}|X_i| = \sum_{i=1}^{r}|Y_i|$. These imply that $X=Y$. Thus, $\omega^i=\omega^j$.
\item The second case is $r(k-1)+1\leq a \leq rk-1$. In this case, the number of indices $i$ for which $|X_i|\geq k$ is equal to the number of indices $i$ for which $|Y_i|\geq k$. This fact, together with the fact that $X_i\subseteq Y_i$ for all $i$, implies that $|X_i|\geq k$ if and only if $|Y_i|\geq k$. Therefore, $\omega^i=\omega^j$.
\item Finally, assume that $a\geq rk$ and $\omega^i\neq\omega^j$. On the one hand, by the definition of $\lambda$, $c(\binom{\omega^{-i}.X}{k})=c(\binom{\omega^{-j}.Y}{k})$. On the other hand, since $X\leq Y$ and $\omega^i\neq\omega^j$, the vertex $\binom{\omega^{-i}.X}{k}$ is adjacent to the vertex $\binom{\omega^{-j}.Y}{k}$ in $C_{Hom(\mathcal{K}_r, KG(n, k))}$, which contradicts the fact that $c$ is a proper coloring.
\end{itemize}
Applying $Z_r$-Tucker's lemma we have $C+ r(k-1)\geq n$, thus $C\geq n-r(k-1)$.
\end{proof}
In fact, the existence of graphs with high chromatic number and small clique is a direct consequence of Theorem $3.1$ and Theorem $3.2$. As another application, one can deduce a new proof of Kneser's conjecture. More precisely, it is well known that the Kneser graph $KG(n,k)$ has a proper coloring with $n-2k+2$ colors (see~\cite[Section 3.3]{matousek2008using}), thus by Theorem $3.2$ and Corollary $3.1$ we have
$$n-2k+2\leq\chi\left( C_{Hom(\mathcal{K}_2, KG(n, k))}\right)\leq\chi\left(KG(n,k)\right)\leq n-2k+2.$$
In summary we have the following corollary.
\begin{corollary}
For all $k\geq 1$ and $n\geq 2k-1$, the chromatic number of $C_{Hom(\mathcal{K}_2, KG(n, k))}$ is $n-2k+2$. In particular, $\chi (KG(n, k))=n-2k+2$.
\end{corollary}
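A standard proper coloring of $KG(n,k)$ with $n-2k+2$ colors assigns to a $k$-set $S$ the color $\min S$ if $\min S\leq n-2k+1$, and the color $n-2k+2$ otherwise (in the latter case $S$ lies inside the $(2k-1)$-element set $\{n-2k+2,\cdots ,n\}$, so two such sets intersect and are never adjacent). The following Python sketch, given only as an illustration for small parameters, checks this coloring directly.
\begin{verbatim}
from itertools import combinations

def kneser_coloring(n, k):
    # The standard proper coloring of KG(n, k) with n - 2k + 2 colors.
    return {S: min(min(S), n - 2 * k + 2)
            for S in combinations(range(1, n + 1), k)}

def is_proper(col):
    # Disjoint k-sets are exactly the edges of the Kneser graph.
    return all(col[S] != col[T]
               for S, T in combinations(col, 2)
               if set(S).isdisjoint(T))

n, k = 7, 3
col = kneser_coloring(n, k)
print(len(set(col.values())), "colors, proper:", is_proper(col))
\end{verbatim}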
\end{document}
|
\begin{equation}gin{document}
\draft
\title{Catalysis in non--local quantum operations}
\author{G. Vidal and J. I. Cirac}
\address{Institut f\"ur Theoretische Physik, Universit\"at Innsbruck,
A-6020 Innsbruck, Austria}
\date{\today}
\maketitle
\begin{equation}gin{abstract}
We show how entanglement can be used, without being consumed, to
accomplish unitary operations that could not be performed
without it. When applied to infinitesimal transformations, our method
makes equivalent, in the sense of Hamiltonian simulation, a
whole class of otherwise inequivalent two-qubit interactions.
The new catalysis effect also implies the asymptotic equivalence
of all such interactions.
\end{abstract}
\pacs{03.67.-a, 03.65.Bz, 03.65.Ca, 03.67.Hk}
\narrowtext
Can entanglement help to perform certain tasks? How much
entanglement has to be consumed? Can we use entanglement without
consuming it at all? These questions are quite relevant in the
context of quantum information theory, since entanglement can be
considered as an expensive physical resource without classical
analogy. In particular, the last question has been recently
answered \cite{Jo99} in the context of transformation between
states of two parties, Alice and Bob, under local operations and
classical communication (LOCC). More specifically, examples have
been presented where a state can only be transformed into some
other one by LOCC when a certain entangled state
$|\eta\rangle_{ab}$ is available. In this case, even though the
total entanglement (shared by Alice and Bob) decreases, the
state $|\eta\rangle_{ab}$ is recovered after the procedure. This
effect has been termed catalysis \cite{Jo99}, since the state
$|\eta\rangle_{ab}$ is necessary for the process to occur, even
though it is not consumed.
In this Letter we present a novel catalysis effect through
quantum entanglement. A maximally entangled state will be used,
but not consumed, to perform a non-local task that cannot be
achieved without it. The task consists of implementing a certain
two--qubit unitary gate when only some other one is available.
Remarkably, this catalysis is achieved using only local unitary
manipulations. The same construction allows a given non--local
interaction to simulate other kinds of interactions, which
otherwise could not be simulated using only LOCC. In our method
unitarity of the local manipulations is an important feature,
since it makes possible that some LOCC-inequivalent interactions
become fully equivalent in the presence of entanglement. This
sharply contrasts with the case of entangled state conversions
through LOCC manipulations \cite{Jo99}, where LOCC-inequivalent
states must remain inequivalent through catalysis, because the
local measurements needed in the conversions unavoidably
decrease the entanglement between the parties. Another
consequence of our results is that certain Hamiltonians become
equivalent under asymptotic LOCC, a phenomenon that shares
analogies with the one that occurs in transformations between
pure states \cite{Be96}.
Let us consider two parties, Alice and Bob, each of them
possessing a qubit, $A$ and $B$, respectively. Their goal is to
apply a certain unitary operator $\tilde U$ to the qubits.
However, they only have at hand another particular two--qubit
unitary operator $U$, and the ability to perform one of the
following classes of operations. (a) LU: local unitary
operations on each qubit; (b) LU+anc: each of the local unitary
operations is jointly performed on a local ancilla, initially in
a product state, and a qubit; (c) LO: each party can perform
general local operations on its qubit (and ancilla); (d) LOCC:
the same as LO but classical communication is also allowed; (e)
cat--LU: the same as LU+anc, but now Alice's and Bob's ancillas
are initially in an entangled state, which can be used, but not
consumed, during the process. Clearly, everything that can be
done in the LU, LU+anc, and LO scenarios, can be also done in
the LOCC scenario. Here we will show that there are operators
$\tilde U$ that cannot be applied in the LOCC scenario, but that
can be achieved in the cat--LU one.
Let $U$ denote a unitary operator acting on two qubits $A$ and
$B$. Using the results of Ref.\ \cite{Kr01}, we can always write
$U^{AB}=(u^A v^B) [U^{AB}_s(c_1,c_2,c_3)] (\tilde u^A
\tilde v^B)$, where
\begin{mathletters}
\begin{equation}a
\label{U0}
U^{AB}_s(c_1,c_2,c_3)&=& e^{-i \sum_{k=1}^3 c_k
\sigma_k^A\sigma_k^B},\\
\label{restriction}
&& \pi/4\ge c_1 \ge c_2 \ge |c_3|,
\end{equation}a
\end{mathletters}
the $\sigma$'s are Pauli operators, and the $u$'s and $v$'s are
local unitary operators. The superscripts accompanying each
operator indicate the system(s) on which it acts. The
coefficients $c$ can be easily determined using the method
described in Ref.\ \cite{Kr01}. Any two unitary operators are
equivalent under LU (i.e. they can perform the same tasks if
arbitrary local unitary operations on $A$ and $B$ are allowed
before and after their action) if and only if they give rise to
the same $U_s(c_1,c_2,c_3)$. Since in all that follows we will
always allow for LU, we can restrict ourselves to unitary
operators $U$ of the form (\ref{U0}).
In the catalytic scenario, cat--LU, we have at our disposal two
ancillas (qubits) $a$ and $b$, initially in the Bell state
$|B_{0,0}\rangle_{ab}$ \cite{notBell}. We must impose that after
the whole process the ancillas $a$ and $b$ end up again in state
$|B_{0,0}\rangle_{ab}$. We allow for joint unitaries acting on
$A$ and $a$, as well as joint unitaries acting on $B$ and $b$.
We will show that in this situation we can use
$U_s(c_1,c_2,c_3)$ to implement $U_s(c_1+c_2,0,0)$. Later on we
will show that this cannot be achieved without the entangled
ancillas, even if LOCC are allowed.
The above claim about what can be done with $U_s$ in the cat--LU
scenario follows directly from the fact that
\begin{equation}a
& &\left(w^{Aa}w^{Bb}\right)^\dagger
\left[U^{AB}_s(c_1,c_2,c_3)\right]
\left(w^{Aa} w^{Bb}\right)|\Psi\rangle_{AB}|B_{0,0}\rangle_{ab}
\nonumber\\
\label{catal}
&&=e^{ic_3} \left[U^{AB}_s(c_1+c_2,0,0)\right]
|\Psi\rangle_{AB}|B_{0,0}\rangle_{ab},
\end{equation}a
for all $|\Psi\rangle$. Here, the unitary operators $w$ are
defined according to $w|i,j\rangle = |j,i\oplus j\rangle$, and
therefore correspond to a swap operation followed by a c--NOT.
Even though Eq.\ (\ref{catal}) can be directly checked, we will
indicate here the main idea behind this equation. The operators
in the form $U_s$ are diagonal in the Bell basis \cite{notBell},
i.e.
\begin{mathletters}
\begin{equation}a
U_s(c_1,c_2,c_3) |B_{0,0}\rangle &=& e^{-i(c_1+c_2-c_3)}
|B_{0,0}\rangle,\\ U_s(c_1,c_2,c_3) |B_{1,0}\rangle &=&
e^{-i(c_1-c_2+c_3)} |B_{1,0}\rangle,\\ U_s(c_1,c_2,c_3)
|B_{0,1}\rangle &=& e^{i(c_1+c_2+c_3)} |B_{0,1}\rangle.\\
U_s(c_1,c_2,c_3) |B_{1,1}\rangle &=& e^{-i(-c_1+c_2+c_3)}
|B_{1,1}\rangle.
\end{equation}a
\end{mathletters}
In particular,
\begin{mathletters}
\begin{equation}a
e^{ic_3}U_s(c_1+c_2,0,0)|B_{\alpha,0}\rangle &=& e^{- i
(c_1+c_2-c_3)}|B_{\alpha,0}\rangle,\\
e^{ic_3}U_s(c_1+c_2,0,0)|B_{\alpha,1}\rangle &=& e^{ i
(c_1+c_2+c_3)}|B_{\alpha,1}\rangle,
\end{equation}a
\end{mathletters}
for $\alpha=0,1$. Thus, we see that if we could transform
$|B_{\alpha,\beta}\rangle_{AB} \to |B_{0,\beta}\rangle_{AB}$
before acting with $U_s(c_1,c_2,c_3)$, and afterwards invert
this transformation, we would obtain the desired result.
Unfortunately, no such transformation exists, since two
states ($\alpha=0,1$) would have to be mapped onto a single one, and
then back. However, this can be accomplished with the help of
the entangled ancillas, and this is precisely what the operator
$w^{Aa} w^{Bb}$ does: it transforms
$|B_{\alpha,\beta}\rangle_{AB}|B_{0,0}\rangle_{ab} \to
|B_{0,\beta}\rangle_{AB}
|B_{\overline\alpha,\beta}\rangle_{ab}$.
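For readers who prefer a direct check, the following short Python/NumPy
sketch verifies Eq.\ (\ref{catal}) numerically. It is included only as an
illustration; the values chosen for the $c_k$ and the random state
$|\Psi\rangle_{AB}$ are arbitrary, and the qubit ordering $(A,B,a,b)$ is a
convention of the sketch.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def Us(c1, c2, c3):
    # U_s(c1,c2,c3) = exp(-i sum_k c_k sigma_k^A sigma_k^B)
    H = c1 * np.kron(X, X) + c2 * np.kron(Y, Y) + c3 * np.kron(Z, Z)
    return expm(-1j * H)

def embed(U, q1, q2, n=4):
    # Embed a two-qubit gate U acting on qubits q1, q2 of an n-qubit
    # register (qubit 0 is the most significant bit).
    dim = 2 ** n
    M = np.zeros((dim, dim), dtype=complex)
    for col in range(dim):
        bits = [(col >> (n - 1 - k)) & 1 for k in range(n)]
        for out in range(4):
            amp = U[out, 2 * bits[q1] + bits[q2]]
            if amp == 0:
                continue
            nb = list(bits)
            nb[q1], nb[q2] = out >> 1, out & 1
            row = sum(b << (n - 1 - k) for k, b in enumerate(nb))
            M[row, col] += amp
    return M

# w|i,j> = |j, i xor j>, i.e. a swap followed by a c-NOT
w = np.zeros((4, 4), dtype=complex)
for i in range(2):
    for j in range(2):
        w[2 * j + (i ^ j), 2 * i + j] = 1

c1, c2, c3 = 0.3, 0.2, 0.1                    # arbitrary values
W = embed(w, 0, 2) @ embed(w, 1, 3)           # w^{Aa} w^{Bb} on (A,B,a,b)
B00 = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)  # |B_{0,0}>_{ab}
psi = np.random.randn(4) + 1j * np.random.randn(4)
psi /= np.linalg.norm(psi)                    # arbitrary |Psi>_{AB}
state = np.kron(psi, B00)

lhs = W.conj().T @ embed(Us(c1, c2, c3), 0, 1) @ (W @ state)
rhs = np.exp(1j * c3) * (embed(Us(c1 + c2, 0, 0), 0, 1) @ state)
print(np.allclose(lhs, rhs))                  # True
\end{verbatim}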
Now, let us show that $U_s(c_1+c_2,0,0)$ cannot be obtained with
the help of $U_s(c_1,c_2,c_3)$ and LOCC for a range of values of
the parameters $c$. Note that this automatically implies that
this task is not possible either with LU, LU+anc, or LO. In the
LOCC scenario we may use two ancillas $a$ and $b$, with
corresponding Hilbert spaces of arbitrary dimensions. The LOCC
consist of generalized measurement on $A$ and $a$, and on $B$
and $b$ involving classical communication before and also after
the application of $U_s(c_1,c_2,c_3)$.
We want that the whole procedure involving a set of LOCC,
followed by the action of $U_s(c_1,c_2,c_3)$, and again another
set of LOCC, reproduce the action of $U_s(c_1+c_2,0,0)$ on any
input state of $A$ and $B$. In particular, we can take $A$ and
$B$ initially entangled with two other, remote qubits $C$ and
$D$, in state
\begin{equation}
|\Psi_0\rangle_{ABCDab}\equiv |B_{0,0}\rangle_{AC}
|B_{0,0}\rangle_{BD}
|0\rangle_a |0\rangle_b.
\end{equation}
Let us assume that a set of LOCC takes place {\em before}
$U_s(c_1,c_2,c_3)$ acts. We will now show that one can
substitute these LOCC by local unitaries acting on $A$ and $a$,
and $B$ and $b$. We will use the fact that the whole process
must be described by a unitary operator [$U_s(c_1+c_2,0,0)$]
acting on $A$ and $B$, which implies that the entanglement
between the qubit $C$ ($D$) and the rest of the systems must be
preserved, i.e. the final state must be a maximally entangled
state between $C$ ($D$) and the rest. For a set of outcomes
$\Gamma$ of the generalized measurements performed on $A$ and
$a$, and on $B$ and $b$, before the application of
$U_s(c_1,c_2,c_3)$ we will have that the state of the systems
will change according to $x^{Aa}_{\Gamma} y^{Bb}_{\Gamma}
|\Psi_0\rangle_{ABCDab}$, where $x_{\Gamma}$ and $y_{\Gamma}$
are two operators that depend on the set of outcomes of the
measurements. Let us consider first the action of $x$ (we will
omit the subscript $\Gamma$ in order to keep the notation
readable)
\begin{equation}
x|0\rangle_A|0\rangle_a = d_{0}|\psi_{0}\rangle_{Aa},\quad
x|1\rangle_A|0\rangle_a = d_{1}|\psi_{1}\rangle_{Aa},
\end{equation}
where $|\psi_{0,1}\rangle$ are normalized states. Note that we
can have neither $|d_{0}|\ne|d_{1}|$ nor
$|\psi_{0}\rangle$ and $|\psi_{1}\rangle$ failing to be orthogonal.
If this were the case, then the entanglement of the qubit $C$
with the rest of the systems would decrease. According to well
known results on entanglement concentration \cite{Lo01}, this
entanglement cannot be recovered later on with the help of LOCC.
Since the whole protocol does not involve joint actions with
remote qubit C, this immediately would contradict the fact that
this entanglement has to be maintained at the very end of the
process. Thus, we must have that $|d_{0}|=|d_{1}|\equiv d$ and,
at the same time, $|\psi_{0}\rangle$ and $|\psi_{1}\rangle$ are
orthonormal. But in this case we can always find certain unitary
operator $u$ acting on $A$ and $a$ such that $du$ gives the same
action as $x$ on the relevant states. Thus, we can substitute
$x_\Gamma$ by a unitary operator $u_\Gamma$ chosen randomly with
probability $|d_{\Gamma}|^2$. The same analysis applies to
$y_\Gamma$.
According to this result, the problem reduces to showing that
\begin{equation}
\label{Phi1}
|\Phi_1(\Psi)\rangle\equiv\left[U^{AB}_s(c_1+c_2,0,0) \right]
|\Psi\rangle_{AB}|0,0\rangle_{ab},
\end{equation}
cannot be obtained starting from
\begin{equation}
|\Phi_2(\Psi)\rangle\equiv\left[U^{AB}_s(c_1,c_2,c_3)\right]
\left(x^{Aa} y^{Bb}\right) |\Psi\rangle_{AB}|0,0\rangle_{ab},
\end{equation}
using LOCC, for all $|\Psi\rangle$ and where $x$ and $y$ are
unitary. In order to prove that, we restrict the values of the
parameter $c$ to satisfy $c_3=0$, $c_2>0$, and $c_1+c_2\le
\pi/4$, and use the following fact \cite{Ni99}: if $|\Psi_1\rangle$
can be obtained by LOCC out of $|\Psi_2\rangle$, then
\begin{equation}
\label{Nielsen}
P(\Psi_1)\ge P(\Psi_2),
\end{equation}
where
\begin{equation}
P(\Psi)\equiv\max_{||\psi||=||\phi||=1} |\langle\psi|\langle
\phi||\Psi\rangle|^2.
\end{equation}
[$P$ is the square of the maximal Schmidt coefficient.] In
particular, if we take in (\ref{Phi1})
$|\Psi_{i,j}\rangle_{AB}=|i\rangle_A|j\rangle_B$ ($i,j=0,1$), we
have that $P[\Phi_1(\Psi_{i,j})]=\cos^2(c_1+c_2)$. Defining
\begin{mathletters}
\begin{equation}a
|\psi_i\rangle_{Aa} &\equiv& x^{Aa} |i,0\rangle_{Aa},\\
|\varphi_j\rangle_{Bb} &\equiv& y^{Bb} |j,0\rangle_{Bb},
\end{equation}a
\end{mathletters}
we will show that it is not possible to have
\begin{equation}
\label{cond}
\left| \langle \psi_i|\langle \varphi_j| |\Phi_2(\Psi_{i,j})\rangle
\right|^2
\le \cos^2(c_1+c_2),
\end{equation}
for all $i,j=0,1$, and therefore that condition (\ref{Nielsen})
is violated. We can always write
\begin{equation}
\label{psivarphi}
|\psi\rangle_{Aa}|\varphi\rangle_{Bb} =
\sum_{\alpha,\beta=0,1} |B_{\alpha,\beta}\rangle_{AB}
|N_{\alpha,\beta}\rangle_{ab},
\end{equation}
where the $n_{\alpha,\beta}\equiv||N_{\alpha,\beta}||^2\ge 0$
add up to one. Thus, condition (\ref{cond}) reduces to
\begin{equation}a
&&\left| e^{-i(c_1+c_2)} n_{0,0} + e^{i(c_1+c_2)} n_{0,1}
\right.\nonumber\\
&& \left.+ e^{-i(c_1-c_2)} n_{1,0} + e^{i(c_1-c_2)}
n_{1,1}\right|^2
\le
\cos^2(c_1+c_2).
\end{equation}a
Actually, it can be easily shown that the left-hand side is
always greater than or equal to the right-hand side, the equality
holding only for $n_{1,0}=n_{1,1}=0$ and $n_{0,0}=n_{0,1}=1/2$.
Using these results in Eq.\ (\ref{psivarphi}) and imposing that
$|\psi_i\rangle_{Aa}|\varphi_j\rangle_{Bb}$ is a product state,
we obtain that it must be of either of the form
$|0,1\rangle_{AB}|\mu_i,\nu_j\rangle_{ab}$ or
$|1,0\rangle_{AB}|\mu_i,\nu_j\rangle_{ab}$. Now, recalling that
$|\psi_i\rangle_{Aa}|\varphi_j\rangle_{Bb}$ must be created
using local unitary operators acting on $A$ and $a$, and $B$ and
$b$ out of $|i,0\rangle_{Aa}|j,0\rangle_{Bb}$ one readily finds
that this is impossible for all $i,j=0,1$. Thus, we have proven
that $U_s(c_1+c_2,0,0)$ cannot be obtained with the help of
$U_s(c_1,c_2,0)$ and LOCC for $\pi/4\ge c_1+c_2>0$ and $c_1\ge
c_2>0$.
In the following, we will analyze the implications of our
catalytic method in the context of infinitesimal transformations
of two-qubits \cite{Du00,Nielsen,Beth,Be01,Vi01}. Remarkably,
the study of this kind of transformations has allowed to
establish a partial order in the set of all possible physical
interactions (or Hamiltonians) \cite{Be01}. This partial order
is related to whether a given interaction can {\em simulate}
(i.e., produce the same results of) another one, when certain
operations are allowed. In this context, the necessary and
sufficient conditions for a two-qubit Hamiltonian $H$ to be able
to simulate another $H'$ under LU, LU+anc and LOCC have been
derived \cite{Be01,Vi01}, giving the same conditions. One can
immediately see from our general results on unitary operators
that in the catalytic scenario, these conditions are relaxed,
i.e. there are certain Hamiltonians that can simulate other
under cat--LU, but not under LOCC. Here we will analyze this
fact in detail and extract some conclusions.
Thus, we consider $U=e^{-iH\delta t}$, where $H=H^\dagger$ is a
Hamiltonian acting on the qubits $A$ and $B$ and $||H\delta t||
\ll 1$. Again, since we allow for arbitrary local unitaries
at any time, we can restrict ourselves to Hamiltonians of the
form
\begin{mathletters}
\begin{equation}a
H(c_1,c_2,c_3) &=& \sum_{k=1}^3 c_k
\sigma_k^A \sigma_k^B,\\ c_1\ge c_2\ge |c_3|.
\end{equation}a
\end{mathletters}
In Refs.\ \cite{Be01,Vi01} it has been shown that given
$H(c_1,c_2,c_3)$, a total time $\delta t$, and if we allow for
LOCC after time steps smaller than $\delta t$, then we can
obtain the operation generated by $H(\tilde c_1,\tilde
c_2,\tilde c_3)$ during the same time $\delta t$ up to second
order corrections in $H\delta t$ if and only if
\begin{mathletters}
\label{conditionsinf}
\begin{equation}a
c_1 + c_2 -c_3 &\ge& \tilde c_1 + \tilde c_2 -\tilde c_3,\\
\label{cond2}
c_1 &\ge& \tilde c_1,\\
c_1 + c_2 + c_3 &\ge& \tilde c_1 + \tilde c_2 + \tilde c_3.
\end{equation}a
\end{mathletters}
This implies that under LOCC, $H$ can simulate $\tilde H$ if and
only if these conditions are satisfied.
If we use our catalytic method, we have that it is possible to
simulate $\tilde H(c_1+c_2,0,0)$ with $H(c_1,c_2,c_3)$, which
for $c_2\ne 0$ violates condition (\ref{cond2}). In fact, taking
$c_3=0$, we see that $H_1 \equiv H(c_1+c_2,0,0)$ can simulate
$H_2 \equiv H(c_1, c_2,0)$ as well, since conditions
(\ref{conditionsinf}) are fulfilled. Thus, our catalytic method
makes any pair of Hamiltonians of the form $H_1$ and $H_2$
equivalent, although they are inequivalent under LOCC
simulation. This result also has fundamental implications in the
study of {\em asymptotic} simulation of interactions using
LU+anc. There $N$ applications of an evolution generated by $H$
for a time $\delta t$ are available, in the limit $\delta t\to
0$ and $N\delta t\to
\infty$. $H_1$ can simulate $H_2$ even for finite $N$ \cite{Be01,Vi01}.
We can now use $H_2$ for $N_0$ times to create a maximally
entangled state of the ancillas \cite{Du00} with $N_0 \delta t$
finite, which could then be used to catalyze the Hamiltonian
evolution generated by $H_1$ a number $N-N_0\sim N$ of times.
So far, we have seen that under the catalytic scenario, some
Hamiltonians acting on two qubits become equivalent. Of course,
an important question is whether all Hamiltonians become
equivalent in that scenario \cite{Bennetpc}. We now show that
this is not the case. We derive a set of necessary conditions
similar to (\ref{conditionsinf}) that the Hamiltonians $H$ and
$\tilde H$ must fulfill for $H$ to be able to simulate $\tilde
H$. First, we will use that both Hamiltonians are diagonal in
the Bell basis \cite{notBell}, and we will call the
corresponding eigenvalues
\begin{mathletters}
\begin{equation}a
\lambda_1= c_1+c_2-c_3, &&\quad \tilde \lambda_1= \tilde c_1+\tilde
c_2-\tilde c_3 + \tilde c_4,\\
\lambda_2= c_1-c_2+c_3, &&\quad \tilde \lambda_2= \tilde c_1-\tilde
c_2+\tilde c_3 + \tilde c_4,\\
\lambda_3= -c_1+c_2+c_3, &&\quad \tilde \lambda_3= -\tilde c_1+\tilde
c_2+\tilde c_3 + \tilde c_4,\\
\lambda_4= -c_1-c_2-c_3, &&\quad \tilde \lambda_4= -\tilde c_1-\tilde
c_2-\tilde c_3 + \tilde c_4.
\end{equation}a
\end{mathletters}
Note that with this numbering, the $\lambda$'s and $\tilde
\lambda$'s are sorted in decreasing order. We have also taken
into account a global constant $\tilde c_4$, since it will be
important in the discussion below. We will show that if $H$ can
simulate $\tilde H$ under cat--LU, then
\begin{mathletters}
\begin{equation}a
\label{cond21}
c_1+ c_2-c_3 &\ge& \tilde c_1+\tilde c_2-\tilde c_3 + \tilde
c_4,\\
\label{cond22}
c_1+ c_2+c_3 &\ge& \tilde c_1+\tilde c_2+\tilde c_3 -\tilde
c_4,\\
\label{cond23}
\sum_{k=1}^3 |c_k| &\ge& \sum_{k=1}^4 |\tilde c_k|.
\end{equation}a
\end{mathletters}
These conditions mean, for example, that with $H(c_1,c_2,c_3)$
it is not possible to efficiently simulate either $\tilde
H(c_1+c_2+c_3,0,0)$ ---which would imply catalytic equivalence
of all interactions since the converse simulation is possible
[cf. (\ref{conditionsinf})]--- or $H(c_1,c_2,-c_3)$
---which excludes the simulation of a time-reversed evolution of
$H(c_1,c_2,c_3)$.
Following the same steps as in \cite{Vi01} we find that $H$ can
efficiently simulate $\tilde H$ using LOCC only if there exists
a set of unitary operators $u_m$ and $v_m$ and some positive
numbers $p_m$ which add up to one, such that
\begin{equation}a
\label{inter}
&&\sum_{m}
p_m(u_m^{Aa}v_m^{Bb})^\dagger\left[H^{AB}\otimes\mbox{$1 \hspace{-1.0mm} {\bf l}$}^{ab}\right]
(u_m^{Aa}v_m^{Bb})
|\Psi\rangle_{AB}|\Phi_0\rangle_{ab}\nonumber\\
&& =\left[\tilde
H^{AB}\otimes\mbox{$1 \hspace{-1.0mm} {\bf l}$}^{ab}\right]|\Psi\rangle_{AB}|\Phi_0\rangle_{ab},
\end{equation}a
for all $|\Psi\rangle$ and certain fixed state $|\Phi_0\rangle$
of arbitrary dimensional ancillas. Here we have included
$\mbox{$1 \hspace{-1.0mm} {\bf l}$}^{ab}$ to make the formula more explicit. According to a
basic result in the theory of majorization \cite{Uhxx}, the
operator resulting from the sum over $m$ in (\ref{inter}) must
have the eigenvalues lying in the interval
$[\lambda_4,\lambda_1]$. This automatically implies that the
operator $\tilde H^{AB}$ must also have its eigenvalues in the
same interval, which leads to $\lambda_1\ge \tilde \lambda_1$
and $\lambda_4\le\tilde \lambda_4$, and therefore to
(\ref{cond21},\ref{cond22}). In order to obtain the last
condition (\ref{cond23}), we apply the bra $_{ab}\langle
\Phi_0|$ to both sides of Eq.\ (\ref{inter}), multiply the
corresponding equation by $\sigma_k^A\sigma_k^B/4$ and trace
with respect to $A$ and $B$. Taking the absolute values of the
resulting expressions, and adding from $k=1,\ldots,4$ we obtain
\begin{equation}
\label{ineq}
\sum_{k=1}^4 |\tilde c_k| \le \sum_{n=1}^3 |c_n| \sum_m p_m h_{n,m},
\end{equation}
where
\begin{equation}
h_{n,m} = \frac{1}{4}\sum_{k=1}^4
|_{ab}\langle\Phi_0|X_{k,n,m}^a Y_{k,n,m}^b|\Phi_0\rangle_{ab}|,
\end{equation}
and
\begin{mathletters}
\begin{equation}a
X_{k,n,m}^a &=& {\rm tr}_A[\sigma_k^A (u_m^{Aa})^\dagger
\sigma_n^A u_m^{Aa}],\\
Y_{k,n,m}^b &=& {\rm tr}_B[\sigma_k^B (v_m^{Bb})^\dagger \sigma_n^B
v_m^{Bb}].
\end{equation}a
\end{mathletters}
Using Cauchy--Schwarz inequality, we have
\begin{equation}a
h_{n,m} &\le& \left[\frac{1}{4}\sum_{k=1}^4
\langle\Phi_0|X^{a}_{k,n,m}
(X^{a}_{k,n,m})^\dagger|\Phi_0\rangle\right]^{1/2}\nonumber\\
&\times& \left[\frac{1}{4}\sum_{k=1}^4
\langle\Phi_0|Y^{b}_{k,n,m}
(Y^{b}_{k,n,m})^\dagger|\Phi_0\rangle\right]^{1/2}=1,\nonumber
\end{equation}a
where for the last equality we have used the fact that
$\sigma_k$ form an orthonormal basis in the space of operators
acting on a qubit. Substituting $h_{n,m}\le 1$ in Eq.\
(\ref{ineq}), we finally obtain condition (\ref{cond23}).
In conclusion, we have shown that certain unitary operations can
be catalyzed by an entangled state, in the sense that the state
is not consumed but without it the process would not be
possible. We have also shown that the method introduced here
allows one to make certain kinds of interactions acting on
two qubits equivalent. This fact allows these interactions to become
equivalent in the asymptotic limit, which is compatible with the
conjecture that all two-qubit interactions are equivalent in the
asymptotic limit \cite{Bennetpc}.
We thank C. H. Bennett for stimulating discussions. This work
was supported by the Austrian Science Foundation under the SFB
``control and measurement of coherent quantum systems'' (Project
11), the European Community under the TMR network
ERB--FMRX--CT96--0087, project EQUIP (contract IST-1999-11053),
and contract HPMF-CT-1999-00200, the European Science
Foundation, and the Institute for Quantum Information GmbH.
\begin{equation}gin{references}
\bibitem{Jo99}
D. Jonathan and M. B. Plenio, Phys. Rev. Lett. {\bf 83}, 3566
(1999).
\bibitem{Be96}
C. H. Bennett, H. J. Bernstein, S. Popescu, and B. Schumacher,
Phys. Rev. A {\bf 53}, 2046-2052 (1996).
\bibitem{Kr01}
B. Kraus and J. I. Cirac, Phys. Rev. A {\bf 63}, 062309 (2001);
N. Khaneja and S. Glaser, quant-ph/0010100.
\bibitem{notBell}
We denote the Bell states as follows:
\begin{mathletters}
\begin{equation}a
|B_{0,0}\rangle &\equiv& \frac{1}{\sqrt{2}} (|0,1\rangle
+|1,0\rangle),~
|B_{1,0}\rangle \equiv \frac{1}{\sqrt{2}}(|0,0\rangle + |1,1\rangle),
\nonumber\\
|B_{0,1}\rangle &\equiv&\frac{1}{\sqrt{2}} (|0,1\rangle -
|1,0\rangle),~
|B_{1,1}\rangle \equiv\frac{1}{\sqrt{2}} (|0,0\rangle -
|1,1\rangle).\nonumber
\end{equation}a
\end{mathletters}
\bibitem{Lo01}
H.-K. Lo and S. Popescu, Phys. Rev. A {\bf 63}, 022301 (2001).
\bibitem{Ni99}
M. A. Nielsen, Phys. Rev. Lett. {\bf 83}, 436 (1999).
\bibitem{Du00}
W. D\"{u}r, G. Vidal, J. I. Cirac, N. Linden and S. Popescu,
Phys. Rev. Lett. (in press), quant-ph/0006034.
\bibitem{Nielsen} J.L. Dodd, M.A. Nielsen, M.J. Bremner and R.T.
Thew, quant-ph/0106064
\bibitem{Beth} P. Wocjan, D. Janzing and Th. Beth,
quant-ph/0106077.
\bibitem{Be01}
C. H. Bennett, J. I. Cirac, M. S. Leifer, D. W. Leung, N.
Linden, S. Popescu, and G. Vidal, quant-ph/0107035.
\bibitem{Vi01}
G. Vidal and J. I. Cirac, quant-ph/0108076.
\bibitem{Bennetpc}
C. H. Bennett (private communication).
\bibitem{Uhxx}
P.M. Alberti and A. Uhlmann, {\em Stochasticity and partial
order: doubly stochastic maps and unitary mixing}, Dordrecht,
Boston, 1982.
\end{references}
\end{document}
|
\begin{document}
\title{Addendum to:\
Edge-Unfolding Nearly Flat Convex Caps}
\begin{abstract}
This addendum to~\cite{o-eunfcc-17} establishes that
a nearly flat acutely triangulated convex cap in the sense of that paper can be edge-unfolded
even if closed to a polyhedron by adding the convex polygonal base under the cap.
\end{abstract}
\section{Introduction}
\seclab{Introduction}
The paper~\cite{o-eunfcc-17} established that every sufficiently flat acutely triangulated
convex cap has an edge-unfolding to a non-overlapping simple polygon, i.e., a \emph{net}.
I used the term ``convex cap'' in the following sense
(where $\phi(f)$ is
the angle the normal to face $f$ makes with the $z$-axis):
\begin{quotation}
\noindent
Define a \emph{convex cap} ${\mathcal C}$ of angle $\Phi$ to be $C={\mathcal P} \cap H$
for some ${\mathcal P}$ and $H$, such that $\phi(f) \le \Phi$ for all $f$ in ${\mathcal C}$.
[...]
Note that ${\mathcal C}$ is not a closed polyhedron; it has no ``bottom,''
but rather a boundary ${\partial \mathcal C}$.
{\varepsilon}nd{quotation}
This note proves that same claim holds even when ${\mathcal C}$ is closed to a polyhedron
by adjoining the convex base face $B$ bounded by ${\partial \mathcal C}$.
Eventually this addendum will be incorporated into a future version~\cite{o-eunfcc-17}.
For now we assume familiarity with that paper, and especially the
section below, the most relevant portions of which we reproduce verbatim. Ellisons are
marked by ``[...].''
\section{Angle-Monotone Spanning Forest}
\seclab{AngMonoForest}
\noindent
[...]
\subsection{Angle-Monotone Spanning Forest}
\seclab{SpanningForest}
``It was proved
in~\cite{lo-ampnt-17}
that every nonobtuse triangulation $G$ of a convex region $C$
has a boundary-rooted spanning forest $F$ of $C$, with all paths in $F$
$90^\circ$-monotone.
We describe the proof and simple construction algorithm before detailing
the changes necessary for acute triangulations.
Some internal vertex $q$ of $G$ is selected, and the plane partitioned into
four $90^\circ$-quadrants $Q_0,Q_1,Q_2,Q_3$ by orthogonal lines through $q$.
Each quadrant is closed along one axis and open on its counterclockwise axis;
$q$ is considered in $Q_0$ and not in the others, so the quadrants partition the plane.
It will simplify matters later if we orient the axes so that no vertex except
for $q$ lies on the axes, which is clearly always possible.
Then paths are grown within each quadrant independently, as follows.
A path is grown from any vertex $v \in Q_i$ not yet included in the forest $F_i$,
stopping when it reaches either a vertex already in $F_i$, or ${\partial C}$.
These paths never leave $Q_i$, and result in a forest $F_i$ spanning the vertices in $Q_i$.
No cycle can occur because a path is grown from $v$ only when $v$ is not already
in $F_i$; so $v$ becomes a leaf of a tree in $F_i$.
Then $F = F_0 \cup F_1 \cup F_2 \cup F_3$.
We cannot follow this construction exactly in our situation of an acute triangulation $G$,
because the ``quadrants'' for $\theta$-monotone paths for $\theta = 90^\circ - \Delta\theta < 90^\circ$ cannot cover the plane
exactly: they leave a thin $4\Delta\theta$ angular gap; call the cone of this aperture $g$.
We proceed as follows.
Identify an internal vertex $q$ of $G$ so that it is possible to
orient the cone-gap $g$, apexed at $q$, so that $g$ contains no internal vertices of $G$.
See Fig.~\figref{QuadGap} for an example. Then we proceed just
as in~\cite{lo-ampnt-17}: paths are grown within each $Q_i$, forming four
forests $F_i$, each composed of $\theta$-monotone paths.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\linewidth]{Figures/QuadGap_10_20_s1_v47.pdf}
\caption{Here the near-quadrants $Q_i$ have width $\theta=87^\circ$,
so the gap $g$ has angle $4\Delta\theta = 12^\circ$.}
\figlab{QuadGap}
\end{figure}
It remains to argue that there always is such a $q$ at which to apex cone-gap $g$.
Although it is natural to imagine $q$ as centrally located (as in Fig.~{\phi}igref{QuadGap}),
it is possible that $G$ is so dense with vertices that such a central location is not possible.
However, it is clear that the vertex $q$ that is closest to ${\partial C}$ will suffice: aim $g$
along the shortest path from $q$ to ${\partial C}$. Then $g$ might include several vertices on ${\partial C}$,
but it cannot contain any internal vertices of $G$, as they would be closer to ${\partial C}$.
Again we could rotate the axes slightly so that no vertex except for $q$ lies on an axis.''
\noindent
[...] (End quoted text.)
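For concreteness, the quadrant-based growth of $F$ can be prototyped in a few lines.
The following Python sketch is illustrative only (it is not the implementation used for
the figures): it assumes $G$ is given by vertex coordinates and adjacency lists, and the
existence of an admissible neighbor at every step is exactly what~\cite{lo-ampnt-17}
guarantees for acute triangulations.
\begin{verbatim}
import math

def wedge(p, q, width):
    """Index of the near-quadrant (angular width < 90 deg, apexed at q)
    containing point p; indices 0..3, with None for the gap cone."""
    ang = math.atan2(p[1] - q[1], p[0] - q[0]) % (2 * math.pi)
    k = int(ang // width)
    return k if k < 4 else None

def grow_forest(coords, nbrs, boundary, q_idx, width):
    """Boundary-rooted spanning forest of theta-monotone paths (sketch)."""
    q = coords[q_idx]
    parent = {v: None for v in boundary}          # boundary vertices are roots
    for v in coords:
        if v in parent or v == q_idx:
            continue
        path = [v]
        while path[-1] not in parent:
            cur = path[-1]
            # by the choice of q, no internal vertex lies in the gap cone
            i = wedge(coords[cur], q, width)      # quadrant Q_i containing cur
            lo, hi = i * width, (i + 1) * width   # admissible edge directions
            # pick any neighbor reached by an edge whose direction lies in the
            # wedge of Q_i; for acute triangulations such a neighbor exists
            nxt = next(w for w in nbrs[cur]
                       if lo <= math.atan2(coords[w][1] - coords[cur][1],
                                           coords[w][0] - coords[cur][0])
                                % (2 * math.pi) < hi)
            path.append(nxt)
        for a, b in zip(path, path[1:]):
            parent[a] = b                         # record the grown path in F
    return parent
\end{verbatim}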
\section{Unfolding the Base $B$}
\seclab{Base}
By our definition of a convex cap, its boundary ${\partial \mathcal C}$ lies in a plane, and so bounds
a convex polygonal base $B$.
We assume that, unlike the cap ${\mathcal C}$, $B$ is not triangulated, and so must be unfolded
as an intact unit.
(Of course, it can be unfolded as a unit even if triangulated.)
Let $C_\perp$ be the unfolded net of ${\mathcal C}$ produced by the algorithm in~\cite{o-eunfcc-17}.
If some edge $e$ of $C_\perp$ lies on the convex hull of $C_\perp$, then $B$ can be
``flipped out'' to $B'$ around $e$ by cutting all edges of ${\partial \mathcal C}$ except for $e$.
Because $B'$ is convex and is attached to the hull of $C_\perp$, it is clear there is
no overlap, and we would be finished. In fact this is the proof path we will follow, but it is
not as straightforward as it might seem.
\subsection{Obstructions}
\seclab{Obstructions}
We now argue that there is an arbitrarily flat convex cap ${\mathcal C}$ and a spanning
cut forest ${\mathcal F}$ such that there is no such edge $e$ of $C_\perp$
to which to attach $B'$ without overlap.
First, we look at a ``real'' unfolding to see what form the obstruction might take.
Fig.~{\phi}igref{AM_20_30_s2_v83_ell_Hull} shows a portion of $C_\perp$,
identifying a particular edge $e$ which is tilted inside the hull and would lead
to overlap were $B'$ attached there.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{Figures/AM_20_30_s2_v83_ell_Hull}
\caption{Detail from Fig.~24 of~\protect\cite{o-eunfcc-17}, with
a portion of the convex hull marked.}
\figlab{AM_20_30_s2_v83_ell_Hull}
\end{figure}
However, even in this example, there are many other candidates for $e$ that would suffice as
$B$'s attachment. This suggests the next question: Is there a cap ${\mathcal C}$
and a cut forest ${\mathcal F}$ such that
every edge of $C_\perp$ is similarly tilted inside the hull, leaving no ``safe'' attachment
edge for $B$?
The answer is {\sc yes}. We only sketch the argument before discussing in
more detail how to circumvent this counterexample.
Let ${\mathcal C}$ be a cap whose boundary ${\partial \mathcal C}$ is a $12$-sided regular polygon (i.e.,
a dodecagon). For the construction to work, we need at least a $9$-sided polygon; $12$
makes it visually clearer.
The cut forest ${\mathcal F}$ is as illustrated in Fig.~\figref{BaseCex_22}:
one tree in ${\mathcal F}$ is a $2$-path from the center of ${\mathcal C}$,
and all the other trees are single segments. The key property is that each
segment creates a very shallow angle with ${\partial \mathcal C}$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\linewidth]{Figures/BaseCex_22}
\caption{No edge of $C_\perp$ is a convex hull edge. The cut forest $F$ is shown in red,
${\partial \mathcal C}$ in blue, and the developed edges of $C_\perp$ in black.}
\figlab{BaseCex_22}
\end{figure}
We arrange near-zero curvature at the central vertex, and all the other vertices have
the same curvature ${\omega} > 0$. Because of the shallow angle they form with ${\partial \mathcal C}$,
the opening gap caused by cutting each segment is nearly orthogonal to the cut segment.
With the internal angle of a $12$-gon equal to $\frac{5}{6} \pi$, the exterior angle between
$B$ and a reflected copy $B'$ of $B$ is $\frac{2}{6} \pi = 60^\circ < 90^\circ$.
This allows the orthogonally jutting rotation of each boundary edge to penetrate into a
reflected $B'$, reflected about the next edge $e$ counterclockwise,
as illustrated.
There is no impediment to realizing this example in 3D so that the curvatures ${\omega}$ suffice
to make every boundary edge of $C_\perp$ lead to overlap with $B'$. Moreover, this could
be accomplished for an arbitrarily flat ${\mathcal C}$ by increasing the number of sides of the $n$-gon base
so that even a very small ${\omega}$ results in overlap.
These claims will not be justified further, as they only serve to motivate the next steps.
\subsection{Quadrant-based forest $F$}
\seclab{Quadrant-based}
The reason the preceding counterexample does not present an insurmountable
obstacle is that the spanning forest $F$ selected in the planar projection graph $G$ of ${\mathcal C}$
is not arbitrary, but instead is based on the quadrants
illustrated in Fig.~{\phi}igref{QuadGap}.
We now show that the quadrant-based forest $F$ leads to an edge $e$ of $C_\perp$
to which to attach the reflected base $B'$.
A reminder on the notation we are employing: ${\mathcal C}$ is the convex cap in $\mathbb{R}^3$,
$C$ is its projection on the $xy$-plane, $F$ the spanning forest in that plane,
${\mathcal F}$ the lift of the forest, and $C_\perp$ is the development of the cap ${\mathcal C}$
after cutting ${\mathcal F}$.
Let $v \in {\partial C}$ be a vertex on the boundary of the projection $C$ that is
a root of a tree in the forest $F$. Rather than
viewing the lift, cut, and development as producing $C_\perp$,
it will help to view the movement of $v$ in the plane caused by opening
the curvatures along the cut paths terminating at $v$.
As we saw in Section~8
of~\cite{o-eunfcc-17},
we can view each cut path $Q$ to $v$ as two planar polygonal chains $L$ and $R$
which are initially identical, and then open at each vertex $v_i$ along
$Q$ by the curvatures ${\omega}_i$. Here we are only interested in the final planar displacement of
the root $v$, the endpoint of the chain. Let $v$ and $v'$ be the original and displaced
versions of $v$, i.e., the last vertices of $L$ and $R$. The {\varepsilon}mph{gap segment} $v v'$ represents
the gap at the boundary of $C_\perp$ caused by opening the cuts in $F$ to $v$,
visible, for example, in Fig.~{\phi}igref{AM_20_30_s2_v83_ell_Hull}.
The gap segment $v v'$ is caused by the composition of several (small) rotations about
different centers, the vertices along $Q$. It is well-known that $v v'$ is equivalent
to a single rotation about a (generally) different center $c$,
which we'll call the {\varepsilon}mph{composite center of rotation}.
We claim that, for sufficiently small ${\omega}_i$, $c$ is either inside the convex hull of $Q$,
or arbitrarily close to the boundary of the hull.
This claim is justified in the Appendix by
Lemma~{\lambda}emref{center-of-gravity}.
Returning to Fig.~{\phi}igref{BaseCex_22}, the centers of rotation in $F$ were
arranged so that the gap segments were nearly orthogonal to ${\partial C}$,
so that they ``jutted out'' and caused overlap with the reflected $B'$.
We now argue first, that a different arrangement of centers of rotation
can produce ``safe'' gap segments, and second, that this can be achieved by
a quadrant-based spanning forest $F$.
Arrange an edge $e=(v,u)$ of ${\partial C}$ to be topmost and horizontal
and crossing the vertical quadrant boundary, and suppose
both $v$ and $u$ are roots of cut trees in $F$.
We call $e$ a locally {\varepsilon}mph{safe edge} if the composite centers of rotation $c_v$ and $c_u$
for the trees incident to $v$ and $u$ fall underneath $e$, as illustrated in
Fig.~{\phi}igref{SafeEdge}. For then the gap segments angle down below $e$,
making $e$ a safe candidate for the attachment of $B'$.
{\beta}egin{figure}[htbp]
\centering
\includegraphics[width=0.9{\lambda}inewidth]{Figures/SafeEdge}
\caption{Safe edge $e$.}
{\phi}iglab{SafeEdge}
{\varepsilon}nd{figure}
Returning to Fig.~\figref{QuadGap}, we now argue that what was there called the gap edge $g$
of ${\partial C}$ can serve as a safe edge $e$.
As in that figure, let $q$ be the quadrants' origin. The shortest path ${\gamma}$ on ${\mathcal C}$ from $q$ to ${\partial \mathcal C}$
is orthogonal to a boundary edge $e$. The development of the geodesic ${\gamma}$ is a straight
line. Use this straight line as the vertical axis of the quadrants, with
$e$ horizontal.
Now, because the spanning forest algorithm grows edges within the quadrant wedges,
we are guaranteed that all edges of a tree incident to the endpoints of $e$
are slanted so that they point strictly downward, underneath $e$, as
illustrated in Fig.~\figref{QuadTrees}. The strictness follows because the wedges
are $\Delta\theta$ less than $90^\circ$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{Figures/QuadTrees}
\caption{The trees incident to the endpoints of $e$ have composite centers of rotation
underneath $e$.}
\figlab{QuadTrees}
\end{figure}
Now applying Lemma~\lemref{center-of-gravity}, we obtain that the
composite center of rotation is underneath $e$.
Even though that lemma allows the true center to be slightly outside the
hull of the rotation centers, the fact that the wedges have angle less than $90^\circ$
permits the conclusion that the true center is in the hull for sufficiently small
${\omega}_i$, and therefore strictly underneath $e$.
Thus we know that $e$ is a locally safe edge.
But now it is easy to reapply this argument to conclude that none of the gap segments
for other vertices around ${\partial C}$ are angled above the horizontal, and so $e$ is in
fact globally safe. Thus we can attach the reflected base $B'$ along $e$ without overlap.
This proves:
\begin{theorem}
A convex polyhedron consisting of a
nearly flat, acutely triangulated convex cap ${\mathcal C}$ joined along ${\partial \mathcal C}$ to a
base $B$ can be edge-unfolded without overlap,
for sufficiently small cap curvature ${\Omega}$.
\thmlab{BaseSafe}
\end{theorem}
\noindent
Using the error between the true and approximate composite rotation centers,
${\delta}=\frac{1}{2} \sum_i \ell_i {\omega}_i$ from Lemma~\lemref{RotationsCG},
and crudely summarizing this as ${\delta}=L {\omega} / 2$ for a total chain length $L$,
a calculation shows that the ``sufficiently small'' curvature requirement imposed
by the wedge slant $\Delta\theta$ is
satisfied if ${\omega} \lesssim 2 \Delta\theta$.
But already we know that
$$
{\Omega} < \pi {\Phi}^2 < \pi (0.3 \sqrt{\Delta\theta} )^2 \approx 0.28 \Delta\theta \;,
$$
and because ${\omega} < {\Omega}$ for any one tree of $F$, the curvatures are already
``sufficiently small'' from other constraints.
An illustration is shown in Fig.~\figref{CB_10_20_s3_3D_Unf}.
The selection of $e$ in this example does not follow the proof exactly,
due to limitations of my implementation
($q$ is not closest to ${\partial \mathcal C}$ and $e$ is not orthogonal to the vertical quadrant axis), but it illustrates how $e$ is locally
and indeed globally safe.
(In this and in most examples, there are many safe edges.)
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\linewidth]{Figures/CB_10_20_s3_3D_Unf}
\caption{Cap ${\mathcal C}$ (left) and an edge-unfolding (right),
including base $B$ flipped across safe edge $e$.}
\figlab{CB_10_20_s3_3D_Unf}
\end{figure}
\section*{Appendix}
We need a lemma that allows us to conclude that, for small curvatures,
the effect of the rotations along a cut path $Q$ to a boundary vertex $v \in {\partial C}$
is equivalent to one rotation from a point in the convex hull of the vertices along $Q$.
\begin{lemma}
Let $R_i({\omega}_i,p_i)$ be a two-dimensional rotation by angle ${\omega}_i \ge 0$
about point $p_i$, for $i=1,\ldots,k$.
Then, for sufficiently small ${\omega}_i$,
the result of composing the $k$ rotations $R_i$ is
equivalent to one rotation about a
\emph{center-of-gravity} rotation center:
the sum of the $p_i$
weighted by the angles:
$$
R_1({\varepsilon} {\omega}_1,p_1) \circ \cdots \circ R_k({\varepsilon} {\omega}_k,p_k)
\rightarrow R({\varepsilon} {\omega},p)
$$
as ${\varepsilon} \to 0$, where
$${\omega} = \sum_i {\omega}_i $$
and
$$p = ({\omega}_1 p_1 + \cdots + {\omega}_k p_k) / {\omega} \;.$$
\lemlab{RotationsCG}
\end{lemma}
\noindent
The role of ${\varepsilon}$ is to ensure all the angles approach $0$.
Equivalently (and more appropriately in our context), we can just think of
the ${\omega}_i$ as ``sufficiently small.''
This lemma is illustrated in Fig.~\figref{RotationCG}
for a polygonal chain.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\linewidth]{Figures/RotationCG}
\caption{Comparison of the true composite center of rotation and the
approximate center-of-gravity center. Here $R$ is fixed and $L$ obtained
by ${\omega}_i$ rotations.}
\figlab{RotationCG}
\end{figure}
\noindent
\begin{proof}
It is well-known that the composition of two rotations by angles ${\omega}_1, {\omega}_2$
about different centers $p_1, p_2$ is equivalent to
one rotation by ${\omega}_1 + {\omega}_2$ about a (generally) different center $c$.\footnote{
Unless ${\omega}_1 + {\omega}_2 = 2 \pi$, which will never occur with small rotations.}
Consequently, the same holds for the composition of $k$ rotations.
We now prove that as ${\omega}_1, {\omega}_2$ approach $0$, the center $c$ approaches the point
$p=({\omega}_1 p_1 + {\omega}_2 p_2) / ({\omega}_1 + {\omega}_2)$ on the $p_1 p_2$ segment.
Following~\cite[p.38]{n-vca-98}, we view the rotations by ${\omega}_1, {\omega}_2$
as reflections in lines separated by ${\omega}_1 /2, {\omega}_2 /2$.
Then $c$ is the intersection of two reflection lines, as illustrated in Fig.~\figref{CompRotErr}.
With $p_1=(0,0)$ and $p_2=(1,0)$, explicit calculation yields
$$c = \left( \,
\frac{\sin {\omega}_2}{\sin {\omega}_1 + \sin {\omega}_2} \, , \,
\frac{\sin {\omega}_1 \sin {\omega}_2}{\sin {\omega}_1 + \sin {\omega}_2} \,
\right) \;.
$$
From this expression and that for $p$ above, further calculation shows
that the error ${\delta} = | c - p |$ is $\frac{1}{8} ({\omega}_1 + {\omega}_2)$ for small ${\omega}_i$.
So indeed ${\delta}$ approaches zero.
Repeating the argument for $k$ rotations yields
(via a calculation not shown here)
that the error ${\delta}$ is bounded by $\frac{1}{2} \sum_i \ell_i {\omega}_i$,
where $\ell_i=| p_{i+1}-p_i |$ are the link lengths of the chain,
as ${\omega}_i \to 0$.
Thus ${\delta} \to 0$, $c$ approaches $p$, and
the claim of the lemma is established.
\end{proof}
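\noindent
The convergence asserted by the lemma is easy to check numerically. The following
numpy sketch (with arbitrary, made-up centers and curvature profile; not part of the
proof) compares the fixed point of the composed rotations with the angle-weighted
centroid:
\begin{verbatim}
import numpy as np

def compose(angles, centers):
    """Compose the planar rotations about the given centers; return (M, t)
    so that the composite map is x -> M x + t."""
    M, t = np.eye(2), np.zeros(2)
    for w, p in zip(angles, centers):
        R = np.array([[np.cos(w), -np.sin(w)], [np.sin(w), np.cos(w)]])
        M, t = R @ M, R @ t + (p - R @ p)   # rotation about p: x -> R(x-p)+p
    return M, t

rng = np.random.default_rng(1)
centers = rng.uniform(0.0, 1.0, size=(6, 2))   # chain vertices p_i (made up)
profile = rng.uniform(0.5, 1.0, size=6)        # relative curvatures omega_i
for eps in [0.1, 0.01, 0.001]:
    w = eps * profile
    M, t = compose(w, centers)
    c = np.linalg.solve(np.eye(2) - M, t)      # true composite center (fixed point)
    p = (w @ centers) / w.sum()                # center-of-gravity approximation
    print(eps, np.linalg.norm(c - p))          # error shrinks with the angles
\end{verbatim}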
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{Figures/CompRotErr}
\caption{The error ${\delta}$ between the true composite center $c$ and the center-of-gravity center.}
\figlab{CompRotErr}
\end{figure}
\noindent
An immediate implication of Lemma~\lemref{RotationsCG} is:
\begin{lemma}
Under the same assumptions, the center-of-gravity
approximate center $p$
approaches a point in the convex hull of $\{ p_1, \ldots, p_k \}$
as ${\varepsilon} \to 0$, or equivalently, as ${\omega} \to 0$.
\lemlab{center-of-gravity}
\end{lemma}
\begin{proof}
With ${\omega}_i \ge 0$, the weighted sum in Lemma~\lemref{RotationsCG}
is a convex combination of the $p_i$ points,
and so inside (or on the boundary of) the convex hull.
\end{proof}
\bibliographystyle{alpha}
\bibliography{CCapEUnf}
\end{document}
|
\begin{document}
\title{On Hadamard powers of Random Wishart matrices}
\author[J.~S.~Baslingker]{Jnaneshwar Baslingker}
\address{Department of Mathematics, Indian Institute of Science, Bangalore-560012, India.}
\email{[email protected]}
\date{}
\begin{abstract}
A famous result of Horn and Fitzgerald is that the $\beta$-th Hadamard power of any $n\times n$ positive semi-definite (p.s.d.) matrix with non-negative entries is p.s.d.\ for all $\beta\geq n-2$ and is not necessarily p.s.d.\ for $\beta< n-2$, with $\beta\notin \mathbb{N}$. In this article, we study this question for the random Wishart matrix $A_n:={X_nX_n^T}$, where $X_n$ is an $n\times n$ matrix with i.i.d.\ Gaussian entries. It is shown that the matrix obtained by applying $x\rightarrow |x|^{\alpha}$ entrywise to $A_n$ is p.s.d., with high probability, for $\alpha>1$ and is not p.s.d., with high probability, for $\alpha<1$. It is also shown that if $X_n$ are $\lfloor n^{s}\rfloor\times n$ matrices, for any $s<1$, the transition of positivity occurs at the exponent $\alpha=s$.
\end{abstract}
\keywords{ Wishart matrices, Positive semi-definite, Hadamard power}
\subjclass[2010]{60B20, 60B11 }
\maketitle
\section{Introduction}
Entrywise powers of matrices preserving positive semi-definiteness have been a topic of active research. An important theorem in this field is the result of Horn and Fitzgerald \cite{FH77}. Let $\mathbf{P}P_n^+$ denote the set of $n\times n$ p.s.d.\ matrices with non-negative entries. The Schur product theorem gives us that the $m$-th Hadamard power $A^{\circ m}:=[a_{ij}^m]$ of any p.s.d.\ matrix $A=[a_{ij}]\in \mathbf{P}P_n^+$ is again p.s.d.\ for every positive integer $m$. Horn and Fitzgerald proved that $n-2$ is the `critical exponent' for such matrices, i.e., $n-2$ is the least number for which $A^{\circ \alpha}\in \mathbf{P}P_n^+$ for every $A\in \mathbf{P}P_n^+$ and for every real number $\alpha \ge n-2$. They considered the matrix $A\in \mathbf{P}P_n^+$ with $(i,j)$-th entry $1+\varepsilon ij$ and showed that if $\alpha$ is not an integer and $0<\alpha<n-2$, then $A^{\circ \alpha}$ is not positive semi-definite for a sufficiently small positive number $\varepsilon$ (see \cite{AK}).
We consider a random matrix version of this problem. Let $X:=\left[X_{ij}\right]$ be an $n \times n$ matrix, where the $X_{ij}$ are i.i.d.\ standard normal random variables.
Define $A_{n}:=\frac{XX^T}{n}$ and let $|A_{n}|^{\circ\alpha}$ be the matrix obtained by applying the function $x\rightarrow |x|^{\alpha}$ entrywise to $A_{n}$. Let $B_{n,\alpha}:=|A_{n}|^{\circ\alpha}$.
We are interested in the values of real $\alpha>0$ for which the matrix $B_{n,\alpha}$ is positive semi-definite with high probability. Simulations show that for large values of $n$, if $\alpha>1$ then, with high probability, $B_{n,\alpha}$ is positive semi-definite, and for $\alpha<1$, with high probability, $B_{n,\alpha}$ is not positive semi-definite (as shown in Table \ref{Table 1}).
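These observations are easy to reproduce with a few lines of numpy. The sketch below is
illustrative only (smaller $n$ than in Table~\ref{Table 1}, and arbitrary values of
$\alpha$; for moderate $n$ the empirical crossover lies somewhat above the limiting
exponent, so $\alpha$ is chosen well away from the threshold):
\begin{verbatim}
import numpy as np

def smallest_eigenvalue(n, alpha, s=1.0, seed=0):
    """Smallest eigenvalue of |A_{n,s}|^{o alpha}, A_{n,s} = X_n X_n^T / n."""
    rng = np.random.default_rng(seed)
    m = int(np.floor(n ** s))
    X = rng.standard_normal((m, n))
    B = np.abs(X @ X.T / n) ** alpha       # entrywise x -> |x|^alpha
    return np.linalg.eigvalsh(B)[0]

for s, alpha in [(1.0, 0.7), (1.0, 1.5), (0.8, 0.5), (0.8, 1.2)]:
    print(s, alpha, smallest_eigenvalue(n=1500, alpha=alpha, s=s))
\end{verbatim}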
We prove that these observations from simulations are indeed correct; in fact, we prove a stronger result. Fix any $s\leq 1$ and let $m= \lfloor n^{s}\rfloor$. Let $X_n:=\left[X_{ij}\right]$ be an $m \times n$ matrix, where the $X_{ij}$ are i.i.d.\ standard normal random variables. Define $A_{n,s}:=\frac{X_nX_n^T}{n}$ and $B_{n,\alpha,s}:=|A_{n,s}|^{\circ\alpha}$. Let $\lambda_1(A)$ denote the smallest eigenvalue of $A$. We prove the following main result.
\begin{theorem}\label{main theorem}
There exists $\varepsilon_s>0$ such that for $\alpha>s$, as $n\rightarrow \infty$,
\begin{align*}
\mathbb{P}\left(\lambda_{1}(B_{n,\alpha,s})\geq \varepsilon_s\right)&\rightarrow 1.
\end{align*}
For $\alpha<s$, as $n\rightarrow \infty$,
\begin{align*}
\mathbb{P}\left(\lambda_{1}(B_{n,\alpha,s})<0\right)&\rightarrow 1.
\end{align*}
\end{theorem}
\begin{remark}
Simulations suggest that Theorem \ref{main theorem} continues to hold if the i.i.d.\ Gaussians are replaced by other i.i.d.\ random variables with finite second moment, such as Uniform$(0,1)$ or Exp$(1)$, and even by heavy-tailed distributions such as the Cauchy distribution or distributions with densities $f(x)=bx^{-1-b}$ for $x\geq 1$, always with the transition of positivity at the exponent $\alpha=s$. Note that in the last case the mean is not finite if $b$ is small. This suggests that the transition of matrix positivity occurs for a large family of distributions. In this direction we prove the proposition below, which shows that $B_{n,\alpha,s}$ is p.s.d.\ in the range $\alpha>2s$ when $X_n$ has sub-Gaussian entries.
\end{remark}
\begin{proposition}\label{alpha>2}
Let the entries of $X_n$ be i.i.d.\ sub-Gaussian random variables with mean $0$ and unit variance. Fix $\alpha>2s$ and $\varepsilon>0$. Define $B_{n,\alpha,s}$ as before. Then as $n\rightarrow \infty$
\begin{align}
\mathbb{P}\left(\lambda_{1}(B_{n,\alpha,s})\leq 1-\varepsilon\right)&\rightarrow 0, \\
\mathbb{P}\left(\lambda_{m}(B_{n,\alpha,s})\geq 1+\varepsilon\right)&\rightarrow 0.
\end{align}
\end{proposition}
\begin{table}[h!]
\centering
\begin{tabular}{ |c|c|c| }
\hline
$s$ & $\alpha$ & $\lambda_1$ \\
\hline
$1$ & $0.98$ & $-0.288$ \\
$1$ & $0.99$ & $-0.246$ \\
$1$ & $1.06$ & $0.016$ \\
$1$ & $1.07$ & $0.046$ \\
$0.8$ & $0.78$ & $-0.076$ \\
$0.8$ & $0.79$ & $-0.049$ \\
$0.8$ & $0.81$ & $0.017$ \\
$0.8$ & $0.82$ & $0.041$ \\
\hline
\end{tabular}
\caption{Table of smallest eigenvalues for varying $\alpha$ and $s$ with $n=5000$.\label{Table 1}}
\end{table}
\begin{remark}
Although Theorem \ref{main theorem} and Proposition \ref{alpha>2} hold for $m=\Theta(n^s)$, for definiteness we fix $m=\lfloor n^s\rfloor$. For $m=a\times n$ with fixed $a>0$, the transition of positivity is at exponent $1$. For the critical exponent to be less than $1$, we need $m=\Theta(n^s)$ with $s<1$, i.e., $m$ much smaller than $n$, unlike in the usual study of the spectrum of Wishart matrices.
\end{remark}
A standard way to study the distribution of eigenvalues of a random matrix is to look at the limit of the empirical spectral distributions using the method of moments. For example, Wigner's proof of the semi-circle law for the Gaussian ensemble uses this method (for more see \cite{AGZ}). In our case, the entries of the matrix $B_{n,\alpha,s}$ are sums of products of random variables, and entries on the same row or column are correlated. The entrywise absolute fractional power makes this problem intractable if we try to use the method of moments directly. As we are interested
only in the existence of negative eigenvalues, we manage to avoid computing all the moments.
\subsection{Outline of the paper:}
First we prove Proposition \ref{alpha>2} in Section \ref{sec a>2}. This is done using Gershgorin's circle theorem and Bernstein's inequality for sub-exponential random variables. Note that this proposition is not needed to prove Theorem \ref{main theorem}.
The proof of Theorem \ref{main theorem} is divided into two parts. In the first part of the proof, we consider the range $\alpha<s$. Let $C_{n,\alpha,s}:=\frac{B_{n,\alpha,s}}{n^{\frac{s-\alpha}{2}}}$. For ease of notation, we write $C_{n,\alpha,s}$ as $C_m$; it is an $m\times m$ matrix, where $m=\lfloor n^s\rfloor$.
Define the diagonal matrix $D_m$, with $D_m(i,i):=C_m(i,i)-\frac{\ell_{\alpha}}{n^{s/2}}$, and $E_m:=C_m-D_m-\frac{\ell_{\alpha}}{n^{s/2}}J_m$, where $\ell_{\alpha}$ is as defined in Subsection \ref{Notations}. We use the following lemma, whose proof is given in Section \ref{sec a<1}, to conclude that the EESD of $B_{n,\alpha,s}$ has positive weight on the negative reals.
\begin{lemma}\label{Lemma 1 of alpha<1}
Let $\bar{\mu}_{E_m}$ be the EESD of $E_m$. Then\\
i) the limit of the first moment of $\bar{\mu}_{E_m}$ is $0$,\\
ii) the limit of the second moment of $\bar{\mu}_{E_m}$ is a positive constant,\\
iii) the fourth moments of $\bar{\mu}_{E_m}$ are uniformly bounded.
\end{lemma}
Using a concentration of measure result, we show that, with high probability, $B_{n,\alpha,s}$ has negative eigenvalues. This is done in Section \ref{sec a<1}.
In the second part of the proof, we consider the range $s<\alpha$. We further divide this range by looking at $\left(\frac{k+1}{k}\right)s<\alpha$, where $k$ is an integer greater than $1$, and let $k\rightarrow\infty$. For $\left(\frac{k+1}{k}\right)s<\alpha$, we consider $C_m$, a modification of $B_{n,\alpha,s}$, whose EESD has $2k$-th moment converging to $0$, to conclude that the probability of $B_{n,\alpha,s}$ having a negative eigenvalue converges to $0$. We then let $k$ be arbitrarily large. This is done in Section \ref{sec a>1}.
\subsection{Notation}\label{Notations}
1) $m=\lfloor n^s\rfloor$.
2) $\lambda_1(A)$ and $\lambda_m(A)$ denote the smallest and largest eigenvalues of $A$, respectively.
3) $R_i$ denotes the $i$th row of $X_n$. ($R_i^T\sim N(0, I_n)$ in Sections \ref{sec a<1} and \ref{sec a>1}, but not necessarily in Section \ref{sec a>2}.)
4) $\rho_{ij}=\frac{\langle R_i,R_j\rangle}{\lVert R_i\rVert\,\lVert R_j\rVert}$.
5) $\ell_{\alpha}=\mathbb{E}[|Z|^{\alpha}]$, where $Z$ is a standard normal random variable.
6) $J_n$ is the all-ones matrix of size $n\times n$ and $I_n$ is the $n\times n$ identity matrix.
7) $\mathcal{F}_{i,j}$ is the sigma algebra generated by the $i$th and $j$th rows of $X_n$.
8) $\sigma_i=\lVert R_i\rVert/\sqrt{n}$.
9) $Y_{ij}=\mathbb{E}\left[\left(\left|\frac{\langle R_i,R_k\rangle}{\sqrt{n}}\right|^{\alpha}-\ell_{\alpha}\right)\left(\left|\frac{\langle R_k,R_j\rangle}{\sqrt{n}}\right|^{\alpha}-\ell_{\alpha}\right)\bigg|\mathcal{F}_{i,j}\right]$.
\section{Proof of Proposition \ref{alpha>2}} \label{sec a>2}
In this section we prove Proposition \ref{alpha>2}.
\begin{proof}[Proof of Proposition~\ref{alpha>2}] For ease of notation, we write $B_{n,\alpha,s}$ as $B_n$.
The diagonal entries of $B_n$ are of the form $\lefteft(\fracac{\leftangle R_i,R_i\rightangle}{n}\rightight)^{\alphaha}$ and off-diagonal entries are of the form $\lefteft|\fracac{\leftangle R_i,R_j\rightangle}{n}\rightight|^{\alphaha}$. Note that all the off-diagonal entries are identically distributed and all the diagonal entries are identically distributed.
First we give an upper bound for the probability that $\sum_{i=2}^{m}(B_n)_{1i}>\varepsilon$.
\begin{example}in{align*}
\mathbb{P}\lefteft(\sum_{i=2}^{m}(B_n)_{1i}>\varepsilon\rightight)\lefteq m \mathbb{P}\lefteft((B_n)_{12}>\fracac{\varepsilon}{m}\rightight).
\end{align*}
Note that $(B_n)_{12}$ is a function of a sum of $n$ independent sub-exponential random variables (a product of independent Gaussians is sub-exponential, Lemma $2.7.7$ of \cite{RV}). We now recall the Bernstein inequality for sub-exponential random variables from \cite{RV}.
\begin{theorem}{(Theorem $2.8.1$ of \mbox{\cite{RV}})}\label{Bernstein}
Let $X_1,X_2,\dots,X_N$ be independent, mean zero, sub-exponential random variables. Then, for every $t\geq 0$, we have
\begin{align*}
\mathbb{P}\left({\left|\sum_{i=1}^NX_i\right|\geq t}\right)\leq 2
\exp\left[-c\min{\left(\frac{t^2}{\sum_{i=1}^N\lVert X_i\rVert_{\psi_1}^2},\frac{t}{\max_i{\lVert X_i\rVert}_{\psi_1}}\right)}\right]\end{align*}
where $c>0$ is an absolute constant and $\lVert X\rVert_{\psi_1}$ is the sub-exponential norm of $X$.
\end{theorem}
Bernstein's inequality and the fact that $m=\leftfloor n ^s\rightfloor$ gives us that
\begin{example}in{align*}
\mathbb{P}\lefteft((B_n)_{12}>\fracac{\varepsilon}{m}\rightight)&=\mathbb{P}\lefteft(\lefteft|\leftangle R_1,R_2\rightangle\rightight|\geq n\lefteft(\fracac{\varepsilon}{m}\rightight)^{1/\alphaha}\rightight)\\
&\lefteq 2\exp\lefteft(-c_1n^{1-\fracac{2s}{\alphaha}}\rightight)
\end{align*}
for some constant $c_1=c_1(\varepsilon)$.
This implies that
\begin{example}in{align*}
\mathbb{P}\lefteft(\sum_{i=2}^{m}(B_n)_{1i}>\varepsilon\rightight)\lefteq 2m\exp(-c_1n^{1-\fracac{2s}{\alphaha}}).
\end{align*}
Using the identical distribution of off-diagonal entries, we get that
\begin{example}in{align}\leftabel{eq:Gersh_1}
\mathbb{P}\lefteft(\bigcup_{i=1}^{m}\lefteft(\sum_{j=1,j\neq i}^{m}(B_n)_{ij}>\varepsilon\rightight)\rightight)\lefteq 2m^2\exp(-c_1n^{1-\fracac{2s}{\alphaha}}).
\end{align}
For the diagonal entry $(B_n)_{11}$, we have
\begin{example}in{align*}
\mathbb{P}\lefteft((B_n)_{11}\lefteq 1-\varepsilon\rightight)&=\mathbb{P}\lefteft(\fracac{\leftangle R_1,R_1\rightangle}{n}\lefteq (1-\varepsilon)^{1/\alphaha}\rightight)\\
&\lefteq \mathbb{P}\lefteft({\leftangle R_1,R_1\rightangle}-n\lefteq n((1-\varepsilon)^{1/\alphaha}-1)\rightight)\\
&\lefteq 2\exp\lefteft(-c_2n\rightight),
\end{align*}
for a constant $c_2=c_2(\varepsilon,\alphaha)$. Here we have used Theorem \rightef{Bernstein} in the last inequality, as ${\leftangle R_1,R_1\rightangle}$ $-n$ is a sum of $n$ mean $0$, i.i.d, sub-exponential random variables and $t=n((1-\varepsilon)^{1/\alphaha}-1)$.
This implies that
\begin{example}in{align}\leftabel{eq:Gersh_2}
\mathbb{P}\lefteft(\bigcup_{i=1}^{m}\lefteft((B_n)_{ii}\lefteq 1-\varepsilon\rightight)\rightight)\lefteq 2m\exp(-c_2n).
\end{align}
Similarly
\begin{example}in{align}\leftabel{eq:Gersh_3}
\mathbb{P}\lefteft(\bigcup_{i=1}^{m}\lefteft((B_n)_{ii}\geq 1+\varepsilon\rightight)\rightight)\lefteq 2m\exp(-c_2n).
\end{align}
Applying the Gershgorin circle theorem (Theorem $6.1.1$ of \cite{HJ}) to $B_n$, together with \eqref{eq:Gersh_1}, \eqref{eq:Gersh_2}, and \eqref{eq:Gersh_3}, gives us that, with probability at least $1-4m^2\exp(-c_3n^{1-\frac{2s}{\alpha}})$, we have $\lambda_{1}\geq 1-2\varepsilon$ and $\lambda_{m}\leq 1+2\varepsilon$. Here $c_3>0$ depends on $\varepsilon$ and $\alpha$. As $\alpha>2s$, this completes the proof of Proposition \ref{alpha>2}.
\end{proof}
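The mechanism behind the proof is easy to visualize numerically: for $\alpha>2s$ the
Gershgorin discs of $B_n$ stay close to $1$, while for smaller $\alpha$ they are too wide
to be informative. A small numpy sketch (parameter values arbitrary):
\begin{verbatim}
import numpy as np

def gershgorin_interval(n, alpha, s, seed=0):
    """Interval [lo, hi] containing all eigenvalues of B_{n,alpha,s},
    obtained from the Gershgorin circle theorem."""
    rng = np.random.default_rng(seed)
    m = int(np.floor(n ** s))
    X = rng.standard_normal((m, n))
    B = np.abs(X @ X.T / n) ** alpha
    radii = B.sum(axis=1) - np.diag(B)     # off-diagonal row sums
    return (np.diag(B) - radii).min(), (np.diag(B) + radii).max()

print(gershgorin_interval(n=2000, alpha=1.5, s=0.5))   # alpha > 2s: concentrates near 1
print(gershgorin_interval(n=2000, alpha=0.6, s=0.5))   # alpha close to s: wide interval
\end{verbatim}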
\section{$\alpha<s$ range}\label{sec a<1}
In this section we prove Theorem \ref{main theorem} for the range $\alpha<s$. We define a few terms here which will be used in the rest of the article.
The empirical spectral distribution of a symmetric random matrix $A_n$ is the random probability measure $\mu_{A_n}:=\frac{1}{n}\sum\limits_{i=1}^n\delta_{\lambda_i}$, where the $\lambda_i$ are the eigenvalues of $A_n$. The expected empirical spectral distribution (EESD) of $A_n$ is the probability measure $\bar{\mu}_{A_n}$ such that $\int_{\mathbb{R}}f\,d\bar{\mu}_{A_n}=\mathbb{E}\left[\int_{\mathbb{R}}f\ d\mu_{A_n}\right]$ for all bounded continuous functions $f$ (for more see \cite{AGZ}). We prove the following lemma, which implies Theorem \ref{main theorem} for the range $\alpha<s$.
\begin{lemma}\label{alpha<1}
Fix $\alpha<s$. Then $\mathbb{P}(\lambda_1(C_{n,\alpha,s})<0)\rightarrow 1$, as $n\rightarrow \infty$.
\end{lemma}
\begin{example}in{proof}[Proof of Lemma ~\rightef{alpha<1}]
We complete the proof of Lemma \rightef{alpha<1} assuming Lemma \rightef{Lemma 1 of alpha<1} and then provide the proof of Lemma \rightef{Lemma 1 of alpha<1}. For the sake of contradiction assume that $\mathbb{P}\lefteft(\leftambda_1(C_m)<0\rightight)$ does not converge to $1$, then by going to a subsequence we may assume that $\exists \ \varepsilon>0$ such that
$\mathbb{P}\lefteft(\leftambda_1(C_m)>0\rightight)>\varepsilon$ and $\[\begin{example}in{aligned}r{\mu}_{E_m}$ converge weakly to some probability distribution $\mu$ (Using $(ii)$ of Lemma \rightef{Lemma 1 of alpha<1} we get the tightness of $\[\begin{example}in{aligned}r{\mu}_{E_m}$).
Now $\mu$ must have mean $0$ and positive variance. Indeed, if a sequence of probability distributions $\bar{\mu}_{E_m}$ converges weakly to $\mu$, then by Skorokhod's theorem, on some probability space there exist random variables $T_m\sim \bar{\mu}_{E_m}$ and $T\sim \mu$ such that $T_m$ converges almost surely to $T$. Now as the $\bar{\mu}_{E_{m}}$ have a uniform bound on second moments, we get that the $T_m$ are uniformly integrable. This implies that the first moment of $T$ is the limit of the first moments of the $T_m$. Similarly, as the fourth moments of the $T_m$ are uniformly bounded, the second moment of $T$ is the limit of the second moments of the $T_m$. Thus $\mu$ has mean $0$ and positive variance.
As $\mu$ has zero mean and positive variance, $\mu(-\infty,-\omega)\geq \eta$ for some $\eta,\omega>0$. This gives us that
\begin{align}\label{eq:limit}
\bar{\mu}_{E_m}(-\infty,-\omega)>\frac{\eta}{2}
\end{align}
for large enough $n$. We would like to say with high probability, empirical spectral distributions of $E_m$ also have positive weight on the negative reals. This would imply the existence of negative eigenvalues, with high probability. Here we make use of the following McDiarmid-type concentration result due to Guntuboyina and Leeb \cite{GL}. For a $n\times n$ symmetric matrix $A$, let $\mu_A$ denote the probability measure $\mu_A:=\fracac{1}{n}\sum_{i=1}^n\partialelta_{\leftambda_i}$, where $\leftambda_i$s are the eigenvalues of $A$.
Let $F_{\mu_A}$ denote the cumulative distribution function of ${\mu}_{A}$ and $F_{\mu_A}(f)=\int_{\mathbb{R}}f d\mu_{A}$. The Kolmogorov-Smirnov distance between two probability measures $\mu,\mu'$ is defined as $d_{{KS}}(\mu,\mu'):=\leftVert F_{\mu}-F_{\mu'}\rightVert_{\infty}$. Let $V_g([a,b])$ denote the total variation of the function $g$ on an interval $[a,b]$ and $V_g(\mathbb{R}):=\sup_{[a,b]}V_g([a,b])$.
\begin{example}in{theorem}[Theorem $6$ of \cite{GL}]\leftabel{Guntuboyina}
Let $M$ be a random symmetric $n\times n$ matrix that is a function of $m$ independent random quantities $Y_1,Y_2,\partialots,Y_m,$ i.e., $M=M(Y_1,Y_2,\partialots,Y_m)$. Write $M_{(i)}$ for the matrix obtained from $M$ after replacing $Y_i$ by an independent copy, i.e., $M_{(i)}=M(Y_1,\partialots,Y_{i-1},Y_i^*,Y_{i+1},\partialots,Y_m)$ where $Y_{i}^*$ is distributed as $Y_i$ and independent of $Y_1,Y_2\partialots,Y_m.$ For $S=M/\sqrt{m}$ and $S_{(i)}=M_{(i)}/\sqrt{m}$, assume that
\begin{example}in{align*}
\leftVert F_S-F_{S_{(i)}}\rightVert_\infty\lefteq \fracac{r}{n}
\end{align*}
holds (almost surely) for each $i=1,2,\partialots,m$ and for some (fixed) integer $r$. Finally, assume that $g:\mathbb{R}\rightightarrow\mathbb{R}$ is of bounded variation on $\mathbb{R}$. For each $\varepsilon>0,$ we then have
\begin{example}in{align*}
\mathbb{P}\lefteft(|F_S(g)-\mathbb{E}[F_S(g)]|\geq \varepsilon\rightight)\lefteq 2\exp\lefteft[-\fracac{n^2 2\varepsilon^2}{mr^2V_g^2(\mathbb{R})}\rightight].
\end{align*}
\end{theorem}
We apply Theorem \rightef{Guntuboyina} where $E_m$ is the matrix $M$ which is a function of the $\leftfloor n^s\rightfloor$ rows (independent) of $X_n$. In order to apply Theorem \rightef{Guntuboyina}, we need to show
\begin{example}in{align}\leftabel{rank condition}
\leftVert F_{E_m}-F_{E_{m{{(i)}}}}\rightVert_\infty\lefteq \fracac{r}{\leftfloor n^s\rightfloor}
\end{align}
almost surely. Here $E_{m(i)}$ is the matrix obtained when $i$th row of $X_n$ is replaced by an independent and identical copy. To show \eqref{rank condition}, we use that the rank($E_m-E_{m(i)})\lefteq 2$ and the
standard rank inequality (Lemma $2.5$ of \cite{CB}) which gives us
\begin{example}in{align*}
\leftVert F_{E_m}-F_{E_{m{{(i)}}}}\rightVert_\infty\lefteq \fracac{2}{\leftfloor n^s\rightfloor}.
\end{align*}
Note that $V_f(\mathbb{R})$ is finite and independent of $n$. We can now apply Theorem \rightef{Guntuboyina} to the matrices $E_m$. Using the function $f=\mathbbm{1}_{(-\infty,-\omegaga)}$ as the bounded variation function and applying Theorem \rightef{Guntuboyina}, we get
\begin{example}in{align}\leftabel{eq:concentration}
\mathbb{P}\lefteft(|F_{E_m}(f)-\mathbb{E}[F_{E_m}(f)]|\geq \eta/4\rightight)\lefteq 2\exp\lefteft(-{c\leftfloor n^s\rightfloor\eta^2}\rightight)
\end{align}
for some $c>0$.
Using \eqref{eq:limit} and \eqref{eq:concentration}, we get that, for large enough $n$,
\begin{align*}
\mathbb{P}(\mbox{fraction of eigenvalues of }E_{m} \mbox{ less than }-\omega/2\geq \eta/4)\geq 1-\frac{\varepsilon}{2}.
\end{align*}
$E_{m}$ is almost $C_{m}$: the diagonal is set to $0$ and then $\ell_{\alpha}/n^{s/2}$ is subtracted from each off-diagonal entry.
Using \eqref{eq:Gersh_3}, it can be seen that
\begin{example}in{align}
\mathbb{P}\lefteft(\bigcup_{i=1}^{m}\lefteft((C_n)_{ii}\geq n^{\fracac{\alphaha-s}{2}}(1+\varepsilon)\rightight)\rightight)\lefteq 2m\exp(-c_2n)\leftabel{eq:Gersh_4}\\
\mathbb{P}\lefteft(\bigcup_{i=1}^{m}\lefteft((D_n)_{ii}\geq n^{\fracac{\alphaha-s}{2}}\lefteft(1+\varepsilon-\fracac{\ell_{\alphaha}}{n^{\alphaha/2}}\rightight)\rightight)\rightight)\lefteq 2m\exp(-c_2n).\leftabel{eq:Gersh_5}
\end{align}
Weyl's inequality (Theorem $4.3.1$ of \cite{HJ}) bounds the amount of perturbation of eigenvalues due to perturbation of a matrix. Using Weyl's inequality, along with \eqref{eq:Gersh_5} gives that,
\begin{example}in{align*}
\mathbb{P}(\mboxox{fraction of eigenvalues of }E_{m}+D_{m} \mboxox{ less than }-\fracac{\omegaga}{2}+n^{\fracac{\alphaha-s}{2}}(1+\varepsilon-\fracac{m_{\alphaha}}{n^{\alphaha/2}})\geq \eta/4)\\
\geq 1-\fracac{\varepsilon}{2}-2m\exp(-c_2n).
\end{align*}
As rank($E_{m}+D_{m}-C_{m})=1 $ and $\alpha<s$, using rank inequality (Lemma $2.5$ of \cite{CB}) again, we get that
\begin{example}in{align*}
\mathbb{P}(\mboxox{all the eigenvalues of }C_{m}\mboxox{ are non-negative})<\fracac{\varepsilon}{2}+2m\exp(-c_2n),
\end{align*}
which contradicts the earlier assumption. This completes the proof of Lemma \rightef{alpha<1}.
\end{proof}
We now prove Lemma \rightef{Lemma 1 of alpha<1}.
\begin{example}in{proof}[Proof of Lemma ~\rightef{Lemma 1 of alpha<1}]
\textbf{Computation of moments of $\[\begin{example}in{aligned}r{\mu}_{E_m}$}: Before we start the computations, we make a note of the form of entries of $E_m$.
Diagonal entries: $(E_m)_{ii}=0$
Off diagonal entries: $(E_m)_{ij}=\fracac{1}{n^{s/2}}\lefteft(\lefteft|\fracac{\leftangle R_i,R_j\rightangle}{\sqrt{n}}\rightight|^{\alphaha}-\ell_{\alphaha}\rightight)$
We prove limits of first and second moments of $\[\begin{example}in{aligned}r{\mu}_{E_m}$ are $0$ and a positive value.
\textbf{Limit of first moments}: $\int_{\mathbb{R}}x\ d\[\begin{example}in{aligned}r{\mu}_{E_m}(x)=\mathbb{E}\lefteft[\int_{\mathbb{R}}x\ d\mu_{E_m}(x)\rightight]=\fracac{1}{m}\sum_{i=1}^m\mathbb{E}[(E_m)_{ii}]=0$. Hence
\begin{example}in{align*}
\leftim_{n\rightightarrow\infty}\int_{\mathbb{R}}x\ d\[\begin{example}in{aligned}r{\mu}_{E_m}(x)&=0
\end{align*}
\textbf{Limit of second moments}: $\int_{\mathbb{R}}x^2\ d\[\begin{example}in{aligned}r{\mu}_{E_m}(x)=\mathbb{E}\lefteft[\int_{\mathbb{R}}x^2\ d\mu_{E_m}(x)\rightight]=\fracac{1}{m}\sum_{i,j}(E_m)_{ij}^2$. As the off-diagonal entries are identically distributed, it is enough to look at the limit of
$\sum_{i=1}^m\mathbb{E}[((E_m)_{1i})^2]$.
\begin{example}in{align*}
\leftim_{n\rightightarrow\infty} \sum_{i=1}^m\mathbb{E}[((E_m)_{1i})^2]= \leftim_{n\rightightarrow\infty} (m-1) \mathbb{E}[((E_m)_{12})^2]
\end{align*}
Using the central limit theorem, the uniform bound on $\mathbb{E}\left[\left(\frac{\langle R_1,R_2\rangle}{\sqrt{n}}\right)^4\right]$ and $m=\lfloor n^s\rfloor$, it is easy to see that the limit is $\mathbb{E}\left[\left(|Z|^{\alpha}-\ell_{\alpha}\right)^2\right]$.
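This limit is easy to corroborate numerically. A small Monte Carlo sketch (sample sizes
arbitrary), using the standard formula $\ell_\alpha = 2^{\alpha/2}\Gamma((\alpha+1)/2)/\sqrt{\pi}$:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def second_moment_check(n, alpha, trials=4000, seed=0):
    """Compare E[(|<R1,R2>/sqrt(n)|^a - l_a)^2] with its limit E[(|Z|^a - l_a)^2]."""
    rng = np.random.default_rng(seed)
    l_a = 2 ** (alpha / 2) * gamma((alpha + 1) / 2) / np.sqrt(np.pi)
    R1 = rng.standard_normal((trials, n))
    R2 = rng.standard_normal((trials, n))
    inner = (R1 * R2).sum(axis=1) / np.sqrt(n)       # approx N(0,1) by the CLT
    empirical = np.mean((np.abs(inner) ** alpha - l_a) ** 2)
    limit = np.mean((np.abs(rng.standard_normal(10 ** 6)) ** alpha - l_a) ** 2)
    return empirical, limit

print(second_moment_check(n=500, alpha=0.7))
\end{verbatim}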
We now prove that the fourth moments of $\[\begin{example}in{aligned}r{\mu}_{E_m}$ are uniformly bounded.
\textbf{Uniform bound of fourth moments}:
\begin{example}in{align*}
\int_{\mathbb{R}}x^4\ d\[\begin{example}in{aligned}r{\mu}_{E_m}(x)=\fracac{1}{m}\sum_{i_1i_2i_3i_4}\mathbb{E}\lefteft[(E_m)_{i_1i_2}(E_m)_{i_2i_3}(E_m)_{i_3i_4}(E_m)_{i_4i_1}\rightight].
\end{align*}
This is a sum of expectations with each term corresponding to a closed walk of length $4$ on the complete graph $K_m$. It is enough to look at closed walks starting and ending at vertex $1$. Such walks can visit $2,3$ or $4$ different vertices, including the vertex $1$.
\begin{example}in{align*}
\int_{\mathbb{R}}x^4\ d\[\begin{example}in{aligned}r{\mu}_{E_m}(x)=\sum_{i,j,k\neq 1}\mathbb{E}\lefteft[(E_m)_{1i}^4\rightight]+\mathbb{E}\lefteft[(E_m)_{1i}^2(E_m)_{1j}^2\rightight]\\+\mathbb{E}\lefteft[(E_m)_{1i}^2(E_m)_{ij}^2\rightight]+\mathbb{E}\lefteft[(E_m)_{1i}(E_m)_{ij}(E_m)_{jk}(E_m)_{k1}\rightight]
\end{align*}
The four terms in the above equation correspond to four different types of walks as shown below.
\begin{example}in{figure}[h!]
\begin{example}in{tikzpicture}
\Vertex[label=$1$, size=0.4, x=10,y=10]{A} \Vertex[label=$i$, size=0.4, x=16,y=10]{B}
\mathbf{E}dge[label=$e_1$,bend=30,Direct](A)(B)
\mathbf{E}dge[label=$e_2$,bend=-15,Direct](B)(A)
\mathbf{E}dge[label=$e_3$,bend=-15,Direct](A)(B)
\mathbf{E}dge[label=$e_4$,bend=30,Direct](B)(A)
\end{tikzpicture}
\caption{The walk corresponding to $\mathbb{E}\lefteft[(E_m)_{1i}^4\rightight]$}
\end{figure}
\begin{example}in{figure}[h!]
\begin{example}in{tikzpicture}
\Vertex[label=$i$, size=0.4, x=5,y=10]{i} \Vertex[label=$1$, size=0.4, x=10,y=10]{1} \Vertex[label=$j$, size=0.4, x=15,y=10]{j}
\mathbf{E}dge[label=$e_1$,bend=15,Direct](1)(i)
\mathbf{E}dge[label=$e_2$,bend=15,Direct](i)(1)
\mathbf{E}dge[label=$e_3$,bend=15,Direct](1)(j)
\mathbf{E}dge[label=$e_4$,bend=15,Direct](j)(1)
\end{tikzpicture}
\caption{The walk corresponding to $\mathbb{E}\lefteft[(E_m)_{1i}^2(E_m)_{1j}^2\rightight]$}
\end{figure}
\begin{example}in{figure}[h!]
\begin{example}in{tikzpicture}
\Vertex[label=$1$, size=0.4, x=5,y=10]{1} \Vertex[label=$i$, size=0.4, x=10,y=10]{i} \Vertex[label=$j$, size=0.4, x=15,y=10]{j}
\mathbf{E}dge[label=$e_1$,bend=15,Direct](1)(i)
\mathbf{E}dge[label=$e_2$,bend=15,Direct](i)(j)
\mathbf{E}dge[label=$e_3$,bend=15,Direct](j)(i)
\mathbf{E}dge[label=$e_4$,bend=15,Direct](i)(1)
\end{tikzpicture}
\caption{The walk corresponding to $\mathbb{E}\lefteft[(E_m)_{1i}^2(E_m)_{ij}^2\rightight]$}
\end{figure}
\begin{example}in{figure}[h!]
\begin{example}in{tikzpicture}
\Vertex[label=$1$, size=0.4, x=5,y=10]{1} \Vertex[label=$i$, size=0.4, x=6,y=11]{i} \Vertex[label=$j$, size=0.4, x=7,y=10]{j}
\Vertex[label=$k$, size=0.4, x=6,y=9]{k}
\mathbf{E}dge[label=$e_1$,bend=30,Direct](1)(i)
\mathbf{E}dge[label=$e_2$,bend=30,Direct](i)(j)
\mathbf{E}dge[label=$e_3$,bend=30,Direct](j)(k)
\mathbf{E}dge[label=$e_4$,bend=30,Direct](k)(1)
\end{tikzpicture}
\caption{The walk corresponding to $\mathbb{E}\lefteft[(E_m)_{1i}(E_m)_{ij}(E_m)_{jk}(E_m)_{k1}\rightight]$}
\end{figure}
Using the fact that off-diagonal entries of $E_m$ are identically distributed, uniform bound on $\mathbb{E}\lefteft[\lefteft(\fracac{\leftangle R_1,R_2\rightangle}{\sqrt{n}}\rightight)^4\rightight]$ and central limit theorem, it can be seen that
\begin{example}in{align}\leftabel{eq:2V}
\leftim_{n\rightightarrow\infty}\sum_{i\neq 1}\mathbb{E}\lefteft[(E_m)_{1i}^4\rightight]=\leftim_{n\rightightarrow\infty}\fracac{(m-1)}{m^2}\mathbb{E}\lefteft[\lefteft(\lefteft|\fracac{\leftangle R_1,R_2\rightangle}{\sqrt{n}}\rightight|^{\alphaha}-\ell_{\alphaha}\rightight)^4\rightight]=0.
\end{align}
Using a similar argument as above it can be seen that
\begin{example}in{align}\leftabel{eq:3V}
\leftim_{n\rightightarrow\infty}\sum_{i,j\neq 1}\mathbb{E}\lefteft[(E_m)_{1i}^2(E_m)_{1j}^2\rightight]= \leftim_{n\rightightarrow\infty}\sum_{i,j\neq 1}\mathbb{E}\lefteft[(E_m)_{1i}^2(E_m)_{ij}^2\rightight]=\mathbb{E}\lefteft[\lefteft(|Z_1|^{\alphaha}-\ell_{\alphaha}\rightight)^2\lefteft(|Z_2|^{\alphaha}-\ell_{\alphaha}\rightight)^2\rightight],
\end{align}
where $Z_1,Z_2$ are i.i.d standard Gaussians.
If we prove that
\begin{example}in{align}\leftabel{eq:4V}
\leftim_{n\rightightarrow\infty}\sum_{i,j,k\neq 1}\mathbb{E}\lefteft[(E_m)_{1i}(E_m)_{ij}(E_m)_{jk}(E_m)_{k1}\rightight]=0,
\end{align}
then using \eqref{eq:2V}, \eqref{eq:3V}, \eqref{eq:4V}, we would have proved that fourth moments of $\[\begin{example}in{aligned}r{\mu}_{E_m}$ are uniformly bounded and we would be done with the proof of Lemma \rightef{alpha<1}. Note that
\begin{example}in{align}\leftabel{eq:Y}
\leftim_{n\rightightarrow\infty}\sum_{i,j,k\neq 1}\mathbb{E}\lefteft[(E_m)_{1i}(E_m)_{ij}(E_m)_{jk}(E_m)_{k1}\rightight]=
\end{align}
\begin{example}in{align*}
\leftim_{n\rightightarrow\infty} m\mathbb{E}\lefteft[\lefteft(\lefteft|\fracac{\leftangle R_1,R_2\rightangle}{\sqrt{n}}\rightight|^{\alphaha}-\ell_{\alphaha}\rightight)\lefteft(\lefteft|\fracac{\leftangle R_2,R_3\rightangle}{\sqrt{n}}\rightight|^{\alphaha}-\ell_{\alphaha}\rightight)\lefteft(\lefteft|\fracac{\leftangle R_3,R_4\rightangle}{\sqrt{n}}\rightight|^{\alphaha}-\ell_{\alphaha}\rightight)\lefteft(\lefteft|\fracac{\leftangle R_4,R_1\rightangle}{\sqrt{n}}\rightight|^{\alphaha}-\ell_{\alphaha}\rightight)\rightight]
\end{align*}
Let $\mathcal{F}_{1,3}$ denote the sigma algebra generated from the $1$st row and $3$rd row of $X_n$ and \[Y_{1,3}:=\mathbb{E}\lefteft[\lefteft(\lefteft|\fracac{\leftangle R_1,R_2\rightangle}{\sqrt{n}}\rightight|^{\alphaha}-\ell_{\alphaha}\rightight)\lefteft(\lefteft|\fracac{\leftangle R_2,R_3\rightangle}{\sqrt{n}}\rightight|^{\alphaha}-\ell_{\alphaha}\rightight)\biggr|\mathcal{F}_{1,3}\rightight].\]
Note that using independence of $2$nd row and $4$th row of $X_n$, RHS of \eqref{eq:Y} can be written as,
$\leftim_{n\rightightarrow\infty}m\mathbb{E}[Y_{1,3}^2]$.
We prove the below lemma from which it follows that $\leftim_{n\rightightarrow\infty}m\mathbb{E}[Y_{1,3}^2]=0$ and hence the fourth moments of $\[\begin{example}in{aligned}r{\mu}_{E_m}$ are uniformly bounded. Let $\rightho_{ij}:=\fracac{\leftangle R_i,R_j\rightangle}{\leftVert R_i\rightVert\leftVert R_j\rightVert}$ and ${\sigma}ma_i:=\leftVert R_i\rightVert/\sqrt{n}$.
\begin{example}in{lemma}\leftabel{main lemma_1}
$\mathbb{E}[(nY_{1,3})^k]$ is uniformly bounded by $M_k, \forall n,k\in\mathbb{N}$, where $M_k>0$ are some constants dependent on $k$.
\end{lemma}
\begin{example}in{proof}
\begin{example}in{align*}
Y_{1,3}&=\mathbb{E}\lefteft[\lefteft(\lefteft|\fracac{\leftangle R_1,R_2\rightangle}{\sqrt{n}}\rightight|^{\alphaha}-\ell_{\alphaha}\rightight)\lefteft(\lefteft|\fracac{\leftangle R_2,R_3\rightangle}{\sqrt{n}}\rightight|^{\alphaha}-\ell_{\alphaha}\rightight)\biggr|\mathcal{F}_{1,3}\rightight]\\
&=\mathbb{E}\lefteft[{\sigma}ma_1^{\alphaha}\lefteft(\lefteft(\lefteft|\fracac{\leftangle R_1,R_2\rightangle}{{\sigma}ma_1\sqrt{n}}\rightight|^{\alphaha}-\ell_{\alphaha}\rightight)+\lefteft(\ell_{\alphaha}-\fracac{\ell_{\alphaha}}{{\sigma}ma_1^{\alphaha}}\rightight)\rightight){\sigma}ma_3^{\alphaha}\lefteft(\lefteft(\lefteft|\fracac{\leftangle R_2,R_3\rightangle}{{\sigma}ma_3\sqrt{n}}\rightight|^{\alphaha}-\ell_{\alphaha}\rightight)+\lefteft(\ell_{\alphaha}-\fracac{\ell_{\alphaha}}{{\sigma}ma_3^{\alphaha}}\rightight)\rightight) \biggr|\mathcal{F}_{1,3}\rightight]\\
&={\sigma}ma_1^{\alphaha}{\sigma}ma_3^{\alphaha}\mathbb{E}[(|Z_1|^{\alphaha}-\ell_{\alphaha})(|Z_3|^{\alphaha}-\ell_{\alphaha})]+\ell_{\alphaha}^2({\sigma}ma_1^{\alphaha}-1)({\sigma}ma_3^{\alphaha}-1).
\end{align*}
Here $Z_1,Z_3$ are standard normal random variables (after conditioning on $\mathcal{F}_{1,3}$) with correlation coefficient $\rightho_{13}$. Note that almost surely $0<|\rightho_{13}|<1$ and hence $(Z_1,Z_3)$ have joint density.
Define a function of correlation coefficient as below,
\begin{example}in{align*}
I(\rightho):&=\fracac{1}{2\pi\sqrt{1-\rightho^2}}\int_{\mathbb{R}}\int_{\mathbb{R}}(|x|^{\alphaha}-\ell_{\alphaha})(|y|^{\alphaha}-\ell_{\alphaha})\exp\lefteft(-\fracac{1}{2(1-\rightho^2)}\lefteft(x^2+y^2-2xy\rightho\rightight)\rightight)dxdy.
\end{align*}
Note that $I(0)=0,\ I(\rightho)=I(-\rightho)$ and $I(\rightho)$ is a smooth function. Above given expansion of $Y_{1,3}$ can be written as
\begin{example}in{align*}
Y_{1,3}&={\sigma}ma_1^{\alphaha}{\sigma}ma_3^{\alphaha}\rightho_{13}^2\fracac{I(\rightho_{13})}{\rightho_{13}^2}+\ell_{\alphaha}^2({\sigma}ma_1^{\alphaha}-1)({\sigma}ma_3^{\alphaha}-1).
\end{align*}
We now show that $I(\rho)/\rho^2$ is a bounded function.
Fix $t>0$. For $|\rho|>t$, note that $I(\rho)$ is a Gaussian expectation and therefore $I(\rho)/\rho^2$ is bounded. For $|\rho|<t$, differentiating under the integral sign and applying L'Hospital's rule twice shows that $I(\rho)/\rho^2$ is bounded there as well; a numerical check of this boundedness is sketched after the proof of Lemma \ref{Lemma 1 of alpha<1}. Hence we can write,
\begin{example}in{align*}
|Y|\lefteq M{\sigma}ma_1^{\alphaha}{\sigma}ma_{3}^{\alphaha}|\rightho_{13}^2|+m_{\alphaha}^2|{\sigma}ma_1^{\alphaha}-1||{\sigma}ma_3^{\alphaha}-1|.
\end{align*}
As $\forall\alphaha<2$,
\begin{example}in{align}\leftabel{alpha<2 inequality}
|{\sigma}ma_1^{\alphaha}-1|\lefteq \lefteft|\fracac{\leftangle R_1,R_1\rightangle}{n}-1\rightight|\lefteq \fracac{1}{\sqrt{n}}\lefteft|\fracac{\leftangle R_1,R_1\rightangle-n}{\sqrt{n}}\rightight|.
\end{align}
As a result we can write,
\begin{example}in{align*}
|nY|\lefteq M {\sigma}ma_1^{\alphaha}{\sigma}ma_{3}^{\alphaha}n{\rightho_{13}^2}+\lefteft|\fracac{\leftangle R_1,R_1\rightangle-n}{\sqrt{n}}\rightight|\lefteft|\fracac{\leftangle R_3,R_3\rightangle-n}{\sqrt{n}}\rightight|.
\end{align*}
It is easy to see that, the $k$th moments of ${\sigma}ma_1^{\alphaha},{\sigma}ma_{3}^{\alphaha},n{\rightho_{13}^2},\lefteft|\fracac{\leftangle R_1,R_1\rightangle-n}{\sqrt{n}}\rightight|$ are uniformly bounded by some constant, $\forall n\in \mathbb{N}$ and hence $k$th moments of $nY$ are also uniformly bounded . This completes the proof of Lemma \rightef{main lemma_1}.
\end{proof}
This proves that the fourth moments are uniformly bounded. This completes the proof of Lemma \rightef{Lemma 1 of alpha<1}.
\end{proof}
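The boundedness of $I(\rho)/\rho^2$ used in the proof of Lemma \ref{main lemma_1} can also
be checked numerically. A quadrature sketch (the domain is truncated to $[-8,8]^2$, which
is harmless for the Gaussian tails; the value $\alpha=0.7$ is arbitrary):
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma

alpha = 0.7
l_a = 2 ** (alpha / 2) * gamma((alpha + 1) / 2) / np.sqrt(np.pi)   # E|Z|^alpha

def I(rho):
    """I(rho) = E[(|X|^a - l_a)(|Y|^a - l_a)], (X, Y) bivariate normal, corr rho."""
    def f(y, x):
        q = (x * x + y * y - 2 * rho * x * y) / (2 * (1 - rho ** 2))
        dens = np.exp(-q) / (2 * np.pi * np.sqrt(1 - rho ** 2))
        return (abs(x) ** alpha - l_a) * (abs(y) ** alpha - l_a) * dens
    val, _ = dblquad(f, -8, 8, -8, 8)
    return val

for rho in [0.4, 0.2, 0.1, 0.05]:
    print(rho, I(rho) / rho ** 2)      # the ratio stays bounded as rho -> 0
\end{verbatim}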
\section{$\alpha>s$ range} \label{sec a>1}
In this section we prove Theorem \ref{main theorem} for the range $\alpha>s$. We prove the lemma below, which implies Theorem \ref{main theorem} for this range of $\alpha$.
\begin{lemma}\label{alpha>1}
Fix $s<\alpha$ and $0<\varepsilon<1/2$. Then $\mathbb{P}(\lambda_1(B_{n,\alpha,s})>\varepsilon)\rightarrow 1$, as $n\rightarrow \infty$.
\end{lemma}
\begin{example}in{proof}[Proof of Lemma ~\rightef{alpha>1}]
For ease of notation, we write $B_{n,\alpha,s}$ as $B_m$.
Define a diagonal matrix $D_m$ such that $D_m(i,i)=B_m(i,i)-\fracac{\ell_{\alphaha}}{n^{\alphaha/2}}$. Let $C_m:=B_m-\lefteft(\fracac{\ell_{\alphaha}}{n^{\alphaha/2}}\rightight)J_m-D_m$. Note that $C_m(i,j)=\fracac{1}{n^{\alphaha/2}}\lefteft(\lefteft|\fracac{\leftangle R_i,R_j\rightangle}{\sqrt{n}}\rightight|^{\alphaha}-\ell_{\alphaha}\rightight)$ and the diagonal entries of $C_m$ are zero.
We first show that $\mathbb{P}(\lambda_1(C_m)\leq -1+2\varepsilon)\rightarrow 0$; this will complete the proof of Lemma \ref{alpha>1}. Indeed,
using Theorem \ref{Bernstein}, we have
\begin{example}in{align}\leftabel{eq:diag}
\mathbb{P}\lefteft(\bigcup_{i=1}^{ m}\lefteft((D_m)_{ii}\lefteq 1-\varepsilon\rightight)\rightight)\lefteq 2m\exp(-c_3n),
\end{align}
for some constant $c_3>0$ depending on $\alpha$. To get the matrix $B_m$, we add $D_m+(\ell_{\alpha}/n^{\alpha/2})J_m$ to $C_m$.
Using Weyl's inequality (Theorem $4.3.1$ of \cite{HJ}), we get
\begin{example}in{align}\leftabel{eq:weyl_2}
\mathbb{P}\lefteft(\leftambda_1(B_m)- \leftambda_1(C_m)< 1-\varepsilon\rightight)\lefteq 2m\exp(-c_3n).
\end{align}
The above inequality shows that, with high probability, the eigenvalues of $B_m$ are at least $1-\varepsilon$ larger than those of $C_m$. It therefore remains to prove that $\mathbb{P}(\lambda_1(C_m)\leq -1+2\varepsilon)\rightarrow 0$.
Choose $k$ such that $\alphaha>\lefteft(\fracac{k+1}{k}\rightight)s$.
\begin{example}in{align*}
\mathbb{P}(\leftambda_1(C_m)\lefteq -1+2\varepsilon)&\lefteq \mathbb{P}((\leftambda_1(C_m))^{2k}\geq (1-2\varepsilon)^{2k})\\
&\lefteq \fracac{\mathbb{E}[\mathrm{Tr}(C_m^{2k})]}{(1-2\varepsilon)^{2k}}.
\end{align*}
We prove that $\mathbb{E}[\mathrm{Tr}(C_m^{2k})]\rightightarrow 0$, where $\alphaha>\lefteft(\fracac{k+1}{k}\rightight)s$. This completes the proof of the theorem.
We state a lemma here which generalizes Lemma \rightef{main lemma_1}. Let $p\in \mathbb{N}_{\geq 3}.$
Define $$G:={n^{(2(p-3)+2)\alphaha/2}}\mathbb{E}[C_{12}C_{13}C_{14}^2C_{15}^2\partialots C_{1p}^2\bigr|\mathcal{F}_{2,3,\partialots,p}]$$
\begin{example}in{lemma}\leftabel{main lemma_2}
$\mathbb{E}[(nG)^k]$ is uniformly bounded by constant $M_k$ for all $k\in\mathbb{N}$.
\end{lemma}
\begin{proof}
Let $W_{12}:=\bigg(\bigg|\frac{\langle R_1,R_2\rangle}{\sigma_2\sqrt{n}}\bigg|^{\alpha}-\ell_{\alpha}\bigg)+ \frac{\ell_{\alpha}}{\sigma_2^{\alpha}}\bigg(\sigma_2^{\alpha}-1\bigg)$. Then
\begin{align*}
G=\sigma_2^{\alpha}\sigma_3^{\alpha}\sigma_4^{2\alpha}\sigma_5^{2\alpha}\dots\sigma_p^{2\alpha}\,\mathbb{E}\bigg[W_{12}W_{13}W_{14}^2W_{15}^2\dots W_{1p}^2\biggr|\mathcal{F}_{2,3,\dots,p}\bigg].
\end{align*}
Due to \eqref{alpha<2 inequality}, the term $(\sigma_2^\alpha-1)$ is of the order of $1/\sqrt{n}$. All moments of $\sigma_2^{\alpha}$ are uniformly bounded. So for $\mathbb{E}[(nG)^k]$ to be uniformly bounded, it is enough to prove that the $k$-th moments of
\begin{align*}
n\mathbb{E}\left[\bigg(\bigg|\frac{\langle R_1,R_2\rangle}{\sigma_2\sqrt{n}}\bigg|^{\alpha}-\ell_{\alpha}\bigg)\bigg(\bigg|\frac{\langle R_1,R_3\rangle}{\sigma_2\sqrt{n}}\bigg|^{\alpha}-\ell_{\alpha}\bigg)W_{14}^2\dots W_{1p}^2\biggr|\mathcal{F}_{2,3,\dots,p}\right]
\end{align*} and
\begin{align*}
\sqrt{n}\mathbb{E}\left[\bigg(\bigg|\frac{\langle R_1,R_2\rangle}{\sigma_2\sqrt{n}}\bigg|^{\alpha}-\ell_{\alpha}\bigg)W_{14}^2\dots W_{1p}^2\biggr|\mathcal{F}_{2,3,\dots,p}\right]
\end{align*}
are uniformly bounded for all $n\in\mathbb{N}$. We will prove that the $k$-th moment of the first quantity is uniformly bounded; a similar argument works for the second quantity.
Note that, conditional on $\mathcal{F}_{2,3,\dots,p}$, the conditional expectation $G$ is a function of standard Gaussian random variables, say $Z_2,Z_3,\dots,Z_p$, with correlation matrix $\tilde{\Sigma}=A_{p-1}A_{p-1}^T$, where $A_{p-1}$ is the $(p-1)\times n$ matrix with $A_{p-1}(i,j)=\frac{X_n(i+1,j)}{\sqrt{n}\sigma_{i+1}}$. It is easy to see that almost surely $\mathrm{rank}(A_{p-1})=p-1$ and hence $\tilde{\Sigma}$ is invertible. For any symmetric invertible matrix $\Sigma$ with $1$s on the diagonal, define
\begin{align*}
h(\Sigma):=\frac{1}{\sqrt{(2\pi)^{p-1}|\Sigma|}}\int(|x_1|^{\alpha}-\ell_{\alpha})(|x_2|^{\alpha}-\ell_{\alpha})\dots (|x_{p-1}|^{\alpha}-\ell_{\alpha})^2\exp\left(\frac{-x^T\Sigma^{-1}x}{2}\right)dx_1\dots dx_{p-1}.
\end{align*}
Here $h$ is a function of the entries above the diagonal of $\Sigma$. Using symmetry and independence, $h(I_{p-1})=0$. Expanding $W_{12}W_{13}W_{14}^2W_{15}^2\dots W_{1p}^2$ and using the fact that $(\sigma_2^\alpha-1)$ is of order $1/\sqrt{n}$, to prove that the $k$-th moments of
\begin{align*}
n\mathbb{E}\left[\bigg(\bigg|\frac{\langle R_1,R_2\rangle}{\sigma_2\sqrt{n}}\bigg|^{\alpha}-\ell_{\alpha}\bigg)\bigg(\bigg|\frac{\langle R_1,R_3\rangle}{\sigma_2\sqrt{n}}\bigg|^{\alpha}-\ell_{\alpha}\bigg)W_{14}^2\dots W_{1p}^2\biggr|\mathcal{F}_{2,3,\dots,p}\right]
\end{align*}
are uniformly bounded, it is enough to prove that the $k$-th moments of $nh(\tilde{\Sigma})$ are uniformly bounded.
It is easy to see that $h$ is a differentiable function. We make use of the multivariate mean value theorem, $|f(y)-f(x)|\leq |\nabla f(cx+(1-c)y)|\,|y-x|$ for some $0\leq c\leq 1$. Using the fact that $\sum_{i< j}\tilde{\Sigma}_{i,j}^2$ is of the order of $1/\sqrt{n}$, it is enough to show that $h(\Sigma)/\sum_{i< j}\Sigma_{i,j}^2$ is bounded.
For $\Sigma$ bounded away from the origin, using Gaussian integrals, it can be seen that $\frac{h(\Sigma)}{\sum_{i< j}\Sigma_{i,j}^2}$ is bounded. As $h(I_{p-1})=0$ at the origin, the mean value theorem and basic computations give boundedness of $h(\Sigma)/\sum_{i< j}\Sigma_{i,j}^2$ in a neighbourhood of the origin. This completes the proof of Lemma \ref{main lemma_2}.
\end{proof}
\textbf{Computation of $\mathbb{E}[\mathrm{Tr}(C_m^{2k})]$}: Consider a closed walk of length $2k$ on the complete graph $K_m$, say $i_1i_2\dots i_{2k-1}i_1$. It corresponds to the term $\mathbb{E}[C_{i_1i_2}C_{i_2i_3}\dots C_{i_{2k-1}i_1}]$ in the expansion of $\mathbb{E}[\mathrm{Tr}(C_m^{2k})]$. Thus the terms in the expansion of $\mathbb{E}[\mathrm{Tr}(C_m^{2k})]$ correspond to closed walks of length $2k$ (the starting point can be any of the $m$ vertices). As the diagonal entries are zero, the walks cannot have loops at any vertex. We first look at walks without ``leaf vertices''. By ``leaf vertices'' we mean vertices, like ``$3$'' and ``$1$'' below, which are of degree $2$ (in the graph generated by the closed walk, such vertices are leaves).
\begin{figure}[h!]
\includegraphics[scale=0.7]{3V_1.png}
\centering
\caption{The vertices $1,3$ are leaf vertices}
\end{figure}
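For instance, with $2k=4$, the closed walk $2\,1\,2\,3\,2$ generates a path with $1$ and $3$ as leaf vertices (as in the figure above) and contributes the term $\mathbb{E}[C_{12}^{2}C_{23}^{2}]$ to the expansion, whereas the walk $1\,2\,1\,2\,1$ uses only the edge $\{1,2\}$ and contributes $\mathbb{E}[C_{12}^{4}]$.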
So we look at closed walks of length $2k$ without loops and leaf vertices. As the off-diagonal entries of $C_m$ are of the order $1/n^{\alpha/2}$ and $\alpha>\left(\frac{k+1}{k}\right)s$, the sums of expectations corresponding to walks visiting $k+l$ vertices with $l\leq 1$ (each vertex can be chosen in at most $\lfloor n^s\rfloor$ ways) go to $0$. So it is enough to look at walks visiting at least $k+2$ vertices.
Closed walks of length $2k$ visiting $k+l$ vertices, $l\geq 2$, must have at least $2l$ vertices of degree $2$ (none of which are leaf vertices), as shown below. This is because, for a closed walk, the degree of every vertex is even and the sum of the degrees of the vertices equals twice the total number of edges.
There will be a factor $C_{i,j}C_{j,k}$ when expanding $\mathrm{Tr}(C_m^{2k})$ as a sum of products of entries of $C_m$. This factor shows up because the vertex $j$ has degree $2$. We would like to condition on the rows $i,k$ of $X_n$ and use Lemma \ref{main lemma_1}.
It could happen that more than one degree-$2$ vertex, say $t$ of them, come together in series. In such a case we condition as shown in the example below.
\begin{figure}[h!]
\includegraphics[scale=0.7]{vertex_conditioning_3.png}
\centering
\end{figure}
Suppose there is a path traversing the vertices $a$ through $e$, as shown above, where the degrees of both $a$ and $e$ are at least $4$ and $b,c,d$ are all degree-$2$ vertices. Here degrees are calculated in the graph generated by the closed walk of length $2k$. In such a case we will have the factor $C_{a,b}C_{b,c}C_{c,d}C_{d,e}$ in the expansion of $\mathrm{Tr}(C_m^{2k})$ corresponding to that path. In the expectation term corresponding to such a path, we condition on the rows $a,c,e$ and use independence to get two conditional expectations $Y_{a,c},Y_{c,e}$ of the kind mentioned in Section \ref{sec a<1}. The `x' mark denotes the rows on which we are going to condition. If an even number of degree-$2$ vertices come together, we condition as shown below.
\begin{figure}[h!]
\includegraphics[scale=0.7]{vertex_conditioning_2.png}
\centering
\end{figure}
In the case shown above, the vertices $a,d$ have degree at least $4$ and $b,c$ are degree-$2$ vertices. We condition on the rows $a,c,d$. All other rows corresponding to vertices with degree greater than $2$ will also be conditioned on.
Now we look at $\mathbb{E}[\mathrm{Tr}(C_m^{2k})]$ and the walks of length $2k$, without loops and leaf vertices, visiting $k+l$ vertices. The $k+l$ vertices can be chosen in at most $\lfloor n^{s}\rfloor^{k+l}$ ways and, taking the order of $C_{i,j}$ into account, we can write
\begin{align*}
\frac{\lfloor n^{s}\rfloor^{k+l}}{n^{k\alpha}}\mathbb{E}\left[\left(\left|\frac{\langle R_1,R_2\rangle}{\sqrt{n}}\right|^{\alpha}-\ell_{\alpha}\right)\dots\right]
\end{align*}
corresponding to the walks we are interested in. Using independence and conditioning on the rows corresponding to the vertices with degree at least $4$, together with the appropriate vertices when more than one degree-$2$ vertex comes together, we get a product of at least $l$ conditional expectations like $Y_{i,j}$. Using Lemma \ref{main lemma_1}, $n^l\mathbb{E}\left[\left(\left|\frac{\langle R_1,R_2\rangle}{\sqrt{n}}\right|^{\alpha}-\ell_{\alpha}\right)\dots\right]$ is uniformly bounded. As $l$ was arbitrary and as $\alpha>\left(\frac{k+1}{k}\right)s$, we can see that the expectation corresponding to the walks without loops and leaf vertices goes to $0$ with $n$.
Now we look at walks without loops but with leaf vertices. Suppose initially we had a closed walk of length $2g$ without leaf vertices which visited $l$ different vertices. Note that each leaf vertex attached increases the length of the walk by $2$ and the number of vertices visited by $1$. Adding $t$ leaf vertices such that $g+t=k$ gives corresponding expectation terms like
\begin{align*}
\frac{\lfloor n^{s}\rfloor^{g+l+t}}{n^{(g+t)\alpha}}\mathbb{E}\left[\left(\left|\frac{\langle R_1,R_2\rangle}{\sqrt{n}}\right|^{\alpha}-\ell_{\alpha}\right)\dots\right].
\end{align*}
If one or more leaf vertices are attached to a vertex which originally has degree $2$, then we condition on the rows corresponding to all the leaf vertices together with the vertices on whose rows we were conditioning originally, as shown below.
\begin{figure}[h]
\includegraphics[scale=0.4]{vertex_conditioning_4.png}
\centering
\end{figure}
The vertices $d,c$ are leaf vertices attached to vertex $b$.
Without the vertices $d,c$ and the edges between them and $b$, the vertex $b$ would be of degree $2$. After the addition of the vertices $d,c$ and the edges, the conditioning will be done on the rows corresponding to $a,c,d,e$. This is where Lemma \ref{main lemma_2} is used. Such conditioning gives a conditional expectation factor like $G$ in Lemma \ref{main lemma_2} for every vertex to which at least one leaf vertex is attached.
If leaf vertices are attached to a vertex which is of degree $4$ or more originally, then again we condition on the rows corresponding to all leaf vertices along with the previous vertices we were conditioning on (Lemma \ref{main lemma_2} is not needed here). As $G$ is of the order of $1/n$ and $\alpha>\left(\frac{k+1}{k}\right)s$,
\begin{align*}
\frac{\lfloor n^s\rfloor^{g+t+l}}{n^{(g+t)\alpha}}\mathbb{E}\left[\left(\left|\frac{\langle R_1,R_2\rangle}{\sqrt{n}}\right|^{\alpha}-\ell_{\alpha}\right)\dots\right]\rightarrow 0.
\end{align*}
This shows that $\mathbb{E}[\mathrm{Tr}(C_m^{2k})] \rightarrow 0$ as $n\rightarrow \infty$. As such a $k$ exists whenever $s<\alpha$, this completes the proof of Lemma \ref{alpha>1}.
\end{proof}
\subsection*{Acknowledgement.} The author thanks Manjunath Krishnapur for suggesting the question addressed in this article and for several helpful discussions without which this article would not have been possible.
\end{document}
|
\begin{document}
\ \vskip 1cm
\centerline{\LARGE Monotonicity}
\centerline{\LARGE of quantum relative entropy revisited}
\centerline{\bf This paper is dedicated to Elliott Lieb and Huzihiro
Araki}
\centerline{\bf on the occasion of their 70th birthday}
\centerline{\large D\'enes Petz\footnote{E-mail: petz{@}math.bme.hu. The
work was supported by the Hungarian OTKA T032662.}}
\centerline{Department for Mathematical Analysis}
\centerline{Budapest University of Technology and Economics}
\centerline{ H-1521 Budapest XI., Hungary}
\noindent
\begin{quote}
Monotonicity under coarse-graining is a crucial property of the
quantum relative entropy. The aim of this paper is to investigate
the condition of equality in the monotonicity theorem and in its
consequences such as the strong sub-additivity of von Neumann entropy, the
Golden-Thompson trace inequality and the monotonicity of the Holevo
quantity. The relation to quantum Markov states is briefly indicated.
\end{quote}
\begin{quote}
{\it Key words: quantum states, relative entropy, strong sub-additivity,
coarse-graining, Uhlmann's theorem, $\alpha$-entropy.}
\end{quote}
\section{Introduction}
Quantum relative entropy was introduced by Umegaki \cite{Um} as a
formal generalization of the Kullback-Leibler information (in the
setting of finite von Neumann algebras). Its real importance was
understood much later and the monograph \cite{OP} already deduced
most information quantities from the relative entropy.
One of the fundamental results of quantum information theory is the
monotonicity of relative entropy under completely positive mappings.
After the discussion of some particular cases by Araki \cite{Araki1976}
and by Lindblad \cite{Lindblad},
this result was proven by Uhlmann \cite{Uh} in full generality and
nowadays it is
referred to as {\it Uhlmann's theorem}. The strong sub-additivity property of
entropy can be obtained easily from Uhlmann's theorem (see \cite{OP}
about this point and as a general reference as well) and Ruskai
discussed the relation of several basic entropy inequalities in
detail \cite{MBR0, MBR1}. The aim of this paper is to investigate
the condition
of equality in the monotonicity theorem and in its consequences.
The motivation to do this comes from the needs of quantum information
theory developed in the setting of matrix algebras in the last ten
years \cite{N-Ch}; on the other hand, the work \cite{MBR1} has also provided some
stimulation.
The paper is written entirely in a finite dimensional
setting but some remarks are made about the possible more general
scenario.
\section{Uhlmann's theorem}
Let ${\cal H}$ be a finite dimensional Hilbert space and $D_i$ be statistical
operators on ${\cal H}$ ($i=1,2$). Their {\it relative entropy} is defined as
\begin{equation}
S(D_1,D_2)=\cases{\mbox{Tr}\, D_1(\log D_1 -\log D_2)\quad & if $\mbox{supp}\, D_1
\subset \mbox{supp}\, D_2$, \cr +\infty & otherwise .}
\end{equation}
If $\lambda>0$ is the smallest eigenvalue of $D_2$, then $S(D_1,D_2)$ is
always finite and $S(D_1,D_2)\le \log n -\log \lambda$, where $n$ is the
dimension of ${\cal H}$.
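When $D_1$ and $D_2$ commute, this quantity reduces to the classical Kullback-Leibler divergence; for instance, if $D_1=\mbox{Diag}\,(p_1,\dots,p_n)$ and $D_2=\mbox{Diag}\,(q_1,\dots,q_n)$ in a common eigenbasis, then
$$
S(D_1,D_2)=\sum_{k=1}^n p_k(\log p_k-\log q_k).
$$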
Let ${\cal K}$ be another finite dimensional Hilbert
space. We call a linear mapping $T: B({\cal H})\to B({\cal K})$ {\it coarse graining}
if $T$ is trace preserving and $2$-positive, that is
\begin{equation}
\left[ \begin{array}{cc}
T(A)&T(B)\\T(C)&T(D)\end{array} \right] \ge 0 \hbox{\ if }
\left[ \begin{array}{cc}
A& B\\ C& D \end{array} \right] \ge 0\,.
\end{equation}
Such a $T$ sends a statistical operator to a statistical operator and
satisfies the Schwarz inequality $T(a^*a)\ge T(a)T(a)^*$. The concept
of coarse graining is the quantum version of the Markovian mapping in
probability theory. All the important examples are actually completely
positive. We work in this more general framework because the proofs
require only the Schwarz inequality.
$B({\cal H})$ and $B({\cal K})$ are Hilbert spaces with respect to the
Hilbert-Schmidt inner product and the adjoint of $T: B({\cal H})\to B({\cal K})$
is defined by
$$
\mbox{Tr}\, A \,T(B)= \mbox{Tr}\, T^*(A)\, B \qquad (A\in B({\cal K}), B \in B({\cal H})).
$$
The adjoint of a coarse graining $T$ is 2-positive again and
$T^*(I)=I$. It follows that $T^*$ satisfies the Schwarz inequality as
well.
The following result is known as Uhlmann's theorem.
\begin{thm} (\cite{Uh, OP}) For a coarse graining $T:B({\cal H})\to B({\cal K})$
the monotonicity
$$
S(D_1,D_2) \ge S(T(D_1),T(D_2))
$$
holds. \label{T:U}
\end{thm}
It should be noted that relative entropy was defined in the setting of
von Neumann algebras first by Umegaki \cite{Um} and extended by Araki
\cite{Araki1976}. Uhlmann's monotonicity result is more general
than the above statement.
To the best knowledge of the author, it is not known if the
monotonicity theorem holds without the hypothesis of 2-positivity.
\section{The proof of Uhlmann's theorem and its analysis}
The simplest way to analyse the equality in the monotonicity theorem is
to have a close look at the proof of the inequality. Therefore we
present a proof which is based on the {\it relative modular operator method}.
The concept of relative modular operator was developed by Araki in
the modular theory of operator algebras \cite{ArakiMasuda}, however it
could be used
very well in finite dimensional settings. For example, {\it Lieb's concavity
theorem} gets a natural proof by this method \cite{petz1986}.
Let $D_1$ and $D_2$ be density matrices acting on the Hilbert space
${\cal H}$ and assume that they are invertible. On the Hilbert space $B({\cal H})$
one can define an operator $\Delta$ as
$$
\Delta a = D_2 a D_1^{-1} \qquad (a \in B({\cal H})).
$$
This is the so-called relative modular operator and it is the product
of two commuting positive operators: $\Delta=LR$, where
$$
La= D_2a\quad {\rm and}\quad Ra=a D_1^{-1} \qquad (a \in B({\cal H})).
$$
Since $\log \Delta= \log L +\log R$, we have
$$
{S}(D_1, D_2) = \langle D^{1/2}_1, (\log D_1 -\log D_2) D^{1/2}_1\rangle=
- \langle D^{1/2}_1, (\log \Delta) D^{1/2}_1\rangle.
$$
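Indeed, since $(\log L)a=(\log D_2)a$ and $(\log R)a=-a\log D_1$ for every $a\in B({\cal H})$,
$$
- \langle D^{1/2}_1, (\log \Delta) D^{1/2}_1\rangle=
\mbox{Tr}\, D_1^{1/2}\big(D_1^{1/2}\log D_1-(\log D_2)D_1^{1/2}\big)=\mbox{Tr}\, D_1(\log D_1-\log D_2).
$$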
The relative entropy ${S}(D_1, D_2)$ is expressed by the quadratic form
of the logarithm of the relative modular operator. This is the
fundamental formula that we use (and actually this is nothing else but
Araki's definition of the relative entropy in a general von Neumann
algebra \cite{Araki1976}).
Let $T$ be a coarse graining as in Theorem \ref{T:U}.
We assume that $D_1$ and $T(D_1)$ are invertible matrices and set
$$
\Delta a = D_2 a D_1^{-1} \quad (a \in B({\cal H}))\quad\hbox{and}\quad
\Delta_0 x = T(D_2) x T(D_1)^{-1} \quad (x \in B({\cal K})).
$$
$\Delta$ and $\Delta_0$ are operators on the spaces $B({\cal H})$ and
$B({\cal K})$, respectively. Both become a Hilbert space with the
Hilbert-Schmidt inner product. The relative entropies in the theorem
are expressed by the resolvent of relative modular operators:
\begin{eqnarray*}
{S}(D_1, D_2) &=& -\langle D^{1/2}_1, (\log \Delta) D^{1/2}_1\rangle \\
&=& \int^{\infty}_{0} \langle D^{1/2}_1, (\Delta + t)^{-1} D^{1/2}_1\rangle
-(1 + t)^{-1}\,dt \\
{S}(T(D_1), T(D_2)) &=& -\langle T(D_1)^{1/2}, (\log \Delta_0) T(D_1)^{1/2}\rangle \\
&=& \int^{\infty}_{0} \langle T(D_1)^{1/2}, (\Delta_0 + t)^{-1}
T(D_1)^{1/2}\rangle-(1 + t)^{-1} \,dt,
\end{eqnarray*}
where the identity
$$
\log x= \int^{\infty}_{0} (1 + t)^{-1}-(x+t)^{-1}\,dt
$$
is used. The operator
\begin{equation}
VxT(D_1)^{1/2}=T^*(x)D_1^{1/2}
\end{equation}
is a contraction:
$$
\Vert T^*(x)D_1^{1/2}\Vert^2 = \mbox{Tr}\, D_1 T^*(x^*)T^*(x)\le
\mbox{Tr}\, D_1 T^*(x^* x)=\mbox{Tr}\, T(D_1) x^* x=\Vert xT(D_1)^{1/2}\Vert^2
$$
since the Schwarz inequality is applicable to $T^*$. A similar simple
computation gives that
\begin{equation}
V^*\Delta V \le \Delta_0\,.
\end{equation}
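The computation is, for instance, the following: for every $x\in B({\cal K})$,
$$
\langle xT(D_1)^{1/2}, V^*\Delta V\, xT(D_1)^{1/2}\rangle = \mbox{Tr}\, D_2\, T^*(x)T^*(x)^*
\le \mbox{Tr}\, D_2\, T^*(xx^*) = \mbox{Tr}\, T(D_2)\, xx^* =
\langle xT(D_1)^{1/2}, \Delta_0\, xT(D_1)^{1/2}\rangle,
$$
using the Schwarz inequality for $T^*$ once more.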
The function $y \mapsto (y+t)^{-1}$ is operator monotone (decreasing) and
operator convex, hence
\begin{equation}\label{E:op}
(\Delta_0+t)^{-1}\le (V^* \Delta V +t)^{-1}\le V^*(\Delta +t)^{-1}V
\end{equation}
(see \cite{HanPed}). Since $VT(D_1)^{1/2}= D_1^{1/2}$, this implies
\begin{equation}
\langle D^{1/2}_1, (\Delta + t)^{-1} D^{1/2}_1\rangle \ge
\langle T(D_1)^{1/2}, (\Delta_0 + t)^{-1} T(D_1)^{1/2}\rangle\,.
\end{equation}
By integrating this inequality we have the monotonicity theorem
from the above integral formulas.
Now we are in the position to analyse the case of equality. If
$$
S(D_1,D_2)= S(T(D_1),T(D_2)),
$$
then
\begin{equation}
\langle T(D_1)^{1/2}, V^*(\Delta + t)^{-1}V T(D_1)^{1/2}\rangle =
\langle T(D_1)^{1/2}, (\Delta_0 + t)^{-1} T(D_1)^{1/2}\rangle\,.
\end{equation}
for all $t>0$. This equality together with the operator inequality
(\ref{E:op}) gives
\begin{equation}
V^*(\Delta + t)^{-1} D_1^{1/2}=(\Delta_0 + t)^{-1} T(D_1)^{1/2}
\end{equation}
for all $t>0$. Differentiating with respect to $t$ we have
\begin{equation}
V^*(\Delta + t)^{-2} D_1^{1/2}=(\Delta_0 + t)^{-2} T(D_1)^{1/2}
\end{equation}
and we infer
\begin{eqnarray*}
\Vert V^*(\Delta + t)^{-1} D_1^{1/2}\Vert^2
&=&\langle (\Delta_0 + t)^{-2} T(D_1)^{1/2}, T(D_1)^{1/2}\rangle\\
&=& \langle V^*(\Delta + t)^{-2} D_1^{1/2},T(D_1)^{1/2}\rangle\\
&=& \Vert (\Delta + t)^{-1} D_1^{1/2}\Vert^2
\end{eqnarray*}
When $\Vert V^*\xi\Vert = \Vert \xi \Vert$ holds for a contraction $V$,
it follows that $VV^*\xi=\xi$. In the light of this remark we arrive at
the condition
$$
VV^*(\Delta + t)^{-1} D_1^{1/2}=(\Delta + t)^{-1} D_1^{1/2}
$$
and
\begin{eqnarray*}
V(\Delta_0 + t)^{-1} T(D_1)^{1/2}
& =& VV^*(\Delta + t)^{-1} D_1^{1/2}\\
& = & (\Delta + t)^{-1} D_1^{1/2}
\end{eqnarray*}
By Stone-Weierstrass approximation we have
\begin{equation}
V f(\Delta_0) T(D_1)^{1/2}=f(\Delta) D_1^{1/2}
\end{equation}
for continuous functions. In particular for $f(x)=x^{i t}$ we
have
\begin{equation}\label{E:ns}
T^*\big(T(D_2)^{it}T(D_1)^{-it}\big)= D_2^{it} D_1^{-it}\,.
\end{equation}
This condition is necessary and sufficient for the equality.
\begin{thm} \label{masodik}
Let $T:B({\cal H})\to B({\cal K})$ be a $2$-positive trace preserving
mapping and let $D_1, D_2 \in B({\cal H}), T(D_1), T(D_2)\in B({\cal K})$ be
invertible density matrices. Then the equality
$
S(D_1,D_2) = S(T(D_1),T(D_2))
$
holds if and only if the following equivalent conditions are satisfied:
\begin{itemize}
\item[(1)] $T^*\big(T(D_1)^{it}T(D_2)^{-it}\big)= D_1^{it} D_2^{-it}$
for all real $t$.
\item[(2)] $T^*(\log T(D_1)-\log T(D_2))=\log D_1 -\log D_2 $.
\end{itemize}
\end{thm}
The equality implies (\ref{E:ns}), which is equivalent to (1) in the theorem.
Differentiating (1) at $t=0$
we obtain the second condition, which clearly implies the equality
of the relative entropies. {
$\square$}
The above proof follows the lines of \cite{petz1988}. The original paper is
in the setting of arbitrary von Neumann algebras and hence slightly more
technical (due to the unbounded feature of the relative modular
operators). Condition (2) of Theorem \ref{masodik} appears also in the
paper \cite{MBR1} in which different methods are used.
Next we recall a property of 2-positive mappings. When $T$ is assumed to be
2-positive, the set
$$
{\cal A}_{T}:=\{X\in B({\cal H}): T(XX^*)=T(X)T(X^*) \hbox{\ and\ }
T(X^*X)=T(X^*)T(X)\}.
$$
is a *-sub-algebra of $B({\cal H})$ and
\begin{equation}
T(XY)=T(X)T(Y)\quad \hbox{for\ all\ } X\in {\cal A}_T
\hbox{\ and\ } Y\in B({\cal H}).
\end{equation}
\begin{corollary} Let $T:B({\cal H})\to B({\cal K})$ be a $2$-positive trace preserving
mapping and let $D_1, D_2 \in B({\cal H}), T(D_1), T(D_2)\in B({\cal K})$ be
invertible density matrices. Assume that $T(D_1)$ and $T(D_2)$
commute. Then the equality
$
S(D_1,D_2) = S(T(D_1),T(D_2))
$
implies that $D_1$ and $D_2$ commute.
\end{corollary}
Under the hypothesis $u_t:=T(D_1)^{it}T(D_2)^{-it}$ and $w_t:=
D_1^{it} D_2^{-it}$ are unitaries. Since $T^*$ is unital $u_t \in
{\cal A}_{T^*}$ for every $t \in {\mathbb R}$. We have
$$
w_{t+s}=T^*(u_{t+s})=T^*(u_{t}u_s)=T^*(u_t)T^*(u_s)=w_t w_s
$$
which shows that $w_t$ and $w_s$ commute and so do $D_1$ and $D_2$.{
$\square$}
\section{Consequences and related inequalities}
{\bf 3.1. The Golden-Thompson inequality.} The Golden-Thompson
inequality tells that
$$
\mbox{Tr}\, e^{A+B} \le \mbox{Tr}\, e^A e^B
$$
holds for self-adjoint matrices $A$ and $B$. It was shown in \cite{petz1988b}
that this inequality can be reformulated as a particular case of
monotonicity when $e^A/\mbox{Tr}\, e^A$ is considered as a density matrix and
$e^{A+B}/\mbox{Tr}\, e^{A+B}$ is the so-called perturbation by $B$. Corollary
5 of the original paper is formulated in the context of von Neumann
algebras but the argument was adapted to the finite dimensional case
in \cite{petz1994}, see also p. 128 in \cite{OP}. The equality holds in
the Golden-Thompson inequality if and only if $AB=BA$.
One of the possible extensions of the Golden-Thompson inequality is the
statement that the function
\begin{equation}\label{E:Fried}
p \mapsto \mbox{Tr}\, (e^{pB/2}e^{pA}e^{pB/2})^{1/p}
\end{equation}
is increasing for $p>0$. The limit at $p=0$ is $\mbox{Tr}\, e^{A+B}$
\cite{Araki1990}. It was proved by Friedland and So that the function
(\ref{E:Fried}) is strictly monotone or constant \cite{Friedland}.
The latter case corresponds to the commutativity of $A$ and $B$.
{\bf 3.2. A posteriori relative entropy.}
Let $E_j$ ($1\le j \le m$) be a partition of unity in $B({\cal H})_+$, that is
$\sum_j E_j=I$. (The operators $E_j$ could describe a measurement giving
finitely many possible outcomes.) Any density matrix $D_i \in B({\cal H})$
determines a probability distribution
$$
\mu_i=(\mbox{Tr}\, D_i E_1,\mbox{Tr}\, D_iE_2, \dots, \mbox{Tr}\, D_iE_m).
$$
It follows from Uhlmann's theorem that
\begin{equation}\label{E:egy}
S(\mu_1, \mu_2) \le S(D_1,D_2)\, .
\end{equation}
We give an example that the equality in (\ref{E:egy}) may appear
non-trivially.
\begin{example} Let $D_2=\mbox{Diag}\,(1/3,1/3,1/3)$, $D_1=\mbox{Diag}\,(1-2\mu, \mu,\mu)$,
$E_1=\mbox{Diag}\,(1,0,0)$ and
$$
E_2=\left[\begin{array}{ccc} 0 & 0 & 0 \\ 0 & x & z\\ 0 & \bar{z}&1-x
\end{array} \right],\qquad
E_3=\left[\begin{array}{ccc} 0 & 0 & 0 \\ 0 & 1-x & -z\\ 0 & -\bar{z}& x
\end{array} \right]
$$
When $0 < \mu <1/2$, $0<x<1$ and for the complex $z$ the modulus of $z$
is small enough we have a partition of unity and $S(\mu_1, \mu_2) =
S(D_1,D_2)$ holds.
\end{example}
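In this example $D_1$ and $D_2$ commute and $\mbox{Tr}\, D_1E_1=1-2\mu$, $\mbox{Tr}\, D_1E_2=\mu x+\mu(1-x)=\mu=\mbox{Tr}\, D_1E_3$, while $\mbox{Tr}\, D_2E_j=1/3$ for every $j$. Hence $\mu_1=(1-2\mu,\mu,\mu)$ and $\mu_2=(1/3,1/3,1/3)$ coincide with the lists of eigenvalues of $D_1$ and $D_2$, which is why the equality holds even though $E_2$ and $E_3$ are not diagonal.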
First we prove a lemma.
\begin{lemma} If $D_2$ is an invertible density then the
equality in (\ref{E:egy}) implies that $D_2$ commutes with $D_1, E_1,
E_2,\dots, E_m$.
\end{lemma}
The linear operator $T$ associates a diagonal matrix
$$
\mbox{Diag}\,(\mbox{Tr}\, D E_1,\mbox{Tr}\, D E_2, \dots, \mbox{Tr}\, DE_m)
$$
to the density $D$ acting on ${\cal H}$
and under the hypothesis (\ref{E:ns}) is at our disposal. We have
$$
\langle D_2^{1/2}, T^*\big(T(D_1)^{it}T(D_2)^{-it}\big)D_2^{1/2}\rangle=
\langle D_2^{1/2}, D_1^{it} D_2^{-it}D_2^{1/2}\rangle\,.
$$
Actually we
benefit from the analytic continuation and we put $-i/2$ in place of $t$.
Hence
\begin{equation}
\sum_{j=1}^m (\mbox{Tr}\, E_j D_1)^{1/2}(\mbox{Tr}\, E_j D_2)^{1/2}=\mbox{Tr}\, D_1^{1/2} D_2^{1/2}\,.
\end{equation}
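In more detail, at $t=-i/2$ condition (1) of Theorem \ref{masodik} reads $T^*\big(T(D_1)^{1/2}T(D_2)^{-1/2}\big)=D_1^{1/2}D_2^{-1/2}$ and, since $T(D_1)$ and $T(D_2)$ are the diagonal matrices with entries $\mbox{Tr}\, E_jD_1$ and $\mbox{Tr}\, E_jD_2$, pairing both sides with $D_2^{1/2}$ gives
$$
\mbox{Tr}\, T(D_2)\,T(D_1)^{1/2}T(D_2)^{-1/2}=\sum_{j=1}^m (\mbox{Tr}\, E_j D_1)^{1/2}(\mbox{Tr}\, E_j D_2)^{1/2}
\quad\mbox{and}\quad
\mbox{Tr}\, D_2\, D_1^{1/2}D_2^{-1/2}=\mbox{Tr}\, D_1^{1/2} D_2^{1/2}\,.
$$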
The Schwarz inequality tells us that
\begin{eqnarray*}
\mbox{Tr}\, D_1^{1/2} D_2^{1/2}&=&\langle D_1^{1/2}, D_2^{1/2}\rangle=\sum_{j=1}^m
\langle D_1^{1/2}E_j^{1/2}, D_2^{1/2}E_j^{1/2}\rangle\\
&\le&
\sum_{j=1}^m
\sqrt{\langle D_1^{1/2}E_j^{1/2}, D_1^{1/2}E_j^{1/2}\rangle}
\sqrt{\langle D_2^{1/2}E_j^{1/2}, D_2^{1/2}E_j^{1/2}\rangle} \\
&=&
\sum_{j=1}^m (\mbox{Tr}\, E_j D_1)^{1/2}(\mbox{Tr}\, E_j D_2)^{1/2}\,.
\end{eqnarray*}
The condition for equality in the Schwarz inequality is well-known:
There are some complex numbers $\lambda_j \in {\mathbb C}$ such that
\begin{equation}\label{E:def}
D_1^{1/2}E_j^{1/2}= \lambda_j D_2^{1/2}E_j^{1/2}\,.
\end{equation}
(Since both sides have positive trace, $\lambda_j$ are actually positive. )
The operators $E_j$ and $E_j^{1/2}$ have the same range, therefore
\begin{equation}\label{E:2}
D_1^{1/2}E_j = \lambda_j D_2^{1/2}E_j\,.
\end{equation}
Summing over $j$ we obtain
$$
D_2^{-1/2}D_1^{1/2} = \sum_{j=1}^m \lambda_j E_j\,.
$$
Here the right hand side is self-adjoint, so $D_2^{-1/2}D_1^{1/2} =
D_1^{1/2}D_2^{-1/2} $ and $D_1D_2=D_2D_1$. Now it follows from
(\ref{E:def}) that $E_j$ commutes with $D_2$. {
$\square$}
Next we analyse the equality in (\ref{E:egy}).
If $D_2$ is invertible, then the previous lemma tells us that $D_1$ and
$D_2$ are diagonal in an appropriate basis. In this case $S(\mu_1,
\mu_2)$ is determined by the diagonal elements of the matrices $E_j$.
Let ${\cal E}(A)$ denote the diagonal matrix whose diagonal coincides with
that of $A$. If $E_j$ is a partition of unity, then so is ${\cal E}(E_j)$.
However, given a partition of unity $F_j$ consisting of diagonal matrices, there
could in general be many choices of a partition of unity $E_j$ such that ${\cal E}(E_j)
=F_j$. For the moment we do not want to deal with this
ambiguity, and we
assume that we have a basis $e_1, e_2,\dots,e_n$ consisting of common
eigenvectors of the operators $D_1,D_2,{\cal E}(E_1),{\cal E}(E_2),\dots,{\cal E}(E_m)$:
$$
D_i e_k= v^i_ke_k \quad\hbox{and}\quad
{\cal E}(E_j) e_k= w_{kj} e_k \quad(i=1,2,\, j=1,2,\dots,m,\,k=1,2, \dots, n).
$$
The matrix $[w_{kj}]_{kj}$ is (row) stochastic and condition
(\ref{E:2}) gives
$$
\frac{v^1_k}{v^2_k}w_{kj}=(\lambda_j)^2 w_{kj}
$$
This means that, whenever $w_{kj}\ne 0$, the ratio ${v^1_k}/{v^2_k}$ equals $(\lambda_j)^2$ and so does not
depend on $k$. In other words, $D_1D_2^{-1}$ is constant on the support
of any $E_j$.
Let $j$ be equivalent to $k$ if the support of ${\cal E}(E_j)$ intersects
the support of ${\cal E}(E_k)$. We denote by $[j]$ the equivalence class of $j$
and let $J$ be the set of equivalence classes.
$$
P_{[j]}:=\sum_{k\in[j]} {\cal E}(E_k)
$$
must be a projection and $\{P_{[j]}: [j]\in J\}$
is a partition of unity. We deduced above that
$$
D_1D_2^{-1}P_{[j]}=\lambda_j P_{[j]}
$$
One cannot say more about the condition for equality. All these
extracted conditions hold in the above example and ${\cal E}(E_k)$'s do not
determine $E_k$'s, see the freedom for the variable $z$ in the example.
We can summarise our analysis as follows. The case of equality in
(\ref{E:egy}) implies some commutation relation and the whole problem
is reduced to the commutative case. It is not necessary that the
positive-operator-valued measure $E_j$ should have projection values.
{\bf 3.3. The Holevo bound.} Let $E_j$ ($1\le j \le m$) be a partition
of unity in $B({\cal K})_+$, $\sum_j E_j=I$. We assume that the density
matrix $D\in B({\cal H})$ is in the form of a convex combination $D=\sum_i
p_iD_i$ of other densities $D_i$. Given a coarse graining $T:B({\cal H})\to
B({\cal K})$ we can say that
our signal $i$ appears with probability $p_i$, it is encoded by the
density matrix $D_i$, after transmission the density $T(D_i)$ appears
in the output and the receiver decides that the signal $j$ was sent
with the probability $\mbox{Tr}\, T(D_i)E_j$. This is the standard scheme of
quantum information transmission. Any density matrix $D_i \in B({\cal H})$
determines a probability distribution
$$
\mu_i=(\mbox{Tr}\, T(D_i) E_1,\mbox{Tr}\, T(D_i) E_2, \dots, \mbox{Tr}\, T(D_i)E_m).
$$
on the output. The inequality
\begin{equation}\label{E:Hb}
S(\mu)-\sum_i p_i S(\mu_i) \le S(D)-\sum_i p_i S(D_i)
\end{equation}
(where $\mu:=\sum_i p_i \mu_i$ and $D:=\sum_i p_i D_i$) is the
so-called {\it Holevo bound} for the amount of information passing through the
communication channel. Note that the Holevo bound appeared before the
use of quantum relative entropy and the first proof was more complicated.
$\mu_i$ is a coarse-graining of $T(D_i)$; therefore, writing $R$ for the coarse graining $D'\mapsto \mbox{Diag}\,(\mbox{Tr}\, T(D')E_1,\dots,\mbox{Tr}\, T(D')E_m)$, so that $R(D_i)=\mu_i$ and $R(D)=\mu$, inequality
(\ref{E:Hb}) is of the form
$$
\sum_i p_iS(R(D_i), R(D)) \le \sum_i p_i S(D_i,D).
$$
On the one hand, this form shows that the bound (\ref{E:Hb}) is a
consequence of the monotonicity, on the other hand, we can make an
analysis of the equality. Since the states $D_i$ are the codes of the
messages to be transmitted, it would be too much to assume that all of
them are invertible. However, we may assume that $D$ and $T(D)$ are
invertible. Under this hypothesis Lemma 1 applies and tells us that
the equality in (\ref{E:Hb}) implies that all the operators $T(D),
T(D_i)$ and $E_j$ commute.
{\bf 3.4. $\alpha$-entropies.} The $\alpha$-divergence of the densities
$D_1$ and $D_2$ is
\begin{equation}
S_{\alpha} (D_1 , D_2) = \frac{4}{1- \alpha^2}{\mbox{Tr}\,}\:
(D_1 - D_1^{\frac{1+\alpha}{2}} D_2^{\frac{1-\alpha}{2}}),
\end{equation}
which is essentially
$$
\langle D^{1/2}_2, \Delta^{\frac{1+\alpha}{2}} D^{1/2}_2\rangle
$$
up to constants in the notation of Sect. 2. The proof of the
monotonicity works for this more general quantity with a small
alteration. What we need is
$$
\langle D^{1/2}_2, \Delta^\beta D^{1/2}_2\rangle =\frac{\sin \pi \beta}{\pi}
\int^{\infty}_{0} -t^\beta\langle D^{1/2}_2, (\Delta + t)^{-1} D^{1/2}_2\rangle+
t^{\beta-1} \,dt
$$
for $0 < \beta <1$. Therefore for $0< \alpha <2$ the proof of the above
Theorem 2 goes through for the $\alpha$-entropies. The monotonicity
holds for the $\alpha$-entropies, moreover (1) and (2) from Theorem
\ref{masodik} are necessary and sufficient for the equality.
The role of the $\alpha$-entropies is smaller than that of the relative
entropy but they are used for approximation of the relative entropy and
for some other purposes (see \cite{hape}, for example).
\section{Strong subadditivity of entropy and the Markov property}
The {\it strong subadditivity} is a crucial property of the von Neumann entropy;
it follows easily from the monotonicity of the relative entropy. (The
first proof of this property of entropy was given by Lieb and Ruskai
\cite{SSA} before Uhlmann's monotonicity theorem.)
The strong subadditivity property is related to the composition of three
different systems. It is used, for example, in the analysis of the
translation invariant states of quantum lattice systems: The proof of
the existence of the global entropy density functional is based on the
subadditivity and a monotonicity property of local entropies is
obtained by the strong subadditivity \cite{petz1996}.
Consider three Hilbert spaces, ${\cal H}_j$, $j=1,2,3$ and a
statistical operator $D_{123}$ on the tensor product ${\cal H}_1\otimes
{\cal H}_2\otimes {\cal H}_3$. This statistical operator has marginals on all
subproducts, let $D_{12}$, $D_{2}$ and $D_{23}$ be the marginals on
${\cal H}_1\otimes {\cal H}_2$, ${\cal H}_2$ and ${\cal H}_2\otimes {\cal H}_3$, respectively.
(For example, $D_{12}$ is determined by the requirement $\mbox{Tr}\, D_{123}
(A_{12} \otimes I_3)=\mbox{Tr}\, D_{12}A_{12}$ for every operator $A_{12}$ acting
on ${\cal H}_1\otimes {\cal H}_2$; $D_{2}$ and $D_{23}$ are similarly defined.)
The strong subadditivity asserts the following:
\begin{equation}\label{E:ssa}
S(D_{123})+S(D_2)\le
S(D_{12})+S(D_{23})
\end{equation}
In order to prove the strong subadditivity, one can start with the
identities
\begin{eqnarray*}
S(D_{123}, \mbox{tr}_{123})&=&S(D_{12},\mbox{tr} _{12})+S(D_{123},
D_{12}\otimes \mbox{tr} _3)
\\
S(D_2,\mbox{tr} _2)+S(D_{23},D_2\otimes \mbox{tr} _3)&=&S(D_{23},
\mbox{tr} _{23}),
\end{eqnarray*}
where $\mbox{tr}$ with a subscript denotes the density of the corresponding
tracial state, for example $\mbox{tr} _{12}=I_{12}/\dim ({\cal H}_1\otimes {\cal H}_2)$.
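These identities can be checked directly; for instance, writing $d_3=\dim{\cal H}_3$ and using $\mbox{Tr}\, D_{123}\log(D_{12}\otimes I_3)=\mbox{Tr}\, D_{12}\log D_{12}$, one has
$$
S(D_{123}, D_{12}\otimes \mbox{tr} _3)=\mbox{Tr}\, D_{123}\log D_{123}-\mbox{Tr}\, D_{12}\log D_{12}+\log d_3
=S(D_{123}, \mbox{tr}_{123})-S(D_{12},\mbox{tr} _{12}),
$$
which is the first identity.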
From these equalities we arrive at a new one,
\begin{eqnarray*}
S(D_{123}, \mbox{tr}_{123})+ S(D_2,\mbox{tr} _2) &=&
S(D_{12},\mbox{tr} _{12})+ S(D_{23}, \mbox{tr} _{23})\\ &\quad & +
S(D_{123}, D_{12}\otimes \mbox{tr} _3)-S(D_{23},D_2\otimes \mbox{tr} _3).
\end{eqnarray*}
If we know that
\begin{equation}\label{E:RSS}
S(D_{123},D_{12}\otimes \mbox{tr} _3)\ge S(D_{23}, D_2\otimes
\mbox{tr} _3)
\end{equation}
then the strong subadditivity (\ref{E:ssa}) follows. Set a linear
transformation $B({\cal H}_1\otimes {\cal H}_2\otimes {\cal H}_3)\to B({\cal H}_2\otimes
{\cal H}_3)$ as follows:
\begin{equation}\label{E:T}
T (A\otimes B\otimes C):= B\otimes C (\mbox{Tr}\, A )
\end{equation}
$T$ is completely positive and trace preserving. On the other hand,
$T(D_{123})=D_{23}$ and $T(D_{12}\otimes \mbox{tr} _3)=D_{2}\otimes \mbox{tr} _3$.
Hence the monotonicity theorem gives (\ref{E:RSS}).
This proof is very transparent and makes the equality case visible.
The equality in the strong subadditivity holds if and only if we have
equality in (\ref{E:RSS}). Note that $T$ is the partial trace over the
third system and
\begin{equation}\label{E:Tcsil}
T^*(B\otimes C)=I\otimes B \otimes C.
\end{equation}
\begin{thm}\label{equality}
Assume that $D_{123}$ is invertible. The equality holds in the strong
subadditivity (\ref{E:ssa}) if and only if the following equivalent
conditions hold:
\begin{itemize}
\item[(1)] $D_{123}^{it} D_{12}^{-it}= D_{23}^{it} D_2^{-it}$
for all real $t$.
\item[(2)] $\log D_{123}-\log D_{12}=\log D_{23} -\log D_2 $.
\end{itemize}
\end{thm}
Note that both conditions (1) and (2) implicitly contain tensor
products; all operators should be viewed in the three-fold product.
Theorem 2 applies due to (\ref{E:Tcsil}) and this is the proof. {
$\square$}
The meaning of conditions (1) and (2) in
Theorem \ref{equality} is not obvious. The easy choice is
$$
\log D_{12}=H_1+H_2+H_{12},\,\, \log D_{23}=H_2+H_3+H_{23},\,\, \log D_2=H_2
$$
for a commutative family of self-adjoint operators $H_1,H_2,H_3,H_{12},
H_{23}$ and to define $\log D_{123}$ by condition (2) itself. This
example lives in an abelian subalgebra of ${\cal H}_1\otimes {\cal H}_2\otimes
{\cal H}_3$ and a probabilistic representation can be given.
$D_{123}$ may be regarded as the joint probability distribution of some
random variables $\xi_1, \xi_2$ and $\xi_3$. In this language we can
rewrite (1) in the form
\begin{equation}
\frac{\hbox{Prob} (\xi_1= x_1, \xi_2= x_2, \xi_3= x_3 )}
{\hbox{Prob}(\xi_1= x_1, \xi_2= x_2 )}=
\frac{\hbox{Prob} (\xi_2= x_2, \xi_3= x_3 )}
{\hbox{Prob}(\xi_2= x_2 )}
\end{equation}
or in terms of conditional probabilities
\begin{equation}
\hbox{Prob} (\xi_3= x_3 | \xi_1= x_1, \xi_2= x_2 )=
\hbox{Prob} (\xi_3= x_3 | \xi_2= x_2 )\,.
\end{equation}
In this form one recognizes the Markov property for the variables
$\xi_1, \xi_2$ and $\xi_3$; subscripts 1,2 and 3 stand for ``past'',
``present'' and ``future''. It is well known that for classical
random variables the equality case in the strong subadditivity of the
entropy is equivalent to the Markov property.
The equality
\begin{equation}\label{E:inc}
S(D_{123})-S(D_{12}) = S(D_{23})-S(D_2)
\end{equation}
means an equality of entropy increments. Concerning the Markov property, see
\cite{AF} or pp. 200--203 in \cite{OP}.
\begin{thm}
Assume that $D_{123}$ is invertible. The equality holds in the strong
subadditivity (\ref{E:ssa}) if and only if there exists a completely
positive unital mapping $\gamma: B({\cal H}_1\otimes {\cal H}_2\otimes {\cal H}_3)\to
B({\cal H}_2\otimes {\cal H}_3)$ such that
\begin{itemize}
\item[(1)] $\mbox{Tr}\,(D_{123}\gamma(x))=\mbox{Tr}\,(D_{123}x)$ for all $x$.
\item[(2)] $\gamma|B({\cal H}_2)\equiv $ identity.
\end{itemize}
\end{thm}
If $\gamma$ has properties (1) and (2), then $\gamma^*(D_{23})=D_{123}$
and $\gamma^*(D_2\otimes \mbox{tr} _3)=D_{12}\otimes \mbox{tr} _3$ for its dual and
we have equality in (\ref{E:RSS}).
To prove the converse let
\begin{equation}\label{E:E}
E (A\otimes B\otimes C):= B\otimes C (\mbox{Tr}\, A /\dim {\cal H}_1)
\end{equation}
which is completely positive and unital. Set
\begin{equation}\label{E:gamma}
\gamma (\,\cdot\,):= D_{23}^{-1/2}E(D_{123}^{1/2}\,\cdot\, D_{123}^{1/2}) D_{23}^{-1/2}
\end{equation}
If the equality holds in the strong subadditivity, then property (1)
from Theorem 3 is at our disposal and it gives $\gamma(x)=x$ for $x \in
B({\cal H}_2)$.{
$\square$}
In a probabilistic interpretation $E$ and $\gamma$ are conditional
expectations. $E$ preserves the tracial state and it is a projection of
norm one. $\gamma$ leaves the state with density $D_{123}$ invariant,
however it is not a projection. (Accardi and Cecchini called this
$\gamma$ generalised conditional expectation, \cite{AC}.)
It is interesting to construct translation invariant states on the
infinite tensor product of matrix algebras (that is, quantum spin chain
over ${ \mathbb Z}$) such that condition (\ref{E:inc}) holds for all ordered
subsystems 1,2 and 3.
\end{document}
|
\begin{document}
\title{On generalized superelliptic Riemann surfaces}
\author{Ruben A. Hidalgo, Sa\'ul Quispe}
\address{Departamento de Matem\'atica y Estad\'{\i}stica, Universidad de La Frontera, Temuco, Chile.}
\email{[email protected], [email protected]}
\thanks{The first two authors were partially supported by Project Fondecyt 1150003 and Project Anillo ACT 1415 PIA-CONICYT}
\author{Tony Shaska}
\address{Department of Mathematics and Statistics, Oakland University, Rochester, MI, 48386. }
\email{[email protected]}
\begin{abstract}
A closed Riemann surface $\X$, of genus $g \geq 2$, is called a generalized superelliptic curve of level $n \geq 2$ if it admits an order $n$ conformal automorphism $\tau$ so that $\X/\langle \tau \rangle$ has genus zero and $\tau$ is central in ${\rm Aut}(\X)$; the cyclic group $H=\langle \tau \rangle$ is called a generalized superelliptic group of level $n$ for $\X$. These Riemann surfaces are natural generalizations of hyperelliptic Riemann surfaces.
We provide an algebraic curve description of these Riemann surfaces in terms of their groups of automorphisms. Also, we observe that the generalized superelliptic group $H$ of level $n$ is unique, with the exception of a very particular family of exceptional generalized superelliptic Riemann surfaces for $n$ even. In particular, the uniqueness holds if
either: (i) $n$ is odd or (ii) the quotient $\X/H$ has all its cone points of order $n$.
In the non-exceptional case, we use this uniqueness property
to observe that the corresponding curves are definable over their fields of moduli if ${\rm Aut}(\X)/H$ is neither trivial nor cyclic.
\end{abstract}
\keywords{ generalized superelliptic curves, minimal field of definition}
\maketitle
\section{Introduction}
Let $\X$ denote a closed Riemann surface of genus $g \geq 2$. A natural question is to determine the group ${\rm Aut}(\X)$ of conformal automorphisms of $\X$, which is known to be a finite group of order $\leq 84(g-1)$, and to determine algebraic curves representing $\X$ over which one may realize a given subgroup of its automorphisms. Another related question is to determine whether such a Riemann surface can be defined over its field of moduli and to describe an algebraic curve model defined over a ``minimal'' field of definition of it. These questions have been studied for a long time and complete answers to them are not known, except in certain particular cases.
For an integer $n \geq 2$ we say that $\X$ is a {\it cyclic $n$-gonal Riemann surface} if it admits an order $n$ conformal automorphism $\tau$ so that the quotient orbifold ${\mathcal O}=\X/\langle \tau\rangle$ has genus zero (so it can be identified with the Riemann sphere); $\tau$ is called a {\it $n$-gonal automorphism} and $H=\langle \tau \rangle \cong C_{n}$ a {\it $n$-gonal group} of $\X$. It is well known that $\X$ can be represented by an affine irreducible (which might have singularities) algebraic curve of the form (called a {\it cyclic $n$-gonal curve})
\begin{equation}\label{ngonal}
\quad y^{n}=\prod_{j=1}^{r}(x-a_{j})^{l_{j}},
\end{equation}
where
\begin{enumerate}
\item[(i)] $a_{1},\ldots,a_{r} \in {\mathbb C}$ are distinct,
\item[(ii)] $l_{j} \in \{1,\ldots,n-1\}$,
\item[(iii)] $\gcd(n,l_{1},\ldots,l_{r})=1$.
\end{enumerate}
In this algebraic model, $\tau(x,y)=(x,\omega_{n}y)$, where $\omega_{n}=e^{2 \pi i/n}$, and $\pi(x,y)=x$ is a regular branched cover with deck group $H$.
There exists an extensive literature about cyclic $p$-gonal Riemann surfaces when $p$ is a prime integer (see, for instance, \cite{Accola,B,BW,BCG,BCI,K,LRS,Sanjeewa,Sanjeewa-S,W1,W2}). Castelnuovo-Severi's inequality \cite{Accola1, CS} asserts that a cyclic $p$-gonal Riemann surface of genus $g>(p-1)^{2}$ has a unique $p$-gonal group (similar uniqueness results hold for $n$ not necessarily a prime when all cone points of $\X/H$ have branch order $n$ \cite{K}). In \cite{Hid:pgrupo} it is proved that a cyclic $p$-gonal Riemann surface of genus $2 \leq g < (p-1)(p-5)/10$ also has a unique $p$-gonal group (for instance, for $p \geq 11$ and $g=(p-1)/2$).
The uniqueness property of the $p$-gonal group, in those cases, has made it possible to determine the groups of automorphisms of such cyclic $p$-gonal Riemann surfaces and their equations \cite{W1,W2}.
A particular class of cyclic $n$-gonal Riemann surfaces, called {\it superelliptic curves of level $n$} \cite{BHS,HS,MPRZ},
are those for which, in the above algebraic description \eqref{ngonal}, all the exponents $l_{j}$ are equal to $1$ (and if $r \not\equiv 0 \mod(n)$, then $\gcd(n,r)=1$) and $\tau$ is assumed to be central in ${\rm Aut}(\X)$ (this is a generic condition and generalizes the hyperelliptic situation);
$\tau$ is called a {\it superelliptic automorphism of level $n$} and $H=\langle \tau\rangle$ a {\it superelliptic group of level $n$}. In this case, all cone points of $\X/H$ have order $n$.
A classification of those superelliptic curves of genus $g \leq 48$ according to their group of automorphisms was provided in \cite{MPRZ, Sanjeewa}. In \cite{HS} it has been shown that in most cases a superelliptic curve can be defined over its field of moduli and, for $g \leq 10$, those which might not be definable over their field of moduli are described. In that paper it is also noted that every superelliptic curve of level $n$ admits a unique superelliptic group of level $n$ (see also Corollary \ref{unicosuper}).
A natural generalization of superelliptic curves is obtained by not requiring the cone points of $\X/H$ to be all of order $n$. A {\it generalized superelliptic curve of level $n$} is a closed Riemann surface $\X$ admitting an order $n$ automorphism $\tau$ which is central in $G={\rm Aut}(\X)$ and such that $\X/\langle \tau \rangle$ has genus zero; we call $\tau$ a {\it generalized superelliptic automorphism of level $n$} and the cyclic group $H=\langle \tau \rangle$ a {\it generalized superelliptic group of level $n$}. The condition for $\tau$ to be central imposes some conditions on the exponents $l_{j}$ in the algebraic curve \eqref{ngonal} in terms of the reduced group of conformal automorphisms $\G=G/H$ relative to $H$ (see Lemma \ref{exponentes}). By Singerman's list of finitely maximal signatures \cite{Singerman}, generically, cyclic $n$-gonal Riemann surfaces are generalized superelliptic curves of level $n$.
Following arguments similar to those of Horiuchi for the hyperelliptic situation in \cite{Horiuchi} (because of Lemma \ref{exponentes}), in Theorem \ref{gonalescentral} we provide a description of generalized superelliptic Riemann surfaces $\X$ in terms of $\G$. In Theorem \ref{unicidad0} we study the uniqueness of the generalized superelliptic group of level $n$. It says that, except for a very particular family of exceptional generalized superelliptic Riemann surfaces of even level $n$, there is only one generalized superelliptic group of level $n$. As a consequence we have uniqueness of the generalized superelliptic group $H$ of level $n$ in the following cases: (i) $n$ is odd, (ii) the quotient $\X/H$ has all its cone points of order $n$ (for instance, when $\X$ is a superelliptic curve of level $n$). For the exceptional cases, the groups of conformal automorphisms can be described and it can be seen that they are defined over their fields of moduli.
Let us assume now that $\X$ is a non-exceptional generalized superelliptic curve of level $n$; so it has a unique generalized superelliptic group $H$ of level $n$. The uniqueness of the generalized superelliptic group $H$ in $G$ permits us to observe (see Theorem \ref{teounico}) that, in the case that $\G$ is neither trivial nor cyclic, the generalized superelliptic curve $\X$ is definable over its field of moduli (the field of definition of the point in moduli space defining the conformal class of $\X$). In the case that $\G$ is cyclic a similar answer is provided under a mild condition on the signature of $\X/G$ (see Theorem \ref{teoAQ}). At this point we should mention the paper \cite{LRS}, where the authors study the Galois descent obstruction for hyperelliptic curves of arbitrary genus whose reduced automorphism groups are cyclic (the case of fields of positive characteristic is also considered there); they give an explicit and effectively computable description of this obstruction and obtain an arithmetic criterion for the existence of a hyperelliptic descent.
\noindent
\begin{nota}
Throughout this paper we denote by $C_{n}$ the cyclic group of order $n$, by $D_{m}$ the dihedral group of order $2m$, by $A_{4}$ and $A_{5}$ the alternating groups of orders $12$ and $60$, respectively, and by $S_{4}$ the symmetric group in four letters. Also, when writing an algebraic curve $y^{n}=\prod_{j=1}^{s}(x-b_{j})^{l_{j}}$, if $l_{i}=0$, then we delete the corresponding factor $(x-b_{i})$ from it and the corresponding $\frac n {n_i}$ from the signature.
\end{nota}
\section{Preliminaries}
\subsection{Co-compact Fuchsian groups}
A co-compact Fuchsian group is a discrete subgroup $K$ of orientation-preserving isometries of the hyperbolic plane ${\mathbb H}$, so a discrete subgroup of ${\rm PSL}(2,{\mathbb R})$, so that the quotient orbifold ${\mathbb H}/K$ is compact.
The algebraic structure of a co-compact Fuchsian group $K$ is determined by its signature
\begin{equation}\label{eq1}
(\gamma; n_{1},\ldots,n_{r}),
\end{equation}
where the quotient orbifold ${\mathbb H}/K$ has genus $\gamma$ and $r$ cone points having branch orders $n_{1},\ldots, n_{r}$. The algebraic structure of $K$ is given as
\begin{equation}\label{eq2}
K=\langle a_{1},b_{1},\ldots,a_{\gamma}, b_{\gamma},c_{1},\ldots,c_{r}: c_{1}^{n_{1}}=\cdots=c_{r}^{n_{r}}=1, c_{1}\cdots c_{r}[a_{1},b_{1}] \cdots [a_{\gamma},b_{\gamma}]=1\rangle
\end{equation}
where $[a,b]=aba^{-1}b^{-1}$.
The hyperbolic area of the orbifold ${\mathbb H}/K$ is equal to
\begin{equation}\label{eq3}
\mu(K)=2\pi \left( 2\gamma-2+
\sum_{j=1}^{r}\left(1-\frac{1}{n_{j}}\right)\right).
\end{equation}
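For instance, the triangle group of signature $(0;2,3,7)$ has
$$
\mu(K)=2\pi\left(-2+\frac{1}{2}+\frac{2}{3}+\frac{6}{7}\right)=\frac{\pi}{21},
$$
the smallest possible positive value; comparing areas, this is the source of the classical bound $|{\rm Aut}(\X)|\leq 84(g-1)$ recalled in the introduction.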
If the co-compact Fuchsian group $K$ has no torsion, the quotient orbifold ${\mathbb H}/K$ is a closed Riemann surface of genus $g \geq 2$ and its signature is $(\gamma;-)$. Conversely, by the uniformization theorem, every closed Riemann surface of genus $g \geq 2$ can be represented as a quotient ${\mathbb H}/K$, where $K$ is a torsion free Fuchsian group.
Let $R(K)$ be the set of all isomorphisms $\rho:K \to {\rm PSL}(2,{\mathbb R})$ so that $\rho(K)$ is a co-compact Fuchsian group. We have a natural one-to-one map
\begin{equation}\label{eq4}
R(K) \to \left({\rm PSL}(2,{\mathbb R})\right)^{2\gamma+r}:\rho \mapsto (\rho(a_{1}),\rho(b_{1}),\ldots,\rho(a_{\gamma}),\rho(b_{\gamma}),\rho(c_{1}),\ldots,\rho(c_{r})),
\end{equation}
which permits us to see $R(K)$ as a subset of $\left({\rm PSL}(2,{\mathbb R})\right)^{2\gamma+r}$ and, in particular, to give it a topological structure.
The {\it Teichm\"uller space} of $K$ is defined as the quotient space $T(K)$ obtained by the following equivalence relation: $\rho_{1}\sim \rho_{2}$ if and only if there is some $A \in {\rm PSL}(2,{\mathbb R})$ so that $\rho_{2}(x)=A \rho_{1}(x) A^{-1}$, for every $x \in K$. One endows $T(K)$ with the quotient topology. It is known \cite{Earle2} that $T(K)$ is in fact a simply-connected manifold of complex dimension $3\gamma-3+r$.
Now, if $\widehat{K}$ is another co-compact Fuchsian group that contains $K$ as a subgroup of finite index $d$, then there is a natural embedding $T(\widehat{K})\subset T(K)$; so ${\rm dim}(T(\widehat{K})) \leq {\rm dim}(T(K))$ \cite{Greenberg}. In most situations it happens that ${\rm dim}(T({\widehat K})) < {\rm dim}(T(K))$. The exceptional cases occur when ${\rm dim}(T({\widehat K})) = {\rm dim}(T(K))$ and the list of these pairs $(K,\widehat{K})$ is provided in \cite{Singerman}. Below we recall these lists from \cite{Singerman}.
\begin{table}[htp]
\caption{normal inclusions}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$K$ & ${\widehat K}$ & $[\widehat{K}:K]$\\
\hline
$(2;-)$ & $(0;2,2,2,2,2,2)$ & $2$\\
$(1;t,t)$ & $(0;2,2,2,2,t)$ & $2$\\
$(1;t)$ & $(0;2,2,2,2t)$ & $2$\\
$(0;t,t,t,t)$ & $(0;2,2,2,t)$ & $4$\\
$(0;t_{1},t_{1},t_{2},t_{2})$ & $(0;2,2,t_{1},t_{2})$ & $2$\\
$(0;t,t,t)$ & $(0;3,3,t)$ & $3$\\
$(0;t,t,t)$ & $(0;2,3,2t)$ & $6$\\
$(0;t_{1},t_{1},t_{2})$ & $(0;2,t_{1},2t_{2})$ & $2$\\
\hline
\end{tabular}
\end{center}
\label{default1}
\end{table}
\begin{table}[htp]
\caption{non-normal inclusions}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$K$ & ${\widehat K}$ & $[\widehat{K}:K]$\\
\hline
$(0;7,7,7)$ & $(0;2,3,7)$ & $24$\\
$(0;2,7,7)$ & $(0;2,3,7)$ & $9$\\
$(0;3,3,7)$ & $(0;2,3,7)$ & $8$\\
$(0;4,8,8)$ & $(0;2,3,8)$ & $12$\\
$(0;3,8,8)$ & $(0;2,3,8)$ & $10$\\
$(0;9,9,9)$ & $(0;2,3,9)$ & $12$\\
$(0;4,4,5)$ & $(0;2,4,5)$ & $6$\\
$(0;n,4n,4n)$ & $(0;2,3,4n)$ & $6$\\
$(0;n,2n,2n)$ &$(0;2,4,2n)$ & $4$\\
$(0;3,n,3n)$ & $(0;2,3,3n)$ & $4$\\
$(0;2,n,2n)$ & $(0;2,3,2n)$ & $3$ \\
\hline
\end{tabular}
\end{center}
\label{default2}
\end{table}
\subsection{Automorphisms in terms of Fuchsian groups}
Let $\Gamma$ be a torsion free co-compact Fuchsian group and $\X={\mathbb H}/\Gamma$ be its uniformized closed Riemann surface. A finite group $G$ acts faithfully as a group of conformal automorphisms of $\X$ if there is some co-compact Fuchsian group $K$ and an epimorphism $\theta:K \to G$ whose kernel is $\Gamma$.
\section{Cyclic $n$-gonal Riemann surfaces}
Let us consider a cyclic $n$-gonal Riemann surface $\X$. By the definition there is an order $n$ conformal automorphism $\tau \in {\rm Aut}(\X)$ and a regular branched cover $\pi: \X \to \widehat{\mathbb C}$ whose deck group is $H=\langle \tau \rangle \cong C_{n}$. Let us assume the branch values of $\pi$ are given by the points $a_{1},\ldots, a_{s} \in \widehat{\mathbb C}$. Let us denote the branch order of $\pi$ at $a_{j}$ by $n_{j} \geq 2$ (which is a divisor of $n$).
\subsection{Fuchsian description of $\X$}
Let $\Gamma$ be a torsion free co-compact Fuchsian group so that $\X={\mathbb H}/\Gamma$.
In this case, there is a co-compact Fuchsian group $K$ with signature $(0;n_{1},\ldots,n_{s})$, so it has a presentation as in \eqref{eq2} with $\gamma=0$ and $r=s \geq 3$, and there is some epimorphism $\rho:K \to C_{n}=\langle \tau \rangle$ with torsion free kernel $\Gamma$. The following fact, due to Harvey, asserts that the divisors $n_{j}$ must satisfy some constraints.
\begin{thm}[Harvey's criterion \cite{Harvey}]\label{harvey}
Let $K$ be a Fuchsian group with signature $(0;n_{1},\ldots , n_{s})$, where each $n_{j} \geq 2$ is a divisor of $n$ and $s \geq 3$. Then there exists an epimorphism $\rho:K \to C_{n}$ with torsion free kernel if and only if
\begin{enumerate}
\item[(a)] $n={\rm lcm}(n_{1},\ldots , n_{j-1}, n_{j+1},\ldots , n_{s})$ for all $j$;
\item[(b)] if $n$ is even, then $\#\{ j \in \{1,\ldots,s\}: n/n_{j} \; \mbox{is odd} \}$ is even.
\end{enumerate}
\end{thm}
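For instance, for $n=4$ the signature $(0;2,2,4,4)$ satisfies both conditions: removing any one period, the least common multiple of the remaining ones is still $4$, and the number of periods with $n/n_{j}$ odd (namely the two periods equal to $4$) is even; the signature $(0;2,2,4)$, on the other hand, fails condition (a).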
Let $\rho(c_{j})=\tau^{l_{j}}$, where $c_{j}$ is as in \eqref{eq2}, for $l_{1},\ldots, l_{s} \in \{1,\ldots,n-1\}$. The condition $c_{1}\cdots c_{s}=1$ is equivalent to having $l_{1}+\cdots+l_{s} \equiv 0 \mod(n)$.
The condition for $\Gamma=\ker(\rho)$ to be torsion free is then equivalent to having
$\gcd(n,l_{j})=n/n_{j}$, for $j=1,\ldots, s$. The surjectivity of $\rho$ is equivalent to having $\gcd(n,l_{1},\ldots,l_{s})=1$, which in our case is equivalent to condition ($a$).
Condition ($b$) is then equivalent to saying that, for $n$ even, the number of odd $l_{j}$ is even, which trivially holds.
As a consequence of the above, the following holds.
\begin{cor}\label{harvey1}
The cyclic $n$-gonal Riemann surface $\X$ of genus $g \geq 2$ can be described as $\X={\mathbb H}/\Gamma$, where $\Gamma$ is a co-compact Fuchsian group being the kernel of an homomorphism
\[
\begin{split}
\rho: K=\langle c_{1},\ldots,c_{s}: c_{1}^{n_{1}} =\cdots=c_{s}^{n_{s}}=c_{1}\cdots c_{s}=1\rangle & \to C_{n}=\langle \tau \rangle, \\
\end{split}
\]
such that $\rho( c_{j} )= \tau^{l_{j}}$, $s \geq 3$, and
\begin{enumerate}
\item $l_{1},\ldots, l_{s} \in \{1,\ldots,n-1\}$,
\item $l_{1}+\cdots+l_{s} \equiv 0 \mod(n)$,
\item $n_{j}=n/\gcd(n,l_{j})$, for all $j$,
\item $\gcd(n,l_{1},\ldots,l_{s})=1$.
\end{enumerate}
\end{cor}
\begin{rem}
Now, if $G$ is a group of conformal automorphisms of the cyclic $n$-gonal Riemann surface $\X={\mathbb H}/\Gamma$, containing the cyclic group $H=\langle \tau \rangle$, then there is a co-compact Fuchsian group $N$ containing the group $K$ so that $\Gamma$ is a normal subgroup of $N$. If the signature of $K$ does not belong to the list given in \cite{Singerman} (see above), then $N=K$ for the generic situation; so $G=H$ and $\tau$ will be central. This means that, generically, a cyclic $n$-gonal Riemann surface is a generalized superelliptic curve.
\end{rem}
\subsection{Algebraic description of $\X$ and $\tau$}\label{Sec:algebra}
As a consequence of Corollary \ref{harvey1}, the following must hold.
\begin{enumerate}
\item If $a_{1},\ldots, a_{s} \in {\mathbb C}$, $s \geq 3$, then $\X$ can be described by an equation of the form
$$\X: \quad y^{n}=\prod_{j=1}^{s} (x-a_{j})^{l_{j}},$$
where
\begin{enumerate}
\item $l_{1},\ldots, l_{s} \in \{1,\ldots,n-1\}$, $\gcd(n,l_{j})=n/n_{j}$,
\item $l_{1}+\cdots+l_{s} \equiv 0 \mod(n)$,
\item $\gcd(n,l_{1},\ldots,l_{s})=1$.
\end{enumerate}
\item If one of the branched values, say $a_{s}=\infty$, then
$$\X: \quad y^{n}=\prod_{j=1}^{s-1} (x-a_{j})^{l_{j}},$$
where
\begin{enumerate}
\item $l_{1},\ldots, l_{s-1} \in \{1,\ldots,n-1\}$, $\gcd(n,l_{j})=n/n_{j}$ and $\gcd(n,l_{1}+\cdots+l_{s-1})=n/n_{s}$,
\item $l_{1}+\cdots+l_{s-1} \not\equiv 0 \mod(n)$,
\item $\gcd(n,l_{1},\ldots,l_{s-1})=1$.
\end{enumerate}
\end{enumerate}
In any of the above situations,
$\tau(x,y)=(x,\omega_{n} y)$, where $\omega_{n}=e^{2 \pi i/n}$, and $\pi(x,y)=x$. The branch order of $\pi$ at $a_{j}$ is $n_{j}=n/\gcd(n,l_{j})$ and, by the Riemann-Hurwitz formula, the genus $g$ of $\X$ is given by
$$g=1+\frac{1}{2}\left((s-2)n-\sum_{j=1}^{s} \gcd(n,l_{j})\right),$$
where in the case (2) $l_{s} \in \{1,\ldots,n-1\}$ is so that $l_{1}+\cdots+l_{s-1}+l_{s} \equiv 0 \mod(n)$.
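As an illustration, for $n=2$ and $l_{1}=\cdots=l_{s}=1$ (the classical hyperelliptic situation, with $s$ even) this formula gives $g=1+\frac{1}{2}(2(s-2)-s)=\frac{s-2}{2}$, as expected; for $n=4$, $s=4$ and $(l_{1},l_{2},l_{3},l_{4})=(2,2,1,3)$ one obtains the genus two curve $y^{4}=(x-a_{1})^{2}(x-a_{2})^{2}(x-a_{3})(x-a_{4})^{3}$, whose quotient orbifold has signature $(0;2,2,4,4)$.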
\subsection{Conditions for $\tau$ to be central}
As seen in Section \ref{Sec:algebra}, we may consider a curve representation of $\X$ of the form (where $r=s$ if $\infty \not\in \{a_{1},\ldots,a_{s}\}$ and $r=s-1$ otherwise)
$y^{n}=\mathfrak prod_{j=1}^{r}(x-a_{j})^{l_{j}},$
(with the corresponding constrains (a)-(c) on the exponents $l_{j}$ and $n$, depending on the case if $\infty$ is or not a branch value of $\mathfrak pi$).
In this model, $\tau(x,y)=(x,\omega_{n}y)$ and $\mathfrak pi(x,y)=x$.
Let $N$ be the normalizer of $H=\langle \tau \rangle$ in ${\rm Aut}(\X)$. As $H$ is a normal subgroup of $N$, the reduced group $\overline{N}=N/H$ is a finite group of M\"obius transformations keeping invariant the set of branch values of $\mathfrak pi$. Let us denote by $\theta:N \to \overline{N}$ the canonical projection map. If $\eta \in N$, then $\theta(\eta)$ is a M\"obius transformation keeping invariant the set $\{a_{1},\ldots,a_{r}\}$ if $\infty$ is not a branch value of $\mathfrak pi$; otherwise, it keeps invariant the set $\{\infty,a_{1},\ldots,a_{r}\}$.
The following fact provides a condition for $\tau$ to be central in $N$ by imposing extra constraints on the exponents $l_{1},\ldots,l_{r}$.
\begin{lem}\label{exponentes}
The automorphism $\tau$ is central in $N$ if and only if, for every $\eta \in N$ and every pair $a_{j}$, $a_{i}$ in the same $\theta(\eta)$-orbit, one has $l_{j}=l_{i}$.
\end{lem}
\begin{proof}
Let $\eta \in N$ and assume $\theta(\eta)$ has order $m \geq 2$. As there is a suitable M\"obius transformation $M$ so that $M \theta(\eta) M^{-1}$ is just the rotation $x \mapsto \omega_{m} x$, by post-composing $\mathfrak pi$ with $M$ we may assume that $\theta(\eta)(x)=\omega_{m}x$. So the cyclic $n$-gonal curve can be written as
$$(*) \quad y^{n}=x^{s}\mathfrak prod_{j=1}^{r} (x-a_{j})^{l_{j,1}}(x-a_{j}\omega_{m})^{l_{j,2}} \cdots (x-a_{j}\omega_{m}^{m-1})^{l_{j,m}},$$
in which case $\tau(x,y)=(x,\omega_{n}y)$ and $\eta(x,y)=(\omega_{m}x,F(x,y))$, where $F(x,y)$ is a suitable rational map.
As $\eta(\tau(x,y))=(\omega_{m}x,F(x,\omega_{n}y))$, $\tau(\eta(x,y))=(\omega_{m}x,\omega_{n}F(x,y))$, the automorphism $\tau$ commutes with $\eta$ when $F(x,y)=R(x)y$, for a suitable rational map $R(x) \in {\mathbb C}(x)$. As $\theta(\eta)^{m}=1$, it follows that $\eta^{m} \in \langle \tau \rangle$, from which we must have that $\left(\mathfrak prod_{j=0}^{m-1}R(\omega_{m}^{j}x)\right)^{n}=1$.
Now, by applying $\theta(\eta)$ to the right part of $(*)$, we obtain
\[ \omega_{m}^{q} x^{s}\mathfrak prod_{j=1}^{r} \frac{(x-a_{j})^{l_{j,1}}(x-a_{j}\omega_{m})^{l_{j,2}} \cdots (x-a_{j}\omega_{m}^{m-1})^{l_{j,m}}}{(x-a_{j})^{l_{j,1}-l_{j,2}}(x-a_{j}\omega_{m})^{l_{j,2}-l_{j,3}} \cdots (x-a_{j}\omega_{m}^{m-1})^{l_{j,m}-l_{j,1}}}.
\]
As $\eta(x,y)=(\omega_{m}x,R(x)y)$, it must be that
\[ l_{j,1}-l_{j,2}=l_{j,2}-l_{j,3}=\cdots=l_{j,m-1}-l_{j,m}=l_{j,m}-l_{j,1}=\alpha.\]
But this asserts that $l_{j,k}=\alpha+l_{j,k+1}$, for $k=1,\ldots,m-1$, and $l_{j,m}=\alpha+l_{j,1}$; in particular, $l_{j,1}=m\alpha+l_{j,1}$, that is, $m\alpha=0$. Since $m \geq 2$, $\alpha=0$, so all the exponents over each $\theta(\eta)$-orbit coincide.
\end{proof}
\begin{rem}
In the case that $N={\rm Aut}(\X)$ (for instance, if $n=p$ is a prime integer so that either $g>(p-1)^{2}$ or $g<(p-1)(p-5)/10$), Lemma \ref{exponentes} states the conditions for $\X$ to be a generalized superelliptic Riemann surface.
\end{rem}
\section{Generalized superelliptic Riemann surfaces and their automorphisms}
Let $\X$ be a generalized superelliptic Riemann surface of level $n$, let $\tau \in {\rm Aut}(\X)$ be a generalized superelliptic automorphism of level $n$ and $H=\langle \tau \rangle$. Let us consider a regular branched cover $\mathfrak pi:\X \to \widehat{\mathbb C}$ with $H$ as its deck group. Let $a_{1},\ldots,a_{r} \in {\mathbb C}$ be its finite branch values (it might be that $\infty$ is also a branch value of $\mathfrak pi$).
We know from Section \ref{Sec:algebra} that there is a curve representation of the form
$y^{n}=\mathfrak prod_{j=1}^{r}(x-a_{j})^{l_{j}},$
with the corresponding constraints (a)-(c) on the exponents $l_{j}$ and $n$, depending on whether or not $\infty$ is a branch value of $\mathfrak pi$, and in that representation we have that
$\tau(x,y)=(x,\omega_{n}y)$ and $\mathfrak pi(x,y)=x$.
\subsection{Description of generalized superelliptic Riemann surfaces in terms of their automorphism groups}
The group $G={\rm Aut}(\X)$ descends under $\mathfrak pi$ to the reduced group $\G=G/H$, which is a finite group of M\"obius transformations keeping invariant the set of branch values of $\mathfrak pi$. It is well known that
$\G \in \{C_{m}, D_{m}, A_{4}, S_{4}, A_{5}\}$.
As a consequence of Lemma \ref{exponentes}, all branch values of $\mathfrak pi$ belonging to the same $\G$-orbit must have the same exponent. This fact
allows us to imitate the arguments of Horiuchi \cite{Horiuchi}, done for the hyperelliptic situation, to write down the above equation in terms of $\G$. In the following theorem $n_{j}=\gcd(n,l_{j})$, for $j=0, \dots , r+1$, and the symbols $\frac{n}{n_{j}}$ or $\frac{2n}{n_{j}}$ do not appear in the signature if $l_{j}=0$.
\begin{thm}\label{gonalescentral}
Let $\X$ be a generalized superelliptic Riemann surface of level $n$, $\tau \in G={\rm Aut}(\X)$ its generalized superelliptic automorphisms, and $H=\langle \tau \rangle$. Then, up to isomorphisms, $\X$ and $G$ are described as indicated below in terms of the reduced group $\G=G/H$.\\
\begin{enumerate}[leftmargin=*]
\item {$\G=\big\langle a(x)=\omega_{m} x \big\rangle \cong C_{m}$:}
\[ \X: \quad y^{n}=x^{l_{0}} (x^{m}-1)^{l_{1}}\mathfrak prod_{j=2}^{r} (x^{m}-a_{j}^{m})^{l_{j}},\]
such that $a_{2},\ldots,a_{r} \in {\mathbb C}-\{0,1\}, \; a_{i}^{m} \neq a_{j}^{m}$,
where $\gcd(n,l_{0},l_{1},\ldots,l_{r})=1$
and if $l_{0}=0$, then $m(l_{1}+\cdots+l_{r}) \equiv 0 \mod(n)$. The group of automorphisms is
\[G=\langle \tau, A: \tau^{n}=1, A^{m}=\tau^{l_{0}}, \tau A = A \tau \rangle,\]
where
\[A(x,y)=(\omega_{m} x, \omega_{m}^{l_{0}/n}y).\]
Moreover, if we set $n_{j}=\gcd(n,l_{j})$,
then the signature of $\X \to \X/H$ is
\[
\left\{
\begin{array}{ll}
\left(0;\frac{n}{n_{1}},\stackrel{m}{\ldots},\frac{n}{n_{1}},\ldots, \frac{n}{n_{r}},\stackrel{m}{\ldots},\frac{n}{n_{r}}\right), & \textit{if } l_{0}=0, \\
\left(0;\frac{n}{n_{0}},\frac{n}{n_{1}},\stackrel{m}{\ldots},\frac{n}{n_{1}},\ldots, \frac{n}{n_{r}},\stackrel{m}{\ldots},\frac{n}{n_{r}}\right), & \textit{if } l_{0} \neq 0,\ l_{0}+m\sum_{j=1}^{r}l_{j} \equiv 0 \mod(n),\\
\left(0;\frac{n}{n_{0}},\frac{n}{n_{r+1}},\frac{n}{n_{1}},\stackrel{m}{\ldots},\frac{n}{n_{1}},\ldots, \frac{n}{n_{r}},\stackrel{m}{\ldots},\frac{n}{n_{r}}\right), & \textit{if } l_{0} \neq 0,\ l_{0}+m\sum_{j=1}^{r}l_{j} \not\equiv 0 \mod(n),
\end{array}
\right.
\]
\noindent the signature of $\X\to \X/G$ is
\[\left\{
\begin{array}{ll}
\left(0;m,m,\frac{n}{n_{1}},\frac{n}{n_{2}},\ldots,\frac{n}{n_{r}}\right), & \mbox{if $l_{0}=0$,}\\
\left(0;m,\frac{mn}{n_{0}},\frac{n}{n_{1}},\frac{n}{n_{2}},\ldots,\frac{n}{n_{r}}\right), & \mbox{if $l_{0} \neq 0$, $l_{0}+m\sum_{j=1}^{r}l_{j} \equiv 0 \mod(n)$,}\\
\left(0;\frac{mn}{n_{0}},\frac{mn}{n_{r+1}},\frac{n}{n_{1}},\frac{n}{n_{2}},\ldots,\frac{n}{n_{r}}\right), & \mbox{if $l_{0} \neq 0$, $l_{0}+m\sum_{j=1}^{r}l_{j} \not\equiv 0 \mod(n)$,}
\end{array}
\right.
\]
and the genus of $\X$ is
\[
\left\{
\begin{array}{ll}
1+\frac{1}{2}\left((rm-2)n-m\sum_{j=1}^{r}n_{j}\right), & \mbox{if $l_{0}=0$,}\\
1+\frac{1}{2}\left((rm-1)n-m\sum_{j=1}^{r}n_{j}\right), & \mbox{if $l_{0} \neq 0$, $l_{0}+m\sum_{j=1}^{r}l_{j} \equiv 0 \mod(n)$,}\\
1+\frac{1}{2}\left(rmn-m\sum_{j=1}^{r}n_{j}\right), & \mbox{if $l_{0} \neq 0$, $l_{0}+m\sum_{j=1}^{r}l_{j} \not\equiv 0 \mod(n)$.}
\end{array}
\right.
\]
\item {$\G=\Big\langle a(x)=\omega_{m} x, b(x)=\frac{1}{x} \Big\rangle \cong D_{m}$:}
\[ \X: \quad y^{n}=x^{l_{0}}(x^{m}-1)^{l_{r+1}}(x^{m}+1)^{l_{r+2}} \mathfrak prod_{j=1}^{r} (x^{m}-a_{j}^{m})^{l_{j}}(x^{m}-a_{j}^{-m})^{l_{j}},\]
such that $a_{i}^{\mathfrak pm m} \neq a_{j}^{\mathfrak pm m} \neq 0,\mathfrak pm1$, where the following hold:
a) $2l_{0}+m(l_{r+1}+l_{r+2})+2m(l_{1}+\cdots+l_{r}) \equiv 0 \mod(n)$,
b) $\gcd(n,l_{0},l_{1},\ldots,l_{r+2})=1$.
\noindent The group of automorphisms is
\[G=\langle \tau,A,B: \tau^{n}=1, A^{m}=\tau^{l_{0}}, \; B^{2}=\tau^{l_{r+1}}, \; \tau A=A\tau,\; \tau B=B \tau \rangle,\]
where
\[
A(x,y) =(\omega_{m} x, \omega_{m}^{l_{0}/n}y), \quad B(x,y) =\left(\frac{1}{x}, \frac{(-1)^{l_{r+1}/n}y}{x^{(2l_{0}+m(l_{r+1}+l_{r+2}+2(l_{1}+\cdots+l_{r})))/n}}\right).
\]
Let $n_{j} =\gcd (n,l_{j})$, then
the signature of $\X \to \X/H$ is
\[ \left(0;\frac{n}{n_{0}},\frac{n}{n_{0}},\frac{n}{n_{r+1}},\stackrel{m}{\ldots},\frac{n}{n_{r+1}},\frac{n}{n_{r+2}},\stackrel{m}{\ldots},\frac{n}{n_{r+2}},
\frac{n}{n_{1}},\stackrel{2m}{\ldots},\frac{n}{n_{1}},\ldots, \frac{n}{n_{r}},\stackrel{2m}{\ldots},\frac{n}{n_{r}}\right),\]
the signature of $\X\to \X/G$ is
\[
\left(0;\frac{mn}{n_{0}},\frac{2n}{n_{r+1}},\frac{2n}{n_{r+2}}, \frac{n}{n_{1}},\frac{n}{n_{2}},\ldots, \frac{n}{n_{r}}\right),
\]
and the genus of $\X$ is
\[g=1+\frac{1}{2}\left(2m(r+1)n-2n_{0}-m\left(n_{r+1}+n_{r+2}+2\sum_{j=1}^{r}n_{j}\right)\right).\]
\item {$\G=\Big\langle a(x)=-x, b(x)=\frac{i-x}{i+x}\Big\rangle \cong A_{4}$:}
\[\X: \quad y^{n}=R_{1}(x)^{l_{r+1}} R_{2}(x)^{l_{r+1}} R_{3}(x)^{l_{r+2}} \mathfrak prod_{j=1}^{r} (R_{1}(x)^{3}+12 f(a_{j})\sqrt{3}i R_{3}(x)^{2})^{l_{j}},\]
where
\begin{enumerate}
\item $R_{1}(x)=x^{4}-2\sqrt{3} i x^{2}+1, \; R_{2}(x)=x^{4}+2\sqrt{3} i x^{2}+1, \; R_{3}(x)=x(x^{4}-1)$,
\item $f(a_{j}) \neq f(a_{i}) \neq 0,1, \infty$,
\[f(x)=\frac{R_{1}(x)^{3}}{-12\sqrt{3} i R_{3}(x)^{2}},\]
\item $8l_{r+1}+6l_{r+2}+12(l_{1}+\cdots+l_{r})\equiv 0 \mod(n)$,
\item $\gcd(n,l_{1},\ldots,l_{r+2})=1$.
\end{enumerate}
The group of automorphisms is
\[
\begin{split}
G = \langle \tau, A, B: & \, \, \tau^{n}=1,A^{2}=\tau^{l_{r+2}}, \; B^{3}=\tau^{-(5l_{r+1}+3l_{r+2}+6(l_{1}+\cdots+l_{r}))}, \\
& (AB)^{3}=\tau^{-3(l_{r+1}+(l_{1}+\cdots+l_{r}))}, \; \tau A=A\tau, \; \tau B=B \tau\rangle,
\end{split}
\]
where
\[ A(x,y)=(-x,(-1)^{l_{r+2}/n}y), \quad B(x,y)=(b(x),F(x)y), \]
and
\[F(x)=\frac{2^{(4l_{r+1}+3l_{r+2}+6(l_{1}+\cdots+l_{r}))/n}i^{(l_{r+2}+2(l_{1}+\cdots+l_{r}))/n}}{(x+i)^{(8l_{r+1}+6l_{r+2}+12(l_{1}+\cdots+l_{r}))/n}}.\]
Let $n_{j}=\gcd(n,l_{j})$,
then the signature of $\X \to \X/H$ is
\[
\left(0;\frac{n}{n_{r+1}},\stackrel{8}{\ldots},\frac{n}{n_{r+1}},\frac{n}{n_{r+2}},\stackrel{6}{\ldots},\frac{n}{n_{r+2}}, \frac{n}{n_{1}},\stackrel{12}{\ldots},\frac{n}{n_{1}},\ldots, \frac{n}{n_{r}},\stackrel{12}{\ldots},\frac{n}{n_{r}}\right),\]
the signature of $\X \to \X/G$ is
\[ \left(0;\frac{3n}{n_{r+1}},\frac{3n}{n_{r+1}},\frac{2n}{n_{r+2}}, \frac{n}{n_{1}},\frac{n}{n_{2}},\ldots,\frac{n}{n_{r}}\right),\]
and the genus of $\X$ is
\[g=1+6(r+1)n-4n_{r+1}-3n_{r+2}-6\sum_{j=1}^{r}n_{j}.\]
\item {$\G=\Big\langle a(x)=ix, b(x)=\frac{i-x}{i+x}\Big\rangle \cong S_{4}$:}
\[ \X: \quad y^{n}=R_{1}(x)^{l_{r+1}} R_{2}(x)^{l_{r+2}}R_{3}(x)^{l_{r+3}} \mathfrak prod_{j=1}^{r} (R_{1}(x)^{3}-108f(a_{j})R_{3}(x)^{4})^{l_{j}},\]
where
\begin{enumerate}
\item $R_{1}(x)=x^{8}+14x^{4}+1, \; R_{2}(x)=x^{12}-33x^{8}-33x^{4}+1, \; R_{3}(x)=x(x^{4}-1)$,
\item $f(a_{j}) \neq f(a_{i}) \neq 0,1,\infty$,
\[f(x)=\frac{R_{1}(x)^{3}}{108R_{3}(x)^{4}},\]
\item $8l_{r+1}+12l_{r+2}+6l_{r+3}+24(l_{1}+\cdots+l_{r}) \equiv 0 \mod(n)$,
\item $\gcd(n,l_{1},\ldots,l_{r+3})=1$.
\end{enumerate}
The group of automorphisms is
\[
\begin{split}
G=\langle \tau, A, B; & \tau^{n}=1, \; A^{4}=\tau^{l_{r+3}}, \; B^{3}=\tau^{-(5l_{r+1}+6l_{r+2}+3l_{r+3}+15(l_{1}+\cdots+l_{r}))}, \\
& (AB)^{2}= \tau^{-(4l_{r+1}+5l_{r+2}+2l_{r+3}+12(l_{1}+\cdots+l_{r}))}, \; \tau A=A \tau, \; \tau B= B \tau \rangle, \\
\end{split}
\]
where $A(x,y) =(ix,i^{l_{r+3}/n}y)$,
$B(x,y) =(b(x),F(x)y)$, and
\[
\begin{split}
F(x) & =\frac{(-1)^{l_{r+2}/n} i^{l_{r+3}/n} 2^{(4l_{r+1}+6l_{r+2}+3l_{r+3}+12(l_{1}+\cdots+l_{r}))/n}}{(x+i)^{(8l_{r+1}+12l_{r+2}+6l_{r+3}+24(l_{1}+\cdots+l_{r}))/n}}. \\
\end{split}
\]
Let $n_{j}=\gcd(n,l_{j})$, then the signature of $\X \to \X/H$ is
\[ \left(0;\frac{n}{n_{r+1}},\stackrel{8}{\ldots},\frac{n}{n_{r+1}},\frac{n}{n_{r+2}},\stackrel{12}{\ldots},\frac{n}{n_{r+2}}, \frac{n}{n_{r+3}},\stackrel{6}{\ldots},\frac{n}{n_{r+3}},\frac{n}{n_{1}},\stackrel{24}{\ldots},\frac{n}{n_{1}},\ldots, \frac{n}{n_{r}},\stackrel{24}{\ldots},\frac{n}{n_{r}}\right), \]
the signature of $\X \to \X/G$ is
\[ \left(0;\frac{3n}{n_{r+1}},\frac{2n}{n_{r+2}},\frac{4n}{n_{r+3}},\frac{n}{n_{1}},\frac{n}{n_{2}},\ldots,\frac{n}{n_{r}}\right), \]
and the genus of $\X$ is
\[ g=1+12(r+1)n-4n_{r+1}-6n_{r+2}-3n_{r+3}-12\sum_{j=1}^{r}n_{j}. \]
\item {$\G=\Big\langle a(x)=\omega_{5} x, b(x)=\frac{(1-\omega_{5}^{4})x+(\omega_{5}^{4}-\omega_{5})}{(\omega_{5}-\omega_{5}^{3})x+(\omega_{5}^{2}-\omega_{5}^{3})}\Big\rangle \cong A_{5}$:}
$$\X: \quad y^{n}=R_{1}(x)^{l_{r+1}} R_{2}(x)^{l_{r+2}}R_{3}(x)^{l_{r+3}}\mathfrak prod_{j=1}^{r} (R_{1}(x)^{3}-1728 f(a_{j})R_{3}(x)^{5})^{l_{j}},$$
where
\begin{enumerate}
\item $R_{1}(x)=-x^{20}-1+228x^{5}(x^{10}-1)-494x^{10}, \; R_{2}(x)=x^{30}+1+522 x^{5}(x^{20}-1)-10005 x^{10}(x^{10}+1), \quad R_{3}(x)=x(x^{10}+11x^{5}-1)$,
\item $f(a_{j}) \neq f(a_{i}) \neq 0,1,\infty$,
$$f(x)=\frac{R_{1}(x)^{3}}{1728 R_{3}(x)^{5}},$$
\item $20l_{r+1}+30l_{r+2}+12l_{r+3}+60(l_{1}+\cdots+l_{r}) \equiv 0 \mod(n)$,
\item $\gcd(n,l_{1},\ldots,l_{r+3})=1$.
\end{enumerate}
\noindent The elements $a(x)$ and $b(x)$ induce the automorphisms
$$A(x,y)=(a(x),\omega_{5}^{s_{3}/n} y), \; B(x,y)=(b(x),L(x) y),$$
where $L(x)$ is a rational map satisfying
$$L(b^{2}(x))L(b(x))L(x)=\omega_{5}^{l},$$
for a suitable $l \in \{0,1,2,3,4\}$, and
$$L(x)^{n}=T_{1}^{l_{r+1}+3(l_{1}+\cdots +l_{r})}(x) T_{2}^{l_{r+2}}(x) T_{3}^{l_{r+3}}(x),$$
where $T_{j}(x)=R_{j}(b(x))/R_{j}(x)$, for $j=1,2,3$.
The group of automorphisms is
\[ G=\langle \tau, A,B: \, \, A^{5}=\tau^{l_{r+3}}, \; B^{3}=\tau^{l} \rangle. \]
Let $n_{j}=\gcd(n,l_{j})$; then
the signature of $\X \to \X/H$ is
$$(0;\frac{n}{n_{r+1}},\stackrel{20}{\ldots},\frac{n}{n_{r+1}},\frac{n}{n_{r+2}},\stackrel{30}{\ldots},\frac{n}{n_{r+2}}, \frac{n}{n_{r+3}},\stackrel{12}{\ldots},\frac{n}{n_{r+3}},\frac{n}{n_{1}},\stackrel{60}{\ldots},\frac{n}{n_{1}},\ldots, \frac{n}{n_{r}},\stackrel{60}{\ldots},\frac{n}{n_{r}}),$$
the signature of $\X \to \X/G$ is
$$(0;\frac{3n}{n_{r+1}},\frac{2n}{n_{r+2}},\frac{5n}{n_{r+3}},\frac{n}{n_{1}},\frac{n}{n_{2}},\ldots,\frac{n}{n_{r}}),$$
and the genus of $\X$ is
$$g=1+30(r+1)n-10n_{r+1}-15n_{r+2}-6n_{r+3}-30\sum_{j=1}^{r}n_{j}.$$
\end{enumerate}
\end{thm}
\begin{proof}
Let $\X$ be a generalized superelliptic curve of level $n$ and let $H=\langle \tau \rangle \cong C_{n}$ be a generalized superelliptic group of level $n$ for $\X$. We set $G={\rm Aut}(\X)$ and $\G=G/H$. Let us assume that $\G$ is not the trivial group, so $\G \in \{C_{m},D_{m},A_{4},S_{4},A_{5}\}$, where $m \geq 2$. Let $f:\widehat{\mathbb C} \to \widehat{\mathbb C}$ be a regular branched cover with $\G$ as its deck group; we have that $f$ is a degree $|\G|$-rational map. Let us write $f(x)=P(x)/Q(x)$, for suitable relatively prime polynomials $P(x), Q(x)$.
We already know that $\X$ is represented by a cyclic $n$-gonal curve of the form $y^{n}=\mathfrak prod_{j=1}^{r}(x-b_{j})^{d_{j}}$, where $d_{1},\ldots,d_{r} \in \{1,\ldots,n-1\}$ satisfy Harvey's conditions in Corollary \ref{harvey1}, and either the collection $\{b_{1},\ldots,b_{r}\}$ is $\G$-invariant, if $d_{1}+\cdots + d_{r} \equiv 0 \mod(n)$, or the collection $\{\infty,b_{1},\ldots,b_{r}\}$ is $\G$-invariant, if $d_{1}+\cdots+d_{r} \not\equiv 0 \mod(n)$. Lemma \ref{exponentes} asserts that if there is some $T \in \G$ with $T(b_{i})=b_{j}$, then $d_{i}=d_{j}$. If the disjoint $\G$-orbits (eliminating $\infty$ from its orbit if it is a branch value of $\mathfrak pi$) are given by
$$\{a_{1,1},\ldots,a_{1,r_{1}}\},\ldots,\{a_{q,1},\ldots,a_{q,r_{q}}\},$$
then our curve can be written as follows
$$y^{n}=\mathfrak prod_{j=1}^{q} \left(\mathfrak prod_{i=1}^{r_{j}} (x-a_{j,i})\right)^{l_{j}}.$$
The $\G$-invariance of these sets (if $l_{1}+\cdots+l_{r} \not\equiv 0 \mod(n)$, then one of these orbits must be enlarged by adding $\infty$) implies that we may write, for the case $r_{j}=|\G|$,
$$\mathfrak prod_{i=1}^{r_{j}}(x-a_{j,i})=P(x)-f(a_{j,1})Q(x),$$
and for the case that $r_{j}$ is a proper divisor of $|\G|$ a similar equality holds, but one needs to take care of multiplicities.
\begin{enumerate}
\item In the case that $\G=C_{m}$, up to a M\"obius transformation, we may assume that $\G=\big\langle a(x)=\omega_{m} x \big\rangle \cong C_{m}$, $m \geq 2$, and $f(x)=x^{m}$. In this case the possible $\G$-orbits are given by (up to conjugation by $T(x)=\lambda x$, for a suitable $\lambda \neq 0$) the orbits of length one $\{\infty\}$ and/or $\{0\}$, the $m$-th roots of unity, and the $m$-th roots of other complex numbers, that is,
$$\X: \quad y^{n}=x^{l_{0}} (x^{m}-1)^{l_{1}}\mathfrak prod_{j=2}^{r} (x^{m}-a_{j}^{m})^{l_{j}},$$
$$a_{2},\ldots,a_{r} \in {\mathbb C}-\{0,1\}, \; a_{i}^{m} \neq a_{j}^{m},$$
where Harvey's constraints are as indicated in the theorem statement. In this case, $a$ induces the automorphism
$$A(x,y)=(\omega_{m} x, \omega_{m}^{l_{0}/n}y),$$
and it can be seen that
$$G=\langle \tau, A: \tau^{n}=1, A^{m}=\tau^{l_{0}}, \tau A = A \tau \rangle.$$
The signatures of $\X/H$ and $\X/G$ are easily obtained from the curve above (and Harvey's constraints), and the formula for the genus of $\X$ is obtained from the signature of $\X/H$.
\item In the case that $\G=D_{m}$, up to a M\"obius transformation, we may assume that
$\G=\Big\langle a(x)=\omega_{m} x, b(x)=\frac{1}{x} \Big\rangle \cong D_{m}$ and $f(x)=x^{m}+x^{-m}$. In this case, the $\G$-orbits are $\{0,\infty\}$, the $m$-th roots of unity, the $m$-th roots of $-1$, and some other orbits of length $2m$, that is,
$$\X: \quad y^{n}=x^{l_{0}}(x^{m}-1)^{l_{r+1}}(x^{m}+1)^{l_{r+2}} \mathfrak prod_{j=1}^{r} (x^{m}-a_{j}^{m})^{l_{j}}(x^{m}-a_{j}^{-m})^{l_{j}},$$
$$a_{i}^{\mathfrak pm m} \neq a_{j}^{\mathfrak pm m} \neq 0,\mathfrak pm1,$$
where Harvey's conditions now read as
\begin{enumerate}
\item $2l_{0}+m(l_{r+1}+l_{r+2})+2m(l_{1}+\cdots+l_{r}) \equiv 0 \mod(n)$,
\item $\gcd(n,l_{0},l_{1},\ldots,l_{r+2})=1$.
\end{enumerate}
In this case, the elements $a(x)$ and $b(x)$ induce the automorphisms
$$A(x,y)=(\omega_{m} x, \omega_{m}^{l_{0}/n}y),$$
$$B(x,y)=\left(\frac{1}{x}, \frac{(-1)^{l_{r+1}/n}y}{x^{(2l_{0}+m(l_{r+1}+l_{r+2}+2(l_{1}+\cdots+l_{r})))/n}}\right),$$
and
$$G=\langle \tau,A,B: \tau^{n}=1, A^{m}=\tau^{l_{0}}, \; B^{2}=\tau^{l_{r+1}}, \; \tau A=A\tau,\; \tau B=B \tau \rangle.$$
The signatures of $\X/H$ and $\X/G$ are easily obtained from the curve above (and Harvey's constraints), and the formula for the genus of $\X$ is obtained from the signature of $\X/H$.
\item In the case that $\G=A_{4}$, up to a M\"obius transformation, we may assume that
$\G=\Big\langle a(x)=-x, b(x)=\frac{i-x}{i+x}\Big\rangle \cong A_{4}$. In this case,
$$f(x)=\frac{R_{1}(x)^{3}}{-12\sqrt{3} i R_{3}(x)^{2}},$$
where
$$R_{1}(x)=x^{4}-2\sqrt{3} i x^{2}+1, \; R_{2}(x)=x^{4}+2\sqrt{3} i x^{2}+1, \; R_{3}(x)=x(x^{4}-1),$$
and the curve we obtain is of the form
$$\X: \quad y^{n}=R_{1}(x)^{l_{r+1}} R_{2}(x)^{l_{r+1}} R_{3}(x)^{l_{r+2}} \mathfrak prod_{j=1}^{r} (R_{1}(x)^{3}+12 f(a_{j})\sqrt{3}i R_{3}(x)^{2})^{l_{j}},$$
$$f(a_{j}) \neq f(a_{i}) \neq 0,1, \infty,$$
where Harvey's conditions now read as
\begin{enumerate}
\item $8l_{r+1}+6l_{r+2}+12(l_{1}+\cdots+l_{r})\equiv 0 \mod(n)$,
\item $\gcd(n,l_{1},\ldots,l_{r+2})=1$.
\end{enumerate}
Since
$$R_{1}(b(x))=\frac{2(1-\sqrt{3}i)}{(x+i)^{4}}R_{1}(x), \;\; R_{1}(a(x))=R_{1}(x),$$
$$R_{2}(b(x))=\frac{2(1+\sqrt{3}i)}{(x+i)^{4}}R_{2}(x), \;\; R_{2}(a(x))=R_{2}(x),$$
$$R_{3}(b(x))=\frac{8i}{(x+i)^{6}}R_{3}(x), \;\; R_{3}(a(x))=-R_{3}(x),$$
we see that $a(x)$ and $b(x)$ induce the automorphisms
$$A(x,y)=(-x,(-1)^{l_{r+2}/n}y), \; B(x,y)=(b(x),F(x)y),$$
where
$$F(x)=\frac{2^{(4l_{r+1}+3l_{r+2}+6(l_{1}+\cdots+l_{r}))/n}i^{(l_{r+2}+2(l_{1}+\cdots+l_{r}))/n}}{(x+i)^{(8l_{r+1}+6l_{r+2}+12(l_{1}+\cdots+l_{r}))/n}}.$$
and we obtain that
$$G=\langle \tau, A, B: \tau^{n}=1,A^{2}=\tau^{l_{r+2}}, \; B^{3}=\tau^{-(5l_{r+1}+3l_{r+2}+6(l_{1}+\cdots+l_{r}))}, $$
$$
(AB)^{3}=\tau^{-3(l_{r+1}+(l_{1}+\cdots+l_{r}))}, \;
\tau A=A\tau, \; \tau B=B \tau\rangle.$$
The signatures of $\X/H$ and $\X/G$ are easily obtained from the curve above (and Harvey's constraints), and the formula for the genus of $\X$ is obtained from the signature of $\X/H$.
\item In the case that $\G=S_{4}$, up to a M\"obius transformation, we may assume that
$\G=\Big\langle a(x)=ix, b(x)=\frac{i-x}{i+x}\Big\rangle \cong S_{4}$. In this case,
$$f(x)=\frac{R_{1}(x)^{3}}{108R_{3}(x)^{4}},$$
where
$$R_{1}(x)=x^{8}+14x^{4}+1, \; R_{2}(x)=x^{12}-33x^{8}-33x^{4}+1, \; R_{3}(x)=x(x^{4}-1),$$
and the curve has the form
$$\X: \quad y^{n}=R_{1}(x)^{l_{r+1}} R_{2}(x)^{l_{r+2}}R_{3}(x)^{l_{r+3}} \mathfrak prod_{j=1}^{r} (R_{1}(x)^{3}-108f(a_{j})R_{3}(x)^{4})^{l_{j}},$$
$$f(a_{j}) \neq f(a_{i}) \neq 0,1,\infty,$$
where Harvey's conditions now read as
\begin{enumerate}
\item $8l_{r+1}+12l_{r+2}+6l_{r+3}+24(l_{1}+\cdots+l_{r}) \equiv 0 \mod(n)$,
\item $\gcd(n,l_{1},\ldots,l_{r+3})=1$.
\end{enumerate}
Since
$$R_{1}(a(x))=R_{1}(x), \;\; R_{1}(b(x))=\frac{16}{(x+i)^{8}}R_{1}(x),$$
$$R_{2}(a(x))=R_{2}(x), \;\; R_{2}(b(x))=\frac{-64}{(x+i)^{12}}R_{2}(x),$$
$$R_{3}(a(x))=iR_{3}(x), \;\; R_{3}(b(x))=\frac{8i}{(x+i)^{6}}R_{3}(x),$$
the elements $a(x)$ and $b(x)$ induce the automorphisms
$$A(x,y)=(ix,i^{l_{r+3}/n}y), \; B(x,y)=(b(x),F(x)y)$$
$$F(x)=\frac{(-1)^{l_{r+2}/n} i^{l_{r+3}/n} 2^{(4l_{r+1}+6l_{r+2}+3l_{r+3}+12(l_{1}+\cdots+l_{r}))/n}}{(x+i)^{(8l_{r+1}+12l_{r+2}+6l_{r+3}+24(l_{1}+\cdots+l_{r}))/n}},$$
and one obtains that
$$G=\langle \tau, A, B; \tau^{n}=1, \; A^{4}=\tau^{l_{r+3}}, \; B^{3}=\tau^{-(5l_{r+1}+6l_{r+2}+3l_{r+3}+15(l_{1}+\cdots+l_{r}))},$$
$$(AB)^{2}= \tau^{-(4l_{r+1}+5l_{r+2}+2l_{r+3}+12(l_{1}+\cdots+l_{r}))}, \; \tau A=A \tau, \; \tau B= B \tau \rangle.$$
The signatures of $\X/H$ and $\X/G$ are easily obtained from the curve above (and Harvey's constraints), and the formula for the genus of $\X$ is obtained from the signature of $\X/H$.
\item In the case that $\G=A_{5}$, up to a M\"obius transformation, we may assume that
$\G=\Big\langle a(x)=\omega_{5} x, b(x)=\frac{(1-\omega_{5}^{4})x+(\omega_{5}^{4}-\omega_{5})}{(\omega_{5}-\omega_{5}^{3})x+(\omega_{5}^{2}-\omega_{5}^{3})}\Big\rangle \cong A_{5}$. In this case,
$$f(x)=\frac{R_{1}(x)^{3}}{1728 R_{3}(x)^{5}},$$
where
$$R_{1}(x)=-x^{20}-1+228x^{5}(x^{10}-1)-494x^{10}, \; R_{2}(x)=x^{30}+1+522 x^{5}(x^{20}-1)-10005 x^{10}(x^{10}+1),$$
$$R_{3}(x)=x(x^{10}+11x^{5}-1),$$
and the curve we obtain has the form
$$\X: \quad y^{n}=R_{1}(x)^{l_{r+1}} R_{2}(x)^{l_{r+2}}R_{3}(x)^{l_{r+3}}\mathfrak prod_{j=1}^{r} (R_{1}(x)^{3}-1728 f(a_{j})R_{3}(x)^{5})^{l_{j}},$$
$$f(a_{j}) \neq f(a_{i}) \neq 0,1,\infty,$$
and Harvey's conditions read in this case as
\begin{enumerate}
\item $20l_{r+1}+30l_{r+2}+12l_{r+3}+60(l_{1}+\cdots+l_{r}) \equiv 0 \mod(n)$,
\item $\gcd(n,l_{1},\ldots,l_{r+3})=1$.
\end{enumerate}
In this case,
$$R_{1}(a(x))=R_{1}(x), \; R_{2}(a(x))=R_{2}(x), \; R_{3}(a(x))=\omega_{5}R_{3}(x),$$
and let us consider the rational maps
$$T_{j}(x)=R_{j}(b(x))/R_{j}(x), \quad j=1,2,3.$$
It can be checked that $T_{1}^{3}=T_{3}^{5}$ and that
there is a rational map $L(x)$ such that
$$L(b^{2}(x))L(b(x))L(x)=\omega_{5}^{l},$$
for a suitable $l \in \{0,1,2,3,4\}$ and
$$L^{n}=T_{1}^{l_{r+1}+3(l_{1}+\cdots l_{r})}T_{2}^{l_{r+2}} T_{3}^{l_{r+3}}.$$
In this case,
$$G=\langle \tau, A,B\rangle,$$
where
$$A(x,y)=(a(x),\omega_{5}^{s_{3}/n} y), \; B(x,y)=(b(x),L(x) y),$$
$$(A^{5}=\tau^{l_{r+3}}, \; B^{3}=\tau^{l}).$$
The signatures of $\X/H$ and $\X/G$ are easily obtained from the curve above (and Harvey's constraints), and the formula for the genus of $\X$ is obtained from the signature of $\X/H$.
\end{enumerate}
\end{proof}
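As an illustration of case (1) of Theorem \ref{gonalescentral}, consider the following concrete instance, obtained by direct application of the formulas above.
\begin{example}
Take $n=4$, $m=2$, $r=2$, $l_{0}=0$ and $l_{1}=l_{2}=1$, so that $m(l_{1}+l_{2})=4\equiv 0 \mod(4)$ and $\gcd(4,1,1)=1$. The curve is
$$\X: \quad y^{4}=(x^{2}-1)(x^{2}-a_{2}^{2}), \quad a_{2}^{2} \neq 0,1,$$
with $A(x,y)=(-x,y)$ and, for generic $a_{2}$, $G=\langle \tau, A\rangle \cong C_{4}\times C_{2}$. Here $n_{1}=n_{2}=1$, the signature of $\X \to \X/H$ is $(0;4,4,4,4)$, the signature of $\X \to \X/G$ is $(0;2,2,4,4)$, and the genus formula gives $g=1+\frac{1}{2}\left((2\cdot 2-2)\cdot 4-2(1+1)\right)=3$.
\end{example}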
\subsection{On the uniqueness of the generalized superelliptic group}
In this section we study the uniqueness of the generalized superelliptic group $H=\langle \tau \rangle $ of level $n$.
Let $\G={\rm Aut}(\X)/H$ be the reduced group with respect to $H$, which we know to be either trivial, cyclic, dihedral or one of the Platonic groups $A_{4}$, $S_{4}$ or $A_{5}$. As neither the Platonic groups nor the dihedral groups of order not divisible by $4$ have a non-trivial central element, we may observe the following fact.
\begin{prop}
Let $\X$ be a generalized superelliptic Riemann surface of level $n$ and let $H$ be a generalized superelliptic group of level $n$. If the reduced group $\G$ of automorphisms with respect to $H$ is either a dihedral group of order not divisible by $4$, or $A_{4}$, $S_{4}$ or $A_{5}$, then $H$ is the unique generalized superelliptic group of level $n$ for $\X$.
\end{prop}
\begin{proof}
Assume, to the contrary, that there is a generalized superelliptic automorphism $\eta$ of level $n$ with $\eta \not\in H$. Then $\eta$ induces a non-trivial central element of the reduced group $\G$, a contradiction.
\end{proof}
As a consequence of the above, the only possibility for $\X$ to admit another generalized superelliptic group of level $n$ is when $\G$ is either a non-trivial cyclic group or a dihedral group of order $4m$.
In the following, we observe that if $\X$ has at least two different generalized superelliptic groups of level $n$, then it belongs to a certain family of ``exceptional" generalized superelliptic Riemann surfaces.
\begin{thm}\label{unicidad0}
If $\X$ is a generalized superelliptic Riemann surface of level $n$ admitting at least two different generalized superelliptic groups of level $n$, then
$n=2d$, $d \geq 2$, and it can be represented by a cyclic $n$-gonal curve of the form
$$\X: \quad y^{2d}=x^{2}\left(x^{2}-1\right)^{l_{1}}\left(x^{2}-a_{1}^{2}\right)^{l_{2}} \mathfrak prod_{j=3}^{L}\left(x^{2}-a_{j}^{2}\right)^{2\widehat{l_{j}}},$$
where
$$l_{1},l_{2},2\widehat{l_{3}},\ldots,2\widehat{l_{L}} \in \{1,\ldots,2d-1\}, \quad \mbox{$l_{1}$ is odd,}$$
and
\begin{enumerate}
\item for $l_{2}=2\widehat{l_{2}}$, $\gcd\left(d,l_{1},\widehat{l_{2}},\ldots,\widehat{l_{L}}\right)=1$.
\item for $l_{2}$ odd, then $l_{1}+l_{2}=2d$ and $\gcd\left(d,l_{1},l_{2},\widehat{l_{3}},\ldots,\widehat{l_{L}}\right)=1$.
\end{enumerate}
In these cases, $\tau(x,y)=(x,\omega_{2d}y)$ and $\eta(x,y)=(-x,\omega_{2d}y)$ are generalized superelliptic automorphisms of level $n$ so that $K=\langle \tau,\eta\rangle \cong C_{2d} \times C_{2}$.
The quotient orbifold $\X/K$ has signature
$$\left(0;2,2d,\frac{2d}{\gcd(2d,l_{1})},\frac{2d}{\gcd(2d,l_{2})},\frac{d}{\gcd\left(d,\widehat{l_{3}}\right)},\ldots,\frac{d}{\gcd\left(d,\widehat{l_{L}}\right)}\right),$$
in the case that $1+l_{1}+l_{2}+2\left(\widehat{l_{3}}+\cdots+\widehat{l_{L}}\right) \equiv 0 \mod(d)$, or
\begin{small}
\[
\left(0;2,2d,\frac{2d}{\gcd(2d,l_{1})},\frac{2d}{\gcd(2d,l_{2})},\frac{d}{\gcd\left(d,\widehat{l_{3}}\right)},\ldots,\frac{d}{\gcd\left(d,\widehat{l_{L}}\right)}, \frac{d}{\gcd\left(d,1+l_{1}
+l_{2}+\widehat{l_{3}}+\cdots+\widehat{l_{L}}\right)}\right),
\]
\end{small}
in the case that $1+l_{1}+l_{2}+2\left(\widehat{l_{3}}+\cdots+\widehat{l_{L}}\right) \not\equiv 0 \mod(d)$.
The genus of $\X$ is, in the first case, equal to
\[ 2d(1+L)-\gcd(2d,l_{1})-\gcd(2d,l_{2})-2\sum_{j=3}^{L} \gcd\left(d,\widehat{l_{j}}\right),\]
and, in the second case, equal to
\begin{small}
\[
d(3+2L)-\gcd(2d,l_{1})-\gcd(2d,l_{2})-2\sum_{j=3}^{L} \gcd\left(d,\widehat{l_{j}}\right) - \gcd\left(d,1+l_{1}+l_{2}+2\sum_{j=3}^{L}\widehat{l_{j}}\right).
\]
\end{small}
\end{thm}
\begin{rem}
Let us observe that the Riemann surfaces described by the cyclic $2d$-gonal curves in Theorem \ref{unicidad0} are not all necessarily generalized superelliptic; the theorem only asserts that the exceptional ones are among them (for $L \geq 3$ they are generically generalized superelliptic of level $2d$). For example, the cyclic $2d$-gonal curve $y^{2d}=x^{2}(x^{2}-1)^{l_{1}}$, with $l_{1}=d-1$ and $d \geq 2$ even, admits the extra automorphism
$$\alpha(x,y)=\left(\frac{x(x^{2}-1)^{d/2}}{y^{d}}, \frac{y^{l_{1}}}{(x^{2}-1)^{(l_{1}^{2}-1)/2d}}\right),$$ which does not commute with $\tau$.
\end{rem}
\begin{proof}[Proof of Theorem \ref{unicidad0}]
As $n=2$ corresponds to the hyperelliptic situation, in which the hyperelliptic group is already known to be unique, we must have $n \geq 3$.
Let us assume $\X$ has two different generalized superelliptic groups of level $n$, say $H=\langle \tau \rangle$ and $\langle \eta \rangle$, where
$\eta \notin H=\langle \tau \rangle$.
Let us consider, as before, the canonical quotient homomorphism $\theta:G \to \G=G/H$, where $G={\rm Aut}(\X)$, and let $\mathfrak pi:\X \to \widehat{\mathbb C}$ be a regular branched cover with deck group $H$.
As $\tau$ is central,
$K=\langle \tau, \eta\rangle<G$ is an abelian group and $\overline{K}=K/H =\langle \theta(\eta) \rangle \cong C_{m}$, where $n=md$ and $m \geq 2$. Since $\theta(\eta)$ has order $m$, $\eta^{m} \in H$ and it has order $d$. So, replacing $\tau$ by a suitable power (still being a generator of $H$) we may assume that $\eta^{m}=\tau^{m}$.
Theorem \ref{gonalescentral} asserts that we may assume $\X$ to be represented by a cyclic $n$-gonal curve of the form
$$\X: \quad y^{n}=x^{l_{0}} (x^{m}-1)^{l_{1}}\mathfrak prod_{j=2}^{L}(x^{m}-a_{j}^{m})^{l_{j}},$$
where the following Harvey's conditions are satisfied:
\begin{enumerate}
\item $l_{0}=0$, $m(l_{1}+\cdots+l_{L}) \equiv 0 \mod(n)$ and $\gcd(n,l_{1},\ldots,l_{L})=1$; or
\item $l_{0} \neq 0$ and $\gcd(n,l_{0},l_{1},\ldots,l_{L})=1$.
\end{enumerate}
In this algebraic model, $\tau(x,y)=(x,\omega_{n}y)$, $\mathfrak pi(x,y)=x$ and $\theta(\eta)(x)=\omega_{m}x$, where $\omega_{r}=e^{2 \mathfrak pi i/r}$. In this way, $\eta(x,y)=(\omega_{m}x,\omega_{m}^{l_{0}/n}y)$.
As we are assuming
$\eta^{m}=\tau^{m}$ and $\eta$ has order $n$, we may assume the following
$$\left\{ \begin{array}{lll}
{\rm if } \; l_{0} \neq 0: & \eta(x,y)=(\omega_{m}x, \omega_{n}y) & \mbox{and } \; l_{0}=m, \\
{\rm if } \; l_{0}=0: & \eta(x,y)=(\omega_{m}x,y) & \mbox{and } \; n=m.
\end{array}
\right.
$$
\noindent
(I) Case $l_{0}=m$; so $\eta(x,y)=(\omega_{m}x, \omega_{n}y)$ and we are in case (2) above.
The $\eta$-invariant algebra ${\mathbb C}[x,y]^{\langle \eta\rangle}$ is generated by the monomials $u=x^{m}, v=y^{n}$
and those of the form $x^{a}y^{b}$, where $a \in \{0,1,\ldots,m-1\}$ and $b \in \{0,1,\ldots,n-1\}$ (the case $a=b=0$ not considered) satisfy that $a+b/d \equiv 0 \mod(m)$. In particular, $b=dr$ for $r \in \{0,1,\ldots,[(n-1)/d]\}$ so that $a+r \equiv 0 \mod(m)$. As $0 \leq a+r \leq (m-1)+[(n-1)/d] \leq (m-1)+[(md-1)/d]<2m$, it follows that $a+r \in \{0,m\}$. As the case $a+r=0$ asserts that $a=b=0$, which is not considered, we must have $a+r=m$, from which we see that the other generators are
given by $t_{1},\ldots, t_{m}$, where $t_{j}=x^{m-j}y^{dj}$ (observe that $t_{m}=v$). As a consequence of invariant theory, the quotient curve $\X/\langle \eta\rangle$ corresponds to the algebraic curve
$${\mathcal Y}: \quad \left\{ \begin{array}{lcl}
t_{1}^{m}&=&u^{m-1}v,\\
t_{2}^{m}&=&u^{m-2}v^{2},\\
&\vdots&\\
t_{m-1}^{m}&=&uv^{m-1},\\
v&=&u (u-1)^{l_{1}}\mathfrak prod_{j=2}^{L}(u-a_{j}^{m})^{l_{j}}.
\end{array}
\right.
$$
The curve ${\mathcal Y}$ admits the automorphisms $T_{1},\ldots, T_{m-1}$, where $T_{j}$ multiplies the $t_{j}$-coordinate by $\omega_{m}$ and acts as the identity on all the other coordinates. The group generated by all of these automorphisms is
$$(*) \quad {\mathcal U}=\langle T_{1},\ldots,T_{m-1}\rangle \cong C_{m}^{m-1}.$$
The regular branched cover map $\mathfrak pi_{\mathcal U}:{\mathcal Y} \to \widehat{\mathbb C}: (u,v,t_{1},\ldots,t_{m-1}) \mapsto u$ has ${\mathcal U}$ as its deck group. Let us observe that the values $0$, $a_{1}^{m},\ldots, a_{L}^{m}$ belong to the branch set of $\mathfrak pi_{\mathcal U}$.
Since ${\mathcal Y}=\X/\langle \eta \rangle$ has genus zero and the finite Abelian groups of automorphisms of the Riemann sphere are either the trivial group, a cyclic group or $V_{4}=C_{2}^{2}$, the group ${\mathcal U}$ is of one of these three types.
As $m \geq 2$, the group ${\mathcal U}$ cannot be the trivial group, nor can it be isomorphic to $V_{4}$. It follows that
${\mathcal U}$ is a cyclic group; so $m=2$ and, in particular, $n=2d$, where $d \geq 2$, and
$$\X: \quad y^{2d}=x^{2} (x^{2}-1)^{l_{1}} \mathfrak prod_{j=2}^{L}(x^{2}-a_{j}^{2})^{l_{j}}.$$
Harvey's condition (a) is equivalent to $\gcd(2d,2,l_{1},\ldots,l_{L})=1$, which holds exactly when at least one of the exponents $l_{j}$ is odd. Without loss of generality, we may assume that $l_{1}$ is odd.
In this case the curve ${\mathcal Y}$ is given by
$${\mathcal Y}: \quad \left\{ \begin{array}{lcl}
t_{1}^{2}&=& uv,\\
v&=& u(u-1)^{l_{1}}\mathfrak prod_{j=2}^{L}(u-a_{j}^{2})^{l_{j}},
\end{array}
\right.
$$
which is isomorphic to the curve $$w^{2}=(u-1)^{l_{1}}\mathfrak prod_{j=2}^{L}(u-a_{j}^{2})^{l_{j}}.$$
As this curve must have genus zero, and $l_{1}$ is odd, the number of indices $j \in \{2,\ldots,L\}$ for which $l_{j}$ is odd, must be at most one.
\begin{enumerate}
\item[(i)] If $l_{1}$ is the only odd exponent, then, writing $l_{j}=2\widehat{l_{j}}$ for $j=2,\ldots,L$, we must have
$\gcd(2d,2,l_{1},2\widehat{l_{2}},\ldots,2\widehat{l_{L}})=1$, which is equivalent to
$\gcd(d,l_{1},\widehat{l_{2}},\ldots,\widehat{l_{L}})=1$.
\item[(ii)] If there are exactly two of the exponents being odd, then we may assume, without loss of generality, that $l_{1}$ and $l_{2}$ are the only odd exponents. In this case, we must then have that $l_{1}+l_{2} \equiv 0 \mod(2d)$, that is, $l_{1}+l_{2}=2d$. If we write $l_{j}=2\widehat{l_{j}}$, for $j=3,\ldots,L$, then we must have
$\gcd(2d,2,l_{1},l_{2},2\widehat{l_{3}},\ldots,2\widehat{l_{L}})=1$, which is equivalent to
$\gcd(d,l_{1},l_{2},\widehat{l_{3}},\ldots,\widehat{l_{L}})=1$.
\end{enumerate}
\noindent
(II) Let us now consider the case $l_{0}=0$; so $m=n$ and $\eta(x,y)=(\omega_{n}x, y)$ and we are in case (1) above.
The $\eta$-invariant algebra ${\mathbb C}[x,y]^{\langle \eta\rangle}$ is generated by the monomials $u=x^{n}, v=y$. As a consequence of invariant theory, the quotient curve $\X/\langle \eta\rangle$ corresponds to the algebraic curve
$${\mathcal Y}: \quad \left\{ \begin{array}{lcl}
v^{n}&=&(u-1)^{l_{1}}\mathfrak prod_{j=2}^{L}(u-a_{j}^{n})^{l_{j}}.
\end{array}
\right.
$$
As ${\mathcal Y}$ must have genus zero and $n \geq 3$, we must have $L=1,2$ (and for $L=2$ we must also have $l_{1}+l_{2} \equiv 0 \mod(n)$). So either
$$\X: \quad y^{n}=(x^{n}-1)^{l_{1}}, \quad L=1,$$
or
$$\X: \quad y^{n}=(x^{n}-1)^{l_{1}}(x^{n}-a_{2}^{n})^{l_{2}}, \; l_{1}+l_{2} \equiv 0 \mod(n), \quad L=2.$$
Note that for $L=1$ we may assume $l_{1}=1$ (this is the classical Fermat curve of degree $n$). As the group of automorphisms of the classical Fermat curve of degree $n$ is $C_{n}^{2} \rtimes S_{3}$, we see that $\tau$ is not central; that is, $\X$ is not a generalized superelliptic Riemann surface of level $n$.
In the case $L=2$, Harvey's conditions hold exactly when $\gcd(n,l_{1},l_{2})=1$. As $l_{1}+l_{2} \equiv 0 \mod(n)$ and $l_{1},l_{2} \in \{1,\ldots,n-1\}$, we have that $l_{1}+l_{2}=n$.
If we write $l_{2}=n-l_{1}$, then
$$\left(\frac{x^{n}-1}{x^{n}-a_{2}^{n}}\right)^{l_{1}}=\frac{y^{n}}{(x^{n}-a_{2}^{n})^{n}},$$
and by writing $l_{1}=n-l_{2}$ we also have that
$$\left(\frac{x^{n}-a_{2}^{n}}{x^{n}-1}\right)^{l_{2}}=\frac{y^{n}}{(x^{n}-1)^{n}}.$$
Then the M\"obius transformation $M(x)=a/x$ induces the automorphism
$$\alpha(x,y)=\left(\omega_{n}\frac{a_{2}}{x},\frac{-a_{2}^{l_{2}} (x^{n}-1)(x^{n}-a_{2}^{n})}{x^{n} y} \right),$$
which does not commute with $\eta(x,y)=(\omega_{n}x,y)$ since $n \geq 3$, a contradiction.
\end{proof}
Since for an exceptional generalized superelliptic Riemann surface of level $n$ we must have $n$ even, we may observe the following fact.
\begin{cor}\label{unicidad}
Let $n$ be either equal to two or an odd integer. Then
every generalized superelliptic curve of level $n$ has a unique generalized superelliptic group of level $n$.
\end{cor}
If $\X$ is an exceptional generalized superelliptic Riemann surface of level $n$ and $H$ one of its generalized superelliptic groups of level $n$, then the quotient orbifold $\X/H$ has a cone point of order $n/2$. In particular, we observe the following.
\begin{cor}\label{unicosuper}
Let $n \geq 4$ be even, let $\X$ be a generalized superelliptic Riemann surface of level $n$ and let $H$ be a generalized superelliptic group of level $n$. If $\X/H$ has no cone point of order $n/2$, then $H$ is the unique generalized superelliptic group of $\X$ of level $n$. In particular,
every superelliptic Riemann surface of level $n$ admits a unique superelliptic group of level $n$.
\end{cor}
\subsection{A remark about a result due to Sanjeewa}
In \cite{Sanjeewa}, the groups $G$ of conformal automorphisms of a cyclic $n$-gonal Riemann surface $\X$ were determined under the assumptions that the $n$-gonal group $H=\langle \tau \rangle$ is a normal subgroup and that all cone points of $\X/H$ have order $n$ (in particular, this contains the case of superelliptic Riemann surfaces). For the case $\G=C_{m}$ it was stated (see Theorems 3.2 and 4.1 in \cite{Sanjeewa}) that either $G=C_{nm}$ or $G=\langle r,s:r^{n}=1, s^{m}=1, srs^{-1}=r^{l}\rangle$, where $(l,n)=1$ and $l^{m} \equiv 1 \mod(n)$ (if $(m,n)=1$, then $l=n-1$). In the case that $\tau$ is central (the superelliptic situation), the last situation only happens if $l=1$, that is, either $G=C_{nm}$ or $G=C_{n}\times C_{m}$.
In the generalized superelliptic situation things change, as can be seen in the next example, which considers a generalized superelliptic curve of genus seventeen for which the quotient has $14$ cone points, two of them of order $2$ and all others of order $4$, and whose reduced group is a cyclic group of order four. In this case $G \cong C_{2} \times C_{8}$, which is neither $C_{16}$ nor $C_{4}^{2}$, as it would be in the previous consideration for superelliptic curves of level four.
\begin{example}
Let us consider two values $\lambda, \mu \in {\mathbb C}$ so that $\lambda^{4} \neq \mu^{4}$, $\lambda^{4}, \mu^{4} \in {\mathbb C}-\{0,1\}$, and the curve
$$\X: \quad y^{4}=x^{2}(x^{4}-1)(x^{4}-\lambda^{4})(x^{4}-\mu^{4}),$$
which has genus $g=17$ and admits the automorphisms
$$\tau(x,y)=(x,iy), \quad \eta(x,y)=(ix,\sqrt{i}\;y).$$
For generic values of $\lambda$ and $\mu$, we have that $G=\langle \tau, \eta\rangle$ is the full group of automorphisms of $\X$ and that
$G \cong C_{2} \times C_{8}$ (the factor $C_{2}$ is generated by $\tau \eta^{2}$ and the factor $C_{8}$ is generated by $\eta$). In this case, the automorphism $\tau$ is a generalized superelliptic automorphism of level $n=4$ and $G/\langle \tau \rangle=C_{4}$.
Let us observe that another automorphism of order $4$ is given by $\rho=\eta^{2}$, that is, $\rho(x,y)=(-x,iy)$. As ${\mathbb C}[x,y]^{\langle \rho\rangle}$ is generated by $u=x^{2}$, $v=xy^{2}$ and $w=y^{4}$, we may see that $\X/\langle \rho\rangle$ is isomorphic to $$\widehat{v}^{2}=(u-1)(u-\lambda^{2})(u-\mu^{4}),$$
($\widehat{v}u=v$) which has genus one.
\end{example}
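The genus count in the previous example can be read off directly from the formula of Section \ref{Sec:algebra}: the branch values of $\mathfrak pi(x,y)=x$ are $0$ and $\infty$ (each with cone order $2$, since the corresponding exponents are $2$) together with the twelve roots of $(x^{4}-1)(x^{4}-\lambda^{4})(x^{4}-\mu^{4})$ (each with cone order $4$), so that $s=14$ and
$$g=1+\frac{1}{2}\left((14-2)\cdot 4-(2+12\cdot 1+2)\right)=17.$$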
\section{Minimal fields of definition of generalized superelliptic curves}
Let us consider a closed Riemann surface $\X$ of genus $g$, described as a projective, irreducible, algebraic curve defined over ${\mathbb C}$, say given as the common zeroes of the polynomials $P_{1},\ldots, P_{r}$, and let us denote by $G=\Aut (\X)$ the full automorphism group of $\X$.
If $\sigma \in {\rm Gal}({\mathbb C})$, then $\X^{\sigma}$ will denote the curve defined as the common zeroes of the polynomials $P_{1}^{\sigma},\ldots,P_{r}^{\sigma}$, where $P_{j}^{\sigma}$ is obtained from $P_{j}$ by applying $\sigma$ to its coefficients. The new algebraic curve $\X^{\sigma}$ is again a closed Riemann surface of the same genus $g$.
Let us observe that, if $\sigma, \tau \in {\rm Gal}({\mathbb C})$, then $\X^{\tau\sigma}=(\X^{\sigma})^{\tau}$.
\subsection{Field of definition}
A subfield $k_{0}$ of ${\mathbb C}$ is called a {\it field of definition} of $\X$ if there is a curve ${\mathcal Y}$, defined over $k_{0}$, which is isomorphic to $\X$ over ${\mathbb C}$. It is clear that every subfield of ${\mathbb C}$ containing $k_{0}$ is also a field of definition of it. In the other direction, a subfield of $k_{0}$ might not be a field of definition of $\X$.
Weil's descent theorem \cite{Weil} provides sufficient conditions for a subfield $k_{0}$ of ${\mathbb C}$ to be a field of definition. Let us denote by ${\rm Gal}({\mathbb C}/k_{0})$ the group of field automorphisms of ${\mathbb C}$ acting as the identity on $k_{0}$.
\begin{thm}[Weil's descent theorem \cite{Weil}]
Assume that $\X$ has genus $g \geq 2$. If for every $\sigma \in {\rm Gal}({\mathbb C}/k_{0})$ there is an isomorphism $f_{\sigma}:\X \to \X^{\sigma}$ satisfying Weil's co-cycle condition
$$f_{\tau\sigma}=f_{\sigma}^{\tau} \circ f_{\tau}, \quad \forall \sigma, \tau \in {\rm Gal}({\mathbb C}/k_{0}),$$
then there is a curve ${\mathcal Y}$, defined over $k_{0}$, and there is an isomorphism $R:\X \to {\mathcal Y}$, defined over a finite extension of $k_{0}$, so that $R=R^{\sigma} \circ f_{\sigma}$, for every $\sigma \in {\rm Gal}({\mathbb C}/k_{0})$.
\end{thm}
Clearly, the sufficient conditions in Weil's descent theorem are trivially satisfied if $\X$ has no non-trivial automorphisms (a generic situation for $\X$ of genus at least three).
\begin{cor}\label{coro:weil}
If $\X$ has trivial group of automorphisms and for every $\sigma \in {\rm Gal}({\mathbb C}/k_{0})$ there is an isomorphism $f_{\sigma}:\X \to \X^{\sigma}$, then $\X$ can be defined over $k_{0}$.
\end{cor}
\subsection{Field of moduli}\label{Sec:FOD}
The notion of field of moduli was originally introduced by Shimura for the case of abelian varieties and later extended to more general algebraic varieties by Koizumi.
If $G_{\X}$ is the subgroup of ${\rm Gal}({\mathbb C})$ consisting of those $\sigma$ so that $\X^{\sigma}$ is isomorphic to $\X$, then the fixed field $M_{\X}$ of $G_{\X}$ is called {\it the field of moduli} of $\X$.
A result due to Koizumi \cite{Koizumi} asserts that the field of moduli of $\X$ coincides with the intersection of all its fields of definition and there is always a field of definition that is a finite extension of the field of moduli. This is the
field of definition of the representing point $\mathfrak p=[\X]$ in the moduli space $\M_g$.
It is known that every curve of genus $g \leq 1$ can be defined over its field of moduli. If $g \geq 2$, to determine the field of moduli and to decide whether it is a field of definition is a difficult task and an active research topic.
Examples of algebraic curves which cannot be defined over their field of moduli have been provided
by Earle \cite{Earle}, Huggins \cite{Hu2} and Shimura \cite{Shimura} for the hyperelliptic situation and by the first author \cite{Hid} and Kontogeorgis \cite{Kontogeorgis} in the non-hyperelliptic situation.
In other words, $\M_g$ is not a \textit{fine} moduli space.
Investigating the obstruction for the field of moduli to be a field of definition is part of descent theory for fields of definition and has many consequences in arithmetic geometry. Many works have been devoted to this problem, most notably by Weil \cite{Weil}, Shimura \cite{Shimura} and Grothendieck, among many others. Weil's criterion \cite{Weil} assures that if a curve has no non-trivial automorphisms then its field of moduli is a field of definition. On the other extreme, if the curve $\X$ is quasiplatonic (that is, when the quotient orbifold $\X/{\rm Aut}(\X)$ has genus zero and exactly three cone points), then Wolfart \cite{Wolfart} proved that the field of moduli is also a field of definition. Hence, the real problem occurs when the curve has non-trivial automorphism group but the quotient orbifold $\X/{\rm Aut}(\X)$ has non-trivial moduli.
It is known that a cyclic $n$-gonal Riemann surface is either definable over its field of moduli or over a degree two extension of it. In the particular case of superelliptic curves with extra automorphisms, an equation over an at most quadratic extension of the field of moduli has been provided in \cite{BT} using the Shaska invariants.
A direct consequence of Weil's descent theorem is the following.
\begin{cor}\label{corotrivial}
Every curve with trivial group of automorphisms can be defined over its field of moduli.
\end{cor}
As a consequence of Belyi's theorem \cite{Belyi}, every quasiplatonic curve $\X$ can be defined over $\overline{\mathbb Q}$ (so over a finite extension of ${\mathbb Q}$).
\begin{thm}[Wolfart \cite{Wolfart}]\label{Wolfart}
Every quasiplatonic curve can be defined over its field of moduli (which is a number field).
\end{thm}
\subsection{Two practical sufficient conditions}
When the curve $\X$ has a non-trivial group of automorphisms, then Weil's conditions (in Weil's descent theorem) are in general not easy to check. Next we consider certain cases in which it is possible to verify that $\X$ is definable over its field of moduli.
\subsubsection{Sufficient condition 1: unique subgroups}
Let $H$ be a subgroup of $\Aut(\X)$. In general there might be another subgroup $K$, different from $H$, which is isomorphic to $H$ and such that $\X/K$ and $\X/H$ have the same signature. For instance, the genus two curve $\X$ defined by $y^{2}=x(x-1/2)(x-2)(x-1/3)(x-3)$ has two conformal involutions, $\tau_{1}$ and $\tau_{2}$, whose product is the hyperelliptic involution. The quotient $\X/\langle \tau_{j}\rangle$ has genus one and exactly two cone points (of order two).
We say that $H$ is {\it unique} in $\Aut(\X)$ if it is the unique subgroup of $\Aut(\X)$ isomorphic to $H$ and with quotient orbifold of same signature as $\X/H$. Typical examples are (i) $H=\Aut(\X)$ and (ii) $H$ being the cyclic group generated by the hyperelliptic involution for the case of hyperelliptic curves.
If $H$ is unique in $\Aut(\X)$, then it is a normal subgroup; so we may consider the reduced group $\bAut(\X)=\Aut(\X)/H$, which is a group of automorphisms of the quotient orbifold $\X/H$. In \cite{HQ} the following sufficient condition for a curve to be definable over its field of moduli was obtained.
\begin{thm}[Hidalgo and Quispe \cite{HQ}]\label{thm1}
Let $\X$ be a curve of genus $g \geq 2$ admitting a subgroup $H$, which is unique in $\Aut(\X)$, and so that $\X/H$ has genus zero. If the reduced group of automorphisms $\bAut(\X)=\Aut(\X)/H$ is different from trivial or cyclic, then $\X$ is definable over its field of moduli.
\end{thm}
If $\X$ is a hyperelliptic curve, then a consequence of the above is the following result (originally due to Huggins \cite{Hu2}).
\begin{cor}\label{cor1}
Let $\X$ be a hyperelliptic curve with extra automorphisms and reduced automorphism group $\bAut (\X)$ not isomorphic to a cyclic group. Then, the field of moduli of $\X$ is a field of definition.
\end{cor}
\subsubsection{Sufficient condition 2: Odd signature}
Another sufficient condition for a curve $\X$ to be definable over its field of moduli, which in particular contains the case of quasiplatonic curves, was provided in \cite{AQ}. We say that $\X$ has {\it odd signature} if $\X/{\rm Aut}(\X)$ has genus zero and in its signature one of the cone orders appears an odd number of times.
\begin{thm}[Artebani and Quispe \cite{AQ}]\label{thm2}
Let $\X$ be a curve of genus $g \geq 2$. If $\X$ has odd signature, then it can be defined over its field of moduli.
\end{thm}
\subsection{Most generalized superelliptic curves are definable over their field of moduli}
The exceptional generalized superelliptic Riemann surfaces of level $n$ are definable over their fields of moduli.
As a consequence of Corollary \ref{unicidad} and Theorem \ref{thm1}, we obtain the following fact concerning the field of moduli of the non-exceptional generalized superelliptic curves.
\begin{thm}\label{teounico}
Let $\X$ be a non-exceptional generalized superelliptic curve of genus $g \geq 2$ with generalized superelliptic group $H \cong C_{n}$. If the reduced group of automorphisms $\bAut(\X)=\Aut(\X)/H$ is different from trivial or cyclic, then $\X$ is definable over its field of moduli.
\end{thm}
As a consequence of the above, we only need to take care of the case when the reduced group $\G=G/H$ is either trivial or cyclic. As a consequence of Theorem \ref{thm2} we have the following fact.
\begin{thm}\label{teoAQ}
Let $\X$ be a generalized superelliptic curve of genus $g \geq 2$ with generalized superelliptic group $H \cong C_{n}$ so that $\G=G/H$ is either trivial or cyclic. If $\X$ has odd signature, then it can be defined over its field of moduli.
\end{thm}
As a consequence, the only cases where a generalized superelliptic Riemann surface might not be definable over its field of moduli are those non-exceptional generalized superelliptic curves with reduced group $\G=G/H$ either trivial or cyclic and with $\X/G$ not of odd signature.
\section{Appendix}
In order to compute all the cyclic $n$-gonal curves of genus $g \geq 2$ one proceeds as follows. We consider the collection ${\mathcal F}_{g}$ of all tuples $(n,r;n_{1},\ldots,n_{r})$ satisfying the following properties (Harvey's conditions):
\begin{enumerate}
\item $n \geq 2$, $r \geq 3$;
\item $2 \leq n_{1} \leq n_{2} \leq \cdots \leq n_{r} \leq n$;
\item $n_{j}$ is a divisor of $n$, for each $j=1,\ldots,r$;
\item ${\rm lcm}\left(n_{1},\ldots,n_{j-1},n_{j+1},\ldots, n_{r}\right)=n$, for every $j=1,\ldots,r$;
\item if $n$ is even, then $\#\{j\in\{1,\ldots,r\}: n/n_{j} \; \mbox{is odd}\}$ is even;
\item $2(g-1)=n\left(r-2-\sum_{j=1}^{r}n_{j}^{-1}\right)$.
\end{enumerate}
For each tuple $(n,r;n_{1},\ldots,n_{r}) \in {\mathcal F}_{g}$ we consider the collection ${\mathcal F}_{g}(n,r;n_{1},\ldots,n_{r})$ of tuples $(l_{1},\ldots,l_{r})$ so that
\begin{enumerate}
\item $l_{1},\ldots, l_{r} \in \{1,\ldots, n-1\}$;
\item $\gcd(n,l_{j})=n/n_{j}$, for each $j=1,\ldots,r$.
\end{enumerate}
Now, for each such tuple $(l_{1},\ldots,l_{r}) \in {\mathcal F}_{g}(n,r;n_{1},\ldots,n_{r})$ we may consider the epimorphism
$$\theta:\Delta=\langle c_{1},\ldots, c_{r}: c_{1}^{n_{1}}=\cdots=c_{r}^{n_{r}}=c_{1}\cdots c_{r}=1\rangle \to C_{n}=\langle \tau \rangle: c_{j} \mapsto \tau^{l_{j}}.$$
Our assumptions above ensure that the kernel $\Gamma=\ker(\theta)$ is a torsion free normal co-compact Fuchsian subgroup of $\Delta$ with $\X={\mathbb H}/\Gamma$ a closed Riemann surface of genus $g$ admitting a cyclic group $H \cong C_{n}$ as a group of conformal automorphisms with quotient orbifold $\X/H={\mathbb H}/\Delta$; a genus zero orbifold with exactly $r$ cone points of respective orders $n_{1},\ldots,n_{r}$. The surface $\X$ corresponds to a cyclic $n$-gonal curve
$$C(n,r;l_{1},\ldots,l_{r};a_{1},\ldots,a_{r}): \quad y^{n}=\mathfrak prod_{j=1}^{r}(x-a_{j})^{l_{j}},$$
for suitable pairwise different values $a_{1},\ldots,a_{r} \in {\mathbb C}$,
and $H$ being generated by $\tau(x,y)=(x,\omega_{n}y)$.
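The search over exponent tuples described above is straightforward to automate. The following minimal sketch (in Python; the function name and interface are ours and are not part of any existing package) checks the conditions of Corollary \ref{harvey1} directly and selects, via the genus formula of Section \ref{Sec:algebra}, the tuples $(l_{1},\ldots,l_{r})$ giving a prescribed genus $g$ when all branch values are finite.
\begin{verbatim}
from math import gcd
from functools import reduce
from itertools import product

def harvey_tuples(n, r, g):
    """Exponent tuples (l_1,...,l_r) of Corollary `harvey1' for
    y^n = prod_j (x - a_j)^{l_j} of genus g (all branch values finite)."""
    found = []
    for ls in product(range(1, n), repeat=r):
        if sum(ls) % n != 0:                  # rho(c_1) ... rho(c_r) = 1
            continue
        if reduce(gcd, ls, n) != 1:           # surjectivity of rho
            continue
        # Riemann-Hurwitz: 2(g-1) = (r-2) n - sum_j gcd(n, l_j)
        if (r - 2) * n - sum(gcd(n, l) for l in ls) == 2 * (g - 1):
            found.append(ls)
    return found

# Example: harvey_tuples(5, 3, 2) contains (1, 1, 3), the genus-two
# curve y^5 = (x - a_1)(x - a_2)(x - a_3)^3 mentioned earlier.
\end{verbatim}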
We should note that there might be different tuples $(l_{1},\ldots,l_{r})$ and $(l_{1}',\ldots,l_{r}')$, necessarily belonging to the same ${\mathcal F}_{g}(n,r;n_{1},\ldots,n_{r})$, for which the pairs $(\X,H)$ and $(\X',H')$ are isomorphic (i.e., there is an isomorphism between the Riemann surfaces conjugating the cyclic groups). In general it is a difficult problem to determine which tuples define isomorphic pairs. But, in the (non-exceptional) generalized superelliptic situation, the uniqueness of the superelliptic cyclic group of level $n$ (see Corollary \ref{unicidad}) shows that $(\X,H)$ and $(\X',H')$ are isomorphic pairs if and only if the corresponding
curves $C(n,r;l_{1},\ldots,l_{r};a_{1},\ldots,a_{r})$ and $C(n,r;l'_{1},\ldots,l'_{r};a'_{1},\ldots,a'_{r})$ are isomorphic, which in turn is equivalent to the existence of
\begin{enumerate}
\item[(i)] M\"obius transformation $A \in {\rm PSL}(2,{\mathbb C})$,
\item[(ii)] a permutation $\eta \in S_{r}$,
\item[(iii)] an element $u \in \{1,\ldots,n-1\}$ with $\gcd(u,n)=1$,
\end{enumerate}
so that
\begin{enumerate}
\item[(iv)] $l'_{j} \equiv u l_{\eta(j)} \mod(n)$, for $j=1,\ldots,r$,
\item[(v)] $a'_{j}=M(a_{j})$, for $j=1,\ldots,r$.
\end{enumerate}
All of the above (together with Lemma \ref{exponentes}) permits one to construct all possible generalized superelliptic curves of low genus in a similar fashion as was done for the superelliptic case \cite{MPRZ}.
\nocite{*}
{}
\end{document}
\begin{document}
\title[A localization theorem]{A localization theorem and boundary regularity for a class of degenerate Monge Ampere equations}
\author{Ovidiu Savin}
\begin{abstract}
We consider degenerate Monge-Ampere equations of the type
$$\det D^2 u= f \quad \mbox{in $\Omega$}, \quad \quad f \sim \, d_{\partial \Omega}^\alpha \quad \mbox{near $\partial \Omega$,}$$
where $d_{\partial \Omega}$ represents the distance to the boundary of the domain $\Omega$ and $\alpha>0$ is a positive power.
We obtain $C^2$ estimates at the boundary under natural conditions on the boundary data and the right hand side.
Similar estimates in two dimensions were obtained by J.X. Hong, G. Huang and W. Wang in \cite{HHW}.
\end{abstract}
\maketitle
\section{Introduction}
In this paper we discuss boundary regularity for solutions to degenerate Monge-Ampere equations of the type
$$\det D^2 u= f \quad \mbox{in $\Omega$}, \quad \quad f \sim \, \, d_{\partial \Omega}^\alpha \quad \mbox{near $\partial \Omega$,}$$
where $d_{\partial \Omega}$ represents the distance to the boundary of a convex domain $\Omega$ and $\alpha>0$ is a positive power.
Boundary estimates for the Monge-Ampere equation in the nondegenerate case $f\in C(\overline \Omega)$, $f>0$, were obtained starting with the works of Ivockina \cite{I}, Krylov \cite{K}, Caffarelli-Nirenberg-Spruck \cite{CNS} (see also \cite{C,TW,W}). The general strategy for the $C^2$ estimates in the nondegenerate case is to obtain first a bound by above for the second derivatives on $\partial \Omega$, and then to use the equation and bound all the pure second derivatives by below. When $f=0$ on $\partial \Omega$ this bound cannot hold since some second derivative becomes $0$. In this paper we show that, under general conditions on the data, in a neighborhood of $\partial \Omega$ only one second derivative tends to 0 and all tangential pure second derivatives are continuous and bounded by below away from 0. The difficulty in proving this result lies in the fact that the tangential pure second derivatives are only subsolutions for the linearized operator, and therefore it is not clear whether or not such a lower bound is satisfied.
In the case of two dimensions J.X. Hong, G. Huang and W. Wang in \cite{HHW} used that the tangential second derivative is in fact a solution to an elliptic equation and showed that $u\in C^2$ up to the boundary.
In this paper we study the geometry of boundary sections in the degenerate case when $f$ behaves in a neighborhood of $\partial \Omega$ as a positive power of the distance to $\partial \Omega$. We use the compactness methods developed in \cite{S1} where a localization theorem for boundary sections of solutions to the Monge-Ampere equation was obtained. In Theorem \ref{T1} we show that a localization theorem holds also in the degenerate case, and it states that boundary sections have the shape of half-ellipsoids. We achieve this by reducing the problem to the study of tangent cones for solutions to degenerate Monge-Ampere equations that have a singularity on $\partial \Omega$. Then we use the ideas from \cite{S2} where the regularity of such tangent cones was investigated for the classical Monge-Ampere equation.
Before we state our main results we recall the notion for a function to be $C^2$ at a point. We say that $u$ is $C^2$ at $x_0$ if there exists a quadratic polynomial $Q_{x_0}$ such that, in the domain of definition of $u$, $$u(x)=Q_{x_0}(x) + o(|x-x_0|^2).$$
Throughout this paper we refer to a linear map $A$ of the form
$$Ax=x+\tau x_n, \quad \quad \mbox{with} \quad \tau \cdot e_n=0,$$
as a {\it sliding along} $x_n=0$. Notice that the map $A$ is the identity map when restricted to $x_n=0$ and it becomes a translation by the vector $s \tau$ when restricted to $x_n=s$.
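In coordinates, $A=I+\tau\, e_n^{T}$, that is, $(Ax)_i=x_i+\tau_i x_n$ for $i<n$ and $(Ax)_n=x_n$; in the plane, for instance, a sliding is simply the shear $A(x_1,x_2)=(x_1+t\, x_2,\,x_2)$, $t\in \mathbb{R}$, which fixes the line $x_2=0$ pointwise.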
Let $\Omega$ be a bounded convex domain such that $\partial \Omega$ is $C^{1,1}$ at the origin, that is $0 \in \partial \Omega$ and
\begin{equation}\label{01}
\Omega \subset \{x_n>0\},\quad \mbox{ and $\Omega$ has an interior tangent ball at the origin.}
\end{equation}
We are interested in the behavior near the origin of a convex solution $u\in C(\overline \Omega)$ to the equation
\begin{equation}\label{02}
\det D^2 u=g(x) \, d_{\partial \Omega}^\alpha, \quad \quad \quad \alpha>0,
\end{equation}
where $g$ is a nonnegative function that is continuous at the origin, $g(0)>0$.
Our main theorem is the following pointwise $C^2$ estimate at the boundary (see also Theorem \ref{T2} for a more precise quantitative version).
\begin{thm}\label{T01}
Let $\Omega$, $u$ satisfy \eqref{01}, \eqref{02} above. Assume that
$$ u(0)=0, \quad \nabla u(0)=0, \quad u=\varphi \quad \mbox{on $\partial \Omega$},$$
and the boundary data $\varphi$ is $C^2$ at $0$, and it separates quadratically away from $0$.
Then $u$ is $C^2$ at $0$. Precisely, there exists a sliding $A$ along $x_n=0$ and a constant $a>0$ such that
$$u(Ax)=Q_0(x')+ a x_n^{2+\alpha} + o(|x'|^2+x_n^{2+\alpha}),$$
where $Q_0$ represents the quadratic part of the boundary data $\varphi$ at the origin.
\textit{\textbf{e}}nd{thm}
If the hypotheses above hold and $\partial \Omega \in C^2$, $\varphi \in C^2$, $g \in C^\beta$ in a neighborhood of $0$, then $u \in C^2(\overline \Omega \cap B_\delta)$, for some small $\delta>0$ (see Theorem \ref{T2.1}). Here we require $g \in C^\beta$ only to guarantee the $C^2$ regularity at interior points close to $\partial \Omega$.
It is worth remarking that the $C^2$ estimate of Theorem \ref{T01} does not hold for harmonic functions or for solutions to the classical Monge-Ampere equation. In these cases we need stronger assumptions on $\partial \Omega$ and $\varphi$, i.e. to be $C^{2,\rm Dini}$ at the origin.
In a subsequent work we intend to use Theorem \ref{T01} and perturbation arguments to obtain $C^{2,\beta}$ and higher order estimates when the data $\partial \Omega$, $\varphi$, $g$ is more regular.
Our second result, which is closely related to Theorem \ref{T01}, is a Liouville theorem for solutions to degenerate Monge-Ampere equations defined in a half-space.
\begin{thm}\label{T02}
Assume $u \in C(\overline{\mathbb R^n_+})$ satisfies
\begin{equation}\label{03}
\det D^2 u=x_n^\alpha, \quad \quad u(x',0)=\frac 12 |x'|^2.
\end{equation}
If there exists $\epsilon>0$ small such that $u =O( |x|^{3+\alpha-\epsilon})$ as $|x| \to \infty$, then
$$u(Ax)=bx_n+\frac 12 |x'|^2 + \frac{x_n^{2+\alpha}}{(1+\alpha)(2+\alpha)}, $$
for some sliding $A$ along $x_n=0$, and some constant $b$.
\end{thm}
We remark that Theorem \ref{T02} holds also for $\alpha=0$. The theorem states that solutions to \eqref{03} that grow slower than $|x|^{3+\alpha}$ at $\infty$ are unique modulo the addition of a term $c \, x_n$ and domain deformations given by slidings along $x_n=0$. Clearly, both transformations leave \eqref{03} invariant.
The growth condition at infinity is necessary since $$\frac{x_1^2}{2(1+x_n)} + \frac 12 (x_2^2+...+x_{n-1}^2)+ \frac{x_n^{2+\alpha}}{(1+\alpha)(2+\alpha)} + \frac{x_n^{3+\alpha}}{(2+\alpha)(3+\alpha)} $$
also satisfies \eqref{03}.
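Indeed, denoting this function by $w$, a direct computation (which we sketch for convenience) confirms the claim: $w(x',0)=\frac 12 |x'|^2$, $w_{ii}=1$ for $2\le i\le n-1$, and the only other nonzero entries of $D^2w$ are
$$w_{11}=\frac{1}{1+x_n},\quad \quad w_{1n}=-\frac{x_1}{(1+x_n)^2},\quad \quad w_{nn}=\frac{x_1^2}{(1+x_n)^3}+x_n^\alpha+x_n^{1+\alpha},$$
hence
$$\det D^2 w=w_{11}w_{nn}-w_{1n}^2=\frac{x_n^\alpha+x_n^{1+\alpha}}{1+x_n}=x_n^\alpha,$$
while $w$ grows like $x_n^{3+\alpha}$ along the $x_n$-axis.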
In the two dimensional case Theorem \ref{T02} follows easily after performing a partial Legendre transform in the $x_1$ direction. Then the problem reduces to the classification of solutions to a linear equation defined in half-space. However this approach does not seem to work in higher dimensions.
Theorem \ref{T01} applies when the right hand side $f$, which may depend also on $u$ and $\nabla u$, is expected to behave as a power of the distance to $\partial \Omega$. For example we obtain $C^2$ estimates up to the boundary for solutions to the eigenvalue problem for the Monge-Ampere equation, which was first investigated by Lions in \cite{L}.
\begin{thm}\label{T03}
Assume $\partial \Omega \in C^2$ is uniformly convex and $u \in C(\overline \Omega)$ satisfies $$(\det D^2 u)^\frac 1 n= \lambda |u| \quad \quad \mbox{in $\Omega$}, \quad \quad u=0 \quad \mbox{on $\partial \Omega$.}$$
Then $u \in C^2(\overline \Omega)$.
\end{thm}
In two dimensions Theorem \ref{T03} was obtained in \cite{HHW}.
The paper is organized as follows. In Section \ref{s2} we introduce some notation and state our main results, the localization Theorem \ref{T1} and the quantitative $C^2$ estimate Theorem \ref{T2}. Most of the paper is devoted to the proof of the localization Theorem \ref{T1}. In Section \ref{s3} we deal with some general properties of boundary sections. In Section \ref{s4} we use compactness and reduce Theorem \ref{T1} to Theorem \ref{T3} which deals with estimates of boundary sections for a class of solutions with discontinuities on $\partial \Omega$. In Section \ref{s5} we obtain two Pogorelov type estimates for solutions to certain Monge-Ampere equations. We use these estimates in Section \ref{s6} where we complete the proof of Theorem \ref{T3}. In Section \ref{s7} we prove a Liouville theorem from which Theorem \ref{T2} follows. Finally, in Section \ref{s8} we prove Theorems \ref{T02} and \ref{T03}.
\section{Statement of main results}\label{s2}
We introduce some notation. We denote points in $\mathbb R^n$ as
$$x=(x_1,...,x_n)=(x',x_n), \quad \quad \quad x' \in \mathbb R^{n-1}.$$
We denote by $B_r(x)$ the ball of radius $r$ and center $x$, and by $B_r'(x')$ the ball in $\mathbb R^{n-1}$ of radius $r$ and center $x'$.
Given a convex function $u$ defined on a convex set $\overline \Omega$, we denote by $S_h(x_0)$ the section centered at $x_0$ and height $h>0$,
$$S_h(x_0):=\{x \in \overline \Omega | \quad u(x) < u(x_0)+\nabla u(x_0) \cdot (x-x_0) +h \}.$$
We denote for simplicity $S_h=S_h(0)$, and, when we want to specify the dependence on the function $u$, we use the notation $S_h(u)=S_h$.
Throughout the paper we think of the constants $n$, $\alpha$ and $\mu$ as being fixed. We refer to all positive constants depending only on $n$, $\alpha$ and $\mu$ as {\it universal constants} and we denote them by $c$, $C$, $c_i$, $C_i$. The dependence of various constants also on other parameters like $\rho$ and $\rho'$ will be denoted by $c(\rho, \rho')$.
Our assumptions are the following (we assume $\rho$, $\rho'$ are small positive constants).
First we assume $\Omega$ is $C^{1,1}$ at the origin, that is
\
H1) $\Omega$ is an open convex set, $0 \in \partial \Omega$, $$\Omega \subset \{ x_n > 0\} \cap B_{1/\rho},$$ and $\Omega$ has an interior tangent ball of radius $\rho$ at the origin.
\
Let $x_{n+1}=0$ be the tangent plane for a continuous convex function $u:\overline \Omega \to \mathbb R$ at the origin, that is
\
H2) $u \ge 0$, $u(0)=0$, $\nabla u(0)=0$ in the sense that $x_{n+1}=t x_n$ is not a supporting plane for the graph of $u$ at $0$ for any $t >0$.
\
We assume that $u$ separates on $\partial \Omega$ quadratically away from its tangent plane in a neighborhood of $0$. Precisely
\
H3) For some $\epsilon_0 \in (0,\frac 14)$ we have $$(1-\epsilon_0)\varphi(x') \le u(x) \le (1+\epsilon_0) \varphi(x') \quad \quad \mbox{for all} \quad x \in \partial \Omega \cap B_{\rho/2},$$ with $\varphi(x')$ a function of $n-1$ variables satisfying $$\mu^{-1}I \ge D^2_{x'} \varphi \ge \mu \, \, I,$$ and also at the points on $\partial \Omega$ outside $B_{\rho/2}$ we assume
$$u(x) \ge \rho' \quad \mbox{on} \quad \partial \Omega \cap \{ x_n \le \rho\} \setminus B_{\rho/2}.$$
\
We assume that the Monge-Ampere measure of $u$ near $0$ behaves as $d_{\partial \Omega}^\alpha$ where $d_{\partial \Omega}(x)$ denotes the distance from $x$ to $\partial \Omega$ i.e.,
\
H4) $$(1-\epsilon_0) d_{\partial \Omega}^\alpha \le \det D^2 u \le (1+\epsilon_0)d_{\partial \Omega}^\alpha \quad \quad \mbox{in} \quad B_\rho \cap \Omega, $$
and $$\det D^2 u \le 1/ \rho' \quad \quad \mbox{in} \quad \{x_n < \rho\} \cap \Omega.$$
Our localization theorem states that if $u$ satisfies the hypotheses above then the sections $S_h$ of $u$ at the origin
are equivalent, up to a sliding along $x_n=0$, to the sections of the function $|x'|^2 + x_n^{2+\alpha}$.
\begin{thm} [Localization Theorem] \label{T1}
Assume H1, H2, H3, H4 are satisfied. If $\epsilon_0$ is sufficiently small, universal, then
$$k \, A \mathcal E_h \, \cap \overline \Omega \, \subset S_h \, \subset k^{-1} \, A \mathcal E_h \, \cap \overline \Omega \quad \quad \mbox{for all $h < c(\rho, \rho')$,}$$
where $$ \mathcal E_h:=\{ |x'|^2+x_n^{2+\alpha} < h\},$$
and $A$ is a {\it sliding} along $x_n=0$ i.e.
$$Ax=x + \tau x_n, \quad \quad \tau=(\tau_1,\tau_2,..,\tau_{n-1},0), \quad |\tau|\le C(\rho,\rho').$$
The constant $k$ above is universal, that is, it depends only on $n$, $\alpha$ and $\mu$, while $c(\rho,\rho')$, $C(\rho,\rho')$ depend on the universal constants and on $\rho$, $\rho'$.
\end{thm}
\begin{rem}\label{r0}
The conclusion can be stated as $$c(|x'|^2 + x_n^{2+\alpha}) \le u(Ax) \le C(|x'|^2 + x_n^{2+\alpha}),$$
in a neighborhood of the origin where $c$, $C$ are universal constants. Equivalently we can say that there exists a sliding $A$ such that $A^{-1} S_h$ is equivalent to an ellipsoid of axes parallel to the coordinate axes and of lengths $h^{1/2},h^{1/2}, \ldots, h^{1/2}, h^{1/(2+\alpha)}$.
\end{rem}
\begin{rem}
If $\det D^2 u=d_{\partial \Omega}^\alpha$ and $\partial \Omega\in C^{1,1}$ in a neighborhood of $0$ then Theorem \ref{T1} provides bounds from above and below for the tangential (to $\partial \Omega$) second derivatives in a neighborhood of $0$. The conclusion of Theorem \ref{T1} can be viewed as a boundary $C^{1,1}$ estimate from below, written in terms of the sections $S_h$ rather than in terms of second derivatives.
\end{rem}
The localization theorem for the nondegenerate case $\alpha=0$ holds if $\det D^2 u$ is only bounded away from $0$ and $\infty$ (see \cite{S1}). When $\alpha>0$ the hypothesis that $g=d_{\partial \Omega}^{-\alpha} \det D^2 u$ has small oscillation is in fact optimal. It is possible to construct a counterexample to Theorem \ref{T1} in two dimensions if we allow $g$ to be only bounded. However in this case we obtain a pointwise $C^{1,\gamma}$ estimate (see Proposition \ref{p0}).
Our second theorem provides a pointwise $C^2$ estimate for solutions $u$ as above in the case when the boundary data is $C^2$.
\begin{thm}\label{T2}
Assume $u$ satisfies the hypotheses of Theorem \ref{T1} with $$\varphi(x')=\frac 12 |x'|^2.$$
For any $\eta>0$ there exists $\epsilon_0$ depending on $\eta$, $\alpha$ and $n$, and a sliding $A$ along $x_n=0$ such that
$$(1-\eta) A \, \, S_h(U_0) \subset S_h(u) \subset (1+\eta) A \, \, S_h(U_0) $$
for all $h < c(\eta,\rho,\rho')$ where $U_0$ is the particular solution
$$U_0(x):=\frac 12 |x'|^2 + \frac{x_n^{2+\alpha}}{(1+\alpha)(2+\alpha)}.$$
\end{thm}
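As a quick check, $U_0$ is indeed a solution of the model problem \eqref{03} with $b=0$: its Hessian is diagonal and
$$D^2 U_0=diag(1,\ldots,1,x_n^{\alpha}), \quad \quad \det D^2 U_0=x_n^\alpha, \quad \quad U_0(x',0)=\frac 12 |x'|^2.$$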
\begin{rem}
In both Theorem \ref{T1} and \ref{T2} the first inequality of hypothesis H4 can be relaxed to
$$(1-\epsilon_0) \left [\left (x_n-\frac 1 \rho |x'|^2 \right )^+ \right ]^\alpha \le \det D^2 u \le (1+ \epsilon_0) \left (x_n + \frac 1 \rho |x'|^2 \right )^\alpha \quad \quad \mbox{in $B_\rho \cap \Omega$}$$
or, in other words, we can replace $d_{\partial \Omega}$ by the distance to the exterior, respectively interior, tangent ball of radius $\rho$ at the origin. In fact in our proof we just use the inequality above instead of the first part of H4.
\end{rem}
Finally we also state a version of Theorem \ref{T2} in the case when the data is $C^2$ in a neighborhood of $0$.
\begin{thm}\label{T2.1} Let $\partial \Omega \in C^2$ in $B_\rho$, and $u\in C(\overline \Omega)$ convex such that
$$u(0)=0, \quad \nabla u(0)=0, \quad u=\varphi(x') \quad \mbox{on $\partial \Omega \cap B_\rho$,}$$
with $$\varphi \in C^2(B_\rho'), \quad \rho' I \le D_{x'}^2 \varphi(0) \le \frac{1}{\rho'} \, I,$$ and $u \ge \rho'$ on $\partial \Omega \setminus B_\rho$.
Assume $$ \det D^2 u= g \, d_{\partial \Omega}^\alpha \quad \mbox{in} \quad \Omega \cap B_\rho, \quad \quad \det D^2 u \le \frac{1}{\rho'} \quad \mbox{in} \quad \Omega \setminus B_\rho $$
with $$g \in C^\beta(\overline \Omega \cap B_\rho), \quad \quad \|g\|_{C^\beta} \le \frac{1}{\rho'}, \quad \rho'\le g(0) \le \frac{1}{\rho'},$$
for some $\beta>0$ small. Then $$u \in C^2 (\overline \Omega \cap B_\delta)$$ with $\delta$ and the modulus of continuity of $D^2u$ depending on $n$, $\alpha$, $\beta$, $\rho$, $\rho'$ and the $C^2$ modulus of continuity of $\varphi$ and $\partial \Omega$.
\end{thm}
\section{Preliminaries and rescaling}\label{s3}
In this section we use rescaling arguments and reduce the proof of Theorem \ref{T1} to Proposition \ref{p2} below.
First we show that $|S_h|^2 d_h^\alpha \sim h^n$ where $d_h$ is the $e_n$ coordinate of the center of mass $x^*_h$ of $S_h$. We can think of $d_h$ also as a quantity that represents roughly the height of $S_h$ in the $x_n$ direction. In the next proposition we prove that after using a sliding $A_h$ depending on $h$ we may normalize $S_h$ such that it has its center of mass on the $x_n$-axis and the corresponding normalized function $\tilde u$ satisfies essentially the same hypotheses as $u$.
\begin{prop}\label{p1}
Assume $u$ satisfies the hypotheses of Theorem \ref{T1}. Then for all $h \le c(\rho,\rho', \epsilon_0)$ there exists a sliding along $x_n=0$ $$A_h x=x-\tau_h x_n, \quad \quad \tau_h \cdot e_n=0, \quad |\tau_h| \le C(\rho,\rho', \epsilon_0)h^{-1/4},$$
such that the rescaled function $$\tilde u(A_h x)=u(x)$$ satisfies in $$\tilde S_h:=A_h S_h=\{ \tilde u <h\}$$ the following:
1) the center of mass $\tilde x^*_h$ of $\tilde S_h$ lies on the $x_n$ axis i.e. $\tilde x^*_h=d_h e_n.$
2) $$c_0 h^n \le |S_h|^2 d_h^ \alpha \le C_0 h^n,$$ with $c_0$, $C_0$ universal.
Also, after performing a rotation of the $x_1$,..,$x_{n-1}$ variables we can write
$$ \tilde x^*_h + c_0 D_h B_1 \subset \tilde S_h \subset C_0 D_h B_1,$$
where $$D_h:=diag(d_1,d_2,..,d_{n-1},d_n)$$ is a diagonal matrix that satisfies
\begin{equation}\label{dn}
\left(\prod_1^{n-1}d_i^2\right) \, d_n^{2+\alpha}=h^n.
\end{equation}
3) $$\tilde G_h:= \partial \tilde S_h \cap \{ \tilde u < h\} \subset \partial \tilde \Omega_h$$ is a graph, i.e. $$\tilde G_h=(x', g_h(x')) \quad \quad \mbox{with} \quad g_h(x') \le \frac 2 \rho |x'|^2,$$
and the function $\tilde u$ satisfies on $\tilde G_h$ $$(1-2 \epsilon_0) \varphi(x') \le \tilde u(x) \le (1+2 \epsilon_0) \varphi(x').$$
Moreover $\tilde u$ satisfies in $\tilde S_h$
$$(1-2\epsilon_0) \left [\left (x_n-\frac 4 \rho |x'|^2 \right )^+\right]^\alpha \le \det D^2 \tilde u \le (1+ 2 \epsilon_0) \left (x_n + \frac 4 \rho |x'|^2 \right )^\alpha.$$
\end{prop}
For simplicity of notation, in this section we denote by $c'$, $C'$, $c_i'$, $C_i'$ various constants that depend on the universal constants and on $\rho$, $\rho'$ and $\epsilon_0$ (instead of $c(\rho,\rho', \epsilon_0)$ etc.). Also we use $c'$, $C'$ for constants that may change their value from line to line whenever there is no possibility of confusion.
First we construct an explicit barrier for $u$.
\begin{lem}\label{l1}
Let $$\bar w (r,y):= r^2 g(y r^{-\frac 32}) \quad \quad \mbox{with} \quad g(t)=(1-t^\gamma)^+, \quad t \ge 0,$$
for some $\gamma>0$ small depending only on $n$. Then the function
$$w_1(x',x_n):=c'\bar w (|x'|,C'x_n),$$
is a lower barrier for $u$ provided that $c'$ (small), $C'$ (large) are appropriate constants depending on $n$, $\mu$, $\rho$, $\rho'$.
\end{lem}
\begin{proof}
Let $t=y r^{-\frac 32}$. Using that
$$\frac {dt}{dr}=-\frac 32 t r^{-1}, \quad \quad \frac{dt}{dy}=r^{-\frac 32},$$ we compute in the set where $\bar w>0$ (hence $t \in (0,1)$):
\begin{align*}
\bar w_{yy} &=r^{-1} g''=r^{-1} \gamma (1-\gamma) t^{\gamma-2},\\
\bar w_r &=r(2g-\frac 32 t g')=r(2g+\frac 32 \gamma t ^\gamma),\\
\bar w_{ry} &=r^{-\frac 12}(2g-\frac 32 t g')'=r^{-\frac 12} \gamma t^{\gamma-1}(-2+ \frac 32 \gamma),\\
\bar w_{rr} &=(2g-\frac 32 t g')-\frac 32 t (2g-\frac 32 t g')'=2g+\frac 32 \gamma(3-\frac 32 \gamma)t^\gamma.
\end{align*}
We find
\begin{align*}
\det D^2_{r,y} \bar w & \ge r^{-1} \gamma^2 t^{2 \gamma-2}\left[(1-\gamma) \frac 32 (3-\frac 32 \gamma)-(2-\frac 32 \gamma)^2 \right]\\
& \ge c_0 r^{-1} t^{2 \gamma-2},
\end{align*}
and
$$\frac{\bar w_r}{r} \ge c_0 t^ \gamma,$$
thus
$$\det D_x^2 \bar w (|x'|,x_n) \ge c_1 |x'|^{-1} t^{n \gamma -2} \ge c_1 |x'|^{-1}.$$
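In the last step we used the elementary fact that, for a function of the form $w(x)=\bar w(|x'|,x_n)$, the Hessian $D^2_x w$ has the eigenvalue $\bar w_r/r$ with multiplicity $n-2$, corresponding to the directions tangent to the sphere $|x'|=r$, together with the eigenvalues of the $2\times 2$ Hessian of $\bar w$ in the $(r,x_n)$ variables, hence
$$\det D_x^2 w=\left(\det D^2_{r,y} \bar w\right)\left(\frac{\bar w_r}{r}\right)^{n-2}.$$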
Now we choose $c'=c(\rho,\rho')$ small such that
$$c'|x'|^2 \le \frac 14 \mu |x'|^2 \le u \quad \mbox{on $\partial \Omega \cap B_{\rho/2}$,}$$
$$c'|x'|^2 \le \rho' \quad \mbox{on $\partial \Omega \cap \{x_n \le \rho\}$}$$
and then $C'$ large such that
$$\det D^2 w_1 >1/ \rho' \quad \quad \mbox{on $B_{1/\rho} \cap \{ w_1>0\} $.}$$
Since $u \ge w_1 $ on $\partial (\Omega \cap \{x_n \le \rho\})$ and $\det D^2 w_1 > \det D^2 u$ on the set where $w_1>0$ we find $u \ge w_1$ in $\Omega \cap \{x_n \le \rho\}$.
\end{proof}
{\it Proof of Proposition \ref{p1}.}
Since $u \ge w_1$ with $w_1$ as in Lemma \ref{l1} we have $S_h \subset \{ w_1<h\}$ thus
$$S_h \subset \left \{ c'|x'|^2 (1-C'x_n |x'|^{-\frac 32}) < h \right \},$$
or
\begin{equation}\label{1}
S_h \subset \left \{ |x'| \le C_1' h^ {1/2} \right \} \cup \left \{ x_n \ge c_1' |x'|^{\frac 32} \right \}.
\end{equation}
Let $x^*$ denote the center of mass of $S_h$ and define $d_h$ as
$$d_h=x^* \cdot e_n.$$
We claim that
\begin{equation}\label{2}
d_h \ge h^{3/4} \quad \quad \mbox{for all $h<c_2'$.}
\end{equation}
Otherwise we have
$$S_h \subset \{ |x'| \le C' h^{1/2} \} \cap \{ x_n \le C' h^{3/4} \},$$
and we compare $u$ with
$$w_2:=c' h \left [ \left ( \frac{|x'|}{h^{1/2}} \right)^2 + \left (\frac{x_n}{h^{3/4}} \right)^2 \right] + t x_n,$$ with $c'$ sufficiently small, and some $t>0$ arbitrarily small. In $S_h$ we have $w_2 \le h$ and
$$\det D^2 w_2 = c' h^{-1/2} \ge \det D^2 u,$$
and on $\partial \Omega \cap \partial S_h$ we use $x_n \le C'|x'|^2$ and obtain
$$w_2 \le c'(|x'|^2 + C'|x'|^2 x_n h^{-1/2}) + t C' |x'|^2 \le \frac \mu 2 |x'|^2 \le u.$$
In conclusion $w_2 \le u$, which contradicts $\nabla u(0)=0$, and the claim \eqref{2} is proved.
Next we show that for all small $h$ we also have the upper bound
\begin{equation}\label{3}
d_h \le C h ^ \frac{1}{2+\alpha}.
\end{equation}
Assume by contradiction that $d_h \ge C h^\frac{1}{2+\alpha}$ for some large $C$ universal. Since $S_h$ contains the set $\partial \Omega \cap B_{ch^{1/2}}$ and the point
$$x^*_h=({x^*_h}' , d_h) \quad \quad \mbox{with} \quad |{x^*_h}' | \le C' d_h^{2/3},$$
it contains also the convex set generated by them. It is straightforward to check that this convex set contains an ellipsoid $E$ of volume
$$|E|=c(n) (ch^{1/2})^{n-1} Ch^\frac{1}{2+\alpha},$$
with $c(n)$ a small constant depending only on $n$, such that
$$E\subset \{x_n -\frac 1 \rho |x'|^2 \ge h^\frac{1}{2+\alpha}\} \cap B_{\rho/2},$$
if $h$ is small. Now we compare $u$ with the quadratic polynomial $P$ that solves
$$ \det D^2 P = \frac \mu 2 (h^\frac{1}{2+\alpha})^\alpha \le \det D^2 u, \quad \quad P=h \ge u \quad \mbox{on} \quad \partial E$$
hence $P \ge u \ge 0$. Writing this inequality at the center of $E$ we obtain
$$h^n \ge c(n) \, |E|^2 \, \det D^2 P,$$ and we reach a contradiction if $C$ is sufficiently large, hence \eqref{3} is proved.
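For completeness we indicate the computation behind the last displayed inequality; the quantities $\lambda$, $z$, $e_i$ below are introduced only for this verification. Since $P-h$ is a quadratic polynomial vanishing on $\partial E$, we have $P=h-\frac \lambda 2 \left(1-\sum (x_i-z_i)^2/e_i^2\right)$ where $z$ denotes the center of $E$ and $e_1,\ldots,e_n$ its semi-axes, and then
$$\det D^2 P=\lambda^n \prod e_i^{-2}, \quad \quad \mbox{hence} \quad \lambda \ge c(n)\left(|E|^2 \det D^2 P\right)^{1/n}.$$
Evaluating $0 \le u \le P$ at the center of $E$ gives $h \ge \lambda/2$, which is the stated inequality.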
From \eqref{3} we see that $S_h \subset B_{\rho/2}$ for all small $h$, and the argument above shows in fact that
\begin{equation}\label{4}
|S_h|^2 \, d_h^ \alpha \le C_0 h^n,
\end{equation}
for all small $h$. Indeed, by John's lemma we can choose the ellipsoid $E$ centered at $x^*$ with
$$E-x^* \subset \frac 1 4 (S_h - x^*), \quad \quad |E| \ge c(n) |S_h|,$$
and $$E \subset \{ x_n -\frac 1 \rho |x'|^2 \ge d_h /2 \},$$
and then we easily obtain \eqref{4} as before.
Now we let
$$\tilde x= A_h x:=x-\tau_h x_n, \quad \quad \tau_h:=\frac{{x_h^*}'}{x_h^* \cdot e_n},$$
and
$$ \tilde u(\tilde x)=\tilde u(A_hx)=u(x).$$
From \eqref{1}, \eqref{2} we find
\begin{equation}\label{5}
|\tau_h| \le C'\frac{d_h^{2/3}}{d_h} \le C'd_h^{-1/3} \le C' h^{-1/4},
\end{equation}
and $\tilde x^*_h$ lies on the $x_n$ axis by construction.
We have $\tilde x_n=x_n$ and if $x \in \partial \Omega \cap S_h \subset B_{Ch^{1/2}}$ then
$$|x-\tilde x|=|\tau_h x_n| \le C' h^{-1/4} |x'|^2 \le C'h^{1/4}|x'|.$$
This easily implies that $\tilde G_h$ defined in Proposition \ref{p1} belongs to the graph of a function $g_h$ that satisfies $|g_h(x')| \le (2 / \rho) |x'|^2$. Since
$$|\varphi(x')-\varphi(\tilde x')| \le C|x'| |x'-\tilde x'| \le C' h^{1/4} |x'|^2 \le \frac{\epsilon_0}{2} \varphi(x'),$$
on $\tilde G_h$ we have
$$(1-2\epsilon_0) \varphi(\tilde x ') \le \tilde u(\tilde x) \le (1+2\epsilon_0) \varphi(\tilde x').$$
Also if $x \in S_h$ then (see \eqref{5}, \eqref{3})
$$|x'|^2 \le 2 |\tilde x'|^2 + 2 |\tau_h|^2 x_n^2 \le 2|\tilde x'|^2 + C'd_h^{-2/3} d_h x_n \le 2|\tilde x'|^2 + \frac{\epsilon_0 \rho}{2} x_n $$
thus
$$x_n+ \frac 1 \rho |x'|^2 \le (1+ \frac{\epsilon_0}{2})(\tilde x_n + \frac 4 \rho |\tilde x'|^2),$$
$$x_n- \frac 1 \rho |x'|^2 \ge (1 - \frac{\epsilon_0}{2})(\tilde x_n - \frac 4 \rho |\tilde x'|^2),$$
which imply the desired inequalities for $\det D^2 \tilde u$.
It remains to show part 2) of Proposition \ref{p1}. After a rotation of the first $n-1$ coordinates we may assume that $\tilde S_h \cap \{x_n=d_h\}$ is equivalent to an ellipsoid of axes $d_1\le d_2 \le \cdots \le d_{n-1}$ i.e.
$$\left \{ \sum_1^{n-1} (\frac {x_i}{d_i})^2 \le 1 \right \} \cap \{ x_n = d_h\} \subset \tilde S_h \cap \{x_n =d_h \} \subset \left \{ \sum_1^{n-1} (\frac {x_i}{d_i})^2 \le C(n) \right \},$$
with $C(n)$ a constant depending only on $n$. We find
$$\tilde S_h \subset \left \{ \sum_1^{n-1} (\frac {x_i}{d_i})^2 \le C(n) \right \} \cap \{ 0 \le x_n \le C(n) d_h \},$$
and also since $\tilde u \le c |x'|^2$ on $\tilde G_h$ we see that
\begin{equation}\label{6}
d_i \ge c_3 h^{1/2}.
\end{equation}
We claim that
\begin{equation}\label{7}
d_h^{2+\alpha}\prod_1^{n-1} d_i^2 \ge c_4 h^n.
\end{equation}
Otherwise, similarly as before we consider
$$w_3:=ch\left[\sum_1^{n-1} (\frac{x_i}{d_i})^2 + (\frac{x_n}{d_h})^2 \right] + t x_n,$$
with $c$ small, and obtain (provided that $c_4$ is chosen sufficiently small)
$$\det D^2 w_3 \ge c^n h^n \left(d_h^2 \prod_1^{n-1} d_i^2\right)^{-1} \ge C d_h^ \alpha \ge \det D^2 \tilde u,$$
$$w_3 \le h =\tilde u \quad \mbox{on} \quad \partial \tilde S_h \setminus \tilde G_h,$$
and moreover on $\tilde G_h$ we use \eqref{6} and obtain
$$w_3 \le c|x'|^2 + C h \frac{x_n}{d_h} + t x_n\le \frac{\mu}{4}|x'|^2 \le \tilde u.$$
This implies $\tilde u \ge w_3$ in $\tilde S_h$ and we contradict that $\nabla \tilde u(0)=0$, hence \eqref{7} is proved.
Now we define $d_n$ from $d_1$, .., $d_{n-1}$ by the equality \eqref{dn}, and \eqref{4}, \eqref{7} give
\begin{equation}\label{7.1}
cd_n \le d_h \le C d_n
\end{equation}
which proves part 2).
\qed
\begin{rem}\label{r1}
The set $\tilde S_h \cap \{x_n=d_h\}$ is just a translation of $S_h \cap \{x_n=d_h\}$, hence $d_1$, $d_2$,..,$d_{n-1}$ represent the lengths of the axes of an ellipsoid which is equivalent to $S_h \cap \{x_n=x^*_h \cdot e_n\}$.
\end{rem}
\begin{rem}
We can prove \eqref{7} without using the upper bound on $\varphi(x')$. Precisely, if we assume that $\varphi$ satisfies
$$\mu \mathcal N \le D_{x'}^2 \varphi \le \mu^{-1} \mathcal N,$$
with $$\mathcal N =diag(a_1^2,\ldots, a_{n-1}^2), \quad \quad a_i \ge 1,$$ then \eqref{7} still holds. Indeed, now we have $d_i \ge c_3 h^{1/2}/a_i$ instead of \eqref{6} and then on $\tilde G_h$ we still satisfy
$$w_3 \le c\sum a_i^2x_i^2 + C x_n h/d_h + t x_n \le \varphi(x') \le \tilde u.$$
\end{rem}
We mention that at the beginning of the proof of Proposition \ref{p1} we obtained a pointwise $C^{1,1/3}$ estimate for solutions that grow quadratically away from their tangent plane and have bounded Monge-Ampere measure. We state this result below although it will not be used in the proof of Theorem \ref{T1}.
\begin{prop}\label{p0}
Assume $\Omega$, $u$ satisfy hypotheses H1, H2 of Section \ref{s2}, and
$$\rho |x'|^2 \le u(x) \le \frac {1}{\rho} |x'|^2 \quad \mbox{on} \quad \partial \Omega, \quad \quad \det D^2u \le \frac {1}{\rho} \quad \mbox{in} \quad \Omega.$$ Then
$$u(x) \le C'|x|^\frac 43 \quad \quad \mbox{in} \quad \Omega \cap B_{c'}$$
with $C'$, $c'$ constants depending on $n$ and $\rho$.
\end{prop}
\begin{proof}
The section $S_h$ and its center of mass $x_h^*$ satisfy \eqref{1} and \eqref{2} since we only used the upper bound on $\det D^2 u$ and the quadratic bound from below for $u$ on $\partial \Omega$. From this we obtain that the convex hull generated by $x_h^*$ and $\partial \Omega \cap B_{c'h^{1/2}}$, which is included in $S_h$, contains $\overline \Omega \cap B_{c'_1h^{3/4}}$ for some small $c_1'$, which proves the proposition.
\end{proof}
In order to prove Theorem \ref{T1} we need to show that the quantities $d_i$ are bounded from above by $C h^{1/2}$ for some $C$ universal. Precisely we prove the following lemma, whose proof will be completed in Section \ref{s6}.
\begin{lem}\label{l2}
Assume $u$ satisfies the hypotheses of Proposition \ref{p1} for some $\epsilon_0$ sufficiently small, universal. Then for all $h \le c(\rho,\rho')$ we have
$$\max_{1\le i \le n-1} \, \, d_i \le C h^{1/2},$$
for some $C$ universal, with $d_i$ defined as in Proposition \ref{p1}.
\end{lem}
{\it Lemma \ref{l2} implies Theorem \ref{T1}.}
From Lemma \ref{l2} and \eqref{6}, \eqref{dn} we find ($i \ne n$)
$$ch^{1/2} \le d_i \le Ch^{1/2}, \quad ch^\frac{1}{2+\alpha} \le d_n \le C h^\frac{1}{2+\alpha},$$
hence, by Proposition \ref{p1},
\begin{equation}\label{8}
\tilde x_h^* + c F_h B_1 \subset A_h S_h \subset C F_h B_1,
\end{equation}
with $$F_hx:=(h^\frac 12x',h^\frac{1}{2+\alpha}x_n).$$
Since $ \partial \tilde \Omega_h \cap B_{ch^{1/2}} \subset \tilde G_h \subset \tilde S_h = A_h S_h$ we see from the inclusion above that also
$$c F_h B_1 \cap A_h \overline \Omega \subset A_h S_h \subset C F_h B_1.$$
Using in \eqref{8} that $S_{h/2} \subset S_h$ we find
$$F_h^{-1} A_h A_{h/2}^{-1} F_{h/2} B_1 \subset C B_1$$ which gives
$$|\tau_h - \tau_{h/2}| \le C_1 h^{\frac12-\frac{1}{2+\alpha}},$$
for all $h \le c(\rho,\rho')$. If we denote $h_k:=2^{-k}$ then, since $\alpha>0$, we obtain that $\tau_{h_k}$ converges to some $\tau_0$ and
$$|\tau_h-\tau_0| \le C_2 h^{\frac12-\frac{1}{2+\alpha}}, \quad \quad \mbox{for all $h=h_k \le c(\rho,\rho')$.}$$
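Both estimates follow from the identity $A_hA_{h/2}^{-1}x=x-(\tau_h-\tau_{h/2})x_n$. Indeed, applying $F_h^{-1}A_hA_{h/2}^{-1}F_{h/2}$ to $e_n$ and using the inclusion above we find
$$h^{-\frac 12}\left(\frac h2\right)^{\frac{1}{2+\alpha}}|\tau_h-\tau_{h/2}| \le C,$$
which is the first bound; the second follows by summing the first over the dyadic scales $h_j=2^{-j}$, $j\ge k$, the resulting geometric series being convergent because the exponent $\frac 12-\frac{1}{2+\alpha}$ is positive.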
The inequality for $|\tau_h-\tau_0|$ implies
$$cB_1 \subset F_h^{-1} A_h A_0^{-1} F_h \, \, B_1 \subset C \, B_1,$$
hence we can replace $A_h$ with $A_0$ in the second inclusion above and obtain
$$k F_h B_1 \cap A_0 \overline \Omega \subset A_0 S_h \subset k^{-1} F_h B_1,$$
for some small $k$ universal.
\qed
{\bf Normalized solutions.}
Next we ``normalize'' $\tilde u$ in $\tilde S_h$ (or we may think we normalize $u$ in $S_h$) back to size 1 in such a way that it solves a similar equation. Precisely we define
\begin{equation}\label{v}
v(x):=\frac{1}{h} \tilde u(D_hx)=\frac {1}{h} \tilde u(d_1 x_1,\ldots,d_n x_n)
\end{equation}
with
$D_h$, $d_1, \ldots , d_n$ defined in Proposition \ref{p1}. Then $v$ is a continuous convex function in $\overline \Omega_v$ with $\Omega_v:=D_h^{-1} \tilde \Omega$ and
\begin{equation}\label{v1}
v(0)=0, \quad \quad v\ge 0, \quad \quad \nabla v(0)=0 \quad \quad \mbox{(in the sense of H2).}
\end{equation}
The section $S_1(v):=\{ v<1\}$ satisfies $S_1(v)=D_h^{-1} \tilde S_h$ thus
\begin{equation}\label{v2}
x^* + c B_1 \subset S_1(v) \subset C B_1, \quad \quad \mbox{for some point $x^*$.}
\end{equation}
We compute
$$ \det D^2 v(x) = h^{-n} (\det D_h)^2 \det D^2 \tilde u(D_hx)=d_n^{-\alpha} \det D^2 \tilde u (D_hx).$$
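In the last equality above we used the definition \eqref{dn} of $d_n$:
$$(\det D_h)^2=\prod_{i=1}^{n}d_i^2=\left(\prod_1^{n-1}d_i^2\right) d_n^{2}=h^n d_n^{-\alpha}.$$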
From \eqref{1}, \eqref{7.1} we know that for $i <n$ we have $d_i \le C' d_n^{2/3}$ hence
$$\left |\frac 4 \rho \sum_1^{n-1}(d_i x_i)^2 \right | \le C' d_n^{4/3} |x'|^2 \le \epsilon_0 d_n |x'|^2,$$
if $h <c'$. Using this inequality in Proposition \ref{p1} part 3) we obtain
$$(1-2\epsilon_0)[(x_n-\epsilon_0|x'|^2)^+]^\alpha \le d_n^{-\alpha} \det D^2 \tilde u(D_hx) \le (1+ 2 \epsilon_0)(x_n+\epsilon_0|x'|^2)^\alpha,$$
hence
\begin{equation}\label{v3}
(1-2\epsilon_0)[(x_n-\epsilon_0 |x'|^2)^+]^\alpha \le \det D^2 v \le (1+ 2 \epsilon_0)(x_n+\epsilon_0|x'|^2)^\alpha \quad \quad \mbox{in $S_1(v)$.}
\end{equation}
If we denote by $G_v$ the closed set $G_v:=\partial \Omega_v \cap \partial S_1(v)$ we have that $G_v$ is the graph of a convex function $(x',g_v(x'))$ with
$$d_n g_v \le \frac2 \rho \sum d_i^2 x_i^2 \le \epsilon_0 d_n |x'|^2,$$
hence
\begin{equation}\label{v4}
0 \le g_v \le \epsilon_0 |x'|^2.
\end{equation}
We have $v=1$ on $\partial S_1(v) \setminus G_v$, and on $G_v$ the function $v$ satisfies
\begin{equation}\label{v5}
(1-2\epsilon_0) \varphi_v(x') \le v \le (1+2\epsilon_0) \varphi_v(x'),
\end{equation}
with
$$\varphi_v(x'):=\frac 1 h \varphi(d_1x_1,\ldots, d_{n-1}x_{n-1}).$$
Notice that
$$\mu^{-1} \mathcal N \ge D^2_{x'} \varphi_v \ge \mu \mathcal N,$$
with (see \eqref{6}) $$\mathcal N=diag(a_1^2,a_2^2,\ldots,a_{n-1}^2), \quad \quad a_i:=\frac{d_i}{h^{1/2}} \ge c.$$
We collect the properties \eqref{v1}-\eqref{v5} for $v$ into a formal definition below.
{\bf The class $\mathcal D_\sigma^{\bar \mu}$.}
Let $\bar \mu$, $\sigma$ be positive (small) fixed constants, and let $\bar \mu \le a_1 \le \cdots \le a_{n-1}$ be real numbers.
We say that $$v \in \mathcal D_\sigma^{\bar \mu}(a_1,\ldots,a_{n-1})$$ if $v$ is a continuous convex function defined on a convex set $\overline \Omega$ such that
1)
$$0 \in \partial \Omega, \quad B_{\bar\mu}(x_0) \subset \Omega \subset B_{1/\bar \mu}^+ \quad \mbox{for some $x_0$,}$$
$$1 \ge v\ge 0, \quad v(0)=0, \quad \nabla v(0)=0,$$
2) in the interior of $\Omega$ the function $v$ satisfies:
$$(1-\sigma)[(x_n-\sigma |x'|^2)^+]^\alpha \le \det D^2 v \le (1+ \sigma)(x_n+\sigma|x'|^2)^\alpha,$$
3) on $\partial \Omega$ the function $v$ satisfies:
\noindent there exists a closed set $G \subset \partial \Omega$ which is a graph $(x',g(x'))$ with
$$g(x') \le \sigma |x'|^2,$$ such that
$$v=1 \quad \mbox{on $\partial \Omega \setminus G$,}$$
and
$$(1-\sigma) \varphi_v(x') \le v \le (1+\sigma) \varphi_v(x') \quad \quad \mbox{on $G$}$$
for some function $\varphi_v$ such that $$\bar \mu^{-1} \mathcal N \ge D^2_{x'} \varphi_v \ge \bar \mu \mathcal N, \quad \quad \mbox{with} \quad \mathcal N=diag(a_1^2,a_2^2,\ldots,a_{n-1}^2).$$
\
In view of \eqref{v1}-\eqref{v5} and the definition above we may rephrase Proposition \ref{p1} as follows.
\begin{lem}\label{l3}
If $u$ satisfies the hypotheses of Proposition \ref{p1} and $v$ is the normalized solution of $u$ in $S_h$ given by \eqref{v}, and $h \le h_0:=c(\rho,\rho',\epsilon_0)$, then
$$v \in \mathcal D_{2 \epsilon_0}^{\bar \mu}(a_1,a_2,..,a_{n-1}),$$
for some $\bar \mu$ universal (depending on $n$, $\alpha$, $\mu$) and with $a_i = d_i h^{-\frac 12}$.
\end{lem}
\
{\bf Definition of $\mathcal S_h'(u)$.}
Given a section $S_h(u)$ at the origin for some convex function $u$, we define the set $\mathcal S'_h(u) \subset \mathbb R^{n-1}$ (and call it {\it normalized diameter} of $S_h(u)$) as
$$x' \in \mathcal S'_h(u) \Leftrightarrow x^*_h + h^\frac 12(x',0) \in S_h(u),$$
where $x^*_h$ denotes the center of mass of $S_h(u)$. In other words $\mathcal S_h'$ is obtained by intersecting $S_h$ with the $n-1$ dimensional plane generated by $e_1$,..,$e_{n-1}$ passing through its center of mass, and then performing a dilation by the factor $h^{- 1/2}$.
From the definition we see that if $\tilde u(Ax)=u(x)$ with $A$ a sliding along $\{x_n=0\}$ then $\mathcal S_h'(\tilde u)=\mathcal S_h'(u)$. If $u$ satisfies the hypotheses of Proposition \ref{p1} then, by the definition of $d_i$ (see Remark \ref{r1}), we have that $\mathcal S_h'(u)$ is equivalent to the $n-1$ dimensional ellipsoid $E_h$ of axes $a_i=d_i h^{-1/2}$, $i<n$ i.e.
\begin{equation}\label{12}
E_h \subset \mathcal S_h'(u) \subset C(n) E_h.
\end{equation}
Thus Lemma \ref{l2} is equivalent to showing that $\mathcal S_h'(u)$ is included in a fixed ball of universal radius for all $h$ small.
Next we check the relation between $\mathcal S'_t(v)$ and $\mathcal S'_{th}(u)$ if $v$ is the normalized solution for $u$ in $S_h$. Since
$$v=\frac 1 h \tilde u(D_h x)$$ we have $S_t(v)=D_h^{-1} S_{th}(\tilde u)$ hence
\begin{equation}\label{11}
\mathcal S_t'(v)= h^\frac 1 2 {D_h'}^{-1} \mathcal S'_{th}(\tilde u) = h^ \frac 12 {D_h'}^{-1} \mathcal S'_{th}(u),
\end{equation}
where $D_h'=diag(d_1,..,d_{n-1})$ represents the restriction of $D_h$ to the first $n-1$ variables.
In order to prove Lemma \ref{l2} and therefore Theorem \ref{T1} it suffices to prove the next proposition which provides bounds for the sets $\mathcal S_t'(v)$ for general functions $v \in \mathcal D_\sigma^{\bar \mu}$.
\begin{prop}\label{p2}
Let $\bar \mu$ small, $M$ large be fixed. There exist positive constants $\delta$, $\bar c$ small, depending only on $\bar \mu$, $n$, $\alpha$, $M$ such that if $$v \in \mathcal D_\delta^{\bar \mu}(a_1,..,a_{n-1}), \quad \quad \mbox{and} \quad a_{k+1} \ge \delta^{-1},$$
for some $0 \le k \le n-2$, then
$$\mathcal S_t'(v) \subset \{|(x_{k+1},..,x_{n-1})| \le \frac 1 M \},$$
for some $t \in [\bar c, 1]$.
\end{prop}
\begin{rem}
Since $S_1(v) \subset B_{1/\bar \mu}$ we always have the inclusion
\begin{equation}\label{10}
\mathcal S_t'(v) \subset t^{-\frac 1 2} B_{2/\bar \mu}'.
\end{equation}
Proposition \ref{p2} states roughly that if the boundary data of $v$ grows sufficiently fast in the $(x_{k+1},..,x_{n-1})$ variables then the normalized diameter $\mathcal S_t'(v)$ projects into an arbitrarily ``small'' set in these variables.
\end{rem}
The proof of Proposition \ref{p2} will be completed in the next three sections. We conclude this section by showing that Lemma \ref{l2} follows from Proposition \ref{p2}.
\
\begin{lem}\label{l3.1}
Proposition \ref{p2} implies Lemma \ref{l2}.
\end{lem}
\begin{proof} We apply Proposition \ref{p2} for $\bar \mu$ as in Lemma \ref{l3} and for $M:=4 \sqrt n$, hence the constants $\delta$, $\bar c$ above become universal constants. We also choose $\epsilon_0=\delta /2$ so that Proposition \ref{p2} applies for all normalized functions of $u$ in $S_h$ with $ h\le h_0$, with $h_0=c(\rho,\rho')$.
Denote by $d_i(h)$ and $a_i(h)$ the quantities $d_i$ and $a_i=d_i h^{-1/2}$ (for $i<n$) corresponding to the section $S_h$. We show that for any $h \le h_0$ we have
\begin{equation}\label{9}
\max_i a_i(h) \ge \bar C \quad \quad \Rightarrow \quad \quad \max_i a_i(th) \le \frac 12 \max_i a_i(h),
\end{equation}
for some $t \in [\bar c, 1]$, and with $\bar C$ universal.
Since $S_{h_0} \subset B_{1/\rho}$ we find $d_i(h_0) \le \rho^{-1}$ hence
$$\max a_i(h_0) \le C_0':=\rho^{-1} h_0^{- \frac 1 2}.$$
Now property \eqref{9} implies that $\max a_i(h)$ is bounded above by a universal constant for all $h \le c_1'$, thus Lemma \ref{l2} holds.
In order to prove \eqref{9} let $v$ denote the normalized function for $u$ in $S_h$ and assume that $a_{k+1}(h)$ is the first $a_i(h)$ greater than $\delta^{-1}$ i.e.
$$a_1 \le \cdots \le a_{k} \le \delta^{-1} \le a_{k+1} \le \cdots \le a_{n-1}.$$
Since $v \in \mathcal D_\delta^{\bar \mu} (a_1,..,a_{n-1})$, by Proposition \ref{p2} we have (see \eqref{10})
$$\mathcal S_t'(v) \subset \{|(x_1,..,x_k)| \le C_1\} \times \{|(x_{k+1},..,x_{n-1})| \le \frac 1 M\},$$
for some $C_1$ universal with $$C_1:=2\bar c^{-\frac 12}/\bar \mu \ge 2t^{-\frac 12}/ \bar \mu.$$
From \eqref{11} $$\mathcal S'_{th}(u)=h^{-\frac 12} D_h' \mathcal S_t'(v)=diag(a_1,..,a_{n-1}) \mathcal S_t'(v),$$ and we obtain
$$\mathcal S'_{th}(u) \subset \prod_{i=1}^{k}\{|x_i|\le C_1 a_i\} \times \prod_{i=k+1}^{n-1}\{|x_i|\le \frac{a_i}{M}\}.$$
For $i \le k$ we have
$$C_1 a_i \le C_1 \delta^{-1} := \frac {\bar C} { M} \le \frac{\max a_i} {M},$$
and we find
$$\mathcal S'_{th}(u) \subset \frac 14 \max a_i(h) B'_1,$$
which gives (see \eqref{12}) $$\max a_i(th) \le \frac 12 \max a_i(h).$$
\end{proof}
\section{Compactness and the class $\mathcal D_0^\mu$}\label{s4}
In this section we use compactness arguments and reduce Proposition \ref{p2} to Theorem \ref{T3} below.
We prove Proposition \ref{p2} by compactness, letting $\sigma \to 0$ and $a_{k+1} \to \infty$.
First we remark that if we have a sequence of functions $v_m$ in $\mathcal D_{\sigma_m}^\mu$ with $\sigma_m \to 0$ then we can extract a subsequence $v_{m_l}$ that converges to a limiting convex function $v$. Here, and throughout this paper, the convergence of convex functions (defined on possibly different domains) means that their supergraphs converge in the Hausdorff distance (in $\mathbb R^{n+1}$) to the supergraph of the limit function. The Monge-Ampere measure of the limit function $v$ is given by $x_n^\alpha$, however $v$ may have discontinuities at the boundary. Before we introduce the class $\mathcal D_0^\mu$ of such limiting solutions, we recall some definitions of boundary values for convex functions defined in convex domains (see \cite{S1}).
\begin{defn}
Let $u:\overline \Omega \to \mathbb R$, with $u$ convex, and $\varphi: \partial \Omega \to \mathbb R$ be two bounded semicontinuous functions, i.e. their upper graphs
$$\{x_{n+1} \ge u(x)\} \subset \overline \Omega \times \mathbb R, \quad \quad \{x_{n+1} \ge \varphi(x) \} \subset \partial \Omega \times \mathbb R,$$ are closed sets. We say that $$u=\varphi \quad \mbox{on} \quad \partial \Omega$$
if $u|_{\partial \Omega}=\varphi^*$ where $\varphi^*$ represents the convex envelope of $\varphi$. In other words $u=\varphi$ on $\partial \Omega$ means that, when we restrict to the cylinder $\partial \Omega \times \mathbb R$, the upper graph of $u$ coincides with the convex envelope of the upper graph of $\varphi$.
\end{defn}
An example of such a function $\varphi$ is of course $u|_{\partial \Omega}$, the restriction of $u$ to $\partial \Omega$, and when $\Omega$ is strictly convex this is the only possible choice. On the other hand, on a flat part of the boundary $\partial \Omega$ there are many choices of functions $\varphi \ge u$ since we only require $\varphi^*=u$. The advantage of the definition above is that the maximum principle still holds and the boundary data behaves well when taking limits. Precisely we have (see Proposition 2.2 and Theorem 2.7 in \cite{S1}):
{\it Maximum Principle:} Assume
$$u=\varphi, \quad v=\psi, \quad \varphi \le \psi \quad \mbox{on $\partial \Omega$},$$
$$\det D^2 u \ge f \ge \det D^2 v \quad \mbox{in $\Omega$.} $$
Then $u \le v$.
\
{\it Closedness under limits:} Assume $$\det D^2 u_k=f_k, \quad \quad u_k=\varphi_k \quad \mbox{on} \quad \partial \Omega_k,$$ and
$$u_k \to u, \quad \varphi_k \to \varphi, \quad f_k \to f.$$ Then
$$\det D^2 u=f, \quad \quad\mbox{and} \quad u=\varphi \quad \mbox{on} \quad \partial \Omega.$$
\
By $u_k \to u$, $\varphi_k \to \varphi$ above we understand that the corresponding upper graphs converge in the Hausdorff distance and $f_k \to f$ means that $f_k$ converges uniformly on compact sets to $f$.
We also use the following property of boundary values as defined above: if $u=\varphi$ on $\partial \Omega$ then the restriction of $u$ to the set $\{ u \le h\}$
satisfies
$$\mbox{ $u=\varphi$ on $\{\varphi \le h\}$ and $u=h$ on the rest of $\partial \{u<h\}$}.$$
Next we introduce the class $\mathcal D_0^\mu$. By abuse of notation we denote its elements still by $u$; they can be viewed as limits of normalized solutions of the functions $u$ from Section \ref{s3}.
{\bf The class $\mathcal D_0^\mu$.}
Let $\mu>0$ be fixed, and let $\mu \le a_1 \le \cdots \le a_k$ be $k$ real numbers, $0 \le k \le n-1$. We say that the convex function $u$ defined in the convex set $\overline \Omega$ belongs to the class
$$u \in \mathcal D_0 ^\mu(a_1,..,a_k, \infty,..,\infty)$$
if the following hold:
1) $$0 \in \partial \Omega, \quad B_\mu(x^*) \subset \Omega \subset B_{1/\mu}^+ \quad \mbox{for some $x^*$},$$
$$u \ge 0, \quad u(0)=0, \quad \nabla u(0)=0,$$
2) $$\det D^2 u=x_n^\alpha \quad \mbox{in $\Omega$,}$$
3) $$ \mbox{$u=\varphi$ on $\partial \Omega$ with} \quad \quad \varphi:=\left \{
\begin{array}{l}
\psi_u \quad \quad \mbox{on $G \subset \partial \Omega$,}\\
1 \quad \quad \quad \mbox{on $\partial \Omega \setminus G$,}
\end{array}
\right.
$$
where $\psi_u(x_1,..,x_k)$ is a nonnegative convex function of $k$ variables satisfying
$$\mu^{-1} \mathcal N_k \ge D^2 \psi_u \ge \mu \mathcal N_k, \quad \quad \quad \mathcal N_k:=diag(a_1^2,..,a_k^2),$$
and $G$ represents the $k$ dimensional set (in $\mathbb R^n$) where $\psi_u \le 1$, i.e.
$$G:=\{x \in \mathbb R^n| \quad \psi_u(x_1,..,x_k) \le 1, \quad x_i=0 \quad \mbox{if} \quad i>k \}.$$
We easily obtain the following lemma
\begin{lem}[Compactness]\label{l4}
Assume $v_m \in \mathcal D_{\sigma_m}^\mu(a_1^m,..,a_{n-1}^m)$ is a sequence of functions with
$$\sigma_m \to 0, \quad a_{k+1}^m \to \infty.$$
Then we can extract a convergent subsequence to a function $u$ with
$$u \in \mathcal D_0^\mu(a_1,..,a_l,\infty,..,\infty)$$
for some $0 \le l \le k$.
\end{lem}
\begin{proof}
All the properties for $u$, except $\nabla u(0)=0$, follow from the closedness under limits property above. In order to show that $\nabla u(0)=0$ we remark that if $v \in \mathcal D_\sigma^\mu$ (with $\sigma \in [0,1/2)$) then we can obtain from the proof of Proposition \ref{p1} that the center of mass $x^*_h(v)$ of $S_h(v)$ satisfies
\begin{equation}\label{13}
x_h^*(v)\cdot e_n \ge h^\frac 34 \quad \quad \mbox{if $h \le c$,}
\end{equation}
where $c$ depends only on $n$, $\alpha$, $\mu$. Indeed, we bound $v$ from below using the same barriers $w_1$ and $w_2$ (with constants depending only on $n$, $\alpha$, $\mu$) and obtain the estimates \eqref{1}, \eqref{2}. We can do this since we only need the inequality $v \ge c|x'|^2$ on the part of the boundary where $v<1$, which is clearly satisfied by all $v \in \mathcal D_\sigma^\mu$.
Since property \eqref{13} is preserved after taking limits we see that $u$ satisfies it as well, and this easily implies that $\nabla u(0)=0$ since otherwise $S_h(u) \subset \{ x_n \le O(h) \}$ and we contradict \eqref{13}.
\end{proof}
\begin{rem}\label{r3}
In the proof above we allow $\sigma_m=0$ and $a_i=\infty$ for some $i$, therefore the compactness holds for the class $\mathcal D_0^\mu$ as well.
\end{rem}
Using the compactness lemma above we see that in order to prove Proposition \ref{p2} it suffices to prove the following version for the class $\mathcal D_0^\mu$.
\begin{prop}\label{p3}
Let $\mu>0$ small, $M>0$ large be fixed, and assume $$u \in \mathcal D_0^\mu(a_1,..,a_k,\infty,..,\infty)$$ for some $0 \le k \le n-2.$ There exists $\bar c(k,M) >0$ depending only on $n$, $\alpha$, $\mu$, $M$, $k$ such that
$$\mathcal S'_t(u) \subset \{ |(x_{k+1},..,x_{n-1})| \le \frac 1 M \},$$
for some $t \in [\bar c(k,M),1]$.
\end{prop}
We will prove Proposition \ref{p3} by induction on $k$, and this is the reason why we require the dependence of $\bar c$ on $k$. Clearly at the end the constant $\bar c(M)$, which is the minimum of all $\bar c(k,M)$ above, can be taken independent of $k$.
\
{\it Proposition \ref{p3} implies Proposition \ref{p2}.}
We show that Proposition \ref{p2} holds with the constant $$\bar c:= \bar c(2M) =\min_k \bar c(k,2M)$$ and for some $\delta>0$ small. Otherwise there exist a sequence $\delta_m \to 0$ and corresponding functions $v_m \in \mathcal D_{\delta_m}^\mu$, with $a_{k+1}^m \to \infty$, for which the conclusion of Proposition \ref{p2} does not hold. By Lemma \ref{l4} we can extract a convergent subsequence to a function $$u \in \mathcal D_0^\mu(a_1,..,a_l,\infty,..,\infty) \quad \quad \mbox{for some $0 \le l \le k$}.$$ From Proposition \ref{p3} there is $t \in [\bar c,1]$ such that
$$\mathcal S'_t (u) \subset \{|(x_{l+1},..,x_{n-1})| \le \frac{1}{2M} \} \subset \{|(x_{k+1},..,x_{n-1})| \le \frac{1}{2M}\},$$
and therefore the conclusion is satisfied for $v_m$ for all large $m$, contradiction.
\qed
The key step in proving Proposition \ref{p3} consists in proving the following estimates for the class $\mathcal D_0^\mu (1,1,..,1,\infty,..,\infty)$.
\begin{thm}\label{T3}
If $$u \in \mathcal D_0^\mu(\underbrace{1,...,1}_{k \, \, times},\infty,..,\infty)$$ for some $0\le k \le n-2$ then
$$\mathcal S'_h(u) \subset \{ |(x_{k+1},..,x_{n-1})| \le C h^\beta \},\quad \quad \quad \beta:=\frac{1}{2(n+1-k+\alpha)}>0,$$
with $C$ large depending only on $n$, $\alpha$, $\mu$ and $k$.
\end{thm}
Theorem \ref{T3} holds for $\alpha=0$ as well; its proof, which will be completed in Section \ref{s6} by induction on $k$, applies without change in that case.
\begin{lem}\label{l4.1}
Theorem \ref{T3} implies Proposition \ref{p3}.
\end{lem}
\begin{proof}
We prove Proposition \ref{p3} by induction.
{\it Case $k=0$:} We apply Theorem \ref{T3} for $k=0$ and obtain that
$$\mathcal S'_t(u) \subset \{ |(x_1,..,x_{n-1})| \le C t^\beta \le \frac 1 M \},$$ by choosing $t$ small depending on $M$, $\mu$, $n$, $\alpha$.
{\it Case $k-1 \Rightarrow k$.} We assume Proposition \ref{p3} holds for $k-1$ with $1 \le k \le n-2$ and we prove it holds also for $k$. By compactness (see Remark \ref{r3}) we know that the following property holds
{\it Property $P(k-1)$:}
There exists $C_0:=C_0(M,\mu,k,n,\alpha)$ such that if
$$u \in \mathcal D_0^\mu(a_1,..,a_{n-1}), \quad \quad \mbox{with $a_k \ge C_0$},$$
for some $a_i \in [\mu,\infty) \cup \{\infty \}$, then
$$\mathcal S_t'(u) \subset \{|(x_k,..,x_{n-1})| \le \frac 1 M \},$$
for some $t\in [c_k,1]$ with $c_k$ depending on the parameters above.
\
Thus when $a_{k} \ge C_0$ the conclusion for $k$ i.e.
$$\mathcal S_t'(u) \subset \{|(x_{k+1},..,x_{n-1})| \le \frac 1 M \}$$
is already satisfied from the property $P(k-1)$.
It remains to prove the statement only when $u\in \mathcal D_0^\mu(a_1,..,a_k,\infty,..,\infty)$ and $a_k \le C_0$. In this case we can write
$$u \in \mathcal D_0^{\tilde \mu} (\underbrace{1,...,1}_{k \, \, times},\infty,..,\infty)$$
for some $\tilde \mu$ small depending on $M$, $\mu$, $n$, $k$, $\alpha$.
Now we can apply Theorem \ref{T3} and find that
$$\mathcal S_t'(u)\subset \{|(x_{k+1},..,x_{n-1})| \le C(\tilde \mu) t^\beta \le \frac 1 M\},$$ if we choose $t$ small enough depending on $M$, $\mu$, $n$, $k$, $\alpha$.
\end{proof}
\begin{rem}\label{r2}
In the proof above we showed that if Theorem \ref{T3} holds for all $l\le k$, for some $k$ satisfying $0 \le k \le n-2$, then Proposition \ref{p2} holds for all $l\le k$ as well.
\end{rem}
We conclude this section with some results for normalized solutions of $u \in \mathcal D_0^\mu$ in $S_h(u)$, that are versions of Proposition \ref{p1} for the class $\mathcal D_0^\mu.$ Below we think of $\mu$, $\alpha$, $n$ as being fixed constants, and we refer to other positive constants depending only on $\mu$, $\alpha$ and $n$ as universal constants.
\begin{lem}\label{l6}
Assume $$u \in \mathcal D_0^\mu (\infty,...,\infty).$$
For each $h \in (0,1]$, after a rotation (relabeling) of the $x'$ coordinates, there exist a sliding $A_h$ along $x_n=0$, and a diagonal matrix $D_h$
$$D_h=diag (d_1,d_2,..,d_n), \quad \quad \mbox{with} \quad \left(\prod_{i=1}^n d_i^2 \right ) \, \, d_n^\alpha=h^n,$$
such that the normalized solution
$$u_h(x):=\frac 1 h u(A_h D_h x) \quad \quad \mbox{satisfies} \quad u_h|_{S_1(u_h)} \in \mathcal D_0^{\bar \mu}(\infty,..,\infty),$$
with $\bar \mu>0$ universal.
Moreover if $x_h^*$ denotes the center of mass of $S_h(u)$ then, $$c d_n \le x_h^* \cdot e_n \le C d_n \quad \mbox{and} \quad ch^{3/4} \le d_n \le C h^\delta,$$
with $c$, $C$, $\delta$ universal.
\end{lem}
\begin{proof}
This is a simplified version of Proposition \ref{p1} since the behavior of $\det D^2 u_h$ and the boundary data of $u_h$ is left invariant under composition of the affine transformations above.
As in the proof of Proposition \ref{p1}, we choose $d_1$,...,$d_{n-1}$ as being the lengths of the axes of the ellipsoid which is equivalent to $S_h \cap \{x_n=x_h^*\cdot e_n\}$. After a rotation we may assume that its axes are parallel to the coordinate axes. We choose $d_n$ in terms of $d_1,\ldots,d_{n-1}$ so that it satisfies the above identity for $D_h$. We let $A_h$ so that $x^*$, the center of mass of $S_1(u_h)$, lies on the $x_n$ axis, i.e. $$x^*=\frac {x^*_h \cdot e_n}{d_n} \,\, \, e_n.$$
By construction, the restriction of $u_h$ to $S_1(u_h)$ satisfies
$$u_h \ge 0, \quad u_h(0)=0, \quad \nabla u_h(0)=0, \quad \det D^2 u_h=x_n^\alpha,$$
$$ u_h = \varphi \quad \mbox{on $\partial S_1(u_h)$,} \quad \mbox{with $\varphi=1$ on $\partial S_1 \setminus\{0\}$, and $\varphi(0)=0$,}$$
and when we restrict to the $n-1$ dimensional space passing through $x^*$ we have
$$ B_1' \subset S_1(u_h) \subset C(n) B'_1 \quad \quad \mbox{on the hyperplane $\{x_n=x^*\cdot e_n\}$.}$$
In order to prove that $u_h$ belongs to the class $\mathcal D_0^{\bar \mu} $ above it remains to show that $$c \le x^* \cdot e_n \le C.$$
We obtain this by choosing appropriate lower and upper barriers for $u_h$ in $S_1(u_h)$. Indeed, if $x^* \cdot e_n$ is very small then we obtain $u_h \ge w_3$ where $w_3$ is the barrier
$$w_3(x)=c|x'|^2 + c^{1-n} x_n^2 + t x_n,$$
for some $t>0$ and we contradict $\nabla u_h(0)=0$.
On the other hand if $x^* \cdot e_n$ is very large then $S_1(u_h)$ contains an ellipsoid $E$ (centered at $x^*$) of large volume and we contradict that $u_h \ge 0$, similarly to \eqref{4}. This proves that $$u_h \in \mathcal D_0^{\bar \mu}(\infty,..,\infty) \quad \mbox{ for some $\bar \mu$ universal.}$$
The inequality $d_n \ge c h^{3/4}$ follows as in the proof of Proposition \ref{p1} (see also the proof of Lemma \ref{l4}). In order to prove the upper bound on $d_n$ we remark that $d_n \sim x^*_h \cdot e_n \sim b_u(h)$ where $b_u(h)$ represents the height of $S_h(u)$ i.e. $$b_u(h):=\max_{x \in S_h(u)} x_n.$$ By the compactness of the class $\mathcal D_0^\mu$ we easily obtain that $$\frac{b_v(1/2)}{b_v(1)} \le 1-c \quad \quad \mbox{for any $v\in \mathcal D_0^\mu$.}$$
This implies that $$\frac{b_u(h/2)}{b_u(h)} =\frac{b_{u_h}(1/2)}{b_{u_h}(1)} \le 1-c_1, \quad \quad \mbox{for all $h \le 1$},$$
with $c_1$ universal, hence $b_u(h) \le C h^\delta$ which finishes our proof.
\end{proof}
{\it Remark:}
The proof shows in fact that $\bar \mu$ depends only on $\alpha$ and $n$ since we can choose the constant $c$ in $w_3$ to depend only on $n$.
Also, since $x_n \le b_u(h)$ whenever $u(x)<h$, the inequality $b_u(h) \le C h^\delta$ implies that
\begin{equation}\label{31}
u \ge c \, {x_n}^{1/ \delta}.
\end{equation}
Below we prove a lemma similar to the one above for functions $u \in \mathcal D_0^\mu(1,..,1,\infty,..,\infty)$. Before we state it we introduce some notation.
\
{\bf Notation.} Fix $1\le k \le n-2$. We denote points in $\mathbb R^n$ by
$$x=(y,z,x_n) \quad \quad y:=(x_1,..,x_k) \in \mathbb R^k, \quad z:=(x_{k+1},..,x_{n-1}) \in \mathbb R^{n-1-k}.$$
We say that a linear transformation $T:\mathbb R^n \to \mathbb R^n$ is a {\it sliding along $y$ direction} if
$$Tx=x + \nu_1z_1 +\ldots + \nu_{n-k-1}z_{n-k-1}$$ with
$$\nu_1,..,\nu_{n-k-1} \in span\{e_1,..,e_k\}.$$
We see that $T$ leaves the $(z,x_n)$ components invariant together with the subspace $(y,0,0)$. Clearly if $T$ is a sliding along the $y$ direction then so is $T^{-1}$ and $\det T=1$.
We will use the following linear algebra fact about transformations $T$ as above.
Assume $E_{x'} \subset \mathbb R^{n-1}$ is an ellipsoid in the $x'=(y,z)$ variables (with center of mass at $0$). Then there exists a sliding $T$ along the $y$ variable such that $$T E_{x'}=E_y \times E_z,$$ with $E_y$, $E_z$ two ellipsoids in the $y$ respectively $z$ variables. Here the ellipsoid $E_y$ is obtained by intersecting $E_{x'}$ with the $y$-subspace. Using John's lemma we conclude that if $\Omega'\subset \mathbb R^{n-1}$ is a bounded convex set (with center of mass at the origin), there exists $T$ such that $T \Omega'$ is equivalent to a product of ellipsoids in the $y$ and $z$ variables i.e.
$$E_y \times E_z \subset T \Omega' \subset C(n) E_y \times E_z.$$
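We sketch the first statement in coordinates; the matrices $P$, $Q$, $R$ below are introduced only for this verification. Write $E_{x'}=\{\,y^TPy+2\,y^TQz+z^TRz\le 1\,\}$ with $P>0$. Completing the square,
$$y^TPy+2\,y^TQz+z^TRz=(y+P^{-1}Qz)^TP\,(y+P^{-1}Qz)+z^T(R-Q^TP^{-1}Q)\,z,$$
so the sliding $T(y,z):=(y+P^{-1}Qz,\,z)$ maps $E_{x'}$ onto a set defined by a quadratic form with no cross terms between $y$ and $z$; this gives the decomposition above with $E_y=E_{x'}\cap\{z=0\}=\{y^TPy\le 1\}$ and $E_z=\{z^T(R-Q^TP^{-1}Q)z\le 1\}$.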
\begin{lem}\label{l5}
Assume Theorem \ref{T3} holds for all $l \le k-1$, for some $k$ with $1 \le k \le n-2$, and let
$$u \in \mathcal D_0^{\mu} (\underbrace{1,...,1}_{k \, \, times},\infty,..,\infty).$$
For each $h \in (0,1]$, after a rotation (relabeling) of the $y$ respectively $z$ coordinates, there exist a sliding $T_h$ along the $y$ variable, and a sliding $A_h$ along $x_n=0$, and a diagonal matrix $D_h$
$$D_h=diag (d_1,d_2,..,d_n), \quad \quad \mbox{with} \quad \left(\prod_{i=1}^n d_i^2 \right ) \, \, d_n^\alpha=h^n,$$
such that the normalized solution
$$u_h(x):=\frac 1 h u(T_h A_h D_h x) \quad \quad \mbox{satisfies} \quad u_h|_{S_1(u_h)} \in \mathcal D_0^{\bar \mu} (\underbrace{1,...,1}_{k \, \, times},\infty,..,\infty).$$
Moreover if $x_h^*$ denotes the center of mass of $S_h(u)$ then,
$$c d_n \le x_h^* \cdot e_n \le C d_n, \quad \quad \quad ch^{3/4} \le d_n \le C h^\delta,$$
$$\mbox{and} \quad c \le d_i h^{-\frac 12} \le C \quad \mbox{for $i\le k$}.$$
The constants $\bar \mu$, $c$, $C$, $\delta$ above depend on $\mu$, $n$, $\alpha$ and $k$.
\end{lem}
The lemma states that if $u$ satisfies the hypothesis of Theorem \ref{T3} then we can normalize it in $S_h$ (using also a sliding along the $y$ variable) such that the normalized solution satisfies essentially the same hypotheses as $u$.
\begin{proof}
First we remark that if we use an affine deformation $x \to TAx$ with $T$ sliding along $y$, $A$ sliding along $x_n=0$, and let
$$\tilde u(x):=u(TAx)$$ then the intersection of the $y$-subspace (passing through the center of mass) with $S_h(u)$ is left invariant. Precisely we have
$$\mathcal S_t'(\tilde u) \cap\{z=0 \} \quad = \quad \mathcal S_t'(u) \cap \{z=0\} \quad \quad \quad \mbox{for any $t>0$.}$$
For each $h$ we let $T=T_h$ and $A=A_h$ such that $S_h(\tilde u)$ has the center of mass $\tilde x^*_h$ on the $x_n$ axis and $$S_h(\tilde u) \cap \{x_n=\tilde x ^*_h \cdot e_n\}$$ is equivalent to a product of ellipsoids $E_y \times E_z$.
After a rotation of the $y$ respectively $z$ coordinates we may assume that $E_y$, $E_z$ have axes of lengths $d_1 \le \ldots \le d_k$ respectively $d_{k+1},..,d_{n-1}$ parallel to the coordinate axes. From the boundary data of $u$ we know that $$\{(y,0,0)| \quad |y| \le c h^{1/2}\} \subset S_h(u) $$ which implies $$d_i \ge c h^{1/2} \quad \mbox{for $i=1,..,k$.}$$
We choose $d_n$ as before in terms of $d_1,\ldots,d_{n-1}$ so that it satisfies the above identity for $D_h$. We let $$u_h=\frac 1h \tilde u (D_h x),$$
and obtain
$$u_h \ge 0, \quad u_h(0)=0, \quad \nabla u_h(0)=0, \quad \det D^2 u_h=x_n^\alpha,$$
$$ u_h = \varphi \quad \mbox{on $\partial S_1(u_h)$,} \quad \mbox{with $\varphi=1$ on $\partial S_1 \setminus G_y$, and $\varphi=\psi(y)$ on $G_y$,}$$
where $$G_y:=\{(y,0,0)| \quad \psi(y) \le 1\}$$
and $\psi$ is a nonnegative function in $y$ satisfying
$$\mu^{-1} \mathcal N_y \ge D^2_y \psi \ge \mu \mathcal N_y, \quad \quad \mathcal N_y=diag(a_1^2,..,a_k^2), \quad a_i:=d_i h^{-\frac 12} \ge c.$$
Moreover, by construction, when we restrict to the $n-1$ dimensional hyperplane passing through $x^*$, the center of mass of $S_1(u_h)$, we have
$$ B_1' \subset S_1(u_h) \subset C(n) B'_1 \quad \quad \mbox{on the hyperplane $\{x_n=x^*\cdot e_n\}$.}$$
The properties above imply that $$u_h \in \mathcal D_0^{\tilde \mu}(a_1,..,a_k, \infty,..,\infty) \quad \quad \mbox{for some $\tilde \mu$ universal.} $$
Indeed, for this it suffices to prove that $$c \le x^* \cdot e_n \le C,$$ and this follows exactly as in the proof of Lemma \ref{l6}.
Since $u_h$ belongs to the class above, the bounds on $d_n$ follow in the same way as in Lemma \ref{l6}.
It remains to show that $a_i$, $1\le i \le k$ remain bounded above by a universal constant for all $h$.
From our hypothesis and Remark \ref{r2} we know that Proposition \ref{p3} holds for all $l$ with $l \le k-1$. Using compactness as in Lemma \ref{l4.1} this implies that the property $P(l)$ holds for all $l \le k-1$. Precisely,
there exists $C_0:=C_0(M,\mu,k,n,\alpha)$ such that if
$$v \in \mathcal D_0^\mu(a_1,..,a_{n-1}), \quad \quad \mbox{with $a_l \ge C_0$, for some $l \le k$},$$
and some $a_i \in [\mu,\infty) \cup \{\infty \}$, then
$$\mathcal S_t'(v) \subset \{|(x_l,..,x_{n-1})| \le \frac 1 M \},$$
for some $t\in [c_k,1]$ with $c_k$ depending on the parameters above.
Now we argue as in Lemma \ref{l3.1}. For $i \le k$ denote by $d_i(h)$ and $a_i(h)$ the quantities $d_i$ and $a_i=d_i h^{-1/2}$ constructed above that correspond to the section $S_h(u)$. Notice that $a_i(h)$ represent the lengths of the axes of a $k$-dimensional ellipsoid (in the $y$ variable) which is equivalent to $\mathcal S_h'(u) \cap \{z=0 \}$. We show that for any $h$ we have
$$
\max a_i(h) \ge \bar C \quad \quad \Rightarrow \quad \quad \max a_i(th) \le \frac 12 \max a_i(h),$$
for some $t \in [\bar c, 1]$, and with $\bar C$ universal. Since $\max a_i(1)$ is bounded above by a universal constant we easily obtain that $\max a_i(h) $ remains bounded above.
Let $C_0$ and $c_k$ denote the constants in the property above for $\tilde \mu$ and $M=4 \sqrt n$, hence $C_0$, $c_k$ are universal. Assume that $a_l(h)$ is the first $a_i(h)$ greater than $C_0$ i.e.
$$a_1 \le \cdots \le a_{l-1} \le C_0 \le a_l \le \cdots \le a_{k}.$$
Since $u_h \in \mathcal D_0^{\tilde \mu} (a_1,..,a_k, \infty,..,\infty)$, we have (see \eqref{10})
$$ \mathcal S_t'(u_h) \subset \{|(x_1,..,x_{l-1})| \le C_1\} \times \{|(x_l,..,x_{n-1})| \le \frac 1 M\},$$
for some $C_1(c_k)$ universal.
Since $$\{ y| (y,0) \in \mathcal S'_{th}(u) \}\, = \,diag(a_1,..,a_k) \, \, \, \{y| (y,0) \in \mathcal S_t'(u_h)\},$$ we obtain
$$\mathcal S'_{th}(u) \cap \{z=0 \} \subset \prod_{i=1}^{l-1}\{|x_i|\le C_1 a_i\} \times \prod_{i=l}^{k}\{|x_i|\le \frac{a_i}{M}\} \times \{z=0\}.$$
For $i \le l-1$ we have (recall that $\max a_i(h) \ge \bar C$, where we set $\bar C:=M \, C_1 C_0$)
$$C_1 a_i \le C_1 C_0 = \frac {\bar C} { M} \le \frac{\max a_i(h)} {M},$$
and we find
$$\mathcal S'_{th}(u) \cap \{z=0\} \subset \{ |y| \le \frac 14 \max a_i(h) \} \times \{z=0\},$$
which gives $$\max a_i(th) \le \frac 12 \max a_i(h).$$
\end{proof}
\section{Pogorelov type estimates}\label{s5}
In this section we obtain two estimates of Pogorelov type that will be used in Section \ref{s6} for the proof of Theorem \ref{T3}. They appeared also in \cite{S2} where the obstacle problem for Monge-Ampere equation was investigated.
\begin{thm}\label{Po}
Assume $u\in C^4(\Omega) \cap C(\overline \Omega)$ is convex, $u=0$ on $\partial \Omega$,
$$\det D^2 u=f(x_2,\ldots,x_n) \quad \mbox{in $\Omega$}, \quad \quad f>0.$$
Then
$$u_{11}|u| \le C(n,\max_{\Omega}|u_1|).$$
\end{thm}
{\it Remark:} The constant $C(n,\max_{\Omega}|u_1|)$ does not depend on $f$ or $\Omega$.
\begin{proof} We may assume that $u \in C^4(\overline \Omega)$ since we apply the estimate to $u+\varepsilon$ and then let $\varepsilon \to 0$. We write
$$ \log \det D^2 u= \log f$$
and differentiate with respect to $x_1$
\begin{equation}\label{48}
u^{ij}u_{1ij}=0,
\end{equation}
where $[u^{ij}]=[D^2u]^{-1}$ and we use the index summation convention.
Differentiating once more, we have
\begin{equation}\label{49}
u^{ij}u_{11ij}-u^{ik}u^{jl}u_{1ij}u_{1kl}=0.
\end{equation}
Suppose the maximum of
\begin{equation}\label{410}
\log u_{11} + \log|u| + \frac{1}{2}|u_1|^2=M
\end{equation}
occurs at the origin. One can also assume that $D^2u(0)$ is diagonal since
the domain transformation (sliding along the $x_1$ variable)
\begin{equation}\label{412.5}
\tilde{u}(x_1,..,x_n):=u(x_1-\alpha_2 x_2-..-\alpha_n x_n, x_2, ..,x_n), \quad
\alpha_i=\frac{u_{1i}(0)}{u_{11}(0)}
\end{equation}
does not affect the equation or the maximum
in (\ref{410}). Thus, at $0$
\begin{equation}\label{411}
\frac{u_{11i}}{u_{11}}+\frac{u_i}{u}+u_1u_{1i}=0
\end{equation}
\begin{equation}\label{412}
\frac{u_{11ii}}{u_{11}}-\frac{u_{11i}^2}{u_{11}^2}+\frac{u_{ii}}{u}
-\frac{u_i^2}{u^2}+u_{1i}^2+u_1u_{1ii}\le 0
\end{equation}
We multiply (\ref{412}) by $u_{ii}^{-1}$ and add
$$\frac{u_{11ii}}{u_{11}u_{ii}}-\frac{u_{11i}^2}{u_{ii}u_{11}^2}+\frac{n}{u}
-\frac{u_i^2}{u_{ii}u^2}+\frac{u_1u_{1ii}}{u_{ii}}+u_{11}\le 0.$$
From (\ref{411}) we obtain
$$ \frac{u_i}{u}=-\frac{u_{11i}}{u_{11}}, \quad i \ne 1,$$
which together with (\ref{48}), (\ref{49}) gives
$$\sum_{i,j\ne
1}\frac{u_{1ij}^2}{u_{11}u_{ii}u_{jj}}+\frac{n}{u}
-\frac{u_1^2}{u_{11}u^2}+u_{11}\le 0,$$
thus,
$$e^{2M}-ne^{\frac{1}{2}u_1^2}e^M \le u_1^2e^{u_1^2}$$
and the result follows.
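For completeness, we sketch the elementary algebra behind this last step (a routine computation with the notation above; everything is evaluated at the origin, the maximum point in \eqref{410}). Dropping the nonnegative sum in the previous display and multiplying by $u^2u_{11}>0$ (recall $u<0$ inside $\Omega$) gives
$$(u_{11}|u|)^2 \le u_1^2 + n \, u_{11}|u|.$$
By \eqref{410} we have $u_{11}|u|=e^{M-\frac 12 u_1^2}$ at the origin; substituting and multiplying by $e^{u_1^2}$ yields the displayed inequality. Since $u_1^2 \le \max_\Omega u_1^2$, this quadratic inequality in $e^M$ gives $e^M \le C(n,\max_\Omega|u_1|)$, and then at an arbitrary point $u_{11}|u| \le u_{11}|u| \, e^{\frac 12 u_1^2} \le e^M \le C$.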
\end{proof}
The second estimate deals with curvature bounds for the level sets of solutions to certain Monge-Ampere equations.
We assume the convex function $u\in C^4(\Omega) \cap C(\overline \Omega)$ is increasing in the $e_n$ direction and
\begin{equation}\label{4.13}
u=\sigma x_n \quad \mbox{on $\partial \Omega$,}
\end{equation}
for some $\sigma>0$. We denote by
$v(x_1,..,x_{n-1},s)$
the graph in the $-e_n$ direction of the $s$ level set, i.e
$$u(x_1,..,x_{n-1},-v(x_1,..,x_{n-1},s))=s.$$
Clearly $v$ is convex.
\begin{thm}\label{abo}
Assume $u$ satisfies \eqref{4.13} and
$$ u_n^\alpha \, \, \det D^2 u =f(x_2,..,x_{n-1},u) \quad \mbox{in $\Omega$,}$$
for some $\alpha \ge 0$.
Then
$$v_{11} \, \, |u-\sigma x_n| \le C \left ( n,\alpha,\sigma,\max_\Omega u_n, \max_\Omega |v_1| \right).$$
\end{thm}
{\it Remark:} The constant $C$ does not depend on $f$ or $\Omega$. We also have the equality $$|u-\sigma x_n|= |\sigma v+ s|.$$
First we write the equation for $v$.
The normal map to the graph of $u$ at
$$X=(x_1,..,x_n,x_{n+1})=(x_1,..,x_n,u(x))$$ is given by
$$\nu=(\nu_1,..,\nu_{n+1})=(1+|\nabla
u|^2)^{-\frac{1}{2}}(-u_1,..,-u_n,1).$$
The Gauss curvature of the graph of $u$ at $X$ equals
\begin{align*}
K(X) &=\det D_i \left ( u_j (1+|\nabla u|^2)^{-\frac{1}{2}} \right)\\
&=(1+|\nabla u|^2)^{-\frac{n+2}{2}} \det D^2 u \\
&= (\nu_{n+1})^{n+2} \det D^2 u.
\end{align*}
The graph of $u$ can be viewed as the graph of $v$ in the $-e_n$ direction,
thus
$$K(X)= (-\nu_n)^{n+2} \det D^2 v,$$
which gives
$$ \det D^2 u= \det D^2 v \left( \frac{-\nu_n}{\nu_{n+1}} \right)^{n+2}.$$
Since
$$u_n=\frac{-\nu_n}{\nu_{n+1}}=-\frac 1 {v_s}=\frac{1}{|v_s|},$$
we find
$$u_n^\alpha \, \det D^2 u= |v_s|^{-(n+2+\alpha)} \, \det D^2 v.$$
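As a quick consistency check (not needed for the argument), the relation $u_n=1/|v_s|$ also follows by implicit differentiation of the identity defining $v$:
$$\partial_s \Big[ u\big(x_1,..,x_{n-1},-v(x_1,..,x_{n-1},s)\big)\Big]=-u_n \, v_s=1 \quad \Longrightarrow \quad u_n=-\frac{1}{v_s}=\frac{1}{|v_s|},$$
where $v_s<0$ since $u$ is increasing in the $e_n$ direction.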
By abuse of notation we relabel the $s=x_{n+1}$ variable (i.e.\ the last coordinate of $v$) by $x_n$; then $v$ satisfies
$$\det D^2 v=f(x_2,x_3,..,x_{n-1},x_n) |v_n|^{n+2+\alpha}, \quad \quad \quad v_n<0,$$
and it is defined in
$$\Omega_v:=\left \{v < -\frac {x_n} {\sigma} \right \}, \quad \quad v=-\frac {x_n} {\sigma} \quad \mbox{on $\partial \Omega_v$.}$$
We set
$$w:=v + \frac {x_n} {\sigma},$$
thus $w=0$ on $\partial \Omega_v$ and in $\Omega_v$
$$ \det D^2 w= f(x_2,..,x_n) \left (\frac 1 \sigma - w_n \right )^{n+2+\alpha},$$
and also
$$\left (\frac 1 \sigma -w_n \right )^{-1}=u_n>0, \quad \quad w_1=v_1, \quad w_{11}=v_{11}.$$
In order to prove Theorem \ref{abo} it suffices to prove the next
estimate.
\begin{lem}\label{above}
Suppose that in the bounded set $\{w<0 \}$
$$ \det D^2 w= f(x_2,..,x_n)(w_\xi + \beta)^{n+2+\alpha}, \quad w_\xi + \beta>0,$$
where $\xi$ is some vector. Then
$$w_{11}|w| \le C \left (n,\alpha,\max_{\{w<0 \}} |w_1|, \max_{\{w<0 \}}
\frac{\beta}{w_\xi +
\beta} \right ).$$
\end{lem}
\begin{proof}
Assume the maximum of
\begin{equation}\label{415}
\log w_{11} + \log |w| + \frac{\eta}{2}w_1^2
\end{equation}
occurs at the origin, where $\eta>0$ is a small constant depending only on
$\max|w_1|$, to be made precise later.
Again we can assume that $D^2w(0)$ is diagonal. Indeed, using
a sliding in the $x_1$ direction as in (\ref{412.5}) we find that the transformed function $\tilde w$ satisfies
$$\det D^2 \tilde{w}=f(x_2,..,x_n) (\tilde{w}_{\tilde{\xi}} + \beta)^{n+2+\alpha}, \quad
\tilde{w}_{\tilde{\xi}}(\tilde x)=w_\xi(x),$$
thus, the hypothesis and the conclusion remain invariant under this
transformation.
We write
$$\log \det D^2 w= \log f (x_2,..,x_n)+ \gamma \log (w_\xi +\beta),$$
with $$\gamma:=n+2+\alpha.$$
Taking derivatives in the $e_1$ direction we find
\begin{equation}\label{416}
\frac{w_{1ii}}{w_{ii}}=\gamma \, \frac{w_{1\xi}}{w_\xi+\beta}
\end{equation}
\begin{equation}\label{417}
\frac{w_{11ii}}{w_{ii}}-\frac{w_{ij1}^2}{w_{ii}w_{jj}}=
\gamma \, \frac{w_{11\xi}}{w_\xi+\beta}- \gamma\, \frac{w_{1\xi}^2}{(w_\xi+\beta)^2}
\end{equation}
On the other hand, from (\ref{415}) we obtain at $0$
\begin{equation}\label{418}
\frac{w_{11i}}{w_{11}}+\frac{w_i}{w}+\eta w_1w_{1i}=0,
\end{equation}
\begin{equation}\label{419}
\frac{w_{11ii}}{w_{11}}-\frac{w_{11i}^2}{w_{11}^2}+\frac{w_{ii}}{w}
-\frac{w_i^2}{w^2}+\eta w_1w_{1ii} +\eta w_{1i}^2 \le 0.
\end{equation}
We multiply (\ref{419}) by $w_{ii}^{-1}$ and add, then use (\ref{417}),
(\ref{416})
\begin{equation}\label{420}
\frac{1}{w_{11}}\left( \frac{w_{ij1}^2}{w_{ii}w_{jj}}
+ \gamma \frac{w_{11\xi}}{w_\xi+\beta}-\gamma \frac{w_{1\xi}^2}{(w_\xi+\beta)^2}
\right)-
\end{equation}
$$-\frac{w_{11i}^2}{w_{ii}w_{11}^2}+\frac{n}{w}
-\frac{w_i^2}{w_{ii}w^2}+\gamma \eta w_1\frac{w_{1\xi}}{w_\xi+\beta} +\eta
w_{11} \le 0.$$
Since $\gamma \ge n$ we have
\begin{equation}\label{421}
\sum_{i,j \ne
1}\frac{w_{ij1}^2}{w_{ii}w_{jj}}- \gamma \frac{w_{1\xi}^2}{(w_\xi+\beta)^2} \ge
\end{equation}
$$-\frac{w_{111}^2}{w_{11}^2}+\frac{1}{n}\left( \sum_1^n
\frac{w_{1ii}}{w_{ii}}
\right)^2-\gamma \frac{w_{1\xi}^2}{(w_\xi+\beta)^2}\ge$$
$$ -\frac{w_{111}^2}{w_{11}^2}+
\frac{\gamma^2}{n}\frac{w_{1\xi}^2}{(w_\xi+\beta)^2}
-\gamma \frac{w_{1\xi}^2}{(w_\xi+\beta)^2} \ge$$
$$\ge -\frac{w_{111}^2}{w_{11}^2}.$$
From (\ref{418})
$$\frac{w_i}{w}=-\frac{w_{11i}}{w_{11}}, \quad \mbox{for $i\ne 1$}$$
which together with (\ref{421}) gives us in (\ref{420})
\begin{equation}\label{422}
\frac{\gamma}{w_\xi+\beta}\left(\frac{w_{11\xi}}{w_{11}}+
\eta w_1w_{1\xi} \right) - \frac{w_{111}^2}{w_{11}^3}+
\frac{n}{w}-\frac{w_1^2}{w_{11}w^2}+\eta w_{11} \le 0.\end{equation}
From (\ref{418})
\begin{equation}\label{423}
\frac{w_{11\xi}}{w_{11}}+ \eta w_1w_{1\xi}=-\frac{w_\xi}{w}
\end{equation}
and also
$$\frac{w_{111}}{w_{11}}=-\frac{w_1}{w} - \eta
w_1w_{11}$$
thus,
\begin{equation}\label{424}
\frac{w_{111}^2}{w_{11}^2} \le 2 \frac{w_1^2}{w^2} + 2 \eta^2
w_1^2w_{11}^2.
\end{equation}
We use \eqref{423}, \eqref{424} in \eqref{422} and obtain
$$-\frac{\gamma w_\xi}{(w_\xi +\beta)w}+ \frac{n}{w} -
3\frac{w_1^2}{w_{11}w^2}+\eta(1- 2\eta w_1^2) w_{11} \le 0.$$
Multiplying by $w_{11}w^2$ we have
$$\eta(1- 2 \eta w_1^2) (ww_{11})^2+\left( n-\gamma + \gamma \frac{\beta}{w_\xi +\beta}
\right) ww_{11} \le 3 w_1^2$$
and the result follows if $\eta$ is chosen such that $\eta \, (\max w_1^2) <1/4$.
\end{proof}
\section{Proof of Theorem \ref{T3}}\label{s6}
We prove Theorem \ref{T3} by induction on $k$. The case $k=0$ and the induction step $k-1\Rightarrow k$ are quite similar.
We start with $k=0$.
\begin{prop}\label{p4}
Theorem \ref{T3} holds for $k=0$. Precisely if $$u \in \mathcal D_0^\mu(\infty,..,\infty),$$
then
$$\mathcal S_h'(u) \subset \{|x'| \le C h^\beta \}, \quad \quad \beta:=\frac{1}{2(n+1+\alpha)},$$
for some $C$ depending on $\mu$, $n$ and $\alpha$.
\end{prop}
\begin{rem} \label{r4}In view of Lemma \ref{l6} we may assume, after relabeling $\mu$, that all renormalized solutions $u_h$ given in Lemma \ref{l6} are in the same class $\mathcal D_0^\mu(\infty,..,\infty).$
\end{rem}
We prove Proposition \ref{p4} by studying the behavior of the tangent cone of $u$ at the origin.
The {\it tangent cone} $\Gamma_u$ of $u$ at the origin is obtained by taking the supremum of all supporting planes of $u$ at the origin. In other words the upper graph of $\Gamma_u$ is obtained by the intersection of all half-spaces that pass through the origin and contain the upper graph of $u$, therefore $\Gamma_u$ is lower semicontinuous.
We define the $n-1$ dimensional function $\gamma_u(x')$ as being the restriction of $\Gamma_u$ to $x_n=1$ i.e
$$\gamma_u(x'):= \Gamma_u(x',1).$$
By construction the upper graph of $\gamma_u$ is a closed set and
$$u(x) \ge \Gamma_u(x)=x_n \gamma_u(\frac{x'}{x_n}).$$
Since $\nabla u(0)=0$ we have $\gamma_u \ge 0$ and $\inf \gamma_u=0$. In the next lemma we obtain some useful properties of $\gamma_u$.
\begin{lem}\label{l7}
a) $$\gamma_u(x') \ge c_0|x'|-C_0.$$
b) If $$\gamma_u \ge C_0 p' \cdot (x'-x_0'),$$for some unit vector $p'\in \mathbb R^{n-1}$, $|p'|=1$ and some $x_0'$ then
$$\gamma_u \ge p' \cdot (x'-x_0') + c_0.$$
The constants $c_0$, $C_0$ above are universal constants.
\end{lem}
\begin{proof}
a) We compare $u$ with
$$w:=c x' \cdot p' + c|x'|^2 + C (x_n^2 -\mu^{-1} x_n),$$
with $p'$ a unit vector.
We choose $c$ small such that $w \le 1$ in $\Omega \subset B^+_{1/\mu}$ and $C$ large such that $\det D^2 w \ge \det D^2u$. We find $u \ge w$ hence
$$\gamma_u \ge \gamma_w=c_0 x' \cdot p'-C_0 x_n,$$
which proves part a).
b) Assume that $p'=e_1$ and let $x_0' \cdot e_1=q$. Then
$$u \ge x_n\gamma_u(\frac{x'}{x_n}) \ge C_0(x_1-q x_n)^+.$$
In the set $O=\Omega \cap \{x_1-q x_n >-1\}$ we compare $u$ with
$$w:= \frac {C_0}{2} (x_1-qx_n) + \frac{C_0}{8} (x_1-q x_n)^2 + \delta (x_2^2+\ldots + x_n^2) + \delta x_n,$$
where $\delta$ is small, fixed, depending on $\mu$.
Notice that if $C_0$ is sufficiently large we have $\det D^2 w \ge \det D^2 u$ and $w \le u$ on $\partial O$.
Indeed, on $\partial O \setminus \partial \Omega$ we have $$x_1-q x_n=-1 \quad \Rightarrow \quad w \le 0 \le u,$$ and in the set $\partial O \cap \partial \Omega$,
$$1 \ge u \ge C_0(x_1 - q x_n) \quad \Rightarrow \quad w \le 1.$$
From $w(0)=0$ and the inequalities above we obtain $w \le u$ on $\partial O$.
In conclusion $$\gamma_u \ge \gamma_w \ge \frac{C_0}{2}(x_1-q) + \delta.$$
\end{proof}
\begin{rem}\label{r5}
From the proof we see that we only need the weaker assumption $$u \ge C_0(x_1-q x_n) \quad \mbox{ on $\partial \Omega$,}$$
in order to obtain the conclusion of part b).
\end{rem}
From part a) we see that the point $x_o' \in \mathbb R^{n-1}$ where $\gamma_u$ achieves its infimum belongs to $B_C'$. As a consequence of Lemma \ref{l7} we obtain the following corollary about the section $S_1(\gamma_u) \subset \mathbb R^{n-1}$.
\begin{cor}\label{c1} There exist universal constants $c_*$ small, $C_*$ large, such that
$$B_{c_*}' \subset S_1(\gamma_u) - x_o' \subset B_{C_*}'$$
$$S_{c_*}(\gamma_u)-x_o' \subset (1-c_*) (S_1(\gamma_u)-x_o').$$
\end{cor}
\begin{proof}
We only need to show that $\gamma_u$ cannot be too small near $\partial S_1(\gamma_u)$.
Assume by contradiction that $\gamma_u(y_0') \ll 1$ for some $y_0'$ near $\partial S_1(\gamma_u)$. Then we can find a plane of slope $C_0$, i.e. $C_0 p'\cdot (x'-x_0') \le \gamma_u(x')$ with $x_0' \in \partial S_1(\gamma_u)$ sufficiently close to $y_0'$. We apply part b) of Lemma \ref{l7} and obtain that $\gamma_u$ is greater than a universal constant in a neighborhood of $x_0'$ and we reach a contradiction.
\end{proof}
Next we apply the corollary above for the rescalings $u_h$ of $u$ defined in Lemma \ref{l6} (see Remark \ref{r4}). For any $h \in (0,1]$, we have
$$u_h(x)=\frac 1 h \tilde u(D_h x), \quad \quad \mbox{with} \quad \tilde u(x)=u(A_hx).$$
Notice that $\gamma_u$ is just a translation of $\gamma_{\tilde u}$.
Since
$$\Gamma_{u_h}(x)=\frac 1 h \Gamma_{\tilde u}(D_h x)$$
we divide by $x_n$ and obtain
$$ \gamma_{u_h} \left (\frac{x'}{x_n}\right )=\frac {d_n}{ h } \, \, \gamma_{\tilde u}\left(\frac{ D_h' x'}{d_n x_n}\right )$$
or
$$\gamma_{u_h}(x')=\frac{d_n}{h} \, \, \gamma_{\tilde u}\, (d_n^{-1} \, D_h'x') \quad \quad \mbox{with} \quad D_h'=diag(d_1,..,d_{n-1}).$$
This implies that $$d_n^{-1}D_h'\, \, S_s(\gamma_{u_h})= S_{s \, h/d_n}(\gamma_{\tilde u}).$$
We apply Corollary \ref{c1} for the sections $S_1$ and $S_{c_*}$ of $\gamma_{u_h}$ and use also that $\gamma_u$ is a translation of $\gamma_{\tilde u}$. We obtain the following inclusions for the sections $S_t(\gamma_u)$, $t:=h/d_n$
\begin{equation}\label{51}
d_n^{-1} D_h' \, B_{c_*}' \quad \subset \quad S_t(\gamma_u) -x_o' \quad \subset \quad d_n^{-1} D_h' \, B_{C_*}', \quad \quad \quad t=\frac{h}{d_n},
\end{equation}
and
\begin{equation}\label{52}
S_{c_* t}(\gamma_u)-x_o' \quad \subset \quad (1-c_*)\left(S_t(\gamma_u) -x_o' \right).
\end{equation}
From Lemma \ref{l6} we know that as $h$ ranges from 1 to 0 the parameter $t=h/d_n$ covers an interval $[0,c]$. The inclusion \eqref{51} says that the sections $S_t(\gamma_u)$ are {\it balanced} around the minimum point $x_o'$, i.e. there exists an ellipsoid $E$ such that
$$c_0 E \subset S_t(\gamma_u)-x_o' \subset C_0 E, \quad \quad \quad E=d_n^{-1}D_h' B_1'.$$
A dilation of the ellipsoid $E$ above is equivalent also to the normalized section $\mathcal S_h'(u)$. Indeed, from the definition of $D_h$
\begin{equation}\label{53}
h^{-1/2} D_h' \, B_1' \, \, \subset \, \, \mathcal S_h'(u) \, \, \subset \, \, C(n) h^{-1/2} D_h' \, B_1'.
\end{equation}
The inclusions \eqref{51}, \eqref{53} show the relation between the sections $S_t(\gamma_u)$ and $\mathcal S_h'(u)$.
Property \eqref{52} implies that
\begin{equation}\label{54}
\gamma_u(x') \ge c |x'-x_o'|^M \quad \quad \mbox{in $B_c'(x_o')$,}
\end{equation}
for some $M$ large universal. From the fact that the sections of $\gamma_u$ are balanced one can also prove that $\gamma_u \in C^{1,\beta}$. Thus each small section $S_t(\gamma_u)$ contains a small ball of radius $t^{1/(1+\beta)}$ and is contained in a ball of radius $t^{1/M}$. However, these bounds are not sufficient for the proof of Proposition \ref{p4}.
We remark that so far in the proof we only used that the Monge-Ampere measure of $u$ is bounded above and below by multiples of $x_n^\alpha$. Below we use the estimates of Section \ref{s5} and the fact that the Monge-Ampere measure is precisely $x_n^\alpha$, and conclude that $\gamma_u$ has quadratic growth near $x_o'$.
Precisely we show the following.
\begin{lem}\label{l8}
There exist universal constants $c_1$, $C_1$ such that
$$c_1|x'-x_o'|^2 \le \gamma_u(x') \le C_1 |x'-x_o'|^2 \quad \quad \mbox{in $B_{c_1}'(x_o')$}.$$
\end{lem}
This estimate for $\gamma_u$ easily implies Proposition \ref{p4}. Indeed, the lemma gives $$t^{1/2}B_c' \subset S_t(\gamma_u) -x_o' \subset t^{1/2} B_C',$$
which together with \eqref{51} implies that for all $i<n$,
$$ c d^{1/2}_n \le d_i h^{-1/2} \le C d^{1/2}_n.$$
Then
$$\left(\prod_{i=1}^{n-1} d_i^2 \right) \, d_n^{2+\alpha}=h^n \quad \Rightarrow \quad ch \le {d_n}^{n+1+\alpha} \le Ch $$
and Proposition \ref{p4} follows from \eqref{53}.
\
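For the reader's convenience we sketch the elementary computation behind the last two displays; all comparabilities below are up to constants depending only on $\mu$, $n$, $\alpha$. From $d_i h^{-1/2} \sim d_n^{1/2}$ for $i<n$ we get $\prod_{i=1}^{n-1} d_i^2 \sim h^{n-1} d_n^{n-1}$, hence
$$h^{n-1} d_n^{n-1} \, d_n^{2+\alpha} \sim h^n \quad \Longrightarrow \quad d_n \sim h^{\frac{1}{n+1+\alpha}},$$
and then, by \eqref{53}, $\mathcal S_h'(u)$ is contained in a ball of radius comparable to $\max_{i<n} d_i h^{-1/2} \sim d_n^{1/2} \sim h^{\frac{1}{2(n+1+\alpha)}}=h^\beta$, which is the conclusion of Proposition \ref{p4}.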
Below we prove Lemma \ref{l8}. After performing a sliding along $x_n=0$ of bounded norm, we may assume that $x_o'=0$.
\
{\it Step 1:} In step 1 we use Theorem \ref{Po} in the set $\{ u < c x_n\}$ to obtain
\begin{equation}\label{55}
D^2\gamma_u \le C I \quad \quad \mbox{in} \quad \{\gamma_u <c \}.
\end{equation}
In order to apply Theorem \ref{Po} for $u$ we first need to bound $|\nabla u|$ in the set $\{u<c x_n\}$ for some $c$ small. To this aim we observe that the projection of $\Omega=S_1(u)$ along $e_n$ into $\mathbb R^{n-1}$ contains the ball $B_{1/C_0}'$, with $C_0$ as in Lemma \ref{l7}. Otherwise we can find a direction, say $x_1$ such that
$$\Omega \subset \{ x_1 \le 1/C_0 \},$$
hence $$u \ge C_0 x_1 \quad \mbox{on $\partial \Omega$},$$
and by Lemma \ref{l7} (see Remark \ref{r5}) we find $$\gamma_u \ge x_1 + c_0,$$ and we contradict that $\gamma_u(0)=0$ (since $x_o'=0$.)
Since $ \Omega \subset B_C^+$ contains a ball $B_\mu(x^*)$, and its projection contains $B_{1/C_0}'$ and $0 \in \partial \Omega$, we see from its convexity that it must contain also $B_r(\varrho e_n)$ for some small fixed universal constants $\varrho$, $r$. Now we use $u \ge 0$, $u(0)=0$ and conclude that $|\nabla u| \le C$ in the convex set generated by $0$ and $B_{r/2}(\varrho e_n)$. On the other hand by \eqref{31} and \eqref{54} we see that this convex set contains the set $\{ u < cx_n \}$ if $c$ is sufficiently small.
Now let $w:=u-cx_n$ and notice that the rescalings
$$w_\lambda(x):=\frac{1}{\lambda} w(\lambda x), \quad \quad \det D^2 w_\lambda = c(\lambda) x_n^\alpha,$$
have the same gradient bound in $\{w_\lambda <0\}$ and they converge uniformly on $x_n=1$ to $|\gamma_u - c|$. By Theorem \ref{Po} we find
$$|w_\lambda| \, \, \partial_{11}w_\lambda \, \le C,$$
hence
$$|\gamma_u-c| \, \, \partial_{11} \gamma_u \, \le C,$$
which proves step 1.
\
In the course of the proof we showed also that the segment $[0,\varrho x_o] \subset S_1(u)$ with $x_o:=(x_o',1)=e_n$. We apply this for the rescaling $u_h$ and obtain
$$[0,\varrho d_n \, x_o] \subset S_h(u).$$ Using the bounds on $d_n$ from Lemma \ref{l6} we find (see also \eqref{31})
\begin{equation}\label{57}
c t^{1/\delta} \le u(t x_o',t) \le C t^{4/3}.
\end{equation}
We can extend this inequality at points $y'$ near $x_o'$,
\begin{equation}\label{56}
c t^{1/\delta} \le u(t y',t) - t \gamma_u(y') \le C t^{4/3} \quad \quad \mbox{for all $y' \in B_c'$.}
\end{equation}
Indeed, \eqref{56} follows by applying \eqref{57} to the function $u-l$, where $l=p \cdot x$ is the linear function which restricted to $x_n=1$ becomes tangent by below to $\gamma_u$ at $y'$. From Step 1 we see that when $|y'|$ is small, the slope of $l$ is also small and $u-l$ (renormalized at its $1/2$ section) belongs to a class $\mathcal D_0^c (\infty,...,\infty)$. Therefore we can apply \eqref{57} for $u-l$ and obtain the desired inequality \eqref{56}.
\
{\it Step 2:} In step 2 we apply Theorem \ref{abo} for the Legendre transform of $u$ and obtain
$$D^2 \gamma_u \ge c I \quad \quad \mbox{in $B_c'$.}$$
Let $u^*$ denote the Legendre transform of $u$,
$$u^*(\xi):= \sup_{x \in \overline \Omega} \left( x \cdot \xi - u(x)\right).$$
Since $u$ is lower semicontinuous the supremum is always achieved at some point $x \in \overline \Omega$. We are interested in the behavior of $u^*(\xi)$ for $|\xi| \le c$ small. From the boundary values of $u$ we see that the maximum is realized either at $0$ or at some $x \in \Omega$, and clearly $u^* \ge 0$. We define $K$ as the convex set
$$K:=\{ u^*=0\}.$$
If $\xi \in K$ then the maximum is achieved at $0$, and this happens if and only if $$\xi' \cdot x' + \xi_n \le \gamma_u(x') \quad \mbox{for all $x'$} \quad \quad \Leftrightarrow \quad \xi_n \le - \gamma_u^* (\xi'),$$
where $\gamma_u^*$ represents the Legendre transform of $\gamma_u$.
In conclusion
$$K=\{\xi_n \le - \gamma_u^* (\xi') \}.$$
From Step 1 and \eqref{54} we know that
$$c|x'|^M \le \gamma_u(x') \le C |x'|^2 \quad \mbox{in $B_c'$}$$
hence
\begin{equation}\label{58}
\{ \xi_n \le -C |\xi'|^\frac{M}{M-1} \} \subset K \subset \{ \xi_n \le - c |\xi'|^2 \} \quad \mbox{in $B_c$}.
\end{equation}
Since $u$ is strictly convex in $\Omega$ we obtain that
\begin{equation}\label{59}
u^* \in C^1(B_c),
\end{equation}
and in the set $\{u^*>0\}$ we have
$$\det D^2 u^*(\xi) =(\det D^2 u (x))^{-1}=x_n^{-\alpha}=(u^*_n(\xi))^{-\alpha},$$
thus, $u^*$ solves the equation
\begin{equation}\label{59.5}
(u^*_n)^\alpha \det D^2 u^* =1 \quad \quad \mbox{in $B_c \setminus K$.}
\end{equation}
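Here we used the standard Legendre duality relations, which we recall for convenience: if $\xi=\nabla u(x)$ at a point $x \in \Omega$ where $u$ is strictly convex, then
$$\nabla u^*(\xi)=x, \quad \quad D^2 u^*(\xi)=\big(D^2 u(x)\big)^{-1},$$
and in particular $u^*_n(\xi)=x_n$, which gives the last equality in the computation above.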
Also from \eqref{56} and the definition of Legendre transform we find
$$ u^*(\xi) \ge c \left ( (\xi_n + \gamma_u^*(\xi'))^+ \right )^4 \quad \quad \mbox{and} \quad |\nabla u^*| \le 1 \quad \mbox{in $B_c$},$$
which together with \eqref{58} implies that
$$O:= \{ u^* < \eta (\xi_n + \eta) \} \subset B_{c_1},$$
with $\eta$ and $c_1$ sufficiently small universal constants. Moreover, in $O$, the Lipschitz norms of the level sets of $u^*$ (viewed as graphs in the $-e_n$ direction) are bounded by a universal constant.
Next we apply Theorem \ref{abo} for $u^*$ in $O$ and obtain universal bounds for the second derivatives of the level sets of $u^*$ in a fixed neighborhood of the origin. Writing this for $K$, the $0$ level set, we obtain the desired result of Step 2 since, in a neighborhood of $0 \in \mathbb R^{n-1}$, $$D^2 \gamma_u^* \le C I \quad \mathbb Rightarrow \quad D^2 \gamma_u \ge c I.$$
We cannot apply directly Theorem \ref{abo} since $u^*$ is not strictly increasing in the $e_n$ direction. However we show below using approximations that the theorem still applies in our case i.e. for functions $u^*$ that satisfy \eqref{59}, \eqref{59.5}. Formally, we write that $u^*$ solves the equation in $B_c$ with right hand side $f(u^*)$ with $f=\chi_{(0,\infty)}$, and then apply Theorem \ref{abo}.
\
{\it Approximation.} Define $$v_\varepsilon= \max \left \{ u^*, \, \, \varepsilon \left (1+ \xi_n + \frac 1 2 |\xi|^2\right ) \right \}$$
and we remark that in $B_{c_1}$, $v_\varepsilon>0$, it is strictly increasing in the $e_n$ direction, $|\nabla v_\varepsilon| \le 1$ and its level sets have Lipschitz norm bounded by a universal constant. In the set $\{v_\varepsilon>u^*\} \cap B_{c_1}$ we have
$$(\partial_n v_\varepsilon)^ \alpha \det D^2 v_\varepsilon = \varepsilon^{n+\alpha},$$
hence
$$ (\partial_n v_\varepsilon)^ \alpha \det D^2 v_\varepsilon \ge f_\varepsilon(v_\varepsilon) \quad \mbox{in $B_{c_1}$,} $$
in viscosity sense, with
$f_\varepsilon$ a nondecreasing function satisfying $$f_\varepsilon(s)=\varepsilon^{n+\alpha} \quad \mbox{if $s \le \varepsilon^{1/2}$}, \quad \quad f_\varepsilon(s)=1 \quad \mbox{if $s \ge 2 \varepsilon^{1/2}$.} $$
We define $\bar v_\varepsilon$ as the viscosity solution to
$$(\partial_n \bar v_\varepsilon)^\alpha \det D^2 \bar v_\varepsilon =f_\varepsilon(\bar v_\varepsilon) \quad \quad \mbox{in}\quad O_\varepsilon:= \{v_\varepsilon< \eta(\xi_n + \eta)\},$$
$$ \bar v_\varepsilon=v_\varepsilon \quad \mbox{on $\partial O_\varepsilon$.} $$
The existence of $\bar v_\varepsilon$ follows by Perron's method and since $v_\varepsilon$ is a subsolution, we have $\bar v_\varepsilon \ge v_\varepsilon$. This implies that $\bar v_\varepsilon$ is strictly increasing in the $e_n$ direction and, $|\nabla \bar v_\varepsilon|$ and the Lipschitz norm of the level sets of $\bar v_\varepsilon$ are bounded by a universal constant. Therefore we can apply Theorem \ref{abo} for $\bar v_\varepsilon$ in $O_\varepsilon$ and obtain the uniform second derivative bounds for its level sets around the origin.
It remains to show that $\bar v_\varepsilon$ converges to $u^*$. Assume that a subsequence of $\bar v_\varepsilon$ converges to $\bar v_0$. Then $\bar v_0$ is defined in $O$, $\bar v_0= u^*$ on $\partial O$, and by construction $$\bar v_0 \ge u^*.$$
We prove that also $\bar v_0 \le u^*$. Assume by contradiction that the maximum of $\bar v_0 -u^*$ is positive and occurs at a point $\xi_0$. From the convergence of $\bar v_\varepsilon$ to $\bar v_0$ we obtain
$$ (\partial_n \bar v_0)^\alpha \det D^2 \bar v_0 =1 \quad \quad \mbox{in the set $\{ \bar v_0>0 \}\cap O$,} $$
and the equation is satisfied in the classical sense. We find $$\xi_0 \notin \{ u^*>0 \} \, \, \subset \{ \bar v_0 >0 \},$$ since in $\{u^*>0\}$
both $u^*$ and $\bar v_0$ solve the same equation. On the other hand if $\xi_0 \in \{u^*=0\}$ then (see \eqref{59}) we obtain $\nabla \bar v_0(\xi_0)=0$ thus
$$\bar v_0 \ge \bar v_0 (\xi_0) >0,$$
and we reach again a contradiction.
\qed
Next we prove the induction step for Theorem \ref{T3}.
\begin{prop}\label{p5}
Assume Theorem \ref{T3} holds for all $l \le k-1$ for some $k$ with $1 \le k \le n-2$. Then Theorem \ref{T3} holds also for $k$.
\end{prop}
We recall the notation of Section 3 that we denote points in $\mathbb R^n$ by
$$x=(y,z,x_n) \quad y=(x_1,..,x_k) \in \mathbb R^k \quad z=(x_{k+1},..,x_{n-1}) \in \mathbb R^{n-1-k}.$$
The proof of Proposition \ref{p5} is very similar to the proof of Proposition \ref{p4}, in most statements we just have to replace $x'$ by $z$. We provide the details below.
{\it Remark:} In view of Lemma \ref{l5} we may assume, after relabeling $\mu$, that all renormalized solutions $u_h$ given in Lemma \ref{l5} are in the same class $\mathcal D_0^\mu(1,..,1, \infty,...,\infty).$
Let $\Gamma_u$ denote the tangent cone of $u$ at the origin. Any supporting plane for $u$ at the origin has $0$ slope in the $y$ direction, hence $\Gamma_u$ does not depend on the $y$ variable.
We define the $n-k-1$ dimensional function $\gamma_u(z)$ as being the restriction of $\Gamma_u$ to $x_n=1$ i.e
$$\gamma_u(z):= \Gamma_u(y,z,1).$$
By construction the upper graph of $\gamma_u$ is a closed set and
$$u(x) \ge \Gamma_u(x)=x_n \gamma_u(\frac{z}{x_n}).$$
Since $\nabla u(0)=0$ we have $\gamma_u \ge 0$ and $\inf \gamma_u=0$. In the next lemma we obtain some useful properties of $\gamma_u$.
\begin{lem}\label{l9}
a) $$\gamma_u(z) \ge c_0|z|-C_0.$$
b) If $$\gamma_u \ge C_0 p_z \cdot (z-z_0),$$for some unit vector $p_z\in \mathbb R^{n-k-1}$, $|p_z|=1$ and some $z_0$ then
$$\gamma_u \ge p_z \cdot (z-z_0) + c_0.$$
The constants $c_0$, $C_0$ above are universal constants.
\end{lem}
\begin{proof}
a) We compare $u$ with
$$w:=c z \cdot p_z + c|z|^2 + \psi_u(y) + C (x_n^2 -\mu^{-1} x_n),$$
with $p_z$ a unit vector, and $\psi_u$ denoting the boundary data of $u$ on $\partial \Omega \cap\{(y,0,0) \}$.
Notice that $w = u$ on the intersection of $\partial \Omega$ with the $y$ axis. We choose $c$ small such that $w \le 1$ in $\Omega \subset B^+_{1/\mu}$ and $C$ large such that $\det D^2 w \ge \det D^2u$. By maximum principle, $u \ge w$, hence
$$\gamma_u \ge \gamma_w=c z \cdot p_z -C x_n,$$
which proves part a).
b) Assume that $p_z$ points in the $z_1$ direction and let $z_0 \cdot p_z=q$. Then
$$u \ge x_n\gamma_u(\frac{z}{x_n}) \ge C_0(z_1-q x_n)^+.$$
In the set $O=\Omega \cap \{z_1-q x_n >-1\}$ we compare $u$ with
$$w:= \frac {C_0}{2} (z_1-qx_n) + \frac{C_0}{8} (z_1-q x_n)^2 + \delta (|x|^2-z_1^2) + \delta x_n,$$
where $\delta$ is small, fixed, depending on $\mu$.
Notice that if $C_0$ is sufficiently large we have $\det D^2 w \ge \det D^2 u$ and $w \le u$ on $\partial O$. Indeed,
on $\partial O \setminus \partial \Omega$ we have $$z_1-q x_n=-1 \quad \Rightarrow \quad w \le 0 \le u,$$
and on $\partial O \cap \partial \Omega$
$$1 \ge u \ge C_0(z_1 - q x_n) \quad \Rightarrow \quad w \le 1.$$
From the inequalities above and $$w(y,0,0) \le \delta |y|^2 \le u(y,0,0)$$ we obtain $w \le u$ on $\partial O$.
In conclusion $$\gamma_u \ge \gamma_w \ge \frac{C_0}{2}(z_1-q) + \delta.$$
\end{proof}
\begin{rem}\label{r6}
In part a) we showed that
\begin{equation}\label{59.6}
u(x) \ge \psi_u(y) + c|z|- Cx_n.
\end{equation}
Also we only need the weaker assumption $$u \ge C_0(z_1-q x_n) \quad \mbox{ on $\partial \Omega$,}$$
in order to obtain the conclusion of part b).
\end{rem}
Part a) shows that the point $z_o \in \mathbb R^{n-k-1}$ where $\gamma_u$ achieves its infimum belongs to $B_C^z$. As a consequence of Lemma \ref{l9} we obtain as before the following inclusions
$$B_{c_*}^z \subset S_1(\gamma_u) - z_o \subset B_{C_*}^z$$
$$S_{c_*}(\gamma_u)-z_o \subset (1-c_*) (S_1(\gamma_u)-z_o).$$
for universal constants $c_*$ small, $C_*$ large.
Next we write these inclusions for the rescalings $u_h$ of $u$ defined in Lemma \ref{l5} (see the remark above). Recall from Lemma \ref{l5} that for any $h \in (0,1]$, we have
$$u_h(x)=\frac 1 h \tilde u(D_h x), \quad \quad \mbox{with} \quad \tilde u(x)=u(T_h A_h x).$$
Notice that $\gamma_u$ is just a translation of $\gamma_{\tilde u}$.
Since
$$\Gamma_{u_h}(x)=\frac 1 h \Gamma_{\tilde u}(D_h x)$$
we divide by $x_n$ and obtain as before
$$\gamma_{u_h}(z)=\frac{d_n}{h} \, \, \gamma_{\tilde u}\, (d_n^{-1} \, D_h^z z) \quad \quad \mbox{with} \quad D_h^z=diag(d_{k+1},..,d_{n-1}).$$
This implies that $$d_n^{-1}D_h^z\, \, S_s(\gamma_{u_h})= S_{s \, h/d_n}(\gamma_{\tilde u}).$$
We obtain the following inclusions for the sections of $\gamma_u$,
\begin{equation}\label{59.7}
d_n^{-1} D_h^z \, B_{c_*}^z \quad \subset \quad S_t(\gamma_u) -z_o \quad \subset \quad d_n^{-1} D_h^z \, B_{C_*}^z, \quad \quad \quad t=\frac{h}{d_n},
\end{equation}
and
\begin{equation*}
S_{c_* t}(\gamma_u)-z_o \quad \subset \quad (1-c_*)\left(S_t(\gamma_u) -z_o \right).
\end{equation*}
From Lemma \ref{l5} we know that as $h$ ranges from 1 to 0 the parameter $t=h/d_n$ covers an interval $[0,c]$. The inclusions above show that the sections $S_t(\gamma_u)$ are balanced around the minimum point $z_o$, and
\begin{equation}\label{510}
\gamma_u(z) \ge c |z-z_o|^M \quad \quad \mbox{in $B_c^z(z_o)$,}
\end{equation}
for some $M$ large universal.
We also recall from Lemma \ref{l5} that, from the construction of $D_h$,
\begin{equation}\label{511}
\mathcal S_h'(u) \, \, \subset \, \,\mathbb R^k \times h^{-1/2} D_h^z \, B_{C(n)}^z.
\end{equation}
It remains to show that $\gamma_u$ grows quadratically near its minimum point.
\begin{lem}\label{l10}
There exist universal constants $c_1$, $C_1$ such that
$$c_1|z-z_o|^2 \le \gamma_u(z) \le C_1 |z-z_o|^2 \quad \quad \mbox{in $B_{c_1}^z(z_o)$}.$$
\end{lem}
This lemma implies Proposition \ref{p5} as before. Indeed, the lemma gives $$t^{1/2}B_c^z \subset S_t(\gamma_u) -z_o \subset t^{1/2} B_C^z,$$
which together with \eqref{59.7} implies that
$$ c d^{1/2}_n \le d_i h^{-1/2} \le C d^{1/2}_n \quad \quad \mbox{for $k<i<n$.}$$
By Lemma \ref{l5} we also know
$$ c \le d_i h^{-\frac 12} \le C \quad \quad \mbox{for $i\le k$.}$$
Then
\begin{equation}\label{65.65}
\left(\prod_{i=1}^{n-1} d_i^2 \right) \, d_n^{2+\alpha}=h^n \quad \Rightarrow \quad ch \le {d_n}^{n+1-k+\alpha} \le Ch,
\end{equation}
and Proposition \ref{p5} follows from \eqref{511}.
\
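As in the case $k=0$, we sketch the computation behind \eqref{65.65} (all comparabilities are up to universal constants). Since $d_i h^{-1/2} \sim 1$ for $i \le k$ and $d_i h^{-1/2} \sim d_n^{1/2}$ for $k<i<n$, we get
$$\prod_{i=1}^{n-1} d_i^2 \sim h^{n-1} d_n^{n-1-k} \quad \Longrightarrow \quad d_n^{n+1-k+\alpha} \sim h,$$
and by \eqref{511} the $z$-extent of $\mathcal S_h'(u)$ is bounded by a constant times $\max_{k<i<n} d_i h^{-1/2} \sim d_n^{1/2} \sim h^{\frac{1}{2(n+1-k+\alpha)}}$.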
Below we prove Lemma \ref{l10}. After performing a sliding along $x_n=0$ of bounded norm, we may assume that $z_o=0$.
\
{\it Step 1:} We use Theorem \ref{Po} in the set $\{ u < c x_n\}$ to obtain
\begin{equation}\label{512}
D^2\gamma_u \le C I \quad \quad \mbox{in} \quad \{\gamma_u <c \}.
\end{equation}
We first need to bound $|\nabla u|$ in the set $\{u<c x_n\}$ for some $c$ small. To this aim we observe that the orthogonal projection of $\Omega=S_1(u)$ into the $z$-axis contains the ball $B_{1/C_0}^z$, with $C_0$ as in Lemma \ref{l9}. Otherwise we can find a direction, say $z_1$ such that
$$\Omega \subset \{ z_1 \le 1/C_0 \},$$
hence $$u \ge C_0 z_1 \quad \mbox{on $\partial \Omega$},$$
and by Lemma \ref{l9} (see Remark \ref{r6}) we find $$\gamma_u \ge z_1 + c_0,$$ and we contradict that $\gamma_u(0)=0$ (since $z_o=0$.)
Notice that $ \Omega \subset B_C^+$ contains a ball $B_\mu(x^*)$, the projection of $\Omega$ into the $z$ coordinates contains $B_{1/C_0}^z$ and also $$G:=\{(y,0,0)| |y| \le c \} \subset \partial \Omega.$$ Since $\Omega$ is convex, it must contain also $B_r(\varrho e_n)$ for some small fixed universal constants $\varrho$, $r$. Now we use that at each point in $G$ the function $u$ has a supporting plane of bounded slope, and conclude that $|\nabla u| \le C$ in the convex set generated by $G$ and $B_{r/2}(\varrho e_n)$. This convex set contains $\{ u < cx_n \}$ if $c$ is sufficiently small, since by \eqref{31}, \eqref{59.6} and \eqref{510} we obtain
$$\{ u< \delta x_n \} \subset \{x_n \le c(\delta)\} \cap \{|z| \le c(\delta) x_n \} \cap \{\psi_u \le c(\delta) \},$$
for some constant $c(\delta)\to 0$ as $\delta \to 0$.
Now let $w:=u-cx_n$ and notice that the rescalings $w_\lambda$ defined as in the proof of Lemma \ref{l8}
have uniform gradient bound in $\{w_\lambda <0\}$ and they converge uniformly on $x_n=1$ to $|\Gamma_u - c|$. Step 1 follows by applying Theorem \ref{Po} to $w_\lambda$ as before.
\
Above we showed also that the segment $[0,\varrho Z_o] \subset S_1(u)$ with $Z_o:=(0,z_o,1)=e_n$. We apply this for the rescaling $u_h$ and obtain
$$[0,\varrho d_n \, Z_o] \subset S_h(u).$$ Using the bounds on $d_n$ from Lemma \ref{l5} we find (see also \eqref{31})
\begin{equation}\label{513}
c t^{1/\delta} \le u(0,tz_o ,t) \le C t^{4/3}.
\end{equation}
We can extend this inequality at points $z$ near $z_o$,
\begin{equation}\label{514}
c t^{1/\delta} \le u(0,t z,t) - t \gamma_u(z) \le C t^{4/3} \quad \quad \mbox{for all $z \in B_c^z$.}
\end{equation}
Indeed, \eqref{514} follows by applying \eqref{513} to the function $u-l=u-p_z \cdot z-p_n x_n$ where $p_z \cdot z +p_n$ is the linear function tangent by below to $\gamma_u$ at some point $z_*$. From Step 1 we see that when $|z_*|$ is small, $|p_z|$, $|p_n|$ are also small and $u-l$ (renormalized at its $1/2$ section) belongs to a class $\mathcal D_0^c (1,..,1,\infty,..,\infty)$. Therefore we can apply \eqref{513} for $u-l$ and obtain the desired inequality \eqref{514} for $z_*$.
\
{\it Step 2:} In step 2 we apply Theorem \ref{abo} for the Legendre transform of $u$ and obtain
$$D^2 \gamma_u \ge c I \quad \quad \mbox{in $B_c^z$.}$$
As before let $u^*$ denote the Legendre transform of $u$,
$$u^*(\xi):= \sup_{x \in \overline \Omega} \left( x \cdot \xi - u(x)\right).$$
Writing $\xi=(\xi_y,\xi_z,\xi_n)$ we have
$$u^*(\xi) \ge \sup_{(y,0,0) \in \overline \Omega} \left( y \cdot \xi_y - u(y,0,0)\right) = \psi_u^*(\xi_y),$$
where $\psi_u^*$ is the Legendre transform of the boundary data $\psi_u(y)$ of $u$ (on the $y$ subspace).
If $|\xi|$ is small then the maximum in $u^*(\xi)$ is realized either in $\Omega$ or at some point $(y,0,0)$ with $|y|$ small and in the second case we have $u^*=\psi_u^*$.
Since $u$ is strictly convex in $\Omega$ we find that
$$u^* \in C^1(B_c) \quad \mbox{and} \quad \quad (u^*_n)^\alpha \det D^2 u^* =1 \quad \quad \mbox{in $\{u^*>\psi_u^*\}$.} $$
In other words $u^*$ is the solution to an obstacle problem in which the obstacle $\psi_u^*$ is quadratic and depends only on the $\xi_y$ variable.
We define $K$ as the convex set
$$K:=\{ u^*=0\} \, = \, \{\xi_y=0 \} \cap \{ \xi_n \le - \gamma_u^*(\xi_z)\}$$
where $\gamma_u^*$ denotes the Legendre transform of $\gamma_u$. Below we bound the curvatures of the level sets of $u^*$ in the $z$ direction in a neighborhood of the origin. Then Step 2 follows by applying these bounds for the $0$ level set above.
From Step 1 and \eqref{510} we know that
$$c|z|^M \le \gamma_u(z) \le C |z|^2 \quad \mbox{in $B_c^z$}$$
hence
\begin{equation*}
\{ \xi_n \le -C |\xi_z|^\frac{M}{M-1} \} \subset K \subset \{ \xi_n \le - c |\xi_z|^2 \} \quad \mbox{in $\{\xi_y=0 \} \cap B_c$}.
\end{equation*}
Also from \eqref{514} and the definition of Legendre transform we find
$$ u^*(\xi) \ge c \left ( (\xi_n + \gamma_u^*(\xi_z))^+ \right )^4 \quad \quad \mbox{in $B_c$},$$
which together with
$$u^*(\xi) \ge \psi_u^*(\xi_y) \ge c |\xi_y|^2,$$
implies that
$$O:= \{ u^* < \eta (\xi_n + \eta) \} \subset B_{c_1},$$
with $\eta$ and $c_1$ sufficiently small universal constants.
We also claim that in $B_c$, $|\nabla u^*|$ and the Lipschitz norms in the $z$ direction of the level sets of $u^*$ (viewed as graphs in the $-e_n$ direction) are bounded by a universal constant.
Indeed, let $\xi \in B_c$ and let $\nabla u^*(\xi)=x=(y,z,x_n) \in \Omegaega$. We need to show that
$|z| \le C x_n$. We increase the tangent plane of $u$ at the point $x$ (which has slope $\xi$) till it touches the boundary data of $u$ for the first time at some point $(y_0,0,0)$. Clearly $\xi_y$ coincides with the derivative of $\psi_u$ at $y_0$. We have
\begin{align*}
u(x) & \le \psi_u(y_0) + \xi \cdot (x-(y_0,0,0)) \\
& \le \psi_u(y_0) + \xi_y \cdot (y-y_0) + \xi_z \cdot z + \xi_n x_n \\
& \le \psi_u(y) + \xi_z \cdot z + \xi_n x_n,
\end{align*}
and by \eqref{59.6}
$$u(x) \ge \psi_u(y) + c|z| - Cx_n.$$
The inequalities above imply that $|z| \le C x_n $ if $|\xi|$ is sufficiently small.
Since $u^*$ is not strictly increasing in the $e_n$ direction, we apply Theorem \ref{abo} using approximations as before.
Formally, $u^*$ satisfies the hypotheses of Theorem \ref{abo} with right hand side $f(u^*-\psi_u^*)$ with $f=\chi_{(0,\infty)}$. The right hand side does not depend on the $z$ variable, thus we can bound the second derivatives of the level sets of $u^*$ in the $z$ direction.
\
{\it Approximation.} Define $$v_\varepsilon= \max \left \{ u^*, \, \, \psi_u^*(\xi_y)+ \varepsilon \left (1+ \xi_n + \frac 1 2 |\xi|^2\right ) \right \}$$
and we remark that in $B_{c_1}$, $v_\varepsilon$ is strictly increasing in the $e_n$ direction, $|\nabla v_\varepsilon| \le C$, and its level sets have Lipschitz norm in the $z$ direction bounded by a universal constant. In the set $\{v_\varepsilon>u^*\} \cap B_{c_1}$ we have
$$(\partial_n v_\varepsilon)^ \alpha \det D^2 v_\varepsilon \ge \varepsilon^{n+\alpha},$$
hence
$$ (\partial_n v_\varepsilon)^ \alpha \det D^2 v_\varepsilon \ge f_\varepsilon(v_\varepsilon-\psi_u^*) \quad \mbox{in $B_{c_1}$,} $$
in viscosity sense, with
$f_\varepsilon$ a nondecreasing function satisfying $$f_\varepsilon(s)=\varepsilon^{n+\alpha} \quad \mbox{if $s \le \varepsilon^{1/2}$}, \quad \quad f_\varepsilon(s)=1 \quad \mbox{if $s \ge 2 \varepsilon^{1/2}$.} $$
We define $\bar v_\varepsilon$ as the viscosity solution to
$$(\partial_n \bar v_\varepsilon)^\alpha \det D^2 \bar v_\varepsilon =f_\varepsilon(\bar v_\varepsilon-\psi_u^*) \quad \quad \mbox{in}\quad O_\varepsilon:= \{v_\varepsilon< \eta(\xi_n + \eta)\},$$
$$ \bar v_\varepsilon=v_\varepsilon \quad \mbox{on $\partial O_\varepsilon$.} $$
We apply Theorem \ref{abo} for $\bar v_\varepsilon$ in $O_\varepsilon$ and obtain the uniform second derivative bounds in the $z$ direction for its level sets around the origin. As before we find that $\bar v_\varepsilon$ converges to $u^*$ as $\varepsilon \to 0$. Thus the conclusion holds also for $u^*$, and the proof of Proposition \ref{p5} is finished.
\qed
\section{Proof of Theorem \ref{T2}}\label{s7}
Assume the hypotheses of Theorem \ref{T2} are satisfied. By Theorem \ref{T1}, we may also assume after performing an affine transformation that the solution $u$ satisfies
\begin{equation}\label{60}
c_0(|x'|^2 + x_n^{2+\alpha}) \le u(x) \le C_0 (|x'|^2 + x_n^{2+\alpha})
\end{equation}
for all $|x| \le c(\rho,\rho')$. Since in our case $\mu=1$, the constants $c_0$, $C_0$ depend only on $\alpha$ and $n$.
The rescalings $u_h$ for small $h$,
\begin{equation}\label{60.1}
u_h(x):=\frac 1 h u \left (h^\frac 12 x', h^\frac{1}{2+\alpha} x_n \right ),
\end{equation}
satisfy inequality \eqref{60} as well, and therefore belong to a compact family. Precisely, given a sequence $u^m$ of functions as above, and $h_m \to 0$, we can extract a subsequence $u_{h_m}^m$ that converges uniformly on compact sets to a global solution $u_0$ defined in $\mathbb R^n_+$, that satisfies \eqref{60} and
$$ (1-\varepsilon_0) x_n^\alpha \le \det D^2 u_0 \le (1+\varepsilon_0) x_n^\alpha,$$
$$ (1-\varepsilon_0)\frac{|x'|^2}{2} \le u_0(x',0) \le (1+\varepsilon_0) \frac{|x'|^2}{2}.$$
By compactness, the proof of Theorem \ref{T2} follows from the following Liouville type theorem.
\begin{prop}\label{p6}
Let $u \in C(\overline{\mathbb R^n_+})$ be a convex function that satisfies the growth condition
\begin{equation}\label{60.5}
c_0(|x'|^2 + x_n^{2+\alpha}) \le u(x) \le C_0 (|x'|^2 + x_n^{2+\alpha}),
\end{equation}
with $c_0$, $C_0$ the constants of Theorem \ref{T1} (see Remark \ref{r0}) and
\begin{equation}\label{61}
\det D^2 u = x_n^\alpha, \quad \quad u(x',0) = \frac 12 \, |x'|^2.
\end{equation}
Then $$u=U_0:=\frac 12 |x'|^2 + \frac{x_n^{2+\alpha}}{(1+\alpha)(2+\alpha)}.$$
\end{prop}
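A direct computation, included as a routine check, shows that $U_0$ is indeed admissible:
$$D^2 U_0= diag(1,\ldots,1,\, x_n^{\alpha}), \quad \quad \det D^2 U_0=x_n^\alpha, \quad \quad U_0(x',0)=\frac 12 |x'|^2, \quad \quad \frac{\partial_n U_0}{x_n^{1+\alpha}}=\frac{1}{1+\alpha},$$
the last identity being consistent with Steps 3 and 4 below.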
In the case $\alpha=0$ the conclusion is slightly different and $u$ must be a quadratic polynomial. The proof follows from the Pogorelov estimate in half space (see \cite{S1}). When $\alpha>0$ the situation is more delicate and we will make use of Theorem \ref{T1}.
We define $\mathcal K$ as the set of functions $u \in C(\overline{\mathbb R^n_+})$ that satisfy \eqref{60.5}, \eqref{61}. We want to show that $\mathcal K$ consists only of $U_0$.
Clearly $\mathcal K$ is a compact family under uniform convergence on compact sets. Also for any $h>0$,
$$ u \in \mathcal K \quad \Rightarrow \quad u_h(x):= \frac 1 h u \left (h^\frac 12 x', h^\frac{1}{2+\alpha} x_n \right ) \in \mathcal K.$$
If $u \in \mathcal K$ and $$x_0 \in \{x_n=0\}$$ is a point on the boundary then, after subtracting its tangent plane at $x_0$ and performing an appropriate sliding $A_{x_0}$, we can normalize $u$ at $x_0$ such that it belongs to $\mathcal K$. Precisely, there exists $A_{x_0}$ sliding along $x_n=0$ such that $u_{x_0} \in \mathcal K$ where
$$u_{x_0}(x):= u(x_0+A_{x_0}x)- u(x_0)-\nabla u(x_0) \cdot A_{x_0}x.$$
This statement follows from Theorem \ref{T1}. If $x_0$ is sufficiently close to the origin then the tangent plane of $u$ at $x_0$ has bounded slope. Indeed, the upper bound for $u_n(x_0)$ is obtained from \eqref{60.5} by convexity while for the lower bound we compare $u$ in $S_1(u)$ with an explicit barrier of the type
$$-\frac 12 x_0^2 + x' \cdot x_0 + \frac c2 |x'-x_0|^2 + c^{1-n} (x_n^2-Mx_n),$$
with $c$ small and $M$ large appropriate constants.
We can apply Theorem \ref{T1} at the point $x_0$ in the section $S_1(x_0)$ of $u$ and find that $u_{x_0}$ defined above satisfies \eqref{60.5} in a fixed neighborhood around the origin. In the general case we apply this argument for $u_h$ with $h \to \infty$ and obtain that $u_{x_0}$ satisfies \eqref{60.5} in the whole $\mathbb R^n_+$.
Below we provide the proof of Proposition \ref{p6} in several steps. The main ingredients are the compactness of the class $\mathcal K$ under the rescalings and normalizations given above, the fact that for $i<n$, $u_{ii}$ are subsolutions for the linearized operator and also that $$\frac{u_n}{x_n^{1+\alpha}}$$
solves an elliptic equation.
\
{\it Step 1:} We show that if $u \in \mathcal K$ then $D^2_{x'} u \le I$.
Given any $y_0 \in \mathbb R^n_+$ we consider the section $S_h(y_0)$ of $u$ that becomes tangent to $x_n=0$ at some point $x_0$. After normalizing $u$ at $x_0$ and then after an appropriate rescaling, we may assume that $S_h(y_0)=\{ u < x_n \}$. Notice that the tangential second derivatives $D^2_{x'}u$ are left invariant by these transformations. Hence, by interior regularity, $ u_{ii} \le C $ for $i<n$. Assume we have a sequence of functions $u_m \in \mathcal K$ and points $y_m$ (normalized as above) for which $\partial_{ii} u_m(y_m)$ tends to the supremum value $ \sup_{u \in \mathcal K}\partial_{ii} u$. Then we may assume that $u_m \to \bar u \in \mathcal K$, and $\partial_{ii} \bar u$ achieves an interior maximum at the point $\bar y$ where $\bar u-x_n$ attains its minimum. The function $\partial_{ii} \bar u$ is a subsolution for the linearized operator, thus $\partial_{ii} \bar u$ is constant in $\mathbb R^n_+$. The boundary data of $\bar u$ on $x_n=0$ shows that this constant must be 1, and this proves Step 1.
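The fact that the pure second derivatives $\partial_{ii}u$, $i<n$, are subsolutions is the standard computation, which we sketch (summation convention, $[u^{kl}]=[D^2u]^{-1}$): differentiating $\log \det D^2 u=\alpha \log x_n$ once, respectively twice, in the $x_i$ direction and using that the right hand side does not depend on $x_i$ gives
$$u^{kl}u_{kli}=0, \quad \quad u^{kl}\, \partial_{kl}(u_{ii})=u^{kp}u^{lq}u_{pqi}\,u_{kli} \ \ge\ 0.$$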
\
{\it Step 2:} We show that if $u \in \mathcal K$ then $$\psi(x'):=u_n(x',0) \quad \mbox{is concave,} \quad \psi \le 0 \quad \mbox{and}\quad \|\nabla \psi \|_{C^\frac{\alpha}{2+\alpha}(\mathbb R^{n-1})} \le C.$$
Formally, by Step 1 we have $u_{iin}\le 0$ on $x_n=0$ hence $\psi =u_n$ is concave. We prove this rigorously below. Let $x_0=(x_0', h^\frac{1}{2+\alpha})$ be the point where the section $S_t$ at the origin (for some $t$) becomes tangent to $\{x_n=h^\frac{1}{2+\alpha}\}$. From \eqref{60.5} we have
$$ch \le t \le Ch, \quad \quad |x_0'| \le C t^\frac 12 \le C h^ \frac 12.$$
We use Step 1 and $\nabla_{x'}u(x_0)=0$ and obtain
$$u(x',h^\frac {1}{2+\alpha}) \le C h + \frac 12 |x'-x_0'|^2 \le Ch + C |x'| h ^\frac 12 + u(x',0).$$
We let $h \to 0$ and obtain $u_n(x',0) \le 0$.
If $|x'| \le 1$ then as above, we use an explicit barrier for $u$ and easily obtain also a lower bound $0 \ge u_n(x',0) \ge -C$. We apply this for
the rescaling $u_h$ (see \eqref{60.1}) and find
$$ 0 \ge \partial_n u_h(x',0)=h^{\frac{1}{2+\alpha}-1} u_n(h^\frac 12 x',0) \ge -C \quad \quad \mbox{if $|x'| \le 1$,}$$
hence
$$0 \ge u_n(x',0) \ge - C |x'|^{1+\frac{\alpha}{2+\alpha}} \quad \quad \mbox{for all $x'$.}$$
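For the reader's convenience, we spell out the scaling argument behind the last display (with the notation above). For $z' \ne 0$ choose $h=|z'|^2$ and $x'=h^{-1/2}z'$, so that $|x'|=1$; then
$$|u_n(z',0)|=h^{1-\frac{1}{2+\alpha}}\, |\partial_n u_h(x',0)| \le C\, h^{\frac{1+\alpha}{2+\alpha}}=C\, |z'|^{2 \cdot \frac{1+\alpha}{2+\alpha}}=C\, |z'|^{1+\frac{\alpha}{2+\alpha}}.$$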
We apply this last inequality for $u_{x_0}$, the normalization of $u$ at $x_0\in \{x_n=0\}$,
$$u_{x_0}(x)=u(x_0+A_{x_0}x)-u(x_0)-\nabla u(x_0) \cdot A_{x_0}x,$$
with $$A_{x_0}x=x-\tau_{x_0} x_n, \quad \quad \tau_{x_0} \cdot e_n=0.$$
We find
$$0 \ge \partial_n u_{x_0} \, \, (x',0)=u_n(x_0+x',0)-u_n(x_0)-x' \cdot \tau_{x_0} \ge - C |x'|^{1+\frac{\alpha}{2+\alpha}},$$
where in the equality above we made use of $\nabla_{x'}u(x',0)=x'$. This proves Step 2 and we remark that the inequality above shows that the components of the vector $\tau_{x_0}$ in the sliding $A_{x_0}$ are given by $u_{ni}(x_0)$, $i<n$.
\
{\it Step 3.} We show that if $v$ is a convex function in $\mathbb R^n_+$ that satisfies
$\det D^2v=x_n^\alpha,$
then
$$\bar w:=v_n/ x_n^{1+\alpha}$$
satisfies in the set $\{ \bar w>0 \}$ a linear elliptic equation of the type
$$ L \bar w:=a^{ij}(x) \bar w_{ij} + b^i(x) \bar w_i =0, \quad \quad \mbox{with} \quad (a^{ij}(x))_{ i,j} >0.$$
It suffices to show that $$w=\log \bar w = \log v - (1+\alpha) \log x_n,$$
satisfies a linear elliptic equation as above. We have
\begin{align*}
w_i&=\frac{v_{ni}}{v_n}-\frac{1+\alpha}{x_n}\delta_n^i \\
w_{ij}&= \frac{v_{nij}}{v_n}-\frac{v_{ni}v_{nj}}{v_n^2}+\frac{1+\alpha}{x_n^2}\delta_n^i \delta_n^j.
\end{align*}
Differentiating the equation $\log \det D^2 v=\alpha \log x_n$ along $x_n$ direction
$$v^{ij}v_{nij}=\frac{\alpha}{x_n},$$
hence
\begin{equation}\label{62}
v^{ij} w_{ij} = \frac{\alpha}{x_n v_n}-\frac{v_{nn}}{v_n^2}+\frac{1+\alpha}{x_n^2}v^{nn}= -\frac{w_n}{v_n}-\frac{1}{x_nv_n}+\frac{1+\alpha}{x_n^2}v^{nn}.
\end{equation}
We have $$v^{nn}v_{nn}=1-\sum_{i \ne n} v_{ni}v^{in}=1-\sum_{i \ne n} w_i v_n v^{in},$$
hence
$$v^{nn}=\frac{1}{v_{nn}}- g^i w_i,$$
for some functions $g^i$. Then
\begin{align*}
\frac{1+\alpha}{x_n}v^{nn}- \frac{1}{v_n}&=\frac{1+\alpha}{x_nv_{nn}}-\frac {1}{v_n}-\tilde g^i w_i\\
&=-\frac{1}{v_{nn}} \left( \frac{v_{nn}}{v_n}-\frac{1+\alpha}{x_n} \right)-\tilde g^i w_i\\
&=-\frac{1}{v_{nn}}w_n-\tilde g^i w_i,
\end{align*}
which together with \eqref{62} proves step 3.
\
As a consequence we obtain that if $v \in C(\overline{\mathbb R^n_+})$ is a convex function that satisfies
\begin{equation}\label{63}
\det D^2 v=x_n^\alpha, \quad \quad v(x',0)=\frac 12 |x'|^2,
\end{equation}
and $\bar w=v_n/x_n^{1+\alpha}$ achieves a positive maximum at an interior point then $\bar w$ is constant. This implies that $v=U_0$ with $U_0$ as in Proposition \ref{p6} and therefore $\bar w \equiv \frac{1}{1+\alpha}$.
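Indeed, if $\bar w \equiv a$ for some constant $a>0$, then (a one line verification, using the boundary data in \eqref{63})
$$v_n=a\, x_n^{1+\alpha} \quad \Longrightarrow \quad v=\frac 12 |x'|^2+\frac{a}{2+\alpha}\, x_n^{2+\alpha}, \quad \quad \det D^2 v=a(1+\alpha)\, x_n^\alpha=x_n^\alpha \quad \Longrightarrow \quad a=\frac{1}{1+\alpha},$$
hence $v=U_0$.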
Assume that $v$ satisfies \eqref{63}, and for some direction $\xi=(\xi',1)$ and constant $m$, the function
$$\tilde w = \frac{v_\xi+m}{x_n^{1+\alpha}} \quad \mbox{has a positive interior maximum.}$$
Then $\tilde w \equiv \frac{1}{1+\alpha}$. Indeed, the function
$$\tilde v(y):=v(y'+\xi'y_n,y_n)+m y_n,$$
satisfies \eqref{63} and the conclusion follows as above since
$$\frac{\tilde v_n(y)}{y_n^{1+\alpha}}=\frac{v_\xi(x)+m}{x_n^{1+\alpha}}, \quad \quad \quad x:=(y'+\xi' y_n,y_n).$$
\
{\it Step 4.} We use the result above and show that $$u \in \mathcal K \quad \quad \Rightarrow \quad \frac{u_n}{x_n^{1+\alpha}} \le \frac{1}{1+\alpha}.$$
Let $x^*$ be a point in $\mathbb R^n_+$ where $u_n(x^*)>0$, and let $x_0$ be the point where the first section of $u$ at $x^*$ becomes tangent to $x_n=0$. As in Step 1 we normalize $u$ at $x_0$ and then rescale
$$v(y):= \frac 1 h u_{x_0}(F_hy) = \frac 1 h \left [u(x_0+A_{x_0}F_hy)-u(x_0)-\nabla u(x_0) \cdot A_{x_0}F_h y\right],$$
with
$$F_h y:=(h^\frac 12 y',h^\frac{1}{2+\alpha}y_n), \quad \quad A_{x_0}x=x - \tau_{x_0} x_n, \quad \quad \quad \tau_{x_0}=(\tau^1,...,\tau^{n-1},0).$$
We know that $v \in \mathcal K$ and we denote by $y^*$ the coordinates of $x^*$ in the $y$ variables,
$$x=x_0 + A_{x_0}F_h y, \quad \quad x^*=x_0+A_{x_0}F_h y^*.$$
We choose $h$ above such that $y^*$ is the center of the section $\{ v<y_n\}$, i.e. the point where $v-y_n$ achieves its minimum. We have
$$h \nabla v=\nabla u \, A_{x_0}F_h - \nabla u(x_0) \, A_{x_0} F_h,$$
and we obtain
\begin{equation}gin{equation}\label{64}
u_n=u_n(x_0) + h^\frac{1+\alpha}{2+\alpha}v_n + h^\frac 12 \tau^i v_i
\end{equation}
where $u_n$ is evaluated at $x$ and $v_n$, $v_i$ are evaluated at $y$.
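For the reader's convenience, here is a sketch of how \eqref{64} follows from the identity above (gradients viewed as row vectors). Since $A_{x_0}F_h e_i=h^{\frac 12}e_i$ for $i<n$ and $A_{x_0}F_h e_n=h^{\frac{1}{2+\alpha}}(e_n-\tau_{x_0})$, we get
$$h\, v_i=h^{\frac 12}\big(u_i(x)-u_i(x_0)\big) \quad (i<n), \quad \quad h\, v_n=h^{\frac{1}{2+\alpha}}\Big[u_n(x)-u_n(x_0)-\tau^i\big(u_i(x)-u_i(x_0)\big)\Big],$$
and solving the second relation for $u_n(x)$, using the first one, gives precisely \eqref{64}.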
Since $u_n(x^*)>0$ and $|\nabla v(y^*)| \le C_1$ for some constant $C_1$ depending only on $\alpha$ and $n$ we find
\begin{equation}\label{65}
0 \le u_n(x_0) + C_1 \left (h^\frac{1+\alpha}{2+\alpha} + h^\frac 12 |\tau_{x_0}| \right).
\end{equation}
On the other hand by Step 2 we know that $u_n \le 0$ on $x_n=0$. Thus if we write \eqref{64} at $$y=(y',0), \quad \mbox{with} \quad y'=2C_1 \frac{\tau_{x_0}}{|\tau_{x_0}|},$$
and use $\nabla_{y'}v(y)=y'$ together with \eqref{65} we obtain
$$ 0 \ge - C_1 \left (h^\frac{1+\alpha}{2+\alpha} + h^\frac 12 |\tau_{x_0}| \right) - C_2 h^\frac{1+\alpha}{2+\alpha} + 2 C_1 h^\frac 12 |\tau_{x_0}|$$
for some $C_2$ large depending on $C_1$. This and \eqref{65} show that
$$|\tau_{x_0}| h^\frac 12 \le C_3 h^\frac{1+\alpha}{2+\alpha}, \quad \quad u_n(x_0) \ge -C_3 h^\frac{1+\alpha}{2+\alpha},$$
for some $C_3$ depending only on $n$ and $\alpha$. We use these inequalities in \eqref{64} and obtain
\begin{equation}\label{66}
\frac{u_n(x)}{x_n^{1+\alpha}}=\frac{m+v_\xi(y)}{y_n^{1+\alpha}},
\end{equation}
for some vector $\xi$ and constant $m$ satisfying
$$\xi=(\xi',1), \quad \quad \quad |\xi'|\le C_3, \quad \mbox{and} \quad -C_3 \le m \le 0.$$
The right hand side of \eqref{66} is bounded by a universal constant at $y^*$ which implies that $u_n/x_n^{1+\alpha}$ is bounded at $x^*$. Since $x^*$ is arbitrary we obtain an upper bound for this function. Moreover, if we take a sequence of points which approach its supremum then the corresponding functions $v$ (and $m$, $\xi$) converge up to a subsequence to a limiting solution $\bar v \in \mathcal K$ (respectively $\bar m$, $\bar \xi$) for which
$$ \frac{ \bar m+\bar v_{\bar \xi}}{y_n^{1+\alpha}} \quad \mbox{achieves its maximum at the center of $\{\bar v <y_n \}$.}$$
By Step 3 we obtain that this maximum value is $1/(1+\alpha)$.
\
{\it Step 5.} We show that if $u \in \mathcal K$ then $u=U_0$.
Indeed, we integrate in the $x_n$ direction the inequality in Step 4 and obtain $u \le U_0$. Assume by contradiction that $u$ does not coincide with $U_0$ hence, by strong maximum principle, $u < U_0$ in $\mathbb R^n_+$. Let
$$V:= \frac{1+\epsilon}{2} |x'|^2 +\frac{(1+\epsilon)^{1-n}}{(2+\alpha)(1+\alpha)}x_n^{2+\alpha} - \epsilon x_n,$$ and notice that $\det D^2 V= \det D^2 u$, and
$$V \ge U_0 \ge u \quad \quad \mbox{ on} \quad \{x_n=0\} \cup \left(\{|x'| = C_1\} \cap \{0 \le x_n \le 1\} \right)$$
and if $\epsilon$ is sufficiently small
$$V \ge U_0 - C_2 \epsilon \ge u \quad \quad \mbox{on} \quad \{|x'| \le C_1\} \cap \{x_n = 1\},$$
where $C_1$, $C_2$ are constants depending on $\alpha$ and $n$. By maximum principle $$V \ge u \quad \mbox{in} \quad B_{C_1}' \times [0,1]$$
and we contradict $\nabla u(0)=0$ which follows from \eqref{60.5}.
\qed
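For the reader's convenience, the identity $\det D^2 V=x_n^{\alpha}$ used in Step 5 (and hence $\det D^2V=\det D^2u$, assuming as throughout this step that $\det D^2 u=x_n^{\alpha}$) follows from the diagonal structure of $D^2V$:
$$D^2V=\operatorname{diag}\big((1+\epsilon)I_{n-1},\,(1+\epsilon)^{1-n}x_n^{\alpha}\big) \quad \Rightarrow \quad \det D^2V=(1+\epsilon)^{n-1}(1+\epsilon)^{1-n}x_n^{\alpha}=x_n^{\alpha}.$$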
\section{Consequences of Theorem \ref{T2}}\label{s8}
In this section we use Theorem \ref{T2} and prove Theorems \ref{T2.1}, \ref{T03}, \ref{T02}.
First we show that if the hypotheses of Theorem \ref{T1} or Theorem \ref{T2} are satisfied at a point then they hold also in a neighborhood of that point.
\begin{lem}\label{l11}
Assume the hypotheses H1, H2, H3, H4 of the localization Theorem \ref{T1} are satisfied and, in addition, $\partial \Omega$ admits an interior tangent ball of radius $\rho$ at all points on $\partial \Omega \cap B_\rho$ and
$$u(x)=\varphi(x') \quad \mbox{on} \quad \partial \Omega \cap B_\rho, \quad \quad \mu^{-1} I \ge D_{x'}^2 \varphi \ge \mu I.$$
Then the hypotheses of the localization theorem hold at all points $x_0 \in \partial \Omega \cap B_c$, for some $c=c(\rho,\rho')$ small.
\end{lem}
\begin{proof}
We only have to check that on $\partial \Omega$, $u$ separates quadratically away from the tangent plane at $x_0$, hence we need to show that $|\nabla u(x_0)|$ is sufficiently small when $|x_0|$ is close to the origin. By Theorem \ref{T1} there exists a sliding $A$, $|A|\le C_1(\rho,\rho')$ such that
for $h \le c_1(\rho,\rho')$ small, the rescaled function
\begin{equation}\label{80}
u_h(y):=\frac 1 h u(A F_h y) \quad \quad F_h y:=(h^\frac 12y',h^\frac{1}{2+\alpha}y_n), \quad x=AF_h y,
\end{equation}
satisfies in $S_1(u_h)$
$$ u_h(y)= \varphi_h(y') \quad \mbox{on} \quad \partial \Omega_h=(A F_h)^{-1} \partial \Omega, \quad \quad \quad \frac \mu 2 I \le D^2_{y'} \varphi_h \le 2 \mu I,$$
$$c_0 (|y'|^2+y_n^{2+\alpha}) \le u_h \le C_0(|y'|^2+y_n^{2+\alpha}), \quad \quad \det D^2u_h \le 2 y_n^\alpha.$$
where the last inequality follows from the fact that $u$ satisfies the same inequality in $\Omega \cap B_\rho$. Now, if $y_0 \in \partial \Omega_h$ with $|y_0|<c$ small we can bound $|\nabla u_h(y_0)|$ as in Section \ref{s7}, by using a lower barrier of the type
$$u_h(y_0)+\xi' \cdot z'+c|z'|^2+c^{1-n} (z_n^2-Mz_n),$$
where $z$ denotes the coordinates in a coordinate system centered at $y_0$ and with the $z_n$ axis pointing towards the inner normal to $\partial \Omega$.
In conclusion
$$|\nabla u_h(y_0)| \le C \quad \quad \Rightarrow \quad |\nabla u(x_0)|\le C_1 h^\frac{1+\alpha}{2+\alpha}, \quad \quad x_0=AF_h y_0,$$
and by choosing $h=c_2(\rho,\rho')$ small, we obtain the desired conclusion.
\end{proof}
From the proof above we see that if in Lemma \ref{l11} we have $\partial \Omega,\varphi \in C^2$ in $B_\rho$ and
$$\det D^2 u=g \, d_{\partial \Omega}^\alpha,$$
for some function $g>0$ that is continuous on $\partial \Omega \cap B_\rho$,
then Theorem \ref{T2} applies at all points on $\partial \Omega \cap B_c$ with $c=c(\rho,\rho')$ small. In particular we obtain that $u$ is pointwise $C^2$ at all these points, and using the arguments above it can be shown that $D^2u$ is continuous on $\partial \Omega \cap B_c$.
\
Next we extend our estimates from $\partial \Omega$ to a small neighborhood of $\partial \Omega$ and prove Theorem \ref{T2.1}.
{\it Proof of Theorem \ref{T2.1}}
In this proof we denote by $\bar c$, $\bar C$ various constants (that may change from line to line) which depend on $n$, $\alpha$, $\rho$, $\rho'$, $\beta$ and the $C^2$ modulus of continuity of $\varphi$ and $\partial \Omega$.
Assume for simplicity that $D^2 \varphi(0)=I$ and $g(0)=1$. We apply Theorem \ref{T2} and obtain that there exists a sliding $A$, with $|A| \le C(\rho,\rho')$, such that for any $\eta>0$
\begin{equation}\label{81}
(1-\eta) A \, \, S_h(U_0) \subset S_h(u) \subset (1+\eta) A \, \, S_h(U_0),
\end{equation}
for all $h \le \bar c(\eta)$.
Let $t$ be the minimum value of $u-h^\frac{1+\alpha}{2+\alpha}x_n$ and $x_t$ the point where it is achieved, thus
$$S_t(x_t)=\{ u<h^\frac{1+\alpha}{2+\alpha}x_n \}.$$
Next we show that
\begin{equation}\label{82}
\|D^2 u\|_{C^\beta(S_{t/4}(x_t))} \le \bar C h^{-\frac \beta 2} , \quad \sup_{S_{t/4}(x_t)} \| D^2u-D^2u(0)\|\le \bar C \eta.
\end{equation}
From \eqref{81} we see that $t \sim h$ and also
$$S_{t/2}(x_t) \subset \mathcal C:= \left \{|x'| \le C |x_n| \right \}.$$
In the cone $\mathcal C$, $d_{\partial \Omega}/x_n$ is a positive function with bounded Lipschitz norm, hence for all $t$ small
$$\det D^2 u = \bar g \, \, x_n^\alpha \quad \quad \mbox{in $S_{t/2}(x_t)$}$$
with $\bar g (0)=1$, $\|\bar g\|_{C^\beta} \le \bar C$.
We let $u_h$ be the rescaled function given in \eqref{80} and let $ x_t=A F_h y_t$ and
notice that
$$ A F_h S_{t/(2h)}(y_t) = S_{t/2}(x_t), \quad \quad S_{t/h}(y_t)=\{u_h <y_n\}$$
where $S_t(y)$ denotes the sections for $u_h$.
We have $$\det D^2 u_h= \bar g_h \, \, y_n^\alpha \quad \mbox{in}\quad S_{t/(2h)}(y_t),$$
with $$\bar g_h(y)=\bar g (AF_h y) \quad \quad \Rightarrow \quad \quad \|\bar g_h\|_{C^\beta} \le \bar C h^\frac {\beta}{2+\alpha}, \quad \bar g_h(0)=1.$$
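The H\"older bound above can be seen as follows: for $h \le 1$ the map $AF_h$ contracts distances by a factor at most $C h^{\frac{1}{2+\alpha}}$ (the tangential factor $h^{\frac 12}$ being even smaller for $h \le 1$), hence
$$|\bar g_h(y)-\bar g_h(z)|=|\bar g(AF_h y)-\bar g(AF_h z)| \le \bar C\, |AF_h(y-z)|^{\beta} \le \bar C\, h^{\frac{\beta}{2+\alpha}} |y-z|^{\beta}.$$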
From \eqref{81} we have $$|u_h -U_0| \le C \eta \quad \mbox{in} \quad S_{t/(2h)}(y_t)$$
hence, by the interior $C^{2,\beta}$ estimates for the Monge-Ampere equation, we obtain
$$\|D^2 u_h \|_{C^\beta} \le \bar C, \quad \|D^2_{x'} u_h - I \| \le \bar C \eta \quad \quad \mbox{in} \quad S_{t/(4h)}(y_t).$$
We write these inequalities in terms of $D^2 u$ and we obtain \eqref{82}. We apply the same argument at other boundary points instead of the origin, thus we may assume that \eqref{82} holds uniformly for all points $x^*\in \partial \Omega \cap B_\delta$ and their corresponding interior sections $S_{t'}(x^*_{t'})$ which become tangent to $\partial \Omega$ at $x^*$.
Let $y^* \in \partial \Omega_h \cap S_c(u_h)$, thus $|\nabla u_h(y^*)|$ is small if $c$ is small. This implies that the section
$$S_{t'/h}(y^*_{t'}):=\{u_h<u_h(y^*) + (\nabla u_h(y^*) +\nu_{y^*}) \cdot (y-y^*)\},$$
with $\nu_{y^*}$ the inner normal to $\partial \Omega_h$, is a perturbation of the section $\{ u_h<y_n\}$. We obtain
$$S_{t'/(4h)}(y^*_{t'}) \cap S_{t/(4 h)}(y_t) \ne \emptyset,$$
and the corresponding sections for $u$ satisfy
$$S_{t'/4}(x^*_{t'}) \cap S_{t/4}(x_t) \ne \emptyset,$$
if $x^*\in \partial \Omega \cap S_{ch}.$ This and \eqref{82} imply
$$\|D^2 u(x^*)-D^2u(0)\| \le \bar C \eta,$$
which together with \eqref{82} shows that $u \in C^2(\partial \Omega \cap B_\delta)$ for some small $\delta$.
\qed
\begin{rem}\label{r7}
From the proof above we see that if $g$ has a $C^\beta$ modulus of continuity only on $\partial \Omega$, i.e.
\begin{equation}\label{83}
|g(x)-g(x_0)| \le C |x-x_0|^\beta \quad \quad \mbox{for all $x\in \overline \Omega$, $x_0\in \partial \Omega$,}
\end{equation}
then $u \in C^{1,\gamma}(\overline \Omega \cap B_\delta)$ for any $\gamma<1$, with $\delta$ small depending also on $\gamma$.
Indeed, instead of the interior $C^{2,\beta}$ estimates we may apply the interior $C^{1,\gamma}$ estimates since $\bar g_h$ has small oscillation in $S_{t/(2h)}(y_t) $. We obtain
$$\|\nabla u_h \|_{C^\gamma} \le C \quad \quad \mbox{in} \quad S_{t/(4h)}(y_t), \quad \quad |\nabla u_h| \le C \quad \mbox{in} \quad S_1(u_h),$$
which rescaled back implies
$$\|\nabla u\|_{C^\gamma (S_{t/4}(x_t))} \le C h^\frac{1-\gamma}{2}, \quad \quad \quad \sup_{S_h}|\nabla u-\nabla u(0)| \le h^\frac 12,$$
and the claim easily follows.
\end{rem}
As a consequence of Theorem \ref{T2.1} we obtain Theorem \ref{T03}.
\
{\it Proof of Theorem \ref{T03}}
After multiplying by an appropriate constant we may suppose $\max_\Omega|u|=1$. Since $\partial \Omega$ is uniformly convex, we can use explicit barriers at points on $\partial \Omega$ and obtain $|u| \le C d_{\partial \Omega}$ with $C$ a constant depending on $n$ and the lower bounds for the curvatures of $\partial \Omega$. Also by convexity we find $|u| \ge c d_{\partial \Omega}$.
These inequalities on $|u|$ imply that if $x_0 \in \partial \Omega$ then $c \le |\nabla u(x_0)| \le C$, hence on $\partial \Omega$ the function $u$ separates quadratically from its tangent plane at $x_0$. We apply Proposition \ref{p0} and obtain that $u$ is pointwise $C^{1,1/3}$ at all points on $\partial \Omega$, i.e.
$$0 \le u(x) - \nabla u(x_0) \cdot (x-x_0) \le C|x-x_0|^\frac 43 \quad \quad \mbox{for all $x\in \overline \Omega$, $x_0\in \partial \Omega$.} $$
This implies $\nabla u \in C^{1/3}(\partial \Omega)$, which together with the inequality above gives that
$$g:= |u|/d_{\partial \Omega}$$
has a uniform $C^{1/3}$ modulus of continuity on $\partial \Omega$, i.e. \textit{\textbf{e}}qref{83} holds with $\begin{equation}ta=1/3$. By Remark \ref{r7} above we find $u \in C^{1,\gamma}(\overline \Omega)$ which implies that $g \in C^\gamma(\overline \Omega)$, and the conclusion follows by Theorem \ref{T2.1}.
\qed
Before we prove Theorem \ref{T02} we obtain a simple consequence of Theorem \ref{T3}. We recall the notation used in Section \ref{s4}
$$b(h):=\max_{S_h} x_n.$$
\begin{lem}\label{15}
For any $\epsilon>0$ small, there exist constants $\bar c$ small, $K$ large depending on $\mu$, $n$, $\alpha$ and $\epsilon$ such that if
$$u\in \mathcal D_0^\mu (a_1,...,a_{n-1}), \quad \quad \mbox{with} \quad a_{n-1} \ge K$$
and $\mu \le a_1 \le \cdots \le a_{n-1} \le \infty,$ then
$$b(t) \ge (2 /\mu) \, t^\frac{1}{3+\alpha - \epsilon} \quad \quad \mbox{for some $t \in [\bar c,1]$}.$$
\end{lem}
\begin{proof}
In Theorem \ref{T3} we showed that if $0\le k \le n-2$,
\begin{equation}\label{84}
u \in \mathcal D_0^\mu(\underbrace{1,...,1}_{k \ \text{times}},\infty,...,\infty) \quad \quad \Rightarrow \quad b(h) \ge C h^\frac{1}{3+\alpha},
\end{equation}
for some universal $C$ depending on $\mu$, $n$, $\alpha$. Indeed, in Lemma \ref{l5} we obtained $c d_n \le b(h) \le C d_n$ and in \eqref{65.65}
$$ch \le d_n^{n+1-k+\alpha} \le C h, \quad \quad n+1-k+\alpha \ge 3 + \alpha.$$
Now the lemma follows by compactness similar to the proof of Lemma \ref{l4.1}.
From \eqref{84} with $k=0$ and by compactness, we can find $C_1(\epsilon)$ large such that the conclusion of the lemma holds if $a_1 \ge C_1$.
If $a_1 \le C_1$ then we use compactness and \eqref{84} with $k=1$ (and $\tilde \mu$ depending on $\mu$ and $C_1$), and obtain that there exists $C_2(\epsilon)$, $C_2 \gg C_1$ such that if $a_2 \ge C_2$ then the conclusion of the lemma is satisfied.
We obtain the conclusion by repeating this argument $n-2$ times.
\end{proof}
We conclude the section with the proof of Theorem \ref{T02}.
\
{\it Proof of Theorem \ref{T02}}
From Theorem \ref{T1} we know that after subtracting the tangent plane at the origin and after performing an affine deformation given by a sliding along $x_n=0$ we may suppose that
\begin{equation}\label{85}
u=O(|x'|^2+x_n^{2+\alpha}), \quad \quad \mbox{near the origin.}
\end{equation}
For $h$ large we define as usual $d_1 \le ...\le d_{n-1}$ to be the lengths of the axes of the ellipsoid which is equivalent to $S_h \cap \{x_n=x^*_h \cdot e_n \}$, and we let $d_n$ be such that
$$\left (\prod_1^{n-1} d_i^2\right) \, d_n^{2+\alpha}=h^n.$$
As in the proof of Proposition \ref{p1} we can find $c_0$, $C_0$ depending only on $n$ and $\alpha$, and a sliding $A_h$ along $x_n=0$ such that
$$c_0 d_n \le b(h) \le C_0 d_n,$$
and the rescaling
$$u_h(x):=\frac 1 h u(A_h D_hx) \quad \quad \mbox{with} \quad D_h:=diag(d_1,..,d_n), $$
satisfies
$$u_h \in \mathcal D_0^{c_0}(a_1,..,a_{n-1}) \quad \quad \mbox{with} \quad a_i=d_i h^{-\frac 12} .$$
If
\begin{equation}\label{86}
b(h) \le c_1(\epsilon) h^{1/(2+\alpha)}
\end{equation}
for some $c_1$ sufficiently small then $a_{n-1} \ge K$ with $K$ the constant from Lemma \ref{15} applied to $u_h$. Then
$$\frac{b(th)}{b(h)}=\frac{b_{u_h}(t)}{b_{u_h}(1)} \ge 2\, t^\frac{1}{3+\alpha-\epsilon} \quad \quad \mbox{for some $t\in [\bar c,1]$},$$
hence $$q(h) \le \frac 12 q(t h) \quad \quad \mbox{with} \quad q(h):=b(h) h^{-\frac{1}{3+\alpha-\epsilon}}.$$
Thus if \eqref{86} holds for all $h$ large then $q(h) \to 0$ as $h \to \infty$. This contradicts the growth assumption for $u$ at infinity on the $x_n$ axis.
In conclusion
$$b(h) \ge c_1(\epsilon) h^\frac{1}{2+\alpha}$$
for a sequence $h=h_m$ tending to $\infty$, hence $c(\epsilon) \le d_ih^{-1/2} \le C(\epsilon)$ if $i < n$. This implies that for this sequence of $h_m$'s, the rescaled function
$$\tilde u_h(x):=\frac 1 h u(A_h F_hx) \quad \quad \mbox{with} \quad F_hx:=(h^\frac 12 x', h^\frac{1}{2+\alpha}x_n), $$
satisfies the hypotheses of Theorem \ref{T2} for any $\eta>0$. Hence there exists $c_2(\epsilon,\eta)$ such that
$$(1-\eta)U_0(x) \le \tilde u_h(\tilde A_h x) \le (1+\eta) U_0(x) \quad \mbox{holds if $|x|\le c_2$,} $$
for some sliding $\tilde A_h$. In terms of $u$ this means that
$$(1-\eta)U_0 \le u(\bar A_h x) \le (1+\eta)U_0 \quad \mbox{holds if $|F_h^{-1}x|\le c_2$},$$
for some sliding $\bar A_h$. Using also \eqref{85} we obtain $\bar A_h=I$. We let $h \to \infty$, thus
$$(1-\eta)U_0 \le u \le (1+\eta) U_0 \quad \mbox{for all $x$,}$$
and, since $\eta$ is arbitrary, we find $u=U_0$.
\qed
\end{document}
\begin{document}
\title{Eigenvector convergence for minors of unitarily invariant infinite random matrices}
\begin{abstract}
In \cite{Pic91}, Pickrell fully characterizes the unitarily invariant probability measures on infinite Hermitian matrices. An alternative proof of this classification is given by Olshanski and Vershik in
\cite{OV96}, and in \cite{BO01}, Borodin and Olshanski deduce from this proof that under any of these invariant measures, the extreme eigenvalues of the minors, divided by the dimension, converge almost surely. In this paper, we prove that one also has a weak convergence for the eigenvectors, in a sense which is made precise.
After mapping Hermitian to unitary matrices via the Cayley transform, our result extends a convergence proven in our paper with Maples and Nikeghbali \cite{MNN18}, for which a coupling of the Circular Unitary Ensemble of all dimensions is considered.
\end{abstract}
\section*{Introduction}
Let $\mathcal{H}$ be the set of infinite Hermitian matrices, i.e. infinite families $(m_{j,k})_{j,k \geq 1}$ of complex numbers
such that $m_{j,k} = \overline{m_{k,j}}$, and $\mathcal{U}$ the group of infinite unitary matrices, i.e. matrices
$(u_{j,k})_{j,k \geq 1}$ such that there exists $n \geq 1$ satisfying the following property: $(u_{j,k})_{1 \leq j,k \leq n}$ is a unitary matrix and $u_{j,k} = \delta_{j,k} := \mathds{1}_{j = k}$ if $j$ or $k$ is strictly larger than $n$. The group $\mathcal{U}$ can be considered as the union of $(U(n))_{n \geq 1}$, where $U(n)$ is naturally embedded in $U(n+1)$
by the map $u \mapsto \operatorname{Diag} (u,1)$.
The group $\mathcal{U}$ naturally acts on $\mathcal{H}$ by conjugation, and some probability measures on $\mathcal{H}$ are invariant by this action: they are called {\it central measures}. After a similar study, by Aldous \cite{A81}, of infinite random matrices which are invariant by left and right multiplication by permutation or orthogonal matrices, the central measures on $\mathcal{H}$ have been completely classified
by Pickrell \cite{Pic91}, by Olshanski and Vershik in \cite{OV96}, and can be decomposed as convex combinations of measures called {\it ergodic measures}. The ergodic measures are indexed by the set $\mathbb{R} \times \mathbb{R}_+ \times \mathcal{S}$, where $\mathcal{S}$ contains all square-summable sets of non-zero real numbers with possible repetitions.
Moreover, in \cite{BO01}, Borodin and Olshanski show that these points correspond to almost sure limits of the extremal eigenvalues of the minors of the corresponding infinite matrix, divided by their dimension.
In general, a central measure is then represented by a probability distribution on $\mathbb{R} \times \mathbb{R}_+ \times \mathcal{S}$. This distribution has been studied in detail for some particular central measures, which enjoy remarkable properties. For example, Borodin and Olshanski \cite{BO01} have studied the case of the {\it Hua-Pickrell measures}, which depend on a complex parameter $s$ with real part strictly larger than $-1/2$, and under which the
distribution of the minor $M_n := (m_{j,k})_{1 \leq j, k \leq n}$ has a density proportional to
$\operatorname{det} (1 + iM_n)^{-s-n} (1 - iM_n)^{- \bar{s} -n}$ with respect to the Lebesgue measure on Hermitian matrices. Borodin and Olshanski proved that under the probability measure on $\mathbb{R} \times \mathbb{R}_+ \times \mathcal{S}$ associated to the Hua-Pickrell measure of parameter $s$, the third component (in $\mathcal{S}$) is a determinantal point process whose kernel, depending on $s$, is explicitly computed. For $s = 0$, we get the inverses of the points of a determinantal sine-kernel process.
The authors also show that the second component (called {\it Gaussian component}) is equal to zero for $s = 0$, and Qiu \cite{Qiu17} shows that it is the case for all $s$. He also determines the first component for $s \in \mathbb{R}$.
The case $s = 0$, which corresponds to minors following the Cauchy Ensemble, is particularly interesting for the following reason. The Cayley transform $x \mapsto (x - i)/(x + i)$ maps the Hermitian matrices to the unitary matrices for which $1$ is not an eigenvalue. The sequences of minors of infinite Hermitian matrices are mapped to some particular sequences of unitary matrices of increasing dimensions, called {\it virtual isometries}, and characterized by Neretin in \cite{Neretin02}.
These virtual isometries defined by Neretin correspond to unitary matrices for which $1$ is not an eigenvalue: this constraint has been removed in a construction done in a joint paper with Bourgade and Nikeghbali \cite{BNN13}, which generalizes
the construction of Neretin. Our notion of virtual isometry also generalizes the notion of {\it virtual permutations} introduced by Kerov, Olshanski and Vershik in \cite{KOV93} and studied in detail by Tsilevich \cite{Tsilevich98}, who gives a classification of the central measures on virtual permutations which is quite similar to the classification given by Olshanski and Vershik in the Hermitian setting.
If we map an infinite Hermitian matrix following the Hua-Pickrell measure for $s = 0$ by the Cayley transform, we get a virtual isometry such that each component follows the Circular Unitary Ensemble, i.e. the Haar measure on the unitary group. The convergence results proven in \cite{OV96} and \cite{BO01} imply the following: if for a virtual isometry $(u_n)_{n \geq 1}$ following the Haar measure, we consider the eigenangles of $u_n$, multiplied by $n/2 \pi$, then the corresponding point measure a.s. converges locally weakly to a determinantal sine-kernel process. In \cite{BNN13}, we give an alternative and more direct proof of this result, with an estimate of the speed of convergence.
In \cite{MNN13} and \cite{MNN18}, we improve this estimate, and we also show that each fixed component of the eigenvectors, multiplied by $\sqrt{n}$, almost surely converges to a non-trivial limit when $n$ goes to infinity. In \cite{MNN13}, we construct an operator $H$ on an infinite dimensional space, whose eigenvalues and eigenvectors are the limits of the renormalized eigenangles and eigenvectors of $(u_n)_{n \geq 1}$.
The space $\mathcal{E}$ where the operator $H$ is defined is spanned by independent infinite sequences of complex i.i.d. Gaussian variables, and its structure is not classical: in particular, it is not a Hilbert space. The flow $(e^{iH \alpha})_{\alpha \in \mathbb{R}}$ of operators on $\mathcal{E}$ can then be seen as a limit, in a sense to be made precise, of
the family $(u_n^{\lfloor \alpha n \rfloor})_{n \geq 1}$ of unitary matrices, when $n$ goes to infinity.
The construction in \cite{MNN13} is, to our knowledge, the first natural construction of an operator whose spectrum is a determinantal sine-kernel process, and which is related to a classical ensemble of random matrices. A different construction of such an operator has been later given by Valk\'o and Vir\'ag in \cite{VV17}. Note that it is natural to expect that the sine-kernel process is the spectrum of some kind of universal random operator, since it appears as a limit for the spectrum of many matrix ensembles: however, until now, our attempts to construct an operator which is more universal (i.e. related to many random matrix ensembles) than those given in \cite{MNN13} and \cite{VV17} have not succeeded (the operator in \cite{MNN13} is only related to the Circular Unitary Ensemble, and the operator in \cite{VV17} is related to ensembles of tridiagonal matrices). The construction of operators whose spectrum is the sine-kernel process might also be, even if this is very speculative, related to the conjecture of Hilbert and P\'olya, who suggested that the non-trivial zeros of the Riemann zeta function should be interpreted as the spectrum of an operator $\frac12 + iH$ with $H$ a Hermitian operator. Indeed, the zeros of $\zeta$ are believed to locally behave like a determinantal sine-kernel process, as deduced from a conjecture by Montgomery \cite{Montgomery73}, generalized by Rudnick and Sarnak \cite{RS96}. More details on this discussion can be found in \cite{MNN13}.
The main goal of the present paper is the generalization of the result of convergence of eigenvectors given in \cite{MNN13} and \cite{MNN18}: we will show that this convergence occurs for any random infinite Hermitian matrix following a central measure, or equivalently (by using the Cayley transform which preserves the eigenvectors), for any random virtual isometry in the sense of \cite{Neretin02}, whose distribution is invariant by unitary conjugation.
\section{Classification of the central measures and statement of our main result}
In this section, we first recall the classification of the central measures on infinite Hermitian matrices given by Pickrell \cite{Pic91}, and by Olshanski and Vershik \cite{OV96}. A reformulation of this classification is given by the following proposition:
\begin{proposition} \label{propositionOV96}
Let $\mathbb{P}$ be a central probability measure on the space of infinite Hermitian matrices. Then,
there exists a probability measure $\mu$ on $\mathbb{R} \times \mathbb{R}_+ \times \mathcal{S}$, such that
$$\mathbb{P} = \int_{\mathbb{R} \times \mathbb{R}_+ \times \mathcal{S}} \mathbb{P}^{(\alpha)} d \mu( \alpha),$$
where $\mathbb{P}^{(\alpha)}$ is defined as follows.
For $\gamma_1 \in \mathbb{R}$, $\gamma_2 \in \mathbb{R}_+$, and a square-summable, finite or infinite, sequence $(x_\ell)_{\ell \geq 1}$ of non-zero real numbers,
let the infinite matrix $M = (m_{j,k})_{j,k \geq 1}$ be given by
$$m_{j,k} = \gamma_1 \delta_{j,k} + \sqrt{\gamma_2} G_{j,k} +
\sum_{\ell \geq 1} x_{\ell} ( \xi^{(\ell)}_{j} \overline{\xi^{(\ell)}_{k}} - \delta_{j,k}),$$
where $(G_{j,k})_{j,k \geq 1}$ is an infinite matrix following the Gaussian Unitary Ensemble (normalized in order to have
$\mathbb{E} [ G_{j,j}^2 ] = 1$) and
$(\xi^{(\ell)}_{j} )_{\ell, j \geq 1}$ is an independent family of i.i.d. complex Gaussian variables, such that
$\mathbb{E} [ \xi^{(\ell)}_{j}] = \mathbb{E} [ (\xi^{(\ell)}_{j})^2] = 0$,
$\mathbb{E} [ |\xi^{(\ell)}_{j}|^2] = 1$. Then, $M$ follows the distribution $\mathbb{P}^{(\alpha)}$ where
$$\alpha = (\gamma_1, \gamma_2, \{x_{\ell}, \ell \geq 1\}).$$
\end{proposition}
\begin{remark}
The space $\mathcal{S}$ is endowed with the $\sigma$-algebra generated by the topology of weak convergence of point measures on compact sets of $\mathbb{R}^*$. If the sequence $(x_{\ell})_{\ell \geq 1}$ is infinite, then
the infinite sum defining $m_{j,k}$ is convergent almost surely and in $L^2$, since the partial sums form a martingale which is bounded in $L^2$ (because of the square-summability of $(x_{\ell})_{\ell \geq 1}$).
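Indeed, the summands are independent and centered in $\ell$, and a direct computation (using that $\mathbb{E}[|\xi^{(\ell)}_j|^4]=2$ for a standard complex Gaussian variable) gives, for each fixed $j,k$,
$$\mathbb{E}\Big[\big|x_{\ell}\big(\xi^{(\ell)}_{j} \overline{\xi^{(\ell)}_{k}} - \delta_{j,k}\big)\big|^2\Big]=x_{\ell}^2,
\qquad \text{hence} \qquad
\sup_{r \geq 1} \mathbb{E}\Big[\Big|\sum_{\ell = 1}^r x_{\ell}\big(\xi^{(\ell)}_{j} \overline{\xi^{(\ell)}_{k}} - \delta_{j,k}\big)\Big|^2\Big]=\sum_{\ell \geq 1}x_{\ell}^2<\infty.$$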
\end{remark}
In \cite{OV96} and \cite{BO01}, Borodin, Olshanski and Vershik show that under
$\mathbb{P}^{(\alpha)}$ with $\alpha$ given just above, the extremal eigenvalues of the minors of $M$, divided by the dimension of the minors, tend to the points $(x_{\ell})_{\ell \geq 1}$.
We will show a similar result of convergence for the coordinates of the eigenvectors. For example, if $x_{\ell}$ is the unique $\ell$-th largest point of the sequence $(x_{\ell})_{\ell \geq 1}$ and if $x_{\ell} > 0$, then each component of the eigenvector associated to the $\ell$-th largest eigenvalue, properly renormalized, converges to the corresponding component of the infinite sequence $(\xi^{(\ell)}_k)_{k \geq 1}$.
In order to state the result in full generality, it is not very convenient to directly consider eigenvectors, for the following reasons:
\begin{itemize}
\item The normalization of the eigenvector depends on the arbitrary choice of a phase.
\item If the sequence $(x_{\ell})_{\ell \geq 1}$ contains multiple points, then several eigenvalues tend to the same limit after dividing by the dimension of the minors, and the convergence of the individual eigenvectors is no longer true in general.
\end{itemize}
A good way to avoid the first problem is to replace the eigenvectors by the corresponding matrices of orthogonal projections, which are uniquely determined. The order of magnitude of the entries of these projections is $1/n$ (the trace is equal to $1$ if the eigenvalues are simple): hence, in order to get a possible convergence, it is natural to multiply these entries by $n$. In order to solve the problem of ``fusion of the eigenspaces'' at the limit when $(x_{\ell})_{\ell \geq 1}$ has multiple points, we will consider the spectral projection-valued measures defined just below, instead of the individual
projections on eigenspaces.
\begin{theorem} \label{convergenceeigenvectors}
Let $\mathbb{P}$ be a central probability measure on the space of infinite Hermitian matrices, and let $M = (m_{j,k})_{j,k \geq 1}$ be an infinite matrix following the distribution $\mathbb{P}$.
Then, the random measure
$$\Lambda_n := \sum_{\lambda \in \operatorname{Spec}(M_n)} m(\lambda) \delta_{\lambda/n}$$
where $M_n$ is the top-left $n \times n$ minor of $M$, $m(\lambda)$ the multiplicity of the eigenvalue $\lambda$, and $\delta_{\lambda/n}$ the Dirac measure at $\lambda/n$,
converges almost surely to a limiting atomic measure $\Lambda_{\infty}$, with finitely many atoms on $\mathbb{R} \backslash (-\epsilon, \epsilon)$ for all $\epsilon > 0$, in the following sense: for all intervals $I$ included in $\mathbb{R}_+$ or $\mathbb{R}_-$, whose boundary does not contain zero or a point of the support of $\Lambda_{\infty}$,
$$ \Lambda_{n} (I) \underset{n \rightarrow \infty}{\longrightarrow} \Lambda_{\infty} (I).$$
Moreover, for $a, b \geq 1$ and $n \geq \max(a,b)$,
if we define the random complex measure
$$\Sigma_{n,a,b} := \sum_{\lambda \in \operatorname{Spec}(M_n)} n ( \Pi_{M_n, \lambda})_{a,b} \, \delta_{\lambda/n},$$
where $(\Pi_{M_n, \lambda})_{a,b}$ is the $a,b$ entry of the matrix of the orthogonal projection on the eigenspace of $M_n$ associated to the eigenvalue $\lambda$, then there a.s. exists an atomic non-zero complex measure $\Sigma_{\infty,a,b}$, with finitely many atoms on $\mathbb{R} \backslash (-\epsilon, \epsilon)$ for all $\epsilon > 0$, such that $\Sigma_{n,a,b}$ converges to $\Sigma_{\infty,a,b}$ in the same sense as before.
Moreover, if, with the notation of Proposition \ref{propositionOV96},
\begin{equation} \label{formulaOV}
m_{j,k} = \gamma_1 \delta_{j,k} + \sqrt{\gamma_2} G_{j,k} +
\sum_{\ell \geq 1} x_{\ell} ( \xi^{(\ell)}_{j} \overline{\xi^{(\ell)}_{k}} - \delta_{j,k}),
\end{equation}
then we have
$$\Lambda_{\infty} = \sum_{\ell \geq 1} \delta_{x_{\ell}}$$
and
$$\Sigma_{\infty,a,b} = \sum_{\ell \geq 1} \xi^{(\ell)}_{a} \overline{\xi^{(\ell)}_{b}} \delta_{x_{\ell}}$$
\end{theorem}
\begin{remark}
It is clear, from Proposition \ref{propositionOV96}, that it is enough to show the theorem when \eqref{formulaOV} holds.
The part of the theorem concerning the convergence of $\Lambda_n$ is in fact already proven in \cite{OV96} and \cite{BO01}. For the sake of completeness, we give an alternative proof, together with the convergence of
$\Sigma_{n,a,b}$.
Note that the measure $\Sigma_{\infty,a,b}$ is not well-defined on intervals containing zero if the sequence $(x_{\ell})_{\ell \geq 1}$ is infinite.
\end{remark}
We quite easily deduce the following result from Theorem \ref{convergenceeigenvectors}, which gives the convergence of renormalized eigenvalues and eigenvectors:
\begin{proposition}
Let us assume that \eqref{formulaOV} occurs.
For all $r \geq 1$, the $r$-th largest (resp. smallest) eigenvalue of $M_n$ (counted with multiplicity), divided by $n$, converges a.s. to the $r$-th largest (resp. smallest) point of $\{ x_{\ell}, \ell \geq 1\}$ (counted with multiplicity), if this point is positive (resp. negative), and to zero if this point is negative (resp. positive) or does not exist.
Let us now assume that $(x_{\ell})_{\ell \geq 1}$ has a single $r$-th largest (resp. smallest) point $x_{\ell(r)}$, and that this point is positive (resp. negative). Let $V_n$ be an eigenvector corresponding to the $r$-th largest (resp. smallest) eigenvalue of $M_n$, normalized in such a way that $||V_n|| = \sqrt{n}$ and the first non-zero coordinate of $V_n$ is real and positive. Then, for all $a \geq 1$, the $a$-th coordinate of $V_n$ converges a.s. to
$ \xi^{(\ell(r))}_{a} ( |\xi^{(\ell(r))}_{1}| / \xi^{(\ell(r))}_{1})$ when $n$ goes to infinity.
\end{proposition}
\begin{proof}
Let us assume that the $r$-th largest point of $(x_{\ell})_{\ell \geq 1}$ is $y > 0$.
If $z> y > 0$ is sufficiently close to $y$, $z$ is not in the sequence $(x_{\ell})_{\ell \geq 1}$ and
$\Lambda_{\infty}( [z, \infty)) \leq r -1 $. Hence, by the convergence of $\Lambda_n$,
$\Lambda_n ([z, \infty)) < r$ for $n$ large enough and the $r$-th largest eigenvalue of $M_n$ is smaller than
$n z$. Similarly, if $z \in (0,y)$ is sufficiently close to $y$, $z$ is not in
$(x_{\ell})_{\ell \geq 1}$ and
$\Lambda_{\infty}( [z, \infty)) \geq r $, which implies $\Lambda_n ([z, \infty)) > r-1$ for $n$ large enough: the $r$-th largest eigenvalue
is larger than or equal to $nz$.
If the $r$-th largest point is negative or does not exist, there is no accumulation of points at the right of $0$, so for $\epsilon > 0$ small enough, $\epsilon$ is not in $(x_{\ell})_{\ell \geq 1}$, and $\Lambda_{\infty} ([\epsilon, \infty)) \leq r-1$,
$\Lambda_n ([\epsilon, \infty)) < r$ for large $n$, which implies that the $r$-th largest eigenvalue is smaller than $\epsilon n$. On the other hand, there exists $\epsilon > 0$ arbitrarily small such that $-\epsilon$ is not in
$(x_{\ell})_{\ell \geq 1}$. The quantity $s := \Lambda_{\infty} ((-\infty, -\epsilon]) $ is finite,
and for $n$ large, $\Lambda_{n} ((-\infty, - \epsilon]) < s+1$, which shows that the $(s+1)$-th smallest eigenvalue
is larger than $- \epsilon n$. A fortiori, it is also the case of the $r$-th largest eigenvalue if $n$ is large.
We have now proven the convergence of the largest eigenvalues; the proof for the smallest eigenvalues is entirely similar.
Let us now consider the eigenvectors. Let us assume that $x_{\ell(r)} > 0$ is the single $r$-th largest point of
$(x_{\ell})_{\ell \geq 1}$. From the convergence of the eigenvalues, we know that for $0 < z< x_{\ell(r)} < t$, $z$ and $t$ being sufficiently close to $x_{\ell(r)}$, and
for $n$ large enough depending on $z$ and $t$, the $r$-th largest eigenvalue of $M_n$ is simple and in $(nz,nt)$, and it is the only eigenvalue in this interval (because the $(r-1)$-th eigenvalue is larger than $nt$ if $r \geq 2$, and the $(r+1)$-th is smaller than $nz$). We deduce that if $V_n$ is an eigenvector associated to the $r$-th largest eigenvalue of $M_n$, normalized as in the proposition,
$$\Sigma_{n,a,b} ((z,t)) = (V_n)_a \overline{(V_n)}_b \underset{n \rightarrow \infty}{\longrightarrow}
\Sigma_{\infty,a,b} ((z,t)) = \xi^{(\ell(r))}_{a}
\overline{ \xi^{(\ell(r))}_{b} } $$
Taking $ a = b = 1$, we get
$$|(V_n)_1|^2 \underset{n \rightarrow \infty}{\longrightarrow} |\xi^{(\ell(r))}_{1}|^2,$$
and then, from the normalization chosen for the phase,
$$(V_n)_1 \underset{n \rightarrow \infty}{\longrightarrow} |\xi^{(\ell(r))}_{1}|.$$
Taking $b = 1$ and general $a$, we have
$$ (V_n)_a \overline{(V_n)}_1 \underset{n \rightarrow \infty}{\longrightarrow}
\xi^{(\ell(r))}_{a} \overline{\xi^{(\ell(r))}_{1}},$$
and then, dividing by the convergence above (the limit being a.s. different from zero),
$$ (V_n)_a \underset{n \rightarrow \infty}{\longrightarrow}
\xi^{(\ell(r))}_{a} \overline{\xi^{(\ell(r))}_{1}} / |\xi^{(\ell(r))}_{1}|
= \xi^{(\ell(r))}_{a} ( |\xi^{(\ell(r))}_{1}| / \xi^{(\ell(r))}_{1}).$$
\end{proof}
We will prove Theorem \ref{convergenceeigenvectors} in two steps: we first consider the case where $\gamma_2 = 0$ and the sequence $(x_{\ell})_{\ell \geq 1}$ is finite, and then we deduce the general case.
\section{Proof of Theorem \ref{convergenceeigenvectors} for finite sequences}
If $\gamma_2 = 0$ and $(x_\ell)_{\ell \geq 1}$ has a finite number $p$ of elements, then we can write
$$m_{j,k} = \gamma \delta_{j,k} + \sum_{\ell = 1}^{p} x_{\ell} \xi_j^{(\ell)} \overline{\xi_k^{(\ell)}}.$$
where
$$\gamma = \gamma_1 - \sum_{\ell = 1}^{p} x_{\ell}.$$
Since the parameter $\gamma$ does not change the eigenvectors of the minors $M_n$ and shifts the eigenvalues by
$\gamma$, it translates the measures $\Lambda_n$ and $\Sigma_{n,a,b}$ by $\gamma/n$, and therefore does not change their limiting measures.
We can then assume
$$m_{j,k} = \sum_{\ell = 1}^{p} x_{\ell} \xi_j^{(\ell)} \overline{\xi_k^{(\ell)}}.$$
For $n > p$, the vectors $\xi_{[n]}^{(\ell)} := (\xi_j^{(\ell)} )_{1 \leq j \leq n} $, $ 1 \leq \ell \leq p$ are almost surely linearly independent.
For any vector $V$ in $\mathbb{C}^n$,
$$(M_n V)_{j} = \sum_{\ell = 1}^{p} x_{\ell} \xi_j^{(\ell)} \sum_{k=1}^n \overline{\xi_k^{(\ell)}} V_k,$$
i.e.
$$M_n V = \sum_{\ell = 1}^{p} x_{\ell} \langle \xi_{[n]}^{(\ell)}, V \rangle \, \xi_{[n]}^{(\ell)}.$$
We deduce that $E_n := \operatorname{Span} ( \xi_{[n]}^{(\ell)}, 1 \leq \ell \leq p )$, and its orthogonal, are invariant spaces for $M_n$, and that the orthogonal of $E_n$ is in the kernel of $M_n$.
Hence, we have
$$\Lambda_n = (n-p) \delta_0 + \sum_{\lambda \in \operatorname{Spec} (P_n)} m(\lambda) \delta_{\lambda/n},$$
where $P_n$ is the restriction of $M_n$ to $E_n$,
and
$$\Sigma_{n,a,b} = n (\Pi_{(E_n)^{\perp}} )_{a,b}\delta_0 +
\sum_{\lambda \in \operatorname{Spec} (P_n)} n (\Pi_{P_n, \lambda})_{a,b} \delta_{\lambda/n},$$
where $\Pi_{(E_n)^{\perp}}$ is the orthogonal projection on $(E_n)^{\perp}$ and
$\Pi_{P_n, \lambda}$ is the orthogonal projection on the eigenspace of $P_n$ corresponding to the eigenvalue $\lambda$.
Since the convergence of measures defined in Theorem \ref{convergenceeigenvectors} does not involve Dirac masses at zero, it is enough to show that almost surely,
\begin{equation}\sum_{\lambda \in \operatorname{Spec} (P_n) \cap nI} m(\lambda)
\underset{n \rightarrow \infty}{\longrightarrow} \sum_{\ell = 1}^p \mathds{1}_{x_{\ell} \in I}, \label{convergencesp1}
\end{equation}
and
\begin{equation}\sum_{\lambda \in \operatorname{Spec} (P_n) \cap nI} n (\Pi_{P_n, \lambda})_{a,b} \,
\underset{n \rightarrow \infty}{\longrightarrow} \sum_{\ell = 1}^p \xi^{(\ell)}_{a} \overline{\xi^{(\ell)}_{b}} \mathds{1}_{x_{\ell} \in I}, \label{convergencesp2}
\end{equation}
for all intervals $I$ whose boundary does not contain a point in $(x_{\ell})_{\ell \geq 1}$.
In the first convergence, if we divide by $p$, we simply get a classical convergence of probability measures.
Taking the Fourier transform, it is then enough to show
\begin{equation}
\operatorname{Tr} (e^{i \mu P_n/n} ) \underset{n \rightarrow \infty}{\longrightarrow}
\sum_{\ell = 1}^p e^{i \mu x_{\ell}}, \label{convergencesp123}
\end{equation}
for all $\mu \in \mathbb{R}$.
Now, in the basis $( \xi_{[n]}^{(\ell)})_{1 \leq \ell \leq p}$ of $E_n$, the operator $P_n/n$ has matrix
$$\left( \frac{x_{\ell}}{n} \langle \xi_{[n]}^{(\ell)} , \xi_{[n]}^{(m)} \rangle \right)_{1 \leq \ell, m \leq p} $$
By the law of large numbers, this matrix a.s. tends to $\operatorname{Diag} (x_1, \dots, x_p)$.
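More precisely, by the strong law of large numbers applied to the i.i.d. variables $(\overline{\xi_j^{(\ell)}} \xi_j^{(m)})_{j \geq 1}$,
$$\frac{1}{n} \langle \xi_{[n]}^{(\ell)}, \xi_{[n]}^{(m)} \rangle
= \frac{1}{n} \sum_{j=1}^n \overline{\xi_j^{(\ell)}} \xi_j^{(m)}
\underset{n \rightarrow \infty}{\longrightarrow} \mathbb{E}\big[\overline{\xi_1^{(\ell)}} \xi_1^{(m)}\big] = \delta_{\ell,m} \quad \text{a.s.},$$
so that each entry of the matrix above converges a.s. to $x_{\ell} \delta_{\ell,m}$.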
Applying the continuous map $M \mapsto \operatorname{Tr} (e^{i \mu M})$ from the $p \times p$ matrices to
$\mathbb{C}$, we deduce \eqref{convergencesp123}.
Let $A$ be strictly larger than the maximum of $(|x_{\ell}|)_{1 \leq \ell \leq p}$.
From \eqref{convergencesp1}, all the eigenvalues of $P_n/n$ are almost surely in $[-A,A]$ for $n$ large enough. Let $I$ be an interval whose boundary does not contain a point $x_{\ell}$. In order to prove \eqref{convergencesp2}, it is enough
to prove it for $I \cap [-A,A]$ instead of $I$, and then one can assume that $I$ is bounded: let $y \leq z$ be the endpoints of $I$. For $\epsilon > 0$ small enough, $[y - \epsilon, y + \epsilon]$ and $[z-\epsilon, z + \epsilon]$ do not contain any points of $(x_{\ell})_{\ell \geq 1}$, and then (by \eqref{convergencesp1}) no eigenvalue of $P_n/n$ for $n$ large enough.
We deduce that if $f$ is a real-valued and continuous function from $\mathbb{R}$ to $\mathbb{R}$, equal to $1$ on $[y,z]$ and equal to zero on $[-A,A] \backslash [y - \epsilon, z + \epsilon]$,
it is enough to check
\begin{equation} \sum_{\lambda \in \operatorname{Spec} (P_n) } n (\Pi_{P_n, \lambda})_{a,b} f(\lambda/n) \,
\underset{n \rightarrow \infty}{\longrightarrow} \sum_{\ell = 1}^p \xi^{(\ell)}_{a} \overline{\xi^{(\ell)}_{b}} f(x_{\ell}),
\label{convergenceff}
\end{equation}
since $f$ coincides with the indicator of $I$ at all points $x_{\ell}$ and all eigenvalues of $P_n/n$ for $n$ large enough.
In fact, we will prove \eqref{convergenceff} for all continuous functions $f$. Let us first assume that $f$ is a polynomial. On the subspace $E_n$ of $\mathbb{C}^n$,
$$\sum_{\lambda \in \operatorname{Spec} (P_n) } n \Pi_{P_n, \lambda}f(\lambda/n)
= n f(P_n /n).$$
On the orthogonal of $E_n$, the same sum is equal to zero, since $P_n$ is only defined on $E_n$. Hence,
$$ \sum_{\lambda \in \operatorname{Spec} (P_n) } n (\Pi_{P_n, \lambda})_{a,b} f(\lambda/n)
= n [f(P_n/n) \Pi_{E_n}]_{a,b},$$
where $\Pi_{E_n}$ is the orthogonal projection on $E_n$.
We have seen that $( \xi_{[n]}^{(\ell)})_{1 \leq \ell \leq p}$ is a basis of $E_n$: let $(v_{p+1}, v_{p+2}, \dots, v_n)$ be
an orthonormal basis of $E_n^{\perp}$. These bases taken together give a basis $\mathcal{Q}$ of $\mathbb{C}^n$. We have previously computed $P_n/n$ in the basis $( \xi_{[n]}^{(\ell)})_{1 \leq \ell \leq p}$. From this computation, we
deduce that the matrix of $ n f(P_n/n) \Pi_{E_n}$ in the basis $\mathcal{Q}$ is
$$R := \operatorname{Diag} \left( n f \left( \left( \frac{x_{\ell}}{n} \langle \xi_{[n]}^{(\ell)} , \xi_{[n]}^{(m)} \rangle \right)_{1 \leq \ell, m \leq p} \right), ( 0)_{p+1 \leq \ell, m \leq n} \right)$$
Hence, if $Q$ is the matrix whose columns form the basis $\mathcal{Q}$, we get
$$ \sum_{\lambda \in \operatorname{Spec} (P_n) } n (\Pi_{P_n, \lambda})_{a,b} f(\lambda/n)
= (QRQ^{-1})_{a,b}. $$
We know the coefficients of $Q$. In order to estimate the coefficients of $Q^{-1}$, we consider the matrix $S$ obtained by dividing the $p$ first columns $( \xi_{[n]}^{(\ell)})_{1 \leq \ell \leq p}$ of $Q$ by $\sqrt{n}$.
By the law of large numbers, for $1 \leq \ell, m \leq p$, the inner product of the columns $\ell$ and $m$ of $S$ tends to $\delta_{\ell,m}$ when $n$ goes to infinity. Moreover, the columns of index larger than $p$ are orthogonal to the $p$ first columns. Hence, if we apply Gram-Schmidt orthonormalization to the columns of $S$, we multiply
$S$ at the right by a matrix of the form $\operatorname{Diag}(T, I_{n-p})$, where $T$ is a $p \times p$ matrix tending to identity when $n$ goes to infinity. We have:
\begin{align*}
Q^{-1} & = (S \operatorname{Diag}( \sqrt{n} I_p, I_{n-p}) )^{-1} \\ & =
\operatorname{Diag}( n^{-1/2} I_p, I_{n-p}) \operatorname{Diag}(T, I_{n-p})
[S\operatorname{Diag}(T, I_{n-p})]^{-1}
\end{align*}
Since the product $S\operatorname{Diag}(T, I_{n-p})$ is a unitary matrix,
we deduce
$$Q^{-1} = \operatorname{Diag}(n^{-1/2} T T^*, I_{n-p}) S^*
= \operatorname{Diag}(n^{-1} T T^*, I_{n-p}) Q^*,
$$
and
$$QRQ^{-1} = Q \operatorname{Diag} \left( V, ( 0)_{p+1 \leq \ell, m \leq n} \right)
Q^*$$
where
$$V = f \left( \left( \frac{x_{\ell}}{n} \langle \xi_{[n]}^{(\ell)} , \xi_{[n]}^{(m)} \rangle \right)_{1 \leq \ell, m \leq p} \right) T T^*.$$
Now, $V$ tends to $\operatorname{Diag}(f(x_1), \dots, f(x_p))$ when $n$ goes to infinity.
We deduce that the entry $a,b$ of $QRQ^{-1}$ tends to the right-hand side of \eqref{convergenceff}, which shows this convergence when $f$ is a polynomial.
In particular, taking $f = 1$ and $a = b$, we get
$$ \sum_{\lambda \in \operatorname{Spec} (P_n) } n (\Pi_{P_n, \lambda})_{a,a} \,
\underset{n \rightarrow \infty}{\longrightarrow} \sum_{\ell = 1}^p | \xi^{(\ell)}_{a} |^2,$$
which shows in particular that the left-hand side of this convergence is bounded independently of $n$.
Moreover, since $\Pi_{P_n, \lambda}$ is a positive operator, we have
$$|(\Pi_{P_n, \lambda})_{a,b}| \leq [(\Pi_{P_n, \lambda})_{a,a}(\Pi_{P_n, \lambda})_{b,b}]^{1/2} \leq
\frac{1}{2} \left( (\Pi_{P_n, \lambda})_{a,a} + (\Pi_{P_n, \lambda})_{b,b} \right)$$
Hence, for all $a, b \geq 1$,
$$ \sum_{\lambda \in \operatorname{Spec} (P_n) } n |(\Pi_{P_n, \lambda})_{a,b} | \leq M_{a,b}$$
independently of $n$, for some random $M_{a,b} > 0$. If we choose
$$M_{a,b} > \sum_{\ell = 1}^p | \xi^{(\ell)}_{a} | | \xi^{(\ell)}_{b} |,$$
we deduce that in \eqref{convergenceff}, for $n$ large enough in order to have all the spectrum of $P_n/n$ in $[-A,A]$,
changing a function $f$ by a function $g$ changes the two sides by at most $M_{a,b} \sup_{[-A,A]} |f - g|$.
Since any continuous function can be uniformly approached by polynomials on compact sets, we deduce that \eqref{convergenceff} extends to all continuous functions.
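Explicitly, for $n$ large enough so that the spectrum of $P_n/n$ is contained in $[-A,A]$, and for any continuous functions $f, g$ on $[-A,A]$,
$$\Big| \sum_{\lambda \in \operatorname{Spec} (P_n)} n (\Pi_{P_n, \lambda})_{a,b} (f-g)(\lambda/n) \Big| \leq M_{a,b} \sup_{[-A,A]} |f-g|,
\qquad
\Big| \sum_{\ell = 1}^p \xi^{(\ell)}_{a} \overline{\xi^{(\ell)}_{b}} (f-g)(x_{\ell}) \Big| \leq M_{a,b} \sup_{[-A,A]} |f-g|,$$
which is the uniform approximation estimate used in this argument.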
\section{Preliminary bound on the operator norm}
In this section, we will prove some bound on the limiting operator norm of a matrix satisfying \eqref{formulaOV}.
This bound is a consequence of the results in \cite{OV96} and \cite{BO01}; however, we give an alternative proof here for the sake of completeness.
\begin{proposition}
Let $M$ be an infinite matrix satisfying \eqref{formulaOV}. Then, almost surely,
$$\underset{n \rightarrow \infty}{\lim \sup} \frac{||M_n||}{n} \leq \left(\sum_{\ell \geq 1} x_{\ell}^2 \right)^{1/2},$$
where $||M_n||$ is the operator norm of the $n \times n$ top-left minor of $M$.
\end{proposition}
\begin{remark}
From Theorem \ref{convergenceeigenvectors}, the upper limit is in fact a limit and is equal to the maximum of $(|x_{\ell}|)_{\ell \geq 1}$.
\end{remark}
\begin{proof}
Shifting by a fixed multiple of identity does not change the upper limit. The Gaussian part is also irrelevant since
the operator norm of the Gaussian Unitary Ensemble is a.s. negligible with respect to $n$ (it is classical that it is a.s. $\mathcal{O}(\sqrt{n})$, one can prove that it is a.s. $\mathcal{O}(n^{5/6 + \epsilon})$, just by expanding and bounding
$\mathbb{E} [\operatorname{Tr}(M_n^6)]$, and applying Borel-Cantelli lemma). We can then assume
$$m_{j,k} = \sum_{\ell \geq 1} x_{\ell} (\xi_j^{(\ell)} \overline{\xi_k^{(\ell)}} - \delta_{j,k} ). $$
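As for the bound on the Gaussian part mentioned above, here is a sketch: writing $G_n := (G_{j,k})_{1 \leq j,k \leq n}$ and assuming the standard moment estimate $\mathbb{E}[\operatorname{Tr}(G_n^6)] = \mathcal{O}(n^4)$ for the GUE normalized as in Proposition \ref{propositionOV96}, we get, since $\|G_n\|^6 \leq \operatorname{Tr}(G_n^6)$ for the Hermitian matrix $G_n$,
$$\mathbb{P}\big( \|G_n\| \geq n^{5/6+\epsilon} \big) \leq \frac{\mathbb{E}[\operatorname{Tr}(G_n^6)]}{n^{5 + 6\epsilon}} = \mathcal{O}\big(n^{-1-6\epsilon}\big),$$
which is summable in $n$, so that the Borel-Cantelli lemma gives $\|G_n\| = \mathcal{O}(n^{5/6+\epsilon})$ almost surely.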
We have
\begin{align*}
||M_n||^2 \leq \operatorname{Tr} (M_n^2)
& = \sum_{1 \leq j,k \leq n} |m_{j,k}|^2
\\ & = \sum_{p, q \geq 1} x_p x_q \sum_{1 \leq j, k \leq n} (\overline{\xi_j^{(p)}} \xi_k^{(p)} - \delta_{j,k} )
(\xi_j^{(q)} \overline{\xi_k^{(q)}} - \delta_{j,k} ).
\end{align*}
Here, the last sum can have infinitely many terms. In this case, it is rigorously defined as the a.s. limit of the sum on $1 \leq p, q \leq r$,
when $r$ goes to infinity.
Hence,
$$||M_n||^2 \leq \sum_{p, q \geq 1} x_p x_q ( | \langle \xi_{[n]}^{(p)}, \xi_{[n]}^{(q)} \rangle |^2 -
||\xi_{[n]}^{(p)}||^2 - ||\xi_{[n]}^{(q)}||^2 + n ),$$
$$||M_n||^2 \leq n^2 \sum_{p \geq 1} x_p^2
+ \sum_{p, q \geq 1} x_p x_q ( | \langle \xi_{[n]}^{(p)}, \xi_{[n]}^{(q)} \rangle |^2 - n^2 \delta_{p,q} -
||\xi_{[n]}^{(p)}||^2 - ||\xi_{[n]}^{(q)}||^2 + n ).$$
It is then sufficient to show that almost surely
$$ \sum_{p, q \geq 1} x_p x_q ( | \langle \xi_{[n]}^{(p)}, \xi_{[n]}^{(q)} \rangle |^2 - n^2 \delta_{p,q} -
||\xi_{[n]}^{(p)}||^2 - ||\xi_{[n]}^{(q)}||^2 + n ) = o(n^2),$$
which is guaranteed, by Borel-Cantelli lemma, by the estimate
$$\mathbb{E} \left[ \left( \sum_{p, q \geq 1} x_p x_q ( | \langle \xi_{[n]}^{(p)}, \xi_{[n]}^{(q)} \rangle |^2 - n^2 \delta_{p,q} -
||\xi_{[n]}^{(p)}||^2 - ||\xi_{[n]}^{(q)}||^2 + n ) \right)^4 \right] = \mathcal{O} (n^6),$$
and then (using Fatou's lemma in the case of an infinite sum), by
$$\sum_{p_1, q_1, p_2, q_2, p_3, q_3, p_4, q_4 \geq 1}
\prod_{s =1}^4 (x_{p_s} x_{q_s} ) \dots
$$ $$ \dots \times
\mathbb{E} \left[ \prod_{s = 1}^4 ( | \langle \xi_{[n]}^{(p_s)}, \xi_{[n]}^{(q_s)} \rangle |^2 - n^2 \delta_{p_s,q_s} -
||\xi_{[n]}^{(p_s)}||^2 - ||\xi_{[n]}^{(q_s)}||^2 + n ) \right]= \mathcal{O}(n^6)$$
where, in the case of infinite sums, we take a lower limit of the sums on $p_1, q_1, \dots, p_4, q_4 \leq r$ when $r$ goes to infinity.
Let us now estimate the expectations in the last equation. If one of the eight indices $p_s, q_s$ appears
exactly once (say $p_1$), we can first condition on all the seven other $\xi^{(p_s)}_{[n]}$, $\xi^{(q_s)}_{[n]}$. In the conditional expectation, three of the factors are fixed. The conditional expectation of the last factor is
$$\mathbb{E} \left[ | \langle \xi_{[n]}^{(p_1)}, \xi_{[n]}^{(q_1)} \rangle |^2 - n^2 \delta_{p_1,q_1} -
||\xi_{[n]}^{(p_1)}||^2 - ||\xi_{[n]}^{(q_1)}||^2 + n \; \big| \xi_{[n]}^{(q_1)} \right].$$
Since $p_1 \neq q_1$ by assumption, we have
\begin{align*}
\mathbb{E} \left[ | \langle \xi_{[n]}^{(p_1)} , \xi_{[n]}^{(q_1)} \rangle |^2 \; \big| \xi_{[n]}^{(q_1)} \right]
& = \sum_{1 \leq j, k \leq n} \xi_{j}^{(q_1)} \overline{\xi_{k}^{(q_1)} }
\mathbb{E} [\overline{\xi_{j}^{(p_1)} } \xi_k^{(p_1)} ] \\ & =
\sum_{1 \leq j, k \leq n} \xi_{j}^{(q_1)} \overline{\xi_{k}^{(q_1)} } \delta_{j,k}
= ||\xi_{[n]}^{(q_1)}||^2,
\end{align*}
$$n^2 \delta_{p_1,q_1} = 0,$$
$$\mathbb{E} \left[ ||\xi_{[n]}^{(p_1)}||^2 \; \big| \xi_{[n]}^{(q_1)} \right] = n,$$
$$\mathbb{E} \left[ ||\xi_{[n]}^{(q_1)}||^2 \; \big| \xi_{[n]}^{(q_1)} \right] = ||\xi_{[n]}^{(q_1)}||^2,$$
and then
$$\mathbb{E} \left[ | \langle \xi_{[n]}^{(p_1)}, \xi_{[n]}^{(q_1)} \rangle |^2 - n^2 \delta_{p_1,q_1} -
||\xi_{[n]}^{(p_1)}||^2 - ||\xi_{[n]}^{(q_1)}||^2 + n \; \big| \xi_{[n]}^{(q_1)} \right] = 0.$$
We deduce that
$$\mathbb{E} \left[ \prod_{s = 1}^4 ( | \langle \xi_{[n]}^{(p_s)}, \xi_{[n]}^{(q_s)} \rangle |^2 - n^2 \delta_{p_s,q_s} -
||\xi_{[n]}^{(p_s)}||^2 - ||\xi_{[n]}^{(q_s)}||^2 + n ) \right] = 0$$
as soon as one of the eight indices $p_s, q_s$ appears only once.
Using H\"older inequality, it is then enough to show:
$$\sum'_{p_1, q_1, p_2, q_2, p_3, q_3, p_4, q_4 \geq 1}
\prod_{s =1}^4 |x_{p_s} x_{q_s} |\dots
$$ $$ \dots \times
\prod_{s = 1}^4 \mathbb{E} \left[ \left( | \langle \xi_{[n]}^{(p_s)}, \xi_{[n]}^{(q_s)} \rangle |^2 - n^2 \delta_{p_s,q_s} -||\xi_{[n]}^{(p_s)}||^2 - ||\xi_{[n]}^{(q_s)}||^2 + n \right)^4 \right] ^{1/4}= \mathcal{O}(n^6),$$
where the prime means that we restrict the sum to the terms where each of the indices appears at least twice.
It is then enough to show that
\begin{equation}
\sum'_{p_1, q_1, p_2, q_2, p_3, q_3, p_4, q_4 \geq 1}
\prod_{s =1}^4 |x_{p_s} x_{q_s} | < \infty \label{sumx}
\end{equation}
and
\begin{equation}
\mathbb{E} \left[ \left( | \langle \xi_{[n]}^{(p)}, \xi_{[n]}^{(q)} \rangle |^2 - n^2 \delta_{p,q} -||\xi_{[n]}^{(p)}||^2 - ||\xi_{[n]}^{(q)}||^2 + n \right)^4 \right] = \mathcal{O}(n^6). \label{momentxi}
\end{equation}
For the first estimate \eqref{sumx}, we observe that in order to choose $p_1, q_1, p_2, q_2, p_3, q_3, p_4, q_4$, we
have to choose:
\begin{itemize}
\item The different indices which appear.
\item The number of times each index appears.
\item The exact positions where they appear: given the first two items, the number of possibilities is uniformly bounded.
\end{itemize}
Hence,
\begin{align*}
\sum'_{p_1, q_1, p_2, q_2, p_3, q_3, p_4, q_4 \geq 1} &
\prod_{s =1}^4 |x_{p_s} x_{q_s} | \ll \sum_{p \geq 1} x_p^8
+ \sum_{p \neq q \geq 1} ( x_p^6 x_q^2 + |x_p|^5 |x_q|^3 + x_p^4 x_q^4)
\\ &
+ \sum_{p \neq q \neq r \geq 1} (x_p^4 x_q^2 x_r^2 + |x_p|^3 |x_q|^3 x_r^2)
+ \sum_{p \neq q \neq r \neq s \geq 1} x_p^2 x_q^2 x_r^2 x_s^2,
\end{align*}
which implies the crude bound
$$\sum'_{p_1, q_1, p_2, q_2, p_3, q_3, p_4, q_4 \geq 1}
\prod_{s =1}^4 |x_{p_s} x_{q_s} | \ll \prod_{j= 2}^8 (1 + \sum_{p \geq 1} |x_p|^j). $$
Now, for $j \geq 2$,
$$\sum_{p \geq 1} |x_p|^j \leq \max_{p \geq 1} |x_p|^{j-2} \sum_{p \geq 1} |x_p|^2
\leq \left( \sum_{p \geq 1} |x_p|^{2} \right)^{1 + (j-2)/2} < \infty,$$
which proves \eqref{sumx}.
Let us now prove \eqref{momentxi}. We have, using H\"older inequality:
$$\mathbb{E} [ ||\xi_{[n]}^{(p)} ||^8 ]
= \sum_{1 \leq a,b,c,d \leq n} \mathbb{E} [ |\xi_{a}^{(p)}|^2 |\xi_{b}^{(p)}|^2 |\xi_{c}^{(p)}|^2
|\xi_{d}^{(p)}|^2 ] \leq \sum_{1 \leq a,b,c,d \leq n} \mathbb{E} [ |\xi_{1}^{(p)}|^8] = 24 n^4.$$
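Here we also used the fact that $|\xi_1^{(p)}|^2$ is exponentially distributed with mean $1$, so that
$$\mathbb{E}\big[ |\xi_{1}^{(p)}|^{2k} \big] = \int_0^{\infty} t^k e^{-t} \, dt = k!, \qquad \text{and in particular} \quad \mathbb{E}\big[ |\xi_{1}^{(p)}|^{8} \big] = 4! = 24.$$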
Hence, it is enough to show
$$\mathbb{E} \left[ \left( | \langle \xi_{[n]}^{(p)}, \xi_{[n]}^{(q)} \rangle |^2 - n^2 \delta_{p,q} \right)^4 \right] = \mathcal{O}(n^6). $$
If $p \neq q$, the left-hand side is, by using the fact that $\xi_{[n]}^{(p)}, \xi_{[n]}^{(q)}$ are independent with the same distribution as $\xi_{[n]}^{(1)}$,
$$\mathbb{E} [ | \langle \xi_{[n]}^{(p)}, \xi_{[n]}^{(q)} \rangle |^8]
= \sum_{1 \leq a_1, \dots, a_8 \leq n} \left| \mathbb{E} \left[ \prod_{s=1}^4 \overline{\xi_{a_s}^{(1)}}
\prod_{s=5}^8 \xi_{a_s}^{(1)} \right] \right|^2.$$
If one of the eight indices appears only once, the last expectation is zero by rotational invariance of the law of $\xi_{a_s}^{(1)}$. Hence, for all non-zero terms, there are at most four different indices among $a_1, \dots, a_8$. We deduce
$$\mathbb{E} [ | \langle \xi_{[n]}^{(p)}, \xi_{[n]}^{(q)} \rangle |^8] = \mathcal{O}(n^4),$$
which is more than we need.
For $p = q$, we have to show
$$\mathbb{E} [ ||\xi_{[n]}^{(p)}||^{16}] - 4 n^2 \mathbb{E} [ ||\xi_{[n]}^{(p)}||^{12}] + 6 n^4
\mathbb{E} [ ||\xi_{[n]}^{(p)}||^{8}] - 4 n^6 \mathbb{E} [ ||\xi_{[n]}^{(p)}||^{4}] + n^8 = \mathcal{O}(n^6).$$
For all $2 \leq r \leq 8$,
$$\mathbb{E} [ ||\xi_{[n]}^{(p)}||^{2r}]
= \sum_{1 \leq j_1, \dots, j_r \leq n} \mathbb{E} \left[ \prod_{s = 1}^r |\xi_{j_s}^{(p)}|^2 \right].$$
The sum of the terms where all the $j_s$ are distinct is equal to
$$n (n-1) \dots (n-r+1) = n^r - \frac{r(r-1)}{2} n^{r-1} + \mathcal{O} (n^{r-2}).$$
The sum of the terms where two of the $j_s$ are equal and the others are all distinct is
$$ \frac{r(r-1)}{2} n (n-1) \dots (n-r+2) \mathbb{E} [ |\xi_{1}^{(p)}|^2]^{r-2} \mathbb{E} [ |\xi_{1}^{(p)}|^4]
= r(r-1) n^{r-1} + \mathcal{O}(n^{r-2}).$$
The sum of all the terms with more coincidences is $\mathcal{O}(n^{r-2})$.
Hence,
$$\mathbb{E} [ ||\xi_{[n]}^{(p)}||^{2r}] = n^r + \frac{r(r-1)}{2} n^{r-1} + \mathcal{O} (n^{r-2}),$$
and
$$\sum_{s =0}^4 (-1)^s {4 \choose s} n^{8 - 2s} \mathbb{E} [ ||\xi_{[n]}^{(p)}||^{4s}]
= \sum_{s =0}^4 (-1)^s {4 \choose s} (n^8 + s (2s-1) n^7) + \mathcal{O}(n^6),$$
which is $\mathcal{O}(n^6)$, since
$$\sum_{s =0}^4 (-1)^s {4 \choose s} = 1 - 4 + 6 - 4 + 1 = 0,$$
and
$$\sum_{s =0}^4 (-1)^s {4 \choose s} s(2s-1) = 1 \cdot 0 - 4 \cdot 1 + 6 \cdot 6 - 4 \cdot 15 + 1 \cdot 28
= 0 -4 + 36 - 60 + 28 = 0.$$
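As a sanity check of the expansion of $\mathbb{E} [ ||\xi_{[n]}^{(p)}||^{2r}]$ used above (again assuming standard complex Gaussian coordinates, so that $\mathbb{E}[|\xi_1^{(p)}|^4]=2$), take $r=2$:
$$\mathbb{E} [ ||\xi_{[n]}^{(p)}||^{4}] = \sum_{1 \leq j_1, j_2 \leq n} \mathbb{E} [ |\xi_{j_1}^{(p)}|^2 |\xi_{j_2}^{(p)}|^2 ] = n(n-1)\cdot 1 + n \cdot 2 = n^2 + n,$$
which agrees with $n^r + \frac{r(r-1)}{2} n^{r-1} = n^2 + n$.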
\end{proof}
\section{Proof of Theorem \ref{convergenceeigenvectors} in the general case}
For the convergence in distribution given by the theorem, it is enough to test intervals of the form
$(-\infty, c]$, $(-\infty, c)$, for $c < 0$ not in the sequence $(x_{\ell})_{\ell \geq 1}$, and the intervals of the form $[c, \infty)$, $(c, \infty)$ for $c > 0$ not in the sequence $(x_{\ell})_{\ell \geq 1}$. By symmetry, we only consider positive intervals; we fix $c$ and we denote by $I$ one of the intervals $[c, \infty)$ and $(c, \infty)$.
We define $\delta > 0$ as the minimum distance between $c$ and a point of $\{0,(x_{\ell})_{\ell \geq 1}\}$.
For $\epsilon \in (0,c)$, we can decompose the infinite matrix $M$ as $A + B$, where
$A = (a_{j,k})_{j, k \geq 1}$, $B = (b_{j,k})_{j,k \geq 1}$,
$$a_{j,k} = \sum_{|x_{\ell}| > \epsilon} ( x_{\ell} \xi_{j}^{(\ell)} \overline{ \xi_{k}^{(\ell)} } - \delta_{j,k}),$$
$$b_{j,k} = \gamma_1 \delta_{j,k} + \sqrt{\gamma_2} G_{j,k} + \sum_{|x_{\ell}| \leq \epsilon} ( x_{\ell} \xi_{j}^{(\ell)} \overline{ \xi_{k}^{(\ell)} } - \delta_{j,k}).$$
If $A_n$ and $B_n$ are the top-left $n \times n$ minors of $A$ and $B$, the preliminary bound shows that
almost surely, for $n$ large enough,
$$\frac{||B_n||}{n} \leq \left[ \left(\sum_{|x_{\ell}| \leq \epsilon} x_{\ell}^2 \right)^{1/2} + \epsilon \right],$$
which, by dominated convergence, tends to zero with $\epsilon$. Hence, if we take $\epsilon \in (0,c)$ small enough, we can assume that almost surely, $||B_n|| \leq n \delta/2$ for $n$ large enough.
We have first to show that the number of eigenvalues of $M_n/n$ which are strictly larger than $c$ (resp. larger than or equal to $c$) tends to the number $r$ of points of $(x_{\ell})_{\ell \geq 1}$ which are larger than $c$ (resp. larger than or equal to $c$). Now, from the finite case studied before, the $r$-th largest eigenvalue of $A_n/n$ tends to the $r$-th largest point
of $(x_{\ell})_{\ell \geq 1}$, which is at least $c+ \delta$ (recall that there is no point in the sequence in $(c-\delta, c+ \delta)$ by definition of $\delta$). Since $||B_n|| \leq n \delta/2$ for $n$ large, the lower limit of the $r$-th largest eigenvalue of $M_n/n$ is at least $c + \delta/2$. Similarly, the upper limit of the $(r+1)$-th largest eigenvalue of $M_n/n$ is
at most $c - \delta/2$. Hence, for $n$ large enough, there are exactly $r$ eigenvalues of $M_n/n$ which are strictly larger than $c$ (resp. larger than or equal to $c$).
It now remains to show that
$$n (\Pi_{M_n, n I})_{a,b} \underset{n \rightarrow \infty}{\longrightarrow} \sum_{x_{\ell} \in I} \xi_a^{(\ell)} \overline{\xi_{b}^{(\ell)}},$$
where $\Pi_{M_n, n I}$ is the projection on the space $\mathcal{E}$ generated by the eigenvectors of $M_n$ associated to eigenvalues in $nI$ (recall that $I = [c, \infty)$ or $I = (c, \infty)$).
By the study of the finite case, it is known that with obvious notation,
$$n (\Pi_{A_n, n I})_{a,b} \underset{n \rightarrow \infty}{\longrightarrow} \sum_{x_{\ell} \in I} \xi_a^{(\ell)} \overline{\xi_{b}^{(\ell)}}.$$
Hence, if $\mathcal{F}$ is the space generated by the eigenvectors of $A_n$ associated to eigenvalues in $nI$, it is enough
to show that
$$n | (\Pi_{\mathcal{E}})_{a,b} - (\Pi_{\mathcal{F}})_{a,b}| \underset{n \rightarrow \infty}{\longrightarrow} 0.$$
Let $v$ be a unit eigenvector, corresponding to an eigenvalue $\lambda \in nI$ of $M_n$. We have a decomposition
$v = w + x$, where $w \in \mathcal{F}$ and $x$ is orthogonal to $\mathcal{F}$.
We have
$$(A_n + B_n) (w + x) = \lambda (w + x)$$
and then, taking the inner product with $x$:
$$\langle x , A_n w \rangle + \langle x , B_n w \rangle +
\langle x , A_n x \rangle + \langle x , B_n x \rangle = \lambda( \langle x, w \rangle + \langle x, x \rangle )
= \lambda ||x||^2 \geq nc ||x||^2.$$
The space $\mathcal{F}$ is stable under $A_n$, so
$$\langle x , A_n w \rangle = 0.$$
Since $x$ is orthogonal to $\mathcal{F}$, we have
$$\langle x , A_n x \rangle \leq ||x||^2 \lambda_{n,(nI)^c},$$
where $\lambda_{n,(nI)^c}$ is the largest eigenvalue of $A_n$ in the complement of $nI$.
Now, by the eigenvalue convergence in the finite case, $n^{-1} \lambda_{n,(nI)^c}$ tends to the largest point of
$\{0, (x_{\ell})_{\ell \geq 1,
x_{\ell} > \epsilon}\} \cap [0,c]$, and is therefore smaller than $c - 3\delta/4$ for $n$ large enough, independently of the choice of the vector $v$. Hence,
$$\langle x , A_n x \rangle \leq n (c - 3\delta/4) ||x||^2.$$
Moreover, for $n$ large enough (independently of $v$),
$$\langle x, B_n x \rangle \leq ||B_n|| \, ||x||^2 \leq n( \delta/2) ||x||^2.$$
Hence, we have
$$\langle x , B_n w \rangle + n (c - 3\delta/4) ||x||^2 + n (\delta/2) ||x||^2 \geq nc ||x||^2,$$
and
$$n \delta ||x||^2 / 4 \leq |\langle x , B_n w \rangle| \leq ||x|| \, ||B_n w||,$$
$$||x|| \leq \frac{4}{n \delta} ||B_n w||.$$
Let $\mathcal{B} = \{y_1, \dots, y_s\}$ be an orthonormal basis of $\mathcal{F}$, chosen as a measurable function of $A_n$. If $s < r$, we arbitrarily define $y_{s+1}= y_{s+2} = \dots = y_r = y_s$.
Since $||w|| \leq ||v|| = 1$, by decomposing $w$ in the basis $\mathcal{B}$ and applying the triangle inequality:
$$||x|| \leq \frac{4}{n \delta} \sum_{j=1}^s ||B_n y_j||. $$
Since the number of eigenvalues of $A_n$ in $nI$ is almost surely equal to $r$ for $n$ large enough, we get
$$||x|| \leq \frac{4}{n \delta} \sum_{j=1}^r ||B_n y_j||$$
for $n$ large enough.
Let $U_j$ be a unitary matrix, chosen as a measurable function of $A_n$, such that $U_j y_j$ is the first basis vector
$e$ of $\mathbb{C}^n$. By construction, $A_n$ and $B_n$ are independent. Since the law of $B_n$ is invariant under unitary conjugation, $B_n$ has the same law as $U_j^{-1} B_n U_j$, conditionally on $A_n$, and $(B_n,y_j)$ has the same law as
$(U_j^{-1} B_n U_j, y_j)$. Hence, we have the equality in distribution:
$$||B_n y_j|| \overset{d}{=} ||U_j^{-1} B_n U_j y_j|| = ||B_n e||.$$
Now, let us estimate the tail of the distribution of $||B_n e||$, by looking at its fourth moment.
We have
$$\mathbb{E} [ ||B_n e||^4] \ll \mathbb{E} [ ||\gamma_1 e||^4]
+ \mathbb{E} [ ||G_n e||^4] + \mathbb{E} [ ||C_n e||^4],$$
where, with obvious notation,
$$g_{j,k} = \sqrt{\gamma_2} G_{j,k},$$
$$c_{j,k} = \sum_{|x_{\ell}| \leq \epsilon} (x_{\ell} \xi_j^{(\ell)} \overline{\xi_k^{(\ell)} } - \delta_{j,k} ).$$
The first term is independent of $n$ (equal to $\gamma_1^4$). The second term is the fourth moment of the norm of a Gaussian vector with $n$ coordinates, which is easily dominated by $n^2$. The third term is
$$\mathbb{E} \left[ \left( \sum_{j = 1}^n \left| \sum_{|x_{\ell}| \leq \epsilon} x_{\ell} ( \xi_j^{(\ell)}\overline{\xi_1^{(\ell)} } - \delta_{j,1} )
\right|^2 \right)^2 \right].$$
Using Fatou's lemma in the case where $(x_{\ell})_{\ell \geq 1}$ is infinite, we get that this expectation is bounded by
$$ \sum_{|x_\ell|, |x_m|, |x_p|, |x_q| \leq \epsilon} x_{\ell} x_m x_p x_q \sum_{1 \leq j, k \leq n}
\mathbb{E} \left[ ( \xi_j^{(\ell)}\overline{\xi_1^{(\ell)} } - \delta_{j,1} ) ( \xi_1^{(m)}\overline{\xi_j^{(m)} } - \delta_{j,1} ) \right. $$ $$ \left. \dots
( \xi_k^{(p)}\overline{\xi_1^{(p)} } - \delta_{k,1} ) ( \xi_1^{(q)}\overline{\xi_k^{(q)} } - \delta_{k,1} ) \right],$$
where in the case of an infinite sequence, we restrict the sum to $1 \leq \ell, m, p, q \leq t$ and let $t \rightarrow \infty$.
If one of the indices $\ell, m, p, q$, say $\ell$, is different from the three others,
we can use independence in the last expectation, in order to get a factor $\mathbb{E} [ \xi_{j}^{(\ell)} \overline{\xi_1^{(\ell)}} - \delta_{j,1}] = 0 $. Hence, the only non-zero terms in the sum correspond to the case where the indices $\ell, m, p, q$ are pairwise equal. Since the last expectation is uniformly bounded in any case, we deduce that the
sum is bounded by a universal constant times $n^2 \left(\sum_{|x_{\ell}| \leq \epsilon} x_{\ell}^2 \right)^2.$
We have then proven
$$\mathbb{E} [ ||B_n y_j||^4] = \mathbb{E} [ ||B_n e||^4] \ll n^2,$$
and hence, by Markov's inequality, $\mathbb{P} \left[ ||B_n y_j|| \geq (\delta/(1 + 4r))\, n^{4/5} \right] \ll n^{2 - 16/5} = n^{-6/5}$, which is summable in $n$. By the Borel-Cantelli lemma, almost surely, for all $j \in \{1, \dots, r\}$ and for all $n$ large enough,
$||B_n y_j|| \leq (\delta/(1 + 4r)) n^{4/5}$.
Hence, almost surely, $||x|| \leq n^{-1/5}$ for $n$ large enough, uniformly on the choice of the unit eigenvector $v$ in $\mathcal{E}$. In other words, almost surely, for $n$ large enough, all unit eigenvectors of $M_n$ in $\mathcal{E}$ are at distance at most $n^{-1/5}$ from a vector in $\mathcal{F}$.
For $n$ large, $\mathcal{E}$ and $\mathcal{F}$ have dimension $r$. If the eigenvectors $v_1, \dots, v_r$ of $M_n$ form an orthonormal basis of $\mathcal{E}$, we have vectors in $\mathcal{F}$ of the form $v_j + \mathcal{O}(n^{-1/5})$.
The inner product of $v_j + \mathcal{O}(n^{-1/5})$ with $v_k + \mathcal{O}(n^{-1/5})$ is $\delta_{j,k} + \mathcal{O}(n^{-1/5})$.
Applying Gram-Schmidt orthogonalization, we deduce that for $n$ large, one gets an orthonormal basis of $\mathcal{F}$ of the form $(v_j + \mathcal{O}(n^{-1/5}))_{1 \leq j \leq r}$. Hence, for any unit vector $x$, and $n$ large,
$$\Pi_{\mathcal{E}}(x) - \Pi_{\mathcal{F}} (x)
= \sum_{j=1}^r \langle v_j , x \rangle v_j - \sum_{j=1}^r \langle (v_j + \mathcal{O}(n^{-1/5})) , x \rangle
(v_j + \mathcal{O}(n^{-1/5})),$$
which implies that the operator norm of $\Pi_{\mathcal{E}} - \Pi_{\mathcal{F}}$ is a.s. dominated by $n^{-1/5}$.
On the other hand, since $A_n$ and $B_n$ are independent and unitarily invariant, the couple
$(A_n, B_n)$ has the same law as $(U A_n U^{-1}, U B_n U^{-1})$, for all deterministic $U \in U(n)$. Now, simultaneous conjugation of $A_n$ and $B_n$ changes the spaces $(\mathcal{E},\mathcal{F})$ to their images by $U$. Hence,
$(\mathcal{E},\mathcal{F})$ has the same law as $(U \mathcal{E}, U\mathcal{F})$, and
$\Pi_{\mathcal{E}} - \Pi_{\mathcal{F}}$ is invariant by unitary conjugation. We deduce that there is an equality in distribution of the form
$$\Pi_{\mathcal{E}} - \Pi_{\mathcal{F}} \overset{d}{=} U \Lambda U^{-1},$$
where $\Lambda$ is a random diagonal matrix whose entries have nonincreasing modulus, and $U = (u_{j,k})_{1 \leq j, k \leq n}$ is an independent, Haar-distributed matrix in $U(n)$.
Since $\mathcal{E}$ and $\mathcal{F}$ have dimension $r$, $\Pi_{\mathcal{E}} - \Pi_{\mathcal{F}}$ has at most rank $2r$, and then only the $2r$ first entries of $\Lambda$ can be different from zero.
We get (for $n$ large)
$$|(\Pi_{\mathcal{E}} - \Pi_{\mathcal{F}})_{a,b}| = \left| \sum_{j = 1}^{2r} \Lambda_j u_{a,j} \overline{u_{b,j}}
\right| \leq 2r ||\Pi_{\mathcal{E}} - \Pi_{\mathcal{F}}|| \sup_{1 \leq j,k \leq n} |u_{j,k}|^2.$$
Now, $|u_{j,k}|^2$ is a Beta variable of parameters $1$ and $n-1$, which implies
$$\mathbb{P} [ |u_{j,k}|^2 \geq n^{-0.99} ]
= (n-1) \int_{n^{-0.99}}^1 (1-x)^{n-2} dx = (1 - n^{-0.99} )^{n-1} \ll e^{- n^{0.01} }.$$
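(For completeness: the last estimate follows from the elementary inequality $1-x \leq e^{-x}$, which gives
$$(1 - n^{-0.99} )^{n-1} \leq e^{-(n-1) n^{-0.99}} = e^{-n^{0.01} + n^{-0.99}} \leq e \cdot e^{- n^{0.01} },$$
since $(n-1)n^{-0.99} = n^{0.01} - n^{-0.99}$ and $n^{-0.99} \leq 1$.)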
Using the Borel-Cantelli lemma, and the previous estimate on $||\Pi_{\mathcal{E}} - \Pi_{\mathcal{F}}|| $, we deduce that almost surely,
$$|(\Pi_{\mathcal{E}} - \Pi_{\mathcal{F}})_{a,b}| = \mathcal{O} (n^{-1.19}),$$
which completes the proof of Theorem \ref{convergenceeigenvectors}.
\end{document}
\begin{document}
\theoremstyle{plain}
\newtheorem{thm}{Theorem}[section]
\newtheorem{lem}[thm]{Lemma}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{cor*}[thm]{Corollary*}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{prop*}[thm]{Proposition*}
\newtheorem{conj}[thm]{Conjecture}
\theoremstyle{definition}
\newtheorem{construction}{Construction}
\newtheorem{notations}[thm]{Notations}
\newtheorem{question}[thm]{Question}
\newtheorem{prob}[thm]{Problem}
\newtheorem{rmk}[thm]{Remark}
\newtheorem{remarks}[thm]{Remarks}
\newtheorem{defn}[thm]{Definition}
\newtheorem{claim}[thm]{Claim}
\newtheorem{assumption}[thm]{Assumption}
\newtheorem{assumptions}[thm]{Assumptions}
\newtheorem{properties}[thm]{Properties}
\newtheorem{exmp}[thm]{Example}
\newtheorem{comments}[thm]{Comments}
\newtheorem{blank}[thm]{}
\newtheorem{observation}[thm]{Observation}
\newtheorem{defn-thm}[thm]{Definition-Theorem}
\newtheorem*{Setting}{Setting}
\newcommand{\mathscr{B}}{\mathscr{B}}
\newcommand{\mathscr{C}}{\mathscr{C}}
\newcommand{\mathscr{D}}{\mathscr{D}}
\newcommand{\mathscr{E}}{\mathscr{E}}
\newcommand{\mathscr{F}}{\mathscr{F}}
\newcommand{\mathscr{G}}{\mathscr{G}}
\newcommand{\mathscr{H}}{\mathscr{H}}
\newcommand{\mathscr{I}}{\mathscr{I}}
\newcommand{\mathscr{J}}{\mathscr{J}}
\newcommand{\mathscr{K}}{\mathscr{K}}
\newcommand{\mathscr{L}}{\mathscr{L}}
\newcommand{\mathscr{M}}{\mathscr{M}}
\newcommand{\mathscr{N}}{\mathscr{N}}
\newcommand{\mathscr{O}}{\mathscr{O}}
\newcommand{\mathscr{P}}{\mathscr{P}}
\newcommand{\mathscr{Q}}{\mathscr{Q}}
\newcommand{\mathscr{R}}{\mathscr{R}}
\newcommand{\mathscr{S}}{\mathscr{S}}
\newcommand{\mathscr{T}}{\mathscr{T}}
\newcommand{\mathscr{U}}{\mathscr{U}}
\newcommand{\mathscr{V}}{\mathscr{V}}
\newcommand{\mathscr{W}}{\mathscr{W}}
\newcommand{\mathscr{X}}{\mathscr{X}}
\newcommand{\mathscr{Z}}{\mathscr{Z}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\mathbb{N}}{\mathbb{N}}
\newcommand{\mathbb{Q}}{\mathbb{Q}}
\newcommand{\mathbb{C}}{\mathbb{C}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbb{H}}{\mathbb{H}}
\newcommand{\mathbb{D}}{\mathbb{D}}
\newcommand{\mathbb{E}}{\mathbb{E}}
\newcommand{\mathbb{V}}{\mathbb{V}}
\newcommand{\mathcal{V}}{\mathcal{V}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathbf{M}}{\mathbf{M}}
\newcommand{\mathbf{N}}{\mathbf{N}}
\newcommand{\mathbf{X}}{\mathbf{X}}
\newcommand{\mathbf{Y}}{\mathbf{Y}}
\newcommand{\textrm{Spec}}{\textrm{Spec}}
\newcommand{\bar{\partial}}{\bar{\partial}}
\newcommand{\partial\bar{\partial}}{\partial\bar{\partial}}
\newcommand{{\color{red}ref}}{{\color{red}ref}}
\title[Koll\'ar's Package for Twisted Saito's S-sheaves] {Koll\'ar's Package for Twisted Saito's S-sheaves}
\author[Junchao Shentu]{Junchao Shentu}
\email{[email protected]}
\address{School of Mathematical Sciences, University of Science and Technology of China, Hefei, 230026, China}
\author[Chen Zhao]{Chen Zhao}
\email{[email protected]}
\address{School of Mathematical Sciences, University of Science and Technology of China, Hefei, 230026, China}
\begin{abstract}
We generalize Koll\'ar's conjecture (including torsion freeness, injectivity theorem, vanishing theorem and decomposition theorem) to Saito's $S$-sheaves twisted by a $\mathbb{Q}$-divisor. This gives a uniform treatment of the various forms of Koll\'ar's package arising in different areas of complex geometry. As a consequence we prove Koll\'ar's package for pluricanonical bundles twisted by a certain multiplier ideal sheaf. The method of the present paper is $L^2$-theoretic.
\end{abstract}
\maketitle
\section{Introduction}
Let $f:X\rightarrow Y$ be a proper morphism between complex spaces\footnote{All the complex spaces are assumed to be separated, reduced, paracompact, countable at infinity and of pure dimension throughout the present paper. We would like to point out that the complex spaces are allowed to be reducible.} such that $Y$ is irreducible and each irreducible component of $X$ is mapped onto $Y$. We say that a coherent sheaf $\mathscr{F}$ on $X$ satisfies {\bf Koll\'ar's package} with respect to $f$ if the following statements hold.
\begin{description}
\item[Torsion Freeness] $R^qf_\ast (\mathscr{F})$ is torsion free for every $q\geq0$ and vanishes if $q>\dim X-\dim Y$.
\item[Injectivity Theorem] If $L$ is a semipositive holomorphic line bundle on $X$ so that $L^{\otimes l}$ admits a nonzero holomorphic global section $s$ for some $l>0$, then the canonical morphism
$$R^qf_\ast(\times s):R^qf_\ast(\mathscr{F}\otimes L^{\otimes k})\to R^qf_\ast(\mathscr{F}\otimes L^{\otimes (k+l)})$$
is injective for every $q\geq0$ and every $k\geq1$.
\item[Vanishing Theorem] If $Y$ is a projective algebraic variety and $L$ is an ample line bundle on $Y$, then
$$H^q(Y,R^pf_\ast(\mathscr{F})\otimes L)=0,\quad \forall q>0,\forall p\geq0.$$
\item[Decomposition Theorem] Assume moreover that $X$ is a compact K\"ahler space. Then $Rf_\ast (\mathscr{F})$ splits in the derived category $D(Y)$ of $\mathscr{O}_Y$-modules, i.e.
$$Rf_\ast(\mathscr{F})\simeq \bigoplus_{q} R^qf_\ast(\mathscr{F})[-q]\in D(Y).$$
As a consequence, the spectral sequence
$$E^{pq}_2:H^p(Y,R^qf_\ast(\mathscr{F}))\Rightarrow H^{p+q}(X,\mathscr{F})$$
degenerates at the $E_2$ page.
\end{description}
These statements date back to J. Koll\'ar \cite{Kollar1986_1,Kollar1986_2}, who shows that the dualizing sheaf $\omega_X$ satisfies Koll\'ar's package when $X$ is smooth projective and $Y$ is projective. Aiming at various geometric applications, Koll\'ar's results have been further generalized in two directions.
The first direction is to show Koll\'ar's package for the dualizing sheaf twisted by a $\mathbb{Q}$-divisor, or, more generally, by a multiplier ideal sheaf. These (ad hoc) versions of Koll\'ar's package have applications in E. Viehweg's works on the quasi-projective moduli of polarized manifolds \cite{Viehweg1995,Viehweg2010}, O. Fujino's project of the minimal model program for log-canonical varieties \cite{Fujino2017} and Koll\'ar-Kov\'acs' splitting criterion for du Bois singularities \cite{Kollar2010}, etc. Besides, K. Takegoshi \cite{Takegoshi1995} proves Koll\'ar's package for the dualizing sheaf twisted by a Nakano semipositive vector bundle. The injectivity theorem for the dualizing sheaf twisted by a general multiplier ideal sheaf has been investigated by S. Matsumura \cite{Matsumura2018} and Fujino-Matsumura \cite{Matsumura2021}. However, a complete proof of Koll\'ar's package (as listed above) for the dualizing sheaf twisted by a multiplier ideal sheaf is still missing.
The other direction is to generalize Koll\'ar's package to certain Hodge theoretic objects such as variations of Hodge structure and Hodge modules. Assume that $f:X\to Y$ is a morphism between projective varieties. Let $\mathbb{V}$ be an $\mathbb{R}$-polarized variation of Hodge structure on some dense Zariski open subset $X^o\subset X_{\rm reg}$ of the regular locus $X_{\rm reg}$. M. Saito \cite{MSaito1991} constructs a coherent sheaf $S_X(\mathbb{V})$ and shows that $S_X(\mathbb{V})$ satisfies Koll\'ar's package with respect to $f$. When $\mathbb{V}$ is the trivial variation of Hodge structure, $S_X(\mathbb{V})\simeq \omega_X$. Saito's work gives an affirmative answer to Koll\'ar's conjecture \cite[\S 4]{Kollar1986_2}. Together with other deep results of Hodge modules, Koll\'ar's package for $S_X(\mathbb{V})$ plays an important role in the series of works of Popa-Schnell \cite{PS2013,PS2014,PS2017}. Recently the authors of the present paper \cite{SC2021_kollar} generalized Saito's result to the context of non-abelian Hodge theory.
The purpose of the present article is to show that Koll\'ar's package holds for Saito's $S$-sheaves twisted by a multiplier ideal sheaf associated with a $\mathbb{Q}$-divisor. This gives a uniform and systematic treatment of the various forms of Koll\'ar's package twisted by a $\mathbb{Q}$-divisor. This package contains new results even in the case of the dualizing sheaf twisted by a multiplier ideal sheaf. The main tool is the abstract Koll\'ar package established in \cite{SC2021_kollar}.
\subsection{Main result}
Before stating the main results let us recall Saito's construction of $S_X(\mathbb{V})$, with two generalizations:
\begin{enumerate}
\item We generalize Saito's construction to complex variations of Hodge structure. In particular we do not make assumptions on the local monodromy. This is interesting in view of non-abelian Hodge theory because complex variations of Hodge structure are precisely the $\mathbb{C}^\ast$-fixed points on the moduli space of certain tame harmonic bundles (\cite[Theorem 8]{Simpson1990}, \cite[Proposition 1.9]{Mochizuki2006}).
\item We generalize Saito's construction with respect to the Deligne-Manin prolongations of the variation of Hodge structure with indices other than $(-1,0]$. This is a combination of Saito's $S_X(\mathbb{V})$ with the multiplier ideal sheaf associated to a boundary $\mathbb{Q}$-divisor.
\end{enumerate}
Let $X$ be a complex space. Let $\mathbb{V}=(\mathcal{V},\nabla,\{\mathcal{V}^{p,q}\},Q)$ be a polarized complex variation of Hodge structure (Definition \ref{defn_CVHS}) on some dense regular Zariski open subset $X^o$ of $X$. Let $A$ be an effective $\mathbb{Q}$-Cartier divisor on $X$. We define a coherent sheaf $S_X(\mathbb{V},-A)$ as follows.
\begin{description}
\item[Log smooth case] Assume that $X$ is smooth, $E:=X\backslash X^o$ is a simple normal crossing divisor and ${\rm supp}(A)\subset E$. Denote by $E=\cup_{i=1}^lE_i$ the irreducible decomposition and denote $A=\sum_{i=1}^l{r_i}E_i$ with $r_1,\dots,r_l\in\mathbb{Q}_{\geq 0}$. Let $\bm{r}=(r_1,\dots,r_l)$. Let $\mathcal{V}_{>\bm{r}-1}$ be the Deligne-Manin prolongation with indices $>\bm{r}-1$. It is a locally free $\mathscr{O}_X$-module extending $\mathcal{V}$ such that $\nabla$ induces a connection with logarithmic singularities
$$\nabla:\mathcal{V}_{>\bm{r}-1}\to\mathcal{V}_{>\bm{r}-1}\otimes\Omega_X(\log E)$$ where the real part of the eigenvalues of the residue of $\nabla$ along $E_i$ belongs to $(r_i-1,r_i]$ for each $i$. Let $j:X^o\to X$ be the open immersion. Denote $S(\mathbb{V}):=\mathcal{V}^{p_{\rm max},k-p_{\rm max}}$ where $p_{\rm max}=\max\{p|\mathcal{V}^{p,k-p}\neq0\}$.
Define $$S_{X}(\mathbb{V},-A):=\omega_X\otimes\left(j_\ast S(\mathbb{V})\cap\mathcal{V}_{>\bm{r}-1}\right).$$
\item[General case] Let $\pi:\widetilde{X}\to X$ be a proper bimeromorphic morphism such that $\pi^o:=\pi|_{\pi^{-1}(X^o\backslash{\rm supp}(A))}:\pi^{-1}(X^o\backslash{\rm supp}(A))\to X^o\backslash{\rm supp}(A)$ is biholomorphic and the exceptional locus $E:=\pi^{-1}((X\backslash X^o)\cup {\rm supp}(A))$ is a simple normal crossing divisor. Then
\begin{align}
S_X(\mathbb{V},-A)\simeq\pi_\ast\left(S_{\widetilde{X}}(\pi^{o\ast}\mathbb{V},-\pi^\ast A)\right).
\end{align}
\end{description}
When $A=\emptyset$, $S_X(\mathbb{V},\emptyset)$ is canonically isomorphic to Saito's $S_X(\mathbb{V})$ (see \cite{MSaito1991}, at least when $\mathbb{V}$ is $\mathbb{R}$-polarizable). The main result of the present article is
\begin{thm}\label{thm_main_CVHS}
\begin{enumerate}
\item $S_{X}(\mathbb{V},-A)$ is a torsion free coherent sheaf on $X$ which is independent of the choice of the desingularization $\pi:\widetilde{X}\to X$.
\item Let $f:X\to Y$ be a locally K\"ahler proper morphism between complex spaces such that $Y$ is irreducible and each irreducible component of $X$ is mapped onto $Y$. Let $L$ be a line bundle on $X$ such that some multiple $mL=B+D$ where $B$ is a semipositive line bundle and $D$ is an effective Cartier divisor on $X$. Let $F$ be an arbitrary Nakano-semipositive vector bundle on $X$. Then $S_{X}(\mathbb{V},-\frac{1}{m}D)\otimes F\otimes L$ satisfies Koll\'ar's package with respect to $f$.
\end{enumerate}
\end{thm}
\subsection{Multiplier Grauert-Riemenschneider sheaf}
When $\mathbb{V}=\mathbb{C}_{X_{\rm reg}}$ is the trivial variation of Hodge structure, $S_X(\mathbb{C}_{X_{\rm reg}},-A)$ is exactly the Grauert-Riemenschneider sheaf twisted by the multiplier ideal sheaf associated with $A$. Such sheaves are called multiplier ideals by Viehweg \cite{Viehweg1995,Viehweg2010}, and they also appear in the Nadel vanishing theorem on complex spaces \cite{Demailly2012}. Let us recall the construction for the convenience of the reader.
\begin{description}
\item[Log smooth case] Assume that $X$ is smooth and ${\rm supp}(A)$ is a simple normal crossing divisor. Then
$$\mathscr{K}_X(-A):=\omega_X\otimes\mathscr{O}_X(-\lfloor A\rfloor).$$
\item[General case] Let $\pi:\widetilde{X}\to X$ be a proper bimeromorphic morphism such that $\pi^o:=\pi|_{\pi^{-1}(X^o\backslash{\rm supp}(A))}:\pi^{-1}(X^o\backslash{\rm supp}(A))\to X^o\backslash{\rm supp}(A)$ is biholomorphic and the exceptional locus $E:=\pi^{-1}((X\backslash X^o)\cup {\rm supp}(A))$ is a simple normal crossing divisor. Then
\begin{align}
\mathscr{K}_X(-A):=\pi_\ast\left(\mathscr{K}_{\widetilde{X}}(-\pi^\ast A)\right).
\end{align}
\end{description}
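To illustrate the log smooth case with a (purely hypothetical) choice of coefficients: if $X$ is smooth and $A=\frac{1}{2}E_1+\frac{7}{3}E_2$ with $E_1+E_2$ a simple normal crossing divisor, then $\lfloor A\rfloor=2E_2$, so that
$$\mathscr{K}_X(-A)=\omega_X\otimes\mathscr{O}_X(-2E_2).$$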
Certainly $\mathscr{K}_X(-A)\simeq S_X(\mathbb{C}_{X_{\rm reg}},-A)$ and one has
$$\mathscr{K}_X(-A)\simeq\omega_X\otimes\mathscr{I}(-A)$$
when $X$ is smooth ($\mathscr{I}(-A)$ is the multiplier ideal sheaf associated with $A$).
In this case, by Theorem \ref{thm_main_CVHS} one has the following.
\begin{thm}\label{thm_main_dualizing_sheaf}
Let $f:X\to Y$ be a locally K\"ahler proper morphism between complex spaces, such that $Y$ is irreducible and each irreducible component of $X$ is mapped onto $Y$. Let $L$ be a line bundle such that some multiple $mL=B+D$ where $B$ is a semipositive line bundle and $D$ is an effective Cartier divisor on $X$. Let $F$ be an arbitrary Nakano-semipositive vector bundle on $X$. Then $\mathscr{K}_{X}(-\frac{1}{m}D)\otimes F\otimes L$ satisfies Koll\'ar's package with respect to $f$.
\end{thm}
Theorem \ref{thm_main_dualizing_sheaf} has an application to Koll\'ar's package of pluricanonical bundles.
\begin{cor}
Let $f:X\to Y$ be a morphism from a compact K\"ahler manifold to an analytic space. Assume that $\omega_X^{\otimes km}\simeq A\otimes\mathscr{O}_X(D)$ for some $k,m>0$, where $A$ is a semipositive line bundle (e.g. a semiample line bundle) and $D$ is an effective Cartier divisor.
Let $F$ be an arbitrary Nakano-semipositive vector bundle on $X$. Then $\mathscr{K}_{X}(-\frac{1}{m}D)\otimes\omega^{\otimes k}_X\otimes F$ satisfies Koll\'ar's package with respect to $f$. In particular if $\omega_X$ is semipositive, then $\omega_X^{\otimes k}\otimes F$ satisfies Koll\'ar's package with respect to $f$ for every $k\geq1$.
\end{cor}
\section{Abstract Koll\'ar's package}
In this section we recall the abstract Koll\'ar's package established in \cite{SC2021_kollar}.
Let $X$ be a complex space of dimension $n$ and $X^o\subset X_{\rm reg}$ a dense Zariski open subset. Let $(E,h)$ be a hermitian vector bundle on $X^o$.
Define the $\mathscr{O}_X$-module $S_X(E,h)$ as follows. Let $U\subset X$ be an open subset. $S_X(E,h)(U)$ is the space of holomorphic $E$-valued $(n,0)$-forms $\alpha$ on $U\cap X^o$ such that for every point $x\in U$, there is a neighborhood $V_x$ of $x$ so that
$$\int_{V_x\cap X^o}\{\alpha\wedge\overline{\alpha}\}_h<\infty,$$
where $\{\alpha\wedge\overline{\alpha}\}_h$ denotes the semipositive $(n,n)$-form obtained from $\alpha\wedge\overline{\alpha}$ by contracting the $E$-valued coefficients via the metric $h$.
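As a simple illustration (with data chosen here only for exposition): let $X=\Delta$ be the unit disc, $X^o=\Delta^\ast$, and let $(E,h)$ be the trivial line bundle on $\Delta^\ast$ with the metric $|1|^2_h=|z|^{2a}$ for some $a\in\mathbb{R}$ (smooth on $\Delta^\ast$ but possibly singular at $0$). A holomorphic section $f\,dz\otimes 1$ then lies in $S_X(E,h)$ near the origin if and only if
$$\int_{|z|<\frac{1}{2}}|f|^2|z|^{2a}\,dzd\bar{z}<\infty,$$
which, by Lemma \ref{lem_integral} below, holds if and only if $v(f)+a>-1$.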
\begin{lem}(Functoriality, \cite[Proposition 2.5]{SC2021_kollar})\label{prop_L2ext_birational}
Let $\pi:X'\to X$ be a proper holomorphic map between complex spaces which is biholomorphic over $X^o$. Then $$\pi_\ast S_{X'}(\pi^\ast E,\pi^\ast h)=S_X(E,h).$$
\end{lem}
\begin{lem}(\cite[Lemma 2.6]{SC2021_kollar})\label{lem_Kernel}
Let $(F,h_F)$ be a hermitian vector bundle on $X$ (in particular $h_F$ is smooth on $X$). Then
$$S_X(E,h)\otimes F\simeq S_X(E\otimes F,h\otimes h_{F}).$$
\end{lem}
\begin{defn}\label{defn_tame_hermitian_bundle}
$(E,h)$ is tame on $X$ if, for every point $x\in X$, there is an open neighborhood $U$ containing $x$, a proper bimeromorphic morphism $\pi:\widetilde{U}\to U$ which is biholomorphic over $U\cap X^o$, and a hermitian vector bundle $(Q,h_Q)$ on $\widetilde{U}$ such that
\begin{enumerate}
\item $\pi^\ast E|_{\pi^{-1}(X^o\cap U)}\subset Q|_{\pi^{-1}(X^o\cap U)}$ as a subsheaf.
\item There is a hermitian metric $h'_Q$ on $Q|_{\pi^{-1}(X^o\cap U)}$ so that $h'_Q|_{\pi^\ast E}\sim \pi^\ast h$ on $\pi^{-1}(X^o\cap U)$ and
\begin{align}\label{align_tame}
(\sum_{i=1}^r\|\pi^\ast f_i\|^2)^ch_Q\lesssim h'_Q
\end{align}
for some $c\in\mathbb{R}$. Here $\{f_1,\dots,f_r\}$ is an arbitrary set of local generators of the ideal sheaf defining $\widetilde{U}\backslash \pi^{-1}(X^o)\subset \widetilde{U}$.
\end{enumerate}
\end{defn}
The tameness condition (\ref{align_tame}) is independent of the choice of the set of local generators. In the present paper, a tame hermitian vector bundle $(E,h)$ is constructed as a subsheaf of the underlying holomorphic bundle of a variation of Hodge structure. This is a special case of tame harmonic bundles in the sense of Simpson \cite{Simpson1990} and Mochizuki \cite{Mochizuki20072,Mochizuki20071}. In this case, Condition (\ref{align_tame}) comes from the theory of degeneration of variation of Hodge structure \cite{Cattani_Kaplan_Schmid1986}.
\begin{thm}(\cite[Proposition 2.9 and \S 4]{SC2021_kollar})\label{thm_abstract_Kollar_package}
Let $f:X\rightarrow Y$ be a proper locally K\"ahler morphism from a complex space $X$ to an irreducible complex space $Y$. Assume that every irreducible component of $X$ is mapped onto $Y$, $X^o\subset X_{\rm reg}$ is a dense Zariski open subset and $(E,h)$ is a hermitian vector bundle on $X^o$ with Nakano semipositive curvature. Assume that $(E,h)$ is tame on $X$. Then $S_X(E,h)$ is a coherent sheaf which satisfies Koll\'ar's package with respect to $f:X\to Y$.
\end{thm}
\section{Twisted Saito's S-sheaf and its Koll\'ar package}
\subsection{Complex variation of Hodge structure}
\begin{defn}{\cite[\S 8]{Simpson1988}}\label{defn_CVHS}
Let $X^o$ be a complex manifold. Denote by $\sA^0_{X^o}$ the sheaf of $C^\infty$ functions on $X^o$.
A polarized complex variation of Hodge structure on $X^o$ of weight $k$ is a flat holomorphic connection $(\mathcal{V},\nabla)$ on $X^o$ together with a decomposition $\mathcal{V}\otimes_{\mathscr{O}_{X^o}}\sA^0_{X^o}=\bigoplus_{p+q=k}\mathcal{V}^{p,q}$ of $C^\infty$ bundles and a flat hermitian form $Q$ on $\mathcal{V}$ such that
\begin{enumerate}
\item The hermitian form $h_Q$ which equals $(-1)^{p}Q$ on $\mathcal{V}^{p,q}$ is a hermitian metric on the $C^\infty$ complex vector bundle $\mathcal{V}\otimes_{\mathscr{O}_{X^o}}\sA^0_{X^o}$.
\item The decomposition $\mathcal{V}\otimes_{\mathscr{O}_{X^o}}\sA^0_{X^o}=\bigoplus_{p+q=k}\mathcal{V}^{p,q}$ is orthogonal with respect to $h_Q$.
\item The Griffiths transversality condition
\begin{align}\label{align_Griffiths transversality}
\nabla(\mathcal{V}^{p,q})\subset \sA^{0,1}(\mathcal{V}^{p+1,q-1})\oplus \sA^{1,0}(\mathcal{V}^{p,q})\oplus\sA^{0,1}(\mathcal{V}^{p,q})\oplus \sA^{1,0}(\mathcal{V}^{p-1,q+1})
\end{align}
holds for every $p$ and $q$. Here $\sA^{i,j}(\mathcal{V}^{p,q})$ denotes the sheaf of smooth $(i,j)$-forms with values in $\mathcal{V}^{p,q}$.
\end{enumerate}
Denote $S(\mathbb{V}):=\mathcal{V}^{p_{\rm max},k-p_{\rm max}}$ where $p_{\rm max}=\max\{p|\mathcal{V}^{p,k-p}\neq0\}$.
\end{defn}
Let $X$ be a complex manifold and $\cup_{i=1}^l E_i=E:=X\backslash X^o\subset X$ a simple normal crossing divisor where $E_1,\dots, E_l$ are irreducible components. Let $(\mathcal{V},\nabla,\{\mathcal{V}^{p,q}\},Q)$ be a polarized complex variation of Hodge structure on $X^o:=X\backslash E$. There is a system of prolongations of $\mathcal{V}$. Let $\bm{a}=(a_1,\dots,a_l)\in\mathbb{R}^l$.
Let $\mathcal{V}_{>\bm{a}}$ be the Deligne-Manin prolongation with indices $>\bm{a}$. It is a locally free $\mathscr{O}_X$-module extending $\mathcal{V}$ such that $\nabla$ induces a connection with logarithmic singularities
$$\nabla:\mathcal{V}_{>\bm{a}}\to\mathcal{V}_{>\bm{a}}\otimes\Omega_X(\log E)$$ such that the real parts of the eigenvalues of the residue of $\nabla$ along $E_i$ belong to $(a_i,a_i+1]$. Denote
$$R_X(\mathbb{V}):=\mathcal{V}_{>\bm{-1}}\cap j_\ast(S(\mathbb{V}))$$
where $j:X^o\to X$ is the open immersion and $\bm{-1}=(-1,\dots,-1)$. By the nilpotent orbit theorem \cite{Cattani_Kaplan_Schmid1986} $R_X(\mathbb{V})$ is a subbundle of $\mathcal{V}_{>\bm{-1}}$, i.e. both $R_X(\mathbb{V})$ and $\mathcal{V}_{>\bm{-1}}/R_X(\mathbb{V})$ are locally free.
\subsection{$L^2$-adapted local frame on $R_X(\mathbb{V})$}
Let $\mathbb{V}=(\mathcal{V},\nabla,\{\mathcal{V}^{p,q}\},Q)$ be a polarized complex variation of Hodge structure over $(\Delta^\ast)^n\times \Delta^m$. Denote by $h_Q$ the associated Hodge metric. Let $s_1,\dots,s_n$ be holomorphic coordinates of $(\Delta^\ast)^n$ and denote $D_i:=\{s_i=0\}\subset\Delta^{n+m}$. Let $N_i$ be the unipotent part of ${\rm Res}_{D_i}\nabla$ and let
$$p:\mathbb{H}^{n}\times \Delta^m\to (\Delta^\ast)^n\times \Delta^m,$$
$$(z_1,\dots,z_n,w_1,\dots,w_m)\mapsto(e^{2\pi\sqrt{-1}z_1},\dots,e^{2\pi\sqrt{-1}z_n},w_1,\dots,w_m)$$
be the universal covering. Let
$W^{(1)}=W(N_1),\dots,W^{(n)}=W(N_1+\cdots+N_n)$ be the monodromy weight filtrations (centered at 0) on $V:=\Gamma(\mathbb{H}^n\times \Delta^m,p^\ast\mathcal{V})^{p^\ast\nabla}$.
The following norm estimate for flat sections is proved by Cattani-Kaplan-Schmid \cite[Theorem 5.21]{Cattani_Kaplan_Schmid1986} for the case when $\mathbb{V}$ has quasi-unipotent local monodromy and by Mochizuki \cite[Part 3, Chapter 13]{Mochizuki20072} for the general case.
\begin{thm}\label{thm_Hodge_metric_asymptotic}
For any $0\neq v\in {\rm Gr}_{l_n}^{W^{(n)}}\cdots{\rm Gr}_{l_1}^{W^{(1)}}V$, one has
\begin{align*}
|v|^2_{h_Q}\sim \left(\frac{\log|s_1|}{\log|s_2|}\right)^{l_1}\cdots\left(-\log|s_n|\right)^{l_n}
\end{align*}
over any region of the form
$$\left\{(s_1,\dots s_n,w_1,\dots,w_m)\in (\Delta^\ast)^n\times \Delta^m\bigg|\frac{\log|s_1|}{\log|s_2|}>\epsilon,\dots,-\log|s_n|>\epsilon,(w_1,\dots,w_m)\in K\right\}$$
for any $\epsilon>0$ and an arbitrary compact subset $K\subset \Delta^m$.
\end{thm}
The rest of this part is devoted to the norm estimate of the singular hermitian metric $h_Q$ on $R_X(\mathbb{V})$.
\begin{lem}\label{lem_W_F}
Assume that $n=1$. Then $W_{-1}(N_1)\cap R_X(\mathbb{V})_{\bf 0}=0$.
\end{lem}
\begin{proof}
Assume that $W_{-1}(N_1)\cap R_X(\mathbb{V})_{\bf 0}\neq0$. Let $k$ be the weight of $\mathbb{V}$. Let $l=\max\{l|W_{-l}(N_1)\cap R_X(\mathbb{V})_{\bf 0}\neq0\}$. Then $l\geq 1$.
By \cite[6.16]{Schmid1973}, the decomposition $\mathcal{V}_{>\bm{-1}}\simeq \bigoplus_{p+q=k}j_\ast \mathcal{V}^{p,q}\cap\mathcal{V}_{>\bm{-1}}$ induces a pure Hodge structure of weight $m+k$ on $W_{m}(N_1)/W_{m-1}(N_1)$. Moreover
\begin{align}\label{align_hard_lef_N}
N^l: W_{l}(N_1)/W_{l-1}(N_1)\to W_{-l}(N_1)/W_{-l-1}(N_1)
\end{align}
is an isomorphism of type $(-l,-l)$. Denote $S(\mathbb{V})=\mathcal{V}^{p,k-p}$. By the definition of $l$, any nonzero element $\alpha\in W_{-l}(N_1)\cap R_X(\mathbb{V})_{\bf 0}$ induces a nonzero $[\alpha]\in W_{-l}(N_1)/W_{-l-1}(N_1)$ of Hodge type $(p,k-l-p)$. Since (\ref{align_hard_lef_N}) is an isomorphism, there is $\beta\in W_{l}(N_1)/W_{l-1}(N_1)$ of Hodge type $(p+l,k-p)$ such that $N^l(\beta)=[\alpha]$. However, $\beta=0$ since $\mathcal{F}^{p+l}=0$. This contradicts the fact that $[\alpha]\neq0$. Therefore $W_{-1}(N_1)\cap R_X(\mathbb{V})_{\bf 0}$ has to be zero.
\end{proof}
Denote by $T_i$ the local monodromy operator of $\mathbb{V}$ around $D_i$.
Since $T_1,\dots,T_n$ are pairwise commutative, there is a finite decomposition
$$\mathcal{V}_{>\bm{-1}}|_{\bf 0}=\bigoplus_{-1<\alpha_1,\dots,\alpha_n\leq 0}\mathbb{V}_{\alpha_1,\dots,\alpha_n}$$
such that $(T_i-e^{2\pi\sqrt{-1}\alpha_i}{\rm Id})$ is nilpotent on $\mathbb{V}_{\alpha_1,\dots,\alpha_n}$ for each $i=1,\dots,n$.
Let $$v_1,\dots, v_N\in R_X(\mathbb{V})|_{\bf 0}\cap\bigcup_{-1<\alpha_1,\dots,\alpha_n\leq 0}\mathbb{V}_{\alpha_1,\dots,\alpha_n}$$
be an orthogonal basis of $R_X(\mathbb{V})|_{\bf 0}\simeq \Gamma(\mathbb{H}^n,p^\ast S(\mathbb{V}))^{p^\ast\nabla}$. Then the sections $\widetilde{v_1},\dots,\widetilde{v_N}$ determined by
\begin{align}\label{align_adapted_frame}
\widetilde{v_j}:={\rm exp}\left(\sum_{i=1}^n\log z_i(\alpha_i{\rm Id}+N_i)\right)v_j\textrm{ if } v_j\in\mathbb{V}_{\alpha_1,\dots, \alpha_n},\quad \forall j=1,\dots,N
\end{align}
form a frame of $\mathcal{V}_{>\bm{-1}}\cap j_\ast S(\mathbb{V})$.
To be precise, we always use the notation $\alpha_{E_i}(\widetilde{v_j})$ instead of $\alpha_i$ in (\ref{align_adapted_frame}). By (\ref{align_adapted_frame}) we obtain that
\begin{align*}
|\widetilde{v_j}|^2_{h_Q}&\sim\left|\prod_{i=1}^nz_i^{\alpha_{E_i}(\widetilde{v_j})}{\rm exp}\left(\sum_{i=1}^nN_i\log z_i\right)v_j\right|^2_{h_Q}\\\nonumber
&\sim|v_j|^2_{h_Q}\prod_{i=1}^n |z_i|^{2\alpha_{E_i}(\widetilde{v_j})},\quad j=1,\dots,N
\end{align*}
where $\alpha_{E_i}(\widetilde{v_j})\in(-1,0]$, $\forall i=1,\dots,n$.
By Theorem \ref{thm_Hodge_metric_asymptotic} and Lemma \ref{lem_W_F}
one has
\begin{align*}
|v_j|^2_{h_Q}\sim \left(\frac{\log|s_1|}{\log|s_2|}\right)^{l_1}\cdots\left(-\log|s_n|\right)^{l_n},\quad l_1\leq l_2\leq\dots\leq l_{n-1},
\end{align*}
over any region of the form
$$\left\{(s_1,\dots s_n,w_1,\dots,w_{m})\in (\Delta^\ast)^n\times \Delta^{m}\bigg|\frac{\log|s_1|}{\log|s_2|}>\epsilon,\dots,-\log|s_n|>\epsilon,(w_1,\dots,w_{m})\in K\right\}$$
for any $\epsilon>0$ and an arbitrary compact subset $K\subset \Delta^{m}$. Therefore we obtain that
\begin{align*}
1\lesssim |v_j|\lesssim|z_1\cdots z_n|^{-\epsilon},\quad\forall\epsilon>0.
\end{align*}
The local frame $(\widetilde{v_1},\dots,\widetilde{v_N})$ is $L^2$-adapted in the sense of S. Zucker \cite[page 433]{Zucker1979}.
\begin{defn}
Let $(E,h)$ be a vector bundle with a possibly singular hermitian metric $h$ on a hermitian manifold $(X,ds^2_0)$. A holomorphic local frame $(v_1,\dots,v_N)$ of $E$ is called $L^2$-adapted if, for every set of measurable functions $\{f_1,\dots,f_N\}$, $\sum_{i=1}^Nf_iv_i$ is locally square integrable if and only if $f_iv_i$ is locally square integrable for each $i=1,\dots,N$.
\end{defn}
To see that $(\widetilde{v_1},\dots,\widetilde{v_N})$ is $L^2$-adapted, let us consider the measurable functions $f_1,\dots,f_N$. If
$$\sum_{j=1}^N f_j\widetilde{v_j}={\rm exp}\left(\sum_{i=1}^nN_i\log z_i\right)\left(\sum_{j=1}^N f_j\prod_{i=1}^n z_i^{\alpha_{E_i}(\widetilde{v_j})}v_j\right)$$
is locally square integrable, then
$$\sum_{j=1}^N f_j\prod_{i=1}^n z_i^{\alpha_{E_i}(\widetilde{v_j})}v_j$$
is locally square integrable because the entries of the matrix ${\rm exp}\left(-\sum_{i=1}^nN_i\log z_i\right)$ are $L^\infty$-bounded.
Since $(v_1,\dots,v_N)$ is an orthogonal basis,
$|f_j\widetilde{v_j}|_{h_Q}\sim\prod_{i=1}^n |z_i|^{\alpha_{E_i}(\widetilde{v_j})}|f_jv_j|_{h_Q}$ is locally square integrable for each $j=1,\dots,N$.
In conclusion, we obtain the following proposition.
\begin{prop}\label{prop_adapted_frame}
Let $(X,ds^2_0)$ be a hermitian manifold and $E$ a normal crossing divisor on $X$. Let $\mathbb{V}$ be a polarized complex variation of Hodge structure on $X^o:=X\backslash E$. Then there is an $L^2$-adapted holomorphic local frame $(\widetilde{v_1},\dots,\widetilde{v_N})$ of $R_X(\mathbb{V})$ at every point $x\in E$. There are moreover $\alpha_{E_i}(\widetilde{v_j})\in(-1,0]$, $i=1,\dots, r$, $j=1,\dots,N$ and positive real functions $\lambda_j\in C^\infty(X\backslash E)$, $j=1,\dots,N$ such that
\begin{align}\label{align_L2adapted_frame}
|\widetilde{v_j}|^2\sim\lambda_j\prod_{i=1}^r |z_i|^{2\alpha_{E_i}(\widetilde{v_j})},\quad \forall j=1,\dots,N
\end{align}
and
$$1\lesssim \lambda_j\lesssim|z_1\cdots z_r|^{-\epsilon},\quad\forall\epsilon>0$$
for each $j=1,\dots,N$.
Here $z_1,\cdots,z_n$ are holomorphic local coordinates on $X$ so that $E_i=\{z_i=0\}$, $i=1,\cdots,r$, and $E=\{z_1\cdots z_r=0\}$.
\end{prop}
\subsection{Twisted Saito's S-sheaf}
Let $X$ be a complex space and $X^o\subset X_{\rm reg}$ a dense Zariski open subset. Let $\mathbb{V}=(\mathcal{V},\nabla,\{\mathcal{V}^{p,q}\},Q)$ be a polarized complex variation of Hodge structure on $X^o$. Let $A$ be an effective $\mathbb{Q}$-Cartier divisor on $X$. We define a coherent sheaf $S_X(\mathbb{V},-A)$ as follows.
\begin{description}
\item[Log smooth case] Assume that $X$ is smooth, $E:=X\backslash X^o$ is a simple normal crossing divisor and ${\rm supp}(A)\subset E$. Denote by $E=\cup_{i=1}^lE_i$ the irreducible decomposition and denote $A=\sum_{i=1}^l{r_i}E_i$ with $r_1,\dots,r_l\in\mathbb{Q}_{\geq 0}$. Let $\bm{r}=(r_1,\dots,r_l)$. Let $\mathcal{V}_{>\bm{r}-1}$ be the Deligne-Manin prolongation with indices $>\bm{r}-1$. It is a locally free $\mathscr{O}_X$-module extending $\mathcal{V}$ such that $\nabla$ induces a connection with logarithmic singularities
$$\nabla:\mathcal{V}_{>\bm{r}-1}\to\mathcal{V}_{>\bm{r}-1}\otimes\Omega_X(\log E)$$ where the real part of the eigenvalues of the residue of $\nabla$ along $E_i$ belongs to $(r_i-1,r_i]$ for each $i$. Let $j:X^o\to X$ be the open immersion. Denote $S(\mathbb{V}):=\mathcal{V}^{p_{\rm max},k-p_{\rm max}}$ where $p_{\rm max}=\max\{p|\mathcal{V}^{p,k-p}\neq0\}$.
Define $$S_{X}(\mathbb{V},-A):=\omega_X\otimes\left(j_\ast S(\mathbb{V})\cap\mathcal{V}_{>\bm{r}-1}\right).$$
\item[General case] Let $\pi:\widetilde{X}\to X$ be a proper bimeromorphic morphism such that $\pi^o:=\pi|_{\pi^{-1}(X^o\backslash{\rm supp}(A))}:\pi^{-1}(X^o\backslash{\rm supp}(A))\to X^o\backslash{\rm supp}(A)$ is biholomorphic and the exceptional locus $E:=\pi^{-1}((X\backslash X^o)\cup {\rm supp}(A))$ is a simple normal crossing divisor. Then
\begin{align}
S_X(\mathbb{V},-A)\simeq\pi_\ast\left(S_{\widetilde{X}}(\pi^{o\ast}\mathbb{V},-\pi^\ast A)\right).
\end{align}
\end{description}
Let $L$ be a line bundle such that some multiple $mL=B+D$ where $B$ is a semipositive line bundle and $D$ is an effective Cartier divisor on $X$.
Let $h_B$ be a hermitian metric on $B$ with semipositive curvature and $h_D$ the unique singular hermitian metric on $\mathscr{O}_X(D)$ determined by the effective divisor $D$. $h_D$ is a singular hermitian metric, smooth over $X\backslash D$, defined as follows. Let $s\in H^0(X,\mathscr{O}_X(D))$ be the defining section of $D$ and let $h_0$ be an arbitrary smooth hermitian metric on $\mathscr{O}_X(D)$. Then $h_D$ is defined by $|\xi|_{h_D}=|\xi|_{h_0}/|s|_{h_0}$ which is independent of the choice of $h_0$.
Denote $h_L:=(h_Bh_D)^{\frac{1}{m}}$.
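Let us also record, as a purely local bookkeeping remark (in notation introduced only for this remark), the shape of $h_L$ near a point where $D$ is given in suitable coordinates by $D=\sum_j r_jE_j$ with $E_j=\{z_j=0\}$. Writing $e^{\otimes m}=e_B\otimes e_D$ for local frames $e$, $e_B$, $e_D$ of $L$, $B$ and $\mathscr{O}_X(D)$ respectively, one has $|e_D|_{h_D}\sim|\prod_j z_j^{r_j}|^{-1}$, and since $h_B$ is smooth,
$$|e|^2_{h_L}=\left(|e_B|^2_{h_B}\,|e_D|^2_{h_D}\right)^{\frac{1}{m}}\sim\prod_j|z_j|^{-\frac{2r_j}{m}}.$$
This is the weight that appears in the integrals below.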
The main result of this section is
\begin{thm}\label{thm_L2_interpretation}
$S_X(\mathbb{V},-\frac{1}{m}D)\otimes L\simeq S_X(S(\mathbb{V})\otimes L,h_Qh_L)$. In particular $S_X(\mathbb{V},-\frac{1}{m}D)$ is independent of the choice of the desingularization $\pi:\widetilde{X}\to X$.
\end{thm}
\begin{proof}
By Lemma \ref{prop_L2ext_birational}, the proof can be reduced to the log smooth case, that is, $X$ is smooth, $E:=X\backslash X^o$ is a simple normal crossing divisor and ${\rm supp}(D)\subset E$. Denote by $j:X^o:=X\backslash E\to X$ the inclusion. We are going to show that $$S_X(\mathbb{V},-\frac{1}{m}D)\otimes L=S_X(S(\mathbb{V})\otimes L,h_Qh_L)$$ as subsheaves of $\omega_X\otimes j_\ast(S(\mathbb{V}))\otimes L$.
Since the problem is local, we assume that $X=\Delta^n$ is the polydisc. Denote $E:=\{z_1\cdots z_l=0\}$ where $E_i:=\{z_i=0\}$ for each $i=1,\dots,l$, and write $D=\sum_{j=1}^l r_jE_j$ with $r_j\in\mathbb{Z}_{\geq 0}$. Let $\mathbb{V}=(\mathcal{V},\nabla,\{\mathcal{V}^{p,q}\},Q)$ be a polarized complex variation of Hodge structure on $X^o$, with Hodge metric $h_Q$. Let ${\bf 0}=(0,\dots,0)\in X$ and let $(\widetilde{v_1},\dots,\widetilde{v_N})$ be an $L^2$-adapted local frame of $R_X(\mathbb{V})$ at ${\bf 0}$ as in Proposition \ref{prop_adapted_frame}. Let $f_1,\dots,f_N\in (j_\ast\mathscr{O}_{X^o})_{\bf 0}$ and let $e$ be a local frame of $L$ at $\bm{0}$. We are going to prove that
$$\sum_{i=1}^Nf_i[\widetilde{v_i}dz_1\wedge\cdots\wedge dz_n\otimes e]_{\bf 0}\in S_X(S(\mathbb{V})\otimes L,h_Qh_L)_{\bf 0}$$ if and only if
$$f_i\in\mathscr{O}_X\big(-\sum_{j=1}^l\lfloor\frac{r_j}{m}-\alpha_{E_j}(\widetilde{v_i})\rfloor E_j\big)_{\bf 0}$$
for every $i=1,\dots,N$.
Denote $ds^2_0=\sum_{i=1}^ndz_id\bar{z}_i$.
Since $(\widetilde{v_1},\dots,\widetilde{v_N})$ is an $L^2$-adapted frame as in Proposition \ref{prop_adapted_frame}, the integral
$$\int|\sum_{i=1}^Nf_i\widetilde{v_i}dz_1\wedge\cdots\wedge dz_n|^2|e|^2_{h_L}{\rm vol}_{ds^2_0}=\int|\sum_{i=1}^Nf_i\widetilde{v_i}|^2|e|^2_{h_L}{\rm vol}_{ds^2_0}$$ is finite near ${\bf 0}$ if and only if
\begin{align}\label{align_l2_term}
\int|f_i\widetilde{v_i}|^2|e|^2_{h_L}{\rm vol}_{ds^2_0}\sim \int|f_i|^2\prod_{j=1}^l|z_j|^{2\alpha_{E_j}(\widetilde{v_i})-\frac{2r_j}{m}}\lambda_i{\rm vol}_{ds^2_0}
\end{align}
is finite near ${\bf 0}$ for every $i=1,\dots,N$. Here $\lambda_i$ is a positive real function so that
\begin{align}\label{align_keylem_lambda}
1\lesssim \lambda_i\lesssim |z_1\cdots z_l|^{-\epsilon},\quad\forall\epsilon>0.
\end{align}
Denote
$$v_j(f):=\min\{k\,|\,f_k\neq0\textrm{ in the Laurent expansion } f=\sum_{i\in\mathbb{Z}}f_iz_j^i\}.$$
By Lemma \ref{lem_integral}, the local finiteness of (\ref{align_l2_term}) near ${\bf 0}$ is equivalent to
\begin{align}
v_j( f_i)+\alpha_{E_j}(\widetilde{v_i})-\frac{r_j}{m}>-1,\quad \forall j=1,\dots, l.
\end{align}
Since $v_j(f_i)$ is an integer and, for every real $x$, $\lfloor x\rfloor$ is the smallest integer strictly greater than $x-1$, this is equivalent to
\begin{align}
v_j( f_i)\geq\Big\lfloor\frac{r_j}{m}-\alpha_{E_j}(\widetilde{v_i})\Big\rfloor,\quad \forall j=1,\dots, l.
\end{align}
As a consequence, $S_X(S(\mathbb{V})\otimes L,h_Qh_L)_{\bf 0}$ is generated by $$dz_1\wedge\cdots\wedge dz_n\otimes e\otimes\prod_{i=1}^l z_i^{\lfloor\frac{r_i}{m}-\alpha_{E_i}(\widetilde{v_j})\rfloor}\,\widetilde{v_j} ,\quad \forall j=1,\dots,N.$$
These are exactly the generators of $\omega_{X}\otimes L\otimes(j_\ast(S(\mathbb{V}))\cap\mathcal{V}_{>\frac{\bm{r}}{m}-1})$ at ${\bf 0}$.
The proof is finished.
\end{proof}
The proof of the following lemma is omitted.
\begin{lem}\label{lem_integral}
Let $f$ be a holomorphic function on $\Delta^\ast:=\{z\in\mathbb{C}|0<|z|<1\}$ and $a\in\mathbb{R}$. Then
$$\int_{|z|<\frac{1}{2}}|f|^2|z|^{2a}dzd\bar{z}<\infty$$
if and only if $v(f)+a>-1$. Here
$$v(f):=\min\{l\,|\,f_l\neq0\textrm{ in the Laurent expansion } f=\sum_{i\in\mathbb{Z}}f_iz^i\}.$$
\end{lem}
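For the reader's convenience, here is a sketch (ours, included only for completeness) of the standard computation behind this lemma. Writing $f=\sum_{i\in\mathbb{Z}}f_iz^i$ and applying Parseval's identity on each circle $\{|z|=r\}$, one gets, up to a positive multiplicative constant,
$$\int_{|z|<\frac{1}{2}}|f|^2|z|^{2a}dzd\bar{z}\asymp\sum_{i\in\mathbb{Z}}|f_i|^2\int_0^{1/2}r^{2i+2a+1}\,dr.$$
The term of index $i$ with $f_i\neq0$ is finite if and only if $i+a>-1$; moreover, when every term is finite, the whole sum converges, since the tail $\sum_{i\geq0}|f_i|^2 2^{-2i}$ is controlled by the square integrability of $f$ on the circle $\{|z|=1/2\}$. Hence the integral is finite if and only if $v(f)+a>-1$.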
\subsection{Koll\'ar package}
In this section we prove the main theorem (Theorem \ref{thm_main_CVHS}) of the present paper.
Let $X$ be a complex space and $X^o\subset X_{\rm reg}$ a dense Zariski open subset. Let $\mathbb{V}:=(\mathcal{V},\nabla,\{\mathcal{V}^{p,q}\},Q)$ be a polarized complex variation of Hodge structure of weight $k$ on $X^o$. Let
$$\nabla=\overline{\theta}+\partial+\bar{\partial}+\theta$$
be the decomposition according to (\ref{align_Griffiths transversality}).
For degree reasons, $S(\mathbb{V})$ is a holomorphic subbundle of $\mathcal{V}$ and $\overline{\theta}(S(\mathbb{V}))=0$.
\begin{lem}\label{lem_SV_tame}
$(S(\mathbb{V}),h_Q)$ is a Nakano semipositive vector bundle which is tame on $X$.
\end{lem}
\begin{proof}
To see that $(S(\mathbb{V}),h_Q)$ is Nakano semipositive, we take the decomposition
$$\nabla=\overline{\theta}+\partial+\bar{\partial}+\theta$$
according to (\ref{align_Griffiths transversality}).
Since $\overline{\theta}(S(\mathbb{V}))=0$, it follows from Griffiths' curvature formula
$$\Theta_h(S(\mathbb{V}))+\theta\wedge\overline{\theta}+\overline{\theta}\wedge\theta=0$$
that
$$\sqrt{-1}\Theta_h(S(\mathbb{V}))=-\sqrt{-1}\overline{\theta}\wedge\theta\geq_{\rm Nak} 0.$$
To prove the tameness we use Deligne's extension. Since the problem is local, we assume that there is a desingularization $\pi:\widetilde{X}\to X$ such that $\pi$ is biholomorphic over $X^o$ and $D:=\pi^{-1}(X\backslash X^o)$ is a simple normal crossing divisor. By abuse of notation we identify $X^o$ and $\pi^{-1}(X^o)$. There is an inclusion $S(\mathbb{V})\subset\mathcal{V}_{>\bm{-1}}|_{X^o}$. Let $z_1,\dots,z_n$ be holomorphic local coordinates such that $D_i=\{z_i=0\}$, $i=1,\dots,k$ and $D=\{z_1\cdots z_k=0\}$. By Theorem \ref{thm_Hodge_metric_asymptotic}, one has the norm estimate
\begin{align}\label{align_normest_tame}
|z_1\cdots z_k|\,|s|_{h_0}\lesssim |s|_h
\end{align}
for any holomorphic local section $s$ of $\mathcal{V}_{>\bm{-1}}$. Here $h_0$ is an arbitrary (smooth) hermitian metric on $\mathcal{V}_{>\bm{-1}}$.
This shows that $(S(\mathbb{V}),h_Q)$ is tame.
\end{proof}
\begin{thm}
Let $X$ be a complex space and $X^o\subset X_{\rm reg}$ a dense Zariski open subset. Let $\mathbb{V}:=(\mathcal{V},\nabla,\{\mathcal{V}^{p,q}\},Q)$ be a polarized complex variation of Hodge structure of weight $k$ on $X^o$.
Let $L$ be a line bundle such that some multiple $mL=A+D$ where $A$ is a semipositive line bundle and $D$ is an effective Cartier divisor on $X$. Let $F$ be an arbitrary Nakano-semipositive vector bundle on $X$. Then $S_{X}(\mathbb{V},-\frac{1}{m}D)\otimes F\otimes L$ satisfies Koll\'ar's package with respect to any locally K\"ahler proper morphism $f:X\to Y$ such that $Y$ is irreducible and each irreducible component of $X$ is mapped onto $Y$.
\end{thm}
\begin{proof}
Let $h_A$ be a hermitian metric on $A$ with semipositive curvature and $h_D$ the singular hermitian metric on $\mathscr{O}_X(D)$ determined by the effective divisor $D$. Denote $h_L:=(h_Ah_D)^{\frac{1}{m}}$. Then
$$\sqrt{-1}\Theta_{h_L}(L)|_{X\backslash D}=\frac{\sqrt{-1}}{m}\Theta_{h_A}(A)|_{X\backslash D}\geq0.$$
Hence, setting $U:=X^o\backslash {\rm supp}(D)$, the pair $(L|_{U},h_L|_U)$ has semipositive curvature and is tame on $X$. Therefore, by Lemma \ref{lem_SV_tame}, $(S(\mathbb{V})\otimes L\otimes F|_U,h_Qh_Lh_F|_U)$ has semipositive curvature on $U$ and is tame on $X$. By Lemma \ref{lem_Kernel}, Theorem \ref{thm_abstract_Kollar_package} and Theorem \ref{thm_L2_interpretation} we obtain that
$S_{X}(\mathbb{V},-\frac{1}{m}D)\otimes F\otimes L\simeq S_X(S(\mathbb{V})\otimes L\otimes F|_U,h_Qh_Lh_F|_U)$ satisfies Koll\'ar's package with respect to $f:X\to Y$.
\end{proof}
\end{document}
\begin{document}
\title{Fisher Markets with Social Influence}
\begin{abstract}
A Fisher market is an economic model of buyer and seller interactions in which each buyer's utility depends only on the bundle of goods she obtains.
Many people's interests, however, are affected by their social interactions with others.
In this paper, we introduce a generalization of Fisher markets, namely influence Fisher markets, which captures the impact of social influence on buyers' utilities.
We show that competitive equilibria in influence Fisher markets correspond to generalized Nash equilibria in an associated pseudo-game, which implies the existence of competitive equilibria in all influence Fisher markets with continuous and concave utility functions.
We then construct a monotone pseudo-game, whose variational equilibria and their duals together characterize competitive equilibria in influence Fisher markets with continuous, jointly concave, and homogeneous utility functions.
This observation implies that competitive equilibria in these markets can be computed in polynomial time under standard smoothness assumptions on the utility functions.
The dual of this second pseudo-game enables us to interpret the competitive equilibria of influence CCH Fisher markets as the solutions to a system of simultaneous Stackelberg games.
Finally, we derive a novel first-order method that solves this Stackelberg system in polynomial time, prove that it is equivalent to computing competitive equilibrium prices via \emph{t\^{a}tonnement}, and run experiments that confirm our theoretical results.
\end{abstract}
\section{Introduction}
The branch of mathematical economics that attempts to explain the behavioral relationship among supply, demand, and prices via equilibria dates back to the work of the French economist L\'eon Walras \citep{walras}, and is today known as general equilibrium theory \citep{mas-colell}.
One of the seminal achievements in this area is the proof of existence of competitive equilibrium prices in Arrow-Debreu markets (\citeyear{arrow1954existence}).
In such a market, traders seek to ``purchase'' goods from others, by exchanging a part of their endowment of goods for various other goods.
A competitive equilibrium comprises an allocation of goods to traders together with good prices such that traders maximize their preferences over goods while ensuring that their spending does not exceed the value of their endowment,
and the market clears:
i.e., no more goods are allocated than the market supply and Walras' law holds, meaning the value of demand equals the value of supply.
In much of mainstream consumer theory \citep{mas-colell}, and in Arrow-Debreu markets, each trader's preference depends only on its own consumption.
Such models fail to capture the influence of social interactions on traders' interests.
For example, the more friends one has who own an iPhone, the more one might prefer an iPhone.
Likewise, if a celebrity, e.g., Beyonce, wears a particular brand of bag, e.g., Telfar, then one's preference for that brand of bag might increase.
In an age of densifying social networks, it is becoming more and more essential that our economic models capture the effects of social interactions on individuals' preferences.
To try to better understand the implications of social networks on market equilibria, \citet{Chen2011MakretwithSocialInfluence} recently proposed an extension of the Arrow-Debreu market model in which each trader's preference is influenced by the goods her neighbors obtain: \mydef{the Arrow-Debreu market with social influence}.
Formally, \citeauthor{Chen2011MakretwithSocialInfluence}'s model augments an Arrow-Debreu market with a social network connecting the traders, and then embeds this network's structure in each trader's utility function, thus inducing a preference relation over allocations of goods that depends both on the trader's and its neighbors' allocations.
The authors then study a modest generalization of competitive equilibrium in which traders maximize their utility, assuming the allocations of the other traders in the market, including their neighbors, are fixed.
\citeauthor{Chen2011MakretwithSocialInfluence} analyze their model under two specific types of utilities: linear and threshold influence functions.
They prove existence of competitive equilibrium in their setting, when the graph underlying the economy is strongly connected and the utility functions' parameters guarantee non-satiation of the preferences they represent.
Under additional assumptions on the topology of the network, they also provide polynomial-time methods for computing competitive equilibria.
As the computation of competitive equilibrium in Arrow-Debreu markets is believed to be intractable, i.e., it is PPAD-complete \citep{chen2006settling, chen2009spending}, it seems unlikely that we can obtain positive computational results in such a broad setting.
In the last two decades, however, \mydef{Fisher markets} have emerged as an interesting special case of Arrow-Debreu markets in which competitive equilibria can be efficiently computed.
The Fisher market is a one-sided Arrow-Debreu market comprising one seller and multiple buyers, the latter of whom are endowed with an artificial currency called their budget,
rather than an endowment of goods.
Over this period, a wide array of polynomial-time computability results has been established for Fisher markets \citep{devanur2002market, jain2005market, gao2020polygm, goktas2021minmax}.
One of the most interesting findings is the observation that the primal and dual solutions, respectively, to the \mydef{Eisenberg-Gale convex program} \cite{eisenberg1959consensus}, constitute competitive equilibrium allocations and prices in Fisher markets, and are computable in polynomial time assuming buyers with continuous, concave, and homogeneous utility functions representing locally non-satiated preferences \citep{devanur2002market, devanur2008market, jain2005market}.
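For the reader's convenience, we recall the Eisenberg-Gale program in its standard form; the notation below ($n$ buyers with budgets $b_1,\dots,b_n$, $m$ goods each in unit supply, and allocations $x_i\in\mathbb{R}^m_{+}$) is introduced only for this display and differs from the notation used in the rest of the paper:
$$\max_{x\geq0}\ \sum_{i=1}^{n}b_i\log u_i(x_i)\quad\text{subject to}\quad\sum_{i=1}^{n}x_{ij}\leq1,\quad j=1,\dots,m.$$
Under the assumptions just mentioned (continuous, concave, and homogeneous utilities), the optimal Lagrange multipliers of the supply constraints are competitive equilibrium prices.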
Moreover, \citeauthor{fisher-tatonnement}
show that solving the dual of the Eisenberg-Gale program via (sub)gradient descent amounts to solving the market via \mydef{\emph{t\^atonnement}}, an economic price-adjustment process dating back to \citeauthor{walras} (\citeyear{walras}), in which a fictional auctioneer increases (resp.\@ decreases) the prices of goods that are overdemanded (resp.\@ underdemanded) \cite{fisher-tatonnement}.
Furthermore, \citet{goktas2021minmax} show that the dual of the Eisenberg-Gale program corresponds to a zero-sum Stackelberg game, in which \emph{t\^atonnement\/} surfaces as a no-regret learning dynamic for the auctioneer \cite{goktas2022robust}.
With the aim of obtaining stronger results on the existence and computation of competitive equilibrium in markets with social influence, we introduce a special case of the Arrow-Debreu market with social influence and a generalization of Fisher markets \citep{brainard2000compute}, which we call \mydef{Fisher markets with social influence}, or \mydef{influence Fisher markets} for short.
An influence Fisher market, as the name suggests, is a Fisher market in which buyers' utility functions depend not only on their own allocation, but also on their neighbors'.
In this paper, we provide existence and polynomial-time computability results for competitive equilibrium in influence Fisher markets.
We first extend \citeauthor{arrow1954existence}'s competitive equilibrium existence argument using their theory of pseudo-games
to prove that a competitive equilibrium exists in all influence Fisher markets with continuous utility functions that are concave in each buyer's own allocation. In contrast to \citeauthor{Chen2011MakretwithSocialInfluence}, our existence result makes no assumptions about the topology of the network.
Next, for all influence Fisher markets with continuous and homogeneous utilities, we construct a similar pseudo-game with jointly convex constraints whose variational equilibria
correspond to competitive equilibrium allocations.
This pseudo-game is monotone, assuming the buyers' utility functions are jointly concave; thus, we can solve for its variational equilibria as a variational inequality problem \citep{facchinei2007vi}.
This approach yields a polynomial-time algorithm that computes competitive equilibrium allocations in influence Fisher markets.
Moreover, as the pseudo-game comprises $\numbuyers$ different optimization problems, one per buyer, there are correspondingly $\numbuyers$ duals.
Surprisingly,
the solutions to all of these duals yield the same competitive equilibrium prices!
Finally, following \citet{goktas2021minmax}, who reformulate the dual of the Eisenberg-Gale program as a zero-sum Stackelberg game, we likewise reformulate the $\numbuyers$ duals of our pseudo-game as a system of $\numbuyers$ simultaneous zero-sum Stackelberg games.
In \citeauthor{goktas2021minmax}'s dual, the leader is a fictitious auctioneer who sets prices, while the followers are a set of buyers who effectively play as a team; in our $\numbuyers$ duals, each leader is again a fictitious auctioneer, but each follower is an individual buyer who best responds to the auctioneer's prices, \emph{given the other buyers' allocations}.
Thus, the buyers in this system play a Nash equilibrium.
Also following \citeauthor{goktas2021minmax}, we show that running subgradient descent on each leader's value function, i.e., the leader's utility function assuming the follower best-responds,
amounts to solving the market via \emph{t\^atonnement\/} in polynomial time, as in (standard) Fisher markets.
The main difference between our algorithm and theirs is that ours requires a Nash-equilibrium oracle, so that, given prices \emph{and the other buyers' allocations}, buyers can play best responses to one another.
\paragraph{Related Work}
Beyond the Eisenberg-Gale-based approaches discussed in the introduction, \citet{gao2020polygm} studied an alternative family of first-order methods for solving Fisher markets (though not min-max Stackelberg games more generally), assuming linear, quasilinear, and Leontief utilities; such methods can be more efficient when markets are large.
Following \citeauthor{arrow1954existence}'s introduction of generalized Nash equilibrium (GNE), \citet{rosen1965gne} initiated the study of the mathematical and computational properties of GNE in pseudo-games with jointly convex constraints, proposing a projected gradient method to compute GNE.
Thirty years later,
\citet{uryas1994relax} developed the first relaxation methods for finding GNEs, which were improved upon in subsequent works \citep{Krawczyk2000relax, von2009relax}.
Two other types of algorithms were also introduced to the literature: Newton-style methods \citep{facchinei2009generalized, dreves2017computing, von2012newton, izmailov2014error, fischer2016globally, dreves2013newton} and interior-point potential
methods \citep{dreves2013newton}.
Many of these approaches are based on minimizing the exploitability of the pseudo-game,
but others use variational inequality \citep{facchinei2007vi, nabetani2011vi} and Lemke methods \citep{Schiro2013lemke}.
Recently, this literature has established convergence guarantees for exploitability minimization \cite{goktas2022exploitability} and relaxation \cite{jordan2023first} methods.
\section{Preliminaries}
In this section, we define our main modeling tool, pseudo-games, and
we introduce our object of study, Fisher markets with social influence, as a particular pseudo-game.
\subsection{Notation}
We use calligraphic uppercase letters to denote sets and set correspondences (e.g., $\calX$);
bold lowercase letters to denote vectors (e.g., $\price, \bm \pi$);
bold uppercase letters to denote matrices and vector-valued random variables (e.g., $\allocation$, $\bm \Gamma$);
lowercase letters to denote scalar quantities (e.g., $x, \gamma$);
and uppercase letters to denote scalar-valued random variables (e.g., $X, \Gamma$).
We denote the $i$th row vector of a matrix (e.g., $\allocation$) by the corresponding bold lowercase letter with subscript $i$ (e.g., $\allocation[\buyer]$).
Similarly, we denote the $j$th entry of a vector (e.g., $\price$ or $\allocation[\buyer]$) by the corresponding lowercase letter with subscript $j$ (e.g., $\price[\good]$ or $\allocation[\buyer][\good]$).
Lowercase letters also denote functions: e.g., $f$ if the function is scalar valued, and $\f$ if the function is vector valued.
We denote the vector of ones of size $\numbuyers$ by $\ones[\numbuyers]$, the set of integers $\left\{1, \hdots, n\right\}$ by $[n]$, the set of natural numbers by $\N$, the set of real numbers by $\R$, and the positive and strictly positive elements of a set by a $+$ and $++$ subscript, respectively, e.g., $\R_+$ and $\R_{++}$.
Finally, we denote the orthogonal projection operator onto a set $C$ by $\project[C]$, i.e., $\project[C](\x) = \argmin_{\y \in C} \left\|\x - \y \right\|^2$.
\subsection{Pseudo-games}
A (concave) \mydef{pseudo-game} \cite{arrow1954existence} $\pgame \doteq (\numplayers, \actionspace, \actions, \actionconstr, \utilp)$
comprises $\numplayers \in \N_+$ players, each $\player \in \players$ of whom chooses an action $\action[\player] \in \actionspace[\player] \subset \R^{\numactions}$, with the players' joint action space $\actionspace = \bigtimes_{\player \in \players} \actionspace[\player]$.
Each player $\player$ aims to maximize their continuous utility $\utilp[\player]: \actionspace \to \R$, which is concave in $\action[\player]$, by choosing a feasible action from a set of actions $\actions[\player](\naction[\player]) \subseteq \actionspace[\player]$ determined by the actions $\naction[\player] \in \actionspace[-\player] \subset \R^\numactions$ of the other players, where $\actions[\player]: \actionspace[-\player] \rightrightarrows \actionspace[\player]$ is a non-empty, continuous, compact- and convex-valued action correspondence.
For convenience, we represent each such correspondence as a set $\actions[\player](\naction[\player]) = \{ \action[\player] \in \actionspace[\player] \mid \actionconstr[\player][\numconstr](\action[\player], \naction[\player]) \geq \zeros, \text{ for all } \numconstr \in [\numconstrs]\}$, where for all $\numconstr \in [\numconstrs]$, $\actionconstr[\player][\numconstr]$ is a continuous and concave function in $\action[\player]$, which defines the constraints.
For notational convenience, we also define the \mydef{joint constraint function} $\constr = \left(\actionconstr[1], \hdots, \actionconstr[\numplayers] \right): \actionspace \to \R^{\numplayers \times\numconstrs}$.
If $\constr[\numconstr](\action)$ is concave in $\action$, for all $\numconstr \in [\numconstrs]$, then we say that the pseudo-game has \mydef{jointly convex constraints}, in which case the joint action correspondence is simply a convex set, i.e., $\actions = \{\action \in \actionspace \mid \constr(\action) \geq \zeros\}$.
A pseudo-game is called \mydef{monotone}
\footnote{We call a pseudo-game monotone if $- (\grad[{\action[1]}] \utilp[1], \hdots, \grad[{\action[\numplayers]}] \utilp[\numplayers])$ is a monotone operator. Such pseudo-games are also sometimes called dissipative pseudo-games, since $(\grad[{\action[1]}] \utilp[1], \hdots, \grad[{\action[\numplayers]}] \utilp[\numplayers])$ is called a dissipative operator if $- (\grad[{\action[1]}] \utilp[1], \hdots, \grad[{\action[\numplayers]}] \utilp[\numplayers])$ is a monotone operator.}
if for all $\x, \y \in \actionspace$, $\sum_{\player \in \players} \left(\grad[{\action[\player]}] \utilp[\player](\x) - \grad[{\action[\player]}] \utilp[\player](\y) \right)^T \left( \x_\player - \y_\player \right) \leq 0$.
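This monotonicity condition can be verified numerically for concrete utility functions.
The following Python sketch is our own illustration (the quadratic utilities and their gradients are hypothetical examples, not part of our model); it samples random pairs of action profiles for a two-player pseudo-game and checks the displayed inequality on each pair:
\begin{verbatim}
import numpy as np

# Hypothetical utilities u_i(a) = -||a_i||^2 + a_i . a_{-i},
# concave in player i's own action, with
# grad_{a_i} u_i(a) = -2 a_i + a_{-i}.
def grad_u(i, a):
    return -2.0 * a[i] + a[1 - i]

rng = np.random.default_rng(0)
monotone = True
for _ in range(1000):
    x, y = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
    total = sum((grad_u(i, x) - grad_u(i, y)) @ (x[i] - y[i])
                for i in range(2))
    monotone = monotone and total <= 1e-9
print("monotone on all sampled pairs:", monotone)
\end{verbatim}
Here the concavity in each player's own action dominates the cross-player interaction, so the check succeeds on every sampled pair.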
Finally, a (concave) \mydef{game} \cite{nash1950existence} is a pseudo-game where, for all players $\player \in \players$, $\actions[\player]$ is a constant correspondence, i.e., for all players $\player \in \players,$ $\actions[\player](\naction[\player]) = \actions[\player](\otheraction[-\player])$, for all $\action, \otheraction \in \actionspace$.
Given a pseudo-game $\pgame$, an $\varepsilon$-\mydef{generalized Nash equilibrium (GNE)} is a strategy profile $\action^* \in \actions(\action^*)$ s.t.\ for all $\player \in \players$ and $\action[\player] \in \actions[\player](\naction[\player][][][*])$, $\utilp[\player](\action^*) \geq \utilp[\player](\action[\player], \naction[\player][][][*]) - \varepsilon$.
An $\varepsilon$-\mydef{variational equilibrium (VE)} (or \mydef{$\varepsilon$-normalized GNE}) of a pseudo-game \emph{with joint constraints\/} is a strategy profile $\action^* \in \actions$ s.t.\ for all $\player \in \players$ and $\action \in \actions$, $\utilp[\player](\action^*) \geq \utilp[\player](\action[\player], \naction[\player][][][*]) - \varepsilon$.
A GNE (VE) is an $\varepsilon$-GNE (VE) with $\varepsilon = 0$.
While GNE are guaranteed to exist in all pseudo-games under standard assumptions (see \Cref{thm:existence_GNE}), VE are only guaranteed to exist in pseudo-games with jointly convex constraints (see \Cref{thm:jointly_convex_ve_gne}) \cite{arrow1954existence}.
Note that the set of $\varepsilon$-VE of a pseudo-game with jointly convex constraints is contained in its set of $\varepsilon$-GNE; the converse, however, is not true in general.
Further, when $\pgame$ is a game, GNE and VE coincide; we refer to this set simply as NE.
\subsection{Fisher Markets with Social Influence}
In this paper, we study a model of Fisher markets with social influence, in which a buyer's utility may be influenced by the goods allocated to her neighbors.
A \mydef{Fisher market with social influence}, or an \mydef{influence Fisher market} for short, comprises $\numbuyers \in \N_+$ buyers and $\numgoods \in \N_+$ divisible goods.
Without loss of generality,
we assume that exactly one unit of each good $\good \in \goods$ is available.
The buyers are connected through a directed social influence graph $\graph=(\vertices, \edges)$, where $\vertices = \buyers$ is the set of buyers, and for any $\buyer, \buyerp\in \buyers$, there is an edge from $\buyerp$ to $\buyer$ iff the utility of $\buyer$ is influenced by the allocation $\allocation[\buyerp]$ of $\buyerp$.
We let $ \neighborset[\buyer]=\{\buyerp \mid (\buyerp, \buyer)\in \edges\}$ be the (incoming, and hence influential) neighbors of a buyer $\buyer$, and we define $\neighbordegree[\buyer]=|\neighborset[\buyer]|$, for all $\buyer\in \buyers$.
Each buyer $\buyer\in \buyers$ has a budget $\budget[\buyer] \in \Rp$ and a utility function $\util[\buyer]: \Rp^{(\neighbordegree[\buyer]+1) \times \numgoods} \to \R$ that depends on not only her own allocation, but also her neighbors'.
An instance of an influence Fisher market is thus given by a tuple $(\numbuyers, \numgoods, \graph, \util, \budget)$, where $\graph$ is the social network, $\util= \{\util[1],\hdots, \util[\numbuyers]\}$ is a set of utility functions, one per buyer, and $\budget\in \Rp^{\numbuyers}$ is a vector of buyer budgets.
When $\numbuyers$ and $\numgoods$ are clear from context, we denote influence Fisher markets simply by $(\graph, \util, \budget)$.
Given an influence Fisher market $(\graph, \util, \budget)$, an \mydef{allocation} $\allocation = (\allocation[1], \hdots, \allocation[\numbuyers])^{T}\in \Rp^{\numbuyers\times \numgoods}$ is a map from goods to buyers, represented as a matrix, s.t. $\allocation[\buyer][\good]\geq 0$ denotes the amount of good $\good\in \goods$ allocated to buyer $\buyer\in \buyers$.
Likewise, we denote by $\allocation[{\neighborset[\buyer]}]=(\allocation[\buyerp])_{\buyerp\in \neighborset[\buyer]}^T \in \Rp^{\neighbordegree[\buyer]\times \numgoods}$ the matrix representing the bundles of goods obtained by buyer $\buyer$'s neighbors.
A utility function is \mydef{locally non-satiated} if for all $\allocation[\buyer]\in \Rp^{\numgoods}, \allocation[\nei]\in \Rp^{\neighbordegree[\buyer]\times \numgoods}$, and $\varepsilon > 0$, there exists an $\allocationp[\buyer]\in \Rp^{\numgoods}$ with $\|\allocationp[\buyer]-\allocation[\buyer]\|\leq \varepsilon$ such that $\util[\buyer](\allocationp[\buyer], \allocation[\nei]) > \util[\buyer](\allocation[\buyer], \allocation[\nei])$.
Relatedly, a utility function satisfies \mydef{no saturation} if $\forall \allocation[\buyer]\in \Rp^{\numgoods}$ and $\allocation[\nei]\in \Rp^{\neighbordegree[\buyer]\times \numgoods}$, there exists an $\allocationp[\buyer] \in \Rp^{\numgoods}$ such that $\util[\buyer](\allocationp[\buyer], \allocation[\nei]) > \util[\buyer](\allocation[\buyer], \allocation[\nei])$.
Note that if $\util[\buyer]$ is quasi-concave
in $\allocation[\buyer]$ and satisfies no saturation, then it is locally non-satiated \citep{arrow1954existence}.
\mydef{Feasibility} asserts that no more of each good $\good$ is allocated than its available supply, i.e., $\forall \good\in \goods, \sum_{\buyer\in \buyers} \allocationstar[\buyer][\good] \leq 1$.
\mydef{Walras' law} states that, for each good $\good$, either all of its supply is allocated or its price is zero; equivalently, the value of the unallocated supply is zero.
Mathematically, $\sum_{\good\in \goods} \pricestar[\good](\sum_{\buyer\in \buyers} \allocationstar[\buyer][\good]-1) = 0$.
A tuple $(\allocationstar, \pricestar)$, which consists of an allocation $\allocationstar$ and prices $\pricestar = (\pricestar[1] \hdots, \pricestar[\numgoods])^{T} \in \Rp^{\numgoods}$, is a \mydef{competitive equilibrium (CE)} in an influence Fisher market $(\graph, \util, \budget)$ if (1) fixing other buyers' allocations, buyers maximize their utilities constrained by their budget, i.e., $\forall \buyer\in \buyers, \allocationstar[\buyer] \in \argmax_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer] \cdot \pricestar \leq \budget[\buyer]} \util[\buyer](\allocation[\buyer],\allocationstar[\nei] )$, and (2) feasibility and Walras' law hold.
A Fisher market is a special case of an influence Fisher market $(\graph, \util, \budget)$ in which $\graph=(\vertices, \edges)$ satisfies $\edges=\emptyset$. In other words, each buyer $\buyer$ is isolated, so her utility $\util[\buyer]:\Rp^{\numgoods}\to \Rp$ depends only on her own allocation. As $\graph$ is then simply a graph with $\numbuyers$ vertices and no edges, we can denote a Fisher market by the tuple $(\util, \budget)$.
When $\util$ is a set of specific utility functions, we refer to the influence Fisher market $(\graph, \util, \budget)$ by the name of the utility function: e.g., if $\util$ is a set of linear utility functions then $(\graph, \util, \budget)$ is a linear influence Fisher market.
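To make these definitions concrete, the following Python sketch (our own illustration; the instance and all numbers are hypothetical) encodes a toy influence Fisher market and checks feasibility, Walras' law, and the budget constraints for a candidate allocation--price pair; the remaining CE condition, utility maximization, is not checked here:
\begin{verbatim}
import numpy as np

# Toy instance: 3 buyers, 2 goods.
budgets = np.array([1.0, 2.0, 1.5])        # b_i
neighbors = {0: [1], 1: [0, 2], 2: []}     # incoming neighbors N_i (only the
                                           # omitted utility check would use them)

# A candidate allocation X (rows = buyers) and candidate prices p.
X = np.array([[0.5, 0.0],
              [0.5, 0.5],
              [0.0, 0.5]])
p = np.array([1.5, 2.0])

feasible = np.all(X.sum(axis=0) <= 1.0 + 1e-12)    # supply respected
walras = abs(p @ (X.sum(axis=0) - 1.0)) <= 1e-12   # unsold supply has zero value
within_budget = np.all(X @ p <= budgets + 1e-12)   # budgets respected
print(feasible, walras, within_budget)
\end{verbatim}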
\section{Existence of Competitive Equilibrium via Pseudo-Games}
\label{sec:existence}
In this section, we investigate the properties of competitive equilibrium in Fisher markets with social influence.
Our main tool is the pseudo-game (or abstract economy) model introduced by \citeauthor{arrow1954existence}, which generalizes both the standard normal-form game of game theory and the Arrow-Debreu market of microeconomics \citep{arrow1954existence}.
We provide a proof of existence of competitive equilibrium in influence Fisher markets, using methods similar to those employed by \citeauthor{arrow1954existence} in their seminal proof of the existence of competitive equilibria in Arrow-Debreu economies.
Following Arrow and Debreu, we define a pseudo-game with an auctioneer who sets prices.
Our pseudo-game both generalizes and specializes theirs: it generalizes theirs in that each buyer's utility may depend on her neighbors' allocations as well as her own, thereby capturing social influence; and it specializes theirs in that buyers are constrained by budgets rather than by endowments of goods.
Specifically, we construct an auctioneer-buyer pseudo-game comprising a single auctioneer and $\numbuyers$ individual buyers in which the auctioneer sets the good prices, while the buyers choose their allocations.
Given an allocation $\allocation \in \R^{\numbuyers \times \numgoods}$, let $\excessd = \left( \sum_{\buyer\in \buyers} \allocation[\buyer] \right) - \ones[\numgoods]$ be the vector of \mydef{excess demands}, i.e., the total amount by which the demand for each good exceeds its supply.
In our auctioneer-buyer pseudo-game, each buyer $\buyer$ chooses an allocation $\allocation[\buyer]$ that maximizes her utility subject to her budget constraint, given prices $\price$ determined by the auctioneer, while the auctioneer chooses prices that maximize her total profit, i.e., $\price \cdot \excessd$,
fixing the allocation $\allocation$, subject to Walras' law.
More specifically, we assume a numeraire (i.e., a good whose price we normalize to 1), and we view the buyers' budgets as quantities of this numeraire, in which case Walras' law can be restated as the sum of the prices being equal to the sum of the budgets: i.e., $\sum_{\good \in \goods}\price[\good] = \sum_{\buyer \in \buyers}\budget[\buyer]$.
In what follows, we show that the set of GNE in this auctioneer-buyer pseudo-game corresponds to the set of CE in an influence Fisher market; existence of a CE thus follows from existence of GNE.
\begin{assumption}
\label{assumption:existence_assum}
The influence Fisher market $(\graph, \util, \budget)$ is such that, for each buyer $\buyer \in \buyers$, $\util[\buyer]$
1.~is continuous in $(\allocation[\buyer], \allocation[\nei])$,
2.~is concave in $\allocation[\buyer]$, and
3.~satisfies no saturation.
\end{assumption}
\begin{remark}
By the observation above, \Cref{assumption:existence_assum} implies that each buyer's preferences are locally non-satiated.
Local non-satiation is needed for Walras' law to hold at a competitive equilibrium, and it plays a key role in our existence proof.
\end{remark}
\begin{definition}[Auctioneer-Buyer Pseudo-game]
\label{def:Auctioneer-Buyer_pseudo}
Let $(\graph, \util, \budget)$ be an influence Fisher market.
The corresponding \mydef{auctioneer-buyer pseudo-game} $\pgame=(\numbuyers+1, \actionspace, \actions, \constr, \utilp)$ is defined by
\begin{itemize}
\item an auctioneer and $\numbuyers$ buyers.
\item Each buyer chooses an allocation $\allocation[\buyer] \in \actionspace[\buyer] = \Rp^{\numgoods}$, while the auctioneer chooses prices $\price \in \actionspace[\priceplayer] = \Rp^{\numgoods}$.
\item For all buyers $\buyer \in \buyers$,
the feasible action set given the actions of other players is $\actions[\buyer] (\allocation[- \buyer], \price) = \{ \allocation[\buyer] \in \actionspace[\buyer] \mid \constr( \allocation[\buyer], \allocation[- \buyer], \price ) = \budget[\buyer]- \allocation[\buyer] \cdot \price \geq \zeros\}$.
\item For the auctioneer, the feasible action set is the fixed set $\actions[\priceplayer] = \{ \price \in \actionspace[\priceplayer] \mid \price \cdot \ones[\numgoods] = \budget \cdot \ones[\numbuyers]\}$.
\item For all players $\player \in [\numbuyers+1]$, $\player$ maximizes her utility $\utilp[\player]: \bigtimes_{\player \in [\numbuyers+1]} \actionspace[\player] \to \R$,
defined by $\utilp[\buyer] (\allocation, \price) = \util[\buyer] (\allocation[\buyer], \allocation[\nei])$, for the buyers $\buyer \in [\numbuyers]$, and $\utilp[\priceplayer] (\allocation, \price) = \price\cdot \excessd$
for the auctioneer.
\end{itemize}
\end{definition}
Next, we prove the existence of CE in influence Fisher markets that satisfy \Cref{assumption:existence_assum}.
\footnote{Proofs of all theorems appear in the appendix.}
\begin{restatable}{theorem}{thmExistence}
\label{thm:competitive_equ_existence}
The set of competitive equilibria of any influence Fisher market $(\graph, \util, \budget)$ that satisfies \Cref{assumption:existence_assum} is equal to the set of generalized Nash equilibria of the associated auctioneer-buyer pseudo-game $\pgame=(\numbuyers+1, \actionspace, \actions, \constr, \utilp)$.
\end{restatable}
Existence of a CE in an influence Fisher market now follows immediately from existence of GNE in pseudo-games:
\begin{restatable}{corollary}{corExistence}
\label{cor:competitive_equ_existence}
There exists a CE $(\allocationstar, \pricestar)$ in all influence Fisher markets $(\graph, \util, \budget)$ satisfying \Cref{assumption:existence_assum}.
\end{restatable}
\section{Computation of Competitive Equilibrium via Pseudo-Games}
Although we have established the existence of competitive equilibrium in all influence Fisher markets with continuous and concave utility functions, the proof itself provides little insight into equilibrium computation, as computing a GNE is PPAD-hard in general \citep{daskalakis2009complexity}.
In order to gain further computational insights, we focus on a subset of influence Fisher markets in which each buyer's utility function is also homogeneous in its own allocation.
A utility function $\util[\buyer]$ is \mydef{homogeneous} in $\allocation[\buyer] \in \Rp^{\numgoods}$ if, for all $\allocation[\nei] \in \Rp^{\neighbordegree[\buyer] \times \numgoods}$ and all $\lambda \geq 0$, $\util[\buyer] (\lambda \allocation[\buyer], \allocation[\nei]) = \lambda\, \util[\buyer] (\allocation[\buyer], \allocation[\nei])$.
As above, we also assume continuity and concavity.
We call utility functions that satisfy all three of these assumptions CCH utility functions, and Fisher markets inhabited by buyers with such utility functions CCH Fisher markets.
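As a simple sanity check on these definitions (our own illustrative example), consider a buyer whose utility ignores her neighbors and is Cobb-Douglas in her own bundle, $\util[\buyer](\allocation[\buyer], \allocation[\nei]) = \prod_{\good \in \goods} \allocation[\buyer][\good]^{a_{\good}}$ with $a_{\good} \geq 0$ and $\sum_{\good \in \goods} a_{\good} = 1$; it is continuous, concave, and homogeneous, since for all $\lambda \geq 0$,
\begin{equation*}
\util[\buyer](\lambda \allocation[\buyer], \allocation[\nei])
= \prod_{\good \in \goods} (\lambda \allocation[\buyer][\good])^{a_{\good}}
= \lambda^{\sum_{\good \in \goods} a_{\good}} \prod_{\good \in \goods} \allocation[\buyer][\good]^{a_{\good}}
= \lambda\, \util[\buyer](\allocation[\buyer], \allocation[\nei]).
\end{equation*}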
We can compute competitive equilibria in CCH Fisher markets (without social influence) via the Eisenberg-Gale convex program and its dual \citep{eisenberg1959consensus}.
To generalize this convex program to CCH influence Fisher markets, we propose another pseudo-game, which we call the buyer (only) pseudo-game, that is jointly convex, and whose variational equilibria correspond to CE allocations.
Moreover, we observe that the ``dual'' of this pseudo-game
simultaneously characterizes CE prices.
In other words, while the auctioneer-buyer pseudo-game explicitly models an auctioneer who updates prices in response to the buyers' behavior,
in the buyer (only) pseudo-game, the auctioneer is ``fictitious,'' as it is implicit in the dual.
If $(\util, \budget)$ is a CCH Fisher market, then an optimal solution $\allocationstar$ to the Eisenberg-Gale program (Eq.~\ref{eq:eisenberg_gale_primal}) constitutes a CE allocation, and the optimal Lagrange multipliers associated with the feasibility constraints (Eq.~\ref{eq:eisenberg_gale_primal_feas_constr}) constitute the corresponding equilibrium prices \citep{devanur2002market, jain2005market}.
\paragraph{Primal:}
\begin{subequations}
\label{eq:eisenberg_gale_primal}
\begin{align}
&\max_{\allocation\in \Rp^{\numbuyers\times \numgoods}}
&\sum_{\buyer \in \buyers} \budget[\buyer] \log(\util[\buyer] (\allocation[\buyer])) \tag{\ref{eq:eisenberg_gale_primal}}\\
&\text{subject to}
& \forall \good\in \goods, \:\sum_{\buyer \in \buyers}\allocation[\buyer][\good] \leq 1 \label{eq:eisenberg_gale_primal_feas_constr}
\end{align}
\end{subequations}
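For intuition, the following Python sketch (ours; it assumes the \texttt{cvxpy} package and a toy instance with linear utilities, and it is an illustration rather than the implementation used in our experiments) solves the Eisenberg-Gale program for a standard Fisher market and reads off candidate equilibrium prices as the dual variables of the feasibility constraint:
\begin{verbatim}
import cvxpy as cp
import numpy as np

# Toy linear Fisher market (no social influence): 2 buyers, 2 goods.
b = np.array([1.0, 2.0])              # budgets
V = np.array([[2.0, 1.0],             # buyer 0's valuations
              [1.0, 3.0]])            # buyer 1's valuations

X = cp.Variable((2, 2), nonneg=True)  # allocations
utils = [cp.sum(cp.multiply(V[i], X[i])) for i in range(2)]
objective = cp.Maximize(sum(float(b[i]) * cp.log(utils[i]) for i in range(2)))
supply = cp.sum(X, axis=0) <= 1       # feasibility constraint
problem = cp.Problem(objective, [supply])
problem.solve()

print("equilibrium allocation:\n", X.value)
print("equilibrium prices (duals of feasibility):", supply.dual_value)
\end{verbatim}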
In Fisher markets (without social influence), each buyer's utility maximization problem is independent of the others', as each depends only on the buyer's own allocation.
The Eisenberg-Gale program takes advantage of this independence.
It takes an aggregate perspective, maximizing the \emph{sum\/} of the buyers' utilities subject to their feasibility constraints, and nonetheless computes an optimal allocation that maximizes each buyer's \emph{individual\/} utility.
In influence Fisher markets, however, where this independence assumption does not hold, we can no longer compute CE from this aggregate perspective.
In our solution---CE as the VE of a jointly-convex pseudo-game---each buyer maximizes her own utility, subject to a shared feasibility constraint.
Note that the only players in this buyer pseudo-game are the $\numbuyers$ buyers; there is no auctioneer updating prices based on the buyers' behavior.
\begin{definition}[Buyer Pseudo-game] \label{def:Buyer_pseudo}
Let $(\graph, \util, \budget)$ be an influence Fisher market.
The corresponding jointly-convex \mydef{buyer pseudo-game} $\pgame=(\numbuyers, \actionspace, \actions, \constr, \utilp)$ is defined by
\begin{itemize}
\item For all buyers $\buyer \in \buyers, \actionspace[\buyer] = \Rp^{\numgoods}$.
\item For all buyers $\buyer \in \buyers$, the feasible action set given the actions of other players is $\actions[\buyer] (\allocation[- \buyer]) = \{ \allocation[\buyer] \in \actionspace[\buyer] \mid \constr( \allocation[\buyer], \allocation[- \buyer] ) = \ones - \sum_{\buyer \in \buyers} \allocation[\buyer] \geq \zeros\}$.
\item For all buyers $\buyer \in \buyers$, $\buyer$ maximizes her utility $\utilp[\buyer]: \bigtimes_{\player \in \players} \actionspace[\player] \to \R$ defined by $\utilp[\buyer] (\allocation) = \budget[\buyer] \log (\util[\buyer] (\allocation[\buyer], \allocation[\nei]))$.
\end{itemize}
\end{definition}
\begin{assumption}
\label{assumption:pseudo_assum}
For each buyer $\buyer \in \buyers$, $\util[\buyer]$ is
1.~continuous in $(\allocation[\buyer], \allocation[\nei])$; and
2.~concave and homogeneous
\footnote{We note that homogeneity implies no saturation, since for all $\x \in \R_+^\numgoods$ and $\allocation[\nei] \in \Rp^{\neighbordegree[\buyer]\times \numgoods}$, there exists an allocation $(1+\varepsilon)\x$ for some $\varepsilon > 0$ s.t.\@ $\util[\buyer]((1+\varepsilon)\x, \allocation[\nei]) = (1+\varepsilon)\util[\buyer](\x, \allocation[\nei])> \util[\buyer](\x, \allocation[\nei])$.}
in $\allocation[\buyer]$.
\end{assumption}
\begin{restatable}{theorem}{thmPseudoPrimal}
\label{thm:pseudo_game_equ}
Let $(\graph, \util, \budget)$ be an influence Fisher market satisfying \Cref{assumption:pseudo_assum}.
Then, $\allocationstar$ is a CE allocation of $(\graph, \util, \budget)$ if and only if it is a variational equilibrium (VE) of the corresponding buyer pseudo-game $\pgame=(\numbuyers, \actionspace, \actions, \constr, \utilp)$.
Moreover, if $\allocationstar$ is a VE of the buyer pseudo-game, then the corresponding KKT conditions are satisfied with optimal Lagrange multipliers $\pricelangstar[1] = \hdots = \pricelangstar[\numbuyers] = \pricestar$, which correspond to CE prices.
\end{restatable}
The construction of competitive equilibrium via the auctioneer-buyer pseudo-game (\Cref{thm:competitive_equ_existence}) is more general than the construction of competitive equilibrium via the buyer pseudo-game (\Cref{thm:pseudo_game_equ}); however, the existence of the auctioneer precludes monotonicity, and hence polynomial-time computability.
To obtain efficient algorithms, we assume the buyers' utilities are concave not only in their own allocations but in one another's allocations as well, which implies monotonicity of the buyer pseudo-game.
We also require that the utilities be twice continuously differentiable.
\begin{assumption}
\label{assumption:comp_pseudo_assum}
For each buyer $\buyer \in \buyers$,
1.~the conditions of \Cref{assumption:pseudo_assum} hold,
2.~$\util[\buyer]$ is jointly concave, i.e., concave in $(\allocation[\buyer], \allocation[\nei])$, and
3.~$\util[\buyer]$ is twice continuously differentiable in $(\allocation[\buyer], \allocation[\nei])$.
\end{assumption}
Under \Cref{assumption:comp_pseudo_assum}, computing a variational equilibrium of the buyer pseudo-game can be expressed as solving a monotone variational inequality.
There exist methods whose last iterates converge
\footnote{\citet{Solodov1999ExtraProximal} and \citet{Ryu2019ODEAO} also provide methods that guarantee average-iterate convergence at this same rate in monotone variational inequalities.} to a solution of any (Lipschitz-continuous) monotone variational inequality at a rate of $O(\nicefrac{1}{T})$ (e.g., \citet{gorbunov2022extragradient}).
Our next theorem follows from these two assertions:
\begin{restatable}{theorem}{thmPseudoConvergence}
There exist methods whose last iterates converge to the CE allocations of influence Fisher markets at a rate of $O(\nicefrac{1}{T})$, under \Cref{assumption:comp_pseudo_assum}.
In such markets, approximate competitive equilibrium allocations can be computed in polynomial time.
\end{restatable}
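To give a flavor of such methods, the following Python sketch (ours; a generic illustration rather than the specific algorithm analyzed by \citet{gorbunov2022extragradient}) implements the projected extragradient update for a monotone operator with a projection oracle and applies it to a toy monotone problem:
\begin{verbatim}
import numpy as np

def extragradient(F, project, z0, eta=0.1, iters=500):
    # Projected extragradient for the VI: find a feasible z* with
    # F(z*) . (z - z*) >= 0 for all feasible z, where F is monotone.
    z = z0
    for _ in range(iters):
        z_half = project(z - eta * F(z))    # extrapolation step
        z = project(z - eta * F(z_half))    # update step
    return z

# Toy monotone operator: F(z) = A z with A skew-symmetric, so that
# (F(x) - F(y)) . (x - y) = 0; the VI solution over the box is z* = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda z: A @ z
project = lambda z: np.clip(z, -1.0, 1.0)   # projection onto [-1, 1]^2

print(extragradient(F, project, z0=np.array([1.0, -0.5])))
\end{verbatim}
For the buyer pseudo-game, the operator stacks the negated gradients $-\grad[{\allocation[\buyer]}] \utilp[\buyer]$, one block per buyer, and the projection is onto the shared feasibility set $\actions$.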
\section{Computation of Competitive Equilibrium via Stackelberg Games}\label{sec:stackelberg}
Recently, \citet{cole2019balancing} presented a generalization of the Eisenberg-Gale dual for arbitrary CCH utility functions, which accurately characterizes competitive equilibrium prices, but fails to match the optimal objective value of the Eisenberg-Gale primal.
Building on their results, \citet{Goktas2021aConsumertheoretic} derived the exact Eisenberg-Gale dual, for which strong duality holds.
\paragraph{Dual:}
\begin{subequations}\label{eq:eisenberg_gale_dual}
\begin{align}
&\min_{\price\in \Rp^{\numgoods}}
&&\sum_{\good\in \goods}\price[\good]
+ \sum_{\buyer \in \buyers} \budget[\buyer]
\log \left( \util[\buyer] (\allocationstar[\buyer]) \right) - \budget[\buyer]
\tag{\ref{eq:eisenberg_gale_dual}}\\
&\text{s.t.} \:
&&\forall \buyer \in \buyers, \; \allocationstar[\buyer] \in \argmax_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer] \cdot \price\leq \budget[\buyer]} \util[\buyer] (\allocation[\buyer])
\end{align}
\end{subequations}
\noindent
With this dual in hand, we now derive the ``duals'' of our buyer pseudo-game.
In the buyer pseudo-game $\pgame$, each buyer is solving an optimization problem (\Cref{eq:individual_primal}) in which they maximize their utility function by choosing an optimal action in their feasible action set, given the other buyers' VE actions.
Based on this observation, we can derive the ``dual'' of our buyer pseudo-game;
but as our pseudo-game comprises $\numbuyers$ different optimization problems, one for each buyer $\buyer \in \buyers$, instead of just one dual, we have $\numbuyers$ duals.
Moreover, because any VE of a jointly-convex pseudo-game satisfies the corresponding KKT conditions with optimal Lagrange multipliers $\pricelangstar[1] = \hdots = \pricelangstar[\numbuyers]$ (\Cref{thm:jointly_convex_kkt} \cite{facchinei2009generalized}), all $\numbuyers$ duals yield the same CE prices!
In other words, just as the dual of the Eisenberg-Gale program characterizes the CE prices of a Fisher market, the $\numbuyers$ duals of our pseudo-game characterize the CE prices of an influence Fisher market (satisfying \Cref{assumption:pseudo_assum}).
\begin{restatable}{theorem}{thmPseudoDual}
\label{thm:pseudo_dual}
Let $(\graph, \util, \budget)$ be an influence Fisher market satisfying \Cref{assumption:pseudo_assum}, and let $\pgame$ be the corresponding buyer pseudo-game $\pgame=(\numbuyers, \actionspace, \actions, \constr, \utilp)$.
For each buyer $\buyer \in \buyers$, fixing its neighbors' allocations $\allocationstar[\nei]$,
the dual of $\buyer$'s optimization problem,
\begin{align}
\label{eq:individual_primal}
\max_{\allocation[\buyer] \in \Rp^{\numgoods}:
\allocation[\buyer] + \sum_{k\neq \buyer}
\allocationstar[k] \leq \ones}
\budget[\buyer] \log (\util[\buyer] (\allocation[\buyer], \allocationstar[\nei]))
\end{align}
is given by
\begin{subequations}
\label{eq:individual_dual}
\begin{align}
&\min_{\price\in \Rp^{\numgoods}}
&&\sum_{\good\in \goods} \price[\good] \left( 1 - \sum_{k \neq \buyer} \allocationstar[k][\good] \right) + \budget[\buyer] \log( \util[\buyer] (\allocationstar[\buyer], \allocationstar[\nei])) - \budget[\buyer]
\tag{\ref{eq:individual_dual}}\\
&\text{s.t.}
&& \allocationstar[\buyer] \in \argmax_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer] \cdot \price \leq \budget[\buyer]} \util[\buyer] (\allocation[\buyer], \allocationstar[\nei])
\end{align}
\end{subequations}
\end{restatable}
\citet{goktas2021minmax} further show that the dual of the Eisenberg-Gale program can be re-expressed as the following zero-sum convex-concave Stackelberg game, whose solution characterizes the CE of any CCH Fisher market:
\begin{align}
\label{eq:fisher_stackelberg}
\min_{\price \in \Rp^{\numgoods}}
\max_{\allocation \in \Rp^{\numbuyers\times \numgoods}: \allocation \cdot \price \leq \budget}
\sum_{\good \in \goods} \price[\good]
+ \sum_{\buyer \in \buyers} \budget[\buyer] \log (\util[\buyer] (\allocation[\buyer]))
\end{align}
\noindent
The leader in this game is a fictitious auctioneer (i.e., price setter), while the follower represents a set of buyers who effectively play as a team.
The objective function is the sum of the auctioneer's welfare (i.e., the sum of the prices) and the buyers' Nash social welfare.
\citeauthor{goktas2021minmax} also derive a first-order method that solves this game, which, via the aforementioned interpretation, can be understood as computing a competitive equilibrium of a Fisher market via \emph{t\^atonnement}.
We argue that competitive equilibria in \emph{influence\/} Fisher markets can likewise be characterized via Stackelberg equilibria.
This more general setting requires not just one, but a system of $\numbuyers$
zero-sum convex-concave Stackelberg games \cite{goktas2021minmax}, one per buyer.
In each game, the leader once again is a fictitious auctioneer (i.e., price setter), but the follower is just an individual buyer, not the set of all buyers.
Moreover, in each buyer's Stackelberg game, the objective function is the sum of the auctioneer's revenue (i.e., the sum of the good prices, each weighted by the supply of that good left over for buyer $\buyer$ after the other buyers receive their allocations) and the individual buyer's utility.
These $\numbuyers$ Stackelberg games are played simultaneously: the fictitious auctioneer optimizes prices assuming all the buyers simultaneously best respond (i.e., play a Nash equilibrium), while each buyer best responds to the auctioneer's prices, given the other buyers' allocations.
\begin{definition}[Buyer $\buyer$'s Stackelberg Game]
Let $(\graph, \util, \budget)$ be an influence Fisher market.
The corresponding \mydef{Stackelberg game for buyer $\buyer$} is defined by
\begin{align}
\min_{\price\in \Rp^{\numgoods}}
\max_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer] \cdot \price \leq \budget[\buyer]}
&\sum_{\good\in \goods} \price[\good] \left( 1 - \sum_{k \neq \buyer} \allocationstar[k][\good] \right)
+ \notag \\
&\budget[\buyer] \log( \util[\buyer] (\allocation[\buyer], \allocationstar[\nei])) \label{eq:buyer_stackelberg}
\end{align}
\end{definition}
The following corollary follows from Theorems~\ref{thm:pseudo_game_equ} and~\ref{thm:pseudo_dual}.
\begin{restatable}{corollary}{corollaryStackelberg}
\label{cor:stackelberg_iff_ce}
$(\allocationstar, \pricestar)$ is a competitive equilibrium of an influence CCH Fisher market $(\graph, \util, \budget)$ satisfying \Cref{assumption:pseudo_assum} iff $(\allocationstar[\buyer], \pricestar)$ is a Stackelberg equilibrium in buyer $\buyer$'s Stackelberg game, for all buyers $\buyer \in \buyers$.
\end{restatable}
\begin{proof}
For all $\buyer\in \buyers$, $(\allocationstar[\buyer], \pricestar)$ is a Stackelberg equilibrium in buyer $\buyer$'s Stackelberg game iff $(\allocationstar[\buyer], \pricestar)$ solves
\begin{align}
&\min_{\price\in \Rp^{\numgoods}} \! \!
&&\sum_{\good\in \goods} \price[\good] \! \! \left( \! 1 - \sum_{k \neq \buyer} \allocationstar[k][\good] \! \right) \! \!
+ \budget[\buyer] \log( \util[\buyer] (\allocationstar[\buyer], \allocationstar[\nei])) - \budget[\buyer] \nonumber \\
&\text{s.t.}\:
&&\allocationstar[\buyer] \in \argmax_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer] \cdot \price \leq \budget[\buyer]} \util[\buyer] (\allocation[\buyer], \allocationstar[\nei])
\end{align}
\noindent
By \Cref{thm:pseudo_dual}, $\pricestar$ is a solution to this bi-level optimization problem (\Cref{eq:individual_dual}) iff $\allocationstar[\buyer]$ is a solution to buyer $\buyer$'s optimization problem (\Cref{eq:individual_primal}) in the buyer pseudo-game corresponding to $(\graph, \util, \budget)$.
Finally, by \Cref{thm:pseudo_game_equ}, $(\allocationstar, \pricestar)$ is a competitive equilibrium of $(\graph, \util, \budget)$.
\end{proof}
Our Stackelberg game formulation of CE in influence Fisher markets enables us to compute CE by solving a system of buyer Stackelberg games: i.e., solving for a Stackelberg equilibrium in each of the buyer Stackelberg games together with a Nash equilibrium among the buyers in the system.
Towards that end, for convenience, we define the \mydef{objective function} for buyer $\buyer$'s Stackelberg game:
\begin{align}
\label{eq:obj_func}
\obj_{\buyer}(\allocation[\buyer], \price)
&\coloneqq
\sum_{\good\in \goods} \price[\good] \left( 1 - \sum_{k \neq \buyer} \allocationstar[k][\good] \right)
+ \budget[\buyer] \log( \util[\buyer] (\allocation[\buyer], \allocationstar[\nei]))
\end{align}
\noindent
and the $i$th (fictitious) auctioneer's \mydef{value function} in buyer $\buyer$'s Stackelberg game:
\begin{align}
\label{eq:value_func}
\val_{\buyer}(\price)
&\coloneqq
\max_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer] \cdot \price \leq \budget[\buyer]}
\sum_{\good\in \goods} \price[\good] \left( 1 - \sum_{k \neq \buyer} \allocationstar[k][\good] \right) \notag \\
& \quad \quad \quad \quad \quad \quad \quad + \budget[\buyer] \log( \util[\buyer] (\allocation[\buyer], \allocationstar[\nei]))
\end{align}
\noindent
Moreover, while each buyer is playing a Stackelberg game with its fictitious auctioneer, all buyers are also playing an $\numbuyers$-buyer simultaneous game with one another, in which each buyer maximizes its objective function $\obj_{\buyer}(\allocation[\buyer], \price)$ (\Cref{eq:obj_func}), given the prices $\price$ set by the auctioneer and the other buyers' allocations.
We can characterize a Nash equilibrium of this $\numbuyers$-buyer game as follows:
\begin{subequations}
\label{eq:normal_form_game}
\begin{align}
\allocationstar[\buyer]
&\in \argmax_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer] \cdot \price \leq \budget[\buyer]} \sum_{\good\in \goods} \price[\good] \left( 1 - \sum_{k \neq \buyer} \allocationstar[k][\good] \right) \notag \\
& \quad \quad \quad \quad \quad \quad \quad + \budget[\buyer] \log( \util[\buyer] (\allocation[\buyer], \allocationstar[\nei]))
\tag{\ref{eq:normal_form_game}} \\
&= \argmax_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer] \cdot \price \leq \budget[\buyer]} \util[\buyer] (\allocation[\buyer], \allocationstar[\nei])
\end{align}
\end{subequations}
\noindent
As the first summand in Equation~\ref{eq:normal_form_game} and $\budget[\buyer]$ are constants (i.e., they do not depend on $\allocation[\buyer]$), and $\log$ is a monotonic function,
buyer $\buyer$ simply seeks to maximize her utility subject to her budget constraint.
Using a subdifferential envelope theorem \citep{goktas2021minmax}, we now derive the subgradient of each auctioneer's value function $\val_{\buyer}$ (\Cref{eq:value_func}).
\begin{restatable}{theorem}{thmSubdiff}
\label{thm:subdiff_equal_excess_demands}
Given an influence CCH Fisher market $(\graph, \util, \budget)$, the subdifferential of the $i$th auctioneer's value function in buyer $\buyer$'s Stackelberg game (\Cref{eq:value_func}) at given prices $\price$ is equal to the negative excess demand
at $\price$: i.e.,
$\subdiff[\price]
\val_{\buyer}(\price)
= \ones - \sum_{\buyer \in \buyers} \allocationstar[\buyer]$.
\end{restatable}
Interestingly, this subgradient turns out to equal the negative excess demand in the market at the given prices.
As excess demand is an aggregate quantity, it is independent of buyer $\buyer$.
Indeed, the subgradients of \emph{all\/} the fictitious auctioneers are the same; so there is effectively just one auctioneer.
Based on this observation, we now present our \mydef{Nash Equilibrium (NE)-oracle gradient descent} algorithm (\Cref{alg:ne_oracle_gd}), which follows the subgradient of the auctioneer's value function, assuming access to a NE-oracle.
Given prices $\price \in \Rp^{\numgoods}$, this oracle returns a Nash equilibrium $\allocationstar$ of the $\numbuyers$-buyer concave game specified by \Cref{eq:normal_form_game}.
The algorithm then runs subgradient descent on the auctioneer's value function.
Overall, this approach corresponds to solving for a CE allocation and
prices via \emph{t\^{a}tonnement}, assuming the NE oracle is exact.
As NE-oracles are rarely exact, \Cref{alg:ne_oracle_gd} assumes a NE-oracle that finds a Nash equilibrium up to some approximation error $\delta$.
Finally, under standard assumptions (i.e., \Cref{assumption:comp_pseudo_assum}), the auctioneer's value function (\Cref{eq:value_func}) is convex and $\lipcont[\val]$-Lipschitz continuous in $\price$ with $\lipcont [\val] = \max_{\price\in \Rpp^{\numgoods}} \| \grad[\price] \val (\price) \|$.
\footnote{Although $\grad[\price] \val (\price)$ is not necessarily bounded at $\price=0$, we can remedy this fact by shifting $\price$ by a small constant $\varepsilon > 0$, albeit losing some accuracy.}
These properties imply that our NE-oracle gradient descent algorithm converges to
competitive equilibrium at a rate of $O(\nicefrac{1}{\sqrt{T}})$.
We include a more detailed statement
of the following theorem in the appendix.
\begin{restatable}{theorem}{thmStackelbergConvergence}
\label{thm:convergence_of_tatonnment}
\Cref{alg:ne_oracle_gd} (i.e., \emph{t\^atonnement}) converges to a competitive equilibrium in any influence CCH Fisher market $(\graph, \util, \budget)$ satisfying \Cref{assumption:comp_pseudo_assum} at a rate of $O(\nicefrac{1}{\sqrt{T}})$.
\end{restatable}
\begin{remark}
We can implement an approximate NE oracle by computing the buyers' equilibrium allocations via extragradient ascent~\cite{gorbunov2022extragradient}, which is guaranteed to converge to a Nash equilibrium at a rate of $O(\nicefrac{1}{T})$, as the $\numbuyers$-buyer concave game defined by \Cref{eq:normal_form_game} is monotone.
This observation gives rise to \Cref{alg:nested_ne_gd} (see Appendix, \nameref{app:algo} Section), which computes a competitive equilibrium in influence CCH Fisher markets in polynomial time.
\end{remark}
\begin{algorithm}[htbp]
\caption{NE-Oracle \emph{T\^{a}tonnement\/} For Influence Fisher Markets}
\textbf{Inputs:} $\graph, \util, \budget, \price^{(0)}, \learnrate, \delta$\\
\textbf{Outputs:} $\allocationstar, \pricestar$
\label{alg:ne_oracle_gd}
\begin{algorithmic}[1]
\For{$\iter = 1, \hdots, \iters$}
\State Find $\allocationp\in \Rp^{\numbuyers\times \numgoods}$ with $\allocationp\cdot\price^{(\iter-1)} \leq \budget$ such that, for all $\buyer \in \buyers$ and all $\allocation[\buyer] \in \Rp^{\numgoods}$ satisfying $\allocation[\buyer] \cdot\price^{(\iter-1)} \leq \budget[\buyer]$, $\util[\buyer] (\allocationp[\buyer], \allocationp[\nei])\geq \util[\buyer] (\allocation[\buyer], \allocationp[\nei]) - \delta$
\State Set $\allocation^{(\iter)}=\allocationp$
\State Set $\price^{(\iter)} = \project[\Rp^{\numgoods}] \left(\price^{(\iter-1)} - \learnrate(1 -\sum_{\buyer \in \buyers} \allocation[\buyer]^{(\iter)}) \right)$
\EndFor
\State \Return $\allocation^{(\iters)}, \price^{(\iters)}$
\end{algorithmic}
\end{algorithm}
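As a concrete illustration of \Cref{alg:ne_oracle_gd}, the following Python sketch (ours, for intuition only) instantiates the NE-oracle for the special case of linear influence utilities: there, a buyer's neighbors enter her utility only through an additive term that does not depend on her own allocation, so her best response reduces to standard linear Fisher demand and the oracle decouples across buyers; the price step is then a subgradient-descent step on the value function, whose subgradient is the negative excess demand.
\begin{verbatim}
import numpy as np

def linear_demand(v, budget, p):
    # Best response of a buyer with linear utility v . x under prices p:
    # spend the entire budget on a good maximizing the ratio v_j / p_j.
    j = int(np.argmax(v / p))
    x = np.zeros_like(p)
    x[j] = budget / p[j]
    return x

def tatonnement(V, budgets, eta=0.05, iters=2000):
    n, m = V.shape
    p = np.full(m, budgets.sum() / m)           # initial prices
    for _ in range(iters):
        # NE-oracle: with linear influence utilities, best responses
        # decouple, so the oracle is each buyer's own demand.
        X = np.array([linear_demand(V[i], budgets[i], p) for i in range(n)])
        excess = X.sum(axis=0) - 1.0            # excess demand
        p = np.maximum(p + eta * excess, 1e-6)  # subgradient step; prices
                                                # kept strictly positive
    return X, p

V = np.array([[2.0, 1.0], [1.0, 3.0]])
budgets = np.array([1.0, 2.0])
X, p = tatonnement(V, budgets)
print("prices:", p)
print("allocations:\n", X)
\end{verbatim}
On this toy instance, the prices approach the equilibrium prices $(1, 2)$, at which each buyer exhausts her budget on her preferred good and the market clears.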
\section{Experiments}
We ran a series of experiments
\footnote{We include a detailed description of our experimental setup in the Appendix.}
to see how the empirical convergence rates of \Cref{alg:ne_oracle_gd} compare to its theoretical guarantees under various utility structures.
We considered three standard utility structures: linear, in which a buyer aggregates her own and her neighbors' utilities in a utilitarian fashion; Cobb-Douglas, in which she aggregates them as a Nash social welfare; and Leontief, in which she aggregates them in an egalitarian fashion.
Each utility structure endows the objective function (\Cref{eq:obj_func}) and the value functions (\Cref{eq:value_func}) with different smoothness properties, which in turn affects the convergence behavior of our algorithms.
Let $\valuation[\buyer] \in \R^\numgoods$ be a vector of parameters that describes the utility function $\utilp[\buyer]: \Rp^{\numgoods}\to \Rp$ of buyer $\buyer \in \buyers$.
We consider the following (standard) utility functions: for all $\buyer \in \buyers$,
\begin{enumerate}
\item Linear: $\util[\buyer](\allocation[\buyer],\allocation[\nei]) = \sum_{k\in \{\buyer\}\cup \nei} \utilp[k](\allocation[k])$, where $\utilp[\buyer](\allocation[\buyer])=\sum_{\good\in \goods} \valuation[\buyer][\good] \allocation[\buyer][\good]$
\item Cobb-Douglas: $\util[\buyer](\allocation[\buyer], \allocation[\nei])=\prod_{k\in \{\buyer\}\cup \nei } \utilp[k](\allocation[k])$, where $\utilp[\buyer](\allocation[\buyer]) = \prod_{\good \in \goods} \allocation[\buyer][\good]^{\valuation[\buyer][\good]}$
\item Leontief: $\util[\buyer](\allocation[\buyer], \allocation[\nei]) = \min_{k\in \{\buyer\}\cup \nei } \utilp[k](\allocation[k])$, where $\utilp[\buyer](\allocation[\buyer]) = \min_{\good \in \goods} \left\{ \frac{\allocation[\buyer][\good]}{\valuation[\buyer][\good]}\right\}$
\end{enumerate}
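To make these utility structures concrete, the following NumPy sketch evaluates them for a given allocation profile; the array shapes and helper names are purely illustrative and not part of our formal model.
\begin{verbatim}
import numpy as np

def base_utility(x, v, kind):
    """Individual utility u'_i(x_i) of one buyer with parameter vector v."""
    if kind == "linear":
        return float(v @ x)
    if kind == "cobb-douglas":
        return float(np.prod(x ** v))
    if kind == "leontief":
        return float(np.min(x / v))
    raise ValueError(kind)

def influence_utility(i, X, V, neighbors, kind):
    """Neighborhood-aggregated utility u_i(x_i, x_{N(i)}): sum (utilitarian),
    product (Nash), or min (egalitarian) of the base utilities over {i} + N(i)."""
    group = [i] + list(neighbors[i])
    vals = [base_utility(X[k], V[k], kind) for k in group]
    if kind == "linear":
        return float(sum(vals))
    if kind == "cobb-douglas":
        return float(np.prod(vals))
    return float(min(vals))   # leontief
\end{verbatim}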
\begin{figure*}
\caption{Empirical convergence of \Cref{alg:ne_oracle_gd} (instantiated with an extragradient NE-oracle) in randomly generated influence Fisher markets with linear, Cobb-Douglas, and Leontief utilities.}
\label{fig:linear_market}
\label{fig:cd_market}
\label{fig:leontief_market}
\label{fig:convergence_results}
\end{figure*}
Assuming any of these three utility functions, we can solve for a Nash equilibrium among buyers by formulating a monotone variational inequality problem, and solving it via the extragradient method (EG) in $O(\nicefrac{1}{T})$ iterations \cite{gorbunov2022extragradient}.
Then, by using EG as the NE-oracle, we can efficiently compute an optimal $\allocationstar (\price)$ for any given $\price$, which yields \Cref{alg:nested_ne_gd} (see Appendix),
a specific implementation of \Cref{alg:ne_oracle_gd}.
\Cref{fig:convergence_results} depicts the empirical convergence of \Cref{alg:ne_oracle_gd} with EG as the NE-oracle.
We observe that convergence is fastest in influence Fisher markets with Cobb-Douglas utilities, followed by linear, and then Leontief.
For influence Fisher markets with Cobb-Douglas utilities, both the value and the objective function are differentiable; in fact, they are both twice continuously differentiable, making them both Lipschitz-smooth.
These factors combined seem to lead to a faster convergence rate than $O(\nicefrac{1}{\sqrt{T}})$.
On the other hand, for influence Fisher markets with linear utilities, we seem to obtain a tight convergence rate of $O(\nicefrac{1}{\sqrt{T}})$, which seems plausible, as the value function is not differentiable assuming linear utilities, and hence we are unlikely to achieve a better convergence rate.
Finally, influence Fisher markets with Leontief utilities, in which the objective function is not differentiable, are the hardest markets for our algorithm to solve.
Nonetheless, we still observe a decent convergence rate, one that appears only slightly slower than $O(\nicefrac{1}{\sqrt{T}})$.
\section{Conclusion}
In this paper, we studied a special case of Arrow-Debreu markets with social influence, which we call Fisher markets with social influence, or influence Fisher markets for short.
First, we extended known results on the existence of competitive equilibrium in markets with social influence to a larger, more natural class of markets.
Our proof proceeds by reducing an influence Fisher market to an auctioneer-buyer pseudo-game such that every generalized Nash equilibrium in the pseudo-game is a competitive equilibrium of the influence Fisher market.
The existence of generalized Nash equilibrium in pseudo-games thus implies the existence of competitive equilibrium in influence Fisher markets.
We then introduced a monotone jointly convex buyer-only pseudo-game as a generalization of the Eisenberg-Gale program, whose variational equilibria correspond to the competitive equilibria in influence Fisher markets.
In this pseudo-game, the duals of the individual buyers' utility-maximization problems constrained by the supply constraint comprise a system of $\numbuyers$ simultaneously-played zero-sum Stackelberg games, which simultaneously characterize the competitive equilibrium prices of the influence Fisher market.
We then showed that running gradient descent on the leaders'/auctioneers' value functions in these games is equivalent to solving the market via a variant of \emph{t\^{a}tonnement}, where in addition to the auctioneers iteratively adjusting prices, the buyers iteratively learn a Nash equilibrium in response to these prices.
Our results pave the way for future work developing methods to compute competitive equilibria in more general types of influence markets beyond those considered in this paper \cite{Chen2011MakretwithSocialInfluence}, and other market models with graphical structure, such as graphical economies \cite{Kakade2004GraphicalE}.
\section{Acknowledgments}
This research was partially supported by the National Science Foundation (CMMI-1761546).
\appendix
\phantom{ }
\section{Preliminaries} \label{app:prelim}
\begin{theorem} \label{thm:existence_GNE} \cite{facchinei2009generalized}
Consider a pseudo-game $\pgame \doteq (\numplayers, \actionspace, \actions, \actionconstr, \utilp)$ and suppose that for all players $\player \in \players$:
\begin{enumerate}
\item $\actionspace[\player]$ is a nonempty, convex, and compact set.
\item $\actions[\player]$ is continuous, i.e., upper and lower hemicontinuous, and for all action profiles $\naction[\player] \in \actionspace[-\player]$, $\actions[\player](\naction[\player])$ is nonempty, closed, and convex.
\item $\utilp[\player](\cdot, \naction[\player])$ is quasi-concave on $\actions[\player](\naction[\player])$.
\end{enumerate}
Then a GNE exists.
\end{theorem}
In this paper, we focus on pseudo-games with jointly convex constraints, for which a much more complete theory exists than for general pseudo-games.
We first introduce the \mydef{variational inequality problem (VI)} $\vi[\actions][\objs]$, which consists of finding a vector $\actionstar\in \actions$ such that $(\otheraction-\actionstar)^T\objs(\actionstar)\geq 0$, for all $\otheraction\in \actions$.
A valuable property of jointly convex pseudo-games is that we can reduce the problem of finding a GNE to solving a VI problem.
\begin{theorem}
\label{thm:jointly_convex_ve_gne}
\cite{facchinei2009generalized}
Let $\pgame \doteq (\numplayers, \actionspace, \actions, \constr, \utilp)$ be a jointly convex pseudo-game such that for all players $\player \in \players$, the utility functions $\utilp[\player]$ are continuously differentiable. Let $\actions=\{\action\in \actionspace\mid \constr(\action)\geq \zeros\}$ be the joint feasible action set, and let $\objs(\action)\coloneqq (\grad[{\action[\player]}]\utilp[\player](\action))_{\player\in \players}$. Then every solution of the variational inequality $\vi[\actions][\objs]$ is also a GNE of the pseudo-game $\pgame$.
\end{theorem}
\begin{remark}
Note that we can also relax this requirement so that each utility function $\utilp[\buyer]$ need only be continuous rather than continuously differentiable. When the utility functions are merely continuous, the set of solutions of the generalized variational inequality $\vi[\actions][\objs]$ corresponds to the set of GNE of the pseudo-game $\pgame$.
\end{remark}
We call a GNE of a jointly convex pseudo-game that is also a solution to $\vi[\actions][\objs]$ a \mydef{variational equilibrium (VE)}.
Note that the set of VE is a subset of the set of GNE; the reverse inclusion, however, does not hold in general, unless $\actionspace \subseteq \actions$.
Further, when $\pgame$ is a game, GNE and VE coincide; we refer to this set simply as NE.
\begin{theorem}\label{thm:jointly_convex_kkt}
\cite{facchinei2009generalized}
Let $\pgame \doteq (\numplayers, \actionspace, \actions, \constr, \utilp)$ be a pseudo-game with jointly convex constraints such that $\utilp[\player]$ and $\constr$ are continuously differentiable. Then the following statements hold:
\begin{enumerate}
\item Let $\actionstar$ be a solution of the $\vi[\actions][\objs]$ such that the KKT conditions hold with some multiplier $\pricelangstar$. Then $\actionstar$ is a GNE of the pseudo-game, and the corresponding KKT conditions are satisfied with $\pricelangstar[1]=\hdots=\pricelangstar[\numplayers]=\pricelangstar$.
\item Conversely, assume that $\actionstar$ is a GNE of the pseudo-game $\pgame$ such that the KKT conditions are satisfied with $\pricelangstar[1]=\hdots=\pricelangstar[\numplayers]=\pricelangstar$. Then $(\actionstar, \pricelangstar)$ is a KKT point of $\vi[\actions][\objs]$, and $\actionstar$ itself is a solution of $\vi[\actions][\objs]$.
\end{enumerate}
\end{theorem}
\section{Algorithms}\label{app:algo}
\begin{algorithm}[H]\label{alg:nested_ne_gd}
\caption{Nested-NE T\^{a}tonnement For Influence Fisher Markets}
\textbf{Inputs:} $\graph, \util, \budget, \learnrate[\allocation], \learnrate[\price], \allocation^{(0)}, \price^{(0)}$\\
\textbf{Outputs:} $\allocationstar, \pricestar$
\begin{algorithmic}[1]
\For{$\iterouter = 1, \hdots, \iters[\price]$}
\For{$\iterinner=1, \hdots, \iters[\allocation]$}
\State For all $\buyer\in \buyers$,
$\allocation[\buyer]^{(\iterinner+1/2)} = \project[{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer] \cdot \price^{(\iterouter-1)} \leq \budget[\buyer]}]
\left(\allocation[\buyer]^{(\iterinner)} + \learnrate[\allocation] \grad[{\allocation[\buyer]}]
\util[\buyer] (\allocation[\buyer]^{(\iterinner)}, \allocation[\nei]^{(\iterinner)}) \right)$
\State For all $\buyer\in \buyers$,
$\allocation[\buyer]^{(\iterinner+1)} = \project[{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer] \cdot \price^{(\iterouter-1)} \leq \budget[\buyer]}]
\left(\allocation[\buyer]^{(\iterinner)} + \learnrate[\allocation] \grad[{\allocation[\buyer]}]
\util[\buyer] (\allocation[\buyer]^{(\iterinner+1/2)}, \allocation[\nei]^{(\iterinner+1/2)}) \right)$
\EndFor
\State Set $\allocation^{(\iterouter)}= \allocation^{(\iters[\allocation])}$
\State Set $\price^{(\iterouter)} = \project[\Rp^{\numgoods}] \left(\price^{(\iterouter-1)} - \learnrate[\price] \left(1-\sum_{\buyer\in \buyers} \allocation[\buyer]^{(\iterouter)}\right) \right)$
\EndFor
\State \Return $\allocation^{(\iters[\price])}, \price^{(\iters[\price])}$
\end{algorithmic}
\end{algorithm}
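The inner loop of \Cref{alg:nested_ne_gd} can likewise be sketched in Python. Here \texttt{grad\_u} and \texttt{project\_budget} are hypothetical helpers returning, respectively, the gradient of a buyer's utility with respect to her own allocation and the Euclidean projection onto her budget set at the current prices; the sketch is illustrative only.
\begin{verbatim}
import numpy as np

def extragradient_ne(X, p, b, grad_u, project_budget, eta, T_inner):
    """Inner loop of Algorithm 2: simultaneous extragradient ascent of all buyers
    on their utilities over their budget sets at fixed prices p."""
    n = X.shape[0]
    for _ in range(T_inner):
        G = np.stack([grad_u(i, X) for i in range(n)])              # gradients at X
        X_half = np.stack([project_budget(X[i] + eta * G[i], p, b[i])
                           for i in range(n)])                      # extrapolation step
        G_half = np.stack([grad_u(i, X_half) for i in range(n)])    # gradients at X_half
        X = np.stack([project_budget(X[i] + eta * G_half[i], p, b[i])
                      for i in range(n)])                           # update step
    return X
\end{verbatim}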
\section{Omitted Proof}\label{app:proof}
\thmExistence*
\begin{proof}
We will define a pseudo-game $\pgame$ each of whose generalized Nash equilibria is a competitive equilibrium of the influence Fisher market $(\graph, \util, \budget)$. For convenience, we write $\budget[ ] := \sum_{\buyer\in \buyers}\budget[\buyer]$.
First, there will be $\numbuyers+1$ players: the $\numbuyers$ buyers and a fictitious player who chooses prices, whom we call the price player.
Each buyer will choose an allocation $\allocation[\buyer]\in \actionspace[\buyer]=\Rp^{\numgoods}$ and the price player will choose a price $\price\in \actionspace[\priceplayer]=\Rp^{\numgoods}$.
For convenience, let $\allocation[-\buyer]$ denote the allocations of all buyers except buyer $\buyer$, and let $\allocation$ denote the allocations of all buyers.
Then, for each buyer $\buyer\in \buyers$, the feasible action set, given the other players' actions, is $\actions[\buyer](\allocation[-\buyer], \price) = \{ \allocation[\buyer] \in \actionspace[\buyer] \mid \actionconstr[\buyer](\allocation[\buyer], \allocation[-\buyer], \price)=\budget[\buyer]-\allocation[\buyer]\cdot \price \geq \zeros\}$, and for the price player, the feasible set is the fixed set $\actions[\priceplayer]=\{\price\in \actionspace[\priceplayer]\mid \ones^T\price= \budget[ ]\}$.
Finally, each buyer $\buyer$ is maximizing her utility defined by $\utilp[\buyer](\allocation,\price)=\util[\buyer](\allocation[\buyer], \allocation[\nei])$, while the price player maximizes her utility $\utilp[\priceplayer](\allocation, \price)=\price\cdot \excessd$ where $\excessd=\left(\sum_{\buyer\in \buyers}\allocation[\buyer]\right) - \ones$.
By \Cref{thm:existence_GNE}, we know that there exists a GNE $(\allocationstar,\pricestar)$ in $\pgame$. Next, we want to show that the GNE $(\allocationstar, \pricestar)$ is a competitive equilibrium of the influence Fisher market $(\graph, \util, \budget)$.
First, by the definition of GNE, we know that $\forall\buyer\in \buyers$, $\utilp[\buyer](\allocationstar, \pricestar)\geq \utilp[\buyer](\allocation[\buyer], \allocationstar[-\buyer], \pricestar)$ for all $\allocation[\buyer]\in \actions[\buyer](\allocationstar[-\buyer], \pricestar)$. That is, for any $\allocation[\buyer]$ s.t. $\allocation[\buyer]\cdot\pricestar\leq\budget[\buyer]$, $\util[\buyer](\allocationstar[\buyer], \allocationstar[\nei])\geq \util[\buyer](\allocation[\buyer], \allocationstar[\nei])$ by our construction of $\pgame$. Thus, $(\allocationstar, \pricestar)$ is utility maximizing.
Next, we will show that the market clears by showing that both Walras' law and the feasibility condition hold.
To do so, we first show that $\forall \buyer\in \buyers$, $\allocationstar[\buyer]\cdot \pricestar=\budget[\buyer]$.
For any $\buyer\in \buyers$, there exists $\allocation[\buyer]\in \R^{\numgoods}$ such that $\util[\buyer](\allocation[\buyer], \allocationstar[\nei])>\util[\buyer](\allocationstar[\buyer], \allocationstar[\nei])$, and since $\util[\buyer](\cdot,\allocationstar[\nei])$ is quasi-concave, for any $0<t<1$ we have $\util[\buyer](t\allocationstar[\buyer]+(1-t)\allocation[\buyer], \allocationstar[\nei])>\util[\buyer](\allocationstar[\buyer], \allocationstar[\nei])$. If $\allocationstar[\buyer]\cdot\pricestar<\budget[\buyer]$, we can pick $t$ sufficiently close to $1$ such that $\allocationp[\buyer]=t\allocationstar[\buyer]+(1-t)\allocation[\buyer]$ satisfies $\allocationp[\buyer]\cdot\pricestar\leq \budget[\buyer]$ and $\util[\buyer](\allocationp[\buyer], \allocationstar[\nei])>\util[\buyer](\allocationstar[\buyer], \allocationstar[\nei])$, which contradicts the utility-maximization property established above. Hence $\allocationstar[\buyer]\cdot \pricestar=\budget[\buyer]$ for every $\buyer\in \buyers$. Summing across the buyers, we get $\sum_{\buyer\in \buyers}\allocationstar[\buyer]\cdot \pricestar=\budget[ ]$; since $\ones^T\pricestar=\budget[ ]$ by the definition of $\actions[\priceplayer]$, this yields $\pricestar\cdot (\sum_{\buyer\in \buyers}\allocationstar[\buyer]) - \pricestar\cdot \ones=0$, i.e., $\pricestar\cdot (\sum_{\buyer\in \buyers}\allocationstar[\buyer]-\ones)=\pricestar\cdot \excessdstar=0$. Therefore, $(\allocationstar, \pricestar)$ satisfies Walras' law.
Finally, let $e_{\good}\in \R^{\numgoods}$ be the vector whose components are all 0, except the $\good$th, which is 1. Then $\budget[ ]\, e_{\good}\in \actions[\priceplayer]=\{\price\in \actionspace[\priceplayer]\mid \ones^T\price= \budget[ ]\}$. Thus, since the price player's utility is maximized at $\pricestar$, we have $\utilp[\priceplayer](\allocationstar, \pricestar)\geq \utilp[\priceplayer](\allocationstar, \budget[ ]\, e_{\good})$ for all $\good\in \goods$. That is, $0=\pricestar\cdot \excessdstar \geq \budget[ ]\, e_{\good}\cdot \excessdstar = \budget[ ]\, \excessdstar[\good]$, and since $\budget[ ]>0$, it follows that $\sum_{\buyer\in \buyers}\allocationstar[\buyer][\good]\leq 1$ for all $\good\in \goods$.
\end{proof}
\thmPseudoPrimal*
\begin{proof}
First, by \Cref{thm:jointly_convex_ve_gne}, there exists a VE $\allocationstar$ of $\pgame$; we want to show that $\allocationstar$ is a competitive equilibrium allocation of the influence Fisher market $(\graph, \util, \budget)$.
For each buyer $\buyer\in \buyers$, the Lagrangian is given by:
\begin{align*}
\lang[\buyer](\allocation[\buyer], \allocationstar[-\buyer], \pricelang[\buyer], \bmu[\buyer])
&= -\budget[\buyer]\log(\util[\buyer](\allocation[\buyer], \allocationstar[\nei]))\\
&+ \sum_{\good\in \goods} \pricelang[\buyer][\good] (\allocation[\buyer][\good] + \sum_{k\neq \buyer} \allocationstar[k][\good]-1)
+ \sum_{\good\in \goods} \bmu[\buyer][\good] (-\allocation[\buyer][\good])
\end{align*}
and $(\allocationstar[\buyer], \pricelangstar[\buyer])$ satisfies the KKT conditions of buyer $\buyer$ iff
\begin{itemize}
\item (Stationarity) $\grad[{\allocation[\buyer]}] \lang[\buyer](\allocationstar[\buyer], \allocationstar[-\buyer], \pricelangstar[\buyer], \bmustar[\buyer])=\zeros$.
\item (Complementary Slackness) $\forall \good\in \goods,\; \pricelangstar[\buyer][\good](\sum_{k\in \buyers} \allocationstar[k][\good]-1)=0$, $\bmustar[\buyer][\good](-\allocationstar[\buyer][\good])=0$.
\item (Primal Feasibility) $\forall \good\in \goods,\;
\sum_{k\in \buyers} \allocationstar[k][\good]-1\leq 0$, $-\allocationstar[\buyer][\good]\leq 0$.
\item (Dual Feasibility) $\forall \good\in \goods,\;\pricelangstar[\buyer][\good]\geq 0$, $\bmustar[\buyer][\good]\geq 0$.
\end{itemize}
Then, since $\allocationstar$ is a VE of the jointly-convex pseudo-game $\pgame$, by \Cref{thm:jointly_convex_kkt}, there exists a common optimal multiplier $\pricelangstar[1]=\hdots=\pricelangstar[\numbuyers]=\pricestar$ such that for each buyer $\buyer$, $(\allocationstar[\buyer], \pricestar)$ satisfies her KKT conditions. We will show that $(\allocationstar, \pricestar)$ is a competitive equilibrium of the influence Fisher market $(\graph, \util, \budget)$.
From the complementary slackness and primal feasibility conditions, we know that
\begin{align*}
\sum_{\good\in \goods} \pricestar[\good](\sum_{\buyer\in \buyers}\allocationstar[\buyer][\good]-1)=0\\
\forall \good\in \goods,\; \sum_{\buyer\in \buyers} \allocationstar[\buyer][\good]\leq 1
\end{align*}
Thus, $(\allocationstar, \pricestar)$ satisfies the market clearance condition.
Moreover, from the stationarity conditions, we get for any $\buyer\in \buyers$:
\begin{align*}
\frac{\partial \lang[\buyer]}{\partial \allocation[\buyer][\good]}
&= \frac{-\budget[\buyer]}{\util[\buyer](\allocationstar[\buyer], \allocationstar[\nei])}\left[
\frac{\partial \util[\buyer]}{\partial \allocation[\buyer][\good]}
\right]_{\allocation[\buyer]=\allocationstar[\buyer]}
+ \pricestar[\good] - \bmustar[\buyer][\good]
= 0\\
\pricestar[\good]
&= \frac{\budget[\buyer]}{\util[\buyer](\allocationstar[\buyer], \allocationstar[\nei])}\left[
\frac{\partial \util[\buyer]}{\partial \allocation[\buyer][\good]}
\right]_{\allocation[\buyer]=\allocationstar[\buyer]} - \bmustar[\buyer][\good]\\
\pricestar[\good]\allocationstar[\buyer][\good]
&= \frac{\budget[\buyer]}{\util[\buyer](\allocationstar[\buyer], \allocationstar[\nei])}\left[
\frac{\partial \util[\buyer]}{\partial \allocation[\buyer][\good]}
\right]_{\allocation[\buyer]=\allocationstar[\buyer]} \allocationstar[\buyer][\good] - \bmustar[\buyer][\good] \allocationstar[\buyer][\good]
\\
\sum_{\good\in \goods}\pricestar[\good]\allocationstar[\buyer][\good]
&= \frac{\budget[\buyer]}{\util[\buyer](\allocationstar[\buyer], \allocationstar[\nei])}
\sum_{\good\in \goods}\left[
\frac{\partial \util[\buyer]}{\partial \allocation[\buyer][\good]}
\right]_{\allocation[\buyer]=\allocationstar[\buyer]} \allocationstar[\buyer][\good]\\
\sum_{\good\in \goods}\pricestar[\good]\allocationstar[\buyer][\good]
&= \frac{\budget[\buyer]}{\util[\buyer](\allocationstar[\buyer], \allocationstar[\nei])} \util[\buyer](\allocationstar[\buyer], \allocationstar[\nei])
\\
\sum_{\good\in \goods}\pricestar[\good]\allocationstar[\buyer][\good]
&= \budget[\buyer]
\end{align*}
where the third line follows from multiplying both sides by $\allocationstar[\buyer][\good]$, the fourth from summing over $\good\in \goods$ and applying complementary slackness (so that $\bmustar[\buyer][\good]\allocationstar[\buyer][\good]=0$), and the fifth from Euler's theorem for homogeneous functions.
Note that the left-hand side of the final equality is exactly the spending of buyer $\buyer$ at $(\allocationstar, \pricestar)$. This result implies that each buyer exhausts her budget exactly; in particular, no buyer spends more than her budget.
Finally, we want to show that $(\allocationstar, \pricestar)$ is utility maximizing. From the stationarity conditions again, we get for any $\buyer\in \buyers$:
\begin{align*}
\frac{\partial \lang[\buyer]}{\partial \allocation[\buyer][\good]}
&= \frac{-\budget[\buyer]}{\util[\buyer](\allocationstar[\buyer], \allocationstar[\nei])}\left[
\frac{\partial \util[\buyer]}{\partial \allocation[\buyer][\good]}
\right]_{\allocation[\buyer]=\allocationstar[\buyer]}
+ \pricestar[\good] - \bmustar[\buyer][\good]
= 0.
\end{align*}
If $\allocationstar[\buyer][\good]>0$, then by the complementary slackness condition, $\bmustar[\buyer][\good]=0$, which gives us
\begin{align*}
\pricestar[\good]
&= \frac{\budget[\buyer]}{\util[\buyer](\allocationstar[\buyer], \allocationstar[\nei])}
\left[
\frac{\partial \util[\buyer]}{\partial \allocation[\buyer][\good]}
\right]_{\allocation[\buyer]=\allocationstar[\buyer]}\\
\frac{\util[\buyer](\allocationstar[\buyer], \allocationstar[\nei]) }{\budget[\buyer]}
&= \frac{ \left[
\frac{\partial \util[\buyer]}{\partial \allocation[\buyer][\good]}
\right]_{\allocation[\buyer]=\allocationstar[\buyer]}}{\pricestar[\good]}
\end{align*}
The last condition is exactly the equimarginal principle \citep{mas-colell}; hence $(\allocationstar, \pricestar)$ is utility maximizing.
Therefore, any VE $\allocationstar$ of $\pgame$ constitutes an equilibrium allocation of $(\graph, \util, \budget)$, and the optimal Lagrange multiplier associated with the joint supply constraint gives the corresponding equilibrium prices.
\end{proof}
\thmPseudoConvergence*
\begin{proof}
Consider the variational inequality problem $\vi(\actions, \objs)$ for the jointly convex pseudo-game $\pgame$ defined in \Cref{thm:pseudo_game_equ} with $\actions=\{\allocation\in \R^{\numbuyers\times \numgoods}\mid \ones - \sum_{\buyer\in \buyers} \allocation[\buyer]\geq \zeros\}$ and $\objs(\allocation)\coloneqq (\grad[{\allocation[\buyer]}]\utilp[\buyer](\allocation))_{\buyer\in \buyers}$. Since each $\util[\buyer]$ is jointly concave in the buyers' allocations, $\utilp[\buyer]$ defined by $\utilp[\buyer](\allocation)=\budget[\buyer]\log(\util[\buyer](\allocation[\buyer], \allocation[\nei]))$ is also jointly concave in the buyers' allocations, since composing a concave function with the concave and nondecreasing $\log$ preserves concavity. Thus, the operator $\objs$ is monotone.
Moreover, since each $\util[\buyer]$ is twice differentiable, the operator $\objs$ is $\lipcont[\objs]$-Lipschitz, where $\lipcont[\objs]=\max_{\allocation\in \Rp^{\numbuyers\cdot \numgoods}: \allocation\in \actions}\|\grad[\allocation]\objs(\allocation)\|$. Note that although $\objs$ is not differentiable at $\allocation=\zeros$, as $\util[\buyer](\zeros)=0$ for all $\buyer$, we can remedy this by shifting all $\util[\buyer]$ up by a small constant $\varepsilon$, albeit at the cost of some accuracy.
Then, since this variational inequality is monotone and Lipschitz, the extragradient method (EG) converges in last iterate at a rate of $O(\nicefrac{1}{T})$ \cite{gorbunov2022extragradient}.
\end{proof}
\begin{lemma}\label{lemma:opt_lambda}
The optimization problem
\begin{align}
\max_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer]\cdot \price \leq \budget[\buyer]} \budget[\buyer] \log(\util[\buyer](\allocation[\buyer], \allocation[\nei]))
\end{align}
is equivalent to the optimization problem
\begin{align}
\max_{\allocation[\buyer] \in \Rp^{\numgoods}} \budget[\buyer] \log(\util[\buyer](\allocation[\buyer], \allocation[\nei])) + \budget[\buyer] - \allocation[\buyer]\cdot \price
\end{align}
\end{lemma}
\begin{proof}
The Lagrangian associated with $\max_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer]\cdot \price \leq \budget[\buyer]} \budget[\buyer] \log(\util[\buyer](\allocation[\buyer], \allocation[\nei]))$ is given by
\begin{align*}
\lang(\allocation[\buyer], \langmult[ ], \bmu)
= \budget[\buyer] \log(\util[\buyer](\allocation[\buyer], \allocation[\nei]))
+ \langmult[ ](\budget[\buyer]-\allocation[\buyer]\cdot\price)
+ \bmu^T\allocation[\buyer]
\end{align*}
where $\langmult[ ]\in \Rp$ and $\bmu\in \Rp^{\numgoods}$ are the Lagrange multipliers.
Let $(\allocationstar[\buyer], \langmultstar[ ], \bmustar)$ be an optimal primal-dual solution. From the KKT stationarity condition for this Lagrangian, it holds that, for all $\good\in \goods$,
\begin{align*}
\frac{\budget[\buyer]}{\util[\buyer](\allocationstar[\buyer], \allocation[\nei])}
\left[ \frac{\partial \util[\buyer]}{\partial \allocation[\buyer][\good]}
\right]_{\allocation[\buyer]=\allocationstar[\buyer]}
-\langmultstar[ ]\price[\good] + \bmustar[\good] = 0\\
\frac{\budget[\buyer]}{\util[\buyer](\allocationstar[\buyer], \allocation[\nei])}
\left[ \frac{\partial \util[\buyer]}{\partial \allocation[\buyer][\good]}
\right]_{\allocation[\buyer]=\allocationstar[\buyer]} \allocationstar[\buyer][\good]
-\langmultstar[ ]\price[\good]\allocationstar[\buyer][\good] + \bmustar[\good]\allocationstar[\buyer][\good] = 0\\
\frac{\budget[\buyer]}{\util[\buyer](\allocationstar[\buyer], \allocation[\nei])}
\left[ \frac{\partial \util[\buyer]}{\partial \allocation[\buyer][\good]}
\right]_{\allocation[\buyer]=\allocationstar[\buyer]} \allocationstar[\buyer][\good]
-\langmultstar[ ]\price[\good]\allocationstar[\buyer][\good] = 0,
\end{align*}
where the second line is obtained by multiplying both sides by $\allocationstar[\buyer][\good]$, and the third line by the complementary slackness condition, i.e., $\forall \good\in \goods,\; \bmustar[\good]\allocationstar[\buyer][\good]=0$.
Summing up across all $\good\in \goods$ on both sides yields:
\begin{align*}
\frac{\budget[\buyer]}{\util[\buyer](\allocationstar[\buyer], \allocation[\nei])}
\sum_{\good\in \goods}\left[ \frac{\partial \util[\buyer]}{\partial \allocation[\buyer][\good]}
\right]_{\allocation[\buyer]=\allocationstar[\buyer]} \allocationstar[\buyer][\good]
- \langmultstar[ ]\sum_{\good\in \goods} \price[\good]\allocationstar[\buyer][\good]
=0\\
\frac{\budget[\buyer]}{\util[\buyer](\allocationstar[\buyer], \allocation[\nei])} \util[\buyer](\allocationstar[\buyer], \allocation[\nei])
- \langmultstar[ ]\sum_{\good\in \goods} \price[\good]\allocationstar[\buyer][\good]
=0\\
\budget[\buyer] - \langmultstar[ ]\budget[\buyer] = 0\\
\langmultstar[ ]= 1,
\end{align*}
where the second line is obtained from Euler's theorem for homogeneous functions, and the third line from the complementary slackness condition again, i.e., $\langmultstar[ ](\budget[\buyer]-\allocationstar[\buyer]\cdot\price)= \langmultstar[ ](\budget[\buyer]-
\sum_{\good\in \goods}\price[\good]\allocationstar[\buyer][\good])=0$; the last line then follows since $\budget[\buyer]>0$.
Hence, plugging $\langmultstar[ ]=1$ back into the Lagrangian restricted to $\Rp^{\numgoods}$, we get:
\begin{align*}
&\max_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer]\cdot \price \leq \budget[\buyer]} \budget[\buyer] \log(\util[\buyer](\allocation[\buyer], \allocation[\nei]))\\
= &\max_{\allocation[\buyer] \in \Rp^{\numgoods}} \budget[\buyer] \log(\util[\buyer](\allocation[\buyer], \allocation[\nei])) + \langmultstar[ ](\budget[\buyer] - \allocation[\buyer]\cdot \price)\\
= &\max_{\allocation[\buyer] \in \Rp^{\numgoods}} \budget[\buyer] \log(\util[\buyer](\allocation[\buyer], \allocation[\nei])) + \budget[\buyer] - \allocation[\buyer]\cdot \price
\end{align*}
\end{proof}
\thmPseudoDual*
\begin{proof}
Since for each $\buyer\in \buyers$, $\pricestar$ is the optimal Lagrangian multiplier, we can characterize the price $\pricestar$ through the Lagrangian dual function:
\begin{align*}
g_{\buyer}(\price)
&= \max_{\allocation[\buyer]\in \Rp^{\numgoods}} \lang[\buyer](\allocation[\buyer], \allocationstar[\nei], \price)\\
&= \max_{\allocation[\buyer]\in \Rp^{\numgoods}}
\bigg\{\budget[\buyer]\log(\util[\buyer](\allocation[\buyer], \allocationstar[\nei])) \\
&+ \sum_{\good\in \goods} \price[\good] (1-\allocation[\buyer][\good] - \sum_{k\neq \buyer} \allocationstar[k][\good])
\bigg\}\\
&= \sum_{\good\in \goods}\price[\good] - \sum_{k\neq \buyer}\sum_{\good\in \goods} \price[\good]\allocationstar[k][\good]\\
&+ \max_{\allocation[\buyer]\in \Rp^{\numgoods}}
\left\{\budget[\buyer]\log(\util[\buyer](\allocation[\buyer], \allocationstar[\nei]))
- \sum_{\good\in \goods}\price[\good]\allocation[\buyer][\good]
\right\}\\
&= \sum_{\good\in \goods}\price[\good] - \sum_{k\neq \buyer}\sum_{\good\in \goods} \price[\good]\allocationstar[k][\good]\\
&+ \max_{\allocation[\buyer]\in \Rp^{\numgoods}}
\left\{\budget[\buyer]\log(\util[\buyer](\allocation[\buyer], \allocationstar[\nei]))
+ \budget[\buyer]-\price\cdot \allocation[\buyer]\right\}-\budget[\buyer]\\
&=\sum_{\good\in \goods}\price[\good] - \sum_{k\neq \buyer}\sum_{\good\in \goods} \price[\good]\allocationstar[k][\good]\\
&+\max_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer]\cdot \price \leq \budget[\buyer]} \budget[\buyer] \log(\util[\buyer](\allocation[\buyer], \allocationstar[\nei])) - \budget[\buyer]
\end{align*}
where the last equality follows from \Cref{lemma:opt_lambda}. Therefore, the dual of the optimization problem for buyer $\buyer$ is $\min_{\price\in \Rp^{\numgoods}} g_{\buyer}(\price)
=\min_{\price\in \Rp^{\numgoods}}
\sum_{\good\in \goods}\price[\good] - \sum_{k\neq \buyer}\sum_{\good\in \goods} \price[\good]\allocationstar[k][\good]
+\max_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer]\cdot \price \leq \budget[\buyer]} \budget[\buyer] \log(\util[\buyer](\allocation[\buyer], \allocationstar[\nei])) - \budget[\buyer]$.
\end{proof}
\begin{assumption} \label{assumption:envelope_thm}
1.~$\obj$, $\constr[1],\hdots, \constr[K]$ are continuous and concave in $\inner$; 2.~$\grad[\outer]\obj$, $\grad[\outer]\constr[1],\hdots, \grad[\outer]\constr[K]$ are continuous in $(\outer, \inner)$; and 3.~$\forall \outer \in \outerset$, $\exists \innerp\in \innerset$ s.t.\ $\constr[k](\outer, \innerp)>0$ for all $k=1,\hdots, K$.
\end{assumption}
\begin{lemma}[Subdifferential Envelope Theorem]\cite{goktas2021minmax}
\label{lemma:subdiff_envelop}
Consider the value function $\val(\outer)=\max_{\inner\in \innerset:\constr(\outer,\inner)\geq \zeros} \obj(\outer,\inner)$. Let $Y^*(\outer)=\argmax_{\inner\in \innerset: \constr(\outer,\inner)\geq \zeros }\obj(\outer,\inner)$ and suppose \Cref{assumption:envelope_thm} holds. Then, at any point $\outerp\in \outerset$, $\subdiff[\outer] \val(\outerp)=$
\begin{align*}
\text{conv}\left(
\bigcup_{\inner^*(\outerp)\in \innerset^*(\outerp)}
\bigcup_{\lambda_k(\outerp, \inner^*(\outerp))\in \Lambda(\outerp, \inner^*(\outerp))}
\left\{
\grad[\outer] \obj(\outerp, \inner^*(\outerp)) \right. \right.\\
\left.\left. + \sum_{k=1}^K \lambda_k(\outerp,\inner^*(\outerp)) \grad[\outer] \constr[k](\outerp, \inner^*(\outerp))
\right\}
\right),
\end{align*}
where $\subdiff$ is the subdifferential operator, $\bm{\lambda}(\outerp, \inner^*(\outerp))=(\lambda_1(\outerp, \inner^*(\outerp)), \hdots, \lambda_K(\outerp, \inner^*(\outerp))) \in \Lambda(\outerp, \inner^*(\outerp))$ are the Lagrange multipliers associated with $\inner^*(\outerp)\in Y^*(\outerp)$,
and $\text{conv}$ is the convex hull operator.
\end{lemma}
\thmSubdiff*
\begin{proof}
For all goods $\good\in \goods$,
\begin{align}
\subdiff[{\price[\good]}]
\val[\buyer](\price)
&= \subdiff[{\price[\good]}] \left(
\max_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer] \cdot \price \leq \budget[\buyer]}
\sum_{\good\in \goods} \price[\good] \left( 1 - \sum_{k \neq \buyer} \allocationstar[k][\good] \right) \right.\nonumber
\\
&\left.+ \budget[\buyer] \log( \util[\buyer] (\allocation[\buyer], \allocationstar[\nei]))
\right)\\
&= \subdiff[{\price[\good]}] \left(
\sum_{\good\in \goods} \price[\good] \left( 1 - \sum_{k \neq \buyer} \allocationstar[k][\good] \right) \right. \nonumber\\
&\left.+ \max_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer] \cdot \price \leq \budget[\buyer]} \budget[\buyer] \log( \util[\buyer] (\allocation[\buyer], \allocationstar[\nei]))
\right)\\
&= \subdiff[{\price[\good]}] \left(
\sum_{\good\in \goods} \price[\good] \left( 1 - \sum_{k \neq \buyer} \allocationstar[k][\good] \right) \right) \nonumber\\
& + \subdiff[{\price[\good]}]\left(\max_{\allocation[\buyer] \in \Rp^{\numgoods}: \allocation[\buyer] \cdot \price \leq \budget[\buyer]} \budget[\buyer] \log( \util[\buyer] (\allocation[\buyer], \allocationstar[\nei])) \right)\\
&= \left( 1 - \sum_{k \neq \buyer} \allocationstar[k][\good] \right)
\nonumber \\
&+ \subdiff[{\price[\good]}]\left(\max_{\allocation[\buyer] \in \Rp^{\numgoods}} \budget[\buyer] \log( \util[\buyer] (\allocation[\buyer], \allocationstar[\nei])) + \budget[\buyer] - \allocation[\buyer]\cdot \price \right) \label{eq:use_opt_lambda}\\
&= 1 - \sum_{k \neq \buyer} \allocationstar[k][\good] - \allocationstar[\buyer][\good] \label{eq:use_subdiff}\\
&= 1 - \sum_{k\in \buyers} \allocationstar[k][\good]
\end{align}
where \cref{eq:use_opt_lambda} is from \Cref{lemma:opt_lambda}, and \cref{eq:use_subdiff} is from \Cref{lemma:subdiff_envelop}.
\end{proof}
\thmStackelbergConvergence*
\begin{proof}
First, note that for each buyer $\buyer$, the dual of her optimization problem, given by \cref{eq:individual_dual}, is convex in $\price$; thus, the value function $\val$ is also convex in $\price$, as it is a sum of the duals.
Moreover, by \Cref{thm:subdiff_equal_excess_demands}, we know that $\grad[\price]\val(\price)=\ones-\sum_{\buyer\in \buyers}\allocationstar[\buyer]$. Note that when $\price>\zeros$, $\allocationstar$ is bounded, since $\allocationstar[\buyer][\good]\leq \nicefrac{\budget[\buyer]}{\price[\good]}$ for each $\buyer\in \buyers$ and $\good\in\goods$, so $\grad[\price]\val(\price)$ is also bounded.
Although $\allocationstar[\buyer]$ is not necessarily bounded as $\price\to\zeros$, we can remedy this by shifting $\price$ up by a small constant $\varepsilon>0$, albeit at the cost of some accuracy.
Therefore, $\val$ is $\lipcont[\val]$-Lipschitz on $\Rpp^{\numgoods}$, where $\lipcont[\val]=\max_{\price\in \Rpp^{\numgoods}}\|\grad[\price]\val(\price)\|$.
Then, since \Cref{alg:ne_oracle_gd} performs projected subgradient descent on $\val$, it converges in average iterate at a rate of $O(\nicefrac{1}{\sqrt{T}})$.
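For completeness, we recall the standard projected subgradient bound underlying this rate (a textbook fact, restated here in our notation): if $D \geq \|\price^{(0)} - \pricestar\|$ for some minimizer $\pricestar$ of $\val$ and the step size is set to $\learnrate = D/(\lipcont[\val]\sqrt{T})$, then
\begin{align*}
\val\left(\frac{1}{T}\sum_{\iter=1}^{T}\price^{(\iter)}\right) - \val(\pricestar)
\leq \frac{D \lipcont[\val]}{\sqrt{T}},
\end{align*}
which is exactly the $O(\nicefrac{1}{\sqrt{T}})$ average-iterate guarantee claimed in \Cref{thm:convergence_of_tatonnment}.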
\end{proof}
\section{Experimental Setup}
The main goal of our experiment is to understand the empirical convergence rate of \Cref{alg:ne_oracle_gd} in different Fisher markets, in which the objective function in \Cref{eq:obj_func} satisfies different smoothness properties. To answer the question, we ran multiple experiments, each time recording the prices and allocations computed by \Cref{alg:nested_ne_gd} during each iteration $\iterouter$ of the main (outer) loop. For each run of each algorithm on each market with each set of initial conditions, we then computed the objective function’s value for the iterates, i.e., $\obj(\allocation^{(\iterouter)}, \price^{(\iterouter)})$, which we plot in \Cref{fig:convergence_results}.
\paragraph{Hyperparameters}
We randomly initialized 50 different linear, Cobb-Douglas, and Leontief Fisher markets with social influence, each with 3 buyers and 3 goods. Buyer $\buyer$’s budget $\budget[\buyer]$ was drawn randomly from a uniform distribution ranging from 5 to 15 (i.e., $U[5,15]$), while each buyer $\buyer$’s valuation for good $\good$, $\valuation[\buyer][\good]$, was drawn randomly from $U[5,35]$. The social network graph $\graph$ was also generated uniformly at random.
For influence Fisher markets with linear utilities, we ran our algorithm for 400 iterations with learning rate $\learnrate[\price]=2$, solving the inner Nash equilibrium problem by running the extragradient method for 100 iterations with learning rate $\learnrate[\allocation]=0.2$.
For influence Fisher markets with Cobb-Douglas utilities, we ran our algorithm for 400 iterations with learning rate $\learnrate[\price]=8$, solving the inner Nash equilibrium problem by running the extragradient method for 200 iterations with learning rate $\learnrate[\allocation]=0.5$.
Finally, for influence Fisher markets with Leontief utilities, we ran our algorithm for 400 iterations with learning rate $\learnrate[\price]=5$, solving the inner Nash equilibrium problem by running the extragradient method for 100 iterations with learning rate $\learnrate[\allocation]=3$.
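The random market generation described above can be sketched as follows; the exact code we used may differ in minor details, so this snippet is illustrative only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_market(n_buyers=3, n_goods=3):
    """One random influence Fisher market instance, following the setup above."""
    budgets = rng.uniform(5, 15, size=n_buyers)                 # b_i ~ U[5, 15]
    valuations = rng.uniform(5, 35, size=(n_buyers, n_goods))   # v_ij ~ U[5, 35]
    adjacency = rng.integers(0, 2, size=(n_buyers, n_buyers))   # random influence graph
    np.fill_diagonal(adjacency, 0)
    return budgets, valuations, adjacency

markets = [random_market() for _ in range(50)]
\end{verbatim}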
\paragraph{Programming Languages, Packages, and Licensing}
We ran our experiments in Python 3.7, using NumPy.
\Cref{fig:convergence_results} was graphed using Matplotlib.
Python software and documentation are licensed under the PSF License Agreement. Numpy is distributed under a liberal BSD license. Matplotlib only uses BSD compatible code, and its license is based on the PSF license. CVXPY is licensed under an APACHE license.
\paragraph{Computational Resources}
Our experiments were run on Google Colab with 12.68GB RAM, and took about 3 hours to run experiments with 50 markets.
\paragraph{Code Repository}
The data our experiments generated, and the code used to produce our visualizations, can be found in our code repository ({\color{blue}\rawcoderepo}).
\end{document}
\begin{document}
\title{On the asymptotic behaviour of a dynamic version of the Neyman contagious point process
\footnote{Research supported by the ARC Discovery Grant DP120102398.}
}
\author{K.~Borovkov\footnote{Department of Mathematics and Statistics, The University of Melbourne, Parkville 3010, Australia; e-mail: {[email protected]}.}
}
\date{}
\maketitle
\begin{abstract}
We consider a dynamic version of the Neyman contagious point process that can be used
for modelling the spatial dynamics of biological populations, including species
invasion scenarios. Starting with an arbitrary finite initial configuration of points
in $\mathbb{R}^d$ with nonnegative weights, at each time step a point is chosen at random from
the process according to the distribution with probabilities proportional to the
points' weights. Then a finite random number of new points is added to the process, each
displaced from the location of the chosen ``mother'' point by a random vector and assigned a
random weight. Under broad conditions on the sequences of the numbers of newly added
points, their weights and displacement
vectors (which include a random environments setup), we derive the asymptotic behaviour of the locations of the points added to the process at time step~$n$ and also that of the scaled mean measure of the point process after time step~$n$, as $n\to\infty$.
\smallskip
{\em Key words and phrases:} Neyman contagious point process, random network, preferential attachment,
random environments, stationary sequence, limit theorems, law of large numbers, the
central limit theorem, regular variation.
\smallskip
{\em AMS Subject Classification 2010:} 60G55
(Primary), 60F05
(Secondary).
\end{abstract}
\section{Introduction and main results}
The present paper deals with a dynamic version of the Neyman contagious point process~\cite{Ne39, Th54} (the term ``contagious" in the context of such point processes going back to G.~P\'olya \cite{Po31}). The latter can be described as follows. Suppose we are given a homogeneous Poisson point process on the carrier space $\mathbb{R}^d,$ $d\ge 1$ (in a motivating example from~\cite{Ne39}, the points representing egg masses of some insect species). Then, using each of the points in the process as a ``mother'', we add several ``daughter" points randomly displaced relative to their mother (to model a situation where ``larvae hatched from the eggs which are being laid in so-called `masses'"~\cite{Ne39}). Our process differs from that one-stage scheme in that it is multistage and, at each time step, one of the existing points is chosen at random to be the next mother, and then a random number of daughter points are added, displaced at random relative to their mother point. The points in our process are endowed with ``weights" that determine the probabilities with which the points can be chosen to be mothers in the future.
More formally, our point process starts with an initial configuration of $k_0\ge 1$ random points
$\mbox{\boldmath$X$}_{0,1},\ldots,\mbox{\boldmath$X$}_{0,k_0}\in \mathbb{R}^d,$ $d\ge 1,$ labelled with respective vectors
$(w_{0,j}, u_{0,j}),$ $j=1, \ldots, k_0,$ with non-negative components.
At time step $n\ge 1,$ $k_n\ge 0$ new points are added to the process, with respective labels
$(w_{n,j}, u_{n,j}),$ $j=1, \ldots, k_n.$ The role of the weights $w_{n,l}\ge 0$ will
be to characterise the ``fitness", or reproductive ability, of the individuals to be associated
with the points to be added to the process at the $n$th step (they can be used, for
example, to model the aging of the individuals), as they will be used to determine the
probability distributions for choosing new mothers in the future. The quantities
$u_{n,l}\ge 0$ specify the amount of a ``resource" attached to the individuals (e.g.,
the body weight etc.).
To specify the locations of the newly added points, let $ W_{-1}:=0,$ and, for $n\ge 0$, set
\[
w_n:= \sum_{l=1}^{k_n} w_{n,l}, \qquad W_n:= \sum_{r=0}^n w_r.
\]
Then, at step $n+1\ge 1$, an existing point $\mbox{\boldmath$X$}_{r,j},$ $0\le r\le n,$ $1\le j \le k_r$, is chosen at random, according to the probability distribution
\begin{equation}
\mbox{\boldmath$p$}_{n }:=\{ p_{n,r,j} : 0\le r\le n, \, 1\le j \le k_r \},\qquad p_{n,r,j}:=\frac{w_{r,j}}{W_{n }}. \label{ps}
\end{equation}
Denote that point by $\mbox{\boldmath$X$}^*_n$ and add $k_{n+1} \ge 0$ new points to the process, at the locations
\[
\mbox{\boldmath$X$}_{n+1, l}:=\mbox{\boldmath$X$}^*_n + \mbox{\boldmath$Y$}_{n+1,l},\qquad 1\le l\le k_{n+1},
\]
where we assume that the random vector $(\mbox{\boldmath$Y$}_{n+1,1}, \ldots, \mbox{\boldmath$Y$}_{n+1,k_{n+1}})\in(\mathbb{R}^d)^{k_{n+1}}$ is independent of $\{\mbox{\boldmath$X$}_{r,j}\}_{0\le r \le n,1\le j\le k_r }$ and the choice of the ``mother point" $\mbox{\boldmath$X$}_n^*$. We assume nothing about the character of dependence between the components $\mbox{\boldmath$Y$}_{n+1,l}$ (in particular, some of them can coincide).
Denote the distribution of $(\mbox{\boldmath$Y$}_{n,1}, \ldots, \mbox{\boldmath$Y$}_{n,k_n})$ on $(\mathbb{R}^d)^{k_n}$ by ${\bf Q}_n$, and that of $\mbox{\boldmath$Y$}_{n,j}$ on $\mathbb{R}^d$ by~$Q_{n,j}$.
Note that if the $w_n$'s tend to decrease as $n$ increases, it means that the more mature individuals are more active in reproduction. When the $w_n$'s tend to increase, the younger ones are more productive.
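Although the analysis below is purely analytic, the process is straightforward to simulate, which may help fix ideas. The following Python sketch uses arbitrary illustrative choices (Poisson offspring numbers, standard normal displacements, and geometrically decaying weights $w_{n,j}=a^n$ with $a=0.9$, as in the special case of~\cite{BoMo05} recalled below); it is not meant to represent the general setting studied in this paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_steps, d=2):
    """Dynamic contagious point process: at each step a mother is chosen with
    probability proportional to the accumulated weights, and k_n displaced
    daughter points are added (offspring law, displacements and weights are
    illustrative choices only)."""
    X = [np.zeros(d)]   # point locations, starting from a single point at the origin
    w = [1.0]           # reproduction weights
    for n in range(1, n_steps + 1):
        probs = np.array(w) / np.sum(w)
        mother = X[rng.choice(len(X), p=probs)]
        k_n = rng.poisson(1.0)                    # random number of daughters at step n
        for _ in range(k_n):
            X.append(mother + rng.normal(size=d)) # displacement Y ~ N(0, I_d)
            w.append(0.9 ** n)                    # weights w_{n,j} = a^n with a = 0.9
    return np.array(X), np.array(w)
\end{verbatim}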
Models of such kind can be used to describe the dynamics of a population of microorganisms in varying (in both time and space) environments, and the process of distribution of individual insects (as in the larvae example from~\cite{Ne39}) or colonies of social insects (such as some bee species or ants). In particular, they can be of interest when modeling foreign species invasion scenarios. Further examples include dynamics of business development for companies employing multi-level marketing techniques (or referral marketing).
Observe that our process' structure resembles that of a branching random walk, see,
e.g.,~\cite{AsKa76, Bi97} and references therein. If $\mbox{\boldmath$p$}_n$ is a uniform distribution, the process under consideration is the embedded skeleton process for a continuous time branching random walk. There is also an interesting relationship with certain random search algorithms, see, e.g., \cite{De88} and~\eqref{probab_interp} below.
One can think about the points in our process as nodes in a growing random forest, the roots thereof being the initial points, with edges connecting daughters to their mothers. Note that the trees in the forest, unlike the ones in the case of branching processes, are not independent of each other.
The model we are dealing with belongs to the class of ``preferential attachment processes". The most famous of them was proposed in the much-cited paper~\cite{BaAl99}, where new nodes were attached to already existing ones with probabilities proportional to the degrees of the latter, with the aim of generating random ``scale-free" networks that are widely observed in natural and human-made systems, including the World Wide Web and some social networks. Since then there have appeared quite a few publications on the topic, including the monograph~\cite{Du07}. In most cases, in the existing literature the ``attachment rule" depends on the degrees of the existing nodes (possibly with some variations as, e.g., in~\cite{Aietal09}, where new nodes may link to a
node only if they fall within the node's ``influence region"), and the authors consider the standard set of questions concerning the growing random graph, such as: the appearance and size of the giant component (if any), the sizes of (other) connected components in different regimes, the diameter and average distance between two vertices in the giant component, typical degree distributions, proportion of vertices with a given degree, the maximum degree, bond percolation and critical probability, and connectivity properties after malicious deletion of a certain proportion of vertices.
In relation to our previous remark on connection to branching phenomena, we would like to mention the paper~\cite{RuTyVa07} obtaining the asymptotic degree distribution in a preferential attachment random tree for a wide range of weight functions (of the nodes' degrees) using well-established results from the theory of general branching processes.
However, to the best of the author's knowledge, neither models of the type discussed in the present paper nor the questions of the spatial dynamics of the growing networks had been considered in the literature prior to the publication of~\cite{BoMo05}. That paper dealt with the special case of the model where $k_n\equiv k=\mbox{const},$ $n\ge 1$, the weights being assumed
to be of the form $w_{n,j}=a^n,$ $j=1,\ldots,k ,$ for some constant~$a>0,$ with
$\mbox{\boldmath$Y$}_{n,j},$ $j=1,\ldots, k,$ being independent and identically distributed. Under rather broad conditions
(existence of Ces\`aro limits for means/covariance matrices of $\mbox{\boldmath$Y$}_{n,1}$, uniform
integrability for $\mbox{\boldmath$Y$}_{n,1}$ or $\|\mbox{\boldmath$Y$}_{n,1}\|^2$, $\|\cdot\|$ denoting the Euclidean norm), the paper established versions of
our Theorems~\ref{Thm1}--\ref{Thm3} below. They showed that the distributions of the
locations of the individuals added at the $n$th step and the mean measures of the
process after step~$n$ display distinct LLN- and CLT-type behaviours depending on whether $a<1,$ $a=1$ or
$a>1.$ The model was further studied in~\cite{BoVa06}, in the case where $k=1,$ $d=1,$
$\mbox{\boldmath$Y$}_{n,1}\ge 0,$ and the weights $w_n$ were assumed to be random. In addition to
obtaining an analog of our Theorem~\ref{Thm1} and proving a number of other results,
the paper also established the asymptotic (as $n\to \infty$) behaviour of the distance
$\mbox{\boldmath$X$}_{n}^*+\mbox{\boldmath$Y$}_{n+1,1}$ to the root (the initial point in the process) of the point
added at step $n+1$ under the assumption that $w_n = e^{-S_n}$, $S_n=\theta_1+ \cdots +
\theta_n$, $\theta_j$ being i.i.d.\ with zero mean and finite variance. The
paper~\cite{BoVa06} also derived the asymptotics of the expected values of the
outdegrees of vertices.
In the present work, we are dealing with a broader class of models, where the numbers $k_n$ of points added to the process at different steps can vary, while, at any given time step $n$, the displacement vectors $\mbox{\boldmath$Y$}_{n,j}$, $j=1,\ldots, k_n$, can be dependent and have different distributions, and we study not only the laws of the locations of the newly added points, but also the distributions of their ``resources'' (which may coincide with their ``fertility'' given by the weights~$w_{n,j}$).
Moreover, we will also be dealing with processes of the above type evolving in random environments (RE). This means that the offspring sizes $k_n$, their attributes $(w_{n,j}, u_{n,j})$ and the distributions of $(\mbox{\boldmath$Y$}_{n,1}, \ldots, \mbox{\boldmath$Y$}_{n,k_n})$ will be random as well, which can be formalised as follows.
Let $\mathscr{P}$ be a given family of distributions on $( \mathbb{R}^d, \mathscr{B} (\mathbb{R}^d)),$ $ \mathscr{B} (\mathbb{R}^d)$ being the $\sigma$-algebra of Borel sets on~$ \mathbb{R}^d$. For $k\ge 1,$ denote by $\mathscr{P}^{(k)}$ the family of all distributions on $(\mathbb{R}^d)^k$ with margins from $\mathscr{P}$.
The random environments are given by a sequence $\mathcal{V}:= \{V_n\}_{n\ge 1}$ of
random elements $V_n \in S$ of the space
\[
S:= \bigcup_{k\ge 0} S^{(k)}, \quad S^{(k)}:= (\mathbb{R}_+^2)^k\times \mathscr{P}^{(k)}, \quad
k=1,2,\ldots, \quad S^{(0)}:=\{0\}.
\]
The elements $V_n$ can be specified by their ``dimensionality" $k_n$ (one has $k_n=k$ when $V_n\in
S^{(k)}$, meaning that $k$ new points will be added to the process at step~$n$),
``attributes'' $((w_{n,1},u_{n,1}), \ldots, (w_{n,k_n},u_{n,k_n}) )\in (\mathbb{R}_+^2)^{k_n}$,
and offspring displacement distributions ${\bf Q}_n\in \mathscr{P}^{(k_n)}.$ The choice of the $\sigma$-algebra on $S$ is standard for objects of such
nature, so we will not describe it in detail.
The dynamics of the process in RE are basically the same as in the case of the deterministic environments, the only difference being that, in the above-presented description of step $n+1$ in the process, one should everywhere add ``given the whole sequence of random environments~$\mathcal{V}$ and the past history of the
process up to step $n $". Thus, $\mbox{\boldmath$p$}_{n }$ will become the (random) conditional distribution (given~$\mathcal{V}$) of choosing a new mother among the already existing points $\mbox{\boldmath$X$}_{0,1}, \ldots, \mbox{\boldmath$X$}_{n,k_n}$ etc.
The conditions of our assertions in the RE setup will often include strict stationarity of some sequences related to~$\mathcal{V}$. In those cases, $\mathscr{I}$ will be used to denote the $\sigma$-algebra of events invariant w.r.t.\ the respective measure preserving transformation,
\[
\langle \,\cdot\, \rangle :={\bf E} (\,\cdot \,| \,\mathscr{I} )
\]
denoting the conditional expectation given~$\mathscr{I}$.
To formulate the main results of the paper, we will need some further notation. Denote by \[
\phi_{n,j} (\mbox{\boldmath$t$}) := {\bf E} \exp\{ i\mbox{\boldmath$t$}\mbox{\boldmath$X$}_{n,j}^T \},\quad \mbox{\boldmath$t$}\in\mathbb{R}^d,
\]
the characteristic function (ch.f.) of the distribution $P_{n,j}$ of the point $\mbox{\boldmath$X$}_{n,j},$ $n\ge 0,$ $1\le j \le k_n,$ in our process, $ \mbox{\boldmath$x$}^T$ standing for the transposed
(column) vector of $\mbox{\boldmath$x$}\in\mathbb{R}^d$, and set, for $ n=0,1,2,\ldots$ and $\mbox{\boldmath$t$}\in
\mathbb{R}^d,$
\begin{align}
\pi_n := \frac{w_n}{W_n}\equiv \sum_{j=1}^{k_n} p_{n,n,j} ,
\quad \phi_n (\mbox{\boldmath$t$}) := \frac1{w_n} \sum_{j=1}^{k_n} w_{n,j} \phi_{n,j}(\mbox{\boldmath$t$})\equiv \frac1{\pi_n} \sum_{j=1}^{k_n} p_{n,n,j} \phi_{n,j}(\mbox{\boldmath$t$})
\label{fpi}
\end{align}
(note that $\pi_0=1)$.
That is, $\phi_n$ is the ch.f.\ of the mixture law $ P_n:= \frac1{w_n} \sum_{j=1}^{k_n} w_{n,j} P_{n,j}$, which coincides with the conditional distribution of the location $\mbox{\boldmath$X$}_n^*$ of the mother point for step $n+1$ in our process given that that point was chosen from the $k_n$ points added at the previous step. For $n$ such that $k_n=0,$ we can leave $\phi_{n,j}$ and $ \phi_n$ undefined.
In the RE setup, we put
\begin{align*}
\phi_{n,j|\mathcal{V}} (\mbox{\boldmath$t$}) := {\bf E}_{\mathcal{V}} \exp\{i\mbox{\boldmath$t$} \mbox{\boldmath$X$}_{n,j}^T\}, \quad
\phi_{n|\mathcal{V}} (\mbox{\boldmath$t$}) := \frac1{w_n} \sum_{j=1}^{k_n} w_{n,j} \phi_{n,j|\mathcal{V}} (\mbox{\boldmath$t$}) ,\quad \mbox{\boldmath$t$}\in
\mathbb{R}^d,
\end{align*}
where ${\bf E}_{\mathcal{V}} $ stands for the conditional expectation given $\mathcal{V} $, and denote by $P_{n,j|\mathcal{V}}$ and $P_{n|\mathcal{V}}$ the respective distributions.
Set $ u_n:= \sum_{j=1}^{k_n} u_{n,j},$ $U_n:= \sum_{r=0}^n u_r,$ and consider the
measure
\begin{align}
\label{mean_m} M_n (B) &
:= {\bf E} \biggl[\frac{1}{U_n}\sum_{r=0}^n \sum_{j=1}^{k_r} u_{r,j}
{\bf 1}_{\{\mbox{\scriptsize\boldmath$X$}_{ r,j} \in B\}}\biggr]
= \frac{1}{U_n} \sum_{r=0}^n \sum_{j=1}^{k_r} u_{r,j} P_{r,j} (B), \quad B\in\mathscr{B} (\mathbb{R}^d),
\end{align}
describing the distribution of the ``resource" specified by the quantities~$u_{n,j}$,
${\bf 1}_A$ being the indicator of the event~$A$. When $u_{n,j}\equiv 1,$ $M_n$ is just the mean measure of the process. In the RE setup, $M_{n|\mathcal{V}}$ is defined by~\eqref{mean_m} with ${\bf E}$ in its middle part replaced with~${\bf E}_{\mathcal{V}}$ (and hence with $P_{r,j}$ on its right-hand side replaced with $P_{r,j|\mathcal{V}}$).
To avoid making repetitive trivial comments, we assume throughout the paper that
$U_n\to \infty$ (a.s.\ in the RE setup) as $n\to\infty,$ so that the process of adding new points and resources never terminates.
Further, we will denote by
\[
f_{n,j} (\mbox{\boldmath$t$}) := {\bf E} \exp\{i\mbox{\boldmath$t$} \mbox{\boldmath$Y$}_{n,j}^T\}, \quad \mbox{\boldmath$t$} \in\mathbb{R}^d,
\]
the ch.f.\ of $Q_{n,j}$ ($f_{n,j|\mathcal{V}} (\mbox{\boldmath$t$})$ is defined in the same way as $f_{n,j} (\mbox{\boldmath$t$})$, but with ${\bf E}$ replaced with~${\bf E}_{\mathcal{V}}$), and set
\[
f_n (\mbox{\boldmath$t$}) := \frac1{w_n} \sum_{j=1}^{k_n} w_{n,j} f_{n,j}(\mbox{\boldmath$t$}) .
\]
Denote by $\mathcal{A}$ the directed set of all pairs $(n,j),$ $n\ge 0, $ $1\le j\le k_n,$ with preorder $\preccurlyeq$ on it defined by $(r,j) \preccurlyeq (n,l)$ iff $r\le n$. Given a net $a:=\{a_\alpha\}_{\alpha\in\mathcal{A}}$ in a linear topological space, and a net $\{b_\alpha\}_{\alpha\in\mathcal{A}}$ in $[0,\infty)$,
we say that the net $a$ is $b$-summable with sum $\mathcal{S}_b (a)$ if the limit
\[
\mathcal{S}_b (a):= \lim_\alpha\frac{ \sum_{\beta\preccurlyeq \alpha}b_\beta a_\beta }{\sum_{\beta\preccurlyeq \alpha}b_\beta}
\]
exists and is finite.
If $b_{n,j}\equiv 1$ then $b$-summability is basically Ces\`aro summability (for nets), to a sum denoted by $\mathcal{S}_1 (a)$. In the above definition, $a$ can be a net of scalars, vectors, matrices, or even distributions. In the latter case the limit (to be denoted by $w\mbox{-}\lim$) is in the topology of weak convergence of distributions, whereas in other cases the limit is in the component-wise (point-wise for functions) sense.
\begin{rema}\label{rem_sum}
{\rm An elementary extension of the classical Toeplitz theorem shows that, if $\lim_\alpha a_\alpha =a_\infty$ and $\lim_\alpha b_\alpha =\infty$, then $a$ is $b$-summable with the sum $a_\infty$. Observe also that
if both $ab=\{a_\alpha b_\alpha\}$ and $b$ are 1-summable, and
$ \mathcal{S}_1 (b) >0$, then $ a$ will be $b $-summable as well, with
the sum $\mathcal{S}_b (a) = \mathcal{S}_1 (ab)/\mathcal{S}_1 (b).$}
\end{rema}
As we already said, the weights $w_{n,j}$ can be used to model dependence of the reproductive ability of individuals on when they were added to the process (in particular,
their ``aging"). Our first result refers to the case when the total ``weight'' of all
the points in the process is finite, meaning that the
more mature individuals tend to have noticeably higher reproduction ability compared to the newly added ones.
\begin{thm}
\label{Thm1} Let $W_\infty:=\lim_{n\to\infty} W_n\equiv \sum_{r=0}^\infty w_r <\infty$. Then the infinite product
\[
\Pi (\mbox{\boldmath$t$}) :=\prod_{r=0}^\infty (1 + \pi_r (f_r (\mbox{\boldmath$t$} ) -1))
\]
converges to a ch.f. Moreover,
{\rm (i)}~if $\{Q_{n,j}\}$ is $u$-summable $($or, which is the same, the net
$\{f_{n,j} (\mbox{\boldmath$t$})\}$ is $u$-summable to a continuous sum $ \mathcal{S}_u ( f (\mbox{\boldmath$t$}) ))$, then one has $w\mbox{-}\lim_{n\to\infty} M_n= M_\infty$, where the limiting measure $M_\infty$ has ch.f.\
$\Pi (\mbox{\boldmath$t$}) \mathcal{S}_u ( f (\mbox{\boldmath$t$}) );$
{\rm (ii)}~if $w\mbox{-}\lim_{(n,j)} Q_{n,j}= Q_\infty $ for some distribution~$Q_\infty$ on $\mathbb{R}^d$ with ch.f.\ $f_\infty (\mbox{\boldmath$t$})$, then one has $w\mbox{-}\lim_{(n,j)} P_{n,j}= P_\infty$, the limiting distribution $P_\infty$ having the ch.f.\
\[
\phi_\infty (\mbox{\boldmath$t$}):= f_\infty (\mbox{\boldmath$t$}) \Pi (\mbox{\boldmath$t$}).
\]
In that case, one also has $w\mbox{-}\lim_{n\to\infty} M_n= P_\infty$.
\end{thm}
From the Birkhoff--Khinchin theorem, Remark~\ref{rem_sum} and Theorem~\ref{Thm1}(i), we immediately obtain the following result in the RE setup. We write $\mbox{\boldmath$Z$} \sim P$ if the random vector $\mbox{\boldmath$Z$}$ has distribution~$P$, denote by $\mbox{\boldmath$u$}_n:=(u_{n,1}, \ldots, u_{n,k_n})\in \mathbb{R}_+^{k_n}$ the vector of resource values $u_{n,j}$ assigned to the points added to the process at step~$n$, and set
\[
(uf)_n (\mbox{\boldmath$t$}):= \sum_{j=1}^{k_n} u_{n,j}f_{n,j} (\mbox{\boldmath$t$}).
\]
\begin{coro}
In the RE setup, suppose that $\{\|\mbox{\boldmath$Z$}\|:\mbox{\boldmath$Z$} \sim P\in\mathscr{P} \}$ is uniformly integrable and $\{(k_n, \mbox{\boldmath$u$}_n, {\bf Q}_n (\cdot))\}$ is a strictly stationary sequence.
Then, on the event $\{W_\infty <\infty\}\cap\{ \langle u_1\rangle >0\},$ one has $w\mbox{-}\lim_{n\to\infty} M_{n|\mathcal{V}}= M_\infty$ a.s., where the limiting measure $M_\infty$ has ch.f.
\begin{equation*}
\Pi (\mbox{\boldmath$t$})\frac{ \langle (uf)_1 (\mbox{\boldmath$t$})\rangle}{\langle u_1\rangle}.
\end{equation*}
\end{coro}
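For instance, if the environment sequence $\{(k_n, \mbox{\boldmath$u$}_n, {\bf Q}_n (\cdot))\}$ is i.i.d., then the invariant $\sigma$-field is trivial, so the angle brackets reduce to unconditional expectations and the factor $\langle (uf)_1 (\mbox{\boldmath$t$})\rangle/\langle u_1\rangle$ is deterministic, whereas the factor $\Pi (\mbox{\boldmath$t$})$ still depends on the realised weights and displacement laws.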
An RE version of the assertion of Theorem~\ref{Thm1}(ii) concerning convergence of $P_{n|\mathcal{V}}$ is straightforward.
\begin{rema}
{\rm
Thus, when $W_\infty< \infty$ and the displacement laws $Q_{n,j}$ ``stabilise on average" in the sense that they are $1$-summable, the points in our process form a ``cloud" of which the relative ``density" has a limit. This is so because the ``old points" will be responsible for most of the progeny. But what about the contribution of individual points? How often will the points be chosen to become mothers (recall that we do not exclude the case $k_n=0$, so a chosen point will not necessarily have daughters at a given step)? Since, for any fixed $r\ge 0$ and $j=1,\ldots, k_r$, the indicators of the events $\{\mbox{\boldmath$X$}_{r,j}$ is chosen to be the mother at step $n+1\}$, $n\ge r,$ are independent Bernoulli random variables with success probabilities $w_{r,j}/W_n$, the Borel--Cantelli lemma immediately yields the following dichotomy:
\begin{itemize}
\item if $\sum_{n\ge 0}W_n^{-1}= \infty$ then any individual with a positive $w$-weight will a.s.\ be chosen to become a mother infinitely often (so this is the case in Theorem~\ref{Thm1}), and
\item if $\sum_{n\ge 0} W_n^{-1} < \infty$ then any individual will become a mother finitely often a.s.
\end{itemize}
}
\end{rema}
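For a quick illustration of this dichotomy, consider the constant and the geometric step weights:
\[
w_n\equiv 1:\ \ W_n=n+1,\ \ \sum_{n\ge 0}W_n^{-1}=\infty;\qquad
w_n=2^n:\ \ W_n=2^{n+1}-1,\ \ \sum_{n\ge 0}W_n^{-1}<\infty,
\]
so that in the first case every point is a.s.\ chosen to be a mother infinitely often, while in the second case every point is a.s.\ chosen finitely often only.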
The latter situation is, of course, impossible under the conditions of Theorem~\ref{Thm1}, but both situations can occur under the conditions of
the next theorem that deals with the case where the total weight of the points in the process is unbounded. For $n\ge 0$, set $\mbox{\boldmath$\mu$}_{n,j}:={\bf E} \mbox{\boldmath$Y$}_{n,j},$ $\mbox{\boldmath$m$}_{n,j}:={\bf E} \mbox{\boldmath$Y$}_{n,j}^T\mbox{\boldmath$Y$}_{n,j} $ (when these expectations are finite), and let
\[
\mbox{\boldmath$\mu$}_n:=\frac1{w_n} \sum_{j=1}^{k_n} w_{n,j}\mbox{\boldmath$\mu$}_{n,j},
\quad
\mbox{\boldmath$m$}_n:=\frac1{w_n} \sum_{j=1}^{k_n} w_{n,j}\mbox{\boldmath$m$}_{n,j}.
\]
It is clear that $\mbox{\boldmath$\mu$}_n$ is the mean vector of the mixture distribution with ch.f.~$f_n,$ while $\mbox{\boldmath$m$}_n$ is the matrix of its (mixed) second order moments.
\begin{thm}
\label{Thm2} Assume that $w_n = \xi_n n^\alpha L(n)$, $n\ge 1,$ where $\{\xi_n\ge 0\}$ is Ces\`aro summable to a positive value $\mathcal{S}_1 (\xi)$, $\alpha>-1$ and $L$ is a function slowly varying at infinity.
{\rm (i)} Let $\{\|\mbox{\boldmath$Y$}_{n,j}\| \}_{(n,j)\in\mathcal{A}}$ be
uniformly integrable and $\{\xi_n \mbox{\boldmath$\mu$}_n\}$ be Ces\`aro summable with sum $\mathcal{S}_1 (\xi \mbox{\boldmath$\mu$})$. Then
\begin{equation}
w\mbox{-}\!\lim_{(n,j)} P_{n,j} (\, \cdot\, \ln n) = \delta_{\mbox{\scriptsize\boldmath$\lambda$}}(\, \cdot\, ),
\label{thm2_1}
\end{equation}
where $\delta_{\mbox{\scriptsize\boldmath$\lambda$}}$ is the unit mass concentrated at the point
$\mbox{\boldmath$\lambda$}:=(\alpha+1) {\mathcal{S}_1 (\xi \mbox{\boldmath$\mu$}) }/{\mathcal{S}_1 (\xi)}\in\mathbb{R}^d,$
and
\[
\mbox{\boldmath$\nu$}_n:=\frac1{\ln n}\sum_{r=1}^{n-1} \pi_r \mbox{\boldmath$\mu$}_r \to \mbox{\boldmath$\lambda$}, \quad n\to\infty.
\]
Moreover, if
\begin{equation}
\lim_{\varepsilon\searrow 0}\limsup_{n\to\infty} U_{\varepsilon n}/U_n=0 ,
\label{Uep}
\end{equation}
then also
\begin{equation}
w\mbox{-}\!\!\lim_{n\to\infty}M_n (\, \cdot\, \ln n) = \delta_{\mbox{\scriptsize\boldmath$\lambda$}}(\, \cdot\, ).
\label{thm2_2}
\end{equation}
{\rm (ii)} Assume that $\{\|\mbox{\boldmath$Y$}_{n,j}\|^2 \}_{(n,j)\in\mathcal{A}}$ is uniformly integrable and
the sequence $\{\xi_n \mbox{\boldmath$m$}_n\}$ is Ces\`aro summable. Then the limit $w\mbox{-}\!\!\lim_{(n,j)}$ of the distributions of
\[
\frac{\mbox{\boldmath$X$}_{n,j} - \mbox{\boldmath$\nu$}_n \ln n}{\sqrt{\ln n}}
\]
exists and is equal to the normal law in $\mathbb{R}^d$ with zero mean vector
and covariance matrix $(\alpha+1) {\mathcal{S}_1 (\xi \mbox{\boldmath$m$})}/{\mathcal{S}_1 (\xi )}.$
Moreover, if \eqref{Uep} holds then $w\mbox{-}\!\!\lim_{n\to\infty} M_n (\, \cdot\,\sqrt{\ln n} +\mbox{\boldmath$\nu$}_n\ln n)$ also exists and is equal to the same normal distribution.
\end{thm}
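For orientation, in the simplest special case $w_n\equiv 1$ (so that $\xi_n\equiv 1$, $\alpha=0$, $L\equiv 1$), with $k_n\equiv 1$, $d=1$ and $\mbox{\boldmath$Y$}_{n,j}\equiv 1$, one has $\mbox{\boldmath$\mu$}_n=\mbox{\boldmath$m$}_n=1$, $\pi_r=1/(r+1)$ and $\mbox{\boldmath$\lambda$}=1$, so that
\[
\mbox{\boldmath$\nu$}_n\ln n=\sum_{r=1}^{n-1}\frac1{r+1}\sim \ln n,
\]
and the theorem recovers the classical $\ln n$ asymptotics for the mean and variance, and the asymptotic normality, of the depth of the $n$th node in a uniform random recursive tree (cf.\ the correspondence with records in~\cite{De88}).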
As in the case of Theorem~\ref{Thm1}, an RE version of the above statement is readily available from the Birkhoff--Khinchin theorem and Remark~\ref{rem_sum}. Set
\[
\mbox{\boldmath$\mu$}_{n|\mathcal{V}}:=\frac1{w_n}\sum_{j=1}^{k_n} w_{n,j}{\bf E}_{\mathcal{V}} \mbox{\boldmath$Y$}_{n,j},
\qquad \mbox{\boldmath$m$}_{n|\mathcal{V}} :=\frac1{w_n}\sum_{j=1}^{k_n} w_{n,j}{\bf E}_{\mathcal{V}} \mbox{\boldmath$Y$}_{n,j}^T \mbox{\boldmath$Y$}_{n,j}.
\]
\begin{coro}
In the RE setup, let $w_n = \xi_n n^\alpha L(n)$, $n\ge 1,$ where $\alpha=\alpha (\omega)>-1$ and $L(n) = L(\omega,n)$ is a slowly varying function a.s.
{\rm (i)} Assume that $\{\|\mbox{\boldmath$Z$}\|: \mbox{\boldmath$Z$}\sim P \in\mathscr{P}\}$ is
uniformly integrable and
\begin{equation}
\label{cond_xi}
\xi_n= \widetilde\xi_n (1 + o(1)) , \quad
\mbox{\boldmath$\mu$}_{n|\mathcal{V}}= \widetilde\mbox{\boldmath$\mu$}_n + o(1) \quad
\mbox{a.s.\ as $n\to\infty$,}
\end{equation}
where
$\{(\widetilde\xi_n, \widetilde\mbox{\boldmath$\mu$}_n)\}$ is a strictly stationary sequence with a finite absolute moment and ${\bf E} \widetilde\xi_1 \|\widetilde\mbox{\boldmath$\mu$}_1\| <\infty$. Then, as $n\to\infty,$ on the event $A:= \{\langle\widetilde \xi_1\rangle >0\},$ one has
\begin{equation*}
w\mbox{-}\!\lim_{(n,j)} P_{n,j|\mathcal{V}} (\, \cdot\, \ln n) = \delta_{\mbox{\scriptsize\boldmath$\lambda$}}(\, \cdot\, ) \quad
\mbox{a.s.,}
\end{equation*}
where $\delta_{\mbox{\scriptsize\boldmath$\lambda$}}$ is the unit mass concentrated at the point
$\mbox{\boldmath$\lambda$}:=(\alpha+1) {\langle \widetilde\xi_1 \widetilde \mbox{\boldmath$\mu$}_1 \rangle}/{\langle\widetilde\xi_1\rangle}\in\mathbb{R}^d$, and
\[
\mbox{\boldmath$\nu$}_{n|\mathcal{V}}:=\frac1{\ln n}\sum_{r=1}^{n-1} \pi_r \mbox{\boldmath$\mu$}_{r|\mathcal{V}} \to \mbox{\boldmath$\lambda$}\quad
\mbox{a.s.}
\]
Moreover, if \eqref{Uep} holds a.s.\ on $A$, then on that event one also has
\begin{equation*}
w\mbox{-}\!\!\lim_{n\to\infty}M_{n|\mathcal{V}} (\, \cdot\, \ln n) = \delta_{\mbox{\scriptsize\boldmath$\lambda$}}(\, \cdot\, )\quad
\mbox{a.s.}
\end{equation*}
{\rm (ii)} Assume that the family $\{\|\mbox{\boldmath$Z$}\|^2: \mbox{\boldmath$Z$}\sim P \in\mathscr{P}\}$ is uniformly integrable and the first of relations \eqref{cond_xi} holds together with
\begin{equation}
\label{m_n}
\mbox{\boldmath$m$}_{n|\mathcal{V}} =\widetilde \mbox{\boldmath$m$}_n + o(1)\quad
\mbox{\it a.s.\ \ as \ } n\to\infty,
\end{equation}
where
the sequence $\{(\widetilde\xi_{n}, \widetilde\mbox{\boldmath$m$}_n)\}$ is strictly stationary with a finite absolute moment and ${\bf E} \widetilde\xi_1 \|\widetilde \mbox{\boldmath$m$}_1\|<\infty$. Then, on the event $A,$ the limit $w\mbox{-}\!\!\lim_{(n,j)}$ of the conditional distributions of
\[
\frac{\mbox{\boldmath$X$}_{n,j} - \mbox{\boldmath$\nu$}_{n|\mathcal{V}} \ln n}{\sqrt{\ln n}}
\]
given $\mathcal{V} $ exists a.s.\ and equals the normal law in $\mathbb{R}^d$ with zero mean vector
and covariance matrix $(\alpha+1) {\langle \widetilde\xi_1 \widetilde \mbox{\boldmath$m$}_1 \rangle}/{ \langle \widetilde\xi_1\rangle}.$ Moreover, if \eqref{Uep} holds on $A$, that normal distribution will be limiting on $A$ for $ M_{n|\mathcal{V}} (\, \cdot\,\sqrt{\ln n} +\mbox{\boldmath$\nu$}_{n|\mathcal{V}}\ln n)$ as well.
\end{coro}
Thus, in the broad (power function) range of the $w$-weights dynamics, the point processes display basically one and the same behaviour: a drift at rate $\ln n$ in the direction of the ``average" mean displacement vector (if the latter is non-zero), and asymptotic normality of the point distribution at the scale $\sqrt{\ln n}$. The next theorem deals with the case where the $w$-weights grow exponentially fast on average, basically meaning that the individuals modelled by the points in our process have a ``finite life span" (at least, in what concerns their reproductive abilities). As one could expect in such a situation, the distributions $P_n$ will ``drift" at a constant rate, while the $u$-resource will be ``spread" along the trajectory of the drift.
In this case, the formulation of the result is much simpler (and more natural) in the RE setup than in the deterministic one, so we will only consider the former situation here.
\begin{thm}
\label{Thm3}
In the RE setup, let $w_n = \xi_n e^{S_n}$, $n\ge 1,$ where $S_n:=\tau_1+ \cdots + \tau_n$ for a random sequence $\{\tau_n\}_{n\in\mathbb{Z}}$.
{\rm (i)} Assume that $\{\|\mbox{\boldmath$Z$}\| : \mbox{\boldmath$Z$}\sim P \in\mathscr{P}\}$ is
uniformly integrable, relations \eqref{cond_xi} hold with $ \{( \widetilde\xi_n, \tau_n,\widetilde\mbox{\boldmath$\mu$}_n)\}_{n\in\mathbb{Z}}$ being strictly stationary, and ${\bf E} \widetilde\xi_1 \| \widetilde\mbox{\boldmath$\mu$}_1\|<\infty.$ Then, on the event $A:= \{\langle \widetilde\xi_1\rangle >0, \langle \tau_1\rangle >0\},$
\begin{equation}
\label{Pn}
w\mbox{-}\!\lim_{(n,j)} P_{n,j|\mathcal{V}} (\, \cdot\, n) = \delta_{\mbox{\scriptsize\boldmath$\lambda$}}\quad
\mbox{a.s., \quad}
\mbox{\boldmath$\lambda$} := \Bigl\langle \frac{ \widetilde\xi_1 \widetilde\mbox{\boldmath$\mu$}_1}{ \widetilde\zeta_1}\Bigr\rangle,
\end{equation}
where
\[
\widetilde\zeta_n:=\sum_{m=0}^\infty \widetilde\xi_{n-m}e^{-S_{n,m}},
\quad
S_{n,m}:= \tau_{n-m+1}+\tau_{n-m+2} + \cdots + \tau_n,\quad m\ge 1,
\]
$S_{n,0}:=0,$ so that $ \{( \widetilde\xi_n, \widetilde\zeta_n,\widetilde\mbox{\boldmath$\mu$}_n)\}$ is stationary as well on $A$.
If, in addition, the distribution functions $ U_{ \lfloor nv\rfloor }/U_n, $ $v\in [0,1]$, converge weakly a.s.\ to some distribution function $G$ on $[0,1]$ as $n\to\infty,$ then, on the event $A$, there exists the limit $w\mbox{-}\!\lim_{n\to\infty} M_{n|\mathcal{V}} (\, \cdot\, n)= M_\infty (\, \cdot\,)$ a.s., where $M_\infty$ is supported by the straight line segment with end points ${\bf 0}$ and $\mbox{\boldmath$\lambda$},$ such that~$M_\infty (\{\mbox{\boldmath$\lambda$} s, s\in [0,v]\})=G(v),$ $v\in [0,1].$
{\rm (ii)} Let $\{\|\mbox{\boldmath$Z$}\|^2 : \mbox{\boldmath$Z$}\sim P \in\mathscr{P}\}$ be uniformly integrable, and let relations \eqref{cond_xi} and \eqref{m_n} hold with the sequence $\{( \widetilde\xi_{n}, \tau_n, \widetilde\mbox{\boldmath$\mu$}_n, \widetilde\mbox{\boldmath$m$}_n)\}_{n\in\mathbb{Z}}$ being strictly stationary with a finite
first absolute moment. Then, on the event $A,$ the limit $w\mbox{-}\!\!\lim_{(n,j)}$ of the conditional given $\mathcal{V} $ distributions of
\[
\frac{\mbox{\boldmath$X$}_{n,j} - \mbox{\boldmath$\varkappa$}_n }{\sqrt{ n}}, \qquad \mbox{\boldmath$\varkappa$}_n:= \sum_{r=1}^n \pi_r\mbox{\boldmath$\mu$}_{r|\mathcal{V}},
\]
exists a.s.\ and equals the normal law in $\mathbb{R}^d$ with zero mean vector and covariance matrix $\Bigl\langle \dfrac{ \widetilde\xi_1 }{ \widetilde\zeta_1}\, \widetilde\mbox{\boldmath$m$}_1 - \dfrac{ \widetilde\xi_1^2 }{ \widetilde\zeta_1^2}\,\widetilde\mbox{\boldmath$\mu$}^T_1\widetilde\mbox{\boldmath$\mu$}_1\Bigr\rangle.$
\end{thm}
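As a simple sanity check of part~(i), consider a deterministic environment with $\xi_n\equiv 1$, $\tau_n\equiv a$ for some constant $a>0$ (so that $w_n=e^{an}$), and displacement laws with a common mean vector $\mbox{\boldmath$\mu$}$. Then $\widetilde\zeta_n=\sum_{m\ge 0}e^{-am}=(1-e^{-a})^{-1}$ and
\[
\mbox{\boldmath$\lambda$}=\Bigl\langle \frac{\widetilde\xi_1 \widetilde\mbox{\boldmath$\mu$}_1}{\widetilde\zeta_1}\Bigr\rangle=(1-e^{-a})\,\mbox{\boldmath$\mu$},
\]
so the points drift at the linear rate $(1-e^{-a})\mbox{\boldmath$\mu$}$: heuristically, the newest point carries a proportion of the total weight converging to $1-e^{-a}$, so each step along the ancestral line contributes a displacement with roughly this probability.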
\begin{rema}{\rm
In part~(ii) of Theorem~\ref{Thm3}, one can also derive a normal mixture approximation to $M_{n|\mathcal{V}} (\, \cdot\, \sqrt{n})$ as $n\to\infty$, cf.\ Theorem~1(iii) in~\cite{BoMo05}.}
\end{rema}
\section{Proofs}
The key tool in the proofs is an extended version of the recurrences (9)
in~\cite{BoMo05} and~(2) in~\cite{BoVa06}. Namely, first note that, by the total probability formula,
\betagin{align}
\lambdabel{phi2}
\phi_{n+1,j} = \Phi_n f_{n+1,j},
\quad \Phi_n := \sum_{r=0}^n \sum_{l=1}^{k_r}p_{n , r,l} \phi_{r,l}.
\end{align}
Using~\eqref{ps} and the observation that $p_{n,r , j} =p_{n-1,r, j} W_{n-1}/W_{n}=p_{n-1,r, j}(1-\pi_n) $ for all $r<n$, $1\le j\le k_r,$ we obtain for $n\ge 1 $, employing~\eqref{phi2}, \eqref{fpi} and recursion, that
\betagin{align}
\Phi_n &= \sum_{r=0}^{n-1} \sum_{l=1}^{k_r} p_{n , r,l} \phi_{r,l}
+\sum_{l=1}^{k_n} p_{n , n,l} \phi_{n,l}
\notag\\
&= (1-\pi_n) \Phi_{n-1} + \sum_{l=1}^{k_n} p_{n , n,l}\Phi_{n-1} f_{n,l}
= (1-\pi_n) \Phi_{n-1} + \pi_n \Phi_{n-1} f_n
\notag\\&
= (1+\pi_n (f_n -1) ) \Phi_{n-1}
= {\bf P}od_{r=0}^n (1+\pi_r (f_r -1) ).
\lambdabel{forma}
\end{align}
A probabilistic interpretation of \eqref{forma} is that
\begin{equation}
\mbox{\boldmath$X$}_{n}^*\stackrel{d}{=} \sum_{r=0}^n I_r \mbox{\boldmath$Y$}_r ,\quad n\ge 1,
\label{probab_interp}
\end{equation}
where the $I_r$'s are Bernoulli random variables with success probabilities $\pi_r$ (so that $I_0\equiv 1$), the random vectors $\mbox{\boldmath$Y$}_r $ have ch.f.'s $f_r$, and all of these random quantities are independent.
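To see this, note that, by independence, the ch.f.\ of the sum on the right-hand side of \eqref{probab_interp} equals
\[
{\bf E} \exp\Bigl\{ i\, \mbox{\boldmath$t$}\Bigl( \sum_{r=0}^n I_r \mbox{\boldmath$Y$}_r\Bigr)^{T}\Bigr\}
= \prod_{r=0}^n \bigl( 1-\pi_r + \pi_r f_r (\mbox{\boldmath$t$})\bigr)
= \prod_{r=0}^n \bigl( 1+\pi_r ( f_r (\mbox{\boldmath$t$}) -1)\bigr),
\]
which is the right-hand side of~\eqref{forma}.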
Therefore, it follows from \eqref{phi2} that
\begin{align}
\phi_{n+1,j} = f_{n+1,j} \prod_{r=0}^n \bigl( 1 + \pi_r (f_r -1)\bigr),\quad n\ge 0.
\label{phi3}
\end{align}
In the RE setup, fixing $\mathcal{V} $ and using the same argument, we obtain that
$\phi_{n+1,j|\mathcal{V}}$ also equals the right-hand side of the above relation.
\betagin{rema}
{\rm Representation~\eqref{probab_interp} basically corresponds to ``going back in time", tracing the ``ancestry" of the points added at step~$n+1$.
In the case where $w_n\equiv 1$, $\mbox{\boldmath$Y$}_n\equiv 1$, this is equivalent to the correspondence between the ``distance to the root" in a random tree and the number of records in an i.i.d.\ sequence of random variables used in~\cite{De88}. Observe also the following: fix an arbitrary $(r,i)\in \mathcal{A}.$ Then, for any point $\mbox{\boldmath$X$}_{n,j}$ with $n>r$, the probability that $\mbox{\boldmath$X$}_{n,j}$ has $\mbox{\boldmath$X$}_{r,i}$ among its ancestors is one and the same value $w_{r,i}/W_r.$ Hence the mean number of the $n$th generation points that have a particle from the $r$th generation as an ancestor is $k_n\pi_r$.
}
\end{rema}
{\em Proof of Theorem~\ref{Thm1}.} Since in this case $\pi_n = (1+ o(1))w_n/W_\infty$ as
$n\to\infty,$ it follows that $\sum_{r=1}^\infty \pi_r<\infty.$ Hence the product on
the right-hand side of~\eqref{forma} converges to $\Pi (\mbox{\boldmath$t$}) $ uniformly in $\mbox{\boldmath$t$}$ as $n\to\infty$. That the limiting infinite product will be a ch.f.\ follows from L\'evy's continuity theorem, since all the factors $1 + \pi_r (f_r -1)$ are ch.f.'s (note
that $\pi_r\in (0,1]$) and the Weierstrass theorem implies that
$\Pi (\mbox{\boldmath$t$}) $ will be continuous at zero.
(i)~To prove convergence of~$M_n$, observe that~\eqref{mean_m}, the assumption that $U_n\to\infty$ as $n\to\infty$ and~\eqref{phi3} imply that, for any fixed $\mbox{\boldmath$t$}\in\mathbb{R}^d,$ for the ch.f.\ $\varphi_n (\mbox{\boldmath$t$})$ of $M_n$ one has
\betagin{align*}
\varphi_n (\mbox{\boldmath$t$}) &
= \varphirac{1}{U_n} \sum_{r=0}^n \sum_{j=1}^{k_r} u_{r,j} \phi_{r,j} (\mbox{\boldmath$t$})
= \varphirac{1}{U_n} \sum_{r=0}^n \sum_{j=1}^{k_r} u_{r,j} f_{r,j} (\mbox{\boldmath$t$})
{\bf P}od_{s=0}^{r-1} \bigl( 1 + \pi_s (f_s (\mbox{\boldmath$t$}) -1)\bigr)\\
& = (1+o(1))\Pi (\mbox{\boldmath$t$}) \varphirac{1}{U_n} \sum_{r=0}^n \sum_{j=1}^{k_r} u_{r,j} f_{r,j}(\mbox{\boldmath$t$})
= (1+ o(1)) \Pi (\mbox{\boldmath$t$}) \mathcal{S}_u (f(\mbox{\boldmath$t$})).
\end{align*}
Now the desired assertion immediately follows from L\'evy's continuity theorem.
(ii)~That $w\mbox{-}\lim_{(n,j)} P_{n,j}= P_\infty$ follows from representation~\eqref{phi3}, the assumption that $ \lim_{(n,j)} f_{n,j} (\mbox{\boldmath$t$}) = f_\infty (\mbox{\boldmath$t$})$ and the already established convergence of the products of $1 + \pi_r (f_r -1)$ to the ch.f.\ $\Pi (\mbox{\boldmath$t$}).$ That $w\mbox{-}\lim_{n\to\infty} M_n= P_\infty$ follows from the first part of Remark~\ref{rem_sum} and part~(i) of the theorem.
$\Box$
\medskip
{\em Proof of Theorem~\ref{Thm2}.} (i)~First we will analyse the asymptotic behaviour
of $\pi_n$ as~$n\to\infty.$ Choose a sequence $\varepsilon_p \searrow 0$ as $p\to\infty$ that vanishes slowly enough, so that the following relations hold true: $m_p\to\infty$ as $p\to\infty$ for the sequence $\{m_p\}$ defined by $m_{-1}:=-1$, $m_0:=1,$
\[
m_p:= (1+\varepsilon_p) m_{p-1} ,\quad p= 1,2,\ldots,
\]
$\varepsilon_p/\varepsilon_{p-1}\to 1$, and
\betagin{align}
\vartheta_p:= \varphirac1{m_p}\sum_{r=1}^{m_p}\xi_r - \mathcal{S}_1 (\xi) = o(\varepsilon_p).
\lambdabel{theta0}
\end{align}
It is clear that such a sequence exists, and
we can assume for notational simplicity that all the $m_p$ are integers (the
changes needed in case they are not are obvious) and that there exists a $q=q(n)\in
\mathbb{N}$ such that $m_q=n$. Then
\betagin{align*}
\sum_{r=1}^n w_r = \sum_{p=0}^{q }
\sum_{r\in (m_{p-1},m_{p}]} \xi_r r^{\alphapha} L (r)
= \sum_{p=0}^{q } m_p^{\alphapha} L (m_p)
\sum_{r\in (m_{p-1},m_{p}]} \xi_r \left(\varphirac{r}{m_p}\right)^\alphapha \varphirac{L(r)}{L (m_p)},
\end{align*}
and it is not hard to verify (using the fact that $\sum_n n^\alphapha L(n)=\infty$) that the right-hand side here (and hence $W_n$ as well) is
\betagin{align}
(1+ o(1)) \sum_{p=0}^{q } m_p^{\alphapha} L (m_p)
\sum_{r\in (m_{p-1},m_{p}]}\xi_r.
\lambdabel{summa}
\end{align}
From Ces\`aro summability of $\{\xi_n\}$ and~\eqref{theta0}, one has
\betagin{align*}
\varphirac1{\varepsilon_p m_{p-1}}\sum_{r\in (m_{p-1},m_{p}]} \xi_r
&= \varphirac1{\varepsilon_p m_{p-1}}\biggl(\sum_{r= 1}^{m_{p}} \xi_r
- \sum_{r= 1}^{m_{p-1}}\xi_r \biggr)
\\
&= \varphirac1{\varepsilon_p } \bigl[(1+\varepsilon_p)
\bigl( \mathcal{S}_1 (\xi) +\vartheta_{p}\bigr) - \bigl( \mathcal{S}_1 (\xi) + \vartheta_{p-1}\bigr)\bigr]
\\
&
= \mathcal{S}_1 (\xi) +\vartheta_{p} + (\vartheta_{p}-\vartheta_{p-1})/{\varepsilon_p}
= \mathcal{S}_1 (\xi) + o(1).
\end{align*}
Therefore the representation~\eqref{summa} for $W_n$ implies that
\betagin{align*}
W_n &= (1+ o(1)) \mathcal{S}_1 (\xi) \sum_{p=1}^{q } \varepsilon_p m_p^{\alphapha +1 } L (m_p)
\\
& = (1+ o(1)) \mathcal{S}_1 (\xi)\sum_{r=1}^{n} r^{\alphapha} L (r)
= \varphirac{(1+ o(1)) \mathcal{S}_1 (\xi) }{\alphapha+1}\, n^{\alphapha+1} L (n)
\end{align*}
by Karamata's theorem, so that
\begin{align}
\pi_n \equiv \frac{w_n}{W_n }
= (\alpha+1 + o(1)) \frac{\xi_n}{ n \mathcal{S}_1 (\xi) }=o(1) \quad
\mbox{ as $n\to\infty$},
\label{beta_n}
\end{align}
where the last relation holds since $\xi_n=o(n)$ by the Ces\`aro summability of $\{\xi_n\}.$
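For instance, for the pure power weights $w_n= n^\alpha$ (i.e.\ $\xi_n\equiv 1$ and $L\equiv 1$), relation \eqref{beta_n} reduces to the elementary asymptotics
\[
W_n\sim \frac{n^{\alpha+1}}{\alpha+1}, \qquad \pi_n=\frac{w_n}{W_n}\sim\frac{\alpha+1}{n},\qquad n\to\infty.
\]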
Now we turn to analysing the asymptotic behaviour of $P_{n,j}$. For a fixed
$\mbox{\boldmath$t$}\in\mathbb{R}^d$ and a sequence $b_n\to\infty,$ $n\to\infty,$ it follows from~\eqref{phi3}
that the ch.f.\ of $P_{n,j} (\cdot~b_n)$ is given by
\betagin{align}
\phi_{n,j} (\mbox{\boldmath$t$} /b_n)
& = f_{n,j } (\mbox{\boldmath$t$}/b_n) {\bf P}od_{r=0}^{n-1} \bigl[ 1 + \pi_r
(f_r (\mbox{\boldmath$t$}/b_n) -1)\bigr]
\notag\\
& =
(1+o(1)) f_{n,j } (\mbox{\boldmath$t$}/b_n) \exp\left\{ (1+o(1)) \sum_{r=0}^{n-1} \pi_r (f_r (\mbox{\boldmath$t$}/b_n) -1)\right\}
\notag\\
&= (1+o(1)) \exp\left\{ \varphirac{1+o(1)}{b_n} \sum_{r=1}^{n-1} \pi_r (\mbox{\boldmath$\mu$}_r\mbox{\boldmath$t$}^T
+ \eta_{r,n}) \right\},\quad |\eta_{r,n}|\le \eta_n=o(1),
\lambdabel{forma_b}
\end{align}
where the second equality follows from the relation
\[
\lim_{n\to\infty}\sup_{r \ge 0} |\pi_r (f_r (\mbox{\boldmath$t$}/b_n) -1)| \to 0,
\]
which holds due to~\eqref{beta_n}, and the third one follows from the uniform
integrability assumption on~$\mathscr{P}$ (cf.~(12), (13) in~\cite{BoMo05}).
Using~\eqref{beta_n} and setting $b_n:=\ln n$ we obtain that
\betagin{align*}
\phi_{n,j} (\mbox{\boldmath$t$} /\ln n)
& =
(1+o(1)) \exp\left\{ \varphirac{\alphapha + 1+o(1)}{ \mathcal{S}_1 (\xi)\ln n}
\sum_{r=0}^{n-1}\varphirac{\xi_r}{r} (\mbox{\boldmath$\mu$}_r\mbox{\boldmath$t$}^T
+ \eta_{r,n}) \right\}
\\
& =
(1+o(1)) \exp\left\{ \varphirac{\alphapha + 1+o(1)}{ \mathcal{S}_1 (\xi) \ln n}
\left[ \sum_{r=0}^{n-1}\varphirac{\xi_r\mbox{\boldmath$\mu$}_r}{r} \, \mbox{\boldmath$t$}^T
+ \theta_n \sum_{r=0}^{n-1}\varphirac{\xi_r}{r} \right]\right\},
\quad |\theta_n |\le \eta_n.
\end{align*}
Next, applying to the sums in the exponential the following simple corollary of the
Stolz--Ces\`aro theorem: for a real sequence $\{x_n\},$ as $n\to\infty$,
\begin{align}
\mbox{if}\quad \frac1n \sum_{k=1}^n x_k \to x\in\mathbb{R}, \quad
\mbox{then} \quad
\frac1{\ln n} \sum_{k=1}^n \frac{x_k}k \to x
\label{stolz}
\end{align}
(see, e.g., 2.3.25 in~\cite{KaNo00}), we obtain that $\lim_{(n,j)}\phi_{n,j} (\mbox{\boldmath$t$}/\ln n)= \exp \{ i \mbox{\boldmath$\lambda$} \mbox{\boldmath$t$}^T\}$, where the convergence is uniform in $\mbox{\boldmath$t$}$ on any bounded subset of~$\mathbb{R}^d$, which
completes the proof of~\eqref{thm2_1}.
To prove~\eqref{thm2_2}, note that, for $\varepsilon_n >0$ vanishing slowly enough (so that $U_{\varepsilon_n n}/U_n = o(1)$ and $\ln\varepsilon_n=o(\ln n)$; we will
assume without loss of generality that $\varepsilon_n n\in\mathbb{N}$), setting $\mbox{\boldmath$t$}_{r,n}:= (\mbox{\boldmath$t$} \ln r)
/\ln n,$ one has from~\eqref{mean_m} that, for any fixed $\mbox{\boldmath$t$}$, the ch.f.\ of $M_n
(\cdot \ln n)$ is equal to
\begin{align}
\varphi_n (\mbox{\boldmath$t$}/\ln n)
& = \frac1{U_n}\sum_{r=\varepsilon_n n}^n \sum_{j=1}^{k_r} u_{r,j} \phi_{r,j} (\mbox{\boldmath$t$}_{r,n}/\ln r) + o(1)
\notag
\\
& = \exp \{ i \mbox{\boldmath$t$} \mbox{\boldmath$\lambda$}^T\} + o(1), \quad n\to\infty,
\label{chf_M}
\end{align}
since $\mbox{\boldmath$t$}_{r,n}= \mbox{\boldmath$t$} + o(1)$ uniformly in $r\in [\varepsilon_n n, n]$, and
in view of the above-mentioned uniformity of the convergence of~$\phi_{n,j} (\mbox{\boldmath$t$}/\ln n)$ to the
exponential function.
\smallskip
(ii)~Using the second order uniform integrability assumption on~$\mathscr{P},$ it is
not hard to verify that, as $\|\mbox{\boldmath$s$}\|\to 0,$
\[
f_r (\mbox{\boldmath$s$})-1 = i\mbox{\boldmath$\mu$}_r\mbox{\boldmath$s$}^T - \frac12 \mbox{\boldmath$s$} \mbox{\boldmath$m$}_r\mbox{\boldmath$s$}^T+ o (\|\mbox{\boldmath$s$}\|^2)
\quad
\mbox{uniformly in~$r\ge 1$.}
\]
Therefore, for any fixed $\mbox{\boldmath$t$}\in\mathbb{R}^d$, taking into account that
$\sup_r\|\mbox{\boldmath$\mu$}_r\|<c<\infty$ (due to uniform integrability), as $n\to\infty$ and $b_n\to \infty,$
\betagin{align}
\ln (1 +\pi_r (f_r (\mbox{\boldmath$t$}/b_n) -1))
& = \pi_r (f_r (\mbox{\boldmath$t$}/b_n) -1)
+ O\left( \varphirac{\pi_r^2 }{b_n^2}\right)
\notag
\\
&= \pi_r \biggl( \varphirac{i\mbox{\boldmath$\mu$}_r\mbox{\boldmath$t$}^T}{b_n}
-\varphirac{\mbox{\boldmath$t$} \mbox{\boldmath$m$}_r\mbox{\boldmath$t$}^T}{2b_n^2} \biggr)
+ o\left(\varphirac{\pi_r }{b_n^2}\right)
\lambdabel{ln_1}
\end{align}
from~\eqref{beta_n}. Now, observing that $f_{n } (\mbox{\boldmath$t$}/b_n)=1 + o(1)$ as
$n\to\infty$ (again owing to the uniform integrability assumption), and setting
$b_n:=\sqrt{\ln n},$ one can derive from the first line of~\eqref{forma_b} that $\phi_{n,j} \bigl( {\mbox{\boldmath$t$}}/{\sqrt{\ln n}} \bigr)$ is equal to
\betagin{equation}
\lambdabel{exp_fu}
(1+o(1)) \exp\left\{ \varphirac{i \mbox{\boldmath$t$}}{\sqrt{\ln n}} \sum_{r=0}^{n-1} \pi_r \mbox{\boldmath$\mu$}_r^T
- \varphirac{1}{2 \ln n} \mbox{\boldmath$t$} \left(\sum_{r=0}^{n-1} \pi_r\mbox{\boldmath$m$}_r\right)\mbox{\boldmath$t$}^T
+ \varphirac{o(1)}{\ln n} \sum_{r=0}^{n-1} \pi_r \right\}.
\end{equation}
The first term in the argument of the exponential function in~\eqref{exp_fu} is
\[
\varphirac{i \mbox{\boldmath$t$}}{\sqrt{\ln n}} \sum_{r=0}^{n-1} \pi_r \mbox{\boldmath$\mu$}_r^T
=
{i \mbox{\boldmath$t$}}\sqrt{\ln n}\times \varphirac1{ \ln n } \sum_{r=0}^{n-1} \pi_r \mbox{\boldmath$\mu$}_r^T
\equiv
i \mbox{\boldmath$\nu$}_n \mbox{\boldmath$t$}^T \sqrt{\ln n}.
\]
Again using the Ces\`aro summability assumptions, \eqref{beta_n} and \eqref{stolz}, it is not difficult
to show, similarly to our argument in the proof of part~(i), that, as $n\to\infty,$
\[
\varphirac{1}{ \ln n} \left(\sum_{r=0}^{n-1} \pi_r\mbox{\boldmath$m$}_r\right) \to
(\alphapha+1)\dfrac{\mathcal{S}_1 (\xi \mbox{\boldmath$m$})}{\mathcal{S}_1 (\xi)} .
\]
Finally, the last term in the argument of the exponential function in~\eqref{exp_fu} is negligibly
small compared to
\[
\frac{1}{\ln n} \sum_{r=0}^{n-1} \pi_r \to \alpha+1,
\]
the last relation again following from \eqref{beta_n} and
\eqref{stolz}. This establishes the desired convergence of the distributions of $(\mbox{\boldmath$X$}_{n,j}-\mbox{\boldmath$\nu$}_n\ln n)/\sqrt{\ln n}$.
Now turn to the asymptotic behaviour of
$M_n (\, \cdot\,\sqrt{\ln n} +\mbox{\boldmath$\nu$}_n\ln n)$. Let $\varepsilon_n >0$ be a sequence vanishing slowly enough (so that $U_{\varepsilon_n n}/U_n = o(1)$ and $\ln\varepsilon_n=o(\sqrt{\ln n})$). Then, up to an additive term $o(1),$ the ch.f.\ $ \varphi_n (\mbox{\boldmath$t$} /\sqrt{\ln n})e^{-i\mbox{\scriptsize\boldmath$\nu$}_n\mbox{\scriptsize\boldmath$t$}^\top\sqrt{\ln n}} $ of that measure equals
\begin{align}
\frac1{U_n}& \sum_{r= \varepsilon_n n}^n \sum_{j=1}^{k_r} u_{r,j} \phi_{r,j} (\mbox{\boldmath$t$} /\sqrt{\ln n}) e^{-i\mbox{\scriptsize\boldmath$\nu$}_n\mbox{\scriptsize\boldmath$t$}^\top\sqrt{\ln n}}
\notag\\
&=\frac1{U_n}\sum_{r=\varepsilon_n n}^n \sum_{j=1}^{k_r} u_{r,j} \phi_{r,j} (\mbox{\boldmath$t$}_{r,n}/\sqrt{\ln r}) e^{-i\mbox{\scriptsize\boldmath$\nu$}_r\mbox{\scriptsize\boldmath$t$}_{r,n}^\top\sqrt{\ln r}}
\exp\biggl\{-\frac{i}{\sqrt{\ln n}}\sum_{s=r}^{n-1}\pi_s \mbox{\boldmath$\mu$}_s\mbox{\boldmath$t$}^T \biggr\},
\label{fi_M}
\end{align}
where $\mbox{\boldmath$t$}_{r,n}:= \mbox{\boldmath$t$}\sqrt{\frac{\ln r}{\ln n}}\to \mbox{\boldmath$t$} $ uniformly in $r\in [\varepsilon_n n, n]$ as $n\to\infty$ and, as $\sup_n\|\mbox{\boldmath$\mu$}_n\|\le c<\infty$ due to the uniform integrability assumption, one has from \eqref{beta_n} that, for some $c_1<\infty,$
\[
\biggl\|\sum_{s=r}^{n-1}\pi_s \mbox{\boldmath$\mu$}_s\biggr\|
\le c \sum_{s=\varepsilon_n n}^{n-1}\pi_s
= c \frac{\alpha+1}{\mathcal{S}_1 (\xi)} \sum_{s=\varepsilon_n n}^{n-1}\frac{\xi_s}{s}(1+o(1))
\le c_1 \sum_{s=\varepsilon_n n}^{n-1}\frac{\xi_s}{s}.
\]
Setting $\Xi_n:=\frac1n \sum_{k=1}^{n } \xi_k $, note that $\xi_s = s\Xi_s- (s-1) \Xi_{s-1}$ and $\Xi_n\to\mathcal{S}_1 (\xi)$ as $n\to\infty,$ and so
\begin{align*}
\sum_{s=\varepsilon_n n}^{n-1}\frac{\xi_s}{s}
& = \sum_{s=\varepsilon_n n}^{n-1}\biggl(\Xi_s- \frac{s-1}s \, \Xi_{s-1}\biggr)
= \Xi_{n-1} - \Xi_{\varepsilon_n n-1}+\sum_{s=\varepsilon_n n}^{n-1} \frac1s\, \Xi_{s-1}
\\
& = O\biggl(\sum_{s=\varepsilon_n n}^{n-1} \frac1s\biggr)= O\bigl(|\ln \varepsilon_n|\bigr) = o \bigl(\sqrt{\ln n}\bigr)
\end{align*}
by assumption. Therefore the last factor on the right-hand side of \eqref{fi_M} tends to one, so that the previously established convergence of the distributions of $(\mbox{\boldmath$X$}_{n,j}-\mbox{\boldmath$\nu$}_n\ln n)/\sqrt{\ln n}$
completes the proof of Theorem~\ref{Thm2}.
$\Box$
\medskip
{\em Proof of Theorem~\ref{Thm3}.} (i)~All the computations below will be valid on the event $A= \{\langle \widetilde\xi_1\rangle >0, \langle \tau_1\rangle >0\}.$
First set for brevity $a:=\langle \tau_1\rangle$ and
prove that all the $\widetilde\zeta_n$ are finite on $A.$ By stationarity, it suffices to show that $\widetilde\zeta_0<\infty$ on~$A$. Using Markov's inequality:
\[
{\bf P} (\widetilde\xi_{-m} >e^{am/2}|\mathscr{I})\le \langle\widetilde\xi_1\rangle e^{-am/2}, \quad m\ge 0,
\]
and the Birkhoff--Khinchin theorem implying that $S_{0,m}=am (1+o(1))$ a.s.\ as $m\to\infty,$ we see from the Borel--Cantelli lemma that
\[
\widetilde\zeta_0 = \sum_{m=0}^\infty \widetilde\xi_{-m}e^{-am(1+o(1))}
\le \sum_{m=0}^\infty e^{-am/2+o(m)} <\infty \qquad
\mbox{a.s.\ on $A$.}
\]
Setting $\vartheta_n:=\xi_n/\widetilde\xi_n -1$ (which is $o(1)$ a.s.\ by assumption \eqref{cond_xi}), we have
\betagin{align}
W_n &
= \sum_{r=0}^n \xi_r e^{S_r} = e^{S_n}\sum_{m=0}^n \xi_{n-m}e^{-S_{n,m}}
\notag\\
& = e^{S_n}\biggl(\widetilde\zeta_n + \sum_{m=0}^n \vartheta_{n-m} \widetilde \xi_{n-m} e^{-S_{n,m}} - e^{-\tau_0- S_n}\widetilde\zeta_{-1}\biggr).
\lambdabel{Wn}
\end{align}
Note that here $e^{-S_n}= o(\widetilde\zeta_n)$ a.s. Indeed, again by the Birkhoff--Khinchin theorem,
\[
\max_{m\le n/2} S_{n,m} =
\max_{m\le n/2} (an - a(n-m)) + o(n)=an/2 + o(n),
\]
and so
\begin{align}
\widetilde\zeta_n
& \ge \sum_{m=0}^{n/2}\widetilde\xi_{n-m} e^{-S_{n,m}} \ge e^{- an/2 + o(n)} \sum_{m=0}^{n/2}\widetilde\xi_{n-m}
\notag\\
& = e^{- an/2 + o(n)} \frac{n}2 \langle\widetilde\xi_1\rangle\gg e^{- an + o(n)}= e^{-S_n} .
\label{tildeze}
\end{align}
Moreover,
\[
\sum_{m=0}^n \vartheta_{n-m} \widetilde \xi_{n-m} e^{-S_{n,m}}
= \sum_{0\le m < 2n/3} + \sum_{ 2n/3\le m \le n} =:\Sigma_1 + \Sigma_2,
\]
where $|\Sigma_1|\le \max_{r>n/3}|\vartheta_r|\,\widetilde\zeta_{n}=o(\widetilde\zeta_{n})$ a.s. Using the notation $\overline \vartheta :=\sup_{j\ge 0} |\vartheta_{j}| $ (note that $\overline \vartheta< \infty$ a.s.) and the observation that $\min_{2n/3\le m \le n} S_{n,m}= 2na/3 + o(n)$ a.s., we obtain
\[
|\Sigma_2 |\le \overline \vartheta\sum_{ 2n/3\le m \le n}\widetilde \xi_{n-m} e^{-S_{n,m}}
\le \overline \vartheta e^{-2na/3 + o(n)} \sum_{ r=0}^{n/3}\widetilde \xi_{r} = \overline \vartheta e^{-2na/3 + o(n)} \frac{n}3 \langle\widetilde\xi_1\rangle = o(\widetilde\zeta_n)
\]
from \eqref{tildeze}. Using the above bounds in~\eqref{Wn} we see that $W_n= \widetilde\zeta_n e^{S_n} (1+o(1))$ a.s.\ and hence, as $n\to\infty$,
\begin{equation}
\label{beta_n_3}
\pi_n = \frac{\widetilde\xi_n}{\widetilde\zeta_n}\,(1+o(1))\quad
\mbox{a.s.}
\end{equation}
Now the conditional version of \eqref{forma_b} with $b_n:=n$ together with the uniform integrability assumption yields that, as $n\to\infty,$
\[
\phi_{n,j|\mathcal{V}} (\mbox{\boldmath$t$} /n)=(1+o(1)) \exp\left\{ \frac{1+o(1)}{n} \sum_{r=1}^{n-1} \frac{\widetilde\xi_r}{\widetilde\zeta_r} (1+\eta_{r,n}^{(1)})(\widetilde\mbox{\boldmath$\mu$}_r\mbox{\boldmath$t$}^T
+ \eta_{r,n}^{(2)}) \right\},
\]
where $\max_{r\le n}|\eta_{r,n}^{(j)}|\le \eta_n=o(1)$ a.s., $j=1,2$. The desired convergence \eqref{Pn} now follows from the Birkhoff--Khinchin theorem.
The assertion concerning the convergence of $M_{n|\mathcal{V}} (\,\cdot\, n)$ follows from the observation that the ch.f.\ of that measure, similarly to~\eqref{chf_M}, is equal to
\[
\frac1{U_n}\sum_{r=0}^n \sum_{j=1}^{k_r} u_{r,j} \phi_{r,j|\mathcal{V}} \biggl(\frac{\mbox{\boldmath$t$}}{r}\,\frac{r}{n} \biggr)
= \frac1{U_n}\sum_{r=0}^n \sum_{j=1}^{k_r} u_{r,j} \exp \{ i \mbox{\boldmath$t$} \mbox{\boldmath$\lambda$}^T r/n+ \eta_{r,n}\},
\]
where $\eta_{r,n}=o(1)$ as $r\to\infty$, due to the convergence~\eqref{Pn}.
\medskip
(ii) We start as in the proof of Theorem~\ref{Thm2}(ii), but, since $\pi_n\not\to 0$ now, instead of~\eqref{ln_1} one has to use, for $b_n\to\infty,$ the following expansion:
\betagin{align*}
\ln (1 & +\pi_r (f_{r } (\mbox{\boldmath$t$}/b_n) -1))
= \pi_r (f_{r } (\mbox{\boldmath$t$}/b_n) -1)
- \varphirac{\pi_r^2}2 (f_{r } (\mbox{\boldmath$t$}/b_n) -1)^2 (1+o(1))
\notag
\\
&= \pi_r \biggl( \varphirac{i\mbox{\boldmath$\mu$}_{r|\mathcal{V}}\mbox{\boldmath$t$}^T}{b_n}
-\varphirac{\mbox{\boldmath$t$} \mbox{\boldmath$m$}_{r|\mathcal{V}}\mbox{\boldmath$t$}^T}{2b_n^2} +o(b_n^{-2}) \biggr)
- \varphirac{\pi_r^2(i\mbox{\boldmath$\mu$}_{r|\mathcal{V}} \mbox{\boldmath$t$}^T +o(1))^2 }{2b_n^2}
(1+o(1))
\\
& =\varphirac{i}{b_n} \pi_r \mbox{\boldmath$\mu$}_{r|\mathcal{V}}\mbox{\boldmath$t$}^T
- \varphirac{ \pi_r}{2b_n^2}\mbox{\boldmath$t$} \bigl(\mbox{\boldmath$m$}_{r|\mathcal{V}} - \pi_r \mbox{\boldmath$\mu$}_{r|\mathcal{V}}^T \mbox{\boldmath$\mu$}_{r|\mathcal{V}}+ o(1) \bigr)\mbox{\boldmath$t$}^T (1+o(1)),
\end{align*}
where the $o(1)$-terms are uniform in $r$ due to the uniform integrability assumptions. Letting $b_n:=\sqrt{n},$ we obtain (cf.~\eqref{exp_fu}) that $\phi_{n,j|\mathcal{V}}
(\mbox{\boldmath$t$}/\sqrt{n})$ is given by
\begin{equation*}
\exp\left\{ \frac{i \mbox{\boldmath$t$}}{\sqrt{ n}} \sum_{r=0}^{n-1} \pi_r \mbox{\boldmath$\mu$}_{r|\mathcal{V}}^T
- \frac{1}{2 n} \mbox{\boldmath$t$} \left(\sum_{r=0}^{n-1} ( \pi_r\mbox{\boldmath$m$}_{r|\mathcal{V}}-\pi_r^2 \mbox{\boldmath$\mu$}_{r|\mathcal{V}}^T\mbox{\boldmath$\mu$}_{r|\mathcal{V}})\right)\mbox{\boldmath$t$}^T
+ \frac{o(1)}{n} \sum_{r=0}^{n-1} \pi_r +o(1)\right\}.
\end{equation*}
Therefore, using \eqref{beta_n_3}, we see that the conditional (given $\mathcal{V}$) ch.f.\ of $(\mbox{\boldmath$X$}_{n,j} - \mbox{\boldmath$\varkappa$}_n)/\sqrt{ n}$ is equal to
\[
\exp\left\{
- \frac{1}{2 n} \mbox{\boldmath$t$}
\sum_{r=0}^{n-1} \frac{\widetilde \xi_r}{\widetilde \zeta_r} (1 + \eta^{(1)}_{r,n} )
\Bigl(\widetilde \mbox{\boldmath$m$}_r - \frac{\widetilde \xi_r}{\widetilde \zeta_r}\widetilde\mbox{\boldmath$\mu$}_{r }^T\widetilde\mbox{\boldmath$\mu$}_{r } + \eta^{(3)}_{r,n} \Bigr) \mbox{\boldmath$t$}^T
+ \frac{o(1)}{ n} \sum_{r=0}^{n-1} \frac{\widetilde \xi_r}{\widetilde \zeta_r} (1+o(1)) +o(1)\right\},
\]
where the terms $\eta^{(1)}_{r,n}$ and $\eta^{(3)}_{r,n}$ are small uniformly in~$r\le n$. Now the assertion of part~(ii) follows from the Birkhoff--Khinchin theorem.
$\Box$
\betagin{thebibliography}{99}
\bibitem{Aietal09}
Aiello, W.; Bonato, A.; Cooper, C.; Janssen, J.; Pra{\l}at, P.
A spatial web graph model with local influence regions.
Internet Mathematics {\bf 2009}, 5, 175--196.
\bibitem{AsKa76}
Asmussen, S.; Kaplan, N. Branching random walks. I, II. Stoch.\ Proc.\ Appl. {\bf 1976}, 4, 1--13; 15--31.
\bibitem{BaAl99}
Barab\'asi, A.-L.; Albert, R.
Emergence of scaling in random networks.
Science {\bf 1999}, 286, 509--512.
\bibitem{Bi97}
Biggins, J.D. How fast does a general branching random walk spread? In {\em Classical and Modern Branching Processes;} Athreya, K.B., Jagers, P., Eds.;
Springer: New York, 1997; 19--39.
\bibitem{BoMo05}
Borovkov, K.; Motyer, A.
On the asymptotic behaviour of a simple growing point process.
Statist.\ Probab.\ Letters {\bf 2005}, 72, 265--275.
\bibitem{BoVa06}
Borovkov, K.; Vatutin, V.A.
On the asymptotic behaviour of random recursive trees in
random environment.
Adv.\ Appl.\ Probab. {\bf 2006}, 38, 1047--1070.
\bibitem{De88}
Devroye, L.
Applications of the theory of records in the study of random trees.
Acta Inf. {\bf 1988}, 26, 123--130.
\bibitem{Du07}
Durrett, R.
{\em Random Graph Dynamics.}
Cambridge University Press, Cambridge, 2007.
\bibitem{KaNo00}
Kaczor, W.J.; Nowak, M.T. {\em Problems in Mathematical Analysis I: Real Numbers,
Sequences and Series.} AMS, Providence RI, 2000.
\bibitem{Ne39}
Neyman, J. On a new class of ``contagious" distributions, applicable in entomology and bacteriology. Ann.\ Math.\ Statist. {\bf 1939}, 10, 35--57.
\bibitem{Po31}
P\'olya, G. Sur quelques points de la th\'eorie des probabilit\'es.
Ann. Inst. Henri Poincar\'e, {\bf 1931}, 1, 117--162.
\bibitem{RuTyVa07}
Rudas, A.; T\'oth, B.; Valkó, B.
Random trees and general branching processes.
Random Structures Algorithms {\bf 2007}, 31, 186--202.
\bibitem{Th54}
Thompson, H.R. A note on contagious distributions. Biometrika {\bf 1954}, 41, 268--271.
\end{thebibliography}
\end{document}
\begin{document}
\title{Robust optimal problem for dynamic risk measures governed by BSDEs with jumps and delayed generator}
\author{Navegué Tuo\thanks{[email protected]}\; and Auguste Aman\thanks{[email protected], corresponding author}\\
UFR de Mathématiques et Informatique\\ Université Félix H. Boigny, Cocody\\
22 BP 582 Abidjan, Côte d'Ivoire}
\date{}
\maketitle \thispagestyle{empty} \setcounter{page}{1}
\thispagestyle{fancy} \fancyhead{}
\fancyfoot{}
\renewcommand{0pt}{0pt}
\begin{abstract}
The aim of this paper is to study an optimal stopping problem for dynamic risk measures induced by backward stochastic differential equations (BSDEs) with jumps and delayed generators. We first connect the value function of this problem to a reflected BSDE with jumps and delayed generator. Then, after establishing an existence and uniqueness result for this reflected BSDE, we use it to address a mixed control/optimal stopping game problem for the previous dynamic risk measure in the ambiguity case.
\end{abstract}
\textbf{MSC}: Primary: 60F05, 60H15, 47N10, 93E20; Secondary: 60J60\\
\textbf{Keywords}: Backward stochastic differential equations; Delayed generators; Reflected backward stochastic differential equations; Jump processes; Optimal stopping; Dynamic risk measures; Game problems.
\section{Introduction}
The theory of risk measures goes back to the work of Artzner et al. \cite{Aal}. Since then, there have been many studies on risk measures; see e.g.\ F\"ollmer and Schied \cite{E6}, Frittelli and Gianin \cite{E8}, Bion-Nadal \cite{E16}, Barrieu and El Karoui \cite{E21}, and Bayraktar, Karatzas and Yao \cite{E18}. Later, around the year 2005, various authors established links between continuous time dynamic risk measures and
backward stochastic differential equations. They introduced dynamic risk measures in the Brownian case, defined as the solutions of BSDEs (see \cite{E8, E9,E21}). More precisely,
let us consider a function $f$ and a random variable $\xi$.
The risk measure of the position $\xi$ at time $t$, denoted by $\rho_{t}(\xi)$, is given by $-X(t)$, where $\{X(t),\; t\geq 0\}$ is the first component of the solution of the BSDE
associated with generator $f$ and terminal value $\xi$.
Many studies have been devoted to such risk measures, dealing with optimal
stopping problems and robust optimization problems (see for example \cite{10,E18,E21}).
Recently, in \cite{D1}, Delong and Imkeller introduced the theory of nonlinear backward
stochastic differential equations (BSDEs, in short) with time
delayed
generators. Precisely, given a progressively measurable process $f$, the
so-called generator, and a square integrable random variable $\xi$, BSDEs with
time delayed generators are BSDEs of the form:
\begin{eqnarray*}
X(t) = \xi+\int_{t}^{T}f(s,X_{s},Z_{s})ds-\int_{t}^{T}Z(s)dW(s), 0 \leq t \leq T,
\end{eqnarray*}
where the process $(X_{t},Z_{t})=(X(t+u),Z(t+u))_{-T \leq u \leq 0}$ represents all the past values of the solution up to time $t$. Under some assumptions, they
proved an existence and uniqueness result for such BSDEs. In the same spirit, the
same authors studied, in an accompanying paper (see \cite{DI2}), BSDEs with time
delayed generators driven by both a Brownian motion and a Poisson random
measure. Existence and uniqueness of a solution and its Malliavin differentiability were established. A few years later, in \cite{D2}, Delong showed that BSDEs with time delayed generators are an important tool for formulating many problems in mathematical finance and insurance.
For example, he proved that the dynamics of option based portfolio insurance is described by
the following time delayed BSDE:
\begin{eqnarray*}
X(t)=X(0)+(X(T)-X(0))^{+}-\int_t^TZ(s)dW(s).
\end{eqnarray*}
In view of these works, and given the importance of applications related to BSDEs
with time delayed generators, it is, in our opinion, very natural to
study an optimal stopping problem for dynamic risk measures governed by
backward stochastic differential equations with delayed generators. More precisely, this
paper is dedicated to solving an optimal stopping problem for dynamic
risk measures governed by backward stochastic differential equations driven by
both a Brownian motion and a Poisson random measure. In more detail, let
$(\psi(t))_{t\geq 0}$ be a given right continuous with left limits (rcll) adapted process and $\tau$ be a stopping time with values in $[0,T]$. Our
objective is to solve an optimal stopping problem related to the risk measure of the
position $\psi(\tau)$, denoted by $\rho^{\psi,\tau}$, whose dynamics are given by the process $-X^{\psi,\tau}$, where $(X^{\psi,\tau},Z^{\psi,\tau}, U^{\psi,\tau})$ satisfies the following BSDE
\begin{eqnarray*}
X^{\psi,\tau}(t)&=&\psi(\tau)+\int_t^Tf(s,Z^{\psi,\tau}_{s},U^{\psi,\tau}_{s}(.))ds-\int_t^TZ^{\psi,\tau}(s)dW(s)\\
&&-\int_t^T\int_{\mathbb R^* }U^{\psi,\tau}(s,z)\tilde{N}(ds,dz),\;\; 0\leq t\leq \tau,
\end{eqnarray*}
where $\mathbb R^*=\mathbb R\setminus \{0\}$.
Roughly speaking, for every stopping time $\sigma$ with values in $[0,T]$, our aim is to minimize the risk measure at time $\sigma$, i.e.\ we want to find a stopping time $\tau^{*}$ such that, setting
\begin{eqnarray*}
v(\sigma)=ess\inf_{\sigma\leq \tau\leq T}\rho^{\psi,\tau}(\sigma),
\end{eqnarray*}
we have
\begin{eqnarray}\label{ST}
v(\sigma)=\rho^{\psi,\tau^{*}}(\sigma).
\end{eqnarray}
Our method is essentially based on the link established between the value function $v$ and the first component of the solution of a reflected BSDE with jumps and delayed generator. The notion of reflected BSDEs was first introduced by El Karoui et al. in \cite{E14} in the case of a Brownian filtration. The solutions of such equations are constrained to stay above given continuous processes called obstacles. Later, various extensions were obtained by adding a jump process and/or allowing the obstacle to be discontinuous. One can cite the works of Tang and Li \cite{TL}, Hamadène and Ouknine \cite{E5,E7}, Essaky \cite{Essaky} and Quenez and Sulem \cite{10}. More recently, reflected BSDEs without jumps and with delayed generators have been introduced by Zhou and Ren \cite{ZR}, and by Tuo et al. \cite{Tal}.
Our study proceeds in two stages. First, we provide an optimality criterion, that is, a characterization of optimal stopping times, and, when the obstacle is right continuous with left limits (rcll, in short), we show the existence of an optimal stopping time. Thereafter, we address the optimal stopping problem when there is ambiguity about the risk measure. This means that there exists a control $\delta$ that can influence the dynamic risk measure. More precisely, given the dynamic position $\psi$, this amounts to studying the robust optimal stopping problem for the family of risk measures $\{\rho^{\delta},\;\; \delta\in\mathcal{A}\}$ of this position $\psi$, induced by the BSDEs associated with the generators $\{f^{\delta},\; \delta\in \mathcal{A}\}$. To this purpose, and in view of the first part, we study the corresponding optimal control problem related to $Y^{\delta}$, the first component of the solution of the reflected BSDE with jumps and delayed generator $f^{\delta}$, $\delta \in \mathcal{A}$, with an rcll obstacle $\psi$. In other words, we want to determine a stopping time $\tau^{*}$ which minimizes, over all stopping times, the risk of the position $\psi$. This is equivalent to deriving a saddle point for a mixed control/optimal stopping game problem.
The paper is organized as follows. We give the notation and the formulation of the optimal stopping problem for risk measures in Section 2. Existence and uniqueness results for reflected BSDEs with jumps and delayed generators with an rcll obstacle are provided in Section 3. In Sections 4 and 5, we deal with the robust optimal stopping problem.
\section{Formulation of the problem}
Let us consider a probability space $(\Omega,\mathcal{F},\mathbb P)$. For $E=\mathbb R^{d}\setminus \{0\}$ equipped with its Borel field $\mathcal{E}$, let $N$ be a Poisson random measure on $\mathbb R_{+}\times E$ with compensator $\nu(dt,dx)=\lambda(dx)dt$, where $\lambda $ is a $\sigma$-finite measure on $(E,\mathcal{E})$ satisfying
\begin{eqnarray*}
\int_{E}(1\wedge |x|^2)d\lambda(x)<+\infty.
\end{eqnarray*}
such that $((N-\nu)([0,t]\times A))_{t\geq 0}$ is a martingale for every $A\in\mathcal{E}$ with $\lambda(A)<+\infty$. Let us also consider a $d$-dimensional standard Brownian motion $(W_t)_{t\geq 0}$ independent of $N$. Finally, let us consider the filtration $\mathbb{F}=\{\mathcal{F}_t\}_{t\geq 0}$ defined by
\begin{eqnarray*}
\mathcal{F}_t=\mathcal{F}^{W}_t\vee\mathcal{F}^{N}_t\vee\mathcal{N},
\end{eqnarray*}
where $\mathcal{N}$ is the set of all $\mathbb P$-null element of $\mathcal{F}$.
\subsection{BSDEs with time delayed generators driven by Brownian motions and Poisson random measures}
This subsection is devoted to recalling the existence and uniqueness result for BSDEs with jumps and time-delayed generators
\begin{eqnarray}\label{BSDEjump}
X(t)&=&\xi+\int_t^Tf(s,X_s,Z_{s},U_{s}(.))ds-\int_t^TZ(s)dW(s)\nonumber\\
&&-\int_t^T\int_{E}U(s,z)\tilde{N}(ds,dz),\;\; 0\leq t\leq T,
\end{eqnarray}
studied by Delong and Imkeller in \cite{DI2}, and to deriving a comparison principle associated with this BSDE.
To this end, let us describe the following spaces of processes:
\begin{description}
\item $\bullet $ $ L_{-T}^2 (\mathbb{R}) $ denotes the space of measurable functions $ z : [-T,0]\rightarrow\mathbb{R}$ satisfying
$$ \int_{-T}^0 \mid z(v) \mid^2 dv<+\infty,
$$
\item $ \bullet $ $L_{-T}^{\infty} (\mathbb{R} )$ denotes the space of bounded, measurable functions $ y : [-T,0] \rightarrow \mathbb{R} $\\
satisfying
$$
\sup\limits_{v\in [-T,0]}\mid y(v) \mid^2 <+\infty,
$$
\item $\bullet$ $L_{-T,m}^{2}(\mathbb R)$ denotes the space of product measurable functions $u:[-T,0]\times \mathbb R/\{0\}\rightarrow \mathbb R$ such that
$$
\int_{-T}^{0}\int_{E}|u(t,z)|^{2}m(dz)dt<+\infty.
$$
\item $\bullet$ $ L^2 (\Omega, \mathcal{F}_T,\mathbb{R})$ is the Banach space of $\mathcal{F}_T$-measurable random variables $\xi: \Omega \rightarrow \mathbb{R} $ normed by $\displaystyle \|\xi\|_{L^2}=\left[\mathbb E(|\xi|^2)\right]^{1/2}$
\item $\bullet$ $\mathcal{H}^2(\mathbb R)$ denotes the Banach space of all predictable processes $\varphi$ with values in $\mathbb R$ such that $\mathbb E\left[\int_0^T|\varphi(s)|^2ds\right]<+\infty$.
\item $\bullet$ Let $\mathcal{H}_{m}^{2}(\mathbb R)$ denote the space of $\mathcal{P}\otimes \mathcal{E}$-measurable processes $\phi$ satisfying $\mathbb E\left(\int_{0}^{T}\int_{E}|\phi(t,z)|^{2}m(dz)dt\right)<+\infty$, where $\mathcal{P}$ is the sigma algebra of $(\mathcal{F}_t)_{t\geq 0}$-predictable sets on $\Omega\times[0,T]$.
\item $\bullet$ $\mathcal{S}^2(\mathbb R)$ denotes the Banach space of all $(\mathcal{F}_t)_{0\leq t\leq T}$-adapted right continuous left limit (rcll) processes $\eta$ with values in $\mathbb R$ such that $\mathbb E\left(\sup_{0\leq s\leq T}|\eta(s)|^2\right)<+\infty$
\item $\bullet$ $\mathcal{K}^2(\mathbb R)$ denotes the Banach space of all $(\mathcal{F}_t)_{0\leq t\leq T}$-predictable right continuous left limit (rcll) increasing processes $\eta$ with values in $\mathbb R$ such that $\eta(0)=0$ and $\mathbb E\left(|\eta(T)|^2\right)<+\infty$
\end{description}
The spaces $\mathcal{H}^2(\mathbb R),\, \mathcal{H}_{m}^{2}(\mathbb R)$ and $\mathcal{S}^2(\mathbb R)$ are respectively endowed with the norms
\begin{eqnarray*}
\|\varphi\|^2_{\mathcal{H}^2,\beta}&=&\mathbb E\left[\int_0^Te^{\beta s}|\varphi(s)|^2ds\right]\\
\|\phi(t,z)\|^2_{\beta,m}&=& \mathbb E\left(\int_{0}^{T}\int_{E}|\phi(t,z)|^{2}m(dz)dt\right)\\
\|\eta (s)\|^2_{\mathcal{S}^{2},\beta}&=&\mathbb E\left(\sup_{0\leq s\leq T}e^{\beta s}|\eta(s)|^2\right).
\end{eqnarray*}
Our two results are established under the following hypotheses: for a fixed $T>0$,
\begin{description}
\item[$({\bf A1})$] $\tau$ is a finite $(\mathcal{F}_t)_{0\leq t\leq T}$-stopping time.
\item[$({\bf A2})$] $\xi \in L^{2}(\mathcal{F}_{\tau},\mathbb R)$
\item[$({\bf A3})$] $f:\Omega \times [0,T]\times L_{-T}^{\infty}(\mathbb R) \times L_{-T}^{2}(\mathbb R) \times L_{-T,m}^{2}(\mathbb R) \rightarrow \mathbb R$ is a product measurable, $\mathbb F$-adapted function satisfying
\begin{itemize}
\item [$(i)$] There exists a probability measure $\alpha$ on $([-T,0],\mathcal{B}([-T,0]))$ and a positive constant $K$, such that
\begin{eqnarray*}
&&|f(t,y_t,z_t,u_t(.))- f(t,\bar{y}_t,\bar{z}_t,\bar{u}_t(.))|^{2}\\
&\leq& K\int_{-T}^{0}\left[|y(t+v)-\bar{y}(t+v)|^{2}+|z(t+v)-\bar{z}(t+v)|^{2}\right.\\
&&\left.+\int_{E}|u(t+v,\zeta) - \bar{u}(t+v,\zeta)|^{2}m(d\zeta)\right]\alpha(dv)
\end{eqnarray*}
for $\mathbb{P}\otimes\lambda$ a.e.\ $(\omega,t) \in \Omega \times [0,T]$ and any $(y_t,z_t,u_t(.))$, $(\bar{y}_t,\bar{z}_t,\bar{u}_t(.))\in L_{-T}^{\infty}(\mathbb R) \times L_{-T}^{2}(\mathbb R) \times L_{-T,m}^{2}(\mathbb R)$
\item [$(ii)$] $\displaystyle \mathbb E\left(\int_{0}^{T}|f(t,0,0,0)|^{2}dt \right)<+\infty$
\item [$(iii)$] $f(t,.,.,.)= 0$ a.s, for $t<0$
\end{itemize}
\end{description}
For clarity, we now give the notion of a solution of BSDE \eqref{BSDEjump}.
\begin{definition}
The triple of processes $(X,Z,U)$ is called a solution of BSDE \eqref{BSDEjump} if $(X,Z,U)$ belongs to $\mathcal{S}^2(\mathbb R)\times \mathcal{H}^2(\mathbb R)\times \mathcal{H}^2_{m}(\mathbb R)$ and satisfies \eqref{BSDEjump}.
\end{definition}
We recall the existence and uniqueness result established in \cite{DI2}.
\begin{theorem}
Assume that $({\bf A1})$-$({\bf A3})$ hold. If the terminal time $T$ or the Lipschitz constant $K$ is sufficiently small, i.e.
\begin{eqnarray*}
9TKe\max(1,T)<1,
\end{eqnarray*}
then BSDE \eqref{BSDEjump} has a unique solution.
\end{theorem}
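To give a rough idea of how restrictive this smallness condition is, note that for the time horizon $T=1$ it reads
\begin{eqnarray*}
9TKe\max(1,T)=9Ke<1 \quad\Longleftrightarrow\quad K<\frac{1}{9e}\approx 0.041,
\end{eqnarray*}
so only generators with a rather small (delayed) Lipschitz constant are covered.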
The comparison principle is a very important concept in the theory of BSDEs without delay. Unfortunately, as pointed out by Example 5.1 in \cite{D1}, this principle cannot be extended in its general form to BSDEs with delayed generators. Nevertheless, according to Theorem 3.5 in \cite{Tal}, the comparison principle for BSDEs without jumps and with delayed generators still holds on stochastic intervals where the strategy process $Z$ stays away from $0$. The following theorem is an extension to BSDEs with jumps and delayed generators. To establish it, we need the following additional assumption.
\begin{description}
\item ({\bf A4 }) $f:\Omega \times [0,T]\times L_{-T}^{\infty}(\mathbb R) \times L_{-T}^{2}(\mathbb R) \times L_{-T,m}^{2}(\mathbb R) \rightarrow \mathbb R$ is a product measurable, $\mathbb F$-adapted function satisfying:\begin{eqnarray*}
f(t,x_t,z_t,u_t(.))-f(t,x_t,z_t,u'_t(.))\geq \int_{-T}^{0}\langle \theta^{x_t,z_t,u_t(.),u'_t(.)},u(t+v,.)-u'(t+v,.)\rangle_m\alpha(dv),
\end{eqnarray*}
for $\mathbb{P}\otimes\lambda$ a.e, $(\omega,t) \in \Omega \times [0,T]$ and each $(x_t,z_t,u_t(.),u'_t(.))\in L_{-T}^{\infty}(\mathbb R) \times L_{-T}^{2}(\mathbb R) \times L_{-T,m}^{2}(\mathbb R)\times L_{-T,m}^{2}(\mathbb R)$, where $ \theta:\Omega\times [0,T]\times L_{-T}^{\infty}(\mathbb R) \times L_{-T}^{2}(\mathbb R) \times L_{-T,m}^{2}(\mathbb R)\times L_{-T,m}^{2}(\mathbb R)\rightarrow L_{-T,m}^{2}(\mathbb R)$ is
a measurable and bounded function such that there exists $\varphi$ belonging to $L_{-T,m}^{2}(\mathbb R)$ verifying
\begin{eqnarray*}
\theta^{x_t,z_t,u_t(.),u'_t(.)}(\zeta)\geq -1\;\;\;\;\mbox{and}\;\;\;\; |\theta^{x_t,z_t,u_t(.),u'_t(.)}(\zeta)|\leq \varphi(\zeta).
\end{eqnarray*}
\end{description}
\begin{theorem}
Consider BSDE \eqref{BSDEjump} associated with delayed generators $f_1$, $f_2$ and corresponding terminal values $\xi^1$, $\xi^2$ at terminal time $\tau$, satisfying assumptions $({\bf A1})$-$({\bf A3})$. Let $(X^{\tau,1}, Z^{\tau,1},U^{\tau,1})$ and $(X^{\tau,2}, Z^{\tau,2},U^{\tau,2})$ denote the associated unique solutions, respectively. Let us consider the sequence of stopping times $(\sigma_n)_{n\geq 1}$ defined by
\begin{eqnarray}
\sigma_n&=&\inf\left\{
\begin{array}{ll}
t\geq 0 &,\displaystyle |X^{\tau,1}(t)-X^{\tau, 2}(t)|\vee|Z^{\tau,1}(t)-Z^{\tau,2}(t)|\vee \int_{E}|U^{\tau,1}(t,z)-U^{\tau,2}(t,z)|m(dz) \leq \frac{1}{n}\\ \mbox{or}\\
&\displaystyle |X^{\tau,1}(t)-X^{\tau, 2}(t)|\vee |Z^{\tau,1}(t)-Z^{\tau,2}(t)|\vee \int_{E}|U^{\tau,1}(t,z)-U^{\tau,2}(t,z)|m(dz)\geq n.
\end{array}
\right\}\nonumber
\\ &\wedge & T\label{TA}
\end{eqnarray}
and set
\begin{eqnarray}\label{TAbis}
\sigma=\sup_{n\geq 1}\sigma_n.
\end{eqnarray}
Moreover we suppose that
\begin{itemize}
\item $X^{\tau,1}(\sigma)\geq X^{\tau,2}(\sigma)$
\item $f_1(t, X^{\tau,1}_t,Z^{\tau,1}_t,U^{\tau,1}_t(.))\geq f_2(t,X^{\tau,1}_t,Z^{\tau,1}_t,U^{\tau,1}_t(.))$ or
\item $f_1(t,X^{\tau,2}_t,Z^{\tau,2}_t,U^{\tau,2}_t(.))\geq f_2(t,X^{\tau,2}_t,Z^{\tau,2}_t,U^{\tau,2}_t(.))$.
\end{itemize}
Then $X^{\tau,1}(t)\geq X^{\tau,2}(t),\; \mathbb P$-a.s. for all $t\in [0,\sigma]$.
\end{theorem}
\begin{proof}
We follow the ideas of Theorem 5.1 in \cite{D1}, established for BSDEs without jumps and with
delayed generators. For each $t\in [0,T]$ let
\begin{eqnarray*}
\Delta X^{\tau}(t)=X^{\tau,1}(t)-X^{\tau,2}(t),\;\Delta Z(t)= Z^{\tau,1}(t)- Z^{\tau,2}(t),\;\;\Delta U^{\tau}(t,.)= U^{\tau,1}(t,.)- U^{\tau,2}(t,.),\\\\ \Delta f(t,X^{\tau,2}_{t},Z^{\tau,2}_{t},U^{\tau,2}_{t}(.)) = f^{1}(t,X^{\tau,2}_{t},Z^{\tau,2}_{t},U^{\tau,2}_{t}(.))-f^{2}(t,X^{\tau,2}_{t},Z^{\tau,2}_{t},U^{\tau,2}_{t}(.)).
\end{eqnarray*}
Let consider the real processes $\delta, \beta$ and $\gamma$ defined respectively by
\begin{eqnarray*}
\delta(t) =\left\{
\begin{array}{lll}
\frac{f^{1}(t,X^{\tau,1}_{t},Z^{\tau,1}_{t},U^{\tau,1}_{t}(.))- f^{1}(t,X^{\tau,2}_{t},Z^{\tau,1}_{t},U^{\tau,1}_{t}(.))}{\Delta X^{\tau}(t)}&\mbox{if}& \Delta X^{\tau}(t)\neq 0\\
0 & & otherwise,
\end{array}
\right.
\end{eqnarray*}
\begin{eqnarray*}
\beta(t)=
\left\{
\begin{array}{lll}
\frac{f^{1}(t,X^{\tau,2}_{t},Z^{\tau,1}_{t},U^{\tau,1}_{t}(.))- f^{1} (t,X^{\tau,2}_{t},Z^{\tau,2}_{t},U^{\tau,1}_t(.))}{\Delta Z^{\tau}(t)}&\mbox{if}&\Delta Z^{\tau}(t)\neq 0\\
0 & & otherwise.
\end{array}
\right.
\end{eqnarray*}
and
\begin{eqnarray*}
\gamma(t)=
\left\{
\begin{array}{lll}
\frac{f^{1}(t,X^{\tau,2}_{t},Z^{\tau,2}_{t},U^{\tau,1}_{t}(.))- f^{1} (t,X^{\tau,2}_{t},Z^{\tau,2}_{t},U^{\tau,2}_t(.))}{\int_{E}\Delta U^{\tau}(t,z)m(dz)}&\mbox{if}&\int_{E}\Delta U^{\tau}(t,z)m(dz)\neq 0\\
0 & & otherwise.
\end{array}
\right.
\end{eqnarray*}
Hence, since $f^1$ and $f^2$ are Lipschitz with respect to $x$, $z$ and $u$, we have
\begin{eqnarray*}
|\delta(t)|^2\leq K\int_{-T}^{0}\left(\frac{|\Delta X^{\tau}(t+u)|^2}{|\Delta X^{\tau}(t)|^2}\right)\alpha(du),
\end{eqnarray*}
\begin{eqnarray*}
|\beta(t)|^2\leq K\int_{-T}^{0}\left(\frac{|\Delta Z^{\tau}(t+u)|^2}{|\Delta Z^{\tau}(t)|^2}\right)\alpha(du)
\end{eqnarray*}
and
\begin{eqnarray*}
|\gamma(t)|^2\leq K\int_{-T}^{0}\left(\frac{\displaystyle \int_{E}|\Delta U^{\tau}(t+u,z)|^2 m(dz)}{\displaystyle\int_{E}|\Delta U^{\tau}(t,z)|^2 m(dz)}\right)\alpha(du).
\end{eqnarray*}
Next, in view of \eqref{TA} and \eqref{TAbis}, for $t\in[0,\sigma]$, there exists a constant $C$ such that, for $\phi=\delta, \beta, \gamma$,
$$|\phi(t)|\leq C\;\; a.s.$$
On the other hand, we have
\begin{eqnarray*}
\Delta X^{\tau}(t)&=& \Delta X^{\tau}(\sigma)+\int_t^{\sigma}\delta(s)\Delta X^{\tau}(s)ds + \int_t^{\sigma}\beta(s)\Delta Z^{\tau}(s)ds\\
&& + \int_t^{\sigma}\int_{E}\gamma(s)\Delta U^{\tau}(s,z)m(dz)ds+\int_{t}^{\sigma}\Delta f(s,X^{\tau,2}_{s},Z^{\tau,2}_{s},U^{\tau,2}_{s}(.))ds\\
&&-\int_t^{\sigma}\Delta Z^{\tau}(s)dW(s) - \int_t^{\sigma}\int_{E}\Delta U^{\tau}(s,z)\tilde{N}(ds,dz)
\end{eqnarray*}
and setting $\displaystyle R(t)=\exp\left(\int_{0}^{t}\delta(s)ds\right)$, it follows from Itô's formula applied to $R(s)\Delta X^{\tau}(s)$ between $t$ and $\sigma$ that
\begin{eqnarray*}
R(t)\Delta X^{\tau}(t) & = & R(\sigma)\Delta X^{\tau}(\sigma)+ \int_{t}^{\sigma} R(s)\beta(s)\Delta Z^{\tau}(s)ds\nonumber\\
&& +\int_{t}^{\sigma}\int_{E}R(s)\gamma(s)\Delta U^{\tau}(s,z)m(dz)ds+\int_{t}^{\sigma} R(s)\Delta f(s,X^{\tau,2}_{s},Z^{\tau,2}_{s},U^{\tau,2}_{s}(.))ds\nonumber\\
&& - \int_{t}^{\sigma} R(s)\Delta Z^{\tau}(s)dW(s)- \int_{t}^{\sigma}\int_{E}R(s)\Delta U^{\tau}(s,z)\tilde{N}(ds,dz).
\end{eqnarray*}
Taking into consideration the assumptions on the generators and terminal values, we obtain
\begin{eqnarray}\label{Comp}
R(t)\Delta X^{\tau}(t) &\geq &\int_{t}^{\sigma} R(s)\beta(s)\Delta Z^{\tau}(s)ds\nonumber\\
&& +\int_{t}^{\sigma}\int_{E}R(s)\gamma(s)\Delta U^{\tau}(s,z)m(dz)ds\nonumber\\
&& - \int_{t}^{\sigma} R(s)\Delta Z^{\tau}(s)dW(s)-\int_{t}^{\sigma}\int_{E}R(s)\Delta U^{\tau}(s,z)\tilde{N}(ds,dz).
\end{eqnarray}
Let us denote by $D(t)$ the right-hand side of \eqref{Comp} and set $M(t)=\int_{0}^{t}\beta(s)dW(s) + \int_{0}^{t}\int_{E}\gamma(s)\tilde{N}(ds,dz)$. In view of Girsanov's theorem, under the probability measure $\mathbb Q$ defined by $\mathbb Q=\mathcal{E}_{\sigma}(M).\mathbb P$, where $\mathcal{E}(M)$ denotes the Doléans-Dade exponential of $M$, one has $\mathbb E_{\mathbb Q}(D(t)|\mathcal{F}_{t})=0$. Taking the conditional expectation with respect to $\mathcal{F}_{t}$ under $\mathbb Q$ on both sides of \eqref{Comp}, we obtain $R(t)\Delta X^{\tau}(t)\geq 0\; \mathbb Q$-a.s., and hence $\mathbb P$-a.s. Finally, since the process $(R(t),\, t\geq 0)$ is positive, we have, for all $t\in[0,\sigma]$, $X^{\tau,1}(t)\geq X^{\tau,2}(t)\; \mathbb P$-a.s.
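For the reader's convenience, we recall the explicit form of the Doléans-Dade exponential used above: for the jump-diffusion martingale $M$, whose continuous martingale part has quadratic variation $\int_0^{\cdot}|\beta(s)|^{2}ds$, one has
\begin{eqnarray*}
\mathcal{E}_t(M)=\exp\left(M(t)-\frac{1}{2}\int_0^t|\beta(s)|^{2}ds\right)\prod_{0<s\leq t}\bigl(1+\Delta M(s)\bigr)e^{-\Delta M(s)},
\end{eqnarray*}
which is non-negative as soon as all the jumps satisfy $\Delta M(s)\geq -1$.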
\end{proof}
\subsection{Optimal stopping problem for dynamic risk measures}
Let $T>0$ be a time horizon and let $f$ be a delayed generator satisfying $({\bf A2})$. For each stopping time $\tau$ with values in $[0,T]$ and each $(\mathcal{F}_{t})_{t\geq 0}$-adapted square integrable stochastic process $(\psi(t))_{t\geq 0}$, we consider the risk of $\psi(\tau)$ at time $t$ defined by
\begin{eqnarray*}
\rho^{\psi,\tau}(t)=-X^{\psi,\tau}(t),\; 0\leq t\leq \tau,
\end{eqnarray*}
where $X^{\psi,\tau}$ satisfies the BSDE \eqref{BSDEjump} with driver $f{\bf 1}_{[0,\tau]}$, terminal condition $\psi(\tau)$ and terminal time $\tau$. The functional $\rho: (\psi,\tau)\mapsto \rho^{\psi,\tau}(.)$ then defines a dynamic risk measure induced by the BSDE \eqref{BSDEjump} with driver $f{\bf 1}_{[0,\tau]}$. Let us now deal with an optimal stopping problem related to the above risk measure. Contrary to the case without delay, there is a real difficulty in setting up the problem for BSDEs with delayed generator. Indeed, since the comparison principle for delayed BSDEs fails in a neighborhood of $0$, we are no longer able to construct the supremum of this risk on the whole of $[0,T]$. To work around this difficulty, we need to construct a stochastic interval on which we can derive a comparison theorem. For a stopping time $\delta$, let us also consider $(X^{\psi,\delta},Z^{\psi,\delta})$ the solution of BSDE \eqref{BSDEjump} with driver $f{\bf 1}_{[0,\delta]}$, terminal condition $\psi(\delta)$ and terminal time $\delta$. We consider the following stopping times
\begin{eqnarray*}
\sigma_n=\inf(A_n)\wedge T,
\end{eqnarray*}
where
\begin{eqnarray*}
A_n=\left\{
\begin{array}{ll}
t\geq 0, & \displaystyle \inf_{\tau,\delta}(|X^{\psi,\tau}(t)-X^{\psi,\delta}(t)|\vee|Z^{\psi,\tau}(t)-Z^{\psi,\delta}(t)|\vee \int_{E}|U^{\psi,\tau}(t,z)-U^{\psi,\delta}(t,z)|m(dz)) \leq \frac{1}{n}\\ \mbox{or}\\
& \displaystyle\inf_{\tau,\delta}(|X^{\psi,\tau}(t)-X^{\psi, \delta}(t)|\vee |Z^{\psi,\tau}(t)-Z^{\psi,\delta}(t)|\vee \int_{E}|U^{\psi,\tau}(t,z)-U^{\psi,\delta}(t,z)|m(dz))\geq n
\end{array}
\right\}
\end{eqnarray*}
and set
\begin{eqnarray}
\overline{\sigma}=\sup_{n\geq 1}\sigma_n.\label{ST}
\end{eqnarray}
For a stopping time $\sigma\leq \overline{\sigma}$, let us consider the $\mathcal{F}_{\sigma}$-measurable random variable $v(\sigma)$ (unique up to almost sure equality) defined by
\begin{eqnarray}\label{inf}
v(\sigma)=ess\inf_{\sigma\leq \tau\leq T}\rho^{\psi,\tau}(\sigma).
\end{eqnarray}
Since $\rho^{\psi,\tau}=-X^{\psi,\tau}$, we get
\begin{eqnarray}\label{sup}
v(\sigma)=ess\inf_{\sigma\leq \tau\leq T}(-X^{\psi,\tau}(\sigma))=-ess\sup_{\sigma\leq \tau\leq T}X^{\psi,\tau}(\sigma),
\end{eqnarray}
for each stopping time $\sigma\in [0,\overline{\sigma}]$, which characterizes the minimal risk measure. We then provide an existence result for a $\sigma$-optimal stopping time $\tau^{*}\in [\sigma, T]$, i.e.\ a stopping time satisfying $v(\sigma)=\rho^{\psi,\tau^{*}}(\sigma)$ a.s.
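As a simple illustration, take the trivial delayed generator $f\equiv 0$. In that case $X^{\psi,\tau}(t)=\mathbb E\left(\psi(\tau)\,|\,\mathcal{F}_{t}\right)$, so that
\begin{eqnarray*}
\rho^{\psi,\tau}(\sigma)=-\mathbb E\left(\psi(\tau)\,|\,\mathcal{F}_{\sigma}\right)
\quad\mbox{and}\quad
v(\sigma)=-ess\sup_{\sigma\leq \tau\leq T}\mathbb E\left(\psi(\tau)\,|\,\mathcal{F}_{\sigma}\right),
\end{eqnarray*}
and \eqref{sup} reduces to the classical optimal stopping (Snell envelope) problem.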
In order to characterize the minimal risk measure by means of reflected BSDEs with jumps and delayed generators, let us first introduce the notion of a solution for this type of equation.
\begin{definition}
The quadruple of processes $(Y(t),Z(t),U(t,z),K(t))_{0 \leq t \leq T,z \in E}$ is said to be a solution of the reflected delayed BSDE with jumps associated with the delayed generator $f$, stopping time $\tau$, terminal value $\xi$ and obstacle process $(S(t))_{t\geq 0}$, if it satisfies the following conditions.
\begin{enumerate}
\item [(i)] $(Y,Z,U,K)\in \mathcal{S}^{2}(\mathbb R)\times\mathcal{H}^{2}(\mathbb R)\times\mathcal{H}_{m}^{2}(\mathbb R)\times\mathcal{K}^{2}(\mathbb R)$.
\item [(ii)]
\begin{eqnarray} Y(t) &=& \xi + \int_{t}^{\tau}f(s,Y_{s},Z_{s},U_{s}(.))ds + K(\tau)- K(t) -\int_{t}^{\tau}Z(s)dW(s)\nonumber \\ && - \int_{t}^{\tau}\int_{E}U(s,z)\tilde{N}(ds,dz), \;\; 0 \leq t \leq \tau
\label{Eq1}
\end{eqnarray}
\item [(iii)] $Y$ dominates $S$, i.e. $Y(t) \geq S(t),\;\; 0 \leq t \leq \tau$
\item [(iv)] the Skorohod condition holds:
$\displaystyle \int_{0}^{\tau}(Y(t^-) - S(t^-))dK(t) = 0$ a.s.
\end{enumerate}
\end{definition}
In our definition, the jump times of the process $Y$ do not come only from the jumps of the Poisson random measure (inaccessible jumps) but also from the jumps of the obstacle process $S$ (predictable jumps).
\begin{remark}
Let us point out that condition $(iv)$ is equivalent to the following: if $K=K^c+K^d$, where $K^{c}$ and $K^{d}$ denote respectively the continuous and discontinuous parts of $K$, then $\displaystyle \int_{0}^{\tau}(Y(t)-S(t))dK^c(t)= 0$ a.s.\ and, for every predictable stopping time $\sigma\in[0,T]$, $\displaystyle \Delta Y(\sigma)=Y(\sigma)-Y(\sigma^-)=-(S(\sigma^-)-Y(\sigma))^{+}{\bf 1}_{[Y(\sigma^-)=S(\sigma^{-})]}$. On the other hand, since the jump times of the Poisson process are inaccessible, for every predictable stopping time $\sigma\in[0,T]$, \newline $\Delta Y(\sigma)=-\Delta K(\sigma)=-(S(\sigma^-)-Y(\sigma))^{+}{\bf 1}_{[Y(\sigma^-)=S(\sigma^{-})]}$.
\end{remark}
The following theorem will be stated in the special context where $\xi=\psi(\tau)$ and $S=\psi$, in order to establish a link between the risk measure associated with the BSDE $(\tau, \psi (\tau), f)$ and the solution of the reflected BSDE associated with $(\tau, \psi (\tau), f,\psi)$.
\begin{theorem}\label{Theo3.1}
Let $\tau$ be a stopping time with values in $[0,T]$, let $\{\psi(t), \, 0\leq t\leq T\}$ be an rcll process in $\mathcal{S}^{2}(\mathbb R)$ and let $f$ be a delayed generator satisfying Assumptions $({\bf A3})$-$({\bf A4})$. Let $(Y,Z,U,K)$ be the solution of the reflected BSDE associated with $(\tau,\psi(\tau), f, \psi)$.
\begin{enumerate}
\item [(i)] For each stopping time $\sigma\leq \overline{\sigma}$, we have
\begin{eqnarray}\label{mini}
v(\sigma)=-Y(\sigma) = -ess \sup_{\tau \in [\sigma,T]}X^{\psi,\tau}(\sigma),
\end{eqnarray}
where $v(\sigma)$ is defined by \eqref{inf}.
\item [(ii)] For each stopping time $\sigma$ with values in $[0,\overline{\sigma}]$ and each $\varepsilon>0$, let $D^{\varepsilon}_{\sigma}$ be the stopping time defined by
\begin{eqnarray}
D^{\varepsilon}_{\sigma}=\inf\left\{t\in[\sigma,T],\; Y(t)\leq \psi(t)+\varepsilon\right\}.\label{TA}
\end{eqnarray}
We have
\begin{eqnarray*}
Y(\sigma)\leq X^{\psi,D^{\varepsilon}_{\sigma}}(\sigma)+C\varepsilon\;\; \mbox{a.s.},
\end{eqnarray*}
\end{enumerate}
where $C$ is a constant which only depends on $T$ and the Lipschitz constant $K$. In other words, $D^{\varepsilon}_{\sigma}$ is a $(C\varepsilon)$-optimal stopping time for \eqref{mini}.
\end{theorem}
\begin{remark}
Note that Property $(ii)$ implies that for all stopping times $\sigma$ and $\tau$ with values in $[0,\overline{\sigma}]$ and $[0,T]$ respectively, such that $\sigma \leq \tau\leq D^{\varepsilon}_{\sigma}$, we have $Y(\sigma)=\mathcal{E}^{f}_{\sigma,\tau}(Y(\tau))$ a.s. In other words, the process $(Y(t),\; \sigma \leq t \leq D^{\varepsilon}_{\sigma})$ is an $\mathcal{E}^{f}$-martingale.
\end{remark}
\begin{proof}[Proof of Theorem \ref{Theo3.1}]
Let us consider two stopping times $\sigma$ and $\tau$ with values in $[0,T]$ such that $\sigma\leq \tau$, and let $(Y,Z,U,K)$ be the solution of the reflected BSDE associated with $(\tau,\psi(\tau), f, \psi)$. We have
\begin{eqnarray*} Y(\sigma)& = & \psi(\tau) + \int_{\sigma}^{\tau}f(s,Y_{s},Z_{s},U_{s}(.))ds + K(\tau)- K(\sigma) - \int_{\sigma}^{\tau}Z(s)dW(s)\\ &&-\int_{\sigma}^{\tau}\int_{E}U(s,z)\tilde{N}(ds,dz)
\end{eqnarray*}
By the definition of a solution of the reflected BSDE, the process $K$ is non-decreasing; hence $K(\tau)-K(\sigma)\geq 0$. Therefore,
\begin{eqnarray}\label{compari}
Y(\sigma)& \geq & \psi(\tau) + \int_{\sigma}^{\tau}f(s,Y_{s},Z_{s},U_{s}(.))ds - \int_{\sigma}^{\tau}Z(s)dW(s) - \int_{\sigma}^{\tau}\int_{E}U(s,z)\tilde{N}(ds,dz).
\end{eqnarray}
Let $(\bar{Y},\bar{Z},\bar{U})$ satisfy equation
\begin{eqnarray}
\bar{Y}(\sigma)&=&\psi(\tau) + \int_{\sigma}^{\tau}f(s,\bar{Y}_{s},\bar{Z}_{s},\bar{U}_{s}(.))ds - \int_{\sigma}^{\tau}\bar{Z}(s)dW(s) - \int_{\sigma}^{\tau}\int_{E}\bar{U}(s,z)\tilde{N}(ds,dz).
\end{eqnarray}
It follows from \eqref{compari} that $Y(\sigma)\geq\bar{Y}(\sigma)$. On the other hand, thanks to the uniqueness of the solution of BSDE \eqref{BSDEjump}, we obtain $\bar{Y}=X^{\psi,\tau}$, which implies $Y(\sigma)\geq X^{\psi,\tau}(\sigma)$ for all $\tau\in[\sigma, T]$. Finally we get
\begin{eqnarray}\label{compari1}
Y(\sigma)&\geq & ess\sup_{\tau\in[\sigma,T]}X^{\psi,\tau}(\sigma).
\end{eqnarray}
Let us now show the reverse inequality. In view of its definition, $D^{\varepsilon}_{\sigma}$ belongs to $[\sigma,T]$ and, for almost all $\omega\in \Omega$ and each $t\in[\sigma(\omega),D_{\sigma}(\omega))$, we have $Y(t)>\psi(t)$ a.s. Therefore, in view of the Skorohod condition, the function $t\mapsto K(t)$ is almost surely constant on $[\sigma(\omega),D_{\sigma}(\omega)]$, so that $K(D_{\sigma})-K(\sigma)=0$. This implies that
\begin{eqnarray*}
Y(\sigma)& = & \psi(D_{\sigma}) + \int_{\sigma}^{D_{\sigma}}f(s,Y_{s},Z_{s},U_{s}(.))ds - \int_{\sigma}^{D_{\sigma}}Z(s)dW(s)-\int_{\sigma}^{D_{\sigma}}\int_{E}U(s,z)\tilde{N}(ds,dz).
\end{eqnarray*}
Using again the comparison principle, we derive that $Y(\sigma)=X^{\psi,D_{\sigma}}(\sigma)$, which leads to
\begin{eqnarray}\label{compari2}
Y(\sigma)&\leq & ess \sup_{\tau \in [\sigma, T]}X^{\psi,\tau}(\sigma)
\end{eqnarray}
Combining \eqref{compari1} and \eqref{compari2}, we obtain $(i)$. We now prove $(ii)$. According to \eqref{TA} and the comparison theorem for BSDEs with delayed generator, we get that for all stopping times $\sigma\leq \overline{\sigma}$,
\begin{eqnarray}
Y(\sigma)=X^{Y,D_{\sigma}^{\varepsilon}}(\sigma)\leq X^{\psi+\varepsilon,D^{\varepsilon}_{\sigma}}(\sigma)\;\;\;\;\;\mbox{a.s.}\label{ZA}
\end{eqnarray}
On the other hand, using some appropriate estimate on BSDE with delayed generator, we derive
\begin{eqnarray*}
|X^{Y,D_{\sigma}^{\varepsilon}}(\sigma)-X^{\psi+\varepsilon,D^{\varepsilon}_{\sigma}}(\sigma)|^2\leq e^{\beta(T-\sigma)}\varepsilon^2, \;\;\;\;\;\mbox{a.s.},
\end{eqnarray*}
where $\beta$ is a constant depending only on the time horizon $T$ and the Lipschitz constant $K$. Finally, in view of \eqref{ZA}, we get the result.
\end{proof}
To end this subsection, let us now derive an optimality criterion for the optimal stopping time problem, based on the strict comparison theorem. Before that, let us specify what we mean by an optimal stopping time.
\begin{definition}
A stopping time $\bar{\tau}\in [\sigma,T]$ is an $\sigma$-optimal stopping time if
\begin{eqnarray*}
Y(\sigma) = ess \sup_{\tau \in [\sigma,T]}X^{\psi,\tau}(\sigma)= X^{\psi, \bar{\tau}}(\sigma).
\end{eqnarray*}
In other words, the process $(Y(t))_{\sigma\leq t \leq \bar{\tau}}$ is the solution of the non-reflected BSDE associated with terminal time $\bar{\tau}$ and terminal value $\psi(\bar{\tau})$.
\end{definition}
\begin{theorem}
Let $(\psi(t))_{t\geq 0}$ be an rcll process, l.u.s.c.\ along stopping times, belonging to $\mathcal{S}^{2}(\mathbb R)$. We assume that $({\bf A1})$-$({\bf A4})$ hold and suppose that $(Y,Z,U(.),K)$ is a solution of the reflected BSDE with jumps and delayed generator \eqref{Eq1}. Define, for every stopping time $\sigma\leq \overline{\sigma}$ (with $\overline{\sigma}$ defined by \eqref{ST}), the following stopping times:
\begin{eqnarray}\label{ST1}
\overline{\tau}_{\sigma} = \lim_{\varepsilon \downarrow 0}\uparrow \tau_{\sigma}^{\varepsilon},
\end{eqnarray}
where $\tau_{\sigma}^{\varepsilon}=\inf\{\sigma\leq t\leq T,\;\; Y(t)\leq \psi(t)+\varepsilon\}$,
\begin{eqnarray}\label{ST2}
\tau_{\sigma}^{*}=\inf\{\sigma \leq t \leq T,\; Y(t) = \psi(t) \},
\end{eqnarray}
and
\begin{eqnarray}\label{ST3}
\widetilde{\tau}_{\sigma}=\inf \left\{\sigma \leq t \leq T,\; K(t)- K(\sigma) > 0 \right\}.
\end{eqnarray}
Then $\overline{\tau}_{\sigma},\, \tau_{\sigma}^{*}$ and $\widetilde{\tau}_{\sigma}$ are $\sigma$-optimal stopping times for the optimal stopping problem \eqref{inf} and satisfy
\begin{itemize}
\item[(i)] $\overline{\tau}_{\sigma}\leq \tau^{*}_{\sigma}$ and we have $Y(s)=X^{\psi,\tau_{\sigma}^{*}}(s)$ for all $\sigma\leq s\leq \tau_{\sigma}^{*}$\;\; a.s.
\item [(ii)] $\overline{\tau}_{\sigma}$ is the minimal $\sigma$-optimal stopping time,
\item [(iii)] $\widetilde{\tau}_{\sigma}$ is the maximal $\sigma$-optimal stopping time.
\item [(iv)] Moreover, if in $({\bf A3})(iv)$ we have $\theta^{x_t,z_t,u_t(.),u'_t(.)}>-1$, then $\tau^{*}_{\sigma}=\overline{\tau}_{\sigma}$.
\end{itemize}
\end{theorem}
Since the proof follows the same arguments as the proof of Theorem 3.7 in \cite{10}, and in order to avoid unnecessarily lengthening the paper, we refer the reader to that proof.
\section{Reflected BSDEs with jumps and time-delayed generator}
This section is devoted to the study, in a general framework, of reflected BSDEs with jumps, with a right-continuous left-limited (rcll) obstacle process and a delayed generator. More precisely, for a fixed $T>0$ and a stopping time $\tau$ with values in $[0,T]$, we consider
\begin{eqnarray}
Y(t)&=&\xi+ \int_{t}^{\tau}f(s,Y_{s},Z_{s},U_{s}(.))ds + K(\tau)- K(t) -\int_{t}^{\tau}Z(s)dW(s)\nonumber \\ && - \int_{t}^{\tau}\int_{E}U(s,z)\tilde{N}(ds,dz), \;\; 0 \leq t \leq \tau.
\label{RBSDE}
\end{eqnarray}
We derive an existence and uniqueness result under the following additional hypothesis related to the obstacle process.
\begin{description}
\item [$({\bf A5})$] The obstacle process $\{S(t),\;\; 0 \leq t \leq T \}$ is an rcll progressively measurable $\mathbb R$-valued process satisfying
\begin{itemize}
\item [$(i)$] $\mathbb E \left( \sup_{0 \leq t \leq T}(S^{+}(t))^{2}\right) < +\infty$,
\item [$(ii)$] $\xi\geq S(\tau)$ a.s.
\end{itemize}
\end{description}
To begin with, let us first assume that $f$ does not depend on $(y_t, z_t, u_t)$, that is, it is a given $(\mathcal{F}_t)_{0\leq t\leq \tau}$-progressively measurable process satisfying $\mathbb E\left(\int^{\tau}_{0}f(t)dt\right)<+\infty$. A solution to the backward reflection problem (BRP, in short) is a quadruple $(Y , Z , U, K )$ which satisfies $(i), (iii), (iv)$ of Definition 2.4 and
\begin{itemize}
\item [(ii')]
\begin{eqnarray*}
Y(t)&=&\xi+ \int_{t}^{\tau}f(s)ds + K(\tau)- K(t) -\int_{t}^{\tau}Z(s)dW(s)-\int_{t}^{\tau}\int_{E}U(s,z)\tilde{N}(ds,dz), \;\; 0 \leq t \leq \tau.
\end{eqnarray*}
\end{itemize}
The following proposition is from Hamadène and Ouknine \cite{E5} (Theorem 1.2.a and 1.4.a) or Essaky \cite{Essaky}.
\begin{proposition}
The reflected BSDE with jumps associated with $(\xi, f, S)$ has a unique solution $(Y,Z,U,K)$.
\end{proposition}
\begin{theorem} \label{Theo 2.2}
Assume that $({\bf A1})$-$({\bf A3})$ and $({\bf A5})$ hold. For a sufficiently small time horizon $T$ or a sufficiently small Lipschitz constant $K$ of the generator $f$, i.e.
\begin{eqnarray}
KTe\max\{1,T\}< 1, \label{C1}
\end{eqnarray}
the reflected BSDE with jumps and delayed generator \eqref{Eq1} admits a unique solution $(Y,Z,U,K)\in\mathcal{S}^{2}(\mathbb R)\times\mathcal{H}^{2}(\mathbb R)\times\mathcal{H}_{m}^{2}(\mathbb R)\times \mathcal{K}^{2}(\mathbb R)$.
\end{theorem}
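To fix ideas, note that condition \eqref{C1} is a joint smallness requirement on $K$ and $T$: for instance,
\begin{eqnarray*}
T=1:\;\; Ke<1 \;\Longleftrightarrow\; K<e^{-1}\approx 0.368,
\qquad\qquad
T=2:\;\; 4Ke<1 \;\Longleftrightarrow\; K<\frac{1}{4e}\approx 0.092,
\end{eqnarray*}
so the admissible Lipschitz constant shrinks as the time horizon grows.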
\begin{proof}
Let us begin with the uniqueness result. To this end, let $(Y,Z,U,K)$ and $(Y',Z',U',K')$ be two solutions of the RBSDE associated with the data $(\xi,f,S)$ and set $\overline{\theta}=\theta-\theta'$ for $\theta=Y, Z, U, K$. Applying Itô's formula to the discontinuous semimartingale $|\overline{Y}|^2$, we have
\begin{eqnarray}\label{i1}
&&|\overline{Y}(t)|^2+\int_t^{T}|\overline{Z}(s)|^2ds+\int_{t}^{T}\int_{\mathcal{E}}|\bar{U}(s,z)|^2 m(dz)ds\nonumber\\
&=&2\int_t^T\overline{Y}(s)(f(s,Y_s,Z_s,U_s(.))-f(s,Y'_s,Z'_s,U'_s(.)))ds+2\int_t^T\overline{Y}(s)d\overline{K}(s)\nonumber\\
&&-2\int_t^T\overline{Y}(s)\overline{Z}(s)dW(s)-2\int_t^T\int_{\mathcal{E}}\overline{Y}(s^-)\overline{U}(s,z)\tilde{N}(ds,dz).
\end{eqnarray}
In view of Skorohod condition $(iv)$, we get
\begin{eqnarray}\label{i2}
\int_t^T\overline{Y}(s)d\overline{K}(s)&=&\int_t^T(Y(s^-)-S(s^-))dK(s)+\int_t^T(S(s^-)-Y'(s^-))dK(s)\nonumber\\
&&+ \int_t^T(Y'(s^-)-S(s^-))dK'(s)+\int_t^T(S(s^-)-Y(s^-))dK'(s)\nonumber\\
&\leq & 0.
\end{eqnarray}
Next, since the third and fourth terms on the right-hand side of \eqref{i1} are $(\mathcal{F}_t)_{t\geq 0}$-martingales, taking expectations and using \eqref{i2}, we have
\begin{eqnarray}\label{i5}
&&\mathbb E\left(|\overline{Y}(t)|^2+\int_t^{T}|\overline{Z}(s)|^2ds+\int_{t}^{T}\int_{\mathcal{E}}|\bar{U}(s,z)|^2m(dz)ds\right)\nonumber\\
&\leq&2\mathbb E\left(\int_t^T\overline{Y}(s)(f(s,Y_s,Z_s,U_s(.))-f(s,Y'_s,Z'_s,U'_s(.)))ds\right)\nonumber\\
&\leq & \beta\mathbb E\left( \int_t^T|\overline{Y}(s)|^2ds\right)+\frac{1}{\beta}\mathbb E\left(\int_t^T|f(s,Y_s,Z_s,U_s(.))-f(s,Y'_s,Z'_s,U'_s(.)))|^2ds\right)
\end{eqnarray}
According to assumption $({\bf A3})(i)$, a change of variables and Fubini's theorem, we obtain
\begin{eqnarray*}
&& \int_t^T|f(s,Y_s,Z_s,U_s(.))-f(s,Y'_s,Z'_s,U'_s(.))|^2ds \nonumber\\
&\leq & K\int_t^T\left(\int_{-T}^0\left[|\overline{Y}(s+u)|^{2}+|\overline{Z}(s+u)|^{2}+\int_{\mathcal{E}}|\overline{U}(s+u,z)|^2 m(dz)\right]\alpha(du)\right)ds\nonumber\\
&\leq & K\int_{-T}^{T}\left[|\overline{Y}(s)|^{2}+|\overline{Z}(s)|^{2}+\int_{\mathcal{E}}|\overline{U}(s,z)|^2 m(dz)\right]ds.
\end{eqnarray*}
Putting the last inequality into \eqref{i5} yields
\begin{eqnarray}\label{i6}
&&\mathbb E\left(|\overline{Y}(t)|^2+\int_t^{T}|\overline{Z}(s)|^2ds+\int_{t}^{T}\int_{\mathcal{E}}|\bar{U}(s,z)|^2 m(dz)ds\right)\nonumber\\
&\leq & \left(\beta+\frac{K}{\beta}\right)\mathbb E\int_{-T}^T|\overline{Y}(s)|^2ds+\frac{K}{\beta}\mathbb E\int_0^T\left(|\overline{Z}(s)|^{2}+\int_{\mathcal{E}}|\overline{U}(s,z)|^2 m(dz)\right)ds.
\end{eqnarray}
If we choose $\beta$ such that $\frac{K}{\beta}\leq 1$, inequality $\eqref{i6}$ becomes
\begin{eqnarray}\label{i7}
&&\mathbb E\left(|\overline{Y}(t)|^2+\int_0^{T}|\overline{Z}(s)|^2ds+\int_{0}^{T}\int_{\mathcal{E}}|\bar{U}(s,z)|^2 m(dz)ds\right)\nonumber\\
&\leq & C\mathbb E\int_{-T}^T|\overline{Y}(s)|^2ds.
\end{eqnarray}
According to the above estimate, using Gronwall's lemma and the right continuity of the process $\overline{Y}$, we obtain $Y=Y'$. Therefore $(Y,Z,U,K) = (Y',Z',U',K')$, whence the reflected BSDE with jumps and delayed generator \eqref{RBSDE} has at most one solution.
It remains to show existence, which will be obtained via a fixed point argument. To this end, let us consider $\mathcal{D}=\mathcal{S}^{2}(\mathbb R)\times \mathcal{H}^{2}(\mathbb R)\times \mathcal{H}_m^{2}(\mathbb R)$ endowed with the norm $\|(Y,Z,U)\|_{\beta}$ defined by
\begin{eqnarray*}
\|(Y,Z,U)\|_{\beta}=\mathbb E\left(\sup_{0\leq t\leq \tau}e^{\beta t}|Y(t)|^2+\int_0^{\tau}e^{\beta s}\left(|Z(s)|^2+\int_{\mathcal{E}}|U(s,z)|^2m(dz)\right)ds\right).
\end{eqnarray*}
We now consider the mapping $\Phi$ from $\mathcal{D}$ into itself defined by $\Phi((Y,Z,U))=(\tilde{Y},\tilde{Z},\tilde{U})$, which means that there is a process $\tilde{K}$ such that $(\tilde{Y},\tilde{Z},\tilde{U},\tilde{K})$ solves the reflected BSDE with jumps associated with the data $\xi$, $f(t,Y_t,Z_t,U_t(.))$ and $S$. More precisely, $(\tilde{Y},\tilde{Z},\tilde{U},\tilde{K})$ satisfies $(i),\,(iii),\, (iv)$ of Definition 2.4 and
\begin{eqnarray*}
\tilde{Y}(t)=\xi+\int_t^{\tau}f(s,Y_s,Z_s,U_s(.))ds+\tilde{K}(\tau)-\tilde{K}(t)-\int^{\tau}_t\tilde{Z}(s)dW(s)-\int_t^{\tau}\int_{E}\tilde{U}(s,z)\tilde{N}(ds,dz).
\end{eqnarray*}
For another process $(Y',Z',U')$ belonging to $\mathcal{D}$, let us set $\Phi(Y',Z',U')=(\tilde{Y}',\tilde{Z}',\tilde{U}')$. In the sequel, for a generic process $\theta$ we denote $\delta\theta=\theta-\theta'$. Next, applying Itô's formula to $e^{\beta t}|\delta\tilde{Y}(t)|^{2}$ yields
\begin{eqnarray*}
&&e^{\beta t}|\delta\tilde{Y}(t)|^2+ \beta\int_{t}^{T}e^{\beta s}|\delta \tilde{Y}(s)|^2ds+\int_{t}^{T}e^{\beta s}|\delta \tilde{Z}(s)|^2ds\\
&&+ \int_{t}^{T}e^{\beta s}\int_{\mathcal{E}}|\delta \tilde{U}(s,z)|^2m(dz)ds +\sum_{t\leq s\leq T}e^{\beta s}(\Delta_s(\delta \tilde{Y})-\Delta_s(\delta \tilde{Y}'))^2\\
&=& 2\int_{t}^{T}e^{\beta s}\delta\tilde{Y}(s)(f(s,Y_s,Z_s,U_s(.))-f(s,Y'_s,Z'_s,U'_s(.)))ds+2\int_{t}^{T}e^{\beta s}\delta\tilde{Y}(s)d\delta\tilde{K}(s)\\
&&+M(T)-M(t),
\end{eqnarray*}
where $(M(t))_{0\leq t\leq T}$ is a martingale. On the other hand, arguing as in the uniqueness proof and using Young's inequality, we have respectively $\displaystyle \int_{t}^{T}e^{\beta s}\delta\tilde{Y}(s)d\delta\tilde{K}(s)\leq 0$ and
\begin{eqnarray*}
&& 2e^{\beta s}\delta\tilde{Y}(s)(f(s,Y_s,Z_s,U_s(.))-f(s,Y'_s,Z'_s,U'_s(.)))\\
&\leq & \beta e^{\beta s}|\delta\tilde{Y}(s)|^2+\frac{1}{\beta}e^{\beta s}|f(s,Y_s,Z_s,U_s(.))-f(s,Y'_s,Z'_s,U'_s(.))|^2,
\end{eqnarray*}
which allow us to get
\begin{eqnarray}\label{J1}
&&e^{\beta t}|\delta\tilde{Y}(t)|^2+\int_{t}^{T}e^{\beta s}|\delta \tilde{Z}(s)|^2ds+\int_{t}^{T}e^{\beta s}\int_{\mathcal{E}}|\delta \tilde{U}(s,z)|^2m(dz)ds +\sum_{t\leq s\leq T}e^{\beta s}(\Delta_s(\delta \tilde{Y})-\Delta_s(\delta \tilde{Y}'))^2\nonumber\\
&\leq &
\frac{1}{\beta}\int_{0}^{T}e^{\beta s}|f(s,Y_s,Z_s,U_s(.))-f(s,Y'_s,Z'_s,U'_s(.))|^2ds+M(T)-M(t).
\end{eqnarray}
Then, taking the conditional expectation with respect to $\mathcal{F}_t$ on both sides of the previous inequality, we obtain
\begin{eqnarray*}\label{j2}
e^{\beta t}|\delta\tilde{Y}(t)|^2
&\leq & \frac{1}{\beta}\mathbb E\left(\int_{0}^{T}e^{\beta s}|f(s,Y_s,Z_s,U_s(.))-f(s,Y'_s,Z'_s,U'_s(.))|^2 ds|\mathcal{F}_t\right),
\end{eqnarray*}
which together with Doob's inequality yields
\begin{eqnarray}\label{J2}
\mathbb E\left(\sup_{0\leq t\leq T}e^{\beta t}|\delta\tilde{Y}(t)|^2 \right)
&\leq & \frac{1}{\beta}\mathbb E\left(\int_{0}^{T}e^{\beta s}|f(s,Y_s,Z_s,U_s(.))-f(s,Y'_s,Z'_s,U'_s(.))|^2 ds\right).
\end{eqnarray}
Taking expectations on both sides of \eqref{J1} for $t=0$, it follows from \eqref{J2} that
\begin{eqnarray}\label{J3}
&&\mathbb E\left(\sup_{0\leq t\leq T}e^{\beta t}|\delta\tilde{Y}(t)|^2+ \int_{0}^{T}e^{\beta s}|\delta \tilde{Z}(s)|^2ds+\int_{0}^{T}e^{\beta s}\int_{\mathcal{E}}|\delta \tilde{U}(s,z)|^2m(dz)ds\right)\nonumber\\
&\leq & \frac{1}{\beta}\mathbb E\left(\int_{0}^{T}e^{\beta s}|f(s,Y_s,Z_s,U_s(.))-f(s,Y'_s,Z'_s,U'_s(.))|^2 ds\right).
\end{eqnarray}
Let us now estimate the right-hand side of inequality \eqref{J3}. In view of assumption $({\bf A1})$, we have
\begin{eqnarray*}
&&\int_{0}^{T}e^{\beta s}|f(s,Y_s,Z_s,U_s(.))-f(s,Y'_s,Z'_s,U'_s(.))|^2 ds\\
&\leq& K\int_0^T\int_{-T}^{0}e^{\beta s}\left(|\delta Y(s+u)|^2+|\delta Z(s+u)|^2+\int_{\mathcal{E}}|\delta U(s+u,z)|^2m(dz)\right)\alpha(du)ds.
\end{eqnarray*}
Next, since $Z(t) = 0$, $U(t,.) \equiv 0$ and $Y(t)=Y(0)$ for $t<0$, using Fubini's theorem and a change of variables we get
\begin{eqnarray}\label{J4}
&&\int_{0}^{T}e^{\beta s}|f(s,Y_s,Z_s,U_s(.))-f(s,Y'_s,Z'_s,U'_s(.))|^2 ds\nonumber\\
&\leq & K\max(1,T)e^{\beta T}\left(\sup_{0\leq t\leq T}e^{\beta t}|\delta Y(t)|^2+\int_0^T e^{\beta s}\left(|\delta Z(s)|^2+\int_{\mathcal{E}}|\delta U(s,z)|^2m(dz)\right)ds\right).\nonumber\\
\end{eqnarray}
Then, taking $\beta=\frac{1}{T}$, it follows from \eqref{J3} and \eqref{J4} that
\begin{eqnarray*}
&&\mathbb E\left[\sup \limits_{0 \leq t \leq T} e^{\beta t}\vert \delta\tilde{Y}(t)\vert^{2}+\int_{0}^{T}e^{\beta t}\vert \delta\tilde{Z}(t)\vert^{2}dt+\int_{0}^{T}\int_{\mathcal{E}} e^{\beta t}\vert \delta\tilde{U}(t, z)\vert^{2}m(dz)dt \right]\\
&\leq &KTe\max(1,T)\mathbb E\left(\sup_{0\leq t\leq T}e^{\beta t}|\delta Y(t)|^2+\int_0^T e^{\beta s}\left(|\delta Z(s)|^2+\int_{\mathcal{E}}|\delta U(s,z)|^2m(dz)\right)ds\right),
\end{eqnarray*}
which means that
\begin{eqnarray*}
\|\Phi(Y,Z,U)-\Phi(Y',Z',U')\|_{\beta}\leq KTe\max(1,T)\|(\delta Y,\delta Z,\delta U)\|_{\beta}.
\end{eqnarray*}
For a sufficiently small $T$ or $K$, i.e.\ $KTe\max(1,T)<1$, the mapping $\Phi$ is a contraction. Consequently $\Phi$ admits a unique fixed point $(Y,Z,U)$, i.e.\ $(Y,Z,U)=\Phi(Y,Z,U)$, and there is a nondecreasing process $K$ such that $(Y,Z,U,K)$ is a solution of the RBSDE \eqref{Eq1}.
\end{proof}
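\begin{remark}
The contraction property established in the proof also provides an approximation scheme. Under condition \eqref{C1}, the Picard-type iteration
\begin{eqnarray*}
(Y^{0},Z^{0},U^{0})=(0,0,0),\qquad (Y^{n+1},Z^{n+1},U^{n+1})=\Phi(Y^{n},Z^{n},U^{n}),\;\; n\geq 0,
\end{eqnarray*}
converges in $\mathcal{D}$ to the solution $(Y,Z,U)$, and the Banach fixed point theorem gives the geometric rate
\begin{eqnarray*}
\|(Y^{n}-Y,Z^{n}-Z,U^{n}-U)\|_{\beta}\leq \left(KTe\max(1,T)\right)^{n}\|(Y^{0}-Y,Z^{0}-Z,U^{0}-U)\|_{\beta}.
\end{eqnarray*}
\end{remark}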
\section{Comparison principle for reflected BSDEs with jumps and delayed generator and optimization problem}
\subsection{Comparison principle for reflected BSDEs with jumps and delayed generator}
In this subsection we give a comparison principle for reflected BSDEs with jumps and delayed generator. The proof is simple and based on the characterization of solutions of reflected BSDEs with jumps and delayed generator established in Theorem \ref{Theo 2.2} and on the comparison theorem for non-reflected BSDEs with jumps and delayed generator. Therefore, unlike the case without delay, the result is valid only on a random interval $[0, \overline{\sigma}]$, with $\overline{\sigma}$ defined by \eqref{ST}. Let $(Y^i,Z^i,U^i,K^i)$ be the unique solution of the reflected BSDE with jumps and delayed generator associated with $(\tau,\psi^i,f^i)$, $i=1,2$.
\begin{theorem}\label{CP}
Let $\psi^1,\; \psi^2$ and $f^1$, $f^2$ be respectively two rcll obstacle processes and two Lipschitz delayed drivers satisfying ({\bf A3})-({\bf A5}). Suppose
\begin{itemize}
\item [(i)] $\psi^1(t)\leq \psi^2(t)$, a.s. for all $t\in[0,\tau]$
\item [(ii)] for all $t\in [0,\tau],\; f^1(t,Y^1_t,Z^1_t,U^1_t)\leq f^2(t,Y^1_t,Z^1_t,U^1_t)$, a.s. or $f^1(t,Y^2_t,Z^2_t,U^2_t)\leq f^2(t,Y^2_t,Z^2_t,U^2_t)$.
\end{itemize}
Then there exists a stopping time $\overline{\sigma}$ (defined by \eqref{ST}) such that
\begin{eqnarray*}
Y^1(t)\leq Y^2(t), \; a.s.\;\; t\in [0,\overline{\sigma}].
\end{eqnarray*}
\end{theorem}
\begin{proof}
Let us denote by $X^{\psi^i,\tau}$ the unique solution of the BSDE associated with $(\tau,\psi^i, f^i)$ for $i=1, 2$. In view of Theorem 2.3, we have
\begin{eqnarray*}
X^{\psi^1,\tau}(t)\leq X^{\psi^2,\tau}(t),\;\;\; \mbox{a.s}.,
\end{eqnarray*}
for a fixed $t\in [0,\overline{\sigma}\wedge \tau]$.
Next, taking the essential supremum over all stopping times $\tau$ with values in $[t,T]$, it follows from Theorem \ref{Theo3.1} that
\begin{eqnarray*}
Y^1(t)=ess\sup_{\tau\in [t,T]} X^{\psi^1,\tau}(t)\leq ess\sup_{\tau\in [t,T]}X^{\psi^2,\tau}(t)=Y^2(t),\;\;\;\; \mbox{a.s}.
\end{eqnarray*}
\end{proof}
\subsection{Optimization problem for reflected BSDEs with jump and delayed generator}
This subsection is devoted to an optimization problem treated with the help of the above comparison principle. For $\mathcal{A}$ a subset of $\mathbb R$, let $\{f^{\delta},\;\; \delta \in \mathcal{A}\}$ be a family of $\mathbb R$-valued functions defined on $\Omega\times [0,T]\times L^{\infty}_{T}(\mathbb R)\times L^{2}_{T}(\mathbb R)\times L^{2}_{T,m}(\mathbb R)$. We consider $(Y^{\delta},Z^{\delta},U^{\delta}(.))$ a family of solutions of reflected BSDEs associated with $(\psi,f^{\delta})$. For an appropriate stopping time $\sigma$ with values in $[0,T]$, we solve the following optimisation problem:
\begin{eqnarray}
v(\sigma)=ess\inf \limits_{\delta \in \mathcal{A}}Y^{\delta}(\sigma). \label{OP}
\end{eqnarray}
For all $(t,y,z,k)\in[0,T]\times L^{\infty}_{T}(\mathbb R)\times L^{2}_{T}(\mathbb R)\times L^{2}_{T,m}(\mathbb R)$, let us set
\begin{eqnarray*}
f(t,y,z,k)=ess\inf_{\delta\in \mathcal{A}}f^{\delta}(t,y,z,k),\;\;\;\;\;\mathbb P\,\mbox{-a.s}.
\end{eqnarray*}
Optimisation problem \eqref{OP} will be treated in two settings.
First, we suppose that $f$ is one of the generators indexed by $\delta\in\mathcal{A}$, i.e.\ there exists $\overline{\delta}\in\mathcal{A}$ such that for all $(t,y,z,k)\in[0,T]\times L^{\infty}_{T}(\mathbb R)\times L^{2}_{T}(\mathbb R)\times L^{2}_{T,m}(\mathbb R)$,
\begin{eqnarray}\label{inff}
f(t,y,z,k)=f^{\overline{\delta}}(t,y,z,k)\;\; \mathbb P\;\mbox{- a.s}.
\end{eqnarray}
Next, we suppose that $f$ does not belong to the above family.
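As an elementary illustration, with generators chosen purely for this purpose, consider the constant generators $f^{\delta}\equiv\delta$. Then
\begin{eqnarray*}
\mathcal{A}=\{1,2\}:\;\; f=f^{1}\;\;\mbox{(first setting, with } \overline{\delta}=1\mbox{)},
\qquad\qquad
\mathcal{A}=(1,2):\;\; f\equiv 1\notin\{f^{\delta},\;\delta\in\mathcal{A}\}\;\;\mbox{(second setting)}.
\end{eqnarray*}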
We derive the following two results.
\begin{proposition}\label{Popt}
Assume $({\bf A1})$-$({\bf A4})$ and \eqref{inff}.
Then, there exists a stopping time $\widehat{\sigma}$ defined by
\begin{eqnarray}\label{Compopti}
\widehat{\sigma}=ess\inf_{\delta\in \mathcal{A}}\sigma_{\delta},
\end{eqnarray}
where $\sigma_{\delta}$ is defined as in \eqref{ST}, such that for $\sigma\leq \widehat{\sigma}$
\begin{eqnarray*}
Y(\sigma) = ess\inf_{\delta \in \mathcal{A}}Y^{\delta}(\sigma)\;\;\;\; a.s,
\end{eqnarray*}
where $(Y,Z,U(.))$ is the unique solution of the reflected BSDE associated to $(\psi,f)$.
\end{proposition}
\begin{proof}
For each $\delta \in \mathcal{A}$ and each $\tau\in [\widehat{\sigma},T]$, the comparison theorem for delayed BSDEs with jumps yields that, for a stopping time $\sigma\leq\widehat{\sigma}$, $X^{\psi,\tau}(\sigma) \leq X^{\delta,\psi,\tau}(\sigma)$. Taking the essential supremum over $\tau$ on both sides of the above inequality, we get
\begin{eqnarray}
ess \sup_{\sigma\leq \tau\leq T}X^{\psi,\tau}(\sigma)\leq ess \sup_{\sigma\leq \tau \leq T}X^{\delta,\psi,\tau}(\sigma).\label{Sup}
\end{eqnarray}
According to the representation \eqref{mini}, it follows from \eqref{Sup} that $Y(\sigma) \leq Y^{\delta}(\sigma)$ for each $\delta \in \mathcal{A}$ and each stopping time $\sigma\in [0,\widehat{\sigma}]$. Taking the essential infimum over $\delta$ on both sides of the previous inequality, we obtain, for all stopping times $\sigma\in [0,\widehat{\sigma}]$,
\begin{eqnarray}
Y(\sigma)\leq ess \inf_{\delta\in \mathcal{A}} Y^{\delta}(\sigma).\label{IZ1}
\end{eqnarray}
On the other hand, since there exists $\overline{\delta}\in\mathcal{A}$ such that $f=f^{\overline{\delta}}$, in view of the uniqueness of the solution of the reflected BSDE associated with $(f,\psi,\tau)$, we obtain $Y=Y^{\overline{\delta}}$. Therefore
\begin{eqnarray*}
Y(\sigma) \geq ess \inf_{\delta \in \mathcal{A}} Y^{\delta}(\sigma)
\end{eqnarray*}
which together with \eqref{IZ1} yields for all stopping time $\sigma\leq \widehat{\sigma}$,
\begin{eqnarray*}
Y(\sigma) = ess \inf_{\delta \in \mathcal{A}} Y^{\delta}(\sigma).
\end{eqnarray*}
\end{proof}
\begin{proposition}\label{Poptbis}
Assume $({\bf A1})$-$({\bf A4})$ and suppose that $f\notin\{f^{\delta}, \delta\in \mathcal{A}\}$.
Then, there exists a stopping time $\widehat{\sigma}$ defined as in Proposition \ref{Popt} such that for $\sigma\leq \widehat{\sigma}$
\begin{eqnarray*}
Y(\sigma) = ess\inf_{\delta \in \mathcal{A}}Y^{\delta}(\sigma)\;\;\;\; a.s,
\end{eqnarray*}
where $(Y,Z,U(.))$ is the unique solution of the reflected BSDE associated to $(\psi,f)$.
\end{proposition}
\begin{proof}
With the same argument as in the previous proof, we obtain, for all $\sigma\leq \widehat{\sigma}$,
\begin{eqnarray}
Y(\sigma)\leq ess \inf_{\delta\in \mathcal{A}} Y^{\delta}(\sigma) \;\;\; a.s.\label{IZ2}
\end{eqnarray}
Let us derive the reverse inequality. According to the definition of $f$, we have the following: $\mathbb P$-a.s., for all $\eta>0$, there exists $\delta^{\eta}$ such that $f^{\delta^{\eta}}-\eta\leq f\leq f^{\delta^{\eta}}$. Moreover, applying Lemma 2.1 of \cite{D1} to BSDEs with jumps and delayed generator, we obtain that for all stopping times $\sigma\leq \widehat{\sigma}$, there exists a constant $C$, depending only on the Lipschitz constant and the time horizon, such that
\begin{eqnarray*}
X^{\psi, \tau}(\sigma) + C\eta \geq X^{\delta^{\eta},\psi,\tau}(\sigma), \;\;\; a.s.
\end{eqnarray*}
Using arguments similar to those in the proof above, we obtain
\begin{eqnarray*}
Y(\sigma) + C\eta \geq ess \inf_{\delta \in \mathcal{A}}Y^{\delta}(\sigma).
\end{eqnarray*}
Since the inequality holds for each $\eta> 0$, we have
\begin{eqnarray*}
Y(\sigma) \geq ess \inf_{\delta \in \mathcal{A}}Y^{\delta}(\sigma),
\end{eqnarray*}
which together with \eqref{IZ2} ends the proof.
\end{proof}
\begin{remark}
According to Propositions \ref{Popt} and \ref{Poptbis}, the value function of the optimisation problem \eqref{OP} associated with the family $\{f^{\delta},\; \delta\in \mathcal{A}\}$ is $Y$, the solution of the reflected BSDE with jumps and delayed generator $f$ defined by $f=ess\inf_{\delta\in\mathcal{A}}f^{\delta}$.
\end{remark}
\section{Robust optimal stopping problem for delayed risk measure}
In this section, we consider ambiguous risk measures modelled by BSDEs with jumps for which there is uncertainty about the associated delayed generator. More precisely, we consider $(\rho^{\delta})_{\delta \in \mathcal{A}}$ the family of risk measures of the position $\psi(\tau)$ induced by the BSDEs with jumps associated with the delayed generators $f^{\delta}$. Roughly speaking, we have for each $t\in[0,T]$,
\begin{eqnarray*}
\rho^{\delta,\psi,\tau}(t) = -X^{\delta,\psi, \tau }(t),
\end{eqnarray*}
where $X^{\delta,\psi,\tau}$ is the solution of the BSDE associated with the generator $f^{\delta}$, terminal condition $\psi(\tau)$ and terminal time $\tau$. We are in the context where a very pessimistic economic agent considers the worst case. For this reason, we require a risk measure which is the supremum over $\delta$ of the family of risk measures $(\rho^{\delta,\psi,\tau}(\sigma))_{\delta\in\mathcal{A}}$, defined by
\begin{eqnarray*}
\rho^{\psi,\tau}(\sigma)=ess\sup_{\delta\in \mathcal{A}}\rho^{\delta,\psi,\tau}(\sigma) = ess \sup_{\delta \in \mathcal{A}}(-X^{\delta,\psi,\tau}(\sigma))= - ess \inf_{\delta \in \mathcal{A}}X^{\delta,\psi,\tau}(\sigma) .
\end{eqnarray*}
Our aim in this section is to find, at each stopping time $\sigma\in [0,\widehat{\sigma}]$ (where $\widehat{\sigma}$ is a stopping time defined so that the comparison principle for BSDEs with jumps and delayed generator applies), a stopping time $\bar{\tau}\in [\sigma,T]$ which minimizes $\rho^{\psi,\tau}(\sigma)$, the risk measure of our pessimistic agent. To solve this problem, let us consider the value function $u$ defined by:
\begin{eqnarray}\label{Ambi}
u(\sigma)= ess \inf_{\tau \in [\sigma,T]} ess \sup_{\delta \in \mathcal{A}}\rho^{\delta,\psi,\tau}(\sigma).
\end{eqnarray}
On the other hand, for a given $\sigma\in [0,\widehat{\sigma}]$, let us consider the two value functions:
\begin{eqnarray}\label{M}
\overline{V}(\sigma) = ess \inf_{\delta \in \mathcal{A}} ess \sup_{\tau \in [\sigma,T]}X^{\delta,\psi,\tau}(\sigma)
\end{eqnarray}
and
\begin{eqnarray} \label{N}
\underline{V}(\sigma) = ess \sup_{\tau \in [\sigma,T]} ess \inf_{\delta \in \mathcal{A}}X^{\delta,\psi,\tau}(\sigma).
\end{eqnarray}
\begin{remark}
It is not difficult to derive that $\underline{V}(\sigma)= -u(\sigma)$ a.s.
\end{remark}
Let us give the following definition, which allows us to state the solvability condition for our problem.
\begin{definition}
Let $\sigma$ be in $[0,\widehat{\sigma}]$. A pair $(\overline{\tau}, \overline{\delta})\in [\sigma,T]\times\mathcal{A}$ is called a $\sigma$-saddle point of our problem \eqref{M} or \eqref{N} if
\begin{itemize}
\item [(i)] $\underline{V}(\sigma) = \overline{V}(\sigma)$ a.s.
\item [(ii)] the essential infimum in \eqref{M} is attained at $\overline{\delta}$.
\item [(iii)] the essential supremum in \eqref{N} is attained at $\overline{\tau}$.
\end{itemize}
\end{definition}
\begin{remark}
\begin{itemize}
\item [(i)] It is not difficult to prove that for each $\sigma\in [0,\widehat{\sigma}]$, $(\overline{\tau},\overline{\delta})$ is a $\sigma$-saddle point if and only if for each $(\tau,\delta)\in [\sigma,T]\times\mathcal{A}$, we have
\begin{eqnarray*}
X^{\overline{\delta},\psi,\tau}(\sigma)\leq X^{\overline{\delta},\psi,\overline{\tau}}(\sigma)\leq X^{\delta,\psi,\overline{\tau}}(\sigma),\;\;\; a.s.
\end{eqnarray*}
\item [(ii)] For each $\sigma\in [0,\widehat{\sigma}]$, if $(\overline{\delta},\overline{\tau})$ is a $\sigma$-saddle point, then $\overline{\delta}$ and $\overline{\tau}$ attain respectively the infimum and the supremum in $\underline{V}(\sigma)$, that is
\begin{eqnarray*}
\underline{V}(\sigma)=ess\sup_{\tau \in [\sigma,T]} ess\inf_{\delta \in \mathcal{A}}X^{\delta,\psi,\tau}(\sigma)=ess \inf_{\delta \in \mathcal{A}}X^{\delta,\psi,\overline{\tau}}(\sigma)=X^{\overline{\delta},\psi,\overline{\tau}}(\sigma)
\end{eqnarray*}
Hence, $\overline{\tau}$ is an optimal stopping time for the agent who wants to minimize over stopping times her risk-measure at time $\sigma$ under ambiguity (see \eqref{Ambi}).
Also, since $\overline{\delta}$ attains the essential infimum in \eqref{M}, $\overline{\delta}$ corresponds at time $\sigma$ to a worst case scenario. Hence, the robust optimal stopping problem \eqref{Ambi} reduces to a classical optimal stopping problem associated with a worst-case scenario among the possible ambiguity parameters $\delta\in \mathcal{A}$.
\end{itemize}
\end{remark}
Since for all $\sigma\in [0,\widehat{\sigma}]$ we clearly have $\underline{V}(\sigma)\leq \overline{V}(\sigma)$ a.s., we want to determine when equality holds, characterize the value function and address the question of the existence of a $\sigma$-saddle point.
For this purpose, let us relate the game problem to the optimization problem for RBSDEs stated previously. Let us consider $(Y^{\delta},Z^{\delta},U^{\delta}(.))$ the solution of the reflected BSDE with jumps and delayed generator associated with $(\psi(\tau),f^{\delta},\psi)$. According to Section 4, there exists a stopping time $\sigma_{\delta}$ defined as in \eqref{ST} such that, for each $\sigma\in [0,\sigma_{\delta}]$, we have
\begin{eqnarray*}
Y^{\delta}(\sigma) = ess \sup_{\tau \in [\sigma,T]} X^{\delta,\psi,\tau}(\sigma),\;\;\;\mbox{a.s}.
\end{eqnarray*}
Next, applying the comparison theorem to the family of reflected BSDEs with jumps and delayed generators $(\psi(\tau),f^{\delta},\psi)$, there exists a stopping time $\widehat{\sigma}$ defined by \eqref{Compopti} such that, for each $\sigma \in [0,\widehat{\sigma}]$,
\begin{eqnarray*}
\overline{V}(\sigma) = ess \inf_{\delta\in \mathcal{A}}Y^{\delta}(\sigma),\;\;\;\; \mbox{a.s}.
\end{eqnarray*}
Let us set $f=ess\inf_{\delta\in \mathcal{A}}f^{\delta}$ and consider $(Y,Z,U(.))$ the solution of the reflected BSDE associated with $(\psi(\tau),f,\psi)$.
\begin{theorem}\label{Topt}
Suppose that $f^{\delta}$ and $f$ satisfy assumptions $({\bf A3})$ and $({\bf A4})$ for all $\delta\in\mathcal{A}$. Suppose also that there exists $\overline{\delta}\in \mathcal{A}$ such that $f=f^{\overline{\delta}}$. Then the value function exists and is characterized as the solution of the reflected BSDE $(\psi(\tau),f,\psi)$; that is, for each $\sigma \in[0,\widehat{\sigma}]$, we have
\begin{eqnarray*}
Y(\sigma)=\underline{V}(\sigma)=\overline{V}(\sigma)\;\;\; \mbox{a.s}.
\end{eqnarray*}
Moreover, the minimal risk measure, defined by \eqref{Ambi}, verifies, for each $\sigma\in[0,\widehat{\sigma}]$, $u(\sigma)=-Y(\sigma)$, a.s.
\end{theorem}
\begin{proof}
The proof follows the same approach as that of Theorem 5.3 in \cite{10}, except that we work on the stochastic interval $[0,\widehat{\sigma}]$, where $\widehat{\sigma}$ is defined by \eqref{Compopti}. This is due to the use of the comparison theorem, which is valid only on this type of interval.
\end{proof}
We have the following result, which generalizes Corollary 5.4 in \cite{10} to BSDEs with jumps and delayed generator.
\begin{corollary}
Suppose the assumptions of Theorem \ref{Topt} are satisfied and the obstacle $\psi$ is l.u.s.c.\ along stopping times. Let $\widehat{\sigma}$ be the stopping time defined by \eqref{Compopti}. For each $\sigma\in [0,\widehat{\sigma}]$, we set
\begin{eqnarray*}
\tau^{*}_{\sigma}=\inf \{s\geq \sigma,\;\; Y(s)=\psi(s)\}.
\end{eqnarray*}
Then $(\tau^{*}_{\sigma},\overline{\delta})$ is a $\sigma$-saddle point, that is, $Y(\sigma) = X^{\overline{\delta},\psi,\tau^{*}_{\sigma}}(\sigma)$ a.s. In other words, $\tau_{\sigma}^{*}$ is an optimal stopping time for the agent who wants to minimize her risk measure at time $\sigma$, and $\overline{\delta}$ corresponds to a worst-case scenario.
\end{corollary}
Let us end this paper with a remark summarizing the remaining generalizations.
\begin{remark}
Using the same approach, it is not difficult to establish the analogues of Proposition 5.5, Theorem 5.6 and Corollary 5.7 of \cite{10}.
\end{remark}
\end{document}
\begin{document}
\title{Non-permutation invariant Borel quantifiers}
\author{Fredrik Engstr\"om}
\revauthor{Engstr\"om, Fredrik}
\address{Department of Philosophy, Linguistics and Theory of Science \\ University of Gothenburg\\
Box 200, 405 30 G\"oteborg, Sweden}
\email{[email protected]}
\thanks{Part of the work in this paper was done while visiting the Institut Mittag-Leffler. The authors would like to thank the Institut Mittag-Leffler for support.}
\thanks{First author partially supported by the EUROCORE LogICCC LINT program and the Swedish Research Council.}
\author{Philipp Schlicht}
\revauthor{Schlicht, Philipp}
\address{Mathematisches Institut \\
Universit\"at Bonn \\
Endenicher Allee 60, 53115 Bonn, Germany }
\email{[email protected]}
\thanks{Second author partially supported by an exchange grant from the European Science Foundation.}
\begin{abstract}
Every permutation invariant Borel subset of the space of countable structures is definable in $\mathscr{L}_{\omega_1\omega}$ by a theorem of Lopez-Escobar. We prove variants of this theorem relative to fixed relations and fixed non-permutation invariant quantifiers. Moreover we show that for every closed subgroup $G$ of the symmetric group $S_{\infty}$, there is a closed binary quantifier $Q$ such that the $G$-invariant subsets of the space of countable structures are exactly the $\mathscr{L}_{\omega_1\omega}(Q)$-definable sets. \end{abstract}
\maketitle
\section{Introduction}
Countable models in a given countable relational signature $\tau$ can be represented as elements of the logic space $$X_{\tau}=\prod_{R\in \tau} 2^{\mathbb{N}^{a(R)}}$$ where $a(R)$ denotes the arity of the relation $R$. For example, the set of elements of the logic space for a binary relation representing linear orders is a closed set.
The Lopez-Escobar theorem is an easy consequence due to Scott of the interpolation theorem \cite{Lopez-Escobar:65} for $\mathscr{L}_{\omega_1\omega}$. The interpolation theorem states that if $\varphi$ is an $\mathscr{L}_{\omega_1\omega}$-formula in the signature $\sigma$ and $\psi$ is an $\mathscr{L}_{\omega_1\omega}$-formula in the signature $\tau$ such that $\varphi\rightarrow\psi$ holds in all countable models, then there is an $\mathscr{L}_{\omega_1\omega}$ interpolant $\theta$ in the signature $\sigma\cap\tau$ such that $\varphi\rightarrow\theta$ and $\theta\rightarrow\psi$ hold in all countable models.
The Lopez-Escobar theorem \cite[theorem 16.8]{Kechris:95} states that any invariant Borel subset of the logic space is defined by a formula in $\mathscr{L}_{\omega_{1}\omega}$. To derive this from the interpolation theorem, note that every Borel set is defined by an $\mathscr{L}_{\omega_{1}\omega}$-formula in a sequence of parameters $n_i\in\mathbb{N}$. If you replace each $n_i$ by a constant $c_i$ or $d_i$ and use the fact that the set is permutation invariant, it follows that there is an $\mathscr{L}_{\omega_1\omega}$ interpolant without parameters or constants defining the set. Vaught \cite{Vaught:74} found a different proof which has the advantage that it generalizes to the logic space for structures of higher cardinalities. We will generalize Vaught's proof to sets of countable structures invariant under the action of a closed subgroup of the permutation group of the natural numbers.
Let $G$ be the group of permutations fixing a countable family of relations and constants in the natural numbers. In section 2 we show that every $G$-invariant Borel set is definable from these relations and constants.
A generalized quantifier of type $\langle k\rangle$ on the natural numbers is a subset of $2^{\mathbb{N}^k}$. We will freely identify subsets of $\mathbb{N}^k$ and their characteristic functions. We consider the logic $\mathscr{L}_{\omega_1\omega}(Q)$. This is $\mathscr{L}_{\omega_1\omega}$ augmented by the quantifier $Q$ where the formula $Qx\varphi(x)$ has the fixed interpretation $\{x\in\mathbb{N}^k:\varphi(x)\}\in Q$. We study non-permutation invariant generalized quantifiers on the natural numbers and prove a variant of the Lopez-Escobar theorem for a subclass of the quantifiers which are closed and downwards closed.
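For instance, the type $\langle 1\rangle$ quantifier $Q_{0}=\{X\subseteq\mathbb{N}:0\in X\}$, used here only as an illustration, gives
$$\{x\in\mathbb{N}:\varphi(x)\}\in Q_{0}\quad\mbox{if and only if}\quad \varphi(0),$$
and it is not permutation invariant, since the transposition exchanging $0$ and $1$ maps $\{0\}\in Q_{0}$ to $\{1\}\notin Q_{0}$.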
Moreover for every closed subgroup $G$ of the symmetric group $S_{\infty}$, there is a closed binary quantifier $Q$ such that the $G$-invariant subsets of the space of countable structures are exactly the $\mathscr{L}_{\omega_1\omega}(Q)$-definable sets.
In section 3 we show that there is a version of the Lopez-Escobar theorem for clopen quantifiers and for finite boolean combinations of principal quantifiers. In section 4 we generalize some of the results to the logic space for structures of size $\kappa$ for uncountable cardinals $\kappa$ with $\kappa^{<\kappa}=\kappa$.
\section{Variants of the Lopez-Escobar theorem}
Let
$$X_\tau = \prod_{R \in \tau} 2^{\N^{a(R)}}$$
denote the logic space on $\N$ for a relational signature $\tau$, where $a(R)$ is the arity of the relation $R$. The space is equipped with the product topology. If $\mathcal{F}$ is a sequence of relations on $\mathbb{N}$, then the logic $\mathscr{L}_{\omega_1 \omega}(\mathcal{F})$ has a symbol for each relation in $\mathcal{F}$ with fixed interpretation as this relation.
\subsection{Variants relative to relations}
We prove a version of the Lopez-Escobar theorem for closed subgroups of the permutation group $S_{\infty}$ of the natural numbers. Recall the standard
\begin{fact} The closed subgroups of $S_{\infty}$ are exactly the automorphism groups of countable relational structures.
\end{fact}
See for example \cite[theorem 2.4.4]{Gao:09} for a proof.
\begin{defin} Suppose $G\leq S_{\infty}$ is a subgroup. The $G$-orbit of $a\in \N^{<\omega}$ is defined as $\mathrm{Orb}_G(a)=\{g(a):g\in G\}$.
\end{defin}
When $G$ is understood from the context we write $\mathrm{Orb}(a)$ for the $G$-orbit of $a$.
\begin{prop}
Suppose $G\leq S_{\infty}$ is closed and $\mathcal{F}$ is the family of orbits of $G$. Then every $G$-invariant Borel subset of $X_{\tau}$ is definable in $\mathscr{L}_{\omega_{1}\omega}(\mathcal{F})$.
\end{prop}
\begin{proof}
The proof is very similar to Vaught's proof \cite{Vaught:74}. We follow the proof of \cite[theorem 16.8]{Kechris:95} and replace the set of injections $k\rightarrow \mathbb{N}$ with the orbit of $\langle0,1,..,k-1\rangle$. Note that the Baire category theorem for $G$ holds since $G$ is closed in $S_{\infty}$. By induction on the Borel rank, there is for every Borel set $A\subseteq X_{\tau}$ and every $k\in \N$ an $\mathscr{L}_{\omega_{1}\omega}(\mathcal {F})$-formula $\varphi_k$ such that $\varphi_k(x,a)$ holds for $\langle x,a\rangle\in X_{\tau}\times \N^k$ if and only if $g(x)\in A$ for comeager many $g\in G$ with $a\subseteq g^{-1}$.
\end{proof}
Note that every $G_{\delta}$ subgroup of $S_{\infty}$ is closed \cite[proposition 1.2.1]{Becker.Kechris:96}. However, Proposition 3 is false for some $F_\sigma$ subgroups. We write $A=^*B$ if $A\triangle B$ is finite and $A\subseteq^* B$ if $A-B$ is finite. Suppose $A\subseteq \N$ is infinite and coinfinite and let $G=\set{g \in S_\infty: g(A)=^*A}$. Then $G$ is $F_{\sigma}$ and has the same orbits as $S_\infty$. However, the set $Q=\{X:A\subseteq^*X\}$ is $G$-invariant, but not $S_{\infty}$-invariant and hence not definable from the orbits of $G$.
\begin{defin} Suppose $G\leq S_{\infty}$. The orbit equivalence relation on $\N^{<\omega}$ is defined as $E_G=\{\langle a,b\rangle\in\N^{<\omega}\times \N^{<\omega}: \exists g\in G (g(a)=b)\}$.
\end{defin}
The orbit equivalence relation may contain much less information than the family of orbits. For example, $E_{\{id_{\mathbb{N}}\}}$ is definable in $\mathscr{L}_{\omega_1\omega}$. Hence none of the orbits of $\{id_{\mathbb{N}}\}$ is definable from $E_{\{id_{\mathbb{N}}\}}$.
As a corollary to lemma 3 we obtain a variant of Scott sentences for countable structures.
\begin{prop}
Suppose $G\leq S_{\infty}$ is closed and $\mathcal{F}$ is the family of $G$-orbits. There is for each $M \in X_\tau$ an $\mathscr{L}_{\omega_1\omega}(\mathcal{F})$-sentence $\varphi^G_M$ with $M\vDash\varphi^G_M$ and the property that $\varphi^G_M=\varphi^G_N$ if and only if there is $g\in G$ with $g(M)=N$.
\end{prop}
\begin{proof}
The orbit $\mathrm{Orb}(M)=\{g(M):g\in G\}$ is Borel \cite[Theorem 3.3.2]{Gao:09}. Since it is moreover $G$-invariant, it is definable in $\mathscr{L}_{\omega_1\omega}(\mathcal{F})$ by Proposition 3, and we can take $\varphi^G_M$ to be such a defining sentence.
\end{proof}
When $G$ is the symmetric group we use the standard notation $\varphi_M=\varphi^{S_\infty}_M$. We give a version of the Lopez-Escobar theorem relative to a family of relations.
\begin{prop} Suppose $\mathcal{F}=\langle R_i:i<\omega\rangle$ is a family of relations on $\mathbb{N}$ in the signature $\tau$. Then every $\mathop\mathrm{Aut}(\mathcal{F})$-invariant Borel subset of $X_\tau$ is definable in $\mathscr{L}_{\omega_1\omega}(\mathcal{F})$.
\end{prop}
\begin{proof}
It is sufficient to show that each orbit of $\mathop\mathrm{Aut}(\mathcal{F})$ in $\mathbb{N}^{<\omega}$ is definable in $\mathscr{L}_{\omega_1\omega}(\mathcal{F})$. Then all $\mathop\mathrm{Aut}(\mathcal{F})$-invariant Borel sets are definable in $\mathscr{L}_{\omega_1\omega}(\mathcal{F})$ by Proposition 3. Note that $\mathop\mathrm{Aut}(\mathcal{F})$ is closed.
Let $M_a=\langle\mathbb{N},\langle R_i:i<\omega\rangle,a\rangle$ for $a\in\mathbb{N}^{<\omega}$. Then $a\in \mathrm{Orb}(b)$ if and only if the structures $M_a$ and $M_b$ are isomorphic via a permutation in $\mathop\mathrm{Aut}(\mathcal{F})$ if and only if $M_b\vDash \varphi_{M_a}$. Hence $\varphi_{M_a}$ defines the orbit of $a$.
\end{proof}
\subsection{Variants relative to quantifiers}
Let $Q$ be a quantifier on $\mathbb{N}^k$, i.e. a quantifier of type $\langle k\rangle$ on the natural numbers. We have to look at two kinds of definability.
\begin{defin} A set $A\subseteq X_{\tau}$ is definable in $\mathscr{L}_{\omega_1\omega}(Q)$ if there is an $\mathscr{L}_{\omega_1\omega}(Q)$-formula $\varphi$ such that $M\in A$ if and only if $M \vDash \varphi$ for all $M\in X_\tau$.
\end{defin}
\begin{defin} A set $A\subseteq\mathbb{N}^k$ is definable in $\mathscr{L}_{\omega_1\omega}(Q)$ if there is an $\mathscr{L}_{\omega_1\omega}(Q)$-formula $\varphi$ such that $n\in A$ if and only if $\N \vDash\varphi(n)$.
\end{defin}
We say that a permutation $f$ fixes $Q$ if $A\in Q$ is equivalent to $f(A)\in Q$ for all $A\subseteq \N^k$. $\mathop\mathrm{Aut}(Q)$ is the group of permutations fixing $Q$. A set $A\subseteq X_{\tau}$ is called $G$-invariant for $G\leq S_{\infty}$ if $g(A)=A$ for all $g\in G$.
Proposition 3 implies
\begin{prop}
Suppose $Q\subseteq 2^{\mathbb{N}^k}$ is a Borel quantifier with closed automorphism group. Suppose the orbits of $\mathop\mathrm{Aut}(Q)$ are definable in $\mathscr{L}_{\omega_{1}\omega}(Q)$. Then a subset of $X_{\tau}$ is Borel and $\mathop\mathrm{Aut}(Q)$-invariant if and only if it is definable in $\mathscr{L}_{\omega_{1}\omega}(Q)$.
\end{prop}
\begin{proof}
We are left to show that every $\mathscr{L}_{\omega_{1}\omega}(Q)$-definable set $A\subseteq X_{\tau}$ is Borel. Note that the Borel sets are exactly the sets definable in $\mathscr{L}_{\omega_1\omega}$ in a sequence of parameters $m_i\in\mathbb{N}$. Suppose $Q$ is defined by an $\mathscr{L}_{\omega_{1}\omega}$-formula $\varphi$ in the parameters $m_i$.
We want to show by induction on $\psi$ that if $A$ is defined by the $\mathscr{L}_{\omega_{1}\omega}(Q)$-formula $\psi(x,\vec{n})$ with $\vec{n}=\langle n_i:i<\omega\rangle$, then $\psi$ is equivalent to an $\mathscr{L}_{\omega_{1}\omega}$-formula in a sequence of natural parameters. Let $\psi=Qx\chi(x,\vec{n})$ where $\chi$ is an $\mathscr{L}_{\omega_{1}\omega}$-formula.
Suppose $M$ is a countable structure in the signature $\tau$. Then $M\in A$ if and only if $\set{x:M\vDash \chi(x,\vec{n})}\in Q$ if and only if $\langle\mathbb{N}^k, \set{x:M\vDash\chi(x,\vec{n})}\rangle\vDash\varphi$. Hence $A$ is Borel.
\end{proof}
The assumption that $\mathop\mathrm{Aut}(Q)$ is closed is essential. We write $A=^*B$ if $A\triangle B$ is finite. Let $Q=\{X:X=^*A\}$ where $A\subseteq \mathbb{N}$ is infinite and coinfinite. It follows by induction on $\varphi$ that any $\mathscr{L}_{\omega_1\omega}(Q)$-formula of the form $Qx\varphi(x,a)$ is false in $\langle\mathbb{N},X\rangle$ for all $X\neq^*A,\neg A$. If $\langle\N,X\rangle\vDash\varphi$ for some infinite and co-infinite $X\neq^*A,\neg A$ where $\varphi$ is an $\mathscr{L}_{\omega_1\omega}(Q)$-formula, this implies that $\varphi$ holds in every structure $\langle\N,Y\rangle$ with $Y\neq^*A,\neg A$ infinite and co-infinite. Hence the set $\{X:A\subseteq^*X\}$ is $\mathop\mathrm{Aut}(Q)$-invariant but not $\mathscr{L}_{\omega_1\omega}(Q)$-definable.
\begin{defin} Suppose $Q\subseteq 2^{\mathbb{N}^k}$ is closed and downwards closed, i.e. closed under subsets. A function $p:n\rightarrow\mathbb{N}$ is compatible with $Q$ if and only if for every $A\subseteq n^k$, $A\in Q$ if and only if $p(A)\in Q$.
\end{defin}
Note that for downwards closed $Q$ and $A\subseteq n^k$, $A\in Q$ if and only if $A$ extends to an element of $Q$, i.e. if there is $B\subseteq \mathbb{N}^k$ in $Q$ with $A=B\cap n^k$. Hence $p:n\rightarrow\mathbb{N}$ is compatible with $Q$ if and only if for every function $g:n^k\rightarrow \{0,1\}$, $g$ extends to an element of $Q$ if and only if $p(g)=g\circ p^{-1}$ extends to an element of $Q$.
\begin{defin} A quantifier $Q$ is good if it is closed, downwards closed, and any finite injection $p:n\rightarrow\mathbb{N}$ compatible with $Q$ extends to a permutation $f:\mathbb{N}\rightarrow\mathbb{N}$ leaving $Q$ invariant.
\end{defin}
The information about tuples of natural numbers encoded in a good quantifier $Q$ is definable in $\mathscr{L}_{\omega\omega}(Q)$.
\begin{prop} Suppose $Q$ is good. Then the orbits of $\mathop\mathrm{Aut}(Q)$ are definable in $\mathscr{L}_{\omega\omega}(Q)$.
\end{prop}
\begin{proof} $a=\langle a_i:i<n\rangle$ is in the orbit of $\langle0,..,n-1\rangle$ if and only if the map $n\rightarrow\mathbb{N}$ mapping $i$ to $a_i$ is compatible with $Q$. This is expressible by the conjunction of $Qx\bigvee_{i\in I}x=\langle a_{i(0)},..,a_{i(k-1)}\rangle$ for all $I\subseteq n^k$ with $I\in Q$ together with the conjunction of $\neg Qx\bigvee_{i\in I}x=\langle a_{i(0)},..,a_{i(k-1)}\rangle$ for all $I\subseteq n^k$ with $I\notin Q$.
\end{proof}
Note that the automorphism group of a closed quantifier is closed.
\begin{prop}
Suppose $G$ is a closed subgroup of $S_{\infty}$. There is a good binary quantifier $Q_G$ with $G=\mathop\mathrm{Aut}(Q_G)$.
\end{prop}
\begin{proof}
Let $P$ be the downward closure of $$\bigcup_{k\in\mathbb{N}} \mathrm{Orb}(\{\langle0,0\rangle,\langle0,1\rangle,\langle1,2\rangle,\ldots,\langle k-1,k\rangle\})$$
Then $P$ is $G$-invariant, so its closure $Q$ is $G$-invariant as well.
Suppose $p:k\rightarrow\mathbb{N}$ is a finite injection compatible with $Q$. Then $s=\{\langle p(0),p(0)\rangle,\langle p(0),p(1)\rangle,..,\langle p(k-2),p(k-1)\rangle\}\in Q$. Let $\langle a^n: n<\omega\rangle$ be a sequence in $P$ converging to $s$. Then $a^n$ eventually contains a set of the form $\{\langle a^n_0,a^n_0\rangle,..,\langle a^n_{k-2},a^n_{k-1}\rangle\}$ and the eventual value of $a^n_i$ is $p(i)$ for all $i<k$. Hence $s\in P$ and $p$ can be extended to a permutation in $G$.
We claim that $G=\mathop\mathrm{Aut}(Q)$. Let $g\in \mathop\mathrm{Aut}(Q)$. Then for every $m$ we have $\{\langle g(0),g(0)\rangle,\langle g(0),g(1)\rangle,..,\langle g(m-1),g(m)\rangle\}\in Q$. This is in fact an element of $P$ by the same argument as in the last paragraph. So there are permutations $h_m\in G$ for each $m$ with $g(i)=h_m(i)$ for all $i\leq m$. Since $h_m\rightarrow g$, this implies $g\in G$.
\end{proof}
Hence there is a correspondence between the closed subgroups of $S_{\infty}$ and good binary quantifiers $Q$. Let $\mathop\mathrm{Inv}(G)$ denote the family of closed $G$-invariant subsets of $2^{\mathbb{N}^2}$. Then $$\mathop\mathrm{Aut}(\mathop\mathrm{Inv}(G))=G$$ for every closed subgroup $G\leq S_{\infty}$.
\section{More quantifiers}
We show that some other types of quantifiers have properties similar to those of good quantifiers, i.e.\ their automorphism group is closed and each orbit is definable from the quantifier. For these quantifiers there is a version of the Lopez-Escobar theorem.
However, the set of quantifiers with these properties is not closed under unions or intersections. To see this, we consider quantifiers of the following form.
\begin{defin} A principal quantifier is of the form
\begin{itemize}
\item $Q_A=\set{X\subseteq \N^k: A\subseteq X}$ or
\item $Q^A=\set{X\subseteq \N^k: X\subseteq A}$
\end{itemize}
where $A$ is a subset of $\mathbb{N}^k$.
\end{defin}
The automorphism group $\mathop\mathrm{Aut}(Q_A)=\mathop\mathrm{Aut}(Q^A)=\mathop\mathrm{Aut}(Q_A\cap Q^A)=\mathop\mathrm{Aut}(A)$ of a principal quantifier is closed and its orbits are definable in $\mathscr{L}_{\omega \omega}(Q_A)$. For $A\subseteq\mathbb{N}$ this is true since $m\in A$ if and only if $\neg Q_An(m\neq n)$ holds, and for $A\subseteq\mathbb{N}^k$ this is shown in section 3.2.
Let us fix some infinite and co-infinite set $A\subseteq \N$ and let $Q=\{A\}$, so that $\mathop\mathrm{Aut}(Q)=\mathop\mathrm{Aut}(A)$. A formula of the form $Qx\varphi(x,a)$ is false whenever $\varphi$ is an $\mathscr{L}_{\omega_{1}\omega}$-formula and $a\in\mathbb{N}^{<\omega}$, since the set $\{n:\varphi(n,a)\}$ is invariant under permutations fixing $a$. Thus any subset of $\mathbb{N}$ defined by an $\mathscr{L}_{\omega_{1}\omega}(Q)$-formula with parameters in $\{0,..,t\}$ is either a subset of $\{0,..,t\}$ or includes $\N -\{0,..,t\}$, by induction on the formulas. Hence the orbits of $\mathop\mathrm{Aut}(Q)$ are not definable in $\mathscr{L}_{\omega_{1}\omega}(Q)$.
\subsection{Clopen quantifiers}
Suppose $Q$ is a quantifier of type $\langle k\rangle$. Note that the automorphism group $\mathop\mathrm{Aut}(Q)$ of any closed quantifier $Q$ is closed.
We say that a set $S \subseteq \N^k$ supports $Q$ if for all $A,B\subseteq \N^k$ with $A\cap S=B\cap S$ we have $A\in Q$ if and only if $B\in Q$.
\begin{defin}
A minimal set $S \subseteq \N^k$ supporting $Q$ is called a support of $Q$.
\end{defin}
\begin{lem}
Every closed quantifier $Q$ on $\mathbb{N}^k$ has a unique support.
\end{lem}
\begin{proof}
The set of $S \subseteq \N^k$ which support $Q$ is easily seen to be closed under finite intersections. Suppose $S_n$ supports $Q$ for each $n\in \N$ and $S_n\subseteq S_m$ for $m\leq n$. Let $S=\bigcap_{n\in \N}S_n$. Suppose $A\in Q$ and $A\cap S=B\cap S$. Now $A_n=(A\cap S_n)\cup(B-S_n)\in Q$ for each $n$ since $A\in Q$. Then $B\in Q$ since $B$ is the limit of the sets $A_n$ and $Q$ is closed.
Let $\N^k=\{a_n:n\in \N\}$. The support of $Q$ is the intersection of the sets $A_n$ where $A_0=\N^k$ and $A_{n+1}=A_n-\{a_n\}$ if this set supports $Q$ and $A_{n+1}=A_n$ otherwise.
\end{proof}
Note that the set of finite subsets of $\N^k$ (which is not a closed quantifier) is supported by $\N^k-\{a\}$ for every $a\in \N^k$, so it does not have a support.
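On a finite base set the construction in the proof above can be carried out mechanically. The following sketch is our illustration only (a five-element base and the principal quantifier $Q_{\{0,1\}}$ as a toy example): it removes points greedily and returns the minimal supporting set.
\begin{verbatim}
from itertools import chain, combinations

# Finite analogue of the greedy construction in the proof: starting from the
# whole base set, drop a point whenever the remaining set still supports Q.

def powerset(base):
    return [frozenset(s) for s in
            chain.from_iterable(combinations(base, r)
                                for r in range(len(base) + 1))]

def supports(Q, S, base):
    # S supports Q iff membership in Q depends only on the intersection with S
    seen = {}
    for X in powerset(base):
        key, val = X & S, X in Q
        if seen.setdefault(key, val) != val:
            return False
    return True

def support(Q, base):
    S = set(base)
    for a in sorted(base):
        if supports(Q, frozenset(S - {a}), base):
            S.remove(a)
    return S

base = range(5)
Q = {X for X in powerset(base) if {0, 1} <= X}   # the toy quantifier Q_{{0,1}}
print(support(Q, base))                          # expected: {0, 1}
\end{verbatim}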
\begin{lem} The support of any clopen quantifier $Q$ on $\mathbb{N}^k$ is definable in $\mathscr{L}_{\omega\omega}(Q)$.
\end{lem}
\begin{proof}
Note that a quantifier is clopen if and only if it has finite support. Suppose the support of $Q$ is contained in $\{0,..,t-1\}^k$ and let $r=t^k$.
Let $R^{l,m}$ be the set of tuples $\bar{a}\smallfrown \bar{b}$ with $\bar{a}\in(\N^k)^l$ and $\bar{b}\in(\N^k)^m$ such that the finite partial function mapping each $a_{i}\in\N^{k}$ to $1$ and each $b_{j}\in\N^{k}$ to $0$ can be extended to the characteristic function of an element of $Q$. Then $\bar{a}\smallfrown \bar{b}\in R^{l,m}$ if and only if there is a tuple $\bar{c}\in(\N^k)^r$ so that
$$Qx(\bigwedge_{i<m}x\neq b_{i} \land (\bigvee_{i<l}x=a_{i} \vee \bigvee_{i<r}x=c_{i}))$$
holds. Hence $R^{l,m}$ is definable in $\mathscr{L}_{\omega \omega}(Q)$.
Then $a\in \N^k$ is in the support of $Q$ if and only if there are tuples $\bar{b}\in(\N^k)^r$ and $\bar{c}\in(\N^k)^r$ such that $R^{r+1,r}(\langle a\rangle\smallfrown\bar{b}\smallfrown\bar{c})$ and $R^{r,r+1}(\bar{b}\smallfrown\bar{c}\smallfrown\langle a\rangle)$ don't have the same truth value.
\end{proof}
\begin{prop} The orbits of the automorphism group $\mathop\mathrm{Aut}(Q)$ of any clopen quantifier $Q$ on $\mathbb{N}^k$ are definable in $\mathscr{L}_{\omega\omega}(Q)$.
\end{prop}
\begin{proof}
Suppose the support $S$ of $Q$ is contained in $\{0,..,t-1\}^k$. We claim that $\langle a_0,..,a_n\rangle$ is in the orbit of $\langle0,..,n\rangle$ if and only if there is an extension $\langle a_0,..,a_{n+t}\rangle$ with $S\subseteq\{a_0,..,a_{n+t}\}^k$ such that for the finite partial map $f$ with $f(i)=a_i$ for $i\leq n+t$
\begin{itemize}
\item $f$ is injective and
\item $f$ and $f^{-1}$ preserve $S$ and $R^{j,l}$ for all $j,l$ with $j+l\leq n+t$.
\end{itemize}
Suppose these conditions hold. Let $g$ be any permutation of $\N$ extending $f$. We have $R \in Q$ if and only if $R \cap S$ extends to a relation in $Q$, for any relation $R\subseteq \N^k$. But this holds if and only if $g(R)\cap S$ extends to a relation in $Q$, since $g$ preserves $S$ and $R^{j,l}$. Hence $g\in\mathop\mathrm{Aut}(Q)$.
\end{proof}
The orbits of $\mathop\mathrm{Aut}(\mathcal{F})$ are $\mathscr{L}_{\omega_{1}\omega}(\mathcal{F})$-definable and $\mathop\mathrm{Aut}(\mathcal {F})$ is closed for any sequence $\mathcal{F}$ of clopen quantifiers by a slight variation of the previous proof.
\subsection{Combinations of principal quantifiers}
We show that if $Q$ is a finite boolean combination of principal quantifiers $Q_{A_k}$, then its automorphism group is closed and each orbit in $\mathbb{N}^{<\omega}$ is definable in $\mathscr{L}_{\omega_{1}\omega}(Q)$.
Suppose $\langle A_k:k<n\rangle$ is a partition of $\mathbb{N}^d$ with $d<\omega$ and
\[Q=\bigcup_{i}\bigcap_{k<n}Q_{A_k}^{s_i(k)}\]
with $s_{i}\in\{1,-1\}^{n}$ for $i<m$, where $Q_{A_k}^1=Q_{A_k}$ and $Q_{A_k}^{-1}=\neg Q_{A_k}$. We can assume that $n$ is minimal with these properties. In this situation we write $Q=\langle A_k, s_i\rangle=\langle A_k, s_i:k<n, i<m\rangle$.
We say that a tuple $\bar{a}\in (\mathbb{N}^d)^{<\omega}$ occurs positively (negatively) in $Q=\langle A_k, s_i\rangle$ if there is some $i<m$ such that, for every $k<n$, there is $j$ with $a_j\in A_k$ if and only if $s_i(k)=1$ (respectively $s_i(k)=-1$). A tuple $\bar{a}$ occurs negatively if and only if $\psi(\bar{a}):= Qx\bigwedge_j (x\neq a_{j})$ holds.
\begin{lem} If $Q=\langle A_k, s_i:k<n, i<m\rangle$, then there is an $\mathscr{L}_{\omega\omega}(Q)$-formula $\chi$ with $\chi(a,b)$ if and only if $a,b\in A_{k}$ for some $k<n$.
\end{lem}
\begin{proof} Let $\chi(a,b)$ state that $\psi(\langle a\rangle\smallfrown\bar{c})$, $\psi(\langle b\rangle\smallfrown\bar{c})$, and $\psi(\langle a,b\rangle\smallfrown\bar{c})$ have equal truth values for all tuples $\bar{c}\in(\mathbb{N}^d)^n$. If $a,b\in A_{k}$ for some $k$, then $\chi(a,b)$ holds.
Suppose $a\in A_{0}$, $b\in A_{1}$, and $\chi(a,b)$ holds. Suppose $Q$ is the union of the sets
\[\bigcup_{t\in T}(\neg Q_{A_{0}}\cap\neg Q_{A_{1}}\cap\bigcap_{j\geq2}Q_{A_{j}}^{t(j)})\]
\[\bigcup_{u\in U}(Q_{A_{0}}\cap\neg Q_{A_{1}}\cap\bigcap_{j\geq2}Q_{A_{j}}^{u(j)})\]
\[\bigcup_{v\in V}(\neg Q_{A_{0}}\cap Q_{A_{1}}\cap\bigcap_{j\geq2}Q_{A_{j}}^{v(j)})\]
\[\bigcup_{w\in W}(Q_{A_{0}}\cap Q_{A_{1}}\cap\bigcap_{j\geq2}Q_{A_{j}}^{w(j)})\]
We claim that $T=U=V$. To prove $T\subseteq U$, suppose $t\in T$ and pick $\bar{d}$ consisting of exactly one $d_{k}\in A_{k}$ for each $k\geq 2$ with $t(k)=-1$, so that $\psi(\langle a,b\rangle\smallfrown\bar{d})$ holds. Then $\psi(\langle b\rangle\smallfrown\bar{d})$ holds and hence $t\in U$. The other cases are analogous.
This shows that $n$ is not minimal, since
\[(\neg Q_{A_{0}}\cap\neg Q_{A_{1}})\cup(Q_{A_{0}}\cap\neg Q_{A_{1}})\cup(\neg Q_{A_{0}}\cap Q_{A_{1}})\]
can be replaced by $\neg(Q_{A_{0}}\cap Q_{A_{1}})=\neg Q_{A_{0}\cup A_{1}}$, i.e.\ $Q$ can be written over the coarser partition obtained by merging $A_0$ and $A_1$. This contradicts the minimality of $n$, so $\chi(a,b)$ implies that $a$ and $b$ lie in the same $A_k$.
\end{proof}
Note that the assumption that $n$ is minimal is essential here, since otherwise the proof does not even work for quantifiers of the form $Q=Q_{A}\cap Q_{B}$.
\begin{lem} If $Q=\langle A_k, s_i:k<n, i<m\rangle$, then for each $j\leq n$ there is an $\mathscr{L}_{\omega\omega}(Q)$-formula $\theta_{j}$ such that $\theta_{j}(\bar{a},\bar{b})$ holds if and only if
\begin{itemize}
\item $\bar{a}$ occurs positively and has length $j$,
\item $\bar{b}$ occurs negatively and has length $n-j$, and
\item all elements of $\bar{a}\smallfrown\bar{b}$ are in different $A_{k}$.
\end{itemize}
\end{lem}
\begin{proof} The formula $\theta_j$ can be expressed by $\chi$ and $\psi$. Note that if $\bar{b}$ occurs negatively, then $\bar{a}$ has to occur positively, given the remaining conditions.
\end{proof}
\begin{lem} If $Q=\langle A_k, s_i:k<n, i<m\rangle$, then $g\in\mathop\mathrm{Aut}(Q)$ if and only if there are a permutation $p$ of $n$ and a permutation $r$ of $m$ such that $g(A_{k})=A_{p(k)}$ for all $k<n$ and $s_{r(i)}=s_{i}\circ p^{-1}$ for all $i<m$.
\end{lem}
\begin{proof}
If $g\in \mathop\mathrm{Aut}(Q)$, then $g$ permutes the $A_k$ by the previous lemma. Let $p:n\rightarrow n$ be this permutation. For each $s:n\rightarrow \{-1,1\}$, there is $i<m$ with $s=s_i$ if and only if there is some $j<m$ with $s\circ p=s_j$.
Suppose $p$ and $r$ are given and $x\in\bigcap_{k<n}Q_{A_k}^{s_i(k)}$ for some $i<m$. Then $g(x)\in\bigcap_{k<n}Q_{A_{p(k)}}^{s_i(k)}=\bigcap_{k<n}Q_{A_{k}}^{s_i(p^{-1}(k))}=\bigcap_{k<n}Q_{A_{k}}^{s_{r(i)}(k)}\subseteq Q$.
\end{proof}
This implies that $\mathop\mathrm{Aut}(Q)$ is closed. Suppose $g_k\rightarrow g\in S_{\infty}$ with $g_k\in \mathop\mathrm{Aut}(Q)$ for each $k<\omega$ and let $p_k$ be the permutation of $n$ corresponding to $g_k$ in the previous lemma. Then $p_k$ eventually takes a fixed value $p$, and $g$ satisfies the condition of the previous lemma with this $p$, hence $g\in\mathop\mathrm{Aut}(Q)$.
Given a tuple $\bar{a}\in(\mathbb{N}^d)^j$, we can find $f:j\rightarrow n$ such that there is a tuple $\bar{c}\in(\mathbb{N}^d)^n$ with
\begin{itemize}
\item all $c_i$ are in different $A_k$ and $\bar{c}$ is maximal with this property, and
\item $a_i$ and $c_{f(i)}$ are in the same $A_k$ for each $i<j$.
\end{itemize}
For tuples $\bar{c}$ with this property, let $M_{\bar{a},\bar{c}}=\langle\mathbb{N},\bar{a},\bar{c},\langle A_{p(k)}:k<n\rangle\rangle$, where $p$ is the unique permutation of $n$ such that $c_k\in A_{p(k)}$. Note that the Scott sentence $\varphi_{M_{\bar{a},\bar{c}}}$ of $M_{\bar{a},\bar{c}}$ is equivalent to a sentence in $\mathscr{L}_{\omega_1\omega}(Q)$ with parameters $\bar{a}$ and $\bar{c}$, since $A_{p(k)}$ is definable from $Q$ and $c_i$.
\begin{prop} For any finite boolean combination $Q$ of principal quantifiers of the form $Q_{A}$, the orbits of $\mathop\mathrm{Aut}(Q)$ are definable in $\mathscr{L}_{\omega_1\omega}(Q)$.
\end{prop}
\begin{proof} Let $Q=\langle A_k, s_i:k<n, i<m\rangle$. Suppose $\bar{a}$ is a tuple of length $j$ and $f:j\rightarrow n$ and $\bar{c}$ are as above.
We claim that $\bar{b}\in \mathrm{Orb}(\bar{a})$ if and only if there is a tuple $\bar{d}\in\mathbb{N}^n$ such that
\begin{itemize}
\item all $d_i$ are in different $A_k$ and $\bar{d}$ is maximal with this property,
\item $b_i$ and $d_{f(i)}$ are in the same $A_k$ for each $i$,
\item $M_{\bar{b},\bar{d}}\vDash \varphi_{M_{\bar{a},\bar{c}}}$, and
\item for all $I\subseteq n$, $Qx(\bigwedge_{k\in I}{x\neq d_k})$ holds if and only if $I=\{k<n:s_i(k)=-1\}$ for some $i<m$.
\end{itemize}
Suppose these conditions hold for $\bar{b}$ and $\bar{d}$. Since $M_{\bar{b},\bar{d}}$ models $\varphi_{M_{\bar{a},\bar{c}}}$, there is a permutation $g:\mathbb{N}\rightarrow\mathbb{N}$ mapping $\bar{a}$ to $\bar{b}$ and $\bar{c}$ to $\bar{d}$. Let $p:n\rightarrow n$ be the permutation of the indices of $c_i$ induced by this map. Then $g(A_k)=A_{p(k)}$ for each $k<n$. The last condition implies that for every $i<m$ there is some $j<m$ such that $s_i\circ p=s_j$. Hence $g$ preserves $Q$ by the previous lemma.
\end{proof}
Note that the proposition is also true for boolean combinations of principal quantifiers $Q^{A_k}$ since $Q^{A}x\varphi (x)$ can be expressed as $Q_{\neg A}x\neg\varphi (x)$. However, the two types of principal quantifiers cannot be mixed, as the example at the beginning of Section 3 shows.
By a slight variation of the previous proof we get
\begin{prop} Suppose $\mathcal{F}=\langle Q_i:i<\omega\rangle$ is a sequence of finite boolean combinations of principal quantifiers $Q_{A_{i,k}}$. Then $\mathop\mathrm{Aut}(\mathcal{F})$ is closed and the orbits of $\mathop\mathrm{Aut}(\mathcal{F})$ are definable in $\mathscr{L}_{\omega_1\omega}(\mathcal{F})$.
\end{prop}
\section{Higher cardinalities}
Some of the previous results generalize when $\mathbb{N}$ is replaced with an uncountable cardinal $\kappa$. Let's always suppose $\kappa^{<\kappa}=\kappa$.
The logic space
$$X_{\tau}=\prod_{R\in \tau} 2^{\kappa^{a(R)}}$$
for a relational signature $\tau$ of size $\leq\kappa$ is equipped with the product topology. The topology on $2^{\kappa^n}$ is given by the basic open sets $U(s)=\{f\in 2^{\kappa^n}:s\subseteq f\}$ for partial functions $s\in 2^{\kappa^n}$ of size $<\kappa$. Let $S_\kappa$ denote the permutation group of $\kappa$ with the topology from $\kappa^\kappa$.
The $\kappa$-Borel subsets of $2^{\kappa^n}$ and $X_{\tau}$ are generated from the basic open sets by unions and intersections of length $\kappa$ and complements. A subspace is $\kappa$-Baire if $\bigcap_{\alpha<\kappa}U_{\alpha}$ is dense in the subspace for every sequence $\langle U_{\alpha}:\alpha<\kappa\rangle$ of open dense sets in the subspace.
A generalized quantifier of type $\langle\alpha\rangle$ on $\kappa$ for $\alpha<\kappa$ is a subset of $2^{\kappa^\alpha}$.
\begin{lem}
Suppose $Q$ is a closed quantifier on $\kappa$ of type $\langle\alpha\rangle$ with $\alpha<\kappa$. Then $\mathop\mathrm{Aut}(Q)$ is closed in $S_\kappa$.
\end{lem}
\begin{proof}
Suppose $g_\alpha\in\mathop\mathrm{Aut}(Q)$ for each $\alpha<\kappa$ and $g_\alpha\rightarrow g\in S_\kappa$. Let $R$ be a relation in $Q$. Then $g_\alpha(R)\rightarrow g(R)$ and hence $g(R)\in Q$. Since $g_\alpha^{-1}\rightarrow g^{-1}$ we have $g\in\mathop\mathrm{Aut}(Q)$.
\end{proof}
\begin{prop}
Suppose $G$ is a closed $\kappa$-Baire subgroup of $S_{\kappa}$ and $\mathcal{F}$ is the family of orbits of elements of $\kappa^{<\kappa}$. Then a subset of $X_{\tau}$ is $\kappa$-Borel and $G$-invariant if and only if it is definable in $\mathscr{L}_{\kappa^+\kappa}(\mathcal {F})$.
\end{prop}
\begin{proof}
As in the proof of Proposition 9.
\end{proof}
Good quantifiers are defined as in section 2.2 but finite tuples are replaced by elements of $\kappa^{<\kappa}$.
\begin{prop}
Suppose $G$ is a closed $\omega_1$-Baire subgroup of $S_{\omega_1}$. There is a good binary quantifier $Q_G$ with $G=\mathop\mathrm{Aut}(Q_G)$.
\end{prop}
\begin{proof} Suppose $f:\omega_1\rightarrow\mathcal{P}(\omega)$ is injective. The proof is as the proof of Proposition 13, except that $P$ is replaced by the downward closure of the union of the orbits of
$$\{\langle0,0\rangle\}\cup\{\langle n,n+1\rangle:n<\omega\}\cup\{\langle n,\alpha\rangle:\omega\leq\alpha<\gamma, n\in f(\alpha)\}$$
for $\gamma<\omega_1$.
\end{proof}
Moreover if $Q$ is a good quantifier on $\omega_1$, then a subset of $X_\tau$ is $\omega_1$-Borel and $\mathop\mathrm{Aut}(Q)$-invariant if and only if it is definable in $\mathscr{L}_{\omega_2\omega_1}(Q)$.
\begin{prop}
The orbits of the automorphism group of a clopen quantifier $Q$ on $\kappa$ are definable in $\mathscr{L}_{\kappa\kappa}(Q)$.
\end{prop}
\begin{proof}
As in the proof of Proposition 18.
\end{proof}
\end{document}
|
\begin{document}
\title{{\normalsize\tt
\jobname.tex}}
\begin{abstract}
Recurrence and ergodic properties are established for a single--server queueing system with variable intensities of arrivals and service. Convergence to stationarity is also interpreted in terms of reliability theory.
\end{abstract}
\section{Introduction}
In the last decades, queueing systems generalising $M/G/1/\infty$, or
$M/G/1$ (cf. \cite{GK}) -- one of the most important queueing systems --
have attracted much attention, see \cite{Asmussen} -- \cite{fakinos1987},
\cite{Thor83}.
In this paper a single--server system similar to \cite{Ve2013_ait, Ver_Z-2014}
is considered,
in which the {\em intensities} of new arrivals as well as of their
service may depend on the ``whole state'' of the system; the
whole state includes the number of customers in the system --
waiting and in service -- the elapsed time of the current
service, and the elapsed time since the end of the last service. Batch arrivals are not allowed. The news in comparison to \cite{Ve2013_ait, Ver_Z-2014} is that at any state, even if the system is idle (no service), the intensity of new arrivals
may depend on the time from the last end of service. The details of the system description will be formalised in the beginning of the next section.
By the {\em m-availability factor} of the system we understand the probability of the idle state
if $m=0$, or the probability of $m$ customers in total on the server and in the queue, if $m>0$. We do not use the notation $G/G/1$ (or $GI/GI/1$) only because some conditions on the intensities are assumed, which makes the model slightly less general.
The problem addressed in the paper is how to estimate the rate of convergence of the characteristics of the system, including the $m$-availability factors,
to their stationary values.
~
The {\em
elapsed} service time is assumed to be known at any moment, but
the remaining service times for each customer are not. For definiteness,
the discipline of serving is FIFO, although other disciplines may also be considered.
~
The paper consists of Section 1 -- this Introduction, the setting and main result in Section 2, the auxiliary lemmata in Section 3, and a short sketch of the proof of the main result in Section 4.
\section{The setting and main results}
\subsection{Defining the process}
Let us present the class of models under investigation in this
paper. Here the state space is a union of subspaces,
\[
{\mathcal X} =
\{(0,y): \;
y\ge 0\}
\cup
\bigcup_{n=1}^{\infty} \{(n,x,y): \;
x,y\ge 0\}.
\]
Functions of
class $C^1({\mathcal X})$ are understood as functions with
classical continuous derivatives with respect to the variable $x$.
Functions with compact support on ${\mathcal X}$ are understood
as functions vanishing outside some bounded domain:
for example, $C^1_0({\mathcal X})$ stands for the class of
functions with compact support and one continuous derivative.
There is a generalised Poisson arrival flow with intensity $
\lambda(X), $ where \( X = (n,x,y) \; \mbox{for any $n \ge 1$} \), and \( X = (0,y) \; \mbox{for $n = 0 $}\). Slightly abusing notations, it is convenient to write $X=(n,x,y)$ for $n=0$ as well, assuming that in this case $x=0$. If $n>0$, then the
server is serving one customer while all others are waiting in a
queue. When the last service ends, immediately a new service of the next customer from the queue starts. If $n=0$ then the server remains idle until the next customer
arrival; the intensity of such an arrival at state $(0,y)\equiv (0,0,y)$ may be variable depending on the value $y$, which stands for the elapsed time from the last end of service. Here $n$ denotes the total number of customers in the
system, and $x$ stands for the elapsed time of the current service (except for $n=0$, which was explained earlier), and $y$ is the elapsed time from the last arrival. {\em Normally}, the intensity of arrivals depends on $n$ and $y$, while the intensity of service depends on $n$ and $x$; however, we allow a more general dependence.
Denote $n_t=n(X_t)$ -- the number of customers corresponding to
the state $X_t$, and $x_t=x(X_t)$, the second component of the
process $(X_t)$, and $y_t=y(X_t)$, the third component of the
process $(X_t)$ if $n_t>0$ (the second one if $n_t=0$). For
any $X=(n,x,y)$, the intensity of service $h(X) \equiv h(n,x,y)$ is
defined; it is also convenient to assume that $h(X)=0$ for $n(X)=0$.
Both intensities $\lambda$ and $h$ are understood in the following
way, which is a definition: on any nonrandom interval of
time $[t,t+\Delta)$, the conditional probability given $X_t$ that the
current service will {\em not} be finished and there will be no
new arrivals reads,
\begin{equation}\label{intu1}
\exp\left(-\int_0^{\Delta} (\lambda+h)
(n_{t},x_{t}+s, y_t+s)\,ds\right).
\end{equation}
In the sequel, $\lambda$ and $h$ are assumed to be {\em bounded}.
In this case, for $\Delta>0$ small enough, the expression in
(\ref{intu1}) may be rewritten as
\begin{equation}\label{intu2}
1-\int_0^{\Delta} (\lambda+h)(n_{t},x_{t}+s, y_{t}+s)\,ds + O(\Delta^2),
\qquad \Delta\to 0,
\end{equation}
and this is what is ``usually'' replaced by
\[
1 - (\lambda(X_t)+h(X_t))\Delta + O(\Delta^2).
\]
However, in our situation, the latter replacement may be incorrect because
of discontinuities of the functions $\lambda$ and $h$. We emphasize
that from time $t$ and until the next jump, the evolution of the
process $X$ is {\em deterministic}, which makes the process {\em piecewise-linear Markov,} see, e.g., \cite{GK}. The (conditional given $X_t$)
density of the moment of a new arrival {\em or} of the end of the
current service, at time $t+z$, $z\ge 0$, equals,
\begin{equation}\label{intu33}
(\lambda(n_t, x_t+z, y_t+z)+h(n_t, x_t+z, y_t+z))
\exp\left(-\int_0^{z}
(\lambda+h)(n_t, x_{t}+s, y_t+s)\,ds\right).
\end{equation}
Further, given $X_t$, the moments of the next ``candidates'' for
jumps up and down are conditionally independent and have the
(conditional -- given $X_t$) density, respectively,
\begin{equation}\label{intu5}
\begin{array}{c}
\lambda(X_t+z)\exp\left(-\int\limits_0^{z}
\lambda(X_{t}+s)\,ds\right) \; \\
\\
\mbox{and} \; \\
\\
h(X_t+z)\exp\left(-\int\limits_0^{z} h(X_{t}+s)\,ds\right), \; z\ge 0.\\
\end{array}
\end{equation}
(Here $X_{t}+s := (n_t, x_t+s, y_t+s)$.) Notice that (\ref{intu33}) does correspond to conditionally
independent densities given in (\ref{intu5}).
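For illustration, the dynamics just described can be simulated by thinning, using the boundedness of $\lambda+h$. The following sketch is ours; the particular intensity functions in it are arbitrary illustrative choices and are not prescribed by the model.
\begin{verbatim}
import math
import random

# Simulation sketch of the state dynamics (n, x, y) by thinning: candidate
# events are proposed at the constant rate LAMBDA_BAR >= sup(lambda + h) and
# accepted with probability (lambda + h)/LAMBDA_BAR at the candidate state.
# The intensity functions below are illustrative choices only.

def lam(n, x, y):                 # arrival intensity lambda(X)
    return 0.8 if n > 0 else 0.5 + 0.3 * math.exp(-y)

def h(n, x, y):                   # service intensity h(X); h = 0 when idle
    return 0.0 if n == 0 else 1.0 + 2.0 / (1.0 + x)

LAMBDA_BAR = 4.0                  # upper bound for lambda + h

def simulate(T, seed=0):
    rng = random.Random(seed)
    t, n, x, y = 0.0, 0, 0.0, 0.0
    while t < T:
        dt = rng.expovariate(LAMBDA_BAR)
        t += dt
        x += dt if n > 0 else 0.0     # x grows only while a service is running
        y += dt                       # y always grows between jumps
        total = lam(n, x, y) + h(n, x, y)
        if rng.random() * LAMBDA_BAR < total:       # a real event occurs
            if rng.random() * total < lam(n, x, y):
                n, y = n + 1, 0.0                   # arrival: n -> n+1, y reset
            else:
                n, x = n - 1, 0.0                   # end of service: n -> n-1, x reset
    return n, x, y

print(simulate(100.0))
\end{verbatim}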
\subsection{Main result}
Let
\[
\Lambda:= \sup_{n,x,y: \,n>0}\lambda(n,x,y) < \infty.
\]
For establishing convergence rate to the stationary regime, we
assume similarly to \cite{Ve2013_ait, Ver_Z-2014},
\begin{equation}\label{eq2}
\inf_{n>0, y} h(n,x,y) \ge \frac{C_0}{1+x}, \quad x\ge 0.
\end{equation}
We also assume a new condition related to $\lambda_0(t) = \lambda(0, 0,t)$, which was constant in the earlier papers:
now it is allowed to be variable and to satisfy
\begin{equation}\label{eq20}
0<\inf_{t\ge 0}\lambda_0(t) \le \sup_{t\ge 0}\lambda_0(t) <\infty.
\end{equation}
Recall that the process has no explosion with probability one due
to the boundedness of both intensities, i.e., the trajectory may
have only finitely many jumps on any finite interval of time.
\begin{thm}\label{thm2}
Let the functions $\lambda$ and $h$ be Borel measurable and
bounded and let the assumptions (\ref{eq2}) and (\ref{eq20}) be satisfied. Then, if $C_0$ is large enough,
there exists a unique stationary measure $\mu$. Moreover, for any $m>k$ and $C>0$ there exists $\bar C>0$ such that if $C_0\ge \bar C$, then for any $t\ge 0$,
\begin{equation}\label{est}
\|\mu^{n,x,y}_t - \mu \|_{TV} \le
C \,\frac{(1+n+x+y)^m}{(1+t)^{k+1}},
\end{equation}
where $\mu^{n,x,y}_t$ is a marginal distribution of the process
$(X_t, \, t\ge 0)$ with the initial data $X=(n,x,y)\in {\mathcal
X}$.
\end{thm}
\begin{rem}
It is plausible that the bound in (\ref{est}) may be improved so that the right hand side does not depend on $y$.
Moreover, given all other constants, the value $C$ in (\ref{est}) may be made ``computable'', with a rather involved but explicit dependence on other constants. Moreover, it is likely that the condition (\ref{eq20}) may be replaced by a weaker one,
\begin{equation}\label{eq21}
\frac{C_0'}{1+t} \le \lambda_0(t) \le \sup_{t\ge 0}\lambda_0(t) <\infty,
\end{equation}
along with the assumption that $C'_0$ is large enough.
However, all these issues require a bit more accuracy in the calculus and we do not pursue these goals here, leaving them for further publications with complete technical details.
\end{rem}
\section{Lemmata}
Recall \cite{Dynkin} that the generator of a Markov process $(X_t,
\, t\ge 0)$ is an operator \( {\mathcal G}, \) such that for a
sufficiently large class of functions $f$
\begin{equation}\label{dynkin2}
\sup_X \lim_{t\to 0} \left\|\frac{E_Xf(X_t) - f(X)}{t} -
{\mathcal G}f(X)\right\| = 0
\end{equation}
in the norm of the corresponding space of functions on the state space of the process; the notion of
generator does depend on this norm.
An
operator ${\mathcal G}$ is called a {\em mild generalised
generator} (another name is extended generator) if (\ref{dynkin2}) is replaced by its corollary
(\ref{dynkin1}) below called {\em Dynkin's formula},
or {\em Dynkin's identity} \cite[Ch. 1, \S 3]{Dynkin},
\begin{equation}\label{dynkin1}
E_Xf(X_t) - f(X) = E_X\int_0^t {\mathcal G}f(X_s)\,ds,
\end{equation}
also for a wide enough class of functions $f$.
We will also use the
non-homogeneous counterpart of Dynkin's formula,
\begin{equation}\label{dynkin_t}
E_X\varphi(t,X_t) - \varphi(0,X) = E_X\int_0^t
\left(\frac{\partial}{\partial s}\varphi(s, X_s) + {\mathcal
G}\varphi(s,X_s)\right)\,ds,
\end{equation}
for appropriate functions of two variables $\varphi(t,X)$. Both (\ref{dynkin1}) and (\ref{dynkin_t}) play
a very important role in analysis of Markov models and under our assumptions may be justified similarly to \cite{Ver_Z-2014}. Here
$X$ is a (non-random)
initial value of the process.
Both formulae (\ref{dynkin1})--(\ref{dynkin_t}) hold true for a
large class of functions $f$, $\varphi$ with ${\mathcal G}$
given by the standard expression,
\begin{eqnarray*}
{\mathcal G}f(X) := \frac{\partial}{\partial x}f(X)1(n(X)>0) + \frac{\partial}{\partial y}f(X)
\\\\
+ \lambda(X)(f(X^+) - f(X))
+ h(X) (f(X^-) - f(X)),
\end{eqnarray*}
where for any $X=(n,x,y)$,
\[
X^+ := (n+1,x,0), \quad X^-:= ((n-1)\vee 0,0,y)
\]
(here $a\vee b = \max(a,b)$).
Under our
minimal assumptions on regularity of intensities this may be justified similarly to \cite{Ver_Z-2014}.
\begin{lem}\label{thm1}
If the functions $\lambda$ and $h$ are Borel measurable and
bounded, then the formulae (\ref{dynkin1}) and (\ref{dynkin_t})
hold true for any $t>0$ for every $f\in C^1_b({\mathcal X})$ and
$\varphi\in C^1_b([0,\infty)\times {\mathcal X})$, respectively.
Moreover, the process $(X_t, \, t\ge 0)$ is strong Markov with
respect to the filtration \(({\mathcal F}^X_t, \, t\ge 0)\).
\end{lem}
~
\noindent
Further, let
\begin{equation}\label{LL}
L_m(X) = (n+1+x +y)^m, \quad L_{k,m}(t,X) = (1+t)^k L_{m}(X).
\end{equation}
The extensions of Dynkin's formulae for some unbounded functions hold true: we will need them for the Lyapunov functions in (\ref{LL}).
\begin{cor}\label{cor1}
Under the assumptions of the Lemma \ref{thm1},
\begin{eqnarray}\label{M2}
L_{m}(X_t) - L_{m}(X) = \int_0^t
\left[\phantom{\!\!\!\frac{}{} } \lambda(X_s)\left(L_{m}(X^{(+)}_s) -
L_{m}(X_s)\right) \right.
\hspace{2cm}
\nonumber \\ \\ \nonumber
\left. + h(X_s) \left(L_{m}(X^{-}_s) - L_{m} (X_s)\right)
+1(n(X_s)>0) \frac{\partial}{\partial x}L_{m}(X_s)
+\frac{\partial}{\partial y}L_{m}(X_s)\right]\,ds +M_t,
\end{eqnarray}
with some martingale $M_t$, and also
\begin{eqnarray}\label{M2t}
L_{k,m}(t,X_t) - L_{k,m}(0,X) = \int_0^t \left[\lambda(X_s)
\left(L_{k,m}(s,X^{(+)}_s) - L_{k,m}(s,X_s)\right) \right.
\hspace{2cm}
\nonumber \\ \\ \nonumber \left.
+ h(X_s) \left(L_{k,m}(s,X^{-}_s) - L_{k,m}(s,X_s)\right) +
\left(1(n(X_s)>0)\frac{\partial}{\partial x} + \frac{\partial}{\partial y}+ \frac{\partial}{\partial
s}\right) L_{k,m}(s,X_s)\right]\,ds +\tilde M_t,
\end{eqnarray}
with some martingale
$\tilde M_t$.
\end{cor}
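For orientation, take $m=1$, so that $L_1(X)=n+1+x+y$, and a state with $n(X_s)>0$; the integrand in (\ref{M2}) then equals
\[
\lambda(X_s)\,(1-y_s) + h(X_s)\,(-1-x_s) + 2 \;\le\; \Lambda + 2 - C_0
\]
under the assumption (\ref{eq2}). Hence the drift of $L_1$ is negative as long as $n(X_s)>0$, provided $C_0>\Lambda+2$; this elementary bound only illustrates the role of the constant $C_0$ in Theorem \ref{thm2}.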
For a martingale approach in queueing models see, for example, \cite{LSh}. The proof of Lemma \ref{thm1} is based on the next three lemmata.
The first of them is a rigorous statement concerning a
well-known folklore property that the probability of ``one event'' on a small
nonrandom interval of length $\Delta$ is of the order $O(\Delta)$
and the probability of ``two or more events'' on the same interval is
of the order $O(\Delta^2)$. Of course, this is common knowledge in queueing theory, yet for
discontinuous intensities it has to be, at least, explicitly stated.
\begin{lem}\label{lem1}
Under the assumptions of Lemma \ref{thm1}, for any $t\ge 0$,
\begin{equation}\label{z0}
P_{X_{t}}(\mbox{no jumps on $(t, t+\Delta]$}) =
\exp(-\int_0^\Delta (\lambda+h)(X_t+s)\,ds) \quad (= 1 +
O(\Delta)),
\end{equation}
\begin{equation}\label{z1}
P_{X_{t}}(\mbox{at least one jump on $(t, t+\Delta]$}) =
O(\Delta),
\end{equation}
\begin{equation}\label{z1up}
P_{X_{t}}(\mbox{exactly one jump up \& no down on $(t,
t+\Delta]$}) = \int_0^\Delta \lambda(X_t+s)\,ds + O(\Delta^2),
\end{equation}
\begin{equation}\label{z1down}
P_{X_{t}}(\mbox{exactly one jump down \& no up on $(t,
t+\Delta]$}) = \int_0^\Delta h(X_t+s)\,ds + O(\Delta^2),
\end{equation}
and
\begin{equation}\label{z2}
P_{X_{t}}(\mbox{at least two jumps on $(t, t+\Delta]$}) =
O(\Delta^2).
\end{equation}
In all cases above, $O(\Delta)$ and $O(\Delta^2)$ are uniform
with respect to $X_{t}$ and only depend on the norm
$\sup_{X}(\lambda(X)+h(X))$, that is, there exist $C>0, \,
\Delta_0>0$ such that for any $X$ and any $\Delta<\Delta_0$,
\begin{eqnarray}\label{z_uni}
\limsup_{\Delta\to 0} \left\{\Delta^{-1}P_{X}(\mbox{at least one
jump on $(0,\Delta]$})\right.
+ \Delta^{-2} P_{X}(\mbox{at least two jumps on $(0, \Delta]$})
\nonumber \\ \nonumber \\ \nonumber
+ \Delta^{-2}\left[P_{X_{t}}(\mbox{one jump up \& no down on $(t,
t+\Delta]$}) - \int_0^\Delta \lambda(X_t+s)\,ds \right]
\nonumber \\ \\ \nonumber
\left.+ \Delta^{-2}\left[P_{X_{t}}(\mbox{one jump down \& no up on
$(t, t+\Delta]$}) - \int_0^\Delta h(X_t+s)\,ds \right]\right\}
<C<\infty.
\end{eqnarray}
\end{lem}
The next two Lemmata are needed for the justification that the process with discontinuous intensities is, indeed, strong Markov.
\begin{lem}\label{Le2}
Under the assumptions of Lemma \ref{thm1}, the semigroup
\linebreak $T_tf(X) = E_Xf(X_t)$ is continuous in $t$.
\end{lem}
\begin{lem}\label{lem4}
Under the assumptions of Lemma \ref{thm1}, the process
$(X_t,\,t\ge 0)$ is Feller, that is, $T_tf(\cdot)\in C_b({\mathcal
X})$ for any $f\in C_b({\mathcal X})$.
\end{lem}
\noindent
The proofs of all Lemmata may be performed similarly to \cite{Ver_Z-2014}.
\section{Sketch of Proof of Theorem \ref{thm2}}
The proof of convergence in total variation with rate of convergence repeats the calculus in \cite{Ve2013_ait} based on the Lyapunov functions \(L_{m}(X)\) and \(L_{k,m}(t,X)\) from (\ref{LL}),
and on Dynkin's formulae (\ref{dynkin1}) and (\ref{dynkin_t}) due
to the Corollary \ref{cor1}. Without big changes, this calculus provides a polynomial moment bound
\begin{equation}\label{pb}
E_X \tau_0^k \le C L_{m}(X) \le C (n+1 + x + y)^m,
\end{equation}
for certain values of $k$ and for the hitting time
\[
\tau_0:= \inf(t\ge 0: \; n_t = 0).
\]
Namely, once the process attains the set $\{n=0\}$, it may be successfully coupled with another (stationary) version of the same process at their joint jump $\{n=0\} \, \mapsto \{n=1\}$. This is because, in particular, immediately after such a jump the state of each process reads as $(1, 0, 0)$; in other words, this is a regeneration state.
The only news is the wider class of
intensities, which may be all variable (as well as discontinuous) including $\lambda_0$; however, this affects the calculus only a little, once it is established that (\ref{dynkin1}) and
(\ref{dynkin_t}) hold true, because this calculus involves only time values $t<\tau_0$. (Some change will be in the procedure of coupling, though.) In turn, the inequality (\ref{pb}) provides a bound for the rate of convergence, for the justification of which there are various approaches such as versions of coupling as well as renewal theory. Convergence of the probabilities in the definition of the $m$-availability factors is a special case of the more general convergence in total variation. We drop further details, which will be specified in a further publication.
\end{document}
|
\begin{document}
\markboth{M. Manev and V. Tavkova}
{Matrix Lie groups as 3-dimensional almost paracontact almost paracomplex Riemannian manifolds}
% Aliases for command names used in the text.
\newcommand{\f}{\phi}
\newcommand{\alphalowbreak}{\allowbreak}
\newcommand{\nablaabla}{\nabla}
\newcommand{\nablaeq}{\neq}
\newcommand{\nablau}{\nu}
\newcommand{\phirac}{\frac}
\newcommand{\phiootnotesize}{\footnotesize}
\newcommand{\cetantering}{\centering}
\newcommand{\thmref}[1]{Theorem~\ref{#1}}
\newcommand{\lemref}[1]{Lemma~\ref{#1}}
\newcommand{\cororref}[1]{Corollary~\ref{#1}}
\newcommand{\propref}[1]{Proposition~\ref{#1}}
\newcommand{\thetablref}[1]{Table~\ref{#1}}
\newcommand{\ddr}{\tfrac{\mathrm{d}}{\mathrm{d} r}}
\title[Matrix Lie Groups As 3-Dimensional Almost Paracontact \ldots]
{Matrix Lie Groups as 3-Dimensional Almost Paracontact Almost Paracomplex Riemannian Manifolds}
\author[M. Manev, V. Tavkova]{Mancho Manev and Veselina Tavkova}
\begin{abstract}
Lie groups considered as three-dimensional almost paracontact almost paracomplex Riemannian manifolds are investigated.
In each basic class of the classification used for the manifolds under consideration, a correspondence is established between the Lie algebra and the explicit matrix representation of its Lie group.
\end{abstract}
\subjclass[2010]{
53C15,
22E60,
22E15
}
\keywords{Almost paracontact structure, almost paracomplex structure, Riemannian metric, Lie group, Lie algebra}
\maketitle
\section{Introduction}\label{sec-intro}
\vglue-10pt
\indent
In the present paper, we continue the investigations of almost paracontact almost paracomplex Riemannian manifolds.
In \cite{Sato76}, I. Sato introduced the concept of an (almost) paracontact structure compatible with a Riemannian metric as an analogue of an almost contact Riemannian manifold. After that, a number of authors developed the
differential geometry of these manifolds. The investigations of paracontact Riemannian manifolds began with \cite{AdatMiya77}, \cite{Sa80}, \cite{Sato77} and \cite{Sato78}.
In \cite{ManSta01}, a classification of almost paracontact Riemannian manifolds of type $(n,n)$ is made,
taking into account the relevant notion given by Sasaki in \cite{Sa80}.
They are $(2n+1)$-dimensional and the induced almost product structure
on the paracontact distribution is traceless, i.e.\ it is an almost paracomplex structure.
In \cite{ManVes}, these manifolds are called \emph{almost paracontact almost paracomplex manifolds}.
In a series of papers, e.g. \cite{AbbGarb, Barb, GriManMek, DobrMek, FCG, FinoGran, VMMolina, HMan, HMan2, Ovando, ZamNak}, the authors consider Lie groups as manifolds equipped with different additional tensor structures and metrics compatible with them.
Furthermore, in our previous work \cite{ManVes2}, we construct and characterize a
family of 3-dimensional Lie algebras corresponding to Lie groups considered as almost paracontact almost paracomplex
Riemannian manifolds. Curvature properties of these manifolds are studied.
It is known from \cite{Gil} that each representation of a Lie algebra corresponds uniquely to a representation of a simply connected Lie group, and this correspondence is one-to-one.
Hence, knowledge of the representation of a certain Lie algebra settles the issue of the representation of its Lie group.
In the present work, our goal is to find a correspondence between the Lie algebras constructed
in \cite{ManVes2} and explicit matrix representations of their Lie groups for each of the basic classes of the classification used for the manifolds under study.
The paper is organized as follows.
In Sect.~\ref{sect-mfds}, we recall some necessary facts about the investigated manifolds and related Lie algebras.
In Sect.~\ref{sect-lie}, we find the explicit correspondence between the Lie algebras determined in all basic classes of the manifolds studied and respective matrix Lie groups.
\section{Preliminaries}\label{sect-mfds}
\vglue-10pt
\indent
\subsection{Almost paracontact almost paracomplex Riemannian manifolds}
\vglue-10pt
\indent
Let $(\mathcal{M},\phi,\xi,\eta,g)$ be an almost paracontact almost paracomplex Riemannian manifold.
This means that $\mathcal{M}$ is a $(2n+1)$-dimensional real differentiable manifold equipped with an almost paracontact almost paracomplex structure $(\phi,\xi,\eta)$, i.e.\ $\phi$ is a fundamental $(1,1)$-tensor field of the tangent bundle $T\mathcal{M}$ of $\mathcal{M}$, $\xi$ is a characteristic vector field and $\eta$ is its dual 1-form satisfying the following conditions:
\begin{equation*}\label{str}
\begin{array}{c}
\phi^2 = \mathcal{I} - \eta \otimes \xi,\quad \eta(\xi)=1,\quad
\eta\circ\phi=0,\quad \phi\xi = 0,\quad {\rm tr}\phi = 0,
\end{array}
\end{equation*}
where $\mathcal{I}$ denotes the identity on $T\mathcal{M}$.
Moreover, $g$ is a Riemannian metric
that is compatible with the structure of the manifold so that the following condition is fulfilled
\begin{equation*}\label{str2}
\begin{array}{c}
g(\phi x, \phi y) = g(x,y) - \eta(x)\eta(y)
\end{array}
\end{equation*}
for arbitrary $x,y \in T\mathcal{M}$ \cite{Sato76}, \cite{ManSta01}.
Further $x$, $y$, $z$, $w$ will stand for arbitrary
elements of the Lie algebra $\mathfrak X(\mathcal{M})$ of tangent vector fields on $\mathcal{M}$ or vectors in the tangent space $T_p\mathcal{M}$ at $p\in \mathcal{M}$.
Let us recall that an almost paracomplex structure is a traceless almost product structure $P$, i.e.\
$P^2=\mathcal{I}$, $P\nablaeq \pm \mathcal{I}$ and ${\rm tr} P=0$.
Because of ${\rm tr} P=0$, the eigenvalues $+1$ and $-1$ of $P$ have one and the same multiplicity $n$.
Let $\nabla$ be the Levi-Civita connection generated by $g$. The tensor field $F$ of type (0,3) on $\mathcal{M}$ is defined by
\begin{equation*}\label{F=nfi}
F(x,y,z)=g\bigl( \left( \nablaabla_x \phi \right)y,z\bigr).
\end{equation*}
The following equalities define 1-forms associated with $F$, known as the Lee forms of $\mathcal{M}$:
\begin{equation*}\label{t}
\theta(z)=g^{ij}F(e_i,e_j,z),\quad \theta^*(z)=g^{ij}F(e_i,\phi
e_j,z), \quad \omegaega(z)=F(\xi,\xi,z),
\end{equation*}
where $g^{ij}$ are the components of the inverse matrix of $g$ with respect to a basis $\left\{e_i;\xi\right\}$ $(i=1,2,\dots,2n)$ of $T_p\mathcal{M}$ at an arbitrary point $p\in \mathcal{M}$.
A classification of almost paracontact almost paracomplex Riemannian manifolds is given in \cite{ManSta01}.
It consists of eleven basic classes $\mathcal{F}_{s}$, $s\in\{1,2,\dots, 11\}$, and each of them is defined by conditions for $F$.
In \cite{ManVes}, we determine the components $F_{s}$ of $F$ that correspond to each $\mathcal{F}_{s}$. In other words, the manifold $(\mathcal{M},\alphalowbreak{}\f,\alphalowbreak{}\xi,\alphalowbreak{}\eta,g)$ belongs to $\mathcal{F}_{s}$
if and only if the equality $F=F_s$ is satisfied.
The intersection of the basic classes is the special class $\mathcal{F}_0$
defined by the condition $F=0$, which is equivalent to the covariant constancy of the structure tensors with respect to $\nabla$,
i.e.\ $\nabla\phi=\nabla\xi=\nabla\eta=\nabla g=0$.
Let us consider the studied manifold of the lowest dimension, i.e.\ $\dim{\mathcal{M}}=3$.
Let $\left\{e_0,e_1,e_2\right\}$, where $e_0=\xi,e_1=\phi e_2,e_2=\phi e_1$, be a \emph{$\phi$-basis} of $T_p\mathcal{M}$. Thus, it is an orthonormal basis with respect to $g$, i.e.\ $g(e_i,e_j)=\delta_{ij}$ for all $i,j\in\{0,1,2\}$.
In \cite{ManVes}, we determine the components ${F_{ijk}=F(e_i,e_j,e_k)}$, ${\theta_k=\theta(e_k)}$, ${\theta^*_k=\theta^*(e_k)}$ and ${\omega_k=\omega(e_k)}$ of $F$, $\theta$, $\theta^*$ and $\omega$, respectively, with respect to $\left\{e_0,e_1,e_2\right\}$ as follows:
\begin{equation*}\label{t3}
\begin{array}{c}
\begin{array}{ll}
\theta_0=F_{110}+F_{220},\quad & \theta_1=F_{111}=-F_{122}=-\theta^*_2,\\[4pt]
\theta^*_0=F_{120}+F_{210}, \quad &\theta_2=F_{222}=-F_{211}=-\theta^*_1,\\[4pt]
\end{array}\\
\begin{array}{lll}
\omega_0=0, \qquad & \omega_1=F_{001},\qquad & \omega_2=F_{002}.
\end{array}
\end{array}
\end{equation*}
Let $x=x^ie_i$, $y=y^ie_i$, $z=z^ie_i$ be arbitrary vectors in $T_p\mathcal{M}$, $p\in \mathcal{M}$, decomposed with respect to the $\phi$-basis.
Then, the components $F_s$, $s\in\{1,2,\dots,11\}$, of $F$ on $(\mathcal{M},\alphalowbreak{}\f,\alphalowbreak{}\xi,\alphalowbreak{}\eta,g)\in\mathcal{F}_s$ have the following form: \cite{ManVes}
\begin{equation}\label{Fi3}
\begin{array}{l}
F_{1}(x,y,z)=\left(x^1\theta_1-x^2\theta_2\right)\left(y^1z^1-y^2z^2\right); \\[4pt]
F_{2}(x,y,z)=F_{3}(x,y,z)=0;
\\
F_{4}(x,y,z)=\phirac{\theta_0}{2}\Bigl\{x^1\left(y^0z^1+y^1z^0\right)
+x^2\left(y^0z^2+y^2z^0\right)\bigr\};\\[4pt]
F_{5}(x,y,z)=\phirac{\theta^*_0}{2}\bigl\{x^1\left(y^0z^2+y^2z^0\right)
+x^2\left(y^0z^1+y^1z^0\right)\bigr\};\\[4pt]
F_{6}(x,y,z)=F_{7}(x,y,z)=0;\\[4pt]
F_{8}(x,y,z)=\lambda\bigl\{x^1\left(y^0z^1+y^1z^0\right)
-x^2\left(y^0z^2+y^2z^0\right)\bigr\},\\[4pt]
\hspace{38pt} \lambda=F_{110}=-F_{220}
;\\[4pt]
F_{9}(x,y,z)=\mu\bigl\{x^1\left(y^0z^2+y^2z^0\right)
-x^2\left(y^0z^1+y^1z^0\right)\bigr\},\\[4pt]
\hspace{38pt} \mu=F_{120}=-F_{210}
;\\[4pt]
F_{10}(x,y,z)=\nablau x^0\left(y^1z^1-y^2z^2\right),\quad
\nablau=F_{011}=-F_{022}
;\\[4pt]
F_{11}(x,y,z)=x^0\bigl\{\omega_{1}\left(y^0z^1+y^1z^0\right)
+\omega_{2}\left(y^0z^2+y^2z^0\right)\bigr\}.
\end{array}
\end{equation}
Therefore, the basic classes of the 3-dimensional manifolds of the investigated type are
$\mathcal{F}_1$, $ \mathcal{F}_4$, $\mathcal{F}_5$, $\mathcal{F}_8$, $\mathcal{F}_9$, $\mathcal{F}_{10}$, $\mathcal{F}_{11}$, while $\mathcal{F}_2$, $\mathcal{F}_3$, $\mathcal{F}_6$, $\mathcal{F}_7$ are restricted to $\mathcal{F}_0$ \cite{ManVes}.
\indent
\subsection{The Lie algebras corresponding to Lie groups as almost paracontact almost paracomplex Riemannian manifolds}
\vglue-10pt
\indent
In this subsection we recall the necessary results obtained in \cite{ManVes2}.
Let $\mathcal{L}$ be a 3-dimensional real connected Lie group and let
$\mathfrak{l}$ be its Lie algebra with a basis
$\{E_{0},E_{1},E_{2}\}$ of left invariant vector fields. An almost paracontact almost paracomplex structure $(\phi,\xi,\eta)$ and a Riemannian metric $g$ are defined as follows:
\begin{equation*}\label{strL}
\begin{array}{c}
\phi E_0=0,\quad \phi E_1=E_{2},\quad \phi E_{2}= E_1,\quad \xi=
E_0,\quad \\[4pt]
\eta(E_0)=1,\quad \eta(E_1)=\eta(E_{2})=0,
\end{array}
\end{equation*}
\begin{equation*}\label{gL}
g(E_i,E_j)=\delta_{ij},\qquad i,j\in\{0,1,2\}.
\end{equation*}
The resulting manifold $(\mathcal{L},\phi,\xi,\eta,g)$ is found to be a 3-dimensional almost paracontact almost paracomplex Riemannian manifold.
The corresponding Lie algebra $\mathfrak{l}$
is determined as follows
\begin{equation*}\label{lie}
\left[E_{i},E_{j}\right]=C_{ij}^k E_{k}, \quad i, j, k \in \{0,1,2\},
\end{equation*}
where $C_{ij}^k$ are the commutation coefficients.
\begin{thm}[\cite{ManVes2}]\label{thm-Fi-L}
The manifold $(\mathcal{L},\phi,\xi,\eta,g)$ belongs to the basic class $\mathcal{F}_s$
($s \in \{1,\alphalowbreak{}4,5,8,9,10,11\}$) if and only
if the corresponding Lie algebra $\mathfrak{l}$ is determined by
the following commutators:
\begin{equation*}\label{Fi-L}
\begin{array}{llll}
\mathcal{F}_1:\; &[E_0,E_1]=0, \; & [E_0,E_2]=0, \; & [E_1,E_2]=\alpha E_1-\beta E_2;
\\[4pt]
\mathcal{F}_4:\; &[E_0,E_1]=\alpha E_2, \; & [E_0,E_2]=\alpha E_1, \; &
[E_1,E_2]=0;
\\[4pt]
\mathcal{F}_5:\; &[E_0,E_1]=\alpha E_1, \; & [E_0,E_2]=\alpha E_2, \; &
[E_1,E_2]=0;
\\[4pt]
\mathcal{F}_8:\; &[E_0,E_1]=\alpha E_2, \; & [E_0,E_2]=-\alpha E_1, \; &
[E_1,E_2]=2\alpha E_0;
\\[4pt]
\mathcal{F}_9:\; &[E_0,E_1]=\alpha E_1, \; & [E_0,E_2]=-\alpha E_2, \; &
[E_1,E_2]=0;
\\[4pt]
\mathcal{F}_{10}:\; &[E_0,E_1]=-\alpha E_2, \; & [E_0,E_2]=\alpha E_1, \; &
[E_1,E_2]=0;
\\[4pt]
\mathcal{F}_{11}:\; &[E_0,E_1]=\alpha E_0, \; & [E_0,E_2]=\beta E_0, \; &
[E_1,E_2]=0,
\end{array}
\end{equation*}
where $\alpha$, $\beta$ are arbitrary real parameters.
Moreover, the
relations of $\alpha$ and $\beta$ with the non-zero components
$F_{ijk}$ in the different basic classes $\mathcal{F}_s$ from \eqref{Fi3} are the following:
\begin{equation*}\label{Fi-L-alpha}
\begin{array}{rlrl}
\mathcal{F}_1:& \alpha=\phirac12 \theta_1, \; \beta=-\phirac12 \theta_2; \qquad
&\mathcal{F}_4:& \alpha=\phirac12 \theta_0;\\[4pt]
\mathcal{F}_5:& \alpha=\phirac12 \theta^*_0; \qquad
&\mathcal{F}_8:& \alpha=\lambda; \\[4pt]
\mathcal{F}_9:& \alpha=\mu; \qquad
&\mathcal{F}_{10}:& \alpha=\phirac12 \nablau; \\[4pt]
\mathcal{F}_{11}:& \alpha=\omega_2, \; \beta=\omega_1.\qquad & &
\end{array}
\end{equation*}
\end{thm}
Obviously, if $\alpha$ (and $\beta$, if present) vanishes in the corresponding class, then the Lie algebra is Abelian and the manifold belongs to $\mathcal{F}_0$. We further exclude this trivial case from our considerations, i.e.\ we assume that $(\alpha,\beta)\neq(0,0)$.
Recall that the class of the para-Sasakian paracomplex Riemannian
manifolds is $\mathcal{F}'_4$, which is the subclass of $\mathcal{F}_4$ determined by the condition $\theta(\xi)=-2n$
\cite{ManVes}. Then, \thmref{thm-Fi-L} has the following
\begin{cor}[\cite{ManVes2}]\label{cor:para}
The manifold $(\mathcal{L},\phi,\xi,\eta,g)$ is para-Sasakian if and only
if the corresponding Lie algebra $\mathfrak{l}$ is determined by
the following commutators:
\[
[E_0,E_1]=-E_2, \quad [E_0,E_2]=-E_1, \quad [E_1,E_2]=0.
\]
\end{cor}
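As a quick machine check (ours, not part of \cite{ManVes2}), one can verify directly that each commutator table in \thmref{thm-Fi-L} satisfies the Jacobi identity, so that it indeed defines a Lie algebra; the parameter values below are arbitrary.
\begin{verbatim}
import itertools
import numpy as np

# Brute-force Jacobi identity check for the commutator tables of the theorem
# above.  C[i][j] holds the coefficient vector of [E_i, E_j] in the basis
# (E_0, E_1, E_2); sample values are used for the parameters alpha, beta.

def jacobi_ok(C, tol=1e-12):
    def bracket(u, v):
        w = np.zeros(3)
        for i, j in itertools.product(range(3), repeat=2):
            w += u[i] * v[j] * C[i][j]
        return w
    for x, y, z in itertools.permutations(np.eye(3), 3):
        s = bracket(x, bracket(y, z)) + bracket(y, bracket(z, x)) \
            + bracket(z, bracket(x, y))
        if np.linalg.norm(s) > tol:
            return False
    return True

al, be = 1.3, -0.7
E0, E1, E2 = np.eye(3)
classes = {
    "F1":  {(1, 2): al * E1 - be * E2},
    "F4":  {(0, 1): al * E2, (0, 2): al * E1},
    "F5":  {(0, 1): al * E1, (0, 2): al * E2},
    "F8":  {(0, 1): al * E2, (0, 2): -al * E1, (1, 2): 2 * al * E0},
    "F9":  {(0, 1): al * E1, (0, 2): -al * E2},
    "F10": {(0, 1): -al * E2, (0, 2): al * E1},
    "F11": {(0, 1): al * E0, (0, 2): be * E0},
}
for name, table in classes.items():
    C = [[np.zeros(3) for _ in range(3)] for _ in range(3)]
    for (i, j), v in table.items():
        C[i][j], C[j][i] = v, -v
    print(name, jacobi_ok(C))          # expected: True for every class
\end{verbatim}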
\section{Matrix representation of the 3-dimensional Lie groups equipped with the structure studied}\label{sect-lie}
\vglue-10pt
\indent
Let $(\mathcal{L}, \phi, \xi, \eta, g)$ be a 3-dimensional almost paracontact almost paracomplex Riemannian manifold, where $\mathcal{L}$ is a Lie group with associated Lie algebra $\mathfrak{g}$.
In Theorem \ref{thm-Fi-L}, we determined the Lie algebra by its commutators in the case when the manifold belongs to the class $\mathcal{F}_s$ ($s \in \{1,4,5,8,9,10,11\}$).
In the following theorem, which is the main theorem of the present work, we obtain an explicit matrix representation of a Lie group $\mathcal{G}$ isomorphic to the given Lie group $\mathcal{L}$, with the same Lie algebra $\mathfrak{g}$, when $(\mathcal{L}, \phi, \xi, \eta, g)$ belongs to each of the classes $\mathcal{F}_s$.
\begin{thm}\label{thm:main}
Let $(\mathcal{L},\phi,\xi,\eta,g)$ be an almost paracontact almost paracomplex Riemannian manifold belonging to the basic class $\mathcal{F}_s$
($s \in \{1,4,5,8,9,10,11\}$). Then the simply connected Lie group $\mathcal{G}$ isomorphic to $\mathcal{L}$, both with one and the same Lie algebra $\mathfrak{g}$, has the following matrix representation
\begin{equation}\label{eA}
e^A=E+tA+uA^2,
\end{equation}
where $E$ is the identity matrix, $A$ is the matrix representation of the corresponding Lie algebra and $t$, $u$ are real parameters. The matrix form of $A$ as well as the expressions of $t$ and $u$ for each of $\mathcal{F}_s$ are given in Table~\ref{T1}, where $a,b,c$ are arbitrary reals and $\alpha$, $\beta$ are introduced in Theorem~\ref{thm-Fi-L}.
\end{thm}
\begin{proof}
As it is known from \cite{Gil}, the commutation coefficients provide a matrix representation $A$ of a Lie algebra. Then, the matrix representation of $\mathfrak{g}$ is the following
\begin{equation}\label{A}
A=aM_0+bM_1+cM_2, \quad a, b, c \in \mathbb R,
\end{equation}
where the basic matrices $M_i$ have entries determined by the commutation coefficients of $\mathfrak{g}$ as follows
\begin{equation}\label{Mij}
(M_i)_j^k=-C_{ij}^k, \quad i, j, k \in \{0,1,2\}.
\end{equation}
\begin{table}
\caption{The matrix form of $A$ and the expressions of $t$ and $u$ for $\mathcal{F}_s$
}\label{T1}
\cetantering
{
\phiootnotesize{
\begin{tabular}{|l|l|l|}
\hline
$
\mathcal{F}_1:$ & $
A=\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & \alpha c & {-}\beta c \\
0 & -\alpha b & \beta b
\end{array}
\right)\quad $ &
$ t=
\left\{
\begin{array}{ll}
\phirac{e^{{\rm tr}{A}}-1}{{\rm tr}{A}}, & {\rm tr}{A}\nablaeq 0 \\
1, & {\rm tr}{A}= 0
\end{array}
\right.$\\
& $
{\rm tr}{A}=\alpha c + \beta b $
&
$u=0
$
\\
\hline
$
\mathcal{F}_4:$&$
A=\left(
\begin{array}{ccc}
0 & \alpha c & \alpha b \\
0 & 0 & -\alpha a \\
0 & -\alpha a & 0 \\
\end{array}
\right)\quad $
&
$ t=
\left\{
\begin{array}{ll}
\phirac{\sinh{\sqrt{\phirac12 {\rm tr}{A^2}}}}{\sqrt{\phirac12 {\rm tr}{A^2}}}, & {\rm tr}{A^2}>0 \\
1, & {\rm tr}{A^2}= 0
\end{array}
\right.$\\
& $
{\rm tr}{A^2}=2\alpha^2 a^2 $ &
$ u=
\left\{
\begin{array}{ll}
\phirac{\cosh{\sqrt{\phirac12 {\rm tr}{A^2}}}-1}{\phirac12 {\rm tr}{A^2}}, & {\rm tr}{A^2}> 0 \\
0, & {\rm tr}{A^2}= 0
\end{array}
\right.
$
\\
\hline
$
\mathcal{F}_5:$&$
A=\left(
\begin{array}{ccc}
0 & \alpha b & \alpha c \\
0 & -\alpha a & 0 \\
0 & 0 & -\alpha a \\
\end{array}
\right)\quad $ & $
t=
\left\{
\begin{array}{ll}
\phirac{e^{\phirac12 {\rm tr}{A}}-1}{\phirac12 {\rm tr}{A}}, & {\rm tr}{A}\nablaeq 0 \\
1, & {\rm tr}{A}= 0
\end{array}
\right.$\\
& $
{\rm tr}{A}=-2\alpha a $ &
$ u=0
$
\\
\hline
$
\mathcal{F}_8:$ & $
A=\left(
\begin{array}{ccc}
0 & -\alpha c & \alpha b \\
2\alpha c & 0 & -\alpha a \\
-2\alpha b & \alpha a & 0 \\
\end{array}
\right)\;
$ &
$ t= \phirac{\sin\sqrt{-\phirac12{\rm tr}{A^2}}}{\sqrt{-\phirac12{\rm tr}{A^2}}}, \quad {\rm tr}{A^2}< 0
$\\
& $
\begin{array}{l}
{\rm tr}{A^2}=-2\alpha^2(a^2+2b^2+2c^2)
\end{array}
$ & $
u=\phirac{\cos{\sqrt{-\phirac12{\rm tr}{A^2}}}-1}{\phirac12{\rm tr}{A^2}}, \quad {\rm tr}{A^2}< 0
$
\\
\hline
$
\mathcal{F}_9:$ & $
A=\left(
\begin{array}{ccc}
0 & \alpha b & -\alpha c \\
0 & -\alpha a & 0 \\
0 & 0 & \alpha a \\
\end{array}
\right)\quad $ & $
t=
\left\{
\begin{array}{ll}
\phirac{\sinh{\sqrt{\phirac12 {\rm tr}{A^2}}}}{\sqrt{\phirac12 {\rm tr}{A^2}}}, & {\rm tr}{A^2}> 0 \\
1, & {\rm tr}{A^2}= 0
\end{array}
\right.$\\
& $
{\rm tr}{A^2}=2\alpha^2 a^2 $ & $
u=
\left\{
\begin{array}{ll}
\phirac{\cosh{\sqrt{\phirac12 {\rm tr}{A^2}}}-1}{\phirac12 {\rm tr}{A^2}}, & {\rm tr}{A^2}> 0 \\
0, & {\rm tr}{A^2}= 0 \\
\end{array}
\right.
$
\\
\hline
$
\mathcal{F}_{10}:$ & $
A=\left(
\begin{array}{ccc}
0 & \alpha c & -\alpha b \\
0 & 0 & -\alpha a \\
0 & -\alpha a & 0 \\
\end{array}
\right)\quad $ & $
t=
\left\{
\begin{array}{ll}
\phirac{\sinh{\sqrt{\phirac12 {\rm tr}{A^2}}}}{\sqrt{\phirac12 {\rm tr}{A^2}}}, & {\rm tr}{A^2}> 0 \\
1, & {\rm tr}{A^2}= 0
\end{array}
\right.$\\
& $
{\rm tr}{A^2}=2\alpha^2 a^2 $ & $
u=
\left\{
\begin{array}{ll}
\phirac{\cosh{\sqrt{\phirac12 {\rm tr}{A^2}}}-1}{\phirac12 {\rm tr}{A^2}}, & {\rm tr}{A^2}> 0 \\
0, & {\rm tr}{A^2}= 0 \\
\end{array}
\right.
$
\\
\hline
$
\mathcal{F}_{11}:$ & $
A=\left(
\begin{array}{ccc}
\alpha b + \beta c & 0 & 0 \\
-\alpha a & 0 & 0 \\
-\beta a & 0 & 0 \\
\end{array}
\right)\quad $ &
$ t=
\left\{
\begin{array}{ll}
\phirac{e^{{\rm tr}{A}}-1}{{\rm tr}{A}}, & {\rm tr}{A}\nablaeq 0 \\
1, & {\rm tr}{A}= 0 \\
\end{array}
\right.$\\
& $
{\rm tr}{A}=\alpha b+\beta c $ & $
u=0
$\\
\hline
\end{tabular}
\/}
}
\end{table}
\textbf{\emph{The class $\mathcal{F}_1$.}}
Firstly, let $(\mathcal{L}, \phi, \xi, \eta, g)$ belong to $\mathcal{F}_1$. In this case, the corresponding Lie algebra $\mathfrak{g}_1$, according to Theorem~\ref{thm-Fi-L}, is determined in the following way:
\begin{equation*}\label{com1}
[E_0,E_1]=[E_0,E_2]=0,\quad [E_1,E_2]=\alpha E_1-\beta E_2,
\end{equation*}
where $\alpha=\phirac12 \theta_1$, $\beta=-\phirac12 \theta_2$.
Therefore,
the nonzero commutation coefficients are:
\begin{equation}\label{Cij}
C_{12}^1=-C_{21}^1=\alpha,\quad C_{12}^2=-C_{21}^2={-\beta}.
\end{equation}
Because of \eqref{Mij} and \eqref{Cij}, we have
\[
M_0=\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{array}
\right),\quad
M_1=\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & -\alpha & {\beta}\\
\end{array}
\right),\quad
M_2=\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & \alpha & {-\beta} \\
0 & 0 & 0 \\
\end{array}
\right).
\]
We assume that $(b,c)\neq (0,0)$ holds, since otherwise $A$ is the zero matrix and $\mathfrak{g}$ is Abelian.
Then, using \eqref{A}, we obtain the matrix representation $A$ of the considered Lie algebra $\mathfrak{g}_1$ given in Table~\ref{T1}. Therefore, the characteristic polynomial of $A$ has the form:
\[
P_A(\lambda)= \lambda^2{(\lambda-\alpha c - \beta b )}
\]
and its eigenvalues $\lambda_i$ ($i={1,2,3}$) are the following:
\[
\lambda_1=\lambda_2=0, \qquad \lambda_3=\alpha c + \beta b.
\]
We then obtain the corresponding linearly independent eigenvectors $p_i$ ($i={1,2,3}$):
\[
p_1=(1,0,0)^{\intercal}, \qquad p_2=(0,\beta,{\alpha})^{\intercal}, \qquad p_3=(0,-c,b)^{\intercal},
\]
using the notation $^\intercal$ for matrix transpose.
The vectors $p_i$ determine the following matrix:
\begin{equation}\label{matrixP-I}
P=\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & \beta & -c \\
0 & {\alpha} & b
\end{array}
\right).
\end{equation}
Using the matrix $A$ for $\mathcal{F}_1$ in Table~\ref{T1} and \eqref{matrixP-I}, we obtain $\delta={\det P}={\rm tr} A$, where we denote $\delta:=\alpha c + \beta b $.
Now, let us consider the first case, when ${\rm tr} A \neq 0$ holds, i.e.\ $\delta \neq 0$ and $\det P \neq 0$. Then, we obtain the inverse matrix of $P$ as follows:
\[
P^{-1}=\frac{1}{\delta}\left(
\begin{array}{ccc}
\delta & 0 & 0 \\
0 & {b} & {c} \\
0 & -\alpha & {\beta}
\end{array}
\right).
\]
We will use the well-known formula
\begin{equation}\label{J}
e^A=Pe^JP^{-1},
\end{equation}
where the Jordan matrix $J$ is the diagonal matrix $J=\mathrm{diag}(\lambda_1, \lambda_2, \lambda_3)$. Therefore, the matrix representation of the corresponding Lie group $\mathcal{G}_1$ of the considered Lie algebra $\mathfrak{g}_1$ in the first case is the following:
\[
\mathcal{G}_1=\left\{
\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1+\alpha c t
& -\beta c t \\
0 & -\alpha b t
& 1 +\beta b t\\
\end{array}
\right)
\left|
\;
t=\frac{e^{\delta}-1}{\delta},\; \delta\neq 0
\right.
\right\}.
\]
This result can be written as
\begin{equation}\label{G-I}
\mathcal{G}_1: \quad
e^A=E+tA, \quad t=\phirac{e^{{\rm tr} A}-1}{{\rm tr} A},\quad {\rm tr} A\nablaeq 0.
\end{equation}
Let us consider the second case, when ${\rm tr} A=0$, i.e.\ $\delta=0$ and $\det P =0$. Then the matrix $P$ is not invertible; on the other hand, all eigenvalues of $A$ vanish, therefore $A$ is nilpotent with some nilpotency index $q$ and $e^A$ can be expressed as follows
\begin{equation*}\label{eAq}
e^A=E+A+\phirac{A^2}{2!}+\phirac{A^3}{3!}+\cdots+\phirac{A^{q-1}}{(q-1)!}.
\end{equation*}
Using the form of $A$ for $\mathcal{F}_1$ in Table~\ref{T1} and $\delta=0$, we obtain that $A^2$ is the zero matrix, i.e.\ $q=2$. Therefore, in this case we get the matrix representation of the Lie group $\mathcal{G}_1$ for $\mathfrak{g}_1$ in the following way:
\[
\mathcal{G}_1=\left\{
\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1+\alpha c
& -\beta c \\
0 & -\alpha b
& 1 + \beta b \\
\end{array}
\right)
\Bigl|
\;
\delta=0
\right\},
\]
which can be written as
\begin{equation}\label{G-II}
\mathcal{G}_1:\quad
e^A=E+A, \quad
{\rm tr} A= 0
.
\end{equation}
Combining \eqref{G-I} and \eqref{G-II}, we get the matrix representation \eqref{eA} of the matrix Lie group $\mathcal{G}_1$, where $A$, $t$ and $u$ are given in Table~\ref{T1} for $(\mathcal{L}, \phi, \xi, \eta, g)\in \mathcal{F}_1$.
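The closed form just obtained can be spot-checked numerically. The following sketch is our illustration only (the parameter values are arbitrary): it verifies that $A^2=({\rm tr} A)\,A$ for the matrix $A$ of the class $\mathcal{F}_1$ in Table~\ref{T1} and compares $E+tA$ with the matrix exponential computed by SciPy.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Spot check of the F_1 entry of Table 1: A^2 = (tr A) A, hence
# e^A = E + t A with t = (e^{tr A} - 1)/tr A  (t = 1 when tr A = 0).
# The parameter values below are arbitrary.
alpha, beta, b, c = 0.7, -1.2, 0.4, 2.0

A = np.array([[0.0,        0.0,       0.0],
              [0.0,  alpha * c, -beta * c],
              [0.0, -alpha * b,  beta * b]])

tr = np.trace(A)
t = (np.exp(tr) - 1.0) / tr if abs(tr) > 1e-12 else 1.0

print(np.allclose(A @ A, tr * A))                 # A^2 = (tr A) A
print(np.allclose(expm(A), np.eye(3) + t * A))    # e^A = E + t A
\end{verbatim}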
\textbf{\emph{The classes $\mathcal{F}_5$ and $\mathcal{F}_{11}$.}}
When we consider the cases of $\mathcal{F}_5$ and $\mathcal{F}_{11}$, we notice that ${\rm tr} A$ can be non-zero there, just as for $\mathcal{F}_1$. The results in Table~\ref{T1} for these two classes are obtained in the same way as for $\mathcal{F}_1$.
\textbf{\emph{The class $\mathcal{F}_4$.}}
Now, let us consider $(\mathcal{L}, \phi, \xi, \eta, g)\in\mathcal{F}_4$. According to Theorem~\ref{thm-Fi-L}, the corresponding Lie algebra $\mathfrak{g}_4$ is determined in the following way
\begin{equation}\label{C4}
[E_0,E_1]=\alpha E_2,\quad [E_0,E_2]=\alpha E_1,\quad [E_1,E_2]=0,
\end{equation}
where $\alpha=\frac12 \theta_0$.
Bearing in mind \eqref{C4}, the non-zero commutation coefficients are:
\begin{equation}\label{Cij4}
C_{01}^2=-C_{10}^2= C_{02}^1=-C_{20}^1=\alpha.
\end{equation}
By virtue of \eqref{A}, \eqref{Mij} and \eqref{Cij4}, we obtain the matrix representation $A$ of $\mathfrak{g}_4$ as is given in Table~\ref{T1}. Obviously, we have ${\rm tr} A=0$.
We determine the matrix $P$ as in the case of $\mathcal{F}_1$ and obtain
\begin{equation*}\label{matrixP-IV}
P=\left(
\begin{array}{ccc}
1 & -b-c & -b+c \\
0 & a & a \\
0 & a & -a
\end{array}
\right)
\end{equation*}
for $\lambda_1=0$, $\lambda_2=-\alpha a$, $\lambda_3=\alpha a$, i.e.\ $J=\mathrm{diag}\{0,-\alpha a,\alpha a\}$.
Therefore, we have $\det P=-2a^2$. Using the form of $A$ in Table~\ref{T1} for $\mathcal{F}_4$ and ${\rm tr} A^2 = 2\alpha^2 a^2$,
we notice that $P$ is invertible or not depending on whether ${\rm tr} A^2\neq 0$ or ${\rm tr} A^2= 0$, respectively.
First, when ${\rm tr} A^2$ is non-zero, i.e.\ ${\rm tr} A^2>0$ is satisfied, we obtain the inverse matrix of $P$ as follows:
\[
P^{-1}=\frac{1}{2a}\left(
\begin{array}{ccc}
2a & 2b & 2c \\
0 & 1 & 1 \\
0 & 1 & -1
\end{array}
\right).
\]
Then, applying \eqref{J}, we obtain the following matrix representation of the Lie group $\mathcal{G}_4$:
\[
\mathcal{G}_4=\left\{
\left(
\begin{array}{ccc}
1 & \frac{b}{a}(1-w)+\frac{c}{a}v & \frac{c}{a}(1-w)+\frac{b}{a}v \\
0 & w & -v \\
0 & -v & w \\
\end{array}
\right)
\Bigl|
\;
\; a \neq 0
\right\},
\]
where $v=\sinh(\alpha a)$ and $w=\cosh(\alpha a)$.
This result can be written as
\begin{equation}\label{G-IV-1}
\begin{array}{l}
\mathcal{G}_4: \quad e^A=E+t A+ u A^2,
\quad\\[4pt]
\phantom{\mathcal{G}_4: \quad\ }
t=\frac{\sinh \sqrt{\frac{1}{2}{\rm tr} A^2}}{ \sqrt{\frac{1}{2}{\rm tr} A^2}},
\quad
u=\frac{\cosh \sqrt{\frac{1}{2}{\rm tr} A^2}-1}{\frac{1}{2}{\rm tr} A^2},\quad
{\rm tr} A^2> 0.
\end{array}
\end{equation}
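As a consistency check, since $\frac{1}{2}{\rm tr} A^2=\alpha^2a^2$ here, the quantities in \eqref{G-IV-1} can be written as
\[
t=\frac{\sinh(\alpha a)}{\alpha a},\qquad
u=\frac{\cosh(\alpha a)-1}{\alpha^2a^2}
\]
(both expressions are even in $\alpha a$), which relates them to the entries $v=\sinh(\alpha a)$ and $w=\cosh(\alpha a)$ of the matrix above by $v=\alpha a\,t$ and $w=1+\alpha^2a^2\,u$.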
Now, we focus on the second case, when ${\rm tr} A^2$ vanishes; therefore $a=0$ holds
and $P$ is not invertible. Then $A$ is nilpotent with a nilpotency index $q=2$. Therefore, we obtain
\begin{equation}\label{G-IV-2}
\mathcal{G}_4:\quad
e^A=E+A, \quad
{\rm tr} A^2= 0.
\end{equation}
According to \eqref{G-IV-1} and \eqref{G-IV-2}, the matrix Lie group $\mathcal{G}_4$ has the matrix representation \eqref{eA}, where $A$, $t$ and $u$ are given in Table~\ref{T1} for $(\mathfrak{L}, \phi, \xi, \eta, g)\in \mathcal{F}_4$.
\textbf{\emph{The classes $\mathcal{F}_9$ and $\mathcal{F}_{10}$.}}
Considering the cases of $\mathcal{F}_9$ and $\mathcal{F}_{10}$, we find that ${\rm tr} A= 0$ and ${\rm tr} A^2> 0$ there, just as for $\mathcal{F}_4$.
The results in Table~\ref{T1} for these two classes are obtained in the same way as for $\mathcal{F}_4$.
\textbf{\emph{The class $\mathcal{F}_8$.}}
Finally, let us consider the case when $(\mathfrak{L}, \phi, \xi, \eta, g)$ belongs to $\mathcal{F}_8$.
From Theorem~\ref{thm-Fi-L}, we have the following:
\[
[E_0,E_1]=\alpha E_2,\quad [E_0,E_2]=-\alpha E_1,\quad [E_1,E_2]=2\alpha E_0,
\]
where $\alpha=\lambda$, according to \eqref{Fi3}.
In the same way as in the cases for $\mathcal{F}_1$ and $\mathcal{F}_4$, we obtain the matrix form of $A$ in $\mathfrak{g}_8$ as it is shown in Table~\ref{T1}.
It implies ${\rm tr} A=0$ and
${\rm tr} A^2 =- 2\alpha^2 \delta$, where $\delta:=a^2+2b^2+2c^2$.
Since $\delta$ is positive for $(a,b,c)\neq (0,0,0)$,
${\rm tr} A^2$ is negative in this non-trivial case.
Obviously, the characteristic polynomial of $A$ has the form
$
P_A(\lambda)= \lambda\bigl(\lambda^2+\alpha^2\delta\bigr)
$ and
we get the following eigenvalues of $A$:
\begin{equation}\label{lm123}
\lambda_1=0, \qquad \lambda_2=\mathrm{i}\alpha \sqrt{\delta}, \qquad \lambda_3=-\mathrm{i}\alpha \sqrt{\delta},
\end{equation}
where $\mathrm{i}=\sqrt{-1}$.
Next, we obtain the corresponding linearly independent eigenvectors $p_i$ ($i={1,2,3}$):
\begin{equation*}\label{}
\begin{array}{c}
p_1=(a,\, 2b,\, 2c)^\intercal, \quad p_2=(-ac-\mathrm{i} b\sqrt{\delta},\, -2bc+\mathrm{i} a\sqrt{\delta},\, a^2 +2b^2)^{\intercal},\\[4pt]
p_3=(-ac+\mathrm{i} b\sqrt{\delta},\, -2bc-\mathrm{i} a\sqrt{\delta},\, a^2 +2b^2)^{\intercal}
\end{array}
\end{equation*}
and they form the following matrix
\begin{equation}\label{matrixP}
P=\left(
\begin{array}{ccc}
a & -ac-\mathrm{i} b\sqrt{\delta} & -ac+\mathrm{i} b\sqrt{\delta} \\
2b & -2bc+\mathrm{i} a\sqrt{\delta} & -2bc-\mathrm{i} a\sqrt{\delta} \\
2c & a^2 +2b^2 & a^2 +2b^2
\end{array}
\right)
\end{equation}
with $\det P=2\mathrm{i} (a^2+2b^2)\delta\sqrt{\delta}$.
Therefore, $P$ is invertible (respectively, non-invertible) if and only if $(a,b)\neq (0,0)$ (respectively, $(a,b)= (0,0)$ and $c\neq 0$).
Firstly, let us consider the case when $P$ is invertible, i.e.\ $(a,b)\neq (0,0)$. Then, we obtain the inverse matrix of $P$ as follows:
\[
P^{-1}=\left(
\begin{array}{ccc}
\frac{a}{\delta} & \frac{b}{\delta} & \frac{c}{\delta} \\
\frac{-ac+\mathrm{i} b\sqrt{\delta}}{\delta(a^2 +2b^2)} & -\frac{2bc+\mathrm{i} a\sqrt{\delta}}{2\delta(a^2 +2b^2)} & \frac{1}{2\delta} \\
-\frac{ac+\mathrm{i} b\sqrt{\delta}}{\delta(a^2 +2b^2)} & \frac{-2bc+\mathrm{i} a\sqrt{\delta}}{2\delta(a^2 +2b^2)} & \frac{1}{2\delta}
\end{array}
\right).
\]
Therefore, we obtain the matrix representation of the Lie group $\mathcal{G}_8$ for $\mathfrak{g}_8$ in the following way:
\[
\mathcal{G}_8=\left\{
\left(
\begin{array}{ccc}
\alpha^2 u(a^2 - \delta)+1& ab\alpha^2 u-\alpha t & ac\alpha^2 u+b\alpha t \\
2ab\alpha^2 u+2c\alpha t & \alpha^2 u(2b^2 - \delta)+1
& 2bc\alpha^2 u-a\alpha t\\
2ac\alpha^2 u-2b\alpha t & 2bc\alpha^2 u+a\alpha t
& \alpha^2 u(2c^2 - \delta)+1\\
\end{array}
\right) \Bigl|\
\begin{array}{l}
\delta > 0
\end{array}
\right\}.
\]
This result can be written as
\begin{equation}\label{G-8-I}
\begin{array}{l}
\mathcal{G}_8: \quad
e^A=E+tA+uA^2,
\quad\\[4pt]
\phantom{\mathcal{G}_8: \quad\ }
t=\frac{\sin\left(\alpha\sqrt{\delta}\right)}{\alpha \sqrt{\delta}},\quad u=\frac{1-\cos\left(\alpha\sqrt{\delta}\right)}{\alpha^2 \delta},\quad \delta> 0.
\end{array}
\end{equation}
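The form of $t$ and $u$ in \eqref{G-8-I} can also be checked directly on the spectrum \eqref{lm123}: setting $\omega:=\alpha\sqrt{\delta}$ and $p(\lambda):=1+t\lambda+u\lambda^2$ with $t$ and $u$ as above, we have
\[
p(0)=1=e^{0},\qquad
p(\pm\mathrm{i}\omega)=1\pm\mathrm{i}\sin\omega-(1-\cos\omega)=\cos\omega\pm\mathrm{i}\sin\omega=e^{\pm\mathrm{i}\omega},
\]
and since $A$ is diagonalizable with eigenvalues $0,\pm\mathrm{i}\omega$, it follows that $e^A=p(A)=E+tA+uA^2$.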
Now, let us consider the case when $\det P=0$ for $P$ in \eqref{matrixP}, i.e.\ $(a,b)= (0,0)$ and $c\neq 0$.
In this case
we specialize the form of $A$ and obtain its eigenvectors $p_i$ $(i = 1,2,3)$ corresponding to its eigenvalues $\lambda_i$
in \eqref{lm123}, where $\delta$ is specialized as $\delta=2c^2$. Then, the resulting matrix $P$ has the following form
\begin{equation*}\label{matrixP-II}
P=\left(
\begin{array}{ccc}
0 & \frac{\mathrm{i} \sqrt{2}}{2} & -\frac{\mathrm{i} \sqrt{2}}{2} \\
0 & 1 & 1 \\
1 &0 & 0
\end{array}
\right)
\end{equation*}
with $\det P=\mathrm{i}\sqrt{2}$.
Obviously, $P$ is now invertible and its inverse matrix is the following
\[
P^{-1}=\left(
\begin{array}{ccc}
0 & 0 & 1 \\
-\frac{\mathrm{i}}{\sqrt{2}} & \frac{1}{2} & 0 \\
\frac{\mathrm{i}}{\sqrt{2}} & \frac{1}{2} & 0
\end{array}
\right).
\]
Thus, using formula \eqref{J}, the matrix representation of the Lie group $\mathcal{G}_8$
in this case is the following:
\[
\mathcal{G}_8=\left\{
\left(
\begin{array}{ccc}
1-\alpha^2 c^2 u&-\alpha c t & 0\\
2\alpha c t & 1-\alpha^2 c^2 u
& 0\\
0 & 0
& 1\\
\end{array}
\right) \Bigl|\;
\begin{array}{ll}
(a,b)= (0,0),\;
c\neq 0
\end{array}
\right\},
\]
which can be written as
\begin{equation*}\label{G-8-II}
\begin{array}{l}
\mathcal{G}_8: \quad
e^A=E+tA+uA^2,
\quad\\[4pt]
\phantom{\mathcal{G}_8: \quad\ }
t=\frac{\sin\left(\alpha|c|\sqrt{2}\right)}{\alpha |c|\sqrt{2}},\quad u=\frac{1-\cos\left(\alpha|c|\sqrt{2}\right)}{2\alpha^2 c^2},
\end{array}
\end{equation*}
which coincides with \eqref{G-8-I} in the special case of $\delta=2c^2$.
Finally, the results in both cases, $(a,b)\neq (0,0)$ and $(a,b)= (0,0)$, $c\neq 0$,
can be combined as shown in Table~\ref{T1}
for $(\mathfrak{L}, \phi, \xi, \eta, g)\in \mathcal{F}_8$.
The latter completes the proof of the theorem.
\end{proof}
Bearing in mind Corollary~\ref{cor:para} and Theorem~\ref{thm:main}, we immediately obtain the following
\begin{cor}
If $(\mathfrak{L},\phi,\xi,\eta,g)$ is para-Sasakian,
then the compact simply connected Lie group $\mathcal{G}$ isomorphic to $\mathfrak{L}$, both with one and the same Lie algebra, has the form \eqref{eA}, i.e.\ $e^A=E+tA+uA^2$, where for $a,b,c\in\mathbb R$ we have
\[
A=\left(
\begin{array}{ccc}
0 & -c & - b \\
0 & 0 & a \\
0 & a & 0 \\
\end{array}
\right),
\quad
t=
\left\{
\begin{array}{ll}
\frac{\sinh |a|}{|a|}, & a\neq0 \\
1, & a= 0
\end{array}
\right.,
\quad
u=
\left\{
\begin{array}{ll}
\frac{\cosh |a|-1}{a^2}, & a\neq 0 \\
0, & a= 0
\end{array}
\right.
.
\]
\end{cor}
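For a direct check of the last formulas, note that here
\[
A^2=\left(
\begin{array}{ccc}
0 & -ab & -ac \\
0 & a^2 & 0 \\
0 & 0 & a^2 \\
\end{array}
\right),
\qquad A^3=a^2A,
\]
so summing the exponential series gives $e^A=E+\frac{\sinh a}{a}\,A+\frac{\cosh a-1}{a^2}\,A^2$ for $a\neq 0$ (both coefficients being even functions of $a$), in agreement with the stated $t$ and $u$, while $A^2=0$ and $e^A=E+A$ for $a=0$.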
\indent
\subsection*{Acknowledgment}
The authors were supported by projects MU19-FMI-020 and FP19-FMI-002 of the Scientific Research Fund,
University of Plovdiv Paisii Hilendarski, Bulgaria.
{\small\rm\baselineskip=10pt
\qquad Mancho Manev \par
\qquad University of Plovdiv Paisii Hilendarski, Faculty of Mathematics and
Informatics \par
\qquad Department of Algebra and Geometry \par
\qquad 24 Tzar Asen St, 4000 Plovdiv,
Bulgaria \par
\qquad \& \par
\qquad Medical University of Plovdiv,
Faculty of Public Health \par
\qquad Department of Medical Informatics, Biostatistics and E-Learning \par
\qquad 15A Vasil Aprilov Blvd, 4002 Plovdiv,
Bulgaria \par
\qquad {\tt [email protected]}
\qquad Veselina Tavkova \par
\qquad University of Plovdiv Paisii Hilendarski,
Faculty of Mathematics and Informatics \par
\qquad Department of Algebra and Geometry\par
\qquad 24 Tzar Asen St, 4000 Plovdiv,
Bulgaria\par
\qquad {\tt [email protected]}
}
\end{document}
|
\begin{document}
\title[On non-hypercyclicity of scalar type spectral operators]
{On the non-hypercyclicity\\
of scalar type spectral operators\\
and collections of their exponentials}
\author[Marat V. Markin]{Marat V. Markin}
\address{
Department of Mathematics\newline
California State University, Fresno\newline
5245 N. Backer Avenue, M/S PB 108\newline
Fresno, CA 93740-8001
}
\email{[email protected]}
\subjclass{Primary 47A16, 47B40; Secondary 47A10, 47B15, 47D06, 47D60, 34G10}
\keywords{Hypercyclicity, scalar type spectral operator, normal operator, $C_0$-se\-migroup, strongly continuous operator group}
\begin{abstract}
Generalizing the case of a normal operator in a complex Hilbert space, we give a straightforward proof of the non-hypercyclicity of a \textit{scalar type spectral operator} $A$ in a complex Banach space as well as of the collection $\left\{e^{tA}\right\}_{t\ge 0}$ of its exponentials, which, under a certain condition on the spectrum of the operator $A$, coincides with the $C_0$-semigroup generated by $A$. When the spectrum
of $A$ lies on the imaginary axis, we also show that the strongly continuous group
$\left\{e^{tA}\right\}_{t\in {\mathbb R}}$ of bounded linear operators generated by $A$ is non-hypercyclic. From the general results, we infer that, in the complex Hilbert space $L_2({\mathbb R})$, the anti-self-adjoint differentiation operator $A:=\dfrac{d}{dx}$ with the domain $D(A):=W_2^1({\mathbb R})$ is non-hypercyclic and so is the left-translation strongly continuous unitary operator group generated by $A$.
\end{abstract}
\maketitle
\section[Introduction]{Introduction}
The concept of \textit{hypercyclicity}, underlying the theory of linear chaos, traditionally considered for \textit{continuous} linear operators on Fr\'echet spaces, in particular for \textit{bounded} linear operators on Banach spaces, and known to be a purely infinite-dimensional phenomenon (see, e.g., \cite{Grosse-Erdmann-Manguillot,Guirao-Montesinos-Zizler,Rolewicz1969}), is extended in \cite{B-Ch-S2001,deL-E-G-E2003} to \textit{unbounded} linear operators in Banach spaces, where also found are sufficient conditions for unbounded hypercyclicity and certain examples of hypercyclic unbounded linear differential operators.
\begin{defn}[Hypercyclicity]\ \\
Let
\[
A:X\supseteq D(A)\to X
\]
be a (bounded or unbounded) linear operator in a (real or complex) Banach space $X$ with a domain $D(A)$.
A nonzero vector
\begin{equation*}
f\in C^\infty(A):=\bigcap_{n=0}^{\infty}D(A^n)
\end{equation*}
($A^0:=I$, $I$ is the \textit{identity operator} on $X$)
is called \textit{hypercyclic} if its \textit{orbit}
under $A$
\[
\orb(f,A):=\left\{A^nf\right\}_{n\in{\mathbb Z}_+}
\]
(${\mathbb Z}_+:=\left\{0,1,2,\dots\right\}$ is the set of nonnegative integers) is dense in $X$.
Linear operators possessing hypercyclic vectors are
said to be \textit{hypercyclic}.
More generally, a collection $\left\{T(t)\right\}_{t\in J}$ ($J$ is a nonempty indexing set) of linear operators in $X$ is called \textit{hypercyclic} if it possesses \textit{hypercyclic vectors}, i.e., such nonzero vectors $\displaystyle f\in \bigcap_{t\in J}D(T(t))$, whose \textit{orbit}
\[
\left\{T(t)f\right\}_{t\in J}
\]
is dense in $X$.
\end{defn}
Cf. \cite{Markin2018(9),Markin2018(10)}.
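For instance, a classical example of a hypercyclic bounded linear operator is $2B$, where $B$ is the backward shift on the sequence space $l_2$ (see \cite{Rolewicz1969}).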
As is easily seen, in the definition of hypercyclicity for a linear operator, the underlying space must necessarily be \textit{separable}.
It is noteworthy that, for a hypercyclic linear operator $A$, the set $HC(A)$ of all its hypercyclic vectors, containing the dense orbit of any vector hypercyclic under $A$, is dense in $(X,\|\cdot\|)$, and hence, the more so, is the subspace $C^\infty(A)\supseteq HC(A)$.
Bounded \textit{normal operators} on a complex Hilbert space are known to be non-hypercyclic \cite[Corollary $5.31$]{Grosse-Erdmann-Manguillot}. In \cite{Mark-Sich2019(1)}, non-hypercyclicity is shown to hold for arbitrary normal operators (bounded or unbounded), certain collections of their exponentials, and symmetric operators.
Here, abandoning the comforts of a Hilbert space setting with its inherent orthogonality and self-duality, while generalizing non-hypercyclicity from normal to scalar type spectral operators, we furnish a straightforward proof of the non-hypercyclicity of an arbitrary \textit{scalar type spectral operator} $A$ (bounded or unbounded) in a complex Banach space as well as of the collection $\left\{e^{tA}\right\}_{t\ge 0}$ of its exponentials (see, e.g., \cite{Dunford1954,Survey58,Dun-SchIII}), which, provided the spectrum $\sigma(A)$ of the operator $A$ is located in a left half-plane
\[
\left\{\lambda\in {\mathbb C}\,\middle|\,\operatorname{Re}\lambda\le \omega \right\}
\]
with some $\omega\in{\mathbb R}$, coincides with the $C_0$-\textit{semigroup} generated by $A$
\cite{Markin2002(2)} (see also \cite{Berkson1966,Panchapagesan1969}). When the spectrum
of $A$ lies on the imaginary axis $i{\mathbb R}$ ($i$ is the \textit{imaginary unit}), we also show that the strongly continuous group
$\left\{e^{tA}\right\}_{t\in {\mathbb R}}$ of bounded linear operators generated by $A$ is non-hypercyclic. From the general results, we immediately infer that, in the complex Hilbert space $L_2({\mathbb R})$, the \textit{anti-self-adjoint} differentiation operator $A:=\dfrac{d}{dx}$ with the domain
\[
W_2^1({\mathbb R}):=\left\{f\in L_2({\mathbb R})\middle|f(\cdot)\
\text{is \textit{absolutely continuous} on ${\mathbb R}$ with}\ f'\in L_2({\mathbb R}) \right\}
\]
is non-hypercyclic and so is the left-translation strongly continuous unitary operator group generated by it \cite{Engel-Nagel,Hille-Phillips,Stone1932}.
\section[Preliminaries]{Preliminaries}
More extensive preliminaries concerning the \textit{scalar-type spectral operators} in complex Banach spaces, which, in particular, encompass \textit{normal operators} in complex Hilbert spaces \cite{Wermer} (see also \cite{Dun-SchII,Plesner}), can be found in the corresponding section of \cite{Markin2018(3)} (see also \cite{Dunford1954,Survey58,Dun-SchIII}). Here, we outline only a few facts indispensable for our subsequent discourse.
With a {\it scalar type spectral operator} $A$ in a complex Banach space $(X,\|\cdot\|)$ associated are its \textit{spectral measure} (the \textit{resolution of the identity}) $E_A(\cdot)$, whose support is the spectrum $\sigma(A)$ of $A$, and the so-called {\it Borel operational calculus} assigning to any Borel measurable function $F:\sigma(A)\to {\mathbb C}$ a scalar type spectral operator
\begin{equation*}
F(A):=\int\limits_{\sigma(A)} F(\lambda)\,dE_A(\lambda)
\end{equation*}
(see \cite{Survey58,Dun-SchIII}).
In particular,
\begin{equation*}
A^n=\int\limits_{\sigma(A)} \lambda^n\,dE_A(\lambda),\ n\in{\mathbb Z}_+,
\end{equation*}
and
\begin{equation*}
e^{tA}:=\int\limits_{\sigma(A)} e^{t\lambda}\,dE_A(\lambda),\ t\in {\mathbb R}.
\end{equation*}
Provided
\[
\sigma(A)\subseteq \left\{\lambda\in{\mathbb C}\,\middle|\, \operatorname{Re}\lambda\le \omega\right\},
\]
with some $\omega\in {\mathbb R}$, the collection
of exponentials $\left\{e^{tA}\right\}_{t\ge 0}$
coincides with the $C_0$-\textit{se\-migroup} generated by $A$ {\cite[Proposition $3.1$]{Markin2002(2)}} (see also \cite{Berkson1966,Panchapagesan1969}), and hence, if
\[
\sigma(A)\subseteq \left\{\lambda\in{\mathbb C}\,\middle|\, -\omega\le \operatorname{Re}\lambda\le \omega\right\},
\]
with some $\omega\ge 0$, the collection of exponentials
$\left\{e^{tA}\right\}_{t\in{\mathbb R}}$
coincides with the \textit{strongly continuous group} of bounded linear operators generated by $A$.
The orbit maps
\begin{equation}\label{expf1}
y(t)=e^{tA}f,\ t\ge 0,f \in \bigcap_{t\ge 0}D(e^{tA}),
\end{equation}
describe all \textit{weak/mild solutions} of the abstract evolution equation
\begin{equation}\label{+}
y'(t)=Ay(t),\ t\ge 0,
\end{equation}
\cite[Theorem $4.2$]{Markin2002(1)}, whereas the orbit maps
\begin{equation*}
y(t)=e^{tA}f,\ t\in {\mathbb R},f \in \bigcap_{t\in {\mathbb R}}D(e^{tA}),
\end{equation*}
describe all \textit{weak/mild solutions} of the abstract evolution equation
\begin{equation}\label{1}
y'(t)=Ay(t),\ t\in {\mathbb R},
\end{equation}
\cite[Theorem $7$]{Markin2018(3)} (see also \cite{Ball}). Such generalized solutions need not be differentiable in the strong sense and encompass the \textit{classical} ones, strongly differentiable and satisfying the corresponding equations in the traditional plug-in sense (cf. {\cite[Ch. II, Definition 6.3]{Engel-Nagel}}, see also {\cite[Preliminaries]{Markin2018(2)}}).
The subspaces
\[
C^\infty(A),\ \displaystyle \bigcap_{t\ge 0}D(e^{tA}),\ \text{and}\ \bigcap_{t\in {\mathbb R}}D(e^{tA})
\]
of all possible initial values for the orbits
under $A$, $\left\{e^{tA}\right\}_{t\ge 0}$, and $\left\{e^{tA}\right\}_{t\in {\mathbb R}}$ are \textit{dense} in $(X,\|\cdot\|)$ since they contain the subspace
\begin{equation*}
\bigcup_{\alpha>0}E_A(\Delta_\alpha)X,\ \text{where}\ \Delta_\alpha:=\left\{\lambda\in{\mathbb C}\,\middle|\,|\lambda|\le \alpha \right\},\ \alpha>0,
\end{equation*}
which is dense in $(X,\|\cdot\|)$ and coincides with the class ${\mathscr E}^{\{0\}}(A)$ of the \textit{exponential type entire} vectors of the operator $A$ \cite{Markin2015} (see also \cite{Radyno1983(1)}).
Due to its strong countable additivity, the spectral measure $E_A(\cdot)$ is bounded, i.e., there exists such an $M\ge 1$ that, for any Borel set $\delta\subseteq {\mathbb C}$,
\begin{equation}\label{bounded}
\|E_A(\delta)\|\le M
\end{equation}
\cite{Dun-SchI,Dun-SchIII}, the notation $\|\cdot\|$ being used here to designate the norm in the space $L(X)$ of all bounded linear operators on $X$. Adhering to this rather conventional economy of symbols hereafter, we also adopt the same notation for the norm in the dual space $X^*$.
For arbitrary $f\in X$ and $g^*\in X^*$, the \textit{total variation measure} $v(f,g^*,\cdot)$ of the complex-valued Borel measure $\langle E_A(\cdot)f,g^* \rangle$ ($\langle\cdot,\cdot\rangle$ is the {\it pairing} between the space $X$ and its dual $X^*$) is a {\it finite} positive Borel measure with
\begin{equation}\label{tv}
v(f,g^*,{\mathbb C})=v(f,g^*,\sigma(A))\le 4M\|f\|\|g^*\|
\end{equation}
(see, e.g., \cite{Markin2004(1),Markin2004(2)}).
Also \cite{Markin2004(1),Markin2004(2)}, for any Borel measurable function $F:{\mathbb C}\to {\mathbb C}$, arbitrary $f\in D(F(A))$ and $g^*\in X^*$, and each Borel set $\delta\subseteq {\mathbb C}$,
\begin{equation}\label{cond(ii)}
\int\limits_\delta|F(\lambda)|\,dv(f,g^*,\lambda)
\le 4M\|E_A(\delta)F(A)f\|\|g^*\|.
\end{equation}
In particular, for $\delta=\sigma(A)$,
\begin{equation}\label{cond(i)}
\int\limits_{\sigma(A)}|F(\lambda)|\,d v(f,g^*,\lambda)\le 4M\|F(A)f\|\|g^*\|.
\end{equation}
Observe that the constant $M\ge 1$ in \eqref{tv}--\eqref{cond(i)} is from
\eqref{bounded}.
\section[Main Results]{Main Results}
\begin{thm}\label{Thm1}
An arbitrary scalar type spectral operator $A$ in a complex Banach space $(X,\|\cdot\|)$ with spectral measure $E_A(\cdot)$ is non-hypercyclic and so is the collection $\left\{e^{tA}\right\}_{t\ge 0}$ of its exponentials, which, provided the spectrum of $A$ is located in a left half-plane
\[
\left\{\lambda\in {\mathbb C}\,\middle|\,\operatorname{Re}\lambda\le \omega \right\}
\]
with some $\omega\in{\mathbb R}$, coincides with the $C_0$-semigroup generated by $A$.
\end{thm}
\begin{proof}
Let $f\in C^\infty(A)\setminus \{0\}$ be arbitrary.
There are two possibilities: either
\[
E_A\left(\left\{\lambda\in \sigma(A)\,\middle|\, |\lambda|>1\right\}\right)f\neq 0
\]
or
\[
E_A\left(\left\{\lambda\in \sigma(A)\,\middle|\, |\lambda|>1\right\}\right)f=0.
\]
In the first case, as follows from the {\it Hahn-Banach Theorem} (see, e.g., \cite{Dun-SchI}), there exists a functional $g^*\in X^*\setminus \{0\}$ such that
\begin{equation*}
\langle E_A\left(\left\{\lambda\in \sigma(A)\,\middle|\, |\lambda|>1\right\}\right)f,g^*\rangle\neq 0
\end{equation*}
and hence, for any $n\in {\mathbb Z}_+$,
\begin{multline*}
\|A^nf\|
\\
\text{by \eqref{cond(i)};}
\\
\shoveleft{
\ge \left[4M\|g^*\|\right]^{-1}\int\limits_{\sigma(A)}|\lambda|^n\,dv(f,g^*,\lambda)
\ge \left[4M\|g^*\|\right]^{-1}
\int\limits_{\{\lambda\in\sigma(A)||\lambda|>1\}}|\lambda|^n\,dv(f,g^*,\lambda)
}\\
\text{since $|\lambda|^n\ge 1$ for $|\lambda|>1$ and $n\in{\mathbb Z}_+$;}
\\
\shoveleft{
\ge
\left[4M\|g^*\|\right]^{-1}v(f,g^*,\{\lambda\in\sigma(A)||\lambda|>1\})
}\\
\ \ \
\ge
\left[4M\|g^*\|\right]^{-1}
|\langle E_A(\{\lambda\in\sigma(A)||\lambda|>1\})f,g^*\rangle|
>0,
\end{multline*}
which implies that the orbit $\left\{A^nf\right\}_{n\in{\mathbb Z}_+}$ of $f$ under $A$ cannot approximate the zero vector, and hence, is not dense in $(X,\|\cdot\|)$.
In the second case, since
\[
f=E_A\left(\left\{\lambda\in \sigma(A)\,\middle|\, |\lambda|>1\right\}\right)f
+
E_A\left(\left\{\lambda\in \sigma(A)\,\middle|\, |\lambda|\le 1\right\}\right)f,
\]
we infer that
\[
f=E_A\left(\left\{\lambda\in \sigma(A)\,\middle|\, |\lambda|\le 1\right\}\right)f\neq 0
\]
and hence, for any $n\in {\mathbb Z}_+$,
\begin{multline*}
\left\|A^nf\right\|
\\
\text{by the properties of the \textit{operational calculus};}
\\
\shoveleft{
=
\left\|\int\limits_{\{\lambda\in\sigma(A)||\lambda|\le 1\}}
\lambda^n\,dE_A(\lambda)f\right\|
}\\
\text{as follows from the \textit{Hahn-Banach Theorem};}
\\
\shoveleft{
=\sup_{\{g^*\in X^*|\|g^*\|=1\}}
\left|\left\langle
\int\limits_{\{\lambda\in\sigma(A)||\lambda|\le 1\}}
\lambda^n\,d E_A(\lambda)f,g^*\right\rangle
\right|
}\\
\text{by the properties of the \textit{operational calculus};}
\\
\shoveleft{
= \sup_{\{g^*\in X^*|\|g^*\|=1\}}
\left|\int\limits_{\{\lambda\in\sigma(A)||\lambda|\le 1\}}
\lambda^n\,d\langle E_A(\lambda)f,g^*\rangle\right|
}\\
\shoveleft{
\le \sup_{\{g^*\in X^*|\|g^*\|=1\}}\int\limits_{\{\lambda\in\sigma(A)||\lambda|\le 1\}}
|\lambda|^n\,dv(f,g^*,\lambda)
}\\
\shoveleft{
\le \sup_{\{g^*\in X^*|\|g^*\|=1\}}\int\limits_{\{\lambda\in\sigma(A)||\lambda|\le 1\}}
1\,dv(f,g^*,\lambda)
}\\
\text{by \eqref{cond(ii)} with $F(\lambda)\equiv 1$};
\\
\shoveleft{
\le \sup_{\{g^*\in X^*|\|g^*\|=1\}}4M\left\|E_A(\{\lambda\in\sigma(A)|
|\lambda|\le 1\})f\right\|\|g^*\|
}\\
\ \ \
= 4M\left\|E_A(\{\lambda\in\sigma(A)|
|\lambda|\le 1\})f\right\|,
\end{multline*}
which also implies that the orbit
$\left\{A^nf\right\}_{n\in{\mathbb Z}_+}$ of $f$ under $A$, being bounded, is not dense in $(X,\|\cdot\|)$ and completes the proof for the case of the operator.
Now, let us consider the case of the exponential collection $\left\{e^{tA}\right\}_{t\ge 0}$ assuming that $\displaystyle f \in \bigcap_{t\ge 0}D(e^{tA})\setminus \{0\}$ is arbitrary.
There are two possibilities: either
\[
E_A\left(\left\{\lambda\in \sigma(A)\,\middle|\, \operatorname{Re}\lambda>0\right\}\right)f\neq 0
\]
or
\[
E_A\left(\left\{\lambda\in \sigma(A)\,\middle|\, \operatorname{Re}\lambda>0\right\}\right)f=0.
\]
In the first case, as follows from the {\it Hahn-Banach Theorem}, there exists a functional $g^*\in X^*\setminus \{0\}$ such that
\begin{equation*}
\langle E_A\left(\left\{\lambda\in \sigma(A)\,\middle|\, \operatorname{Re}\lambda>0\right\}\right)f,g^*\rangle\neq 0
\end{equation*}
and hence, for any $t\ge 0$,
\begin{multline*}
\|e^{tA}f\|
\\
\text{by \eqref{cond(i)};}
\\
\shoveleft{
\ge \left[4M\|g^*\|\right]^{-1}\int\limits_{\sigma(A)}\left|e^{t\lambda}\right|\,dv(f,g^*,\lambda)
}\\
\shoveleft{
\ge \left[4M\|g^*\|\right]^{-1}
\int\limits_{\{\lambda\in\sigma(A)|\operatorname{Re}\lambda>0\}}e^{t\operatorname{Re}\lambda}\,dv(f,g^*,\lambda)
}\\
\text{since for $t\ge 0$ and $\lambda\in \sigma(A)$ with $\operatorname{Re}\lambda>0$, $e^{t\operatorname{Re}\lambda}\ge 1$;}
\\
\shoveleft{
\ge
\left[4M\|g^*\|\right]^{-1}v(f,g^*,\{\lambda\in\sigma(A)|\operatorname{Re}\lambda>0\})
}\\
\ \ \,
\ge
\left[4M\|g^*\|\right]^{-1}
|\langle E_A(\{\lambda\in\sigma(A)|\operatorname{Re}\lambda>0\})f,g^*\rangle|
>0,
\end{multline*}
which implies that the orbit $\left\{e^{tA}f\right\}_{t\ge 0}$ of $f$ cannot approximate the zero vector, and hence, is not dense in $(X,\|\cdot\|)$.
In the second case, since
\[
f=E_A\left(\left\{\lambda\in \sigma(A)\,\middle|\, \operatorname{Re}\lambda>0\right\}\right)f
+
E_A\left(\left\{\lambda\in \sigma(A)\,\middle|\, \operatorname{Re}\lambda\le 0\right\}\right)f,
\]
we infer that
\[
f=E_A\left(\left\{\lambda\in \sigma(A)\,\middle|\, \operatorname{Re}\lambda\le 0\right\}\right)f\neq 0
\]
and hence, for any $t\ge 0$,
\begin{multline*}
\left\|e^{tA}f\right\|
\\
\text{by the properties of the \textit{operational calculus};}
\\
\shoveleft{
=
\left\|\int\limits_{\{\lambda\in\sigma(A)|\operatorname{Re}\lambda\le 0\}}
e^{t\lambda}\,dE_A(\lambda)f\right\|
}\\
\text{as follows from the \textit{Hahn-Banach Theorem};}
\\
\shoveleft{
=\sup_{\{g^*\in X^*|\|g^*\|=1\}}
\left|\left\langle
\int\limits_{\{\lambda\in\sigma(A)|\operatorname{Re}\lambda\le 0\}}
e^{t\lambda}\,d E_A(\lambda)f,g^*\right\rangle
\right|
}\\
\text{by the properties of the \textit{operational calculus};}
\\
\shoveleft{
= \sup_{\{g^*\in X^*|\|g^*\|=1\}}
\left|\int\limits_{\{\lambda\in\sigma(A)|\operatorname{Re}\lambda\le 0\}}
e^{t\lambda}\,d\langle E_A(\lambda)f,g^*\rangle\right|
}\\
\shoveleft{
\le \sup_{\{g^*\in X^*|\|g^*\|=1\}}\int\limits_{\{\lambda\in\sigma(A)|\operatorname{Re}\lambda\le 0\}}
\left|e^{t\lambda}\right|\,dv(f,g^*,\lambda)
}\\
\shoveleft{
=\sup_{\{g^*\in X^*|\|g^*\|=1\}}\int\limits_{\{\lambda\in\sigma(A)|\operatorname{Re}\lambda\le 0\}}
e^{t\operatorname{Re}\lambda}\,dv(f,g^*,\lambda)
}\\
\text{since for $t\ge 0$ and $\lambda\in \sigma(A)$ with $\operatorname{Re}\lambda\le 0$, $e^{t\operatorname{Re}\lambda}\le 1$;}
\\
\shoveleft{
\le \sup_{\{g^*\in X^*|\|g^*\|=1\}}\int\limits_{\{\lambda\in\sigma(A)|\operatorname{Re}\lambda\le 0\}}
1\,dv(f,g^*,\lambda)
}\\
\text{by \eqref{cond(ii)} with $F(\lambda)\equiv 1$};
\\
\shoveleft{
\le \sup_{\{g^*\in X^*|\|g^*\|=1\}}4M\left\|E_A(\{\lambda\in\sigma(A)|
\operatorname{Re}\lambda\le 0\})f\right\|\|g^*\|
}\\
\ \ \
= 4M\left\|E_A(\{\lambda\in\sigma(A)|
\operatorname{Re}\lambda\le 0\})f\right\|,
\end{multline*}
which also implies that the orbit
$\left\{e^{tA}f\right\}_{t\ge 0}$ of $f$, being bounded, is not dense in $(X,\|\cdot\|)$ and completes the entire proof.
\end{proof}
\begin{rem}
Now, \cite[Theorem $1$]{Mark-Sich2019(1)} is the important particular case of Theorem \ref{Thm1} for
a (bounded or unbounded) \textit{normal operator} in a complex Hilbert space.
\end{rem}
If further for a scalar type spectral operator $A$ in a complex Banach space $(X,\|\cdot\|)$, we have the inclusion:
\[
\sigma(A)\subseteq i{\mathbb R},
\]
by {\cite[Theorem XVIII.$2.11$ (c)]{Dun-SchIII}}, for any $t\in {\mathbb R}$,
\[
\|e^{tA}\|
=\left\|\int\limits_{\sigma(A)}
e^{t\lambda}\,dE_A(\lambda)\right\|\le 4M\sup_{\lambda\in\sigma(A)}\left|e^{t\lambda}\right|
=4M\sup_{\lambda\in\sigma(A)}e^{t\operatorname{Re}\lambda}=4M,
\]
where the constant $M\ge 1$ is from
\eqref{bounded}. Therefore, the strongly continuous group $\left\{e^{tA}\right\}_{t\in {\mathbb R}}$ of bounded linear operators generated by $A$ is \textit{bounded} (cf. \cite{Berkson1966}), which implies that every orbit $\left\{e^{tA}f\right\}_{t\in {\mathbb R}}$, $f\in X$,
is bounded, and hence, is not dense in $(X,\|\cdot\|)$. Thus, we arrive at the following
\begin{prop}\label{Prop1}
For a scalar type spectral operator $A$ in a complex Banach space $(X,\|\cdot\|)$ with $\sigma(A)\subseteq i{\mathbb R}$, the strongly continuous group $\left\{e^{tA}\right\}_{t\in {\mathbb R}}$ of bounded linear operators generated by $A$ is bounded, and hence, non-hypercyclic.
\end{prop}
As is known \cite{Stone1932}, for an \textit{anti-self-adjoint operator} $A$ in a complex Hilbert space, $\sigma(A)\subseteq i{\mathbb R}$ and the strongly continuous operator group
$\left\{e^{tA}\right\}_{t\in {\mathbb R}}$ generated by $A$ is \textit{unitary}, which, in particular, implies that
\[
\|e^{tA}\|=1,\ t\in {\mathbb R}.
\]
Thus, from Theorem \ref{Thm1} (see also \cite[Theorem $1$]{Mark-Sich2019(1)}) and Proposition \ref{Prop1}, we derive the following corollary.
\begin{cor}[The Case of an Anti-Self-Adjoint Operator]\label{Cor1}\ \\
An anti-self-adjoint operator $A$ in a complex Hilbert space is non-hypercyclic and so is the strongly continuous unitary operator group $\left\{e^{tA}\right\}_{t\in {\mathbb R}}$ generated by $A$.
\end{cor}
\section{An Application}
Since, in the complex Hilbert space $L_2({\mathbb R})$, the differentiation operator $A:=\dfrac{d}{dx}$ with the domain
\[
W_2^1({\mathbb R}):=\left\{f\in L_2({\mathbb R})\middle|f(\cdot)\
\text{is \textit{absolutely continuous} on ${\mathbb R}$ with}\ f'\in L_2({\mathbb R}) \right\}
\]
is \textit{anti-self-adjoint} (see, e.g., \cite{Akh-Glaz}), by Corollary \ref{Cor1}, we obtain
\begin{cor}[The Case of Differentiation Operator]\label{Cor2}\ \\
In the complex Hilbert space $L_2({\mathbb R})$, the differentiation operator $A:=\dfrac{d}{dx}$ with the domain $D(A):=W_2^1({\mathbb R})$ is non-hypercyclic and so is the left-translation strongly continuous unitary operator group generated by $A$.
\end{cor}
\begin{rem}
In a different setting, the situation with the differentiation operator can be vastly different
(cf. \cite[Example $2.21$]{Grosse-Erdmann-Manguillot}, \cite[Corollary $2.3$]{B-Ch-S2001}, \cite[Corollary $4.1$]{E-H2005}, and \cite[Theorem $3.1$]{B-B-T2008}).
\end{rem}
\section{Concluding Remark}
Since the exponentials given by \eqref{expf1} describe all \textit{weak/mild solutions} of evolution equation \eqref{+} (see Preliminaries), Theorem \ref{Thm1}, in particular, implies that such an equation is void of chaos (see \cite{Grosse-Erdmann-Manguillot}). By Proposition \ref{Prop1} (see also Preliminaries), the same is true for evolution equation \eqref{1} provided $\sigma(A)\subseteq i{\mathbb R}$.
\section{Acknowledgments}
I would like to express sincere gratitude to Dr.~Oscar Vega of the Department of Mathematics, California State University, Fresno, for his gift of the book \cite{Guirao-Montesinos-Zizler}, the reading of which inspired the above findings. My utmost appreciation goes to Professor Karl Grosse-Erdmann of the Universit\'e de Mons Institut de Math\'ematique for his interest in my endeavors in the realm of unbounded hypercyclicity and chaoticity, for turning my attention to the existing research on the subject of unbounded linear hypercyclicity and chaos, and for kindly communicating certain relevant references.
\end{document}
|
\begin{document}
\date{\today}
\title[spherical CR structure on a two-cusped hyperbolic 3-manifold]{A uniformizable spherical CR structure on a two-cusped hyperbolic 3-manifold}
\author{Yueping Jiang, Jieyan Wang and Baohua Xie}
\address{School of Mathematics \\ Hunan University \\ Changsha \\ China}
\email{[email protected], [email protected], [email protected]}
\keywords{Complex hyperbolic space, spherical CR uniformization, triangle
groups, Ford domain, hyperbolic 3-manifolds}
\subjclass[2010]{20H10, 57M50, 22E40, 51M10.}
\thanks{Y. Jiang was supported by NSFC (No.11631010). J. Wang was supported by NSFC (No.11701165). B. Xie was supported by NSFC (No.11871202) and Hunan Provincial Natural Science Foundation of China (No.2018JJ3024).}
\maketitle
\begin{abstract}
Let $\langle I_{1}, I_{2}, I_{3}\rangle$ be the complex hyperbolic $(4,4,\infty)$ triangle group. In this paper we give a proof of a conjecture of Schwartz for $\langle I_{1}, I_{2}, I_{3}\rangle$. That is $\langle I_{1}, I_{2}, I_{3}\rangle$ is discrete and faithful if and only if $I_1I_3I_2I_3$ is nonelliptic. When $I_1I_3I_2I_3$ is parabolic, we show that the even subgroup $\langle I_2 I_3, I_2I_1 \rangle$ is the holonomy representation of a uniformizable spherical CR structure on the two-cusped hyperbolic 3-manifold $s782$ in SnapPy notation.
\end{abstract}
\section{Introduction}
Let $\mathbf{H}^2_{\mathbb{C}}$ be the complex hyperbolic plane and $\rm{PU}(2,1)$ its holomorphic isometry group. See Section \ref{sec:background} for more details. It is well known that $\mathbf{H}^2_{\mathbb{C}}$ is one of the rank one symmetric spaces and $\rm{PU}(2,1)$ is a semisimple Lie group.
$\mathbf{H}^2_{\mathbb{C}}$ can be viewed as the unit ball in $\mathbb{C}^2$ equipped with the Bergman metric. Its ideal boundary $\partial \mathbf{H}^2_{\mathbb{C}}$ is the 3-sphere $S^3$.
The purpose of this paper is to study the geometry of discrete subgroups of $\rm{PU}(2,1)$.
Let $M$ be a 3-manifold. A \emph{spherical CR structure} on $M$ is a system of coordinate charts into $S^3$, such that the transition functions are restrictions of elements of $\rm{PU}(2,1)$. Any spherical CR structure on $M$ determines a pair $(\rho,d)$, where $\rho:\pi_1(M)\longrightarrow \rm{PU}(2,1)$ is the holonomy and $d:\widetilde{M}\longrightarrow S^3$ is the developing map. There is a special spherical CR structure.
A \emph{uniformizable spherical CR structure} on $M$ is a homeomorphism between $M$ and a quotient space $\Omega/\Gamma$, where $\Gamma$ is a discrete subgroup of $\rm{PU}(2,1)$ and $\Omega\subset\partial \mathbf{H}^2_{\mathbb{C}}$ is the discontinuity region of $\Gamma$. An interesting problem in complex hyperbolic geometry is to find (uniformizable) spherical CR structures on hyperbolic 3-manifolds.
Geometric structures modelled on the boundary of complex hyperbolic space are rather difficult to construct. The first example of a spherical CR structure existing on a cusped hyperbolic 3-manifold was discovered by Schwartz. In the work \cite{schwartz-litg}, Schwartz constructed a uniformizable spherical CR structure on the Whitehead link complement. He also constructed a closed hyperbolic 3-manifold that admits a uniformizable spherical CR structure in \cite{schwartz-real} at almost the same time.
Let $M_8$ be the complement of the figure eight knot. Several works have shown that $M_8$ admits (uniformizable) spherical CR structures.
In \cite{fal}, Falbel constructed two different representations $\rho_1, \rho_2$ of $\pi_1(M_8)$ in $\rm{PU}(2,1)$, and proved that $\rho_1$ is the holonomy of a spherical CR structure on $M_8$. In \cite{fal-wang}, Falbel and Wang proved that $\rho_2$ is also the holonomy of a spherical CR structure on $M_8$.
In \cite{der-fal}, Deraux and Falbel constructed a uniformizable spherical CR structure on $M_8$ whose holonomy is $\rho_2$. In \cite{deraux-family}, Deraux proved that there is a 1-parameter family of spherical CR uniformizaitons of the figure eight knot complement. This family is in fact a deformation of the uniformization constructed in \cite{der-fal}.
Let us go back to the Whitehead link complement. It admits a uniformizable spherical CR structure which is different from Schwartz's.
In the recent work \cite{par-will2}, Parker and Will also constructed a spherical CR uniformization of the Whitehead link complement. By applying spherical CR Dehn surgery theorems to the uniformizations of the Whitehead link complement, one can get infinitely many manifolds which admit uniformizable spherical CR structures.
In \cite{schwartz-cr}, Schwartz proved a spherical CR Dehn surgery theorem, and applied it to the spherical CR uniformization of the Whitehead link complement constructed in \cite{schwartz-litg} to obtain infinitely many closed hyperbolic 3-manifolds which admit uniformizable spherical CR structures.
In \cite{acosta}, Acosta applied the spherical CR Dehn surgery theorem proved in \cite{acosta-scr} to the spherical CR uniformization of the Whitehead link complement constructed by Parker and Will in \cite{par-will2} to obtain infinitely many one-cusped hyperbolic 3-manifolds which admit uniformizable spherical CR structures. In particular, the spherical CR uniformization of the complement of the figure eight knot constructed by Deraux and Falbel \cite{der-fal} is contained in this family.
There are some hyperbolic 3-manifolds described in the SnapPy census (see \cite{cdw}) which admit spherical CR structures.
In \cite{deraux-scr}, Deraux proved that the cusped hyperbolic 3-manifold $m009$ admits a uniformizable spherical CR structure whose holonomy representation was constructed by Falbel, Koseleff and Rouillier in \cite{fkr}. In \cite{mx}, Ma and Xie proved that the one-cusped hyperbolic 3-manifolds $m038$ and $s090$ admit spherical CR uniformizations.
In this paper, we show that the two-cusped hyperbolic 3-manifold $s782$ admits a uniformizable spherical CR structure. By studying the action of the even subgroup of a discrete complex hyperbolic triangle group on $\mathbf{H}^2_{\mathbb{C}}$, we will prove that the quotient space of its discontinuity region is homeomorphic to $s782$. That means the holonomy representation of the spherical CR uniformization of $s782$ is a triangle group.
Now let us talk about the complex hyperbolic triangle groups. Let $\Delta_{p,q,r}$ be the abstract $(p,q,r)$ reflection triangle group with the presentation
$$\langle \sigma_1, \sigma_2, \sigma_3 | \sigma^2_1=\sigma^2_2=\sigma^2_3=(\sigma_2 \sigma_3)^p=(\sigma_3 \sigma_1)^q=(\sigma_1 \sigma_2)^r=id \rangle,$$
where $p,q,r$ are positive integers or $\infty$ in which case the corresponding relation disappears.
A \emph{complex hyperbolic $(p,q,r)$ triangle group} is a representation of $\Delta_{p,q,r}$ in $\rm{PU}(2,1)$ which maps the generators to complex involutions fixing complex lines in $\mathbf{H}^2_{\mathbb{C}}$. The study of complex hyperbolic triangle groups was begun by Goldman and Parker. In \cite{Go-P}, Goldman and Parker studied the complex $(\infty,\infty,\infty)$ triangle groups. They conjectured that a representation of $\Delta_{\infty,\infty,\infty}$ into $\rm{PU}(2,1)$ is discrete and faithful if and only if the image of $\sigma_1 \sigma_2 \sigma_3$ is nonelliptic. The Goldman-Parker conjecture was proved by Schwartz in \cite{schwartz-go-p1} (with a better proof in \cite{schwartz-go-p2}). In particular, the representation with the image of $\sigma_1 \sigma_2 \sigma_3$ being parabolic is closely related to the holonomy of the spherical CR uniformization of the Whitehead link complement constructed in \cite{schwartz-litg}. In the survey \cite{schwartz-icm}, a series of conjectures on complex hyperbolic triangle groups is put forward. Schwartz conjectured the following:
\begin{conj}[Schwartz \cite{schwartz-icm}]\label{conj:schwartz}
Suppose that $p\leq q \leq r$. Let $\langle I_1, I_2, I_3 \rangle$ be a complex hyperbolic $(p,q,r)$ triangle group. Then $\langle I_1, I_2, I_3 \rangle$ is a discrete and faithful representation of $\Delta_{p,q,r}$ if and only if $I_1I_3I_2I_3$ and $I_1I_2I_3$ are nonelliptic. Moreover,
\begin{itemize}
\item If $3 \leq p <10$, then $\langle I_1, I_2, I_3 \rangle$ is discrete and faithful if and only if $I_1I_3I_2I_3$ is nonelliptic.
\item If $p>13$, then $\langle I_1, I_2, I_3 \rangle$ is discrete and faithful if and only if $I_1I_2I_3$ is nonelliptic.
\end{itemize}
\end{conj}
In a recent work \cite{par-will2}, Parker and Will proved Conjecture \ref{conj:schwartz} for complex hyperbolic $(3,3,\infty)$ groups. They also showed that when $I_1I_3I_2I_3$ is parabolic the quotient of $\mathbf{H}^2_{\mathbb{C}}$ by the group $\langle I_2 I_3, I_2I_1 \rangle$ is a complex hyperbolic orbifold whose boundary is a spherical CR uniformization of the Whitehead link complement.
In \cite{pwx}, Parker, Wang and Xie proved Conjecture \ref{conj:schwartz} for complex hyperbolic $(3,3,n)$ groups with $n\geq 4$. Furthermore, Acosta \cite{acosta} showed that when $I_1I_3I_2I_3$ is parabolic the group $\langle I_2 I_3, I_2I_1 \rangle$ is the holonomy representation of a uniformizable spherical CR structure on the Dehn surgery of the Whitehead link complement on one cusp of type $(1,n-3)$.
In this paper, we give a proof of Conjecture \ref{conj:schwartz} for the complex hyperbolic $(4,4,\infty)$ triangle groups and further analyze the group when $I_1I_3I_2I_3$ is parabolic. Our result is as follows:
\begin{thm}
Let $\langle I_1, I_2, I_3 \rangle$ be a complex hyperbolic $(4,4,\infty)$ triangle group. Then $\langle I_1, I_2, I_3 \rangle$ is a discrete and faithful representation of $\Delta_{4,4,\infty}$ if and only if $I_1I_3I_2I_3$ is nonelliptic. Moreover, when $I_1I_3I_2I_3$ is parabolic the quotient of $\mathbf{H}^2_{\mathbb{C}}$ by the group $\langle I_2 I_3, I_2I_1 \rangle$ is a complex hyperbolic orbifold whose boundary is a spherical CR uniformization of the two-cusped hyperbolic 3-manifold $s782$ in the SnapPy census.
\end{thm}
In \cite{wyss}, Wyss-Gallifent studied the complex hyperbolic $(4,4,\infty)$ triangle groups. He discovered several discrete groups with $I_1I_3I_2I_3$ being regular elliptic of finite order and conjectured that there should be countably infinitely many. Thus, it should be very interesting to know what the manifold at infinity is for a group with $I_1I_3I_2I_3$ being regular elliptic of finite order. Motivated by the work of Acosta \cite{acosta}, we guess that the manifold is the Dehn surgery of the hyperbolic 3-manifold $s782$ on one cusp. We will treat this problem in another paper.
Our method is to construct Ford domains for the triangle groups acting on $\mathbf{H}^2_{\mathbb{C}}$. The space of complex hyperbolic $(4,4,\infty)$ triangle groups $\langle I_1, I_2, I_3 \rangle$ is parameterized by the angle $\theta\in[0,\pi/2)$ (see Section \ref{sec:parameter}).
Let $S=I_2I_3$, $T=I_2I_1$ and $\Gamma=\langle S, T \rangle$. Here $S$ is regular elliptic of order 4, and $T$ is parabolic fixing the point at infinity.
For each group in the parameter space, the Ford domain $D$ is the intersection of the closures of the exteriors of the isometric spheres for the elements $S$, $S^{-1}$, $S^2$, $(S^{-1}T)^2$ and their conjugates by the powers of $T$. The combinatorial structure of $D$ is the same except when $I_1I_3I_2I_3$ is parabolic, in which case there are additional parabolic fixed points. $D$ is preserved by the subgroup $\langle T \rangle$ and is a fundamental domain for the cosets of $\langle T \rangle$ in $\Gamma$. Its ideal boundary $\partial_{\infty}D$ is the complement of a tubular neighborhood of the $T$-invariant $\mathbb{R}$-circle (a horotube in the sense of \cite{schwartz-cr}). By intersecting $\partial_{\infty}D$ with a fundamental domain for $\langle T \rangle$ acting on $\partial\mathbf{H}^2_{\mathbb{C}}$, we obtain a fundamental domain for $\Gamma$ acting on its discontinuity region. See Section \ref{sec:ford}.
When $I_1I_3I_2I_3$ is parabolic, that is $\theta=\pi/3$, by studying the combinatorial properties of the fundamental domain for $\Gamma$ acting on its discontinuity region $\Omega(\Gamma)$, we prove that the quotient $\Omega(\Gamma)/\Gamma$ is homeomorphic to the two-cusped hyperbolic 3-manifold $s782$. In this case, there are four additional parabolic fixed points, fixed by $T^{-1}S^2$, $S^2T^{-1}$, $ST^{-1}S$ and $T^{-1}ST^{-1}ST$, besides the point at infinity, which is the fixed point of $T$. See Section \ref{sec:manifold}.
\subsection*{Acknowledgements\label{ackowledgements}} We thank Jiming Ma for his help in the proof of Theorem \ref{thm:s782}. The third author also would like to thank Jiming Ma for numerous helpful discussions on complex hyperbolic geometry during his visiting at Fudan University.
\section{Background}\label{sec:background}
The purpose of this section is to introduce briefly complex hyperbolic geometry. One can refer to Goldman's book \cite{Go} for more details.
\subsection{Complex hyperbolic plane}
Let $\langle {\bf{z}}, {\bf{w}} \rangle={\bf{w}^{*}}H{\bf{z}}$ be the Hermitian form on ${\mathbb{C}}^3$ associated to $H$, where $H$ is the Hermitian matrix
$$
H=\left[
\begin{array}{ccc}
0 & 0 & 1 \\
0 & 1 & 0 \\
1 & 0 & 0 \\
\end{array}
\right].
$$
Then ${\mathbb{C}}^3$ is the union of negative cone $V_{-}$, null cone $V_{0}$ and positive cone $V_{+}$, where
\begin{eqnarray*}
V_{-} &=& \left\{ {\bf{z}}\in {\mathbb{C}}^3-\{0\} : \langle {\bf{z}}, {\bf{z}} \rangle <0 \right\}, \\
V_{0} &=& \left\{ {\bf{z}}\in {\mathbb{C}}^3-\{0\} : \langle {\bf{z}}, {\bf{z}} \rangle =0 \right\}, \\
V_{+} &=& \left\{ {\bf{z}}\in {\mathbb{C}}^3-\{0\} : \langle {\bf{z}}, {\bf{z}} \rangle >0 \right\}.
\end{eqnarray*}
\begin{defn}
Let $P: {\mathbb{C}}^3-\{0\} \rightarrow {\mathbb{C}P^2}$ be the projectivization map.
Then the \emph{complex hyperbolic plane} $\mathbf{H}^2_{\mathbb{C}}$ is defined to be $P(V_{-})$ and its {boundary} $\partial \mathbf{H}^2_{\mathbb{C}}$ is defined to be $P(V_{0})$.
This is the \emph{Siegel domain model} of $\mathbf{H}^2_{\mathbb{C}}$.
Let $d(u,v)$ be the distance between two points $u,v \in \mathbf{H}^2_{\mathbb{C}}$.
Then the \emph{Bergman metric} on complex hyperbolic plane is given by the distance formula
$$
\cosh^2\left(\frac{d(u,v)}{2}\right)=\frac{\langle {\bf{u}}, {\bf{v}} \rangle\langle {\bf{v}}, {\bf{u}} \rangle}{\langle {\bf{u}}, {\bf{u}} \rangle \langle {\bf{v}}, {\bf{v}} \rangle},
$$
where ${\bf{u}}, {\bf{v}} \in {\mathbb{C}}^3$ are lifts of $u,v$.
\end{defn}
There is another model of $\mathbf{H}^2_{\mathbb{C}}$.
\begin{defn}\label{def:cayley}
The \emph{ball model} of $\mathbf{H}^2_{\mathbb{C}}$ is the unit ball in $\mathbb{C}^2$, which is given by the Hermitian matrix $J=\mathrm{diag}(1,1,-1)$. In this model, $\partial\mathbf{H}^2_{\mathbb{C}}$ is then the 3-dimensional sphere $S^3\subset \mathbb{C}^2$.
The \emph{Cayley transform} $C$ is given by
$$
C=\frac{1}{\sqrt{2}}\left(
\begin{array}{ccc}
1 & 0 & 1 \\
0 & \sqrt{2} & 0 \\
1 & 0 & -1 \\
\end{array}
\right).
$$
It satisfies $C^{*}HC=J$ and interchanges the Siegel domain model and the ball model of $\mathbf{H}^2_{\mathbb{C}}$.
\end{defn}
Let $\mathcal{N}=\mathbb{C}\times \mathbb{R}$ be the Heisenberg group with product
$$
[z,t]\cdot [\zeta,\nu]=[z+\zeta,t+\nu-2{\rm{Im}}(\bar{z}\zeta)].
$$
Then, in the Siegel domain model of $\mathbf{H}^2_{\mathbb{C}}$, the boundary of complex hyperbolic plane $\partial \mathbf{H}^2_{\mathbb{C}}$ can be identified to the union $\mathcal{N}\cup \{q_{\infty}\}$, where $q_{\infty}$ is the point at infinity.
The \emph{standard lift} of $q_{\infty}$ and $q=[z,t]\in\mathcal{N}$ in $\mathbb{C}^3$ are
\begin{equation}\label{eq:lift}
{\bf{q}_{\infty}}=\left[
\begin{array}{c}
1 \\
0 \\
0 \\
\end{array}
\right],\quad
{\bf{q}}=\left[
\begin{array}{c}
\frac{-|z|^2+it}{2} \\
z \\
1 \\
\end{array}
\right].
\end{equation}
The closure of complex hyperbolic plane $\mathbf{H}^2_{\mathbb{C}} \cup \partial \mathbf{H}^2_{\mathbb{C}}$ can be identified to the union of ${\mathcal{N}}\times{\mathbb{R}_{\geq 0}}$ and $\{q_{\infty}\}$.
Any point $q=(z,t,u)\in{\mathcal{N}}\times{\mathbb{R}_{\geq0}}$ has the standard lift
$$
{\bf{q}}=\left[
\begin{array}{c}
\frac{-|z|^2-u+it}{2} \\
z \\
1 \\
\end{array}
\right].
$$
Here $(z,t,u)$ are called the \emph{horospherical coordinates} of $\mathbf{H}^2_{\mathbb{C}} \cup \partial \mathbf{H}^2_{\mathbb{C}}$.
\begin{defn}
The \emph{Cygan metric} $d_{\textrm{Cyg}}$ on $\partial \mathbf{H}^2_{\mathbb{C}} -\{q_{\infty}\}$ is defined to be
\begin{equation}\label{eq:cygan-metric}
d_{\textrm{Cyg}}(p,q)=|2\langle {\bf{p}}, {\bf{q}} \rangle|^{1/2}=\left| |z-w|^2-i(t-s+2{\rm{Im}}(z\bar{w})) \right|^{1/2},
\end{equation}
where $p=[z,t]$ and $q=[w,s]$.
The Cygan metric satisfies the properties of a distance.
The \emph{extended Cygan metric} on $\mathbf{H}^2_{\mathbb{C}}$ is given by the formula
\begin{equation}\label{eq:cygan-metric-extend}
d_{\textrm{Cyg}}(p,q)=\left| |z-w|^2+|u-v|-i(t-s+2{\rm{Im}}(z\bar{w})) \right|^{1/2},
\end{equation}
where $p=(z,t,u)$ and $q=(w,s,v)$.
\end{defn}
The formula $d_{\textrm{Cyg}}(p,q)=|2\langle {\bf{p}}, {\bf{q}} \rangle|^{1/2}$ remains valid even if one of $p$ and $q$ lies on $\partial \mathbf{H}^2_{\mathbb{C}}$.
A \emph{Cygan sphere} is a sphere for the extended Cygan distance.
There are two kinds of 2-dimensional totally geodesic subspaces of $\mathbf{H}^2_{\mathbb{C}}$: complex lines and (totally real) Lagrangian planes.
\begin{defn}
Let $\textbf{v}^{\perp}$ be the orthogonal space of $\textbf{v}\in V_{+}$ with respect to the Hermitian form. The intersection of the projective line $P(\textbf{v}^{\perp})$ with $\mathbf{H}^2_{\mathbb{C}}$ is called a \emph{complex line}. The vector $\textbf{v}$ is its \emph{polar vector}.
\end{defn}
The ideal boundary of a complex line on $\partial\mathbf{H}^2_{\mathbb{C}}$ is called a \emph{$\mathbb{C}$-circle}. In the Heisenberg group, $\mathbb{C}$-circles are either vertical lines or ellipses whose projections on the $z$-plane are circles.
Let $\mathbf{H}^2_{\mathbb{R}}=\{ (x_1, x_2) \in \mathbf{H}^2_{\mathbb{C}} : x_1, x_2 \in \mathbb{R} \}$ be the set of real points. $\mathbf{H}^2_{\mathbb{R}}$ is a Lagrangian plane.
All the Lagrangian planes are the images of $\mathbf{H}^2_{\mathbb{R}}$ by isometries of $\mathbf{H}^2_{\mathbb{C}}$. The ideal boundary of a Lagrangian plane is called an \emph{$\mathbb{R}$-circle}. In the Heisenberg group, $\mathbb{R}$-circles are either straight lines or lemniscate curves whose projections on the $z$-plane are figure-eight curves.
\subsection{Isometries}
Let ${\rm{SU}}(2,1)$ be the group of special unitary matrices preserving the Hermitian form. Then the projective unitary group $\rm{PU}(2,1)={\rm{SU}}(2,1)/\{I, \omega I, \omega^2 I\}$ is the holomorphic isometry group of $\mathbf{H}^2_{\mathbb{C}}$, where $\omega=(-1+i\sqrt{3})/2$ is a primitive cube root of unity.
Observe that complex conjugation also preserves the Hermitian form. Thus the full isometry group of $\mathbf{H}^2_{\mathbb{C}}$ is generated by $\rm{PU}(2,1)$ and complex conjugation.
\begin{defn}
Any isometry $g\in\rm{PU}(2,1)$ is \emph{loxodromic} if it has exactly two fixed points on $\partial \mathbf{H}^2_{\mathbb{C}}$. $g$ is \emph{parabolic} if it has exactly one fixed point on $\partial \mathbf{H}^2_{\mathbb{C}}$.
$g$ is \emph{elliptic} if it has at least one fixed point in $\mathbf{H}^2_{\mathbb{C}}$.
\end{defn}
The types of isometries can be determined by the traces of their matrix realizations, see Theorem 6.2.4 of Goldman \cite{Go}. Now suppose that $A\in{\rm{SU}}(2,1)$ has real trace. Then $A$ is elliptic if $-1\leq{\rm{tr}(A)}<3$.
Moreover, $A$ is unipotent if ${\rm{tr}(A)}=3$. In particular, if ${\rm{tr}(A)}=-1,0,1$, $A$ is elliptic of order 2, 3, 4 respectively.
There is a special class of elliptic elements of order two.
\begin{defn}
The \emph{complex involution} on complex line $C$ with polar vector ${\bf{n}}$ is given by the following formula:
\begin{equation}\label{eq:involution}
I_{C}({\bf{z}})=-{\bf{z}}+\frac{2\langle {\bf{z}}, {\bf{n}} \rangle}{\langle {\bf{n}}, {\bf{n}} \rangle} {\bf{n}}.
\end{equation}
It is obvious that $I_{C}$ is a holomorphic isometry fixing the complex line $C$.
\end{defn}
There is a special class of unipotent elements in $\rm{PU}(2,1)$.
\begin{defn}
A left \emph{Heisenberg translation} associated to $[z,t]\in\mathcal{N}$ is given by the matrix
\begin{equation}\label{eq:translation}
T_{[z,t]}=\left[
\begin{array}{ccc}
1 & -\bar{z} & \frac{-|z|^2+it}{2} \\
0 & 1 & z \\
0 & 0 & 1 \\
\end{array}
\right].
\end{equation}
It is obvious that $T_{[z,t]}$ fixes $q_{\infty}$ and maps $[0,0]\in\mathcal{N}$ to $[z,t]$.
\end{defn}
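For instance, applying $T_{[z,t]}$ to the standard lift \eqref{eq:lift} of $[0,0]$ gives
\[
T_{[z,t]}\left[
\begin{array}{c}
0 \\
0 \\
1 \\
\end{array}
\right]
=\left[
\begin{array}{c}
\frac{-|z|^2+it}{2} \\
z \\
1 \\
\end{array}
\right],
\]
which is exactly the standard lift of $[z,t]$.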
\subsection{Isometric spheres and Ford polyhedron}
Suppose that $g=(g_{ij})^3_{i,j=1}\in\rm{PU}(2,1)$ does not fix $q_{\infty}$. Then it is obvious that $g_{31}\neq 0$.
We first recall the definition of isometric spheres and relevant properties; see for instance \cite{par}.
\begin{defn}
The \emph{isometric sphere} of $g$, denoted by $\mathcal{I}(g)$, is the set
\begin{equation}\label{eq:isom-sphere}
\mathcal{I}(g)=\{ p \in {\mathbf{H}^2_{\mathbb{C}} \cup \partial\mathbf{H}^2_{\mathbb{C}}} : |\langle {\bf{p}}, {\bf{q}}_{\infty} \rangle | = |\langle {\bf{p}}, g^{-1}({\bf{q}}_{\infty}) \rangle| \}.
\end{equation}
\end{defn}
The isometric sphere $\mathcal{I}(g)$ is the Cygan sphere with center
$$g^{-1}({\bf{q}}_{\infty})=\left[{\overline{g_{32}}}/{\overline{g_{31}}},2{\rm{Im}}({\overline{g_{33}}}/{\overline{g_{31}}})\right]$$
and radius $r_g=\sqrt{{2}/{|g_{31}|}}$.
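A sketch of where these formulas come from (working with a matrix representative $g\in{\rm{SU}}(2,1)$, so that $g^{-1}=Hg^{*}H$): the vector $g^{-1}{\bf{q}}_{\infty}$ has entries $(\overline{g_{33}},\overline{g_{32}},\overline{g_{31}})$, which is $\overline{g_{31}}$ times the standard lift \eqref{eq:lift} of the point $\left[{\overline{g_{32}}}/{\overline{g_{31}}},2{\rm{Im}}({\overline{g_{33}}}/{\overline{g_{31}}})\right]$. Since $|\langle {\bf{p}}, {\bf{q}}_{\infty} \rangle|=1$ for a standard lift ${\bf{p}}$, condition \eqref{eq:isom-sphere} becomes $|g_{31}|\,|\langle {\bf{p}}, {\bf{q}} \rangle|=1$, where ${\bf{q}}$ is the standard lift of $g^{-1}(q_{\infty})$, that is, $|2\langle {\bf{p}}, {\bf{q}} \rangle|^{1/2}=\sqrt{2/|g_{31}|}$; by \eqref{eq:cygan-metric} and \eqref{eq:cygan-metric-extend} this is exactly the Cygan sphere of radius $r_g$ centered at $g^{-1}(q_{\infty})$.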
The \emph{exterior} of $\mathcal{I}(g)$ is the set
\begin{equation}\label{eq:exterior}
\{ p \in {\mathbf{H}^2_{\mathbb{C}} \cup \partial\mathbf{H}^2_{\mathbb{C}}} : |\langle {\bf{p}}, {\bf{q}}_{\infty} \rangle | > |\langle {\bf{p}}, g^{-1}({\bf{q}}_{\infty}) \rangle| \}.
\end{equation}
The \emph{interior} of $\mathcal{I}(g)$ is the set
\begin{equation}\label{eq:interior}
\{ p \in {\mathbf{H}^2_{\mathbb{C}} \cup \partial\mathbf{H}^2_{\mathbb{C}}} : |\langle {\bf{p}}, {\bf{q}}_{\infty} \rangle | < |\langle {\bf{p}}, g^{-1}({\bf{q}}_{\infty}) \rangle| \}.
\end{equation}
The isometric spheres are paired as the following.
\begin{lem}[ \cite{Go}, Section 5.4.5]\label{lem:goldman}
Let $g$ be an element in $\rm{PU}(2,1)$ which does not fix $q_{\infty}$. Then $g$ maps $\mathcal{I}(g)$ to $\mathcal{I}(g^{-1})$,
and the exterior of $\mathcal{I}(g)$ to the interior of $\mathcal{I}(g^{-1})$.
Besides, for any unipotent transformation $h\in\rm{PU}(2,1)$ fixing $q_{\infty}$, we have $\mathcal{I}(g)=\mathcal{I}(hg)$.
\end{lem}
Since isometric spheres are Cygan spheres, we now recall some facts about Cygan spheres.
Let $S_{[0,0]}(r)$ be the Cygan sphere with center $[0,0]$ and radius $r>0$. Then
\begin{equation}\label{eq:cygan-sphere}
S_{[0,0]}(r)=\left\{ (z,t,u)\in\mathbf{H}^2_{\mathbb{C}} \cup \partial\mathbf{H}^2_{\mathbb{C}}: (|z|^2+u)^2+t^2=r^4 \right\}.
\end{equation}
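This follows from the extended Cygan metric \eqref{eq:cygan-metric-extend}: for $p=(z,t,u)$ and $q=[0,0]$ (i.e.\ $w=s=v=0$) we get
\[
d_{\textrm{Cyg}}(p,q)=\left| |z|^2+u-it \right|^{1/2},
\]
so $d_{\textrm{Cyg}}(p,q)=r$ is equivalent to $(|z|^2+u)^2+t^2=r^4$.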
The geographic coordinates for Cygan sphere will play an important role in our calculation; see Section 2.5 of \cite{par-will2}.
\begin{defn}
The \emph{geographic coordinates} $(\alpha,\beta,w)$ of $q=q(\alpha,\beta,w)\in S_{[0,0]}(r)$ is given by the lift
\begin{equation}\label{eq:geog-coor}
{\bf{q}}={\bf q}(\alpha,\beta,w)=\left[
\begin{array}{c}
-r^2e^{-i\alpha}/2 \\
rwe^{i(-\alpha/2+\beta)} \\
1 \\
\end{array}
\right],
\end{equation}
where $\alpha\in [-\pi/2,\pi/2]$, $\beta\in [0, \pi)$ and $w\in [-\sqrt{\cos(\alpha)},\sqrt{\cos(\alpha)}]$.
The ideal boundary of $S_{[0,0]}(r)$ on $\partial\mathbf{H}^2_{\mathbb{C}}$ are the points with $w=\pm\sqrt{\cos(\alpha)}$.
\end{defn}
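One can check directly that these lifts parametrize $S_{[0,0]}(r)$: comparing \eqref{eq:geog-coor} with the standard lift of a point with horospherical coordinates $(z,t,u)$ gives $z=rwe^{i(-\alpha/2+\beta)}$, $|z|^2+u=r^2\cos\alpha$ and $t=r^2\sin\alpha$, hence $(|z|^2+u)^2+t^2=r^4$ as in \eqref{eq:cygan-sphere}, and $u=r^2(\cos\alpha-w^2)\ge 0$ exactly on the stated range of $w$, with $u=0$ (the ideal boundary) when $w=\pm\sqrt{\cos(\alpha)}$.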
We are interested in the intersection of Cygan spheres.
\begin{prop}[ \cite{par-will2}, Proposition 2.10]\label{prop:connect}
The intersection of two Cygan spheres is connected.
\end{prop}
\begin{rem} This intersection is often called a \emph{Giraud disk}.
\end{rem}
The following property should be useful to describe the intersection of Cygan spheres.
\begin{prop}[\cite{par-will2}, Proposition 2.12]
Let $S_{[0,0]}(r)$ be a Cygan sphere with {geographic coordinates} $(\alpha,\beta,w)$.
\begin{enumerate}
\item The level sets of $\alpha$ are complex lines, called slices of $S_{[0,0]}(r)$.
\item The level sets of $\beta$ are Lagrangian planes, called meridians of $S_{[0,0]}(r)$.
\item The set of points with $w=0$ is the spine of $S_{[0,0]}(r)$. It is a geodesic contained in every meridian.
\end{enumerate}
\end{prop}
A central work in this paper is to construct a polyhedron for a finitely generated subgroup of $\rm{PU}(2,1)$.
\begin{defn}
Let $G$ be a discrete subgroup of $\rm{PU}(2,1)$. The \emph{Ford polyhedron} $D_{G}$ for $G$ is the set
$$
D_{G}=\left\{ p\in{\mathbf{H}^2_{\mathbb{C}}\cup\partial\mathbf{H}^2_{\mathbb{C}}} : |\langle {\bf{p}}, {\bf{q}}_{\infty} \rangle | \geq |\langle {\bf{p}}, g^{-1}{\bf{q}}_{\infty} \rangle| ~\textrm{for all}~g\in G~\textrm{with}~g(q_{\infty})\neq q_{\infty} \right\}.
$$
\end{defn}
That is to say, $D_{G}$ is the intersection of the closures of the exteriors of all the isometric spheres for elements of $G$ which do not fix $q_{\infty}$.
In fact, the Ford polyhedron is the limit of Dirichlet polyhedra as the center point goes to $q_{\infty}$.
\section{The parameter space of complex hyperbolic $(4,4,\infty)$ triangle groups}\label{sec:parameter}
In this section, we give a parameter space of the complex hyperbolic $(4,4,\infty)$ triangle groups.
Let $\theta\in[0,\pi/2)$. Let $C_1, C_2,C_3$ be three complex lines in complex hyperbolic space $\mathbf{H}^2_{\mathbb{C}}$ with polar vectors $\bf{n_1},\bf{n_2},\bf{n_3}$, respectively. By conjugating by an element of $\rm{PU}(2,1)$, we may normalize so that $\partial C_3=\{ [z,0] \in\mathcal{N} : |z|=\sqrt{2}\}$, the circle in the $z$-plane of the Heisenberg group centered at the origin with radius $\sqrt{2}$. Then, up to complex conjugation and rotations about the $t$-axis of the Heisenberg group, the $\mathbb{C}$-circles $\partial C_1$ and $\partial C_2$ can be normalized to be $\partial C_1=\{[-e^{-i\theta},t]\in\mathcal{N}: t\in\mathbb{R} \}$ and $\partial C_2=\{[e^{i\theta},t]\in\mathcal{N}: t\in\mathbb{R} \}$. That is, $\partial C_1$ (respectively $\partial C_2$) is the vertical line whose projection to the $z$-plane of the Heisenberg group is the point $-e^{-i\theta}$ (respectively $e^{i\theta}$). The polar vectors of the complex lines can then be written as follows
$$
\bf{n_1}=\left[
\begin{array}{c}
e^{i\theta} \\
1 \\
0 \\
\end{array}
\right],\quad
\bf{n_2}=\left[
\begin{array}{c}
-e^{-i\theta} \\
1 \\
0 \\
\end{array}
\right],\quad
\bf{n_3}=\left[
\begin{array}{c}
1 \\
0 \\
1 \\
\end{array}
\right].
$$
Note that the two $\mathbb{C}$-circles $\partial C_1$ and $\partial C_2$ would coincide if $\theta=\pi/2$, which is why this value is excluded.
According to (\ref{eq:involution}), the complex involutions $I_1$, $I_2$ and $I_3$ on the complex lines are given as
$$
I_1=\left[
\begin{array}{ccc}
-1 & 2e^{i\theta} & 2 \\
0 & 1 & 2e^{-i\theta} \\
0 & 0 & -1 \\
\end{array}
\right], \quad
I_2=\left[
\begin{array}{ccc}
-1 & -2e^{-i\theta} & 2 \\
0 & 1 & -2e^{i\theta} \\
0 & 0 & -1 \\
\end{array}
\right],
$$
$$
I_3=\left[
\begin{array}{ccc}
0 & 0 & 1 \\
0 & -1 & 0 \\
1 & 0 & 0 \\
\end{array}
\right].
$$
\begin{prop}\label{prop:trianle}
Let $\theta\in[0,\pi/2)$ and $I_1$, $I_2$, $I_3$ be defined as above. Then $\langle I_1, I_2, I_3 \rangle$ is a complex hyperbolic $(4,4,\infty)$ triangle group. Furthermore, the element $I_1I_3I_2I_3$ is nonelliptic if and only if $0\leq \theta \leq \pi/3$.
\end{prop}
\begin{proof}
By computing the products of two involutions, we have
$$
I_2 I_3=\left[
\begin{array}{ccc}
2 & 2e^{-i\theta} & -1 \\
-2e^{i\theta} & -1 & 0 \\
-1 & 0 & 0 \\
\end{array}
\right],\quad
I_3 I_1=\left[
\begin{array}{ccc}
0 & 0 & -1 \\
0 & -1 & -2e^{-i\theta} \\
-1 & 2e^{i\theta} & 2 \\
\end{array}
\right],
$$
$$
I_2 I_1=\left[
\begin{array}{ccc}
1 & -4\cos(\theta) & -4(1+e^{-2i\theta}) \\
0 & 1 & 4\cos(\theta) \\
0 & 0 & 1 \\
\end{array}
\right].
$$
It is easy to verify that $I_2I_3$ and $I_3I_1$ are elliptic of order $4$ and $I_2I_1$ is unipotent.
Thus $\langle I_1, I_2, I_3 \rangle$ is a complex hyperbolic $(4,4,\infty)$ triangle group.
Since the trace of $I_1I_3I_2I_3$ is ${\rm{tr}}(I_1I_3I_2I_3)=7+8\cos(2\theta)$,
the element $I_1I_3I_2I_3$ is elliptic if and only if
$$
-1\leq {\rm{tr}}(I_1I_3I_2I_3)=7+8\cos(2\theta) <3,
$$
that is, $\pi/3 <\theta < \pi/2$ (recall that $\theta\in[0,\pi/2)$). Thus $I_1I_3I_2I_3$ is nonelliptic if and only if $0\leq \theta \leq \pi/3$.
Moreover, when $\theta=\pi/3$, the element $I_1I_3I_2I_3$ is parabolic.
\end{proof}
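The matrix facts used in this proof are easy to check by hand; for convenience, the short computational sketch below (ours, not part of the argument, assuming the Python library sympy is available) verifies them symbolically in $\theta$. The matrix names follow the text.
\begin{verbatim}
# Verification sketch: the four matrix facts used in the proof above.
from sympy import symbols, exp, I, eye, Matrix, simplify, cos

th = symbols('theta', real=True)
I1 = Matrix([[-1, 2*exp(I*th), 2], [0, 1, 2*exp(-I*th)], [0, 0, -1]])
I2 = Matrix([[-1, -2*exp(-I*th), 2], [0, 1, -2*exp(I*th)], [0, 0, -1]])
I3 = Matrix([[0, 0, 1], [0, -1, 0], [1, 0, 0]])

print(simplify((I2*I3)**4 - eye(3)))     # zero matrix: I2*I3 has order 4
print(simplify((I3*I1)**4 - eye(3)))     # zero matrix: I3*I1 has order 4
print(simplify((I2*I1 - eye(3))**3))     # zero matrix: I2*I1 is unipotent
print(simplify((I1*I3*I2*I3).trace() - (7 + 8*cos(2*th))))   # 0
\end{verbatim}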
If $\theta=0$, then $\langle I_1, I_2, I_3 \rangle$ preserves the Lagrangian plane whose ideal boundary is the $x$-axis of the Heisenberg group.
Thus $\langle I_1, I_2, I_3 \rangle$ is obviously a discrete subgroup of $\rm{PU}(2,1)$.
If $\theta=\pi/3$, we have the following corollary.
\begin{cor}
Let $\theta=\pi/3$. Then $\langle I_1, I_2, I_3 \rangle$ is a discrete subgroup of $\rm{PU}(2,1)$.
\end{cor}
\begin{proof}
Let $\theta=\pi/3$. Then
$$
I_2 I_3=\left[
\begin{array}{ccc}
2 & 1-i\sqrt{3} & -1 \\
-1-i\sqrt{3} & -1 & 0 \\
-1 & 0 & 0 \\
\end{array}
\right],\quad
I_3 I_1=\left[
\begin{array}{ccc}
0 & 0 & -1 \\
0 & -1 & -1+i\sqrt{3} \\
-1 & 1+i\sqrt{3} & 2 \\
\end{array}
\right],
$$
$$
I_2 I_1=\left[
\begin{array}{ccc}
1 & -2 & -2+2i\sqrt{3} \\
0 & 1 & 2 \\
0 & 0 & 1 \\
\end{array}
\right].
$$
Observe that $I_2I_3$ and $I_3I_1$ are contained in the Eisenstein-Picard modular group $\rm{PU}(2,1;\mathbb{Z}[\omega])$, where $\omega=(-1+i\sqrt{3})/2$ is a primitive cube root of unity. (See for example \cite{fal-par} for more details about the Eisenstein-Picard modular group.)
Since $\rm{PU}(2,1;\mathbb{Z}[\omega])$ is discrete, $\langle I_2I_3, I_3I_1 \rangle$ is also discrete. Moreover, since $I_2I_1=(I_2I_3)(I_3I_1)$, the subgroup $\langle I_2I_3, I_3I_1 \rangle$ has index at most two in $\langle I_1, I_2, I_3 \rangle$. Therefore, $\langle I_1, I_2, I_3 \rangle$ is discrete.
\end{proof}
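As a sanity check of the observation above, the following numerical sketch (ours, assuming numpy) verifies that at $\theta=\pi/3$ every entry of $I_2I_3$ and $I_3I_1$ is an Eisenstein integer $a+b\omega$ and that both matrices have determinant $1$.
\begin{verbatim}
import numpy as np

th = np.pi/3
I1 = np.array([[-1, 2*np.exp(1j*th), 2], [0, 1, 2*np.exp(-1j*th)], [0, 0, -1]])
I2 = np.array([[-1, -2*np.exp(-1j*th), 2], [0, 1, -2*np.exp(1j*th)], [0, 0, -1]])
I3 = np.array([[0, 0, 1], [0, -1, 0], [1, 0, 0]])
omega = (-1 + 1j*np.sqrt(3))/2

def eisenstein(x, tol=1e-9):
    b = x.imag/omega.imag          # write x = a + b*omega with a, b real
    a = x.real - b*omega.real
    return abs(a - round(a)) < tol and abs(b - round(b)) < tol

for g in (I2 @ I3, I3 @ I1):
    print(all(eisenstein(x) for x in g.flatten()),
          np.isclose(np.linalg.det(g), 1))     # True True, twice
\end{verbatim}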
\section{The Ford domain}\label{sec:ford}
For $\theta\in[0,\pi/3]$, let $S=I_2I_3$ and $T=I_2I_1$. Then $\Gamma=\langle S, T \rangle$ is a subgroup of $\langle I_1, I_2, I_3 \rangle$ of index two.
In this section, we will mainly prove that $\Gamma$ is discrete.
Our method is as follows: first we construct a candidate Ford domain $D$ (see Definition \ref{domain:D});
then we apply the Poincar\'{e} polyhedron theorem to show that $D$ is a fundamental domain for the cosets of $\langle T \rangle$ in $\Gamma$,
which in particular shows that $\Gamma$ is discrete.
\begin{defn}\label{def:isometric}
For $k\in\mathbb{Z}$, let
\begin{itemize}
\item $\mathcal{I}_k^{+}$ be the isometric sphere $\mathcal{I}(T^kST^{-k})=T^k\mathcal{I}(S)$,
\item $\mathcal{I}_k^{-}$ be the isometric sphere $\mathcal{I}(T^kS^{-1}T^{-k})=T^k\mathcal{I}(S^{-1})$,
\item $\mathcal{I}_k^{\star}$ be the isometric sphere $\mathcal{I}(T^kS^2T^{-k})=T^k\mathcal{I}(S^2)$,
\item $\mathcal{I}_k^{\diamond}$ be the isometric sphere $\mathcal{I}(T^k(S^{-1}T)^2T^{-k})=T^k\mathcal{I}((S^{-1}T)^2)$.
\end{itemize}
\end{defn}
Note that $S$ and $S^{-1}T$ both have order 4, so $S^2=S^{-2}$, $(S^{-1}T)^2=(S^{-1}T)^{-2}$.
The centers and radii of the isometric spheres $\mathcal{I}_{k}^{+}$, $\mathcal{I}_{k}^{-}$, $\mathcal{I}_{k}^{\star}$ and $\mathcal{I}_{k}^{\diamond}$ are listed in the following table.
\begin{table}[!htbp]
\centering
\begin{tabular}{ccc}
\toprule
\textbf{Isometric sphere} & \textbf{Center} & \textbf{Radius}\\
\midrule
$\mathcal{I}_{k}^{+}$&$[4k\cos(\theta),8k\sin(2\theta)]$&$\sqrt{2}$\\
$\mathcal{I}_{k}^{-}$&$[4k\cos(\theta)+2e^{i\theta},0]$&$\sqrt{2}$\\
$\mathcal{I}_{k}^{\star}$&$[4k\cos(\theta)+e^{i\theta},4k\sin(2\theta)]$&$1$\\
$\mathcal{I}_{k}^{\diamond}$&$[4k\cos(\theta)-e^{-i\theta},4k\sin(2\theta)]$&$1$\\
\bottomrule
\end{tabular}
\label{tab:isometric spheres}
\end{table}
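The entries of the table for $k=0$ can be recovered directly from the matrices. The sketch below (ours, not part of the argument, assuming sympy) applies the center and radius formula for isometric spheres recalled above to $S$, $S^{-1}$, $S^2$ and $(S^{-1}T)^2=(I_3I_1)^2$; the remaining rows follow by translating by $T^k$.
\begin{verbatim}
from sympy import symbols, exp, I, Matrix, simplify, im, sqrt

th = symbols('theta', real=True)
I1 = Matrix([[-1, 2*exp(I*th), 2], [0, 1, 2*exp(-I*th)], [0, 0, -1]])
I2 = Matrix([[-1, -2*exp(-I*th), 2], [0, 1, -2*exp(I*th)], [0, 0, -1]])
I3 = Matrix([[0, 0, 1], [0, -1, 0], [1, 0, 0]])

def centre_and_radius(g):
    g31, g32, g33 = g[2, 0], g[2, 1], g[2, 2]
    z = simplify(g32.conjugate()/g31.conjugate())
    t = simplify(2*im(g33.conjugate()/g31.conjugate()))
    return z, t, sqrt(2/abs(g31))

# S = I2*I3, S^{-1} = I3*I2, S^2, (S^{-1}T)^2 = (I3*I1)^2
for name, g in [('I_0^+', I2*I3), ('I_0^-', I3*I2),
                ('I_0^star', (I2*I3)**2), ('I_0^diamond', (I3*I1)**2)]:
    print(name, centre_and_radius(g))   # matches the k = 0 row of the table
\end{verbatim}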
\begin{prop}\label{prop:symmetry}
Let $k\in\mathbb{Z}$.
\begin{enumerate}
\item There is an antiholomorphic involution $\tau$ such that $\tau(\mathcal{I}_{k}^{+})=\mathcal{I}_{-k}^{+}$,
$\tau(\mathcal{I}_{k}^{-})=\mathcal{I}_{-k-1}^{-}$ and $\tau(\mathcal{I}_{k}^{\star})=\mathcal{I}_{-k}^{\diamond}$.
\item The complex involution $I_2$ interchanges $\mathcal{I}_{k}^{\star}$ and $\mathcal{I}_{-k}^{\star}$,
interchanges $\mathcal{I}_k^{+}$ and $\mathcal{I}_{-k}^{-}$, and interchanges $\mathcal{I}_k^{\diamond}$ and $\mathcal{I}_{-k+1}^{\diamond}$.
\end{enumerate}
\end{prop}
\begin{proof}
(1) Let $\tau : {\mathbb{C}}^3 \longrightarrow {\mathbb{C}}^3$ be given as follows:
$$
\tau : \left[
\begin{array}{c}
z_1 \\
z_2 \\
z_3 \\
\end{array}
\right] \longmapsto
\left[
\begin{array}{c}
\bar{z}_1 \\
-\bar{z}_2 \\
\bar{z}_3 \\
\end{array}
\right].
$$
Then ${\tau}^2$ is the identity. It is easy to see that $\tau$ fixes the polar vector ${\bf{n}}_3$, and interchanges the polar vectors ${\bf{n}}_1$ and ${\bf{n}}_2$.
Thus $\tau$ conjugates $I_3$ to itself, $I_1$ to $I_2$ and vice versa.
Therefore $\tau$ conjugates $T$ to $T^{-1}$, $S$ to $T^{-1}S$, $S^{-1}$ to $S^{-1}T$, and $S^2$ to $(T^{-1}S)^2=(S^{-1}T)^2$.
This implies that $\tau(\mathcal{I}_{k}^{+})=\mathcal{I}_{-k}^{+}$,
$\tau(\mathcal{I}_{k}^{-})=\mathcal{I}_{-k-1}^{-}$ and $\tau(\mathcal{I}_{k}^{\star})=\mathcal{I}_{-k}^{\diamond}$.
(2) The statement follows easily from the facts $I_2SI_2=S^{-1}$ and $I_2TI_2=T^{-1}$.
\end{proof}
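The two conjugation identities used in part (2) can also be checked symbolically; a minimal sketch (ours, assuming sympy):
\begin{verbatim}
from sympy import symbols, exp, I, Matrix, simplify

th = symbols('theta', real=True)
I1 = Matrix([[-1, 2*exp(I*th), 2], [0, 1, 2*exp(-I*th)], [0, 0, -1]])
I2 = Matrix([[-1, -2*exp(-I*th), 2], [0, 1, -2*exp(I*th)], [0, 0, -1]])
I3 = Matrix([[0, 0, 1], [0, -1, 0], [1, 0, 0]])
S, T = I2*I3, I2*I1

print(simplify(I2*S*I2 - I3*I2))   # zero matrix: I2*S*I2 = S^{-1}
print(simplify(I2*T*I2 - I1*I2))   # zero matrix: I2*T*I2 = T^{-1}
\end{verbatim}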
Before we consider the intersections of two isometric spheres, we give a useful technical lemma.
Suppose that $q \in \mathcal{I}_{0}^{+}$. Then by (\ref{eq:geog-coor}) the geographic coordinates of $q=q(\alpha,\beta,w)$ are given by the lift
\begin{equation}\label{eq:geog-coor-plus-0}
{\bf{q}}={\bf q}(\alpha,\beta,w)=\left[
\begin{array}{c}
-e^{-i\alpha} \\
\sqrt{2}we^{i(-\alpha/2+\beta)} \\
1 \\
\end{array}
\right]
\end{equation}
where $\alpha\in [-\pi/2,\pi/2]$, $\beta\in [0, \pi)$ and $w\in [-\sqrt{\cos(\alpha)},\sqrt{\cos(\alpha)}]$.
\begin{defn}\label{def:functions}
Let $(\alpha,\beta,w)$ be the geographic coordinates of $\mathcal{I}_{0}^{+}$. We define the following functions.
\begin{equation*}
f_{0}^{\star}(\alpha,\beta,w)= 2w^2+1+\cos(\alpha)-\sqrt{2}w\cos(-\alpha/2+\beta-\theta)-2\sqrt{2}w\cos(\alpha/2+\beta-\theta),
\end{equation*}
\begin{equation*}
f_{0}^{-}(\alpha,\beta,w)= 2w^2+1+\cos(\alpha)-\sqrt{2}w\cos(\alpha/2+\beta-\theta)-2\sqrt{2}w\cos(-\alpha/2+\beta-\theta),
\end{equation*}
\begin{equation*}
f_{-1}^{-}(\alpha,\beta,w)= 2w^2+1+\cos(\alpha)+\sqrt{2}w\cos(\alpha/2+\beta+\theta)+2\sqrt{2}w\cos(-\alpha/2+\beta+\theta).
\end{equation*}
\end{defn}
\begin{lem}\label{lem:functions}
Suppose that $\theta\in[0,\pi/3]$. Let $f_{0}^{\star}(\alpha,\beta,w)$, $f_{0}^{-}(\alpha,\beta,w)$ and $f_{-1}^{-}(\alpha,\beta,w)$ be the functions defined in Definition \ref{def:functions}. Suppose that $q \in \mathcal{I}_{0}^{+}$. Then we have the following properties.
\begin{enumerate}
\item $q$ lies on $\mathcal{I}_{0}^{\star}$ (resp. in its interior or exterior) if and only if $f_{0}^{\star}(\alpha,\beta,w)=0$ (resp. negative or positive);
\item $q$ lies on $\mathcal{I}_{0}^{-}$ (resp. in its interior or exterior) if and only if $f_{0}^{-}(\alpha,\beta,w)=0$ (resp. negative or positive);
\item $q$ lies on $\mathcal{I}_{-1}^{-}$ (resp. in its interior or exterior) if and only if $f_{-1}^{-}(\alpha,\beta,w)=0$ (resp. negative or positive).
\end{enumerate}
\end{lem}
\begin{proof}
(1) Any point $q \in \mathcal{I}_{0}^{+}$ lies on $\mathcal{I}_{0}^{\star}$ (resp. in its interior or exterior) if and only if the Cygan distance between $q$ and the center of $\mathcal{I}_{0}^{\star}$ is 1 (resp. less than 1 or greater than 1).
Using (\ref{eq:geog-coor-plus-0}), the difference between the fourth power of the Cygan distance from $q$ to the center of $\mathcal{I}_{0}^{\star}$ and $1$ (the fourth power of its radius) is
\begin{eqnarray*}
\lefteqn{\left| 2 \left\langle {\bf{q}}, \left[
\begin{array}{c}
-1/2 \\
e^{i\theta} \\
1 \\
\end{array}
\right] \right\rangle
\right|^2-1} \\
&=& 4\left| -e^{-i\alpha}+ \sqrt{2}we^{i(-\alpha/2+\beta-\theta)}-1/2 \right|^2-1 \\
&=& 4 \left( 2w^2+1+\cos(\alpha)-\sqrt{2}w\cos(-\alpha/2+\beta-\theta)-2\sqrt{2}w\cos(\alpha/2+\beta-\theta) \right) \\
&=& 4f_{0}^{\star}(\alpha,\beta,w).
\end{eqnarray*}
Hence, $q$ lies on $\mathcal{I}_{0}^{\star}$ (resp. in its interior or exterior) if and only if $f_{0}^{\star}(\alpha,\beta,w)=0$ (resp. negative or positive).
(2) Similarly, the difference between the fourth power of the Cygan distance from $q$ to the center of $\mathcal{I}_{0}^{-}$ and the fourth power of its radius is
\begin{eqnarray*}
\lefteqn{\left| 2 \left\langle {\bf{q}}, \left[
\begin{array}{c}
-2 \\
2e^{i\theta} \\
1 \\
\end{array}
\right] \right\rangle
\right|^2-4}\\
&=& 4\left| -e^{-i\alpha}+2 \sqrt{2}we^{i(-\alpha/2+\beta-\theta)}-2 \right|^2-4 \\
&=& 16 \left( 2w^2+1+\cos(\alpha)-\sqrt{2}w\cos(\alpha/2+\beta-\theta)-2\sqrt{2}w\cos(-\alpha/2+\beta-\theta) \right) \\
&=& 16 f_{0}^{-}(\alpha,\beta,w).
\end{eqnarray*}
(3) Similarly, the difference between the fourth power of the Cygan distance from $q$ to the center of $\mathcal{I}_{-1}^{-}$ and the fourth power of its radius is
\begin{eqnarray*}
\lefteqn{\left| 2 \left\langle {\bf{q}}, \left[
\begin{array}{c}
-2 \\
-2e^{-i\theta} \\
1 \\
\end{array}
\right] \right\rangle
\right|^2-4} \\
&=& 4\left| -e^{-i\alpha}-2 \sqrt{2}we^{i(-\alpha/2+\beta+\theta)}-2 \right|^2-4 \\
&=& 16 \left( 2w^2+1+\cos(\alpha)+\sqrt{2}w\cos(\alpha/2+\beta+\theta)+2\sqrt{2}w\cos(-\alpha/2+\beta+\theta) \right) \\
&=& 16 f_{-1}^{-}(\alpha,\beta,w).
\end{eqnarray*}
\end{proof}
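The three expansions in the proof can be spot-checked numerically; the sketch below (ours, assuming numpy) evaluates both sides of each identity at randomly sampled parameters.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

for _ in range(1000):
    th = rng.uniform(0, np.pi/3)
    a, b = rng.uniform(-np.pi/2, np.pi/2), rng.uniform(0, np.pi)
    w = rng.uniform(-np.sqrt(np.cos(a)), np.sqrt(np.cos(a)))
    q1, q2 = -np.exp(-1j*a), np.sqrt(2)*w*np.exp(1j*(-a/2 + b))
    s2 = np.sqrt(2)
    fs = 2*w**2 + 1 + np.cos(a) - s2*w*np.cos(-a/2+b-th) - 2*s2*w*np.cos(a/2+b-th)
    fm = 2*w**2 + 1 + np.cos(a) - s2*w*np.cos(a/2+b-th) - 2*s2*w*np.cos(-a/2+b-th)
    f1 = 2*w**2 + 1 + np.cos(a) + s2*w*np.cos(a/2+b+th) + 2*s2*w*np.cos(-a/2+b+th)
    assert np.isclose(4*abs(q1 + q2*np.exp(-1j*th) - 0.5)**2 - 1, 4*fs)
    assert np.isclose(4*abs(q1 + 2*q2*np.exp(-1j*th) - 2)**2 - 4, 16*fm)
    assert np.isclose(4*abs(q1 - 2*q2*np.exp(1j*th) - 2)**2 - 4, 16*f1)
print("all three identities hold on the sample")
\end{verbatim}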
Now, we begin to study the intersections of isometric spheres.
\begin{prop}
Suppose that $\theta\in[0,\pi/3]$. Then the isometric spheres $\mathcal{I}_k^{+}$, $k\in\mathbb{Z}$, are pairwise disjoint in $\mathbf{H}^2_{\mathbb{C}} \cup \partial\mathbf{H}^2_{\mathbb{C}}$.
\end{prop}
\begin{proof}
It suffices to show that $\mathcal{I}_{0}^{+}$ and $\mathcal{I}_k^{+}$ are disjoint for $|k|\geq1$.
Observe that $T$ is a Heisenberg translation associated with $[4\cos(\theta),8\sin(2\theta)]$.
Since the isometric sphere $\mathcal{I}_{0}^{+}$ has center $[0,0]$ and radius $\sqrt{2}$, the isometric sphere
$\mathcal{I}_k^{+}$ has center $[4k\cos(\theta), 8k\sin(2\theta)]$ and radius $\sqrt{2}$.
According to the Cygan metric given in (\ref{eq:cygan-metric}), the Cygan distance between the centers of $\mathcal{I}_{0}^{+}$ and $\mathcal{I}_k^{+}$ is
$$
4\sqrt{|k| \cos(\theta)}|k\cos(\theta)-i\sin(\theta)|^{1/2}\geq 4\sqrt{\cos(\theta)} \geq 2\sqrt{2}.
$$
Thus the Cygan distance between the centers of $\mathcal{I}_{0}^{+}$ and $\mathcal{I}_k^{+}$ is bigger than the sum of the radii, except when $k=\pm 1$ and $\theta=\pi/3$.
This implies that $\mathcal{I}_{0}^{+}$ and $\mathcal{I}_k^{+}$ are disjoint for all $|k|\geq 2$.
When $k=\pm 1$ and $\theta=\pi/3$, although the Cygan distance between the centers of $\mathcal{I}_{0}^{+}$ and $\mathcal{I}_{\pm 1}^{+}$ is the sum of the radii, we claim that
they are still disjoint.
Using the symmetry $\tau$ in Proposition \ref{prop:symmetry}, we only need to show that $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{1}^{+}=\emptyset$.
Suppose that $q \in \mathcal{I}_{0}^{+}$. Using the geographic coordinates of $q=q(\alpha,\beta,w)$ given in (\ref{eq:geog-coor-plus-0}), we can compute the difference between the fourth power of the Cygan distance from $q$ to the center of $\mathcal{I}_{1}^{+}$ and the fourth power of its radius. That is
\begin{eqnarray*}
\lefteqn{\left| 2 \left\langle {\bf{q}}, \left[
\begin{array}{c}
-4e^{-i\pi/3} \\
2 \\
1 \\
\end{array}
\right] \right\rangle
\right|^2-4} \\
&=& 4\left| -e^{-i\alpha}+ 2\sqrt{2}we^{i(-\alpha/2+\beta)}-4e^{i\pi/3} \right|^2-4 \\
&=& 32 \left( w^2-\sqrt{2}w\left(\cos(\alpha/2+\beta)/2+2\cos(\alpha/2-\beta+\pi/3)\right)+\cos(\alpha+\pi/3)+2 \right) \\
&=& 32f(\alpha,\beta,w).
\end{eqnarray*}
Here $f(\alpha,\beta,w)$ can be seen as a quadratic function of $w$, namely $f(w)=w^2-Bw+C$ with
$$B=\sqrt{2}\left(\cos(\alpha/2+\beta)/2+2\cos(\alpha/2-\beta+\pi/3)\right)$$ and $C=\cos(\alpha+\pi/3)+2$.
If $B^2-4C<0$, then it is obvious that $f(\alpha,\beta,w)>0$. If $B^2-4C\geq 0$, then $B\leq -2\sqrt{C}$ (which a numerical computation shows to be impossible) or $B\geq 2\sqrt{C}$. In the latter case we have $B-2\sqrt{\cos(\alpha)}\geq B-2\sqrt{C}\geq 0$, since $\cos(\alpha)\leq C$. This means that the symmetry axis $w=B/2$ of $f$ lies on the right side of $w=\sqrt{\cos(\alpha)}$, so $f$ is decreasing on $[-\sqrt{\cos(\alpha)},\sqrt{\cos(\alpha)}]$. Besides,
one can compute numerically that $f(\alpha,\beta,\sqrt{\cos(\alpha)})>0$ on the range of $\alpha$ and $\beta$.
So, we have $f(\alpha,\beta,w)>0$. This means that every point on $\mathcal{I}_{0}^{+}$ lies outside of $\mathcal{I}_{1}^{+}$.
Hence $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{1}^{+}=\emptyset$.
\end{proof}
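The numerical claim invoked in the last step can be checked by a crude grid search. The sketch below (ours, assuming numpy) evaluates $f$ at the endpoints $w=\pm\sqrt{\cos(\alpha)}$ over a grid in $(\alpha,\beta)$ and confirms that both minima are positive.
\begin{verbatim}
import numpy as np

alpha = np.linspace(-np.pi/2, np.pi/2, 601)
beta  = np.linspace(0, np.pi, 601)
A, B = np.meshgrid(alpha, beta)

def f(a, b, w):
    lin = np.sqrt(2)*(np.cos(a/2 + b)/2 + 2*np.cos(a/2 - b + np.pi/3))
    return w**2 - lin*w + np.cos(a + np.pi/3) + 2

w = np.sqrt(np.cos(A))
print(f(A, B, w).min() > 0, f(A, B, -w).min() > 0)   # True True
\end{verbatim}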
By a similar argument, we have the following proposition.
\begin{prop}\label{prop:s0plus}
Suppose that $\theta\in[0,\pi/3]$. Then
\begin{enumerate}
\item $\mathcal{I}_{0}^{+}$ and $\mathcal{I}_k^{-}$ are disjoint in $\mathbf{H}^2_{\mathbb{C}} \cup \partial\mathbf{H}^2_{\mathbb{C}}$, except possibly when $k=-1,0$,
\item $\mathcal{I}_{0}^{+}$ and $\mathcal{I}_k^{\star}$ are disjoint in $\mathbf{H}^2_{\mathbb{C}} \cup \partial\mathbf{H}^2_{\mathbb{C}}$, except possibly when $k=-1,0$,
\item $\mathcal{I}_{0}^{+}$ and $\mathcal{I}_k^{\diamond}$ are disjoint in $\mathbf{H}^2_{\mathbb{C}} \cup \partial\mathbf{H}^2_{\mathbb{C}}$, except possibly when $k=0,1$.
\end{enumerate}
\end{prop}
\begin{proof}
(1) Since $\mathcal{I}_{0}^{-}$ has center $[2e^{i\theta},0]$ and radius $\sqrt{2}$, the isometric sphere $\mathcal{I}_k^{-}$ has center $[4k\cos(\theta)+2e^{i\theta},0]$
and radius $\sqrt{2}$.
The Cygan distance between the centers of $\mathcal{I}_{0}^{+}$ and $\mathcal{I}_k^{-}$ is
$$\left|4k\cos(\theta)+2e^{i\theta}\right|=2\sqrt{\sin^2(\theta)+(2k+1)^2\cos^2(\theta)},$$
which is bigger than $2\sqrt{2}$ when $k\neq -1, 0$.
(2) Since $\mathcal{I}_{0}^{\star}$ has center $[e^{i\theta},0]$ and radius $1$, the isometric sphere $\mathcal{I}_k^{\star}$ has center $[4k\cos(\theta)+e^{i\theta},4k\sin(2\theta)]$
and radius $1$. The Cygan distance between the centers of $\mathcal{I}_{0}^{+}$ and $\mathcal{I}_k^{\star}$ is
$$\left||(4k+1)\cos(\theta)+i\sin(\theta)|^2-i(4k\sin(2\theta))\right|^{1/2}\geq |4k+1|\cos(\theta),$$
which is bigger than $1+\sqrt{2}$ when $k\neq -1, 0$.
(3) Since $\mathcal{I}_{0}^{\diamond}$ has center $[-e^{-i\theta},0]$ and radius $1$, the isometric sphere $\mathcal{I}_k^{\diamond}$ has center $[4k\cos(\theta)-e^{-i\theta},4k\sin(2\theta)]$ and radius $1$. The Cygan distance between the centers of $\mathcal{I}_{0}^{+}$ and $\mathcal{I}_k^{\diamond}$ is $$\left||(4k-1)\cos(\theta)+i\sin(\theta)|^2-i(4k\sin(2\theta))\right|^{1/2}\geq |4k-1|\cos(\theta),$$
which is bigger than $1+\sqrt{2}$ when $k\neq 0, 1$.
\end{proof}
Similarly, we have
\begin{prop}\label{prop:s0star}
Suppose that $\theta\in[0,\pi/3]$. Then
\begin{enumerate}
\item $\mathcal{I}_{0}^{\star}$ and $\mathcal{I}_k^{\star}$ ($k\neq 0$) are disjoint in $\mathbf{H}^2_{\mathbb{C}}$. Furthermore, when $\theta=\pi/3$ the closures of $\mathcal{I}_{0}^{\star}$ and $\mathcal{I}_{-1}^{\star}$
(respectively, $\mathcal{I}_{1}^{\star}$) are tangent at the parabolic fixed point of $T^{-1}S^2$ (respectively, $T(T^{-1}S^2)T^{-1}$) on $\partial \mathbf{H}^2_{\mathbb{C}}$.
\item $\mathcal{I}_{0}^{\star}$ and $\mathcal{I}_k^{\diamond}$ are disjoint in $\mathbf{H}^2_{\mathbb{C}} \cup \partial\mathbf{H}^2_{\mathbb{C}}$, except possibly when $k=0,1$.
\end{enumerate}
\end{prop}
\begin{proof}
(1) $\mathcal{I}_k^{\star}$ is a Cygan sphere with center $[4k\cos(\theta)+e^{i\theta},4k\sin(2\theta)]$ and radius $1$, thus the distance between the centers of
$\mathcal{I}_{0}^{\star}$ and $\mathcal{I}_{k}^{\star}$ is
$$
d_{\rm{Cyg}}([4k\cos(\theta)+e^{i\theta},4k\sin(2\theta)],[e^{i\theta},0])=|4k\cos(\theta)|\geq 2.
$$
The equality holds when $k=\pm 1$ and $\theta=\pi/3$.
When $\theta=\pi/3$, we know that $I_1I_3I_2I_3=T^{-1}S^2$ is unipotent. Since
$$\mathcal{I}_{0}^{\star}=\mathcal{I}(S^2)=\mathcal{I}(T^{-1}S^{2}),$$
and
$$\mathcal{I}_{-1}^{\star}=\mathcal{I}(T^{-1}S^2T)=\mathcal{I}(S^2T)=\mathcal{I}(S^{-2}T),$$
using Phillips's theorem (Theorem 6.1 of \cite{phillips}), the closures of $\mathcal{I}_{0}^{\star}$ and $\mathcal{I}_{-1}^{\star}$ are tangent at the fixed point of $T^{-1}S^2$ on $\partial \mathbf{H}^2_{\mathbb{C}}$.
Since
$$\mathcal{I}_{0}^{\star}=T(\mathcal{I}_{-1}^{\star}),\quad \mathcal{I}_{1}^{\star}=T(\mathcal{I}_{0}^{\star}),$$
the closures of $\mathcal{I}_{0}^{\star}$ and $\mathcal{I}_{1}^{\star}$ are tangent at the fixed point of $T(T^{-1}S^2)T^{-1}$ on $\partial \mathbf{H}^2_{\mathbb{C}}$.
(2) The distance between the centers of $\mathcal{I}_{0}^{\star}$ and $\mathcal{I}_k^{\diamond}$ is
\begin{eqnarray*}
\lefteqn{d_{\rm{Cyg}}([4k\cos(\theta)-e^{-i\theta},4k\sin(2\theta)],[e^{i\theta},0])}\\
&=& 2\sqrt{\cos(\theta)}\cdot|(2k-1)^2\cos(\theta)-i\sin(\theta)|^{1/2} \\
&\geq& 2|2k-1|\cos(\theta),
\end{eqnarray*}
which is bigger than $2$, except when $k=0,1$.
\end{proof}
\begin{lem}\label{lem:triple}
Suppose that $\theta\in[0,\pi/3]$. Then $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{-}=\emptyset$ except when $\theta=\pi/3$,
in which case the triple intersection is the point $[e^{i2\pi/3},-\sqrt{3}]\in\partial \mathbf{H}^2_{\mathbb{C}}$. Moreover, this point is the parabolic fixed point of $T^{-1}S^2$.
\end{lem}
\begin{proof}
Suppose that $q \in \mathcal{I}_{0}^{+}$. Using Lemma \ref{lem:functions},
the geographic coordinates $(\alpha,\beta,w)$ of $q \in \mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{-}$ should satisfy the following equation
\begin{equation}\label{eq:lemma0}
2w^2+1+\cos(\alpha)-\sqrt{2}w\cos\left(-\frac{\alpha}{2}+\beta-\theta\right)-2\sqrt{2}w\cos\left(\frac{\alpha}{2}+\beta-\theta\right) = 0,
\end{equation}
\begin{equation}\label{eq:lemma1}
2w^2+1+\cos(\alpha)+\sqrt{2}w\cos\left(\frac{\alpha}{2}+\beta+\theta\right)+2\sqrt{2}w\cos\left(-\frac{\alpha}{2}+\beta+\theta\right) = 0.
\end{equation}
Subtracting the two equations (\ref{eq:lemma0}) and (\ref{eq:lemma1}), we have
$$
2\sqrt{2}w\cos(\beta)\left(\cos(\alpha/2+\theta)+2\cos(\alpha/2-\theta) \right)=0.
$$
This implies that either $w=0$ or $\beta=\pi/2$, since $\left(\cos(\alpha/2+\theta)+2\cos(\alpha/2-\theta) \right)\neq 0$ for $\theta\in [0,\pi/3]$.
We know that the points with $w=0$ form the spine, which is contained in every meridian, in particular in the meridian with $\beta=\pi/2$.
Therefore, a necessary condition for $q \in \mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{-}$ is that $\beta=\pi/2$.
Substituting $\beta=\pi/2$ into the equation (\ref{eq:lemma0}) and simplifying, we have
\begin{equation}\label{eq3}
2w^2+2\cos^2(\alpha/2)+\sqrt{2}w\left( \sin(\alpha/2)\cos(\theta)-3\cos(\alpha/2)\sin(\theta) \right)=0.
\end{equation}
Let $b(\alpha,\theta)=\sin(\alpha/2)\cos(\theta)-3\cos(\alpha/2)\sin(\theta)$. It is easy to see that for every $\alpha$, the function $\theta\longmapsto b(\alpha,\theta)$ is decreasing on $[0,\pi/3]$.
The left hand side of the equation (\ref{eq3}) can be seen as a quadratic function of $w$ with positive leading coefficient.
Thus the equation (\ref{eq3}) has a solution only if $b^2-8\cos^2(\alpha/2)\geq 0$, that is, $b\geq 2\sqrt{2}\cos(\alpha/2)$ (which is impossible, since $b\leq b(\alpha,0)=\sin(\alpha/2)$) or $b\leq -2\sqrt{2}\cos(\alpha/2)$. Since $\sqrt{\cos(\alpha)}\leq\cos(\alpha/2)$, we have $b+2\sqrt{2}\sqrt{\cos(\alpha)}\leq b+2\sqrt{2}\cos(\alpha/2)\leq 0$. This means that the symmetry axis of the quadratic function lies on the right hand side of $w=\sqrt{\cos(\alpha)}$.
Besides, one can compute that
\begin{eqnarray*}
\lefteqn{b\sqrt{\cos(\alpha)}+\sqrt{2}\left( \cos(\alpha)+\cos^2(\alpha/2) \right)} \\
&\geq& \left( \sin(\alpha/2)\cos(\pi/3)-3\cos(\alpha/2)\sin(\pi/3) \right)\sqrt{\cos(\alpha)}+\sqrt{2}\left( \cos(\alpha)+\cos^2(\alpha/2) \right) \\
&=& \frac{\sqrt{2}}{2}\left( \frac{\sqrt{\cos(\alpha)}}{2}+\frac{\sqrt{2}}{2}\sin(\alpha/2) \right)^2+\frac{3\sqrt{2}}{2}\left( \frac{\sqrt{3}}{2}\sqrt{\cos(\alpha)}-\frac{\sqrt{2}}{2}\cos(\alpha/2) \right)^2.
\end{eqnarray*}
Then $b\sqrt{\cos(\alpha)}+\sqrt{2}\left( \cos(\alpha)+\cos^2(\alpha/2) \right)\geq 0$. If it is $0$, then $\alpha=-\pi/3$ and $\theta=\pi/3$.
It means that for $w\in [-\sqrt{\cos(\alpha)},\sqrt{\cos(\alpha)}]$ the equation (\ref{eq3}) has no solution except when $\theta=\pi/3$ and $\alpha=-\pi/3$, in which case $w=\sqrt{\cos(\alpha)}=\sqrt{2}/2$.
Hence $q \in \mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{-}$ if and only if $\theta=\pi/3$, $\alpha=-\pi/3$ and $w=\sqrt{\cos(\alpha)}=\sqrt{2}/2$.
When $\theta=\pi/3$, $T^{-1}S^2$ is unipotent and its fixed point is given by the eigenvector with eigenvalue $1$. One can compute that this fixed point is $[e^{i2\pi/3},-\sqrt{3}] \in \partial\mathbf{H}^2_{\mathbb{C}}$, which equals the point $q(-\pi/3,\pi/2,\sqrt{2}/2)$.
\end{proof}
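The parabolic fixed point found above can be confirmed numerically; the sketch below (ours, assuming numpy, and using the standard lift $[(-|z|^2+it)/2, z, 1]$ of a Heisenberg point $[z,t]$) checks that at $\theta=\pi/3$ the element $T^{-1}S^2$ is unipotent and fixes the lift of $[e^{2i\pi/3},-\sqrt{3}]$.
\begin{verbatim}
import numpy as np

th = np.pi/3
I1 = np.array([[-1, 2*np.exp(1j*th), 2], [0, 1, 2*np.exp(-1j*th)], [0, 0, -1]])
I2 = np.array([[-1, -2*np.exp(-1j*th), 2], [0, 1, -2*np.exp(1j*th)], [0, 0, -1]])
I3 = np.array([[0, 0, 1], [0, -1, 0], [1, 0, 0]])
S, T = I2 @ I3, I2 @ I1

M = np.linalg.inv(T) @ S @ S
print(np.allclose(np.linalg.matrix_power(M - np.eye(3), 3), 0))   # True: unipotent

z, t = np.exp(2j*np.pi/3), -np.sqrt(3)
v = np.array([(-abs(z)**2 + 1j*t)/2, z, 1])   # standard lift of [z, t]
print(np.allclose(M @ v, v))                   # True: fixed point
\end{verbatim}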
\begin{figure}
\caption{The ideal boundaries of the three spheres $\mathcal{I}_{0}^{+}$, $\mathcal{I}_{0}^{\star}$ and $\mathcal{I}_{-1}^{-}$.}
\label{fig:double}
\end{figure}
\begin{prop}\label{prop:s0star1}
Suppose that $\theta\in[0,\pi/3]$. Then
\begin{enumerate}
\item The intersection $\mathcal{I}_{0}^{\star} \cap \mathcal{I}_{0}^{\diamond}$ lies in the interior of $\mathcal{I}_{0}^{+}$.
\item The intersection $\mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{-}$ is either empty or lies in the interior of $\mathcal{I}_{0}^{+}$. Furthermore, when $\theta=\pi/3$,
the ideal boundary of $\mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{-}$ on $\partial \mathbf{H}^2_{\mathbb{C}}$ contains a unique point lying on the ideal boundary of $\mathcal{I}_{0}^{+}$,
and this point is fixed by $T^{-1}S^2$.
\end{enumerate}
\end{prop}
\begin{proof}
(1) Let $p=(z,t,u)\in \mathcal{I}_{0}^{\star} \cap \mathcal{I}_{0}^{\diamond}$. Then $p$ satisfies the equations
\begin{eqnarray*}
\left| |z-e^{i\theta}|^2+u-i\left(t+2{\rm{Im}}(ze^{-i\theta})\right) \right| &=& 1 \\
\left| |z+e^{-i\theta}|^2+u-i\left(t+2{\rm{Im}}(-ze^{i\theta})\right) \right| &=& 1.
\end{eqnarray*}
Set $z=|z|e^{i\phi}$. Simplifying, we have
\begin{eqnarray*}
\left| |z|^2+1+u-2|z|\cos(\phi-\theta)-i(t+2|z|\sin(\phi-\theta)) \right| &=& 1 \\
\left| |z|^2+1+u+2|z|\cos(\phi+\theta)-i(t-2|z|\sin(\phi+\theta)) \right| &=& 1.
\end{eqnarray*}
Now set
\begin{eqnarray}
\label{eq1} |z|^2+1+u-2|z|\cos(\phi-\theta)-i(t+2|z|\sin(\phi-\theta)) &=& e^{i\alpha} \\
\label{eq2} |z|^2+1+u+2|z|\cos(\phi+\theta)-i(t-2|z|\sin(\phi+\theta)) &=& e^{i\beta}.
\end{eqnarray}
Since
$$\cos{\alpha}=|z|^2+1+u-2|z|\cos(\phi-\theta) = \left(|z|-\cos(\phi-\theta)\right)^2+\sin^2(\phi-\theta)+u \geq 0$$
and
$$\cos{\beta}=|z|^2+1+u+2|z|\cos(\phi+\theta) =\left(|z|+\cos(\phi+\theta)\right)^2+\sin^2(\phi+\theta)+u \geq 0,$$
we have $-\pi/2 \leq \alpha \leq \pi/2$ and $-\pi/2 \leq \beta \leq \pi/2$. Thus it implies that $\cos(\beta/2-\alpha/2)\geq 0$.
By computing the difference of the equations (\ref{eq1}) and (\ref{eq2}), we have
\begin{equation}\label{eq:coord-z}
z=\frac{e^{i\beta}-e^{i\alpha}}{4\cos(\theta)}=\pm \frac{\sin(\beta/2-\alpha/2)}{2\cos(\theta)}e^{i(\pm \pi/2 +\beta/2+\alpha/2)}.
\end{equation}
Thus $\phi=\pm \pi/2 +\beta/2+\alpha/2$.
Therefore,
\begin{eqnarray*}
\lefteqn{
(|z|^2+u)^2+t^2 } \\
&=& \left(\cos(\alpha)-1+2|z|\cos(\phi-\theta)\right)^2+\left(\sin(\alpha)+2|z|\sin(\phi-\theta)\right)^2 \\
&=& 2+4|z|^2-2\cos(\alpha)-4|z|\cos(\phi-\theta)+4|z|\cos(\phi-\theta-\alpha) \\
&\leq& 2+4|z|^2-2\left(|z|^2+1-2|z|\cos(\phi-\theta)\right)-4|z|\cos(\phi-\theta)+4|z|\cos(\phi-\theta-\alpha) \\
&=& 2|z|^2+4|z|\cos(\phi-\theta-\alpha) \\
&=& 2|z|^2+4|z|\cos(\pm \pi/2 -\theta +\beta/2-\alpha/2) \\
&=& 2|z|^2+4|z|\left( \pm\sin(\theta)\cos(\beta/2-\alpha/2) \mp \cos(\theta)\sin(\beta/2-\alpha/2) \right) \\
&=& \frac{\sin^2(\beta/2-\alpha/2)}{2\cos^2(\theta)}+\tan(\theta)\sin(\beta-\alpha)-2\sin^2(\beta/2-\alpha/2).
\end{eqnarray*}
Since $\theta\in[0,\pi/3]$, we have $\frac{\sin^2(\beta/2-\alpha/2)}{2\cos^2(\theta)} \leq 2\sin^2(\beta/2-\alpha/2)$ and $\tan(\theta)\sin(\beta-\alpha) \leq \sqrt{3}$. This implies that $(|z|^2+u)^2+t^2 <4$.
It means that the intersection $\mathcal{I}_{0}^{\star} \cap \mathcal{I}_{0}^{\diamond}$ lies in the interior of $\mathcal{I}_{0}^{+}$.
(2) Suppose that $p=(z,t,u)\in \mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{-}$. Then $p$ satisfies the equations
\begin{eqnarray*}
1 &=& \left| |z-e^{i\theta}|^2+u-i\left(t+2{\rm{Im}}(ze^{-i\theta})\right) \right| = \left| |z|^2+u+1-it-2ze^{-i\theta} \right| \\
2 &=& \left| |z+2e^{-i\theta}|^2+u-i\left(t+2{\rm{Im}}(-2ze^{i\theta})\right) \right| = \left| |z|^2+u+4-it+4ze^{i\theta} \right|.
\end{eqnarray*}
Now set
\begin{eqnarray}
\label{eq4} |z|^2+u+1-it-2ze^{-i\theta} &=& e^{i\beta} \\
\label{eq5} |z|^2+u+4-it+4ze^{i\theta} &=& 2e^{i\alpha}.
\end{eqnarray}
By computing the difference of the equations (\ref{eq4}) and (\ref{eq5}), we have
\begin{eqnarray}\label{eq6}
z &=& \frac{2e^{i\alpha}-e^{i\beta}-3}{4e^{i\theta}+2e^{-i\theta}}.
\end{eqnarray}
According to equation (\ref{eq4}), we have
\begin{eqnarray}
\label{eq7} u &=& \cos(\beta)-\left| ze^{-i\theta}-1 \right|^2 \\
\label{eq8} t &=& -\sin(\beta)-2{\rm{Im}}(ze^{-i\theta}).
\end{eqnarray}
Since
$$\cos{\beta}=u+\left| ze^{-i\theta}-1 \right|^2 \geq 0$$
and
$$2\cos{\alpha}=u+\left| ze^{i\theta}+2 \right|^2 \geq 0,$$
we have $-\pi/2 \leq \alpha \leq \pi/2$ and $-\pi/2 \leq \beta \leq \pi/2$.
Now let us consider the case when $\theta=\pi/3$. Substituting $\alpha=\pi/6$ and $\beta=0$ into the equations (\ref{eq6}), (\ref{eq7}) and (\ref{eq8}), we obtain the point
$$p_0=\left( \left(\frac{\sqrt{3}}{3}-1\right)+i\frac{\sqrt{3}}{3},\ \frac{3-4\sqrt{3}}{3},\ \frac{3\sqrt{3}-5}{3} \right) \in \mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{-}.$$
One can compute that $p_0$ lies in the interior of $\mathcal{I}_{0}^{+}$,
since
$$
\left||z|^2+u-it\right|^2=\left|e^{i\beta}-1+2ze^{-i\theta}\right|^2=4|z|^2=\frac{20-8\sqrt{3}}{3}<4.
$$
We know from Proposition \ref{prop:connect} that the intersection $\mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{-}$ is connected. Thus, according to Lemma \ref{lem:triple}, $\mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{-}$ lies in the interior of $\mathcal{I}_{0}^{+}$ except for the point $[e^{i2\pi/3},-\sqrt{3}]$, which lies on the ideal boundary of $\mathcal{I}_{0}^{+}$. See Figure \ref{fig:double}.
Observe that the coordinates of the centers of $\mathcal{I}_{0}^{\star}$ and $\mathcal{I}_{-1}^{-}$ depend continuously on $\theta$, so the spheres $\mathcal{I}_{0}^{\star}$ and $\mathcal{I}_{-1}^{-}$ move continuously with $\theta$. When $\theta=0$, the Cygan distance between the centers of $\mathcal{I}_{0}^{\star}$ and $\mathcal{I}_{-1}^{-}$ is bigger than the sum of their radii, so $\mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{-}=\emptyset$. When $\theta=\pi/3$, we have shown that $\mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{-}$ lies in the interior of $\mathcal{I}_{0}^{+}$. We also have $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{-}=\emptyset$ for $\theta\in[0,\pi/3)$ by Lemma \ref{lem:triple}.
Hence, the intersection $\mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{-}$ is either empty or contained in the interior of $\mathcal{I}_{0}^{+}$.
\end{proof}
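The point $p_0$ and its position relative to the three spheres can be checked numerically. The sketch below (ours, assuming numpy) uses the Cygan-sphere equations in the form displayed in this proof; the centers involved all have $t$-coordinate $0$.
\begin{verbatim}
import numpy as np

th = np.pi/3
z = (np.sqrt(3)/3 - 1) + 1j*np.sqrt(3)/3
t = (3 - 4*np.sqrt(3))/3
u = (3*np.sqrt(3) - 5)/3

def sphere_value(z0):   # sphere centred at [z0, 0]; compare with r^2
    return abs(abs(z - z0)**2 + u - 1j*(t + 2*np.imag(z*np.conj(z0))))

print(np.isclose(sphere_value(np.exp(1j*th)), 1))       # on I_0^star  (r^2 = 1)
print(np.isclose(sphere_value(-2*np.exp(-1j*th)), 2))   # on I_{-1}^-  (r^2 = 2)
print(sphere_value(0) < 2)                               # inside I_0^+ (r^2 = 2)
\end{verbatim}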
\begin{figure}
\caption{The ideal boundaries of the three spheres $\mathcal{I}_{0}^{+}$, $\mathcal{I}_{0}^{-}$ and $\mathcal{I}_{0}^{\star}$.}
\label{fig:triple}
\end{figure}
\begin{prop}\label{prop:triple}
Suppose that $\theta\in [0, \pi/3]$. For $k\in\mathbb{Z}$, the three isometric spheres $\mathcal{I}_k^{+}$, $\mathcal{I}_k^{-}$, $\mathcal{I}_k^{\star}$ (respectively, $\mathcal{I}_k^{+}$, $\mathcal{I}_{k-1}^{-}$, $\mathcal{I}_k^{\diamond}$ ) have the following properties.
\begin{itemize}
\item The intersections $\mathcal{I}_k^{+} \cap \mathcal{I}_k^{-}$, $\mathcal{I}_k^{-} \cap \mathcal{I}_k^{\star}$, and $\mathcal{I}_k^{\star} \cap \mathcal{I}_k^{+}$ (respectively, $\mathcal{I}_k^{+} \cap \mathcal{I}_{k-1}^{-}$, $\mathcal{I}_{k-1}^{-} \cap \mathcal{I}_k^{\diamond}$, $\mathcal{I}_k^{\diamond} \cap \mathcal{I}_k^{+}$) are discs.
\item The intersection $\mathcal{I}_k^{+}\cap \mathcal{I}_{k}^{-} \cap \mathcal{I}_k^{\star}$ (respectively, $\mathcal{I}_k^{+}\cap \mathcal{I}_{k-1}^{-} \cap \mathcal{I}_k^{\diamond}$) is a union of two geodesics which cross at the fixed point of $T^{k}ST^{-k}$ (respectively $T^{k}(S^{-1}T)T^{-k}$) in $\mathbf{H}^2_{\mathbb{C}}$ and whose four endpoints are on $\partial \mathbf{H}^2_{\mathbb{C}}$.
Moreover, the four rays from the fixed point to the four endpoints are cyclically permuted by $T^{k}ST^{-k}$ (respectively $T^{k}(S^{-1}T)T^{-k}$).
\end{itemize}
\end{prop}
\begin{proof}
According to Definition \ref{def:isometric} and Proposition \ref{prop:symmetry}, it suffices to consider the isometric spheres $\mathcal{I}_{0}^{+}$, $\mathcal{I}_{0}^{-}$, $\mathcal{I}_{0}^{\star}$. See Figure \ref{fig:triple}.
Let $q \in \mathcal{I}_{0}^{+}$. Consider the geographic coordinates $(\alpha,\beta,w)$ of $q$ in (\ref{eq:geog-coor-plus-0}).
By Lemma \ref{lem:functions},
if $q$ lies on $ \mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{-}$, then the $\alpha,\beta,w$ should satisfy the equation
\begin{equation}\label{eq:prop0}
2w^2+1+\cos(\alpha)-\sqrt{2}w\cos(\alpha/2+\beta-\theta)-2\sqrt{2}w\cos(-\alpha/2+\beta-\theta)=0.
\end{equation}
Similarly, if $q$ lies on $\mathcal{I}_{0}^{+}\cap \mathcal{I}_{0}^{\star}$, then the $\alpha,\beta,w$ should satisfy the equation
\begin{equation}\label{eq:prop1}
2w^2+1+\cos(\alpha)-\sqrt{2}w\cos(-\alpha/2+\beta-\theta)-2\sqrt{2}w\cos(\alpha/2+\beta-\theta)=0
\end{equation}
Thus the intersection $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{-}$ is the set of solutions of equation (\ref{eq:prop0})
and the intersection $\mathcal{I}_0^{+} \cap \mathcal{I}_0^{\star}$ is the set of solutions of equation (\ref{eq:prop1}).
One can easily verify that the geographic coordinates of the point $q(0,\theta,\sqrt{2}/2)\in\mathbf{H}^2_{\mathbb{C}}$ satisfy the equations (\ref{eq:prop0}) and (\ref{eq:prop1}),
so these intersections $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{-}$ and $\mathcal{I}_0^{+} \cap \mathcal{I}_0^{\star}$ are topological discs from Proposition \ref{prop:connect}.
The intersection of these two sets gives the triple intersection $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{-} \cap \mathcal{I}_0^{\star}$.
Now let us solve the system of the equations (\ref{eq:prop0}) and (\ref{eq:prop1}).
Let $t=\beta-\theta$. Subtracting the equations (\ref{eq:prop0}) and (\ref{eq:prop1}) and simplifying, we obtain
$$
2w\sin(\alpha/2)\sin(t)=0.
$$
Thus $w=0$ (which is impossible, since then (\ref{eq:prop0}) would force $\cos(\alpha)=-1$), or $\alpha=0$, or $t=0$. If $t=0$, then setting $\beta=\theta$ in equation (\ref{eq:prop0}), we get
$$
2w^2-3\sqrt{2}\cos\left(\frac{\alpha}{2}\right)w+1+\cos(\alpha)=2\left(w-\frac{\sqrt{2}}{2}\cos\left(\frac{\alpha}{2}\right)\right)\left(w-\sqrt{2}\cos\left(\frac{\alpha}{2}\right)\right)=0.
$$
Note that the solutions of the above equation for $w$ should satisfy $w^2\leq \cos(\alpha)$.
Thus
$$w=\frac{\sqrt{2}}{2}\cos\left(\frac{\alpha}{2}\right) \quad {\rm{with}}\quad \cos(\alpha)\geq \frac{1}{3}.$$
If $\alpha=0$, then equation (\ref{eq:prop0}) becomes
\begin{eqnarray}\label{eq:prop2}
2w^2-3\sqrt{2}\cos(t)w+2 &=& 0.
\end{eqnarray}
Note that the solutions of equation (\ref{eq:prop2}) for $w$ should satisfy $w^2\leq\cos(\alpha)=1$. Thus the solutions of equation (\ref{eq:prop2}) are
$$w=\frac{3\cos(t)-\sqrt{9\cos^{2}(t)-8}}{2\sqrt{2}} \quad {\rm{with}} \quad \frac{2\sqrt{2}}{3}\leq \cos(t)\leq 1,$$
and
$$w=\frac{3\cos(t)+\sqrt{9\cos^{2}(t)-8}}{2\sqrt{2}}\quad {\rm{with}} \quad -1 \leq \cos(t) \leq -\frac{2\sqrt{2}}{3}.$$
So, the triple intersection $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{-} \cap \mathcal{I}_0^{\star}$ is the union $\mathcal{L}_1 \cup \mathcal{C}_1 \cup \mathcal{C}_2$, where
\begin{equation*}
\mathcal{L}_1=\left\{ q(\alpha,t+\theta,w)\in \mathcal{I}_{0}^{+} : \cos(\alpha)\geq \frac{1}{3}, t=0, w=\frac{\sqrt{2}}{2}\cos\left(\frac{\alpha}{2}\right) \right\},
\end{equation*}
$$
\mathcal{C}_1=\left\{ q(0,t+\theta,w)\in \mathcal{I}_{0}^{+} : \frac{2\sqrt{2}}{3}\leq \cos(t)\leq 1, w=\frac{3\cos(t)-\sqrt{9\cos^{2}(t)-8}}{2\sqrt{2}} \right\},
$$
and
$$
\mathcal{C}_2=\left\{ q(0,t+\theta,w)\in \mathcal{I}_{0}^{+} : -1 \leq \cos(t) \leq -\frac{2\sqrt{2}}{3}, w=\frac{3\cos(t)+\sqrt{9\cos^{2}(t)-8}}{2\sqrt{2}} \right\}.
$$
Note that $\mathcal{L}_1$ lies in a Lagrangian plane of $\mathcal{I}_{0}^{+}$, and $\mathcal{C}_1 \cup \mathcal{C}_2$ lies in a complex line of $\mathcal{I}_{0}^{+}$.
It is obvious that $\mathcal{C}_1$ is an arc. One of its endpoints is $q(0,\theta,\sqrt{2}/2)\in\mathbf{H}^2_{\mathbb{C}}$, which is the fixed point of $S$. The other one is $q(0,\arccos(2\sqrt{2}/3)+\theta,1)\in\partial\mathbf{H}^2_{\mathbb{C}}$. Similarly, $\mathcal{C}_2$ is an arc whose endpoints are $q(0,\theta,\sqrt{2}/2)$ and $q(0,\arccos(-2\sqrt{2}/3)+\theta,-1)\in\partial\mathbf{H}^2_{\mathbb{C}}$. Thus $\mathcal{C}_1 \cup \mathcal{C}_2$ is connected.
The endpoints of $\mathcal{L}_1$ are $q(\arccos(1/3),\theta,\sqrt{3}/3)$ and $q(-\arccos(1/3),\theta,\sqrt{3}/3)$, which are on $\partial \mathbf{H}^2_{\mathbb{C}}$. It is easy to see that $\mathcal{L}_1$ intersects with $\mathcal{C}_1 \cup \mathcal{C}_2$ at the point $q(0,\theta,\sqrt{2}/2)\in\mathbf{H}^2_{\mathbb{C}}$.
\begin{figure}
\caption{The triple intersection $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{-} \cap \mathcal{I}_{0}^{\star}$.}
\label{fig:cross}
\end{figure}
Moreover, $\mathcal{C}_1 \cup \mathcal{C}_2$ is a geodesic. In fact, the complex line containing $\mathcal{C}_1 \cup \mathcal{C}_2$ is $\mathcal{C}=\{(-1,z)\in\mathbf{H}^2_{\mathbb{C}}\cup\partial\mathbf{H}^2_{\mathbb{C}} : |z|\leq\sqrt{2}\}$, the disc bounded by the circle centered at the origin with radius $\sqrt{2}$.
Meanwhile, $\mathcal{C}_1 \cup \mathcal{C}_2$ lies in the circle with center $3e^{i\theta}/2$ and radius $1/2$, which is orthogonal to the boundary of this complex line.
By the Cayley transform given in Definition \ref{def:cayley}, $\mathcal{C}$ is mapped to the vertical axis $\{(0,z)\in\mathbf{H}^2_{\mathbb{C}}\cup\partial\mathbf{H}^2_{\mathbb{C}} : |z|\leq 1\}$ in the ball model of $\mathbf{H}^2_{\mathbb{C}}$.
Thus $\mathcal{C}$ is isometric to the Poincar\'{e} disc. Meanwhile, $\mathcal{C}_1 \cup \mathcal{C}_2$ is mapped by the Cayley transform to an arc contained in the circle with center $-3e^{i\theta}/(2\sqrt{2})$ and radius $1/(2\sqrt{2})$, which is orthogonal to the unit circle. Hence $\mathcal{C}_1 \cup \mathcal{C}_2$ is a geodesic. See Figure \ref{fig:cross}.
By the Cayley transform, $\mathcal{L}_1$ is mapped to $\{(-\tan(\alpha/2)i,-e^{i\theta}/\sqrt{2})\in\mathbf{H}^2_{\mathbb{C}}\cup\partial\mathbf{H}^2_{\mathbb{C}} : \cos(\alpha)\geq 1/3\}$.
Thus $\mathcal{L}_1$ and $\mathcal{C}_1 \cup \mathcal{C}_2$ cross at the point $(0,-e^{i\theta}/\sqrt{2})\in\mathbf{H}^2_{\mathbb{C}}$, which is the image of $q(0,\theta,\sqrt{2}/2)$ under the Cayley transform.
The five points lift to the following vectors in $\mathbb{C}^3$:
\begin{eqnarray*}
{\bf q}(\arccos(1/3),\theta,\sqrt{3}/3) &=& \left[
\begin{array}{c}
-\left(\frac{1}{3}-i\frac{2\sqrt{2}}{3}\right) \\
\left(\frac{2}{3}-i\frac{\sqrt{2}}{3}\right)e^{i\theta} \\
1 \\
\end{array}
\right],\\
{\bf q}(-\arccos(1/3),\theta,\sqrt{3}/3) &=& \left[
\begin{array}{c}
-\left(\frac{1}{3}+i\frac{2\sqrt{2}}{3}\right) \\
\left(\frac{2}{3}+i\frac{\sqrt{2}}{3}\right)e^{i\theta} \\
1 \\
\end{array}
\right],\\
{\bf q}(0,\arccos(2\sqrt{2}/3)+\theta,1) &=& \left[
\begin{array}{c}
-1 \\
\left(\frac{4}{3}+i\frac{\sqrt{2}}{3}\right)e^{i\theta} \\
1 \\
\end{array}
\right],\\
{\bf q}(0,\arccos(-2\sqrt{2}/3)+\theta,-1) &=& \left[
\begin{array}{c}
-1 \\
\left(\frac{4}{3}-i\frac{\sqrt{2}}{3}\right)e^{i\theta} \\
1 \\
\end{array}
\right],\\
{\bf q}(0,\theta,\sqrt{2}/2) &=& \left[
\begin{array}{c}
-1 \\
e^{i\theta} \\
1 \\
\end{array}
\right].
\end{eqnarray*}
Recall that
$$
S=\left[
\begin{array}{ccc}
2 & 2e^{-i\theta} & -1 \\
-2e^{i\theta} & -1 & 0 \\
-1 & 0 & 0 \\
\end{array}
\right].
$$
Thus it is easy to check that $S({\bf q}(0,\theta,\sqrt{2}/2))={\bf q}(0,\theta,\sqrt{2}/2)$ and that the other four points are cyclically permuted by $S$ as follows:
\begin{eqnarray*}
&\left[
\begin{array}{c}
-1 \\
\left(\frac{4}{3}+i\frac{\sqrt{2}}{3}\right)e^{i\theta} \\
1 \\
\end{array}
\right] \underrightarrow{S}
\left[
\begin{array}{c}
-\left(\frac{1}{3}-i\frac{2\sqrt{2}}{3}\right) \\
\left(\frac{2}{3}-i\frac{\sqrt{2}}{3}\right)e^{i\theta} \\
1 \\
\end{array}
\right] \underrightarrow{S}
\left[
\begin{array}{c}
-1 \\
\left(\frac{4}{3}-i\frac{\sqrt{2}}{3}\right)e^{i\theta} \\
1 \\
\end{array}
\right] \underrightarrow{S}\\
&\left[
\begin{array}{c}
-\left(\frac{1}{3}+i\frac{2\sqrt{2}}{3}\right) \\
\left(\frac{2}{3}+i\frac{\sqrt{2}}{3}\right)e^{i\theta} \\
1 \\
\end{array}
\right].
\end{eqnarray*}
Moreover, it is easy to verify that $\mathcal{C}_1 \cup \mathcal{C}_2=S(\mathcal{L}_1)$, $S^2(\mathcal{L}_1)=\mathcal{L}_1$ and $S^2(\mathcal{C}_1)=\mathcal{C}_2$.
Thus, the four rays from the fixed point to the four endpoints are cyclically permuted by $S$.
\end{proof}
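The action of $S$ on these five lifts can be verified numerically for any fixed value of $\theta$; the sketch below (ours, assuming numpy, with the sample value $\theta=0.4$) checks the fixed vector and the cyclic permutation, comparing lifts projectively, i.e. after normalising the last coordinate.
\begin{verbatim}
import numpy as np

th = 0.4                       # any theta in [0, pi/3]
e = np.exp(1j*th)
I2 = np.array([[-1, -2*np.exp(-1j*th), 2], [0, 1, -2*e], [0, 0, -1]])
I3 = np.array([[0, 0, 1], [0, -1, 0], [1, 0, 0]])
S = I2 @ I3

s, d = np.sqrt(2)/3, 2*np.sqrt(2)/3
v0 = np.array([-1, e, 1])
v1 = np.array([-1, (4/3 + 1j*s)*e, 1])
v2 = np.array([-(1/3 - 1j*d), (2/3 - 1j*s)*e, 1])
v3 = np.array([-1, (4/3 - 1j*s)*e, 1])
v4 = np.array([-(1/3 + 1j*d), (2/3 + 1j*s)*e, 1])

proj = lambda v: v/v[2]
print(np.allclose(S @ v0, v0))                       # True: fixed vector of S
cycle = [v1, v2, v3, v4, v1]
print(all(np.allclose(proj(S @ a), proj(b)) for a, b in zip(cycle, cycle[1:])))
\end{verbatim}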
By applying powers of $T$ and the symmetries in Proposition \ref{prop:symmetry} to Proposition \ref{prop:s0plus}, Proposition \ref{prop:s0star} and Proposition \ref{prop:s0star1}, all pairwise intersections of the isometric spheres can be summarized in the following result.
\begin{cor}\label{cor:intersections}
Suppose that $\theta\in [0, \pi/3]$. Let $\mathcal{S}=\{\mathcal{I}_k^{\pm}, \mathcal{I}_k^{\star}, \mathcal{I}_k^{\diamond}: k \in \mathbb{Z} \}$ be the set of all the isometric spheres. Then for all $k\in\mathbb{Z}$:
\begin{enumerate}
\item $\mathcal{I}_k^{+}$ is contained in the exterior of all the isometric spheres in $\mathcal{S}$ except $\mathcal{I}_k^{-}$, $\mathcal{I}_{k-1}^{-}$, $\mathcal{I}_k^{\star}$, $\mathcal{I}_{k-1}^{\star}$, $\mathcal{I}_k^{\diamond}$ and $\mathcal{I}_{k+1}^{\diamond}$.
Moreover, $\mathcal{I}_k^{+} \cap \mathcal{I}_{k-1}^{\star}$ (resp. $\mathcal{I}_k^{+} \cap \mathcal{I}_{k+1}^{\diamond}$) is either empty or contained in the interior of $\mathcal{I}_{k-1}^{-}$ (resp. $\mathcal{I}_{k}^{-}$). When $\theta=\pi/3$, $\mathcal{I}_k^{+} \cap \mathcal{I}_{k-1}^{\star}$ (resp. $\mathcal{I}_k^{+} \cap \mathcal{I}_{k+1}^{\diamond}$) is tangent to $\mathcal{I}_{k-1}^{-}$ (resp. $\mathcal{I}_{k}^{-}$) on $\partial \mathbf{H}^2_{\mathbb{C}}$ at the parabolic fixed point of $T^{k}(S^2T)T^{-k}$ (resp. $T^{k}(S^{-1}TS^{-1})T^{-k}$).
\item $\mathcal{I}_k^{-}$ is contained in the exterior of all the isometric spheres in $\mathcal{S}$ except $\mathcal{I}_k^{+}$, $\mathcal{I}_{k+1}^{+}$,
$\mathcal{I}_k^{\star}$, $\mathcal{I}_{k+1}^{\star}$, $\mathcal{I}_k^{\diamond}$ and $\mathcal{I}_{k+1}^{\diamond}$.
Moreover, $\mathcal{I}_k^{-} \cap \mathcal{I}_k^{\diamond}$ (resp. $\mathcal{I}_k^{-} \cap \mathcal{I}_{k+1}^{\star}$) is either empty or contained in the interior of $\mathcal{I}_{k}^{+}$ (resp. $\mathcal{I}_{k+1}^{+}$). When $\theta=\pi/3$, $\mathcal{I}_k^{-} \cap \mathcal{I}_k^{\diamond}$ (resp. $\mathcal{I}_k^{-} \cap \mathcal{I}_{k+1}^{\star}$) is tangent to $\mathcal{I}_k^{+}$ (resp. $\mathcal{I}_{k+1}^{+}$) on $\partial \mathbf{H}^2_{\mathbb{C}}$ at the parabolic fixed point of $T^{k}(ST^{-1}S)T^{-k}$ (resp. $T^{k}(S^2T^{-1})T^{-k}$).
\item $\mathcal{I}_k^{\star}$ is contained in the exterior of all the isometric spheres in $\mathcal{S}$ except $\mathcal{I}_k^{\pm}$,
$\mathcal{I}_{k+1}^{+}$, $\mathcal{I}_{k-1}^{-}$, $\mathcal{I}_k^{\diamond}$ and $\mathcal{I}_{k+1}^{\diamond}$.
Moreover, $\mathcal{I}_k^{\star} \cap \mathcal{I}_k^{\diamond}$ (resp. $\mathcal{I}_k^{\star} \cap \mathcal{I}_{k+1}^{\diamond}$) is contained in the interior of $\mathcal{I}_k^{+}$ (resp. $\mathcal{I}_k^{-}$). $\mathcal{I}_k^{\star} \cap \mathcal{I}_{k+1}^{+}$ is described in item (1), and $\mathcal{I}_k^{\star} \cap \mathcal{I}_{k-1}^{-}$ is described in item (2).
When $\theta=\pi/3$, $\mathcal{I}_k^{\star}$ is tangent to $\mathcal{I}_{k+1}^{\star}$ (resp. $\mathcal{I}_{k-1}^{\star}$) on $\partial \mathbf{H}^2_{\mathbb{C}}$ at the parabolic fixed point of $T^{k}(S^2T^{-1})T^{-k}$ (resp. $T^{k}(T^{-1}S^2)T^{-k}$).
\item $\mathcal{I}_k^{\diamond}$ is contained in the exterior of all the isometric spheres in $\mathcal{S}$ except $\mathcal{I}_k^{\pm}$,
$\mathcal{I}_{k-1}^{\pm}$, $\mathcal{I}_k^{\star}$ and $\mathcal{I}_{k-1}^{\star}$.
Moreover, $\mathcal{I}_k^{\diamond} \cap \mathcal{I}_k^{\star}$ and $\mathcal{I}_{k}^{\diamond} \cap \mathcal{I}_{k-1}^{\star}$ are described in item (3). $\mathcal{I}_k^{\diamond} \cap \mathcal{I}_k^{-}$ is described in item (2) and $\mathcal{I}_{k}^{\diamond} \cap \mathcal{I}_{k-1}^{+}$ is described in item (1).
When $\theta=\pi/3$, $\mathcal{I}_k^{\diamond}$ is tangent to $\mathcal{I}_{k+1}^{\diamond}$ (resp. $\mathcal{I}_{k-1}^{\diamond}$) on $\partial \mathbf{H}^2_{\mathbb{C}}$ at the parabolic fixed point of $T^{k}(T^{-1}S^2)T^{-k}$ (resp. $T^{k}(S^2T^{-1})T^{-k}$).
\end{enumerate}
\end{cor}
\begin{defn}\label{domain:D}
Let $D$ be the intersection of the closures of the exteriors of all the isometric spheres $\mathcal{I}_k^{+}$, $\mathcal{I}_k^{-}$, $\mathcal{I}_k^{\star}$ and $\mathcal{I}_k^{\diamond}$, for $k\in\mathbb{Z}$.
\end{defn}
\begin{defn}
For $k\in\mathbb{Z}$,
let $s_k^{+}$ (respectively, $s_k^{-}$, $s_k^{\star}$, and $s_k^{\diamond}$) be the side of $D$ contained in the isometric sphere $\mathcal{I}_k^{+}$ (respectively, $\mathcal{I}_k^{-}$, $\mathcal{I}_k^{\star}$ and $\mathcal{I}_k^{\diamond}$).
\end{defn}
\begin{defn}
A \emph{ridge} is a $2$-dimensional connected intersection of two sides.
\end{defn}
By Corollary \ref{cor:intersections}, the ridges are $s_{k}^{+} \cap s_{k}^{-}$, $s_{k}^{+} \cap s_{k}^{\star}$, $s_{k}^{+} \cap s_{k-1}^{-}$, $s_{k}^{+} \cap s_{k}^{\diamond}$, $s_{k}^{-} \cap s_{k}^{\star}$ and $s_{k-1}^{-} \cap s_{k}^{\diamond}$ for $k\in\mathbb{Z}$, and the sides and ridges are related as follows:
\begin{itemize}
\item The side $s_k^{+}$ is bounded by the ridges $s_k^{+} \cap s_k^{-}$, $s_k^{+} \cap s_k^{\star}$, $s_k^{+} \cap s_{k-1}^{-}$ and $s_k^{+} \cap s_k^{\diamond}$.
\item The side $s_k^{-}$ is bounded by the ridges $s_k^{+} \cap s_k^{-}$, $s_k^{-} \cap s_k^{\star}$, $s_k^{-} \cap s_{k+1}^{+}$ and $s_k^{-} \cap s_{k+1}^{\diamond}$.
\item The side $s_k^{\star}$ is bounded by the ridges $s_k^{+} \cap s_k^{\star}$ and $s_k^{-} \cap s_k^{\star}$.
\item The side $s_k^{\diamond}$ is bounded by the ridges $s_k^{\diamond} \cap s_{k-1}^{-}$ and $s_k^{+} \cap s_k^{\diamond}$.
\end{itemize}
\begin{prop}\label{prop:ridges}
The ridges $s_{k}^{+} \cap s_{k}^{-}$, $s_{k}^{+} \cap s_{k}^{\star}$, $s_{k}^{+} \cap s_{k-1}^{-}$, $s_{k}^{+} \cap s_{k}^{\diamond}$, $s_{k}^{-} \cap s_{k}^{\star}$ and $s_{k-1}^{-} \cap s_{k}^{\diamond}$ for $k\in\mathbb{Z}$ are all topologically the union of two sectors.
\end{prop}
\begin{proof}
The ridge $s_{k}^{+} \cap s_{k}^{-}$ is contained in $\mathcal{I}_k^{+} \cap \mathcal{I}_k^{-}$. According to Proposition \ref{prop:triple}, $\mathcal{I}_k^{+} \cap \mathcal{I}_k^{-}$ is topologically a disc and $\mathcal{I}_k^{+} \cap \mathcal{I}_k^{-} \cap \mathcal{I}_k^{\star}$ is the union of two crossed geodesics. The two crossed geodesics divide the disc into four sectors and one opposite pair of which will lie in the interior of the isometric sphere $\mathcal{I}_k^{\star}$. Thus $s_{k}^{+} \cap s_{k}^{-}$ is the other opposite pair of the four sectors in the disc.
More precisely, up to the powers of $T$, let us consider $s_{0}^{+} \cap s_{0}^{-}$. Let $\Delta$ be the disc $\mathcal{I}_0^{+} \cap \mathcal{I}_0^{-}$ described in the equation (\ref{eq:prop0}) and the two crossed geodesics $\mathcal{L}_1\cup \mathcal{C}_1 \cup \mathcal{C}_2$ are described in Proposition \ref{prop:triple}.
By Proposition \ref{prop:symmetry}, the complex involution $I_2$ preserves $\Delta$ and $\mathcal{L}_1\cup \mathcal{C}_1 \cup \mathcal{C}_2$. Recall that $I_2$ fixes the complex line $C_2$ with polar vector $\bf{n}_2$ described in Section \ref{sec:parameter}. One can compute that the intersection $C_2 \cap \Delta$ is the curve
\begin{equation}\label{eq:ridge}
C_2 \cap \Delta=\{q(\alpha,\alpha/2 +\theta,\sqrt{2}/2)\in\mathcal{I}_0^{+}: \cos(\alpha)\geq 1/3\}.
\end{equation}
Of course $C_2 \cap \Delta$ intersects $\mathcal{L}_1\cup \mathcal{C}_1 \cup \mathcal{C}_2$ at the fixed point of $S$, and divides $\Delta$ into two parts. $I_2$ fixes $C_2 \cap \Delta$, and interchanges $\mathcal{L}_1$ and $\mathcal{C}_1\cup \mathcal{C}_2$. Thus $C_2 \cap \Delta$ is contained in the union of two opposite sectors. By Lemma \ref{lem:functions}, $C_2 \cap \Delta$ lies in the closure of the exterior of $\mathcal{I}_0^{\star}$, since $f_0^{\star}(\alpha,\alpha/2+\theta,\sqrt{2}/2)=1-\cos(\alpha)\geq 0$. Therefore, the union of the two opposite sectors containing $C_2 \cap \Delta$ is the ridge $s_{0}^{+} \cap s_{0}^{-}$. Moreover, this ridge is preserved by $I_2$. Using the parametrization of the Giraud disk in \cite{dpp}, it is instructive to draw a figure of the Giraud disk $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{-}$ and its intersections with the isometric spheres $\mathcal{I}_{-1}^{-}$, $\mathcal{I}_0^{\star}$, $\mathcal{I}_{-1}^{\star}$, $\mathcal{I}_0^{\diamond}$ and $\mathcal{I}_{1}^{\diamond}$. See Figure \ref{fig:sector}.
The other ridges can be described by a similar argument.
\end{proof}
\begin{figure}
\caption{The figure shows the ridge $s_{0}^{+} \cap s_{0}^{-}$ in the Giraud disk $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{-}$.}
\label{fig:sector}
\end{figure}
\begin{prop}\label{prop:sides}
\begin{enumerate}
\item The side $s_{k}^{+}$ (resp. $s_{k}^{-}$) is a topological solid cylinder in $\mathbf{H}^2_{\mathbb{C}} \cup \partial\mathbf{H}^2_{\mathbb{C}}$. The intersection of $\partial s_{k}^{+}$ (resp. $\partial s_{k}^{-}$) with $\mathbf{H}^2_{\mathbb{C}}$ is the disjoint union of two topological discs.
\item The side $s_{k}^{\star}$ (resp. $s_{k}^{\diamond}$) is a topological solid light cone in $\mathbf{H}^2_{\mathbb{C}} \cup \partial\mathbf{H}^2_{\mathbb{C}}$. The intersection of $\partial s_{k}^{\star}$ (resp. $\partial s_{k}^{\diamond}$) with $\mathbf{H}^2_{\mathbb{C}}$ is the light cone.
\end{enumerate}
\end{prop}
\begin{proof}
(1) The side $s_{k}^{+}$ is contained in the isometric sphere $\mathcal{I}_k^{+}$. By Corollary \ref{cor:intersections}, $s_{k}^{+}$ intersects possibly with the sides contained in the isometric spheres $\mathcal{I}_k^{-}$, $\mathcal{I}_{k-1}^{-}$, $\mathcal{I}_k^{\star}$, $\mathcal{I}_{k-1}^{\star}$, $\mathcal{I}_k^{\diamond}$ and $\mathcal{I}_{k+1}^{\diamond}$.
Let $\triangle_1$ be the union of the ridges $s_{k}^{+} \cap s_{k}^{-}$ and $s_{k}^{+} \cap s_{k}^{\star}$, and $\triangle_2$ be the union of the ridges $s_{k}^{+} \cap s_{k-1}^{-}$ and $s_{k}^{+} \cap s_{k}^{\diamond}$. By Proposition \ref{prop:triple}, $\triangle_1$ contains the cross $\mathcal{I}_{k}^{+}\cap\mathcal{I}_{k}^{-}\cap\mathcal{I}_{k}^{\star}$.
By Proposition \ref{prop:ridges}, $\triangle_1$ is a union of four sectors which are patched together along the cross. Hence, $\triangle_1$ is topologically either a disc or a light cone. By a straightforward computation, the ideal boundary of $\triangle_1$ on $\partial\mathbf{H}^2_{\mathbb{C}}$ is a simple closed curve in the ideal boundary of $\mathcal{I}_{k}^{+}$. See Figure \ref{fig:triple}. Thus $\triangle_1$ is a topological disc. By a similar argument, $\triangle_2$ is a topological disc.
Since $\mathcal{I}_k^{+} \cap \mathcal{I}_k^{\star} \cap \mathcal{I}_{k-1}^{-}=\emptyset$ except when $\theta=\pi/3$ in which case it is a point on $\partial\mathbf{H}^2_{\mathbb{C}}$, $\triangle_1$ and $\triangle_2$ are disjoint except when $\theta=\pi/3$ in which case they intersect at two points on $\partial\mathbf{H}^2_{\mathbb{C}}$. See Figure \ref{figure:cylinder} and Figure \ref{figure:fd}. Note that isometric spheres are topological balls and their pairwise intersections are connected. So, $s_{k}^{+}$ is a topological solid cylinder. See Figure \ref{fig:sideplus}.
$s_{k}^{-}$ can be described by a similar argument.
(2) The side $s_{k}^{\star}$ is contained in the isometric sphere $\mathcal{I}_k^{\star}$. According to Corollary \ref{cor:intersections}, $s_k^{\star}$ only intersects with $s_{k}^{+}$ and $s_{k}^{-}$. Let $\triangle_3$ be the union of $s_{k}^{+} \cap s_{k}^{\star}$ and $s_{k}^{-} \cap s_{k}^{\star}$. By Proposition \ref{prop:triple} and Proposition \ref{prop:ridges}, $\triangle_3$ is a union of four sectors which are patched together along the cross $\mathcal{I}_k^{+} \cap \mathcal{I}_k^{-} \cap \mathcal{I}_k^{\star}$. By a computation for the case $k=0$, one can see that the ideal boundary of $\triangle_3$ is a union of two disjoint simple closed curves in the ideal boundary of $\mathcal{I}_k^{\star}$. See Figure \ref{fig:triple}. Thus $\triangle_3$ is a light cone. Hence, $s_{k}^{\star}$ is topologically a solid light cone. See Figure \ref{fig:sidestar}.
$s_{k}^{\diamond}$ can be described by a similar argument.
\end{proof}
\begin{figure}
\caption{A schematic view of the side $s_k^{+}$.}
\label{fig:sideplus}
\end{figure}
\begin{figure}
\caption{A schematic view of the side $s_k^{\star}$.}
\label{fig:sidestar}
\end{figure}
By applying a Poincar\'{e} polyhedron theorem in $\mathbf{H}^2_{\mathbb{C}}$ as stated for example in \cite{par-will2}, \cite{dpp} or \cite{mostow} (see \cite{b} for a version in the hyperbolic plane), we have our main result as follows.
\begin{thm}
Suppose that $\theta\in [0, \pi/3]$. Let $D$ be as in Definition \ref{domain:D}. Then $D$ is a fundamental domain for the cosets of $\langle T \rangle$ in $\Gamma$.
Moreover, $\Gamma$ is discrete and has the presentation
$$
\Gamma=\langle S, T | S^4=(T^{-1}S)^4=id \rangle.
$$
\end{thm}
\begin{proof}
The sides of $D$ are $s_k^{+}$, $s_k^{-}$, $s_k^{\star}$ and $s_k^{\diamond}$.
The ridges of $D$ are $s_{k}^{+} \cap s_{k}^{-}$, $s_{k}^{+} \cap s_{k}^{\star}$, $s_{k}^{+} \cap s_{k-1}^{-}$, $s_{k}^{+} \cap s_{k}^{\diamond}$, $s_{k}^{-} \cap s_{k}^{\star}$ and $s_{k-1}^{-} \cap s_{k}^{\diamond}$.
To obtain the side-pairing maps and ridge cycles, by applying powers of $T$, it suffices to consider the case where $k=0$.
{\textbf{The side-pairing maps:}}
$s_0^{+}$ is contained in the isometric sphere $\mathcal{I}(S)$ and $s_0^{-}$ in the isometric sphere $\mathcal{I}(S^{-1})$. The ridge $s_0^{+} \cap s_0^{-}$ is contained in the disc $\mathcal{I}(S) \cap \mathcal{I}(S^{-1})$, which is defined by the triple equality
$$
|\langle {\bf z}, {\bf{q}_{\infty}} \rangle|=|\langle {\bf z}, S^{-1}({\bf{q}_{\infty}}) \rangle|=|\langle {\bf z}, S({\bf{q}_{\infty}}) \rangle|.
$$
And the ridge $s_0^{-} \cap s_0^{\star}$ is contained in the disc $\mathcal{I}(S^{-1}) \cap \mathcal{I}(S^2)$, which is defined by the triple equality
$$
|\langle {\bf z}, {\bf{q}_{\infty}} \rangle|=|\langle {\bf z}, S({\bf{q}_{\infty}}) \rangle|=|\langle {\bf z}, S^{-2}({\bf{q}_{\infty}}) \rangle|.
$$
Since $S$ maps $q_{\infty}$ to $S(q_{\infty})$, $S^{-1}(q_{\infty})$ to $q_{\infty}$ and $S(q_{\infty})$ to $S^2(q_{\infty})=S^{-2}(q_{\infty})$,
$S$ maps the disc $\mathcal{I}(S) \cap \mathcal{I}(S^{-1})$ to the disc $\mathcal{I}(S^{-1}) \cap \mathcal{I}(S^2)$.
Note that $\mathcal{I}(S) \cap \mathcal{I}(S^{-1}) \cap \mathcal{I}(S^2)$ is the union of two crossed geodesics (see Proposition \ref{prop:triple}) whose four rays are cyclical permutated by $S$. Since the ridge $s_0^{+} \cap s_0^{-}$ lies in the closure of the exterior of the isometric sphere $\mathcal{I}(S^2)$, according to the equation (\ref{eq:ridge}), the point $q(\pi/3, \pi/6+\theta, \sqrt{2}/2)$ is contained in $s_0^{+} \cap s_0^{-}$. One can easily verify that $S(q(\pi/3, \pi/6+\theta, \sqrt{2}/2))$ lies in the exterior of $\mathcal{I}(S)$. Thus $S(q(\pi/3, \pi/6+\theta, \sqrt{2}/2))$ is contained in $s_0^{-} \cap s_0^{\star}$, which lie in the closure of the exterior of the isometric sphere $\mathcal{I}(S)$.
Hence $S$ maps the ridge $s_0^{+} \cap s_0^{-}$ to the ridge $s_0^{-} \cap s_0^{\star}$. Similarly, $S$ maps $s_0^{+} \cap s_0^{\star}$ to $s_0^{+} \cap s_0^{-}$. See Figure \ref{fig:ridgecycle}.
Since $S$ maps $\mathcal{I}_0^{+} \cap \mathcal{I}_{-1}^{-} \cap \mathcal{I}_0^{\diamond}$ to $\mathcal{I}_0^{-} \cap \mathcal{I}_{1}^{+} \cap \mathcal{I}_1^{\diamond}$, a similar argument shows that $S$ maps $s_0^{+} \cap s_{-1}^{-}$ to $s_0^{-} \cap s_{1}^{\diamond}$ and $s_0^{+} \cap s_0^{\diamond}$ to $s_0^{-} \cap s_{1}^{+}$.
\begin{figure}
\caption{A schematic view of the ridges $s_0^{+} \cap s_0^{-}$, $s_0^{-} \cap s_0^{\star}$ and $s_0^{+} \cap s_0^{\star}$.}
\label{fig:ridgecycle}
\end{figure}
By a similar argument, $s_0^{\star}$ (respectively, $s_0^{\diamond}$) is mapped to itself by the elliptic element of order two $S^2$ (respectively, $(T^{-1}S)^2=(S^{-1}T)^2$), which sends $s_0^{+} \cap s_0^{\star}$ to $s_0^{-} \cap s_0^{\star}$ (respectively, $s_0^{\diamond} \cap s_{-1}^{-}$ to $s_0^{+} \cap s_0^{\diamond}$) and vice-versa.
Hence, the side-pairing maps are:
\begin{eqnarray*}
T^{k}ST^{-k}: & s_k^{+}\longrightarrow s_k^{-} \\
T^{k}S^2T^{-k}: & s_k^{\star}\longrightarrow s_k^{\star} \\
T^{k}(T^{-1}S)^2T^{-k}: & s_k^{\diamond}\longrightarrow s_k^{\diamond}.
\end{eqnarray*}
\textbf{The cycle transformations:}
According to the side-pairing maps, the ridge cycles are:
\begin{eqnarray*}
&(s_k^{+} \cap s_k^{-}, s_k^{+} , s_k^{-}) \xrightarrow{T^{k}ST^{-k}}
(s_k^{\star} \cap s_k^{-}, s_k^{\star} , s_k^{-}) \xrightarrow{T^{k}S^2T^{-k}} \\
&(s_k^{+} \cap s_k^{\star}, s_k^{+} , s_k^{\star}) \xrightarrow{T^{k}ST^{-k}}
(s_k^{+} \cap s_k^{-}, s_k^{+} , s_k^{-}),
\end{eqnarray*}
and
\begin{eqnarray*}
&(s_k^{+} \cap s_{k-1}^{-}, s_k^{+} , s_{k-1}^{-}) \xrightarrow{T^{k}(T^{-1}S)T^{-k}}
(s_k^{\diamond} \cap s_{k-1}^{-}, s_k^{\diamond} , s_{k-1}^{-}) \xrightarrow{T^{k}(T^{-1}S)^{2}T^{-k}} \\
&(s_k^{+} \cap s_{k}^{\diamond}, s_k^{+} , s_{k}^{\diamond}) \xrightarrow{T^{k}(T^{-1}S)T^{-k}}
(s_k^{+} \cap s_{k-1}^{-}, s_k^{+} , s_{k-1}^{-}).
\end{eqnarray*}
Thus the cycle transformations are
$$
T^{k}ST^{-k}\cdot T^{k}S^2T^{-k}\cdot T^{k}ST^{-k} =T^{k}S^{4}T^{-k},
$$
and
$$
T^{k}(T^{-1}S)T^{-k}\cdot T^{k}(T^{-1}S)^{2}T^{-k}\cdot T^{k}(T^{-1}S)T^{-k} =T^{k}(T^{-1}S)^4T^{-k},
$$
which are equal to the identity map, since we have $S^4=id$ and $(T^{-1}S)^4=id$.
\textbf{The local tessellation}:
There are exactly two copies of $D$ along each side, since the sides are contained in isometric spheres and the side-pairing maps send the exteriors to the interiors.
Thus there is nothing to verify for the points in the interior of every side.
According to the ridge cycles and cycle transformations, there are exactly three copies of $D$ along each ridge.
\begin{itemize}
\item $s_k^{+} \cap s_k^{-}$, $s_k^{\star} \cap s_k^{-}$ and $s_k^{+} \cap s_k^{\star}$: These three ridges are in one cycle. Thus we only need to consider the ridge $s_k^{+} \cap s_k^{-}$. Since the cycle transformation of $s_k^{+} \cap s_k^{-}$ is
$$T^{k}ST^{-k}\cdot T^{k}S^2T^{-k}\cdot T^{k}ST^{-k} =id,$$
the three copies of $D$ along $s_k^{+} \cap s_k^{-}$ are $D$, $T^{k}S^{-1}T^{-k}(D)$ and $T^{k}ST^{-k}(D)$.
We know that $s_k^{+} \cap s_k^{-}$ is contained in $\mathcal{I}(T^{k}ST^{-k}) \cap \mathcal{I}(T^{k}S^{-1}T^{-k})$, which is defined by the triple equality
$$
|\langle {\bf z}, {\bf{q}_{\infty}} \rangle|=|\langle {\bf z}, T^kS^{-1}T^{-k}({\bf{q}_{\infty}}) \rangle|=|\langle {\bf z}, T^kST^{-k}({\bf{q}_{\infty}}) \rangle|.
$$
For $z$ in a neighborhood of $s_k^{+} \cap s_k^{-}$ in $D$, $|\langle {\bf z}, {\bf{q}_{\infty}} \rangle|$ is the smallest of the three quantities in the above triple equality.
For $z$ in a neighborhood of $s_k^{\star} \cap s_k^{-}$ in $D$, $|\langle {\bf z}, {\bf{q}_{\infty}} \rangle|$ is no more than $|\langle {\bf z}, T^kST^{-k}({\bf{q}_{\infty}}) \rangle|$ and $|\langle {\bf z}, T^kS^{-2}T^{-k}({\bf{q}_{\infty}}) \rangle|$. Applying $T^{k}S^{-1}T^{-k}$ gives a neighborhood of $s_k^{+} \cap s_k^{-}$ in $T^{k}S^{-1}T^{-k}(D)$
in which $|\langle {\bf z}, T^kS^{-1}T^{-k}({\bf{q}_{\infty}}) \rangle|$ is the smallest of the three quantities in the above triple equality.
For $z$ in a neighborhood of $s_k^{+} \cap s_k^{\star}$ in $D$, $|\langle {\bf z}, {\bf{q}_{\infty}} \rangle|$ is no more than $|\langle {\bf z}, T^kS^{-1}T^{-k}({\bf{q}_{\infty}}) \rangle|$ and $|\langle {\bf z}, T^kS^{-2}T^{-k}({\bf{q}_{\infty}}) \rangle|$. Applying $T^{k}ST^{-k}$ gives a neighborhood of $s_k^{+} \cap s_k^{-}$ in $T^{k}ST^{-k}(D)$
in which $|\langle {\bf z}, T^kST^{-k}({\bf{q}_{\infty}}) \rangle|$ is the smallest of the three quantities in the above triple equality.
Thus the union of $D$, $T^{k}S^{-1}T^{-k}(D)$ and $T^{k}ST^{-k}(D)$ forms a regular neighborhood of each point in $s_k^{+} \cap s_k^{-}$.
\item $s_k^{+} \cap s_{k-1}^{-}$, $s_k^{\diamond} \cap s_{k-1}^{-}$ and $s_k^{+} \cap s_{k}^{\diamond}$: We only need to consider the ridge $s_k^{+} \cap s_{k-1}^{-}$. Since the cycle transformation of $s_k^{+} \cap s_{k-1}^{-}$ is
$$T^{k}(T^{-1}S)T^{-k}\cdot T^{k}(T^{-1}S)^{2}T^{-k}\cdot T^{k}(T^{-1}S)T^{-k}=id,$$
the union of $D$, $T^{k}(T^{-1}S)^{-1}T^{-k}(D)$ and $T^{k}(T^{-1}S)T^{-k}(D)$ forms a regular neighborhood of each point in $s_k^{+} \cap s_{k-1}^{-}$, by a similar argument as in the first item.
\end{itemize}
\textbf{Consistent system of horoballs}:
When $\theta=\frac{\pi}{3}$, there are accidental ideal vertices on $D$. The sides $s_k^{\star}$ and $s_{k+1}^{\star}$ are asymptotic on $\partial \mathbf{H}^2_{\mathbb{C}}$ at the fixed point of the parabolic element $T^k(S^2T^{-1})T^{-k}$, and the sides $s_k^{\diamond}$ and $s_{k+1}^{\diamond}$ are asymptotic on $\partial \mathbf{H}^2_{\mathbb{C}}$ at the fixed point of the parabolic element $T^k(ST^{-1}S)T^{-k}$. To show that there is a consistent system of horoballs, it suffices to show that all the cycle transformations fixing a given cusp are non-loxodromic.
Let $p_2$ be the fixed point of $T^{-1}S^2$ and $q_2$ be the fixed point of $(T^{-1}S)^2T$ (the coordinates of $p_2$ and $q_2$ are given in Definition \ref{def:points}, or see Figure \ref{figure:cylinder}). Then all the accidental ideal vertices $\{T^k(p_2)\}$ and $\{T^{k}(q_2)\}$ are related by the side-pairing maps as the following:
$$
\xymatrix{
\ar[r] & T^{-1}(q_2) \ar[d] \ar[r] & q_2 \ar[d]|-{T^{-1}ST} \ar[r]^{(S^{-1}T)^2}
& T(q_2) \ar[d]^{S} \ar[r] & T^2(q_2) \ar[d] \ar[r] & \\
\ar[r] & T^{-1}(p_2)\ar[ur] \ar[r] & p_2 \ar[ur]_{S} \ar[r]_{S^2}
& T(p_2) \ar[ur] \ar[r] & T^{2}(p_2) \ar[r] & }
$$
Thus, up to powers of $T$, all the cycle transformations are $S\cdot S\cdot S^{-2} =id$ and
$$
T^{-1}ST\cdot (S^{-1}T)^{-2}\cdot S= (T^{-1}S^2)^2 = (I_1I_3I_2I_3)^2,
$$
which is parabolic.
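Indeed, using $(S^{-1}T)^{-2}=(T^{-1}S)^{2}$, the second cycle transformation simplifies as
\begin{eqnarray*}
T^{-1}ST\cdot (S^{-1}T)^{-2}\cdot S &=& T^{-1}ST\cdot T^{-1}S\, T^{-1}S\cdot S \\
&=& T^{-1}S\,(TT^{-1})\,S\,T^{-1}S^{2} \\
&=& T^{-1}S^{2}\, T^{-1}S^{2} \;=\; (T^{-1}S^{2})^{2}.
\end{eqnarray*}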
This means that $p_2$ is fixed by the parabolic element $(T^{-1}S^2)^2$.
Therefore, $D$ is a fundamental domain for the cosets of $\langle T \rangle$ in $\Gamma$. The side-pairing maps and $T$ will generate the group $\Gamma$. The reflection relations are $(T^k S^2 T^{-k})^2=id$ and $(T^k(T^{-1}S)^2T^{-k})^2=id$. The cycle relations are $T^kS^4T^{-k}=id$ and $T^k(T^{-1}S)^4T^{-k}=id$. Thus $\Gamma$ is discrete and has the presentation
$$
\Gamma=\langle S, T | S^4=(T^{-1}S)^4=id \rangle.
$$
\end{proof}
Since $\Gamma$ is a subgroup of $\langle I_1, I_2, I_3 \rangle$ of index 2, as a corollary, we have
\begin{cor}
Let $\langle I_1, I_2, I_3 \rangle$ be a complex hyperbolic $(4,4,\infty)$ triangle group as in Proposition \ref{prop:trianle}.
Then $\langle I_1, I_2, I_3 \rangle$ is discrete and faithful if and only if $I_1I_3I_2I_3$ is non-elliptic.
\end{cor}
This answers Conjecture \ref{conj:schwartz} on the complex hyperbolic $(4,4,\infty)$ triangle group.
\section{The manifold at infinity}\label{sec:manifold}
In this section, we study the group $\Gamma$ in the case when $\theta=\pi/3$, that is, the group $\Gamma=\langle S, T \rangle=\langle I_2I_3, I_2I_1 \rangle$ with $T^{-1}S^2=I_1I_3I_2I_3$ parabolic.
In this case, the Ford domain $D$ has additional ideal vertices on $\partial\mathbf{H}^2_{\mathbb{C}}$, which are parabolic fixed points corresponding to the conjugates of $T^{-1}S^2$. By intersecting a fundamental domain for $\langle T \rangle$ acting on $\partial\mathbf{H}^2_{\mathbb{C}}$ with the ideal boundary of $D$, we obtain a fundamental domain for $\Gamma$ acting on its discontinuity region $\Omega(\Gamma)$.
Topologically, this fundamental domain is the unknotted cylinder cross a ray (see Proposition \ref{prop:cylinder}). By cutting and gluing we obtain two polyhedra $\mathcal{P}_{+}$ and $\mathcal{P}_{-}$ (see Proposition \ref{prop:polyhedra}). Gluing $\mathcal{P}_{-}$ to $\mathcal{P}_{+}$ by $S^{-1}$, we obtain a polyhedron $\mathcal{P}$. By studying the combinatorial properties of $\mathcal{P}$, we show that the quotient $\Omega(\Gamma)/\Gamma$ is homeomorphic to the two-cusped hyperbolic 3-manifold $s782$.
\begin{rem}
We will use $\tilde{s}_k^{+}$ (respectively, $\tilde{s}_k^{-}$, $\tilde{s}_k^{\star}$, and $\tilde{s}_k^{\diamond}$) to denote the ideal boundary of the side of $D$ contained in the ideal boundary of the isometric sphere $\mathcal{I}_k^{+}$ (respectively, $\mathcal{I}_k^{-}$, $\mathcal{I}_k^{\star}$ and $\mathcal{I}_k^{\diamond}$).
\end{rem}
\begin{defn}\label{def:points}
In Heisenberg coordinates, we define the points
\begin{eqnarray*}
q_2&=& \left[(-3+i\sqrt{3})/2, -\sqrt{3}\right],\\
q_3&=&\left[(1+i\sqrt{3})/2, \sqrt{3}\right],\\
p_2&=&\left[(-1+i\sqrt{3})/2, -\sqrt{3}\right],\\
p_3&=&\left[(3+i\sqrt{3})/2, \sqrt{3}\right],\\
p_4&=&\left[\frac{1}{6}\left(4+\sqrt{6}+i(4\sqrt{3}-\sqrt{2})\right), 0 \right],\\
p_5&=&\left[\frac{1}{6}\left(4-\sqrt{6}+i(4\sqrt{3}+\sqrt{2})\right), 0 \right], \\
p_6&=&\left[\frac{1}{6}\left(2-\sqrt{6}+i(2\sqrt{3}+\sqrt{2})\right), -\frac{4\sqrt{2}}{3} \right],\\
p_7&=&\left[\frac{1}{6}\left(2+\sqrt{6}+i(2\sqrt{3}-\sqrt{2})\right), \frac{4\sqrt{2}}{3} \right],\\
p_8 &=& \left[\frac{1}{6}\left(-2+\sqrt{6}+i(2\sqrt{3}+\sqrt{2})\right), \frac{4\sqrt{2}}{3} \right],\\
p_9&=& \left[\frac{1}{6}\left(-2-\sqrt{6}+i(2\sqrt{3}-\sqrt{2})\right), -\frac{4\sqrt{2}}{3} \right],\\
p_{10}&=& \left[\frac{1}{6}\left(-4+\sqrt{6}+i(4\sqrt{3}+\sqrt{2})\right), 0 \right],\\
p_{11}&=& \left[\frac{1}{6}\left(-4-\sqrt{6}+i(4\sqrt{3}-\sqrt{2})\right), 0 \right],
\end{eqnarray*}
and the other points
$$p_{12}=T(p_9), \quad p_{13}=T(p_8), \quad p_{14}=T(p_{11}), \quad p_{15}=T(p_{10}).$$
\end{defn}
By Proposition \ref{prop:triple} and Corollary \ref{cor:intersections}, we have the following.
\begin{prop}\label{prop:points}
The points defined in Definition \ref{def:points} have the following properties.
\begin{itemize}
\item $p_4,p_5,p_6,p_7$ are the four points on the ideal boundary of $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{-} \cap \mathcal{I}_{0}^{\star}$, which are described in Proposition \ref{prop:triple};
\item $p_8,p_9,p_{10},p_{11}$ are the four points on the ideal boundary of $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{-1}^{-} \cap \mathcal{I}_{0}^{\diamond}$;
\item $p_{12}, p_{13},p_{14},p_{15}$ are the four points on the ideal boundary of $\mathcal{I}_{1}^{+} \cap \mathcal{I}_{0}^{-} \cap \mathcal{I}_{1}^{\diamond}$;
\item $p_2$ (resp. $p_3$) is the parabolic fixed point of $T^{-1}S^2$ (resp. $S^2T^{-1}$), which is the intersection of four isometric spheres $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{-1}^{-} \cap \mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{\star}$ (resp. $\mathcal{I}_{1}^{+} \cap \mathcal{I}_{0}^{-} \cap \mathcal{I}_{0}^{\star} \cap \mathcal{I}_{1}^{\star}$);
\item $q_3$ (resp. $q_2$) is the parabolic fixed point of $ST^{-1}S$ (resp. $T^{-1}ST^{-1}ST$), which is the intersection of the four isometric spheres $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{-} \cap \mathcal{I}_{0}^{\diamond} \cap \mathcal{I}_{1}^{\diamond}$ (resp. $\mathcal{I}_{-1}^{+} \cap \mathcal{I}_{-1}^{-} \cap \mathcal{I}_{0}^{\diamond} \cap \mathcal{I}_{-1}^{\diamond}$).
\end{itemize}
\end{prop}
\begin{proof}
As described in Proposition \ref{prop:triple}, all of the triple intersections $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{-} \cap \mathcal{I}_{0}^{\star}$, $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{-1}^{-} \cap \mathcal{I}_{0}^{\diamond}$ and $\mathcal{I}_{1}^{+} \cap \mathcal{I}_{0}^{-} \cap \mathcal{I}_{1}^{\diamond}$ have exactly four points lying on $\partial\mathbf{H}^2_{\mathbb{C}}$. Writing out the standard lifts of $p_4,p_5,p_6,p_7$, one sees that they are the four points in the proof of Proposition \ref{prop:triple}. Thus the first item is proved.
By Proposition \ref{prop:symmetry}, the four points of $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{-1}^{-} \cap \mathcal{I}_{0}^{\diamond}$ are the images of $p_4,p_5,p_6,p_7$ under the antiholomorphic involution $\tau$, which are $p_{11},p_{10},p_8,p_9$.
The second item and the fact that $\mathcal{I}_{1}^{+} \cap \mathcal{I}_{0}^{-} \cap \mathcal{I}_{1}^{\diamond}=T(\mathcal{I}_{0}^{+} \cap \mathcal{I}_{-1}^{-} \cap \mathcal{I}_{0}^{\diamond})$ imply the third item.
We will only prove the statement for $p_2$ in the last two items; the others follow from similar arguments. By Lemma \ref{lem:triple}, $p_2$ is the parabolic fixed point of $T^{-1}S^2$ and lies in the triple intersection $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{-1}^{-} \cap \mathcal{I}_{0}^{\star}$. By Corollary \ref{cor:intersections} (3), $\mathcal{I}_{0}^{\star}$ is tangent to $\mathcal{I}_{-1}^{\star}$ at $p_2$. This completes the proof.
\end{proof}
\begin{figure}
\caption{Intersections of the isometric spheres.}
\label{figure:ideal-side-plus}
\end{figure}
Now we study the combinatorial properties of the sides. See Figure \ref{figure:ideal-side-plus} and Figure \ref{figure:cylinder}.
\begin{prop}\label{prop:side-plus}
The interior of the side $\tilde{s}_0^{+}$ has two connected components.
\begin{itemize}
\item One of them is an octagon, denoted by $\mathcal{O}_0^{+}$, whose vertices are $p_2$, $p_6$, $p_4$, $p_7$, $q_3$, $p_8$, $p_{11}$ and $p_9$.
\item The other one is a quadrilateral, denoted by $\mathcal{Q}_0^{+}$, whose vertices are $p_2$, $p_5$, $q_3$ and $p_{10}$.
\end{itemize}
\end{prop}
\begin{proof}
By Proposition \ref{prop:sides}, when $\theta< \pi/3$, the side $\tilde{s}_0^{+}$ is topologically an annulus bounded by two disjoint simple closed curves which are the union of
the ideal boundaries of the ridges $s_{0}^{+} \cap s_{-1}^{-}$ and $s_{0}^{+} \cap s_{0}^{\diamond}$, respectively $s_{0}^{+} \cap s_{0}^{-}$ and $s_{0}^{+} \cap s_{0}^{\star}$.
When $\theta=\pi/3$, these two curves intersect at two points, which divide $\tilde{s}_0^{+}$ into two parts. That is to say the interior of the side $\tilde{s}_0^{+}$ has two connected components.
By Proposition \ref{prop:points}, the ideal boundary of the ridge $s_{0}^{+} \cap s_{-1}^{-}$ (resp. $s_{0}^{+} \cap s_{0}^{\diamond}$) is a union of two disjoint Jordan arcs $[p_9,p_{10}]$ and $[p_8, p_{11}]$ (resp. $[p_{10},p_8]$ and $[p_{11},p_9]$), and the ideal boundary of the ridge $s_{0}^{+} \cap s_{0}^{-}$ (resp. $s_{0}^{+} \cap s_{0}^{\star}$) is a union of two disjoint Jordan arcs $[p_5,p_7]$ and $[p_4,p_6]$ (resp. $[p_7,p_4]$ and $[p_6,p_5]$).
Since $p_2$ is the intersection of four isometric spheres $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{-1}^{-} \cap \mathcal{I}_{0}^{\star} \cap \mathcal{I}_{-1}^{\star}$, it lies on $[p_9,p_{10}]$ and $[p_6,p_5]$. Similarly, $q_3$ lies on $[p_5,p_7]$ and $[p_{10},p_8]$.
By Proposition \ref{prop:symmetry}, the antiholomorphic involution $\tau$ preserves $\tilde{s}_0^{+}$ and interchanges its boundaries.
It is easy to check that $\tau$ interchanges $p_2$ and $q_3$, $p_5$ and $p_{10}$, $p_4$ and $ p_{11}$, $p_6$ and $p_{8}$, $p_7$ and $p_9$.
Thus one part of $\tilde{s}_0^{+}$ is a quadrilateral with vertices $p_2$, $p_5$, $q_3$ and $p_{10}$, denoted by $\mathcal{Q}_0^{+}$.
The other one is an octagon with vertices $p_2$, $p_6$, $p_4$, $p_7$, $q_3$, $p_8$, $p_{11}$ and $p_9$, denoted by $\mathcal{O}_0^{+}$.
Both of them are preserved by $\tau$.
\end{proof}
According to the symmetry $I_2$ in Proposition \ref{prop:symmetry}, which interchanges $\mathcal{I}_{0}^{+} $ and $ \mathcal{I}_{0}^{-}$, we have the following.
\begin{prop}\label{prop:side-minus}
The interior of the side $\tilde{s}_0^{-}$ has two connected components.
\begin{itemize}
\item One of them is an octagon, denoted by $\mathcal{O}_0^{-}$, whose vertices are $p_5$, $p_6$, $p_4$, $p_3$, $p_{15}$, $p_{13}$, $p_{14}$ and $q_3$.
\item The other is a quadrilateral, denoted by $\mathcal{Q}_0^{-}$, whose vertices are $p_3$, $p_7$, $q_3$ and $p_{12}$.
\end{itemize}
\end{prop}
\begin{proof}
Note that side $s_0^{-}$ is bounded by the ridges $s_{0}^{-} \cap s_{1}^{+}$, $s_{0}^{-} \cap s_{1}^{\diamond}$, $s_{0}^{-} \cap s_{0}^{+}$ and $s_{0}^{-} \cap s_{0}^{\star}$.
By Proposition \ref{prop:symmetry}, the side $s_0^{-}$ is isometric to $s_0^{+}$ under the complex involution $I_2$. Thus its ideal boundary $\tilde{s}_0^{-}$ will be also isometric to $\tilde{s}_0^{+}$. This implies that $\tilde{s}_0^{-}$ has the same combinatorial properties as $\tilde{s}_0^{+}$.
One can check that
$$
I_2 : (q_3,p_5,p_2,p_{10},p_8, p_{11}, p_9, p_6) \leftrightarrow (q_3, p_7, p_3,p_{12}, p_{14}, p_{13}, p_{15}, p_4).
$$
Thus one part of $\tilde{s}_0^{-}$ is an octagon, denoted by $\mathcal{O}_0^{-}$, whose vertices are $p_5$, $p_6$, $p_4$, $p_3$, $p_{15}$, $p_{13}$, $p_{14}$ and $q_3$.
The other one is a quadrilateral, denoted by $\mathcal{Q}_0^{-}$, whose vertices are $p_3$, $p_7$, $q_3$ and $p_{12}$. See Figure \ref{figure:cylinder}.
\end{proof}
\begin{rem}
$q_3$ lies on the $\mathbb{C}$-circle associated to $I_2$, that is the ideal boundary of the complex line fixed by $I_2$.
One can also observe that $p_2$ is fixed by $I_1$.
\end{rem}
\begin{figure}
\caption{A combinatorial picture of $\partial U$. The top and bottom curves are identified.}
\label{figure:cylinder}
\end{figure}
\begin{prop}\label{prop:side-star}
The interior of side $\tilde{s}_0^{\star}$ is a union of two disjoint triangles, denoted by $(\mathcal{T}_{1})_0^{\star}$ and $(\mathcal{T}_{2})_0^{\star}$,
whose vertices are $p_2$, $p_5$, $p_6$ and respectively, $p_3$, $p_4$, $p_7$.
\end{prop}
\begin{proof}
By Proposition \ref{prop:sides}, the side $\tilde{s}_0^{\star}$ is the union of two disjoint discs, which are bounded by the ideal boundary of the ridges $s_{0}^{+} \cap s_{0}^{\star}$ and $s_{0}^{-} \cap s_{0}^{\star}$.
As stated in Proposition \ref{prop:points}, the ideal boundary of $\mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{-} \cap \mathcal{I}_{0}^{\star}$ contains the four points $p_4,p_5,p_6,p_7$. Thus $\tilde{s}_0^{\star}$ is the union of two disjoint bigons whose vertices are $p_5, p_6$, and respectively $p_4,p_7$.
Proposition \ref{prop:points} also tells us that $p_2$ and $p_3$ lie on different component of the boundaries of the two bigons.
Therefore, both of the components are triangles, denoted by $(\mathcal{T}_{1})_0^{\star}$ and $(\mathcal{T}_{2})_0^{\star}$, whose vertices are $p_2$, $p_5$, $p_6$ and respectively, $p_3$, $p_4$, $p_7$.
\end{proof}
According to the symmetry $\tau$ in Proposition \ref{prop:symmetry}, the side $\tilde{s}_0^{\diamond}$ has the same topological properties as the side $\tilde{s}_0^{\star}$.
Thus by a similar argument, we have the following.
\begin{prop}\label{prop:side-diamond}
The interior of side $\tilde{s}_0^{\diamond}$ is a union of two disjoint triangles, denoted by $(\mathcal{T}_{1})_0^{\diamond}$ and $(\mathcal{T}_{2})_0^{\diamond}$,
whose vertices are $q_2$, $p_9$, $p_{11}$ and respectively, $q_3$, $p_8$, $p_{10}$.
\end{prop}
Let $U$ be the ideal boundary of $D$ on $\partial \mathbf{H}^2_{\mathbb{C}}$. Then the union of all the sides $\{\tilde{s}_{k}^{+}\}$, $\{\tilde{s}_{k}^{-}\}$, $\{\tilde{s}_{k}^{\star}\}$ and $\{\tilde{s}_{k}^{\diamond}\}$ for $k\in\mathbb{Z}$ form the boundary of $U$.
\begin{figure}
\caption{A realistic picture of the ideal boundaries of the isometric spheres.}
\label{figure:fd}
\end{figure}
\begin{prop}\label{prop:r-circle}
Let $L=\{ [x+i\sqrt{3}/2, \sqrt{3}x]\in\mathcal{N} : x\in \mathbb{R} \}$, then $L$ is a $T$-invariant $\mathbb{R}$-circle. Furthermore, $L$ is contained in the complement of $D$.
\end{prop}
\begin{proof}
It is obvious that $L$ is an $\mathbb{R}$-circle, since it is the image of the $x$-axis of $\mathcal{N}$ under a Heisenberg translation along the $y$-axis.
For any point $[x+i\sqrt{3}/2, \sqrt{3}x]\in L$, we have
$T([x+i\sqrt{3}/2, \sqrt{3}x])=[(x+2)+i\sqrt{3}/2, \sqrt{3}(x+2)]$ which lies in $L$.
Thus $L$ is a $T$-invariant $\mathbb{R}$-circle. See Figure \ref{figure:fd}.
Note that $T$ acts on $L$ as a translation through $2$. To show that $L$ is contained in the complement of $D$, it suffices to show that a segment of length $2$ is contained in the interior of some isometric spheres. By computing the Cygan distance between a point in $L$ and the center of an isometric sphere,
one sees that the segment $\{ [x+i\sqrt{3}/2, \sqrt{3}x] : -1/2 \leq x \leq 1/2 \}$ lies in the interior of $\mathcal{I}_0^{+}$ and
the segment $\{ [x+i\sqrt{3}/2, \sqrt{3}x] : 1/2 \leq x \leq 3/2 \}$ lies in the interior of $\mathcal{I}_0^{-}$.
\end{proof}
\begin{defn}
Let $\Sigma_{-1}=\{ [-3/2+iy,t]\in\mathcal{N} : y,t \in\mathbb{R} \}$ and $\Sigma_{0}=\{ [1/2+iy,t]\in\mathcal{N} : y,t \in\mathbb{R} \}$ be two planes in the Heisenberg group.
\end{defn}
In fact, the vertical planes $\Sigma_{-1}$ and $\Sigma_{0}$ are boundaries of {\it fans} in the sense of \cite{Go-P2}. Let $D_T$ be the region between $\Sigma_{-1}$ and $\Sigma_0$, that is
$$
D_T=\{ [x+iy,t]\in\mathcal{N}: -3/2\leq x\leq 1/2 \}.
$$
It is obvious that $\Sigma_0=T(\Sigma_{-1})$. Thus $D_T$ is a fundamental domain for $\langle T \rangle$ acting on $\partial\mathbf{H}^2_{\mathbb{C}}$.
\begin{prop}\label{prop:plane-sigma}
The intersections of $\Sigma_0$ and $\Sigma_{-1}$ with the isometric spheres $\mathcal{I}_k^{\pm}$, $\mathcal{I}_k^{\star}$ and $\mathcal{I}_k^{\diamond}$ are empty, except the following:
\begin{itemize}
\item Each one of $\Sigma_0 \cap \mathcal{I}_0^{\pm}$ and $\Sigma_0 \cap \mathcal{I}_0^{\star}$ is a circle and $\Sigma_0 \cap \mathcal{I}_0^{\diamond}=\Sigma_0 \cap \mathcal{I}_1^{\diamond}=\{q_3\}$.
\item Each one of $\Sigma_{-1}\cap \mathcal{I}_{-1}^{\pm}$ and $\Sigma_{-1} \cap \mathcal{I}_{-1}^{\star}$ is a circle and $\Sigma_{-1} \cap \mathcal{I}_{-1}^{\diamond}=\Sigma_{-1} \cap \mathcal{I}_0^{\diamond}=\{q_2\}$.
\end{itemize}
\end{prop}
\begin{proof}
Since the isometric spheres are strictly convex, the intersection of each with a plane is either a topological circle, a single point, or empty. Note that $\Sigma_0=T(\Sigma_{-1})$. Thus it suffices to consider the intersections of $\Sigma_0$ with the isometric spheres. By a straightforward computation, each of $\Sigma_0 \cap \mathcal{I}_0^{\pm}$ and $\Sigma_0 \cap \mathcal{I}_0^{\star}$ is a circle. (See Figure \ref{figure:curves}.)
\end{proof}
\begin{lem}\label{lem:curve-sigma}
The plane $\Sigma_0$ (respectively $\Sigma_{-1}$) is preserved by $I_2$ (respectively $T^{-1}I_2T$).
The intersection $\Sigma_{0} \cap \partial U$ (respectively $\Sigma_{-1} \cap \partial U$) is a simple closed curve $c_0$ (respectively $c_{-1}$)
in the union $ \tilde{s}_0^{+} \cup \tilde{s}_0^{-}$ (respectively $\tilde{s}_{-1}^{+} \cup \tilde{s}_{-1}^{-}$), which contains the points $q_3$ and $v_0=[1/2+i\sqrt{3}/2,-\sqrt{3}]$ (respectively $q_2$ and $v_{-1}=T^{-1}(v_0)$).
\end{lem}
\begin{proof}
It suffices to consider $\Sigma_{0}$. The $\mathbb{C}$-circle associated to $I_2$, that is the ideal boundary of the complex line fixed by $I_2$, is $\{[1/2+i\sqrt{3}/2,t]\in \mathcal{N} : t\in\mathbb{R} \}$, which
is contained in $\Sigma_{0}$. Thus $\Sigma_0$ is preserved by $I_2$.
It is obvious that $ \Sigma_{0}$ contains $q_3$, which is the tangent point of $\mathcal{I}_{0}^{\diamond}$ and $\mathcal{I}_{1}^{\diamond}$.
The intersections $\Sigma_{0} \cap \mathcal{I}_{0}^{+}$, $\Sigma_{0} \cap \mathcal{I}_{0}^{-}$ and $\Sigma_{0} \cap \mathcal{I}_{0}^{\star}$ are circles by Proposition \ref{prop:plane-sigma}. One can compute that the intersection $\Sigma_{0} \cap \mathcal{I}_{0}^{+} \cap \mathcal{I}_{0}^{-}$ contains two points, $q_3$ and $v_0=[1/2+i\sqrt{3}/2,-\sqrt{3}]$. See Figure \ref{figure:curves}. These two points divide each of the circles on $\mathcal{I}_{0}^{+}$ and $\mathcal{I}_{0}^{-}$ into two arcs.
Let $c_0^{+}$ be the arc with endpoints $q_3$ and $v_0$ on $\mathcal{I}_{0}^{+}$ lying in the exterior of $\mathcal{I}_{0}^{-}$ and $c_0^{-}$ be the one on $\mathcal{I}_{0}^{-}$ lying in the exterior of $\mathcal{I}_{0}^{+}$. Then $c_0=c_0^{+}\cup c_0^{-}$ is a simple closed curve.
Observe that $\Sigma_{0} \cap \mathcal{I}_{0}^{\star}$ lies in the union of the interiors of $\mathcal{I}_{0}^{+}$ and $\mathcal{I}_{0}^{-}$.
Thus $\Sigma_0$ does not intersect $\tilde{s}_0^{\star}$.
By Proposition \ref{prop:plane-sigma}, $c_0^{+}$ lies on $\tilde{s}_0^{+}$ and $c_0^{-}$ lies on $\tilde{s}_0^{-}$.
Therefore, the intersection $\Sigma_{0} \cap \partial U$ is $c_0$, which is a simple closed curve containing $q_3$ and $v_0$.
\end{proof}
\begin{figure}
\caption{The intersections of $\Sigma_{0}$ with the isometric spheres.}
\label{figure:curves}
\end{figure}
Let $U^{c}$ be the closure of the complement of $U$ in $\mathcal{N}$.
\begin{prop}\label{prop:solid-tube}
The closure of the intersection $U^{c}\cap D_T$ is a solid tube homeomorphic to a 3-ball.
\end{prop}
\begin{proof}
It suffices to show that the boundary of $U^{c}\cap D_T$ is a 2-sphere. Now let us consider the cell structure of $U^{c}\cap D_T$. See Figure \ref{figure:cylinder}.
According to Lemma \ref{lem:curve-sigma}, the intersection of $U^{c}$ with $\Sigma_0$ (resp. $\Sigma_{-1}$) is a topological disc with two vertices $q_3$ and $v_0$ (resp. $q_2$ and $v_{-1}$) and two edges $c_0^{\pm}$ (resp. $c_{-1}^{\pm}$). Moreover, $c_0$ (resp. $c_{-1}$) divides $\mathcal{O}_0^{\pm}$ (resp. $\mathcal{O}_{-1}^{\pm}$) into a quadrilateral $\mathcal{Q'}_0^{\pm}$ (resp. $\mathcal{Q'}_{-1}^{\pm}$) and a heptagon $\mathcal{H}_{0}^{\pm}$ (resp. $\mathcal{H}_{-1}^{\pm}$).
Since $p_2$, $p_5$ and $T^{-1}(p_4)$ are contained in $D_T$, one can see that $D_T$ contains $\mathcal{Q'}_0^{-}$, $\mathcal{Q'}_{-1}^{+}$, $\mathcal{H}_{0}^{+}$ and $\mathcal{H}_{-1}^{-}$.
Besides, $D_T$ contains $\mathcal{Q}_0^{+}$, $\mathcal{Q}_{-1}^{-}$, $(\mathcal{T}_1)_0^{\diamond}$, $(\mathcal{T}_2)_0^{\diamond}$, $(\mathcal{T}_1)_0^{\star}$ and $(\mathcal{T}_2)_{-1}^{\star}$.
Thus the boundary of $U^{c}\cap D_T$ consists of 12 faces, 23 edges and 13 vertices. See the region between $c_0$ and $c_{-1}$ in Figure \ref{figure:cylinder}.
Therefore the Euler characteristic of the boundary of $U^{c}\cap D_T$ is $13-23+12=2$, so the boundary of $U^{c}\cap D_T$ is a 2-sphere.
\end{proof}
Proposition \ref{prop:r-circle} and Proposition \ref{prop:solid-tube} imply the following result.
\begin{prop}\label{prop:cylinder}
$U \cap D_T$ is an unknotted cylinder cross a ray homeomorphic to $S^{1}\times [0,1]\times \mathbb{R}_{\geq0}$.
\end{prop}
\begin{proof}
As stated in Proposition \ref{prop:r-circle}, $U^{c}$ contains the line $L$. Thus $U^{c}\cap D_T$ is a tubular neighborhood of $L \cap D_T$. It cannot be knotted.
Therefore $\partial U \cap D_T$ is an unknotted cylinder homeomorphic to $S^{1}\times [0,1]$.
One can see that $U\cap\Sigma_0$ is $c_0$ cross a ray and $U\cap\Sigma_{-1}$ is $c_{-1}$ cross a ray. Both of them are homeomorphic to $S^{1}\times\mathbb{R}_{\geq 0}$. Hence $U \cap D_T$ is an unknotted cylinder cross a ray homeomorphic to $S^{1}\times [0,1]\times \mathbb{R}_{\geq0}$.
\end{proof}
Applying the powers of $T$, Proposition \ref{prop:cylinder} immediately implies the following corollary.
\begin{cor}
$U$ is an unknotted cylinder cross a ray homeomorphic to $S^{1}\times \mathbb{R}\times \mathbb{R}_{\geq0}$.
\end{cor}
\begin{rem}
$U$ is the complement of a tubular neighborhood of the $T$-invariant $\mathbb{R}$-circle $L$; that is, $U$ is a horotube for $T$ (see \cite{schwartz-cr} for the definition of a horotube).
\end{rem}
\begin{defn}
Suppose that the cylinder $S^{1}\times [0,1]$ has a combinatorial cell structure with finitely many faces $\{F_i\}$.
A \emph{canonical subdivision} on $S^{1}\times [0,1] \times \mathbb{R}_{\geq0}$ is a finite union of 3-dimensional pieces $\{\widehat{F_i}\}$ where $\widehat{F_i}=F_{i}\times \mathbb{R}_{\geq 0}$.
\end{defn}
\begin{prop}\label{prop:subdivision}
There is a canonical subdivision on $U \cap D_T$.
\end{prop}
\begin{proof}
As described in the proof of Proposition \ref{prop:solid-tube}, the combinatorial cell structure of $\partial U \cap D_T$ has 10 faces $\mathcal{Q'}_0^{-}$, $\mathcal{Q'}_{-1}^{+}$, $\mathcal{H}_{0}^{+}$, $\mathcal{H}_{-1}^{-}$, $\mathcal{Q}_0^{+}$, $\mathcal{Q}_{-1}^{-}$, $(\mathcal{T}_1)_0^{\diamond}$, $(\mathcal{T}_2)_0^{\diamond}$, $(\mathcal{T}_1)_0^{\star}$ and $(\mathcal{T}_2)_{-1}^{\star}$. By Proposition \ref{prop:cylinder}, $U \cap D_T$ is the union of 3-dimensional pieces $\widehat{\mathcal{Q'}_0^{-}}$, $\widehat{\mathcal{Q'}_{-1}^{+}}$, $\widehat{\mathcal{H}_{0}^{+}}$, $\widehat{\mathcal{H}_{-1}^{-}}$, $\widehat{\mathcal{Q}_0^{+}}$, $\widehat{\mathcal{Q}_{-1}^{-}}$, $\widehat{(\mathcal{T}_1)_0^{\diamond}}$, $\widehat{(\mathcal{T}_2)_0^{\diamond}}$, $\widehat{(\mathcal{T}_1)_0^{\star}}$ and $\widehat{(\mathcal{T}_2)_{-1}^{\star}}$. Combinatorially, these 3-dimensional pieces are the cone from $q_{\infty}$ to the faces of $\partial U \cap D_T$.
\end{proof}
Let $\Omega(\Gamma)$ be the discontinuity region of $\Gamma$ acting on $\partial\mathbf{H}^2_{\mathbb{C}}$. Then $U \cap D_T$ is clearly a fundamental domain for $\Gamma$ acting on $\Omega(\Gamma)$. By cutting and gluing, we can obtain the following fundamental domain for $\Gamma$ acting on $\Omega(\Gamma)$.
\begin{prop}\label{prop:polyhedra}
Let $\mathcal{P}_{+}$ be the union of $\widehat{\mathcal{H}_{0}^{+}}$, $\widehat{(\mathcal{T}_1)_0^{\diamond}}$, $\widehat{(\mathcal{T}_2)_0^{\diamond}}$, $T(\widehat{\mathcal{Q'}_{-1}^{+}})$, $T(\widehat{\mathcal{Q}_{-1}^{-}})$ and $T(\widehat{(\mathcal{T}_2)_{-1}^{\star}})$.
Let $\mathcal{P}_{-}$ be the union of $\widehat{\mathcal{Q}_0^{+}}$, $\widehat{(\mathcal{T}_1)_0^{\star}}$, $\widehat{\mathcal{Q'}_0^{-}}$ and $T(\widehat{\mathcal{H}_{-1}^{-}})$.
Then $\mathcal{P}_{+} \cup \mathcal{P}_{-}$ is a fundamental domain for $\Gamma$ acting on $\Omega(\Gamma)$.
Moreover, $\mathcal{P}_{+}$ (resp. $\mathcal{P}_{-}$) is combinatorially a pyramid over a hendecagon (resp. an enneagon) with cone vertex $q_{\infty}$ and base $\mathcal{O}_0^{+} \cup \mathcal{Q}_0^{-} \cup (\mathcal{T}_1)_0^{\diamond} \cup (\mathcal{T}_2)_0^{\diamond} \cup (\mathcal{T}_{2})_0^{\star}$ (resp. $\mathcal{O}_0^{-} \cup \mathcal{Q}_0^{+} \cup (\mathcal{T}_{1})_0^{\star}$).
\end{prop}
\begin{proof}
Since $\Sigma_0=T(\Sigma_{-1})$ and $c_0=T(c_{-1})$, $U\cap D_T$ and $T(U\cap D_T)$ can be glued together along $c_0\times\mathbb{R}_{\geq 0}$.
Note that $U \cap D_T$ is a fundamental domain for $\Gamma$ acting on $\Omega(\Gamma)$ and has a subdivision as described in Proposition \ref{prop:subdivision}.
Therefore $\mathcal{P}_{+} \cup \mathcal{P}_{-}$ is also a fundamental domain.
As described in Proposition \ref{prop:solid-tube}, $c_0$ (resp. $c_{-1}$) divides $\mathcal{O}_0^{\pm}$ (resp. $\mathcal{O}_{-1}^{\pm}$) into a quadrilateral $\mathcal{Q'}_0^{\pm}$ (resp. $\mathcal{Q'}_{-1}^{\pm}$) and a heptagon $\mathcal{H}_{0}^{\pm}$ (resp. $\mathcal{H}_{-1}^{\pm}$).
Note that $\mathcal{O}_0^{\pm}=T(\mathcal{O}_{-1}^{\pm})$ and $(\mathcal{T}_2)_0^{\star}=T((\mathcal{T}_2)_{-1}^{\star})$.
Thus the base of $\mathcal{P}_{+}$ (resp. $\mathcal{P}_{-}$) is $\mathcal{O}_0^{+} \cup \mathcal{Q}_0^{-} \cup (\mathcal{T}_1)_0^{\diamond} \cup (\mathcal{T}_2)_0^{\diamond} \cup (\mathcal{T}_{2})_0^{\star}$ (resp. $\mathcal{O}_0^{-} \cup \mathcal{Q}_0^{+} \cup (\mathcal{T}_{1})_0^{\star}$) which is combinatorially a hendecagon (resp. an enneagon).
See Figure \ref{figure:cylinder} and Figure \ref{figure:poly0}.
\end{proof}
\begin{figure}
\caption{A schematic view of the fundamental domain of $\Gamma$ on $\Omega(\Gamma)$. The vertices colored in red are the parabolic fixed points.}
\label{figure:poly0}
\end{figure}
\begin{defn}
Let $p'_2=S^{-1}(p_2)$, $p_1=S^{-1}(q_{\infty})=[0,0]$ and $p'_{10}=S^{-1}(p_{10})$.
\end{defn}
\begin{lem}\label{lem:gluing}
Let $\mathcal{P}$ be the union $\mathcal{P}_{+} \cup S^{-1}(\mathcal{P}_{-})$.
Then $\mathcal{P}$ is combinatorially a polyhedron with 8 triangular faces, 4 square faces, 2 pentagonal faces and 2 hexagonal faces.
The faces of $\mathcal{P}$ are paired as follows:
\begin{eqnarray*}
T: & (q_{\infty},p_2, p_9, q_2)\longmapsto (q_{\infty}, p_3, p_{12}, q_3), \\
S^{-1}T: & (q_{\infty}, q_2, p_{11}, p_8, p_{10}) \longmapsto (p_1, p_2, p_9, p_{11}, p_8),\\
(S^{-1}T)^2: & (q_2, p_9, p_{11}) \longmapsto (q_3, p_8, p_{10}),\\
S^{-1}: & (q_{\infty}, p_{10}, q_3) \longmapsto (p_1, p'_{10}, p_2),\\
S^{-1}T^{-1}S: & (p_1, p_8, q_3) \longmapsto (p_1, p'_{10}, p'_2),\\
S^{-2}: & (q_3, p_{12}, p_3, p_7) \longmapsto (p'_2, p'_{10}, p_2, p_6),\\
S^{-1}: & (q_{\infty}, p_2, p_6, p'_2, p_4, p_3) \longmapsto (p_1, p'_2, p_4, p_3, p_7, q_3),\\
S^{-1}: & ( p_6, p'_2, p_4) \longmapsto (p_4, p_3, p_7).
\end{eqnarray*}
\end{lem}
\begin{proof}
The bases of $\mathcal{P}_{+}$ and $\mathcal{P}_{-}$ are paired as follows:
\begin{eqnarray*}
S^{-1}:& \mathcal{O}_{0}^{-} \longrightarrow \mathcal{O}_{0}^{+} \\
& (p_3, p_4, p_6, p_5, q_3, p_{14}, p_{13}, p_{15}) \longmapsto (q_3, p_7, p_4, p_6, p_2, p_9, p_{11},p_8), \\
S : & \mathcal{Q}_0^{+} \longrightarrow \mathcal{Q}_0^{-} \\
& (p_2, p_5, q_3, p_{10}) \longmapsto (q_3, p_7, p_3, p_{12}), \\
S^2 : & (\mathcal{T}_{1})_0^{\star} \longrightarrow (\mathcal{T}_{2})_0^{\star} \\
& (p_2,p_5,p_6) \longmapsto (p_3, p_4, p_7), \\
(S^{-1}T)^2 : & (\mathcal{T}_{1})_0^{\diamond} \longrightarrow (\mathcal{T}_{2})_0^{\diamond} \\
& (q_2, p_9, p_{11}) \longmapsto (q_3, p_8, p_{10}).
\end{eqnarray*}
Thus $S^{-1}(\mathcal{P}_{-})$ and $\mathcal{P}_{+}$ are glued along $\mathcal{O}_0^{+}$.
According to Lemma \ref{lem:goldman}, $S^{-1}(\mathcal{P}_{-})$ lies in the interior of $\mathcal{I}_0^{+}$, since $\mathcal{P}_{-}$ lies in the exterior of $\mathcal{I}_0^{+}$.
Moreover, $p_1=S^{-1}(q_{\infty})=[0,0]$ is the center of the isometric sphere $\mathcal{I}_0^{+}$.
See Figure \ref{figure:poly1}.
\end{proof}
\begin{figure}
\caption{The combinatorial picture of $\mathcal{P}$.}
\label{figure:poly1}
\end{figure}
\begin{prop}\label{prop:fundamental-group}
Let $\Omega$ be the discontinuity region of $\Gamma$ acting on $\partial\mathbf{H}^2_{\mathbb{C}}$. Then the fundamental group of $\Omega/\Gamma$ has a presentation
$$
\langle u, v, w | w^{-1}vu^{-1}v^{-1}wu=v^2wuw^{-3}u=id \rangle.
$$
\end{prop}
\begin{proof}
Let $x_i, i=1,2,3,4,5,6,7,8$ be the corresponding gluing maps of $\mathcal{P}$ given in Lemma \ref{lem:gluing}.
These are the generators of the fundamental group of $\Omega/\Gamma$.
By considering the edge cycles of $\mathcal{P}$ under the gluing maps, we have the relations as follows.
\begin{enumerate}
\item $x_7^{-1}\cdot x_5 \cdot x_7 \cdot x_1 =id,$
\item $x_2^{-1} \cdot x_4 \cdot x_1 =id,$
\item $x_2 \cdot x_3^{-1} \cdot x_4^{-1} \cdot x_6 \cdot x_1 =id,$
\item $x_3^{-1} \cdot x_5^{-1} \cdot x_6 \cdot x_1 =id,$
\item $x_2 \cdot x_3 \cdot x_2 =id,$
\item $x_4^{-1} \cdot x_5 \cdot x_2=id,$
\item $x_7 \cdot x_8 \cdot x_6=id,$
\item $x_8 \cdot x_7 \cdot x_6=id,$
\item $x_8^{-1} \cdot x_7=id.$
\end{enumerate}
For example, the edge cycle of $[q_{\infty},p_2]$ is
$$
[q_{\infty},p_2] \xrightarrow{x_1} [q_{\infty},p_3] \xrightarrow{x_7} [p_1,q_3] \xrightarrow{x_5} [p_1,p'_2] \xrightarrow{x_7^{-1}} [q_{\infty},p_2].
$$
Thus
$$
x_7^{-1}\cdot x_5 \cdot x_7 \cdot x_1=id.
$$
This is the relation in (1). The others can be given by a similar argument.
Simplifying the relations and setting $u=x_1, v=x_2, w=x_7$, we obtain the presentation of the fundamental group of $\Omega/\Gamma$.
\end{proof}
Now, we are ready to show the following theorem.
\begin{thm}\label{thm:s782}
Let $\Omega$ be the discontinuity region of $\Gamma$ acting on $\partial\mathbf{H}^2_{\mathbb{C}}$. Then the quotient space $\Omega/\Gamma$ is homeomorphic to the two-cusped hyperbolic 3-manifold $s782$ in the SnapPy census.
\end{thm}
\begin{proof}
Let $M=\Omega/\Gamma$. According to Proposition \ref{prop:fundamental-group},
the fundamental group of $M$ has a presentation
$$
\pi_1(M)=\langle u, v, w | w^{-1}vu^{-1}v^{-1}wu=v^2wuw^{-3}u=id \rangle.
$$
The manifold $s782$ is a two-cusped hyperbolic 3-manifold with finite volume.
Its fundamental group provided by SnapPy has a presentation
$$
\pi_1(s_{782})=\langle a,b,c | a^2 c b^4 c=abca^{-1}b^{-1}c^{-1}=id \rangle.
$$
Using \texttt{Magma}, we get an isomorphism $\Psi: \pi_1(M)\longrightarrow \pi_1(s782)$ given by
$$
\Psi(u)=c^{-1}b^{-1}, \quad \Psi(v)=b^{-1},\quad \Psi(w)=a.
$$
Therefore, by the prime decomposition of 3-manifolds \cite{Hempel}, $M$ is the connected sum of $s782$ and a closed 3-manifold with trivial fundamental group. The solution of the
Poincar\'e conjecture implies that this closed 3-manifold is the 3-sphere. Thus $M$ is homeomorphic to $s782$. The proof of Theorem \ref{thm:s782} is complete.
\end{proof}
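As an independent sanity check of the census data used above, the cusp count and a presentation of the fundamental group of $s782$ can be inspected directly in SnapPy. The following minimal Python sketch assumes the \texttt{snappy} package is installed; the generators and relators it prints are only defined up to simplification, so they need not match the presentation above verbatim.
\begin{verbatim}
# Minimal sanity check of the census manifold s782 with SnapPy.
# Assumes the `snappy` package is installed; the printed presentation may
# differ from the paper's one by Tietze transformations / simplification.
import snappy

M = snappy.Manifold("s782")      # cusped census manifold s782
print(M.num_cusps())             # expected: 2
print(M.volume())                # finite hyperbolic volume

G = M.fundamental_group()        # a presentation of pi_1(s782)
print(G.generators())
print(G.relators())
\end{verbatim}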
\end{document}
\begin{document}
\title{Partitioned Gradient Matching based Data Subset Selection \ for Compute-Efficient \& Robust ASR Training}
\begin{abstract}
Training state-of-the-art ASR systems such as \textsc{RNN-T}\ often has a high associated financial and environmental cost. Training with a subset of training data could mitigate this problem if the subset selected could achieve performance on-par with training with the entire dataset. Although there are many data subset selection (DSS) algorithms, direct application to the \textsc{RNN-T}\ is difficult, especially for DSS algorithms that are adaptive and use learning dynamics such as gradients, since \textsc{RNN-T}\ tends to have gradients with a significantly larger memory footprint. In this paper we propose \textbf{P}artitioned \textbf{G}radient \textbf{M}atching (\textsc{PGM}), a novel distributable DSS algorithm suitable for massive datasets like those used to train \textsc{RNN-T}. Through extensive experiments on Librispeech 100H and Librispeech 960H, we show that \textsc{PGM}\ achieves between $3\times$ and $6\times$ speedup with only a very small accuracy degradation (under $1\%$ absolute WER difference). In addition, we demonstrate similar results for \textsc{PGM}\ even in settings where the training data is corrupted with noise.
\end{abstract}
\section{Introduction}
Owing to their simplicity in directly mapping an acoustic input sequence to an output sequence of characters, words, or even word-pieces, neural end-to-end methods~\cite{graves2006connectionist,graves2013speech,chan2016listen,vaswani2017attention,he2019streaming} have become ubiquitous. The most common end-to-end architectures include (i) Connectionist Temporal Classification (CTC) models~\cite{graves2006connectionist,gulati2020conformer}, (ii) Attention-based Encoder-Decoder models (AED)~\cite{chan2016listen, watanabe2017hybrid} and (iii) Sequence Transduction models \cite{graves2012sequence} such as RNN-Ts~\cite{graves2013speech}. Due to their streaming and low-latency properties, sequence transduction architectures such as the RNN-T~\cite{graves2013speech, sainath2020streaming,saon2021advancing} are becoming the state of the art for modeling the ASR problem.
These successes in ASR have come at a cost, as most practical RNN-T models are trained on thousands of hours of labeled data~\cite{rao2017exploring, zhao2021addressing}. Model training on these massive datasets leads to significantly increased training time, energy requirements, and consequently carbon footprint~\cite{sharir2020cost, strubell2019energy, schwartz2020green, parcollet2021energy}. As per Parcollet {\em et al.}~\cite{parcollet2021energy}, training an RNN-T model on Librispeech 960H~\cite{panayotov2015librispeech} emits more than 10\,kg of $CO_{2}$ if trained in France, and the figure becomes much worse for developing countries. This is exacerbated by the many training runs required for hyper-parameter tuning. This warrants a need for greener training strategies that rely on significantly fewer resources while still achieving state-of-the-art results.
One way to make ASR training more efficient is to train on a subset of the training data, which ensures minimum performance loss~\cite{pmlr-v139-killamsetty21a,wei2014fast, kaushal2019learning,coleman2020selection,har2004coresets,clarkson2010coresets,mirzasoleiman2020coresets,killamsetty2021glister,liu2017svitchboard}. Since training on a subset reduces end-to-end time, the hyperparameter tuning time is also reduced. While greedy subset selection algorithms employ various criteria to identify the appropriate subset of training points, the process of forming the subsets remains sequential.
However, for a large-scale speech corpus such as Librispeech~\cite{panayotov2015librispeech}, carrying out this sequential selection over the full set of instance-wise gradients is difficult (see Section~\ref{need}). In this work, we propose a Partitioned Gradient Matching (\textsc{PGM}) approach, which scales well to the huge datasets used in ASR and takes advantage of distributed setups. To the best of our knowledge, this is the first such study performed for ASR systems.
\subsection{Contributions of this work}
\textbf{The \textsc{PGM}{} Algorithm:} We present \textsc{PGM}{}, a data subset selection algorithm
which constructs partial subsets from data partitions of the original dataset. This circumvents the need to load the entire dataset into memory at once, which is otherwise prohibitively expensive for ASR systems such as \textsc{RNN-T}\ (see Section~\ref{need}).
\noindent \textbf{\textsc{PGM}{} is a distributable Algorithm:} Training with a subset of the training data is beneficial only when the cost of selecting the subset is also small. Therefore, for subset selection algorithms to scale to the larger datasets used in speech recognition, they must work across multiple GPUs, just as training for ASR systems is distributed. In Section~\ref{dist}, we present \textsc{PGM}{}, which is better suited to ASR systems, and more specifically to the RNN-T.
\noindent \textbf{Trade-off between efficiency and accuracy:}
A subset selection algorithm has to balance the competing goals of efficiency and accuracy. We perform extensive experiments to demonstrate the trade-off between efficiency and accuracy for \textsc{PGM}{} and provide a general recipe for a user to control the trade-off.
\noindent \textbf{Effectiveness of \textsc{PGM}{} in a Noisy ASR setting:}
A subset selection algorithm should work well when the training data is corrupted with noise. In this work, we show the efficacy of \textsc{PGM}{}, even when a fraction of the labeled dataset is augmented with noise across varying signal-to-noise ratios.
\section{Background: RNN Transducer}
The RNN-T model~\cite{graves2013speech,graves2012sequence} maps an input acoustic signal $(x_1, x_2, \dots, x_T)$ to an output sequence $(y_1, y_2, \dots, y_U)$, where each output symbol $y_i \in \mathcal{M}$, and $\mathcal{M}$ is the vocabulary. An RNN-T model consists of three components: (i) Transcription Network - which maps an acoustic signal $(x_1, x_2, \dots, x_T)$ to an encoded representation $(h_1, h_2, \dots, h_T)$, $T$ being the length of the acoustic signal and $x_i$ being a $W$-dimensional feature representation, (ii) Prediction Network - a language model that maps the previously emitted non-blank tokens ${\mathbf{y}}_{<u} = y_1, y_2, \dots, y_{u-1}$ to a representation $g_u$ for the next output token, and (iii) Joint Network - which combines the Transcription Network representation $h_t$ and the Prediction Network representation $g_u$ to produce $z_{t,u}$ using a feed-forward network $J$ and $\oplus$ as a combination operator (typically a sum).
\begin{equation}
\label{eq:prnnt1}
\begin{aligned}
h_t = TranscriptionNetwork(x, t)
\end{aligned}
\end{equation}
\begin{equation}
\label{eq:prnnt2}
\begin{aligned}
g_u = PredictionNetwork(y, u)
\end{aligned}
\end{equation}
During the training, the output probability $P_\text{rnnt}(y_{t,u})$ over the output sequence ${\mathbf{y}}$ is marginalized over all possible alignments using an efficient forward-backward algorithm to compute the log-likelihood. The training objective is to minimize the Negative Log Likelihood of the target sequence.
\begin{equation}
\label{eq:prnnt3}
\begin{aligned}
P_\text{rnnt}(y_{t,u} | {\mathbf{y}}_{<u}, x_t) &= \text{softmax}(J(h_t \oplus g_u))
\end{aligned}
\end{equation}
\begin{equation}
\label{eq:prnnt4}
\begin{aligned}
\mathcal{L} = - \ln \Pr(y|x)
\end{aligned}
\end{equation}
For inference, the decoding algorithms~\cite{graves2012sequence, saon2020alignment} attempt to find the best alignment over $(t, u)$ and its corresponding output sequence ${\mathbf{y}}$ using a beam search. In this work, we use the gradients of the joint network layer ($J$) for \textsc{PGM}, since this linear layer fuses the audio ($h_t$) and text ($g_u$) representations.
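To make Eq.~\eqref{eq:prnnt3} concrete, the following is a minimal PyTorch-style sketch of the joint network. The layer sizes (1024-dimensional representations, a 1000-token BPE vocabulary) follow the recipe described later, while the additive $\oplus$ and the absence of any extra non-linearity are simplifying assumptions for illustration, not the exact recipe implementation.
\begin{verbatim}
# Minimal sketch of an RNN-T joint network (Eq. 3), for illustration only.
# Assumed sizes: 1024-dim encoder/prediction outputs, 1000-token vocabulary.
import torch
import torch.nn as nn

class JointNetwork(nn.Module):
    def __init__(self, dim=1024, vocab_size=1000):
        super().__init__()
        self.proj = nn.Linear(dim, vocab_size)   # the single linear layer J

    def forward(self, h, g):
        # h: (B, T, dim) transcription-network outputs h_t
        # g: (B, U, dim) prediction-network outputs g_u
        z = h.unsqueeze(2) + g.unsqueeze(1)      # h_t (+) g_u -> (B, T, U, dim)
        return torch.softmax(self.proj(z), dim=-1)  # P_rnnt(y_{t,u} | ...)

# Usage: probs = JointNetwork()(h, g) feeds the transducer loss.
\end{verbatim}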
\section{Limitations of existing subset selection algorithms}\label{need}
An approach to selecting a subset of points from the entire dataset is to rank points based on their suitability. This ranking can be done either via some static metric such as diversity or representation among features~\cite{wei2014fast, kaushal2019learning} or via a dynamic metric using instance-wise loss gradients\footnote{the gradient associated with an instance $(x,y)$ as opposed to the mean mini-batch loss gradient used in training the model} to construct the subset greedily~\cite{mirzasoleiman2020coresets, killamsetty2021glister, pmlr-v139-killamsetty21a}. In the latter case, ranking and re-ranking happen using instance-wise loss gradients. Specifically, during the selection process, the loss gradients of the entire set of instances have to be available in memory in order to perform greedy selection, since otherwise the subset selection time would be prohibitively large owing to disk reads, {\em etc}.
\begin{figure*}
\caption{Block diagram of \textsc{PGM}. Gradient matching is performed independently on each data partition, so the partial subsets can be computed in parallel across GPUs.}
\label{adaptivedss}
\end{figure*}
As keeping all the loss gradients in memory would be resource intensive, we employ the following approximations, which have also been employed previously by~\cite{mirzasoleiman2020coresets,killamsetty2021glister,pmlr-v139-killamsetty21a}, {\em viz.}, (i) only last layer gradients are used and (ii) subsets are constructed for each class. The latter technique is not applicable to ASR systems, since ASR requires sequential decoding into a large vocabulary. Analogously to the last-layer approximation, for the \textsc{RNN-T}{} model we use the gradients of the joint network layer ($J$), which performs the important task of fusing speech ($h_t$) and text ($g_u$) features for sequence transduction. In Table~\ref{tab:mem_foot} we present the memory footprint of the last-layer gradients obtained while training ResNet18~\cite{he2016deep} on CIFAR10~\cite{Krizhevsky09learningmultiple} and of the joint-network gradients of \textsc{RNN-T}\ on Librispeech 100H. We compare against ResNet18 on CIFAR10 because most existing subset selection algorithms are applied in image classification settings. In the first column of Table~\ref{tab:mem_foot}, we present the memory footprint of a single instance's loss gradient. Clearly, the loss gradients used to train \textsc{RNN-T}{} have a much higher footprint than those used in the image classification setting. The CIFAR10 dataset has 50,000 instances and Librispeech 100H has 20,539. In the second column, we present the total memory required to store all the instance-wise loss gradients.
The memory requirement for \textsc{RNN-T}'s loss gradients is prohibitively huge.
Thus, storing all the instance-wise loss gradients at once is not feasible for \textsc{RNN-T}{} systems.
\begin{table}[t]
\centering
\begin{tabular}{c| c| c| c} \hline \hline
\multirow{3}{2em}{Dataset }& {Single} & {Total} & {Per } \\
& {Gradient} & {size} & {Batch} \\
& {size (MB)} & (GB) &{size (GB)} \\ \hline
{\small{CIFAR10}}& 0.0215 & 1.049 & 0.0082 \\ [0.7ex]\hline
{\small{Librispeech 100H}}& 4.096 & 111 & 28 \\ [0.7ex]\hline
\hline
\end{tabular}
\caption{Memory footprint of last layer gradient obtained while training ResNet18 using CIFAR10 and gradients of the joint network layer of \textsc{RNN-T}\ using Librispeech 100H. We use a batch size of 128 for CIFAR10 and 4 for Librispeech 100H.}
\label{tab:mem_foot}
\end{table}
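As a rough sanity check on the single-gradient column of Table~\ref{tab:mem_foot}: the joint network used here is a single linear layer projecting 1024-dimensional representations onto a 1000-token BPE vocabulary, so one float32 gradient already occupies about $4.1$ MB per instance, and the full dataset requires tens of gigabytes even before framework overhead. The short sketch below reproduces this back-of-the-envelope estimate; bias terms and bookkeeping overhead are ignored, so it only lower-bounds the measured totals.
\begin{verbatim}
# Back-of-the-envelope estimate of the per-instance gradient footprint of the
# RNN-T joint network (a single 1024 -> 1000 linear layer), in float32.
# Approximation only: bias terms and framework overhead are ignored.
BYTES_PER_FLOAT32 = 4
dim, vocab = 1024, 1000

grad_mb = dim * vocab * BYTES_PER_FLOAT32 / 1e6   # ~4.1 MB per instance
n_instances = 20539                                # Librispeech 100H utterances

print(f"per-instance gradient: {grad_mb:.3f} MB")
print(f"all instances at once: >= {grad_mb * n_instances / 1e3:.0f} GB")
\end{verbatim}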
\citet{pmlr-v139-killamsetty21a} propose another technique, {\em viz.}, the {\em PerBatch} version, wherein one selects mini-batches (as used in SGD) instead of individual instances. The memory reduction from this technique is also limited for ASR systems such as \textsc{RNN-T}, since the batch size used here is often small. For example, the batch size employed for the CIFAR10 dataset is typically 128, as proposed by~\cite{he2016deep}, whereas the batch size is 4 for Librispeech 100H as used in the SpeechBrain~\cite{ravanelli2021speechbrain} Librispeech RNN-T recipe. We present the memory required to store all the batch-wise loss gradients in the third column of Table \ref{tab:mem_foot}. Although this requirement may seem satisfiable with some high-end computing resources, the figures shown cover the storage of the loss gradients only. If we add other memory needs, such as space to store the \textsc{RNN-T}{} model and space for feature processing and gradient computation, one effectively needs much more GPU memory than the figures presented in Table \ref{tab:mem_foot}. These memory issues become even more pronounced while performing subset selection with Librispeech 960H.
Another problem with the existing subset selection algorithms is that they are sequential in nature. This prevents the selection algorithm from enjoying the speedups available from parallelizing across multiple GPUs. The subset selection step can therefore become a bottleneck while training \textsc{RNN-T}\ with datasets of the scale of Librispeech. Hence, there is a need to design a data subset selection algorithm that does not need all the loss gradients at once to form a subset and that can be distributed across GPUs.
\section{Partitioned Gradient Matching Algorithm}\label{dist}
\noindent Let $\Ucal = \{(x_i, y_i)\}_{i=1}^{N}$ denote the set of training examples, and $\Vcal = \{(x_j, y_j)\}_{j=1}^{M}$ the validation set. Let $\theta$ denote the ASR system's parameters, with $\theta^t$ the parameters at the $t^{th}$ epoch. The training loss associated with the $i^{th}$ instance is denoted by $L_T^i(\theta) = L_T(x_i, y_i, \theta) = - \ln \Pr(y_i|x_i)$. We denote the validation loss by $L_V = -\sum_{i \in
\Vcal} \ln \Pr(y_i|x_i)$. Let the training data be divided into $D$ partitions, {\em i.e.}, $\Ucal = d^1 \cup d^2 \cup \cdots \cup d^D$, where each partition comprises $\frac{N}{D}$ instances. Let $B$ be the batch size, $b_n = N/B$ the total number of mini-batches and $b_k = k/B$ the number of batches to be selected.
Let $L^{d^p}_T$ be the training loss associated with a data partition $d^p$ and $\nabla_{\theta}L^{d^p}_T= \{\nabla_{\theta} L_T^{d^pB_1}(\theta^t), \cdots, \nabla_{\theta} L_T^{d^pB_{l}}(\theta^t)\}$ denote the set of mini-batch gradients associated with the data partition $d^p$, where $l = \frac{b_n}{D}$. Let $L^{b_n}_T$ denote the set of mini-batch gradients. For each data partition $d^p$, we wish to perform gradient matching (GM), by optimising the following problem,
$$\underset{{\Xcal^t_{d^p} \subseteq d^p, |\Xcal^t_{d^p}| \leq \frac{b_k}{D}}}{\operatorname{argmin\hspace{0.7mm}}}
\min_{\mathbf{w}^t_{d^p}} \mbox{E}_\lambda(\mathbf{w}^t_{d^p}, \Xcal^t_{d^p}, L^{d^p}_T,\nabla_{\theta} L^{d^p}_T, \theta^t)$$
where,
\begin{equation}
\begin{aligned}
\mbox{E}_\lambda(\mathbf{w}^t_{d^p}&, \Xcal^t_{d^p}, L^{d^p}_T,\nabla_{\theta} L^{d^p}_T, \theta^t) = \lambda \lVert \mathbf{w}^t_{d^p} \rVert^2 + \\& \lVert \sum_{i \in \Xcal^t_{d^p}} \mathbf{w}^t_{id^p} \nabla_{\theta} L^{d^pB_i}_T - \nabla_{\theta} L^{d^p}_T(\theta^t) \rVert
\end{aligned}
\label{main_train}
\end{equation}
This selects a subset of batches $\Xcal^t_{d^p}$ and associated weights $\mathbf{w}^t_{d^p}$ such that the weighted sum of the selected mini-batch loss gradients is the best approximation of the loss gradient of the entire data partition $d^p$ while honoring the budget constraints. We perform gradient matching on mini-batch loss gradients only, as this helps reduce the memory needs. Similarly, we can define the gradient matching problem with the loss associated with the validation set as
$$\underset{{\Xcal^t_{d^p} \subseteq d^p, |\Xcal^t_{d^p}| \leq \frac{b_k}{D}}}{\operatorname{argmin\hspace{0.7mm}}}
\min_{\mathbf{w}^t_{d^p}} \mbox{E}_\lambda(\mathbf{w}^t_{d^p}, \Xcal^t_{d^p}, L_V,\nabla_{\theta} L^{d^p}_T, \theta^t)$$
where,
\begin{equation}
\begin{aligned}
\mbox{E}_\lambda(\mathbf{w}^t_{d^p},& \Xcal^t_{d^p}, L_V,\nabla_{\theta} L^{d^p}_T, \theta^t) = \lambda \lVert \mathbf{w}^t_{d^p} \rVert^2 + \\& \lVert \sum_{i \in \Xcal^t_{d^p}} \mathbf{w}^t_{id^p} \nabla_{\theta} L^{d^pB_i}_T - \nabla_{\theta} L_V(\theta^t) \rVert
\end{aligned}
\label{main_val}
\end{equation}
The optimization problem given in Eq.\eqref{main_train} is weakly submodular \cite{pmlr-v139-killamsetty21a,natarajan1995sparse}. Hence, we can effectively solve it using a greedy algorithm with approximation guarantees; specifically, we use the orthogonal matching pursuit (OMP) algorithm~\cite{elenberg2018restricted} to find the subset and its associated weights. We also add to Eq.\eqref{main_train} an $l_2$ regularization component to discourage large weight assignments to any of the instances selected in the subset, thereby preventing the model from overfitting on some samples.
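For concreteness, the following is a simplified NumPy sketch of this gradient-matching step: a plain OMP-style greedy loop over mini-batch gradients with a ridge-regularized least-squares solve for the weights. It is an illustration under simplifying assumptions (dense gradients, no non-negativity constraint on the weights) rather than the exact implementation used in the experiments.
\begin{verbatim}
# Simplified gradient-matching (GM) sketch in NumPy, for illustration only.
# G: (num_batches, p) matrix of mini-batch loss gradients of one partition.
# target: (p,) gradient to match (partition-level or validation gradient).
import numpy as np

def gradient_matching(G, target, budget, lam=1.0, tol=1e-3):
    selected, w = [], np.zeros(0)
    residual = target.copy()
    while len(selected) < budget and np.linalg.norm(residual) > tol:
        scores = G @ residual                 # alignment with current residual
        scores[selected] = -np.inf            # never pick the same batch twice
        selected.append(int(np.argmax(scores)))
        A = G[selected]                       # (k, p) selected batch gradients
        # ridge-regularized least squares for the weights w
        w = np.linalg.solve(A @ A.T + lam * np.eye(len(selected)), A @ target)
        residual = target - A.T @ w
    return selected, w

# Toy usage with random "gradients" (shapes only):
G = np.random.randn(32, 128)                  # 32 mini-batches, 128-dim grads
idx, w = gradient_matching(G, G.mean(axis=0), budget=8)
\end{verbatim}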
\begin{algorithm}[!t]
\caption{\textsc{PGM}: \textbf{P}artitioned \textbf{G}radient \textbf{M}atching }
\begin{algorithmic}
\REQUIRE Train set: $\Ucal = d^1 \cup d^2 \cup \cdots \cup d^D$ consisting of $D$ partitions; validation set: ${\mathcal V}$; initial subset: $\Xcal^{0}$; subset size: $b_k$; TOL: $\epsilon$; initial params: $\theta^{0}$; learning rate: $\alpha$; total epochs: $T$, selection interval: $R$, Validation Flag: \mbox{Val}, Batchsize: $B$
\FOR {epochs $t$ in $1, \cdots, T$}
\IF{$(t \mbox{ mod } R == 0)$}
\STATE $\Xcal^{t} = \phi, \mathbf{w}^{t} = []$
\FOR {data partition $p$ in $d^1, \cdots, d^D$}
\IF{$\mbox{Val}$}
\STATE $\Xcal_d^{t}, \mathbf{w}_d^t = \operatorname{GM}(L_V, \nabla_{\theta} L^{d^p}_T,\theta^{t}, \frac{b_k}{D}, \epsilon)$
\ELSE
\STATE $\Xcal_d^{t}, \mathbf{w}_d^t = \operatorname{GM}(L^{d^p}_T, \nabla_{\theta} L^{d^p}_T, \theta^{t}, \frac{b_k}{D}, \epsilon)$
\ENDIF
\STATE $\Xcal^{t} = \Xcal^{t} \cup \Xcal_d^{t}$
\STATE Extend $\mathbf{w}^{t}$ with $\mathbf{w}_d^t$
\ENDFOR
\ELSE
\STATE $\Xcal^{t} = \Xcal^{t-1}$
\ENDIF
\STATE $\theta_{t+1} = \mbox{BatchSGD}(\Xcal^{t}, \mathbf{w}^t, \alpha, B)$
\ENDFOR
\STATE Output final model parameters $\theta^{T}$
\end{algorithmic}
\label{alg:dgrad-match}
\end{algorithm}
\begin{algorithm}[t]
\caption{Gradient Matching (GM) }
\label{alg:algorithm1_sub}
\begin{algorithmic}
\REQUIRE Loss of the entire dataset (train or validation): $L$, set
of mini-batch gradients $\nabla_{\theta} L^{B}_T$, current parameters $\theta^t$, budget $k$, TOL: $\epsilon$;
\STATE $\Xcal = \phi, \Xcal_f =\phi, r = \nabla_{\theta} L$
\FOR {$|\Xcal| \leq k$ or $ \mbox{E}_\lambda(w, \Xcal, L,\nabla_{\theta} L^{B}_T, \theta^t) > \epsilon$}
\STATE Pick an element $j$ in $\nabla_{\theta} L^{B}_T$ which has maximum alignment with $r$
\STATE $\Xcal = \Xcal \cup j$
\STATE $\Xcal_f = \Xcal_f \cup $ \{set of instances in the batch $j$\}
\STATE Update $w = \min_{w} \mbox{E}_\lambda(w, \Xcal, L,\nabla_{\theta} L^{B}_T, \theta^t)$
\STATE Update $r = r - \mbox{E}_\lambda(w, \Xcal, L,\nabla_{\theta} L^{B}_T, \theta^t)$
\ENDFOR
\STATE Return $\Xcal_f, w$
\end{algorithmic}
\end{algorithm}
The complete algorithm is presented in Algorithm~\ref{alg:dgrad-match}. In the algorithm, `Val' is a boolean flag that indicates whether to match the subset loss gradient with the validation set loss gradient, as in noisy settings (`Val=True'), or with the training set loss gradient (`Val=False'). Depending on the choice of the loss gradient, we perform gradient matching with $L^{d^p}_T$, current model parameters $\theta_{t}$, budget $\frac{b_k}{D}$, and a stopping criterion $\epsilon$. We describe gradient matching in detail in Algorithm \ref{alg:algorithm1_sub}. Once the appropriate batches have been selected, we form $\Xcal_f$ by adding all the samples constituting the selected mini-batches. The model is then trained using mini-batch SGD. We randomly shuffle the elements in the subset $\Xcal^t$, divide them into mini-batches of size $B$, and run mini-batch SGD with instance weights.
The complete block diagram of \textsc{PGM}{} is presented in Figure~\ref{adaptivedss}. As the subset selection process is dependent on the model parameters, we repeat the subset selection every $R$ epochs. For each data partition $d^p$, we perform gradient matching (GM) individually and obtain partial subsets $\Xcal^t_{d^p}$, sequentially, one after another. However, in multi-GPU settings, since the gradient matching within a data partition can be performed independently from the gradient matching in other data partitions, the gradient matchings can be executed in parallel. This allows one to take advantage of multi-GPU settings, which is critical to efficiently process the large datasets typically used to train \textsc{RNN-T}. In Figure \ref{adaptivedss}, we illustrate the parallelization of \textsc{PGM}{} on a system with $G$ GPUs: every $G$ partial subsets are obtained in parallel, and this process is repeated $\frac{D}{G}$ times. A minimal sketch of the resulting outer selection loop is given below.
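The sketch reuses the \texttt{gradient\_matching} function from the previous sketch. The callables \texttt{grad\_fn} and \texttt{target\_fn} are hypothetical stand-ins supplied by the caller (not library APIs): \texttt{grad\_fn} returns the matrix of joint-network mini-batch gradients of a partition, and \texttt{target\_fn} returns the gradient to match (the partition-level training gradient when \texttt{Val=False}, the validation gradient when \texttt{Val=True}).
\begin{verbatim}
# Sketch of the PGM selection step: run gradient matching independently on
# each data partition and take the union of the partial subsets.
# grad_fn / target_fn are caller-supplied stand-ins, not library calls.
def pgm_select(partitions, grad_fn, target_fn, budget_per_partition):
    subset, weights = [], []
    for p, d in enumerate(partitions):        # each iteration is independent,
        G = grad_fn(d)                        # so it can run on its own GPU
        idx, w = gradient_matching(G, target_fn(d), budget_per_partition)
        subset.extend((p, i) for i in idx)    # (partition, batch) pairs kept
        weights.extend(w.tolist())
    return subset, weights
\end{verbatim}
The selected batches are then unrolled into weighted instances and used for mini-batch SGD until the next selection round, as in Algorithm~\ref{alg:dgrad-match}.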
\subsection{Connection to existing work} \label{conn_ew}
In this section we discuss the connection of \textsc{PGM}{} with \textsc{Grad-MatchPB} \cite{pmlr-v139-killamsetty21a}, where the subset is selected by solving the following problem:
$$\underset{{\Xcal^t \subseteq \Ucal, |\Xcal^t| \leq b_k}}{\operatorname{argmin\hspace{0.7mm}}}
\min_{\mathbf{w}^t} \mbox{E}_\lambda(\mathbf{w}^t, \Xcal^t, L, L^{b_n}_T, \theta^t)$$
where
\begin{equation*}
\begin{aligned}
\mbox{E}_\lambda(\mathbf{w}^t, \Xcal^t,& L, L^{b_n}_T, \theta^t) = \lambda \lVert \mathbf{w}^t \rVert^2 + \\& \lVert \sum_{i \in \Xcal^t}\mathbf{w}^t_{i} \nabla_{\theta} L_T^{B_i}(\theta^t) - \nabla_{\theta} L(\theta^t) \rVert
\end{aligned}
\end{equation*}
\begin{figure*}\label{fig_1:wer}
\label{fig_2}
\end{figure*}
$L^{b_n}_T$ denotes the set of all mini-batch gradients, defined as $L^{b_n}_T = \nabla_{\theta} L^{d^1}_T \cup \nabla_{\theta} L^{d^2}_T\cup \cdots \cup \nabla_{\theta} L^{d^D}_T$, and $L$ is either the training loss of the entire dataset $L_T$, defined as $L_T = \mathbb{E}(L^{d^p}_T)$, or $L_V$, depending on what sort of matching we seek. The problem tries to find a subset and its associated weights such that the gradients of the selected mini-batches best approximate either the gradient of the full dataset or that of the validation set. We show that \textsc{Grad-MatchPB} is a lower bound to \textsc{PGM}, that is,
\begin{equation*}
\begin{aligned}
\mathbb{E}(\mbox{E}_\lambda(\mathbf{w}^t_{d^p}, \Xcal^t_{d^p}, L^{d^p}_T,& \nabla_{\theta} L^{d^p}_T, \theta^t)) \\& \geq \mbox{E}_\lambda(\mathbf{w}^t, \Xcal^t, L_T, L^{b_n}_T, \theta^t)
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\mathbb{E}(\mbox{E}_\lambda(\mathbf{w}^t_{d^p}, \Xcal^t_{d^p}, L_V,& \nabla_{\theta} L^{d^p}_T, \theta^t)) \\& \geq \mbox{E}_\lambda(\mathbf{w}^t, \Xcal^t, L_V, L^{b_n}_T, \theta^t)
\end{aligned}
\end{equation*}
\noindent For the proof, we refer the reader to Appendix \ref{sec:proof}.
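For concreteness, the regularised gradient-matching error $\mbox{E}_\lambda$ appearing in the objectives above can be evaluated as in the following NumPy sketch; the variable names are hypothetical and only the formula itself is reproduced.
\begin{verbatim}
import numpy as np

def e_lambda(w, subset_grads, target_grad, lam):
    """E_lambda = lam * ||w||^2 + || sum_i w_i * grad_i - target_grad ||.
    subset_grads: (|X|, num_params) mini-batch loss gradients for the subset X;
    target_grad:  gradient of the full training (or validation) loss at theta^t."""
    weighted_sum = subset_grads.T @ w
    return lam * float(np.dot(w, w)) + float(np.linalg.norm(weighted_sum - target_grad))
\end{verbatim}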
\section{Experiments}
\textbf{Datasets} We perform all our experiments on the Librispeech dataset~\cite{panayotov2015librispeech}. We present results on the medium-scale Librispeech 100H as well as on the large-scale Librispeech 960H datasets.
Along with the standard Librispeech benchmark, we also perform experiments on noisy Librispeech, where the speech is augmented with noise across varying signal-to-noise ratios (up to 15 dB) on a fraction of the training data. We refer to this dataset as \textbf{Librispeech-noise}; up to 30\% of the examples in the original dataset are augmented with noise.
\textbf{Architecture.} We perform all our experiments with SpeechBrain's~\cite{ravanelli2021speechbrain} Librispeech transducer recipe. The transcription network of the RNN-T consists of a CRDNN encoder, which has 2 CNN blocks followed by 4 layers of bi-LSTMs and subsequently 2 DNN layers. The prediction network consists of an embedding layer followed by a single-layer GRU. The joint network is a single linear layer that projects 1024-dimensional representations onto an output vocabulary of 1000 BPE units. Decoding is done through a time-synchronous decoding algorithm~\cite{graves2012sequence, hannun2019sequence} with a beam size of 4 and involves an external transformer language model trained on the Librispeech corpus~\cite{kannan2018analysis,hrinchuk2020correction, wolf2019huggingface}.
\begin{figure*}\label{tab:wer_960}
\label{tab:wer_noise}
\end{figure*}
\textbf{Training Details.} For training, we employ a learning rate of 2.0 with an annealing factor of 0.8 based on a relative improvement of 0.0025 on the validation loss (sometimes referred to as the newbob scheduler). Training on Librispeech 100H is performed on two A100 40GB GPUs with an effective batch size of 8, whereas for Librispeech 960H we employ two A100 80GB GPUs with an effective batch size of 24. All training is done for 30 epochs. In all our experiments, the \textsc{PGM}{} algorithm is invoked after every $5^{th}$ epoch ($R = 5$) after a warm-start (training on the full data) of 7 and 2 epochs on the Librispeech 100H and Librispeech 960H datasets, respectively. The results for each setting are averaged over 3 runs with different random seeds.
\textbf{\textsc{PGM}{} Details.} For subset selection with \textsc{PGM}{}, we use the gradients of the joint network parameters, which we believe carry the most information about the sequence. We freeze the rest of the network while computing the gradient of the joint network of the RNN-T. We use $D=7$ and $D=50$ data partitions to obtain subsets with the \textsc{PGM}{} algorithm over gradients of the training data for the Librispeech 100H and 960H datasets, respectively. Subset selection is performed using \textbf{{\em training set loss gradients}} in the experiments on Librispeech 100H (Figures~\ref{fig_1:wer},\ref{fig_2}) and Librispeech 960H (Table \ref{tab:wer_960}). For experiments with \textbf{Librispeech-noise} (Table~\ref{tab:wer_noise}) we employ the \textbf{{\em validation gradients}} for subset selection, {\em since we are also concerned with robustness in the presence of noise}.
\textbf{Baselines.} We compare the results obtained using the \textsc{PGM}{} method against three intuitive baselines: (i) \textbf{{\em Random-Subset}{}}, in which the subset of the dataset is obtained by choosing points with uniform probability; (ii) \textbf{{\em LargeOnly}{}}, in which, for each subset size, we select only the longest utterances by duration; and (iii) \textbf{{\em LargeSmall}{}}, in which, for each subset size, half of the subset is filled with the shortest utterances and the other half with the longest utterances by duration, to remove the length bias of the \textbf{{\em LargeOnly}{}} baseline.
\subsection{Results}
To assess the efficacy of \textsc{PGM}, we compare the word error rate (WER), the relative test error, and the speed-up relative to training with the entire dataset. We compute these metrics for both the Librispeech 100H and Librispeech 960H benchmarks.
Additionally, we present the energy-ratio {\em vs.} relative test error tradeoff on Librispeech 100H.
In Figure \ref{fig_1:wer}, we compare the WER of \textsc{PGM}{} against the baselines for various subset sizes of the full dataset. With a subset size of just 20\%, \textsc{PGM}{} yields a WER of 10.66, compared with 10.08 obtained by training on the full dataset. For Librispeech 100H, \textsc{PGM}{} consistently outperforms all the baselines, illustrating the benefit of selecting subsets with the gradient matching algorithm. Note also that the {\em Random-Subset}{} baseline is consistently better than the heuristic-based baselines LargeOnly and LargeSmall. In Figure~\ref{fig_2}, we plot the speed-up against the relative test error for Librispeech 100H. While the {\em Random-Subset}{} baseline attains a higher speed-up than \textsc{PGM}{} owing to its simple selection strategy, it also incurs a higher relative test error than \textsc{PGM}.
In Figure \ref{fig4:energy}, we plot the relative test error against the energy ratio relative to the full training setting. We use pyJoules\footnote{https://pypi.org/project/pyJoules/} to measure the energy consumed by the GPU cores. With \textsc{PGM}, the training time is halved and the energy efficiency is doubled while incurring a relative test error of less than 5\% compared to training on the entire dataset. For higher speed-ups, where the WER degrades, the degradation is smaller for \textsc{PGM}{} than for the baseline. We do not show the energy efficiency of the LargeOnly and LargeSmall baselines, as their relative test error is consistently worse than that of the Random-Subset baseline, as shown in Figure~\ref{fig_2}.
\begin{figure}
\caption{Energy Ratio($\uparrow$) {\em vs.} Relative Test Error on Librispeech 100H.}
\label{fig4:energy}
\end{figure}
\begin{figure*}\label{tab:ablation-1}
\label{tab:ablation-2}
\end{figure*}
\begin{table}[!t]
\begin{tabular}{l|c|c|c}
\hline
Subset Size & nGPU = 1 & nGPU=2 & nGPU=2 \\
& LR = 1.0 & LR = 1.0 & LR = 2.0 \\
\hline \hline
0.1 & 11.26 & 13.99 & 11.32 \\
0.2 & 10.6 & 12.58 & 10.66 \\
0.3 & 10.4 & 11.58 & 10.46 \\\hline \hline
\end{tabular}
\captionof{table}{\label{tab:ablation-3} Effect of Learning Rate on WER for \textsc{PGM}{} on \textsc{test-clean} test set of Librispeech 100H.}
\end{table}
For the ASR task we recommend using at least 30\% of the dataset for training the model or using more warm-start epochs as described in Section~\ref{subsec:ablation}.
In Table \ref{tab:wer_960}, we present a comparison of \textsc{PGM}{} with the baseline for the Librispeech 960H dataset on both the \textsc{test-clean} and \textsc{test-other} test sets. As shown in the table, with just 30\% of the training data, \textsc{PGM}{} is within 10\% relative test error (1\% absolute error difference) of training on the full data, yielding a speed-up of 2.64. Similar results hold on the challenging \textsc{test-other} test set of Librispeech, which shows the better generalization of \textsc{PGM}{} in comparison to the {\em Random-Subset}{} baseline.
\textbf{Results on Librispeech-noise}: We augment randomly selected signals from the dataset with noise across varying signal-to-noise ratios to mimic a more practical setting, where subset selection algorithms need to cope with noise while selecting useful subsets. We show the results on the Librispeech-noise 100H and 960H datasets for different subset sizes in Table \ref{tab:wer_noise}. \textsc{PGM}{} consistently outperforms the {\em Random-Subset}{} baseline
for different subset sizes, incurring a lower relative test error with respect to full training while still yielding a significant speed-up, thus reducing training time and maintaining robustness.
\subsection{Ablation Study}
\label{subsec:ablation}
Next, we perform an ablation study to understand the effect of the learning rate on \textsc{PGM}{} for the Librispeech 100H dataset. Since the goal of subset selection is to reduce the amount of training data, the recipes tuned on the full training data (especially the learning rate) do not work as-is for \textsc{PGM}{} because of the distributed nature of the training.
In Table \ref{tab:ablation-3}, we show the effect of the learning rate on multi-GPU training with \textsc{PGM}. The single-GPU recipe, borrowed as-is for the multi-GPU training setting, performed poorly because the number of gradient updates is halved in the distributed setting. To overcome this, we doubled the learning rate to take larger steps and reach convergence within the same number of epochs.
We perform some ablation studies to understand why subsets selected by \textsc{PGM}{} tend to outperform a relatively simple {\em Random-Subset}{} baseline. We compute the following two metrics:
\textbf{Overlap Index (OI)}: the number of points selected in both of the last two subset selection rounds, divided by the subset size. This metric captures the diversity of the points selected by a method across subsequent selection rounds.
\textbf{Noise Overlap Index (NOI)}: the number of noisy points selected by the subset selection method, divided by the total number of noisy points. Both metrics are computed by averaging over all runs with the same subset selection method.
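Both metrics can be computed directly from the selected index sets; the following short Python sketch (with hypothetical variable names and toy data) illustrates the computation.
\begin{verbatim}
def overlap_index(prev_subset, curr_subset):
    """OI: points shared by the last two selection rounds, relative to the subset size."""
    return len(prev_subset & curr_subset) / len(curr_subset)

def noise_overlap_index(curr_subset, noisy_ids):
    """NOI: fraction of the noisy points that end up in the selected subset."""
    return len(curr_subset & noisy_ids) / len(noisy_ids)

# Example with toy index sets:
# overlap_index({1, 2, 3, 4}, {3, 4, 5, 6})        -> 0.5
# noise_overlap_index({3, 4, 5, 6}, {5, 6, 7, 8})  -> 0.5
\end{verbatim}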
As shown in Table \ref{tab:ablation-1}, \textsc{PGM}{} selects more diverse points across different subset selection rounds, which explains its better generalization on the \textsc{test-other} test set. At the same time, both methods select a similar number of noisy points during subset selection, indicating that \textsc{PGM}{} selects more diverse points among the non-noisy points.
Finally, we study the effect of warm-start on the performance of the \textsc{PGM}{} algorithm. Since it is an adaptive data selection algorithm, \textsc{PGM}{} needs a good starting point for computing reasonable estimates of the gradients for subset selection. Table \ref{tab:ablation-2} shows the effect of warm-start epoch ablation on the \textsc{test-clean} test set for Librispeech 960H. As we increase the warm start, the performance of the \textsc{PGM}{} algorithm improves at the cost of speed up.
\subsection{Comparing \textsc{PGM}{} and \textsc{Grad-MatchPB}}
\begin{table*}[t]
\centering
\begin{tabular}{c| c| c| c| c| c} \hline
Subset-Size & {\em Random-Subset}{} & {\em LargeSmall}{} & {\em LargeOnly} & \textsc{Grad-MatchPB} & \textsc{PGM}{}\\ \hline \hline
0.1 & 16.64 & 17.98 & 17.27 & 16.14 & 16.23 \\
0.2 & 16.43 & 17.23 & 16.35 & 15.89 & 16.03 \\
0.3 & 16.28 & 16.35 & 16.22 & 15.79 & 15.95\\ \hline \hline
\end{tabular}
\caption{Comparison of WER obtained with {\em Random-Subset}{}, {\em LargeSmall}{}, {\em LargeOnly}{}, \textsc{Grad-MatchPB} and \textsc{PGM}{} on the TIMIT phone recognition dataset.}
\label{tab:pb_vs_pgm}
\end{table*}
Running \textsc{Grad-MatchPB} on Librispeech is prohibitively expensive, since the memory required to store all the gradients exceeds the memory of available commercial GPUs, as described in Section \ref{need}. To enable a comparison, we instead report the Phone Error Rate (PER) on the TIMIT phone recognition dataset \cite{garofolo1993timit} (3680 utterances from 630 speakers) for all the methods.
Table \ref{tab:pb_vs_pgm} shows the error rates obtained with \textsc{PGM}{}, \textsc{Grad-MatchPB}, {\em Random-Subset}{} and the other subset selection baselines {\em LargeSmall}{} and {\em LargeOnly}{}. For \textsc{PGM}{} we use $D=2$ data partitions. The error rate of \textsc{PGM}{} is slightly higher than that of \textsc{Grad-MatchPB}, as the error term that \textsc{PGM}{} minimises is an upper bound of the error term minimised by \textsc{Grad-MatchPB}, as discussed in Section \ref{conn_ew}. However, \textsc{PGM}{} remains very close to \textsc{Grad-MatchPB}, indicating that the partitioning does not significantly degrade the bound while allowing \textsc{PGM}{} to scale to larger datasets and utilize multiple GPUs, and hence to enjoy better speed-ups than \textsc{Grad-MatchPB}.
\textbf{Statistical Significance: } WER reductions using \textsc{PGM}{} compared to the {\em Random-Subset}{} baseline are statistically significant at $p < 0.001$ using a matched pairs test.\footnote{https://github.com/talhanai/wer-sigtest}
\section{Conclusion}
We propose \textsc{PGM}, a distributable data subset selection algorithm that avoids the need to load the entire dataset at once by constructing partial subsets from smaller data partitions. \textsc{PGM}{} is an adaptive subset selection algorithm that improves the training time of ASR models while maintaining a low relative test error compared to the ASR model trained on the entire dataset. This speed-up improves the efficiency of the training process and consequently reduces the carbon footprint of training such models.
Our approach performs consistently better than the {\em Random-Subset}{} baseline while providing good speed-up and robustness in the presence of noise. Although we test the method on the RNN-T model, we believe that similar results could be obtained for other ASR models and leave this as future work.
\section*{Limitations}
In this paper we investigate, for the first time, the usefulness of subset selection algorithms for the ASR task on a popular RNN-Transducer architecture, which typically consumes vast volumes of training data ($\sim$40000 hours of labelled audio or more). At such industrial scale, the overhead of \textsc{PGM}{} for gradient matching over the entire training set would limit the utility of the algorithm. These practical considerations warrant more careful design of subset selection algorithms so that they scale well with such huge workloads. We also limit our results to the RNN-T architecture, though we believe they also hold for other, less popular architectures by taking gradients of the last few layers. While we show efficient training for the ASR task, we believe a similar study should be carried out for self-supervised pre-training approaches.
\section*{Ethics Statement}
In this work we present a gradient matching based data subset selection algorithm for compute efficient and robust ASR model training. Since we do not modify any existing speech architecture or propose new benchmarks, but provide a mechanism for faster training of such models, we see no new ethical concerns arising from our work.
\appendix
\section{Connections between \textsc{PGM}{} and \textsc{Grad-MatchPB}}\label{sec:proof}
\begin{lemma} (triangle inequality). Let $\{v_1, \ldots, v_{\tau}\}$ be $\tau$ vectors in $\mathbb{R}^d$. Then the following holds:
\begin{align}
\sum_{i=1}^{\tau}\|v_i\| \geq \Big\|\sum_{i=1}^{\tau}v_i\Big\|
\label{relax_tri}
\end{align}
\end{lemma}
\begin{corollary}
The following inequality holds between the objectives of \textsc{PGM}{} and \textsc{Grad-MatchPB}:
\begin{equation*}
\begin{aligned}
\mathbb{E}(\mbox{E}_\lambda(\mathbf{w}^t_{d^p}, \Xcal^t_{d^p}, L^{d^p}_T,& \nabla_{\theta} L^{d^p}_T, \theta^t)) \\& \geq \mbox{E}_\lambda(\mathbf{w}^t, \Xcal^t, L_T, L^{b_n}_T, \theta^t)
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\mathbb{E}(\mbox{E}_\lambda(\mathbf{w}^t_{d^p}, \Xcal^t_{d^p}, L_V,& \nabla_{\theta} L^{d^p}_T, \theta^t)) \\& \geq \mbox{E}_\lambda(\mathbf{w}^t, \Xcal^t, L_V, L^{b_n}_T, \theta^t)
\end{aligned}
\end{equation*}
\label{cor:mean}
\end{corollary}
\textbf{Proof.}
Using the triangle inequality,
\begin{equation*}
\begin{aligned}
\sum_{p=1}^{D} & (\lVert \sum_{i \in \Xcal^t_{d^p}} \mathbf{w}^t_{id^p} \nabla_{\theta} L^{d^pB_i}_T - \nabla_{\theta} L^{d^p}_T(\theta^t) \rVert \\& + \lambda \lVert \mathbf{w}^t_{d^p} \rVert^2 ) \\ &\geq \lVert \sum_{p=1}^{D} (\sum_{i \in \Xcal^t_{d^p}} \mathbf{w}^t_{id^p} \nabla_{\theta} L^{d^pB_i}_T - \nabla_{\theta} L^{d^p}_T(\theta^t)) \rVert \\ &+ \lambda \lVert \sum_{p=1}^{D} \mathbf{w}^t_{d^p} \rVert^2
\end{aligned}
\end{equation*}
We divide both sides by $D$,
\begin{equation*}
\begin{aligned}
\frac{1}{D}\sum_{p=1}^{D} & (\lVert \sum_{i \in \Xcal^t_{d^p}} \mathbf{w}^t_{id^p} \nabla_{\theta} L^{d^pB_i}_T - \nabla_{\theta} L^{d^p}_T(\theta^t) \rVert \\ &+ \lambda \lVert \mathbf{w}^t_{d^p} \rVert^2 ) \\ &\geq \lVert \sum_{p=1}^{D} (\sum_{i \in \Xcal^t_{d^p}} \frac{\mathbf{w}^t_{id^p}}{D} \nabla_{\theta} L^{d^pB_i}_T) \\& - \frac{\sum_{p=1}^{D} (\nabla_{\theta} L^{d^p}_T(\theta^t))}{D} \rVert + \lambda \lVert \frac{\sum_{p=1}^{D} \mathbf{w}^t_{d^p}}{D} \rVert^2
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
\mathbb{E}(\mbox{E}_\lambda(\mathbf{w}^t_{d^p} &, \Xcal^t_{d^p}, L^{d^p}_T, \nabla_{\theta} L^{d^p}_T, \theta^t)) \\ &\geq \lVert \sum_{p=1}^{D} (\sum_{i \in \Xcal^t_{d^p}} \frac{\mathbf{w}^t_{id^p}}{D} \nabla_{\theta} L^{d^pB_i}_T) \\& - \frac{\sum_{p=1}^{D} (\nabla_{\theta} L^{d^p}_T(\theta^t))}{D} \rVert + \lambda \lVert \frac{\sum_{p=1}^{D} \mathbf{w}^t_{d^p}}{D} \rVert^2
\end{aligned}
\end{equation*}
Since $L_T = \mathbb{E}(L^{d^p}_T)$ and $\mathbb{E}(\sum_{p=1}^{D} (\sum_{i \in \Xcal^t_{d^p}} \frac{\mathbf{w}^t_{id^p}}{D} \nabla_{\theta} L^{d^pB_i}_T)) = \sum_{i \in \Xcal^t}\mathbf{w}^t_{i} \nabla_{\theta} L_T^{B_i}(\theta^t)$, as both are obtained via gradient matching, we have
\begin{equation*}
\begin{aligned}
\mathbb{E}(\mbox{E}_\lambda(\mathbf{w}^t_{d^p} &, \Xcal^t_{d^p}, L^{d^p}_T, \nabla_{\theta} L^{d^p}_T, \theta^t)) \\ &\geq \lVert \sum_{i \in \Xcal^t}\mathbf{w}^t_{i} \nabla_{\theta} L_T^{B_i}(\theta^t) - \nabla_{\theta} L_T(\theta^t) \rVert \\& + \lambda \lVert \mathbf{w}^t \rVert^2
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
\mathbb{E}(\mbox{E}_\lambda(\mathbf{w}^t_{d^p}, \Xcal^t_{d^p}, L^{d^p}_T,& \nabla_{\theta} L^{d^p}_T, \theta^t)) \\& \geq \mbox{E}_\lambda(\mathbf{w}^t, \Xcal^t, L_T, L^{b_n}_T, \theta^t)
\end{aligned}
\end{equation*}
\end{document}
|
\begin{document}
\title{The Matrix Chain Algorithm to Compile Linear Algebra Expressions}
\section{Introduction}
The need to translate linear algebra operations into efficient code arises in
a multitude of applications. For instance, expressions such as
$$b = S^H H^H \left(\sigma HH^H + Q \right)^{-1}r$$
and
$$x = \left( \Sigma^T \Sigma + D^2 \right)^{-1} \Sigma^T b$$
occur in information theory \cite{albataineh2014}, and regularization
\cite{noschese2016}, respectively. Given such expressions,
we are interested in the automatic generation of code that is at least as fast
and as numerically stable as what an expert would produce.
Conceptually, the problem is similar to how compilers cast scalar
expressions in terms of the available instruction set.
The corresponding problem for linear algebra expressions (involving matrices)
is much more challenging, and requires expertise in both numerical
linear algebra and high-performance computing.
On the one hand, one wants to take advantage of highly optimized
building blocks for matrix operations, such as those provided by the
BLAS~\cite{dongarra1990} and LAPACK~\cite{anderson1999} libraries.
On the other hand, transformations based on
associativity, commutativity and distributivity play an essential role.
Further complication comes from the fact that
matrices frequently have structures and properties that can be exploited both
to transform---and thus simplify---expressions, and to evaluate them more
efficiently. The application of this kind of knowledge affects not only the computational cost, but also the necessary amount of storage space, and numerical accuracy.
At the moment, there are two options for dealing with complex matrix
expressions. One either has to map the expressions to kernels manually, or
use high-level programming languages and environments such as
Matlab and R. The first option involves a lengthy, error-prone process that
usually requires a numerical linear algebra expert. The second option, using
high-level programming languages, is a very convenient alternative in terms of
productivity, but rarely leads to the same performance levels as code produced
by an expert. As a simple example, consider an expression containing the
inverse operator: in Matlab, this is directly mapped to an explicit matrix
inversion, even though a solution that relies on linear systems is usually
both faster and numerically
more stable; in
this case, it is up to the user to rewrite the inverse in
terms of the slash ({\tt/}) or backslash ({\tt\textbackslash}) operators, which
solve linear systems.
Products are another example: Let $M_1, M_2 \in \mathbb{R}^{n \times n}$, $x
\in \mathbb{R}^{n}$. Depending on whether $M_1 M_2 x$ is computed from the
left, that is, parenthesized as $(M_1 M_2) x$, or from the right ($M_1 (M_2
x)$), the calculation requires either $\mathcal{O}(n^3)$ or $\mathcal{O}(n^2)$ scalar operations. In Matlab, products are always evaluated from left to right~\cite{matlabdoc:short}.
In other high-level languages such as
Mathematica~\cite{mathematicadoc:short} and Julia~\cite{bezanson2012}, the situation is analogous.
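As a simple illustration of the impact of the evaluation order (not part of the compiler itself), the following Python snippet times both parenthesizations of $M_1 M_2 x$ with NumPy; the matrix size is an arbitrary example.
\begin{verbatim}
import time
import numpy as np

n = 2000
M1, M2 = np.random.rand(n, n), np.random.rand(n, n)
x = np.random.rand(n)

t0 = time.perf_counter()
y_left = (M1 @ M2) @ x     # roughly 2*n^3 + 2*n^2 scalar operations
t1 = time.perf_counter()
y_right = M1 @ (M2 @ x)    # roughly 4*n^2 scalar operations
t2 = time.perf_counter()

print(f"left-to-right:  {t1 - t0:.3f} s")
print(f"right-to-left:  {t2 - t1:.3f} s")
print("max abs difference:", np.max(np.abs(y_left - y_right)))
\end{verbatim}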
\begin{figure}
\caption{Grammar describing the expressions we are concerned with.
}
\label{eq:rule1}
\label{eq:rule2}
\label{eq:rule3}
\label{eq:rule4}
\label{grammar}
\end{figure}
Our end goal is a compiler that takes a mathematical description of a linear
algebra problem and finds an efficient mapping onto high-performance routines offered by libraries.
In this document, we are
concerned with the mapping of expressions consisting of products, as described by the grammar in Figure \ref{grammar} (e.g., $X :=
A B^T C$ and $x := A^{-1} B y$, where $A, B, C, X$ are matrices, and $x$ and
$y$ are vectors), onto a set $K$ of computational kernels (e.g.: \texttt{C:=A*B},
\texttt{C:= A$^\text{\texttt -1}$*B}, \texttt{B:= A$^\text{\texttt -1}$},
\dots). For a given performance metric, we are interested in the optimal
mapping. This problem can be seen as a generalization of the matrix chain
problem: Given a \emph{matrix chain}, a product $M_1 \cdots M_k$ of matrices
with different sizes, the question is how to parenthesize it so that the result can be computed with the minimal number of scalar operations.
Our approach uses an extended version of the $\mathcal{O}(n^3)$ dynamic
programming matrix chain algorithm presented in \cite{cormen1990}. We
refer to the problem as the ``Generalized Matrix Chain Problem''
(GMCP) and call the presented algorithm ``GMC algorithm''.
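For reference, the classical dynamic programming algorithm that the GMC algorithm extends can be sketched as follows; this is the standard textbook formulation minimizing scalar multiplications, not the generalized algorithm itself.
\begin{verbatim}
import sys

def matrix_chain_order(dims):
    """dims[i-1] x dims[i] is the size of matrix M_i, i = 1..n.
    Returns the minimal number of scalar multiplications and a split table."""
    n = len(dims) - 1
    cost  = [[0] * (n + 1) for _ in range(n + 1)]
    split = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):              # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = sys.maxsize
            for k in range(i, j):               # position of the outermost split
                q = cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if q < cost[i][j]:
                    cost[i][j], split[i][j] = q, k
    return cost[1][n], split

# Example: M1 (10x100), M2 (100x5), M3 (5x50) -> cost 7500 with ((M1 M2) M3).
# matrix_chain_order([10, 100, 5, 50])[0] == 7500
\end{verbatim}
The GMC algorithm keeps this cubic table structure but replaces the FLOP count by an arbitrary kernel-based cost and tracks operators, properties and indices alongside each table entry.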
\section{Generalizations}
We extend the original matrix chain algorithm in four ways:
\paragraph{Operations}
The GMC algorithm is able to deal with the transpose and inverse as additional
operators. The combination of those operators
with the multiplication leads to a rich set of different expressions, for
example $A B^T$, $A^{-1} B$, and $A^{-1} B^{-T}$. While mathematically all those expressions can be computed as a composition of explicit unary operations ($X:= A^{-1}$ and $X:= A^{T}$) and a plain multiplication ($X:=AB$), this is in many cases not advisable for performance and stability reasons. The selection of the best sequence of kernels is done by a search-based approach inspired by the linear algebra compiler
CLAK~\cite{fabregat-traver2013a}.
\paragraph{Properties}
Many linear algebra operations can be sped up by taking advantage of the
properties of the involved matrices. For example, the multiplication of two
lower triangular matrices requires $n^3/3$ scalar operations, as opposed to $2n^3$ operations for the multiplication of two full matrices \cite{higham2008}. Furthermore, properties propagate with the application of kernels. Take the product $A B^T$ as an example. If $A$ is lower triangular and $B$ is upper triangular, it is possible to infer that the entire product is lower triangular as well. The GMC algorithm symbolically infers the properties of intermediate operands and uses those properties to select the most suitable kernels.
\paragraph{Cost Function}
The original matrix chain algorithm minimizes the number of scalar operations
(FLOPs) necessary to compute the matrix chain. In the GMC algorithm, we allow the use of an arbitrary metric, which could be performance (FLOPS/sec), numerical accuracy, memory consumption, or a combination of multiple objectives.
\paragraph{Indices}
The grammar (Figure \ref{grammar}) allows matrices to be annotated with indices.
Consider the assignment $X_{ij}:= A_i B C d_j$ as an example. Instead of one single chain,
a two-dimensional grid of chains has to be computed. Clearly, some segments
are common to multiple chains; for performance reasons it is therefore
beneficial to reuse them. The GMC algorithm is able to find the optimal solution for indexed chains like this one.
\section{The Algorithm}
\begin{figure}
\caption{The GMC algorithm.}
\label{pseudocode}
\end{figure}
Figure \ref{pseudocode} shows the full algorithm. Its complexity is
$$\mathcal{O}(n^3(k^3 + \gamma + p))\text{,}$$
where $n$ is the length of the matrix chain, $k$ is the number of kernels,
$\gamma$ is the number of indices occurring in the chain and $p$ is the number of properties that
are considered. We stress that the $k^3$ term is an upper bound that will not be reached in practice.
\section{Conclusion and Future work}
We consider the GMC algorithm to be an important step towards the development
of a compiler for linear algebra problems that finds optimized mappings to
kernels by applying domain specific knowledge. In the future, we will address
problems like common subexpression elimination and memory allocation.
\addcontentsline{toc}{section}{References}
\end{document}
|
\begin{document}
\title[A note on the Tangent Cones of ... ]{A note on the Tangent Cones of the scheme of Secant Loci}
\begin{abstract}
This short note concerns two facts about the scheme of secant loci. The first is an attempt
to describe the tangent cones of these schemes globally, and the second is a comparison of the dimensions of the tangent spaces of various schemes of secant loci.
\end{abstract}
\maketitle
\section{Introduction and Notations} \label{sect1}
Let $C$ be a smooth projective algebraic curve of genus $g$, let $W^0_{g-1}(C)$ be its theta divisor and let $L\in W^0_{g-1}(C)$ be a multiple point of the theta divisor.
Based on a classical result of Bernhard Riemann, the tangent cone of $W^0_{g-1}(C)$ at $L=\mathcal{O}(D)$ is, set theoretically, the union of the $n$-planes $\Lambda=\langle E \rangle$, where $E$ is the canonical image of a divisor $\acute{E}\in \mid D \mid$ in the canonical space of $C$ and $n=\deg(D)-h^0(D)$. See \cite[Ch. 6]{ACGH}. G. Kempf generalized this result to the schemes $W^0_d$ for $1\leq d\leq g-1$ (see \cite{K}). Subsequently, Arbarello, Cornalba, Griffiths and Harris used the schemes of linear series, $G^r_d(C)$, to give a global description of the tangent cones of the Brill-Noether schemes $W^r_d$ at their multiple points when $r$ and $d$ range in $1\leq 2r\leq d\leq g-1$.
The scheme of secant loci of globally generated line bundles on $C$, being a generalization of the classical Brill-Noether varieties, has been studied by several authors, beginning with M. Coppens in the 1990's and more recently by M. Aprodu and E. Sernesi. Marc Coppens, M. E. Huibregtse and T. Johnsen, studying the local behavior of these schemes, have given descriptions of their tangent spaces and tangent cones at various points in terms of their local defining equations.
The first aim of this note is to describe the tangent cones of the schemes of secant loci globally. To this end,
the method of \cite{ACGH} for constructing the schemes of linear series $G^r_d$ carries over verbatim to construct analogous schemes
on the varieties of secant divisors. The resulting spaces enjoy a powerful universal property. Based on this property, these schemes, the so-called ``schemes of divisor series'', are used to obtain a global description of the tangent cones of the schemes of secant divisors.
W. Fulton et al. established inequalities relating the dimensions of various Brill-Noether varieties in \cite{F-H-L}. These relations have recently been extended to the varieties of secant loci by M. Aprodu and E. Sernesi in \cite{A-S2}.
Inspired by their results, we report in Theorem \ref{comparision theorem 1} similar inequalities relating
$ \dim V^{r}_{d}(\Gamma)$, $\dim V^{r}_{d}(\Gamma(-x))$, $\dim T_D(V^{r}_d(\Gamma))$, $\dim T_{D+x}(V^{r}_{d+1}(\Gamma))$ and $ \dim T_{D} V^{r}_{d}(\Gamma(-x)),$
where $x$ is a general point of $C$. This is the second aim of this paper. As a corollary to this result, the smoothness of $ V^{r}_{d}(\Gamma)$, when $ V^{r}_{d}(\Gamma)$ is of expected dimension, implies the same property for $ V^{r}_{d+1}(\Gamma)$ and $ V^{r}_{d}(\Gamma(-x))$.
Assume that $\Gamma$ is a line bundle on a smooth projective algebraic curve $C$ of genus $g$ with $h^0(\Gamma)=s+1$. For an integer $d\geq 2$, consider the diagram
$$\begin{array}{cccccc}
&C\times C_d &\overset{\pi_2}\longrightarrow & C_d\\
\pi_1\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!&\downarrow & & \\
&C & & \\
\end{array}$$
and define the secant bundle of degree $d$ by
$E_{\Gamma}:=(\pi_2)_*(\pi_1^*\Gamma\otimes \mathcal{O}_{\Delta})$, where $\Delta$ is the universal divisor of degree $d$. The morphism
$$(\pi_2)_*(\pi_1^*\Gamma) \overset{\phi_{\Gamma}}\longrightarrow
(\pi_2)_*(\pi_1^*\Gamma\otimes \mathcal{O}_{\Delta})$$
is a map of vector bundles of ranks $s+1$ and $d$, respectively.
For a positive integer $r$, $1\leq r\leq d-1$, the variety of secant loci of $\Gamma$ is the zero scheme of the map $\wedge^{d-r+1}\phi_{\Gamma}$, i.e.
\begin{align}
V^r_d(\Gamma)=Z(\wedge^{d-r+1}\phi_{\Gamma}).
\end{align}
The variety of secant loci of $\Gamma$ might be described set theoretically as
$$V^{r}_{d}(\Gamma):=\lbrace D\in C_{d} \mid h^0(\Gamma)-h^0(\Gamma(-D))\leq d-r \rbrace.$$
See \cite{A-S}, $\cdots$, \cite{J} for more details on the scheme structure of $V_{d}^{r}(\Gamma)$ and some of its geometric properties.
\section{The structure of $\mathcal{V}^{s+1-d+r}_d(\Gamma)$:} \label{ns}
For a closed subscheme $Z\subset X$ defined as the $k$-th degeneracy locus of a morphism of vector bundles
$\gamma: \mathcal{F}\rightarrow \mathcal{G},$
its canonical desingularization, as it is defined in \cite[Page 83-84]{ACGH}, parametrizes couples $(x, W)$ in which $x\in X$ and
$W\in \operatorname{Gr}(n-k, \ker \gamma_x)$,
where $\operatorname{rk} \mathcal{F}=n$ and $\operatorname{rk} \mathcal{G}=m$. Denote
such a desingularization by $\tilde{X}_k(\gamma)$ and
set $\mathcal{V}^{s+1-d+r}_d(\Gamma):=\tilde{X}_{d-r}(\phi_{\Gamma})$.
Geometrically, the scheme $\mathcal{V}^{s+1-d+r}_d(\Gamma)$ parametrizes couples $(D, \Lambda)$, with $D\in V^{r}_{d}(\Gamma)$ and $\langle D \rangle\subset\Lambda\subset \mathbb{P}(H^0(\Gamma))$ with $\dim \Lambda=d-r-1$. The elements of $\mathcal{V}^{s+1-d+r}_d(\Gamma)$ are called divisor series.
\subsubsection{Families of divisor series:}\label{Families of divisor series:}
A family of divisor series, $\delta^r_d(\Gamma)$, w.r.t. $\Gamma$ parametrized by $S$, is the datum of:\\
\label{condition1}(I) A family $\mathcal{D}$ of degree $d$ divisors on $C$, parametrized by $S$;\\
\label{condition2}(II) A rank $(s+1-d+r)$-vector bundle $\mathcal{T}$, which is a subvector bundle of
$ (\bar{\pi}_2)_*(\bar{\pi}_1^*\Gamma\otimes \mathcal{O}(\mathcal{D})^{\vee}),$
with the property that, for each $s\in S$, the homomorphism
$$\mathcal{T}\otimes k(s)\rightarrow
H^0( (\bar{\pi}_2)^{-1}(s), [\bar{\pi}_1^*\Gamma\otimes \mathcal{O}(\mathcal{D})^{\vee}]\otimes \mathcal{O}_{(\bar{\pi}_2)^{-1}(s)})$$
is injective, where $\bar{\pi}_1$ and $\bar{\pi}_2$ are the projections from $C\times S$
to $C$ and $S$, respectively.
Two families $(\mathcal{D}_1, \mathcal{T}_1)$ and $(\mathcal{D}_2, \mathcal{T}_2)$ of $\delta_d^r(\Gamma)$'s on $C$ parametrized by $S$ are said to be equivalent if
$\mathcal{D}_1 = \mathcal{D}_2$
and $\mathcal{T}_1$ can be identified with $\mathcal{T}_2$ under this equality.
\subsubsection{The universal family of divisor series:}\label{The universal family of divisor series}
Consider that $\mathcal{V}^{s+1-d+r}_d(\Gamma)$ is a subvariety of the Grassmann bundle
$G(s+1-d+r, (\pi_2)_*\pi_1^*\Gamma)$ over $C_d$. If
$$e:\mathcal{V}_d^{s+1-d+r}(\Gamma)\rightarrow C_d$$
is the restriction of the projection map from $G(s+1-d+r, (\pi_2)_*\pi_1^*\Gamma)$ to $\mathcal{V}^{s+1-d+r}_d(\Gamma)$, then the universal family of $\delta^r_d(\Gamma)$'s on $C$ parametrized by
$\mathcal{V}^{s+1-d+r}_d(\Gamma)$ is $(e^*(\Delta), \mathcal{G})$, where $\mathcal{G}$ is the restriction to $\mathcal{V}^{s+1-d+r}_d(\Gamma)$ of the universal sub-bundle on $G(s+1-d+r, (\pi_2)_*\pi_1^*\Gamma)$ and $\Delta$ is the universal divisor of degree $d$. We denote this family of divisors by $\mathcal{U}\mathcal{V}_d^{s+1-d+r}(\Gamma)$.
\begin{lem}\label{lem1}
Assume that
$\mathcal{D}$ is a family of degree $d$ divisors on $C$, parametrized by $S$;
and $f:S\rightarrow C_d$ is the unique morphism such that $(f\times id_C)^*(\Delta)=\mathcal{D}$. Then
$$\ker f^*(\phi_{\Gamma})\cong \ker \lbrace(\bar{\pi}_2)_*((f\times id_C)^*(\pi_1^*(\Gamma)))\longrightarrow (\bar{\pi}_2)_*((f\times id_C)^*(\pi_1^*(\Gamma)\otimes \mathcal{O}_{\Delta}))\rbrace$$
\end{lem}
\begin{proof}
\textbf{Claim:} $\pi_1^*(\Gamma)$ and $\pi_1^*(\Gamma)\otimes \mathcal{O}_{\Delta}$ are flat $\mathcal{O}_{C_d}$-modules. Indeed, observe first that $\pi_1^*(\Gamma)$ is flat as an $\mathcal{O}_{C_d\times C}$-module. The flatness of $\mathcal{O}_{C_d\times C}$ as an $\mathcal{O}_{C_d}$-module is a direct consequence of the commutative diagram
$$\begin{array}{cccccc}
&C_d\times C&\overset{\pi_2}\longrightarrow & C\\
\pi_1\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!&\downarrow & & \downarrow \\
&C_d & \longrightarrow& \operatorname{Spec}(k).\\
\end{array}$$
The flatness of $\pi_1^*(\Gamma)$ and $\pi_1^*(\Gamma)\otimes \mathcal{O}_{\Delta}$ as $\mathcal{O}_{C_d}$-modules, together with Theorem \cite[Thm. 2.6, page 175]{ACGH} applied to the morphism $C_d\times C\rightarrow C_d$ shows that
\begin{align}
f^*(\pi_1^*(\Gamma))\cong (\bar{\pi}_2)_*((f\times id_C)^*(\pi_1^*(\Gamma)))
\end{align}
\begin{align}
f^*(\pi_1^*(\Gamma)\otimes \mathcal{O}_{\Delta})\cong
(\bar{\pi}_2)_*((f\times id_C)^*(\pi_1^*(\Gamma)\otimes \mathcal{O}_{\Delta})).
\end{align}
The lemma now is a direct consequence of the commutative diagram of vector bundles on $S$
$$\begin{array}{cccccc}
&f^*(\pi_1^*(\Gamma)) &\overset{f^*(\phi_{\Gamma})}\longrightarrow & f^*(\pi_1^*(\Gamma)\otimes \mathcal{O}_{\Delta})\\
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!&\downarrow & & \downarrow \\
&(\bar{\pi}_2)_*((f\times id_C)^*(\pi_1^*(\Gamma))) & \longrightarrow& (\bar{\pi}_2)_*((f\times id_C)^*(\pi_1^*(\Gamma)\otimes \mathcal{O}_{\Delta})).\\
\end{array}$$
\end{proof}
\begin{thrm}\label{thm1}
For any analytic space $S$ and any family $\mathbb{E}$ of divisor series
on $C$
parametrized by $S$,
there is a unique morphism from $S$ to $\mathcal{V}^{s+1-d+r}_d(\Gamma)$ such that the pull back of $\mathcal{U}\mathcal{V}_d^{s+1-d+r}(\Gamma)$ is equivalent to $\mathbb{E}$.
\end{thrm}
\begin{proof}
Let $\mathbb{E}=(\mathcal{D}, \mathcal{T})$ be a family of divisor series, $\delta^r_d(\Gamma)$'s, on $C$ parametrized by $S$. The universal property of $\Delta$ asserts that there is a morphism $f:S\rightarrow C_d$ such that
$(f\times id_C)^*(\Delta)=\mathcal{D}$. Condition (II) in \ref{condition2} together with Lemma \ref{lem1} makes it possible to view the vector bundle $\mathcal{T}$ as a vector sub-bundle of $f^*(H^0(\Gamma)\otimes \mathcal{O}_{C_d})$ contained in $f^*(\ker \phi_{\Gamma})$. The universal property of Grassmann bundles implies that the vector bundle $\mathcal{T}$ is the pull back of the universal sub-bundle via a unique section of $G(s+1-d+r, f^*(H^0(\Gamma)\otimes \mathcal{O}_{C_d}))\rightarrow S$. This section factors through the inclusion $\mathcal{V}^{s+1-d+r}_d(\Gamma)\subseteq G(s+1-d+r, H^0(\Gamma)\otimes \mathcal{O}_{C_d})$, since $\mathcal{T}$ is annihilated by $f^*(\phi_{\Gamma})$.
\end{proof}
\subsubsection{The Tangent Space of $\mathcal{V}^{s+1-d+r}_d(\Gamma)$}
Theorem \ref{thm1} shows that
$T_{\mathbb{E}}(\mathcal{V}^{s+1-d+r}_d(\Gamma))
$ is the set of families of $\delta^r_d(\Gamma)$'s parametrized by
$\operatorname{Spec}(\mathbb{C}[\epsilon])$ reducing to $\mathbb{E}$. A family of this type is called a first order deformation of $\mathbb{E}$.
\begin{thrm}Let $\mathbb{E}=(D, T)\in \mathcal{V}^{s+1-d+r}_d(\Gamma)$.
Then, a first order deformation of $\mathbb{E}$ is in the form $\mathbb{E}_{\epsilon}=(\mathcal{D}_{\epsilon}, \mathcal{T}_{\epsilon})$, where $\mathcal{D}_{\epsilon}$
is a first order deformation of $D$ and $\mathcal{T}_{\epsilon}\subset
\Gamma_{\epsilon}(-\mathcal{D}_{\epsilon})$ extends $T$,
in which $\Gamma_{\epsilon}$ is the trivial first order deformation of $\Gamma$.
\end{thrm}
\begin{proof}
Assume $\mathbb{E}_{\epsilon}=(\mathcal{D}_{\epsilon}, \mathcal{T}_{\epsilon})$ is a family of $\delta^r_d(\Gamma)$'s parametrized by
$\operatorname{Spec}(\mathbb{C}[\epsilon])$. Then, $\mathcal{D}_{\epsilon}$ is a relative degree $d$ divisor on $\operatorname{Spec}(\mathbb{C}[\epsilon])$ and so is a first order deformation of $D$.
For each $s\in \operatorname{Spec}(\mathbb{C}[\epsilon])$, the vector bundle
$ (\bar{\pi}_2)_*(\bar{\pi}_1^*\Gamma\otimes \mathcal{O}(\mathcal{D}_{\epsilon})^{\vee})$ satisfies
$$\lbrace (\bar{\pi}_2)_*[\bar{\pi}_1^*\Gamma\otimes \mathcal{O}(\mathcal{D}_\epsilon)^{\vee}]\rbrace\otimes k(s)\cong H^0(\Gamma(-\mathcal{D}_s)),$$
where $\mathcal{D}_s$ is the restriction of $\mathcal{D}_{\epsilon}$ to $\lbrace s\rbrace\times C$. This implies that the vector bundle $ (\bar{\pi}_2)_*(\bar{\pi}_1^*\Gamma\otimes \mathcal{O}(\mathcal{D}_{\epsilon})^{\vee})$ might be viewed as the vector bundle
$ \Gamma_{\epsilon}(-\mathcal{D}_{\epsilon})$, where $\Gamma_{\epsilon}$ is the trivial first order deformation of $\Gamma$.
So $T$ has to be extended to some sub-vector bundle $\mathcal{T}_{\epsilon}$ of $ \Gamma_{\epsilon}(-\mathcal{D}_{\epsilon})$.
\end{proof}
\begin{prop}\label{Proposition}
Let $\mathbb{E}=(D, T)\in \mathcal{V}^{s+1-d+r}_d(\Gamma)$ corresponding to a divisor $D\in V^{s+1-d+r}_d(\Gamma)$
and an $(r+1)$-dimensional vector subspace $T$ of $H^0(\Gamma(-D))$. Denote by
$$\mu_{0, T}^{\Gamma}: H^0(D)\otimes T\rightarrow H^0(\Gamma)$$
the restriction of $\mu_{0}^{\Gamma}$ to $H^0(D)\otimes T$. \\
The tangent space to $\mathcal{V}^{s+1-d+r}_d(\Gamma)$ at $\mathbb{E}$ fits into an exact sequence
$$0\rightarrow \operatorname{Hom}(T, H^0(\Gamma(-D))/T)\rightarrow T_{\mathbb{E}}(\mathcal{V}_d^{s+1-d+r}(\Gamma))\overset{e_*}\longrightarrow T_DC_d .$$
Furthermore, if $\bar{\eta}_T$ is the cup product $H^0(\mathcal{O}_D(D))\otimes T \rightarrow H^0(\Gamma \otimes \mathcal{O}_D)$, then
$$\operatorname{Im} e_*= \lbrace \nu \in H^0(\mathcal{O}_D(D)) \mid \bar{\eta}_T(\nu \otimes T)\subseteq \operatorname{Im} \alpha_\Gamma\rbrace.$$
\end{prop}
\begin{proof}
If $D$ is locally defined by $(U_{\alpha}, \lbrace f_{\alpha} \rbrace)$,
then $(U_{\alpha}, \lbrace g_{\alpha, \beta}:=\frac{f_{\beta}}{f_{\alpha}} \rbrace)$ is a
transition datum for $\mathcal{O}(D)$. Similarly,
if $(U_{\alpha}, \lbrace \gamma_{\alpha, \beta} \rbrace)$ determines the line bundle $\Gamma$, then the line bundle $\Gamma(-D)$ is determined by $(U_{\alpha}, \lbrace \frac{\gamma_{\alpha, \beta}}{g_{\alpha, \beta}} \rbrace)$.
If $D_{\epsilon}$ is a first order deformation of $D$ associated to $\nu \in H^0(\mathcal{O}_D(D))$ and represented by
$(U_{\alpha, \epsilon}, \lbrace \tilde{f}_{\alpha} \rbrace)$, then $(U_{\alpha, \epsilon}, \lbrace \tilde{g}_{\alpha, \beta}:=\frac{\tilde{f}_{\beta}}{\tilde{f}_{\alpha}} \rbrace)$ would be a transition datum for $\mathcal{O}(D_{\epsilon})$, such that
$$\tilde{g}_{\alpha, \beta}=g_{\alpha, \beta}(1+\epsilon \phi_{\alpha, \beta}) \quad \mbox{where} \quad \phi_{\alpha, \beta}+\phi_{\beta, \gamma}=\phi_{\alpha, \gamma}.$$
Consider that $\phi=\lbrace \phi_{\alpha, \beta}\rbrace \in H^1(\mathcal{O}_C)$ and $\delta (\nu)=\phi$, where $\delta$ is the coboundary map associated to the exact sequence
$$0\rightarrow \mathcal{O}_C\rightarrow \mathcal{O}(D)\rightarrow \mathcal{O}_D(D)\rightarrow 0.$$
Furthermore, $\Gamma_{\epsilon}(-D_{\epsilon})$ would be represented by $(U_{\alpha, \epsilon}, \lbrace \tilde{\tilde {g}}_{\alpha, \beta} \rbrace)$, such that $\tilde{\tilde {g}}_{\alpha, \beta}=\frac{\tilde{\gamma}_{\alpha, \beta}}{\tilde {g}_{\alpha, \beta}}$, where by triviality of the deformation $\Gamma_{\epsilon}$, one has $\tilde{\gamma}_{\alpha, \beta}=
\gamma_{\alpha, \beta}$. These, imply that
\begin{align}
\tilde{\tilde {g}}_{\alpha, \beta}=
\frac{\gamma_{\alpha, \beta}}{g_{\alpha, \beta}+\epsilon g_{\alpha, \beta} \phi_{\alpha, \beta}}=\frac{\gamma_{\alpha, \beta}}{g_{\alpha, \beta}}[1+\epsilon(-\phi_{\alpha, \beta})].
\end{align}
In order to lift a section $s\in H^0(\Gamma(-D))$ which is represented by $\lbrace s_{\alpha} \rbrace$ with
\begin{align}
s_{\alpha}= \frac{\gamma_{\alpha, \beta}}{g_{\alpha, \beta}}s_{\beta}\quad
\mbox{on} \quad U_{\alpha}\cap U_{\beta},
\end{align}
to a section $\tilde{s}$ of $\Gamma_{\epsilon}(-D_{\epsilon})$ it is necessary and sufficient for $\tilde{s}$ to be represented by $\tilde{s}_{\alpha}$ with $\tilde{s}_{\alpha}=\frac{\tilde{\gamma}_{\alpha, \beta}}{\tilde {g}_{\alpha, \beta}}\tilde{s}_{\beta}$ on $U_{\alpha,\epsilon}\cap U_{\beta,\epsilon}$ such that one has locally
\begin{align}\label{10}
\tilde{s}_{\alpha}=s_{\alpha}+\epsilon\acute{s}_{\alpha}.
\end{align}
Setting $\bar{g}_{\alpha, \beta}:=\frac{\gamma_{\alpha, \beta}}{g_{\alpha, \beta}}$, equation (\ref{10}) is equivalent to saying that
\begin{align}
\quad s_{\alpha}=\bar{g}_{\alpha, \beta}\cdot s_{\beta} \quad \mbox{on} \quad U_{\alpha}\cap U_{\beta},
\end{align}
\begin{align}\label{11}
\bar{g}_{\alpha,\beta}\cdot \acute{s}_{\beta}-\acute{s}_{\alpha}=s_{\alpha}\cdot\phi_{\alpha,\beta}, \quad \mbox{on} \quad U_{\alpha}\cap U_{\beta}.
\end{align}
It is an immediate computation to see that the right-hand side in (\ref{11}) is a cocycle representing the cup-product $\phi . s\in H^1(\Gamma(-D))$ under the natural pairing
$$H^1(\mathcal{O}_C)\otimes H^0(\Gamma(-D))\rightarrow H^1(\Gamma(-D)).$$
Consider the commutative diagram of vector spaces
$$\begin{array}{cccccc}
&H^0(\mathcal{O}_D(D))\otimes H^0(\Gamma(-D)) &\overset{\delta \otimes 1}\longrightarrow & H^1(\mathcal{O}_C)\otimes H^0(\Gamma(-D))&\\
\bar{\eta} \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! &\downarrow & & \downarrow & \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \eta\\
&H^0(\Gamma\otimes \mathcal{O}_D) & \overset{\bar{\delta}}\longrightarrow& H^1(\Gamma(-D)).&\\
\end{array}$$
and observe that $[\eta \circ (\delta \otimes 1)](\nu\otimes s)=0$ in $H^1(\Gamma(-D))$. So the commutativity of the diagram implies $\bar{\eta}(\nu\otimes s)\in \ker (\bar{\delta})=\operatorname{Im} \alpha_{\Gamma}$. This finishes the proof.
\end{proof}
\begin{thrm}
For $D\in V^r_d(\Gamma)$, consider the set $\bar{I}\subseteq H^0(\mathcal{O}_D(D))\times \operatorname{Gr}(s+1-d+r, H^0(\Gamma(-D)))$
defined by
$$ \bar{I}:=\lbrace (\nu, T)\mid
\bar{\eta}_T(\nu \otimes T)\subseteq \operatorname{Im} \alpha_\Gamma
\rbrace.
$$
Then the tangent cone of $V^r_d(\Gamma)$ at $D$ coincides with $I:=\pi_1(\bar{I})$ set theoretically, where $\pi_1$ is the projection onto $H^0(\mathcal{O}_D(D))$.
\end{thrm}
\begin{proof}
An application of the Corollary on page 66 of \cite{ACGH} together with Proposition \ref{Proposition} shows
$$\mathcal{T}_D(V^r_d(\Gamma))=I,$$
set theoretically.
\end{proof}
\begin{remark}\label{remark1}
Assume that $\{\gamma_1, \cdots, \gamma_{s+1}\}$ is a basis for $H^0(\Gamma)$. The Brill-Noether matrix $(\gamma_i(p_j))_{i,j}$ defines the structure of $V^r_d(\Gamma)$ locally. This allows one to interpret $H^0(\Gamma\otimes \mathcal{O}_D)^*$ as the tangent space of $C_d$ at $D$, which is the same as identifying $H^0(\mathcal{O}_D(D))$ with $H^0(\Gamma\otimes \mathcal{O}_D)^*$. If $\{ \frac{1}{z_i}\}_i$ is a basis for $H^0(\mathcal{O}_D(D))$, then such an identification might be given explicitly as
\begin{align}
\Theta: \frac{1}{z_i} \in H^0(\mathcal{O}_D(D))&\mapsto
(\frac{\gamma_i-p_1}{z_i}(p_1), \cdots, \frac{\gamma_i-p_d}{z_i}(p_d))^* \in H^0(\Gamma\otimes\mathcal{O}_D)^*,
\end{align}
where $z_i$ is a local coordinate around $p_i$ and, for $v$ in a vector space $V$, we denote by $v^*\in V^*$ the linear map defined by $v^*(\lambda v)=\lambda$ and zero otherwise.
\end{remark}
In order to obtain Theorem \ref{thm 3}, we make the following hypothesis
\noindent \textbf{Hypothesis A:}\label{assumption}
Consider the set $\bar{J}\subseteq H^0(\Gamma)^*\times \operatorname{Gr}(s+1-d+r, H^0(\Gamma(-D)))$, defined by
$$\bar{J}:=\lbrace (\gamma, T)\mid \gamma \perp\mu_0^\Gamma(H^0(D)\otimes T) \rbrace,$$
and assume that the map $\Theta$ is such that, setting
$J:=\bar{\pi}_1(\bar{J})$, the set $(\alpha_\Gamma^*)^{-1}(J)$ coincides with $I$, where
$$\alpha_\Gamma^*:H^0(\Gamma\otimes \mathcal{O}_D)^*\rightarrow H^0(\Gamma)^*$$
is the dual of $\alpha_{\Gamma}$ and $\bar{\pi}_1$ is the projection onto $H^0(\Gamma)^*$.
\begin{thrm}\label{thm 3}
Together with Hypothesis A, assume that for each $T\in \operatorname{Gr} (s+1-d+r, H^0(\Gamma(-D)))$ the
map $\eta_T: H^0(D)\otimes T\rightarrow H^0(\Gamma)$ is injective and $h^0(\Gamma(-D))< h^0(D)+s+1-d+r$. Assume moreover that the scheme $\mathcal{V}^r_d(\Gamma)$ has dimension equal to the expected dimension of $V^r_d(\Gamma)$ in a neighborhood of $e^{-1}(D)$. Then $\mathcal{T}_D(V^r_d(\Gamma))$, the tangent cone of $V^r_d(\Gamma)$ at $D$, is generically a $\mathbb{C}^r$-bundle over a reduced, normal and Cohen-Macaulay variety $J\subset H^0(\Gamma)^*$.
\end{thrm}
\begin{proof}
Note that the scheme structures on $I$ and $\mathcal{T}_D(V^r_d(\Gamma))$ are compatible with the structures of the schemes fitted in the commutative diagram
$$\begin{array}{cccccc}
&\bar{I} &\overset{\zeta \otimes 1}\longrightarrow & \bar{J}&\\
\bar{\eta} \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! &\downarrow & & \downarrow & \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \eta\\
&I & \overset{\alpha_\Gamma^*}\longrightarrow& J,&\\
\end{array}$$
induced from
$$\begin{array}{cccccc}
&H^0(\mathcal{O}_D(D))\times H^0(\Gamma(-D)) &\overset{\zeta \otimes 1}\longrightarrow & H^0(\Gamma)^*\times H^0(\Gamma(-D))&\\
\bar{\eta} \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! &\downarrow & & \downarrow & \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \!\!\!\!\!\!\! \eta\\
&[H^0(\Gamma\otimes \mathcal{O}_D)]^* & \overset{\alpha_\Gamma^*}\longrightarrow& H^0(\Gamma)^*,&\\
\end{array}$$
where we are denoting $\alpha_\Gamma^*\circ \Theta$ by $\zeta$.
This shows $\mathcal{T}_D(V^r_d(\Gamma))=I$, scheme theoretically as well.
In order to finish the proof of the theorem, denoting by $\lambda$ the restriction of $\zeta^*$ to $I$, it is enough to prove $\lambda(I)=J$. To this end,
note that the scheme $\bar{J}$, being a vector bundle over $\operatorname{Gr}(s+1-d+r, H^0(\Gamma(-D)))$, is irreducible, which implies the irreducibility of $J$. For a similar reason $I$ is irreducible.
The Lemma on page 242 of \cite{ACGH}, applied to the injectivity assumption, implies that $J$ has the claimed properties.
Finally, a dimension computation shows that $\lambda(I)$ cannot be strictly contained in $J$, verifying $\lambda(I)=J$. The same computation shows that for a general $j\in J$ the fiber of $\lambda$ at $j$ has dimension $r$.
\end{proof}
\begin{remark}
The canonical bundle satisfies the assumption of Hypothesis A (\ref{assumption}), so the tangent cone of $C^r_d$ at a point $D\in C^r_d$ is generically a $P^r$-bundle over the tangent cone of $W^r_d$ at $L=\mathcal{O}(D)\in W^r_d$.
\end{remark}
\section{A Tangent Space Comparison}
The tangent space to $V^{r}_d(\Gamma)$ at a point $D\in V^{r}_d(\Gamma)\setminus V^{r+1}_d(\Gamma)$ has been described by M. Coppens in \cite[Thm. 0.3]{M. Cop.} as
$$T_D(V^{r}_d(\Gamma))=
\bigcap _{\xi\in H^0(\Gamma(-D))}\lbrace
\beta_{\xi}^{-1}(\operatorname{Im}(\phi_{\Gamma}^D)) \rbrace,$$
where for $\xi\in H^0(\Gamma(-D))$ the map $\beta_{\xi}:H^0(\mathcal{O}_D(D))\rightarrow H^0(\Gamma\otimes \mathcal{O}_D)$ is defined by $\nu \mapsto \nu\otimes \xi$ and $\phi_{\Gamma}^D$ is the morphism induced by $\phi_{\Gamma}$ at the point $D$.
This interpretation describes the tangent space as a subspace of
the space of first order deformations of $D$, where $D$ is considered as a closed subscheme of $C$.
\begin{thrm}\label{comparision theorem 1}
Let $x\in C$ be a general point such that $D\in V^{r}_{d}(\Gamma)\setminus V^{r+1}_{d}(\Gamma), D+x\in V^{r}_{d+1}(\Gamma)\setminus V^{r+1}_{d+1}(\Gamma)$ and $D\in V^{r}_{d}(\Gamma(-x))\setminus V^{r+1}_{d}(\Gamma(-x))$. Then
$$
\begin{array}{lrcl}
(a) & \dim T_D(V^{r}_d(\Gamma)) & \geq & \dim T_{D+x}(V^{r}_{d+1}(\Gamma))-(r+1),\\
(b) & \dim T_D(V^{r}_{d}(\Gamma)) & \geq & \dim T_{D} V^{r}_{d}(\Gamma(-x))-r,\\
(c) & \dim V^{r}_{d}(\Gamma) & \geq & \dim V^{r}_{d}(\Gamma(-x))-r.
\end{array}
$$
\end{thrm}
\begin{proof}
(a) We interpret $H^0(\mathcal{O}_D(D))$ as a subspace of $H^0(\mathcal{O}_{D+x}(D+x))$ and $H^0(\Gamma\otimes \mathcal{O}_D)$ as a subspace of $H^0(\Gamma\otimes \mathcal{O}_{D+x})$.
Using these interpretations we obtain a commutative diagram:
$$\begin{array}{cccccc}
&H^0(\mathcal{O}_D(D)) &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\overset{\beta_\xi}\longrightarrow H^0(\Gamma\otimes \mathcal{O}_D)& \\
i_1\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!&\downarrow & \downarrow &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!i_2\\
&H^0(\mathcal{O}_{D+x}(D+x)) &\overset{\bar{\beta}_\xi}\longrightarrow H^0(\Gamma\otimes \mathcal{O}_{D+x}),&
\end{array}$$
in which $\beta_\xi$ coincides with the restriction of $\bar{\beta}_\xi$ to $H^0(\mathcal{O}_D(D))$.
Set
$$H^0(\Gamma(-D))=H^0(\Gamma(-D-x))\oplus \langle \gamma \rangle,$$
and observe that if $\lbrace\gamma_1, \cdots, \gamma_t\rbrace$ is a basis for $H^0(\Gamma(-D-x))$, then
$$
\begin{array}{ccccc}
T_DV^r_d(\Gamma)&=&[\bigcap_{i=1}^{i=t}
\beta_{\gamma_i}^{-1}(\operatorname{Im}(\phi_{\Gamma}^D))]\cap \beta_{\gamma}^{-1}(\operatorname{Im}(\phi_{\Gamma}^D))&&\\
&=&[\bigcap_{i=1}^{i=t}
\bar{\beta}_{\gamma_i}^{-1}(\operatorname{Im}(\bar{\phi}_{\Gamma}^{D+x}))\cap H^0(\mathcal{O}_D(D))]\cap \beta_{\gamma}^{-1}(\operatorname{Im}(\phi_{\Gamma}^D)).
\end{array}
$$
This implies that $T_DV^r_d(\Gamma)=[T_{D+x}V^r_{d+1}(\Gamma)]\cap H^0(\mathcal{O}_D(D)) \cap \beta_{\gamma}^{-1}(\operatorname{Im}(\phi_{\Gamma}^D))$. Observe furthermore that
$$H^0(\mathcal{O}_D(D))=[T_{D+x}V^r_{d+1}(\Gamma)\cap H^0(\mathcal{O}_D(D))]+ \beta_{\gamma}^{-1}(\operatorname{Im}(\phi_{\Gamma}^D)),$$
by which we obtain
$$\dim T_DV^r_d(\Gamma)=\dim T_{D+x}V^r_{d+1}(\Gamma)-1+\dim \beta_{\gamma}^{-1}(\operatorname{Im}(\phi_{\Gamma}^D)) -d.$$
The assertion would be a direct consequence of the inequality
\begin{align}\label{inequality}
\dim \beta_{\gamma}^{-1}(\operatorname{Im}(\phi_{\Gamma}^D))\geq d-r=\dim \operatorname{Im}(\phi^D_{\Gamma}).
\end{align}
In order to prove the inequality (\ref{inequality}), set $V=\beta_{\gamma}^{-1}(\operatorname{Im}(\phi_{\Gamma}^D))$ and observe that
$$ \begin{array}{ccc}
\dim V=\dim [\ker \beta_{\gamma}\cap V]+\dim [\operatorname{Im} \beta_{\gamma}\cap \operatorname{Im} \phi^D_{\Gamma}]
=\dim \ker \beta_{\gamma}+\dim [\operatorname{Im} \beta_{\gamma}\cap \operatorname{Im} \phi^D_{\Gamma}] \\
=d-\dim \operatorname{Im} \beta_{\gamma}+\dim [\operatorname{Im} \beta_{\gamma}\cap \operatorname{Im} \phi^D_{\Gamma}]=
d-(\dim \operatorname{Im} \beta_{\gamma}-\dim [\operatorname{Im} \beta_{\gamma}\cap \operatorname{Im} \phi^D_{\Gamma}]).
\end{array}$$
The assertion is now immediate by
$$\dim \operatorname{Im} \beta_{\gamma}-\dim [\operatorname{Im} \beta_{\gamma}\cap \operatorname{Im} \phi^D_{\Gamma}]=\dim (\frac{\operatorname{Im} \beta_{\gamma}+ \operatorname{Im} \phi^D_{\Gamma}}{ \operatorname{Im} \phi^D_{\Gamma}})\leq \dim \frac{H^0(\Gamma\otimes \mathcal{O}_D)}{ \operatorname{Im} \phi^D_{\Gamma}}=r.$$
(b) For $\xi \in H^0(\Gamma(-x-D))$ we are in the situation of the following diagram:
\unitlength .500mm
\linethickness{0.5pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\begin{center}
\begin{picture}(148.5,95)(0,35)
\put(5,77){\makebox(0,0)[cc]{$H^0(\mathcal{O}_D(D))$}}
\put(45,115){\makebox(0,0)[cc]{$H^0(\Gamma(-x)\otimes \mathcal{O}_D)$}}
\put(51,46){\makebox(0,0)[cc]{$H^0(\Gamma\otimes \mathcal{O}_D)$}}
\put(140,115){\makebox(0,0)[cc]{$H^0(\Gamma(-x))$}}
\put(130,45){\makebox(0,0)[cc]{$H^0(\Gamma)$}}
\put(51.5,109){\vector(1,1){.07}}
\multiput(27.25,84.75)(.0337078652,.0337078652){700}{\line(0,1){.0337078652}}
\put(51.5,49.3){\vector(1,-1){.07}}
\multiput(26.5,74.5)(.0337273992,-.0337273992){700}{\line(0,-1){.0337273992}}
\put(73,115.5){\vector(-1,0){.07}}
\put(75,115.5){\line(1,0){45}}
\put(72,45.25){\vector(-1,0){.07}}
\put(75,45.25){\line(1,0){40}}
\put(129.5,51.25){\vector(0,-1){.07}}
\multiput(129.75,105.5)(-.03125,-7.40625){7}{\line(0,-1){7.40625}}
\put(55.5,51.25){\vector(0,-1){.07}}
\multiput(55.5,105.5)(-.03125,-7.40625){7}{\line(0,-1){7.40625}}
\put(35,100){\makebox(0,0)[cc]{$\acute{\beta}_\xi$}}
\put(90,121){\makebox(0,0)[cc]{$\phi^D_\Gamma(-x)$}}
\put(35,57.75){\makebox(0,0)[cc]{$\beta_\xi$}}
\put(90,36){\makebox(0,0)[cc]{$\phi_\Gamma^D$}}
\put(138,80){\makebox(0,0)[cc]{$i_2$}}
\put(50,80){\makebox(0,0)[cc]{$i_1$}}
\end{picture}
\end{center}
where $i_1$ and $i_2$ are inclusions. It is easy to see that $\operatorname{Im} \phi_\Gamma^D=\operatorname{Im} \phi^D_{\Gamma(-x)}$, by which we obtain $\beta^{-1}_\xi(\operatorname{Im} \phi_\Gamma^D)= \acute{\beta}^{-1}_\xi(\operatorname{Im}\phi^D_{\Gamma(-x)})$. This implies that, with $\gamma$ as in the proof of the previous case, we have
$$T_DV^r_d(\Gamma)=T_DV^r_d(\Gamma(-x))\cap \beta_\gamma ^{-1}(\operatorname{Im} \phi_\Gamma^D).$$
The rest of the proof goes verbatim as in part (a).
(c) We may assume that $V^r_d(\Gamma(-x))$ and $V^r_{d+1}(\Gamma)$ are irreducible. A general $x\in C$ cannot lie in the support of all divisors $D\in V^r_{d+1}(\Gamma)$. Otherwise, if for every $E+x\in V^r_{d+1}(\Gamma)$ the divisor $E$ belonged to $V^r_{d}(\Gamma)$, then $\dim V^r_{d+1}(\Gamma)=\dim V^r_{d}(\Gamma)$, which is impossible; and if the divisor $E$ belonged to $V^r_{d}(\Gamma(-x))\setminus V^r_{d}(\Gamma)$ for some $E+x\in V^r_{d+1}(\Gamma)$, then one would have $h^0(E)=0$ by \cite[Lemma 3.3]{A. B1}, which once again is impossible.
For an open subset $U\subset C$, from the equality
$V^r_{d+1}(\Gamma)=\overline{\cup_{p\in U}\{p+V^r_{d}(\Gamma(-p))\}}$
we obtain $\dim V^r_{d}(\Gamma(-x))=\dim V^r_{d+1}(\Gamma)-1$ for general $x\in C$. Indeed, for such $x$ the equality
$\dim V^r_{d}(\Gamma(-x))=\dim V^r_{d+1}(\Gamma)$ would imply that every $D\in V^r_{d+1}(\Gamma)$
contains $x$ in its support, which is absurd by what we just proved.
This, by \cite[Thm. 4.1]{A-S2}, implies the assertion.
\end{proof}
\begin{cor}\label{corollary 2}
If $V^{r}_{d}(\Gamma)$ is smooth at $D\in V^{r}_{d}(\Gamma)$ and of expected dimension,
then for general $x\in C$, $V^r_d(\Gamma(-x))$ is of expected dimension and smooth at $D\in V^r_d(\Gamma(-x))$. The same conclusion holds for $V^{r}_{d+1}(\Gamma)$, i.e.\ it is of expected dimension and smooth at $D+x \in V^{r}_{d+1}(\Gamma)$.
\end{cor}
\begin{proof}
Based on the inequality $\dim V^r_d(\Gamma(-x))\geq d-r(s-d+r)$, the assertion on the dimension of $V^r_d(\Gamma(-x))$ is a consequence of Theorem \ref{comparision theorem 1}(c); part (b) of the same theorem then verifies the smoothness assertion for $V^r_d(\Gamma(-x))$ at $D\in V^r_d(\Gamma(-x))$. The same argument applies verbatim to the smoothness of $V^r_{d+1}(\Gamma)$
at $D+x$. Meanwhile, the assertion on its dimension follows from \cite[Thm. 4.1]{A-S2}.
\end{proof}
\begin{cor}\label{corollary 1}
Assume that $\Gamma(-x)$ is very ample for general $x\in C$. Then $V^1_{s-1}(\Gamma)$, if non-empty, is $(s-4)$-dimensional.
\end{cor}
\begin{proof}
Theorem \ref{comparision theorem 1}(c) together with \cite[Lemma 4.4]{A. B1}
implies the corollary.
\end{proof}
\begin{remark}
(a) Theorem \ref{comparision theorem 1} implies Aprodu-Sernesi's result for reduced
$V^r_d(\Gamma)$'s.
\noindent(b) Corollary \ref{corollary 1} is invalid without the very ampleness assumption on $\Gamma(-x)$, see \cite[Ch. VIII. Exe. F]{ACGH}.
\noindent (c) The equality $\operatorname{gon}(C)=[\frac{g+1}{2}]$ holds for general curves, from which one can prove that for general points $x_1,\cdots, x_k$ ($1\leq k\leq [\frac{g-1}{2}]$) the line bundle
$K(-x_1-\cdots -x_k)$ is very ample on a general curve. Using this fact together with Theorem \ref{comparision theorem 1}(c), one can reprove $\dim C^1_d=2d-g+1$.
\noindent (d) The special case $r=1$ of Theorem \ref{comparision theorem 1}(c) was proved and used to establish the main theorem, Theorem 1.3, in \cite{A. B 2}.
\end{remark}
\end{document}
\begin{document}
\title[Random Polymer]
{Diffusivity of Rescaled Random Polymer in Random Environment in dimensions $1$ and $2$}
\author {Zi Sheng Feng}
\address{[email protected]\linebreak
Department of Mathematics\\
University of Toronto\\
Toronto, Ontario, Canada, M5S 2E4}
\date{}
\begin{abstract} We show that the random polymer in a random environment is diffusive in dimensions $1$ and $2$, in probability, in an intermediate scaling regime. The scale is $\beta=
o(N^{-1/4})$ in $d=1$ and $\beta=o((\log N)^{-1/2})$ in $d=2$ as $N\rightarrow \infty$.
\end{abstract}
\maketitle
\section{Introduction}
Consider walks $\omega: [0,N]\bigcap \mathbb{Z}\rightarrow \mathbb{Z}^d$ such that $\omega(0)=0$ and $|\omega(n)-\omega(n-1)|=1$. Let $P^N_0$ be the uniform measure on the space of these walks, each walk having weight $(2d)^{-N}$. Then $$p_0(N,x):=P^N_0(\omega(N)=x)=\int 1_{[\omega(N)=x]}dP^N_0(\omega)=\frac{1}{(2d)^N}\sum_{\omega:\ \omega(N)=x}1$$ is the probability that the nearest-neighbor simple random walk started at $0$ is at site $x$ at time $N$.
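As a quick illustration (not needed in what follows), for $d=1$ and $N=2$ there are four walks of length $2$, each of weight $1/4$, so $$p_0(2,0)=\frac{2}{4}=\frac{1}{2},\qquad p_0(2,\pm 2)=\frac{1}{4},\qquad p_0(2,x)=0\ \mbox{ otherwise}.$$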
Let the random environment be given by $h=\{h(n,x): n\in \mathbb{N}, x\in \mathbb{Z}^d\}$, a sequence of independent identically distributed random variables with $h(n,x)=\pm 1$ with equal probability on some probability space $(H,\mathcal{G}, Q)$, which are also independent of the simple random walk. We denote expectation over the environment space by $E_Q$.
We define the (unnormalized) polymer density by \begin{eqnarray*}p(N,x)=\int 1_{[\omega(N)=x]}\prod_{1\leq n\leq N}\left[1+c_{N,d}h(n,\omega(n))\right]dP^N_0(\omega)\end{eqnarray*} where $c_{N,d}$\footnote{For example, we may take $c_{N,1}=N^{-(1/4+\epsilon)}$ and $c_{N,2}=(\log N)^{-(1/2+\epsilon)}$ for any $\epsilon>0$. Also, the scale $\beta=o(N^{-1/4})$ for $d=1$ was first identified in \cite{AKQ}.} is such that \begin{eqnarray}\label{SLG}\lim_{N\rightarrow \infty}c^2_{N,1}N^{1/2}=0\ {\mbox{for}}\ d=1;\ \lim_{N\rightarrow\infty}c^2_{N,2}\log N=0\ {\mbox{for}}\ d=2.\end{eqnarray}
Since the polymer density is not normalized, to obtain the probability that the polymer is at site $x$ at time $N$ we define $$p_N(N,x)=p(N,x)/Z(N),$$ where $Z(N)$ is the partition function $$Z(N)=\sum_xp(N,x)=\int \prod_{1\leq n\leq N}\left[1+c_{N,d}h(n,\omega(n))\right]dP^N_0(\omega).$$
In this paper, we show that the mean square displacement of the polymer, when scaled by $N$, converges to $1$ in probability in both $d=1$ and $d=2$. Precisely, let $\langle \omega(N)^2\rangle_{N,h}=\sum_x x^2p_N(N,x)$.
\begin{te}\label{MTHM} With rescaling of the polymer density by $c_{N,d}$, for $d=1,2$, $$\frac{\langle \omega(N)^2\rangle_{N,h}}{N}\rightarrow 1$$ in probability as $N\rightarrow \infty$. \end{te}
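Theorem \ref{MTHM} can also be illustrated numerically. The following is a minimal Monte Carlo sketch for $d=1$ (purely illustrative and not used in the proofs; the function name and the particular choice $c_{N,1}=N^{-0.45}$, which satisfies (\ref{SLG}), are ours): it samples walks from $P^N_0$, weights each walk by $\prod_{1\leq n\leq N}\left[1+c_{N,1}h(n,\omega(n))\right]$ in one sampled environment $h$, and forms the ratio estimator for $\langle \omega(N)^2\rangle_{N,h}/N$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def msd_over_N(N, n_walks=10000):
    # Estimate <omega(N)^2>_{N,h} / N for one sampled environment h (d = 1).
    c = N ** -0.45                    # c_{N,1}: c^2 * N^{1/2} = N^{-0.4} -> 0
    # environment h(n, x) = +/-1 for times 0..N and sites -2N..2N (index shifted by 2N)
    h = rng.choice([-1.0, 1.0], size=(N + 1, 4 * N + 1))
    steps = rng.choice([-1, 1], size=(n_walks, N))       # +/-1 increments under P_0^N
    paths = np.cumsum(steps, axis=1)                     # omega(1), ..., omega(N)
    # weight of each sampled walk: prod_{1 <= n <= N} (1 + c * h(n, omega(n)))
    w = np.prod(1.0 + c * h[np.arange(1, N + 1), paths + 2 * N], axis=1)
    # ratio estimator for K(N) / (N * Z(N))
    return np.sum(w * paths[:, -1] ** 2) / (N * np.sum(w))

for N in (100, 400, 800):
    print(N, msd_over_N(N))
\end{verbatim}
For moderate $N$ the printed ratios should be close to $1$, in line with Theorem \ref{MTHM}.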
We note that $\langle \omega(N)^2\rangle_{N,h}=\frac{K(N)}{Z(N)}$, where $K(N)=\int \prod_{1\leq n\leq N}\left[1+c_{N,d}h(n,\omega(n))\right]\omega(N)^2dP^N_0(\omega)$. To show the result, we are going to estimate the second moments of the numerator and the denominator, and we find the following.
\begin{pr}\label{d12M} For $d=1$, $$i)\ E_Q(Z(N)^2)\leq \sum^N_{n=0}\left(c_1 c^2_{N,1}N^{1/2}\right)^n;\ \ \ ii)\ E_Q(K(N)^2)\leq N^2\sum^N_{n=0}\left(c_1 c^2_{N,1}N^{1/2}\right)^n$$ for some constant $c_1$ that depends only on the dimension. \end{pr}
\begin{pr}\label{d22M} For $d=2$, $$i)\ E_Q(Z(N)^2)\leq \sum^N_{n=0}\left(c_2 c^2_{N,2}\log N\right)^n;\ \ \ ii)\ E_Q(K(N)^2)\leq N^2\sum^N_{n=0}\left(c_2 c^2_{N,2}\log N\right)^n$$ for some constant $c_2$ that depends only on the dimension. \end{pr}
The paper is organized as follows. In Section 2, we write out the second moments of the numerator and the denominator of the mean square displacement of the polymer. In Section 3, we show Proposition \ref{d12M} and Theorem \ref{MTHM} for dimension $d=1$. In Section 4, we show Proposition \ref{d22M} and Theorem \ref{MTHM} for dimension $d=2$. In Section 5, we show some other results.
\section{Second Moment Expansions}
In this section, we are going to write out the second moments of the numerator and the denominator of the mean square displacement of the polymer.
\begin{lm}\label{2} \begin{eqnarray*}E_Q(Z^2(N))=\sum^N_{n=0}\sum_{1\leq i_1< \cdots <i_n\leq N}c^{2n}_{N,d}\sum_{x_1,\ldots, x_n}\prod^n_{k=1}p^2_0(i_k-i_{k-1}, x_k-x_{k-1})\end{eqnarray*} \end{lm}
\begin{proof} By definition, $Z(N)=\int \prod_{1\leq n\leq N}\left[1+c_{N,d}h(n,\omega(n))\right]dP^N_0(\omega)$. Upon expanding, $$Z(N)=\int \sum^N_{n=0}\sum_{1\leq i_1<\cdots <i_n\leq N}c^n_{N,d}\prod^n_{k=1}h(i_k, \omega(i_k))dP^N_0(\omega).$$ Letting $f_n(\omega)=\sum_{1\leq i_1<\cdots <i_n\leq N}c^n_{N,d}\prod^n_{k=1}h(i_k, \omega(i_k))$ and $g_n=\int f_n(\omega)dP^N_0(\omega)$, we see $$Z(N)^2=(g_0+g_1+\cdots +g_N)^2=\sum_{0\leq n,m\leq N}g_{n}g_{m}$$
For $n\neq m$, we have \begin{eqnarray*}E_Qg_ng_m&=&E_Q\int\sum_{1\leq i_1<\cdots<i_n\leq N}c^n_{N,d}\prod^n_{k=1}h(i_k,\omega(i_k))dP^N_0(\omega)\int\sum_{1\leq i'_1<\cdots<i'_m\leq N}c^m_{N,d}\prod^m_{l=1}h(i'_l,\omega'(i'_l))dP^N_0(\omega')\end{eqnarray*}
Note that if there is some $i_k$ that is different from all of the $i'_l$'s (or vice versa), then by independence of the $h(n,x)$'s and the fact that they have mean $0$, we have $$E_Q\prod^n_{k=1}h(i_k,\omega(i_k))\prod^m_{l=1} h(i'_l,\omega'(i'_l))=0$$
Since $n\neq m$, the $i_k$'s and the $i'_l$'s cannot all be matched in pairs, so there must indeed be some $i_k$ different from all of the $i'_l$'s (or vice versa); hence, by Fubini, $E_Qg_ng_m=0$.
On the other hand, for $n=m$, we have \begin{eqnarray*}E_Qg^2_n&=&E_Q\int\sum_{1\leq i_1<\cdots<i_n\leq N}c^n_{N,d}\prod^n_{k=1}h(i_k,\omega(i_k))dP^N_0(\omega)\int\sum_{1\leq i'_1<\cdots<i'_n\leq N}c^n_{N,d}\prod^n_{k=1}h(i'_k,\omega'(i'_k))dP^N_0(\omega')
\\&=&E_Q\int \int \sum_{1\leq i_1<\cdots <i_n\leq N}c^{2n}_{N,d}\prod^n_{k=1}h(i_k,\omega(i_k))h(i_k,\omega'(i_k))dP^N_0(\omega)dP^N_0(\omega')
\\&+&E_Q\int \int \sum_{i_l\neq i'_l\ \textrm{for some}\ l\ \in \{1,\ldots,n\}}c^{2n}_{N,d}\prod^n_{k=1}h(i_k,\omega(i_k))h(i'_k,\omega'(i'_k))dP^N_0(\omega)dP^N_0(\omega')
\\&=&E_Q\int \int \sum_{1\leq i_1<\cdots <i_n\leq N}c^{2n}_{N,d}\prod^n_{k=1}1_{\omega(i_k)=\omega'(i_k)}dP^N_0(\omega)dP^N_0(\omega')
\\&+&E_Q\int \int \sum_{i_l\neq i'_l\ \textrm{for some}\ l\ \in \{1,\ldots,n\}}c^{2n}_{N,d}\prod^n_{k=1}h(i_k,\omega(i_k))h(i'_k,\omega'(i'_k))dP^N_0(\omega)dP^N_0(\omega')
\\&=&\int\int \sum_{1\leq i_1<\cdots <i_n\leq N}c^{2n}_{N,d}\prod^n_{k=1}1_{\omega(i_k)=\omega'(i_k)}dP^N_0(\omega)dP^N_0(\omega')\end{eqnarray*} The third equality follows because $h^2(n,x)=1$ and a nonzero contribution comes only from terms in which all sites $\omega(i_k)$ and $\omega'(i_k)$ are matched in pairs. The fourth equality follows because if the $i_{\alpha}$'s were to match perfectly with the $i'_{\beta}$'s for $\alpha\neq \beta$, then we would get a contradiction in the order of the times. For example, take $n=3$ and the perfect cross matching $i_1=i'_2$, $i_2=i'_3$, $i_3=i'_1$; then $i_1<i_2<i_3$ would give $i'_2<i'_3<i'_1$, which is a contradiction.
Now, we are going to write out the integrals as sums in terms of the transition probabilities of the two independent walks. By above, we have $$E_Q(Z^2(N))=\sum^N_{n=0}\sum_{1\leq i_1< \cdots <i_n\leq N}c^{2n}_{N,d}\int\int 1_{\left[\omega(i_1)=\tilde{\omega}(i_1),\ldots, \omega(i_n)=\tilde{\omega}(i_n)\right]}dP^N_0(\omega)dP^N_0(\tilde{\omega})$$ $$=\sum^N_{n=0}\sum_{1\leq i_1< \cdots <i_n\leq N}c^{2n}_{N,d}\int \sum_{x_1,\ldots, x_n,x}1_{\left[\omega(i_1)=x_1,\ldots, \omega(i_n)=x_n\right]}$$ $$\times P^N_0\left(\tilde{\omega}(i_1)=x_1,\ldots,\tilde{\omega}(i_n)=x_n, \tilde{\omega}(N)=x\right)dP^N_0(\omega)$$ $$=\sum^N_{n=0}\sum_{1\leq i_1< \cdots <i_n\leq N}c^{2n}_{N,d}\int \sum_{x_1,\ldots, x_n}1_{\left[\omega(i_1)=x_1,\ldots, \omega(i_n)=x_n\right]}$$
$$\times \prod^n_{k=1}p_0(i_k-i_{k-1}, x_k-x_{k-1})\sum_xp_0(N-i_n, x-x_n)dP^N_0(\omega)$$
where in the first equality we also need to sum over the sites at time $N$ because $P^N_0$ is a measure on walks of length $N$, and in the last equality we use the fact that the increments of the walk are independent and that the walk is spatially homogeneous, i.e. the probability that the walk starts at $y$ and ends at $x$ equals the probability that the walk starts at $0$ and ends at $x-y$. Next we note that $\sum_xp_0(N-i_n, x-x_n)=1$ because $p_0(n,x)$ is a transition probability.
Combining above and expanding similarly for the second walk we thus have shown Lemma \ref{2}.
\end{proof}
\begin{lm}\label{3}\begin{eqnarray*}E_Q(K^2(N))=\sum^N_{n=0}\sum_{1\leq i_1< \cdots <i_n\leq N}c^{2n}_{N,d}\sum_{x_1,\ldots, x_n} \prod^n_{k=1}p^2_0(i_k-i_{k-1}, x_k-x_{k-1})\end{eqnarray*} $$\times\left(\sum_x x^2p_0(N-i_n, x-x_n)\right)^2$$\end{lm}
\begin{proof} To estimate the second moment of $K(N)$, we have $$E_Q(K^2(N))=E_Q\int\int \prod_{1\leq n\leq N}\left[1+c_{N,d}h(n,\omega(n))\right]\left[1+c_{N,d}h(n,\tilde{\omega}(n))\right]$$ $$\times\omega(N)^2\tilde{\omega}(N)^2dP^N_0(\omega)dP^N_0(\tilde{\omega})$$
As we see, the only difference between $E_Q(Z^2(N))$ and $E_Q(K^2(N))$ is the extra term $\omega(N)^2\tilde{\omega}(N)^2$, and we proceed as before to expand the second moment to get $$\sum^N_{n=0}\sum_{1\leq i_1< \cdots <i_n\leq N}c^{2n}_{N,d}\int\int 1_{\left[\omega(i_1)=\tilde{\omega}(i_1),\ldots, \omega(i_n)=\tilde{\omega}(i_n)\right]}\omega(N)^2\tilde{\omega}(N)^2dP^N_0(\omega)dP^N_0(\tilde{\omega})$$ $$=\sum^N_{n=0}\sum_{1\leq i_1< \cdots <i_n\leq N}c^{2n}_{N,d}\int \sum_{x_1,\ldots, x_n,x}1_{\left[\omega(i_1)=x_1,\ldots, \omega(i_n)=x_n\right]}$$
$$\times x^2P^N_0\left(\tilde{\omega}(i_1)=x_1,\ldots,\tilde{\omega}(i_n)=x_n, \tilde{\omega}(N)=x\right)\omega(N)^2dP^N_0(\omega)$$ $$=\sum^N_{n=0}\sum_{1\leq i_1< \cdots <i_n\leq N}c^{2n}_{N,d}\int \sum_{x_1,\ldots, x_n}1_{\left[\omega(i_1)=x_1,\ldots, \omega(i_n)=x_n\right]} $$ $$\times \prod^n_{k=1}p_0(i_k-i_{k-1}, x_k-x_{k-1})\sum_x x^2p_0(N-i_n, x-x_n)\omega(N)^2dP^N_0(\omega)$$ $$=\sum^N_{n=0}\sum_{1\leq i_1< \cdots <i_n\leq N}c^{2n}_{N,d}\sum_{x_1,\ldots, x_n} \prod^n_{k=1}p^2_0(i_k-i_{k-1}, x_k-x_{k-1})$$
$$\times \sum_x x^2p_0(N-i_n, x-x_n)\sum_y y^2p_0(N-i_n, y-x_n)$$ which is the claimed expression.
\end{proof}
\section{Diffusivity of Rescaled Random Polymer in $d=1$}
In this section, we are going to show Proposition \ref{d12M} and Theorem \ref{MTHM} for dimension $d=1$.
The key ingredient we need is that the transition probability $p_0(n,x)$ admits the following Gaussian estimate: for $d\geq 1$ and $x\in \mathbb{Z}^d$ with $x_1+\cdots + x_d+n\equiv 0\pmod 2$, \begin{eqnarray}\label{GLE}p_0(n,x)=2\left(\frac{d}{2\pi n}\right)^{d/2}\exp\left(-\frac{d|x|^2}{2n}\right)+r_n(x)\end{eqnarray} where $|r_n(x)|\leq \min\left(c_dn^{-(d+2)/2}, c'_d|x|^{-2}n^{-d/2}\right)$ for some constants $c_d, c'_d$ that depend only on the dimension (see Theorem 1.2.1 in \cite{Lawler}).
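For instance, taking $x=0$ and $n=2k$ in (\ref{GLE}) gives $$p_0(2k,0)=\frac{1}{\sqrt{\pi k}}+r_{2k}(0)\ \mbox{ in } d=1,\qquad p_0(2k,0)=\frac{1}{\pi k}+r_{2k}(0)\ \mbox{ in } d=2,$$ with $|r_{2k}(0)|\leq c_dk^{-(d+2)/2}$; these are the forms of the estimate used in Section 5.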
\subsection*{Section 3.1}
In this subsection, we are going to show Proposition \ref{d12M} i) in a series of lemmas.
\begin{lm}\label{4} \begin{eqnarray*}\sum_{x_1,\ldots, x_n}\prod^n_{k=1}p^2_0(i_k-i_{k-1}, x_k-x_{k-1})\leq c_1^n\prod^n_{k=1}(i_k-i_{k-1})^{-1/2}\end{eqnarray*} for some constant $c_1$ that depends only on the dimension $d=1$ \end{lm}
\begin{proof} For $d=1$, since $e^{-x^2}\leq 1$ for all $x$, we see that (\ref{GLE}) is at most $c_1n^{-1/2}$ for some constant $c_1$. Using this uniform estimate for each of the $p_0(i_k-i_{k-1},x_k-x_{k-1})$'s, we have $$\sum_{x_1,\ldots, x_n}\prod^n_{k=1}p^2_0(i_k-i_{k-1}, x_k-x_{k-1})=\sum_{x_1}p^2_0(i_1,x_1)\cdots \sum_{x_n}p^2_0(i_n-i_{n-1},x_n-x_{n-1})$$ $$\leq c_1^ni_1^{-1/2}\cdots (i_n-i_{n-1})^{-1/2}\sum_{x_1}p_0(i_1,x_1)\cdots \sum_{x_n}p_0(i_n-i_{n-1},x_n-x_{n-1})=c_1^n\prod^n_{k=1}(i_k-i_{k-1})^{-1/2}$$ for some constant $c_1$ that depends only on the dimension $d=1$ (the last equality follows from the fact that $p_0(n,x)$ is a transition probability). \end{proof}
\begin{lm}\label{5} \begin{eqnarray*}\sum_{1\leq i_1< \cdots <i_n\leq N}c_1^nc^{2n}_{N,1}\prod^n_{k=1}(i_k-i_{k-1})^{-1/2}\leq \left(c_1c^2_{N,1} N^{1/2}\right)^n\end{eqnarray*}
\end{lm}
\begin{proof}
$$\sum_{1\leq i_1< \cdots <i_n\leq N}c_1^nc^{2n}_{N,1}\prod^n_{k=1}(i_k-i_{k-1})^{-1/2} $$ $$=c_1^nc^{2n}_{N,1}\sum^{N-(n-1)}_{i_1=1}\cdots \sum^{N-1}_{i_{n-1}=i_{n-2}+1}i_1^{-1/2}\cdots (i_{n-1}-i_{n-2})^{-1/2} \sum^N_{i_n=i_{n-1}+1}(i_n-i_{n-1})^{-1/2}$$ $$\leq c_1^nc_{N,1}^{2n}\sum^{N-(n-1)}_{i_1=1}\cdots \sum^{N-1}_{i_{n-1}=i_{n-2}+1}i_1^{-1/2}\cdots (i_{n-1}-i_{n-2})^{-1/2} 2N^{1/2}$$ The last inequality holds because $\sum^{N}_{k=1}k^{-1/2}\leq 1+\int^N_1x^{-1/2}dx=1+2\left(N^{1/2}-1\right)\leq 2N^{1/2}$. Continuing from above and arguing similarly to estimate each sum in the expression, we have
$$\leq c_1^nc^{2n}_{N,1}N^{1/2}\sum^{N-(n-1)}_{i_1=1}\cdots \sum^{N-1}_{i_{n-1}=i_{n-2}+1}i_1^{-1/2}\cdots (i_{n-1}-i_{n-2})^{-1/2}\leq \left(c_1c^{2}_{N,1}N^{1/2}\right)^n$$ where the constant $c_1$ will change from line to line (again it depends only on the dimension $d=1$).
\end{proof}
We conclude by Lemma \ref{2} that Proposition \ref{d12M} i) holds.
\subsection*{Section 3.2}
In this subsection, we are going to show Proposition \ref{d12M} ii) in a series of lemmas.
By standard computations of the moments of the simple random walk of length $n$ in dimension $d=1$ using the characteristic function, we have $$\sum_xx^2p_0(n,x)=n; \ \ \ \sum_xx^4p_0(n,x)=3n^2-2n.$$
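These follow, for example, from the characteristic function: each step has characteristic function $\cos t$, so $$E\left(e^{it\omega(n)}\right)=\cos^n t=1-\frac{n}{2}t^2+\frac{3n^2-2n}{24}t^4+O(t^6),$$ and matching coefficients with $\sum_{m\geq 0}\frac{(it)^m}{m!}\sum_xx^mp_0(n,x)$ gives the two displayed moments.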
\begin{lm} \label{6}\begin{eqnarray*}\sum_x x^2p_0(N-i_n,x-x_n)=(N-i_n)+x^2_n\end{eqnarray*}\end{lm}
\begin{proof} $$\sum_x x^2p_0(N-i_n,x-x_n)
=\sum_{x-x_n}x^2p_0(N-i_n,x-x_n)=\sum_x(x+x_n)^2p_0(N-i_n,x)$$ $$=\sum_x
x^2p_0(N-i_n,x)+\sum_xx^2_np_0(N-i_n,x)=(N-i_n)+x^2_n$$ where the first equality holds because summation over all $x$ is the same as summation over all $x-x_n$, and the third equality holds because any odd moment of the simple random walk vanishes.\end{proof}
\begin{lm} \label{7}\begin{eqnarray*}\sum_{x_k}x^4_kp_0(i_k-i_{k-1},x_k-x_{k-1})\end{eqnarray*} $$=3(i_k-i_{k-1})^2-2(i_k-i_{k-1})+6x^2_{k-1}(i_k-i_{k-1})+x^4_{k-1}$$ \end{lm}
\begin{proof} $$\sum_{x_k}x^4_kp_0(i_k-i_{k-1},x_k-x_{k-1})=\sum_{x_k-x_{k-1}}x^4_kp_0(i_k-i_{k-1},x_k-x_{k-1})
=\sum_{x_k}(x_k+x_{k-1})^4p_0(i_k-i_{k-1},x_k) $$ $$=\sum_{x_k}x^4_kp_0(i_k-i_{k-1},x_k)+\sum_{x_k}6x^2_kx^2_{k-1}p_0(i_k-i_{k-1},x_k)
+\sum_{x_k}x^4_{k-1}p_0(i_k-i_{k-1},x_k) $$ $$=3(i_k-i_{k-1})^2-2(i_k-i_{k-1})+6x^2_{k-1}(i_k-i_{k-1})+x^4_{k-1}$$ \end{proof}
\begin{lm} \label{8}\begin{eqnarray*}\sum_{x_1,\ldots, x_n}\prod^n_{k=1}p_0(i_k-i_{k-1}, x_k-x_{k-1})\left((N-i_n)^2+2(N-i_n)x^2_n+x^4_n\right) \end{eqnarray*} $$=(N-i_n)^2+2(N-i_n)i_n+3\sum^n_{k=1}(i_k-i_{k-1})^2-2i_n+6\sum^n_{k=1}(i_k-i_{k-1})i_{k-1}$$ \end{lm}
\begin{proof}
We do this by induction on $n$. For $n=1$, we have $$\sum_{x_1}p_0(i_1,x_1)\left((N-i_1)^2+2(N-i_1)x^2_1+x^4_1\right) $$ $$=(N-i_1)^2\sum_{x_1}p_0(i_1,x_1)+2(N-i_1)\sum_{x_1}x^2_1p_0(i_1,x_1)
+\sum_{x_1}x^4_1p_0(i_1,x_1)$$ $$=(N-i_1)^2+2(N-i_1)i_1+3i_1^2-2i_1$$ (Note that we do not have a term of the form $6\sum^{n}_{k=1}(i_k-i_{k-1})i_{k-1}$ because $i_0=0$.)
Suppose equality holds for $n-1$. Then $$\sum_{x_1,\ldots, x_n}\prod^n_{k=1}p_0(i_k-i_{k-1}, x_k-x_{k-1})\left((N-i_n)^2+2(N-i_n)x^2_n+x^4_n\right)
$$ $$=\sum_{x_1\ldots, x_{n-1}}\prod^{n-1}_{k=1}p_0(i_k-i_{k-1}, x_k-x_{k-1})\sum_{x_n}p_0(i_n-i_{n-1}, x_n-x_{n-1})\left((N-i_n)^2+2(N-i_n)x^2_n+x^4_n\right)$$ $$=\sum_{x_1\ldots, x_{n-1}}\prod^{n-1}_{k=1}p_0(i_k-i_{k-1}, x_k-x_{k-1})
[(N-i_n)^2+2(N-i_n)(i_n-i_{n-1}+x^2_{n-1})$$ $$+3(i_n-i_{n-1})^2-2(i_n-i_{n-1})+6x^2_{n-1}(i_n-i_{n-1})+x^4_{n-1}]$$
$$ =\sum_{x_1\ldots, x_{n-1}}\prod^{n-1}_{k=1}p_0(i_k-i_{k-1}, x_k-x_{k-1})([(N-i_n)^2+2(N-i_n)(i_n-i_{n-1})+3(i_n-i_{n-1})^2-2(i_n-i_{n-1})]$$
$$+[2(N-i_n)+6(i_n-i_{n-1})]x^2_{n-1}+x^4_{n-1})$$ $$=[(N-i_n)^2+2(N-i_n)(i_n-i_{n-1})+3(i_n-i_{n-1})^2-2(i_n-i_{n-1})]+
[2(N-i_n)+6(i_n-i_{n-1})]i_{n-1} $$ $$+3\sum^{n-1}_{k=1}(i_k-i_{k-1})^2-2i_{n-1}+6\sum^{n-1}_{k=1}(i_k-i_{k-1})i_{k-1}$$ $$=(N-i_n)^2+2(N-i_n)(i_n-i_{n-1}+i_{n-1})+3(i_n-i_{n-1})^2+3\sum^{n-1}_{k=1}(i_k-i_{k-1})^2$$ $$-2(i_n-i_{n-1}+i_{n-1})+6(i_n-i_{n-1})i_{n-1}+6\sum^{n-1}_{k=1}
(i_k-i_{k-1})i_{k-1}$$ $$=(N-i_n)^2+2(N-i_n)i_n+3\sum^n_{k=1}(i_k-i_{k-1})^2-2i_n+6\sum^n_{k=1}(i_k-i_{k-1})i_{k-1}$$ where in the second equality we use Lemmas \ref{6} and \ref{7} and in the fourth equality we use the inductive hypothesis.
\end{proof}
\begin{lm} \label{9}\begin{eqnarray*}(N-i_n)^2+2(N-i_n)i_n+3\sum^n_{k=1}(i_k-i_{k-1})^2 -2i_n+6\sum^n_{k=1}(i_k-i_{k-1})i_{k-1}\leq 100^nN^2\end{eqnarray*}\end{lm}
\begin{proof} $$(N-i_n)^2+2(N-i_n)i_n+3\sum^n_{k=1}(i_k-i_{k-1})^2-2i_n+6\sum^n_{k=1}(i_k-i_{k-1})i_{k-1}$$
$$=N^2-2Ni_n+i^2_n+2Ni_n-2i^2_n+3\sum^n_{k=1}(i^2_k-2i_ki_{k-1}+i^2_{k-1})-2i_n+6\sum^n_{k=1}(i_ki_{k-1}-i^2_{k-1})$$ $$\leq N^2+2N^2+N^2+2N^2+2N^2+3n(N^2+2N^2+N^2)+2N^2+6n(N^2+N^2) $$ $$=10N^2+24nN^2\leq 100^nN^2$$ \end{proof}
\begin{lm} \label{10}\begin{eqnarray*}\sum_{x_1,\ldots, x_n} \prod^n_{k=1}p^2_0(i_k-i_{k-1}, x_k-x_{k-1})\left(\sum_x x^2p_0(N-i_n, x-x_n)\right)^2\leq c^n_1N^2\prod^n_{k=1}(i_k-i_{k-1})^{-1/2}\end{eqnarray*}\end{lm}
\begin{proof} Using the uniform estimate (\ref{GLE}) for $d=1$ on the transition probability as in the proof of Lemma \ref{4}, we get $$\sum_{x_1,\ldots, x_n} \prod^n_{k=1}p^2_0(i_k-i_{k-1}, x_k-x_{k-1})\left(\sum_x x^2p_0(N-i_n, x-x_n)\right)^2 $$ $$\leq c_1^ni_1^{-1/2}(i_2-i_1)^{-1/2}\cdots (i_n-i_{n-1})^{-1/2}\sum_{x_1,\ldots, x_n} \prod^n_{k=1}p_0(i_k-i_{k-1}, x_k-x_{k-1})\left((N-i_n)+x^2_{n}\right)^2$$ $$= c_1^ni_1^{-1/2}(i_2-i_1)^{-1/2}\cdots (i_n-i_{n-1})^{-1/2}\sum_{x_1,\ldots, x_n} \prod^n_{k=1}p_0(i_k-i_{k-1}, x_k-x_{k-1})$$ $$\times\left((N-i_n)^2+2(N-i_n)x^2_n+x^4_n\right)$$ $$=c_1^ni_1^{-1/2}(i_2-i_1)^{-1/2}\cdots (i_n-i_{n-1})^{-1/2} $$ $$\times
\left((N-i_n)^2+2(N-i_n)i_n+3\sum^n_{k=1}(i_k-i_{k-1})^2-2i_n+6\sum^n_{k=1}(i_k-i_{k-1})i_{k-1}\right)$$ $$\leq c_1^nN^2\prod^n_{k=1}(i_k-i_{k-1})^{-1/2}$$ where in the first inequality we also use Lemma \ref{6} to compute the second moment, in the second equality we use Lemma \ref{8} and in the last inequality we use Lemma \ref{9}.
\end{proof}
\begin{lm} \label{11}\begin{eqnarray*}\sum_{1\leq i_1< \cdots <i_n\leq N}c^n_1c^{2n}_{N,1}N^2\prod^n_{k=1}(i_k-i_{k-1})^{-1/2}\leq N^2\left(c_1c^2_{N,1}N^{1/2}\right)^n\end{eqnarray*}\end{lm}
\begin{proof} It follows from Lemma \ref{5}.
\end{proof}
We conclude by Lemma \ref{3} that Proposition \ref{d12M} ii) holds.
\subsection*{Section 3.3}
In this subsection, we are going to use Proposition \ref{d12M} to show Theorem \ref{MTHM} for $d=1$. We do so with a series of lemmas.
\begin{lm}\label{12}\begin{eqnarray*} E_Q\left((Z(N)-1)^2\right)\leq \sum^N_{n=1}\left(c_1c^2_{N,1}N^{1/2}\right)^n\end{eqnarray*}\end{lm}
\begin{proof}\begin{eqnarray*}E_Q\left((Z(N)-1)^2\right)&=&E_Q\left(Z^2(N)-2Z(N)+1\right)\\&=&E_Q(Z^2(N))-2E_Q(Z(N))+1\\&=&E_Q(Z^2(N))-2+1\\&=&E_Q(Z^2(N))-1 \\&=&\sum^N_{n=1}\sum_{1\leq i_1< \cdots <i_n\leq N}c^{2n}_{N,1}\sum_{x_1,\ldots, x_n}\prod^n_{k=1}p^2_0(i_k-i_{k-1}, x_k-x_{k-1})\\&\leq& \sum^N_{n=1}\left(c_1c^2_{N,1}N^{1/2}\right)^n\end{eqnarray*} where in the third equality we use $E_Q(Z(N))=1$ because the $h(n,x)$ have mean $0$, in the fifth equality we use that the $0$-th term in the second moment expansion of $Z(N)$ is $1$ (see Lemma \ref{2}) and the last inequality follows from Proposition \ref{d12M} i).\end{proof}
\begin{lm}\label{13}\begin{eqnarray*}E_Q\left((K(N)-N)^2\right)\leq N^2\sum^N_{n=1}\left(c_1c^2_{N,1}N^{1/2}\right)^n\end{eqnarray*}\end{lm}
\begin{proof}\begin{eqnarray*}E_Q\left((K(N)-N)^2\right)&=&E_Q(K^2(N)-2K(N)N+N^2)\\&=&E_Q(K^2(N))-2NE_Q(K(N))+N^2
\\&=&E_Q(K^2(N))-2N^2+N^2\\&=&
E_Q(K^2(N))-N^2\\&=&\sum^N_{n=1}\sum_{1\leq i_1< \cdots <i_n\leq N}c^{2n}_{N,1}\sum_{x_1,\ldots, x_n} \prod^n_{k=1}p^2_0(i_k-i_{k-1}, x_k-x_{k-1})\\&\times&\left(\sum_x x^2p_0(N-i_n, x-x_n)\right)^2\\&\leq& N^2\sum^N_{n=1}\left(c_1c^2_{N,1}N^{1/2}\right)^n\end{eqnarray*}
where in the third equality we use $E_Q(K(N))=N$, because the $h(n,x)$ have mean zero and the second moment of the simple random walk of length $N$ in dimension $d=1$ is $N$; in the fifth equality we use that the $0$-th term in the second moment expansion of $K(N)$ is $N^2$ (see Lemma \ref{3}); and the last inequality follows from Proposition \ref{d12M} ii).\end{proof}
\begin{lm}\label{14} $Z(N)\rightarrow 1$ in probability as $N\rightarrow \infty$ \end{lm}
\begin{proof} For any $\epsilon>0$, by Chebyshev's inequality and using Lemma \ref{12}, we have
$$P(|Z(N)-1|>\epsilon)\leq \frac{E_Q((Z(N)-1)^2)}{\epsilon^2}\leq \frac{\sum^N_{n=1}\left(c_1c^2_{N,1}N^{1/2}\right)^n}{\epsilon^2}$$ Let $f(N)=c_1c^2_{N,1}N^{1/2}$. By the choice of $c_{N,1}$ (see (\ref{SLG})), $\lim_{N\rightarrow \infty}c^2_{N,1}N^{1/2}=0$, and in particular $\lim_{N\rightarrow \infty}f(N)=0$. Thus, given small $\delta>0$, there exists $K$ such that for $N\geq K$, $f(N)<\delta$ (note that $f(N)\geq 0$), and then $$S_N:=\sum^N_{n=1}f(N)^n< \sum^N_{n=1}\delta^n=\frac{\delta-\delta^{N+1}}{1-\delta}<\frac{\delta}{1-\delta}$$ Since $\delta>0$ is arbitrary, $S_N$ is arbitrarily small for large $N$, i.e. $S_N\rightarrow 0$ as $N\rightarrow \infty$. We conclude that $Z(N)\rightarrow 1$ in probability as $N\rightarrow \infty$.
\end{proof}
\begin{lm}\label{15}$\frac{1}{Z(N)}\rightarrow 1$ in probability as $N\rightarrow \infty$\end{lm}
\begin{proof} By Lemma \ref{14}, for $\epsilon>0$ small such that $1-\epsilon>0$ and given $\delta>0$, there exists $K$ such that for $N\geq K$, $P(|Z(N)-1|\leq \epsilon)=1-P(|Z(N)-1|>\epsilon)>1-\delta$. But \begin{eqnarray*}P(|Z(N)-1|\leq \epsilon)&=&P(1-\epsilon\leq Z(N)\leq 1+\epsilon)\\&=&P\left(\frac{1}{1+\epsilon}\leq \frac{1}{Z(N)}\leq \frac{1}{1-\epsilon}\right)\\&=&P\left(1-\epsilon''\leq \frac{1}{Z(N)}\leq 1+\epsilon'\right)\\&\leq&P\left(1-\hat{\epsilon}\leq \frac{1}{Z(N)}\leq 1+\hat{\epsilon}\right)\end{eqnarray*} where first equality holds because we assume $\epsilon>0$ is small such that $1-\epsilon>0$, and $\epsilon'=\frac{1}{1-\epsilon}-1>0$, $\epsilon''=1-\frac{1}{1+\epsilon}>0$, $\hat{\epsilon}=\max\{\epsilon',\epsilon''\}$, so $1-\delta\leq P\left(1-\hat{\epsilon}\leq \frac{1}{Z(N)}\leq 1+\hat{\epsilon}\right)$ and we have $P\left(|\frac{1}{Z(N)}-1|>\epsilon\right)\rightarrow 0$ for all $\epsilon>0$ small. But we note for $\epsilon'>\epsilon$, $P\left(|\frac{1}{Z(N)}-1|>\epsilon'\right)\leq P\left(|\frac{1}{Z(N)}-1|>\epsilon\right)$, so $P\left(|\frac{1}{Z(N)}-1|>\epsilon\right)\rightarrow 0$ holds for any $\epsilon>0$. Thus $\frac{1}{Z(N)}\rightarrow 1$ in probability as $N\rightarrow \infty$.
\end{proof}
\begin{lm}\label{16}$\frac{K(N)}{N}\rightarrow 1$ in probability as $N\rightarrow \infty$\end{lm}
\begin{proof} For any $\epsilon>0$, by Chebyshev's inequality and using Lemma \ref{13}, we have $$P\left(\left|\frac{K(N)}{N}-1\right|>\epsilon\right)=P(|K(N)-N|>N\epsilon)\leq \frac{E_Q((K(N)-N)^2)}{\epsilon^2N^2}\leq \frac{N^2\sum^{N}_{n=1}\left(c_1c^2_{N,1}N^{1/2}\right)^n}{\epsilon^2N^2}$$ As before, since $\lim_{N\rightarrow\infty}\sum^N_{n=1}\left(c_1c^2_{N,1}N^{1/2}\right)^n=0$, we see that $\frac{K(N)}{N}\rightarrow 1$ in probability as $N\rightarrow \infty$.
\end{proof}
Now we are ready to show Theorem \ref{MTHM} for dimension $d=1$.
Since multiplication preserves convergence in probability (if $X_n\rightarrow X$ and $Y_n\rightarrow Y$ in probability, then $X_n Y_n\rightarrow XY$ in probability), and recalling that the mean square displacement of the polymer is $\langle \omega(N)^2\rangle_{N,h}=\frac{K(N)}{Z(N)}$, we write $$\frac{\langle \omega(N)^2\rangle_{N,h}}{N}=\frac{K(N)}{N}\cdot \frac{1}{Z(N)}.$$ By Lemma \ref{15}, $\frac{1}{Z(N)}\rightarrow 1$ in probability, and by Lemma \ref{16}, $\frac{K(N)}{N}\rightarrow 1$ in probability; we conclude that $\frac{\langle \omega(N)^2\rangle_{N,h}}{N}\rightarrow 1$ in probability.
\section{Diffusivity of Rescaled Random Polymer in $d=2$}
In this section, we are going to show Proposition \ref{d22M} and Theorem \ref{MTHM} for dimension $d=2$.
\subsection*{Section 4.1}
In this subsection, we are going to show Proposition \ref{d22M} i) in a series of lemmas.
\begin{lm}\label{17} \begin{eqnarray*}\sum_{x_1,\ldots, x_n}\prod^n_{k=1}p^2_0(i_k-i_{k-1}, x_k-x_{k-1})\leq c_2^n\prod^n_{k=1}(i_k-i_{k-1})^{-1}\end{eqnarray*} for some constant $c_2$ that depends only on the dimension $d=2$\end{lm}
\begin{proof} For $d=2$, we see that (\ref{GLE}) is at most $c_2n^{-1}$ for some constant $c_2$. As in the proof of Lemma \ref{4}, using this uniform estimate for the $p_0(i_k-i_{k-1},x_k-x_{k-1})$'s, we have $$\sum_{x_1,\ldots, x_n}\prod^n_{k=1}p^2_0(i_k-i_{k-1}, x_k-x_{k-1})=\sum_{x_1}p^2_0(i_1,x_1)\cdots \sum_{x_n}p^2_0(i_n-i_{n-1},x_n-x_{n-1})$$ $$\leq c_2^ni_1^{-1}\cdots (i_n-i_{n-1})^{-1}\sum_{x_1}p_0(i_1,x_1)\cdots \sum_{x_n}p_0(i_n-i_{n-1},x_n-x_{n-1})=c_2^n\prod^n_{k=1}(i_k-i_{k-1})^{-1}$$ for some constant $c_2$ that depends only on the dimension $d=2$.\end{proof}
\begin{lm}\label{18} \begin{eqnarray*}\sum_{1\leq i_1< \cdots <i_n\leq N}c_2^nc^{2n}_{N,2}\prod^n_{k=1}(i_k-i_{k-1})^{-1}\leq \left(c_2c^2_{N,2} \log N\right)^n\end{eqnarray*}
\end{lm}
\begin{proof}
$$\sum_{1\leq i_1< \cdots <i_n\leq N}c_2^nc^{2n}_{N,2}\prod^n_{k=1}(i_k-i_{k-1})^{-1} $$ $$=c_2^nc^{2n}_{N,2}\sum^{N-(n-1)}_{i_1=1}\cdots \sum^{N-1}_{i_{n-1}=i_{n-2}+1}i_1^{-1}\cdots (i_{n-1}-i_{n-2})^{-1} \sum^N_{i_n=i_{n-1}+1}(i_n-i_{n-1})^{-1}$$ $$\leq c_2^nc_{N,2}^{2n}\sum^{N-(n-1)}_{i_1=1}\cdots \sum^{N-1}_{i_{n-1}=i_{n-2}+1}i_1^{-1}\cdots (i_{n-1}-i_{n-2})^{-1} 10\log N$$ The last inequality holds because, for $N\geq 2$, $\sum^{N}_{k=1}k^{-1}\leq 1+\int^N_1x^{-1}dx=1+\log N\leq 10\log N$. Continuing from above and arguing similarly to estimate each sum in the expression, we have
$$\leq c_2^nc^{2n}_{N,2}\log N\sum^{N-(n-1)}_{i_1=1}\cdots \sum^{N-1}_{i_{n-1}=i_{n-2}+1}i_1^{-1}\cdots (i_{n-1}-i_{n-2})^{-1}\leq \left(c_2c^{2}_{N,2}\log N\right)^n$$ where the constant $c_2$ will change from line to line (again it depends only on the dimension $d=2$).
\end{proof}
We conclude by Lemma \ref{2} that Proposition \ref{d22M} i) holds.
\subsection*{Section 4.2}In this subsection, we are going to show Proposition \ref{d22M} ii) in a series of lemmas.
By standard computations of the partial moments of the simple random walk of length $n$ in dimension $d=2$ using the characteristic function, for $x=(x_1,x_2)$ we have $$\sum_xx^2_1p_0(n,x)=\frac{n}{2};\ \ \sum_xx^4_1p_0(n,x)=\frac{3n^2-n}{4};\ \ \sum_{x}x^2_1x^2_2p_0(n,x)=\frac{n(n-1)}{4},$$ so the second and fourth moments are respectively $$\sum_x|x|^2p_0(n,x)=n;\ \ \ \sum_x|x|^4p_0(n,x)=2n^2-n.$$
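For example, the fourth moment displayed above follows from the partial moments by symmetry in the two coordinates: $$\sum_x|x|^4p_0(n,x)=2\sum_xx^4_1p_0(n,x)+2\sum_xx^2_1x^2_2p_0(n,x)=\frac{3n^2-n}{2}+\frac{n(n-1)}{2}=2n^2-n.$$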
\begin{lm}\label{19} \begin{eqnarray*}\sum_{x}|x|^2p_0(N-i_n,x-x_n)=(N-i_n)+|x_n|^2\end{eqnarray*}\end{lm}
\begin{proof} $$\sum_x |x|^2p_0(N-i_n,x-x_n)=\sum_{x-x_n}|x|^2p_0(N-i_n,x-x_n)=\sum_x|x+x_n|^2p_0(N-i_n,x)$$ $$=\sum_x|x|^2p_0(N-i_n,x)+\sum_x|x_n|^2p_0(N-i_n,x)=(N-i_n)+|x_n|^2$$ where the third equality holds because any odd partial moment of the simple random walk vanishes.\end{proof}
\begin{lm}\label{20}\begin{eqnarray*}\sum_{x_k}|x_k|^4p_0(i_k-i_{k-1},x_k-x_{k-1})=2(i_k-i_{k-1})^2-(i_k-i_{k-1})+|x_{k-1}|^4+4|x_{k-1}|^2(i_k-i_{k-1})
\end{eqnarray*}\end{lm}
\begin{proof}$$\sum_{x_k}|x_k|^4p_0(i_k-i_{k-1},x_k-x_{k-1}) $$ $$=\sum_{x_k-x_{k-1}}|x_k|^4p_0(i_k-i_{k-1},x_k-x_{k-1})=\sum_{x_k}|x_k+x_{k-1}|^4p_0(i_k-i_{k-1},x_k)
$$ $$=\sum_{x_k}|x_k|^4p_0(i_k-i_{k-1},x_k)+|x_{k-1}|^4\sum_{x_k}p_0(i_k-i_{k-1},x_k)+2|x_{k-1}|^2\sum_{x_k}|x_k|^2p_0(i_k-i_{k-1},x_k)
$$ $$+4x^2_{k-1,1}\sum_{x_k}x^2_{k,1}p_0(i_k-i_{k-1},x_k)+4x^2_{k-1,2}\sum_{x_k}x^2_{k,2}p_0(i_k-i_{k-1},x_k)$$
$$=2(i_k-i_{k-1})^2-(i_k-i_{k-1})+|x_{k-1}|^4+2|x_{k-1}|^2(i_k-i_{k-1})+4x^2_{k-1,1}\frac{i_k-i_{k-1}}{2}+4x^2_{k-1,2}\frac{i_k-i_{k-1}}{2}$$
$$=2(i_k-i_{k-1})^2-(i_k-i_{k-1})+|x_{k-1}|^4+4|x_{k-1}|^2(i_k-i_{k-1})$$ \end{proof}
\begin{lm}\label{21}
\begin{eqnarray*}\sum_{x_1,\ldots, x_n}\prod^n_{k=1}p_0(i_k-i_{k-1},x_k-x_{k-1})((N-i_n)^2+2(N-i_n)|x_n|^2+|x_n|^4)\end{eqnarray*}
$$ =(N-i_n)^2+2(N-i_n)i_n+2\sum^n_{k=1}(i_k-i_{k-1})^2-i_n+4\sum^n_{k=1}(i_k-i_{k-1})i_{k-1}$$\end{lm}
\begin{proof} It follows by induction on $n$, as in Lemma \ref{8}.\end{proof}
\begin{lm}\label{22} \begin{eqnarray*}\sum_{x_1,\ldots, x_n} \prod^n_{k=1}p^2_0(i_k-i_{k-1}, x_k-x_{k-1})\left(\sum_x |x|^2p_0(N-i_n, x-x_n)\right)^2\leq c^n_2N^2\prod^n_{k=1}(i_k-i_{k-1})^{-1}\end{eqnarray*}\end{lm}
\begin{proof} Using the uniform estimate (\ref{GLE}) for $d=2$ on the transition probability as in the proof of Lemma \ref{17} we get $$\sum_{x_1,\ldots, x_n} \prod^n_{k=1}p^2_0(i_k-i_{k-1}, x_k-x_{k-1})\left(\sum_x |x|^2p_0(N-i_n, x-x_n)\right)^2 $$ $$\leq c_2^ni_1^{-1}(i_2-i_1)^{-1}\cdots (i_n-i_{n-1})^{-1}\sum_{x_1,\ldots, x_n} \prod^n_{k=1}p_0(i_k-i_{k-1}, x_k-x_{k-1})\left((N-i_n)+|x_n|^2\right)^2$$ $$= c_2^ni_1^{-1}(i_2-i_1)^{-1}\cdots (i_n-i_{n-1})^{-1}\sum_{x_1,\ldots, x_n} \prod^n_{k=1}p_0(i_k-i_{k-1}, x_k-x_{k-1})$$ $$\times \left((N-i_n)^2+2(N-i_n)|x_n|^2+|x_n|^4\right)$$ $$=c_2^ni_1^{-1}(i_2-i_1)^{-1}\cdots (i_n-i_{n-1})^{-1}
$$ $$\times \left((N-i_n)^2+2(N-i_n)i_n+2\sum^n_{k=1}(i_k-i_{k-1})^2-i_n+4\sum^n_{k=1}(i_k-i_{k-1})i_{k-1}\right)$$ $$\leq c_2^nN^2\prod^n_{k=1}(i_k-i_{k-1})^{-1}$$ where in the first inequality we also use Lemma \ref{19} to compute the second moment, in the second equality we use Lemma \ref{21}, and in the last inequality we use an estimate similar to that in Lemma \ref{9}.
\end{proof}
\begin{lm}\label{23}\begin{eqnarray*}\sum_{1\leq i_1< \cdots <i_n\leq N}c^n_2c^{2n}_{N,2}N^2\prod^n_{k=1}(i_k-i_{k-1})^{-1}\leq N^2\left(c_2c^2_{N,2}\log N\right)^n\end{eqnarray*}\end{lm}
\begin{proof} It follows from Lemma \ref{18}.
\end{proof}
We conclude by Lemma \ref{3} that Proposition \ref{d22M} ii) holds.
\subsection*{Section 4.3}
In this subsection, we are going to show Theorem \ref{MTHM} for $d=2$.
Clearly, Lemmas \ref{12} and \ref{13} hold for dimension $d=2$ with $c_2c^2_{N,2}\log N$ instead of $c_1c^2_{N,1}N^{1/2}$; precisely, we have $$
E_Q\left((Z(N)-1)^2\right)\leq \sum^N_{n=1}\left(c_2c^2_{N,2}\log N\right)^n;\ \ E_Q\left((K(N)-N)^2\right)\leq N^2\sum^N_{n=1}\left(c_2c^2_{N,2}\log N\right)^n$$
By the choice of $c_{N,2}$ (see (\ref{SLG})), as in the proof of Lemma \ref{14} we have $$\lim_{N\rightarrow \infty}\sum^N_{n=1}\left(c_2c^2_{N,2}\log N\right)^n=0,$$ and thus Lemmas \ref{14}, \ref{15} and \ref{16} hold with suitable changes. We conclude that for dimension $d=2$, $\frac{\langle \omega(N)^2\rangle_{N,h}}{N}\rightarrow 1$ in probability.
\section{Other Results}
In this section, we are going to show other results in the diffusive regime.
\begin{te}\label{28} With rescaling of the polymer density by $c_{N,d}$ for $d=1,2$, there exist normalizing constants $a_{N,d}$ such that $$a_{N,d}\left(Z(N)-1\right)\Rightarrow \xi$$ where $\xi$ is some Gaussian random variable. \end{te}
First, we have the following lemma.
\begin{lm}\label{25} For $x_1,\ldots, x_n\in \mathbb{Z}^d$, \begin{eqnarray*}\sum_{x_1,\ldots, x_n}\prod^n_{k=1}p^2_0(i_k-i_{k-1}, x_k-x_{k-1})=\prod^n_{k=1}p_0(2(i_k-i_{k-1}),0)\end{eqnarray*}\end{lm}
\begin{proof} Note that the transition probability $p_0(n,x)$ of the simple random walk starting at $0$ and ending at $x$ at time $n$ equals, by spatial homogeneity, the transition probability $p_x(n,2x)$ of the simple random walk starting at $x$ and ending at $2x$ at time $n$. Furthermore, by reflecting each step the walk takes to go from $x$ to $2x$ (for example, in dimension $d=2$, if the original walk goes up then the reflected walk goes down, and if the original walk goes right then the reflected walk goes left), we get a reflected walk starting at $x$ and ending at $0$ at time $n$ with transition probability $p_x(n,0)$, so that $$p_0(n,x)=p_x(n,2x)=p_x(n,0)$$ Using the Chapman-Kolmogorov equation for the simple random walk, we have $$\sum_xp^2_0(n,x)=\sum_xp_0(n,x)p_0(n,x)=\sum_xp_0(n,x)p_x(n,0)=p_0(2n,0)$$ We conclude that $$\sum_{x_1,\ldots, x_n}\prod^n_{k=1}p^2_0(i_k-i_{k-1}, x_k-x_{k-1}) $$ $$=\sum_{x_1}p^2_0(i_1,x_1)\cdots\sum_{x_{n-1}}p^2_0(i_{n-1}-i_{n-2},x_{n-1}-x_{n-2})
\sum_{x_n}p^2_0(i_n-i_{n-1},x_n-x_{n-1})$$ $$=\sum_{x_1}p^2_0(i_1,x_1)\cdots\sum_{x_{n-1}}p^2_0(i_{n-1}-i_{n-2},x_{n-1}-x_{n-2})\sum_{x_n-x_{n-1}}p^2_0(i_n-i_{n-1},x_n-x_{n-1})$$
$$=\sum_{x_1}p^2_0(i_1,x_1)\cdots\sum_{x_{n-1}}p^2_0(i_{n-1}-i_{n-2},x_{n-1}-x_{n-2})\sum_{x_n}p^2_0(i_n-i_{n-1},x_n)$$ $$=\sum_{x_1}p^2_0(i_1,x_1)\cdots\sum_{x_{n-1}}p^2_0(i_{n-1}-i_{n-2},x_{n-1}-x_{n-2})p_0(2(i_n-i_{n-1}),0)$$
$$=\prod^n_{k=1}p_0(2(i_k-i_{k-1}),0)$$
\end{proof}
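As a quick consistency check of Lemma \ref{25} in the simplest case $n=1$, $i_1=1$: $\sum_{x_1}p^2_0(1,x_1)=2d\cdot(2d)^{-2}=(2d)^{-1}=p_0(2,0)$, since the walk is back at the origin at time $2$ precisely when its second step reverses its first.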
To show Theorem \ref{28}, we write the partition function as a sum of two parts.
\begin{lm}\label{29} $Z(N)-1=\sum^N_{k=1}f_k+R_N$ \end{lm}
\begin{proof} \begin{eqnarray*}Z(N)-1&=&g_1+\sum^N_{n=2}g_n\\&=&
\int \sum^N_{k=1}c_{N,d}h(k,\omega(k))dP^N_0(\omega)+\sum^N_{n=2}g_n
\\&=& \sum^N_{k=1}\sum_xc_{N,d}h(k,x)p_0(k,x)+R_N\\&=&\sum^N_{k=1}f_k+R_N\end{eqnarray*}
where the $g_n$ are as in Lemma \ref{2}, $f_k=c_{N,d}\sum_xh(k,x)p_0(k,x)$, and $R_N=\sum^N_{n=2}g_n$.
\end{proof}
By the following two propositions, Theorem \ref{28} follows since if $X_n\Rightarrow X$, $Y_n\Rightarrow a$ where $a$ is a constant, then $X_n+Y_n\Rightarrow X+a$.
\begin{pr}\label{30} $\sum^N_{k=1}a_{N,d}f_k\Rightarrow \xi$ where $\xi$ is some Gaussian random variable.\end{pr}
\begin{proof} By definition, $f_k=c_{N,d}\sum_x h(k,x)p_0(k,x)$. If we let $X_{k,N}=a_{N,d}f_k$, then to prove the proposition it suffices to check that the conditions of the Lindeberg-Feller theorem are satisfied, i.e. $\sum^N_{k=1} E_QX^2_{k,N}\rightarrow c$ for some $c>0$, and for all $\epsilon>0$, $\lim_N\sum^N_{k=1}E_Q\left(X^2_{k,N};\ |X_{k,N}|>\epsilon\right)=0$. By direct computation, we find \begin{eqnarray*}E_QX^2_{k,N}&=&a^2_{N,d}c^2_{N,d}
E_Q\left(\sum_x h^2(k,x)p^2_0(k,x)+\sum_{x\neq x'}h(k,x)h(k,x')p_0(k,x)p_0(k,x')\right)\\&=&a^2_{N,d}c^2_{N,d}\sum_xp^2_0(k,x)\\&=&a^2_{N,d}c^2_{N,d}p_0(2k,0)\end{eqnarray*} where the last equality follows from Lemma \ref{25}. Using the estimate (\ref{GLE}) of the transition probability $p_0(n,x)$, in $d=1$ we have $p_0(2k,0)=\pi^{-1/2}k^{-1/2}+r_{2k}(0)$ with $|r_{2k}(0)|\leq c_1k^{-3/2}$, and in $d=2$ we have $p_0(2k,0)=\pi^{-1}k^{-1}+r_{2k}(0)$ with $|r_{2k}(0)|\leq c_2k^{-2}$. Hence in $d=2$, $\sum^N_{k=1}E_QX^2_{k,N}=a^2_{N,2}c^2_{N,2}\sum^N_{k=1}p_0(2k,0)=a^2_{N,2}c^2_{N,2}\sum^N_{k=1}\left(\pi^{-1}k^{-1}+r_{2k}(0)\right)$. If we take $a_{N,2}=\left(c^2_{N,2}\log N\right)^{-1/2}$, then $$a^2_{N,2}c^2_{N,2}\sum^N_{k=1}\left(\pi^{-1}k^{-1}+r_{2k}(0)\right)\rightarrow \pi^{-1}$$
(because $|r_{2k}(0)|\leq c_2k^{-2}$ and $1-(N+1)^{-1}\leq \sum^N_{k=1}k^{-2}\leq 2-N^{-1}$, so $a^2_{N,2}c^2_{N,2}\sum^{N}_{k=1}|r_{2k}(0)|\rightarrow 0$).
Next, for given $\epsilon>0$, we also find \begin{eqnarray*} E_Q\left(X^2_{k,N};\ |X_{k,N}|\geq \epsilon\right)&=&E_Q\left(a^2_{N,2}f^2_k;\ a_{N,2}|f_k|>\epsilon\right)\\&\leq & a^2_{N,2}c^2_{N,2}E_Q\left(\sum_xp^2_0(k,x)1_{a_{N,2}|f_k|>\epsilon}\right)\\&+&a^2_{N,2}c^2_{N,2}E_Q\left(
\sum_{x\neq x'}|h(k,x)h(k,x')p_0(k,x)p_0(k,x')|1_{a_{N,2}|f_k|>\epsilon}\right)\\&=&
a^2_{N,2}c^2_{N,2}Q(a_{N,2}|f_k|>\epsilon)\left(p_0(2k,0)+\sum_{x\neq x'}p_0(k,x)p_0(k,x')\right)
\end{eqnarray*}
But $Q(a_{N,2}|f_k|>\epsilon)=Q(|\sum_xh(k,x)p_0(k,x)|>a^{-1}_{N,2}c^{-1}_{N,2}\epsilon)\leq Q(\sum_x|h(k,x)|p_0(k,x)>a^{-1}_{N,2}c^{-1}_{N,2}\epsilon)=Q(1>a^{-1}_{N,2}c^{-1}_{N,2}\epsilon)=Q(1>(\log N)^{1/2}\epsilon)=0$ for $N$ large. Thus $$\lim_N\sum^N_{k=1}E_Q\left(X^2_{k,N};\ |X_{k,N}|>\epsilon\right)=0$$
Similarly, one checks that the conditions of the Lindeberg-Feller theorem are satisfied for $d=1$ if we take $a_{N,1}=(c^2_{N,1}N^{1/2})^{-1/2}$.
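(A short verification of this choice: $\sum^N_{k=1}E_QX^2_{k,N}=a^2_{N,1}c^2_{N,1}\sum^N_{k=1}p_0(2k,0)=N^{-1/2}\sum^N_{k=1}\left(\pi^{-1/2}k^{-1/2}+r_{2k}(0)\right)\rightarrow 2\pi^{-1/2}$, since $\sum^N_{k=1}k^{-1/2}=2N^{1/2}+O(1)$ and $\sum^N_{k=1}|r_{2k}(0)|\leq c_1\sum^N_{k=1}k^{-3/2}$ is bounded.)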
\end{proof}
\begin{pr}\label{31} $a_{N,d}R_N\Rightarrow 0$ \end{pr}
\begin{proof}Note that $a_{N,d}R_N\Rightarrow 0$ if and only if $a_{N,d}R_N\rightarrow 0$ in probability, and $a_{N,d}R_N\rightarrow 0$ in probability if $E_Q\left((a_{N,d}R_N)^2\right)\rightarrow 0$, by Chebyshev's inequality.
We show the proposition for $d=2$, and it is similar for $d=1$.
By definition, $R_N=\sum^N_{n=2}g_n$, so $E_Q\left((a_{N,d}R_N)^2\right)=a^2_{N,d}E_Q\left(\sum^N_{n=2}g^2_n+\sum_{n\neq m} g_ng_m\right)$. From Lemma \ref{2}, we know that for $n\neq m$, $E_Qg_ng_m=0$, and $E_Qg^2_n=\sum_{1\leq i_1<\cdots<i_n\leq N}c^{2n}_{N,2}\prod^n_{k=1}p_0(2(i_k-i_{k-1}),0)\leq \left(c_2c^2_{N,2}\log N\right)^n$. Recall that $a_{N,2}=\left(c^2_{N,2}\log N\right)^{-1/2}$ for $d=2$, so $$a^2_{N,2}E_Q\sum^N_{n=2}g^2_n\leq\sum^N_{n=2}
a^2_{N,2}(c_2c^2_{N,2}\log N)^n=\sum^N_{n=2}c_2(c_2c^2_{N,2}\log N)^{n-1}\rightarrow 0$$
\end{proof}
\end{document}
\begin{document}
\title{Small 3-manifolds of large genus}
\author{Ian Agol}
\address{Department of Mathematics, University of Illinois at
Chicago, 322 SEO m/c 249, 851 S. Morgan Street, Chicago, IL
60607-7045}
\email{[email protected]}
\thanks{Partially supported by ARC grant 420998}
\subjclass{Primary 57M50; Secondary 57M25} \keywords{Heegaard
genus, 3-manifold, Haken}
\begin{abstract}
We prove the existence of pure braids with arbitrarily many
strands which are small, {\it i.e.} they contain no closed
incompressible surface in the complement which is not boundary
parallel. This implies the existence of irreducible non-Haken
3-manifolds of arbitrarily high Heegaard genus.
\end{abstract}
\date{\today}
\maketitle
\section{Introduction}
Haken introduced the notion of irreducible 3-manifolds containing
an incompressible surface, called sufficiently large, or Haken
\cite{Hak68}. These were thrust into prominence by theorems of
Waldhausen, who (among other things) showed that the homeomorphism
and word problems are solvable for these manifolds \cite{Wal:78},
and by Thurston who showed that they satisfy the geometrization
conjecture \cite{Th:82}. It is therefore interesting to understand
how prevalent non-Haken 3-manifolds are. Jaco and Shalen showed
that a Seifert fibred space $M$ is non-Haken if and only if it is
atoroidal with $H_1(M)$ finite \cite{JS79}. Thurston showed that
all but finitely many Dehn fillings on the figure eight knot
complement are non-Haken \cite{Th}. Hatcher and Thurston extended
this result to all 2-bridge knot exteriors \cite{HT85}. These
examples all have Heegaard genus 2. By a result of Hatcher, if a
link is small, {\it i.e.} is irreducible and contains no closed
incompressible surface other than boundary tori, then Dehn filling
on any boundary component yields a small manifold for all but
finitely many fillings \cite{Ha82}. Floyd and Hatcher \cite{FH82}
and independently Culler, Jaco, and Rubinstein \cite{CJR82} showed
that punctured torus bundles and 4-punctured sphere bundles with
irreducible monodromy are small. Oertel proved that Montesinos
knots covered by small Seifert fibred spaces are small \cite{O84}.
It is believed that the Seifert-Weber dodecahedral space is small.
Several other examples of small manifolds are known, {\it e.g.}
the Borromean rings (Lozano), some chain links with $\leq 5$
components (Oertel), and some other examples of Dunfield
\cite{D99}, Hass and Menasco \cite{HM93}, and Lopez
\cite{Lo92,Lo93}.
A result of Moriah and Rubinstein shows that the Heegaard genus of
infinitely many Dehn fillings on a link complement is the same as
that of the link complement \cite{MR97}. Since most punctured
torus bundles have Heegaard genus 3 \cite{CS99}, there are
infinitely many closed small 3-manifolds of genus 3. This appears
to be the largest known genus of a small 3-manifold in the
literature. Alan Reid asked whether there are small links of
arbitrarily many components, observing that this would imply the
existence of irreducible non-Haken 3-manifolds of arbitrarily
large Heegaard genus, as we show in theorem \ref{small}. We prove
in theorem \ref{smallbraids} the existence of small links of
arbitrarily many components, answering Reid's question.
\begin{acknowledgments}
We thank Alan Reid, Hyam Rubinstein, Saul Schleimer, and Bill
Thurston for contributing ideas and suggestions which led to the
results in this paper. The ideas in the paper were partly inspired
by a talk of Dan Margalit at UIC on the pants complex
\cite{Mar02}, and by the work of Jeff Brock \cite{Bro01}.
\end{acknowledgments}
\section{Pants decompositions}
The pants complex of a surface was introduced by Hatcher, Lochak,
and Schneps \cite{HLS00}, and we will follow their conventions and
terminology. We will just be interested in the pants graph. Let
$\Sigma$ be a connected compact orientable surface with
$\chi(\Sigma)<0$. We say $\Sigma$ has type $(g,n)$ if it has
genus $g$ and $n$ boundary components. By a {\bf pants
decomposition} of $\Sigma$, we mean a finite collection $P$ of
disjoint smoothly embedded circles cutting $\Sigma$ into pieces
which are surfaces of type $(0,3)$. The number of curves in a
pants decomposition is $3g-3+n$, and the number of pants is
$-\chi(\Sigma)$. Let $P$ be a pants decomposition, and suppose
that one of the circles $\beta$ of $P$ is such that deleting
$\beta$ from $P$ produces a complementary component of type
$(1,1)$. This is equivalent to saying there is a circle $\gamma$ in
$\Sigma$ which intersects $\beta$ in one point transversely and is
disjoint from all the other circles in $P$. In this case,
replacing $\beta$ by $\gamma$ in $P$ produces a new pants decomposition
$P'$ (fig. \ref{Amove}).
We call this replacement a {\bf simple move}, or
$S$-move. In a similar fashion, if $P$ contains a circle $\beta$ such
that deleting $\beta$ from $P$ produces a complementary component of
type $(0,4)$, then we obtain a new pants decomposition $P'$ by
replacing $\beta$ with a curve $\gamma$ intersecting $\beta$ transversely in
two points and disjoint from the other curves of $P$ (fig.
\ref{Amove}). The transformation $P\rightarrow P'$ in this case is
called an {\bf associativity move}, or $A$-move.
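For instance, for a surface of type $(0,n)$ with $n\geq 4$ (the case used for the small braids below), a pants decomposition consists of $n-3$ circles cutting the surface into $n-2$ pants; moreover, since a genus zero surface contains no complementary component of type $(1,1)$, every elementary move between such decompositions is an $A$-move.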
\begin{figure}[htb]
\begin{center}
\subfigure{\epsfig{figure=Amove.ps,width=.45\textwidth}}\qquad
\subfigure{\epsfig{figure=Smove.ps,width=.45\textwidth}}
\caption{An $A$-move and $S$-move \label{Amove}}
\end{center}
\end{figure}
\begin{definition}
The {\bf pants decomposition graph} $\mathcal{P}(\Sigma)^{(1)}$ is
the graph having vertices the isotopy classes of pants
decompositions of $\Sigma$, with an edge joining two vertices whenever
the corresponding pants decompositions differ by a single $A$- or
$S$-move. This is the one-skeleton of the pants decomposition
complex, which was proven to be connected in \cite{HLS00}.
\end{definition}
{\bf Main Construction:} Suppose we have a homeomorphism
$\psi:\Sigma\longrightarrow \Sigma$, with mapping torus $T_{\psi}=(\Sigma\times
I)/\{(x,0)\sim (\psi(x),1)\}$. Given a path
$C=P_0-P_1-\cdots-P_{m-1}-P_m \subset \mathcal {P}(\Sigma)^{(1)}$,
$P_m=\psi(P_0)$, we get a sequence of circles $\beta_1,...,\beta_m$,
where $\beta_{i+1}$ is the circle in $P_{i+1}$ replacing a circle in
$P_i$ ($i$ is taken $\pmod{m}$). We assume that there is no circle
$\beta_j$ which is contained in all of the pants decompositions $P_i$, $1\leq
i\leq m$. For each circle $\beta_i$, drill out a curve
$B_i=\beta_i\times \{\frac{i}{m}\}$ in $T_\psi$. For each
complementary region of $P_i$ with boundary curves
$\beta_i,\beta_j,\beta_k$ (where $i,j,k$ might not be distinct), there is
an embedded pants in $T_{\psi}$ with boundary on the curves
$B_i\cup B_j\cup B_k$ and interior disjoint from $\cup_i B_i$, and
moreover, all of these pants may be chosen to have disjoint
interiors (we pull apart the pants lying in $\Sigma$ like an
accordion). For an $A$- or $S$-move $P_i-P_{i+1}$, there is a
complementary region bounded by two pants for an $S$-move, and
four pants for an $A$-move, which we call an $A$- or $S$-region,
respectively. Consider the link complement $M_C=T_{\psi}\backslash
\mathcal{N}(\cup_i B_i)$. Then $T_{\psi}$ is obtained from $M_C$ by Dehn
filling on the boundary components corresponding to the link
$\cup_i B_i$. $M_C$ is decomposed along pants into $A$- or
$S$-regions.
\begin{definition}
A surface $(S,\partial S) \subset (M_C,\partial M_C)$ (where if
$C=\emptyset$, then $M_C=T_\psi$) is {\bf pairwise incompressible}
if
\begin{enumerate}
\item
each component of $\partial S$ is either parallel to a component
of $\partial \Sigma$ in $\partial T_\psi$ or to a longitude of a
component of $\partial \mathcal{N}(B_i)$,
\item
if there is an annulus $A\subset M_C$ such that $\mathrm{int}\, A\cap
S=\emptyset$, one boundary component of $A$ is in $\mathrm{int}\, S$ and the
other is in $\partial M_C$ parallel to a component of $\partial
\Sigma$ or a longitude of $\partial \mathcal{N}(B_i)$, then $A$ is isotopic
into $S$.
\end{enumerate}
\end{definition}
\begin{lemma} \label{tube}
$\mathrm{int}\, M_C$ has a complete hyperbolic metric of finite volume. Any
pairwise incompressible surface in $M_C$ is a disjoint union of
pants. Any pairwise incompressible surface in $T_\psi$ isotopic
into $M_C$ is obtained by tubing pants in $M_C$.
\end{lemma}
\begin{proof}
The $A$- and $S$-regions may be regarded as pared manifolds
\cite{Mo}, with pared locus consisting of annuli in the boundary
which are regular neighborhoods of the curves from the pants
decomposition. Each $A$-region may be decomposed into two ideal
octahedra (fig. \ref{Aregion}), and each $S$-region is decomposed
into one ideal octahedron (fig. \ref{Sregion}).
\begin{figure}[htb]
\begin{center}
\epsfxsize= \textwidth
\epsfbox{Aregion2.ps}
\caption{An $A$-region from 2 octahedra \label{Aregion}}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\epsfxsize= \textwidth
\epsfbox{Sregion.ps}
\caption{An $S$-region from one octahedron \label{Sregion}}
\end{center}
\end{figure}
Thus, an $A$-region is obtained by doubling a checkerboard colored
ideal octahedron along the dark faces. An $S$-region is obtained
by folding pairs of dark faces of a checkerboard colored ideal
octahedron together along common ideal vertices. This gives each
$A$- and $S$-region a hyperbolic metric with totally geodesic
boundary, and rank one cusps along the pared locus. We may glue
these pieces together along geodesic 3-punctured spheres to get a
hyperbolic structure on $\mathrm{int}\, M_C$.
Suppose that we have an incompressible surface $(S,\partial
S)\subset (M_C,\partial T_\psi)$, such that $S$ is pairwise
incompressible in $T_\psi$. We may assume that it intersects each
pants in essential simple closed curves. Since closed curves in
pants are boundary parallel, we may do surgery along annuli, to
get a surface $S'$ which is disjoint from the pants (fig.
\ref{surger}). Since $M_C$ is acylindrical, we may assume that
none of the surgeries produce annuli.
\begin{figure}[htb]
\begin{center}
\epsfxsize= 2 in
\epsfbox{annularsurgery.ps}
\caption{Surgering $S$ along an annulus $A$ to obtain $S'$ \label{surger}}
\end{center}
\end{figure}
Thus, each component of the surface $S'$ lies in an $A$- or
$S$-region. Since an ideal octahedron is the same as a truncated
tetrahedron with its edges drilled out (where dark faces of the
octahedron correspond to faces of the tetrahedron), we may assume
that the surface intersects each octahedron of the decomposition
of $A$- and $S$-regions normally. A surface is normal if and only
if it is incompressible in the complement of the 1-skeleton
\cite{Tho}; since there is no one-skeleton here, the surface must be
normal. In the case of an $A$-region, the surface must be a double
of a quadrilateral or a triangle. The doubles of triangles give
the pants surfaces, and the doubles of the quadrilaterals have
annular compressions to the pared locus, that is they are obtained
by tubing together pairs of pants. A similar property holds for
the $S$-regions. Thus, we may obtain every pairwise incompressible
surface by tubing together pants.
\end{proof}
\begin{corollary}
$\mbox{\rm{Vol}}(T_\psi)\leq V_{oct}(2A+S)$, where $\mbox{\rm{Vol}}$ denotes the volume
of the complete hyperbolic metric, $V_{oct}$ denotes the volume of
a regular ideal octahedron, $A$ is the number of $A$-moves in the
path $C$, and $S$ denotes the number of $S$-moves.
\end{corollary}
\begin{proof}
Each $A$-region of $M_C$ contributes $2 V_{oct}$, and each
$S$-region contributes $V_{oct}$. Then by a theorem of Thurston
\cite{Th}, $\mbox{\rm{Vol}}(T_\psi)< \mbox{\rm{Vol}}(M_C)=V_{oct}(2A+S)$.
\end{proof}
{\bf Remark:} This gives a tight upper bound for a theorem of
Brock \cite{Bro01} relating volumes of mapping tori to pants
distance of the monodromy.
\section{Small links}
\begin{theorem} \label{smallbraids}
There exist pure braids in $S^2\times S^1$ with arbitrarily many
components and no closed incompressible surface in the complement.
\end{theorem}
\begin{theorem} \label{small}
There are small 3-manifolds of arbitrarily large Heegaard genus.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{small}]
The following argument was observed by Alan Reid. By theorem
\ref{smallbraids}, there is a small link with $n$ components, for
$n$ arbitrarily large. A link complement with $n$ boundary
components has Heegaard genus $g\geq n/2$, since there must be
$\geq n/2$ boundary components lying to one side of a Heegaard
surface, and therefore the genus of the Heegaard surface is $\geq
n/2$. A result of Hatcher \cite{Ha82} shows that there are
finitely many Lagrangian subspaces of the space of measured
laminations on the boundary which consist of laminations which are
the boundary of incompressible measured laminations. Moriah and
Rubinstein \cite{MR97} prove that for each boundary component,
there are finitely many points and lines in Dehn filling space of
the component, such that if a Dehn filling avoids these, then the
Heegaard genus of the resulting manifold is the same as the
Heegaard genus of the link complement. Combining these two
theorems, we see that there are infinitely many Dehn fillings on
the link which are small with genus $\geq n/2$, by doing Dehn
fillings which avoid the slopes given by Moriah and Rubinstein's
theorem and avoiding the boundary slopes of surfaces given by
Hatcher's theorem.
\end{proof}
\begin{proof}[Proof of Theorem \ref{smallbraids}]
Let $\Sigma_n$ be the $n$-punctured sphere, {\it i.e.} a surface
of type $(0,n)$. We construct a closed path $C$ in the pants graph
$\mathcal{P}(\Sigma_n)^{(1)}$ and apply the Main Construction to
obtain a sequence of loops in $\Sigma_n\times S^1$. Then we do
very high Dehn twists about these loops to force any
incompressible surface in the resulting braid to be isotopic into
the complement of these loops (using Hatcher's theorem). By lemma
\ref{tube}, any pairwise incompressible surface is obtained by
tubing together pants. Then we analyze the ways of tubing pants
together, and show that the resulting surfaces are always
compressible in the braid complement obtained by any Dehn twists
about these loops. Bill Thurston suggested considering manifolds
which fiber over $S^1$, Saul Schleimer suggested drilling out
horizontal curves in a braid to try to find small braids by large
Dehn twists, and Hyam Rubinstein suggested using links whose
complements decompose along pants.
The first observation is that if there is a closed incompressible
surface in $T_\psi$, then there is a pairwise incompressible
surface, obtained by doing all possible compressions along
compressing annuli.
We choose a closed path in the pants graph
$\mathcal{P}(\Sigma_n)^{(1)}$, such that each pants is a twice
punctured disk or a punctured annulus, when thought of as lying in
$\Sigma_n\times S^1\subset S^2\times S^1$. If this is true, then
after tubing along the horizontal loops, the surface becomes a
punctured sphere, torus, or Klein bottle. If there is a
2-punctured disk, then the surface must be a punctured sphere. The
only essential punctured sphere in a braid complement is the fiber
surface. To see this, we take a cyclic cover of $S^2\times S^1$,
and lift the punctured $S^2$ to this cover, so that it lies
between two fiber surfaces. But the only incompressible surfaces
in the complement of the fiber are copies of the fiber. If copies
of the fiber are tubed together, then the resulting surface is
compressible. Thus, we may assume that the only pants that occur
are punctured annuli.
Now we construct an explicit closed path in
$\mathcal{P}(\Sigma_n)^{(1)}$. This path seems to be the simplest to
construct with the property that each pants in the path is a
punctured annulus or 2-punctured disk. Consider the punctures of
$\Sigma_n$ as lying in a great circle $\gamma$ on $S^2$. Cyclically
label the gaps between punctures along $\gamma$ by $1,\ldots,n$. For
every pair of gaps $(i,j)$, there is a loop $\beta_{i,j}$ in $S^2$
which meets $\gamma$ precisely twice at the gaps $i$ and $j$. We
form a path $C \subset \mathcal{P}(\Sigma_n)^{(1)}$. The initial pants
decomposition is given by $P_0=\{\beta_{1,3},\beta_{1,4},...,
\beta_{1,n-1}\}$. We may take another pants decomposition
$P_{n-3}=\{\beta_{2,4},\beta_{2,5}, ..., \beta_{2,n}\}$, which is obtained
from $P_0$ by shifting each index by 1. There is a path of pants
decompositions $C_0=P_0-P_1-\ldots-P_{n-3}$, where $P_k$ is
obtained by replacing the first $k$ loops of $P_0$ with the first
$k$ loops of $P_{n-3}$, $k=1,\ldots,n-3$ (fig. \ref{pantspath7}).
More generally, let
$P_{j(n-3)}=\{\beta_{j+1,j+3},\beta_{j+1,j+4},\ldots,\beta_{j+1,j+n-1}\}$,
where indices are taken $\pmod{n}$. For $1\leq k\leq n-3$, let
$P_{j(n-3)+k}$ be obtained by replacing the first $k$ loops of
$P_{j(n-3)}$ with the first $k$ loops of $P_{(j+1)(n-3)}$. Then we
have a closed path $C=P_0-P_1-\cdots-P_{n(n-3)}$ (fig.
\ref{pantspath5}).
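We note in passing (this observation is not needed in the proof) that, since $\Sigma_n$ has genus zero, no complementary piece of type $(1,1)$ can occur, so every move in $C$ is an $A$-move; thus $C$ consists of $n(n-3)$ $A$-moves, and the octahedral decomposition from the proof of lemma \ref{tube} gives $\mbox{\rm{Vol}}(M_C)=2V_{oct}\,n(n-3)$ for the link complement $M_C$ constructed from $C$ below.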
\begin{figure}[htb]
\begin{center}
\epsfxsize=\textwidth
\epsfbox{pantspath7.ps}
\caption{The path $C_0$ for $\Sigma_7$. Double each polygon to obtain loops on $\Sigma_7$,
with punctures at vertices. \label{pantspath7}}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\epsfxsize= \textwidth
\epsfbox{pantspath5.ps}
\caption{Half of the path $C$ for $\Sigma_5$ \label{pantspath5}}
\end{center}
\end{figure}
Any curve $\beta_{i,j}$ such that $i-j\equiv \pm 2\pmod{n}$ bounds a
twice punctured disk in $\Sigma$. There are once punctured annuli
between the circles $\beta_{i,j}$ and $\beta_{i, j+1}$ or between
$\beta_{i,j}$ and $\beta_{i+1,j}$. Create loops $B_{i,j}$ in $\Sigma_n\times
S^1$, each corresponding to the circle $\beta_{i,j}$ in $\Sigma_n$, as in
the Main Construction. Figures \ref{braid5}, \ref{braid6},
\ref{braid7} show a picture of part of $M_C$ for 5, 6, and 7
strand braids. The pictures are of a neighborhood of $\gamma\times I$,
cut along a gap, and flattened out. Loops $B_{i,j}$ and
$B_{i-1,j+1}$ have been isotoped to the same level.
\begin{figure}[htb]
\begin{center}
\epsfxsize= 3 in
\epsfbox{braid5.ps}
\caption{$M_C$ for $\Sigma_5$ \label{braid5}}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\epsfxsize= 3 in
\epsfbox{braid6.ps}
\caption{Part of $M_C$ for $\Sigma_6$ \label{braid6}}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\epsfxsize= 3 in
\epsfbox{braid7.ps}
\caption{Part of $M_C$ for $\Sigma_7$ \label{braid7}}
\end{center}
\end{figure}
A key observation is that if one does Dehn twists about the
horizontal loops, {\it i.e.} by $1/n$ Dehn filling on these
curves, then an incompressible surface which is obtained by tubing
together pants in $M$ may be isotoped across the loops without
affecting its isotopy class in the Dehn filling. This follows
since each pants is isotopic into a fiber surface, so the framing
induced by the pants is the same as that of the fiber. For this
reason, we may draw surfaces in $M_C$ going through loops
$B_{i,j}$ without drawing which side of the loop the surface lies
on. Also, this makes it clear that two parallel pants may not be
tubed together to get an incompressible surface.
We have a 2-complex in the trivial braid which consists of the
loops and punctured disks and annuli (this is shown for $n=5,6,7$
in figures \ref{braid5}, \ref{braid6}, \ref{braid7}). The
intersection with $\gamma\times I$ is shown in figure \ref{complex}(a)
for $\Sigma_7$. Take the subcomplex consisting only of punctured
annuli (fig. \ref{complex}(b)). Then one can see that any surface
carried by this complex is a punctured torus, isotopic to the
punctured torus $T$ which consists of the annuli bounded by the
sequence
${B_{1,3},B_{1,4},B_{2,4},B_{2,5},B_{3,5},...,B_{n,3},B_{1,3}}$
(fig. \ref{complex}(d)). If $n=5$, then this is the only
possibility for a surface. To see that every surface is isotopic
to $T$ if $n>5$, note that the punctured annulus spanning
$B_{i,j},B_{i,j+1},B_{i+1,j+1}$ is isotopic to the punctured
annulus spanning $B_{i,j},B_{i+1,j},B_{i+1,j+1}$ (this follows
from the symmetry of the $A$-regions: tubing two once punctured
annuli gives a 4-punctured sphere, which is isotopic to the
4-punctured sphere on the other side of the $A$-region, as in
figure \ref{Aregion}). We may thus assume that the surface does
not go through the loops $B_{i,j}$ where $i-j\pmod{n}=2$ (fig.
\ref{complex}(b)). By induction, we may assume that for
$i-j\pmod{n}\leq k$, the torus does not go through $B_{i,j}$,
where $k < n-3$ (fig. \ref{complex}(c-d)).
\begin{figure}[htb]
\begin{center}
\epsfxsize=\textwidth
\epsfbox{complex.ps}
\caption{The intersection of the complex with $\gamma\times I$ for $\Sigma_7$. \label{complex}}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\epsfxsize= 3 in
\epsfbox{annuluscompress.ps}
\caption{An annular compression of $T$. \label{annuluscompress}}
\end{center}
\end{figure}
Thus, we may assume that the surface goes through the sequence of
loops $B_{i,j}$, where $i-j\pmod{n}=n-3, n-2$, which gives the
punctured torus $T$. But $T$ is pairwise compressible, which can
be seen by considering the 2-punctured annulus spanning
$\{B_{1,3},B_{1,4},B_{2,4}\}$ (fig. \ref{annuluscompress} gives a
picture of the annular compression). This shows that there is no
pairwise incompressible surface in $T_\psi$ which is isotopic into
$M_C$. If one performs Dehn twists about the curves $B_{i,j}$
avoiding the finite set of Lagrangian subspaces coming from
Hatcher's theorem, one gets a small pure braid. In fact, one may
get a link in $S^3$ by considering $n-1$ strands as lying in a
solid torus, and embedding this solid torus standardly in $S^3$.
\end{proof}
\section{Conclusion}
Given some natural complexity on $3$-manifolds, such as the
minimal number of tetrahedra in a triangulation, or the minimal
number of intersections in a Heegaard diagram, it would be
interesting to understand how the density of small 3-manifolds of
complexity $n$ behaves as $n\longrightarrow \infty$, although this
is probably a very difficult problem. A related question would be
to consider all 2-complexes coming from closed paths of a given
length in $\mathcal{P}(\Sigma)^{(1)}$, and to consider the fraction of
these for which the construction given in theorem \ref{smallbraids},
Dehn twisting along the curves in the pants decompositions, produces
manifolds which have no incompressible surface isotopic off of the
loops. We conjecture that this fraction goes to 1 as the
length of the paths goes to $\infty$. Indeed one can generalize
the argument of theorem \ref{smallbraids} to show that if a path
goes through a subsequence of pants decompositions as defined in
the proof of the theorem, then any closed incompressible surface
in the complement of the pants curves will compress under Dehn
twisting.
It should be possible to generalize the construction in this paper
to prove the existence of knots in $S^3$ with arbitrarily large
Heegaard genus. It would be interesting to understand this
construction for surfaces with genus. One might be able to prove
the existence of fibred manifolds with fiber of arbitrarily high
genus, such that the fiber is the only closed connected
incompressible surface.
\def\cprime{$'$}
\begin{thebibliography}{10}
\bibitem{Bro01}
{\sc J.~F. Brock}, {\em {Weil-Petersson translation distance and
volumes of
mapping tori}}, arXiv:math.GT/0109050.
\bibitem{CS99}
{\sc D.~Cooper and M.~Scharlemann}, {\em The structure of a
solvmanifold's
{H}eegaard splittings}, in Proceedings of 6th G\"okova Geometry-Topology
Conference, vol.~23, 1999, pp.~1--18.
\bibitem{CJR82}
{\sc M.~Culler, W.~Jaco, and H.~Rubinstein}, {\em Incompressible
surfaces in
once-punctured torus bundles}, Proc. London Math. Soc. (3), 45 (1982),
pp.~385--419.
\bibitem{D99}
{\sc N.~Dunfield}, {\em Which small volume hyperbolic 3-manifolds
are {H}aken?}
\newblock talk given at University of Warwick, slides available at
http://www.math.harvard.edu/~nathand/preprints/slides/haken\_slides.ps, July
1999.
\bibitem{FH82}
{\sc W.~Floyd and A.~Hatcher}, {\em Incompressible surfaces in
punctured-torus
bundles}, Topology Appl., 13 (1982), pp.~263--282.
\bibitem{Hak68}
{\sc W.~Haken}, {\em Some results on surfaces in $3$-manifolds},
in Studies in
Modern Topology, Math. Assoc. Amer. (distributed by Prentice-Hall, Englewood
Cliffs, N.J.), 1968, pp.~39--98.
\bibitem{HM93}
{\sc J.~Hass and W.~Menasco}, {\em Topologically rigid non-{H}aken
$3$-manifolds}, J. Austral. Math. Soc. Ser. A, 55 (1993), pp.~60--71.
\bibitem{HLS00}
{\sc A.~Hatcher, P.~Lochak, and L.~Schneps}, {\em On the
{T}eichm\"uller tower
of mapping class groups}, J. Reine Angew. Math., 521 (2000), pp.~1--24.
\bibitem{HT85}
{\sc A.~Hatcher and W.~Thurston}, {\em Incompressible surfaces in
$2$-bridge
knot complements}, Invent. Math., 79 (1985), pp.~225--246.
\bibitem{Ha82}
{\sc A.~E. Hatcher}, {\em On the boundary curves of incompressible
surfaces},
Pacific J. Math., 99 (1982), pp.~373--377.
\bibitem{JS79}
{\sc W.~H. Jaco and P.~B. Shalen}, {\em Seifert fibered spaces in
$3$-manifolds}, Mem. Amer. Math. Soc., 21 (1979), pp.~viii+192.
\bibitem{Lo92}
{\sc L.~M. Lopez}, {\em Alternating knots and non-{H}aken
$3$-manifolds},
Topology Appl., 48 (1992), pp.~117--146.
\bibitem{Lo93}
\leavevmode\vrule height 2pt depth -1.6pt width 23pt, {\em Small
knots in
{S}eifert fibered $3$-manifolds}, Math. Z., 212 (1993), pp.~123--139.
\bibitem{Mar02}
{\sc D.~Margalit}, {\em {The automorphism group of the pants
complex}},
arXiv:math.GT/0201319.
\bibitem{Mo}
{\sc J.~W. Morgan}, {\em On {T}hurston's uniformization theorem
for
three-dimensional manifolds}, in The Smith conjecture (New York, 1979),
Academic Press, Orlando, FL, 1984, pp.~37--125.
\bibitem{MR97}
{\sc Y.~Moriah and H.~Rubinstein}, {\em Heegaard structures of
negatively
curved $3$-manifolds}, Comm. Anal. Geom., 5 (1997), pp.~375--412.
\bibitem{O84}
{\sc U.~Oertel}, {\em Closed incompressible surfaces in
complements of star
links}, Pacific J. Math., 111 (1984), pp.~209--230.
\bibitem{Tho}
{\sc A.~Thompson}, {\em Thin position and the recognition problem
for ${S}\sp
3$}, Math. Res. Lett., 1 (1994), pp.~613--630.
\bibitem{Th}
{\sc W.~P. Thurston}, {\em The geometry and topology of
3-manifolds}.
\newblock Lecture notes from Princeton University, 1978--80.
\bibitem{Th:82}
{\sc W.~P. Thurston}, {\em Three-dimensional manifolds, {K}leinian
groups and
hyperbolic geometry}, Bull. Amer. Math. Soc. (N.S.), 6 (1982), pp.~357--381.
\bibitem{Wal:78}
{\sc F.~Waldhausen}, {\em Recent results on sufficiently large
$3$-manifolds},
in Algebraic and geometric topology (Proc. Sympos. Pure Math., Stanford
Univ., Stanford, Calif., 1976), Part 2, Amer. Math. Soc., Providence, R.I.,
1978, pp.~21--38.
\end{thebibliography}
\epsilonnd{document}
\begin{document}
\title{The hanging chain problem in the sphere and in the hyperbolic plane}
\author{Rafael L\'opez}
\address{ Departamento de Geometr\'{\i}a y Topolog\'{\i}a\\ Universidad de Granada. 18071 Granada, Spain}
\email{[email protected]}
\keywords{hanging chain problem, sphere, hyperbolic plane, catenary, rotational surface, prescribed curvature }
\subjclass{53A04, 53A10, 49J05}
\begin{abstract} In this paper, the notion of the catenary curve in the sphere and in the hyperbolic plane is introduced. In both spaces, a catenary is defined as the shape of a hanging chain when its potential energy is determined by the distance to a given geodesic of the space. Several characterizations of the catenary are established in terms of the curvature of the curve and of the angle that its unit normal makes with a vector field of the ambient space. Furthermore, in the hyperbolic plane, we extend the concept of catenary by replacing the reference geodesic with a horocycle, or the hyperbolic distance with the horocycle distance.
\end{abstract}
\maketitle
\section{Introduction and objectives}
The shape adopted by a hanging chain under its own weight when suspended from its endpoints has attracted the interest of scientists since the times of Galileo and da Vinci. Galileo believed that the chain hangs in the shape of a parabola, but his argument was wrong. The solution curve is not as simple as a parabola (the graph of a quadratic polynomial) but is the catenary
\begin{equation}\label{cat}
y(x)=\frac{1}{a}\cosh(ax+b)-\lambda,\quad a,b,\lambda\in\mathbb R, a>0,
\end{equation}
which is a curve involving transcendental functions such as the exponential. The solution was derived independently by R. Hooke, J. Bernoulli, G. Leibniz and C. Huygens, among others. See \cite{beh,co} for an account of the history of the catenary. The catenary is a classical curve and one of the first examples, together with the brachistochrone, that illustrates the power of the calculus of variations \cite{gh}. Related to the catenary, and also using the calculus of variations, Euler proved that the catenary is the generating curve of the surface of revolution with minimum area spanning two coaxial circles \cite{eu}. To be precise, if the catenary \eqref{cat} rotates around the $x$-axis, the resulting surface of revolution is minimal if and only if $\lambda=0$. For other mathematical properties of the catenary, see \cite{ch,cd,kkp,mc2,pa}.
The catenary appears in connection with different topics in science, especially in engineering and architecture. For example, it is the model of an arch where the only force acting on the arch is its weight \cite{he,pot}. This suggests its use in the construction of arches and roofs of corridors, as the Spanish architect A. Gaud\'{\i} did in many of his constructions, for example, the Colegio Teresiano and La Pedrera (Barcelona). In this sense, it is very nice to read the two articles of R. Osserman about the shape of the Gateway Arch in St. Louis, Missouri, and its connection with the catenary \cite{os,os2}.
Many extensions of the hanging chain problem have been investigated and the literature is huge. Without giving a complete list of references, we point out some of the modifications of the classical problem. For example, one can assume that: the density of the chain changes along its length \cite{fa,kk,ok}; the force vector field is radial \cite{dh}; the chain is made of an elastic material \cite{bow,ir2,ir,irsin,rl}; the chain is subjected to the effect of the surface tension of a soap film adhered to the chain \cite{bmm1,bmm2}; the chain is suspended from a vertical line and rotates around this axis \cite{ap,ms,ne}; the two ends of the hanging chain move, stretching the chain along a path \cite{kaj}; there are loads on the chain pulling down on its lowest point \cite{za}.
In this paper we will investigate the generalization of the concept of catenary to the unit sphere $\mathbb S^2$ and the hyperbolic plane $\mathbb H^2$. From the mathematical viewpoint, it is natural to ask for the extension of the hanging chain problem to other spaces. It is clear that the sphere and the hyperbolic plane are the first spaces to study because they are the models of elliptic geometry and hyperbolic geometry, respectively. However, it is surprising that this topic has not been considered in the literature, since the catenary has been well known for centuries and the sphere and the hyperbolic plane are classical models in geometry.
The purpose of this article is to give an approach to the generalization of the hanging chain problem in these two spaces; more specifically,
to formulate a suitable problem that can be adopted as an extension of the Euclidean catenary. Once the concept of catenary is defined in these spaces as a critical point of a potential energy functional, different characterizations of the solution curve in terms of its curvature will be obtained. Finally, and if possible, we ask whether the curves obtained as solutions of the hanging chain problem are the generating curves of minimal surfaces in the $3$-dimensional sphere $\mathbb S^3$ and hyperbolic space $\mathbb H^3$. This would extend Euler's result to these spaces.
Our motivation to generalize the notion of the catenary to the sphere and to the hyperbolic plane has its origin in the Euclidean catenary \eqref{cat}. We will recall the hanging chain problem in the Euclidean plane $\mathbb R^2$, pointing out the ingredients in its formulation. These give us the clues for how to proceed when the ambient space is the sphere or the hyperbolic plane.
Let $\mathbb R^2$ denote the Euclidean plane where $(x,y)$ stand for the Cartesian coordinates and let $\langle,\rangle$ be the Euclidean metric. The hanging chain problem consists in finding the shape of an inextensible chain with uniform linear mass density and suspended from two fixed endpoints. Suppose that the chain of mass $m$ is idealized as a curve $y\colon [a,b]\to\mathbb R^2$, $y=y(x)$. The gravitational acceleration $g$ is constant over the chain. The $x$-axis is taken as the level of zero potential energy. The gravitational potential energy of an infinitesimal element $ds$ of the chain at $(x,y)$ is $gy\, dm=\sigma g y\, ds$, where $\sigma$ is the density per unit length. Since $ds=\sqrt{1+y'(x)^2}\, dx$, the total potential energy of the chain is
\begin{equation}\label{eq0}
\int_a^b\sigma g y(x)\sqrt{1+y'(x)^2}\ dx.
\end{equation}
We are assuming in \eqref{eq0} that $y(x)>0$ for all $x\in[a,b]$. The hanging chain problem reduces to finding a curve $y=y(x)$ that minimizes this energy among all curves with the same ends and the same length. The latter hypothesis is due to the inextensibility of the chain and the absence of elastic forces. Normalizing the constant $\sigma g$ to $1$, the energy functional to minimize is
\begin{equation}\label{ee}
\mathcal{E}[y]= \int_a^b y\sqrt{1+y'^2}\, dx +\lambda \int_a^b \sqrt{1+y'^2}\, dx.
\end{equation}
The second term of $\mathcal{E}[y]$ is a Lagrange multiplier term because all curves have the same length. Consequently, the solution $y(x)$ is a critical point of the energy $\mathcal{E}$. Using standard arguments of the calculus of variations, the Euler-Lagrange equation of \eqref{ee} is
\begin{equation}\label{catenary}
\frac{y''}{(1+y'^2)^{3/2}}=\frac{1}{(y+\lambda)\sqrt{1+y'^2}}.
\end{equation}
The solution of \eqref{catenary} is the catenary \eqref{cat}. Note that the left-hand side of \eqref{catenary} is the curvature $\kappa_e$ of the plane curve $y=y(x)$. The right-hand side of \eqref{catenary} has the following geometric interpretation. Consider the vector field $\partial_y$ which is the (constant) gravitational field in $\mathbb R^2$. Since the unit normal vector ${\bf n}$ of the curve $y(x)$ is
${\bf n}=(-y',1)/\sqrt{1+y'^2}$, then $\langle {\bf n},\partial_y\rangle=1/\sqrt{1+y'^2}$ and equation \eqref{catenary} can be expressed as
\begin{equation}\label{catenary2}
\kappa_e=\frac{\langle{\bf n},\partial_y\rangle}{y+\lambda}.
\end{equation}
Equation \eqref{catenary2} shows that the hanging chain problem is equivalent to a coordinate-free prescribed curvature problem.
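As a consistency check, the catenary \eqref{cat} indeed satisfies \eqref{catenary}: since $y'=\sinh(ax+b)$ and $y''=a\cosh(ax+b)$, we have $1+y'^2=\cosh^2(ax+b)$, and both sides of \eqref{catenary} reduce to $a/\cosh^2(ax+b)$.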
Motivated by the above description of the problem, the main ingredients are the following. The first aspect concerns the existence of a reference line which is prescribed in the problem. In the above arguments, this line is the $x$-axis of $\mathbb R^2$, and at this level the potential is $0$. A second aspect is the existence of a potential energy which depends on the position with respect to the reference line. In the case of the catenary, this potential is due to gravity. It is for this potential energy that we seek a minimum or, more exactly, a critical point. Finally, in the variational problem, all curves of the variation have prescribed endpoints and the same length. In particular, it is necessary to add a Lagrange multiplier to the potential energy that we want to minimize.
Based on the above discussion, we will formulate the hanging chain problem in the sphere $\mathbb S^2$ and in the hyperbolic plane $\mathbb H^2$.
The objectives of this paper can be divided into three specific items:
\begin{enumerate}
\item[(T1)] State the analogous hanging chain problem in $\mathbb S^2$ and in $\mathbb H^2$ and find the corresponding Euler-Lagrange equation.
\item[(T2)] Obtain an analogous formulation of the prescribed curvature equation \eqref{catenary2} in terms of a vector field that represents the `gravitational vector field'.
\item[(T3)] Rotate the catenaries in $\mathbb S^3$ and $\mathbb H^3$ and study the mean curvature of the resulting rotational surfaces, in particular whether they are minimal. \end{enumerate}
The critical points of the potential energy will also be called catenaries. Catenaries in $\mathbb S^2$ will be discussed in Section \ref{sec2}, where two potential energies are used, first with the distance to a geodesic of $\mathbb S^2$ and second with the distance to a plane of $\mathbb R^3$. In the hyperbolic plane $\mathbb H^2$, the reference lines will be geodesics as well as horocycles. This work is carried out in Section \ref{sec3}.
\section{The hanging chain problem in the sphere}\label{sec2}
In this section we will consider the hanging chain problem in the unit sphere $\mathbb S^2$. We first state the problem, and then give different characterizations of its solution.
\subsection{Spherical catenaries: definition}
Consider the unit sphere $\mathbb S^2=\{(x,y,z)\in\mathbb R^3:x^2+y^2+z^2=1\}$. Let $\Psi$ be the standard parametrization of $\mathbb S^2$ given by
\begin{equation}\label{para}
\Psi(u,v)=(\cos{u}\cos{v},\cos{u}\sin{v},\sin{u}),\quad u\in [-\frac{\pi}{2},\frac{\pi}{2}],v\in\mathbb R.
\end{equation}
In the hanging chain problem in $\mathbb S^2$, we consider a geodesic of $\mathbb S^2$ as the reference line to calculate the potential energy of a chain contained in $\mathbb S^2$. Let us fix the great circle $P=\{(x,y,z)\in\mathbb S^2:z=0\}$ as the reference geodesic. Notice that on $\mathbb S^2$ we do not have a notion of gravity $g$ due to the curvature of $\mathbb S^2$.
At this point, we assign to each point of $\mathbb S^2$ a potential which measures its distance to the reference line $P$, with $P$ being the level of zero potential. This distance is realized along the geodesics (meridians) orthogonal to $P$. The distance $d$ of a point $(x,y,z)=\Psi(u,v)$ to the geodesic $P$ is $d=|\arcsin(z)|=|u|$. We also consider the unit vector field $X\in\mathfrak{X}(\mathbb S^2)$ (defined except at the north and south poles) which is tangent to all these geodesics. This vector field is the gradient $\nabla d$ of the distance function, which is $\frac{\partial\Psi}{\partial u}=\Psi_u$. The vector field $X$ can be expressed in terms of the canonical vector fields $\{\partial_x,\partial_y,\partial_z\}$ of $\mathbb R^3$ as
\begin{equation}\label{x1}
X(\Psi(u,v))=-\sin{u}\cos{v}\, \partial_x -\sin{u}\sin{v}\, \partial_y+\cos{u}\, \partial_z.
\end{equation}
As in the Euclidean case, we will consider curves of $\mathbb S^2$ that do not intersect $P$, hence $u\not=0$. The geodesic $P$ separates $\mathbb S^2$ into two domains, namely, the half-spheres $\mathbb S^2_+=\{(x,y,z)\in\mathbb S^2:z>0\}$ and $\mathbb S^2_{-}=\{(x,y,z)\in\mathbb S^2:z<0\}$. Without loss of generality, we will assume that all curves are contained in the upper half-sphere $\mathbb S^2_+$. Let $\gamma \colon [a,b]\to\mathbb S^2_{+}$ be a regular curve. Let us write $\gamma(t)=\Psi(u(t),v(t))$, $t\in [a,b]$, with the condition $u(t)\in (0,\pi/2]$ because $\gamma(t)\in\mathbb S^2_{+}$. The arc-length element of $\gamma$ is $\sqrt{u'^2+v'^2(\cos{u})^2}\, dt$. Consequently, the potential energy of $\gamma$ is
\begin{equation}\label{esg}
\mathcal{E}_S[\gamma]=\int_a^b (u+\lambda)\, |\gamma'(t)|\, dt=\int_a^b (u+\lambda)\sqrt{u'^2+v'^2(\cos{u})^2}\, dt,
\end{equation}
where $\lambda\in\mathbb R$. The second integral of $\mathcal{E}_S[\gamma]$ is a Lagrange multiplier because in the variational problem all curves have the same length.
\begin{definition} \label{ds1}
A critical point of $\mathcal{E}_S$ is called a {\it spherical catenary}.
\end{definition}
Before finding the critical points of $\mathcal{E}_S$, we will obtain a suitable expression for the curvature of a curve in $\mathbb S^2$. Here the curvature is understood to be the geodesic curvature $\kappa_s$ of $\gamma$ in $\mathbb S^2$. The sign of $\kappa_s$ depends on the orientation of $\mathbb S^2$, which will be $N(p)=-p$, $p\in\mathbb S^2$. In such a case,
\begin{equation}\label{ks}
\begin{aligned}
\kappa_s&=\frac{\langle \gamma'',\gamma'\times N(\gamma)\rangle}{|\gamma'|^3}=\frac{\langle \gamma,\gamma'\times \gamma''\rangle}{|\gamma'|^3}\\
&=\frac{1}{|\gamma'|^3}\left(v'(2u'^2\sin{u}+v'^2(\cos{u})^2\sin{u})-\cos{u}(u'v''-u''v')\right).
\end{aligned}
\end{equation}
Related to the objective T1, we have the following result.
\begin{theorem}\label{ts1}
Let $\gamma(t)=\Psi(u(t),v(t))$ be a regular curve in $\mathbb S^2_+$. Then $\gamma$ is a spherical catenary if and only if its curvature $\kappa_s$ satisfies
\begin{equation}\label{s2}
\kappa_s= \frac{v'\cos{u}}{(u+\lambda)|\gamma'|}.
\end{equation}
\end{theorem}
\begin{proof}
We calculate the Euler-Lagrange equation of the energy \eqref{esg}. The Lagrangian of $\mathcal{E}_S$ is
$J[u,v,u',v']=(u+\lambda)\sqrt{u'^2+v'^2(\cos{u})^2}$. Since the same computations will be done later in a similar context (see Proposition \ref{pr1}), we consider the more general Lagrangian of the type
\begin{equation}\label{lagrange}
J[u,v,u',v']=f(u)\sqrt{u'^2+v'^2(\cos u)^2}.
\end{equation}
A curve $\gamma$ is a critical point if and only if $\gamma$ satisfies
\begin{equation}\label{sel}
\frac{\partial J}{\partial u}-\frac{d}{dt} \left(\frac{\partial J}{\partial u'}\right)=0\quad\mbox{and}\quad
\frac{\partial J}{\partial v}-\frac{d}{dt} \left(\frac{\partial J}{\partial v'}\right)=0.
\end{equation}
After some computations, equations \eqref{sel} are, respectively,
\begin{equation}\label{eqs0}
\begin{aligned}
\frac{v'\cos{u}}{|\gamma'|}\left(f'v'\cos{u}-fu'\sin{u}\right)-f\frac{d}{dt}\left(\frac{u'}{|\gamma'|}\right)&=0,\\
\frac{f'u'v'(\cos{u})^2}{|\gamma'|}+f\frac{d}{dt}\left(\frac{v'(\cos{u})^2}{|\gamma'|}\right)&=0.
\end{aligned}
\end{equation}
Equations \eqref{eqs0} can be written in terms of $\kappa_s$ as follows. Using \eqref{ks}, we obtain
\begin{equation}\label{eqs1}
\begin{aligned}
\frac{d}{dt}\left(\frac{u'}{|\gamma'|}\right)&=\frac{v'\cos{u}}{|\gamma'|^3}\left(v'u'^2\sin{u}-\cos{u}(u'v''-u''v')\right)\\
&=v'\cos{u}\left(\kappa_s-\frac{v'\sin{u}}{|\gamma'|}\right),
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqs2}
\begin{aligned}
\frac{d}{dt}\left(\frac{v'(\cos{u})^2}{|\gamma'|}\right)&=-\frac{u'\cos{u}}{|\gamma'|^3}\left(2u'^2v'\sin{u}+v'^3\sin{u}(\cos{u})^2-\cos{u}(u'v''-u''v')\right)\\
&=- u' \kappa_s \cos{u}.
\end{aligned}
\end{equation}
From \eqref{eqs1} and \eqref{eqs2}, the Euler-Lagrange equations \eqref{eqs0} can be expressed as
\begin{equation}\label{uuvv}
\begin{aligned}
v'\cos{u}\left(\frac{f'v'\cos{u}}{f|\gamma'|}-\kappa_s\right)&=0,\\
u'\cos{u}\left(\frac{f'v'\cos{u}}{f|\gamma'|}-\kappa_s\right)&=0.
\end{aligned}
\end{equation}
Since $\gamma$ is a regular curve, $u'$ and $v'$ cannot be simultaneously zero. Together with $\cos{u}\not=0$ and \eqref{uuvv}, we deduce
\begin{equation}\label{silva}
\kappa_s=\frac{f'v'\cos{u}}{f|\gamma'|}.
\end{equation}
In the particular case that $f(u)=u+\lambda$, then \eqref{silva} is just \eqref{s2}.
\end{proof}
Equation \eqref{s2} is second order, but a first integration is possible because the Lagrangian $J[u,v,u',v']$ does not depend on the function $v$. Indeed, there is a constant $c$ such that
\begin{equation}\label{si}
\frac{\partial J}{\partial v'}= (u+\lambda) \frac{v'(\cos{u})^2}{\sqrt{u'^2+v'^2(\cos{u})^2}}=c.
\end{equation}
If $c=0$, then $v'=0$ and $\gamma$ is a meridian of $\mathbb S^2$. Thus, if $c\not=0$, then $\gamma$ is never tangent to a meridian. In particular, $\gamma$ does not cross the north pole. Without loss of generality, we can assume that $u$ is a function of $v$, $u=u(v)$. Then $\gamma(v)=\Psi(u(v),v)$ and \eqref{si} can be rewritten as
\begin{equation*}
(u+\lambda) \frac{ (\cos{u})^2}{\sqrt{u'^2+ (\cos{u})^2}}=c.
\end{equation*}
Solving for $u'=du/dv$ gives $u'=\frac{\cos{u}}{c}\sqrt{(u+\lambda)^2(\cos{u})^2-c^2}$, and separating variables we obtain
\begin{equation}\label{deduced}
\int^u\frac{du}{\cos{u}\sqrt{(u+\lambda)^{2}(\cos{u})^2-c^2}}=\frac{v}{c}.
\end{equation}
This integral yields the following corollary.
\begin{corollary}
Let $\gamma(v)=\Psi(u(v),v)$ be a regular curve in $\mathbb S^2_+$. Then $\gamma$ is a spherical catenary if and only if $u=u(v)$ satisfies \eqref{deduced} for some constant $c\in\mathbb R$.
\end{corollary}
\begin{remark} Identity \eqref{si} is a type of Clairaut relation for spherical catenaries. Since the Clairaut relation on $\mathbb S^2$ holds for geodesics, we need to eliminate the Lagrange constraint due to the length in the initial formulation of the variational problem. Thus, take $\lambda=0$ in \eqref{si}. Under this assumption, the angle $\Theta$ that $\gamma$ makes with the parallel $v\mapsto \Psi (u,v)$ is
$$\cos\Theta=\frac{\langle\gamma',\Psi_v\rangle}{|\gamma'||\Psi_v|}=\frac{v'\cos{u}}{\sqrt{u'^2+v'^2(\cos{u})^2}}.$$
Then identity \eqref{si} can be expressed as
\begin{equation}\label{clairaut}
u \cos{u}\cos\Theta=c.
\end{equation}
The classical Clairaut relation establishes $\cos{u}\cos\Theta=c$ \cite[p. 257]{carmo}. In the case of spherical catenaries, identity \eqref{clairaut} asserts that the radius of the parallel multiplied by $u$ and by the cosine of the intersection angle with each parallel is constant.
\end{remark}
Going back, equation \eqref{s2} can be viewed as a prescribed curvature equation for a curve $\gamma$ in the sphere. Since $\gamma$ is the image of the plane curve $\beta(t)=(u(t),v(t))$ under the parametrization \eqref{para}, equation \eqref{s2} can be reformulated in terms of the curvature $\kappa_\beta$ of $\beta$. The curvature $\kappa_\beta$ of $\beta$ is
\begin{equation}\label{k5}
\kappa_\beta(t)=\frac{u'v''-u'' v'}{(u'^2+v'^2)^{3/2}}.
\end{equation}
Inserting in \eqref{ks}, we have
$$\kappa_s=\frac{1}{|\gamma'|^3}\left( 2u'^2v'\sin{u}+v'^3(\cos{u})^2\sin{u}-\kappa_\beta\cos{u}(u'^2+v'^2)^{3/2}\right).$$
Thus equation \eqref{s2} can be written as
\begin{equation}\label{kapabeta}
\kappa_\beta=\frac{2u'^2v'\sin{u}+v'^3(\cos{u})^2\sin{u}-\frac{v'\cos{u}}{u+\lambda}\left(u'^2+v'^2(\cos{u})^2\right)}{\cos{u}(u'^2+v'^2)^{3/2}}.
\end{equation}
This expression for $\kappa_\beta$ allows us to illustrate some spherical catenaries in Figure \ref{fig1}. These plots have been made with the {\it Mathematica} software (\cite{wo}). We briefly explain the method used to obtain these figures. Suppose $\gamma(t)=\Psi(u(t),v(t))$ and that the curve $\beta(t)=(u(t),v(t))$ is parametrized by arc-length. Then $u'(t)^2+v'(t)^2=1$. So we can write $u'(t)=\cos\theta(t)$ and $v'(t)=\sin\theta(t)$ for some function $\theta=\theta(t)$. According to \eqref{kapabeta}, the functions $u(t)$ and $v(t)$ satisfy the ODE system
\begin{equation}\label{h00}
\left\{ \begin{aligned}
u'(t)&=\cos\theta(t)\\
v'(t)&=\sin\theta(t)\\
\theta'(t)&=\sin\theta\left( 2(\cos\theta)^2 \tan{u}+(\sin\theta)^2\cos{u}\sin{u}-\frac{1-(\sin\theta)^2(\sin{u})^2}{u+\lambda} \right).
\end{aligned}
\right.
\end{equation}
Recall that the variation of the angle function $\theta(t)$ coincides with the curvature $\kappa_\beta$ of $\beta$. Given initial conditions $u(0)=u_0$, $v(0)=v_0$ and $\theta(0)=\theta_0$, {\it Mathematica} solves numerically the ODE \eqref{h00} and then the software graphically represents the solution $\gamma(t)= \Psi(u(t),v(t))$.
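The same integration can also be reproduced in an open numerical environment. The following sketch in Python with SciPy (illustrative only; the names and the initial data are our own choice, not part of the method above) implements the system \eqref{h00} and maps the solution back to $\mathbb S^2$ through the parametrization $\Psi$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

lam = 0.0   # Lagrange multiplier lambda

def rhs(t, y):
    # right-hand side of the ODE system (h00) in the unknowns (u, v, theta)
    u, v, th = y
    dth = np.sin(th) * (2*np.cos(th)**2*np.tan(u)
                        + np.sin(th)**2*np.cos(u)*np.sin(u)
                        - (1 - np.sin(th)**2*np.sin(u)**2)/(u + lam))
    return [np.cos(th), np.sin(th), dth]

# illustrative initial conditions u(0), v(0), theta(0)
sol = solve_ivp(rhs, [0.0, 6.0], [0.8, 0.0, 0.3], max_step=0.01)
u, v = sol.y[0], sol.y[1]
# back to the sphere through Psi(u, v)
x, y, z = np.cos(u)*np.cos(v), np.cos(u)*np.sin(v), np.sin(u)
\end{verbatim}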
\begin{figure}
\caption{Spherical catenaries in $\mathbb S^2_{+}$.
\label{fig1}}
\end{figure}
\subsection{Spherical catenaries: characterizations}
In this subsection, we give a geometric interpretation of \eqref{s2} completing the objective T2.
\begin{theorem}\label{ts2}
Let $\gamma$ be a regular curve in $\mathbb S^2_{+}$. Then $\gamma$ is a spherical catenary if and only if its geodesic curvature $\kappa_s$ satisfies
\begin{equation}\label{s3}
\kappa_s = \frac{\langle{\bf n},X\rangle}{d+\lambda},
\end{equation}
where ${\bf n}$ is the unit normal vector of the curve $\gamma$ (as a tangent vector on $\mathbb S^2$ orthogonal to $\gamma'$), and $d$ is the distance to $P$.
\end{theorem}
\begin{proof}
Since the unit normal vector to $\mathbb S^2$ along $\gamma$ is $N=-\gamma$, the vector ${\bf n}$ is
\begin{equation}\label{nn}
{\bf n}=\frac{ \gamma'\times N(\gamma)}{|\gamma'|}=\frac{\gamma\times\gamma'}{|\gamma'|}.
\end{equation}
From the definition of $X$, $\langle {\bf n},X(\gamma)\rangle= v'\cos{u}/|\gamma'|$. Equation \eqref{s3} follows from \eqref{s2} because the function $u$ in \eqref{s2} is just the distance $d$ to $P$.
\end{proof}
Notice that equation \eqref{s3} is the analogue of \eqref{catenary2} for spherical catenaries of $\mathbb S^2$.
The last part of this section addresses the third objective T3. Minimal rotational surfaces of $\mathbb S^3$ have been studied in the literature. However, there is no known geometric property of the generating curves of these surfaces. More exactly, we ask if the generating curves can be viewed as solutions of a hanging chain problem in $\mathbb S^2$.
A surface of revolution in the $3$-dimensional sphere $\mathbb S^3$ will be constructed by rotating a spherical catenary around the geodesic $P$. Here, as in the Euclidean catenary, we will assume $\lambda=0$ for the Lagrange multiplier. Let $\mathbb S^3=\{(x_1,x_2,x_3,x_4)\in\mathbb R^4:x_1^2+x_2^2+x_3^2+x_4^2=1\}$ and $\mathbb S^2\hookrightarrow\mathbb S^3$ be the natural inclusion defined by $(x_1,x_2,x_3)\mapsto (x_1,x_2,x_3,0)$. This embedding identifies $\mathbb S^2$ with $\mathbb S^2\times\{0\}\subset\mathbb S^3$. Let $\gamma\colon I\to \mathbb S^2\subset\mathbb S^3$ be a curve contained in $\mathbb S_+^2$. Denote by $S_\gamma$ the surface of revolution in $\mathbb S^3$ obtained by rotating $\gamma$ with respect to $P\subset\mathbb S^2\times\{0\}$. The one-parameter group of rotations whose axis is $P$ is $\mathcal{G}=\{\mathcal{R}_s:s\in\mathbb R\}$, where
$$\mathcal{R}_s=\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&\cos{s}&-\sin{s}\\ 0&0&\sin{s}&\cos{s}\end{array}\right).$$
In order to simplify the computations, we can assume without loss of generality that $\gamma$ is parametrized by $\gamma(t)=\Psi(u(t),t)$. Then the parametrization of $S_\gamma$ is
\begin{equation}\label{para1}
\begin{aligned}
\Phi(t,s)&=\mathcal{R}_s\cdot\gamma(t)\\
&=\left(\cos{(u(t))}\cos t,\cos{(u(t))}\sin{t},\sin{(u(t))}\cos{s},\sin{(u(t))}\sin{s}\right),
\end{aligned}
\end{equation}
where $t\in I\subset\mathbb R$ and $s\in\mathbb R$. We now calculate the mean curvature of $S_\gamma$. The expression of the mean curvature of $S_\gamma$ computed with the aid of \eqref{para1} is
\begin{equation}\label{mean}
H= \frac{g_{11}h_{22}-2g_{12}h_{12}+g_{22} h_{11}}{2(g_{11}g_{22}-g_{12}^2)},
\end{equation}
where, as usual, $\{g_{11},g_{12},g_{22}\}$ and $\{h_{11},h_{12},h_{22}\}$ are the coefficients of the first and second fundamental forms of $S_\gamma$ for the parametrization \eqref{para1}:
$$g_{11}=\langle \Phi_{t},\Phi_{t}\rangle,\quad g_{12}=\langle \Phi_{t},\Phi_{s}\rangle,\quad g_{22}=\langle \Phi_{s},\Phi_{s}\rangle,$$
$$h_{11}=\langle G,\Phi_{tt}\rangle,\quad h_{12}=\langle G,\Phi_{ts}\rangle,\quad h_{22}=\langle G,\Phi_{ss}\rangle.$$
Here $G$ is the unit normal vector field on $S_\gamma$. Note that $G(\Phi(t,s))$ is not only orthogonal to $\Phi_t(t,s)$ and $\Phi_s(t,s)$, but also to $\Phi(t,s)$ since $G$ is a tangent vector of $\mathbb S^3$. A straightforward computation gives $g_{12}=h_{12}=0$,
$g_{11}=|\gamma'|^2$, $g_{22}=(\sin{u})^2$ and
$$h_{11}=\frac{4u'^2\sin{u}+\cos{u}(2u''+\sin(2u))}{2|\gamma'|},\quad h_{22}=-\frac{\sin{u}(\cos{u})^2}{|\gamma'|}.$$
Thus the mean curvature $H$ in \eqref{mean} is
\begin{equation}\label{hs1}
H=\frac{\cos{u}(\cos{u}+\cos{3u})+(3\cos{2u}-1)u'^2-2u''\sin{u}\cos{u} }{2 \sin{u}|\gamma'|^3}.
\end{equation}
The expression \eqref{ks} when $v(t)=t$ is
\begin{equation}\label{kuu}
\kappa_s=\frac{1}{|\gamma'|^3}\left(2u'^2 \sin{u}+(\cos{u})^2\sin{u} +u''\cos{u}\right).
\end{equation}
Equation \eqref{kuu} allows us to write $u''$ in terms of $\kappa_s$. By replacing $u''$ in \eqref{hs1}, the mean curvature $H$ becomes
\begin{equation}\label{hh}
H=\frac{ (\cos{u})^2 -\sin{u}|\gamma'|\kappa_s }{ \sin{u}|\gamma'|}.
\end{equation}
Thus, $H=0$ if and only if $(\sin{u})|\gamma'|\kappa_s=(\cos{u})^2$. Therefore if $\gamma$ is a spherical catenary, the surface $S_\gamma$ is not minimal.
Looking at the formula \eqref{hh}, we observe that the term $\sin{u}$ in the numerator is just the Euclidean distance of the point $\Psi(u,v)$ to the plane $\Pi$ of equation $z=0$. This suggests considering the potential energy of $\gamma$ calculated with respect to the plane $\Pi$ instead of the geodesic $P$. To be definite, we will formulate a different hanging chain problem in $\mathbb S^2$ in such a way that the critical points of the corresponding energy functional successfully answer the question of the minimality of $S_\gamma$.
Consider a plane $\Pi$ of $\mathbb R^3$, which we can assume to be the plane of equation $z=0$. As usual, the $z$-axis is the direction of the gravity when the gravitational vector field is $\partial_z$. Now, we replace the sphere $\mathbb S^2$ by an arbitrary surface $S$ of the Euclidean space $\mathbb R^3$. The {\it extrinsic hanging chain problem in $S$} consists in determining the shape of a hanging chain supported on $S$ where the potential energy of $\gamma$ is calculated with the Euclidean distance to $\Pi$. A critical point of this potential will be called an {\it extrinsic catenary on $S$}. Notice that the vector field $\partial_z$ is not a vector field of $S$ but of the ambient space $\mathbb R^3$. In particular, coming back to the Euclidean context, the chain in $S$ is now subjected to the Euclidean gravity, which is constant, and we can assert that the potential at $ds$ of the chain is $\sigma g z\, ds$ as usual.
The extrinsic hanging chain problem was studied in the nineteenth century by Bobillier \cite{bo}, although it has not received much interest in the literature. See also \cite[Ch. VII]{ap} and \cite{gu}, and more recently, \cite{fe}. In the particular case that $S$ is the unit sphere $\mathbb S^2$, a solution of this problem will be called an {\it extrinsic spherical catenary} (Bobillier coined the expression ``spherical cha\^{i}nette''). In this paper, we recall this problem and its solution and, in addition, we credit the work of Bobillier, an almost forgotten French mathematician \cite{ha}. The potential energy of the hanging chain is
\begin{equation}\label{eex}
\mathcal{E}_S^{ex}[\gamma]=\int_a^b(\sin{u}+\lambda) |\gamma'(t)|\, dt= \int_a^b (\sin{u}+\lambda)\sqrt{u'^2+v'^2(\cos{u})^2}\, dt,
\end{equation}
where again $\lambda$ is a Lagrange multiplier.
\begin{proposition} \label{pr1}
Let $\gamma$ be a regular curve in $\mathbb S^2_{+}$. Then $\gamma$ is an extrinsic spherical catenary if and only if its geodesic curvature $\kappa_s$ satisfies
\begin{equation}\label{ex-cat}
\kappa_s= \frac{v'(\cos{u})^2}{(\sin{u}+\lambda)|\gamma'|},
\end{equation}
or equivalently, if
\begin{equation}\label{k-ex}
\kappa_s= \frac{\langle{\bf n},\partial_z\rangle}{\sin{u}+\lambda}.
\end{equation}
\end{proposition}
\begin{proof}
The Euler-Lagrange equations for \eqref{eex} follow directly from \eqref{silva}, where now $f(u)=\sin{u}+\lambda$. This gives \eqref{ex-cat}. Formula \eqref{k-ex} is a consequence of \eqref{ex-cat} and the expression \eqref{nn} for ${\bf n}$.
\end{proof}
Equation \eqref{k-ex} is analogous to \eqref{catenary2} because the term $\sin{u}$ in the denominator is the height with respect to $\Pi$ and the vector field $\partial_z$ is the gravitational vector field.
Finally, we answer the question of when the mean curvature of the rotational surface $S_\gamma$ is identically zero (objective T3).
\begin{corollary} \label{cor-s}
Let $\gamma$ be a regular curve in $\mathbb S^2_{+}\times\{0\}$. Then $S_\gamma$ is minimal if and only if $\gamma$ is an extrinsic spherical catenary.
\end{corollary}
\begin{proof}
Without loss of generality, we can write $\gamma$ as $\gamma(t)=\Psi(u(t),t)$. From \eqref{hh}, the mean curvature $H$ vanishes if and only if $(\cos{u})^2=\sin{u}|\gamma'|\kappa_s$ and this identity is just \eqref{ex-cat}.
\end{proof}
This result in $\mathbb S^3$ is analogous to the relation between the catenoid of $\mathbb R^3$ and the catenary curve obtained by Euler.
Rotational surfaces in $\mathbb S^3$ with zero mean curvature (minimal surfaces) are known: see \cite{al,ri}. Among these surfaces, the Clifford torus is the most famous example because it is the only minimal embedded torus in $\mathbb S^3$ (\cite{br}). In the context of extrinsic spherical catenaries, the Clifford torus corresponds to the case $\kappa_s(t)= 1$ and $u(t)=\pi/4$ in \eqref{ex-cat}. Indeed, the parametrization \eqref{para1} is
$\Phi(t,s)=\frac{\sqrt{2}}{2}(\cos{t},\sin{t},\cos{s},\sin{s})$. Thus $S_\gamma=\mathbb S^1(\frac{1}{\sqrt{2}})\times \mathbb S^1(\frac{1}{\sqrt{2}})$, which is the Clifford torus.
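Indeed, for the constant solution $u(t)=\pi/4$, formula \eqref{kuu} gives $\kappa_s=\tan{u}=1$, while the right-hand side of \eqref{ex-cat} with $\lambda=0$ and $v(t)=t$ equals $\cos{u}/\sin{u}=1$ as well, so that \eqref{ex-cat} is satisfied.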
\section{The hanging chain problem in the hyperbolic plane}\label{sec3}
In this section, the hanging chain problem in the hyperbolic plane $\mathbb H^2$ is investigated. The model for $\mathbb H^2$ will be the upper half-plane $(\mathbb R^2_{+},g)$, where $\mathbb R^2_+=\{(x,y)\in\mathbb R^2:y>0\}$ and the metric is $g=\frac{dx^2+dy^2}{y^2}$.
The hanging chain problem in the hyperbolic plane is richer than in the Euclidean plane because there are several possibilities for the reference lines and the potential energies. We will also consider the situation in which a horocycle is the reference line. Horocycles have some analogies with the straight lines of $\mathbb R^2$ and provide the so-called horospherical geometry \cite{iz}. This section is divided into three parts according to this variety of choices:
\begin{enumerate}
\item Hyperbolic catenary: the reference line is a geodesic and the potential energy is calculated along geodesics of $\mathbb H^2$.
\item Hyperbolic horo-catenary: the reference line is a geodesic and the potential energy is calculated along horocycles of $\mathbb H^2$.
\item Horo-catenary: the reference line is a horocycle and the potential energy is calculated along geodesics of $\mathbb H^2$.
\end{enumerate}
Let $\gamma\colon [a,b]\to\mathbb H^2$ be a regular curve parametrized by $\gamma(t)=(u(t),v(t))$. The energy to minimize in all these situations in this section is of type
\begin{equation}\label{ww}
\gamma\longmapsto \int_a^b\omega(u,v)|\gamma'(t)|\, dt= \int_a^b\omega(u,v)\frac{\sqrt{u'^2+v'^2}}{v}\, dt
\end{equation}
where $\omega=\omega(u,v)$ is a smooth function on the variables $u$ and $v$. Here $\sqrt{u'^2+v'^2}/v\, dt$ is the arc-length element of $\mathbb H^2$. This energy can be interpreted as the length of $\gamma$ in the conformal metric $\widetilde{g}= \omega^2g$ and consequently, its critical points coincide with the geodesics in the conformal space $(\mathbb R^2_{+},\widetilde{g})$. In order to simplify the presentation of this section, the Euler-Lagrange equations of the energy \eqref{ww} are calculated in the following result.
\begin{proposition}\label{pr2} A regular curve $\gamma(t)=(u(t),v(t))$ in $\mathbb H^2$ is a critical point of the energy \eqref{ww} if and only if its curvature $\kappa_h$ is
\begin{equation}\label{ayuda}
\kappa_h=\frac{v}{\omega\sqrt{m}}\left(u'\omega_v-v'\omega_u\right),
\end{equation}
where $m(t)=u'(t)^2+v'(t)^2$, $\omega_u=\frac{\partial\omega}{\partial u}$ and $\omega_v=\frac{\partial\omega}{\partial v}$.
\end{proposition}
\begin{proof} A straightforward computation of \eqref{sel} gives, respectively,
$$\frac{u'}{v^2\sqrt{m}}\left(u'v\omega_v-v'\omega_u-\frac{u'\omega}{v}-\omega \frac{u'v''-v'u''}{m}\right)=0,$$
$$\frac{v'}{v^2\sqrt{m}}\left(u'v\omega_v-v'\omega_u-\frac{u'\omega}{v}-\omega \frac{u'v''-v'u''}{m}\right)=0.$$
Since $\gamma$ is regular, and using the Euclidean curvature $\kappa_e$ given in \eqref{k5}, we deduce that $\gamma$ is a critical point of the energy \eqref{ww} if and only if
\begin{equation}\label{ayuda3}
\kappa_e=\frac{1}{\omega\sqrt{m}}\left(u'\omega_v-v'\omega_u-\frac{u'\omega}{v}\right).
\end{equation}
On the other hand, the curvature $\kappa_h$ of $\gamma$ is related to $\kappa_e$ because the hyperbolic metric is conformal to the Euclidean one: see \cite[Chapter 1]{be}. This relation is
\begin{equation}\label{keh}
\kappa_h= v\kappa_e+\frac{u'}{\sqrt{m}}.
\end{equation}
Then \eqref{ayuda} is a consequence of \eqref{ayuda3} and \eqref{keh}.
\end{proof}
The identity \eqref{ayuda} can also be expressed as follows. Consider the canonical vector fields $\{\partial_x,\partial_y\}$ of $\mathbb R^2$. Then the gradient $\nabla\omega$ of $\omega$ (in $\mathbb H^2$) is
$$\nabla\omega= y^2(\omega_x\partial_x+\omega_y\partial_y).$$
On the other hand, the unit normal vector of $\gamma$ is $\mathbf n(t)=v(t)\frac{(-v'(t),u'(t))}{\sqrt{m}}$. Thus we obtain:
\begin{corollary} A regular curve $\gamma(t)=(u(t),v(t))$ in $\mathbb H^2$ is a critical point of the energy \eqref{ww} if and only if its curvature $\kappa_h$ satisfies
\begin{equation}\label{ayuda2}
\kappa_h=\frac{g(\mathbf n,\nabla\omega)}{\omega},
\end{equation}
where $\mathbf n$ is the unit normal vector of $\gamma$.
\end{corollary}
\subsection{Hyperbolic catenaries}
The first case to investigate follows the same motivation as in the Euclidean plane. For the choice of the reference line, we take a geodesic $L$ of $\mathbb H^2$ which we can assume to be $L=\{(0,y):y>0\}$. At this level, the potential will be $0$. The potential energy at each point is determined by the hyperbolic distance to $L$ which is calculated along the geodesics orthogonal to $L$. If $(x,y)\in\mathbb H^2$, its distance $d$ to $L$ is
$$d=\log\frac{x+\sqrt{x^2+y^2}}{y}.$$
The geodesics orthogonal to $L$ are half-circles of $\mathbb R^2_{+}$ centered at the origin of $\mathbb R^2$. Thus the unit vector field $Y\in\mathfrak{X}(\mathbb H^2)$ which is tangent to all these geodesics at each point of $\mathbb H^2$ is
\begin{equation}\label{yy}
Y(x,y)=y\left( \frac{ y}{\sqrt{x^2+y^2}}\, \partial_x-\frac{x}{\sqrt{x^2+y^2}}\, \partial_y\right).
\end{equation}
Given a curve $\gamma(t)=(u(t),v(t))$, define the potential energy
\begin{equation}\label{fh1}
\mathcal{E}_H[\gamma]=\int_a^b (d+\lambda)|\gamma'(t)|\, dt=\int_a^b (d+\lambda)\frac{\sqrt{u'^2+v'^2}}{v}\, dt, \quad d=\log {\frac{u+r}{v}},
\end{equation}
where $r(t)=\sqrt{u(t)^2+v(t)^2}$. As in the case of the sphere $\mathbb S^2$, in $\mathbb H^2$ we do not have a notion of (constant) gravity. Let us observe that $\mathcal{E}_H$ is a particular case of \eqref{ww}, obtained by choosing $\omega(u,v)=d+\lambda$. It will be assumed that $d\not=0$, that is, $u\not=0$. Equivalently, the curve $\gamma$ is contained in one of the domains $\mathbb H^2_{+}=\{(x,y)\in\mathbb H^2:x>0\}$ or $\mathbb H^2_{-}=\{(x,y)\in\mathbb H^2:x<0\}$. Since each domain is mapped onto the other by means of the isometry $(x,y)\mapsto (-x,y)$, it will be assumed that $\mathcal{E}_H$ acts on the class of all curves $\gamma$ contained in $\mathbb H^2_{+}$.
\begin{definition} A critical point of $\mathcal{E}_H$ is called a {\it hyperbolic catenary}.
\end{definition}
As in $\mathbb R^2$ and $\mathbb S^2$, a hyperbolic catenary will be characterized in terms of its curvature $\kappa_h$ as a curve of $\mathbb H^2$.
\begin{theorem}\label{t32}
A regular curve $\gamma(t)=(u(t),v(t))$ in $\mathbb H^2_+$ is a hyperbolic catenary if and only if its curvature $\kappa_h$ satisfies
\begin{equation}\label{eqh1}
\kappa_h=-\frac{1}{d+\lambda}\frac{uu'+vv'}{ \sqrt{u^2+v^2}\sqrt{u'^2+v'^2}}=-\frac{ r'}{(d+\lambda)\sqrt{u'^2+v'^2}}.
\end{equation}
\end{theorem}
\begin{proof}
The energy $\mathcal{E}_H$ is a particular case of \eqref{ww}. Using \eqref{ayuda} with $\omega=d+\lambda$, equation \eqref{eqh1} is obtained immediately.
\end{proof}
With respect to T2, the next step consists of writing equation \eqref{eqh1} in a manner similar to the formula \eqref{catenary2}, involving the curvature $\kappa_h$ and the vector field $Y$. The following result is immediate by a direct computation or by using \eqref{ayuda2}, because the vector field $Y$ is just $\nabla d$.
\begin{corollary}\label{th1}
A regular curve $\gamma$ in $\mathbb H^2_{+}$ is a hyperbolic catenary if and only if its curvature $\kappa_h$ satisfies
\begin{equation}\label{eth1}
\kappa_h= \frac{g( \mathbf n,Y)}{d+\lambda}.
\end{equation}
\end{corollary}
As a consequence, Corollary \ref{th1} is the analogue in $\mathbb H^2$ of the statement \eqref{catenary2} for hyperbolic catenaries.
\subsection{Hyperbolic horo-catenaries}
Consider a modified version of the above hanging chain problem, replacing the potential calculated with the hyperbolic distance by the horocycle distance. The {\it horocycle distance} to the geodesic $L$ is defined as the distance of a point $(x,y)\in\mathbb H^2$ to $L$ calculated along the horocycle passing through $(x,y)$ and orthogonal to $L$. In the present case that $L$ is the geodesic of equation $x=0$, this distance is $|x|/y$.
Let $\gamma(t)=(u(t),v(t))$, $t\in [a,b]$, be a regular curve contained in $\mathbb H^2_+$. The potential energy of $\gamma$ calculated with the horocycle distance is
\begin{equation}\label{ehor}
\mathcal{E}_H^{hor}[\gamma]=\int_a^b (d_{hor}+\lambda) |\gamma'(t)|\, dt=\int_a^b (d_{hor}+\lambda) \frac{\sqrt{u'^2+v'^2}}{v}\, dt,\quad d_{hor}= \frac{u}{v}.
\end{equation}
\begin{definition}
A critical point of $\mathcal{E}_H^{hor}$ is called a {\it hyperbolic horo-catenary}.
\end{definition}
With respect to the objective T1, we prove:
\begin{theorem}
A regular curve $\gamma(t)=(u(t),v(t))$ in $\mathbb H^2_+$ is a hyperbolic horo-catenary if and only if
its curvature $\kappa_h$ satisfies
\begin{equation}\label{h22}
\kappa_h=- \frac{uu'+vv'}{(d_{hor}+\lambda)v\sqrt{m}}.
\end{equation}
\end{theorem}
\begin{proof} The energy $\mathcal{E}_H^{hor}$ in \eqref{ehor} is of the type \eqref{ww} and formula \eqref{h22} is \eqref{ayuda} for $\omega= d_{hor}+\lambda$.
\end{proof}
To answer T2, we replace the above vector field $Y$ by the vector field $W\in\mathfrak{X}(\mathbb H^2)$ defined as
$$W(x,y)= y\, \partial_x- x\, \partial_y.$$
The next result is a consequence of \eqref{ayuda2} because $W=\nabla d_{hor}$.
\begin{corollary} A regular curve $\gamma$ in $\mathbb H^2_{+}$ is a hyperbolic horo-catenary if and only if its curvature $\kappa_h$ satisfies
\begin{equation}\label{eth2}
\kappa_h= \frac{g( \mathbf n,W)}{d_{hor}+\lambda}.
\end{equation}
\end{corollary}
To conclude this subsection, we investigate problem T3 for this type of catenary. The hyperbolic plane $\mathbb H^2$ is embedded into the $3$-dimensional hyperbolic space $\mathbb H^3=(\mathbb R^3_{+},\frac{1}{x_3^2}(dx_1^2+dx_2^2+dx_3^2))$ via the natural inclusion $(x,y)\in\mathbb H^2\mapsto (x ,0,y)\in\mathbb H^3$. With this identification, the geodesic $L\subset\mathbb H^2$ is the $x_3$-axis in $\mathbb H^3$.
Let $S_\gamma$ denote the surface of revolution obtained by rotating $\gamma(t)=(u(t),0,v(t))$ with respect to the $x_3$-axis. In the upper half-space model of $\mathbb H^3$, the rotations that leave the $x_3$-axis pointwise fixed coincide with the Euclidean rotations of $\mathbb R^3$ with the same axis. These surfaces of revolution in $\mathbb H^3$ are said to be of spherical type \cite{dcarmo}. Thus a parametrization $\Phi$ of $S_\gamma$ is
\begin{equation}\label{para2}
\Phi(t,s)= (u(t)\cos{s},u(t)\sin{s},v(t)) , \quad t\in [a,b],s\in\mathbb R.
\end{equation}
\begin{theorem}\label{t37} A regular curve $\gamma$ in $\mathbb H^2_+$ is a hyperbolic horo-catenary for $\lambda=0$ if and only if the rotational surface $S_\gamma$ of spherical type is minimal.
\end{theorem}
\begin{proof} In the upper half-space model of $\mathbb H^3$, the mean curvature $H$ of a surface $S$ can be computed with the aid of the Euclidean mean curvature $H_e$ of $S$ when $S$ is viewed as a submanifold of the Euclidean space $\mathbb R^3_{+}$. This relation is similar to \eqref{keh}, namely,
\begin{equation}\label{he0}
H(p)=x_3H_e(p)+N_3(p),
\end{equation}
where $p=(x_1,x_2,x_3)\in S$ and $N=(N_1,N_2,N_3)$ is the Euclidean unit normal vector of $S$ (\cite[Chapter 1]{be}). If now $S$ is the rotational surface $S_\gamma$ parametrized by $\Phi$ in \eqref{para2}, the value of $H_e$ is
\begin{equation}\label{he}H_e=\frac{\kappa_e}{2}+\frac{v'}{2u\sqrt{m}},
\end{equation}
and the expression of $N$ is
$$N=\frac{1}{\sqrt{m}}(-v'\cos s,-v'\sin s,u').$$
Thus $N_3=u'/\sqrt{m}$, and using \eqref{he}, the mean curvature $H$ given in \eqref{he0} becomes
$$H=\frac{v\kappa_e}{2}+\frac{vv'}{2u\sqrt{m}}+\frac{u'}{\sqrt{m}}.$$
Using \eqref{keh},
$$H=\frac{\kappa_h}{2}+\frac{uu'+vv'}{2u\sqrt{m}}.$$
Then $H=0$ if and only if
\begin{equation}\label{vke}
\kappa_h=-\frac{uu'+vv'}{u\sqrt{m}}.
\end{equation}
But this identity \eqref{vke} is just equation \eqref{h22} for $\lambda=0$ because $d_{hor}=u/v$. This proves the result.
\end{proof}
We point out that do Carmo and Dajczer obtained all minimal rotational surfaces of $\mathbb H^3$ \cite{dcarmo}. The statement of Theorem \ref{t37} gives a geometric interpretation of the generating curves of the minimal rotational surfaces of spherical type of $\mathbb H^3$, proving that these curves are the solutions of a hanging chain problem in $\mathbb H^2$. As a consequence, this extends Euler's result to these spaces.
\subsection{Horo-catenaries}
We investigate the hanging chain problem considering a horocycle $\mathcal{H}$ as the reference line. Without loss of generality we can assume $\mathcal{H}=\{(t,1):t\in\mathbb R\}$. The potential energy at each point of $\mathbb H^2$ is given by its hyperbolic distance to $\mathcal{H}$. In the upper half-plane model of $\mathbb H^2$, the geodesics orthogonal to $\mathcal{H}$ are the vertical lines of $\mathbb R_+^2$. If $(x,y)\in\mathbb H^2$, the hyperbolic distance $d_b$ from $(x,y)$ to $\mathcal{H}$ is the length along the geodesic orthogonal to $\mathcal{H}$ passing through $(x,y)$. This distance is $d_b=\log (y)$. Note that this distance coincides with the Busemann function in horospherical geometry when the ideal point is $\infty$ \cite{bu}. The unit vector field $V\in\mathfrak{X}(\mathbb H^2)$ which is tangent to all these geodesics is given by
$$V(x,y)=y\partial_y.$$
We will assume again that $d_b\not=0$, that is, $y\not=1$. The horocycle $\mathcal{H}$ separates $\mathbb H^2$ into two domains, namely, $\mathbb H^2(+)=\{(x,y)\in\mathbb H^2:y>1\}$ and $\mathbb H^2(-)=\{(x,y)\in\mathbb H^2:y<1\}$, but these two domains are not isometric. From now on, we will assume that all curves are contained in $\mathbb H^2(+)$; a similar analysis can be done in the case that all curves are contained in $\mathbb H^2(-)$.
Let $\gamma\colon [a,b]\to\mathbb H^2(+)$ be a regular curve, $\gamma(t)=(u(t),v(t))$. The potential energy of $\gamma$ is
\begin{equation}\label{ehor2}
\mathcal{E}_{hor}[\gamma]=\int_a^b ( d_b+\lambda)|\gamma'(t)|\, dt=\int_a^b ( d_b+\lambda) \frac{\sqrt{u'^2+v'^2}}{v}\, dt,\quad d_b=\log{v},
\end{equation}
where $\lambda\in\mathbb R$ is a Lagrange parameter.
\begin{definition}
A critical point of $\mathcal{E}_{hor}$ is called a {\it horo-catenary}.
\end{definition}
We characterize the horo-catenaries in terms of their curvatures $\kappa_h$.
\begin{theorem} A regular curve $\gamma(t)=(u(t),v(t))$ in $\mathbb H^2(+)$ is a horo-catenary if and only if its curvature $\kappa_h$ satisfies
\begin{equation}\label{h33}
\kappa_h= \frac{u'}{(d_b+\lambda)\sqrt{m}}.
\end{equation}
\end{theorem}
\begin{proof}
Expression \eqref{h33} is just \eqref{ayuda} for $\omega=d_b+\lambda$.
\end{proof}
The Lagrangian $J$ of $\mathcal{E}_{hor}$ is
\begin{equation}\label{juv}
J[u,v,u',v']=( d_b+\lambda) \frac{\sqrt{u'^2+v'^2}}{v},
\end{equation}
which does not depend on $u$. Thus a first integration of the Euler-Lagrange equation can be deduced.
\begin{corollary} A regular curve $\gamma$ in $\mathbb H^2(+)$ is a horo-catenary if and only if $\gamma$ can be locally expressed as
\begin{equation}\label{integra1}
\gamma(v)=\left(\int^v\frac{c\tau}{\sqrt{(\log{\tau}+\lambda)^{2}-c^2 \tau^2}}\, d\tau,v\right),
\end{equation}
where $c\in\mathbb R$ is a constant of integration.
\end{corollary}
\begin{proof} From \eqref{juv}, there exists a constant $c$ such that $\frac{\partial J}{\partial u'}=c$. This identity is
\begin{equation}\label{cc}
\frac{u'(\log{v}+\lambda)}{v\sqrt{u'^2+v'^2}}=c.
\end{equation}
Without loss of generality, we can assume that $\gamma$ can be written locally as $\gamma(v)=(u(v),v)$. Then \eqref{cc} is
$$\frac{u'(\log{v}+\lambda)}{v\sqrt{1+u'^2}}=c.$$
Hence \eqref{integra1} follows.\end{proof}
With respect to the objective T2, we have the following characterization of horo-catenaries, which is a consequence of \eqref{ayuda2} and the fact that $V=\nabla d_b$.
\begin{corollary} A regular curve $\gamma$ in $\mathbb H^2(+)$ is a horo-catenary if and only if its curvature $\kappa_h$ satisfies
\begin{equation}\label{kh2}
\kappa_h = \frac{g( {\bf n},V)}{d_b+\lambda}.
\end{equation}
\end{corollary}
Equation \eqref{kh2} is the analogue of the formula \eqref{catenary2} in the context of horo-catenaries.
We finish this section with some pictures of the three types of catenaries for $\lambda=0$. See Figure \ref{fig2}. The process to plot these curves with {\it Mathematica} is the following. Suppose that $\gamma(t)=(u(t),v(t))$ is parametrized by the Euclidean arc-length. Then $\gamma'(t)= (\cos\theta(t),\sin\theta(t))$ for some function $\theta=\theta(t)$, where $\theta'(t)=\kappa_e(t)$ is the Euclidean curvature of $\gamma$. For each type of catenary, the value of $\kappa_e$ is obtained by combining \eqref{keh} and each one of the expressions for $\kappa_h$ in \eqref{eqh1}, \eqref{h22} and \eqref{h33}:
\begin{align}
\kappa_e&=-\dfrac{1}{v\sqrt{m}}\left( \dfrac{uu'+vv'}{rd}+u'\right),&\quad (\mbox{hyperbolic catenary}),\label{h11}\\
\kappa_e&=-\dfrac{1}{v\sqrt{m}}\left( \dfrac{uu'+vv'}{vd_{hor}}+u'\right),&\quad (\mbox{hyperbolic horo-catenary}),\label{h3}\\
\kappa_e&=\dfrac{1}{v\sqrt{m}}\dfrac{1-d_b}{d_b}u',&\quad (\mbox{horo-catenary})\label{h-33}.
\end{align}
It follows that the functions $u(t)$, $v(t)$ and $\theta(t)$ satisfy the ODE system
\begin{equation}\label{ode}
\left\{ \begin{aligned}
u'(t)&=\cos\theta(t)\\
v'(t)&=\sin\theta(t)\\
\theta'(t)&=\kappa_e(t).
\end{aligned}
\right.
\end{equation}
Finally, and distinguishing the three types of catenaries of $\mathbb H^2$, the system \eqref{ode} has been numerically solved with {\it Mathematica} once the initial conditions
\begin{equation}\label{conditions}
u(0)=u_0,\quad v(0)=v_0,\quad \theta(0)=\theta_0
\end{equation}
have been prescribed.
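For readers without access to {\it Mathematica}, the following is a minimal Python sketch of the same numerical integration, specialized to the horo-catenary case \eqref{h-33} (so that $\kappa_e=\frac{1-\log v}{v\log v}\,u'$ in Euclidean arc-length); the initial values are only illustrative.
\begin{verbatim}
# Integrate the ODE system (u', v', theta') = (cos theta, sin theta, kappa_e)
# for the horo-catenary; here kappa_e = (1 - log v)/(v log v) * u'.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def horo_catenary_rhs(t, y):
    u, v, theta = y
    du, dv = np.cos(theta), np.sin(theta)
    dtheta = (1.0 - np.log(v)) / (v * np.log(v)) * du  # Euclidean curvature
    return [du, dv, dtheta]

# illustrative initial conditions u(0)=0, v(0)=2 (so the curve lies in H^2(+)),
# theta(0)=0
sol = solve_ivp(horo_catenary_rhs, [0.0, 6.0], [0.0, 2.0, 0.0], max_step=0.01)

plt.plot(sol.y[0], sol.y[1])
plt.xlabel("x"); plt.ylabel("y")
plt.show()
\end{verbatim}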
\begin{figure}
\caption{Catenaries in $\mathbb H^2$ considering the upper half-plane model: hyperbolic catenary (left), hyperbolic horo-catenary (middle), horo-catenary (right). These curves are solutions of \eqref{ode}.}
\label{fig2}
\end{figure}
As a consequence of these plots, we observe that the horo-catenary (Figure \ref{fig2}, right) is a graph on the $x$-axis. This is not a coincidence; it holds in general.
\begin{proposition}\label{pr-h}
If $\gamma$ is a horo-catenary, then $\gamma$ is a vertical line or $\gamma$ is a bounded entire graph on the $x$-axis.
\end{proposition}
\begin{proof}
Suppose that $\gamma$ is parametrized by $\gamma(t)=(u(t),v(t))$, $t\in I\subset\mathbb R$, where $t$ is the Euclidean arc-parameter and $I$ is the maximal domain. Since $\gamma$ is a horo-catenary, the curvature $\kappa_e$ of $\gamma$ satisfies \eqref{h-33}. Thus the ODE system \eqref{ode} is
\begin{equation}\label{s-hor}
\left\{ \begin{aligned}
u'(t)&=\cos\theta(t)\\
v'(t)&=\sin\theta(t)\\
\theta'(t)&=\frac{1-\log{v}}{v\log{v}}u'.
\end{aligned}
\right.
\end{equation}
We distinguish two cases.
\begin{enumerate}
\item Suppose there exists $t_0$ such that $u'(t_0)=0$. The first equation of \eqref{s-hor} implies $\cos\theta(t_0)=0$. Without loss of generality, we suppose that $\theta(t_0)=\pi/2$. Thus at $t=t_0$, the initial conditions \eqref{conditions} are $(u(t_0),v(t_0),\pi/2)$. By uniqueness of \eqref{s-hor}-\eqref{conditions}, $\gamma$ is a vertical straight-line.
\item Suppose $u'(t)\not=0$ for all $t$. This implies that $\gamma$ is a graph over some interval $I=(a,b)$ of the $x$-axis. Reparametrizing $\gamma$, the curve can be expressed as $\gamma(x)=(x,v(x))$, $x\in I$, and it will be proved that $I=\mathbb R$. Equation \eqref{cc} for $\kappa_e$ becomes
\begin{equation}\label{log}
\frac{\log{v}}{v\sqrt{1+v'^2}}=c.
\end{equation}
Since $c\not=0$ and $\gamma$ is contained in $\mathbb H^2(+)$, we have $\log{v}>0$. From \eqref{log} we deduce
\begin{equation}\label{logv}
0<c=\frac{\log{v}}{v\sqrt{1+v'^2}}\leq \frac{\log{v}}{v}.
\end{equation}
The function $t\mapsto \log{t}/t $ is bounded in $(1,\infty)$ with the property
\begin{equation}\label{log2}
\lim_{t\to\infty} \frac{\log{t}}{t}=\lim_{t\to 1} \frac{\log{t}}{t}=0.
\end{equation}
From \eqref{logv} and \eqref{log2}, we conclude that $v'$ is a bounded function. Moreover, since $\log{v(x)}/v(x)\geq c>0$ for all $x$, property \eqref{log2} implies that there exist two constants $m_1, m_2\in\mathbb R$ with $1<m_1<m_2$ such that $m_1\leq v(x)\leq m_2$ for all $x\in (a,b)$. The fact that $v'(t)$ is bounded finally proves that all solutions of \eqref{s-hor} are defined on the entire real line $\mathbb R$.
\end{enumerate}
\end{proof}
\section{Conclusions and outlook}
The catenary is the solution of the hanging chain problem in $\mathbb R^2$, and this makes it attractive in other fields of science, engineering and architecture. However, the hanging chain problem had not been formulated in spaces other than the Euclidean one. Among these spaces, the sphere $\mathbb S^2$ and the hyperbolic plane $\mathbb H^2$ are the natural choices to extend this problem. In this paper, the problem has been formulated in $\mathbb S^2$ and in $\mathbb H^2$, defining in each case a potential energy that depends on the distance of a point to a reference line. The resulting critical points of these energies (for different reference lines) generalize the concept of catenary in both spaces.
A remarkable result is the characterization of the generating curves of minimal rotational surfaces of $\mathbb S^3$, proving that these curves are chains on $\mathbb S^2$ suspended by their own weight, where the force vector field is really the gravity of $\mathbb R^3$. In this particular situation, the initial hanging chain problem formulated in $\mathbb S^2$ must be replaced by another one, which was called `extrinsic', because the force field is a vector of $\mathbb R^3$, not of $\mathbb S^2$. Then it was proved that the generating curves are solutions of an old problem formulated by Bobillier in the 19th century, which has been revisited in the present paper.
There are a number of directions in which this article could be extended. For example, one question concerns the existence of closed spherical catenaries. In view of the pictures of Figure \ref{fig1}, it seems plausible that such catenaries do exist. This problem was investigated in \cite{al} for extrinsic spherical catenaries in the context of rotational surfaces of $\mathbb S^3$ with constant mean curvature. Besides the closed catenaries, there are other catenaries which never close up and keep turning around the north pole of $\mathbb S^2$. The difficulty that arises here is that, although the curvature function is periodic, this is not enough to ensure that the corresponding catenary is closed: see a discussion of this problem in \cite{arroyo}.
As in $\mathbb S^2$, it would be interesting to classify the catenaries of the hyperbolic plane. Proposition \ref{pr-h} is just an example, but the work to be done goes beyond that. According to Figure \ref{fig2}, several questions are reasonable to ask. For example: (i) when does a catenary intersect the ideal boundary of $\mathbb H^2$, and, in such a case, is the intersection orthogonal? (ii) is every horo-catenary periodic? (iii) what are the properties of the horo-catenaries contained in $\mathbb H^2(-)$?
Another extension of the paper would be to consider the shape of a hanging surface in $\mathbb S^3$ and $\mathbb H^3$. In the Euclidean space, the analogue of the catenary in the two-dimensional case is called a singular minimal surface (\cite{bht,dih}). The extension is straightforward using the characterization \eqref{catenary2}. So, it suffices to replace the curvature of the catenary $\kappa_e$ by the mean curvature $H$ of the surface and the unit normal ${\bf n}$ of the curve by the unit normal vector field $G$ to the surface. For example, in the $3$-dimensional unit sphere $\mathbb S^3$, the shape of a hanging surface with respect to $\mathbb S^2\times\{0\}$ is characterized by the equation
$H= \langle G,X\rangle/(d+\lambda)$, $\lambda\in\mathbb R$. Here $d$ is the distance to $\mathbb S^2\times\{0\}$ and $X\in\mathfrak{X}(\mathbb S^3)$ is the unit vector field tangent to the meridians of $\mathbb S^3$ which are orthogonal to $\mathbb S^2\times\{0\}$.
Finally, it could be interesting to obtain some geometric properties of the rotational surfaces in $\mathbb S^3$ and in $\mathbb H^3$ constructed from catenaries in their different variants. Although the hanging chain problem initially has no relation to the problem of rotational surfaces with minimum area, in some cases we have proved a connection between both problems (Corollary \ref{cor-s} and Theorem \ref{t37}). It seems interesting to investigate geometric properties of the rotational surfaces of $\mathbb S^3$ and $\mathbb H^3$ whose generating curves are catenaries of $\mathbb S^2$ and $\mathbb H^2$, respectively.
\section*{Declaration of competing interest} The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
\section*{Acknowledgements} The author is a member of the Institute of Mathematics of the University of Granada. This work has been partially supported by the Project PID2020-117868GB-I00 and MCIN/AEI/10.13039/501100011033.
\end{document}
\begin{document}
\title{Faster Cut-Equivalent Trees in Simple Graphs}
\author{Tianyi Zhang \thanks{Tel Aviv University, \href{}{[email protected]}}}
\date{}
\maketitle
\begin{abstract}
Let $G = (V, E)$ be an undirected connected simple graph on $n$ vertices. A cut-equivalent tree of $G$ is an edge-weighted tree on the same vertex set $V$, such that for any pair of vertices $s, t\in V$, the minimum $(s, t)$-cut in the tree is also a minimum $(s, t)$-cut in $G$, and these two cuts have the same cut value. In a recent paper [Abboud, Krauthgamer and Trabelsi, STOC 2021], the authors propose the first subcubic time algorithm for constructing a cut-equivalent tree. More specifically, their algorithm has $\widetilde{O}(n^{2.5})$\footnote{$\widetilde{O}$ hides poly-logarithmic factors.} running time. Later on, this running time was significantly improved to $n^{2+o(1)}$ by two independent works [Abboud, Krauthgamer and Trabelsi, FOCS 2021] and [Li, Panigrahi, Saranurak, FOCS 2021], and then to $(m+n^{1.9})^{1+o(1)}$ by [Abboud, Krauthgamer and Trabelsi, SODA 2022].
In this paper, we improve the running time to $\widetilde{O}(n^2)$ in simple graphs if near-linear time max-flow algorithms exist, or to $\widetilde{O}(n^{17/8})$ using the currently fastest max-flow algorithm. Although our unconditional bound is slower than that of previous works, under the assumption of near-linear time max-flow algorithms our bound improves on previous works by a sub-polynomial factor in dense simple graphs.
\end{abstract}
\section{Introduction}
It is well known from Gomory and Hu \cite{gomory1961multi} that any undirected graph can be compressed into a single tree while all pairwise minimum cuts are preserved exactly. More specifically, given any undirected graph $G = (V, E)$ on $n$ vertices and $m$ edges, there exists an edge weighted tree $T$ on the same set of vertices $V$, such that: for any pair of vertices $s, t\in V$, the minimum $(s, t)$-cut in $T$ is also a minimum $(s, t)$-cut in $G$, and their cut values are equal. Such trees are called Gomory-Hu trees or cut-equivalent trees. In the original paper \cite{gomory1961multi}, Gomory and Hu showed an algorithm that reduces the task of building a cut-equivalent tree to $n-1$ max-flow instances. Gusfield \cite{gusfield1990very} modified the original algorithm of Gomory and Hu so that no graph contractions are needed when applying max-flow subroutines. So far, in weighted graphs, faster algorithms for building cut-equivalent trees were only byproducts of faster max-flow algorithms. In the recent decade, there has been a sequence of improvements on max-flows using the interior point method \cite{lee2014path,madry2016computing,liu2020faster,kathuria2020unit,brand2021minimum}, and the current best running time is $\widetilde{O}(m + n^{1.5})$ by \cite{brand2021minimum}, so computing a cut-equivalent tree takes time $\widetilde{O}(mn + n^{2.5})$.
When $G$ is a simple graph, several improvements have been made over the years. Bhalgat {\em et al.}\ \cite{hariharan2007mn} designed an $\widetilde{O}(mn)$ time algorithm for cut-equivalent trees using a tree packing approach based on \cite{gabow1995matroid,edmonds2003submodular}. Recent advances include an upper bound of $O(m^{3/2}n^{1/6})$ by Abboud, Krauthgamer and Trabelsi \cite{abboud2020new}, and in a subsequent work \cite{abboud2020subcubic} by the same set of authors, they proposed the first subcubic time algorithm that constructs cut-equivalent trees in simple graphs, and their running time is $n^{2.5+o(1)}$. Recently, by two independent works \cite{abboud2021apmf,li2021nearly}, this running time was improved to $n^{2+o(1)}$ which is almost-optimal in dense graphs, and further to a subquadratic time $(m+n^{1.9})^{1+o(1)}$ by \cite{abboud2022friendly}.
All of these upper bounds rely on the current fastest max-flow algorithm with runtime $\widetilde{O}(m+n^{1.5})$. However, even if we assume the existence of an $\widetilde{O}(m)$-time max-flow algorithm, the above algorithms still have $n^{2+o(1)}$ running time in dense graphs, which carries an extra sub-polynomial factor.
\subsection{Our results}
Let $\mathsf{MF}(m_0, n_0)$ be the running time complexity of max-flow computation in unweighted multi-graphs with $m_0$ edges and $n_0$ vertices, and let $\mathsf{MF}(m_0) = \mathsf{MF}(m_0, m_0)$ for convenience.
The main result of this paper is a near-quadratic time algorithm assuming the existence of quasi-linear time max-flow algorithms. For a detailed comparison with recently published works, please refer to the table below, where conditional runtime refers to the assumption of near-linear time max-flow algorithms.
\begin{hypothesis}\label{hypo}
$\mathsf{MF}(m_0, n_0) = \widetilde{O}(m_0 + n_0)$.
\end{hypothesis}
\begin{theorem}\label{cond}
Let $G = (V, E)$ be a simple graph on $n$ vertices. Under Hypothesis~\ref{hypo}, there is a randomized algorithm that constructs a cut-equivalent tree of $G$ in $\widetilde{O}(n^2)$ time with high probability. Using the current fastest max-flow algorithm~\cite{brand2021minimum}, the running time becomes $\widetilde{O}(n^{17/8})$.
\end{theorem}
\begin{center}
\begin{tabular}{|c|c|c|}\hline
reference & conditional runtime & unconditional runtime\\\hline
\cite{abboud2020subcubic} & $\widetilde{O}(n^{2.5})$ & $n^{2.5+o(1)}$\\\hline
\cite{abboud2021apmf} & $n^{2+o(1)}$ & $n^{2+o(1)}$\\\hline
\cite{li2021nearly} & $n^{2+o(1)}$ & $n^{2+o(1)}$\\\hline
\cite{abboud2022friendly} & $(m+n^{1.75})^{1+o(1)}$ & $(m+n^{1.9})^{1+o(1)}$\\\hline
new & $\widetilde{O}(n^2)$ & $\widetilde{O}(n^{17/8})$\\\hline
\end{tabular}
\end{center}
\noindent\textbf{Comparison with subsequent works.} In a very recent but unpublished online preprint~\cite{abboud2021gomory} (see also a note by \cite{zhang2021gomory}), significant progress has been made where an unconditional $\widetilde{O}(n^2)$ runtime has been achieved for cut-equivalent trees in general weighted graphs, which completely subsumes our result.
\subsection{Technical overview}
Our algorithm is largely based on the framework of \cite{abboud2020subcubic}. In this subsection, we will discuss the running time bottlenecks of \cite{abboud2020subcubic} and how to bypass them. For simplification, consider the following task. Let $\mathcal{T}$ be a partition tree which is an intermediate tree of the Gomory-Hu algorithm. Take an arbitrary node $N\subseteq V$ of $\mathcal{T}$ which represents a vertex subset of $V$. Let $G_\mathcal{T}[N] = (V_\mathcal{T}[N], E_\mathcal{T}[N])$ be the auxiliary graph obtained by contracting each component of $\mathcal{T}\setminus \{N\}$ into a single vertex in the original graph $G$.
Fix a pivot vertex $p\in N$; we want to find a sequence of vertices $v_1, v_2, \cdots, v_l\in N$, and compute a sequence of latest minimum cuts $(L_i, V_\mathcal{T}[N]\setminus L_i), 1\leq i\leq l$ in $G_\mathcal{T}[N]$ for $(v_i, p), 1\leq i\leq l$, where $v_i\in L_i, p\notin L_i$, such that:
\begin{enumerate}[(1)]
\item $l\geq \Omega(|N|)$.
\item For each $1\leq i\leq l$, $|L_i| \leq |N|/2$.
\end{enumerate}
If we cut all sides $L_i\cap N$ off of $N$ and form tree nodes, then by the above two properties all tree nodes are vertex subsets of $N$ of size at most $|N|/2$. So, if we can recursively repeat this procedure on smaller subsets, then it would produce a cut-equivalent tree in logarithmically many rounds.
For this task, the basic idea of \cite{abboud2020subcubic} is to apply expander decompositions. Suppose the original graph $G$ is decomposed into disjoint clusters $V = C_1\cup C_2\cup\cdots $ such that each $G[C_i]$ is a $\phi$-expander, and the total number of inter-cluster edges is bounded by $\widetilde{O}(\phi n^2)$. For simplicity let us assume $G$ is a roughly regular graph and each vertex $v\in V$ has degree $\deg_G(v)\in [n/2, n)$. For each $v$, suppose $(L_v, V\setminus L_v)$ is the latest min-cut for $(v, p)$, and let $C_v$ be the $\phi$-expander of the expander decomposition that contains $v$. Vertices of $N$ are divided into three types.
\begin{enumerate}
\item Vertices in clusters whose size is less than $n/4$, namely $|C_v| < n/4$.
\item Vertices in clusters whose size is at least $n/4$, namely $|C_v|\geq n/4$, plus that $|C_v\cap L_v| \leq 10 / \phi$. Note that there are only a constant number of such clusters.
\item $|C_v| \geq n/4$, plus that $|C_v\setminus L_v| \leq 10 / \phi$.
\end{enumerate}
\paragraph*{The first bottleneck} To compute $L_v$ for type-1 vertices, we simply go over all such $v$'s, and compute the max-flow from $v$ to $p$ in $G_\mathcal{T}[N]$. Since each type-1 vertex must contribute $n/2 - n/4 = n/4$ inter-cluster edges as the input graph $G$ is simple, the total number of type-1 vertices does not exceed $\widetilde{O}(\phi n)$, summing over all tree nodes $N$ of $\mathcal{T}$.
For type-2 vertices, using the isolating cut lemma devised in \cite{abboud2020subcubic,li2020deterministic}, we can compute all the sides $L_v$ in $\widetilde{O}(\mathsf{MF}(\mathsf{vol}_G(N)) / \phi)$ time, which sum to $\widetilde{O}(\mathsf{MF}(n^2) / \phi)$ over all nodes $N$ of $\mathcal{T}$. So, under Hypothesis~\ref{hypo}, the total time cost of type-1 and type-2 vertices is $\widetilde{O}(\phi n^3 + n^2 / \phi)$, which is always larger than $n^{2.5}$. So in their algorithm~\cite{abboud2020subcubic}, parameter $\phi$ is equal to $1/\sqrt{n}$.
Our observation is that applying max-flow for each type-1 vertex is too costly. To overcome this bottleneck, we simply avoid computing cuts $(L_v, V_\mathcal{T}[N]\setminus L_v)$ for both type-1 and type-2 vertices. If the total number of type-2\&3 vertices is larger than the total number of type-1 vertices, then we can skip all type-1 vertices. However, if the number of type-1 vertices dominates in $N$, then the number of type-2\&3 vertices is at most $\widetilde{O}(\phi n)$ over all such nodes $N$. In this case, the total degree $\mathsf{vol}_G(N)$ is at most $\widetilde{O}(\phi n^2)$, and therefore, when summing over all nodes $N$ of $\mathcal{T}$, computing all type-1 vertices takes time at most $\widetilde{O}(\phi^2 n^3)$, instead of $\widetilde{O}(\phi n^3)$, and so the new balance would be $\widetilde{O}(\phi^2n^3 + n^2/\phi)$. Therefore, if we choose $\phi = n^{-1/3}$, it becomes $n^{7/3}$ which is already better than $n^{2.5}$. In the final algorithm, we will classify expander sizes using $\log n$ many different thresholds, instead of just one threshold (which is $n/4$ here), and so in the end we can set $\phi = 1 / \log^{O(1)}n$. In general cases where graph $G$ has various vertex degrees, we need to apply the boundary-linked expander decomposition from a recent work \cite{goranci2021expander}; especially we need to make use of property (3) in Definition 4.2 of \cite{goranci2021expander}.
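Concretely, the trade-off behind this choice of $\phi$ can be checked in one line: the two terms of the new bound balance at
\begin{equation*}
\phi^2 n^3 = \frac{n^2}{\phi}\ \Longleftrightarrow\ \phi = n^{-1/3},\qquad \text{where}\quad \phi^2 n^3 + \frac{n^2}{\phi}\,\Big|_{\phi = n^{-1/3}} = 2n^{7/3},
\end{equation*}
and this choice is optimal up to constant factors, whereas the old trade-off $\phi n^3 + n^2/\phi$ balances at $\phi = n^{-1/2}$ with value $2n^{2.5}$.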
\paragraph*{The second bottleneck} In the work \cite{abboud2020subcubic}, in order to compute latest min-cuts $(L_v, V_\mathcal{T}[N]\setminus L_v)$ for type-3 vertices, they consider the laminar family formed by all sides $L_v$. If the laminar family has tree depth at most $k$, then their algorithm can compute the cuts $(L_v, V_\mathcal{T}[N]\setminus L_v)$ by applying $k + 10/\phi$ max-flows in $G_\mathcal{T}[N]$. To ensure that the depth is bounded by $k$, they first need to randomly refine node $N$ into $|N| / k$ sub-nodes, which takes $|N|/k$ Gomory-Hu steps. Hence, in total, it requires at least $k + |N|/k > \sqrt{|N|}$ max-flow invocations, which leads to a $n^{2.5}$ running time under Hypothesis~\ref{hypo}.
To bypass this barrier, the observation is that the depth of the laminar family in each $\phi$-expander is already small, so actually we do not need the help from the refinement step. More precisely, instead of looking at the entire laminar family formed by sets $\{L_v\}_{v\in N}$, we only look at the laminar family formed by sets $\{C_v\cap L_v \}_{v\in N}$ for each cluster $C$. It can be proved that the depth of this smaller laminar family is always bounded by $O(1/\phi)$. In the end, to compute latest min-cuts $(L_v, V_\mathcal{T}[N]\setminus L_v)$ for all type-2\&3 vertices, we will only use $O(1/\phi)$ max-flow instances in total.
\section{Preliminaries}
Let $G = (V, E)$ be an arbitrary simple graph on $n$ vertices and $m$ edges with unit-capacities. For any $v\in V$, let $\deg_G(v)$ be the number of its neighbors in $V$. For any subset $S\subseteq V$, define $\mathsf{vol}_G(S) = \sum_{v\in S}\deg_G(v)$, and let $\mathsf{out}_G(S)$ count the number of edges in $E\cap (S\times (V\setminus S))$, and define $G[S]$ to be the induced subgraph of $S$ on $G$.
Introduced in \cite{gabow1991applications}, the latest minimum $(s, t)$-cut is a minimum $(s, t)$-cut such that the side containing $s$ has minimum size as well. It is known that the latest minimum cut is unique, and it can be computed using any max-flow algorithm for $(s, t)$.
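As a concrete illustration (not part of the algorithm in this paper), the $s$-side of the latest minimum $(s,t)$-cut is exactly the set of vertices reachable from $s$ in the residual graph of any maximum flow. A short sketch with \texttt{networkx} follows; the helper name is ours.
\begin{verbatim}
# Sketch: the s-side of the latest minimum (s,t)-cut is the set of vertices
# reachable from s in the residual graph of any maximum flow.
import networkx as nx
from networkx.algorithms.flow import edmonds_karp

def latest_min_cut(G, s, t):
    R = edmonds_karp(G, s, t, capacity="capacity")  # residual network
    reachable, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in R[u]:
            # residual capacity > 0 means the edge is usable
            if v not in reachable and R[u][v]["capacity"] - R[u][v]["flow"] > 0:
                reachable.add(v)
                stack.append(v)
    return R.graph["flow_value"], reachable

# toy usage on a unit-capacity graph (edges added in both directions)
G = nx.DiGraph()
for a, b in [(0, 1), (1, 2), (0, 2), (2, 3)]:
    G.add_edge(a, b, capacity=1)
    G.add_edge(b, a, capacity=1)
print(latest_min_cut(G, 0, 3))  # cut value 1, smallest 0-side {0, 1, 2}
\end{verbatim}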
Here are some basic facts about min-cuts.
\begin{lemma}[Lemma 2.8 in \cite{abboud2020subcubic}]\label{union}
For any vertices $a, b, p\in V$, assume $(A, V\setminus A)$ and $(B, V\setminus B)$ are min-cuts for $(a, p), (b, p)$ respectively. If $b\in A$, then $(A\cup B, V\setminus (A\cup B))$ is a min-cut for $(a, p)$ as well.
\end{lemma}
\begin{lemma}[Lemma 2.9 in \cite{abboud2020subcubic}]\label{minus}
For any vertices $a, b, p\in V$, assume $(A, V\setminus A)$ and $(B, V\setminus B)$ are min-cuts for $(a, p), (b, p)$ respectively. If $a\notin B$ and $b\notin A$, then $(A\setminus B, V\setminus (A\setminus B))$ is a min-cut for $(a, p)$.
\end{lemma}
\subsection{Cut-equivalent trees}
A cut-equivalent tree is a tree $\mathcal{T}$ on $V$ with weighted edges, such that for any pair $s, t\in V$, there is a minimum cut $(S, V\setminus S)$ in $G$ which is also a minimum cut in $\mathcal{T}$ with the same cut value. Now let us define some terminology for cut-equivalent trees.
\paragraph*{Partition trees} A partition tree $\mathcal{T}$ of graph $G$ is a tree whose nodes $U_1, U_2, \cdots, U_l$ represent disjoint subsets of $V$ such that $V = U_1\cup U_2\cup \cdots \cup U_l$. For each node $U$ of $\mathcal{T}$, the auxiliary graph $G_\mathcal{T}[U] = (V_\mathcal{T}[U], E_\mathcal{T}[U])$ of $U$ is built by contracting each component of $\mathcal{T}\setminus \{U \}$ into a single vertex in the original graph $G$.
\paragraph*{Gomory-Hu algorithm} The Gomory-Hu algorithm provides a flexible framework for constructing a cut-equivalent tree. The algorithm begins with a partition tree $\mathcal{T}$ consisting of a single node that subsumes the entire vertex set $V$, and creates more nodes iteratively by refining its nodes. In each iteration, the algorithm picks an arbitrary node that represents a non-singleton subset $U\subseteq V$, and selects two arbitrary vertices $s, t\in U$. Then, it computes the minimum cut $(S, V_\mathcal{T}[U]\setminus S)$ between $s, t$ in the auxiliary graph $G_\mathcal{T}[U]$. Finally, it splits node $U$ into two nodes that correspond to subsets $S\cap U$ and $(V_\mathcal{T}[U]\setminus S)\cap U$ respectively, connected by an edge with weight equal to the value of the cut $(S, V_\mathcal{T}[U]\setminus S)$ in $G_\mathcal{T}[U]$. For each node $W$ that was $U$'s neighbor on $\mathcal{T}$, reconnect $W$ to node $S\cap U$ if $S$ contains the contracted node that subsumes $W$; otherwise reconnect $W$ to node $(V_\mathcal{T}[U]\setminus S)\cap U$.
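As a point of reference for how such a tree is consumed, the sketch below builds a cut-equivalent tree with the classical (non-subcubic) routine shipped in \texttt{networkx} and reads a pairwise min-cut value off the tree path; it is only an illustration on a toy graph, not the algorithm of this paper.
\begin{verbatim}
# Build a cut-equivalent tree with networkx's classical Gomory-Hu routine and
# query a pairwise min-cut: it equals the smallest edge weight on the tree path.
import networkx as nx

G = nx.barbell_graph(10, 3)                    # two cliques joined by a path
nx.set_edge_attributes(G, 1, "capacity")       # unit capacities

T = nx.gomory_hu_tree(G, capacity="capacity")  # n - 1 max-flow computations

def min_cut_value(T, s, t):
    path = nx.shortest_path(T, s, t)           # the unique s-t path in the tree
    return min(T[u][v]["weight"] for u, v in zip(path, path[1:]))

print(min_cut_value(T, 0, 22))  # 1: the bridge between the two cliques
print(min_cut_value(T, 0, 5))   # 9: two vertices inside the same clique
\end{verbatim}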
A tree is called \textbf{GH-equivalent}, if it is a partition tree that can be constructed during Gomory-Hu algorithm by certain choice of nodes $U$ to split and pairs of vertices $s, t\in U$.
\paragraph*{Refinement with respect to subsets} Consider a partition tree $\mathcal{T}$ which is GH-equivalent. Let $U$ be one of $\mathcal{T}$'s nodes and let $R\subseteq U$ be a subset. A refinement of $\mathcal{T}$ with respect to $R$ is to repeatedly execute a sequence of Gomory-Hu iterations, always picking two different vertices $s, t\in R$ that are currently in the same node of $\mathcal{T}$ and refining $\mathcal{T}$ using a minimum $(s, t)$-cut. After the refinement of $\mathcal{T}$ with respect to $R$, $\mathcal{T}$ is still GH-equivalent.
\begin{lemma}[\cite{granot1986multi}]\label{refine}
For any node $U$ of $\mathcal{T}$ and any subset $R\subseteq U$, a refinement of $\mathcal{T}$ with respect to $R$ can be computed in time $\widetilde{O}(|R|\cdot \mathsf{MF}(\mathsf{vol}_G(U)))$.
\end{lemma}
\begin{lemma}[see definition of partial trees in \cite{abboud2020subcubic}]\label{partial-tree}
After the refinement on a GH-equivalent tree $\mathcal{T}$ with respect to $R$, for any $a, b\in R$, let $N_a\ni a$ and $N_b\ni b$ be nodes of $\mathcal{T}$. Then, the min-cut of $(N_a, N_b)$ in $\mathcal{T}$ is a min-cut in $G$ for $(a, b)$ as well.
\end{lemma}
\paragraph*{$k$-partial trees} A $k$-partial tree $\mathcal{T}$ is a GH-equivalent partition tree such that all vertices $u\in V$ such that $\deg_G(u)\leq k$ are singletons of $\mathcal{T}$. The following lemma states that $k$-partial trees always exist and can be computed efficiently for small $k$'s.
\begin{lemma}[\cite{hariharan2007mn}]\label{k-partial}
There is an algorithm that, given an undirected graph on $n$ vertices with unit edge capacities and a parameter $k$, computes a $k$-partial tree in time $\min\{\widetilde{O}(nk^2), \widetilde{O}(mk)\}$.
\end{lemma}
\subsection{Expander decomposition}
For any pair of disjoint sets $S, T\subseteq V$, let $E_G(S, T)$ be the set of edges between $S, T$ in $G$. The conductance of a cut $(S, V\setminus S)$ is $\Phi_G(S) = |E_G(S, V\setminus S)| / \min\{\mathsf{vol}_G(S), \mathsf{vol}_G(V\setminus S) \}$, and the conductance of a graph $G$ is defined as $\Phi_G = \min_{S} \Phi_G(S)$. A graph $G$ is a $\phi$-expander if $\Phi_G\geq \phi$.
For any vertex subset $S\subseteq V$ and positive value $x>0$, let $G[S]^x$ be the subgraph induced on $S$ where we add $\lceil x\rceil$ self-loops to each vertex $v\in S$ for every boundary edge $(v, w), w\notin S$. As an example, in graph $G[S]^1$ the degrees of all vertices are the same as in the original graph $G$.
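For concreteness, the conductance of a given cut in an unweighted graph can be evaluated directly from these definitions; a small sketch (the helper name is ours):
\begin{verbatim}
# Conductance of a cut (S, V \ S) in an unweighted graph G, following the
# definition above: |E(S, V \ S)| / min(vol(S), vol(V \ S)).
import networkx as nx

def cut_conductance(G, S):
    S = set(S)
    boundary = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    vol_S = sum(d for _, d in G.degree(S))
    vol_rest = 2 * G.number_of_edges() - vol_S
    return boundary / min(vol_S, vol_rest)

G = nx.cycle_graph(8)                 # an 8-cycle
print(cut_conductance(G, {0, 1, 2}))  # 2 crossing edges / volume 6 = 1/3
\end{verbatim}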
We need a strong expander decomposition algorithm from a recent work~\cite{goranci2021expander}.
\begin{definition}[boundary-linked expander decomposition \cite{goranci2021expander}]\label{boundary}
Let $G = (V, E)$ be a graph on $n$ vertices and $m$ edges, and $\alpha, \phi\in (0, 1)$ be parameters. An $(\alpha, \phi)$-expander decomposition of $V$ consists of a partition $\mathcal{C} = \{C_1, C_2, \cdots, C_k \}$ of $V$ such that the following holds.
\begin{enumerate}[(1)]
\item $\sum_{i=1}^k\mathsf{out}_G(C_i)\leq \log^4n \cdot \phi m$.
\item For any $i$, $G[C_i]^{\alpha / \phi}$ is a $\phi$-expander.
\item For any $i$, $\mathsf{out}_G(C_i)\leq \log^7n \cdot\phi\mathsf{vol}_G(C_i)$.
\end{enumerate}
\end{definition}
In the original paper~\cite{goranci2021expander}, the upper bounds are stated with $O(\cdot)$ notations that hide constant factors; here we simply raise the exponent of log-factors to simplify the notations.
\begin{lemma}[\cite{goranci2021expander}]
Given any unweighted graph $G= (V, E)$ on $n$ vertices and $m$ edges, for any $\alpha, \phi\in (0, 1)$ such that $\alpha \leq 1 / \log^\mathsf{c} m$ where $\mathsf{c}$ is a certain constant, an $(\alpha, \phi)$-expander decomposition can be computed in $\widetilde{O}(m / \phi)$ time with high probability.
\end{lemma}
\subsection{Isolating cuts}
\begin{lemma}[isolating cuts~\cite{abboud2020subcubic}]\label{isolate}
Given an undirected edge-weighted graph $H = (X, F, \omega)$, a pivot vertex $p\in X$, and a set of terminal vertices $T\subseteq X$, let $(K_u, X\setminus K_u)$ be the latest minimum $(u, p)$-cut for each $u\in T$. Then, in time $\widetilde{O}(\mathsf{MF}(|F|, |X|))$ we can compute $|T|$ disjoint sets $\{K^\prime_u\}_{u\in T}$ such that for each $u\in T$, if $K_u\cap T = \{u\}$ then $K^\prime_u = K_u$.
\end{lemma}
\section{Quadratic time cut-equivalent tree under Hypothesis~\ref{hypo}}
\subsection{The main algorithm}
In this section we try to prove the first half of Theorem~\ref{cond}. Let $G = (V, E)$ denote the simple graph as our input data. Define some parameters: $\phi = \frac{1}{10\log^{\mathsf{c} + 10}n}$ is a global conductance parameter that is used to construct expander decompositions, and $r = 10\log^5 n$ is a sampling parameter which is needed when choosing pivots; here $\mathsf{c}$ is the same constant as in Definition~\ref{boundary}. Without loss of generality, assume $\sqrt{n}$ is an integral power of $2$. Define a degree set $\mathcal{D} = \{\sqrt{n}, 2\sqrt{n}, 2^2\sqrt{n}, \cdots, n \}$.
\paragraph*{Preparation} Throughout the algorithm, $\mathcal{T}$ will be the cut-equivalent tree under construction, where each of $\mathcal{T}$'s node will represent a subset of vertices of $V$. As a preparation step, compute a $(\phi, \phi)$-expander decomposition on $G$ and obtain a partitioning $\mathcal{C} = \{C_1, C_2, \cdots, C_k \}$ of $V$. Categorize clusters in $\mathcal{C}$ according to their sizes: for each $2^i$, define $\mathcal{C}_{i}$ to be the set of clusters whose sizes are within interval $[2^i, 2^{i+1})$.
At the beginning, initialize $\mathcal{T}$ to be a $\sqrt{n}$-partial tree by applying the algorithm from~\cite{hariharan2007mn} that takes running time $\widetilde{O}(n^2)$.
\paragraph*{Iteration} In each round, we will simultaneously divide all nodes of $\mathcal{T}$ which contain at least $20r$ vertices of $V$. In the end, the total number of rounds will be bounded by $\widetilde{O}(1)$. To describe our algorithm, let us focus on any single node $N\subseteq V$ of $\mathcal{T}$ whose size $|N|$ is at least $20r$. The first step is to refine the partition of $N$ by a set of random pivots. More specifically, sample a pivot subset $R\subseteq N$ of size $10r$ by picking each vertex with probability proportional to its degree in $G$; more precisely, sample $10r$ times a vertex from $N$, where each vertex $v\in N$ is selected with probability $\deg_G(v) / \mathsf{vol}_G(N)$.
Then, refine the node $N$ of $\mathcal{T}$ by computing a partial tree with respect to $R$ using Lemma~\ref{refine}, which further divides $N$ into several subsets each containing a distinct vertex from $R$. After applying this pivot-sampling \& refining step to each of the original nodes of $\mathcal{T}$, $\mathcal{T}$ has undergone one pass of partitioning, and now each node $U$ of $\mathcal{T}$ is associated with a unique pivot vertex $p\in U\cap R$.
For the rest, let us focus on each node $U$ of the current $\mathcal{T}$ as well as its pivot $p$, such that $U\subseteq N$ is a subdivision of the previous node $N$ but $\mathsf{vol}_G(U)\geq 0.5\mathsf{vol}_G(N)$. For each $k\in \mathcal{D}$, define $U_k = \{u\in U\mid \deg_G(u)\in [k, 2k) \}$, so $U = \bigcup_{k\in \mathcal{D}}U_k$. Take a parameter $d\in \mathcal{D}$ such that $d|U_d|$ is maximized, namely $d\in\arg\max_{k\in \mathcal{D}} k|U_k|$. Therefore, $2d|U_d|\geq \mathsf{vol}_G(U)/\log n$. Next, for each index $i$, define $\mathsf{cnt}[i]$ to be the number of vertices from $U_d$ that lie within clusters from $\mathcal{C}_i$. Take $s = 2^{i_U}$ such that $\mathsf{cnt}[i_U]$ is maximized.
For each cluster $C\in \mathcal{C}_{i_U}$ such that $C\cap U_d$ is nonempty, conduct the expander search routine described in the next subsection (Algorithm~\ref{exp-search}) to compute cuts $(K_u, V_\mathcal{T}[U]\setminus K_u)$ in the auxiliary graph $G_\mathcal{T}[U]$ for a set $W_C$ of vertices $u\in W_C\subseteq C\cap U_d$ with respect to pivot $p$; so $u\in K_u, p\in V_\mathcal{T}[U]\setminus K_u$. It will be guaranteed that for each $u\in W_C$, we have $\mathsf{vol}_G(K_u\cap U)\leq 0.5\mathsf{vol}_G(N)$. In the end, define $W = \bigcup_{C\in\mathcal{C}_{i_U}}W_C$.
We will prove that all $(K_u, V_\mathcal{T}[U]\setminus K_u)$ are latest min-cuts for $(u, p)$. Since all latest minimum cuts $(K_u, V_\mathcal{T}[U]\setminus K_u)$ are with respect to $p$, they should form a laminar family. Then, for each $u\in W$ such that $K_u$ is maximal in the laminar family, split $K_u\cap U$ off the node $U$ and create a new node for vertex set $K_u\cap U$. Since we always take maximal $K_u$'s, all of these sets are disjoint in $V_\mathcal{T}[U]$, so the creation of new nodes on $\mathcal{T}$ is well-defined. Pseudo-code \textsf{CondGomoryHu} summarizes our algorithm.
\begin{algorithm}
\caption{\textsf{CondGomoryHu}$(G = (V, E))$}
initialize a partition tree $\mathcal{T}$, as well as parameters $\phi, r$\;
\While{$\exists$ a node $N\subseteq V$ of $\mathcal{T}$ with $|N| \geq 20r$}{
\For{node $N$ of $\mathcal{T}$ with $|N|\geq 20r$}{
repeat for $10r$ times: each time we sample a vertex $u\in N$ with probability $\frac{\deg_G(u)}{\mathsf{vol}_G(N)}$, and let the sampled set be $R$\;
call Lemma~\ref{refine} on node $N$ with respect to $R$\;
\For{node $U\subseteq N$ of $\mathcal{T}$ such that $\mathsf{vol}_G(U) > 0.5\mathsf{vol}_G(N)$}{
take $d\in\mathcal{D}$ such that $d|U_d|$ is maximized\;
take $s = 2^{i_U}$ such that $\mathsf{cnt}[i_U]$ is maximized\;
\For{each $C\in \mathcal{C}_{i_U}$}{
run expander search on $C$ within node $U$ to compute a subset $W_C\subseteq C\cap U_d$, and the latest min-cuts $(K_u, V_\mathcal{T}[U]\setminus K_u)$ for each $u\in W_C$\;
}
define $W = \bigcup_{C\in\mathcal{C}_{i_U}}W_C$\;
for each $u\in W$ such that $K_u$ is maximal, split $K_u\cap U$ off of $U$ and create a new node on $\mathcal{T}$\;
}
}
}
\For{node $N$ of $\mathcal{T}$ such that $|N| < 20r$}{
repeatedly refine $N$ using the generic Gomory-Hu steps until all nodes are singletons\;
}
\Return $\mathcal{T}$ as a cut-equivalent tree\;
\end{algorithm}
\subsection{Finding latest min-cuts in expanders}
Our algorithm is similar to the one from \cite{abboud2020subcubic}. The input to this procedure is a node $U\subseteq V$ of the current partition tree $\mathcal{T}$ under construction, together with parameters $s, d$ defined previously, as well as an expander $C\in \mathcal{C}_{i_U}$ that intersects $U_d$. The output of this procedure will be a subset $W_C\subseteq C\cap U_d$, and their cuts $(K_u, V_\mathcal{T}[U]\setminus K_u)$ for all $u\in W_C$ in the auxiliary graph $G_\mathcal{T}[U]$, with the extra property that $\mathsf{vol}_G(K_u\cap U)\leq 0.5\mathsf{vol}_G(N)$. In the end, we will show that with high probability, all these cuts are latest min-cuts in $G_\mathcal{T}[U]$.
To describe our algorithm, consider all vertices $u\in C\cap U_d$ and their latest minimum cuts $(L_u, V_\mathcal{T}[U]\setminus L_u)$ with respect to pivot $p\in U$. Let $\lambda_u$ be the cut value of $(L_u, V_\mathcal{T}[U]\setminus L_u)$ in $G_\mathcal{T}[U]$. All of the sets $L_u\cap C\cap U_d$ should form a laminar family, which corresponds to a tree structure $\mathcal{T}_p^d[C]$, where each tree node $M$ of $\mathcal{T}_p^d[C]$ packs a subset of $C\cap U_d$, such that for all $u\in M$ the set $L_u\cap C\cap U_d$ are the same. More specifically, $\mathcal{T}_p^d[C]$ is constructed as follows: first arrange the laminar family $\{L_u\cap C\cap U_d \}_{u\in C\cap U_d}$ as a tree, and then for each node $L_u\cap C\cap U_d$ on this tree, associate with this node the set $M = \{v\in C\cap U_d\mid L_v\cap C\cap U_d = L_u\cap C\cap U_d \}\subseteq L_u\cap C\cap U_d$.
We need to emphasize that our algorithm does not know $\mathcal{T}_p^d[C]$ at the beginning, but it will gradually explore part of $\mathcal{T}_p^d[C]$ during the process.
\paragraph*{Preparation} Initialize $W_C = \emptyset$. Assume $|C\cap U_d| \geq 10/\phi^2$; otherwise we could simply reset $W_C = C\cap U_d$ and run $|C\cap U_d|$ instances of max-flow to compute all latest cuts. As a preparatory step, the algorithm repeatedly takes a random subset $T\subseteq C\cap U_d$ by selecting each vertex independently with probability $\phi$. Then apply Lemma~\ref{isolate} on graph $G_\mathcal{T}[U]$ to compute isolating cuts of terminal vertices from $T$ with respect to pivot $p$. This procedure goes on for $10\log n /\phi$ iterations, and for each $u\in C\cap U_d$, let $(A_u^i, V_\mathcal{T}[U]\setminus A_u^i)$ be the isolating cut computed for $u$ in the $i$-th iteration; if $u$ was not selected by $T$ in the $i$-th iteration, simply set $A_u^i = \{u\}$. Finally, let $A_u$ be the set among $\{A_u^i \}_{1\leq i\leq 10\log n / \phi}$ such that the cut value of $(A_u, V_\mathcal{T}[U]\setminus A_u)$ in the auxiliary graph $G_\mathcal{T}[U]$ is minimized; to break ties, we select $A_u^i$ that minimizes $|A_u^i|$. Let $\kappa_u$ be the cut value of $(A_u, V_\mathcal{T}[U]\setminus A_u)$ for $u\in C\cap U_d$.
If $|A_u\cap C\cap U_d| > 2/\phi$ for some $u$, then the algorithm fails and aborts; we will prove that the failure probability is small.
\paragraph*{Exploring $\mathcal{T}_p^d[C]$} A node $M$ of $\mathcal{T}_p^d[C]$ is called \textbf{large} if $|C\cap U_d\setminus L_u| \leq 2/\phi$ for any $u\in M$, and if $|L_u\cap C\cap U_d|\leq 2/\phi$ it is called \textbf{small}. Similarly, a vertex $u\in C\cap U_d$ is called large, if $|C\cap U_d\setminus L_u| \leq 2 / \phi$; otherwise if $|L_u\cap C\cap U_d|\leq 2 / \phi$, it is called small.
\begin{lemma}
All large nodes on $\mathcal{T}_p^d[C]$ must lie on a single path ending at the root.
\end{lemma}
\begin{proof}
Suppose otherwise that there exist two different large nodes $M_1, M_2$ of $\mathcal{T}_p^d[C]$ such that $M_1\cap M_2 = \emptyset$. Take any $u_1\in M_1, u_2\in M_2$. Since $M_1, M_2$ are large nodes, we know $|C\cap U_d\setminus L_{u_1}|\leq 2 / \phi$, $|C\cap U_d\setminus L_{u_2}|\leq 2 / \phi$. As $C\cap U_d\cap L_{u_1}$ and $C\cap U_d\cap L_{u_2}$ are disjoint, we have $|C\cap U_d| \leq |C\cap U_d\setminus L_{u_1}| + |C\cap U_d\setminus L_{u_2}| \leq 4 / \phi$, which contradicts that $|C\cap U_d|\geq 10 / \phi^2$.
\end{proof}
The main idea of our algorithm is to find the lowest large node on $\mathcal{T}_p^d[C]$; here ``lowest'' means farthest from the root. Initialize a set $S \leftarrow C\cap U_d$ and maintain an ordering of vertices in $S$ according to the cut values $\kappa_u$; also initialize a variable $Q \leftarrow \emptyset$.
Repeat the following procedure: take $u\in S\setminus Q$ such that $\kappa_u$ is maximized. Apply max-flow in graph $G_\mathcal{T}[U]$ to compute the latest min-cut $L_u$ for $(u, p)$. Consider two possibilities.
\begin{itemize}
\item $L_u$ is small. Then assign $S\leftarrow S\setminus (Q\cup L_u)$, and $Q\leftarrow \emptyset$.
\item $L_u$ is large. If $L_u\cap C\cap U_d = S$, then add $u$ to $Q$; otherwise if $L_u\cap C\cap U_d\neq S$, reset $S \leftarrow L_u\cap C\cap U_d$ and $Q\leftarrow \{u\}$.
\end{itemize}
The repetition terminates if either (1) $|C\cap U_d \setminus S| > 2/\phi$ or (2) $|Q| > 2/\phi$. In the first case, assign $W_C\leftarrow \{u\in S\mid \mathsf{vol}_G(A_u\cap U)\leq 0.5\mathsf{vol}_G(N) \}, K_u\leftarrow A_u$ and terminate. Note that this notation $\mathsf{vol}_G(A_u\cap U)$ is well-defined, since all vertices in $U$ are also vertices in $V$, not contracted vertices in $V_\mathcal{T}[U]$.
Now suppose we are in the second case. Let $L = L_v$ for an arbitrary $v\in Q$. We will prove afterwards that $(L, V_\mathcal{T}[U]\setminus L)$ is the latest min-cut corresponding to the lowest large node. Let $\kappa$ be the cut value of $(L, V_\mathcal{T}[U]\setminus L)$, and define $B = \{u\in S\mid \kappa_u > \kappa \}$. Assign $W_C\leftarrow \{u\in S\setminus B\mid \mathsf{vol}_G(A_u\cap U)\leq 0.5\mathsf{vol}_G(N) \}$ and $K_u\leftarrow A_u$ for each $u\in W_C$.
After that, take an arbitrary $v\in Q$. If it satisfies that $\mathsf{vol}_G(L\cap U) \leq 0.5\mathsf{vol}_G(N)$, then update $W_C = W_C\cup \{v \}$ and $K_v\leftarrow L$. The whole exploration procedure is summarized as pseudo-code \textsf{ExploreTree}.
\begin{algorithm}
\caption{\textsf{ExploreTree}$(U, p, d, C)$}
\label{exp-search}
prepare $A_u$ and $\kappa_u$ for all $u\in C\cap U_d$\;
initialize $S\leftarrow C\cap U_d, Q\leftarrow \emptyset$\;
\While{$\max\{|C\cap U_d\setminus S|, |Q| \}\leq 2/\phi$}{
take $u\in \arg\max_{v\in S\setminus Q} \{\kappa_v\}$\;
apply max-flow to compute $L_u$\;
\If{$u$ is small}{
$S\leftarrow S\setminus (Q\cup L_u)$, and $Q\leftarrow \emptyset$\;
}\Else{
\If{$L_u\cap C\cap U_d = S$}{
$Q\leftarrow Q\cup \{u\}$\;
}\Else{
$S\leftarrow L_u\cap C\cap U_d, Q\leftarrow \{u\}$\;
}
}
}
\If{$|C\cap U_d\setminus S| > 2/\phi$}{
\Return $W_C\leftarrow \{u\in S\mid \mathsf{vol}_G(A_u\cap U)\leq 0.5\mathsf{vol}_G(N) \}$, $K_u\leftarrow A_u, \forall u\in W_C$\;
}\Else{
draw an arbitrary vertex $v\in Q$ and set $L = L_v$\;
define $B = \{u\in S\mid \kappa_u > \kappa \}$ where $\kappa$ is the cut value of $(L, V_\mathcal{T}[U]\setminus L)$\;
$W_C\leftarrow \{u\in S\setminus B\mid \mathsf{vol}_G(A_u\cap U)\leq 0.5\mathsf{vol}_G(N) \}$, $K_u\leftarrow A_u, \forall u\in W_C$\;
\If{$\mathsf{vol}_G(L\cap U)\leq 0.5\mathsf{vol}_G(N)$}{
assign $W_C \leftarrow W_C\cup\{v\}$ and $K_v\leftarrow L$\;
}
\Return $W_C$\;
}
\end{algorithm}
\begin{comment}
Let $\kappa_u$ be the cut value of $(K_u, V_\mathcal{T}[U]\setminus K_u)$ for $u\in C\cap U_d$. Sort all vertices of $C\cap U_d$ according to $\kappa_u$, and collect the set $T\subseteq S$ of all $u\in C\cap U_d$ whose corresponding values $\kappa_u$ are the top-$\lceil 1/\phi\rceil$ largest. Then, for each $u\in T$ directly compute the latest cut with respect to $p$ using max-flow algorithms under Hypothesis~\ref{hypo}; here we rename $K_u$ such that $(K_u, V_\mathcal{T}[U]\setminus K_u)$ is the latest minimum cut returned by the max-flow computations. In the end, for each $v\in C\cap U_d$, $(K_v, V_\mathcal{T}[U]\setminus K_v)$ will be returned as output as the latest minimum cut with respect to $p$. Pseudo-code \textsf{ExpanderSearch} summarizes the above procedures.
\begin{algorithm}
\caption{\textsf{ExpanderSearch}$(G, \mathcal{T}, U, p, d, C)$}
\For{$t = 1, 2, \cdots, 10\log n / \phi$}{
take a random subset $S\subseteq C\cap U_d$ by picking each vertex with independent probability $\phi$\;
apply Lemma~\ref{isolate} on $G_\mathcal{T}[U]$ to compute isolating cuts of $S$ with respect to pivot $p$\;
}
let $\kappa_u$ be the smallest cut value of $(K_u, V_\mathcal{T}[U]\setminus K_u)$ seen so far\;
sort all vertices of $C\cap U_d$ according to $\kappa_u$\;
for the topmost $\lceil 1/\phi\rceil$ vertices, compute latest minimum cuts using max-flow algorithms\;
\Return cuts $(K_u, V_\mathcal{T}[U]\setminus K_u)$'s for all $u\in C\cap U_d$\;
\end{algorithm}
\end{comment}
\subsection{Proof of correctness}
First we prove a basic property of isolating cuts, which is also used in \cite{abboud2020subcubic}.
\begin{lemma}[\cite{abboud2020subcubic}]\label{imbalance}
For each $u\in C\cap U_d$, either $|L_u\cap C\cap U_d|\leq 2/\phi$ or $|C\cap U_d\setminus L_u|\leq 2/\phi$; namely each vertex is either large or small. Furthermore, with high probability, when $u$ is small, $A_u = L_u$.
\end{lemma}
\begin{proof}
Since $\deg_G(u) < 2d$, the cut value of $(L_u, V_\mathcal{T}[U]\setminus L_u)$ is smaller than $2d$. Unpack all contracted vertices of $V_\mathcal{T}[U]$, and let $L^\prime_u\subseteq V$ be the set of vertices belonging to $L_u$ or contracted in $L_u$. Therefore, since $G_\mathcal{T}[U]$ is a contracted graph of $G$, the cut value of $(L_u, V_\mathcal{T}[U]\setminus L_u)$ is equal to the cut value of $(L_u^\prime, V\setminus L_u^\prime)$.
Suppose otherwise that $|L_u^\prime \cap C\cap U_d| > 2/\phi$ and $|C\cap U_d\setminus L_u^\prime| > 2/\phi$. Then, by property (2) of the $(\phi, \phi)$-expander decomposition, $G[C]^1$ is a $\phi$-expander, and so the number of edges between $L_u^\prime\cap C$ and $C\setminus L_u^\prime$ is at least $\phi \cdot \min\{\mathsf{vol}_G(L_u^\prime\cap C), \mathsf{vol}_G(C\setminus L_u^\prime) \} > 2d$, which is a contradiction.
Let us turn to the second half of the statement. Suppose $u$ is small, so $|L_u\cap C\cap U_d|\leq 2 / \phi$. Then, since $T$ selects each vertex in $C\cap U_d$ independently with probability $\phi$, with probability at least $\phi\cdot (1 - \phi)^{2/\phi} > \phi/8$ we have that $T\cap L_u\cap C = \{u\}$ is a singleton. In this case, by Lemma~\ref{isolate}, $A_u^i = L_u$. As $T$ is sampled for $10\log n / \phi$ times, with high probability $A_u = L_u$.
\end{proof}
Here is a basic fact regarding large vertices.
\begin{lemma}\label{basic}
For any large vertex $u$, $\lambda_u < \kappa_u$.
\end{lemma}
\begin{proof}
If $\lambda_u = \kappa_u$, then the latest min-cut should be contained in $A_u$, which contains at most $2/\phi$ vertices from $C\cap U_d$, and so $u$ cannot be large.
\end{proof}
Next we analyze the behavior of the while-loop in \textsf{ExploreTree}.
\begin{lemma}\label{Mset}
If $Q\neq\emptyset$, then for each $u\in Q$, $L_u\cap C\cap U_d = S$.
\end{lemma}
\begin{proof}
Vertices are added to $Q$ only on line-10, where the condition $L_u\cap C\cap U_d = S$ holds, or on line-12, where $Q$ is reset to $\{u\}$ and $S$ is simultaneously set to $L_u\cap C\cap U_d$; whenever $S$ changes on line-7, $Q$ is reset to $\emptyset$. So the equality always holds.
\end{proof}
\begin{lemma}\label{explore}
At the beginning of any iteration of the while-loop, $\forall v\in S$, if $v$ is large, then we have $L_v\cap C\cap U_d \subseteq S$.
\end{lemma}
\begin{proof}
We prove this statement by induction on the number of iterations. Initially, this holds as $S = C\cap U_d$. For any intermediate iteration, consider two cases.
\begin{itemize}
\item $u$ is small. We claim that before updating $S$, for all large vertices $v\in S\setminus (Q\cup L_u)$, $L_v$ and $L_u\cup Q$ are disjoint; if this can be proved, then we conclude $L_v\cap C\cap U_d\subseteq S\setminus (Q\cup L_u)$, as $L_v\cap C\cap U_d\subseteq S$ holds before.
Suppose that $L_v\cap L_u\neq \emptyset$. Then as all latest minimum cuts form a laminar family and that $v\notin L_u$, it must be $L_u\subseteq L_v$. As $v$ is large, $L_v\cap C\cap U_d$ contains more vertices than $A_v\cap C\cap U_d$, and so by Lemma~\ref{basic}, we have $\lambda_v < \kappa_v$. Now, by line-4, since $\kappa_u$ is the largest among all vertices in $S\setminus Q$, $\kappa_v\leq \kappa_u$. Finally, using Lemma~\ref{imbalance}, we know $\kappa_u = \lambda_u$ as $u$ is small. Concatenating all the inequalities we have: \[\lambda_v < \kappa_v\leq \kappa_u = \lambda_u\]
which contradicts the fact that $(L_u, V_\mathcal{T}[U]\setminus L_u)$ is a min-cut for $(u, p)$ as $L_u\subseteq L_v$.
Now suppose that $L_v\cap Q\neq \emptyset$, say $w\in L_v\cap Q$. Then by Lemma~\ref{Mset}, $v\in S\subseteq L_w$, and so both $w, v$ are in $L_v\cap L_w$, which means $L_v = L_w$, and so $L_v\cap L_u = L_w\cap L_u = L_u\neq \emptyset$, which is a contradiction as discussed just before.
\item $u$ is large. In this case, the algorithm would reassign $S\leftarrow L_u\cap C\cap U_d$. Then, for all $v\in S$, as $(L_v, V_\mathcal{T}[U]\setminus L_v)$ is the latest minimum cut, it must be $L_v\subseteq L_u$, irrespective of whether $v$ is large or not.\qedhere
\end{itemize}
\end{proof}
Next we prove that when the while-loop ends, either all vertices in $S$ are small, or $S$ corresponds to the cut of the lowest large node on $\mathcal{T}_p^d[C]$.
\begin{lemma}\label{all-small}
If $|C\cap U_d\setminus S| > 2/\phi$, then all vertices in $S$ are small.
\end{lemma}
\begin{proof}
Consider any vertex $u\in S$. If $u$ is large, then by Lemma~\ref{explore}, $L_u\cap C\cap U_d\subseteq S$, and so $|C\cap U_d\setminus L_u|\geq |C\cap U_d\setminus S| > 2/\phi$, which contradicts the definition of being large.
\end{proof}
\begin{lemma}\label{lowest}
After the while-loop ends, if $|C\cap U_d\setminus S| \leq 2/\phi$ and $|Q| > 2/\phi$, then $(L, V_\mathcal{T}[U]\setminus L)$ is the latest min-cut of the lowest large node on $\mathcal{T}_p^d[C]$. Moreover, $B = \{u\in S\mid \kappa_u >\kappa \}$ is the set of all vertices $u$ such that $L_u\cap C\cap U_d = S$, and consequently all $L_u, \forall u\in B$ are equal.
\end{lemma}
\begin{proof}
As the while-loop ends with $|Q| > 2/\phi$, the last iteration must have ended on line-10. Therefore, $(L, V_\mathcal{T}[U]\setminus L)$ is the latest min-cut of some $u\in Q$. Suppose otherwise $(L, V_\mathcal{T}[U]\setminus L)$ is not the latest min-cut of the lowest large node on the imaginary tree $\mathcal{T}_p^d[C]$. Then, there exists a large vertex $v\in L\cap C\cap U_d$ such that $L_v\cap C\cap U_d\subsetneq S$ but $|C\cap U_d\setminus L_v| \leq 2/\phi$. As $|Q| > 2/\phi$, there must exist $w\in L_v\cap Q$. By Lemma~\ref{Mset}, $v\in L\cap C\cap U_d =S = L_w\cap C\cap U_d$, so both $v, w$ are in $L_v\cap L_w$, and consequently $L_v = L_w$, $L_v\cap C\cap U_d = S$, contradiction.
Now let us turn to the second half of the statement. Consider any $u\in B$. $(A_u, V_\mathcal{T}[U]\setminus A_u)$ cannot be a min-cut as $\kappa_u > \kappa$. By Lemma~\ref{imbalance}, $u$ must be a large vertex. On the one hand, by Lemma~\ref{explore}, $L_u\cap C\cap U_d\subseteq S$, and on the other hand, $L_u\cap C\cap U_d$ cannot be strictly smaller than $S$ as $S$ is the lowest already. Hence $L_u\cap C\cap U_d = S$.
For any $u\notin B$, by definition $\kappa_u\leq \kappa$. If $u$ is large, then $\lambda_u<\kappa_u \leq \kappa$, so $L_u\cap C\cap U_d\subsetneq S$, which also contradicts that $S$ corresponds to the lowest large node on $\mathcal{T}_p^d[C]$.
\end{proof}
Finally, we prove that all cuts $(K_u, V_\mathcal{T}[U]\setminus K_u)$ output by the algorithm are latest min-cuts with high probability.
\begin{lemma}
All cuts $(K_u, V_\mathcal{T}[U]\setminus K_u)$ output by the algorithm are latest min-cuts with high probability.
\end{lemma}
\begin{proof}
If the algorithm terminates on line-14, then by Lemma~\ref{all-small}, all vertices in $W_C$ are small. So by Lemma~\ref{imbalance}, $L_u = A_u = K_u, \forall u\in W_C$. Otherwise, if the algorithm terminates on line-21, then by Lemma~\ref{lowest}, all vertices in $W_C$ are small. Hence, by Lemma~\ref{imbalance}, $L_u = A_u = K_u, \forall u\in W_C\setminus B$; also, for any $u\in W_C\cap B$, we have $L_u = L = K_u$.
\end{proof}
\subsection{Running time analysis}
First we analyze the running time of each call of expander search.
\begin{lemma}\label{expander-search}
The total running time of the expander search in graph $G_\mathcal{T}[U]$ is bounded by $$\widetilde{O}(\mathsf{MF}(\mathsf{vol}_G(U), |V_\mathcal{T}[U]|) / \phi)$$
\end{lemma}
\begin{proof}
During the preparation step, each invocation of Lemma~\ref{isolate} induces a set of max-flow instances whose total size is bounded by $\widetilde{O}(|E_\mathcal{T}[U]|) = \widetilde{O}(\mathsf{vol}_G(U))$. Since it is repeated for $O(\log n / \phi)$ times, the total time is at most $\widetilde{O}(\mathsf{MF}(\mathsf{vol}_G(U), |V_\mathcal{T}[U]|) / \phi)$.
Next, let us analyze the cost of \textsf{ExploreTree}.
\begin{claim}
After each iteration of the while-loop, the value of $|C\cap U_d\setminus S| + |Q|$ always increases by at least $1$, so the total number of max-flow instances during the loop is bounded by $O(1/\phi)$.
\end{claim}
\begin{proof}[Proof of claim]
If an iteration ends on line-10, $Q$ increases by one while $S$ does not change. If an iteration of the while-loop ends on line-7, then on the one hand, by Lemma~\ref{Mset} we have $Q\subseteq S$; on the other hand, by the pseudo-code, $u\notin Q$ before updating $S, Q$. Hence, after line-7, $|C\cap U_d\setminus S| + |Q|$ increases by at least $1$.
If an iteration ends on line-12, we claim that before updating $S, Q$, we have $L_u \cap Q = \emptyset$. In fact, by Lemma~\ref{Mset}, for any $w\in Q$, $L_w\cap C\cap U_d = S$. By Lemma~\ref{explore}, as $L_u\cap C\cap U_d\neq S$, it must be $L_u\cap C\cap U_d\subsetneq S = L_w\cap C\cap U_d$. Hence, $w\notin L_u$. As $w$ is arbitrary, we know $Q\cap L_u = \emptyset$. Therefore, after updating $S\leftarrow L_u\cap C\cap U_d$, $|C\cap U_d\setminus S|$ has increased by $|Q|$. Notice that after updating $Q$, $|Q| = 1$. So $|C\cap U_d\setminus S| + |Q|$ has increased by one.
\end{proof}
Since each iteration of the while-loop conducts one max-flow computation in graph $G_\mathcal{T}[U]$, by the above claim, the total cost of the while-loop involves max-flow instances of total size $\widetilde{O}(\mathsf{vol}_G(U) / \phi)$, and the reduction time is dominated by the same amount. After the while-loop, the running time is linear in the size of the output, so it is not the bottleneck.
\end{proof}
Next we analyze the running time during refinement of $U$.
\begin{lemma}\label{prune}
The total running time of cutting vertices (the for-loop on line-6 of \textsf{CondGomoryHu}) from $U$ takes total time of $\widetilde{O}(\frac{n}{s}\cdot\mathsf{MF}( 2d\cdot\mathsf{cnt}[i_U]\log^2n, |V_\mathcal{T}[U]|))$.
\end{lemma}
\begin{proof}
On the one hand, the number of clusters in $\mathcal{C}_{i_U}$ is at most $n/s$ since each cluster has size at least $s$. So, by Lemma~\ref{expander-search}, the total time of expander search is $\widetilde{O}(\frac{n}{s}\cdot \mathsf{MF}(\mathsf{vol}_G(U)))$. By maximality of $d|U_d|$ and $\mathsf{cnt}[i_U]$, we have that: \[\mathsf{vol}_G(U)\leq 2d|U_d|\log n \leq 2d\cdot \mathsf{cnt}[i_U]\log^2n\]
Since the number of edges in $G_\mathcal{T}[U]$ is $\mathsf{vol}_G(U)$, the overall time complexity would be $\widetilde{O}(\frac{n}{s}\cdot\mathsf{MF}( 2d\cdot\mathsf{cnt}[i_U]\log^2n, |V_\mathcal{T}[U]|))$.
\end{proof}
To bound the total time across all different nodes of $\mathcal{T}$ that correspond to the same choice of $(s, d)$, we need the following lemma.
\begin{lemma}\label{count}
In any single iteration of the while-loop on line-2 of \textsf{CondGomoryHu}, over all different nodes $U$ of $\mathcal{T}$ that correspond to the same choice of $(s, d)$, we have $\sum_{U}\mathsf{cnt}[i_U]\leq 4sn/d$.
\end{lemma}
\begin{proof}
If $s\geq d/4$, then since all such nodes $U$ are packing disjoint subsets of vertices of $V$, $\sum_{U}\mathsf{cnt}[i_U]\leq n\leq 4sn/d$. So next we only consider the case where $s < d/4$.
When $s < d/4$, we can upper bound the total number of vertices in clusters in $\mathcal{C}_{i_U}$ whose degrees in $G$ are within the interval $[d, 2d)$. Take any cluster $C\in \mathcal{C}_{i_U}$ and any vertex $u\in C\cap U_d$. Since $|C| < 2s \leq d/2$ and $G$ is a simple graph, at least $d/2$ of $u$'s neighbors in $G$ are outside of $C$. So the number of crossing edges contributed by $u$ is at least $d/2$. By property (3) of Definition~\ref{boundary}, the total number of crossing edges can be bounded as:
\[\begin{aligned}
\sum_{C\in\mathcal{C}_{i_U}}\mathsf{out}_G(C) &\leq \log^7n\cdot \phi\cdot \sum_{C\in\mathcal{C}_{i_U}}\mathsf{vol}_G(C) \leq \log^7n\cdot \phi\cdot \sum_{C\in\mathcal{C}_{i_U}} (4s^2 + \mathsf{out}_G(C))\\
&\leq 4\log^7n\cdot \phi sn + \log^7n\cdot \phi \sum_{C\in\mathcal{C}_{i_U}}\mathsf{out}_G(C)
\end{aligned}\]
As $\phi = \frac{1}{10\log^{\mathsf{c}+10}n}$, we have $\sum_{C\in\mathcal{C}_{i_U}}\mathsf{out}_G(C) \leq 8\log^7n\cdot \phi sn < sn$. As each vertex in $C\cap U_d$ contributes $d/2$ to the above summation, the total number of vertices from $U_d$ in $\mathcal{C}_{i_U}$ is bounded by $2sn/d$.
\end{proof}
Combining the above two lemmas gives the following corollary.
\begin{corollary}
Under Hypothesis~\ref{hypo}, the time of dividing all nodes of $\mathcal{T}$ for a single iteration of the while-loop on line-2 in \textsf{CondGomoryHu} is bounded by $\widetilde{O}(n^2)$.
\end{corollary}
The next thing would be analyzing the total number of rounds of the while-loop. Similar to~\cite{abboud2020subcubic}, we first need to prove that with high probability, the number of $u\in C\cap U_d$ such that $\mathsf{vol}_G(L_u\cap U) > 0.5\mathsf{vol}_G(N)$ is roughly at most $|U_d| / r$.
\begin{lemma}\label{pivot}
With high probability over the choice of $R\subseteq N$, the total number of $u\in U_d$ such that $\mathsf{vol}_G(L_u\cap U) > 0.5\mathsf{vol}_G(N)$ is at most $\frac{4|U_d|\log^2n}{r}$.
\end{lemma}
\begin{proof}
The proof is similar to the one in~\cite{abboud2020subcubic}. To avoid confusion, let $\mathcal{T}^\mathrm{old}$ be the version of $\mathcal{T}$ before refining with respect to $R$, and let $\mathcal{T}$ refer to the tree after refinement. For each pair of vertices in the super node $x, q\in N$, define $(\Gamma_x^q, V_{\mathcal{T}^\mathrm{old}}[N]\setminus \Gamma_x^q)$ to be the latest minimum cut of $(x, q)$ in $G_{\mathcal{T}^\mathrm{old}}[N]$. Define $M_x^q$ to be the set of all vertices $y\in N$ such that $x\in \Gamma_y^q$. Basic concentration inequalities show that for any $q, x$, if $\mathsf{vol}_G(M_x^q) \geq \frac{\log n}{r}\mathsf{vol}_G(N)$, then with high probability, $M_x^q\cap R\neq \emptyset$.
The following claim is a crucial relationship between $\Gamma_u^p$ and $L_u$.
\begin{claim}[Observation 4.3 in \cite{abboud2020subcubic}]
$\Gamma_u^p\cap U = L_u\cap U$.
\end{claim}
\begin{proof}[Proof of claim]
As $(L_u, V_\mathcal{T}[U]\setminus L_u)$ is the latest minimum cut in $G_\mathcal{T}[U]$ which is a contracted graph of $G_{\mathcal{T}^\mathrm{old}}[N]$ following standard Gomory-Hu steps, $(L_u, V_\mathcal{T}[U]\setminus L_u)$ is a min-cut for $(u, p)$ in $G_{\mathcal{T}^\mathrm{old}}[N]$. Since $(\Gamma_u^p, V_{\mathcal{T}^\mathrm{old}}[N]\setminus \Gamma_u^p)$ is the latest minimum cut in $G_{\mathcal{T}^\mathrm{old}}[N]$, we have $\Gamma_u^p\cap U\subseteq L_u\cap U$. Next we only focus on the other direction.
Let $W_1, W_2, \cdots, W_l\subseteq V_{\mathcal{T}^\mathrm{old}}[N]$ be all contracted vertices of $V_\mathcal{T}[U]$ which are crossed by $\Gamma_u^p$; in other words, $\Gamma_u^p\cap W_i\neq \emptyset$ and $W_i\setminus \Gamma_u^p\neq \emptyset$ for all $1\leq i\leq l$. Since $N$ is refined using pivots from $R$, according to Lemma~\ref{partial-tree}, we know that for each $i$ there exists a pivot $q_i\in R\cap W_i$ such that $(W_i, V_{\mathcal{T}^\mathrm{old}}[N]\setminus W_i)$ is a minimum cut for $(q_i, p)$; in fact, $q_i\in R\cap W_i$ are in the neighboring nodes of $U$ in $\mathcal{T}$.
We claim that $q_i\in \Gamma_u^p$; otherwise if $q_i\notin \Gamma_u^p$, as $u\notin W_i$, by Lemma~\ref{minus}, the cut $(X, V_{\mathcal{T}^\mathrm{old}}[N]\setminus X)$ where $X = \Gamma_u^p\setminus W_i$ is also a minimum cut for $(u, p)$, which contradicts that $(\Gamma_u^p, V_{\mathcal{T}^\mathrm{old}}[N]\setminus \Gamma_u^p)$ is the latest min-cut.
Construct a new cut $(Y, V_{\mathcal{T}^\mathrm{old}}[N]\setminus Y)$ where $Y = \Gamma_u^p \cup\bigcup_{i=1}^l W_i$. On the one hand, as $q_i\in \Gamma_u^p$ for all $i$, by repeatedly applying Lemma~\ref{union} we know that $(Y, V_{\mathcal{T}^\mathrm{old}}[N]\setminus Y)$ is a minimum cut for $(u, p)$ as well; on the other hand, $Y$ does not cross any contracted nodes in $G_\mathcal{T}[U]$, so $(Y, V_\mathcal{T}[U]\setminus Y)$ is a valid cut in $G_\mathcal{T}[U]$ as well. As $(L_u, V_\mathcal{T}[U]\setminus L_u)$ is the latest minimum cut in $G_\mathcal{T}[U]$, we know $L_u\cap U\subseteq Y\cap U = \Gamma_u^p\cap U$. This concludes our proof.
\end{proof}
Consider the set of all $u\in U_d$ such that $\mathsf{vol}_G(L_u\cap U) > 0.5\mathsf{vol}_G(N)$; let them be $u_1, u_2, \cdots, u_l$. By the above claim, we must have $\mathsf{vol}_G(\Gamma_{u_i}^p\cap N)\geq \mathsf{vol}_G(\Gamma_{u_i}^p\cap U) = \mathsf{vol}_G(L_{u_i}\cap U) > 0.5\mathsf{vol}_G(N)$ as well. Therefore, any two sets $\Gamma_{u_i}^p, \Gamma_{u_j}^p$ must intersect. Since the cuts $(\Gamma_{u_i}^p, V_{\mathcal{T}^\mathrm{old}}[N]\setminus \Gamma_{u_i}^p)$ are latest cuts with respect to the same pivot $p$, they form a total order, say $\Gamma_{u_1}^p\subseteq \Gamma_{u_2}^p\subseteq\cdots\subseteq \Gamma_{u_l}^p$, and so by definition $u_2, u_3, \cdots, u_l\in M_{u_1}^p$.
\begin{claim}
$M_{u_1}^p\cap R = \emptyset$.
\end{claim}
\begin{proof}[Proof of claim]
If there exists $w\in M_{u_1}^p\cap R$, then by definition $u_1\in \Gamma_w^p$. As $(\Gamma_w^p, V_{\mathcal{T}^\mathrm{old}}[N]\setminus \Gamma_w^p)$ is the latest min-cut for $(w, p)$ in $G_{\mathcal{T}^\mathrm{old}}[N]$, any min-cut for $(w, p)$ in $G_{\mathcal{T}^\mathrm{old}}[N]$ must have $u_1$ on the same side as $w$. By Lemma~\ref{partial-tree}, $u_1$ would then belong to the part containing $w$ after the refinement with respect to $R$, a contradiction since $u_1$ stays in the same part as $p$.
\end{proof}
By the above claim and the concentration bound stated at the beginning of the proof, with high probability $\mathsf{vol}_G(M_{u_1}^p) < \frac{\log n}{r}\mathsf{vol}_G(N)$, and hence we have:
\[d(l-1) \leq \mathsf{vol}_G(M_{u_1}^p) <\frac{\log n}{r}\mathsf{vol}_G(N)\leq \frac{2\log n}{r}\mathsf{vol}_G(U)\leq \frac{4\log^2n}{r}d|U_d|\]
So $l\leq \frac{4|U_d|\log^2n}{r}$.
\end{proof}
Finally we need to bound the total number of rounds in the while-loop. Call a cluster $C\in\mathcal{C}_{i_U}$ \textbf{bad}, if the total number of vertices $u\in C\cap U_d$ such that $\mathsf{vol}_G(L_u\cap U) > 0.5\mathsf{vol}_G(N)$ is more than $0.1|C\cap U_d|$; otherwise it is called \textbf{good}.
\begin{lemma}\label{cutoff}
Consider any invocation of \textsf{ExploreTree} with input parameters $U, p, d, C$. If the cluster $C$ is good, then $|W_C| > 0.8|C\cap U_d|$.
\end{lemma}
\begin{proof}
First consider the case where \textsf{ExploreTree} terminated on line-14. The while-loop must have terminated on line-7. Then as $u$ is small, $|S|\geq |C\cap U_d| - 2/\phi - |Q| - |C\cap L_u\cap U_d| \geq |C\cap U_d| - 6/\phi$. Therefore, $|W_C|\geq |C\cap U_d| - 6/\phi - 0.1|C\cap U_d| > 0.8|C\cap U_d|$.
Now suppose \textsf{ExploreTree} terminated on line-21. In this case, $C\cap U_d \setminus W_C$ only includes vertices in $C\cap U_d\setminus S$, plus vertices $v\in C\cap U_d$ such that $\mathsf{vol}_G(L_v\cap U) > 0.5\mathsf{vol}_G(N)$. Since $C$ is good, we know $|W_C| \geq |C\cap U_d| - 2/\phi - 0.1|C\cap U_d| > 0.8|C\cap U_d|$.
\end{proof}
\begin{lemma}
$\sum_{C\in\mathcal{C}_{i_U}\text{ is bad}}|C\cap U_d|\leq \frac{40|U_d|\log^2n}{r}$.
\end{lemma}
\begin{proof}
By Lemma~\ref{pivot}, the total number of vertices $u$ such that $\mathsf{vol}_G(L_u\cap U) > 0.5\mathsf{vol}_G(N)$ is at most $\frac{4|U_d|\log^2n}{r}$. By definition of badness, we have $\sum_{C\in\mathcal{C}_{i_U}\text{ is bad}}|C\cap U_d|\leq \frac{40|U_d|\log^2n}{r}$.
\end{proof}
\begin{lemma}\label{depth}
For each node $U$ such that $\mathsf{vol}_G(U) > 0.5\mathsf{vol}_G(N)$, and for each set $K_u = L_u$ which is cut off by our algorithm, we have $\mathsf{vol}_G(K_u\cap U)\leq 0.5\mathsf{vol}_G(N)$. Furthermore, let $P$ be the rest of $U$ after cutting all $K_u$'s. Then $\mathsf{vol}_G(P)\leq (1 - \frac{1}{2\log^2n})\mathsf{vol}_G(U)$.
\end{lemma}
\begin{proof}
The first half of the claim is automatically guaranteed by the algorithm. Let us only consider the second half.
By Lemma~\ref{cutoff}, the total volume that has been cut off from $U$ is at least
\[\begin{aligned}
\sum_{C\in\mathcal{C}_{i_U}\text{ is good}}d|W_C| &\geq \sum_{C\in \mathcal{C}_{i_U}\text{ is good}}0.8d|C\cap U_d| \geq \frac{0.8d}{\log n}|U_d| - 0.8d\sum_{C\in\mathcal{C}_{i_U}\text{ is bad}}|C\cap U_d|\\
&\geq \frac{0.8d}{\log n}|U_d| - \frac{32d\log^2n}{r}|U_d|\geq \frac{0.8}{\log^2n}\mathsf{vol}_G(U) - \frac{32}{\log^3n}\mathsf{vol}_G(U)\\
&\geq \frac{1}{2\log^2n}\mathsf{vol}_G(U)
\end{aligned}\]
Hence $\mathsf{vol}_G(P)\leq (1 - \frac{1}{2\log^2n})\mathsf{vol}_G(U)$, as claimed.
\end{proof}
By Lemma~\ref{depth}, after each round of the while-loop, for each node $U\subseteq N$, either we already have $\mathsf{vol}_G(U)\leq 0.5\mathsf{vol}_G(N)$ after the refinement with respect to the random set $R$, or $U$ is further divided into sub-nodes whose volumes are at most $\max\{0.5\mathsf{vol}_G(N), (1 - \frac{1}{2\log^2n})\mathsf{vol}_G(U)\}$. Therefore the number of rounds is $O(\log^3n)$, and the total running time is $\widetilde{O}(n^2)$ under Hypothesis~\ref{hypo}.
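For completeness, here is the short calculation behind the round bound (our own expansion of the last sentence): initially $\mathsf{vol}_G(U)\leq\mathsf{vol}_G(V)<n^2$, along any chain of nested nodes the volume can be halved relative to the parent's volume at most $O(\log n)$ times, and in every other round it shrinks by the factor $1-\frac{1}{2\log^2 n}$, so after $T$ such rounds
\[
\Bigl(1-\frac{1}{2\log^{2}n}\Bigr)^{T} n^{2}\ \leq\ e^{-T/(2\log^{2}n)}\,n^{2}\ <\ 1
\qquad\text{for } T=\Omega(\log^{3}n),
\]
which gives the claimed $O(\log^{3}n)$ bound on the number of rounds.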
\section{Unconditional cut-equivalent trees}
\subsection{The main algorithm}
In this section we will prove the second half of Theorem~\ref{cond} using existing max-flow algorithms. The algorithm is mostly the same as the previous algorithm conditioned on Hypothesis~\ref{hypo}, and the extra work is to deal with the additive $n^{1.5}$ term that appears in the running time of the max-flow algorithm from~\cite{brand2021minimum}. As in the previous algorithm, we use the same parameters $\phi, r$, and the degree set $\mathcal{D} = \{\sqrt{n}, 2\sqrt{n}, 2^2\sqrt{n}, \cdots, n \}$.
\paragraph*{Preparation} Throughout the algorithm, $\mathcal{T}$ will be the cut-equivalent tree under construction, where each of $\mathcal{T}$'s nodes will represent a subset of vertices of $V$. As a preparation step, compute a $(\phi, \phi)$-expander decomposition on $G$ and obtain a partitioning $\mathcal{C} = \{C_1, C_2, \cdots, C_k \}$ of $V$. Categorize clusters in $\mathcal{C}$ according to their sizes: for each $2^i$, define $\mathcal{C}_{i}$ to be the set of clusters whose sizes are within the interval $[2^i, 2^{i+1})$.
\paragraph*{Iteration} In each round, the algorithm tries to simultaneously subdivide all nodes of $\mathcal{T}$ which contain at least $20r$ vertices of $V$. Following the same procedure as in the previous algorithm, for each node $N$ of $\mathcal{T}$, further refine $N$ into a set of smaller sub-nodes. Then, for each such sub-node $U$, define the variables $d, U_d$ and $s = 2^{i_U}$ accordingly. If (1) $d \geq n^{3/4}$ or (2) $s > n^{3/4} / \sqrt{d}$, we proceed exactly as in algorithm \textsf{CondGomoryHu}, which invokes the expander search procedure.
The unconditional algorithm diverges from the conditional algorithm of Theorem~\ref{cond} when $d < n^{3/4}$ and $s\leq n^{3/4} / \sqrt{d}$. Intuitively, when $s$ is relatively small, the number of expanders of size roughly $s$ is large, so expander searches would be costly because of the additive term $n^{1.5}$ in the running time of computing max-flow. In this case we instead directly apply Lemma~\ref{k-partial} to the graph $G_\mathcal{T}[U]$ to isolate all vertices in $U_d$ on the tree $\mathcal{T}$ once and for all. The pseudo-code is summarized as \textsf{GomoryHu}.
\begin{algorithm}
\caption{\textsf{GomoryHu}$(G = (V, E))$}
initialize a partition tree $\mathcal{T}$, as well as parameters $\phi, r$\;
\While{$\exists N\subseteq V$, $N$ a node of $\mathcal{T}$, $|N| \geq 20r$}{
\For{node $N$ of $\mathcal{T}$ with $|N|\geq 20r$}{
repeat for $10r$ times: each time we sample a vertex $u\in N$ with probability $\frac{\deg_G(u)}{\mathsf{vol}_G(N)}$, and let the sampled set be $R$\;
call Lemma~\ref{refine} on node $N$ with respect to $R$\;
\For{node $U\subseteq N$ of $\mathcal{T}$ such that $\mathsf{vol}_G(U) > 0.5\mathsf{vol}_G(N)$}{
take $d$ such that $d|U_d|$ is maximized\;
take $s = 2^{i_U}$ such that $\mathsf{cnt}[i_U]$ is maximized\;
\If{$d\geq n^{3/4}$ or $s > n^{3/4} / \sqrt{d}$}{
\For{each $C\in \mathcal{C}_{i_U}$}{
run expander search on $C$ within node $U$ to compute a subset $W_C\subseteq C\cap U_d$, and the latest min-cuts $(K_u, V_\mathcal{T}[U]\setminus K_u)$ for each $u\in W_C$\;
}
define $W = \bigcup_{C\in\mathcal{C}_{i_U}}W_C$\;
for each $u\in W$ such that $K_u$ is maximal, split $K_u\cap U$ off of $U$ and create a new node on $\mathcal{T}$\;
}\Else{
apply Lemma~\ref{k-partial} on the auxiliary graph $G_\mathcal{T}[U]$ with input parameter $k = 2d$, so that all vertices in $U_d$ become singletons in $\mathcal{T}$\;
}
}
}
}
\For{node $U$ of $\mathcal{T}$ such that $|U| < 20r$}{
repeatedly refine $U$ using the generic Gomory-Hu steps until all nodes are singletons\;
}
\Return $\mathcal{T}$ as a cut-equivalent tree\;
\end{algorithm}
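For illustration only (this sketch is ours and is not part of the algorithm above): once a cut-equivalent tree $\mathcal{T}$ has been returned, the minimum $s$-$t$ cut value in $G$, for any pair $s, t$, equals the minimum edge weight on the unique $s$-$t$ path in $\mathcal{T}$. A minimal Python sketch of this query, assuming the tree is given as a weighted adjacency list (the function and variable names are hypothetical):
\begin{verbatim}
from collections import deque

def min_cut_from_tree(tree_adj, s, t):
    """Read off the s-t min-cut value from a cut-equivalent tree.

    tree_adj: dict mapping a vertex to a list of (neighbour, edge_weight)
    pairs describing the tree; the answer is the smallest edge weight on
    the unique s-t tree path.
    """
    bottleneck = {s: float("inf")}  # smallest edge weight on the path from s
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return bottleneck[t]
        for v, w in tree_adj[u]:
            if v not in bottleneck:
                bottleneck[v] = min(bottleneck[u], w)
                queue.append(v)
    raise ValueError("s and t lie in different components of the tree")

# Example: a path tree 0 -5- 1 -2- 2 -7- 3; the 0-3 min cut encoded is 2.
tree = {0: [(1, 5)], 1: [(0, 5), (2, 2)], 2: [(1, 2), (3, 7)], 3: [(2, 7)]}
assert min_cut_from_tree(tree, 0, 3) == 2
\end{verbatim}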
\subsection{Running time analysis}
\begin{lemma}
Each round of the while-loop in \textsf{GomoryHu} takes time $\widetilde{O}(n^{17/8})$.
\end{lemma}
\begin{proof}
Let us study an arbitrary iteration. Suppose the condition on line-6 holds, namely $d\geq n^{3/4}$ or $s>n^{3/4}/\sqrt{d}$. In this case we do exactly the same as in the conditional algorithm, the only difference being that we invoke the max-flow algorithm from~\cite{brand2021minimum}. According to Lemma~\ref{prune}, we can upper bound the running time as \[\widetilde{O}(\frac{n}{s}\cdot\mathsf{MF}( 2d\cdot\mathsf{cnt}[i_U]\log^2n, |V_\mathcal{T}[U]|)) = \widetilde{O}(\frac{nd}{s}\mathsf{cnt}[i_U] + \frac{n}{s}\cdot |V_\mathcal{T}[U]|^{1.5})\]
for each node $U$. Since all tree nodes $U$ are disjoint vertex subsets of $V$, $\sum_{U\in \mathcal{T}}|V_\mathcal{T}[U]|^{1.5} \leq n^{1.5}$. Therefore, by Lemma~\ref{count}, this sums to $\widetilde{O}(n^2 + \frac{n^{2.5}}{s})$.
We first claim that $s \geq \sqrt{2d}$. In fact, by maximality of $\mathsf{cnt}[i_U]$, there exists at least one cluster $C\in \mathcal{C}_{i_U}$ that intersects $U_d$. Take any $u\in C\cap U_d$. Then since $G$ is a simple graph, more than $d - s$ neighbors of $u$ are outside of $C$, thus $\mathsf{out}_G(C) > d-s$. By property (3) of Definition~\ref{boundary}, we have:
\[\mathsf{out}_G(C)\leq \log^7n\cdot \phi\mathsf{vol}_G(C) \leq \log^7n\cdot \phi (4s^2 + \mathsf{out}_G(C))\]
As $\phi = \frac{1}{10\log^{\mathsf{c} + 10}n}$, we have $\mathsf{out}_G(C)\leq 0.4s^2 + 0.1\mathsf{out}_G(C)$, and so $\mathsf{out}_G(C) < 0.5s^2$. As $\mathsf{out}_G(C) > d-s$, we have $s > \sqrt{2d}$.
When $d\geq n^{3/4}$, as $s\geq \sqrt{2d} >n^{3/8}$ we have $\widetilde{O}(n^2 + \frac{n^{2.5}}{s}) = \widetilde{O}(n^{17/8})$. If $d < n^{3/4}$ and $s > n^{3/4} / \sqrt{d}$, then we also bound the total running time as $\widetilde{O}(n^2 + \frac{n}{s\phi}\cdot n^{1.5}) = \widetilde{O}(n^{17/8})$.
Now suppose the condition on line-6 does not hold, i.e., $d < n^{3/4}$ and $s\leq n^{3/4}/\sqrt{d}$. In this case, similarly to Lemma~\ref{count}, we can prove that the total volume $\sum_U\mathsf{vol}_G(U)$ over all such $U$ is bounded by $\widetilde{O}(ns)$. So applying Lemma~\ref{k-partial} in this round takes time at most $\widetilde{O}(nsd) = \widetilde{O}(n^{17/8})$.
\end{proof}
\begin{lemma}
The total number of rounds of the while-loop in \textsf{GomoryHu} is bounded by $O(\log^3n)$.
\end{lemma}
\begin{proof}
In each round of the while-loop, if $d\geq n^{3/4}$ or $s > n^{3/4} / \sqrt{d}$ for a node $U$, then according to the proof of Lemma~\ref{depth}, the volume of each subdivision is bounded by $\max\{0.5\mathsf{vol}_G(N), (1 - \frac{1}{2\log^2n})\mathsf{vol}_G(U)\}$. If $d< n^{3/4}$ and $s \leq n^{3/4} / \sqrt{d}$, then all vertices in $U_d$ become singletons on $\mathcal{T}$; moreover, for the same reason, the volume of every subdivision of $U$ is at most $(1 - \frac{1}{2\log^2n})\mathsf{vol}_G(U)$. Therefore the number of while-loop iterations is at most $O(\log^3n)$.
\end{proof}
\section*{Acknowledgment}
The author would like to thank Prof.\ Ran Duan and Dr.\ Amir Abboud for helpful discussions. This publication has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 803118 UncertainENV).
\end{document}
\begin{document}
\title{A note on actions of some monoids\footnote{This research was supported by the Polish National Science Centre grant under the contract number DEC-2012/06/A/ST1/00256. Published in Differential Geometry and Its Applications \textbf{47}.}}
\begin{abstract}
Smooth actions of the multiplicative monoid $(\mathbb{R},\cdot)$ of
real numbers on manifolds lead to an alternative, and for some reasons
simpler, definitions of a vector bundle, a double vector bundle and
related structures like a graded bundle (Grabowski and Rotkiewicz
(2011) \cite{JG_MR_gr_bund_hgm_str_2011}). For these reasons it is natural to study
smooth actions of certain monoids closely related with the monoid
$(\mathbb{R},\cdot)$. Namely, we discuss geometric structures
naturally related with: smooth and holomorphic actions of the monoid of
multiplicative complex numbers, smooth actions of the monoid of second
jets of punctured maps $(\mathbb{R},0)\rightarrow(\mathbb{R},0)$,
smooth actions of the monoid of real 2 by 2 matrices and smooth actions
of the multiplicative reals on a supermanifold. In particular cases we
recover the notions of a holomorphic vector bundle, a complex vector
bundle and a non-negatively graded manifold.
\end{abstract}
\paragraph{MSC 2010:}
57S25 (primary), 32L05, 58A20, 58A50 (secondary)
\paragraph{Keywords:}
Monoid action, Graded bundle, Graded manifold, Homogeneity
structure, Holomorphic bundle, Supermanifold
\section{Introduction}
\paragraph*{Motivation}
Our main motivation to undertake this study were the results of
Grabowski and Rotkiewicz \cite{JG_MR_higher_vec_bndls_and_multi_gr_sym_mnflds,JG_MR_gr_bund_hgm_str_2011}
concerning action of the multiplicative monoid of real numbers
$(\mathbb{R},\cdot)$ on smooth manifolds. In the first of the cited
papers the authors effectively characterized these smooth actions of
$(\mathbb{R},\cdot)$ on a manifold $M$ which come from homotheties of
a vector bundle structure on~$M$. In particular, it turned out that the
addition on a vector bundle is completely determined by the
multiplication by reals (yet the smoothness of this multiplication at
$0\in\mathbb{R}$ is essential). This, in turn, allowed for a
simplified and very elegant treatment of double and multiple vector bundles.
These considerations were further generalized in~\cite{JG_MR_gr_bund_hgm_str_2011}. The main result of that paper (here we
recall it as \reftext{Theorem~\ref{thm:eqiv_real}}) is an equivalence, in the
categorical sense, between smooth actions of $(\mathbb{R},\cdot)$ on
manifolds (\emph{homogeneity structures} in the
language of~\cite{JG_MR_gr_bund_hgm_str_2011}) and \emph{graded bundles}. Graded
bundles (introduced for the first time in \cite{JG_MR_gr_bund_hgm_str_2011}) can be viewed as a natural generalization
of vector bundles. In short, they are locally trivial fibered bundles
with fibers possessing a structure of a \emph{graded space}, i.e. a
manifold diffeomorphic to $\mathbb{R}^{n}$ with a distinguished class
of global coordinates with positive integer weights assigned. In a
special case when these weights are all equal to 1, a graded space
becomes a standard vector space and a graded bundle -- a vector bundle.
Surprisingly, graded bundles gained much more attention in
supergeometry, where they are called $N$-manifolds. One of the reasons
is that various important objects in mathematical physics can be seen
as $N$-manifolds equipped with an odd homological vector field. For
example, a Lie algebroid is a pair $(E, X)$ where $E$ is an
$N$-manifold of degree $1$, thus an anti-vector bundle, and $X$ is a
homological vector field on $E$ of weight~$1$. A~much deeper result
relates Courant algebroids and $N$-manifolds of degree $2$~\cite{Roytenberg}.
\paragraph*{Goals}
It is natural to ask about possible extensions of the results of
\cite{JG_MR_higher_vec_bndls_and_multi_gr_sym_mnflds,JG_MR_gr_bund_hgm_str_2011}
discussed above. There are two obvious directions of studies:
\begin{enumerate}[(Q1)]
\item[(Q1)] What are geometric structures naturally related with
smooth monoid actions on manifolds for monoids $\mathcal{G}$ other
than $(\mathbb{R},\cdot)$?
\item[(Q2)] How to characterize smooth actions of the multiplicative
reals $(\mathbb{R},\cdot)$ on supermanifolds?
\end{enumerate}
In this paper we provide answers to the above problems. Of course it is
hopeless to discuss (Q1) for an arbitrary monoid $\mathcal{G}$.
Therefore we concentrate our attention on several special cases, all
being natural generalizations of the monoid $(\mathbb{R},\cdot)$ of the
multiplicative reals:
\begin{itemize}
\item $\mathcal{G}=(\mathbb{C},\cdot)$ is the multiplicative monoid
of complex numbers;
\item $\mathcal{G}=\mathcal{G}_{2}$ is the monoid of the 2nd-jets
of punctured maps $\gamma:(\mathbb{R},0)\rightarrow
(\mathbb{R},0)$ (note that $(\mathbb{R},\cdot)$ can be viewed as the
monoid of the 1st-jets of such maps);
\item $\mathcal{G}=\operatorname{M}_{2}(\mathbb{R})$ is the monoid
of 2 by 2 real matrices.
\end{itemize}
Observe that all these examples contain $(\mathbb{R},\cdot)$ as a
submonoid. Therefore, by the results of \cite{JG_MR_gr_bund_hgm_str_2011}, every manifold with a smooth $\mathcal
{G}$-action will be canonically a graded bundle. This fact will be
often of crucial importance in our analysis.
\paragraph*{Main results} Below we list the most important results of
this paper regarding problem (Q1):
\begin{itemize}
\item For holomorphic actions of $\mathcal{G}=(\mathbb{C},\cdot)$ we
proved \reftext{Theorem~\ref{thm:equiv_holom}}, a direct analog of
\reftext{Theorem~\ref{thm:eqiv_real}}. Such actions (\emph{holomorphic homogeneity
structures} -- see \reftext{Definition~\ref{def:hgm_str_complex}}) are
equivalent (in the categorical sense) to \emph{holomorphic graded
bundles} (see \reftext{Definition~\ref{def:cplx_gr_bundle}}) -- a natural
extension of the notion of a (real) graded bundle to the holomorphic setting.
\item \reftext{Theorem~\ref{thm:equiv_complex}} is another analog of \reftext{Theorem~\ref{thm:eqiv_real}}. It characterizes \emph{complex graded bundles}
(defined analogously to the graded bundles in the real case -- see
\reftext{Definition~\ref{def:cplx_gr_bundle}}) in terms of \emph{complex
homogeneity structures} (i.e., smooth actions of $(\mathbb{C},\cdot)$
-- see \reftext{Definition~\ref{def:hgm_str_complex}}). It turns out that
complex graded bundles are equivalent (in the categorical sense) to a
special class of \emph{nice} complex homogeneity structures, i.e.
smooth $(\mathbb{C},\cdot)$-actions in which the imaginary part is in
a natural sense compatible with the action of $(\mathbb{R},\cdot
)\subset(\mathbb{C},\cdot)$ -- cf. \reftext{Definition~\ref{def:nice_hgm_str}}.
\item $\mathcal{G}_{2}$-actions on smooth manifolds are the main topic
of Section~\ref{sec:g2}. Since $\mathcal{G}_{2}$ is non-Abelian we
have to distinguish between left and right actions of $\mathcal
{G}_{2}$. As already mentioned, since $(\mathbb{R},\cdot)\subset
\mathcal{G}_{2}$, any manifold with $\mathcal{G}_{2}$-action is
naturally a (real) graded bundle. A crucial observation is that
$\mathcal{G}_{2}$ contains a group of additive reals $(\mathbb{R},+)$
as a submonoid. This fact allows us to associate with every smooth right
(resp., left) $\mathcal{G}_{2}$-action a canonical complete vector
field (note that a smooth action of $(\mathbb{R},+)$ is a flow) of
weight $-$1 (resp., $+$1) with respect to the above-mentioned graded bundle
structure (\reftext{Lemma~\ref{lem:G2_actions_infinitesimally}}).
Unfortunately, the characterization of manifolds with a smooth $\mathcal
{G}_{2}$-action as graded bundles equipped with a weight $\pm1$
vector field is not complete. In general, such data only allows one to
define the action of the group of invertible elements ${\mathcal
{G}}^{\text{inv}}_{2}\subset\mathcal{G}_{2}$ on the considered
manifold (\reftext{Lemma~\ref{lem:G_2_two}}), still leaving open the problem of
extending such an action to the whole $\mathcal{G}_{2}$ in a smooth
way. For right $\mathcal{G}_{2}$-actions this question can be locally
answered for each particular case.
In \reftext{Lemma~\ref{lem:right_g2_deg_less_4}} we formulate such a result for graded bundles
of degree at most~3. The case of a left $\mathcal{G}_{2}$-action is
much more difficult and we were able to provide an answer (in an
elegant algebraic way) only for the case of graded bundles of degree
one (i.e., vector bundles) in \reftext{Lemma~\ref{lem:left_g2_action_vb}}.
\item As a natural application of our results about $\mathcal
{G}_{2}$-actions we were able to obtain a characterization of the
smooth actions of the monoid $\mathcal{G}$ of 2 by 2 real matrices in
\reftext{Lemma~\ref{lem:matrix}}. This is due to the fact that $\mathcal
{G}_{2}$ can be naturally embedded into $\mathcal{G}$. In the
considered case the action of $\mathcal{G}$ on a manifold provides it
with a double graded bundle structure together with a pair of vector
fields $X$ and $Y$ of bi-weights, respectively, $(-1,1)$ and $(1,-1)$
with respect to the bi-graded structure. Moreover, the commutator
$[X,Y]$ is related to the double graded structure on the manifold.
Unfortunately, this characterization suffers the same problems as the
one for $\mathcal{G}_{2}$-actions: not every structure of such type
comes from a $\mathcal{G}$-action.
\end{itemize}
Problem (Q2) is addressed in Section~\ref{sec:super} where we prove,
in \reftext{Theorem~\ref{thm:main_super}}, that supermanifolds equipped with a
smooth action of the monoid $(\mathbb{R}, \cdot)$ are \emph{graded
bundles in the category of supermanifold} in the sense of \reftext{Definition~\ref{def:s_grd_bndl}}
(the latter notion differs from the notion of an
$N$-manifold given in \cite{Roytenberg}: the parity of local
coordinates needs not to be equal to their weights modulo two). We are
aware that this result should be known to the experts. In \cite{Severa}
Severa states (without a proof) that ``an $N$-manifold (shorthand for
`non-negatively graded manifold') is a supermanifold with an action of
the multiplicative semigroup $(\mathbb{R}, \cdot)$ such that $-1$
acts as the parity operator'', which is a statement slightly weaker
than our result. Also recently we found a proof of a result similar to
Theorem~5.8 in \cite{BGG_grd_bndls_Lie_grpds},
Remark 2.2. Nevertheless, a rigorous proof of
\reftext{Theorem~\ref{thm:main_super}} seems to be missing in the literature (a version from
\cite{BGG_grd_bndls_Lie_grpds} is just a sketch). Therefore we decided
to provide it in this paper. It is worth stressing that our proof was
obtained completely independently of the one from \cite{BGG_grd_bndls_Lie_grpds}
and, unlike the latter, does not refer to the
proof of \reftext{Theorem~\ref{thm:eqiv_real}}.
\vspace*{-5pt}
\paragraph*{Literature}
Despite a vast literature on Lie theory for semi-groups (see e.g.,
\cite{HHL_Lie_semigroups,HH_Lie_semigroups}) we could not find
anything that deals with smooth actions of the monoid $(\mathbb{R},
\cdot)$ or its natural extensions. This can be caused by the fact that
the monoid $(\mathbb{R}, \cdot)$ is not embeddable to any group.
\vspace*{-5pt}
\paragraph*{Organization of the paper}
In Section~\ref{sec:perm} we briefly recall the main results of
\cite{JG_MR_gr_bund_hgm_str_2011}, introducing (real) graded spaces, graded
bundles, homogeneity structures, as well as the related notions and
constructions. We also state \reftext{Theorem~\ref{thm:eqiv_real}} providing a
categorical equivalence between graded bundles and homogeneity
structures (i.e., smooth $(\mathbb{R},\cdot)$-actions). Later in this
section we introduce the monoid $\mathcal{G}_{2}$ and discuss its
basic properties.
Section~\ref{sec:complex} is devoted to the study of $(\mathbb
{C},\cdot)$-actions. Building on analogous notions from Section~\ref{sec:perm} we define complex graded spaces, complex and holomorphic
graded bundles, as well as complex and holomorphic homogeneity
structures. Later we prove \reftext{Theorems~\ref{thm:equiv_holom} and \ref{thm:equiv_complex}} (these were already discussed above) providing
effective characterizations of holomorphic and complex graded bundles,
respectively, in terms of $(\mathbb{C},\cdot)$-actions.
The content of Sections~\ref{sec:g2} and \ref{sec:super} was
discussed in detail while presenting the main results of this paper.
\section{Preliminaries}
\label{sec:perm}
\paragraph*{Graded spaces}
We shall begin by introducing, after \cite{JG_MR_gr_bund_hgm_str_2011}, the notion of a (real) graded space.
Intuitively, a graded space is a manifold diffeomorphic to $\mathbb
{R}^{n}$ and equipped with an atlas of global graded coordinate
systems. That is, we choose coordinate functions with certain positive
integers (weights) assigned to them and consider transition functions
respecting these weights (they have to be polynomial in graded
coordinates). Thus a graded space can be understood as a natural
generalization of the notion of a vector space. Indeed, on a vector
space we can choose an atlas of global linear (weight one) coordinate
systems. Clearly, every passage between two such systems is realized by
a weight preserving (that is, linear) map. Below we provide a rigorous
definition of a graded space.
\begin{definition}\label{def:gr_space}
Let $\mathbf{d}= (d_{1}, \ldots, d_{k})$ be a sequence of
non-negative integers, let $I$ be a set of cardinality $|\mathbf
{d}|:=d_{1}+\ldots+d_{k}$, and let $I\ni\alpha\mapsto w^{\alpha}\in
\mathbb{Z}_{+}$ be a map such that $d_{i}= \#\{\alpha\in I: w^{\alpha}=i
\}$ for each $1\leq i\leq k$.
A \emph{graded space of rank $\mathbf{d}$} is a smooth manifold
$\mathrm{W}$ diffeomorphic to $\mathbb{R}^{|\mathbf{d}|}$ and
equipped with an equivalence class of graded coordinates. By
definition, a \emph{system of graded coordinates} on $\mathrm{W}$ is
a global coordinate system $(y^{a})_{a\in I}: \mathrm{W}\xrightarrow
{\simeq} \mathbb{R}^{|\mathbf{d}|}$ with \emph{weight} $w^{\alpha
}$ assigned to each function $y^{\alpha}$, $\alpha\in I$. To indicate
the presence of weights we shall sometimes write $y^{\alpha
}_{w^{\alpha}}$ instead of $y^{\alpha}$ and $\mathbf{w}(\alpha)$ to
denote the weight of $y^{\alpha}$.
Two systems of graded coordinates, $(y^{\alpha}_{w^{\alpha}})$ and
$(\underline{y}^{\alpha}_{\underline{w}^{\alpha}})$ are \emph
{equivalent} if there exist constants $c_{\alpha_{1} \ldots\alpha
_{j}}^{\alpha}\in\mathbb{R}$, defined for indices such that
$\underline{w}^{\alpha}=w^{\alpha_{1}} +\ldots+ w^{\alpha_{j}}$, satisfying
\begin{equation}\label{eqn:graded_transf_R}
\underline{y}^{\alpha}_{\underline{w}^{\alpha}} = \sum_{\substack{j=1,2,\ldots\\ \underline{w}^{\alpha}=w^{\alpha_{1}}+\ldots+w^{\alpha_{j}}}} c_{\alpha_{1} \ldots\alpha_{j}}^{\alpha}\, y^{\alpha_{1}}_{w^{\alpha_{1}}} \cdots y^{\alpha_{j}}_{w^{\alpha_{j}}}.
\end{equation}
The highest coordinate weight (i.e., the highest number $i$ such that
$d_{i}\neq0$) is called the \emph{degree} of a graded space $\mathrm{W}$.
By a \emph{morphism between graded spaces} $\mathrm{W}_{1}$ and
$\mathrm{W}_{2}$ we understand a smooth map $\Phi:\mathrm
{W}_{1}\rightarrow\mathrm{W}_{2}$ which in some (and thus any) graded
coordinates writes as a polynomial homogeneous in weights $w^{\alpha}$.
\end{definition}
\begin{example} Consider a graded space $\mathrm{W}=\mathbb
{R}^{(2,1)}$ with coordinates $(x_{1},x_{2},y)$ of weights 1, 1 and 2,
respectively. A map $\Phi(x_{1},x_{2},y)=(3 x_{2},
x_{1}+2x_{2},y+x_{1}x_{2}+5(x_{2})^{2})$ is an automorphism of $\mathrm{W}$.
\end{example}
Observe that any graded space $\mathrm{W}$ induces an action
$h^{\mathrm{W}}: \mathbb{R}\times\mathrm{W}\rightarrow\mathrm{W}$
of the multiplicative monoid $(\mathbb{R}, \cdot)$ defined by
\begin{equation}
\label{eqn:hgm_structure}
h^{\mathrm{W}}(t, (y^{\alpha}_{w})) = (t^{w} \cdot y^{\alpha}_{w}).
\end{equation}
Indeed, it is straightforward to check that the formula for $h^{\mathrm
{W}}$ does not depend on the choice of graded coordinates $(y^{\alpha
}_{w})$ in a given equivalence class. We shall call $h^{\mathrm{W}}$
the action by \emph{homotheties} of $\mathrm{W}$. We will also use
notation $h^{\mathrm{W}}_{t}(\cdot)$ instead of $h^{\mathrm
{W}}(t,\cdot)$. The multiplicativity of $h^{\mathrm{W}}$ reads as
$h^{\mathrm{W}}_{s}\circ h^{\mathrm{W}}_{t}=h^{\mathrm{W}}_{t\cdot
s}$ for every $t,s\in\mathbb{R}$.
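For a simple illustration (our own example, in the spirit of the example above): on the graded space $\mathrm{W}=\mathbb{R}^{(2,1)}$ with coordinates $(x_{1},x_{2},y)$ of weights $1,1,2$, formula \reftext{\eqref{eqn:hgm_structure}} gives
\[
h^{\mathrm{W}}_{t}(x_{1},x_{2},y)=(t\,x_{1},\,t\,x_{2},\,t^{2}y),\qquad
h^{\mathrm{W}}_{s}\bigl(h^{\mathrm{W}}_{t}(x_{1},x_{2},y)\bigr)=(st\,x_{1},\,st\,x_{2},\,(st)^{2}y)=h^{\mathrm{W}}_{t\cdot s}(x_{1},x_{2},y).
\]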
Obviously a morphism $\Phi:\mathrm{W}_{1}\rightarrow\mathrm{W}_{2}$
between two graded spaces intertwines the actions $h^{\mathrm{W}_{1}}$
and $h^{\mathrm{W}_{2}}$, that is
\[
h^{\mathrm{W}_{2}}_{t}(\Phi(v))=\Phi(h^{\mathrm{W}_{1}}_{t}(v))
\]
for every $v\in\mathrm{W}_{1}$ and every $t\in\mathbb{R}$. For
degree $1$ graded spaces we recover the notion of a linear map \cite
{JG_MR_higher_vec_bndls_and_multi_gr_sym_mnflds}.
\begin{example} The graded spaces $W_{1}= \mathbb{R}^{(1,0)}$ and
$W_{2}=\mathbb{R}^{(0,1)}$ are different although their underlying
manifolds are the same. Indeed, there is no diffeomorphism $f:\mathbb
{R}\to\mathbb{R}$ such that $f(t x) = t^{2} f(x)$ for every $t, x\in
\mathbb{R}$, that is one intertwining the associated actions of
$\mathbb{R}$ (cf. \reftext{Lemma~\ref{lem:hol_hmg_fun_R}}).
\end{example}
Using the above construction it is natural to introduce the following:
\begin{definition}\label{def:hgm_real}
A function $\phi: \mathrm{W}\rightarrow\mathbb{R}$ defined on a
graded space $\mathrm{W}$ is called \emph{homogeneous of weight} $w$
if for every $v\in\mathrm{W}$ and every $t\in\mathbb{R}$
\begin{equation}\label{eqn:def_hmg_fun_R}
\phi(h^{\mathrm{W}}(t, v)) = t^{w} \, \phi(v)\ .
\end{equation}
In a similar manner one can associate weights to other geometrical
objects on $\mathrm{W}$. For example a smooth vector field $X$ on $\mathrm
{W}$ is called \emph{homogeneous of weight $w$} if for every $v\in
\mathrm{W}$ and every $t>0$
\begin{equation}
\label{eqn:def_hgm_vf}
(h_{t})_{\ast}X(v)=t^{-w} X(h_{t}(v))\ .
\end{equation}
\end{definition}
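To illustrate the definition (an example of ours, not taken from the cited papers): on $\mathrm{W}=\mathbb{R}^{(2,1)}$ with coordinates $(x_{1},x_{2},y)$ of weights $1,1,2$, the function $\phi=x_{1}x_{2}+3y$ is homogeneous of weight $2$, since $\phi(h_{t}(v))=t^{2}\phi(v)$, while the vector field $X=y\,\partial_{x_{1}}$ is homogeneous of weight $1$: indeed, since $y(h_{t}(v))=t^{2}y(v)$,
\[
(h_{t})_{\ast}X(v)=t\,y(v)\,\partial_{x_{1}}\big|_{h_{t}(v)}
=t^{-1}\,y(h_{t}(v))\,\partial_{x_{1}}\big|_{h_{t}(v)}
=t^{-1}\,X\bigl(h_{t}(v)\bigr).
\]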
We see that the coordinate functions $y^{\alpha}_{w}$ are functions of
weight $w$ and the field $\partial_{y^{\alpha}_{w}}$ is of weight
$-w$ in the sense of the above definition. In fact it can be proved that
\begin{lemma}[\cite{JG_MR_gr_bund_hgm_str_2011}]\label{lem:hol_hmg_fun_R}
Any homogeneous function on a graded space $\mathrm
{W}$ is a polynomial in the coordinate functions $y^{\alpha}_{w}$
homogeneous in weights~$w$.
\end{lemma}
\paragraph*{Graded bundles and homogeneity structures}
Since, as indicated above, a graded space can be seen as a
generalization of the notion of a vector space, it is natural to define
\emph{graded bundles} per analogy to vector bundles. A graded bundle
is just a fiber-bundle with the typical fiber being a graded space and
with transition maps respecting the graded space structure on fibers.
\begin{definition} A \emph{graded bundle of rank $\mathbf{d}$} is a
smooth fiber bundle $\tau: E\to M$ over a real smooth manifold $M$
with the typical fiber $\mathbb{R}^{\mathbf{d}}$ considered as a
graded space of rank $\mathbf{d}$. Equivalently, $\tau$ admits local
trivializations $\psi_{U}: \tau^{-1}(U)\to U \times\mathbb
{R}^{\mathbf{d}}$ such that transition maps $g_{UU'}(q):= \psi
_{U'}\circ\psi_{U}^{-1}|_{\{q\}\times\mathbb{R}^{\mathbf{d}}}:
\mathbb{R}^{\mathbf{d}}\to\mathbb{R}^{\mathbf{d}}$ are isomorphisms
of graded spaces smoothly depending on $q\in U\cap U'$. By a \emph
{degree} of a graded bundle we shall understand the degree of the
typical fiber $\mathbb{R}^{\mathbf{d}}$.
A \emph{morphism of graded bundles} is defined as a fiber-bundle
morphism being a graded space morphism on fibers. Clearly, graded
bundles together with their morphisms form a \emph{category}.
\end{definition}
\begin{example}\label{ex:TkM}
A canonical example of a graded bundle is provided by the concept of a
\emph{higher tangent bundle}. Let $M$ be a smooth manifold and let
$\gamma,\delta:(-\varepsilon,\varepsilon)\rightarrow M$ be two
smooth curves on $M$. We say that $\gamma$ and $\delta$ have the same
$k$th-\emph{jet at} $0$ if, for every smooth function
$\phi:M\rightarrow\mathbb{R}$, the difference $\phi\circ\gamma
-\phi\circ\delta:(-\varepsilon,\varepsilon)\rightarrow\mathbb
{R}$ vanishes at $0$ up to order $k$. Equivalently, in any local
coordinate system on $M$ the Taylor expansions of $\gamma$ and $\delta
$ agree at $0$ up to order $k$. The $k$th-jet of $\gamma$
at $0$ shall be denoted by $\mathbf{t}^{k}\gamma(0)$. As a set the
$k$th-order tangent bundle $\mathrm{T}^{k}M$ consists of
all $k$th-jets of curves on $M$. It is naturally a bundle
over $M$ with the projection $\mathbf{t}^{k}\gamma(0)\mapsto\gamma
(0)$. It also has a natural structure of a smooth manifold and a graded
bundle of rank $(\underbrace{m,m,\ldots,m}_{k})$ with $m=\dim M$.
Indeed, given a local coordinate system $(x^{i})$ with $
i=1,\ldots, m$ on $M$ we define the so-called \emph{adapted
coordinate system} $(x^{i,(\alpha)})$ on $\mathrm{T}^{k}M$ with $i=
1,\ldots,m$, $\alpha=0,1,\ldots,k$ via the formula
\[
x^{i}(\gamma(t))=x^{i,(0)}(\mathbf{t}^{k}\gamma(0))+t\cdot
x^{i,(1)}(\mathbf{t}^{k}\gamma(0))+\ldots+\frac
{t^{k}}{k!}x^{i,(k)}(\mathbf{t}^{k}\gamma(0))+o(t^{k})\ .
\]
That is, $x^{i,(\alpha)}$ at $\mathbf{t}^{k}\gamma(0)$ is the
$\alpha$th-coefficient of the Taylor expansion of
$x^{i}(\gamma(t))$. We can assign weight $\alpha$ to the coordinate
$x^{i,(\alpha)}$. It is easy to check that a smooth change of local
coordinates on $M$ induces a change of the adapted coordinates on
$\mathrm{T}^{k}M$ which respects this grading.
\end{example}
Note that every graded bundle $\tau:E\rightarrow M$ induces a smooth
action $h^{E}:\mathbb{R}\times E\rightarrow E$ of the multiplicative
monoid $(\mathbb{R},\cdot)$ defined fiber-wise by the canonical
actions $h^{\mathrm{V}}$ given by \reftext{\eqref{eqn:hgm_structure}} with
$\mathrm{V}=\tau^{-1}(p)$ for $p\in M$. We shall refer to this as to
the action by \emph{homotheties of $E$}.
Clearly, $M=h^{E}_{0}(E)$ and any graded bundle morphisms $\Phi
:E_{1}\rightarrow E_{2}$ intertwine the actions $h^{E_{1}}$ and
$h^{E_{2}}$, i.e.,
\[
\Phi(h^{E_{1}}_{t}(e))=h^{E_{2}}_{t}(\Phi(e)),
\]
for every $e\in E_{1}$ and every $t\in\mathbb{R}$.
The above construction justifies the following:
\begin{definition}\label{def:hgm_str}
A \emph{homogeneity structure} on a manifold $E$ is a smooth action of
the multiplicative monoid $(\mathbb{R},\cdot)$
\[
h:\mathbb{R}\times E \longrightarrow E\ .
\]
A \emph{morphism} of two homogeneity structures $(E_{1},h^{1})$ and
$(E_{2},h^{2})$ is a smooth map $\Phi:E_{1}\rightarrow E_{2}$
intertwining the actions $h^{1}$ and $h^{2}$, i.e.,
\[
\Phi(h^{1}_{t}(e))=h^{2}_{t}(\Phi(e)),
\]
for every $e\in E_{1}$ and every $t\in\mathbb{R}$. Clearly,
homogeneity structures with their morphisms form a \emph{category}.
In the context of homogeneity structures we can also speak about \emph
{homogeneous functions} and \emph{homogeneous vector fields}. They are
defined analogously to the notions in \reftext{Definition~\ref{def:hgm_real}}.
\end{definition}
As we already observed graded bundles are naturally homogeneity
structures. The main result of \cite{JG_MR_gr_bund_hgm_str_2011}
states that the opposite is also true: there is an equivalence between
the category of graded bundles and the category of homogeneity
structures (when restricted to connected manifolds).
\begin{theorem}[\cite{JG_MR_gr_bund_hgm_str_2011}] \label{thm:eqiv_real}
The category of (connected) graded bundles is equivalent to the
category of (connected) homogeneity structures. At the level of objects
this equivalence is provided by the following two constructions:
\begin{itemize}
\item With every graded bundle $\tau:E\rightarrow M$ one can associate
the homogeneity structure $(E,h^{E})$, where $h^{E}$ is the action by
homotheties of $E$.
\item Given a homogeneity structure $(M,h)$, the map
$h_{0}:M\rightarrow M_{0}:=h_{0}(M)$ provides $M$ with a canonical
structure of a graded bundle such that $h$ is the related action by homotheties.
\end{itemize}
And at the level of morphism:
\begin{itemize}
\item Every graded bundle morphism $\Phi:E_{1}\rightarrow E_{2}$ is a
morphism of the related homogeneity structures $(E_{1},h^{E_{1}})$ and
$(E_{2},h^{E_{2}})$.
\item Every homogeneity structure morphism $\Phi
:(E_{1},h^{1})\rightarrow(E_{2},h^{2})$ is a morphism of graded
bundles $h^{1}_{0}:E_{1}\rightarrow h^{1}_{0}(M)$ and
$h^{2}_{0}:E_{2}\rightarrow h^{2}_{0}(M)$.
\end{itemize}
\end{theorem}
Let us comment briefly on the proof.
The passage from graded bundles to homogeneity structures is obtained
by considering the natural action by homotheties discussed above. The
crucial (and difficult) part of the proof is to show that for every
homogeneity structure $(M,h)$, the manifold $M$ has a graded bundle
structure over $h_{0}(M)$ compatible with the action $h$. The main idea
is to associate to every point $p\in M$ the $k$th-jet at
$t=0$ (for $k$ big enough) of the curve $t\mapsto h_{t}(p)$. In this
way we obtain an embedding $M\hookrightarrow\mathrm{T}^{k}M$, and the
graded bundle structure on $M$ can now be naturally induced from the
canonical graded bundle structure on $\mathrm{T}^{k} M$ (cf. Ex. \ref{ex:TkM}
and the proof of \reftext{Theorem~\ref{thm:equiv_holom}}).
The assumption of connectedness is of a purely technical character: we want
to exclude the situation in which the fibers of a graded bundle over
different base components have different ranks.
\paragraph*{The weight vector field, the core and the natural affine
fibration}
Let us end the discussion of graded bundles by introducing several
constructions naturally associated with this notion.
Observe first that the homogeneity structure $h:\mathbb{R}\times
M\rightarrow M$ provides $M$ with a natural, globally-defined, action
of the additive group $(\mathbb{R},+)$ by the formula $(t,p)\mapsto
h_{e^{t}}(p)$. Clearly such an action is a flow of some (complete)
vector field.
\begin{definition}\label{def:euler_vf}
A (complete) vector field $\Delta_{M}$ on $M$ associated with the flow
$(t,p)\mapsto h_{e^{t}}(p)$ is called the \emph{weight vector field}
of $M$. Alternatively
$\Delta_{M}(p)=\frac{\mathrm{d}}{\mathrm{d}\,t}\big
|_{t=1}h_{t}(p)$.
\end{definition}
It is easy to show that, in local graded coordinates $(y^{\alpha
}_{w^{\alpha}})$ on $M$ (such coordinates exist since $M$ is a graded
bundle by \reftext{Theorem~\ref{thm:eqiv_real}}), the weight vector field reads as
\[
\Delta_{M}=\sum_{\alpha}w^{\alpha}y^{\alpha}_{w^{\alpha}}\partial
_{y^{\alpha}_{w^{\alpha}}}\ .
\]
Actually, specifying a weight vector field is equivalent to defining
the homogeneity structure on $M$. The passage from the weight vector
field to the action of $(\mathbb{R},\cdot)$ is given by
$h_{e^{t}}:=\exp(t\cdot\Delta_{M})$.
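For example, on the graded space $\mathrm{W}=\mathbb{R}^{(2,1)}$ considered earlier (a small check of ours), the weight vector field and the recovered action read
\[
\Delta_{\mathrm{W}}=x_{1}\partial_{x_{1}}+x_{2}\partial_{x_{2}}+2y\,\partial_{y},\qquad
\exp(t\cdot\Delta_{\mathrm{W}})(x_{1},x_{2},y)=(e^{t}x_{1},\,e^{t}x_{2},\,e^{2t}y)=h_{e^{t}}(x_{1},x_{2},y).
\]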
\begin{remmark}\label{rem:euler_weight}
The assignment $\phi\mapsto w\phi$ where $\phi:M\rightarrow\mathbb
{R}$ is a homogeneous function of weight $w$ can be extended to a
derivation in the algebra of smooth functions on $M$, thus a vector
field. Clearly, it coincides with the weight vector field $\Delta_{M}$
which justifies the name of $\Delta_{M}$.
Besides, the notion of a weight vector field can be used to study the
weights of certain geometrical objects defined on $M$ (cf. \reftext{Definitions~\ref{def:hgm_real} and \ref{def:hgm_str}}). This should be clear,
since the homogeneity structure used to define the weights can be
obtained by integrating the weight vector field.
For example, $X$ is a weight $w$ vector field on $M$ if and only if
\[
[\Delta_{M},X]=w\cdot X\ .
\]
Using the results of \reftext{Lemma~\ref{lem:hol_hmg_fun_R}}, it is easy to see
that, in local graded coordinates $(y^{\alpha}_{w^{\alpha}})$, such a
field has to be of the form
\[
X=\sum_{\alpha}P_{\alpha}\cdot\partial_{y^{\alpha}_{w^{\alpha
}}}\ ,
\]
where $P_{\alpha}$ is a homogeneous function of weight $w+w^{\alpha}$
for each index $\alpha$. Thus $X$, regarded as a derivation, takes a
function of weight $w'$ to a function of weight $w+w'$.
\end{remmark}
A graded bundle $\tau:E^{k}\rightarrow M$ of degree $k$ ($k\geq1$) is
fibrated by submanifolds defined (invariantly) by fixing values of all
coordinates of weight less or equal $j$ ($0\leq j\leq k$) in a given
graded coordinate system. The quotient space is a graded bundle of
degree $j$ equipped with an atlas inherited from the atlas of $E^{k}$
in an obvious way. The obtained bundles will be denoted by $\tau^{j}:
E^{j}\rightarrow M$. They can be put together into the following
sequence called \emph{the tower of affine bundle projections
associated with $E^{k}$}:
\begin{equation}\label{eqn:tower_grd_spaces}
E^{k}\xrightarrow{\tau^{k}_{k-1}} E^{k-1}\xrightarrow{\tau
^{k-1}_{k-2}} E^{k-2}\xrightarrow{\tau^{k-2}_{k-3}} \ldots
\xrightarrow{\tau^{2}_{1}} E^{1}\xrightarrow{\tau^{1}} M.
\end{equation}
Define (invariantly) a submanifold $\widehat{E^k}\subset E^{k}$ of a
graded bundle $\tau: E\rightarrow M$ of rank $(d_{1}, \ldots, d_{k})$
by setting to zero all fiber coordinates of degree less than $k$. It is
a graded subbundle of rank $(0,\ldots, 0, d_{k})$ but we shall
consider it as a vector bundle with homotheties $(t, (z^{\alpha}_{k}))
\mapsto(t \cdot z^{\alpha}_{k})$ and call it the \emph{core} of
$E^{k}$. It is worth noting that a morphism of graded bundles respects
the associated towers of affine bundle projections and induces a vector
bundle morphism on the core bundles.
\begin{example}\label{ex:TkM_tower_core} The core of $\mathrm{T}^{k}
M$ is $\mathrm{T}M$, while $\tau^{j}_{j-1}$ in the tower of affine
projections associated with $\mathrm{T}^{k} M$ is just the natural
projection to lower-order jets $\mathrm{T}^{j} M \rightarrow\mathrm
{T}^{j-1} M$.
\end{example}
\paragraph*{The monoid $\mathcal{G}_{k}$}
We shall end this introductory part by introducing $\mathcal{G}_{2}$,
the monoid of the 2nd-jets of punctured maps $\gamma
:(\mathbb{R},0)\rightarrow(\mathbb{R},0)$. Actually it is a special
case of
\begin{equation}\label{eqn:def_Gk}
\mathcal{G}_{k} := \{[\phi]_{k}\ |\ \phi:\mathbb{R}\rightarrow
\mathbb{R},\ \phi(0)=0\}\ ,
\end{equation}
the monoid of the $k$th-jets of punctured maps $\phi:
(\mathbb{R}, 0)\rightarrow(\mathbb{R}, 0)$. Here $[\phi]_{k}$
denotes an equivalence class of the relation
\[
\phi\sim_{k} \psi\quad\text{if and only if}\quad\phi
^{(j)}(0)=\psi^{(j)}(0)\quad\text{for every $j=1,2,\ldots, k$.}
\]
The natural multiplication on $\mathcal{G}_{k}$ is induced by the
composition of maps
\[
[\phi]_{k}\cdot[\psi]_{k}:=[\phi\circ\psi]_{k}\ ,
\]
for every $\phi,\psi:(\mathbb{R},0)\rightarrow(\mathbb{R},0)$.
Thus, a class $[\phi]_{k}$ is fully determined by the coefficients of
the Taylor expansion of $\phi$ at $0$ up to order $k$ and,
consequently, $\mathcal{G}_{k}$ can be seen as a set of polynomials of
degree less or equal $k$ vanishing at $0$ equipped with a natural
multiplication defined by composing polynomials and then truncating
terms of order greater than $k$:
\[
\mathcal{G}_{k} \simeq\{a_{1} t + \frac{1}{2} a_{2} t^{2} + \ldots+
\frac{1}{k!} a_{k} t^{k} + o(t^{k}): a_{1}, \ldots, a_{k}\in\mathbb
{R}\} \simeq\mathbb{R}^{k}.
\]
\begin{remmark}\label{rem:g_k_k=1_2}
Note that $\mathcal{G}_{1} \simeq(\mathbb{R}, \cdot)$ is just the
monoid of multiplicative reals, while the multiplication in $\mathcal
{G}_{2}\simeq\mathbb{R}^{2}$ is given by
\[
(a,b)(A,B)=(aA,a B+bA^{2})\ .
\]
Obviously,
\[
(\mathbb{R}, \cdot)\simeq\{[\phi]_{k}: \phi(t)=at, a\in\mathbb
{R}\}
\]
is a submonoid of $\mathcal{G}_{k}$ for every $k$.
\end{remmark}
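For the reader's convenience, the product formula in $\mathcal{G}_{2}$ can be verified directly on representatives (a one-line computation we add for illustration): taking $\phi(t)=at+\tfrac{1}{2}bt^{2}$ and $\psi(t)=At+\tfrac{1}{2}Bt^{2}$,
\[
\phi(\psi(t))=a\Bigl(At+\tfrac{1}{2}Bt^{2}\Bigr)+\tfrac{1}{2}b\Bigl(At+\tfrac{1}{2}Bt^{2}\Bigr)^{2}
=aA\,t+\tfrac{1}{2}\bigl(aB+bA^{2}\bigr)t^{2}+o(t^{2}),
\]
which is exactly $(a,b)(A,B)=(aA,\,aB+bA^{2})$ under the identification $[\phi]_{2}\leftrightarrow(a,b)$.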
Consider the set of algebra endomorphisms $\operatorname{End}(\mathbb
{D}^{k})$ of the Weil algebra $\mathbb{D}^{k}=\mathbb{R}[\varepsilon]/
\left\langle\varepsilon^{k+1}\right\rangle$. Every such
endomorphism is uniquely determined by its value on the generator
$\varepsilon$, i.e., a map of the form
\begin{equation}
\label{eqn:basic_trans}
\varepsilon\longmapsto a_{1} \varepsilon+ \frac{1}{2}
a_{2}\varepsilon^{2} + \ldots+\frac{1}{k!} a_{k} \varepsilon^{k}.
\end{equation}
It is an automorphism if $a_{1}\neq0$.
Thus we may identify $\operatorname{End}(\mathbb{D}^{k})$ with
$\mathcal{G}_{k}$, taking into account that the multiplication
obtained by composing two endomorphisms of the form \reftext{\eqref{eqn:basic_trans}} is opposite to the product in $\mathcal{G}_{k}$
based on the identification \reftext{\eqref{eqn:def_Gk}}, i.e.
\[
\operatorname{End}(\mathbb{D}^{k})^{\text{op}} \simeq\mathcal{G}_{k}.
\]
\paragraph*{Left and right monoid actions}
Let $\mathcal{G}$ be an arbitrary monoid. By a \emph{left $\mathcal
{G}$-action} on a manifold $M$ we understand a map $\mathcal{G}\times
M\rightarrow M$ denoted by $(g, p)\mapsto g.p$ such that
$h.(g.p)=(h\cdot g).p$ for every $g,h\in\mathcal{G}$ and $p\in M$.
Here $h\cdot g$ denotes the multiplication in $\mathcal{G}$. \emph
{Right $\mathcal{G}$-actions} $M\times\mathcal{G}\rightarrow M$ are
defined analogously. Note that if the monoid multiplication is Abelian
(as is for example in the case of the multiplicative reals $(\mathbb
{R},\cdot)$ and the multiplicative complex numbers $(\mathbb{C},\cdot
)$), then every left action is automatically a right action and vice versa.
Note that any left $\mathcal{G}$-action $(g, p)\mapsto g.p$, gives
rise to a right $\mathcal{G}^{\text{op}}$-action on the same manifold
$M$ given by the formula $p.g = g.p$. However, unlike the case of
groups actions, in general, there is no canonical correspondence
between left and right $\mathcal{G}$-actions. For example, that is the
case if $\mathcal{G}=\mathcal{G}_{k}$ for $k\geq2$, since the
monoids $\mathcal{G}_{k}$ and $\mathcal{G}_{k}^{\text{op}}$ are not
isomorphic.
All monoids considered in our paper will be smooth, i.e., we restrict
our attention to monoids $\mathcal{G}$ which are smooth manifolds and
such that the multiplication $\cdot:\mathcal{G}\times\mathcal
{G}\rightarrow\mathcal{G}$ is a smooth map. We shall study smooth
actions of these monoids on manifolds, i.e., actions $\mathcal
{G}\times M\rightarrow M$ (or $M\times\mathcal{G}\rightarrow M$)
which are smooth maps.
We present now two canonical examples of left and right $\mathcal
{G}_{k}$-actions.
\begin{example}\label{ex:TkM_Gk} The natural composition of
$k$th-jets
\[
[\gamma]_{k}\circ[\phi]_{k} \mapsto[\gamma\circ\phi]_{k},
\]
for $\phi:\mathbb{R}\rightarrow\mathbb{R}$, $\gamma: \mathbb
{R}\rightarrow M$ where $\phi(0)=0$ defines a right $\mathcal
{G}_{k}$-action on the manifold $\mathrm{T}^{k} M$.
\end{example}
\begin{example}\label{ex:Gk_Tk_starM}
Following Tulczyjew's notation \cite{Tulczyjew_preprint}, the higher
cotangent space to a manifold $M$ at a point $p\in M$, denoted by
$\mathrm{T}_{p}^{k\ast} M$, consists of $k$th-jets at
$p\in M$ of functions $f: M\rightarrow\mathbb{R}$ such that $f(p)=0$.
The higher cotangent bundle $\mathrm{T}^{k\ast} M$ is a vector bundle
whose fibers are $\mathrm{T}_{p}^{k\ast} M$ for $p\in M$. The natural
composition of $k$th-jets
\[
[\phi]_{k}\circ[(f, p)]_{k} \mapsto[(\phi\circ f, p)]_{k},
\]
where $\phi:\mathbb{R}\rightarrow\mathbb{R}$ is as before, and $f:
M\rightarrow\mathbb{R}$ is such that $f(p)=0$ defines a left
$\mathcal{G}_{k}$-action on $\mathrm{T}^{k\ast}M$.
\end{example}
\section{On actions of the monoid of complex numbers}
\label{sec:complex}
In this section we study actions of the multiplicative monoid $(\mathbb
{C},\cdot)$ on manifolds. We begin by recalling some basic
construction from complex analysis, including the bundle of holomorphic
jets, and by introducing the notions of a complex graded space, per
analogy to \reftext{Definition~\ref{def:gr_space}} in the real case. Later,
again using the analogy with Section~\ref{sec:perm}, we define complex
and holomorphic graded bundles and homogeneity structures. The main
results of this section are contained in \reftext{Theorems~\ref{thm:equiv_holom} and \ref{thm:equiv_complex}}. The first states that
there is an equivalence between the categories of holomorphic
homogeneity structures (holomorphic action of the monoid $(\mathbb
{C},\cdot)$) and the category of holomorphic graded bundles. This
result and its proof is a clear analog of \reftext{Theorem~\ref{thm:eqiv_real}},
which deals with the smooth actions of the monoid $(\mathbb{R},\cdot
)$. In \reftext{Theorem~\ref{thm:equiv_complex}} we establish a similar
equivalence between smooth actions of $(\mathbb{C},\cdot)$ and
complex graded bundles. However, for this equivalence to hold we need
to input additional conditions (concerning, roughly speaking the
compatibility between the real and the complex parts of the action) on
the action of $(\mathbb{C},\cdot)$ in this case. We will call such
actions nice\ complex homogeneity structures (see \reftext{Definition~\ref{def:nice_hgm_str}}).
Throughout this section $N$ will be a complex manifold. We shall write
$N_{\mathbb{R}}$ for the smooth manifold associated with $N$, thus
$\dim_{\mathbb{R}}N_{\mathbb{R}}= 2\cdot\dim_{\mathbb{C}}N$. Let
$\mathcal{J}: \mathrm{T}N_{\mathbb{R}}\rightarrow\mathrm
{T}N_{\mathbb{R}}$ be the integrable almost complex structure defined
by~$N$.
\paragraph*{Holomorphic jet bundles} Following \cite
{Green_Griffiths_jet_bundles} we shall define $\mathrm{J}^{k}N$, the
space of $k$th-jets of holomorphic curves on $N$, and
recall its basic properties.
Consider a point $q\in N$, and let $\Delta_{R}\subset\mathbb{C}$
denote a disc of radius $0<R\leq\infty$ centered at $0$. Let
\[
\gamma:\Delta_{R}\rightarrow N
\]
be a holomorphic curve such that $\gamma(0)=q$. In a local holomorphic
coordinate system $(z^{j})$ around $q\in N$ the curve $\gamma$ is
given by a convergent series
\[
\gamma^{j}(\xi) := z^{j}(\gamma(\xi)) = a^{j}_{0} + a^{j}_{1} \xi+
a^{j}_{2} \xi^{2} + \ldots, \quad|\xi|<r
\]
where the coefficient $a^{j}_{l}$ equals $\frac{1}{l!} \frac{\mathbf
{d}^{l} \gamma^{j}}{\mathbf{d}\xi^{l}}(0)$ and $r\leq R$ is a
positive number. We say that curves $\gamma:\Delta_{R}\rightarrow N$,
and $\widetilde{\gamma}: \Delta_{\widetilde{R}}\rightarrow N$ \emph
{osculate to order $k$} if $\gamma(0)=\widetilde{\gamma}(0)$ and
$\frac{\mathbf{d}^{l} \gamma^{j}}{\mathbf{d}
\xi^{l}}(0)= \frac{\mathbf{d}^{l} \widetilde{\gamma}^{j}}{\mathbf
{d}\xi^{l}}(0)$ for any $j$ and $l=1,2, \ldots, k$. This property
does not depend on the choice of a holomorphic coordinate system around
$q$. The equivalence class of $\gamma$ will be denoted by $\mathbf
{j}^{k} \gamma(0)$ and called the $k$th-\emph{jet of a
holomorphic curve $\gamma$ at $q=\gamma(0)$}, while the set of all such
$k$th-jets at a given point $q$ will be denoted by
$\mathrm{J}^{k}_{q} N$. The totality
\[
\mathrm{J}^{k} N = \bigcup_{q\in M} \mathrm{J}^{k}_{q} N
\]
turns out to be a holomorphic (yet, in general, not linear) bundle over
$N$. Indeed, the local coordinate system $(z^{j})$ for $N$ gives rise
to a local \emph{adapted coordinate system} $(z^{j, (\alpha)})_{0\leq
\alpha\leq k}$ for $\mathrm{J}^{k} N$ where $z^{j, (\alpha)}(\mathbf
{j}^{k} \gamma(0)) = \frac{\mathbf{d}^{\alpha} \gamma^{j}}{\mathbf{d}\xi^{\alpha}}(0)$. It is easy to see that transition functions between
two such adapted coordinate systems are holomorphic. The bundle
$\mathrm{J}^{k} N$ is called the \emph{bundle of holomorphic} $k$th
\textit{jets of}~$N$.
We note also that a complex curve $\gamma:\Delta_{R}\rightarrow N$
lifts naturally to a (holomorphic) curve $\mathbf{j}^{k} \gamma:
\Delta_{R}\rightarrow\mathrm{J}^{k} N$. Moreover, a holomorphic map
$\Phi: N_{1}\rightarrow N_{2}$ induces a (holomorphic) map $\mathrm
{J}^{k} \Phi: \mathrm{J}^{k} N_{1}\rightarrow\mathrm{J}^{k} N_{2}$
defined analogously as in the category of smooth manifolds, i.e.,
$\mathrm{J}^{k} \Phi(\mathbf{j}^{k} \gamma):= \mathbf{j}^{k}(\Phi
\circ\gamma)$. This construction makes $\mathrm{J}^{k}$ a functor
from the category of complex manifolds into the category of holomorphic bundles.
Finally, let us comment that we do not work with any fixed radius $R$
of the disc $\Delta_{R}$, being the domain of the holomorphic curve
$\gamma$. Instead, by letting $R$ be an arbitrary positive number, we
work with germs of holomorphic curves. This allows us to avoid technical
problems related to the notion of holomorphicity.
For example, according to the Cauchy estimates, any holomorphic function
$\phi:\Delta_{R}\rightarrow\mathbb{C}$ satisfies
\[
|\phi^{(j)}(0)| \leq\frac{j!}{R^{j}} \operatorname{sup}_{\xi\in
\Delta_{R}} |\phi(\xi)|.
\]
It follows that, in general for a fixed radius $R$, coefficients
$a^{j}_{i}$ cannot be arbitrary.
In particular, it may happen that there are no holomorphic curves $\gamma: \Delta_{\infty}\rightarrow N$ except for constant ones, or that for a fixed value $R<\infty$ there are no holomorphic curves $\gamma:\Delta_{R}\rightarrow N$ such that the derivatives $\frac{\mathbf{d}^{l}\gamma^{j}}{\mathbf{d}\xi^{l}}(0)$ attain prescribed values (see \cite{Kobayashi_book,Ch_Wong_Finsler_geom_hol_jet_bndls} for the concepts of hyperbolicity and the Kobayashi metric).
It is a well-known fact from analytic function theory that a
holomorphic map $\gamma:\Delta_{R}\rightarrow N$ is uniquely
determined by its restriction to the real line $\mathbb{R}\subset
\mathbb{C}$. Therefore it should not be surprising that the
holomorphic jet bundle $\mathrm{J}^{k} N$ can be canonically
identified with the higher tangent bundle $\mathrm{T}^{k} N_{\mathbb
{R}}$. The latter is naturally equipped with an almost complex
structure $\mathcal{J}^{k}: \mathrm{T}\mathrm{T}^{k} N_{\mathbb
{R}}\rightarrow\mathrm{T}\mathrm{T}^{k} N_{\mathbb{R}}$ induced
from $\mathcal{J}$, the almost complex structure on $N_{\mathbb{R}}$.
Namely, $\mathcal{J}^{k}:= \kappa_{k}\circ\mathrm{T}^{k} \mathcal
{J}\circ\kappa_{k}^{-1}$, where $\kappa_{k}: \mathrm{T}^{k} \mathrm
{T}N_{\mathbb{R}}\rightarrow\mathrm{T}\mathrm{T}^{k} N_{\mathbb
{R}}$ is the canonical flip. In local adapted coordinates $(x^{j,
(\alpha)}, y^{j, (\alpha)})$ on $\mathrm{T}^{k} N_{\mathbb{R}}$
such that $z^{j} = x^{j} + \sqrt{-1} y^{j}$ we have
\[
\mathcal{J}^{k} \left( \frac{\partial}{\partial x^{j, (\alpha
)}}\right) = \frac{\partial}{\partial y^{j, (\alpha)}}, \quad
\mathcal{J}^{k}\left( \frac{\partial}{\partial y^{j, (\alpha
)}}\right) = - \frac{\partial}{\partial x^{j, (\alpha)}}.
\]
Thus the local functions $z^{j}_{\alpha}:= x^{j, (\alpha)} + \sqrt
{-1} y^{j, (\alpha)}$ form a system of local complex coordinates on
$\mathrm{T}^{k} N_{\mathbb{R}}$, and hence we may treat $\mathrm
{T}^{k} N_{\mathbb{R}}$ as a complex manifold. We identify $\mathrm
{J}^{k} N$ with $\mathrm{T}^{k} N_{\mathbb{R}}$ as complex manifolds,
by means of the map $\mathbf{j}^{k} \gamma\mapsto\mathbf{t}^{k}
\gamma_{|\mathbb{R}}$, where $\gamma_{|\mathbb{R}}$ is the
restriction of $\gamma$ to the real line $\mathbb{R}\subset\mathbb
{C}$. In local coordinates this identification looks rather trivial:
$(z^{j, (\alpha)})\mapsto(z^{j}_{\alpha})$. This construction is
clearly functorial, i.e., for any holomorphic map $\Phi:
N_{1}\rightarrow N_{2}$ the following diagram commutes:
\[
\xymatrix{
\mathrm{J}^k N_1 \ar[d]^{\simeq}\ar[rr]^{\mathrm{J}^k \Phi} &&
\mathrm{J}^k N_2\ar[d]^{\simeq} \\
\mathrm{T}^k (N_1)_{\mathbb{R}} \ar[rr]^{\mathrm{T}^k \Phi} &&
\mathrm{T}^k (N_2)_{\mathbb{R}}.
}
\]
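As a simple illustration (under the convention, assumed here, that the
adapted coordinate $x^{j,(\alpha)}$ of $\mathbf{t}^{k}\gamma_{|\mathbb{R}}$
represents the $\alpha$-th derivative at $0$), take $N=\mathbb{C}$ with
coordinate $z$ and $k=2$: then
\[
\mathbf{j}^{2}\gamma\;\longmapsto\;\bigl(z_{0}, z_{1}, z_{2}\bigr)
= \bigl(\gamma(0),\,\gamma'(0),\,\gamma''(0)\bigr),
\]
so the holomorphic $2$-jet of $\gamma$ and the second tangent lift of its
real restriction are encoded by the same triple of complex numbers.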
For $k=1$ there is another canonical identification of the real tangent
bundle $\mathrm{T}N_{\mathbb{R}}$ with the so-called \emph{holomorphic
tangent bundle} of $N$. Namely, consider the
complexification $\mathrm{T}^{\mathbb{C}}N:= \mathrm{T}N_{\mathbb
{R}}\otimes\mathbb{C}$ and extend $\mathcal{J}$ to a $\mathbb
{C}$-linear endomorphism $\mathcal{J}^{\mathbb{C}}$ of $\mathrm
{T}^{\mathbb{C}}N$. The $(+i)$- and $(-i)$-eigenspaces of $\mathcal
{J}^{\mathbb{C}}$ define the canonical decomposition
\[
\mathrm{T}^{\mathbb{C}}N = \mathrm{T}'N \oplus\mathrm{T}'' N
\]
of $\mathrm{T}^{\mathbb{C}}N$ into the direct sum of complex
subbundles $\mathrm{T}' N$, $\mathrm{T}'' N$ called, respectively,
\emph{holomorphic} and \emph{anti-holomorphic tangent bundles of
$N$}. It is easy to see that the composition $\mathrm{T}N_{\mathbb
{R}}\subset\mathrm{T}^{\mathbb{C}}N \twoheadrightarrow\mathrm
{T}'N$ gives a complex bundle isomorphism $\mathrm{T}N_{\mathbb
{R}}\simeq\mathrm{T}' N$. Let $\Phi: (N_{1})_{\mathbb
{R}}\rightarrow(N_{2})_{\mathbb{R}}$ be a smooth map and denote by
$\mathrm{T}^{\mathbb{C}}\Phi: \mathrm{T}^{\mathbb{C}}N_{1}
\rightarrow\mathrm{T}^{\mathbb{C}}N_{2}$ the $\mathbb{C}$-linear
extension of $\mathrm{T}\Phi$. It is well known that $\Phi$ is
holomorphic if and only if $\mathrm{T}\Phi$ is $\mathbb{C}$-linear.
The latter is equivalent to $\mathrm{T}^{\mathbb{C}}\Phi(\mathrm
{T}'N_{1})\subset\mathrm{T}' N_{2}$. In such a case we denote
$\mathrm{T}' \Phi: = \mathrm{T}^{\mathbb{C}}\Phi|_{\mathrm{T}'
N_{1}}: \mathrm{T}' N_{1}\rightarrow\mathrm{T}' N_{2}$.
Thus, under the canonical identifications discussed above, all three
constructions $\mathrm{J}^{1} \Phi$, $\mathrm{T}\Phi$ and $\mathrm
{T}'\Phi$ coincide (although the functor $\mathrm{T}$ is applicable
to a wider class of maps than $\mathrm{J}^{1}$ and $\mathrm{T}'$).
In what follows, given a smooth map $\phi: N\rightarrow\mathbb{C}$,
the real differential of $\phi$ at a point $q\in N$ is denoted by
$\mathbf{d}_{q} \phi: \mathrm{T}_{q}N_{\mathbb{R}}\rightarrow\mathbb
{C}$. If $\phi$ happens to be holomorphic, then $\mathbf{d}_{q}\phi$
is $\mathbb{C}$-linear.
\paragraph*{Holomorphic and complex graded bundles}
The notion of a graded space has an obvious complex counterpart, which
generalizes that of a complex vector space. Below we essentially rewrite the
definitions from Section~\ref{sec:perm} in the holomorphic context.
\begin{definition}
Let $\mathbf{d}= (d_{1}, \ldots, d_{k})$ be a sequence of
non-negative integers, let $I$ be a set of cardinality $|\mathbf
{d}|:=d_{1}+\ldots+d_{k}$, and let $I\ni\alpha\mapsto w^{\alpha}\in
\mathbb{Z}_{+}$ be a map such that $d_{i}= \#\{\alpha\in I: w^{\alpha}=i \}$
for each $1\leq i\leq k$.
A \emph{complex graded space of rank $\mathbf{d}$} is a complex
manifold $\mathrm{V}$ biholomorphic with $\mathbb{C}^{|\mathbf{d}|}$
and equipped with an equivalence class of complex graded coordinates.
By definition, a \emph{system of complex graded coordinates} on
$\mathrm{V}$ is a global complex coordinate system $(z^{\alpha
})_{\alpha\in I}: \mathrm{V}\overset{\simeq}{\rightarrow}\mathbb
{C}^{|\mathbf{d}|}$ with \emph{weight} $w^{\alpha}$ assigned to each
function $z^{\alpha}$, $\alpha\in I$. To indicate the presence of
weights we shall sometimes write $z^{\alpha}_{w^{\alpha}}$ instead of
$z^{\alpha}$.
Two systems of complex graded coordinates, $(z^{\alpha}_{w^{\alpha
}})$ and $(\underline{z}^{\alpha}_{\underline{w}^{\alpha}})$ are
\emph{equivalent} if there exist constants $c_{\alpha_{1} \ldots
\alpha_{j}}^{\alpha}\in\mathbb{C}$, defined for indices such that
$\underline{w}^{\alpha}=w^{\alpha_{1}} +\ldots+ w^{\alpha_{j}}$,
satisfying
\begin{equation}\label{eqn:graded_transf}
\underline{z}^{\alpha}_{\underline{w}^{\alpha}} = \sum_{\substack
{j=1,2, \ldots\\ \underline{w}^{\alpha}=w^{\alpha_{1}}+\ldots
+w^{\alpha_{j}}}} c_{\alpha_{1} \ldots\alpha_{j}}^{\alpha
}z^{\alpha_{1}}_{w^{\alpha_{1}}} \cdots z^{\alpha_{j}}_{w^{\alpha_{j}}}.
\end{equation}
The highest coordinate weight (i.e., the highest number $i$ such that
$d_{i}\neq0$) is called the \emph{degree} of a complex graded space
$\mathrm{V}$.
By a \emph{morphism between complex graded spaces} $\mathrm{V}_{1}$
and $\mathrm{V}_{2}$ we understand a holomorphic map $\Phi:\mathrm
{V}_{1}\rightarrow\mathrm{V}_{2}$ which in some (and thus in any)
complex graded coordinates is given by polynomials homogeneous with
respect to the weights~$w^{\alpha}$.
\end{definition}
We remark that the weights $(w^{\alpha})$ assigned to the
coordinates on $\mathrm{V}$ are a part of the structure of the graded
space $\mathrm{V}$. Note that the set of functions $(\underline
{z}^{\alpha}_{\underline{w}^{\alpha}})$ defined by \reftext{\eqref{eqn:graded_transf}} defines a biholomorphic map if and only if, for each
$j=1, 2, \ldots, k$, the matrix $(c^{\alpha}_{\beta})$ formed by the
coefficients with $w^{\alpha}=w^{\beta}= j$ is non-singular.
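For instance, on a complex graded space of rank $\mathbf{d}=(1,1)$ with
coordinates $(z^{1}_{1}, z^{2}_{2})$, the transformation law \reftext{\eqref{eqn:graded_transf}} allows precisely the changes
\[
\underline{z}^{1}_{1} = c^{1}_{1}\, z^{1}_{1}, \qquad
\underline{z}^{2}_{2} = c^{2}_{2}\, z^{2}_{2} + c^{2}_{11}\,\bigl(z^{1}_{1}\bigr)^{2},
\]
with $c^{1}_{1}, c^{2}_{2}\in\mathbb{C}\setminus\{0\}$ and $c^{2}_{11}\in
\mathbb{C}$ arbitrary.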
\begin{remmark}
A complex graded space of rank $\mathbf{d}=(k)$ is just a complex
vector space of dimension $k$. Indeed, it is impossible to
define a smooth map (a hypothetical addition of vectors in $\mathbb
{C}^{k}$) $+': \mathbb{C}^{k}\times\mathbb{C}^{k} \to\mathbb
{C}^{k}$, different from the standard addition of vectors in $\mathbb
{C}^{k}$, such that $\mathbb{C}^{k}$ equipped with the standard
multiplication by complex numbers and the addition $+'$ satisfies all the
axioms of a complex vector space.
This follows immediately from an analogous statement for a real vector
space: $+'$ would define an alternative addition on $\mathbb
{R}^{2n}\approx\mathbb{C}^{n}$, which is impossible.
\end{remmark}
Analogously to the real case, any complex graded space induces
an action $h^{\mathrm{V}}: \mathbb{C}\times\mathrm{V}\to\mathrm
{V}$ of the multiplicative monoid $(\mathbb{C}, \cdot)$ defined in
complex graded coordinates $(z^\alpha_w)$ by
\[
h^{\mathrm{V}}(\xi, (z^{\alpha}_{w})) = (\xi^{w} \cdot z^{\alpha}_{w}).
\]
Here $\xi\in\mathbb{C}$. We shall call $h^{\mathrm{V}}$ the action
by \emph{homotheties} of $\mathrm{V}$. Instead of $h^{\mathrm
{V}}(\xi,\cdot)$ we shall also write $h^{\mathrm{V}}_{\xi}(\cdot)$.
\begin{definition} A smooth function $f: \mathrm{V}\rightarrow\mathbb
{C}$ defined on a complex graded space $\mathrm{V}$ is called \emph{complex
homogeneous of weight} $w$ if
\begin{equation}\label{eqn:def_hmg_fun}
f(h^{\mathrm{V}}(\xi, v)) = \xi^{w} \, f(v)
\end{equation}
for any $v\in\mathrm{V}$ and any $\xi\in\mathbb{C}$.
\end{definition}
We see that the coordinate functions $z^{\alpha}_{w}$ are of
weight $w$ in the sense of the above definition.
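For instance, on a complex graded space of rank $(1,1)$ as above, the
functions of the form
\[
f(z^{1}_{1}, z^{2}_{2}) = a\,\bigl(z^{1}_{1}\bigr)^{2} + b\, z^{2}_{2},
\qquad a, b\in\mathbb{C},
\]
are complex homogeneous of weight $2$; the lemma below shows that there
are no others.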
\begin{lemma}\label{lem:hol_hmg_fun}
Any complex homogeneous function on a complex graded space $\mathrm{V}$
is a polynomial in the coordinate functions $z^{\alpha}_{w}$
homogeneous in weights $w$.
\end{lemma}
\begin{proof}
Let $\phi:\mathrm{V}\rightarrow\mathbb{C}$ be a complex homogeneous
function of weight $w$. Note that $\mathrm{V}$ can be canonically
treated as a (real) graded space (of rank $2\mathbf{d}$, where
$\mathbf{d}$ is the rank of $\mathrm{V}$), simply by assigning
weights $w^{\alpha}$ to real coordinates $x^{\alpha}$, $y^{\alpha}$
such that $z^{\alpha}= x^{\alpha}+ \sqrt{-1} \, y^{\alpha}$ are
complex graded coordinates of weight $w^{\alpha}$. Let us denote this
graded space by $\mathrm{V}_{\mathbb{R}}$. Clearly, the real and
imaginary parts $\Re\phi$, $\Im\phi:\mathrm{V}_{\mathbb
{R}}\rightarrow\mathbb{R}$ of $\phi$ are of weight $w$, in the sense of
\reftext{Definition~\ref{def:hgm_real}}, with respect to this graded structure.
Thus, in view of \reftext{Lemma~\ref{lem:hol_hmg_fun_R}}, $\Re\phi$ and $\Im
\phi$ are real polynomials (homogeneous of weight $w$) in $x^{\alpha
}$ and $y^{\alpha}$. We conclude that $\phi$ is a polynomial in
$z^{\alpha}$ and $\bar{z}^{\alpha}$ with complex coefficients,
homogeneous of weight $w$ with respect to the non-standard gradation on
$\mathbb{C}[z^{\alpha}, \bar{z}^{\alpha}]$ in which the weight of
$z^{\alpha}$ and $\bar{z}^{\alpha}$ is $w^{\alpha}$.
To finish the proof it remains to show that $\phi$ does not depend on the
conjugate variables $\bar{z}^{\alpha}$. Under the additional
assumption that $\phi$ is holomorphic this is immediate; for future
purposes, however, we would like to assume only the smoothness of
$\phi$. The argument is inductive with respect to the weight $w$. The cases
$w=0$ and $w=1$ are trivial. For a general $w$ let us fix an index
$\alpha$, and denote $\phi=\phi(z^{\alpha},z^{\beta}, \bar
{z}^{\gamma})$ where $\beta\neq\alpha$. Denote by $\phi_{\alpha
}$ the derivative of $\phi$ with respect to $z^{\alpha}$. Using the
homogeneity of $\phi$ we easily get that for every $\xi\in\mathbb{C}$
\begin{align*}
\phi_{\alpha}(\xi^{w^{\alpha}}z^{\alpha}, \xi^{w^{\beta}}
z^{\beta}, \bar{\xi}^{w^{\gamma}}\bar{z}^{\gamma})&=\lim_{|h|\to
0}\frac{\phi(\xi^{w^{\alpha}}z^{\alpha}+h,\xi^{w^{\beta}}
z^{\beta},\bar{\xi}^{w^{\gamma}}\bar{z}^{\gamma})-\phi(\xi
^{w^{\alpha}}z^{\alpha}, \xi^{w^{\beta}} z^{\beta},\bar{\xi
}^{w^{\gamma}}\bar{z}^{\gamma})}{h}
\\
&=\lim_{|h'|\to0}\frac{\phi(\xi^{w^{\alpha}}(z^{\alpha}+h'), \xi
^{w^{\beta}} z^{\beta},\bar{\xi}^{w^{\gamma}}\bar{z}^{\gamma
})-\phi(\xi^{w^{\alpha}}z^{\alpha},\xi^{w^{\beta}} z^{\beta
},\bar{\xi}^{w^{\gamma}}\bar{z}^{\gamma})}{\xi^{w^{\alpha}}\cdot
h'}\\
&=\lim_{|h'|\to0}\frac{\xi^{w}\cdot\phi(z^{\alpha}+h', z^{\beta
},\bar{z}^{\gamma})-\xi^{w}\cdot\phi(z^{\alpha}, z^{\beta},\bar
{z}^{\gamma})}{\xi^{w^{\alpha}}\cdot h'}\\
&=\xi^{w-w^{\alpha}}\phi
_{\alpha}(\xi^{w^{\alpha}}z^{\alpha}, \xi^{w^{\beta}} z^{\beta
},\bar{\xi}^{w^{\gamma}}\bar{z}^{\gamma})\ .
\end{align*}
In other words, $\phi_{\alpha}$ is homogeneous of weight $w-w^{\alpha
}<w$ and thus, by the inductive assumption, a~homogeneous polynomial of
weight $w-w^{\alpha}$ in variables $(z^{\alpha},z^{\beta})$. We conclude that
$\phi=\psi+\eta$, where $\psi$ is a homogeneous polynomial of
weight $w$ in variables $(z^{\alpha},z^{\beta})$ and $\eta$ is a homogeneous
polynomial of weight $w$ in variables $z^{\beta}$ and $\bar
{z}^{\gamma}$ where $\beta\neq\alpha$. Repeating the above
reasoning for the remaining indices, applied to the polynomial $\eta$,
we obtain that
$\phi=\phi'+\eta'$, where $\phi'$ is a homogeneous polynomial of
weight $w$ in the variables $z^{\gamma}$ and $\eta'$ is a homogeneous
polynomial of weight $w$ in the variables $\bar{z}^{\gamma}$. In such a
case $\eta'$ must be a complex homogeneous function of weight $w$, being
the difference of the two complex homogeneous functions $\phi$ and $\phi'$,
both of weight $w$. However, since $\eta'$ is a homogeneous polynomial
in $\bar{z}^{\gamma}$ we have
\[
\eta'(\bar{\theta}^{w^{\gamma}}\bar{z}^{\gamma})=\bar{\theta
}^{w}\cdot\eta'(\bar{z}^{\gamma})\ ,
\]
hence the only possibility for $\eta'$ to be complex homogeneous of
weight $w$ is $\eta'\equiv0$. This ends the proof.
\end{proof}
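A simple example illustrating the last step: on the complex graded space
$\mathrm{V}=\mathbb{C}^{(1)}$, with a single coordinate $z$ of weight $1$,
the function $\phi(z)=z\bar{z}=|z|^{2}$ is real homogeneous of weight $2$
with respect to the underlying real graded structure, but it is not
complex homogeneous, since
\[
\phi\bigl(h^{\mathrm{V}}(\xi, z)\bigr) = |\xi|^{2}\,|z|^{2}
\neq\xi^{2}\,|z|^{2}
\]
whenever $\xi\notin\mathbb{R}$ and $z\neq0$. This is why the conjugate
variables cannot appear in a complex homogeneous function.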
Analogously to the real case, we can define a complex graded bundle as
a smooth fiber bundle with the typical fiber being a complex graded
space. In the case when the base possesses a complex manifold structure
and the local trivializations are glued by holomorphic transition functions
(so that, in particular, the total space of the bundle is itself a complex
manifold), we speak of holomorphic graded bundles.
\begin{definition}
\label{def:cplx_gr_bundle}
A \emph{complex graded bundle of rank $\mathbf{d}$} is a smooth fiber
bundle $\tau: E\to M$ over a real smooth manifold $M$ with the typical
fiber $\mathbb{C}^{\mathbf{d}}$ considered as a complex graded space
of rank $\mathbf{d}$. Equivalently, $\tau$ admits local
trivializations $\phi_{U}: \tau^{-1}(U)\to U \times\mathbb
{C}^{\mathbf{d}}$ such that transition functions $g_{UU'}(q):= \phi
_{U'}\circ\phi_{U}^{-1}|_{\{q\}\times\mathbb{C}^{\mathbf{d}}}:
\mathbb{C}^{\mathbf{d}}\to\mathbb{C}^{\mathbf{d}}$ are isomorphisms
of complex graded spaces depending smoothly on $q\in U\cap U'$. If $M$
and $E$ are complex manifolds and $g_{UU'}$ are holomorphic functions
of $q$, then $\tau$ is called a \emph{holomorphic graded bundle}.
By the \emph{degree} of a complex (holomorphic) graded bundle we shall
understand the degree of its typical fiber~$\mathbb{C}^{\mathbf{d}}$.
A \emph{morphism of complex graded bundles} is defined as a
fiber-bundle morphism which is a complex graded-space morphism on each fiber.
A \emph{morphism of holomorphic graded bundles} is a holomorphic map
between the total spaces of the holomorphic graded bundles in question
which is simultaneously a morphism of the underlying complex graded bundles.
Clearly, complex (holomorphic) graded bundles together with their
morphisms form a \emph{category}.
\end{definition}
\begin{remmark}\label{rem:complexVSholomorphic}
Note that a complex graded bundle $\tau: E\rightarrow M$ in which $E$
and $M$ are complex manifolds and the projection $\tau$ is a
holomorphic map need not be a holomorphic graded bundle. One
additionally needs to assume that the action $h$ is holomorphic, i.e., that
the complex structure on each fiber is the restriction of the
holomorphic structure of $E$. To see this consider a complex rank $1$
vector bundle $C \subset E:=\mathbb{C}^{*}\times\mathbb{C}^{2}$
($\mathbb{C}^{*} = \mathbb{C}\setminus\{0\}$) given
in natural holomorphic coordinates $(x; y^{1}, y^{2})$ on $E$ by the equation
\[
C = \{(x; y^{1}, y^{2}): x y^{1} + \bar{x} y^{2} = 0, x\in\mathbb
{C}^{*} \}.
\]
We shall construct a degree $2$ complex (but not holomorphic) graded
bundle structure on $E$. Set $Y^{1}:= x y^{1} + \bar{x} y^{2}$ and
$Y^{2} := -x y^{1} +\bar{x} y^{2}$. We may take $(x; Y^{1}, Y^{2})$ as
a global coordinate system on $E$ and assign weights $1$, $2$ to
$Y^{1}$, $Y^{2}$, respectively, to define a complex graded bundle
structure on the fibration $\tau: E\rightarrow\mathbb{C}^{*}$. Then
$C$ coincides with the core of $E$, which in every holomorphic graded
bundle should be a complex submanifold of the total space. However, $C$
is not a complex submanifold of $E$, and thus it is impossible to find
a homogeneous holomorphic atlas on $E$. The associated action by
homotheties $h: \mathbb{C}\times E \rightarrow E$ reads as
\[
h(\xi, (x; y^{1}, y^{2})) = (x; \frac{1}{2} (\xi+\xi^{2}) y^{1} +
\frac{\bar{x}}{2x}(\xi-\xi^{2}) y^{2}, \frac{x}{2\bar{x}} (\xi
-\xi^{2})y^{1} + \frac{1}{2} (\xi+\xi^{2}) y^{2}).
\]
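Indeed, inverting the coordinate change gives
\[
y^{1} = \tfrac{1}{2x}\bigl(Y^{1}-Y^{2}\bigr), \qquad
y^{2} = \tfrac{1}{2\bar{x}}\bigl(Y^{1}+Y^{2}\bigr),
\]
and substituting $Y^{1}\mapsto\xi Y^{1}$, $Y^{2}\mapsto\xi^{2} Y^{2}$ into
these expressions recovers the displayed formula.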
This action is smooth but not holomorphic, hence it induces a complex
but not holomorphic graded bundle structure according to the forthcoming
\reftext{Theorems~\ref{thm:equiv_holom} and \ref{thm:equiv_complex}}.
\end{remmark}
In what follows, to avoid possible confusion, we will use the notation
$\tau:E\rightarrow M$ for complex and smooth graded bundles and $\tau
:F\rightarrow N$ for holomorphic graded bundles.
\begin{example}\label{ex:hol_jet_bundle}
The holomorphic jet bundle $\mathrm{J}^{k}N$ is a canonical example of
a holomorphic (and thus also complex) graded bundle. This fact is
justified analogously to the real case (see Ex. \ref{ex:TkM}).
\end{example}
Finally, we can rewrite \reftext{Definition~\ref{def:hgm_str}} in the complex context.
\begin{definition}\label{def:hgm_str_complex}
A \emph{complex} (respectively, \emph{holomorphic}) \emph
{homogeneity structure} on a smooth (resp., complex) manifold $M$ is a
smooth (resp., holomorphic) action of the multiplicative monoid
$(\mathbb{C},\cdot)$
\[
h:\mathbb{C}\times M \longrightarrow M\ .
\]
A \emph{morphism} of two complex (resp., holomorphic) homogeneity
structures $(M_{1},h^{1})$ and $(M_{2},h^{2})$ is a smooth (resp.,
holomorphic) map $\Phi:M_{1}\rightarrow M_{2}$ intertwining the
actions $h^{1}$ and $h^{2}$, i.e.,
\[
\Phi(h^{1}_{\xi}(p))=h^{2}_{\xi}(\Phi(p)),
\]
for every $p\in M_{1}$ and every $\xi\in\mathbb{C}$. Clearly,
complex (resp., holomorphic) homogeneity structures with their
morphisms form a \emph{category}.
\end{definition}
It is clear that with every complex (holomorphic) graded bundle $\tau
:E\rightarrow M$ one can associate a natural complex (holomorphic)
homogeneity structure $h^{E}:\mathbb{C}\times E\rightarrow E$ defined
fiber-wise by the canonical $(\mathbb{C},\cdot)$ actions $h^{\mathrm
{V}}$ where $\mathrm{V}=\tau^{-1}(p)$ for every $p\in M$. We shall
call it the action by \emph{homotheties} of $E$. In the remaining part
of this section we shall study the relations between the notions of a
homogeneity structure and a graded bundle in the complex and
holomorphic settings. Our goal is to obtain analogs of \reftext{Theorem~\ref{thm:eqiv_real}} in these two situations.
\paragraph*{A holomorphic action of the monoid of complex numbers}
In the holomorphic setting the results of \reftext{Theorem~\ref{thm:eqiv_real}}
have their direct analog.
\begin{theorem}\label{thm:equiv_holom}
The categories of (connected) holomorphic graded bundles and
(connected) holomorphic homogeneity structures are equivalent. At the
level of objects this equivalence is provided by the following two constructions:
\begin{itemize}
\item With every holomorphic graded bundle $\tau:F\rightarrow N$ one
can associate the holomorphic homogeneity structure $(F,h^{F})$, where
$h^{F}$ is the action by the homotheties of $F$.
\item Given a holomorphic homogeneity structure $(N,h)$, the map
$h_{0}:N\rightarrow N_{0}:=h_{0}(N)$ provides $N$ with a canonical
structure of a holomorphic graded bundle such that $h$ is the related action by homotheties.
\end{itemize}
At the level of morphisms: every morphism of holomorphic graded bundles
is a morphism of the related holomorphic homogeneity structures and,
conversely, every morphism of holomorphic homogeneity structures
respects the related canonical holomorphic graded bundle structures.
\end{theorem}
To prove the above theorem we will need two technical results.
\begin{lemma}\label{lem:M0_holom}
Let $N$ be a connected complex manifold and let $\Phi: N\rightarrow N$
be a holomorphic map satisfying $\Phi\circ\Phi=\Phi$. Then the
image $N_{0}:=\Phi(N)$ is a complex submanifold of $N$.
\end{lemma}
\begin{proof}
An analogous result for smooth manifolds is given in Theorem 1.13 in
\cite{Kolar_Michor_Slovak_nat_oper_diff_geom_1993}. Its proof can be
almost directly rewritten in the complex setting. Namely, from the
proof given in \cite{Kolar_Michor_Slovak_nat_oper_diff_geom_1993} we
know that there is an open neighborhood $U$ of $N_{0}$ in $N$ such that
the tangent map $\mathrm{T}_{p} \Phi: \mathrm{T}_{p} N_{\mathbb
{R}}\rightarrow\mathrm{T}_{\Phi(p)} N_{\mathbb{R}}$ has a constant
rank as $p$ varies in $U$. Therefore, due to the identification
$\mathrm{T}_{p}N_{\mathbb{R}}\approx\mathrm{T}'_{p} N$, the map $\mathrm
{T}'_{p} \Phi: \mathrm{T}'_{p} N \rightarrow\mathrm{T}'_{\Phi(p)}
N$ also has constant rank.
Now take any $q\in N_{0}$, so $\Phi(q)=q$. From the constant rank
theorem for complex manifolds (see e.g., \cite{Gauthier_SCV} or \cite
{Kaup_book}) we can find two charts $(O, v)$ and $(\underline{O},
\underline{v})$ on $N$, both centered at $q$, such that $\underline
{v}\circ\Phi\circ v^{-1}$ is a projection of the form $(z^{1}, \ldots
, z^{n})\mapsto(z^{\underline{1}}, \ldots,z^{\underline{n}}, 0,
\ldots, 0)$, where $\underline{n}\leq n$ is the rank of $\mathrm
{T}'_{q} \Phi$. We conclude that $O\cap N_{0}$ is a complex
submanifold of $N$ of dimension $\underline{n}$. Since $q\in N_{0}$
was arbitrary and $N_{0}$ is connected, the assertion follows.
\end{proof}
\begin{lemma} \label{lem:complex_gr_sspace}
Let $\mathrm{V}$ be a complex graded space and $\mathrm{V}'\subset
\mathrm{V}$ a complex submanifold invariant with respect to the action
of the homotheties of $\mathrm{V}$, i.e., $h^{\mathrm{V}}_{\xi
}(\mathrm{V}')\subset\mathrm{V}'$ for any $\xi\in\mathbb{C}$.
Then $\mathrm{V}'$ is a complex graded subspace of $\mathrm{V}$.
\end{lemma}
\begin{proof}
Observe first that $0=h^{\mathrm{V}}_{0}(\mathrm{V}')=h^{\mathrm
{V}}_{0}(\mathrm{V})$ lies in $\mathrm{V}'$. Denote by $(z^{\alpha
}_{w})_{\alpha\in I}$ a system of complex graded coordinates on
$\mathrm{V}$. Since $\mathrm{V}'$ is a complex submanifold, we may
choose a subset $I'\subset I$ of cardinality $\dim_{\mathbb
{C}}\mathrm{V}'$ such that the differentials
\begin{equation}\label{eqn:differentials_W}
\mathrm{d}_{q} z^{\alpha}_{w}|_{\mathrm{T}'_{q}\mathrm
{V}'},\quad\alpha\in I',
\end{equation}
are linearly independent (over $\mathbb{C}$) at $q=0$. In consequence,
the restrictions $(z^{\alpha}_{w}|_{\mathrm{V}'})_{\alpha\in I'}$
form a coordinate system for $\mathrm{V}'$ around $0$. The idea is to
show that these functions form a global graded coordinate system on
$\mathrm{V}'$.
Note that $\mathrm{V}$ has the important property that it can be
recovered from an arbitrary open neighborhood of $0$ by the action of
$h^{\mathrm{V}}$. Since $\mathrm{V}'\subset\mathrm{V}$ is
$h^{\mathrm{V}}$-invariant it also has this property.
As has already been shown, the differentials \reftext{\eqref{eqn:differentials_W}}
are linearly independent for $q\in U\cap\mathrm{V}'$ where $U$ is a
small neighborhood of $0$ in $\mathrm{V}$. Using the equality
\[
\left\langle\mathrm{d}_{h(\xi,q)} f^i_w, (\mathrm{T}h_\xi)
v_q \right\rangle= \xi^{w} \,
\left\langle\mathrm{d}_q f^i_w, v_q\right\rangle
\]
where $f^{i}_{w}: \mathrm{V}\rightarrow\mathbb{C}$ is a function of
weight $w$ and $v_{q}\in\mathrm{T}_{q}\mathrm{V}$, and the property
that $\mathrm{V}'$ is generated by $h^{\mathrm{V}}$ from any
neighborhood of $0$, we conclude that the differentials \reftext{\eqref{eqn:differentials_W}} are linearly independent for any $q\in\mathrm
{V}'$. Thus $(z^{\alpha}_{w}|_{\mathrm{V}'})_{\alpha\in I'}$ is a global
system of graded coordinates for $\mathrm{V}'$. This ends the proof.
\end{proof}
\begin{corollary} \label{cor:complex_gr_subbundle}
Let $\tau': E'\rightarrow M'$ be a complex graded subbundle of a
holomorphic graded bundle $\tau: F \rightarrow M$ such that $E'$ is a
complex submanifold of $F$. Then $E'$ is a holomorphic graded subbundle.
\end{corollary}
\begin{proof} First of all the base $M':=M\cap E'$ of $E'$ is a complex
submanifold of $M$. To prove this we apply \reftext{Lemma~\ref{lem:M0_holom}}
with $\Phi= \tau|_{E'}: E'\rightarrow E'$.
Being a holomorphic subbundle is a local property: if every point $q\in
M'$ has an open neighborhood $U'\subset M'$ such that $E'|_{U'}$ is a
holomorphic subbundle, then $E'$ is itself a holomorphic subbundle of
$F$. Indeed, we know that the transition maps of $E'$ are holomorphic since
$E'$ is a complex submanifold. Moreover, by \reftext{Lemma~\ref{lem:hol_hmg_fun}}, these maps are also polynomial on fibers.
Thus take $q\in M'$ and denote graded fiber coordinates of $\tau:
F\rightarrow M$ by $(z^{\alpha}_{w})_{\alpha\in I}$. They are
holomorphic functions defined on $F|_{U}$ for some open subset
$U\subset M$, $q\in U$. Let $(x^{a})$ be coordinates on $U'\subset M'$
around~$q$. Take a subset $I'\subset I$ such that functions $(x^{a},
z^{\alpha}_{w})_{\alpha\in I'}$ form a coordinate system for $E$
around~$q$.
Then the differentials ($\alpha\in I'$)
\[
\mathrm{d}_{\tilde{q}} z^{\alpha}_{w}|_{\mathrm{T}'_{q}
E'},\quad\mathrm{d}_{\tilde{q}} x^{a}|_{\mathrm{T}'_{q} E'}
\]
are still linearly independent for $\tilde{q}$ in some open
neighborhood $\tilde{U}$ of $q$ in $F$, possibly smaller than $U$. It
follows from the proof of \reftext{Lemma~\ref{lem:complex_gr_sspace}} that
$(z^{\alpha}_{w})_{\alpha\in I'}$ form a global coordinate system on
each of the fibers of $E'$ over $q'\in U'':= \tilde{U}\cap U'$. Thus
$(x^{a}, z^{\alpha}_{w})_{\alpha\in I'}$ is a graded coordinate
system for the subbundle $E'|_{U''}$ consisting of holomorphic
functions turning it into a holomorphic graded bundle. This finishes
the proof.
\end{proof}
Now we are ready to prove \reftext{Theorem~\ref{thm:equiv_holom}}.
\begin{proof}[Proof of \reftext{Theorem~\ref{thm:equiv_holom}}:]
The crucial step is to show that a holomorphic action $h:\mathbb
{C}\times N\rightarrow N$ of the multiplicative monoid $(\mathbb
{C},\cdot)$ on a connected complex manifold $N$ determines the
structure of a holomorphic graded bundle on $h_{0}: N\rightarrow
N_{0}:=h_{0}(N)$.
Clearly, since $h$ is holomorphic, so is $h_{0}$. What is more, this
map satisfies $h_{0}\circ h_{0} = h_{0}$ and thus, by \reftext{Lemma~\ref{lem:M0_holom}}, $N_{0}=h_{0}(N)$ is a complex submanifold of $N$. Now
notice that the restriction of $h$ to $\mathbb{R}\times N$ gives an action
\[
h^{\mathbb{R}}: \mathbb{R}\times N\rightarrow N
\]
of the monoid $(\mathbb{R}, \cdot)$. In view of \reftext{Theorem~\ref{thm:eqiv_real}}, $h_{0}:N\rightarrow N_{0}$ is a (real) graded bundle
(cf. the proof of \reftext{Lemma~\ref{lem:hol_hmg_fun}}), say, of degree $k$.
We shall now follow the ideas from the proof of \reftext{Theorem~\ref{thm:eqiv_real}} provided in \cite{JG_MR_gr_bund_hgm_str_2011}. The
crucial step is to embed $N$ into the holomorphic jet bundle $\mathrm
{J}^{k} N$ as a holomorphic graded subbundle. Recall (cf. Ex. \ref{ex:hol_jet_bundle}) that $\mathrm{J}^{k}N$ has a canonical
holomorphic graded bundle structure.
Consider a map
\[
\phi^{\mathbb{C}}:N\rightarrow\mathrm{J}^{k} N, \quad q\mapsto
\mathbf{j}^{k}_{\xi=0} h(\xi, q),\quad\xi\in\mathbb{C},
\]
sending each point $q\in N$ to the $k$th-holomorphic jet
at $\xi=0$ of the holomorphic curve $\mathbb{C}\ni\xi\mapsto h(\xi
,q)\in N$. Clearly, $\phi^{\mathbb{C}}$ is a holomorphic map as a
lift of a holomorphic curve to the holomorphic jet bundle $\mathrm
{J}^{k} N$. The composition of $\phi^{\mathbb{C}}$ with the canonical
isomorphism $\mathrm{J}^{k} N \simeq\mathrm{T}^{k} N_{\mathbb{R}}$
gives the map
\[
\phi^{\mathbb{R}}:N_{\mathbb{R}}\rightarrow\mathrm{T}^{k}
N_{\mathbb{R}}, \quad q\mapsto\mathbf{t}^{k}_{t=0} h(t, q), \quad
t\in\mathbb{R}.
\]
As indicated in the proof of Theorem 4.1 of \cite
{JG_MR_gr_bund_hgm_str_2011}, $\phi^{\mathbb{R}}$ is a topological
embedding naturally related with the real homogeneity structure
$h^{\mathbb{R}}$. Thus also $\phi^{\mathbb{C}}$ is a topological
embedding. Therefore, since $\phi^{\mathbb{C}}$ is also holomorphic,
the image $\widetilde{N}:=\phi^{\mathbb{C}}(N)\subset\mathrm
{J}^{k} N$ is a complex submanifold, biholomorphic with $N$. Let us
denote by $\widetilde{h}: \mathbb{C}\times\widetilde{N}\rightarrow
\widetilde{N}$ the corresponding action on $\widetilde{N}$ induced
from $h$ by means of $\phi^{\mathbb{C}}$. Since $\phi^{\mathbb{C}}$
intertwines the action $h$ and the canonical action by homotheties
\[
h^{\mathrm{J}^{k} N}: \mathbb{C}\times\mathrm{J}^{k} N \rightarrow
\mathrm{J}^{k} N
\]
on the bundle of holomorphic $k$th-jets, the action
$\widetilde{h}$ coincides with the restriction of
$h^{\mathrm{J}^{k} N}$ to $\mathbb{C}\times\widetilde{N}$. Hence,
$\widetilde{N}$ is a complex submanifold of $\mathrm{J}^{k} N$
invariant with respect to the action $h^{\mathrm{J}^{k} N}$.
Using \reftext{Lemma~\ref{lem:complex_gr_sspace}} on each fiber of $\mathrm
{J}^{k} N\big|_{N_{0}}\rightarrow N_{0}$ we conclude that $\widetilde
{N}$ is a complex graded subbundle of $\mathrm{J}^{k} N$.
Since $\widetilde{N}\approx N$ is a complex submanifold of $\mathrm
{J}^{k} N$, it is also a holomorphic graded subbundle of $\mathrm
{J}^{k}N$ by \reftext{Corollary~\ref{cor:complex_gr_subbundle}}.
Thus we have constructed a canonical holomorphic graded bundle
structure on $N$ starting from a holomorphic homogeneity structure
$(N,h)$. Clearly the action by homotheties $h^{N}$ related with this
graded bundle coincides with the initial action $h$.
The above construction, and the construction of a canonical holomorphic
homogeneity structure $(F,h^{F})$ from a holomorphic graded bundle
$\tau: F\rightarrow N$ are mutually inverse, providing the desired
equivalence of categories at the level of objects.
To show the equivalence at the level of morphisms consider two
holomorphic graded bundles $\tau_{j}:F_{j}\rightarrow N_{j}$, with
$j=1,2$, and let $h^{j}$, with $j=1,2$, be the related homogeneity
structures. Let $\Phi: F_{1} \rightarrow F_{2}$ be a holomorphic map
such that $\Phi\circ h^{1} = h^{2}\circ\Phi$. It is enough to show
that $\Phi$ is a morphism of holomorphic graded bundles. Since $\Phi$
is holomorphic by assumption it suffices to show that on each fiber
$\Phi: (F_{1})_{p} \rightarrow(F_{2})_{\Phi(p)}$ is a morphism of
complex graded spaces.
Let now $(z^{\alpha}_{w})$ and $(\underline{z}^{\alpha}_{w})$ be
graded coordinates on $(F_{1})_{p}$ and $(F_{2})_{\Phi(p)}$, respectively.
Note that $\Phi^{*}\underline{z}^{\alpha}_{w} = \underline
{z}^{\alpha}_{w} \circ\Phi$ is a $\mathbb{C}$-homogeneous function
on $(F_{1})_{p}$, hence in light of \reftext{Lemma~\ref{lem:hol_hmg_fun}} it is
a homogeneous polynomial in $z^{\alpha}_{w}$. Thus, indeed, $\Phi$
has the desired form.
\end{proof}
\paragraph*{On smooth actions of the monoid $(\mathbb{C}, \cdot)$ on
smooth manifolds}
Our goal in the last paragraph of this section is to study smooth
actions of the monoid $(\mathbb{C},\cdot)$ on smooth manifolds, i.e.,
complex homogeneity structures (see \reftext{Definition~\ref{def:hgm_str_complex}}). In contrast to the holomorphic case, there is no
equivalence between such structures and complex graded bundles. To
guarantee such an equivalence we will need to make additional
assumptions. Informally speaking, the real and the imaginary parts of
the action of $(\mathbb{C},\cdot)$ should be compatible. The
following examples should help to build the right intuition.
\begin{example}
Consider $M=\mathbb{R}$ and define an action $h:\mathbb{C}\times
M\rightarrow M$ by $h(\xi, y)=(|\xi|^{2}\,y)$, where $y$ is a
standard coordinate on $\mathbb{R}$. Clearly this is a smooth action
of the multiplicative monoid $(\mathbb{C},\cdot)$,
but $M$ admits no compatible structure of a complex graded bundle. This
is clear for dimensional reasons. Indeed, the base $h_{0}(M)$ is just
a single point $0\in\mathbb{R}$ and thus $M$, as a single fiber,
would have to admit a structure of a complex graded space. This is impossible
as $M$ is odd-dimensional.
\end{example}
\begin{example} Consider $M=\mathbb{C}$ with a standard coordinate
$z:M\rightarrow\mathbb{C}$ and define a smooth multiplicative action
$h:\mathbb{C}\times M\rightarrow M$ by the formula $h(\xi, z)=|\xi
|^{2} \xi\,z$. We claim that $h$ is not a homothety action related
with any complex graded bundle structure on $M$.
Assume the contrary. The base of $M$ should be $h_{0}(M)=\{0\}$, i.e.,
a single point. Thus $M$ is a complex graded space (a complex graded
bundle over a single point), say, of rank $\mathbf{d}$. Clearly the
restriction $h|_{\mathbb{R}}:\mathbb{R}\times M\rightarrow M$ should
provide $M$ with a structure of a (real) graded space of rank $2\mathbf
{d}$. Observe that $h|_{\mathbb{R}}$ is in fact a homothety action on
$\mathbb{R}^{(0,0,2)}$ and thus we should have $M=\mathbb
{C}^{(0,0,1)}$. In such a case, there should exist a global complex
coordinate $\tilde{z}: M \xrightarrow{\simeq} \mathbb{C}$ which is
homogeneous of weight $3$, and so $h_{\varepsilon_{3}}$, where
$\varepsilon_{3}:=e^{2\pi\sqrt{-1}/3}$ is a primitive third root of
unity, should be the identity on $M$, as there are no coordinates
on $M$ of other weights. Yet $h(\varepsilon_{3}, z) = \varepsilon
_{3} z \neq z$, a contradiction.
\end{example}
The above examples reveal two important facts concerning a complex
homogeneity structure $h:\mathbb{C}\times M\rightarrow M$. First of
all, the restriction of $h$ to $(\mathbb{R},\cdot)$ makes $M$ a
(real) homogeneity structure. Secondly, the action of the primitive
roots of $1$ on $h|_{\mathbb{R}}$-homogeneous functions allows one to
distinguish complex graded bundles among all complex homogeneity structures.
\begin{remmark}\label{rem:induced_action_on_tower}
Let $h:\mathbb{C}\times E^k\rightarrow E^k$ be a complex homogeneity structure such that the restriction $h|_{\mathbb{R}}$ makes $\tau= h_{0}: E^{k} \rightarrow M_{0}$ a (real) graded
bundle of degree $k$. For any $\xi
\in\mathbb{C}$ the action $h_{\xi}$ commutes with the homotheties
$h(t, \cdot)$, $t\in\mathbb{R}$, hence $h_{\xi}: E^{k}\rightarrow
E^{k}$ is a (real) graded bundle morphism, in view of \reftext{Theorem~\ref{thm:eqiv_real}}. Therefore $h$ induces an action of the monoid
$(\mathbb{C}, \cdot)$ on each (real) graded bundle $\tau^{j}:
E^{j}\rightarrow M_{0}$ in the tower \reftext{\eqref{eqn:tower_grd_spaces}}, and
on each core bundle $\hat{E}^{j}$, for $j=1,2,\ldots, k$.
\end{remmark}
These observations motivate the following
\begin{definition}\label{def:nice_hgm_str}
Let $h:\mathbb{C}\times E^{k}\rightarrow E^{k}$ be a complex
homogeneity structure such that $\tau= h_{0}: E^{k} \rightarrow M$ is the
(real) graded bundle of degree $k$ associated with $h|_{\mathbb{R}}$.
Denote by $\varepsilon_{2j}:=e^{2\pi\sqrt{-1}/(2j)}$ the
$2j$th-order primitive root of 1, and by
$J_{2j}:=h(\varepsilon_{2j},\cdot):E^{k}\rightarrow E^{k}$ the action
of $\varepsilon_{2j}$ on $E^{k}$. We say that the complex homogeneity
structure $h$ is \emph{nice} if $J_{2j}$ acts as minus identity on the
core bundle $\hat{E^{j}}$ for every $j=1, 2, \ldots, k$.
Nice\ complex homogeneity structures form a \emph{full subcategory} of
the category of complex homogeneity structures.
\end{definition}
\begin{example}\label{ex:nice_hgm_str}
It is easy to see, using local coordinates, that if $\tau:E\rightarrow
M$ is a complex graded bundle, and $h^{E}:\mathbb{C}\times
E\rightarrow E$ the related complex homogeneity structure, then $h^{E}$
is nice.
\end{example}
It turns out that the converse is also true, that is, for nice\ complex
homogeneity structures we can prove an analog of \reftext{Theorem~\ref{thm:equiv_holom}}:
\begin{theorem}\label{thm:equiv_complex}
The categories of (connected) complex graded bundles and
(connected) nice complex homogeneity structures are equivalent. At the level
of objects this equivalence is provided by the following two constructions:
\begin{itemize}
\item With every complex graded bundle $\tau:E\rightarrow M$ one can
associate a nice\ complex homogeneity structure $(E,h^{E})$, where
$h^{E}$ is the action by homotheties of $E$.
\item Given a nice\ complex homogeneity structure $(M,h)$, the map
$h_{0}:M\rightarrow M_{0}:=h_{0}(M)$ provides $M$ with a canonical
structure of a complex graded bundle such that $h$ is the related action by homotheties.
\end{itemize}
At the level of morphisms: every morphism of complex graded bundles is
a morphism of the related nice\ complex homogeneity structures and,
conversely, every morphism of nice\ complex homogeneity structures
respects the canonical complex graded bundle structures.
\end{theorem}
Again, in the proof we shall need a few technical results. First observe
that $J_{2} = h_{-1}$ acts as minus identity on every vector bundle, so
for $k=1$
the condition in \reftext{Definition~\ref{def:nice_hgm_str}} is trivially
satisfied (i.e., a~degree-one complex homogeneity structure is always
nice). Thus every $(\mathbb{C},\cdot)$-action whose restriction to
$(\mathbb{R},\cdot)$ is of degree one should give rise to a complex vector bundle.
\begin{lemma}\label{lem:C_action_on_vect_space}
Let $h: \mathbb{C}\times W\rightarrow W$
be a smooth action of the monoid $(\mathbb{C}, \cdot)$ on a real
vector space $W$, such that
$h(t, v) = t\, v$ for every $t\in\mathbb{R}$ and $v\in W$.
Then $h$ induces a complex structure on $W$ by the formula
\begin{equation}
h(a+b\sqrt{-1}, v) = a\, v + b\,h(\sqrt{-1}, v)
\end{equation}
for every $a, b\in\mathbb{R}$.
\end{lemma}
\begin{proof}
Denote $J:=h(\sqrt{-1}, \cdot):W\rightarrow W$. We have $J\circ J =
h_{\sqrt{-1}}\circ h_{\sqrt{-1}} = h_{-1}=-\operatorname{id}_{W}$.
Moreover, $J$~is $\mathbb{R}$-linear, since $J$ commutes with the
homotheties $h_{t}: W\rightarrow W$ for any $t\in\mathbb{R}$ (see
Theorem~2.4 \cite{JG_MR_higher_vec_bndls_and_multi_gr_sym_mnflds}).
Therefore, $J$ defines a complex structure on $W$ and the formula
\[
\xi\, v : =\Re\xi\, v + \Im\xi\,J(v),
\]
where $\xi\in\mathbb{C}$ and $v\in W$, allows us to consider $W$ as
a complex vector space.
Note that $\xi\, v = h(\xi, v)$ for $\xi\in\mathbb{R}$. To prove
that this equality holds for any $\xi\in\mathbb{C}$ we shall study the restriction
\[
h_{|S^{1}\times W}: S^{1}\times W\rightarrow W
\]
which is a group action of the unit circle on a complex vector space
$W$. Indeed, for any $\xi\in\mathbb{C}$, the action
$h(\xi, \cdot): W\rightarrow W$ is a $\mathbb{C}$-linear map, since
it commutes with the complex structure $J$ and the endomorphisms $h(t,
\cdot)$ for $t\in\mathbb{R}$. It follows from the general theory
that $W$ splits into sub-representations $W= \bigoplus_{j=1}^{n}
W_{j}$ such that for any $|\theta|=1$ and any $v\in W_{j}$
\[
h(\theta, v) =\theta^{k_{j}} \, v
\]
where $k_{1}, \ldots, k_{n}$ are some integers. The restriction of $h$
to each summand $W_{j}$ defines an action of the monoid $(\mathbb
{C},\cdot)$ hence, without loss of generality, we may assume that
$W=W_{1}$ and that
\begin{equation}\label{eqn:h_theta_k}
h(t\,\theta, v) = t\,\theta^{k}\, v
\end{equation}
for every $\theta\in S^{1}$, $t\in\mathbb{R}$ and $v\in W$. Note
that $k$ should be an odd integer as $J^{2}=-\operatorname{id}_{W}$.
Equivalently, taking $\xi=t\, \theta$, we can denote $h(\xi, v) =
\xi^{k}\bar{\xi}^{-k+1}\, v$ for every $\xi\in\mathbb{C}\setminus
\{0\}$. However, the function $\xi\mapsto\xi^{k}\bar{\xi}^{-k+1}$,
$0\mapsto0$, is not differentiable at $\xi=0$ unless $k=1$.
Therefore, $h(\xi, v)=\xi\, v$, as was claimed.
\end{proof}
Now we shall show that an analogous result holds for nice\ homogeneity
structures of arbitrary degree.
\begin{lemma}\label{lem:action_Ek}
Let $h:\mathbb{C}\times M\rightarrow M$ be a nice\ complex homogeneity
structure such that the restriction $h|_{\mathbb{R}}$ makes $M$ a
(real) graded space of degree $k$. Then $M$ is a complex graded space
of degree $k$ with homotheties given by $h$.
\end{lemma}
\begin{proof}
Denote by $\mathrm{W}^{k}$ the (real) graded space structure on $M$.
By $\mathrm{W}^{j}$ with $j\leq k$ denote the lower levels of the
tower \reftext{\eqref{eqn:tower_grd_spaces}} associated with $\mathrm{W}^{k}$.
We shall proceed by induction on $k$. Case $k=1$ follows immediately
from \reftext{Lemma~\ref{lem:C_action_on_vect_space}}.
Let now $k$ be arbitrary. The basic idea of the proof is to define a
complex graded space structure of degree $k-1$ on $\mathrm{W}^{k-1}$
using the inductive assumption and to construct a complex graded space
structure of rank $(0,\ldots,0,\dim_{\mathbb{C}}\widehat{\mathrm
{W}}^{k})$ on the core $\widehat{\mathrm{W}}^{k}$. Then using both
structures we build complex graded coordinates on $\mathrm{W}^{k}$.
Recall (see \reftext{Remark~\ref{rem:induced_action_on_tower}}) that, for any $\xi
\in\mathbb{C}$, the map $h_{\xi}: \mathrm{W}^{k}\rightarrow\mathrm
{W}^{k}$ is a (real) graded space morphism, and that $h$ induces an action of
the monoid $(\mathbb{C}, \cdot)$ on each graded space $\mathrm
{W}^{j}$ in the tower \reftext{\eqref{eqn:tower_grd_spaces}}. Note that the
induced action on $\mathrm{W}^{k-1}$ satisfies all assumptions of our
lemma hence, by the inductive assumption, we may consider $\mathrm
{W}^{k-1}$ as a complex graded space.
Denote by $(z^{\alpha}_{w})$ complex graded coordinates on $\mathrm
{W}^{k-1}$ and pull them back to $\mathrm{W}^{k}$ by means of the
projection $\tau^{k}_{k-1}: \mathrm{W}^{k}\rightarrow\mathrm
{W}^{k-1}$. Denote the resulting functions again by the same symbols.
This should not lead to any confusion since the pulled-back function
$z^{\alpha}_{w}: \mathrm{W}^{k}\rightarrow\mathbb{C}$ is still of
weight $w$, i.e.
\begin{equation}\label{eqn:theta_on_Ek-1}
h_{\xi}^{\ast}(z^{\alpha}_{w}) = \xi^{w}\, z^{\alpha}_{w}.
\end{equation}
On the other hand, as any morphism of graded spaces, $h_{\xi}$ can be
restricted to the core $\widehat{\mathrm{W}}^{k}$.
This defines an action of $(\mathbb{C}, \cdot)$ on $\widehat{\mathrm
{W}}^{k}$,
\[
\widehat{h}:= h_{|\mathbb{C}\times\widehat{\mathrm{W}}^{k}}:
\mathbb{C}\times\widehat{\mathrm{W}}^{k}\rightarrow\widehat
{\mathrm{W}}^{k}.
\]
Recall that $\widehat{\mathrm{W}}^{k}$ is a real vector space with
homotheties defined by
\begin{equation}\label{eqn:homotheties_in_core_Ek}
(t, v)\mapsto t\ast v:= h|_{\mathbb{R}}(\sqrt[k]{t}, v)\ ,
\end{equation}
for every $t\geq0$ and every $v\in\widehat{\mathrm{W}}^{k}$.
In a (real) graded coordinate system $(\Re z^{\alpha}_{w}, \Im
z^{\alpha}_{w}, p^{\mu})$ on $\mathrm{W}^{k}$, where $p^{\mu}:
\mathrm{W}^{k}\rightarrow\mathbb{R}$ are arbitrary coordinates of
weight $k$, the map \reftext{\eqref{eqn:homotheties_in_core_Ek}} reads $(t,
(\widehat{p}^{\mu}))\mapsto(t\,\widehat{p}^{\mu})$, where
$\widehat{p}^{\mu}=p^{\mu}|_{\widehat{\mathrm{W}}^{k}}$. Hence it
is a smooth map (with respect to the inherited submanifold structure)
which can be extended also to the negative values of $t$.
Let us denote by $\widehat{J}_{4k}$ the restriction of
$J_{4k}:=h(\varepsilon_{4k},\cdot)$ to $\widehat{\mathrm{W}}^{k}$.
By assumption ($h$ is nice), $-\operatorname{id}_{\widehat{\mathrm
{W}}^{k}} = \widehat{J}_{2k} = \widehat{J}_{4k}\circ\widehat
{J}_{4k}$. Therefore (cf. \reftext{Lemma~\ref{lem:C_action_on_vect_space}}),
$\widehat{J}_{4k}$ defines a complex structure on the real vector
space $\widehat{\mathrm{W}}^{k}$ by the formula
\[
(a+b\sqrt{-1})\ast v:= a\ast v + b\ast\widehat{J}_{4k}(v)\ ,
\]
for every $a,b\in\mathbb{R}$ and $v\in\widehat{\mathrm{W}}^{k}$.
Note that the homotheties $h_{\xi}$ commute also with $\widehat
{J}_{4k}$, therefore $h_{\xi}|_{\widehat{\mathrm{W}}^{k}}$ is a
$\mathbb{C}$-linear endomorphism of $\widehat{\mathrm{W}}^{k}$. By
restricting to $|\xi|=1$ we obtain a
representation of the unit circle group $S^{1}$ in $\operatorname
{GL}_{\mathbb{C}}(\widehat{\mathrm{W}}^{k})$. As in the proof of
\reftext{Lemma~\ref{lem:C_action_on_vect_space}}, without loss of generality we
may assume that there exists an integer $m$ such that
\[
\widehat{h}(\theta, v)= \theta^{m}\ast v
\]
for every $|\theta|=1$. Taking $\theta=\varepsilon_{2k}$ we see that
$\varepsilon_{2k}^{m}=-1$, hence $m\equiv k \mod2k$. It follows that
$\widehat{h}(t\, \theta, v) = t^{k}\,\theta^{m} \ast v$, for every
$t\in\mathbb{R}$ and $\theta\in S^{1}$.
However, the function $\xi:=t\,\theta\mapsto t^{k}\theta^{m} = \xi
^{k} \, (\xi/\bar{\xi})^{m-k}$ is smooth only if $m=k$, since $m-k$
is a multiple of $2k$. Therefore,
\begin{equation}\label{eqn:theta_on_core}
h(\xi, v) = \xi^{k}\ast v,
\end{equation}
for every $\xi\in\mathbb{C}$ and every $v\in\widehat{\mathrm
{W}}^{k}$. In other words, $\widehat{W}^{k}$ is a complex graded space
of rank $\mathbf{d}=(0,0,\ldots,0,\dim_{\mathbb{C}}\widehat
{\mathrm{W}}^{k})$.
Let $(\widehat{z}^{\mu}: \widehat{\mathrm{W}}^{k}\rightarrow
\mathbb{C})$ be a system of complex graded coordinates on $\widehat
{\mathrm{W}}^{k}$. We shall show that it is possible to extend each
$\widehat{z}^{\mu}$ to a complex function $z^{\mu}_{k}: W^{k}
\rightarrow\mathbb{C}$ in such a way that
\begin{equation}\label{eqn:theta_on_Ek}
z^{\mu}_{k}(h(\xi, v)) = \xi^{k}\, z^{\mu}_{k}(v)
\end{equation}
hold for any $\xi\in\mathbb{C}$ and $v\in\mathrm{W}^{k}$, i.e.,
$z^{\mu}_{k}$ are complex homogeneous functions of weight $k$. First note
that we can find extensions of $\widehat{z}^{\mu}$ which satisfy \reftext{\eqref{eqn:theta_on_Ek}} for $\xi\in\mathbb{R}$. Indeed, the
restriction to $\widehat{W}^{k}$ of a real homogeneous weight-$k$
function $W^{k}\rightarrow\mathbb{R}$ can be an arbitrary linear
function on $\widehat{W}^{k}$. Thus we extend the real and imaginary
parts of $\widehat{z}^{\mu}$ separately and obtain $\mathbb
{R}$-homogeneous extensions, say $\tilde{z}^{\mu}: W^{k} \rightarrow
\mathbb{C}$. Now consider the function
\begin{equation}\label{eqn:extensions_of_z_mu}
z^{\mu}_{k}(v) := \frac{1}{2\pi} \int_{|\xi|=1} \xi^{-k}\, \tilde
{z}^{\mu}(h_{\xi}(v))\, \mathrm{d}\lambda(\xi),
\end{equation}
where $v\in W^{k}$ and $\lambda$ is the rotation-invariant measure on the
circle $S^{1}=\{|\xi|=1\}$ normalized so that $\lambda(S^{1})=2\pi$.
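In other words, $z^{\mu}_{k}$ is obtained from $\tilde{z}^{\mu}$ by
averaging over the circle against the character $\xi\mapsto\xi^{k}$, i.e.,
by projecting onto the $\xi^{k}$-isotypic part of the circle action; this
rests on the elementary identity
\[
\frac{1}{2\pi}\int_{|\xi|=1} \xi^{-k}\,\xi^{m}\,\mathrm{d}\lambda(\xi)
= \delta_{k,m},\qquad k, m\in\mathbb{Z}.
\]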
Clearly, $z^{\mu}_{k}: W^{k} \rightarrow\mathbb{C}$ is smooth and
(as $h_{\xi}(h_{\theta}(v)) = h_{\xi\theta}(v)$ for any $\theta,\xi\in
\mathbb{C}$) we have
\[
z^{\mu}_{k}(h_{\theta}(v)) = \theta^{k} \frac{1}{2\pi} \int_{|\xi
|=1} \theta^{-k} \xi^{-k} \tilde{z}^{\mu}(h_{\xi\theta}(v))
\mathrm{d}\lambda(\xi) = \theta^{k} z^{\mu}_{k}(v),
\]
for any $|\theta|=1$. Since $\tilde{z}^{\mu}$ is $\mathbb
{R}$-homogeneous, so is $z^{\mu}_{k}$,
hence $z^{\mu}_{k}$ is actually a complex homogeneous function, as
$S^{1}$ and $\mathbb{R}$ generate $\mathbb{C}$ as a monoid. Moreover,
for $v\in\widehat{W}^{k}$ equality $\tilde{z}^{\mu}(h_{\xi}(v)) = \xi
^{k} \tilde{z}^{\mu}(v)$ holds, hence $z^{\mu}_{k}|_{\widehat{W}^{k}} =
\tilde{z}^{\mu}|_{\widehat{W}^{k}}$, and so $z^{\mu}_{k}$ is indeed
an extension of $\hat{z}^{\mu}$.
Lastly, the system of homogeneous functions $(z^{\alpha}_{w}, z^{\mu
}_{k})$ defines a global diffeomorphism $W^{k}\xrightarrow{\simeq}
\mathbb{C}^{|\mathbf{d}|}$ where $\mathbf{d}=(d_{1}, \ldots,
d_{k})$, $d_{j}=\operatorname{dim}_{\mathbb{C}} \widehat{W}^{j}$.
Indeed, it is enough to point out that for any $1\leq j\leq k$, the
restrictions of some of these functions (those of weight $j$) to the
core $\widehat{W}^{j}$ define a diffeomorphism $\widehat
{W}^{j}\xrightarrow{\simeq} \mathbb{C}^{d_{j}}$.
\end{proof}
\reftext{Theorem~\ref{thm:equiv_complex}} is a simple consequence of the above result.
\begin{proof}[Proof of \reftext{Theorem~\ref{thm:equiv_complex}}]
We already observed (see Ex. \ref{ex:nice_hgm_str}) that a complex
graded bundle structure on $\tau:E\rightarrow M$ induces a nice\
complex homogeneity structure $(E,h^{E})$ by the associated action of the
homotheties of $E$.
The converse is also true. Indeed, let $h:\mathbb{C}\times
M\rightarrow M$ be a nice\ complex homogeneity structure. Clearly
$h|_{\mathbb{R}}$ is a (real) homogeneity structure on $M$, and thus,
by \reftext{Theorem~\ref{thm:eqiv_real}}, $h_{0}:M\rightarrow M_{0}:=h_{0}(M)$
is a (real) graded bundle of degree, say, $k$. Now on each fiber of
this bundle the action $h$ defines a nice\ homogeneity structure. By
applying \reftext{Lemma~\ref{lem:action_Ek}} we get a complex graded space
structure on each fiber of $h_{0}$. Thus $M$ is indeed a complex graded
bundle.
The equivalence at the level of morphisms is shown analogously to the
holomorphic case: it amounts to showing that a smooth map between two
complex graded spaces which intertwines the homothety actions is a
complex graded space morphism. This follows directly from \reftext{Lemma~\ref{lem:hol_hmg_fun}}.
\end{proof}
\section{Actions of the monoid $\mathcal{G}_{2}$}
\label{sec:g2}
In this section we shall study smooth actions of the monoid $\mathcal
{G}_{2}$ on smooth manifolds.
\paragraph*{The left and right actions of the monoid $\mathcal{G}_{2}$}
Recall (see the end of Section~\ref{sec:perm}) that $\mathcal{G}_{2}$
was introduced as the space of 2nd-jets of punctured
maps $\phi:(\mathbb{R},0)\rightarrow(\mathbb{R},0)$ with the
multiplication induced by the composition. Under the identification of
$\mathcal{G}_{2}$ with $\mathbb{R}^{2}=\{(a,b)\ |\ a,b\in\mathbb
{R}\}$ it reads as (see \reftext{Remark~\ref{rem:g_k_k=1_2}}):
\begin{equation}
\label{eqn:G_2_multiplication}
(a,b)(A,B)=(aA,a B+bA^{2})\ .
\end{equation}
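This formula is easily recovered from the composition of representative
curves: if, say, $(a,b)$ and $(A,B)$ represent the $2$nd-jets of
$\phi(t)=at+bt^{2}$ and $\psi(t)=At+Bt^{2}$, then
\[
\phi(\psi(t)) = a\bigl(At+Bt^{2}\bigr) + b\bigl(At+Bt^{2}\bigr)^{2}
= aA\,t + \bigl(aB + bA^{2}\bigr)t^{2} + o(t^{2}),
\]
in agreement with \reftext{\eqref{eqn:G_2_multiplication}}; the same formula
results from the convention in which $(a,b)$ represents the jet of
$t\mapsto at+\tfrac{1}{2}bt^{2}$.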
Since this multiplication is clearly non-commutative, unlike in the
case of real or complex numbers, we have to distinguish between left
and right actions of $\mathcal{G}_{2}$.
The crucial observation about $\mathcal{G}_{2}$ is that it contains
two submonoids:
\begin{itemize}
\item the multiplicative reals $(\mathbb{R}, \cdot)\simeq\{(a, 0):
a\in\mathbb{R}\}$, corresponding to the 2nd-jets of
punctured maps $\phi(t)=at$ for $a\in\mathbb{R}$,
\item and the additive group $(\mathbb{R}, +) \simeq\{(1, b): b\in
\mathbb{R}\}$.
\end{itemize}
Now to study right (or left) smooth actions of $\mathcal{G}_{2}$ on a
smooth manifold $M$ we use a technique similar to the one used to study
$(\mathbb{C},\cdot)$-actions in Section~\ref{sec:complex}. We begin
by considering the action of $(\mathbb{R},\cdot)\subset\mathcal
{G}_{2}$ which, by \reftext{Theorem~\ref{thm:eqiv_real}}, makes $M$ a (real)
graded bundle. Actually it will be more convenient to speak of the
related weight vector field (see \reftext{Definition~\ref{def:euler_vf}}) in
this case. On the other hand, the action of the additive reals
$(\mathbb{R},+)$ is a flow, i.e. it is encoded by a single (complete)
vector field on $M$. It is now crucial to understand the relation
(compatibility conditions) between these two structures. This can be
done by looking at the formula
\begin{equation}
\label{eqn:commutation_submonoids}
(a,0)(1,b/a)=(a,b)=(1,b/a^{2})(a,0)\ ,
\end{equation}
which allows one to decompose every element of ${\mathcal{G}}^{\text
{inv}}_{2}=\mathcal{G}_{2} \setminus\{(0, b): b\in\mathbb{R}\}$, the group
of invertible elements of $\mathcal{G}_{2}$, as a product of elements of the
submonoids $(\mathbb{R},\cdot)$ and $(\mathbb{R},+)$. Since equation \reftext{\eqref{eqn:commutation_submonoids}} describes the commutation of the
two submonoids, it helps to express the compatibility conditions of the
two related structures at the infinitesimal level as the following
result shows. Recall the notion of a homogeneous vector field -- cf.
\reftext{Definition~\ref{def:hgm_real}} and \reftext{Remark~\ref{rem:euler_weight}}.
\begin{lemma}\label{lem:G2_actions_infinitesimally}
Every smooth right (respectively, left) action $H: M \times\mathcal
{G}_{2} \rightarrow M$ (resp., $H: \mathcal{G}_{2}\times M \rightarrow
M$) on a smooth manifold $M$ provides $M$ with:
\begin{itemize}
\item a canonical graded bundle structure $\pi: M\rightarrow
M_{0}:=H_{(0,0)}(M)$ induced by the action of the submonoid $(\mathbb
{R},\cdot)\subset\mathcal{G}_{2}$,
\item and a complete vector field $X\in\mathfrak{X}(M)$ of weight $-1$
(resp., $Y\in\mathfrak{X}(M)$ of weight $+1$) with respect to the
above graded structure on $M$.
\end{itemize}
In other words, any $\mathcal{G}_{2}$-action provides $M$ with two
complete vector fields: the weight vector field $\Delta$ and another
vector field $X$ (respectively, $Y$), such that their Lie bracket satisfies
\begin{equation}
\label{eqn:commutator_X_Delta}
[\Delta,X]=-X \quad(\text{resp., $[\Delta, Y]=Y$})\ .
\end{equation}
\end{lemma}
\begin{proof}
By \reftext{Theorem~\ref{thm:eqiv_real}}, the homogeneity structure $h: \mathbb
{R}\times M\rightarrow M$ obtained as the restriction of $H$ to the
submonoid $(\mathbb{R},\cdot)\subset\mathcal{G}_{2}$ (i.e.,
$h_{a}=H_{(a,0)}$) defines a graded bundle structure on
$h_{0}:M\rightarrow M_{0}$. Clearly, the flow of the corresponding
weight vector field $\Delta$ is given by $t\mapsto H_{(e^{t},0)}$.
Consider first the case when $H$ is a right $\mathcal{G}_{2}$-action.
Let $X\in\mathfrak{X}(M)$ be the infinitesimal generator of the
action of $s\mapsto H_{(1,s)}$.
In order to calculate the Lie bracket of $\Delta$ and $X$ we will
calculate the corresponding commutator of flows, i.e.
\begin{align*}
&X^{-s}\circ\Delta^{-t} \circ X^{s}\circ\Delta^{t} =
H_{(1,-s)}\circ H_{(e^{-t},0)}\circ H_{(1,s)}\circ H_{(e^{t},0)}\\
&\quad =H_{(e^{t},0)(1,s)(e^{-t},0)(1,-s)}
\overset{\text{\reftext{\eqref{eqn:G_2_multiplication}}}}{=}
H_{(1,-ts+o(ts))}= X^{-ts+o(ts)}\ .
\end{align*}
The latter should correspond to the $ts$-flow of $[\Delta, X]$ and
hence \reftext{\eqref{eqn:commutator_X_Delta}} holds.
In the case when $H$ is a left $\mathcal{G}_{2}$-action denote by $Y$
the infinitesimal generator of the action $s\mapsto H_{(1,s)}$. Now
$H_{g}\circ H_{g'} = H_{gg'}$, so the commutator $Y^{-s}\circ\Delta^{-t}
\circ Y^{s}\circ\Delta^{t}$ equals
$H_{(1,-s)(e^{-t},0)(1,s)(e^{t},0)} = Y^{ts + o(ts)}$, hence $[\Delta,
Y]= Y$.
\end{proof}
\begin{example}\label{ex:T2M}
Consider the right $\mathcal{G}_{2}$-action on $\mathrm{T}^{2} M$
described in \reftext{Example~\ref{ex:TkM_Gk}}. We shall use standard
coordinates $(x^{i}, \dot{x}^{i}, \ddot{x}^{i})$ on $\mathrm{T}^{2}
M$.\footnote{If $[\gamma]_{2} \sim(x^{i}, \dot{x}^{i}, \ddot
{x}^{i})$, then $\gamma(t) = (x^{i} + t \dot{x}^{i} + \frac{t^{2}}{2}
\ddot{x}^{i} + o(t^{2}))$.} We have
\[
(x^{i},\dot{x}^{i},\ddot{x}^{i}).(a,b)=(x^{i},a\dot
{x}^{i},a^{2}\ddot{x}^{i}+b\dot{x}^{i}).
\]
It is clear that in this case the homogeneity structure on $\mathrm
{T}^{2} M$ is just the standard degree 2 homogeneity structure, the
weight vector field equals $\Delta= \dot{x}^{i}\partial_{\dot
{x}^{i}} + 2\, \ddot{x}^{i} \partial_{\ddot{x}^{i}}$, and that the
additional vector field $X$ of weight $-1$ is simply $X=\dot
{x}^{i}\partial_{\ddot{x}^{i}}$.
\end{example}
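As a quick consistency check, one computes
\[
[\Delta, X] = \bigl[\dot{x}^{i}\partial_{\dot{x}^{i}} + 2\,\ddot{x}^{i}
\partial_{\ddot{x}^{i}},\; \dot{x}^{j}\partial_{\ddot{x}^{j}}\bigr]
= \dot{x}^{j}\partial_{\ddot{x}^{j}} - 2\,\dot{x}^{j}\partial_{\ddot{x}^{j}}
= -X,
\]
in agreement with \reftext{\eqref{eqn:commutator_X_Delta}}.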
\begin{example}
Let us now focus on the left $\mathcal{G}_{2}$-action on $\mathrm
{T}^{2\ast} M$ described in \reftext{Example~\ref{ex:Gk_Tk_starM}}. We shall
use standard coordinates $(p_{i}, p_{ij})$ on $\mathrm{T}^{2\ast
}M$.\footnote{If $[(f, x)] \sim(p_{i}, p_{ij})$ then $f(x)=0$ and
$f(x+h) = p_{i} h^{i} + \frac{1}{2} p_{ij}h^{i} h^{j} + o(|h|^{2})$, where $p_{ij}=p_{ji}$.}
We have
\[
(a, b).(p_{i}, p_{ij}) = (a p_{i}, a p_{ij}+b p_{i}\,p_{j}),
\]
so $\Delta= p_{i}\partial_{p_{i}} + p_{ij}\partial_{p_{ij}}$ and $Y
= p_{i}p_{j}\partial_{p_{ij}}$ has indeed weight $1$ with respect to
the standard vector bundle structure on $\mathrm{T}^{2\ast} M$.
\end{example}
Our goal now is to prove the converse of \reftext{Lemma~\ref{lem:G2_actions_infinitesimally}}, i.e. to characterize right (resp.,
left) $\mathcal{G}_{2}$-actions in terms of a homogeneity structure
and a complete vector field $X$ of weight $-1$ (resp., $Y$ of weight $+1$).
Obviously, by our preliminary considerations, knowing the actions of the
two canonical submonoids of $\mathcal{G}_{2}$ allows one to determine the
action of the Lie subgroup ${\mathcal{G}}^{\text{inv}}_{2}$ of
invertible elements of $\mathcal{G}_{2}$. Yet problems may appear with
extending this action to the whole of $\mathcal{G}_{2}$.
\begin{lemma}
\label{lem:G_2_two}
Let $\tau:M\rightarrow M_{0}$ be a graded bundle (with the associated
weight vector field $\Delta$ and the corresponding homogeneity
structure $h:\mathbb{R}\times M\rightarrow M$) and let $X\in\mathfrak
{X}(M)$ (resp., $Y\in\mathfrak{X}(M)$) be a complete vector field of
weight $-1$ (resp., $+1$) i.e. $[\Delta,X]=-X$ (resp., $[\Delta, Y] =
Y$). Then the formulas
\begin{equation}\label{eqn:full_G2_actions}
p.(a,b):= X^{b/a}(h_{a}(p)) = h_{a}( X^{b/a^{2}}(p))\qquad
\left( \text{resp., }(a, b).p := h_{a}(Y^{b/a}(p))=
Y^{b/a^{2}}(h_{a}(p))\right)
\end{equation}
define a smooth right (resp., left) action of the group of invertible
elements ${\mathcal{G}}^{\text{inv}}_{2}\subset\mathcal{G}_{2}$. Here
$t\mapsto X^{t}$ (resp., $t\mapsto Y^{t}$) denotes the flow of the
vector field $X$ (resp., $Y$).
\end{lemma}
\begin{proof}
We shall restrict our attention to the case of the right action. The
reasoning for the case of the left action is analogous.
Note that the Lie algebra generated by the vector fields $\Delta$ and $X$
is a non-abelian two-dimensional Lie algebra, thus it is isomorphic to
the Lie algebra $\mathfrak{aff}(\mathbb{R})$ of the Lie group
$$
\operatorname{Aff}(\mathbb{R})=\left\{
\begin{pmatrix} c & d\\ 0& 1
\end{pmatrix}
: c\neq0,\, d\in\mathbb{R}\right\}\subset\operatorname{Gl}_{2}(\mathbb{R})
$$
of affine transformations of $\mathbb{R}$. The identification
${\mathcal{G}}^{\text{inv}}_{2} \simeq\operatorname{Aff}(\mathbb
{R})$ is given by
\[
(a, b)\mapsto
\begin{pmatrix} 1/a & b/a^{2}\\ 0& 1
\end{pmatrix}
.
\]
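One checks directly that this map is a group homomorphism:
\[
\begin{pmatrix} 1/a & b/a^{2}\\ 0& 1 \end{pmatrix}
\begin{pmatrix} 1/A & B/A^{2}\\ 0& 1 \end{pmatrix}
=
\begin{pmatrix} \frac{1}{aA} & \frac{B}{aA^{2}}+\frac{b}{a^{2}}\\ 0& 1 \end{pmatrix}
=
\begin{pmatrix} \frac{1}{aA} & \frac{aB+bA^{2}}{(aA)^{2}}\\ 0& 1 \end{pmatrix},
\]
which is precisely the matrix assigned to the product $(a,b)(A,B)=(aA, aB+bA^{2})$.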
Let $\operatorname{Aff}_{+}(\mathbb{R})$ be the subgroup of
orientation-preserving affine transformations of $\mathbb{R}$,
\[
\operatorname{Aff}_{+}(\mathbb{R}) = \left\{
\begin{pmatrix} c & d\\ 0& 1
\end{pmatrix}
: c>0,\, d\in\mathbb{R}\right\}.
\]
The subgroup $\operatorname{Aff}_{+}(\mathbb{R})$ is the connected
and simply-connected Lie group integrating
$\mathfrak{aff}(\mathbb{R})$. Clearly under the above identification
it is isomorphic to ${\mathcal{G}}^{\text{inv}}_{2+} = \{(a, b)\in
\mathcal{G}_{2}: a>0\}$. Thus, due to Palais' theorem and according to \reftext{\eqref{eqn:commutation_submonoids}},
formula \reftext{\eqref{eqn:full_G2_actions}} is a well-defined action of the
group ${\mathcal{G}}^{\text{inv}}_{2+}$ on $M$, i.e., both formulas
$p._{1}(a,b):=X^{b/a}(h_{a}(p))$ and $p._{2}(a,b):=h_{a}(
X^{b/a^{2}}(p))$ coincide for $(a,b)\in{\mathcal{G}}^{\text
{inv}}_{2+}$ and the resulting map is indeed a right action of
${\mathcal{G}}^{\text{inv}}_{2+}$.
Our goal now is to show that \reftext{\eqref{eqn:full_G2_actions}} is a
well-defined action of the whole ${\mathcal{G}}^{\text{inv}}_{2}$. It
is straightforward to check that $p._{1}(-1,0)=p._{2}(-1,0)=h_{-1}(p)$.
Now let us check that the formulas for $._{1}$ and $._{2}$ coincide for
elements of ${\mathcal{G}}^{\text{inv}}_{2}\setminus{\mathcal
{G}}^{\text{inv}}_{2+}$. Indeed, observe first that by \reftext{Definition~\ref{def:hgm_real}}, since $X$ is homogeneous of
weight~$-1$, for every $a\in
\mathbb{R}$
\[
(h_{a})_{\ast}X_{p}=a X_{h_{a}(p)}\ .
\]
Integrating the above equality we obtain the following result for flows:
\[
h_{a}(X^{t}(p))=X^{t a}(h_{a}(p))\ ,
\]
for every $a,t\in\mathbb{R}$.
Using this result we get for $a>0$
\[
p._{1}(-a,b)=X^{-b/a}(h_{-a}(p))=h_{-a}(X^{b/a^{2}}(p))=p._{2}(-a,b)\ ,
\]
i.e., \reftext{\eqref{eqn:full_G2_actions}} is well-defined on the whole
${\mathcal{G}}^{\text{inv}}_{2}$. To check that this is indeed an
action of ${\mathcal{G}}^{\text{inv}}_{2}$ note that
\[
p.(-a,b)=h_{-a}(X^{b/a^{2}}(p))=h_{-1}\left[
h_{a}(X^{b/a^{2}}(p))\right] =\left[ p.(a,b)\right] .(-1,0)
\]
and
\[
p.(-a,b)=X^{-b/a}(h_{-a}(p))=X^{-b/a}(h_{a}(h_{-1}(p)))=\left[
p.(-1,0)\right] .(a,-b)\ .
\]
In other words, the operation $p\mapsto p.(a,b)$ is compatible with the
following decomposition
\begin{equation}
\label{eqn:sd_product}
(-a,b)=(a,b)(-1, 0) =(-1, 0)(a,-b)\ .
\end{equation}
Now it suffices to observe that the latter formula allows us to express
every multiplication of two elements in ${\mathcal{G}}^{\text
{inv}}_{2}$ as a composition of multiplications of elements of
${\mathcal{G}}^{\text{inv}}_{2+}$ and $(-1,0)$ (in other words,
${\mathcal{G}}^{\text{inv}}_{2}$ is a semi-direct product of
${\mathcal{G}}^{\text{inv}}_{2+}$ and $C_{2}\simeq\{(\pm1, 0)\}$).
Since formula \reftext{\eqref{eqn:full_G2_actions}} is multiplicative with
respect to ${\mathcal{G}}^{\text{inv}}_{2+}$ and $C_{2}$ and respects \reftext{\eqref{eqn:sd_product}}, it is a well-defined action of the whole
${\mathcal{G}}^{\text{inv}}_{2}$.
\end{proof}
We have thus shown that the infinitesimal data associated with the right
(resp., left) action of $\mathcal{G}_{2}$ on a smooth manifold $M$,
i.e., a weight vector field $\Delta$ together with a complete vector
field $X$ (resp., $Y$) on $M$ satisfying \reftext{\eqref{eqn:commutator_X_Delta}}, integrates to a right (resp., left) action of
the Lie group ${\mathcal{G}}^{\text{inv}}_{2}$ on $M$. However, there
is no guarantee that this action will extend to the action of the whole
$\mathcal{G}_{2}\supset{\mathcal{G}}^{\text{inv}}_{2}$. This will
happen if formula \reftext{\eqref{eqn:full_G2_actions}} has a well-defined and
smooth extension to $a=0$. In particular situations this condition can
be checked by a direct calculation, yet no general criteria are known
to us.
In the forthcoming paragraphs we shall study (local) conditions of this
kind after restricting ourselves to the cases when the graded bundle $(M,
\Delta)$ is of low degree. The cases of left and right actions turn
out to be essentially different, so we treat them separately.
\paragraph*{Right $\mathcal{G}_{2}$-actions of degree at most~3}
Let us now classify (locally) all possible right $\mathcal
{G}_{2}$-actions on a smooth manifold $M$ such that the associated
graded bundle structure $(M,\Delta)$ is of degree at most 3. That is,
locally on $M$ we have graded coordinates
$(x^{i},y^{s}_{1},y_{2}^{S},y_{3}^{\sigma})$ where the lower index
indicates the degree.
By the results of \reftext{Remark~\ref{rem:euler_weight}}, the general formula
for a vector field $X$ of weight $-1$ in such a setting is
\begin{equation}
\label{eqn:field_weight_-1}
X=F^{s}(x)\partial_{y_{1}^{s}}+G^{S}_{s}(x) y_{1}^{s}\partial
_{y_{2}^{S}} + \left( H^{\sigma}_{S}(x) y^{S}_{2} + \frac{1}{2}
I^{\sigma}_{sr}(x) y_{1}^{s}y_{1}^{r}\right) \partial_{y_{3}^{\sigma
}}\ ,
\end{equation}
where $F^{s}$, $G^{S}_{s}$, $H^{\sigma}_{S}$, $I^{\sigma}_{sr}$ are
smooth functions on the base. The following result characterizes these
fields $X$ which give rise to a right action of the monoid $\mathcal
{G}_{2}$ on $M$:
\begin{lemma}\label{lem:right_g2_deg_less_4}
Let $(M,\Delta)$ be a graded bundle of degree at most 3 and let $X$ be
a weight $-1$ vector field on $M$ given locally by formula \reftext{\eqref{eqn:field_weight_-1}}. Then the right action $H:M\times{\mathcal
{G}}^{\text{inv}}_{2}\rightarrow M$ defined in \reftext{Lemma~\ref{lem:G_2_two}} extends to a smooth right action of $\mathcal{G}_{2}$ on
$M$ if and only if $F^{s}=0$ and $H^{\sigma}_{S} G^{S}_{s}=0$ for
all indices $s$ and $\sigma$. Equivalently,
$X$ is a weight~$-1$ vector field tangent to the fibration
$M=M^{3}\rightarrow M^{1}$, such that the weight $-2$ differential
operator $X\circ X$ vanishes on all functions on $M$ of weight less
than or equal to $3$.
\end{lemma}
\begin{proof}
In order to find the flow $t\mapsto X^{t}$ we need to solve the
following system of ODEs
\begin{align*}
\dot{x}^{i}&=0\\
\dot{y}_{1}^{s}&= F^{s}(x)\\
\dot{y}_{2}^{S} &= G^{S}_{s}(x)y_{1}^{s}(t)\\
\dot{y}_{3}^{\sigma}&= H^{\sigma}_{S}(x) y_{2}^{S}(t)+\frac{1}{2}
I^{\sigma}_{sr}(x) y_{1}^{s}(t)y_{1}^{r}(t)\ ,
\end{align*}
whose solution is
\begin{align*}
x^{i}(t)&=x^{i}(0),\\
y_{1}^{s}(t)&=y_{1}^{s}(0) + t F^{s},\\
y_{2}^{S}(t)&=y_{2}^{S}(0)+ tG^{S}_{s} y_{1}^{s}(0)+\frac{1}{2} t^{2}
G^{S}_{s}F^{s},\\
y_{3}^{\sigma}(t)&=y_{3}^{\sigma}(0)+ t (H^{\sigma}_{S}
y_{2}^{S}(0)+\frac{1}{2}I^{\sigma}_{sr}
y_{1}^{s}(0)y_{1}^{r}(0))+\frac{1}{2} t^{2}(H^{\sigma}_{S} G^{S}_{s}
y_{1}^{s}(0) + I^{\sigma}_{sr} F^{s} y_{1}^{r}(0)) \\
&\quad {}+ \frac{1}{6} t^{3} (H^{\sigma}_{S} G^{S}_{s}F^{s}+I^{\sigma}_{sr}
F^{s} F^{r})\ .
\end{align*}
Now, by \reftext{Lemma~\ref{lem:G_2_two}}, the action of $(a, b)\in\mathcal
{G}_{2}$ on $p\in M$ should be defined as $h_{a}\left(
X^{b/a^{2}}(p)\right) $, that is, it affects the coordinate
$y_{w}^{\alpha}$ of weight $w$ by $y_{w}^{\alpha}\mapsto a^{w}
y_{w}^{\alpha}(b/a^{2})$. Thus we have
\begin{align*}
H_{(a,b)}^{\ast}x^{i}&=x^{i},\\
H_{(a,b)}^{\ast}y_{1}^{s}&=a y_{1}^{s}+ \frac{b}{a} F^{s},\\
H_{(a,b)}^{\ast}y_{2}^{S}&=a^{2} y_{2}^{S}+ b G^{S}_{s}
y_{1}^{s}+\frac{1}{2} \frac{b^{2}}{a^{2}} G^{S}_{s} F^{s},\\
H_{(a,b)}^{\ast}y_{3}^{\sigma}&=a^{3} y_{3}^{\sigma}+ a b \left(H^{\sigma
}_{S} y_{2}^{S}+ \frac{1}{2} I^{\sigma}_{sr} y_{1}^{s} y_{1}^{r}\right)+\frac{1}{2}
\frac{b^{2}}{a}\left(H^{\sigma}_{S} G^{S}_{s} y_{1}^{s} + I^{\sigma
}_{sr} F^{s} y_{1}^{r}\right)+\frac{1}{6} \frac{b^{3}}{a^{3}} \left(H^{\sigma
}_{S} G^{S}_{s} F^{s} + I^{\sigma}_{sr} F^{s} F^{r}\right)\, .
\end{align*}
Now it is clear that the action $H_{(a,b)}$ depends smoothly on $(a,b)$
if and only if $F^{s}=0$ and $H^{\sigma}_{S} G^{S}_{s}=0$.
The vector field $X$ is tangent to the fibration $M^{3}\rightarrow
M^{1}$ if and only if $F^{s}=0$. Given this, $X\circ X$ annihilates all
functions of weight at most $2$, as well as the products $y^{s}_{1} y^{S}_{2}$,
so the condition on the differential operator $X\circ X$ reduces to
$0 = X(X(y^{\sigma}_{3})) = H^{\sigma}_{S} G^{S}_{s}\, y_{1}^{s}$, i.e., to $H^{\sigma}_{S} G^{S}_{s}=0$.
\end{proof}
Note that in degree $1$ (i.e., when $(M, \Delta)$ is a vector bundle) $X$ has to be the zero vector field, hence
$
v.(a, b) = a\cdot v
$
for any $(a, b)\in\mathcal{G}_2$ and $v\in M$.
In degree $2$ the only possibility is $X=G^S_s y_1^s\partial_{y_2^S}$. In geometric terms, $(G^S_s)$ defines a vector bundle morphism
$$
\phi: F^1 \rightarrow \widehat{F^2}, \quad \phi(x, y^s_1) = (x, \widehat{y}^S_2 = G^S_s(x) y^s_1),
$$
covering $\operatorname{id}_{M_0}$ where $M=F^2\rightarrow F^1\rightarrow M_0$ is the tower of affine bundle projections \eqref{eqn:tower_grd_spaces} associated with $(M, \Delta)$. Thus, there is a one-to-one correspondence between degree $2$ right $\mathcal{G}_2$-actions and vector bundle morphisms $\phi$ as above. The correspondence is defined by the formula
$$
v.(a, b) = h_a(v) + b\, \phi(v),
$$
where $v\in F^2$; $h_a$, for $a\in \mathbb{R}$, are homotheties of $(F^2, \Delta)$; and
$
+ : F^2\times_{M_0}\widehat{F^2} \rightarrow F^2
$
is the canonical action of the core bundle on a graded bundle.
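For the reader's convenience, let us verify directly that the above formula defines a right action. In graded coordinates $(x^{i},y_{1}^{s},y_{2}^{S})$ the element $v.(a,b)$ reads $y_{1}^{s}\mapsto a\,y_{1}^{s}$, $y_{2}^{S}\mapsto a^{2}y_{2}^{S}+b\,G^{S}_{s}(x)\,y_{1}^{s}$, hence
\[
\big(v.(a,b)\big).(A,B)\colon\quad y_{2}^{S}\longmapsto A^{2}\left(a^{2}y_{2}^{S}+b\,G^{S}_{s}y_{1}^{s}\right)+B\,G^{S}_{s}\left(a\,y_{1}^{s}\right)
=(aA)^{2}y_{2}^{S}+(aB+bA^{2})\,G^{S}_{s}y_{1}^{s},
\]
while $y_{1}^{s}\mapsto aA\,y_{1}^{s}$, which is precisely $v.\big((a,b)(A,B)\big)=v.(aA,aB+bA^{2})$.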
Obviously, in higher degrees finding the precise conditions for $X$ gets more complicated (yet is still doable in finite time) and more classes of admissible weight $-1$ vector fields appear.
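For instance, already in degree $3$ any vector field of the form
\[
X = G^{S}_{s}(x)\, y_{1}^{s}\,\partial_{y_{2}^{S}} + \frac{1}{2}\, I^{\sigma}_{sr}(x)\, y_{1}^{s} y_{1}^{r}\,\partial_{y_{3}^{\sigma}}
\]
(i.e., a field \reftext{\eqref{eqn:field_weight_-1}} with $F^{s}=0$ and $H^{\sigma}_{S}=0$) satisfies the conditions of \reftext{Lemma~\ref{lem:right_g2_deg_less_4}} and thus gives rise to a right $\mathcal{G}_{2}$-action, the quadratic term $I^{\sigma}_{sr}$ having no counterpart in degree $2$.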
\paragraph*{Left $\mathcal{G}_{2}$-actions}
In the case of left $\mathcal{G}_{2}$-actions we meet the problem of
integrating a vector field $Y$ of weight $1$. Even if the associated
graded bundle $(M, \Delta)$ is a vector bundle (a graded bundle of
degree 1), a vector field of weight $1$ has the general form
\[
\frac{1}{2} F_{ij}^{k}(x) y^{i} y^{j} \partial_{y^{k}} + F^{a}_{i}(x)
y^{i} \partial_{x^{a}}
\]
and, in general, is not integrable in quadratures. Therefore, the
problem of classifying left $\mathcal{G}_{2}$-actions seems to be more
difficult.
We will solve it in the simplest case when $(M, \Delta)$ is a vector bundle.
\begin{lemma}\label{lem:left_g2_action_vb}
Let $\tau:E\rightarrow M$ be a vector bundle. There is a one-to-one
correspondence between smooth left $\mathcal{G}_{2}$-actions on $E$
such that the multiplicative submonoid $(\mathbb{R},\cdot)\subset
\mathcal{G}_{2}$ acts by the homotheties of $E$ and symmetric
bi-linear operations $\bullet: E\times_{M} E\rightarrow E$ such that
for any $v\in E$
\begin{equation}\label{eqn:Identity}
v\bullet(v\bullet v) = 0\ .
\end{equation}
This correspondence is given by the following formula
\begin{equation}\label{eqn:left_G_2-action}
(a, b).v = a\,v + b\, v\bullet v\ ,
\end{equation}
where $(a,b)\in\mathcal{G}_{2}$ and $v\in E$.
\end{lemma}
\begin{proof}
We shall denote the action of an element $(a,b)\in\mathcal{G}_{2}$ on
$v\in E$ by $(a,b).v$. Observe first that we can restrict our attention
to a single fiber of $E$. Indeed, since $\tau(v)=(0,0).v$, we have
$\tau((a, b).v)=(0,0).(a, b).v = (0,0).v =\tau(v)$ and thus $(a,b).v$
belongs to the same fiber of $E$ as $v$ does. In consequence, without
any loss of generality, we may assume that $E$ is a vector
space.
By the results of \reftext{Lemma~\ref{lem:G2_actions_infinitesimally}}, every
left $\mathcal{G}_{2}$-action on $E$ induces a weight 1 homogeneous
vector field $Y\in\mathfrak{X}(E)$. The flow of such a $Y$ at time
$t$ corresponds to the action of an element $(1,t)\in\mathcal{G}_{2}$.
Choose now a basis $\{e_{i}\}_{i\in I}$ of $E$ and denote by $\{y^{i}\}
_{i\in I}$ the related linear coordinates. In this setting (cf.~\reftext{Remark~\ref{rem:euler_weight}}) $Y$ writes as
\[
Y = \frac{1}{2} F_{ij}^{k}\ y^{i} y^{j} \partial_{y^{k}}, \quad\text
{where}\quad F_{ij}^{k} = F_{ji}^{k}\ .
\]
Let us now define the product $\bullet$ on base elements of $E$ by the formula
\[
e_{i} \bullet e_{j} = F_{ij}^{k} e_{k}\ ,
\]
and extend it bilinearly to an operation $\bullet:E\times
_{M}E\rightarrow E$. In other words, $Y(v)=v\bullet v$, where we use
the canonical identification of the vertical tangent vectors of
$\mathrm{T}E$ with elements of $E$.
We shall now show that the action of $\mathcal{G}_{2}$ is given by
formula \reftext{\eqref{eqn:left_G_2-action}}. Recall that by \reftext{Lemma~\ref{lem:G_2_two}} the action of $(a,b)\in{\mathcal{G}}^{\text{inv}}_{2}$
on $v\in E$ is given by
\begin{equation}\label{eqn:G_2_inv_on_E}
(a, b).v = a\cdot v(b/a)\ ,
\end{equation}
where $t\mapsto v(t):=(1,t).v$ denotes the integral curve of $Y$
emerging from $v(0)=v$ at $t=0$. The question is whether the above
formula extends smoothly to the whole $\mathcal{G}_{2}$.
Note that for $t\neq0$ we have $(t,tb).v=t v(b)$. Thus, assuming the
existence of a smooth extension of \reftext{\eqref{eqn:G_2_inv_on_E}} to the
whole $\mathcal{G}_{2}$, we have
\[
\left.\frac{\mathrm{d}^{}}{\mathrm{d}t^{}}\right|_{t=0} (t,
tb).v=v(b)\ .
\]
On the other hand, by the Leibniz rule we can write
\begin{align*}
\left.\frac{\mathrm{d}^{}}{\mathrm{d}t^{}}\right|_{t=0} (t,
tb).v&=\left.\frac{\mathrm{d}^{}}{\mathrm{d}t^{}}\right|_{t=0} (t,
0).v + \left.\frac{\mathrm{d}^{}}{\mathrm{d}t^{}}\right|_{t=0} (0,
t b).v=\left.\frac{\mathrm{d}^{}}{\mathrm{d}t^{}}\right|_{t=0} (t,
0).v + \left.\frac{\mathrm{d}^{}}{\mathrm{d}t^{}}\right|_{t=0} (t
b,0).(0,1).v\\
&=\left.\frac{\mathrm{d}^{}}{\mathrm{d}t^{}}\right|_{t=0} t\cdot v+
\left.\frac{\mathrm{d}^{}}{\mathrm{d}t^{}}\right|_{t=0} tb \cdot
(0,1).v= v+b\cdot(0,1).v\ .
\end{align*}
We conclude that for every $b\in\mathbb{R}$
\begin{equation}\label{eqn:action_G2}
v(b)=v+b\cdot(0,1).v\ ,
\end{equation}
i.e., integral curves of $Y$ are straight lines or constant curves.
Differentiating the above formula with respect to $b$ we get
$(0,1).v=Y(v)=v\bullet v$. Using this and \reftext{\eqref{eqn:G_2_inv_on_E}} we
get for $(a,b)\in{\mathcal{G}}^{\text{inv}}_{2}$
\[
(a,b).v=a\cdot v(b/a)=a\left( v+b/a\cdot v\bullet v\right) =a\cdot
v+b\cdot v\bullet v\ .
\]
Clearly this formula extends smoothly to the whole $\mathcal{G}_{2}$.
We have thus proved that any smooth $\mathcal{G}_{2}$-action on $E$
such that $(\mathbb{R},\cdot)\subset\mathcal{G}_{2}$ acts by the
homotheties of $E$ is of the form \reftext{\eqref{eqn:left_G_2-action}}.
Clearly formula \reftext{\eqref{eqn:left_G_2-action}} considered for some, a
priori arbitrary, bi-linear operation $\bullet$ defines a left
$\mathcal{G}_{2}$-action if and only if $(aA, aB+bA^{2}).v\overset{\text{\reftext{\eqref{eqn:G_2_multiplication}}}}{=}(a,b)(A,B).v$ for every $v\in E$
and $(a,b),(A,B)\in\mathcal{G}_{2}$. It is a matter of a simple
calculation to check that this requirement leads to the following condition:
\[
2 AbB v\bullet(v\bullet v) + bB^{2} (v\bullet v)\bullet(v\bullet v)=0.
\]
Since $v$ and $a$, $b$, $A$ and $B$ were arbitrary this is equivalent
to $v\bullet(v\bullet v)=0$ and $(v\bullet v)\bullet(v\bullet v)=0$
for every $v\in E$. To end the proof it remains to show that the
latter condition follows from the former one. Indeed, after a short
calculation formula \reftext{\eqref{eqn:Identity}} considered for $v=v'+t\cdot
w$ leads to the following condition
\[
t\left[ w\bullet(v'\bullet v')+2v'\bullet(v'\bullet w)\right]
+t^{2}\left[ v'\bullet(w\bullet w)+2w\bullet(w\bullet v')\right]
=0\ .
\]
Thus, as $t\in\mathbb{R}$ was arbitrary, $w\bullet(v'\bullet
v')+2v'\bullet(v'\bullet w)=0$ for every $v',w\in E$. In particular,
taking $w=v'\bullet v'$ and using \reftext{\eqref{eqn:Identity}} we get
$(v'\bullet v')\bullet(v'\bullet v')=0$. This ends the proof.
\end{proof}
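\begin{example}
A minimal illustration of \reftext{Lemma~\ref{lem:left_g2_action_vb}}: take $E=\mathbb{R}^{2}$ with basis $(e_{1},e_{2})$ and let $\bullet$ be the symmetric bilinear operation determined by $e_{1}\bullet e_{1}=e_{2}$ and $e_{1}\bullet e_{2}=e_{2}\bullet e_{2}=0$. For $v=x\,e_{1}+y\,e_{2}$ we get $v\bullet v=x^{2}\,e_{2}$, hence $v\bullet(v\bullet v)=x^{2}\,(v\bullet e_{2})=0$ and condition \reftext{\eqref{eqn:Identity}} is satisfied. Formula \reftext{\eqref{eqn:left_G_2-action}} gives the left $\mathcal{G}_{2}$-action $(a,b).(x,y)=(a\,x,\,a\,y+b\,x^{2})$; one checks directly that $(a,b).\big((A,B).(x,y)\big)=\big(aA\,x,\,aA\,y+(aB+bA^{2})\,x^{2}\big)=\big((a,b)(A,B)\big).(x,y)$.
\end{example}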
\paragraph*{Actions of the monoid of 2 by 2 matrices}
Let $\mathcal{G}:= \operatorname{M}_{2\times2}(\mathbb{R})$ be the
monoid of $2$ by $2$ matrices with the natural matrix multiplication.
We shall end our considerations in this section by studying smooth
actions of this structure on manifolds.
We have a canonical isomorphism $\mathcal{G}\simeq\mathcal{G}^{\text
{op}}$ which sends a matrix to its transpose. Thus, unlike the case of
$\mathcal{G}_{2}$, left and right $\mathcal{G}$-actions are in
one-to-one correspondence.
Moreover, any $\mathcal{G}$-action gives rise to left and right
$\mathcal{G}_{2}$-action as there is a canonical
monoid embedding $\mathcal{G}_{2}\rightarrow\mathcal{G}$, $(a,
b)\mapsto\left(
\begin{array}{cc}
a & b \\
0 & a^{2}
\end{array}
\right) $. This observation allows us to easily prove the following result.
\begin{lemma}\label{lem:matrix} Any $\mathcal{G}$-action on a
manifold $M$ gives rise to a double graded bundle $(M, \Delta_{1},
\Delta_{2})$ equipped with two complete vector fields $X$, $Y$ of
weights $(1, -1)$ and $(-1, 1)$ respectively, such that $[X, Y] =
\Delta_{1}-\Delta_{2}$.
\end{lemma}
\begin{proof} Since the homogeneity structures defined by the actions of
the submonoids $G_{1} = \{\operatorname{diag}(t, 1):t\in\mathbb{R}\}
$, $G_{2}=\{\operatorname{diag}(1, t):t\in\mathbb{R}\}$,
$G_{1}\simeq(\mathbb{R}, \cdot)\simeq G_{2}$, commute, the
corresponding weight vector fields $\Delta_{1}$, $\Delta_{2}$ also
commute and give rise to a double graded structure $(M, \Delta_{1},
\Delta_{2})$. Define vector fields $X$, $Y$ as infinitesimal actions
of the subgroups $ \left(
\begin{array}{cc}
1 & t \\
0 & 1
\end{array}
\right) $ and $\left(
\begin{array}{cc}
1 & 0 \\
t & 1
\end{array}
\right) $, respectively. It is straightforward to check, as we did for
$\mathcal{G}_{2}$-actions, that $[\Delta_{1}, X] = X$, $[\Delta
_{2}, X] = -X$, so $X$ has weight $w(X)=(1,-1)$. Similarly, $[\Delta
_{1}, Y] = -Y$, $[\Delta_{2}, Y] = Y$, so $w(Y)=(-1,1)$, and moreover
\[
Y^{-s}\circ X^{-t}\circ Y^{s} \circ X^{t} =
\begin{pmatrix}
1+ts + o(st) & -t^{2}s\\
s^{2}t & 1 - st
\end{pmatrix}
\]
so $[X, Y] = \Delta_{1}-\Delta_{2}$ as we claimed.
\end{proof}
\begin{example} Let $M$ be a manifold and consider the space $\mathrm
{J}^{2}_{(0,0)}(\mathbb{R}\times\mathbb{R}, M)$ of all 2nd-jets at $(0,0)$ of maps $\gamma: \mathbb{R}^{2} \rightarrow
M$. Given local coordinates $(x^{j})$ on $M$, the adapted local
coordinates $(x_{00}^{j}, x_{10}^{j}, x_{01}^{j}, x_{20}^{j},
x_{11}^{j}, x_{02}^{j})$ on $\mathrm{J}^{2}_{(0,0)}(\mathbb{R}^{2},
M)$ of $[\gamma]_{2}$ are defined as coefficients of the Taylor
expansion
\begin{equation}
\gamma(t,s) = (\gamma^{j}(t,s)), \quad\gamma^{j}(t,s) = x_{00}^{j}
+ x_{10}^{j} t + x_{01}^{j} s +
x_{20}^{j} \frac{t^{2}}{2} + x_{11}^{j} ts + x_{02}^{j} \frac
{s^{2}}{2} + o(t^{2}, ts, s^{2}).
\end{equation}
The right action of $A\in\mathcal{G}$ on $[\gamma]_{2}\in\mathrm{J}^{2}_{(0,0)}(\mathbb{R}^{2}, M)$ equals
$[\gamma(at+bs, ct+ds)]_{2}$ and reads as
\begin{align}
(x_{00}^{j}, x_{10}^{j}, x_{01}^{j}, x_{20}^{j}, x_{11}^{j}, x_{02}^{j}).
\begin{pmatrix}
a& b\\ c& d
\end{pmatrix}
&= (x_{00}^{j}, a x_{10}^{j} + c x_{01}^{j}, b x_{10}^{j} +
d x_{01}^{j}, a^{2} x_{20}^{j} + 2 ac x_{11}^{j} + c^{2} x_{02}^{j}, ab x_{20}^{j}
\nonumber\\
&\quad {}
+
(ad+bc) x_{11}^{j} + cd x_{02}^{j}, b^{2} x_{20}^{j} + 2bd x_{11}^{j} +
d^{2} x_{02}^{j})
\end{align}
Hence the action of $
\begin{pmatrix}1 & t \\ 0 &1
\end{pmatrix}
$ yields a vector field $X = x_{10}^{j} \partial_{x_{01}^{j}}
+{x_{20}^{j}}\,\partial_{x_{11}^{j}} + 2\,x_{11}^{j}\partial
_{x_{02}^{j}}$ of weight $(1, -1)$. Similarly, the action of $
\begin{pmatrix}1 & 0 \\ t &1
\end{pmatrix}
$ gives rise to a vector field $Y = x_{01}^{j} \partial_{x_{10}^{j}} +
x_{02}^{j} \partial_{x_{11}^{j}} + 2 x_{11}^{j} \partial
_{x_{20}^{j}}$ of weight $(-1, 1)$. We have
\[
[X, Y] = x_{10}^{j}\partial_{x_{10}^{j}} - x_{01}^{j}\partial
_{x_{01}^{j}} + 2 x_{20}^{j}\partial_{x_{20}^{j}} -2
x_{02}^{j}\partial_{x_{02}^{j}} = \Delta_{1} - \Delta_{2},
\]
where $\Delta_{1} = x_{10}^{j}\partial_{x_{10}^{j}} + 2
x_{20}^{j}\partial_{x_{20}^{j}}+ x_{11}^{j}\partial_{x_{11}^{j}}$,
$\Delta_{2} = x_{01}^{j}\partial_{x_{01}^{j}}+x_{11}^{j} \partial
_{x_{11}^{j}} + 2 x_{02}^{j}\partial_{x_{02}^{j}}$ are commuting
weight vector fields. This example has a direct generalization to the
case of higher-order $(1, 1)$-velocities.
\end{example}
\section{On actions of the monoid of real numbers on supermanifolds}
\label{sec:super}
\paragraph*{Super graded bundles}
The notions of a super vector bundle (see e.g. \cite{BCC_sVB_2011})
and a graded bundle generalize naturally to the notion of a \emph
{super graded bundle}, i.e., a graded bundle in the category of
supermanifolds. The latter is a \emph{super fiber bundle} (see e.g.,
\cite{BCC_sVB_2011})
$\pi: \mathcal{E}\rightarrow\mathcal{M}$ in which one can
distinguish a class of $\mathbb{N}$-graded fiber coordinates so that
transition functions preserve this gradation (\reftext{Definition~\ref{def:s_grd_bndl}}).
On the other hand, super graded bundles are a particular example of
non-negatively graded manifolds in the sense of Voronov \cite
{Voronov:2001qf}. These are defined as supermanifolds with a privileged
class of atlases in which one assigns $\mathbb{N}_{0}$-weights to
particular coordinates. Coordinates of positive weights are
`cylindrical' and coordinate changes are decreed to be polynomials
which preserve $\mathbb{Z}_{2}\times\mathbb{N}_{0}$-gradation. The
coordinate parity is not determined by its $\mathbb{N}_{0}$-weight.
Our goal in this section is to prove a direct analog of \reftext{Theorem~\ref{thm:eqiv_real}} in supergeometry: $(\mathbb{R}, \cdot)$-actions on
supermanifolds are in one-to-one correspondence with super graded bundles.
To fix the notation, given a supermanifold defined by its structure
sheaf $(M, \mathcal{O}_{\mathcal{M}})$, we shall usually denote it
simply by $\mathcal{M}$. Here $M := |\mathcal{M}|$ is a topological
space called the \emph{body of $\mathcal{M}$}. Elements of $\mathcal
{O}_{\mathcal{M}}(U)$ will be called \emph{local functions} on
$\mathcal{M}$. For an open subset $U$ of $M$ let $\mathcal
{J}_{\mathcal{M}}(U)$ be the ideal of nilpotent elements in $\mathcal
{O}_{\mathcal{M}}(U)$. The quotient sheaf $\mathcal{O}_{\mathcal
{M}}/\mathcal{J}_{\mathcal{M}}$ defines a structure of a real smooth
manifold on the body $|\mathcal{M}|$. For local functions $f, g\in
\mathcal{O}_{\mathcal{M}}(U)$ a formula $f=g + o(\mathcal
{J}_{\mathcal{M}}^{i})$ means that $f-g\in(\mathcal{J}_{\mathcal
{M}}(U))^{i}$.
The definition of a super graded bundle, like its classical analog,
will be given in steps. We begin by introducing the notion of a super
graded space, which is, roughly speaking, a superdomain $\mathbb
{R}^{m|n}$ equipped with an atlas of global graded coordinates.
\begin{definition}\label{def:s_grd_space}
Let $\mathbf{d}:= (\mathbf{d}_{\bar{0}}|\mathbf{d}_{\bar{1}})$,
where $\mathbf{d}_{\varepsilon} = (d_{\varepsilon,1}, \ldots,
d_{\varepsilon,k})$, $\varepsilon\in\mathbb{Z}_{2}$, are sequences
of non-negative integers, and let $|\mathbf
{d}_{\varepsilon}|:=\sum_{i=1}^{k} d_{\varepsilon, i}$.
A \emph{super graded space of rank $\mathbf{d}$} is a supermanifold
$\mathrm{W}$ isomorphic\footnote{In the context of supermanifolds we
should rather speak about isomorphism than diffeomorphism. Since there
is no concept of a topological supermanifold, there is no need to
distinguish between homeomorphisms, $C^{1}$-diffeomorphism or smooth
diffeomorphisms.} to a superdomain $\mathbb{R}^{|\mathbf{d}_{\bar
{0}}|\,\big\vert\,|\mathbf{d}_{\bar{1}}|}$ and equipped with an
equivalence class of graded coordinates.
Here we assume that the number of even (resp. odd) coordinates of
weight $i$ is equal to $d_{\bar{0}, i}$ (resp. $d_{\bar{1}, i}$) where
$1\leq i\leq k$.
Two systems of graded coordinates are \emph{equivalent} if they are
related by a polynomial transformation with coefficients in $\mathbb
{R}$ which preserves both the parity and the weights (cf. \reftext{Definition~\ref{def:gr_space}}).
A \emph{morphism} between super graded spaces $W_{1}$ and $W_{2}$ is a
map $\Phi: W_{1}\rightarrow W_{2}$ which in some (and thus any) graded
coordinates writes as a polynomial respecting the $\mathbb
{N}_{0}\times\mathbb{Z}_{2}$-gradation.
\end{definition}
Informally speaking, a super graded bundle is a collection of super
graded spaces parametrized
by a base supermanifold.
\begin{definition}\label{def:s_grd_bndl}
A \emph{super graded bundle of rank $\mathbf{d}$} is a super fiber
bundle $\pi:\mathcal{E}\rightarrow\mathcal{M}$ with the typical
fiber $\mathbb{R}^{\mathbf{d}}$ considered as a super graded space of
rank $\mathbf{d}$. In other words, there exists a cover $\{\mathcal
{U}_{i}\}$ of the supermanifold $\mathcal{M}$ such that the total
space $\mathcal{E}$ is obtained by gluing trivial super graded bundles
$\mathcal{U}_{i}\times\mathbb{R}^{\mathbf{d}}\rightarrow\mathcal
{U}_{i}$ by means of transformations $\phi_{ij}: \mathcal
{U}_{ij}\times\mathbb{R}^{\mathbf{d}}\rightarrow\mathcal
{U}_{ij}\times\mathbb{R}^{\mathbf{d}}$ of the form
\begin{eqnarray*}
y^{a'} &= \sum_{I, J} Q^{a'}_{I, J}(x, \theta) y^{a_{1}}\ldots
y^{a_{i}} \xi^{A_{1}}\ldots\xi^{A_{j}}, \\
\xi^{A'} &= \sum_{I, J} Q^{A'}_{I, J}(x, \theta) y^{a_{1}}\ldots
y^{a_{i}} \xi^{A_{1}}\ldots\xi^{A_{j}}\ .
\end{eqnarray*}
Here $\mathcal{U}_{ij}:= \mathcal{U}_{i}\cap\mathcal{U}_{j}$;
$Q^{a'}_{I, J}$ and $Q^{A'}_{I, J}$ are local functions of (super)
coordinate functions $(x^{i}, \theta^{\alpha})$ on $\mathcal
{U}_{ij}$; $(y^{a},\xi^{A})$ and $(y^{a'},\xi^{A'})$ are graded super
coordinates on fibers $\mathbb{R}^{\mathbf{d}}$; and the summation is
over such sets of indices $I=(a_{1}, \ldots, a_{i})$ and $J= (A_{1},
\ldots, A_{j})$ that the parity and the weight of each monomial in the
sums on the right coincides with the parity and the weight of the
corresponding coordinate on the left.
The notion of a \emph{morphism} $\Phi: \mathcal{E}\rightarrow
\mathcal{E}'$ between super graded bundles is clear; it is enough to
assume that $\Phi$ is a morphism of supermanifolds such that the
corresponding algebra map $\Phi^{*}: \mathcal{O}_{\mathcal
{E}'}(|\mathcal{E}'|) \rightarrow\mathcal{O}_{\mathcal
{E}}(|\mathcal{E}|)$ preserves the $\mathbb{N}_{0}$-gradation.
\end{definition}
\begin{example} Higher tangent bundles have their analogs in supergeometry.
Given a supermanifold $\mathcal{M}$ a higher tangent bundle $\mathrm
{T}^{k} \mathcal{M}$ is a natural example of a super graded
bundle.\footnote{In a natural way higher tangent bundles correspond to
the (purely even) Weil algebra $\mathbb{R}[\varepsilon]/\langle
\varepsilon^{k+1}\rangle$ (see \cite
{Kolar_Michor_Slovak_nat_oper_diff_geom_1993}). In general, any super
Weil algebra gives rise to a Weil functor that can be applied to a
supermanifold (cf. \cite{BF_Weil_supermanifolds}).} For $k=2$ and
local coordinates $(x^{A})$ on $\mathcal{M}$ (even or odd) one can
introduce natural coordinates $(x^{A},\dot{x}^{B}, \ddot{x}^{C})$ on
$\mathrm{T}^{2}\mathcal{M}$ where coordinates $\dot{x}^{A}$ and
$\ddot{x}^{A}$ share the same parity as $x^{A}$ and are of weight 1
and 2, respectively. Standard transformation rules apply:
\[
x^{A'}= x^{A'}(x), \quad\dot{x}^{A'} = \dot{x}^{B}\frac{\partial
x^{A'}}{\partial x^{B}}, \quad\ddot{x}^{A'} = \ddot{x}^{B}\frac
{\partial x^{A'}}{\partial x^{B}} + \dot{x}^{C}\dot{x}^{B} \frac
{\partial^{2} x^{A'}}{\partial x^{B}\partial x^{C}}.
\]
\end{example}
\paragraph*{Homogeneity structures in the category of supermanifolds}
The notion of a homogeneity structure also generalizes easily to the
setting of supergeometry.
\begin{definition}\label{def:super_hgm_structure}
A \emph{homogeneity structure on a supermanifold $\mathcal{M}$} is a
smooth action $h:\mathbb{R}\times\mathcal{M}\to\mathcal{M}$ of the
multiplicative monoid $(\mathbb{R}, \cdot)$ of
real numbers, i.e., $h$ is a morphism of supermanifolds such that the
following diagram
\[
\xymatrix{\mathbb{R}\times\mathbb{R}\times\mathcal{M}\ar
[d]_{m\times\operatorname{id}_\mathcal{M}} \ar[rr]^{\operatorname
{id}_\mathbb{R}\times h} && \mathbb{R}\times\mathcal{M}\ar[d]^h\\
\mathbb{R}\times\mathcal{M}\ar[rr]^h && \mathcal{M}
}
\]
commutes (here $m:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$
denotes the standard multiplication) and that $h_{1} := h|_{\{1\}\times
\mathcal{M}}: \mathcal{M}\rightarrow\mathcal{M}$ is the identity
morphism. In other words,
$h$ is a morphism of supermanifolds defined by a collection of maps
$h_{t}:\mathcal{M}\to\mathcal{M}$, $t\in\mathbb{R}$ such that
$h_{ts}^{*} = h_{t}^{*}\circ h_{s}^{*}$ for any $t, s\in\mathbb{R}$
and that $h_{1} = \operatorname{id}_{\mathcal{M}}$.
A \emph{morphism} of two homogeneity structures $(\mathcal
{M}_{1},h_{1})$ and $(\mathcal{M}_{2},h_{2})$ is a morphism $\Phi
:\mathcal{M}_{1}\rightarrow\mathcal{M}_{2}$ of supermanifolds
intertwining the actions $h_{1}$ and $h_{2}$. Clearly, homogeneity
structures on supermanifolds with their morphism form a \emph{category}.
By analogy with the standard (real) case, we say that a local function
$f\in\mathcal{O}_{\mathcal{M}}(U)$, where $U$ is an open subset of $|\mathcal{M}|$,
is \emph{homogeneous} of \emph{weight} $w\in\mathbb{N}$ if
\[
h_{t}^{*}(f) = t^{w}\cdot f,
\]
for any $t\in\mathbb{R}$. We assume here that the carrier $U$ of $f$
is preserved by the action $h$, i.e. $\underline{h_{t}}(U)\subset U$
for any $t\in\mathbb{R}$.
\end{definition}
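\begin{example}
On the superdomain $\mathcal{M}=\mathbb{R}^{1|1}$ with an even coordinate $y$ and an odd coordinate $\xi$, the formulas $h_{t}^{*}(y)=t^{2}\,y$ and $h_{t}^{*}(\xi)=t\,\xi$ define a homogeneity structure: clearly $h_{1}^{*}=\operatorname{id}$ and $h_{t}^{*}\circ h_{s}^{*}=h_{ts}^{*}$. The coordinate $y$ is then homogeneous of weight $2$, $\xi$ is homogeneous of weight $1$, and the product $y\,\xi$ is homogeneous of weight $3$.
\end{example}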
\begin{remmark}\label{rem:super_hgm_structure}
Observe that given a homogeneity structure $h$ on a supermanifold
$\mathcal{M}$ the induced maps $\underline{h_{t}}: |\mathcal
{M}|\rightarrow|\mathcal{M}|$ equip the body $|\mathcal{M}|$ with a
(standard) \emph{homogeneity structure}, and so $\underline{h_{0}}:
|\mathcal{M}|\rightarrow|\mathcal{M}|_{0}$ is a (real) \emph{graded
bundle} over $|\mathcal{M}|_{0}:= h_{0}(|\mathcal{M}|)$.
Note also that, analogously to the standard case, every super graded
bundle structure $\pi:\mathcal{E}\rightarrow\mathcal{M}$ provides
$\mathcal{E}$ with a \emph{canonical homogeneity structure}
$h^{\mathcal{E}}$ defined locally in an obvious way. We call it an
action of the \emph{homotheties of $\mathcal{E}$}. Obviously, a
morphism of super graded bundles $\Phi:\mathcal{E}_{1}\rightarrow
\mathcal{E}_{2}$ induces a morphism of the related homogeneity
structures $(\mathcal{E}_{1},h^{\mathcal{E}_{1}})$ and $(\mathcal
{E}_{2},h^{\mathcal{E}_{2}})$.
\end{remmark}
As in the standard case, homogeneous functions on super graded
bundles are polynomial in graded coordinates.
\begin{lemma}\label{lem:super_morphisms}
Let $f$ be a homogeneous function on a trivial super graded bundle
$\mathcal{U}\times\mathbb{R}^{\mathbf{d}}$, where $\mathcal{U}$ is
a superdomain and $\mathbf{d}= (\mathbf{d}_{\bar{0}}|\mathbf
{d}_{\bar{1}})$ is as above. Then $f$ is a homogeneous polynomial in
graded fiber coordinates.
\end{lemma}
\begin{proof} This follows directly from a corresponding result for purely
even graded bundles. Indeed, let $f\in\mathcal{C}^{\infty}(x,
y^{a}_{w})[\theta, \xi^{A}]$ be an even or odd, homogeneous function
on $\mathcal{U}\times\mathbb{R}^{\mathbf{d}}$:
\[
f=\sum_{I, J} f_{I, J}(x, y^{a}) \xi^{A_{1}} \ldots\xi^{A_{i}}
\theta^{B_{1}} \ldots\theta^{B_{j}},
\]
where $(y^{a}, \xi^{A})$ are graded coordinates on $\mathbb
{R}^{\mathbf{d}}$, and the summation goes over sequences
$I = \{A_{1}<\ldots<A_{i}\}$, $J = \{B_{1}<\ldots<B_{j}\}$. Then
\[
h_{t}^{*}f = \sum_{I, J} f_{I, J}(x, t^{\mathbf{w}(a)} y^{a})
t^{\mathbf{w}(A_{1})+\ldots+\mathbf{w}(A_{i})} \xi^{A_{1}} \ldots
\xi^{A_{i}} \theta^{B_{1}} \ldots\theta^{B_{j}},
\]
so $h_{t}^{*}f = t^{w} f$ implies that the coefficients $f_{I, J}$ are
real functions of weight
$w - (\mathbf{w}(A_{1})+\ldots+\mathbf{w}(A_{i}))\geq0$, thus
polynomials in $y^{a}$.
\end{proof}
In what follows we will need the following technical result, which
allows us to construct homogeneous coordinates under mild conditions.
\begin{lemma}\label{lem:better_coord_for_h} Consider a superdomain
$\mathcal{M}= U\times\Pi\mathbb{R}^{s}$ with $U\subset\mathbb
{R}^{r}$ being an open set and introduce super coordinates
$(y^{1},\ldots,y^{r},\xi^{1},\ldots,\xi^{s})$ on $\mathcal{M}$,
i.e. $y$'s are even and $\xi$'s are odd coordinates on $\mathcal{M}$.
Let $h$ be an action of the monoid $(\mathbb{R}, \cdot)$ on $\mathcal
{M}$ such that
\[
h_{t}^{*}(y^{a})= t^{\mathbf{w}(a)}\,y^{a} + o(\mathcal{J}_{\mathcal
{M}}), \quad\text{and}\quad h_{t}^{*}(\xi^{i}) = t^{\mathbf{w}(i)}\,
\xi^{i} + o(\mathcal{J}_{\mathcal{M}}^{2})\ .
\]
Then
\[
\left( \frac{1}{\mathbf{w}(a)!} \left.\frac{\mathrm{d}^{\mathbf
{w}(a)}}{\mathrm{d}t^{\mathbf{w}(a)}}\right|_{t=0} h_{t}^{*}(y^{a}),
\frac{1}{\mathbf{w}(i)!} \left.\frac{\mathrm{d}^{\mathbf
{w}(i)}}{\mathrm{d}t^{\mathbf{w}(i)}}\right|_{t=0} h_{t}^{*}(\xi
^{i})\right)
\]
are graded coordinates on the superdomain $\mathcal{M}$.
\end{lemma}
\begin{proof} We remark that if $f$ is a function on $\mathcal{M}$ and we write
\[
h_{t}^{*} f = \sum_{I\subset\{1, \ldots, s\}} g_{I}(t, y^{1}, \ldots
, y^{r}) \xi^{I} \in\mathcal{C}^{\infty}(\mathbb{R}\times U)[\xi
^{1}, \ldots, \xi^{s}],
\]
then the function $f^{[k]}:=\frac
{1}{k!} \left.\frac{\mathrm{d}^k}{\mathrm{d}t^k}\right|_{t=0}
h_{t}^{*} f$ is well-defined, as $h$ is smooth, and is given by
\[
f^{[k]} = \frac{1}{k!} \sum_{I\subset\{1, \ldots, s\}}\xi^{I}\,
\left.\frac{\mathrm{d}^k}{\mathrm{d}t^k}\right|_{t=0} g_{I}(t,
y^{1}, \ldots, y^{r}) \in\mathcal{C}^{\infty}(U)[\xi^{1}, \ldots,
\xi^{s}].
\]
Now, since for any morphism $\phi:\mathcal{M}\rightarrow\mathcal
{M}$, $(\operatorname{id}_{\mathbb{R}}\times\phi)^{*}: \mathcal
{O}_{\mathbb{R}\times\mathcal{M}}\rightarrow\mathcal{O}_{\mathbb
{R}\times\mathcal{M}}$ commutes with the operators $\left.\frac
{\mathrm{d}^k}{\mathrm{d}t^k}\right|_{t=0}: \mathcal{O}_{\mathbb
{R}\times\mathcal{M}}\rightarrow\mathcal{O}_{\mathbb{R}\times
\mathcal{M}}$, we get
\[
h_{s}^{*} f^{[k]} = \frac{1}{k!} \left.\frac{\mathrm{d}^k}{\mathrm
{d}t^k}\right|_{t=0} h^{*}_{s} h^{*}_{t} f = \frac{1}{k!} \left
.\frac{\mathrm{d}^k}{\mathrm{d}t^k}\right|_{t=0} h^{*}_{st} f =
\frac{s^{k}}{k!} \left.\frac{\mathrm{d}^k}{\mathrm{d}t^k}\right|_{t=0}
h^{*}_{t} f = s^{k}\, f^{[k]}\ ,
\]
that is, $f^{[k]}$ is $h$-homogeneous of weight $k$. In particular,
\[
(y^{a})^{[\mathbf{w}(a)]} = y^{a} + o(\mathcal{J}_{\mathcal{M}})
\quad\text{and}\quad(\xi^{i})^{[\mathbf{w}(i)]} = \xi^{i} +
o(\mathcal{J}_{\mathcal{M}}^{2}),
\]
are homogeneous with respect to $h$. To prove that these are true
coordinates on $\mathcal{M}$ observe that the matrices $\left( \frac
{\partial(y^{a})^{[\mathbf{w}(a)]}}{\partial{y^{b}}}\right) $ and
$\left( \frac{\partial(\xi^{i})^{[\mathbf{w}(i)]}}{\partial\xi
^{j}}\right) $ are invertible, so the result follows.
\end{proof}
\paragraph*{The main result} We are now ready to prove that \reftext{Theorem~\ref{thm:eqiv_real}} generalizes to the supergeometric context.
\begin{theorem}\label{thm:main_super}
The categories of super graded bundles (with connected bodies) and
homogeneity structures on supermanifolds (with connected bodies) are
equivalent. At the level of objects this equivalence is provided by the
following two constructions
\begin{itemize}
\item With every super graded bundle $\pi:\mathcal{E}\rightarrow
\mathcal{M}$ one can associate a homogeneity structure $(\mathcal
{E},h^{\mathcal{E}})$, where $h^{\mathcal{E}}$ is the action by the
homotheties of $\mathcal{E}$.
\item Given a homogeneity structure $(\mathcal{M},h)$ on a
supermanifold $\mathcal{M}$, the map $h_{0}:\mathcal{M}\rightarrow
\mathcal{M}_{0}:=h_{0}(\mathcal{M})$ provides $\mathcal{M}$ with a
canonical structure of a super graded bundle such that $h$ is the related action by homotheties.
\end{itemize}
At the level of morphisms: every morphism of super graded bundles is a
morphism of the related homogeneity structures and, conversely, every
morphism of homogeneity structures on supermanifolds respects the
canonical super graded bundle structures.
\end{theorem}
\begin{proof}
The crucial part of the proof is to show that given a homogeneity
structure $h:\mathbb{R}\times\mathcal{M}\rightarrow\mathcal{M}$ on
a supermanifold $\mathcal{M}$ one can always find an atlas with
homogeneous coordinates on $\mathcal{M}$. First we observe that without
loss of generality we may assume that $\mathcal{M}$ has a simple form,
namely $\mathcal{M}$ is isomorphic to $U\times\mathbb{R}^{\mathbf
{d}}\times\Pi\mathbb{R}^{q}$ for some small open subset
$U\subset\mathbb{R}^{n}$, i.e. $\mathcal{M}$ has a second, other than $h$,
homogeneity structure associated with a vector bundle $E= U\times
\mathbb{R}^{\mathbf{d}}\times\mathbb{R}^{q}\to U\times\mathbb
{R}^{\mathbf{d}}$. Using the fact that these graded bundle structures
are compatible, and transferring the homogeneity structure $h$ to the
real manifold $E$ (with some loss of information) we are able to
construct graded coordinates for $\mathcal{M}$ but modulo $\mathcal
{J}_{\mathcal{M}}^{2}$. Then we invoke \reftext{Lemma~\ref{lem:better_coord_for_h}} to finish the proof.
Assume that $h:\mathbb{R}\times\mathcal{M}\rightarrow\mathcal{M}$
is a homogeneity structure on a supermanifold $\mathcal{M}$.
Recall (see \reftext{Remark~\ref{rem:super_hgm_structure}}) that $h$ induces a
canonical homogeneity structure $\underline{h}$ on the body $|\mathcal{M}|$.
Since we work locally we may assume without any loss of generality that
$|\mathcal{M}|_{0}:= \underline{h}_{0}(|\mathcal{M}|)$ is an open
contractible subset $U\subset\mathbb{R}^{n}$, and $|\mathcal{M}| =
U\times\mathbb{R}^{\mathbf{d}}$ is a trivial graded bundle over $U$
of rank $\mathbf{d}=(d_{1}, \ldots, d_{k})$. Thus
we may assume that $\mathcal{M}= \Pi E$ where $E=U\times\mathbb
{R}^{\mathbf{d}}\times\mathbb{R}^{q}$ is a trivial vector bundle
over $|\mathcal{M}| = U\times\mathbb{R}^{\mathbf{d}}$ with the
typical fiber $\mathbb{R}^{q}$. Note that we do not need to refer to
Batchelor's theorem \cite{Gaw_77,Batchelor_str_sMnflds} and
the argument works even for holomorphic actions of the monoid $(\mathbb
{C}, \cdot)$ on complex supermanifolds (see \reftext{Remark~\ref{rem:complex_sMnflds}}).
Consider now local coordinates $(x^{i}, y^{a}_{w}, Y^{A})$ on $E$ where
$(x^{i},y^{a}_{w})$ are graded coordinates on the base and $Y^{A}$ are
linear coordinates on fibers. Let $(\xi^{A})$ be odd coordinates on
$\Pi E$ corresponding to $(Y^{A})$.
Recall that $\mathcal{J}_{\mathcal{M}}(|\mathcal{M}|)=\langle\xi
^{A}\rangle$ denotes
the nilpotent radical of $\mathcal{O}_{\mathcal{M}}(|\mathcal{M}|)$.
Since $(x^{i},y^{a}_{w})$ are graded coordinates with respect to
$\underline{h}$ and since $h_{t}$ respects the parity for each $t\in
\mathbb{R}$, the general form of $h_{t}$ must be
\begin{equation}\label{eqn:h_t_form}
\begin{cases}
h_{t}^{*}(x^{i}) &= x^{i} + o(\mathcal{J}_{\mathcal{M}}^{2}),\\
h_{t}^{*}(y^{a}_{w}) &= t^{w} \,y^{a}_{w} + o(\mathcal{J}_{\mathcal
{M}}^{2}),\\
h_{t}^{*}(\xi^{A}) &= \alpha^{A}_{B}(t,x^{i},y^{a}_{w}) \xi^{B} +
o(\mathcal{J}_{\mathcal{M}}^{2}),
\end{cases}
\end{equation}
where $\alpha^{A}_{B}$ are smooth functions.
The action $h$ defines an action $\widetilde{h}$ of the monoid
$(\mathbb{R}, \cdot)$ on $E$ which is given by
\begin{equation}\label{eqn:action_h_tilde}
\widetilde{h}_{t}^{*}(x^{i})=x^{i}, \quad\widetilde
{h}_{t}^{*}(y^{a})= t^{w(y^{a})} \,y^{a}, \quad\text{and}\quad
\widetilde{h}_{t}^{*}(Y^{A}) = \alpha^{A}_{B}(t,x,y) Y^{B}.
\end{equation}
Indeed, by reducing $h_{t}^{*}: \mathcal{O}_{\mathcal{M}}(|\mathcal
{M}|) \rightarrow\mathcal{O}_{\mathcal{M}}(|\mathcal{M}|)$ modulo
$\mathcal{J}_{\mathcal{M}}^{2}(|\mathcal{M}|)$ we obtain an
endomorphism of $\mathcal{O}_{\mathcal{M}}(|\mathcal{M}|)/\mathcal
{J}_{\mathcal{M}}^{2}(|M|) = \mathcal{C}^{\infty}(x^{i}, y^{a}_{w})
\oplus\xi^{A} \cdot\mathcal{C}^{\infty}(x^{i}, y^{a}_{w}) \simeq
\mathcal{C}^{\infty}(x^{i}, y^{a}_{w}) \oplus Y^{A} \cdot\mathcal
{C}^{\infty}(x^{i}, y^{a}_{w})$, thus $\widetilde{h}_{t}\circ
\widetilde{h}_{s} = \widetilde{h}_{ts}$ and $\widetilde{h}_{t}$ does
not depend on a particular choice of linear coordinates $Y^{A}$ on $E$.
It follows from \reftext{Theorem~\ref{thm:eqiv_real}} that $E$ is a graded
bundle over $E_{0}:=\widetilde{h}_{0}(E)$, whose homotheties coincide
with the maps $\widetilde{h}_{t}$. Note that the inclusions $U\times\{
0\} \times\{0\} \subset E_{0} \subset U\times\{0\}\times\mathbb
{R}^{q}$ can be proper.
Our goal now is to find graded coordinates on $E$ out of non-homogeneous
coordinates $(x^{i}, y^{a}, Y^{A})$ and then mimic the same changes of
coordinates in order to define a graded coordinate system on the
supermanifold $\mathcal{M}= \Pi E$ out of a non-homogeneous one
$(x^{i}, y^{a}, \xi^{A})$.
Denote by $H$ the homotheties related with the vector bundle structure
on $\tau: E \rightarrow|\mathcal{M}|=U\times\mathbb{R}^{\mathbf
{d}}$. A~fundamental observation that follows from \reftext{\eqref{eqn:action_h_tilde}} is that the actions $H$ and $\widetilde{h}$
commute, i.e.,
\[
H_{s} \circ\widetilde{h}_{t} = \widetilde{h}_{t}\circ H_{s}
\]
for every $t,s\in\mathbb{R}$.
Thus $(E, \widetilde{h}, H)$ is a \emph{double homogeneity structure}
and, by Theorem 5.1 of \cite{JG_MR_gr_bund_hgm_str_2011}, a double
graded bundle:
\[
\xymatrix{
E=U\times\mathbb{R}^{\mathbf{d}}\times\mathbb{R}^q \ar
[d]^{\widetilde{h}_0}\ar[rr]^{H_0} && U\times\mathbb{R}^{\mathbf{d}}
\ar[d]^{\widetilde{h}_0|_{U\times\mathbb{R}^{\mathbf{d}}}} \\
E_0 \ar[rr]^{H_0|_{E_0}} && U.
}
\]
Moreover, the above-mentioned result implies that we can complete
graded coordinates $(x^{i}, y^{a}_{w})$ on $U\times\mathbb
{R}^{\mathbf{d}}$ which are constant along fibers of the projection
$H_{0}$ with graded coordinates $\tilde{Y}^{A}_{w}$ of bi-weight $(w,
1)$, where $0\leq w\leq k$ so that $(x^{i}, y^{a}_{w}, \widetilde
{Y}^{A}_{w})$ is a system of bi-graded coordinates for $(E, \widetilde
{h}, H)$.
Since both $(Y^{A})$ and $(\widetilde{Y}^{A}_{w})$ are linear
coordinates for the vector bundle $H_{0}:E\rightarrow U\times\mathbb
{R}^{\mathbf{d}}$ they are related~by
\begin{equation}\label{eqn:tilde_Y_A}
\widetilde{Y}^{A}_{w} = \gamma^{A}_{B}(x, y) Y^{B}
\end{equation}
for some functions $\gamma^{A}_{B}$ on $U\times\mathbb{R}^{\mathbf{d}}$.
Let us define $\widetilde{\xi}^{A}:= \gamma^{A}_{B}(x,y)\cdot\xi
^{B}$, i.e. using the same functions as in \reftext{\eqref{eqn:tilde_Y_A}}. By
applying $\widetilde{h}_{t}^{*}$ to \reftext{\eqref{eqn:tilde_Y_A}} we get
\[
t^{\mathbf{w}(A)} \gamma^{A}_{C}(x, y)Y^{C}
\overset{\text{\reftext{\eqref{eqn:tilde_Y_A}}}}{=} t^{\mathbf{w}(A)} \widetilde{Y}_{w}^{A} =
\widetilde{h}_{t}^{\ast}(\widetilde{Y}_{w}^{A}) \overset{\text{\reftext{\eqref{eqn:tilde_Y_A}}}}{=}\widetilde{h}_{t}^{*}(\gamma^{A}_{B}(x, y))
\widetilde{h}_{t}^{*}(Y^{B}) \overset{\text{\reftext{\eqref{eqn:action_h_tilde}}}}{=}
\gamma^{A}_{B}(x, t^{w} y^{a}_{w})\alpha^{B}_{C}(t, x, y) Y^{C},
\]
hence
$t^{\mathbf{w}(A)} \gamma^{A}_{C} = \gamma^{A}_{B}(x, t^{w}
y^{a}_{w})\alpha^{B}_{C}(t, x, y)$, and
\begin{equation}
\begin{aligned}[c]
h_{t}^{*}(\widetilde{\xi}^{A})&=h_{t}^{*}(\gamma
^{A}_{B})h_{t}^{*}(\xi^{B}) \overset{\text{\reftext{\eqref{eqn:h_t_form}}}}{=} \gamma
^{A}_{B}(x, t^{w} y^{a}_{w} + o(\mathcal{J}_{\mathcal
{M}}^{2}))(\alpha^{B}_{C}(t, x, y) \xi^{C} + o(\mathcal{J}_{\mathcal
{M}}^{2})) = \\
&= \gamma^{A}_{B}(x, t^{w} y^{a}_{w}) \alpha^{B}_{C}(t, x, y)\xi^{C}
+ o(\mathcal{J}_{\mathcal{M}}^{2}) = t^{\mathbf{w}(A)} \gamma
^{A}_{C}(x, y) \xi^{C} + o(\mathcal{J}_{\mathcal{M}}^{2}) = \\
&= t^{\mathbf{w}(A)} \widetilde{\xi}^{A} + o(\mathcal{J}_{\mathcal
{M}}^{2}).
\end{aligned}
\end{equation}
We obtain a graded coordinate system for $\mathcal{M}$ due to
\reftext{Lemma~\ref{lem:better_coord_for_h}}.
The equivalence at the level of morphisms follows directly from \reftext{Lemma~\ref{lem:super_morphisms}}. This result implies that locally any
supermanifold morphism respecting the homogeneity structures is a
homogeneous polynomial in graded coordinates, i.e. it is a morphism of
the related super graded bundles (cf. \reftext{Definition~\ref{def:s_grd_bndl}}).
\end{proof}
\begin{remmark}\label{rem:complex_sMnflds} Using the same methods one
can prove an analog of the above result for holomorphic supermanifolds (a
super-version of complex manifolds): a holomorphic action of $(\mathbb
{C},\cdot)$ on a holomorphic supermanifold $\mathcal{M}$ gives rise
to a graded holomorphic super coordinate system for $\mathcal{M}$.
Indeed, the proof of \reftext{Lemma~\ref{lem:better_coord_for_h}} can be
rewritten in a holomorphic setting. The other result we need to
complete the proof of \reftext{Theorem~\ref{thm:main_super}} in the holomorphic
context is that two holomorphic commuting $(\mathbb{C}, \cdot)$
actions on a complex manifold $M$ give rise to an $\mathbb{N}_{0}\times
\mathbb{N}_{0}$-graded coordinate system on $M$ (an analog of Theorem~5.1
\cite{JG_MR_gr_bund_hgm_str_2011}). This can be justified using a
double graded version of \reftext{Lemma~\ref{lem:complex_gr_sspace}} and the
fact that $M$ can be considered as a substructure of $\mathrm
{J}^{r}\mathrm{J}^{s} M$ for some $r$, $s$ (a double holomorphic
homogeneity substructure). Details are left to the Reader.
\end{remmark}
\section*{Acknowledgments}
This research was supported by the {Polish National Science Centre} grant
under the contract number {DEC-2012/06/A/ST1/00256}.
The question of characterizing the actions of the multiplicative monoid
$(\mathbb{C},\cdot)$ arose during a discussion between
Professors Stanis{\l}aw L. Woronowicz and Janusz Grabowski at the
seminar on the results of \cite{JG_MR_gr_bund_hgm_str_2011}. The
problem of characterizing $\mathcal{G}_{k}$-actions was originally
posed by Professor Janusz Grabowski. We would like to thank them for
the inspiration and encouragement to undertake this research.
\end{document}
\begin{document}
\title{Quantum Simulations of Physics Problems}
\begin{abstract}
If a large Quantum Computer (QC) existed today, what type of physical
problems could we efficiently simulate on it that we could not simulate
on a classical Turing machine? In this paper we argue that a QC could
solve some relevant physical ``questions'' more efficiently. The
existence of one-to-one mappings between different algebras of
observables or between different Hilbert spaces allows us to represent
and imitate any physical system by any other one (e.g., a bosonic
system by a spin-1/2 system). We explain how these mappings can be
performed, presenting quantum networks useful for the efficient evaluation
of some physical properties, such as correlation functions and energy
spectra.
\end{abstract}
\keywords{quantum mechanics, quantum computing, identical particles,
spin systems, generalized Jordan-Wigner transformations}
\section{introduction}
\label{section1}
Quantum simulation of physical systems on a QC has acquired
importance in recent years, since it is believed that QCs can
simulate quantum physics problems more efficiently than their classical
analogues \cite{feynman1982}: The number of operations needed for
deterministically solving a quantum many-body problem on a classical
computer (CC) increases exponentially with the number of degrees of
freedom of the system.
In quantum mechanics, each physical system has an associated language of
operators and an algebra realizing this language, and can be considered
as a possible model of quantum computation \cite{ortiz2001}. As we
discussed in a previous paper \cite{somma2002}, the existence of
one-to-one mappings between different languages (e.g., the
Jordan-Wigner transformation that maps fermionic operators onto spin-1/2
operators) and between quantum states of different Hilbert spaces,
allows the quantum simulation of one physical system by any other one.
For example, a liquid nuclear magnetic resonance QC (NMR) can simulate
a system of $^4$He atoms (hard-core bosons) because an isomorphic
mapping between both algebras of observables exists.
The existence of mappings between operators allows us to construct
quantum network models from sets of elementary gates, to which we map
the operators of our physical system. An important remark is that these
mappings can be performed efficiently: we need a number of steps that
scales polynomially with the system size. However, this fact alone is not
sufficient to establish that any quantum problem can be solved
efficiently. One needs to show that all steps involved in the
simulation (i.e., preparation of the initial state, evolution,
measurement, and measurement control) can be performed with polynomial
complexity. For example,
the number of different eigenvalues in the two-dimensional
Hubbard model scales exponentially with the system size, so
QC algorithms for obtaining its energy spectrum will also require
a number of
operations that scales exponentially with the system size
\cite{somma2002}.
Typically, the degrees of freedom of the physical system over which we
have quantum control constitute the model of computation. In this
paper, we consider the simulation of any physical system by the
standard model of quantum computation (spin-1/2 system), since this
might be the language needed for the practical implementation of the
quantum algorithms (e.g., NMR). Therefore, the complexity of the
quantum algorithms is analyzed from the point of view of the number of
resources (elementary gates) needed for their implementation in the
language of the standard model. Had another model of computation been
used, one should follow the same qualitative steps, although the
mappings and network structure would be different.
The main purpose of this work is to show how to simulate any physical
process and system using the least possible number of resources. We
organized the paper in the following way: In section \ref{section2} we
describe the standard model of quantum computation (spin-1/2 system).
Section \ref{section3} shows the mappings between physical systems
governed by a generalized Pauli's exclusion principle (fermions, etc.)
and the standard model, giving examples of algorithms for the first two
steps (preparation of the initial state and evolution) of the quantum
simulation. In section \ref{section4} we develop similar steps for the
simulation of quantum systems whose language has an
infinite-dimensional representation and thus no exclusion
principle (e.g., canonical bosons). In section \ref{section5} we
explain the measurement process used to extract information about some
relevant and generic physical properties, such as correlation functions
and energy spectra. We conclude with a discussion about efficiency and
quantum errors (section \ref{section6}), and a summary about the
general statements (section \ref{section7}).
\section{standard model of quantum computation}
\label{section2}
In the standard model of quantum computation, the fundamental unit is
the {\it qubit}, represented by a two level quantum system $\ket{\sf a}
= a \ket{0} + b\ket{1}$. For a spin-1/2 particle, for example, the two
``levels'' are the two different orientations of the spin,
$\ket{\uparrow}=\ket{0}$ and $\ket{\downarrow}=\ket{1}$. In this
model, the algebra assigned to a system of $N$ qubits is built upon the
Pauli spin-1/2 operators $\sigma_x^j$, $\sigma_y^j$ and $\sigma_z^j$
acting on the $j$-th (individual) qubit. These operators satisfy the
commutation relations of the algebra $\bigoplus\limits_{j=1}^N
su(2)_j$, defined by ($\mu,\nu,\lambda=x,y,z$)
\begin{equation}
\label{su2}
[\sigma_{\mu}^j,\sigma_{\nu}^k]=2i\delta_{jk}\epsilon_{\mu \nu \lambda}
\sigma_{\lambda}^j ,
\end{equation}
where $\epsilon_{\mu \nu \lambda}$ is the totally anti-symmetric
Levi-Civita symbol. Sometimes it is useful to write the commutation
relations in terms of the raising and lowering spin-1/2 operators
\begin{equation}
\label{sigmapm}
\sigma_{\pm}^j = \frac{\sigma_x^j \pm i \sigma_y^j}{2}.
\end{equation}
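Explicitly, in terms of these operators the commutation relations of Eq. \ref{su2} read
\begin{equation}
\label{supm}
[\sigma_+^j , \sigma_-^k]=\delta_{jk} \, \sigma_z^j , \mbox{ }
[\sigma_z^j , \sigma_{\pm}^k]= \pm 2 \delta_{jk} \, \sigma_{\pm}^j .
\end{equation}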
Any operation on a QC is represented by a unitary operator $U$ that
evolves some initial state (boot-up state) in a way that satisfies the
time-dependent Schr\"odinger equation for some Hamiltonian $H$. Any
unitary operation (evolution) $U$ applied to a system of $N$ qubits can
be decomposed into single qubit rotations $R_{\mu}(\vartheta)=
e^{-i \frac {\vartheta}{2} \sigma_{\mu}}$ by an angle $\vartheta$ about
the $\mu$ axis and two-qubit Ising interactions $R_{z^j, z^k}=e^{i
\omega \sigma_z^j \sigma_z^k}$. This is an important result of quantum
information, since with these operations one can perform universal
quantum computation. It is important to mention that we could also
perform universal quantum computation with single qubit rotations and
C-NOT gates \cite{nielsen2000} or even with different control
Hamiltonians. The crucial point is that we need to have quantum control
over those elementary operations in the real physical system.
In the following, we will write down our algorithms in terms of single
qubit rotations and two qubits Ising interactions, since this is the
language needed for the implementation of the algorithms, for example,
in a liquid NMR QC. Again, had we used a different set of elementary
gates, our main results would still hold, but with modified quantum networks.
As an example of such decompositions, we consider the unitary operator
$U(t)=e^{iHt}$, where $H=\alpha \sigma_x^1 \sigma_z^2 \sigma_x^3$
represents a time-independent Hamiltonian. After some simple
calculations \cite{ortiz2001,somma2002} we decompose $U$ into
elementary gates (one qubit rotations and two qubits interactions) in
the following way
\begin{equation}
\label{decomp1}
U(t)=e^{i \alpha \sigma_x^1 \sigma_z^2 \sigma_x^3 t} =
e^{-i \frac{\pi}{4} \sigma_y^3}
e^{i \frac{\pi}{4} \sigma_z^1\sigma_z^3} e^{i \frac{\pi}{4} \sigma_x^1}
e^{i \alpha \sigma_z^1 \sigma_z^2 t} e^{-i \frac{\pi}{4} \sigma_x^1}
e^{-i \frac{\pi}{4} \sigma_z^1\sigma_z^3} e^{i \frac{\pi}{4} \sigma_y^3}.
\end{equation}
This decomposition is shown in Fig. 1, where the quantum network
representation is displayed. In the same way, we could also decompose
an operator $U'(t)=e^{-i\alpha\sigma_y^1 \sigma_z^2 \sigma_y^3 t}$
using similar steps, by replacing $\sigma_x^i \leftrightarrow
\sigma_y^i$ in the right hand side of Eq. \ref{decomp1}.
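The identity in Eq. \ref{decomp1} can be verified directly. Writing its right hand side as $V e^{i \alpha \sigma_z^1 \sigma_z^2 t} V^{\dagger}$ with $V= e^{-i \frac{\pi}{4} \sigma_y^3} e^{i \frac{\pi}{4} \sigma_z^1\sigma_z^3} e^{i \frac{\pi}{4} \sigma_x^1}$, and using the relation $e^{i \frac{\pi}{4} A}\, B \, e^{-i \frac{\pi}{4} A} = iAB$, valid for anticommuting operators $A$ and $B$ with $A^2=1$, one finds the chain of conjugations
\begin{equation}
\sigma_z^1 \sigma_z^2 \; \rightarrow \; \sigma_y^1 \sigma_z^2 \; \rightarrow \;
\sigma_x^1 \sigma_z^2 \sigma_z^3 \; \rightarrow \; \sigma_x^1 \sigma_z^2 \sigma_x^3
\end{equation}
under conjugation by $e^{i \frac{\pi}{4} \sigma_x^1}$, $e^{i \frac{\pi}{4} \sigma_z^1\sigma_z^3}$ and $e^{-i \frac{\pi}{4} \sigma_y^3}$, respectively. Hence $V \sigma_z^1 \sigma_z^2 V^{\dagger} = \sigma_x^1 \sigma_z^2 \sigma_x^3$ and therefore $V e^{i \alpha \sigma_z^1 \sigma_z^2 t} V^{\dagger} = e^{i \alpha \sigma_x^1 \sigma_z^2 \sigma_x^3 t}$, which is Eq. \ref{decomp1}.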
\begin{figure}
\caption{Decomposition of the unitary operator $U(t)=e^{i \alpha
\sigma_x^1 \sigma_z^2 \sigma_x^3 t}$ into elementary gates.}
\label{fig:1}
\end{figure}
\section{simulation of fermionic systems}
\label{section3}
As discussed in the Introduction, quantum simulations require
simulations of systems with diverse degrees of freedom and particle
statistics. Fermionic systems are governed by Pauli's exclusion
principle, which implies that no more than one fermion can occupy the
same quantum state at the same time. In this way, the Hilbert space of
quantum states that represent a system of fermions in a solid is
finite-dimensional ($2^N$ for spinless fermions, where $N$ is the
number of sites or modes in the solid), and one can expect the
existence of one-to-one mappings between the fermionic and Pauli's
spin-1/2 algebras. Similarly, any language which involves operators
with a finite-dimensional representation (e.g., hard-core bosons, higher
irreps of $su(2)$, etc.) can be mapped onto the standard model language
\cite{review}.
In the second quantization representation, the (spinless) fermionic
operators $c^{\dagger}_i$ ($c^{\;}_i$) are defined as the creation
(annihilation) operators of a fermion in the $i$-th mode
($i=1,\cdots,N$). Due to Pauli's exclusion principle and the
antisymmetric nature of the fermionic wave function under the
permutation of two fermions, the fermionic algebra is given by the
following commutation relations
\begin{equation}
\label{fermcom}
\{ c_i,c_j \}=0 , \mbox{ } \{ c^{\dagger}_i,c_j \} = \delta_{ij}
\end{equation}
where $\{,\}$ denotes the anticommutator.
The Jordan-Wigner transformation \cite{jordan1928} is the isomorphic
mapping that allows the description of a fermionic system by the
standard model
\begin{eqnarray}
\label{JW}
c_j \rightarrow \left( \prod\limits_{l=1}^{j-1} -\sigma_z^l \right) \sigma_-^j\\
\label{JW2}
c^{\dagger}_j \rightarrow \left( \prod\limits_{l=1}^{j-1} -\sigma_z^l \right)
\sigma_+^j ,
\end{eqnarray}
where $\sigma_{\mu}^i$ are the Pauli operators defined in section
\ref{section2}.
One can easily verify that if the operators $\sigma_{\mu}^i$ satisfy
the $su(2)$ commutation relations (Eq. \ref{su2}), the operators
$c^{\dagger}_i$ and $c^{\;}_i$ obey Eqs. \ref{fermcom}.
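This can be checked with the help of the identities $\sigma_+^j \sigma_-^j =(1+\sigma_z^j)/2$ and $\sigma_-^j \sigma_+^j =(1-\sigma_z^j)/2$: the strings of $\sigma_z$ operators square to the identity, so $\{c^{\;}_j,c^{\dagger}_j\}=\sigma_-^j \sigma_+^j+\sigma_+^j \sigma_-^j=1$, while for $i<j$ the factor $-\sigma_z^i$ contained in the string of $c^{\;}_j$ ($c^{\dagger}_j$) anticommutes with $\sigma_{\pm}^i$, which produces the remaining vanishing anticommutators, e.g., $\{c^{\;}_i,c^{\;}_j\}=0$.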
We now need to show how to simulate a fermionic system by a QC. Just as
for a simulation on a CC, the quantum simulation has three basic
steps: the preparation of an initial state, the evolution of this
state, and the measurement of a relevant physical property of the
evolved state. We will now explain the first two steps, postponing the
third until section \ref{section5}.
\subsection{Preparation of the initial state}
\label{section3.1}
In the most general case, any quantum state $\ket{\psi}$ of $N_e$
fermions can be written as a linear combination of Slater determinants
$\ket{\phi_\alpha}$
\begin{equation}
\ket{\psi}=\sum\limits_{\alpha=1}^L g_\alpha \ \ket{\phi_\alpha} ,
\end{equation}
where
\begin{equation}
\label{slater1}
\ket{\phi_\alpha} = \prod \limits_{j=1}^{N_e} c^{\dagger}_j \ \ket{\sf vac}
\end{equation}
with the vacuum state $\ket{\sf vac}$ defined as the state with no
fermions. In the spin language, $\ket{\sf vac}=\ket{\downarrow
\downarrow \cdots \downarrow}$.
We can easily prepare the states $\ket{\phi_\alpha}$ by noticing that
the quantum gate, represented by the unitary operator
\begin{equation}
\label{Um}
U_m=e^{i \frac{\pi}{2} (c^{\;}_m+c^\dagger_m)}
\end{equation}
when acting on the vacuum state, produces $c^\dagger_m \ket{\sf vac}$ up to a
phase factor. Making use of the Jordan-Wigner transformation (Eqs.
\ref{JW}, \ref{JW2}), we can write the operators $U_m$ in the spin
language
\begin{equation}
U_m=e^{i \frac{\pi}{2} \sigma_x^m \prod\limits_{j=1}^{m-1} -\sigma_z^j}.
\end{equation}
The successive application of $N_e$ similar unitary operators will
generate the state $\ket{\phi_\alpha}$ up to an irrelevant global
phase.
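That $U_m \ket{\sf vac}$ is indeed proportional to $c^\dagger_m \ket{\sf vac}$ follows from the anticommutation relations alone: since $(c^{\;}_m)^2=(c^\dagger_m)^2=0$ and $\{c^{\;}_m,c^\dagger_m\}=1$, we have $(c^{\;}_m+c^\dagger_m)^2=1$, so
\begin{equation}
U_m=\cos \frac{\pi}{2} + i \sin \frac{\pi}{2} \, (c^{\;}_m+c^\dagger_m)= i\, (c^{\;}_m+c^\dagger_m) ,
\end{equation}
and, using $c^{\;}_m \ket{\sf vac}=0$, we get $U_m \ket{\sf vac}= i\, c^\dagger_m \ket{\sf vac}$.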
A detailed preparation of the fermionic state $\ket{\psi}=\sum
\limits_{\alpha=1}^L g_\alpha \ \ket{\phi_\alpha}$ can be found in a
previous work \cite{ortiz2001}. The basic idea is to use $L$ extra
(ancilla) qubits, then perform unitary evolutions controlled on the
state of the ancillas, and finally perform a measurement of the
$z$-component of the spin of the ancillas. In this way, the probability
of successful preparation of $\ket{\psi}$ is $1/L$. (We need on the
order of $L$ trials before a successful preparation.)
Another important case is the preparation of a Slater
determinant in a basis different from the one given above
\begin{equation}
\label{slater2}
\ket{\phi_\beta}=\prod\limits_{i=1}^{N_e} d^\dagger_i \ \ket{\sf vac} ,
\end{equation}
where the fermionic operators $d^\dagger_i$'s are related to the operators
$c^\dagger_j$ through the following canonical transformation
\begin{equation}
\label{unitmap}
\overrightarrow{d}^\dagger = e^{iM} \overrightarrow{c}^\dagger
\end{equation}
with
$\overrightarrow{d}^\dagger=(d^\dagger_1,d^\dagger_2,\cdots,d^\dagger_N)$,
$\overrightarrow{c}^\dagger=(c^\dagger_1,c^\dagger_2,\cdots,c^\dagger_N)$,
and $M$ is an $N \times N$ Hermitian matrix. Making use of Thouless's
theorem \cite{blaizot1986}, we observe that one Slater determinant
evolves into the other, $\ket{\phi_\beta}=U\ket{\phi_\alpha}$, where
the unitary operator $U=e^{-i \overrightarrow{c}^\dagger M
\overrightarrow{c}}$ can be written in spin operators using the
Jordan-Wigner transformation and can be decomposed into elementary gates
\cite{somma2002}, as described in section \ref{section2}. Since the
number of gates scales polynomially with the system size, the state
$\ket{\phi_\beta}$ can be efficiently prepared from the state
$\ket{\phi_\alpha}$.
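The content of Thouless's theorem used here can also be illustrated numerically. The
Python sketch below (ours; the sign convention chosen for the exponent is only for
illustration) applies $e^{-i \overrightarrow{c}^\dagger M \overrightarrow{c}}$ to a
Slater determinant and checks that the result is again a Slater determinant by
verifying that its one-body density matrix $\rho_{jk}=\langle c^\dagger_j
c^{\;}_k\rangle$ is idempotent.
\begin{verbatim}
# Illustrative sketch: exp(-i c^dag M c) maps a Slater determinant onto another
# Slater determinant; we check idempotency (rho^2 = rho) of the one-body density matrix.
import numpy as np
from functools import reduce
from scipy.linalg import expm

I2 = np.eye(2); sz = np.diag([1.0, -1.0]); sm = np.array([[0., 0.], [1., 0.]])
kron_all = lambda ops: reduce(np.kron, ops)
def c(j, N):
    return kron_all([-sz] * (j - 1) + [sm] + [I2] * (N - j))

N, Ne = 4, 2
rng = np.random.default_rng(0)
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
M = (A + A.conj().T) / 2                         # Hermitian single-particle matrix

phi = kron_all([np.array([0.0, 1.0])] * N)       # vacuum state
for j in range(1, Ne + 1):                       # |phi_alpha> = c_2^dag c_1^dag |vac>
    phi = c(j, N).conj().T @ phi

cdMc = sum(M[j, k] * c(j + 1, N).conj().T @ c(k + 1, N)
           for j in range(N) for k in range(N))
psi = expm(-1j * cdMc) @ phi                     # U |phi_alpha>

rho = np.array([[np.vdot(psi, c(j + 1, N).conj().T @ c(k + 1, N) @ psi)
                 for k in range(N)] for j in range(N)])
assert np.allclose(rho @ rho, rho)               # still a single Slater determinant
\end{verbatim}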
\subsection{Evolution of the initial state}
\label{section3.2}
The second step in the quantum simulation is the evolution of the
initial state. The unitary evolution operator of a time-independent
Hamiltonian $H$ is $U(t)= e^{iHt}$. In general, $H=K+V$ with $K$
representing the kinetic energy and $V$ the potential energy. Since we
usually have $[K,V] \neq 0$, the decomposition of $U(t)$, written in
the spin language through the Jordan-Wigner transformation (Eqs.
\ref{JW},\ref{JW2}), in terms of elementary gates (one-qubit rotations
and two-qubit interactions), becomes complicated. To avoid this
problem, we instead use a Trotter decomposition, approximating the
evolution over a short time step ($\Delta t=t/{\cal N}$ with $\Delta
t\rightarrow 0$). To order ${\sf O}(\Delta t)$ (first-order
Trotter breakup)
\begin{eqnarray}
\label{trotter}
U(t)&=&\prod\limits_{g=1}^{\cal N} U(\Delta t), \\
U(\Delta t)&=& e^{i H \Delta t}= e^{ i (K+V) \Delta t} \sim e^{i K \Delta t}
e^{i V \Delta t}.
\end{eqnarray}
The potential energy $V$ is usually a sum of commuting diagonal terms,
and the decomposition of $e^{i V \Delta t}$ into elementary gates is
straightforward. However, the kinetic energy $K$ is usually a sum of
noncommuting terms of the form $c^\dagger_i c^{\;}_j + c^\dagger_j
c^{\;}_i$ (bilinear fermionic operators), so we need again to perform a
Trotter approximation of the operator $e^{i K \Delta t}$. As an example
of such a decomposition, we consider a typical term $e^{i (c^\dagger_i
c^{\;}_j +c^\dagger_j c^{\;}_i)\Delta t}$ ($i<j$), which, when mapped onto the
spin language, gives
\begin{equation}
\label{decomp2}
e^{-\frac{i}{2} (\sigma_x^i \sigma_x^j +\sigma_y^i \sigma_y^j)
\prod\limits_{k=
i+1}^{j-1}(-\sigma_z^k) } = e^{-\frac{i}{2}
\sigma_x^i \sigma_x^j
\prod\limits_{k=
i+1}^{j-1}(-\sigma_z^k) } e^{-\frac{i}{2}
\sigma_y^i \sigma_y^j
\prod\limits_{k=
i+1}^{j-1}(-\sigma_z^k) }.
\end{equation}
The decomposition of each term on the right hand side of Eq.
\ref{decomp2} into elementary gates was already described in previous
work \cite{somma2002}. In section \ref{section2} and Fig. 1, we also
showed an example of such a decomposition for $i=1$ and $j=3$. It is
important to mention that the required number of elementary gates
scales polynomially with the length $|j-i|$. Notice that this step is
not necessary for bosonic systems since no string of $\sigma_z^k$
operators is involved (see section \ref{section4}).
The accuracy of this method increases as $\Delta t$ decreases, so we
might require a large number of gates to perform the evolution with
small errors. To overcome this problem, one could use Trotter
approximations of higher order in $\Delta t$ \cite{suzuki1993}.
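To illustrate the trade-off quantitatively, the short Python sketch below (ours, with
generic random Hermitian matrices standing in for $K$ and $V$) compares the first-order
Trotter product with the exact evolution and shows the error decreasing as the number
of time slices ${\cal N}$ grows.
\begin{verbatim}
# Illustrative sketch: error of the first-order Trotter breakup versus number of steps.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

dim = 8
K, V = rand_herm(dim), rand_herm(dim)             # noncommuting "kinetic" and "potential" parts
t = 1.0
exact = expm(1j * (K + V) * t)

for n_steps in (10, 100, 1000):
    dt = t / n_steps
    step = expm(1j * K * dt) @ expm(1j * V * dt)  # first-order Trotter slice
    approx = np.linalg.matrix_power(step, n_steps)
    print(n_steps, np.linalg.norm(approx - exact, 2))   # error shrinks roughly like 1/n_steps
\end{verbatim}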
\subsection{Generalization: simulation of anyonic systems}
\label{section3.3}
The concepts described in sections \ref{section3.1} and
\ref{section3.2} can be easily generalized to more general
particle statistics, namely hard-core anyons. By ``hard-core'' we mean
that at most one particle can occupy a single mode (Pauli's exclusion
principle).
The commutation relations between the anyonic creation and annihilation
operators $a^{\dagger}_i$ and $a_i$, are given by
\begin{eqnarray}
\label{anyoncom1}
[ a^{\;}_i,a^{\;}_j ]_\theta &=&
[ a^\dagger_i,a^\dagger_j ]_\theta=0
\ , \nonumber\\
{[}a^{\;}_i,a^\dagger_j{]}_{-\theta}&=&\delta_{ij} (1-(e^{-i
\theta}+1)n_j) \ , \\
{[} n_i, a^\dagger_j ]&=& \delta_{ij}
a^\dagger_j \ , \nonumber
\end{eqnarray}
($i \leq j$) where $n_j=a^\dagger_j a^{\;}_j$, $[\hat{A},\hat{B}
]_\theta = \hat{A} \hat{B} - e^{i \theta} \hat{B} \hat{A}$, with $0
\leq \theta < 2\pi$ defining the statistical angle. In particular,
$\theta=\pi$ mod($2\pi$) corresponds to canonical spinless fermions,
while $\theta=0$ mod($2\pi$) represents hard-core bosons.
In order to simulate this problem with a QC made of qubits, we need to
apply the following isomorphic and efficient mapping between algebras
\begin{eqnarray}
a^{\dagger}_j &=& \prod\limits_{i<j} [\frac {e^{-i \theta} +1}{2} +
\frac {e^{-i \theta} -1}{2} \sigma_z^i] \ \sigma_+^j , \nonumber \\
a_j &=& \prod\limits_{i<j} [\frac {e^{i \theta} +1}{2} +
\frac {e^{i \theta} -1}{2} \sigma_z^i] \ \sigma_-^j , \\
n_j &=& \frac{1}{2} (1+ \sigma_z^j) ,\nonumber
\end{eqnarray}
where the Pauli operators $\sigma_{\mu}^j$ were defined in section
\ref{section2}, and since they satisfy Eq. \ref{su2}, the
corresponding commutation relations for the anyonic operators (Eqs.
\ref{anyoncom1}) are satisfied, too.
We can now proceed in the same way as in the fermionic case, writing
our anyonic evolution operator in terms of single-qubit rotations and
two-qubit interactions in the spin-1/2 language. As we already
mentioned, anyon statistics have fermion and hard-core boson
statistics as limiting cases. We now relax the hard-core condition on
the bosons.
\section{Simulation of Bosonic systems}
\label{section4}
Quantum computation is based on the manipulation of quantum systems
that possess a finite number of degrees of freedom (e.g., qubits). From
this point of view, the simulation of bosonic systems appears to be
impossible, since the nonexistence of an exclusion principle implies
that the Hilbert space used to represent bosonic quantum states is
infinite-dimensional; that is, there is no limit to the number of
bosons that can occupy a given mode. However, we are sometimes
interested in simulating and studying properties for which the use of
the whole Hilbert space is unnecessary and a finite sub-basis of
states suffices. This is the case for physical systems with
interactions given by the Hamiltonian
\begin{equation}
\label{bosonhamilt}
H= \sum \limits_{i,j=1}^N \alpha_{ij} \ b^{\dagger}_i b^{\;}_j +
\beta_{ij} \ n_i n_j ,
\end{equation}
where the operators $b^{\dagger}_i$ ($b^{\;}_i$) create (destroy) a
boson at site $i$, and $n_i=b^{\dagger}_i b^{\;}_i$ is the number
operator. The space dimension of the lattice is encoded in the
parameters $\alpha_{ij}$ and $\beta_{ij}$. Obviously, the total number
of bosons $N_P$ in the system is conserved, and we restrict ourselves
to work with a finite sub-basis of states, where the dimension depends
on the value of $N_P$.
The respective bosonic commutation relations (in an
infinite-dimensional Hilbert space) are
\begin{equation}
\label{bosoncom}
[b_i,b_j]=0 , [b_i,b^{\dagger}_j]=\delta_{ij}.
\end{equation}
However, in a finite basis of states represented by $\{
\ket{n_1,n_2,\cdots,n_N }: n_i=0,\cdots, N_P \},$ where $N_P$ is
the maximum number of bosons per site, the operators $b^{\dagger}_i$
can have the following matrix representation
\begin{equation}
\label{bosonprod}
\bar{b}^{\dagger}_i =\one \otimes \cdots \otimes \one \otimes
\underbrace{\hat {b}^{\dagger}}_{i^{th}\mbox{ factor}} \otimes
\one \otimes \cdots \otimes \one \\
\end{equation}
where $\otimes$ indicates the usual tensorial product between matrices,
and the $(N_P+1) \times (N_P+1)$ dimensional matrices $\one$ and
$\hat{b}^{\dagger}$ are
\begin{equation}
\label{bosonrep}
\one =\pmatrix {1&0&0&\cdots&0 \cr 0&1&0&\cdots&0\cr 0&0&1&\cdots&0 \cr
\vdots&\vdots&\vdots&\cdots &\vdots\cr 0&0&0&\cdots&1}
\mbox{ , }
\hat{b}^{\dagger} = \pmatrix{0 & 0 & 0 &\cdots &0 & 0 \cr 1 & 0 & 0 &
\cdots & 0& 0 \cr 0& \sqrt{2} & 0 &\cdots &0 & 0 \cr \vdots & \vdots & \vdots
&\cdots &\vdots &\vdots \cr 0 & 0 & 0 &\cdots &\sqrt{N_P} & 0}.
\end{equation}
It is important to note that in this finite basis, the commutation
relations of the bosons $\bar{b}^{\dagger}_i$ differ from the standard
bosonic ones (Eq. \ref{bosoncom}) \cite{review}
\begin{equation}
\label{bosoncom2}
[\bar{b}^{\;}_i,\bar{b}^{\;}_j]=0 , \mbox{ }
[\bar{b}^{\;}_i,\bar{b}^{\dagger}_j]=\delta_{ij} \left[ 1-
\frac{N_P+1}{N_P!} (\bar{b}^{\dagger}_i)^{N_P}(\bar{b}^{\;}_i)^{N_P}
\right] ,
\end{equation}
and clearly $(\bar{b}^{\dagger}_i)^{N_P+1}=0$.
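These relations are easy to verify for a single mode. The following Python sketch
(ours) builds the $(N_P+1)\times(N_P+1)$ matrix of Eq. \ref{bosonrep} and checks the
modified commutator of Eq. \ref{bosoncom2} together with
$(\bar{b}^{\dagger})^{N_P+1}=0$.
\begin{verbatim}
# Illustrative sketch: truncated boson operators and their modified commutation relation.
import numpy as np
from math import factorial

NP = 3                                                      # maximum occupation per mode
bdag = np.diag(np.sqrt(np.arange(1.0, NP + 1)), k=-1)       # matrix of Eq. (bosonrep)
b = bdag.conj().T

comm = b @ bdag - bdag @ b
rhs = np.eye(NP + 1) - (NP + 1) / factorial(NP) * \
      np.linalg.matrix_power(bdag, NP) @ np.linalg.matrix_power(b, NP)
assert np.allclose(comm, rhs)                               # Eq. (bosoncom2)
assert np.allclose(np.linalg.matrix_power(bdag, NP + 1), 0) # (b^dag)^{NP+1} = 0
\end{verbatim}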
As we mentioned in the Introduction, our idea is to simulate any
physical system in a QC made of qubits. For this purpose, we need to
map the bosonic algebra into the spin-1/2 language. However, since Eqs.
\ref{bosoncom2} imply that the linear span of the
operators $\bar{b}^{\dagger}_i$ and
$\bar{b}^{\;}_i$ is not closed under the bracket (commutator),
a direct mapping between the bosonic algebra
and the spin-1/2 algebra (such as the case of the Jordan-Wigner
transformation between the fermionic and spin-1/2 algebra) is not
possible. Therefore, we can instead think of a one-to-one mapping between
bosonic and spin-1/2 quantum states, rather than an isomorphic
mapping between algebras. Let us show a possible mapping of quantum
states.
We start by considering only the $i$-th site in the chain. Since this
site can be occupied by at most $N_P$ bosons, it is possible to
associate an $(N_P+1)$-qubit quantum state with each particle number
state, in the following way
\begin{eqnarray}
\label{bosonmap}
|0\rangle_i &\leftrightarrow& |\uparrow_0 \downarrow_1 \downarrow_2 \cdots
\downarrow_{N_P} \rangle_i \nonumber \\
|1\rangle_i &\leftrightarrow& |\downarrow_0 \uparrow_1 \downarrow_2 \cdots
\downarrow_{N_P} \rangle_i \nonumber \\
|2\rangle_i &\leftrightarrow& |\downarrow_0 \downarrow_1 \uparrow_2 \cdots
\downarrow_{N_P} \rangle_i \\
\vdots && \vdots \nonumber \\
|N_P\rangle_i &\leftrightarrow& |\downarrow_0 \downarrow_1 \downarrow_2
\cdots \uparrow_{N_P} \rangle_i \nonumber
\end{eqnarray}
where $\ket{n}_i$ denotes a quantum state with $n$ bosons in site $i$.
Therefore, we need $N(N_P+1)$ qubits for the simulation (where $N$
is the number of sites). In Fig. 2
we show an example of this mapping for a quantum state with 7 bosons in a
chain of 5 sites.
By definition (see Eqs. \ref{bosonprod}, \ref{bosonrep})
$\bar{b}^{\dagger}_i \ \ket{n}_i= \sqrt{n+1} \ \ket{n+1}_i$, so the
operator
\begin{equation}
\label{bosonmap2}
\bar{b}^{\dagger}_i = \sum \limits
_{n=0}^{N_P-1} \sqrt{n+1} \ \sigma_-^{n,i} \sigma_+^{n+1,i} ,
\end{equation}
where the pair $(n,i)$ indicates the qubit $n$ that represents the
$i$-th site, acts in the $N_P+1$ qubits states of Eqs. \ref{bosonmap}
as $\bar{b}^{\dagger}_i \ket{ \downarrow_0 \cdots \downarrow_{n-1}
\uparrow_n \downarrow_{n+1} \cdots \downarrow_{N_P} }_i = \sqrt{n+1} \
\ket{ \downarrow_0 \cdots \downarrow_n \uparrow_{n+1} \downarrow_{n+2}
\cdots \downarrow_{N_P} }_i $. Then, its matrix representation in this
basis is the same matrix representation of $b^{\dagger}_i$ in the
basis of bosonic states. Similarly, the number operator can be written
\begin{equation}
\label{bosonmap2d}
\bar{n}_i = \sum \limits _{n=0}^{N_P} n \ \frac{\sigma_z^{n,i}+1}{2} ,
\end{equation}
and acts as $\bar{n}_i \ket{ \downarrow_0 \cdots \downarrow_{n-1}
\uparrow_n \downarrow_{n+1} \cdots \downarrow_{N_P} }_i = n \
\ket{ \downarrow_0 \cdots \downarrow_{n-1} \uparrow_n \downarrow_{n+1}
\cdots \downarrow_{N_P} }_i$. Notice that $[\bar{b}^{\dagger}_i,
\sum_{n=0}^{N_P} \sigma_z^{n,i}]=0$, which means that these operators
conserve the total $z$-component of the spin and, thus, always keep
states within the same subspace.
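For a single site, this construction can be checked directly. The Python sketch below
(ours) builds $\bar{b}^{\dagger}_i$ from Pauli matrices acting on the $N_P+1$ qubits of
Eq. \ref{bosonmap} and verifies that it acts on the encoded number states exactly as
the truncated matrix of Eq. \ref{bosonrep}, and that it conserves the total
$z$-component of the spin.
\begin{verbatim}
# Illustrative sketch: one-hot qubit encoding of a single bosonic mode (Eqs. bosonmap, bosonmap2).
import numpy as np
from functools import reduce

NP = 3
I2 = np.eye(2)
sp = np.array([[0.0, 1.0], [0.0, 0.0]])    # sigma_+ raises |down> to |up>
sm = sp.T                                  # sigma_-
sz = np.diag([1.0, -1.0])
kron_all = lambda ops: reduce(np.kron, ops)

def on_qubit(op, n):                       # op acting on qubit n of the NP+1 qubits of the site
    return kron_all([I2] * n + [op] + [I2] * (NP - n))

bar_bdag = sum(np.sqrt(n + 1) * on_qubit(sm, n) @ on_qubit(sp, n + 1) for n in range(NP))

def encoded(n):                            # |n>  <->  qubit n up, all other qubits down
    vecs = [np.array([0.0, 1.0])] * (NP + 1)
    vecs[n] = np.array([1.0, 0.0])
    return kron_all(vecs)

for n in range(NP):
    assert np.allclose(bar_bdag @ encoded(n), np.sqrt(n + 1) * encoded(n + 1))

sz_total = sum(on_qubit(sz, n) for n in range(NP + 1))
assert np.allclose(bar_bdag @ sz_total, sz_total @ bar_bdag)   # stays in the one-hot subspace
\end{verbatim}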
We can now write down the Hamiltonian in Eq. \ref{bosonhamilt}
in the spin-1/2 algebra as
\begin{equation}
\label{bosonhamilt2}
H=\sum\limits_{i,j=1}^N \alpha_{ij} \ \bar{b}^{\dagger}_i
\bar{b}^{\;}_j + \beta_{ij} \ \bar{n}_i \bar{n}_j ,
\end{equation}
where the operators $\bar{b}^{\dagger}_i$ ($\bar{b}^{\;}_i$) are given
by Eq. \ref{bosonmap2} and $\bar{n}_i$ by Eq. \ref{bosonmap2d}. In this
way, we are able to obtain physical properties of the bosonic system
(such as the mean value of an observable, the mean value of the
evolution operator, etc.) in a QC made of qubits. It is important to
note that the type of Hamiltonian given by Eq. \ref{bosonhamilt} is
not the only one that can be simulated using the described method.
The only constraint is a fixed maximum number of bosons per site (or
mode).
\begin{figure}
\caption{Mapping of the bosonic state $\ket{\phi_\alpha}$ onto the corresponding qubit state, for the example of 7 bosons on a chain of 5 sites discussed in the text.}
\label{fig:2}
\end{figure}
\subsection{Preparation of the initial state}
\label{section4.1}
As in the fermionic case, the most general bosonic state of an $N$-site
quantum system with $N_P$ bosons can be written as a linear
combination of product states like
\begin{equation}
\label{prodstate}
\ket{\phi_\alpha}= {\sf K} (b^\dagger_1)^{n_1}(b^\dagger_2)^{n_2} \cdots
(b^\dagger_N)^{n_N} \ \ket{\sf vac} ,
\end{equation}
where ${\sf K}$ is a normalization factor, $n_i$ is the number of
bosons at site $i$ ($\sum\limits_{i=1}^N n_i = N_P$), and $\ket{\sf
vac}$ is the boson vacuum state (no particle state). Using the mapping
described in Eq. \ref{bosonmap}, we can write the vacuum state in the
spin language as $\ket{\sf vac}=\ket{ \uparrow_0 \downarrow_1 \cdots
\downarrow_{N_P}}_1 \otimes \cdots \otimes \ket{ \uparrow_0
\downarrow_1 \cdots \downarrow_{N_P}}_N$ and $\ket{\phi_\alpha} =
\ket{ \downarrow_0 \cdots \uparrow_{n_1} \cdots \downarrow_{N_P}}_1
\otimes \cdots \otimes \ket{ \downarrow_0 \cdots \uparrow_{n_N} \cdots
\downarrow_{N_P}}_N$ (see Fig. 2 for an example). Therefore, the
preparation of $\ket{\phi_\alpha}$ in a QC made of qubits is an easy
process: only $N$ spins are flipped from the fully polarized state,
where all spins are pointing down.
The preparation of a bosonic initial state of the form $\ket{\psi}=
\sum\limits_{\alpha=1}^L g_{\alpha} \ \ket{\phi_\alpha}$ is realized
as in the fermionic case. Again, we need to add $L$ ancillas (extra
qubits), perform controlled evolutions on their states, and finally
perform a measurement of a spin component \cite{ortiz2001}.
\subsection{Evolution of the initial state}
\label{section4.2}
The basic idea is to use the first order Trotter approximation (see
the fermionic case) to separate those terms of the Hamiltonian that
belong to the kinetic energy $K$, from the ones that belong to the
potential energy $V$ ($H=K+V$, $[K,V]\neq0$), i.e.,
\begin{equation}
\label{trotter2}
e^{i H \Delta t} \sim e^{i K \Delta t} e^{i V \Delta t}.
\end{equation}
\begin{figure}
\caption{Decomposition of the unitary operator $U(t)=e^{\frac{i}{8} \sigma_x^{0,1}\sigma_y^{1,1}\sigma_y^{0,2}\sigma_x^{1,2} t}$ into elementary gates.}
\label{fig:3}
\end{figure}
In general, $K$ is a sum of non commuting terms of the form
$b^{\dagger}_k b^{\;}_l + b^{\dagger}_l b^{\;}_k$, and we need to
perform another first order Trotter approximation to decompose it into
elementary gates (in the spin language). Then, a typical term $e^{i
(b^{\dagger}_i b^{\;}_j + b^{\dagger}_j b^{\;}_i)t}$, when mapped onto
the spin language (Eq. \ref{bosonmap2}), gives
\begin{eqnarray}
\label{decomp3}
\exp [ \frac{i t}{8} \sum\limits_{n,n'=0}^{N_P-1}
\sqrt{(n+1)(n'+1)} \ [(\sigma_x^{n,i}\sigma_x^{n+1,i}
+\sigma_y^{n,i}\sigma_y^{n+1,i})(\sigma_x^{n',j}\sigma_x^{n'+1,j}
+\sigma_y^{n',j}\sigma_y^{n'+1,j}) \nonumber \\
+ (\sigma_x^{n,i}\sigma_y^{n+1,i}
-\sigma_y^{n,i}\sigma_x^{n+1,i})(\sigma_x^{n',j}\sigma_y^{n'+1,j}
-\sigma_y^{n',j}\sigma_x^{n'+1,j})] ] ,
\end{eqnarray}
where $N_P$ is the number of bosons. The terms in the exponent of Eq.
\ref{decomp3} commute with each other, so the decomposition into
elementary gates becomes straightforward. As an example (see Fig.~3),
we consider a
system of two sites with one boson. We then need $2(1+1)=4$ qubits for
the simulation, and Eq. \ref{bosonmap2} implies that
$\bar{b}^\dagger_1 = \sigma_-^{0,1}\sigma_+^{1,1}$ and
$\bar{b}^\dagger_2 = \sigma_-^{0,2}\sigma_+^{1,2}$. Then, $e^{i
(b^\dagger_i b^{\;}_j + b^\dagger_j b^{\;}_i)t}$ becomes
\begin{eqnarray}
\label{bosexamp}
\exp (\frac{it}{8} \sigma_x^{0,1}\sigma_x^{1,1}\sigma_x^{0,2}\sigma_x^{1,2})
\times
\exp (\frac{it}{8} \sigma_x^{0,1}\sigma_x^{1,1}\sigma_y^{0,2}\sigma_y^{1,2})
\times
\exp (\frac{it}{8} \sigma_y^{0,1}\sigma_y^{1,1}\sigma_x^{0,2}\sigma_x^{1,2})
\times
\exp (\frac{it}{8} \sigma_y^{0,1}\sigma_y^{1,1}\sigma_y^{0,2}\sigma_y^{1,2})
\\
\nonumber
\times
\exp (\frac{it}{8} \sigma_y^{0,1}\sigma_x^{1,1}\sigma_y^{0,2}\sigma_x^{1,2})
\times
\exp (-\frac{it}{8} \sigma_y^{0,1}\sigma_x^{1,1}\sigma_x^{0,2}\sigma_y^{1,2})
\times
\exp (-\frac{it}{8} \sigma_x^{0,1}\sigma_y^{1,1}\sigma_y^{0,2}\sigma_x^{1,2})
\times
\exp (\frac{it}{8} \sigma_x^{0,1}\sigma_y^{1,1}\sigma_x^{0,2}\sigma_y^{1,2}),
\end{eqnarray}
where the decomposition of each of the terms in Eq. \ref{bosexamp} in
elementary gates can be done using the methods described in previous
works \cite{ortiz2001,somma2002}. In particular, in Fig. 3 we show
the decomposition of the term $\exp \left (\frac{i}{8}
\sigma_x^{0,1}\sigma_y^{1,1}\sigma_y^{0,2}\sigma_x^{1,2} t\right )$,
where the qubits were relabeled as $(n,i) \equiv n+2i -1$ (e.g.,
$(0,1)\rightarrow 1$).
On the other hand, it is important to mention that the number of
operations involved in the decomposition is not related to the distance
between the sites $i$ and $j$, unlike in the fermionic case.
\section{Measurement: Correlation functions and Energy spectra}
\label{section5}
In previous work \cite{ortiz2001,somma2002} we introduced an efficient
algorithm for the measurement of correlation functions in quantum
systems. The idea is to make an indirect measurement: we
prepare an ancilla qubit (extra qubit) in a given initial state, let it
interact with the system whose properties one wants to measure, and
finally measure some observable of the ancilla to obtain
information about the system. In particular, we may be interested in
the measurement of dynamical correlation functions of the form
\begin{equation}
\label{greenfunction}
G(t)= \langle \psi | T^{\dagger} A^\dagger_i T B_j \psi \rangle
\end{equation}
where $A_i$ and $B_j$ are unitary operators (any operator can be
decomposed in a unitary operator basis as $A=\sum\limits_i \alpha_i
A_i$, $B=\sum\limits_j \beta_j B_j$), $T=e^{-iHt}$ is the time
evolution operator of a time-independent Hamiltonian $H$, and
$\ket{\psi}$ is the state of the system whose correlations one wants to
determine. If we were interested in the evaluation of spatial
correlation functions, we would replace the evolution operator $T$ by
the space translation operator. In Fig. 4 we show the quantum algorithm
(quantum network) for the evaluation of $G(t)$. As explained before
\cite{ortiz2001,somma2002}, the initial state (ancilla plus system) has
to be prepared in the quantum state $\ket{+}_{\sf a} \otimes
\ket{\psi}$ (where ${\sf a}$ denotes the ancilla qubit and
$\ket{+}=\frac{\ket{0}+\ket{1}}{\sqrt{2}}$). Additionally, we have to
perform an evolution (unitary operation) in the following three steps:
i) a controlled evolution in the state $\ket{1}$ of the ancilla
$\mbox{C-B}= \ket{0} \langle 0 | \otimes I + \ket{1} \langle 1 |
\otimes B_j$, ii) a time evolution $T$, and iii) a controlled evolution
in the state $\ket{0}$ of the ancilla $\mbox{C-A}= \ket{0} \langle 0 |
\otimes A_i + \ket{1} \langle 1 | \otimes I$. Finally we measure the
observable $\langle 2\sigma_+^{\sf a} \rangle= \langle \sigma_x^{\sf
a} +i \sigma_y^{\sf a}\rangle =G(t)$.
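The following Python sketch (ours, with random unitaries $A_i$, $B_j$ and a random
Hamiltonian standing in for the operators of an actual simulation) mimics the three
steps above on a small Hilbert space and confirms that the resulting ancilla coherence
$\langle 2\sigma_+^{\sf a}\rangle$ reproduces $G(t)$.
\begin{verbatim}
# Illustrative sketch of the network of Fig. 4: C-B, then T, then C-A, then <2 sigma_+^a>.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
dim = 6                                                   # toy system dimension
def rand_unitary(d):
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q
X = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (X + X.conj().T) / 2
A, B = rand_unitary(dim), rand_unitary(dim)
T = expm(-1j * H * 0.7)                                   # time evolution operator
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim); psi /= np.linalg.norm(psi)

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
P0, P1 = np.outer(ket0, ket0), np.outer(ket1, ket1)
I_sys = np.eye(dim)

state = np.kron((ket0 + ket1) / np.sqrt(2), psi)          # |+>_a (x) |psi>
state = (np.kron(P0, I_sys) + np.kron(P1, B)) @ state     # C-B (controlled on |1>)
state = np.kron(np.eye(2), T) @ state                     # time evolution of the system
state = (np.kron(P0, A) + np.kron(P1, I_sys)) @ state     # C-A (controlled on |0>)

two_sigma_plus = 2 * np.outer(ket0, ket1)                 # sigma_x + i sigma_y on the ancilla
measured = np.vdot(state, np.kron(two_sigma_plus, I_sys) @ state)
expected = np.vdot(psi, T.conj().T @ A.conj().T @ T @ B @ psi)   # G(t)
assert np.isclose(measured, expected)
\end{verbatim}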
\vspace*{-0.8cm}
\begin{figure}
\caption{Quantum network for the evaluation of $G(t)= \langle \psi | T^{\dagger} A^\dagger_i T B_j \psi \rangle$.}
\label{fig:4}
\end{figure}
On the other hand, sometimes we are interested in obtaining the
spectrum (eigenvalues) of a given observable $\hat{Q}$ (i.e., a
Hermitian operator). A quantum algorithm (network) for this purpose was
also given in previous work \cite{somma2002}. Again, the basic idea is
to perform an indirect measurement using an extra qubit (see Fig. 5).
Basically, we prepare the initial state (ancilla plus system)
$\ket{+}_{\sf a} \otimes \ket{\phi}$, then apply the evolution $e^{i
\hat{Q} \sigma_z^{\sf a} \frac{t}{2}}$, and finally measure the
observable $\langle 2 \sigma_+^{\sf a}(t) \rangle = \langle \phi |
e^{-i \hat{Q} t} \phi \rangle$. Since the initial state of the system
can be written as a linear combination of eigenstates of $\hat{Q}$,
$\ket{\phi}=\sum \limits_{n=0}^L \gamma_n \ \ket{\psi_n}$, where
$\gamma_n$ are complex coefficients and $\ket{\psi_n}$ are eigenstates
of $\hat{Q}$ with eigenvalue $\lambda_n$, the classical Fourier
transform applied to the function of time $\langle 2\sigma_+^{\sf a}(t)
\rangle$ gives us $\lambda_n$
\begin{equation}
\label{spectrum}
\hat{F}(\lambda) = \sum\limits_{n=0}^L 2 \pi |\gamma_n|^2 \delta(\lambda -
\lambda_n) .
\end{equation}
Without loss of generality, we can choose $\hat{Q}=H$, with $H$ some
particular Hamiltonian.
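The identity underlying this spectral algorithm is easy to verify classically for small
matrices. The Python sketch below (ours, with a random Hermitian matrix in place of
$\hat{Q}$) checks that the measured signal $\langle\phi|e^{-i\hat{Q}t}|\phi\rangle$ is
exactly the weighted sum of oscillations $\sum_n|\gamma_n|^2 e^{-i\lambda_n t}$, whose
Fourier transform is the comb of Eq. \ref{spectrum}.
\begin{verbatim}
# Illustrative sketch: the time signal <phi|exp(-iQt)|phi> is a sum of oscillations
# at the eigenvalues lambda_n with weights |gamma_n|^2, hence its Fourier transform
# is peaked at the spectrum of Q.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
dim = 5
X = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
Q = (X + X.conj().T) / 2
evals, evecs = np.linalg.eigh(Q)

phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
phi /= np.linalg.norm(phi)
gammas = evecs.conj().T @ phi                              # phi = sum_n gamma_n |psi_n>

times = np.linspace(0.0, 50.0, 501)
signal = np.array([np.vdot(phi, expm(-1j * Q * t) @ phi) for t in times])
spectral_sum = (np.abs(gammas) ** 2 * np.exp(-1j * np.outer(times, evals))).sum(axis=1)
assert np.allclose(signal, spectral_sum)
\end{verbatim}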
It is important to note that in order to obtain the different
eigenvalues of $\hat{Q}$, the overlap between the initial state and the
eigenstates of $\hat{Q}$ must be different from zero. One can use
different mean-field solutions of $\hat{Q}$ as initial states
$\ket{\phi}$ depending on the part of the spectrum one wants to
determine with higher accuracy.
\vspace*{-1.3cm}
\begin{figure}
\caption{Quantum network for the evaluation of the spectrum of an
observable $\hat{Q}$.}
\label{fig:5}
\end{figure}
\section{Algorithm efficiency and errors}
\label{section6}
An algorithm is considered efficient if the number of operations
involved scales polynomially with the system size, and if the effort
required to reduce the error $\epsilon$ in the measurement of a relevant
property scales polynomially with $1/\epsilon$.
While the evolution step involves a number of unitary operations that
scales polynomially with the system size (as is the case for the
Trotter approximation) whenever the Hamiltonian $H$ is physical (e.g.,
a sum of a number of terms that scales polynomially with the system
size), the preparation of the initial state could be inefficient. Such
inefficiency would arise, for example, if the state $\ket{\psi}$
defined in Eq. \ref{slater1} or Eq. \ref{prodstate} is a linear
combination of an exponential number of states ($L\sim x^N$, with $N$
the number of sites in the system and $x$ a positive number). However,
if we assume that $\ket{\psi}$ is a finite combination of states ($L$
scales polynomially with $N$), its preparation can be done
efficiently. (Any (Perelomov-Gilmore) generalized coherent state can be
prepared in a number of steps that scales polynomially with the
number of generators of the respective algebra.)
On the other hand, the measurement
process described in section \ref{section5} is always an efficient
step, since it only involves the measurement of the spin of one qubit,
regardless of the number of qubits or sites $N$ of the quantum system.
Errors $\epsilon$ come from gate imperfections, the use of the Trotter
approximation in the evolution operator, and the statistics in
measuring the spin of the ancilla qubit (sections \ref{section3.2},
\ref{section4.2}, and \ref{section5}). A precise description and study
of the error sources can be found in previous work \cite{ortiz2001}.
The result is that the algorithms described here, for the simulation of
physical systems and processes, are efficient if the preparation of the
initial state is efficient, too.
\section{conclusions}
\label{section7}
We studied the implementation of quantum algorithms for the simulation
of an arbitrary quantum physical system on a QC made of qubits, making
a distinction between systems that are governed by Pauli's exclusion
principle (fermions, hard-core bosons, anyons, spins, etc.), and
systems that are not (e.g., canonical bosons). For the first class of
quantum systems, we showed that a mapping between the corresponding
algebra of operators and the spin-1/2 algebra exists, since both have a
finite-dimensional representation. On the other hand, the operator
representation of quantum systems that are not governed by an
exclusion principle is infinite-dimensional, and an isomorphic mapping
to the spin-1/2 algebra is not possible. However, one can work with a
finite set of quantum states, setting a constraint, such as fixing the
number of bosons in the system. Then, the representation of bosonic
operators becomes finite-dimensional, and we showed that we can write
down bosonic operators in the spin-1/2 language (Eq. \ref{bosonmap2}),
mapping bosonic states to spin-1/2 states (Eq. \ref{bosonmap}).
We also showed how to perform quantum simulations in a QC made of
qubits (quantum networks), giving algorithms for the preparation of
the initial state, the evolution, and the measurement of a relevant
physical property, where in the most general case the unitary operations
have to be approximated (sections \ref{section3.2},\ref{section4.2}).
The mappings explained are efficient in the sense that we can perform
them in a number of operations that scales polynomially with the system
size. This implies that the evaluation of some correlation functions in
quantum states that can be prepared efficiently is also efficient,
showing an exponential speed-up of these algorithms with respect to
their classical simulation. However, these mappings are insufficient to
establish that quantum networks can simulate any physical problem
efficiently. As we mentioned in the introduction,
this is the case for the determination of the spectrum of
the Hamiltonian in the two-dimensional Hubbard model \cite{somma2002},
where the signal-to-noise ratio decays exponentially with the system
size.
Finally, the table in Fig. 6 displays the advantages of simulating some
known algorithms with a QC rather than with a CC, concluding that QCs behave
as efficient devices for some quantum simulations.
\begin{figure}
\caption{Quantum vs. classical simulations. Speed-up refers to the gain
in speed of the quantum algorithms compared to the known classical
ones.}
\label{fig:6}
\end{figure}
\end{document}
\begin{document}
\title{
Construction of nearly hyperbolic distance on punctured spheres
}
\author[T. Sugawa]{Toshiyuki Sugawa}
\address{Graduate School of Information Sciences,
Tohoku University, Aoba-ku, Sendai 980-8579, Japan}
\email{[email protected]}
\author[T. Zhang]{Tanran Zhang}
\address{Department of Mathematics, Soochow University, No.1 Shizi Street, Suzhou 215006, China}
\email{[email protected]}
\keywords{hyperbolic metric, punctured sphere}
\subjclass[2010]{Primary 30C35; Secondary 30C55}
\begin{abstract}
We define a distance function on the bordered punctured disk $0<|z|\le 1/e$
in the complex plane, which is comparable with the hyperbolic distance
of the punctured unit disk $0<|z|<1.$
As an application, we will construct a distance function on an $n$-times
punctured sphere which is comparable with the hyperbolic distance.
We also propose a comparable quantity which is not necessarily a distance function on the
punctured sphere but easier to compute.
\end{abstract}
\thanks{
The authors were supported in part by JSPS Grant-in-Aid for
Scientific Research (B) 22340025.
}
\maketitle
\section{Introduction}
A domain $\Omega$ in the Riemann sphere ${\widehat{\mathbb C}}={\mathbb C}\cup\{\infty\}$ has
the upper half-plane ${\mathbb H}=\{\zeta\in{\mathbb C}: \,{\operatorname{Im}\,}\zeta>0\}$
as its holomorphic universal covering space
precisely if the complement ${\widehat{\mathbb C}}\setminus\Omega$ contains at least three points.
Such a domain is called {\it hyperbolic}.
Since the Poincar\'e metric $|d\zeta|/(2\,{\operatorname{Im}\,} \zeta)$ is invariant under the
pullback by analytic automorphisms of ${\mathbb H}$ (in other words,
under the action of ${\operatorname{PSL}}(2,{\mathbb R})$), it descends to a metric on $\Omega,$
called the {\it hyperbolic metric} of $\Omega$ and denoted by
$\lambda_\Omega(w)|dw|.$
More explicitly, they are related by the formula
$\lambda_\Omega(p(\zeta))|p'(\zeta)|=1/(2\,{\operatorname{Im}\,} \zeta),$ where
$p:{\mathbb H}\to\Omega$ is a holomorphic universal covering projection
of ${\mathbb H}$ onto $\Omega.$
The quantity $\lambda_\Omega(w)$ is sometimes called the {\it hyperbolic
density} of $\Omega$ and it is independent of the particular choice of
$p$ and $\zeta\in p^{-1}(w).$
Note that $\lambda_\Omega$ has constant Gaussian curvature $-4$ on $\Omega.$
We denote by $h_\Omega(w_1,w_2)$ the distance function induced by
$\lambda_\Omega,$
called the {\it hyperbolic distance} on $\Omega$ and the distance is known to be complete.
That is, $h_\Omega(w_1,w_2)=\inf_\alpha \ell_\Omega(\alpha),$
where the infimum is taken over all rectifiable curves $\alpha$
joining $w_1$ and $w_2$ in $\Omega$ and
$$
\ell_\Omega(\alpha)=\int_\alpha\lambda_\Omega(w)|dw|.
$$
One of the most important properties of the hyperbolic metric is the
{\it principle of hyperbolic metric}, which asserts the monotonicity
$\lambda_\Omega(w)\ge\lambda_{\Omega_0}(w)$ and thus
$h_\Omega(w_1,w_2)\ge h_{\Omega_0}(w_1,w_2)$
for $w, w_1, w_2\in\Omega\subset\Omega_0$
({\it cf.}~\cite[III.3.6]{Nev:Ein}).
For basic facts about hyperbolic metrics, we refer to recent textbooks
\cite{KL:hg} by Keen and Lakic or a survey paper \cite{BM07} by Beardon and Minda
as well as classical book \cite{Ahlfors:conf} by Ahlfors.
For instance,
\begin{equation}\label{eq:hdist}
h_{\mathbb H}(\zeta_1, \zeta_2)={\operatorname{arth}\,}\left|\frac{\zeta_1-\zeta_2}{\zeta_1-\overline{\zeta_2}}\right|,
\quad \zeta_1, \zeta_2\in{\mathbb H},
\end{equation}
where ${\operatorname{arth}\,} x=\frac12\log\frac{1+x}{1-x}$ for $0\le x<1$
(see \cite[Chap.~7]{Beardon:disc} for instance).
Therefore, $h_\Omega(w_1,w_2)$ can be expressed via the universal covering
projection $w=p(\zeta)$ as follows (see \cite[Theorem 7.1.3]{KL:hg}):
$$
h_\Omega(w_1,w_2)=\min_{\zeta_2\in p^{-1}(w_2)} h_{\mathbb H}(\zeta_1,\zeta_2)
=\min_{\gamma\in\Gamma} h_{\mathbb H}(\zeta_1,\gamma(\zeta_2)),
$$
where $\zeta_1\in p^{-1}(w_1), \zeta_2\in p^{-1}(w_2)$ and $\Gamma=\{\gamma\in
{\operatorname{PSL}}(2,{\mathbb R}): p\circ\gamma=p\}$ is the covering transformation group
of $p:{\mathbb H}\to\Omega$ (also called a Fuchsian model of $\Omega$).
It is, however, difficult to obtain an explicit expression
of $\lambda_\Omega(w)$ or $h_\Omega(w_1,w_2)$ for a general hyperbolic domain $\Omega$
because a concrete form of its universal covering projection is not known
except for several special domains.
(It is not easy even for a simply connected domain because it is hard to
find its Riemann mapping function in general.)
Therefore, as a second choice, estimates of $\lambda_\Omega(w)$ are useful.
Indeed, Beardon and Pommerenke \cite{BP78} supplied a general but concrete bound
for $\lambda_\Omega(w).$
However, it is still difficult to estimate the induced hyperbolic distance $h_\Omega(w_1,w_2)$
due to complexity of the fundamental group of $\Omega.$
On the other hand, an explicit bound for the hyperbolic distance may be of importance.
For instance, if $f:\Omega\to X$ is a holomorphic map between hyperbolic domains,
then the principle of hyperbolic metric yields the inequality
\begin{equation}\label{eq:pr1}
h_X(f(z),f(z_0))\le h_\Omega(z,z_0),\quad z_0,z\in\Omega,
\end{equation}
which contains some important information about the function $f.$
As a maximal hyperbolic plane domain, the thrice-punctured sphere
(the twice-punctured plane) ${\mathbb C}_{0,1}:={\mathbb C}\setminus\{0,1\}$ is particularly important.
Letting $X={\mathbb C}_{0,1},$ the inequality \eqref{eq:pr1}
leads to Schottky's theorem when $\Omega$ is the unit disk ({\it cf.}~\cite{Hempel79}, \cite{Hempel80}),
and it leads to the big Picard theorem
when $\Omega$ is a punctured disk ({\it cf.}~\cite[\S 1-9]{Ahlfors:conf}).
Though the hyperbolic density of ${\mathbb C}_{0,1}$ was essentially computed by
Agard \cite{Agard68} (see also \cite{SV05}) and the holomorphic universal
covering projection of ${\mathbb H}$ onto ${\mathbb C}_{0,1}$ is known as an elliptic modular function
(see Section 2 below and, e.g., \cite[p.~279]{Ahlfors:ca} or \cite[Chap.~VI]{Nehari:conf}),
we do not have any convenient expression of the hyperbolic distance $h_{{\mathbb C}_{0,1}}(w_1,w_2)$ except for
special configurations of the points $w_1,w_2$ (see, for instance, \cite[Lemma 3.10]{SV01},
\cite[Lemma 5.1]{SV05}).
In this paper, we consider punctured spheres $X={\widehat{\mathbb C}}\setminus\{a_1,\dots,a_n\}.$
This is still general enough in the sense that for any hyperbolic domain
$\Omega\subset{\widehat{\mathbb C}},$ there exists a sequence of punctured spheres $X_k\supset\Omega~(k=1,2,\dots)$
such that $\lambda_{X_k}\to \lambda_\Omega$ locally uniformly on $\Omega$ as $k\to\infty$
(see \cite[\S 5]{BR86}).
We also note that Rickman \cite{Rickman84} constructed a conformal metric on a
punctured sphere of higher dimensions to show a Picard-Schottky type result for
quasiregular mappings.
Our main purpose of this paper is to give a distance function $d_X(w_1,w_2)$
on the punctured sphere $X$ which can be computed (or estimated) more easily than the hyperbolic
distance $h_X(w_1,w_2)$ but still comparable with it by concrete bounds.
To this end, we first propose a distance function $D(z_1,z_2)$
on $0<|z|\le e^{-1}$ given by the formula
\begin{equation} \label{eq:D}
D(z_1, z_2)=\frac{2\sin(\theta/2)}{\max\{\log (1/|z_1|),\, \log (1/|z_2|)\}}
+\left|\log\log\frac1{|z_2|}-\log\log\frac1{|z_1|}\right|,
\end{equation}
where $\theta=|{\operatorname{arg}\,} (z_2/z_1)| \in [0,\pi]$.
In Section 3, we will show that $D(z_1,z_2)$ is indeed a distance function
and comparable with $h_{{\mathbb D}^*}(z_1,z_2)$ on the set $0<|z|\le e^{-1}.$
Note also that $D(z_1, z_2)=e|z_1-z_2|$ for $|z_1|=|z_2|=e^{-1}.$
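For the reader who wishes to experiment numerically, the following short Python sketch
(not part of our proofs) evaluates $D(z_1,z_2)$ from \eqref{eq:D} and confirms the
boundary identity $D(z_1,z_2)=e|z_1-z_2|$ for randomly chosen points with
$|z_1|=|z_2|=e^{-1}.$
\begin{verbatim}
# Numerical sketch: evaluate D(z1, z2) of (eq:D) and check D = e|z1 - z2| on |z| = 1/e.
import cmath, math, random

def D(z1, z2):
    theta = abs(cmath.phase(z2 / z1))                 # |arg(z2/z1)| in [0, pi]
    t1, t2 = math.log(1 / abs(z1)), math.log(1 / abs(z2))
    return 2 * math.sin(theta / 2) / max(t1, t2) + abs(math.log(t2) - math.log(t1))

random.seed(0)
for _ in range(1000):
    a, b = random.uniform(-math.pi, math.pi), random.uniform(-math.pi, math.pi)
    z1, z2 = cmath.exp(1j * a) / math.e, cmath.exp(1j * b) / math.e
    assert math.isclose(D(z1, z2), math.e * abs(z1 - z2), rel_tol=1e-9)
\end{verbatim}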
As another extremal case, the punctured disk ${\mathbb D}^*$ is also important.
Here and hereafter, we set
${\mathbb D}(a,r)=\{z\in{\mathbb C}: |z-a|<r\}, {\overline{\mathbb D}}(a,r)=\{z\in{\mathbb C}: |z-a|\le r\},
~{\mathbb D}^*(a,r)={\mathbb D}(a,r)\setminus\{a\}$
for $a\in{\mathbb C}$ and $0<r<+\infty$ and
${\mathbb D}={\mathbb D}(0,1)$ and ${\mathbb D}^*={\mathbb D}^*(0,1).$
In this case, a holomorphic universal covering projection $p:{\mathbb H}\to{\mathbb D}^*$ is
given by $z=p(\zeta)=e^{\pi i\zeta}$ and the hyperbolic density is expressed by
\begin{equation*}
\lambda_{\mathbb{D}^*}(z)=\frac{1}{2|z| \log (1/|z|)}.
\end{equation*}
A concrete formula of $h_{{\mathbb D}^*}(z_1,z_2)$ can also be given but its form
is not so convenient ({\it cf.} \eqref{eq:hd} below).
In order to understand the hyperbolic distance $h_X(w_1,w_2)$ when
one of $w_1,w_2$ is close to a puncture,
we should take a careful look at the hyperbolic geodesic nearby the puncture.
In Section 2, we investigate it by making use of an elliptic modular function as
well as the punctured unit disk model.
Section 3 is devoted to the study of the function $D(z_1,z_2).$
In particular, we show that $D$ gives a distance on $0<|z|\le e^{-1}$
and compare with the hyperbolic distances $h_{{\mathbb D}^*}(z_1,z_2)$ of ${\mathbb D}^*$
and $h_{{\mathbb C}_{0,1}}(z_1,z_2)$ of ${\mathbb C}_{0,1}.$
As an application of the function $D(z_1,z_2),$ in Section 4,
we will construct a distance function $d_X(w_1,w_2)$ on an $n$-times
punctured sphere $X$ which is comparable with the hyperbolic distance $h_X(w_1,w_2).$
We will summarise our main results in Theorem \ref{thm:main}.
Unfortunately, $d_X(w_1,w_2)$ is not very easy to compute because we have to
take an infimum in the definition.
In Section 5, we introduce yet another quantity $e_X(w_1,w_2),$
which can be computed without taking an infimum though it is no longer a distance function on $X.$
We will show our main result that $d_X(w_1,w_2)$ and $e_X(w_1,w_2)$ are both comparable
with the hyperbolic distance $h_X(w_1,w_2)$ in a quantitative way in Section 5.
We would finally like to thank Matti Vuorinen for posing, more than ten years ago, the problem of finding a
quantity comparable with the hyperbolic distance on ${\mathbb C}_{0,1}$ and for helpful suggestions.
\section{Hyperbolic geodesics near the puncture}
In order to estimate the hyperbolic distance $h_X(w_1,w_2)$
of a punctured sphere $X={\widehat{\mathbb C}}\setminus\{a_1,\dots,a_n\},$
we have to investigate the behaviour of a hyperbolic geodesic
joining two points near a puncture.
Here and in what follows, a curve $\alpha$ joining $w_1$ and $w_2$
in a hyperbolic domain $\Omega$
is called a {\it hyperbolic geodesic} if $\ell_\Omega(\alpha)\le\ell_\Omega(\beta)$
whenever $\beta$ is a curve joining $w_1$ and $w_2$
which is homotopic to $\alpha$ in $\Omega.$
In particular, $\alpha$ is called {\it shortest} if $\ell_\Omega(\alpha)
=h_\Omega(w_1,w_2).$
Note that the shortest hyperbolic geodesic is not unique in general.
Our basic model for that is the punctured disk ${\mathbb D}^*.$
In this case, we have precise information about the hyperbolic geodesic.
\begin{lem}\label{lem:geo}
Let $z_1, z_2\in{\mathbb D}^*$ with $\eta_1=-(\log|z_1|)/\pi \ge\eta_2=-(\log|z_2|)/\pi.$
Then a shortest hyperbolic geodesic $\begin{eqnarray}ta$ joining $z_1$ and $z_2$ in ${\mathbb D}^*$
is contained in the set $e^{-\pi\delta}|z_1|\le |z|\le |z_2|,$ where
$$
\delta=\sqrt{\eta_1^2+\frac1{4}}-\eta_1
=\frac1{4\big(\eta_1+\sqrt{\eta_1^2+1/4}\big)}.
$$
\end{lem}
\begin{pf}
First note that the function $p(\zeta)=e^{\pi i\zeta}$ is a universal covering
projection of the upper half-plane ${\mathbb H}$ onto ${\mathbb D}^*$ with period $2.$
We may assume that $z_1\in(0,1)$ and $\theta=({\operatorname{arg}\,} z_2)/\pi\in(0,1].$
Then $p(i\eta_1)=z_1,~ p(i\eta_2+\theta)=z_2,$ and
$h_{\mathbb H}(i\eta_1,i\eta_2+\theta)=h_{{\mathbb D}^*}(z_1,z_2).$
Let $\tilde\beta$ be the hyperbolic geodesic joining $\zeta_1=i\eta_1$
and $\zeta_2=i\eta_2+\theta$ in ${\mathbb H}.$
Recall that $\tilde\beta$ is part of the circle orthogonal to the real axis.
If we fix $\eta_1,$ the largest possible imaginary part of $\tilde\beta$ is attained
when $\eta_2=\eta_1$ and $\theta=1.$
Therefore, $\,{\operatorname{Im}\,} \zeta\le \eta_1+\delta$ for $\zeta\in\tilde\beta,$ where
$\delta=\sqrt{\eta_1^2+1/4}-\eta_1.$
Hence, we conclude that $\beta=p(\tilde\beta)$ is contained in the closed annulus
$e^{-\pi(\eta_1+\delta)}=e^{-\pi\delta}|z_1|\le |z|\le |z_2|$ as required.
\end{pf}
By the proof, we observe that the above constant $\delta$ is sharp.
Note here that $\delta$ is decreasing in $\eta_1$ and that $0<\delta<\frac1{8\eta_1}.$
In the above theorem, we see that the subdomain ${\mathbb D}^*(0,\rho)$ of ${\mathbb D}^*$
with $0<\rho<1$ is hyperbolically convex.
This is also true in general.
Indeed, the following result is a special case of Minda's reflection
principle \cite{Minda87}
(apply his Theorem 6 to the case when $\overline R=\overline{\Delta}$).
\begin{lem}\label{lem:Minda}
Let $\Omega$ be a hyperbolic subdomain of ${\mathbb C}$ and let $\Delta$ be
an open disk centered at a point $a\in{\mathbb C}\setminus\Omega.$
Suppose that $I(\Omega\setminus\Delta)\subset\Omega,$ where $I$ denotes the
reflection in the circle $\partial\Delta.$
Then $\Delta\cap\Omega$ is hyperbolically convex in $\Omega.$
\end{lem}
In particular, we have
\begin{cor}\label{cor:Minda}
Let $\Delta$ be an open disk centered at a puncture
$a$ of a hyperbolic punctured sphere $X\subset{\mathbb C}$
with $\Delta^*=\Delta\setminus\{a\}\subset X.$
Then $\Delta^*$ is hyperbolically convex in $X.$
\end{cor}
As another extremal case, we now consider the thrice-punctured sphere
${\mathbb C}_{0,1}.$
It is well known that the elliptic modular function, which is denoted by
$J(\zeta),$ on the upper half-plane ${\mathbb H}=\{\zeta:\,{\operatorname{Im}\,}\zeta>0\}$ serves as
a holomorphic universal covering projection onto ${\mathbb C}_{0,1}.$
The reader can consult \cite[Chap.~VI]{Nehari:conf} for general facts about the function
$J$ and related functions.
The covering transformation group is the modular group $\Gamma(2)$ of level 2;
namely, $\Gamma(2)=\{A\in{\operatorname{SL}}(2,{\mathbb Z}): A\equiv I~{\operatorname{mod}\,} 2\}/\{\pm I\}.$
Since $J$ is periodic with period 2, it factors into $J=Q\circ p,$
where $p(\zeta)=e^{\pi i\zeta}$ as before and $Q$ is an intermediate covering projection
of ${\mathbb D}^*$ onto ${\mathbb C}_{0,1}.$
Since $J(\zeta)\to0$ as $\eta=\,{\operatorname{Im}\,}\zeta\to+\infty,$ the origin is a removable
singularity of $Q(z)$ and indeed the function $Q(z)$ has the following representations
(see \cite[VI.6]{Nehari:conf} or \cite[Theorem 14.2.2]{KL:hg}):
\begin{align*}
Q(z)&=16z\prod_{n=1}^\infty\left(\frac{1+z^{2n}}{1+z^{2n-1}}\right)^8
=16z\left[\frac{\sum_{n=0}^\infty z^{n(n+1)}}{1+2\sum_{n=1}^\infty z^{n^2}}\right]^4
\\
&=16(z-8z^2+44z^3-192z^4+718z^5-\cdots).
\end{align*}
We also remark that the function $Q(z)$ has been recently used to improve
coefficient estimates of univalent harmonic mappings on the unit disk
in Abu Muhanna, Ali and Ponnusamy \cite{AAP17}.
By its form, $Q(z)$ is locally univalent at $z=0.$
In fact, we can show the following.
\begin{lem}\label{lem:univ}
The function $Q(z)$ is univalent on the disk $|z|<e^{-\pi/2}\approx0.20788.$
The radius $e^{-\pi/2}$ is best possible.
\end{lem}
\begin{pf}
Suppose that $Q(z_1)=Q(z_2)$ for a pair of points $z_1,z_2$ in the disk $|z|<e^{-\pi/2}.$
Take the unique point $\zeta_l=\xi_l+i\eta_l\in p^{-1}(z_l)\subset{\mathbb H}$ such that $-1<\xi_l\le 1$
and $\eta_l>1/2$ for $l=1,2.$
We now recall the well-known fact that $\omega=\{\zeta\in{\mathbb H}: -1<{\operatorname{Re}\,}\zeta\le1,
|\zeta+1/2|<1/2, |\zeta-1/2|\le1/2\}$ is a fundamental set for the modular group
$\Gamma(2)$ of level 2 (see \cite[Chap.7, \S 3.4]{Ahlfors:ca}).
In other words, $J(\omega)={\mathbb C}_{0,1}$ and no pairs of distinct points in $\omega$
have the same image under the mapping $J.$
We now note that $\zeta_l\in \omega$ for $l=1,2.$
Since $J(\zeta_1)=Q(z_1)=Q(z_2)=J(\zeta_2),$ we conclude that $\zeta_1=\zeta_2.$
Hence, $z_1=z_2,$ which implies that $Q(z)$ is univalent on $|z|<e^{-\pi/2}.$
To see sharpness, we consider
the pair of points $\zeta_1=(1+i)/2$ and $\zeta_2=(-1+i)/2.$
Since $\zeta_2=T(\zeta_1)$ for the modular transformation $T(\zeta)=\zeta/(-2\zeta+1)$ in $\Gamma(2),$
we have $J(\zeta_1)=J(\zeta_2).$
Thus $Q(ie^{-\pi/2})=Q(-ie^{-\pi/2}).$
\end{pf}
We remark that the formula $Q(ie^{-\pi/2})=2$ is valid.
Indeed, by recalling the functional identity $Q(-z)=Q(z)/(Q(z)-1)$
(see \cite[(92) in p.~328]{Nehari:conf}),
we have $Q(ie^{-\pi/2})=Q(-ie^{-\pi/2})
=Q(ie^{-\pi/2})/(Q(ie^{-\pi/2})-1)$, which implies $Q(ie^{-\pi/2})=2.$
We now recall the growth theorem for a normalized univalent analytic function $f(z)=z+a_2z^2+\dots$
on $|z|<1$ (see \cite[\S 5-1]{Ahlfors:conf} for instance):
$$
\frac{r}{(1+r)^2}\le |f(z)|\le \frac{r}{(1-r)^2},\quad |z|<1.
$$
Applying this result to $f(z)=e^{\pi/2}Q(ze^{-\pi/2})/16,$ we obtain the following estimates:
\begin{equation}\label{eq:Q}
\frac{16r}{(1+re^{\pi/2})^2}\le |Q(z)|\le\frac{16r}{(1-re^{\pi/2})^2},
\quad r=|z|<e^{-\pi/2}.
\end{equation}
Observe that the lower bound in \eqref{eq:Q} tends to $4e^{-\pi/2}$
as $r\to e^{-\pi/2}.$
Hence ${\mathbb D}(0,4e^{-\pi/2})\subset Q({\mathbb D}(0,e^{-\pi/2})).$
Note that $4e^{-\pi/2}\approx 0.8315.$
\begin{lem}\label{lem:N}
Let $0<\rho\le 4e^{-\pi/2}.$
For $w_1, w_2\in {\mathbb C}_{0,1}$ with $|w_1|, |w_2|\le\rho,$
a shortest hyperbolic geodesic $\alpha$ joining
$w_1,w_2$ in ${\mathbb C}_{0,1}$ is contained in the closed annulus
$$
\min\{|w_1|,|w_2|\}e^{-K}\le |w|\le \max\{|w_1|,|w_2|\},
$$
where $K=K(\rho)>0$ is a constant depending only on $\rho.$
\end{lem}
\begin{pf}
The right-hand inequality follows from Corollary \ref{cor:Minda}.
We now show the left-hand inequality.
Take $0<r\le e^{-\pi/2}$ so that $16r/(1+re^{\pi/2})^2=\rho$ and put
$\mu=re^{\pi/2}\le 1.$
Note that $r$ and $\mu$ can be computed by the formula
$$
\mu=re^{\pi/2}
=\rho^{-1} e^{-\pi/2}\big(8-\rho e^{\pi/2}-4\sqrt{4-\rho e^{\pi/2}}\big).
$$
By \eqref{eq:Q}, we can choose
$z_j\in {\mathbb D}^*$ with $|z_j|\le r$ and $Q(z_j)=w_j$ for $j=1,2$
in such a way that a lift $\beta$ of the curve $\alpha$ joins
$z_1$ and $z_2$ in ${\mathbb D}^*$ via the covering map $Q.$
We may assume that $|z_1|\le |z_2|.$
It follows from Lemma \ref{lem:geo} that $\beta$ is contained in
the annulus $|z_1|e^{-\pi\delta}\le |z|\le |z_2|,$ where $\delta$
is given in the lemma with $\eta_1=-(\log r)/\pi \le -(\log|z_1|)/\pi.$
Hence, by \eqref{eq:Q}, the curve $\alpha=Q(\beta)$ is contained
in the annulus
$$
\frac{16|z_1|e^{-\pi\delta}}{(1+|z_1|e^{\pi/2-\pi\delta})^2}\le
|w|\le\frac{16|z_2|}{(1-|z_2|e^{\pi/2})^2}.
$$
By \eqref{eq:Q} and $|z_1|e^{\pi/2}\le\mu,$ we see that
$$
|w_1|\le \frac{16|z_1|}{(1-|z_1|e^{\pi/2})^2}
=\frac{16|z_1|e^{-\pi\delta}}{(1+|z_1|e^{\pi/2-\pi\delta})^2} \cdot
\frac{e^{\pi\delta}(1+|z_1|e^{\pi/2-\pi\delta})^2}{(1-|z_1|e^{\pi/2})^2}
\le \frac{16|z_1|e^{-\pi\delta}}{(1+|z_1|e^{\pi/2-\pi\delta})^2} e^K,
$$
where
\begin{equation}\label{eq:K}
K=K(\rho)=\pi\delta+2\log\frac{1+\mu e^{-\pi\delta}}{1-\mu}.
\end{equation}
Thus we have seen that $\alpha$ is contained in the set $|w_1|e^{-K}\le |w|.$
\end{pf}
As an immediate consequence, we obtain the following result.
\begin{cor}\label{cor:rho}
Let $0<\sigma\le\rho\le 4e^{-\pi/2}.$
A shortest hyperbolic geodesic $\alpha$ joining
$z_1,z_2$ in ${\mathbb C}_{0,1}$ with $|z_1|,|z_2|\ge\sigma$ does not intersect the disk
$\Delta={\mathbb D}(0,\sigma e^{-K}),$ where $K=K(\rho)>0$ is the constant in Lemma \ref{lem:N}.
\end{cor}
\begin{pf}
Suppose that $\alpha$ intersects the disk $\Delta.$
Then we can choose a subarc $\alpha_0$ of $\alpha$ such that
$\alpha_0$ intersects $\Delta$ and that
both endpoints $w_1,w_2$ of $\alpha_0$ have modulus $\sigma.$
Applying Lemma \ref{lem:N} to $\alpha_0$ yields a contradiction.
\end{pf}
When $\rho=\rho_0=e^{-1},$ we compute $r_0=e^{1-\pi}\big(8-e^{\pi/2-1}-4\sqrt{4-e^{\pi/2-1}}\big)
\approx 0.0301441$ and $\mu_0=r_0 e^{\pi/2}\approx 0.145007.$
Also, we have $\delta_0=\sqrt{\eta_0^2+1/4}-\eta_0\approx 0.107007$ with
$\eta_0=-(\log r_0)/\pi\approx 1.11465.$
Then $K_0=K(e^{-1})=\pi\delta_0+2\log[(1+\mu_0e^{-\pi\delta_0})/(1-\mu_0)]\approx 0.846666$
and $e^{K_0}\approx 2.33186.$
Thus we obtain the following statement as a special case of Corollary \ref{cor:rho}.
\begin{cor}\label{cor:N}
Let $z_1, z_2$ be two points in ${\mathbb C}_{0,1}$ with $|z_1|, |z_2|\ge\sigma$ for
some number $\sigma\in(0, e^{-1}].$
Then, a shortest hyperbolic geodesic $\alpha$ joining
$z_1,z_2$ in ${\mathbb C}_{0,1}$ does not intersect the disk
$|z|<\sigma e^{-K_0},$ where $K_0=K(e^{-1})\approx 0.846666.$
\end{cor}
\section{Basic properties of the distance function $D(w_1,w_2)$}
First we show the following result in the present section.
Let $E^*=\{z: 0<|z|\le e^{-1}\}.$
\begin{lem} \label{D distance theorem}
The function $D(z_1, z_2)$ given by \eqref{eq:D} is a distance function
on the set $E^*.$
\end{lem}
\begin{pf}
First we note that $D(z_1, z_2)$ can also be described by
$$
D(z_1,z_2)=\frac{|\zeta_1-\zeta_2|}{\max\{\tau_1, \tau_2\}}
+|\log\tau_1-\log\tau_2|,
$$
where $\tau_j=\log(1/|z_j|)$ and $\zeta_j=z_j/|z_j|$ for $j=1,2.$
It is easy to see that $D(z_1,z_2)=D(z_2,z_1)$
and that $D(z_1,z_2) \geq 0,$ where equality holds if and only if $z_1=z_2$.
It remains to verify the triangle inequality.
Our task is to show that the inequality
$\Delta:=D(z_1,z)+D(z,z_2)-D(z_1, z_2)\ge0$ holds for $z\in E^*.$
Set $\tau=\log(1/|z|)$ and $\zeta=z/|z|.$
We may assume that $\tau_2\le\tau_1.$
Then
\begin{equation*}
D(z_1, z_2)
=\frac{|\zeta_1-\zeta_2|}{\tau_1}
+\log\tau_1-\log\tau_2.
\end{equation*}
First we assume that $\tau\le\tau_1.$
Since $\tau_1\ge\max\{\tau, \tau_2\},$ we have
\begin{align*}
D(z_1,z_2)&\le\frac{|\zeta_1-\zeta|+|\zeta-\zeta_2|}{\tau_1}
+|\log\tau_1-\log\tau|+|\log\tau-\log\tau_2| \\
&= D(z_1,z)+D(z,z_2).
\end{align*}
Secondly, we assume that $\tau>\tau_1.$
Then, in a similar manner, we have
\begin{align*}
\Delta&\ge \left(\frac1\tau-\frac1{\tau_1}\right)
|\zeta_1-\zeta_2|+2\log\tau-2\log\tau_1 \\
&\ge \frac2\tau-\frac2{\tau_1}+2\log\tau-2\log\tau_1.
\end{align*}
Since the function $f(x)=2/x+2\log x$ is increasing in $1\le x<+\infty,$
we have $\Delta\ge0$ as required.
\end{pf}
Next we compare $D(z_1,z_2)$ with the hyperbolic distance $h_{{\mathbb D}^*}(z_1,z_2)$
of ${\mathbb D}^*$ on the set $E^*.$
\begin{thm} \label{thm:ct}
The distance function $D(z_1, z_2)$ given by \eqref{eq:D} satisfies
\begin{equation}\label{eq:cp}
\frac{4}{\pi}\,h_{{\mathbb D}^*}(z_1,z_2) \leq D(z_1,z_2) \leq
M_0\, h_{{\mathbb C}_{0,1}}(z_1,z_2)
\end{equation}
for $0<|z_1|, |z_2|\le e^{-1}.$
Here, ${\mathbb C}_{0,1}={\mathbb C}\setminus\{0,1\},$
$M_0$ is a positive constant with $M_0<24$ and the constant $4/\pi$ is sharp.
\end{thm}
We remark that $4/\pi\approx 1.27324.$
\begin{pf}
We consider the quantity
$$
D'(z_1, z_2)=\frac{\theta}{\max\{\log (1/|z_1|),\, \log (1/|z_2|)\}}
+\left|\log\log\frac1{|z_2|}-\log\log\frac1{|z_1|}\right|
$$
for $z_1, z_2\in {\mathbb D}^*,$ where $\theta=|{\operatorname{arg}\,}(z_2/z_1)|\in[0,\pi].$
Since $2x/\pi\le\sin x\le x$ for $0\le x\le\pi/2,$ we can easily obtain
$$
\frac{2}{\pi} D'(z_1,z_2) \leq D(z_1,z_2) \leq D'(z_1,z_2)
$$
for $z_1, z_2 \in E^*$.
We show now the inequality
\begin{equation}\label{eq:L}
2h_{{\mathbb D}^*}(z_1,z_2) \leq D'(z_1,z_2)
\end{equation}
for $z_1, z_2\in E^*.$
Combining these two inequalities, we obtain the first inequality
in \eqref{eq:cp}.
Without loss of generality, we may assume that
${\operatorname{arg}\,} z_1 =0$, ${\operatorname{arg}\,} z_2 = \theta \in [0, \pi]$,
and $\tau_2 \le\tau_1,$ where $\tau_j=\log(1/|z_j|).$
Recall that $p(\zeta)=e^{\pi i\zeta}$ is a holomorphic universal covering
projection of the upper half-plane ${\mathbb H}$ onto ${\mathbb D}^*.$
Let $\zeta_1=i\tau_1/\pi$ and $\zeta_2=(\theta+i\tau_2)/\pi.$
Then $p(\zeta_j)=z_j$ for $j=1,2$ and, by \eqref{eq:hdist},
\begin{equation}\label{eq:hd}
h_{{\mathbb D}^*}(z_1,z_2)=h_{\mathbb H}(\zeta_1,\zeta_2)
={\operatorname{arth}\,}\left|\frac{\zeta_1-\zeta_2}{\zeta_1-\overline{\zeta_2}}\right|
={\operatorname{arth}\,}\sqrt{\frac{\theta^2+(\tau_1-\tau_2)^2}{\theta^2+(\tau_1+\tau_2)^2}}.
\end{equation}
We consider the function
\begin{align*}
G(\theta, \tau_1,\tau_2)&= 2h_{{\mathbb D}^*}(z_1,z_2)-D'(z_1,z_2) \\
&=2{\operatorname{arth}\,}\sqrt{\frac{\theta^2+(\tau_1-\tau_2)^2}{\theta^2+(\tau_1+\tau_2)^2}}
-\frac\theta{\tau_1}-\log\tau_1+\log\tau_2
\end{align*}
on $0\le\theta\le\pi, 1\le\tau_2\le\tau_1.$
A straightforward computation yields
$$
\frac{\partial G}{\partial \tau_2}=\frac{1}{\tau_2}
-\frac{\tau_1^2-\tau_2^2+\theta^2}
{\tau_2\sqrt{(\theta^2+\tau_2^2+\tau_1^2)^2-4\tau_1^2 \tau_2^2}}.
$$
Since
$$
\left[(\theta^2+\tau_2^2+\tau_1^2)^2-4\tau_1^2 \tau_2^2\right]
-(\tau_1^2-\tau_2^2+\theta^2)^2
=4\tau_2^2\theta^2>0
$$
for $0<\theta<\pi,$
we see that $\partial G/\partial\tau_2>0$ and therefore
$G(\theta,\tau_1,\tau_2)$ is increasing in $1\le \tau_2\le\tau_1$
so that
$$
G(\theta,\tau_1,1)<G(\theta,\tau_1,\tau_2)<G(\theta,\tau_1,\tau_1)
$$
for $1<\tau_2<\tau_1.$
We observe that $G(\theta,\tau_1,\tau_1)=2f(2\tau_1/\theta),$ where
$$
f(x)={\operatorname{arth}\,}\frac1{\sqrt{1+x^2}}-\frac1x
=\frac{g(x)-1}{x},
$$
and $g(x)=x\,{\operatorname{arth}\,}(1/\sqrt{1+x^2}).$
Since $g'(x)={\operatorname{arth}\,}(1/\sqrt{1+x^2})-1/\sqrt{1+x^2}>0$
for $x>0,$
we have $g(x)<g(+\infty)=1.$
Hence $G(\theta,\tau_1,\tau_1)<0$ for $0<\theta<\pi, \tau_1>1.$
We have thus proved the inequality $2h_{{\mathbb D}^*}(z_1,z_2)\le D'(z_1,z_2).$
Since $h_{{\mathbb D}^*}(e^{-\tau},-e^{-\tau})=h_{\mathbb H}(i\tau,i\tau+\pi)
={\operatorname{arth}\,}(\pi/\sqrt{\pi^2+4\tau^2}),$
$$
\frac{h_{{\mathbb D}^*}(e^{-\tau},-e^{-\tau})}{D(e^{-\tau},-e^{-\tau})}
=\frac\tau 2\,{\operatorname{arth}\,}\frac{\pi}{\sqrt{\pi^2+4\tau^2}}\to\frac{\pi}4
$$
as $\tau\to+\infty.$
Hence the constant $4/\pi$ is sharp in \eqref{eq:cp}.
Finally, we show the second inequality in \eqref{eq:cp}.
Assume that $0<|z_1|\le|z_2|\le e^{-1}, \theta_0={\operatorname{arg}\,}(z_2/z_1)\in[0,\pi]$
and $\tau_j=-\log|z_j|\ge 1$ for $j=1,2.$
We now show the following two inequalities to complete the proof:
\begin{equation}\label{eq:two}
\frac{\theta_0}{\tau_1}\le M_1H
{\quad\text{and}\quad}
\log\frac{\tau_1}{\tau_2}\le M_2H
\end{equation}
for some constants $M_1$ and $M_2,$ where $H=h_{{\mathbb C}_{0,1}}(z_1,z_2).$
Let $\alpha$ be a shortest hyperbolic geodesic joining $z_1$ and $z_2$ in ${\mathbb C}_{0,1}.$
Then, by Corollary \ref{cor:N}, $\alpha$ is contained in the annulus
$|z_1|e^{-K_0}\le|z|\le |z_2|.$
Thus $-\log|z|\le K_0+\tau_1$ for $z\in\alpha.$
We recall the following lower estimate of $\lambda(z)=\lambda_{{\mathbb C}_{0,1}}(z)$
(see \cite{Hempel79}):
\begin{equation}\label{eq:Hempel}
\frac1{2|z|(C_0+|\log|z||)}\le \lambda(z),\quad z\in {\mathbb C}_{0,1},
\end{equation}
where $C_0=1/2\lambda(-1)=\Gamma(1/4)^4/4\pi^2\approx 4.37688.$
We remark that the bound in \eqref{eq:Hempel} is monotone decreasing in $|z|.$
Noting the inequality $|dz|\ge rd\theta$ for $z=re^{i\theta},$ we have
$$
H=\int_\alpha\lambda(z)|dz|
\ge \int_\alpha \frac{rd\theta}{2r(C_0-\log r)}
\ge \int_\alpha \frac{d\theta}{2(C_0+\tau_1+K_0)}
=\frac{\theta_0}{2(C_0+K_0+\tau_1)}.
$$
Since $C_0+K_0+\tau_1\le (C_0+K_0+1)\tau_1,$ we obtain the
first inequality in \eqref{eq:two} with $M_1=2(C_0+K_0+1).$
Similarly, by using $|dz|\ge |dr|$ for $z=re^{i\theta}=e^{-t+i\theta},$
we obtain
$$
H\ge \int_\alpha\frac{|dr|}{2r(C_0-\log r)}
\ge\int_{\tau_2}^{\tau_1}\frac{dt}{2(C_0+t)}
\ge \frac1{2(C_0+1)}\log\frac{\tau_1}{\tau_2}.
$$
Hence we have the second inequality in \eqref{eq:two} with
$M_2=2(C_0+1).$
Combining the two inequalities in \eqref{eq:two}, we get
$$
D(z_1,z_2)\le D'(z_1,z_2)\le (M_1+M_2)H.
$$
Hence, $M_0=M_1+M_2=4C_0+4+2K_0\approx 23.2008$ works.
\end{pf}
\section{Construction of a distance function on $n$-times punctured sphere}
In this section, we construct a distance function on an $n$-times
punctured sphere $X={\widehat{\mathbb C}}\setminus\{a_1,\dots, a_n\}.$
As we noted, $X$ is hyperbolic if and only if $n\ge3.$
Thus we will assume that $n\ge3$ in the sequel.
After a suitable M\"{o}bius transformation, without much loss of generality,
we may assume that $0,1,\infty\in{\widehat{\mathbb C}}\setminus X$
and $a_n =\infty$.
Let
$$
\tilde\rho_j=\begin{cases}
\displaystyle
\min_{1\le k<n, k\ne j}|a_k-a_j|&\quad \text{for}~j=1,2,\dots, n-1, \\
\displaystyle
\max_{1<k<n}|a_k| &\quad \text{for}~j=n
\end{cases}
$$
and $\rho_j=\tilde\rho_j/e$ for $j=1,2,\dots, n-1$ and $\rho_n=e\tilde\rho_n.$
Since $a_1=0,$ we have $e\rho_j\le|a_j|$ for $1<j<n$ and
$e\rho_j\le\max |a_k|=\rho_n/e$ for $j<n.$
Set $E_{j}={\overline{\mathbb D}}(a_j,\rho_j), E^*_j=E_j\setminus\{a_j\}$ and
$\Delta_{j}={\mathbb D}^*(a_j,\tilde\rho_j)$
for $j=1,\dots, n-1,$ and set
$E_{n}=\{w\in{\widehat{\mathbb C}}: |w|\ge \rho_n\},~E^*_n=E_n\setminus\{\infty\}$ and
$\Delta_{n}=\{w\in{\mathbb C}: |w|>\tilde\rho_n\}.$
It is easy to see that the Euclidean distances between the $E_j$'s can be computed and estimated as
$$
{\operatorname{dist}}(E_j,E_k)=|a_j-a_k|-\rho_j-\rho_k
\ge(e-2)\max\{\rho_j,\rho_k\}
$$
for $j,k<n,$ and
$$
{\operatorname{dist}}(E_j,E_n)=\rho_n-|a_j|-\rho_j\ge
(1-e^{-1}-e^{-2})\rho_n\ge (e^2-e-1)\rho_j>(e-2)\rho_j
$$
for $1\le j<n.$
In particular, $E^*_j$'s are mutually disjoint.
Noting the inequality $1-e^{-1}-e^{-2}\approx0.49678<e-2,$
we also have the estimate
\begin{equation}\label{eq:dist}
{\operatorname{dist}}(E_j,E_k)\ge (1-e^{-1}-e^{-2})\rho_j
\end{equation}
for any pair of distinct $j,k.$
Note also that $\Delta_j\subset X$ for $j=1,\dots, n.$
Finally, let $W=X\setminus (E^*_1\cup\dots\cup E^*_n)$.
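Before proceeding, we illustrate the construction with a small numerical
sketch (not needed for the proofs; the puncture set and all variable names
below are ad hoc). It computes $\tilde\rho_j,$ $\rho_j$ and the mutual
distances of the disks $E_j$ for a sample configuration with $a_1=0,$
$a_2=1,$ $a_3=2+i$ and $a_4=\infty,$ and verifies the separation estimate
\eqref{eq:dist}.
\begin{verbatim}
import math

a = [0.0 + 0.0j, 1.0 + 0.0j, 2.0 + 1.0j]   # finite punctures a_1,...,a_{n-1}; a_n = infinity
n = len(a) + 1
e = math.e

# tilde_rho_j = min_{k != j} |a_k - a_j| for j < n; tilde_rho_n = max_k |a_k|
tilde_rho = [min(abs(ak - aj) for k, ak in enumerate(a) if k != j)
             for j, aj in enumerate(a)]
tilde_rho.append(max(abs(ak) for ak in a))
rho = [t / e for t in tilde_rho[:-1]] + [e * tilde_rho[-1]]

c = 1 - 1 / e - 1 / e ** 2          # the constant in (eq:dist), ~ 0.49678
for j in range(n - 1):
    for k in range(j + 1, n - 1):   # pairs of finite punctures
        d = abs(a[j] - a[k]) - rho[j] - rho[k]
        assert d >= c * max(rho[j], rho[k]) - 1e-12
    d_n = rho[-1] - abs(a[j]) - rho[j]   # distance from E_j to E_n
    assert d_n >= c * rho[-1] - 1e-12
print("separation estimate (eq:dist) holds for this configuration")
\end{verbatim}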
We are now ready to construct a distance function on $X.$
Set
\begin{equation*}
D_j(w_1, w_2)=
\begin{cases}
\rho_jD((w_1-a_j)/\tilde\rho_j,(w_2-a_j)/\tilde\rho_j)
& \text{if}~j=1,\dots, n-1, \\
\rho_n D(\tilde\rho_n/w_1,\tilde\rho_n/w_2) & \text{if}~j=n
\end{cases}
\end{equation*}
for $w_1, w_2\in E^*_j,$ where $D(z_1,z_2)$ is given in \eqref{eq:D}.
By definition, we have $D_j(w_1,w_2)=|w_1-w_2|$ for $w_1$, $w_2 \in \partial E_j$.
By Lemma \ref{D distance theorem},
we know that $D_j(w_1, w_2)$ is a distance function on $E^*_{j}.$
We further define
\begin{equation} \label{eq:cm}
d_X(w_1, w_2)=
\begin{cases}
\displaystyle D_j(w_1, w_2) & \mbox{if\ }\ w_1,\, w_2 \in E^*_{j}, \\
\displaystyle \inf_{\zeta \in \partial E_j} (D_j(w_1, \zeta) +|\zeta-w_2|)
&\mbox{if\ }\ w_1 \in E^*_{j}, \, w_2 \in W, \\
\displaystyle \inf_{\zeta \in \partial E_j} (|w_1-\zeta| +D_j(\zeta,w_2))
&\mbox{if\ }\ w_1 \in W,\, w_2\in E^*_{j}, \\
\displaystyle \inf_{\substack{\zeta_1 \in \partial E_j\\ \zeta_2 \in \partial E_k}}
(D_j(w_1,\zeta_1) +|\zeta_1-\zeta_2|+D_k(\zeta_2, w_2) )
&\mbox{if\ }\ w_1 \in E^*_{j}, \, w_2 \in E^*_{k},\, j\ne k,\\
\displaystyle |w_1-w_2| & \mbox{if\ }\ w_1,\, w_2 \in W
\end{cases}
\end{equation}
for $w_1, w_2\in X.$
Note that the infima in the above definition can be replaced by minima.
Then we have the following result.
\begin{lem}\label{lem:dist}
$d_X$ is a distance function on $X.$
\end{lem}
\begin{pf}
It is easy to see that $d_X(w_1,w_2)=d_X(w_2,w_1)$, and $d_X(w_1,w_2) \geq 0,$
where equality holds if and only if $w_1=w_2$, for $w_1, w_2 \in X$.
It remains to verify the triangle inequality:
$d_X(w_1,w_2)\le d_X(w_1,w_3)+d_X(w_3,w_2).$
According to the location of these points, we need to consider several cases.
For instance, we consider the case when $w_1 \in E^*_{j}$, $w_2 \in E^*_{k}$
and $w_3\in E^*_l$ for distinct $j,k,l.$
Then,
\begin{eqnarray*}
&&d_X(w_1,w_3)+d_X(w_3,w_2)\\
&=&\inf_{\substack{\zeta_1\in\partial E_j,\zeta_2 \in\partial E_k\\ \zeta_3,\zeta_4\in\partial E_l}}
(D_j(w_1,\zeta_1)+|\zeta_1-\zeta_3|+D_l(\zeta_3,w_3)+D_l(w_3,\zeta_4)
+|\zeta_4-\zeta_2|+D_k(\zeta_2, w_2))\\
&\geq& \inf(D_j(w_1,\zeta_1)+|\zeta_1-\zeta_3|+D_l(\zeta_3,\zeta_4)
+|\zeta_4-\zeta_2|+D_k(\zeta_2,w_2))\\
&=& \inf(D_j(w_1,\zeta_1)+|\zeta_1-\zeta_3|+|\zeta_3-\zeta_4|
+|\zeta_4-\zeta_2|+D_k(\zeta_2,w_2))\\
&\geq& \inf_{\zeta_1 \in\partial E_j,\zeta_2 \in\partial E_k}
(D_j(w_1,\zeta_1)+|\zeta_1-\zeta_2|+D_k(\zeta_2,w_2))= d_X(w_1,w_2).
\end{eqnarray*}
The other cases can be handled similarly and therefore will be omitted.
\end{pf}
We remark that we can construct a similar distance when $n=2.$
Let $a_1=0$ and $a_2=\infty$ and consider $X={\widehat{\mathbb C}}\setminus\{a_1,a_2\}
={\mathbb C}^*.$
Then, we set
$$
d_{{\mathbb C}^*}(w_1,w_2)=
\begin{cases}
D(w_1/e, w_2/e) &\text{if}~ 0<|w_1|\le 1, 0<|w_2|\le 1, \\
D(1/(ew_1), 1/(ew_2)) & \text{if}~ 1\le|w_1|, 1\le |w_2|, \\
\displaystyle
\inf_{|\zeta|=1} (D(w_1/e, \zeta/e)+D(1/(e\zeta),1/(ew_2))) &\text{if}~
0<|w_1|\le 1, 1\le |w_2|, \\
\displaystyle
\inf_{|\zeta|=1} (D(1/(ew_1), 1/(e\zeta))+D(\zeta/e,w_2/e)) &\text{if}~
1\le |w_1|, 0<|w_2|\le 1,
\end{cases}
$$
where $D$ is defined by \eqref{eq:D}.
Then we can see that $d_{{\mathbb C}^*}$ is a distance function on ${\mathbb C}^*.$
The asymptotic behaviour of $d_{{\mathbb C}^*}$ near the punctures is rather
different from that of the quasi-hyperbolic distance $q$ on ${\mathbb C}^*$
since $q(w_1,w_2)=|\log|w_1|-\log|w_2||+O(1)$ as $w_1,w_2\to0$
(see \cite{MO86} for instance).
Our main result in the present paper is the following.
In the next section, we will prove it in a stronger form (Theorem \ref{thm:e}).
\begin{thm} \label{thm:main}
The distance function $d_X(w_1, w_2)$ given in \eqref{eq:cm}
on the $n$-times punctured sphere $X={\widehat{\mathbb C}}\setminus\{a_1,\dots,a_n\}
\subset{\mathbb C}_{0,1}$ is comparable with the hyperbolic distance $h_X(w_1,w_2)$ on $X.$
\end{thm}
\section{Proof of Theorem \ref{thm:main}}
We recall that $X={\widehat{\mathbb C}}\setminus \{a_1,\dots, a_n\}\subset{\mathbb C}_{0,1}$ with $a_n=\infty.$
The function $d_X$ defined in the previous section has the merit that it gives a distance on $X.$
On the other hand, it is not easy to compute the exact value of $d_X(w_1,w_2)$
for a given pair of points $w_1, w_2\in X.$
The following quantity serves as a convenient substitute for $d_X(w_1,w_2)$ because
it is easy to compute, though it is not necessarily a distance function:
\begin{eqnarray}gin{equation*} \label{eq:p}
e_X(w_1, w_2)=
\begin{cases}
\displaystyle D_j(w_1, w_2) & \mbox{if\ }\ w_1,\, w_2 \in E^*_{j}, \\
\displaystyle D_j(w_1, \zeta) +|\zeta-w_2| &\mbox{if\ }\ w_1 \in E^*_{j}, \, w_2 \in W, \\
\displaystyle |w_1-\zeta| +D_j(\zeta,w_2) &\mbox{if\ }\ w_1 \in W,\, w_2\in E^*_{j}, \\
\displaystyle D_j(w_1,\zeta_1) +|\zeta_1-\zeta_2|+D_k(\zeta_2, w_2)
&\mbox{if\ }\ w_1 \in E^*_{j}, \, w_2 \in E^*_{k},\, j\ne k,\\
\displaystyle |w_1-w_2| & \mbox{if\ }\ w_1,\, w_2 \in W
\end{cases}
\end{equation*}
for $w_1, w_2\in X,$
where $\zeta$ is the intersection point of the line segment $[w_1,w_2]$
with the circle $\partial E_j$ in the second and third cases,
and $\zeta_1$ and $\zeta_2$ are the intersection points of the
line segment $[w_1,w_2]$ with the circles $\partial E_j$
and $\partial E_k,$ respectively, in the fourth case.
By definition, the inequality $d_X(w_1,w_2)\le e_X(w_1,w_2)$ holds obviously.
Theorem \ref{thm:main} now follows from the next result.
\begin{thm}\label{thm:e}
There exist positive constants $N_1$ and $N_2$ such that the following inequalities hold:
$$
N_1h_X(w_1,w_2)\le d_X(w_1,w_2)\le e_X(w_1,w_2)\le N_2h_X(w_1,w_2)
$$
for $w_1,w_2\in X={\widehat{\mathbb C}}\setminus\{a_1,a_2,\dots,a_n\}\subset{\mathbb C}_{0,1}.$
\end{thm}
\begin{pf}
Assume that $a_1=0$ and $a_n=\infty$ as before.
We recall that $W=X\setminus(E^*_1\cup\dots\cup E^*_n).$
Since $\lambda_X(w)\delta_X(w)\le 1,$ we obtain
\begin{equation}\label{eq:m}
\lambda_X(w)\le \frac1m,\quad w\in \overline W,
\end{equation}
where $m=\min_{1\le j<n}\rho_j.$
We show the first inequality.
Fix $w_1, w_2\in X$ and
assume that $w_1\in E^*_j$ and $w_2\in W$ for some $j.$
We can deal with the other cases similarly and thus we will omit them.
We further assume, for a moment, that $j\ne n.$
By definition,
$$
d_X(w_1,w_2)=D_j(w_1,\zeta_0)+|\zeta_0-w_2|
$$
for some $\zeta_0\in\partial E_j.$
If the line segment $L=[\zeta_0,w_2]$ intersects $E^*_k$ for some
$k\ne j,$ we replace the part $L\cap{\mathbb D}(a_k,\rho_k)$ of $L$
by the shorter component of $\partial E_k\setminus L$
for each such $k.$
The resulting curve will be denoted by $L'.$
It is obvious from construction that the Euclidean length of
$L'$ is bounded by $\pi|\zeta_0-w_2|/2.$
Therefore, by \eqref{eq:m},
$$
|\zeta_0-w_2|\ge \frac2\pi \int_{L'}|dw|
\ge \frac{2m}{\pi}\int_{L'}\lambda_X(w)|dw|
\ge \frac{2m}{\pi}h_X(\zeta_0,w_2).
$$
Let $z_1=g(w_1)$ and $z_2=g(\zeta_0),$ where
$g(w)=(w-a_j)/\tilde\rho_j.$
Then, by the definition of $D_j$ and Theorem \ref{thm:ct},
$$
D_j(w_1,\zeta_0)=\rho_j D(z_1,z_2)\ge \frac{4\rho_j}{\pi} h_{{\mathbb D}^*}(z_1,z_2).
$$
Since $g$ maps $\Delta_j$ conformally onto ${\mathbb D}^*,$
the principle of the hyperbolic metric leads to the following:
$$
h_{{\mathbb D}^*}(z_1,z_2)=h_{\Delta_j}(w_1,\zeta_0)\ge h_X(w_1,\zeta_0).
$$
When $j=n,$ with $g(w)=\tilde\rho_n/w,$
we have the estimate
$D_n(w_1,\zeta_0)\ge (4\rho_n/\pi)h_X(w_1,\zeta_0)$
in a similar way.
Since $4\rho_j/\pi\ge 2m/\pi=2\min\rho_k/\pi,$ we obtain
$$
d_X(w_1,w_2)\ge N_1\big[ h_X(w_1,\zeta_0)+h_X(\zeta_0,w_2)\big]
\ge N_1h_X(w_1,w_2),
$$
where $N_1=2m/\pi.$
Similarly, we get the first inequality in the other cases with the same constant $N_1.$
Thus the first inequality has been shown.
We next show the inequality $e_X(w_1,w_2)\le N_2h_X(w_1,w_2).$
We consider several cases according to the location of $w_1,w_2.$
\noindent
{\sl Case (i)} $w_1, w_2\in E^*_j:$
We first assume that $j\ne n$ and choose $k\ne j$ so that $\tilde\rho_j=|a_k-a_j|.$
Let $X_1={\mathbb C}\setminus\{a_j,a_k\}.$
Then $X_1\supset X$ and
$g(w)=(w-a_j)/(a_k-a_j)$ maps $X_1$ conformally onto ${\mathbb C}_{0,1}.$
Set $z_1=g(w_1)$ and $z_2=g(w_2).$
By Theorem \ref{thm:ct}, we obtain
$$
D_j(w_1,w_2)=\rho_j D(z_1,z_2)\le M_0\rho_jh_{{\mathbb C}_{0,1}}(z_1,z_2)
=M_0\rho_jh_{X_1}(w_1,w_2)
\le M_0\rho_jh_X(w_1,w_2),
$$
and thus $e_X(w_1,w_2)\le M_0 \rho_j h_X(w_1,w_2).$
When $j=n,$ we set $g(w)=a_k/w,$ where $a_k$ is chosen so that
$\tilde\rho_n=|a_k|.$
Then, we also have the estimate $D_n(w_1,w_2)\le M_0\rho_n h_X(w_1,w_2).$
In summary, we have $D_j(w_1,w_2)=e_X(w_1,w_2)\le M_0\rho_{n} h_X(w_1,w_2)$
for $j=1,\dots,n,$ because $\rho_j\le e^{-2}\rho_n<\rho_n.$
\noindent
{\sl Case (ii)} $w_1\in E^*_j$ and $w_2\in W:$
Let $\zeta$ be the intersection point of the line segment $[w_1,w_2]$ with
the boundary circle $\partialrtial E_j.$
It is thus enough to show the inequalities
\begin{eqnarray}gin{equation}\label{eq:B}
D_j(w_1,\zeta)\le B_1h_{X}(w_1,w_2)
{\quad\text{and}\quad}
|\zeta-w_2|\le B_2h_{X}(w_1,w_2)
\end{equation}
for some constants $B_1$ and $B_2.$
We start with the second one.
Assume that $j\ne n$ for a while.
Then the function $g(w)=a_j/w$ maps $X$ conformally into ${\mathbb C}_{0,1}.$
(When $j=1,$ we set $g(w)=1/w.$)
Put $z_l=g(w_l)$ for $l=1,2$ and set $X_1
=g^{-1}({\mathbb C}_{0,1})$ which contains $X.$
We consider a shortest hyperbolic geodesic $\alpha$ joining $w_1$ and $w_2$ in $X_1.$
Note that $\ell_{X_1}(\alpha)=h_{X_1}(w_1,w_2)\le h_X(w_1,w_2)$ by
the principle of hyperbolic metric.
In order to complete the proof, we need to analyse the location of
the geodesic $\alpha.$
Since $|w_l|\le\rho_n,~l=1,2,$ the points $z_l$ satisfy
$|z_l|=|a_j/w_l|\ge \sigma,$
where $\sigma=|a_j|/\rho_n\le \tilde\rho_n/\rho_n\le e^{-1}.$
(When $j=1,$ $\sigma:=1/\rho_n\le e^{-1}$ because $1\notin X.$)
We now apply Corollary \ref{cor:N} to see that $g(\alpha)$ is contained
in the set $|z|\ge \sigma e^{-K_0},$ where we recall that $K_0\approx 0.85.$
Thus, $\alpha$ lies in the set $|w|\le |a_j|e^{K_0}/\sigma=\rho_ne^{K_0}.$
We note also that $\sigma=|a_j|/\rho_n\ge\tilde\rho_1/\rho_n=e\rho_1/\rho_n.$
Hence, by \eqref{eq:Hempel}, we have the lower estimate
\begin{align*}
\lambda_{X_1}(w)=\lambda(g(w))|g'(w)|
&\ge \frac{1}{2|w|(C_0+|\log|a_j/w||)}\ge \frac{e^{-K_0}}{2\rho_n(C_0-K_0-\log\sigma)} \\
&\ge \frac{e^{-K_0}}{2\rho_n(C_0-K_0-1+\log(\rho_n/\rho_1))}=:\frac1{U_1}
\end{align*}
for $w\in \alpha, 1<j<n.$
Since $e\rho_1\le1,$ this estimate is valid also for $j=1.$
We are now ready to show the second inequality in \eqref{eq:B} for $j\ne n.$
We denote by $\beta$ the line passing through $\zeta$ and orthogonal to the
line segment $[w_1,w_2].$
Take an intersection point $\zeta_1$ of the line $\beta$ with the geodesic $\alpha$
and denote by $\alpha_1$ the part of $\alpha$ joining $\zeta_1$ and $w_2.$
From the above inequality, we derive
$$
h_X(\zeta_1,w_2)
\ge h_{X_1}(\zeta_1,w_2)
=\int_{\alpha_1}\lambdabda_{X_1}(w)|dw|\ge U_1^{-1}\int_{\alpha_1}|dw|
\ge U_1^{-1}|\zeta_1-w_2|.
$$
Now we have
$$
|\zeta-w_2|
\le|\zeta_1-w_2|
\le U_1 h_{X_1}(\zeta_1,w_2)
\le U_1 h_{X_1}(w_1,w_2)
\le U_1 h_{X}(w_1,w_2).
$$
We now turn to the case when $j=n.$
Then we need to modify the above argument a bit.
In this case, we set $g(w)=w$ and $X_1={\mathbb C}_{0,1}.$
If $\alpha$ is contained in the disk $|w|\le 3\rho_n,$
\eqref{eq:Hempel} yields
\begin{equation}\label{eq:Hn}
\lambda_{X_1}(w)\ge\frac{1}{6\rho_n(C_0+\log 3+\log\rho_n)}=:\frac1{U_2}
\end{equation}
for $w\in\alpha.$
Then the same argument as above yields the inequality $|\zeta-w_2|
\le U_2 h_X(w_1,w_2).$
Otherwise, we define $\zeta_2$ to be the first hitting point of the geodesic
$\alpha$ to the circle $\Gamma=\{w:|w-w_2|=|\zeta-w_2|\}$ from $w_2.$
Let $\alpha_2$ be the part of $\alpha$ joining $\zeta_2$ and $w_2$ as before.
Since the inside of $\Gamma$ is contained in the disk $|w|\le 3\rho_n,$ the inequality
\eqref{eq:Hn} holds for $w\in\alpha_2.$
Thus, we have
$$
|\zeta-w_2|=|\zeta_2-w_2|\le U_2 h_{X_1}(\zeta_2,w_2)
\le U_2 h_{X_1}(w_1,w_2)
\le U_2 h_{X}(w_1,w_2).
$$
In this way, we have shown that the second inequality in \eqref{eq:B}
holds with $B_2=\max\{U_1,U_2\}$ in all cases.
Next we show the first inequality in \eqref{eq:B}.
By case (i), we have
$$
D_j(w_1,\zeta)\le M_0\rho_n h_X(w_1,\zeta).
$$
On the other hand, by making use of the first part of the theorem and
the second inequality in \eqref{eq:B}, we have
$$
h_X(\zeta,w_2)\le N_1^{-1} d_X(\zeta,w_2)=N_1^{-1}|\zeta-w_2|
\le N_1^{-1} B_2h_X(w_1,w_2).
$$
Combining these inequalities, we get
\begin{align*}
D_j(w_1,\zeta)
&\le M_0\rho_n h_X(w_1,\zeta)
\le M_0\rho_n \big\{h_X(w_1,w_2)+h_X(w_2,\zeta)\big\} \\
&\le M_0\rho_n(1+N_1^{-1} B_2)h_X(w_1,w_2).
\end{align*}
Thus $D_j(w_1,\zeta)\le B_1h_X(w_1,w_2)$ for $j=1,\dots,n$ with
$B_1=M_0\rho_n(1+N_1^{-1} B_2).$
We have now shown the inequality $e_X(w_1,w_2)\le N_2' h_X(w_1,w_2)$
in this case, where $N_2'=B_1+B_2.$
\noindent
{\sl Case (iii)} $w_1\in W, w_2\in E_j^*:$
This is essentially the same as case (ii).
\noindent
{\sl Case (iv)} $w_1,w_2\in W:$
Similarly, we obtain $|w_1-w_2|\le B_2h_X(w_1,w_2)$
with the same constant $B_2$ as in case (ii).
\noindent
{\sl Case (v)} $w_1\in E_j^*$ and $w_2\in E_k^*$ with $j\ne k:$
Then, by definition,
$$
e_X(w_1,w_2)=D_j(w_1,\zeta_1) +|\zeta_1-\zeta_2|+D_k(\zeta_2, w_2),
$$
where $\zeta_1$ and $\zeta_2$ are the intersection points of the line segment
$[w_1,w_2]$ with $\partialrtial E_j$ and $\partialrtial E_k,$ respectively.
By using the auxiliary lines $\beta_l$ orthogonally intersecting $[w_1,w_2]$
at $\zeta_l~(l=1,2),$ we obtain the inequality
\begin{equation}\label{eq:seg}
|\zeta_1-\zeta_2|\le B_2 h_X(w_1,w_2)
\end{equation}
in the same way as in case (ii).
Let $\alpha$ be a shortest hyperbolic geodesic joining $w_1$ and $w_2$ in $X.$
Let $\zeta_l'$ be an intersection point of $\alpha$ with $\beta_l$ for $l=1,2.$
First assume that $D_j(w_1,\zeta_1)\ge 4\rho_j.$
Since $D_j(\zeta_1,\zeta_1')=|\zeta_1-\zeta_1'|\le 2\rho_j\le D_j(w_1,\zeta_1)/2,$
we obtain $D_j(w_1,\zeta_1)\le 2D_j(w_1,\zeta_1').$
By the first part of the theorem, we now observe that
$$
D_j(w_1,\zeta_1')=d_X(w_1,\zeta_1')\le M_0\rho_{n}h_X(w_1,\zeta_1')
\le M_0\rho_{n}h_X(w_1,w_2).
$$
Thus $D_j(w_1,\zeta_1)\le 2M_0\rho_{n}h_X(w_1,w_2).$
Next assume that $D_j(w_1,\zeta_1)<4\rho_j.$
By \eqref{eq:dist}, we have
$$
|\zeta_1-\zeta_2|\ge {\operatorname{dist}}(E_j,E_k)\ge(1-e^{-1}-e^{-2})\rho_j.
$$
Thus, with the help of \eqref{eq:seg}, we have
$$
D_j(w_1,\zeta_1)<\frac{4}{1-e^{-1}-e^{-2}}\,|\zeta_1-\zeta_2|
\le\frac{4B_2}{1-e^{-1}-e^{-2}}\, h_X(w_1,w_2).
$$
We can deal with $D_k(\zeta_2,w_2)$ in the same way.
Therefore, letting
$$
N_2''=B_2+2\max\left\{2M_0\rho_{n},~ \frac{4B_2}{1-e^{-1}-e^{-2}}\right\},
$$
we obtain the inequality $e_X(w_1,w_2)\le N_2'' h_X(w_1,w_2).$
Summarising all the cases, we now conclude that the right-most inequality
in the assertion of the theorem holds with the choice $N_2=\max\{N_2',N_2''\}.$
\end{pf}
We end the paper with the observation that the proof allows us to take the bounds $N_1$ and
$N_2$ in the last theorem, under the convention $a_n=\infty,$ as follows:
$$
N_1=\frac{2\rho_0}\pi
{\quad\text{and}\quad}
N_2=C\, \frac{\rho_n}{\rho_0}\log\frac{\rho_n}{\rho_0},
$$
where $C>0$ is an absolute constant and
$$
\rho_0=\min_{1\le j<n}\rho_j.
$$
\def\cprime{$'$}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\begin{eqnarray}gin{thebibliography}{10}
\bibitem{AAP17}
Y.~{Abu Muhanna}, R.~M. Ali, and S.~Ponnusamy, \emph{The spherical metric and
univalent harmonic mappings}, Preprint.
\bibitem{Agard68}
S.~Agard, \emph{Distortion theorems for quasiconformal mappings}, Ann. Acad.
Sci. Fenn. A I Math. \textbf{413} (1968), 1--12.
\bibitem{Ahlfors:conf}
L.~V. Ahlfors, \emph{Conformal {I}nvariants}, McGraw-Hill, New York, 1973.
\bibitem{Ahlfors:ca}
\bysame, \emph{Complex {A}nalysis, 3rd ed.}, McGraw Hill, New York, 1979.
\bibitem{Beardon:disc}
A.~F. Beardon, \emph{The geometry of discrete groups}, Graduate Texts in
Mathematics, vol.~91, Springer-Verlag, New York, 1983.
\bibitem{BM07}
A.~F. Beardon and D.~Minda, \emph{The hyperbolic metric and geometric function
theory}, Quasiconformal mappings and their applications, Narosa, New Delhi,
2007, pp.~9--56.
\bibitem{BP78}
A.~F. Beardon and {Ch}. Pommerenke, \emph{The {P}oincar\'e metric of plane
domains}, J. London Math. Soc. (2) \textbf{18} (1978), 475--483.
\bibitem{BR86}
L.~Bers and H.~L. Royden, \emph{Holomorphic families of injections}, Acta Math.
\textbf{157} (1986), 259--286.
\bibitem{Hempel79}
J.~A. Hempel, \emph{The {P}oincar\'e metric on the twice punctured plane and
the theorems of {L}andau and {S}chottky}, J. London Math. Soc. (2)
\textbf{20} (1979), 435--445.
\bibitem{Hempel80}
\bysame, \emph{Precise bounds in the theorems of {S}chottky and {P}icard}, J.
London Math. Soc. (2) \textbf{21} (1980), 279--286.
\bibitem{KL:hg}
L.~Keen and N.~Lakic, \emph{Hyperbolic {G}eometry from a {L}ocal {V}iewpoint},
Cambridge University Press, Cambridge, 2007.
\bibitem{MO86}
G.~J. Martin and B.~G. Osgood, \emph{The quasihyperbolic metric and associated
estimates on the hyperbolic metric}, J. Analyse Math. \textbf{47} (1986),
37--53.
\bibitem{Minda87}
D.~Minda, \emph{A reflection principle for the hyperbolic metric and
applications to geometric function theory}, Complex Variables Theory Appl.
\textbf{8} (1987), 129--144.
\bibitem{Nehari:conf}
Z.~Nehari, \emph{Conformal {M}appings}, McGraw-Hill, New York, 1952.
\bibitem{Nev:Ein}
R.~Nevanlinna, \emph{Eindeutige {A}nalytische {F}unktionen}, Springer-Verlag,
1953, English translation: {\em Analytic {F}unctions}, Springer-Verlag, 1970.
\bibitem{Rickman84}
S.~Rickman, \emph{Quasiregular mappings and metrics on the $n$-sphere with
punctures}, Comment. Math. Helv. \textbf{59} (1984), 134--148.
\bibitem{SV01}
A.~{Yu}. Solynin and M.~Vuorinen, \emph{Estimates for the hyperbolic metric of
the punctured plane and applications}, Israel J. Math. \textbf{124} (2001),
29--60.
\bibitem{SV05}
T.~Sugawa and M.~Vuorinen, \emph{Some inequalities for the {P}oincar\'e metric
of plane domains}, Math. Z. \textbf{250} (2005), 885--906.
\end{thebibliography}
\end{document}
\begin{document}
\title{Weak Galerkin Finite Element methods for Parabolic Equations}
\author{Qiaoluan H. Li}
\address{Department of Mathematics, Towson University, Towson, MD 22152} \email{[email protected]}
\author{Junping Wang}
\address{Division of Mathematical Sciences, National Science
Foundation, Arlington, VA 22230} \email{[email protected]}
\thanks{The research of Wang was supported by the NSF IR/D program,
while working at the National Science Foundation. However, any
opinion, finding, and conclusions or recommendations expressed in
this material are those of the author and do not necessarily reflect
the views of the National Science Foundation.}
\begin{abstract}
A newly developed weak Galerkin method is proposed to solve
parabolic equations. This method allows the usage of totally
discontinuous functions in approximation space and preserves the
energy conservation law. Both continuous and discontinuous time weak
Galerkin finite element schemes are developed and analyzed. Optimal
order error estimates in both $H^1$ and $L^2$ norms are established.
Numerical tests are performed and reported.
\end{abstract}
\subjclass[2010]{Primary 65N15, 65N30, 76D07; Secondary, 35B45,
35J50}
\keywords{weak Galerkin methods, finite element methods, parabolic
equations}
\maketitle
\section{Introduction}
In this paper, we consider the initial-boundary value problem
\begin{eqnarray}\label{continuousP}
u_t-\nabla\cdot(a\nabla u) &=& f \quad \mbox{ in } \Omega,\quad t\in J,\label{heat}\nonumber\\
u &=& g\quad \mbox{ on }\partial\Omega, \quad t\in J,\\
u(\cdot, 0)&=&\psi\quad \mbox{ in } \Omega,\nonumber
\end{eqnarray}
where $\Omega$ is a polygonal or polyhedral domain in $\mathbb{R}^d$
$(d=\,2,\,3)$ with Lipschitz-continuous boundary $\partial \Omega$,
$J=(0, \, \bar{t}\,]$, and $a\,=\,(a_{ij}(x,t))_{d\times d}\in
[L^\infty(\Omega\times \bar{J})]^{d^2}$ is a symmetric matrix-valued
function satisfying the following property: there exists a constant
$\alpha>0$ such that
$$
\alpha \xi^T\xi\leq \xi^T a\xi,\quad \forall \xi\in \mathbb{R}^d.
$$
For simplicity, we shall concentrate on two-dimensional problems
only (i.e., $d=2$).
For any nonnegative integer $m$, let $H^m(\Omega)$
be the standard Sobolev space, which is the collection of all real-valued
functions defined on $\Omega$ with square integrable derivatives up
to order $m$ with
semi-norm
$$|\psi|_{s,\Omega}\equiv\{\sum_{|\alpha|=s}\int_\Omega\,|\partial^\alpha
\psi|^2\,dx\}^{\frac12}, $$
and norm
$$
\|\psi\|_{m,\Omega}\equiv(\sum_{j=0}^m |\psi|^2_{j,\Omega})^{\frac12}.
$$
For simplicity, we use $\|\cdot\|$ for the $L^2$ norm.
The parabolic problem can be written in a variational form as
follows
\begin{eqnarray}\label{weakform}
(u_t,v)+(a\nabla u,\nabla v)&=&(f,v)\quad \forall v\in H_0^1(\Omega),\,t\in J,\\
u(\cdot, 0)&=&\psi,\nonumber
\end{eqnarray}
where $u$ is called a weak solution if $u\in L^2(0,t;H^1(\Omega))$
and $u_t\in L^2(0,t;H^{-1}(\Omega))$, and if $u=g$ on
$\partial\Omega$.
Parabolic problems have been treated by various numerical methods.
For finite element methods, we refer to \cite{VThomee} and
\cite{AFEM}. Discontinuous Galerkin finite element methods were
studied in \cite{DGM} and \cite{DGM2}. In \cite{FVM} and
\cite{FVM2}, finite volume methods were presented, which were based
on the integral conservation law rather than the differential
equation. The integral conservation law was then enforced for small
control volumes defined by the computational mesh.
The goal of this paper is to consider a newly developed weak
Galerkin (WG) finite element method for parabolic equations, which is
based on the definition of a discrete weak gradient operator
proposed in \cite{JW_WG}. In this numerical method, the analysis can
be done by using the framework of Galerkin methods, and totally
discontinuous functions are allowed in the approximation
space. Furthermore, the approximate solutions also satisfy the
energy conservation law.
The rest of this paper is organized as follows. In Section 2, we
introduce some notation and establish a continuous time and
discontinuous time weak Galerkin finite element scheme for the
parabolic initial boundary-value problem (\ref{continuousP}). In
Section 3, we prove the energy conservation law of the weak Galerkin
approximation. Optimal order error estimates in both $H^1$ and $L^2$
norms are proved in Section 4. The paper is concluded with some
numerical experiments to illustrate the theoretical analysis in
Section 5.
\section{The Weak Galerkin Method}
In this section we design a continuous time and a discontinuous time
weak Galerkin finite element scheme for the initial-boundary value
problem (\ref{continuousP}). We consider the space of discrete weak
functions and the discrete weak operator introduced in \cite{JW_WG}.
Let $\mathcal{T}_h$ be a quasiuniform family of triangulations of a
plane domain $\Omega$, and let $T$ denote a generic triangular element with
$\partial T$ as its boundary. For each $T \in \mathcal{T}_h$, let
$P_j(T)$ be the set of polynomials on $T$ with degree no more than
$j$, and $P_l(\partial T)$ be the set of polynomials on $\partial T$
with degree no more than $l$, respectively. Let $\hat{P}_j(T)$ be
the set of homogeneous polynomials on $T$ with degree $j$. The weak
finite element space $S_h(j,l)$ is defined by
$$
S_h(j,l):=\{v=\{v_0,v_b\}\,:\,\,v_0\in P_j(T),\,v_b\in
P_l(e)\,\mbox{ for all edge } e\subset \partial T,\ T\in
\mathcal{T}_h\}.
$$
Denote by $S^0_h(j,l)$ the subspace of $S_h(j,l)$ with vanishing
boundary value on $\partial \Omega$; i.e.,
$$
S^0_h(j,l):=\{v=\{v_0,v_b\}\in S_h(j,l),\,v_b|_{\partial T\cap\partial \Omega}=0\,\mbox{ for all } T\in \mathcal{T}_h\}.
$$
Let $\sum_h=\{ {\bf q}\in [L^2(\Omega)]^2\,:\,{\bf q}|_T\in V(T,r)
\mbox{ for all } T\in \mathcal{T}_h\}$, where $V(T,r)$ is a subspace
of the set of vector-valued polynomials of degree no more than $r$
on $T$. For each $v=\{v_0,v_b\}\in S_h(j,l)$, the discrete weak
gradient $\nabla_d v\in \sum_h$ of $v$ on each element $T$ is given
by the following equation:
\begin{equation}\label{WGoperator}
\int_T \nabla_{d}v\cdot {\bf q}\,dT=-\int_T v_0\nabla\cdot {\bf
q}\,dT+\int_{\partial T} v_b{\bf q}\cdot {\bf n}\,ds,\quad \forall {\bf
q}\in V(T,r),
\end{equation}
where ${\bf n}$ is the outward normal direction of $\partial T$. It is
easy to see that this discrete weak gradient is well-defined.
To investigate the approximation properties of the discrete weak
spaces $S_h(j,l)$ and $\sum_h$, we use three projections in this
paper. The first two are local projections defined on each triangle
$T$: one is $Q_h u=\{Q_0 u,Q_b u\}$, the $L^2$ projection of
$H^1(T)$ onto $P_j(T)\times P_l(\partial T)$, and the other is $\mathbb{Q}_h$,
the $L^2$ projection of $[L^2(T)]^2$ onto $V(T,r)$. The third
projection $\Pi_h$ is assumed to exist and satisfy the following
property: For ${\bf q}\in H(div,\Omega)$ with some mild additional
regularity, $\Pi_h {\bf q}\in H(div,\Omega)$ such that $\Pi_h {\bf
q}\in V(T,r)$ on each $T\in \mathcal{T}_h$, and
$$
(\nabla \cdot {\bf q},v_0)_T=(\nabla \cdot \Pi_h {\bf
q},v_0)_T,\quad \forall v_0\in P_j(T).
$$
It is easy to see the following two useful identities:
\begin{equation}\label{identity1}
\nabla_d(Q_h w)=\mathbb{Q}_h(\nabla w),\quad \forall w\in H^1(T),
\end{equation}
and for any ${\bf q}\in H(div,\Omega)$
\begin{equation}\label{identity2}
\sum_{T\in\mathcal{T}_h}(-\nabla\cdot{\bf q},v_0)_T=
\sum_{T\in\mathcal{T}_h}(\Pi_h{\bf q},\nabla_d v)_T, \quad\forall
v=\{v_0,v_b\}\in S_h^0(j,l).
\end{equation}
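To make the definition (\ref{WGoperator}) and the identity (\ref{identity1})
concrete, the following short Python sketch (an illustration added for the
reader; the triangle, the test function, and all names are ad hoc and not part
of any code accompanying this paper) assembles the local $3\times 3$ system
defining $\nabla_d v$ for the lowest-order choice $j=l=0$ with
$V(T,1)=[P_0(T)]^2+\hat{P}_0(T){\bf x}$ (the Raviart-Thomas space of order $0$),
and checks that $\nabla_d(Q_h w)$ reproduces the constant gradient of a linear
function $w$, as predicted by (\ref{identity1}).
\begin{verbatim}
import numpy as np

verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.2, 0.8]])   # a sample triangle T (CCW vertices)
d1, d2 = verts[1] - verts[0], verts[2] - verts[0]
area = 0.5 * abs(d1[0] * d2[1] - d1[1] * d2[0])
mids = np.array([(verts[i] + verts[(i + 1) % 3]) / 2 for i in range(3)])  # edge midpoints

def quad(f):
    # edge-midpoint quadrature on T; exact for polynomials of degree <= 2
    return area / 3.0 * sum(f(p) for p in mids)

# RT_0 basis on T: q1 = (1,0), q2 = (0,1), q3 = (x,y); their divergences are 0, 0, 2
basis = [lambda p: np.array([1.0, 0.0]),
         lambda p: np.array([0.0, 1.0]),
         lambda p: np.array([p[0], p[1]])]
div = [0.0, 0.0, 2.0]

w = lambda p: 2.0 * p[0] - 3.0 * p[1] + 1.0      # linear test function, grad w = (2, -3)
v0 = quad(w) / area                              # Q_0 w: cell average of w on T
G = np.array([[quad(lambda p: basis[i](p) @ basis[j](p)) for j in range(3)]
              for i in range(3)])                # Gram matrix of the RT_0 basis

rhs = np.zeros(3)
for i in range(3):                               # boundary term of (WGoperator)
    t = verts[(i + 1) % 3] - verts[i]
    length = np.linalg.norm(t)
    nrm = np.array([t[1], -t[0]]) / length       # outward unit normal (CCW vertices)
    vb = w(mids[i])                              # Q_b w on this edge (= midpoint value, w linear)
    for j in range(3):
        # q.n is constant along each edge for RT_0, so this edge integral is exact
        rhs[j] += vb * (basis[j](mids[i]) @ nrm) * length
for j in range(3):
    rhs[j] -= v0 * div[j] * area                 # volume term -(v_0, div q)_T

coef = np.linalg.solve(G, rhs)                   # nabla_d(Q_h w) in the RT_0 basis
print(coef)                                      # expect (2, -3, 0): the constant field grad w
\end{verbatim}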
The discrete weak spaces $S_h(j,l)$ and $\sum_h$ need to possess
some good approximation properties in order to provide an acceptable
finite element scheme. In \cite{JW_WG}, the following two criteria
were given as a general guideline for their construction:
\begin{description}
\item[\bf P1] For any $v\in S_h(j,l)$, if $\nabla_d
v=0$ on $T$, then one must have $v\equiv {\it constant}$ on $T$;
i.e., $v_0=v_b={\it constant}$ on $T$.
\item[\bf P2] For any $u\in H^1_0(\Omega)\cap
H^{m+1}(\Omega)$, where $0\leq m\leq j+1$, the discrete weak
gradient of the $L^2$ projection $Q_h u$ of $u$ in the discrete weak
space $S_h(j,l)$ provides a good approximation of $\nabla u$; i.e.,
$\|\nabla_d (Q_h u)-\nabla u\|\leq Ch^m\|u\|_{m+1}$ holds true.
\end{description}
Examples of $S_h(j,l)$ and $\sum_h$ satisfying the conditions {\bf
P1} and {\bf P2} can be found in \cite{JW_WG}. For example, with
$V(T,r=j+1)=[P_{j}(T)]^2+\hat{P}_j(T){\bf x}$ being the usual
Raviart-Thomas element \cite{RT} of order $j$, one may take
equal-order elements of order $j$ for $S_h(j,l)$ in the interior and
the boundary of each element $T$. The key of using the
Raviart-Thomas element for $V(T,r)$ is to ensure the condition {\bf
P1}. The condition {\bf P2} was satisfied by the commutative
property (\ref{identity1}) which has been established in
\cite{JW_WG}. It was shown later in \cite{wy-mixed, JW2} that the
condition {\bf P1} can be circumvented by a suitable stabilization
term. Consequently, the selection of $V(T,r)$ and $S_h(j,l)$ becomes
very flexible and robust in practical computation. It even allows
the use of finite elements of arbitrary shape.
The main idea of the weak Galerkin method is to use the discrete
weak space $S_h(j,l)$ as testing and trial spaces and replace the
classical gradient operator by the weak gradient operator $\nabla_d$
in (\ref{weakform}).
First, we pose the continuous time weak Galerkin finite element
method, based on the weak Galerkin operator (\ref{WGoperator}) and
weak formulation (\ref{weakform}), which is to find
$u_h(t)=\{u_0(\cdot, t),u_b(\cdot, t)\}$, belonging to $S_h(j,l)$
for $t\geq 0$, satisfying $u_b=Q_b g \mbox{ on } \partial \Omega,
t>0$ and $u_h(0)=Q_h \psi \mbox{ in } \Omega$, and the following
equation
\begin{equation}\label{semiWG}
((u_{h})_t,v_0)+a(u_h,v)=(f,v_0)\quad \forall v=\{v_0,v_b\}\in
S_h^0(j,l),\,t>0,
\end{equation}
where $a(\cdot,\cdot)$ is the weak bilinear form defined by
$$
a(w,v)=(a\nabla_d w,\nabla_d v),
$$
which is assumed to be bounded and coercive, i.e., for constants $\alpha,\beta,\gamma>0$,
$$
|a(u,v)|\leq \beta \|\nabla_d u\|\|\nabla_d v\|,
$$
$$
a(u,u)\geq \alpha \|\nabla_d u\|^2,
$$
and that
$$
|(a_t \nabla_d u,\nabla_d v)|\leq \gamma \|\nabla_d u\|\|\nabla_d v\|.
$$
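For later reference we record the algebraic form of (\ref{semiWG}).
As a sketch (assuming, for simplicity of presentation, homogeneous boundary
data $g=0$, and writing $u_h(t)=\sum_k\xi_k(t)\varphi_k$ for a basis
$\{\varphi_k\}$ of $S_h^0(j,l)$ with interior components $(\varphi_k)_0$),
the scheme is the linear system
$$
M\dot{\xi}(t)+A(t)\xi(t)=F(t),\qquad
M_{ik}=((\varphi_k)_0,(\varphi_i)_0),\quad
A_{ik}(t)=(a\nabla_d\varphi_k,\nabla_d\varphi_i),\quad
F_i(t)=(f(t),(\varphi_i)_0).
$$
Note that $M$ involves only the interior components, so basis functions
supported solely on the element boundaries contribute algebraic rather than
differential equations.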
We now turn our attention to some discrete time Weak Galerkin
procedures. We introduce a time step $k$ and the time levels
$t=t_n=nk$, where $n$ is a nonnegative integer, and denote by
$U^n=U^n_h\in S_h(j,l)$ the approximation of $u(t_n)$ to be
determined. The backward Euler Weak Galerkin method is defined by
replacing the time derivative in (\ref{semiWG}) by a backward
difference quotient, or, if $\bar{\partial} U^n=(U^n-U^{n-1})/k$,
\begin{equation}\label{discreteWG}
(\bar{\partial} U^n,v_0)+a(U^n,v)=(f(t_n),v_0)\quad \forall
v=\{v_0,v_b\}\in S_h^0(j,l),\,n\geq 1,\,U^0=Q_h\psi,
\end{equation}
i.e.,
$$
(U^n,v_0)+ka(U^n,v)=(U^{n-1}+kf(t_n),v_0), \quad \forall
v=\{v_0,v_b\}\in S_h^0(j,l).
$$
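Each backward Euler step thus amounts to one linear solve with the matrix
$M+kA(t_n)$ from the algebraic form recorded above. The following Python
sketch is illustrative only: the (dense) matrices $M$, $A(t)$ and the load
vector $F(t)$ are assumed to have been assembled elsewhere and are not
routines provided by this paper.
\begin{verbatim}
import numpy as np

def backward_euler_wg(M, A, F, xi0, k, nsteps):
    # Advance M xi' + A(t) xi = F(t) with the backward difference quotient
    # (xi^n - xi^{n-1})/k; A and F are callables returning A(t_n) and F(t_n).
    xi = xi0.copy()
    for n in range(1, nsteps + 1):
        t_n = n * k
        # (M + k A(t_n)) xi^n = M xi^{n-1} + k F(t_n)
        xi = np.linalg.solve(M + k * A(t_n), M @ xi + k * F(t_n))
    return xi
\end{verbatim}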
\section{Energy Conservation of Weak Galerkin}
This section investigates the energy conservation property of the
weak Galerkin finite element approximation $u_h$. The increase in
internal energy in a small spatial region of the material $K$, i.e.,
control volume, over the time period $[t-\Delta t,t+\Delta t]$ is
given by
$$
\int_K u(x,t+\Delta t)\,dx-\int_K u(x,t-\Delta t)\,dx=\int_{t-\Delta t}^{t+\Delta t}\int_K u_t\,dx\,dt.
$$
Suppose that a body obeys the heat equation and, in addition, generates its own heat per unit volume at a rate given by a known function $f$ varying in space and time. Then the change in internal energy in $K$ is accounted for by the flux of heat across the boundary together with the source energy. By Fourier's law, this is
$$
-\int_{t-\Delta t}^{t+\Delta t}\int_{\partial K} q\cdot {\bf n} \,ds
\,dt +\int_{t-\Delta t}^{t+\Delta t}\int_K f\,dx\,dt,
$$
where $q=-a\nabla u$ is the flow rate of heat energy.
The solution $u$ of the problem (\ref{continuousP}) yields the following integral form of energy conservation:
\begin{equation}\label{energy}
\int_{t-\Delta t}^{t+\Delta t}\int_K u_t\,dx\,dt+\int_{t-\Delta
t}^{t+\Delta t}\int_{\partial K} q\cdot {\bf n}\,ds\,dt=\int_{t-\Delta
t}^{t+\Delta t}\int_K f\,dx\,dt
\end{equation}
where Green's formula was used. We claim that the numerical approximation from the weak Galerkin finite element method for (\ref{continuousP}) retains the energy conservation property (\ref{energy}).
In (\ref{semiWG}), we choose a test function $v=\{v_0,v_b=0\}$ so
that $v_0=1$ on $K$ and $v_0=0$ elsewhere. After integrating over
the time period, we have
\begin{equation}\label{EC}
\int_{t-\Delta t}^{t+\Delta t}\int_K (u_h)_t\,dx\,dt+\int_{t-\Delta t}^{t+\Delta t}\int_{ K} a \nabla_d u_h\cdot\nabla_d v \,dx\,dt=\int_{t-\Delta t}^{t+\Delta t}\int_K f\,dx\,dt.
\end{equation}
Using the definition of operator $\mathbb{Q}_h$ and of the weak gradient
$\nabla_d$ in (\ref{WGoperator}), we arrive at
$$
\int_{ K} a \nabla_d u_h\cdot\nabla_d v \,dx=\int_K\mathbb{Q}_h(a\nabla_d u_h)\cdot
\nabla_d v\, dx=-\int_K\nabla\cdot \mathbb{Q}_h(a\nabla_d
u_h)\,dx=-\int_{\partial K}\mathbb{Q}_h(a\nabla_du_h)\cdot {\bf n}\, ds.
$$
Then by substituting in (\ref{EC}), we have the energy conservation property
$$
\int_{t-\Delta t}^{t+\Delta t}\int_K (u_h)_t\,dx\,dt+\int_{t-\Delta
t}^{t+\Delta t} \int_{\partial K}-\mathbb{Q}_h(a\nabla_du_h)\cdot {\bf n}\,
ds\,dt=\int_{t-\Delta t}^{t+\Delta t}\int_K f\,dx\,dt,
$$
which provides a numerical flux given by
$$
q_h\cdot {\bf n}=-\mathbb{Q}_h(a\nabla_du_h)\cdot {\bf n}.
$$
The numerical flux $q_h\cdot {\bf n}$ is continuous across the interior edges of the triangulation, which can be verified by selecting the test function $v=\{v_0,v_b\}$ with $v_0\equiv0$ and $v_b$ arbitrary.
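In more detail (a sketch of this verification, under the additional
assumption that ${\bf q}\cdot{\bf n}|_e\in P_l(e)$ for every ${\bf q}\in
V(T,r)$, as is the case for the Raviart-Thomas choice with $l=j$): taking
$v=\{0,v_b\}$ with $v_b$ supported on a single interior edge
$e=\partial T_1\cap\partial T_2$ in (\ref{semiWG}) and using
(\ref{WGoperator}) gives
$$
0=(a\nabla_d u_h,\nabla_d v)
=\int_e v_b\,\big[\mathbb{Q}_h(a\nabla_du_h)|_{T_1}\cdot {\bf n}_1
+\mathbb{Q}_h(a\nabla_du_h)|_{T_2}\cdot {\bf n}_2\big]\,ds,
\quad \forall\, v_b\in P_l(e),
$$
where ${\bf n}_1$ and ${\bf n}_2=-{\bf n}_1$ are the outward normals of $T_1$
and $T_2$ on $e$. Hence the jump of $\mathbb{Q}_h(a\nabla_du_h)\cdot{\bf n}$
across $e$ is orthogonal to $P_l(e)$ and, under the stated assumption,
vanishes identically.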
\section{Error Analysis}
In this section, we derive some error estimates for both continuous
and discrete time weak Galerkin finite element methods. The
difference between the weak Galerkin finite element approximation
$u_h$ and the $L^2$ projection $Q_h u$ of the exact solution $u$ is
measured. We first state a result concerning an approximation
property as follows.
\begin{lemma}\label{approx}
For $u\in H^{1+r}(\Omega)$ with $r>0$, we have
$$
\|\Pi_h(a\nabla u)-a \mathbb{Q}_h(\nabla u)\|\leq Ch^r\|u\|_{1+r}.
$$
\end{lemma}
\begin{proof} Since from (\ref{identity1}) we have $\mathbb{Q}_h(\nabla u)=\nabla_d(Q_h u)$, it follows that
$$\|\Pi_h(a\nabla u)-a \mathbb{Q}_h(\nabla u)\|=\|\Pi_h(a\nabla u)-a \nabla_d(Q_h u)\|.$$
Using the triangle inequality, the definition of $\Pi_h$ and the second condition {\bf P2} on the discrete weak space $S_h(j,l)$, we have
$$
\|\Pi_h(a\nabla u)-a \nabla_d(Q_h u)\|\leq\|\Pi_h(a\nabla u)-a \nabla u\|+\|a\nabla u-a \nabla_d(Q_h u)\|
\leq Ch^r\|u\|_{1+r}.
$$
\end{proof}
\subsection{Continuous Time Weak Galerkin Finite Element Method}
Our aim is to prove the following estimates for the error of the
semidiscrete solution.
\newtheorem{theorem}{Theorem}[section]
\begin{theorem}\label{theorem1}
Let $u\in H^{1+r}(\Omega)$ and $u_h$ be the solutions of
(\ref{continuousP}) and (\ref{semiWG}), respectively. Denote by
$e:=u_h-Q_h u$ the difference between the weak Galerkin
approximation and the $L^2$ projection of the exact solution $u$.
Then there exists a constant $C$ such that
$$
\|e(\cdot,t)\|^2+\int_0^t \alpha\|\nabla_d e\|^2\,ds
\leq \|e(\cdot, 0)\|^2+ Ch^{2r}\int_0^t \|u\|^2_{1+r}\,ds,
$$
and
\begin{eqnarray*}
& &\int_0^t \|e_t\|^2\,ds+\frac{\alpha}{4}\|\nabla_d e(\cdot,t)\|^2 +
(1+\frac{\gamma}{2\alpha})\|e\|^2\\
&\leq& {\beta}\|\nabla_d e(\cdot,0)\|^2 +
(1+\frac{\gamma}{2\alpha})\|e(\cdot,0)\|^2 \\
& & + C h^{2r}\left(\|u(\cdot,0)\|^2_{1+r}+\|u\|^2_{1+r}+ \int_0^t
\|u\|^2_{1+r}\,ds+\int_0^t \|u_t\|^2_{1+r}\,ds\right).
\end{eqnarray*}
\end{theorem}
\begin{proof}
Let $v=\{v_0,v_b\}\in S^0_h(j,l)$ be the testing function. By
testing (\ref{heat}) against $v_0$, together with $\mathbb{Q}_h(\nabla
u)=\nabla_d(Q_h u)$ for $u\in H^1$ and $(Q_0u_t,v_0)=(u_t,v_0)$, we
obtain
\begin{eqnarray*}
(f,v_0)&=&(u_t,v_0)+\sum_{T\in \mathcal{T}_h}(-\nabla\cdot(a\nabla u),v_0)_T\\
&=&(u_t,v_0)+(\Pi_h(a\nabla u),\nabla_d v)\\
&=& (Q_h u_t,v_0)+(\Pi_h(a\nabla u)-a \mathbb{Q}_h(\nabla u),\nabla_d
v)+(a\nabla_d(Q_h u),\nabla_d v)
\end{eqnarray*}
Since the numerical solution satisfies the weak Galerkin scheme (\ref{semiWG}), we have
$$
(f,v_0)=((u_{h})_{t}, v_0) + a(u_h,v).
$$
Combining the above two equations we obtain
\begin{equation}\label{errorEQ}
((u_h-Q_h u)_t, v_0)+ a(u_h-Q_h u,v)=(\Pi_h (a\nabla u)-a \mathbb{Q}_h
(\nabla u),\nabla_d v),
\end{equation}
which shall be called the error equation for the WG for the heat
equation.
Let $e=u_h-Q_h u$ be the error. Using $v=e$ in the error equation, we obtain
$$
(e_t,e)+a(e,e)=(\Pi_h(a\nabla u)-a \mathbb{Q}_h(\nabla u),\nabla_d e).
$$
By the Cauchy-Schwarz inequality and the coercivity of the bilinear form, we have
$$
\frac12\frac{d}{dt}\|e\|^2+\alpha\|\nabla_d e\|^2\leq
\frac1{2\alpha}\|\Pi_h(a\nabla u)-a \mathbb{Q}_h(\nabla
u)\|^2+\frac{\alpha}{2}\|\nabla_d e\|^2.
$$
It follows that
$$
\frac{d}{dt}\|e\|^2+\alpha\|\nabla_d e\|^2\leq
\frac1{\alpha}\|\Pi_h(a\nabla u)-a \mathbb{Q}_h(\nabla u)\|^2,
$$
and hence, after integration,
\begin{equation}\label{errorL2}
\|e\|^2+\int_0^t \alpha\|\nabla_d e\|^2\,ds\leq \|e(\cdot,0)\|^2+
\frac1{\alpha}\int_0^t \|\Pi_h(a\nabla u)-a \mathbb{Q}_h(\nabla u)\|^2\, ds.
\end{equation}
Then by Lemma \ref{approx}, we have the assertion.
In order to estimate $\nabla_d e$, we use the error equation with
$v=(u_h-Q_h u)_t=e_t$. We obtain
\begin{eqnarray*}
(e_t,e_t)+a(e,e_t)&=&(\Pi_h(a\nabla u)-a \mathbb{Q}_h(\nabla u),\nabla_d e_t)\\
&=&\frac{d}{dt}(\Pi_h(a\nabla u)-a \mathbb{Q}_h(\nabla u),\nabla_d
e)\\
& & -(\Pi_h(a\nabla u_t)-a \mathbb{Q}_h(\nabla u_t),\nabla_d
e)\\
& & -(\Pi_h(a_t\nabla u)-a_t \mathbb{Q}_h(\nabla u),\nabla_d e).
\end{eqnarray*}
By the Cauchy-Schwarz inequality, this shows that
\begin{eqnarray*}
\|e_t\|^2+\frac{1}{2}(\frac{d}{dt}a( e,e)-(a_t\nabla_d e,\nabla_d e))&\leq& \frac{d}{dt}(\Pi_h(a\nabla u)-a \mathbb{Q}_h(\nabla u),\nabla_d e)\\
&+&\frac1{2\alpha} \|\Pi_h(a\nabla u_t)-a \mathbb{Q}_h(\nabla u_t)\|^2+\frac{\alpha}{2} \|\nabla_d e\|^2\\
&+&\frac1{2\alpha} \|\Pi_h(a_t\nabla u)-a_t \mathbb{Q}_h(\nabla
u)\|^2+\frac{\alpha}{2} \|\nabla_d e\|^2,
\end{eqnarray*}
i.e.,
\begin{eqnarray*}
\|e_t\|^2+\frac{1}{2}\frac{d}{dt}a( e,e)&\leq& \frac12(a_t\nabla_d e,\nabla_d e)+ \frac{d}{dt}(\Pi_h(a\nabla u)-a \mathbb{Q}_h(\nabla u),\nabla_d e)\\ &+&\frac1{2\alpha} \|\Pi_h(a\nabla u_t)-a \mathbb{Q}_h(\nabla u_t)\|^2+\frac{\alpha}{2} \|\nabla_d e\|^2\\
&+&\frac1{2\alpha} \|\Pi_h(a_t\nabla u)-a_t \mathbb{Q}_h(\nabla
u)\|^2+\frac{\alpha}{2} \|\nabla_d e\|^2.
\end{eqnarray*}
Thus, integrating with respect to $t$ and together with the
coercivity and boundedness yields
\begin{eqnarray*}
& &\int_0^t \|e_t\|^2\,ds+\frac{\alpha}{2}\|\nabla_d e(\cdot,t)\|^2\\
&\leq& \frac{{\beta}}{2}\|\nabla_d e(\cdot,0)\|^2
+ (\Pi_h(a\nabla u(\cdot,t))-a \mathbb{Q}_h(\nabla u(\cdot,t)),\nabla_d
e(\cdot,t))\\
&-&(\Pi_h(a\nabla u(\cdot,0))-a \mathbb{Q}_h(\nabla u(\cdot,0)),\nabla_d
e(\cdot,0))+\frac1{2\alpha}\int_0^t \|\Pi_h(a\nabla u_t)-a
\mathbb{Q}_h(\nabla u_t)\|^2\,ds\\
&+&\frac1{2\alpha}\int_0^t \|\Pi_h(a_t\nabla u)-a_t \mathbb{Q}_h(\nabla
u)\|^2\,ds+(\alpha+\frac{\gamma}{2}) \int_0^t\|\nabla_d e\|^2\,ds.
\end{eqnarray*}
By adding $(\alpha+\gamma/2)/\alpha=1+\frac{\gamma}{2\alpha}$ times inequality (\ref{errorL2}) to the above inequality, we arrive at
\begin{eqnarray*}
& &\int_0^t \|e_t\|^2\,ds+\frac{\alpha}{2}\|\nabla_d e(\cdot,t)\|^2 +(1+\frac{\gamma}{2\alpha}) \|e\|^2\\
&\leq& \frac{{\beta}}{2}\|\nabla_d e(\cdot,0)\|^2 + (1+\frac{\gamma}{2\alpha})\|e(\cdot,0)\|^2\\
&+&(\Pi_h(a\nabla u(\cdot,t))-a \mathbb{Q}_h(\nabla u(\cdot,t)),\nabla_d
e(\cdot,t))
{-}(\Pi_h(a\nabla u(\cdot,0))-a \mathbb{Q}_h(\nabla u(\cdot,0)),\nabla_d e(\cdot,0)) \\
&+&\frac1{2\alpha}\int_0^t \|\Pi_h(a\nabla u_t)-a \mathbb{Q}_h(\nabla
u_t)\|^2\,ds
+\frac1{2\alpha}\int_0^t \|\Pi_h(a_t\nabla u)-a_t \mathbb{Q}_h(\nabla u)\|^2\,ds\\
&+&(\frac1{\alpha}+\frac{\gamma}{2\alpha^2})\int_0^t \|\Pi_h(a\nabla
u)-a \mathbb{Q}_h(\nabla u)\|^2\, ds.
\end{eqnarray*}
Next, we use the Cauchy-Schwarz inequality to obtain
\begin{eqnarray*}
& &\int_0^t \|e_t\|^2\,ds+\frac{\alpha}{4}\|\nabla_d e(\cdot,t)\|^2 + (1+\frac{\gamma}{2\alpha})\|e\|^2\\
&\leq& {{\beta}}\|\nabla_d e(\cdot,0)\|^2 + (1+\frac{\gamma}{2\alpha})\|e(\cdot,0)\|^2\\
&+&\frac1\alpha\|\Pi_h(a\nabla u(\cdot,t))-a \mathbb{Q}_h(\nabla
u(\cdot,t))\|^2
+\frac1{2\alpha}\|\Pi_h(a\nabla u(\cdot,0))-a \mathbb{Q}_h(\nabla u(\cdot,0))\|^2 \\
&+&\frac1{2\alpha}\int_0^t \|\Pi_h(a\nabla u_t)-a \mathbb{Q}_h(\nabla
u_t)\|^2\,ds
+\frac1{2\alpha}\int_0^t \|\Pi_h(a_t\nabla u)-a_t \mathbb{Q}_h(\nabla u)\|^2\,ds\\
&+&(\frac1{\alpha}+\frac{\gamma}{2\alpha^2})\int_0^t \|\Pi_h(a\nabla
u)-a \mathbb{Q}_h(\nabla u)\|^2\, ds.
\end{eqnarray*}
Then by Lemma \ref{approx}, the proof is completed.
\end{proof}
\subsection{Discrete Time Weak Galerkin Finite Element Method}
We begin with the following lemma which provides a Poincar\'e-type
inequality with the weak gradient operator.
\begin{lemma}\label{poincare}
Assume that {$\phi=\{\phi_0,\phi_b\}\in S^0_h (j,j)$}, then there
exists a constant $C$ such that
$$
\|\phi\|\leq C \|\nabla_d \phi\|,
$$
where $\nabla_d \phi\in V(T,r=j+1)=[P_{j}(T)]^2+\hat{P}_j(T){\bf x}$.
\end{lemma}
\begin{proof}
Let $\bar{\phi}_0$ be the piecewise constant function given by the cell
average of $\phi_0$ on each element $T$. Let $\phi_I\in H^1_0(\Omega)$
be a continuous piecewise polynomial with vanishing boundary value
lifted from $\phi$ as follows. Let $G_j(T)$ be the set of all
Lagrangian nodal points for $P_{j+1}(T)$. At all internal Lagrangian
nodal points $x_i\in G_j(T)$, we set $\phi_I(x_i)=\bar{\phi}_0$. At
boundary Lagrangian points $x_i\in G_j(T)\cap\partial T$, we let
$\phi_I(x_i)$ be the trace of $\bar{\phi}_0$ from either side of the
boundary. At global Lagrangian points $x_i\in
\partial T\cap\partial \Omega$, we set $\phi_I(x_i)=0$. Let
$\jump{\phi_0}_e$ denote the jump of $\phi_0$ on the edge $e$; i.e.,
\begin{equation}\label{jw.05}
\jump{\phi_0}_e=\phi_0|_{T_1}-\phi_0|_{T_2}
=(\phi_0|_{T_1}-\phi_b)-(\phi_0|_{T_2}-\phi_b).
\end{equation}
By the classical Poincar\'e inequality for $\phi_I$, we have
\begin{eqnarray}\label{jw.07}
\|\phi\|&\leq& \|\phi-\phi_I\|+\|\phi_I\|\\
&\leq&\left(\sum_T \|\phi-\phi_I\|^2_{T}\right)^{\frac12}+
C\|\nabla\phi_I\|.\nonumber
\end{eqnarray}
From Lemma 4.3 in \cite{JW_WG_stoke}, we have
\begin{equation}\label{jw.06}
\|\phi-\phi_I\|^2_T \leq \sum_{T'\in \mathcal{T}(T)} h_{T'}^2
\|\nabla \phi_0\|_{T'}^2 +\sum_{e\in \large{\varepsilon}(T)}h_e
\|\jump{\phi_0}\|_e^2,\quad \forall T\in \mathcal{T}_h,
\end{equation}
where $ \mathcal{T}(T)$ denotes the set of all triangles in
$\mathcal{T}_h$ having a nonempty intersection with $T$, including $T$
itself, and $\large{\varepsilon}(T)$ denotes the set of all edges
having a nonempty intersection with $T$. Note the elementary fact
that
\begin{equation}\label{jw.02}
\|\nabla\phi_I\|^2\leq C\sum_T |\phi_I(x_i)-\phi_I(x_k)|^2,
\end{equation}
where $x_i$ and $x_k$ run through all the Lagrangian nodal points on
$T$. By construction, $\phi_I(x_i)$ is the cell average of $\phi_0$
on either the element $T$ or an adjacent element $T_*$ which shares
with $T$ a common edge or a vertex point. Thus,
$\phi_I(x_i)-\phi_I(x_k)$ is either zero or the difference of the
cell average of $\phi_0$ on two adjacent elements $T$ and $T_*$. For
the latter case, assume that $x_e$ is a point shared by $T$ and
$T_*$. It is not hard to see that
$$
\left|\bar\phi_0|_T - \phi_0(x_e)\right|^2 \leq C \int_T
|\nabla\phi_0|^2 dx.
$$
Thus, we have
\begin{eqnarray*}\label{jw.01}
|\phi_I(x_i)-\phi_I(x_k)|^2&=&|\bar\phi_0|_T - \bar\phi_0|_{T_*}|^2\\
&=& \left|\bar\phi_0|_T-\phi_0|_T(x_e)+ \jump{\phi_0}(x_e) +
\phi_0|_{T_*}(x_e)- \bar\phi_0|_{T_*}\right|^2\nonumber\\
&\le& C\sum_{e\subset\partial T} h_e^{-1} \|\jump{\phi_0}\|_e^2+C
\int_{T\cup T_*} |\nabla\phi_0|^2 dx.\nonumber
\end{eqnarray*}
Substituting the above estimate into (\ref{jw.02}) yields
\begin{equation}\label{jw.03}
\|\nabla\phi_I\|^2\leq C\sum_T \left(\|\nabla \phi_0\|_T^2+h^{-1}
\|\jump{\phi_0}\|_{\partial T}^2\right).
\end{equation}
By combining (\ref{jw.07}) with (\ref{jw.06}) and (\ref{jw.03}) we
obtain
\begin{equation}\label{bound1}
\|\phi\|^2\leq C\sum_T \left(\|\nabla \phi_0\|_T^2+h^{-1}
\|\phi_0-\phi_b\|_{\partial T}^2\right),
\end{equation}
where (\ref{jw.05}) has been applied to estimate the jump of
$\phi_0$ on each edge.
Next, we want to bound the two terms on the right hand side of
(\ref{bound1}) by $\|\nabla_d \phi\|$. Let us recall that the weak
gradient $\nabla_d\phi$ is defined by
$$
\int_T\nabla_d \phi\cdot {\bf q} dx=-\int_T\phi_0 \nabla\cdot {\bf
q} dx +\int_{\partial T} \phi_b {\bf q}\cdot {\bf n} ds, \quad \forall
{\bf q}\in V(T,j+1),
$$
and by using integration by parts, we have
\begin{equation}\label{WG2}
\int_T\nabla_d \phi\cdot {\bf q} dx=\int_T\nabla\phi_0 \cdot {\bf q}
dx-\int_{\partial T} (\phi_0-\phi_b) {\bf q}\cdot {\bf n} ds, \quad
\forall {\bf q}\in V(T,j+1).
\end{equation}
In order to bound $\|\nabla \phi_0\|$ and $ \|\jump{\phi_0}\|_e$ by
$\|\nabla_d \phi\|$, we let ${\bf q}$ in (\ref{WG2}) satisfy the
following
\begin{eqnarray*}
\int_T {\bf q}\cdot {\bf r} \,dx&=&\int_T \nabla \phi_0\cdot {\bf r}
\,dx\quad\forall {\bf r}\in [P_j(T)]^2,\\
\int_{\partial T} {\bf q}\cdot {\bf n} \mu\,ds&=& -h^{-1}\int_{\partial
T} ( \phi_0-\phi_b)\mu\,ds\quad\forall \mu\in P_{j+1}[\partial T],
\end{eqnarray*}
where by Lemma 5.1 in \cite{stoke}, ${\bf q}$ and $\phi$ have the
norm equivalence of
\begin{equation}\label{norm}
\|{\bf q}\|_{L^2(T)}\approx\|\nabla
\phi_0\|_{L^2(T)}+h^{-\frac12}\|\phi_0-\phi_b\|_{L^2(\partial T)}.
\end{equation}
Then from (\ref{WG2}), we have
\begin{eqnarray*}
\int_T\nabla_d \phi\cdot {\bf q} dx&=&\int_T\nabla\phi_0 \cdot {\bf q} dx-\int_{\partial T} (\phi_0-\phi_b) {\bf q}\cdot {\bf n} ds\\
&=&\int_T \nabla \phi_0\cdot \nabla \phi_0 \,dx+h^{-1}\int_{\partial T} ( \phi_0-\phi_b)^2\,ds\\
&=& \|\nabla \phi_0\|_{T}^2+h^{-1}\| \phi_0-\phi_b\|^2_{\partial T}.
\end{eqnarray*}
Using (\ref{WG2}) and (\ref{norm}), we have
\begin{eqnarray*}
\|\nabla_d \phi\|_T\geq \frac1{\|{\bf q}\|}(\nabla_d \phi, {\bf q})_T&=&\frac1{\|{\bf q}\|}(\| \nabla\phi_0\|_{T}^2+h^{-1}\| \phi_0-\phi_b\|^2_{\partial T})\\
&\geq& C (\| \nabla\phi_0\|_{T}^2+h^{-1}\|
\phi_0-\phi_b\|^2_{\partial T})^{\frac12},
\end{eqnarray*}
then together with (\ref{bound1}), we have the assertion.
\end{proof}
With the results established in Lemma \ref{poincare}, we are ready
to derive an error estimate for the discrete time weak Galerkin
approximation $U^n$, as stated in the following theorem.
\begin{theorem}\label{theorem2}
Let $u\in H^{1+r}(\Omega)$ and $U^n$ be the solutions of (\ref{continuousP}) and (\ref{discreteWG}), respectively. Denote by $e^n:=U^n-Q_h u(t_n)$ the difference between the backward Euler weak Galerkin approximation and the $L^2$ projection of the exact solution $u$. Then there exists a constant $C$ such that
$$
\|e^n\|^2+\sum_{j=1}^n\alpha k\|\nabla_d e^j\|^2\leq \|e^0\|^2+ C(h^{2r}\|u\|^2_{1+r}+k^2\int_0^{t_n} \|u_{tt}\|^2\,ds),\quad \mbox{ for } n\geq 0,
$$
and
$$
\|\nabla_d e^n\|^2\leq C\left(\|e^0\|^2+\|\nabla_d e^0\|^2 +h^{2r}(\|u\|_{1+r}^2+\|u_t\|^2_{1+r})+k^2\int_0^{t_n}\|u_{tt}\|^2ds\right),
$$
where $\|u\|_{1+r}=\displaystyle\max_{j=1,\cdots,n}\{\|u(t_j)\|_{1+r}\}$ and $\|u_t\|_{1+r}=\displaystyle\max_{j=1,\cdots,n}\{\|u_t(t_j)\|_{1+r}\}$.
\end{theorem}
\begin{proof}
A calculation corresponding to the error equation (\ref{errorEQ})
yields
\begin{eqnarray*}
&& \quad (\bar{\partial} (U^n-Q_h u(t_n)),v_0)+a(U^n-Q_h
u(t_n),v)\\
&&=(u_t(t_n)-\bar{\partial}u(t_n),v_0)+(\Pi_h (a\nabla u(t_n))-a
\mathbb{Q}_h (\nabla u(t_n)),\nabla_d v),
\end{eqnarray*}
i.e.,
\begin{equation}\label{DerrorEQ}
(\bar{\partial}
e^n,v_0)+a(e^n,v)=(u_t(t_n)-\bar{\partial}u(t_n),v_0)+(\Pi_h
(a\nabla u(t_n))-a \mathbb{Q}_h (\nabla u(t_n)),\nabla_d v).
\end{equation}
Let $w_1^n=u_t(t_n)-\bar{\partial}u(t_n)$ and $w_2^n=\Pi_h (a\nabla
u(t_n))-a \mathbb{Q}_h (\nabla u(t_n))$. Choosing $v=e^n$ in
(\ref{DerrorEQ}), we have
$$
(\bar{\partial} e^n,e^n)+a(e^n,e^n)=(w_1^n,e^n)+(w_2^n,\nabla_d e^n).
$$
By the coercivity of the bilinear form and the Cauchy-Schwarz inequality, we obtain
$$
\|e^n\|^2-(e^{n-1},e^n)+\alpha k\|\nabla_d e^n\|^2\leq k\|w_1^n\|\|e^n\|+k\|w_2^n\|\|\nabla_d e^n\|,
$$
i.e.,
$$
\|e^n\|^2+\alpha k\|\nabla_d e^n\|^2\leq \frac12\|e^{n-1}\|^2+\frac12\|e^{n}\|^2+k\|w_1^n\|\|e^n\|+k\|w_2^n\|\|\nabla_d e^n\|.
$$
By the Poincar\'e inequality of Lemma \ref{poincare}, we have
$$
\frac12\|e^{n}\|^2+\alpha k\|\nabla_d e^n\|^2\leq\frac12\|e^{n-1}\|^2+k(C\|w_1^n\|+\|w_2^n\|)(\|\nabla_d e^n\|).
$$
Then by Young's inequality, it follows that
$$
\frac12\|e^n\|^2+\frac{\alpha k}2\|\nabla_d e^n\|^2\leq\frac12\|e^{n-1}\|^2+\frac{k}{2\alpha}(C\|w_1^n\|+\|w_2^n\|)^2.
$$
so that
\begin{eqnarray*}
\|e^n\|^2+\alpha k\|\nabla_d e^n\|^2\leq \|e^{n-1}\|^2+\frac{Ck}{\alpha}\|w_1^n\|^2+\frac{k}{\alpha}\|w_2^n\|^2.
\end{eqnarray*}
and, by repeated application,
\begin{equation}\label{Derror}
\|e^n\|^2+\sum_{j=1}^n\alpha k\|\nabla_d e^j\|^2\leq \|e^0\|^2+\frac{Ck}{\alpha}\sum_{j=1}^n \|w_1^j\|^2+\frac k\alpha\sum_{j=1}^n \|w_2^j\|^2.
\end{equation}
We write
\begin{equation}\label{w1integral}
kw_1^j=ku_t(t_j)-(u(t_j)-u(t_{j-1}))=\int_{t_{j-1}}^{t_j} (s-t_{j-1})u_{tt}(s)\,ds,
\end{equation}
i.e.,
$$
w_1^j=u_t(t_j)-\frac{u(t_j)-u(t_{j-1})}{k}=\frac1k\int_{t_{j-1}}^{t_j} (s-t_{j-1})u_{tt}(s)\,ds,
$$
so that
\begin{eqnarray}\label{w1}
\|w_1^j\|^2&=&\int_\Omega \{\frac1k\int_{t_{j-1}}^{t_j} (s-t_{j-1})u_{tt}(s)\,ds\}^2\,dx\nonumber\\
&\leq&\frac1{k^2} \int_\Omega \int_{t_{j-1}}^{t_j}(s-t_{j-1})^2\,ds \int_{t_{j-1}}^{t_j} u^2_{tt}(s)\,ds\,\,dx\\
&\leq& Ck \int_{t_{j-1}}^{t_j} \|u_{tt}\|^2\,ds.\nonumber
\end{eqnarray}
Substituting (\ref{w1}) into (\ref{Derror}) and using Lemma
\ref{approx}, we have the error estimate for $\|e^n\|$.
In order to show an estimate for $\nabla_d e^n$, we may choose
instead $v=\bar{\partial} e^n$ in the error equation (\ref{DerrorEQ}) to
obtain the following identity
$$
(\bar{\partial} e^n,\bar{\partial} e^n)+a(e^n,\bar{\partial} e^n)=(w^n_1,\bar{\partial} e^n)+(w^n_2,\nabla_d \bar{\partial} e^n).
$$
The second term on the right hand side can be written as
$$
(w^n_2,\nabla_d \bar{\partial} e^n)=\bar{\partial} (w^n_2,\nabla_d e^n)-((w_2^n)_t-\bar{\partial} w^n_2,\nabla_d e^{n-1})+((w_2^n)_t,\nabla_d e^{n-1}).
$$
Then the error equation becomes
\begin{eqnarray*}
k\|\bar{\partial} e^n\|^2+(a\nabla_d e^n,\nabla_d e^n)&=&(a\nabla_d e^n,\nabla_d e^{n-1})+k(w_1^n,\bar{\partial} e^n)\\
&+&k\bar{\partial}(w^n_2,\nabla_d e^n)-k((w_2^n)_t-\bar{\partial} w^n_2,\nabla_d e^{n-1})+k((w_2^n)_t,\nabla_d e^{n-1}).
\end{eqnarray*}
By the Cauchy-Schwarz and Young inequalities, we have
\begin{eqnarray*}
k\|\bar{\partial} e^n\|^2+(a\nabla_d e^n,\nabla_d e^n)&\leq& \frac12(a\nabla_d e^n,\nabla_d e^n)+\frac12(a\nabla_d e^{n-1},\nabla_d e^{n-1})\\
&+&\frac{k}4\|w_1^n\|^2+k\|\bar{\partial} e^n\|^2+k\bar{\partial}(w^n_2,\nabla_d e^n)\\
&+&\frac{k}2\|(w_2^n)_t-\bar{\partial} w^n_2\|^2+\frac{k}2\|\nabla_d e^{n-1}\|^2\\
&+&\frac{k}{2}\|(w_2^n)_t\|^2+\frac{k}{2}\|\nabla_d e^{n-1}\|^2,
\end{eqnarray*}
and, after cancellation and by repeated application,
\begin{eqnarray*}
\frac12(a\nabla_d e^n,\nabla_d e^n)&\leq& \frac12(a\nabla_d e^{0},\nabla_d e^{0})\\
&+&\frac{k}4\sum_{j=1}^n\|w_1^j\|^2+(w^n_2,\nabla_d e^n)-(w^0_2,\nabla_d e^0)\\
&+&\frac{k}2\sum_{j=1}^n\|(w_2^j)_t-\bar{\partial} w^j_2\|^2+\frac{k}{2}\sum_{j=1}^n\|(w_2^j)_t\|^2+k\sum_{j=1}^n\|\nabla_d e^{j-1}\|^2,
\end{eqnarray*}
which is
\begin{eqnarray}\label{graderror}
\frac{\alpha}2\|\nabla_d e^n\|^2&\leq& \frac{\beta}2\|\nabla_d e^{0}\|^2\nonumber\\
&+&\frac{k}4\sum_{j=1}^n\|w_1^j\|^2+\frac1{\alpha}\|w^n_2\|^2+\frac{\alpha}4\|\nabla_d e^n\|^2+\frac1{2\beta}\|w^0_2\|^2+\frac{\beta}2\|\nabla_d e^0\|^2\\
&+&\frac{k}2\sum_{j=1}^n\|(w_2^j)_t-\bar{\partial} w^j_2\|^2+\frac{k}{2}\sum_{j=1}^n\|(w_2^j)_t\|^2+k\sum_{j=1}^n\|\nabla_d e^{j-1}\|^2,\nonumber
\end{eqnarray}
where we have used the Cauchy-Schwarz inequality and the boundedness and coercivity of the bilinear form.
By a similar computation as in (\ref{w1}), together with Lemma \ref{approx}, we have
$$
\|(w_2^j)_t-\bar{\partial} w^j_2\|^2\leq Ck\int_{t_{j-1}}^{t_j}\|(w_2)_{tt}\|^2\,ds\leq C kh^{2r}\int_{t_{j-1}}^{t_j}\|u_{tt}\|^2_{1+r}\,ds.
$$
Then by substituting (\ref{w1}), (\ref{Derror}) and the above
inequality into (\ref{graderror}), we have the error estimate for
$\|\nabla_d e^n\|$ as the following
$$
\|\nabla_d e^n\|^2\leq C\left(\|e^0\|^2+\|\nabla_d e^0\|^2 +h^{2r}(\|u\|_{1+r}^2+\|u_t\|^2_{1+r})+k^2\int_0^{t_n}\|u_{tt}\|^2ds\right),
$$
which completes the proof.
\end{proof}
\subsection{Optimal Order Error Estimates in $L^2$}
To obtain an optimal order error estimate in $L^2$, we use an idea
similar to Wheeler's projection as in \cite{Wheeler} and
\cite{VThomee}: an elliptic projection $E_h$ onto the
discrete weak space $S_h(j,l)$ is defined as follows. Find
$E_h v\in S_h(j,l)$ such that $E_hv$ is the $L^2$ projection of the
trace of $v$ on the boundary $\partial\Omega$ and
\begin{equation}\label{ellipticOP}
(a\nabla_d E_h v,\nabla_d \chi)=(-\nabla\cdot(a\nabla v),\chi),\quad
\forall\chi\in S_h^0(j,l).
\end{equation}
In view of the weak formulation of the elliptic problem,
\begin{eqnarray}\label{ellipticP}
-\nabla\cdot(a\nabla v)= F\quad \mbox{ in } \Omega,\\
v=g\quad \mbox{on }\partial \Omega,\nonumber
\end{eqnarray}
this definition may be expressed by saying that $E_h v$ is the weak Galerkin finite element approximation of the solution of the corresponding elliptic problem with exact solution $v$. By the error estimate results in \cite{JW_WG}, we have the following error estimate for $E_h v$.
\begin{lemma}\label{elliptic_error}
Assume that problem (\ref{ellipticP}) has the $H^{1+s}$ regularity $(s\in (0,1])$. Let $v\in H^{1+r}$ be the exact solution of (\ref{ellipticP}), and $E_h v$ be a weak Galerkin approximation of $v$ defined in (\ref{ellipticOP}). Let $Q_h v=\{Q_0 v, Q_b v\}$ be the $L^2$ projection of $v$ in the corresponding finite element space. Then there exists a constant $C$ such that
\begin{equation}\label{ellipticEE}
\|Q_0 v-E_h v\|\leq C(h^{s+1}\|F-Q_0 F\|+h^{r+s}\|v\|_{r+1}),
\end{equation}
and
\begin{equation}\label{ellipticEE2}
\|\nabla_d (Q_h v-E_h v)\|\leq Ch^r\|v\|_{r+1}.
\end{equation}
\end{lemma}
Throughout this section the error in the parabolic problem
(\ref{continuousP}) is written as a sum of two terms,
\begin{equation}\label{twoterm}
u_h(t)-Q_h u(t)= \theta(t)+\rho(t),\quad \mbox{ where } \theta=u_h-E_h u,\quad \rho=E_h u-Q_h u,
\end{equation}
which will be bounded separately. Notice that the second term is the error in an elliptic problem and can therefore be handled by applying the results in Lemma \ref{elliptic_error}. Our main goal here is thus to bound the first term $\theta$.
Following the above strategy, the error estimates for the continuous time weak Galerkin finite element method in the $L^2$ and $H^1$ norms are provided in the next two theorems.
\begin{theorem}\label{RiezeL2}
Under the assumption of Theorem \ref{theorem1} and the assumption that the corresponding elliptic problem has the $H^{1+s}$ regularity $(s\in (0,1])$, there exists a constant $C$ such that
\begin{eqnarray*}
\|u_h(t)-Q_h u(t)\|&\leq& \|u_h(0)-Q_h u(0)\|+Ch^{r+s}(\|\psi\|_{r+1}+\int_0^t\|u_t\|_{r+1}\,ds)\\
&+&Ch^{s+1}(\|f(0)-Q_0f(0)\|+\|u_t(0)-Q_0u_t(0)\|)\\
&+&Ch^{s+1}\left\{ \int_0^t (\|f_t-Q_0f_t\|+\|u_{tt}-Q_0u_{tt}\|)\,ds\right\}.
\end{eqnarray*}
\end{theorem}
\begin{proof}
We write the error according to (\ref{twoterm}) and obtain the error bound for $\rho$ easily by Lemma \ref{elliptic_error} as follows:
\begin{equation}\label{rhobound}
\|\rho\|\leq C(h^{s+1}(\|f-Q_0f\|+\|u_t-Q_0u_t\|)+h^{r+s}(\|\psi\|_{r+1}+\int_0^t\|u_t\|_{r+1}\,ds)).
\end{equation}
In order to estimate $\theta$, we note that by our definitions
\begin{eqnarray*}
(\theta_t,\chi)+a(\theta,\chi)&=&(u_{h,t},\chi)+a(u_h,\chi)-(E_h u_t,\chi)-a(E_h u, \chi)\\
&=&(f,\chi)-(E_h u_t,\chi)-a(E_h u, \chi)\\
&=&(f,\chi)+(\nabla\cdot(a\nabla u),\chi)-(E_h u_t,\chi)\\
&=&(u_t,\chi)-(E_h u_t,\chi)\\
&=&(Q_h u_t,\chi)-(E_h u_t,\chi)\\
&=&-(\rho_t,\chi),
\end{eqnarray*}
which is
\begin{equation}\label{thetaeq}
(\theta_t,\chi)+a(\theta,\chi)=-(\rho_t,\chi),\quad \forall \chi\in
S_h^0(j,l),\,\: t>0,
\end{equation}
where we have used the fact that the operator $E_h$ commutes with
time differentiation. Since $\theta\in S_h^0(j,l)$, we may choose
$\chi=\theta$ in (\ref{thetaeq}) and obtain
$$
(\theta_t,\theta)+a(\theta,\theta)=-(\rho_t,\theta),\quad t>0.
$$
Since
$$
a(\theta,\theta)\geq \alpha\|\nabla_d\theta\|^2>0,
$$
we have
$$
\frac12\frac{d}{dt}\|\theta\|^2=\|\theta\|\frac{d}{dt} \|\theta\|\leq \|\rho_t\|\|\theta\|,
$$
and hence
$$
\|\theta(t)\|\leq \|\theta(0)\|+\int_0^t\|\rho_t\|\,ds.
$$
Using Lemma \ref{elliptic_error}, we find
\begin{eqnarray}\label{theta0}
\|\theta(0)\|&=&\|u_h(0)-E_h u(0)\|\\
&\le&\|u_h(0)-Q_h u(0)\|+\|E_h u(0)-Q_h u(0)\| \nonumber \\
&\leq&\|u_h(0)-Q_h u(0)\|\nonumber\\
& & +C[h^{s+1}(\|f(0)-Q_0
f(0)\|+\|u_t(0)-Q_0u_t(0)\|)+h^{r+s}\|\psi\|_{r+1}],\nonumber
\end{eqnarray}
and since
\begin{equation}\label{rhot}
\|\rho_t\|=\|E_h u_t-Q_h u_t\|\leq C[h^{s+1}(\|f_t-Q_0 f_t\|+\|u_{tt}-Q_0u_{tt}\|)+h^{r+s}\|u_{t}\|_{r+1}],
\end{equation}
the desired bound for $\|\theta(t)\|$ now follows.
\end{proof}
\begin{theorem}\label{RiezeH1}
Under the assumption of Theorem \ref{RiezeL2} and the assumption that the coefficient matrix $a$ in (\ref{continuousP}) is independent of time $t$, there exists a constant $C$ such that
\begin{eqnarray*}
\|\nabla_d(u_h(t)-Q_h u(t))\|^2&\leq&4\beta \|\nabla_d(u_h(0)-Q_h u(0))\|^2+Ch^{2r}(\|\psi\|_{r+1}^2+\|u\|^2_{r+1})\\
&+&Ch^{2(s+1)}\int_0^t (\|f_t-Q_0 f_t\|+\|u_{tt}-Q_0u_{tt}\|)^2\,ds+Ch^{2(r+s)}\int_0^t \|u_{t}\|^2_{r+1}\,ds.
\end{eqnarray*}
\end{theorem}
\begin{proof}
As in the proof of Theorem \ref{RiezeL2}, we write the error in the form (\ref{twoterm}). Here by Lemma \ref{elliptic_error},
\begin{equation}\label{gradrho}
\|\nabla_d \rho(t)\|\leq Ch^{r}\|u\|_{r+1}.
\end{equation}
In order to estimate $\nabla_d\theta$, we may choose $\chi=\theta_t$
in the equation (\ref{thetaeq}) for $\theta$. We obtain
$$
(\theta_t,\theta_t)+a(\theta,\theta_t)=-(\rho_t,\theta_t).
$$
Since the coefficient matrix $a$ in the bilinear form
$a(\cdot,\cdot)$ is independent of time $t$, we have
$$
\|\theta_t\|^2+\frac12\frac{d}{dt}(a\nabla_d\theta,\nabla_d\theta)=-(\rho_t,\theta_t)\leq\frac12\|\rho_t\|^2+\frac12\|\theta_t\|^2,
$$
so that
$$
\frac{d}{dt}(a\nabla_d\theta,\nabla_d\theta)\leq\|\rho_t\|^2.
$$
Then by integrating with respect to time $t$ and using the coercivity and boundedness of the bilinear form, we obtain
\begin{eqnarray*}
\alpha\|\nabla_d \theta\|^2&\leq& (a\nabla_d\theta,\nabla_d\theta)\leq (a\nabla_d\theta(0),\nabla_d\theta(0))+\int_0^t \|\rho_t\|^2\, ds
\leq \beta\|\nabla_d\theta(0)\|^2+\int_0^t \|\rho_t\|^2\, ds\\
&\leq& \beta(\|\nabla_d(u_h(0)-Q_h u(0))\|+\|\nabla_d(E_h u(0)-Q_h u(0))\|)^2+\int_0^t \|\rho_t\|^2\, ds.
\end{eqnarray*}
Hence, in view of Lemma \ref{elliptic_error} and (\ref{rhot}), we have
\begin{eqnarray*}
\|\nabla_d\theta(t)\|^2&\leq& 2\beta \|\nabla_d(u_h(0)-Q_h u(0))\|^2+Ch^{2r}\|\psi\|_{r+1}^2\\
& &+Ch^{2(s+1)}\int_0^t (\|f_t-Q_0
f_t\|+\|u_{tt}-Q_0u_{tt}\|)^2\,ds+Ch^{2(r+s)}\int_0^t
\|u_{t}\|^2_{r+1}\,ds,
\end{eqnarray*}
which completes the proof.
\end{proof}
Next, we derive an error estimate for the backward Euler weak
Galerkin method.
\begin{theorem}\label{RiezeDL2}
Let $u\in H^{1+r}(\Omega)$ and $U^n$ be the solutions of
(\ref{continuousP}) and (\ref{discreteWG}), respectively, and let
$Q_h u$ be the $L^2$ projection of the exact solution $u$. Then
there exists a constant $C$ such that
\begin{eqnarray*}
\|U^n-Q_h u(t_n)\|&\leq& \|U^0-Q_h u(0)\|+Ch^{r+s}(\|\psi\|_{r+1}+\int_0^{t_n}\|u_t\|_{r+1}\,ds)\\
&+&Ch^{s+1}(\|f(0)-Q_0f(0)\|+\|u_t(0)-Q_0u_t(0)\|)\\
&+&Ch^{s+1}(\|f(t_n)-Q_0f(t_n)\|+\|u_t(t_n)-Q_0u_t(t_n)\|)\\
&+&Ch^{s+1}\left\{ \int_0^{t_n} (\|f_t-Q_0f_t\|+\|u_{tt}-Q_0u_{tt}\|)\,ds\right\}\\
&+&Ck\int_0^{t_n} \|u_{tt}\|\,ds.
\end{eqnarray*}
\end{theorem}
\begin{proof}
In analogy with (\ref{twoterm}), we write
$$
U^n-Q_h u(t_n)=(U^n-E_h u(t_n))+(E_h u(t_n)-Q_h u(t_n))=\theta^n+\rho^n,
$$
where $\rho^n=\rho(t_n)$ is bounded as shown in (\ref{rhobound}). In order to bound $\theta^n$, we use
\begin{eqnarray*}
(\bar{\partial} \theta^n, \chi)+a(\theta^n,\chi)&=&(\bar{\partial} U^n, \chi)+a(U^n,\chi)-(\bar{\partial} E_h u(t_n), \chi)-a(E_h u(t_n),\chi)\\
&=&(f(t_n),\chi)-(\bar{\partial} E_h u(t_n), \chi)-a(E_h u(t_n),\chi)\\
&=&(f(t_n),\chi)+(\nabla\cdot(a\nabla u(t_n)),\chi)-(\bar{\partial} E_h u(t_n), \chi)\\
&=&(u_t(t_n),\chi)-(\bar{\partial} E_h u(t_n), \chi)\\
&=&(u_t(t_n)-\bar{\partial} u(t_n),\chi)+(\bar{\partial} u(t_n)-\bar{\partial} E_h u(t_n), \chi),
\end{eqnarray*}
i.e.,
\begin{equation}\label{thetabar}
(\bar{\partial} \theta^n, \chi)+a(\theta^n,\chi)=(w^n,\chi),
\end{equation}
where
$$
w^n=(u_t(t_n)-\bar{\partial} u(t_n))+(\bar{\partial} u(t_n)-\bar{\partial} E_h u(t_n))=w^n_1+w^n_3.
$$
By choosing $\chi=\theta^n$ in (\ref{thetabar}) and using the coercivity of the bilinear form, we have
$$
(\bar{\partial} \theta^n, \theta^n)\leq\|w^n\|\|\theta^n\|,
$$
or
$$
\|\theta^n\|^2-(\theta^{n-1}, \theta^n)\leq k \|w^n\|\|\theta^n\|,
$$
so that
$$
\|\theta^n\|\leq \|\theta^{n-1}\|+k\|w^n\|,
$$
and, by repeated application, it follows that
$$
\|\theta^n\|\leq \|\theta^0\|+k\sum^n_{j=1}\|w^j\|\leq \|\theta^0\|+k\sum^n_{j=1}\|w_1^j\|+k\sum^n_{j=1}\|w_3^j\|.
$$
As in (\ref{theta0}), $\theta^0=\theta(0)$ is bounded as desired. By using the representation in (\ref{w1integral}), we obtain
$$
k\sum^n_{j=1}\|w_1^j\|\leq \sum^n_{j=1}\|\int_{t_{j-1}}^{t_j} (s-t_{j-1})u_{tt}(s)\,ds\|\leq k\int_0^{t_n}\|u_{tt}\|\,ds.
$$
We write
\begin{equation}\label{w3}
w_3^j=\bar{\partial} u(t_j)-E_h \bar{\partial} u(t_j)=(E_h-I)k^{-1}\int_{t_{j-1}}^{t_j} u_t\,ds=k^{-1} \int_{t_{j-1}}^{t_j} (E_h-I) u_t\,ds,
\end{equation}
and, by (\ref{rhot}) we have
\begin{eqnarray*}
k\sum^n_{j=1}\|w_3^j\|&\leq& \sum^n_{j=1}\int_{t_{j-1}}^{t_j} C[h^{s+1}(\|f_t-Q_0 f_t\|+\|u_{tt}-Q_0u_{tt}\|)+h^{r+s}\|u_{t}\|_{r+1}]\, ds\\
&=&C[h^{s+1}\int_{0}^{t_n} (\|f_t-Q_0 f_t\|+\|u_{tt}-Q_0u_{tt}\|)\,ds+h^{r+s}\int_{0}^{t_n} \|u_{t}\|_{r+1}\,ds].
\end{eqnarray*}
Thus, together with the estimate of $\rho$ in (\ref{rhobound}), we
have the assertion.
\end{proof}
\begin{theorem}
Under the assumption of Theorem \ref{RiezeDL2}, and the assumption
that the coefficient matrix $a$ is independent of time $t$, there
exists a constant $C$ such that
\begin{eqnarray*}
\|\nabla_d(U^n-Q_h u(t_n))\|^2&\leq&2\|\nabla_d(U^0-Q_h u(0))\|^2+Ch^{2r}(\|\psi\|_{r+1}^2+\|u\|^2_{r+1})\\
&+&Ch^{2(s+1)}\int_0^{t_n} (\|f_t-Q_0 f_t\|+\|u_{tt}-Q_0u_{tt}\|)^2\,ds+Ch^{2(r+s)}\int_0^{t_n} \|u_{t}\|^2_{r+1}\,ds\\
&+&Ck^2\int_0^{t_n} \|u_{tt}\|^2\,ds.
\end{eqnarray*}
\end{theorem}
\begin{proof}
It is sufficient to estimate $\nabla_d \theta^n$. To this end, we
choose $\chi=\bar{\partial}\theta^n$ in (\ref{thetabar}), and it is
easily seen that
$$
(\bar{\partial} \theta^n, \bar{\partial}
\theta^n)+a(\theta^n,\bar{\partial} \theta^n) =\|\bar{\partial}
\theta^n\|^2+\frac12\bar{\partial}a(\theta^n,\theta^n)+\frac{k}2a(\bar{\partial}\theta^n,
\bar{\partial}\theta^n)=(w^n,\bar{\partial}\theta^n),
$$
so that
$$
\bar{\partial}a(\theta^n,\theta^n)\leq \|w^n\|^2.
$$
By repeated application, we have
\begin{eqnarray*}
a(\theta^n, \theta^n)&\leq& a(\theta^0,\theta^0)+k\sum_{j=1}^n\|w^j\|^2 \\
&\leq& a(\theta^0,\theta^0)+2k\sum_{j=1}^n
\|w_1^j\|^2+2k\sum_{j=1}^n \|w_3^j\|^2.
\end{eqnarray*}
As in (\ref{w3}), we obtain
$$
k\|w_3^j\|^2=k\int_\Omega (k^{-1} \int_{t_{j-1}}^{t_j} \rho_t\,ds)^2\, dx\leq \int_\Omega ( \int_{t_{j-1}}^{t_j} \rho_t^2\,ds)\, dx\leq \int_{t_{j-1}}^{t_j} \|\rho_t\|^2\,ds.
$$
Together with (\ref{w1}), (\ref{rhot}) and (\ref{gradrho}), we have
the assertion.
\end{proof}
\section{Numerical Experiments}
In Section 2, we mentioned that the discrete weak space $S_h(j,l)$ and $\sum_h$ in the weak Galerkin method need to satisfy two conditions. In \cite{JW_WG}, the authors proposed several possible combinations of $S_h(j,l)$ and $\sum_h$. Throughout this section we use a uniform triangular mesh $\mathcal{T}_h$, the discrete weak space $S_h(0,0)$, i.e., the space consisting of piecewise constants on the triangles and edges, respectively, and $\sum_h$ with $V(T,1)$ taken to be the lowest-order Raviart-Thomas element $RT_0(T)$; this combination was used in \cite{NWG} for the numerical studies of the weak Galerkin method for second-order elliptic problems. We also adopt the various norms used in \cite{NWG} to present the numerical results for the error $e_h$ between the $L^2$ projection $Q_h u$ of the exact solution and the numerical solution $u_h$.
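Although all results below are produced by the full weak Galerkin implementation, the time-stepping structure of the backward Euler scheme (\ref{discreteWG}) can be sketched as follows. This is a minimal sketch only: the sparse mass matrix $M$, stiffness matrix $A$, and load vector $F$ are assumed to be assembled from the spatial discretization described above, and these names are ours rather than part of the scheme.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import spsolve

def backward_euler(M, A, F, U0, k, n_steps):
    """March (M + k*A) U^n = M U^{n-1} + k F(t_n) for n = 1, ..., n_steps."""
    U = U0.copy()
    lhs = (M + k * A).tocsc()   # coefficients are constant in time: factor once if desired
    for n in range(1, n_steps + 1):
        U = spsolve(lhs, M @ U + k * F(n * k))
    return U
\end{verbatim}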
{\bf Example 1.} As the first example, we consider the following heat equation in $\Omega=(0,1)\times(0,1)$,
\begin{eqnarray}\label{example}
u_t-\nabla\cdot(a\nabla u) &=& f \quad \mbox{ in } \Omega,\quad \mbox{ for } t>0,\nonumber\\
u &=& g\quad \mbox{ on }\partial\Omega, \quad \mbox{ for } t>0,\\
u(\cdot, 0)&=&\psi\quad \mbox{ in } \Omega,\nonumber
\end{eqnarray}
where $a=\left[ \begin{array} {cc} 1&0\\0&1\end{array}\right]$ and
$f$, $g$ and $\psi$ are determined by setting the exact solution
$u=\sin(2\pi(t^2+1)+\pi/2)\sin(2\pi x +\pi/2)\sin(2\pi y +\pi/2)$.
For this inhomogeneous Dirichlet boundary condition, with a uniform triangular
mesh $\mathcal{T}_h$, we choose the approximation space
$$
S_h=\left\{ v=\{v_0,v_b\}\,:\, \begin{array}{r}
v_0\in P_0(T)\quad\mbox{ for all } T\in \mathcal{T}_h,\\
v_b \in P_0(e) \quad\mbox{ for all } T\in \mathcal{T}_h \mbox{ and edges } e\subset\partial T,\ e\not\subset \partial \Omega,\\
v_b=g_h \quad\mbox{ for all } T\in \mathcal{T}_h \mbox{ and edges } e\subset\partial T\cap \partial \Omega
\end{array}
\right\}
$$
where $g_h$ is the $L^2$ projection of $g$ in the piecewise constant
finite element space on the boundary $\partial\Omega$. In the tests,
$k=h$ and $k=h^2$ are used to check the order of convergence with
respect to the time step size $k$ and the mesh size $h$, respectively:
the error is dominated by the time discretization when $k=h$ and by
the spatial discretization when $k=h^2$. The results are shown in
Tables 1 and 2. Since the exact solution is smooth, we observe the
optimal convergence rates in both the $L^2$ and the weak Galerkin
$H^1$ norms for this Dirichlet-type initial boundary value problem,
which is consistent with the theoretical results of Sections 4 and 5.
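The rate rows $O(h^r)$ in the tables can be recovered from the tabulated errors. Although the exact fitting procedure is not stated here, a least-squares fit of $\log\|e_h\|$ against $\log h$ reproduces values close to the reported ones; the following minimal sketch uses the ${\|e_h\|}_{\{L^2,T\}}$ column of Table 2.
\begin{verbatim}
import numpy as np

h = np.array([1/8, 1/16, 1/32, 1/64, 1/128])
e = np.array([3.28e-2, 8.53e-3, 2.16e-3, 5.43e-4, 1.49e-4])  # ||e_h||_{L2,T}, Table 2

rate = np.polyfit(np.log(h), np.log(e), 1)[0]  # slope of the log-log fit
print(rate)  # about 1.95, close to the reported 1.9454
\end{verbatim}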
\begin{table}[!!h]\label{tablesec}
\begin{center}
\caption{Convergence rate for heat equation with inhomogeneous Dirichlet boundary condition with $k=h$}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$h$&${\|e_h\|}_{\{\infty,T\}}$&${\|e_h\|}_{\{\infty,\partial T\}}$&${\|\nabla_d e_h\|}$&${\|e_h\|}_{\{L^2,T\}}$ &${\|e_h\|}_{\{L^2,\partial T\}} $\\
\hline\hline
1/8&1.38e-01&1.44e-01&2.41e-01&4.90e-02&8.78e-02\\
1/16&6.97e-02&7.26e-02&8.34e-02&2.20e-02&4.03e-02\\
1/32&3.47e-02&3.56e-02&2.97e-02&1.05e-02&1.94e-02\\
1/64&1.72e-02&1.75e-02&1.10e-02&5.16e-03&9.54e-03\\
1/128&8.59e-03&8.65e-03&4.27e-03&2.56e-03 &4.74e-03\\
\hline\hline
$O(h^r)\,r=$&1.0012 &1.0138 & 1.4550&1.0643&1.0533\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!!h]\label{tablesec-2}
\begin{center}
\caption{Convergence rate for heat equation with inhomogeneous Dirichlet boundary condition with $k=h^2$}
\begin{tabular}{||c||c||c||c||c||c||}
\hline\hline
$h$&${\|e_h\|}_{\{\infty,T\}}$&${\|e_h\|}_{\{\infty,\partial T\}}$&${\|\nabla_d e_h\|}$&${\|e_h\|}_{\{L^2,T\}}$ &${\|e_h\|}_{\{L^2,\partial T\}} $\\
\hline\hline
1/8&7.11e-02& 8.30e-02& 7.81e-02& 3.28e-02 &5.39e-02 \\
1/16&1.89e-02& 2.28e-02& 1.84e-02& 8.53e-03& 1.38e-02 \\
1/32&4.79e-03& 5.84e-03& 4.52e-03 &2.16e-03 &3.49e-03 \\
1/64&1.21e-03 &1.48e-03& 1.13e-03& 5.43e-04& 8.79e-04 \\
1/128&3.64e-04& 4.31e-04& 2.88e-04& 1.49e-04& 2.47e-04 \\
\hline\hline
$O(h^r)\,r=$&1.9025&1.8994 & 2.0213&1.9454&1.9418\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
Next, we consider the same heat problem with a mixed boundary condition: a Dirichlet condition on $\partial\Omega_1$ and a Robin condition on $\partial\Omega_2$,
$$
\left\{ \begin{array}{r}
u=g\quad \mbox{ on } \partial \Omega_1,\\
a\nabla u\cdot n+u=0 \quad\mbox{ on } \partial \Omega_2,
\end{array}
\right.
$$
where $\partial \Omega_1\cap\partial \Omega_2=\emptyset$ and
$\partial \Omega_1\cup\partial \Omega_2=\partial \Omega$. The exact
solution is set to be $u=\sin(2\pi(t^2+1)+\pi/2)\sin(\pi y)e^{-x}$,
where $\partial\Omega_2$ is the boundary segment $x=1$ and
$\partial\Omega_1$ is the union of all other boundary segments. For
this mixed-type initial boundary value problem, we again observe the
optimal convergence rates of the error in all norms, as shown in
Tables 3 and 4.
\begin{table}[!!h]\label{tablesec-3}
\begin{center}
\caption{Convergence rate for heat equation with Robin boundary condition with $k=h$}
\begin{tabular}{||c||c||c||c||c||c||}
\hline\hline
$h$&${\|e_h\|}_{\{\infty,T\}}$&${\|e_h\|}_{\{\infty,\partial T\}}$&${\|\nabla_d e_h\|}$&${\|e_h\|}_{\{L^2,T\}}$ &${\|e_h\|}_{\{L^2,\partial T\}} $\\
\hline\hline
1/8&1.65e-01&1.72e-01&1.80e-01&9.86e-02&1.83e-01\\
1/16&1.00e-01&1.02e-01&8.55e-02&5.86e-02&1.09e-01\\
1/32&5.54e-02&5.59e-02&4.01e-02&3.22e-02&5.95e-02\\
1/64&2.92e-02&2.93e-02&1.91e-02&1.69e-02&3.13e-02\\
1/128&1.50e-02&1.50e-02&9.24e-03&8.67e-03&1.60e-02\\
\hline\hline
$O(h^r)\,r=$&0.8656&0.8793&1.0713&0.8767& 0.8789\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!!h]\label{tablesec-4}
\begin{center}
\caption{Convergence rate for heat equation with Robin boundary condition with $k=h^2$}
\begin{tabular}{||c||c||c||c||c||c||}
\hline\hline
$h$&${\|e_h\|}_{\{\infty,T\}}$&${\|e_h\|}_{\{\infty,\partial T\}}$&${\|\nabla_d e_h\|}$&${\|e_h\|}_{\{L^2,T\}}$ &${\|e_h\|}_{\{L^2,\partial T\}} $\\
\hline\hline
1/8&3.18e-02&3.90e-02&2.61e-02&1.88e-02&3.57e-02\\
1/16&8.29e-03&1.01e-03&6.28e-03&4.87e-03&9.22e-03\\
1/32&2.10e-03&2.54e-03&1.55e-03&1.23e-03&2.32e-03\\
1/64&5.24e-04&6.34e-04&3.88e-04&3.08e-04&5.83e-04\\
1/128&1.39e-04&1.62e-04&1.06e-04&8.79e-05&1.65e-04\\
\hline\hline
$O(h^r)\,r=$&1.9591&1.9785&1.9875&1.9352&1.9383\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
{\bf Example 2.} For the second example, we consider the parabolic
problem (\ref{example}) with a full-tensor coefficient and Dirichlet
boundary conditions, where the coefficient matrix $a=\left[ \begin{array} {cc}
x^2+y^2+1 &xy\\xy&x^2+y^2+1\end{array}\right]$ is symmetric
and positive definite, and $f$, $g$ and $\psi$ are determined by
setting the exact solution $u=\sin(2\pi(t^2+1)+\pi/2)\sin(2\pi x
+\pi/2)\sin(2\pi y +\pi/2)$. The results are shown in Table 5 and
Table 6, which confirm the theoretical rates of convergence in
$L^2$. For the discrete $H^1$ norm, the numerical convergence is of
order $\mathcal{O}(h^2)$, which is one order higher than the
theoretical prediction. We believe that this suggests a
superconvergence relation between the weak Galerkin finite element
approximation and the $L^2$ projection of the exact solution.
Interested readers are encouraged to study this superconvergence
phenomenon.
\begin{table}[!!h]\label{tablesec-5}
\begin{center}
\caption{Convergence rate for parabolic problem with inhomogeneous Dirichlet boundary condition with $k=h$}
\begin{tabular}{||c||c||c||c||c||c||}
\hline\hline
$h$&${\|e_h\|}_{\{\infty,T\}}$&${\|e_h\|}_{\{\infty,\partial T\}}$&${\|\nabla_d e_h\|}$&${\|e_h\|}_{\{L^2,T\}}$ &${\|e_h\|}_{\{L^2,\partial T\}} $\\
\hline\hline
1/8&1.21E-01& 1.38E-01& 3.57E-01& 4.64E-02& 8.14E-02\\
1/16&5.64E-02& 6.10E-02& 1.23E-01&1.87E-02 &3.39E-02\\
1/32&2.67E-02& 2.81E-02 &4.34E-02& 8.35E-03& 1.54E-02\\
1/64&1.29E-02 &1.33E-02& 1.55E-02& 3.95E-03& 7.29E-03\\
1/128&6.33E-03& 6.42E-03& 5.61E-03& 1.92E-03& 3.55E-03\\
\hline\hline
$O(h^r)\,r=$&1.0654& 1.1076 &1.4978& 1.1487& 1.1301\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!!h]\label{tablesec-6}
\begin{center}
\caption{Convergence rate for parabolic problem with inhomogeneous Dirichlet boundary condition with $k=h^2$}
\begin{tabular}{||c||c||c||c||c||c||}
\hline\hline
$h$&${\|e_h\|}_{\{\infty,T\}}$&${\|e_h\|}_{\{\infty,\partial T\}}$&${\|\nabla_d e_h\|}$&${\|e_h\|}_{\{L^2,T\}}$ &${\|e_h\|}_{\{L^2,\partial T\}} $\\
\hline\hline
1/8&7.41E-02& 9.68E-02& 1.24E-01& 3.53E-02& 5.74E-02\\
1/16&1.92E-02& 2.56E-02& 3.00E-02& 9.14E-03& 1.47E-02\\
1/32&4.83E-03& 6.44E-03& 7.45E-03& 2.30E-03& 3.70E-03\\
1/64&1.21E-03& 1.62E-03& 1.86E-03& 5.78E-04& 9.28E-04\\
1/128&3.35E-04& 4.34E-04& 4.67E-04& 1.53E-04& 2.48E-04\\
\hline\hline
$O(h^r)\,r=$&1.9474 &1.9500& 2.0118 &1.9637& 1.9636\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\end{document}
|
\begin{document}
\title{Enhanced squeezing with parity kicks}
\author{Xiao-Tong Ni}
\affiliation{The Key Laboratory of Atomic and Nanosciences, Ministry
of Education, Tsinghua University, Beijing 100084, China }
\author{Yu-xi Liu}
\affiliation{Institute of Microelectronics, Tsinghua
University,Beijing 100084, China} \affiliation{Tsinghua National
Laboratory for Information Science and Technology, Tsinghua
University, Beijing 100084, China}
\author{L C Kwek}
\affiliation{Center for Quantum Technologies, National University of
Singapore,2 Science Drive 3, Singapore 117542 and National Institute
of Education and Institute of Advanced Studies, Nanyang
Technological University, 1 Nanyang Walk, Singapore 637616}
\author{Xiang-Bin Wang}
\email{[email protected]}
\affiliation{The Key Laboratory of Atomic and Nanosciences, Ministry
of Education, Tsinghua University, Beijing 100084, China }
\affiliation{Tsinghua National Laboratory for Information Science
and Technology, Tsinghua University, Beijing 100084, China}
\affiliation{Department of Physics, Tsinghua University, Beijing 100084, China}
\begin{abstract}
Using exponential quadratic operators, we present a general
framework for studying the exact dynamics of a system-bath interaction
in which the Hamiltonian is described by a quadratic form of
bosonic operators. To demonstrate the versatility of the approach,
we study how the environment affects the squeezing of the quadrature
components of the system. We further propose that the squeezing can
be enhanced when parity kicks are applied to the system.
\end{abstract}
\maketitle
{\em Introduction} -- Coupling between system and environment is
ubiquitous in all quantum processes (e.g., in quantum information
processing). Such coupling usually results in (i) the energy decay
of the quantum system and (ii) the destruction of the relative phases
of superposed quantum states, so that a linear superposition of
quantum states turns into a classical mixture. However, the
environment can also be helpful; for example, entanglement between
two systems can be generated via a common environment~\cite{Paz2008}.
Although it seems impossible to model the environment exactly in
many cases, and thus difficult to obtain the exact dynamics of a
system-environment interaction, quantitative analysis based on an
approximate description of the environment is needed in many cases:
for instance, the analysis of decoherence-suppressing
methods~\cite{Misra1977,Erez2008,Vitali1999,Viola2005,Kaveh2007,Ponte2004,Kofman2004,
Ulrike2000,Wu2002}; the discussion of the entanglement of two
systems coupled to the environment; and the study of the quantum
dissipation of systems~\cite{Leggett1987,Weiss}. An extensively
adopted approach to model the environment, which is also called a
reservoir or bath, is to introduce a set of harmonic oscillators
with different frequencies. In this case, the interaction between
the system and the environment is modeled by coupling the system to
these harmonic oscillators through an appropriate interaction
Hamiltonian. Several methods have been proposed to study the
coupling between the system and a set of harmonic oscillators. In
quantum optics (e.g.,
Refs.~\cite{Scully1997,Louisell,Gardiner1991}), a frequently used
method to analyze Markovian processes is either a master equation or a
Langevin equation. Another method is the path integral
approach~\cite{Feynman1963}, which was extensively developed in
Refs.~\cite{Leggett1983,Leggett1987}, but this method is very
complicated. Furthermore, different approximations are used in all
of these methods to make the problem tractable for either analytical
or numerical calculations.
In this paper, we introduce a new method to calculate the evolution
of a bosonic system coupled to the environment. The total
Hamiltonian is described by a quadratic form of the bosonic
operators. Our method is based on some properties of exponential
quadratic operators. As shown below, this new method provides a
feasible way to calculate the effect of the environment on the system.
As an example, we apply our method to study the effect of the environment
on the generation of squeezed states. Moreover, we also use our
method to study the system-environment interaction when parity
kicks are applied to the system. We find that the parity kicks can
help us to obtain better squeezing.
{\em Exponential quadratic operators.---} For a set of annihilation
operators $a_{i}$ $(1\leq i \leq n)$, exponential quadratic
operators (EQO) (see \cite{eqo1,eqo2,eqo3}) are expressions of the
form
\begin{equation}
Q=e^{\sum_{i,j}(c_{ij}a_{i}a_{j}+d_{ij}a_{i}a_{j}^{\dag}+e_{ij}a_{i}^{\dag}a_{j}^{\dag})}.
\end{equation}
Here $[a_{i},a_{i}^{\dag}]=1$. The above expression can also be
written in the form $Q=e^{\frac{1}{2}\Lambda^{T}R\Lambda}$,
in which
$\Lambda^{T}=(a_{1}^{\dag},a_{2}^{\dag},\cdots,a_{n}^{\dag},a_{1},a_{2},\cdots,a_{n})$
and $R$ is a symmetric matrix. If we define $\displaystyle S=
\begin{pmatrix}
0 & I \\
-I & 0 \\
\end{pmatrix}$,
then we have
\begin{equation}\label{thm}
Q\Lambda^{T}Q^{-1}=\Lambda^{T}e^{-RS},
\end{equation}
where the conjugation $Q\Lambda^{T}Q^{-1}$ is understood
to act on each component of $\Lambda$.
{\em Coupling between oscillator and reservoir.---} Consider a
system comprising a harmonic oscillator with annihilation
operator $a$, and a reservoir consisting of a set of oscillators
with annihilation operator $b_k$ for each mode. The system-reservoir
Hamiltonian is described by
\begin{equation}\label{h1}
H=\hbar \omega
a^{\dag}a+\sum_{k}\hbar\omega_{k}b_k^{\dag}b_k+\hbar\sum_{k}\gamma_{k}(ab_k^{\dag}+b_{k}a^{\dag}),
\end{equation}
where the first, second, and third terms are the system, reservoir,
and system-reservoir interaction Hamiltonians, respectively. Here,
$\gamma_{k}$ are the coefficients representing the coupling strength
between the system and mode $k$ of the reservoir; these coupling
constants are typically much smaller than the other frequencies in
the Hamiltonian. For simplicity, but without loss of generality, we
regard these couplings as real.
We calculate the evolution of $a$ ($a^{\dag}$) in the Heisenberg
picture by using Eq.~(\ref{thm}). For $U=e^{-iHt/\hbar}$, we take
$\Lambda^{T}=(a^{\dag},b_{1}^{\dag},b_{2}^{\dag},\cdots,b_{n}^{\dag},a,b_{1},b_{2},\cdots,b_{n})$
and
$\displaystyle
R=\begin{pmatrix}
0 & P \\
P & 0 \\
\end{pmatrix},
$
where
\begin{equation}
P=
\begin{pmatrix}
i\omega t & i\gamma_{1}t & i\gamma_{2}t & \cdots & i\gamma_{n}t \\
i\gamma_{1}t & i\omega_{1}t & & & \\
i\gamma_{2}t & & i\omega_{2}t & & \\
\vdots & & & \ddots & \\
i\gamma_{n}t & & & & i\omega_{n}t \\
\end{pmatrix}
\end{equation}
Thus, to calculate the evolution of $a$ ($a^{\dag}$), we need only
to calculate the matrix $e^{-RS}$.
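As a minimal numerical illustration of this step (the parameter values and array names below are ours and purely illustrative), $P$, $R$, and $S$ can be assembled directly and the matrix exponential evaluated:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

n_r = 200                                 # number of reservoir modes
omega = 1.0e9                             # system frequency (Hz)
omega_k = 1.0e7 * np.arange(1, n_r + 1)   # reservoir frequencies (Hz)
gamma_k = 5.6419e6 * np.ones(n_r)         # coupling strengths (Hz)
t = 1.0e-9                                # evolution time (s)

# P as defined above: i*omega*t and i*omega_k*t on the diagonal,
# i*gamma_k*t in the first row and column
P = np.diag(1j * t * np.concatenate(([omega], omega_k)))
P[0, 1:] = 1j * t * gamma_k
P[1:, 0] = 1j * t * gamma_k

Z = np.zeros_like(P)
I = np.eye(n_r + 1)
R = np.block([[Z, P], [P, Z]])
S = np.block([[Z, I], [-I, Z]])

M = expm(-R @ S)   # Lambda^T -> Lambda^T e^{-RS}; column j holds the
                   # coefficients of the j-th evolved operator
\end{verbatim}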
{\em Coupling between system and reservoir during a squeezing
process.---} To see the power of the technique, let us consider a
Hamiltonian for degenerate parametric amplification with a classical
pump under the influence of a reservoir in a squeezing process. The
Hamiltonian can be expressed as
\begin{equation}\label{totalsqueezehamilton}
\begin{split}
H&=\hbar\omega a^{\dag}a+\frac{1}{2}i\hbar\epsilon[e^{2i\omega t}a^2-e^{-2i\omega t}(a^{\dag})^2]\\
&+\sum_{j=1}^{n}\hbar
\omega_{j}b_{j}^{\dag}b_{j}+\hbar(a^{\dag}\sum_{j=1}^{n}\gamma_{j}b_{j}+h.c.).
\end{split}
\end{equation}
In order to remove the time dependence in the Hamiltonian, we
transform it into a rotating reference frame with
$U=\exp(iH_{0}t/\hbar)$, where $\displaystyle
H_{0}=\hbar\omega(a^{\dag}a+\sum_{j=1}^{n}b_{j}^{\dag}b_{j})$. In
this rotating frame, the Hamiltonian in
Eq.~(\ref{totalsqueezehamilton}) becomes
\begin{equation}\label{hi}
\begin{split}
H_{I}=&-\frac{1}{2}i\hbar\epsilon[(a^{\dag})^2-a^2] +
\sum_{j=1}^{n}\hbar(\omega_{j}-\omega)b_{j}^{\dag}b_{j}\\
&+\hbar(a^{\dag}\sum_{j=1}^{n}\gamma_{j}b_{j}+h.c.).
\end{split}
\end{equation}
Note that the first term is just the usual squeezing operator. We
can easily find the matrix $R$ corresponding to $-iH_{I}t/\hbar$. By
analyzing $e^{-RS}$, numerically if necessary, we obtain the
evolution of $a^{\dag}(t)$ and $a(t)$, and thus all
quantities associated with the squeezing process. The most important
of these is
$\langle(\Delta(a(t)+a(t)^{\dag}))^2\rangle=\langle(\Delta
X)^2\rangle$.
{\em Parity kicks in the squeezing process.---} It is well known
that one can alleviate decoherence effects by applying appropriate
time-varying control fields, e.g., a sequence of frequent parity
kicks. As in Ref.~\cite{Vitali1999}, we introduce an extra
Hamiltonian (in the rotating reference frame),
$$H_I^{\prime}=H_I+H_{kick}(t),$$
where $H_{kick}(t)=H_{kick}$ for $t_i\leq t \leq t_i+\tau$ and
$H_{kick}(t)=0$ otherwise. We require $t_{i+1}-t_i=\tau_0$ for all $i$.
Moreover, we assume $\tau\ll\tau_0$ and that $H_{kick}$ is strong
enough during the kick periods that the effect of $H_I$ can be
neglected, i.e., $\displaystyle e^{-i(H_I+H_{kick}) \tau/\hbar}\approx
e^{-iH_{kick} \tau/\hbar}$. Under these conditions, we model the
parity kicks as unitary operators $P=e^{-iH_{kick} \tau/\hbar}$
acting on the system at the times $t_i$. Since we want to eliminate
the influence of the coupling between system and reservoir, we require
$P$ to have the following properties: $PH_{system}P=H_{system},$
$PH_{bath}P=H_{bath},$ and $PH_{int}P=-H_{int}.$ The three
Hamiltonians are defined in Eq.~\eqref{hi}. It is easy to verify
that $P=e^{-i\pi a^{\dag}a}$ satisfies these equations. Thus the
unitary operator corresponding to two such periods is $
\displaystyle Pe^{-iH_I\tau_0/\hbar}Pe^{-iH_I\tau_0/\hbar}
=e^{-(i\tau_0/\hbar)(H_{system}+H_{bath}-H_{int})}e^{-(i\tau_0/\hbar)(H_{system}+H_{bath}+H_{int})}
\doteq Y. $ Intuitively, the system-reservoir interactions in the
two successive periods cancel each other out. In fact, it
has been proved that when $\tau_0\rightarrow 0$ the system and the
reservoir are totally decoupled.
We use numerical computation to verify this effect in the squeezing
process. To this end we calculate the evolution of $a^{\dag}(t)$ and
$a(t)$: We have
$$a^{\dag}(2n\tau_0)=Y^{\dag n}a^{\dag}Y^n.$$
To use the EQO method shown in Eq.~\eqref{thm} to solve the above
expression, we note that if
$$e^{Y_1}\Lambda^{T}e^{-Y_1}=\Lambda^{T}P_1,$$
$$e^{Y_2}\Lambda^{T}e^{-Y_2}=\Lambda^{T}P_2,$$
then
$$e^{Y_2}e^{Y_1}\Lambda^{T}e^{-Y_1}e^{-Y_2}=\Lambda^{T}P_2P_1.$$
Thus, we only need to calculate the matrix $e^{-RS}$ in
$$Y^{\dag}\Lambda^{T}Y=\Lambda^{T}e^{-RS},$$
and $e^{-nRS}$ is then the desired transformation matrix. Again
using the above property, we see that we only need to calculate
\begin{equation}\label{banghint}
e^{(i\tau_0/\hbar)(H_{system}+H_{bath} \pm
H_{int})}\Lambda^{T}e^{-(i\tau_0/\hbar)(H_{system}+H_{bath} \pm
H_{int})}.
\end{equation}
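A minimal sketch of this bookkeeping (the function and argument names are ours): given the two matrices $e^{-RS}$ obtained from Eq.~\eqref{banghint} with $+H_{int}$ and $-H_{int}$, each evaluated for a duration $\tau_0$ as in the previous sketch, the kicked evolution reduces to a matrix product raised to the number of cycles. The product order below reflects our reading of the composition rule quoted above.
\begin{verbatim}
import numpy as np

def kicked_transform(M_plus, M_minus, n_cycles):
    """Transformation matrix of Lambda^T after n_cycles pairs of kick periods.

    M_plus, M_minus: the e^{-RS} matrices of Eq. (banghint) with +H_int and
    -H_int, respectively, each evaluated for a duration tau_0.
    """
    # One full cycle: the +H_int period comes first in time and, with the
    # row-vector convention Lambda^T -> Lambda^T P2 P1, appears on the left
    # (our reading of the composition rule above).
    M_cycle = M_plus @ M_minus
    return np.linalg.matrix_power(M_cycle, n_cycles)
\end{verbatim}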
For simplicity we consider the ground-state situation; the procedure
is entirely general and applies also to the case $T>0$\,K. We compute
the variance $\langle(\Delta X)^2\rangle$ for two types of
coupling, namely the Lorentzian spectrum and the Ohmic spectrum.
For the Lorentzian spectrum
$\gamma_j=g(\omega_j)=\eta\Gamma/\sqrt{(\omega_{j}-\omega)^2+\Gamma^{2}},$
as an example of numerical calculations, we assume
$\Gamma=2\times10^9$Hz, $\eta=5\times 10^7$Hz, the squeezing
parameter $\epsilon=10^{8}$Hz and the kick period $\tau_0=1.67\times
10^{-9}$s. For the Ohmic spectrum
$\gamma_j=g(\omega_j)=\sqrt{\xi\omega_j}e^{-\omega_j/\omega_c}$, we
assume $\xi=10^6$Hz, $\omega_c=10^9$Hz, the squeezing parameter
$\epsilon=7\times 10^{8}$Hz and the kick period $\tau_0=2.5\times
10^{-9}$s. For both spectra, we assume the frequencies associated
with the system and reservoir to be $\omega=10^{9}$Hz and
$\omega_j=j\times10^{7}$Hz $(j=1,2,\ldots, 200)$, respectively. With
these parameters, the variance $\langle(\Delta X)^2\rangle$ versus
rescaled time $\ \epsilon t$ is plotted in
Fig.~\ref{fig:paritykicks}, which shows that a better squeezing can
be obtained if parity kicks are applied to the system.
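For concreteness, the two coupling spectra with the parameter values just quoted can be tabulated as follows (a minimal sketch; the array names are ours).
\begin{verbatim}
import numpy as np

j = np.arange(1, 201)
omega_j = j * 1.0e7              # reservoir frequencies (Hz)
omega = 1.0e9                    # system frequency (Hz)

Gamma, eta = 2.0e9, 5.0e7        # Lorentzian spectrum parameters
gamma_lorentzian = eta * Gamma / np.sqrt((omega_j - omega)**2 + Gamma**2)

xi, omega_c = 1.0e6, 1.0e9       # Ohmic spectrum parameters
gamma_ohmic = np.sqrt(xi * omega_j) * np.exp(-omega_j / omega_c)
\end{verbatim}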
\begin{figure}
\caption{The variance $\langle(\Delta X)^2\rangle$ versus rescaled time $\
\epsilon t$ when parity kicks are on and off
respectively. Figure (a) refers to a Lorentzian spectrum and figure (b) refers
to an Ohmic spectrum.
We find that a better squeezing can be obtained when parity kicks are applied to the system.
}
\label{fig:paritykicks}
\end{figure}
{\it Discussions.---} It is interesting to compare several methods,
including the widely used Markovian master equation
\cite{Louisell}. For the Hamiltonian \eqref{h1} with a Lorentzian
spectrum, as shown in Ref.~\cite{Liu2001}, we can obtain the exact
solution
\begin{equation}\label{lorentzexact}
\begin{split}
a(t)=&[u(t)e^{-\Gamma t/2}a+\sum u_j(t)b_j]e^{-i\omega t}\\
=&\{[\cos(\Theta t/2)+\frac{\Gamma}{\Theta}\sin(\Theta t/2)]e^{-\Gamma t/2}a+\sum u_j(t)b_j\}e^{-i\omega
t}.
\end{split}
\end{equation}
The constant $\Theta$ is given by $\Theta=\sqrt{4\pi \eta^2
D-\Gamma^2}$, where D is the density of reservoir modes and $u_j(t)$
are some complicated functions~\cite{Liu2001}. However, the master
equation, in the Markovian approximation, can be written as
\begin{equation}\label{markovianmaster}
\begin{split}
\frac{\partial \rho}{\partial
t}=&-i\omega[a^{\dag}a,\rho]+\frac{\lambda}{2}
[2a\rho a^{\dag}-a^{\dag}a\rho-\rho a^{\dag}a]+\\
&+\lambda
\overline{n}[a^{\dag}\rho a+a\rho a^{\dag}-a^{\dag}a\rho-\rho
aa^{\dag}],
\end{split}
\end{equation}
where $\lambda=2\pi D g(\omega)^2$ is a constant representing
the decay rate of the harmonic oscillator. For simplicity, we assume
the temperature of the reservoir to be zero and the initial state of
the system to be $|1\rangle$. We then calculate the probability
$P(t)=\langle 1 |tr_{R}(\rho(t))|1\rangle$, which can be used to
observe the decay of the system. We can also compute $P(t)$ when the
coupling strengths $\gamma_k$ are constant. For example, we take
the parameters of the Lorentzian spectrum in
Fig.~\ref{fig:compare}(a) to be $\gamma_j=g(\omega_j)=2.8209\times
10^{12}/\sqrt{(\omega_{j}-\omega)^2+10^{12}}$Hz, and the flat
spectrum in Fig.~\ref{fig:compare}(b) to be $\gamma_j=5.6419\times
10^{6}$ Hz with $j=1,2,\ldots, 200$. For the flat spectrum we assume
the frequencies associated with the system and reservoir to be
$\omega=10^{9}$Hz and $\omega_j=j\times10^{7}$Hz $(j=1,2,\ldots,
200)$, respectively. For the Lorentzian spectrum, however, we change
the frequencies of the reservoir to
$\omega_j=(50+j/2)\times10^{7}$Hz $(j=1,2,\ldots, 200)$, because the
Lorentzian spectrum varies dramatically near its center and is
negligible in the wings; this change allows us to sample the
spectrum better. We then plot Fig.~\ref{fig:compare}.
\begin{figure}
\caption{(a) The probability of the system being in state $|1\rangle$ for a Lorentzian spectrum. We can see that our numerical solution is close to the exact solution, whereas the master equation is not valid in this situation.
(The reservoir is taken to have 200 equally spaced oscillators when computing the numerical solution with our method.)
(b) The probability of the system being in state $|1\rangle$ when the coupling strengths $\gamma_k$ are constant.
In this case the curves of our numerical solution and of the master equation's solution coincide with each other.}
\label{fig:compare}
\end{figure}
We find that while the master equation gives a good
approximate solution in some cases, it fails in others. Our
method is therefore more reliable, and its accuracy can be further
improved by using better numerical methods.
We also note that the parity kicks can be implemented by
increasing the frequency of the harmonic oscillator for a short time
interval. For example, this can be achieved in an ion trap by
changing the electric field (see also Refs.~\cite{squ1,squ2} for
schemes for generating squeezed states in ion traps).
{\it Conclusions.---} We have shown that for a general Hamiltonian
of bosonic quadratic form, we can compute the dynamics of the system
using exponential quadratic operators. Our method offers a
substantial improvement over computations involving master equations,
as we do not need to solve any differential equations, and it
provides numerical solutions for any Hamiltonian that can be written
as a quadratic form of creation and annihilation operators. This new
technique thus compares well with a master-equation treatment of the
system dynamics, and it is in some sense more appealing, as it can in
principle provide analytical expressions in some cases. In
particular, we analyzed the effect of the reservoir in a squeezing
process and proposed a possible scheme to improve the degree of
squeezing. Our method can also be applied to study the quantization
of nano-mechanical systems; further work will be presented elsewhere.
KLC acknowledges financial support by the National Research
Foundation \& Ministry of Education, Singapore. This work was
supported in part by the National Basic Research Program of China
grant No. 2007CB907900 and 2007CB807901, NSFC grant No. 60725416 and
China Hi-Tech program grant No. 2006AA01Z420.
\begin{comment}
\section{Appendix I:the calculation of (\ref{heatdecay})}
We set $H=H_0+H_I$, $H_0=\hbar \omega(a^{\dag}a+\sum b_j^{\dag}b_j)$
and $H_I=\hbar\sum
(\omega_j^{\prime}b_j^{\dag}b_j+\gamma_j(a^{\dag}b_j+ab_j^{\dag}))$,
where $\omega_j^{\prime}=\omega_j-\omega$. We note that
$[H_0,H_I]=0$, so we have $e^{-iHt/\hbar}=e^{-iH_0 t/\hbar}e^{-iH_I
t/\hbar}$ and
\begin{equation}
\begin{split}
&\langle 1|Tr_R(e^{-iH/\hbar
t}a^{\dag}|0\rangle \langle 0|\rho_R a e^{iH/\hbar t})|1 \rangle\\
=&\langle 1|Tr_R(e^{-iH_0/\hbar t}e^{-iH_I/\hbar
t}a^{\dag}e^{iH_I/\hbar
t}e^{-iH_I/\hbar t}|0\rangle\\
&\times\langle 0|\rho_R e^{iH_I/\hbar t}e^{-iH_I/\hbar
t}ae^{iH_I/\hbar t}e^{iH_0/\hbar t})|1 \rangle\\
=&\langle 1|Tr_R(e^{-iH_0/\hbar t}a(t)^{\dag}e^{-iH_I/\hbar
t}|0\rangle \langle 0|\rho_R e^{iH_I/\hbar t}a(t)e^{iH_0/\hbar t})|1
\rangle\\
=&\langle 1|Tr_R(a(t)^{\dag}e^{-iH_I/\hbar t}|0\rangle \langle
0|\rho_R e^{iH_I/\hbar t}a(t))|1 \rangle.
\end{split}
\end{equation}
The last equality can be verified using the definition of the trace. When
we collect terms up to second order in $\gamma$, the above expression
can be written as
\begin{equation}
\begin{split}
&\langle 1|Tr_R(a(t)^{\dag}e^{-iH_I/\hbar t}|0\rangle \langle
0|\rho_R e^{iH_I/\hbar t}a(t))|1 \rangle\\
=&|1+c|^2\langle 1|Tr_R(e^{-i\sum_j\omega_jb_j^{\dag}b_j t}|1\rangle
\langle 1|\rho_R e^{-i\sum_j\omega_jb_j^{\dag}b_j
t})|1 \rangle\\
&+\sum_j\langle
1|Tr_R(e^{i\omega t}c_jb_j^{\dag}\sum_{m,k\geq0}[(-it\omega_j^{\prime}b_j^{\dag}b_j)^m(-it\gamma_ja^{\dag}b_j)\\
&(-it\omega_j^{\prime}b_j^{\dag}b_j)^k/(m+k+1)!](|0\rangle \langle
0|\rho_R a)|1 \rangle\\
&+\sum_j\langle 1|Tr_R(a^{\dag}(|0\rangle \langle
0|\rho_R \sum_{m,k\geq0}[(it\omega_j^{\prime}b_j^{\dag}b_j)^m\\
&(it\gamma_ja^{\dag}b_j)(it\omega_j^{\prime}b_j^{\dag}b_j)^k/(m+k+1)!]e^{-i\omega
t}c_j^*b_j)|1 \rangle.
\end{split}
\end{equation}
The first term is just $|1+c|^2$, which is the decay of system when
$T=0$. The second term is
\begin{equation}
\begin{split}
&\sum_{j=1}^n\sum_{m,k,l\geq0}c_j l
[(-it\omega_j^{\prime}(l-1))^m(-it\gamma_j)\\
&(-it\omega_j^{\prime}l)^k/(m+k+1)!]e^{-\frac{\hbar l
\omega_j}{kT}}(1-e^{-\frac{\hbar \omega_j}{kt}}),
\end{split}
\end{equation}
and the last term is the complex conjugate of the second term. After
some calculation and using some series techniques in Appendix II,
the sum of last two terms can be simplified to
\begin{equation}
\begin{split}
\sum_{j=1}^n&2*Re[|\frac{e^{i\omega_j^{\prime}t}-1}{\omega_j^{\prime}}|^2\gamma_j^2e^{i\omega_j t}(1-e^{-\frac{\hbar l \omega_j^{\prime}}{kT}})\\
&(e^{-\frac{\hbar l \omega_j^{\prime}}{kT}-i\omega_j^{\prime}t})/(1-e^{-\frac{\hbar l
\omega_j^{\prime}}{kT}-i\omega_j^{\prime}t})^2,
\end{split}
\end{equation}
which completes our derivation.
\appendix*{Derivation of \eqref{squeezeanswerad}}
The techniques we used are quite similar when we derive each
coefficients of \eqref{squeezeanswerad}, so we shall only show how to
obtain the coefficient of $a^{\dag}$. We expand $\sum_{j=0}^{\infty}(A+B)^n/n!$, and find that the
zeroth-order terms (or unperturbed terms) are
\begin{equation}
\sum_{j=0}^{\infty}A^j/j!.
\end{equation}
We focus on the element corresponding to the coefficient of
$a^{\dag}$, which is in the first row of first column in the matrix,
and find it is just $\cosh(-\epsilon t)$, which is consistent with
our knowledge about the original squeeze process.
The first-order terms are
\begin{equation}
\sum_{i,j\geq 0}A^iBA^j/(i+j+1)!.
\end{equation}
After careful examination, we find that the above sum does not contribute
to the coefficient of $a^{\dag}$.
The second-order terms are
\begin{equation}
\sum_{i,j,k\geq 0}A^iBA^jBA^k/(i+j+k+2)!.
\end{equation}
Corresponding terms in the coefficient of $a^{\dag}$ are
\begin{equation}\label{secondorderorg}
\begin{split}
&\sum_{l=1}^{n}[\sum_{2|m+k}\sum_{2|j}\frac{x^{m+k}y_l^j}{(m+j+k+2)!}\\
&+\sum_{2|m+k}\sum_{2\nmid
j}\frac{x^{m+k}y_l^j(-1)^i}{(m+j+k+2)!})](-\gamma_l^2t^2),
\end{split}
\end{equation}
where $x=-\epsilon t$ and $y_l=i\omega_l^{\prime} t$. In order to
get a close form of the above expression, we note that
\begin{equation}\label{usefulformula1}
\sum_{i,j\geq 0} x^iy^j/(i+j+1)! =\frac{e^x-e^y}{x-y}.
\end{equation}
Inspired by the above formula, we can also obtain the following
formula
\begin{equation}\label{usefulformula}
\sum_{i,j\geq 0} x^iy^j/(i+j+2)! =\frac{(e^x-1)/x-(e^y-1)/y}{x-y}.
\end{equation}
Now we treat the terms in
(\ref{secondorderorg}) separately.
\begin{equation}
\begin{split}
\sum_{2|m+k}\sum_{2|j}\frac{x^{m+k}y^j}{(m+j+k+2)!}&=\sum_{2|k,j}\frac{(k+1)x^ky^j}{(k+j+2)!}\\
&=\frac{\partial}{\partial x}(x\sum_{2|k,j}\frac{x^ky^j}{(k+j+2)!}),
\end{split}
\end{equation}
where $\sum_{2|k,j}\frac{x^ky^j}{(k+j+2)!}$ is just
\begin{equation}
\begin{split}
&\sum_{i,j\geq 0} (x^iy^j/(i+j+2)!+(-x)^i(-y)^j/(i+j+2)!\\
&+(-x)^iy^j/(i+j+2)!+x^i(-y)^j/(i+j+2)!).
\end{split}
\end{equation}
Now we can use (\ref{usefulformula}) to get a closed form for
$\sum_{2|m+k}\sum_{2|j}\frac{x^{m+k}y^j}{(m+j+k+2)!}$.
Similarly, we have
\begin{equation}
\sum_{2|m+k}\sum_{2\nmid j}\frac{x^{m+k}y^j(-1)^i}{(m+j+k+2)!}=
\sum_{2|k}\sum_{2\nmid j}\frac{x^{k}y^j}{(m+j+k+2)!}.
\end{equation}
And we can again use (\ref{usefulformula}) to obtain the closed
form. Now we only need to do the simplification to get the
coefficient of $a^{\dag}$ in (\ref{squeezeanswerad})
\end{comment}
\end{document}
|
\begin{document}
\title[Some Elementary Components]{Some Elementary Components of the Hilbert Scheme of Points}
\author[M. Huibregtse]{Mark E. Huibregtse}
\address{Department of Mathematics and Computer Science\\
Skidmore College\\
Saratoga Springs, New York 12866}
\email{[email protected]}
\date{\today}
\subjclass[2010]{14C05}
\keywords{generic algebra, small tangent space, {H}ilbert scheme of points, elementary component}
\begin{abstract}
Let $K$ be an algebraically closed field of characteristic $0$, and let $H^{\mu}_{\mathbb{A}^n_{K}}$ denote the Hilbert scheme of $\mu$ points of $\mathbb{A}^n_{K}$. An \textbf{elementary component} $E$ of $H^{\mu}_{\mathbb{A}^n_{K}}$ is an irreducible component such that every $K$-point $[I]$ $\in$ $E$ represents a length-$\mu$ closed subscheme $\operatorname{Spec}(K[x_1,\dots,x_n]/I)$ $\subseteq$ $\mathbb{A}^n_{K}$ that is supported at one point. Iarrobino and Emsalem gave the first explicit examples (with $\mu > 1$) of elementary components in \cite{Iarrob-Emsalem}; in their examples, the ideals $I$ were homogeneous (up to a change of coordinates corresponding to a translation of $\mathbb{A}^n_{K}$). We generalize their construction to obtain new examples of elementary components.
\end{abstract}
\maketitle
\section{Introduction} \label{sec:Intro}
Let $K$ be an algebraically closed field of characteristic $0$\footnote{This hypothesis is used explicitly in Proposition \ref{prop:GeneralZProp} and subsequent results that depend thereon, and implicitly in the computer computations, which are (with one exception) done in characteristic $0$.}, and let $R$ denote the polynomial ring $K[x_1, \dots, x_n]$ $=$ $K[\mathbf{x}]$, with $n$ $\geq$ $3$. The first explicit examples of finite algebras with ``small tangent space'' (or ``generic'' algebras) of $K$-dimension $\mu > 1$ were given by Iarrobino and Emsalem in \cite{Iarrob-Emsalem}. These algebras have the form $A$ $=$ $R/I$, where $I$ $\subseteq$ $R$ is an ideal of finite colength $\mu$ that is generated by a list of sufficiently general homogeneous polynomials $g_j$, $1 \leq j \leq \lambda$, of degree $r$, and so vanishes at a single point (the origin) of $\mathbb{A}^n_{K}$.
The point $[I]$ $\in$ $\operatorname{Hilb}^{\mu}_{\mathbb{A}^n_{K}}$ corresponding to the ideal $I$ has a small tangent space in the sense that the tangent directions at $[I]$ correspond to deformations of $I$ to ideals $I'$ of the same ``type,'' obtained either by varying the coefficients of the generators $g_j$ or by translating the subscheme $\operatorname{Spec}(R/I)$ in $\mathbb{A}^n_{K}$; in particular, all of the $I'$ vanish at a single point. Accordingly, the point $[I]$ is a simple point on an \textbf{elementary component} of the Hilbert scheme; that is, a component such that every point on it parameterizes a subscheme concentrated at a single point \cite[p.\ 148]{Iarrob:Sitges1983}. If $\mu$ $>$ $1$, it is clear that an elementary component must be different from the \textbf{principal component}, which contains the points corresponding to reduced subschemes of length $\mu$. (Note that if $\mu > 1$ for an elementary component, then $\mu \geq 8$; see \cite[Th.\ 1.1]{CartwrightErmanVelascoViray:HilbSchOf8Pts}.) The purpose of this paper is to generalize the construction in \cite{Iarrob-Emsalem} to produce new examples of generic algebras $R/I$ (or elementary components of $\operatorname{Hilb}^{\mu}_{\mathbb{A}^n_{K}}$).
\begin{rem} \label{rem:History}
Since a $0$-dimensional closed subscheme of $\mathbb{A}^n_{K}$ can be written as a disjoint union of subschemes supported at single points of $\mathbb{A}^n_{K}$, one sees that elementary components are the ``building blocks'' of irreducible components of $H^{\mu}_{\mathbb{A}^n_{K}}$. Hence, Iarrobino's demonstration that $H^{\mu}_{\mathbb{A}^n_{K}}$ is in general reducible \cite{Iarrob:ReducibilityOfHilb} already implies the existence of non-trivial elementary components (see \cite{Iarrob:NumberOfGenericSingularities}). As previously noted, the first explicit examples were given by Iarrobino and Emsalem in \cite{Iarrob-Emsalem}. Employing a different approach, Shafarevich gave further examples in \cite{Shaf:DefsOfCommAlgebrasOfClass2}.
\end{rem}
The present paper is but one small contribution to the voluminous, diverse, and rapidly increasing literature on components of Hilbert schemes of points; for example, see \cite{BorgesDosSantosHenniJardim:CommutingMatrices},
\cite{CartwrightErmanVelascoViray:HilbSchOf8Pts}, \cite{CasnatiJelisiejewNotari:GorensteinLociRayFamilies}, \cite{ErmanAndVelasco:SyzygeticApproach}, and the references contained therein.
\subsection{Iarrobino-Emsalem example} \label{ssec:I-Eeg} To set the stage for our generalization, we describe more fully Iarrobino and Emsalem's first example using our notation and terminology. The ideal $I$ $\subseteq$ $R = K[x_1, x_2, x_3, x_4]$ is generated by quadratic forms
\[
g_j = m_j - N_j,\ 1 \leq j \leq 7,
\]
where $m_j$ denotes the $j$-th monomial in the list of ``leading'' monomials
\[
x_1^2,\, x_1 x_2,\, x_1 x_3,\, x_1 x_4,\, x_2^2,\, x_2 x_3,\, x_2 x_4,
\]
and
\[
N_j = \sum_{i=0}^{2}(c_{i}\cdot x_3^{i}x_4^{2-i})
\]
is a $K$-linear combination of the ``trailing'' monomials of degree $2$ in the ``back variables'' $x_3,\, x_4$. When the coefficients $c_{i}$ are sufficiently general, one can show that all the monomials of degree $3$ belong to $I$; consequently, $I$ has finite colength with zero-set concentrated at the origin, and one sees easily that the order ideal
\[
\mathcal{O} = \{1,\, x_1,\, x_2,\, x_3,\, x_4,\, x_3^2,\, x_3 x_4,\, x_4^2 \}
\]
is a $K$-basis of the quotient $R/I$. Therefore, $[I]$ $\in$ $\operatorname{Hilb}^8_{\mathbb{A}^4_{K}}$; moreover, $I$ is in the ``border basis scheme'' $\mathbb{B}_{\mathcal{O}}$ $\subseteq$ $\operatorname{Hilb}^8_{\mathbb{A}^4_{K}}$ (see \cite[Secs.\ 2, 3]{KreutzerAndRobbiano1:DefsOfBorderBases}); we recall briefly the basics of border basis schemes in Section \ref{sec:borderBases}.
The ideal $I$ can be ``deformed'' in two ways: the $7 \cdot 3$ $=$ $21$ coefficients defining the $g_j$ can be tweaked, and the ideal (or corresponding subscheme) can be translated in four independent directions in $\mathbb{A}^4_{K}$; this shows that the point $[I]$ lies on a locus of dimension at least 25 consisting of points $[I']$ such that $I'$ is supported at one point. On the other hand, the dimension of the tangent space $\EuScript{T}_{[I]}$ $=$ $\Hom{R}{I}{R/I}$ can be computed, and one finds that this dimension is 25. From this it follows that $[I]$ is a smooth point on an elementary component of dimension 25 in $\operatorname{Hilb}^{8}_{\mathbb{A}^4_{K}}$. In general, we say that an ideal $I$ such that $[I]$ is a smooth point on an elementary component of $\operatorname{Hilb}^{\mu}_{\mathbb{A}^n_{K}}$ is \textbf{generic}.
\begin{rem} \label{rem:HilbFuncAndShaf}
The Hilbert function of the subschemes $\operatorname{Spec}(R/I)$ in the example just discussed is $(1,4,3,0)$. Shafarevich's results in \cite{Shaf:DefsOfCommAlgebrasOfClass2} imply that the analogously-constructed ideals corresponding to the Hilbert function $(1,5,3,0)$ are also generic. In the next section we describe our generalization of Iarrobino and Emsalem's construction, which yields generic ideals corresponding to the Hilbert function $(1,5,3,4,0)$.
\end{rem}
\subsection{A generalization} \label{ssec:AGen}
We now describe our ``smallest'' example of a generic ideal $I$; it is very similar to the Iarrobino-Emsalem example just discussed, except that the leading and trailing monomials have different degrees and the embedding dimension is $5$. (A more complete description, including a link to a \textit{Mathematica} \cite{Mathematica} notebook containing the computational details, is given in Section \ref{ssec:(1,5,3,4)}.) The leading monomials are the $12$ monomials of degree $2$ in $R$ $=$ $K[x_1,\dots,x_5]$ that involve at least one of the ``front variables'' $x_1, x_2, x_3$:
\[
\operatorname{LM} = \left\{ \begin{array}{c}
x_1^2,\, x_1 x_2,\, x_1 x_3,\, x_1 x_4,\, x_1 x_5,\, x_2^2,\, x_2 x_3,\\ x_2 x_4,\, x_2 x_5, x_3^2,\, x_3 x_4,\, x_3 x_5
\end{array} \right\},
\]
and the trailing monomials are the four monomials of degree $3$ in the ``back variables'' $x_4, x_5$:
\[
\operatorname{TM} = \{x_4^3,\, x_4^2 x_5,\, x_4 x_5^2,\, x_5^3\}.
\]
The ideal is generated by polynomials
\[
g_j = m_j - N_j,\ 1 \leq j \leq 12,
\]
where $m_j$ is the $j$-th leading monomial and
\[
N_j = \sum_{i=0}^{3} c_{ij}x_4^{i}x_5^{3-i} \in \operatorname{Span}_{K}(\operatorname{TM})
\]
is a form of degree $3$ in $x_4, x_5$. If the $g_j$ are sufficiently general, it can be shown that every monomial of degree $4$ is in $I$, and that the quotient $R/I$ has $K$-basis the order ideal
\[
\mathcal{O} = \left\{1,\, x_1,\, x_2,\, x_3,\, x_4,\, x_5,\, x_4^2,\, x_4x_5,\, x_5^2,\, x_4^3,\, x_4^2x_5,\, x_4x_5^2,\, x_5^3\right\},
\]
so
\[
[I] \in \mathbb{B}_{\mathcal{O}} \subseteq \operatorname{Hilb}^{13}_{\mathbb{A}^5_{K}}
\]
and the Hilbert function is $(1,5,3,4,0)$.
As shown in general in Sections \ref{sec:idealFam} and \ref{sec:derivMap}, there are (at least) three ways that the ideal $I$ can be deformed without changing its ``type,'' or the fact that its zero-set consists of one point, and that these give independent tangent directions at $[I]$:
\begin{itemize}
\item The $4 \cdot 12 = 48$ coefficients $c_{ij}$ can be tweaked;
\item The ideal can be translated in $\mathbb{A}^5_{K}$;
\item The ideal can be pulled back via automorphisms of $\mathbb{A}^5_{K}$ defined by coordinate changes of the form $x_{\alpha}$ $\mapsto$ $x_{\alpha} + c_{\alpha,\beta}\cdot x_{\beta}$, $x_{\beta}$ $\mapsto$ $x_{\beta}$, where $1 \leq \alpha \leq 3$, $4 \leq \beta \leq 5$, $c_{\alpha,\beta} \in K$.
\end{itemize}
(Translation also involves pulling back the ideal via an automorphism of $\mathbb{A}^5_{K}$, and so the second and third deformation methods are treated in a uniform way in the body of the paper.) Therefore, $[I]$ lies on a locus of dimension at least $48 + 5 + 3 \cdot 2$ $=$ $59$ consisting of points $[I']$ such that the ideal $I'$ is supported at one point. On the other hand, one finds by direct (machine) computation that $\dim_{K}(\EuScript{T}_{[I]})$ $=$ $59$. From this it follows that $[I]$ is a smooth point on an elementary component of $\operatorname{Hilb}^{13}_{\mathbb{A}^5_{K}}$ of dimension $59$; that is, $I$ is generic. Note that the dimension of the principal component in this case is $5 \cdot 13$ $=$ $65$.
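The combinatorics behind this count is easy to check by machine. The following minimal sketch (ours, and not part of the tangent-space computations referred to above) enumerates the leading monomials, trailing monomials, and order ideal for the shape $(5,2,2,3)$ and reproduces the lower bound $48+5+6=59$.
\begin{verbatim}
from itertools import combinations_with_replacement

n, kappa, r, s = 5, 2, 2, 3
front = range(1, n - kappa + 1)        # x1, x2, x3
back = range(n - kappa + 1, n + 1)     # x4, x5

def monomials(variables, degree):
    return list(combinations_with_replacement(variables, degree))

LM = [m for m in monomials(range(1, n + 1), r) if any(v in front for v in m)]
TM = monomials(back, s)
# order ideal: 1, the n variables, and the back-variable monomials of degree 2..s
O = [()] + monomials(range(1, n + 1), 1) + [m for d in range(2, s + 1)
                                            for m in monomials(back, d)]

print(len(LM), len(TM), len(O))                     # 12 4 13
print(len(LM) * len(TM) + n + (n - kappa) * kappa)  # 48 + 5 + 6 = 59
\end{verbatim}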
\begin{rem} \label{rem:shapeRem}
Most of the examples presented in Section \ref{sec:examples} will be of the form just described: that is, there will be $n \geq 3$ variables $x_1, \dots, x_n$, with $x_1, \dots, x_{n-\kappa}$ the front variables and $x_{n-\kappa+1}, \dots, x_n$ the back variables ($1 < \kappa < n$). The leading monomials will have degree $r$ $\geq$ $2$ and the trailing monomials will have degree $s$ $>$ $r$. The ideal $I$ will be generated by sufficiently general polynomials of the form
\[
g_j\ =\
\left(
\begin{array}{c}
j\text{-th leading monomial}\ -\\
K\text{-linear combination of trailing monomials}
\end{array} \right).
\]
We say that ideals formed in this way have \textbf{shape} $(n,\kappa,r,s)$; they are special cases of a slightly more general type of ideal, introduced in Section \ref{sec:SLI}, that we call ``distinguished,'' and that is the focus of our exposition. Given an order ideal $\mathcal{O}$, one obtains a distinguished\ ideal by constructing its $\mathcal{O}$-border basis, making use of sets of leading and trailing monomials as in the preceding examples to obtain (some of) the generators. In particular, a distinguished\ ideal is in the $\mathcal{O}$-border basis scheme and is supported at one point (the origin) of $\mathbb{A}^n_{K}$. The case of main interest (distinguished\ ideals of shape $(n,\kappa,r,s)$) is discussed in detail in Section \ref{sec:idealsOfShapeNkRs}.
\end{rem}
\subsection{Summary of examples} \label{ssec:sumOfEgs}
In Section \ref{sec:examples} we present several examples of generic ideals/elementary components, which we summarize briefly in the following list. In each case, we give the Hilbert function, the shape (except for the last case), the dimension of the elementary component, which is equal to $\dim_{K}(\EuScript{T}_{[I]})$, and the dimension of the principal component.
\begin{description}
\item[Hilbert func.\ (1,5,3,4,0), Shape $\mathbf{(5,2,2,3)}$] As discussed in Section \ref{ssec:AGen}, in this case $[I]$ is a smooth point on an elementary component of dimension $59$, and the dimension of the principal component is $5 \cdot 13 = 65$.
\item[Hilbert func.\ (1,5,3,4,5,6,0), Shape $\mathbf{(5,2,2,5)}$] In this case $[I]$ is a smooth point on an elementary component of dimension 104, and the dimension of the principal component is $5 \cdot 24$ $=$ $120$.
\item[Hilbert func.\ (1,6,6,10,0), Shape $\mathbf{(6,3,2,3)}$] In this case $[I]$ is a smooth point on an elementary component of dimension $165$, and the dimension of the principal component is $6 \cdot 23$ $=$ $138$.
\item[Hilbert func.\ (1,6,21,10,15,0), Shape $\mathbf{(6,3,3,4)}$] In this case $[I]$ is a smooth point on an elementary component of dimension $705$, and the dimension of the principal component is $ 6 \cdot 53$ $=$ $318$.
\item[Hilbert function (1,6,10,10,5,0)] In this case we give three different generic ideals/elementary components having this Hilbert function, each based on the same order ideal $\mathcal{O}$ of cardinality 32. The ideals have a slightly more general form than those in the examples above. In the first case, $[I]$ is smooth on an elementary component of dimension 255, in the second, the elementary component has dimension 222, and in the third, the dimension is 211. The principal component has dimension $6 \cdot 32$ $=$ $192$.
\end{description}
\subsection{Plausible Genericity} \label{ssec:introPlausGen}
Unfortunately, we have no general theorems of the form ``every sufficiently general ideal of a certain type or shape is generic,'' since we do not know how to verify that $\dim_{K}(\EuScript{T}_{[I]})$ attains its minimum value except by direct computation in each case. However, in Section \ref{sec:PlausArgs} we will make an attempt in this direction, by giving an easily-computable criterion for detecting if sufficiently general ideals of shape $(n,\kappa,r,s)$ are \emph{plausibly} generic, in which case we say that $(n,\kappa,r,s)$ is a \textbf{plausible} shape. For example, this criterion indicates that the following shapes are plausible:
\begin{equation} \label{eqn:PlausibleShapes}
\begin{array}{cc}
\text{Shape}& \text{Range}
\\
(n,2,2,3) & 5 \leq n \leq 50000 \text{ (at least)},
\\
(n,2,2,4) & 5 \leq n \leq 50000\text{ (at least)},
\\
(n,3,3,4) & 5 \leq n \leq 17,
\\
(n,3,3,5) & 5 \leq n \leq 25,
\\
(n,4,2,6) & 16 \leq n \leq 50000\text{ (at least)},
\\
(n,4,3,6) & 8 \leq n \leq 120,
\\
(50,\kappa,4,6) & 7 \leq \kappa \leq 22
\\
(50,\kappa,4,8) &6 \leq \kappa \leq 14.
\end{array}
\end{equation}
Analysis of the asymptotic behavior of the plausibility criterion in Section \ref{ssec:PlausAsympBehavior} leads us to offer the following
\begin{conj} \label{conj:FirstOfTwo}
Given $r=2$, $s>2$, and $\kappa \geq 2$, the shape $(n,\kappa,2,s)$ is plausible for all $n>>0$.
\end{conj}
\begin{rem} \label{rem:NoTrendRem}
The reader may be wondering whether the trend suggested by the first two examples in Section \ref{ssec:sumOfEgs} extends. The answer is almost certainly ``no.'' As $s$ increases, the shape
$(5,2,2,s)$ must eventually become implausible, as shown in Section \ref{ssec:PlausAsympBehavior}. Indeed, for the Hilbert function $(1,5,3,4,5,6,7,0)$, the tangent space dimension at $[I]$ is 139, which is less than 155, the dimension of the principal component, but is larger than the ``expected'' value of 131 (given by Equation (\ref{eqn:tanSpDimInGenShapeCase})) if $I$ were generic. Hence, $[I]$ is not a point of the principal component, but neither is $I$ likely to be generic.
\end{rem}
\subsection{Overview of paper} \label{ssec:overview}
Following the introduction, we review the terminology and theory of border basis schemes in Section \ref{sec:borderBases}, and lay some foundations for later sections. In Section \ref{sec:SLI}, we present the definition and first properties of the ideals that are our main objects of study, which we call \textbf{distinguished\ ideals}. (Their construction generalizes that described in Sections \ref{ssec:I-Eeg} and \ref{ssec:AGen}.)
Section \ref{sec:tanSpace} recalls the basic facts regarding the tangent space at $[I]$ for ideals $I$ of finite colength, and outlines how the dimension of the tangent space can be computed.
Given a distinguished\ ideal $I$, in Section \ref{sec:idealFam} we construct a map
\[
\fn{\EuScript{F}}{U}{\operatorname{Hilb}^{\mu}_{\mathbb{A}^n_{K}}},
\]
where $U$ is an affine space and $[I]$ $\in$ $\EuScript{F}(U)$. Every point $[J]$ $\in$ $\EuScript{F}(U)$ corresponds to a closed subscheme $\operatorname{Spec}(K[\mathbf{x}]/J)$ supported at a single point of $\mathbb{A}^n_{K}$; in particular, $\EuScript{F}(U)$ contains all the points corresponding to ideals $I'$ obtained from $I$ through some combination of tweaking of coefficients and pulling back via automorphisms, as described in Section \ref{ssec:AGen}. For $p$ $\in$ $U$ and $\EuScript{F}(p)$ $=$ $[I_p]$, we show (Proposition \ref{prop:F(U)DimLowerBd}) that the cardinality $L$ of a linearly independent set of vectors in the image of the derivative map $\fn{\EuScript{F}'_p}{\EuScript{T}_p}{\EuScript{T}_{[I_p]}}$ is a lower bound for the dimension of $\EuScript{F}(U)$. This leads to a simple method for finding elementary components (Proposition \ref{prop:genericCond}): If one has a lower bound $L$ for $\dim(\EuScript{F}(U))$ such that $\dim_{K}(\EuScript{T}_{[I_p]})$ $=$ $L$, then $[I_p]$ will be a smooth point on $\overline{\EuScript{F}(U)}$, which is accordingly an elementary component of dimension $L$.
We study the derivative map $\fn{\EuScript{F}'_p}{\EuScript{T}_p}{\EuScript{T}_{[I_p]}}$ in Section \ref{sec:derivMap}. In particular, we compute the images of the standard unit vectors at $p \in U$ (an affine space) to enable us to find lower bounds $L$ on the dimension of $\EuScript{F}(U)$.
In Section \ref{sec:spCaseRestr} we define and study the \textbf{lex-segment complement} order ideals (and associated distinguished\ ideals) that are used in all of the examples outlined in Section \ref{ssec:sumOfEgs}. We work out in detail the concepts and results of Sections \ref{sec:idealFam} and \ref{sec:derivMap} in this special case, to prepare for the presentation of the examples in Section \ref{sec:examples}.
Following the presentation of the examples, we turn to the final goal of the paper, which is to develop a criterion for detecting plausible shapes $(n,\kappa,r,s)$. The criterion is stated and justified in (the final) Section \ref{sec:PlausArgs}, preceded by an extensive preparatory study of distinguished\ ideals of shape $(n,\kappa,r,s)$ in Section \ref{sec:idealsOfShapeNkRs}.
\noindent \textit{Acknowledgment}: The author is happy to thank Prof.\ Tony Iarrobino for several helpful private communications. He also thanks the referee for a myriad of useful comments.
\section{Border basis schemes} \label{sec:borderBases}
In this section, we briefly recall some of the terminology and theory of \textbf{border basis schemes} as given in \cite[Secs.\ 2, 3]{KreutzerAndRobbiano1:DefsOfBorderBases}.
\subsection{Basic definitions} \label{ssec:BasDefs} One begins with an \textbf{order ideal}, which is a finite set $\mathbf{m}hcal{O}$ $=$ $\{t_1,\, \dots,\, t_{\mu} \}$ of monomials in the variables $x_1,\, \dots,\, x_n$ such that whenever a monomial $m$ divides a member of $\mathbf{m}hcal{O}$, it follows that $m \in \mathbf{m}hcal{O}$. We will refer to the monomials $t_i$ as \textbf{basis monomials}. The \textbf{border} of $\mathbf{m}hcal{O}$ is the set of monomials
\[
\partial \mathbf{m}hcal{O}\ =\ \left(x_1 \mathbf{m}hcal{O} \cup \dots \cup x_n \mathbf{m}hcal{O}\right) \setminus \mathbf{m}hcal{O} \ =\ \{b_1,\, \dots,\, b_{\nu}\};
\]
we will refer to the $b_j$ as \textbf{boundary monomials}. A set of polynomials $\mathbf{m}hcal{B}$ $=$ $\{ g_1,\dots, g_{\nu} \}$ of the form $g_j = b_j - \sum_{i = 1}^{\mu}c_{ij}t_i$ with $c_{ij} \in K$ is called an \textbf{$\mathbf{m}hcal{O}$-border prebasis} of the ideal
\[
I = (\mathbf{m}hcal{B}) \subseteq K[x_1,\, \dots,\, x_n] = K[\mathbf{m}hbf{x}] = R.
\]
It is clear that every boundary monomial is congruent to a linear combination of basis monomials modulo $I$, and an induction argument shows that the same is true for every monomial; in other words, the quotient $R/I$ is spanned as a $K$-vector space by the $t_i$. We say that $\mathbf{m}hcal{B}$ is an \textbf{$\mathbf{m}hcal{O}$-border basis} of $I$ if $\mathbf{m}hcal{O}$ is a $K$-basis of the quotient; in this case, every monomial is congruent modulo $I$ to a unique $K$-linear combination of basis monomials.
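For a minimal illustration (independent of the examples studied later): in two variables, the set $\{1,\, x_1,\, x_2\}$ is an order ideal with $\mu = 3$; its border is $\{x_1^2,\, x_1 x_2,\, x_2^2\}$, so $\nu = 3$, and a border prebasis consists of three polynomials
\[
g_j\ =\ b_j - c_{1j}\cdot 1 - c_{2j}\, x_1 - c_{3j}\, x_2,\ \ b_j \in \{x_1^2,\, x_1 x_2,\, x_2^2\},\ \ c_{ij} \in K.
\]
We will refer to this toy example again when counting linear syzygies in Section \ref{ssec:AlgorithmForNbrSyz} and when computing a tangent space dimension in Section \ref{ssec:tanSpGen}.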
The $\mathbf{m}hcal{O}$\textbf{-border basis scheme} $\mathbf{m}hbb{B}_{\mathbf{m}hcal{O}}$ is an affine scheme whose $K$-points correspond to the ideals $I$ having an $\mathbf{m}hcal{O}$-border basis; as such, it is an open affine subscheme of $\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}$, from which it inherits the universal property (\ref{txt:BbasUnivProp}): Let $\mathbf{m}hcal{Z}_{\mathbf{m}hcal{O}}$ denote the restriction of the universal closed subscheme $\mathbf{m}hcal{Z}_{\mu}$ $\subseteq$ $\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}} \times \mathbf{m}hbb{A}^n_{K}$ to $\mathbf{m}hbb{B}_{\mathbf{m}hcal{O}}$. Then (see, {\em e.g.}, \cite[Th.\ 37, p.\ 306]{Huib:UConstr}):
\numbtext{txt:BbasUnivProp}{.}{.}{A map $\fn{q}{Q}{\mathbf{m}hbb{B}_{\mathbf{m}hcal{O}}}$ corresponds uniquely to a closed subscheme $\mathbf{m}hcal{Z}_q$ $\subseteq$ $Q \times \mathbf{m}hbb{A}^n_{K}$ such that the direct image of $O_{\mathbf{m}hcal{Z}_q}$ on
$Q$ is free with basis $\mathbf{m}hcal{O}$; the correspondence is given by $q$ $\leftrightarrow$ $q^*(\mathbf{m}hcal{Z}_{\mathbf{m}hcal{O}})$.}
Note that the schemes $\mathbf{m}hbb{B}_{\mathbf{m}hcal{O}}$, as $\mathbf{m}hcal{O}$ ranges over all order ideals of cardinality $\mu$, form an open affine covering of $\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}$; moreover, one can construct $\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}$ by first constructing the schemes $\mathbf{m}hbb{B}_{\mathbf{m}hcal{O}}$ and then gluing along the natural overlaps (see, {\em e.g.}, \cite{LaksovAndSkjelnes:HilbSchConstr}, \cite{Huib:UConstr},
\cite{KreuzerAndRobbiano2:TheGeometryOfBorderBases}).
\subsection{Neighbor syzygies of a border basis} \label{ssec:NbrSyz} Let $I$ be an ideal having $\mathbf{m}hcal{O}$-border basis $\mathbf{m}hcal{B}$ $=$ $\{ g_1, \dots, g_{\nu} \}$ $\subseteq$ $K[\mathbf{m}hbf{x}]$, as in Section \ref{ssec:BasDefs}. The neighbor syzygies provide a convenient set of generators for the first syzygy module of the $g_j$; we briefly recall their construction, following \cite[Sec.\ $4$, p.\ $13$]{KreutzerAndRobbiano1:DefsOfBorderBases}.
We say that two boundary monomials $b_j$ and $b_{j'}$ are \textbf{next-door neighbors} if $x_k b_j$ $=$ $b_{j'}$ for some $k$ $\in$ $\{ 1,\dots,n \}$, and \textbf{across-the-street} neighbors if $x_k b_j$ $=$ $x_l b_{j'}$ for some $k,\, l$ $\in$ $\{ 1,\dots,n \}$; in either case, we say that $b_j$ and $b_{j'}$ are \textbf{neighbors}. Given neighbors $b_j$ and $b_{j'}$, we form the $S$-polynomial
\[
S(g_j,g_{j'}) = \left\{ \begin{array}{ll}
x_k \cdot g_j - g_{j'}, & \text{ if } b_j, b_{j'} \text{ are next-door nbrs,}
\\
x_k \cdot g_j - x_l \cdot g_{j'}, & \text{ if } b_j, b_{j'} \text{ are across-the-street nbrs.}
\end{array} \right.
\]
In either case, $S(g_j,g_{j'})$ is a $K$-linear combination of basis and boundary monomials. For each term of the form $c_{j''} b_{j''}$ that appears in $S(g_j,g_{j'})$, we subtract $c_{j''} g_{j''}$ (put another way, we reduce $S(g_j,g_{j'})$ modulo the ideal generators $\mathbf{m}hcal{B}$); the result is a $K$-linear combination of basis monomials $\overline{S}(g_j,g_{j'})$ that is a $K[\mathbf{m}hbf{x}]$-linear combination of the $g_j$; whence, $\overline{S}(g_j,g_{j'})$ $\equiv 0 {\mod I}$. Since $\mathbf{m}hcal{B}$ is an $\mathbf{m}hcal{O}$-border basis of $I$, it follows at once that $\overline{S}(g_j,g_{j'})$ is the $0$-polynomial, so we have constructed a syzygy of the polynomials $g_j$, the \textbf{neighbor syzygy} associated to the neighbors $b_j$, $b_{j'}$ $\in$ $\partial \mathbf{m}hcal{O}$. Writing this syzygy in the form $\sum_{\hat{j}=1}^{\nu}f_{\hat{j}}\, g_{\hat{j}}$ $=$ $0$, the tuple of coefficients $(f_{\hat{j}})$ has at most two components of degree 1 ($f_j$ and possibly $f_{j'}$), and the remaining components are all constants.
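To make the shape of these syzygies explicit: in the next-door case $x_k\, b_j$ $=$ $b_{j'}$, if the boundary terms appearing in $S(g_j,g_{j'})$ are $\sum_{j''} c_{j''}\, b_{j''}$, then the resulting neighbor syzygy reads
\[
x_k\, g_j\ -\ g_{j'}\ -\ \sum_{j''} c_{j''}\, g_{j''}\ =\ 0,
\]
and in the across-the-street case the first two terms are replaced by $x_k\, g_j - x_l\, g_{j'}$.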
If $\mathbf{m}hcal{B}$ $=$ $(g_1, \dots, g_{\nu})$ is just an $\mathbf{m}hcal{O}$-border prebasis, one can still compute the $S$-polynomials and their reductions $\overline{S}(g_j,g_{j'})$. These again are $K$-linear combinations of basis monomials, but they no longer necessarily vanish. We have the following key results:
\begin{prop} \label{prop:NbrSyzFacts} { }
\noindent
\begin{itemize}
\item[i.]
If the reduced $S$-polynomials $\overline{S}(g_j,g_{j'})$ for an $\mathbf{m}hcal{O}$-border prebasis $\mathbf{m}hcal{B}$ are all equal to $0$, then $\mathbf{m}hcal{B}$ is an $\mathbf{m}hcal{O}$-border basis; that is, the quotient $K[\mathbf{m}hbf{x}]/(\mathbf{m}hcal{B})$ is $K$-free with basis $\mathbf{m}hcal{O}$. (The converse was proved at the start of this section.)
\item[ii.] If $\mathbf{m}hcal{B}$ is an $\mathbf{m}hcal{O}$-border basis, then the neighbor syzygies generate the first syzygy module of the $\{g_j\}$ as a $K[\mathbf{m}hbf{x}]$-module.
\end{itemize}
\end{prop}
\proof The first statement is proved in \cite[Prop.\ 6.4.34, p.\ 438]{KreutzerAndRobbianoVolTwo}, and both statements are proved in \cite[Th.\ $22$, p.\ 292]{Huib:UConstr}. A beautiful algorithmic proof of the second statement is given in \cite{KreuzerAndKriegl}.
\qed
\begin{rem} \label{rem:GrdRingsOK}
The theory of border bases can be developed in essentially the same way as summarized above when the ground field $K$ is replaced by an arbitrary commutative and unitary ring $A$, such as a $K$-algebra (see, {\em e.g.}, \cite{Huib:UConstr}). In particular, the analogue of Proposition \ref{prop:NbrSyzFacts} holds in this more general context.
\end{rem}
\subsection{Linear syzygies} \label{ssec:AlgorithmForNbrSyz}
We say that a syzygy $(f_j)$ of the ideal generators $g_j$ of an $\mathbf{m}hcal{O}$-border basis (that is, $\sum_{j=1}^{\nu}f_j\cdot g_j = 0$) is a \textbf{linear syzygy} provided that the coefficients $f_j$ $\in$ $K[\mathbf{m}hbf{x}]$ have degree at most 1. For example, the neighbor syzygies are all linear syzygies. Since the neighbor syzygies generate the $K[\mathbf{m}hbf{x}]$-module of first syzygies of the border basis $\mathbf{m}hcal{B}$, by Proposition \ref{prop:NbrSyzFacts}, we see that a $K$-basis of the linear syzygies is also a set of $K[\mathbf{m}hbf{x}]$-generators of the full syzygy module.
We briefly describe the algorithm we use for computing a $K$-basis of the linear syzygies; as a consequence, we will obtain the cardinality of this basis. (This algorithm is implemented in the \emph{Mathematica} function \textbf{makeLinearSyzygies}, included in the notebook of utility functions mentioned at the start of Section \ref{sec:examples}.)
We first compute the set of boundary monomials $\partial \mathbf{m}hcal{O}$ and the set of \textbf{target monomials}
\begin{equation}\label{eqn:tarMonDef}
\operatorname{T} = \partial \mathbf{m}hcal{O} \cup \{ x_{\alpha} \cdot b_j \mid 1 \leq \alpha \leq n,\ b_j \in \partial \mathbf{m}hcal{O}\}.
\end{equation}
Next we define a $K$-linear ``projection'' map $\fn{\pi_T}{K[\mathbf{m}hbf{x}]}{\operatorname{Span}_{K}(\operatorname{T})}$ by extending linearly the map on monomials
\[
m \mapsto \left\{ \begin{array}{l}
m,\ \text{if } m \in \operatorname{T},
\\
0,\ \text{otherwise}.
\end{array} \right.
\]
Let $V$ be a $K$-vector space with basis
\[
E\ =\ \{ e_{\alpha,j} \mid 0 \leq \alpha \leq n,\ 1 \leq j \leq \nu\},\ \text{ of cardinality } |E| = (n+1)\cdot \nu,
\]
and define a linear map
\[
\fn{\sigma}{V \approx K^{(n+1)\cdot \nu}}{\operatorname{Span}_{K}(\operatorname{T})},\ \ e_{\alpha,j} \mapsto
\left\{ \begin{array}{l}
\pi_{\operatorname{T}}(1\cdot g_{j}) = b_j,\ \text{if } \alpha = 0,
\\
\pi_{\operatorname{T}}(x_{\alpha} \cdot g_{j}),\ \text{if } n \geq \alpha > 0
\end{array} \right. .
\]
\begin{lem} \label{lem:sigmaSurjLem}
The map $\sigma$ is surjective.
\end{lem}
\proof
Since
\[
e_{0,j} \mapsto b_j \in \partial \mathbf{m}hcal{O} \subseteq \operatorname{T} \text{ for } 1 \leq j \leq \nu,
\]
it is clear that $\operatorname{Span}_{K}(\partial \mathbf{m}hcal{O})$ is in the image of $\sigma$. We now observe that for $0 < \alpha \leq n$,
\[
e_{\alpha,j} \mapsto \pi_{\operatorname{T}}(x_{\alpha} \cdot g_j) = x_{\alpha} \cdot b_j + \left( \text{element of } \operatorname{Span}_{K}(\partial \mathbf{m}hcal{O})\right);
\]
whence, every monomial $x_{\alpha} \cdot b_j$ $\in$ $\operatorname{T}$ is in the image of $\sigma$, and the lemma follows at once.
\qed
Let $(d_{\alpha,j})$ $\in$ $\operatorname{Ker}(\sigma)$. Setting $f_j$ $=$ $d_{0,j} + \sum_{\alpha=1}^n d_{\alpha,j} x_{\alpha}$, we observe that the tuple $(f_j)$ is a linear syzygy of $\{g_j\}$, because $\sum_{j=1}^{\nu}{f_j}\cdot{g_j}$ is a $K$-linear combination of basis monomials (the monomials in $\operatorname{T}$ having cancelled out), and hence $0$. Moreover, every linear syzygy of the $g_j$ arises in this way. So a $K$-basis of the linear syzygies can be computed simply by computing a basis $\{ (d_{\alpha,j}) \}$ of the kernel of $\sigma$ and assembling the corresponding tuples $(f_j)$. From this it follows that the dimension of the $K$-vector space of linear syzygies of the border basis $\mathbf{m}hcal{B}$ of $I$ is given by
\begin{equation} \label{eqn:linSyzCount}
\psi\ =\ \dim_{K}(\ker(\sigma))\ =\ (n+1) \cdot \nu - |\operatorname{T}|.
\end{equation}
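As an illustration, return to the toy example of Section \ref{ssec:BasDefs}: for the order ideal $\{1,\, x_1,\, x_2\}$ in two variables with the monomial border basis $g_1 = x_1^2$, $g_2 = x_1 x_2$, $g_3 = x_2^2$, the target monomials are
\[
\operatorname{T}\ =\ \{x_1^2,\, x_1 x_2,\, x_2^2\}\ \cup\ \{x_1^3,\, x_1^2 x_2,\, x_1 x_2^2,\, x_2^3\},\ \ |\operatorname{T}| = 7,
\]
so $\psi = (2+1)\cdot 3 - 7 = 2$; indeed, the two across-the-street neighbor syzygies $x_2\, g_1 - x_1\, g_2 = 0$ and $x_2\, g_2 - x_1\, g_3 = 0$ span the space of linear syzygies in this case.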
\subsection{Generators of the ideal of $\mathbf{m}hbb{B}_{\mathbf{m}hcal{O}}$} \label{ssec:BorBasGens} The border basis scheme
$\mathbf{m}hbb{B}_{\mathbf{m}hcal{O}}$ is a closed subscheme of $\mathbf{m}hbb{A}^{\mu \nu}_{K}$ $=$ $\operatorname{Spec}(K[\mathbf{m}hcal{C}])$, where
\begin{equation} \label{eqn:setCDefn}
\mathbf{m}hcal{C} = \{C_{ij},\ 1 \leq i \leq \mu,\ 1 \leq j \leq \nu\}
\end{equation}
is a set of indeterminates; the point corresponding to the ideal $I$ having border basis $\{g_j = b_j - \sum_{i = 1}^{\mu}c_{ij}t_i\}$ is $(c_{ij})$ $\in$ $\mathbf{m}hbb{A}^{\mu \nu}_{K}$. The generators of the ideal $\mathbf{m}hcal{I}_{\mathbf{m}hcal{O}}$ such that $\mathbf{m}hbb{B}_{\mathbf{m}hcal{O}}$ $=$ $\operatorname{Spec}(K[\mathbf{m}hcal{C}]/\mathbf{m}hcal{I}_{\mathbf{m}hcal{O}})$ can be obtained as follows: Form the ``generic $\mathbf{m}hcal{O}$-border prebasis''
\begin{equation} \label{eqn:genBorPreBasis}
\mathbf{m}hcal{B}^{\star} = \{ G_1, \dots, G_{\nu} \} \subseteq K[\mathbf{m}hcal{C}][\mathbf{m}hbf{x}],\ \ G_j = b_j - \sum_{i=1}^{\mu}C_{ij}t_i,
\end{equation}
and compute the $K[\mathbf{m}hcal{C}]$-linear combinations of basis monomials
\[
\overline{S}(G_j,G_{j'}) = \sum_{i=1}^{\mu} \varphi^{j,j'}_i t_i.
\]
Then the ideal $\mathbf{m}hcal{I}_{\mathbf{m}hcal{O}}$ is generated by the coefficients $\varphi^{j,j'}_i$ $\in$ $K[\mathbf{m}hcal{C}]$ (see, {\em e.g.}, \cite[Th.\ 37, p.\ 306]{Huib:UConstr}). The point is that over the ring $A_{\mathbf{m}hcal{O}}$ $=$ $K[\mathbf{m}hcal{C}]/\mathbf{m}hcal{I}_{\mathbf{m}hcal{O}}$, the polynomials $\overline{S}(G_j,G_{j'})$ all vanish, so Proposition \ref{prop:NbrSyzFacts} (in the light of Remark \ref{rem:GrdRingsOK}) yields that the quotient $A_{\mathbf{m}hcal{O}}[\mathbf{m}hbf{x}]/(G_j)$ is $A_{\mathbf{m}hcal{O}}$-free with basis $\mathbf{m}hcal{O}$. One then shows that $\operatorname{Spec}(A_{\mathbf{m}hcal{O}})$ and the family of subschemes $\operatorname{Spec}(A_{\mathbf{m}hcal{O}}[\mathbf{m}hbf{x}]/(G_j))$ together satisfy the universal property (\ref{txt:BbasUnivProp}) of the border basis scheme $\mathbf{m}hbb{B}_{\mathbf{m}hcal{O}}$.
\begin{rem} \label{rem:matrixMethod}
The generators of the ideal $\mathbf{m}hcal{I}_{\mathbf{m}hcal{O}}$ can also be constructed as the entries of the commutators of the ``generic multiplication matrices'' --- see, {\em e.g.}, \cite[Sec.\ 3]{KreutzerAndRobbiano1:DefsOfBorderBases}.
\end{rem}
\section{Distinguished ideals} \label{sec:SLI}
In this section we will describe our main objects of study.
\subsection{Definition of distinguished\ ideals} \label{ssec:DistIdealsDefn}
Let $\mathbf{m}hcal{O}_{\text{max}}$ $\subseteq$ $\mathbf{m}hcal{O}$ denote the subset of \textbf{maximal} basis monomials, which are those basis monomials $t_i$ such that $x_k t_{i}$ $\in$ $\partial \mathbf{m}hcal{O}$ for all $1 \leq k \leq n$. Similarly, let $\partial \mathbf{m}hcal{O}_{\text{min}}$ $\subseteq$ $\partial \mathbf{m}hcal{O}$ denote the subset of \textbf{minimal} boundary monomials, which are those boundary monomials $b_j$ such that $b_j/x_k$ $\in$ $\mathbf{m}hcal{O}$ for every $x_k$ that appears in $b_j$.
Choose non-empty subsets
\[
\operatorname{LM} = \{b_{j_1},\, \dots,\, b_{j_{\lambda}}\} \subseteq \partial \mathbf{m}hcal{O}_{\text{min}}
\]
and
\[
\operatorname{TM} = \{t_{i_1},\, \dots,\, t_{i_{\tau}} \} \subseteq \mathbf{m}hcal{O}_{\text{max}}
\]
such that the set $\operatorname{LM}$ is disjoint from the subset of boundary monomials
\begin{equation} \label{eqn:bdryT}
\partial \operatorname{TM} = \{x_k t_{i_{\ell}} \mid 1 \leq k \leq n,\ t_{i_{\ell}} \in \operatorname{TM} \} \subseteq \partial \mathbf{m}hcal{O}.
\end{equation}
We will call $\operatorname{LM}$ (resp.\ $\operatorname{TM}$) the \textbf{leading} (resp.\ \textbf{trailing}) \textbf{monomials}.
We choose a set $G$ $=$ $\{ g_{j_{\iota}} \}$ of polynomials of the form
\begin{equation} \label{eqn:fDef}
g_{j_{\iota}} = b_{j_{\iota}} - N_{j_{\iota}},\ \ 1 \leq \iota \leq \lambda,\ b_{j_{\iota}} \in \operatorname{LM},\ N_{j_{\iota}} \in {\rm Span}_{K}(\operatorname{TM}),
\end{equation}
and extend $G$ to an $\mathbf{m}hcal{O}$-border prebasis
\[
\mathbf{m}hcal{B} = \{ g_1, \dots, g_{\nu} \} = G \cup \left( \partial \mathbf{m}hcal{O} \setminus \operatorname{LM} \right).
\]
In Proposition \ref{prop:specialCCor} we prove that $\mathbf{m}hcal{B}$ is an $\mathbf{m}hcal{O}$-border basis of an ideal $I$ $=$ $(\mathbf{m}hcal{B})$. We say that any such ideal $I$ is a \textbf{distinguished\ ideal}.
\subsection{Example} \label{ssec:RunningExample1}
For the order ideal
\[
\mathbf{m}hcal{O} = \left\{1,\, x_1,\, x_2,\, x_3,\, x_2 x_3,\, x_3^2\right\} \subseteq K[x_1,x_2,x_3], \text{ with }
\mu = |\mathbf{m}hcal{O}| = 6,
\]
one has that
\[
\begin{array}{rcl}
\partial \mathbf{m}hcal{O} & = & \left\{
\begin{array}{c} x_1^2,\ x_1\, x_2,\ x_1\, x_3,\ x_2^2,
\\
x_1\, x_2\, x_3,\ x_1\, x_3^2,\ x_2^2\, x_3,\ x_2\, x_3^2,\ x_3^3
\end{array} \right\},
\\
\mathbf{m}hcal{O}_{\text{max}} & = & \left\{x_1,\, x_2 x_3,\, x_3^2\right\},
\\
\partial \mathbf{m}hcal{O}_{\text{min}} & = &\left\{x_1^2,\, x_1 x_2,\, x_1 x_3,\, x_2^2,\, x_2 x_3^2,\, x_3^3\right\}.
\end{array}
\]
There are various possible choices for the sets $\operatorname{LM}$ and $\operatorname{TM}$ of leading and trailing monomials that satisfy $\operatorname{LM} \cap \partial \operatorname{TM}$ $=$ $\emptyset$; here is one:
\[
\operatorname{LM} = \{ x_1^2,\, x_1 x_2,\, x_1 x_3,\, x_2^2 \},\ \operatorname{TM} = \{ x_2 x_3,\, x_3^2 \},
\]
so $\lambda = |\operatorname{LM}| = 4$ and $\tau = |\operatorname{TM}| = 2$. We therefore have an ($8 = \lambda \tau$)-dimensional family of distinguished\ ideals in the border basis scheme $\mathbf{m}hbb{B}_{\mathbf{m}hcal{O}}$ $\subseteq$ $\operatorname{Hilb}^6_{\mathbf{m}hbb{A}^3_{K}}$ with border bases of the form
\begin{equation} \label{eqn:RunningExampleBorBases}
\mathbf{m}hcal{B}\ =\ \left\{ \begin{array}{l}
g_1 = x_1^2 - C_{5,1}\,x_2\,x_3 - C_{6,1}\,x_3^2,
\\
g_2 = x_1\, x_2 - C_{5,2}\,x_2\,x_3 - C_{6,2}\,x_3^2,
\\
g_3 = x_1\, x_3 - C_{5,3}\,x_2\,x_3 - C_{6,3}\,x_3^2,
\\
g_4 = x_2^2 - C_{5,4}\,x_2\,x_3 - C_{6,4}\,x_3^2,
\\
g_5 = x_1\, x_2\, x_3,\ \ g_6 = x_1\, x_3^2,\ \ g_7 = x_2^2\, x_3,
\\
g_8 = x_2\, x_3^2,\ \ g_9 = x_3^3
\end{array} \right\}
\end{equation}
(the indeterminate coefficients $C_{ij}$ $\in$ $\mathbf{m}hcal{C}$ (\ref{eqn:setCDefn}) would of course be replaced by elements of $K$ in any specific example). To verify that $\mathbf{m}hcal{B}$ is an $\mathbf{m}hcal{O}$-border basis, it suffices, by Proposition \ref{prop:NbrSyzFacts}, to show that all the reduced $S$-polynomials $\overline{S}(g_j,g_{j'})$ are equal to $0$. The general argument is given in the proof of Proposition \ref{prop:specialCCor}, but for the prebasis $\mathbf{m}hcal{B}$ in (\ref{eqn:RunningExampleBorBases}) the check is also easily done by hand; here, for example, is one of the required verifications:
\[
\begin{array}{rcl}
S(g_1,g_2) & = & x_2 \cdot g_1 - x_1\cdot g_2
\\
{} & = & x_2 (x_1^2 - C_{5,1}\,x_2\,x_3 - C_{6,1}\,x_3^2)\ -
\\
{} &{} & x_1(x_1\, x_2 - C_{5,2}\,x_2\,x_3 - C_{6,2}\,x_3^2)
\\
{} & = & - C_{5,1}\,x_2^2\,x_3 - C_{6,1}\,x_2\,x_3^2 + C_{5,2}\,x_1\,x_2\,x_3 + C_{6,2}\,x_1\,x_3^2
\\
{} & \longrightarrow_{\mathbf{m}hcal{B}} & 0.
\end{array}
\]
\subsection{The locus of distinguished\ ideals in $\mathbf{m}hbb{B}_{\mathbf{m}hcal{O}}$} \label{ssec:specialLoci}
We define a subset $\mathcal{S}$ of $\mathbf{m}hcal{C}$ (\ref{eqn:setCDefn}) as follows:
\begin{equation}\label{eqn:SpecialC}
\mathcal{S}\ =\ \{ C_{ij} \mid b_j \in \operatorname{LM},\ t_i \in \operatorname{TM}\}\ \subseteq\ \mathbf{m}hcal{C}.
\end{equation}
We will say that the members of $\mathcal{S}$ and the associated pairs of indices $(i,j)$ are \textbf{distinguished}.
Consider the surjection of polynomial rings
\[
\fn{\gamma}{K[\mathbf{m}hcal{C}]}{K[\mathcal{S}]}, \ \ C_{ij} \mapsto \left\{
\begin{array}{l}
C_{ij}, \text{ if } C_{ij} \in \mathcal{S},
\\
0, \text{ otherwise},
\end{array} \right.,
\]
and let $\fn{\hat{\gamma}}{K[\mathbf{m}hcal{C}][\mathbf{m}hbf{x}]}{K[\mathcal{S}][\mathbf{m}hbf{x}]}$ denote the map obtained by applying $\gamma$ to each coefficient of the input polynomial $f \in K[\mathbf{m}hcal{C}][\mathbf{m}hbf{x}]$.
\begin{prop} \label{prop:specialCCor}
The map $\gamma$ factors through the coordinate ring $K[\mathbf{m}hcal{C}]/\mathbf{m}hcal{I}_{\mathbf{m}hcal{O}}$ of $\mathbf{m}hbb{B}_{\mathbf{m}hcal{O}}$. Consequently, $\mathbf{m}hbb{B}_{\mathbf{m}hcal{O}}$ contains an ($|\mathcal{S}| = \lambda \tau$)-dimensional closed subscheme $X_{\mathcal{S}}$ $=$ $\operatorname{Spec}(K[\mathcal{S}])$ isomorphic to affine space, whose $K$-points are obtained by assigning arbitrary scalars to the indeterminates in $\mathcal{S}$ and $0$ to the other indeterminates. Moreover, every point $[I']$ $\in$ $X_{\mathcal{S}}$ corresponds to a closed subscheme $\operatorname{Spec}(K[\mathbf{m}hbf{x}]/I')$ that is supported at the origin of $\mathbf{m}hbb{A}^n_{K}$.
\end{prop}
\proof
The image under $\hat{\gamma}$ of the $\mathbf{m}hcal{O}$-border prebasis $\mathbf{m}hcal{B}^{\star}$
(\ref{eqn:genBorPreBasis}) has the form
\[
\{\hat{\gamma}(G_1),\, \dots,\, \hat{\gamma}(G_{\nu}) \},\ \hat{\gamma}(G_j) = \left\{
\begin{array}{l}
b_j, \text{ if } b_j \notin \operatorname{LM},
\\
b_j - \sum_{C_{ij} \in \mathcal{S}} C_{ij}t_i, \text{ if }
b_j \in \operatorname{LM}
\end{array} \right. .
\]
We claim that the polynomials $\overline{S}(\hat{\gamma}(G_j),\hat{\gamma}(G_{j'}))$ of Section \ref{ssec:NbrSyz} (where the reductions are with respect to $\hat{\gamma}(\mathbf{m}hcal{B}^{\star})$) all vanish. There are three cases to check: First suppose that $b_j$ and $b_{j'}$ are neighbors such that neither is a leading monomial. Then
\[
S(\hat{\gamma}(G_j),\hat{\gamma}(G_{j'})) = S(b_j,b_{j'}) = 0\ \Rightarrow\ \overline{S}(\hat{\gamma}(G_j),\hat{\gamma}(G_{j'})) = 0.
\]
The second case to consider is that of two neighbors $b_j$ and $b_{j'}$ such that $b_j$ $\in$ $\operatorname{LM}$ and $b_{j'}$ $\notin$ $\operatorname{LM}$. Note that we cannot have that $b_j$ $=$ $x_k\, b_{j'}$, because $b_j$ is a minimal boundary monomial. It follows that
\[
\begin{array}{rcl}
S(\hat{\gamma}(G_j),\hat{\gamma}(G_{j'})) & = & x_k \left( b_j- \sum_{C_{ij} \in \mathcal{S}} C_{ij}t_i \right) - (x_l\text{ or } 1)\, b_{j'}
\\
{} & {=} & - \left( \sum_{C_{ij} \in \mathcal{S}} C_{ij}\, (x_k\, t_i) \right).
\end{array}
\]
Since the only terms that survive in the last expression have distinguished\ coefficients $C_{ij}$, we know that $t_i$ $\in$ $\operatorname{TM}$ and therefore
\[
x_k\, t_i \in \partial \mathbf{m}hcal{O} \setminus \operatorname{LM}\ \Rightarrow x_k\,t_i \in \hat{\gamma}(\mathbf{m}hcal{B}^{\star})\ \Rightarrow\ \overline{S}(\hat{\gamma}(G_j),\hat{\gamma}(G_{j'})) = 0.
\]
The third case is that of neighbors $b_j$, $b_{j'}$ $\in$ $\operatorname{LM}$, for which the argument is similar to that of the second case (and is illustrated in the Example of Section \ref{ssec:RunningExample1}).
Since it is clear that
\[
\begin{array}{rcl}
0\ =\ \overline{S}(\hat{\gamma}(G_j),\hat{\gamma}(G_{j'})) & = & \hat{\gamma}(\overline{S}(G_j,G_{j'}))
\\
{} & = & \hat{\gamma} \left( \sum_{i=1}^{\mu} \varphi^{j,j'}_i t_i \right)\
\\
{} & = & \sum_{i=1}^{\mu} \gamma(\varphi^{j,j'}_i) t_i ,
\end{array}
\]
we conclude that $\gamma(\varphi^{j,j'}_{i})$ $=$ $0$ for all neighbor pairs $b_j$, $b_{j'}$ and all $1 \leq i \leq \mu$. This proves the first part of the proposition.
To prove the last statement, it suffices to show that for each variable $x_k$, there is an exponent $e_k$ such that $x_k^{e_k}$ $\in$ $I'$. To this end, let $e'_k$ be the least $e$ such that $x_k^e$ $\notin$ $\mathbf{m}hcal{O}$, in which case $x_k^{e'_k}$ $=$ $b_{j_k}$ $\in$ $\partial \mathbf{m}hcal{O}$. If $b_{j_k}$ $\notin$ $\operatorname{LM}$, then $I'$ contains the polynomial $g_{j_k}$ $=$ $b_{j_k}$ (indeed, $I'$ contains every boundary monomial $b_j$ $\notin$ $\operatorname{LM}$). On the other hand, if $b_{j_k}$ $\in$ $\operatorname{LM}$, then $I'$ contains the polynomial $g_{j_k}$ $=$ $b_{j_k}- \sum_{i=1}^{\mu} c_{i j_k}t_i$, for which the coefficient $c_{i,j_k}$ $\neq$ $0$ $\Rightarrow$ $t_i$ $\in$ $\operatorname{TM}$. Multiplying this polynomial by $x_k$, and recalling that $t_i$ $\in$ $\operatorname{TM}$ $\Rightarrow$ $x_k \cdot t_i$ $\in$ $\partial \mathbf{m}hcal{O}$ $\setminus$ $\operatorname{LM}$, we see that $I'$ contains a polynomial of the form
\[
x_k \cdot b_{j_k} - (K\text{-linear combination of monomials in } \partial \mathbf{m}hcal{O}\setminus \operatorname{LM}),
\]
which implies that $x_k \cdot b_{j_k}$ $=$ $x_k^{e'_k + 1}$ $\in$ $I'$, thereby completing the proof. \qed
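(To illustrate the last step of the proof in the running example of Section \ref{ssec:RunningExample1}: for the leading monomial $x_1^2$ and the corresponding generator $g_1 = x_1^2 - c_{5,1}\, x_2 x_3 - c_{6,1}\, x_3^2$, one has
\[
x_1^3\ =\ x_1\, g_1 + c_{5,1}\, x_1 x_2 x_3 + c_{6,1}\, x_1 x_3^2,
\]
and the last two monomials are non-leading boundary monomials, hence lie in $I'$; whence, $x_1^3$ $\in$ $I'$.)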
\begin{rem} \label{rem:specialCRem}
A special case of this proposition appeared in \cite[Cor.\ 41, p.\ 313]{Huib:UConstr}.
\end{rem}
Evidently the $K$-points of $X_{\mathcal{S}}$ are the points $[I]$ $\in$ $\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}$ corresponding to the distinguished\ ideals $I$. The proposition then immediately yields the following
\begin{cor} \label{cor:DistIdealsSuppAtOnePt}
Every distinguished\ ideal is supported at a single point (the origin) of $\mathbf{m}hbb{A}^n_{K}$.
\qed
\end{cor}
We will call the locus $X_{\mathcal{S}}$ $\cong$ $\mathbf{m}hbb{A}^{\lambda \tau}_{K}$ the \textbf{distinguished\ locus} associated to $\mathbf{m}hcal{O}$, $\operatorname{LM}$, and $\operatorname{TM}$. The monomial ideal $I_0$ $=$ $(\partial \mathbf{m}hcal{O})$ corresponds to the origin of $X_{\mathcal{S}}$, that is, the point $[I_0]$ $\in$ $X_{\mathcal{S}}$ defined by setting all the distinguished\ indeterminates $C_{ij}$ to $0$.
\subsection{Efficient distinguished\ ideals} \label{ssec:EffSLIIdeals}
We say that a distinguished\ ideal $I$ is \textbf{efficient} provided that $I$ $=$ $(\mathbf{m}hcal{B})$ is generated by the subset $G$ $=$ $\{ g_{j_{\iota}} \}$ $\subseteq$ $\mathbf{m}hcal{B}$ (equation (\ref{eqn:fDef})). Since in any case $(G)$ $\subseteq$ $(\mathbf{m}hcal{B})$, we have that $(G)$ $=$ $(\mathbf{m}hcal{B})$ if and only if $(\mathbf{m}hcal{B})$ $\subseteq$ $(G)$, which is equivalent to $\partial \mathbf{m}hcal{O} \setminus \operatorname{LM}$ $\subseteq$ $(G)$. This property is easy to test computationally: Simply compute a Groebner basis for $(G)$ and reduce each non-leading boundary monomial $b_j$ modulo the Groebner basis; $I$ is efficient if and only if all the reductions are $0$.
\subsection{Example (continued)} \label{ssec:RunningExampleII}
Continuing with the Example of Section \ref{ssec:RunningExample1}, suppose that all of the coefficients $C_{ij}$ in (\ref{eqn:RunningExampleBorBases}) are set to $0$. Then
\[
\begin{array}{rcl}
G & = &\{ x_1^2,\ x_1\, x_2,\ x_1\, x_3,\ x_2^2 \}, \text{ and}
\\
\mathbf{m}hcal{B} & = & G \cup \{ x_1\, x_2\, x_3,\ x_1\, x_3^2,\ x_2^2\, x_3,\ x_2\, x_3^2,\ x_3^3 \}.
\end{array}
\]
Since, for example, $x_3^3$ $\notin$ $(G)$, we have that $(G)$ $\neq$ $(\mathbf{m}hcal{B})$, so $(\mathbf{m}hcal{B})$ is not efficient in this case. On the other hand, if we take
\[
\begin{array}{rcl}
G & = & \left\{x_1^2 + x_2\, x_3 + x_3^2,\ x_1\, x_2 + x_2\, x_3,\ x_1\, x_3 +x_2\, x_3 + x_3^2,\ x_2^2\right\}, \text{ and}
\\
\mathbf{m}hcal{B} & = & G \cup \{ x_1\, x_2\, x_3,\ x_1\, x_3^2,\ x_2^2\, x_3,\ x_2\, x_3^2,\ x_3^3 \},
\end{array}
\]
then we have that $(\mathbf{m}hcal{B})$ is efficient. To show this, we first compute a (lex) Groebner basis of $(G)$; the result is
\[
\left\{x_1^2 + x_2\, x_3 + x_3^2,\ x_1\, x_2 + x_2\, x_3,\ x_1\, x_3 +x_2\, x_3 + x_3^2,\ x_2^2,\
x_2\, x_3^2,\ x_3^3 \right\},
\]
which shows that $x_2\, x_3^2,\ x_3^3 \in (G)$. It follows that $x_3 \cdot \operatorname{TM}$ $\subseteq$ $(G)$; since in addition $x_3 \cdot G$ $\subseteq$ $(G)$, we obtain that $x_3 \cdot \operatorname{LM}$ $\subseteq$ $(G)$; whence, $\{x_1\,x_2\,x_3,\, x_1\,x_3^2,\, x_2^2\, x_3 \}$ $\subseteq$ $(G)$, so $(\mathbf{m}hcal{B})$ $\subseteq$ $(G)$, and we are done.
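This verification can also be automated. The following is a minimal computational sketch (it uses Python with SymPy rather than the author's \emph{Mathematica} utilities, and the variable names are ours): it computes a lex Groebner basis of $(G)$ and then tests membership of each non-leading boundary monomial.
\begin{verbatim}
from sympy import symbols, groebner

x1, x2, x3 = symbols('x1 x2 x3')

# The second (efficient) choice of G from this subsection.
G = [x1**2 + x2*x3 + x3**2,
     x1*x2 + x2*x3,
     x1*x3 + x2*x3 + x3**2,
     x2**2]

# Non-leading boundary monomials of the running example's
# order ideal {1, x1, x2, x3, x2*x3, x3**2}.
non_leading = [x1*x2*x3, x1*x3**2, x2**2*x3, x2*x3**2, x3**3]

gb = groebner(G, x1, x2, x3, order='lex')
# (B) = (G) is efficient iff every non-leading boundary
# monomial lies in the ideal (G).
print(all(gb.contains(m) for m in non_leading))   # expected: True
\end{verbatim}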
\begin{rem} \label{rem:RunningExampleNotGeneric}
We noted in the Introduction that a non-trivial elementary component can only exist for $\mu \geq 8$; therefore, none of the distinguished\ ideals associated to $\mathbf{m}hcal{O}$, $\operatorname{LM}$, and $\operatorname{TM}$ as in Section \ref{ssec:RunningExample1} can be generic. Indeed, for the efficient distinguished\ ideal $(\mathbf{m}hcal{B})$ just discussed, the tangent space dimension at $[(\mathbf{m}hcal{B})]$ is $18$ $=$ $3 \cdot 6$ $=$ $\dim \operatorname{Hilb}^{6}_{\mathbf{m}hbb{A}^3_{K}}$, so that $[(\mathbf{m}hcal{B})]$ is a smooth point on the irreducible variety $\operatorname{Hilb}^{6}_{\mathbf{m}hbb{A}^3_{K}}$.
\end{rem}
\subsection{A sufficient condition for efficiency} \label{ssec:effTesting}
By analogy with $\partial \operatorname{TM}$ (\ref{eqn:bdryT}), we let
\[
\partial \operatorname{LM} = \{x_k b_{j_{\iota}} \mid 1 \leq k \leq n,\ b_{j_{\iota}} \in \operatorname{LM} \},
\]
and we define
\begin{equation} \label{eqn:QMonDef}
Q =\ \partial \operatorname{LM}\, \cup\, \partial \operatorname{TM} .
\end{equation}
\begin{prop} \label{prop:effTestingProp}
Let $I$ $=$ $(\mathbf{m}hcal{B})$ be a distinguished\ ideal. Then $I$ is efficient (that is, $I$ $=$ $(G)$) if and only if the following conditions hold:
\begin{enumerate}
\item[(i)] Every non-leading boundary monomial $b_j$ ({\em i.e.}, $b_j$ $\in$ $\partial \mathbf{m}hcal{O} \setminus \operatorname{LM}$) is a multiple of at least one monomial in $Q$, and
\item[(ii)] $Q$ $\subseteq$ $(G)$.
\end{enumerate}
\end{prop}
\proof
\noindent
$\mathbf{m}hbf{\Rightarrow}$: If $I$ $=$ $(G)$ is efficient, then $(G)$ contains each non-leading boundary monomial $b_j$; that is, $b_j$ $=$ $\sum_{b_{j_{\iota}}\in \operatorname{LM}} f_{{j_{\iota}}}\cdot g_{j_{\iota}}$, where the coefficients $f_{{j_{\iota}}}$ $\in$ $K[\mathbf{m}hbf{x}]$. Recalling the form (\ref{eqn:fDef}) of the ideal generators $g_{j_{\iota}}$, we see that every monomial appearing on the right-hand side is a multiple of some $b_{j_{\iota}}$ $\in$ $\operatorname{LM}$ or of some $t_i$ $\in$ $\operatorname{TM}$; since $b_j$ is neither a leading monomial nor a basis monomial, it follows that $b_{j}$ is a multiple of a monomial in $\partial \operatorname{LM}$ or in $\partial \operatorname{TM}$, that is, of a monomial in $Q$; so (i) holds. Furthermore, $\partial \operatorname{TM}$ consists of non-leading boundary monomials, so $\partial \operatorname{TM}$ $\subseteq$ $(G)$. Recalling (\ref{eqn:fDef}) once more, we have $x_k\, b_{j_{\iota}}$ $=$ $x_k\, g_{j_{\iota}} + x_k\, N_{j_{\iota}}$, and both terms on the right lie in $(G)$ (the second because $x_k\, N_{j_{\iota}}$ $\in$ $\operatorname{Span}_{K}(\partial \operatorname{TM})$); whence, $\partial \operatorname{LM}$ $\subseteq$ $(G)$, so $Q$ $\subseteq$ $(G)$; that is, (ii) holds.
\noindent
$\mathbf{m}hbf{\Leftarrow}$: If conditions (i) and (ii) hold, we obtain at once that every non-leading boundary monomial is a member of $(G)$, so $I$ $=$ $(\mathbf{m}hcal{B})$ $=$ $(G)$ is efficient.
\qed
Given a distinguished\ ideal $I$, we can test it for efficiency by checking conditions (i) and (ii). Condition (i) is straightforward, if possibly tedious, to check; it just depends on the order ideal and the choice of sets $\operatorname{LM}$ and $\operatorname{TM}$, which determine $Q$. Condition (ii) can be tested as follows: One computes the $n \cdot \lambda$ products $x_k \cdot g_{j_{\iota}}$ ($g_{j_{\iota}}$ $\in$ $G$), and observes that the monomials appearing (non-trivially) in these products all lie in $Q$. Letting $E'$ be the set of indeterminates $\{ e_{\alpha,j_{\iota}} \mid 1\leq \alpha \leq n,\ b_{j_{\iota}} \in \operatorname{LM} \}$, we obtain a linear map
\begin{equation} \label{eqn:mapSigma}
\fn{\vartheta}{\operatorname{Span}_{K}(E') = K^{n \lambda}}{\operatorname{Span}_{K}(Q)}\ \text{given by } \ e_{k,j_{\iota}} \mapsto x_k \cdot g_{j_{\iota}},
\end{equation}
and condition (ii) holds if this map is surjective. Accordingly, we say that $I$ is \textbf{$\vartheta$-efficient} whenever (i) holds and $\vartheta$ is surjective.
\begin{rem} \label{rem:ThEffAndEff}
Examples \ref{ssec:(1,6,6,10)} and \ref{(1,6,10,10,5)CaseC} exhibit efficient distinguished\ ideals $I$ that are not $\vartheta$-efficient, so $\vartheta$-efficiency is sufficient but not necessary for efficiency. By contrast, the efficient ideal $(\mathbf{m}hcal{B})$ in Section \ref{ssec:RunningExampleII} is in fact $\vartheta$-efficient. In this example, the domain of the linear map $\vartheta$ has dimension $n\cdot \lambda = 3\cdot 4 = 12$ and the codomain has dimension $10$, since the set $Q$ consists of the $10$ monomials of degree $3$ in $x_1,\, x_2,\, x_3$. The elements of $G$ are then sufficiently general for $\vartheta$ to be surjective.
\end{rem}
Since the entries of the matrix of $\vartheta$ are the coefficients of the $g_{j_{\iota}}$, $\vartheta$-efficiency is an open condition on the distinguished\ locus $X_{\mathcal{S}}$. That is, we have
\begin{cor}
If $I$ $=$ $(\mathbf{m}hcal{B})$ $=$ $(G)$ is a $\vartheta$-efficient distinguished\ ideal, then there is an open set $[I]$ $\in$ $\mathbf{m}hcal{U}$ $\subseteq$ $X_{\mathcal{S}}$ such that $[I']$ $\in$ $\mathbf{m}hcal{U}$ $\Rightarrow$ $I'$ is $\vartheta$-efficient. \qed
\end{cor}
\section{The tangent space at a point $[I]$ on the Hilbert Scheme} \label{sec:tanSpace}
\subsection{Tangent vectors at a point $[I]$ $\in$ ${\rm Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}$} \label{ssec:tanSpGen}
Let $I$ $\subseteq$ $K[\mathbf{m}hbf{x}]$ $=$ $R$ be an ideal of finite colength $\mu$ $=$ $\dim_{K}(R/I)$, and $\mathbf{m}hcal{B}$ $=$ $\{ g_1,\, \dots,\, g_{\nu}\}$ a border basis of $I$ with respect to an order ideal $\mathbf{m}hcal{O}$ $=$ $\{ t_1,\, \dots,\, t_{\mu} \}$.
It is well-known that the tangent space $\EuScript{T}_{[I]}$ at the corresponding point $[I]$ $\in$ $\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}$ is isomorphic to $\Hom{R}{I}{R/I}$ (see, {\em e.g.}, \cite[Cor.\ 2.5, p.\ 13]{HartDefTh}).
Hence, a tangent vector $\fn{v}{I}{R/I}$ at $[I]$ can be viewed as the vertical arrow in the following commutative diagram in which the top row is exact, $\phi(e_j)$ $=$ $g_j$, $1 \leq j \leq \nu$, and $\operatorname{Syz}$ $=$ the first syzygy module of the polynomials $ g_j $.
\[
\begin{diagram}
{\rm Syz} &\rInto^{i} & \oplus_{j=1}^{\nu} R\, e_j &\rOnto^{\phi} & I &\rTo & 0\\
& & & \rdTo^{v'} & \dTo^{v} & {}\\
& & & & R/I
\end{diagram}
\]
It is therefore clear that a tangent vector $v$ corresponds to a choice of $\nu$ elements
\[
v'(e_j) \in R/I = \operatorname{Span}_{K}(\mathbf{m}hcal{O})
\]
such that for every tuple $(f_j)$ $\in$ ${\rm Syz}$ (and viewing the $v'(e_j)$ as elements of $R$), one has that
\begin{equation} \label{eqn:tanVecCond}
v'\left( \sum_{j=1}^{\nu}(f_j\, e_j)\right) = \sum_{j=1}^{\nu}f_j\, v'(e_j) \equiv 0 \mod{I};
\end{equation}
moreover, it suffices for this condition to hold for every $(f_j)$ in a set of $R$-generators of ${\rm Syz}$.
Writing $v'(e_j)$ $=$ $\sum_{i=1}^{\mu} a_{ij}\, t_i$, we see that the tangent vector $v$ can be encoded as a $(\mu \nu)$-tuple of elements of $K$, as follows:
\begin{equation} \label{eqn:TanVec}
v \leftrightarrow (a_{1,1},\, a_{2,1},\, \dots,\, a_{\mu,1},\, a_{1,2},\, a_{2,2}\, \dots,\, a_{\mu,2},\, a_{1,3},\, \dots,\, a_{\mu,\nu}).
\end{equation}
A moment's reflection shows that, given any $f$ $\in$ $R$ and $\sum_{i=1}^{\mu}c_{i}t_i$ $\in$ $\operatorname{Span}_{K}(\mathbf{m}hcal{O})$, the product $f \cdot \left( \sum_{i=1}^{\mu}c_{i}t_i \right) $ reduces modulo $I$ to $\sum_{i=1}^{\mu} \mathbf{m}hbf{b}^f_i t_i$, where each coefficient $\mathbf{m}hbf{b}^f_i$ is a (unique) $K$-linear combination of the coefficients $c_i$. Consequently, the sum $\sum_{j=1}^{\nu}f_j\, v'(e_j)$ in (\ref{eqn:tanVecCond}) reduces modulo $I$ to $\sum_{i=1}^{\mu}\mathbf{m}hbf{b}^{(f_j)}_{i}t_i$, where each of the coefficients $\mathbf{m}hbf{b}^{(f_j)}_i$ is a $K$-linear combination of the coefficients $a_{ij}$ that must vanish. In other words, every syzygy $(f_j)$ imposes $\mu$ linear relations on the entries of the tuple (\ref{eqn:TanVec}) that must hold if the tuple is to encode a tangent vector; we will call these the \textbf{tangent space relations} associated to $(f_j)$. As noted earlier, it suffices to check these conditions for each member of a set of $R$-generators of ${\rm Syz}$.
Recalling from Section \ref{ssec:AlgorithmForNbrSyz} that a $K$-basis $\mathbf{m}hcal{L}$ of the linear syzygies provides a set of $R$-generators of ${\rm Syz}$, one sees from the foregoing that $\EuScript{T}_{[I]}$ is isomorphic to the $K$-vector subspace of tuples $(a_{ij})$ $\in$ $K^{\mu \nu}$ that satisfy all of the tangent space relations corresponding to the members of $\mathbf{m}hcal{L}$. It is straightforward to compute these relations for specific examples via computer algebra; consequently, we can compute
\begin{equation} \label{eqn:tanSpDimComp}
\dim_{K}(\EuScript{T}_{[I]}) = \mu \nu - \dim_{K}(\operatorname{Span}_{K}\{ \mathbf{m}hbf{b}^{(f_j)}_{i} \mid 1\leq i\leq \mu,\ (f_j) \in \mathbf{m}hcal{L} \}) .
\end{equation}
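For instance, for the monomial ideal $I$ $=$ $(x_1^2,\, x_1 x_2,\, x_2^2)$ $\subseteq$ $K[x_1,x_2]$ of the toy example in Section \ref{ssec:BasDefs} (so $\mu = \nu = 3$, with basis monomials $t_1 = 1$, $t_2 = x_1$, $t_3 = x_2$), the two linear syzygies $x_2\, e_1 - x_1\, e_2$ and $x_2\, e_2 - x_1\, e_3$ impose the tangent space relations
\[
a_{1,1}\ =\ a_{1,2}\ =\ 0 \qquad \text{and} \qquad a_{1,2}\ =\ a_{1,3}\ =\ 0,
\]
respectively; whence, $\dim_{K}(\EuScript{T}_{[I]}) = 3 \cdot 3 - 3 = 6$, in agreement with the fact that $\operatorname{Hilb}^{3}_{\mathbb{A}^2_{K}}$ is smooth of dimension $2 \cdot 3 = 6$.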
\subsection{Tangent vectors as $K[\epsilon]$-points} \label{ssec:kEpPts}
We write $K[\epsilon]$ for the ring of dual numbers, that is, the $K$-algebra $K \oplus K\epsilon$ with $\epsilon^2$ $=$ $0$. Recall that a tangent vector $v$ at $[I]$ can be viewed as a map of schemes $\fn{\theta_v}{\operatorname{Spec}(K[\epsilon])}{\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}}$ such that the composition $\fnai{\operatorname{Spec}(K)}{\fnl{\theta_v}{\operatorname{Spec}(K[\epsilon])}{\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}}}$ is the inclusion of the $K$-point $[I]$. By the universal property of the Hilbert scheme, the map $\theta_v$ corresponds to a closed subscheme $Z_v$ $\subseteq$ $\operatorname{Spec}(R[\epsilon])$, where $R[\epsilon]$ $=$ $K[\epsilon][\mathbf{m}hbf{x}]$, such that $Z_v$ is finite and flat of degree $\mu$ over $\operatorname{Spec}(K[\epsilon])$ and the closed fiber is the closed subscheme $Z$ $\subseteq$ $\operatorname{Spec}(R)$ cut out by $I$. The connection between this view of $v$ and the preceding, in which $v$ $\in$ $\Hom{R}{I}{R/I}$, is made as follows (see, {\em e.g.}, \cite[Prop.\ 2.3, p.\ 12]{HartDefTh}): the ideal $I_v$ $\subseteq$ $R[\epsilon]$ defining $Z_v$ has the form
\begin{equation} \label{eqn:tanVecForm}
I_v = \{ f + \epsilon g \mid f \in I,\ g \in R, \text{ and } g \equiv v(f) \text{ mod } I \}.
\end{equation}
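Concretely, if $\sum_{i=1}^{\mu}a_{ij}\, t_i$ denotes the canonical representative of $v(g_j)$ as a $K$-linear combination of basis monomials, for each border basis element $g_j$, then one checks that
\[
I_v\ =\ \left( g_1 + \epsilon \sum_{i=1}^{\mu}a_{i1}\, t_i,\ \dots,\ g_{\nu} + \epsilon \sum_{i=1}^{\mu}a_{i\nu}\, t_i \right)\ \subseteq\ R[\epsilon];
\]
this is the form in which the images of tangent vectors will be written down in Section \ref{sec:derivMap}.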
\section{An irreducible locus containing $[I]$} \label{sec:idealFam}
Let $I$ be a distinguished\ ideal as in Section \ref{sec:SLI}, from which we retain all notation. We proceed to construct a map
\[
\fn{\EuScript{F}}{U}{ \operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}},
\]
where $U$ is an affine space and the image of $\EuScript{F}$ contains $[I]$ (indeed, $\EuScript{F}(U)$ contains the entire distinguished\ locus $X_{\mathcal{S}}$). In Section \ref{sec:derivMap} we compute the images of the standard unit tangent vectors at a $K$-point $p$ $\in$ $U$ under the derivative map $\fn{\EuScript{F}'_p}{\EuScript{T}_p}{\EuScript{T}_{[I_p]}}$, where $\EuScript{F}(p)$ $=$ $[I_{p}]$. This will enable us to obtain lower bounds for the dimension of the image $\EuScript{F}(U)$, as explained in Section \ref{ssec:F(U)DimEstLemma}.
Roughly speaking, $\EuScript{F}(U)$ is obtained by ``translating'' $X_{\mathcal{S}}$ around in $\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}$ under maps $\fna{\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}}{\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}}$ induced by a family of automorphisms of $\mathbf{m}hbb{A}^{n}_{K}$, including the usual translations.
\subsection{Automorphisms of affine space} \label{ssec:isoms}
Let $A$ be a commutative and unitary ring, let $R_A$ $=$ $A[x_1,\dots,x_n]$ $=$ $A[\mathbf{m}hbf{x}]$, and write $\mathbf{m}hbb{A}^n_{A}$ $=$ $\operatorname{Spec}(R_A)$. We define a family of automorphisms $\fna{\mathbf{m}hbb{A}^n_{A}}{\mathbf{m}hbb{A}^n_{A}}$ as follows: For $1$ $\leq$ $\alpha$ $\leq$ $n$, let $\Delta_{\alpha}$ be a finite set of monomials (including $1$) in the variables $\{ x_1, \dots, x_n \} \setminus \{ x_{\alpha} \}$ (specific choices of the sets $\Delta_{\alpha}$ are discussed in Sections \ref{ssec:ZDiscussion} and \ref{ssec:ZExtensionRem}). We index each of these sets in some way, writing $\Delta_{\alpha}$ $=$ $\{ m_{\alpha,\delta} \mid 1 \leq \delta \leq |\Delta_{\alpha}| \}$. Note that the monomial $1$ will always have index $1$; that is, $m_{\alpha,1}$ $=$ $1$ for all $\alpha$. For each choice of variable $x_{\alpha}$, monomial $m_{\alpha,\delta}$ $\in$ $\Delta_{\alpha}$, and scalar $z$ $=$ $z_{\alpha,\delta}$ $\in$ $A$, we obtain an automorphism
\[
\fn{T^*_{ z_{\alpha,\delta}}}{R_A}{R_A},\ x_{\alpha} \mapsto x_{\alpha} + z_{\alpha,\delta}\cdot m_{\alpha,\delta} ,\ x_{\beta} \mapsto x_{\beta},\ \beta \neq \alpha.
\]
The map $T^*_{z_{\alpha,\delta}}$ induces an automorphism $\fn{T_{ z_{\alpha,\delta}}}{\mathbf{m}hbb{A}^n_{A}}{\mathbf{m}hbb{A}^n_{A}}$. For $m_{\alpha,1}$ $=$ $1$, the map $T_{z_{\alpha,\delta}}$ is just a translation of $\mathbf{m}hbb{A}^n_{A}$ in the $x_{\alpha}$-direction.
The ``translation'' of a subscheme $W$ $\subseteq$ $\mathbf{m}hbb{A}^n_{A}$ under an automorphism $\fn{T}{\mathbf{m}hbb{A}^n_{A}}{\mathbf{m}hbb{A}^n_{A}}$ is its pullback, denoted $W_T$: if $W$ $=$ $\operatorname{Spec}(R_A/\mathbf{m}hcal{I})$, then $W_T$ $=$ $\operatorname{Spec}(R_A/T^*(\mathbf{m}hcal{I}))$. The following result is clear:
\begin{lem} \label{lem:isomLem}
If the quotient $R_A/\mathbf{m}hcal{I}$ is generated as an $A$-module by a finite set $J$ $=$ $\{ f_1, \dots, f_d \}$ $\subseteq$ $R_A$ (resp.\ is $A$-free with basis $J$), then $R_A/T^*(\mathbf{m}hcal{I})$ is generated as an $A$-module by the set $ T^*(J)$ $=$ $\{ T^*(f_1) , \dots, T^*(f_d) \}$ (resp. is $A$-free with basis $T^*(J)$). \qed
\end{lem}
We list the elements of $\cup_{\alpha=1}^n \Delta_{\alpha}$ in the tuple
\begin{equation} \label{eqn:Mdef}
(m_{\alpha,\delta}) = \left( m_{1,1},\, m_{1,2},\, \dots,\, m_{1,|\Delta_1|},\,m_{2,1},\, m_{2,2}\, \dots,\, m_{n,|\Delta_n|} \right),
\end{equation}
and let $\mathbf{m}hbf{z}$ $=$ $(z_{\alpha,\delta})$ be a tuple of scalars corresponding to the monomials in (\ref{eqn:Mdef}), in the order shown. By composing the automorphisms $T^*_{z_{\alpha,\delta}}$, we obtain the automorphism of rings
\begin{equation}\label{eqn:f*_zDef}
\fn{T^*_{\mathbf{m}hbf{z}} = T^*_{z_{1,1}} \circ T^*_{z_{1,2}} \circ \dots \circ T^*_{z_{n,|\Delta_n|-1}} \circ T^*_{z_{n,|\Delta_n|}}}{R_A}{R_A},
\end{equation}
which in turn induces the automorphism of schemes
\begin{equation} \label{f_zDef}
\fn{T_{\mathbf{m}hbf{z}}}{\mathbf{m}hbb{A}^n_{A}}{\mathbf{m}hbb{A}^n_{A}}.
\end{equation}
\subsection{Construction of the map $\EuScript{F}$} \label{ssec:Cconstr}
Let
\[
I = (\mathbf{m}hcal{B}) = (g_1, \dots, g_{\nu}) \subseteq K[x_1, \dots, x_n]
\]
be a distinguished\ ideal as in Section \ref{sec:SLI}. In particular, the elements of the border basis $\mathbf{m}hcal{B}$ can be written as
\begin{equation} \label{eqn:gensOfStartingIdealI}
g_j = \left\{ \begin{array}{l}
b_j, \text{ if } b_j \notin \operatorname{LM},
\\
b_j - \sum_{C_{ij} \in \mathcal{S}} c_{ij}t_i,\ c_{ij} \in K, \text{ if } b_j \in \operatorname{LM} \text{ ($\mathcal{S}$ as in (\ref{eqn:SpecialC}))}
\end{array} \right. .
\end{equation}
We introduce the set of variables
\begin{equation} \label{eqn:V2def}
\mathbf{m}hcal{Z} = \{ Z_{\alpha,\delta} \mid 1 \leq \alpha \leq n,\ 1 \leq \delta \leq |\Delta_{\alpha}| \}
\end{equation}
corresponding to the scalars $z_{\alpha,\delta}$ introduced in Section \ref{ssec:isoms}. Let $A$ denote the polynomial ring $K[\mathcal{S}, \mathbf{m}hcal{Z}]$, and let the ideal $\mathbf{m}hfrak{I}$ $\subseteq$ $A[\mathbf{m}hbf{x}]$ be generated by the $\mathbf{m}hcal{O}$-border prebasis
\begin{equation}\label{eqn:preBasisOfGs}
\mathbf{m}hcal{B}_{\mathbf{m}hfrak{I}} = \left\{ G_1, \dots, G_{\nu} \right\}, \ G_j = \left\{ \begin{array}{l}
b_j, \text{ if } b_j \notin \operatorname{LM},
\\
b_j - \sum_{C_{ij} \in \mathcal{S}} C_{ij}\, t_i,\ \text{ if } b_j \in \operatorname{LM}
\end{array} \right..
\end{equation}
\begin{lem} \label{lem:preBasisIsBasis}
The prebasis $\mathbf{m}hcal{B}_{\mathbf{m}hfrak{I}}$ is in fact an $\mathbf{m}hcal{O}$-border basis, and the quotient $A[\mathbf{m}hbf{x}]/\mathbf{m}hfrak{I}$ is $A$-free with basis $\mathbf{m}hcal{O}$.
\end{lem}
\proof
Arguing as in the proof of Proposition \ref{prop:specialCCor}, one sees that for all pairs of neighbors $b_j$ and $b_{j'}$ $\in$ $\partial \mathbf{m}hcal{O}$, the polynomial $S(G_j,G_{j'})$ of Section \ref{ssec:NbrSyz} reduces to $0$ modulo $\mathbf{m}hcal{B}_{\mathbf{m}hfrak{I}}$. Proposition \ref{prop:NbrSyzFacts} now implies that $\mathbf{m}hcal{B}_{\mathbf{m}hfrak{I}}$ is an $\mathbf{m}hcal{O}$-border basis, so $A[\mathbf{m}hbf{x}]/\mathbf{m}hfrak{I}$ is $A$-free with basis $\mathbf{m}hcal{O}$.
\qed
Lemmas \ref{lem:isomLem} and \ref{lem:preBasisIsBasis} yield that the quotient $A[\mathbf{m}hbf{x}]/T^*_{\mathbf{m}hbf{Z}}(\mathbf{m}hfrak{I})$ is $A$-free with basis $T^*_{\mathbf{m}hbf{Z}}(\mathbf{m}hcal{O})$; consequently, the induced map
\begin{equation} \label{eqn:univFamOverU}
\fna{\operatorname{Spec}(A[\mathbf{m}hbf{x}]/T^*_{\mathbf{m}hbf{Z}}(\mathbf{m}hfrak{I}))}{\operatorname{Spec}(A) = U}
\end{equation}
is finite of degree $\mu$ $=$ $|\mathbf{m}hcal{O}|$ and flat, and so, by the universal property of the Hilbert scheme, corresponds to a map $\fn{\EuScript{F}}{U}{ \operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}}$.
Let $p$ $=$ $(\mathbf{m}hbf{c},\mathbf{m}hbf{z})$ be a $K$-point of $U$ $=$ $\operatorname{Spec}(K[\mathcal{S},\mathbf{m}hcal{Z}])$, where $\mathbf{m}hbf{c}$ $=$ $(c_{ij})$ is a tuple of elements of $K$ indexed by the distinguished\ index-pairs $(i,j)$, and $\mathbf{m}hbf{z}$ $=$ $(z_{\alpha,\delta})$ is a tuple of elements of $K$ corresponding to the monomials $m_{\alpha,\delta}$ and ordered as in (\ref{eqn:Mdef}). Let $\fn{T^*_{\mathbf{m}hbf{z}}}{K[\mathbf{m}hbf{x}]}{K[\mathbf{m}hbf{x}]}$ be the corresponding map (\ref{eqn:f*_zDef}). A moment's reflection shows that the fiber of (\ref{eqn:univFamOverU}) over $p$ is the closed subscheme $\operatorname{Spec}(K[\mathbf{m}hbf{x}]/T^*_{\mathbf{m}hbf{z}}(I_{\mathbf{m}hbf{c}}))$, where $I_{\mathbf{m}hbf{c}}$ is the distinguished\ ideal
\[
I_{\mathbf{m}hbf{c}} = (g'_1, g'_2, \dots, g'_{\nu}),\ g'_j = \left\{ \begin{array}{l}
b_j, \text{ if } b_j \notin \operatorname{LM},
\\
b_j - \sum_{C_{ij} \in \mathcal{S}} c_{ij}\, t_i, \text{ if } b_j \in \operatorname{LM}
\end{array} \right. .
\]
In particular, it is clear that the fiber over the origin $(\mathbf{m}hbf{0},\mathbf{m}hbf{0})$ $\in$ $U$ is $\operatorname{Spec}(K[\mathbf{m}hbf{x}]/I_0)$, where $I_0$ is (as defined following Corollary \ref{cor:DistIdealsSuppAtOnePt}) the monomial ideal $(\partial \mathbf{m}hcal{O})$, and the fiber over the point $((c_{ij}),\mathbf{m}hbf{0})$ is $\operatorname{Spec}(K[\mathbf{m}hbf{x}]/I_{\mathbf{m}hbf{c}})$, so that $\EuScript{F}((\mathbf{m}hbf{0},\mathbf{m}hbf{0}))$ $=$ $[I_0]$ and $\EuScript{F}(((c_{ij}),\mathbf{m}hbf{0}))$ $=$ $[I_{\mathbf{m}hbf{c}}]$.
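For instance, in the special case in which every $\Delta_{\alpha}$ is just $\{1\}$, the map $T^*_{\mathbf{z}}$ is the substitution $x_{\alpha} \mapsto x_{\alpha} + z_{\alpha,1}$, $1 \leq \alpha \leq n$, and the fiber over the point $((c_{ij}),\mathbf{z})$ is the translate of $\operatorname{Spec}(K[\mathbf{x}]/I_{\mathbf{c}})$ supported at the point $(-z_{1,1},\, \dots,\, -z_{n,1})$ of $\mathbb{A}^n_{K}$.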
\begin{rem} \label{rem:supportIsPtRem}
Since the ideal $I_{\mathbf{m}hbf{c}}$ is supported at one point (the origin) of $\mathbf{m}hbb{A}^n_{K}$ by Corollary \ref{cor:DistIdealsSuppAtOnePt}, it follows that $T^*_{\mathbf{m}hbf{z}}(I_{\mathbf{m}hbf{c}})$ is also supported at one point of $\mathbf{m}hbb{A}^n_{K}$. Consequently, every point $[I']$ $\in$ $\EuScript{F}(U)$ corresponds to an ideal $I'$ $\subseteq$ $K[\mathbf{m}hbf{x}]$ that is supported at one point.
\end{rem}
\subsection{Finding lower bounds for $\dim(\EuScript{F}(U))$} \label{ssec:F(U)DimEstLemma}
Our method for bounding the dimension of $\EuScript{F}(U)$ from below is summarized by the following elementary
\begin{prop} \label{prop:F(U)DimLowerBd}
Let $p$ be a $K$-point of $U$. Suppose given a set of tangent vectors $\{ v_1, \dots, v_L \}$ $\subseteq$ $\EuScript{T}_p$ such that the image set
\[
\{ \EuScript{F}'_p(v_1),\dots, \EuScript{F}'_p(v_L) \} \subseteq \EuScript{T}_{[I_p]}
\]
is linearly independent. Then $L$ $\leq$ $\dim(\EuScript{F}(U))$.
\end{prop}
\proof
Let $\dim(U)$ $=$ $\dim_{K}(\EuScript{T}_p)$ $=$ $d$. The hypothesis implies that $\dim_{K}(\EuScript{F}'_p(\EuScript{T}_p))$ $\geq$ $L$; whence, $\dim(\ker(\EuScript{F}'_p))$ $\leq$ $d-L$. It follows that any component of the fiber $\EuScript{F}^{-1}([I_p])$ through $p$ has dimension $\leq$ $d-L$ (its tangent space at $p$ is contained in $\ker(\EuScript{F}'_p)$), so by the theorem on the dimension of the fibers of a morphism, $d-L$ $\geq$ $d - \dim(\EuScript{F}(U))$ $\Rightarrow$ $L$ $\leq$ $\dim(\EuScript{F}(U))$, as asserted.
\qed
Recall that an irreducible component of $\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}$ is called \textbf{elementary} if every point $[I']$ on it parameterizes a subscheme $\operatorname{Spec}(K[\mathbf{m}hbf{x}]/I')$ that is concentrated at one point. By Remark \ref{rem:supportIsPtRem}, this property holds for all $[I']$ $\in$ $\EuScript{F}(U)$. Also recall that if $[I']$ is a smooth point on an elementary component, then we call $I'$ a \textbf{generic} ideal.
Proposition \ref{prop:F(U)DimLowerBd} leads to the following simple criterion for identifying elementary components and generic ideals:
\begin{prop} \label{prop:genericCond}
Let $L$ be a lower bound for $\dim(\EuScript{F}(U))$ as in Proposition {\rm \ref{prop:F(U)DimLowerBd}}. If there is a point $[I]$ $\in$ $\EuScript{F}(U)$ such that
$\dim_{K}(\EuScript{T}_{[I]})$ $=$ $L$, then the closure $\overline{\EuScript{F}(U)}$ is an elementary component of $\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}$ of dimension $L$ on which $[I]$ is a smooth point; consequently, $I$ is a generic ideal.
\end{prop}
\proof
The hypothesis implies that $L$ is both a lower bound and an upper bound for $\dim(\EuScript{F}(U))$; whence, $\dim(\overline{\EuScript{F}(U)})$ $=$ $L$ and $[I]$ is a smooth point on $\overline{\EuScript{F}(U)}$. Then the unique irreducible component of $\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}$ containing $[I]$ must be $\overline{\EuScript{F}(U)}$, which is accordingly an elementary component, and we are done.
\qed
\section{The derivative map $\fn{\EuScript{F}'_p}{\EuScript{T}_{p}}{\EuScript{T}_{[I_p]}}$} \label{sec:derivMap}
Let $p$ $=$ $(\mathbf{m}hbf{c},\mathbf{m}hbf{z})$ $\in$ $U$. In this section we study the derivative map $\fn{\EuScript{F}'_p}{\EuScript{T}_p}{\EuScript{T}_{[I_p]}}$. Since $U$ $=$ $\operatorname{Spec}(K[\mathcal{S},\mathbf{m}hcal{Z}])$, a basis of $\EuScript{T}_p$ is given by unit vectors in the directions corresponding to the indeterminates $C_{ij}$ $\in$ $\mathcal{S}$ and $Z_{\alpha,\delta}$ $\in$ $\mathbf{m}hcal{Z}$. Let $X$ denote one of these variables, let $X'$ $\neq$ $X$ stand for any of the others, and let $p_X$, $p_{X'}$ $\in$ $K$ denote the corresponding components of $p$. Then a unit vector in the $X$-direction at $p$ is given by the map
\[
\fn{v_{p,X}}{\operatorname{Spec}(K[\epsilon])}{U}\ \text{defined by } X \mapsto p_X + \epsilon,\ X' \mapsto p_{X'}.
\]
The image $\EuScript{F}'_p(v_{p,X})$ $\in$ $\EuScript{T}_{[I_p]}$ is then the map
\[
\EuScript{F}'_p(v_{p,X}): \operatorname{Spec}(K[\epsilon]) \stackrel{v_{p,X}}{\rightarrow} U \stackrel{\EuScript{F}}{\rightarrow} \operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}},
\]
which corresponds to an ideal $I_{p,X}$ $\subseteq$ $K[\epsilon][\mathbf{m}hbf{x}]$ such that the quotient $K[\epsilon][\mathbf{m}hbf{x}]/I_{p,X}$ is $K[\epsilon]$-free of rank $\mu$. Recall that $\EuScript{F}$ is defined by the ideal
\[
T^*_{\mathbf{m}hbf{Z}}(\mathbf{m}hfrak{I}) = (\{T^*_{\mathbf{m}hbf{Z}}(G_j) \mid 1 \leq j \leq \nu \}) \subseteq K[\mathcal{S},\mathbf{m}hcal{Z}][\mathbf{m}hbf{x}],
\]
where the $G_j$ are defined in equation (\ref{eqn:preBasisOfGs}).
Thus, $I_{p,X}$ is the image of this ideal under the substitutions $X \mapsto p_X + \epsilon$, $X' \mapsto p_{X'}$.
We now restrict attention to the point
\begin{equation} \label{eqn:pRestriction}\
p = ((c_{ij}),\mathbf{m}hbf{0}),
\end{equation}
so that $I_p$ is distinguished\ with border basis $\mathbf{m}hcal{B}$ as shown in (\ref{eqn:gensOfStartingIdealI}).
We proceed to evaluate the tangent vectors $\EuScript{F}'_p(v_{p,X})$ $\in$ $\EuScript{T}_{[I_p]}$ for each of the cases $X$ $=$ $C_{ij}$ $\in$ $\mathcal{S}$ and $X$ $=$ $Z_{\alpha,\delta}$ $\in$ $\mathbf{m}hcal{Z}$.
\subsection{The tangent vectors $\EuScript{F}'_p(v_{p,C_{ij}})$ for $C_{ij}$ $\in$ $\mathcal{S}$} \label{ssec:TVFam1}
If $X$ $=$ $C_{ij}$ $\in$ $\mathcal{S}$, and $p$ is as in (\ref{eqn:pRestriction}), one sees easily that for all $j' \neq j$, $1$ $\leq$ $j'$ $\leq$ $\nu$, the image of $T^*_{\mathbf{m}hbf{Z}}(G_{j'})$ under $C_{ij}$ $\mapsto$ $p_{C_{ij}} + \epsilon$, $X' \mapsto p_{X'}$, is $g_{j'}$, and the image of $T^*_{\mathbf{m}hbf{Z}}(G_{j})$ is $g_{j} - \epsilon\, t_{i}$, where $g_j$ and $g_{j'}$ are as in (\ref{eqn:gensOfStartingIdealI}). According to (\ref{eqn:tanVecForm}), the tangent vector $\EuScript{F}'_p(v_{p,C_{ij}})$ $=$ $v_{p,ij}$ corresponds to the element of $\Hom{R}{I_p}{R/I_p}$ given by $g_{j}$ $\mapsto$ $(-t_{i}\text{ mod }I_p) = -t_{i}$, and $g_{j'}$ $\mapsto$ $0$ for $j'$ $\neq$ $j$. The corresponding tuple $(a_{i'j'})$ (equation (\ref{eqn:TanVec})) has all components equal to $0$ except for $a_{ij}$ $=$ $-1$. The following lemma is immediate:
\begin{lem} \label{lem:YtanVecsLinInd}
Let $p$ be a point as in equation {\rm (\ref{eqn:pRestriction})}. Then the family of tangent vectors
\[
\EuScript{S}_p = \{ v_{p,ij} \mid C_{ij} \in \mathcal{S} \} \subseteq \EuScript{T}_{[I_p]}
\]
is $K$-linearly independent and of cardinality
\[
|\EuScript{S}_p| = |\operatorname{LM}| \cdot |\operatorname{TM}| = \lambda \cdot \tau.\
\] \qed
\end{lem}
\begin{rem} \label{rem:spineRem}
It is clear that $\EuScript{S}_p$ is a basis of the tangent space to the distinguished\ locus $X_\mathcal{S}$ $\subseteq$ $\mathbf{m}hbb{B}_{\mathbf{m}hcal{O}}$ at the point $[I_p]$.
\end{rem}
\subsection{The tangent vectors $\EuScript{F}'_p(v_{p, Z_{\alpha,\delta}})$ for $Z_{\alpha,\delta}$ $\in$ $\mathbf{m}hcal{Z}$} \label{ssec:TVFam2}
Now consider the case $X$ $=$ $Z_{\alpha,\delta}$ for some $1$ $\leq$ $\alpha$ $\leq$ $n$, $m_{\alpha,\delta}$ $\in$ $\Delta_{\alpha}$ (recall that the latter is a finite set of monomials not involving $x_{\alpha}$). Recalling that $p$ is a point as in equation {\rm (\ref{eqn:pRestriction})}, one sees that the ideal $I_{p,X}$ is obtained by applying the substitutions $X$ $\mapsto$ $p_X + \epsilon$ $=$ $\epsilon$, $X'$ $\mapsto$ $p_{X'}$, to the polynomials $T^*_{\mathbf{m}hbf{Z}}(G_j)$. Under these substitutions, which amount to replacing $x_{\alpha}$ by $x_{\alpha} + \epsilon \cdot m_{\alpha, \delta}$ in the polynomials $g_j$, one has that
\[
T^*_{\mathbf{m}hbf{Z}}(G_j) \mapsto g_j + \epsilon \cdot \frac{\partial g_j}{\partial x_{\alpha}} \cdot m_{\alpha,\delta},\ 1 \leq j \leq \nu.
\]
Hence, (\ref{eqn:tanVecForm}) yields that the tangent vector $\EuScript{F}'_p(v_{p, Z_{\alpha,\delta}})$ $=$ $v_{p,\alpha, \delta}$ corresponds to the element of $\Hom{R}{I_p}{R/I_p}$ given by
\begin{equation} \label{eqn:ZTanVecHomo}
g_j\ \mapsto \frac{\partial g_j}{\partial x_{\alpha}} \cdot m_{\alpha,\delta}\ \equiv \ \sum_{i=1}^{\mu}a_{ij}t_i\ (\text{mod } I_p),\ \ 1 \leq j \leq \nu.
\end{equation}
For a point $p$ as in (\ref{eqn:pRestriction}), we let
\begin{equation} \label{eqn:tanVecSetZ}
\EuScript{Z}_p = \{ v_{p,\alpha,\delta} \mid 1 \leq
\alpha \leq n,\ 1 \leq \delta \leq |\Delta_{\alpha}| \} \subseteq \EuScript{T}_{[I_p]}.
\end{equation}
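To illustrate (\ref{eqn:ZTanVecHomo}): in the running example of Section \ref{ssec:RunningExample1}, take $p$ with all coefficients $c_{ij}$ equal to $0$, so that $I_p$ $=$ $I_0$ is the monomial ideal generated by the boundary monomials, and take the translation direction $\alpha = 1$, $m_{1,1} = 1$. Then the tangent vector $v_{p,1,1}$ sends
\[
g_1 = x_1^2 \mapsto 2x_1,\quad g_2 = x_1 x_2 \mapsto x_2,\quad g_3 = x_1 x_3 \mapsto x_3,\quad
g_5 = x_1 x_2 x_3 \mapsto x_2 x_3,\quad g_6 = x_1 x_3^2 \mapsto x_3^2,
\]
and all remaining $g_j$ to $0$; no reduction modulo $I_0$ is needed, since each partial derivative $\partial g_j/\partial x_1$ is already a $K$-linear combination of basis monomials.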
\section{Lex-segment complement order ideals} \label{sec:spCaseRestr}
In this section we discuss the order ideals and associated distinguished\ ideals that are used in all of our examples in Section \ref{sec:examples}.
\subsection{Definition} \label{ssec:lexSegComps}
From this point on, a monomial inequality (such as $m_1$ $>$ $m_2$) shall be with respect to the lexicographic order with $x_1$ $>$ $x_2$ $>$ \dots $>$ $x_n$. Let $\EuScript{L}$ be a (proper) lex-segment ideal of finite colength in $R$ $=$ $K[\mathbf{m}hbf{x}]$ (see, {\em e.g.}, \cite[Sec.\ 5.5.B, p.\ 258]{KreutzerAndRobbianoVolTwo}), and let $\mathbf{m}hcal{O}$ be the set of monomials that are not in $\EuScript{L}$; we call $\mathbf{m}hcal{O}$ a \textbf{lex-segment complement} order ideal. Writing $R_d$ $\subseteq$ $R$,
$\EuScript{L}_d$ $\subseteq$ $\EuScript{L}$, and $\mathbf{m}hcal{O}_d$ $\subseteq$ $\mathbf{m}hcal{O}$ for the subsets of monomials of degree $d$, we let $m_d$ denote the lex-minimum element of $\EuScript{L}_d$ when this set is non-empty, and $s$ $\geq$ $r$ $>$ $0$ the integers such that
\[
\mathbf{m}hcal{O}_d \ =\ \left\{
\begin{array}{l}
R_d, \text{ if } 0 \leq d < r;
\\
\{m \in R_d \mid m < m_d \} \neq \emptyset, \text{ if } r\leq d \leq s;
\\
\emptyset, \text{ if } d > s.
\end{array}
\right.
\]
Note that for $r$ $\leq$ $d$ $<$ $s$ one has that $m_{d+1}$ $\leq$ $x_n \cdot m_d$. Here is a simple example in $3$ variables with Hilbert function $(1,3,2,1)$; the basis monomials are underlined and the boundary monomials are shown in boldface. In this case, $r=2$, $s=3$, $m_2$ $=$ $x_2^2$, and $m_3$ $=$ $x_2 x_3^2$:
\[
\begin{array}{c}
\underline{1}
\\
\underline{x_1}\ \ \underline{x_2}\ \ \underline{x_3}
\\
\mathbf{m}hbf{x_1^2}\ \ \mathbf{m}hbf{x_1 x_2} \ \ \mathbf{m}hbf{x_1 x_3}\ \ \mathbf{m}hbf{x_2^2}\ \ \underline{x_2 x_3}\ \ \underline{x_3^2}
\\
x_1^3\ \ x_1^2 x_2\ \ x_1^2 x_3\ \ x_1 x_2^2\ \ \mathbf{m}hbf{x_1 x_2 x_3}\ \ \mathbf{m}hbf{x_1 x_3^2}\ \ x_2^3\ \ \mathbf{m}hbf{x_2^2 x_3}\ \ \mathbf{m}hbf{x_2 x_3^2}\ \ \underline{x_3^3}
\\
x_1^4\ \ x_1^3 x_2\ \ \ \ \ \ \ \ \dots\ \ \ \ \ \ \ \ x_1x_2 x_3^2\ \ \mathbf{m}hbf{x_1 x_3^3}\ \ x_2^4\ \ \ \ \ \ \dots\ \ \ \ \ \ \ x_2^2 x_3^2 \ \mathbf{m}hbf{x_2 x_3^3} \ \mathbf{m}hbf{x_3^4}
\end{array}
\]
\begin{lem} \label{lem:lexSegMonomLem}
If $\mathbf{m}hcal{O}$ is a lex-segment complement order ideal, $m$ is a monomial $\notin \mathbf{m}hcal{O}$, and $m'$ is a monomial such that $d$ $=$ $\deg(m')$ $\geq$ $\deg(m)$ and $m'$ $>$ $m$, then $m'$ $\notin$ $\mathbf{m}hcal{O}$.
\end{lem}
\proof
Let $\deg(m')-\deg(m)$ $=$ $u \geq 0$. If $d > s$, then $\mathbf{m}hcal{O}$ contains no monomials of degree $d$ and we are done, so assume $d \leq s$. Then $m''$ $=$ $m \cdot x_n^u$ is the lex-minimum monomial of degree $d$ that is $\geq m$; whence, $m'$ $\geq$ $m''$. Moreover, since $m$ $\notin$ $\mathbf{m}hcal{O}$, we have $m \geq m_{\deg(m)}$, and repeated use of the inequality $m_{e+1} \leq x_n\, m_e$ noted above yields $m''$ $=$ $m\, x_n^u$ $\geq$ $m_d$; whence, $m''$ $\notin$ $\mathbf{m}hcal{O}$, which in turn yields $m'$ $\notin$ $\mathbf{m}hcal{O}$, as desired.
\qed
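For instance, in the Hilbert-function-$(1,3,2,1)$ example displayed above, taking $m$ $=$ $x_2^2$ $\notin$ $\mathcal{O}$ and $m'$ $=$ $x_1 x_3$ gives
\[
\deg(m') = \deg(m) = 2,\qquad x_1 x_3 > x_2^2\ \ \Longrightarrow\ \ x_1 x_3 \notin \mathcal{O},
\]
in agreement with the boldface boundary monomials of the display.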
\subsection{The sets of monomials $\Delta'_{\alpha}$} \label{ssec:ZDiscussion}
Recall that in order to define the map $\fn{\EuScript{F}}{U}{\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}}$, we must choose finite sets of monomials $\Delta_{\alpha}$ as in Section \ref{ssec:isoms}. Here we describe the particular sets $\Delta'_{\alpha}$ that we most often use when $\mathbf{m}hcal{O}$ is a lex-segment complement.
We claim that for every variable $x_{\alpha}$, there is a smallest exponent $e'_{\alpha}$ $\geq$ $0$ of $x_n$ such that $x_{\alpha}\cdot x_n^{e'_{\alpha}}$ $\notin$ $\mathbf{m}hcal{O}$ but $x_n^{e'_{\alpha}}$ $\in$ $\mathbf{m}hcal{O}$. When $\alpha$ $=$ $n$, it is clear that $e'_n$ is the largest exponent $e$ such that $x_n^e$ $\in$ $\mathbf{m}hcal{O}$. We then observe that for any $\alpha$ $\neq$ $n$, $x_{\alpha}\cdot x_n^{e'_n}$ $>$ $x_n\cdot x_n^{e'_n}$ $\Rightarrow$ $x_{\alpha} \cdot x_n^{e'_n}$ $\notin$ $\mathbf{m}hcal{O}$ by Lemma \ref{lem:lexSegMonomLem}, but $x_n^{e'_n}$ $\in$ $\mathbf{m}hcal{O}$; from this it follows that $e'_{\alpha}$ exists and is $\leq$ $e'_n$. We define
\begin{equation} \label{eqn:spCaseBAlphaDefn}
b_{j_{\alpha}} = x_{\alpha} \cdot x_n^{e'_{\alpha}} \in \partial \mathbf{m}hcal{O},\ \ t_{i_{\alpha}} = x_n^{e'_{\alpha}} \in \mathbf{m}hcal{O}.
\end{equation}
Note that $b_{j_{\alpha}}$ can be characterized as the lex-minimum boundary monomial that is divisible by $x_{\alpha}$.
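By way of illustration, for the Hilbert-function-$(1,3,2,1)$ order ideal displayed in Section \ref{ssec:lexSegComps} (so $n = 3$), these definitions give
\[
\begin{array}{cccc}
x_{\alpha} & e'_{\alpha} & b_{j_{\alpha}} & t_{i_{\alpha}}
\\
x_1 & 1 & x_1 x_3 & x_3
\\
x_2 & 2 & x_2 x_3^2 & x_3^2
\\
x_3 & 3 & x_3^4 & x_3^3
\end{array}
\]
since, for example, $x_2 x_3$ $\in$ $\mathcal{O}$ while $x_2 x_3^2$ $\notin$ $\mathcal{O}$.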
We now choose the monomial sets $\Delta'_{\alpha}$, $1$ $\leq$ $\alpha$ $\leq$ $n$, as follows:
\begin{equation} \label{eqn:DeltaDelta'RestrDef}
\begin{array}{c}
\Delta'_n = \{ m_{n,1}\} = \{1\}, \text{ and, for } 1 \leq \alpha \leq n-1,
\\
\Delta'_{\alpha} = \left\{ m_{\alpha,\delta} \in K[x_{\alpha+1},\dots,x_n]
\left| \begin{array}{l}
t_{i_{\alpha}} m_{\alpha,\delta} \in \mathbf{m}hcal{O} \setminus \operatorname{TM}, \text{ if } b_{j_{\alpha}} \in \operatorname{LM},
\\
t_{i_{\alpha}} m_{\alpha,\delta} \in \mathbf{m}hcal{O}, \text{ if } b_{j_{\alpha}} \notin \operatorname{LM}
\end{array}
\right.
\right\}.
\end{array}
\end{equation}
One checks easily that $1$ $=$ $m_{\alpha,1}$ $\in$ $\Delta'_{\alpha}$ for all $\alpha$. Note that we have replaced the condition $\Delta'_{\alpha}$ $\subseteq$ $K[x_1, \dots, \widehat{x_{\alpha}}, \dots, x_n]$ of Section \ref{ssec:isoms} with the seemingly stricter $\Delta'_{\alpha} \subseteq K[x_{\alpha+1}, \dots, x_n]$, since if $m_{\alpha,\delta}$ is divisible by one of $x_1$, \dots, $x_{\alpha-1}$, then, by Lemma \ref{lem:lexSegMonomLem},
\[
t_{i_{\alpha}} \cdot m_{\alpha,\delta} > t_{i_{\alpha}}\cdot x_{\alpha} = b_{j_{\alpha}} \notin \mathcal{O}\ \Rightarrow\ t_{i_{\alpha}} \cdot m_{\alpha,\delta} \notin \mathcal{O}.
\]
As in (\ref{eqn:V2def}), we write
\[
\mathbf{m}hcal{Z}' = \{ Z_{\alpha,\delta} \mid 1 \leq \alpha \leq n,\ 1 \leq \delta \leq |\Delta'_{\alpha}| \},
\]
and for $p$ as in (\ref{eqn:pRestriction}), we denote the set of tangent vectors (\ref{eqn:tanVecSetZ}) associated to $\mathbf{m}hcal{Z}'$ by
\[
\EuScript{Z}'_p\ =\ \{ v_{p,\alpha,\delta} \mid 1 \leq \alpha \leq n,\ m_{\alpha,\delta} \in \Delta'_{\alpha} \}\ \subseteq\ \EuScript{T}_{[I_p]}.
\]
\subsection{Ideals $I_p$ for which the set $\EuScript{S}_p$ $\cup$ $\EuScript{Z}'_p$ $\subseteq$ $\EuScript{T}_{[I_p]}$ is linearly independent}
\label{ssec:ZMainResult}
The key technical result enabling us to find elementary components of $\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}$ is the following
\begin{prop} \label{prop:GeneralZProp}
Let $\mathcal{O}$ $\neq$ $\{ 1 \}$ be a lex-segment complement order ideal, and let $p$ be as in {\rm (\ref{eqn:pRestriction})}, so that $I_p$ is distinguished\ with $\mathcal{O}$-border basis $\mathcal{B}$ $=$ $\{ g_j \mid 1 \leq j \leq \nu \}$ as in {\rm (\ref{eqn:gensOfStartingIdealI})}. Suppose that $\mathcal{B}$ has the property that the boundary monomial $b_{j_{\alpha}}$ {\rm (\ref{eqn:spCaseBAlphaDefn})} is the lex-leading monomial in $g_{j_{\alpha}}$ for all $1$ $\leq$ $\alpha$ $\leq$ $n$. Then the set $\EuScript{S}_p \cup \EuScript{Z'}_p$ $\subseteq$ $\EuScript{T}_{[I_p]}$ is $K$-linearly independent.
\end{prop}
\proof
Suppose we have a relation
\begin{equation} \label{eqn:Z2LinComb}
\sum_{v_{p,ij} \in \EuScript{S}_p} d_{ij}\,v_{p,ij} + \sum_{v_{p,\alpha,\delta} \in \EuScript{Z'}_p} d_{\alpha,\delta}\, v_{p,\alpha,\delta}\ =\ 0,\ \ \ d_{ij},\ d_{\alpha,\delta} \in K .
\end{equation}
We must show that all the coefficients in the linear combination vanish.
By (\ref{eqn:ZTanVecHomo}), we have that the homomorphism $\fna{I_p}{R/I_p}$ corresponding to $v_{p,\alpha,\delta}$ $\in$ $\EuScript{Z'}_p$ is given by
\[
g_j\ \mapsto \frac{\partial g_j}{\partial x_{\alpha}} \cdot m_{\alpha,\delta}\ (\text{mod } I_p),\ \ 1 \leq j \leq \nu.
\]
For each $1 \leq \alpha \leq n$ and each $m_{\alpha,\delta} \in \Delta'_{\alpha}$, let
\[
t_{i_{\alpha,\delta}} = t_{i_{\alpha}} \cdot m_{\alpha,\delta} \in \mathbf{m}hcal{O}.
\]
Our first goal is to show that the $(i_{\alpha,\delta},j_{\alpha})$-component of $v_{p,\alpha,\delta}$ is non-zero for all $1 \leq \alpha \leq n$ and all $1 \leq \delta \leq |\Delta'_{\alpha}|$. This component is the coefficient of $t_{i_{\alpha,\delta}}$ in $\frac{\partial g_{j_{\alpha}}}{\partial x_{\alpha}} \cdot m_{\alpha,\delta}$ mod $I_p$. Consider first the case in which $b_{j_{\alpha}}$ $\notin$ $\operatorname{LM}$, so that $g_{j_{\alpha}}$ $=$ $b_{j_{\alpha}}$ $=$ $x_{\alpha}x_n^{e'_{\alpha}}$. In this case,
\[
\frac{\partial g_{j_{\alpha}}}{\partial x_{\alpha}} = \frac{\partial b_{j_{\alpha}}}{\partial x_{\alpha}} = \left\{
\begin{array}{l}
x_n^{e'_{\alpha}} = t_{i_{\alpha}},\ \text{if } 1 \leq \alpha < n,
\\
(e'_{n}+1) x_n^{e'_n} = (e'_n+1) \cdot t_{i_{\alpha}},\ \text{if } \alpha = n.
\end{array} \right.
\]
It follows that
\[
\frac{\partial g_{j_{\alpha}}}{\partial x_{\alpha}} \cdot m_{\alpha, \delta} = (1 \text{ or } e'_{\alpha}+1) \cdot t_{i_{\alpha,\delta}} \equiv (1 \text{ or } e'_{\alpha}+1) \cdot t_{i_{\alpha,\delta}} \mod I_p,
\]
so in either case the $(i_{\alpha,\delta},j_{\alpha})$-component of $v_{p,\alpha,\delta}$ (either $1$ or $e'_{\alpha}+1$) is non-zero, since $\operatorname{char}(K)$ $=$ $0$.
Next consider the case in which $b_{j_{\alpha}}$ $\in$ $\operatorname{LM}$, so that $g_{j_{\alpha}}$ $=$ $b_{j_{\alpha}}-N_{j_{\alpha}}$, where $N_{j_{\alpha}}$ $\in$ $\operatorname{Span}_{K}(\operatorname{TM})$. As before, $\frac{\partial b_{j_{\alpha}}}{\partial x_{\alpha}}$ $=$ $(1 \text{ or } e'_{n}+1) \cdot t_{i_{\alpha}}$, which contributes a non-zero multiple of $t_{i_{\alpha,\delta}}$ to $(\frac{\partial g_{j_{\alpha}}}{\partial x_{\alpha}} \cdot m_{\alpha,\delta}$ mod $I_p)$. In fact, this is the only (non-zero) contribution to the $(i,j_{\alpha})$-component of $v_{p,\alpha,\delta}$ for any $i$, because we have the following claim:
\begin{equation}\label{txt:keyClaim}
\begin{array}{l}
\text{\emph{If $b_{j_{\alpha}}$ $\in$ $\operatorname{LM}$ and $m$ is a monomial appearing non-trivially}}\\
\text{\emph{in $N_{j_{\alpha}}$, then $m$ is not divisible by $x_{\alpha}$.}}
\end{array}
\end{equation}
To prove the claim, suppose that $x_{\alpha}$ divides $m$. By hypothesis, we have that $b_{j_{\alpha}}$ $=$ $x_{\alpha} x_n^{e'_{\alpha}}$ $>$ $m$, which implies that $m$ $=$ $x_{\alpha}\cdot x_n^e$, with $e'_{\alpha}$ $>$ $e$. But $m$ $\in$ $\operatorname{TM}$, which implies that $m \cdot x_{n}$ $\in$ $\partial \mathbf{m}hcal{O}$ $\setminus$ $\operatorname{LM}$. Consequently, we either have that $b_{j_{\alpha}}$ $=$ $m\cdot x_n$, which contradicts $b_{j_{\alpha}}$ $\in$ $\operatorname{LM}$, or $b_{j_{\alpha}}$ $=$ $x_n^w\cdot (m\cdot x_n)$ with $w \geq 1$, which contradicts that the leading monomial $b_{j_{\alpha}}$ is a minimal boundary monomial. We conclude that $m$ cannot be divisible by $x_{\alpha}$, as claimed.
So far we have established that the $(i_{\alpha,\delta},j_{\alpha})$-component of the tangent vector $v_{p,\alpha,\delta}$ is non-zero for all $\alpha$ and all $m_{\alpha,\delta}$ $\in$ $\Delta'_{\alpha}$.
We now show by descending induction on $\alpha$ that the coefficients $d_{\alpha,\delta}$ in (\ref{eqn:Z2LinComb}) are all equal to $0$.
We begin with the case $\alpha$ $=$ $n$ (recall that $\Delta'_n=\{ 1\}$). We claim that for all tuples $(\beta,\delta')$ $\neq$ $(n,1)$, which implies that $\beta$ $<$ $n$, the $(i_{n,1},j_n)$-component of $v_{p,\beta,\delta'}$ is $0$. This component is the coefficient of
\[
t_{i_{n,1}} = t_{i_n}\cdot 1 = t_{i_n} = x_n^{e'_n} \text{ in }
\left( \frac{\partial g_{j_n}}{\partial x_{\beta}} \cdot m_{\beta,\delta'} \mod I_p \right).
\]
In case $b_{j_n}$ $\notin$ $\operatorname{LM}$, we have $g_{j_n}$ $=$ $b_{j_n}$ $=$ $x_n^{e'_n+1}$, so $\frac{\partial g_{j_n}}{\partial x_{\beta}}$ $=$ $0$. In case $b_{j_n}$ $\in$ $\operatorname{LM}$, any monomial $m$ appearing non-trivially in $N_{j_{n}}$ must satisfy $x_n^{e'_n+1}$ $>$ $m$ and, by (\ref{txt:keyClaim}), must not be divisible by $x_n$; together these force $m = 1$ $\in$ $\operatorname{TM}$, so that $x_{\alpha} \cdot 1$ $\in$ $\partial \mathcal{O}$ for all $\alpha$ and we are in the excluded case $\mathcal{O}$ $=$ $\{ 1 \}$. This contradiction shows that $N_{j_n}$ $=$ $0$, so once again $g_{j_n}$ $=$ $b_{j_n}$ $\Rightarrow$ $\frac{\partial g_{j_n}}{\partial x_{\beta}}$ $=$ $0$. We conclude that the $(i_{n,1},j_n)$-component of $v_{p,\beta,\delta'}$ is $0$ for all $\beta < n$ and $1 \leq \delta' \leq |\Delta'_{\beta}|$, as claimed.
We next note:
\begin{equation}\label{txt:nonSpecialIndices}
\text{\emph{None of the index pairs $(i_{\alpha,\delta},j_{\alpha})$ are distinguished.}}
\end{equation}
To see this, recall from (\ref{eqn:SpecialC}) that the index pair $(i,j)$ is distinguished\ if and only if $b_j$ $\in$ $\operatorname{LM}$ and $t_i$ $\in$ $\operatorname{TM}$. However, by the definition (\ref{eqn:DeltaDelta'RestrDef}) of our sets $\Delta'_{\alpha}$, we have that $t_{i_{\alpha,\delta}}$ $\notin$ $\operatorname{TM}$ whenever $b_{j_{\alpha}}$ $\in$ $\operatorname{LM}$.
The foregoing implies that in the sum (\ref{eqn:Z2LinComb}), the only tangent vector having non-zero $(i_{n,1},j_n)$-component is $v_{p,n,1}$; whence, the coefficient $d_{n,1}$ $=$ $0$. Since $v_{p,n,1}$ is the only tangent vector in $\EuScript{Z}'$ associated to $\alpha$ $=$ $n$, we have shown that all the coefficients $d_{n,\delta}$ are $0$; this completes the base case of the induction.
For the induction step, we suppose that for some $1 \leq \alpha < n$, the coefficients $d_{\beta,\delta'}$ $=$ $0$ for all $\beta$ $>$ $\alpha$ and $1 \leq \delta' \leq |\Delta'_{\beta}|$, and let $1 \leq \delta \leq |\Delta'_{\alpha}|$. We claim that the $(i_{\alpha,\delta},j_{\alpha})$-component of $v_{p,\beta,\delta'}$ is $0$ for all $\beta$ $\leq$ $\alpha$ and $(\beta,\delta')$ $\neq$ $(\alpha,\delta)$. This component is the coefficient of $t_{i_{\alpha,\delta}}$ in $\left( \frac{\partial g_{j_{\alpha}}}{\partial x_{\beta}} \cdot m_{\beta,\delta'} \mod I_p\right)$. In case $b_{j_{\alpha}}$ $\notin$ $\operatorname{LM}$, we have $g_{j_{\alpha}}$ $=$ $b_{j_{\alpha}}$ $=$ $x_{\alpha} x_n^{e'_{\alpha}}$; otherwise, $b_{j_{\alpha}}$ $\in$ $\operatorname{LM}$ and $g_{j_{\alpha}}$ $=$ $b_{j_{\alpha}}- N_{j_{\alpha}}$, and by hypothesis $b_{j_{\alpha}}$ $>$ any monomial $m$ appearing non-trivially in $N_{j_{\alpha}}$. From this it follows that for all $\beta$ $<$ $\alpha$, $\frac{\partial g_{j_{\alpha}}}{\partial x_{\beta}}$ $=$ $0$. This shows that the $(i_{\alpha,\delta},j_{\alpha})$-coefficient of $v_{p,\beta,\delta'}$ is $0$ for all $\beta$ $<$ $\alpha$.
We now consider the case $\beta$ $=$ $\alpha$ and $\delta'$ $\neq$ $\delta$. We must compute the coefficient of $t_{i_{\alpha,\delta}}$ in $\left( \frac{\partial g_{j_{\alpha}}}{\partial x_{\alpha}} \cdot m_{\alpha,\delta'} \mod I_p \right)$. In light of $(\ref{txt:keyClaim})$, we see that $\frac{\partial g_{j_{\alpha}}}{\partial x_{\alpha}}$ $=$ $\frac{\partial b_{j_{\alpha}}}{\partial x_{\alpha}}$ $=$ $x_n^{e'_{\alpha}}$, so $\left(\frac{\partial g_{j_{\alpha}}}{\partial x_{\alpha}} \cdot m_{\alpha,\delta'} \mod I_p \right)$ = $(t_{i_{\alpha,\delta'}} \mod I_p)$ $=$ $t_{i_{\alpha,\delta'}}$ $\neq$ $t_{i_{\alpha,\delta}}$ for $\delta$ $\neq $ $\delta'$. This shows that the $(i_{\alpha,\delta},j_{\alpha})$-component of $v_{p,\beta,\delta'}$ is $0$ for all $\beta$ $\leq$ $\alpha$ and $(\beta,\delta')$ $\neq$ $(\alpha,\delta)$. It now follows from (\ref{txt:nonSpecialIndices}) and the induction hypothesis that
the coefficients $d_{\alpha,\delta}$ $=$ $0$ for all $1 \leq \delta \leq |\Delta'_{\alpha}|$, so the induction step is complete. We conclude that all the coefficients $d_{\alpha,\delta}$ in (\ref{eqn:Z2LinComb}) are $0$. Lemma \ref{lem:YtanVecsLinInd} now yields that every scalar $d_{ij}$ $=$ $0$ as well, and the proposition is proved.
\qed
\begin{cor} \label{cor:GenZPropCor1A}
Under the hypotheses of Proposition {\rm \ref{prop:GeneralZProp}}, one has that the $(i_{\alpha,\delta},j_{\alpha})$-component of $v_{p,\beta,\delta'}$ $=$ $0$ for all $\beta$ $\leq$ $\alpha$ and $(\beta,\delta')$ $\neq$ $(\alpha,\delta)$.
\end{cor}
\proof
This was shown in the course of the proof of the Proposition.
\qed
\begin{cor}\label{cor:GenZPropCor1}
Let $\mathbf{m}hcal{O}$ $\neq$ $\{ 1 \}$ be a lex-segment complement order ideal. Then the set $\EuScript{S}_0 \cup \EuScript{Z'}_0$ $\subseteq$ $\EuScript{T}_{[I_0]}$ is $K$-linearly independent.
(Recall that $I_0$ is the distinguished\ ideal generated by the monomials $m$ $\notin$ $\mathbf{m}hcal{O}$.)
\end{cor}
\proof
Since $b_{j_{\alpha}}$ is the lex-leading monomial of $g_{j_{\alpha}}$ $=$ $b_{j_{\alpha}}$ for $1$ $\leq$ $\alpha$ $\leq$ $n$, Proposition {\rm \ref{prop:GeneralZProp}} yields the result.
\qed
\begin{cor} \label{cor:GenZPropCor2}
Let $\mathbf{m}hcal{O}$ $\neq$ $\{ 1 \}$ be a lex-segment complement order ideal, and suppose that the sets $\operatorname{LM}$ and $\operatorname{TM}$ have been chosen such that for all $m \in \operatorname{LM}$ and $m' \in \operatorname{TM}$, one has that $m>m'$. Then the set $\EuScript{S}_p \cup \EuScript{Z'}_p$ $\subseteq$ $\EuScript{T}_{[I_p]}$ is $K$-linearly independent for all points $p$ as in {\rm (\ref{eqn:pRestriction})}; that is, for all distinguished\ ideals $I$ $=$ $I_p$.
\end{cor}
\proof
The hypotheses clearly imply that the hypotheses of Proposition \ref{prop:GeneralZProp} hold for all $p$; whence, the result.
\qed
\begin{cor} \label{cor:GenZPropCor3}
Let $\mathcal{O}$ $\neq$ $\{ 1 \}$ be a lex-segment complement order ideal, let $U$ $=$ $\operatorname{Spec}(K[\mathcal{C}, \mathcal{Z}'])$, and $\fn{\EuScript{F}}{U}{\operatorname{Hilb}^{\mu}_{\mathbb{A}^n_{K}}}$ the map constructed in Section {\rm \ref{ssec:Cconstr}}. Then the distinguished\ locus $X_{\mathcal{S}}$ is contained in a component $Y$ of $\operatorname{Hilb}^{\mu}_{\mathbb{A}^n_{K}}$ of dimension
\[
\dim(Y) \geq \dim(\EuScript{F}(U)) = \dim(U) = |\mathcal{S}| + |\mathbf{m}hcal{Z}'|.
\]
\end{cor}
\proof
By Proposition \ref{prop:F(U)DimLowerBd} and Corollary \ref{cor:GenZPropCor1}, we have that
\[
|\mathcal{S}| + |\mathbf{m}hcal{Z}'| = |\EuScript{S}_0| + |\EuScript{Z}'_0| \leq \dim(\EuScript{F}(U)) \leq \dim(U) = |\mathcal{S}| + |\mathbf{m}hcal{Z}'|.
\]
Choosing $Y$ to be a component of $\operatorname{Hilb}^{\mu}_{\mathbf{m}hbb{A}^n_{K}}$ containing $\EuScript{F}(U)$, we have that $Y$ contains $X_{\mathcal{S}}$ and $\dim{Y}$ $\geq$ $\dim(\EuScript{F}(U))$ $=$ $|\mathcal{S}| + |\mathbf{m}hcal{Z}'|$, as desired.
\qed
\subsection{Additional independent tangent directions at $[I_p]$} \label{ssec:ZExtensionRem}
Proposition \ref{prop:GeneralZProp} and its corollaries give conditions on the distinguished\ ideal $I_p$ for which the set $\EuScript{S}_p$ $\cup$ $\EuScript{Z}'_p$ $\subseteq$ $\EuScript{T}_{[I_p]}$ is $K$-linearly independent. For many of our examples this suffices, because in these cases $\EuScript{S}_p$ $\cup$ $\EuScript{Z}'_p$ turns out to be a $K$-basis of $\EuScript{T}_{[I_p]}$. However, in other cases $\EuScript{T}_{[I_p]}$ has a basis that is a proper superset of $\EuScript{S}_p$ $\cup$ $\EuScript{Z}'_p$; here is one way this can happen.
With $\mathbf{m}hcal{O}$ $\neq$ $\{ 1 \}$ a lex-segment complement, suppose that we have chosen the sets $\Delta_{\alpha}$ $\subseteq$ $K[x_1, \dots, \widehat{x_{\alpha}},\dots, x_n]$ to be supersets of the corresponding sets $\Delta'_{\alpha}$. Furthermore, suppose that at least one of the monomials $b_{j_{\alpha}}$ is non-leading and that there is a monomial $m_{\alpha,\hat{\delta}}$ $\in$ $\Delta_{\alpha}$ such that $t_{i_{\alpha}} \cdot m_{\alpha,\hat{\delta}}$ $=$ $b_{\hat{j}}$ $\in$ $\operatorname{LM}$, with
\[
g_{\hat{j}} = \left( b_{\hat{j}} - \sum_{C_{i\hat{j}} \in \mathcal{S}} {c_{i\hat{j}}\, t_{i}}\right)\ \neq\ b_{\hat{j}}.
\]
Then the non-zero $(i,j_{\alpha})$-components of the tangent vector $v_{\alpha,\hat{\delta}}$ are the (non-zero) coefficients $c_{i\hat{j}}$ of the linear combination
\[
t_{i_{\alpha}} \cdot m_{\alpha,\hat{\delta}} = b_{\hat{j}} \equiv \sum_{C_{i\hat{j}} \in \mathcal{S}} {c_{i\hat{j}}\, t_{i}}\ \text{ mod } I,
\]
and these components are non-distinguished\ because $b_{j_{\alpha}}$ $\notin$ $\operatorname{LM}$. It is therefore possible that $v_{p,\alpha,\hat{\delta}}$ $\in$ $\EuScript{T}_{[I]}$ is independent of the vectors in $\EuScript{Z}'$ (and certain that it is independent of the vectors in $\EuScript{S}$). Accordingly, we define
\[
\begin{array}{rcl}
\Delta''_{\alpha} & = & \Delta'_{\alpha} \cup \left\{ m_{\alpha,\hat{\delta}} \in K[x_1, \dots, \widehat{x_{\alpha}}, \dots, x_n] \mid
\begin{array}{r}
b_{j_{\alpha}} \notin \operatorname{LM} \text{ and }\\
t_{i_{\alpha}} \cdot m_{\alpha,\hat{\delta}} \in \operatorname{LM}
\end{array} \right\},
\\
\mathbf{m}hcal{Z}'' & = & \{ Z_{\alpha,\delta} \mid 1 \leq \alpha \leq n,\ 1 \leq \delta \leq |\Delta''_{\alpha}| \},
\\
\EuScript{Z}_p'' & = & \{v_{p,\alpha,\delta} \mid 1 \leq \alpha \leq n,\ 1 \leq \delta \leq |\Delta''_{\alpha}| \} \subseteq \EuScript{T}_{[I_{p}]} .
\end{array}
\]
From Proposition \ref{prop:F(U)DimLowerBd}, we obtain the following analogue of Corollary \ref{cor:GenZPropCor3}:
\begin{cor} \label{cor:Z''Cor}
Let $\mathcal{O}$ $\neq$ $\{ 1 \}$ be a lex-segment complement order ideal, let $U$ $=$ $\operatorname{Spec}(K[\mathcal{C}, \mathcal{Z}''])$, and $\fn{\EuScript{F}}{U}{\operatorname{Hilb}^{\mu}_{\mathbb{A}^n_{K}}}$ the map constructed in Section {\rm \ref{ssec:Cconstr}}. Suppose that there is a distinguished\ ideal $I_p$ such that the set $\EuScript{S}_p$ $\cup$ $\EuScript{Z}''_p$ $\subseteq$ $\EuScript{T}_{[I_p]}$ is $K$-linearly independent. Then the distinguished\ locus $X_{\mathcal{S}}$ is contained in a component $Y$ of $\operatorname{Hilb}^{\mu}_{\mathbb{A}^n_{K}}$ of dimension
\[
\dim(Y) \geq \dim(\EuScript{F}(U)) = \dim(U) = |\mathcal{S}| + |\mathbf{m}hcal{Z}''|.
\]
\qed
\end{cor}
In several of the following examples (Sections \ref{ssec:(1,5,3,4)}, \ref{ssec:(1,5,3,4,5,6)}, \ref{ssec:(1,6,6,10)}, \ref{ssec:(1,6,21,10,15)}, and \ref{(1,6,10,10,5)CaseA}), we have that the set $\EuScript{S}_p \cup \EuScript{Z}'_p$ is a $K$-basis of $\EuScript{T}_{[I_p]}$, so expanding to $\EuScript{Z}''$ gives us nothing new. On the other hand, the examples presented in Sections \ref{(1,6,10,10,5)CaseB} and \ref{(1,6,10,10,5)CaseC} are such that $\EuScript{S}_p \cup \EuScript{Z}'_p$ is not a basis of $\EuScript{T}_{[I_p]}$, but $\EuScript{S}_p \cup \EuScript{Z}''_p$ is.
\section{Examples of generic distinguished\ ideals} \label{sec:examples}
We present several examples of generic distinguished\ ideals associated to lex-segment complement order ideals. Unfortunately, the examples are too large to permit the computations to be carried out by hand. We summarize each example and provide the details in a \textit{Mathematica} \cite{Mathematica} notebook that is available for download from the arXiv, where it is posted as an ancillary file to this paper. The notebooks are also available at
\[
\text{\emph{http://www.skidmore.edu/\~{}mhuibreg/Notebooks for paper/index.html}}.
\]
The notation used in the notebooks adheres closely to that used in the paper. The notebooks all make use of a library of \textit{Mathematica} functions coded and documented in a separate notebook \emph{utility functions.nb} that is available for download at the same locations.
Each example of a generic ideal is a distinguished\ ideal $I$ such that $[I]$ is a smooth point on an elementary component of the form $\overline{\EuScript{F}(U)}$ as in Proposition \ref{prop:genericCond}. The ideal $I$ is generated by a \emph{Mathematica} function \textbf{makeShortListOfIdealGenerators}, which, given the sets $\operatorname{LM}$ and $\operatorname{TM}$, generates the polynomials $g_{j_{\iota}}$ $\in$ $G$ (equation \ref{eqn:fDef}) by assigning values drawn at random (with equal probabilities) from the set $\{-1,0,1 \}$ to the distinguished\ coefficients $C_{ij}$. (A second version of this function is provided that assigns the coefficients from the set $\{0,1 \}$ with the probability of assigning the value $1$ supplied as an additional input; this version was used to generate the example discussed in Remark \ref{rem:GenButNotEff}, and nowhere else.)
Since $\overline{\EuScript{F}(U)}$ will be smooth in a neighborhood of a smooth point, the existence of a generic distinguished\ ideal $I$ implies that there is a non-empty Zariski-open subset $X'$ $\subseteq$ $X_{\mathcal{S}}$ such that $[I']$ $\in$ $X'$ $\Rightarrow$ $I'$ is generic. In each example notebook, one can either choose to verify that a previously-generated example (stored in the notebook) is generic, or can generate other distinguished\ ideals to test. In view of the foregoing, such examples will typically (but need not always) be generic as well. Note that in all cases the order ideal $\mathbf{m}hcal{O}$ is the unique lex-segment complement having the given Hilbert function.
\subsection{Hilbert function $(1,5,3,4,0)$, shape $(5,2,2,3)$} \label{ssec:(1,5,3,4)}
We exhibit a generic distinguished\ ideal $I$ $=$ $I_p$ of this Hilbert function in the notebook \emph{case $(1,5,3,4,0)$.nb}. For this initial example, we provide more details here to serve as an introduction; recall that this example was summarized in Section \ref{ssec:AGen}.
In this example, we have
\[
\begin{array}{l}
n = 5,
\\
\mathbf{m}hcal{O} = \left\{1,x_1,x_2,x_3,x_4,x_5,x_4^2,x_4x_5,x_5^2,x_4^3,x_4^2x_5,x_4x_5^2,x_5^3\right\};
\\
\operatorname{LM} = \left\{x_1^2,x_1 x_2,x_1 x_3,x_1 x_4,x_1 x_5,x_2^2,x_2 x_3,x_2 x_4,x_2 x_5,x_3^2,x_3 x_4,x_3 x_5\right\};
\\
\operatorname{TM} = \left\{x_4^3,x_4^2 x_5,x_4 x_5^2,x_5^3\right\};
\\
\lambda = |\operatorname{LM}| = 12,\ \tau = |\operatorname{TM}| = 4,\
\mu = |\mathbf{m}hcal{O}| = 13.
\end{array}
\]
As in Section \ref{sec:SLI}, we constructed the following set of polynomials $G$ $=$ \(\left\{\left.g_j\right|1\leq j\leq \lambda \right\}\), and extended them to the $\mathbf{m}hcal{O}$-border basis of a distinguished\ ideal \(I\) that can then be shown to be efficient (in fact, $\vartheta$-efficient).
{\Small
\[
\left\{
\begin{array}{ll}
g_1=x_1^2-x_4^3-x_4^2 x_5+x_4 x_5^2+x_5^3, & g_2=x_1 x_2-x_4^3+x_4^2 x_5+x_4 x_5^2+x_5^3,
\\
g_3=x_1 x_3-x_4^2 x_5+x_5^3, &
g_4=x_1 x_4+x_4^3+x_4^2 x_5-x_5^3,
\\
g_5=x_1 x_5 + x_4^2 x_5-x_5^3, & g_6=x_2^2+x_4^3+x_4^2 x_5+x_4 x_5^2-x_5^3,
\\
g_7=x_2 x_3-x_4^3-x_4^2 x_5, & g_8=x_2 x_4+x_4^3-x_4^2 x_5,
\\
g_9=x_2 x_5-x_4^2 x_5, & g_{10}=x_3^2+x_4^3+x_4^2 x_5+x_4
x_5^2-x_5^3,
\\
g_{11}=x_3 x_4+x_4^3-x_4^2 x_5, & g_{12}=x_3 x_5-x_4^2 x_5-x_4 x_5^2
\end{array}
\right\}
\]
}
We list the sets of monomials $\Delta'_{\alpha}$ as in (\ref{eqn:DeltaDelta'RestrDef}):
\[
\begin{array}{cccc}
x_{\alpha} & b_{j_{\alpha}} & t_{i_{\alpha}} & \Delta'_{\alpha}
\\
x_1 & x_1 x_5 \in \operatorname{LM} & x_5 & \{ 1, x_4, x_5 \}
\\
x_2 & x_2 x_5 \in \operatorname{LM} & x_5 & \{ 1, x_4, x_5 \}
\\
x_3 & x_3 x_5 \in \operatorname{LM} & x_5 & \{ 1, x_4, x_5 \}
\\
x_4 & x_4 x_5^3 \notin \operatorname{LM} & x_5^3 & \{ 1 \}
\\
x_5 & x_5^4 \notin \operatorname{LM} & x_5^3 & \{ 1 \}
\end{array}
\]
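For instance, the first row is obtained as follows: since $b_{j_1}$ $=$ $x_1 x_5$ $\in$ $\operatorname{LM}$, definition (\ref{eqn:DeltaDelta'RestrDef}) admits exactly those $m_{1,\delta}$ $\in$ $K[x_2,\dots,x_5]$ with $x_5 \cdot m_{1,\delta}$ $\in$ $\mathcal{O} \setminus \operatorname{TM}$; thus
\[
x_5 \cdot 1 = x_5,\qquad x_5 \cdot x_4 = x_4 x_5,\qquad x_5 \cdot x_5 = x_5^2\ \in\ \mathcal{O} \setminus \operatorname{TM},
\]
whereas, for example, $x_5 \cdot x_3$ $=$ $x_3 x_5$ $\in$ $\operatorname{LM}$ and $x_5 \cdot x_4^2$ $=$ $x_4^2 x_5$ $\in$ $\operatorname{TM}$ are excluded; whence $\Delta'_1$ $=$ $\{1, x_4, x_5\}$.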
By Corollary \ref{cor:GenZPropCor3} the image of the associated map $\fn{\EuScript{F}}{U}{\operatorname{Hilb}^{13}_{\mathbf{m}hbb{A}^5_{K}}}$ satisfies
\[
\dim(\EuScript{F}(U)) \geq |\mathcal{S}| + |\mathbf{m}hcal{Z}'| = (\lambda \cdot \tau) + (3+3+3+1+1) = 12 \cdot 4 + 11 = 59.
\]
On the other hand, when we compute the dimension of the tangent space $\EuScript{T}_{[I_p]}$ using the tangent space relations associated to a basis of linear syzygies, as in (\ref{eqn:tanSpDimComp}), we obtain that $\dim_{K}(\EuScript{T}_{[I_p]})$ $=$ $59$; consequently, by Proposition \ref{prop:genericCond}, $\overline{\EuScript{F}(U)}$ is an elementary component of $\operatorname{Hilb}^{13}_{\mathbf{m}hbb{A}^5_{K}}$ of dimension 59, on which $[I_p]$ is a smooth point. Note that the dimension of the principal component is $5 \cdot 13$ $=$ $65$.
\begin{rem} \label{rem:GenButNotEff}
The notebook also includes an example of a generic distinguished\ ideal with Hilbert function $(1,5,3,4,0)$ that is not efficient. Therefore, efficiency is not necessary for genericity.
\end{rem}
\subsection{Hilbert function $(1,5,3,4,5,6,0)$, shape $(5,2,2,5)$} \label{ssec:(1,5,3,4,5,6)}
The details of this example are presented in the notebook \emph{case $(1,5,3,4,5,$ $6,0)$.nb}. The set of leading monomials is the same as in the previous example, and the set of trailing monomials consists of the six monomials of degree $5$ in $x_4, x_5$. Sufficiently general distinguished\ ideals $I$ of this shape are $\vartheta$-efficient.
The sets $\Delta'_{\alpha}$ are computed as in the previous example:
\[
\begin{array}{c}
\Delta'_1 = \Delta'_2 = \Delta'_3 = \{ 1,\, x_4,\, x_5,\, x_4^2,\, x_4 x_5,\, x_5^2,\,x_4^3,\,x_4^2 x_5,\, x_4 x_5^2,\,x_5^3\}, \text{ and }
\\
\Delta'_4 = \Delta'_5 = \{1\}
\end{array}.
\]
By Corollary \ref{cor:GenZPropCor3} the image of the associated map $\fn{\EuScript{F}}{U}{\operatorname{Hilb}^{24}_{K}}$ satisfies
\[
\dim(\EuScript{F}(U)) \geq |\mathcal{S}| + |\mathcal{Z}'| = \lambda \cdot \tau + 3\cdot 10 + 2 = 12 \cdot 6 + 32 = 104.
\]
Moreover, for sufficiently general choices of the generators $G$, we have that $\dim_{K}(\EuScript{T}_{[I]})$ $=$ $104$; consequently, by Proposition \ref{prop:genericCond}, $\overline{\EuScript{F}(U)}$ is an elementary component of $\operatorname{Hilb}^{24}_{\mathbf{m}hbb{A}^5_{K}}$ of dimension 104, on which $[I]$ is a smooth point. Note that the dimension of the principal component is $5 \cdot 24$ $=$ $120$ in this case.
\subsection{Hilbert function $(1,6,6,10,0)$, shape $(6,3,2,3)$} \label{ssec:(1,6,6,10)}
The details of this example are presented in the notebook \emph{case $(1,6,6,10,0)$.nb}.
The sets $\operatorname{LM}$, $\operatorname{TM}$, and $\Delta'_{\alpha}$ are as follows:
\[
\begin{array}{c}
\operatorname{LM}\ =\ \left\{x_1^2,\, x_1 x_2,\, \dots,\, x_3 x_6\right\},\ \ \operatorname{TM}\ =\ \left\{ x_4^3,\, x_4^2 x_5,\, \dots,\, x_6^3 \right\},
\\
\Delta'_1 = \Delta'_2 = \Delta'_3 = \{ 1,\, x_4,\, x_5,\, x_6 \},\ \ \Delta'_4 = \Delta'_5 = \Delta'_6 = \{ 1 \}.
\end{array}
\]
By Corollary \ref{cor:GenZPropCor3} the image of the associated map $\fn{\EuScript{F}}{U}{\operatorname{Hilb}^{23}_{K}}$ satisfies
\[
\dim(\EuScript{F}(U)) \geq |\mathcal{S}| + |\mathcal{Z}'| = \lambda \cdot \tau + 3\cdot 4 + 3 = 15\cdot 10 + 15 = 165.
\]
Moreover, for sufficiently general choices of the set $G$, we have that $\dim_{K}(\EuScript{T}_{[I]})$ $=$ $165$; consequently, by Proposition \ref{prop:genericCond}, $\overline{\EuScript{F}(U)}$ is an elementary component of $\operatorname{Hilb}^{23}_{\mathbb{A}^6_{K}}$ of dimension 165, on which $[I]$ is a smooth point. Note that the dimension of the principal component is $6 \cdot 23$ $=$ $138$.
\begin{rem} \label{rem:genButNotEff}
In the notebook detailing this example, we observe that sufficiently general distinguished\ ideals are efficient, but no distinguished\ ideal in this case can be $\vartheta$-efficient. Consequently, $\vartheta$-efficiency is not a necessary condition for genericity, as we noted in Remark \ref{rem:ThEffAndEff}.
\end{rem}
\subsection{Hilbert function $(1,6,21,10,15,0)$, shape $(6,3,3,4)$} \label{ssec:(1,6,21,10,15)}
The details of this example are presented in the notebook \emph{case $(1,6,21,10,15,$ $0)$.nb}. The set of leading monomials is equal to the first 46 monomials of degree $3$ when listed in decreasing lex order, and the set of trailing monomials consists of the 15 monomials of degree $4$ in $x_4, x_5, x_6$. One finds that sufficiently general distinguished\ ideals $I$ of this shape are $\vartheta$-efficient. The sets $\Delta'_{\alpha}$ are computed as in (\ref{eqn:DeltaDelta'RestrDef}):
\[
\Delta'_1 = \Delta'_2 = \Delta'_3 = \{ 1, x_4, x_5, x_6\}, \text{ and } \Delta'_4 = \Delta'_5 = \Delta'_6 = \{1\}.
\]
From Corollary \ref{cor:GenZPropCor3} we obtain
\[
\dim(\EuScript{F}(U)) \geq |\mathcal{S}| + |\mathbf{m}hcal{Z}'| = 46 \cdot 15 + 4\cdot 3 +3 = 705;
\]
moreover, Corollary \ref{cor:GenZPropCor2} implies that the tangent space at $[I]$ for \emph{every} distinguished\ ideal $I$ has dimension $\geq 705$. The number of variables $a_{ij}$ involved in the tangent space relations (Section \ref{sec:tanSpace}) is
\[
(\text{\# boundary monoms}) \cdot (\text{\# basis monomials}) = 142\cdot 53 = 7526.
\]
From this it follows that the rank $\rho$ of the tangent space relations at
$[I]$ satisfies
\[
7526 - \rho \geq 705\ \Rightarrow\ 6821 \geq \rho.
\]
It follows that $I$ will be generic provided that the rank of the tangent space relations at $[I]$ is equal to its maximum possible value of $6821$.
In the notebook associated to this example, we computed this rank modulo a large prime (32713) to conserve memory, and obtained the value $6821$. Since the tangent space relations in our examples have integer coefficients, and the rank of an integer matrix cannot increase when one computes it modulo a prime, this computation demonstrates that the characteristic $0$ rank must be $6821$.
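In symbols, writing $\rho$ for the rank of the tangent space relations in characteristic $0$ (as above) and $\rho_{32713}$ for their rank modulo $32713$, the argument is
\[
6821 \ =\ \rho_{32713} \ \leq\ \rho \ \leq\ 6821\ \ \Longrightarrow\ \ \rho\ =\ 6821 .
\]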
Therefore, $\dim_{K}(\EuScript{T}_{[I]})$ $=$ $7526 - 6821$ $=$ $705$, so, once again, Proposition \ref{prop:genericCond} implies that $[I]$ is a smooth point on an elementary component of $\operatorname{Hilb}^{53}_{\mathbf{m}hbb{A}^6_{K}}$ of dimension $705$. The dimension of the principal component is $6 \cdot 53$ $=$ $318$. Note that the computations in this notebook require a lot of memory (two gigabytes was insufficient) and a run time possibly measured in hours, depending on the speed of one's machine.
\subsection{Hilbert function $(1,6,10,10,5,0)$} \label{ssec:(1,6,10,10,5)}
We present three different examples of elementary components having the indicated Hilbert function. The order ideal for all three examples is the lex-segment complement
\[
\mathbf{m}hcal{O} =
\left\{ \begin{array}{c}
1
\\
x_1\ \ x_2\ \ x_3\ \ x_4\ \ x_5\ \ x_6
\\
x_3^2\ \ x_3 x_4\ \ x_3 x_5\ \ x_3 x_6\ \ x_4^2\ \ x_4 x_5\ \ x_4 x_6\ \ x_5^2\ \ x_5 x_6\ \ x_6^2
\\
x_4^3\ \ x_4^2 x_5\ \ x_4^2 x_6\ \ x_4 x_5^2\ \ x_4 x_5 x_6\ \ x_4 x_6^2\ \ x_5^3\ \ x_5^2 x_6\ \ x_5 x_6^2\ \ x_6^3
\\x_5^4\ \ x_5^3 x_6\ \ x_5^2 x_6^2\ \ x_5 x_6^3\ \ x_6^4
\end{array} \right\}
\]
of cardinality $32$. Note that the dimension of the principal component of $\operatorname{Hilb}^{32}_{\mathbf{m}hbb{A}^6_{K}}$ is $32 \cdot 6$ $=$ $192$.
\subsubsection{First case} \label{(1,6,10,10,5)CaseA}
The details of this example are presented in the notebook \emph{case $(1,6,10,10,5,0)$ first.nb}.
In this case the sets of leading and trailing monomials are
\[
\begin{array}{rcl}
\operatorname{LM} & = & \left\{ \begin{array}{c}
x_1^2,\,x_1 x_2,\,x_1 x_3,\,x_1 x_4,\,x_1 x_5,\,x_1 x_6,\,x_2^2,
\\
x_2 x_3,\,x_2 x_4,\,x_2 x_5,\,x_2 x_6, x_3^3,\,x_3^2 x_4,\,x_3^2 x_5,\, x_3^2 x_6,
\\
x_3 x_4^2,\,x_3 x_4 x_5,\,x_3 x_4 x_6,\,x_3 x_5^2,\,x_3 x_5 x_6,\,x_3 x_6^2
\end{array} \right\},
\\
\operatorname{TM} & = & \left\{ \begin{array}{c}x_4^3,\, x_4^2 x_5, \,x_4^2 x_6, \,x_4 x_5^2, \,x_4 x_5 x_6,
\\
x_4 x_6^2,\,x_5^4,\,x_5^3 x_6,\,x_5^2 x_6^2,\,x_5 x_6^3,\,x_6^4
\end{array} \right\}.
\end{array}
\]
One finds that sufficiently general distinguished\ ideals $I$ constructed using these sets are efficient.
The sets $\Delta'_{\alpha}$ are as follows, showing that $|\mathbf{m}hcal{Z}'|$ $=$ $24$:
\[
\begin{array}{cccc}
x_{\alpha} & b_{j_{\alpha}} & t_{i_{\alpha}} & \Delta'_{\alpha}
\\
x_1 & x_1 x_6 \in \operatorname{LM} & x_6 & \{ 1,\, x_3,\, x_4,\, x_5,\, x_6,\, x_5^2,\, x_5 x_6,\, x_6^2 \}
\\
x_2 & x_2 x_6 \in \operatorname{LM} & x_6 & \{ 1,\, x_3,\, x_4,\, x_5,\, x_6,\, x_5^2,\, x_5 x_6,\, x_6^2 \}
\\
x_3 & x_3 x_6^2 \in \operatorname{LM} & x_6^2 & \{ 1,\, x_5,\, x_6 \}
\\
x_4 & x_4 x_6^3 \notin \operatorname{LM} & x_6^3 & \{ 1,\, x_5,\, x_6 \}
\\
x_5 & x_5 x_6^4 \notin \operatorname{LM} & x_6^4 & \{ 1 \}
\\
x_6 & x_6^5 \notin \operatorname{LM} & x_6^4 & \{ 1 \}
\end{array}.
\]
From Corollary \ref{cor:GenZPropCor3} we obtain that
\[
\dim(\EuScript{F}(U)) \geq |\mathcal{S}| + |\mathbf{m}hcal{Z}'| = \lambda \cdot \tau + 24 = 21 \cdot 11 + 24 = 255.
\]
On the other hand, we find by direct computation in the notebook that $\dim_{K}(\EuScript{T}_{[I]})$ $=$ $255$, so Proposition \ref{prop:genericCond} implies that $[I]$ is a smooth point on an elementary component of $\operatorname{Hilb}^{32}_{\mathbf{m}hbb{A}^6_{K}}$ of dimension $255$.
\subsubsection{Second case} \label{(1,6,10,10,5)CaseB}
The details of this example are presented in the notebook \emph{case $(1,6,10,10,5,0)$ second.nb}. The sets of leading and trailing monomials are
\[
\begin{array}{rcl}
\operatorname{LM} & = & \left\{ \begin{array}{c}
x_1^2,\, x_1 x_2,\, x_1 x_3,\, x_1 x_4,\, x_1 x_5,\, x_1 x_6,\, x_2^2,\, x_2 x_3,
\\
x_2 x_4,\, x_2 x_5,\,x_2 x_6,\, x_4^4,\, x_4^3 x_5,\, x_4^3 x_6,\,x_4^2 x_5^2,
\\x_4^2 x_5 x_6,\,x_4^2 x_6^2,\,x_4 x_5^3,\,x_4 x_5^2 x_6,\,x_4 x_5 x_6^2,\,x_4 x_6^3
\end{array} \right\},
\\
\operatorname{TM} & = & \left\{x_3^2,\,x_3 x_4,\,x_3 x_5,\,x_3 x_6,\,x_5^4,\,x_5^3 x_6,\,x_5^2 x_6^2,\,x_5 x_6^3,\,x_6^4\right\}.
\end{array}
\]
As usual, a distinguished\ ideal $I$ $=$ $I_p$ is generated, and its tangent space dimension is computed to be $\dim_{K}(\EuScript{T}_{[I]})$ $=$ $222$; the ideal $I$ is also found to be efficient.
In this case, there are trailing monomials that are lex-larger than some leading monomials; for example, $x_3 x_6$ $>$ $x_4 x_6^3$ $=$ $b_{j_4}$, so it is likely that distinguished\ ideals $[I]$ will fail to satisfy the hypothesis of Proposition \ref{prop:GeneralZProp}. On the other hand, there is a non-leading boundary monomial $b_{j_3}$ $=$ $x_3 x_6^2$ such that $t_{i_3}\cdot x_4 x_6$ $=$ $x_4 x_6^3$ $=$ $b_{j_4}$ $\in$ $\operatorname{LM}$, so the situation described in Section \ref{ssec:ZExtensionRem} arises; that is, it is possible that the larger set of tangent vectors $\EuScript{S}_p \cup \EuScript{Z}''_p$ is $K$-linearly independent at $[I_p]$. Indeed, we verify this by direct computation in the notebook. The associated sets $\Delta''_{\alpha}$ are as follows:
\[
\begin{array}{cccc}
x_{\alpha} & b_{j_{\alpha}} & t_{i_{\alpha}} & \Delta''_{\alpha}
\\
x_1 & x_1 x_6 \in \operatorname{LM} & x_6 & \{ 1,\, x_4,\, x_5,\, x_6,\, x_4^2,\, x_4 x_5,\, x_4 x_6,\, x_5^2,\, x_5 x_6,\, x_6^2 \}
\\
x_2 & x_2 x_6 \in \operatorname{LM} & x_6 & \{ 1,\, x_4,\, x_5,\, x_6,\, x_4^2,\, x_4 x_5,\, x_4 x_6,\, x_5^2,\, x_5 x_6,\, x_6^2 \}
\\
x_3 & x_3 x_6^2 \notin \operatorname{LM} & x_6^2 & \{ 1,\, x_4,\, x_5,\, x_6,\, x_4^2,\, x_4 x_5,\, x_4 x_6,\, x_5^2,\, x_5 x_6,\, x_6^2 \}
\\
x_4 & x_4 x_6^3 \in \operatorname{LM} & x_6^3 & \{ 1 \}
\\
x_5 & x_5 x_6^4 \notin \operatorname{LM} & x_6^4 & \{ 1 \}
\\
x_6 & x_6^5 \notin \operatorname{LM} & x_6^4 & \{ 1 \}
\end{array}.
\]
By Corollary \ref{cor:Z''Cor} the image of the map $\fn{\EuScript{F}}{U}{\operatorname{Hilb}^{32}_{\mathbf{m}hbb{A}^6_{K}}}$ satisfies
\[
\dim(\EuScript{F}(U)) \geq |\mathcal{S}| + |\mathbf{m}hcal{Z}''| = (\lambda \cdot \tau) + (10 \cdot 3 + 3) = 21 \cdot 9 + 33 = 222.
\]
It now follows from Proposition \ref{prop:genericCond} that $[I]$ is a smooth point on an elementary component of dimension $222$, so that $I$ is a generic ideal.
\subsubsection{Third case} \label{(1,6,10,10,5)CaseC}
The details of this example are presented in the notebook \emph{case $(1,6,10,10,5,0)$ third.nb}. The sets of leading and trailing monomials are
\[
\begin{array}{rcl}
\operatorname{LM} & = & \left\{ \begin{array}{c}
x_1^2,\, x_1 x_2,\, x_1 x_3,\, x_1 x_4,\, x_1 x_5,\, x_1 x_6,\, x_2^2,\, x_2 x_3,
\\ x_2 x_4,\, x_2 x_5,\, x_2 x_6,
x_5^5,\, x_5^4 x_6,\, x_5^3 x_6^2,\, x_5^2 x_6^3,\, x_5 x_6^4,\, x_6^5
\end{array} \right\}
\\
\operatorname{TM} & = & \left\{ \begin{array}{c} x_3^2,\, x_3 x_4,\, x_3 x_5,\, x_3 x_6,\, x_4^3,\, x_4^2 x_5,
\\ x_4^2 x_6,\, x_4 x_5^2,\, x_4 x_5 x_6,\,
x_4 x_6^2
\end{array} \right\}
\end{array}.
\]
We randomly generate a distinguished\ ideal $I$ $=$ $I_p$ using these sets and find that $\dim_{K}(\EuScript{T}_{[I]})$ $=$ $211$; we also find that $I$ is efficient, but not $\vartheta$-efficient, thereby providing another example as promised in Remark
\ref{rem:ThEffAndEff}.
As in the preceding example, we verify by direct computation that the larger set of vectors $\EuScript{S}_p \cup \EuScript{Z}''_p$ is $K$-linearly independent. The associated sets $\Delta''_{\alpha}$ are as follows:
{\Small
\[
\begin{array}{cccc}
x_{\alpha} & b_{j_{\alpha}} & t_{i_{\alpha}} & \Delta''_{\alpha}
\\
x_1 & x_1 x_6 \in \operatorname{LM} & x_6 & \{ 1,\, x_4,\, x_5,\, x_6,\, x_5^2,\, x_5 x_6,\, x_6^2,\, x_5^3,\, x_5^2 x_6,\,x_5 x_6^2, x_6^3 \}
\\
x_2 & x_2 x_6 \in \operatorname{LM} & x_6 & \{ 1,\, x_4,\, x_5,\, x_6,\, x_5^2,\, x_5 x_6,\, x_6^2,\, x_5^3,\, x_5^2 x_6,\,x_5 x_6^2, x_6^3 \}
\\
x_3 & x_3 x_6^2 \notin \operatorname{LM} & x_6^2 & \{1,\, x_4,\, x_5,\, x_6,\, x_5^2,\, x_5 x_6,\, x_6^2,\, x_5^3,\, x_5^2 x_6,\,x_5 x_6^2, x_6^3 \}
\\
x_4 & x_4 x_6^3 \notin \operatorname{LM} & x_6^3 & \{ 1,\, x_5,\, x_6,\, x_5^2,\, x_5 x_6,\, x_6^2 \}
\\
x_5 & x_5 x_6^4 \in \operatorname{LM} & x_6^4 & \{ 1 \}
\\
x_6 & x_6^5 \in \operatorname{LM} & x_6^4 & \{ 1 \}
\end{array}.
\]
}
By Corollary \ref{cor:Z''Cor} the image of the map $\fn{\EuScript{F}}{U}{\operatorname{Hilb}^{32}_{\mathbf{m}hbb{A}^6_{K}}}$ satisfies
\[
\dim(\EuScript{F}(U)) \geq |\mathcal{S}| + |\mathbf{m}hcal{Z}''| = (\lambda \cdot \tau) + (11 \cdot 3 + 6 +2) = 17 \cdot 10 + 41 = 211.
\]
It now follows from Proposition \ref{prop:genericCond} that $[I]$ is a smooth point on an elementary component of dimension $211$, so that $I$ is a generic ideal.
\section{Distinguished ideals of shape $(n,\kappa,r,s)$} \label{sec:idealsOfShapeNkRs}
Our last main goal, accomplished in Section \ref{sec:PlausArgs}, is to develop a numerical criterion for picking out shapes $(n,\kappa,r,s)$ for which sufficiently general distinguished\ ideals of that shape are likely to be generic. We make preparations in this section by discussing such distinguished\ ideals in detail. We assume $n \geq 3$, $1 < \kappa < n$, and $2 \leq r < s$.
\subsection{The sets $\mathbf{m}hcal{O}$, $\operatorname{LM}$, and $\operatorname{TM}$} \label{ssec:FamOfExamplesBasics} The order ideal $\mathbf{m}hcal{O}$ is the lex-segment complement given by $\mathbf{m}hcal{O}$ $=$ $\cup_{d=0}^{s} \mathbf{m}hcal{O}_d$ (recall that $\mathbf{m}hcal{O}_{d}$ denotes the set of basis monomials of degree $d$), where
\[
\mathbf{m}hcal{O}_d\ = \ \left\{
\begin{array}{l}
\{ \text{monom's of degree } d \text{ in } x_1, \dots, x_n \},\ \text{if } 0 \leq d < r,
\\
\{ \text{monom's of degree } d \text{ in } x_{n-\kappa + 1}, \dots, x_n\},\ \text{if } r \leq d \leq s,
\\
\emptyset, \text{ if } d > s.
\end{array} \right.
\]
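For instance, for shape $(5,2,2,3)$ this prescription gives
\[
\mathcal{O} \ =\ \{1\} \,\cup\, \{x_1, x_2, x_3, x_4, x_5\} \,\cup\, \{x_4^2, x_4 x_5, x_5^2\} \,\cup\, \{x_4^3, x_4^2 x_5, x_4 x_5^2, x_5^3\},
\]
which is exactly the order ideal of the example in Section \ref{ssec:(1,5,3,4)}.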
We call $x_1$, \dots, $x_{n-\kappa}$ the \textbf{front variables}, and $x_{n-\kappa+1}, \dots, x_n$ the \textbf{back variables}; we then define the \textbf{front degree} (resp.\ \textbf{back degree}) of a monomial $m$ to be the sum of the exponents to which the front (resp.\ back) variables appear in $m$. The sets of leading and trailing monomials are selected as follows:
\[
\begin{array}{rcl}
\operatorname{LM} & = & \{ \text{monom's in } x_1, \dots, x_n \text{ of deg. } r \} \setminus \mathcal{O}_r,
\\
{} & = & \{x_1^r,\, x_1^{r-1}x_2,\,\dots, x_{n-\kappa}\, x_n^{r-1} \} = \{b_1,\, b_2,\, \dots,\, b_{\lambda} \},\text{ and}
\\
\operatorname{TM} & = & \mathbf{m}hcal{O}_s = \{x_{n-\kappa + 1}^s,\, x_{n-\kappa +1}^{s-1}\, x_{n-\kappa + 2},\, \dots,\, x_n^s \}.
\end{array}
\]
It follows easily that
\begin{equation} \label{eqn:lambdaAndTau}
\begin{array}{rcl}
\lambda & = & |\operatorname{LM}| = {n-1+r \choose r} - {\kappa - 1 + r \choose r},
\\
\tau & = & |\operatorname{TM}| = {\kappa - 1 + s \choose s}, \text{ and}
\\
\mu & = & \sum_{d=0}^{r-1} {n-1+d \choose d} + \sum_{d=r}^s {\kappa-1 + d \choose d}.
\end{array}
\end{equation}
Recall that a distinguished\ ideal $I$ built using these sets $\mathbf{m}hcal{O}$, $\operatorname{LM}$, and $\operatorname{TM}$ is said to have \textbf{shape} $(n,\kappa,r,s)$. Examples \ref{ssec:(1,5,3,4)}, \ref{ssec:(1,5,3,4,5,6)}, \ref{ssec:(1,6,6,10)}, and \ref{ssec:(1,6,21,10,15)} are all of this form, having shapes $(5,2,2,3)$, $(5,2,2,5)$, $(6,3,2,3)$, and $(6,3,3,4)$, respectively.
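For instance, for shape $(5,2,2,3)$ the formulas (\ref{eqn:lambdaAndTau}) give
\[
\lambda = {6 \choose 2} - {3 \choose 2} = 12,\qquad \tau = {4 \choose 3} = 4,\qquad
\mu = \left( {4 \choose 0} + {5 \choose 1} \right) + \left( {3 \choose 2} + {4 \choose 3} \right) = 13,
\]
recovering the cardinalities listed in Section \ref{ssec:(1,5,3,4)}.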
\subsection{Boundary monomials} \label{ssec:plausArgsBdryMonoms}
Recalling that $\partial \mathbf{m}hcal{O}_d$ $\subseteq$ $\partial \mathbf{m}hcal{O}$ denotes the subset of degree-$d$ boundary monomials, one sees easily that $\partial \mathbf{m}hcal{O} = \cup_{d=r}^{s+1} \partial \mathbf{m}hcal{O}_d$, where
\begin{equation} \label{eqn:listOfBdryMons}
\begin{array}{rcl}
\partial \mathbf{m}hcal{O}_{r} & = &
\operatorname{LM}\ = \ \{b_1, b_2, \dots, b_{\lambda} \},
\\
\partial \mathbf{m}hcal{O}_d & = & \{x_{\alpha} \cdot t_i \mid 1 \leq \alpha \leq n-\kappa,\ t_i \in \mathbf{m}hcal{O}_{d-1} \},\ r+1 \leq d \leq s, \text{ and}
\\
\partial \mathbf{m}hcal{O}_{s+1} & = &
\{x_{\alpha} \cdot t_i \mid 1 \leq \alpha \leq n-\kappa,\ t_i \in \mathbf{m}hcal{O}_{s} \}\ \cup
\\
{} & {} & \left \{
\text{monomials of degree } s+1 \text{ in } x_{n-\kappa+1}, \dots, x_n
\right\}
\\
{} & = & \partial \operatorname{TM} .
\end{array}
\end{equation}
Consequently, $|\partial \mathbf{m}hcal{O}|$ $=$ $\nu$ is given by
{\Small
\[
\nu\ =\ {n -1 + r \choose r} - {\kappa-1+r \choose r} + \left( \sum_{d=r+1}^{s+1} (n-\kappa){\kappa-1+d-1 \choose d-1} \right) + {\kappa + s \choose s+1} .
\]
}
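For instance, for shape $(6,3,3,4)$ (the case of Section \ref{ssec:(1,6,21,10,15)}) this gives
\[
\nu \ =\ \left( {8 \choose 3} - {5 \choose 3} \right) + 3\left( {5 \choose 3} + {6 \choose 4} \right) + {7 \choose 5} \ =\ 46 + 75 + 21 \ =\ 142,
\]
in agreement with the count of boundary monomials used in that example.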
\begin{rem} \label{rem:Cond1ForEffIsOK}
Condition (i) of Proposition {\rm \ref{prop:effTestingProp}}, namely, that every non-leading boundary monomial $b_j$ is a multiple of a monomial in $Q$ $=$ $\partial \operatorname{LM}\, \cup\, \partial \operatorname{TM}$, is easily seen to hold in this case. Indeed,
\[
\partial \operatorname{LM} = \{ \text{monom's of degree } r+1 \text{ and front degree} \geq 1 \},
\]
and clearly every non-leading boundary monomial with front degree $\geq$ $1$ is divisible by a monomial in $\partial \operatorname{LM}$. The only other non-leading boundary monomials are the monomials of degree $s+1$ and front degree $0$, and these all lie in $\partial \operatorname{TM}$.
\end{rem}
\subsection{Linear syzygies} \label{ssec:neighborSyz}
In this section we compute the cardinality of the set $\operatorname{T}$ of target monomials (\ref{eqn:tarMonDef}), from which, by (\ref{eqn:linSyzCount}), we obtain the dimension of the $K$-vector space of linear syzygies $\psi$ $=$ $(n+1) \cdot \nu - |\operatorname{T}|$. Writing $\operatorname{T}_d$ $\subseteq$ $\operatorname{T}$ for the subset of monomials of degree $d$, we list the elements of
$\operatorname{T} = \cup_{d=r}^{s+2}\, \operatorname{T}_d$ degree-by-degree:
First, it is clear that
\[
\operatorname{T}_r = \partial \mathbf{m}hcal{O}_r, \text{ so }\
|\operatorname{T}_r| = \lambda = {n-1+r \choose r} - {\kappa - 1 + r \choose r}.
\]
Next, since
\[
\cup_{\alpha = 1}^{n} (x_{\alpha}\cdot \partial \mathbf{m}hcal{O}_r) \subseteq \operatorname{T}_{r+1} \subseteq \{\text{monom's of degree } r+1 \text{ and front deg.} \geq 1 \},
\]
and the extremes are clearly the same, we have that
\begin{equation} \label{eqn:tarmonsRplus1}
|\operatorname{T}_{r+1}| = {n + r \choose r+1} - {\kappa + r \choose r+1}.
\end{equation}
One checks easily that, for $r+1$ $\leq$ $d$ $\leq$ $s$,
\[
\partial \mathbf{m}hcal{O}_{d} = \{ \text{monom's of degree } d\text{ and front degree }1 \}.
\]
Hence, for $r+2$ $\leq$ $d$ $\leq$ $s$, we have that
\[
\begin{array}{rcl}
\operatorname{T}_d & = & \left( \partial \mathbf{m}hcal{O}_{d}\right) \cup \left( \cup_{\alpha=1}^{n}\, x_{\alpha}\cdot \partial \mathbf{m}hcal{O}_{d-1}\right)
\\
{} & = &\{m \mid \deg(m) =d\ \& \operatorname{front-deg}(m) = 1 \text{ or }2 \}, \text{ so}
\\
|\operatorname{T}_d| & = & (n-\kappa)\cdot {\kappa + d - 2 \choose d-1} + {n-\kappa + 1 \choose 2}\cdot {\kappa + d -3 \choose d-2}.
\end{array}
\]
Continuing, we next observe that
\[
\begin{array}{rcl}
\operatorname{T}_{s+1} & = & \partial \mathbf{m}hcal{O}_{s+1} \cup \left( \cup_{\alpha=1}^{n}\, x_{\alpha}\cdot \partial \mathbf{m}hcal{O}_{s}\right)
\\
{} & = & \left\{
\begin{array}{c}\text{monom's of deg. } s+1 \text{ and}
\\
\text{front deg. }0,\, 1, \text{ or } 2
\end{array} \right\}, \text{ so}
\\
|\operatorname{T}_{s+1}| & = & {\kappa + s \choose s+1} + (n-\kappa)\cdot {\kappa-1+s \choose s} + {n-\kappa + 1 \choose 2}\cdot {\kappa-2 +s \choose s-1}.
\end{array}
\]
Finally, we observe that
\[
\begin{array}{c}
\operatorname{T}_{s+2} = \cup_{\alpha=1}^{n}\, x_{\alpha}\cdot \partial{\mathbf{m}hcal{O}}_{s+1} = \left\{
\begin{array}{c}
\text{monom's of deg. } s+2\text{ and }
\\
\text{front deg. } 0,\, 1, \text{ or } 2
\end{array} \right \}, \text{ so }
\\
|\operatorname{T}_{s+2}| = {\kappa + s+1 \choose s+2} + (n-\kappa)\cdot {\kappa + s \choose s+1} + {n-\kappa + 1 \choose 2}\cdot {\kappa-1 +s \choose s},
\end{array}
\]
and $|\operatorname{T}|$ $=$ $\sum_{d=r}^{s+2}|\operatorname{T}_d|$. By inspection of the preceding results, we obtain the following
\begin{lem} \label{lem:asymptotics}
Let $I$ be a distinguished\ ideal of shape $(n,\kappa,r,s)$. If we hold $\kappa$, $r \geq 2$, and $s$ $>$ $r$ constant, and allow $n$ to increase, then the quantities $\mu$, $\lambda$, $\tau$, $\nu$, $|\operatorname{T}|$, and $\psi$ are polynomials in $n$ with the following dominant terms:
\[
\begin{array}{rcl}
\mu & \approx & \frac{n^{r-1}}{(r-1)!},
\\
\lambda & \approx & \frac{n^r}{r!},
\\
\tau & = & {\kappa-1+s \choose s}\ =\ \text{\rm constant in } n,
\\
\nu & \approx & \frac{n^r}{r!},
\\
|\operatorname{T}| & \approx & \frac{n^{r+1}}{(r+1)!},
\\
\psi & \approx & n\cdot \frac{n^r}{r!} -\frac{n^{r+1}}{(r+1)!} = \frac{r}{(r+1)!}\cdot n^{r+1} .
\end{array}
\] \qed
\end{lem}
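By way of illustration, instantiating the preceding counts for shape $(6,3,3,4)$ (the case of Section \ref{ssec:(1,6,21,10,15)}, where $\nu$ $=$ $142$, and where the range $r+2 \leq d \leq s$ is empty since $s = r+1$) gives
\[
|\operatorname{T}_3| = 46,\qquad |\operatorname{T}_4| = {9 \choose 4} - {6 \choose 4} = 111,\qquad |\operatorname{T}_5| = 21 + 45 + 60 = 126,\qquad |\operatorname{T}_6| = 28 + 63 + 90 = 181,
\]
so that $|\operatorname{T}|$ $=$ $464$ and $\psi$ $=$ $(6+1)\cdot 142 - 464$ $=$ $530$.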
\subsection{A linearly independent set in $\EuScript{T}_{[I]}$} \label{ssec:LinIndepTanVecSetInMainCase}
Since the order ideal $\mathbf{m}hcal{O}$ under consideration is a lex-segment complement, we choose the sets $\Delta'_{\alpha}$ as described in Section \ref{ssec:ZDiscussion}. One checks easily that, for $1 \leq \alpha \leq n-\kappa$,
\[
\begin{array}{l}
b_{j_{\alpha}} = x_{\alpha}\,x_n^{r-1} \in \operatorname{LM},\ t_{i_{\alpha}} = x_n^{r-1}, \text{ and }
\\
\Delta'_{\alpha} = \{\text{monom's } m \in K[x_{n-\kappa+1}, \dots, x_n] \mid 0 \leq \deg(m) \leq s-r\},
\end{array}
\]
and, for $n-\kappa+1 \leq \alpha \leq n$,
\[
b_{j_{\alpha}} = x_{\alpha}\,x_n^{s} \in \partial \mathbf{m}hcal{O} \setminus \operatorname{LM},\ t_{i_{\alpha}} = x_n^s \in \operatorname{TM}, \text{ and }\ \Delta'_{\alpha} = \{ 1 \}.
\]
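For shape $(5,2,2,3)$, for example, this recovers the table of Section \ref{ssec:(1,5,3,4)}:
\[
\Delta'_1 = \Delta'_2 = \Delta'_3 = \{1,\, x_4,\, x_5\},\qquad \Delta'_4 = \Delta'_5 = \{1\},\qquad |\mathcal{Z}'| = 3\cdot 3 + 2 = 11.
\]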
\begin{lem} \label{lem:SCupZ'LinIndInShapeCases}
For any distinguished\ ideal $I = I_p$ of shape $(n,\kappa,r,s)$ the set $\EuScript{S}_p \cup \EuScript{Z'}_p$ $\subseteq$ $\EuScript{T}_{[I]}$ is linearly independent. Consequently, if $\dim_{K}(\EuScript{T}_{[I]}) = |\mathcal{S}| + |\mathbf{m}hcal{Z}'|$
(equivalently, if $\EuScript{S}_p \cup \EuScript{Z}'_p$ is a $K$-basis of $\EuScript{T}_{[I]}$), then $I$ is generic.
\end{lem}
\proof
The first statement results from Corollary \ref{cor:GenZPropCor2}. The second statement then follows from Proposition \ref{prop:genericCond}.
\qed
Henceforth we say that the distinguished\ ideal $I$ of shape $(n,\kappa,r,s)$ is
\textbf{shape-generic}
if and only if the following equation holds:
{\Small
\begin{equation} \label{eqn:tanSpDimInGenShapeCase}
\begin{array}{c}
\dim_{K}(\EuScript{T}_{[I]})\ =\ |\mathcal{S}|\ +\ |\mathbf{m}hcal{Z}'|\ =\ \lambda \cdot \tau + (n-\kappa) \cdot |\Delta'_1| + \kappa \cdot |\Delta'_{n-\kappa+1}|\ =
\\
\left( {n-1+r \choose r} - {\kappa - 1\ +\ r \choose r} \right) \cdot {\kappa - 1 + s \choose s}\ +\ (n-\kappa) \cdot \left( \sum_{d=0}^{s-r}{\kappa-1+d \choose d} \right)\ +\ (\kappa)\cdot 1
\end{array} .
\end{equation}
}
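For shape $(5,2,2,3)$, for example, the right-hand side of (\ref{eqn:tanSpDimInGenShapeCase}) evaluates to
\[
\left( {6 \choose 2} - {3 \choose 2} \right) {4 \choose 3} \ +\ 3\left( {1 \choose 0} + {2 \choose 1} \right) \ +\ 2 \ =\ 48 + 9 + 2 \ =\ 59,
\]
which is precisely the tangent space dimension computed for the generic distinguished\ ideal of Section \ref{ssec:(1,5,3,4)}.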
\subsection{Tangent space relations} \label{ssec:tanSpRelnCt}
Recall from Section \ref{sec:tanSpace} that a tangent vector at $[I]$ is given by an $R$-homomorphism $\fn{v}{I}{R/I}$, which is determined by the images
\[
v(g_j)\ =\ \sum_{i=1}^{\mu}a_{ij}t_i \in \operatorname{Span}_{K}(\mathbf{m}hcal{O}),\ 1 \leq j \leq \nu,
\]
where the $g_j$ are the elements of the $\mathbf{m}hcal{O}$-border basis of $I$. Given a syzygy $\sum_{j=1}^{\nu}f_j g_j$ $=$ $0$, we obtain the linear relations on the $a_{ij}$ as described in Section \ref{ssec:tanSpGen}:
\[
\begin{array}{rcl}
0\ =\ v \left( \sum_{j=1}^{\nu}f_j\, g_j \right)\ =\ \sum_{j=1}^{\nu}f_j\, v(g_j) & = & \sum_{j=1}^{\nu}f_j\cdot(\sum_{i=1}^{\mu}a_{ij} t_i)
\\
{} & \equiv & \sum_{i=1}^{\mu}\mathbf{m}hbf{b}^{(f_j)}_{i}t_{i} \text{ mod } I,
\end{array}
\]
where each of the coefficients $\mathbf{b}^{(f_j)}_{i}$ is a $K$-linear combination of the $a_{ij}$ that must vanish. Viewing the $a_{ij}$ as indeterminates, we represent a ``generic tangent vector'' as the $(\mu \nu)$-tuple $(a_{1,1},\, a_{2,1},\, \dots,\, a_{\mu \nu})$, as in (\ref{eqn:TanVec}). By computing the tangent space relations $\mathbf{b}_{i}^{(f_j)}$ for a basis of the linear syzygies, and then computing the dimension of the vector space that they span, we obtain $\dim_{K}(\EuScript{T}_{[I]})$ as in (\ref{eqn:tanSpDimComp}).
We assign a \textbf{degree} to each indeterminate $a_{ij}$ and each relation $\mathbf{m}hbf{b}_{i}^{(f_j)}$ as follows:
\[
\deg(a_{ij})\ =\ \deg(\mathbf{m}hbf{b}_{i}^{(f_j)})\ =\ \deg(t_{i}).
\]
Our goal here is to identify and count the $a_{i'j'}$ that can appear non-trivially in the relation $\mathbf{m}hbf{b}_{i}^{(f_j)}$ associated to a linear syzygy $(f_j)$. To this end, let $f_{j',k}$ denote one of the terms of $f_{j'}$, so $f_{j',k}$ is either a constant or a constant times a variable, and observe that the product of a variable $x_{\alpha}$ with a basis monomial $t_{i'}$ of degree $d$ is a monomial $m$ $=$ $x_{\alpha}\, t_{i'}$ of degree $d+1$ such that exactly one of the following holds:
\begin{equation} \label{eqn:CongruencePossibilities}
\begin{array}{rcl}
m \in \mathbf{m}hcal{O}_{d+1} & \Rightarrow & m \equiv m\ (\text{mod } I),
\\
m \in \operatorname{LM} & \Rightarrow & m \equiv N \in \operatorname{Span}_{K}(\operatorname{TM})\ (\text{mod } I),
\\
m \in \partial \mathbf{m}hcal{O} \setminus \operatorname{LM} & \Rightarrow & m \equiv 0\ (\text{mod } I).
\end{array}
\end{equation}
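For instance, with the distinguished\ ideal $I$ of Section \ref{ssec:(1,5,3,4)}, the three possibilities are illustrated by
\[
\begin{array}{l}
x_4 \cdot x_4 \,=\, x_4^2 \,\in\, \mathcal{O}_2,
\\
x_1 \cdot x_4 \,=\, x_1 x_4 \,\in\, \operatorname{LM},\ \ x_1 x_4 \,\equiv\, -x_4^3 - x_4^2 x_5 + x_5^3\ (\text{mod } I)\ \text{ by } g_4,
\\
x_1 \cdot x_4^2 \,=\, x_1 x_4^2 \,\in\, \partial \mathcal{O} \setminus \operatorname{LM},\ \ x_1 x_4^2 \,\equiv\, 0\ (\text{mod } I).
\end{array}
\]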
We see that the indeterminate $a_{i'j'}$ of degree $d$ can appear in the relation $\mathbf{m}hbf{b}_i^{(f_j)}$ if one of the following holds (here $0 \neq c \in K$):
\begin{itemize}
\item $f_{j',k} = c $, and $i' = i$, so that $\deg(\mathbf{m}hbf{b}_i^{(f_j)}) = d$;
\item $f_{j',k} = c x_{\beta}$ and $x_{\beta}\cdot t_{i'} = t_i$ $\in$ $\mathbf{m}hcal{O}$, so that $\deg(\mathbf{m}hbf{b}_i^{(f_j)}) = d+1$;
\item $f_{j',k} = c x_{\beta}$ and $x_{\beta} \cdot t_{i'} \in \operatorname{LM}$, so that $d = r-1$ and $\deg(\mathbf{m}hbf{b}_i^{(f_j)}) = s$.
\end{itemize}
\begin{rem} \label{rem:syzCoeffsRem}
Let $I$ be a distinguished\ ideal of shape $(n,\kappa,r,s)$ with $\mathcal{O}$-border basis $\{g_j \}$. For $1 \leq j' \leq \lambda$, a term $f_{j',k}$ in the $j'$-entry of a linear syzygy $(f_j)$ can never equal a non-zero constant $c$, since the resulting term $c \cdot b_{j'}$, with $b_{j'}$ $\in$ $\operatorname{LM}$, could never cancel out of the expression $\sum_{j=1}^{\nu}f_j\cdot g_j$.
\end{rem}
For each degree $0$ $\leq$ $d$ $\leq$ $s$, we let $A_d$ denote the set of indeterminates $a_{ij}$ that appear in at least one of the tangent space relations of degree $d$. We proceed to identify the possible members of these sets and find upper bounds on their cardinalities, for which it is convenient to adopt the following terminology: we say that $a_{ij}$ $\in$ $A_d$ \textbf{stays put} if $\deg(a_{ij})$ $=$ $d$, \textbf{moves up by 1} if $\deg(a_{ij})$ $=$ $d-1$, or \textbf{jumps up} if $\deg(a_{ij})$ $=$ $r-1$ and $d=s$; in addition, we take $|\mathbf{m}hcal{O}_{-1}|$ $=$ $0$. Summarizing the foregoing observations, we obtain the following
\begin{lem} \label{lem:ACountLem}
The $a_{ij}$ $\in$ $A_d$ that stay put have indices satisfying $b_j$ $\in$ $\partial \mathcal{O}\setminus \operatorname{LM}$ and $t_i$ $\in$ $\mathcal{O}_d$, for $0 \leq d \leq s$. The $a_{ij}$ $\in$ $A_d$ that move up by $1$ have indices satisfying $b_j$ $\in$ $\partial \mathcal{O}$ and $t_i$ $\in$ $\mathcal{O}_{d-1}$, for $0$ $\leq$ $d$ $\leq$ $r-1$ and $r+1$ $\leq$ $d$ $\leq$ $s$, and, for $d = r$, indices satisfying $b_j$ $\in$ $\partial \mathcal{O}$ and $t_i$ $\in$ $\{m \in \mathcal{O}_{r-1} \mid \operatorname{back-deg.}(m) = r-1 \}$. Finally, the $a_{ij}$ $\in$ $A_s$ that jump up have indices satisfying $b_j$ $\in$ $\partial \mathcal{O}$ and $t_i$ $\in$ $\mathcal{O}_{r-1}$. Whence,
\[
|A_d| \ \leq \ \left\{
\begin{array}{l}
(\nu - \lambda) \cdot {n-1+d \choose d} + \nu \cdot {n-1+d-1 \choose d-1}, \text{ if } 0 \leq d \leq r-1,
\\
(\nu-\lambda) \cdot {\kappa - 1 + r \choose r} + \nu \cdot {\kappa-1+r-1 \choose r-1 }, \text{ if } d = r,
\\
(\nu - \lambda) \cdot {\kappa - 1 + d \choose d} + \nu \cdot {\kappa - 1 + d-1 \choose d-1}, \text{ if } r+1 \leq d \leq s-1,
\\
(\nu - \lambda) \cdot {\kappa - 1 + s \choose s} + \nu \cdot {\kappa - 1 + s-1 \choose s-1} + \nu \cdot {n-1+ r-1 \choose r-1}, \text{ if } d = s.
\end{array} \right.
\]
\qed
\end{lem}
\subsection{Quasi-efficiency} \label{ssec:quasi-eff}
By Remark \ref{rem:GenButNotEff}, a shape-generic ideal $I$ of shape $(n,\kappa,r,s)$ need not be efficient. However, as we show in this section, it must have the following property that we call \textbf{quasi-efficiency}: Every tangent vector $\fn{v}{I}{R/I}$ in $\EuScript{T}_{[I]}$ is determined by the images $v(g_j)$ for $1\leq j \leq \lambda$. Clearly efficiency implies quasi-efficiency, since if $I$ is generated by $g_1, \dots, g_{\lambda}$, then any $R$-homomorphism of $I$ is determined by the images of these generators.
\begin{lem} \label{lem:degOfALowerBd} Let $v$ $\in$ $\EuScript{S}_p \cup \EuScript{Z}'_p$, and let $(a_{ij})$ denote the associated tuple. Then the minimal degree of a non-zero component $a_{ij}$ is as follows:
\begin{itemize}
\item If $v$ $=$ $v_{p,ij}$ $\in$ $\EuScript{S}_p$, then $v$ has a single non-zero component of degree $s$.
\item If $v$ $=$ $v_{p,\alpha,\delta}$ $\in$ $\EuScript{Z}'_p$, then the minimal degree of a non-zero component in $(a_{ij})$ is given by
\begin{itemize}
\item[$\ast$] $\deg(t_{i_{\alpha,\delta}}) = r-1 + \deg(m_{\alpha,\delta}) \leq s-1, \text{ if } x_{\alpha}$ is a front variable, and
\item[$\ast$] $r-1, \text{ if } x_{\alpha}$ is a back variable.
\end{itemize}
\end{itemize}
Moreover, when $x_{\alpha}$ is a back variable, the components of $v_{p, \alpha, \delta}$ can only be non-zero in degrees $r-1$, $s-1$, and $s$.
\end{lem}
\proof
The first bulleted statement is immediate from Section \ref{ssec:TVFam1}: $v_{p,ij}$ $\in$ $\EuScript{S}_p$ has a single non-zero entry with distinguished\ index pair $ij$, which implies that $t_i$ $\in$ $\operatorname{TM}$. In our context, this yields $\deg(a_{ij})$ $=$ $\deg(t_i)$ $=$ $s$.
We now prove the second bulleted statement assuming that $x_{\alpha}$ is a front variable. From the proof of Proposition \ref{prop:GeneralZProp}, we have that the $(i_{\alpha,\delta},j_{\alpha})$-component of $v$ $=$ $v_{p,\alpha,\delta}$ is non-zero --- indeed, it is the only non-zero component with index of the form $(i,j_{\alpha})$ --- and this component has degree $\deg(t_{i_{\alpha,\delta}})$. So it remains to show that a non-zero component $a_{i,j}$ with $j$ $\neq$ $j_{\alpha}$ cannot have a strictly smaller degree.
Recall that, by (\ref{eqn:ZTanVecHomo}) the tangent vector $\fn{v}{I}{R/I}$ is given by
\[
g_j\ \mapsto \frac{\partial g_j}{\partial x_{\alpha}} \cdot m_{\alpha,\delta}\ \equiv \ \sum_{i=1}^{\mu}a_{ij}t_i\ (\text{mod } I),\ \ 1 \leq j \leq \nu.
\]
Since $x_{\alpha}$ is a front variable and the trailing monomials only involve back variables, we have that $\frac{\partial g_j}{\partial x_{\alpha}}$ $=$ $\frac{\partial b_j}{\partial x_{\alpha}}$, so $\frac{\partial g_j}{\partial x_{\alpha}} \cdot m_{\alpha,\delta}$ is either equal to $0$ or to $\text{(const)}\cdot M$, where the monomial $M$ $=$ $\frac{b_{j}}{x_{\alpha}} \cdot m_{\alpha,\delta}$ has degree $\deg(b_j) - 1 + \deg(m_{\alpha,\delta})$ $\geq$ $r-1+\deg(m_{\alpha,\delta})$. Modulo $I$, we have that $M$ is congruent to one of the following:
\begin{equation} \label{eqn:3Possibilities}
\begin{array}{l}
M, \text{ if } M \in \mathcal{O},
\\
N \in \operatorname{Span}_{K}(\operatorname{TM})\ \text{ (of degree $s$), if } M \in \operatorname{LM},\text{ or }
\\
0, \text{ if } M \notin \mathcal{O} \text{ and } M \notin \operatorname{LM}.
\end{array}
\end{equation}
In each case, we see that no non-zero component of degree $<$ $r-1+ \deg(m_{\alpha,\delta})$ occurs.
Now let $x_{\alpha}$ be a back variable (which implies that $m_{\alpha,\delta}$ $=$ $m_{\alpha,1}$ $=$ $1$), and consider the components coming from $\frac{\partial g_j}{\partial x_{\alpha}} \cdot 1$. If $b_j$ $\in$ $\operatorname{LM}$, then non-zero components of degree $s-1$ can arise from $-\frac{\partial N_j}{\partial x_{\alpha}}$, where $g_j\ = \ b_j - N_j$. Otherwise (in all cases), non-zero components can result from $\frac{\partial b_j}{\partial x_{\alpha}}$ $=$ $\text{(const)}\cdot M$ (if non-zero). The three possibilities (\ref{eqn:3Possibilities}) again present themselves; however, because $x_{\alpha}$ is a back variable, we can say more. Consider the boundary monomials $b_j$ such that $\frac{b_j}{x_{\alpha}}$ $=$ $M$ $\in$ $\mathcal{O}$. A moment's reflection shows that this is possible only in one of the following two cases, both of which occur: $b_j$ has degree $r$ and is divisible by $x_{\alpha}$, or $b_j$ has degree $s+1$ and involves only back variables (including $x_{\alpha}$); indeed, for all other degrees $r+1$ $\leq$ $d$ $\leq$ $s$, $b_j$ involves a front variable, hence so does $M$ (of degree $\geq r$), whence $M$ $\notin$ $\mathcal{O}$. It follows that the only possible degrees for non-zero components are $r-1$, $s-1$, and $s$, as the final statement asserts, and the minimum $r-1$ is attained. This completes the proof of the lemma.
\qed
\begin{cor} \label{cor:AsAre0InDegsLEQN-2}
Let $I=I_p$ be shape-generic, and let $v$ $=$ $(a_{ij})$ $\in$ $\EuScript{T}_{[I]}$. Then $\deg(a_{ij})$ $\leq$ $r-2$ $\Rightarrow$ $a_{ij}$ $=$ $0$.
\end{cor}
\proof By definition, if $I$ $=$ $I_p$ is shape-generic, then the set of vectors $\EuScript{S}_p \cup \EuScript{Z}'_p$ is a basis of $\EuScript{T}_{[I_p]}$. Lemma \ref{lem:degOfALowerBd} implies that the desired conclusion holds for this basis; whence, it holds for all $v \in \EuScript{T}_{[I]}$.
\qed
\begin{rem} \label{rem:SimToIarrobEmsalemResult}
The preceding Corollary extends a consequence of \cite[Lemma 2.31, p.\ 162]{Iarrob-Emsalem} to our case.
\end{rem}
Recall from Corollary \ref{cor:GenZPropCor1A} that the $(i_{\alpha,\delta},j_{\alpha})$-component of $v_{p,\alpha,\delta}$ is non-zero, and for all $\beta$ $\leq$ $\alpha$ and $(\beta,\delta')$ $\neq$ $(\alpha,\delta)$, the $(i_{\alpha,\delta},j_{\alpha})$-component of $v_{p,\beta,\delta'}$ $=$ $0$. In this sense, the $(i_{\alpha,\delta},j_{\alpha})$-component acts as a ``characteristic function'' for $v_{p,\alpha,\delta}$.
When $x_{\alpha}$ is a front variable, $b_{j_{\alpha}}$ $=$ $x_{\alpha}x_n^{r-1}$ $\in$ $\operatorname{LM}$, but when $x_{\alpha}$ is a back variable, $b_{j_{\alpha}}$ $=$ $x_{\alpha}x_n^{s}$ $\in$ $\partial \operatorname{TM}$. To prove quasi-efficiency, we use the following modified set of ``characteristic function'' components of index $(\hat{i}_{\alpha,\delta},\hat{j}_{\alpha})$ such that $b_{\hat{j}_{\alpha}}$ $\in$ $\operatorname{LM}$ for all $1$ $\leq$ $\alpha$ $\leq$ $n$:
\[
\begin{array}{l}
b_{\hat{j}_{\alpha}} = b_{j_{\alpha}},\ t_{\hat{i}_{\alpha,\delta}} = t_{i_{\alpha,\delta}} = x_n^{r-1}\cdot m_{\alpha,\delta}, \text{ if } x_{\alpha} \text{ is a front variable, and}
\\
b_{\hat{j}_{\alpha}} = x_{\alpha}x_1^{r-1},\ t_{\hat{i}_{\alpha,\delta}} = t_{\hat{i}_{\alpha,1}} = x_1^{r-1}\cdot 1, \text{ if } x_{\alpha} \text{ is a back variable}.
\end{array}
\]
\begin{lem} \label{lem:quasiEffPrepLem}
Let $I = I_p$ be a distinguished\ ideal of shape $(n,\kappa,r,s)$. Then for all variables $x_{\alpha}$ and all $1$ $\leq$ $\delta$ $\leq$ $|\Delta'_{\alpha}|$, we have the following:
\begin{itemize}
\item[i.] The $(\hat{i}_{\alpha,\delta},\hat{j}_{\alpha})$-component of $v_{p,\alpha,\delta}$ is non-zero.
\item[ii.] For all variables $x_{\beta}$ and all $1$ $\leq$ $\delta'$ $\leq$ $|\Delta'_{\beta}|$, if $\beta$ $\leq$ $\alpha$ and $(\beta,\delta')$ $\neq$ $(\alpha,\delta)$, then the $(\hat{i}_{\alpha,\delta},\hat{j}_{\alpha})$-component of $v_{p,\beta,\delta'}$ $=$ $0$.
\end{itemize}
\end{lem}
\proof
The truth of the first statement for the front variables $x_{\alpha}$ was shown in the proof of Proposition \ref{prop:GeneralZProp}, so suppose that $x_{\alpha}$ is a back variable. By (\ref{eqn:ZTanVecHomo}), the $(\hat{i}_{\alpha,1},\hat{j}_{\alpha})$-component of $v_{p,\alpha,1}$ is the coefficient of $t_{\hat{i}_{\alpha,1}}$ $=$ $x_1^{r-1}$ in $\frac{\partial g_{\hat{j}_{\alpha}}}{\partial x_{\alpha}} \cdot 1$ modulo $I$. But $g_{\hat{j}_{\alpha}}$ $=$ $x_{\alpha} x_1^{r-1} - N_{\hat{j}_{\alpha}}$, where $N_{\hat{j}_{\alpha}}$ $\in$ $\operatorname{Span}_{K}(\operatorname{TM})$, so it is clear that the desired coefficient is $1$.
To prove the second statement, we must show, given $\beta$ $\leq$ $\alpha$ and $(\beta,\delta')$ $\neq$ $(\alpha,\delta)$, that the coefficient of $t_{\hat{i}_{\alpha,\delta}}$ in $\frac{\partial g_{\hat{j}_{\alpha}}}{\partial x_{\beta}}\cdot m_{\beta,\delta'}$ modulo $I$ is $0$. We first consider the case in which $x_{\alpha}$ is a back variable, so that $\Delta'_{\alpha}$ $=$ $\{ 1 \}$, and let $(\beta,\delta')$ $\neq$ $(\alpha,1)$, which implies that $\beta$ $<$ $\alpha$. Then
\[
\frac{\partial g_{\hat{j}_{\alpha}}}{\partial x_{\beta}} \cdot m_{\beta,\delta'}\ =\ \frac{\partial (x_{\alpha} x_1^{r-1})}{\partial x_{\beta}}\cdot m_{\beta,\delta'} - \frac{\partial N_{\hat{j}_{\alpha}}}{\partial x_{\beta}} \cdot m_{\beta,\delta'}.
\]
The first term on the RHS of the preceding equation is $0$ provided $x_{\beta}$ $\neq$ $x_1$, and if $x_{\beta}$ $=$ $x_1$, it equals $(r-1) x_{\alpha} x_1^{r-2} \cdot m_{\beta,\delta'}$ $=$ $(r-1)\cdot m$, where $m$ is a monomial of degree $\geq r-1$. There are three possibilities for the value of $m$ modulo $I$, as in (\ref{eqn:CongruencePossibilities}), and a moment's reflection shows that none of these possibilities can include a non-zero multiple of $t_{\hat{i}_{\alpha,1}}$ $=$ $x_1^{r-1}$. Similarly, the second term on the RHS consists of a linear combination of monomials of degree $\geq s-1$, and we again conclude that, modulo $I$, no non-zero multiple of $x_1^{r-1}$ can appear. It follows that statement ii.\ holds when $x_{\alpha}$ is a back variable.
Now consider the case in which $x_{\alpha}$ is a front variable. One can then obtain the desired conclusion immediately from Corollary \ref{cor:GenZPropCor1A} or from the following argument:
Choose $\beta$ $\leq$ $\alpha$ (so $x_{\beta}$ is a front variable) and $(\beta,\delta')$ $\neq$ $(\alpha,\delta)$, and compute the coefficient of $t_{\hat{i}_{\alpha,\delta}}$ $=$ $x_n^{r-1}\cdot m_{\alpha,\delta}$ in
\[
\begin{array}{rcl}
\frac{\partial g_{\hat{j}_{\alpha}}}{\partial x_{\beta}} \cdot m_{\beta,\delta'} & = & \left( \frac{\partial b_{\hat{j}_{\alpha}}}{\partial x_{\beta}} - \frac{\partial N_{\hat{j}_{\alpha}}}{\partial x_{\beta}} \right) \cdot m_{\beta,\delta'}
\\
{} & = & \left( \frac{\partial\, x_{\alpha}\, x_{n}^{r-1}}{\partial x_{\beta}} - \frac{\partial N_{\hat{j}_{\alpha}}}{\partial x_{\beta}}\right) \cdot m_{\beta,\delta'} \text{ mod } I.
\end{array}
\]
If $\beta$ $<$ $\alpha$, the last expression is clearly $0$, and if $\beta$ $=$ $\alpha$, it equals $t_{\hat{i}_{\alpha,\delta'}}$. This can yield a non-zero coefficient for $t_{\hat{i}_{\alpha,\delta}}$ only if $\delta$ $=$ $\delta'$, which is ruled out by the hypothesis that $(\beta,\delta')$ $\neq$ $(\alpha,\delta)$. This completes the proof.
\qed
We are now ready to prove that a shape-generic distinguished\ ideal $I$ is quasi-efficient.
\begin{prop} \label{prop:gsDetermineTanVec}
Let $I = I_p$ be a shape-generic distinguished\ ideal of shape $(n,\kappa,r,s)$, and let $\fn{v}{I}{R/I}$ be a tangent vector at $[I]$ with associated tuple $(a_{ij})$. Then $v$ is determined by the images $v(g_j)$ for $1\leq j \leq \lambda$.
\end{prop}
\proof
We begin by writing $v$ as a (unique) linear combination of the elements of the basis $\EuScript{S}_p$ $\cup$ $\EuScript{Z'}_p$:
\begin{equation} \label{eqn:basisExp}
v\ =\ \sum_{v_{p,ij} \in \EuScript{S}_p} d_{ij}\,v_{p,ij} + \sum_{v_{p,\alpha,\delta} \in \EuScript{Z'}_p} d_{\alpha,\delta}\, v_{p,\alpha,\delta},\ \ \ d_{ij},\ d_{\alpha,\delta} \in K .
\end{equation}
It suffices to show that the coefficients $d_{ij}$ and $d_{\alpha,\delta}$ are completely determined by $v(g_1)$, \dots, $v(g_{\lambda})$. We begin by equating the $(\hat{i}_{n,1},\hat{j}_n)$-components on both sides of the equation. By Lemma \ref{lem:quasiEffPrepLem}, we know that the $(\hat{i}_{n,1},\hat{j}_n)$-component of $v_{p,\beta,\delta'}$ is $0$ for all $\beta$ $\leq$ $n$ and $(\beta,\delta')$ $\neq$ $(n,1)$, that is, for all pairs $(\beta,\delta')$ $\neq$ $(n,1)$. Furthermore, the $(\hat{i}_{n,1},\hat{j}_n)$-components of the $v_{p,ij}$ are all $0$ since (by Lemma \ref{lem:degOfALowerBd}) the only non-zero component of $v_{p,ij}$ has degree $s$, and the degree of $t_{\hat{i}_{n,1}}$ $=$ $x_1^{r-1}$ is $r-1$ $<$ $s$. From this it follows that the coefficient $d_{n,1}$ is determined by the $(\hat{i}_{n,1},\hat{j}_n)$-component of $v$, which is the coefficient of $t_{\hat{i}_{n,1}}$ in $v(g_{\hat{j}_n})$. This shows that $d_{n,1}$ is determined by $v(g_1)$, \dots, $v(g_{\lambda})$.
Proceeding by descending induction on $\alpha$, we assume that for some $1$ $\leq$ $\alpha$ $<$ $n$, all of the coefficients $d_{\beta,\delta'}$ for $\alpha+1$ $\leq$ $\beta$ $\leq$ $n$ are completely determined by $v(g_1)$, \dots, $v(g_{\lambda})$ (and have been computed).
We then equate the $(\hat{i}_{\alpha,\delta},\hat{j}_{\alpha})$-components on both sides of equation (\ref{eqn:basisExp}).
Lemma \ref{lem:quasiEffPrepLem} implies that for all $\beta$ $\leq$ $\alpha$ and $(\beta,\delta')$ $\neq$ $(\alpha,\delta)$, the $(\hat{i}_{\alpha,\delta},\hat{j}_{\alpha})$-component of $v_{p,\beta,\delta'}$ is $0$, and the same is again true for all the $v_{p,ij}$, since none of the monomials $t_{\hat{i}_{\alpha,\delta}}$ can have degree $s$. It follows that the value of $d_{\alpha,\delta}$ is determined by the coefficient of $t_{\hat{i}_{\alpha,\delta}}$ in $v(g_{\hat{j}_{\alpha}})$ and the previously-computed $d_{\beta,\delta'}$, so we conclude that for all $1$ $\leq$ $\delta$ $\leq$ $|\Delta'_{\alpha}|$, the coefficients $d_{\alpha,\delta}$ are determined by $v(g_1)$, \dots, $v(g_{\lambda})$. It follows by induction that this is so for all the coefficients $d_{\alpha,\delta}$, $1 \leq \alpha \leq n$, $1 \leq \delta \leq |\Delta'_{\alpha}|$.
It is now clear that the remaining coefficients can be computed by equating the distinguished\ $(i,j)$-components on both sides of equation (\ref{eqn:basisExp}), so the value of each $d_{ij}$ can be computed from the coefficient of $t_{i,j}$ in $v(g_j)$ and the previously-computed values of the $d_{\alpha,\delta}$, hence is again determined by the values $v(g_1)$, \dots, $v(g_{\lambda})$, and we are done.
\qed
\section{A criterion for plausible genericity} \label{sec:PlausArgs}
To conclude the paper, we present a numerical criterion for identifying shapes $(n,\kappa,r,s)$ such that sufficiently general distinguished\ ideals associated to those shapes are likely to be shape-generic; we will call such shapes \textbf{plausible}.
\subsection{The criterion} \label{ssec:PlausCrit}
Roughly speaking, the criterion is this: $(n,\kappa,r,s)$ is deemed plausible if the following two conditions hold:
\begin{description}
\item[1] there are enough tangent space relations in each degree to allow the ranks of these sets of relations (if sufficiently general) to attain their maximum possible values, and
\item[2] sufficiently general distinguished\ ideals $I$ of the given shape are likely to be $\vartheta$-efficient, and therefore likely to be quasi-efficient.
\end{description}
We make these conditions computably precise and briefly argue for their
reasonableness as follows:
\begin{description}
\item[1] Examples suggest that for shape-generic distinguished\ ideals $I$ of shape $(n,\kappa,r,s)$, the tangent space relations in each degree will attain (or nearly attain) their maximum possible ranks. Of course, the rank of the tangent space relations in degree $d$ is bounded above by $|A_d|$, the number of indeterminates $a_{i,j}$ that appear in the relations of degree $d$, so we make condition 1 precise by requiring that the number of tangent space relations in each degree $0 \leq d \leq s$ is $\geq$ the upper bound on $|A_d|$ given in Lemma \ref{lem:ACountLem}. Hence, if condition 1 holds, there are enough tangent space relations to render $I$ shape-generic, assuming that these relations are sufficiently independent.
\item[2] Since any shape-generic distinguished\ ideal $I$ must be quasi-efficient (Proposition \ref{prop:gsDetermineTanVec}), and $\vartheta$-efficiency is an easy-to-check condition that implies quasi-efficiency, we require condition 2 in addition to condition 1. In light of Remark \ref{rem:Cond1ForEffIsOK}, we know that distinguished\ ideals of shape $(n,\kappa,r,s)$ will be $\vartheta$-efficient if and only if the map $\vartheta$ {\rm (\ref{eqn:mapSigma})} is surjective, which is likely to be the case for general $I$ provided that
\[
\dim_{K}(\operatorname{domain}(\vartheta))\ \geq\ \dim_{K}(\operatorname{codomain}(\vartheta)).
\]
This inequality is therefore our precise statement of condition 2.
\end{description}
\begin{rem} \label{rem:PlausNotPerfect}
As noted in Remark \ref{rem:genButNotEff}, sufficiently general distinguished\ ideals $I$ of shape $(6,3,2,3)$ are generic and efficient, but not $\vartheta$-efficient. Indeed, as shown in the associated \emph{Mathematica} notebook, the domain and co-domain of $\vartheta$ have dimensions 90 and 91, respectively, so condition 2 fails in this case, implying that $(6,3,2,3)$ is not a plausible (as defined) shape. This shows that the plausibility criterion is a blunt instrument, incapable of detecting all shapes that support generic distinguished\ ideals.
\end{rem}
\subsection{Implementation and examples} \label{ssec:ImpAndExamplesOfPlausCrit}
Given the preparations in Section \ref{sec:idealsOfShapeNkRs}, the plausibility criterion is straightforward to program; an implementation titled \textbf{genericityIsPlausible} is provided in the notebook \emph{utility functions.nb} mentioned at the start of Section \ref{sec:examples}. Equation (\ref{eqn:PlausibleShapes}) in the introduction lists several plausible shapes (see the notebook \emph{plausible shapes.nb} for the details).
\subsection{Final observations and a conjecture} \label{ssec:PlausAsympBehavior}
We first explore the second condition of the plausibility criterion more closely. By (\ref{eqn:QMonDef}) and (\ref{eqn:mapSigma}), we have that
\[
\begin{array}{c}
\dim(\operatorname{domain}(\vartheta)) = n\cdot \lambda \text{ and }
\\
\dim(\operatorname{codomain}(\vartheta)) = |\partial \operatorname{LM}\, \cup\, \partial \operatorname{TM}| = |\operatorname{T}_{r+1}\, \cup\, \partial \mathcal{O}_{s+1}|;
\end{array}
\]
therefore, (\ref{eqn:lambdaAndTau}), (\ref{eqn:listOfBdryMons}), and (\ref{eqn:tarmonsRplus1}) yield that condition 2 can be written as follows (recall that we are assuming $2 \leq r < s$):
\[
\begin{array}{c}
n\cdot \left( {n-1+r \choose r} - {\kappa - 1 + r \choose r} \right)\ \geq {}
\\
{n + r \choose r+1} - {\kappa + r \choose r+1} + (n-\kappa)\cdot {\kappa - 1 + s \choose s} + { \kappa + s \choose s+1}.
\end{array}
\]
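As an illustration, condition 2 is easy to evaluate mechanically. The following Python sketch (ours; the paper's own implementation is the \emph{Mathematica} function \textbf{genericityIsPlausible} discussed below) simply checks the displayed inequality for a given shape $(n,\kappa,r,s)$ and makes no claim beyond that.
\begin{verbatim}
import math

def condition_2_holds(n, kappa, r, s):
    """Check the displayed inequality for condition 2 (illustrative sketch only)."""
    c = math.comb
    lhs = n * (c(n - 1 + r, r) - c(kappa - 1 + r, r))
    rhs = (c(n + r, r + 1) - c(kappa + r, r + 1)
           + (n - kappa) * c(kappa - 1 + s, s)
           + c(kappa + s, s + 1))
    return lhs >= rhs
\end{verbatim}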
We note the following regarding the asymptotic behavior of this inequality when various parameters are held constant:
\begin{description}
\item[Hold $\kappa,r,s$ constant] Since the LHS of the inequality has dominant term $\frac{n^{r+1}}{r!}$ and the RHS has dominant term $\frac{n^{r+1}}{(r+1)!}$, we see that the inequality holds for all $n>>0$ (this is illustrated numerically after this list).
\item[Hold $n,\kappa,r$ constant] As $s$ increases, we see that the LHS of the inequality is constant and the RHS is increasing, so the inequality will fail for all $s >> 0$.
\item[Hold $n,r,s$ constant] As $\kappa$ increases (bounded above by $n$, of course), the LHS decreases to $0$ while the RHS is bounded below by ${\kappa + s \choose s+1 }$, so there exists $\kappa_0$ $\leq$ $n$ such that the inequality fails for all $\kappa$ $\geq$ $\kappa_0$.
\end{description}
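Reusing the sketch of condition 2 given above, these asymptotic behaviours can be observed numerically; the following snippet (again ours and purely illustrative) sweeps $n$ for a fixed $(\kappa,r,s)$ and $s$ for a fixed $(n,\kappa,r)$.
\begin{verbatim}
# Holding kappa, r, s fixed, condition 2 eventually holds as n grows;
# holding n, kappa, r fixed, it eventually fails once s is large enough.
print([condition_2_holds(n, 2, 2, 3) for n in range(4, 12)])
print([condition_2_holds(10, 2, 2, s) for s in range(3, 40, 6)])
\end{verbatim}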
Next we look more closely at the first condition of the plausibility criterion. If we hold $\kappa$, $r$, and $s$ constant and let $n$ vary, Lemma \ref{lem:ACountLem} shows that $A_s$ is the most rapidly growing of the sets $A_d$, with dominant term
\[
|A_s|\ \approx\ \nu \cdot \frac{n^{r-1}}{(r-1)!}\ \approx\ \frac{n^r}{r!} \cdot \frac{n^{r-1}}{(r-1)!}\ =\ \frac{n^{2r-1}}{r!\cdot (r-1)!}.
\]
On the other hand, the number of tangent space relations of degree $s$ is
\[
\psi \cdot \tau\ \approx\ \tau \cdot \frac{r}{(r+1)!}\cdot n^{r+1}, \text{ where } \tau = {\kappa-1+s \choose s } \text{ is independent of }n.
\]
In case $r = 2$, the number of tangent space relations of degree $s$ and $|A_s|$ both grow at the same rate $O(n^{3})$, and the dominant term for the former has the larger coefficient $\tau \cdot \frac{r}{(r+1)!}$ (recall we are assuming $s>r$ and $\kappa$ $>$ $1$, so $\tau$ $\geq$ $s+1$ $\geq$ $4$). From this it follows that the first condition will be satisfied in degree $s$ (the degree for which satisfaction of condition $1$ is most difficult) for all $n >> 0$. Hence it is likely that the shape $(n,\kappa,2,s)$ will satisfy condition 1 of the plausibility criterion (as well as condition 2 as seen above) for all $n >> 0$. This is what leads us to offer
\noindent \textbf{Conjecture \ref{conj:FirstOfTwo}:} Given $r=2$, $s>2$, and $\kappa \geq 2$, the shape $(n,\kappa,2,s)$ is plausible for all $n>>0$.
The analogous conjecture for $r>2$ cannot hold: Indeed, if $r$ $>$ $2$, $s > r$, and $\kappa \geq 2$ are fixed, the growth rate $O(n^{2r-1})$ of $|A_s|$ exceeds the growth rate $O(n^{r+1})$ of the number of degree-$s$ tangent space relations, so, as $n$ increases, eventually $|A_s|$ will greatly exceed the number of tangent space relations of degree $s$, thereby falsifying condition 1. Moreover, it appears that for certain choices of $\kappa$, $r$, and $s$, none of the shapes $(n,\kappa,r,s)$ will be plausible; for example, $(n,3,10,11)$ is not plausible for $4$ $\leq$ $n$ $\leq$ $50000$ (at least). In concluding this paper, we invite the reader to seek further conjectures (or theorems!) regarding families of plausible shapes.
\end{document}
\begin{document}
\title{Consistency and fluctuations for stochastic gradient Langevin dynamics}
\author{\name Yee Whye Teh \email [email protected]\\
\addr Department of Statistics\\
University of Oxford
\mathcal{A}ND
\name Alexandre H. Thiery \email [email protected]\\
\addr Department of Statistics and Applied Probability\\
National University of Singapore
\mathcal{A}ND
\name Sebastian J. Vollmer \email [email protected]\\
\addr Department of Statistics\\
University of Oxford
}
\editor{}
\maketitle
\begin{abstract}
Applying standard Markov chain Monte Carlo (MCMC) algorithms to large data sets is computationally expensive.
Both the calculation of the acceptance probability and the creation
of informed proposals usually require an iteration through the whole
data set. The recently proposed stochastic gradient Langevin dynamics (SGLD) method circumvents this problem by generating proposals which are only based on a subset of the data, by skipping the accept-reject step, and by using a decreasing step-size sequence $(\delta_m)_{m \geq 0}$.
We provide in this article a rigorous mathematical framework for analysing this algorithm. We prove that, under verifiable assumptions, the algorithm is consistent, satisfies a central limit theorem (CLT), and its asymptotic bias-variance decomposition can be characterized by an explicit functional of the step-size sequence $(\delta_m)_{m \geq 0}$. We leverage this analysis to give practical recommendations for the notoriously difficult tuning of this algorithm: it is asymptotically optimal to use a step-size sequence of the type $\delta_m \asymp m^{-1/3}$, leading to an algorithm whose mean squared error (MSE) decreases at rate $\mathcal{O}(m^{-1/3})$.
\end{abstract}
\begin{keywords}
Markov Chain Monte Carlo, Langevin Dynamics, Big Data
\end{keywords}
\tableofcontents
\section{Introduction}
We are entering the age of Big Data, where significant advances across
a range of scientific, engineering and societal pursuits hinge upon the
gain in understanding derived from the analyses of large scale data sets.
Examples include recent advances in genome-wide association studies
\citep{hirschhorn2005genome,mccarthy2008genome,wang2005genome}, speech
recognition \citep{hinton2012deep}, object recognition
\citep{krizhevsky2012imagenet}, and self-driving cars
\citep{thrun2010toward}. As the quantity of data available has been
outpacing the computational resources available in recent years, there is
an increasing demand for new scalable learning methods, for example methods
based on stochastic optimization
\citep{robbins1951stochastic,srebro2010stochastic,sato2001online,hoffman2010online},
distributed computational architectures
\citep{ahmed2012scalable,neiswanger2013asymptotically,minsker2014robust},
greedy optimization
\citep{harchaoui2014frank},
as well as the development of specialized computing systems supporting
large scale machine learning applications
\citep{gonzalez2014emerging}.
Recently, there has also been increasing interest in methods for Bayesian
inference scalable to Big Data settings. Rather than attempting a single
point estimate of parameters typical in optimization-based or maximum
likelihood settings, Bayesian methods attempt to obtain characterizations
of the full posterior distribution over the unknown parameters and latent
variables in the model, hence providing better characterizations of the
uncertainties inherent in the learning process, as well as providing
protection against overfitting. Scalable Bayesian methods proposed in the
recent literature include stochastic variational inference
\citep{sato2001online,hoffman2010online}, which applies stochastic
approximation techniques to optimizing a variational approximation to the
posterior, parallelized Monte Carlo
\citep{neiswanger2013asymptotically,minsker2014robust}, which distributes
the computations needed for Monte Carlo sampling across a large compute
cluster, as well as subsampling-based Monte Carlo
\citep{welling2011bayesian,ahn2012bayesian,korattikara2014austerity}, which
attempt to reduce the computational complexity of Markov chain Monte Carlo (MCMC)
methods by applying updates to small subsets of data.
In this paper we study the asymptotic properties of the stochastic gradient
Langevin dynamics (SGLD) algorithm first proposed by
\citet{welling2011bayesian}. SGLD is a subsampling-based MCMC algorithm
based on combining ideas from stochastic optimization, specifically using
small subsets of data to estimate gradients, with Langevin dynamics, a MCMC
method making use of gradient information to produce better parameter
updates. \citet{welling2011bayesian} demonstrated that SGLD works well on a
variety of models and this has since been extended by
\citet{ahn2012bayesian,ahn2014distribuetd} and
\citet{patterson2013stochastic}.
The stochastic gradients in SGLD introduce approximations into the Markov
chain, whose effect has to be controlled by using a slowly decreasing
sequence of step sizes. \citet{welling2011bayesian} provided an intuitive
argument that as the step-size decreases the variations introduced by the
stochastic gradients gets dominated by the natural stochasticity of
Langevin dynamics, the result being that the stochastic gradient
approximation should wash out asymptotically and that the Markov chain
should converge to the true posterior distribution.
In this paper, we make this intuitive argument more precise by providing
conditions under which SGLD converges to the targeted posterior
distribution; we describe a number of characterizations of this
convergence. Specifically, we show that estimators derived from SGLD are
consistent (Theorem \ref{thm.LLN}) and satisfy a central limit theorem (CLT) (Theorem \ref{thm.clt}); the bias-variance trade-off of the algorithm is discussed in detail in Section \ref{sec.bias.variance}. In Section \ref{sec.diff.lim} we prove that, when observed on the right (inhomogeneous) time scale, the sample path of the
algorithm converges to a Langevin diffusion (Theorem \ref{thm.diff.lim}).
Our analysis reveals that for a sequence of step-sizes with algebraic decay $\delta_m \asymp m^{-\alpha}$ the optimal choice, when measured in terms of rate of decay of the mean squared error (MSE), is given by $\alpha_\star = 1/3$; the choice $\delta_m \asymp m^{-\alpha_\star}$ leads to an algorithm that converges at rate $\mathcal{O}(m^{-1/3})$. This rate of convergence is worse than the standard Monte-Carlo $m^{-1/2}$-rate of convergence. This is not due to the stochastic gradients used in SGLD, but rather to the decreasing step-sizes.
These results are asymptotic in the sense that they characterise the behaviour of the algorithm as the number of steps approaches infinity. Therefore they do not necessarily translate into insight into the behaviour for finite computational budgets, which is the regime in which SGLD might provide computational gains over alternatives. The mathematical framework described in this article shows that SGLD is a sound algorithm, an important result that has been missing from the literature. \\
In the remainder of this article, the notation $\ensuremath{\operatorname{N}}(\mu, \sigma^2)$ denotes a Gaussian distribution with mean $\mu$ and variance $\sigma^2$.
For two positive functions $f,g: \mathbb{R} \to [0,\infty)$, one writes $f \lesssim g$ to indicate that there exists a positive constant $C > 0$ such that $f(\theta) \leq C \, g(\theta)$ for every $\theta$; we write $f \asymp g$ if $f \lesssim g \lesssim f$.
For a probability measure $\pi$ on a measurable space $\mathcal{X}$, a measurable function $\varphi: \mathcal{X} \to \mathbb{R}$ and a measurable set $A \subset \mathcal{X}$, we define $\pi(\varphi;A)=\int_{\theta \in A} \varphi(\theta) \, \pi(d \theta)$ and $\pi(\varphi) = \pi(\varphi; \mathcal{X})$. Finally, densities of probability distributions on $\mathbb{R}^d$ are implicitly assumed to be defined with respect to the usual $d$-dimensional Lebesgue measure.
\section{Stochastic Gradient Langevin Dynamics}
\label{sec:sgld}
Many MCMC algorithms evolving in a continuous state space, say $\mathbb{R}^d$, can be realised as discretizations of a continuous time Markov process $(\theta_t)_{t\ge 0}$. An example of such a continuous time process, which is central to SGLD as well as many other algorithms, is the Langevin diffusion, which is given by the stochastic differential equation
\begin{equation}
\label{eq:overdampedLangevin}
d \theta_t = \frac12 \, \nabla \log \pi(\theta_t) \, dt + dW_t,
\end{equation}
where $\pi:\mathbb{R}^d \to (0,\infty)$ is a probability density and $(W_t)_{t\ge 0}$ is a standard Brownian motion in $\mathbb{R}^d$.
The linear operator $\mathcal{A}$ denotes the generator of the Langevin diffusion \eqref{eq:overdampedLangevin}: for a twice continuously differentiable test function $\varphi:\mathbb{R}^d \to \mathbb{R}$,
\begin{equation} \label{eq.generator}
\mathcal{A} \varphi(\theta) = \frac{1}{2} \angleBK{\nabla \log \pi(\theta), \nabla \varphi(\theta)} + \frac{1}{2} \Delta \varphi(\theta),
\end{equation}
where $\Delta \varphi \eqdef \sum_{i=1}^d \nabla^2_{i} \varphi$ denotes the standard Laplacian operator.
The motivation behind the choice of Langevin diffusions is that, under certain conditions, they are ergodic with respect to the distribution $\pi$; for example, \citep{roberts1996exponential,stramer1999langevin1, stramer1999langevin2,mattingly2002ergodicity} describe drift conditions of the type described in Section \ref{sec.stability} that ensure that the total variation distance from stationarity of the law at time $t$ of the Langevin diffusion \eqref{eq:overdampedLangevin} decreases to zero exponentially quickly as $t \to \infty$.
Given a time-step $\delta > 0$ and a current position $\theta_t$, it is often straightforward to simulate a random variable $\theta_\star$ that is approximately distributed as the law of $\theta_{t + \delta}$ given $\theta_t$. For stochastic differential equations, the Euler-Maruyama scheme \citep{maruyama1955continuous} might be the simplest approach for approximating the law of $\theta_{t + \delta}$. For a Langevin diffusion this reads
\begin{align} \label{eq:euler.maruyama}
\theta_\star &= \theta_t + \frac{1}{2}\delta\, \nabla \log \pi(\theta_t) + \delta^{1/2} \, \eta
\end{align}
for a standard $d$-dimensional centred Gaussian random variable $\eta$.
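A minimal Python sketch of one such Euler-Maruyama step (ours, purely illustrative; \texttt{grad\_log\_pi} is an assumed user-supplied function returning $\nabla \log \pi(\theta)$) could read as follows.
\begin{verbatim}
import numpy as np

def euler_maruyama_step(theta, grad_log_pi, delta, rng):
    # One Euler-Maruyama step for the Langevin diffusion
    # d theta_t = 0.5 * grad log pi(theta_t) dt + dW_t.
    eta = rng.standard_normal(theta.shape)   # eta ~ N(0, I_d)
    return theta + 0.5 * delta * grad_log_pi(theta) + np.sqrt(delta) * eta
\end{verbatim}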
To fully correct the discretization error, one can adopt a Metropolis-Hastings accept-reject mechanism. The resulting algorithm is usually referred to as the Metropolis-Adjusted-Langevin algorithm (MALA) \citep{roberts1996exponential}. Other discretizations can be used as proposals. For example, the random walk Metropolis-Hastings algorithm uses the discretization of a standard Brownian motion as the proposal, while the Hamiltonian Monte Carlo (HMC) algorithm \citep{duane1987hybrid} is based on discretizations of a Hamiltonian system of differential equations. See the excellent review of \citet{neal2010mcmc} for further information.
In this paper, we shall consider the situation where the target $\pi$ is the density of the posterior distribution under a Bayesian model where there are $N\gg 1$ i.i.d.\ observations, the so called Big Data regime,
\begin{equation} \label{eq.posterior.iid}
\pi(\theta) \propto \ensuremath{\operatorname{p}}_0(\theta) \, \prod_{i=1}^N \Cp{y_i}{ \theta }.
\end{equation}
Here, both computing the gradient term $\nabla \log \pi(\theta_t)$ and evaluating the Metropolis-Hastings acceptance ratio require a computational budget that scales unfeasibly as $ \mathcal{O} (N)$.
One approach is to use a standard random walk proposal instead of Langevin dynamics, and to efficiently approximate the Metropolis-Hastings accept-reject mechanism using only a subset of the data \citep{korattikara2014austerity,BaDoHo14}.
This paper is concerned with stochastic gradient Langevin dynamics (SGLD), an alternative approach proposed by \citet{welling2011bayesian}. This follows the opposite route and chooses to completely avoid the computation of the Metropolis-Hastings ratio. By choosing a discretization of the Langevin diffusion \eqref{eq:overdampedLangevin} with a sufficiently small step-size $\delta \ll 1$, because the Langevin diffusion is ergodic with respect to $\pi$, the hope is that even if the Metropolis-Hastings accept-reject mechanism is completely avoided, the resulting Markov chain still has an invariant distribution that is close to $\pi$. Choosing a decreasing sequence of step-sizes $\delta_m \to 0$ should even allow us to converge to the exact posterior distribution. To further make this approach viable in large $N$ settings, the gradient term $\nabla \log \pi(\theta)$ can be further approximated using a subsampling strategy. For an integer $1 \leq n \leq N$ and a random subset $\tau \eqdef (\tau_1, \ldots, \tau_n)$ of $[N] \equiv\{1, \ldots, N\}$ generated by sampling with or without replacement from $[N]$, the quantity
\begin{equation} \label{eq.unbiased.iid}
\nabla \log \ensuremath{\operatorname{p}}_0(\theta) + \frac{N}{n} \, \sum_{i=1}^n \nabla \log \ensuremath{\operatorname{p}}(y_{\tau_i} \mid \theta)
\end{equation}
is an unbiased estimator of $\nabla \log \pi(\theta)$. Most importantly, this stochastic estimate can be computed with a computational budget that scales as $ \mathcal{O} (n)$ with $n$ potentially much smaller than $N$. Indeed, the larger the quotient $n/N$, the smaller the variance of this estimate.
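A minimal sketch of this subsampled gradient estimate (ours, purely illustrative; \texttt{grad\_log\_prior} and \texttt{grad\_log\_lik} are assumed user-supplied functions for $\nabla \log \ensuremath{\operatorname{p}}_0(\theta)$ and $\nabla \log \ensuremath{\operatorname{p}}(y_i \mid \theta)$) could look like this.
\begin{verbatim}
import numpy as np

def stochastic_grad_log_pi(theta, y, grad_log_prior, grad_log_lik, n, rng):
    # Unbiased estimate of grad log pi(theta): the minibatch contribution
    # is rescaled by N/n, exactly as in the displayed estimator.
    N = len(y)
    idx = rng.choice(N, size=n, replace=False)   # subsample without replacement
    minibatch = sum(grad_log_lik(y[i], theta) for i in idx)
    return grad_log_prior(theta) + (N / n) * minibatch
\end{verbatim}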
Stochastic gradient methods have a long history in optimisation and machine learning and are especially relevant in the large dataset regime considered in this article \citep{robbins1951stochasticApprox,bottou2010large,Hoffman2013SVI}.
In this paper we will adopt a slightly more general framework and assume that one can compute an unbiased estimate $\widehat{\nabla \log \pi}(\theta, \mathcal{U} )$ to the gradient $\nabla \log \pi(\theta)$, where $ \mathcal{U} $ is an auxiliary random variable which contains all the randomness involved in constructing the estimate. Without loss of generality we may assume (although this is unnecessary) that $ \mathcal{U} $ is uniform on $(0,1)$. The unbiasedness of the estimator $\widehat{\nabla \log \pi}(\theta, \mathcal{U} )$ means that
\begin{equation} \label{eq.unbiased.estimate}
\E{ H(\theta, \mathcal{U} ) }=0
\qquad \textrm{with} \qquad
H(\theta, \mathcal{U} ) \; \eqdef\ \; \widehat{\nabla \log \pi}(\theta, \mathcal{U} ) - \nabla \log \pi(\theta).
\end{equation}
In summary, the SGLD algorithm can be described as follows. For a sequence of asymptotically vanishing time-steps $(\delta_m)_{m \geq 0}$ and an initial parameter $\theta_0 \in \mathbb{R}^d$, if the current position is $\theta_{m-1}$, the next position $\theta_m$ is defined through the recursion
\begin{equation}\label{eq.sgld}
\theta_m = \theta_{m-1} + \frac{1}{2}\delta_m\, \widehat{\nabla \log \pi}(\theta_{m-1}, \mathcal{U} _m) + \delta_m^{1/2} \, \eta_m
\end{equation}
for an i.i.d.\ sequence $\eta_{m}\sim\ensuremath{\operatorname{N}}(0,I_d)$, and an independent and i.i.d.\ sequence $ \mathcal{U} _{m}$ of auxiliary random variables. This is the analogue of the Euler-Maruyama discretization \eqref{eq:euler.maruyama} of the Langevin diffusion \eqref{eq:overdampedLangevin} with a decreasing sequence of step-sizes and a stochastic estimate of the gradient term. The analysis presented in this article assumes for simplicity that the initial position $\theta_0$ of the algorithm is deterministic; in the simulation study of Section \ref{sec.numerics}, the algorithms are started at the MAP estimator. Indeed, more general situations could be analysed with similar arguments at the cost of slightly less transparent proofs. Note that the process $(\theta_m)_{m \geq 0}$ is a non-homogeneous Markov chain, and many standard analysis techniques for homogeneous Markov chains do not apply.
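Putting the pieces together, a minimal sketch of the recursion \eqref{eq.sgld} (ours, purely illustrative; \texttt{stochastic\_grad} is an assumed unbiased estimator of $\nabla \log \pi$, such as the subsampled estimate sketched above) is the following.
\begin{verbatim}
import numpy as np

def sgld(theta0, stochastic_grad, step_sizes, rng):
    # theta_m = theta_{m-1} + 0.5*delta_m*grad_hat + sqrt(delta_m)*eta_m.
    theta = np.asarray(theta0, dtype=float)
    trajectory = []
    for delta in step_sizes:
        grad_hat = stochastic_grad(theta, rng)
        eta = rng.standard_normal(theta.shape)
        theta = theta + 0.5 * delta * grad_hat + np.sqrt(delta) * eta
        trajectory.append(theta.copy())
    return trajectory
\end{verbatim}
Following the recommendation discussed in the introduction, one might for instance take \texttt{step\_sizes = [(1 + m) ** (-1 / 3) for m in range(M)]} for some assumed budget \texttt{M}.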
For a test function $\varphi: \mathbb{R}^d \to \mathbb{R}$, the expectation of $\varphi$ with respect to the posterior distribution $\pi$ can be approximated by the weighted sum
\begin{align}\label{eq:empMeasure}
\pi_{m}(\varphi)
&\eqdef
\frac{ \delta_1 \, \varphi(\theta_0) + \ldots + \delta_m \, \varphi(\theta_{m-1}) \, }{ T_m }
\end{align}
with $T_m = \delta_1 + \ldots + \delta_m$. The quantity $\pi_{m}(\varphi)$ thus approximates the ergodic average $T_m^{-1} \int_{0}^{T_m} \varphi(\theta_t) \, dt$ between time zero and $t=T_m$.
During the course of the proof of our fluctuation Theorem \ref{thm.clt}, we will need to consider more general averaging schemes than the one above. To this end, for a general positive sequence of weights $\omega=(\omega_m)_{m \geq 1}$, we define the $\omega$-weighted sum
\begin{align}\label{eq:empMeasure:weighted}
\pi^{\omega}_{m}(\varphi)
&\eqdef
\frac{ \omega_1 \, \varphi(\theta_0) + \ldots + \omega_m \, \varphi(\theta_{m-1})}{\Omega_m}
\end{align}
with $\Omega_m \eqdef \omega_1 + \ldots + \omega_m$.
Indeed, $\pi_m^{\omega}(\varphi) = \pi_m(\varphi)$ in the particular case $(\omega_m)_{m \geq 1} = (\delta_m)_{m \geq 1}$; we will consider the weight sequence $\omega = \{\delta_m^2\}_{m \geq 1}$ in the proof of Theorem \ref{thm.clt}.
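In the same spirit, the ergodic averages \eqref{eq:empMeasure} and \eqref{eq:empMeasure:weighted} are straightforward to compute; a minimal sketch (ours, assuming \texttt{phi\_values} collects $\varphi(\theta_0),\dots,\varphi(\theta_{m-1})$ and \texttt{weights} the corresponding $\omega_1,\dots,\omega_m$) reads as follows.
\begin{verbatim}
def weighted_average(phi_values, weights):
    # pi^omega_m(phi) = (omega_1*phi(theta_0) + ... + omega_m*phi(theta_{m-1})) / Omega_m;
    # taking weights equal to the step-sizes recovers pi_m(phi).
    return sum(w * v for w, v in zip(weights, phi_values)) / sum(weights)
\end{verbatim}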
Let us mention several directions that can be explored to improve upon the basic SGLD algorithm explored in this paper. Langevin diffusions of the type $d \theta_t = \textrm{drift}(\theta_t) \, dt + M(\theta_t) \, dW_t$, reversible with respect to the posterior distribution $\pi$, can be constructed for various choices of positive definite volatility matrix function $M: \mathbb{R}^d \to \mathbb{R}^{d,d}$. Note nonetheless that, for a non-constant volatility matrix function $\theta \mapsto M(\theta)$, the drift term typically involves derivatives of $M$.
Concepts of information geometry \citep{amari2007methods} give principled ways \citep{livingstone2014information} of choosing the volatility matrix function $M$; when the Fisher information matrix is used, this leads to the Riemannian manifold MALA algorithm \citep{MR2814492}. This approach has recently been applied to the Latent Dirichlet Allocation model for topic modelling \citep{PatTeh2013a}. For high-dimensional state spaces $d \gg 1$, one can use a constant volatility function $M$, also known in this case as the preconditioning matrix, for taking into account the information contained in the prior distribution $\ensuremath{\operatorname{p}}_0$ in the hope of obtaining better mixing properties \citep{beskos2008mcmc,cotter2013mcmc}; infinite dimensional limits are obtained in \citep{pillai2012optimal,hairer2011spectral}.
Under a uniform-ellipticity condition and a growth assumption on the volatility matrix function $M:\mathbb{R}^d \to \mathbb{R}^{d,d}$, we believe that our framework could, at the cost of increasing complexity in the proofs, be extended to this setting. To avoid the slow random walk behaviour of Markov chains based on discretization of reversible diffusion processes, one can use instead discretizations of a Hamiltonian system of ordinary differential equations \citep{duane1987hybrid,neal2010mcmc}; when coupled with the stochastic gradient estimates described above, this leads to the stochastic gradient Hamiltonian Monte Carlo algorithm of \citep{chen2014stochastic}.
In the rest of this paper, we will build a rigorous framework for understanding the properties of this SGLD algorithm, demonstrating that the heuristics and numerical evidence presented in \citet{welling2011bayesian} were indeed correct.
\section{Assumptions and Stability Analysis}
This section states the basic assumptions we will need for the asymptotic results to follow, and illustrates some of the potential stability issues that may occur if the SGLD algorithm is applied without care.
\subsection{Basic Assumptions}
Throughout this text, we assume that the sequence of step-sizes $\delta = (\delta_m )_{m \geq 1}$ satisfies the following usual assumption.
\begin{assumption}
\label{ass:step-sizes}
The step-sizes $\delta = (\delta_m )_{m \geq 1}$ form a decreasing sequence with
\begin{equation*}
\lim_{m \to \infty} \, \delta_m = 0
\qquad \textrm{and} \qquad
\lim_{m \to \infty} \, T_m = \infty.
\end{equation*}
\end{assumption}
Indeed, this assumption is easily seen to also be necessary for the Law of Large Numbers of Section \ref{sec.LLN} to hold.
Furthermore, we will on several occasions need the following assumption on the oscillations of a sequence of weights $( \omega_m )_{m \geq 1}$.
\begin{assumption}
\label{ass:step-sizes:weighted}
The weight sequence $( \omega_m )_{m \geq 1}$ is such that $\omega_m \to 0$ and $\Omega_m \to \infty$ and
\begin{equation*}
\sum_{m \geq 1} \big| \Delta(\omega_m / \delta_m) \big| \, / \, \Omega_m < \infty
\qquad \textrm{and}\qquad
\sum_{m \geq 1} \omega^2_m / [\delta_m \Omega^2_m] < \infty,
\end{equation*}
where $\Delta(\omega_m / \delta_m) \eqdef \omega_{m+1} / \delta_{m+1} - \omega_m / \delta_m$.
\end{assumption}
\begin{rem} \label{rem.weights.power}
Assumption \ref{ass:step-sizes:weighted} holds if
$\delta = (\delta_m)_{m \geq 1}$ satisfies Assumption \ref{ass:step-sizes} and the weights are defined as $\omega_m =\delta^p_m$, for some exponent $p \geq 1$ small enough that $\Omega_m \to \infty$. This is because the first sum is less than $\sum_{m \geq 1} \big| \Delta(\omega_m / \delta_m) \big|/\Omega_1 = \delta_1^{p-1} / \Omega_1$, while the finiteness of the second sum can be seen as follows:
\begin{eqnarray*}
\sum_{m \geq 1} \omega^2_m / \BK{ \delta_m \Omega^2_m }
&\lesssim&
1+\sum_{m \geq 2} \BK{ \omega_m / \delta_m }^2 \, \BK{ 1/\Omega_{m-1} - 1/\Omega_m }\\
&\lesssim&
1+\sum_{m \geq 2} \BK{ 1/\Omega_{m-1} - 1/\Omega_m } = 1 + 1/\Omega_1.
\end{eqnarray*}
For any exponents $0 < \alpha < 1$ and $0 < p < 1/\alpha$ the sequences $\delta_m = (m_0+m)^{-\alpha}$ and $\omega_m = \delta_m^p$ satisfy both Assumption \ref{ass:step-sizes} and Assumption \ref{ass:step-sizes:weighted}.
\end{rem}
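These conditions are also easy to probe numerically. The following sketch (ours, purely illustrative) evaluates truncations of the two sums in Assumption \ref{ass:step-sizes:weighted} for $\delta_m = (m_0+m)^{-\alpha}$ and $\omega_m = \delta_m^p$; one can observe that the partial sums stabilise as the truncation level grows.
\begin{verbatim}
import numpy as np

def check_weight_assumption(alpha=1/3, p=2.0, m0=1.0, M=200000):
    # Partial sums of |Delta(omega_m/delta_m)|/Omega_m and
    # omega_m^2/(delta_m*Omega_m^2), truncated at M terms.
    m = np.arange(1, M + 1)
    delta = (m0 + m) ** (-alpha)
    omega = delta ** p
    Omega = np.cumsum(omega)
    ratio = omega / delta
    s1 = np.sum(np.abs(np.diff(ratio)) / Omega[:-1])
    s2 = np.sum(omega ** 2 / (delta * Omega ** 2))
    return s1, s2
\end{verbatim}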
\subsection{Stability}
\label{sec.stability}
Under assumptions on the tails of the posterior density $\pi$, the Langevin diffusion \eqref{eq:overdampedLangevin} is non-explosive and for any starting position $\theta_0 \in \mathbb{R}^d$ the total-variation distance $d_{\textrm{TV}}\big(\PP(\theta_t \in \cdot), \pi\big)$ converges to zero as $t \to \infty$. For instance, Theorem $2.1$ of \citep{roberts1996exponential} shows that it is sufficient to assume that the drift term satisfies the condition $(1/2) \, \angleBK{\nabla \log \pi(\theta), \theta} \leq \alpha \|\theta\|^2 + \beta$ for some constants $\alpha, \beta> 0$. We refer the interested reader to \citep{roberts1996exponential,stramer1999langevin1,stramer1999langevin2,roberts2002langevin,mattingly2002ergodicity}
for a detailed study of the convergence properties of the Langevin diffusion \eqref{eq:overdampedLangevin}.
Unfortunately, stability of the continuous time Langevin diffusion does not always translate into good behaviour for its Euler-Maruyama discretization. For example, even if the drift term points towards the right direction in the sense that $\angleBK{\nabla \log \pi(\theta), \theta} < 0$ for every parameter $\theta$, it might happen that the magnitude of the drift term is too large so that the Euler-Maruyama discretization \emph{overshoots} and becomes unstable. In a one dimensional setting, this would lead to a Markov chain that diverges in the sense that the sequence $(\theta_m)_{m \geq 0}$ alternates between taking arbitrarily large positive and negative values. Lemma $6.3$ of \citep{mattingly2002ergodicity} gives such an example with a target density $\pi(\theta) \propto \exp\{-\theta^4\}$. See also Theorem $3.2$ of \citep{roberts1996exponential} for examples of the same flavours.
Guaranteeing stability of the Euler-Maruyama discretization requires stronger Lyapunov-type conditions. At a heuristic level, one must ensure that the drift term $\nabla \log \pi(\theta)$ points towards the centre of the state space. In addition, the previous discussion indicates that one must also ensure that the magnitude of this drift term is not too large. The following assumptions capture both heuristics and, as we will show, are enough to guarantee that the SGLD algorithm is consistent, with asymptotically Gaussian fluctuations.
\begin{assumption} \label{ass:Lyap}
The drift term $\theta \mapsto \frac{1}{2} \, \nabla \log \pi(\theta)$ is continuous. There exists a Lyapunov function $V: \mathbb{R}^d \to [1,\infty)$ that tends to infinity as $\|\theta\| \to \infty$, is twice differentiable with bounded second derivatives, and satisfies the following conditions.
\begin{enumerate}
\item
There exists an exponent $p_H \geq 2$ such that
\begin{equation} \label{eq.bound.H}
\E{ \norm{ H(\theta, \mathcal{U} ) }^{2p_H}} \lesssim V^{p_H}(\theta).
\end{equation}
This implies that $\E{ \|H(\theta, \mathcal{U} ) \|^{2p} } \lesssim V^{p}(\theta)$ for any exponent $0 \leq p \leq p_H$.
\item For every $\theta \in \mathbb{R}^d$ we have
\begin{equation} \label{eq.lyapunov.size}
\norm{ \nabla V(\theta) }^2 + \norm{ \nabla \log \pi(\theta) }^2
\; \lesssim \; V(\theta).
\end{equation}
\item
There are constants $\alpha, \beta > 0$ such that for every $\theta \in \mathbb{R}^d$ we have
\begin{equation} \label{eq.lyapunov.drift}
\frac12 \, \angleBK{ \nabla V(\theta), \nabla \log \pi(\theta)} \;\leq \; -\alpha \, V(\theta)+\beta.
\end{equation}
\end{enumerate}
\end{assumption}
Equation \eqref{eq.lyapunov.drift} ensures that on average the drift term $\widehat{\nabla \log \pi}(\theta)$ points towards the centre of the state space, while equations \eqref{eq.bound.H} and \eqref{eq.lyapunov.size} provide control on the magnitude of the (stochastic) drift term. The drift condition \eqref{eq.lyapunov.drift} implies in particular that the Langevin diffusion \eqref{eq:overdampedLangevin} converges exponentially quickly towards the equilibrium distribution $\pi$ \citep{mattingly2002ergodicity,roberts1996exponential}.
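As a simple sanity check (our own illustration, not part of the original text), these conditions hold for a standard Gaussian target with exact gradients. Taking $\pi(\theta) \propto \exp(-\|\theta\|^2/2)$, $H \equiv 0$, and $V(\theta) = 1 + \|\theta\|^2$, one has $\nabla V(\theta) = 2\theta$ and $\nabla \log \pi(\theta) = -\theta$, so that
\[
\frac12 \, \angleBK{ \nabla V(\theta), \nabla \log \pi(\theta)} \;=\; -\|\theta\|^2 \;=\; -V(\theta) + 1,
\]
which is \eqref{eq.lyapunov.drift} with $\alpha = \beta = 1$; moreover $\norm{\nabla V(\theta)}^2 + \norm{\nabla \log \pi(\theta)}^2 = 5\,\|\theta\|^2 \lesssim V(\theta)$ gives \eqref{eq.lyapunov.size}, and \eqref{eq.bound.H} is trivial since $H \equiv 0$.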
The proofs of the Law of Large Numbers (LLN) and of the Central Limit Theorem (CLT) both exploit the following lemma.
\begin{lemma} [Stability] \label{lem:stability}
Let the step-sizes $(\delta_m)_{m \geq 1}$ satisfy Assumption \ref{ass:step-sizes} and suppose that the stability Assumptions \ref{ass:Lyap} hold. For any exponent $0\le p \leq p_H$ the following bounds hold almost surely,
\begin{equation} \label{eq.stability.estimate}
\sup_{m \geq 1} \; \pi_m(V^{p/2}) \; < \; \infty
\qquad \textrm{and} \qquad
\sup_{m \geq 1} \; \E{ V^p(\theta_m) } \; < \; \infty.
\end{equation}
Moreover, for any exponent $0\le p \leq p_H$ we have $\pi(V^p) < \infty$.
If the sequence of weights $(\omega_m)_{m \geq 1}$ satisfies Assumption \ref{ass:step-sizes:weighted} the following holds almost surely,
\begin{equation} \label{eq.stability.estimate.weighted}
\sup_{m \geq 1} \; \pi^{\omega}_m(V^{p/2}) \; < \; \infty
\end{equation}
\end{lemma}
The technical proof can be found in Section \ref{sec.proof.lem.stability}. The idea is to leverage condition \eqref{eq.lyapunov.drift} in order to establish that the function $V^p$ satisfies both discrete and continuous drift conditions.
\subsection{Scope of the analysis}
For a posterior density $\pi$ of the form \eqref{eq.posterior.iid} and the usual unbiased estimate to $\nabla \log \pi$ described in Equation \eqref{eq.unbiased.iid}, to establish that Equations \eqref{eq.bound.H} and \eqref{eq.lyapunov.size} hold it suffices to verify that the prior density $p_0$ is such that $\norm{ \nabla \log \ensuremath{\operatorname{p}}_0(\theta) }^2 \lesssim V(\theta)$ and that for any index $1 \leq i \leq N$ the likelihood term $\Cp{y_i}{\theta}$ is such that
\begin{equation*}
\norm{ \nabla \log \Cp{y_i}{\theta} }^{2 \, p_H} \lesssim V^{p_H}(\theta).
\end{equation*}
Indeed, in these circumstances, we have
$\norm{ H(\theta, \mathcal{U} ) }^{2 p_H} \lesssim \sum_{i=1}^N \, \norm{ \nabla \log \ensuremath{\operatorname{p}}(y_i \mid \theta) }^{2 p_H}$. Several such examples are described in Section \ref{sec.numerics}.
It is important to note that the drift Condition \eqref{eq.lyapunov.drift} typically does not hold for distributions with heavy tails such that $\nabla \log \pi(x) \to 0$ as $\norm{x} \to \infty$ \citep{roberts1996exponential}. For example, the standard MALA algorithm is not geometrically ergodic when $\nabla \log \pi(x)$ converges to zero as $\norm{x} \to \infty$ (Theorem $4.3$ of \citep{roberts1996exponential}); indeed, the analysis of standard local-move MCMC algorithms when applied to target densities with heavy tails is delicate and typically necessitates other tools \cite{stramer1999langevin2,jarner2007convergence,kamatani2014rate} than the approach based on drift conditions of the type \eqref{eq.lyapunov.drift}. The analysis of the properties of the SGLD algorithm when applied to such heavy-tailed densities is out of the scope of this article. We also note that many more complex scenarios involving high-dimensionality, multi-modality, non-parametric settings where the complexity of the target distribution increases with the size of the data, or combinations thereof, are examples of interesting and relevant situations where our analysis typically does not apply; analysing the SGLD algorithm when applied to these challenging target distributions is well out of the scope of this article.
\section{Consistency} \label{sec.LLN}
The problem of estimating the invariant distribution of a stochastic differential equation by using a diminishing step-size Euler discretization has been well explored in the literature \citep{lamberton2002recursive,lamberton2003recursive,
lemaire2007adaptive,panloup2008recursive,
pages2012ergodic}, while \citep{mattingly2002ergodicity} studied the bias and variance of similar algorithms when fixed step-sizes are used instead. We leverage some of these techniques and adapt them to our setting, where the drift term can only be estimated unbiasedly, and establish in this section that the SGLD algorithm is consistent under Assumptions \ref{ass:step-sizes} and \ref{ass:Lyap}. More precisely, we prove that almost surely the sequence $(\pi_m)_{m \geq 1}$ defined in Equation \eqref{eq:empMeasure} converges weakly towards $\pi$. Specifically, under growth assumptions on a test function $\varphi: \mathbb{R}^d \to \mathbb{R}$, the following strong law of large numbers holds almost surely,
\begin{equation*}
\lim_{m \to \infty} \;
\frac{ \delta_1 \, \varphi(\theta_0) + \ldots + \delta_m \, \varphi(\theta_{m-1})}{T_m} \; = \; \int_{\mathbb{R}^d} \, \varphi(\theta) \, \pi(d \theta),
\end{equation*}
with a similar result for $\omega$-weighted empirical averages, under assumptions on the weight sequence $\omega$. The proofs of several results of this paper make use of the following elementary lemma.
\begin{lemma} \label{lem.MR}
Let $\BK{\Delta M_k}_{k \geq 0}$ and $\BK{R_k}_{k \geq 0}$ be two sequences of random variables adapted to a filtration $\BK{ \mathcal{F} _{k}}_{k \geq 0}$ and let $\BK{\Gamma_k}_{k \geq 0}$ be an increasing sequence of positive real numbers. The limit
\begin{equation} \label{eq.martingale.type}
\lim_{m \to \infty} \frac{\sum_{k=0}^m \BK{ \Delta M_k + R_k } }{\Gamma_m}
\;=\; 0
\end{equation}
holds almost surely if the following two conditions are satisfied.
\begin{enumerate}
\item The process $M_m = \sum_{k \leq m} \Delta M_k$ is a martingale, i.e. $\CE{\Delta M_k}{ \mathcal{F} _{k-1}}=0$ and
\begin{equation} \label{eq.martingale.type.M.part}
\sum_{k \geq 0} \frac{\EE \sqBK{\abs{\Delta M_k}^2} }{\Gamma_k^2} \; < \; \infty.
\end{equation}
\item The sequence $\BK{R_k}_{k \geq 0}$ is such that
\begin{equation} \label{eq.martingale.type.R.part}
\sum_{k \geq 0} \frac{\EE \sqBK{\abs{R_k}} }{\Gamma_k} \; < \; \infty.
\end{equation}
\end{enumerate}
\end{lemma}
The above lemma, whose proof can be found in Appendix \ref{sec.proof.lem.MR}, is standard; \cite{lamberton2002recursive} also follows this route to prove several of their results.
\begin{theorem} \label{thm.LLN} {\bf (Consistency)}
Let the step-sizes satisfy Assumption \eqref{ass:step-sizes} and suppose that the stability Assumptions \ref{ass:Lyap} hold for a Lyapunov function $V:\mathbb{R}^d \to [1,\infty)$. Let $0 \leq p < p_H/2$ and $\varphi:\mathbb{R}^d \to \mathbb{R}$ be a test function such that $| \varphi(\theta) | / V^{p}(\theta)$ is globally bounded. Then the following limit holds almost surely:
\begin{equation} \label{eq.ergodic.unbounded}
\lim_{m \to \infty} \; \pi_m(\varphi) \;=\; \pi(\varphi).
\end{equation}
If in addition the sequence of weights $\{\omega_m\}_{m \geq 1}$ satisfies Assumption \eqref{ass:step-sizes:weighted}, a similar result holds almost surely for the $\omega$-weighted ergodic average:
\begin{equation} \label{eq.weighted.LLN}
\lim_{m \to \infty} \; \pi^{\omega}_m(\varphi) \;=\; \pi(\varphi).
\end{equation}
\end{theorem}
\begin{proof}
In the following, we write $\EE_k \sqBK{\, \cdot \,}$ and $\PP_k \BK{ \, \cdot \,}$ to denote the conditional expectation $\CE{\, \cdot \, }{ \theta_k}$ and conditional probability $\CP{\, \cdot \, }{ \theta_k}$ respectively. We use the notation $\Delta \theta_k \eqdef (\theta_{k+1}-\theta_k)$. Finally, for notational convenience, we only present the proof in the scalar case $d=1$, the multidimensional case being entirely similar.
We will give a detailed proof of Equation \eqref{eq.ergodic.unbounded} and then briefly describe how the more general Equation \eqref{eq.weighted.LLN} can be proven using similar arguments.
To prove Equation \eqref{eq.ergodic.unbounded}, we first show that the sequence $(\pi_m)_{m \geq 1}$ almost surely converges weakly to $\pi$. Equation \eqref{eq.ergodic.unbounded} is then proved in a second stage.\\
\noindent
{\bf Weak convergence of $(\pi_m)_{m \geq 1}$}.
To prove that almost surely the sequence $(\pi_m)_{m \geq 1}$ converges weakly towards $\pi$ it suffices to prove that the sequence is almost surely weakly pre-compact and that any weakly convergent subsequence of $(\pi_m)_{m \geq 0}$ necessarily (weakly) converges towards $\pi$. By Prokhorov's Theorem \citep{Billingsley95} and Equation \eqref{eq.stability.estimate}, because the Lyapunov function $V$ goes to infinity as $\|\theta\| \to \infty$, the sequence $(\pi_m)_{m \geq 1}$ is almost surely weakly pre-compact. It thus remains to show that if a subsequence converges weakly to a probability measure $\pi_{\infty}$ then $\pi_{\infty}=\pi$.
Since the Langevin diffusion \eqref{eq:overdampedLangevin} has a unique strong solution and its generator $\mathcal{A}$ is uniformly elliptic, Theorem $9.17$ of Chapter $4$ of \citep{Ethier1986Markov} yields that it suffices to verify that
for any smooth and compactly supported test function $\varphi: \mathbb{R} \to \mathbb{R}$ and any limiting distribution $\pi_{\infty}$ of the sequence $(\pi_m)_{m \geq 1}$ the following holds,
\begin{equation} \label{eq.invariance.by.generator}
\pi_{\infty}(\mathcal{A} \varphi) = 0.
\end{equation}
To prove Equation \eqref{eq.invariance.by.generator} we use the following decomposition of $\pi_m (\mathcal{A} \varphi)$,
\begin{equation} \label{eq.pi.m.decomposition}
\curBK{ \frac{\sum_{k=1}^m \EE_{k-1}[\varphi(\theta_k)-\varphi(\theta_{k-1})]}{T_m}}
\;-\;
\curBK{ \frac{\sum_{k=1}^m \EE_{k-1}[\varphi(\theta_k)-\varphi(\theta_{k-1})]}{T_m} - \pi_m(\mathcal{A} \varphi)}.
\end{equation}
\begin{itemize}
\item
Let us prove that the first term of \eqref{eq.pi.m.decomposition} converges almost surely to zero. The numerator is equal to the sum of $\sum_{k=1}^m \BK{ \EE_{k-1}[\varphi(\theta_k)] - \varphi(\theta_k) }$ and $\varphi(\theta_m) - \varphi(\theta_0)$. By boundedness of $\varphi$, the term $\curBK{ \varphi(\theta_m) - \varphi(\theta_0) } / T_m$ converges almost surely to zero.
By Lemma \ref{lem.MR}, to conclude it suffices to show that the martingale difference terms $\EE_{k-1}\sqBK{ \varphi(\theta_k)} - \varphi(\theta_k)$ are such that
\begin{equation*}
\sum_{k \geq 1} \frac{ \EE \sqBK{ \abs{ \EE_{k-1} \sqBK{ \varphi(\theta_k)} - \varphi(\theta_k) }^2 }}{ T^2_k} \; < \; \infty.
\end{equation*}
Because $\varphi$ is Lipschitz, it suffices to prove that $\sum_{k \geq 1} \EE\BK{ \norm{ \theta_{k+1} - \theta_k }^2 } / T_k^2$ is finite.
The stability Assumption \ref{ass:Lyap} and Lemma \ref{lem:stability} imply that the supremum $\sup_m \, \EE \sqBK{ V(\theta_m) }$ is finite.
Since $\EE_k \sqBK{ \norm{ \theta_{k+1} - \theta_k }^2 } \lesssim \delta^2_{k+1} \, V(\theta_k) + \delta_{k+1}$, it follows that $\EE \BK{ \norm{ \theta_{k+1} - \theta_k }^2 }$ is less than a constant multiple of $\delta_{k+1}$. Under Assumption \ref{ass:step-sizes}, because the telescoping sum $\sum_{k \geq 1} \BK{ T_k^{-1}-T_{k+1}^{-1} }$ is finite, the sum $\sum_{k \geq 1} \delta_k / T_k^2$ is finite. This concludes the proof that the first term in \eqref{eq.pi.m.decomposition} converges almost surely to zero.
\item
The second term of \eqref{eq.pi.m.decomposition} equals $\big(R_0 + \ldots + R_{m-1} \big) / T_m$ with
\begin{equation} \label{eq.R}
R_k \eqdef \EE_k \sqBK{ \varphi(\theta_{k+1}) - \varphi(\theta_k) } - \mathcal{A} \varphi(\theta_k) \, \delta_{k+1}.
\end{equation}
We now show that there exists a constant $C$ such that the bound $|R_k| \leq C \, \delta_{k+1}^{3/2}$ holds for any $k \geq 0$. To do so, let $K>0$ be such that the support of the test function $\varphi$ is included in the compact set $\Omega = [-K,K]$. We examine two cases separately.
\begin{itemize}
\item
If $|\theta_k| > K+1$ then $\varphi(\theta_k) = \mathcal{A} \varphi(\theta_k)=0$ so that $|R_k| \leq \|\varphi\|_{\infty} \, \PP_k(\theta_{k+1} \in \Omega)$. Since $\theta_{k+1}-\theta_k = \curBK{ \frac{1}{2} \nabla \log \pi(\theta_k) + H(\theta_k, \mathcal{U} ) } \, \delta_{k+1} + \sqrt{\delta_{k+1}} \, \eta$ we have
\begin{align*}
\PP_k(\theta_{k+1} \in \Omega)
&\leq
\mathbb{I}\BK{ \left|\frac{1}{2} \nabla \log \pi(\theta_k) \right| \geq \frac{\mathrm{dist}(\theta_k, \Omega)}{3 \, \delta_{k+1}} }\\
&\qquad +
\PP_k\BK{|H(\theta_k, \mathcal{U} )| \geq \frac{\mathrm{dist}(\theta_k, \Omega)}{3 \, \delta_{k+1}} }
+\PP_k\BK{|\eta| \geq \frac{\mathrm{dist}(\theta_k, \Omega)}{3 \, \sqrt{\delta_{k+1}}} }.
\end{align*}
We have used the notation $\mathbb{I}(A)$ for denoting the indicator function of the event $A$.
Under Assumption \ref{ass:Lyap} we have $|\nabla \log \pi(\theta)| \lesssim V(\theta)^{1/2} \lesssim 1+\|\theta\|$ so that the quotient $|\nabla \log \pi(\theta)| / \mathrm{dist}(\theta, \Omega)$ is bounded on the set $\curBK{ \theta : |\theta|>K }$; this shows that the first term equals zero for $\delta_k$ small enough. To prove that the second term is bounded by a constant multiple of $\delta_{k+1}^2$, it suffices to use Markov's inequality and the fact that $\EE[H(\theta_k, \mathcal{U} )^2] / \mathrm{dist}^2(\theta, \Omega)$ is bounded on $\{\theta : |\theta|>K\}$; this is because $\EE[H(\theta_k, \mathcal{U} )^2]$ is less than a constant multiple of $V(\theta)$ and $V(\theta) \lesssim 1 + \|\theta\|^2$ by Assumption \ref{ass:Lyap}. The third term is less than a constant multiple of $\delta_{k+1}^2$ by Markov's inequality and the fact that $\eta$ has a finite moment of order four.
\item
If $|\theta_k| \leq K+1$, we decompose $R_k$ into two terms.
A second order Taylor formula yields
\begin{align*}
R_k
&= \frac12 \, \delta^2_{k+1} \, \varphi^{''}(\theta_k) \, \curBK{ [\nabla \log \pi(\theta_k)]^2 + \EE_k \sqBK{ H^2(\theta_k, \mathcal{U} ) } } \\
&\qquad + (1/2) \, \EE_k\sqBK{ (\Delta \theta_k)^3 \, \int_{0}^1 \varphi^{'''}(\theta_k + u \, \Delta \theta_k) \, (1-u)^2 \, du }\\
&= R_{k,1} + R_{k,2}.
\end{align*}
Under Assumption \ref{ass:Lyap}, the quantities $[\nabla \log \pi(\theta_k)]^2$ and $\EE[H^2(\theta_k, \mathcal{U} )]$ are upper bounded by a constant multiple of $V(\theta_k)$. Since the function $\theta \mapsto \varphi^{''}(\theta) \, V(\theta)$ is globally bounded (because continuous with compact support) this shows that $R_{k,1}$ is less than a constant multiple of $\delta_{k+1}^2$. Since $|\theta_k| \leq K+1$, the bounds $\EE[H^3(\theta, \mathcal{U} )] \lesssim V^{3/2}(\theta)$ and $\sup_{k \geq 0} \EE[V^{3/2}(\theta_k)] < \infty$ (see Lemma \ref{lem:stability}) yield that $\EE_k |\Delta \theta_k|^3 \leq 9 \,\overline{C} \, (\delta_{k+1}^3+\delta_{k+1}^{3/2}) \lesssim \delta_{k+1}^{3/2}$ with
\begin{equation*}
\overline{C} = 1+\sup_{\theta : |\theta|<K+1} \abs{ \nabla \log \pi(\theta) }^3 + \EE \sqBK{ \abs{ H(\theta, \mathcal{U} )}^3}.
\end{equation*}
Note that $\overline{C}$ is finite by Assumption \ref{ass:Lyap} and Lemma \ref{lem:stability}.
\end{itemize}
We have thus proved that there is a constant $C$ such that $\abs{ R_k } \leq C \, \delta_{k+1}^{3/2}$ for $k \geq 0$; it follows that the sum $\BK{ R_0 + \ldots + R_{m-1} } / T_m$ is less than a constant multiple of $\BK{ \delta_{1}^{3/2} + \ldots + \delta_{m}^{3/2} } /T_m$. Under Assumption \ref{ass:step-sizes}, this upper bound converges to zero as $m \to \infty$, hence the conclusion.
\end{itemize}
This ends the proof of the almost sure weak convergence of $\pi_m$ towards $\pi$.\\
\noindent
{\bf Proof of Equation \eqref{eq.ergodic.unbounded}}.
By assumption we have $|\varphi(\theta)| \leq C_p \, V^p(\theta)$ for some constant $C_p>0$ and exponent $p < p_H / 2$. To show that $\pi_m(\varphi) \to \pi(\varphi)$ almost surely, we will use Lemma \ref{lem:stability} and the almost sure weak convergence, which guarantees that $\pi_m(\widetilde{\varphi}) \to \pi(\widetilde{\varphi})$ for a continuous and bounded test function $\widetilde{\varphi}$.
For any $t>0$, the set $\Omega_t \eqdef \{\theta : V(\theta) \le t\}$ is compact and Tietze's extension theorem \citep[Theorem $20.4$]{rudin1986real} yields that there exists a continuous function $\widetilde{\varphi}_t$ with compact support that agrees with $\varphi$ on $\Omega_t$ and such that $\|\widetilde{\varphi}_t\|_{\infty} = \sup \{ |\varphi(\theta)| : \theta \in \Omega_t\}$. We can indeed also assume that $|\widetilde{\varphi}_t(\theta)| \leq C_p \, V^p(\theta)$.
Since Lemma \ref{lem:stability} states that $\sup_m \pi_m(V^{p_H/2})$ is almost surely finite, it follows that
\begin{align*}
|\pi_m(\varphi) - \pi_m(\widetilde{\varphi}_t)|
&\leq
2 \, C_p \, \pi_m(V^p \, \mathbbm{1}_{V \geq t})
\leq 2 \, C_p \, \frac{\sup_m \pi_m(V^{p_H/2})}{t^{{p_H/2}-p}},
\end{align*}
where the last inequality follows from the fact that for any probability measure $\mu$, exponents $0 < p < q$ and scalar $t>0$ we have $\mu(V^p \, \mathbbm{1}_{V \geq t}) \leq \mu(V^q \, \mathbbm{1}_{V \geq t}) / t^{q-p}$.
Similarly
$$|\pi(\varphi) - \pi(\widetilde{\varphi}_t)| \leq 2 \, C_p \, \pi(V^{p_H/2}) / t^{{p_H/2}-p}.$$
By the triangle inequality, we thus have,
\begin{equation*}
|\pi_m(\varphi) - \pi(\varphi)| \le 2 \, C_p \, \frac{\sup_m \pi_m(V^{p_H/2})}{t^{{p_H/2}-p}}
+
\big|\pi_m(\widetilde{\varphi}_t)-\pi(\widetilde{\varphi}_t) \big|
+
2 \, C_p \, \frac{\pi(V^{p_H/2})}{t^{{p_H/2}-p}}.
\end{equation*}
On the right-hand side, the term in the middle can be made arbitrarily small as $m \to \infty$ since $\pi_m$ converges weakly towards $\pi$, while the other two terms converge to zero as $t \to \infty$. This concludes the proof of Equation \eqref{eq.ergodic.unbounded}.\\
\noindent
{\bf Proof of Equation \eqref{eq.weighted.LLN}}.
The approach is very similar to the proof of Equation \eqref{eq.ergodic.unbounded} and for this reason we only highlight the main differences. The same argument shows that the sequence $\pi^{\omega}_m$ is tight, and in order to obtain the almost sure weak convergence of $( \pi^{\omega}_m )_{m \geq 0}$ towards $\pi$ it suffices to show that $\pi^{\omega}_{\infty}(\mathcal{A} \varphi) = 0$ for any weak limit $\pi^{\omega}_{\infty}$ of the sequence $(\pi^{\omega}_m )_{m \geq 0}$. One can then upgrade this almost sure weak convergence to a Law of Large Numbers. To prove \eqref{eq.weighted.LLN}, we thus concentrate on proving that $\pi^{\omega}_{\infty}(\mathcal{A} \varphi) = 0$. For a smooth and compactly supported test function $\varphi$ we use the decomposition $\pi^{\omega}_m(\mathcal{A} \varphi)=S_1(m) + S_2(m) + S_3(m)$ with
\begin{align*}
\left\{
\begin{array}{ll}
S_1(m)&=\frac{1}{\Omega_m}\sum_{k=1}^m \frac{\omega_k }{\delta_k} \big(\EE_{k-1}[\varphi(\theta_k)]-\varphi(\theta_{k})\big)\\
S_2(m)&=\frac{1}{\Omega_m}\sum_{k=1}^m \frac{\omega_k }{\delta_k} \big(\varphi(\theta_k) - \varphi(\theta_{k-1})\big)\\
S_3(m)&=\pi^{\omega}_m(\mathcal{A} \varphi) - \frac{1}{\Omega_m} \sum_{k=1}^m \frac{\omega_k}{\delta_k} \EE_{k-1}[\varphi(\theta_k)-\varphi(\theta_{k-1})]
\end{array}
\right.
\end{align*}
and prove that each term converges to zero almost surely.
For $S_1(m)$, by Lemma \ref{lem.MR} it suffices to show that $\sum_{k \geq 1} (\omega_k / \delta_k)^2 \, \EE \sqBK{ \curBK{\EE_{k-1}[\varphi(\theta_k)]-\varphi(\theta_k)}^2 } / \Omega_k^2$ is finite. This follows from the bound $\EE \sqBK{ \big(\EE_{k-1}[\varphi(\theta_k)]-\varphi(\theta_k)\big)^2} \lesssim \delta_k$ and the fact that $\sum_{m \geq 0} \omega_m^2 / (\Omega_m^2 \, \delta_m)$ is finite.
For $S_2(m)$, we can write it as
\begin{equation*}
S_2(m) = \frac{ -\frac{\omega_1}{\delta_1}\varphi(\theta_0) + \frac{\omega_{m+1}}{\delta_{m+1}}\varphi(\theta_m) - \sum_{k=1}^m \varphi(\theta_k) \, \Delta (\omega_k / \delta_k)}{ \Omega_m }.
\end{equation*}
Because $\Omega_m \to \infty$, $(\omega_{m+1} / \delta_{m+1}) / \Omega_m \to 0$ and $\varphi$ is bounded, one can concentrate on proving that $\Omega_m^{-1} \sum_{k=1}^m \varphi(\theta_k) \, \Delta (\omega_k / \delta_k)$ converges almost surely to zero. By Lemma \ref{lem.MR}, it suffices to verify that $\sum_{k \geq 1} \EE \sqBK{ \abs{ \varphi(\theta_k) \, \Delta (\omega_k / \delta_k) }} / \Omega_k$ is finite; this directly follows from the boundedness of $\varphi$ and Assumption \ref{ass:step-sizes:weighted}.
Finally, algebra shows that $S_3(m) = \Omega_m^{-1} \, \sum_{1}^m (\omega_k / \delta_k) \, R_{k-1}$ with the quantity $R_k$ defined in Equation \eqref{eq.R}. It has been proved that there is a constant $C$ such that, almost surely, $|R_k| \leq C \, \delta_{k+1}^{3/2}$ for all $k \geq 0$. Since $\delta_m \to 0$, the rescaled sum $\Omega_m^{-1} \, \sum_{k \leq m} \omega_k \delta_{k}^{1/2}$ converges to zero as $m \to \infty$. It follows that $S_3(m)$ converges almost surely to zero.
\end{proof}
\section{Fluctuations, Bias-Variance Analysis, and Central Limit Theorem} \label{sec.bias.variance}
The previous section shows that, under suitable conditions, for a test function $\varphi:\mathbb{R}^d\to\mathbb{R}$ the quantity $\pi_m(\varphi)$ converges almost surely to $\pi(\varphi)$ as $m \to \infty$.
In this section, we investigate the fluctuations of $\pi_m(\varphi)$ around its asymptotic value $\pi(\varphi)$.
We establish that the asymptotic bias-variance decomposition of the SGLD algorithm is dictated by the behaviour of the sequence
\begin{align}
\mathbb{B}_m \eqdef
T_m^{-1/2} \, \sum_{k=0}^{m-1} \, \delta^2_{k+1}.
\label{eq.bias.variance.ratio}
\end{align}
Indeed, the proof of Theorem \ref{thm.clt} reveals that the fluctuations of $\pi_m(\varphi)$ are of order $\mathcal{O}\BK{ T_m^{-1/2} }$ and its bias is of order $\mathcal{O}\BK{ T_m^{-1} \sum_{k=0}^{m-1} \delta_{k+1}^2 }$; the quantity $\mathbb{B}_m$ is thus the ratio of the typical scales of the bias and fluctuations.
In the case where $\mathbb{B}_m \to 0$, the fluctuations dominate the bias and the rescaled difference $T^{1/2}_m \, \BK{ \pi_m(\varphi) - \pi(\varphi)}$ converges weakly to a centred Gaussian distribution. In the case where $\mathbb{B}_m \to \mathbb{B}_\infty \in (0,\infty)$, there is an exact balance between the scale of the bias and the scale of the fluctuations; the rescaled quantity $T^{1/2}_m \, \BK{ \pi_m(\varphi) - \pi(\varphi)}$ converges to a non-centred Gaussian distribution. Finally, in the case where $\mathbb{B}_m \to \infty$, the bias dominates and the rescaled quantity $\BK{T_m^{-1}\sum_{k=1}^{m} \delta_k^2}^{-1} \, \BK{ \pi_m(\varphi) - \pi(\varphi)}$ converges in probability to a quantity $\mu(\varphi) \in \mathbb{R}$ whose exact value is described in the sequel. The strategy of the proof is standard; the solution $h$ of the Poisson equation
\begin{equation} \label{eq.Poisson}
\varphi - \pi(\varphi) = \mathcal{A} h
\end{equation}
is introduced so that the additive functional $\pi_m(\varphi)$ of the trajectory of the Markov process $\{\theta_k\}_{k \geq 0}$ can be expressed as the sum of a martingale and a remainder term. A central limit theorem for martingales can then be invoked to describe the asymptotic behaviour of the fluctuations.
\begin{theorem} \label{thm.clt} {\bf (Fluctuations)}
Let the step-sizes $(\delta_m)_{m \geq 1}$ satisfy Assumption \ref{ass:step-sizes} and assume that Assumption \ref{ass:Lyap}
holds for an exponent $p_H \geq 5$.
Let $\varphi:\mathbb{R}^d \to \mathbb{R}$ be a test function and assume that the unique solution $h:\mathbb{R}^d \to \mathbb{R}$ to the Poisson Equation \eqref{eq.Poisson} satisfies $\|\nabla^{n}h(\theta)\| \lesssim V^{p_H}(\theta)$ for $n\le 4$ and has a bounded fifth derivative. Define $\sigma^2(\varphi) = \pi\BK{\norm{\nabla h}^2}$.
\begin{itemize}
\item
In case the fluctuations dominate, i.e.\ $\mathbb{B}_m \to 0$, the following convergence in distribution holds,
\begin{equation} \label{eq.clt.unbiased}
\lim_{m \to \infty}\; T^{1/2}_m \, \big\{ \pi_m(\varphi) - \pi(\varphi) \big\}
\;=\;
\ensuremath{\operatorname{N}}\big(0, \sigma^2(\varphi) \big).
\end{equation}
\item
In case the fluctuations and the bias are on the same scale, i.e.\ $\mathbb{B}_m \to \mathbb{B}_{\infty} \in (0,\infty)$, the following convergence in distribution holds,
\begin{equation} \label{eq.clt.biased}
\lim_{m \to \infty}\; T^{1/2}_m \, \big\{ \pi_m(\varphi) - \pi(\varphi) \big\}
\;=\;
\ensuremath{\operatorname{N}}\big(\mu(\varphi), \sigma^2(\varphi) \big),
\end{equation}
with the asymptotic bias
$$\mu(\varphi) = -\mathbb{B}_{\infty} \EE\left[ \frac{1}{8} \nabla^2 h(\Theta) \widehat{\nabla \log \pi}(\Theta, \mathcal{U} )^2 + \frac{1}{4} \nabla^3 h(\Theta) \nabla \log \pi(\Theta) + \frac{1}{24}\nabla^4 h(\Theta) \right]$$ where the random variables $\Theta \dist \pi$ and $ \mathcal{U} $ are independent.
\item
In case the bias dominates, i.e.\ $\mathbb{B}_m \to \infty$, the following limit holds in probability,
\begin{equation} \label{eq.cv.proba}
\lim_{m \to \infty} \; \frac{ \pi_m(\varphi) - \pi(\varphi)}{T_m^{-1}\sum_{k=1}^{m} \delta_k^2}
\;=\; \mu(\varphi) .
\end{equation}
\end{itemize}
\end{theorem}
\begin{proof}
The proof follows the strategy described in \cite{lamberton2002recursive}, with the additional difficulty that only unbiased estimates of the drift term of the Langevin diffusion are available. We use the decomposition
\begin{align} \label{eq.decomposition.clt}
\pi_m(\varphi)-\pi(\varphi)
&=
\curBK{ \frac{\sum_{k=0}^{m-1} \delta_{k+1} \, \mathcal{A} h(\theta_k)
- \BK{ h(\theta_{k+1})- h(\theta_k) }}{T_m} }
+
\curBK{ \frac{h(\theta_m) - h(\theta_0)}{T_m} }.
\end{align}
A fifth order Taylor expansion and Equation \eqref{eq.sgld} yield that
\begin{align}
h(\theta_{k+1})- h(\theta_k)
&=
\sum_{n=1}^4 \curBK{ \sum_{i=0}^n \mathcal{C} ^{(k)}_{n,i} \, \delta_{k+1}^{(n+i)/2} }
+
\nabla^5 h(\xi_k) \, \BK{ \theta_{k+1} - \theta_k}^5 / 5!.
\end{align}
In the above, we have defined $ \mathcal{C} ^{(k)}_{n,i} \equiv \BK{ 2^i \, i! \, (n-i)! }^{-1} \, \nabla^n h(\theta_k) \widehat{\nabla \log \pi}(\theta_k, \mathcal{U} _{k+1})^i \eta_{k+1}^{n-i}$; the quantity $\xi_k$ lies between $\theta_k$ and $\theta_{k+1}$. It follows from the expression \eqref{eq.generator} of the generator $\mathcal{A}$ of the Langevin diffusion \eqref{eq:overdampedLangevin} and the decomposition \eqref{eq.decomposition.clt} that $\pi_m(\varphi)-\pi(\varphi) = \mathscr{F} _m + \mathscr{B} _m + \mathscr{R} _m$ where the fluctuation and bias terms are given by
\begin{align*}
\mathscr{F} _m &\equiv
-\frac{1}{T_m} \sum_{k=0}^{m-1} \mathcal{C} ^{(k)}_{1,0} \delta_{k+1}^{1/2}
\quad \textrm{and} \quad
\mathscr{B} _m \equiv
-\frac{1}{T_m} \sum_{k=0}^{m-1} \curBK{ \, \mathcal{C} ^{(k)}_{2,2} + \mathcal{C} ^{(k)}_{3,1} + \mathcal{C} ^{(k)}_{4,0} \, } \delta_{k+1}^{2}
\end{align*}
while the remainder term reads
\begin{equation} \label{eq.remainder.term}
\begin{aligned}
\mathscr{R} _m
&\equiv
-\frac{1}{T_m} \sum_{k=0}^{m-1} \curBK{
\frac12 \, H(\theta_k, \mathcal{U} _{k+1})\, \nabla h(\theta_k)
+
\frac12 \, \BK{\eta^2_{k+1} - 1} \, \nabla^2 h(\theta_k) } \, \delta_{k+1} \\
&\quad -\frac{1}{T_m} \sum_{k=0}^{m-1} \curBK{
\sum_{(n,i) \in \mathcal{I} _{ \mathscr{R} }} \mathcal{C} ^{(k)}_{n,i} \, \delta_{k+1}^{(n+i)/2} }
-\frac{1}{T_m} \sum_{k=0}^{m-1} \nabla^5 h(\xi_k) \, \BK{ \theta_{k+1} - \theta_k}^5 / 5! \\
&\quad + \curBK{ \frac{h(\theta_m) - h(\theta_0)}{T_m} }
\end{aligned}
\end{equation}
for $ \mathcal{I} _{ \mathscr{R} } = \bigcup_{p \in \{3,5,6,7,8\}} \mathcal{I} _{ \mathscr{R} ,p}$ and $ \mathcal{I} _{ \mathscr{R} ,p} \equiv \curBK{ (n,i) \in [1:4] \times [0:4] \, : \, i \leq n, \, i+n = p }$. We will show that the remainder term is negligible in the sense that each term on the right-hand side of Equation \eqref{eq.remainder.term}, when multiplied by either $T_m^{1/2}$ or $T_m(\sum_{k=0}^{m-1} \delta_{k+1}^2)^{-1}$, converges in probability to zero; in other words, each one of these terms is dominated asymptotically by either the fluctuations or the bias and is thus negligible. We then show that when multiplied by $T_m^{1/2}$, the fluctuation term converges in distribution to $\ensuremath{\operatorname{N}}(0,\sigma^2(\varphi))$. Finally, we show that the bias term converges to $\mu(\varphi)$ when rescaled by its typical scale, $T_m(\sum_{k=0}^{m-1} \delta_{k+1}^2)^{-1}$. Putting these results together under the three cases of $\mathbb{B}_m\to 0$, $\mathbb{B}_m\to \mathbb{B}_\infty\in(0,\infty)$ and $\mathbb{B}_m\to \infty$ leads to the results of the Theorem.\\
\vspace*{1em}
\noindent
{\bf Remainder term:} we start by proving that the term $ \mathscr{R} _m$ is negligible. The term $\curBK{h(\theta_m)-h(\theta_0)} / T_m^{1/2}$ converges to zero in probability because $\abs{ h(\theta)} \lesssim V^{p_H}(\theta)$ and Lemma \ref{lem:stability} shows that $\sup_{m \geq 0} \, \EE[V^{p_H}(\theta_m)]$ is almost surely finite. Similarly, Assumptions \ref{ass:step-sizes} and \ref{ass:Lyap} and Lemma \ref{lem:stability} yield that
\begin{align*}
\E{\nabla^5 h(\xi_k) \, \BK{ \theta_{k+1} - \theta_k}^5}
\lesssim
\E{ \abs{\eta_{k+1}}^5 } \, \delta_{k+1}^{5/2}
+ \E{ \abs{\widehat{\nabla\log\pi}(\theta_k, \mathcal{U} _{k+1})} ^5} \delta_{k+1}^5 \lesssim \delta_{k+1}^{5/2}
\end{align*}
from which it follows that $\curBK{ \sum_{k=0}^{m-1} \nabla^5 h(\xi_k) \, \BK{ \theta_{k+1} - \theta_k}^5} / \curBK{\sum_{k=0}^{m-1} \delta_{k+1}^2}$ converges to zero in probability; we have exploited the fact that $\nabla^5 h$ is assumed to be globally bounded. Essentially the same argument yields that the high-order terms are asymptotically negligible: for $(n,i) \in \mathcal{I} _{ \mathscr{R} ,p}$ and $p \in \{5,6,7,8\}$ the limit
\begin{equation*}
\lim_{m \to \infty} \; \frac{\sum_{k=0}^{m-1} \mathcal{C} ^{(k)}_{n,i} \, \delta_{k+1}^{(n+i)/2}}{\sum_{k=0}^{m-1} \delta_{k+1}^2} \; = \; 0
\end{equation*}
holds in probability because the coefficients $ \mathcal{C} ^{(k)}_{n,i}$ are uniformly bounded in expectation and the quantity $\BK{ \sum_{k=0}^{m-1} \delta_{k+1}^{(n+i)/2}}/\BK{ \sum_{k=0}^{m-1} \delta_{k+1}^2}$ converges to zero since $(n+i)/2 \geq 5/2$ and $\delta_k\to 0$. To conclude, one needs to verify that the low order terms are also negligible in the sense that the limit
\begin{align*}
\lim_{m \to \infty} \; \frac{ \sum_{k=0}^{m-1} X^{(k)}_{n,i} \delta_{k+1}^{(n+i)/2}}{T_m^{1/2}} = 0
\end{align*}
holds in probability with
$X^{(k)}_{1,1} = \nabla h(\theta_k) \, H(\theta_k, \mathcal{U} _{k+1})$ and
$X^{(k)}_{2,0} = \nabla^2 h(\theta_k) (\eta_{k+1}^2-1)$ and
$X^{(k)}_{2,1} = - \mathcal{C} ^{(k)}_{2,1}$ and
$X^{(k)}_{3,0} = - \mathcal{C} ^{(k)}_{3,0}$. Since $\CE{ X^{(k)}_{n,i} }{\mathcal{F}_k} = 0$ where $\mathcal{F}_k = \sigma \BK{\theta_0, \ldots, \theta_k}$ is the natural filtration associated to the process $\BK{\theta_k}_{k \geq 0}$ it follows that
\begin{align*}
\EE \sqBK{ \BK{ \frac{\sum_{k=0}^{m-1} X^{(k)}_{n,i} \delta_{k+1}^{(n+i)/2}}{T_m^{1/2} } }^2 }
= \frac{ \sum_{k=0}^{m-1} \EE \sqBK{ ( X^{(k)}_{n,i})^2 } \, \delta_{k+1}^{n+i}}{T_m}
\lesssim \frac{\sum_{k=0}^{m-1} \delta_{k+1}^{n+i}}{T_m} \to 0.
\end{align*}
We made use of the fact that the expectations $\EE \sqBK{ (X^{(k)}_{n,i})^2}$ are uniformly bounded for all $k\ge 0$ by the same arguments as above, and that the final expression converges to 0 since $n+i\ge 2$, $\delta_m\to 0$ and $T_m\to \infty$. This concludes the proof that the remainder term $ \mathscr{R} _m$ is asymptotically negligible.\\
\vspace*{1em}
\noindent {\bf Fluctuation term:} we now prove that the fluctuation term converges in distribution at the Monte-Carlo rate towards a Gaussian distribution,
\begin{align*}
T_m^{1/2} \, \mathscr{F} _m
\equiv
-\frac{\sum_{k=0}^{m-1} \nabla h(\theta_k) \, \delta^{1/2}_{k+1} \, \eta_{k+1}}{T_m^{1/2}}
\to \ensuremath{\operatorname{N}} \BK{ 0,\sigma^2(\varphi) }.
\end{align*}
Using the standard martingale central limit theorem (e.g. Theorem $3.2$, Chapter $3$ of \citep{hall1980martingale}), it suffices to verify that for any $\varepsilon>0$ the following limits hold in probability,
\begin{align*}
\lim_{m \to \infty}
\sum_{k=0}^{m-1} \frac{\EE_k \sqBK{ Z_k^2 \, \mathbb{I}\BK{Z_k^2 > T_m\varepsilon} } }{T_m} = 0
\quad \textrm{and} \quad
\lim_{m \to \infty}
\frac{ \sum_{k=0}^{m-1} \EE_k \sqBK{ Z_k^2 } }{ T_m } = \sigma^2(\varphi)
\end{align*}
with $Z_k \eqdef \nabla h(\theta_k) \, \delta^{1/2}_{k+1} \, \eta_{k+1}$. Since $\EE_k \sqBK{ Z_k^2 } = \nabla h(\theta_k)^2 \, \delta_{k+1}$ and the function $\theta \mapsto \nabla h(\theta)^2$ satisfies the assumptions of Theorem \ref{thm.LLN}, the second limit directly follows from Theorem \ref{thm.LLN}. For proving the first limit, note that the Cauchy-Schwarz inequality and the boundedness of $\nabla h$ imply that $\EE_k \sqBK{ Z_k^2 \, \mathbb{I}\BK{Z_k^2 > T_m\varepsilon}} \lesssim \delta_{k+1} \, \PP \sqBK{ \delta_{k+1} \, \norm{ \nabla h }^2_{\infty} \, \eta_{k+1}^2 > T_m \, \varepsilon}^{1/2}$; Markov's inequality thus yields that
\begin{equation*}
\sum_{k=0}^{m-1} \EE_k\big[ Z_k^2 \, I\big(Z_k^2 > T_m\varepsilon\big)\big] / T_m
\lesssim
\frac{ \sum_{k=0}^{m-1} \delta_{k+1}^2 } { T_m^2 \, \varepsilon}.
\end{equation*}
Since $T_m^{-2} \, \sum_{k=0}^{ m-1} \delta_{k+1}^2 \to 0$, the conclusion follows.\\
\vspace*{1em}
\noindent
{\bf Bias term:} we conclude by proving that the bias term is such that the limit
\begin{align*}
\lim_{m \to \infty} \; \frac{ \mathscr{B} _m}{\sum_{k=1}^{m} \delta_k^2 / T_m} \, \; = \; \mu(\varphi)
\end{align*}
holds in probability. The quantity $ \mathscr{B} _m \, \BK{ \sum_{k=1}^{m} \delta_k^2 / T_m}^{-1}$ can also be expressed as
\begin{align} \label{eq.bias.cv}
\frac{\sum_{k=0}^{m-1} \Psi(\theta_k) \, \delta_{k+1}^2}{\sum_{k=0}^{m-1} \delta_{k+1}^2}
+ \frac{\sum_{k=0}^{m-1} \Delta M_k \, \delta_{k+1}^2}{\sum_{k=0}^{m-1} \delta_{k+1}^2}
\end{align}
for a martingale difference term $\Delta M_k \equiv \BK{ \mathcal{C} ^{(k)}_{2,2} + \mathcal{C} ^{(k)}_{3,1} + \mathcal{C} ^{(k)}_{4,0} } - \Psi(\theta_k)$ where $\Psi(\theta_k) \equiv \CE{ \mathcal{C} ^{(k)}_{2,2} + \mathcal{C} ^{(k)}_{3,1} + \mathcal{C} ^{(k)}_{4,0} }{\mathcal{F}_k}$ and $\BK{ \mathcal{C} ^{(k)}_{2,2} + \mathcal{C} ^{(k)}_{3,1} + \mathcal{C} ^{(k)}_{4,0}}$ equals
\begin{align*}
\frac{1}{8} \nabla^2 h(\theta_k) \widehat{\nabla \log \pi}(\theta_k, \mathcal{U} _{k+1})^2
+ \frac{1}{4} \nabla^3 h(\theta_k) \widehat{\nabla \log \pi}(\theta_k, \mathcal{U} _{k+1})\eta_{k+1}^2
\quad + \frac{1}{24} \nabla^4 h(\theta_k) \eta_{k+1}^4.
\end{align*}
Under the assumptions of Theorem \ref{thm.clt}, the function $\Psi$ satisfies the hypotheses of Theorem \ref{thm.LLN} applied to the weight sequence $\{\delta_k^2\}_{k \geq 0}$; it follows that the first term in Equation \eqref{eq.bias.cv} converges almost surely to $\mu(\varphi)$. It remains to prove that the second term in Equation \eqref{eq.bias.cv} converges almost surely to zero. By Lemma \ref{lem.MR}, it suffices to prove that the martingale
\begin{align*}
m \mapsto \sum_{k=0}^m \frac{\Delta M_k \, \delta_{k+1}^2}{\sum_{j=1}^{k+1} \delta_{j+1}^2}
\end{align*}
is bounded in $L^2$. Under the assumptions of Theorem \ref{thm.clt}, Lemma \ref{lem:stability} yields that the martingale difference term $\Delta M_k$ is uniformly bounded in $L^2$, from which the conclusion readily follows.
\end{proof}
For the standard choice of step-sizes $\delta_m = (m_0 + m)^{-\alpha}$ the statistical fluctuations dominate in the range $1/3 < \alpha \leq 1$, there is an exact balance between bias and fluctuations for $\alpha=1/3$, and the bias dominates for $0 < \alpha < 1/3$. The optimal rate of convergence is obtained for $\alpha = 1/3$ and leads to an algorithm that converges at rate $m^{-1/3}$.
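To make these regimes concrete, consider the polynomial step-sizes $\delta_m = (m_0+m)^{-\alpha}$ with $0<\alpha<1$. The following back-of-the-envelope computation, which ignores multiplicative constants and is only meant to illustrate Theorem \ref{thm.clt}, recovers the thresholds just stated: for $\alpha < 1/2$,
\begin{align*}
T_m \asymp m^{1-\alpha},
\qquad
\sum_{k=1}^{m} \delta_k^2 \asymp m^{1-2\alpha},
\qquad
\mathbb{B}_m = T_m^{-1/2} \, \sum_{k=1}^{m} \delta_k^2 \asymp m^{(1-3\alpha)/2}
\end{align*}
(for $\alpha \geq 1/2$ the sum $\sum_{k \leq m} \delta_k^2$ grows at most logarithmically, so that $\mathbb{B}_m \to 0$ a fortiori), whence $\mathbb{B}_m \to 0$ for $\alpha>1/3$, $\mathbb{B}_m \to \mathbb{B}_\infty \in (0,\infty)$ for $\alpha=1/3$ and $\mathbb{B}_m \to \infty$ for $\alpha<1/3$. The mean squared error of $\pi_m(\varphi)$ therefore behaves like
\begin{align*}
\max\curBK{ \, T_m^{-1}, \; \BK{ T_m^{-1} \textstyle\sum_{k=1}^{m} \delta_k^2 }^2 \, }
\asymp
\max\curBK{ \, m^{-(1-\alpha)}, \; m^{-2\alpha} \, },
\end{align*}
which is minimised at $\alpha=1/3$, where both terms are of order $m^{-2/3}$; this corresponds to the rate $m^{-1/3}$ for the root mean squared error quoted above.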
\section{Diffusion limit}
\label{sec.diff.lim}
In this section we show that, when observed on the right (inhomogeneous) time scale, the sample path of the SGLD algorithm converges to the continuous time Langevin diffusion of Equation \eqref{eq:overdampedLangevin}, confirming the heuristic discussion in \citet{welling2011bayesian}.
The result is based on the continuity properties of the It\^o map $\mathcal{I}: \mathcal{C}([0,T], \mathbb{R}^d) \to \mathcal{C}([0,T], \mathbb{R}^d)$, which sends a continuous path $w \in \mathcal{C}([0,T], \mathbb{R}^d)$ to the unique solution $v = \mathcal{I}(w)$ of the integral equation,
\begin{equation*}
\label{eq.ito.integral.equation}
v_t = \theta_0 +
\frac12 \, \int_{s=0}^t \, \nabla \log \pi(v_s) \, ds +
w_t
\qquad \textrm{for all} \quad
t \in [0,T].
\end{equation*}
If the drift function $\theta \mapsto \frac12 \nabla \log \pi(\theta)$ is globally Lipschitz, then the It\^o map $\mathcal{I}$ is well defined and continuous. Further, the image $\mathcal{I}(W)$ under the It\^o map of a standard Brownian motion $W$ on $[0,T]$ can be seen to solve the Langevin diffusion \eqref{eq:overdampedLangevin}.
The approach, inspired by ideas in \citet{mattingly2012diffusion,pillai2012optimal}, is to construct a sequence of coupled Markov chains $(\theta^{(r)})_{r\ge 1}$, each started at the same initial state $\theta_0\in \mathbb{R}^d$ and evolved according to the SGLD algorithm with step-sizes $\delta^{(r)}\eqdef (\delta^{(r)}_k)_{k=1}^{m(r)}$ such that $$\sum_{k=1}^{m(r)} \delta^{(r)}_k=T$$ and with increasingly fine mesh sizes
$\mathrm{mesh}(\delta^{(r)}) \to 0$ with
\begin{equation*}
\mathrm{mesh}(\delta^{(r)})
\eqdef
\max \curBK{ \delta^{(r)}_k \;:\; 1 \leq k \leq m(r) }.
\end{equation*}
Define $T^{(r)}_0= 0$ and $T^{(r)}_k= \delta^{(r)}_1+\cdots+\delta^{(r)}_k$ for each $k\ge 1$.
The Markov chains are coupled to $W$ as follows:
\begin{align} \label{eq.seq.MC}
\left\{
\begin{array}{ll}
\eta^{(r)}_{k} &= (\delta^{(r)}_k)^{-1/2} \BK{ W(T^{(r)}_k)-W(T^{(r)}_{k-1}) } \\
\theta^{(r)}_{k} &= \theta^{(r)}_{k-1} +
\frac12 \, \delta^{(r)}_{k} \curBK{ \nabla \log \pi(\theta^{(r)}_{k-1}) + H(\theta^{(r)}_{k-1}, \mathcal{U} ^{(r)}_{k}) \, }
\,+\,
(\delta^{(r)}_{k})^{1/2} \, \eta^{(r)}_{k},
\end{array}
\right.
\end{align}
for an i.i.d.\ collection of auxiliary random variables $( \mathcal{U} ^{(r)}_k)_{r\ge 1, k \geq 1}$. Note that $(\eta^{(r)}_k)_{k\ge 1}$ form an i.i.d.\ sequence of $\ensuremath{\operatorname{N}}(0,1)$ variables for each $r$.
We can construct piecewise affine continuous time sample paths $(S^{(r)})_{r \ge 1}$ by linearly interpolating the Markov chains,
\begin{equation} \label{eq.interpolation.S}
S^{(r)} \BK{ x T^{(r)}_{k-1} + (1-x) T^{(r)}_{k} } =
x \, \theta^{(r)}_{k-1} + (1-x) \, \theta^{(r)}_{k},
\end{equation}
for $x\in[0,1]$. The approach then amounts to showing that each $S^{(r)}$ can be expressed as $\mathcal{I}(\widetilde{W}^{(r)})+e^{(r)}$, where $\widetilde{W}^{(r)}$ is a sequence of stochastic processes converging to $W$ and $e^{(r)}$ is asymptotically negligible, and making use of the continuity properties of the It\^o map $\mathcal{I}$.
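Before stating the result, we give a minimal Python sketch of the coupled construction \eqref{eq.seq.MC}--\eqref{eq.interpolation.S}. It is not part of the proof: the standard normal target, the Gaussian gradient-noise term and the dyadic meshes are illustrative choices of ours, made only to show how every chain reuses the increments of one and the same Brownian path.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T = 1.0                                    # time horizon

def grad_log_pi(theta):
    return -theta                          # drift of an illustrative N(0,1) target

def H(theta):
    return rng.normal(scale=0.5)           # illustrative zero-mean gradient noise

# One fine Brownian path on [0, T]; every coarser mesh reuses sums of its increments.
n_fine = 2 ** 12
dW_fine = rng.normal(scale=np.sqrt(T / n_fine), size=n_fine)

def coupled_sgld_path(r, theta0=0.0):
    """SGLD chain of (eq.seq.MC) on the uniform mesh with 2**r steps, coupled to W."""
    m = 2 ** r
    delta = T / m
    # Brownian increment over a coarse step = sum of the fine increments it covers.
    dW = dW_fine.reshape(m, n_fine // m).sum(axis=1)
    theta = np.empty(m + 1)
    theta[0] = theta0
    for k in range(m):
        drift = 0.5 * (grad_log_pi(theta[k]) + H(theta[k]))
        theta[k + 1] = theta[k] + delta * drift + dW[k]
    return theta        # linear interpolation of these values gives S^(r)

# As r grows, the interpolated paths approach the Langevin diffusion driven by W.
print({r: coupled_sgld_path(r)[-1] for r in (4, 8, 12)})
\end{verbatim}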
\begin{theorem} \label{thm.diff.lim}
Let Assumption \ref{ass:Lyap} hold and suppose that the drift function $\theta \mapsto (1/2) \nabla \, \log \pi(\theta)$ is globally Lipschitz on $\mathbb{R}^d$. If $\mathrm{mesh}(\delta^{(r)})\to 0$ as $r\to\infty$, then the sequence of continuous time processes $(S^{(r)})_{r\ge 1}$ defined in Equation \eqref{eq.interpolation.S} converges weakly on $\big( \mathcal{C}([0,T], \mathbb{R}^d), \| \cdot \|_{\infty} \big)$ to the Langevin diffusion \eqref{eq:overdampedLangevin} started at $S_0=\theta_0$.
\end{theorem}
\begin{proof}
Since the drift term $s \mapsto (1/2) \, \nabla \log \pi(s)$ is globally Lipschitz on $\mathbb{R}^d$, Lemma $3.7$ of \citep{mattingly2012diffusion} shows that the It\^o map $\mathcal{I}: \mathcal{C}([0,T], \mathbb{R}^d) \to \mathcal{C}([0,T], \mathbb{R}^d)$ is well-defined and continuous, under the topology on the space $\mathcal{C}([0,T], \mathbb{R}^d)$ induced by the supremum norm $\|w\|_{\infty} \equiv \sup \{ |w_t| : 0 \leq t \leq T \}$. By the Continuous Mapping Theorem, because the Langevin diffusion \eqref{eq:overdampedLangevin} can be seen as the image under the It\^o map $\mathcal{I}$ of a standard Brownian motion on $[0,T]$ evolving in $\mathbb{R}^d$, it suffices to verify that the process $S^{(r)}$ can be expressed as $\mathcal{I}(\widetilde{W}^{(r)}) + e^{(r)}$ where $\widetilde{W}^{(r)}$ is a sequence of stochastic processes that converge weakly in $\mathcal{C}([0,T], \mathbb{R}^d)$ to a standard Brownian motion $W$ and $e^{(r)}$ is an error term that is asymptotically negligible in the sense that $\|e^{(r)}\|_{\infty}$ converges to zero in probability.
For convenience, we define $\widetilde{W}^{(r)}$ as the continuous piecewise affine processes that satisfies $\widetilde{W}^{(r)}(T^{(r)}_k) = W(T^{(r)}_k)$ for all $ 0 \leq k \leq m(r)$
and that is affine in between. It follows that for any time $T^{(r)}_{k-1} \leq t \leq T^{(r)}_{k}$ we have
\begin{align*}
S^{(r)}(t) &= S^{(r)}(T^{(r)}_{k-1})
+
\left(
\int_{T^{(r)}_{k-1}}^{t}
\frac12 \nabla \log \pi\big(S^{(r)}(T^{(r)}_{k-1}) \big) du
+
\widetilde{W}^{(r)}(t)-\widetilde{W}^{(r)}(T^{(r)}_{k-1})
\right)\\
&\qquad +
\frac{1}{2} \int_{T^{(r)}_{k-1}}^{t}
H\big( S^{(r)}(T^{(r)}_{k-1}), \mathcal{U} ^{(r)}_k\big) du\\
&=
\underbrace{\theta_0
+
\left(
\int_{0}^{t}
\frac12 \nabla \log \pi\big(S^{(r)}(u) \big) du
+
\widetilde{W}^{(r)}(t)\right)}_{\mathcal{I}(\widetilde{W}^{(r)})(t)}\\
&\qquad
+
\underbrace{\int_{0}^{t}
\frac12 \left(
\nabla \log \pi\big(\widehat{S}^{(r)}(u) \big)
-
\nabla \log \pi\big(S^{(r)}(u) \big)
\right)
du}_{e^{(r)}_1(t)}\\
&\qquad
+
\frac{1}{2} \underbrace{\int_{0}^{t}
H\big( \widehat{S}^{(r)}(u), \mathcal{U} ^{(r)}_k\big) du}_{e^{(r)}_2(t)},
\end{align*}
where $\widehat{S}^{(r)}$ is a piecewise constant (non-continuous) process, $\widehat{S}^{(r)}(t) = S^{(r)}(T^{(r)}_{k-1})=\theta^{(r)}_{k-1}$ for $t\in[T^{(r)}_{k-1},T^{(r)}_k)$.
The process $S^{(r)}$ can thus be expressed as the sum
$
\mathcal{I}(\widetilde{W}^{(r)}) + e^{(r)}_1 + \tfrac12 \, e^{(r)}_2
$.
Since the mesh-size of the partition $\delta^{(r)}$ converges to zero as $r \to \infty$, standard properties of Brownian motions yield that $\widetilde{W}^{(r)}$ converges weakly in $\big(\mathcal{C}([0,t], \mathbb{R}^d), \| \cdot \|_{\infty,[0,T]}\big)$ to $W$, a standard Brownian motion in $\mathbb{R}^d$. To conclude the proof, we need to check that the quantities $\|e^{(r)}_1\|_{\infty}$ and $\|e^{(r)}_2\|_{\infty}$ converge to zero in probability.
To prove that $\|e^{(r)}_2\|_{\infty}$ converges to zero in probability, it suffices to show that $\E{ \| e^{(r)}_2 \|_\infty^2 }\to 0$. We have,
\begin{align*}
\E{ \|e_2^{(r)}\|_{\infty}^2 }
&\le 4 \, \E{ \|e_2^{(r)}(T)\|^2 }
= 4 \, \sum_{k=1}^{m(r)} \big(\delta^{(r)}_k \big)^2
\E{H\BK{ \theta^{(r)}_{k-1}, \mathcal{U} ^{(r)}_k}^2} \\
&\lesssim \sum_{k=1}^{m(r)} \big(\delta^{(r)}_k \big)^2 \E{ V(\theta^{(r)}_{k-1}) }
\leq \mathrm{mesh}(\delta^{(r)}) \, \sum_{k=1}^{m(r)} \delta^{(r)}_k \E{ V(\theta^{(r)}_{k-1}) } \\
&\leq \mathrm{mesh}(\delta^{(r)}) \, T \, \sup \curBK{ \E{ V(\theta^{(r)}_{k-1}) } \,:\, r\ge 1, 1\le k\le m(r) }
\lesssim \mathrm{mesh}(\delta^{(r)}).
\end{align*}
We have used Doob's martingale inequality, Assumption \ref{ass:Lyap} and Lemma \ref{lem:stability}.
Since $\mathrm{mesh}(\delta^{(r)})$ converges to zero, the conclusion follows.
To prove that $\|e_1^{(r)}\|_{\infty}$ converges to zero in probability, it suffices to show that $\E{ \|e_1^{(r)}\|_{\infty}}\to 0$. To this end, we use Equation \eqref{eq.seq.MC} and note that
since the drift function $\theta\mapsto \frac{1}{2} \nabla \log \pi(\theta)$ is globally Lipschitz, for each $T^{(r)}_{k-1} \leq u \leq T^{(r)}_k$ we have,
\begin{align*}
\; &\norm{ \nabla \log \pi\big(\widehat{S}^{(r)}(u) \big) - \nabla \log \pi\big(S^{(r)}(u)\big) }
\lesssim \norm{ \theta^{(r)}_{k}-\theta^{(r)}_{k-1} }\\
&\qquad \lesssim
\|\nabla \log \pi(\theta^{(r)}_{k-1}) \| \, \delta^{(r)}_k
+
\|H\big( \theta^{(r)}_{k-1}, \mathcal{U} ^{(r)}_k\big)\| \, \delta^{(r)}_k
+
\sqrt{\delta^{(r)}_k} \, \| \eta^{(r)}_k \|.
\end{align*}
It follows that
\begin{align*}
\E{ \|e^{(r)}_1\|_{\infty} }
\lesssim \sum_{k=1}^{m(r)} \delta^{(r)}_k \, \EE \left(
\|\nabla \log \pi(\theta^{(r)}_{k-1}) \| \, \delta^{(r)}_k
+
\|H\big( \theta^{(r)}_{k-1}, \mathcal{U} ^{(r)}_k\big)\| \, \delta^{(r)}_k
+
\sqrt{\delta^{(r)}_k} \, \| \eta^{(r)}_k \| \right).
\end{align*}
Since $\mathrm{mesh}(\delta^{(r)})$ converges to zero and by Assumption \ref{ass:Lyap} and Lemma \ref{lem:stability} the suprema
\begin{align*}
\left\{
\begin{array}{ll}
\sup& \curBK{ \EE \big[ \|\nabla \log \pi(\theta^{(r)}_k) \| \big] \,:\; r \geq 1, 1 \leq k \leq m(r) }, \\
\sup& \curBK{ \EE \big[ \| H\big( \theta^{(r)}_k, \mathcal{U} _k\big) \| \big] \,:\; r \geq 1, 1 \leq k \leq m(r) }
\end{array}
\right.
\end{align*}
are finite, it readily follows that $\|e^{(r)}_1\|_{\infty}$ converges to zero in expectation.
\end{proof}
\section{Numerical Illustrations}
\label{sec.numerics}
In this section we illustrate the use of the SGLD method on a simple Gaussian toy model and on a Bayesian logistic regression problem. We verify that both models satisfy Assumption \ref{ass:Lyap}, the main assumption needed for our asymptotic results to hold. Simulations are then performed to empirically confirm our theory; for step-size sequences of the form $\delta_m=(m_0+m)^{-\alpha}$, both the rate of decay of the MSE and the impact of the sub-sampling scheme are investigated. The main purpose of this article is to establish the missing theoretical foundation of stochastic gradient methods for the approximation of expectations. For more exhaustive simulation studies we refer to
\cite{welling2011bayesian,AhnKorWel2012,PatTeh2013a,chen2014stochastic}. By considering a logistic regression model, we demonstrate that the SGLD can be advantageous over the Metropolis-Adjusted Langevin Algorithm (MALA) if the available computational budget only allows a few iterations through the whole data set, see Section \ref{sec:logreg}.
\subsection{Linear Gaussian model}
\label{ex:simpleGaussian}
Consider $N$ independent and identically distributed observations $(x_i)_{i=1}^N$ from the Gaussian location model given by
\begin{eqnarray*}
x_{i} \mid \theta \; \sim \; \ensuremath{\operatorname{N}}(\theta,\sigma_{x}^{2}).
\end{eqnarray*}
We use a Gaussian prior $\theta \sim \ensuremath{\operatorname{N}}(0,\sigma_{\theta}^{2})$ and assume that the variance hyper-parameters $\sigma_\theta^2$ and $\sigma_x^2$ are both known.
The posterior density $\pi(\theta)$ is normally distributed with mean $\mu_p$ and variance $\sigma_p^2$ given by
\begin{equation*}
\mu_p = \bar{x} \, \Big(1+\frac{\sigma_x^2}{N \sigma_{\theta}^2} \Big)^{-1}
\qquad \textrm{and} \qquad
\sigma_p^2= \frac{\sigma_x^2}{N} \, \Big(1+\frac{\sigma_x^2}{N \sigma_{\theta}^2} \Big)^{-1}
\end{equation*}
where $\bar{x} = (x_1 + \ldots + x_N)/N$ is the sample average of the observations. In this case, we have
\begin{align*}
\nabla \log \pi(\theta)
&= -\frac{\theta - \mu_p}{\sigma_p^2}
\quad \textrm{and} \quad
H(\theta, \mathcal{U} )
=
\Big\{ (N/n) \sum_{j \in \mathcal{I}_n( \mathcal{U} )} x_j
- \sum_{1 \leq i \leq N} x_i \Big\} / \sigma_x^2
\end{align*}
for a random subset $\mathcal{I}_n( \mathcal{U} ) \subset [N]$ of cardinality $n$.
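Before verifying the assumptions, we record a minimal Python sketch of the SGLD recursion and of the step-size weighted average $\pi_m$ for this model; the simulated data, the random seed and the step-size constants are illustrative choices only, and $\varphi(\theta)=\theta$ is used as test function for brevity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data and hyper-parameters for the Gaussian location model.
sigma_theta, sigma_x, N, n = 1.0, 5.0, 100, 10
x = rng.normal(loc=rng.normal(scale=sigma_theta), scale=sigma_x, size=N)
shrink = 1.0 + sigma_x**2 / (N * sigma_theta**2)
mu_p, sigma_p2 = x.mean() / shrink, (sigma_x**2 / N) / shrink  # posterior mean, variance

def noisy_grad_log_pi(theta):
    """Unbiased gradient estimate grad log pi(theta) + H(theta, U)."""
    idx = rng.choice(N, size=n, replace=False)                 # random subset I_n(U)
    return -theta / sigma_theta**2 + (N / n) * np.sum(x[idx] - theta) / sigma_x**2

def sgld(m_steps, alpha=1.0 / 3.0, m0=10.0, theta0=0.0):
    """Run SGLD and return the step-size weighted average pi_m(phi), phi(theta) = theta."""
    theta, weighted_sum, T = theta0, 0.0, 0.0
    for m in range(1, m_steps + 1):
        delta = (m0 + m) ** (-alpha)                           # step size delta_m
        weighted_sum, T = weighted_sum + delta * theta, T + delta
        theta += 0.5 * delta * noisy_grad_log_pi(theta) + np.sqrt(delta) * rng.normal()
    return weighted_sum / T

print("pi_m estimate of the posterior mean:", sgld(50_000), " exact:", mu_p)
\end{verbatim}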
\subsubsection{Verification of Assumption \ref{ass:Lyap}}
We verify in this section that Assumption \ref{ass:Lyap} is satisfied for the following choice of Lyapunov function,
\begin{equation*}
V(\theta)=1+\frac{(\theta-\mu_p)^2}{2 \, \sigma_p^2}.
\end{equation*}
Since the error term $H(\theta, \mathcal{U} )$ is globally bounded, the drift $(1/2) \nabla \log \pi$ is linear and the Lyapunov function $V$ is quadratic, Assumptions \ref{ass:Lyap}.1 and \ref{ass:Lyap}.2 are satisfied.
Finally, to verify Assumption \ref{ass:Lyap}.3, it suffices to note that since $\nabla \log \pi(\theta) = -(\theta - \mu_p)/\sigma_p^2$ we have
\begin{align*}
\angleBK{\nabla V(\theta), \frac12 \, \nabla \log \pi(\theta)}
&= -\frac{(\theta - \mu_p)^2}{2 \, \sigma_p^4} = \frac{1-V(\theta)}{\sigma_p^2}.
\end{align*}
In other words, Assumption \ref{ass:Lyap}.3 holds with $\alpha=\beta=1/\sigma_p^2$.
\subsubsection{Simulations}\label{sec:SimGauss}
\begin{figure}
\caption{Decay of the MSE for step sizes $\delta_m \asymp m^{-\alpha}$.}
\label{fig:MSEdecay}
\end{figure}
\begin{figure}
\caption{Rates of decay of the MSE obtained from estimating the asymptotic slopes of the plots in Figure \ref{fig:MSEdecay}.}
\label{fig:MSErate}
\end{figure}
\begin{figure}
\caption{Plots of the MSE multiplied by $T_m$ against the number of steps $m$. The plots are flat for $\alpha\ge 0.33$, demonstrating that the MSE scales as $T_m^{-1}$.}
\label{fig:MSEscaling}
\end{figure}
\begin{figure}
\caption{Behaviour of the mean squared error for different subsample sizes $n$.}
\label{fig:NonAsympMSE}
\end{figure}
We chose $\sigma_\theta=1$, $\sigma_x=5$ and created a data set consisting of $N=100$ data points simulated from the model. We used $n=10$ as the size of the subsets used to estimate the gradients. We evaluated the convergence behaviour of SGLD using the test function $\mathcal{A}\varphi$ where $\varphi(\theta) = \sin\left(\theta-\mu_p -0.5\,\sigma_p \right)$.
We are interested in confirming the asymptotic convergence regimes of Theorem \ref{thm.clt} by running SGLD with a range of step sizes, and plotting the mean squared error (MSE) achieved by the estimate $\pi_m(\mathcal{A}\varphi)$ against the number of steps $m$ of the algorithm to determine the rates of convergence. We used step sizes $\delta_m=(m+m_0(\alpha))^{-\alpha}$, for $\alpha\in\{0.1,0.2,0.3,0.33,0.4,0.5\}$ where $m_0(\alpha)$ is chosen such that $\delta_1$ is less than the posterior standard deviation. According to the Theorem, the MSE should scale as $T_m^{-1}$ for $\alpha> 1/3$, and as $\big(\sum_{k=1}^m \delta_k^2/T_m\big)^2$ for $\alpha\le 1/3$.
The observed MSE is plotted against $m$ on a log-log plot in Figure \ref{fig:MSEdecay}. As predicted by the theory, the optimal step-size exponent is around $\alpha_\star = 1/3$. To be more precise, we estimate the rates of decay by estimating the slopes on the log-log plots. This is plotted in Figure \ref{fig:MSErate}, which also shows a good match to the theoretical rates given in Theorem \ref{thm.clt}, where the best rate of decay is $2/3$, achieved at $\alpha=1/3$. Finally, to demonstrate that there are indeed two distinct regimes of convergence, in Figure \ref{fig:MSEscaling} we have plotted the MSE multiplied by $T_m$. For $\alpha>1/3$, the plots remain flat, showing that the MSE does indeed decay as $T_m^{-1}$. For $\alpha<1/3$, the plots diverge, showing that the MSE decays at a slower rate than $T_m^{-1}$.
For $\alpha=0.33$, Figure \ref{fig:NonAsympMSE} depicts
how the MSE decreases as a function of the number of likelihood evaluations for subsample sizes $n=1,5,10,50,100$.
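A compressed sketch of the experiment just described is given below. It assumes that the function \texttt{sgld} and the posterior mean \texttt{mu\_p} from the sketch above are in scope, replaces $\mathcal{A}\varphi$ by the simpler test function $\varphi(\theta)=\theta$, and uses far fewer replications and step counts than the actual study; it is only meant to indicate how the rates of Figure \ref{fig:MSErate} can be reproduced.
\begin{verbatim}
import numpy as np

# Assumes the function sgld and the posterior mean mu_p from the sketch above
# are in scope; phi(theta) = theta replaces A*phi, and the replication counts
# are far smaller than in the actual study.
alphas = [0.1, 0.2, 0.3, 0.33, 0.4, 0.5]
step_grid = [1_000, 5_000, 25_000]
n_rep = 20

for alpha in alphas:
    mse = [np.mean([(sgld(m_steps, alpha=alpha) - mu_p) ** 2 for _ in range(n_rep)])
           for m_steps in step_grid]
    # On a log-log plot of MSE against m, the slope should approach -(1 - alpha)
    # for alpha > 1/3 and -2 * alpha for alpha < 1/3.
    print(alpha, mse)
\end{verbatim}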
\subsection{Logistic Regression}
We verify in this section that Assumption \ref{ass:Lyap} is satisfied for the following logistic regression model. Consider $N$ independent and identically distributed observations $(y_i)_{i=1}^N$ distributed as
\begin{equation}
\label{eq.logistic}
\mathbb{P}(y_{i} = 1 \mid x_{i},\theta) \; = \; 1 - \mathbb{P}(y_{i} = -1 \mid x_{i},\theta)\; = \;\textrm{logit}\big(\angleBK{\theta, x_{i}} \big)
\end{equation}
for covariate $x_i \in \mathbb{R}^d$, unknown parameter $\theta \in \mathbb{R}^d$ and function $\textrm{logit}(z)=e^z / (1+e^z)$.
We assume a centred Gaussian prior on $\theta \in \mathbb{R}^d$ with positive definite symmetric covariance matrix $C \in \mathbb{R}^{d \times d}$. It follows that
\begin{align*}
\nabla \log \pi(\theta)
&=
-C^{-1}\theta + \sum_{i=1}^{N}\text{logit} \big(-y_{i} \angleBK{\theta,x_{i}} \big) \, y_{i} \, x_{i}\\
H(\theta, \mathcal{U} )
&=
(N/n)\sum_{j \in \mathcal{I}_n(U)} \text{logit} \big(-y_{j} \angleBK{ \theta,x_{j}} \big) \, y_{j} \, x_{j}
-
\sum_{1 \leq i \leq N}\text{logit} \big(-y_{i} \angleBK{\theta,x_{i}} \big) \, y_{i} \, x_{i}
\end{align*}
for a random subset $\mathcal{I}_n( \mathcal{U} ) \subset [N]$ of cardinality $n$.
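For completeness, a minimal Python sketch of the resulting unbiased gradient estimate $\nabla \log \pi(\theta) + H(\theta,\mathcal{U})$ and of one SGLD step is given below; the synthetic data, the prior covariance $C=I$ and the step size are illustrative choices mirroring the experiment of Section \ref{sec:logreg}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Illustrative synthetic data: d = 3, N = 1000, covariates (x_{i,1}, x_{i,2}, 1).
d, N, n = 3, 1000, 30
X = np.column_stack([rng.normal(size=(N, 2)), np.ones(N)])
theta_star = rng.normal(size=d)                 # parameter used to simulate the labels
y = np.where(rng.random(N) < 1.0 / (1.0 + np.exp(-X @ theta_star)), 1.0, -1.0)
C_inv = np.eye(d)                               # prior covariance C = I

def logit(z):
    return 1.0 / (1.0 + np.exp(-z))             # logit(z) = e^z / (1 + e^z)

def noisy_grad_log_pi(theta):
    """Unbiased estimate grad log pi(theta) + H(theta, U) from a subset of size n."""
    idx = rng.choice(N, size=n, replace=False)
    w = logit(-y[idx] * (X[idx] @ theta))       # logit(-y_j <theta, x_j>)
    return -C_inv @ theta + (N / n) * X[idx].T @ (w * y[idx])

# One SGLD step with an illustrative step size.
theta, delta = np.zeros(d), 1e-3
theta = theta + 0.5 * delta * noisy_grad_log_pi(theta) + np.sqrt(delta) * rng.normal(size=d)
print(theta)
\end{verbatim}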
\subsubsection{Verification of Assumption \ref{ass:Lyap}}
We verify in this section that Assumption \ref{ass:Lyap} is satisfied for the Lyapunov function $V(\theta)=1+\| \theta \|^2$.
Since $H(\theta, \mathcal{U} )$ is globally bounded, $\|\nabla V(\theta)\|^{2} = 4\,\|\theta\|^2 \lesssim V(\theta)$, and
\begin{align*}
\|\nabla \log \pi(\theta) \|^2 \lesssim 1 + \|C^{-1} \theta\|^2 \lesssim 1 + \|\theta\|^2 = V(\theta),
\end{align*}
it is straightforward to see that Assumptions \ref{ass:Lyap}.1 and \ref{ass:Lyap}.2 are satisfied. Finally,
\begin{align*}
\angleBK{\nabla V(\theta), \frac12 \, \nabla \log \pi(\theta)}
&=
-\frac12 \angleBK{\theta, C^{-1} \theta} + \frac12 \, \sum_{i=1}^{N}\text{logit} \big(-y_{i} \angleBK{\theta,x_{i}} \big) \, y_{i} \, \angleBK{\theta, x_{i}}\\
&\leq
-\frac{\lambda_{\min}}{2} \|\theta\|^2 + \frac{\sum_{i=1}^N \|x_i\|}{2} \, \|\theta\|
\leq -\frac{\lambda_{\min}}{4} V(\theta) + \beta
\end{align*}
with $\lambda_{\min} > 0$ the smallest eigenvalue of $C^{-1}$ and $\beta \in (0,\infty)$ the global maximum over $\theta \in \mathbb{R}^d$ of the function $\theta \mapsto -\frac{\lambda_{\min}}{4} \|\theta\|^2 + \frac{\sum_{i=1}^N \|x_i\|}{2} \, \|\theta\|$.
\subsubsection{Comparison of the SGLD and the MALA for logistic regression}\label{sec:logreg}
We consider a simulated dataset where $d=3$ and $N=1000$. We set the input covariates
$
x_i=(x_{i,1},x_{i,2},1)
$
with $x_{i,1},x_{i,2}\iid \ensuremath{\operatorname{N}} (0,1)$ for $i=1\dots N$, and use a Gaussian prior $\theta\sim \ensuremath{\operatorname{N}}(0,I)$. We draw a $\theta_0\sim\ensuremath{\operatorname{N}}(0,I)$ and based on it we generate $y_i$ according to the model probabilities \eqref{eq.logistic}. In the following we compare the MALA and the SGLD through their estimates of the variance of the first component.
The findings of this article show that SGLD-based expectation estimates converge at a rate of at most $m^{-\frac{1}{3}}$, which is slower than the standard rate of $m^{-\frac{1}{2}}$ for standard MCMC algorithms such as the MALA. In the following we demonstrate that in the non-asymptotic regime (allowing only a few passes through the data set) the SGLD can nevertheless be advantageous. We start both algorithms at the MAP estimator so that the comparison is not biased by different speeds in finding the mode of the posterior. For a fair comparison we tune the MALA to an acceptance rate of approximately $0.564$ following the findings of \cite{Roberts1998OptimalSMala}. For the SGLD-based variance estimate of the first component with $n=30$ we choose $\delta_m=(a\cdot m+b)^{-0.38}$ as step sizes and optimise over the choices of $a$ and $b$. This is achieved by estimating the MSE, based on $512$ independent runs, for choices of $a$ and $b$ on a log-scale grid. The resulting estimates, based on $20$ and $1000$ effective iterations through the data set respectively, are visualised in the heat maps of Figure \ref{fig:heatmap}; this corresponds to limiting the algorithm to $200$ and $1000000$ likelihood evaluations, respectively. The figures indicate that the range of good parameter choices is roughly the same in both cases. Using the heat map of the estimated MSE after $20$ iterations through the data set, we pick $a=5.89\cdot 10^7$ and $b=7.90\cdot 10^8$ and compare the time behaviour of the SGLD and the MALA algorithms in Figure \ref{fig:LRtrans}. The figure provides simulation evidence that the SGLD algorithm can be advantageous during the first few iterations through the data set. This warrants further investigation, as the initial phase can be quite different from the asymptotic regime.
\begin{figure}
\caption{Estimated MSE of the SGLD-based variance estimate over a log-scale grid of the step-size parameters $a$ and $b$.}
\label{fig:heatmap}
\end{figure}
\begin{figure}
\caption{Time behaviour of the SGLD and the MALA algorithms for the logistic regression example.}
\label{fig:LRtrans}
\end{figure}
\section{Conclusion}
So far, the research on the SGLD algorithm has mainly been focused on extending the methodology. In particular, a parallel version has been introduced in \citet{ahn2014distribuetd} and it has been adapted to natural gradients in \citet{patterson2013stochastic}.
This research has been accompanied by promising simulations.
In contrast, we have focused in this article on providing rigorous mathematical foundations for the SGLD algorithm by showing that the step-size weighted estimator $\pi_m(f)$ is consistent, satisfies a central limit theorem and its asymptotic bias-variance decomposition can be characterised by an explicit functional $\mathbb{B}_m$ of the step-sizes sequence $(\delta_m)_{m \geq 0}$.
The consistency of the algorithm is mainly due to the decreasing step-size procedure, which asymptotically removes the discretization bias and mitigates the effect of using an unbiased estimate of the gradient instead of its exact value.
Additionally, we have proved a diffusion limit result that establishes that, when observed on the right (inhomogeneous) time scale, the sample paths of the SGLD can be approximated by a Langevin diffusion.
The CLT and bias-variance decomposition can be leveraged to show that it is optimal to choose a step-size sequence $(\delta_m)_{m \geq 0}$ that scales as $\delta_m \asymp m^{-1/3}$; the resulting algorithm converges at rate $m^{-1/3}$.
Note that this recommendation differs from the choice $\delta_m \asymp m^{-1/2}$ previously suggested in \citet{welling2011bayesian}.
Our theory suggests that an optimally tuned SGLD method converges at rate $\mathcal{O}(m^{-1/3})$, and is thus asymptotically less efficient than a standard MCMC procedure. We believe that this result does not necessarily preclude the SGLD from being more efficient in the initial transient phase, a behaviour hinted at in Figure \ref{fig:NonAsympMSE};
the detailed study of this (non-asymptotic) phenomenon is an interesting avenue of research.
The asymptotic convergence rate of SGLD depends crucially on the decreasing step sizes, which are required to reduce the effect of the discretization bias due to the lack of a Metropolis-Hastings correction. Another avenue of exploration is to determine more precisely the bias resulting from the discretization of the Langevin diffusion, and to study the effect of the choice of step sizes in terms of the trade-off between bias, variance, and computation.
\appendix
\section{Proof of Lemma \ref{lem.MR}}
\label{sec.proof.lem.MR}
Recall Kronecker's Lemma \citep[Lemma \textrm{IV}.3.2]{shiryaev1996probability} that states that for a non-decreasing and positive sequence $b_m \to \infty$ and another real valued sequence $(a_m)_{m \geq 0}$ such that the series $\sum_{m \geq 0} a_m / b_m$ converges the following limit holds,
\begin{equation*}
\lim_{m \to \infty} \; \frac{\sum_{k=0}^{m} \, a_{k}}{b_m} \; = \; 0.
\end{equation*}
For proving Equation \eqref{eq.martingale.type} it thus suffices to show that the sums $\sum_{k \geq 0} \abs{ \Delta M_k } / T_k$ and $\sum_{k \geq 0} \abs{ X_k } / T_k$ are almost surely finite. This follows from Condition \eqref{eq.martingale.type.M.part} ($L^2$ martingale convergence theorem) and Condition \eqref{eq.martingale.type.R.part}.
\section{Proof of Lemma \ref{lem:stability}}
\label{sec.proof.lem.stability}
For clarity, the proof is only presented in the scalar case $d=1$; the multidimensional setting is entirely similar.
Before embarking on the proof, let us first mention some consequences of Assumptions \ref{ass:Lyap} that will be repeatedly used in the sequel.
Since the second derivative $V^{''}$ is globally bounded and $(V')^2$ is upper bounded by a multiple of $V$, we have that
\begin{equation} \label{eq.bound.derivative.Vp}
\big| (V^p)^{''}(\theta) \big| \lesssim V^{p-1}(\theta)
\end{equation}
and that the function $V^{1/2}$ is globally Lipschitz. By expressing the quantity $V^p(\theta+\varepsilon)$ as $\big(V^{1/2}(\theta) + [V^{1/2}(\theta+\varepsilon)-V^{1/2}(\theta)] \big)^{2p}$, it then follows that
\begin{equation} \label{eq.bound.difference.Vp}
V^p(\theta+\varepsilon) \lesssim V^p(\theta) + |\varepsilon|^{2p}.
\end{equation}
Similarly, Definition \eqref{eq.sgld}, the bound $\| \nabla \log \pi(\theta) \|^2 \lesssim V(\theta)$ and Equation \eqref{eq.bound.H} yield that for any exponent $0 \leq p \leq p_H$ the following holds,
\begin{equation} \label{eq.bound.delta.theta}
\EE_m[ \, |\theta_{m+1} - \theta_m|^{2p} \,]
\lesssim
\delta^{2p}_{m+1} \, V^{p}(\theta_m) + \delta_{m+1}^{p}.
\end{equation}
For clarity, the proof of Lemma \ref{lem:stability} is separated into several steps. First, we establish that the process $m \mapsto V^p(\theta_m)$ satisfies a Lyapunov type condition; see Equation \eqref{eq.discr.lyap} below. We then describe how Equation \eqref{eq.stability.estimate} follows from this Lyapunov condition. The fact that $\pi(V^p)$ is finite can be seen as a consequence of Theorem $2.2$ of \citep{roberts1996exponential}.
\begin{itemize}
\item
{\bf Discrete Lyapunov condition.}\\
\noindent
Let us prove that there exists an index $m_0 \geq 0$ and constants $\alpha_p, \beta_p >0$ such that for any $m \geq m_0$ we have
\begin{equation} \label{eq.discr.lyap}
\EE_m\big[ V^p(\theta_{m+1}) - V^p(\theta_{m}) \big] / \delta_{m+1}
\; \leq \;
- \alpha_p \, V^p(\theta_m) +\beta_p.
\end{equation}
Since for any $\varepsilon$ there exists $C_\varepsilon$ such that $V^{p-1}(\theta) \leq C_\varepsilon + \, \varepsilon V^p(\theta)$, for proving \eqref{eq.discr.lyap} it actually suffices to verify that we have
\begin{equation} \label{eq.discr.lyap.weak}
\EE_m\big[ V^p(\theta_{m+1}) - V^p(\theta_{m}) \big] / \delta_{m+1}
\; \leq \;
- \widetilde{\alpha}_p \, V^p(\theta_m) + \widetilde{\beta}_p \, V^{p-1}(\theta_m)
\end{equation}
for some constants $\widetilde{\alpha_p}, \widetilde{\beta_p} > 0$ and index $m \geq 1$ large enough. A second order Taylor expansion yields that the left hand side of \eqref{eq.discr.lyap.weak} is less than
\begin{equation} \label{eq.discr.lyap.2}
\EE_m\big[ (V^p)'(\theta_m) \, (\theta_{m+1}-\theta_{m}) \big] / \delta_{m+1}
+
\frac12 \,
\EE_m\big[(V^p)^{''}(\xi) \, (\theta_{m+1}-\theta_m)^2 \big] / \delta_{m+1}
\end{equation}
for a random quantity $\xi$ lying between $\theta_m$ and $\theta_{m+1}$. Since $\EE_m[\theta_{m+1}-\theta_m] = \frac12 \, \delta_{m+1} \, \nabla \log \pi(\theta_m)$, the drift condition \eqref{eq.lyapunov.drift} yields that the first term of \eqref{eq.discr.lyap.2} is less than
\begin{equation} \label{eq.first.term}
p \, V^{p-1}(\theta_m) \, \BK{ -\alpha \, V(\theta_m) + \beta }
\end{equation}
for $\alpha,\beta>0$ given by Equation \eqref{eq.lyapunov.drift}.
Consequently, for proving Equation \eqref{eq.discr.lyap}, it remains to bound the second term of \eqref{eq.discr.lyap.2}. Equation \eqref{eq.bound.derivative.Vp} shows that $|(V^p)^{''}(\xi)|$ is upper bounded by a multiple of $|V^{p-1}(\xi)|$; the bound \eqref{eq.bound.difference.Vp} then yields that $|V^{p-1}(\xi)|$ is less than a constant multiple of $|V^{p-1}(\theta_m)| + |\theta_{m+1}-\theta_m|^{2(p-1)}$. It follows from the bound \eqref{eq.bound.delta.theta} on the difference $(\theta_{m+1} - \theta_m)$ and the assumption $\EE[ \, \|H(\theta, \mathcal{U} ) \|^{2p_H}\,] \lesssim V^{p_H}(\theta)$ that for any $\varepsilon > 0$ one can find an index $m_0 \geq 1$ large enough such that for any index $m \geq m_0$ the second term of \eqref{eq.discr.lyap.2} is less than a constant multiple of
\begin{equation} \label{eq.second.term}
\varepsilon \, V^p(\theta_m) + \beta_{p,\varepsilon} \, V^{p-1}(\theta_m)
\end{equation}
for a constant $\beta_{p, \varepsilon} > 0$. Equations \eqref{eq.first.term} and \eqref{eq.second.term} directly yield Equation \eqref{eq.discr.lyap.weak}, which in turn implies Equation \eqref{eq.discr.lyap}.
\item
{\bf Proof that $\sup_{m \geq 1} \; \EE[V^p(\theta_m)] \; < \; \infty$ for any $p \leq p_H$.}\\
\noindent
Equations \eqref{eq.bound.difference.Vp} and \eqref{eq.bound.delta.theta} show that if $\EE[V^p(\theta_m)]$ is finite then so is $\EE[V^p(\theta_{m+1})]$. Under the conditions of Lemma \ref{lem:stability}, this shows that $\EE[V^p(\theta_m)]$ is finite for any $m \geq 0$. An inductive argument based on the discrete Lyapunov Equation \eqref{eq.discr.lyap} then yields that for any index $m \geq m_0$ the expectation $\EE[V^p(\theta_m)]$ is less than
\begin{equation}
\max\Big( \beta_p / \alpha_p,\max \big\{ \EE[V^p(\theta_m)]: \; \; 0 \leq m \leq m_0 \big\} \Big). \label{eq:stabilityIndepDelta}
\end{equation}
It follows that $\sup_{m \geq 1} \; \EE[V^p(\theta_m)]$ is finite.
\item
{\bf Proof that $\sup_{m \geq 1} \; \pi_m(V^p)\; < \; \infty$ for any $p \leq p_H/2$.}\\
\noindent
One needs to prove that the sequence $(1/T_m) \sum_{k=m_0}^m \delta_{k+1} V^p(\theta_k)$ is almost surely bounded. The discrete Lyapunov Equation \eqref{eq.discr.lyap} yields that $\delta_{k+1} V^p(\theta_k)$ is less than $\delta_{k+1} \, \beta_p / \alpha_p -\EE_k[ V^p(\theta_{k+1}) - V^p(\theta_k)] / \alpha_p$; this yields that $(1/T_m) \sum_{k=m_0}^m \delta_{k+1} V^p(\theta_k)$ is less than a constant multiple of
\begin{equation*}
1 + \frac{V^p(\theta_{m_0})}{T_m} + \frac{1}{T_m} \sum_{k=m_0}^{m} \Big\{ V^p(\theta_{k+1}) - \EE_{k}[V^p(\theta_{k+1})] \Big\}.
\end{equation*}
To conclude the proof, we prove that the last term in the above displayed Equation almost surely converges to zero; by Lemma \ref{lem.MR}, it suffices to prove that the quantity
\begin{equation} \label{eq.martingale.a.s.bounded}
\sum_{k \geq m_0}
\E{ \abs{ \frac{V^p(\theta_{k+1}) - \EE_{k}[V^p(\theta_{k+1})]}{T_k} }^2 }
\end{equation}
is almost surely finite.
We have $\E{ |V^p(\theta_{k+1}) - \EE_{k}[V^p(\theta_{k+1})]|^2 } \leq 2 \, \EE[| \, V^p(\theta_{k+1})-V^p(\theta_{k}) \, |^2]$ and the mean value theorem yields that $| V^p(\theta_{k+1})-V^p(\theta_{k}) | \lesssim V^{p-1}(\xi) \, \abs{V'(\xi)} \, \abs{\theta_{k+1}-\theta_{k}}$ for some $\xi$ lying between $\theta_k$ and $\theta_{k+1}$. The bound $|V'(\theta)| \lesssim V^{1/2}(\theta)$ and Equation \eqref{eq.bound.difference.Vp} then yield that
$| V^p(\theta_{k+1})-V^p(\theta_{k}) | \lesssim V^{p-1/2}(\theta_k) \, \big|\theta_{k+1}-\theta_k\big| + \big|\theta_{k+1}-\theta_k\big|^{2p}$. From the bound \eqref{eq.bound.delta.theta} and the assumption that $\EE[H(\theta, \mathcal{U} )^{2p_H}] \lesssim V^{p_H}(\theta)$ it follows that the quantity in Equation \eqref{eq.martingale.a.s.bounded} is less than a constant multiple of
\begin{equation*}
\sum_{k \geq m_0}
\frac{\EE\big[ \, V^{2p}(\theta_k) \, \big] \, \delta_k}{T_k^2}.
\end{equation*}
Since $\E{ V^{2p}(\theta_k) }$ is uniformly bounded for any $p \leq p_H / 2$ and $\sum_{m \geq m_0} \delta_m / T_m^2 < \infty$ (because the telescoping sum $\sum_m \curBK{ T_m^{-1} - T_{m+1}^{-1} }$ is finite), the conclusion follows.
\item
{\bf Proof of $\pi(V^p)\; < \; \infty$ for any $p \geq 0$.}\\
\noindent
Since $V(\theta) \lesssim 1 + \|\theta\|^2$, the drift condition \eqref{eq.lyapunov.drift} yields that the conditions of Theorem $2.1$ of \citep{roberts1996exponential} are satisfied. Moreover, the bound $V^{p-1}(\theta) \leq C_{\varepsilon} + \varepsilon \, V^p(\theta)$ implies that there are constants $\alpha_{p,*}, \beta_{p,*} >0$ such that
\begin{equation}
\mathcal{A} V^p(\theta) \leq -\alpha_{p,*} \, V^p(\theta) + \beta_{p,*}
\end{equation}
where $\mathcal{A}$ is the generator of the Langevin diffusion \eqref{eq:overdampedLangevin}. Theorem $2.2$ of \citep{roberts1996exponential} gives the conclusion.
\end{itemize}
\noindent
{\bf Proof that $\sup_{m \geq 1} \; \pi^{\omega}_m(V^p)\; < \; \infty$ for any $p \leq p_H/2$.}\\
\noindent
One needs to prove that the sequence
$\Omega_m^{-1} \, \sum_{k=m_0}^m \omega_{k+1} V^p(\theta_k)$
is almost surely bounded. The bound $\delta_{k+1} V^p(\theta_k) \lesssim \delta_{k+1} \, \beta_p / \alpha_p -\EE_k[ V^p(\theta_{k+1}) - V^p(\theta_k)] / \alpha_p$ yields that $\pi^{\omega}_m(V^p)$ is less than a constant multiple of
\begin{eqnarray*}
1 + \frac{(\omega_{m_0} / \delta_{m_0}) \, V^p(\theta_{m_0})}{\Omega_m}
&+ \Omega^{-1}(m) \, \sum_{k=m_0+1}^{m} (\omega_k / \delta_k) \, \Big\{ V^p(\theta_{k+1}) - \EE_{k}[V^p(\theta_{k+1})] \Big\} \\
&+ \Omega^{-1}(m) \, \sum_{k=m_0}^{m-1} \Delta(\omega_k / \delta_k) \, V^p(\theta_{k}).
\end{eqnarray*}
To conclude the proof, we establish that the following limits hold almost surely,
\begin{eqnarray}
\label{eq.martingale.cv.0.weighted}
\,& \lim_{m \to \infty}
\, \Omega^{-1}(m) \, \sum_{k=m_0+1}^{m} (\omega_k / \delta_k) \, \Big\{ V^p(\theta_{k+1}) - \EE_{k}[V^p(\theta_{k+1})] \Big\} = 0\\
\label{eq.limsup.weighted}
\,& \lim_{m \to \infty} \;
\, \Omega^{-1}(m) \, \sum_{k=m_0}^{m-1} \Delta(\omega_k / \delta_k) \, V^p(\theta_{k}) = 0.
\end{eqnarray}
To prove Equation \eqref{eq.martingale.cv.0.weighted} it suffices to use the assumption that $\sum_{m \geq 0} \omega^2_m / [\delta_m \Omega^2_m] < \infty$ and then follow the same approach used to establish that the quantity \eqref{eq.martingale.a.s.bounded} is finite. Lemma \ref{lem.MR} shows that to prove Equation \eqref{eq.limsup.weighted} it suffices to verify that
\begin{eqnarray*}
\EE \Big[ \sum_{m \geq 0} \, \big| \Delta(\omega_m / \delta_m) \big| \, V^p(\theta_{m}) / \Omega_m \Big] < \infty.
\end{eqnarray*}
This directly follows from the assumption that $\sum_{m \geq 0} \, \big| \Delta(\omega_m / \delta_m) \big| / \Omega_m < \infty$ and the fact that $\sup_{m \geq 0} \, \EE[V^p(\theta_{m})]$ is finite.
\vskip 0.2in
\end{document}
\begin{document}
\title{On an integral equation of Lieb}
\author{Ronen Peretz}
\maketitle
\begin{abstract}
We prove that the weakly singular, non-linear convolution integral equation $\int_{\mathbb{R}^n}|x-y|^{-\lambda}f(y)dy=f(x)^{p-1}$,
where $0<\lambda<n$, and $p=2n/(2n-\lambda)$ has at least two non-equivalent solutions. This answers a problem of Elliott Lieb.
We also prove certain orthogonality relations among linear differential forms with constant coefficients related to the corresponding
type of convolution operators. Finally, we discuss the regularity of the solutions of such non-linear integral equations over not necessarily
bounded open subsets of $\mathbb{R}^n$.
\end{abstract}
\section{Introduction of the results}
We divide the results in this paper into three parts. In {\bf the first part} we note that a certain non-linear integral equation introduced by Elliott
Lieb has at least two essentially different solutions. One of the solutions has an isolated singular point where the function tends to infinity.
{\bf The second part} presents a multitude of integral identities each connecting two solutions of
Lieb's equation. These integral identities contain the images of the two solutions under linear differential forms with constant coefficients. They
point to two properties connecting any two such solutions: one is an orthogonality property and the other is a certain commutativity.
Finally, in {\bf the third part} we present regularity results of the solutions
to the Lieb integral equation. The bottom line is that, except for isolated singularities where the solutions tend to infinity, they are, in fact,
smooth functions elsewhere. One can compute their smoothness degree effectively.
{\bf (I)} In \cite{Lieb} Elliott H. Lieb computes the sharp constants for certain parameter values in the Hardy-Littlewood-Sobolev inequality. The paper
also deals with related inequalities: the Sobolev inequality, doubly weighted Hardy-Littlewood-Sobolev inequality and the weighted Young inequality
(as A. Sokal called it). Theorem 3.1 on page 359 computes the unique maximizing function and the sharp constant for certain values of the
parameters of the first inequality. The maximizing functions should satisfy the following integral equation
\begin{equation}
\label{eq1}
\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}f(y)dy=f(x)^{p-1},\,\,\,0<\lambda<n,\,\,\,p=\frac{2n}{2n-\lambda}.
\end{equation}
However, as claimed on page 361 of \cite{Lieb} "We do not know that (3.9)" (the above equation (\ref{eq1})) "has an (essentially) unique solution-even if we restrict
to the SSD category-and we shall offer no proof of this kind of uniqueness. {\it This is an open problem!}". Indeed there is no uniqueness, for we have the
following:
\begin{theorem}
The following function is a solution of equation (\ref{eq1}):
\begin{equation}
\label{eq2}
f(x)=C(n,\lambda)|x|^{-(n-\lambda/2)},
\end{equation}
where:
\begin{equation}
\label{eq3}
C(n,\lambda)=
\end{equation}
$$
=\left\{\pi^{n/2}\left(\Gamma\left(\frac{n}{2}-\frac{\lambda}{2}\right)\Gamma\left(\frac{\lambda}{4}\right)^2\right)\left/
\left(\Gamma\left(\frac{\lambda}{2}\right)\Gamma\left(\frac{n}{2}-\frac{\lambda}{4}\right)^2\right)\right.\right\}^{-(2n-\lambda)/(2(n-\lambda))}.
$$
\end{theorem}
\begin{remark}
Lieb proved that his equation (\ref{eq1}) has the following solution (which is unique among the maximizing functions of the inequality he was considering):
\begin{equation}
\label{eq4}
f_L(x)=L(n,\lambda)(1+|x|^2)^{-n/p}=L(n,\lambda)(1+|x|^2)^{-(n-\lambda/2)}.
\end{equation}
Here $L(n,\lambda)$ is a constant depending on the dimension and on the parameter $\lambda$. We note that the solution in Theorem 1.1 is singular
at the origin and is not a reflection of Lieb's solution. It is certainly not a conformal image of it ($f_L$ is non-singular and bounded). Thus
Lieb's equation (\ref{eq1}) exhibits at least two non-equivalent solutions.
\end{remark}
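As a sanity check of Theorem 1.1 (an illustration of ours, not part of Lieb's statement), take $n=3$ and $\lambda=2$, so that $p-1=\lambda/(2n-\lambda)=1/2$ and $n-\lambda/2=2$. Formula (\ref{eq3}) becomes
$$
C(3,2)=\left\{\pi^{3/2}\,\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{1}{2}\right)^2\left/\left(\Gamma(1)\Gamma(1)^2\right)\right.\right\}^{-2}=\left(\pi^{3}\right)^{-2}=\pi^{-6},
$$
and the composition formula for the kernels $|x|^{-2}$ in $\mathbb{R}^3$ (which also follows from the Fourier computation in Section 2 below) gives $|x|^{-2}\ast|x|^{-2}=\pi^{3}|x|^{-1}$, so that
$$
\int_{\mathbb{R}^3}|x-y|^{-2}\,\pi^{-6}|y|^{-2}dy=\pi^{-3}|x|^{-1}=\left(\pi^{-6}|x|^{-2}\right)^{1/2},
$$
in agreement with equation (\ref{eq1}).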
Based on the results that follow we can deduce many integral identities. Here is an example:
\begin{corollary}
$$
L(n,\lambda)^{\lambda/(2n-\lambda)}C(n,\lambda)\int_{\mathbb{R}^n}|x|^{-(n-\lambda/2)}(1+|x|^2)^{-\lambda/2}dx=
$$
\begin{equation}
\label{eq5}
\end{equation}
$$
=L(n,\lambda)C(n,\lambda)^{\lambda/(2n-\lambda)}\int_{\mathbb{R}^n}|x|^{-\lambda/2}(1+|x|^2)^{-(n-\lambda/2)}dx.
$$
\end{corollary}
We note the symmetric relations between the two solutions within the last identity. It is a kind of commutativity that takes the power $p-1$ into
account.
{\bf (II)} The identity in the last corollary is the zeroth integral identity out of infinitely many possible integral identities that connect two solutions of the Lieb integral equation (\ref{eq1}).
\begin{definition}
For $\alpha=(\alpha_1,\ldots,\alpha_n)\in (\mathbb{Z}^+\cup\{0\})^n$ we denote:
$$
D_{\alpha}^{(x)}(h(x))=\frac{\partial^{|\alpha|}h(x)}{\partial x_1^{\alpha_1}\ldots\partial x_n^{\alpha_n}},\,\,\Lambda^{(x)}=\sum_{\alpha} a_{\alpha}
D_{\alpha}^{(x)}\,\,\,{\rm where}\,\,a_{\alpha}\in\mathbb{R}.
$$
\end{definition}
Here are some integral relations between solutions of equation (\ref{eq1}). Obvious generalizations hold true for other similar kernels. These include some
orthogonality relations and some commutativity relations:
\begin{theorem}
Let $f(x)$ and $g(x)$ be solutions of equation (\ref{eq1}). Then assuming the convergence of the integrals below, $\forall\,\alpha,\beta\in(\mathbb{Z}^+\cup\{0\})^n$ we have:
\begin{equation}
\label{eq6}
\int_{\mathbb{R}^n}D_{\beta}^{(x)}(g(x))D_{\alpha}^{(x)}(f(x)^{p-1})dx=\int_{\mathbb{R}^n}D_{\alpha}^{(x)}(f(x))D_{\beta}^{(x)}(g(x)^{p-1})dx,
\end{equation}
$$
(-1)^{|\beta|}\int_{\mathbb{R}^n}D_{\beta}^{(x)}(f(x))D_{\alpha}^{(x)}(f(x)^{p-1})dx=
$$
\begin{equation}
\label{eq7}
\end{equation}
$$
=(-1)^{|\alpha|}\int_{\mathbb{R}^n}D_{\alpha}^{(x)}(f(x))D_{\beta}^{(x)}(f(x)^{p-1})dx,
$$
$$
(-1)^{|\alpha|+|\beta|}=-1
$$
\begin{equation}
\label{eq8}
\Downarrow
\end{equation}
$$
\int_{\mathbb{R}^n}D_{\beta}^{(x)}(f(x))D_{\alpha}^{(x)}(f(x)^{p-1})dx=\int_{\mathbb{R}^n}D_{\alpha}^{(x)}(f(x))D_{\beta}^{(x)}(f(x)^{p-1})dx=0,
$$
\end{theorem}
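To see the orthogonality relation (\ref{eq8}) in the simplest admissible case (an illustration of ours), take $\alpha=(1,0,\ldots,0)$ and $\beta=\overline{0}$, so that $(-1)^{|\alpha|+|\beta|}=-1$. The relation then reads
$$
\int_{\mathbb{R}^n}f(x)\frac{\partial}{\partial x_1}\left(f(x)^{p-1}\right)dx=\int_{\mathbb{R}^n}\frac{\partial f(x)}{\partial x_1}\,f(x)^{p-1}dx=0,
$$
which can also be checked directly: the second integrand equals $\frac{1}{p}\frac{\partial}{\partial x_1}\left(f(x)^{p}\right)$, and we assume, as in the theorem, enough decay at infinity to justify the formal manipulations.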
So it is natural to make the following:
\begin{definition}
$$
E=\{\sum_{\alpha\in I}a_{\alpha}D_{\alpha}^{(x)}(\cdot)\,|\,\forall\,\alpha\in I,\,(-1)^{|\alpha|}=1,\,a_{\alpha}\in\mathbb{R}\},
$$
$$
O=\{\sum_{\beta\in J}b_{\beta}D_{\beta}^{(x)}(\cdot)\,|\,\forall\,\beta\in J,\,(-1)^{|\beta|}=-1,\,b_{\beta}\in\mathbb{R}\}.
$$
\end{definition}
\begin{theorem}
Let $f(x)$ and $g(x)$ be solutions of equation (\ref{eq1}). Suppose that $\Lambda=\Lambda_e+\Lambda_o$, $\Omega=\Omega_e+\Omega_o$,
where $\Lambda_e,\Omega_e\in E$, and $\Lambda_o,\Omega_o\in O$. Then: \\
(a) Assuming convergence of the integrals below we have:
\begin{equation}
\label{eq9}
\end{equation}
$$
\int_{\mathbb{R}^n}\Lambda(f)\Omega(g^{p-1})dx=\int_{\mathbb{R}^n}\Lambda(f^{p-1})\Omega(g)dx,
$$
$$
\int_{\mathbb{R}^n}\Lambda_{e}(f(x))\Lambda_{o}(f(x)^{p-1})dx=\int_{\mathbb{R}^n}\Lambda_{e}(f(x)^{p-1})\Lambda_{o}(f(x))dx=0.
$$
(b) Assuming the convergence of the integrals below we have:
\begin{equation}
\label{eq10}
\end{equation}
$$
\int_{\mathbb{R}^n}\Lambda(f)\Omega(f^{p-1})dx=\int_{\mathbb{R}^n}\Lambda(f^{p-1})\Omega(f)dx=
$$
$$
\int_{\mathbb{R}^n}\Lambda_e(f)\Omega_e(f^{p-1})dx+\int_{\mathbb{R}^n}\Lambda_o(f)\Omega_o(f^{p-1})dx=
$$
$$
=\int_{\mathbb{R}^n}\Lambda_e(f^{p-1})\Omega_e(f)dx+\int_{\mathbb{R}^n}\Lambda_o(f^{p-1})\Omega_o(f)dx.
$$
\end{theorem}
{\bf (III)} We turn our attention to the smoothness of the solutions of Lieb's equation (\ref{eq1}). We recall some notations and definitions
from \cite{Vainikko}. Let $G\subseteq\mathbb{R}^n$ be an open and bounded set. For a $\lambda\in\mathbb{R}$, G. Vainikko introduces a weight
function:
$$
w_{\lambda}(x)=\left\{\begin{array}{lll} 1 & {\rm for} & \lambda<0 \\ (1+|\log\rho(x)|)^{-1} & {\rm for} & \lambda=0 \\
\rho(x)^{\lambda} & {\rm for} & \lambda>0 \end{array}\right.,\,\,\,x\in G,
$$
where $\rho(x)=\inf_{y\in\partial G} |x-y|$ is the distance from $x$ to the boundary $\partial G$ of $G$. Let $m\in\mathbb{Z}^+\cup\{0\}$,
$\nu\in\mathbb{R}$ satisfy $\nu<n$. We define the space $C^{m,\nu}(G)$ as the collection of all $m$ times continuously differentiable
functions $u:\,G\rightarrow\mathbb{R}$ (or $\mathbb{C}$) such that:
$$
||u||_{m,\nu}=\sum_{|\alpha|\le m}\sup_{x\in G}(w_{|\alpha|-(n-\nu)}(x)|D_{\alpha} u(x)|)<\infty.
$$
So $C^{m,\nu}(G)$ contains all the $m$ times continuously differentiable functions $u$ on $G$ whose derivatives near the boundary
$\partial G$ can be estimated as follows:
$$
|D_{\alpha}u(x)|\le\,{\rm Const.}\left\{\begin{array}{lll} 1 & {\rm for} & |\alpha|<n-\nu \\ 1+|\log\rho(x)| & {\rm for} & |\alpha|=n-\nu \\
\rho(x)^{n-\nu-|\alpha|} & {\rm for} & |\alpha|>n-\nu \end{array}\right.,\,\,\,x\in G,\,\,|\alpha|\le m.
$$
\begin{remark}
1) The function $||\cdot ||_{m,\nu}$ on the space $C^{m,\nu}(G)$ is a norm. \\
2) The space $(C^{m,\nu}(G),||\cdot ||_{m,\nu})$ is complete, i.e. it is a Banach space.
\end{remark}
Consider the following integral equation:
\begin{equation}
\label{eq11}
u(x)=\int_G K(x,y,u(y))dy+f(x),\,\,\,x\in G.
\end{equation}
We assume that the kernel $K(x,y,u)$ is $m$ times ($m\ge 1$) continuously differentiable with respect to $x,y,u$ for $x\in G$, $y\in G$,
$x\ne y$, $u\in\mathbb{R}$. We also assume that there is a real number $\nu<n$, such that, $\forall\,k\in\mathbb{Z}^+\cup\{0\}$ and $\alpha,\beta\in(\mathbb{Z}^+\cup\{0\})^n$
for which $k+|\alpha|+|\beta|\le m$, the following inequalities hold:
\begin{equation}
\label{eq12}
\end{equation}
$$
\left|D_{\alpha}^x D_{\beta}^{x+y}\frac{\partial^k}{\partial u^k}K(x,y,u)\right|\le b_1(u)\left\{\begin{array}{lll} 1 & {\rm for} & \nu+|\alpha |<0 \\
1+|\log |x-y|| & {\rm for} & \nu+|\alpha |=0 \\ |x-y|^{-\nu-|\alpha |} & {\rm for} & \nu+|\alpha |>0 \end{array}\right.,
$$
\begin{equation}
\label{eq13}
\end{equation}
$$
|D_{\alpha}^x D_{\beta}^{x+y}\frac{\partial^k}{\partial u^k}K(x,y,u_1)-D_{\alpha}^x D_{\beta}^{x+y}\frac{\partial^k}{\partial u^k}K(x,y,u_2)|\le
$$
$$
\le b_2(u_1,u_2)|u_1-u_2|\left\{\begin{array}{lll} 1 & {\rm for} & \nu+|\alpha |<0 \\
1+|\log |x-y|| & {\rm for} & \nu+|\alpha |=0 \\ |x-y|^{-\nu-|\alpha |} & {\rm for} & \nu+|\alpha |>0 \end{array}\right..
$$
The functions $b_1:\,\mathbb{R}\rightarrow\mathbb{R}^+$ and $b_2:\,\mathbb{R}^2\rightarrow\mathbb{R}^+$ are assumed to be bounded on every
bounded region of $\mathbb{R}$ and $\mathbb{R}^2$ respectively. The notation of G. Vainikko reads as follows:
\begin{equation}
\label{eq14}
\end{equation}
$$
D_{\alpha}^x D_{\beta}^{x+y}=\left(\frac{\partial}{\partial x_1}\right)^{\alpha_1}\ldots\left(\frac{\partial}{\partial x_n}\right)^{\alpha_n}
\left(\frac{\partial}{\partial x_1}+\frac{\partial}{\partial y_1}\right)^{\beta_1}\ldots \left(\frac{\partial}{\partial x_n}+\frac{\partial}{\partial y_n}
\right)^{\beta_n}.
$$
We can now conveniently quote the result we need from \cite{Vainikko}: \\
\\
{\bf Theorem 8.1.(\cite{Vainikko})} {\it Let $f\in C^{m,\nu}(G)$ (in (\ref{eq11})) and let the kernel $K(x,y,u)$ satisfy inequalities (\ref{eq12})
and (\ref{eq13}). If the integral equation (\ref{eq11}) has a solution $u\in L^{\infty}(G)$, then $u\in C^{m,\nu}(G)$.} \\
\\
\begin{remark}
1) There is a companion result (Theorem 8.2) in \cite{Vainikko}, but we will not use it here. \\
2) We note that if we restrict Lieb's integral equation (\ref{eq1}) to a bounded open $G\subseteq\mathbb{R}^n$, then it satisfies the assumptions
of Theorem 8.1. in \cite{Vainikko}. In this case $f(x)\equiv 0\in C^{\infty}(G)$ and $K(x,y,u)=|x-y|^{-\lambda}$ is independent of $u$. Also we note
that:
$$
\left(\frac{\partial}{\partial x_i}+\frac{\partial}{\partial y_i}\right)|x-y|^{-\lambda}\equiv 0,
$$
and so in inequalities (\ref{eq12}) and (\ref{eq13}) the only interesting values of $(k,|\alpha|,|\beta|)$ are $k=0$, $|\beta|=0$, $|\alpha|\le m$ and
within the inequalities we start with $\nu+|\alpha|=\lambda$. Thus we can make effective calculations of the smoothness degree of a bounded solution
of the restriction to $G$ of Lieb's integral equation (\ref{eq1}). \\
3) We note that any solution $f(x)$ of the Lieb integral equation (\ref{eq1}), must decay to zero at infinity, in order for the integral
$$
\int_{\mathbb{R}^n}|x-y|^{-\lambda}f(y)dy,\,\,\,\,0<\lambda<n,
$$
to converge at infinity. Thus any solution is bounded outside a ball $B_n(R)=\{x\in\mathbb{R}^n\,|\,|x|<R\}$ for a large enough radius $R$. Thus
such a solution can have singularities only within the ball $B_n(R)$, and the solution tends to infinity at each such singularity; otherwise, by Theorem 8.1 in \cite{Vainikko}, it would be smooth (of some degree) at such a point.
\end{remark}
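As an illustration of the effective calculation mentioned in point 2) above (an illustration of ours, under the same identifications), take $n=3$, $\lambda=2$, restrict (\ref{eq1}) to a bounded open $G\subseteq\mathbb{R}^3$ and suppose the solution is bounded on $G$. Choosing $\nu=\lambda=2<n$ in (\ref{eq12}) and (\ref{eq13}), Theorem 8.1 of \cite{Vainikko} yields membership in $C^{m,2}(G)$ for every $m\geq 1$, that is, for every multi-index $\alpha$ with $|\alpha|\le m$,
$$
|D_{\alpha}u(x)|\le\,{\rm Const.}\left\{\begin{array}{lll} 1 & {\rm for} & |\alpha|<1 \\ 1+|\log\rho(x)| & {\rm for} & |\alpha|=1 \\
\rho(x)^{1-|\alpha|} & {\rm for} & |\alpha|>1 \end{array}\right.,\,\,\,x\in G.
$$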
We thus have the following:
\begin{theorem}
Let $f(x)$ be a solution of Lieb's integral equation (\ref{eq1}). Then $\lim_{|x|\rightarrow\infty}f(x)=0$, and there is a ball $B_n(R)$ such that $f(x)$
is bounded for $x\not\in B_n(R)$, $f(x)$ can have a finite set of singularities inside $B_n(R)$, and $f(x)$ tends to infinity when $x\rightarrow s$ for
each singular point $s$ of $f(x)$.
\end{theorem}
$\qed$
\section{Lieb's integral equation has at least two non-equivalent symmetric decreasing solutions, Theorem 1.1}
{\bf A proof.} We use the following formula for the Fourier transform, \cite{SteinWeiss}:
$$
\widehat{|y|^{-\nu}}=\int_{\mathbb{R}^n}|y|^{-\nu}\exp(-2\pi ix\cdot y)dy=\left\{\pi^{\nu-(n/2)}\Gamma\left(\frac{n}{2}-\frac{\nu}{2}\right)\left/
\Gamma\left(\frac{\nu}{2}\right)\right.\right\}|x|^{\nu-n},
$$
where $0<\nu<n$. We compute the Fourier transform of our integral:
$$
\widehat{\left(\int_{\mathbb{R}^n}|t-y|^{-\lambda}f(y)dy\right)(x)}=\widehat{\left(\int_{\mathbb{R}^n}|t-y|^{-\lambda}\left(C(n,\lambda)|y|^{-(n-\lambda/2)}\right)dy\right)(x)}=
$$
$$
=C(n,\lambda)\left(\int_{\mathbb{R}^n}|y|^{-\lambda}\exp\left(-2\pi ix\cdot y\right)dy\right)\left(\int_{\mathbb{R}^n}|y|^{-(n-\lambda/2)}
\exp\left(-2\pi ix\cdot y\right)dy\right)=
$$
$$
=C(n,\lambda)\left\{\pi^{\lambda-n/2}\Gamma\left(\frac{n}{2}-\frac{\lambda}{2}\right)\left/\Gamma\left(\frac{\lambda}{2}\right)\right.\right\}
|x|^{\lambda-n}\times
$$
$$
\times\left\{\pi^{(n-\lambda/2)-n/2}\Gamma\left(\frac{\lambda}{4}\right)\left/\Gamma\left(\frac{n}{2}-\frac{\lambda}{4}\right)\right.\right\}
|x|^{n-(\lambda/2)-n}=
$$
$$
=C(n,\lambda)\left\{\pi^{n/2}\left(\Gamma\left(\frac{n}{2}-\frac{\lambda}{2}\right)\Gamma\left(\frac{\lambda}{4}\right)^2\right)\left/
\left(\Gamma\left(\frac{\lambda}{2}\right)\Gamma\left(\frac{n}{2}-\frac{\lambda}{4}\right)^2\right)\right.\right\}\widehat{|y|^{-\lambda/2}}(x)=
$$
$$
=C(n,\lambda)^{\lambda/(2n-\lambda)}\left(\widehat{|y|^{-(n-\lambda/2)\lambda/(2n-\lambda)}}\right)(x)=
$$
$$
=\left(\widehat{\left(C(n,\lambda)|y|^{-(n-\lambda/2)}\right)^{\lambda/(2n-\lambda)}}\right)(x)=
$$
$$
=\left(\widehat{f\left(y\right)^{\lambda/(2n-\lambda)}}\right)(x)=\left(\widehat{f\left(y\right)^{p-1}}\right)(x).
$$
Hence:
$$
\int_{\mathbb{R}^n}|x-y|^{-\lambda}f(y)dy=f(x)^{p-1}.
$$
$\qed $ \\
\\
\section{Integral relations between two solutions of equation (\ref{eq1}), orthogonality and commutativity, Corollary 1.3, Theorem 1.5 and
Theorem 1.7}
In this section we are interested in the form of integral relations between two solutions of the Lieb integral equation. Thus we will
not dwell on convergence issues and will just formally expand the formulas in Corollary 1.3, in Theorem 1.5 and in Theorem 1.7. We note
that the identity in Corollary 1.3 follows by Theorem 1.1 and by the case $\alpha=\beta=\overline{0}$ in equation (\ref{eq6}) of Theorem 1.5.
Also Theorem 1.7 follows by Theorem 1.5. Hence we need to prove only Theorem 1.5: \\
We start with Lieb's integral equation (\ref{eq1}) and differentiate it partially with respect to $x_j$. We formally
differentiate under the integral sign. The result is:
$$
\int_{\mathbb{R}^n}\frac{\partial}{\partial x_j}\left(\left|x-y\right|^{-\lambda}\right)f(y)dy=\frac{\partial f(x)^{p-1}}{\partial x_j}.
$$
We note that:
$$
\frac{\partial}{\partial x_j}\left(\left|x-y\right|^{-\lambda}\right)=-\frac{\partial}{\partial y_j}\left(\left|x-y\right|^{-\lambda}\right).
$$
Hence:
$$
-\int_{\mathbb{R}^n}\frac{\partial}{\partial y_j}\left(\left|x-y\right|^{-\lambda}\right)f(y)dy=\frac{\partial f(x)^{p-1}}{\partial x_j}.
$$
By integration by parts we deduce the following:
$$
\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}\frac{\partial f(y)}{\partial y_j}dy=\frac{\partial f(x)^{p-1}}{\partial x_j}.
$$
We iterate this argument and obtain:
$$
\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}\frac{\partial^2 f(y)}{\partial y_k\partial y_j}dy=\frac{\partial^2 f(x)^{p-1}}
{\partial x_k\partial x_j}.
$$
Now an inductive argument implies:
\begin{equation}
\label{eq15}
\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}D_{\alpha}^{(y)}\left(f(y)\right)dy=D_{\alpha}^{(x)}\left(f(x)^{p-1}\right).
\end{equation}
Another outer induction gives, finally:
$$
\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}\Lambda\left(f(y)\right)dy=\Lambda\left(f(x)^{p-1}\right).
$$
Next, let $g(x)$ be one more solution of equation (\ref{eq1}), i.e.:
$$
\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}g(y)dy=g(x)^{p-1}.
$$
By what we have already done, we have:
\begin{equation}
\label{eq16}
\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}D_{\beta}^{(y)}\left(g(y)\right)dy=D_{\beta}^{(x)}\left(g(x)^{p-1}\right).
\end{equation}
We now use the double integration technique. We multiply equation (\ref{eq15}) by $D_{\beta}^{(x)}(g(x))$ and integrate
$\int_{\mathbb{R}^n}\ldots dx$:
$$
\int_{\mathbb{R}^n}D_{\beta}^{(x)}\left(g(x)\right)\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}D_{\alpha}^{(y)}\left(f(y)\right)dydx=
\int_{\mathbb{R}^n}D_{\beta}^{(x)}\left(g(x)\right)D_{\alpha}^{(x)}\left(f(x)^{p-1}\right)dx.
$$
Reversing the order of integration on the left hand side gives:
$$
\int_{\mathbb{R}^n}D_{\alpha}^{(y)}\left(f(y)\right)\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}D_{\beta}^{(x)}\left(g(x)\right)dxdy=
\int_{\mathbb{R}^n}D_{\alpha}^{(y)}\left(f(y)\right)D_{\beta}^{(y)}\left(g(y)^{p-1}\right)dy.
$$
Renaming the variable of integration on the right-hand side from $y$ to $x$ gives:
$$
\int_{\mathbb{R}^n}D_{\beta}^{(x)}\left(g(x)\right)D_{\alpha}^{(x)}\left(f(x)^{p-1}\right)dx=\int_{\mathbb{R}^n}D_{\alpha}^{(x)}
\left(f(x)\right)D_{\beta}^{(x)}\left(g(x)^{p-1}\right)dx.
$$
This proves equation (\ref{eq6}). This is the commutativity part. We now prove orthogonality. We start with equation (\ref{eq6}) in the special
case $f(x)=g(x)$. We get:
$$
\int_{\mathbb{R}^n}D_{\beta}^{(x)}\left(f(x)\right)D_{\alpha}^{(x)}\left(f(x)^{p-1}\right)dx=\int_{\mathbb{R}^n}D_{\alpha}^{(x)}
\left(f(x)\right)D_{\beta}^{(x)}\left(f(x)^{p-1}\right)dx.
$$
Integration by parts gives:
$$
\int_{\mathbb{R}^n}D_{\alpha}^{(x)}
\left(f(x)\right)D_{\beta}^{(x)}\left(f(x)^{p-1}\right)dx=(-1)^{|\alpha|}\int_{\mathbb{R}^n}f(x)\cdot D_{\alpha+\beta}^{(x)}
\left(f(x)^{p-1}\right)dx,
$$
while
$$
\int_{\mathbb{R}^n}D_{\alpha}^{(x)}
\left(f(x)^{p-1}\right)D_{\beta}^{(x)}\left(f(x)\right)dx=(-1)^{|\beta|}\int_{\mathbb{R}^n}f(x)\cdot D_{\alpha+\beta}^{(x)}
\left(f(x)^{p-1}\right)dx.
$$
We proved equation (\ref{eq7}). Equation (\ref{eq8}) is a consequence of equation (\ref{eq6}) and equation (\ref{eq7}). This proves
Corollary 1.3, Theorem 1.5 and Theorem 1.7. $\qed $
\noindent
{\it Ronen Peretz \\
Department of Mathematics \\ Ben Gurion University of the Negev \\
Beer-Sheva , 84105 \\ Israel \\ E-mail: [email protected]} \\
\end{document}
\begin{document}
\title[Minimal degrees of invariants of (super)groups]{Minimal degrees of invariants of (super)groups - a connection to cryptology}
\thanks{This publication was made possible by a NPRF award NPRP 6 - 1059 - 1 - 208 from the Qatar National Research Fund (a member of The Qatar Foundation). The statements made herein are solely the responsibility of the authors.}
\author{Franti\v sek~ Marko}
\email{[email protected]}
\address{Penn State Hazleton, 76 University Drive, Hazleton, PA 18202, USA}
\author{Alexandr N. Zubkov}
\email{[email protected]}
\address{Sobolev Institute of Mathematics, Siberian Branch of Russian Academy of Science (SORAN), Omsk, Pevtzova 13, 644043, Russia}
\begin{abstract}
We investigate questions related to the minimal degree of invariants of finitely generated diagonalizable groups. These questions were raised in connection with the security of a public-key cryptosystem based on invariants of diagonalizable groups. We derive results for minimal degrees of invariants of finite groups, abelian groups and algebraic groups. For algebraic groups we relate the minimal degree of the group to the minimal degrees of its tori.
Finally, we investigate invariants of certain supergroups that are superanalogs of tori. It is interesting to note that a basis of these invariants is not given by monomials.
\end{abstract}
\keywords{cryptosystem, invariants, diagonalizable group, number field, supergroup}
\subjclass[2010]{94A60(primary), and 11T71(secondary)}
\maketitle
\section*{Introduction}
Let $G$ be a group, let $V$ be a vector space over a ground field $F$, and suppose that $G$ acts on $V$ by linear transformations.
The typical problem in the invariant theory of the group $G$ is to find an upper bound for degrees of generators of $F[V]^G$.
For fields $F$ of characteristic zero and finite groups $G$, a classical result of Noether \cite{noeth} states that the algebra of invariants of $G$
is generated by polynomials of degrees not exceeding the order of $G$.
In this paper we are investigating a different problem and replace a generating set of invariants of $G$ by a single nonconstant invariant of $G$.
Namely, we are interested in the following question: is there a nonconstant invariant of $G$ of degree not exceeding a certain value?
This question is motivated by security considerations in \cite{mzj} related to a public-key cryptosystem based on invariants of diagonalizable groups.
Since one possible attack on this cryptosystem is based on brute-force linear algebra,
if we know that there is a nonconstant invariant of $G$ of small degree, then this linear algebra attack is successful.
On the other hand, if we know that there are no nonconstant invariants of $G$ of small degree, then the cryptosystem is secure against this type of attack.
It is easier to formulate and investigate this problem in terms of the minimal degree $M_{G,V}$ of invariants of the group $G$ with respect to the fixed representation
$G\to GL(V)$. We will establish both lower and upper bounds for $M_{G,V}$.
We start by recalling the concept of an invariant of a group $G$ in Section 1. In Section 2 we describe the public-key cryptosystem based on invariants of $G$. In Section 3 we show that the minimal degree of an abelian group $G$ is the same as the minimal degree of its subgroup generated by semisimple elements.
We also study minimal degrees of diagonalizable groups.
In Section 4 we relate the minimal degree $M_{G,V}$ of an algebraic group $G$ to the minimal degrees of invariants of its torus $T$.
Afterward, we explain the concept of invariants of supergroups in Section 5. In Section 6 we derive certain properties of invariants of certain supergroups. One interesting property is that, unlike for groups, the basis of invariants for supergroups does not consist of monomials.
\section{Invariants of finitely-generated linear groups}
In this paper, we will consider only finitely generated groups $G$ acting faithfully on a finite-dimensional vector space $V=F^n$ over a field $F$ of arbitrary characteristic.
Therefore, we can assume that $G\subset GL(V)$. From the very beginning, assume that the representation $\rho:G\to GL(V)$ is fixed, and the group $G$ is given by a finite set of generators. With respect to the standard basis of $V$, each element $g$ of $G$ is therefore represented by an invertible matrix of size $n\times n$, and $g$ acts on vectors in $V$ by matrix multiplication.
Let $F[V]=F[x_1, \ldots, x_n]$ be the algebra of polynomial functions on $V$. Then $G$ acts on $F[V]$ via
$gf(v)=f(g^{-1}v)$, where $g\in G$, $f\in F[V]$ and $v\in V$.
An invariant $f$ of $G$ is a polynomial $f\in F[V]$,
which has the property that its values are the same on orbits of the group $G$. In other words, for every vector $v\in V$ and for every element $g\in G$, we have $f(gv)=f(v)$.
We note that different representations of $G$ lead to different invariants in general, but this is not going to be a problem for us since our representation of $G$ is fixed.
We will denote the algebra of invariants of $G$ by $F[V]^G$.
Denote by $M_{G,V}$, or simply by $M_G$ or $M$ if we need not emphasise the group $G$ or the vector space $V$ it is acting on,
the minimal positive degree of an invariant from $F[V]^G$. That is,
$M_{G, V}=\min\{d>0 \mid F[V]_d^G\neq 0\}$. If $F[V]^G=F$, then we set $M_{G,V}=\infty$.
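For instance (a toy illustration), let $char F\neq 2$ and $G=\{\pm\, {\rm id}_V\}\subset GL(V)$. A polynomial is invariant exactly when all of its monomials have even total degree, so $F[V]_1^G=0$ while $x_1^2\in F[V]_2^G$, and therefore $M_{G,V}=2$.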
\section{Public-key cryptosystem based on invariants}\label{system}
We start by recalling the original idea of the public-key cryptosystem based on invariants from the paper \cite{dima1} and its modification presented in \cite{dima2}.
\subsection{Cryptosystems based on invariants}
To design a cryptosystem, Alice needs to choose a finitely generated subgroup $G$ of $GL(V)$ for some vector space $V=F^n$ and a set $\{g_1, \ldots, g_t\}$ of generators of $G$.
Alice also chooses an $n\times n$ matrix $a$. Alice needs to know a polynomial invariant $f: v\mapsto f(v)$ of this representation of $G$. Then the polynomial $af:v\mapsto f(av)$
is an invariant of the conjugate group $H=a^{-1}Ga$.
Depending on the choice of $f$ and $a$, Alice chooses a set $M=\{v_0, \ldots, v_{s-1}\}$ of messages consisting of vectors from $V$ that are separated by the polynomial $af$.
This means that $f(av_i)\neq f(av_j)$ whenever $i\neq j$.
Alice also chooses a set of randomly generated elements $g_1, \ldots, g_m$ of $G$ (say, by multiplying some of the given generators of $G$), which generate a subgroup of $G$ that will be denoted by $G'$.
Alice announces as a public key the set $M$ of possible messages and the group $H=a^{-1}G'a$, conjugate to $G'$, by announcing its generators $h_i=a^{-1}g_ia$ for $i=1, \ldots, m$.
In the first paper \cite{dima1} its author assumes that the group $G$, its representation in $GL(V)$ and the invariant $f$ are in the public key. We refer to this setup as {\it variant one}. However, the version in paper \cite{dima2} assumes that $G$, its representation in $GL(V)$ and the invariant $f$ are secret. We refer to this setup as {\it variant two}. We will comment on both variants later.
For the encryption, every time Bob wants to transmit a message $v_i\in M$, he chooses a randomly generated element $h$ of the group $H$ (by multiplying some of the generators of $H$ given as a public key).
Then he computes $u=hv_i$ and transmits the vector $u\in V$ to Alice.
To decrypt the message, Alice first computes $au$ and then applies the invariant $f$. (Of course this is the same as an application of the invariant $af$ of $H$ that separates elements of $M$.) If $u=hv_i$ and $h=a^{-1}ga$ with $g\in G'$, then $f(au)=f(ahv_i)=f(aa^{-1}gav_i)=f(gav_i)=f(av_i)$. Since $a$ was chosen so that
$f(av_i)\neq f(av_j)$ whenever $i\neq j$, Alice can determine from the value of $f(au)$ which symbol $v_i$, and hence which message, was encrypted by Bob.
\subsection{Design and modification of the cryptosystem based on invariants}
There is an obvious modification of the above cryptosystem which improves the ratio of the expansion in size from plaintext to ciphertext, namely replacing the set of two elements $v_0$ and $v_1$ from $V$ by a larger set $S=\{v_0, \ldots, v_{r-1}\}$, such that the invariant $f$ separates every two elements of $aS=\{av_0, \ldots, av_{r-1}\}$ instead.
The paper \cite{mzj} studies cryptosystems based on invariants of finitely generated groups $G$ and considers advantages and disadvantages of various choices of $G$. Most notable is the distinction between diagonalizable and unipotent groups, as well as between finite and infinite groups. The behaviour of the cryptosystem varies based on the choice of the underlying ground field $F$ or residue ring $R$.
When working over a finite field, the cyclicity of the multiplicative group $F^{\times}$ plays a big role and the security of the cryptosystem is related to the discrete logarithm problem.
When $F$ is a number field, the factorization properties in the ring of its integers $Z$ come to the forefront. Finally, in the case of a residue ring $R$ of a ring of algebraic integers $Z$ modulo its ideal $\mathfrak{a}$, we work over the group of units of a finite ring, whose multiplicative structure is more involved than that of a finite field. This case also involves questions related to factorization in the ring of algebraic integers $Z$ and is therefore a mixture of the previous two cases.
\subsection{Linear algebra attack on the cryptosystem}\label{LA}
The notion of the minimal positive degree of an invariant and the value of $M=M_{G,V}$ are important for the security of the invariant-based cryptosystem (both variants one and two) we are considering.
For example, if we know that $M_G$ is so small that $m\binom{n+M-1}{M}=O(n^r)$ is polynomial in $n$, then Charlie can find an invariant $f'$ of $G$
in polynomial time by solving consecutive linear systems for $d=1, \ldots, M$, each consisting of $m\binom{n+d-1}{d}$ equations in the $\binom{n+d-1}{d}$ variables described in the previous section. For a fixed $d$, this can be accomplished in time $O(m(\binom{n+d-1}{d})^4)$ and the total search will take no more than time
$O(n^{8r})$.
Therefore, for the security of the system it must be guaranteed that $m\binom{n+M-1}{M}$ is high, say, it is not polynomial in $n$.
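To make the linear algebra computation explicit (the concrete numbers below are ours and serve only as an illustration), write an unknown homogeneous invariant of degree $d$ as $f'=\sum_{|b|=d}c_b x^b$ with undetermined coefficients $c_b$. For every published generator $h_i$ of $H$, the requirement that $f'(h_i v)=f'(v)$ for all $v\in V$ is the polynomial identity
\[\sum_{|b|=d}c_b\left(x^b(h_i v)-x^b(v)\right)=0,\]
and comparing coefficients of the degree-$d$ monomials in $v$ turns it into at most $\binom{n+d-1}{d}$ linear equations in the unknowns $c_b$. For instance, for $n=10$, $d=2$ and $m$ generators this is a system of at most $55m$ equations in $\binom{11}{2}=55$ unknowns, and a nonzero solution exists precisely when $F[V]_d^{\langle h_1,\ldots,h_m\rangle}\neq 0$.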
\section{Lower bounds for degrees of polynomial invariants}
The significance of understanding the minimal degree $M_{G,V}$ of invariants for the security of the invariant-based cryptosystem was established above. In particular, it is important to find a nontrivial lower bound for $M_{G,V}$. Unfortunately, we are not aware of any articles establishing lower bounds for the minimal degree of invariants, except in very special circumstances, e.g. \cite{huf}.
On the other hand, there are numerous upper bounds for the minimal degree $\beta(G,V)$ such that $F[V]^G$ is generated as an algebra by all invariants in degrees
not exceeding $\beta(G,V)$.
For example, a classical result of Noether \cite{noeth} states that if the characteristic of $F$ is zero and $G$ is finite of order $|G|$, then $\beta(G,V)\leq |G|$.
There is an extensive discussion of the Noether bound and of results about $\beta(G,V)$ in Section 3 of \cite{smith}.
It was conjectured by Kemper that for $G\neq 1$ and an arbitrary ground field $F$, the number $\beta(G, V)$ is at most $\dim V(|G|-1)$. Recently, this conjecture was proved by Symonds in \cite{sym}.
When one wants to find an invariant of $G$, it seems natural to consider an upper bound $\beta(G,V)$. However, if we want to show that there are no invariants of small degrees
(as is our case), then we need to find lower bounds for $M_{G,V}$. Until now, there was no real impetus to consider such a problem.
Assume again that $G$ is a (finitely generated) subgroup of $GL(V)$, and denote $M_{G, V}$ just by $M_G$.
Denote by $\mathbb{G}=\overline{G}$ the Zariski closure of $G$. We will assume that $\mathbb{G}$ is a linearly reductive subgroup in $GL(V)$
(in particular, this assumption is satisfied if $G$ is a finite group and the characteristic of $F$ does not divide $|G|$).
According to \cite{hochrob} (see also \cite{derkraft}), $F[V]^G=F[V]^{\mathbb{G}}$ is a Cohen-Macaulay algebra. Therefore
$F[V]^G$ is a free module over its subalgebra $F[p_1, \ldots , p_s]$, freely generated by the (homogeneous) parameters $p_1, \ldots , p_s$, which are called the first generators.
In other words, $F[V]^G=\oplus_{1\leq i\leq l}F[p_1, \ldots , p_s]h_i$, where $h_1 , \ldots, h_l$ are called the second generators.
If $F[V]^G\neq F$, then $M_G=\min\{\{\deg h_i > 0\},\{\deg p_j\}\}$.
In what follows we will denote by $\zeta_k$ a primitive root of unity of order $k$. If the order $k$ is clear from the context, we will denote it just by $\zeta$.
Additionally, every time $\zeta_k$ is mentioned, we assume that it is an element of the ground field $F$.
If a matrix $g\in GL(V)$ has a finite order $k$, then all eigenvalues $\lambda_1, \ldots , \lambda_n$ of $g$ are roots of unity.
If we denote $\zeta=\zeta_k$, then there are integers $k_i$ such that $\lambda_i=\zeta^{k_i}$, where $0\leq k_i < k$ and $gcd(k_1, \ldots , k_n, k)=1$.
For $g\neq 1$ denote by $k_g$ the positive integer
\[k_g=\min\{\sum_{i=1}^n a_i >0 | \sum_{i=1}^n a_i k_i \equiv 0\!\!\!\!\pmod k, \mbox{ where integers } a_1, \ldots , a_n \geq 0\}.\]
The following lemma describes invariant polynomials and $M_{\langle t \rangle}$ for a diagonal matrix $t$ of finite order.
\begin{lm}\label{suchandsuch}
Assume that $t$ is a diagonal matrix of the finite order $k$ with diagonal entries $\lambda_1=\zeta_k^{k_1}, \ldots, \lambda_n=\zeta_k^{k_n}$, where the exponents $k_i$ are as above.
Then the invariant subalgebra $F[V]^{\langle t \rangle}$ is generated by the monomials $x^a=x_1^{a_1}\ldots x_n^{a_n}$ such that $\sum_{i=1}^n a_ik_i \equiv 0 \pmod k$.
Additionally, if $t\neq 1$, then $M_{\langle t \rangle}=k_t$.
\end{lm}
\begin{proof}
The properties of numbers $k_i$ follow immediately.
Since $t$ acts on the corresponding coordinate function as $tx_i=\lambda_i^{-1}x_i$, we obtain that a monomial $x^a=x_1^{a_1}\ldots x_n^{a_n}$
is an invariant of $\langle t \rangle$ if and only if $\sum_{i=1}^n a_ik_i \equiv 0 \pmod k$.
Because every monomial $x^b$ is a semi-invariant of $t$, monomials $x^a$ as above generate $F[V]^{\langle t \rangle}$. The formula for $M_{\langle t \rangle}$ is then clear.
\end{proof}
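As a small illustration of the lemma (our own example), let $t={\rm diag}(\zeta_5,\zeta_5^2)$, so that $k=5$, $k_1=1$ and $k_2=2$. The invariant monomials are the $x_1^{a_1}x_2^{a_2}$ with $a_1+2a_2\equiv 0\pmod 5$; no such monomial of total degree one or two exists, whereas $x_1x_2^2$ is invariant, so $k_t=M_{\langle t\rangle}=3$.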
For the next lemma we apply standard results from algebraic group theory, that can be found, for example, in \cite{hum, water}. Assume that $F$ is a perfect field.
For an element $g\in G$ let $g=g_s g_u$ be its Jordan-Chevalley decomposition. Let $G_s$ and $G_u$ denote the sets of semisimple and unipotent components of all elements from $G$, respectively.
\begin{lm}\label{the caseofcyclicgroup}
Assume that the ground field $F$ is perfect. If a group $H$ is abelian, then $M_H=M_{<H_s>}$.
In particular, if $H$ is an abelian subgroup of $G$, then $M_{<H_s>}\leq M_G$.
\end{lm}
\begin{proof}
Since the algebraic group $\mathbb{H}=\overline{H}$ is abelian, it can be written as a product
$\mathbb{H}=\mathbb{H}_s\times \mathbb{H}_u$ of its closed subgroups $\mathbb{H}_s$ and $\mathbb{H}_u$. The inclusions $H_s\subseteq \mathbb{H}_s$ and $H_u\subseteq \mathbb{H}_u$
imply that $\mathbb{H}_s=\overline{<H_s>}$ and $\mathbb{H}_u=\overline{<H_u>}$.
Furthermore, $F[V]^H=F[V]^{\mathbb{H}}=(F[V]^{\overline{<H_s>}})^{\overline{<H_u>}}$. Since the group $\overline{<H_u>}$ is unipotent,
$F[V]_d^{\overline{<H_s>}}\neq 0$ implies $F[V]_d^{\mathbb{H}}=(F[V]_d^{\overline{<H_s>}})^{\overline{<H_u>}}\neq 0$. This means that $M_H=M_{\overline{<H_s>}}=M_{<H_s>}$.
Since $H\leq G$ implies $M_H\leq M_G$, the second statement follows.
\end{proof}
A subgroup $G$ of $GL(V)$ is called {\it small}, if there is an abelian subgroup $H$ of $G$ such that $M_G =M_H$.
\begin{lm}\label{aboundforoneelement}
Assume that the ground field $F$ is perfect.
If $g\neq 1$ is of finite order, then $M_{<g>}=k_g$. In particular, if $G$ is finite, then $\max\{k_g ; g\in G, g\neq 1\}\leq M_G$.
\end{lm}
\begin{proof}
Lemma \ref{the caseofcyclicgroup} implies $M_{<g>}=M_{<g_s>}$. With respect to a basis of $V$, consisting of eigenvectors of $g_s$, $g_s$ is represented by a diagonal matrix.
By Lemma \ref{suchandsuch} we obtain $M_{<g_s>}=k_{g_s}$. Since $k_{g}=k_{g_s}$, the lemma follows.
\end{proof}
The following lemma is well-known, see \cite{burn}.
\begin{lm}\label{burn}
If $G\subset GL_n(\mathbb{R})$ and $G$ is finite, then $G$ has an invariant of degree two.
\end{lm}
\begin{proof}
Let $g_1=1, \ldots, g_s$ be all elements of $G$ and $\mathbb{R}[V]=\mathbb{R}[t_1, \ldots, t_n]$. Set $x_i=g_i(t_1^2+\ldots +t_n^2)$ for $i=1, \ldots, s$. Since the values of
each $x_i$, evaluated as a polynomial in $t_1, \ldots, t_n$, are non-negative, the values of the invariant polynomial $\sum_{i=1}^s x_i$
are non-negative as well, and they can be equal to zero only if each
$x_i$ is zero. But $x_1=0$ only if $t_1=\ldots =t_n=0$. Therefore $\sum_{i=1}^s x_i$ is a positive definite quadratic form in $t_1, \ldots, t_n$,
hence a non-zero invariant of $G$.
\end{proof}
Lemma \ref{aboundforoneelement} and Lemma \ref{burn} have the following interesting consequence.
\begin{cor}\label{F=R}
Let $g\neq 1$ correspond to a matrix from $GL_n(\mathbb{R})$ of finite order. Then either one of the eigenvalues of $g$ equals $1$ or there are two
eigenvalues $\lambda$ and $\mu$ of $g$, both different from $1$ such that $\lambda\mu =1$.
\end{cor}
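A simple instance of the corollary (our illustration): the rotation
\[g=\left(\begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array}\right)\in GL_2(\mathbb{R})\]
has order $4$ and eigenvalues $i=\zeta_4$ and $-i=\zeta_4^3$; neither eigenvalue equals $1$, but their product is $1$. Correspondingly $k_g=2$, and the degree-two invariant guaranteed by Lemma \ref{burn} is (a multiple of) $t_1^2+t_2^2$.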
\begin{lm}
Let $G$ be a finite abelian group of exponent $q$, let the ground field $F$ be perfect, and assume that $char F$ does not divide $q$.
Then for every $G$-module $V$ one has the upper bound $M_{G, V}\leq q$. This upper bound is sharp.
\end{lm}
\begin{proof}
Without loss of generality one can assume that $G\leq GL(V)$. By Lemma \ref{the caseofcyclicgroup}
one can also assume that $G=G_s$, hence $G$ is diagonalizable. Every element $g\in G$ is represented by a matrix whose diagonal entries are powers of the primitive $q$-th root of unity $\zeta$. This implies the first statement. To show that the upper bound is sharp, it is enough to consider an example in which one element $g$ is represented by a matrix all of whose diagonal entries are equal to $\zeta$.
\end{proof}
If $G$ is a diagonalizable finite abelian subgroup of $GL(V)$, then using Lemma 3.1 of \cite{hl} we can reduce the computation of $M_{G, V}$ to an integer programming problem.
In fact, this lemma states that there are invariant monomials
\[f_1=t_1^{m_1}, f_2=t_1^{v_{12}}t_2^{m_2}, \ldots, f_n=t_1^{v_{1 n}}\ldots t_{n-1}^{v_{n-1, n}}t_n^{m_n}\]
of a "triangular shape", where $m_n>0$ and $m_i > v_{i j}\geq 0$ for $1\leq i < j\leq n$,
such that every invariant monomial from the field of rational invariants $F(V)^G$ is a product of (not necessarily non-negative) powers of the monomials $f_1, \ldots, f_n$.
Since $F[V]^G$ has a basis consisting of invariant monomials, any such monomial has the form $f_1^{l_1}\ldots f_n^{l_n}$, where $l=(l_1, \ldots , l_n)\in\mathbb{Z}^n$ is a solution of the system of inequalities
\[m_k l_k +\sum_{j> k} v_{k j}l_j\geq 0 \text{ for } 1\leq k\leq n.\]
From here we derive that $M_{G, V}$ is the minimal positive value of the function
\[\sum_{1\leq i\leq n}(m_i+\sum_{j< i}v_{j i})l_i\]
evaluated on the solution set of the above system of inequalities.
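As a small worked instance of this reduction (our own example, continuing the matrix ${\rm diag}(\zeta_5,\zeta_5^2)$ considered after Lemma \ref{suchandsuch}), one may take $f_1=t_1^5$ and $f_2=t_1^3t_2$; whether these are exactly the monomials produced by Lemma 3.1 of \cite{hl} is immaterial for the illustration, since they have the required triangular shape and every invariant rational monomial is a product of their integer powers. An invariant monomial $f_1^{l_1}f_2^{l_2}=t_1^{5l_1+3l_2}t_2^{l_2}$ corresponds to an integer solution of $5l_1+3l_2\geq 0$ and $l_2\geq 0$, the objective function is $5l_1+4l_2$, and its minimal positive value is $3$, attained at $(l_1,l_2)=(-1,2)$, that is, at the monomial $t_1t_2^2$, in agreement with the value $k_t=3$ computed earlier.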
To illustrate the difficulty of finding a lower bound for $M_{G,V}$, we will determine the value of $M_{G,V}$ explicitly for certain finite subgroups
$G$ of $GL_2(\mathbb{C})$.
The list of all finite subgroups of $GL_2(\mathbb{C})$ is presented in \cite{huf}.
Let $G$ be a finite group from Lemma 2.1 of \cite{huf}.
The group $G$ has two generators
\[A=\left(\begin{array}{cc}
\lambda^{v_1} & 0 \\
0 & \lambda^{j v_2}
\end{array}\right) , \ B=\left(\begin{array}{cc}
\lambda^{g} & 0 \\
0 & \lambda^{dg}
\end{array}\right),\]
where $\lambda$ is an $e$-th primitive root of unity, $v_1, v_2 > 1, v_1 v_2|g, g|e, d|e, gcd(v_1, v_2)=gcd(e, j)=gcd(v_1, d)=gcd(v_2, d)=1$.
Additionally, the number $d$ is square-free and each prime factor of $e$ divides one of the numbers $v_1$, $v_2$ or $d$.
In particular, $G\simeq <A>\times <B>=\mathbb{Z}_{e}\times\mathbb{Z}_{\frac{e}{g}}$.
To calculate $M_G$, we need to consider the following system of congruences:
\[v_1 a_1+jv_2 a_2\equiv 0\!\!\pmod e, \ ga_1 +dga_2 \equiv 0\!\!\pmod e, \]
where $a_1, a_2\geq 0$ are such that $a_1+a_2> 0$.
The second congruence implies that $a_1=\frac{et}{g} -da_2$, where $t$ is a positive integer. Substituting the value of $a_1$ into the first congruence we obtain
\[\frac{ev_1 t}{g}\equiv(dv_1-jv_2)a_2 \pmod e.\]
Since $gcd(e, dv_1-jv_2)=1$, we obtain that $\frac{ev_1}{g}$ divides $a_2$, which implies that $\frac{ev_2}{g}$ divides $a_1$. Since both $a_1$ and $a_2$ are multiples of
$\frac{e}{g}$, the second congruence $ga_1 +dga_2 \equiv 0\!\!\pmod e$ can be eliminated from the system since it is automatically satisfied.
Define $a_1=\frac{ev_2}{g} a'_1, a_2=\frac{ev_1}{g}a'_2$. Then
$a'_1 +ja'_2 \equiv 0\pmod{\frac{g}{v_1 v_2}}$, or equivalently, $a'_1 +ja'_2 =\frac{gs}{v_1 v_2}$ for some $s> 0$. This congruence has the solution
\[a'_1=\frac{gs(j+1)}{v_1 v_2} -jt, \ a'_2 = -\frac{gs}{v_1 v_2} +t .\]
Since $a'_1, a'_2\geq 0$, the parameter $t$ satisfies
\[\frac{gs}{v_1 v_2}\leq t\leq \frac{gs}{v_1 v_2} +[\frac{gs}{jv_1 v_2}].\]
Additionally,
\[a_1=\frac{es(j+1)}{v_1}-\frac{ev_2 jt}{g} \text{ and } \ a_2= -\frac{es}{v_2} +\frac{ev_1 t}{g}.
\]
Thus
\[a_1+a_2 =\frac{es(j+1)}{v_1}-\frac{es}{v_2}-\frac{et}{g}(v_2 j-v_1).
\]
Finally, observe that for every $s>0$ and for every $t$ such that $\frac{gs}{v_1 v_2}\leq t\leq \frac{gs}{v_1 v_2} +[\frac{gs}{jv_1 v_2}]$,
the right-hand-side of the above formula for $a_1+a_2$ is greater than zero.
Now we are ready to determine the values of $M_{G,V}$.
\begin{pr}\label{aformula}
Assume $G$ is a finite group from Lemma 2.1 of \cite{huf}, as above. Then the value of $M_{G,V}$ is given as follows.
If $jv_2 < v_1$, then $M_G=\frac{e}{v_1}$.
If $jv_2 > v_1$, then $M_G=\min\{\min\limits_{0< s < j}\{\frac{es}{v_1}-[\frac{gs}{jv_1 v_2}]\frac{e(v_2 j -v_1)}{g}\}, \frac{e}{v_2}\}$.
\end{pr}
\begin{proof}
If $jv_2 < v_1$, and $s$ is fixed, then the minimum of such $a_1+a_2$ equals $\frac{es}{v_1}$ and is attained for $t=\frac{gs}{v_1 v_2}$. Therefore
$M_G=\min\{a_1 +a_2\}=\frac{e}{v_1}$.
If $jv_2 > v_1$, and $s$ is fixed, then the minimum of such $a_1+a_2$ equals $\frac{es}{v_1}-[\frac{gs}{jv_1 v_2}]\frac{e(v_2 j -v_1)}{g}$ and is attained for
$t=\frac{gs}{v_1 v_2}+[\frac{gs}{jv_1 v_2}]$.
If $s=jl+s'$, where $0\leq s' < j$, then $[\frac{gs}{jv_1 v_2}]=\frac{gl}{v_1 v_2}+[\frac{gs'}{jv_1 v_2}]$. After substituting this into the above expression for $a_1+a_2$
we obtain
\[
a_1+a_2=\frac{el}{v_2} +(\frac{es'}{v_1}-[\frac{gs'}{jv_1 v_2}]\frac{e(v_2 j -v_1)}{g}).
\]
If $s'=0$, then the minimum of such $a_1+a_2$ is attained for $l=1$ and it equals $a_1+a_2=\frac{e}{v_2}$.
If $s' > 0$, then the minimum of such $a_1+a_2$ is attained for $l=0$ and it equals
\[\min\limits_{0<s'<j}\{\frac{es'}{v_1}-[\frac{gs'}{jv_1 v_2}]\frac{e(v_2 j -v_1)}{g}\}.\]
The statement follows by combining the last two formulas.
\end{proof}
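As a quick numerical check of Proposition \ref{aformula} (our own example), take $v_1=2$, $v_2=3$, $j=1$, $g=e=6$ and $d=1$; the conditions of Lemma 2.1 of \cite{huf} listed above are satisfied, $A={\rm diag}(\zeta_6^2,\zeta_6^3)$ and $B$ is the identity matrix. Since $jv_2=3>v_1=2$ and the inner minimum in the proposition ranges over the empty set $0<s<1$, the formula gives $M_G=\frac{e}{v_2}=2$. Directly, an invariant monomial $x_1^{a_1}x_2^{a_2}$ requires $2a_1+3a_2\equiv 0\pmod 6$, which admits no solution of total degree one, while $x_2^2$ is invariant.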
\begin{ex}
The following example shows that not all finite subgroups of $GL(V)$ are small.
Let $G$ be a subgroup of $SL_2(\mathbb{C})$ generated by the matrices
\[
\left(\begin{array}{cc}
-1 & 0 \\
0 & -1
\end{array}\right),
\frac{1}{2}\left(\begin{array}{cc}
-1+i & 1-i \\
-1-i & -1-i
\end{array}\right),
\left(\begin{array}{cc}
0 & 1 \\
-1 & 0
\end{array}\right),
\left(\begin{array}{cc}
i & 0 \\
0 & -i
\end{array}\right).
\]
The group $G$ is the group from Lemma 2.3 of \cite{huf} and $V=\mathbb{C}^2$. If $g\in G$ is not the identity matrix, then it has eigenvalues $\lambda$ and $\lambda^{-1}$, where $\lambda\neq 1$ is a root of unity. If $H$ is an abelian subgroup of $G$, then $H_s$ can be conjugated to a subgroup $H'$ of the group of diagonal matrices.
Thus $x_1 x_2\in \mathbb{C}[V]^{H'}$,
i.e. $M_H=M_{H_s}\leq 2$. On the other hand, Lemma 4.1 of \cite{huf} (see the first row in the table on page 327) implies $M_G=6$.
\end{ex}
Based on the above discussion, the following problem seems natural.
\begin{quest}\label{question}
Characterize the class of small finite subgroups $G$ of $GL(V)$.
\end{quest}
A more general problem is to estimate the value of $M_G$ for a given finite subgroup $G\leq GL(V)$. There are no general results for the lower bound for $M_G$ but
the following result of Thompson gives an upper bound for $M_G$ in general.
\begin{pr}\label{thompson'sresult}
If $G$ is a finite subgroup of $GL_n(\mathbb{C})$ and $G$ has no non-trivial characters, then $M_G\leq 4n^2$.
\end{pr}
\begin{proof}
In the notation of the paper \cite{thom}, the integer $M_G$ coincides with $d_G$. The main theorem of \cite{thom} states that $d_G\leq 4n^2$.
\end{proof}
\section{Minimal degrees of invariants of algebraic groups}
Let $\mathbb{G}$ be an algebraic subgroup of $GL(V)$ and $\mathbb{B}$ be its Borel subgroup. Propositions I.3.4 and I.3.6 of \cite{jan} (see also Theorem 9.1 of \cite{gross}) imply
\[(F[V]\otimes F[\mathbb{G}/\mathbb{B}])^{\mathbb{G}}\simeq F[V]^{\mathbb{B}}.\]
Since $\mathbb{G}/\mathbb{B}$ is a projective variety, we have $F[\mathbb{G}/\mathbb{B}]=F$. Therefore, $F[V]^{\mathbb{G}}\simeq F[V]^{\mathbb{B}}$ and
the minimal degrees of invariants $M_{\mathbb{G}, V}$ and $M_{\mathbb{B}, V}$ coincide.
The group $\mathbb B$ is a semi-direct product of a torus $\mathbb T$ and the unipotent radical $\mathbb U$ of $\mathbb B$, i.e. $\mathbb{B}=\mathbb{T}\ltimes\mathbb{U}$.
For a (finite-dimensional) $\mathbb U$-module $S$, denote by $S_{\mathbb{U}}$ the smallest $\mathbb U$-submodule of $S$ such that $\mathbb U$ acts trivially on $S/S_{\mathbb{U}}$.
Define a filtration of a $\mathbb U$-module $V$ as
\[0\subseteq V_k\subseteq V_{k-1}\subseteq\ldots\subseteq V_1\subseteq V_0=V,\]
where $V_{i+1}=(V_i)_{\mathbb U}$ for each $0\leq i\leq k-1$. Since $\mathbb{U}\unlhd\mathbb{B}$, the above filtration is also a filtration of $\mathbb B$-submodules.
One can verify easily that $(V/V_1)^*=(V^*)^{\mathbb U}$, which implies that $F[V/V_1]$ is a $\mathbb B$-invariant subalgebra of $F[V]$ such that $F[V/V_1]\subseteq F[V]^{\mathbb U}$.
\begin{pr}\label{boundsviatorus}
The minimal degrees of invariants of $\mathbb{G}$ and $\mathbb{T}$ are related in the following way.
\[M_{\mathbb{T}, V}\leq M_{\mathbb{G}, V}=M_{\mathbb{B}, V}\leq M_{\mathbb{T}, V/V_1}.\]
\end{pr}
\begin{proof}
The first inequality is trivial. For the second inequality, first observe that $F[V]^{\mathbb B}=(F[V]^{\mathbb U})^{\mathbb T}$. Therefore,
$F[V/V_1]^{\mathbb T}\subseteq F[V]^{\mathbb B}$ which implies $M_{\mathbb{B}, V}\leq M_{\mathbb{T}, V/V_1}$.
\end{proof}
The second inequality in the above proposition is sharp. In fact, if $\mathbb{U}$ coincides with the centralizer of the flag $V_k\subseteq V_{k-1}\subseteq\ldots\subseteq V_1\subseteq V$, then $\mathbb U$ is good in the sense of \cite{pom}. Furthermore, Theorem 4.2 of \cite{pom} implies that $F[V]^{\mathbb U}=F[V/V_1]$. Thus $F[V]^{\mathbb B}=F[V/V_1]^{\mathbb T}$, hence $M_{\mathbb{G}, V}=M_{\mathbb{B}, V}=M_{\mathbb{T}, V/V_1}=M_{\mathbb{T}, V}$.
An important consequence of the above proposition is that in many cases the minimal degree of invariants of a linear group is controlled by the minimal degree of invariants of a suitable abelian subgroup; more precisely, of a suitable diagonalizable subgroup.
\section{Invariants of supergroups}
Having in mind a possible modification of the cryptosystem based on invariants of groups to one based on supergroups, we will define the notion of an invariant of a supergroup.
From now on we assume that the characteristic of the ground field $F$ is different from $2$.
\subsection{Definitions and actions}
Let $V$ be a superspace, that is, a $\mathbb{Z}_2$-graded space with even and odd components $V_0$ and $V_1$,
respectively. If $v\in V_i$, then $i$ is said to be the {\it parity} of $v$ and it is denoted by $|v|$. In what follows, morphisms between two superspaces $V$ and $W$ are assumed to be graded. The tensor product $V\otimes W$ has the natural structure of a superspace given by
$(V\otimes W)_i=\bigoplus\limits_{k+l=i, k, l\in\mathbb{Z}_2} V_k\otimes W_l$.
A $\mathbb{Z}_2$-graded associative algebra $A$ is called a {\it superalgebra}.
The superalgebra $A$ is said to be {\it supercommutative} if it satisfies $ab=(-1)^{|a||b|}ba$ for all homogeneous elements $a$ and $b$.
For example, any algebra $A$ has the trivial superalgebra structure defined by $A_0=A, A_1=0$. The tensor product $A\otimes B$ of two superalgebras $A$ and $B$
has the superalgebra structure defined by
\[(a\otimes b)(c\otimes d)=(-1)^{|b||c|}ac\otimes bd\]
for $a, c\in A$ and $b, d\in B$.
The category of all supercommutative superalgebras with graded morphisms is denoted by $\mathsf{SAlg}_F$.
A superalgebra $A$ is called a {\it superbialgebra} if it is a coalgebra with the coproduct $\Delta : A\to A\otimes A$ and counit
$\epsilon : A\to F$ such that both $\Delta$ and $\epsilon$ are superalgebra homomorphisms. In what follows we use Sweedler's notation
$\Delta(a)=\sum a_1\otimes a_2$ for $a\in A$. Let $A^+$ denote the (two-sided) superideal $\ker\epsilon$.
A superspace $V$ is called a left/right $A$-{\it supercomodule} if $V$ is a left/right $A$-comodule and the corresponding comodule map
$\tau : V\to V\otimes A$ is a morphism of superspaces.
A superbialgebra $A$ is called a {\it Hopf superalgebra} if there is a superalgebra endomorphism $s : A\to A$ such that
$\sum a_1s(a_2)=\sum s(a_1)a_2=\epsilon(a)$ for $a\in A$. Additionally, we assume that $s$ is bijective and it satisfies the condition
$\Delta s=t(s\otimes s)\Delta$, where $t : A\otimes A\to A\otimes A$ is a (supersymmetry) homomorphism defined by
$a\otimes a'\mapsto (-1)^{|a||a'|}a'\otimes a$ for $a, a'\in A$.
Let $A$ be a supercommutative superalgebra. Then the functor $SSp \ A : \mathsf{SAlg}_F\to \mathsf{Sets}$, defined by
$SSp \ A (C)=Hom_{\mathsf{SAlg}_F}(A, C)$ for $C\in\mathsf{SAlg}_F$, is called an {\it affine superscheme}. If $X=SSp \ A$ is an affine superscheme, then
$A$ is denoted by $F[X]$ and it is called the {\it coordinate superalgebra} of $X$.
If $A$ is a Hopf superalgebra, then $G=SSp \ A$ is a group functor that is called an {\it affine group superscheme}, or shortly, an {\it affine supergroup}.
The group structure of $G(C)$ is given by $g_1 g_2 (a)=\sum g_1(a_1)g_2(a_2)$, $g^{-1}=g s$ and $1_{G(C)}=\epsilon$ for $g_1, g_2, g\in G(C)$ and $a\in A$.
The category of affine supergroups is dual to the category of supercommutative Hopf superalgebras.
If $F[G]$ is finitely generated, then $G$ is called an {\it algebraic} supergroup. If $F[G]$ is finite-dimensional, then $G$ is called a {\it finite} supergroup.
A (closed) subsupergroup $H$ of $G$ is uniquely defined by the Hopf ideal $I_H$ of $F[G]$ such that for every $C\in\mathsf{SAlg}_F$ an element $g\in G(C)$ belongs to $H(C)$ if and only if $g(I_H)=0$. For example, the {\it largest even} subsupergroup $G_{ev}$ of $G$ is defined by the ideal $F[G]F[G]_1$.
The category of left finite-dimensional $G$-supermodules coincides with the category of right $F[G]$-supercomodules. In fact, if $V$ is a right $F[G]$-supercomodule, then $G(C)$ acts on $V\otimes C$ by $C$-linear transformation
$g(v\otimes 1)=\sum v_1\otimes g(a_2)$ for $g\in G(C)$ and $\tau(v)=\sum v_1\otimes a_2$.
Let $V$ be a superspace such that $\dim V_0 =m$ and $\dim V_1 =n$. The superspace $V$ corresponds to an affine superscheme $A^{m|n}$, called the {\it affine superspace} of (super)dimension $m|n$, such that $A^{m|n}(C)=C_0^m\oplus C_1^n$ for every $C\in\mathsf{SAlg}_F$. The affine superscheme $A^{m|n}$ can be identified with the functor $(V\otimes ?)_0$. In fact, choose a homogeneous basis consisting of elements $v_i$ such that $|v_i|=0$ for $1\leq i\leq m$ and $|v_i|=1$ for $m+1\leq i\leq m+n$. Then every element $w$ of $(V\otimes C)_0$ has the form $w=\sum_{1\leq i\leq m+n} v_i\otimes c_i$, where $|c_i|=|v_i|$.
The coordinate superalgebra of $A^{m|n}$ is isomorphic to the polynomial superalgebra freely generated by the dual basis $x_i$ of $V^*$ such that
$x_i(v_j)=\delta_{ij}$ for $1\leq i, j\leq m+n$. In other words, $w(x_i)=x_i(w)=c_i$ for every $w=\sum_{1\leq i\leq m+n} v_i\otimes c_i\in (V\otimes C)_0$ and $C\in\mathsf{SAlg}_F$.
In order to make the notation consistent, we will also denote $F[A^{m|n}]$ by $F[V]$.
Every $g\in G(C)$ induces an even operator on the $F$-superspace $V\otimes C$. Thus $(V\otimes C)_0$ is a $G(C)$-submodule of $V\otimes C$. Since this action is functorial, it gives
the left $G$-action on the affine superscheme $A^{m|n}$.
The composition of this action with the inverse morphism $g\mapsto g^{-1}$ defines the right action of $G$ on $A^{m|n}$, which is equivalent to the right coaction of $F[G]$ on $F[V]$.
Since the comodule map $F[V]\to F[V]\otimes F[G]$ is a superalgebra homomorphism, the $F[G]$-supercomodule structure of $F[V]$ is defined by $F[G]$-supercomodule structure of $V^*=\sum_{1\leq i\leq m+n} Fx_i$.
If $\tau(v_i)=\sum_{1\leq k\leq m+n} v_k\otimes a_{ki}$ for $1\leq i\leq m+n$, then
\[\tau(x_i)=\sum_{1\leq k\leq m+n} x_k \otimes (-1)^{|v_k|(|v_i|+|v_k|)}s(a_{ik}).\]
There is a natural pairing $(F[V]\otimes C)\times (V\otimes C)\to C$ given by
\[(f\otimes a)(v\otimes b)=(-1)^{|a||v|} f(v)ab=(-1)^{|a||v|}v(f)ab\]
for $a, b\in C$ and $C\in\mathsf{SAlg}_F$, such that the above coaction is equivalent to the standard action
$(g(f\otimes a))(v\otimes b)=(f\otimes a)(g^{-1}(v\otimes b))$ for $g\in G(C)$.
\subsection{Cryptology application}
The invariants of supergroups have two possible applications in the design of public-key cryptosystem.
The first option is to work with {\it relative} invariants from the $C$-superalgebra $C[V]^{G(C)}=(F[V]\otimes C)^{G(C)}$ for some superalgebra $C\in\mathsf{SAlg}_F$.
The second option is to work with {\it absolute} invariants from the superalgebra $F[V]^G$, consisting of all $f\in F[V]$ such that $\tau(f)=f\otimes 1$, or equivalently,
$g(f\otimes 1)=f\otimes 1$ for every $g\in G(C)$ and $C\in\mathsf{SAlg}_F$.
We will leave a consideration of these options for the future.
\section{Invariants of certain supergroups}
We will now investigate the structure of invariants of certain supergroups $G$. We will establish, in contrast to the case of diagonalizable groups, that generators of invariants of
$G$ are not given by monomials.
Recall that every diagonalizable algebraic group is isomorphic to a finite product of copies of the one-dimensional torus $G_m$ and groups $\mu_n$, where
$\mu_n$ is the group of $n$-th roots of unity and $n>1$. Here $\mu_n(C)=\{c\in C^{\times} | c^n=1\}$ for every commutative algebra $C$ (see Theorem 2.2 of \cite{water}).
Let $D$ be a diagonalizable algebraic group and $X=X(D)$ be the character group of $D$. Then $F[D]=FX$ is a group algebra of $X$.
The Lie algebra $Lie(D)$ can be identified with the subspace of $F[D]^*=(FX)^*$ consisting of all linear maps
$y : FX\to F$ such that $y(g_1 g_2)=y(g_1)+y(g_2)$ for every $g_1, g_2\in X$.
Fix a pair $(g,x)$, where $g\in X$ and $x\in Lie(D)$ such that if $x\neq 0$ then $g^2=1$.
Since $char F\neq 2$, we have $y(g)=0$ for every $y\in Lie(D)$.
The following supergroup $D_{g, x}$ was first introduced in \cite{maszub}.
The coordinate algebra $F[D_{g, x}]$ is isomorphic to $FX\otimes F[z]=FX\oplus (F X)z$, where $z$ is odd and $z^2=0$.
The Hopf superalgebra structure on $F[D_{g, x}]$ is defined as:
\[\Delta(h)=h\otimes h + x(h)hz\otimes hgz, \ \Delta(z)=1\otimes z+z\otimes g, \ \epsilon(z)=0, \ \epsilon(h)=1,\]
$s(h)=h^{-1}$ for $h\in X$ and $s(z)=-g^{-1}z$.
Denote $Fh\oplus Fhz$ by $L(h)$. Then every $L(h)$ is an indecomposable injective $D_{g, x}$-supersubmodule of $F[D_{g, x}]$ and $F[D_{g, x}]=\oplus_{h\in X} L(h)$.
Let $Y$ denote $\{h\in X| x(h)=0\}$.
The supermodule $L(h)$ is irreducible if and only if $h\not\in Y$. If $L(h)$ is not irreducible, then it has the socle $S(h)=Fh$ and $L(h)/S(h)\simeq \Pi S(gh)$.
If we denote the basis elements $h$ and $hz$ of $L(h)$ by $f_0$ and $f_1$ respectively, then
\[\tau(f_0)=f_0\otimes h + x(h)f_1\otimes hgz \text{ and } \tau(f_1)=f_0\otimes hz +f_1\otimes hg.\]
Also, $L(h)^*\simeq \Pi L(g^{-1}h^{-1})$ and $S(h)^*\simeq S(h^{-1})$.
\begin{pr}\label{irred}(Proposition 5.1 of \cite{maszub})
Every irreducible $D_{g, x}$-supermodule is isomorphic either to $L(h)$ for $h\not\in Y$ or to $S(h)$ for $h\in Y$. Moreover, every finite-dimensional $D_{g, x}$-supermodule
is isomorphic to a direct sum of (not necessarily irreducible) supermodules $\Pi^a L(h)$ and $\Pi^b S(h')$ for $h\in X, h'\in Y$ and $a, b=0, 1$.
\end{pr}
Consider a (finite-dimensional) $D_{g, x}$-supermodule $V$
such that $V^*\simeq L(h_1)\oplus\ldots\oplus L(h_s)$.
The superalgebra $F[V]$ is generated by the elements
$f_{j, 0}$ and $f_{j, 1}$, for $1\leq j\leq s$, such that $|f_{j, 0}|=0, |f_{j, 1}|=1$ and
\[\tau(f_{j, 0})=f_{j, 0}\otimes h_j +x(h_j)f_{j, 1}\otimes h_j gz \text{ and } \tau(f_{j, 1})=f_{j, 0}\otimes h_j z+ f_{j, 1}\otimes h_j g.\]
Let $l=(l_1, \ldots, l_s)$ be a vector with non-negative integer coordinates and let $J$ be a subset of $\underline{s}=\{1, 2, \ldots, s\}$.
Denote $f_0^l=\prod_{1\leq j\leq s}f_{j, 0}^{l_j}$, $f_1^J=\prod_{j\in J}f_{j, 1}$, $h^l=\prod_{1\leq j\leq s} h_j^{l_j}$ and $h^J=\prod_{j\in J}h_j$.
For $1\leq j\leq s$ let $\epsilon_j$ denote the vector that has the $j$-th coordinate equal to $1$ and all remaining coordinates equal to zero.
For a basis monomial $f_0^l f_1^J$ we have
\[\begin{aligned}\tau(f_0^l f_1^J)=&
(f_0^l\otimes h^l+\sum_{1\leq j\leq s}l_j x(h_j)f_0^{l-\epsilon_j}f_{j, 1}\otimes h^l gz)\times \\
&(f_1^J\otimes h^J g^{|J|}+\sum_{j\in J} (-1)^{k_{j, J}}f_{j, 0}f_1^{J\setminus j}\otimes h^J g^{|J|-1}z) \\
=&f_0^l f_1^J\otimes h^l h^J g^{|J|}+\sum_{j\not\in J}(-1)^{k_{j, J\cup j}}l_j x(h_j)f_0^{l-\epsilon_j}f_1^{J\cup j} \otimes h^l h^J g^{|J|+1}z \\
&+\sum_{j\in J}(-1)^{k_{j, J}} f_0^{l+\epsilon_j}f_1^{J\setminus j}\otimes h^l h^J g^{|J|-1}z,
\end{aligned}\]
where $k_{j, J}$ is the number of elements $j'\in J$ such that $j' > j$.
Since $g^{|J|+1}=g^{|J|-1}$, this implies the following proposition.
\begin{pr}\label{descriptionofinv}
A (super)polynomial $f=\sum_{l, J}a_{l, J}f_0^l f_1^J$ belongs to $F[V]^{D_{g, x}}$ if and only if the following conditions are satisfied.
\begin{enumerate}
\item If $a_{l, J}\neq 0$, then $h^l h^J g^{|J|}=1$,
\item The polynomial
\[\sum_{l, J}a_{l, J}(\sum_{j\not\in J}(-1)^{k_{j, J\cup j}}l_j x(h_j)f_0^{l-\epsilon_j}f_1^{J\cup j}+\sum_{j\in J}(-1)^{k_{j, J}} f_0^{l+\epsilon_j}f_1^{J\setminus j})\]
vanishes.
\end{enumerate}
\end{pr}
We can rewrite the polynomial
\[\sum_{l, J}a_{l, J}(\sum_{j\not\in J}(-1)^{k_{j, J\cup j}}l_j x(h_j)f_0^{l-\epsilon_j}f_1^{J\cup j}+\sum_{j\in J}(-1)^{k_{j, J}} f_0^{l+\epsilon_j}f_1^{J\setminus j})\]
from the second condition of the above proposition as
\[\sum_{l, J} f_0^l f_1^J (\sum_{j\in J}(-1)^{k_{j, J}}(l_j +1)x(h_j)a_{l+\epsilon_j, J\setminus j}+\sum_{j\not\in J}(-1)^{k_{j, J\cup j}} a_{l-\epsilon_j, J\cup j}),\]
where $l_j=0$ implies $a_{l-\epsilon_j, J\cup j}=0$.
\begin{cor}\label{definingequations}
A polynomial $f=\sum_{l, J}a_{l, J}f_0^l f_1^J$ belongs to $F[V]^{D_{g, x}}$ if and only if its coefficients $a_{l,J}$, for all pairs $(l,J)$, satisfy the following equations.
\begin{enumerate}
\item If $h^l h^J g^{|J|}\neq 1$, then $a_{l, J}=0$,
\item $\sum_{j\in J}(-1)^{k_{j, J}}(l_j +1)x(h_j)a_{l+\epsilon_j, J\setminus j}+\sum_{j\not\in J}(-1)^{k_{j, J\cup j}} a_{l-\epsilon_j, J\cup j}=0$.
\end{enumerate}
\end{cor}
If $s=1$, then $F[V]^{D_{g, x}} =F$. Therefore, from now on we will assume that $s> 1$.
Define the partial operator $P_j$ acting on the set of all pairs $(l,J)$ by
$P_j(l, J)=(l+\epsilon_j , J\setminus j)$ in the case when $j\in J$, and $P_j(l, J)$ is undefined if $j\notin J$.
Also define the partial operator $Q_j$ acting on the set of all pairs $(l,J)$ by
$Q_j(l, J)=(l-\epsilon_j, J\cup j)$ in the case $j\not\in J$ and $l_j>0$, and $Q_j(l,J)$ is undefined if $j\in J$ or $l_j=0$.
\begin{lm}\label{commutativity} The operators $P_j$ and $Q_j$ satisfy the following conditions.
\begin{enumerate}
\item If $P_j$ is defined on $(l, J)$, then $Q_j P_j (l, J)=(l, J)$.
Also, if $Q_j$ is defined on $(l, J)$, then $P_j Q_j (l, J)=(l, J)$,
\item If $j\neq j'$ and $P_j Q_{j'}$ is defined on $(l, J)$, then $P_j Q_{j'}(l, J)=Q_{j'}P_j(l, J)$.
Also, if $j\neq j'$ and $Q_{j'} P_j$ is defined on $(l, J)$, then $Q_{j'}P_j(l, J)=P_j Q_{j'}(l, J)$.
\end{enumerate}
\end{lm}
Two pairs $(l, J)$ and $(l', J')$ are called equivalent if there is a chain
$(l, J)=(l_0, J_0), \ldots, (l_k, J_k)=(l', J')$
such that $(l_{i+1}, J_{i+1})=S_i(l_i, J_i)$ for $0\leq i\leq k-1$ and each $S_i$ is an operator of type $P$ or $Q$.
Lemma \ref{commutativity} implies that this relation is an equivalence and the set of equations from Corollary \ref{definingequations} is a disjoint union of subsets corresponding to these equivalence classes.
Moreover, each such equivalence class has a unique representative of the form $(l, \underline{s})$ or $(0, J)$, where the cardinality of $J$ is maximal over this class.
In the first case, all pairs from the equivalence class of $(l, \underline{s})$ can be obtained from this representative by applying operators of type $Q$ only.
In the second case, all pairs from the equivalence class of $(0, J)$ can be obtained from $(0, J)$ by applying operators of type $P$ only.
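As a small illustration (an easy check for $s=2$, not part of the general argument): the pair $((1,0),\{2\})$ satisfies $Q_1((1,0),\{2\})=((0,0),\{1,2\})$, while
\[
((0,0),\{1,2\})\xrightarrow{\;P_1\;}((1,0),\{2\})\xrightarrow{\;P_2\;}((1,1),\emptyset),
\]
so the pairs $((1,1),\emptyset)$, $((1,0),\{2\})$ and $((0,0),\{1,2\})$ lie in one equivalence class with representative $(0,\{1,2\})$, whose odd part has maximal cardinality over this class.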
\begin{ex}\label{smallexample}
Let $D=G_m$. Since $X(D)\simeq\mathbb{Z}$, we can fix a generator $h$ of $X=X(D)$. Then $x\in Lie(D)$ is determined by the value $x(h)=\alpha\in F$.
We will describe invariants of $D_{1,x}$ corresponding to the particular case $s=2$.
Denote $h_1=h^{k_1}, h_2=h^{k_2}$.
The subset of equations in Corollary \ref{definingequations} corresponding to the pair $(0, \{1\})$
is given as \[\alpha k_1 a_{(1, 0), \emptyset}=0=a_{(0, 0), \{1\}} \]
and the subset corresponding to the pair $(0, \{2\})$ is given as
\[\alpha k_2 a_{(0, 1), \emptyset}=0=a_{(0, 0), \{2\}}. \]
The subset of equations, which corresponds to the pair $((l_1, l_2), \{1, 2\})$, consists of the equations
\[\alpha(-(l_1+1)k_1 a_{(l_1+1, l_2), \{2\}}+(l_2+1)k_2 a_{(l_1, l_2 +1), \{1\}})=0 ,
\]
\[\alpha(l_2+1)k_2 a_{(l_1+1, l_2+1), \emptyset}-a_{(l_1, l_2), \{1, 2\}}=0,\]
\[\alpha(l_1+1)k_1 a_{(l_1+1, l_2+1), \emptyset}+a_{(l_1, l_2), \{1, 2\}}=0
\]
and
\[a_{(l_1+1, l_2), \{2\}}+a_{(l_1, l_2+1), \{1\}}=0.
\]
If $\alpha=0$ and $k_1, k_2\neq 0$, then the superspace $F[V]^{D_{1, x}}$ is generated by the elements $f_0^{l+\epsilon_1+\epsilon_2}$ and
$f_0^{l+\epsilon_1}f_1^{\{2\}}-f_0^{l+\epsilon_2}f_1^{\{1\}}$ such that $(l_1+1)k_1+(l_2 +1)k_2=0$.
If $\alpha\neq 0$ and $k_1, k_2\neq 0$, then the superspace $F[V]^{D_{1, x}}$ is generated by the elements
$f_0^{l+\epsilon_1+\epsilon_2}-\alpha (l_1+1) k_1 f_0^l f_1^{\{1, 2\}}=f_0^{l+\epsilon_1+\epsilon_2}+\alpha (l_2+1) k_2 f_0^l f_1^{\{1, 2\}}$ and
$f_0^{l+\epsilon_1}f_1^{\{2\}}-f_0^{l+\epsilon_2}f_1^{\{1\}}$ such that $(l_1+1)k_1+(l_2 +1)k_2=0$.
The remaining cases, when $k_1=0$ or $k_2=0$, are left for the reader.
\end{ex}
Next, let us consider $D_{g, x}$, where $D$ is an arbitrary diagonalizable group and the elements $g$ and $x$ are as above.
Our aim is to estimate $M_{D_{g, x}, V}$ in terms of the minimal degrees of its diagonalizable (purely even) subsupergroups.
\begin{rem}\label{remrem}
Since $\Delta(g)=g\otimes g$, $F<g>$ is a (purely even) Hopf subsuperalgebra of $F[D_{g, x}]$.
In other words, there is a short exact sequence of supergroups
\[1\to D'_{1, x}\to D_{g, x}\to\mu_2\to 1,\]
where $F<g>\simeq F[\mu_2]$ and $D'$ is the kernel of the restriction of the epimorphism $D_{g, x}\to\mu_2$.
Additionally, $F[D']=FX/FX(g-1)$ and $Lie(D)=Lie(D')$.
For every $D_{g, x}$-supermodule $V$ we obtain $F[V]^{D_{g, x}}=(F[V]^{D'_{1, x}})^{\mu_2}$. Therefore $f\in F[V]^{D'_{1, x}}$
implies $f^2\in F[V]^{D_{g, x}}$, which yields $M_{D'_{1, x}, V}\leq M_{D_{g, x}, V}\leq 2M_{D'_{1, x}, V}$.
\end{rem}
Next, we will consider the special case when $g=1$. Since the element $z$ generates a Hopf supersubalgebra of $F[D_{1, x}]$, there is a supergroup epimorphism $D_{1, x}\to SSp\, F[z] \simeq G_a^-$, where $G_a^-$ is a one-dimensional odd unipotent supergroup. The kernel of this epimorphism coincides with $(D_{1, x})_{ev}\simeq D$.
Assume that $D_{1, x}$ is connected, which happens if and only if $D$ is connected. Then
$F[V]^{D_{1, x}}=F[V]^{Dist(D_{1, x})}$ (see \cite{zub}).
Since $(D_{1, x})_{ev}$ is (naturally) isomorphic to $D$, from now on we will identify it with $D$.
The restriction of the comodule map $\tau$ is given by $\tau|_{D}(f_{j, a})=f_{j, a}\otimes h_j$ for $1\leq j\leq s$ and $a=0, 1$.
We also have $F[V]^{D_{1, x}}=(F[V]^D)^{D_{1, x}/D}$.
\begin{lm}\label{image}
There is a short exact sequence
\[0\to \ I\to Dist(D_{1, x})\to Dist(D_{1, x}/D)\to 0,\]
where the (two-sided) superideal $I$ is generated by $Dist(D)^+$.
\end{lm}
\begin{proof}
Since $\mathfrak{m}=F[D_{1, x}]^+=F(X-1)\oplus (FX)z$, we have $\mathfrak{m}^k=\mathfrak{m}_0^k\oplus\mathfrak{m}_0^{k-1} z$.
Therefore $Dist(D_{1, x})=Dist(D)\oplus Dist(D)\phi$, where $\phi$ is an odd element from $Lie(D_{1, x})=(\mathfrak{m}/\mathfrak{m}^2)^*$ such that $\phi(z)=1$ and $\phi(h)=0$ for
$h\in X$. Since the image of $\phi$, which equals $\phi|_{F[z]}$, generates $Dist(D_{1, x}/D)$, the statement follows.
\end{proof}
Denote by $\psi$ the restriction $\phi|_{F[z]}$. Then $Dist(D_{1, x}/D)=F\oplus F\psi$ and $\psi^2=0$.
Let $A$ denote $F[V]^D$. Then $\psi$ acts on $A$ as $\phi|_A$. Furthermore, $\phi$ acts on $F[V]$ as an odd superderivation such that
$\phi f_{j, 0}=x(h_j) f_{j, 1}$ and $\phi f_{j, 1}=f_{j, 0}$. Hence $F[V]^{D_{1, x}}=A^{Dist(D_{1, x}/D)}=\{a\in A\mid \psi a=\phi a=0\}$.
Choose a homogeneous basis $\{v_i\}_{i\in I_1\sqcup I_2}$ of the $\mathbb{N}$-graded space $A$ such that the vectors $\{v_i\}_{i\in I_1}$ form a basis of $\phi A$ and the vectors $\{v_i\}_{i\in I_2}$ form a basis of $A/\phi A$.
Then $\phi v_i=\sum_{j\in I_1} c_{ij} v_j$ for $i\in I_2,$ and the matrix $C=(c_{ij})_{i\in I_2, j\in I_1}$ is row-finite. Since $\phi A\subseteq\ker\phi$, the following Proposition is now evident.
\begin{pr}\label{onemoreapproachtoinvariants}
The space $F[V]^{D_{1, x}}$ is generated by the vectors $v_i$ for $i\in I_1$, and by the vectors $\sum_{j\in I_2} d_j v_j$, such that the vector
$d=(d_j)_{j\in I_2}\in F^{I_2}$ satisfies the equation $dC=0$. Moreover, $\phi$ preserves the degrees, which implies $M_{D, V}=M_{D_{1, x}, V}$.
\end{pr}
\begin{proof} The first statement is obvious.
For a given monomial $D$-invariant we can create a (non-zero) $D_{1, x}$-invariant of the same degree just by applying the map $\phi$.
\end{proof}
Returning to the case of general $g$ and using Proposition \ref{onemoreapproachtoinvariants} and Remark \ref{remrem}, we derive the following theorem.
\begin{tr}\label{generalcase}
Assume that $D$ is connected and the subgroup $D'$ of $D$ is as in Remark \ref{remrem}. Then for every $D_{g, x}$-supermodule $V$ there are inequalities
$M_{D', V}\leq M_{D_{g, x}, V}\leq 2M_{D', V}$.
\end{tr}
\begin{quest}\label{greatproblem}
Describe all (absolute) invariants of supergroups $D_{g, x}$ assuming that all invariants of $D$ are known.
\end{quest}
\end{document}
\begin{document}
\title{Quantum optimal control of the dissipative production of a maximally entangled state}
\author{Karl P. Horn}
\affiliation{Theoretische Physik, Universit\"{a}t Kassel,
Heinrich-Plett-Stra{\ss}e 40, D-34132 Kassel,
Germany}
\author{Florentin Reiter}
\affiliation{Department of Physics, Harvard University,
Cambridge, MA 02138, USA}
\author{Yiheng Lin}
\affiliation{CAS Key Laboratory of Microscale Magnetic Resonance and Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China}
\affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China}
\author{Dietrich Leibfried}
\affiliation{National Institute of Standards and Technology,
Boulder, Colorado 80305, USA}
\author{Christiane P. Koch}
\affiliation{Theoretische Physik, Universit\"{a}t Kassel,
Heinrich-Plett-Stra{\ss}e 40, D-34132 Kassel,
Germany}
\email{[email protected]}
\date{\today}
\begin{abstract}
Entanglement generation can be robust against certain types of noise in approaches that deliberately incorporate dissipation into the system dynamics. The presence of additional dissipation channels may, however, limit
fidelity and speed of the process.
Here we show how quantum optimal control techniques can be used to both speed up the entanglement generation and increase the fidelity in a realistic setup,
whilst respecting typical experimental
limitations. For the example of entangling two trapped ion qubits [Lin et al., Nature \textbf{504}, 415 (2013)], we find an improved fidelity by
simply optimizing the polarization of the laser beams utilized in the
experiment. More significantly, an alternate combination of transitions between internal states of the ions, when combined with optimized polarization,
enables faster entanglement and decreases
the error by an order of magnitude.
\end{abstract}
\maketitle
\section{Introduction}
Quantum devices aim to exploit
the two essential elements of
quantum physics, quantum coherence
and entanglement, for practical applications.
They require the implementation of
a number of basic tasks such as state
preparation or generation of entanglement,
all the while preserving the relevant
non-classical features at the level of
device operation.
The implementation of quantum tasks thus needs to be robust
with respect to parameter fluctuations and
external noise that is unavoidable in
any real physical setup.
Loss of coherence and noise are commonly
attributed to the coupling of the quantum
system with its surrounding
environment~\cite{breuer2002theory}.
One strategy for realizing all necessary
tasks with sufficient accuracy is to
perform the quantum operations at a
time scale faster than the time scale
at which the noise affects the system.
Quantum optimal control theory provides
a set of tools to derive the corresponding
protocols~\cite{glaser2015training} and can be used to
identify the quantum speed limit~
\cite{caneva2009optimal,goerz2011quantum,patsch2018fast,sorensen2016exploring},
i.e., the shortest possible duration
within which the operation can be
carried out with a pre-specified fidelity.
Nevertheless, there is a fundamental
limit in that one cannot `beat' the noise,
particularly when its time scales are comparable to or
faster than the typical speed limits of
the target operation.
An alternative is found in approaches that
deliberately incorporate dissipation into
the system dynamics, often referred to as
quantum reservoir engineering~\cite{poyatos1996quantum}.
The basic idea is to implement stochastic
dynamics whose stationary state is non-classical.
This is achieved by manipulating the coupling
to the environment, or reservoir. In its simplest
form, a constant but switchable coupling is
realized by an electromagnetic field that drives
a transition to a state with fast decay~\cite{poyatos1996quantum}.
The dynamics are described by
the quantum optical master equation~\cite{breuer2002theory},
and the system will
eventually be driven into the fixed point of
the corresponding Liouvillian
~\cite{kraus2008preparation,verstraete2009quantum}.
Applications of this basic idea are many-faceted:
its use has been suggested, for example, in generating
entanglement~
\cite{plenio1999cavity,benatti2003environment,vacanti2009cooling,wolf2011entangling,gonzalez2011entanglement,cho2011optical,kastoryano2011dissipative,bhaktavatsala2013dark,habibian2014stationary,fogarty2014quantum,bentley2014detection,morigi2015dissipative,aron2016photon,reiter2016scalable},
implementing universal quantum computing~\cite{verstraete2009quantum},
driving phase transitions~\cite{cormick2012structural,diehl2008quantum,habibian2013bose}
and autonomous quantum error correction
~\cite{pastawski2011quantum,mirrahimi2014dynamically,reiter2017dissipative}.
Experimentally, the
generation of non-classical states~\cite{kienzler2014quantum},
entangled states~\cite{krauter2011entanglement,lin2013dissipative,shankar2013autonomously,kimchi2016stabilizing},
and non-equilibrium quantum phases~
\cite{syassen2008strong,schindler2013quantum,barreiro2011open}
have successfully been demonstrated.
Engineered dissipation can also be used
towards a better understanding of open quantum system dynamics, by means of
quantum simulation~\cite{barreiro2011open}.
All of these examples testify to the fact that
dissipation can be a resource~\cite{verstraete2009quantum} for quantum technology.
The ultimate performance bounds that can be
reached with driven-dissipative dynamics under
realistic conditions have, however, not yet
been explored.
While quantum reservoir engineering has been
advocated for its robustness, its performance
in a practical setting is compromised as soon
as additional noise sources perturb the steady
state or trap population flowing towards it.
This can be illustrated by examining the
experiment described in Ref.~\cite{lin2013dissipative}.
For a $^{9}\text{Be}^{+}\,\,$ - $^{24}\text{Mg}^{+}\,\,$ - $^{24}\text{Mg}^{+}\,\,$ - $^{9}\text{Be}^{+}\,\,$ chain
occupying the same linear Paul trap,
the two $^{9}\text{Be}^{+}\,\,$ ions
were entangled via their collective motion using hyperfine
electronic ground state levels as logical states.
Entanglement was achieved by applying a combination of
laser and microwave transitions.
This could be done either in a time-continuous
manner or by repeating a fixed sequence of steps,
driving the system into a steady state,
with the majority of population in the targeted,
maximally entangled singlet state.
Desired dissipation was brought into play
by a combination of spin-motion coupling from a
sideband laser, motion dissipation by sympathetically
cooling cotrapped $^{24}\text{Mg}^{+}\,\,$ ions, and a repump laser
which addresses the transition to a rapidly
decaying electronically excited state.
The sideband laser beams also lead to
undesired pumping of the spins via
spontaneous photon scattering (spontaneous emission).
This resulted in population leakage and was
the main source of error in that experiment~\cite{lin2013dissipative}.
The simultaneous presence of both desired
and undesired dissipation channels is rather generic.
To harness the full power of dissipative
entangled state preparation, one would like
to exploit the former while mitigating the latter.
Here, we use quantum optimal control theory
~\cite{glaser2015training} to address this problem.
For the example of preparing two trapped ions in
a maximally entangled state~\cite{lin2013dissipative}, we
ask whether entanglement can be generated faster
and more accurately when judiciously choosing a
few key parameters.
In order to keep in line with the experimental
setup described in Ref.~\cite{lin2013dissipative},
we forego the usual assumption of time-dependent
pulses whose shapes are derived by quantum optimal
control.
Instead, we employ electromagnetic fields with
constant amplitude and use tools from non-linear
optimization to directly determine the best
field strengths, detunings and polarizations.
Our approach allows us not only to determine the
optimal values for these parameters, but also
to identify key factors that ultimately limit
fidelity and speed of entanglement generation.
Based on this insight, we explore an alternative
set of transitions and show that this scheme can
outperform the original one both in terms of fidelity and speed.
The paper is organized as follows.
\Cref{sect:Model}
recalls the mechanism for entanglement generation in the experiment
of Ref.~\cite{lin2013dissipative} and
details the theoretical description of the corresponding
trapped ion system.
Optimization of the transitions used in Ref.~\cite{lin2013dissipative} is discussed in \cref{sect:Optim}.
An alternative set of transitions is introduced in \cref{sect:twosid},
together with the optimization of the corresponding experimental parameters.
We conclude in \Cref{sect:Conclusions}.
\section{Model}
\label{sect:Model}
In this section we consider the system described in
Ref.~\cite{lin2013dissipative},
consisting of a linear Paul trap containing $^{9}\text{Be}^{+}\,\,$ ions and
$^{24}\text{Mg}^{+}\,\,$ ions, which interact mutually through their Coulomb
repulsion and with external electric fields.
A unitary idealization of these interactions
is summarized in the Hamiltonian $H$. The
mechanism giving rise to dissipation in the state
preparation process is spontaneous emission
after excitation of internal electronic
states of the ion by the external laser fields.
The system dynamics is therefore described by
the quantum optical master equation
in Lindblad form (with $\hbar = 1$),
\begin{equation}
\partial_{t} \rho = \mathcal{L}\rho = -i \left[H,\rho\right] + \mathcal{L}_{\mathcal{D}} \rho\,\,\,.
\label{eq:QME}
\end{equation}
We refer to $\mathcal{L}_{\mathcal{D}}$ as
the (Lindblad) dissipator, which is
given by
\begin{align}
\mathcal{L}_{\mathcal{D}} \rho = \sum_{k} \left(
L_{k} \rho L_{k}^{\dagger} - \frac{1}{2} \left\{
L_{k}^{\dagger} L_{k}, \rho \right\} \right)\,\,,
\label{eq:LBpart}
\end{align}
where the sum over $k$ contains individual contributions
due to sympathetic cooling, heating
and photon scattering
occurring during stimulated Raman processes and repumping
into an electronically excited state.
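As a minimal numerical illustration of how the dissipator in \cref{eq:LBpart} acts on a density matrix, the following sketch (plain NumPy, with a placeholder two-level jump operator and rate, not the actual model of this work) evaluates $\mathcal{L}_{\mathcal{D}}\rho$ for a list of jump operators:
\begin{verbatim}
import numpy as np

def dissipator(rho, jump_ops):
    """L_D(rho) = sum_k ( L rho L^+ - 1/2 {L^+ L, rho} )."""
    out = np.zeros_like(rho, dtype=complex)
    for L in jump_ops:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

# Toy example: decay of a two-level system at a placeholder rate gamma.
gamma = 0.1
sigma_minus = np.array([[0.0, 1.0], [0.0, 0.0]])      # |g><e|
rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)  # excited state
print(dissipator(rho, [np.sqrt(gamma) * sigma_minus]))
\end{verbatim}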
\begin{figure*}
\caption{}\label{fig:boulderTransitions}
\end{figure*}
\subsection{State space}
The model Hamiltonian $H$ accounts for the internal structure
of two $^{9}\text{Be}^{+}\,\,$ ions as well as two vibrational modes of the trapped ion chain.
The state space of the considered system consists of the
following tensor product structure,
\begin{align}
(n_{qb1})\otimes (n_{qb2}) \otimes
(n_{\text{$\nu_1$}}) \otimes (n_{\text{$\nu_2$}})\,\,.
\label{eq:Structure}
\end{align}
In \cref{eq:Structure} $n_{qb1}$ and $n_{qb2}$ designate
hyperfine states of the $^{9}\text{Be}^{+}\,\,$ ions,
specified by the quantum numbers $F$ and their projections
$m_{F}$, obtained from coupling the total electronic angular
momentum quantum number $J$ with the nuclear spin quantum number
$I$.
\Cref{fig:boulderTransitions}(a) highlights
the hyperfine states of interest, comprising
$\Ket{\mathord{\downarrow}} \overset{\underset{\mathrm{def}}{}}{=} \Ket{S_{\nicefrac{1}{2}},F=2,m_{F}=2}$
and $\Ket{\mathord{\uparrow}} \overset{\underset{\mathrm{def}}{}}{=} \Ket{S_{\nicefrac{1}{2}},F=1,m_{F}=1}$,
the two hyperfine levels to entangle, as well as
an auxiliary level
$\Ket{a} \overset{\underset{\mathrm{def}}{}}{=} \Ket{S_{\nicefrac{1}{2}},F=2,m_{F}=1}$.
The neighbouring levels
$\Ket{o}\overset{\underset{\mathrm{def}}{}}{=}\Ket{S_{\nicefrac{1}{2}},F=1,m_{F}=0}$
and
$\Ket{t}\overset{\underset{\mathrm{def}}{}}{=}\Ket{S_{\nicefrac{1}{2}},F=2,m_{F}=0}$
are also accounted for in the model, since
these are predominantly populated by inadvertent scattering
processes.
In the following, the only electronically excited
state of interest will be $\Ket{e}\overset{\underset{\mathrm{def}}{}}{=}\Ket{P_{\nicefrac{1}{2}},F'=2,m_{F}'=2}$.
$n_{\text{$\nu_1$}}$ and $n_{\text{$\nu_2$}}$ are vibrational
quantum numbers of two of the four shared
motional modes of the trapped
ionic crystal along its linear axis.
Entanglement generation employs $\text{$\nu_1$}$,
and sideband transitions utilizing
this mode are essential for the
presented schemes.
Unless specifically required, the mode \text{$\nu_2$}, which
is not utilized for entanglement
but is included in the model to account for off-resonant coupling, will
be suppressed notationally for the sake of simplicity.
It is assumed that the trap has an
axis of weakest confinement along which the
four-ion string is aligned and that
the eight radial motional modes can be neglected,
since they are largely decoupled given the
sideband laser configuration described in Ref.
~\cite{lin2013dissipative}.
\Cref{fig:boulderTransitions}(b)
shows three transitions that were driven on a single
$^{9}\text{Be}^{+}\,\,$ ion in Ref.~\cite{lin2013dissipative}.
These belong to the
coherent part of \cref{eq:QME}, described by $H$, and one of them results in
population of the electronically
excited state $\Ket{e}$ with subsequent dissipation
which is modeled by the incoherent part, $\mathcal{L_{D}}\rho$.
After adiabatic elimination, however, the transition to $\Ket{e}$
no longer appears in the
coherent part of \cref{eq:QME},
while the dissipative part
is modified by the result of the adiabatic
elimination to fully account for
the effective decay out of an electronic
ground state hyperfine level instead~\cite{lin2013dissipative}.
This is illustrated in
\cref{fig:boulderTransitions}(c).
\subsection{Original scheme for entanglement preparation}
\label{ssect:originalScheme}
\begin{figure*}
\caption{}\label{fig:origMechanism}
\end{figure*}
As represented in \cref{fig:boulderTransitions},
the dissipative entanglement generation of Ref.~\cite{lin2013dissipative} uses three different types of fields to induce population flow in the state space.
The entanglement mechanism can be understood
by qualitatively tracing the flow of
population from state to state as indicated
in \cref{fig:origMechanism}.
Entangling the two $^{9}\text{Be}^{+}\,\,$ ions
via their joint motion in the trap is made possible by utilizing
sideband transitions driven by Raman lasers. These change the internal
states of the $^{9}\text{Be}^{+}\,\,$ ions whilst simultaneously
exciting or de-exciting the utilized motional mode. In contrast,
carrier transitions driven by a microwave field change the $^{9}\text{Be}^{+}\,\,$
internal states only. Finally, a repump laser excites population to a short-lived electronically excited state. Specifically, in
Ref.~\cite{lin2013dissipative}, a single
sideband transition between
$\Ket{\mathord{\downarrow}}$ and
$\Ket{\mathord{\uparrow}}$,
a carrier transition
between
$\Ket{a}$
and $\Ket{\mathord{\uparrow}}$
and a repump transition between $\Ket{a}$ and
$\Ket{e}$ are used.
Figure \ref{fig:boulderTransitions} indicates
the transitions between the hyperfine levels
of interest for a single $^{9}\text{Be}^{+}\,\,$ ion.
The above transitions can be driven simultaneously
and time-independently for the duration of the experiment or in a step-wise manner~\cite{lin2013dissipative}. Here, we focus on the continuous case, which resulted in a larger error.
Each $^{9}\text{Be}^{+}\,\,$ ion is affected by the driven transitions
independently and no individual addressing is required.
Starting with both $^{9}\text{Be}^{+}\,\,$ ions in an arbitrary state confined
to the hyperfine subspace $\left\{ a,\mathord{\downarrow},\mathord{\uparrow} \right\}$,
in the ideal case, this scheme always leads
to a steady state in which the population
is trapped in the singlet entangled state
between $\Ket{\mathord{\downarrow}}$ and $\Ket{\mathord{\uparrow}}$, $\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}
\overset{\underset{\mathrm{def}}{}}{=} \frac{1}{\sqrt{2}}
\left( \Ket{\mathord{\downarrow}\mathord{\uparrow}} - \Ket{\mathord{\uparrow} \mathord{\downarrow}} \right)$.
In the following, all singlet entangled states are
designated by $\Ket{S_{ij}} \overset{\underset{\mathrm{def}}{}}{=} \frac{1}{\sqrt{2}}
\left( \Ket{i j} - \Ket{j i} \right)$,
whilst the triplet entangled states are designated
by $\Ket{T_{ij}} \overset{\underset{\mathrm{def}}{}}{=} \frac{1}{\sqrt{2}}
\left( \Ket{i j} + \Ket{j i} \right)$,
$\forall i,j \in \left\{ a,\mathord{\downarrow},\mathord{\uparrow} \right\}$.
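For concreteness, these combinations are easily written down numerically; the following sketch (NumPy, with a placeholder three-level basis ordered as $(a,\mathord{\downarrow},\mathord{\uparrow})$, purely illustrative) constructs the target singlet state:
\begin{verbatim}
import numpy as np

# Placeholder single-ion basis vectors for the levels (a, down, up).
a, down, up = np.eye(3)

def singlet(ket_i, ket_j):
    return (np.kron(ket_i, ket_j) - np.kron(ket_j, ket_i)) / np.sqrt(2)

def triplet(ket_i, ket_j):
    return (np.kron(ket_i, ket_j) + np.kron(ket_j, ket_i)) / np.sqrt(2)

S_du = singlet(down, up)        # target state |S_{down,up}>
print(np.linalg.norm(S_du))     # 1.0
\end{verbatim}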
Let us inspect in more detail the flow of
population from state to state
in \cref{fig:origMechanism}.
Starting in $\Ket{\mathord{\downarrow}\mathord{\downarrow} n_{\text{$\nu_1$}}=0}$, for instance,
it is possible to reach the target singlet entangled state
$\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}$ by two sideband transitions
leading to $\Ket{\mathord{\uparrow}\mathord{\uparrow} n_{\text{$\nu_1$}}}$, followed by a
carrier transition into a combination of the
$\Ket{a\mathord{\uparrow} n_{\text{$\nu_1$}}}$ and $\Ket{\mathord{\uparrow} a n_{\text{$\nu_1$}}}$ states.
Population in the auxiliary state is driven by the repump laser into the electronically excited state from where it
subsequently decays back into the electronic
ground state hyperfine subspace.
The process of electronic excitation and decay happens
sufficiently fast with respect to the other transitions,
that it can be regarded as `effective decay' directly
out of $\Ket{a}$, as depicted in
\cref{fig:boulderTransitions}(b) and (c).
This decay drives the system into a combination
of $\Ket{\mathord{\uparrow} \mathord{\uparrow} n_{\text{$\nu_1$}}}$, the triplet entangled state
$\Ket{T_{\mathord{\downarrow}\mathord{\uparrow}}}\otimes\Ket{n_{\text{$\nu_1$}}}$,
and the target state $\Ket{S_{\mathord{\downarrow} \mathord{\uparrow}}}\otimes\Ket{n_{\text{$\nu_1$}}}$.
At any stage, sympathetic cooling can counteract the
excitations of the vibrational mode in the trap which are
caused by sideband transitions and heating.
Sympathetic cooling is induced by a different set of sideband lasers driving transitions only between internal states of the $^{24}\text{Mg}^{+}\,\,$ ions which share common motional modes with the $^{9}\text{Be}^{+}\,\,$ ions.
The carrier transition between $\Ket{a}$ and
$\Ket{\mathord{\uparrow}}$ leads out of the target state
$\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}$ into
$\Ket{S_{a\mathord{\downarrow}}}$.
This particular transition is highlighted
specifically in \cref{fig:origMechanism}
by a dotted black double-headed arrow.
By ensuring that the two-photon Rabi frequency
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p}$
of the stimulated Raman sideband transition
between $\Ket{\mathord{\downarrow}}$ and $\Ket{\mathord{\uparrow}}$ is
much larger than the carrier Rabi frequency
$\Omega_{\text{car},a,\mathord{\uparrow}}$, the latter
transition can effectively be suppressed.
\Cref{fig:origMechanism} also highlights
the state $\Ket{\mathord{\uparrow}\mathord{\uparrow} 0}$ with a
thick, dotted, black border, since
the effective decay, proportional to
the square of the repump laser Rabi
frequency $\Omega_{\text{car},a,e}$
must be made sufficiently weak relative
to $\Omega_{\text{car},a,\mathord{\uparrow}}$, in order
to prevent the trapping of population
in $\Ket{\mathord{\uparrow} \mathord{\uparrow} 0}$.
Consequently, a hierarchy of rates is
established in which the maximum
attainable two-photon Rabi frequency
of the stimulated Raman transition
determines the maximal carrier
Rabi frequency between $\Ket{a}$ and
$\Ket{\mathord{\uparrow}}$, which in turn
determines the maximal repump Rabi
frequency between $\Ket{a}$ and $\Ket{e}$.
\subsection{Hamiltonian}
\label{ssect:Ham}
In the rotating wave approximation
and interaction picture,
the total system Hamiltonian is comprised of
the driven hyperfine transitions
\begin{align}
H = \sum_{\text{type},i,f} H_{\text{type},i,f}
\,\,,
\label{eq:Htot}
\end{align}
where the sum runs over specific triples
$\left(\text{type}, i,f \right)$, designating a
transition of type `red' or `blue' sideband
or `carrier', between the initial
and final hyperfine states $\Ket{i}$
and $\Ket{f}$.
Transitions of the carrier type
between the ground state hyperfine levels
are driven by microwave fields
with a Hamiltonian of the form
\begin{align}
H_{\text{car},i,f} = &\Omega_{\text{car},i,f} \big( \KB{f}{i}
\otimes\mathbbm{1}_{\text{qb2}}\otimes\mathbbm{1}_{\text{$\nu_1$}}
\otimes\mathbbm{1}_{\text{$\nu_2$}}\nonumber \\
&+ \mathbbm{1}_{\text{qb1}}\otimes\KB{f}{i}\otimes
\mathbbm{1}_{\text{$\nu_1$}}\otimes
\mathbbm{1}_{\text{$\nu_2$}} \big)e^{-i\Delta_{\text{car},i,f}t} + \text{h.c.}\,\,.
\label{eq:Carriertrans}
\end{align}
Above, $\Omega_{\text{car},i,f}$ denotes the Rabi frequency and $\Delta_{\text{car},i,f}$ a small
detuning between the applied field and the transition energy
between $\Ket{i}$ and $\Ket{f}$.
Each identity operator $\mathbbm{1}_{j}$, with
$j \in \{\text{qb1, qb2, } \text{$\nu_1$},\,\text{$\nu_2$} \}$, is labelled
according to the subspace to which it corresponds.
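To make the tensor-product structure of \cref{eq:Carriertrans} explicit, a minimal sketch (NumPy; the level indices, dimensions and Rabi frequency below are placeholders and the time-dependent phase $e^{-i\Delta t}$ is omitted for brevity) could assemble one carrier coupling term as follows:
\begin{verbatim}
import numpy as np

def ketbra(dim, f, i):
    """|f><i| on a dim-dimensional space."""
    op = np.zeros((dim, dim), dtype=complex)
    op[f, i] = 1.0
    return op

d_hf, n_max = 3, 4               # placeholder: 3 hyperfine levels, 4 Fock states per mode
I_hf, I_mode = np.eye(d_hf), np.eye(n_max)
Omega = 2 * np.pi * 316.0        # illustrative carrier Rabi frequency (Hz)

fi = ketbra(d_hf, 2, 1)          # |f><i| on one qubit (placeholder indices)
H_car = Omega * (np.kron(np.kron(np.kron(fi, I_hf), I_mode), I_mode)
                 + np.kron(np.kron(np.kron(I_hf, fi), I_mode), I_mode))
H_car = H_car + H_car.conj().T   # add the Hermitian conjugate
print(H_car.shape)               # (3*3*4*4, 3*3*4*4)
\end{verbatim}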
A repump laser is required to drive
transitions
between ground and electronically excited
hyperfine states.
These transitions therefore involve Hamiltonians of the form of \cref{eq:Carriertrans}, where
$\Ket{i}$ is a hyperfine ground state
level and $\Ket{f}= \Ket{e}$ is the addressed
electronically excited hyperfine level.
Since population excited by this repumper decays very rapidly
into the hyperfine ground states, adiabatically eliminating
the excited state is well justified.
Ideally, a blue sideband transition between two hyperfine levels
$\Ket{i}$ and $\Ket{f}$, utilizing the motional mode
\text{$\nu_1$}, is represented by
\begin{align}
H_{\text{blue},i,f} = &\Omega_{\text{blue},i,f}
\big( \KB{f}{i}\otimes \mathbbm{1}_{\text{qb2}}
\otimes b^{+}\otimes\mathbbm{1}_{\text{$\nu_2$}} \nonumber \\
&+ \mathbbm{1}_{\text{qb1}}\otimes\KB{f}{i}
\otimes b^{+}\otimes \mathbbm{1}_{\text{$\nu_2$}} \big)e^{-i\Delta_{\text{blue},i,f}t} + \text{h.c.}\,\,,
\label{eq:SBtrans}
\end{align}
where $\Omega_{\text{blue},i,f}$
is the sideband Rabi frequency
and $\Delta_{\text{blue},i,f}$
a small detuning from the
energy difference between $\Ket{i}$ and $\Ket{f}$
plus the energy of one quantum of \text{$\nu_1$}.
$b^{+}$ and $b$ denote the bosonic creation
and annihilation operators which respectively excite and
de-excite the harmonic mode \text{$\nu_1$}.
Analogously, the Hamiltonian of a red sideband
transition takes the form of
\cref{eq:SBtrans} but with the
annihilation operator $b$
replacing the creation operator $b^{+}$
and $\Delta_{\text{blue},i,f}$ replaced by
$\Delta_{\text{red},i,f}$.
In the specific case of a stimulated Raman
sideband transition,
$\Omega_{\text{red/blue},i,f}$ in \cref{eq:SBtrans}
becomes $\Omega_{\text{red/blue},i,f}^{2p}$, a
two-photon Rabi frequency of a red/blue
sideband transition between $\Ket{i}$ and
$\Ket{f}$, given by
\begin{align}
\Omega_{\text{red/blue},i,f}^{2p} = \eta_{\text{$\nu_1$}} \frac{\mu^{2} E_{r}E_{b}}{4 }
\sum_{k} \frac{\Bra{f} \bm{d}\cdot \bm{\varepsilon}_{r}\Ket{k}
\Bra{k} \bm{d}\cdot \bm{\varepsilon}_{b}\Ket{i}}
{\Delta_{k}\mu^{2}} \,\,.
\label{eq:SidebandRabifreq}
\end{align}
In the following we assume Lamb-Dicke parameters of
$\eta_{\text{$\nu_1$}}=0.180$ and $\eta_{\text{$\nu_2$}}=0.155$
for the utilized ($\text{$\nu_1$}$) and off-resonant
($\text{$\nu_2$}$) motional modes, respectively \cite{lin2013dissipative}.
Above, $E_{r}$ and $E_{b}$ are the field strengths of the
lower (red) and higher (blue) frequency Raman laser beams
which have
polarizations $\bm{\varepsilon}_{r}$ and
$\bm{\varepsilon}_{b}$, expressed in spherical components as
$\bm{\varepsilon}_{r}=(r_{-},r_{0},r_{+})$
and $\bm{\varepsilon}_{b}=(b_{-},b_{0},b_{+})$,
respectively.
$\bm{d}$ is the dipole operator for the $^{9}\text{Be}^{+}\,\,$ ions
(also expressed in the spherical basis)
and the sum runs over all hyperfine levels $\Ket{k}$
in the electronically excited states $P_{\nicefrac{1}{2}}$
and $P_{\nicefrac{3}{2}}$.
The laser frequencies are shifted, such that
the ground state to excited state transitions
are detuned by $\Delta_{e}$
and $\Delta_{e} + f_{P}$ below the
$S_{\nicefrac{1}{2}} \leftrightarrow P_{\nicefrac{1}{2}}$
and $S_{\nicefrac{1}{2}}\leftrightarrow P_{\nicefrac{3}{2}}$
resonances, respectively.
$f_{P} \approx \unit[197.2]{GHz}$ is the fine structure splitting between
$P_{\nicefrac{1}{2}}$ and $P_{\nicefrac{3}{2}}$.
For the detuning between $\Ket{i}$ and an individual excited state
hyperfine level $\Ket{k}$, the hyperfine splitting
is neglected such that
\[
\Delta_{k} \approx
\begin{cases}
\Delta_{e}, &
\text{ if } \Ket{k} \in P_{\nicefrac{1}{2}} \\
\Delta_{e}+f_{P}, & \text{ if } \Ket{k} \in P_{\nicefrac{3}{2}}\,\,.
\end{cases}
\]
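A small sketch of how \cref{eq:SidebandRabifreq}, together with this two-branch approximation for $\Delta_{k}$, could be evaluated numerically is given below (the transition matrix elements and numbers are placeholders, not the actual $^{9}\text{Be}^{+}$ values):
\begin{verbatim}
import numpy as np

def two_photon_rabi(eta, mu, E_r, E_b, elems_r, elems_b, in_P12, Delta_e, f_P):
    """Sketch of the two-photon Rabi frequency:
    eta * mu^2 E_r E_b / 4 * sum_k <f|d.eps_r|k><k|d.eps_b|i> / (Delta_k mu^2),
    with Delta_k = Delta_e for k in P_1/2 and Delta_e + f_P for k in P_3/2."""
    total = 0.0
    for dr, db, p12 in zip(elems_r, elems_b, in_P12):
        Delta_k = Delta_e if p12 else Delta_e + f_P
        total += dr * db / (Delta_k * mu**2)
    return eta * mu**2 * E_r * E_b / 4.0 * total

# Purely illustrative numbers:
print(two_photon_rabi(eta=0.18, mu=1.0, E_r=7520.0, E_b=7520.0,
                      elems_r=[0.3, -0.1], elems_b=[0.2, 0.4],
                      in_P12=[True, False],
                      Delta_e=2*np.pi*662e9, f_P=2*np.pi*197.2e9))
\end{verbatim}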
\Cref{eq:SidebandRabifreq} utilizes
a characteristic stretched state
transition matrix element,
$\mu \overset{\underset{\mathrm{def}}{}}{=} \Bra{P_{\nicefrac{3}{2}}, F=3, m_{F}=3}
d_{+} \Ket{S_{\nicefrac{1}{2}}, F=2,m_{F}=2}$
to properly scale
a given reduced matrix element,
$\Bra{f}\bm{d}\cdot\varepsilon\Ket{i}/\mu$,
with $d_{+}$, the right circular
component of the dipole operator.
The Wigner-Eckart theorem
\cite{sobel'man1967introduction}
and Breit-Rabi formula
\cite{woodgate1970elementary}
can then be used to
express an arbitrary transition matrix element
$\Bra{f}\bm{d}\cdot\bm{\varepsilon}\Ket{i}$
between two hyperfine levels $\Ket{i}$ and $\Ket{f}$.
To accurately model the system dynamics,
it is necessary to account for the undesired
off-resonant coupling of a given sideband transition
described by \cref{eq:SBtrans} to an additional mode
\text{$\nu_2$}, given by
\begin{align}
H_{\text{blue},i,f}^{\text{$\nu_2$}} =
& \frac{\eta_{\text{$\nu_2$}}}{\eta_{\text{$\nu_1$}}} \Omega_{\text{blue},i,f}^{2p}
\big( \KB{f}{i}\otimes \mathbbm{1}_{\text{qb2}}
\otimes \mathbbm{1}_{\text{$\nu_1$}} \otimes c^{+} \nonumber \\
&+ \mathbbm{1}_{\text{qb1}} \otimes\KB{f}{i}\otimes
\mathbbm{1}_{\text{$\nu_1$}} \otimes c^{+} \big) \nonumber \\
& \quad \quad \times e^{-i(\delta -\Delta_{\text{blue},i,f})t} + \text{h.c.}\,\,,
\label{eq:SBORtrans}
\end{align}
in the case of a blue sideband transition.
In \cref{eq:SBORtrans}, $\delta$ is the detuning
between the utilized mode $\text{$\nu_1$}$ and
$\text{$\nu_2$}$, which couples off-resonantly.
In the case of a red sideband transition,
the off-resonant coupling takes the form
of \cref{eq:SBORtrans} under interchange
of the annihilation and creation operators
of the harmonic oscillator describing
the \text{$\nu_2$}~motional mode,
$c$ and $c^{+}$, and replacement of
$\Delta_{\text{blue},i,f}$ by
$\Delta_{\text{red},i,f}$, respectively.
\subsection{Lindblad operators}
Incoherent processes taking place alongside
the driven transitions appear in the dissipative
part $\mathcal{L_{D}}$ in \cref{eq:QME}, which is comprised
of individual contributions modelled by the Lindblad (jump)
operators $L_{k}$ in \cref{eq:LBpart}. An effective operator formalism~\cite{gardiner2004quantum} allows one to adiabatically
eliminate the hyperfine excited state addressed by the
repump laser.
It leads to Lindblad operators of the form~\cite{reiter2012effective}
\begin{align}
L_{\text{rep},i,f}^{(1)} &= \sqrt{\gamma_{if}^{\text{eff}}} \KB{f}{i} \otimes
\mathbbm{1}_{\text{qb2}} \otimes \mathbbm{1}_{\text{$\nu_1$}}
\otimes \mathbbm{1}_{\text{$\nu_2$}}
\label{eq:GammaRepFirst}
\\
L_{\text{rep},i,f}^{(2)} &= \sqrt{\gamma_{if}^{\text{eff}}}
\mathbbm{1}_{\text{qb1}} \otimes \KB{f}{i}
\otimes \mathbbm{1}_{\text{$\nu_1$}} \otimes \mathbbm{1}_{\text{$\nu_2$}} \,\,,
\label{eq:GammaRepSecond}
\end{align}
with effective rates
\begin{align}
\gamma_{if}^{\text{eff}} = \gamma_{ef} \frac{4\Omega_{\text{car},i,e}^{2}}{\gamma^{2}}\,\,,
\label{eq:EffectiveRates}
\end{align}
where $\Ket{e}$ is the intermediate, rapidly decaying,
electronically excited state,
$\Omega_{\text{car},i,e}$ the repump
Rabi
frequency,
$\gamma_{ef}$ the decay rate from $\Ket{e}$ into the
hyperfine ground state $\Ket{f}$ and
$\gamma = \sum_{f'} \gamma_{ef'}$
the total decay rate out of $\Ket{e}$ into a subspace
of hyperfine ground states.
Similarly to \cref{eq:GammaRepFirst,eq:GammaRepSecond},
leaking between ground-state hyperfine levels
due to the stimulated Raman sideband transitions
acts on both beryllium ions according to
\begin{align}
L_{\text{sid},i,f}^{(1)} &= \sqrt{\Gamma_{if}} \KB{f}{i} \otimes
\mathbbm{1}_{\text{qb2}} \otimes
\mathbbm{1}_{\text{$\nu_1$}} \otimes \mathbbm{1}_{\text{$\nu_2$}} \\
L_{\text{sid},i,f}^{(2)} &= \sqrt{\Gamma_{if}} \mathbbm{1}_{\text{qb1}}
\otimes \KB{f}{i} \otimes \mathbbm{1}_{\text{$\nu_1$}}
\otimes \mathbbm{1}_{\text{$\nu_2$}}\,.
\label{eq:StimRamanPhotonScattering}
\end{align}
The scattering rate $\Gamma_{if}$
between an initial hyperfine ground state
$\Ket{i}$ and a final hyperfine ground state
$\Ket{f}$, due to a single laser beam is
given by the Kramers-Heisenberg formula
\begin{align}
\Gamma_{i f} = \Gamma_{i \rightarrow f} = \frac{\left|E\right|^{2}\mu^{2}}{4} \gamma \left| \sum_{k}
\frac{a_{i f}^{(k)}}{\Delta_{k}} \right|^{2}\,\,,
\label{eq:Kramers}
\end{align}
where
\begin{align}
a_{i f}^{(k)} = a_{i \rightarrow f}^{(k)} = \sum_{q\in \left\{+,0,-\right\}} \frac{\Bra{f} d_{q} \Ket{k}}{\mu}
\frac{\Bra{k} \bm{d} \cdot \bm{\varepsilon} \Ket{i}}{\mu}
\label{eq:aif}
\end{align}
is the two-photon transition amplitude between $\Ket{i}$
and $\Ket{f}$.
As in \cref{eq:SidebandRabifreq}, $k$ runs over
all states $\Ket{k}$ belonging to
the $^{9}\text{Be}^{+}\,\,$ ion $P_{\nicefrac{1}{2}}$ and
$P_{\nicefrac{3}{2}}$ manifolds.
Again, it suffices to approximate
the $\Delta_{k}$ of
$k \in P_{\nicefrac{1}{2}}, P_{\nicefrac{3}{2}}$
as $\Delta_{e}$
and $\Delta_{e}+f_{P}$, respectively.
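The scattering rates can be assembled in the same spirit; the following sketch mirrors the Rabi-frequency sketch above and evaluates \cref{eq:Kramers} with placeholder transition amplitudes and an illustrative decay rate:
\begin{verbatim}
import numpy as np

def scattering_rate(E, mu, gamma, a_if, in_P12, Delta_e, f_P):
    """Sketch of Gamma_if = |E|^2 mu^2 / 4 * gamma * |sum_k a_if^(k)/Delta_k|^2,
    with the two-branch approximation for Delta_k."""
    amp = sum(a / (Delta_e if p12 else Delta_e + f_P)
              for a, p12 in zip(a_if, in_P12))
    return np.abs(E)**2 * mu**2 / 4.0 * gamma * np.abs(amp)**2

# Illustrative numbers only (not the 9Be+ amplitudes):
print(scattering_rate(E=7520.0, mu=1.0, gamma=2*np.pi*18e6,
                      a_if=[0.05, -0.02], in_P12=[True, False],
                      Delta_e=2*np.pi*662e9, f_P=2*np.pi*197.2e9))
\end{verbatim}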
Rayleigh scattering is modelled by a Pauli $\sigma_{z}$
matrix between pairs of levels.
Acting at the rate
\begin{align}
\phi_{i f} = \frac{\left|E\right|^{2}\mu^{2}}{4} \gamma \left| \sum_{k}
\left( \frac{a_{i i}^{(k)}}{\Delta_{k}}
-\frac{a_{f f}^{(k)}}{\Delta_{k}} \right) \right|^{2}\,\,,
\label{eq:RayleighKramers}
\end{align}
Rayleigh scattering is only of concern between
the $\Ket{\mathord{\downarrow}}$ and $\Ket{\mathord{\uparrow}}$ levels
and in most cases negligibly small.
Sympathetic cooling is achieved by using stimulated Raman
laser cooling and
can be made to affect either or
both of the considered
motional modes according to
\begin{align}
L_{\text{cool},\text{$\nu_1$}} &= \sqrt{\kappa_{c,\text{$\nu_1$}}}
\mathbbm{1}_{\text{qb1}}\otimes\mathbbm{1}_{\text{qb2}}
\otimes b\otimes\mathbbm{1}_{\text{$\nu_2$}} \\
L_{\text{cool},\text{$\nu_2$}} &= \sqrt{\kappa_{c,\text{$\nu_2$}}}
\mathbbm{1}_{\text{qb1}}\otimes\mathbbm{1}_{\text{qb2}}
\otimes\mathbbm{1}_{\text{$\nu_1$}}\otimes c
\label{eq:SympCool}\,\,,
\end{align}
where the cooling rates $\kappa_{c,\text{$\nu_1$}}$
and $\kappa_{c,\text{$\nu_2$}}$
are governed by the field strengths of the repump and
stimulated Raman lasers acting on the magnesium ions.
Heating acts on all motional modes.
It is caused by spontaneous emission occurring during the magnesium
sideband Raman transitions, as well as photon recoil
from spontaneous emission and also the anomalous heating
of the ion trap.
The total heating can be modelled by
\begin{align}
L_{\text{heat},\text{$\nu_1$}} &= \sqrt{\kappa_{h,\text{$\nu_1$}}}
\mathbbm{1}_{\text{qb1}}\otimes\mathbbm{1}_{\text{qb2}}\otimes
b^{\dagger}\otimes\mathbbm{1}_{\text{$\nu_2$}} \\
L_{\text{heat},\text{$\nu_2$}} &= \sqrt{\kappa_{h,\text{$\nu_2$}}}
\mathbbm{1}_{\text{qb1}}\otimes\mathbbm{1}_{\text{qb2}}\otimes
\mathbbm{1}_{\text{$\nu_1$}} \otimes c^{\dagger}\,\,,
\label{eq:Heating}
\end{align}
for a set of given heating rates $\kappa_{h,\text{$\nu_1$}}$
and $\kappa_{h,\text{$\nu_2$}}$.
\section{Optimizing the original scheme}
\label{sect:Optim}
The goal of optimization
is to maximize
the
population in the target state
$\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}$.
To this end, the final time $T$ is defined as the
time at which the peak population in the
target state is reached and
all driving fields
can be turned off.
The target state population at final time is
defined as the fidelity $F$
and correspondingly the error
as
$\epsilon \overset{\underset{\mathrm{def}}{}}{=} 1 - F$.
The peak population at the final time is an appropriate
quantity to observe, since the stability
of the ionic hyperfine ground states causes
the system to remain in its entangled state
for a long time after all driving fields
have been turned off.
In the following, the system degrees of freedom
available for control are introduced and categorised
into two collections
in preparation for the optimization scheme discussed
below.
In contrast to a straightforward parameter optimization
of all degrees of freedom, the specialised
optimization scheme presented here is
less susceptible to running into local minima
and demonstrates reliable and
fast convergence.
\subsection{Optimization parameters}
\label{ssect:OptPar}
As previously discussed, the limitations
of the original scheme~\cite{lin2013dissipative} are fundamentally
linked to the physical process of the stimulated Raman
sideband transition.
The two-photon Rabi frequency
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p}$
associated with this transition
should be made as large as possible to drive the
system towards the desired target state whilst ensuring
that the unfavourable transition between $\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}$
and $\Ket{S_{a\mathord{\downarrow}}}$ is suppressed.
Consequently, the carrier transition Rabi
rate $\Omega_{\text{car},a,\mathord{\uparrow}}$ and in
turn the repump transition Rabi rate
governing the effective decay out of $\Ket{a}$
are limited, bottlenecking the flow
of population into the target state.
\Cref{eq:SidebandRabifreq,eq:Kramers,eq:aif}
show that merely increasing
the field strengths of the sideband lasers
has the adverse side effect of also increasing
the chance of photon scattering and therefore
the rates of leaking between hyperfine
ground states.
As such, a safe way of increasing the
field strength of the sideband lasers
is to compensate by increasing the
detuning $\Delta_{e}$
from the excited state manifold,
since the two-photon Rabi
frequency scales inversely with the detuning whilst
the scattering rates between hyperfine
states scale with the square of the inverse detuning.
The field strengths required
to significantly increase the
two-photon Rabi frequency
whilst minimising the associated
scattering rates
are, however, beyond current experimental capabilities
\cite{ozeri2007errors}.
A third option is given by the polarization
of the two stimulated Raman sideband laser beams
$\bm{\varepsilon}_{r}$ and $\bm{\varepsilon}_{b}$,
which have a great impact on both $\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p}$ and $\left\{ \Gamma_{if} \right\}$.
The tunable parameters $E_{r}$ and $E_{b}$,
$\bm{\varepsilon}_{r}$ and $\bm{\varepsilon}_{b}$
and $\Delta_{e}$,
appearing in
\Cref{eq:SidebandRabifreq,eq:Kramers,eq:aif}
constitute a first set of parameters defined as
\begin{align}
\mathcal{P}_{\text{inner}} \overset{\underset{\mathrm{def}}{}}{=} \left\{ E_{r},E_{b},\bm{\varepsilon}_{r},
\bm{\varepsilon}_{b}, \Delta_{e} \right\}
\label{eq:setOne}\,\,.
\end{align}
These are directly associated with the stimulated
Raman sideband
transition.
The two-photon Rabi frequency
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p}$
scales with the product of field strengths
$E_{r}E_{b}$, whilst the scattering
rates due to each laser beam
scale with $\left|E_{r/b}\right|^{2}$, the magnitude of the field
strength squared.
The polarization is split into its three
spherical components,
$\bm{\varepsilon} = (\varepsilon_{-}, \varepsilon_{0}, \varepsilon_{+})$
where $\varepsilon_{i} \in \left[ -1,1 \right],\,\,
\forall i \in \left\{ -,0,+ \right\}$ and
with
\begin{align}
\left| \varepsilon_{-} \right|^{2} +
\left| \varepsilon_{0} \right|^{2} +
\left| \varepsilon_{+} \right|^{2} = 1\,\,.
\label{eq:polarization}
\end{align}
Due to the normalisation of the spherical
components, each polarization
possesses two degrees of freedom
which can be represented as
the azimuthal and polar angles on the
unit sphere.
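One convenient (purely illustrative) way to expose these two degrees of freedom is to parameterize real spherical components by a polar and an azimuthal angle, which automatically enforces \cref{eq:polarization}:
\begin{verbatim}
import numpy as np

def polarization_from_angles(theta, phi):
    """Map (theta, phi) on the unit sphere to real spherical components
    (eps_minus, eps_0, eps_plus) with |eps_-|^2 + |eps_0|^2 + |eps_+|^2 = 1."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.cos(theta),
                     np.sin(theta) * np.sin(phi)])

eps = polarization_from_angles(0.7, 1.9)    # illustrative angles
print(eps, np.sum(np.abs(eps)**2))          # components and their norm (= 1)
\end{verbatim}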
A given configuration
of $\mathcal{P}_{\text{inner}}$ fully determines the resulting
two-photon Rabi frequency
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p}$
and all leakage rates $\Gamma_{if}$
between hyperfine states.
These parameters are deliberately
regarded separately from a
second set of parameters,
\begin{align}
\mathcal{P}_{\text{outer}} \overset{\underset{\mathrm{def}}{}}{=}
\left\{
\Omega_{\text{car},a,\mathord{\uparrow}},
\Omega_{\text{car},a,e},
\Delta_{\text{car},a,\mathord{\uparrow}},
\Delta_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}},
\alpha
\right\}\,,
\label{eq:setTwo}
\end{align}
consisting of the carrier Rabi
frequencies
and
detunings for both ground state
transitions
and a balance parameter $\alpha$, which
shall become important during the optimization.
The carrier Rabi frequencies are directly
determined by the applied field strengths and
can be tuned over broad ranges.
The detunings $\Delta_{\text{car},a,\mathord{\uparrow}}$
and $\Delta_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}$ should
be kept small to prevent off-resonant coupling
to additional motional modes.
\subsection{Optimization algorithm}
\begin{figure*}
\caption{}\label{fig:algo}
\end{figure*}
Our optimization algorithm, schematically depicted in \cref{fig:algo},
takes the approach of
optimizing the sets
introduced above in a two-step process.
Conceptually, the
inner optimization over the first set
of parameters $\mathcal{P}_{\text{inner}}$ incorporates the
dynamics indirectly and is encapsulated by
an outer optimization over the second
set of parameters $\mathcal{P}_{\text{outer}}$, maximizing the
actual fidelity $F$.
This strategy is motivated by the fact that
determining
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p} =
\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p}
(E_{r},E_{b},\bm{\varepsilon}_{r},\bm{\varepsilon}_{b},\Delta_{e})$
and $\left\{ \Gamma_{if} = \Gamma_{if}
(E_{r},E_{b},\bm{\varepsilon}_{r},\bm{\varepsilon}_{b},\Delta_{e})\right\}$
does not require explicit knowledge of the dynamics and is
therefore computationally inexpensive.
The target functional of the inner step of the
optimization depends
on the field strengths $E_{r}$ and $E_{b}$,
polarizations $\bm{\varepsilon}_{r}$ and
$\bm{\varepsilon}_{b}$ and excited state
detuning $\Delta_{e}$ and is defined as
\begin{align}
J_{\text{inner}}[E_{r},E_{b},\bm{\varepsilon}_{r},\bm{\varepsilon}_{b},\Delta_{e}] = \sum_{if} c_{if}\Gamma_{if} - \alpha \Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p}\,\,.
\label{eq:PolFunct}
\end{align}
Here, $\alpha$ is a balance parameter
which weighs the relative importance of maximizing
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p}$
versus minimising the sum $\sum_{if} c_{if} \Gamma_{if}$,
for a given set of weights $\left\{ c_{if} \right\}$.
If the set of weights $\left\{ c_{if} \right\}$ and $\alpha$ are fixed,
the inner optimization can calculate
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p}$
and $\left\{ \Gamma_{if} \right\}$ in terms
of $\mathcal{P}_{\text{inner}}$, which are passed back to the
outer part of the optimization, once $J_{\text{inner}}$
is minimal.
\begin{figure}
\caption{}\label{fig:weights}
\end{figure}
The optimization of the set of parameters $\mathcal{P}_{\text{inner}}$
requires a measurement of the effect
a change in each scattering rate $\Gamma_{if}$ has
on $F$, the overall fidelity of the dynamics.
This runs contrary to the usual practice of
minimising the total scattering rate $\sum_{if} \Gamma_{if}$
between all pairs of ground state hyperfine levels.
Individually weighting each $\Gamma_{if}$
comes as a consequence of the
observation that the leaking between each pair
of hyperfine ground states
affects the reached fidelity differently.
Most notably, transitions leading out
of the steady state $\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}$
and transitions leading out of the
hyperfine subspace $\left\{ a,\mathord{\downarrow},\mathord{\uparrow} \right\}$
into the neighbouring states $\left\{ o,t \right\}$
have the largest negative effect on
the fidelity.
Taking into account each individual leaking
rate therefore offers the possibility
of strongly suppressing certain detrimental
$\Gamma_{if}$ by carefully tuning the polarization.
We encode the degree to which a certain $\Gamma_{if}$
affects the fidelity by running several simulations
where each individual rate
$\Gamma_{if}$ is artificially boosted
by a factor of $10$, whilst keeping
all other rates fixed, resulting in a set of
fidelities $\left\{F_{if}\right\}$.
Observing the difference $F-F_{if}$,
between boosted and unaltered dynamics
leads to a set of weights,
$\left\{ c_{if}\overset{\underset{\mathrm{def}}{}}{=} 1000\left( F-F_{if} \right) \right\}$.
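A compact sketch of this weight-update rule is given below; \texttt{simulate\_fidelity} is a hypothetical placeholder for the full master-equation propagation and merely returns a dummy number here:
\begin{verbatim}
def simulate_fidelity(rates):
    """Hypothetical placeholder for the full master-equation propagation;
    it should return the peak population of the target singlet state."""
    return 0.88 - 1e-3 * sum(rates.values())      # dummy stand-in value

def update_weights(rates, boost=10.0, scale=1000.0):
    """Boost each scattering rate Gamma_if by `boost`, re-simulate, and set
    c_if = scale * (F - F_if), following the recipe described in the text."""
    F = simulate_fidelity(rates)
    weights = {}
    for key in rates:
        boosted = dict(rates)
        boosted[key] = boost * rates[key]
        weights[key] = scale * (F - simulate_fidelity(boosted))
    return weights

rates = {("up", "down"): 2.0, ("down", "o"): 0.5}  # illustrative rates (1/s)
print(update_weights(rates))
\end{verbatim}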
\Cref{fig:weights} shows two different
sets of weights at the beginning of the optimization and after a few updates.
As discussed in Ref.~\cite{lin2013dissipative},
the biggest scattering error is due to
the qubit transition between $\Ket{\mathord{\uparrow}}$
and $\Ket{\mathord{\downarrow}}$.
The corresponding rate $\Gamma_{\mathord{\uparrow}\mathord{\downarrow}}$,
along with the rates of transitions
leading out of the hyperfine subspace
into the neighbouring $\Ket{o}$ and $\Ket{t}$
levels, received the
largest weights $c_{if}$ for the
duration of the optimization.
A given set of weights can be used
to optimize $\mathcal{P}_{\text{inner}}$, leading to
the best possible
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p}$
and $\left\{ \Gamma_{if} \right\}$
with which to perform the dynamics.
The optimization of the second set
of parameters, $\mathcal{P}_{\text{outer}}$, directly targets
the fidelity $F$ of the dynamics,
\begin{align}
J_{\text{outer}}\left[ \Omega_{\text{car},a,\mathord{\uparrow}}, \Omega_{\text{car},a,e},\Delta_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}, \Delta_{\text{car},a,\mathord{\uparrow}},\alpha\right] \overset{\underset{\mathrm{def}}{}}{=} 1 - F = \epsilon
\label{eq:DynFunct}\,.
\end{align}
For each iteration of the outer optimization,
the inner optimization
over \cref{eq:PolFunct} leading to optimal
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p}$ and
$\left\{ \Gamma_{if} \right\}$ is performed
using the set of weights $\left\{ c_{if} \right\}$
generated during the previous iteration
(for the first iteration $c_{if}=1,~\forall i,f$).
After the inner optimization,
a new set of weights $\left\{ c_{if} \right\}$
is generated for the next iteration of
the outer optimization, as illustrated in \cref{fig:algo}.
This two-step optimization is
easily generalised for arbitrary
combinations of transitions,
including the possibility for
multiple sideband transitions
between different ground state hyperfine
levels.
Optimization of multiple sideband
transitions follows the rule
that the $j^{\text{th}}$
sideband transition
has its own set of polarizations
$\bm{\varepsilon}_{r}^{(j)}$ and
$\bm{\varepsilon}_{b}^{(j)}$, field
strengths $E_{r}^{(j)}$ and $E_{b}^{(j)}$,
balance parameter $\alpha^{(j)}$ and
excited state detuning $\Delta_{e}^{(j)}$
but
each contributes towards a set of
total scattering rates
$\left\{ \Gamma_{if} = \sum_{j} \Gamma_{if}^{(j)} \right\}$.
Furthermore, all transitions except for
the repump transition have a detuning
$\Delta_{\text{type},i,f}$
and all carrier transitions have a
Rabi frequency $\Omega_{\text{car},i,f}$ to be optimized
directly, along
with the set of balance parameters
$\left\{ \alpha^{(j)} \right\}$, in the
outer optimization.
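Schematically, the nested structure amounts to two stacked minimizations. The sketch below uses SciPy's Nelder--Mead simplex as a stand-in for the Subplex routine used for the actual optimizations (see below), and both cost functions are dummy placeholders rather than the physical model:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def j_inner(p_inner, weights, alpha):
    """Placeholder inner cost: sum_if c_if Gamma_if - alpha * Omega_2p,
    with Gamma_if and Omega_2p replaced by dummy functions of p_inner."""
    gammas = np.abs(p_inner[:len(weights)])
    omega = 1.0 + np.abs(p_inner).sum()
    return float(np.dot(weights, gammas) - alpha * omega)

def j_outer(p_outer, weights):
    """Placeholder outer cost 1 - F: run the inner optimization for the
    current balance parameter alpha, then 'simulate' the dynamics."""
    alpha = p_outer[-1]
    inner = minimize(j_inner, x0=np.ones(4), args=(weights, alpha),
                     method="Nelder-Mead")
    return 1.0 - 1.0 / (1.0 + np.abs(inner.fun))   # dummy fidelity model

weights = np.array([1.0, 1.0, 1.0, 1.0])            # initial weights c_if = 1
result = minimize(j_outer, x0=np.array([0.3, 0.2, 0.0, 0.0, 0.5]),
                  args=(weights,), method="Nelder-Mead")
print(result.x, 1.0 - result.fun)
\end{verbatim}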
\subsection{Result of optimization}
\label{ssect:origResult}
\begin{figure}
\caption{}\label{fig:originalOptimisedDynamics}
\end{figure}
\begin{table*}
\centering
\begin{tabular}{l | r| r| r| r| r| r| r| r| r| r}
quantity & $E_{r}$ & $E_{b}$ & $\bm{\varepsilon}_{r}$ &
$\bm{\varepsilon}_{b}$ &
$\frac{\Omega_{\text{car},a,\mathord{\uparrow}}}{2\pi}$ &
$\frac{\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p}}{2\pi}$ &
$\frac{\Omega_{\text{car},a,e}}{2\pi}$ &
$\Delta_{\text{car},a,\mathord{\uparrow}}$ & $\Delta_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}$ &
$\Delta_{e}$ \\ \hline
value & $\unit[7520]{\frac{V}{m}}$ & $\unit[7520]{\frac{V}{m}}$
& $\left(0.162,0.987,0.000\right)$ & $\left(-0.870,-0.286,-0.403\right)$
& $\unit[316]{Hz}$ & $\unit[7.65]{kHz}$ & $\unit[179]{kHz}$
& $\unit[-46]{Hz}$ & $\unit[-44]{Hz}$ & $\unit[662]{GHz}$ \\
\end{tabular}
\caption{
Optimized parameters when using the same transitions as in Ref.~\cite{lin2013dissipative},
leading to a fidelity of $F=88\%$, compared to $F=76\%$ in Ref.~\cite{lin2013dissipative}.
Both field strengths $E_{r}$ and $E_{b}$
are limited to a maximum value of
$\unit[7520]{\frac{V}{m}}$.
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p}$ is
determined by \cref{eq:SidebandRabifreq},
and $\Omega_{\text{car},a,e}$ determines
the effective rates $\gamma_{if}^{\text{eff}}$ in
\cref{eq:GammaRepFirst,eq:GammaRepSecond}.
}
\label{tab:originalOptimizedParams}
\end{table*}
All parameter optimizations have been performed
with the NLopt package~\cite{johnson2014nlopt}
using
the Subplex algorithm~\cite{rowan1990functional}.
While other optimization
methods could also be used
in the outer and inner optimization loops, we have found this choice to converge well.
\Cref{fig:originalOptimisedDynamics} compares
the simulated dynamics of the system as described in
Ref.~\cite{lin2013dissipative}
with the dynamics obtained after optimization.
The peak fidelity
is increased from $F=76\%$ to
$F=88\%$.
This is due to a modified steady state, in which the populations
in $\Ket{T_{\mathord{\downarrow}\mathord{\uparrow}}}$, $\Ket{\mathord{\uparrow}\mathord{\uparrow}}$
and $\Ket{\mathord{\downarrow}\mathord{\downarrow}}$ each are smaller
than in the original scheme.
Furthermore, through optimization of
the $\Gamma_{if}$, a significant portion of the
population can be prevented from escaping
the ground state hyperfine subspace
$\left\{ a,\mathord{\downarrow},\mathord{\uparrow} \right\}$, which
causes the prominent crest in the
$\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}$ population for the original scheme.
The optimized result was compared to
different realisations of randomly chosen
polarizations $\bm{\varepsilon}_{r}$ and
$\bm{\varepsilon}_{b}$, which lead to
dramatically varying peak fidelities:
these can be as low as $F=10\%$, are only
rarely in the vicinity of the peak fidelity
reached by optimization, and never surpass it.
The optimized values of the various parameters are
reported in
\cref{tab:originalOptimizedParams}.
After optimization, the two-photon
sideband Rabi frequency $\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p}$
assumes a value of $2\pi \times \unit[7.65]{kHz}$,
which is very close to
the rate $2\pi\times \unit[7.81]{kHz}$ reported in
Ref.~\cite{lin2013dissipative}.
The increase in fidelity can therefore
mainly be attributed to the
adjustments made to the polarizations
$\bm{\varepsilon}_{r}$ and $\bm{\varepsilon}_{b}$
and to the increase in the excited state manifold detuning
$\Delta_{e}$ from
$\unit[270]{GHz}$ to $\unit[662]{GHz}$, which
is feasible, see for example Ref.~\cite{gaebler2016high}.
In other words, the outcome of the inner optimization is
a superior set of scattering rates
$\left\{ \Gamma_{if} \right\}$,
with the parameters of the outer optimization
adjusted to rebalance the system.
Compared to Ref.~\cite{lin2013dissipative}, in
which $\Omega_{\text{car},a,\mathord{\uparrow}}=\unit[495]{Hz}$,
the carrier Rabi frequency between $\Ket{a}$ and
$\Ket{\mathord{\uparrow}}$ drops to $\unit[316]{Hz}$ after
optimization, thus further suppressing the
unwanted $\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}\leftrightarrow\Ket{S_{a\mathord{\downarrow}}}$
transition.
As the optimal fidelity
is approached, the detunings
$\Delta_{\text{car},a,\mathord{\uparrow}}$ and
$\Delta_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}$
become negligibly small, indicating
that for this particular entanglement scheme
the shift out of resonance caused by the
driven transitions plays only a minor role.
Nevertheless, the achievable fidelity is inherently
limited in this entanglement scheme.
As demonstrated by
\cref{eq:SidebandRabifreq,eq:Kramers,eq:aif},
even if the field strengths of
the lasers utilized for the stimulated Raman
sideband transition were unconstrained,
a finite amount of leaking between hyperfine
states would remain present.
Limited field strengths of the sideband lasers
necessitate a trade-off between the error due
to leaking between hyperfine states and the
errors due to population trapping in $\Ket{\mathord{\uparrow}\mathord{\uparrow}}$
and the unfavourable transition between
$\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}$ and $\Ket{S_{a\mathord{\downarrow}}}$.
As such, the fidelity that can be reached with our optimized parameters
falls short of the fidelity obtained by switching to the
stepwise scheme presented in Ref.~\cite{lin2013dissipative} which amounts to
$F=89.2\%$.
The stepwise scheme negates the error caused
by the unfavourable transition between
$\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}$ and $\Ket{S_{a\mathord{\downarrow}}}$
by temporally separating the
ground state hyperfine transitions from the
application of the repumper and also the
sympathetic cooling.
This strategy ensures that population lost out
of $\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}$ into $\Ket{S_{a\mathord{\downarrow}}}$
has nowhere to go and, if precisely timed,
is returned to $\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}$ after
a full Rabi cycle.
Essentially, the stepwise scheme
lifts the requirement of
balancing the rates at which each
transition can be driven, thereby
overcoming the limitations associated
with the time-continuous implementation.
In the following we show that a continuously operated scheme exploiting a different combination of transitions can outperform both entanglement generation variants of Ref.~\cite{lin2013dissipative}.
\section{Two-sideband scheme}
\label{sect:twosid}
Alternatively to the original scheme presented in
\cref{ssect:originalScheme}, steady-state entanglement
can be reached using other combinations of continuously
driven carrier and sideband
$^{9}\text{Be}^{+}\,\,$hyperfine transitions.
We consider here a scheme that
features two sideband transitions:
a blue sideband transition from $\Ket{\mathord{\downarrow}}$ to $\Ket{\mathord{\uparrow}}$,
and a second, red sideband transition from $\Ket{\mathord{\uparrow}}$ to $\Ket{a}$.
Note that we assume each sideband transition to be driven by its
own pair of stimulated Raman laser beams. It would also be possible to drive the two sideband transitions using only three beams. This simply requires
a proper choice of the
relative detunings.
In addition, and as in the original
scheme, a repump transition between
$\Ket{a}$ and $\Ket{e}$ is driven.
In order for all states in the hyperfine subspace to
be connected to the target state $\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}$,
a carrier transition between
$\Ket{\mathord{\downarrow}}$ and $\Ket{\mathord{\uparrow}}$ is included as well.
This choice is similar to the combination of transitions utilized
for the entanglement of two $^{40}\text{Ca}^{+}$
ions in Ref.~\cite{bentley2014detection}. It offers
numerous advantages over the original scheme as detailed below.
\subsection{Entanglement mechanism and optimization parameters}
\label{ss:twosidEntMech}
\begin{figure*}
\caption{Graph of states and driven transitions illustrating the
entanglement mechanism of the two-sideband scheme.}
\label{fig:twoSBMechanism}
\end{figure*}
\Cref{fig:twoSBMechanism} illustrates the entanglement
mechanism for this new combination of
transitions.
Crucially, the unfavourable transition
between $\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}\otimes\Ket{0}$ and
$\Ket{S_{a \mathord{\downarrow}}}\otimes\Ket{0}$ due to the carrier
connecting $\Ket{a}$ and $\Ket{\mathord{\uparrow}}$
in the original scheme has been eliminated.
Instead, the red sideband transition from $\Ket{\mathord{\uparrow}}$
to $\Ket{a}$ leads from
$\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}\otimes\Ket{n_{\text{$\nu_1$}}}$
to $\Ket{S_{a\mathord{\downarrow}}}\otimes\Ket{n_{\text{$\nu_1$}}-1}$
only when $n_{\text{$\nu_1$}} > 0$.
Consequently, for this combination of transitions,
in the absence of leakage between
hyperfine states and heating,
$\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}\otimes\Ket{n_{\text{$\nu_1$}}=0}$ alone
is the steady state of the dynamics.
In the presence of heating, population in
$\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}\otimes\Ket{0}$ can
only escape due to an excitation of the
utilized vibrational mode $\text{$\nu_1$}$
followed by a sideband transition from
$\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}\otimes\Ket{n_{\text{$\nu_1$}}}$
to $\Ket{S_{a\mathord{\downarrow}}}\otimes\Ket{n_{\text{$\nu_1$}}-1}$.
Population in $\Ket{S_{a\mathord{\downarrow}}}\otimes\Ket{n_{\text{$\nu_1$}}-1}$
can take multiple branching paths,
all of which
eventually lead back to
$\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}$.
As such, in contrast to the
original scheme, which relies on
sympathetic cooling, this particular combination of
transitions inherently cools the utilized
mode \text{$\nu_1$}$\,$ of the system
during entanglement generation.
Without the need for sympathetic cooling, the
$^{24}\text{Mg}^{+}\,\,$ ions can be removed.
This not only simplifies
the experiment but also reduces the
number of motional modes of the ionic
crystal. It thus effectively eliminates
the error due to off-resonant coupling
to $\text{$\nu_2$}$, given by
\cref{eq:SBORtrans} in the original scheme.
As described in \cref{sect:Optim},
in the original scheme
the carrier Rabi frequencies
$\Omega_{\text{car},a,\mathord{\uparrow}}$ and
$\Omega_{\text{car},a,e}$,
which determine the rate of
effective decay out of $\Ket{a}$,
are limited by the maximum attainable
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p}$.
In contrast, in the current scheme
the carrier Rabi frequencies
$\Omega_{\text{car},\mathord{\downarrow},\mathord{\uparrow}}$ and
$\Omega_{\text{car},a,e}$ can be
increased significantly without
causing losses out of the target state
or population trapping in
$\Ket{\mathord{\uparrow}\mathord{\uparrow}}$.
Driving an additional sideband
transition makes the graph of states
in \cref{fig:twoSBMechanism}
more connected, permitting
population to reach $\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}$
via additional paths.
Comparing the graphs shown in
\cref{fig:origMechanism,fig:twoSBMechanism},
the combined effect of the additional paths
into the target state and of the increase
in $\Omega_{\text{car},a,e}$, which results in larger
effective decay rates
$\left\{\gamma_{a,f} \propto \Omega_{\text{car},a,e}^{2} \right\}$,
should lead to much faster entanglement preparation.
Optimization of the field strengths
and polarizations for the two-sideband
scheme has been carried out according to the same
principle as described in
\cref{sect:Optim}, with the slight
complication of having to address
additional degrees of freedom.
In the specific case of the two-sideband
combination, the corresponding form
of the target functional for the polarization optimization,
\cref{eq:PolFunct}, becomes
\begin{align}
J_{\text{inner}}[&E_{b}^{(1)},E_{r}^{(1)},\varepsilon_{b}^{(1)},
\varepsilon_{r}^{(1)},\Delta_{e}^{(1)}, \nonumber \\
& E_{b}^{(2)},E_{r}^{(2)},\varepsilon_{b}^{(2)},
\varepsilon_{r}^{(2)},\Delta_{e}^{(2)}] \nonumber\\
= &\sum_{if} c_{if} \Gamma_{if} - \alpha^{(1)} \Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p~(1)} - \alpha^{(2)} \Omega_{\text{red},\mathord{\uparrow},a}^{2p~(2)}\,\,.&
\label{eq:InnerFunctTwoSid}
\end{align}
As in the original scheme
and described in detail in \cref{sect:Optim},
optimization of the
polarization can be accomplished
without having to simulate the
dynamics in each iteration.
A single inner optimization step determines both
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p~(1)}
=\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p~(1)}
\left(E_{r}^{(1)},E_{b}^{(1)},\bm{\varepsilon}_{r}^{(1)},
\bm{\varepsilon}_{b}^{(1)},\Delta_{e}^{(1)}\right)
$
and
$\Omega_{\text{red},\mathord{\uparrow},a}^{2p~(2)}
=\Omega_{\text{red},\mathord{\uparrow},a}^{2p~(2)}
\left(E_{r}^{(2)},E_{b}^{(2)},\bm{\varepsilon}_{r}^{(2)},
\bm{\varepsilon}_{b}^{(2)},\Delta_{e}^{(2)}\right)
$,
in addition to
$\big\{ \Gamma_{if} = \Gamma_{if}^{(1)}
\left(E_{r}^{(1)},E_{b}^{(1)},\bm{\varepsilon}_{r}^{(1)},
\bm{\varepsilon}_{b}^{(1)},\Delta_{e}^{(1)}\right)
+ \Gamma_{if}^{(2)}
\left(E_{r}^{(2)},E_{b}^{(2)},\bm{\varepsilon}_{r}^{(2)},
\bm{\varepsilon}_{b}^{(2)},\Delta_{e}^{(2)}\right)
\big\}$,
the set of scattering rates due to each
sideband transition.
As in \cref{sect:Optim}, the outer optimization
is performed directly on the fidelity $F$ of the dynamics,
\begin{align}
J_{\text{outer}}[&\Omega_{\text{car},\mathord{\downarrow},\mathord{\uparrow}},\Omega_{\text{car},a,e},\Delta_{\text{car},\mathord{\downarrow},\mathord{\uparrow}},
\nonumber \\
&\Delta_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}},
\Delta_{\text{red},\mathord{\uparrow},a}, \alpha^{(1)}, \alpha^{(2)}] \overset{\underset{\mathrm{def}}{}}{=} 1 - F = \epsilon\,.
\label{eq:OuterFunctTwoSid}
\end{align}
The set
$\mathcal{P}_{\text{outer}}$ now consists of
the carrier Rabi frequencies
$\Omega_{\text{car},\mathord{\downarrow},\mathord{\uparrow}}$,
$\Omega_{\text{car},a,e}$,
the detunings
$\Delta_{\text{car},\mathord{\downarrow},\mathord{\uparrow}}$,
$\Delta_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{(1)}$
and $\Delta_{\text{red},\mathord{\uparrow},a}^{(2)}$,
of the microwave carrier, first and second sideband
transitions, and
the weights
$\alpha^{(1)}$ and
$\alpha^{(2)}$.
Since the scheme now involves a second sideband combination,
an additional weight is required to balance
the maximization of its two-photon sideband
Rabi frequency against the sideband photon
scattering rates in \cref{eq:InnerFunctTwoSid}.
To ensure that
both Rabi frequencies are
maximized without one
dominating the other,
the functional of
\cref{eq:InnerFunctTwoSid} can be
modified slightly, such that
\begin{align}
\tilde{J}_{\text{inner}}[&E_{b}^{(1)},E_{r}^{(1)},\varepsilon_{b}^{(1)},
\varepsilon_{r}^{(1)},\Delta_{e}^{(1)}, \nonumber \\
& E_{b}^{(2)},E_{r}^{(2)},\varepsilon_{b}^{(2)},
\varepsilon_{r}^{(2)},\Delta_{e}^{(2)}] \nonumber\\
& = \sum_{if} c_{if}\Gamma_{if} - \alpha
\left( \Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p~(1)}
+\Omega_{\text{red},\mathord{\uparrow},a}^{2p~(2)} \right) \nonumber\\
&~~ + \beta \left| \Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p~(1)}
-\Omega_{\text{red},\mathord{\uparrow},a}^{2p~(2)} \right| \,\,,
\label{eq:ModifiedInnerFunctTwoSid}
\end{align}
where $\alpha$ now balances the
maximization of the sum of
two-photon Rabi frequencies
against the $\Gamma_{if}$, whilst
$\beta$
is a parameter controlling how
strictly the two-photon sideband
Rabi frequencies should be matched.
For simplicity it is assumed that
$E_{r}^{(1)} = E_{r}^{(2)}$ and
$E_{b}^{(1)} = E_{b}^{(2)}$ and
that each field strength is
limited to the maximum value
allowed during the optimization
of the original scheme.
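To make the role of the additional penalty term explicit, the following short
Python snippet evaluates the modified inner functional of
\cref{eq:ModifiedInnerFunctTwoSid} for given (placeholder) values of the
summed scattering rates and of the two two-photon Rabi frequencies; the
numbers are purely illustrative.
\begin{verbatim}
# Sketch of the modified inner functional: the beta term penalizes a
# mismatch between the two two-photon sideband Rabi frequencies.
def j_inner_two_sideband(gamma_total, omega_blue, omega_red, alpha, beta):
    return (gamma_total - alpha * (omega_blue + omega_red)
            + beta * abs(omega_blue - omega_red))

# Balanced Rabi frequencies are favoured over unbalanced ones with
# the same sum, provided beta is chosen large enough.
print(j_inner_two_sideband(1.0, 5.0, 5.0, alpha=0.1, beta=1.0))  # 0.0
print(j_inner_two_sideband(1.0, 9.0, 1.0, alpha=0.1, beta=1.0))  # 8.0
\end{verbatim}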
\begin{figure}
\caption{Peak fidelity of the two-sideband scheme for different heating rates
$\kappa_{\text{h}}$ of the vibrational mode $\nu_1$.}
\label{fig:twoSBDynamics}
\end{figure}
\begin{table*}[tb]
\centering
\begin{tabular}{l |l r r r }
parameter & $\kappa_{h}=\quad $ & $2\pi\times\unit[1]{s^{-1}}$ & $2\pi\times\unit[10]{s^{-1}}$ & $2\pi\times\unit[100]{s^{-1}}$ \\ \hline
$E_{r}$ &&
$\unit[7520]{\frac{V}{m}}$ &
$\unit[7520]{\frac{V}{m}}$ &
$\unit[7520]{\frac{V}{m}}$ \\
$E_{b}$ &&
$\unit[7520]{\frac{V}{m}}$ &
$\unit[7520]{\frac{V}{m}}$ &
$\unit[7520]{\frac{V}{m}}$ \\
$\bm{\varepsilon}_{r}^{(1)}$ &&
$\left( -0.752,-0.220,-0.621 \right)$ &
$\left( -0.620, -0.500, -0.605 \right)$&
$\left( -0.741, -0.338, -0.581 \right)$ \\
$\bm{\varepsilon}_{b}^{(1)}$ &&
$\left( 0.440, 0.759, 0.480 \right)$ &
$\left(0.536, 0.644, 0.545 \right)$ &
$\left(0.408,0.802, 0.435 \right)$ \\
$\bm{\varepsilon}_{r}^{(2)}$ &&
$\left( -0.413,-0.204,-0.888 \right)$ &
$\left( -0.453, -0.854, -0.257 \right)$&
$\left( -0.479, -0.824, -0.303 \right)$ \\
$\bm{\varepsilon}_{b}^{(2)}$ &&
$\left( -0.415, -0.883, -0.218 \right)$ &
$\left( -0.451, -0.250, -0.857\right)$&
$\left( 0.493, 0.261, 0.830 \right)$ \\
$\Omega_{\text{car},\mathord{\downarrow},\mathord{\uparrow}}$ &&
$2\pi\times\unit[2.24]{kHz}$ &
$2\pi\times\unit[2.91]{kHz}$ &
$2\pi\times\unit[6.67]{kHz}$\\
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p~(1)}$ &&
$2\pi\times\unit[4.96]{kHz}$&
$2\pi\times\unit[6.47]{kHz}$&
$2\pi\times\unit[14.92]{kHz}$ \\
$\Omega_{\text{red},\mathord{\uparrow},a}^{2p~(2)}$ &&
$2\pi\times\unit[4.96]{kHz}$ &
$2\pi\times\unit[6.47]{kHz}$&
$2\pi\times\unit[14.92]{kHz}$ \\
$\Omega_{\text{car},a,e}$ &&
$2\pi\times\unit[691]{kHz}$ &
$2\pi\times\unit[802]{kHz}$ &
$2\pi\times \unit[1233]{kHz}$ \\
$\Delta_{e}^{(1)}$ &&
$\unit[624]{GHz}$ & $\unit[245]{GHz}$ & $\unit[318]{GHz}$ \\
$\Delta_{e}^{(2)}$ &&
$\unit[464]{GHz}$ & $\unit[372]{GHz}$ & $\unit[206]{GHz}$ \\
\end{tabular}
\caption{
Optimized parameters for the two-sideband scheme.
As for the original scheme, each field strength
is limited to a maximum value of
$\unit[7520]{V/m}$.
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p~(1)}$ and
$\Omega_{\text{red},\mathord{\uparrow},a}^{2p~(2)}$ are both
determined by \cref{eq:SidebandRabifreq}
with individual polarizations
$\bm{\varepsilon}_{r}^{(1)}$,
$\bm{\varepsilon}_{b}^{(1)}$,
$\bm{\varepsilon}_{r}^{(2)}$ and
$\bm{\varepsilon}_{b}^{(2)}$.
Optimization of the two-sideband scheme
leads to fidelities of $F=98.3\%, 96.7\%$
and $90.3\%$ for heating rates of
$\kappa_{\text{h}} = 2\pi\times\unit[1]{s^{-1}},
2\pi\times\unit[10]{s^{-1}}$
and $2\pi\times\unit[100]{s^{-1}}$, respectively.
The sideband detunings $\Delta_{e}^{(1)}$ and
$\Delta_{e}^{(2)}$ are defined as in \cref{ssect:Ham}
with the same assumed fine structure splitting
$f_{P}=\unit[197.2]{GHz}$.}
\label{tab:twoSBOptimizedParams}
\end{table*}
In the absence of sympathetic cooling,
the primary source of heating,
namely spontaneous emission
during the stimulated Raman sideband
transition driven on the $^{24}\text{Mg}^{+}\,\,$ ions,
is eliminated.
The remaining sources of heating are
photon recoil from the spontaneous
emission out of $\Ket{e}$ after repumping
and electric field noise associated
with the ion trap \cite{lin2013dissipative}.
Since the
heating rate
influences the system dynamics
and therefore the obtained fidelity,
the result of the optimization depends
on the specific heating rate assumed,
which can vary, depending on the
motional mode utilized for the sideband
transition.
\subsection{Influence of trap heating rates}
\Cref{fig:twoSBDynamics} compares the
peak fidelity reached for different
values of the
heating rate
$\kappa_{\text{h}}$
of the
vibrational mode $\nu_1$.
For an assumed heating rate of
$\kappa_{\text{h}} = 2\pi \times \unit[1]{s^{-1}}$,
optimization leads to a peak fidelity of $F=98.3\%$, whilst for
$\kappa_{\text{h}} = 2\pi \times \unit[10]{s^{-1}}$,
a more realistic heating rate for modern traps,
the peak fidelity is $F=96.7\%$.
Finally, when $\kappa_{\text{h}} = 2\pi \times \unit[100]{s^{-1}}$,
the peak fidelity is reduced to $F=90.3\%$.
For each considered heating rate,
the parameters leading to optimal entanglement
are listed in \cref{tab:twoSBOptimizedParams}.
As the heating rate $\kappa_{\text{h}}$
is increased, recovering population
lost from the target
state $\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}$
requires an increase in
the Rabi frequencies of all driven
transitions.
For all reported heating rates,
however, the ratios between
$\Omega_{\text{car},\mathord{\downarrow},\mathord{\uparrow}}$,
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p~(1)} \approx
\Omega_{\text{red},\mathord{\uparrow},a}^{2p~(2)}$ and
$\Omega_{\text{car},a,e}^{2}$
remain approximately constant.
Here, the repumper Rabi frequency $\Omega_{\text{car},a,e}$
enters squared, since the effective decay rates
in \cref{eq:EffectiveRates}
are proportional to $\Omega_{\text{car},a,e}^{2}$.
This observation can be understood intuitively:
the target state should be reachable as
directly as possible from any given state.
Scaling all transition rates equally
is necessary in order to prevent the flow of
population from being bottlenecked throughout the
entanglement generation.
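This balancing can be checked directly against the values listed in
\cref{tab:twoSBOptimizedParams}; the short Python snippet below reproduces
the approximately constant ratios from the table entries (all Rabi
frequencies in units of $2\pi\times$kHz, as in the table).
\begin{verbatim}
# Ratios between the optimized Rabi frequencies for the three heating
# rates of the table (values in units of 2*pi kHz).
omega_car = [2.24, 2.91, 6.67]       # carrier, down <-> up
omega_sb  = [4.96, 6.47, 14.92]      # two-photon sideband (blue = red)
omega_rep = [691.0, 802.0, 1233.0]   # repumper carrier, a <-> e

for c, s, r in zip(omega_car, omega_sb, omega_rep):
    print(f"carrier/sideband = {c/s:.3f}, "
          f"sideband/repumper^2 = {s/r**2:.2e}")
# carrier/sideband stays close to 0.45 and sideband/repumper^2 close
# to 1e-5 for all three heating rates.
\end{verbatim}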
The optimized peak fidelities are again
significantly higher than the average fidelity
of $F\approx 0.4$ (or $F\approx 0.5$ with the
fixed scaling of Rabi frequencies mentioned above)
obtained from simulating the dynamics with
random polarizations
$\bm{\varepsilon}_{r}^{(1)}, \bm{\varepsilon}_{b}^{(1)},
\bm{\varepsilon}_{r}^{(2)}$ and $\bm{\varepsilon}_{b}^{(2)}$
of the sideband laser beams and assuming
$\kappa_{\text{h}}=2\pi\times\unit[1]{s^{-1}}$.
For the two-sideband scheme it is much more difficult
to randomly select a near-optimal polarization, due
to the increased number of degrees of freedom, which
also causes the peak fidelity to strongly vary depending
on the polarization.
Furthermore, an optimization to minimize the time taken
to reach a target state population
of $F=85\%$ was performed for the heating
rates $\kappa_\text{h}\in
\left\{2\pi\times \unit[1]{s^{-1}},2\pi\times\unit[10]{s^{-1}},2\pi\times\unit[100]{s^{-1}} \right\}$,
leading to a preparation time of
$t\approx \unit[0.3]{ms}$ for all
assumed heating rates.
For each increase in the heating rate,
the optimization results
in a different set of polarizations
$\bm{\varepsilon}_{r}^{(1)}$,
$\bm{\varepsilon}_{b}^{(1)}$,
$\bm{\varepsilon}_{r}^{(2)}$ and
$\bm{\varepsilon}_{b}^{(2)}$.
As the heating rate is increased,
the minimization of leakage rates
$\left\{\Gamma_{if}\right\}$ becomes less important.
A given $\Gamma_{if}$ is determined
by the polarization of each stimulated Raman laser beam
and scales with the
squared magnitude of the field strength, $|E|^{2}$,
of the considered laser beam, whilst scaling inversely with the
square of its detuning $\Delta_{e}$
from the excited state (\cref{eq:Kramers,eq:aif}).
Instead, the maximization of the
two-photon stimulated Raman
sideband transition rates
$\Omega_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}^{2p~(1)}$ and
$\Omega_{\text{red},\mathord{\uparrow},a}^{2p~(2)}$ (\cref{eq:SidebandRabifreq})
is prioritised.
The two-photon stimulated Raman sideband
transition Rabi frequencies depend on
the polarizations $\bm{\varepsilon}_{r}$ and
$\bm{\varepsilon}_{b}$ of both laser beams,
the product of field strengths $E_{r}E_{b}$
and the detuning of both stimulated Raman laser beams
from the excited state $\Delta_{e}$.
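Schematically, and assuming the standard far-detuned scaling of stimulated
Raman transitions (the precise expressions are given by
\cref{eq:SidebandRabifreq,eq:Kramers,eq:aif}), these dependences can be
summarized as
\begin{align*}
\Omega^{2p} \propto \frac{E_{r}E_{b}}{\Delta_{e}}\,,\qquad
\Gamma_{if} \propto \frac{E_{r}^{2}+E_{b}^{2}}{\Delta_{e}^{2}}\,,
\end{align*}
so that, at a fixed two-photon Rabi frequency, the scattering rates decrease
as $1/\Delta_{e}$ when the field strengths are increased in proportion to
$\sqrt{\Delta_{e}}$.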
Larger sideband two-photon Rabi frequencies
ensure that population can flow back into
$\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}\otimes\Ket{0}$
much faster than the heating can allow it to escape.
Increasing all of the transition rates
has the side effect of speeding up the entanglement
but limits the attainable fidelity, with an increased
error due to population leaking outside of the
hyperfine subspace $\left\{ a,\mathord{\downarrow},\mathord{\uparrow} \right\}$.
The behaviour of the $\Delta_{e}^{(1)}$
and $\Delta_{e}^{(2)}$ is
non-monotonic and appears to be strongly dependent on the
particular polarization profile.
As in the original scheme,
each of the $\Delta_{\text{car},\mathord{\downarrow},\mathord{\uparrow}}$,
$\Delta_{\text{blue},\mathord{\downarrow},\mathord{\uparrow}}$ and
$\Delta_{\text{red},\mathord{\uparrow},a}$ becomes
smaller as the optimal fidelity is reached.
The error due to heating can only be
reduced by increasing the flow
of population into the target state
$\Ket{S_{\mathord{\downarrow}\mathord{\uparrow}}}\otimes\Ket{0}$,
since there is no straightforward
way to compensate for heating.
This comes at the cost of
increasing
$\sum_{if} c_{if} \Gamma_{if}$ and
thus the error due to leakage between the
hyperfine states, as explained above.
Assuming optimal polarization and
balancing of the driven rates,
the only way to reduce
one error without compounding
the other
is to increase the maximum field strengths $E_{r}^{(1)}$,
$E_{b}^{(1)}$, $E_{r}^{(2)}$ and $E_{b}^{(2)}$. This explains why the
field strengths take their maximal allowed value in
\cref{tab:twoSBOptimizedParams}.
\subsection{Comparison to the original scheme}
The two-sideband scheme represents a promising alternative
to the original scheme, even when the latter is operated with the
optimized parameters discussed in \cref{sect:Optim}.
In terms of fidelity, the two-sideband
scheme outperforms the original one,
regardless of the assumed heating
rate $\kappa_{\text{h}}$.
Even in the worst case considered, with
$\kappa_{\text{h}} = 2\pi\times\unit[100]{s^{-1}}$,
the resulting error is under $10\%$ after optimization.
In comparison, the previously best
fidelity, reached by the stepwise scheme
in Ref.~\cite{lin2013dissipative}, corresponds to an error of about
$11\%$.
The corresponding errors for the original scheme in \cref{sect:Optim}
are slightly larger for the polarization optimized case and
two and a half times as large for the non-optimized case.
In terms of speed, the two-sideband scheme
outperforms the original scheme.
Given traps with sufficiently small heating rates,
entangling speed can be sacrificed in order to maximize fidelity.
For the lowest regarded heating rate,
$\kappa_{\text{h}}=2\pi\times\unit[1]{s^{-1}}$,
the scheme can be optimized over $\unit[3]{ms}$, attaining a
fidelity of $F=98.3\%$, or over $\unit[6]{ms}$,
which increases the fidelity to $F=98.7\%$.
In contrast, the original scheme peaks after
approximately $\unit[6]{ms}$ but at the much lower fidelity $F=76\%$.
To summarize, when considering the experimental modifications
necessary to go from the protocol
in Ref.~\cite{lin2013dissipative} to
the two-sideband scheme, the overall complexity
is reduced.
Instead of a four-ion setup consisting of two $^{9}\text{Be}^{+}\,\,$ and
two $^{24}\text{Mg}^{+}\,\,$ ions, with their respective sympathetic cooling
laser beams, only the two $^{9}\text{Be}^{+}\,\,$ ions
to be entangled need to be trapped, without any sympathetic cooling laser beams.
Given sufficient power, the four laser
beams required for both of the stimulated
Raman sideband transitions can all be derived from the
same $\unit[313]{nm}$ laser and frequency shifted using
acousto-optic modulators.
The only further complication is the need to
independently manipulate the polarization
of each individual stimulated Raman sideband
laser beam.
One may wonder, of course, how sensitive the Bell state fidelity is to small deviations from the optimized polarizations. We have found
fluctuations in the polarization components of up to $5\%$ to have only a negligible effect on the entanglement error, whilst fluctuations above $10\%$ noticeably reduce the fidelity.
\subsection{Fundamental performance bound}
Given the superior performance of the two-sideband scheme compared to
the original protocol~\cite{lin2013dissipative}, one may wonder
whether there are ultimate limits to the fidelity of a Bell state
realized in this way.
There are two main sources of error that limit the fidelities in this
dissipative state preparation scheme---anomalous heating and spontaneous
emission. As discussed above, the obtainable fidelity is determined by a
trade-off between utilizing fast enough sideband transitions in order
to beat trap heating, and minimization of the spontaneous
emission rates associated with the sideband transitions.
While anomalous heating can in principle be made
arbitrarily small by improving the ion trap, undesired spontaneous
emission is an inherent and unavoidable loss mechanism accompanying
the desired spontaneous emission at the core of the dissipative state
preparation. In order to explore the
fundamental performance bound posed by spontaneous emission, we assume
a realistic trap with $\kappa_{h}=2\pi\times\unit[10]{s^{-1}}$, a close to perfect
trap, with $\kappa_{h}=2\pi\times\unit[1]{s^{-1}}$
or no heating at all ($\kappa_{h}=\unit[0]{s^{-1}}$), and investigate how much
laser power is needed to achieve a certain fidelity, or error.
In the absence of all heating, the optimization will favor slow
sideband transitions that are detuned far below the
$P_{\nicefrac{1}{2}}$ and $P_{\nicefrac{3}{2}}$ levels with laser
beams polarized such that there is minimal spontaneous
emission. Identifying the conditions under which it is possible to reach
Bell state fidelities of $F=99.9\%$ or even $F=99.99\%$
allows us to benchmark the performance of the current dissipative scheme.
For comparison, Ref.~\cite{ozeri2007errors}
examines the dependence
of fidelity on laser power for gate-based
entanglement creation for various ion species.
Of all ion species considered there,
the gate error of $^{9}\text{Be}^{+}\,\,$ entanglement
was lowest for a given power $P$, related to
the laser field strength $E$ by
\begin{align}
P = \frac{\pi}{4} E^{2}w_{0}^{2}c\epsilon_{0}\,\,,
\label{eq:power}
\end{align}
where $w_{0}$ is the laser beam waist,
$c$ the speed of light and $\epsilon_{0}$ the vacuum
permittivity~\cite{ozeri2007errors}. We assume here an (idealized)
beam waist of $w_{0}=\unit[20]{\mu m}$, to directly compare
to Ref.~\cite{ozeri2007errors}.
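For orientation, \cref{eq:power} can be evaluated directly for this beam
waist; the short Python snippet below does so for the field strengths quoted
in this section (the maximum of $\unit[7520]{V/m}$ used above, as well as
approximately $\unit[200]{kV/m}$, $\unit[752]{kV/m}$ and $\unit[4.4]{MV/m}$).
\begin{verbatim}
# Power per beam from P = (pi/4) * E^2 * w0^2 * c * eps0,
# for a beam waist of w0 = 20 micrometres.
import math

c, eps0 = 299792458.0, 8.8541878128e-12   # SI units
w0 = 20e-6                                # beam waist in metres

def beam_power(E):
    """Power per beam in W for a field strength E in V/m."""
    return math.pi / 4.0 * E**2 * w0**2 * c * eps0

for E in (7.52e3, 2.0e5, 7.52e5, 4.4e6):  # field strengths in V/m
    print(f"E = {E:9.3g} V/m  ->  P = {beam_power(E):.3g} W per beam")
# yields roughly 47 microwatts, 33 mW, 0.47 W and 16 W per beam.
\end{verbatim}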
During optimization, the highest regarded threshold, $F=99.99\%$,
was reached after $\unit[0.33]{ms}$ using field strengths of
$E_{r/b}\approx \unit[752]{kV/m}$ per beam and detunings
of up to $\unit[25]{THz}$.
For the sake of comparison with Ref.~\cite{ozeri2007errors}, and
for the case of negligible heating, the timescale in the master equation
\eqref{eq:QME} can be changed, $t \rightarrow \tau= \frac{t}{\chi}$.
In order to match the entangling duration of $\unit[10]{\mu s}$ reported in Ref.~\cite{ozeri2007errors}, we require $\unit[4.4]{MV/m}$ per beam,
corresponding to a total power of $4\times\unit[16]{W}$ at the same
detuning.
For this very fast entanglement, the negative effects of heating are limited,
leading to errors of $\epsilon = 6.5\times 10^{-5}$,
$\epsilon=1.0\times 10 ^{-4}$ and
$\epsilon=4.47\times 10^{-4}$ for heating rates of $\kappa_h \in \left\{2\pi\times\unit[1]{s^{-1}},
2\pi\times\unit[10]{s^{-1}}, 2\pi\times\unit[100]{s^{-1}}\right\}$, respectively.
If we fix the available field strength to the value
$E_{r/b}\approx\unit[200]{kV/m}$, corresponding to a total power
of $4\times \unit[36]{mW}$,
as reported in Ref.~\cite{ozeri2007errors}, the target fidelity is reached after $\unit[4.6]{ms}$.
Again, despite the much lower field strengths, the detuning remains unchanged.
It should be noted that an entangling duration of $\unit[4.6]{ms}$ is
still faster than that of the original entanglement scheme~\cite{lin2013dissipative}.
At this lower extreme in field strengths, the effects of
heating are more noticeable, since heating is allowed to act
for almost $500$ times longer than in the $\unit[10]{\mu s}$ case.
\begin{figure}
\caption{Bell state error $\epsilon=1-F$ as a function of the sideband laser
beam strength $E_{r/b}$ for different heating rates $\kappa_{h}$.}
\label{fig:highEnergyOptimised}
\end{figure}
\Cref{fig:highEnergyOptimised} shows the Bell state error, i.e.,
the entanglement error, obtained when rescaling the duration
to $\unit[1]{ms}$, for different field strengths
$E_{r/b}$ and heating rates
(shown in red, orange, and grey, respectively).
A fidelity of $F=99\%$ is reached for all regarded heating rates, requiring
field strengths between $E_{r/b}
= \unit[31]{kV/m}$ ($\kappa_{h}=2\pi\times\unit[0]{s^{-1}}$) and
$E_{r/b}=\unit[38]{kV/m}$ ($\kappa_{h}=2\pi\times\unit[10]{s^{-1}}$).
The next threshold, $F=99.9\%$,
is only crossed for $\kappa_h=0$ and
$\kappa_{h} = 2\pi\times\unit[1]{s^{-1}}$ at field strengths of
$E_{r/b}=\unit[100]{kV/m}$ and $E_{r/b}=\unit[125]{kV/m}$, corresponding
to total powers of $P\approx 4\times\unit[8.3]{mW}$
and $P\approx4\times\unit[13]{mW}$, respectively.
Obtaining this fidelity requires detunings on the order of $\unit[6]{THz}$.
The highest threshold, $F=99.99\%$, is reached for
$\kappa_{h}=2\pi\times\unit[0]{s^{-1}}$ whilst requiring field strengths
of the order of $E_{r/b} \approx \unit[325]{kV/m}$
corresponding to a total power of $4\times \unit[89]{mW}$.
This is about two and a half times more power than for the gate based approach
in Ref.~\cite{ozeri2007errors}.
For negligible heating, the required field strength per beam can be reduced by using three instead of four beams to drive the two sideband transitions (blue curve in \cref{fig:highEnergyOptimised}).
Since a three-beam configuration corresponds to a constrained choice within the four-beam parameter space, this finding illustrates that parameter optimization is prone to trapping in local optima, in particular for a larger number of optimization parameters~\cite{GoetzPRA16b}.
We attribute the improvement to the fact that omission of one beam reduces the inadvertent scattering. More specifically, the extra constraint on the beam detunings
appears to aid the optimization algorithm in finding a
configuration for which the contribution of each beam
towards the scattering error is distributed in a more favorable
way than in the four-beam setup.
\section{Conclusions}
\label{sect:Conclusions}
We have addressed the problem of additional noise sources that limit fidelity
and speed of dissipative entanglement generation.
Combining quantum optimal control theory~\cite{glaser2015training}
with the effective operator approach~\cite{reiter2012effective},
we have shown how to improve both fidelity
and speed for the example of entangling two
hyperfine qubits in a chain of trapped ions
~\cite{lin2013dissipative}.
The detrimental noise source in this case
is undesired spontaneous decay brought
about by the sideband laser beams that
are necessary for coupling the qubits
~\cite{lin2013dissipative}.
This decay leads to the irrevocable
loss of population from the hyperfine
subspace of interest.
Whilst the undesired spontaneous decay
cannot be eliminated entirely,
an optimal choice of the experimental parameters increases the
fidelity from 76\% to 88\%, with minimal changes to the
setup. Key to the improvement is optimization of the sideband laser
beam polarizations which enter the decay rates of each individual
hyperfine level. Due to their interdependence, the various parameters
of the experiment need to be retuned when changing the
polarization.
The two-stage optimization process that we have
developed here can easily resolve this issue, demonstrating the power
of numerical quantum control.
Further limitations to fidelity and speed can be identified graphically,
by visualizing the connections between states
due to the various field-driven transitions.
This makes it possible to trace the flow of population qualitatively and shows that,
depending on the
relative transition rates, population can get trapped in states other than
the target state or be transferred out of the target state by an unfavourable
transition. The latter in particular implies that the target state does not
fully coincide with the steady state of the evolution.
In order to overcome this limitation, we suggest adapting the entanglement scheme
to use two sideband transitions, similar to the approach of Ref.~\cite{bentley2014detection}.
Of course, adding a second sideband makes the
suppression of the error due to detrimental spontaneous emission even more
important. Our optimization method had no difficulty coping with this task,
despite the increase in the number of tunable parameters.
Analysis aided by the graph of connected states for the two-sideband scenario
reveals that the limitations of the original scheme of
Ref.~\cite{lin2013dissipative} can indeed be overcome, whilst
providing additional advantageous properties such as higher
entanglement speed and
inherent cooling. This offers the possibility of reducing the
complexity of the experiment by removing the need for sympathetic
cooling and all sources of error that come with it. The entangled-state, or
Bell state, fidelity that we predict for
the two-sideband scenario strongly depends on the heating rate. It can
be as high as $98\%$ under conditions similar to those of
Ref.~\cite{lin2013dissipative}, in particular in terms of the
available laser field strengths. The maximum attainable fidelity is
primarily limited by the heating rate of the motional mode that is
used to couple the qubits. It dictates the timescale at which the
sideband transitions must take place. Weaker sideband transition rates
in turn enable better suppression of spontaneous emission errors.
Whilst a fidelity of $98\%$ represents an order of magnitude
reduction of the error compared to the original scheme, execution of most
quantum protocols requires fidelities in excess of $99\%$. These
could be achieved through experimental refinements, such as ion traps
with weaker anomalous heating,
more powerful sideband lasers~\cite{ozeri2007errors}, or use of
optical instead of hyperfine
qubits~\cite{schindler2013quantum,barreiro2011open}.
Provided that heating rates can be made negligible,
for instance by implementing the two-sideband
scheme on the stretch rather than the center of mass mode of the two ions,
one may wonder whether
spontaneous emission ultimately limits the performance of dissipative
Bell state preparation. Spontaneous emission can be reduced by using larger
detunings which in turn requires more laser power or longer durations. Compared to
gate-based entanglement preparation~\cite{ozeri2007errors}, we find,
for the same laser power of $4\times 36\,$mW into a 20$\,\mu$m beam
waist as in Ref.~\cite{ozeri2007errors}, the entangling duration to
realize a Bell
state fidelity of 99.99\% to be increased from 10$\,\mu$s to 4.6$\,$ms
in an ideal trap. The advantages of the dissipative approach, in
particular its inherent robustness against noise, might easily outweigh this
time requirement, making dissipative entanglement production a viable
resource for quantum information protocols. Consider, for example,
carrying out primitives such as gate teleportation. This
could be driven by an entanglement machine that produces 200 pairs per second per node in serial
mode, or outputs one pair per 10$\,\mu$s when run with 250
nodes in parallel. A further speed-up is possible by using more
laser power.
Our study provides a first example of how to use quantum optimal
control theory to push driven-dissipative protocols to their ultimate
performance limit, despite imperfections in a practical
setting. Performance limits include, in addition to maximal fidelity,
also the highest speed. Here, we have obtained a speed up of about a
factor of four compared to Ref.~\cite{lin2013dissipative}. Speed is of particular concern when scaling up
entanglement generation since some undesired decoherence rates are
known to scale with system size~\cite{monz2011fourteen}. Deriving the
fastest possible protocol is therefore key if dissipative generation
of many-body entanglement~\cite{reiter2016scalable} is to succeed. As
we have shown, optimal control theory is a tool ideally suited to tackle this
task, and targeting a multipartite entangled state is a natural next step.
The optimal control theory framework for dissipative entanglement generation that we have introduced here is not limited to the specific example of trapped ions. In fact, our technique is applicable to generic multi-level quantum systems in the presence of dissipation for which the time evolution can be obtained within reasonable computation time~\footnote{Typically, on the order of one hundred propagations are necessary for the optimization to converge.}. This also includes systems with multiple steady states~\cite{KrausPRA08,AlbertPhD}, which would be interesting for, e.g., quantum error correction, or systems with
non-Markovian dynamics such as solid state devices~\cite{Doucet18}.
In the latter case, the generalization requires the combination of the present optimization approach with one of the methods for obtaining non-Markovian dynamics~\cite{deVegaRMP17}, such as partitioning the environment into strongly and weakly coupled parts~\cite{ReichSciRep15}. Non-Markovianity has been shown to assist entanglement generation in coupled dimers subject to dephasing noise~\cite{HuelgaPRL12}. Our approach would allow one to investigate, for more complex systems and other types of dissipation,
whether non-Markovianity is beneficial or detrimental to the speed and overall success of entanglement generation.
\begin{acknowledgments}
We thank Dave Wineland for discussions and Yong Wan and Stephen
Erickson for helpful suggestions on the manuscript.
Financial support
from the Deutsche Forschungsgemeinschaft (Grant No. KO 2301/11-1)
is gratefully acknowledged.
Florentin Reiter acknowledges support by a Feodor-Lynen fellowship
from the Humboldt Foundation.
\end{acknowledgments}
\begin{thebibliography}{54}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{http://dx.doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Breuer}\ and\ \citenamefont
{Petruccione}(2002)}]{breuer2002theory}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-P.}\ \bibnamefont
{Breuer}}\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Petruccione}},\ }\href@noop {} {\emph {\bibinfo {title} {The theory of open
quantum systems}}}\ (\bibinfo {publisher} {Oxford University Press},\
\bibinfo {year} {2002})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Glaser}\ \emph {et~al.}(2015)\citenamefont {Glaser},
\citenamefont {Boscain}, \citenamefont {Calarco}, \citenamefont {Koch},
\citenamefont {K{\"o}ckenberger}, \citenamefont {Kosloff}, \citenamefont
{Kuprov}, \citenamefont {Luy}, \citenamefont {Schirmer}, \citenamefont
{Schulte-Herbr{\"u}ggen}, \citenamefont {Sugny},\ and\ \citenamefont
{Wilhelm}}]{glaser2015training}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont
{Glaser}}, \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Boscain}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Calarco}}, \bibinfo
{author} {\bibfnamefont {C.~P.}\ \bibnamefont {Koch}}, \bibinfo {author}
{\bibfnamefont {W.}~\bibnamefont {K{\"o}ckenberger}}, \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Kosloff}}, \bibinfo {author} {\bibfnamefont
{I.}~\bibnamefont {Kuprov}}, \bibinfo {author} {\bibfnamefont
{B.}~\bibnamefont {Luy}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Schirmer}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Schulte-Herbr{\"u}ggen}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Sugny}}, \ and\ \bibinfo {author} {\bibfnamefont {F.~K.}\ \bibnamefont
{Wilhelm}},\ }\href {http://dx.doi.org/10.1140/epjd/e2015-60464-1} {\bibfield
{journal} {\bibinfo {journal} {Eur. Phys. J. D}\ }\textbf {\bibinfo
{volume} {69}},\ \bibinfo {pages} {279} (\bibinfo {year} {2015})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Caneva}\ \emph {et~al.}(2009)\citenamefont {Caneva},
\citenamefont {Murphy}, \citenamefont {Calarco}, \citenamefont {Fazio},
\citenamefont {Montangero}, \citenamefont {Giovannetti},\ and\ \citenamefont
{Santoro}}]{caneva2009optimal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Caneva}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Murphy}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Calarco}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Fazio}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Montangero}}, \bibinfo {author}
{\bibfnamefont {V.}~\bibnamefont {Giovannetti}}, \ and\ \bibinfo {author}
{\bibfnamefont {G.~E.}\ \bibnamefont {Santoro}},\ }\href {\doibase
10.1103/PhysRevLett.103.240501} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {103}},\ \bibinfo {pages}
{240501} (\bibinfo {year} {2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Goerz}\ \emph {et~al.}(2011)\citenamefont {Goerz},
\citenamefont {Calarco},\ and\ \citenamefont {Koch}}]{goerz2011quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont
{Goerz}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Calarco}}, \
and\ \bibinfo {author} {\bibfnamefont {C.~P.}\ \bibnamefont {Koch}},\ }\href
{http://stacks.iop.org/0953-4075/44/i=15/a=154011} {\bibfield {journal}
{\bibinfo {journal} {J. Phys. B}\ }\textbf {\bibinfo {volume} {44}},\
\bibinfo {pages} {154011} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Patsch}\ \emph {et~al.}(2018)\citenamefont {Patsch},
\citenamefont {Reich}, \citenamefont {Raimond}, \citenamefont {Brune},
\citenamefont {Gleyzes},\ and\ \citenamefont {Koch}}]{patsch2018fast}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Patsch}}, \bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont {Reich}},
\bibinfo {author} {\bibfnamefont {J.-M.}\ \bibnamefont {Raimond}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Brune}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Gleyzes}}, \ and\ \bibinfo {author}
{\bibfnamefont {C.~P.}\ \bibnamefont {Koch}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo
{volume} {97}},\ \bibinfo {pages} {053418} (\bibinfo {year}
{2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {S{\o}rensen}\ \emph {et~al.}(2016)\citenamefont
{S{\o}rensen}, \citenamefont {Pedersen}, \citenamefont {Munch}, \citenamefont
{Haikka}, \citenamefont {Jensen}, \citenamefont {Planke}, \citenamefont
{Andreasen}, \citenamefont {Gajdacz}, \citenamefont {M{\o}lmer},
\citenamefont {Lieberoth},\ and\ \citenamefont
{Sherson}}]{sorensen2016exploring}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~J.~W.}\
\bibnamefont {S{\o}rensen}}, \bibinfo {author} {\bibfnamefont {M.~K.}\
\bibnamefont {Pedersen}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Munch}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Haikka}},
\bibinfo {author} {\bibfnamefont {J.~H.}\ \bibnamefont {Jensen}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Planke}}, \bibinfo {author}
{\bibfnamefont {M.~G.}\ \bibnamefont {Andreasen}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Gajdacz}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {M{\o}lmer}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Lieberoth}}, \ and\ \bibinfo {author} {\bibfnamefont
{J.~F.}\ \bibnamefont {Sherson}},\ }\href
{https://www.nature.com/articles/nature17620} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {532}},\ \bibinfo {pages}
{210} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Poyatos}\ \emph {et~al.}(1996)\citenamefont
{Poyatos}, \citenamefont {Cirac},\ and\ \citenamefont
{Zoller}}]{poyatos1996quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont
{Poyatos}}, \bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont {Cirac}},
\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zoller}},\ }\href
{https://link.aps.org/doi/10.1103/PhysRevLett.77.4728} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {77}},\
\bibinfo {pages} {4728} (\bibinfo {year} {1996})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kraus}\ \emph
{et~al.}(2008{\natexlab{a}})\citenamefont {Kraus}, \citenamefont {B\"uchler},
\citenamefont {Diehl}, \citenamefont {Kantian}, \citenamefont {Micheli},\
and\ \citenamefont {Zoller}}]{kraus2008preparation}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Kraus}}, \bibinfo {author} {\bibfnamefont {H.~P.}\ \bibnamefont
{B\"uchler}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Diehl}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kantian}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Micheli}}, \ and\ \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Zoller}},\ }\href {\doibase
10.1103/PhysRevA.78.042307} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo {pages} {042307}
(\bibinfo {year} {2008}{\natexlab{a}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Verstraete}\ \emph {et~al.}(2009)\citenamefont
{Verstraete}, \citenamefont {Wolf},\ and\ \citenamefont
{Cirac}}]{verstraete2009quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Verstraete}}, \bibinfo {author} {\bibfnamefont {M.~M.}\ \bibnamefont
{Wolf}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont
{Cirac}},\ }\href {http://dx.doi.org/10.1038/nphys1342} {\bibfield {journal}
{\bibinfo {journal} {Nat. Phys.}\ }\textbf {\bibinfo {volume} {5}},\
\bibinfo {pages} {633} (\bibinfo {year} {2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Plenio}\ \emph {et~al.}(1999)\citenamefont {Plenio},
\citenamefont {Huelga}, \citenamefont {Beige},\ and\ \citenamefont
{Knight}}]{plenio1999cavity}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont
{Plenio}}, \bibinfo {author} {\bibfnamefont {S.~F.}\ \bibnamefont {Huelga}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Beige}}, \ and\ \bibinfo
{author} {\bibfnamefont {P.~L.}\ \bibnamefont {Knight}},\ }\href {\doibase
10.1103/PhysRevA.59.2468} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {59}},\ \bibinfo {pages} {2468}
(\bibinfo {year} {1999})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Benatti}\ \emph {et~al.}(2003)\citenamefont
{Benatti}, \citenamefont {Floreanini},\ and\ \citenamefont
{Piani}}]{benatti2003environment}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Benatti}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Floreanini}},
\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Piani}},\ }\href
{\doibase 10.1103/PhysRevLett.91.070402} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo
{pages} {070402} (\bibinfo {year} {2003})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Vacanti}\ and\ \citenamefont
{Beige}(2009)}]{vacanti2009cooling}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Vacanti}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Beige}},\ }\href {http://stacks.iop.org/1367-2630/11/i=8/a=083008}
{\bibfield {journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf
{\bibinfo {volume} {11}},\ \bibinfo {pages} {083008} (\bibinfo {year}
{2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wolf}\ \emph {et~al.}(2011)\citenamefont {Wolf},
\citenamefont {Chiara}, \citenamefont {Kajari}, \citenamefont {Lutz},\ and\
\citenamefont {Morigi}}]{wolf2011entangling}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Wolf}}, \bibinfo {author} {\bibfnamefont {G.~D.}\ \bibnamefont {Chiara}},
\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Kajari}}, \bibinfo
{author} {\bibfnamefont {E.}~\bibnamefont {Lutz}}, \ and\ \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Morigi}},\ }\href
{http://stacks.iop.org/0295-5075/95/i=6/a=60008} {\bibfield {journal}
{\bibinfo {journal} {Europhys. Lett.}\ }\textbf {\bibinfo {volume} {95}},\
\bibinfo {pages} {60008} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Gonzalez-Tudela}\ \emph {et~al.}(2011)\citenamefont
{Gonzalez-Tudela}, \citenamefont {Martin-Cano}, \citenamefont {Moreno},
\citenamefont {Martin-Moreno}, \citenamefont {Tejedor},\ and\ \citenamefont
{Garcia-Vidal}}]{gonzalez2011entanglement}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Gonzalez-Tudela}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Martin-Cano}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Moreno}},
\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Martin-Moreno}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Tejedor}}, \ and\ \bibinfo
{author} {\bibfnamefont {F.~J.}\ \bibnamefont {Garcia-Vidal}},\ }\href
{\doibase 10.1103/PhysRevLett.106.020501} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {106}},\ \bibinfo
{pages} {020501} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Cho}\ \emph {et~al.}(2011)\citenamefont {Cho},
\citenamefont {Bose},\ and\ \citenamefont {Kim}}]{cho2011optical}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Cho}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Bose}}, \ and\
\bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Kim}},\ }\href
{\doibase 10.1103/PhysRevLett.106.020504} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {106}},\ \bibinfo
{pages} {020504} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kastoryano}\ \emph {et~al.}(2011)\citenamefont
{Kastoryano}, \citenamefont {Reiter},\ and\ \citenamefont
{S\o{}rensen}}]{kastoryano2011dissipative}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont
{Kastoryano}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Reiter}},
\ and\ \bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont
{S\o{}rensen}},\ }\href {\doibase 10.1103/PhysRevLett.106.090502} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {106}},\ \bibinfo {pages} {090502} (\bibinfo {year}
{2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Rao~Bhaktavatsala}\ and\ \citenamefont
{M\o{}lmer}(2013)}]{bhaktavatsala2013dark}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~D.}\ \bibnamefont
{Rao~Bhaktavatsala}}\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{M\o{}lmer}},\ }\href {\doibase 10.1103/PhysRevLett.111.033606} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {111}},\ \bibinfo {pages} {033606} (\bibinfo {year}
{2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Habibian}\ \emph {et~al.}(2014)\citenamefont
{Habibian}, \citenamefont {Zippilli}, \citenamefont {Illuminati},\ and\
\citenamefont {Morigi}}]{habibian2014stationary}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Habibian}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Zippilli}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Illuminati}}, \ and\
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Morigi}},\ }\href
{\doibase 10.1103/PhysRevA.89.023832} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo
{pages} {023832} (\bibinfo {year} {2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Taketani}\ \emph {et~al.}(2014)\citenamefont
{Taketani}, \citenamefont {Fogarty}, \citenamefont {Kajari}, \citenamefont
{Busch},\ and\ \citenamefont {Morigi}}]{fogarty2014quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~G.}\ \bibnamefont
{Taketani}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Fogarty}},
\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Kajari}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Busch}}, \ and\ \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Morigi}},\ }\href {\doibase
10.1103/PhysRevA.90.012312} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {012312}
(\bibinfo {year} {2014})}\BibitemShut {NoStop}
\end{thebibliography}
\end{document}
\begin{document}
\title{On the Division Problem for the Wave Maps Equation}
\author{Timothy Candy}
\address{Universit\"at Bielefeld, Fakult\"at f\"ur Mathematik, Postfach 10 01 31, 33501 Bielefeld, Germany
}
\email{[email protected]}
\author{Sebastian Herr}
\address{Universit\"at Bielefeld, Fakult\"at f\"ur Mathematik, Postfach 10 01 31, 33501 Bielefeld, Germany}
\email{[email protected]}
\subjclass{35L15,35L52}
\keywords{wave maps, division problem, bilinear Fourier restriction, atomic spaces}
\begin{abstract}
We consider Wave Maps into the sphere and give a new proof of small data global well-posedness and scattering in the critical Besov space, in any space dimension $n \geqslant 2$. We use an adapted version of the atomic space $U^2$ as the single building block for the iteration space. Our approach to the so-called division problem is modular as it systematically uses two ingredients: atomic bilinear (adjoint) Fourier restriction estimates and an algebra property of the iteration space, both of which can be adapted to other phase functions.
\end{abstract}
\maketitle
\section{Introduction}\label{sec:intro}
Let $(\mathbb{R}^{1+n},\eta)$ be the Minkowski space-time with metric $(\eta_{\alpha \beta})=\diag(-1,1,\ldots,1)$ and let $M$ be a smooth manifold with Riemannian metric $g$. Formally, a wave map is a map $\phi:\mathbb{R}^{1+n}\to M$ which is a critical point of the
Lagrangian
\[
\mathcal{L}(\phi)=\frac{1}{2}\int_{\mathbb{R}^{1+n}} \langle \partial^\alpha \phi,\partial_\alpha \phi \rangle_{g(\phi)} \, dt dx .
\]
Space-time coordinates are denoted by $(t,x)$, we use the standard summation convention, raise indices according to $\partial^\alpha=\eta^{\alpha\beta}\partial_\beta$, and write $\Box=-\partial^\alpha\partial_\alpha=\partial_t^2-\Delta$ for the d'Alembertian.
In the extrinsic formulation, assuming that $M$ is a submanifold of some Euclidean space $\mathbb{R}^m$, a wave map is a solution $\phi:\mathbb{R}^{1+n}\to M\subset \mathbb{R}^m$ to
\begin{equation}\label{eq:wm-extr}
\Box \phi = -S(\phi)(\partial^\alpha \phi,\partial_\alpha\phi),
\end{equation}
where $S(p):T_pM \times T_pM\to (T_pM)^\perp$ is the second fundamental form at $p\in M$. For the purposes of this paper, the important point is that the Wave Maps equation \eqref{eq:wm-extr} takes the form of a nonlinear wave equation with null structure, more specifically
\begin{equation}\label{eq:wm-sphere}
\Box \phi = \phi (|\nabla \phi|^2-|\partial_t \phi|^2)
\end{equation}
in the case of the target manifold $M=\mathbb{S}^2\subset \mathbb{R}^3$. We remark that for classical (smooth) solutions to the equation \eqref{eq:wm-sphere} one can drop the target constraint: if the initial data $(\phi(0),\partial_t \phi (0)):\mathbb{R}^n \to \mathbb{R}^3$ satisfy $|\phi(0)|=1$ and $\partial_t \phi (0)\cdot \phi (0)=0$, one can prove $|\phi(t)|=1$ for all $t$.
Solutions can be rescaled according to $\phi(t,x)\to \phi(\lambda t,\lambda x)$. Therefore $\dot{H}^{\frac{n}{2}}(\mathbb{R}^n)$ is the critical Sobolev regularity for global well-posedness, which barely fails to control the $L^\infty$-norm. Wave Maps conserve the energy
\[
E(\phi)=\frac{1}{2}\int_{\mathbb{R}^{n}} |\partial_t \phi |^2+|\nabla \phi |^2 \, dx,
\]
therefore the space dimension $n=2$ is the energy-critical dimension.
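To make the two criticality statements explicit, here is the standard scaling computation (added as a short check for the reader, not part of the original argument): writing $\phi_\lambda(t,x)=\phi(\lambda t,\lambda x)$, one has
\[
\|\phi_\lambda(0)\|_{\dot{H}^{s}(\mathbb{R}^n)} = \lambda^{s-\frac{n}{2}}\|\phi(0)\|_{\dot{H}^{s}(\mathbb{R}^n)} \qquad \text{and} \qquad E(\phi_\lambda)=\lambda^{2-n}E(\phi),
\]
so the exponent $s=\frac{n}{2}$ is the scale-invariant Sobolev regularity, and the energy is scale-invariant precisely when $n=2$.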
It turned out that, even in the case of small initial data, the Cauchy problem is challenging to solve in the critical Sobolev space, in particular in low space dimensions $n=2,3$. For instance, the problem cannot be solved iteratively in Fourier restriction norms only \cite{Klainerman2002}. In the (smaller) critical Besov space $\dot{B}^{\frac{n}{2}}_{2,1}(\mathbb{R}^n)$ the breakthrough global well-posedness and scattering result in dimension $n=2,3$ was obtained by Tataru \cite{Tataru2001}. The small data problem in the critical Sobolev space is more subtle. The example of Nirenberg \cite[p. 45]{Klainerman1980} shows that the scalar model problem $\Box u = \partial^\alpha u \,\partial_\alpha u$ is ill-posed in $\dot{H}^{\frac{n}{2}}(\mathbb{R}^n)$. In the critical Sobolev space the Wave Maps problem exhibits a quasilinear behaviour and a renormalization is necessary. In the case $M=\mathbb{S}^2$ this was solved by Tao \cite{Tao2001}, later by Krieger \cite{Krieger2004} for the hyperbolic plane target, and for more general targets by Klainerman-Rodnianski \cite{Klainerman2001} and Tataru \cite{Tataru2005}.
Building on the small data results, Sterbenz-Tataru \cite{Sterbenz2010a,Sterbenz2010b} and Krieger-Schlag \cite{Krieger2012} solved the global regularity problem in the energy-critical dimension $n=2$ for initial data below the threshold given by nontrivial harmonic maps of lowest energy (if any).
We refer the reader to \cite{Shatah1998,Tao2006,Tataru2004,Koch2014,Geba2017} for more comprehensive introductions to various aspects of the theory of Wave Maps and further references.
In this paper, we revisit the problem of iteratively solving the Cauchy problem associated with \eqref{eq:wm-sphere} in the critical Besov space for small data, which was solved first in \cite{Tataru2001}.
This problem is also known as the \emph{division problem}. Citing \cite[p. 195]{Tataru2004} (see also \cite{Krieger2003}), the name stems ``from the fact that in
Fourier space the parametrix $\Box^{-1}$ for the wave equation is essentially the division by'' the symbol $|\xi|^2-\tau^2$ of the d'Alembertian, which fails to be locally integrable. In essence, it consists in constructing a function space $S^{\frac{n}{2}}$ with the properties
\begin{equation}\label{eq:mapping}
S^{\frac{n}{2}}\cdot S^{\frac{n}{2}} \to S^{\frac{n}{2}} \text{ and } \Box^{-1} (S^{\frac{n}{2}} \cdot \Box S^{\frac{n}{2}}) \to S^{\frac{n}{2}},
\end{equation}
see Theorem \ref{thm:div-prob} below for a precise statement. The division problem arises on the level of the Littlewood-Paley pieces and we do not address the \emph{summation problem}, which is the second ingredient for a proof of global well-posedness in the critical Sobolev space and was solved first in \cite{Tao2001}. We emphasize that a solution of the division problem is crucial for all later developments on Wave Maps mentioned above and the original construction \cite{Tataru2001} has been successfully used, adapted and refined in related problems, such as \cite{Bournaveas2016,Bejenaru2015,Bejenaru2016,Krieger2017,Oh2016}, among others.
Further, the division problem is universal in the sense that it crucially arises
in many other nonlinear dispersive evolution equations at the critical regularity or for global-in-time problems,
such as for Schr\"odinger maps \cite{Bejenaru2011}.
One of the key difficulties in the solution of the division problem originates in the fact that, even for solutions $u$ on the unit frequency scale, it is impossible to obtain global-in-time control of $\Box u$ in $L^1_tL^2_x$. Instead, in \cite{Tataru2001} Tataru introduces characteristic (or null) coordinate frames $(t_\Theta,x_\Theta)$, for unit vectors $\Theta$ on the cone, and $t_{\Theta}=\Theta \cdot (t,x)$. If $u$ is Fourier localized in a transversal direction, it is possible to control $\Box u$ in $L^1_{t_\Theta} L^2_{x_\Theta}$. Then, by an involved construction of an atomic function space in addition to the standard Fourier restriction space he succeeds in proving the required estimates alluded to above, which rest on certain bilinear estimates in $L^2_{t,x}$. Due to its complexity we do not describe the solution to the division problem of \cite{Tataru2001} in more detail here but refer to \cite{Tataru2001,Tao2001,Krieger2004} instead. In a first version of \cite{Tataru2001} Tataru took a different route, based on the space of functions of bounded $p$-variation $V^p$ \cite{Wiener1924} and its predual $U^q$ \cite{Pisier1987}. In this construction, the space for solutions is an atomic space, where the atoms are normalized step-functions and each step solves the homogeneous wave equation. However, as pointed out by Nakanishi, there was a serious problem in the proof of the crucial bilinear $L^2_{t,x}$ estimates. Tataru therefore abandoned the approach via $U^p$ and $V^p$ and developed the null frame spaces instead. The null frame space construction is custom-made for the application to the Wave Maps problem, for which it proved very successful, but the functional analysis is delicate and adaptations to closely related problems require new ideas \cite{Bejenaru2011,Bejenaru2016}.
Around the same time as \cite{Tataru2001} there have been significant advances on the Fourier restriction problem for the cone.
The key fact is that by passing to the bilinear setting it is possible to use both curvature and transversality properties of the cone.
Indeed, in dimension $n\geqslant 2$, Wolff \cite{Wolff2001} proved that for every $p>p_n:=\frac{n+3}{n+1}$,
the bilinear (adjoint) Fourier restriction estimate
$$ \big\| e^{ - it |\nabla|} f \, e^{ - i t |\nabla|} g \big\|_{L^p_{t, x}(\mathbb{R}^{1+n})} \lesssim \| f \|_{L^2_x} \| g \|_{L^2_x},$$
holds true, provided that the Fourier-supports of $f$ and $g$ are angularly
separated and contained in the unit annulus. Shortly after, Tao \cite{Tao2001b} proved this estimate in the endpoint case $p=p_n$.
In the present paper, we prove Tataru's original conjecture to be true: it is possible to use $U^2$ as the only building block in the solution of the division problem by using recent advances on the bilinear (adjoint) Fourier restriction estimates as the key new ingredient.
Specifically, for $u:\mathbb{R}^{1+n}\to \mathbb{R}^3$, we consider the model problem
\begin{equation}\label{eqn:wm model}
\begin{split}
\Box u &= u (|\nabla u|^2-|\partial_t u|^2)\\
(u, \partial_t u)(0) &= (f,g)
\end{split}
\end{equation}
for small initial data in $(f,g) \in \dot{B}^\frac{n}{2}_{2,1}(\mathbb{R}^n)\times \dot{B}^{\frac{n}{2}}_{2,1}(\mathbb{R}^n)$, for $n \geqslant 2$, and provide a new proof of global well-posedness and scattering.
While we present our approach with a focus on this specific problem, we emphasize that it is modular. It is straightforward to adapt the function space $U^2$ to any linear propagator. By the standard resonance analysis of \eqref{eqn:wm model}, the mapping properties
\eqref{eq:mapping} can be reduced to two independent building blocks, both of which are new. Firstly, if one of the factors has Fourier support far from the characteristic set, then the required estimates are consequences of bilinear Fourier multiplier versions of the following toy estimates:
\[
\|fg\|_{V^2}\lesssim\|f\|_{L^{\infty}_{t,x}} \|g\|_{V^2}, \qquad \|fg\|_{U^2}\lesssim\|f\|_{L^{\infty}_{t,x}} \|g\|_{U^2}
\]
for $g$ with high temporal frequency, see Subsection \ref{subsec:alg}. Secondly, if all factors have Fourier support close to the characteristic set, we exploit atomic versions of bilinear (adjoint) Fourier restriction estimates, which are available for general phases under transversality and curvature conditions, see Section \ref{sec:adapted bilinear restriction} and \cite{Candy2017b}. To put things into perspective, let us mention that since the spaces $U^p$ and $V^p$ have been introduced in the PDE context in \cite{Koch2005}, the theory of these spaces has been developed in \cite{Hadac2009,Koch2014,Koch2016}, among others. In parallel, since the seminal works of Wolff \cite{Wolff2001} and Tao \cite{Tao2001b}, there have been advances in the theory of bilinear Fourier restriction estimates by Bejenaru \cite{Bejenaru2017}, Lee-Vargas \cite{Lee2010}, the first named author \cite{Candy2017b}, among others. Here, we are able to connect these two lines of research and provide a systematic and modular solution of the division problem.
The paper is organized as follows. After introducing some notation, we describe the solution
to the division problem in Section \ref{sec:sol-div-prob}, see Theorem \ref{thm:div-prob}. Also, we introduce the function spaces and provide a proof of small data global well-posedness and scattering for the Wave Maps equation. In Section \ref{sec:multi}
we establish properties of the iteration space, prove product estimates in far cone regions
and the bilinear $L^2$ estimates, and give a proof of Theorem \ref{thm:div-prob}. Section \ref{sec:Up and Vp} is devoted to the basic properties of the critical function spaces $U^p$ and $V^p$, such as embedding properties, almost orthogonality and duality statements. In Section \ref{sec:characterisation} we provide characterisations of $U^p$ which are crucial for applications to PDE. In Section \ref{sec:bound} we establish results concerning convolution and multiplication in $U^p$ and $V^p$ and introduce the adapted function spaces.
Finally, in Section \ref{sec:adapted bilinear restriction} we prove the bilinear restriction estimate in adapted function spaces which is used in Section \ref{sec:multi}.
\subsection{Notation}\label{subsec:not}
Let $\mathcal{S}(\mathbb{R}^n)$ be the Schwartz space and $\mathcal{S}'(\mathbb{R}^n)$ the space of tempered distributions. Given a function $f \in L^2(\mathbb{R}^n)$, we let $\widehat{f}(\xi)$ denote the Fourier transform, and if $u \in L^2_{t,x}(\mathbb{R}^{1+n})$, we let $\widetilde{u}(\tau, \xi)$ denote the space-time Fourier transform.
Let $P_\lambda$ denote a smooth (spatial) cutoff to the Fourier region $|\xi| \approx \lambda$. Similarly, we take $P^{(t)}_d$ to denote the (temporal) Fourier projection to the set $|\tau| \approx d$. The Fourier multipliers $P_{\leqslant \lambda}$ and $P^{(t)}_{\leqslant d}$ are defined similarly to restrict to spatial frequencies $|\xi|\lesssim \lambda$, and temporal frequencies $|\tau| \lesssim d$ respectively. We often use the shorthand $P_\lambda u = u_\lambda$. Let $C_d$ and $C^\pm_d$ restrict to the space-time Fourier regions $\big||\tau| - |\xi| \big| \approx d$ and $\big| \tau \pm |\xi| \big|\approx d$ respectively. Note that we may write $C^\pm_d = e^{\mp i |\nabla| t} P^{(t)}_d e^{\pm i |\nabla| t}$.
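For orientation, the last identity can be checked in two lines (a sketch added here; $\beta_d$ denotes a generic bump function adapted to the region $|\tau|\approx d$): since $\widetilde{e^{\pm i t|\nabla|} u}(\tau,\xi)=\widetilde{u}(\tau \mp |\xi|,\xi)$, we have
\[
\widetilde{\big(e^{\mp i |\nabla| t} P^{(t)}_d e^{\pm i |\nabla| t} u\big)}(\tau,\xi) = \beta_d(\tau \pm |\xi|)\,\widetilde{u}(\tau,\xi),
\]
which is a space-time Fourier multiplier supported in the region $|\tau \pm |\xi||\approx d$.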
Define $\mc{C}_\alpha$ to be a finitely overlapping collection of caps $\kappa \subset \mathbb{S}^{n-1}$ of radius $\alpha$ which cover the sphere, and take $\angle(\xi,\eta)$ to denote the angle between $\xi, \eta \in \mathbb{R}^n\setminus \{0\}$. Let $\mc{Q}_\mu$ be a collection of finitely overlapping cubes of diameter $\mu$ which form a cover of $\mathbb{R}^n$. We denote the corresponding Fourier cutoffs to caps $\kappa \in \mc{C}_\alpha$ and cubes $q \in \mc{Q}_\mu$ as $R_\kappa$ and $P_q$ respectively.
For $s\in \mathbb{R}$, $1\leq p,q \leq \infty$ and $f\in \mathcal{S}'(\mathbb{R}^n)$, let
\[
\|f\|_{\dot{B}^{s}_{p,q}}= \Big(\sum_{\lambda \in 2^\mathbb{Z}}\lambda^{sq} \|f_\lambda \|_{L^p(\mathbb{R}^n)}^q \Big)^{\frac{1}{q}},
\]
with the obvious modification for $q=\infty$. Further, let $\dot{B}^{s}_{p,q}$
denote the space of all $f\in \mathcal{S}'(\mathbb{R}^n)$ satisfying \[f=\sum_{\lambda \in 2^\mathbb{Z}} f_\lambda \text{ in }\mathcal{S}'(\mathbb{R}^n) \text{ and }\|f\|_{\dot{B}^{s}_{p,q}}<+\infty.\] It is well-known (see \cite{Bahouri2011}) that, if either $s<n/p$ or $s=n/p$ and $q=1$, then $\dot{B}^{s}_{p,q}$ is a Banach space. We have the continuous embedding $\dot{B}^{\frac{n}{2}}_{2,1}\subset C_0 (\mathbb{R}^n)$.
\section{A solution to the division problem}\label{sec:sol-div-prob}
\subsection{Definition of the spaces $U^p$ and $V^p$}\label{subsec:upvp}
Let \[\mb{P}= \{ \tau=(t_j)_{j=1}^{N} \mid N \in \mathbb{N}, \,\, t_j \in \mathbb{R}, \,\, t_j< t_{j+1} \}\] be the set of partitions, i.e.\ finite increasing sequences. For a partition $\tau=(t_j)_{j=1, \ldots, N}$, let \[\mc{I}_\tau=\{[t_1,t_{2}), \ldots, [t_{N-1},t_{N}), [t_N,\infty)\},\]
i.e.\ left-closed disjoint intervals associated with $\tau$. Let $1\leq p<\infty$. We say that $u$ is a \emph{$U^p$-atom} if there exists $\tau \in \mb{P}$ such that $u(t) = \sum_{I \in \mc{I}_\tau} \mathbbold{1}_{I}(t) f_I$ is a step function satisfying
$$ \Big( \sum_{I \in \mc{I}_\tau} \| f_I \|_{L^2}^p \Big)^{\frac{1}{p}} = 1.$$
The atomic space $U^p$ is then defined to be
$$ U^p = \Big\{ \sum_{j\in \mathbb{N}} c_j u_j \, \Big| \,\, (c_j) \in \ell^1, \text{ $u_j$ is a $U^p$ atom } \Big\},$$
with the induced norm
$$ \| u \|_{U^p} = \inf_{ u = \sum_{j\in \mathbb{N}} c_j u_j } \sum_{j\in \mathbb{N}} |c_j|.$$
Functions $u\in U^p$ are bounded, have one-sided limits everywhere and are right-continuous with $\lim_{t\to -\infty}u(t)=0$ in $L^2(\mathbb{R}^n)$.
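As a simple illustration of the definitions (an added remark, not part of the original argument): for $f\in L^2(\mathbb{R}^n)$, $f\neq 0$, the single-step function $u(t)=\mathbbold{1}_{[0,\infty)}(t) f$ is $\|f\|_{L^2}$ times a $U^p$-atom associated with the partition $\tau=(0)$, hence
\[
\big\| \mathbbold{1}_{[0,\infty)} f \big\|_{U^p} \leqslant \|f\|_{L^2},
\]
and in fact equality holds, since $\|u\|_{L^\infty_t L^2_x}\leqslant \|u\|_{U^p}$ for every $u \in U^p$.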
Closely related to the atomic spaces $U^p$ are the $V^p$ spaces of finite $p$-variation. Given a function $v: \mathbb{R} \rightarrow L^2(\mathbb{R}^n)$, we define
$$ |v |_{V^p} = \sup_{(t_j)_{j=1}^{N} \in \mb{P}} \Big( \sum_{j=1}^{N-1}\| v(t_{j+1}) - v(t_j) \|_{L^2}^p \Big)^\frac{1}{p}$$
and $\| v \|_{V^p} =\|v\|_{L^\infty_t L^2_x}+|v |_{V^p}$. If $ |v |_{V^p} < \infty$, then $v$ has one-sided limits everywhere including at $\pm \infty$. Let $V^p$ be the space of all right-continuous functions $v$ such that $| v |_{V^p} < \infty$ and $\lim_{t\to-\infty} v(t)=0$ in $L^2(\mathbb{R}^n)$. Then, $\| v \|_{V^p}\leq 2|v|_{V^p}$ for all $v\in V^p$.
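Continuing the illustrative example above, for $v=\mathbbold{1}_{[0,\infty)} f$ the only nonzero increments $v(t_{j+1})-v(t_j)$ come from partition points with $t_j<0\leqslant t_{j+1}$, so that
\[
|v|_{V^p} = \|f\|_{L^2} \qquad \text{and} \qquad \|v\|_{V^p} = 2\|f\|_{L^2},
\]
consistent with the bound $\|v\|_{V^p}\leq 2|v|_{V^p}$.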
The spaces $V^p$ were introduced by Wiener \cite{Wiener1924} while a discrete version of the atomic $U^p$ spaces appeared in work of Pisier-Xu \cite{Pisier1987}. In the context of PDE, the spaces $U^p$ and $V^p$ were used in unpublished work of Tataru as a replacement for the endpoint $X^{s,\frac12}$ type spaces, and developed in more detail by Koch-Tataru \cite{Koch2005}. We leave a more complete discussion of the spaces $U^p$ and $V^p$ till Section \ref{sec:Up and Vp} below, further properties can also be found in \cite{Hadac2009, Koch2005, Koch2016}.
\subsection{The solution space}\label{subsec:s}
The solution space $S^{\frac{n}{2}}$ for the Wave Maps equation is based on an adapted version of the atomic space $U^2$. More precisely, we define $U^2_\pm = e^{\mp i t |\nabla| } U^2$ with the obvious norm $$ \| u \|_{U^2_\pm} = \| e^{\pm i t|\nabla|} u \|_{U^2}.$$
Elements of $U^2_\pm$ should be thought of as being close to solutions to the linear half-wave equation. In fact, atoms in $U^2_\pm$ are piecewise solutions to $(-i \partial_t \pm |\nabla|) u = 0$.
Let $S$ be the collection of all $u \in C_b( \mathbb{R}; L^2_x)$ with $|\nabla|^{-1} \partial_t u \in C_b(\mathbb{R}; L^2_x)$ such that $ u \pm i |\nabla|^{-1} \partial_t u \in U^2_\pm$ with the norm
$$ \| u \|_S = \big\| u + i |\nabla|^{-1} \partial_t u \big\|_{U^2_+} + \big\| u - i |\nabla|^{-1} \partial_t u \big\|_{U^2_-}.$$
The space $S$ will contain the frequency localised pieces of the wave map $u$. To define the full solution space $S^{\frac{n}{2}}$ we take
$$S^\frac{n}{2} = \{ u \in C_b(\mathbb{R}, \dot{B}^\frac{n}{2}_{2,1}) \mid P_\lambda u \in S \}$$
with the norm
$$ \| u \|_{S^\frac{n}{2}} = \sum_{\lambda \in 2^\mathbb{Z}} \lambda^\frac{n}{2} \| P_\lambda u \|_S.$$
The space $S^\frac{n}{2}$ is a Banach space; this is an immediate consequence of the fact that the subspace of continuous functions in $U^2_\pm$ is closed. Since we may write
\begin{equation}\label{eq:u-pu}\begin{split} u &= \tfrac{1}{2}\big(u + i |\nabla|^{-1} \partial_t u\big) + \tfrac{1}{2}\big(u - i |\nabla|^{-1} \partial_t u\big),\\
\varphiquad \partial_t u &= \tfrac{1}{2i}|\nabla| \big(u + i |\nabla|^{-1} \partial_t u\big) - \tfrac{1}{2i}|\nabla|\big(u - i |\nabla|^{-1} \partial_t u\big)
\end{split}
\end{equation}
it is clear that we have bounds
\[\| u_\lambda \|_{L^\infty_t L^2_x} + \| \partial_t u_\lambda \|_{L^\infty_t \dot{H}^{-1}} \lesssim \| u_\lambda \|_{S}\]
and
\[\| u \|_{L^\infty_t \dot{B}^{\frac{n}{2}}_{2, 1}} + \| \partial_t u \|_{L^\infty_t \dot{B}^{\frac{n}{2}-1}_{2, 1}} \lesssim \| u \|_{S^\frac{n}{2}}.\]
Let
\[V(t)(f,g)= \cos(t|\nabla|) f + \frac{\sin(t|\nabla|)}{|\nabla|}g\]
be the propagator for the homogeneous wave equation. Further, let $\chi(t) \in C^\infty$ with $\chi(t) = 1$ for $t\geqslant 0$, and $\chi(t) = 0$ for $t<-1$.
\begin{lemma}\label{lem:lin-est}
For all $f_\lambda ,g_\lambda \in L^2$, we have $\chi(t|\nabla|)V(t)(f_\lambda,g_\lambda) \in S$ and
\begin{equation}\label{eq:lin-est}
\|\chi(t|\nabla|)V(t)(f_\lambda ,g_\lambda ) \|_{S}\lesssim \|f_\lambda\|_{L^2}+\lambda^{-1}\|g_\lambda\|_{L^2}.
\end{equation}
\end{lemma}
\begin{proof}
Let $v=V(t)(f_\lambda ,g_\lambda )$ and $w=\chi(t|\nabla|)V(t)(f_\lambda ,g_\lambda )$. Then,
\[
v\pm i |\nabla|^{-1}\partial_t v=e^{\mp i t |\nabla|} (f_\lambda \pm i |\nabla|^{-1} g_\lambda ),
\]
and therefore
\[
w\pm i |\nabla|^{-1}\partial_t w= \chi(t|\nabla|) e^{\mp i t |\nabla|} (f_\lambda \pm i |\nabla|^{-1} g_\lambda )\pm i \chi'(t|\nabla|)v.
\]
Due to \eqref{eq:w11} we have
\begin{align*}
\|\chi(t|\nabla|) e^{\mp i t |\nabla|} (f_\lambda \pm i |\nabla|^{-1} g_\lambda)\|_{U^2_\pm}={}&\|\chi(t|\nabla|) (f_\lambda\pm i |\nabla|^{-1} g_\lambda)\|_{U^2}\\
\lesssim{}& \|\chi'(t|\xi|)|\xi| (\widehat{f_\lambda}\pm i |\xi|^{-1}\widehat{g_\lambda})\|_{L^1_t L^2_\xi}\\
\lesssim{}&\|f_\lambda\|_{L^2}+\lambda^{-1}\|g_\lambda\|_{L^2},
\end{align*}
similarly,
\begin{align*}
\|\chi'(t|\nabla|) v_\lambda \|_{U^2_\pm}={}&\|\chi'(t|\nabla|) e^{\pm i t |\nabla|} v_\lambda\|_{U^2}\\
\lesssim{}& \big\|\partial_t \big(\chi'(t|\xi|) e^{\pm i t |\xi|}\widehat{v_\lambda}\big)\big\|_{L^1_t L^2_\xi}\\
\lesssim{}&\big\|\partial_t \big(\chi'(t|\xi|) e^{\pm i t |\xi|}\big(\cos(t|\xi|) \widehat{f_\lambda}(\xi) + \frac{\sin(t|\xi|)}{|\xi|}\widehat{g_\lambda}(\xi) \big)\big)\big\|_{L^1_t L^2_\xi}\\
\lesssim{}&\|f_\lambda\|_{L^2}+\lambda^{-1}\|g_\lambda\|_{L^2},
\end{align*}
and therefore $w\in S$ with the required bound.
\end{proof}
We prove that the space $S^{\frac{n}{2}}$ is a solution to the division problem.
More precisely, if we define
$$ \Box^{-1} F (t)= \mathbbold{1}_{[0,\infty)}(t) \int_0^t \frac{\sin( (t-s)|\nabla|)}{|\nabla|} F(s) \, ds $$
to be the solution to the wave equation $ \Box u = F$ on $[0, \infty)\times \mathbb{R}^n$ with vanishing data at $t=0$, then we have the following.
\begin{theorem}\label{thm:div-prob}
Let $\lambda_0, \lambda_1, \lambda_2 \in 2^\mathbb{Z}$. If $u_{\lambda_1}, v_{\lambda_2} \in S$, then $ P_{\lambda_0}(u_{\lambda_1} v_{\lambda_2}) \in S$ and
\begin{equation}\label{eqn:thm div-prob:S-alg} \lambda_0^\frac{n}{2} \| P_{\lambda_0} (u_{\lambda_1} v_{\lambda_2}) \|_{S} \lesssim (\lambda_1 \lambda_2)^\frac{n}{2} \| u \|_{S} \| v \|_{S}.
\end{equation}
Moreover, if in addition we have $ \Box v_{\lambda_2} \in L^1_{t,loc} L^2_x$, then $ \Box^{-1}P_{\lambda_0}( u_{\lambda_1} \Box v_{\lambda_2}) \in S$ and
\begin{equation}\label{eqn:thm div-prob:S-nonlin} \lambda_0^\frac{n}{2} \| \Box^{-1}P_{\lambda_0}( u_{\lambda_1} \Box v_{\lambda_2}) \|_{S} \lesssim (\lambda_1 \lambda_2)^\frac{n}{2} \| u\|_{S} \| v\|_{S}.
\end{equation}
\end{theorem}
\begin{remark}\label{rmk:summ-prob}
The estimates can be easily summed up. However,
a summation problem arises when one aims at solving the Wave Maps equation in the critical Sobolev space $\dot{H}^{\frac{n}{2}}\times \dot{H}^{\frac{n}{2}-1}$. The $\ell^1$ summation over frequencies has to be replaced with an $\ell^2$ sum, which would require obtaining the stronger bounds and a renormalization. Such estimates are known in the null frame based solution space, see \cite[equations (21), (27)]{Tao2001} and \cite[equation (4.3)]{Tataru2005} (the proof can be found directly following (129) in \cite{Tao2001}). We do not pursue this issue here.
\end{remark}
\subsection{Small data GWP for the Wave Maps equation}\label{subsec:gwp}
Let \[Q_0(u,v)=-\partial^\alpha u \cdot \partial_\alpha v=\partial_t u \cdot \partial_t v-\partial_{x_j} u \cdot \partial_{x_j} v.\]
It is well-known that this is a null form, i.e. it satisfies
the identity
\begin{equation}
\label{eq:nf}
2Q_0(u,v) = \Box(uv) - \Box u \, v - u \Box v.
\end{equation}
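For completeness, the identity \eqref{eq:nf} follows from the Leibniz rule (a one-line check included as an aid to the reader): with $\Box=-\partial^\alpha\partial_\alpha$,
\[
\Box(uv) = -\partial^\alpha\partial_\alpha (uv) = \Box u \, v + u \Box v - 2\partial^\alpha u \,\partial_\alpha v = \Box u \, v + u \Box v + 2 Q_0(u,v).
\]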
Then, \eqref{eq:wm-sphere} can be written as
\[
\Box u =uQ_0(u,u).
\]
Theorem \ref{thm:div-prob} together with \eqref{eq:nf}
and a standard fixed point argument can be used to construct a solution to the Wave Maps equation \eqref{eq:wm-sphere} in the space $S^{\frac{n}{2}}$ provided that the initial data $(f,g)$ are sufficiently small in $\dot{B}^{\frac{n}{2}}_{2,1}(\mathbb{R}^n)\times \dot{B}^{\frac{n}{2}-1}_{2,1}(\mathbb{R}^n)$. In more detail, set
$$S_0^{\frac{n}{2}} = \big\{ u \in S^\frac{n}{2} \,\big| \,\forall \lambda\in 2^{\mathbb{Z}} : \; \Box u_\lambda \in L^1_{t,loc}L^2_x(\mathbb{R}^{1+n}) \, \big\}.$$
The purpose of this subset of $S^{\frac{n}{2}}$ is to ensure that all nonlinear expressions are a priori well-defined.
Define the map $\mc{T}:S_0^\frac{n}{2} \to S_0^{\frac{n}{2}}$ by
$$ \mc{T}[u](t) = \chi(t|\nabla|) V(t)(f,g) + \Box^{-1}\big( u Q_0(u,u) \big).$$
Theorem \ref{thm:div-prob} and summation imply that
\begin{align*}
&\| \Box^{-1}\big( u Q_0(v,w) \big)\|_{S^{\frac{n}{2}}}\\
\leq{}& \| \Box^{-1}\big(u \Box(vw)\big)\|_{S^{\frac{n}{2}}}+\| \Box^{-1}\big(u ( \Box v) w\big)\|_{S^{\frac{n}{2}}} +\| \Box^{-1}\big(u v \Box w\big) \|_{S^{\frac{n}{2}}}\\
\lesssim {}& \|u\|_{S^{\frac{n}{2}}} \|vw\|_{S^{\frac{n}{2}}}+\|uw \|_{S^{\frac{n}{2}}}\| v\|_{S^{\frac{n}{2}}}
+\|u v\|_{S^{\frac{n}{2}}} \| w \|_{S^{\frac{n}{2}}}\\
\lesssim {}& \|u\|_{S^{\frac{n}{2}}}\|v\|_{S^{\frac{n}{2}}} \| w \|_{S^{\frac{n}{2}}},
\end{align*}
where we have used \eqref{eq:nf} in the first, \eqref{eqn:thm div-prob:S-nonlin} in the second and \eqref{eqn:thm div-prob:S-alg} in the third inequality.
By Lemma \ref{lem:lin-est} and summation we obtain
\begin{align*}
\|\mc{T}[u]\|_{S^{\frac{n}{2}}}\lesssim \|(f,g)\|_{\dot{B}^{\frac{n}{2}}_{2,1}\times \dot{B}^{\frac{n}{2}-1}_{2,1}}+\|u\|_{S^{\frac{n}{2}}}^3,\\
\|\mc{T}[u]-\mc{T}[v]\|_{S^{\frac{n}{2}}}\lesssim \big(\|u\|_{S^{\frac{n}{2}}}^2+\|v\|_{S^{\frac{n}{2}}}^2\big)\|u-v\|_{S^{\frac{n}{2}}}.
\end{align*}
Therefore, $\mc{T}$ is a contraction in a small ball in $S_0^{\frac{n}{2}}$. This implies the existence of a fixed point in $S^{\frac{n}{2}}$ as the latter space is complete. Then, the restriction of the fixed point to the interval $[0,\infty)$ is a generalized solution of \eqref{eq:wm-sphere}. Clearly, if in addition the initial data are $C^\infty$, we obtain a classical solution to \eqref{eqn:wm model}. Scattering is an immediate consequence of \eqref{eq:u-pu} and the existence of one-sided limits in $U^2$. Indeed, it implies the existence of
\[\lim_{t\to \infty} e^{\pm i t |\nabla|} \big(u(t) \pm i| \nabla|^{-1} \partial_t u(t)\big)=:f_\pm \in \dot{B}^{\frac{n}{2}}_{2,1}(\mathbb{R}^n).\]
Now,
\[
f_\infty:= \tfrac{1}{2}\big(f_++f_-\big) \in \dot{B}^{\frac{n}{2}}_{2,1}(\mathbb{R}^n)\text{ and }g_\infty:=\tfrac{i}{2}|\nabla|\big(f_--f_+\big)\in \dot{B}^{\frac{n}{2}-1}_{2,1}(\mathbb{R}^n)
\]
satisfy
\[
\lim_{t\to \infty}\|u(t)-V(t)(f_\infty,g_\infty)\|_{\dot{B}^{\frac{n}{2}}_{2, 1}} +\lim_{t\to \infty}\|\partial_t u(t)-\partial_t V(t)(f_\infty,g_\infty)\|_{\dot{B}^{\frac{n}{2}-1}_{2, 1}}=0.
\]
In fact, the result is slightly stronger. For instance, the embedding $U^2\subset V^2$ implies that the quadratic variation is finite. The analogous results on $(-\infty,0]$ follow from time reversibility.
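For the reader's convenience we record the algebra behind the convergence above (a sketch; the argument is frequency-by-frequency in $L^2$, with the $\ell^1$ Besov summation suppressed). Using \eqref{eq:u-pu} and $V(t)(f_\infty,g_\infty)=\tfrac{1}{2} e^{-it|\nabla|}f_+ + \tfrac{1}{2} e^{it|\nabla|}f_-$, one finds
\[
u(t)-V(t)(f_\infty,g_\infty)=\tfrac{1}{2}e^{-it|\nabla|}\big(e^{it|\nabla|}\big(u + i|\nabla|^{-1}\partial_t u\big)(t)-f_+\big)+\tfrac{1}{2}e^{it|\nabla|}\big(e^{-it|\nabla|}\big(u - i|\nabla|^{-1}\partial_t u\big)(t)-f_-\big),
\]
and both brackets converge to $0$ as $t\to\infty$ by the definition of $f_\pm$, the half-wave propagators being isometries.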
\section{Multilinear estimates}\label{sec:multi}
\subsection{Properties of the iteration space $S$} \label{subsec:prop-S}
In this subsection we describe the main properties of the space $S$ which are needed for the solution of the division problem. The results collected here are for the most part consequences of properties of the $U^p$ and $V^p$ spaces. To aid the reader, we state (and for the most part give a proof of) these properties in Sections \ref{sec:Up and Vp}, \ref{sec:characterisation} and \ref{sec:bound}.
We start by defining a weak version of the space $S$, which is based on $V^2$ rather than $U^2$. Let $S_w$ denote the collection of all right-continuous functions $v$ such that
$$ \| v \|_{S_w} = \| v\|_{V^2_+ + V^2_-}= \inf_{ v = v^+ + v^-\atop v^{\pm}\in V^2_{\pm}} \Big(\| v^+ \|_{V^2_+} + \| v^- \|_{V^2_-} \Big)<\infty$$
where we define $\| \phi \|_{V^2_\pm} = \| e^{ \pm i t|\nabla|} \phi \|_{V^2}$. It is clear that
\begin{equation}\label{eqn:S_w norm weaker} \| u \|_{S_w} +\| |\nabla|^{-1}\partial_t u \|_{S_w} \leqslant \|u + i |\nabla|^{-1} \partial_t u\|_{V^2_+} + \|u - i |\nabla|^{-1} \partial_t u\|_{V^2_-}\lesssim \| u \|_S,
\end{equation}
and
\begin{equation}\label{eqn:S contr U2}
\| u \|_{U^2_+ + U^2_-} + \| |\nabla|^{-1} \partial_t u \|_{U^2_++ U^2_-} \leqslant \| u \|_S.
\end{equation}
Thus the norm on $S_w$ is weaker than that on $S$, and $S$ is slightly stronger than just taking $u, |\nabla|^{-1} \partial_t u \in U^2_+ + U^2_-$. However, for space-time frequencies localised to a fixed dyadic distance from the cone, these spaces are all closely related, see Part (ii) of Lemma \ref{lem:S properties} below. In particular, in some sense the differences in these spaces only appear when considering frequency regions of the form $\{ ||\tau| - |\xi|| \lesssim \mu \}$.
The space $S$ satisfies the following properties.
\begin{lemma}\label{lem:S properties}\leavevmode
\begin{enumerate}
\item \label{itm:lem S prop:dp}\emph{(Dual pairing)} Let $\psi \in S$ and $\phi \in S_w$ with $|\nabla|^{-1} \Box \phi, \partial_t \phi, |\nabla|\phi \in L^1_t L^2_x$. Then
$$ \Big| \int_\mathbb{R} \lr{|\nabla|^{-1} \Box \phi, \psi}_{L^2} dt \Big| \lesssim \| \phi \|_{S_w} \| \psi \|_S.$$
\item \label{itm:lem S prop:Xsb}\emph{($\dot X^{0, \frac12,\infty}$ control)} Let $d, \lambda \in 2^\mathbb{Z}$. Then
$$ \| C_d u_\lambda \|_{L^2_{t,x}} \approx d^{-\frac{1}{2}}\frac{\lambda }{d+\lambda} \| C_d u_\lambda \|_{S} \approx d^{-\frac{1}{2}} \| C_d v_\lambda \|_{S_w}. $$
\item \label{itm:lem S prop:sq sum}\emph{(Square sum bounds)} Let $\epsilon>0$. For any $0<\langlepha \leqslant 1$ and $\lambda \geqslant \mu >0$ we have
\[ \Big( \sum_{ q \in \mc{Q}_\mu} \| P_q u_\lambda \|_S^2 \Big)^\frac{1}{2}\lesssim \| u_\lambda \|_{S}, \qquad \Big( \sum_{\kappa \in \mc{C}_\alpha} \| R_\kappa u_\lambda \|_S^2 \Big)^\frac{1}{2} \lesssim \| u_\lambda \|_{S}.\]
\item \label{itm:lem S prop:dis} \emph{(Uniform disposability)} For any $d \in 2^\mathbb{Z}$ we have
\begin{align*}
\|C_{d} u \|_{S}+\|C_{\leqslant d} u \|_{S}\lesssim \|u\|_{S}, \qquad
\|C_{d} v \|_{S_w}+\|C_{\leqslant d} v \|_{S_w}\lesssim \|v\|_{S_w}.
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
We start with the proof of \eref{itm:lem S prop:dp}, which is a consequence of an approximation argument, together with the $U^2$ and $V^2$ version
\begin{equation}\label{eqn:lem S prop:U2 dual pairing}
\Big| \int_\mathbb{R} \lr{ \partial_t v, u}_{L^2} dt \Big| \leqslant \| v \|_{V^2} \| u \|_{U^2}
\end{equation}
which holds provided that $\partial_t v \in L^1_t L^2_x$, see Theorem \ref{thm:dual pairing} below. The definition of the space $S_w$ implies that we can write $\phi = \phi_+ + \phi_-$ with $\phi_\pm \in V^2_\pm$. However we have a slight difficulty as the functions $\phi_\pm$ do not necessarily inherit the smoothness or integrability properties of $\phi$. To address this problem, we define the phase space localisation operator $\mc{P}_{\leqslant d} \phi = P^{(t)}_{\leqslant d} P_{\leqslant d} ( \rho_d \phi)$ where $\rho \in C^\infty_0$ with $\rho(t) = 1$ on $|t|\leqslant 1$, and we take $\rho_d(t) = \rho(\frac{t}{d})$. The assumptions on $\phi$ imply that $\| |\nabla|^{-1} \Box (1-\mc{P}_{\leqslant d})\phi \|_{L^1_t L^2_x} \to 0$ as $d \to \infty$. Thus it suffices to bound the dual pairing with $\phi$ replaced with $\mc{P}_{\leqslant d} \phi$. To this end, as $\mc{P}_{\leqslant d} \phi_\pm$ is now smooth and integrable, we have
\begin{align*}
\Big| \int_\mathbb{R} \lr{ |\nabla|^{-1} \Box \mc{P}_{\leqslant d} \phi_\pm, \psi }_{L^2} dt \Big|
&= \Big| \int_\mathbb{R} \lr{ ( -i \partial_t \pm |\nabla|) \mc{P}_{\leqslant d} \phi_\pm, \psi \pm i |\nabla|^{-1} \partial_t \psi }_{L^2} dt \Big| \\
&= \Big| \int_\mathbb{R} \lr{ \partial_t ( e^{\pm it|\nabla|} \mc{P}_{\leqslant d} \phi_\pm), e^{\pm it|\nabla|} (\psi \pm i |\nabla|^{-1} \partial_t \psi) }_{L^2} dt \Big| \\
&\leqslant \| \mc{P}_{\leqslant d} \phi_\pm \|_{V^2_\pm} \| \psi \pm i |\nabla|^{-1} \partial_t \psi \|_{U^2_\pm}
\end{align*}
where we applied \eref{eqn:lem S prop:U2 dual pairing}. If we now observe that
$$ e^{\pm i t|\nabla|} P^{(t)}_{\leqslant d} \phi(t) = \int_\mathbb{R} d^{-1} \chi( \tfrac{s}{d}) e^{ \pm i s |\nabla|} e^{ \pm i (t-s) |\nabla|} \phi(t-s) ds $$
for some $\chi \in L^1$ with $\| \chi\|_{L^1(\mathbb{R})}\leqslant 1$, a short computation gives the bound
\[\| \mc{P}_{\leqslant d} \phi \|_{V^2_\pm} \lesssim \| \phi \|_{V^2_\pm}.\]
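The computation in question can be sketched as follows for the temporal cutoff alone (a sketch; we suppress the exact $\pm$ convention relating $V^2_\pm$ to $V^2$, and we take for granted that the spatial cutoff $P_{\leqslant d}$ and the multiplication by $\rho_d$ are bounded on $V^2_\pm$, the former by $L^2_x$ boundedness commuting with the flow, the latter by its uniformly bounded variation in $t$). Writing $G(t) = e^{\pm i t|\nabla|}\phi(t)$, the identity above and Minkowski's inequality give
\[
\Big\| \int_\mathbb{R} d^{-1} \chi\big(\tfrac{s}{d}\big)\, e^{\pm i s|\nabla|}\, G(\cdot - s)\, ds \Big\|_{V^2}
\leqslant \int_\mathbb{R} d^{-1} \big|\chi\big(\tfrac{s}{d}\big)\big|\, \big\| e^{\pm i s |\nabla|} G(\cdot - s) \big\|_{V^2}\, ds
\leqslant \| \chi \|_{L^1(\mathbb{R})}\, \| G \|_{V^2},
\]
since the $V^2$ norm is invariant under time translation and under composition with a fixed unitary operator such as $e^{\pm i s|\nabla|}$.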
Consequently \eref{itm:lem S prop:dp} follows.
To prove \eref{itm:lem S prop:Xsb}, we first observe that $\big| |\tau| - |\xi| \big| \leqslant \big| \tau \pm |\xi| \big|$.
Now, if $C_d v_\lambda=v_++v_-$, from Theorem \ref{thm:besov embedding} and \eqref{eqn:adapted Up vs Xsb}
\[
\| C_d v_\lambda \|_{L^2_{t,x}}\lesssim \| C^+_{\gtrsim d} v_+ \|_{L^2_{t,x}} + \| C^{-}_{\gtrsim d} v_- \|_{L^2_{t,x}}\lesssim d^{-\frac12}\big(\|v_+\|_{V^2_+}+\|v_-\|_{V^2_-}\big),
\]
proving $\| C_d v_\lambda \|_{L^2_{t,x}} \lesssim d^{-\frac{1}{2}} \| C_d v_\lambda \|_{S_w} $. If $d\lesssim \lambda$, this also implies
$$ \| C_d u_\lambda \|_{L^2_{t,x}} \lesssim d^{-\frac{1}{2}}\frac{\lambda }{d+\lambda} \| C_d u_\lambda \|_{S}.$$
On the other hand, to obtain the high modulation gain in the region $d\gg \lambda$, we use the above together with the fact that the $S$ norm controls the time derivative to see that
\begin{align*}
\| C_d u_\lambda \|_{L^2_{t,x}} \approx{}& \frac{\lambda}{d} \| C_d \partial_t |\nabla|^{-1} u_\lambda \|_{L^2_{t,x}}\lesssim d^{-\frac{1}{2}}\frac{\lambda }{d+\lambda} \| \partial_t |\nabla|^{-1} C_d u_\lambda \|_{S_w}\\
\lesssim{}& d^{-\frac{1}{2}}\frac{\lambda }{d+\lambda}\| C_d u_\lambda \|_{S}
\end{align*}
where we used \eqref{eqn:S_w norm weaker}. This completes the proof of all $\lesssim$ inequalities in \eref{itm:lem S prop:Xsb}. For the converse inequalities, let $P^{(t)}_\pm$ denote the temporal Fourier multiplier with symbol $\mathbbold{1}_{\{ \pm \tau \geqslant 0\}}(\tau)$. Then the identity
$C_d u_\lambda = C^+_{\approx d} P^{(t)}_- C_d u_\lambda + C^+_{\approx d+ \lambda} P^{(t)}_+ C_d u_\lambda$
implies that
\begin{align*}
\| & C_d (u_\lambda + i |\nabla|^{-1} \partial_t u_{\lambda}) \|_{U^2_+}\\
&\leqslant \big\| C^+_{\approx d} P^{(t)}_- C_d\big( u_\lambda + i |\nabla|^{-1} \partial_t u_{\lambda}\big) \big\|_{U^2_+} + \big\| C^+_{\approx d+\lambda} P^{(t)}_+ C_d \big(u_\lambda + i |\nabla|^{-1} \partial_t u_{\lambda} \big)\big\|_{U^2_+} \\
&\lesssim d^{\frac{1}{2}} \big\| C^+_{\approx d} P^{(t)}_- C_d \big( u_\lambda + i |\nabla|^{-1} \partial_t u_{\lambda}\big) \big\|_{L^2_{t,x}} \\
&\qquad \qquad+ (d+\lambda)^{\frac{1}{2}} \big\| C^+_{\approx d+\lambda} P^{(t)}_+ C_d\big(u_\lambda + i |\nabla|^{-1} \partial_t u_{\lambda} \big)\big\|_{L^2_{t,x}} \\
&\lesssim d^{\frac{1}{2}} \frac{d+\lambda}{\lambda} \big\| C_d u_\lambda \big\|_{L^2_{t,x}} + (d+\lambda)^{\frac{1}{2}} \frac{d}{\lambda} \big\| C_d u_{\lambda} \big\|_{L^2_{t,x}} \approx d^\frac{1}{2} \frac{d+\lambda}{\lambda} \| C_d u_\lambda \|_{L^2_{t,x}}.
\end{align*}
Since an identical argument gives the $U^2_-$ version, we conclude that
$$ \| C_d u_\lambda \|_{L^2_{t,x}} \gtrsim d^{-\frac{1}{2}} \frac{\lambda}{d+\lambda} \| C_d u_\lambda\|_{S}.$$
For the $S_w$ version, we observe that by definition
\begin{align*} \| C_d v_\lambda \|_{S_w} &\leqslant \| C_d P^{(t)}_- v_\lambda \|_{V^2_+} + \| C_d P^{(t)}_+ v_\lambda \|_{V^2_-} \\
&\lesssim \| C^+_{\approx d} C_d P^{(t)}_- v_\lambda \|_{V^2_+} + \| C^{-}_{\approx d} C_d P^{(t)}_+ v_\lambda \|_{V^2_-}
\lesssim d^{\frac{1}{2}} \| C_d v_\lambda \|_{L^2_{t,x}}
\end{align*}
as required.
To prove \eref{itm:lem S prop:sq sum}, the square sum bounds, we observe that the $S$ case follows immediately from the square sum control in $U^2$, namely Proposition \ref{prop:orthog} below.
For \eref{itm:lem S prop:dis}, it suffices to show that
$$ \| C_d \phi_\lambda \|_{U^2_+} + \| C_{\leqslant d} \phi_\lambda \|_{U^2_+} \lesssim \| \phi_\lambda \|_{U^2_+}, \qquad \| C_d \phi_\lambda \|_{V^2_+} + \| C_{\leqslant d} \phi_\lambda \|_{V^2_+} \lesssim \| \phi_\lambda \|_{V^2_+}. $$
After writing $ C_{\leqslant d}^+ = e^{-i t |\nabla|} P^{(t)}_{\leqslant d} e^{i t|\nabla|} $ and $ C_{ d}^+ = e^{-i t |\nabla|} P^{(t)}_{d} e^{i t|\nabla|} $, where $P^{(t)}_{\leqslant d}$ and $P^{(t)}_d$ are smooth temporal cutoffs to $|\tau| \leqslant d$ and $|\tau| \approx d$ respectively, the fact that convolution with $L^1_t$ kernels is bounded in $U^2$ and $V^2$ (see Lemma \ref{lem:conv oper on Up} below) implies that
\begin{equation}\label{eqn:lem S prop:Cplus dis} \| C_d^+ \phi_\lambda \|_{U^2_+} + \| C_{\leqslant d}^+ \phi_\lambda \|_{U^2_+} \lesssim \| \phi_\lambda \|_{U^2_+}, \qquad \| C_d^+ \phi_\lambda \|_{V^2_+} + \| C_{\leqslant d}^+ \phi_\lambda \|_{V^2_+} \lesssim \| \phi_\lambda \|_{V^2_+}.
\end{equation}
Thus our goal is to replace $C_d^+$ in \eref{eqn:lem S prop:Cplus dis} with $C_d$. We only prove the $U^2_+$ case, as the $V^2_+$ case is similar. For the $C_d$ multipliers, we observe that after decomposing
$$ C_d \phi_\lambda = C_{\approx d}^+ C_d \phi_\lambda + ( 1- C_{\approx d}^+ ) C_d\phi_\lambda = C_{\approx d}^+ C_d\phi_\lambda + C^+_{\approx \lambda} ( 1- C_{\approx d}^+ ) C_d \phi_\lambda$$
the standard Besov embedding in Theorem \ref{thm:besov embedding} gives
\begin{align*} \| C_d \phi_\lambda \|_{U^2_+} &\lesssim d^{\frac{1}{2}} \|C_{\approx d}^+ C_d \phi_\lambda \|_{L^2_{t,x}} + \lambda^\frac{1}{2} \|C^+_{\approx \lambda} ( 1- C_{\approx d}^+ ) C_d \phi_\lambda\|_{L^2_{t,x}} \\
&\lesssim \sup_{d'} (d')^\frac{1}{2} \| C_{d'}^+ \phi_\lambda \|_{L^2_{t,x}} \lesssim \| \phi_\lambda\|_{U^2_+}.
\end{align*}
On the other hand, for the $C_{\leqslant d}$ multipliers, we first write $C_{\leqslant d} = C_{\ll d}^+ + C_{\gtrsim d}^+ C_{\leqslant d}$. The first term is immediate by \eqref{eqn:lem S prop:Cplus dis}. For the second, we note that
$$ C_{\gtrsim d}^+ C_{\leqslant d} \phi_\lambda = C_{\approx d}^+ C_{\leqslant d} \phi_\lambda + C_{\gg d}^+ C_{\leqslant d}\phi_\lambda = C_{\approx d}^+ C_{\leqslant d} \phi_\lambda + C^+_{\approx \lambda} C_{\gg d}^+ C_{\leqslant d}\phi_\lambda $$
and apply the reasoning used in the $C_d$ case.
\end{proof}
For later use, we note that \eref{itm:lem S prop:Xsb} in Lemma \ref{lem:S properties} implies that for any $d \lesssim \lambda$ we have the bounds
\begin{equation}\label{eqn:trivial box bound}
\big\| \Box C_d \psi_\lambda \big\|_{L^2_{t,x}} + \big\| \Box C_{\leqslant d} \psi_\lambda \big\|_{L^2_{t,x}} \lesssim d^\frac{1}{2} \lambda \|\psi_\lambda \|_S.
\end{equation}
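For completeness, here is a short sketch of where \eqref{eqn:trivial box bound} comes from, using only \eref{itm:lem S prop:Xsb} and the disposability bound \eref{itm:lem S prop:dis}: on the Fourier support of $C_d \psi_\lambda$ with $d \lesssim \lambda$ the symbol of $\Box$ has size $\big| |\xi|^2 - \tau^2 \big| = \big| |\tau| - |\xi| \big| \big( |\tau| + |\xi| \big) \lesssim d \lambda$, so
\[
\big\| \Box C_d \psi_\lambda \big\|_{L^2_{t,x}} \lesssim d \lambda \, \| C_d \psi_\lambda \|_{L^2_{t,x}}
\lesssim d\lambda \cdot d^{-\frac{1}{2}} \frac{\lambda}{d + \lambda} \| C_d \psi_\lambda \|_{S}
\lesssim d^{\frac{1}{2}} \lambda \, \| \psi_\lambda \|_{S},
\]
and the $C_{\leqslant d}$ bound follows by summing the corresponding estimates over modulations $d' \leqslant d$.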
The norm $\| \cdot \|_S$ also controls the Strichartz type spaces $L^q_t L^r_x$. In fact, as the $S$ norm is based on $U^2$, essentially any estimate for the free wave equation which involves $L^p_t$ with $p \geqslant 2$ implies a corresponding bound for functions in $S$. However, somewhat surprisingly, we make no use of Strichartz estimates in the proof of Theorem \ref{thm:div-prob}, and instead rely on bilinear $L^2_{t,x}$ estimates together with the $X^{s,b}$ type bound \eref{eqn:trivial box bound}.
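As an illustration of this transference principle (a sketch stated for the standard wave-admissible range; none of the exponents below are used elsewhere in the argument), since free waves satisfy $\| e^{\pm it|\nabla|} f_\lambda \|_{L^q_t L^r_x} \lesssim \lambda^{\frac{n}{2}-\frac{n}{r}-\frac{1}{q}} \| f_\lambda \|_{L^2_x}$ whenever $q \geqslant 2$ and $\frac{1}{q} + \frac{n-1}{2r} \leqslant \frac{n-1}{4}$ (with the usual exception of the forbidden endpoint when $n=3$), applying this bound atom by atom to the two half-wave components of $u_\lambda$ implicit in the definition of $S$ gives
\[
\| u_\lambda \|_{L^q_t L^r_x} \lesssim \lambda^{\frac{n}{2}-\frac{n}{r}-\frac{1}{q}} \, \| u_\lambda \|_{S}.
\]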
The space $S$ is constructed using the atomic space $U^2_\pm$. Although this definition is convenient for proving properties of functions in $S$, it is a challenging problem to determine precisely when a general function $u \in C_b(\mathbb{R}, L^2_x)$ belongs to $S$. This problem is closely related to the difficult question of characterising elements of $U^p$, which has been addressed in \cite{Koch2016}. We also give a slightly different statement of the $U^p$ characterisation result in Theorem \ref{thm:characteristion of Up}, as well as a self-contained proof closely following that given by Koch-Tataru \cite{Koch2016}. Restated in terms of the solution space $S$, the conclusion is the following.
\begin{theorem}[Characterisation of $S$] \label{thm:chara of S}
Let $(\psi,\partial_t \psi)\in C_b(\mathbb{R} ; L^2_x \times \dot{H}^{-1}_x)$ with $\| (\psi(t), \partial_t \psi(t))\|_{L^2 \times \dot{H}^{-1}} \to 0$ as $t \to -\infty$. If
$$ \sup_{\substack{\phi \in C^\infty_0 \\ \| \phi \|_{S_w} \leqslant 1}} \big| \int_\mathbb{R} \lr{ \Box \phi, |\nabla|^{-1} \psi}_{L^2_x} dt \big| < \infty $$
then $\psi \in S$ and
$$ \| \psi \|_S \lesssim \sup_{\substack{\phi \in C^\infty_0 \\ \| \phi \|_{S_w} \leqslant 1}} \big| \int_\mathbb{R} \lr{ \Box \phi, |\nabla|^{-1} \psi}_{L^2_x} dt \big|. $$
\end{theorem}
\begin{proof}
Theorem \ref{thm:characteristion of Up} implies that if $u \in L^\infty_t L^2_x$ with $\| u(t) \|_{L^2_x} \to 0 $ as $t \to -\infty$, and
$$ \sup_{\substack{ v \in C^\infty_0(\mathbb{R};L^2) \\ \| v \|_{V^2}\leqslant 1}} \big| \int_\mathbb{R} \lr{\partial_t v, u }_{L^2_x} dt \big| < \infty, $$
then $u \in U^2$. To translate this statement to $S$, we first observe that as in the proof of \eref{itm:lem S prop:dp} in Lemma \ref{lem:S properties}, we have
\begin{align*} \sup_{\substack{ v \in C^\infty_0 \\ \| v \|_{V^2} \leqslant 1}} \big| \int_\mathbb{R} \lr{ \partial_t v, e^{\mp it |\nabla|}( \psi \pm i |\nabla|^{-1} \partial_t \psi) }_{L^2} dt \big| &= \sup_{\substack{ v \in C^\infty_0 \\ \| v \|_{V^2} \leqslant 1}} \big| \int_\mathbb{R} \lr{ \Box ( e^{\mp it |\nabla|} v), |\nabla|^{-1} \psi }_{L^2} dt \big| \\
&\leqslant \sup_{\substack{ \phi \in C^\infty_0 \\ \| \phi \|_{S_w} \leqslant 1}} \big| \int_\mathbb{R} \lr{ \Box \phi, |\nabla|^{-1} \psi }_{L^2} dt \big|.
\end{align*}
Consequently, since $ \|\psi \pm i |\nabla|^{-1} \partial_t \psi \|_{L^2_x} \to 0$ as $ t \to -\infty$, we conclude from the characterisation of $U^2$ that $\psi \pm i |\nabla|^{-1} \partial_t \psi \in U^2_\pm$ and moreover that the claimed bound holds.
\end{proof}
Recall that we have defined the inhomogeneous solution operator $ \Box^{-1} F$ as
$$ \Box^{-1} F = \mathbbold{1}_{[0, \infty)}(t) \int_0^t |\nabla|^{-1} \sin\big( ( t-s)|\nabla|\big) F(s) ds. $$
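For orientation, a direct differentiation (a routine Duhamel check, under the sign convention $\Box = \partial_t^2 - \Delta = \partial_t^2 + |\nabla|^2$ used implicitly here) confirms that this operator inverts $\Box$ from vanishing data at $t=0$: for $t>0$,
\[
\partial_t \big( \Box^{-1} F \big)(t) = \int_0^t \cos\big( (t-s)|\nabla| \big) F(s)\, ds, \qquad
\partial_t^2 \big( \Box^{-1} F \big)(t) = F(t) - |\nabla|^2 \int_0^t |\nabla|^{-1} \sin\big( (t-s)|\nabla| \big) F(s)\, ds,
\]
so that $\Box\big(\Box^{-1} F\big) = F$ on $(0,\infty)\times \mathbb{R}^n$ and $\big( \Box^{-1}F, \partial_t \Box^{-1} F\big)(0) = (0,0)$.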
Applying Theorem \ref{thm:chara of S} to the special case $\psi = \Box^{-1} F$ gives the following.
\begin{corollary}[Energy Inequality]\label{cor:energy ineq}
Let $F \in L^1_{t,loc} \dot{H}^{-1}_x$ with
$$ \sup_{\substack{\phi \in C^\infty_0 \\ \| \phi \|_{S_w} \leqslant 1}} \big| \int_0^\infty \lr{\phi, |\nabla|^{-1} F }_{L^2_x} dt \big| < \infty. $$
Then $ \Box^{-1} F \in S$ and
$$ \| \Box^{-1} F \|_S \lesssim \sup_{\substack{\phi \in C^\infty_0 \\ \| \phi \|_{S_w} \leqslant 1}} \big| \int_0^\infty \lr{\phi, |\nabla|^{-1} F }_{L^2_x} dt \big|. $$
\end{corollary}
\begin{proof}
We would like to apply Theorem \ref{thm:chara of S} to $ \Box^{-1} F$. We start by observing that the definition of $ \Box^{-1} F$ and the fact that $F \in L^1_{t, loc} \dot{H}^{-1}$ imply that for any $\phi \in C^\infty_0$ we have
$$ \big| \int_\mathbb{R} \lr{ \Box \phi, |\nabla|^{-1} \Box^{-1} F}_{L^2} dt \big| = \big| \int_0^\infty \lr{ \phi, |\nabla|^{-1} F }_{L^2} dt \big|$$
as well as $ \Box^{-1} F, |\nabla|^{-1} \partial_t ( \Box^{-1} F) \in C(\mathbb{R}, L^2)$ and $ \Box^{-1} F(t) = |\nabla|^{-1} \partial_t ( \Box^{-1} F)(t)=0$ for $t \leqslant 0$. In view of Theorem \ref{thm:chara of S}, the required conclusion would follow provided that $\| \Box^{-1} F \pm i |\nabla|^{-1} \partial_t( \Box^{-1} F) \|_{L^\infty_t L^2_x} < \infty$. To this end, we observe that
\begin{align*}
\| \Box^{-1} F \pm i &|\nabla|^{-1} \partial_t \Box^{-1} F \|_{L^\infty_t L^2_x} \\
&= \big\| \mathbbold{1}_{[0, \infty)}(t) \int_0^t e^{ \mp i (t-s)|\nabla|} |\nabla|^{-1} F(s) ds \big\|_{L^\infty_t L^2_x} \\
&= \sup_{ \substack{\phi \in C^\infty_0 \\ \| \phi \|_{L^1_t L^2_x} \leqslant 1 }} \big| \int_\mathbb{R} \int_0^t \lr{\phi(t), e^{ \pm i(t-s)|\nabla|} |\nabla|^{-1} F(s) } ds\, dt \big| \\
&= \sup_{ \substack{\phi \in C^\infty_0 \\ \| \phi \|_{L^1_t L^2_x} \leqslant 1 }} \big| \int_0^\infty \big\langle \int_s^\infty e^{ \pm i (s-t)|\nabla|} \phi(t) dt , |\nabla|^{-1} F(s) \big \rangle ds \big|.
\end{align*}
If we let $\phi_\pm(s) = e^{ \pm i s |\nabla|} \int_s^\infty e^{ \mp i t |\nabla|} \phi(t) dt$, then the definition of the $V^2_\pm$ norm gives
$$ \| \phi_{\pm} \|_{V^2_\pm} \lesssim \| \phi \|_{L^1_t L^2_x} \leqslant 1$$
and hence we conclude that
\begin{align*}
\| \Box^{-1} F \pm i |\nabla|^{-1} \partial_t \Box^{-1} F \|_{L^\infty_t L^2_x} \lesssim{}& \sup_{ \substack{ \phi \in C^\infty_0 \\ \| \phi \|_{V^2_\pm} \leqslant 1 }} \big| \int_0^\infty \lr{ \phi, |\nabla|^{-1} F }_{L^2} dt \big|\\
\lesssim {}& \sup_{ \substack{ \phi \in C^\infty_0 \\ \| \phi \|_{S_w} \leqslant 1 }} \big| \int_0^\infty \lr{ \phi, |\nabla|^{-1} F }_{L^2} dt \big|,
\end{align*}
where the last line follows from the definition of $\| \cdot \|_{S_w}$. Therefore,
\[\| \Box^{-1} F \pm i |\nabla|^{-1} \partial_t( \Box^{-1} F) \|_{L^\infty_t L^2_x} < \infty,\] as required.
\end{proof}
\begin{remark}\label{rmk:cutoff}
It is possible to remove the time cutoff $\mathbbold{1}_{[0, \infty)}(t)$ in Corollary \ref{cor:energy ineq} by using an approximation argument. More precisely, provided that $F \in L^1_{t,loc} \dot{H}^{-1}$, we have
$$ \sup_{\substack{\phi \in C^\infty_0 \\ \| \phi \|_{S_w} \leqslant 1}} \big| \int_0^\infty \lr{\phi, |\nabla|^{-1} F }_{L^2_x} dt \big| \leqslant \sup_{\substack{\phi \in C^\infty_0 \\ \| \phi \|_{S_w} \leqslant 1}} \big| \int_\mathbb{R} \lr{\phi, |\nabla|^{-1} F }_{L^2_x} dt \big|.$$
This follows by writing $\phi(t) = \chi(\epsilon^{-1} t) \phi(t) + (1-\chi(\epsilon^{-1} t)) \phi(t)$ with $\chi \in C^\infty$, $\chi(t) =1$ for $t \geqslant 1$, and $\chi(t)= 0 $ for $t<\frac{1}{2}$, and noting that $\chi(\epsilon^{-1} t) \phi(t) \in C^\infty_0(0, \infty)$ and
$$ \big| \int_0^\infty \lr{ (1 - \chi(\epsilon^{-1} t) ) \phi(t), |\nabla|^{-1} F}_{L^2_x} dt \big| \to 0$$
as $\epsilon \to 0$ since $F \in L^1_{t, loc} \dot{H}^{-1}$.
\end{remark}
\subsection{Product estimates in far cone regions}\label{subsec:product}
In this section we give the key estimates to control the product of two functions in the spaces $S$ and $S_w$ in the special case where one of the functions has high modulation, or in other words, is far from the cone. This estimate is a consequence of a general high-low type product estimate in the adapted spaces $U^p$ and $V^p$. To state the product estimate we require, we start by defining the (spatial) bilinear Fourier multiplier
\begin{equation}\label{eqn:spatial bilinear Fourier multiplier}
\mc{M}[f, g]( x) = \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \widehat{f}(\xi - \eta) \widehat{g}(\eta) m(\xi - \eta, \eta) d\eta e^{ i x\cdot \xi} d\xi
\end{equation}
where $m: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{C}$.
\begin{theorem}[High-low product estimate: wave case]\label{thm:high-low U2 wave}
Let $d \in 2^\mathbb{Z}$. Suppose that $m:\mathbb{R}^n \times \mathbb{R}^n \to \mathbb{C}$ with
$$ \big|\pm_0 |\xi+\eta| - \pm_1 |\xi| - \pm_2 |\eta| \big| \leqslant d $$
for $(\xi, \eta) \in \supp m$. If $u \in V^2_{\pm_1}$ with $\supp \widetilde{u} \subset \{ |\tau \pm_1 |\xi| | \geqslant 4 d\}$ and $v \in V^2_{\pm_2}$ then $\mc{M}[u,v] \in V^2_{\pm_0}$ with
$$ \| \mc{M}[u,v] \|_{V^2_{\pm_0}} \lesssim \| m(\xi, \eta) \|_{L^\infty_\xi L^2_\eta + L^\infty_\eta L^2_\xi} \| u \|_{V^2_{\pm_1}} \| v \|_{V^2_{\pm_2}}.$$
If in addition we have $u \in U^2_{\pm_1}$ and $v \in U^2_{\pm_2}$, then $\mc{M}[u,v] \in U^2_{\pm_0}$ and
$$ \| \mc{M}[u,v] \|_{U^2_{\pm_0}} \lesssim \| m(\xi, \eta) \|_{L^\infty_\xi L^2_\eta + L^\infty_\eta L^2_\xi} \| u \|_{U^2_{\pm_1}} \| v \|_{U^2_{\pm_2}}.$$
\end{theorem}
\begin{proof}
This is a direct consequence of Theorem \ref{thm:general adapted high mod prod} and a rescaling argument. Roughly, the point is that, after applying Plancherel, for the $V^2_{\pm}$ estimate, by definition, we need to prove that
\begin{align*}
\big\| \int_{\mathbb{R}^n} e^{ i t ( \pm_0 |\xi| - \pm_1 |\xi-\eta| - \pm_2 |\eta|)} &m(\xi - \eta, \eta) \phi(t,\xi-\eta) \psi(t,\eta) d\eta \big\|_{V^2}\\
&\lesssim \| m(\xi, \eta) \|_{L^\infty_\xi L^2_\eta + L^\infty_\eta L^2_\xi} \| \phi\|_{V^2} \| \psi \|_{V^2}.
\end{align*}
However, the Fourier support assumptions imply that $\supp \mc{F}_t( \phi) \subset \{ |\tau| \geqslant 4d \}$ (and thus $\phi$ has high temporal frequency), while $e^{ i t ( \pm_0 |\xi| - \pm_1 |\xi-\eta| - \pm_2 |\eta|)}$ has low temporal frequency. Hence, using the standard heuristic that the derivative only falls on the high frequency term, we see that the $V^2$ norm only hits the product $\phi(t) \psi(t)$, and we can simply place the exponential factor in $L^\infty_t$. This argument is made precise in Subsection \ref{subsec:alg}.
\end{proof}
The high-low product estimate in Theorem \ref{thm:high-low U2 wave} is also true for general phases, see Theorem \ref{thm:stab with conv} and Theorem \ref{thm:general adapted high mod prod} below. In particular, Theorem \ref{thm:high-low U2 wave} is a consequence of a general property of $U^p$ and $V^p$ spaces, and the precise nature of the solution operator $e^{\pm i t|\nabla|}$ plays no role.
Adapting Theorem \ref{thm:high-low U2 wave} to the solution spaces $S$ and $S_w$ gives the following.
\begin{theorem}\label{thm:high-mod}
For all $\lambda_0,\lambda_1,\lambda_2\in 2^\mathbb{Z}$ we have
\begin{align}
\label{eq:high-mod1}
\|P_{\lambda_0}(C_{\gg \mu} u_{\lambda_1} v_{\lambda_2})\|_{S_w}\lesssim{}& \mu^{\frac{n}{2}} \big(\frac{\min\{\lambda_1,\lambda_2\}}{\mu} \big)^{\frac12} \|u_{\lambda_1}\|_{S_w}\|v_{\lambda_2}\|_{S_w},\\
\label{eq:high-mod2}
\|P_{\lambda_0}(C_{\gg \mu} u_{\lambda_1} v_{\lambda_2})\|_{S}\lesssim{}& \mu^{\frac{n}{2}} \big(\frac{\min\{\lambda_1,\lambda_2\}}{\mu} \big)^{\frac32} \|u_{\lambda_1}\|_{S}\|v_{\lambda_2}\|_{S},
\end{align}
where $\mu=\min\{\lambda_0,\lambda_1,\lambda_2\}$.
\end{theorem}
\begin{proof}
Let $\lambda_{\max}=\max\{\lambda_0,\lambda_1,\lambda_2\}$. By dropping $C_{\gg \mu}$ and supposing throughout that the Fourier support of $u_{\lambda_1}$ or $v_{\lambda_2}$ is contained in $\{||\tau|-|\xi||\gg \mu\}$, we may assume that $\lambda_1\geq \lambda_2$. We claim
\begin{align}
\|P_{\lambda_0}(u_{\lambda_1} v_{\lambda_2} )\|_{V^2_{\pm_0}}\lesssim{}& \mu^{\frac{n}{2}} \big(\frac{\lambda_{\max}}{\mu} \big)^{\frac12}\|u_{\lambda_1}\|_{V^2_+ + V^2_-}\|v_{\lambda_2}\|_{V^2_+ + V^2_-},\label{eq:pv}\\
\|P_{\lambda_0}(u_{\lambda_1} v_{\lambda_2} )\|_{U^2_{\pm_0}}\lesssim{}& \mu^{\frac{n}{2}} \big(\frac{\lambda_{\max}}{\mu} \big)^{\frac12}\|u_{\lambda_1}\|_{U^2_+ + U^2_-}\|v_{\lambda_2}\|_{U^2_+ + U^2_-}\label{eq:pu}.
\end{align}
In addition, if $\mathcal{M}(f,g)(x)$ is defined as in \eqref{eqn:spatial bilinear Fourier multiplier} with
$$ m(\xi, \eta) = (|\xi+\eta|-|\xi|-|\eta|)\mathbbold{1}_{\{|\xi|\approx \lambda_1,|\eta|\approx \lambda_2,|\xi+\eta|\approx \lambda_0\}}(\xi,\eta)$$
we claim
\begin{equation}
\|\mathcal{M}(u_{\lambda_1}, v_{\lambda_2})\|_{U^2_{\pm_0}}\lesssim{} \mu^{\frac{n}{2}} \big(\frac{\lambda_{\max}}{\mu} \big)^{\frac12} \min\{\lambda_1,\lambda_2\}\|u_{\lambda_1}\|_{U^2_+ + U^2_-}\|v_{\lambda_2}\|_{U^2_+ + U^2_-}.
\label{eq:mu}
\end{equation}
Assuming these claims for the moment, we now give the proof of the bounds \eqref{eq:high-mod1} and \eqref{eq:high-mod2}. Concerning \eqref{eq:high-mod1}, we observe that in the case $\lambda_1\sim \lambda_2$, \eqref{eq:high-mod1} boils down to \eqref{eq:pv}. On the other hand, if $\lambda_1\gg \lambda_2$, we can directly apply Theorem \ref{thm:high-low U2 wave} with symbol $m(\xi,\eta)=\mathbbold{1}_{\{|\xi+\eta|\approx \lambda_0, |\xi|\approx \lambda_1, |\eta|\approx \lambda_2\}}$ and obtain
\[
\|P_{\lambda_0}(u_{\lambda_1} v_{\lambda_2})\|_{S_w}\lesssim{}\|P_{\lambda_0}(u_{\lambda_1} v_{\lambda_2} )\|_{V^2_{\pm_1}} \lesssim \mu^{\frac{n}{2}} \|u_{\lambda_1}\|_{V^2_{\pm_1}}\|v_{\lambda_2}\|_{V^2_{\pm_2}},
\]
since $\big|\pm_1|\xi|\pm_2|\eta|-\pm_1|\xi+\eta|\big|\lesssim \mu$ for $(\xi,\eta)\in \supp m$ and $\|m\|_{L^\infty_\xi L^2_\eta+L^\infty_\eta L^2_\xi}\lesssim \mu^{\frac{n}{2}}$.
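Both conditions are elementary to check (a quick sketch; recall that in this regime $\lambda_1 \gg \lambda_2$ forces $\lambda_0 \approx \lambda_1$, so $\mu = \lambda_2$):
\[
\big| \pm_1 |\xi| \pm_2 |\eta| - \pm_1 |\xi+\eta| \big| \leqslant \big| |\xi+\eta| - |\xi| \big| + |\eta| \leqslant 2|\eta| \approx \lambda_2 = \mu, \qquad
\| m \|_{L^\infty_\xi L^2_\eta} \leqslant \big\| \mathbbold{1}_{\{ |\eta| \approx \lambda_2 \}} \big\|_{L^2_\eta} \approx \lambda_2^{\frac{n}{2}} = \mu^{\frac{n}{2}}.
\]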
For the proof of \eqref{eq:high-mod2}, it is enough to show
\[\|(1+i|\nabla|^{-1}\partial_t) P_{\lambda_0}(u_{\lambda_1} v_{\lambda_2})\|_{U^2_+}\lesssim{} \mu^{\frac{n}{2}} \big(\frac{\min\{\lambda_1,\lambda_2\}}{\mu} \big)^{\frac32} \|u_{\lambda_1}\|_{S}\|v_{\lambda_2}\|_{S}.\]
We decompose the left hand side into
\begin{align*}
(1+i |\nabla|^{-1}\partial_t)(u_{\lambda_1} v_{\lambda_2})={}& \big( (1+i |\nabla|^{-1}\partial_t) u_{\lambda_1}\big) v_{\lambda_2}-i |\nabla|^{-1}\mathcal{M}( |\nabla|^{-1}\partial_tu_{\lambda_1},v_{\lambda_2} )\\
&{}+i|\nabla|^{-1}( u_{\lambda_1} \partial_t v_{\lambda_2})-i|\nabla|^{-1}( |\nabla|^{-1} \partial_t u_{\lambda_1} |\nabla| v_{\lambda_2}).
\end{align*}
For the first term we directly apply Theorem \ref{thm:high-low U2 wave} as above and, since $\lambda_1\geq \lambda_2$, obtain
\begin{align*}
\| \big( (1+i |\nabla|^{-1}\partial_t) u_{\lambda_1}\big) v_{\lambda_2}\|_{U^2_+}\lesssim{}& \mu^{\frac{n}{2}}\|(1+i |\nabla|^{-1}\partial_t) u_{\lambda_1}\|_{U^2_+} \|v_{\lambda_2}\|_{U^2_-+U^2_+}\\
\lesssim{}& \mu^{\frac{n}{2}}\| u_{\lambda_1}\|_{S} \|v_{\lambda_2}\|_{S}.
\end{align*}
For the second term we apply \eqref{eq:mu} and obtain
\begin{align*}
\| |\nabla|^{-1}\mathcal{M}( |\nabla|^{-1}\partial_tu_{\lambda_1},v_{\lambda_2})\|_{U^2_+}\lesssim{}& \mu^{\frac{n}{2}} \big(\frac{\lambda_{\max}}{\mu} \big)^{\frac12} \frac{\lambda_2}{\lambda_0} \| |\nabla|^{-1}\partial_tu_{\lambda_1}\|_{U^2_+ + U^2_-} \|v_{\lambda_2}\|_{U^2_+ + U^2_-}\\
\lesssim{}&\mu^{\frac{n}{2}} \big(\frac{\lambda_2}{\mu} \big)^{\frac32}\| u_{\lambda_1}\|_{S} \|v_{\lambda_2}\|_{S}.
\end{align*}
For the terms in the second line, we apply \eqref{eq:pu} and obtain
\begin{align*}
\||\nabla|^{-1}( u_{\lambda_1} \partial_t v_{\lambda_2})\|_{U^2_+}\lesssim&{}\frac{\lambda_2}{\lambda_0} \mu^{\frac{n}{2}} \big(\frac{\lambda_{\max}}{\mu} \big)^{\frac12}\|u_{\lambda_1}\|_{U^2_+ + U^2_-}\||\nabla|^{-1}\partial_t v_{\lambda_2}\|_{U^2_+ + U^2_-}
\end{align*}
and similarly
\begin{align*}
\||\nabla|^{-1}( |\nabla|^{-1} \partial_t u_{\lambda_1} |\nabla| v_{\lambda_2})\|_{U^2_+}\lesssim{}&\frac{\lambda_2}{\lambda_0} \mu^{\frac{n}{2}} \big(\frac{\lambda_{\max}}{\mu} \big)^{\frac12}\||\nabla|^{-1}\partial_t u_{\lambda_1}\|_{U^2_+ + U^2_-}\| v_{\lambda_2}\|_{U^2_+ + U^2_-},
\end{align*}
so that
\[
\||\nabla|^{-1}( u_{\lambda_1} \partial_t v_{\lambda_2})\|_{U^2_+}+\||\nabla|^{-1}( |\nabla|^{-1} \partial_t u_{\lambda_1} |\nabla| v_{\lambda_2})\|_{U^2_+}\lesssim \mu^{\frac{n}{2}} \big(\frac{\lambda_2}{\mu} \big)^{\frac32}\| u_{\lambda_1}\|_{S} \|v_{\lambda_2}\|_{S},
\]
so the proof of \eqref{eq:high-mod2} is complete.
It remains to prove \eqref{eq:pv}, \eqref{eq:pu} and \eqref{eq:mu}. We start by observing that the standard $U^2$ Besov embedding (see Theorem \ref{thm:besov embedding} below) gives
\[
\|C_{\lesssim \lambda_{\max}}P_{\lambda_0}(u_{\lambda_1}v_{\lambda_2})\|_{U^2_{\pm_0}}\lesssim \lambda_{\max}^{\frac12}\|P_{\lambda_0}(u_{\lambda_1}v_{\lambda_2})\|_{L^2_{t,x}}.
\]
Now, if $u_{\lambda_1}$ is away from the cone, we have by \eqref{itm:lem S prop:Xsb} in Lemma \ref{lem:S properties}
\[\|P_{\lambda_0}(u_{\lambda_1}v_{\lambda_2})\|_{L^2_{t,x}}\lesssim \mu^{\frac{n}{2}} \|u_{\lambda_1}\|_{L^2_{t,x}}\|v_{\lambda_2}\|_{L^\infty_t L^2_{x}}\lesssim \mu^{\frac{n-1}{2}} \|u_{\lambda_1}\|_{S_w}\|v_{\lambda_2}\|_{S_w}, \]
and similarly, if $v_{\lambda_2}$ is away from the cone, we have
\[\|P_{\lambda_0}(u_{\lambda_1}v_{\lambda_2})\|_{L^2_{t,x}}\lesssim \mu^{\frac{n}{2}} \|u_{\lambda_1}\|_{L^\infty_t L^2_{x}}\|v_{\lambda_2}\|_{L^2_{t,x}} \lesssim \mu^{\frac{n-1}{2}} \|u_{\lambda_1}\|_{S_w}\|v_{\lambda_2}\|_{S_w}. \]
In summary, we have
\begin{equation}\label{eq:close-cone-p}
\|C_{\lesssim \lambda_{\max}}P_{\lambda_0}(u_{\lambda_1}v_{\lambda_2})\|_{U^2_{\pm_0}}\lesssim \lambda_{\max}^{\frac12}\mu^{\frac{n-1}{2}} \|u_{\lambda_1}\|_{S_w}\|v_{\lambda_2}\|_{S_w}.
\end{equation}
On the other hand, we note that by Theorem \ref{thm:high-low U2 wave} and the uniform disposability from \eqref{itm:lem S prop:dis} in Lemma \ref{lem:S properties}
\begin{align*}
&\|C_{\gg \lambda_{\max}}P_{\lambda_0}(u_{\lambda_1}v_{\lambda_2})\|_{V^2_{\pm_0}}\\
\lesssim{}& \|P_{\lambda_0}(C_{\gg \lambda_{\max}} u_{\lambda_1}v_{\lambda_2})\|_{V^2_{\pm_0}} + \|P_{\lambda_0}(C_{\lesssim \lambda_{\max}} u_{\lambda_1} C_{\gg \lambda_{\max}} v_{\lambda_2})\|_{V^2_{\pm_0}}\\
\lesssim{}& \mu^{\frac{n}{2}} \|u_{\lambda_1}\|_{S_w}\|v_{\lambda_2}\|_{S_w}.
\end{align*}
This, together with the estimate \eqref{eq:close-cone-p} and the embedding $U^2_{\pm_0}\subset V^2_{\pm_0}$, finishes the proof of \eqref{eq:pv}.
The proof of \eqref{eq:pu} follows from the same argument, by using the $U^2$-estimate in Theorem \ref{thm:high-low U2 wave} and the embedding $U^2_++U^2_- \subset S_w$ instead.
This argument also proves \eqref{eq:mu}, because Theorem \ref{thm:high-low U2 wave} allows for multipliers, and because the corresponding fixed-time multiplier bound in $L^2_x$ needed in \eqref{eq:close-cone-p} is immediate.
\end{proof}
\subsection{Bilinear $L^2_{t,x}$ Estimates}\label{subsec:bil}
In this section we give the second key bilinear bound that is required for the proof of Theorem \ref{thm:div-prob}. Similar to the previous section, the key bilinear input is a special case of a \emph{general} bilinear restriction estimate in $L^2_{t,x}$, which holds not just for the wave equation, but also for general phases. On the other hand, in contrast to the previous section, the bilinear estimate we prove here is much more delicate, as it handles the case where both functions, as well as their product, are close to the light cone.
We start with some motivation. Let $\lambda\geqslant 1$ and define
$$ \Lambda_1= \{ |\xi - e_1| < \tfrac{1}{100} \}, \qquad \qquad \Lambda_2= \{ |\xi \mp \lambda e_2| < \tfrac{1}{100} \lambda \} $$
with $e_1 = (1, 0, \dots, 0)$ and $e_2 = (0, 1, 0, \dots, 0)$. For free solutions, an application of Plancherel followed by H\"older implies that if $\supp \widehat{f} \subset \Lambda_1$ and $\supp \widehat{g} \subset \Lambda_2$, then we have the bilinear estimate
\begin{equation}\label{eqn:bilinear L2 free solns} \| e^{ - i t|\nabla|} f e^{ \mp i t |\nabla|} g \|_{L^2_{t,x}} \lesssim \| f\|_{L^2_x} \| g \|_{L^2_x}.
\end{equation}
The atomic definition of $U^2$ then easily implies that
$$ \| u v \|_{L^2_{t,x}} \lesssim \| u \|_{U^2_+} \|v \|_{U^2_\pm}$$
for $\supp \widehat{u} \subset \Lambda_1$ and $\supp \widehat{v} \subset \Lambda_2$. Although this estimate is potentially useful, the fact that both functions must be placed into $U^2$ means that it is far too weak to deduce the estimates required to solve the division problem in Theorem \ref{thm:div-prob}. In fact this lack of a good $V^2$ replacement for \eqref{eqn:bilinear L2 free solns} was a key motivation in developing the \emph{null frame spaces} of Tataru, which were constructed to solve this issue (among others).
Recently however, in work of the first author \cite{Candy2017b}, it was shown that it is possible to deduce a $U^2 \times V^2$ version of \eref{eqn:bilinear L2 free solns} provided that the low frequency term is placed in $U^2$. It turns out that the bilinear estimate for free solutions given in \eref{eqn:bilinear L2 free solns} is insufficient for this purpose, essentially since it does not exploit any dispersive properties of free waves. Instead, the argument given in \cite{Candy2017b} shows that the $U^2 \times V^2$ estimate can be reduced to a deeper property of free waves, namely the bilinear estimates satisfied by \emph{wave tables}. The wave table construction efficiently exploits both transversality and curvature, and was introduced by Tao \cite{Tao2001b} in the proof of the endpoint bilinear restriction estimate for the cone. In the case of the wave equation, the conclusion is the following.
\begin{theorem}[{\cite[Theorem 1.7 and Theorem 1.10]{Candy2017b}}]\label{thm:bilinear restriction}
Let $2 \leqslant a \leqslant b < n+1$. Let $0< \lambda_1 \leqslant \lambda_2$, $0<\alpha \leqslant 1$, and $\kappa, \kappa' \in \mc{C}_{\alpha}$ with $\angle(\pm \kappa, \kappa') \approx \alpha$. If $u \in U^a_{+}$ and $v \in U^b_{\pm}$ then
$$ \| R_{\kappa} u_{\lambda_1} R_{\kappa'} v_{\lambda_2} \|_{L^2_{t,x}} \lesssim \alpha^{\frac{n-3}{2}} \lambda_1^{ \frac{n-1}{2}} \big( \frac{\lambda_2}{\lambda_1} \big)^{(n+1)(\frac{1}{2} - \frac{1}{a})} \| R_{\kappa} u_{\lambda_1} \|_{U^a_+} \| R_{\kappa'}v_{\lambda_2} \|_{U^b_\pm}.$$
If $\alpha \approx 1$ and $\lambda_1 \approx \lambda_2$, then for cubes $q, q' \in \mc{Q}_\mu$ with $0<\mu \lesssim \lambda_1$ we have the stronger bound
$$ \| R_{\kappa} P_q u_{\lambda_1} P_{q'} R_{\kappa'} v_{\lambda_2} \|_{L^2_{t,x}} \lesssim \mu^{\frac{n-1}{2}} \| R_{\kappa} P_q u_{\lambda_1} \|_{U^a_+} \| R_{\kappa'}P_{q'} v_{\lambda_2} \|_{U^b_\pm}.$$
\end{theorem}
\begin{proof}
In the special case $\alpha \approx 1$, the bounds are a consequence of the wave table construction introduced by Tao in \cite{Tao2001b}, together with an induction on scales argument. In Section \ref{sec:adapted bilinear restriction}, we give the details of this argument by following \cite{Candy2017b} in the case of the cone. In the case $0<\alpha \lesssim 1$, the proof requires a slightly more general wave table decomposition, see the proof of \cite[Theorem 1.7 and Theorem 1.10]{Candy2017b} and Remark \ref{rmk:general-alpha}.
\end{proof}
\begin{remark}\label{rem:general bilinear restriction}
The argument developed in \cite{Candy2017b} in fact shows that the bound in Theorem \ref{thm:bilinear restriction} can be generalised to the full bilinear range, and moreover holds for general phases under suitable curvature and transversality assumptions. See Section \ref{sec:adapted bilinear restriction} for further discussion.
\end{remark}
After using the standard embedding $V^2 \subset U^b$, we see that
$$ \| R_{\kappa} u_\mu R_{\kappa'} v_\lambda \|_{L^2_{t,x}} \lesssim \alpha^{\frac{n-3}{2}} \mu^{\frac{n-1}{2}} \| R_{\kappa} u_{\mu} \|_{U^2_\pm} \| R_{\kappa'}v_{\lambda} \|_{V^2_+} . $$
In particular, we have a bilinear $L^2_{t,x}$ estimate with the high frequency term in $V^2$, \emph{without} any high frequency loss.
If we combine Theorem \ref{thm:bilinear restriction} with an analysis of the resonant set, we obtain the following key bilinear estimate.
\begin{theorem}[Main Bilinear $L^2_{t,x}$ bound]\label{thm:bilinear L2 bound}
Let $d, \lambda_0, \lambda_1, \lambda_2 \in 2^\mathbb{Z}$ and $\epsilon>0$. If $\lambda_1 \leqslant \lambda_2$, $\mu = \min\{\lambda_0, \lambda_1\}$, and $d \lesssim \mu$, we have the bilinear estimates
\begin{align}
\big\| C_d P_{\lambda_0} \big( C_{\ll d } u_{\lambda_1} C_{\ll d} v_{\lambda_2} \big) \big\|_{L^2_{t,x}}
&\lesssim d^{\frac{n-3}{4}} \mu^{\frac{n-1}{4}} \lambda_1^\frac{1}{2} \| u_{\lambda_1} \|_{S} \| v_{\lambda_2} \|_{S} \label{eqn:thm bilinear L2:U2U2}\\
\big\| C_d P_{\lambda_0} \big( C_{\ll d } u_{\lambda_1} C_{\ll d} v_{\lambda_2} \big) \big\|_{L^2_{t,x}}
&\lesssim d^{\frac{n-3}{4}} \mu^{\frac{n-1}{4}} \lambda_1^\frac{1}{2} \big( \frac{\lambda_1^2}{ d \mu} \big)^\epsilon \| u_{\lambda_1} \|_{S} \| v_{\lambda_2} \|_{S_w} \label{eqn:thm bilinear L2:U2V2}\\
\big\| C_d P_{\lambda_0} \big( C_{\ll d } u_{\lambda_1} C_{\ll d} v_{\lambda_2} \big) \big\|_{L^2_{t,x}}
&\lesssim d^{\frac{n-3}{4}} \mu^{\frac{n-1}{4}} \lambda_1^\frac{1}{2} \big( \frac{\lambda_1 \lambda_2}{ d \mu} \big)^\epsilon \| u_{\lambda_1} \|_{S_w} \| v_{\lambda_2} \|_{S_w}. \label{eqn:thm bilinear L2:V2V2}
\end{align}
On the other hand, if $d \gg \mu$, we have
\begin{equation}\label{eqn:thm bilinear L2:V2V2-high}
\big\| C_d P_{\lambda_0} \big( C_{\ll d } u_{\lambda_1} C_{\ll d} v_{\lambda_2} \big) \big\|_{L^2_{t,x}}\lesssim \mu^{\frac{n-1}{2}} \big(\frac{\lambda_1}{\mu} \big)^\epsilon \| u_{\lambda_1} \|_{S_w} \| v_{\lambda_2} \|_{S_w}.
\end{equation}
\end{theorem}
The first bound in the previous theorem, \eqref{eqn:thm bilinear L2:U2U2}, is a direct consequence of the corresponding estimate for free solutions. In particular, it is sharp. On the other hand, due to the fact that we now only have $V^2_\pm$ control over $v_{\lambda_2}$, the bounds \eqref{eqn:thm bilinear L2:U2V2}, \eqref{eqn:thm bilinear L2:V2V2} and \eqref{eqn:thm bilinear L2:V2V2-high} require the bilinear restriction estimates contained in Theorem \ref{thm:bilinear restriction}. The key point is that we have no high-frequency $\lambda_2$ loss, provided that we place the low frequency term $u_{\lambda_1}$ into $U^2_{\pm}$ (i.e. the $S$ norm). The only loss in \eqref{eqn:thm bilinear L2:U2V2} appears when $\lambda_1 \approx \lambda_2 \gg \lambda_0$, which is in general an easier case to deal with. On the other hand, placing both $u_{\lambda_1}$ and $v_{\lambda_2}$ into $V^2$ causes an $\epsilon$ loss in the high frequency $\lambda_2$, and thus is only useful in certain special cases.
To reduce Theorem \ref{thm:bilinear L2 bound} to the bilinear restriction estimates in Theorem \ref{thm:bilinear restriction}, we need to show that the waves $u$ and $v$ are transverse. This is a consequence of the following.
\begin{lemma}[Resonance bound for full cone]\label{lem:resonance bound full cone}
Let $d, \lambda_0, \lambda_1, \lambda_2 \in 2^\mathbb{Z}$. Assume that $(\tau, \xi), (\tau', \eta) \in \mathbb{R}^{1+n}$ satisfy
$|\xi| \approx \lambda_1$, $|\eta| \approx \lambda_2$, $|\xi+\eta| \approx \lambda_0$
and
$$
\big| |\tau| - |\xi| \big| \ll d, \qquad \big| |\tau'| - |\eta|\big| \ll d, \qquad
\big| |\tau+\tau'| - |\xi+\eta| \big| \approx d. $$
If $d \lesssim \min\{\lambda_0, \lambda_1, \lambda_2\}$, then
\begin{align*}
\angle\big( \sgn(\tau) \xi, \sgn(\tau') \eta\big) \approx \big( \frac{d \lambda_0}{\lambda_1 \lambda_2} \big)^\frac{1}{2},\\
\angle\big( \sgn(\tau+\tau') (\xi+\eta), \sgn(\tau) \xi\big) \lesssim \big( \frac{d \lambda_2}{\lambda_0 \lambda_1} \big)^\frac{1}{2},\\
\angle\big( \sgn(\tau+\tau') (\xi+\eta), \sgn(\tau') \eta \big) \lesssim \big( \frac{d \lambda_1}{\lambda_0 \lambda_2} \big)^\frac{1}{2}.
\end{align*}
On the other hand, if $d \gg \min\{\lambda_0, \lambda_1, \lambda_2\}$ then in fact $\sgn(\tau) = \sgn(\tau')$ and
$$ \angle\big( \xi, \eta \big) \approx 1, \qquad d \approx \max\{\lambda_0, \lambda_1, \lambda_2\}, \qquad \lambda_0 \ll \lambda_1 \approx \lambda_2. $$
\end{lemma}
\begin{proof}
We first observe that since
$$ \big| |\tau + \tau'| - | \sgn(\tau)|\xi| + \sgn(\tau') |\eta| | \big| \leqslant \big| |\tau| - |\xi| \big| + \big| |\tau'| -|\eta| \big| \ll d$$
and
$$ \big| ( \sgn(\tau) |\xi| + \sgn(\tau')|\eta|)^2 - |\xi + \eta|^2 \big| \approx \lambda_1 \lambda_2 \angle^2\big( \sgn(\tau) \xi, \sgn(\tau')\eta\big)$$
we have
\begin{equation}\label{eqn:proof of lem resonance bound:main ident}
d \approx \big| \big| \sgn(\tau) |\xi| + \sgn(\tau')|\eta|\big| - |\xi + \eta| \big|
\approx \frac{\lambda_1 \lambda_2 \angle^2\big( \sgn(\tau) \xi, \sgn(\tau')\eta\big)}{\big| | \sgn(\tau) |\xi| + \sgn(\tau')|\eta|| + |\xi + \eta| \big|}.
\end{equation}
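For completeness, the second relation in \eqref{eqn:proof of lem resonance bound:main ident} is just the difference-of-squares computation (a one-line expansion, with $\theta := \angle(\sgn(\tau)\xi, \sgn(\tau')\eta)$):
\[
\big( \sgn(\tau)|\xi| + \sgn(\tau')|\eta| \big)^2 - |\xi+\eta|^2 = 2|\xi||\eta|\big( \sgn(\tau)\sgn(\tau') - \cos \angle(\xi,\eta)\big), \qquad \big| \sgn(\tau)\sgn(\tau') - \cos\angle(\xi,\eta) \big| = 1 - \cos\theta \approx \theta^2,
\]
after which dividing the difference of squares by $\big| |\sgn(\tau)|\xi| + \sgn(\tau')|\eta|| + |\xi+\eta| \big|$ gives the stated quotient.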
If $\lambda_0 \approx \max\{\lambda_1, \lambda_2\}$, then \eref{eqn:proof of lem resonance bound:main ident} already gives $ d\lesssim \min\{ \lambda_0, \lambda_1, \lambda_2\}$ and the claimed angle bounds, so it remains to consider the case $\lambda_0 \ll \lambda_1 \approx \lambda_2$. We first consider the interactions where $\sgn(\tau) = -\sgn(\tau')$, which implies that
$$ \big| | \sgn(\tau) |\xi| + \sgn(\tau')|\eta|| + |\xi + \eta| \big| \approx \lambda_0, \qquad \lambda_1 \lambda_2 \angle^2(\xi, -\eta) \approx |\xi + \eta|^2 - \big| |\xi| - |\eta| \big|^2 \lesssim \lambda_0^2.$$
Consequently the claimed bounds again follow immediately from \eref{eqn:proof of lem resonance bound:main ident}. On the other hand, if $\sgn(\tau) = \sgn(\tau')$ then, as $\angle(\xi, - \eta) \lesssim \frac{\lambda_0}{\lambda_1} \ll 1$, we must have $\angle(\xi, \eta) \approx 1$ and hence \eref{eqn:proof of lem resonance bound:main ident} implies that
$$ d \approx \frac{\lambda_1 \lambda_2}{\lambda_1} \angle^2(\xi, \eta) \approx \lambda_1$$
as required. It only remains to control the angle between $\xi+\eta$ and $\xi$. To this end, an analogous computation to that used to deduce \eref{eqn:proof of lem resonance bound:main ident} gives
$$ \lambda_0 \lambda_1 \angle^2\big( \sgn(\tau + \tau') (\xi+\eta), \sgn(\tau)\xi\big) \lesssim d \big( \big| \sgn(\tau + \tau') |\xi + \eta| - \sgn(\tau) |\xi|\big| + |\eta| \big) $$
which suffices unless we have $\lambda_0\approx \lambda_1 \gg \lambda_2$ and $\sgn(\tau + \tau') = - \sgn(\tau)$. But this implies that
$$ d \approx \big| |\tau + \tau'| - |\xi+\eta|\big| \approx \big| \sgn(\tau) |\xi| +\sgn(\tau') |\eta| - \sgn(\tau + \tau')|\xi + \eta| \big| \approx \lambda_0 $$
which contradicts the previous computation showing that we must have $d \lesssim \lambda_2$. Hence the case $\lambda_0\approx \lambda_1 \gg \lambda_2$ and $\sgn(\tau + \tau') = - \sgn(\tau)$ cannot occur, and consequently we deduce the claimed angle bound between $\xi+\eta$ and $\xi$; the bound for the angle between $\xi+\eta$ and $\eta$ follows by exchanging the roles of $(\tau, \xi)$ and $(\tau', \eta)$.
\end{proof}
\begin{lemma}[Lower bound on resonance]\label{lem:lb-resonance}
Let $d, \lambda_0, \lambda_1, \lambda_2 \in 2^\mathbb{Z}$. Assume that $(\tau, \xi), (\tau', \eta) \in \mathbb{R}^{1+n}$ satisfy
$$ |\xi| \approx \lambda_1, \qquad |\eta| \approx \lambda_2, \qquad |\xi+\eta| \approx \lambda_0$$
and
$$ \big| |\tau| - |\xi| \big| \lesssim d, \qquad \big| |\tau'| - |\eta|\big| \lesssim d, \qquad \big| |\tau+\tau'| - |\xi+\eta| \big| \lesssim d. $$
Then,
\begin{align*}
&\angle\big( \sgn(\tau) \xi, \sgn(\tau') \eta\big)+\angle\big( \sgn(\tau) \xi, \sgn(\tau+\tau') (\xi+\eta)\big)\\
{}&\qquad +\angle\big( \sgn(\tau') \eta, \sgn(\tau+\tau') (\xi+\eta)\big)\lesssim{} \big( \frac{d}{\min\{\lambda_0, \lambda_1, \lambda_2\}} \big)^{\frac{1}{2}}.
\end{align*}
\end{lemma}
\begin{proof}
We start with the observation
\[
\big| \sgn(\tau) |\xi| + \sgn(\tau')|\eta| - \sgn(\tau+\tau')|\xi + \eta| \big|\lesssim d.
\]
The left hand side is bounded below by
\[
\big| | \sgn(\tau) |\xi| + \sgn(\tau')|\eta|| - |\xi + \eta| \big|\approx \frac{|\xi||\eta| \angle^2\big( \sgn(\tau) \xi, \sgn(\tau') \eta\big)}{| \sgn(\tau) |\xi| + \sgn(\tau')|\eta|| + |\xi + \eta|},
\]
which implies the bound on the first summand. The bounds on the other two summands follow similarly.
\end{proof}
We now give the proof of Theorem \ref{thm:bilinear L2 bound}.
\begin{proof}[Proof of Theorem \ref{thm:bilinear L2 bound}]
Let $\lambda_0, \lambda_1, \lambda_2 \in 2^\mathbb{Z}$ with $\lambda_1 \leqslant \lambda_2$, and take $\mu = \min\{ \lambda_0, \lambda_1, \lambda_2\}$. It is enough to consider the case $u_{\lambda_1} \in U^2_+$ (or $V^2_+$) and $v_{\lambda_2}\in U^2_{\pm}$ (or $V^2_{\pm}$). Suppose that $d \lesssim \mu$ and note that we can write
$$C_{\ll d} u_{\lambda_1} = C_{\ll d}^+ u_{\lambda_1} + C_{\ll d}^- u_{\lambda_1} = C_{\ll d}^+ u_{\lambda_1} + C_{\ll d}^- C^+_{\approx \lambda_1} u_{\lambda_1}.$$
Applying H\"{o}lder's inequality we deduce that
\begin{align}
\big\| C_d P_{\lambda_0} \big( C_{\ll d }^- u_{\lambda_1} C_{\ll d} v_{\lambda_2} \big) \big\|_{L^2_{t,x}} &\lesssim \mu^{\frac{n}{2}} \| C^+_{\approx \lambda_1} u_{\lambda_1} \|_{L^2_{t,x}} \| v_{\lambda_2} \|_{L^\infty_t L^2_x} \notag \\
&\lesssim \mu^{\frac{n}{2}} \lambda_1^{-\frac{1}{2}} \| u_{\lambda_1} \|_{U^4_+} \| v_{\lambda_2} \|_{U^4_\pm} \label{eqn:proof thm bilinear L2:far cone I}
\end{align}
which suffices if $n=2, 3$ (clearly we can choose an exponent larger than $4$ if necessary). To obtain a slightly sharper bound, we can decompose into caps/cubes before applying H\"{o}lder, namely, letting $\alpha = (\frac{d \lambda_0}{\lambda_1 \lambda_2})^\frac{1}{2}$ and applying Lemma \ref{lem:resonance bound full cone}, we have
\begin{align*}
&\big\| C_d P_{\lambda_0} \big( C_{\ll d }^- u_{\lambda_1} C_{\ll d} v_{\lambda_2} \big) \big\|_{L^2_{t,x}} \\
\lesssim{}& \big( \sum_{\substack{\kappa, \kappa' \in \mc{C}_\alpha }} \sum_{q, q'\in \mc{Q}_\mu} \big\| C_d P_{\lambda_0}\big( R_{\kappa} P_q C^-_{\ll d}u_{\lambda_1} R_{\kappa'} P_{q'} C_{\ll d} v_{\lambda_2} \big) \big\|_{L^2_{t,x}}^2 \big)^\frac{1}{2} \\
\lesssim{}& \mu^{\frac{1}{2}} (\mu d)^{\frac{n-1}{4}} \big( \sum_{\substack{\kappa, \kappa' \in \mc{C}_\alpha }} \sum_{q, q'\in \mc{Q}_\mu} \| R_{\kappa} P_q C^+_{\approx \lambda_1} u_{\lambda_1} \|_{L^2_{t,x}}^2 \| R_{\kappa'} P_{q'} v_{\lambda_2} \|_{L^\infty_t L^2_x}^2 \big)^\frac{1}{2} \\
\lesssim{}& \mu^{\frac{1}{2}} (\mu d)^{\frac{n-1}{4}} \lambda_1^{-\frac{1}{2}} \| u_{\lambda_1} \|_{U^2_+} \| v_{\lambda_2} \|_{U^2_\pm}
\end{align*}
where we used the fact that since $\lambda_1 \leqslant \lambda_2$ we have $\lambda_1 \alpha = ( d \mu)^{\frac{1}{2}}$. Interpolating with \eref{eqn:proof thm bilinear L2:far cone I}, we obtain an estimate which clearly suffices in higher dimensions as well. After noting the identity
$$ C_{\ll d} v_{\lambda_2} = C_{\ll d}^\pm v_{\lambda_2} + C_{\approx \lambda_2}^{\pm} C_{\ll d}^{\mp} v_{\lambda_2} $$
a similar argument to the above reduces the problem to proving the bounds
\begin{equation}\label{eqn:proof of thm bilinear L2 bound:initial reduction}
\begin{split}
\big\| C_d P_{\lambda_0} \big( C^+_{\ll d } u_{\lambda_1} C^\pm_{\ll d} v_{\lambda_2} \big) \big\|_{L^2_{t,x}} &\lesssim d^{\frac{n-3}{4}} \mu^{\frac{n-1}{4}} \lambda_1^\frac{1}{2} \| u_{\lambda_1} \|_{U^2_+} \| v_{\lambda_2} \|_{U^2_{\pm}},\\
\big\| C_d P_{\lambda_0} \big( C^+_{\ll d } u_{\lambda_1} C^\pm_{\ll d} v_{\lambda_2} \big) \big\|_{L^2_{t,x}} &\lesssim d^{\frac{n-3}{4}} \mu^{\frac{n-1}{4}} \lambda_1^\frac{1}{2} \big( \frac{\lambda_1^2}{\mu d } \big)^\epsilon \| u_{\lambda_1} \|_{U^2_+} \| v_{\lambda_2} \|_{V^2_{\pm}},\\
\big\| C_d P_{\lambda_0} \big( C^+_{\ll d } u_{\lambda_1} C^\pm_{\ll d} v_{\lambda_2} \big) \big\|_{L^2_{t,x}} &\lesssim d^{\frac{n-3}{4}} \mu^{\frac{n-1}{4}} \lambda_1^\frac{1}{2} \big( \frac{\lambda_1 \lambda_2}{\mu d } \big)^\epsilon \| u_{\lambda_1} \|_{V^2_+} \| v_{\lambda_2} \|_{V^2_{\pm}}.
\end{split}
\end{equation}
We now exploit Lemma \ref{lem:resonance bound full cone} and orthogonality. Let $\alpha = ( \frac{d \lambda_0}{\lambda_1 \lambda_2} )^\frac{1}{2}$ and $\beta = (\frac{d \lambda_1}{\lambda_0 \lambda_2})^\frac{1}{2}$. Note that since we assume $\lambda_1 \leqslant \lambda_2$, we have $\lambda_1 \alpha = (d \mu)^\frac{1}{2}$ and $\lambda_0 \beta = (d \lambda_0)^\frac{1}{2}$. An application of Lemma \ref{lem:resonance bound full cone} and orthogonality implies that after decomposing into caps, we have
\begin{equation}\label{eqn:proof of bilinear L2 thm:L2 bdd by sqr sum}
\begin{split}& \big\| C_d P_{\lambda_0} \big( C^+_{\ll d } u_{\lambda_1} C^\pm_{\ll d} v_{\lambda_2} \big) \big\|_{L^2_{t,x}}^2 \\
\lesssim{}& \sum_{\kappa'' \in \mc{C}_\beta} \sum_{q, q' \in \mc{Q}_\mu} \big( \sum_{\substack{\kappa, \kappa'\in \mc{C}_\alpha \\ |\kappa \mp \kappa'| \approx \alpha}} \big\| C_d P_{\lambda_0} \big( C^+_{\ll d }R_{\kappa} P_q u_{\lambda_1} C^\pm_{\ll d} R_{\kappa'} R_{\kappa''} P_{q'} v_{\lambda_2} \big) \big\|_{L^2_{t,x}} \big)^2.
\end{split}
\end{equation}
The standard bilinear $L^2_{t,x}$ bound for free solutions, together with the disposability of the $C_{\ll d}$ multipliers, gives for any cubes $q, q' \in \mc{Q}_\mu$ and caps $\kappa, \kappa' \in \mc{C}_\alpha$ with $|\kappa \mp \kappa'| \approx \alpha$ the estimate
\begin{align*}
&\big\| C_d P_{\lambda_0} \big( C^+_{\ll d }R_{\kappa} P_q u_{\lambda_1} C^\pm_{\ll d} R_{\kappa'} R_{\kappa''} P_{q'} v_{\lambda_2} \big) \big\|_{L^2_{t,x}}\\
\lesssim{}& d^{\frac{n-3}{4}} \mu^{\frac{n-1}{4}} \lambda_1^\frac{1}{2} \| R_{\kappa} P_q u_{\lambda_1} \|_{U^2_+} \| R_{\kappa'} R_{\kappa''} P_{q'} v_{\lambda_2} \|_{U^2_{\pm}}.
\end{align*}
Therefore, from \eref{eqn:proof of bilinear L2 thm:L2 bdd by sqr sum} and the $U^2$ square sum bound, we deduce that
\begin{equation}\label{eqn:proof thm bilinear L2:bdd by U2}
\big\| C_d P_{\lambda_0} \big( C^+_{\ll d } u_{\lambda_1} C^\pm_{\ll d} v_{\lambda_2} \big) \big\|_{L^2_{t,x}}
\lesssim d^{\frac{n-3}{4}} \mu^{\frac{n-1}{4}} \lambda_1^\frac{1}{2} \| u_{\lambda_1} \|_{U^2_+} \| v_{\lambda_2} \|_{U^2_{\pm}}.
\end{equation}
To replace $U^2$ with $V^2$, we apply Theorem \ref{thm:bilinear restriction}. We first suppose that $\lambda_1 \approx \lambda_2$. After applying Lemma \ref{lem:resonance bound full cone} and decomposing into caps of size $\alpha$, Theorem \ref{thm:bilinear restriction} implies that for any $2 \leqslant a \leqslant b <n+1$
\begin{align}
&\big\| C_d P_{\lambda_0} \big( C^+_{\ll d } u_{\lambda_1} C^\pm_{\ll d} v_{\lambda_2} \big) \big\|_{L^2_{t,x}} \notag\\
\lesssim{}& \sum_{\substack{\kappa, \kappa' \in \mc{C}_\alpha \\ |\kappa - \kappa'| \approx \alpha }} \big\| C^+_{\ll d } R_\kappa u_{\lambda_1} C^\pm_{\ll d} R_{\kappa'} v_{\lambda_2} \big\|_{L^2_{t,x}} \notag \\
\lesssim{}& \alpha^{\frac{n-3}{2}} \lambda_1^{\frac{n-1}{2}} \sum_{\substack{\kappa, \kappa' \in \mc{C}_\alpha \\ |\kappa - \kappa'| \approx \alpha }} \big\| R_\kappa u_{\lambda_1}\|_{U^a_+} \| R_{\kappa'} v_{\lambda_2} \big\|_{U^b_{\pm}} \notag \\
\lesssim{}& \alpha^{\frac{n-3}{2} - (n-1)(1 - \frac{1}{a}-\frac{1}{b})} \lambda_1^{\frac{n-1}{2}} \big( \sum_{\kappa \in \mc{C}_\alpha} \| R_\kappa u_{\lambda_1}\|_{U^a_+}^a \big)^\frac{1}{a} \big( \sum_{\kappa' \in \mc{C}_\alpha} \| R_{\kappa'} v_{\lambda_2} \|_{U^b_{\pm}}^b \big)^\frac{1}{b} \notag \\
\lesssim{}& d^{\frac{n-3}{4}} \mu^{\frac{n-1}{4}} \lambda_1^\frac{1}{2} \big(\frac{\lambda_1}{\mu} \big)^\frac{1}{2} \big( \frac{\lambda_1^2}{ d \mu} \big)^{(n-1)(1-\frac{1}{a} - \frac{1}{b})} \|u_{\lambda_1}\|_{U^a_+} \| v_{\lambda_2} \|_{U^b_\pm}
\label{eqn:proof thm bilinear L2:high-high Ua}.
\end{align}
Combining \eref{eqn:proof thm bilinear L2:bdd by U2} and \eref{eqn:proof thm bilinear L2:high-high Ua} with the standard $V^2$ interpolation argument gives \eref{eqn:proof of thm bilinear L2 bound:initial reduction} in the case $\lambda_1 \approx \lambda_2$. On the other hand, if $\lambda_1 \ll \lambda_2$, we decompose into caps of size $\beta = (\frac{d}{\lambda_1})^\frac{1}{2}$ and again apply Lemma \ref{lem:resonance bound full cone} and Theorem \ref{thm:bilinear restriction} to deduce that for any $2\leqslant a \leqslant b < n+1$
\begin{align*}
& \big\| C_d P_{\lambda_0} \big( C^+_{\ll d } u_{\lambda_1} C^\pm_{\ll d} v_{\lambda_2} \big) \big\|_{L^2_{t,x}}\\
\lesssim{}& \big( \sum_{\kappa \in \mc{C}_\beta} \sup_{\substack{\kappa' \in \mc{C}_\beta \\ |\kappa - \kappa'| \approx \beta}} \big\| C^+_{\ll d } R_\kappa u_{\lambda_1} C^\pm_{\ll d} R_{\kappa'} v_{\lambda_2} \big\|_{L^2_{t,x}}^2 \big)^{\frac{1}{2}} \\
\lesssim{}& \beta^{\frac{n-3}{2}} \lambda_1^{\frac{n-1}{2}} \big( \frac{\lambda_2}{\lambda_1} \big)^{(n+1)(\frac{1}{2}-\frac{1}{a})} \big( \sum_{\kappa \in \mc{C}_\beta} \| R_{\kappa} u_{\lambda_1} \|_{U^a_+}^a \big)^\frac{1}{a} \| v_{\lambda_2} \|_{V^b_\pm}\\
\lesssim{}& \beta^{\frac{n-3}{2}- (n-1)(\frac{1}{2}-\frac{1}{a})} \lambda_1^{\frac{n-1}{2}} \big( \frac{\lambda_2}{\lambda_1} \big)^{(n+1)(\frac{1}{2}-\frac{1}{a})} \| u_{\lambda_1} \|_{U^a_+} \| v_{\lambda_2} \|_{V^2_\pm}.
\end{align*}
Choosing $a=2$, we get the $U^2 \times V^2$ estimate. Taking $a$ sufficiently close to 2 gives the $V^2 \times V^2$ estimate.
It remains to consider the case $d \gg \mu$. In light of Lemma \ref{lem:resonance bound full cone}, the left hand side is only nonzero if $\lambda_0 \ll \lambda_1 \approx \lambda_2$ and $d \approx \lambda_1$, and moreover, we have the identity
\begin{equation}\label{eq:dec-mod} \begin{split} &C_d P_{\lambda_0}( C_{\ll d} u_{\lambda_1} C_{\ll d } v_{\lambda_2} ) \\
={}& C_d P_{\lambda_0}( C^+_{\ll d} u_{\lambda_1} C^+_{\ll d } v_{\lambda_2} ) + C_d P_{\lambda_0}( C^-_{\ll d} u_{\lambda_1} C^-_{\ll d } v_{\lambda_2} ).
\end{split}
\end{equation}
To estimate the $L^2_{t,x}$ norm of the first term in \eqref{eq:dec-mod}, if $\pm=+$, we decompose into caps and apply Theorem \ref{thm:bilinear restriction}, which gives
\begin{align*}
\| C_d P_{\lambda_0}( C^+_{\ll d} u_{\lambda_1} C^+_{\ll d } v_{\lambda_2} ) \|_{L^2_{t,x}}
&\lesssim \sum_{\substack{ \kappa, \kappa' \in \mc{C}_\frac{1}{100} \\ \angle(\kappa, \kappa') \approx 1}}
\sum_{\substack{ q,q' \in \mc{Q}_\mu \\ |q-q'|\approx \mu}} \| R_{\kappa} P_q u_{\lambda_1} R_{\kappa'} P_{q'} v_{\lambda_2} \|_{L^2_{t,x}} \\
&\lesssim \mu^{\frac{n-1}{2}} \sum_{\substack{ q,q' \in \mc{Q}_\mu \\ |q-q'|\approx \mu}} \| P_q u_{\lambda_1}\|_{U^a_+} \| P_{q'} v_{\lambda_2} \|_{U^b_+} \\
&\lesssim \mu^{\frac{n-1}{2}} \big( \frac{\lambda_1}{\mu} \big)^{n(1-\frac{1}{a} - \frac{1}{b})} \| u_{\lambda_1} \|_{U^a_+} \| v_{\lambda_2} \|_{U^b_+}.
\end{align*}
If $\pm=-$, we have $C^+_{\ll d } v_{\lambda_2}=C^-_{\approx \lambda_2} C^+_{\ll d } v_{\lambda_2}$ and
\begin{align*}
\| C_d P_{\lambda_0}( C^+_{\ll d} u_{\lambda_1} C^+_{\ll d } v_{\lambda_2} ) \|_{L^2_{t,x}}
&\lesssim \mu^{\frac{n}{2}}\|u_{\lambda_1}\|_{L^\infty_t L^2_x}\|C^-_{\approx \lambda_2} v_{\lambda_2}\|_{L^2_{t,x}}\\
&\lesssim \mu^{\frac{n-1}{2}} \big( \frac{\mu}{\lambda_2} \big)^{\frac12}\|u_{\lambda_1}\|_{V^2_+}\| v_{\lambda_2}\|_{V^2_-}.
\end{align*}
To estimate the $L^2_{t,x}$ norm of the second term in \eqref{eq:dec-mod}, we use $C^-_{\ll d } u_{\lambda_1}=C^+_{\approx \lambda_1} C^-_{\ll d } u_{\lambda_1}$ and obtain
\begin{align*}
\| C_d P_{\lambda_0}( C^-_{\ll d} u_{\lambda_1} C^-_{\ll d } v_{\lambda_2} ) \|_{L^2_{t,x}}
&\lesssim \mu^{\frac{n}{2}}\|C^+_{\approx \lambda_1} u_{\lambda_1}\|_{L^2_{t,x}}\| v_{\lambda_2}\|_{L^\infty_t L^2_x}\\
&\lesssim \mu^{\frac{n-1}{2}} \big( \frac{\mu}{\lambda_1} \big)^{\frac12}\|u_{\lambda_1}\|_{V^2_+}\| v_{\lambda_2}\|_{V^2_\pm},
\end{align*}
which implies the claimed estimate as $\lambda_1\approx \lambda_2$ in this case.
\end{proof}
\subsection{Proof of Theorem \ref{thm:div-prob}}\label{subsec:non}
We now combine the high-low product estimates in Theorem \ref{thm:high-mod}, together with the bilinear $L^2_{t,x}$ estimate in Theorem \ref{thm:bilinear L2 bound}, and show that the space $S$ solves the division problem. To simplify the proof, we start by giving the following consequence of the bilinear $L^2_{t,x}$ bound, which is used to control the close cone interactions in Theorem \ref{thm:div-prob}.
\begin{lemma}\label{lem:tri-est}
Let $\epsilon>0$ and $\lambda_0, \lambda_1, \lambda_2 \in 2^\mathbb{Z}$ with $\mu = \min\{\lambda_0, \lambda_1, \lambda_2\}$. If $v_{\lambda_0}, u_{\lambda_1} \in S$ and $w_{\lambda_2} \in S_w$ then
\begin{equation}
\label{eq:tri-est1}\begin{split}
&
\Big| \int_{\mathbb{R}^{1+n}} C_{\lesssim \mu} v_{\lambda_0} C_{\lesssim \mu} u_{\lambda_1} \Box C_{\lesssim \mu} w_{\lambda_2}dx\, dt \Big| \\
\lesssim{}& \mu^{\frac{n-1}{2}}\lambda_2 (\min\{\lambda_0,\lambda_1\})^{\frac{1}{2}} \| v_{\lambda_0} \|_{S} \| u_{\lambda_1} \|_S \| w_{\lambda_2} \|_{S_w}.
\end{split}
\end{equation}
Similarly, if $u_{\lambda_1} \in S$ and $v_{\lambda_0}, w_{\lambda_2} \in S_w$, then
\begin{equation}
\label{eq:tri-est2}
\begin{split}
& \Big| \int_{\mathbb{R}^{1+n}} C_{\lesssim \mu} v_{\lambda_0}C_{\lesssim \mu} u_{\lambda_1} \Box C_{\lesssim \mu} w_{\lambda_2}
dx\, dt \Big| \\
\lesssim{}& \mu^{\frac{n-1}{2}}\lambda_2 (\min\{\lambda_0,\lambda_1\})^{\frac{1}{2}} \Big( \frac{ \lambda_1 \min\{\lambda_0, \lambda_1\}}{ \mu^2 } \Big)^\epsilon \| v_{\lambda_0} \|_{S_w} \| u_{\lambda_1} \|_S \| w_{\lambda_2} \|_{S_w}.
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
We start by proving the bounds
\begin{align}\label{eq:claim1}
\sum_{d \lesssim \mu } \Big| \int_{\mathbb{R}^{1+n}} C_d v_{\lambda_0} C_{\lesssim d}u_{\lambda_1} \Box C_{\lesssim d}w_{\lambda_2} dx dt \Big|
&\lesssim \mu^{\frac{n}{2}} \lambda_2 \| v_{\lambda_0} \|_{S_w} \| u_{\lambda_1} \|_S \| w_{\lambda_2} \|_{S_w}\\
\label{eq:claim2}
\sum_{d \lesssim \mu } \Big| \int_{\mathbb{R}^{1+n}} C_{\lesssim d} v_{\lambda_0} C_d u_{\lambda_1} \Box C_{\lesssim d} w_{\lambda_2} dx dt \Big|
&\lesssim \mu^{\frac{n}{2}} \lambda_2 \| v_{\lambda_0} \|_{S_w} \| u_{\lambda_1} \|_S \| w_{\lambda_2} \|_{S_w}
\end{align}
and, for every $\epsilon>0$,
\begin{equation}\label{eq:claim3}
\begin{split}
&\sum_{d \lesssim \mu } \Big| \int_{\mathbb{R}^{1+n}} C_{\ll d} v_{\lambda_0} C_{\ll d}u_{\lambda_1} \Box C_d w_{\lambda_2} dx dt \Big|\\
\lesssim{}& \mu^{\frac{n}{2}}\lambda_2 \Big(\frac{\min\{\lambda_0,\lambda_1\}}{\mu} \Big)^{\frac{1}{2}} \| v_{\lambda_0} \|_{S} \| u_{\lambda_1} \|_S \| w_{\lambda_2} \|_{S_w}, \\
&\sum_{d \lesssim \mu } \Big| \int_{\mathbb{R}^{1+n}} C_{\ll d} v_{\lambda_0} C_{\ll d}u_{\lambda_1} \Box C_d w_{\lambda_2} dx dt \Big|\\
\lesssim{}& \mu^{\frac{n}{2}}\lambda_2 \Big(\frac{\min\{\lambda_0,\lambda_1\}}{\mu} \Big)^{\frac{1}{2}+\epsilon} \Big( \frac{ \lambda_1 }{ \mu } \Big)^\epsilon \| v_{\lambda_0} \|_{S_w} \| u_{\lambda_1} \|_S \| w_{\lambda_2} \|_{S_w}.
\end{split}
\end{equation}
Let $\beta = (\frac{d}{\mu})^\frac{1}{2}$. The bound \eref{eq:claim1} follows by decomposing into caps of radius $\beta$, applying Lemma \ref{lem:lb-resonance} together with Lemma \ref{lem:S properties}, and observing that
\begin{align*}
& \int_{\mathbb{R}^{1+n}}C_{d} v_{\lambda_0} C_{\lesssim d} u_{\lambda_1} \Box C_{\lesssim d} w_{\lambda_2}dx dt \\
\lesssim{}&\beta^{\frac{n-1}{2}} \mu^{\frac{n}{2}} \sum_{\substack{\kappa, \kappa', \kappa'' \in \mc{C}_\beta \\ \angle_\pm (\kappa, \kappa'), \angle_\pm (\kappa ,\kappa'')\lesssim \beta}} \| R_{\kappa} C_d v_{\lambda_0} \|_{ L^2_{t,x}} \| R_{\kappa'} C_{\lesssim d} u_{\lambda_1} \|_{L^\infty_t L^2_{x}} \| \Box C_{\lesssim d} R_{\kappa''} w_{\lambda_2} \|_{L^2_{t,x}} \\
\lesssim{}& \beta^{\frac{n-1}{2}} \mu^{\frac{n}{2}} \| C_d v_{\lambda_0} \|_{L^2_{t,x}} \sup_{\kappa' \in \mc{C}_\beta} \| R_{\kappa'} C_{\lesssim d}u_{\lambda_1} \|_{L^\infty_t L^2_x} \| \Box C_{\lesssim d} w_{\lambda_2} \|_{L^2_{t,x}} \\
\lesssim{}& d^{\frac{n-1}{4}} \mu^{\frac{n+1}{4}} \lambda_2 \| v_{\lambda_0} \|_{S_w} \| u_{\lambda_1} \|_S \| w_{\lambda_2} \|_{S_w}.
\end{align*}
Consequently, summing up over modulation $d \lesssim \mu $, this gives \eqref{eq:claim1}. The proof of \eref{eq:claim2} is similar. More precisely, again decomposing into caps of radius $\beta$ gives
\begin{align*}
& \int_{\mathbb{R}^{1+n}}C_{\lesssim d} v_{\lambda_0} C_d u_{\lambda_1} \Box C_{\lesssim d} w_{\lambda_2}dx dt \\
\lesssim{}&\beta^{\frac{n-1}{2}} \mu^{\frac{n}{2}} \sum_{\substack{\kappa, \kappa', \kappa'' \in \mc{C}_\beta \\ \angle_\pm (\kappa, \kappa'), \angle_\pm (\kappa ,\kappa'')\lesssim \beta}} \| R_{\kappa} v_{\lambda_0} \|_{L^\infty_t L^2_x} \| R_{\kappa'} C_d u_{\lambda_1} \|_{L^2_{t,x}} \| \Box C_{\lesssim d} R_{\kappa''} w_{\lambda_2} \|_{L^2_{t,x}} \\
\lesssim{}& \beta^{\frac{n-1}{2}} \mu^{\frac{n}{2}} \sup_{\kappa \in \mc{C}_\beta} \| R_\kappa v_{\lambda_0} \|_{L^\infty_t L^2_x} \| C_d u_{\lambda_1} \|_{L^2_{t,x}} \| \Box C_{\lesssim d} w_{\lambda_2} \|_{L^2_{t,x}} \\
\lesssim{}& d^{\frac{n-1}{4}} \mu^{\frac{n+1}{4}} \lambda_2 \| v_{\lambda_0} \|_{S_w} \| u_{\lambda_1} \|_S \| w_{\lambda_2} \|_{S_w}.
\end{align*}
Consequently, summing up over modulation $d \lesssim \mu $, we have \eqref{eq:claim2}. To prove \eref{eq:claim3}, we first observe that Theorem \ref{thm:bilinear L2 bound} implies that for every $\epsilon>0$
\begin{align*} \big\| P_{\lambda_2} C_d \big( C_{\ll d} &v_{\lambda_0} C_{\ll d} u_{\lambda_1} \big) \big\|_{L^2_{t,x}}\\
& \lesssim d^{\frac{n-3}{4}} \mu^{\frac{n-1}{4}} (\min\{\lambda_0, \lambda_1\})^\frac{1}{2} \Big( \frac{ \lambda_1 \min\{\lambda_0, \lambda_1\}}{ d \mu } \Big)^\epsilon \| v_{\lambda_0}\|_{S_w} \| u_{\lambda_1} \|_{S},
\end{align*}
where we can put $\epsilon=0$ if $\| v_{\lambda_0} \|_{S_w} $ is replaced by $\| v_{\lambda_0} \|_{S}$. Hence an application of H\"older's inequality gives
\begin{align*}
&\int_{\mathbb{R}^{1+n}} C_{\ll d} v_{\lambda_0} C_{\ll d}u_{\lambda_1} \Box C_d w_{\lambda_2} dx dt \\
\lesssim{}& \| C_{d} P_{\lambda_2} \big( C_{\ll d} v_{\lambda_0} C_{\ll d} u_{\lambda_1}\big) \|_{L^2_{t,x}} \| \Box C_d w_{\lambda_2} \|_{L^2_{t,x}} \\
\lesssim{}& d^{\frac{n-1}{4}} \mu^{\frac{n-1}{4}} \lambda_2 (\min\{\lambda_0, \lambda_1\})^\frac{1}{2} \Big( \frac{ \lambda_1 \min\{\lambda_0, \lambda_1\}}{ d \mu } \Big)^\epsilon \| v_{\lambda_0}\|_{S_w} \| u_{\lambda_1} \|_{S} \| w_{\lambda_2} \|_{S_w},
\end{align*}
where, again, we can put $\epsilon=0$ if $\| v_{\lambda_0} \|_{S_w} $ is replaced by $\| v_{\lambda_0} \|_{S}$.
Summing up over modulation $d \lesssim \mu $, this gives \eqref{eq:claim3}.
In order to finally prove \eqref{eq:tri-est1} and \eqref{eq:tri-est2}, we decompose the product into
\begin{align*}
& C_{\lesssim \mu } v_{\lambda_0} C_{\lesssim \mu }u_{\lambda_1} \Box C_{\lesssim \mu }w_{\lambda_2} =
\sum_{d\lesssim \mu } C_d v_{\lambda_0} C_{\leq d}u_{\lambda_1} \Box C_{\leq d}w_{\lambda_2} \\ & {}\qquad \qquad + \sum_{d\lesssim \mu }C_{< d} v_{\lambda_0} C_d u_{\lambda_1} \Box C_{< d} w_{\lambda_2}+ \sum_{d\lesssim \mu } C_{< d} v_{\lambda_0} C_{< d}u_{\lambda_1} \Box C_d w_{\lambda_2}.
\end{align*}
Estimate \eqref{eq:claim1} takes care of the first term and estimate \eqref{eq:claim2} yields the required bound for the second term. Further, we write
\begin{align*}
&\sum_{d\lesssim \mu } C_{< d} v_{\lambda_0} C_{< d}u_{\lambda_1} \Box C_d w_{\lambda_2}
=\sum_{d\lesssim \mu }C_{\ll d} v_{\lambda_0} C_{\ll d}u_{\lambda_1} \Box C_d w_{\lambda_2}\\
&{}\qquad \qquad+\sum_{d\lesssim \mu }C_{< d}C_{\sim d} v_{\lambda_0} C_{<d}u_{\lambda_1} \Box C_d w_{\lambda_2}+\sum_{d\lesssim \mu } C_{\ll d} v_{\lambda_0} C_{< d}C_{\sim d}u_{\lambda_1} \Box C_d w_{\lambda_2},
\end{align*}
and now estimate \eqref{eq:claim3} gives an acceptable bound for the first term. For the second and the third terms we again use estimates \eqref{eq:claim1} and \eqref{eq:claim2}, in addition to the uniform disposability of the modulation projections.
\end{proof}
We now come to the proof of Theorem \ref{thm:div-prob}.
\begin{proof}[Proof of Theorem \ref{thm:div-prob}]
We start with the proof of the algebra property, \eref{eqn:thm div-prob:S-alg}. Let $\mu = \min\{\lambda_0, \lambda_1, \lambda_2\}$ and decompose the product into
\begin{equation}\label{eqn:thm div-prob:alg est decomp} \begin{split}u_{\lambda_1} v_{\lambda_2} ={}& \big( u_{\lambda_1} v_{\lambda_2} - C_{\lesssim \mu} u_{\lambda_1} C_{\lesssim \mu} v_{\lambda_2}\big) \\
&{}+ C_{\gg \mu}\big( C_{\lesssim \mu} u_{\lambda_1} C_{\lesssim \mu } v_{\lambda_2} \big) + C_{\lesssim \mu} \big( C_{\lesssim \mu} u_{\lambda_1} C_{\lesssim \mu} v_{\lambda_2}\big).
\end{split}
\end{equation}
For the first term, at least one of $u_{\lambda_1}$ or $v_{\lambda_2}$ must have modulation at distance $\gg \mu$ from the cone, hence an application of Theorem \ref{thm:high-mod} gives
\begin{align*}
\big\| P_{\lambda_0} \big( u_{\lambda_1} v_{\lambda_2} - C_{\lesssim \mu} u_{\lambda_1} C_{\lesssim \mu} v_{\lambda_2}\big) \big\|_S \lesssim{}& \mu^{\frac{n}{2}} \Big( \frac{\min\{\lambda_1, \lambda_2\}}{\mu} \Big)^\frac{3}{2} \| u_{\lambda_1} \|_S \| v_{\lambda_2} \|_S\\
\lesssim{}& \Big( \frac{\lambda_1 \lambda_2}{\lambda_0} \Big)^{\frac{n}{2}} \|u_{\lambda_1} \|_S \| v_{\lambda_2}\|_S.
\end{align*}
For the second term in \eref{eqn:thm div-prob:alg est decomp}, Lemma \ref{lem:resonance bound full cone} implies that we have the identity
$$C_{\gg \mu}P_{\lambda_0}\big( C_{\lesssim \mu} u_{\lambda_1} C_{\lesssim \mu } v_{\lambda_2} \big) = C_{\approx \lambda_{max}}P_{\lambda_0}\big( C_{\lesssim \mu} u_{\lambda_1} C_{\lesssim \mu } v_{\lambda_2} \big) $$
where $\lambda_{max} = \max\{\lambda_0, \lambda_1, \lambda_2\}$. Hence Lemma \ref{lem:S properties} and Theorem \ref{thm:bilinear L2 bound} give
\begin{align*}
\big\| C_{\gg \mu}P_{\lambda_0}\big( C_{\lesssim \mu} u_{\lambda_1} C_{\lesssim \mu } v_{\lambda_2} \big) \big\|_S &\lesssim \lambda_{max}^\frac{1}{2} \frac{\lambda_{max}}{\lambda_0} \big\| C_{\gg \mu}P_{\lambda_0}\big( C_{\lesssim \mu} u_{\lambda_1} C_{\lesssim \mu } v_{\lambda_2} \big) \big\|_{L^2_{t,x}} \\
&\lesssim \lambda_{max}^\frac{1}{2} \frac{\lambda_{max}}{\lambda_0} \mu^{\frac{n-1}{2}} \Big( \frac{\min\{\lambda_1, \lambda_2\}}{\mu} \Big)^\epsilon \| u_{\lambda_1} \|_S \| v_{\lambda_2} \|_S \\
&\lesssim \Big( \frac{\lambda_1 \lambda_2}{\lambda_0} \Big)^\frac{n}{2} \| u_{\lambda_1} \|_S \| v_{\lambda_2} \|_S
\end{align*}
provided we take $\epsilon>0$ sufficiently small, and $n \geqslant 2$. For the last term in \eref{eqn:thm div-prob:alg est decomp}, we apply \eref{eq:tri-est1} in Lemma \ref{lem:tri-est} together with the characterisation of $S$ in Lemma \ref{lem:S properties} to deduce that
\begin{align*}
\big\| C_{\lesssim \mu} \big( C_{\lesssim \mu} u_{\lambda_1} C_{\lesssim \mu} v_{\lambda_2}\big)\big\|_S &\lesssim \sup_{\substack{\phi \in C^\infty_0 \\ \| \phi\|_{S_w} \leqslant 1}} \Big| \int_\mathbb{R} \lr{ |\nabla| \Box C_{\lesssim \mu} \phi_{\lambda_0}, C_{\lesssim \mu} u_{\lambda_1} C_{\lesssim \mu} v_{\lambda_2} }_{L^2_x} dt \Big| \\
&\lesssim \mu^{\frac{n-1}{2}} (\min\{\lambda_1, \lambda_2\})^\frac{1}{2} \| u_{\lambda_1} \|_S \| v_{\lambda_2} \|_S.
\end{align*}
Therefore \eref{eqn:thm div-prob:S-alg} follows.
We now turn to the proof of \eref{eqn:thm div-prob:S-nonlin}. The argument is in some sense a dual version of the argument used to prove the algebra property \eref{eqn:thm div-prob:S-alg}. An application of the characterisation of $S$ in Lemma \ref{lem:S properties}, together with the invariance of $S$ and $S_w$ under complex conjugation, reduces the problem to proving that
\begin{equation}\label{eqn:thm div-prob:dual}
\Big| \int_\mathbb{R} \lr{ \phi_{\lambda_0} u_{\lambda_1}, \Box v_{\lambda_2}}_{L^2_x} dt \Big| \lesssim \Big( \frac{\lambda_1 \lambda_2}{\lambda_0} \Big)^\frac{n}{2} \lambda_0 \| \phi_{\lambda_0} \|_{S_w} \| u_{\lambda_1} \|_S \| v_{\lambda_2} \|_S
\end{equation}
for all $\phi \in C^\infty_0$. The first step in the proof of \eref{eqn:thm div-prob:dual} is to decompose into the far cone and close cone regions
\begin{equation}\label{eqn:thm div-prob:dual decomp}
\begin{split}
\phi_{\lambda_0} u_{\lambda_1}=& \big( \phi_{\lambda_0} u_{\lambda_1} - C_{\lesssim \mu} \phi_{\lambda_0} C_{\lesssim \mu} u_{\lambda_1}\big) + C_{\gg \mu}\big( C_{\lesssim \mu} \phi_{\lambda_0} C_{\lesssim \mu } u_{\lambda_1} \big) \\
&{}\quad + C_{\lesssim \mu} \big( C_{\lesssim \mu} \phi_{\lambda_0} C_{\lesssim \mu} u_{\lambda_1} \big).
\end{split}
\end{equation}
For the first term in \eref{eqn:thm div-prob:dual decomp}, at least one of $\phi_{\lambda_0}$ or $u_{\lambda_1}$ must have modulation $\gg \mu$. Hence Theorem \ref{thm:high-mod} and Lemma \ref{lem:S properties} give
\begin{align*}
& \Big| \int_\mathbb{R} \lr{\big( \phi_{\lambda_0} u_{\lambda_1} - C_{\lesssim \mu} \phi_{\lambda_0} C_{\lesssim \mu} u_{\lambda_1}\big), \Box v_{\lambda_2}}_{L^2_x} dt \Big|\\
\lesssim{}& \lambda_2 \big\| P_{\lambda_2} \big( \phi_{\lambda_0} u_{\lambda_1} - C_{\lesssim \mu} \phi_{\lambda_0} C_{\lesssim \mu} u_{\lambda_1}\big)\big\|_{S_w} \| v_{\lambda_2} \|_S \\
\lesssim{}& \mu^{\frac{n}{2}} \Big( \frac{\min\{\lambda_0, \lambda_1\}}{\mu} \Big)^\frac{1}{2} \lambda_2 \| \phi_{\lambda_0} \|_{S_w} \|u_{\lambda_1} \|_S \| v_{\lambda_2} \|_S \\
\lesssim{}& \Big( \frac{\lambda_1 \lambda_2}{\lambda_0} \Big)^\frac{n}{2} \lambda_0 \| \phi_{\lambda_0} \|_{S_w} \|u_{\lambda_1} \|_S \| v_{\lambda_2} \|_S.
\end{align*}
On the other hand, to bound the second term in \eref{eqn:thm div-prob:dual decomp}, we note that Lemma \ref{lem:resonance bound full cone} gives the identity
$$C_{\gg \mu}P_{\lambda_2}\big( C_{\lesssim \mu} \phi_{\lambda_0} C_{\lesssim \mu} u_{\lambda_1} \big) = C_{\approx \lambda_{max}}P_{\lambda_2}\big( C_{\lesssim \mu} \phi_{\lambda_0} C_{\lesssim \mu} u_{\lambda_1}\big) $$
and hence Theorem \ref{thm:bilinear L2 bound} together with Lemma \ref{lem:S properties} gives
\begin{align*}
& \Big| \int_\mathbb{R} \lr{C_{\gg \mu}P_{\lambda_2}\big( C_{\lesssim \mu} \phi_{\lambda_0} C_{\lesssim \mu} u_{\lambda_1} \big), \Box v_{\lambda_2}}_{L^2_x} dt \Big|\\
\lesssim{}& \lambda_2 \lambda_{max}^{\frac{1}{2}} \big\| C_{\approx \lambda_{max}}P_{\lambda_2}\big( C_{\lesssim \mu} \phi_{\lambda_0} C_{\lesssim \mu} u_{\lambda_1}\big)\big\|_{S_w} \| v_{\lambda_2} \|_S \\
\lesssim{}& \mu^{\frac{n-1}{2}} \Big( \frac{\min\{\lambda_0, \lambda_1\}}{\mu} \Big)^\epsilon \lambda_2 \lambda_{\max}^\frac{1}{2} \| \phi_{\lambda_0} \|_{S_w} \|u_{\lambda_1} \|_S \| v_{\lambda_2} \|_S \\
\lesssim{}& \Big( \frac{\lambda_1 \lambda_2}{\lambda_0} \Big)^\frac{n}{2} \lambda_0 \| \phi_{\lambda_0} \|_{S_w} \|u_{\lambda_1} \|_S \| v_{\lambda_2} \|_S.
\end{align*}
Finally, to bound the last term in \eref{eqn:thm div-prob:dual decomp}, we apply the close cone bound in Lemma \ref{lem:tri-est} and conclude that
\begin{align*}
& \Big| \int_\mathbb{R} \lr{C_{\lesssim \mu}\big( C_{\lesssim \mu} \phi_{\lambda_0} C_{\lesssim \mu} u_{\lambda_1} \big), \Box v_{\lambda_2}}_{L^2_x} dt \Big| \\
\lesssim{}& \mu^{\frac{n-1}{2}} \lambda_2 \Big( \frac{\min\{\lambda_0, \lambda_1\}}{\mu} \Big)^{\frac{1}{2} + \epsilon} \Big( \frac{\lambda_1}{\mu} \Big)^\epsilon \| \phi_{\lambda_0} \|_{S_w} \| u_{\lambda_1} \|_S \| v_{\lambda_2}\|_S \\
\lesssim{}& \Big( \frac{\lambda_1 \lambda_2}{\lambda_0} \Big)^\frac{n}{2} \lambda_0 \| \phi_{\lambda_0} \|_{S_w} \| u_{\lambda_1} \|_S \| v_{\lambda_2} \|_S.
\end{align*}
Therefore we obtain \eref{eqn:thm div-prob:dual}.
\end{proof}
\section{The spaces $U^p$ and $V^p$}\label{sec:Up and Vp}
In this section we briefly review some of the fundamental properties of the spaces $U^p$ and $V^p$; in particular, we discuss embedding properties, almost orthogonality principles, and the dual pairing. The material we present in this section can essentially be found in \cite{Hadac2009, Koch2005,Koch2016} and thus we shall be somewhat brief but self-contained (up to the proof of Theorem \ref{thm:vu-emb}).
Using the notation introduced in Subsection \ref{subsec:upvp}, a function $u:\mathbb{R} \rightarrow L^2(\mathbb{R}^n)$ is a \emph{step function with partition $\tau\in \mb{P}$} if for each $I \in \mc{I}_\tau$ there exists $f_I \in L^2(\mathbb{R}^n)$ such that
$$ u(t) = \sum_{I \in \mc{I}_\tau} \mathbbold{1}_{I}(t) f_I.$$
By definition, all step functions vanish for sufficiently negative $t$. Define $\mathfrak{S}$ to be the collection of all step functions with partitions $\tau\in\mb{P}$, and let $\mathfrak{S}_0$ denote those elements of $\mathfrak{S}$ with compact support. In other words, step functions in $\mathfrak{S}_0$ vanish on the final interval $[t_N, \infty)$.
The normalisation condition at $t = - \infty$ implies that $\mathfrak{S} \subset V^p$. However, step functions are \emph{not} dense in $V^p$:
this follows by considering an example of Young \cite{Young1936} that shows that $V^p \not \subset U^p$ \cite[Theorem B.22]{Koch2016}. On the other hand, the set of step functions, $\mathfrak{S}$, is dense in $U^p$. This follows by noting that $U^p$ atoms are step functions, and if $u = \sum c_j u_j$ is an atomic decomposition of $u \in U^p$, then letting $\phi_N = \sum_{j\leqslant N } c_j u_j$ we get a sequence of step functions such that
$$ \| u - \phi_N \|_{U^p} \leqslant \sum_{j\geqslant N} |c_j| \rightarrow 0$$
as $N \to \infty$.
Recall that
$$ |v|_{V^p} = \sup_{(t_j)_{j=1}^{N} \in \mb{P}} \Big( \sum_{j=1}^{N-1} \| v(t_{j+1}) - v(t_j) \|_{L^2}^p \Big)^{\frac{1}{p}}.$$
A computation shows that for any $v:\mathbb{R} \to L^2$ and $t_0 \in \mathbb{R}$ we have
$$ 2^{-1}(\| v(t_0)\|_{L^2}^p + |v|_{V^p}^p)^{\frac{1}{p}}\leqslant \| v\|_{V^p} \leqslant 2(\| v(t_0)\|_{L^2}^p + |v|_{V^p}^p)^{\frac{1}{p}}.$$
In particular, if $v \in V^p$, then letting $t_0\rightarrow -\infty$, we see that $|\cdot|_{V^p}$ and $\|\cdot \|_{V^p}$ are equivalent norms on $V^p$.
The definition of the spaces $U^p$ and $V^p$ implies that they are both Banach spaces and convergence with respect to $\| \cdot \|_{U^p}$ or $\| \cdot \|_{V^p}$ implies uniform convergence (i.e. in $\| \cdot\|_{L^\infty_t L^2_x}$).
\subsection{Embedding properties}\label{subsec:funda}
For $1 \leqslant q < r < \infty$ we have the continuous embeddings $U^q \subset V^q \subset V^r $. An example due to Young \cite{Young1936} (see \cite[Lemma 4.15]{Koch2014}) shows that $V^q \not \subset U^q$. On the other hand, for $p<q$ we do have $V^p \subset U^q$.
\begin{theorem}\label{thm:vu-emb}
Let $1\leqslant p < q < \infty$ and $v \in V^p$. There exists a decomposition $v = \sum_{j=1}^\infty v_j$ such that $v_j \in U^q$ with $ \| v_j \|_{U^q} \lesssim 2^{j( \frac{p}{q}-1)} \|v\|_{V^p}$. In particular, the embedding $V^p \subset U^q$ holds.
\end{theorem}
\begin{proof}
The proof can be found in \cite[pp. 255--256]{Koch2005}, \cite[p. 923]{Hadac2009} or \cite[Proof of Theorem B.18]{Koch2016}.
\end{proof}
\begin{remark}\label{rmk:u-vs-v}
To gain some intuition into the $U^p$ and $V^p$ spaces, we note the following properties:
\begin{enumerate}
\item ($C^\infty_0 \subset U^p, V^p$) Clearly we have $C^\infty_0 \subset V^1$. Consequently an application of Theorem \ref{thm:vu-emb} implies that for $p>1$ we have $C^\infty_0 \subset U^p$. More generally, if $\partial_t u \in L^1_t L^2_x$ and $\lim_{t\to -\infty} u(t)=0$, then $u\in V^1\subset U^p$ and
\begin{equation}\label{eq:w11}
\|u\|_{U^p}\lesssim\|u\|_{V^1}\lesssim \|\partial_t u\|_{ L^1_t L^2_x}
\end{equation}
by Theorem \ref{thm:vu-emb}.
\item (Approximation of $C^\infty_0$ functions in $V^p$, $p>1$) The example of Young shows that $\mathfrak{S}$ is not dense in $V^p$. On the other hand, $\mathfrak{S}$ is dense in the closure of $V^p \cap C^\infty_0$ for $p>1$. This follows by observing that, given $\tau \in \mb{P}$ and defining $u_\tau = \sum_{j=1}^N \mathbbold{1}_{[t_j, t_{j+1})} u(t_j)$ (with $t_{N+1} = \infty$), we have
$$ |u-u_\tau|_{V^p}^p \lesssim \Big(\sup_j \int_{t_j}^{t_{j+1}} \| \partial_t u \|_{L^2} \Big)^{p-1} \int_\mathbb{R} \| \partial_t u \|_{L^2_x} dt, $$
and the right hand side tends to zero as the mesh of the partition $\tau$ shrinks.
\item (Counterexample for $p=1$) If we suppose that $u(t) = \rho(t) f $ with $f\in L^2$ and $\rho$ monotone on an interval $(a,b)$, then $u$ cannot be approximated by step functions using the $V^1$ norm. As a consequence $u \not \in U^1$. In particular, if
$$ \rho(t) = \begin{cases} t \varphiquad & t\in [0,1) \\
0 & t \not \in [0,1)\end{cases}$$
then $u \in U^p$ but \emph{not} in $U^1$; a short computation illustrating this is sketched after this remark. In fact, a similar example shows that $C^\infty_0 \not \subset U^1$.
\end{enumerate}
\end{remark}
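To see why the ramp above cannot be approximated by step functions in the $V^1$ norm, here is a brief sketch of the standard computation (included for convenience; the precise constant is ours): let $\phi \in \mathfrak{S}$ and consider the finitely many intervals in $[0,1)$ on which $\phi$ is constant. Choosing partition points strictly inside these intervals, the increments of $\phi$ drop out, while the increments of $u = \rho f$ add up to the full rise of $\rho$ on $[0,1)$ (as $\rho$ is continuous there). Hence
$$ |u - \phi|_{V^1} \geqslant \| f \|_{L^2} \qquad \text{for every } \phi \in \mathfrak{S}, $$
so $u$ is not a $V^1$ limit of step functions, and consequently $u \notin U^1$.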
\subsection{Almost orthogonality}\label{subsec:ao}
The definition of the spaces $U^p$ and $V^p$ implies that they satisfy a one sided version of the standard almost orthogonality property.
\begin{proposition}[Almost orthogonality in $U^p$ and $V^p$] \label{prop:orthog}
Let $M_1, M_2 \in [0, \infty]$. For $k\in \mathbb{N}$, let $T_k:L^2(\mathbb{R}^n)\to L^2(\mathbb{R}^n)$ be a linear and bounded operator (acting spatially) such that
\[ M_1 \| f \|_{L^2} \leqslant \Big(\sum_{k \in \mathbb{N}} \|T_k f\|_{L^2}^2 \Big)^{\frac12}\leqslant M_2 \|f\|_{L^2} \qquad \text{ for all }f\in L^2.\]
If $1\leq p\leq 2$, then for all $u \in U^p$ we have the bound
$$ \Big( \sum_{k\in \mathbb{N}} \| T_k u \|_{U^p}^2 \Big)^\frac{1}{2} \leqslant M_2 \| u \|_{U^p}.$$
On the other hand, if $p\geq 2$, then for all $v \in V^p$ we have the bound
$$ \Big\|\sum_{k \in \mathbb{N}} T_kv \Big\|_{V^p}\leqslant M_1 \Big( \sum_{k\in \mathbb{N}} \| T_k v \|_{V^p}^2 \Big)^\frac{1}{2}.
$$
\end{proposition}
\begin{proof}
We start with the $U^p$ bound. It is enough to consider the case where $u = \sum_{1\leqslant m\leqslant N} \mathbbold{1}_{[t_m, t_{m+1})}(t) f_m$ is a $U^p$-atom. Then by definition of the $U^p$ norm, together with the assumption $1\leqslant p\leqslant 2$, we have
\begin{align*}
\Big( \sum_{k\in \mathbb{N}} \| T_k u \|_{U^p}^2 \Big)^\frac{1}{2} &\leqslant \Big( \sum_{k\in \mathbb{N}} \Big( \sum_{1\leqslant m\leqslant N} \|T_kf_m \|_{L^2}^p \Big)^\frac{2}{p} \Big)^\frac{1}{2}\\ &\leqslant \Big(\sum_{1\leqslant m\leqslant N} \Big( \sum_{k\in \mathbb{N}} \|T_kf_m\|^2 \Big)^\frac{p}{2} \Big)^\frac{1}{p}\leqslant M_2,
\end{align*}
as required.
Concerning the $V^p$ bound, we let $\tau=(t_j)_{j=1}^N\in \mathbf{P}$ be any partition. We compute
\begin{align*}
& \Big( \sum_{j=1}^{N-1} \Big\|\sum_{k \in \mathbb{N}}T_kv(t_{j+1}) -\sum_{k \in \mathbb{N}}T_kv(t_{j}) \Big\|_{L^2}^p \Big)^\frac{1}{p}\\
\leqslant{}& \Big(\sum_{j=1}^{N-1} \Big\|\sum_{k \in \mathbb{N}}T_k(v(t_{j+1}) -v(t_{j})) \Big\|_{L^2}^p \Big)^\frac{1}{p} \\
\leqslant{}& M_1 \Big(\sum_{j=1}^{N-1} \Big(\sum_{k \in \mathbb{N}} \Big\| T_k(v(t_{j+1}) -v(t_{j})) \Big\|_{L^2}^2 \Big)^\frac{p}{2} \Big)^{\frac1p}\\
\leqslant{}& M_1 \Big(\sum_{k \in \mathbb{N}}\big\| T_kv\big\|_{V^p}^2 \Big)^\frac{1}{2},
\end{align*}
as required.
\end{proof}
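As a typical illustration of how Proposition \ref{prop:orthog} is used (the choice of operators here is ours and is only meant as an example), one may take the $T_k$ to be spatial frequency projections onto a partition of frequency space, for instance the cube projections $P_q$ with $q \in \mc{Q}_\mu$. By Plancherel these satisfy the hypothesis with $M_1 \approx M_2 \approx 1$, and the proposition then gives the square function bounds
$$ \Big( \sum_{q \in \mc{Q}_\mu} \| P_q u \|_{U^2}^2 \Big)^\frac{1}{2} \lesssim \| u \|_{U^2}, \qquad \| u \|_{V^2} \lesssim \Big( \sum_{q \in \mc{Q}_\mu} \| P_q u \|_{V^2}^2 \Big)^\frac{1}{2}, $$
which are of the kind used when decomposing into cubes or caps in the bilinear estimates of the previous sections.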
\subsection{The dual pairing}\label{subsec:dual}
Due to the atomic definition of $U^p$, it can be very difficult to estimate $\| u \|_{U^p}$ for $u\in U^p$. For instance, even the question of estimating the $U^p$ norm of a step function is a nontrivial problem. However, there is a duality argument that can reduce this problem to estimating a certain bilinear pairing between elements of $U^p$ and $V^p$. In the following we give two possible versions of this pairing, a discrete version and a continuous version. More precisely, given a step function $w \in \mathfrak{S}$ with partition $(t_j)_{j=1}^{N} \in \mb{P}$, and a function $u: \mathbb{R} \to L^2$, we define the dual pairing
$$ B(w, u) = \lr{w(t_1), u(t_1)}_{L^2} + \sum_{j=2}^N \lr{w(t_j) - w(t_{j-1}), u(t_j)}_{L^2}.$$
The pairing $B$ is well defined (independent of the choice of partition for $w$) and sesquilinear in $w\in \mathfrak{S}$ and $u$. The linearity in $u$ is immediate, while the remaining properties follow by observing that if $w \in \mathfrak{S}$ is a step function with partition $\tau=(t_j) \in \mb{P}$, that is \emph{also} a step function with respect to another partition $\tau'=(t'_j) \in \mb{P}$, then a computation using the fact that $w(t) =0$ for $t<\max\{t_1, t_1'\}$ gives
\begin{align*} \lr{w(t_1), u(t_1)}_{L^2} + &\sum_{j=2}^N \lr{w(t_j) - w(t_{j-1}), u(t_j)}_{L^2} \\
&= \lr{w(t'_1), u(t'_1)}_{L^2} + \sum_{j=2}^{N'} \lr{w(t_j') - w(t_{j-1}'), u(t_j')}_{L^2}.
\end{align*}
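For orientation, here is the pairing evaluated on the simplest step function (a short computation of our own, included only as an illustration): taking $w = \mathbbold{1}_{[a,b)} f$ with $f \in L^2$ and the partition $(a,b)$, the definition gives
$$ B(w, u) = \lr{f, u(a)}_{L^2} + \lr{0 - f, u(b)}_{L^2} = \lr{f, u(a) - u(b)}_{L^2}, $$
which is exactly what a formal integration by parts of $-\int_a^b \lr{w, \partial_t u}_{L^2}\, dt$ produces. This is the sense in which $B$ acts as a substitute for the continuous pairing introduced below.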
It is not so difficult to show that if $w \in \mf{S}$ and $u \in U^p$ then \[|B(w, u)| \leqslant |w|_{V^q} \| u \|_{U^p};\] we give the details of this computation in Theorem \ref{thm:dual pairing} below. Thus $B(w,u)$ gives a way to pair (step) functions in $V^q$ with $U^p$ functions.
Alternative pairings are also possible, for instance we can also use the continuous pairing
\begin{equation}\label{eqn:cts pairing} \int_\mathbb{R} \lr{\partial_t \phi, u}_{L^2} dt \end{equation}
for $\partial_t \phi \in L^1_t L^2_x$ and $u \in U^p$. Clearly \eqref{eqn:cts pairing} and $B$ are closely related, since if $w\in \mf{S}$ and $u \in C^1$ we have
$$B(w, u) = \int_\mathbb{R} \lr{v, \partial_t u} dt$$
where the step function \[v(t) = \mathbbold{1}_{(-\infty, t_1)}(t) w(t_N) - \sum_{j=1}^{N-1} \mathbbold{1}_{[t_j, t_{j+1})}(t) [w(t_{N}) - w(t_{j-1})].\] If we had $v \in C^1$, then integrating by parts would give \eqref{eqn:cts pairing}. More general pairings are also possible; this relates to the more general \emph{Stieltjes} integral, and using a limiting argument, the definition of $B(w,u)$ can be extended from $w\in \mf{S}$ to elements $w\in V^p$, see \cite{Hadac2009}. However for our purposes, it suffices to work with the pairings $B$ and \eqref{eqn:cts pairing} as defined above. The pairings are useful due to the following.
\begin{theorem}[Dual pairing for $U^p$]\label{thm:dual pairing}
Let $1< p, q < \infty$, $\frac{1}{p} + \frac{1}{q} = 1$. If $u \in U^p$ and $\partial_t \phi \in L^1_t L^2_x$ we have
$$ \Big| \int_\mathbb{R} \lr{\partial_t \phi, u}_{L^2} dt \Big| \leqslant |\phi|_{V^q} \| u \|_{U^p}$$
and
$$ \| u \|_{U^p} = \sup_{ \substack{w \in \mathfrak{S} \\ | w |_{V^q} \leqslant 1} } |B(w, u)| = \sup_{\substack{\partial_t \phi \in C^\infty_0 \\ |\phi|_{V^q} \leqslant 1}} \Big| \int_\mathbb{R} \lr{\partial_t \phi, u}_{L^2} dt \Big|. $$
\end{theorem}
\begin{proof}
We start by proving that for every $u \in U^p$ we have
\begin{equation}\label{eqn:thm dual pairing:upper bounds}
\sup_{\substack{\partial_t \phi \in L^1_t L^2_x \\ |\phi|_{V^q} \leqslant 1}} \Big| \int_\mathbb{R} \lr{\partial_t \phi, u}_{L^2} dt \Big| \leqslant \sup_{ \substack{w \in \mathfrak{S} \\ | w |_{V^q} \leqslant 1} } |B(w, u)| \leqslant \| u \|_{U^p}.
\end{equation}
By definition of the $U^p$ norm, it is enough to consider the case where $u \in \mathfrak{S}$ is a $U^p$-atom. The second inequality in \eqref{eqn:thm dual pairing:upper bounds} follows by observing that if $w \in \mf{S}$ with partition $\tau= (t_j)_{j=1}^N \in \mb{P}$, and we let $ E = \{ t_j \in \tau \mid u(t_j) = u(t_{j+1})\}$ and $\tau^* = (t_k^*)_{k=1}^{N^*} = \tau \setminus E$ (i.e. we remove points from $\tau$ where the step function $u$ is constant) then by definition of the bilinear pairing we have
\begin{align*}
|B(w, u)| &= \Big| \lr{ w(t_1), u(t_1) }_{L^2} + \sum_{j=2}^N \lr{w(t_j) - w(t_{j-1}), u(t_j)}_{L^2 } \Big|\\
&= \Big|\lr{ w(t_1^*), u(t_1^*) }_{L^2} + \sum_{j=2}^{N^*} \lr{w(t_j^*) - w(t_{j-1}^*), u(t_j^*)}_{L^2 } \Big|\\
&\leqslant \Big( \|w(t_1^*)\|_{L^2}^q + \sum_{j=2}^{N^*} \| w(t_j^*) - w(t_{j-1}^*)\|_{L^2}^q \Big)^{\frac{1}{q}} \Big( \sum_{j=1}^{N^*}\| u(t_j^*)\|_{L^2}^p \Big)^\frac{1}{p} \leqslant |w|_{V^q}.
\end{align*}
where the last line follows by noting that $w(t) = 0$ for $t<t_1^*$ and $u$ is a $U^p$ atom, together with the definition of the partition $\tau^*$. On the other hand, to prove the first inequality in \eqref{eqn:thm dual pairing:upper bounds}, we observe that if $u = \mathbbold{1}_{[s_{N'},\infty)}(t) u(s_{N'}) + \sum_{k=1}^{N'-1} \mathbbold{1}_{[s_k, s_{k+1})}(t) u(s_k)$ and $T>s_{N'}$, we have
\begin{align*}
\int_{-\infty}^T \lr{\partial_t \phi, u}_{L^2} dt &= \lr{\phi(T) - \phi(s_{N'}), u(s_{N'})}_{L^2} + \sum_{j=1}^{N'-1} \lr{ \phi(s_{j+1}) - \phi(s_j), u(s_j)}_{L^2} \\
&= \lr{w(s_1), u(s_1)}_{L^2} + \sum_{j=2}^{N'} \lr{w(s_{j}) - w(s_{j-1}), u(s_j)}_{L^2}
\end{align*}
where we define $w\in \mf{S}$ as
$$w(t) = \sum_{j=1}^{N'-1} \mathbbold{1}_{[s_j, s_{j+1})}(t) \big( \phi(s_{j+1}) - \phi(s_1)\big) + \mathbbold{1}_{[s_{N'}, \infty)}(t) \big( \phi(T) - \phi(s_1)\big).$$
Since $|w|_{V^q} \leqslant |\phi|_{V^q}$ we conclude that
$$ \Big| \int_\mathbb{R} \lr{ \partial_t \phi, u}_{L^2} dt \Big| \leqslant \int_T^\infty \| \partial_t \phi(t) \|_{L^2} dt \| u \|_{L^\infty_t L^2_x} + |\phi|_{V^q} \sup_{ \substack{w \in \mf{S} \\ |w|_{V^q}\leqslant 1}} |B(w,u)|. $$
Hence letting $T \to \infty$ and using the assumption $\partial_t \phi \in L^1_t L^2_x$ the bound \eqref{eqn:thm dual pairing:upper bounds} follows.
It remains to show that for $u \in U^p$ we have the bound
\begin{equation}\label{eqn:thm dual pairing:Up bound}
\| u \|_{U^p} \leqslant \sup_{\substack{\partial_t \phi \in L^1_t L^2_x \\ |\phi|_{V^q} \leqslant 1}} \Big| \int_\mathbb{R} \lr{\partial_t \phi, u}_{L^2} dt \Big|.
\end{equation}
Since the set of step functions $\mf{S}\subset U^p$ is dense, it suffices to consider the case $u = \mathbbold{1}_{[t_N, \infty)}(t) f_N + \sum_{j=1}^{N-1} \mathbbold{1}_{[t_j, t_{j+1})}(t) f_j\in \mf{S}$. An application of the Hahn-Banach Theorem implies that there exists $L \in (U^p)^*$ such that
\begin{equation}\label{eqn:thm dual pairing:prop of L} \| u \|_{U^p} = L(u), \qquad \qquad \sup_{\|h \|_{U^p}\leqslant 1} |L(h)| = 1.\end{equation}
Note that given $f \in L^2$ and fixed $t \in \mathbb{R}$, we have $\mathbbold{1}_{[t, \infty)} f \in U^p$ and $\| \mathbbold{1}_{[t, \infty)} f \|_{U^p} \leqslant \| f \|_{L^2}$. In particular, the map $f \mapsto L(\mathbbold{1}_{[t, \infty)}f)$ is a bounded linear functional on $L^2$. Consequently, by the Riesz Representation Theorem, there exists a function $\psi: \mathbb{R} \to L^2$ such that for every $t\in \mathbb{R}$ and $f \in L^2$ we have
$$ L\big( \mathbbold{1}_{[t, \infty)} f\big) = \lr{ \psi(t), f}_{L^2}. $$
By construction, we see that
\begin{align*}
\| u \|_{U^p} = L(u) &= \sum_{j=1}^{N-1} L( \mathbbold{1}_{[t_j, t_{j+1})} f_j ) + L(\mathbbold{1}_{[t_N, \infty)} f_N) \\
&= \sum_{j=1}^{N-1} \lr{\psi(t_j) - \psi(t_{j+1}), f_j}_{L^2} + \lr{\psi(t_N), f_N}.
\end{align*}
Let $\rho \in C^\infty_0(-1, 1)$ with $\int_\mathbb{R} \rho = 1$, and define
$$ \phi_\epsilon(t) = - \int_\mathbb{R} \frac{1}{\epsilon} \rho \Big( \frac{t-s}{\epsilon} + 1 \Big) w(s) ds $$
with
$$ w(t) = \mathbbold{1}_{(-\infty, t_1)}(t) \psi(t_1) + \sum_{j=1}^{N-1} \mathbbold{1}_{[t_j, t_{j+1})}(t) \psi(t_j). $$
Then provided we choose $\epsilon>0$ sufficiently small (depending on $(t_j) \in \mb{P}$), we have $\phi_\epsilon(t_j) = \psi(t_j)$ for $j=1, ..., N$, and $\phi_\epsilon(t) = 0$ for $t \geqslant t_N + 2 \epsilon$ and consequently
\begin{align*}
\| u \|_{U^p} &= \sum_{j=1}^{N-1} \lr{\psi(t_j) - \psi(t_{j+1}), f_j}_{L^2} + \lr{\psi(t_N), f_N} \\
&= \int_\mathbb{R} \lr{ \partial_t \phi_\epsilon, u}_{L^2} dt .
\end{align*}
Therefore, since $\partial_t \phi_\epsilon \in C^\infty_0$ and $|\phi_\epsilon|_{V^q} \leqslant |w|_{V^q}$, it only remains to show that $ |w|_{V^q} \leqslant 1$. To this end, we start by observing that
\begin{equation}\label{eqn:thm dual pairing:w bound} |w|_{V^q} \leqslant \sup_{(s_j) \in \mb{P}} \Big( \sum_{j=1}^{N'-1} \| \psi(s_{j+1}) - \psi(s_j) \|_{L^2}^q + \| \psi(s_{N'})\|_{L^2}^q \Big)^\frac{1}{q}.
\end{equation}
Fix $(s_j)_{j=1}^{N'} \in \mb{P}$ and define $v \in \mf{S}$ as
$$ v(t) = \mathbbold{1}_{[s_{N'}, \infty)}(t) \frac{ \alpha \psi(s_{N'})}{\| \psi(s_{N'})\|_{L^2}^{2-q}} + \sum_{j=1}^{N'-1}
\mathbbold{1}_{[s_j, s_{j+1})}(t) \frac{\alpha [\psi(s_{j}) - \psi(s_{j+1})]}{ \| \psi(s_{j+1}) - \psi(s_j)\|_{L^2}^{2-q}}$$
with
$$ \alpha = \Big( \| \psi(s_{N'}) \|_{L^2}^q + \sum_{1\leqslant j \leqslant N'-1} \| \psi(s_{j+1}) - \psi(s_j) \|_{L^2}^q \Big)^{\frac{1}{q}-1}.$$
Then $v$ is a $U^p$ atom, and by construction, we have
\begin{align*} L(v) &= \lr{ \psi(s_{N'}), v(s_{N'})}_{L^2} + \sum_{j=1}^{N'-1} \lr{ \psi(s_j) - \psi(s_{j+1}), v(s_j)}_{L^2} \\
&= \Big( \| \psi(s_{N'}) \|_{L^2}^q + \sum_{j=1}^{N'-1} \| \psi(s_{j+1}) - \psi(s_j)\|_{L^2}^q \Big)^\frac{1}{q}.
\end{align*}
Therefore, from \eqref{eqn:thm dual pairing:prop of L} and \eqref{eqn:thm dual pairing:w bound} we conclude that
$$ |w|_{V^q} \leqslant \sup_{ \substack{ v \in \mf{S} \\ \| v \|_{U^p} \leqslant 1}} |L(v)| \leqslant 1$$
as required.
\end{proof}
\begin{remark}\label{rem:compact support}
It is possible to replace the conditions $w \in \mf{S}$ and $\partial_t \phi \in C^\infty_0$ in Theorem \ref{thm:dual pairing} with the compactly supported functions $w \in \mf{S}_0$ and $\phi \in C^\infty_0$. More precisely, provided that $u(t) \to 0$ as $t \to -\infty$, we have
\begin{equation}\label{eqn:rem on comp:dis} \sup_{\substack{ w \in \mf{S} \\ |w|_{V^q}\leqslant 1}} |B(w,u)| \leqslant 2 \sup_{\substack{ w \in \mf{S}_0 \\ |w|_{V^q}\leqslant 1}} |B(w,u)|
\end{equation}
and
\begin{equation}\label{eqn:rem on comp:cts}
\sup_{\substack{\partial_t \phi \in C^\infty_0 \\ |\phi|_{V^q} \leqslant 1}} \Big| \int_\mathbb{R} \lr{ \partial_t \phi, u}_{L^2} dt \Big|
\leqslant 2 \sup_{\substack{\phi \in C^\infty_0 \\ |\phi|_{V^q} \leqslant 1}} \Big| \int_\mathbb{R} \lr{ \partial_t \phi, u}_{L^2} dt \Big|.
\end{equation}
The factor of 2 arises as we potentially add another jump in the step function $w$ at $t=+\infty$ by imposing the compact support condition. To prove the discrete bound \eqref{eqn:rem on comp:dis}, we note that if $w = \mathbbold{1}_{[t_1, t_2)} f_1 + \dots +\mathbbold{1}_{[t_N, \infty)} f_N$ and we take $w_T = - \mathbbold{1}_{[T, t_1)} f_N + \mathbbold{1}_{[t_1, t_2)} (f_1 - f_N) + \dots + \mathbbold{1}_{[t_{N-1}, t_N)} (f_{N-1} - f_N)$ then for any $u:\mathbb{R} \to L^2$ we have
$$ B(w, u) = \lr{f_N, u(T)}_{L^2} + B(w_T, u) .$$
Since $w_T \in \mf{S}_0$, and $|w_T|_{V^q} \leqslant 2 |w|_{V^q}$, we conclude that if $u(t) \to 0$ as $t \to -\infty$, then
$$ \sup_{\substack{ w \in \mf{S} \\ |w|_{V^q}\leqslant 1}} |B(w, u)| \leqslant 2 \sup_{ \substack{w \in \mf{S}_0 \\ |w|_{V^q} \leqslant 1}} |B(w, u)| $$
which implies \eqref{eqn:rem on comp:dis}. To prove \eqref{eqn:rem on comp:cts}, we first take a map $\rho:\mathbb{R} \to \mathbb{R}$ such that $\int_\mathbb{R} |\partial_t \rho| dt \leqslant 1$, $\rho(t) = 1$ for $t>1$, and $\rho(t) = 0$ for $t<-1$, and let $\rho_T(t) = \rho(t-T)$. Given $\partial_t \phi \in C^\infty_0$, we take
$$ \phi_T(t) = \rho_T(t) ( \phi(t) - \phi_\infty) $$
where we take $\phi_{\pm\infty} = \lim_{t\to \pm \infty} \phi(t)$; note that $\phi(\pm t) = \phi_{\pm\infty}$ for $t>0$ sufficiently large, since $\partial_t \phi \in C^\infty_0$. Then $\phi_T \in C^\infty_0$, $\partial_t \phi_T = \partial_t \phi $ for $t> T+1$, and for all $T<0$ sufficiently negative we have
\begin{align*}
|\phi_T|_{V^q} &\leqslant | (1 - \rho_T)( \phi - \phi_{\infty}) |_{V^q} + | \phi - \phi_\infty |_{V^q} \\
&\leqslant | (1 - \rho_T) (\phi_{-\infty}-\phi_\infty) |_{V^q} + | \phi |_{V^q} \leqslant \| \phi_{-\infty}- \phi_\infty\|_{L^2} + |\phi|_{V^q}\leqslant 2 |\phi|_{V^q}
\end{align*}
where we used the bound
$$ \Big( \sum_j | \rho_T(t_{j+1}) - \rho_T(t_j)|^q \Big)^\frac{1}{q} \leqslant \int_\mathbb{R} |\partial_t \rho| dt = 1.$$
Therefore, provided that $\partial_t \phi \in C^\infty_0$ and $|\phi|_{V^q} \leqslant 1$, we have
$$ \Big| \int_\mathbb{R} \lr{\partial_t \phi, u} dt \Big| \leqslant \int_{-\infty}^{T+1} \| \partial_t \phi - \partial_t \phi_T\|_{L^2} \| u(t) \|_{L^2} dt + 2 \sup_{\substack{ \psi \in C^\infty_0 \\ |\psi|_{V^q } \leqslant 1}} \Big| \int_\mathbb{R} \lr{\partial_t \psi, u}_{L^2} dt \Big| .$$
Consequently, since $u(t) \to 0 $ as $t \to -\infty$, \eqref{eqn:rem on comp:cts} follows by letting $T\to -\infty$.
\end{remark}
\section{Two Characterisations of $U^p$}\label{sec:characterisation}
In this section we consider the problem of determining if a general function $u:\mathbb{R} \to L^2$ belongs to $U^p$. If we apply the definition of $U^p$, this requires finding an atomic decomposition of $u$, which is in general a highly non-trivial problem. An alternative approach is suggested by Theorem \ref{thm:dual pairing}. More precisely, since the two norms defined by the dual pairings in Theorem \ref{thm:dual pairing} are well-defined for general functions $u:\mathbb{R} \to L^2$, we can try to use the finiteness of these quantities to characterise $U^p$. Recent work of Koch-Tataru \cite{Koch2016} shows that this is possible by using the discrete pairing $B(w,u)$. Here we adapt the argument used in \cite{Koch2016}, and show that it is also possible to characterise $U^p$ using the continuous pairing $\int_\mathbb{R} \lr{\partial_t \phi, u}_{L^2} dt$.
Following \cite{Koch2016}, let $u: \mathbb{R} \to L^2$ and, with $\frac{1}{p} + \frac{1}{q} = 1$, define the semi-norms
$$ \| u \|_{\widetilde{U}^p_{dis}} = \sup_{ \substack{ v \in \mathfrak{S} \\ | v |_{V^q} \leqslant 1}} |B(v, u)|\in [0,\infty]$$
and its continuous counterpart
$$ \| u \|_{\widetilde{U}^p_{cts}} = \sup_{\substack{ \partial_t \phi \in C^\infty_0 \\ |\phi|_{V^q} \leqslant 1}} \Big| \int_\mathbb{R} \lr{ \partial_t \phi, u}_{L^2} dt \Big| \in [0, \infty] .$$
Note that both $\|\cdot\|_{\widetilde{U}^p_{dis}}$ and $\| \cdot \|_{\widetilde{U}^p_{cts}}$ are only norms after imposing the normalisation $u(t)\to 0$ as $t \to -\infty$.
Our goal is to show that both $\| \cdot \|_{\widetilde{U}^p_{dis}}$ and $\| \cdot \|_{\widetilde{U}^p_{cts}}$ can be used to characterise $U^p$. To make this claim more precise, let $\widetilde{U}^p_{dis}$ be the collection of all right continuous functions $u:\mathbb{R} \to L^2$ satisfying the normalising condition $u(t) \to 0$ (in $L^2_x$) as $t \to -\infty$, and the bound $\| u \|_{\widetilde{U}^p_{dis}}<\infty$. Similarly, we take $\widetilde{U}^p_{cts}$ to be the collection of all right continuous functions such that $u(t) \to 0$ as $t \to -\infty$ and $\| u \|_{\widetilde{U}^p_{cts}}<\infty$.
\begin{theorem}[Characterisation of $U^p$]\label{thm:characteristion of Up}
Let $1<p<\infty$. Then $U^p = \widetilde{U}^p_{dis} = \widetilde{U}^p_{cts}$.
\end{theorem}
We give the proof of Theorem \ref{thm:characteristion of Up} in Subsection \ref{subsec:proof of charac} below. Roughly, since Theorem \ref{thm:dual pairing} already shows that the norms are equivalent for step functions, and $\mf{S}$ is dense in $U^p$, it is enough to show that $\mf{S}$ is also dense in the spaces $\widetilde{U}^p_{cts}$ and $\widetilde{U}^p_{dis}$. The key step is a density argument which we give for a general bilinear pairing satisfying certain assumptions; this argument closely follows that given in \cite[Appendix B]{Koch2016} for the special case of the discrete pairing.
\subsection{A general density result}\label{subsec:gen-ver}
Let $X \subset V^q$ be a subspace and $B_{rc}\subset L^\infty_t L^2_x$ denote the set of bounded right-continuous ($L^2_x$ valued) functions $u:\mathbb{R} \to L^2$. Let $\mathfrak{B}(v,u): X \times B_{rc} \to \mathbb{C}$ be a sesquilinear form. For $u \in B_{rc}$, define
$$ \| u \|_{\widetilde{U}^p_{\mathfrak{B}}} = \sup_{\substack{ v \in X \\ | v |_{V^q}\leqslant 1}} | \mathfrak{B}(v,u)| \in [0, \infty]$$
and for $-\infty \leqslant a<b\leqslant \infty$
$$ \| u \|_{\widetilde{U}^p_{\mathfrak{B}}(a,b)} = \sup_{\substack{ v \in X \\ \supp v \subset (a,b) \\ | v |_{V^q}\leqslant 1}} | \mathfrak{B}(v,u)| \in [0, \infty].$$
Note that, despite the suggestive notation, these quantities may not necessarily be norms, but, since $0 \in X$ they are always well defined. We now take
$\widetilde{U}^p_{\mathfrak{B}}$ to be the collection of all $u \in B_{rc}$ such that $u(t) \to 0$ (in $L^2$) as $t \to -\infty$, and $\| u \|_{\widetilde{U}^p_{\mathfrak{B}}}<\infty$. We assume that we have the following properties:
\begin{itemize}
\item[$\mb{(A1)}$] There exists $C>0$ such that for all $\tau = (t_j)_{j=1}^N \in \mb{P}$ and $u\in \widetilde{U}^p_{\mathfrak{B}}$, we have
\begin{equation}\label{eqn:decomp prop for general norm}
\| u - u_\tau \|_{\widetilde{U}^p_{\mathfrak{B}}} \leqslant C \Big( \sum_{j=0}^N \| u \|_{\widetilde{U}^p_{\mathfrak{B}}(t_j, t_{j+1})}^p \Big)^\frac{1}{p},
\end{equation}
where we take $t_0 = -\infty$, $t_{N+1} = \infty$, and define the step function $u_\tau = \sum_{j=1}^{N} \mathbbold{1}_{[t_j, t_{j+1})}(t) u(t_j) \in \mathfrak{S}$.
\item[$\mb{(A2)}$] If $v \in X$ and $\epsilon>0$, there exists a step function $w \in \mathfrak{S}$ such that $| v - w |_{V^q} < \epsilon$. \\
\end{itemize}
Under the above assumptions, the set of step functions $\mf{S}$ is dense in $\widetilde{U}^p_{\mf{B}}$.
\begin{theorem}\label{thm:gen approx by step func}
Let $1<p<\infty$ and assume that $\mb{(A1)}$ and $\mb{(A2)}$ hold. Then for every $u \in \widetilde{U}^p_{\mathfrak{B}}$ and $\epsilon>0$, there exists $\tau \in \mb{P}$ such that
$$ \| u - u_\tau \|_{\widetilde{U}^p_{\mathfrak{B}}}<\epsilon. $$
\end{theorem}
We start by proving two preliminary results.
\begin{lemma}\label{lem:gen approx smallness}
Let $1<p<\infty$ and $\| u \|_{\widetilde{U}^p_{\mathfrak{B}}} <\infty$. For every $\epsilon>0$ there exists a partition $ (t_k)_{k=1}^N \in \mb{P}$ such that
$$ \sup_{0 \leqslant k \leqslant N} \| u \|_{\widetilde{U}^p_{\mathfrak{B}}(t_k, t_{k+1})} < \epsilon $$
where $t_0 = -\infty$ and $t_{N+1} = \infty$.
\end{lemma}
\begin{proof}
Let $\epsilon>0$. Since $\| \cdot \|_{\widetilde{U}^p_{\mathfrak{B}}(a, b)}$ decreases as $a \nearrow b$,
the compactness of closed and bounded intervals implies that it is enough to prove that for every $-\infty < t^* \leqslant \infty$ and $-\infty \leqslant t_* < \infty $ we can find $t_1 < t^*$ and $t_2>t_*$ such that
\begin{equation}\label{eqn:gen approx smallness:key bound} \| u \|_{\widetilde{U}_{\mathfrak{B}}^p(t_1, t^*)} + \| u \|_{\widetilde{U}_{\mathfrak{B}}^p(t_*, t_2)} \leqslant \epsilon.\end{equation}
We only prove the first bound, as the second one is similar. Suppose that \eref{eqn:gen approx smallness:key bound} fails. Then there exists $-\infty<t^* \leqslant \infty$ and $\epsilon>0$ such that for every $t_1<t^*$ we have
\begin{equation}\label{eqn:gen approx smallness:for contra}
\| u \|_{\widetilde{U}_{\mathfrak{B}}^p(t_1, t^*)} \geqslant \epsilon.
\end{equation}
Let $T_1<t^*$. By definition, \eref{eqn:gen approx smallness:for contra} together with the fact that $X$ is a \emph{subspace} of $V^q$, implies that there exists $w_1 \in X$ such that $\supp w_1 \subset (T_1, t^*)$, $|w_1|_{V^q}\leqslant 1$ (with $\frac{1}{p}+ \frac{1}{q}=1$) and
$$ \mathfrak{B}(w_1, u) \geqslant \frac{\epsilon}{2}.$$
Let $T_1<T_2< t^*$ such that $\supp w_1 \subset (T_1, T_2)$. Another application of \eref{eqn:gen approx smallness:for contra} gives $w_2 \in X$ such that $\supp w_2 \subset (T_2, t^*)$, $|w_2|_{V^q} \leqslant 1$, and
$$ \mathfrak{B}(w_2, u) \geqslant \frac{\epsilon}{2}.$$
Continuing in this manner, for every $N \in \mathbb{N}$ we obtain a sequence $T_1<T_2< \dots < T_{N+1}<t^*$ and functions $w_j \in X$ such that
$$ |w_j|_{V^q} \leqslant 1, \qquad \supp w_j \subset (T_j, T_{j+1}), \qquad \mathfrak{B}(w_j, u) \geqslant \frac{\epsilon}{2}. $$
If we now let $w = \sum_{j=1}^N w_j$, then using the disjointness of the supports of the $w_j$ we have $|w|_{V^q} \lesssim (\sum_{1\leqslant j \leqslant N} |w_j|_{V^q}^q)^{\frac{1}{q}} \lesssim N^\frac{1}{q}$ and hence
$$ \frac{\epsilon}{2} N \leqslant \sum_{j=1}^N \mathfrak{B}(w_j, u) = \mathfrak{B}(w, u) \leqslant |w|_{V^q} \| u \|_{\widetilde{U}^p_{\mathfrak{B}}} \lesssim N^{\frac{1}{q}} \| u \|_{\widetilde{U}^p_{\mathfrak{B}}}.$$
Letting $N\to \infty$ we obtain a contradiction. Therefore \eref{eqn:gen approx smallness:key bound} follows.
\end{proof}
The second result gives the existence of functions $w \in X$ satisfying a number of crucial properties.
\begin{lemma}\label{lem:gen approx construction of w}
Let $\frac{1}{p} + \frac{1}{q}=1$ with $1<p<\infty$ and assume that $\mb{(A1)}$ holds. Let $u \in \widetilde{U}^p_{\mathfrak{B}}$ and $\tau = (s_k)_{k=1}^N \in \mb{P}$ with $\| u - u_\tau\|_{\widetilde{U}^p_{\mathfrak{B}}}>0$. Then there exists $w \in X$ with $|w|_{V^q} \leqslant 1$ satisfying the $L^\infty$ bound
$$ \| w \|_{L^\infty L^2} \lesssim \| u - u_\tau \|_{\widetilde{U}^p_{\mathfrak{B}}}^{1-p} \sup_{0\leqslant k \leqslant N} \| u \|_{\widetilde{U}^p_{\mathfrak{B}}(s_k, s_{k+1})}^{p-1},$$
and the lower bound
$$ \mathfrak{B}(w, u) \gtrsim \| u - u_\tau \|_{\widetilde{U}^p_{\mathfrak{B}}}. $$
\end{lemma}
\begin{proof} We begin by observing that by $\textbf{(A1)}$ and the assumption $\| u - u_\tau \|_{\widetilde{U}^p_{\mathfrak{B}}}>0$, we have
\begin{equation}\label{eqn:gen approx construction of w:sum nonzero} \sum_{k=0}^N \| u \|_{\widetilde{U}^p_{\mathfrak{B}}(s_k, s_{k+1})}^p >0. \end{equation}
In particular, at least one of the terms $\| u \|_{\widetilde{U}^p_{\mathfrak{B}}(s_k, s_{k+1})} \not = 0$. For $k=0, \dots, N$, we define functions $w_k \in X$ as follows. If $\| u \|_{\widetilde{U}^p_{\mathfrak{B}}(s_k, s_{k+1})} = 0$, we take $w_k=0$. On the other hand, if $\| u \|_{\widetilde{U}^p_{\mathfrak{B}}(s_k, s_{k+1})}>0$, we take $w_k \in X$ such that $\supp w_k \subset (s_k, s_{k+1})$, $|w_k|_{V^q} \leqslant 1$, and
$$ \mathfrak{B}(w_k, u) \geqslant \frac{1}{2} \| u \|_{\widetilde{U}^p_{\mathfrak{B}}(s_k, s_{k+1})}.$$
Such a function $w_k \in X$ exists by definition of $\|\cdot \|_{\widetilde{U}^p_{\mathfrak{B}}(s_k, s_{k+1})}$. We now let $w = \sum_{k=0}^N \alpha \| u \|_{\widetilde{U}^p_{\mathfrak{B}}(s_k, s_{k+1})}^{p-1} w_k$ with
$$ \alpha = 2^{-\frac{1}{p}} \Big( \sum_{k=0}^N \| u \|_{\widetilde{U}^p_{\mathfrak{B}}(s_k, s_{k+1})}^p \Big)^{\frac{1}{p}-1}.$$
Note that \eref{eqn:gen approx construction of w:sum nonzero} implies that $\alpha <\infty$, and at least one of the functions $w_k$ is non-zero, thus $w \in X \setminus \{0\}$. By construction, the functions $w_k$ have separated supports, and hence we have the $V^q$ bound
$$ |w|_{V^q}^q \leqslant 2^{q-1} \alpha^q \sum_{k=0}^N \| u \|_{\widetilde{U}^p_{\mathfrak{B}}(s_k, s_{k+1})}^{q(p-1)} |w_k|_{V^q}^q \leqslant 2^{q-1} \alpha^q \sum_{k=0}^N \| u \|_{\widetilde{U}_{\mathfrak{B}}^p(s_k, s_{k+1})}^p = 1.$$
Moreover, via $\textbf{(A1)}$, we have the $L^\infty_t$ bound
$$ \| w\|_{L^\infty L^2} \leqslant \alpha \sup_{0\leqslant k \leqslant N} \| u \|_{\widetilde{U}_{\mathfrak{B}}^p(s_k, s_{k+1})}^{p-1} |w_k|_{V^q} \lesssim \| u - u_\tau\|_{\widetilde{U}^p_{\mathfrak{B}}}^{1-p} \sup_{0\leqslant k \leqslant N} \| u \|_{\widetilde{U}^p_{\mathfrak{B}}(s_k, s_{k+1})}^{p-1} $$
and the lower bound
\begin{align*} \mathfrak{B}(w, u) &= \sum_{k=0}^N \alpha \| u \|_{\widetilde{U}_{\mathfrak{B}}^{p}(s_k, s_{k+1})}^{p-1} \mathfrak{B}(w_k, u) \\
&\geqslant \frac{\alpha}{2} \sum_{k=0}^N \| u \|_{\widetilde{U}_{\mathfrak{B}}^p(s_k, s_{k+1})}^p
= 2^{-\frac{1+p}{p}} \Big( \sum_{k=0}^N \| u \|_{\widetilde{U}_{\mathfrak{B}}^p(s_k, s_{k+1})}^p \Big)^\frac{1}{p} \gtrsim \| u - u_\tau\|_{\widetilde{U}_{\mathfrak{B}}^p}.
\end{align*}
\end{proof}
We now come to the proof of Theorem \ref{thm:gen approx by step func}.
\begin{proof}[Proof of Theorem \ref{thm:gen approx by step func}]
We argue by contradiction. Let $u\in \widetilde{U}^p_{\mathfrak{B}}$ and suppose there exists $\epsilon_0>0$ such that for every $\tau \in \mb{P}$ we have
\begin{equation}
\label{eqn:gen approx by step func:contra assump} \| u - u_\tau \|_{\widetilde{U}^p_{\mathfrak{B}}} \geqslant \epsilon_0.
\end{equation}
We claim that \eref{eqn:gen approx by step func:contra assump} implies that for each $N \in \mathbb{N}$ there exists $w_N \in X$ such that
\begin{equation}\label{eqn:gen approx by step func:properties of w_N}
|w_N|_{V^q} \leqslant (2N)^\frac{1}{q}, \qquad \mathfrak{B}(w_N, u) \gtrsim \frac{1}{4} N \epsilon_0 .
\end{equation}
But, similar to Lemma \ref{lem:gen approx smallness}, this gives a contradiction as $N \to \infty$ since \eref{eqn:gen approx by step func:properties of w_N} implies that
$$ \frac{1}{4} N \epsilon_0 \lesssim \mathfrak{B}(w_N, u) \leqslant |w_N|_{V^q} \| u \|_{\widetilde{U}^p_{\mathfrak{B}}} \leqslant (2N)^\frac{1}{q} \| u \|_{\widetilde{U}^p_{\mathfrak{B}}}.$$
Thus it only remains to show that \eref{eqn:gen approx by step func:contra assump} implies \eref{eqn:gen approx by step func:properties of w_N}. To this end, note that \eref{eqn:gen approx by step func:contra assump} together with $\mb{(A1)}$, Lemma \ref{lem:gen approx smallness}, and Lemma \ref{lem:gen approx construction of w}, implies that for every $\delta >0$ there exists $w' \in X$ such that
\begin{equation}\label{eqn:gen approx by step func:conseq of lem}
|w'|_{V^q} \leqslant 1, \qquad \| w'\|_{L^\infty L^2} \leqslant \delta, \qquad \mathfrak{B}(w', u) \gtrsim \epsilon_0.
\end{equation}
Taking $\delta = 1$ gives $w_1$ satisfying \eref{eqn:gen approx by step func:properties of w_N}. Suppose we have $w_N$ satisfying \eref{eqn:gen approx by step func:properties of w_N}. Let $\epsilon^*>0$. By $\mb{(A2)}$, we have a step function $w_N' \in \mathfrak{S}$ such that $|w_N - w_N'|_{V^q} < \epsilon^*$. Choose $w' \in X$ such that \eref{eqn:gen approx by step func:conseq of lem} holds with
$$ \delta < \epsilon^* \min_{\substack{ t_1<t_2 \\ w_N'(t_1) \not = w_N'(t_2)}} \| w_N'(t_1) - w_N'(t_2) \|_{L^2} $$
(this quantity is non-zero since $w'_N$ is a step function) and define $w_{N+1} = w_N + w' \in X$. To check the required $V^q$ bound, suppose that $\tau = (t_j) \in \mb{P}$ and take
$$ \tau' = \{ t_j \in \tau \mid w_N'(t_j) = w_N'(t_{j+1}) \}, \qquad \tau'' = \{ t_j \in \tau \mid w_N'(t_j) \not = w_N'(t_{j+1} )\}.$$
Then
\begin{align*}
\sum_{j=1}^{N-1} &\| (w_N' + w')(t_{j+1}) - (w_N' + w')(t_j) \|_{L^2}^q \\
&\leqslant \sum_{t_j \in \tau'} \| w'(t_{j+1}) - w'(t_j) \|_{L^2}^q + (1+2\epsilon^*)^q \sum_{t_j \in \tau''} \| w_N'(t_{j+1}) - w_N'(t_j)\|_{L^2}^q \\
&\leqslant 1 + (1 + 2 \epsilon^*)^q 2N
\end{align*}
and consequently, by choosing $\epsilon^*>0$ sufficiently small we have
$$ | w_{N+1} |_{V^q} \leqslant |w_{N} - w_N'|_{V^q} + |w_N' + w'|_{V^q} \leqslant \epsilon^* + \big( 1 + 2 N (1 + 2 \epsilon^*)\big)^\frac{1}{q} \leqslant \big(2 (N+1)\big)^\frac{1}{q}. $$
On the other hand, we have
$$ \mf{B}(w_{N+1}, u) = \mf{B}(w_N, u) + \mf{B}(w', u) \gtrsim \frac{1}{4}(N+1) \epsilon_0.$$
Consequently $w_{N+1}$ satisfies \eref{eqn:gen approx by step func:properties of w_N} as required.
\end{proof}
\subsection{Proof of Theorem \ref{thm:characteristion of Up}}\label{subsec:proof of charac} Let $1<p<\infty$. An application of Theorem \ref{thm:dual pairing} implies that for every $u\in \mf{S} \subset U^p$ we have
$$ \| u \|_{U^p} = \| u \|_{\widetilde{U}^p_{dis}} = \| u \|_{\widetilde{U}^p_{cts}}.$$
Since $\mf{S}$ is a dense subset of $U^p$, to prove Theorem \ref{thm:characteristion of Up}, it suffices to show that the set of step functions $\mf{S}$ is also dense in $\widetilde{U}^p_{dis}$ and $\widetilde{U}^p_{cts}$. In view of Theorem \ref{thm:gen approx by step func}, the density of $\mf{S}$ in $\widetilde{U}^p_{dis}$ and $\widetilde{U}^p_{cts}$ would follow provided that the conditions $\mb{(A1)}$ and $\mb{(A2)}$ hold true. It is clear that $\mb{(A2)}$ holds in the discrete case by definition; in the continuous case we use Remark \ref{rmk:u-vs-v}. On the other hand, the proof of the localisation condition $\mb{(A1)}$ is more involved. Let $-\infty\leqslant a < b \leqslant \infty$ and define local versions of the norms $\| \cdot \|_{\widetilde{U}^p_{dis}}$ and $\| \cdot \|_{\widetilde{U}^p_{cts}}$ by taking
$$ \| u \|_{\widetilde{U}^p_{dis}(a,b)} = \sup_{ \substack{ v \in \mathfrak{S}_0 \\ \supp v \subset (a,b)\\ | v |_{V^p} \leqslant 1}} |B(v, u)|$$
and
$$ \| u \|_{\widetilde{U}^p_{cts}(a,b)} = \sup_{\substack{ v \in C^\infty_0\\ \supp v \subset (a,b) \\ |v|_{V^p} \leqslant 1}} \Big| \int_\mathbb{R} \lr{ \partial_t v, u}_{L^2} dt \Big|.$$
We have reduced the proof of Theorem \ref{thm:characteristion of Up} to the following.
\begin{lemma}\label{lem:local to global}
Let $1<p<\infty$ and $\tau \in \mb{P}$. If $u \in L^\infty_t L^2_x$ is right continuous with $u(t) \to 0$ as $t\to - \infty$, and we define $u_\tau = \sum_{j=1}^N \mathbbold{1}_{[t_j, t_{j+1})} u(t_j)$ and $t_0 = -\infty$, $t_{N+1} = \infty$, then
\begin{equation}\label{eqn:lem local to global:dis bound} \| u - u_\tau \|_{\widetilde{U}^p_{dis}} \leqslant 4 \Big( \sum_{j=0}^{N} \| u \|_{\widetilde{U}^p_{dis}(t_j, t_{j+1})}^p \Big)^\frac{1}{p},
\end{equation}
and
\begin{equation}\label{eqn:lem local to global:cts bound} \| u - u_\tau \|_{\widetilde{U}^p_{cts}} \leqslant 4 \Big( \sum_{j=0}^N \| u \|_{\widetilde{U}^p_{cts}(t_j, t_{j+1})}^p \Big)^\frac{1}{p}.
\end{equation}
\end{lemma}
\begin{proof} We start by proving something similar in the $V^q$ case. Suppose that $w \in V^q$ and $\tau=(t_j)_{j=1}^N \in \mb{P}$. Let $t_0 = -\infty$ and $t_{N+1}=\infty$. For $j=0, \dots, N$ let $I_j \subset (t_j, t_{j+1})$ be a left closed and right open interval, and define $w_j = \mathbbold{1}_{I_j} ( w - w(a_j))$ where $a_j \in (t_j, t_{j+1}]$. We claim that
\begin{equation}\label{eqn:lem local to global:Vq bound}
\sum_{j=0}^N |w_j|_{V^q}^q \leqslant 2^{q+1} |w|_{V^q}^q.
\end{equation}
To prove the claim, we observe that for every $\epsilon>0$ there exists $(s^{(j)}_k)_{k=1}^{N_j} \in \mb{P}$ such that
$ |w_j|_{V^q}^q \leqslant \epsilon + \sum_{k=1}^{N_j-1} \| w_j(s^{(j)}_{k+1}) - w_j(s^{(j)}_k)\|_{L^2}^q$. Without loss of generality, we may assume that $s^{(j)}_1=t_j$, $s^{(j)}_{N_j} = t_{j+1}$ for $1\leqslant j\leqslant N-1$, and $s^{(0)}_{N_0} = t_1$, $s^{(N)}_1 = t_N$. The definition of $w_j$ then gives
\begin{align*}
\sum_{k=1}^{N_j-1} \| &w_j(s^{(j)}_{k+1}) - w_j(s^{(j)}_k) \|_{L^2}^q \\
&\leqslant 2^{q-1} \sum_{k=1}^{N_j-1} \big\|\big( \mathbbold{1}_{I_j}(s^{(j)}_{k+1}) - \mathbbold{1}_{I_j}(s^{(j)}_k) \big)\big(w(s^{(j)}_k) - w(a_j)\big) \big\|_{L^2}^q\\
&\quad + 2^{q-1} \sum_{k=1}^{N_j-1} \| w(s^{(j)}_{k+1}) - w(s^{(j)}_k) \|_{L^2}^q\\
&\leqslant 2^{q-1} \Big( \| w(s^{(j)}_{k,min}) - w(a_j) \|_{L^2}^q +\| w(s^{(j)}_{k, max}) - w(a_j) \|_{L^2}^q \Big)\\
&\quad + 2^{q-1}\sum_{k=1}^{N_j-1} \| w(s^{(j)}_{k+1}) - w(s^{(j)}_k) \|_{L^2}^q
\end{align*}
for some $t_j \leqslant s^{(j)}_{k, min} < s^{(j)}_{k, max}\leqslant t_{j+1}$. Summing up over $j$, and choosing $\epsilon>0$ sufficiently small, we then obtain \eqref{eqn:lem local to global:Vq bound}.
We now turn to the proof of \eqref{eqn:lem local to global:dis bound}. Let $\tau = (t_j)_{j=1}^N \in \mb{P}$ and $v\in \mathfrak{S}$, $\epsilon>0$, and for $j=1, \dots, N-1$ define
$$v_j(t) = \mathbbold{1}_{[t_j+\epsilon, t_{j+1}-\epsilon)}(t)\big( v(t) - v(t_{j+1}-\epsilon)\big),$$
and
$$ v_0(t) = \mathbbold{1}_{[-\epsilon^{-1}, t_1)}(t)\big( v(t) - v(t_{1}-\epsilon)\big), \quad v_N(t) = \mathbbold{1}_{[t_N+\epsilon, \infty)}(t) \big( v(t) - v(\infty)\big)$$
where we let $v(\infty) = \lim_{t\to \infty} v(t)$; note that $v(t) = v(\infty)$ for all sufficiently large $t$ since $ v\in \mf{S}$. By construction and the bound \eqref{eqn:lem local to global:Vq bound}, we have $v_j \in \mathfrak{S}$, $\supp v_j \subset (t_j, t_{j+1})$, and $ \sum_{j=0}^N |v_j|_{V^q}^q \leqslant 2^{q+1} |v|_{V^q}^q$. Suppose that we can show that for all $\delta>0$, by choosing $\epsilon= \epsilon(u,v,\delta, \tau)>0$ sufficiently small, we have for all $j=0, \dots, N$
\begin{equation}\label{eqn:lem local to global:dis error bound}
|B( \mathbbold{1}_{[t_j, t_{j+1})} v- v_j, u - u_\tau) | \leqslant \delta.
\end{equation}
Then, by linearity and definition of the bilinear pairing $B$, we have $B(v_j, u_\tau)=0$ and hence
\begin{align*}
|B(v,u-u_\tau)| &\leqslant (N+1) \sup_j \big| B\big( \mathbbold{1}_{[t_j, t_{j+1})} v - v_j, u-u_\tau\big)\big|+ \sum_{j=0}^N | B(v_j, u)| \\
&\leqslant (N+1) \delta + \sum_{j=0}^{N} |v_j|_{V^q} \| u \|_{\widetilde{U}^p_{dis}(t_j, t_{j+1})} \\
&\leqslant (N+1) \delta + 4 |v|_{V^q} \Big( \sum_{j=0}^N \| u \|_{\widetilde{U}^p_{dis}(t_j, t_{j+1})}^p \Big)^\frac{1}{p}.
\end{align*}
Since this holds for every $\delta>0$, \eqref{eqn:lem local to global:dis bound} follows. Thus it remains to prove \eqref{eqn:lem local to global:dis error bound}. Since $v \in \mf{S}$ is a step function with $v(-\infty) = 0$, provided that we choose $\epsilon>0$ sufficiently small, a computation gives for $j=1, \dots, N$ the identities
$$ \mathbbold{1}_{[t_j, t_{j+1})}(t) v(t) - v_j(t) = \mathbbold{1}_{[t_j, t_j + \epsilon)}(t) v(t_j) + \mathbbold{1}_{[t_j + \epsilon, t_{j+1})}(t) v(t_{j+1} - \epsilon), $$
and
$$ \mathbbold{1}_{(-\infty, t_1)}(t) v(t) - v_0(t) = \mathbbold{1}_{[-\epsilon^{-1}, t_1 )}(t) v(t_1 - \epsilon).$$
Hence by definition we have for $j=1, \dots, N$
$$ B( \mathbbold{1}_{[t_j, t_{j+1})} v- v_j, u - u_\tau) = \lr{v(t_{j+1}-\epsilon) - v(t_j), u(t_j+\epsilon)-u(t_j)},$$
and
$$B(\mathbbold{1}_{(-\infty, t_1)}v - v_0, u-u_\tau) = \lr{v(t_1 -\epsilon), u(-\epsilon^{-1})}.$$
Therefore \eqref{eqn:lem local to global:dis error bound} follows from the right continuity of $u$ and the normalisation condition on $u$ at $-\infty$. This completes the proof of \eqref{eqn:lem local to global:dis bound}.
We now turn to the proof of \eqref{eqn:lem local to global:cts bound}. Let $\partial_t \phi \in C^\infty_0$, $\epsilon>0$, and define
$$ \phi_j(t) = \int_\mathbb{R} \frac{1}{\epsilon} \rho \Big(\frac{s}{\epsilon} \Big) \mathbbold{1}_{I_j}(t-s)ds \big( \phi(t) - \phi(t_{j+1}) \big)$$
with $I_j = [t_j + \epsilon, t_{j+1} - \epsilon)$ for $j=1, \dots, N-1$, $I_0 = [-\epsilon^{-1}, t_1-\epsilon)$, $I_N = [t_N + \epsilon, \epsilon^{-1})$, and we take $\rho \in C^\infty_0(-1, 1)$ with $\int_\mathbb{R} \rho =1$. Then clearly $\phi_j \in C^\infty_0$, $\supp \phi_j \subset (t_j, t_{j+1})$, and a short computation using the bound \eqref{eqn:lem local to global:Vq bound} gives $\sum_j |\phi_j|_{V^q}^q \leqslant 2^{q+1} |\phi|_{V^q}^q$. Consequently, similar to the discrete case above, it is enough to show that for every $\delta>0$ we can find an $\epsilon = \epsilon(\phi, u, \tau, \delta) >0$ such that
\begin{equation}\label{eqn:lem local to global:cts error bound}
\Big| \int_{t_j}^{t_{j+1}} \lr{ \partial_t \phi - \partial_t \phi_j, u-u_\tau} dt \Big| \leqslant \delta.
\end{equation}
This is a consequence of the right continuity of $u$. More precisely, if $j=1, \dots, N-1$, after writing
\begin{align*} \partial_t \phi(t) - \partial_t \phi_j(t) &= \Big(1-\int_\mathbb{R}\frac{1}{ \epsilon} \rho \Big(\frac{s}{\epsilon} \Big) \mathbbold{1}_{I_j}(t-s)ds \Big) \partial_t \phi(t) \\
&\quad - \frac{\phi(t) - \phi(t_{j+1})}{\epsilon} \int_\mathbb{R} \frac{1}{ \epsilon} \partial_s\rho \Big(\frac{s}{\epsilon} \Big) \mathbbold{1}_{I_j}(t-s)ds
\end{align*}
we see that
\begin{align*}
\Big| \int_{t_j}^{t_{j+1}} &\lr{ \partial_t \phi - \partial_t \phi_j, u-u_\tau} dt \Big|\\
&\lesssim \epsilon \| \partial_t \phi \|_{L^\infty_t L^2_x} \| u \|_{L^\infty_t L^2_x} + \epsilon^{-1} \| \phi \|_{L^\infty_t L^2_x} \int_{t_j}^{t_{j}+3\epsilon} \| u(t) - u(t_j)\|_{L^2} dt \\
&\quad \quad \quad \quad \quad \quad + \epsilon^{-1} \| u \|_{L^\infty_t L^2_x} \int_{t_{j+1}-3\epsilon}^{t_{j+1}} \| \phi(t) - \phi(t_{j+1})\|_{L^2} dt
\end{align*}
and hence \eqref{eqn:lem local to global:cts error bound} follows by the right continuity of $u$ provided we choose $\epsilon$ sufficiently small. A similar argument proves the cases $j=0$ and $j=N$.
\end{proof}
\section{Convolution, multiplication, and the adapted function spaces}\label{sec:bound}
In this section we record a number of key properties of the $U^p$ and $V^p$ spaces that have been used frequently throughout this article. Namely, in Subsection \ref{subsec:conv} we prove a basic convolution estimate together with the standard Besov embedding \[\dot{B}^\frac{1}{p}_{p,1} \subset V^p \subset U^p \subset \dot{B}^{\frac{1}{p}}_{p, \infty},\] up to normalisation at $t=-\infty$. This is well-known, see \cite{Peetre1976,Koch2005,Koch2014}. In Subsection \ref{subsec:alg} we prove an important high-low product type estimate in $U^p$ and $V^p$, which gives as a special case the crucial product estimates used in Section \ref{sec:multi}. Finally, in Subsection \ref{subsec:adapted}, we consider the adapted function spaces $U^p_\Phi$ and $V^p_\Phi$.
\subsection{Convolution and the Besov Embedding}\label{subsec:conv}
We use the notation
$$ f *_\mathbb{R} g(t) = \int_\mathbb{R} f(s) g(t-s) ds $$
to signify the convolution in the $t$ variable.
\begin{lemma}\label{lem:conv oper on Up}
Let $1<p<\infty$ and $\phi(t) \in L^1_t(\mathbb{R})$. For all $u \in U^p$ and $v \in V^p$ we have $\phi \ast_\mathbb{R} u \in U^p$, $\phi *_\mathbb{R} v \in V^p$, and the bounds
$$ \Big\| \phi *_\mathbb{R} u \Big\|_{U^p} \leqslant 2\| \phi\|_{L^1_t} \| u \|_{U^p}, \quad \Big\| \phi *_\mathbb{R} v \Big\|_{V^p} \leqslant \| \phi\|_{L^1_t} \| v \|_{V^p}.$$
\end{lemma}
\begin{proof}
We first observe that since $u$ and $v$ are right continuous and decay to zero as $t \to - \infty$, the convolutions $\phi*_\mathbb{R} u$ and $\phi *_\mathbb{R} v$ also satisfy these conditions. The proof of the $V^p$ bound is immediate, and hence $\phi *_\mathbb{R} v \in V^p$. To show that $\phi *_\mathbb{R} u \in U^p$, we apply Theorem \ref{thm:characteristion of Up} and observe that for any $\psi \in C^\infty_0$, we have
\begin{align*}
\Big| \int_\mathbb{R} \lr{ \partial_t \psi(t), \phi *_\mathbb{R} u(t) }_{L^2_x} dt \Big| &= \Big| \int_\mathbb{R} \Big\langle \int_\mathbb{R} \partial_t \psi(t+s) \phi(t) dt , u(s) \Big\rangle_{L^2_x} ds \Big| \\
&\leqslant \Big\| \int_\mathbb{R} \psi(t+s) \phi(t) dt \Big\|_{V^p} \| u \|_{U^p} \\
&\leqslant \| \psi \|_{V^q} \| \phi \|_{L^1_t} \| u \|_{U^p}.
\end{align*}
\end{proof}
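For the reader's convenience, we record the short computation behind the ``immediate'' $V^p$ bound (a minimal sketch using Minkowski's integral inequality): for any partition $(t_j)_{j=1}^N \in \mb{P}$ we have
$$ \Big( \sum_{j=1}^{N-1} \| \phi *_\mathbb{R} v(t_{j+1}) - \phi *_\mathbb{R} v(t_j) \|_{L^2_x}^p \Big)^\frac{1}{p} \leqslant \int_\mathbb{R} |\phi(s)| \Big( \sum_{j=1}^{N-1} \| v(t_{j+1}-s) - v(t_j-s) \|_{L^2_x}^p \Big)^\frac{1}{p} ds \leqslant \| \phi \|_{L^1_t} | v |_{V^p},$$
since for each fixed $s$ the shifted points $(t_j - s)_{j=1}^N$ again form a partition.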
A similar argument shows that the space-time convolution with an $L^1_{t,x}(\mathbb{R}^{1+n})$ kernel is also bounded on $U^p$ and $V^p$.
The spaces $V^p$ and $U^p$ are closely related. In fact, for functions which have temporal Fourier support in an annulus centered at the origin, the $U^p$ and $V^p$ norms are equivalent.
\begin{theorem}\label{thm:besov embedding}
Let $1<p<\infty$ and $d>0$. If $u \in L^p_t L^2_x$ with $\supp \mc{F}_t u \subset \{ \frac{d}{100} \leqslant |\tau| \leqslant 100d\}$ then $u \in U^p$ and
$$ \| u \|_{V^p} \approx \| u \|_{U^p} \approx d^\frac{1}{p} \| u \|_{L^p_t L^2_x}.$$
Conversely, if $u \in U^p$ and $\supp \mc{F}_t u \subset \{ \frac{d}{100} \leqslant |\tau| \leqslant 100d\}$, then $u \in L^p_t L^2_x$.
\end{theorem}
\begin{remark}\label{rmk:besov embedding}
Theorem \ref{thm:besov embedding} can also be stated in terms of the Besov spaces $\dot{B}^\frac{1}{p}_{p, \infty}$ and $\dot{B}^\frac{1}{p}_{p, 1}$. More precisely, let $\rho \in C^\infty_0( \{2^{-1} < \tau < 2\})$ such that $ \sum_{d \in 2^\mathbb{Z}} \rho(\tfrac{\tau}{d}) = 1 $
for $\tau > 0$ and define $P^{(t)}_d = \rho( \frac{ |- i \partial_t|}{d})$. If $1<p<\infty$ and $v \in V^p$ then
$$ \sup_{d \in 2^\mathbb{Z}} d^\frac{1}{p} \| P^{(t)}_d v \|_{L^p_t L^2_x} \lesssim | v |_{V^p}.$$
On the other hand, if $ u(t) \to 0$ in $L^2_x$ as $t \to -\infty$ and
$$\sum_{d \in 2^\mathbb{Z}} d^\frac{1}{p} \| P^{(t)}_d u \|_{L^p_t L^2_x} <\infty,$$
then $u \in U^p$ and
$$ \| u \|_{U^p} \lesssim \sum_{d \in 2^\mathbb{Z}} d^{\frac{1}{p}} \| P^{(t)}_d u \|_{L^p_t L^2_x}. $$
The first claim follows directly from Theorem \ref{thm:besov embedding} and the disposability of the multipliers $P^{(t)}_d$, which is a consequence of Lemma \ref{lem:conv oper on Up}. To prove the second claim, in view of Theorem \ref{thm:besov embedding}, the sum converges in the Banach space $U^p$, and so we have $\sum_d P^{(t)}_d u \in U^p$. Since $u(t)$ and $\sum_d P^{(t)}_d u(t)$ can only differ by a polynomial, and both vanish at $-\infty$, we conclude that $u(t) = \sum_d P^{(t)}_d u(t) \in U^p$ and the claimed bound follows.
\end{remark}
The proof of Theorem \ref{thm:besov embedding}, as well as the proof of the product estimates contained in the next section, requires the following observation.
\begin{lemma}\label{lem:key ineq for stab lem}
For any $s \in \mathbb{R}$, $(t_j)_{j=1}^N \in \mb{P}$, and $g \in V^p$ we have
$$ \sum_{j=1}^{N} m_j^p \big\| g(t_j) - g(t_j-s)\big\|_{L^2}^p \leqslant 2 (1+|s|) | g |_{V^p}^p$$
where $m_j = \min\{ t_{j+1} - t_j, 1\}$.
\end{lemma}
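We record an immediate special case of Lemma \ref{lem:key ineq for stab lem}: if the partition satisfies $t_{j+1} - t_j \geqslant 1$ for $j = 1, \dots, N-1$, so that the corresponding weights are $m_j = 1$, then the conclusion simply reads
$$ \sum_{j=1}^{N-1} \big\| g(t_j) - g(t_j-s)\big\|_{L^2}^p \leqslant 2 (1+|s|) | g |_{V^p}^p. $$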
\begin{proof}
It is enough to consider $s>0$. Let \[J_k=\big\{j \in \{1,\ldots, N\}: t_j \in [sk,(k+1)s)\big\}.\]
Then,
\[\sum_{j=1}^{N} m_j^p \|g(t_{j})-g(t_j-s)\|_{L^2_x}^p =\sum_{\substack{k \in \mathbb{Z} \\ J_k \not = \varnothing}}\sum_{j \in J_k } m_j^p \|g(t_{j})-g(t_j-s)\|_{L^2_x}^p\]
and, as $m_j\leqslant 1$,
\begin{align*}
\sum_{j \in J_k } m_j^p \|g(t_{j})-g(t_j-s)\|_{L^2_x}^p &\leqslant \Big(\sum_{j \in J_k } m_j \Big) \max_{j \in J_k}\|g(t_{j})-g(t_j-s)\|_{L^2_x}^p\\
&\leqslant (1+s) \|g(t_{j_k})-g(t_{j_k}-s)\|_{L^2_x}^p
\end{align*}
where $t_{j_k}\in J_k$ is chosen such that \[\|g(t_{j_k})-g(t_{j_k}-s)\|_{L^2_x}=\max_{j \in J_k}\|g(t_{j})-g(t_j-s)\|_{L^2_x}.\]
Now,
\begin{align*}
&\sum_{\substack{k \in \mathbb{Z} \\ J_k \not = \varnothing}}\|g(t_{j_k})-g(t_{j_k}-s)\|_{L^2_x}^p\\
\leqslant{}& \sum_{\substack{ k \text{ even} \\ J_k \not = \varnothing}}\|g(t_{j_k})-g(t_{j_k}-s)\|_{L^2_x}^p
+\sum_{\substack{ k \text{ odd} \\ J_k \not = \varnothing}}\|g(t_{j_k})-g(t_{j_k}-s)\|_{L^2_x}^p
\leqslant{} 2|g|_{V^p}^p,
\end{align*}
because for $k=2m$ even we have
\[
t_{j_{2m}}<t_{j_{2(m+1)}}-s<t_{j_{2(m+1)}}\; ,
\]
hence the above points form a partition, and a similar argument applies to $k=2m+1$ odd. In summary, we have
\[
\mathcal{S}um_{j=1}^{N-1} m_j^p \|g(t_{j})-g(t_j-s)\|_{L^2_x}^p \leqslant 2 (1+s) |g|_{V^p}^p.
\]
\end{proof}
We now come to the proof of Theorem \ref{thm:besov embedding}.
\begin{proof}[Proof of Theorem \ref{thm:besov embedding}]
After rescaling, it is enough to consider the case $d=1$. Let $\rho \in C^\infty(\mathbb{R})$ with $\supp \widehat{\rho} \subset \{ |\tau| \leqslant \frac{1}{100}\}$, $\| (1+|s|) \rho(s)\|_{L^1_s} \lesssim 1$ and $\mc{F}_t (\rho) (0) = 1$. The temporal Fourier support assumption implies that
$$u(t) = \int_\mathbb{R} \rho(s) [u(t) - u(t-s)]ds$$
and hence an application of Lemma \ref{lem:key ineq for stab lem} gives
\begin{align*}
\| u \|_{L^p_t L^2_x} &\leqslant \int_\mathbb{R} |\rho(s)| \| u(t-s) - u(t) \|_{L^p_t L^2_x} ds \\
&\leqslant 2 \int_\mathbb{R} |\rho(s)| \Big( \sum_{j\in \mathbb{N}} \| u(t_j-s) - u(t_j)\|_{L^2_x}^p \Big)^\frac{1}{p} ds \\
&\leqslant 4 \| (1+|s|) \rho(s)\|_{L^1_s} \|u\|_{V^p} \lesssim \|u\|_{V^p}
\end{align*}
where we choose $t_j=t_j(s) \in [j, j+1)$ such that \[\sup_{t\in [j, j+1)} \|u(t-s) - u(t) \|_{ L^2_x} \leqslant 2 \| u(t_j-s) - u(t_j)\|_{L^2_x}.\]
It remains to show that $ u \in U^p$ and the bound $\| u \|_{U^p} \leqslanta \| u \|_{L^p_t L^2_x}$. The assumptions on $u$ imply that $u$ is right continuous and $\| u(t)\|_{L^2_x} \to 0$ as $t \to -\infty$. Thus we apply Theorem \ref{thm:characteristion of Up} and observe that, using the $V^p$ case proved above, we have for $\partialhi \in C^\infty_0$
\begin{align*}
\Big| \int_\mathbb{R} \lr{ \partial_t \phi, u }_{L^2} dt \Big| &\leqslant \| \mathbbold{1}_{\{ 100^{-1}\leqslant |\tau| \leqslant 100\}}(-i \partial_t) \partial_t \phi \|_{L^q_t L^2_x} \| u \|_{L^p_t L^2_x} \\
&\lesssim \| \mathbbold{1}_{\{ 100^{-1}\leqslant |\tau| \leqslant 100\}}(-i \partial_t) \phi \|_{L^q_t L^2_x} \| u \|_{L^p_t L^2_x} \lesssim |\phi|_{V^q} \| u \|_{L^p_t L^2_x}
\end{align*}
where $\frac{1}{p} + \frac{1}{q} =1 $. Hence $u\in U^p$ and the claimed bounds follow.
\end{proof}
\subsection{Stability under multiplication}\label{subsec:alg}
In the following, given $u, v: \mathbb{R} \to L^2_x$, we let $\widehat{u}(t)$ denote the Fourier transform in the spatial variable $x$, and $\mc{F}_t u(\tau)$ the Fourier transform in the $t$ variable. Our goal is to find conditions under which the product $uv$ belongs to either $U^p$ or $V^p$. One possibility is the following.
\begin{lemma}\label{lem:stab}
Let $1\leqslant p<\infty$. Let $f,g:\mathbb{R} \to L^2_x$ be bounded and satisfy $\supp(\mc{F}_tf)\subset (-1,1)$ and $\supp(\mc{F}_t g)\subset \mathbb{R} \setminus (-4,4)$. If $g \in V^p$, then $fg \in V^p$ and
\[
\|fg\|_{V^p}\lesssim \|f\|_{L^\infty_{t,x}}\|g\|_{V^p}.
\]
On the other hand, if $g \in U^p$, then $fg \in U^p$ and
\[
\|fg\|_{U^p}\lesssim \|f\|_{L^\infty_{t,x}}\|g\|_{U^p}.
\]
\end{lemma}
Under the support assumptions of the previous lemma, it is clear that for $s>0$ we have the Sobolev product inequality $\| fg \|_{\dot{H}^s}\lesssim \|f \|_{L^\infty} \| g\|_{\dot{H}^s}$. In particular, the previous lemma should be thought of as the $U^p$ and $V^p$ version of the standard heuristic that derivatives can essentially always be taken to fall on the high frequency term.
We now turn to the proof of Lemma \ref{lem:stab}.
\begin{proof}[Proof of Lemma \ref{lem:stab}]
We start with the $V^p$ bound under the weaker assumption that we only have $\supp \mc{F}_t g \subset \mathbb{R} \setminus (-3, 3)$. Clearly, it is enough to prove the bound for $|fg|_{V^p}$. Let $\tau=(t_j)_{j=1}^N\in \mb{P}$ be any partition. Then
\begin{align*}
\Big(\sum_{j=1}^{N-1}\|fg(t_{j+1})-fg(t_j)\|_{L^2_x}^p \Big)^{\frac1p}&\leqslant \Big(\sum_{j=1}^{N-1}\| f(t_{j+1})(g(t_{j+1})-g(t_j))\|_{L^2_x}^p \Big)^{\frac1p} \\
&\quad \quad+ \Big(\sum_{j=1}^{N-1}\|(f(t_{j+1})-f(t_j))g(t_j)\|_{L^2_x}^p \Big)^{\frac1p},
\end{align*}
and due to
\[
\Big(\sum_{j=1}^{N-1}\| f(t_{j+1})(g(t_{j+1})-g(t_j))\|_{L^2_x}^p \Big)^{\frac1p}\leqslant \|f\|_{L^\infty_{t,x}}\|g\|_{V^p}
\]
it is enough to prove
\begin{equation}\label{eqn:lem stab:reduced prob} \Big(\sum_{j=1}^{N-1}\|(f(t_{j+1})-f(t_j))g(t_j)\|_{L^2_x}^p \Big)^{\frac1p} \lesssim \|f\|_{L^\infty_{t,x}}\|g\|_{V^p}.\end{equation}
The hypothesis on the Fourier-supports of $f$ and $g$ implies that there exists $\rho\in \mc{S}(\mathbb{R})$ such that
\[
f=\rho\ast f \; \text{ and }\; \rho\ast (f(\cdot-b)g)=0 \text{ for all }b \in \mathbb{R}.
\]
Consequently, we have the identity
\[(f(t_{j+1})-f(t_j))g(t_j)=\int_\mathbb{R} \rho(s) \big(f(t_{j+1}-s)-f(t_j-s)\big)\big(g(t_{j})-g(t_j-s)\big)ds.\]
Now, since
\begin{align*} \|f(a)-f(b)\|_{L^\infty_{x}}&= \Big\|\int_{\mathbb{R}} \big(\rho(a-s)-\rho(b-s)\big)f(s)ds \Big\|_{L^\infty_{x}}\\
&= \Big\|\int_a^b \int_{\mathbb{R}} \rho'(t-s)f(s)ds dt \Big\|_{L^\infty_{x}}\leqslant |a-b|\|\rho'\|_{L^1}\|f\|_{L^\infty_{t,x}},
\end{align*}
if we let $m_j = \min\{ t_{j+1} - t_j, 1\}$, an application of Lemma \ref{lem:key ineq for stab lem} gives
\begin{align*}
\Big(\sum_{j=1}^{N-1}\|&(f(t_{j+1})-f(t_j))g(t_j)\|_{L^2_x}^p \Big)^{\frac1p}\\
&\lesssim \| f \|_{L^\infty_{t,x}} \int_\mathbb{R} |\rho(s)| \Big(\sum_{j=1}^{N-1} m_j^p \|g(t_j) - g(t_j -s)\|_{L^2_x}^p \Big)^{\frac1p} ds \\
&\lesssim \| f \|_{L^\infty_{t,x}} \| g\|_{V^p} \int_\mathbb{R} |\rho(s)| (1+|s|)^{\frac{1}{p}} ds \\
&\lesssim \| f \|_{L^\infty_{t,x}} \| g \|_{V^p}
\end{align*}
and hence \eref{eqn:lem stab:reduced prob} follows.
To prove the $U^p$ version, we observe that since $g \in U^p$ and $f$ is smooth and bounded, the product $fg$ is right continuous and satisfies the normalising condition $(fg)(t) \to 0$ in $L^2_x$ as $t \to -\infty$. Consequently, by Theorem \ref{thm:characteristion of Up}, it suffices to show that
\begin{equation}\label{eqn:lem stab:Up temp} \Big| \int_\mathbb{R} \lr{ \partial_t \phi, fg}_{L^2_x} dt \Big| \lesssim \| \phi\|_{V^q} \|f \|_{L^\infty} \| g \|_{U^p}
\end{equation}
where $\partial_t \phi \in C^\infty_0$ and $ \frac{1}{q} + \frac{1}{p} = 1$. Applying the Fourier support assumption, together with the $V^p$ case of the product estimate proved above, we see that after writing $\partial_t \phi f = \partial_t (\phi f) - \partial_t^{-1} \partial_t( \phi \partial_t f)$,
\begin{align*}
\Big| \int_\mathbb{R} \lr{\partial_t \phi, fg} dt \Big| &= \Big| \int_\mathbb{R} \lr{\partial_t P^{(t)}_{\geqslant 3} \phi, fg} dt \Big|\\
&\leqslant \|P^{(t)}_{\geqslant 3} \phi \overline{f} \|_{V^q} \| g\|_{U^p} + \|P^{(t)}_{\geqslant 3} \phi \partial_t \overline{f} \|_{V^q} \| \partial_t^{-1} g\|_{U^p} \\
&\lesssim \| \phi\|_{V^q} \big( \| f\|_{L^\infty} \| g \|_{U^p} + \| \partial_t f \|_{L^\infty} \| \partial_t^{-1} g \|_{U^p}\big)
\end{align*}
where $P^{(t)}_{\geqslant 3}$ is a temporal Fourier projection to the set $\{|\tau| \geqslant 3\}$, and we used Theorem \ref{thm:dual pairing}. The required bound \eqref{eqn:lem stab:Up temp} then follows by the boundedness of convolution operators on $U^p$ and $V^p$ together with the Fourier support assumptions on $f$ and $g$.
\end{proof}
In applications to PDE, in particular to the wave maps equation, we require a more general version of the previous lemma which includes a spatial multiplier. To this end, for $\phi: \mathbb{R} \to L^\infty_x L^2_y + L^\infty_y L^2_x$ and $u, v \in L^\infty_t L^2_x$, we define
$$ \mc{T}_\phi[u,v](t,x) = \int_{\mathbb{R}^n} \phi(t,x-y,y) u(t,x-y) v(t,y) dy. $$
An application of Fubini and H\"older shows that, for each fixed $t$, $\mc{T}_\phi[\cdot,\cdot](t): L^2 \times L^2 \to L^2$ with the fixed time bound
$$ \| \mc{T}_\phi[u,v] \|_{L^\infty_t L^2_x} \leqslant \Big(\sup_{t\in \mathbb{R}} \| \phi(t,x,y) \|_{L^\infty_x L^2_y + L^\infty_y L^2_x} \Big) \| u \|_{L^\infty_t L^2_x} \| v \|_{L^\infty_t L^2_x}.$$
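To keep the presentation self-contained, we sketch this fixed time computation in the case $\phi(t) \in L^\infty_x L^2_y$ (the $L^\infty_y L^2_x$ part of the norm is handled symmetrically, by placing the Cauchy--Schwarz loss on the $v$ factor instead): by Cauchy--Schwarz in $y$, Fubini, and the change of variables $z = x - y$,
$$ \| \mc{T}_\phi[u,v](t) \|_{L^2_x}^2 \leqslant \| v(t) \|_{L^2}^2 \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} |\phi(t,x-y,y)|^2 |u(t,x-y)|^2 \, dy\, dx \leqslant \| \phi(t) \|_{L^\infty_x L^2_y}^2 \| u(t) \|_{L^2}^2 \| v(t) \|_{L^2}^2. $$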
Adapting the proof of Lemma \ref{lem:stab}, under certain temporal Fourier support conditions on $\phi$ and functions $u,v \in V^p$ (or $U^p$), we can show that $\mc{T}_\phi[u,v] \in V^p$ (or $U^p$).
\begin{theorem}\label{thm:stab with conv}
Let $1<p<\infty$. Let $\phi(t,x,y): \mathbb{R} \to L^\infty_x L^2_y + L^\infty_y L^2_x$ be continuous and bounded with $\supp \mc{F}_t \phi \subset (-1, 1)$. If $u,v \in V^p$ and $\supp \mc{F}_t u \subset \mathbb{R} \setminus (-4, 4)$, then we have
$$ \| \mc{T}_\phi[u,v] \|_{V^p} \lesssim \Big(\sup_{t\in \mathbb{R}} \| \phi(t,x,y) \|_{L^\infty_x L^2_y + L^\infty_y L^2_x} \Big) \| u \|_{V^p} \| v \|_{V^p}.$$
Moreover, if in addition we have $u,v \in U^p$, then $\mc{T}_\phi[u,v] \in U^p$ and
$$ \| \mc{T}_\phi[u,v] \|_{U^p} \lesssim \Big(\sup_{t\in \mathbb{R}} \| \phi(t,x,y) \|_{L^\infty_x L^2_y + L^\infty_y L^2_x} \Big) \| u \|_{U^p} \| v \|_{U^p}.$$
\end{theorem}
\begin{proof}
Let $S:=\sup_{t\in \mathbb{R}} \| \phi(t,x,y) \|_{L^\infty_x L^2_y + L^\infty_y L^2_x}$.
We start with the $V^p$ case. As in the proof of Lemma \ref{lem:stab}, we can reduce to considering the case when the difference falls on $\phi$, thus our goal is to show that for $(t_j)_{j=1}^N \in \mb{P}$, we have
\begin{equation}\label{eqn:thm stab with conv:reduced prob I}
\begin{split}
\sum_{j=1}^{N-1} \Big\| \int_{\mathbb{R}^n} \big(\phi(t_{j+1},x-y,y)& - \phi(t_j, x-y, y) \big) u(t_j,x-y) v(t_j,y)dy \Big\|_{L^2_x(\mathbb{R}^n)}^p\\
&\lesssim S^p \| u \|_{V^p}^p\| v \|_{V^p}^p.
\end{split}
\end{equation}
Following the argument used in Lemma \ref{lem:stab}, we see that the temporal Fourier support assumption implies the identity
\begin{align*}
&\big(\phi(t_{j+1},z,y) - \phi(t_j, z, y) \big) u(t_j, z) \\
={}& \int_\mathbb{R} \rho(s) \big(\phi(t_{j+1}-s,z,y) - \phi(t_j-s, z, y) \big) \big( u(t_j, z) - u(t_j-s,z) \big) ds\\
={}& \int_\mathbb{R} \int_\mathbb{R} \rho(s) \big( \rho(t_{j+1}-s-s') - \rho(t_j-s-s')\big) \phi(s',z,y) \big( u(t_j, z) - u(t_j-s,z) \big) ds\, ds'
\end{align*}
for some $\rho \in \mc{S}(\mathbb{R})$ with $\mc{F}_t \rho = 1$ on $\supp \mc{F}_t \phi$. Consequently we see that
\begin{align*}
& \Big\| \int_{\mathbb{R}^n} \big(\phi(t_{j+1},x-y,y) - \phi(t_j, x-y, y) \big) u(t_j,x-y) v(t_j,y)dy \Big\|_{L^2_x}\\
\leqslant{}& S \int_\mathbb{R} \int_\mathbb{R} |\rho(s)| |\rho(t_{j+1} -s') - \rho(t_j -s')| \| u(t_j)- u(t_j-s)\|_{L^2}\|v(t_j)\|_{L^2}ds\, ds'\\
\lesssim{}& S \int_\mathbb{R} |\rho(s)| m_j \| u(t_j) - u(t_j-s) \|_{L^2}ds\, \|v\|_{L^\infty_t L^2_x}
\end{align*}
with $m_j = \min\{ t_{j+1} - t_j, 1\}$. Hence \eqref{eqn:thm stab with conv:reduced prob I} follows from Minkowski's inequality and Lemma \ref{lem:key ineq for stab lem}.
We now turn to the proof of the $U^p$ case. Since $u,v\in U^p$ and $\phi$ is continuous, $\mc{T}_\phi[u,v]$ is right continuous and converges to zero as $t\to -\infty$. Consequently, applying the Besov embedding in Remark \ref{rmk:besov embedding}, we have
\begin{align*}
\|P^{(t)}_{\leqslant 4} \mc{T}_\phi[u,v]\|_{U^p} &\lesssim \sum_{d \lesssim 4} d^{\frac{1}{p}} \|P^{(t)}_{d} \mc{T}_\phi[u,v]\|_{U^p} \\
&\lesssim{} \|\mc{T}_\phi[u,v]\|_{L^p_tL^2_x} \lesssim{}S \|u\|_{L^p_t L^2_x}\|v\|_{L^\infty_t L^2_x}\lesssim{}S \|u\|_{U^p}\|v\|_{U^p} .
\end{align*}
For the remaining term, we note that an application of Theorem \ref{thm:characteristion of Up} reduces the problem to proving the bound
$$ \Big| \int_\mathbb{R} \lr{ \partial_t \psi, \mc{T}_\phi[u,v]}_{L^2} dt \Big| \lesssim S \| \psi \|_{V^{p'}} \| u \|_{U^p} \| v\|_{U^p} $$
where $\psi, \mc{F}_t \psi \in C^\infty$ and $\supp \mc{F}_t \psi \subset \mathbb{R} \setminus (-4, 4)$. We write the left hand side as
\begin{align} \Big| \int_\mathbb{R} \lr{ &\partial_t \psi, \mc{T}_\phi[u,v]}_{L^2} dt \Big|\notag \\
&= \Big| \int_\mathbb{R} \big\langle \partial_t \psi(t,x), \phi(t,x-y,y) u(t,x-y) v(t,y) \big\rangle_{L^2_{x,y}} dt \Big| \notag \\
&\leqslant \Big| \int_\mathbb{R} \big\langle \psi(t,x), \partial_t \phi(t,x-y,y) u(t,x-y) v(t,y) \big\rangle_{L^2_{x,y}} dt \Big|\notag \\
&\quad \quad
+ \Big| \int_\mathbb{R} \big\langle \partial_t \big( \psi(t,x) \overline{\phi}(t,x-y,y)\big), u(t,x-y) v(t,y) \big\rangle_{L^2_{x,y}} dt \Big|
\label{eqn:thm stab with conv:decomp}
\end{align}
To bound the first term in \eref{eqn:thm stab with conv:decomp}, we use the temporal support assumption on $\phi$ to write $\partial_t \phi(t) = \int_\mathbb{R} \partial_t \rho(s) \phi(t-s) ds$ with $\| \partial_t \rho \|_{L^1_s} \lesssim 1$, hence again applying the Besov embedding in Remark \ref{rmk:besov embedding} we obtain
\begin{align*}
\Big| \int_\mathbb{R} \big\langle &\psi(t,x), \partial_t \phi(t,x-y,y) u(t,x-y) v(t,y) \big\rangle_{L^2_{x,y}} dt \Big|\\
&\leqslant \| \psi \|_{L^{p'}_t L^2_x} \int_\mathbb{R} |\partial_s \rho(s)| \Big\| \int_{\mathbb{R}^n} \phi(t-s,x-y,y) u(t,x-y) v(t,y) dy \Big\|_{L^p_t L^2_x} ds \\
&\lesssim S \| \psi \|_{V^{p'}} \|u\|_{L^p_t L^2_x} \|v\|_{L^\infty_t L^2_x} \lesssim S \| \psi \|_{V^{p'}} \|u\|_{U^p} \|v\|_{U^p}
\end{align*}
where we used the temporal Fourier support assumptions on $\psi$ and $u$.
On the other hand, to bound the second term in \eref{eqn:thm stab with conv:decomp}, we first observe that it is enough to consider the case where $u$ and $v$ are $U^p$ atoms with partitions $\tau$ and $\tau'$. Let $(t_j)_{j=1}^N = \tau \cup \tau'$. Computing the integral in time, we deduce that
\begin{align*} &\int_\mathbb{R} \big\langle \partial_t \big( \psi(t,x) \overline{\phi}(t,x-y,y)\big), u(t,x-y) v(t,y) \big\rangle_{L^2_{x,y}} dt \\
&= \sum_{j=1}^{N-1} \big\langle \psi(t_{j+1},x) \overline{\phi}(t_{j+1},x-y,y) - \psi(t_{j},x) \overline{\phi}(t_{j},x-y,y) , u(t_j,x-y) v(t_j,y) \big\rangle_{L^2_{x,y}} \\
&\quad \quad \quad - \big\langle \psi(t_{N}) , \mc{T}_\phi[u, v](t_N) \big\rangle_{L^2_{x}} \\
&= \sum_{j=1}^{N-1} \big\langle \psi(t_{j+1},x) \big(\overline{\phi}(t_{j+1},x-y,y) - \overline{\phi}(t_{j},x-y,y)\big) , u(t_j,x-y) v(t_j,y) \big\rangle_{L^2_{x,y}} \\
&\quad \quad \quad + \sum_{j=1}^{N-1} \big\langle \psi(t_{j+1}) - \psi(t_{j}), \mc{T}_\phi[u, v](t_j) \big\rangle_{L^2_{x,y}} - \big\langle \psi(t_{N}) , \mc{T}_\phi[u, v](t_N) \big\rangle_{L^2_{x}}.
\end{align*}
Applying H\"{o}lder's inequality and the fixed time convolution bound, we have
\begin{align*}
\sum_{j=1}^{N-1} \big| \big\langle & \psi(t_{j+1}) - \psi(t_{j}), \mc{T}_\phi[u, v](t_j) \big\rangle_{L^2_{x,y}}\big| + \big| \big\langle \psi(t_{N}) , \mc{T}_\phi[u, v](t_N) \big\rangle_{L^2_{x}} \big|\\
&\lesssim S \Big( \sum_{j=1}^{N-1} \| \psi(t_{j+1}) - \psi(t_j) \|_{L^2}^{p'} \Big)^{\frac{1}{p'}} \Big( \sum_{j=1}^{N-1} \| u(t_j)\|_{L^2}^p \| v(t_j) \|_{L^2}^p \Big)^\frac{1}{p} \\
&\quad \quad + S \| \psi \|_{L^\infty_t L^2_x} \| u \|_{L^\infty_t L^2_x} \| v \|_{L^\infty_t L^2_x} \\
&\lesssim S \| \psi \|_{V^{p'}}
\end{align*}
where we used the fact that $u$ and $v$ are $U^p$ atoms and $(t_j)_{j=1}^{N} = \tau \cup \tau'$. Consequently, it only remains to prove that
\begin{equation}\label{eqn:thm stab with conv:final Up bound}
\begin{split}
\sum_{j=1}^{N-1} \big| \big\langle \psi(t_{j+1},x) \big(\overline{\phi}(t_{j+1},x-y,y) - \overline{\phi}(t_{j},x-y,y)\big)& , u(t_j,x-y) v(t_j,y) \big\rangle_{L^2_{x,y}}\big|\\
&\lesssim S \| \psi \|_{V^{p'}}.
\end{split}
\end{equation}
This follows by adapting the proof of the $V^p$ case above, namely, we first use the temporal Fourier support assumption to obtain the identity
\begin{align*}
&\psi(t_{j+1}, x)\big(\overline{\phi}(t_{j+1},z,y) - \overline{\phi}(t_j, z, y) \big) \\
={}& \int_\mathbb{R} \rho(s) \big(\overline{\phi}(t_{j+1}-s,z,y) - \overline{\phi}(t_j-s, z, y) \big) \big( \psi(t_{j+1}, x) - \psi(t_{j+1}-s, x) \big) ds\\
={}& \int_\mathbb{R} \int_\mathbb{R} \rho(s) \big( \rho(t_{j+1}-s') - \rho(t_j-s')\big) \overline{\phi}(s'-s,z,y) \big( \psi(t_{j+1}, x) - \psi(t_{j+1}-s, x)\big) ds\, ds'
\end{align*}
and hence
\begin{align*}
\big| \big\langle & \psi(t_{j+1},x) \big(\overline{\phi}(t_{j+1},x-y,y) - \overline{\phi}(t_{j},x-y,y)\big) , u(t_j,x-y) v(t_j,y) \big\rangle_{L^2_{x,y}}\big| \\
&\lesssim S \| u(t_j) \|_{L^2} \| v(t_j) \|_{L^2} \int_\mathbb{R} |\rho(s)| m_j \| \psi(t_j) - \psi(t_j - s) \|_{L^2} ds.
\end{align*}
Summing up, applying H\"older's inequality together with Lemma \ref{lem:key ineq for stab lem}, we then deduce \eref{eqn:thm stab with conv:final Up bound}.
\end{proof}
\subsection{Adapted $U^p$ and $V^p$}\label{subsec:adapted}
Given a phase $\Phi: \mathbb{R}^n \to \mathbb{R}$ (measurable and of moderate growth), we define the adapted function spaces $U^p_\Phi$ and $V^p_\Phi$ as
$$ U^p_\Phi = \big\{ u \, \big| \,\,e^{i t \Phi(-i\nabla)} u(t) \in U^p\,\big\}, \quad \quad V^p_\Phi = \big\{ v \, \big| \, \, e^{ i t \Phi(-i\nabla)} v(t) \in V^p \big\},$$
as in \cite{Koch2005,Hadac2009}.
As in the case of $U^p$ and $V^p$, elements of $U^p_\Phi$ and $V^p_\Phi$ are right continuous, and approach zero as $t\to -\infty$. With the norms
\[
\|u\|_{U^p_\Phi}=\|t \mapsto e^{i t \Phi(-i\nabla)}u(t)\|_{U^p}, \text{ resp. }\|v\|_{V^p_\Phi}=\|t \mapsto e^{i t \Phi(-i\nabla)}v(t)\|_{V^p}
\]
these spaces $U^p_\Phi$ and $V^p_\Phi$ are Banach spaces.
They are constructed to contain perturbations of solutions to the linear PDE
$$ -i \partial_t u + \Phi(-i\nabla) u = 0.$$
In particular, for any $T \in \mathbb{R}$ and $f \in L^2_x$, we have $\mathbbold{1}_{[T, \infty)}(t) e^{-i t \Phi(-i\nabla)} f \in U^p_\Phi$ (and $V^p_\Phi$). Note that the cutoff $\mathbbold{1}_{[T, \infty)}$ is essential here due to the normalisation at $t=-\infty$ (of course one could also choose a smooth cutoff $\rho(t) \in C^\infty$ instead).
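As a simple worked example of these definitions, with the atomic description of $U^p$ used earlier we have
$$ \big\| \mathbbold{1}_{[T, \infty)}(t) e^{-i t \Phi(-i\nabla)} f \big\|_{U^p_\Phi} = \big\| \mathbbold{1}_{[T, \infty)}(t) f \big\|_{U^p} \leqslant \| f \|_{L^2_x}, $$
since conjugating by the flow gives $e^{i t \Phi(-i\nabla)} \big( \mathbbold{1}_{[T, \infty)}(t) e^{-i t \Phi(-i\nabla)} f \big) = \mathbbold{1}_{[T, \infty)}(t) f$, which (after normalising $f$) is a single-step $U^p$ atom.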
The results obtained above for the $U^p$ and $V^p$ spaces can all be translated to the setting of the adapted function spaces $U^p_{\Phi}$ and $V^p_{\Phi}$. For instance, left and right limits always exist in $U^p_\Phi$ and $V^p_\Phi$; an application of Theorem \ref{thm:vu-emb} implies that for $p<q$ we have the embedding $V^p_{\Phi} \subset U^q_{\Phi}$; and Theorem \ref{thm:besov embedding} gives the equivalence
\begin{equation}\label{eqn:adapted Up vs Xsb}
\| u \|_{U^p_{\Phi}} \approx \| u \|_{V^p_{\Phi}} \approx d^\frac{1}{p} \| u \|_{L^p_t L^2_x}
\end{equation}
for all $u \in L^p_t L^2_x$ with $\supp \widetilde{u} \subset \{ (\tau, \xi) \in \mathbb{R}^{1+n}|\, | \tau + \Phi(\xi)| \approx d\}$. In particular, we have $u \in U^p_{\Phi}$. Similarly, Theorem \ref{thm:dual pairing} and Theorem \ref{thm:characteristion of Up} imply that
$$ \| u \|_{U^p_{\Phi}} = \sup_{ \substack{ -i\partial_t v + \Phi(-i\nabla) v \in L^1_t L^2_x \\ |v|_{V^q_{\Phi}}\leqslant 1}} \Big| \int_\mathbb{R} \lr{ - i \partial_t v + \Phi(-i\nabla)v, u}_{L^2} dt \Big| $$
and if $ -i \partial_t \psi + \Phi(-i\nabla) \psi = F$, then
$$ \| \mathbbold{1}_{[0, \infty)}(t) \psi \|_{U^p_{\Phi}} \approx \| \psi(0) \|_{L^2_x} + \sup_{ \substack{ v \in L^1_t L^2_x \\ |v|_{V^q_{\Phi}}\leqslant 1}} \Big| \int_0^\infty \lr{ v, F}_{L^2} dt \Big| $$
in the sense that if the right-hand side is finite, and $u$ is right continuous with $u(t) \to 0$ as $t \to -\infty$, then $u, \mathbbold{1}_{[0, \infty)}(t)\psi \in U^p_\Phi$. Clearly, in view of Remark \ref{rem:compact support}, for reasonable phases $\Phi$ we can take the sup over $v\in C^\infty_0$ with $|v|_{V^q_\Phi} \leqslant 1$ instead. These properties show why the adapted function spaces are well adapted to studying the PDE $ - i \partial_t \psi + \Phi(-i\nabla) \psi = F$. Further properties of the adapted function spaces are also known, see for instance the interpolation estimates in \cite{Hadac2009, Koch2005, Koch2016}, and the vector valued transference type arguments in \cite{Candy2017b, Candy2016}.
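To illustrate how the second identity is typically applied (a standard energy type inequality which follows directly from it): if $-i \partial_t \psi + \Phi(-i\nabla) \psi = F$ with $F \in L^1_t L^2_x$, then since any $v \in L^1_t L^2_x$ with $|v|_{V^q_\Phi} \leqslant 1$ satisfies $\| v \|_{L^\infty_t L^2_x} \leqslant 1$ (the $V^q_\Phi$ bound gives a limit at $t = -\infty$, which must vanish as $v \in L^1_t L^2_x$), we obtain
$$ \| \mathbbold{1}_{[0, \infty)}(t) \psi \|_{U^p_{\Phi}} \lesssim \| \psi(0) \|_{L^2_x} + \sup_{ \substack{ v \in L^1_t L^2_x \\ |v|_{V^q_{\Phi}}\leqslant 1}} \Big| \int_0^\infty \lr{ v, F}_{L^2} dt \Big| \leqslant \| \psi(0) \|_{L^2_x} + \| F \|_{L^1_t L^2_x}. $$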
We now give two applications of the product estimates in Subsection \ref{subsec:alg}. The first is an application of Lemma \ref{lem:stab} to remove the solution operators $e^{-it \Phi(-i\nabla)}$ in certain high modulation regimes.
\begin{proposition} Let $1<p<\infty$, $\Omega \subset \mathbb{R}^n$ measurable, and $\Phi: \Omega \to \mathbb{R}$ such that $|\Phi(\xi)| \leqslant 1$ for $\xi \in \Omega$, and assume that $\supp \widetilde{u} \subset \{ |\tau| \geqslant 5\} \times \Omega$. If $u \in V^p$ then
$$ \| u \|_{V^p_{\Phi}} \approx \| u \|_{V^p}. $$
Similarly, if $u \in U^p$ we have
$$\| u \|_{U^p_{\Phi}} \approx \| u \|_{U^p}. $$
\end{proposition}
\begin{proof}
In the $V^p$ case, an application of Plancherel shows that it suffices to prove the inequality
$\| e^{ i t\Phi(\xi)} \widehat{u} \|_{V^p} \lesssim \| \widehat{u} \|_{V^p}$.
The support assumption on $\widehat{u}$ implies that we may write $e^{it \Phi(\xi)} \widehat{u}(t,\xi) = e^{it\Phi(\xi)}\mathbbold{1}_{\Omega}(\xi) \widehat{u}(t,\xi)$. Since $|\Phi(\xi)|\leqslant 1$ for $\xi \in \Omega$, we conclude that $\supp \mc{F}_t[ e^{it \Phi(\xi)} \mathbbold{1}_{\Omega}(\xi)] \subset \{ |\tau| \leqslant 1\}$. Thus an application of Lemma \ref{lem:stab} gives
$$ \| e^{ i t\Phi(\xi)} \widehat{u} \|_{V^p} = \| e^{i t \Phi(\xi)} \mathbbold{1}_\Omega \widehat{u} \|_{V^p} \lesssim \| e^{it\Phi(\xi)} \mathbbold{1}_\Omega \|_{L^\infty_{t,\xi}} \| \widehat{u} \|_{V^p} \lesssim \| \widehat{u} \|_{V^p}$$
as required. An identical argument gives the $U^p$ case.
\end{proof}
The second is a reformulation of Theorem \ref{thm:stab with conv}. Define the spatial bilinear Fourier multiplier
$$ \mc{M}[u,v](x) = \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} m(\xi-\eta, \eta) \widehat{u}(\xi-\eta) \widehat{v}(\eta) e^{i x \cdot \xi} d\eta d \xi.$$
\begin{theorem}\label{thm:general adapted high mod prod}
Let $1<p<\infty$. Let $m:\mathbb{R}^n \times \mathbb{R}^n \to \mathbb{C}$ and $\Phi_j$, $j=0, 1, 2$ be real-valued phases such that
$$| \Phi_0(\xi+\eta) - \Phi_1(\xi) - \Phi_2(\eta)| \leqslant 1$$
for all $(\xi, \eta) \in \supp m$. If $u \in V^p_{\Phi_1}$ with $\supp \widetilde{u} \subset \{ |\tau + \Phi_1(\xi)| \geqslant 4 \}$ and $v \in V^p_{\Phi_2}$, then $\mc{M}[u,v]\in V^p_{\Phi_0}$ and
$$ \| \mc{M}[u,v] \|_{V^p_{\Phi_0}} \lesssim \| m(\xi, \eta)\|_{L^\infty_\xi L^2_\eta + L^\infty_\eta L^2_\xi} \| u \|_{V^p_{\Phi_1}} \| v \|_{V^p_{\Phi_2}}. $$
If in addition $u \in U^p_{\Phi_1}$ and $v \in U^p_{\Phi_2}$, then $\mc{M}[u,v] \in U^p_{\Phi_0}$ and
$$ \| \mc{M}[u,v] \|_{U^p_{\Phi_0}} \lesssim \| m(\xi, \eta)\|_{L^\infty_\xi L^2_\eta + L^\infty_\eta L^2_\xi} \| u \|_{U^p_{\Phi_1}} \| v \|_{U^p_{\Phi_2}}. $$
\end{theorem}
\begin{proof}
We begin by observing that $\| \psi \|_{V^p} = \| \widehat{\psi} \|_{V^p}$ and, via Theorem \ref{thm:characteristion of Up}, $\| \psi \|_{U^p} \approx \|\widehat{\psi} \|_{U^p}$. Let $u_1(t,\xi) = e^{ -i t \Phi_1(\xi)} \widehat{u}(t,\xi)$ and $v_2(t,\xi) = e^{-it\Phi_2(\xi)} \widehat{v}(t,\xi)$, then $\supp \mc{F}_t u_1 \subset \{ |\tau| \geqslant 4\}$, and it suffices to prove that
\begin{align*}
\Big\| \int_{\mathbb{R}^n} e^{ i t( \Phi_0(\xi) - \Phi_1(\xi-\eta) - \Phi_2(\eta))}&m(\xi-\eta, \eta) u_1(\xi-\eta) v_2(\eta) d\eta \Big\|_{V^p} \\
& \lesssim \| m \|_{L^\infty_\xi L^2_\eta + L^\infty_\eta L^2_\xi} \| u_1 \|_{V^p} \| v_2 \|_{V^p}.
\end{align*}
But this is a consequence of Theorem \ref{thm:stab with conv} after noting that
$$\int_{\mathbb{R}^n} e^{ i t( \Phi_0(\xi) - \Phi_1(\xi-\eta) - \Phi_2(\eta))}m(\xi-\eta, \eta) u_1(\xi-\eta) v_2(\eta) d\eta = \mc{T}_\partialhi[u_1, v_2]$$
with $\phi(t,\xi, \eta) = e^{ i t( \Phi_0(\xi+\eta) - \Phi_1(\xi) - \Phi_2(\eta))}m(\xi, \eta)$, and $\supp \mc{F}_t \phi \subset \{ |\tau| \leqslant 1\}$. The proof of the $U^p$ bound follows from an identical application of Theorem \ref{thm:stab with conv}.
\end{proof}
Typically the multiplier $m(\xi, \eta) = \mathbbold{1}_{\Omega_0}(\xi + \eta) \mathbbold{1}_{\Omega_1}(\xi) \mathbbold{1}_{\Omega_2}(\eta)$ is a cutoff to some frequency region, in which case we have
$$ \| m \|_{L^\infty_\xi L^2_\eta + L^\infty_\eta L^2_\xi} \lesssim \big(\min\{ |\Omega_0|, |\Omega_1|, |\Omega_2|\}\big)^\frac{1}{2}.$$
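This bound can be checked directly (a short sketch): if, say, $|\Omega_2| = \min\{ |\Omega_0|, |\Omega_1|, |\Omega_2|\}$, then
$$ \| m \|_{L^\infty_\xi L^2_\eta} \leqslant \sup_{\xi \in \mathbb{R}^n} \Big( \int_{\mathbb{R}^n} \mathbbold{1}_{\Omega_2}(\eta) \, d\eta \Big)^\frac{1}{2} = |\Omega_2|^\frac{1}{2}, $$
while if $|\Omega_0|$ is smallest we instead keep the factor $\mathbbold{1}_{\Omega_0}(\xi+\eta)$ and note that for each fixed $\xi$ the set $\{\eta : \xi+\eta \in \Omega_0\}$ has measure $|\Omega_0|$; the case where $|\Omega_1|$ is smallest follows in the same way using the $L^\infty_\eta L^2_\xi$ component of the norm.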
Clearly more involved examples are also possible.
\section{The bilinear restriction estimate in adapted function spaces}\label{sec:adapted bilinear restriction}
Let $\lambda\geqslant 1$ and define
$$ \Lambda_1= \{ |\xi - e_1| < \tfrac{1}{100} \}, \quad \quad \Lambda_2= \{ |\xi \mp \lambda e_2| < \tfrac{1}{100} \lambda \} $$
with $e_1 = (1, 0, \dots, 0)$ and $e_2 = (0, 1, 0, \dots, 0)$. In this section, we give the proof of the following theorem.
\begin{theorem}\label{thm:L2 bilinear restriction}
Let $\frac{1}{n+1} < \frac{1}{b}\leqslant \frac{1}{a} \leqslant \frac{1}{2}$, $\frac{1}{a} + \frac{1}{b} \geqslant \frac{1}{2}$ and $\lambda \geqslant 1$. Assume that $u \in U^a_+$ and $v \in U^b_{\pm}$ with $ \supp \widehat{u} \subset \Lambda_1$ and $\supp \widehat{v} \subset \Lambda_2$. Then
$$ \| u v\|_{L^2_{t,x}} \lesssim \lambda^{(n+1)(\frac{1}{2} - \frac{1}{a})} \| u \|_{U^a_+} \| v \|_{U^b_{\pm}}. $$
\end{theorem}
This is the case $\alpha\approx 1$ of Theorem \ref{thm:bilinear restriction}. In fact, the general case can essentially be reduced to this case.
\begin{remark}[Proof of the general case of Theorem \ref{thm:bilinear restriction}]\label{rmk:general-alpha}
To obtain the $L^2_{t,x}$ estimate stated in Theorem \ref{thm:bilinear restriction}, we need a small angle version of Theorem \ref{thm:L2 bilinear restriction}. More precisely, we need to show that if $\kappa, \kappa' \in \mc{C}_\alpha$ with $\angle(\kappa, \pm \kappa') \approx \alpha$, then provided that $\supp \widehat{u} \subset \{ |\xi| \approx 1, \frac{\xi}{|\xi|} \in \kappa \} $ and $\supp \widehat{v} \subset \{ |\xi| \approx \lambda, \frac{\xi}{|\xi|} \in \kappa' \}$ we have
$$ \| u v \|_{L^2_{t,x}} \lesssim \alpha^{\frac{n-3}{2}} \lambda^{(n-1)(\frac{1}{2} - \frac{1}{a})} \| u \|_{U^a_+} \| v \|_{U^b_\pm}.$$
As in \cite[Section 2.3]{Candy2017b}, after rotating and rescaling, this bound is equivalent to showing that
$$ \| u v \|_{L^2_{t,x}} \lesssim \lambda^{(n-1)(\frac{1}{2} - \frac{1}{a})} \| u \|_{U^a_\Phi} \| v \|_{U^b_{\pm \Phi}}$$
where the phase becomes $\Phi(\xi) = \alpha^{-2} ( \xi_1^2 + \alpha^2 |\xi'|^2)^\frac{1}{2}$, and the functions $u$ and $v$ now have Fourier support in the rectangles $\{ \xi_1 \approx 1, \xi_2 \approx 1, |\xi''| \ll 1\}$ and $\{ \pm \xi_1 \approx \lambda, |\xi'| \ll \lambda \}$. It is easy to check that the phase $\Phi$ behaves essentially the same as $|\xi|$. In particular, an analogue of the wave table construction of Tao, Theorem \ref{thm:wave tables}, holds with $|\nabla|$ replaced with $\Phi$, which together with the argument given below, gives the small angle case of the bilinear $L^2_{t,x}$ estimate. See \cite{Candy2017b} for the details.
\end{remark}
\begin{remark}[The general bilinear restriction estimate]
Theorem \ref{thm:L2 bilinear restriction} is a special case of a bilinear restriction type estimate for general phases. More precisely, suppose that $1\leqslant q, r \leqslant 2$ and $a,b \geqslant 2$ with
$$\frac{1}{q} + \frac{n+1}{2r}< \frac{n+1}{2}, \quad \frac{2}{(n+1)q} < \frac{1}{b} \leqslant \frac{1}{a} \leqslant \frac{1}{2}, \quad \frac{1}{\min\{q,r\}} \leqslant \frac{1}{a} + \frac{1}{b}. $$
Then provided that the phases $\Phi_j$ satisfy suitable curvature and transversality properties, we have
\begin{equation}\label{eqn:gen bilinear restriction}
\| uv \|_{L^q_t L^r_x}
\leqslant \mb{C} \| u \|_{U^a_{\Phi_1}} \| v \|_{U^b_{\Phi_2}}
\end{equation}
where the constant is given by
$$ \mb{C} \approx \mu^{n+1-\frac{n+1}{r} - \frac{2}{q}} \mc{V}_{max}^{\frac{1}{r}-1} \mc{H}_1^{1-\frac{1}{q} - \frac{1}{r}} \Big( \frac{\mc{H}_1}{\mc{H}_2} \Big)^{\frac{1}{q} - \frac{1}{2} + (n+1)(\frac{1}{2} - \frac{1}{a})} \Big( \frac{\mc{V}_{max}}{\mu \mc{H}_1} \Big)^{(1-\frac{1}{r}-\frac{1}{b})_+}$$
with $\mu = \min\{ \text{diam}(\widehat{u}), \text{diam}(\widehat{v})\}$, $\mc{H}_j = \|\nabla^2 \Phi_j\|_{L^\infty}$, $\mc{V}_{max} = \sup |\nabla \Phi_1(\xi) - \nabla \Phi_2(\eta)|$, and $\mc{H}_2 \leqslant \mc{H}_1$. This estimate is essentially sharp, see \cite[Theorem 1.7]{Candy2017b} for a more precise statement. Note that Theorem \ref{thm:bilinear restriction} is a special case of the bilinear restriction estimate \eqref{eqn:gen bilinear restriction}, together with a rescaling argument, see the proof of Theorem 1.10 in \cite{Candy2017b}.
\end{remark}
In the remainder of this section we give the proof of Theorem \ref{thm:L2 bilinear restriction}, assuming only Tao's wave table construction for free waves \cite[Proposition 15.1]{Tao2001b}. More precisely, in Subsection \ref{subsec:wave table}, we introduce some additional notation, and state the wave table construction of Tao \cite{Tao2001b}, as well as a cube averaging lemma. In Subsection \ref{subsec:atomic wave tables}, following closely the arguments in \cite{Candy2017b}, we show how the wave table construction for free waves implies a key localised bilinear estimate for atoms. Finally in Subsection \ref{subsec:induction on scales}, we run the induction on scales argument, and complete the proof of Theorem \ref{thm:L2 bilinear restriction}.
\subsection{Notation and the wave table construction}\label{subsec:wave table}
We use the same notation as in \cite{Candy2017b}.
Given a set $\Omega \subset \mathbb{R}^n$, a vector $h \in \mathbb{R}^n$, and a scalar $c>0$, we let $\Omega + h = \{ x + h \mid x \in \Omega \}$ denote the translation of $\Omega$ by $h$, and define $\diam(\Omega)= \sup_{x, y \in \Omega} |x-y|$ and $\Omega + c = \{ x + y\mid x\in \Omega, \,|y|<c\}$ to be the Minkowski sum of $\Omega$ and the ball $\{|x|< c\}$.
Let $R \geqslant 1$ and $0<\epsilon\leqslant 1$. The constant $R$ denotes the large space-time scale and $0<\epsilon\leqslant 1$ is a small fixed parameter used to control the various error terms that arise. All cubes in this article are oriented parallel to the coordinate axes. Let $Q$ be a cube of side length $R$, and take a subscale $0<r\leqslant R$. We define $\mc{Q}_r(Q)$ to be a collection of disjoint subcubes of width $ 2^{-j_0} R$ which form a cover of $Q$, where $j_0$ is the unique integer such that $2^{-1-j_0} R< r \leqslant 2^{-j_0} R$ (in other words we divide $Q$ up into smaller cubes of equal diameter $2^{-j_0} R$). Thus all cubes in $\mc{Q}_{r}(Q)$ have side lengths $\approx r$, and moreover, if $r \leqslant r'\leqslant R$ and $q \in \mc{Q}_r(Q)$, $q' \in \mc{Q}_{r'}(Q)$ with $q\cap q' \not = \varnothing$, then $q \in \mc{Q}_{r}(q')$. To estimate various error terms which arise, we need to create some separation between cubes. To this end, following Tao, we introduce the following construction. Given $0<\epsilon \ll 1$ and a subscale $0<r\leqslant R$, we let
$$ I^{\epsilon, r}(Q) = \bigcup_{q \in \mc{Q}_r(Q)} (1-\epsilon) q. $$
Note that we have the crucial property that if $q \in \mc{Q}_r(Q)$ and $(t,x) \in I^{\epsilon, r}(Q) \setminus q$, then $ \dist\big( (t,x), q\big) \gtrsim \epsilon r$. Given sequences $(\epsilon_m)$ and $(r_m)$ with $\epsilon_m>0$ and $0<r_m \leqslant R$, we define
$$ X[Q] = \bigcap_{m=1}^M I^{\epsilon_m, r_m}(Q). $$
Thus cubes inside $X[Q]$ are separated at multiple scales. An averaging argument allows us to move from a cube $Q_R$, to the set $X[Q]$ but with a larger cube $Q$.
\begin{lemma}[{\cite[Lemma 6.1]{Tao2001b}}]\label{lem:cube averaging}
Let $R>0$, $ \epsilon = \sum_{j=1}^M \epsilon_j \leqslant 2^{-(n+2)}$, and $r_m \leqslant R$. For every cube $Q_R$ of diameter $R$, there exists a cube $Q \subset 4Q_R$ of diameter $2R$ such that for every $F \in L^2_{t,x}(Q_R)$ we have
$$ \| F \|_{L^2_{t,x}(Q_R)} \leqslant ( 1+ 2^{n+2}\epsilon) \| F \|_{L^2_{t,x}(X[Q])}. $$
\end{lemma}
All functions in the following are vector valued, thus maps $u:\mathbb{R}^{1+n} \to \ell^2_c(\mathbb{Z})$ where $\ell^2_c(\mathbb{Z})$ is the set of complex valued sequences with finitely many non-zero components. The vector valued nature of the waves plays a key role in the induction on scales argument. A function $u \in L^\infty_t L^2_x$ is a $\pm$-\emph{wave} if $u = e^{\mp it |\nabla|} f$ for some (vector valued) $f \in L^2_x$. An \emph{atomic $\pm$-wave} is a function $v \in L^\infty_t L^2_x$ such that $ v = \sum_I \mathbbold{1}_I v_I$ with the intervals $I$ forming a partition of $\mathbb{R}$, and each $v_I$ is a $\pm$-wave (i.e. we take $e^{\pm i t|\nabla|} v \in \mathfrak{S}$). Given an atomic $\pm$-wave $v = \sum_I \mathbbold{1}_I v_I $, we let
$$ \| v \|_{\ell^a L^2} = \Big( \sum_I \| v_I \|_{L^2}^a \Big)^\frac{1}{a}.$$
For a subset $\Omega \subset \mathbb{R}^{1+n}$ we let $\mathbbold{1}_{\Omega}$ denote the indicator function of $\Omega$. Let $\mc{E}$ be a finite collection of subsets of $\mathbb{R}^{1+n}$, and suppose we have a collection of ($\ell^2_c$-valued) functions $(u^{(E)})_{E\in \mc{E}}$. We then define the associated \emph{quilt}
$$ [u^{(\cdot)}](t,x) = \sum_{E \in \mc{E}} \mathbbold{1}_E(t,x) |u^{(E)}(t,x)|.$$
This notation was introduced by Tao \cite{Tao2001b}, and plays a key technical role in localising the product $uv$ into smaller scales.
Following the argument in \cite{Candy2017b}, which adapts the proof of Tao to the setting of atomic waves, our goal is to show that Theorem \ref{thm:L2 bilinear restriction} is a consequence of the following \emph{wave table} construction of Tao.
\begin{theorem}[Wave Tables {\cite[Proposition 15.1]{Tao2001b}}]\label{thm:wave tables}
Fix $n\geqslant 2$. There exists a constant $C>0$ such that, for all $0<\epsilon \ll 1$, $\lambda \geqslant 1$, $R \geqslant 100 \lambda$, all cubes $Q$ of diameter $R$, and all $+$-waves $F$ and $\pm$-waves $G$ with
$$ \supp \widehat{F} \subset \Lambda_1 + \frac{1}{100}, \quad \supp \widehat{G} \subset \Lambda_2 + \frac{1}{100},$$
there exists for each $B \in \mathcal{Q}_{\frac{R}{4}}(Q)$, a $+$-wave $\mc{W}^{(B)}_{1,\epsilon} = \mc{W}_{1,\epsilon}^{(B)}[F; G,Q]$, and a $\pm$-wave $\mc{W}^{(B)}_{2,\epsilon} = \mc{W}^{(B)}_{2,\epsilon}[G; F,Q]$ satisfying
$$ F = \sum_{B \in \mc{Q}_{\frac{R}{4}}(Q)} \mc{W}_{1,\epsilon}^{(B)}, \quad G = \sum_{B \in \mc{Q}_{\frac{R}{4}}(Q)} \mc{W}_{2, \epsilon}^{(B)},$$
the Fourier support condition
$$ \supp \widehat{\mc{W}}^{(B)}_{1,\epsilon} \subset \supp \widehat{F} + R^{-\frac{1}{2}} , \quad \supp \widehat{\mc{W}}^{(B)}_{2,\epsilon} \subset \supp \widehat{G} + (\lambda R)^{-\frac{1}{2}}, $$
the energy estimates
\begin{align*} \Big( \sum_{B \in \mc{Q}_{\frac{R}{4}}(Q)} \| \mc{W}^{(B)}_{1,\epsilon} \|_{L^\infty_t L^2_x}^2 \Big)^\frac{1}{2} &\leqslant (1 + C \epsilon) \| F \|_{L^\infty_t L^2_x},\\
\Big( \sum_{B \in \mc{Q}_{\frac{R}{4}}(Q)} \| \mc{W}_{2,\epsilon}^{(B)} \|_{L^\infty_t L^2_x}^2 \Big)^\frac{1}{2} &\leqslant (1 + C \epsilon) \| G \|_{L^\infty_t L^2_x},
\end{align*}
and the bilinear estimates
$$ \big\| \big( |F| - [\mc{W}^{(\cdot)}_{1,\epsilon}]\big) G \big\|_{L^2_{t,x}(I^{\epsilon, \frac{R}{4}}(Q))} \leqslant C \epsilon^{-C} R^{-\frac{n-1}{4}} \| F \|_{L^\infty_t L^2_x} \| G \|_{L^\infty_t L^2_x},$$
and
$$\big\| F \big( |G| - [\mc{W}^{(\cdot)}_{2,\epsilon}]\big) \big\|_{L^2_{t,x}(I^{\epsilon, \frac{R}{4}}(Q))} \leqslant C \epsilon^{-C} \Big( \frac{R}{\lambda} \Big)^{-\frac{n-1}{4}} \| F \|_{L^\infty_t L^2_x} \| G \|_{L^\infty_t L^2_x}. $$
\end{theorem}
\begin{remark} The notation used in Theorem \ref{thm:wave tables} differs somewhat from that used in \cite{Tao2001b}. For a proof in the general phase case, see \cite[Theorem 9.3]{Candy2017b}. It is important to note that Theorem \ref{thm:wave tables} is purely a statement about free waves, and does not involve any atomic structure.
\end{remark}
\begin{remark}
The construction of the wave table $\mc{W}_{1,\epsilon}$ relies on a wave packet decomposition of $F$. Roughly speaking, we take
$$ \mc{W}^{(B)}_{1,\epsilon} = \sum_{T} c_T F_T$$
where $F_T$ is a wave packet concentrated in the tube $T$, and the (real-valued) coefficients $c_T = c_T(G,B)$ are given by
$$ c_T \approx \Big( \frac{\| G \|_{L^2_{t,x}(B \cap T)}}{ \| G \|_{L^2_{t,x}(Q \cap T)}} \Big)^2.$$
Thus $\mc{W}^{(B)}_{1,\epsilon}$ contains the wave packets $F_T$ of $F$ such that $G|_T$ is concentrated on the smaller cube $B$. Since $\mc{W}^{(B)}_{1,\epsilon}$ is a sum of wave packets of $F$, the support conclusion and the fact that it is a $+$-wave essentially follow directly. Similarly, ignoring the constant and using the (almost) orthogonality of the wave packet decomposition, we have
\begin{align*} \sum_{B} \| \mc{W}^{(B)}_{1,\epsilon} \|_{L^\infty_t L^2_x}^2
\lesssim \sum_T \sum_B \| F_T \|_{L^\infty_t L^2_x}^2 \frac{\| G \|_{L^2_{t,x}(B \cap T)}^2}{ \| G \|_{L^2_{t,x}(Q \cap T)}^2}
&\lesssim \sum_T \| F_T \|_{L^\infty_t L^2_x}^2 \lesssim \| F \|_{L^\infty_t L^2_x}^2.
\end{align*}
Improving the constant here to $1+C \epsilon$ requires an improved wave packet decomposition introduced by Tao. Finally, the bilinear estimate exploits the fact that, on the cube $B$, $|F| - [\mc{W}^{(\cdot)}_{1,\epsilon}]$ only contains wave packets such that $G|_T$ is \emph{not} concentrated on $B$. This is essentially a non-pigeonholed version of the argument of Wolff \cite{Wolff2001}.
\end{remark}
\subsection{From wave tables to a bilinear estimate for atoms}\label{subsec:atomic wave tables}
In this section, we give the proof of the following consequence of Theorem \ref{thm:wave tables}.
\begin{theorem}\label{thm-main bilinear estimate Up case}
Let $\frac{1}{n+1}<\frac{1}{b}\leqslant \frac{1}{a} \leqslant \frac{1}{2}$, $0<\epsilon \ll 1$, and $Q_R$ be a cube of diameter $R \geqslant 100 \lambda$. Then for any atomic $+$-wave $u = \sum_{I \in \mc{I}} \mathbbold{1}_I u_I$, and any atomic $\pm$-wave $v = \sum_{J \in \mc{J}} \mathbbold{1}_J v_J$ with
$$\supp \widehat{u} \subset \Lambda_1 + \frac{1}{100}, \qquad \supp \widehat{v} \subset \Lambda_2 + \frac{1}{100}$$
there exists a cube $Q$ of diameter $2R$ such that for each $I \in \mc{I}$ and $J \in \mc{J}$ we have a decomposition
$$ u_I = \sum_{B \in \mc{Q}_{\frac{2R}{4^M}}(Q)} u_I^{(B)}, \qquad v_J = \sum_{B' \in \mc{Q}_{\frac{R}{2}}(Q)} v_J^{(B')}$$
where $M\in \mathbb{N}$ with $4^{M-1} \leqslant \lambda < 4^M $, and $u^{(B)}= \sum_{I \in \mc{I}} \mathbbold{1}_I u_I^{(B)}$ is an atomic $+$-wave, $v^{(B')}=\sum_{J \in \mc{J}} \mathbbold{1}_J v_J^{(B')}$ is an atomic $\pm$-wave, with the support properties
$$ \supp \widehat{u}^{(B)} \subset \supp \widehat{u} + 2 \big( \frac{2 R}{\lambda} \big)^{-\frac{1}{2}}, \qquad \supp \widehat{v}^{(B')} \subset \supp \widehat{v} + 2 \big( \frac{2R}{\lambda} \big)^{-\frac{1}{2}}.$$
Moreover, for any $a_0, b_0 \geqslant 2$ we have the energy bounds
$$ \big(\sum_{B \in \mc{Q}_{\frac{2R}{4^M}}(Q)} \|u^{(B)}\|_{\ell^{a_0} L^2_x}^{a_0} \big)^\frac{1}{a_0} \leqslant (1 +C\epsilon) \| u \|_{\ell^{a_0} L^2_x}$$
$$ \big(\sum_{B' \in \mc{Q}_{\frac{R}{2}}(Q)} \|v^{(B')}\|_{\ell^{b_0} L^2_x}^{b_0} \big)^\frac{1}{b_0} \leqslant (1 + C \epsilon) \| v \|_{\ell^{b_0} L^2_x}$$
and the bilinear estimate
\begin{align*} \| u v \|_{L^2_{t,x}(Q_R)} \leqslant (1+ &C\epsilon) \big\| \big[u^{(\cdot)}\big] \big[v^{(\cdot)}\big] \big\|_{L^2_{t,x}(Q)} \\
&+ C \epsilon^{-C} \lambda^{(n+1)(\frac{1}{2} - \frac{1}{a})} \big( \frac{R}{\lambda} \big)^{\frac{n+1}{2}(\frac{1}{2} - \frac{1}{b})-\frac{n-1}{4}} \|u \|_{\ell^a L^2_x} \| v \|_{\ell^b L^2_x}
\end{align*}
where the constant $C$ depends only on the dimension $n$, and the exponents $a,b$.
\end{theorem}
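\begin{remark}
As a reading aid (this observation is implicit in the argument below), the hypothesis $\frac{1}{n+1}<\frac{1}{b}$ is precisely what makes the second term in the bilinear estimate above a gain: the exponent of $\frac{R}{\lambda}$ satisfies
$$ \frac{n+1}{2}\Big( \frac{1}{2} - \frac{1}{b} \Big) - \frac{n-1}{4} < 0 \iff \frac{1}{b} > \frac{1}{2} - \frac{n-1}{2(n+1)} = \frac{1}{n+1},$$
so the error term decays as $\frac{R}{\lambda} \to \infty$; this decay is what drives the induction on scales in Subsection \ref{subsec:induction on scales}.
\end{remark}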
\begin{proof}
Let $C_0$ denote the constant appearing in Theorem \ref{thm:wave tables}. An application of Lemma \ref{lem:cube averaging} implies that there exists a cube $Q$ of diameter $2R$ such that
$$ \| uv \|_{L^q_t L^r_x(Q_R)} \leqslant ( 1 + C \epsilon) \| uv \|_{L^q_t L^r_x(X[Q])}$$
where we take
$$ X[Q] = \bigcap_{m=1,\dots, M} I^{\epsilon_m, 4^{-m}2R}(Q), \qquad \epsilon_m = 4^{\delta(m-M)} \epsilon$$
and $\delta>0$ is some fixed constant to be chosen later (which will depend only on the dimension $n$, the exponent $b$, and the constant $C_0$). Let $V=(v_J)_{J\in \mc{J}}$; perhaps after relabeling, we have $V: \mathbb{R}^{1+n} \to \ell^2_c(\mathbb{Z})$. In particular, $V$ is a $\pm$-wave such that $|v|\leqslant |V|$ and $\| V \|_{L^\infty_t L^2_x} = \| v \|_{\ell^2 L^2}$. We now repeatedly apply the wave table construction in Theorem \ref{thm:wave tables} with $G=V$. More precisely, given $B_1 \in \mc{Q}_{\frac{R}{2}}(Q)$ we let
$$ u^{(B_1)}_{I, 1} = \mc{W}^{(B_1)}_{1, \epsilon_1}(u_I; V, Q)$$
and assuming we have constructed $u_{I, m}^{(B_{m})}$ with $B_{m} \in \mc{Q}_{\frac{2R}{4^{m}}}(Q)$, we define for $B_{m+1} \in \mc{Q}_{\frac{2R}{4^{m+1}}}(B_m)$
$$ u_{I, m+1}^{(B_{m+1})} = \mc{W}^{(B_{m+1})}_{1, \epsilon_{m+1}}\big( u^{(B_{m})}_{I, m}; V, B_{m}\big)$$
(thus we apply Theorem \ref{thm:wave tables} with $F=u^{(B_{m})}_{I, m}$, $G=V$, $\epsilon= \epsilon_{m+1}$, and $Q=B_m$). To extend this to the atomic waves, we simply take $u_m^{(B_m)} = \sum_{I \in \mc{I}} \mathbbold{1}_I(t) u_{I, m}^{(B_m)}$.
Finally, for $B \in \mc{Q}_{\frac{2R}{4^M}}(Q)$, we let $ u^{(B)}=u_M^{(B)}$. Clearly $u^{(B)}$ is again an atomic $+$-wave, and from an application of Theorem \ref{thm:wave tables} the Fourier supports satisfy
\begin{align*}
\supp \widehat{u}^{(B)} &\subset \supp \widehat{u}^{(B)}_{M-1} + \big( \frac{2R}{4^{M-1}} \big)^{-\frac{1}{2}} \\
&\subset \supp \widehat{u} + \sum_{m=1}^{M-1} \big( \frac{2R}{4^{m-1}} \big)^{-\frac{1}{2} } \subset \supp \widehat{u} + 2 \big( \frac{ 2R}{\lambda} \big)^{-\frac{1}{2}}.
\end{align*}
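To verify the last inclusion, note that since $4^{M-1} \leqslant \lambda$, even summing the support enlargements over all $M$ steps of the construction gives
$$ \sum_{m=1}^{M} \big( \frac{2R}{4^{m-1}} \big)^{-\frac{1}{2}} = (2R)^{-\frac{1}{2}} \big( 2^{M} - 1 \big) \leqslant 2 \lambda^{\frac{1}{2}} (2R)^{-\frac{1}{2}} = 2 \big( \frac{2R}{\lambda} \big)^{-\frac{1}{2}}.$$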
On the other hand, the energy inequality follows by exchanging the order of summation, using the fact that $a\geqslant 2$, and repeatedly applying the energy estimate in Theorem \ref{thm:wave tables}
\begin{align*}
\bigg( \sum_{B \in \mc{Q}_{\frac{2R}{4^M}}(Q)} &\| u^{(B)} \|_{\ell^a L^2_x}^a \bigg)^\frac{1}{a}\\
&\leqslant \bigg( \sum_{I \in \mc{I}} \bigg( \sum_{B \in \mc{Q}_{\frac{2R}{4^M}}(Q)} \big\| u^{(B)}_{I} \big\|_{L^\infty_t L^2_x}^2 \bigg)^\frac{a}{2}\bigg)^{\frac{1}{a}} \\
&\leqslant (1 + C_0 \epsilon_{M}) \bigg( \sum_{I \in \mc{I}} \bigg( \sum_{B_{M-1} \in \mc{Q}_{\frac{2R}{4^{M-1}}}(Q)} \big\| u^{(B_{M-1})}_{I, M-1} \big\|_{L^\infty_t L^2_x}^2 \bigg)^\frac{a}{2}\bigg)^{\frac{1}{a}} \\
&\leqslant \prod_{m=1}^M ( 1 + C_0 \epsilon_m) \bigg( \sum_{I \in \mc{I}} \| u_{I} \|_{L^\infty_t L^2_x}^a\bigg)^{\frac{1}{a}} \leqslant ( 1 + C \epsilon) \| u \|_{\ell^a L^2}
\end{align*}
where $C$ depends only on $\delta$ and $C_0$. The next step is to decompose $v = \sum_{J \in \mc{J}} \mathbbold{1}_J(t) v_J$. Let $U = ( u_I^{(B)})_{I \in \mc{I}, B \in \mc{Q}_{\frac{2R}{4^M}}}$. Then again, perhaps after relabeling, $U:\mathbb{R}^{1+n} \to \ell^2_c(\mathbb{Z})$, and hence $U$ is a $+$-wave with the pointwise bound $[u^{(\cdot)}] \leqslant |U|$ and the energy bound $\| U \|_{L^\infty_t L^2_x} \leqslant (1 + C \epsilon) \| u \|_{\ell^2 L^2_x}$.
We now decompose each $v_J$ relative to $U$ and the cube $Q$, in other words we apply Theorem \ref{thm:wave tables} and take for every $B' \in \mc{Q}_{\frac{2R}{4}}(Q) $
$$ v^{(B')}_J = \mc{W}^{(B')}_{2, \epsilon}\big( v_J; U, Q\big)$$
and finally define $v^{(B')} = \sum_{J \in \mc{J}} \mathbbold{1}_J(t) v_J^{(B')}$. It is clear that $v^{(B')}$ is an atomic $\pm$-wave and that $v^{(B')}$ satisfies the correct Fourier support conditions. Furthermore, by a similar argument to the $u^{(B)}$ case, the required energy inequality also holds.
We now turn to the proof of the bilinear estimate. After observing that
\begin{align*}
\| uv \|_{L^2_{t,x}(X[Q])} \leqslant \big\| [u^{(\cdot)}] &[v^{(\cdot)}] \big\|_{L^2_{t,x}(X[Q])} \\
&+ \big\| \big( |u| - [u^{(\cdot)}]\big) v \big\|_{L^2_{t,x}(X[Q])} + \big\| [u^{(\cdot)}] \big( |v| - [v^{(\cdot)}]\big) \big\|_{L^2_{t,x}(X[Q])},
\end{align*}
an application of H\"older's inequality implies that it is enough to show that for $\frac{1}{n+1} < \frac{1}{b} \leqslant \frac{1}{2}$, and $ \frac{1}{b} \leqslant \frac{1}{a} \leqslant \frac{1}{2}$ we have
\begin{equation}\label{eqn-thm main bilinear Up est-temp bilinear est}
\begin{split}
\big\| \big( |u| - [u^{(\cdot)}]\big) v \big\|_{L^2_{t,x}(X[Q])} +& \big\| [u^{(\cdot)}] \big( |v| - [v^{(\cdot)}]\big) \big\|_{L^2_{t,x}(X[Q])}\\
&\lesssim \epsilon^{-C} \big( \frac{R}{\lambda} \big)^{\frac{1}{2} - \frac{n+1}{2b}} \lambda^{(n+1)(\frac{1}{2} - \frac{1}{a})} \| u \|_{\ell^a L^2} \| v \|_{\ell^b L^2}.
\end{split}
\end{equation}
We start by estimating the first term. The point is to interpolate between the ``bilinear'' $L^2_{t,x}$ estimate given in Theorem \ref{thm:wave tables} which decays in $R$, and a ``linear'' $L^2_{t,x}$ estimate which can lose powers of $R$, but gains in the summability of the intervals $I$ and $J$. We first observe that by construction, Theorem \ref{thm:wave tables} implies that
\begin{align}
& \big\| \big( \big[ u^{(\cdot)}_{m-1}\big]- \big[ u^{(\cdot)}_{m} \big] \big) v \big\|_{L^2_{t,x}(X[Q])}^2 \notag \\
&\leqslant \sum_{I \in \mc{I}} \sum_{B_{m-1} \in \mc{Q}_{\frac{2R}{4^{m-1}}}(Q)} \big\| \big( |u^{(B_{m-1})}_{I, m-1}| - \big[ \mc{W}^{(\cdot)}_{1, \epsilon_m}(u^{(B_{m-1})}_{I, m-1}; V, Q)\big] \big) V \big\|_{L^2_{t,x}(I^{\epsilon_m, \frac{2R}{4^{m}}}(B_{m-1}))}^2\notag \\
&\leqslant C_0^2 \epsilon_m^{-2C_0} \big( \frac{4^{m-1}}{2R} \big)^{\frac{n-1}{2}} \sum_{I \in \mc{I}}\sum_{B_{m-1} \in \mc{Q}_{\frac{2R}{4^{m-1}}}(Q)} \| u^{(B_{m-1})}_{I, m-1} \|_{L^\infty_t L^2_x}^2 \| V \|_{L^\infty_t L^2_x}^2 \notag \\
&\lesssim \epsilon^{-2C_0} 4^{-2(M-m)(\frac{n-1}{4} - \delta C_0)} \big( \frac{R}{\lambda} \big)^{-\frac{n-1}{2}} \| u \|_{\ell^2 L^2}^2 \| v \|_{\ell^2 L^2}^2 \label{eqn-thm main bilinear Up-bilinear L2 estimate}
\end{align}
where we used the definition of $\epsilon_m$ and $M$. On the other hand, to obtain the linear $L^2_{t,x}$ bound, we start by noting that for any $1\leqslant m \leqslant M$, and $a_0\geqslant2$ we have
\begin{align}
\bigg( \sum_{B \in \mc{Q}_{\frac{2R}{4^m}}(Q) } \big\| u^{(B)}_{m} \big\|_{L^\infty_t L^2_x}^2 \bigg)^\frac{1}{2}
&\lesssim \bigg( \sum_{B \in \mc{Q}_{\frac{2R}{4^m}}(Q) } \big\| u^{(B)}_{m} \big\|_{\ell^{a_0} L^2_x}^2 \bigg)^\frac{1}{2} \notag \\
&\lesssim 4^{m (\frac{n+1}{2} - \frac{n+1}{a_0})} \bigg( \sum_{B \in \mc{Q}_{\frac{2R}{4^m}}(Q) } \big\| u^{(B)}_{m} \big\|_{\ell^{a_0} L^2_x}^{a_0} \bigg)^\frac{1}{a_0} \notag \\
&\lesssim \lambda^{ (n+1)(\frac{1}{2} - \frac{1}{a_0})} \| u \|_{\ell^{a_0} L^2_x} \label{eqn-thm main bilinear U p-L2 linear bound for u}
\end{align}
where we applied the energy inequality for $u_m^{(B)}$. Therefore, an application of H\"older's inequality gives for any $a_0 \geqslant 2$
\begin{align}
\big\| \big( \big[ u^{(\cdot)}_{m-1}\big]& -\big[ u^{(\cdot)}_{m} \big] \big) v \big\|_{L^2_{t,x}(X[Q])}\notag\\
&\lesssim \big( \frac{R}{4^m} \big)^\frac{1}{2} \bigg(\sum_{B_{m-1}} \| u^{(B_{m-1})}_{m-1} v\|_{L^\infty_t L^2_x (B_{m-1})}^2 + \sum_{B_m} \| u^{(B_m)}_m v \|_{L^\infty_t L^2_x(B_m)}^2 \bigg)^\frac{1}{2} \notag \\
&\lesssim \big( \frac{R}{4^m} \big)^\frac{1}{2} \bigg(\sum_{B_{m-1}} \| u^{(B_{m-1})}_{m-1}\|_{L^\infty_t L^2_x}^2 + \sum_{B_m} \| u^{(B_m)}_m \|_{L^\infty_t L^2_x}^2 \bigg)^\frac{1}{2} \| v\|_{L^\infty_t L^2_x} \notag \\
&\lesssim \big( \frac{R}{\lambda} \big)^{\frac{1}{2}} \lambda^{ (n+1)(\frac{1}{2} - \frac{1}{a_0})} 4^{(M-m) \frac{1}{2}} \| u \|_{\ell^{a_0} L^2} \|v \|_{\ell^{\infty} L^2} \label{eqn-thm main bilinear Up-linear L2}
\end{align}
where we used the fact that $v^{(B')}$ has Fourier support contained in a set of diameter $1$. Interpolating between (\ref{eqn-thm main bilinear Up-bilinear L2 estimate}) and (\ref{eqn-thm main bilinear Up-linear L2}) then gives for any $\frac{1}{b}\leqslant \frac{1}{a} \leqslant \frac{1}{2}$,
\begin{align*} \big\| \big( \big[ u^{(\cdot)}_{m-1}\big] &- \big[ u^{(\cdot)}_{m} \big] \big) v \big\|_{L^2_{t,x}(X[Q])}\\
&\lesssim \epsilon^{-C} 4^{-(M-m)\delta^*} \big( \frac{R}{\lambda} \big)^{\frac{1}{2} - \frac{n+1}{2b}} \lambda^{(n+1)(\frac{1}{2} - \frac{1}{a})} \| u \|_{\ell^a L^2_x} \| v \|_{\ell^b L^2_x}
\end{align*}
where $\delta^* = \frac{n+1}{2b} - \frac{1}{2} - 2 \delta C_0 \frac{1}{b}$. Consequently, provided that $\frac{1}{n+1}<\frac{1}{b} \leqslant 1 - \frac{1}{r}$, and we choose $\delta$ sufficiently small depending only on $C_0$, $b$, and $n$, we have $\delta^*>0$. Thus by telescoping the sum over $m$ and letting $ u_0^{(Q)} = u$, we deduce that
\begin{align*} \big\| \big( |u| - [u^{(\cdot)}]\big) v \big\|_{L^2_{t,x}(X[Q])} &\leqslant \mathcal{S}um_{m=1}^M \big\| \big( [ u^{(\cdot)}_{m-1}] - [ u^{(\cdot)}_{m} ] \big) v \big\|_{L^2_{t,x}(X[Q])} \\
&\lesssim \epsilon^{-C} \big( \frac{R}{\lambda} \big)^{\frac{1}{2} - \frac{n+1}{2b}} \lambda^{(n+1)(\frac{1}{2} - \frac{1}{a})} \| u \|_{\ell^a L^2_x} \| v \|_{\ell^b L^2_x}.
\end{align*}
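For the reader's convenience, we record the exponent arithmetic behind the interpolation (a bookkeeping check only): the bilinear bound (\ref{eqn-thm main bilinear Up-bilinear L2 estimate}) and the linear bound (\ref{eqn-thm main bilinear Up-linear L2}) are combined with weight $\frac{2}{b}$ on the former (so that the $\ell^2$ and $\ell^\infty$ norms of $v$ combine to $\ell^b$), and the resulting power of $\frac{R}{\lambda}$ is
$$ \frac{2}{b} \Big( -\frac{n-1}{4} \Big) + \Big( 1 - \frac{2}{b} \Big) \frac{1}{2} = \frac{1}{2} - \frac{1}{b} - \frac{n-1}{2b} = \frac{1}{2} - \frac{n+1}{2b},$$
while the factors $4^{-(M-m)(\frac{n-1}{4}-\delta C_0)}$ and $4^{(M-m)\frac{1}{2}}$ combine in the same way to give $4^{-(M-m)\delta^*}$, and the powers of $\lambda$ give $(n+1)(\frac{1}{2}-\frac{1}{a})$ once $a_0$ is chosen so that the $\ell^2$ and $\ell^{a_0}$ norms of $u$ combine to $\ell^a$.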
It only remains to estimate the second term on the left hand side of \eref{eqn-thm main bilinear Up est-temp bilinear est}. To this end, applying the definition of $v^{(B')}$ together with Theorem \ref{thm:wave tables}, we have
\begin{align} \big\| [u^{(\cdot)}] \big( |v| - [v^{(\cdot)}] \big)\big\|_{L^2_{t,x} (X[Q])}^2 &\leqslant\sum_{J \in \mc{J}} \big\| U \big( |v_J| - [v^{(\cdot)}_J] \big)\big\|_{L^2_{t,x} (I^{\epsilon_1, \frac{R}{2}}(Q))}^2 \notag \\
&\lesssim \epsilon^{-2C_n} \big( \frac{R}{\lambda} \big)^{-\frac{n-1}{2}} \| u \|_{\ell^2 L^2}^2 \| v \|_{\ell^2 L^2}^2. \label{eqn:thm main bilinear Up:bilinear v}
\end{align}
On the other hand an application of H\"older's inequality together with the energy estimates and (\ref{eqn-thm main bilinear U p-L2 linear bound for u}) gives for any $a_0 \geqslant 2$
\begin{align} \big\| [u^{(\cdot)}] &\big( |v| - [v^{(\cdot)}] \big)\big\|_{L^2_{t,x} (X[Q])}\notag \\
&\lesssim \bigg( \sum_{B \in \mc{Q}_{\frac{2R}{4^M}}} \big\| u^{(B)} \big\|_{L^2_{t,x}(B)}^2 \bigg)^\frac{1}{2} \sup_{J \in \mc{J}} \big( \|v_J \|_{L^\infty_t L^2_x} + \| [v^{(\cdot)}_J] \|_{L^\infty_t L^2_x} \big) \notag \\
&\lesssim \big( \frac{R}{\lambda} \big)^\frac{1}{2} \lambda^{ (n+1)(\frac{1}{2} - \frac{1}{a_0})} \| u \|_{\ell^{a_0} L^2} \| v \|_{\ell^{\infty} L^2}.
\label{eqn-thm main bilinear Up est-v linear L2 bound}
\end{align}
Therefore, interpolating between \eref{eqn:thm main bilinear Up:bilinear v} and \eref{eqn-thm main bilinear Up est-v linear L2 bound}, we deduce that for $\frac{1}{b}\leqslant \frac{1}{a} \leqslant \frac{1}{2}$, we have
$$ \big\| [u^{(\cdot)}] \big( |v| - [v^{(\cdot)}] \big)\big\|_{L^2_{t,x}(X[Q])} \lesssim \epsilon^{-C} \big( \frac{R}{\lambda} \big)^{\frac{1}{2} - \frac{n+1}{2b}} \lambda^{(n+1)(\frac{1}{2} - \frac{1}{a})} \| u \|_{\ell^a L^2} \| v \|_{\ell^b L^2}$$
and consequently (\ref{eqn-thm main bilinear Up est-temp bilinear est}) follows.
\end{proof}
\mathcal{S}ubsection{The Induction on Scales Argument}\label{subsec:induction on scales}
Here we apply Theorem \ref{thm-main bilinear estimate Up case} and give the proof of Theorem \ref{thm:L2 bilinear restriction}. We start with the following definition.
\begin{definition}
Given $R>0$, we let $A(R)>0$ denote the best constant such that for all cubes $Q$ of diameter $R\geqslant 100 \lambda$, and all atomic $+$-waves $u$, and atomic $\partialm$-waves $v$ such that
$$\supp \widehat{u} \subset \Lambda_1 + 4 \big( \frac{R}{\lambda} \big)^{-\frac{1}{2}}, \qquad \qquad \supp \widehat{v} \subset \Lambda_2 + 4 \big( \frac{R}{\lambda} \big)^{-\frac{1}{2}}$$
we have
$$ \| u v\|_{L^2_{t,x}(Q)} \leqslant A(R) \| u \|_{\ell^a L^2_x} \| v \|_{\ell^b L^2}. $$
\end{definition}
It is clear that $A(R) \lesssim R^\frac{1}{2}$. Our goal is to show that in fact we have $A(R) \lesssim \lambda^{(n+1)(\frac{1}{2}- \frac{1}{a})}$ for all $R \geqslant 100 \lambda$. This is a consequence of an induction on scales argument, using the following bounds.
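One way to obtain the trivial bound is, for instance, to combine H\"older's inequality in time on $Q$ with Bernstein's inequality for $v$; assuming, as the support conditions above suggest, that $\Lambda_2$ is contained in a set of diameter $O(1)$, so that $\| v(t) \|_{L^\infty_x} \lesssim \| v(t) \|_{L^2_x}$, this gives the sketch
$$ \| u v \|_{L^2_{t,x}(Q)} \leqslant R^{\frac{1}{2}} \sup_t \| u(t) \|_{L^2_x} \| v(t) \|_{L^\infty_x} \lesssim R^{\frac{1}{2}} \| u \|_{L^\infty_t L^2_x} \| v \|_{L^\infty_t L^2_x} \leqslant R^{\frac{1}{2}} \| u \|_{\ell^a L^2_x} \| v \|_{\ell^b L^2_x}.$$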
\begin{proposition}[Induction Bounds]\label{prop:induction bounds}
There exists $C>0$ such that for all $R \geqslant 100 \lambda$ and $0<\epsilon < \frac{1}{100}$ we have
\begin{equation}\label{eqn:init step}
A(R) \leqslant C \lambda^{(n+1)(\frac{1}{2}- \frac{1}{a})} \big( \frac{R}{\lambda} \big)^{10}
\end{equation}
and
\begin{equation}\label{eqn:induc step}
A(2R) \leqslant ( 1 + C \epsilon) A(R) + C \epsilon^{-C} \lambda^{(n+1)(\frac{1}{2}- \frac{1}{a})} \big( \frac{R}{\lambda } \big)^{(\frac{1}{2} - \frac{1}{b}) \frac{n+1}{2} - \frac{n-1}{4} }.
\end{equation}
\end{proposition}
\begin{proof}
Let $Q$ be a cube of diameter $2R$, and let $u = \sum_{I \in \mc{I}} \mathbbold{1}_I(t) u_I$ be an atomic $+$-wave, and $v = \sum_{J \in \mc{J}} \mathbbold{1}_J(t) v_J$ be an atomic $\pm$-wave satisfying the support conditions
$$ \supp \widehat{u} \subset \Lambda_1 + 4 \big( \frac{2R}{\lambda} \big)^{-\frac{1}{2}}, \qquad \supp \widehat{v} \subset \Lambda_2 + 4 \big( \frac{2R}{\lambda} \big)^{-\frac{1}{2}} .$$
An application of Theorem \ref{thm-main bilinear estimate Up case} gives a cube $Q'$ of diameter $4R$, and atomic waves $(u^{(B)})_{B \in \mc{Q}_{\frac{R}{4^{M-1}}}(Q')}$, $(v^{(B')})_{B'\in \mc{Q}_{R}(Q')}$ such that
\begin{equation}\label{eqn-prop induction bound Up version-bounded by quilt}
\begin{split} \| u v\|_{L^2_{t,x}(Q)} \leqslant (1 + &C\epsilon) \big\| [u^{(\cdot)}] [v^{(\cdot)}] \big\|_{L^2_{t,x}(Q')} \\
&+ C \epsilon^{-C} \lambda^{(n+1)(\frac{1}{2} - \frac{1}{a})} \big( \frac{R}{\lambda} \big)^{\frac{n+1}{2}(\frac{1}{2} - \frac{1}{b})-\frac{n-1}{4} } \|u \|_{\ell^a L^2} \| v \|_{\ell^b L^2}
\end{split}
\end{equation}
and the support properties
$$ \supp \widehat{u}^{(B)} \subset \supp \widehat{u} + 2 \big( \frac{4R}{\lambda} \big)^{-\frac{1}{2}} \subset \Lambda_1 + 4 \big( \frac{R}{\lambda} \big)^{-\frac{1}{2}}$$
and similarly $\supp \widehat{v}^{(B')} \subset \Lambda_2 + 4 \big(\frac{R}{\lambda}\big)^{-\frac{1}{2}}$.
To prove \eref{eqn:induc step}, we let $B' \in \mc{Q}_{R}(Q')$ and define the atomic $+$-wave $U^{(B')}= \sum_{I\in \mc{I}} \mathbbold{1}_I(t) U^{(B')}_I$ with $U^{(B')}_I = ( u^{(B)}_I )_{B \in \mc{Q}_{\frac{R}{4^{M-1}}}(B')}$. Then for every $B'\in \mc{Q}_{R}(Q')$ we have an atomic $+$-wave $U^{(B')}$ and an atomic $\pm$-wave $v^{(B')}$ satisfying the correct support assumptions to apply the definition of $A(R)$. Thus
\begin{align*}
\big\| [ u^{(\cdot)} ] [ v^{(\cdot)}] \big\|_{L^2_{t,x}(Q')} &\leqslant \bigg( \sum_{B' \in \mc{Q}_R(Q')} \| U^{(B')} v^{(B')} \|_{L^2_{t,x}}^2 \bigg)^{\frac{1}{2}} \\
&\leqslant A(R) \big( \sum_{B' \in \mc{Q}_R(Q')} \|U^{(B')}\|_{\ell^a L^2_x}^{a} \big)^\frac{1}{a} \big( \sum_{B' \in \mc{Q}_R(Q')} \|v^{(B')}\|_{\ell^b L^2_x}^{b} \big)^\frac{1}{b} \\
&\leqslant (1+C \epsilon) A(R) \| u \|_{\ell^a L^2_x} \| v \|_{\ell^b L^2_x}
\end{align*}
where the second line used the assumption $\frac{1}{a} + \frac{1}{b} \geqslant \frac{1}{2}$ and the last applied the energy inequalities in Theorem \ref{thm-main bilinear estimate Up case}. Therefore the induction bound \eref{eqn:induc step} follows from an application of (\ref{eqn-prop induction bound Up version-bounded by quilt}).
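Concretely, the inequality invoked in the second line can be taken to be the following elementary estimate: for nonnegative sequences $(x_{B'})$, $(y_{B'})$ and exponents with $\frac{1}{a} + \frac{1}{b} \geqslant \frac{1}{2}$,
$$ \Big( \sum_{B'} x_{B'}^2 y_{B'}^2 \Big)^{\frac{1}{2}} \leqslant \Big( \sum_{B'} x_{B'}^{a} \Big)^{\frac{1}{a}} \Big( \sum_{B'} y_{B'}^{b} \Big)^{\frac{1}{b}},$$
which follows from H\"older's inequality in the case $\frac{1}{a}+\frac{1}{b} = \frac{1}{2}$ together with the monotonicity of the $\ell^p$ norms in general, applied here with $x_{B'} = \|U^{(B')}\|_{\ell^a L^2_x}$ and $y_{B'} = \|v^{(B')}\|_{\ell^b L^2_x}$.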
We now turn to the proof of \eref{eqn:init step}. We begin by observing that again using the bound (\ref{eqn-prop induction bound Up version-bounded by quilt}) with $\epsilon\approx 1$ it is enough to prove that for every $\frac{1}{b} \leqslant \frac{1}{a} \leqslant \frac{1}{2}$ we have the quilt bound
\begin{equation}\label{eqn-prop induction bound Up version-initial quilt bound}\big\| [u^{(\cdot)}] [v^{(\cdot)}] \big\|_{L^2_{t,x}(Q')} \lesssim \lambda^{(n+1)(\frac{1}{2} - \frac{1}{a})} \big( \frac{R}{\lambda} \big)^{\frac{1}{2} - \frac{1}{b}} \| u \|_{\ell^a L^2} \| v \|_{\ell^b L^2}.\end{equation}
But this follows by observing that since $[u^{(\cdot)}] = \sum_B \mathbbold{1}_B |u^{(B)}|$ is localised to cubes of diameter $\frac{R}{\lambda}$, an application of H\"older's inequality together with the energy estimates implies that
\begin{align*}
\big\| [u^{(\cdot)}] [v^{(\cdot)}] \big\|_{L^2_{t,x}(Q')} &\lesssim \lambda^{(n+1)(\frac{1}{2}- \frac{1}{a})} \big( \frac{R}{\lambda} \big)^\frac{1}{2} \bigg( \sum_{B} \| u^{(B)} \|_{L^\infty_t L^2_x}^a \bigg)^\frac{1}{a} \sup_{B'} \|v^{(B')}\|_{L^\infty_t L^2_x} \\
&\lesssim \lambda^{(n+1)(\frac{1}{2}- \frac{1}{a})} \big( \frac{R}{\lambda} \big)^\frac{1}{2} \| u \|_{\ell^a L^2_x} \|v \|_{\ell^\infty L^2_x}.
\end{align*}
\end{proof}
We now come to the proof of Theorem \ref{thm:L2 bilinear restriction}.
\begin{proof}[Proof of Theorem \ref{thm:L2 bilinear restriction}]
Let $C$ denote the constant in Proposition \ref{prop:induction bounds}. Let $R= 2^k 100 \lambda$ and $\epsilon_k = 2^{-\delta k}$ with $0<\delta < \frac{1}{C} [\frac{n-1}{4} - (\frac{1}{2} - \frac{1}{b}) \frac{n+1}{2}]$. Then an application of \eref{eqn:induc step} gives
$$ A( 2^{k+1} 100 \lambda) \leqslant ( 1 + C 2^{-k\delta}) A(2^k 100 \lambda) + C \lambda^{(n+1)(\frac{1}{2}- \frac{1}{a})} 2^{ [(\frac{1}{2} - \frac{1}{b}) \frac{n+1}{2} - \frac{n-1}{4} + \delta C] k }. $$
Since both exponents decay in $k$, after $k$ applications, we deduce that
$$ A(2^{k+1} 100 \lambda) \lesssim A( 100 \lambda) + \lambda^{(n+1)(\frac{1}{2}- \frac{1}{a})} \lesssim \lambda^{(n+1)(\frac{1}{2}- \frac{1}{a})}$$
where we used the initial induction bound \eref{eqn:init step}. Hence Theorem \ref{thm:L2 bilinear restriction} follows.
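For completeness, we note that the recursion can be unwound explicitly: writing $c_j := C 2^{-j \delta}$ and $\kappa := \frac{n-1}{4} - \big( \frac{1}{2} - \frac{1}{b} \big) \frac{n+1}{2} - \delta C > 0$, induction gives
$$ A(2^{k+1} 100 \lambda) \leqslant \prod_{j=0}^{k} (1 + c_j) \Big( A(100\lambda) + C \lambda^{(n+1)(\frac{1}{2} - \frac{1}{a})} \sum_{j=0}^{k} 2^{-j \kappa} \Big),$$
and both the product and the geometric sum are bounded uniformly in $k$, since $\sum_{j \geqslant 0} 2^{-j\delta} < \infty$ and $\kappa > 0$.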
\end{proof}
\section*{Acknowledgement}
The authors thank Kenji Nakanishi and Daniel Tataru for
sharing their part in the story of the division problem with us.
Also, the authors thank Daniel Tataru for providing a preliminary version of \cite{Tataru2001}.
Financial support by the
DFG through the CRC 1283 ``Taming uncertainty and profiting from
randomness and low regularity in analysis, stochastics and their
applications'' is acknowledged.
\end{document}
\begin{document}
\title{On a class of stable conditional measures}
\author{Eugen Mihailescu}
\date{}
\maketitle
\begin{abstract}
The dynamics of endomorphisms (smooth non-invertible maps)
presents many differences from that of diffeomorphisms or that of
expanding maps; most methods from those cases do not work if the
map has a basic set of saddle type with self-intersections. In
this paper we study the conditional measures of a certain class of
equilibrium measures, corresponding to a measurable partition
subordinated to local stable manifolds. We show that these
conditional measures are geometric probabilities on the local
stable manifolds, thus answering in particular the questions
related to the stable pointwise Hausdorff and box dimensions.
These stable conditional measures are shown to be absolutely
continuous if and only if the respective basic set is a
non-invertible repellor. \ We also find invariant measures of
maximal stable dimension on folded basic sets. Examples of such
non-reversible systems are also given.
\end{abstract}
\textbf{MSC 2000:} Primary: 37D35, 37B25, 34C45. Secondary: 37D20.
\textbf{Keywords:} Equilibrium measures for hyperbolic
non-invertible maps, stable manifolds, conditional measures,
folded repellors, pointwise dimensions of measures.
\section{Background and outline of the paper.}
In this paper we will study non-invertible smooth (say
$\mathcal{C}^2$) maps on a Riemannian manifold $M$, called \textit{endomorphisms},
which are uniformly hyperbolic on a basic set $\Lambda$. Here by
\textit{basic set} for an endomorphism $f:M \to M$, we understand
a compact topologically-transitive set $\Lambda$, which has a
neighbourhood $U$ such that $\Lambda = \mathop{\cap}\limits_{n \in
\mathbb Z} f^n(U)$.
Considering non-invertible transformations makes sense from the
point of view of applications, since the evolution of a
non-reversible physical system is usually given by a
time-dependent differential equation $\frac{dx(t)}{dt} = F(x(t))$
whose solution, the flow $(f^t)_t$, may not necessarily consist of
diffeomorphisms. However if we look at the ergodic (qualitative)
properties of the associated flow (equilibrium measures, Lyapunov
exponents, conditional measures associated to measurable
partitions), we may replace it with a discrete non-invertible
dynamical system (\cite{ER}). The theory of hyperbolic
diffeomorphisms (Axiom A) has been studied by many authors (see
for example \cite{Bo}, \cite{ER}, \cite{KH}, \cite{Ru-carte}, and
the references therein); also the theory of expanding maps was
studied extensively (see for instance \cite{Ru-exp}), and the fact
that the local inverse iterates are contracting on small balls is
crucial in that case.
However, the
theory of smooth non-invertible maps which have saddle basic sets
is significantly different from the two above-mentioned cases.
Most methods of proof from diffeomorphisms or expanding maps do
not work here due to the complicated \textbf{overlappings and
foldings} that the endomorphism may have in the
basic set $\Lambda$. The unstable manifolds depend in general on the \textbf{choice of a sequence} of
consecutive preimages, not only on the initial point (as in the
case of
diffeomorphisms). So the unstable manifolds do not form a
foliation; instead they may intersect each other both inside and
outside $\Lambda$.
Moreover the local inverse iterates do not necessarily contract on
small balls; instead they will grow exponentially (at least for
some time) in the stable direction. Also, an arbitrary basic set
$\Lambda$ is not necessarily totally invariant for $f$, and there
do not always exist Markov partitions on $\Lambda$. We mention
also that endomorphisms on Lebesgue spaces behave differently than
invertible transformations even from the point of view of
classifications in ergodic theory, see \cite{PW}.
We will work in the sequel with a hyperbolic endomorphism $f$ on
a basic set $\Lambda$; such a set is also called a \textit{folded
basic set} (or a basic set with \textit{self-intersections}). By
\textit{$n$-preimage} of a point $x$ we mean a point $y$ such that
$f^n(y) = x$. By \textit{prehistory} of $x$ we understand a
sequence of consecutive preimages of $x$, belonging to $\Lambda$,
and denoted by $\hat x = (x, x_{-1}, x_{-2}, \ldots)$ where
$f(x_{-n}) = x_{-n+1}, n
>0$, with $x_0 = x$. And by \textit{inverse limit} of $(f, \Lambda)$ we mean the space of
all such prehistories, denoted by $\hat \Lambda$. For more about
these aspects, see \cite{Ro}, \cite{M-DCDS06}. By the definition
of a basic set $\Lambda$, we assume that $f$ is
\textit{topologically transitive} on $\Lambda$ as an endomorphism,
i.e that there exists a point in $\Lambda$ whose iterates are
dense in $\Lambda$.
\textit{Hyperbolicity} is defined for endomorphisms (see
\cite{Ru-carte}) similarly as for diffeomorphisms, with the
crucial difference that now the unstable spaces (and thus the
local unstable manifolds) depend on whole prehistories; so we have
the stable tangent spaces $E^s_x, x \in \Lambda$, the unstable
tangent spaces $E^u_{\hat x}, \hat x \in \hat \Lambda$, the
\textit{local stable manifolds} $W^s_r(x), x \in \Lambda$ and the
\textit{local unstable manifolds} $W^u_r(\hat x), \hat x \in \hat
\Lambda$. As there may be (infinitely) many unstable manifolds
going through a point, we do not have here a well defined holonomy
map between stable manifolds, by contrast to the diffeomorphism
case. For more details on endomorphisms, see \cite{Ru-carte},
\cite{PDL}, \cite{M-DCDS06}, \cite{MU-CJM}, etc.
\begin{defn}\label{stable}
Consider a smooth (say $\mathcal{C}^2$) non-invertible map $f$
which is hyperbolic on the basic set $\Lambda$, such that the
critical set of $f$ does not intersect $\Lambda$. Define the
\textit{stable potential} of $f$ as $\Phi^s(y):= \log |Df_s(y)|, y
\in \Lambda$. By \textit{stable dimension} (at a point $x \in
\Lambda$) we understand the Hausdorff dimension $\delta^s(x):=
HD(W^s_r(x) \cap \Lambda)$. We will also say that $f$ is
\textit{c-hyperbolic} on $\Lambda$ if $f$ is hyperbolic on
$\Lambda$, there are no critical points of $f$ in $\Lambda$ and
$f$ is conformal on the local stable manifolds.
\end{defn}
The relations between thermodynamic formalism and the dynamics of
diffeomorphisms or expanding maps form a rich field (see for
instance \cite{Ba}, \cite{Bo}, \cite{ER}, \cite{LY},
\cite{Ru-exp}, etc.) And in \cite{M-Cam}, \cite{MU-CJM},
\cite{MU-DCDS08}, we studied some aspects of the thermodynamic
formalism for non-invertible smooth maps.
\textbf{Examples} of hyperbolic endomorphisms are numerous, for
instance hyperbolic solenoids and horseshoes with
self-intersections (\cite{Bot}), polynomial maps in higher
dimension hyperbolic on certain basic sets, skew products with
overlaps in their fibers (\cite{MU-DCDS08}), hyperbolic toral
endomorphisms or perturbations of these, etc. $
\square$
In this non-invertible setting, a special role is played by
\textit{constant-to-one endomorphisms}. For such endomorphisms,
we study the family of conditional measures of a certain
equilibrium measure, a family associated to a measurable partition
subordinated to local stable manifolds.
If a \textbf{topological condition} is satisfied, namely if the
number of preimages remaining in $\Lambda$ is constant along
$\Lambda$, we showed in \cite{MU-CJM} the following:
\begin{unthm}[Independence of stable dimension]
If the endomorphism $f$ is c-hyperbolic on the basic set $\Lambda$
(see Definition \ref{stable}) and if the number of $f$-preimages
of any point from $\Lambda$, remaining in $\Lambda$ is constant
and equal to $d$, then the stable dimension $\delta^s(x)$ is equal
to the unique zero $t^s_d$ of the pressure function $t \to
P(t\Phi^s - \log d)$, for any $x \in \Lambda$. The common value of
the stable dimension along $\Lambda$ will be denoted by
$\delta^s$.
\end{unthm}
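We remark that such a zero is necessarily unique: since $f$ is uniformly hyperbolic on $\Lambda$, we have $\int_\Lambda \Phi^s \, d\mu \le \log \gamma < 0$ for every $f$-invariant probability measure $\mu$ on $\Lambda$ (the stable Lyapunov exponent is uniformly negative), hence by the variational principle the pressure function $t \to P(t\Phi^s - \log d)$ is continuous and strictly decreasing, so it has at most one zero.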
In fact if $f$ is \textbf{open} on $\Lambda$, we proved (see
\cite{MU-CJM}, and Proposition 1 of \cite{M-Cam}) the following:
\begin{unpro}[\cite{M-Cam}, \cite{MU-CJM}]
Let $f:M \to M$ be an endomorphism which has a basic set $\Lambda$,
disjoint from the critical set of $f$. Assume that $\Lambda$ is
connected and $f|_\Lambda: \Lambda \to \Lambda$ is open. Then the
cardinality of the set $f^{-1}(x) \cap \Lambda$ is constant, when
$x$ ranges in $\Lambda$.
\end{unpro}
Examples of hyperbolic open endomorphisms on saddle sets are given
in the end of the paper.
\begin{defn}\label{stable-measure}
Let $f$ be an endomorphism which is c-hyperbolic on the basic set $\Lambda$,
such that the number of $f$-preimages of any point from $\Lambda$,
remaining in $\Lambda$, is constant and equal to $d$. Then we call
the equilibrium measure of $\delta^s \cdot \Phi^s$, the
\textit{stable equilibrium measure} of $f$ on $\Lambda$, and
denote it by $\mu_s$.
\end{defn}
We notice that, since the stable foliation is Lipschitz continuous
for endomorphisms (see \cite{MU-CJM}), the potential $\delta^s
\cdot \Phi^s$ is Holder continuous; thus it can be shown by
lifting the measure to the inverse limit $\hat \Lambda$, that
there exists a unique equilibrium measure $\mu_s$ of $\delta^s
\cdot \Phi^s$ (we can apply the results for homeomorphisms from
\cite{KH} on the inverse limit $\hat \Lambda$, in order to get the
uniqueness).
We will show in Theorem \ref{main} that if the number of
$f$-preimages in $\Lambda$ is constant, then the
\textbf{conditional measures} of $\mu_s$ associated to a
measurable partition subordinated to the local stable manifolds,
are \textbf{geometric probabilities} of exponent $\delta^s$. This
will answer then in Corollary \ref{pointwise} the question of the
pointwise Hausdorff dimension and the pointwise box dimension of
the equilibrium measure $\mu_s$ on local stable manifolds (see for
instance \cite{Ba} for definitions). In the constant-to-1
non-invertible case, we show in particular in Corollary
\ref{maximal} that these stable conditional measures are measures
of \textbf{maximal dimension} (in the sense of \cite{BW}) on the
intersections of local stable manifolds with the folded basic set
$\Lambda$.
Our approach will be different both from the case of
diffeomorphisms and from that of expanding maps. In Proposition
\ref{pieces} (which is the main ingredient for the proof of
Theorem \ref{main}), we compare the equilibrium measure on various
different components of the preimage set of a small "cylinder"
around an unstable manifold. We will have to carefully estimate
the equilibrium measure $\mu_s$ on the different pieces of the
iterates of Bowen balls, in order to get good estimates for the
cylinders around local unstable manifolds, $B(W^u_r(\hat x),
\varepsilon)$. This will be done by a process of \textbf{disintegrating}
the measure on the various components of the preimages of borelian
sets, and then by successive re-combinations. Thus we will
reobtain the measure $\mu_s$ on an arbitrary open set, and then
will use the essential uniqueness of the family of conditional
measures of $\mu_s$; for background on conditional measures
associated to measurable partitions on Lebesgue spaces, see
\cite{Ro}.
In Corollary \ref{absolute} we prove that the conditional measures
of $\mu_s$ on the local stable manifolds over $\Lambda$ are
\textbf{absolutely continuous} if and only if, the stable
dimension is equal to the real dimension of the stable tangent
space $\text{dim} E^s_x$; \ and we show that this is equivalent to
$\Lambda$ being a folded repellor.
We will also give in the end Examples of hyperbolic
constant-to-1 \textbf{folded basic sets} for which Theorem
\ref{main} and its Corollaries do apply. In particular we provide
examples of folded repellors obtained for perturbation
endomorphisms, which are not Anosov and for which we prove the
absolute continuity of the stable conditional measures on their
(non-linear) stable manifolds.
\section{Main proofs and applications.}
For our first result we assume only that $f$ is a smooth
endomorphism which is hyperbolic on a basic set $\Lambda$. We will
give a comparison between the values of an arbitrary equilibrium
measure $\mu_\phi$ (corresponding to a Holder continuous potential
$\phi$ on $\Lambda$) on the different pieces/components of the
preimages of a borelian set; this will be useful when we will
estimate later on, the measure $\mu_s$ on certain sets.
By a \textit{Bowen ball} $B_n(x, \varepsilon)$ we understand the set $\{y
\in \Lambda, d(f^iy, f^i x) < \varepsilon, i = 0, \ldots, n\}$, for $x \in
\Lambda$ and $n > 1$. If $\phi$ is a continuous real function on
$\Lambda$ and $m$ is a positive integer, we denote by $S_m
\phi(y):= \phi(y) + \phi(f(y)) + \ldots + \phi(f^m(y))$ the
consecutive sum of $\phi$ on the $m$-orbit of $y \in \Lambda$. And
by $P(\phi)$ we denote the \textit{topological pressure} of the
potential $\phi$ with respect to the function $f|_\Lambda$.
\begin{pro}\label{pieces}
Let $f$ be an endomorphism, hyperbolic on a basic set $\Lambda$;
consider also a Holder continuous potential $\phi$ on $\Lambda$
and $\mu_\phi$ be the unique equilibrium measure of $\phi$. Let
$\varepsilon>0$ be small, let $B_k(y_1, \varepsilon), B_m(y_2, \varepsilon)$ be two
disjoint Bowen balls, and let $A \subset f^k(B_k(y_1, \varepsilon)) \cap
f^m(B_m(y_2, \varepsilon))$ be a borelian set s.t $\mu_\phi(A) >0$; denote $A_1:=
f^{-k}A \cap B_k(y_1, \varepsilon), A_2 := f^{-m}A \cap B_m(y_2, \varepsilon)$.
Then there exists a positive constant $C_\varepsilon$ independent of $k,
m, y_1, y_2$ such that $$ \frac{1}{C_\varepsilon} \mu_\phi(A_2) \cdot
\frac{e^{S_k \phi(y_1)}}{e^{S_m\phi(y_2)}} \cdot e^{(m-k)P(\phi)} \le
\mu_\phi(A_1) \le C_\varepsilon \mu_\phi(A_2) \cdot
\frac{e^{S_k\phi(y_1)}}{e^{S_m\phi(y_2)}} \cdot e^{(m-k)P(\phi)}$$
\end{pro}
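We note that in the main application below the potential will be $\phi = \delta^s \Phi^s$, whose pressure equals $\log d$ (see relation (\ref{CJM}) below), so that the pressure factor $e^{(m-k)P(\phi)}$ in Proposition \ref{pieces} becomes simply $d^{m-k}$; this is the form in which the proposition will be used in the proof of Theorem \ref{main}.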
\begin{proof}
Let us fix a Holder potential $\phi$. We will denote the
equilibrium measure $\mu_\phi$ by $\mu$ to simplify notation. We
will work with $f$ restricted to $\Lambda$.
As in \cite{KH}, since the borelian sets with boundaries of
$\mu$-measure zero form a sufficient collection, we will assume
that each of the sets $A_1, A_2$ has boundary of $\mu$-measure
zero.
By construction $f^k(A_1) = f^m(A_2)$; assume for example
that $m \ge k$. Now the equilibrium measure $\mu$ can be
considered as the limit of the sequence of measures (see
\cite{KH}):
$$\tilde \mu_n:= \frac {1}{P(f, \phi, n)} \cdot
\mathop{\sum}\limits_{x \in \text{Fix}(f^n)} e^{S_n\phi(x)}
\delta_x,$$ where $P(f, \phi, n):= \mathop{\sum}\limits_{x \in
\text{Fix}(f^n)} e^{S_n\phi(x)}, n \ge 1$.
So we have
\begin{equation}\label{ai}
\tilde \mu_n(A_1) = \frac {1}{P(f, \phi, n)} \cdot
\mathop{\sum}\limits_{x \in \text{Fix}(f^n) \cap A_1}
e^{S_n\phi(x)}, n \ge 1
\end{equation}
Let us consider now a periodic point $x \in \text{Fix}(f^n) \cap
A_1$; by definition of $A_1$, it follows that $f^k(x) \in A$, so
there exists a point $y \in A_2$ such that $f^m(y) = f^k(x)$.
However the point $y$ does not have to be periodic.
Now we will use the Specification Property (\cite{Bo}, \cite{KH})
on the hyperbolic compact locally maximal set $\Lambda$: if
$\varepsilon>0$ is fixed, then there exists a constant $M_\varepsilon>0$ such that
for all $n
\gg M_\varepsilon$, there exists a $z \in \text{Fix}(f^{n+m-k})$ s.t $z$ $\varepsilon$-shadows
the $(n+m-k-M_\varepsilon)$-orbit of $y$.
Let now $V$ be an arbitrary neighbourhood of the set $A_2$ s.t $V
\subset B_m(y_2, \varepsilon)$. Consider two points $x, \tilde x \in
\text{Fix}(f^n) \cap A_1$ and assume the same periodic point $z
\in V \cap \text{Fix}(f^{n+m-k})$ corresponds to both $x$ and
$\tilde x$ by the above procedure. This means that the
$(n-k-M_\varepsilon)$-orbit of $f^m z$, $\varepsilon$-shadows the
$(n-k-M_\varepsilon)$-orbit of $f^k x$ and also the $(n-k-M_\varepsilon)$-orbit of
$f^k \tilde x$. Hence the $(n-M_\varepsilon-k)$-orbit of $f^k x$,
$2\varepsilon$-shadows the $(n-M_\varepsilon-k)$-orbit of $f^k \tilde x$. But
recall that we chose $x, \tilde x \in A_1 \subset B_k(y_1, \varepsilon)$,
hence $\tilde x \in B_{n-M_\varepsilon}(x, 2\varepsilon)$.
Now we can split the set $B_{n-M_\varepsilon}(x, 2\varepsilon)$ into at most $N_\varepsilon$
smaller Bowen balls of type $B_n(\zeta, 2\varepsilon)$. In each of these
$(n, 2\varepsilon)$-Bowen balls $B_n(\zeta, 2\varepsilon)$ we may have at most one
fixed point for $f^n$. This holds since fixed points for $f^n$ are
solutions to the equation $f^n \xi = \xi$ and, on tangent spaces
we have that $Df^n - Id$ is a linear map without eigenvalues of
absolute value 1. Thus if $d(f^i \xi, f^i \zeta) < 2\varepsilon, i = 0,
\ldots, n$ and if $\varepsilon$ is small enough, we can apply the Inverse
Function Theorem at each step. Therefore there exists only one
fixed point for $f^n$ in each Bowen ball $B_n(\zeta, 2\varepsilon)$. Hence
there exist at most $N_\varepsilon$ periodic points from $\text{Fix}(f^n)
\cap \Lambda$ having the same periodic point $z \in V$ attached to
them by the above procedure.
Let us notice also that, if $x, \tilde x$ have the same point
$z\in V \cap \text{Fix}(f^{n+m-k})$ attached to them, then as
before, $\tilde x \in B_{n-M_\varepsilon}(x, 2\varepsilon)$. So the distances
between iterates are growing exponentially in the unstable
direction, and decrease exponentially in the stable direction.
Thus we can use the Holder continuity of $\phi$ and a Bounded
Distortion Lemma to prove that: $$|S_n\phi(x) - S_n\phi(\tilde x)|
\le \tilde C_\varepsilon,$$ for some positive constant $\tilde C_\varepsilon$
depending on $\phi$ (but independent of $n, x$). This can be used
then in the estimate for $\tilde \mu_n(A_1)$, according to
(\ref{ai}). We use the fact that if $z \in B_{n+m-k-M_\varepsilon}(y,
\varepsilon)$, then $f^m(z) \in B_{n-M_\varepsilon-k}(f^m y, \varepsilon)$; also recall
that $f^kx = f^m y$, so $f^m z \in B_{n-M_\varepsilon-k}(f^k x, \varepsilon)$.
Then from the Holder continuity of $\phi$ and the fact that $x \in
A_1 \subset B_k(y_1, \varepsilon)$, it follows again by a Bounded
Distortion Lemma that there exists a constant $\tilde C_\varepsilon$
(denoted as before without loss of generality) satisfying:
\begin{equation}\label{distortion}
|S_{n+m-k} \phi(z) - S_n\phi(x)| \le |S_k\phi(y_1) - S_m\phi(y_2)|
+ \tilde C_\varepsilon,
\end{equation}
for $n > n(\varepsilon, m)$.
But from Proposition 20.3.3 of \cite{KH} (which extends
immediately to endomorphisms), we have that there exists a
positive constant $c_\varepsilon$ such that for sufficiently large $n$: $$
\frac{1}{c_\varepsilon}e^{nP(\phi)} \le P(f, \phi, n) \le c_\varepsilon
e^{nP(\phi)}, $$ where the expression $P(f, \phi, n)$ was defined
immediately before (\ref{ai}). Hence in our case, if $n > n(\varepsilon,
m)$ we obtain:
\begin{equation}\label{PX}
\frac{1}{c_\varepsilon}e^{(n+m-k)P(\phi)} \le P(f, \phi, n+m-k) \le c_\varepsilon
e^{(n+m-k)P(\phi)}, \text{and} \ \frac{1}{c_\varepsilon} e^{nP(\phi)} \le
P(f, \phi, n) \le c_\varepsilon e^{nP(\phi)}
\end{equation}
Recall also that there are at most $N_\varepsilon$ points $x \in
\text{Fix}(f^n)$ which have the same attached $z \in V \cap
\text{Fix}(f^{n+m-k})$. Therefore, by using (\ref{ai}),
(\ref{distortion}) and (\ref{PX}) we can infer that there exists a
constant $C_\varepsilon>0$ such that for $n$ large enough ($n > n(\varepsilon,
m)$),
\begin{equation}\label{ineq1}
\tilde \mu_n(A_1) \le C_\varepsilon \tilde \mu_{n+m-k}(V) \cdot
\frac{e^{S_k \phi(y_1)}}{e^{S_m\phi(y_2)}} \cdot e^{(m-k)P(\phi)},
\end{equation}
where we recall that $A_1 \subset B_k(y_1, \varepsilon), A_2 \subset
B_m(y_2, \varepsilon)$. But since $\partial A_1, \partial A_2$ have
$\mu$-measure zero, we obtain: $$ \mu(A_1) \le C_\varepsilon \mu(V)
\frac{e^{S_k\phi(y_1)}}{e^{S_m\phi(y_2)}} \cdot e^{(m-k)P(\phi)}$$
But $V$ has been chosen arbitrarily as a neighbourhood of $A_2$,
hence $$ \mu(A_1) \le C_\varepsilon \mu(A_2)
\frac{e^{S_k\phi(y_1)}}{e^{S_m\phi(y_2)}} e^{(m-k)P(\phi)} $$
Similarly we prove also the other inequality, hence we are done.
\end{proof}
Let us recall a few notions about measurable partitions (see
\cite{Ro}). Let $\zeta$ be a partition of a Lebesgue space $(X,
\mathcal{B}, \mu)$ with $\mathcal{B}$-measurable sets. Subsets of
$X$ that are unions of elements of $\zeta$ are called
$\zeta$-sets. For an arbitrary point $x\in X$ (modulo $\mu$), we
denote the unique set which contains $x$, by $\zeta(x)$. \ By
\textit{basis} for $\zeta$ we understand a countable collection
$\{B_\alpha, \alpha \in A\}$ of measurable $\zeta$-sets so that
for any two elements $C, C' \in \zeta$, there exists some $\alpha
\in A$ with $C \subset B_\alpha, C' \cap B_\alpha = \emptyset$ or
vice versa, i.e $C \cap B_\alpha = \emptyset, C' \subset B_\alpha$.
A partition $\zeta$ is called \textit{measurable} if it has a
basis as above.
Now we remind briefly the notion of \textit{family of conditional
measures} associated to a measurable partition $\zeta$. Assume we
have an endomorphism $f$ on a compact set $\Lambda$, and let $\mu$
be an $f$-invariant borelian probability measure on $\Lambda$.
If $\zeta$ is a measurable partition of $(\Lambda,
\mathcal{B}, \mu)$ denote by $(\Lambda/\zeta, \mu_\zeta)$ the
\textit{factor space} of $\Lambda$ relative to $\zeta$. Then we
can attach an \textbf{essentially unique} collection of
\textit{conditional measures} $\{\mu_C\}_{C \in \zeta}$ satisfying
two conditions (see \cite{Ro}):
\ i) $(C, \mu_C)$ is a Lebesgue space
\ ii) for any measurable set $B \subset \Lambda$, the set $B \cap C$ is measurable in $C$ for $\mu_\zeta$-almost
all points $C \in \Lambda/\zeta$, the function $C \to \mu_C(B \cap C)$ is
measurable on $\Lambda/\zeta$ and
$\mu(B) = \int_{\Lambda/\zeta} \mu_C(B \cap C) d \mu_\zeta(C)$.
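As a simple illustration of these two conditions (not needed in the sequel), one may take $X = [0,1]^2$ with $\mu$ the planar Lebesgue measure and $\zeta$ the partition into vertical segments $\{x\} \times [0,1]$; then the factor space is $[0,1]$ with one-dimensional Lebesgue measure, the conditional measure on each segment is again one-dimensional Lebesgue measure, and condition ii) reduces to Fubini's theorem. In our setting the elements of the partition will instead be pieces of local stable manifolds.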
\begin{defn}\label{sub}
If $f$ is a hyperbolic map on a basic set $\Lambda$ and if $\mu$
is an $f$-invariant borelian measure on $\Lambda$, then a
measurable partition $\zeta$ of $(\Lambda, \mathcal{B}(\Lambda),
\mu)$ is said to be \textbf{subordinated to the local stable
manifolds} if for $\mu$-a. e $x \in \Lambda$, we have $\zeta(x)
\subset W^s_{loc}(x)$, and $\zeta(x)$ contains an open
neighbourhood of $x$ in $W^s_{loc}(x)$ (with respect to the
topology induced on the local stable manifold).
\end{defn}
Let us fix an $f$-invariant borelian measure $\mu$ on $\Lambda$.
Since we work with a uniformly hyperbolic endomorphism, we can
\textbf{construct a measurable partition $\xi$} (w. r. t $\mu$)
subordinated to the local stable manifolds, in the following way:
first we know that there is a small $r_0>0$ s. t for each $x \in
\Lambda$ there exists a local stable manifold $W^s_{r_0}(x)$. Then
it is possible to take a countable partition $\mathcal{P}$ of
$\Lambda$ (modulo $\mu$) with open sets, each having diameter less
than $r_0$ and such that the boundary of each set from
$\mathcal{P}$ has $\mu$-measure zero (see for example \cite{KH}).
Now for every open set $U \in \mathcal{P}$, and $x \in U \subset
\Lambda$, we consider the intersection between $U$ and the unique
local stable manifold going through $x$; denote this intersection
by $\xi(x)$. It is clear that $\xi(x) = \xi(y)$ if and only if
both $x, y$ are in the same set $U \in \mathcal{P}$ and they are
on the same local stable manifold $W^s_{r_0}(z)$ for some $z \in
\Lambda$. Now take the collection $\xi$ of all the borelian sets
$\xi(x), x \in U, U \in \mathcal{P}$. We see easily that $\xi$ is
a partition of $\Lambda$ (modulo sets of $\mu$-measure zero) and
that $\xi$ is measurable, since $\mathcal{P}$ was assumed
countable and, inside each member $U \in \mathcal{P}$, we can
separate any two local stable manifolds with the help of a
countable collection of $\xi$-sets (which are neighbourhoods of
local stable manifolds). \ Therefore we have concluded the
construction of the measurable partition $\xi$ which is
subordinated to the local stable manifolds. Modulo a set of
$\mu$-measure zero we have thus a partition with pieces of local
stable manifolds, $\xi(x) \subset W^s_{r(y(x))}(y(x)), x \in
\Lambda$. In fact without loss of generality, we may assume that
for each member $A \in \xi$, there exists some $x(A) \in \Lambda$
and $r(A)\in (0, r_0)$ so that $W^s_{r(A)/2}(x(A)) \cap \Lambda
\subset A \subset W^s_{r(A)}(x(A)) \cap \Lambda$.
\textbf{Remark 1:} From the construction above it follows that,
outside a set of $\mu$-measure zero, the radius $r(A)$ can be
taken to vary continuously, i.e there exists a constant $\chi>0$
s. t for each $x$ in a set of full $\mu$-measure in $\Lambda$,
there exists a neighbourhood $U(x)$ of $x$ with
$\frac{r(\xi(z))}{r(\xi(z'))} \le \chi, z, z' \in U(x)$.
$
\square$
\textbf{Notation:} In our uniformly hyperbolic setting, with the
partition $\xi$ constructed above, we denote the conditional
measure $\mu_A$ by $\mu_{A}^s$, for $W^s_{r(A)/2}(x(A)) \cap
\Lambda \subset A \subset W^s_{r(A)}(x(A)) \cap \Lambda, A \in
\xi$. We will also denote the set of centers $\{x(A), A \in \xi\}$
by $S$. In particular, if $\mu = \mu_s$, we denote the conditional
measures by $\mu^s_{s, A}$ for $A \in \xi$, or by $\mu^s_{s, x}$
when $\xi(x) = A$ for $\mu_s$-a.e $x \in \Lambda$. $
\square$
Now, if $f$ is a $d$-to-1 c-hyperbolic endomorphism on the basic
set $\Lambda$, we showed in \cite{MU-CJM} that the stable
dimension $\delta^s(x)$ at \textbf{any point} $x \in \Lambda$ is
independent of $x$, and is equal to the unique zero of the
pressure function $t \to P(t \Phi^s - \log d)$. Thus we can talk
in this case about the \textbf{stable dimension of $\Lambda$} and
will denote it by $\delta^s$.
\begin{thm}\label{main}
Let $f$ be a smooth endomorphism on a Riemannian manifold $M$, and
assume that $f$ is c-hyperbolic on a basic set of saddle type
$\Lambda$. Let us assume moreover that $f$ is $d$-to-1 on
$\Lambda$. Let $\Phi^s(y):= \log |Df_s(y)|$, $y \in \Lambda$, let
$\delta^s$ be the stable dimension of $\Lambda$, and let $\mu_s$ be
the equilibrium measure of the potential
$\delta^s \Phi^s$ on $\Lambda$. Then the conditional measures of
$\mu_s$ associated to the partition $\xi$, namely $\mu_{s, A}^s$,
are geometric probabilities, i.e for every set $A \in \xi$ there
exists a positive constant $C_A$ such that $$C_A^{-1}
\rho^{\delta^s} \le \mu_{s, A}^s(B(y, \rho)) \le C_A
\rho^{\delta^s}, y \in A \cap \Lambda, 0 < \rho < \frac{r(A)}{2}$$
\end{thm}
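Before giving the proof, we note the dimension-theoretic content of such two-sided estimates: taking logarithms and letting $\rho \to 0$ gives, for every $y \in A \cap \Lambda$,
$$\lim_{\rho \to 0} \frac{\log \mu_{s, A}^s(B(y, \rho))}{\log \rho} = \delta^s,$$
i.e the pointwise dimension of the stable conditional measures equals the stable dimension $\delta^s$; this is the property behind Corollary \ref{pointwise} mentioned in the introduction.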
\begin{proof}
By using the partition $\xi$ subordinated to local stable
manifolds from above, we can associate conditional measures of
$\mu_s$, denoted by $\mu_{s, A}^s$, $A \in \xi$. We want to
estimate the measure $\mu_{s, A}^s$ of a small arbitrary ball
$B(y, \rho)$ centered at some $y \in A$, where $W^s_{r(A)/2}(x)
\cap \Lambda \subset A \subset W^s_{r(A)}(x) \cap \Lambda, x =
x(A)$.
Let us first consider an arbitrary set $f^n(B_n(z, \varepsilon))$, where
we remind that $B_n(z, \varepsilon)$ denotes a Bowen ball, and where
$\varepsilon>0$ is arbitrary but small. This set is actually a
neighbourhood of the unstable manifold $W^u_r(\hat f^n z)$
corresponding to a prehistory $(f^n z, f^{n-1}z, \ldots, z,
\ldots)$. We will estimate the $\mu_s$-measure of a cross section
of such a set $f^n(B_n(z, \varepsilon))$, i.e an intersection of type
$$B(n, z; k, x; \varepsilon):= f^n(B_n(z, \varepsilon)) \cap B_k(x, \varepsilon),$$ for
arbitrary $z, x \in \Lambda$ and positive integers $n, k$ . We see
that if we vary $z, x, k, n$, we can write any open set in
$\Lambda$, as a union of mutually disjoint sets of type $B(n, z;
k, x; \varepsilon)$.
So let us estimate the $\mu_s$-measure of $B(n, z; k, x, \varepsilon)$.
Notice that $B(n, z; k, x; \varepsilon)$ is contained in $f^{n}(B_{n+k}(z,
\varepsilon))$. Without loss of generality we can assume that $z =
x_{-n}$, i.e that $z$ itself is the unique $n$-preimage of $x$
inside $B_n(z, \varepsilon)$; if not, then we can replace $z$ by a point
$x_{-n}$ which is $\varepsilon$-shadowed by $z$ up to order $n+k$, and
thus the dynamical behaviour of $z$ up to order $n+k$ will be the
same as that of $x_{-n}$.
Let us denote the positive quantity $|Df_s^n(z)|\cdot \varepsilon$ by
$\rho$. Since the endomorphism $f$ is conformal on local stable
manifolds, the diameter of the intersection $f^n(B_n(z, \varepsilon)) \cap
W^s_r(f^nz)$ is equal to $2\rho$.
Now recall that we assumed without loss of generality that $f^nz =
x$, and consider all the finite prehistories of the point $x$, in
$\Lambda$. We will call then \textbf{$\rho$-maximal prehistory} of
$x$ any finite prehistory $(x, x_{-1}, \ldots, x_{-p})$ so that
$|Df_s^{p-1}(x_{-p+1})| \cdot \varepsilon \ge \rho$ but $|Df_s^p(x_{-p})| \cdot
\varepsilon < \rho$. Clearly, given any prehistory $\hat x = (x, x_{-1},
\ldots )$ of $x$, there exists some positive integer $n(\hat x,
\rho)$ such that $(x, x_{-1}, \ldots, x_{-n(\hat x, \rho)})$ is a
$\rho$-maximal prehistory. Let us denote by $$\mathcal{N}(x,
\rho):= \{n(\hat x, \rho), \hat x \ \text{prehistory of} \ x \
\text{from} \ \Lambda\}$$
We will consider now the various components of the $p$-preimages
of $B(n, z; k, x; \varepsilon)$, when $p$ ranges in $\mathcal{N}(x,
\rho)$. We extended the stable diameter of $B(n, z; k, x; \varepsilon)$ in
backward time until we reach a diameter of at most $\varepsilon$. As the
maximum expansion in backward time is realized on the stable
manifolds (local inverse iterates contract all the unstable
directions), it follows that for any prehistory $\hat x$ of $x$,
there exists a component of $f^{-n(\hat x, \rho)}(B(n, z; k, x;
\varepsilon))$ inside the Bowen ball $B_{n(\hat x, \rho)}(x_{-n(\hat x,
\rho)}, \varepsilon)$; denote this component by $A(\hat x, \rho)$.
We see
that all these components $A(\hat x, \rho)$ are mutually disjoint
if $\varepsilon \ll \varepsilon_0$, where $\varepsilon_0$ is the local injectivity constant
of $f$ on $\Lambda$ (recall that there are no critical points in
$\Lambda$). Indeed if the sets $A(\hat x, \rho)$ and $A(\hat x',
\rho)$ were to intersect for some prehistories $\hat x =(x, x_{-1},
\ldots) , \hat x' = (x, x'_{-1}, \ldots)$ of $x$ then, since they
are contained in Bowen balls, their forward iterates would be
$2\varepsilon$-close. But then we get a contradiction since the
prehistories $\hat x, \hat x'$ must contain different preimages
$x_{-p}, x'_{-p}$ at some level $p$, and these different preimages must
be at a distance of at least $\varepsilon_0$ from each other. Hence either
$A(\hat x, \rho) = A(\hat x', \rho)$, or $A(\hat x, \rho) \cap
A(\hat x', \rho) = \emptyset$.
Now we will use the $f$-invariance of the equilibrium measure
$\mu_s$ in order to estimate the $\mu_s$-measure of the set $B(n,
z; k, x; \varepsilon)$. Recall that $f^n z = x$, and $\varepsilon |Df_s^n(z)| =:
\rho$. Then we have $$\mu_s(B(n, z; k, x; \varepsilon)) =
\mathop{\sum}\limits_{\hat x \ \text{prehistory of} \ x}
\mu_s(A(\hat x, \rho)), $$ since we showed above that the sets
$A(\hat x, \rho)$ either coincide or are disjoint.
Now let us take two sets $A(\hat x, \rho), A(\hat x', \rho)$, one
of them with $n(\hat x, \rho) = p$ and the other with $n(\hat x',
\rho) = p'$. We proved in \cite{MU-CJM} that for a $d$-to-1
c-hyperbolic endomorphism $f$ on the basic set $\Lambda$, we have
$\delta^s = t^s_d$, where $t^s_d$ is the unique zero of the
pressure function $t \to P(t\Phi^s - \log d)$. Therefore we can
use that
\begin{equation}\label{CJM}
P(\delta^s\Phi^s) = \log d
\end{equation}
Then from the definition of the sets
$A(\hat x, \rho)$ and by using Proposition \ref{pieces}, we can
compare the measure $\mu_s$ on two sets $A(\hat x, \rho), A(\hat
x', \rho)$ as follows:
\begin{equation}\label{comp}
\frac{1}{C_\varepsilon} \mu_s(A(\hat x', \rho))
\frac{|Df_s^p(x_{-p})|^{\delta^s}}{|Df_s^{p'}(x'_{-p'})|^{\delta^s}}
\cdot d^{p'-p} \le \mu_s(A(\hat x, \rho)) \le C_\varepsilon \mu_s(A(\hat
x',
\rho))\frac{|Df_s^p(x_{-p})|^{\delta^s}}{|Df_s^{p'}(x'_{-p'})|^{\delta^s}}
\cdot d^{p'-p}
\end{equation}
In general, if for two variable quantities $Q_1, Q_2$, there
exists a positive universal constant $c$ such that $\frac{1}{c}Q_2
\le Q_1 \le cQ_2$, we say that $Q_1, Q_2$ are \textbf{comparable},
and will denote this by $Q_1 \approx Q_2$; the constant $c$ is
called the \textbf{comparability constant}.
But from the definition of $n(\hat x, \rho)$ above (as being the
length of the $\rho$-maximal prehistory along $\hat x$), and since
$n(\hat x, \rho) = p, n(\hat x', \rho) = p'$ we obtain:
$$\frac{1}{C} |Df_s^{p'}(x'_{-p'})| \le |Df_s^p(x_{-p})| \le C
|Df_s^{p'}(x'_{-p'})|$$ Therefore, from relation (\ref{comp}) we
obtain
\begin{equation}\label{comp2}
\frac{1}{C_\varepsilon} \mu_s(A(\hat x', \rho))d^{p'-p} \le \mu_s(A(\hat
x, \rho)) \le C_\varepsilon \mu_s(A(\hat x', \rho))d^{p'-p},
\end{equation}
where we used the same constant $C_\varepsilon$ as in
(\ref{comp}), without loss of generality. Hence the proof will now
be reduced to a combinatorial argument about the different
pieces/components, of the preimages of various orders of $B(n, z;
k, x; \varepsilon)$.
However we assumed that every point from $\Lambda$ has exactly $d$
$f$-preimages inside $\Lambda$. We use (\ref{comp2}) in order to
compare the $\mu_s$-measures of the different pieces $A(\hat x,
\rho)$, which will then be added successively. Recall that one of
these components $A(\hat x, \rho)$ is precisely $B_{n+k}(z, \varepsilon)$.
The comparisons will always be made with respect to this component
$B_{n+k}(z, \varepsilon)$. Let us order the integers from $\mathcal{N}(x,
\rho)$ as: $$n_1 > n_2 >\ldots > n_T$$ We shall add first the
measures $\mu_s(A(\hat x, \rho))$ over all the sets corresponding
to $\hat x$ with $n(\hat x, \rho) = n_1$, then over those
prehistories with $n(\hat x, \rho) = n_2$, etc. And will use that
any point from $\Lambda$ has exactly $d^m$ $m$-preimages belonging
to $\Lambda$ for any $m \ge 1$. \
Therefore by such successive
additions and by using (\ref{comp2}) we obtain: $$\frac{1}{C_\varepsilon}\mu_s(B_{n+k}(z,
\varepsilon)) \cdot d^{n} \le \mu_s(B(n, z; k, x; \varepsilon)) =
\mathop{\sum}\limits_{\hat x \ \text{prehistory of} \ x}
\mu_s(A(\hat x, \rho)) \le C_\varepsilon \mu_s(B_{n+k}(z, \varepsilon)) \cdot d^{n},$$
with the positive constant $C_\varepsilon$ independent of $n, k, z, x$.
We use now Theorem 1 of \cite{M-DCDS06} which gave estimates for
equilibrium measures on Bowen balls, similar to those from the
case of diffeomorphisms (see \cite{KH} for example); this was done
by lifting to an equilibrium measure on $\hat \Lambda$. Hence from
the last displayed formula and (\ref{CJM}), we obtain:
\begin{equation}\label{tub}
\frac{1}{C_\varepsilon} d^n \cdot
\frac{|Df_s^{n+k}(z)|^{\delta^s}}{d^{n+k}} \le \mu_s(B(n, z; k, x;
\varepsilon)) \le C_\varepsilon d^n \cdot
\frac{|Df_s^{n+k}(z)|^{\delta^s}}{d^{n+k}}
\end{equation}
Let us now study in more detail the conditions from the definition
of conditional measures. From the construction of the measurable
partition $\xi$ we have that $W^s_{r(A)/2}(x) \cap \Lambda \subset
A \subset W^s_{r(A)}(x) \cap \Lambda, x = x(A) \in S$ and the
radii $r(A)$ vary continuously with $A$. So from Remark 1 we can
split an arbitrary set $U \in \mathcal{P}$, modulo $\mu_s$, into a
disjoint union of open sets $V$, each being a $\xi$-set, so there
exists $r = r(V)>0$ s.t for all $A \in \xi$ intersecting $V$, we
have $W^s_{r/2}(x(A)) \cap \Lambda \subset A \subset W^s_r(x(A))
\cap \Lambda$. Hence locally, on a subset $V \subset U \in
\mathcal{P}$, we can consider that $\xi$ is, modulo a set of
$\mu_s$-measure zero, a foliation with local stable manifolds
$W^s_r(x)$ of the same size $r=r(V)$. The intersections of these
local stable manifolds with $\Lambda$ are then identified with
points in the factor space $\Lambda/\xi$.
We will work for the rest of the proof on an open set $V$ as
above, i.e where the sets $A \in \xi$ can be assumed to be of type
$W^s_r(x)$, of the same size $r=r(V)$. Take also $\varepsilon = r$.
From the definition of the factor space $\Lambda/\xi$, the
$(\mu_s)_\xi$-measure induced on the quotient space $\Lambda/\xi$
is given by $(\mu_s)_\xi(E) = \mu_s(\pi_\xi^{-1}(E))$, where
$\pi_\xi:\Lambda \to \Lambda/\xi$ is the canonical projection
which collapses a set from $\xi$ to a point. \ We notice that the
projection $\pi_\xi(B(n, z; k, x; r))$ in $\Lambda/\xi$ has
$(\mu_s)_\xi$-measure equal to $\mu_s(B_k(x, r))$, since
$\pi_\xi^{-1}(\pi_\xi(B(n, z; k, x; r)))$ is $B_k(x, r)$. Now since
$P(\delta^s \Phi^s) = \log d$ (from relation (\ref{CJM})) and by
using again the estimates of equilibrium states on Bowen balls, we
obtain as in (\ref{tub}) that $\mu_s(B_k(x, r))$ is comparable to
$\frac{|Df_s^k(x)|^{\delta^s}}{d^k}$ (with a comparability
constant $c=c(V)$).
But, from the definition of conditional measures we have
\begin{equation}\label{fam}
\mu_s(B(n, z; k, x; r)) = \int_{B_k(x, r)/\xi} \mu_{s, A}^s(A \cap
B(n, z; k, x; r)) d(\mu_s)_\xi(\pi_\xi(A))
\end{equation}
Now by (\ref{tub}) and recalling that $f^nz =x$, we infer that
$\mu_s(B(n, z; k, x; r))$ is comparable to
$\frac{|Df_s^k(x)|^{\delta^s}}{d^k} \cdot \rho^{\delta^s}$, i.e to
$\mu_s(B_k(x, r)) \cdot \rho^{\delta^s}$, where in our case
$\rho:= |Df_s^n(z)|r$ (the comparability constant being denoted by
$C_V$). And we showed above that $(\mu_s)_\xi(B_k(x, r)/\xi) =
\mu_s(B_k(x, r)) \approx \frac{|Df_s^k(x)|^{\delta^s}}{d^k}$ (with
the comparability constant $C_V$).
In addition, we notice that the sets of type $B(n, z; k, x; r)$
where $n, z, k, x$ vary, form a basis for the open sets in $V$;
also, if we vary $n$, the radius $\rho = |Df_s^n(z)|\cdot r$ can
be made arbitrarily small. \ Therefore from the essential
uniqueness of the system of conditional measures associated to
$(\mu_s, \xi)$ and since any borelian set in $V$ can be written
modulo $\mu_s$ as a union of disjoint sets of type $B(n, z; k, x;
r)$, we conclude that the conditional measure $\mu_{s, A}^s$ is a
geometric probability of exponent $\delta^s$. Hence for all
$\rho$, $0< \rho < r/2$, we have
$$\frac{1}{C_V} \rho^{\delta^s} \le
\mu_{s, A}^s(B(y, \rho)) \le C_V \rho^{\delta^s}, y \in A,$$ for
$A \subset V, A \in \xi$. The comparability factor $C_V$ is
constant on $V$; in general it can be taken locally constant on
the complement in $\Lambda$ of a set of $\mu_s$-measure zero. The
proof is thus finished.
\end{proof}
\begin{defn}\label{stable-cond}
Let $f$ be a hyperbolic endomorphism on the folded basic set
$\Lambda$, $\mu$ a borelian probability measure on $\Lambda$ and
$\xi$ a measurable partition subordinated to local stable
manifolds. Then the conditional measure $\mu_A^s$ corresponding to
$A \in \xi$ will be called \textit{the stable conditional measure}
of $\mu$ on $A$. When $\mu = \mu_s$ we denote this stable
conditional measure by $\mu^s_{s, A}$.
\end{defn}
\textbf{Remark 2.} We notice from the proof of Theorem \ref{main}
that, in fact, the stable conditional measures of $\mu_s$ do not
depend on the measurable partition $\xi$ constructed above,
subordinated to local stable manifolds. Therefore there exists a
set $\Lambda(\mu_s)$ of full $\mu_s$-measure inside $\Lambda$,
such that for every $x \in \Lambda(\mu_s)$ there exists some small
$r(x)>0$ so that $W^s_{r(x)}(x)$ is contained in a set $A$ from a
measurable partition of type $\xi$ (subordinated to local stable
manifolds); then one can construct the stable conditional measure
$\mu^s_{s, A}$. We denote this conditional measure also by
$\mu^s_{s, x}, x \in \Lambda(\mu_s)$. $
\square$
We recall now the notions of \textit{lower}, respectively
\textit{upper pointwise dimension} of a finite borelian measure
$\mu$ on a compact space $\Lambda$ (see for example \cite{Ba}).
For $x \in \Lambda$, they are defined by $$ \underline{d}_\mu(x):=
\mathop{\liminf}\limits_{\rho \to 0} \frac{\log \mu(B(x,
\rho))}{\log \rho}, \ \text{and} \ \bar{d}_\mu(x):=
\mathop{\limsup}\limits_{\rho \to 0} \frac{\log \mu(B(x,
\rho))}{\log \rho}$$ If the lower pointwise dimension at $x$
coincides with the upper pointwise dimension at $x$, we denote the
common value by $d_\mu(x)$ and call it simply the
\textit{pointwise dimension} at $x$.
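For instance, if $\mu$ is a geometric probability of exponent $\delta$ on $\Lambda$, i.e if $\frac{1}{C}\rho^\delta \le \mu(B(x, \rho)) \le C \rho^\delta$ for all small $\rho$ and some constant $C>0$, then $$\frac{\log \mu(B(x, \rho))}{\log \rho} = \delta + O\Big(\frac{1}{|\log \rho|}\Big) \mathop{\longrightarrow}\limits_{\rho \to 0} \delta,$$ so $\underline{d}_\mu(x) = \bar{d}_\mu(x) = \delta$ at every point $x$. This elementary observation is the one used in Corollary \ref{pointwise} below.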
One can also define the \textit{Hausdorff
dimension, lower box dimension and upper box dimension} of $\mu$
respectively by: $$ HD(\mu):= \inf\{HD(Z), \mu(\Lambda \setminus
Z) = 0\}$$ $$\underline{\text{dim}}_B(\mu):= \lim_{\delta \to 0}
\inf\{\underline{\text{dim}}_B(Z), \mu(\Lambda \setminus Z) \le
\delta\}$$ $$\overline{\text{dim}}_B(\mu):=\lim_{\delta\to 0} \inf
\{\overline{\text{dim}}_B(Z), \mu(\Lambda\setminus Z) \le
\delta\}$$
Assume now in general that $f$ is a hyperbolic endomorphism on
$\Lambda$ and $\mu$ a probability measure on $\Lambda$, and let
$\xi$ be a measurable partition subordinated to local stable
manifolds of $f$ on $\Lambda$. We define then the
\textit{lower/upper stable pointwise dimension} of $\mu$ at $y$,
for $\mu$-a.e $y \in \Lambda$, as the lower/upper pointwise
dimension of the stable conditional measure $\mu^s_A$ at $y$, for
$y \in A$, namely: $$ \underline{d}_\mu^s(y):=
\mathop{\liminf}\limits_{\rho \to 0} \frac{\log \mu_A^s(B(y,
\rho))}{\log \rho} \ \ \text{and} \ \bar{d}_\mu^s(y):=
\mathop{\limsup}\limits_{\rho \to 0} \frac{\log \mu_A^s(B(y,
\rho))}{\log \rho}$$
Similarly we define the \textit{stable Hausdorff dimension} of
$\mu$ on $A \in \xi$, and the \textit{stable lower/upper box
dimension} of $\mu$ on $A$, respectively, as the quantities:
$$HD^s(\mu, A):= HD(\mu_A^s), \ \ \underline{\text{dim}}_B^s(\mu,
A) := \underline{\text{dim}}_B(\mu_A^s), \ \
\overline{\text{dim}}_B^s(\mu, A) :=
\overline{\text{dim}}_B(\mu_A^s), A \in \xi$$
When $\mu = \mu_s$ we denote $HD^s(\mu_s, x) := HD(\mu^s_{s, x})$,
$\underline{\text{dim}}_B^s(\mu_s, x) :=
\underline{\text{dim}}_B(\mu^s_{s, x})$, and
$\overline{\text{dim}}_B^s(\mu_s, x) :=
\overline{\text{dim}}_B(\mu_{s, x}^s)$, for $x \in
\Lambda(\mu_s)$.
Recall now the stable dimension $\delta^s$ from Definition
\ref{stable} and the Theorem of Independence of the Stable
Dimension given afterwards.
\begin{cor}\label{pointwise}
Let $f$ be a c-hyperbolic, $d$-to-1 endomorphism on a basic set
$\Lambda$, and $\mu_s$ be the equilibrium measure of the potential
$\delta^s \Phi^s$. Then the stable pointwise dimension of $\mu_s$
exists $\mu_s$-almost everywhere on $\Lambda$ and is equal to the
stable dimension $\delta^s$.
Also the stable Hausdorff dimension of $\mu_s$, stable lower box
dimension of $\mu_s$ and stable upper box dimension of $\mu_s$ are
all equal to $\delta^s$.
\end{cor}
\begin{proof}
The proof follows from Theorem \ref{main} since we proved that the
stable conditional measures of the equilibrium measure $\mu_s$ are
geometric probabilities.
For the second part of the Corollary, we use Theorem 2.1.6 of
\cite{Ba}. Indeed since the stable conditional measures of $\mu_s$
are geometric probabilities of exponent $\delta^s$, we conclude
that the stable Hausdorff dimension and the stable lower/upper box dimensions coincide, and
are all equal to the stable dimension $\delta^s$.
\end{proof}
\begin{defn}\label{msd}
We will say that a measure $\mu$ on $\Lambda$ has \textit{maximal
stable dimension} on $A \in \xi, A \subset W^s_{r(x)}(x)$ if:
$$HD^s(\mu, A) = \sup\{HD^s(\nu, A), \nu \ \text{is an} \
f|_\Lambda-\text{invariant probability measure} \ \text{on} \
\Lambda\}$$
\end{defn}
This definition is similar to that of measure of maximal
dimension; see \cite{Ba}, \cite{BW} where measures of maximal
dimension on hyperbolic sets of surface diffeomorphisms were
studied. Our setting/methods for the maximal \textit{stable}
dimension in the non-invertible case are, however, different. \
Now, since the stable Hausdorff dimension of any $f$-invariant
probability measure $\nu$ on $\Lambda$ is bounded above by
$\delta^s := HD(W^s_r(x) \cap \Lambda)$, we see from Corollary
\ref{pointwise} that:
\begin{cor}\label{maximal}
In the setting of Theorem \ref{main} it follows that the stable
equilibrium measure $\mu_s$ of $f$, is of \textbf{maximal stable
dimension} on $W^s_{r(x)}(x) \cap \Lambda$ among all $f$-invariant
probability measures on $\Lambda$, for $\mu_s$-a.e $x \in
\Lambda$. And $\mu_s$ maximizes in a Variational Principle for
stable dimension on $\Lambda$, i.e: $$\delta^s = HD^s(\mu_s, x) =
\sup\{HD^s(\nu, x), \ \nu \ \text{is an} \
f|_\Lambda-\text{invariant probability measure} \ \text{on} \
\Lambda\}, \ \mu_s\text{-a.e} \ x$$
\end{cor}
We say now that the basic set $\Lambda$ is a
\textbf{repellor} (or \textbf{folded repellor}) if there exists a
neighbourhood $U$ of $\Lambda$ such that $\bar U \subset f(U)$.
And that $\Lambda$ is a \textit{local repellor} if there are local
stable manifolds of $f$ contained inside $\Lambda$ (see
\cite{M-Cam} for more on these notions in the case of
endomorphisms).
\begin{cor}\label{absolute}
Let $f$ be an open c-hyperbolic endomorphism on a connected basic set
$\Lambda$. Then we have that the stable conditional measures
$\mu_{s, x}^s$ of $\mu_s$, are absolutely continuous with respect
to the induced Lebesgue measures on $W^s_{r(x)}(x), x \in
\Lambda(\mu_s)$, if and only if $\Lambda$ is a non-invertible
repellor.
\end{cor}
\begin{proof}
If $f$ is open on a connected $\Lambda$ we saw in Section 1 that
$f$ is constant-to-1 on $\Lambda$.
The first part of the proof follows
from Theorem \ref{main} and from Theorem 1 of \cite{M-Cam}. Indeed
in \cite{M-Cam} we showed that in the above setting, if none of
the stable manifolds centered at $x$ is contained in $\Lambda$,
then $\delta^s$ is strictly less than the real dimension $d_s$ of
the manifold $W^s_{r(x)}(x)$ (the result in \cite{M-Cam}, given
for the case when $d_s$ is 2, can be generalized easily to other
dimensions as long as the condition of conformality on stable
manifolds is satisfied). Thus in order to have absolute
continuity of the stable conditional measures we must have some
local stable manifolds contained in $\Lambda$, equivalent to
$\Lambda$ being a local repellor (in the terminology of
\cite{M-Cam}). But we proved in Proposition 1 of \cite{M-Cam} that
when $f|_\Lambda: \Lambda \to \Lambda$ is open, then $\Lambda$ is
a local repellor if and only if $\Lambda$ is a repellor.
The converse is clearly true since, if $\Lambda$ is a repellor,
then the local stable manifolds are contained inside $\Lambda$,
and thus the stable dimension $\delta^s$ is equal to the dimension
$d_s$ of the manifold $W^s_{r(x)}(x)$. Hence from Theorem
\ref{main} it follows that the stable conditional measures of
$\mu_s$ are geometric of exponent $d_s$; thus they are absolutely
continuous with respect to the respective induced Lebesgue
measures.
\end{proof}
We conclude with some examples of c-hyperbolic endomorphisms
which are constant-to-1 on basic sets, for which we will apply
Theorem \ref{main} and its Corollaries.
\textbf{Example 1.} The first and simplest example is that of a product $$f(z, w) =
(f_1(z), f_2(w)),(z, w) \in \mathbb C^2$$ where $f_1$ has a fixed
attracting point $p$ and $f_2$ is expanding on a compact invariant
set $J$. Then the basic set that we consider is $\Lambda:=
\{p\}\times J$. For instance take $f(z, w) = (z^2+c, w^2), c \ne
0$, $|c|$ small, on the basic set $\Lambda = \{p_c\}\times S^1$,
where $p_c$ denotes the unique fixed attracting point of $z \to
z^2+c$. The stable dimension here is equal to zero and the
intersections of type $W^s_r(x) \cap \Lambda$ are singletons.
\textbf{Example 2.} We can take a hyperbolic toral endomorphism
$f_A$ on $\mathbb T^2$, where $A$ is an integer-valued matrix with
one eigenvalue of absolute value strictly less than 1, and
another eigenvalue of absolute value strictly larger than 1. In
this case we can take $\Lambda = \mathbb T^2$, and we have the
stable dimension equal to 1. We see that $f_A$ is
$|\text{det}(A)|$-to-1 on $\mathbb T^2$.
We may take also $f_{A, \varepsilon}$ a perturbation of $f_A$ on $\mathbb
T^2$. Then again $f_{A, \varepsilon}$ is $|\text{det}(A)|$-to-1 on
$\mathbb T^2$, and c-hyperbolic on $\mathbb T^2$. The stable
dimension is equal to 1, but the stable potential $\Phi^s$ is not
necessarily constant now. From Corollary \ref{absolute} we see
that the stable conditional measures of the equilibrium measure
$\mu_s$ are absolutely continuous.
\textbf{Example 3.} We construct now examples of folded repellors
which are not necessarily Anosov endomorphisms.
We remark first that if $\Lambda$ is a repellor for an
endomorphism $f$, with neighbourhood $U$ so that $\bar U \subset
f(U)$, then $f^{-1}(\Lambda) \cap U = \Lambda$. Therefore if
$\Lambda$ is in addition connected, it follows easily that $f$ is
constant-to-1 on $\Lambda$. \ \ \ Let us show now that
constant-to-1 repellors are stable under perturbations.
\begin{pro}\label{stability}
Let $\Lambda$ be a connected repellor for an endomorphism $f$ so
that $f$ is hyperbolic on $\Lambda$, and let $f_\varepsilon$ be a perturbation
which is $\mathcal{C}^1$-close to $f$. Then $f_\varepsilon$ has a
connected repellor $\Lambda_\varepsilon$ close to $\Lambda$, and such that
$f_\varepsilon$ is hyperbolic on $\Lambda_\varepsilon$. Moreover for any $x \in
\Lambda_\varepsilon$, the number of $f_\varepsilon$-preimages of $x$ belonging to
$\Lambda_\varepsilon$ is the same as the number of $f$-preimages in
$\Lambda$ of a point from $\Lambda$.
\end{pro}
\begin{proof}
Since $\Lambda$ has a neighbourhood $U$ so that $\bar U \subset
f(U)$, it follows that for $f_\varepsilon$ close enough to $f$, we will
obtain $\bar U \subset f_\varepsilon(U)$. If $f_\varepsilon$ is
$\mathcal{C}^1$-close to $f$, then we can take the set $
\Lambda_\varepsilon:= \mathop{\cap}\limits_{n \in \mathbb Z} f_\varepsilon^n(U)$,
and it is quite well-known that $f_\varepsilon$ is hyperbolic on
$\Lambda_\varepsilon$ (for example \cite{Ru-carte}, etc.).
We know that there exists a conjugating homeomorphism $H: \hat
\Lambda \to \hat \Lambda_\varepsilon$ which commutes with $\hat f$ and
$\hat f_\varepsilon$. The natural extension $\hat \Lambda$ is connected
iff $\Lambda$ is connected. Hence $\hat \Lambda_\varepsilon$ is connected
and so $\Lambda_\varepsilon$ is also connected. Moreover since $\bar U
\subset f_\varepsilon(U)$, we obtain that $\Lambda_\varepsilon$ is a connected
repellor for $f_\varepsilon$.
Now assume that $x \in \Lambda$ has $d$ $f$-preimages in
$\Lambda$. Then if $C_f \cap \Lambda = \emptyset$ and if $f_\varepsilon$
is $\mathcal{C}^1$-close enough to $f$, it follows that the local
inverse branches of $f_\varepsilon$ are close to the local inverse
branches of $f$ near $\Lambda$. Therefore any point $y \in
\Lambda_\varepsilon$ has exactly $d$ $f_\varepsilon$-preimages in $U$, denoted by
$y_1, \ldots, y_d$. Any of these $f_\varepsilon$-preimages from $U$ has
also an $f_\varepsilon$-preimage in $U$ since $\bar U \subset f_\varepsilon(U)$,
etc. Thus $y_i \in \Lambda_\varepsilon = \mathop{\cap}\limits_{n \in
\mathbb Z} f_\varepsilon^n(U), i = 1, \ldots, d$; \ hence any point $y \in
\Lambda_\varepsilon$ has exactly $d$ $f_\varepsilon$-preimages belonging to the
repellor $\Lambda_\varepsilon$.
\end{proof}
Let us now take the
hyperbolic toral endomorphism $f_A$ from Example 2, and the
product $f(z, w) = (z^k, f_A(w)), (z, w) \in \mathbb P^1\mathbb C
\times \mathbb T^2$, for some fixed $k \ge 2$. \ And consider a
$\mathcal{C}^1$-\textbf{perturbation} $f_\varepsilon$ of $f$ on $\mathbb
P^1 \mathbb C \times \mathbb T^2$. \ \ Since $f$ is c-hyperbolic
on its connected repellor $\Lambda:= S^1 \times \mathbb T^2$, it
follows from Proposition \ref{stability} that the perturbation
$f_\varepsilon$ also has a connected folded repellor $\Lambda_\varepsilon$, on
which it is c-hyperbolic. Also it follows from above that $f_\varepsilon$
is constant-to-1 on $\Lambda_\varepsilon$, namely it is
$(k \cdot |\text{det}(A)|)$-to-1. The stable dimension $\delta^s(f_\varepsilon)$
of $f_\varepsilon$ on $\Lambda_\varepsilon$ is equal to 1 in this case. We can
form the stable potential of $f_\varepsilon$, namely $\Phi^s(f_\varepsilon)(z, w)
:= \log |D(f_\varepsilon)_s|(z, w), (z, w) \in \Lambda_\varepsilon$, and the
equilibrium measure $\mu_s(f_\varepsilon)$ of $\delta^s(f_\varepsilon) \cdot
\Phi^s(f_\varepsilon)$, like in Theorem \ref{main}. Since the basic set
$\Lambda_\varepsilon$ is a repellor, we obtain from Corollary
\ref{absolute} that the stable conditional measures of
$\mu_s(f_\varepsilon)$ are absolutely continuous on the local stable
manifolds of $f_\varepsilon$ (which in general are non-linear
submanifolds).
One actual example can be constructed by the above procedure, if
we consider first the linear toral endomorphism $f_A(w) =
(3w_1+2w_2, 2w_1+2w_2), w = (w_1, w_2) \in \mathbb R^2/\mathbb
Z^2$. The associated matrix $A$ has one eigenvalue of absolute
value less than 1 and the other eigenvalue larger than 1, hence
$f_A$ is hyperbolic on $\mathbb T^2$. And as above we can take the
product $f(z, w) = (z^k, f_A(w))$ for some $k \ge 2$. \ Then we
consider the perturbation endomorphism: $$f_\varepsilon(z, w) := (z^k,
3w_1+2w_2+\varepsilon \sin(2\pi(w_1+5w_2)), 2w_1+2w_2+\varepsilon \cos(2\pi w_2)
+\varepsilon \sin^2(\pi(w_1-2w_2))),$$ defined for $z \in \mathbb P^1
\mathbb C, w \in \mathbb T^2$.
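For this particular matrix the hyperbolicity can be checked directly: the characteristic polynomial of $A$ is $\lambda^2 - 5\lambda + 2$, with roots $\lambda_{\pm} = \frac{5 \pm \sqrt{17}}{2}$, so $\lambda_+ \approx 4.56 > 1$ and $0 < \lambda_- \approx 0.44 < 1$; moreover $|\text{det}(A)| = 2$.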
We see that $f_\varepsilon$ is
well defined as an endomorphism on $\mathbb P^1 \mathbb C \times
\mathbb T^2$ and that it has a repellor $\Lambda_\varepsilon$ close to
$S^1 \times \mathbb T^2$, given by Proposition \ref{stability};
namely there exists a neighbourhood $U$ of $S^1 \times \mathbb
T^2$ so that $$\Lambda_\varepsilon = \mathop{\cap}\limits_{n \in \mathbb
Z} f_\varepsilon^n(U)$$
Then $f_\varepsilon$ is c-hyperbolic on $\Lambda_\varepsilon$ (see
Definition \ref{stable}) and it is $2k$-to-1 on $\Lambda_\varepsilon$.
The stable potential $\Phi^s(f_\varepsilon)$ is not necessarily constant
in this case. We obtain as before that the stable conditional
measures of $\mu_s(f_\varepsilon)$ are absolutely continuous, and that the
stable pointwise dimension of $\mu_s(f_\varepsilon)$ is equal
to 1 on $\mu_s(f_\varepsilon)$-a.a local stable manifolds over $\Lambda_\varepsilon$.
$
\square$
\textbf{Acknowledgements:} Partial support
for this work was provided by PN II Project ID-1191.
\textbf{Email:} Eugen.Mihailescu@imar.ro
Institute of Mathematics of the Romanian Academy, P. O. Box 1-764,
RO 014700, Bucharest, Romania.
Webpage: www.imar.ro/$\sim$mihailes
\end{document}
\begin{document}
\mainmatter
\title{Edge Constrained Eulerian Extensions}
\titlerunning{Edge Constrained Eulerian Extensions}
\author{Ghurumuruhan Ganesan \thanks{Corresponding Author}}
\authorrunning{G. Ganesan}
\institute{IISER Bhopal,\\\email{[email protected]}}
\maketitle
\begin{abstract}
In this paper we study Eulerian extensions with edge constraints and use the probabilistic method to establish sufficient conditions for a given connected graph to be a subgraph of an Eulerian graph containing~\(m\) edges, for a given number~\(m.\)
\keywords{Eulerian Extensions, Edge Constraint, Probabilistic Method}
\end{abstract}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}
\section{Introduction} \label{intro}
In the Eulerian extension problem, a given graph is to be converted into an Eulerian graph by addition of as few edges as possible and such problems have applications in routing and scheduling (Dorn et al. (2013)). Boesch et al. (1977) studied conditions under which a graph~\(G\) can be extended to an Eulerian graph and later Lesniak and Oellermann (1986) presented a detailed survey on subgraphs and supergraphs of Eulerian graphs and multigraphs. For applications of Eulerian extensions to scheduling and parametric aspects, we refer to H\"ohn et al. (2012) and Fomin and Golovach (2012), respectively.
In this paper, we construct Eulerian extensions of graphs with a predetermined number of edges. Specifically, given a graph~\(G\) with maximum degree~\(\Delta\) and~\(b\) edges, and given an integer~\(m > b,\) we use the asymmetric probabilistic method to derive sufficient conditions for the existence of an Eulerian extension of~\(G\) with~\(m\) edges.
The paper is organized as follows: In Section~\ref{sec_eul_ext}, we state and prove our main result regarding Eulerian extensions with edge constraints.
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}
\section{Edge Constrained Eulerian Extensions}\label{sec_eul_ext}
Let~\(G = (V,E)\) be a graph with vertex set~\(V\) and edge set~\(E.\) The vertices~\(u\) and~\(v\) are said to be adjacent in~\(G\) if the edge~\((u,v)\) with endvertices~\(u\) and~\(v\) is present in~\(E.\) We define~\(d_G(v)\) to be the degree of vertex~\(v,\) i.e., the number of vertices adjacent to~\(v\)
in~\(G.\)
A sequence of vertices~\({\cal W} := (u_1,u_2,\ldots,u_t)\) is said to be a \emph{walk} if~\(u_i\) is adjacent to~\(u_{i+1}\) for each~\(1 \leq i \leq t-1.\) If in addition the vertex~\(u_t\) is also adjacent to~\(u_1,\) then~\({\cal W}\) is said to be a \emph{circuit}. We say that~\({\cal W}\) is an Eulerian circuit if each edge of the graph~\(G\) occurs exactly once in~\({\cal W}.\) The graph~\(G\) is said to be an \emph{Eulerian} graph if~\(G\) contains an Eulerian circuit.
Let~\(G\) be any graph. We say that a graph~\(H\) is an \emph{Eulerian extension} of~\(G\) if~\(G\) and~\(H\) share the same vertex set,~\(G\) is a subgraph of~\(H\) and~\(H\) is Eulerian.
\begin{definition}\label{def_one}
For an integer~\(m \geq 1,\) we say that a graph~\(G\) is~\(m\)-\emph{Eulerian extendable} if there exists an Eulerian extension~\(H\) of~\(G\) containing exactly~\(m\) edges.
\end{definition}
We have the following result regarding~\(m\)-Eulerian extendability. Throughout, constants do not depend on~\(n.\)
\begin{theorem}\label{thm_comp}
For every pair of constants~\(0 < \alpha,\beta < 1\) satisfying~\(\beta + 40\alpha^2 < \frac{1}{2}\) strictly, there exists a constant~\(N = N(\alpha,\beta) \geq 1\) such that the following holds for all~\(n \geq N\): Let~\(m\) be any integer satisfying
\begin{equation}\label{m_range2}
2n \leq m \leq \alpha \cdot n^{\frac{3}{2}}
\end{equation}
and let~\(G \subset K_n \) be any connected graph containing~\(n\) vertices,~\(b\) edges and a maximum vertex degree~\(\Delta.\) If
\begin{equation}\label{m_cond2}
\Delta \leq \beta \cdot n \text{ and }b \leq m-n,
\end{equation}
then~\(G\) is~\(m\)-Eulerian extendable.
\end{theorem}
To see the role of the bound~\(b \leq m-n,\) we use the fact that a graph~\(H\) is Eulerian if and only if~\(H\) is connected and each vertex of~\(H\) has even degree (Theorem 1.2.26, pp. 27, West (2001)). Therefore to obtain an Eulerian extension of~\(G,\) we only need to convert all odd degree vertices into even degree vertices.
Suppose that~\(n\) is even and all the vertices in~\(G\) have an odd degree. Because the degree of each vertex is at most~\(\frac{n}{2}-1\) (see~(\ref{m_cond2})) the sum of neighbourhood sizes of~\(2i-1\) and~\(2i\) is at most~\(n-2.\) Therefore for each~\(1 \leq i \leq \frac{n}{2},\) there exists a vertex~\(w_i\) neither adjacent to~\(2i-1\) nor adjacent to~\(2i\) in the graph~\(G.\) Adding the~\(n\) edges~\(\{(w_i,2i-1),(w_i,2i)\}_{1 \leq i \leq \frac{n}{2}}\) gives us an Eulerian extension of~\(G.\)
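The parity-fixing step just described is straightforward to carry out greedily. The following short Python sketch (an illustration of the argument above, not the probabilistic construction used in the proof below) pairs up the odd degree vertices of a graph on the vertex set~\(\{0,\ldots,n-1\}\) in an arbitrary order and joins each pair to a common non-neighbour by two new edges; such a non-neighbour exists whenever the two degrees sum to at most~\(n-2.\)
\begin{verbatim}
def parity_fix(n, edges):
    """edges: iterable of pairs (u, v) on the vertex set {0, ..., n-1};
    returns a list of edges whose addition makes every degree even."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    odd = [v for v in range(n) if len(adj[v]) % 2 == 1]  # always even in number
    added = []
    for a, b in zip(odd[0::2], odd[1::2]):
        # a common non-neighbour w of a and b exists when deg(a)+deg(b) <= n-2;
        # next() fails (StopIteration) if that degree assumption is violated
        w = next(x for x in range(n)
                 if x not in (a, b) and x not in adj[a] and x not in adj[b])
        for y in (a, b):
            adj[w].add(y)
            adj[y].add(w)
            added.append((w, y))
    return added
\end{verbatim}
Adding the returned edges to a connected graph keeps it connected and makes all degrees even, hence yields an Eulerian extension.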
In our proof of Theorem~\ref{thm_comp} below, we use the asymmetric probabilistic method for higher values of~\(m\) to obtain walks of predetermined lengths between pairs of odd degree vertices and thereby construct the desired extension.
\subsection*{Proof of Theorem~\ref{thm_comp}}
As before, we use the fact that a graph~\(H\) is Eulerian if and only if~\(H\) is connected and each vertex of~\(H\) has even degree. We assume that the vertex set of~\(G\) is~\(V := \{0,1,2,\ldots,n-1\}\) and also let~\({\cal T}\) be the set of all odd degree vertices in~\(G\) so that the number~\(\#{\cal T}\) of odd degree vertices is even. If there are vertices~\(u,v \in {\cal T}\) that are not adjacent to each other in~\(G,\) then we mark the edge~\((u,v)\) and also the endvertices~\(u\) and~\(v.\) We then pick two new non-adjacent vertices~\(x\) and~\(y\) in~\({\cal T} \setminus \{u,v\}\) and repeat the procedure. We continue this process until we reach one of the following two scenarios: Either the number of marked edges is~\(m-b,\) in which case we simply add the marked edges to~\(G\) and get the desired Eulerian extension~\(H.\) Or, we are left with a set of marked edges of cardinality, say~\(l,\) and a clique~\({\cal C} := \{u_1,\ldots,u_{2z}\} \subset {\cal T}\) containing~\(2z\) unmarked vertices.
Let~\(G_0\) be the graph obtained by adding all the~\(l \leq \frac{n}{2}\) marked edges to~\(G.\) If~\(\Delta_0\) and~\(b_0\) denote the maximum vertex degree and the number of edges in~\(G_0,\) respectively, then~
\begin{equation}\label{del_not}
\Delta_0 \leq \Delta+1 \text{ and }b_0 =b+l \leq b + n.
\end{equation}
We now pair the vertices in~\({\cal C}\) as~\(\{u_{2i-1},u_{2i}\}_{1 \leq i \leq z}\) assuming that~\(z \geq 1\) (If not, we simply remove a marked edge~\(e\) from~\(G_0\) and label the endvertices of~\(e\) as~\(u_1\) and~\(u_2\)). We use the probabilistic method to obtain~\(z\) edge-disjoint walks~\(\{{\cal W}_i\}_{1 \leq i \leq z}\) containing no edge of~\(G_0\) such that each walk~\({\cal W}_i\) has~\(w\) edges and~\(u_{2i-1}\) and~\(u_{2i}\) as endvertices, where~\(w\) satisfies
\begin{equation}\label{w_def}
b_0 + z\cdot w = m.
\end{equation}
Adding the walks~\(\{{\cal W}_i\}_{1 \leq i \leq z}\) to~\(G_0\) would then give us the desired~\(m\)-Eulerian extension.
In~(\ref{w_def}) we have assumed for simplicity that~\(w = \frac{m-b_0}{z}\) is an integer. If not, we write~\(m-b_0 = z \cdot w + r \) where~\(0 \leq r \leq w-1\) and construct the~\(z-1\) walks~\({\cal W}_i, 1 \leq i \leq z-1\) each of length~\(w\) edges and the last walk~\({\cal W}_z\) of length~\(w+r \leq 2w.\) Again adding these walks to~\(G_0\) would give us the desired Eulerian extension with~\(m\) edges.
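For instance, if (hypothetically)~\(m - b_0 = 17\) and~\(z = 5,\) then~\(w = 3\) and~\(r = 2:\) we construct four walks with~\(3\) edges each and a last walk with~\(3+2 = 5 \leq 2w\) edges, adding~\(17\) edges in total.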
For future use we remark that the length~\(w\) of each walk added in the above process is bounded above by
\begin{equation}\label{w_up}
w = \frac{m-b_0}{z} \leq m \leq \alpha \cdot n^{\frac{3}{2}}.
\end{equation}
We begin with the pair of vertices~\(u_1\) and~\(u_2.\) Let~\(\{X_i\}_{1 \leq i \leq w}\) be independent and identically distributed (i.i.d.) random variables uniformly distributed in the set~\(\{0,1,\ldots,n-1\}.\) Letting~\({\cal S} := (u_1,X_1,\ldots,X_{w},u_2),\) we would like to convert the sequence~\({\cal S}\) into a walk~\({\cal W}_1\) with endvertices~\(u_1\) and~\(u_2\) and containing no edge of~\(G_0.\) The construction of~\({\cal W}_1\) is split into two parts: In the first part, we collect the preliminary relevant properties of~\({\cal S}\) and in the second part, we obtain the walk~\({\cal W}_1.\) \\\\
\emph{\underline{Preliminary definitions and estimates}}: An entry in~\({\cal S}\) is defined to be a vertex and we define~\((u_1,X_1),(X_w,u_2)\) and~\(\{(X_i,X_{i+1})\}_{1 \leq i \leq w-1}\) to be the edges of~\({\cal S}.\) The neighbour set of a vertex~\(v\) in~\({\cal S}\) is the set of vertices~\(u\) such that either~\((v,u)\) or~\((u,v)\) appears as an edge of~\({\cal S}.\) The neighbour set of~\(v\) in the multigraph~\(G_0 \cup {\cal S}\) is the union of the neighbour set of~\(v\) in the graph~\(G_0\) and the neighbour set of~\(v\) in~\({\cal S}.\) The degree of a vertex~\(v\) in~\(G_0 \cup {\cal S}\) is defined to be the sum of the degree of~\(v\) in~\(G_0\) and the degree of~\(v\) in~\({\cal S}.\)
The three main ingredients used in the construction of the walk~\({\cal W}_1\) are:\\
\((1)\) The degree of a vertex in the multigraph~\(G_0 \cup {\cal S},\)\\
\((2)\) the number of ``bad" vertices in~\(G_0 \cup {\cal S}\) and\\
\((3)\) the number of ``bad" edges in~\({\cal S}.\)\\
Below, we define and estimate each of the three quantities in that order.
We first estimate the degree of each vertex in the multigraph~\(G_0 \cup {\cal S}.\) For any~\(0 \leq v \leq n-1\) and any~\(1 \leq i \leq w,\) let~\(I_i = \ind(X_i = v)\) be the indicator function of the event that~\(X_i =v.\) We have~\(\mathbb{P}(I_i=1) = \frac{1}{n}\) and so if~\(D_v = \sum_{i=1}^{w} I_i\) denotes the number of times the entry~\(v\) appears in the sequence~\((X_1,\ldots,X_{w}),\) then~\(\mathbb{E}D_v = \frac{w}{n}\) and so by the standard deviation estimate~(\ref{conc_est_f}) in Appendix, we have
\begin{equation}\label{dv_est}
\mathbb{P}\left(D_v \geq \frac{2w}{n} \right)\leq 2\exp\left(-\frac{w}{16n} \right).
\end{equation}
If~\(\frac{w}{n} \geq 100 \log{n}\) then we get from~(\ref{dv_est}) that~\(\mathbb{P}(D_v \geq \frac{2w}{n}) \leq \frac{1}{n^2}.\)
Otherwise, we use the Chernoff bound directly to get that
\begin{equation}\label{dv_est_p}
\mathbb{P}\left(D_v \geq 100\log{n} \right)\leq \frac{1}{n^2}.
\end{equation}
Therefore setting~\(a_n := \max\left(\frac{2w}{n},100\log{n}\right),\) we get that
\begin{equation}\label{dv_est_pp}
\mathbb{P}\left(D_v \geq a_n \right)\leq \frac{1}{n^2}.
\end{equation}
If the event
\begin{equation}\label{e_deg_def}
E_{deg} := \bigcap_{0 \leq v \leq n-1} \left\{D_v \leq a_n\right\}
\end{equation}
occurs, then in~\(G_0 \cup {\cal S}\) each vertex has degree at most~\(\Delta_0+1+a_n,\) with the extra term~\(1\) to account for the fact that vertices~\(X_1\) and~\(X_w\) are also adjacent to~\(u_1\) and~\(u_2,\) respectively. By the union bound and~(\ref{dv_est_pp}) we therefore have
\begin{equation}\label{first_cond}
\mathbb{P}(E_{deg}) \geq 1-\frac{1}{n}.
\end{equation}
The next step is to estimate the number of ``bad" vertices in~\(G_0 \cup {\cal S}.\) Let~\(X_0 := u_1, X_{w+1} := u_2\) and for~\(0 \leq i \leq w-1,\) say that vertex~\(X_i\) is \emph{bad} if~\(X_i = X_{i+1}\) or~\(X_i = X_{i+2}.\) For simplicity define~\(X_w\) to be bad always. If~\(J_i\) is the indicator function of the event that vertex~\(X_i\) is bad, then for~\(0 \leq i \leq w-1,\) we have that
\begin{equation}\label{z_est}
\frac{1}{n} \leq \mathbb{P}(J_i=1) \leq \frac{2}{n}.
\end{equation}
The term~\(N_{v,bad} := \sum_{i=0}^{w-1}J_i+1\) denotes the total number of bad vertices in the sequence~\({\cal S}.\) To estimate~\(N_{v,bad}\) we split~\(N_{v,bad}-1 = J(A) + J(B) + J(C),\) where~\[J(A) = J_1 + J_4 + \ldots, J(B) = J_2 + J_5 + \ldots \text{ and } J(C) = J_3 + J_6 + \ldots\] so that each~\(J(u), u \in \{A,B,C\}\) is a sum of i.i.d.\ random variables (the indicators within a single group depend on pairwise disjoint triples~\((X_i,X_{i+1},X_{i+2})\) and are therefore independent).
The term~\(J(A)\) contains at least~\(\frac{w}{3}-1\) and at most~\(\frac{w}{3}\) random variables. As in the proof of~(\ref{dv_est_pp}), we use~(\ref{z_est}) and the standard deviation estimate~(\ref{conc_est_f}) in Appendix to obtain that
\[\mathbb{P}\left(J(A) \geq \frac{a_n}{3} \right) \leq \frac{1}{n^2}\] for all~\(n\) large. A similar estimate holds for~\(J(B)\) and~\(J(C)\) and so combining these estimates and using the union bound, we get that
\begin{equation}\label{second_cond}
\mathbb{P}\left(E_{v,bad}\right) \geq 1-\frac{3}{n^2}
\end{equation}
where~\(E_{v,bad} := \{N_{v,bad}\leq a_n +1\}\) denotes the event that the number of bad vertices in~\({\cal S}\) is at most~\(a_n+1.\)
The final estimate involves counting the number of bad edges in the sequence~\({\cal S}.\) For~\(0 \leq i \leq w\) say that~\((X_{i},X_{i+1})\) is a \emph{bad edge} if one of the following two conditions hold:\\
\((d1)\) Either~\(\{X_{i},X_{i+1}\}\) is an edge of~\(G_0\) or\\
\((d2)\) There exists~\(i+2 \leq j \leq w\) such that~\(\{X_{i},X_{i+1}\} = \{X_{j},X_{j+1}\}.\)\\
To estimate the probability of occurrence of~\((d1),\) let~\(e\) be an edge of~\(G_0\) with endvertices~\(u\) and~\(v.\) We have that
\[\mathbb{P}\left(\{X_{i},X_{i+1}\} = \{u,v\}\right) \leq \frac{2}{n^2}.\] Similarly for any~\(i+2 \leq j \leq w,\) the possibility~\((d2)\) also occurs with probability at most~\(\frac{2}{n^2}.\) Therefore if~\(L_i\) is the indicator function of the event that~\((X_i,X_{i+1})\) is a bad edge, we have that
\begin{equation}\label{ji_est}
\mathbb{P}(L_{i}=1) \leq \sum_{l=1}^{b_0} \frac{2}{n^2} + \sum_{j=i+2}^{w}\frac{2}{n^2} \leq \frac{2(b_0+w)}{n^2}.
\end{equation}
If~\(N_{e,bad} := \sum_{i=0}^{w} L_i\) denotes the total number of bad edges in~\({\cal S},\) then from~(\ref{ji_est}) and the fact that~\(L_0 \leq 1\) we have
\begin{equation}\label{cn_def}
\mathbb{E}N_{e,bad} \leq 1+\frac{2(b_0+w)w}{n^2} =:c_n.
\end{equation}
Letting~\(E_{e,bad} := \{N_{e,bad} \leq K \cdot c_n\}\) denote the event that the number of bad edges in~\({\cal S}\) is at most~\(K \cdot c_n,\)
for some large integer constant~\(K \geq 1\) to be determined later, we get from Markov inequality that
\begin{equation}\label{third_cond}
\mathbb{P}\left(E_{e,bad}\right) \geq 1-\frac{1}{K},
\end{equation}
If~\(E_{valid}\) denotes the event that the first and last edges~\((X_0,X_1)\) and~\((X_{w},X_{w+1})\) are valid edges not in~\(G_0,\) then using the fact that the degree of any vertex in~\(G_0\) is at most~\(\frac{n}{2}\) (see~(\ref{del_not}) and~(\ref{m_cond2}) in the statement of the Theorem) we get that
\begin{equation}\label{valid_cond}
\mathbb{P}(E_{valid}) \geq \left(\frac{1}{2}-\frac{1}{n}\right)^2 .
\end{equation}
Defining the joint event
\[E_{joint}:= E_{valid} \cap E_{deg} \cap E_{v,bad} \cap E_{e,bad}\]
and using
\begin{eqnarray}
\mathbb{P}\left(A \bigcap \bigcap_{i=1}^{l} B_i\right) &\geq& \mathbb{P}(A) - \mathbb{P}\left(\bigcup_{i=1}^{l} B_i^c\right) \nonumber\\
&\geq& \mathbb{P}(A) - \sum_{i=1}^{l}\mathbb{P}\left(B_i^c\right), \label{asym_prob}
\end{eqnarray}
with~\(A = E_{valid}\) we get from~(\ref{first_cond}),~(\ref{second_cond}),~(\ref{third_cond}) and~(\ref{valid_cond}) that
\begin{equation}\label{a_joint_est}
\mathbb{P}(E_{joint}) \geq \left(\frac{1}{2}-\frac{1}{n}\right)^2-\frac{1}{K}- \frac{1}{n} -\frac{3}{n^2} \geq \frac{1}{21}
\end{equation}
for all~\(n\) large, provided the constant~\(K = 5,\) which we fix henceforth. This completes the preliminary estimates used in the construction of the walk~\({\cal W}_1.\)\\\\
\emph{\underline{Construction of the walk~\({\cal W}_1\)}}: Assuming that the event~\(E_{joint}\) occurs, we now convert~\({\cal S}_0 := {\cal S}\) into a walk~\({\cal W}_1.\) We begin by ``correcting" all bad vertices. Let~\(X_{i_1},X_{i_2},\ldots,X_{i_t}, i_1 < i_2 < \ldots <i_t\) be the set of all bad vertices. Thus for example either~\(X_{i_1} = X_{i_1+1}\) or~\(X_{i_1} = X_{i_1+2}.\) Because the event~\(E_{deg}\) occurs, we get from the discussion following~(\ref{e_deg_def}) that the degree of each vertex in~\(G_0 \cup {\cal S}_0\) is at most~\(\Delta_0+a_n +1.\) From~(\ref{del_not}) and the first condition in~(\ref{m_cond2}) we get that
\begin{equation}\label{del_not_est}
\Delta_0 \leq \Delta+1 \leq \beta \cdot n+1
\end{equation}
and from the definition of~\(a_n\) prior to~(\ref{dv_est_pp}) and the upper bound~\(w \leq n^{\frac{3}{2}}\) in~(\ref{w_up}), we get that
\begin{equation}\label{an_est}
a_n = \max\left(100\log{n}, \frac{2w}{n}\right)\leq 100\log{n} + \frac{2w}{n} \leq 100 \log{n} + 2\sqrt{n} \leq 3\sqrt{n}
\end{equation}
for all~\(n\) large. Consequently, using~\(\beta< \frac{1}{2}\) strictly (see statement of Theorem~\ref{thm_comp}),
\begin{equation}
\Delta_0+a_n +1 \leq \beta \cdot n + 1 + 3\sqrt{n} \leq \frac{n}{2}-5\label{del_a}
\end{equation}
for all~\(n\) large. From~(\ref{del_a}), we therefore get that there exists a vertex~\(v_1\) that is \emph{not} a neighbour of~\(X_{i_1}\) in~\(G_0 \cup {\cal S}.\) Similarly, the total number of neighbours of~\(v_1\) and~\(X_{i_1+3}\) in~\(G_0 \cup {\cal S}\) is at most
\begin{equation}\label{deg_tot}
2\Delta_0 + 2a_n+2 \leq 2\beta \cdot n + 2 + 6\sqrt{n} < n-10
\end{equation}
for all~\(n\) large and so there exists a vertex~\(v_2 \neq X_{i_1}\) that is not a neighbour of~\(v_1\) and also not a neighbour of~\(X_{i_1+3}\) in~\(G_0 \cup {\cal S}.\)
We now set~\(X^{(1)}_{i_1+1} = v_1\) and~\(X^{(1)}_{i_1+2} = v_2\) and~\(X^{(1)}_{j} = X_j\) for~\(j \neq i_1+1,i_1+2\) and call the resulting sequence~\({\cal S}_1 := (X^{(1)}_1,\ldots,X^{(1)}_{w}).\) By construction the degree of each vertex in the multigraph~\(G_0 \cup {\cal S}_1\) is at most~\(\Delta_0+a_n+1+2\) and there are at most~\(t-1\) bad vertices in~\({\cal S}_1.\) We now pick the bad vertex with the least index in~\({\cal S}_1\) and repeat the above procedure with~\({\cal S}_1\) to get a sequence~\({\cal S}_2\) containing at most~\(t-2\) bad vertices.
After~\(k \leq t\) iterations of the above procedure, the degree of each vertex in the multigraph~\(G_0 \cup {\cal S}_k\) would be at most
\begin{equation}\label{st_def}
\Delta_0+a_n + 1+2k \leq \Delta_0+a_n +1+2t \leq \Delta_0+3a_n + 3
\end{equation}
because the event~\(E_{joint} \subseteq E_{v,bad}\) occurs and so~\(t \leq a_n+1.\) Again using~(\ref{del_not_est}) and~(\ref{an_est}) and arguing as in~(\ref{deg_tot}), we get that the sum of the degrees of any two vertices in~\(G_0 \cup {\cal S}_t\) is at most~\(n-10\) for all~\(n\) large. Thus the above procedure indeed proceeds for~\(t\) iterations and by construction, the sequence~\({\cal S}_t\) obtained at the end, has no bad vertices.
We now perform an analogous procedure for correcting all bad edges in~\({\cal S}_t.\) For example if~\((X_{l},X_{l+1})\) is a bad edge in~\({\cal S}_t,\) then following an analogous argument as before we pick a vertex~\(Y_{l+1}\) that is neither adjacent to~\(X_l\) nor adjacent to~\(X_{l+2}\) in the sequence~\({\cal S}_t.\) We replace~\(X_{l+1}\) with~\(Y_{l+1}\) to get a new sequence~\({\cal S}_{t+1}.\) In the union~\(G_0 \cup {\cal S}_{t+1}\) the degree of each vertex is at most~\(\Delta_0+3a_n+3+2\) (see~(\ref{st_def})) and the number of bad edges is at most~\(r-1.\) At the end of~\(r \leq K \cdot c_n\) iterations, we obtain a multigraph~\(G_0 \cup {\cal S}_{t+r},\) where the degree of each vertex is at most
\[\Delta_0 + 3a_n + 3+2r \leq \Delta_0 + 3a_n+3+2Kc_n,\] since the event~\(E_{e,bad}\) occurs and therefore~\(N_{e,bad} \leq K \cdot c_n\) (see discussion preceding~(\ref{third_cond})). Substituting the expression for~\(c_n\) from~(\ref{cn_def}) and using the second estimate for~\(a_n \) in~(\ref{an_est}), we get that~\(\Delta_0 + 3a_n + 3+2Kc_n\) is at most
\begin{eqnarray}
&&\Delta_0 + 300\log{n}+ \frac{6w}{n} + 3+2K+ \frac{2K(b_0+w)w}{n^2} \nonumber\\
&&\;\;\;\;\leq\;\;\;\Delta_0 + 301\log{n} + \frac{6w}{n} + \frac{2K(b_0+w)w}{n^2} \label{con_est}
\end{eqnarray}
for all~\(n\) large. Recalling that~\(u_1\) and~\(u_2\) are the endvertices of the starting sequence~\({\cal S}_0,\) we get that the final sequence~\({\cal S}_{t+r}\) contains no bad edge and is therefore the desired walk~\({\cal W}_1\) with endvertices~\(u_1\) and~\(u_2.\) This completes the construction of the walk~\({\cal W}_1.\)\\\\
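The correction procedure is constructive, and its three constraints (edges off~\(G_0,\) distinct endvertices, no repeated edge) are easy to enforce in code. The following Python sketch is a simplified illustration of this step, not the exact replacement rule used above: it repairs the leftmost bad edge of~\({\cal S}\) by replacing its interior endpoint with any vertex that keeps both incident edges admissible, a vertex which exists under the degree bounds of the theorem.
\begin{verbatim}
import random

def sequence_to_walk(G0_adj, u1, u2, w, n):
    """G0_adj: dict mapping each vertex of {0,...,n-1} to its set of
    neighbours in G0.  Returns a walk from u1 to u2 with w+1 edges,
    none of which lies in G0 and none of which is repeated."""
    S = [u1] + [random.randrange(n) for _ in range(w)] + [u2]

    def edge_ok(u, v, used):
        return (u != v and v not in G0_adj.get(u, set())
                and frozenset((u, v)) not in used)

    i = 0                      # the edges of S are (S[i], S[i+1]), i = 0, ..., w
    while i <= w:
        used = {frozenset((S[t], S[t+1])) for t in range(i)}  # accepted prefix
        if edge_ok(S[i], S[i+1], used):
            i += 1
            continue
        j = min(i + 1, w)      # never overwrite the endpoints u1, u2
        S[j] = next(y for y in range(n)
                    if edge_ok(S[j-1], y, used)
                    and edge_ok(y, S[j+1], used | {frozenset((S[j-1], y))}))
        i = j - 1              # re-check the two edges affected by the repair
    return S
\end{verbatim}
Each repair advances the index of the first unverified edge, so the loop performs at most~\(w+1\) repairs; the existence of a valid replacement vertex in the \texttt{next(...)} search is exactly what estimates such as~(\ref{del_a}) and~(\ref{deg_tot}) guarantee.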
\emph{\underline{Rest of the walks}}: We now repeat the above procedure to construct the rest of the walks. We set~\(G_1 := G_0 \cup {\cal W}_1\) and argue as above to obtain a walk~\({\cal W}_2\) with~\(w\) edges present in~\(\overline{G}_1\) and containing~\(u_3\) and~\(u_4\) as endvertices. Adding the walk~\({\cal W}_2\) to~\(G_1\) we get a new graph~\(G_2.\) In effect, to the graph~\(G_1\) containing~\(b_0+w\) edges, we have added~\(w\) edges and by an argument analogous to~(\ref{con_est}), we have increased the degree of a vertex by at most~\[301\log{n} + \frac{6w}{n} + \frac{2K(b_0+2w)w}{n^2},\] in obtaining the graph~\(G_2.\) We recall that (see first paragraph of the proof) there are~\(z\) such walks to be created of which~\(z-1\) have length~\(w\) and the final walk has length at most~\(2w.\) Therefore after~\(z\) iterations, we get a graph~\(G_z\) with~\(m\) edges and whose maximum vertex degree~\(\Delta_z\) is at most
\begin{eqnarray}\label{del_z_est}
\Delta_z &\leq&\Delta_0 + \left(301\log{n} + \frac{6w}{n}\right) \cdot (z-1) + 301\log{n}+\frac{12w}{n} \nonumber\\
&&\;\;\;\;+\;\;2K\sum_{k=1}^{z-1}\frac{(b_0+k\cdot w)w }{n^2} + 2K\frac{(b_0+(z-1)\cdot w)2w }{n^2}.
\end{eqnarray}
By construction~\(G_z\) is an Eulerian graph.
To verify that~\(G_z\) can indeed be obtained, we estimate~\(\Delta_z\) as follows. The term~\(z\) is no more than the size of a maximum clique in the original graph~\(G\) (see discussion prior to~(\ref{del_not})); since~\(G\) contains~\(b \leq m \leq \alpha \cdot n^{\frac{3}{2}}\) edges and a clique on~\(s\) vertices contains~\(s(s-1)/2\) edges, the maximum size of a clique in~\(G\) is at most~\(n^{\frac{3}{4}}\) for all~\(n\) large. Therefore
\begin{equation}\label{z_bound}
z \leq n^{\frac{3}{4}}.
\end{equation}
Also using~(\ref{w_def}) and~(\ref{m_range2}), we get that~\(zw \leq m \leq \alpha \cdot n^{\frac{3}{2}}\) and so
\begin{equation}\label{wz_bound}
\frac{wz}{n} \leq \alpha \cdot \sqrt{n} \leq \sqrt{n}
\end{equation}
Finally from~(\ref{del_not}) we have that~\(b_0 \leq b+n\) and so the second line in~(\ref{del_z_est}) is at most
\begin{eqnarray}
\sum_{k=1}^{z} \frac{(b+n+k\cdot w)w }{n^2} &\leq& \frac{z(b+n+zw)2w}{n^2} \nonumber\\
&\leq& \frac{2m(n+m)}{n^2} \nonumber\\
&\leq& 2\alpha\sqrt{n} + \frac{2m^2}{n^2} \nonumber\\
&\leq& \sqrt{n} + 2\alpha^2 \cdot n \label{term_3}
\end{eqnarray}
where the second inequality in~(\ref{term_3}) follows from the estimate~\(b+zw \leq b_0+zw=m\) (see~(\ref{w_def})), and the third and fourth estimates in~(\ref{term_3}) follow from the bound~\(m \leq \alpha \cdot n^{\frac{3}{2}}\) (see~(\ref{m_range2})) together with~\(\alpha \leq \frac{1}{2}.\)
Plugging~(\ref{term_3}),~(\ref{wz_bound}) and~(\ref{z_bound}) into~(\ref{del_z_est}) we get that
\begin{eqnarray}
\Delta_z &\leq& \Delta_0 + 301n^{\frac{3}{4}} \cdot \log{n} + \sqrt{n}\left(\frac{12}{10} + 2K\right) + 4K\alpha^2 \cdot n \nonumber\\
&\leq& (\beta + 4K\alpha^2) \cdot n + 1 + 301n^{\frac{3}{4}} \cdot \log{n} + \sqrt{n}\left(\frac{12}{10} + 2K\right)\label{tata}
\end{eqnarray}
for all~\(n\) large, where the second inequality in~(\ref{tata}) is obtained by using\\\(\Delta_0 \leq \Delta+1 \leq \beta \cdot n+1\) (see~(\ref{del_not}) and the first condition in~(\ref{m_cond2})). From the statement of Theorem~\ref{thm_comp} and using~\(K = 5,\) we have that
\[\beta + 4K\alpha^2 = \beta + 20 \alpha^2 < \frac{1}{2}\] strictly and so the degree of any vertex in~\(G_z\) is strictly less than~\(\frac{n}{2}\) and also, the sum of degrees of any two vertices in~\(G_z\) is at most~\[(2\beta+40\alpha^2)\cdot n+ 3 + 602 n^{\frac{3}{4}} \cdot \log{n} + \sqrt{n}\left(\frac{24}{10} + 4K\right) < n-10\] for all~\(n\) large. Thus the graph~\(G_z\) can be obtained by the above probabilistic method as in the discussion following~(\ref{del_a}).~\(\qed\)
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
\section*{Appendix}
Throughout we use the following deviation estimate. Let~\(Z_i, 1 \leq i \leq t\) be independent Bernoulli random variables satisfying~\[\mathbb{P}(Z_i = 1) = p_i = 1-\mathbb{P}(Z_i = 0).\] If~\(W_t = \sum_{i=1}^{t} Z_i\) and~\(\mu_t = \mathbb{E}W_t,\) then for any~\(0 < \epsilon < \frac{1}{2}\) we have that
\begin{equation}\label{conc_est_f}
\mathbb{P}\left(\left|W_t-\mu_t\right| \geq \epsilon \mu_t\right) \leq 2\exp\left(-\frac{\epsilon^2}{4}\mu_t\right).
\end{equation}
For a proof of~(\ref{conc_est_f}), we refer to Corollary~\(A.1.14,\) pp.~\(312,\) Alon and Spencer (2008).
\end{document}
\begin{document}
\title{Locally convex hypersurfaces immersed in $\mathbb{H}^n \times \mathbb{R}$}
\begin{abstract}
We prove a theorem of Hadamard-Stoker type: a connected locally convex complete hypersurface immersed in $\Hset^n\times\Rset$ ($n\geq 2$), where $\mathbb{H}^n$ is $n$-dimensional hyperbolic space, is embedded and homeomorphic either to the $n$-sphere or to $\mathbb{R}^n$. In the latter case it is either a vertical graph over a convex domain in $\mathbb{H}^n$ or has what we call a simple end.
\noindent{\bf keywords:} Hadamard-Stoker theorems, convex hypersurface, positive extrinsic curvature, hyperbolic geometry
\noindent{\bf MSC} 53C40, 53C42; 53B25, 52A55
\end{abstract}
\section{Introduction}
Let $\Sigma$ be a complete hypersurface immersed in $\Hset^n\times\Rset$, the product of $n$-dimensional hyperbolic space $\mathbb{H}^n$ with the line, and let $\pi:\Hset^n\times\Rset\to \mathbb{H}^n$ be the projection on the first factor. A point $\theta$ in the sphere at infinity $\Sset^{n-1}_\infty$ of $\mathbb{H}^n$ is a {\bf simple end} of $\Sigma$ if $\theta$ is the unique point of accumulation of $\pi(\Sigma)$ in $\Sset^{n-1}_\infty$ and
every complete totally geodesic hyperplane $Q\subset \mathbb{H}^n$ either has $\theta$ as an accumulation point or else meets $\pi(\Sigma)$ in a compact set. We prove the following.
\begin{maintheo} Let $\Sigma$ be a complete, connected hypersurface immersed in $\mathbb{H}^n\times\mathbb{R}$ with positive definite second fundamental form, $n\geq 2$. Then $\Sigma$ is embedded and homeomorphic either to the $n$-sphere or to Euclidean space $\mathbb{R}^n$. In the latter case, it is either a vertical graph over an open convex set or has a simple end. When $n=1$ the theorem holds under the additional hypothesis that $\Sigma$ is embedded.
\label{mainthm} \end{maintheo}
The case $n=2$ was proven by J. Espinar, J. G\'alvez, and H. Rosenberg \cite{EGR}, so our paper is an extension of their work to higher dimensions. We prove the theorem by induction on $n$, beginning with the easy case $n=1$, when $\Sigma$ is assumed to be embedded. Our paper reproves the case $n=2$; we use many ideas of \cite{EGR}, but our proof does not depend on their result.
\noindent{\bf Historical Background.}
In 1897, J. Hadamard \cite{H} proved a theorem about compact, locally strictly
convex surfaces in the Euclidean space $\mathbb{R}^3$, showing that
such surfaces are embedded and homeomorphic to the sphere. Since
then many generalizations have adapted the assumptions about
the curvature and considered new spaces in which these surfaces
could be immersed in order to obtain analogous results.
Important contributions were made by J. Stoker \cite{S},
S.S. Chern and R. K. Lashof \cite{CL}, R. Sacksteder \cite{Sa}, M. do Carmo and E.
Lima \cite{ME}, M. do Carmo and F. Warner \cite{MW}, R.J.Currier \cite{C}, S. Alexander
\cite{A}, and I. Tribuzy \cite{I}.
In 2009, J. Espinar, J. Gálvez and H. Rosenberg \cite{EGR} extended the Hada\-mard-Stoker theorem for surfaces immersed in $\mathbb{H}^2
\times \mathbb{R}$ assuming that such a surface is connected and
complete with all principal curvatures positive. They proved that such a surface is properly embedded and homeomorphic to the
sphere if it is closed or to the plane $\mathbb{R}^2$ if not. In the second case, $\Sigma$ is a graph over a convex domain in $\mathbb{H}^2 \times
\{0\} $ or $\Sigma$ has a simple end.
J. Espinar and H. Rosenberg \cite{ER} proved in 2010 that if $ \Sigma$ is a locally strictly
convex, connected hypersurface properly immersed in $M^n \times \mathbb{R}$, where $M^n$ is a
\emph{1/4-pinched} manifold, then $\Sigma$ is properly embedded and
homeomorphic to $\mathbb{S}^n$ or $\mathbb{R}^n$. In the second case,
$\Sigma$ has a \emph{top} or \emph{bottom} end, where $\Sigma$
has a \emph{top} (respectively, \emph{bottom}) end $E$ if for any divergent sequence $\{p_{n}\} \subset E$ the height function $h: \Sigma \rightarrow \mathbb{R}$ satisfies $h(p_n) \to + \infty$ (respectively $-\infty$).
Also in 2010, J. Espinar and the first author \cite{EO} showed a
Hadamard-Stoker-Currier type theorem for surfaces immersed in a
$3$-dimen\-sional Riemannian
manifold $\mathcal{M}(\kappa, \tau)$ that fibers over a $2$-dimensional Riemannian
manifold $\mathbb{M}^2$ so that the fibers are the trajectories of a unit Killing vector
field. More precisely, $\mathbb{M}^2$ is required to be a strict
Hadamard surface, i.e., $ \mathbb{M}^2 $ has Gaussian curvature $\kappa$ less than a negative constant, and $\tau$ is the curvature of the bundle.
\noindent{\bf Open Questions.}
There are other spaces to be studied, for example: $\mathbb{S}^n
\times \mathbb{R}$, $\mathbb{M} \times \mathbb{R}$, where
$\mathbb{M}$ is a Hadamard manifold of dimension $n$, Heisenberg
spaces of dimension $n$, among others. It is possible that results of Hadamard-Stoker type are valid in these spaces.
Preliminary results and the case $n=1$ are treated in Section \ref{preliminary}. The proof of the Main Theorem in the case of a vertical graph is given in Section \ref{vgraph} and the remaining cases of the theorem are proven in Section \ref{othercases}. Examples of hypersurfaces with simple ends are given in Section \ref{example}.
This paper is based on the doctoral thesis \cite{IO} of the first author at
the Pontifical Catholic University of Rio de Janeiro under the
direction of the second author.
\section{Preliminary results and the case $n=1$}\label{preliminary}
Let $\mathbb{H}^k$ be the $k$-dimensional hyperbolic space, with a Riemannian metric of constant curvature $-1$ (See \cite{M}, \cite{ST} for details of Riemannian geometry). In the Poincar\'e disk model on the open unit ball $\mathbb{B}^k=\{(x_1,\dots,x_k)\in \mathbb{R}^k\ |\ |x|<1\}$, where $|x|$ is the Euclidean norm given by $|x|^2=\sum_{i=1}^k x_i^2$,
the Riemannian metric is defined to be
$$ds^2= \frac{4\sum_{i=1}^k dx_i^2}{(1-|x|^2)^2}.$$
In this model the sphere at infinity, $\Sset^{k-1}_\infty$, is
the unit sphere in $\mathbb{R}^k$, so that $\mathbb{H}^k\cup \Sset^{k-1}_\infty$ is a compact $k$-dimensional ball.
If $Q$ is a totally geodesic hyperplane in $\mathbb{H}^k$, we say that $P=Q\times \mathbb{R}$, which is also totally geodesic, is a {\bf vertical hyperplane} in $\Hset^k\times\Rset$. If $\gamma$ is a complete geodesic in $\Hset^k\times\{0\}$ parametrized by $t\in \mathbb{R}$, we let
$P^{k}_{\gamma}(t)$ be the vertical hyperplane orthogonal to $\gamma$ at the point $\gamma(t)$, and then these vertical hyperplanes $P^{k}_{\gamma}(t)$ form a foliation of $\Hset^k\times\Rset$. Let $\pi: \Hset^k\times\Rset\to \mathbb{H}^k$ denote the projection on the first factor.
The following proposition will be used in the inductive step of the proof of the Main Theorem when $M=\Hset^k\times\Rset$, but we state it in a more general context.
\begin{prop} Let $M$ be a $(k+1)-$dimensional Riemannian manifold and $\Sigma^k$ a hypersurface immersed in $M$ with strictly positive second fundamental form, $k\geq 2$, and let $P$ be a totally geodesic hypersurface in $M$. If $\Sigma^k$ and $P$ intersect
transversally, then every connected component $\Sigma^{k-1}$ of
$\Sigma^k \cap P$ is a $(k-1)-$dimensional hypersurface in $P$ with second fundamental form $II(\Sigma^{k-1})>0$.
\end{prop}
\noindent{\bf Proof.}
Let $\gamma(t)$ be a curve in $\Sigma^{k-1}$, where $t$ is
parametrized by arc length. As $P$ is totally geodesic,
we have $\overline{\nabla}_{\gamma'}(\gamma')=
\nabla_{\gamma'}^{P}(\gamma')$, where $\nabla^{P}$ and
$\overline{\nabla}$ are the connections in $P$ and
$M$ respectively. Let $N_{\Sigma^{k-1},P}$ and $N_{\Sigma^{k}}$ be the unit normal vectors to $\Sigma^{k-1}$ in $P$ and to $\Sigma^{k}$ in $M$, respectively. We want
$\langle \overline{\nabla}_{\gamma'}(\gamma'),
N_{\Sigma^{k-1},P}\rangle$ to be positive. Note that $\langle
\overline{\nabla}_{\gamma'}(\gamma'), N_{\Sigma^{k}} \rangle$ is
positive, because by hypothesis the second fundamental form of
$\Sigma^k$ is strictly positive.
Writing $N_{\Sigma^{k}}(\gamma(t)) = \overline{N}+N^{\bot}$, where
$\overline{N}$ is tangent to $P$ and $N^{\bot}$ is orthogonal to $P$, and taking the inner product with
$\overline{\nabla}_{\gamma'}(\gamma')$, we obtain
\begin{align}
0<\langle \overline{\nabla}_{\gamma'}(\gamma'), N_{\Sigma^{k}} \rangle
& = \langle \overline{\nabla}_{\gamma'}(\gamma'), \overline{N} +
N^{\bot}\rangle \nonumber \\ & = \langle
\overline{\nabla}_{\gamma'}(\gamma') ,\overline{N}\rangle. \nonumber
\end{align}
Note that $\overline{N}$ is orthogonal to $\Sigma^{k-1}$ and tangent to $P$. Hence $\overline{N}$ is a multiple of
${N}_{\Sigma^{k-1},P}$, so we can write
\begin{align}
\langle \nabla_{\gamma'}^{P}(\gamma'),\overline{N}\rangle & = \langle
\nabla_{\gamma'}^{P}(\gamma'),|\overline{N}| N_{\Sigma^{k-1},P}\rangle \nonumber \\ & = |\overline{N}|\langle \nabla_{\gamma'}^{P}(\gamma'),
N_{\Sigma^{k-1},P}\rangle, \nonumber
\end{align}
and therefore $\langle \nabla_{\gamma'}^{P}(\gamma'),
N_{\Sigma^{k-1},P}\rangle > 0$, as claimed. \qed
Next we begin the proof of the Main Theorem. The proof is by induction on the dimension $n$ of $\Sigma^n$. The proof for the case $n=1$ is given in this section, and the proof for $n=k>1$, when the
theorem is supposed true for $n=k-1$, is given in the next two sections.
\noindent{\bf The Proof for $n=1$.} When $n=1$, we assume the additional hypothesis that the curve $\Sigma^1$ is embedded in $\mathbb{H}^1\times \mathbb{R}$, which is isometric to $\mathbb{R}^2$. If $\Sigma^1$ is not a vertical graph over an open interval in $\mathbb{H}^1$, which is one of the possible conclusions of the theorem, then there must be a point
$p_0\in\Sigma^1$ where the tangent line is vertical. If there is a second such point, then $\Sigma^1$ closes up to a circle, and if $p_0$ is the only such point, then both ends of $\pi(\Sigma^1)$
must converge to one of the two points at infinity of $\mathbb{H}^1$, which gives a simple end.
This completes the proof of the theorem for $n=1$. \qed
\section{The case of a vertical graph}\label{vgraph}
In the previous section the Main Theorem was proven for the case $n=1$. We complete the proof in this section and the next one. In both sections we suppose that the theorem holds for $n=k-1\geq 1$ and we assume the hypotheses of the Main Theorem for the case $n=k$ in order to prove the inductive step.
Thus we assume that $\Sigma^k$ is a
complete connected $k$-dimensional manifold immersed in
$\Hset^k\times\Rset$, which has the product Riemannian metric, and we suppose that the second fundamental form of $\Sigma^k$ is positive definite.
For the rest of this section we also assume the following.
\begin{hyp} No vertical hyperplane is tangent to $\Sigma^k$.\label{hyp1}
\end{hyp}
We shall prove the Main Theorem under this additional Hypothesis by showing that in this case $\Sigma^k$ is the graph of a smooth function $f: \Omega\to \mathbb{R}$ where $\Omega$ is an open convex domain in $\mathbb{H}^k$. Note that the Hypothesis implies that every vertical hyperplane $P$ is transverse to $\Sigma^k$.
\begin{lemma} Hypothesis \ref{hyp1} implies that if $P$ is a vertical hyperplane in $\Hset^k\times\Rset$, then each connected component $\Sigma^{k-1}$ of the intersection $\Sigma^k\cap P$ is a graph over a convex domain in $\pi(P)$. \label{verticalgraph}
\end{lemma}
\noindent{\bf Proof.} By Proposition 2.1, $\Sigma^{k-1}$ is a $(k-1)-$dimensional surface with strictly positive second fundamental form in
$P$. Hence we can apply the induction hypothesis, so $\Sigma^{k-1}$ must be
homeomorphic to $\mathbb{S}^{k-1}$ or $\mathbb{R}^{k-1}$, and in the latter case it must have a simple end or be a vertical graph over a convex domain.
Now if $\Sigma^{k-1}$ is homeomorphic to $\mathbb{S}^{k-1}$, then there exists a point of $\Sigma^{k-1}$ with a
$(k-1)$-dimensional vertical tangent plane, which means that $\Sigma^k$ has
a vertical tangent $k$-plane at this point, contradicting Hypothesis \ref{hyp1}, so this case is excluded.
If $\Sigma^{k-1}$ is homeomorphic to $\mathbb{R}^{k-1}$ and has a simple end $\theta$ in $\mathbb{S}^{k-1}_\infty$, the sphere at infinity of $\pi(P)$, let $\beta(t)$ be a complete horizontal geodesic in $\pi(P)\times\{0\}\subset P$ which converges to the point
$(\theta,0)$ as $t\to\infty$. Consider the foliation of $P$ by vertical $(k-1)$-hyperplanes $P^{k-1}_\beta(t)$ orthogonal to $\beta$, where $P^{k-1}_\beta(t)$ meets $\beta$ at $\beta(t)$. If $\bar t$ is the smallest value of $t$ such that $\Sigma^{k-1}\cap P^{k-1}_\beta(t)$ is non-empty, then the vertical hyperplane $P^{k-1}_\beta(\bar t)$
will be tangent to $\Sigma^{k-1}$, so $\Sigma^k$ will also have a
tangent vertical hyperplane, contradicting Hypothesis \ref{hyp1}.
Thus $\Sigma^{k-1}$ must be a vertical graph over a convex domain in $\pi(P)$.
\qed
\begin{lemma}
Let $\{P^{k}_{\gamma}(t)\}$ be the foliation of $\Hset^k\times\Rset$ by vertical hyperplanes orthogonal to a complete geodesic $\gamma$ in $\Hset^k\times\{0\}$ and let $\Sigma^{k-1}(0)$ be a component of
$\Sigma^{k} \cap P_{\gamma}(0)$.
Let $\Sigma^{k-1}(t)\subset \Sigma^{k} \cap P_{\gamma}(t)$ be the continuous variation of $\Sigma^{k-1}(0)$
as $t$ varies. Then $\Sigma^{k-1}(t)$ cannot become disconnected for any $t$.\label{connected}
\end{lemma}
\noindent{\bf Proof.} Note that $\Sigma^{k-1}(t)$ is well defined by transversality.
By Lemma \ref{verticalgraph}, every component of $\Sigma^{k-1}(t)$ is the graph of a function defined on a convex open domain in $\pi(P_{\gamma}(0))$, which is isometric to
$\mathbb{H}^{k-1}$. By transversality the set of $t$ with
$\Sigma^{k-1}(t)$ connected is open.
Suppose, to find a contradiction, that $\Sigma^{k-1}(t)$ is not connected for some $t$, say $t>0$. Then there will be a smallest positive $\bar t$ such that $\Sigma^{k-1}(\bar t)$ is not connected. Take $x_{\bar t}$ and $y_{\bar t}$ to be points in distinct components of $\Sigma^{k-1}(\bar t)$. By transversality there exist $\delta>0$ and continuous curves $t\mapsto x_t$ and $t\mapsto y_t$ defined for $t\in [\bar t-\delta,\bar t]$ with $x_t, y_t\in\Sigma^{k-1}(t)$. For
$t\in [\bar t-\delta,\bar t)$, $\Sigma^{k-1}(t)$ is connected
and thus a graph over a convex open domain in
$\pi(P_{\gamma}(t))$. Let $\alpha_t$ be the geodesic in
$\pi(P_{\gamma}(t))$ joining $\pi(x_t)$ and $\pi(y_t)$ and take $A_t=\pi^{-1}(\alpha_t)\cap \Sigma^{k-1}(t)$, which is a graph over $\alpha_t$ and a curve joining $x_t$ to $y_t$ in $\Sigma^{k-1}(t)$. The limit of $A_t$ as $t$ tends to $\bar t$
is a curve $A_{\bar t}$ in $\Sigma^{k-1}(\bar t)$, which is complete, since $\Sigma^k$ is complete, but $A_{\bar t}$ cannot be connected since it contains
$x_{\bar t}$ and $y_{\bar t}$ which are in different components. It follows that $A_{\bar t}$
must diverge vertically to $-\infty$ or $+\infty$, but since $A_{\bar t}$ has strictly positive curvature that is impossible. \qed
In view of the previous lemma, the union $\cup\Sigma^{k-1}(t)$ is an open and closed set in $\Sigma^k$, so it must be the whole connected set $\Sigma^k$. Consequently $\Sigma^k$ is a vertical graph over a set $\Omega$ in $\mathbb{H}^k$. Since $\Omega$ is diffeomorphic to the $k$-dimensional manifold $\Sigma^k$ under the projection $\pi$, it is open in $\mathbb{H}^k$. Now given any two points $x,y\in\Omega$, let $P$ be a vertical hyperplane containing both $(x,0)$ and $(y,0)$, and apply the argument of Lemma \ref{connected} to see that $\Sigma^k\cap P$ must be connected.
Then $\Sigma^k\cap P$ is a vertical graph over a convex domain $\Omega_P$ in $\pi(P)$. The geodesic from $x$ to $y$ lies in $\Omega_P\subset \Omega$, so $\Omega$ is convex. Now the convex open set $\Omega$ in $\mathbb{H}^k$ is homeomorphic to $\mathbb{R}^k$, so $\Sigma^k$ is also homeomorphic to $\mathbb{R}^k$. This completes the proof of the Main Theorem under Hypothesis \ref{hyp1}.
\section{The proof for the remaining cases}\label{othercases}
In this section we complete the proof of the Main Theorem by proving it in the case that the previous Hypothesis \ref{hyp1} does not hold, i.e., under the following assumption.
\begin{hyp} There is a vertical hyperplane $P_0$ that is tangent to $\Sigma^k$ at a point $p_0\in\Sigma^k$.\label{hyp2}
\end{hyp}
\begin{figure}
\caption{Near $p_0$, $\Sigma^k$ lies on the external side of $P_\gamma(0)$.}
\label{figviz}
\end{figure}
Without loss of generality, we suppose that $p_0\in\Hset^k\times\{0\}$.
In view of the hypothesis that the hypersurface $\Sigma^k$ has strictly positive curvature, there is a neighborhood $\mathcal{U}_0$ of $p_0$ in
$\Sigma^k$ that lies entirely on one side of $P_0$, except at the point $p_0$; we shall call this the external side (See Figure \ref{figviz}).
Let $\gamma$ be the geodesic in $\Hset^k\times\{0\}$ orthogonal to $P_0$ (and therefore also to $\Sigma^k$) at $p_0$, parametrized by arclength, with $\gamma(0)=p_0$, oriented so that $\gamma(0,\infty)$ is on the external side of $P_0$. Let $P_\gamma(t)$ be the vertical $k-$plane orthogonal to $\gamma$ at $\gamma(t)$, and note that these
vertical planes for all $t\in {\mathbb R}$ form a smooth foliation of $\Hset^k\times\Rset$.
Denote by $\Sigma^{k-1}(t)$ the connected component of $\Sigma^k\cap P_\gamma(t)$ that is the continuation of $\Sigma^{k-1}(0)= \{p_0\}\subset \Sigma^k\cap P_\gamma(0)$. We do not exclude the possibility that there may be other
components of $\Sigma^k \cap P_\gamma(t)$, but we only consider
$\Sigma^{k-1}(t)$. Note that for $t>0$ close to $0$,
$\Sigma^{k-1}(t)$ is homeomorphic to the $(k-1)-$sphere
${\mathbb S}^{k-1}$.
Furthermore, if $\Sigma^{k-1}(t)$ is compact and
$\Sigma^k$ is transverse to $P_\gamma(t)$ along $\Sigma^{k-1}(t)$
for all $t$ in an interval $(0,t_0)$, the sets
$\Sigma^{k-1}(t)$ vary continuously and are all diffeomorphic to ${\mathbb S}^{k-1}$ (See \cite{Mi}, Theorem 3.1).
Now there are four cases to be considered.
\noindent{\bf Case 1.} The set $\Sigma^{k-1}(t)$ is compact and
$\Sigma^k$ is transverse to $P_\gamma(t)$ along $\Sigma^{k-1}(t)$
for $0<t<\overline{t}$, but $\Sigma^k$ is not transverse to $P_\gamma(\overline{t})$ at some point $p_1\in\Sigma^{k-1}(\overline{t})$.
As in the case of $p_{0}$, the point $p_1$ has
a neighborhood $\mathcal{U}_1$ in $\Sigma^k$
such that $\mathcal{U}_1 \setminus \{p_{1}\}$ lies entirely on one side of $P_{\gamma}(\overline{t})$. Since $\Sigma^{k-1}(\overline{t})$ is the continuation of $\Sigma^{k-1}(t)$ with $t<\overline{t}$, $\mathcal{U}_1\setminus\{p_1\}$ must lie on the side of $P_\gamma(\overline{t})$ with $t<\overline{t}$. The sets
$\Sigma^{k-1}(t)$ for $0<t< \overline{t}$
are all diffeomorphic to ${\mathbb S}^{k-1}$ and their union with the two points $p_0$ and $p_1$ is homeomorphic to the sphere ${\mathbb S}^{k}$. This union is open and closed in
$\Sigma^k$, so by connectedness it must coincide with
$\Sigma^k$, which is therefore a topological $k-$sphere (See Figure \ref{casob1b}). This completes the proof in Case 1.
\begin{figure}
\caption{$\Sigma^k$ homeomorphic to the $k-$sphere $\mathbb{S}^k$.}
\label{casob1b}
\end{figure}
In the remaining cases we exclude Case 1, so if $\Sigma^{k-1}(t)$ is compact for all $t$ in some interval $(0,t_0)$, then
$\Sigma^k$ is transverse to the planes
$P_\gamma(t)$ at all points in these sets $\Sigma^{k-1}(t)$.
\noindent{\bf Case 2.} The intersection $\Sigma^{k-1}(t)$ is compact and non-empty for all $t>0$.
\begin{figure}
\caption{Projection of $\Sigma^k$ on $\mathbb{H}^k$.}
\end{figure}
If $\Sigma^{k-1}(t) \subset\Sigma^{k}\cap P_{\gamma}(t)$ remains
compact and non-empty for all $t>0$, then the sets
$\Sigma^{k-1}(t)$ for $t>0$ are $(k-1)-$spheres that are embedded in $P_{\gamma}(t)$ by the inductive hypothesis. Hence the union of these sets with the point $p_0$ must be all of the connected hypersurface $\Sigma^k$, which will also be embedded and
homeomorphic to $\mathbb{R}^k$. If we let
$\theta\in\mathbb{S}^{k-1}_{\infty}$ be the limit point of $\gamma(t)$ as $t\to\infty$, then
$\partial_{\infty}\pi(\Sigma^{k})=\{\theta\}$, since $\theta$ is the only point of $\mathbb{S}^{k-1}_{\infty}$ that is on the external side of all the planes $P_{\gamma}(t)$.
Furthermore, if $Q$ is any complete
totally geodesic hypersurface in $\mathbb{H}^k$ which does not have
$\theta$ as a point of accumulation, then
$\pi(\Sigma^k)\cap Q$ cannot have any point of accumulation at
infinity. Hence it must be a closed and bounded set in $Q$, so it is compact. Therefore in this case $\theta$ is a simple end
of $\Sigma^k$.
\noindent{\bf Case 3.} The intersection $\Sigma^{k-1}(t)$ remains compact and non-empty for $0\leq t<\bar t$ but becomes empty for every $t>\bar t$, for some $\bar t>0$.
Note first of all that $\Sigma^k$ cannot intersect $P_{\gamma}(\overline{t})$ transversely at any point of $\Sigma^{k-1}(\bar t)$, for otherwise $\Sigma^{k-1}(t)$ would be non-empty for values of $t$ slightly greater than $\bar t$, contrary to the hypothesis of Case 3. Hence any point in $\Sigma^{k-1}(\bar t)$ would have a vertical tangent plane, a situation which has already been treated in Case 1, so we may suppose that $\Sigma^{k-1}(\bar t)$ is empty.
Consequently $\cup_{t<\bar t} \Sigma^{k-1}(t)$ is both open and closed in
$\Sigma^k$, so it must be all of $\Sigma^k$. As in Case 2, we see that $\Sigma^k$, which is the union of the point $p_0$ and the topological $(k-1)-$spheres $\Sigma^{k-1}(t)$ for $0<t<\bar t$, is homeomorphic to ${\mathbb R}^k$.
Now take a sequence of points $p_n\in \Sigma^{k-1}(t_n)$ such that
$t_n\to \bar t$. Since $\overline{\mathbb{H}}^k =\mathbb{H}^k\cup\partial\mathbb{H}^k$
is compact, there must be a subsequence of $\{\pi(p_n)\}$ that
converges to a point $\theta$ in $\overline{\mathbb{H}}^k$. Since we are
supposing that $\Sigma^{k-1}(\bar t)$ is empty, $\theta$ must be
in ${\mathbb S}^{k-2}_\infty(\bar t) = \partial_\infty\pi(P_\gamma(\bar t))$
and must be
an accumulation point of $\pi(\Sigma^k)$ at infinity.
To complete the proof in this case, we must show that $\theta$ is the only accumulation point and that it is a simple end.
Suppose there were another accumulation point of $\pi(\Sigma^k)$, say $\widetilde{\theta}\in \mathbb{S}^{k-1}_{\infty}$.
Then $\widetilde{\theta}$, like $\theta$ and any other accumulation
points of $\pi(\Sigma^k)$, must be in ${\mathbb S}^{k-2}_\infty (\bar t)$,
since there are no accumulation points for $t<\bar t$ and $\Sigma^k\cap
P_\gamma(t)$ is empty for $t>\bar t$. Choose the parametrization
of $\mathbb{S}^{k-2}_{\infty}(\bar t)$ in the disk model of $\mathbb{H}^k$ so that $\widetilde{\theta}$ is the antipode of $\theta$.
Let $\mu$ be the totally geodesic $2-$plane in $\Hset^k\times\{0\}$ that contains both $p_{0}$ and the geodesic from $(\theta,0)$ to $(\widetilde{\theta},0)$. Take
$\xi\subset \Hset^k\times\Rset$ to be the totally geodesic vertical
$(k-1)-$plane orthogonal to $\mu$
that meets $\mu$ at the single point $p_0$.
Consider the $1-$parameter family of vertical $k-$planes $\{P(\varphi)=Q(\varphi)\times{\mathbb R}\}$ containing
$\xi$ for $ \varphi \in [-\pi/2,3\pi/2] $, parametrized injectively so
that $\theta$ is a point at infinity of $Q(0)$, $\widetilde {\theta}$ is a point at infinity of $Q(\pi)$, and the
tangent plane at $p_0$ is not among the planes $P(\varphi)$. Now the intersection
$$\Sigma^{k-1}(\varphi)= \Sigma^k\cap P(\varphi)$$ is a $(k-1)-$surface in $P(\varphi)$ which satisfies
the hypotheses of the Main Theorem, so by the inductive hypothesis it must be homeomorphic to
$\mathbb{S}^{k-1}$ or $ \mathbb{R}^{k-1}$, and in the latter case $
\Sigma^{k-1}(\varphi)$ is either a vertical graph over a convex domain
in $ \mathbb{H}^{k-1}$ or it has a simple end. However,
$\Sigma^{k-1}(\varphi) $ cannot be a vertical graph, because this
$(k-1)-$surface contains the point $p_{0}$ where there
is a vertical tangent hyperplane.
Moreover, for each parameter $\varphi$
with $\varphi< 0$ or $\varphi >
\pi$ the intersection $\Sigma^{k-1}(\varphi)$ is a bounded complete $(k-1)-$surface contained in
$\cup_{0 \leq t \leq t_0}
\Sigma^{k-1}(t)$ for some $t_0<\overline{t}$ and is therefore compact, with no accumulation points at infinity.
If for some $\overline{\varphi}$ with $0 < \overline{\varphi} < \pi$,
$\Sigma^{k-1}(\overline {\varphi}) $ is compact, then by the inductive hypothesis it will be
homeomorphic to the sphere $\mathbb{S}^{k-1}$ and must bound
a closed $k-$dimensional ball $D$ in $\Sigma^k$, which is homeomorphic to $\mathbb{R}^k$. The ball $D$ must coincide either with $\Sigma^k\cap \bigcup_{\varphi\leq\overline {\varphi}} P(\varphi)$
or else with $\Sigma^k\cap \bigcup_{\varphi\geq\overline {\varphi}} P(\varphi)$; in the first case, $\theta$ cannot be a limit point of $\Sigma^k$, and in the second case, $\widetilde{\theta}$ cannot be a limit point of $\Sigma^k$, contradicting the choice of $\theta$ and $\widetilde{\theta}$. Hence for $0 <\varphi < \pi$, $\Sigma^{k-1}(\varphi) $ must be noncompact and it must have a simple end, which we denote $\theta(\varphi)$.
The sets $\overline{Q(\varphi)}\cap\mathbb{S}^{k-2}_{\infty}(\bar t)$ form a
singular foliation of $\mathbb{S}^{k-2}_{\infty}(\bar t)$ by $(k-3)-$spheres for $0<\varphi<\pi$ with two singular points
$\theta=\theta(0)$ and $\widetilde\theta =\theta(\pi)$. Thus, each leaf
$\overline{Q(\varphi)}\cap\mathbb{S}^{k-2}_{\infty}(\bar t)$ contains a single
accumulation point $\theta(\varphi)$, the simple end if $0<\varphi<\pi$,
and the points $\theta$ and $\widetilde\theta$ for $\varphi=0$ or $\pi$, when the set
$\overline{Q(\varphi)}\cap\mathbb{S}^{k-2}_{\infty}(\bar t)$
consists of a single point.
The set formed by these points is the graph of a function $\theta:[0,\pi] \rightarrow \mathbb{S}^{k-2}_{\infty}(\bar t)$, and it is a closed set in the sphere $\mathbb{S}^{k-2}_{\infty}(\bar t) =
\partial_{\infty}\pi(P_{\gamma}(\overline{t}))$.
By an elementary fact of general topology, it follows
that the function $\theta$ is continuous. Therefore the set of all the accumulation points at infinity of $\pi(\Sigma^k)$ form a
continuous curve in $\mathbb{S}^{k-2}_{\infty}(\bar t)$ with extremities $\theta$ and $\widetilde{\theta}$ (See Figure \ref{thetatil}).
\begin{figure}
\caption{The curve of accumulation points in
$\partial_\infty\pi(P_{\gamma}(\overline{t}))$.}
\label{thetatil}
\end{figure}
Now taking another point of accumulation at infinity between $\theta$ and $\widetilde{\theta}$, say $\theta'$, we can consider the $2-$plane
$\mu'$ generated by $p_{0}, \theta$ and $\theta'$
and let $\xi'$ be the $(k-1)-$plane orthogonal to $\mu'$
passing through $p_{0}$. Repeating the previous argument for the new
foliation by vertical hyperplanes containing the plane $\xi'$ shows that $\theta$ and this other point $\theta'$ should be extremities of another curve which also consists of
all the points of accumulation at infinity of $\pi(\Sigma^k)$. This is
absurd since $\widetilde{\theta}$ and nearby limit points are excluded from this curve. The contradiction shows that $\theta$ is the only accumulation point at infinity.
As in Case 2, every vertical hyperplane whose projection in
$\mathbb{H}^k$ does not have $\theta$ as a limit point must meet $\pi(\Sigma^k)$ in a compact set, so $\Sigma^k$ has a simple end at $\theta$.
\noindent{\bf Case 4.} The intersection $\Sigma^{k-1}(t)$ becomes non-compact for some $\bar t>0$.
First, note that this is the only remaining case to complete the
proof of the Main Theorem.
By transversality the set of positive
$t$'s for which the intersection
of $\Sigma^k$ with $P_{\gamma}(t)$ is compact is open in the
line. It follows that there is a first value of $t$, say
$\overline{t}>0$, such that $ \Sigma^{k-1}(\overline {t})$ is not
compact. Then $\Sigma^{k-1}(\overline{t})$ is the limit of
$\Sigma^{k-1}(t) $ as $t$ approaches $\overline{t}$ from below.
Since $\Sigma^{k-1}(\overline{t})$ is not compact, the inductive
hypothesis implies that $ \Sigma^{k-1}(\overline{t})$ is a $(k-1)-$surface homeomorphic to $\mathbb{R}^{k-1}$ and that it is
a vertical graph over a convex domain in $\mathbb{H}^{k-1}$ or has a
simple end.
\begin{lemma} Let $P$ be a vertical hyperplane that meets $\Sigma^k$,
such that $P$ is not the vertical plane $P_0$ tangent to $\Sigma^k$
at the point $p_0$. Then the intersection $\Sigma^k\cap P$ is not a
vertical graph over a convex domain. \label{notvertical}
\end{lemma}
\noindent{\bf Proof.} First, suppose that $p_0\in P$. Then the
tangent plane to $\Sigma^k\cap P$ at the point $p_0$ is a vertical
plane, so $\Sigma^k\cap P$
is not a vertical graph.
Now suppose that $p_0$ is not in $P$ and consider
the complete geodesic $\bar\gamma$ that passes through
$p_0$ and is orthogonal to $P$. For every $t\in\mathbb{R}$, let
$P_{\bar\gamma}(t)$ be the vertical hyperplane orthogonal to
$\bar\gamma$ passing through the point $\bar\gamma(t)$,
where $\bar\gamma$ is parametrized so that
$\bar\gamma(0)=p_0$ and $\bar\gamma(t)$ for $t>0$ is on the external side of $P_0$. The hyperplanes
$P_{\bar\gamma}(t)$ form a foliation of
$\Hset^k\times\Rset$. For some $t_0<0$, $P_{\bar\gamma}(t_0)$ will be tangent to
$\Sigma^k$, and for $t<t_0$, the intersection $\Sigma^k \cap
P_{\bar\gamma}(t)$ will be empty.
By repeating the arguments of Cases 1, 2, and 3, we see that
the only possibility for the connected set
$\Sigma^k\cap P_{\bar\gamma}(t)$
to be a vertical graph occurs in the situation of Case 4,
that is, when there is a number $t'>0$ such that
$\Sigma^k$ is transverse to $P_{\bar\gamma}(t)$ for $0<t\leq t'$ with a compact intersection for $0<t<t'$,
but $\Sigma^k\cap P_{\bar\gamma}(t')$ is not compact.
In this situation, let $\bar\Sigma^{k-1}(t')$ denote $\Sigma^k\cap P_{\bar\gamma}(t')$ and suppose that it
is a vertical graph of a function $f$ over a convex domain
in $\pi(P_{\bar\gamma}(t'))$, which is isometric to $\mathbb{H}^{k-1}$
with $k-1\geq 2$,
in order to obtain a contradiction. Let $\beta$ be a geodesic segment contained in the domain of $f$. For each point $q$ of
$\beta$, the vertical line $r(q)$ passing through the point $(q,0)$
meets $\bar\Sigma^{k-1}(t')$ in a unique point
$(q,f(q))$, and these points form a curve $\widetilde\beta$ lying
over $\beta$, so that $\pi(\widetilde\beta)=\beta$. If $\mu_q$ is
the complete geodesic in $\mathbb{H}^k$
containing the points $\pi(p_0)$ and $q\in\beta$, then the intersection of $\Sigma^k$ with the $2-$plane $\mu_q\times{\mathbb R}$ is a complete curve immersed in
$\mu_q\times{\mathbb R}$ with strictly positive
curvature and a vertical tangent at $p_0$, and
$\mu_q\times{\mathbb R}$ is isometric to ${\mathbb R}^2$.
Since $r(q)$ meets $\bar\Sigma^{k-1}(t')$ in a single point
$(q,f(q))$, at that point two branches of the curve must cross
each other. This holds for every point $q$ in $\beta$, so the curve
$\widetilde\beta$ over $\beta$ must have strictly positive curvature
both above and below in the Euclidean $2-$plane strip
$\beta\times{\mathbb R}$, but that is absurd.
\qed
The lemma shows that $\Sigma^{k-1}(\overline{t})$ cannot be
a vertical graph, so by the inductive hypothesis, it
must have a simple end, which we denote $\theta$,
in $\mathbb{S}^{k-2}_{\infty}=\partial_\infty \pi(P_{\gamma}(\overline t))$. We shall show that $\theta$ is also a simple end of $\Sigma^k$.
Let $\Omega(\gamma,\theta)$ be the hyperbolic $2-$plane in
$\mathbb{H}^{k} \times \{0\} $ that contains the complete geodesic
$\gamma$ orthogonal to $\Sigma^k$ at $p_0$ and also has
$(\theta,0)$ as an accumulation point at infinity.
Parametrize the circle $\mathbb{S}^{1}_{\infty} =
\partial_{\infty}(\Omega(\gamma, \theta)) $ by the numbers $0$ to $2\pi$ such that $0$ is the parameter for $(\theta,0)$
and $\pi$ is the other point in $\mathbb{S}^{1}_{\infty}\cap \partial P_{\gamma}(\overline {t})$, with the orientation such that the points $ 0 < s < \pi $ are on the same side
of $\partial P_{\gamma}(\overline {t})$ as the point $ p_{0} $.
Now for $\delta > 0$ near to $0$
consider the complete geodesic $\{\delta, s \}$ from $\delta$ to $s$
in $\Omega(\gamma,\theta)$, where
$\delta < s < 2 \pi $, and let $ W(\delta, s) $ be the vertical $k-$plane in $\mathbb{H}^k \times \mathbb{R}$ that contains the geodesic $\{\delta, s \}$ and is orthogonal to the plane $\Omega(\gamma, \theta)$.
Let us analyze the
intersection $\Sigma^{k-1}(\delta,s)$ of $\Sigma^k$ and $W(\delta,s)$ for $\pi < s < 2\pi$. There are two possible situations:
\begin{figure}
\caption{Situation 1 of Case 4.}
\label{situation1}
\end{figure}
\begin{enumerate}
\item The hypersurfaces $\Sigma^{k-1}(\delta, s)$ are always compact for all $\pi \leq s < 2\pi$ and all $\delta > 0$ sufficiently near to zero.
In this case, as $\delta \rightarrow 0$ we have a situation similar to Case 2, so the argument there shows that $\theta$ is a simple end of $\Sigma^k$ (See Figure \ref{situation1}).
\item There are numbers $\delta >0$ arbitrarily near to zero such that $\Sigma^{k-1}(\delta, s)$ becomes non-compact for some $s, \pi < s < 2\pi $.
\end{enumerate}
\begin{figure}
\caption{Situation 2 of Case 4.}
\label{situation2}
\end{figure}
To handle this second situation, for a fixed $\delta>0$ let $\bar{s}_{\delta}$ be the smallest $s$ such that $\Sigma^{k-1}(\delta, s)$ is not compact. It is easy to see that as $\delta$ decreases $\overline{s}_{\delta}$ cannot increase. Note that there is a number $s$ between $\delta$ and $\pi$ where $ W(\delta, s) $ is tangent to $\Sigma^k$ and at this point $W(\delta, s)$ is a tangent vertical hyperplane. Thus we can apply Lemma \ref{notvertical} and conclude that $ \Sigma^{k-1}(\delta, \overline{s}_{\delta})$ cannot be a vertical graph, so it must be a $(k-1)-$surface with a simple end, which we denote
by $\widetilde{\phi}_{\delta}$. Note that $\widetilde{\phi}_{\delta}$ is not necessarily in the circle $\partial_{\infty}(\Omega(\gamma, \theta))$, but it is in $\mathbb{S}^{k-1}_{\infty}$.
Let $\overline{s}$ be the limit of $ \overline{s}_{\delta}$
as $\delta$ decreases to $0$. Then there is a decreasing sequence
$\delta_n$ such that $\widetilde{\phi}_{\delta_{n}}$
converges to a point
$ \overline{\phi} \in \mathbb{S}^{k-1}_{\infty} $.
Note that $\overline{s} \geq \pi $ and suppose $ \overline{s} < 2\pi $ to arrive at a contradiction. Let $T_{n}$ be the hyperbolic $(k-1)-$plane that contains the geodesic $ \{ \delta_{n}, \overline{s}_{\delta_{n}} \} $ and is orthogonal to the $2-$plane $\Omega(\gamma, \theta)$. Then the limit $T$ of the $(k-1)-$planes $T_{n}$ is the
hyperbolic $(k-1)-$plane that contains the geodesic
$\{0,\overline{s}\}$ and is orthogonal to $\Omega(\gamma,\theta)$. Note that $ \widetilde{\phi}_{\delta_{n}} \in \partial_{\infty}T_{n} $ and $ \overline{\phi} \in \partial_{\infty}T $. To see that $\overline{\phi} $ is an accumulation point of $ \Sigma^k \cap (T \times \mathbb{R}) $, we use the following proposition.
\begin{prop}
Supposing the hypotheses of the Main Theorem and Hypothesis
\ref{hyp2}, let $\theta\in \partial\mathbb{H}^k$
be an accumulation point both of $\pi(\Sigma^k)$ and of a totally
geodesic hyperplane $Q$ in $\mathbb{H}^k$.
If there exists a neighborhood $V$ of
$\theta$ in $\bar{\mathbb{H}}^k =
\mathbb{H}^k\cup\partial {\mathbb{H}}^k$ such that
$V\cap\pi(\Sigma^k)\cap Q$ is empty, then $\theta$ is the only
accumulation point of $\pi(\Sigma^k)$ in the sphere at infinity
$\partial {\mathbb{H}}^k$. \label{separationprop}
\end{prop}
Supposing this Proposition for the moment, note that both
$\theta$ and $\overline{\phi}$ are accumulation points at
infinity of $T$ and also of $\pi(\Sigma^k)$. If
$\theta$ and $\overline{\phi}$ are distinct, then the
Proposition shows that both of them
must be accumulation points of $\pi(\Sigma^k)\cap T$.
This is impossible since by the inductive hypothesis
$\pi(\Sigma^k)\cap T$ must have a simple end and thus it has
only one accumulation point at infinity. Consequently
$\theta$ and $\overline{\phi}$ must coincide, so $\pi(\Sigma^k)$
has only one accumulation point $\theta$ at infinity.
As in Case 2, $\theta$ must be a simple end of
$\pi(\Sigma^k)$. This will complete the proof of the Main Theorem,
once we have proven the Proposition. \qed
We shall use the following lemma in the proof of Proposition
\ref{separationprop}.
\begin{lemma}
In the conditions of Proposition \ref{separationprop}, the image
$\pi(\Sigma^k)$
is a closed set in $\mathbb{H}^k$ and its frontier $\partial\pi(\Sigma^k)$
is a complete smooth hypersurface embedded in $\mathbb{H}^k$ with strictly
positive curvature at every point.
\end{lemma}
\noindent{\bf Proof.} First, we show that $\pi(\Sigma^k)$ is closed in
$\mathbb{H}^k$. Let $q_0 = \pi(p_0)\in \mathbb{H}^k$ be the projection of the
point $p_0$ where we are assuming there is a vertical tangent plane and
consider a geodesic $\beta$ in $\mathbb{H}^k$ beginning at $q_0= \beta(0)$,
parametrized as $\beta(s)$ with $s\geq 0$. If there exists some $s>0$ such that
$\beta(s)\notin\pi(\Sigma^k)$, let
$\bar s>0$ be the largest number such that $\beta(s)\in \pi(\Sigma^k)$
for all $s\in [0,\bar s)$. Consider the intersection of the plane
$\beta\times{\mathbb R}$ with $\Sigma^k$. This is a complete convex
embedded curve with a vertical tangent at $p_0$, and since it does
not diverge to infinity, it must contain exactly one other point with
a vertical
tangent plane, say $p_\beta$, such that the segment of $\beta$ from
$q_0=\beta(0)$ to $q_\beta=\beta(\bar s)=\pi(p_\beta)$ is the
intersection of $\beta$ with
$\pi(\Sigma^k)$. Let $\widetilde B$ be the set of all these points
$p_\beta$ for all such geodesics $\beta$ plus the point $p_0$.
Then $\widetilde B$ is the set of points in $\Sigma^k$ with vertical
tangent planes and we see that the frontier of $\pi(\Sigma^k)$
is contained in $B=\pi(\widetilde B)\subset\pi(\Sigma^k)$,
so $\pi(\Sigma^k)$ is a closed set in $\mathbb{H}^k$ with boundary $B$.
Next we claim that $\widetilde B$ is a smooth hypersurface in $\Sigma^k$.
In a neighborhood $V\subset \Sigma^k$ of the point $p_\beta$ let $N$
be the normal vector field to $V$. Let $\pi':\mathbb{H}^k\times
{\mathbb R} \to {\mathbb R}$ be the projection on the second factor, and
let $d\pi'_p: T_p\Sigma^k\to T_{\pi'(p)}{\mathbb R}\equiv {\mathbb R}$ be
its differential at $p\in V$. In view of the strictly
positive curvature of
$\Sigma^k$, $d\pi'_p$ is surjective for every $p\in \widetilde B\cap V$,
so the implicit function theorem shows that near $p$, $\widetilde B\cap V$
is a smooth hypersurface in $V$. Furthermore, $\pi|\widetilde B: \widetilde
B\to B$ is a
diffeomorphism and $B$ is a smooth
hypersurface in $\mathbb{H}^k$, as claimed.
It remains to show that $B$ has strictly positive curvature in
$\mathbb{H}^k$.
Let $N$ be the external unit normal field to $B$. Since
$\Sigma^k$ has a vertical tangent plane at every point of
$\widetilde B$, the external unit normal
field $\widetilde N$
to $\Sigma^k$ is horizontal along $\widetilde B$ and
coincides with $N$, i.e., at $p\in \widetilde B$,
$d\pi_p\widetilde N(p) =
N(\pi(p))$.
For a tangent vector $\widetilde X\in T_p \widetilde B$ with
$d\pi_p\widetilde X = X$
we have
$$A(X) = -\nabla_{X} N =
-\widetilde\nabla_{\widetilde X} \widetilde N = \widetilde A(\widetilde X)$$
where $\nabla$ and $\widetilde\nabla$ are the Riemannian connections
on $\mathbb{H}^k$ and $\mathbb{H}^k\times{\mathbb R}$ and
$A$ and $\widetilde A$ are the shape operators associated to the
second fundamental forms of $B$ and $\Sigma^k$. Thus the principal
curvatures
of $B$ at the point $\pi(p)$, which are the eigenvalues of $A$, coincide with
the eigenvalues of the restriction of $\widetilde A$ to $\widetilde B$,
the principal curvatures of $\Sigma^k$ along $\widetilde B$ at $p$, and these
are all positive since $\Sigma^k$ has strictly positive curvature.
Thus $B$ has strictly positive curvature.
\qed
\noindent{\bf Proof of Proposition \ref{separationprop}.} By the
lemma, the frontier $B$ of $\pi(\Sigma^k)$ is a smooth
hypersurface in $\mathbb{H}^k$ contained in $\pi(\Sigma^k)$
and it has strictly positive curvature.
By hypothesis, near
$\theta$ the image $\pi(\Sigma^k)$ and its frontier $B$
are disjoint from the totally geodesic hyperplane $Q$. For any totally
geodesic $2-$plane $\eta$ in $\mathbb{H}^k$ that is orthogonal to $Q$
and has $\theta$ as a limit point,
the intersection $\eta\cap Q$ is a geodesic in $\mathbb{H}^k$, while $\eta\cap B$
is a strictly convex curve (unless it is empty),
so as the points
on this curve move away from $\theta$ they must become more distant
from $\eta\cap Q$. Outside a tiny neighborhood of the sphere at
infinity, the curvature of the boundary $B$ of $\pi(\Sigma^k)$ is
greater than some positive
number $\epsilon$. This forces $\pi(\Sigma^k)$ to move away from $Q$
uniformly as points in it move away from $\theta$. Hence there
cannot be any other accumulation point of $\pi(\Sigma^k)$ in
addition to $\theta$.
Since $\pi(\Sigma^k)$ is connected, it must lie
entirely on one side of $Q$.
\qed
\section{Examples of simple ends}\label{example}
We use the upper half-space model of hyperbolic $n-$space $\mathbb{H}^n$,
$$\mathbb{R}^n_+=\{x=(x_1,\dots,x_n)\in \mathbb{R}^n\ |\ x_n>0\}$$
with the Riemannian
metric given explicitly as $ds^2= x_n^{-2}\sum_{i=1}^n dx_i^2$.
Then $\mathbb{H}^n=\mathbb{R}^n_+$ with this metric has constant curvature
$-1$.
We shall give examples of complete embedded hypersurfaces with
strictly positive curvature and with a simple end by explicit computations. The hypersurfaces which we consider
are invariant under the group of parabolic isometries that
leave invariant each horizontal horosphere in $\mathbb{R}^n_+$
and fix $\infty$, the point at infinity.
Explicitly, for $a=(a_1,\dots,a_{n-1})\in \mathbb{R}^{n-1}$,
consider the parabolic isometry
$f_a:\mathbb{R}^n_+\to \mathbb{R}^n_+$, defined by setting $f_a(x)=x+(a,0)$,
where $(a,0)=(a_1,\dots,a_{n-1},0)$. Then $f_a$
preserves the horizontal horospheres $\{x_n=b\}$, and
$$F_a:\mathbb{H}^n\times \mathbb{R}\to
\mathbb{H}^n\times \mathbb{R},\ F_a(x,t)=(f_a(x),t)$$
is an isometry for every $a\in\mathbb{R}^{n-1}$.
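To check this explicitly: $f_a$ is a Euclidean translation of $\mathbb{R}^n_+$ that leaves the last coordinate $x_n$ unchanged, so it preserves the metric $ds^2=x_n^{-2}\sum_{i=1}^n dx_i^2$ and is therefore an isometry of $\mathbb{H}^n$; since $F_a$ acts as the identity on the $\mathbb{R}$ factor, it is an isometry of the product metric on $\mathbb{H}^n\times\mathbb{R}$.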
Consider an interval $\mathcal{I} = (t_{1}, t_{2})$, $-\infty < t_{1}< t_{2} < + \infty$, and a positive smooth function $u:\mathcal{I} \rightarrow
\mathbb{R}$,
\begin{equation}u(t)= c_{1}\ln(t-t_{1}) + c_{2}\ln(t_{2} - t)\label{defu}\end{equation}
where $c_1$ and $c_2$ are negative constants and
\begin{equation}t_2-t_1\leq e^{-1}.
\label{t1t2}\end{equation}
It is
clear that $u(t)$ is positive and $\lim_{t\to t_1}u(t) =
\lim_{t\to t_2}u(t) = +\infty$.
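For later use in the proof of Lemma \ref{utt} below, we record the derivatives of $u$, which follow directly from (\ref{defu}):
$$u_t=\frac{c_1}{t-t_1}-\frac{c_2}{t_2-t},\qquad
u_{tt}=-\frac{c_1}{(t-t_1)^2}-\frac{c_2}{(t_2-t)^2}.$$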
Let $$\alpha=\{(0, 0, ..., 0, u(t), t)\ |\ t\in\mathcal{I}\}
\subset\mathbb{H}^n\times \mathbb{R}$$ be the graph of $u$
as a curve in the $x_{n}t-$plane and consider the set
$$\Sigma^n=\cup_{a\in \mathbb{R}^{n-1}} F_a(\alpha)\subset \mathbb{H}^n\times \mathbb{R},$$
parametrized by
\begin{equation}
\varphi(x_{1}, x_{2}, ..., x_{n-1}, t) = (x_{1},
x_{2}, ..., x_{n-1}, u(t), t).
\label{parametrization}
\end{equation}
Clearly $\Sigma^n$
is a complete smooth embedded hypersurface.
We shall show that (\ref{t1t2}) implies that
it has strictly positive curvature and has a simple end.
The hypersurface $\Sigma^n$ is preserved by
$A\times {\rm id}_\mathbb{R}: \mathbb{H}^n\times \mathbb{R}\to\mathbb{H}^n\times \mathbb{R}$ for
every Euclidean isometry $A:\mathbb{R}^n_+\to \mathbb{R}^n_+$
that preserves $x_n$; this includes not only horizontal translations, but also rotations of $\mathbb{R}^n_+$ about vertical axes and their compositions, and these all produce isometries of $\mathbb{H}^n\times \mathbb{R}$.
Let $\{\partial_1=\partial/\partial x_1,\dots,
\partial_n=\partial/\partial x_n,
\partial_t=\partial/\partial t\}$ be the usual frame
in $\mathbb{H}^n \times \mathbb{R}$ and let
$\lambda = x_n^{-1}$ be the conformal factor of the metric.
Then
the Riemannian connection $\overline\nabla$ in $\mathbb{H}^n \times \mathbb{R}$ is given by
\begin{displaymath}\begin{array}{lll}
\overline\nabla_{\partial_i}\partial_j =&\delta_{ij}\lambda\partial_n\ &{\rm for} \ i,j< n \nonumber \\
\overline\nabla_{\partial_i}\partial_n =& \overline\nabla_{\partial_n}\partial_i =-\lambda\partial_i &{\rm for}\ i<n \nonumber \\
\overline\nabla_{\partial_n}\partial_n =& -\lambda\partial_n \nonumber
\\ \overline\nabla_{\partial_t}\partial_i\ =&
\overline\nabla_{\partial_i}\partial_t=
\overline\nabla_{\partial_t}\partial_t=0 &{\rm for}\ i\leq n,
\end{array}
\end{displaymath} where $\delta_{ij}$ is the Kronecker delta.
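These identities follow from a standard computation: writing the product metric as $e^{2\phi}\sum_{i=1}^n dx_i^2+dt^2$ with $\phi=-\ln x_n$, the Christoffel symbols of the conformally flat factor are
$$\Gamma^k_{ij}=\delta_{ik}\,\partial_j\phi+\delta_{jk}\,\partial_i\phi-\delta_{ij}\,\partial_k\phi,
\qquad \partial_k\phi=-\lambda\,\delta_{kn},$$
and substituting the three cases $i,j<n$, $i<n=j$, and $i=j=n$ gives exactly the formulas above, while all derivatives involving $\partial_t$ vanish because the metric is a Riemannian product.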
Using the parametrization (\ref{parametrization}) and
setting $u_t=\partial u/\partial t$ as usual,
we see that the vector fields
\begin{align}
&\varphi_i = \varphi_{x_{i}} = \partial_i,\ \hspace{2.3cm}{\rm for}\ i=1,\dots,n-1,\ {\rm and}\nonumber \\
&\varphi_n = \varphi_{t} = u_{t}\partial_{n} + \partial_t \nonumber
\end{align}
form a basis of the tangent space to $\Sigma^n$.
We can calculate the coefficients of the first fundamental form $g_{ij} = \langle
\varphi_{i}, \varphi_{j}\rangle $ of $\Sigma^n$ to be
\begin{displaymath}
g_{ij} = \left\{ \begin{array}{ll}
\lambda^2 &{\rm if}\ i=j\ {\rm with} \ i = 1, ..., n-1 \nonumber \\
\lambda^2u_{t}^{2} + 1 &{\rm for} \hspace{0.3cm}i=j=n \nonumber \\
0 &{\rm for}\ i \neq j\ {\rm with} \ i, j = 1, ..., n. \nonumber
\end{array}\right.
\end{displaymath}
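For instance, the only nontrivial entry is
$$g_{nn}=\langle u_t\partial_n+\partial_t,\,u_t\partial_n+\partial_t\rangle
=u_t^2\langle\partial_n,\partial_n\rangle+\langle\partial_t,\partial_t\rangle
=\lambda^2u_t^2+1,$$
since $\langle\partial_i,\partial_j\rangle=\lambda^2\delta_{ij}$ for $i,j\leq n$ and $\langle\partial_t,\partial_t\rangle=1$ in the product metric.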
The unit normal vector field to $\Sigma^n$ is
\begin{eqnarray}
N = (\lambda^{-1}\partial_{n} -
\lambda u_{t}\partial_t)/m \nonumber
\end{eqnarray}
where $m = \sqrt{1 + \lambda^2 u_{t}^2}$.
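One checks directly that $N$ is a unit normal: $\langle N,\varphi_i\rangle=0$ for $i<n$ is immediate, while
$$\langle N,\varphi_n\rangle=\frac{1}{m}\big(\lambda^{-1}u_t\langle\partial_n,\partial_n\rangle-\lambda u_t\langle\partial_t,\partial_t\rangle\big)
=\frac{1}{m}(\lambda u_t-\lambda u_t)=0,
\qquad
\langle N,N\rangle=\frac{\lambda^{-2}\lambda^2+\lambda^2u_t^2}{m^2}=1.$$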
Then we compute the coefficients $b_{ij} = \langle \overline{\nabla}_{\varphi_{j}}\varphi_{i} ,
N \rangle $ of the second fundamental form to be
\begin{displaymath}
b_{ij} = \left\{ \begin{array}{ll}
\lambda^2/m \hspace{0.3cm} &{\rm if} \hspace{0.3cm}i=j \hspace{0.3cm} {\rm with} \hspace{0.3cm}i = 1, ..., n-1 \nonumber \\
\lambda(-\lambda u_{t}^2 + u_{tt})/m\hspace{0.3cm} &{\rm for} \hspace{0.3cm} i=j=n \nonumber \\
0 \hspace{0.3cm} &{\rm for} \hspace{0.3cm} i \neq j\hspace{0.3cm} {\rm with} \hspace{0.3cm}i, j = 1, ..., n. \nonumber
\end{array}\right.
\end{displaymath}
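For instance, the entry $b_{nn}$ is obtained as follows. Since $u_t$ depends only on $t$, the connection formulas above give (all terms involving $\partial_t$ vanishing)
$$\overline\nabla_{\varphi_n}\varphi_n
=\overline\nabla_{u_t\partial_n+\partial_t}(u_t\partial_n+\partial_t)
=u_{tt}\,\partial_n+u_t^2\,\overline\nabla_{\partial_n}\partial_n
=(u_{tt}-\lambda u_t^2)\,\partial_n,$$
and pairing with $N$ yields $b_{nn}=(u_{tt}-\lambda u_t^2)\,\lambda^{-1}\lambda^2/m=\lambda(-\lambda u_t^2+u_{tt})/m$, as listed.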
Therefore the matrix $(b_{ij})_{n \times n}$ of
coefficients of the second fundamental form is a diagonal matrix with the
eigenvalues
\begin{align}
&\mu_{i} = \lambda^2/m\ \hspace{2.7cm}{\rm for}\ i=1,\dots, n-1\
{\rm and}\nonumber \\
&\mu_{n} = \lambda(-\lambda u_{t}^2 + u_{tt})/m\nonumber
\end{align}
along the diagonal.
Since $m = \sqrt{1 + \lambda^2 u_{t}^2} > 0$, it is clear that $\mu_i>0$ for
$i=1,\dots,n-1$.
\begin{lemma} Condition (\ref{t1t2}) implies that $-u_{t}^2 + uu_{tt}> 0$.
\label{utt}
\end{lemma}
Since $\lambda=1/u>0$ this lemma implies that
$\mu_n$ is also positive. Thus $\Sigma^n$, as defined by the explicit
formulas given above,
is a complete embedded hypersurface in $\mathbb{H}^n \times \mathbb{R}$
with a simple end and
strictly positive curvature.
\noindent{\bf Proof of Lemma \ref{utt}.} Calculate the derivatives
$u_t$ and $u_{tt}$ from (\ref{defu}), the definition
of $u(t)$. Then a short calculation gives
$$-u_{t}^2 + uu_{tt}=-\frac{c_1^2}{(t-t_1)^2}
[1+\ln(t-t_1)]
-\frac{c_2^2}{(t_2-t)^2} [1+\ln(t_2-t)]$$
$$\qquad\qquad\qquad+\frac{c_1c_2}{(t-t_1)(t_2-t)}\left[2-\frac{(t-t_1)}{(t_2-t)}\ln(t-t_1)
-\frac{(t_2-t)}{(t-t_1)}\ln(t_2-t)\right].$$
Since $t_2-t_1\leq e^{-1}$ and $t_1<t<t_2$, both $t-t_1$ and $t_2-t$ lie in $(0,e^{-1})$, so $\ln(t-t_1)<-1$ and $\ln(t_2-t)<-1$; since moreover the parameters $c_1$ and $c_2$ are negative,
it is easy to check that each of the terms on the right hand side is
positive. \qed
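As a concrete illustration (with hypothetical parameter values, used only for this check), take $t_1=0$, $t_2=e^{-1}$ and $c_1=c_2=-1$, so that $u(t)=-\ln t-\ln(e^{-1}-t)$ on $\mathcal{I}=(0,e^{-1})$. At the midpoint $t=e^{-1}/2$ one finds $u=2(1+\ln 2)>0$, $u_t=0$ and $u_{tt}=8e^2$, whence $-u_t^2+uu_{tt}=16e^2(1+\ln 2)>0$, in agreement with Lemma \ref{utt}.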
\end{document}
\begin{document}
\setcounter{page}{1}
\newcommand{\mathbb{T}}{\mathbb{T}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbb{Q}}{\mathbb{Q}}
\newcommand{\mathbb{N}}{\mathbb{N}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\tx}[1]{\quad\mbox{#1}\quad}
\parindent=0pt
\def\hskip 2pt\hbox{$\joinrel\mathrel\circ\joinrel\to$}{\hskip 2pt\hbox{$\joinrel\mathrel\circ\joinrel\to$}}
\def\hskip 1pt\frame{\vbox{\vbox{\hbox{\boldmath$\scriptstyle\times$}}}}\hskip 2pt{\hskip 1pt\frame{\vbox{\vbox{\hbox{\boldmath$\scriptstyle\times$}}}}\hskip 2pt}
\def\vbox{\hbox to 8.9pt{$\mid$\hskip -3.6pt $\circ$}}{\vbox{\hbox to 8.9pt{$\mid$\hskip -3.6pt $\circ$}}}
\def\hbox{\rm im}\hskip 2pt{\hbox{\rm im}\hskip 2pt}
\def\hbox{\rm coim}\hskip 2pt{\hbox{\rm coim}\hskip 2pt}
\def\hbox{\rm coker}\hskip 2pt{\hbox{\rm coker}\hskip 2pt}
\def\mathbb{T}R{\hbox{\rm tr}\hskip 2pt}
\def\hbox{\rm grad}\hskip 2pt{\hbox{\rm grad}\hskip 2pt}
\def\mathbb{R}ANK{\hbox{\rm rank}\hskip 2pt}
\def\hbox{\rm mod}\hskip 2pt{\hbox{\rm mod}\hskip 2pt}
\def\hbox{\rm den}\hskip 2pt{\hbox{\rm den}\hskip 2pt}
\def\hbox{\rm deg}\hskip 2pt{\hbox{\rm deg}\hskip 2pt}
\title[Strong reactions in quantum super PDE's.II]{STRONG REACTIONS IN QUANTUM SUPER PDE's.II:\\ NONLINEAR QUANTUM PROPAGATORS}
\author{Agostino Pr\'astaro}
\maketitle
{\footnotesize
\begin{center}
Department SBAI - Mathematics, University of Rome La Sapienza, Via A.Scarpa 16,
00161 Rome, Italy. \\
E-mail: {\tt [email protected]}
\end{center}
}
\vskip 0.5cm
\centerline{\em This work in three parts is dedicated to Albert Einstein and Max Planck.}
\vskip 0.5cm
\begin{abstract}
In this second part, of a work in three parts, devoted to encode strong reactions of the high energy physics, in the algebraic topologic theory of quantum super PDE's, (previously formulated by A. Pr\'astaro), decomposition theorems of integral bordisms in quantum super PDEs are obtained. (For part I and part III see \cite{PRAS28, PRAS29}.) In particular such theorems allow us to obtain representations of nonlinear quantum propagators in quantum super PDE's, by means of elementary ones ({\em quantum handle decompositions} of nonlinear quantum propagators). These are useful to encode nuclear and subnuclear reactions in quantum physics. Pr\'astaro's geometric theory of quantum PDE's allows us to obtain constructive and dynamically justified answers to some important open problems in high energy physics. In fact a {\em Regge-type} relation between {\em reduced quantum mass} and {\em quantum phenomenological spin} is obtained. A dynamical {\em quantum Gell-Mann-Nishijima formula} is given. An existence theorem of observed local and global solutions with electric-charge-gap, is obtained for quantum super Yang-Mills PDE's, $\widehat{(YM)}[i]$, by identifying a suitable constraint, $\widehat{(YM)}[i]_w\subset \widehat{(YM)}[i]$, {\em quantum electromagnetic-Higgs PDE}, bounded by a quantum super partial differential relation $\widehat{(Goldstone)}[i]_w\subset \widehat{(YM)}[i]$, {\em quantum electromagnetic Goldstone-boundary}. An electric neutral, connected, simply connected observed quantum particle, identified with a Cauchy data of $\widehat{(YM)}[i]$, it is proved do not belong to $\widehat{(YM)}[i]_w$.
Existence of {\em $Q$-exotic nonlinear quantum propagators} of $\widehat{(YM)}[i]$, i.e., nonlinear quantum propagators that do not respect the quantum electric-charge conservation is obtained.
By using integral bordism groups of quantum super PDE's, a {\em quantum crossing symmetry theorem} is proved. As a by-product existence of {\em massive photons} and {\em massive neutrinos} are obtained. A dynamical proof that quarks can be broken-down is given too. A {\em quantum time}, related to the observation of any nonlinear quantum propagator, is calculated. Then an {\em apparent quantum time} estimate for any reaction is recognized. A criterion to identify solutions of the quantum super Yang-Mills PDE encoding {\em(de)confined quantum systems} is given. {\em Supersymmetric particles} and {\em supersymmetric reactions} are classified on the ground of integral bordism groups of the quantum super Yang-Mills PDE $\widehat{(YM)}$. Finally, existence of the {\em quantum Majorana neutrino} is proved. As a by-product, the existence of a new quasi-particle, that we call {\em quantum Majorana neutralino}, is recognized made by means of two quantum Majorana neutrinos, a couple $(\widetilde{\nu_e},\widetilde{\bar\nu_e})$, supersymmetric partner of $(\nu_e,\bar\nu_e)$, and two Higgsinos.
\end{abstract}
\vskip 0.5cm
\noindent {\bf AMS Subject Classification:} 55N22, 58J32, 57R20; 58C50; 58J42; 20H15; 32Q55; 32S20.
\noindent \textbf{Keywords}: Integral (co)bordism groups in quantum (super) PDE's;
Existence of local and global solutions in hypercomplex quantum super PDE's; Conservation laws;
Crystallographic groups; Quantum singular PDE's; Quantum exotic superspheres; Quantum reactions; Quantum Regge-type trajectories; Quantum Gell-Mann-Nishijima formula; $Q$-exotic nonlinear quantum propagators; Quantum crossing symmetries in quantum super PDEs; quantum massive photons; quantum massive neutrinos; Dark matter; Apparent quantum time; Quantum (de)confinement; Quantum supersymmetric partners; Quantum Majorana neutrino.
\section[Introduction]{\bf Introduction}
\vskip 0.5cm
The algebraic topologic theory of quantum (super) PDE's formulated by A. Pr\'astaro allows one to directly encode quantum phenomena in a category of noncommutative manifolds (quantum (super)manifolds) and to finally solve the problem of unification, at the quantum level, of gravity with the fundamental forces \cite{PRAS5,PRAS8,PRAS9,PRAS10,PRAS11,PRAS12,PRAS14,PRAS17,PRAS19,PRAS21,PRAS22}. In particular, this theory allowed us to recognize the mechanism of mass creation/destruction as a natural geometric phenomenon related to the algebraic topologic structure of the quantum (super) PDEs encoding the quantum system under study \cite{PRAS22}.
The aim of this second part is to explicitly prove that nuclear and subnuclear reactions can be encoded as boundary value problems in Pr\'astaro's algebraic topology of quantum super PDEs, and can be represented in terms of elementary reactions. (For part I and part III see \cite{PRAS28, PRAS29}.)
It is important to emphasize that quantum conservation laws do not necessarily produce conservation of quantum charges in quantum reactions. In fact, it is also important to consider the topological structure of the corresponding nonlinear quantum propagators encoding these reactions. In \cite{PRAS28} (Part I), we have shown this fact for the observed quantum energy. In this second part we characterize observed nonlinear quantum propagators $V$ of the observed quantum super Yang-Mills PDE, $\widehat{(YM)}[i]$, with respect to the total quantum electric-charge. Then we define {\em $Q$-exotic nonlinear quantum propagators} as ones where there is a non-zero defect quantum electric-charge, $\mathfrak{Q}[V]\in A$, in the corresponding encoded reactions. ($A$ is the fundamental quantum superalgebra in $\widehat{(YM)}[i]$.) This important phenomenon, which is related to the gauge invariance of $\widehat{(YM)}[i]$, was not previously well understood, since gauge invariance was wrongly interpreted as a condition that necessarily produces the conservation of electric-charge in reactions.
Really just the gauge invariance is the main origin of such phenomenon, but beside the structure of the nonlinear quantum propagator. This fundamental aspect of quantum reactions in $\widehat{(YM)}[i]$, gives strong theoretical support to the guess about existence of quantum reactions where the ``electric-charge" is not conserved. The electric-charge conservation law was quasi a dogma in particle physics. However, there are in the world many heretical experimental efforts to prove existence of decays like the following $e^-\to \gamma+\nu$, i.e. electron decay into a photon and neutrino. In this direction some first weak experimental evidences were recently obtained.\footnote{See \cite{GIAMMARCHI}. Some other exotic decays were also investigated, as for example the exotic neutron's decay: $n\to p+\nu+\bar\nu$. \cite{NORMANN-BACHALL-GOLDHABER}.} With this respect, one cannot remark the singular role played, in the history of the science in these last 120 years, by the electron, a very small and light particle. In fact, at the beginning of the last century was just the electron to cause break-down in the Maxwell and Lorentz physical picture of the micro-world, until to produce a completely new point of view, i.e. the quantum physics. Now, after 120 years the electron appears to continue do not accept the place that physicists have reserved to it in the {\em world-puzzle}.
In the following we describe how the paper is organized and list the main results. Section 2:
Theorem \ref{quantum-superhandle-decomposition-quantum-nonlinear-propagator}: A representation of nonlinear quantum propagators by means of {\em quantum exchangions} and {\em quantum virtual particles} is given. Section 3: Theorem \ref{quantum-scattering-processes-in-quantum-yang-mills-d-4-quantum-super-minkowski} characterizes nonlinear quantum propagators in the quantum super Yang-Mills PDE on quantum super Minkowski space-time. Section 4: Theorem \ref{reduced-quantum-mass-torsion}: The quantum mass is represented by means of the quantum torsion of the corresponding solution. Corollary \ref{phenomenological-quantum-regge-type-rule}: A direct relation between the squared {\em reduced quantum mass} and the {\em phenomenological quantum spin} is given.\footnote{In particular this result shows how the phenomenological {\em Regge-type trajectories} emerge from the geometric theory of quantum (super) PDE's formulated by A. Pr\'astaro.}
Theorem \ref{quantum-gell-mann-nishijima-formula} gives a dynamical {\em quantum Gell-Mann-Nishijima formula}. By means of this formula one obtains a dynamical interpretation of {\em quantum hypercharge} and {\em quantum $3^{th}$ isospin component}. An existence theorem of observed local and global solutions with electric-charge-gap, is obtained too for observed quantum super Yang-Mills PDE's, $\widehat{(YM)}[i]$. It is identified a suitable constraint, $\widehat{(YM)}[i]_w\subset \widehat{(YM)}[i]$, {\em observed quantum electromagnetic-Higgs PDE}, bounded by a quantum super partial differential relation $\widehat{(Goldstone)}[i]_w\subset \widehat{(YM)}[i]$, {\em observed quantum electromagnetic Goldstone-boundary}. An observed nonlinear quantum propagator $V\subset\widehat{(YM)}[i]$, crossing the observed quantum electromagnetic Goldstone-boundary loses (or acquires) the property to have an electric-charge gap. An electric neutral, connected, simply connected observed quantum particle, identified with a Cauchy data of $\widehat{(YM)}[i]$, cannot be contained into $\widehat{(YM)}[i]_w$.
Theorem \ref{non-conservation-of-quantum-electric-charge} proves that the quantum electric charge is not necessarily conserved by a nonlinear quantum propagator of $\widehat{(YM)}[i]$. In fact, the existence of {\em $Q$-exotic nonlinear quantum propagators} in $\widehat{(YM)}[i]$, encoding observed reactions that do not respect the conservation of the quantum electric-charge, is proved.
Theorem \ref{quantum-crossing-symmetry}: By using integral bordism groups of quantum super PDE's, a {\em quantum crossing symmetry theorem} is proved.
Theorem \ref{quantum-massive-photons-theorem}: Existence of solutions of quantum super Yang-Mills PDEs, $\widehat{(YM)}$, representing productions of {\em quantum massive photons}, is proved.\footnote{These can be identified with quantum neutral massive vector bosons. The annihilation electron-positron must necessarily produce an intermediate {\em quantum virtual massive photon}.}
Theorem \ref{a-quantum-massive-photons}: Theorem \ref{quantum-massive-photons-theorem} is generalized to other quantum electric-charged particles. Theorem \ref{existence-of-quantum-massive-neutrino} states existence of quantum massive neutrinos, i.e., there exist nonlinear quantum propagators of $\widehat{(YM)}$ encoding decays of massive quasi-particles into a couple (neutrino,antineutrino).\footnote{Quantum massive photons, quantum massive neutrinos and $a$-quantum massive photons could interpret the so-called ``dark matter" that nowadays is a spellbinding object of active research. This exotic matter is related to the geometric structure of $\widehat{(YM)}$ that identifies the subequation $\widehat{(Higgs)}$. A global solution $V\subset \widehat{(YM)}$, crossing the
quantum Goldstone-boundary of $\widehat{(Higgs)}$, acquires (or loses) mass. (See \cite{PRAS22}, Example \ref{massive-virtual-particles-dark-matter} and footnote at page \pageref{dark-matter}, concernig $\pi^+$-photoproduction.)} Theorem \ref{quarks-break-down}: It is proved that quarks cannot be considered fundamental particles, i.e., they can be broken-down. Theorem \ref{quantum-virtual-anomaly-massive-particles}: Existence of massive quasi-particles, with masses that apparently contradict the conservation of mass-energy, is proved. This is related to the new concept of {\em quantum time} (see Definition \ref{quantum-relativistic-observed-time}) that comes from the interaction between quantum relativistic frame (i.e., an observer) and the quantum system, (hence also to the topological structure of nonlinear quantum propagators).
Theorem \ref{de-confinement-criterion}: A criterion to identify solutions of $(\widehat{YM})$, encoding confined quantum systems, is proved. Lemma \ref{supersymmetric-partners-particles} and Lemma \ref{supersymmetric-partners-quantum-nonlinear-propagators} state existence of supersymmetric particles and supersymmetric reactions by means of integral bordism groups of $\widehat{(YM)}$. Theorem \ref{existence-quantum-majorana-neutrino}: Existence of {\em quantum Majorana neutrino} and a new quasi-particle, that we call {\em quantum Majorana neutralino}, for its similarity with the so-called neutralino, are obtained, by using algebraic topologic properties of $\widehat{(YM)}$.
\section{\bf Surgery in Quantum Super PDEs}\label{sec-surgery-quantum-super-pdes}
Let us assume as a prerequisite of this section the knowledge of some previous works by A. Pr\'astaro on quantum (super) PDEs. However, to fix ideas we also recall some fundamental definitions and results that are closely related to the subject considered here.
Let $A=A_0\times A_1$ be a quantum superalgebra in the sense of Pr\'astaro.
\begin{definition}
An $(m|n)$-dimensional bordism in the category $\mathfrak{Q}_S$ of quantum supermanifolds consists of data $(W;M_0,f_0;M_1,f_1)$, where $W$ is a compact quantum supermanifold of dimension $(m|n)$, $M_0$ and $M_1$ are closed $(m-1|n-1)$-dimensional quantum supermanifolds, $\partial W=N_0\sqcup N_1$, and $f_i:M_i\to N_i$, $i=0,1$, are quantum diffeomorphisms. An $(m|n)$-dimensional h-bordism (resp. s-bordism) is an $(m|n)$-dimensional bordism as above such that the inclusions $N_i\hookrightarrow W$, $i=0,1$, are homotopy equivalences (resp. simple homotopy equivalences). We will also simply denote by $(W;M_0,M_1)$ an $(m|n)$-dimensional bordism in the category $\mathfrak{Q}_S$.\footnote{In this paper we will also consider more general bordisms, i.e., $(m|n)$-dimensional compact quantum supermanifolds $V$ such that $\partial V=N_0\bigcup P\bigcup N_1$, where $N_0$, $P$ and $N_1$ are $(m-1|n-1)$-dimensional quantum supermanifolds such that $\partial P=\partial N_0\bigcup \partial N_1$. If $\partial N_0=\partial N_1=\varnothing$, then $P=\varnothing$. Considerations similar to the ones made in this section can be extended to these more general cases. See \cite{PRAS14,PRAS19}, Fig. \ref{figure-proof-quantum-superhandle-decomposition-quantum-nonlinear-propagator} and the next section.}
\end{definition}
\begin{theorem}[Quantum handle decomposition of nonlinear quantum propagator in quantum super PDEs]\label{quantum-superhandle-decomposition-quantum-nonlinear-propagator}
Let $\hat E_k\subset \hat J^k_{m+1|n+1}(W)$ be a quantum super PDE in the category $\mathfrak{Q}_S$ and let $V\subset \hat E_k$ be a compact solution of a boundary value problem, i.e., $\partial V=N_0\sqcup N_1$.\footnote{Let us recall that $V$ such that $\partial V=N_0\sqcup N_1$, is called {\em quantum non-linear propagator} between the two Cauchy data $N_0$ and $N_1$. This is the non-linear extension of the concepts of quantum propagator, usually used to quantize a classical field theory. For more details see Refs. \cite{PRAS19,PRAS21,PRAS22}.} Let us assume that $\hat E_k$ is formally integrable and completely integrable. Then there exists a quantum-super-handle-presentation {\em(\ref{quantum-superhandle-presentation-solution})} of the nonlinear quantum propagator $V$.
\begin{equation}\label{quantum-superhandle-presentation-solution}
V_{1}\bigcup V_{2}\bigcup\cdots\bigcup V_{s}\thickapprox V
\end{equation}
where $(V_j;M_{j-1},M_j)$ is an adjoint elementary cobordism with index $p_j|q_j$, such that $$0|0\le p_1|q_1\le p_2|q_2\le\cdots\le p_s|q_s\le m+1|n+1$$ and $M_0=N_0$, $M_s=N_1$. In {\em(\ref{quantum-superhandle-presentation-solution})}, the symbol $\thickapprox$ denotes homeomorphism.
\end{theorem}
\begin{proof}
Let $M$ be a quantum supermanifold of dimension $(m|n)$ and let us consider an embedding
$$\phi:\hat S^{p|q}\times\hat D^{m-p|n-q}\to M.$$
Let us consider the following $(m|n)$-dimensional quantum supermanifold
$$\scalebox{0.8}{$M'\equiv\left(M\setminus{\rm int}(\phi(\hat S^{p|q}\times\hat D^{m-p|n-q}))\right)\bigcup_{\phi(\hat S^{p|q}\times\hat S^{m-p-1|n-q-1})}\left(\hat D^{p+1|q+1}\times\hat S^{m-p-1|n-q-1}\right)$.}$$
We say that $M'$ is obtained from $M$ by a {\em $(p|q)$-surgery}, i.e., cutting out ${\rm int}(\hat S^{p|q}\times\hat D^{m-p|n-q})$ and gluing in $\hat D^{p+1|q+1}\times\hat S^{m-p-1|n-q-1}$. The process of surgery is related to cobordism and handle attaching ones. Let $(X,\partial X)$ be a $(m+1|n+1)$-dimensional quantum supermanifold with boundary $\partial X$, and let
$$\phi:\hat S^{p|q}\times\hat D^{m-p|n-q}\to \partial X\equiv M$$
be an embedding. Set
$$X'\equiv X\bigcup_{\phi(\hat S^{p-1|q-1}\times\hat D^{m-p+1|n-q+1})}\left(\hat D^{p|q}\times\hat D^{m-p+1|n-q+1}\right).$$
We call $X'$ obtained from $X$ by {\em attaching a $(m+1|n+1)$-dimensional quantum superhandle of index $p|q$}. One has
$$\scalebox{0.8}{$M'\equiv\partial X'=\left(\partial X\setminus{\rm int}(\phi(\hat S^{p-1|q-1}\times\hat D^{m-p+1|n-q+1}))\right)\bigcup_{\phi(\hat S^{p-1|q-1}\times\hat S^{m-p|n-q})}\left(\hat D^{p|q}\times\hat S^{m-p|n-q}\right)$.}$$
We say that $M'$ is obtained from $M$ by a quantum $(p-1|q-1)$-surgery.
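For orientation, it may help to recall the classical (commutative, non-super) model of this construction. The standard sphere decomposes as $S^m=\partial(D^{p+1}\times D^{m-p})=\left(S^p\times D^{m-p}\right)\bigcup_{S^p\times S^{m-p-1}}\left(D^{p+1}\times S^{m-p-1}\right)$, and performing a $p$-surgery on the standardly embedded $S^p\subset S^m$, i.e., cutting out ${\rm int}(S^p\times D^{m-p})$ and gluing in $D^{p+1}\times S^{m-p-1}$ along $S^p\times S^{m-p-1}$, yields
$$\left(D^{p+1}\times S^{m-p-1}\right)\bigcup_{S^p\times S^{m-p-1}}\left(D^{p+1}\times S^{m-p-1}\right)\cong S^{p+1}\times S^{m-p-1}.$$
The quantum super construction above follows the same pattern, with $\hat S^{p|q}$ and $\hat D^{p|q}$ in place of $S^p$ and $D^p$.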
The surgery allows us to obtain a cobordism
$$W\equiv \left(M\times \hat D^{1|1}\right)\bigcup_{\phi(\hat S^{p-1|q-1}\times\hat D^{m-p+1|n-q+1}\times(1,1))}\left(\hat D^{p|q}\times\hat D^{m-p+1|n-q+1}\right)$$
with $\partial W=M\sqcup M'$.\footnote{In order to fix ideas we report in Fig. \ref{figure-handle-attachment-vs-cobordism} some examples in dimension $(m|n)=(1|1),\, (2|2)$, showing the relation between attaching quantum superhandles and cobordism.
In Fig. \ref{figure-handle-attachment-vs-cobordism-on-non-closed-manifold} is represented the case of a $(2|2)$-dimensional quantum supermanifold $X$ with boundary $\partial X$. Let us recall that $\hat S^{m|n}\equiv A^{m|n}\bigcup\{\infty\}$ is the $(m|n)$-dimensional quantum supersphere and $\hat D^{m|n}\subset \hat S^{m|n}$, such that $\partial\hat D^{m|n}\cong \hat S^{m-1|n-1}$ is the quantum supermanifold called quantum $(m|n)$-dimensional superdisk. One has $A^{0|0}=A^0=\{0\}$ and $\partial\hat D^{1|1}\cong \hat S^{0|0}=\{0\}\bigcup\{\infty\}$. (For more information about see \cite{PRAS19}.)}
\begin{figure}
\caption{Relation between attaching quantum superhandles and cobordisms on non-closed quantum supermanifold. The picture is made taking a $(2|2)$-dimensional quantum supermanifold $X$ with boundary $\partial X$. Then $X'\equiv X\bigcup_{\hat S^{0|0}
\label{figure-handle-attachment-vs-cobordism-on-non-closed-manifold}
\end{figure}
\begin{figure}
\caption{Relation between attaching quantum superhandles and cobordisms. In the figure on the left, $\dim M=1|1=\dim M'$, $M=\hat S^{1|1}
\label{figure-handle-attachment-vs-cobordism}
\end{figure}
We shall use the following lemma.
\begin{lemma}\label{quantum superhandle-decomposition-bordism}
{\em 1)} Every bordism $(W;M,M')$ in the category $\mathfrak{Q}_S$, with $\dim W=(m+1|n+1)$, $\dim M=\dim M'=m|n$, has a quantum superhandle decomposition as the union of a finite sequence
$$(W;M,M')=(W_1;M_0,M_1)\bigcup(W_2;M_1,M_2)\bigcup\cdots(W_k;M_{k-1},M_k)$$
of adjoint elementary cobordisms $(W_j;M_{j-1},M_j)$ with index $p_j|q_j$, such that $$0|0\le p_1|q_1\le p_2|q_2\le\cdots\le p_k|q_k\le m+1|n+1$$ and $M_0=M$, $M_k=M'$.
{\em 2)} Closed $(m|n)$-dimensional quantum supermanifolds $M$, $M'$ are cobordant iff $M'$ can be obtained from $M$ by a sequence of surgeries.
{\em 3)} Every closed $(m|n)$-dimensional quantum supermanifold can be obtained from $\varnothing$ by attaching quantum superhandles.
\end{lemma}
\begin{proof}
See Theorem 2.19 in \cite{PRAS19}.\footnote{Handle decomposition of bordisms for manifolds was introduced by Stephen Smale \cite{SMALE}. Lemma \ref{quantum superhandle-decomposition-bordism} generalizes to quantum supermanifolds an analogous result for commutative manifolds. However, the proof of Lemma \ref{quantum superhandle-decomposition-bordism} is not based on Morse functions, (see, e.g., \cite{MILNOR2a}), but on the quantum CW-substitutes for quantum supermanifolds. }
\end{proof}
Let us remark that the proof of Lemma \ref{quantum superhandle-decomposition-bordism} is related to the quantum CW-substitute structure of nonlinear quantum propagators, hence the relations considered in this lemma are to be understood up to homeomorphism. On the other hand, the handle decomposition of the nonlinear quantum propagator $V$ is also related to the quantum cell decomposition that always holds for an $(m+1|n+1)$-dimensional quantum supermanifold, according to its quantum CW-structure. But the relation between a quantum superdisk $\hat D^{r|s}$ and an $(r|s)$-dimensional quantum superhandle $\hat h^{p|q}$ of index $p|q$ is in general a homeomorphism $\hat h^{p|q}\thickapprox \hat D^{p|q}\times \hat D^{r-p|s-q}$ that cannot be reduced to a diffeomorphism in the category $\mathfrak{Q}_S$.\footnote{For example, let us consider the category of complex manifolds, say $\mathfrak{C}$. There the quantum algebra is the $\mathbb{R}$-algebra of complex numbers $\mathbb{C}$, and the morphisms are holomorphic mappings, hence differentiable mappings having $\mathbb{C}$-linear derivatives. Diffeomorphisms in $\mathfrak{C}$ are usually called biholomorphic mappings. In fact, it is well known that there do not exist biholomorphic mappings between a unit ball in $\mathfrak{C}$, (here identified with a {\em quantum $m$-disk}, $\hat D^m$), and the complex $m$-disk, $D^m$, ({\em generalized Poincar\'e theorem} \cite{KRANTZ}).} In Tab. \ref{handle-decompositions-quantum-supermanifolds-examples} some useful examples of quantum superhandle decompositions of quantum supermanifolds are reported.
\begin{table}[h]
\renewcommand{Tab.}{Tab.}
\caption{Examples of quantum superhandle decompositions of quantum supermanifolds.}
\label{handle-decompositions-quantum-supermanifolds-examples}
\scalebox{0.8}{$
\begin{tabular}{|l|l|l|l|}
\hline
\hfil{\rm{\footnotesize Name}}\hfil&\hfil{\rm{\footnotesize Symbol}}\hfil&\hfil{\rm{\footnotesize Handle-decomposition}}\hfil&\hfil{\rm{\footnotesize Figures}}\hfil\\
\hline
\hline
\hfil{\rm{\footnotesize Quantum $(m|n)$-supersphere}}\hfil&\hfil{\rm{\footnotesize $\hat S^{m|n}$}}\hfil&\hfil{\rm{\footnotesize $\hat h^{0|0}\bigcup\hat h^{m|n}$}}\hfil&\hfil{\rm{\footnotesize \includegraphics[width=1cm]{figure-handle-decompositions-quantum-supermanifolds-examples-a.eps}}}\hfil\\
\hline
\hfil{\rm{\footnotesize Quantum $(m+1|n+1)$-supercobordism}}\hfil&\hfil{\rm{\footnotesize $(\hat D^{m+1|n+1};\varnothing,\hat S^{m|n})$}}\hfil&\hfil{\rm{\footnotesize $\hat h^{0|0}$}}\hfil&\hfil{\rm{\footnotesize \includegraphics[width=1cm]{figure-handle-decompositions-quantum-supermanifolds-examples-b.eps}}}\hfil\\
\hline
\hfil{\rm{\footnotesize Quantum $(2|2)$-supertorus}}\hfil&\hfil{\rm{\footnotesize $\hat T^{1|1}=\hat S^{1|1}\times\hat S^{1|1}$}}\hfil&\hfil{\rm{\footnotesize $\hat h^{0|0}\bigcup \hat h^{1|1}\bigcup\hat h^{1|1}\bigcup \hat h^{2|2}$}}\hfil&\hfil{\rm{\footnotesize \includegraphics[width=1cm]{figure-handle-decompositions-quantum-supermanifolds-examples-c.eps}}}\hfil\\
\hline
\hfil{\rm{\footnotesize Quantum punctured M\"obius band}}\hfil&\hfil{\rm{\footnotesize $(\hat M_{ob}\setminus \hat D^{2|2};\hat S^{1|1},\hat S^{1|1})$}}\hfil&\hfil{\rm{\footnotesize $\hat S^{1|1}\times\hat D^{1|1}\bigcup\hat h^{1|1}$}}\hfil&\hfil{\rm{\footnotesize \includegraphics[width=1cm]{figure-handle-decompositions-quantum-supermanifolds-examples-d.eps}}}\hfil\\
\hline
\end{tabular}$}
\end{table}
Since $V$ is an $(m+1|n+1)$-dimensional bordism between $(m|n)$-dimensional quantum supermanifolds $N_0$ and $N_1$, we can determine its handle decomposition (\ref{quantum-superhandle-presentation-solution}), which identifies intermediate Cauchy quantum supermanifolds $M_{j}$, each obtained from the previous one $M_{j-1}$ by means of an integral surgery. Let us emphasize that each $M_{j}$ must necessarily be an integral quantum supermanifold having the same dimension as $N_0$, since it belongs to the same bordism class $[N_0]$. Furthermore, since each quantum supermanifold $M_j$ is contained in $V$, it follows that $M_{j}$ can be identified with a Cauchy $(m|n)$-dimensional quantum supermanifold of $\hat E_k$, i.e., an admissible $(m|n)$-dimensional quantum integral supermanifold of $\hat E_k\subset \hat J^k_{m+1|n+1}(W)$. Therefore $M_j$ belongs to the same bordism class $[N_0]\in\Omega_{m|n}^{\hat E_k}$ as $N_0$ in $\hat E_k$. The situation is pictured in Fig. \ref{figure-proof-quantum-superhandle-decomposition-quantum-nonlinear-propagator}.
\end{proof}
\begin{figure}
\caption{Representation of quantum superhandle decomposition of nonlinear quantum propagator $V=V_1\bigcup_{M_1}
\label{figure-proof-quantum-superhandle-decomposition-quantum-nonlinear-propagator}
\end{figure}
\begin{definition}[Quantum exchangions and quantum virtual particles]\label{quantum-integral-handle-exchange}
Let $\hat E_k\subset \hat J^k_{m+1|n+1}(W)$ be a quantum super PDE in the category $\mathfrak{Q}_S$ and let $V\subset \hat E_k$ be a compact solution of a boundary value problem, i.e., $\partial V=N_0\sqcup N_1$. We call {\em quantum exchangions} the quantum integral superhandles that are attached to obtain the intermediate Cauchy supermanifolds $M_{j}$ defined in the proof of Theorem \ref{quantum-superhandle-decomposition-quantum-nonlinear-propagator}. We call {\em quantum virtual particles} the intermediate quantum supermanifolds $M_j$ appearing there. (See, also Fig. \ref{figure-proof-quantum-superhandle-decomposition-quantum-nonlinear-propagator}.)
\end{definition}
\begin{remark}
Note the structural difference between quantum exchangions and quantum virtual particles. In fact, the former are quantum $(m+1|n+1)$-chains, hence having the same dimension as solutions of $\hat E_k$. Instead, the latter are quantum $(m|n)$-chains, hence having the same dimension as the initial and final Cauchy data.\footnote{It is useful to emphasize that quantum exchangions and quantum virtual particles well interpret the meaning of ``exchange particles" introduced with Feynman diagrams (1949) to represent particle interactions, like photons (electromagnetic interactions), $W$ and $Z$ particles (weak interactions), gluons (strong interactions) and gravitons (gravitational interactions). (See, e.g., Refs. \cite{FEYNMAN, KAISER}.) Furthermore, in the framework of Theorem \ref{quantum-superhandle-decomposition-quantum-nonlinear-propagator} we understand also that {\em Regge trajectories} and {\em Regge resonances} (1959), introduced in the phenomenological theory of strong reactions by the pioneering works of T. Regge \cite{REGGE} (and principally developed also by R. Blankenbecler and M. L. Goldberger \cite{BLANKE-GOLD}, G. F. Chew and S. C. Frautschi \cite{CHEW-FRAUTSCHI}, V. N. Gribov \cite{GRIBOV} and G. Veneziano \cite{VENEZIANO}), find their interpretation in terms of quantum exchangions and quantum virtual particles.}
\end{remark}
\begin{proposition}\label{quantum-exchangions-and-cobordism-class}
A quantum exchangion does not change the integral bordism class of the intermediate Cauchy supermanifolds: each $M_{j}$ belongs to the same class as $N_0$.
\end{proposition}
\begin{proof}
In fact $N_0$ and $M_{j}$ must necessarily belong to the same quantum integral bordism class in $\Omega_{m|n}^{\hat E_k}$, since $\partial V_{1}=N_0\sqcup M_{1}$, $\partial V_{2}=M_1\sqcup M_{2}$, and so on.
\end{proof}
\begin{proposition}
If $N_0$ is the disjoint union of two (or more) components, e.g., $N_0=a\sqcup b$, then a quantum exchangion can change the quantum numbers of $a$ and $b$, even if the total quantum number of $a\sqcup b$ does not change.
\end{proposition}
\begin{proof}
This follows directly from Proposition \ref{quantum-exchangions-and-cobordism-class}.
\end{proof}
\begin{definition}[Fundamental quantum particles]\label{fundamental-quantum-particles}
For a quantum system, encoded by a quantum PDE $\hat E_k\subset \hat J^k_{m+1|n+1}(W)$, we define {\em quantum $(m|n)$-particles} to be admissible quantum integral $(m|n)$-chains $N\subset \hat E_k\subset \hat J^k_{m+1|n+1}(W)$.
We call {\em fundamental quantum $(m|n)$-particles} for the quantum PDE $\hat E_k$ those that cannot be decomposed into other quantum particles.
\end{definition}
\begin{proposition}
The fundamental quantum $(m|n)$-particles of a quantum PDE $\hat E_k\subset \hat J^k_{m+1|n+1}(W)$ are identified with the bordism classes of the weak integral bordism group $\Omega_{m|n,w}^{\hat E_k}$, (resp. singular integral bordism group $\Omega_{m|n,s}^{\hat E_k}$, resp. integral bordism group $\Omega_{m|n}^{\hat E_k}$). Accordingly we distinguish {\em weak-fundamental quantum $(m|n)$-particles}, {\em singular-fundamental quantum $(m|n)$-particles}, resp. {\em fundamental quantum $(m|n)$-particles} for the quantum PDE $\hat E_k\subset \hat J^k_{m+1|n+1}(W)$. Furthermore the exact commutative diagram {\em(\ref{relations-between-fundamental-quantum-particles-classes})} shows how such fundamental quantum particles are related.
\begin{equation}\label{relations-between-fundamental-quantum-particles-classes}
\begin{array}{ccccccccc}
&&{\scriptstyle 0}&&{\scriptstyle 0}&&{\scriptstyle 0}&&\\ &&\downarrow&&\downarrow&&\downarrow&&\\
{\scriptstyle 0}&\to&{\scriptstyle K^{\hat E_k}_{m-1|n-1,w/(s,w)}}&\to&{\scriptstyle K^{\hat E_k}_{m-1|n-1,w}}&\to&
{\scriptstyle K^{\hat E_k}_{m-1|n-1,s,w}}&\to&{\scriptstyle 0}\\
&&\downarrow&&\downarrow&&\downarrow&&\\
{\scriptstyle 0}&\to&{\scriptstyle K^{\hat E_k}_{m-1|n-1,s}}&\to&{\scriptstyle \Omega^{\hat E_k}_{m-1|n-1}}&\to&
{\scriptstyle \Omega^{\hat E_k}_{m-1|n-1,s}}&\to&{\scriptstyle 0}\\
&&\downarrow&&\downarrow&&\downarrow&&\\
&&{\scriptstyle 0}&\to&{\scriptstyle \Omega^{\hat E_k}_{m-1|n-1,w}}&\to&{\scriptstyle \Omega^{\hat E_k}_{m-1|n-1,w}}&\to
&{\scriptstyle 0}\\
&&&&\downarrow&&\downarrow&&\\
&&&&{\scriptstyle 0}&&{\scriptstyle 0}&&\\
\end{array}
\end{equation}
\end{proposition}
\begin{proof}
The proof follows directly from Definition \ref{fundamental-quantum-particles}, Theorem \ref{quantum-superhandle-decomposition-quantum-nonlinear-propagator} and Proposition 3.3 in \cite{PRAS14}.
\end{proof}
\begin{remark}
Let us emphasize that the concept of {\em fundamental quantum particles} is strictly related to the quantum (super) PDE $\hat E_k$, or in other words, to the quantum theory encoded by $\hat E_k$. If we forget the framework defined by $\hat E_k$, and discuss fundamental quantum particles in general, we see that the unique fundamental one is $\varnothing$. In fact, according to Lemma \ref{quantum superhandle-decomposition-bordism}(3), we can build any quantum particle just starting from $\varnothing$ and adding quantum handles or quantum cells.\footnote{From this geometric structural point of view, we clearly understand that also some particles that are usually considered fundamental cannot be considered so. For example electrons and quarks are not quantum fundamental particles when considered as geometric objects. This agrees with experimental evidence that so-called ``{\em quasiparticles}" with fractional charges, or with separated quantum numbers (e.g., the decay $e^-\to {\rm spinon}+{\rm orbiton}$), were detected. (See, e.g. \cite{FRADKIN, SCHLAPPA-SCHMITT}.) Furthermore, it is important to distinguish the concept of quantum fundamental particle from that of quantum stable particle. Without any quantum theory, i.e., without any quantum (super) PDE, for the former the unique one is $\varnothing$, and the latter is meaningless. On the other hand, with respect to a quantum (super) PDE $\hat E_k$ we understand that the concept of stability is just related to the integral bordism groups of $\hat E_k$, hence from this point of view the concepts of ``fundamental particle" and ``stable particle" become related to each other.}
\end{remark}
\begin{example}[Two-body high energy reactions]\label{two-body-high-energy-reactions-examples}
Let us consider the following typical two-body strong reactions:
$\pi^-+p\to\pi^0+n$, $\pi^++p\to\pi^++p$, $\pi^++p\to p+\pi^+$. The first reaction can be considered as obtained with a nonlinear quantum propagator having an intermediate virtual neutral $\rho$ meson, with charge exchange. In the second reaction the nonlinear quantum propagator has an intermediate virtual doubly electrically charged particle and a neutral quantum pomeron $p_{om}$, a quantum exchangion $\hat h^{1|1}$, representing an elastic scattering. In the third reaction the nonlinear quantum propagator can be considered to have an intermediate virtual neutron $n$, in backward scattering. In other words, in all these reactions the final Cauchy data can be obtained from the initial ones by means of two adjoint elementary bordisms.\footnote{The quantum virtual particles involved there can be considered generalizations, in the geometric theory of quantum (super)PDEs, of the objects called reggeons in analogous reactions considered in the framework of the phenomenological theory of strong reactions developed in the first two decades of the second half of the last century. (See, e.g., \cite{EDEN}.) However, let us emphasize that the {\em pomeron} is better represented as a quantum exchangion than as a quantum virtual particle. In fact, the pomeron, as usually considered in the phenomenological theory of strong reactions, does not carry charge \cite{EDEN}. On the other hand, in the elastic scattering $\pi^++p\to\pi^++p$ it is impossible for the quantum virtual particle to be a neutral one if the nonlinear quantum propagator is not exotic. (See \cite{PRAS29}.)}
\end{example}
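As a quick bookkeeping illustration of the two preceding propositions for the first reaction of Example \ref{two-body-high-energy-reactions-examples} (using the standard electric charge $Q$ and baryon number $B$ assignments, which are recalled here only as an illustration and are not part of the geometric construction), one has
$$\pi^-+p\to\pi^0+n:\qquad Q:\ (-1)+(+1)=0+0,\qquad B:\ 0+1=0+1,$$
so the quantum numbers of the single components $a=\pi^-$ and $b=p$ change through the charge-exchange exchangion, while the total quantum numbers of $a\sqcup b$ are preserved.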
\begin{example}[Proton-proton chain reactions]\label{proton-proton-chain-reactions-examples}
In the following we shall see that solutions representing proton-proton chain reactions, typical in the Sun for the production of ${}^4_2He$, can be encoded by nonlinear quantum propagators admitting handle decompositions. In general one can classify such reactions into five basic groups.\footnote{Let us recall the nucleus notation ${}^A_ZX$, where $X$ denotes the chemical symbol, $A=Z+N$ is the mass number, $Z$ is the atomic number (number of protons) and $N$ is the number of neutrons.}
{\em (I) Production of ${}^4_2He$ with intermediate particles $\{e^+,\, \nu_e,\, \gamma,\, {}^2_1H\equiv {}^2_1D,\, {}^3_2He\}$. \footnote{In the Sun this chain reaction is dominant in the temperature range $10-14\, MK$. In the presence of electrons, one has also the secondary reaction $e^++e^-\to 2\gamma + 1.02\, MeV$.}}
\begin{equation}\label{proton-proton-chain-reactions-examples-a}
\left\{
\begin{array}{l}
{}^1_1H+{}^1_1H\to e^++\nu_e+{}^2_1H+\, 0.42\, MeV\\
{}^2_1D+{}^1_1H\to {}^3_2He+\gamma+ 5.49\, MeV\\
{}^3_2He+{}^3_2He\to 2\, {}^1_1H+{}^4_2He+\, 12.86\, MeV\\
\end{array}
\right.
\end{equation}
{\em (II) Production of ${}^4_2He$ with intermediate particles $\{\nu_e,\, \gamma,\, {}^7_4Be,\, {}^7_3Li\}$. \footnote{In the Sun this chain reaction is dominant in the temperature range $14-23\, MK$.}}
\begin{equation}\label{proton-proton-chain-reactions-examples-b}
\left\{
\begin{array}{l}
{}^3_2He+{}^4_2He\to {}^7_4Be+\gamma\\
{}^7_4Be+e^-\to{}^7_3Li+\nu_e+\, 0.383-0.861\, MeV\\
{}^7_3Li+{}^1_1H\to 2\, {}^4_2He\\
\end{array}
\right.
\end{equation}
{\em (III) Production of ${}^4_2He$ with intermediate particles $\{e^+,\, \nu_e,\, \gamma,\, {}^7_4Be,\, {}^8_4Be,\, {}^8_5B\}$. \footnote{In the Sun this chain reaction is dominant at temperatures exceeding $23\, MK$.}}
\begin{equation}\label{proton-proton-chain-reactions-examples-c}
\left\{
\begin{array}{l}
{}^3_2He+{}^4_2He\to{}^7_4Be+\gamma\\
{}^7_4Be+{}^1_1H\to {}^8_5B+\gamma\\
{}^8_5B\to {}^8_4Be+e^++\nu_e\\
{}^8_4Be\to 2\, {}^4_2He\\
\end{array}
\right.
\end{equation}
{\em (IV) Production of ${}^4_2He$ from the ${}^3_2He$ interaction with proton.}
\begin{equation}\label{proton-proton-chain-reactions-examples-d}
{}^3_2He+{}^1_1H\to{}^4_2He+e^++\nu_e+\, 18.8\, MeV.
\end{equation}
{\em (V) Production of deuterium from the electron capture by two protons.}
\begin{equation}\label{proton-proton-chain-reactions-examples-e}
{}^1_1H+e^-+{}^1_1H\to{}^2_1D+\nu_e.
\end{equation}
In Fig. \ref{figure-proton-proton-chain-reactions-examples} we represent some solutions corresponding to the reactions in {\em(I)}. By using the definition introduced in \cite{PRAS22}, we can say that in both reactions represented there, the nonlinear quantum propagator $V$ is a {\em quantum matter-solution}. Furthermore, in the second reaction, the handle decomposition of $V$ identifies a {\em Goldstone piece} $V_{\partial}=V\bigcap(\widehat{Goldstone})$. This is the part arriving at $\gamma$.
\begin{figure}
\caption{Representation of some solutions of $p+p$ reaction chain. From left to right. ${}
\label{figure-proton-proton-chain-reactions-examples}
\end{figure}
\end{example}
\section{\bf Surgery in Quantum Super Yang-Mills PDEs}\label{sec-surgery-quantum-super-yang-mills-pdes}
Let us emphasize, now, that in the reactions considered in the above section we have not really specified the contribution of a specific quantum PDE, i.e., we have not considered some specific integral bordism group. On the other hand, without the introduction of these fundamental structures, all we can do is reproduce some phenomenological theory, like dispersion relations and Regge models, which cannot be truly predictive dynamical theories. Therefore, to dynamically encode, for example, proton-proton reactions, we shall use the quantum super Yang-Mills PDE and solve suitable boundary value problems. We have formulated this general theory in some of our previous works \cite{PRAS21,PRAS22}. In the following we shall consider some particular applications.
\begin{definition}[Quantum scattering processes in $\widehat{(YM)}$]\label{quantum-scattering-processes-in-quantum-yang-mills}
We call a {\em quantum scattering process} in $\widehat{(YM)}$ any boundary value problem where two disjoint Cauchy data $N_0,\, N_1\subset\widehat{(YM)}$ are fixed. The first is called the {\em initial Cauchy data} and the second the {\em final Cauchy data}. A {\em solution of a quantum scattering process} in $\widehat{(YM)}$ is any nonlinear quantum propagator $V\subset\widehat{(YM)}$, such that $\partial V=N_0\sqcup N_1$, if $\partial N_0=\partial N_1=\varnothing$, otherwise $\partial V=N_0\sqcup P\sqcup N_1$, with $P\subset\widehat{(YM)}$ an integral manifold such that $\partial P=\partial N_0\sqcup\partial N_1$.
\end{definition}
\begin{theorem}[Quantum scattering processes in $\widehat{(YM)}$ on $4$-dimensional quantum super Minkowskian manifold]\label{quantum-scattering-processes-in-quantum-yang-mills-d-4-quantum-super-minkowski}
Let us assume that the base manifold $M$ of $\widehat{(YM)}$ is a $(4|4)$-dimensional quantum super Minkowski manifold.
Then $\widehat{(YM)}$ is a quantum extended crystal PDE. Furthermore, if we consider admissible only integral boundary manifolds, with
orientable classic limit, and with zero characteristic quantum
supernumbers, ({\em full admissibility hypothesis}), one has:
$\Omega_{3|3}^{\widehat{(YM)}}=0$, and $\widehat{(YM)}$ becomes a
quantum $0$-crystal super PDE. Hence we get existence of global
$Q^\infty_w$ solutions for any boundary condition of class
$Q^\infty_w$.
{\em Elementary nonlinear quantum propagators} $V$ are $(4|4)$-dimensional quantum supermanifolds with boundary $\partial V=N_0\bigcup P\bigcup N_1$, such that $V$ is homeomorphic to $\hat D^{4|4}$. One can classify $V$ on the basis of its boundary $\partial V$. This can be diffeomorphic to a quantum exotic supersphere $\hat\Sigma^{3|3}$ or to a quantum supersphere $\hat S^{3|3}$.\footnote{In general nonlinear quantum propagators are not elementary ones, but they can be decomposed into elementary ones.}
\end{theorem}
\begin{proof}
In $D=4$, the usual $N$-supersymmetric extension $\mathfrak{g}$ of the
Poincar\'e algebra $\mathfrak{p}=\mathfrak{s}\mathfrak{o}(1,3)\oplus
\mathfrak{t}$, is a $\mathbb{Z}_2$-graded vector space
$\mathfrak{g}=\mathfrak{g}_0\oplus\mathfrak{g}_1$, with a graded Lie
bracket, such that $\mathfrak{g}_0=\mathfrak{p}\oplus \mathfrak{b}$,
where $\mathfrak{b}$ is a reductive Lie algebra, such that its
self-adjoint part is the tangent space to a real compact Lie
group.\footnote{A {\em reductive} Lie algebra is the sum of a
semisimple and an abelian Lie algebra. Since a {\em semisimple} Lie
algebra is the direct sum of simple algebras, i.e., non-abelian Lie
algebras, $\mathfrak{l}_i$, where the only ideals are $\{0\}$ and
$\{\mathfrak{l}_i\}$, it follows that $\mathfrak{b}$ can be
represented in the form
$\mathfrak{b}=\mathfrak{a}\oplus\sum_i\mathfrak{l}_i$.} Furthermore
$\mathfrak{g}_1=(\frac{1}{2},0)\otimes
\mathfrak{s}\oplus(0,\frac{1}{2})\otimes\mathfrak{s}^*$, where
$(\frac{1}{2},0)$ and $(0,\frac{1}{2})$ are specific representations
of the Poincar\'e algebra. Both components are conjugate to each
other under the $*$ conjugation. $\mathfrak{s}$ is a $N$-dimensional
complex representation of $\mathfrak{b}$ and $\mathfrak{s}^*$ its
dual representation.\footnote{If $\rho:\mathfrak{g}\to L(V)$ is a
representation of a Lie algebra, its dual $\bar\rho:\mathfrak{g}\to
L(\bar V)$, acting on the dual space $\bar V$, is defined by
$\bar\rho(u)=\overline{-\rho(u)}$, $\forall u\in\mathfrak{g}$.} Note
also that the Lie bracket for the odd part is usually denoted by
$\{,\}$ in theoretical physics. Then with such a notation one has
\begin{equation}\label{bracket-odd-part}
\{Q^i_\alpha,Q^j_\beta\}=\delta^{ij}(\gamma^\mu
C)_{\alpha\beta}P_\mu+U^{ij}(C)_{\alpha\beta}+V^{ij}(C\gamma_5)_{\alpha\beta}
\end{equation}
where $U^{ij}=-U^{ji}$, $V^{ij}=-V^{ji}$ are the $(N-1)N$ central
charges, $C$ is the (antisymmetric) charge conjugation matrix,
$(Q^i_\alpha)_{i=1,\dots,N}$, are the $N$ Majorana spinor
supersymmetry charge generators. The dynamical components
$\hat\mu^i$, $i=1,\dots,N$, of the quantum fundamental field,
corresponding to the generators $Q^i$, are called {\em quantum
gravitinos}. So in a quantum $N$-SG-Yang-Mills PDE, one
distinguishes $N$ quantum gravitino types, (and $(N-1)N$ central
charges).
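For instance (a standard specialization, recalled here only as an illustration), for $N=1$ there are no central charges, since $(N-1)N=0$, and the bracket (\ref{bracket-odd-part}) reduces to
$$\{Q_\alpha,Q_\beta\}=(\gamma^\mu C)_{\alpha\beta}P_\mu,$$
so that in a quantum $1$-SG-Yang-Mills PDE one distinguishes a single quantum gravitino type.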
Then a quantum superextension of $\mathfrak{g}$ is
$A\otimes_{\mathbb{R}}\mathfrak{g}$, where $A$ is a quantum superalgebra.
This can be taken $A\subseteq L(\mathcal{H})$, where $\mathcal{H}$ is a super-Hilbert space.
(See also Refs.\cite{PRAS14}.)
\begin{table}[h]
\caption{Supersymmetric semi-simple tensor extension of the Poincar\'e algebra in $D=4$.}
\label{supersymmetric-semi-simple-extension-poincare-algebra-d-4}
\begin{tabular}{|l|}
\hline
{\footnotesize$ [J_{\alpha\beta},J_{\gamma\delta}]=\eta
_{\beta\gamma} J_{\alpha\delta}+\eta
_{\alpha\delta}J_{\beta\gamma}-\eta
_{\alpha\gamma}J_{\beta\delta}-\eta_{\beta\delta}J_{\alpha\gamma},\quad [P_\alpha,P_\beta]=cZ_{\alpha\beta}$}\\
{\footnotesize$ [J_{\alpha\beta},P_\gamma]=\eta_{\beta\gamma}P_\alpha-\eta_{\alpha\gamma}P_\beta,\quad
[J_{\alpha\beta},Z_{\gamma\delta}]=\eta_{\alpha\delta}Z_{\beta\gamma}+\eta_{\beta\gamma}Z_{\alpha\delta}-
\eta_{\alpha\gamma}Z_{\beta\delta}-\eta_{\beta\delta}Z_{\alpha\gamma}$}\\
{\footnotesize$ [Z_{\alpha\beta},P_\gamma]=\frac{4a^2}{c}(\eta_{\beta\gamma}P_\alpha-\eta_{\alpha\gamma}P_\beta),\quad
[Z_{\alpha\beta},Z_{\gamma\delta}]=\frac{4a^2}{c}(\eta_{\alpha\delta}Z_{\beta\gamma}+\eta_{\beta\gamma}Z_{\alpha\delta}-
\eta_{\alpha\gamma}Z_{\beta\delta}-\eta_{\beta\delta}Z_{\alpha\gamma})$}\\
{\footnotesize$[J_{\alpha\beta},Q_\gamma]=-(\sigma_{\alpha\beta}Q)_\gamma,\quad
[P_\alpha,Q_\gamma]=a(\gamma_\alpha Q)_\gamma,\quad
[Z_{\alpha\beta},Q_\gamma]=-\frac{4a^2}{c}(\sigma_{\alpha\beta}Q_\gamma)$}\\
{\footnotesize$
[Q_\alpha,Q_\beta]=-b[\frac{2a}{c}(\gamma^\delta C)_{\alpha\beta}P_\delta
+(\sigma^{\gamma\delta}C)_{\alpha\beta}Z_{\gamma\delta}]$}\\
\hline
\end{tabular}
\end{table}
In Tab. \ref{supersymmetric-semi-simple-extension-poincare-algebra-d-4} the supersymmetric semi-simple tensor extension of the Poincar\'e algebra in $D=4$ is also reported.
There $a$, $b$ and $c$ are constants. This algebra admits the following splitting:
$\mathfrak{s}\mathfrak{o}(3,1)\oplus \mathfrak{o}\mathfrak{s}\mathfrak{p}(1,4)$, where
$\mathfrak{s}\mathfrak{o}(3,1)$ is the $4$-dimensional Lorentz algebra and $\mathfrak{o}\mathfrak{s}\mathfrak{p}(1,4)$
is the orthosymplectic algebra. Then, by considering the quantum superextension
$A\otimes_{\mathbb{R}}[\mathfrak{s}\mathfrak{o}(3,1)\oplus \mathfrak{o}\mathfrak{s}\mathfrak{p}(1,4)]$,
where $A$ is a quantum superalgebra, and with respect to the splitting $\hat\mu={}_{\circledR}\hat\mu+{}_{\copyright}\hat\mu+{}_{\maltese}\hat\mu$ of the fundamental field $\hat\mu$, we get:
\begin{equation}\label{splitted-fundamental-quantum-field-example}
\left\{
\begin{array}{l}
{}_{\circledR}\hat\mu=P_\alpha\hat\theta^\alpha_\gamma dx^\gamma\\
{}_{\copyright}\hat\mu=J_{\alpha\beta}\hat\omega^{\alpha\beta}_\gamma dx^\gamma\\
{}_{\maltese}\hat\mu=[\overline{Z}_{\alpha\beta}\hat A_\gamma^{\alpha\beta}+Q_{\alpha i}\phi^{\alpha i}_\gamma ]dx^\gamma.\\
\end{array}
\right.
\end{equation}
The dynamic equations are summarized in Tab. \ref{local-expression-yang-mills-and-bianchi-identity}.
\begin{table}[h]
\caption{Local expression of
$\widehat{(YM)}\subset J\hat D^2(W)$ and Bianchi identity $(B)\subset J\hat D^2(W)$.}
\label{local-expression-yang-mills-and-bianchi-identity}
\begin{tabular}{|l|c|}
\hline
{\footnotesize\rm(Field equations)\hskip 1cm $E^A_{K}\equiv-(\partial_{B}.\hat R^{BA}_K)+[\widehat
C^H_{KR}\hat\mu^R_{C},\hat R^{[AC]}_H]_+=0$}&{\footnotesize\rm$\widehat{(YM)}$}\\
\hline \hline
\hfil{\footnotesize\rm $\hat R^K_{A_1A_2}=\left[(\partial X_{
A_1}.\hat\mu^K_{A_2})+\frac{1}{2}\widehat{C}{}^K_{IJ}[\hat\mu^I_{A_1},\hat\mu^J_{A_2}]_+\right]$}\hfil&{\footnotesize\rm(Fields)}\\
\hline
{\footnotesize\rm(Bianchi identities)\hskip 0.25cm $B^K_{HA_1A_2}\equiv(\partial X_{H}.\hat
R^K_{A_1A_2})+\frac{1}{2} \widehat{C}{}^K_{IJ}[\bar\mu^I_{H},\hat R^J_{A_1A_2}]_+=0$}&{\footnotesize\rm$(B)$}\\
\hline
\multicolumn {2}{l}{\footnotesize\rm$\hat R^K_{A_1A_2}:\Omega_1\subset J\hat D(W)\to\mathop{\widehat{A}}\limits^2;\quad
B^K_{HA_1A_2}:\Omega_2\subset J\hat D^2(W)\to\mathop{\widehat{A}}\limits^3;\quad E^A_{K}:\Omega_2\subset
J\hat D^2(W)\to\mathop{\widehat{A}}\limits^{3}.$}\\ \end{tabular}
\end{table}
We call {\em quantum graviton} a quantum metric $\widehat{g}$
obtained from a solution $\hat\mu$ of $\widehat{(YM)}$, via the
corresponding quantum vierbein.
Since $H_3(M;\mathbb{K})=0$, we get
\begin{equation}\label{triviality-integral-bordism-group}
\left\{
\begin{array}{ll}
\Omega_{3|3,w}^{\widehat{(YM)}}&\cong \Omega_{3|3,s}^{\widehat{(YM)}}\\
&\cong A_0\bigotimes_{\mathbb{K}}H_3(W;\mathbb{K})\bigoplus A_1\bigotimes_{\mathbb{K}}H_3(W;\mathbb{K})\\
&\cong A_0\bigotimes_{\mathbb{K}}H_3(M;\mathbb{K})\bigoplus A_1\bigotimes_{\mathbb{K}}H_3(M;\mathbb{K})=0\\ \end{array}
\right.
\end{equation}
So $\widehat{(YM)}$ is a quantum extended crystal super PDE. However, in general, $\widehat{(YM)}$ is not a quantum $0$-crystal super PDE. In fact one has the following short exact sequence
$$\xymatrix{0\ar[r]&\ker(j)\ar[r]&\Omega_{3|3}^{\widehat{(YM)}}\ar[r]^{j}&\Omega_{3|3,s}^{\widehat{(YM)}}\ar[r]&0 }$$
hence $\Omega_{3|3}^{\widehat{(YM)}}\cong \ker(j)\not=0$. Note that $\ker(j)$ is made of $[N]\in\Omega_{3|3}^{\widehat{(YM)}}$ such that $N=\partial V$, where $V$ is some $(4|4)$-dimensional quantum supermanifold identified with a submanifold of $\hat J^2_{4|4}(W)$. However,
if we consider admissible only integral boundary manifolds, with
orientable classic limit, and with zero characteristic quantum
supernumbers, ({\em full admissibility hypothesis}), one has:
$\Omega_{3|3}^{\widehat{(YM)}}=0$, and $\widehat{(YM)}$ becomes a
quantum $0$-crystal super PDE. Hence we get existence of global
$Q^\infty_w$ solutions for any boundary condition of class
$Q^\infty_w$.
Then we get the exact
commutative diagram (\ref{Yang-Mills-Reinhart-bordism-groups-relation}). (For notation see \cite{PRAS21, PRAS22}.)
\begin{equation}\label{Yang-Mills-Reinhart-bordism-groups-relation}
\xymatrix{0\ar[r]&K^{\widehat{(YM)}}_{3|3;2}\ar[r]&\Omega_{3|3}^{\widehat{(YM)}}\ar[r]&
\mathop{\Omega}\limits_c{}_{6}^{\widehat{(YM)}}\ar[d]\ar[r]\ar[dr]&0&\\
&0\ar[r]&K^\uparrow_{6}\ar[r]&\Omega^\uparrow_{6}\ar[r]&
\Omega_{6}\ar[r]& 0\\}
\end{equation}
Taking into account the result by Thom on the unoriented cobordism
groups \cite{THOM}, we can calculate
$\Omega_6\cong\mathbb{Z}_2\bigoplus\mathbb{Z}_2\bigoplus\mathbb{Z}_2$.
Then, we can represent $\Omega_6$ as a subgroup of a $3$-dimensional
crystallographic group type $[G(3)]$. In fact, we can consider the
amalgamated subgroup $D_2\times\mathbb{Z}_2\star_{D_2}D_4$, and the
monomorphism $\Omega_6\to D_2\times\mathbb{Z}_2\star_{D_2}D_4$,
given by $(a,b,c)\mapsto(a,b,b,c)$. Alternatively we can also consider
$\Omega_6\to D_4\star_{D_2}D_4$. (See Appendix C in
\cite{PRAS20} for amalgamated subgroups of $[G(3)]$.) In any case the
crystallographic dimension of $\widehat{(YM)}$ is $3$ and the
crystallographic space group types are $D_{2d}$ or $D_{4h}$, belonging
to the tetragonal syngony.
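As a quick cross-check of the rank quoted above (a minimal counting sketch, assuming only Thom's classical result that the unoriented cobordism ring is a $\mathbb{Z}_2$-polynomial algebra with one generator in each degree not of the form $2^s-1$), one can enumerate the degree-$6$ monomials in those generators:
\begin{verbatim}
# Count degree-6 monomials in Thom's generators x_i (i >= 2, i != 2^s - 1):
# their number is the Z_2-rank of the unoriented cobordism group Omega_6.
def is_generator_degree(i):
    # degrees 3, 7, 15, ... (i.e. 2^s - 1) carry no polynomial generator
    n = i + 1
    return i >= 2 and (n & (n - 1)) != 0

gens = [i for i in range(2, 7) if is_generator_degree(i)]  # [2, 4, 5, 6]

def partitions(total, parts):
    # multisets of generator degrees summing to 'total'
    if total == 0:
        return [[]]
    return [[p] + rest
            for k, p in enumerate(parts) if p <= total
            for rest in partitions(total - p, parts[k:])]

monomials = partitions(6, gens)
print(monomials)        # [[2, 2, 2], [2, 4], [6]]
print(len(monomials))   # 3, i.e. Omega_6 = Z_2 + Z_2 + Z_2
\end{verbatim}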
$\bullet$\hskip 2pt If the initial and final Cauchy data are non-closed compact bounded $3|3$-dimensional quantum supermanifolds, we can consider the exact commutative diagram reported in (\ref{exact-commutative-diagram-bigraded-bar-quantum-chain-complex-pde}). There $\bar B_{\bullet|\bullet} (\hat E_k;A)=\ker(\partial|_{\bar
C_{\bullet|\bullet} (\hat E_k;A)})$, $\bar Z_{\bullet|\bullet}
(\hat E_k;A)=\hbox{\rm im}\hskip 2pt(\partial|_{\bar C_{\bullet|\bullet} (\hat
E_k;A)})$, $\bar H_{\bullet|\bullet} (\hat E_k;A)=\bar
Z_{\bullet|\bullet} (\hat E_k;A)/\bar B_{\bullet|\bullet} (\hat
E_k;A)$. Furthermore,
$$\left\{\begin{array}{l}
b\in[a]\in \bar Bor_{\bullet|\bullet}(\hat E_k;A)\Rightarrow
a-b=\partial c,\quad
c\in \bar C_{\bullet|\bullet}(\hat E_k;A),\\
b\in[a]\in \bar Cyc_{\bullet|\bullet}(\hat E_k;A)\Rightarrow \partial(a-b)=0,\\
b\in[a]\in {}^A{\underline{\Omega}}_{\bullet|\bullet,s}(\hat
E_k)\Rightarrow
\left\{\begin{array}{l}
\partial a=\partial b=0\\
a-b=\partial c,\quad c\in \bar C_{\bullet|\bullet}(\hat E_k;A)\\
\end{array}
\right\}.\\
\end{array}\right.$$
It follows the canonical isomorphism:
${}^A{\underline{\Omega}}_{\bullet|\bullet,s}(\hat E_k)\cong \bar
H_{\bullet|\bullet}(\hat E_k;A)$. As $\bar C_{\bullet|\bullet}(\hat
E_k;A)$ is a free two-sided projective $A$-module, one has the
unnatural isomorphism: $$\bar Bor_{\bullet|\bullet}(\hat E_k;A)\cong
{}^A{\underline{\Omega}}_{\bullet|\bullet,s}(\hat E_k)\bigoplus \bar
Cyc_{\bullet|\bullet}(\hat E_k;A).$$
Then $a\equiv N_0\bigcup P$ and $b\equiv N_1$ belong to the same bordism class. More precisely one has
$$b\in[a]\in \bar Bor_{3|3}^{\hat E_k}\ \Leftrightarrow\ a-b=\partial c,\, c\equiv V,$$
such that $\partial V=N_0\bigcup P\bigcup N_1$.
In general one has the following relation with the (closed) bordism group $\Omega_{3|3}^{\hat E_k}$:
$$\bar Bor_{3|3}^{\hat E_k}\cong \Omega_{3|3}^{\hat E_k}\bigoplus \bar Cyc_{3|3}^{\hat E_k}$$
where the cyclism group $\bar Cyc_{3|3}^{\hat E_k}$ is defined by the condition $b\in[a]\in \bar Cyc_{3|3}^{\hat E_k}$ iff $\partial (a-b)=0$. Since, under the full admissibility hypothesis, one has $\Omega_{3|3}^{\hat E_k}=0$, we get $\bar Bor_{3|3}^{\hat E_k}\cong \bar Cyc_{3|3}^{\hat E_k}$. Therefore, we can say that inequivalent quantum particles in $\hat E_k$ are identified with $(3|3)$-dimensional bounded compact quantum supermanifolds, Cauchy data in $\hat E_k$, that are in one-to-one correspondence with bordism classes in $\bar Bor_{3|3}^{\hat E_k}\cong \bar Cyc_{3|3}^{\hat E_k}$. Therefore, we get that the group of such fundamental particles is identified with
$$\bar C_{3|3}(\hat E_k;A)/\bar B_{3|3}(\hat E_k;A)\cong \bar C_{3|3}(\hat E_k;A)/\bar Z_{3|3}(\hat E_k;A).$$
\begin{equation}\label{exact-commutative-diagram-bigraded-bar-quantum-chain-complex-pde}
\scalebox{0.8}{$\xymatrix{&&0\ar[d]&0\ar[d]&&\\
&0\ar[r]&\bar B_{\bullet|\bullet} (\hat E_k;A)\ar[d]\ar[r]&
\bar Z_{\bullet|\bullet} (\hat E_k;A)\ar[d]\ar[r]&
\bar H_{\bullet|\bullet} (\hat E_k;A)\ar[r]&0\\
&&\bar C_{\bullet|\bullet} (\hat E_k;A)\ar[d]\ar@{=}[r]&
\bar C_{\bullet|\bullet} (\hat E_k;A)\ar[d]&&\\
0\ar[r]&{}^A{\underline{\Omega}}_{\bullet|\bullet,s}(\hat
E_k)\ar[r]&\bar Bor_{\bullet|\bullet} (\hat
E_k;A)\ar[d]\ar[r]&\bar Cyc_{\bullet|\bullet} (\hat E_k;A)\ar[d]\ar[r]&0&\\
&&0&0&&}$}\end{equation}
$\bullet$\hskip 2pt The above results mean that all $(3|3)$-dimensional closed integral quantum supermanifolds
$X\subset \hat E_k$ are boundaries of some $(4|4)$-dimensional integral quantum supermanifold $V\subset \hat E_k$, i.e., $X=\partial V$. In other words, fixing two Cauchy data $N_0$ and $N_1$ in $\hat E_k$, we can consider a quantum non-linear propagator $V$ between them, identified by means of a $(3|3)$-dimensional closed compact integral quantum supermanifold $X\subset \hat E_k$, such that $X=N_0\bigcup P\bigcup N_1$. Therefore a quantum non-linear propagator between $N_0$ and $N_1$ can be identified with a connected $(3|3)$-dimensional closed compact integral quantum supermanifold $X\subset \hat E_k$
homeomorphic to $N_0\bigcup P\bigcup N_1$. Since $\Omega_{3|3}=0$, it follows that we can represent such quantum non-linear propagators with a $(3|3)$-dimensional quantum supersphere $\hat S^{3|3}$. Taking into account results contained in \cite{PRAS27}, we can also distinguish among them $(3|3)$-dimensional quantum exotic superspheres $\hat \Sigma^{3|3}$. These last are classified with respect to the equivalence relation induced by quantum diffeomorphisms.
\begin{figure}
\caption{Representation of an elementary quantum non-linear propagator $V$, identified by means of a $(3|3)$-dimensional quantum supersphere $X=N_0\bigcup P\bigcup N_1$, such that $\partial V=X$.}
\label{figure-quantum-nonlinear-propagator-quantum-supersphere}
\end{figure}
In Fig. \ref{figure-quantum-nonlinear-propagator-quantum-supersphere} the boundary $X=\partial V$ of an elementary quantum non-linear propagator $V$ is represented as a $(3|3)$-dimensional quantum supersphere $\hat S^{3|3}$. Therefore, such quantum non-linear propagators are homeomorphic to a $(4|4)$-dimensional quantum superdisk $\hat D^{4|4}$.
Fixing a $(3|3)$-dimensional quantum (exotic) supersphere $X\thickapprox\hat \Sigma^{3|3}\subset \hat E_k$, we can embed there $(3|3)$-dimensional quantum superdisks $a_i$, $i=1,\cdots,p$, and $b_j$, $j=1,\cdots,q$, and consider the $(4|4)$-dimensional quantum integral submanifold $V\subset \hat E_k$ identified by the fixed boundary $X$. Then we can consider $V$ as the quantum non-linear propagator between $N_0\equiv \bigcup_{1\le i\le p}a_i$ and $N_1\equiv \bigcup_{1\le j\le q}b_j$, such that $\partial V=N_0\bigcup P\bigcup N_1$, with $P\equiv \overline{X\setminus(N_0\bigcup N_1)}$.
$\bullet$\hskip 2pt It is important to remark that the above quantum non-linear propagators $V$ cannot be considered regular solutions in the equation $\widehat{(YM)}\subset J\hat{\it D}^2(W)$, but are necessarily singular solutions there. However, by considering the natural embedding $J\hat{\it D}^2(W)\hookrightarrow \hat J^2_{4|4}(W)$ we can consider $V$ as a regular solution of $\widehat{(YM)}\subset\hat J^2_{4|4}(W)$. But, if $N_0$ is not diffeomorphic to $N_1$, we cannot say that $V$ is diffeomorphic to $N_0\times\hat D^{1|1}$. Hence, the integrable full-quantum vector field $\zeta:V\to\widehat{T}V$, relating $N_0$ to $N_1$, must necessarily be a singular one.
$\bullet$\hskip 2pt For any quantum conservation law $\alpha$ of $\widehat{(YM)}$ we have $<\alpha,X>=0$, if $X=\partial V$, where $V$ is a nonlinear quantum propagator between $N_0$ and $N_1$. Therefore we get the relation (\ref{condition-quantum-supernumbers-betwenn-cuachy-data-related-by-quantum-non-linear-propagators}) between quantum numbers induced by the quantum conservation laws:\footnote{
In the particular case that $P=\varnothing$, i.e., $\partial N_0=\partial N_1=\varnothing$, then $<\alpha,N_0>=<\alpha,N_1>$.}
\begin{equation}\label{condition-quantum-supernumbers-betwenn-cuachy-data-related-by-quantum-non-linear-propagators}
<\alpha,N_0>-<\alpha,N_1>=-<\alpha,P>\in B.
\end{equation}
Therefore, identifying nonlinear quantum propagators for $N_0$ and $N_1$ with $(4|4)$-dimensional quantum integral superdisks $\hat D^{4|4}\subset \hat E_k$, we get that the relation (\ref{condition-quantum-supernumbers-betwenn-cuachy-data-related-by-quantum-non-linear-propagators}) between the corresponding quantum conservation supernumbers gives $$P\thickapprox\overline{\hat S^{3|3}\setminus(N_0\bigcup N_1)}.$$
\end{proof}
Similar considerations hold for the observed quantum non-linear propagators. (See also \cite{PRAS21,PRAS22}.)
\begin{theorem}
The observed dynamic equation $\widehat{(YM)}[i]$, by means of a quantum relativistic frame, is a quantum extended
crystal super PDE. Moreover, under the full admissibility hypothesis, it becomes a quantum $0$-crystal super PDE.
\end{theorem}
\begin{proof}
The evaluation of $\widehat{(YM)}$ on a macroscopic shell
$i(M_C)\subset M$ is given by the equations reported in Tab. \ref{local-expression-observed-yang-mill-pde-bianchi-identity}.
\begin{table}[h]
\caption{Local expression of $\widehat{(YM)}[i]\subset J\hat D^2(i^*W)$
and Bianchi identity $(B)[i]\subset J\hat D^2(i^*W)$.}
\label{local-expression-observed-yang-mill-pde-bianchi-identity}
\begin{tabular}{|l|c|}
\hline
{\footnotesize\rm(Observed Field Equations)\hskip 0.5cm$(\partial_{\alpha}.\tilde R^{K\alpha\beta})+[\widehat
C^K_{IJ}\tilde\mu^I_{\alpha},\tilde R^{J\alpha\beta}]_+
=0$}&{\footnotesize $\widehat{(YM)}[i]$} \\
\hline
\hfil{\footnotesize\rm $\bar
R^K_{\alpha_1\alpha_2}=(\partial\xi_{[\alpha_1}.\tilde\mu^K_{\alpha_2]})+
\frac{1}{2}\widehat{C}{}^K_{IJ}\tilde\mu^I_{[\alpha_2}\tilde\mu^J_{\alpha_1]}$}\hfil&{\footnotesize\rm(Observed Fields)}\\
\hline
{\footnotesize\rm(Observed Bianchi Identities)\hskip 2pt$(\partial
\xi_{[\gamma}.\tilde R^K_{\alpha_1\alpha_2]})+\frac{1}{2}
\widehat{C}{}^K_{IJ}\tilde\mu^I_{[\gamma}\tilde
R^J_{\alpha_1\alpha_2]}=0$}&{\footnotesize$(B)[i]$}\\
\hline
\end{tabular}\end{table}
This equation is also formally integrable
and completely integrable. Furthermore, the
$3$-dimensional integral bordism group of $\widehat{(YM)}[i]$ and
its infinity prolongation $\widehat{(YM)}[i]_{+\infty}$ are trivial,
under the full admissibility hypothesis:
$$\Omega_3^{\widehat{(YM)}[i]}\cong\Omega_3^{\widehat{(YM)}[i]_{+\infty}}\cong 0.$$ So the equation $\widehat{(YM)}[i]\subset J\hat
D^2(i^*W)$ becomes a quantum $0$-crystal super PDE and it admits global
(smooth) solutions for any fixed time-like $3$-dimensional (smooth)
boundary conditions. If $N_0\not\cong N_1$, then, considering the analogous boundary value problem in the observed PDE $\widehat{(YM)}[i]\subset \hat J^2_3(W)$, there cannot exist an observed smooth solution $V$ representing an observed quantum non-linear propagator between $N_0$ and $N_1$. The corresponding observed nonlinear quantum propagator must necessarily be a singular one. \end{proof}
\section{\bf Some open problems in high energy physics solved}\label{sec-some-open-problems-in-high-energy-physics-solved}
In this section we will use the geometric mathematical architecture of the algebraic topology of quantum super PDEs, previously developed, to answer some important open problems in High Energy Physics.
$\bullet$\hskip 2pt Let us, first, consider the following question: {\em ``Does there exist a linear dependence between squared mass and spin in a solution of the quantum PDE $\widehat{(YM)}$ ?"}.\footnote{This question arises from some semiclassical approaches that suggest considering the relation $J\simeq \alpha'\, M^2$, between angular momentum $J$ and mass $M$, for a spherical rotating object. This is just the philosophy of the so-called {\em Regge trajectories}. Such linear trajectories are used to interpret scattering amplitudes of strong reactions and are usually seen as gluonic strings attached to quarks at the end points. However there are also more exotic points of view where pion excitations of light hadrons are considered instead. (See, e.g., \cite{DIAKONOV-PETROV}.)} We shall see that in general such a simple dependence for quantum solutions is not assured, but there is a much more complex relation between quantum mass and quantum spin.
Let us recall some of our previous results about quantum torsion.
The quantum vierbein curvature ${}_{\circledR}\hat R$ identifies, by
means of the quantum vierbein $\hat\theta$, a quantum field $\hat
S:M\to Hom_Z(\dot\Lambda^2_0M;TM)$, that we call the {\em quantum
torsion} associated to $\hat\mu$. In quantum coordinates one can
write
\begin{equation}\label{quantum-torsion}
\hat S=\partial x_C\otimes\hat S^C_{AB}dx^A\triangle dx^B,\quad \hat
S^C_{AB}=\hat\theta^C_K{}_{\circledR}\hat R^K_{AB}.
\end{equation}
Furthermore, with respect to a quantum relativistic frame $i:N\to
M$, the quantum torsion $\hat S$ identifies a $A$-valued
$(1,2)$-tensor field on $N$, $\widetilde{S}\equiv i^*\hat S:N\to
A\otimes_{\mathbb{R}}\Lambda^0_2N\otimes_{\mathbb{R}}TN$, that we
call {\em quantum torsion of the observed solution}.
We say that an observed solution has a {\em quantum
spin}, if the observed solution has an observed torsion
\begin{equation}\label{observed-quantum-torsion}
\widetilde{S}\equiv i^*\hat S=\partial
x_\gamma\otimes\sum_{0\le\alpha<\beta\le
3}\widetilde{S}^\gamma_{\alpha\beta}dx^\alpha\wedge dx^\beta:N\to
A\otimes_{\mathbb{R}}\mathbf{N}\otimes_{\mathbb{R}}\Lambda^0_2(N)\cong
A\otimes_{\mathbb{R}}\Lambda^0_2(N)\otimes_{\mathbb{R}}TN
\end{equation}
with
$\widetilde{S}^\gamma_{\alpha\beta}(p)=-\widetilde{S}^\gamma_{\beta\alpha}(p)\in
A$, $p\in N$, that satisfies the following conditions,
{\em(quantum-spin-conditions)}:
\begin{equation}\label{quantum-spin-conditions}
\begin{array}{l}
\left\{
\begin{array}{l}
\widetilde{S}=\widetilde{s}\otimes\dot\psi\\
\widetilde{s}=\sum_{0\le\alpha<\beta\le
3}\widetilde{s}_{\alpha\beta}dx^\alpha\wedge dx^\beta:N\to
A\otimes_{\mathbb{R}}\Lambda^0_2N,\\
\hskip 1cm\widetilde{s}_{\alpha\beta}(p)=-\widetilde{s}_{\beta\alpha}(p)\in A, \hskip 2pt p\in N,\\
\dot\psi\rfloor \widetilde{S} =0\\
\end{array}
\right\}\\
\Downarrow\\
\left\{
\begin{array}{l}
\widetilde{S}^\lambda_{\alpha\beta}=\widetilde{s}_{\alpha\beta}\dot\psi^\lambda\\
\widetilde{S}^\lambda_{\alpha\beta}\dot\psi^\alpha =0\\
\end{array}
\right\}.\\
\end{array}
\end{equation}
where $\dot\psi$ is the velocity field on $N$ of the time-like foliation representing the quantum relativistic frame on $N$. When conditions (\ref{quantum-spin-conditions})
are satisfied, we say that the solution considered admits a {\em quantum spin-structure}, with respect to the quantum relativistic frame. We call $\widetilde{s}$ the {\em quantum $2$-form spin} of the observed solution. Let $\{\xi^\alpha\}_{0\le\alpha\le 3}$ be coordinates on $N$, adapted to the quantum relativistic frame. Then one has the following local representations:
\begin{equation}\label{local-representations-observed-quantum-spin-observed-quantum-torsion}
\left\{
\begin{array}{l}
\widetilde{s}=\widetilde{s}_{ij}d\xi^i\wedge d\xi^j\\
\end{array}
\right\}.
\end{equation}
We define {\em quantum spin-vector-field} of the observed solution
\begin{equation}\label{quantum-spin-vector-field}
\widetilde{\underline{s}}=<\epsilon,\widetilde{S}>=[\epsilon_{\mu\nu\lambda\rho}\dot\psi^\mu\widetilde{s}^{\nu\lambda}]d\xi^\rho
\equiv \widetilde{s}_\rho d\xi^\rho =\widetilde{s}_kd\xi^k\hskip
2pt\Rightarrow
\widetilde{s}=\partial\xi_i\widetilde{s}_kg^{ki}=\partial\xi_i\widetilde{s}^i
\end{equation}
where
$\epsilon_{\mu\nu\lambda\rho}=\sqrt{|g|}\delta_{\mu\nu\lambda\rho}^{0123}$
is the completely antisymmetric tensor density on $N$. One has
$\widetilde{s}^\rho(p)\in A$, $p\in N$. The classification of the
observed solution, on the basis of the spectrum of
$|\widetilde{s}|^2\equiv\widetilde{s}^\rho\widetilde{s}_\rho$ and of
its {\em quantum helicity}, i.e., the component $\widetilde{s}_z$, is
reported in Tab. \ref{local-quantum-spectral-spin-classification-observed-yang-mills-pde-solutions}.
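Let us note, as a simple check of (\ref{quantum-spin-vector-field}), that in coordinates adapted to the quantum relativistic frame one has $\dot\psi^\mu=\delta^\mu_0$, so that
$$\widetilde{s}_\rho=\epsilon_{0\nu\lambda\rho}\widetilde{s}^{\nu\lambda},$$
and, by the complete antisymmetry of $\epsilon_{\mu\nu\lambda\rho}$, the component $\widetilde{s}_0$ vanishes; hence the quantum spin-vector-field has only spatial components, consistently with the writing $\widetilde{s}_k d\xi^k$ in (\ref{quantum-spin-vector-field}).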
\begin{table}[h]
\caption{Local quantum spectral-spin-classification of $\widehat{(YM)}[i]$ solutions.}
\label{local-quantum-spectral-spin-classification-observed-yang-mills-pde-solutions}
\scalebox{0.9}{$\begin{tabular}{|l|l|}
\hline
{\footnotesize{\rm Definition}} &{\footnotesize{\rm Name}} \\
\hline
\hline
{\footnotesize $Sp(|\widetilde{s}(p)|^2)\subset\mathfrak{b}\equiv\{\hbar^2s(s+1)|
s\in\mathbb{N}\equiv\{0,1,2,\dots\}\},\quad Sp(\widetilde{s}_z(p))\subset
\mathfrak{c}$}&{\footnotesize {\rm bosonic-polarized}}\\
\hline
{\footnotesize $Sp(|\widetilde{s}(p)|^2)\subset\mathfrak{f}\equiv\{\hbar^2s(s+1)|
s=\frac{2n+1}{2},n\in\mathbb{N}\equiv\{0,1,2,\dots\}\}, \quad
Sp(\widetilde{s}_z(p))\subset
\mathfrak{c}$}&{\footnotesize{\rm fermionic-polarized}}\\
\hline
{\footnotesize $ Sp(|\widetilde{s}(p)|^2)\cap\mathfrak{b}=Sp(|\widetilde{s}(p)|^2)\cap\mathfrak{f}=\varnothing$}&{\footnotesize {\rm unpolarized}}\\
\hline
{\footnotesize $Sp(|\widetilde{s}(p)|^2)\cap\mathfrak{b}\not=\varnothing, \hbox{\rm and/or } Sp(|\widetilde{s}(p)|^2)\cap\mathfrak{f}\not=\varnothing$}&{\footnotesize{\rm mixed-polarized}}\\
\hline
\multicolumn {2}{l}{\rm
\footnotesize $|\widetilde{s}(p)|^2\equiv\widetilde{s}^\rho(p)\widetilde{s}_\rho(p)\in\widehat{A},\hskip
2pt p\in N$. $\mathfrak{c}\equiv\{\hbar m_s|m_s=-s,-s+1,\cdots,s-1,s\}$. $\widetilde{s}_z$ quantum helicity.}\\
\multicolumn {2}{l}{\rm\footnotesize $s$= spin quantum
number; $ m_s$= spin orientation quantum number.}\\
\end{tabular}$}\end{table}
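As a minimal numerical sketch of one simple reading of the spectral classification in Tab. \ref{local-quantum-spectral-spin-classification-observed-yang-mills-pde-solutions} (assuming, only for this illustration, that the relevant part of the spectrum is a finite set of real eigenvalues, measured in units with $\hbar=1$), one may test whether a value $x=|\widetilde{s}|^2/\hbar^2$ is of the form $s(s+1)$ with $s$ integer (bosonic) or half-integer (fermionic):
\begin{verbatim}
# Classify eigenvalues x = |s|^2 / hbar^2: bosonic if x = s(s+1) with s
# in {0,1,2,...}, fermionic if s in {1/2,3/2,...}, otherwise unpolarized.
from math import isclose, sqrt

def spin_from_casimir(x, tol=1e-9):
    if x < 0:
        return ("unpolarized", None)
    s = (-1.0 + sqrt(1.0 + 4.0 * x)) / 2.0     # solve s(s+1) = x
    if isclose(s, round(s), abs_tol=tol):
        return ("bosonic", round(s))
    if isclose(2 * s, round(2 * s), abs_tol=tol):
        return ("fermionic", round(2 * s) / 2)
    return ("unpolarized", s)

def classify_spectrum(values):
    kinds = {spin_from_casimir(x)[0] for x in values}
    if kinds == {"bosonic"}:
        return "bosonic-polarized"
    if kinds == {"fermionic"}:
        return "fermionic-polarized"
    if "bosonic" in kinds or "fermionic" in kinds:
        return "mixed-polarized"
    return "unpolarized"

print(classify_spectrum([0.0, 2.0, 6.0]))   # s = 0, 1, 2   -> bosonic-polarized
print(classify_spectrum([0.75, 3.75]))      # s = 1/2, 3/2  -> fermionic-polarized
print(classify_spectrum([0.75, 2.0]))       # mixed-polarized
\end{verbatim}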
$\widehat{(YM)}$ is a functionally stable quantum super PDE since it is completely integrable and formally integrable.
(See Theorem 2.34 in \cite{PRAS22}.) For the same reason it admits
$\widehat{(YM)}_{+\infty}\subset J\hat{\it D}^\infty(W)$ as a stable quantum extended crystal PDE. Furthermore,
since its symbol $\hat g_2$ is not trivial, any global solution $V\subset\widehat{(YM)}$ can be unstable, and the
corresponding observed solution can appear unstable in finite times. However, global smooth solutions result stable
in finite times in $\widehat{(YM)}_{+\infty}$. Finally the asymptotic stability study of global solutions of
$\widehat{(YM)}$, with respect to a quantum relativistic frame, can be performed by means of
Theorem 2.46 in \cite{PRAS22}, since, for any section $s:M\to W$, on the fibers of
$\hat E[s]\to M$ there exists a non-degenerate scalar product.
In fact, $\hat E[s]\cong W$, as $W$ is a vector bundle over $M$. Furthermore, for any section $s$, we can identify on
$M$ a non-degenerate metric $\widehat{g}$,\footnote{It would be more precise to denote $\widehat{g}$ with the
symbol $\widehat{g}[s]$, since it is identified by means of the section $s$.} that, besides the rigid
metric $\underline{g}$ on $\mathfrak{g}$, identifies a non-degenerate metric on each fiber $\hat E[s]_p\cong W_p=Hom_Z(T_pM;\mathfrak{g})$, $\forall p\in M$. In fact we get $\hat\xi(p)\cdot\hat\xi(p)'=\underline{g}_{KH}\widehat{g}^{AB}(p)\xi^K_A(p)\otimes\xi'{}^H_B(p)\in A$.
Solutions of $\widehat{(YM)}$ can also
encode the dynamics of nuclear-charged plasmas, or nuclides. These are
described by solutions that, when observed by means of a quantum
relativistic frame, have at any $t\in T$, i.e., frame-proper time,
compact sectional support $B_t\subset N$. The {\em global mass} at
the time $t$, i.e. the evaluation
$$m_t=\int_{B_t}m(t,\xi^k)\sqrt{\det(g_{ij})}d\xi^1\wedge
d\xi^2\wedge d\xi^3$$
of such mass on the space-like section $B_t$,
gives the global mass-content of the nuclear plasmas or nuclides,
in their ground eigenstates, at the proper time $t$. If such
solutions are asymptotically stable, then they interpret the meaning
of stable nuclear plasmas or nuclides.
\begin{lemma}
If a solution admits a spin structure, then we get $|\widetilde{S}|^2=|\widetilde{s}|^2$. Furthermore in frame-adapted coordinates we can write
\begin{equation}\label{torsion-spin-vierbein-fundamental field}
\left\{\begin{array}{ll}
|\widetilde{s}|^2&=\widetilde{s}_{\alpha\beta}\, \widetilde{s}^{\alpha\beta}\, \dot\psi^\gamma\, \dot\psi_\gamma \\
&=\widetilde{s}_{\alpha\beta}\, \widetilde{s}^{\alpha\beta} \\
&=\widetilde{\mu}^\gamma_K\, {}_{\circledR}\widetilde{R}^K_{\alpha\beta}\, {}_{\circledR}\widetilde{R}_{\bar K}^{\alpha\beta}\, \widetilde{\mu}^{\bar K}_{\gamma}=|\widetilde{S}|^2.\\
\end{array}
\right.\end{equation}
\end{lemma}
\begin{proof}
Let us first recall that the quantum torsion $\widehat{S}:M\to Hom_Z(\dot\Lambda^2_0M;TM)$ and its dual $\underline{\widehat{S}}:M\to Hom_Z(TM;\dot\Lambda^2_0M)$ are defined, for any $p\in M$, by the compositions reported in (\ref{torsion-spin-vierbein-fundamental-field-a}).
\begin{equation}\label{torsion-spin-vierbein-fundamental-field-a}
\xymatrix@C=1.5cm{\dot\Lambda^2_0(T_pM)\ar@/_2pc/[rr]_{\widehat{S}(p)}\ar[r]^(0.6){{}_{\circledR}\widehat{R}(p)}&
\widehat{\mathfrak{g}}\ar[r]^{\widehat{\theta}^{-1}}&T_pM\\}\hskip 0.5cm
\xymatrix@C=1.5cm{T_pM\ar@/_2pc/[rr]_{\underline{\widehat{S}}(p)}\ar[r]^{\widehat{\theta}}&
\widehat{\mathfrak{g}}\ar[r]^(0.4){{}_{\circledR}\widehat{R}(p)}&\dot\Lambda^2_0(T_pM)\\}
\end{equation}
Then, by taking the pull-back with respect to the embedding $i:N\to M$, identified by the quantum relativistic frame, we get $$|\widetilde{S}|^2=i^*(\widehat{S}\, \underline{\widehat{S}})=\widetilde{S}^\gamma_{\alpha\beta}\, \widetilde{S}_\gamma^{\alpha\beta}=\widetilde{\mu}^\gamma_K\, {}_{\circledR}\widetilde{R}^K_{\alpha\beta}\, {}_{\circledR}\widetilde{R}_{\bar K}^{\alpha\beta}\, \widetilde{\mu}^{\bar K}_{\gamma}.$$
Therefore, taking into account that in the case of a solution with spin structure one has $\widetilde{S}_\gamma^{\alpha\beta}=\dot\psi_\gamma\, \widetilde{s}^{\alpha\beta}$ and $\widetilde{S}^\gamma_{\alpha\beta}=\dot\psi^\gamma\, \widetilde{s}_{\alpha\beta}$, we get (\ref{torsion-spin-vierbein-fundamental field}).
\end{proof}
$\bullet$\hskip 2pt The following shows how thermodynamic functions can be associated to solutions of $\widehat{(YM)}$.
Let $\widetilde{H}$ be the Hamiltonian corresponding to an observed
solution of $\widehat{(YM)}$. Let us recall that $\widetilde{H}$ is a $\widehat{A}$-valued
function on the $4$-dimensional space-time $N$, considered in the quantum relativistic frame. Let $E\in
Sp(\widetilde{H})$. If $N(E)=\mathbb{T}R\delta(E-\widetilde{H})$ denotes the
degeneracy of $E$, let us define the {\em local partition function} of
the observed solution as the Laplace transform of the degeneracy
$N(E)$ with respect to the spectrum $Sp(\widetilde{H})$ of
$\widetilde{H}$. We get
\begin{equation}\label{partition-function1}
\left\{
\begin{array}{ll}
Z(\beta)&= \int_{Sp(\widetilde{H})}e^{-\beta E}N(E) dE\\
& = \int_{Sp(\widetilde{H})} e^{-\beta E}\mathbb{T}R\delta(E-\widetilde{H}) dE \\
&=\mathbb{T}R e^{-\beta \widetilde{H} }.\\
\end{array}
\right.
\end{equation}
So we get the following formula
\begin{equation}\label{partition-function2}
Z(\beta)=\mathbb{T}R e^{-\beta \widetilde{H} }
\end{equation}
where $\beta$ is the Laplace transform variable and it does not
need to be interpreted as the ``inverse temperature", i.e.,
$\beta=\frac{1}{\kappa_B\theta}$, where $\kappa_B$ is the
Boltzmann constant. If $\beta=\frac{1}{\kappa_B\theta}$
then the system encoded by the observed solution of
$\widehat{(YM)}$ is in equilibrium with a heat bath (canonical
system). Note that all the above objects are local functions on the
space-time $N$. The same holds for $\beta$. We can interpret $Z(\beta)$ as a normalization factor
for the local probability density
\begin{equation}\label{probability-density}
P(E)=\frac{1}{Z}N(E)e^{-\beta E}
\end{equation}
that
the system, encoded by the observed solution, should assume the
local energy $E$, with degeneracy $N(E)$. In fact we have:
\begin{equation}\label{partition-function-normalization-factor}
1=\int_{Sp(\widetilde{H})} P(E)
dE=\frac{1}{Z}\int_{Sp(\widetilde{H})} N(E)e^{-\beta
E}dE=\frac{Z}{Z}.
\end{equation}
As a by-product we get that the {\em local average energy}
$<E>\equiv e$ can be written, by means of the partition function, in
the following way:
\begin{equation}\label{partition-function-average-energy1}
e=-(\partial\beta.\ln Z).
\end{equation}
In fact, one has
\begin{equation}\label{partition-function-average-energy2}
\left\{
\begin{array}{ll}
e&= \int_{Sp(\widetilde{H})}E P(E) dE=\frac{1}{Z}\int_{Sp(\widetilde{H})}EN(E)e^{-\beta E}dE\\
& = \frac{1}{Z}\int_{Sp(\widetilde{H})} E\mathbb{T}R\delta(E-\widetilde{H})e^{-\beta E} dE \\
&=\frac{1}{Z}\mathbb{T}R(\widetilde{H} e^{-\beta \widetilde{H}})=-\frac{1}{Z}(\partial\beta.Z)=-(\partial\beta.\ln Z).\\
\end{array}
\right.
\end{equation}
When we can interpret
$\beta=\frac{1}{\kappa_B\theta}$, then one can write
\begin{equation}\label{partition-function-average-energy3}
e=\kappa_B\theta^2(\partial\theta.\ln Z).
\end{equation}
Then we get also that the local {\em energy fluctuation} is
expressed by means of the variance of $e$:
\begin{equation}\label{partition-function-energy-fluctuation}
<(\triangle E)^2>\equiv <(E-e)^2>=(\partial\beta\partial\beta.\ln
Z).
\end{equation}
Furthermore, we get for the {\em local heat capacity} $C_v$ the following
formula:
\begin{equation}\label{heat-capacity-v}
C_v=(\partial\theta.e)=\frac{1}{\kappa_B\theta^2}<(\triangle E)^2>.
\end{equation}
We can define the {\em local entropy} by means of the following
formula:
\begin{equation}\label{entropy1}
s=-\kappa_B\int_{Sp(\widetilde{H})}P(E) \ln P(E) dE.
\end{equation}
In fact one can prove that one has the usual relation with
the energy. Indeed we get:
\begin{equation}\label{entropy2}
\left\{
\begin{array}{ll}
s& =-\kappa_B\int_{Sp(\widetilde{H})}P(E) \ln P(E) dE\\
& =\kappa_B(\ln Z+\beta e)=(\partial\theta.(\kappa_B\theta \ln Z)).\\
\end{array}
\right.
\end{equation}
Then from the relation $s=\kappa_B(\ln Z+\beta e)$ we get
$e=\theta s-\kappa_B\theta\ln Z$, hence also
$(\partial s.e)=\theta$. This justifies the definition of entropy
given in (\ref{entropy1}). Furthermore, from {\em(\ref{entropy2})} we get also $-\frac{1}{\beta}\ln Z=e-\theta s=f$,
where $f$ is the Helmholtz free
energy. This yields the following
expression of the local Helmholtz free energy by means of the local
partition function $Z$:
\begin{equation}\label{free-energy-partition-function}
f\equiv e-\theta s=-\kappa_B\theta\ln Z.
\end{equation}
Conversely, from (\ref{free-energy-partition-function}) it follows that the partition function can be expressed by means
of the local Helmholtz free energy
\begin{equation}\label{partition-function-free-energy}
Z=e^{-\beta f}.
\end{equation}
So we see that the local thermodynamic functions can be expressed as
scalar-valued differential operators on the fiber bundle
$W[i]\times_NT^0_0N\to N$.
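Purely as a numerical consistency check of the local relations (\ref{partition-function-average-energy1}), (\ref{partition-function-energy-fluctuation}), (\ref{entropy2}) and (\ref{free-energy-partition-function}), the following minimal sketch evaluates them on a toy discrete spectrum with degeneracies (chosen arbitrarily here; it is not meant to model any specific observed solution):
\begin{verbatim}
# Toy check of Z = Tr exp(-beta H), e = -d(ln Z)/d(beta),
# <(dE)^2> = d^2(ln Z)/d(beta)^2, s = k_B(ln Z + beta e), f = e - theta*s.
import math

k_B = 1.0                                  # work in units with k_B = 1
levels = [(0.0, 1), (1.0, 3), (2.5, 2)]    # (energy E, degeneracy N(E)), arbitrary

def lnZ(beta):
    return math.log(sum(g * math.exp(-beta * E) for E, g in levels))

beta, h = 0.7, 1e-5
theta = 1.0 / (k_B * beta)

e   = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)                 # average energy
var = (lnZ(beta + h) - 2 * lnZ(beta) + lnZ(beta - h)) / h**2     # energy fluctuation
C_v = var / (k_B * theta**2)                                     # heat capacity
s   = k_B * (lnZ(beta) + beta * e)                               # entropy
f   = e - theta * s                                              # Helmholtz free energy

assert abs(f - (-k_B * theta * lnZ(beta))) < 1e-6                # f = -k_B*theta*ln Z
assert abs(math.exp(-beta * f) - math.exp(lnZ(beta))) < 1e-6     # Z = exp(-beta f)
print(e, var, C_v, s, f)
\end{verbatim}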
The concept of quantum states can also be related to a proof of existence of solutions with mass-gap. In fact, we have proved that the equation $\widehat{(YM)}$ admits local and global solutions with mass-gap. These are contained in a sub-equation,
{\em(Higgs-quantum super PDE)},
$\widehat{(Higgs)}\subset\widehat{(YM)}$, that is formally integrable and completely integrable, and also
a stable quantum super PDE. If $H_3(M;\mathbb{K})=0$, $\widehat{(Higgs)}$ is also a quantum extended crystal super PDE.
In general
solutions contained in $\widehat{(Higgs)}$ are not stable in finite times. However there exists an associated stabilized
quantum super PDE, (resp. quantum extended crystal super PDE), where all global smooth solutions are stable in finite
times.
Furthermore, there exists a quantum super partial differential relation, {\em(quantum Goldstone-boundary)},
$\widehat{(Goldstone)}\subset\widehat{(YM)}$, bounding $\widehat{(Higgs)}$, such that any global solution of
$\widehat{(YM)}$
loses/acquires mass by crossing $\widehat{(Goldstone)}$.
\begin{example}
Now, let us answer the question: ``{\em Do the pictures in Fig. \ref{figure-proton-proton-chain-reactions-examples} represent possible smooth integral manifolds, i.e., solutions, of $\widehat{(YM)}$, or $\widehat{(YM)}[i]$, with respect to a quantum relativistic frame $i$ ?}"
Taking into account the above results on $\widehat{(YM)}$ and $\widehat{(YM)}[i]$, we can answer ``{\em yes}", provided that the initial and final Cauchy data have the same quantum numbers, identified by the quantum conservation laws of $\widehat{(YM)}$ and $\widehat{(YM)}[i]$ respectively. Furthermore, we can state that such solutions must necessarily be singular ones, since in order to be smooth, $V$ would have to be diffeomorphic to $N_0\times \hat D^{1|1}$ or $N_0\times I$. In such a case we should also have $N_0\cong N_1$. Therefore we can conclude that nonlinear quantum propagators representing the reactions considered in Example \ref{two-body-high-energy-reactions-examples} cannot be smooth solutions, but must be singular ones.
\end{example}
\begin{lemma}
The quantum mass of a solution of $\widehat{(YM)}$ can be written, in coordinates adapted to a quantum relativistic frame, in the form reported in {\em(\ref{alternative-form-quantum-mass})}.
\begin{equation}\label{alternative-form-quantum-mass}
m=\frac{1}{2}|\widetilde{S}|^2-\frac{1}{2}\hat\Delta+m_{\copyright}+m_{\maltese}.
\end{equation}
\end{lemma}
\begin{proof}
The quantum Hamiltonian can be written in the form reported in {\em(\ref{alternative-form-quantum-hamiltonian})}.
\begin{equation}\label{alternative-form-quantum-hamiltonian}
H={}_{\circledR}H+{}_{\copyright}H+{}_{\maltese}H.
\end{equation}
This follows directly by considering the expression for the quantum Hamiltonian, by using the splitting induced by the quantum superalgebra $\mathfrak{g}$, and by taking into account the condition that the solution admits a spin structure. The corresponding calculation is reported in (\ref{alternative-form-quantum-hamiltonian-a}) and in the following equations.
\begin{equation}\label{alternative-form-quantum-hamiltonian-a}
\left\{\begin{array}{ll}
H&=\frac{1}{2}(\widetilde{R}_{\alpha\beta}^K\, \widetilde{R}^{\alpha\beta}_K-\widetilde{\mu}^K_{\alpha\beta}\, \widetilde{R}^{\alpha\beta}_K) \\
&=\frac{1}{2}({}_{\circledR}\widetilde{R}_{\alpha\beta}^K\, {}_{\circledR}\widetilde{R}^{\alpha\beta}_K-{}_{\circledR}\widetilde{\mu}^K_{\alpha\beta}\, {}_{\circledR}\widetilde{R}^{\alpha\beta}_K) \\
&+\frac{1}{2}({}_{\copyright}\widetilde{R}_{\alpha\beta}^K\, {}_{\copyright}\widetilde{R}^{\alpha\beta}_K-{}_{\copyright}\widetilde{\mu}^K_{\alpha\beta}\, {}_{\copyright}\widetilde{R}^{\alpha\beta}_K) \\
&+\frac{1}{2}({}_{\maltese}\widetilde{R}_{\alpha\beta}^K\, {}_{\maltese}\widetilde{R}^{\alpha\beta}_K-{}_{\maltese}\widetilde{\mu}^K_{\alpha\beta}\, {}_{\maltese}\widetilde{R}^{\alpha\beta}_K) \\
&={}_{\circledR}H+{}_{\copyright}H+{}_{\maltese}H.
\end{array}
\right.
\end{equation}
In coordinates adapted to the quantum relativistic frame, we get that $\dot\psi^\lambda=\delta_0^\lambda$, and since
$${}_{\circledR}\widetilde{\mu}_{K}^\lambda\,{}_{\circledR}\widetilde{R}_{\alpha\beta}^K=
\widetilde{S}^\lambda_{\alpha\beta}=\widetilde{s}_{\alpha\beta}\, \dot\psi^\lambda$$
we get ${}_{\circledR}\widetilde{\mu}_{K}^i\,{}_{\circledR}\widetilde{R}_{\alpha\beta}^K=
\widetilde{S}^i_{\alpha\beta}=0$ and ${}_{\circledR}\widetilde{\mu}_{K}^0\,{}_{\circledR}\widetilde{R}_{\alpha\beta}^K=
\widetilde{S}^0_{\alpha\beta}=\widetilde{s}_{\alpha\beta}$. Therefore we have
\begin{equation}\label{alternative-form-quantum-hamiltonian-b}
\left\{\begin{array}{ll}
|\widetilde{S}|^2&=|\widetilde{s}|^2={}_{\circledR}\widetilde{\mu}_{K}^\lambda\,{}_{\circledR}\widetilde{R}_{\alpha\beta}^K\,
{}_{\circledR}\widetilde{R}_{\bar K}^{\alpha\beta}\,{}_{\circledR}\widetilde{\mu}_{\lambda}^{\bar K}\\
&={}_{\circledR}\widetilde{\mu}_{K}^0\,{}_{\circledR}\widetilde{\mu}_{0}^{\bar K}\,{}_{\circledR}\widetilde{R}_{\alpha\beta}^K\, {}_{\circledR}\widetilde{R}_{\bar K}^{\alpha\beta}+\hat\Delta\\
&=\delta_{K}^{\bar K}\,{}_{\circledR}\widetilde{R}_{\alpha\beta}^K\, {}_{\circledR}\widetilde{R}_{\bar K}^{\alpha\beta}+\hat\Delta\\
&={}_{\circledR}\widetilde{R}_{\alpha\beta}^K\, {}_{\circledR}\widetilde{R}_{K}^{\alpha\beta}+\hat\Delta\\
\end{array}
\right.
\end{equation}
with
\begin{equation}\label{alternative-form-quantum-hamiltonian-c}
\hat\Delta\equiv {}_{\circledR}\widetilde{\mu}_{K}^0\,[{}_{\circledR}\widetilde{R}_{\alpha\beta}^K,{}_{\circledR}\widetilde{\mu}_{0}^{\bar K}]{}_{\circledR}\widetilde{R}_{\bar K}^{\alpha\beta}+{}_{\circledR}\widetilde{\mu}^0_K\, {}_{\circledR}\widetilde{R}^{K}_{\alpha\beta}[{}_{\circledR}\widetilde{R}_{\bar K}^{\alpha\beta},{}_{\circledR}\widetilde{\mu}^{\bar K}_0].
\end{equation}
So, in general, we can write
\begin{equation}\label{alternative-form-quantum-hamiltonian-d}
{}_{\circledR}\widetilde{R}_{\alpha\beta}^K\, {}_{\circledR}\widetilde{R}_{K}^{\alpha\beta}=|\widetilde{S}|^2-\hat\Delta
\end{equation}
and by using equation (\ref{alternative-form-quantum-hamiltonian-a}) we get the expression (\ref{alternative-form-quantum-hamiltonian-e}) for the quantum Hamiltonian in terms of quantum torsion.
\begin{equation}\label{alternative-form-quantum-hamiltonian-e}
H=\frac{1}{2}|\widetilde{S}|^2-\frac{1}{2}\hat\Delta+{}_{\copyright}H+{}_{\maltese}H.
\end{equation}
As a by-product, we get that the quantum mass of a solution of $\widehat{(YM)}[i]$, observed by means of a quantum relativistic frame $i:N\to M$, admits the alternative representation (\ref{alternative-form-quantum-mass}) in terms of quantum torsion.
\end{proof}
\begin{definition}\label{reduced-quantum-mass}
We define the {\em reduced quantum mass} of a solution of $\widehat{(YM)}[i]$, observed by means of a quantum relativistic frame $i:N\to M$, by $$M\equiv m+\frac{1}{2}\hat\Delta-m_{\copyright}-m_{\maltese}.$$
\end{definition}
\begin{theorem}\label{reduced-quantum-mass-torsion}
There is the following direct relation between reduced quantum mass and quantum torsion: $$M=\frac{1}{2}|\widetilde{S}|^2.$$
\end{theorem}
\begin{definition}\label{phenomenological-quantum-spin}
We define the {\em phenomenological quantum spin} of a solution of $\widehat{(YM)}[i]$, observed by means of a quantum relativistic frame $i:N\to M$, as an $\widehat{A}$-valued quantum scalar $J$ such that:\footnote{The natural meaning of $\sqrt{J}$ is the $\widehat{A}$-valued function on $N$ such that $\sqrt{J}\, \sqrt{J}=J$.} $$\sqrt{J}\equiv |\widetilde{S}|^2.$$
\end{definition}
\begin{cor}[Quantum Regge-type trajectories]\label{phenomenological-quantum-regge-type-rule}
There is the following direct relation between reduced quantum mass and phenomenological quantum spin:
\begin{equation}\label{quantum-regge-type-relation}
M^2=\frac{1}{4}J.
\end{equation}
We call {\em quantum Regge-type trajectories} the relation {\em(\ref{quantum-regge-type-relation})} between quantum square mass $M^2$ and the phenomenological quantum spin $J$.
\end{cor}
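For clarity, relation (\ref{quantum-regge-type-relation}) follows at once from Theorem \ref{reduced-quantum-mass-torsion} and Definition \ref{phenomenological-quantum-spin}:
$$M=\frac{1}{2}|\widetilde{S}|^2=\frac{1}{2}\sqrt{J}\quad\Longrightarrow\quad M^2=\frac{1}{4}\sqrt{J}\,\sqrt{J}=\frac{1}{4}J.$$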
\begin{figure}
\caption{Representation of a nonlinear quantum propagator $V$, for a reaction $a+b\to c$, and its quantum nonlinear anti-propagator $V'$, corresponding to the reaction $b\to a'+c$, obtained for quantum crossing symmetry.}
\label{figure-quantum-anti-propagator}
\end{figure}
$\bullet$\hskip 2pt Another question, in a sense related to the previous problem, is the dynamical justification of the Gell-Mann-Nishijima formula, which has not yet found a dynamical derivation.\footnote{The most interesting effort in this direction was a heuristic semiclassical justification given in \cite{CUI}.} Really, dynamical characterizations of the hypercharge and of the third isospin component can be obtained by means of the electric charge, by giving the dynamical expression of this last quantity with respect to a quantum relativistic frame. This is obtained in the following theorem.
\begin{theorem}[Quantum Gell-Mann-Nishijima formula and electric-charge-gap-solutions]\label{quantum-gell-mann-nishijima-formula}
We define the {\em square-fundamental electric charge} of an observed solution $V$ of $\widehat{(YM)}[i]$ as the smallest value in the spectrum $Sp(\hat w)$ of the observed quantum electromagnetic energy $\hat w$, and denote it by $q^2$. We say that $V$ has an {\em electric-charge gap} if $q^2>0$. (In general $q^2\ge 0$.)
The {\em quantum electric charge}, $\hat Q(t)$, of an observed solution $V$ of $\widehat{(YM)}[i]$, at the proper time $t$, is expressed by the formula {\em(\ref{quantum-gell-mann-nishijima-formula-a})}.
\begin{equation}\label{quantum-gell-mann-nishijima-formula-a}
\scalebox{0.8}{$ \hat Q^2(t)=\int_{\sigma_t}\left[{}_{\copyright}\widetilde{R}^K_{0i}{}_{\copyright}\widetilde{R}^i_{0K}
+{}_{\maltese}\widetilde{R}^K_{0i}{}_{\maltese}\widetilde{R}^i_{0K}
+{}_{\circledR}\widetilde{B}^K_{0i}{}_{\circledR}\widetilde{B}^i_{0K}
+{}_{\copyright}\widetilde{B}^K_{0i}{}_{\copyright}\widetilde{B}^i_{0K}
+{}_{\maltese}\widetilde{B}^K_{0i}{}_{\maltese}\widetilde{B}^i_{0K}\right]\otimes\eta\in A$}
\end{equation}
where $\eta=\sqrt{g_{ij}}\, d\xi^1\wedge d\xi^2\wedge d\xi^3$ is the canonical space-like volume form on $\sigma_t\subset V$. The latter is the $3$-dimensional space-like sub-manifold of $V$ at the time $t$.
Furthermore, we call {\em quantum hypercharge}, $\hat Y(t)$, (resp. {\em quantum $3$-isospin component}, $\hat I_3(t)$), at the proper time $t$, the elements of the quantum (super)algebra $A$ such that formula {\em(\ref{quantum-gell-mann-nishijima-formula-b})} {\em(quantum Gell-Mann-Nishijima formula)} holds.
\begin{equation}\label{quantum-gell-mann-nishijima-formula-b}
\hat Q^2(t)=(\hat I_3(t)+\frac{1}{2}\hat Y(t))^2,\, \forall t \in \triangle
\end{equation}
where $\triangle$ is the definition time-set of the considered solution of $\widehat{(YM)}[i]$.
$\bullet$\hskip 2pt The spectral content of $\hat Q(t)$ is given by $Sp(\pm\sqrt{\hat Q(t)^2})$.
$\bullet$\hskip 2pt There exists an open sub-equation $\widehat{(YM)}[i]_w\subset\widehat{(YM)}[i]$, the {\em quantum electromagnetic-Higgs PDE}, that is formally integrable and completely integrable, where all solutions with electric-charge gap live. The boundary $\partial\widehat{(YM)}[i]_w=\overline{\widehat{(YM)}[i]_w}\setminus\widehat{(YM)}[i]_w\subset\widehat{(YM)}[i]$ is a partial differential relation that we call the {\em quantum electromagnetic-Goldstone boundary} and denote by $\widehat{(Goldstone)}[i]_w$. An electrically neutral, connected, simply connected, $3$-dimensional Cauchy datum $N\subset \widehat{(YM)}[i]$ cannot be contained in $\widehat{(YM)}[i]_w$. Let $V$ be a nonlinear quantum propagator such that $\partial V=N_0\sqcup P\sqcup N_1$, with $N_0\subset \widehat{(YM)}[i]_w$ and $N_1\not\subset \widehat{(YM)}[i]_w$. Let us assume that $N_r$, $r=0,1$, are connected, simply connected particles; hence $N_0$ has an electric-charge gap. The particle $N_1$, instead, cannot have an electric-charge gap. Thus $V$, by crossing $\widehat{(Goldstone)}[i]_w$, loses its electric-charge gap in passing from $N_0$ to $N_1$, and vice versa.
\end{theorem}
\begin{proof}
The solution $V$ identifies a quantum-electric-charge field $\hat E=\dot\psi\rfloor\widetilde{R}={}_{\circledR}\hat E+{}_{\copyright}\hat E+{}_{\maltese}\hat E$ and a quantum-magnetic-charge field
$\hat B=\dot\psi\rfloor(\star\widetilde{R})={}_{\circledR}\hat B+{}_{\copyright}\hat B+{}_{\maltese}\hat B$. Assuming that the solution $V$ has a quantum spin, one has ${}_{\circledR}\hat E=0$. (See \cite{PRAS22}.) Then the
quantum electro-magnetic-charge energy of the solution of $\widehat{(YM)}[i]$ is given by equation (\ref{quantum-gell-mann-nishijima-formula-quantum-electromagnetic-energy-density}).
\begin{equation}\label{quantum-gell-mann-nishijima-formula-quantum-electromagnetic-energy-density}
\hat w=\hat w_e+\hat w_m={}_{\copyright}\widetilde{R}^K_{0i}{}_{\copyright}\widetilde{R}^i_{0K}
+{}_{\maltese}\widetilde{R}^K_{0i}{}_{\maltese}\widetilde{R}^i_{0K}
+{}_{\circledR}\widetilde{B}^K_{0i}{}_{\circledR}\widetilde{B}^i_{0K}
+{}_{\copyright}\widetilde{B}^K_{0i}{}_{\copyright}\widetilde{B}^i_{0K}
+{}_{\maltese}\widetilde{B}^K_{0i}{}_{\maltese}\widetilde{B}^i_{0K}.
\end{equation}
Therefore the quantum electromagnetic energy of the space-like set $\sigma_t$ is given by the expression on the right in (\ref{quantum-gell-mann-nishijima-formula-a}). We define the {\em quantum electric charge} contained, at the proper time $t$, in a space-like set $\sigma_t\subset V$ as the element $\hat Q(t)\in A$ such that $\hat Q^2(t)=\int_{\sigma_t}\hat w\otimes \eta$.\footnote{This is justified taking into account that the electric charge contained in a $3$-dimensional set having the electromagnetic energy $W$ is $q^2=2 C\, W$, with $C$ the capacity of this set. Therefore $Q^2=\frac{q^2}{2C}$, i.e., the formula (\ref{quantum-gell-mann-nishijima-formula-a}) is normalized with respect to a {\em factor form} $\kappa_{electric-charge}(t)=\frac{1}{2C(t)}$.} Then equation (\ref{quantum-gell-mann-nishijima-formula-b}) gives a dynamical definition for $\hat Y$ and $\hat I_3$.
To prove the last part of the theorem it is enough to consider the continuous mapping $\hat w:\widehat{(YM)}[i]\to \widehat{A}$, defined by means of equation (\ref{quantum-gell-mann-nishijima-formula-quantum-electromagnetic-energy-density}). Set $\widehat{(YM)}[i]_w\equiv (\hat w)^{-1}(G(\widehat{A}))\subset \widehat{(YM)}[i]$. Then $\widehat{(YM)}[i]_w$ is an open quantum PDE of $\widehat{(YM)}[i]$, hence it retains the same formal properties of the latter equation. The rest of the proof follows the same lines as that of Theorem 3.28 in \cite{PRAS22} about solutions with mass-gap. Let us only emphasize that a neutral particle $N_0\subset \widehat{(YM)}[i]$, with trivial topology, i.e., connected and simply connected, cannot admit $0\in Sp(\hat Q(t))$ if $N_0\subset \widehat{(YM)}[i]_w$, since in $\widehat{(YM)}[i]_w$ one has $\lambda>0$ for any $\lambda\in Sp(\hat w)$. Furthermore, if the nonlinear quantum propagator $V$ is such that $\partial V=N_0\sqcup P\sqcup N_1$, assuming that $N_1\not\subset \widehat{(YM)}[i]_w$ and that it has trivial topology, then $N_1$ cannot have an electric-charge gap, hence $V$, crossing $\widehat{(Goldstone)}[i]_w$, must necessarily lose its electric-charge gap.
\end{proof}
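As a purely numerical aside, formula (\ref{quantum-gell-mann-nishijima-formula-b}) is designed to reproduce, at the spectral level, the classical Gell-Mann-Nishijima relation $\frac{Q}{e}=I_3+\frac{Y}{2}$. The following minimal Python sketch checks that relation on standard hadron assignments; the particle data are textbook values and are not outputs of the quantum model above.
\begin{verbatim}
# Check of the classical Gell-Mann-Nishijima relation Q = I_3 + Y/2
# on standard hadron assignments (textbook data, illustration only).
particles = {
    # name      (I_3,   Y,   Q in units of e)
    "proton":  (+0.5, +1.0, +1),
    "neutron": (-0.5, +1.0,  0),
    "pi+":     (+1.0,  0.0, +1),
    "K+":      (+0.5, +1.0, +1),
    "Sigma-":  (-1.0,  0.0, -1),
}

for name, (i3, y, q) in particles.items():
    assert q == i3 + y / 2, name
    print(f"{name:8s}  I_3={i3:+.1f}  Y={y:+.1f}  ->  Q={i3 + y/2:+.1f}")
\end{verbatim}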
\begin{remark}\label{remark-quantum-gell-mann-nishijima-formula}
Theorem \ref{quantum-gell-mann-nishijima-formula} does not necessarily contradict the conservation of the quantum electric charge. In fact, we can have two Cauchy data $a,\, b\subset \widehat{(YM)}[i]_w$ and another one $c\not\subset \widehat{(YM)}[i]_w$, such that $<\hat q,a>=-<\hat q,b>\not=0$, where $<\hat q,a>=\sqrt{\int_a\hat f\otimes \eta}=-\sqrt{\int_b\hat f\otimes\eta}$, with $\hat f$ the function under the integral in {\em(\ref{quantum-gell-mann-nishijima-formula-a})}. Furthermore, let $<\hat q,c>=0$. Then in the integral bordism classes $[a\sqcup b=N_0]$ and $[N_1]$, one has $<\hat q,[a\sqcup b]>=0=<\hat q,c>$. Therefore, for a nonlinear quantum propagator $V$ of $\widehat{(YM)}[i]$, such that $\partial V=N_0\sqcup P\sqcup N_1$, one has $<\alpha,a>+<\alpha,b>=-<\alpha,P>$, for $\alpha=\hat q$, since $<\hat q,P> =0$. This surely holds since $P$ is a time-like manifold having $2$-dimensional space-like sections $\sigma_t$, hence $<\hat q,P>=\sqrt{\int_{\sigma_t}\hat f\otimes \sqrt{g_{ij}}\, d\xi^1\wedge d\xi^2\wedge d\xi^3}=0$.
\end{remark}
From Theorem \ref{quantum-gell-mann-nishijima-formula} and Remark \ref{remark-quantum-gell-mann-nishijima-formula} the following important theorem follows.
\begin{theorem}[$Q$-exotic nonlinear quantum propagators of ${\widehat{(YM)}[i]}$]\label{non-conservation-of-quantum-electric-charge}
For any observed nonlinear quantum propagator $V$ of $\widehat{(YM)}[i]$, such that $\partial V=N_0\sqcup P\sqcup N_1$, where $N_i$, $i=0,1$, are $3$-dimensional space-like admissible Cauchy data of $\widehat{(YM)}[i]$, and $P$ is a suitable time-like $3$-dimensional integral manifold with $\partial P=\partial N_0\sqcup\partial N_1$, equation {\em(\ref{conserved-quantum-observed-electric-charge-in-quantum observed-nonlinear-propagator})} holds.
\begin{equation}\label{conserved-quantum-observed-electric-charge-in-quantum observed-nonlinear-propagator}
\hat Q[i|t_0]=\hat Q[i|t_1] \, {\rm mod}\hskip 3pt \mathfrak{Q}[V]\in A
\end{equation}
where $\hat Q[i|t_r]\in A$ is the quantum electric charge on $N_r$, $r=0,1$, and $\mathfrak{Q}[V]\in A$ is a term that in general is not zero and that we call the {\em defect quantum electric-charge}.
We call {\em $Q$-exotic nonlinear quantum propagators} those nonlinear quantum propagators such that $\mathfrak{Q}[V]\not=0\in A$.\footnote{This agrees with the conservation of the observed quantum Hamiltonian. See Theorem 3.20 in \cite{PRAS28}. In fact, the observed quantum electromagnetic energy is a form of quantum energy, even if, in general, it does not coincide with the observed quantum Hamiltonian.}
\end{theorem}
\begin{proof}
The proof follows the same strategy as the one used to prove Theorem 3.20 in Part I \cite{PRAS28}. In fact the gauge invariance of the equation $\widehat{(YM)}[i]$ produces a quantum integral characteristic $3$-form, in the sense of Lemma 3.19 in Part I, that has a structure similar to $\omega_H$. Let us denote such a conservation law by $$\omega_q=(\omega_q)_0\otimes d\xi^1\wedge d\xi^2\wedge d\xi^3+\sum_{1\le i\le 3}(\omega_q)_i\otimes d\xi^0\wedge d\xi^1\wedge\cdots\wedge \widetilde{d\xi^i}\wedge d\xi^3,$$ where $\widetilde{d\xi^i}$ denotes an omitted factor, $i=1,2,3$, and $(\omega_q)_\alpha(p)\in A$, for $p\in V$.
The quantum electric charge, on a space-like section $\sigma_t$ of $V$, is given by means of $\omega_q$, by the following equation
$$\hat Q[i|t]=\int_{\sigma_t}(\omega_q)_0|_{\sigma_t}\otimes d\xi^1\wedge d\xi^2\wedge d\xi^3.$$ Therefore we have
$$\hat Q[i|t_0]-\hat Q[i|t_1]=-\int_{P}(\omega_q)_P= -\mathfrak{Q}[V].$$
In general $\mathfrak{Q}[V]\not=0$, hence we get $\hat Q[i|t_0]=\hat Q[i|t_1]\, {\rm mod}\hskip 3pt \mathfrak{Q}[V]$. We call $\mathfrak{Q}[V]\in A$ {\em defect quantum electric-charge} of the observed nonlinear quantum propagator $V$ of $\widehat{(YM)}[i]$. In other words, in general one has $\frac{d}{dt}\hat Q(t)\not=0\in A$.
\end{proof}
\begin{remark}
In Part III we will further characterize $Q$-exotic nonlinear quantum propagators as nonlinear effects of exotic quantum supergravity.
\end{remark}
$\bullet$\hskip 2pt In the following we shall prove that the phenomenological crossing symmetry in particle reactions is well justified in Pr\'astaro's algebraic topology theory of quantum super PDEs.
\begin{theorem}[Quantum crossing symmetry]\label{quantum-crossing-symmetry}
If $\widehat{(YM)}$ admits a nonlinear quantum propagator $V$ such that $\partial V=N_0\bigcup P\bigcup N_1$, with $N_0=a\sqcup b$ and $N_1=c\sqcup d\sqcup e$, then there exists also a nonlinear quantum propagator $V'$ such that $\partial V'=N'_0\bigcup P'\bigcup N'_1$, with $N'_1=a'\sqcup b'$ and $N'_0=c'\sqcup d'\sqcup e'$, where the primed symbols denote antiparticles. This property is called {\em quantum crossing symmetry}. Similarly there exist nonlinear quantum propagators between $a\sqcup b\sqcup c'$ and $d\sqcup e$ or between $b$ and $a'\sqcup c\sqcup d\sqcup e$.\footnote{The crossing symmetry is a well-known property in particle-reaction phenomenology. This, besides a set of other symmetries and phenomenological conservation laws, constitutes the Holy Bible for particle-reaction physicists. (See Tab. \ref{figure-zero-mass-and-massive-photons-production}.) In Tab. \ref{figure-zero-mass-and-massive-photons-production} two quantum reactions related by means of crossing symmetry are also reported. These are the {\em neutron decay} and the {\em neutrino detection}.}
\end{theorem}
\begin{proof}
This follows from the general relation between quantum integral bordism groups and quantum integral characteristic conservation laws. (See \cite{PRAS14}.) In fact if there exists a nonlinear quantum propagator $V$ such that $\partial V=N_0\bigcup P\bigcup N_1$, with $N_0=a\sqcup b$ and $N_1=c\sqcup d\sqcup e$, then this means that $N_1\in[N_0\bigcup P]\in Bor^{\widehat{(YM)}}_{3|3}$, hence $N_0\bigcup P\bigcup N_1\in [0]\in Bor^{\widehat{(YM)}}_{3|3}$. Since antiparticles reverse quantum integral characteristic numbers, it follows that for any quantum integral characteristic conservation law $\alpha$ of $\widehat{(YM)}$, one has $<[\alpha],[N_0\bigcup P\bigcup N_1]>=0=<[\alpha],[N'_1\bigcup P'\bigcup N'_0]>$. Therefore, $(N'_1\bigcup P')\in[N'_0]\in Bor^{\widehat{(YM)}}_{3|3}$, hence there exists a nonlinear quantum propagator $V'$ such that $\partial V'=N'_0\bigcup P'\bigcup N'_1$. Note also that, as a by-product, we get that there exists a $(3|3)$-dimensional quantum integral supermanifold $P'$ such that $\partial P'=\partial N'_1\sqcup \partial N'_0$, similarly to what happens for $P$, namely $\partial P=\partial N_0\sqcup \partial N_1$. This means that one has $\partial N'_1\in[\partial N'_0]\in\Omega^{\widehat{(YM)}}_{2|2}$ and $\partial N_1\in[\partial N_0]\in\Omega^{\widehat{(YM)}}_{2|2}$. Similarly one proves the existence of the other reactions obtained from crossing symmetry. For example, to the nonlinear quantum propagator $V$ such that $\partial V=(a\sqcup b)\bigcup P\bigcup c$, there corresponds, by quantum crossing symmetry, the nonlinear quantum propagator $V'$ such that $\partial V'=b\bigcup P'\bigcup(a'\sqcup c)$, where $P'$ is the anti-$P$. (See Fig. \ref{figure-quantum-anti-propagator} for a representation of $P$ and its anti-$P$.) There are cases where more than one anti-$P$ can exist.
\end{proof}
$\bullet$\hskip 2pt The following theorem answers the question: {\em ``Do massive photons exist?"}.
\begin{figure}
\caption{In the figure on the left it is represented a couple of zero-mass photons ($c\nsubseteq(\widehat{Higgs}
\label{figure-zero-mass-and-massive-photons-production}
\end{figure}
\begin{theorem}[Existence of quantum massive photons]\label{quantum-massive-photons-theorem}
The quantum super Yang-Mills equation $\widehat{(YM)}$ admits solutions that, starting from a Cauchy datum $N_0=a\sqcup b$, where $a$ represents a quantum electron and $b$ a quantum positron, bord $N_1$, representing a quantum massive particle. Any annihilation $e^++e^-\to \gamma+\gamma$ must necessarily generate a {\em quantum virtual massive photon}, say $\gamma_{m}$, before producing a couple of massless photons $\gamma$. Conversely a {\em quantum massive photon} decays giving $\gamma_{m}\to e^++e^-$.
\end{theorem}
\begin{proof}
Let us first identify a quantum massive photon with a quantum massive particle $\gamma_m$ that decays into an electron-positron couple. We know from Theorem 3.28 in \cite{PRAS22} that there can exist a quantum propagator $V'$ bording $a'\sqcup b'$ with $c'$, where $a'\sqcup b'$ is an electron-positron couple and $c'$ is a massive particle, hence all contained in $(\widehat{Higgs})\subset \widehat{(YM)}$. On the other hand, from the standard Compton scattering $\gamma+e^-\to e^-+\gamma$ we infer, by crossing symmetry, that the following reaction ({\em electron-positron annihilation}) holds: $e^++e^-\to\gamma+\gamma$. Theorem 3.28 in \cite{PRAS22} assures the existence of a nonlinear quantum propagator $V$ such that $\partial V=N_0\sqcup P\sqcup N_1$, where $N_0$ is contained in the sub-equation $(\widehat{Higgs})\subset \widehat{(YM)}$, and $N_1\nsubseteq (\widehat{Higgs})$. Therefore $V$ should cross the Goldstone boundary $(\widehat{Goldstone})\subset\widehat{(YM)}$, hence $V$ should contain a Goldstone piece. This proves that the electron-positron annihilation implies passing through a quantum massive photon. (Note that if the reaction $e^++e^-\to\gamma_m$ is permitted, then by {\em crossing symmetry} the decay $\gamma_m\to e^-+e^+$ is also permitted.)\footnote{Actually there are attempts to give experimental evidence for the existence of massive photons. (Visit \href{https://www.jlab.org/}{(Thomas Jefferson National Accelerator Facility, Newport News, Virginia, USA)}, and see \cite{ARKANI-FINK-SLAT-WEI}.) Massive photons can be identified with neutral massive bosonic particles. In the particle zoo these could be identified with the so-called {\em vector bosons}, like the neutral $\rho$-meson, the $\varphi$-meson and the $Z^0$-boson, all having spin $1$.} In Fig. \ref{figure-zero-mass-and-massive-photons-production} productions of quantum massless photons and quantum massive photons are represented with respect to the constraints $(\widehat{Higgs})$ and $(\widehat{Goldstone})$ in $\widehat{(YM)}$. If $e^+$ and $e^-$ collide at high energy, their annihilation can be converted into a massive couple of mesons $D^+\sqcup D^-$. In other words there exists a nonlinear quantum propagator $V$, bording $e^+\sqcup e^-$ with $D^+\sqcup D^-$, such that $\partial V=(e^+\sqcup e^-)\bigcup P\bigcup(D^+\sqcup D^-)$, where $P$ is an integral quantum supermanifold, partially outside $\widehat{(Higgs)}$, but still contained in $\widehat{(YM)}$. (See Fig. \ref{figure-zero-mass-and-massive-photons-production}.)
\end{proof}
From the above considerations, we see that the concept of massive photons can be generalized to a larger set of particles.
\begin{definition}[Quantum anti-particles and quantum massive photons]\label{quantum-antiparticles-massive-photons}
We say that two quantum massive, electrically charged particles $a\sqcup a'\subset \widehat{(YM)}$ are {\em quantum anti-particles} if they have the same mass and opposite quantum numbers.
We call an {\em $a$-quantum massive photon} a quantum massive uncharged particle having a decay into a couple $(a,a')$ of quantum massive, electrically charged anti-particles. We denote such a quantum massive photon with the symbol $\gamma^a_m$, or simply $\gamma_m$, if no confusion can arise. Therefore, we can write $\gamma^a_m\to a+a'$.
\end{definition}
\begin{theorem}[Existence of $a$-quantum massive photons]\label{a-quantum-massive-photons}
For any couple $(a,a')$ of quantum massive, electrically charged anti-particles, there exists an $a$-quantum massive photon $\gamma^a_m$.
\end{theorem}
\begin{proof}
The proof can be obtained by analogy with the one for Theorem \ref{quantum-massive-photons-theorem}.
\end{proof}
\begin{example}[Existence of $p$-quantum massive photon]\label{existence-of-p-quantum-massive-photon}
Since the Compton scattering $p+n\gamma\to p+n\gamma$ is permitted, for suitable $n\in\mathbb{N}$, by quantum crossing symmetry the reaction $p+\bar p\to 2n\gamma$ is also permitted. Therefore, a virtual $p$-quantum massive photon $\gamma_m^p$ necessarily exists.\footnote{From some experimental results it is well known that the annihilation $p+\bar p$ produces photons $\gamma$, through intermediate reactions and decays coming from massive particles. (For example: $p+\bar p\to3\pi^0$, $p+\bar p\to2\pi^0+\eta$, $p+\bar p\to\pi^0+2\eta$, $\pi^0\to 2\gamma$, $\eta\to 2\gamma$, $Z^0\to e^++e^-\to 2\gamma$.) Therefore, the virtual $p$-quantum massive photon $\gamma_m^p$ has a very complex structure, made by a collection of massive particles, like the mesons $\pi$ and $\eta$, bosons like $Z^0$, and leptons $e^{\pm}$, all inside $\widehat{(Higgs)}$. Note that the vector boson $Z^0$, which is more massive than the proton, also enters the virtual massive photon $\gamma_m^p$. ($m_p=0.938\, GeV/c^2$, $m_Z=91\, GeV/c^2$ and $m_W=80\, GeV/c^2$.) This means that virtual massive photons can have very large masses. (See Theorem \ref{quantum-virtual-anomaly-massive-particles}.) The massive intermediate vector bosons $W^{\pm}$ and $Z^0$ were predicted by Steven Weinberg, Abdus Salam and Sheldon Glashow (Nobel Prize 1979) and experimentally discovered by Carlo Rubbia's team in 1983 (Nobel Prize 1984).}
\end{example}
\begin{definition}[Quantum massive neutrinos]\label{quantum--massive-neutrinos}
We say that a quantum massive quasi-particle with zero electric charge is a {\em quantum massive neutrino} if it admits a decay into a couple (neutrino, anti-neutrino) of the same type.
\end{definition}
\begin{theorem}[Existence of quantum massive neutrinos]\label{existence-of-quantum-massive-neutrino}
The quantum super Yang-Mills equation $\widehat{(YM)}$ admits solutions that represent decays of quantum massive neutrinos.
\end{theorem}
\begin{proof}
In fact, we can find a nonlinear quantum propagator $V\subset\widehat{(YM)}$ such that $\partial V=N_0\bigcup P\bigcup N_1$, where $N_0=a\sqcup b\subset \widehat{(YM)}$, with $(a,b)=(\nu_e,\bar\nu_e)\not\subset\widehat{(Higgs)}$, and $N_1=c\sqcup d\subset \widehat{(Higgs)}$, with $(c,d)$ a couple of antiparticles in the sense of Definition \ref{quantum-antiparticles-massive-photons}. Then necessarily $V$ must cross $\widehat{(Goldstone)}$, hence it identifies a massive quasi-particle, say $\nu_m\subset \widehat{(Higgs)}$. Then, applying the quantum crossing symmetry to the reaction $\nu_e+\bar\nu_e\to \nu_m\to c+d$, we also get the reaction $d+c\to\nu_m\to \bar\nu_e+\nu_e$, i.e., there exists a nonlinear quantum propagator of $\widehat{(YM)}$ that encodes such a reaction. This proves that the massive, neutral particle $\nu_m$ decays into the couple $(\nu_e,\bar\nu_e)$, therefore $\nu_m$ is a massive neutrino.
Let us emphasize that neutrinos are of three different types: $\nu_e$, $\nu_\mu$ and $\nu_\tau$, called $e$-neutrino, $\mu$-neutrino and $\tau$-neutrino respectively. They differ by the type of decay that produces them. For example see (\ref{different-neutrinos-type-and-decays}).
\begin{equation}\label{different-neutrinos-type-and-decays}
\scalebox{0.8}{$ \begin{array}{ccc}
\xymatrix{\pi^+\ar[r]&e^++\nu_e\\}&&\\
\xymatrix{&&\mu^+\\
\pi^+\ar[r]&W^+\ar[ru]\ar[rd]&\\
&&\nu_\mu\\}&
\xymatrix{&\nu_\mu&\bar\nu_e\\
\mu^-\ar[ru]\ar[r]&W^-\ar[ru]\ar[rd]&\\
&&e^-\\}&
\xymatrix{&\bar\nu_\tau&\bar\nu_e&\bar\nu_\mu&\bar u\\
\tau^-\ar[ru]\ar[r]&W^-\ar[ru]\ar[rd]\ar@{.>}[rru]\ar@{.>}[rrd]\ar@{-->}[rrru]\ar@{-->}[rrrd]&\\
&&e^-&\mu^-&d\\}\\
e^++e^-\to Z^0\to\nu_e+\bar\nu_e&&\\
\end{array}$}
\end{equation}
In (\ref{different-neutrinos-type-and-decays}) the dot-arrows and dash-arrows starting from $W^-$ are alternatives to the full arrows and to each other.
In all these cases there is an intermediate production of a quasi-particle $Z^0$ or $W^{\pm}$. Such quasi-particles can usually be produced when a massive solution crosses $\widehat{(Goldstone)}$. Therefore from this point of view we should have a further argument to consider the usual neutrinos as massless particles, according to Gell-Mann's standard model. However, the quasi-particles $W^{\pm}$ cannot be considered massive neutrinos, since they do not decay into a massless couple (neutrino, anti-neutrino). Instead $Z^0$ has all the properties to be considered a massive neutrino!
Let us also emphasize that any massless particles produced in some reaction from massive particles are encoded by some nonlinear quantum propagators that must necessarily cross the Goldstone boundary to go outside $\widehat{(Higgs)}$, hence they identify massive particles. For example this happens for neutrinos and also for gauge boson particles that should be massless, but in interactions with massive particles appear to acquire large masses.\footnote{This phenomenon is usually called {\em gauge symmetry breaking}. Inside Pr\'astaro's Algebraic Topology of quantum super PDEs one can understand that such particles acquire masses by means of geometrodynamic processes related to the structure of quantum Yang-Mills PDEs and of the nonlinear quantum propagators encoding the reactions where they are involved.}
\end{proof}
\begin{example}[What is dark matter?]\label{massive-virtual-particles-dark-matter}
Theorem \ref{existence-of-quantum-massive-neutrino} agrees with the experimental fact that the annihilation of the couple $(\nu_e,\bar\nu_e)$ produces a $Z^0$ massive boson. The latter can again decay into the couple $(\nu_e,\bar\nu_e)$, or into a lepton-antilepton or quark-antiquark couple, according to the energy level considered. For example:
\begin{equation}\label{annihilations-neutrino-antineutrino}
\scalebox{0.8}{$\xymatrix{&&\nu_e+\bar\nu_e\\
\nu_e+\bar\nu_e\ar[r]&Z^0\ar[r]\ar@{.>}[rd]\ar@{-->}[ru]&e^++e^-\\
&&q+\bar q\\}$}
\end{equation}
In {\em(\ref{annihilations-neutrino-antineutrino})} the dot-arrow and dash-arrow starting from $Z^0$ are alternatives to the full arrow and to each other.
Then the $Z$ boson can be considered a massive neutrino when it decays into the couple $(\nu_e,\bar\nu_e)$. The quantum crossed symmetry reaction $\nu_e+\bar\nu_e\to Z^0$ is an example showing how massless particles can cross $\widehat{(Goldstone)}$, producing massive particles inside $\widehat{(Higgs)}$. The reaction $e^++e^-\to Z^0\to \nu_e+\bar\nu_e$, obtained by quantum crossing symmetry from the reaction in the middle line of {\em(\ref{annihilations-neutrino-antineutrino})}, shows that the annihilation of the couple $(e^-,e^+)$, when it is energetic enough, produces a massive quasi-particle $Z^0$ before losing its mass, by crossing $\widehat{(Goldstone)}$ and producing the couple of massless particles $(\nu_e,\bar\nu_e)$. This clarifies that the energy level of an electrically charged, massive couple (particle, antiparticle) decides whether a scattering will produce a massive photon or a massive neutrino.\footnote{A guess by B. M. Pontecorvo and V. N. Gribov \cite{GRIBOV-PONTECORVO, PONTECORVO}, in order to explain the so-called {\em mystery of the missing solar neutrinos}, assumed that neutrinos have some oscillations that characterize their different types. But such oscillations require massive neutrinos, at least for $\tau$-neutrinos and $\mu$-neutrinos. This fact does not agree with the standard model. On the other hand, from Theorem \ref{existence-of-quantum-massive-neutrino} we can understand that massive neutrinos can be identified with some energy levels of the quasi-particle $Z^0$, and that non-electron neutrinos can retain their usual property of being massless particles. It is important to emphasize that recent experimental results, obtained by an international collaboration called the \href{http://www.nature.com/nature/journal/v512/n7515/full/nature13702.html}{Borexino-Collaboration} \cite{BOREXINO-COLLABORATION}, have been able to detect just the so-called missing solar neutrinos. In this respect one can understand that nonlinear quantum propagators allow us to reconcile massless neutrinos, according to the Standard Model, with the observed massive neutrinos.}\label{dark-matter}
Let us also emphasize that the so-called ``dark matter" can be interpreted as the massive virtual particles codified in Theorem \ref{quantum-massive-photons-theorem}, Theorem \ref{a-quantum-massive-photons} and Theorem \ref{existence-of-quantum-massive-neutrino}. In other words ``dark matter" can be considered a generic term to identify virtual massive particles produced when a solution of the quantum super Yang-Mills equation $\widehat{(YM)}$ crosses the Goldstone boundary, entering the Higgs quantum super PDE $\widehat{(Higgs)}\subset \widehat{(YM)}$. This interpretation of ``dark matter" could theoretically support some recent experimental observations where the positron fraction steadily increases at high energy levels, which are just the levels necessary to produce massive photons, massive neutrinos and $a$-quantum massive photons, according to the above quoted theorems. See \cite{AGUILAR} for recent experiments that suggest the existence of ``dark matter" in the sense specified here.\footnote{Production of virtual $\pi^+$ decaying into positrons and neutrinos can be obtained by means of massive photons identified with virtual states in $\gamma$-nucleon interactions. This is a new point of view on the long-studied $\pi^+$-photoproduction. (For general information on pion photoproduction see, e.g., \cite{DRECHSEL-TIATOR, DUTTA, DUTTA-GAO-LEE} and references quoted therein.)}
\end{example}
$\bullet$\hskip 2pt The following theorem answers the question: ``{\em Are quarks fundamental particles?}"
\begin{theorem}[Quarks break-down]\label{quarks-break-down}
Quarks are not fundamental particles.
\end{theorem}
\begin{proof}
The proof can be obtained easily by considering Theorem \ref{quantum-massive-photons-theorem}. In fact to any quark $q$ we can associate an anti-quark $\bar q$, so that $(q,\bar q)$ is a couple of massive, electrically charged anti-particles in the sense of Definition \ref{quantum-antiparticles-massive-photons}, hence there exist massive $q$-photons $\gamma_q$. In other words there exist nonlinear quantum propagators allowing reactions $q+\bar q\to n\, \gamma$, $n\in\mathbb{N}$. In these reactions quarks are transformed into particles that do not contain quarks, hence quarks are broken down! One arrives at similar conclusions by considering Theorem \ref{existence-of-quantum-massive-neutrino}. In fact, by considering the quantum reaction $\bar q+q\to Z^0\to \bar\nu_e+\nu_e$, obtained by quantum crossing symmetry from the bottom reaction in (\ref{annihilations-neutrino-antineutrino}), we see that the mass of the couple $(q,\bar q)$ completely disappears and particles appear that are not made of quarks. In other words quarks break down.
\end{proof}
$\bullet$\hskip 2pt In the following we give a precise meaning to the concept of {\em observed quantum time} and to the so-called {\em time-energy uncertainty principle}. This is possible thanks to our geometric theory of quantum (super) PDEs.\footnote{The exact interpretation of the time-energy uncertainty principle has remained an open problem, and is not even universally accepted (see, e.g., L. Landau). Really, in order to directly apply the Heisenberg uncertainty principle to the (time, energy) couple, it is necessary to define the meaning of time as a noncommutative variable with respect to the energy. But the proper time of a quantum relativistic frame is a commutative variable! (See \cite{PRAS19}.) (See also some attempts to solve this problem by Dirac \cite{DIRAC} and by L. I. Mandelshtam and I. E. Tamm \cite{MANDELS-TAMM}.) Actually the time-energy uncertainty principle is usually justified by adopting the Mandelshtam-Tamm point of view. But that approach is not satisfactory, since it refers to a generic auxiliary non-commutative variable, say $B$, that has nothing to do with energy and time. Furthermore, in that description time remains a commutative variable, and one identifies the velocity $\frac{dB}{dt}$ with a finite ratio, i.e., $\frac{dB}{dt}\equiv \frac{\Delta B}{\Delta t}$. In fact one uses the following modified uncertainty relation: $\Delta B\, \frac{\Delta E}{\frac{dB}{dt}}\ge \hbar$. Then by using the approximation $\frac{dB}{dt}\equiv \frac{\Delta B}{\Delta t}$, one obtains $\Delta B\, \frac{\Delta E}{\frac{dB}{dt}}\approx \Delta t\, \Delta E\ge \hbar$. But $\Delta t\, \Delta E\ge \hbar$ makes no sense, since $t$ is a commutative variable!}
\begin{definition}[Quantum relativistic observed time]\label{quantum-relativistic-observed-time}
Let $V\subset\widehat{(YM)}$ be a solution of $\widehat{(YM)}$ and let $\widehat{g}:M\to Hom_Z(\dot T^2_0M;A)$ be the quantum metric induced by the quantum graviton, identified by the solution $V$. Then a quantum relativistic frame $\psi\equiv\{i:N\to M, g\}$, where $(N,g)$ is a pseudo-Riemannian $4$-dimensional manifold endowed with a time-like flow, identifies on $N$ an $A$-valued metric $i^*\widehat{g}:N\to A\bigotimes S^0_2N$. We define the {\em quantum relativistic observed time} between two points $p_1,\, p_2\in N$, belonging to the same flow line of $\psi$, as the $A$-valued quantum length $\hat t[p_1,p_2]\in A$, calculated with respect to $i^*\widehat{g}$. (For more details see \cite{PRAS19}.) We call the {\em spectral content} of the quantum relativistic observed time the spectrum $Sp(\hat t[p_1,p_2])$ of $\hat t[p_1,p_2]$.
\end{definition}
\begin{proposition}
The quantum relativistic observed time cannot, in general, coincide with the proper time of the quantum relativistic frame.
\end{proposition}
\begin{proof}
In fact the length of the arc considered in Definition \ref{quantum-relativistic-observed-time} differs according to whether it is measured with respect to $g$ or to $i^*\widehat{g}$, even in the case $A=\mathbb{R}$! This can be seen by a direct calculation, using coordinates $\{\xi^\alpha\}_{0\le\alpha\le 3}$, with $\xi^0=t$ the proper time of the relativistic quantum frame $\psi$. Then the arc length between the points $p_1,\, p_2$, belonging to the same flow line $\phi_{p_1}$ passing through $p_1$ and $p_2$, is given by $s[p_1,p_2]=\int_{[p_1,p_2]}ds=\int_{[t_1,t_2]}\sqrt{g_{\alpha\beta}\dot\xi^\alpha\dot\xi^\beta}\, dt=\int_{[t_1,t_2]}\sqrt{g_{00}}\, dt=t_2-t_1$, assuming $g_{00}=1$. On the other hand we have
$\hat t[p_1,p_2]=\int_{[p_1,p_2]}d\hat s=\int_{[t_1,t_2]}\sqrt{|\gamma^*\widehat{g}_{(s)}|}\, dt\in A$, where $\gamma\equiv\phi_{p_1}\circ i$. (For more details see \cite{PRAS19}.) In the particular case that $A=\mathbb{R}$, we have that $\hat t[p_1,p_2]\in \mathbb{R}$, as well as $s[p_1,p_2]$, but $\gamma^*\widehat{g}_{(s)}$ is different from $\phi_{p_1}^*g$.\footnote{Let us also add that observed solutions, namely solutions of the observed quantum super Yang-Mills PDE $\widehat{(YM)}[i]$, encoding reactions, are singular solutions. Therefore, internal times separating two space-like Cauchy data $N_{t_0}$ and $N_{t_1}$, $t_0<t_1$, do not equal the intervals $t_1-t_0$. Therefore, also from this point of view, the apparent time of a reaction (with respect to a quantum relativistic frame) does not give an exact evaluation of the internal lapse of time in which the reaction occurred.}
\end{proof}
\begin{theorem}[Quantum virtual anomaly-massive particles]\label{quantum-virtual-anomaly-massive-particles}
A quantum virtual massive quasi-particle can have an observed mass, in the reaction where it appears, larger than the observed energy of the incoming particles entering the reaction.\footnote{See, e.g., Example \ref{existence-of-p-quantum-massive-photon}.} However this does not really contradict Einstein's mass-energy conservation equation.
\end{theorem}
\begin{proof}
The justification of this fact, which might appear to contradict Einstein's equation of mass-energy conservation $E=mc^2$, is partially to be ascribed to the {\em Heisenberg uncertainty principle}. (This is, really, no longer a ``principle", but a ``theorem". See \cite{PRAS5}.) In fact, adopting the notation used in \cite{PRAS5}, the uncertainty relation for the commutator $[quantum-time,quantum-energy]$ can be written as reported in equation (\ref{apparent-time}).
\begin{equation}\label{apparent-time}
\sigma^2(\hat H)\, \sigma^2(\hat t)\ge\frac{1}{4}|<[\hat H,\hat t]>|^2.
\end{equation}
Thus in quantum processes localized in a very short ``observed quantum-time", as could happen in the case of virtual particle creation, one could have large masses, independently of the initial energy-mass content. However, this argument does not contradict Einstein's mass-energy conservation equation! In fact, the concept of ``quantum-time" is only an apparent time, different from the proper time of the quantum relativistic frame, and exists with respect to the interaction between the quantum system and the quantum relativistic frame. Instead the proper time of the observer belongs to $\mathbb{R}$, hence commutes with the Hamiltonian. In other words, with respect to this proper time the Heisenberg uncertainty principle does not apply. However, a complete and precise mathematical justification of such phenomena can be obtained in the framework of nonlinear quantum propagators. Really only these structures can support the real interpretation of energy conservation in quantum reactions. (See Theorem 3.20(I), Corollary 3.22(I) and Corollary 3.23(I).)
\end{proof}
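As an illustrative aside, inequality (\ref{apparent-time}) has the general Robertson form $\sigma^2(\hat A)\,\sigma^2(\hat B)\ge\frac{1}{4}|<[\hat A,\hat B]>|^2$. The following minimal Python sketch verifies that form numerically with generic finite-dimensional Hermitian matrices standing in for $\hat H$ and $\hat t$; this is only a toy stand-in, not the $\widehat{A}$-valued operators of the theory.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_hermitian(n):
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (X + X.conj().T) / 2

H = random_hermitian(n)     # toy stand-in for the quantum Hamiltonian
T = random_hermitian(n)     # toy stand-in for the (non-commuting) quantum time

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)  # normalized state

mean = lambda A: np.vdot(psi, A @ psi)
var  = lambda A: (mean(A @ A) - mean(A) ** 2).real

lhs = var(H) * var(T)
rhs = 0.25 * abs(mean(H @ T - T @ H)) ** 2
assert lhs >= rhs - 1e-12   # Robertson-type inequality holds for any state
print(lhs, ">=", rhs)
\end{verbatim}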
Let us take this occasion to revise the concept of the Heisenberg uncertainty principle in the framework of the Algebraic Topology of quantum super PDEs.
\begin{definition}[Apparent quantum observed time in quantum reaction]\label{apparent-quantum-observed-time-in-quantum-reaction}
We call (observed) {\em apparent quantum time} in a quantum reaction the quantity $\sigma^2(\hat t)$ entering equation {\em(\ref{apparent-time})}.
\end{definition}
\begin{example}[Quantum time and strengths of the fundamental interactions]\label{quantum-time-and-strengths-fundamental-interactions}
An experimental way to use the concepts of quantum time and apparent quantum time is to estimate the type of interaction present in a reaction. In fact, by using the Heisenberg uncertainty relation $\triangle\hat t\, \triangle E\ge \hbar$, applied to the quantum time and to the strengths of the interactions, it is possible to guess the type of interaction from the observed interaction time. Such a correlation is reported in Tab. \ref{table-quantum-time-and-strengths-fundamental-interactions}. The Planck constants and the quark masses are also reported there for convenience.
$\bullet$\hskip 2pt For example, if one observes a decay within an apparent quantum time $\sigma^2(\hat t)\approx 10^{-23}\, s$, one can state that it is a strong (or quantum-gravity) decay.
\begin{table}[h]
\caption{Quantum time and strengths of the fundamental interactions.}
\label{table-quantum-time-and-strengths-fundamental-interactions}
\begin{tabular}{|c|c|}
\hline
{\footnotesize{\rm Apparent interaction time}} &{\footnotesize{\rm Interaction type}}\hfil\\
\hline
\hline
{\footnotesize {\rm $10^{-10}\, s$}}&{\footnotesize {\rm Weak}}\\
\hline
{\footnotesize {\rm $10^{-16}\, s$}}&{\footnotesize {\rm Electromagnetic}}\\
\hline
{\footnotesize {\rm $10^{-23}\, s$}}&{\footnotesize {\rm Strong (quantum-gravity)}}\\
\hline
\multicolumn{2}{l}{\footnotesize {\rm $\hbar=6.584\, \times\, 10^{-25}\, GeV\, s$.}}\\
\end{tabular}
\scalebox{0.65}{$\begin{tabular}{|l|l|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{10}{|c|}{\footnotesize {\rm Comparison with Planck's constants, electron, proton and quark mass-energies}}\\
\hline
\hline
{\footnotesize {\rm Planck length}}&{\footnotesize {\rm $l_{pl}=\sqrt{\frac{\hbar\, G}{c^3}}=1.616 \times\, 10^{-35}\, m$}}&\hskip-0.5cm&{\footnotesize {\rm Quarks}}&{\footnotesize {\rm Mass-energy}}&{\footnotesize {\rm charge}}&{\footnotesize {\rm S'}}&{\footnotesize {\rm C'}}&{\footnotesize {\rm B'}}&{\footnotesize {\rm T'}}\\
\hline
{\footnotesize {\rm Planck mass}}&{\footnotesize {\rm $m_{pl}=\sqrt{\frac{\hbar\, c}{G}}=2.176 \times\, 10^{-8}\, Kg$}}&\hskip-0.5cm&{\footnotesize {\rm up (u)}}&{\footnotesize {\rm $0.3$}}&{\footnotesize {\rm $+\frac{2}{3}$}}&{\footnotesize {\rm $0$}}&{\footnotesize {\rm $0$}}&{\footnotesize {\rm $0$}}&{\footnotesize {\rm $0$}}\\
\hline
{\footnotesize {\rm Planck energy}}&{\footnotesize {\rm $E_{pl}=m_{pl}\, c^2=c^2\, \sqrt{\frac{\hbar\, c}{G}}=1.22 \times\, 10^{19}\, GeV$}}&\hskip-0.5cm&{\footnotesize {\rm down (d)}}&{\footnotesize {\rm $0.3$}}&{\footnotesize {\rm $-\frac{1}{3}$}}&{\footnotesize {\rm $0$}}&{\footnotesize {\rm $0$}}&{\footnotesize {\rm $0$}}&{\footnotesize {\rm $0$}}\\
\hline
{\footnotesize {\rm Planck time}}&{\footnotesize {\rm $t_{pl}=\frac{l_{pl}}{c}=\frac{\hbar}{m_{pl}\, c^2}=\sqrt{\frac{\hbar\, G}{c^5}}=5.391 \times\, 10^{-44}\, s$}}&\hskip-0.5cm&{\footnotesize {\rm charm (c)}}&{\footnotesize {\rm $1.25$}}&{\footnotesize {\rm $+\frac{2}{3}$}}&{\footnotesize {\rm $0$}}&{\footnotesize {\rm $+1$}}&{\footnotesize {\rm $0$}}&{\footnotesize {\rm $0$}}\\
\hline
{\footnotesize {\rm Planck charge}}&{\footnotesize {\rm $q_{pl}=\sqrt{4\pi\epsilon_0\hbar\, c}=1.875 \times\, 10^{-18}\, C$}}&\hskip-0.5cm&{\footnotesize {\rm strange (s)}}&{\footnotesize {\rm $0.5$}}&{\footnotesize {\rm $-\frac{1}{3}$}}&{\footnotesize {\rm $-1$}}&{\footnotesize {\rm $0$}}&{\footnotesize {\rm $0$}}&{\footnotesize {\rm $0$}}\\
\hline
{\footnotesize {\rm Planck temperature}}&{\footnotesize {\rm $\theta_{pl}=\frac{m_{pl}\, c^2}{K_B}=\sqrt{\frac{\hbar\, c^5}{G\, K_B^2}}=1.416 \times\, 10^{38}\, K$}}&\hskip-0.5cm&{\footnotesize {\rm top (t)}}&{\footnotesize {\rm $91.00$}}&{\footnotesize {\rm $+\frac{2}{3}$}}&{\footnotesize {\rm $0$}}&{\footnotesize {\rm $0$}}&{\footnotesize {\rm $0$}}&{\footnotesize {\rm $+1$}}\\
\hline
{\footnotesize {\rm $m_{e}=0.51\, \times\, 10^{-3}\, GeV$}}&{\footnotesize {\rm $m_{p}=0.938\, GeV$}}&\hskip-0.5cm&{\footnotesize {\rm bottom (b)}}&{\footnotesize {\rm $4.8$}}&{\footnotesize {\rm $-\frac{1}{3}$}}&{\footnotesize {\rm $0$}}&{\footnotesize {\rm $0$}}&{\footnotesize {\rm $-1$}}&{\footnotesize {\rm $0$}}\\
\hline
\multicolumn {10}{l}{\rm \footnotesize Baryon number quarks $B=\frac{1}{3}$. Baryon number antiquarks $B=-\frac{1}{3}$. $\{S',C',B',T'\}=\{$strangeness,charm,bottomness,topness$\}$.}\\
\multicolumn {10}{l}{\rm \footnotesize Mass-energy in GeV. $m_e=$ electron mass-energy. $m_{p}=$ proton mass-energy.}\\
\end{tabular}$}\end{table}
$\bullet$\hskip 2pt A quark-antiquark annihilation should be observed within an apparent quantum time $\sigma^2(\hat t)\approx 3.61\, \times\, 10^{-27},\, 1.097\, \times\, 10^{-24} \, s$. (See the quark masses reported in Tab. \ref{table-quantum-time-and-strengths-fundamental-interactions}; the first value refers to $t+\bar t$ and the last to $u+\bar u$.)
$\bullet$\hskip 2pt Similarly an electron-positron annihilation should be observed within an apparent quantum time $\sigma^2(\hat t)\approx 6.4549\, \times\, 10^{-22}\, s$. (See the electron mass reported in Tab. \ref{table-quantum-time-and-strengths-fundamental-interactions}.)
\end{example}
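The numerical estimates in the example above can be reproduced with the following minimal Python sketch, assuming $\sigma^2(\hat t)\approx \hbar/\triangle E$ with $\triangle E=2mc^2$ the rest mass-energy of the annihilating pair and the rounded mass-energies of Tab. \ref{table-quantum-time-and-strengths-fundamental-interactions}.
\begin{verbatim}
# Apparent-quantum-time estimates, assuming sigma^2(t) ~ hbar / (2 m c^2)
# with the rounded mass-energies of the table above.
hbar = 6.584e-25               # GeV s (value used in the table)

pairs_GeV = {                  # rest mass-energy m c^2 of one particle of the pair
    "t + tbar": 91.0,
    "u + ubar": 0.3,
    "e+ + e-":  0.51e-3,
}

for name, m in pairs_GeV.items():
    dt = hbar / (2.0 * m)      # apparent quantum time in seconds
    print(f"{name:10s}  sigma^2(t) ~ {dt:.3e} s")
# t + tbar ~ 3.62e-27 s,  u + ubar ~ 1.10e-24 s,  e+ + e- ~ 6.45e-22 s
\end{verbatim}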
\begin{table}[h]
\caption{Phenomenological conservation laws in quantum reactions and symmetric reactions.}
\label{phenomenological-conservation-laws-in-quantum-reactions}
\scalebox{0.7}{$\begin{tabular}{|c|c|c|l|}
\hline
{\footnotesize{\rm Name}} &{\footnotesize{\rm Symbol}} &{\footnotesize{\rm Conserved}}&\hfil{\footnotesize{\rm Remark}}\hfil\\
\hline
\hline
{\footnotesize {\rm Baryon number}}&{\footnotesize {\rm $B$}}&{\footnotesize {\rm yes}}&{\footnotesize{\rm $B=\frac{1}{3}(n_q-n_{\bar q})\equiv \frac{1}{3}\triangle n_q$}}\\
&&&{\footnotesize{\rm ($n_q=$ number of quarks, $n_{\bar q}=$ number of antiquarks).}}\\
\hline
{\footnotesize {\rm Electric charge}}&{\footnotesize{\rm $Q$}}&{\footnotesize {\rm yes}}&{\footnotesize {\rm $\frac{Q}{e}=I_3+\frac{Y}{2}$, ($e=$ electron's electric charge) (See Theorem \ref{non-conservation-of-quantum-electric-charge}.)}}\\
&&&{\footnotesize {\rm (Gell-Mann-Nishijima formula). (See Theorem \ref{quantum-gell-mann-nishijima-formula}.)}}\\
\hline
{\footnotesize {\rm Color charge}}&&{\footnotesize {\rm yes}}&\\
\hline
{\footnotesize {\rm Energy-mass}}&{\footnotesize {\rm $E=m\, c^2$}}&{\footnotesize{\rm yes}}&{\footnotesize {\rm (See Theorem \ref{quantum-virtual-anomaly-massive-particles}.)}}\\
\hline
{\footnotesize {\rm Isospin}}&{\footnotesize{\rm $I$}}&{\footnotesize{\rm yes in strong int.}}&\\
\hline
{\footnotesize {\rm $3$-Isospin}}&{\footnotesize{\rm $I_3$}}&{\footnotesize{\rm yes in strong int.}}&\\
\hline
{\footnotesize {\rm Lepton number}}&{\footnotesize{\rm $L=L_e+L_\mu+L_\tau$}}&{\footnotesize{\rm yes}}&{\footnotesize{\rm $L=n_l-n_{\bar l}$}}\\
&&&{\footnotesize{\rm ($n_l=$ number of leptons, $n_{\bar l}=$ number of antileptons).}}\\
{\footnotesize {\rm Electronic lepton number}}&{\footnotesize{\rm $L_e$}}&{\footnotesize{\rm yes}}&{\footnotesize{\rm $L_e=n_{e^-+\nu_e}-n_{e^++\bar\nu_e}$}}\\
&&&{\footnotesize{\rm ($n_{e^-+\nu_e}=$ number of electrons+electron-neutrinos)}}\\
&&&{\footnotesize{\rm ($n_{e^++\bar\nu_e}=$ number of positrons+electron-antineutrinos).}}\\
{\footnotesize {\rm Muonic lepton number}}&{\footnotesize{\rm $L_\mu$}}&{\footnotesize{\rm yes}}&{\footnotesize{\rm $L_\mu=n_{\mu+\nu_\mu}-n_{\bar\mu+\bar\nu_\mu}$}}\\
&&&{\footnotesize{\rm ($n_{\mu+\nu_\mu}=$ number of muons+muon-neutrinos)}}\\
&&&{\footnotesize{\rm ($n_{\bar\mu+\bar\nu_\mu}=$ number of antimuons+muon-antineutrinos).}}\\
{\footnotesize {\rm Tauonic lepton number}}&{\footnotesize{\rm $L_\tau$}}&{\footnotesize{\rm yes}}&{\footnotesize{\rm $L_\tau=n_{\tau+\nu_\tau}-n_{\bar\tau+\bar\nu_\tau}$}}\\
&&&{\footnotesize{\rm ($n_{\tau+\nu_\tau}=$ number of tauons+tau-neutrinos)}}\\
&&&{\footnotesize{\rm ($n_{\bar\tau+\bar\nu_\tau}=$ number of antitauons+tau-antineutrinos).}}\\
\hline
{\footnotesize {\rm Strangeness number}}&{\footnotesize{\rm $S'$}}&{\footnotesize{\rm yes in strong int.}}&{\footnotesize {\rm $\triangle S'=1$ in weak int.}}\\
\hline
{\footnotesize {\rm Hypercharge}}&{\footnotesize{\rm $Y$}}&{\footnotesize{\rm yes in strong int.}}&{\footnotesize {\rm no in weak int.}}\\
\hline
{\footnotesize {\rm Crossing symmetry}}&&{\footnotesize{\rm yes}}&{\footnotesize {\rm $n\to p+e^-+\bar\nu_e\, \Leftrightarrow\, \bar\nu_e+p\to n+e^+$}}\\
&&&{\footnotesize {\rm (neutron decay)$ \Leftrightarrow $(neutrino detection).}}\\
\hline
{\footnotesize {\rm CPT symmetry}}&{\footnotesize{\rm $CPT$}}&{\footnotesize{\rm yes}}&{\footnotesize {\rm $CPT=$(charge-conjugation)(parity)(time-reversal)}}\\
\hline
\multicolumn {4}{l}{\rm\footnotesize Hypercharge: $Y=B+S'+C'+B'+T'$, flavour $\equiv(I_3,S',C',B',T')=$ ($3$-isospin,strangeness,charm,bottomness,topness).}\\
\multicolumn {4}{l}{\rm \footnotesize $I_3=\frac{1}{2}(\triangle n_u-\triangle n_d)=3$-Isospin is the $3^{th}$-component of isospin $I$. $S'=-\triangle n_s$. $C'=\triangle n_c$. $B'=-\triangle n_b$. $T'=\triangle n_t$.}\\
\multicolumn {4}{l}{\rm \footnotesize $Y=\frac{1}{3}[\triangle n_u+\triangle n_d-2(\triangle n_s+\triangle n_b)+4(\triangle n_c+\triangle n_t)]$, $\triangle n_q\equiv n_q-n_{\bar q}$.}\\
\multicolumn {4}{l}{\rm \footnotesize For complementary information see \href{http://en.wikipedia.org/wiki/List_of_particles}{Wikipedia - List of particles}.}\\
\end{tabular}$}\end{table}
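As an illustration of the lepton-number bookkeeping summarized in Tab. \ref{phenomenological-conservation-laws-in-quantum-reactions}, the following minimal Python sketch checks $L_e$, $L_\mu$ and $L_\tau$ conservation in two standard decays; the particle assignments are textbook values and are used only as an example.
\begin{verbatim}
# Lepton-number bookkeeping (L_e, L_mu, L_tau) for two standard decays,
# illustrating the conservation rules of the table above.
L = {  # (L_e, L_mu, L_tau) per particle; antiparticles carry the opposite sign
    "e-": (1, 0, 0), "e+": (-1, 0, 0),
    "nu_e": (1, 0, 0), "nubar_e": (-1, 0, 0),
    "mu-": (0, 1, 0), "nu_mu": (0, 1, 0),
    "pi+": (0, 0, 0),
}

def total(particles):
    return tuple(sum(L[p][k] for p in particles) for k in range(3))

reactions = [
    (["mu-"], ["e-", "nubar_e", "nu_mu"]),  # muon decay
    (["pi+"], ["e+", "nu_e"]),              # pion decay, cf. the diagrams above
]
for lhs, rhs in reactions:
    assert total(lhs) == total(rhs)
    print(lhs, "->", rhs, ":", total(lhs))
\end{verbatim}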
$\bullet$\hskip 2pt In the following we characterize the concept of confinement for quantum systems.
\begin{definition}[(De)confined quantum system]\label{de-confined-quantum-system}
$\bullet$\hskip 2pt A solution $V$ of $\widehat{(YM)}$ encodes a {\em confined system} if the spectrum $Sp(H(q))$ of the corresponding quantum Hamiltonian has non-empty point spectrum: $Sp(H(q))_p\not=\varnothing$, for all $q\in V$. Then we also say that $V$ is a {\em confined quantum solution}.
$\bullet$\hskip 2pt When a solution $V$ of $\widehat{(YM)}$ has non-empty point spectrum only on some subsets $N$ of $V$, then we say that $V$ encodes a {\em partially confined system} and that $N$ encodes the {\em deconfined subsystem}. Then we also say that $N$ is the {\em quantum deconfined part} of the solution $V$.
$\bullet$\hskip 2pt If a solution $V$ of $\widehat{(YM)}$ is such that the following two conditions are satisfied:
{\em(i)} $Sp(H(q))_p\not=\varnothing$, for all $q\in V$;
{\em(ii)} $Sp(H(q))_c\not=\varnothing$, for all $q\in V$;
then we say that $V$ encodes a {\em confined system that can be deconfined}.
\end{definition}
\begin{theorem}[(De)confinement criterion]\label{de-confinement-criterion}
Let us define the following set-mapping:
\begin{equation}\label{ker-delta-mapping}
\left\{
\begin{array}{l}
\ker_{\triangle}:\widehat{A}\multimap A\\
\ker_{\triangle}(\hat a)\equiv\bigcup_{\lambda\in Sp(\hat a)}\ker(\hat a-\lambda\, e).\\
\end{array}
\right.
\end{equation}
Then a solution $V$ of $\widehat{(YM)}$ encodes a confined system (or a confined system that can be deconfined) iff $\ker_{\triangle}(H(q))\supset\{0\}\subset A$, $\forall q\in V$.
Furthermore, there exists a quantum deconfined part $N\subset V$ of $V$ iff there exist points $q\in V$ such that $\ker_{\triangle}(H(q))=\{0\}\subset A$.
\end{theorem}
\begin{proof}
Let us note that if $\lambda_1\not= \lambda_2\in Sp(\hat a)_p$, then $(\hat a-\lambda_i\, e)$, $i=1,2$, is not injective and $\ker(\hat a-\lambda_1\, e)\bigcap\ker(\hat a-\lambda_2\, e)=\{0\}\subset A$. Instead, if $\lambda\in Sp(\hat a)_c\bigcup Sp(\hat a)_r$, then $(\hat a-\lambda\, e)$ is injective, hence $\ker(\hat a-\lambda\, e)=\{0\}\subset A$. Therefore, at the points $q\in V$ where $V$ encodes a confined quantum system, or a confined quantum system that can be deconfined, necessarily $\ker_{\triangle}(H(q))\supset\{0\}\subset A$. Instead, at the points $q\in V$ where $V$ encodes a deconfined quantum system, necessarily $\ker_{\triangle}(H(q))=\{0\}\subset A$.
\end{proof}
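A toy finite-dimensional illustration of the kernel criterion above, with a Hermitian matrix standing in for $H(q)$ (a heuristic stand-in only, not the $\widehat{A}$-valued quantum Hamiltonian of the theory), is the following: every spectral value of a finite Hermitian matrix belongs to the point spectrum, so $\ker_{\triangle}$ is non-trivial and the criterion classifies the toy system as confined.
\begin{verbatim}
import numpy as np

# Toy stand-in for H(q): for a finite Hermitian matrix every spectral value
# is a point eigenvalue, so ker_Delta(H) contains non-zero elements and the
# criterion of the theorem classifies the system as confined.
H = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

eigvals, eigvecs = np.linalg.eigh(H)
for lam, v in zip(eigvals, eigvecs.T):
    # v spans ker(H - lam*I): a non-zero kernel element for each point eigenvalue
    residual = np.linalg.norm((H - lam * np.eye(3)) @ v)
    print(f"lambda = {lam:.3f}   ||(H - lambda I) v|| = {residual:.1e}")
\end{verbatim}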
$\bullet$\hskip 2pt Let us conclude this paper by answering the following question: {\em ``Does a quantum Majorana neutrino exist?"}.\footnote{This question arises from the paper {\em Symmetrical theory of electron and positron}, published by E. Majorana \cite{MAJORANA}, about a solution of the Dirac equation, where he first showed the existence of a massive, electrically neutral, spin-$\frac{1}{2}$ solution that coincides with its anti-particle. This solution is now usually called the {\em Majorana neutrino}. The existence of this particle should allow the so-called {\em neutrinoless double beta decay} of some nuclei. (See, e.g., this beautiful link \href{http://thy.phy.bnl.gov/~vogelsan/GGS/Wilkerson.pdf}{http://thy.phy.bnl.gov/~vogelsan/GGS/Wilkerson.pdf}.)}
\begin{definition}[Quantum Majorana neutrino]\label{quantum-majorana-neutrino}
We define a {\em quantum Majorana neutrino} to be a quantum massive, electrically neutral fermionic particle, its own antiparticle, identified with a Cauchy datum of the quantum super Yang-Mills PDE $\widehat{(YM)}$.
\end{definition}
\begin{theorem}[Existence of quantum Majorana neutrino]\label{existence-quantum-majorana-neutrino}
The quantum super Yang-Mills PDE $\widehat{(YM)}$ admits nonlinear quantum propagators $\widetilde{V}\subset\widehat{(YM)}$ such that $\partial\widetilde{V}=\widetilde{N}_0\sqcup \widetilde{P}\sqcup \widetilde{N}_1$, where $\widetilde{N}_1$ can be identified with Majorana neutrinos. Furthermore, the following propositions hold.
{\em(i)} There exists a complex quantum quasi-particle, a quasi-neutralino, that we call the {\em quantum Majorana neutralino}. It contains two Majorana neutrinos, besides two Higgsinos and the two supersymmetric partners of the couple $(\nu_e,\bar\nu_e)$.
{\em(ii)} $\widetilde{V}$ is homeomorphic to a quantum $(4|4)$-superdisk with two attached super-handles.
\end{theorem}
\begin{figure}
\caption{Representation of the relation between a propagator $V$ and its supersymmetric partner $\widetilde{V}$.}
\label{relation-propagator-supersymmetric-partner}
\end{figure}
\begin{proof}
Let $V\subset\widehat{(YM)}$ be a nonlinear quantum propagator in $\widehat{(YM)}$ such that
\begin{equation}\label{quantum-non-linear-propagator-and-symmetric-one-for-majorana-neutrino}
\partial V=N_0\sqcup P\sqcup N_1
\end{equation}
with $N_0=W^+\sqcup W^-$ and $N_1=Z^0\sqcup Z^0$. This is possible since $V$ represents the reaction $W^++W^-\to Z^0+ Z^0$ that splits into the intermediate decays $W^+\to \nu_e+e^+$, $W^-\to \bar\nu_e+e^-$ and the reactions $e^++e^-\to Z^0$, $\nu_e+\bar\nu_e\to Z^0$. A representation by means of elementary bordisms is given in Fig. \ref{figure-majorana-neutrino-reaction}. Let us now consider the following lemmas.
\begin{lemma}[Boson and Fermion quantum super PDEs]\label{bosonic-fermionic-quantum-super-pds}
In the quantum super Yang-Mills PDE $\widehat{(YM)}$ there exist two disjoint, open PDEs $\widehat{(Boson)}\subset \widehat{(YM)}$ and $\widehat{(Fermion)}\subset \widehat{(YM)}$, that we call respectively {\em Boson-PDE} and {\em Fermion-PDE}, which are formally integrable and completely integrable. Solutions of $\widehat{(Boson)}$ (resp. $\widehat{(Fermion)}$) are boson-polarized (resp. fermion-polarized).
\end{lemma}
\begin{proof}
With respect to the definitions in Tab. \ref{local-quantum-spectral-spin-classification-observed-yang-mills-pde-solutions}, we can write $\widehat{A}=\mathfrak{b}\sqcup \mathfrak{f}\sqcup \mathfrak{n}$, where $\mathfrak{n}\equiv \widehat{A}\setminus(\mathfrak{b}\sqcup \mathfrak{f})$.\footnote{Let us emphasize that $\mathfrak{b}$ is not empty since $\hat a\equiv \hbar^2\, s(s+1)\, id_A$, $s=n\in\{0,1,2,\cdots\}$, belongs to $\mathfrak{b}$. Similarly $\mathfrak{f}$ is not empty since $\hat a\equiv \hbar^2\, s(s+1)\, id_A$, $s=\frac{2n+1}{2}$, $n\in\{0,1,2,\cdots\}$, belongs to $\mathfrak{f}$.} We identify such disjoint subsets as equivalence classes of an equivalence relation $\mathcal{R}$ in $\widehat{A}$ and endow the quotient $\widehat{A}/\mathcal{R}$ with the quotient topology, by means of the projection $\pi_{\mathcal{R}}:\widehat{A}\to \widehat{A}/\mathcal{R}$. Then the mapping $[H]\equiv \pi_{\mathcal{R}}\circ H:\widehat{(YM)}\to \widehat{A}/\mathcal{R}$ is continuous. Let us emphasize that $\widehat{A}/\mathcal{R}=\{[\mathfrak{b}],[\mathfrak{f}],[\mathfrak{n}]\}$ is a discrete topological space, since $\mathfrak{b}$, $\mathfrak{f}$ and $\mathfrak{n}$ are disjoint subsets of $\widehat{A}$. Therefore $[\mathfrak{b}]$ and $[\mathfrak{f}]$ are open subsets of $\widehat{A}/\mathcal{R}$. As a by-product we get that $\widehat{(Boson)}\equiv[H]^{-1}([\mathfrak{b}])\subset \widehat{(YM)}$ and
$\widehat{(Fermion)}\equiv[H]^{-1}([\mathfrak{f}])\subset \widehat{(YM)}$ are open PDEs contained in $\widehat{(YM)}$, hence they retain the formal geometric properties of the quantum super Yang-Mills PDE.
\end{proof}
\begin{lemma}[Supersymmetric partners of particles]\label{supersymmetric-partners-particles}
Cauchy data $N_0\subset \widehat{(Boson)}$, (resp. $N_1\subset \widehat{(Fermion)}$), are called {\em bosonic particles}, (resp. {\em fermionic particles}). All the bosonic particles (resp. fermionic particles), belonging to the same integral bordism class in $\widehat{(Boson)}$, (resp. $\widehat{(Fermion)}$), are considered {\em dynamically equivalent}.
If there exists a quantum nonlinear bordism $V$ of $\widehat{(YM)}$, such that $\partial V=N_0\sqcup P\sqcup N_1$, where $N_0$ is a bosonic particle and $N_1$ is a fermionic particle, and $P$ is an integral quantum supermanifold, such that $\partial P=\partial N_0\sqcup\partial N_1$, then we say that $(N_0,N_1)$ is a {\em couple of supersymmetric partners}.
The following propositions are equivalent.
{\em(i)} $(N_0,N_1)$ is a {\em couple of supersymmetric partners}.
{\em(ii)} $<\alpha,N_0>+<\alpha,N_1> =-<\alpha,P>$, for any quantum conservation law $\alpha$ of $\widehat{(YM)}$.\footnote{In the following we shall denote by $(N,\widetilde{N})$ a couple of supersymmetric partners.}
\end{lemma}
\begin{proof}
The equivalence of propositions (i) and (ii) follows from Theorem 3.6 in part I and Theorem 4.10 in \cite{PRAS19}(II).
\end{proof}
\begin{lemma}[Supersymmetric partners nonlinear quantum propagators]\label{supersymmetric-partners-quantum-nonlinear-propagators}
Let $N_0$ and $N_1$ be respectively initial and final Cauchy data in a quantum reaction in $\widehat{(YM)}$, such that $N_0\equiv b_1\sqcup\cdots\sqcup b_r\sqcup f_1\sqcup\cdots\sqcup f_s$ and $N_1\equiv \bar b_1\sqcup\cdots\sqcup \bar b_{\bar r}\sqcup \bar f_1\sqcup\cdots\sqcup\bar f_{\bar s}$, with $b_i$, $i=1,\cdots,r$, and $\bar b_{\bar i}$, $\bar i =1,\cdots,\bar r$, bosonic, and $f_j$, $j=1,\cdots,s$, $\bar f_{\bar j}$, $\bar j =1,\cdots,\bar s$, fermionic. Let $V$ be a nonlinear quantum propagator of $\widehat{(YM)}$ such that $\partial V=N_0\sqcup P\sqcup N_1$, and $\partial P=\partial N_0\sqcup \partial N_1$. Then there exists a nonlinear quantum propagator $\widetilde{V}$ of $\widehat{(YM)}$ such that $\partial \widetilde{V}=\widetilde{N_0}\sqcup \widetilde{P}\sqcup \widetilde{N_1}$, and $\partial \widetilde{P}=\partial \widetilde{N_0}\sqcup \partial\widetilde{N_1}$, where $(N_0,\widetilde{N_0})$ and $(N_1,\widetilde{N_1})$ are couples of supersymmetric partners. We call $\widetilde{V}$ (resp. $\widetilde{P}$) the {\em quantum supersymmetric partner} of $V$ (resp. $P$). Therefore to any quantum reaction there corresponds a {\em quantum supersymmetric partner reaction}.
\end{lemma}
\begin{proof}
This follows from Lemma \ref{supersymmetric-partners-particles} and from the commutative diagram (\ref{commutative-diagram-supersymmetric-partners-particles}), holding for any quantum conservation law $\alpha$ of $\widehat{(YM)}$.
\begin{equation}\label{commutative-diagram-supersymmetric-partners-particles}
\xymatrix{<\alpha,N_0>+<\alpha,N_1>\ar@{=}[d]\ar@{=}[r]&-<\alpha,P>\ar@{=}[d]\\
<\alpha,\widetilde{N_0}>+<\alpha,\widetilde{N_1}>\ar@{=}[r]&-<\alpha,\widetilde{P}>\\}
\end{equation}
The relation between a propagator $V$ and its supersymmetric partner $\widetilde{V}$ is represented in Fig. \ref{relation-propagator-supersymmetric-partner}: $\widetilde{V}\thickapprox X_0\bigcup Y_0\bigcup V\bigcup X_1\bigcup Y_1$, such that $\partial X_i=\widetilde{B_i}\bigcup Q_i\bigcup B_i$, $\partial Y_i=\widetilde{F_i}\bigcup R_i\bigcup F_i$, $i=0,\,1$, and $\partial V=N_0\bigcup P\bigcup N_1$, with $N_0=(B_0\sqcup F_0)$ and $N_1=(B_1\sqcup F_1)$. Here $B_i$ and $F_i$ are respectively bosonic and fermionic Cauchy data, and $\partial P=(\partial B_0\sqcup \partial F_0)\bigcup (\partial B_1\sqcup \partial F_1)$.
\end{proof}
Now, from Lemma \ref{supersymmetric-partners-quantum-nonlinear-propagators} applied to the nonlinear quantum propagator (\ref{quantum-non-linear-propagator-and-symmetric-one-for-majorana-neutrino}), we see that its supersymmetric partner $\widetilde{V}$ represents the reaction $\widetilde{W}^++\widetilde{W}^-\to \widetilde{Z}^0+ \widetilde{Z}^0$. Here $\widetilde{Z}^0$ is a massive, neutral, fermionic particle that is its own antiparticle. Thus $\widetilde{Z}^0$ can be identified with a Majorana neutrino.\footnote{In the particle-zoo this is usually called the {\em zino}.} Therefore, we can conclude that Majorana neutrinos can be identified with possible products of quantum reactions in $\widehat{(YM)}$, and the proof of the first part of the theorem is done. In order to prove (i), let us emphasize that the nonlinear quantum propagator $\widetilde{V}$ also identifies a complex quasi-particle, that we call the {\em quantum Majorana-neutralino}, $\nu_{Maj-neutralino}\equiv(g,h,i,l,m,n)$. Here $i$ and $l$ are the supersymmetric partners of the massive neutrinos $\nu_m$, $\bar\nu_m$ identified in Theorem \ref{existence-of-quantum-massive-neutrino}, and $m$ and $n$ are the supersymmetric partners of Higgs quasi-particles (the {\em Higgsinos}), according to Theorem \ref{quantum-massive-photons-theorem} and Theorem \ref{a-quantum-massive-photons}. In other words, the quantum Majorana-neutralino is a quasi {\em neutralino}. The latter has two photinos (the supersymmetric partners of the photons) instead of the couple $(\widetilde{\nu_e},\widetilde{\bar\nu_e})$. So the name appears justified!
In order to complete the proof, i.e., to prove proposition (ii), let us remark that $\partial\widetilde{V}$ belongs to the same singular integral bordism class as a quantum exotic homotopy $(3|3)$-supersphere, $\hat \Sigma^{3|3}$, since $\Omega_{3|3,s}^{\widehat{(YM)}}=0$. However, $\widetilde{V}$ is not homeomorphic to $\hat D^{4|4}$, but to a $(4|4)$-superdisk with two attached super-handles. This means that the corresponding observed solution, by means of a quantum relativistic frame $i:N\to M$, is necessarily a singular solution of $\widehat{(YM)}[i]$.\footnote{Let us emphasize that the nonlinear quantum propagators $V$ and $\widetilde{V}$ considered in this proof are not elementary ones in the sense of Theorem \ref{quantum-scattering-processes-in-quantum-yang-mills-d-4-quantum-super-minkowski}. In fact $V$ and $\widetilde{V}$ are not homeomorphic to $\hat D^{4|4}$.}
\end{proof}
\begin{figure}
\caption{Nonlinear quantum propagator representing a reaction generating Majorana neutrinos, identified with supersymmetric partners of the vector boson $Z^0$. In the figure one has put: $a=\widetilde{W^+}$, etc.}
\label{figure-majorana-neutrino-reaction}
\end{figure}
\end{document}
\begin{document}
\title{\Large Lower Bounds for Number-in-Hand Multiparty Communication Complexity, Made Easy\thanks{A preliminary version of the paper was presented at the ACM-SIAM Symposium on Discrete Algorithms, January 2012} }
\author{Jeff M. Phillips~\thanks{Jeff M. Phillips's work on this paper was/is supported by a subaward to the University of Utah under NSF award 0937060 to the Computing Research Association and NSF CCF-1350888, IIS-1251019, and ACI-1443046.} \\ School of Computing \\ University of Utah \\ [email protected] \and Elad Verbin~\thanks{Elad Verbin acknowledges support from the Danish National Research Foundation and the National Science Foundation of China (under the grant 61061130540) for the Sino-Danish Center for the Theory of Interactive
Computation, within which part of this work was performed. } \\ [email protected] \and Qin Zhang~\thanks{Corresponding author. Most of the work was done while Qin Zhang was a postdoc in MADALGO, Aarhus University.} \\ Indiana University Bloomington \\ [email protected]}
\date{}
\maketitle
\begin{abstract}
In this paper we prove lower bounds on randomized multiparty communication complexity, both in the \emph{blackboard model} (where each message is written on a blackboard for all players to see) and (mainly) in the \emph{message-passing model}, where messages are sent player-to-player. We introduce a new technique for proving such bounds, called \emph{symmetrization}, which is natural, intuitive, and often easy to use.
For example, for the problem where each of $k$ players gets a bit-vector of length $n$, and the goal is to compute the coordinate-wise XOR of these vectors, we prove a tight lower bound of $\Omega(nk)$ in the blackboard model. For the same problem with AND instead of XOR, we prove a lower bound of roughly $\Omega(nk)$ in the message-passing model (assuming $k \le n/3200$) and $\Omega(n \log k)$ in the blackboard model. We also prove lower bounds for bit-wise majority, for a graph-connectivity problem, and for other problems; the technique seems applicable to a wide range of other problems as well. All of our lower bounds allow randomized communication protocols with two-sided error.
We also use the symmetrization technique to prove several direct-sum-like results for multiparty communication.
\end{abstract}
\section{Introduction}
\label{sec:intro}
In this work we consider multiparty communication complexity in the
\emph{number-in-hand} model. In this model, there are $k$ players $\{p_1, \ldots, p_k\}$, each with his own $n$-bit input $x_i \in \{0,1\}^n$. The players wish to collaborate in order to compute a joint function of their inputs, $f(x_1, \ldots, x_k)$. To do so, they are allowed to communicate, until one of them figures out the value of $f(x_1, \ldots, x_k)$ and returns it. All players are assumed to have unlimited computational power, so all we care about is the amount of communication used. There are three variants to this model, according to the mode of communication:
\begin{enumerate}
\item the \emph{blackboard model}, where any message sent by a player is written on a blackboard visible to all players;
\item the \emph{message-passing model}, where a player $p_i$ sending a message specifies another player $p_j$ that will receive this message;
\item the \emph{coordinator model}, where there is an additional $(k+1)$-th player called the \emph{coordinator}, who receives no input. Players can only communicate with the coordinator, and not with each other directly.
\end{enumerate}
We will work in all of these, but will mostly concentrate on the message-passing model and the coordinator model. Note that the coordinator model is almost equivalent to the message-passing model, up to a $\log k$ multiplicative factor, since instead of player $i$ sending message $x$ to player $j$, player $i$ can transmit message $(j,x)$ to the coordinator, and the coordinator forwards it to player $j$.
Lower bounds in the three models above are useful for proving lower bounds on the space usage of streaming algorithms, and for other models as well, as we explain in Section \ref{subsec:motivation}. Most previous lower bounds have been proved in the blackboard model, but lower bounds in the message-passing model and the coordinator model can potentially give higher bounds for all the applications.
Note that another, entirely different, model for multiparty communication is the \emph{number-on-forehead} model, where each player can see the inputs of all other players but \emph{not} his own input. This model has important applications for circuit complexity (see e.g.~\cite{KN97}). We do not discuss this model in this paper.
We allow all protocols to be randomized, with public coins, i.e.\ all players have unlimited access to a common infinite string of independent random bits. We allow the protocol to return the wrong answer with probability $\varepsilon$ (which should usually be thought of as a small constant); here, the probability is taken over the sample space of public randomness. Note that the public coin model might seem overly powerful, but in this paper we are mainly interested in proving lower bounds rather than upper bounds, so giving the model such strength only makes our results stronger.
For more on communication complexity, see the book of Kushilevitz and Nisan~\cite{KN97}, and the references therein. We give some more definitions in the preliminaries, in Section \ref{sec:pre}.
\subsection{Warm-Up}
We begin by sketching two lower bounds obtained using our symmetrization technique, both of them for the coordinate-wise $k$-\textsf{XOR}\xspace problem. These lower bounds can be proved without using symmetrization, but their proofs that use symmetrization are particularly appealing.
First consider the following problem: Each player gets a bitvector $x_i \in \{0,1\}^n$ and the goal is to compute the coordinate-wise XOR of these vectors. We operate in the \emph{blackboard model}, where messages are posted on a blackboard for all to see.
\begin{theorem} \label{thm:XOR_bb_intro}
The coordinate-wise $k$-\textsf{XOR}\xspace problem requires communication $\Omega(nk)$ in the blackboard model.
\end{theorem}
To see this, first let us specify the \emph{hard distribution}: we prove the lower bound when the input is drawn from this distribution, and by the easy direction of Yao's Minimax Lemma (see e.g.\ \cite{KN97}), it follows that this lower bound applies for the problem as a whole. The hard distribution we choose is just the distribution where the inputs are independently drawn from the uniform distribution.
To prove the lower bound, consider a protocol $P$ for this $k$-player problem, which works on this distribution, communicates $C(P)$ bits in expectation, and suppose for now that it never makes any errors (it will be easy to remove this assumption). We build from $P$ a new protocol $P'$ for a $2$-player problem. In the $2$-player problem, suppose that Alice gets input $x$ and Bob gets input $y$, where $x,y \in \{0,1\}^n$ are independent random bitvectors. Then $P'$ works as follows: Alice and Bob randomly choose two distinct indices $i,j \in \{1,\ldots,k\}$ using the public randomness, and they simulate the protocol $P$, where Alice plays player $i$ and lets $x_i=x$, Bob plays player $j$ and lets $x_j=y$, and they both play all of the rest of the players; the inputs of the rest of the players are chosen from shared randomness. Alice and Bob begin simulating the running of $P$. Every time player $i$ should speak, Alice sends to Bob the message that player $i$ was supposed to write on the board, and vice versa. When any other player $p_r$ ($r \neq i,j$) should speak, both Alice and Bob know his input so they know what he should be writing on the board, thus no communication is actually needed (this is the key point of the symmetrization technique). A key observation is that the inputs of the $k$ players are uniform and independent and thus entirely symmetrical,\footnote{Here and throughout the paper, \emph{symmetric} means that the inputs are drawn from a distribution where renaming the players does not change the distribution. Namely, a distribution $D$ over $X^n$ is called \emph{symmetric} if exchanging any two coordinates in $D$ keeps the distribution the same.} and since the indices $i$ and $j$ were chosen uniformly at random, the expected communication performed by the protocol $P'$ is $\mathbf{E} [C(P')] = 2C(P)/k$. Furthermore, when the players have finished simulating $P$, Alice now knows the bit-wise XOR of all the vectors $x_1,\ldots,x_k$, from which she can easily reconstruct the vector $x_j$ (since she already knows all the other vectors $x_r$ for $r \neq j$). It follows that using $2C(P)/k$ expected communication, Alice has managed to reconstruct Bob's entire input; from an easy information-theoretic argument it follows that $2C(P)/k \ge n$, so $C(P) \ge \Omega(nk)$, proving the theorem.
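The accounting above is easy to check mechanically. The following short Python sketch (ours and purely illustrative; the naive protocol and all parameter values are our own choices, not part of the argument) simulates the reduction for a blackboard protocol in which every player simply posts his whole vector, and verifies that only a $2/k$ fraction of the communication crosses between Alice and Bob and that Alice recovers $x_j=y$ at the end.
\begin{verbatim}
import random

def naive_xor_protocol(inputs):
    """Naive blackboard protocol: every player posts his whole n-bit vector.
    Returns the transcript as a list of (player, message) pairs."""
    return [(r, vec[:]) for r, vec in enumerate(inputs)]

def simulate_reduction(n=64, k=16, seed=0):
    rng = random.Random(seed)
    # Alice's and Bob's 2-player inputs: independent uniform n-bit vectors.
    x = [rng.randint(0, 1) for _ in range(n)]
    y = [rng.randint(0, 1) for _ in range(n)]
    # Public randomness: the two player indices and the other k-2 inputs.
    i, j = rng.sample(range(k), 2)
    inputs = [[rng.randint(0, 1) for _ in range(n)] for _ in range(k)]
    inputs[i], inputs[j] = x, y      # Alice plays player i, Bob plays player j
    transcript = naive_xor_protocol(inputs)
    # Only the messages of players i and j cross the Alice/Bob "wire".
    cost_P = sum(len(m) for _, m in transcript)                 # C(P) = n*k bits
    cost_Pprime = sum(len(m) for r, m in transcript if r in (i, j))
    # Alice reconstructs x_j = y from the XOR of all vectors and the k-1
    # vectors she already knows (everything except player j's input).
    xor_all = [sum(inputs[r][c] for r in range(k)) % 2 for c in range(n)]
    recovered = [(xor_all[c] + sum(inputs[r][c] for r in range(k) if r != j)) % 2
                 for c in range(n)]
    assert recovered == y
    return cost_P, cost_Pprime

print(simulate_reduction())   # (1024, 128): C(P) = nk and 2C(P)/k = 2n
\end{verbatim}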
Extending the above argument to cover the case where the protocol $P$ is allowed to return the wrong answer with probability $\varepsilon$ is also easy, simply by showing that if Alice managed to learn Bob's entire bit-vector with probability $1-\varepsilon$, then $(1-\varepsilon) \cdot n$ bits must have been communicated: this also follows easily from information-theoretic arguments.
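For completeness, one standard way to carry out this step is via Fano's inequality (the following chain is our paraphrase and only a sketch): if Alice outputs a guess $\hat{x}_j$ with $\Pr[\hat{x}_j \neq x_j] \le \varepsilon$, then $H(x_j \mid \mbox{Alice's view}) \le 1 + \varepsilon n$, and since $x_j$ is uniform on $\{0,1\}^n$ and independent of everything Alice holds before the protocol starts,
\[
(1-\varepsilon)\,n - 1 \;\le\; I\bigl(x_j ; \Pi \mid \mbox{Alice's inputs}\bigr) \;\le\; H(\Pi),
\]
where $\Pi$ denotes the messages Alice receives; and $H(\Pi)$ is at most the expected length of $\Pi$ up to lower-order terms, i.e.\ essentially $2C(P)/k$. Hence $C(P) = \Omega((1-\varepsilon)nk)$.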
Note the crucial fact that the hard distribution we chose is symmetric: if the distribution of the inputs to the $k$ players was not symmetric, then the protocol $P$ could try to deduce the indices $i$ and $j$ from some statistical properties of the observed inputs, and act according to that. Then the best upper bound we could get on the communication complexity of $P'$ would be $C(P') \le C(P)$, which is much too weak.
We have just described the version of the symmetrization method for the blackboard model. A similar (slightly more complicated) line of thinking leads to a lower bound of $\Omega(n \log k)$ on the complexity of the coordinate-wise AND problem in the blackboard model; see Section \ref{sec:AND}.
Let us now sketch the symmetrization method as it is used in the \emph{coordinator model}, where players can only send and receive messages to and from the coordinator. We prove a lower bound on the same problem as above, the coordinate-wise $k$-\textsf{XOR}\xspace problem.
\begin{theorem}
The coordinate-wise $k$-\textsf{XOR}\xspace problem requires communication $\Omega(nk)$ in the coordinator model.
\end{theorem}
Note that this theorem actually follows from Theorem \ref{thm:XOR_bb_intro}, since the blackboard model is stronger than the coordinator model. However, the proof we sketch here is useful as a warmup exercise, since it shows an easy example of the symmetrization technique as applied to the coordinator model. In the paper we prove multiple lower bounds using this technique, most of which do not follow from corresponding bounds in the blackboard model.
To prove this theorem, we use the same hard distribution as above: all inputs are uniform and independent. Let $P$ be a protocol in the coordinator model, which computes the coordinate-wise XOR, uses communication $C(P)$ and for now assume that $P$ never makes any errors. As before, we build a new protocol $P'$ for a 2-player problem. In the $2$-player problem, suppose that Alice gets input $x$ and Bob gets input $y$, where $x,y \in \{0,1\}^n$ are independent random bitvectors. Then $P'$ works as follows: Alice and Bob choose a single index $i \in \{1,\ldots,k\}$ from public randomness. Alice and Bob simulate the protocol $P$, where Alice simulates player $i$ and lets $x_i=x$, and Bob plays \emph{all the rest of the players}, including the coordinator, and chooses their inputs uniformly, conditioned on their XOR being equal to $y$.
To simulate the protocol $P$, whenever player $i$ needs to send a message to the coordinator, Alice sends a message to Bob, and whenever the coordinator needs to send a message to player $i$, Bob sends a message to Alice. Whenever any player $j \neq i$ needs to speak to the coordinator, no communication is needed, since both are played by Bob. Note again that the inputs of the $k$ players are independent and uniform (for this it is crucial to remember that $x$ and $y$ were uniform and independent in the first place). Once again, by symmetry, since the index $i$ was chosen uniformly and the inputs are symmetric, the expected communication performed by the protocol $P'$ is $\mathbf{E}[ C(P')] \le 2C(P)/k$. Furthermore, at the end of the run of $P'$, Alice knows the value of $x \oplus y$ so she can reconstruct the value of $y$. As before, this implies the theorem. The assumption that we never make errors can once again be easily removed.
\subsubsection{Discussion}
We see that the crux of the symmetrization technique in the coordinator model is to consider the $k$-player problem that we wish to lower-bound, to find a symmetric distribution which is hard for it, to give Alice the input of one player (chosen at random) and Bob the inputs of all other players, and to prove a lower bound for this two-player problem. If the lower bound for the two-player problem is $L$, the lower bound for the $k$-player problem will be $kL$. For the blackboard model, the proofs have the same outline, except in the $2$-player problem Alice gets the input of one randomly chosen player, Bob gets the input of another, and they both get the inputs of all the rest of the players. There is one important thing to note here: This argument only works when the hard distribution is symmetric.
\subsection{Motivation, Previous Work and Related Models}
\label{subsec:motivation}
Communication complexity is a widely studied topic. In multiplayer number-in-hand communication complexity, the most studied mode of communication is the blackboard model. The message-passing model was already considered in \cite{DR98}. (This model can also be called the \emph{private-message model}, but note that this name was used in \cite{GH09,GG07} for a different model.)
The coordinator model can be thought of as a server-site setting~\footnote{This terminology is similar to the standard ``client-server'', and is used extensively in the literature.}, where
there is one server and $k$ sites. Each site has gathered $n$ bits of
information and the server wants to evaluate a function on the collection
of these $k \cdot n$ bits. Each site can only communicate with the server,
and the server can communicate with any site. This server-site model has been widely studied in the database and distributed computing communities. Work includes computing top-$k$ \cite{CW04,MTW05,SS08} and heavy hitters
\cite{ZOWX06,HYLC11}.
Another closely related model is the {\em distributed monitoring model},
in which we also have one server and $k$ sites. The only difference is that now the computation is dynamic. That is, each site receives a stream of elements over time and the server would like to maintain continuously at all times some function $f$ of all the elements in the $k$ sites. Thus the server-site model can be seen as a one-shot version of the distributed streaming setting. It follows that any communication complexity lower bound in the message-passing model or the coordinator model also holds in the distributed monitoring model. A lot of work on distributed monitoring has been done recently in the theory community and the database community, including maintaining random samplings
\cite{CMYZ10}, frequency moments \cite{CG05,CMY08}, heavy hitters \cite{BO03,KCR06,MSDO05,YZ09,HYZ12},
quantiles \cite{CGMR05,YZ09}, entropy \cite{ABC09}, various sketches \cite{CMZ06,CGMR05} and some
non-linear functions \cite{SSK06,SSK08}.
We will come back to the latter two models in Section \ref{sec:application}. It is interesting to note that despite the large number of upper bounds (i.e.\ algorithms, communication protocols) in the above models,
very few lower bounds have been proved in any of those models,
likely because there were few known techniques to prove such results.
A further application of the message-passing model could be for Secure Multiparty Computation: in this model, there are several players who do not trust each other, but want to compute a joint function of their inputs, with
each of them learning nothing about the inputs of the other players except
what can be learned from the value of the joint function. Obviously, any lower bound in the message passing model immediately implies a lower bound on the amount of communication required for Secure Multiparty Computation. For more on this model, see e.g. \cite{MPC_survey}.
One final application is for the streaming model~\cite{AMS99}. In this model, there is a long stream of data that can only be scanned from left to right. The goal is to compute some function of the stream, and minimize the space usage. It is easy to see that if we partition the stream into $k$ parts and give each part to a different player, then a lower bound of $L$ on the communication complexity of the problem in the coordinator model implies a lower bound of
$L/k$ on the space usage. When $t$ passes over the stream are allowed, a lower bound of $L$ in the coordinator model translates to a lower bound of $L/tk$ in the streaming model.
\subsection{Our Results and Paper Outline}
Our main technical results in this paper are lower bounds of $\Omega(nk)$ on the randomized communication for the bitwise $k$-party AND, OR, and MAJ (majority) functions in the coordinator model. These sidestep clever upper bound techniques (e.g.\ Slepian-Wolf coding) and can be found in Section \ref{sec:bitwise}. In the same section we prove some lower bounds for AND and OR in the blackboard model as well. Back in the coordinator model, we show that the connectivity problem (given $k$ players with subgraphs on a common set of nodes, determine whether the union of their edges is connected) requires $\Omega(nk / \log^2 k)$ communication. This is in Section~\ref{sec:CONN}.
The coordinate-wise lower bounds imply lower bounds for the well-studied problems of distinct elements, $\varepsilon$-approximate heavy-hitters, and $\varepsilon$-kernels in the server-site model (or the other related models). We show that any randomized algorithm requires $\Omega(nk)$, $\Omega(n/\varepsilon)$, and $\Omega(k/\varepsilon^{(d-1)/2})$ communication, respectively. The latter is shown to be tight. This is in Section~\ref{sec:application}.
We give some direct-sum-like results in Section \ref{sec:directsum}.
\subsection{Subsequent Work}
A series of work has been done after the conference version of this paper. Woodruff and Zhang~\cite{WZ12,WZ14} combined the symmetrization technique and a new technique called {\em composition} to show strong lower bounds for approximately computing a number of statistical problems, including distinct elements and frequency moments, in the coordinator model. The same authors also used symmetrization to prove tight lower bounds for the exact computation of a number of graph and statistical problems \cite{WZ13}, and shaved a $\log k$ factor for the connectivity problem studied in this paper. The other $\log k$ factor in the connectivity lower bound can be further shaved using a relaxation of symmetrization proposed in \cite{WZ14}.
As will be discussed in Section~\ref{sec:limitations}, due to a limitation of the symmetrization technique, it cannot be used to prove a tight lower bound for the $k$-player disjointness problem. This was listed as an open problem in the conference version of this paper, and was later settled by Braverman et al.~\cite{BEOPV13}, using a different technique based on information complexity. Recently Huang et al.~\cite{HRVZ13} applied symmetrization together with information complexity to prove tight lower bounds for approximate maximum matchings; Li et al.~\cite{LSWW14} used the symmetrization technique to prove lower bounds for numerical linear algebra problems.
In another recent work, Chattopadhyay et al.~\cite{CRR14} further extended the symmetrization technique to general communication topology compared with the coordinator model which essentially has a star communication topology.
\section{Preliminaries}
\label{sec:pre}
In this section we review some basic concepts and definitions. We denote $[n]=\{1,\ldots,n\}$. All logarithms are base-2 unless noted otherwise.
\paragraph{Communication complexity.}
Consider two players Alice and Bob, given bit vectors $A$ and $B$,
respectively. Communication complexity (see for example the book
\cite{KN97}) bounds the communication between Alice and Bob that is needed to compute
some function $f(A,B)$. The \emph{communication complexity} of a particular protocol is the maximum number of bits that it communicates, taken in the worst case over all pairs of inputs. The \emph{communication complexity} of the problem $f$ is the best communication complexity of $P$, taken over all protocols $P$ that correctly compute $f$.
Certain functions (such as $f = \textsf{EQ}$, which determines if $A$ equals $B$) can be computed with less communication if randomness is permitted. Let $R^\varepsilon(f)$ denote the communication complexity when the protocol is allowed to make a mistake with probability $\varepsilon$. The error is taken over the randomness used by the protocol.
Sometimes we are interested in the case where the input of Alice and Bob is drawn from some distribution $\mu$ over pairs of inputs. We want to allow an error $\varepsilon$, this time taken over the choice of the input. The worst-case communication complexity in this case is denoted by $D_\mu^\varepsilon(f)$. Yao~\cite{Yao77} showed that $R^\varepsilon(f) \ge \max_{\mu} D_\mu^\varepsilon(f)$, thus in order to prove a lower bound for randomized protocols it suffices to find a hard distribution and prove a distributional lower bound for it. This is called the Yao Minimax Principle.
In this paper we use an uncommon notion of \emph{expected distributional communication complexity}. In this case we consider the distributional setting as in the last paragraph, but this time consider the expected cost of the protocol, rather than the worst-case cost; again, the expectation is taken over the choice of input. We denote this $\mathbf{E}D_\mu^\varepsilon(f)$.
\subsection{Two-Party Lower Bounds}
We state a couple of simple two-party lower bounds that will be useful in our reductions.
\paragraph{2-\textsf{BITS}\xspace.}
\label{sec:2-BITS}
Let $\zeta_\rho$ be a distribution over bit-vectors of length $n$, where each bit is $1$ with probability $\rho$ and $0$ with probability $1-\rho$. In this problem Alice gets a vector drawn from $\zeta_\rho$, Bob gets a subset $S$ of $[n]$ of cardinality $\abs{S}=m$, and Bob wishes to learn the bits of Alice indexed by $S$.
The proof of the following lemma is in Appendix \ref{sec:2-BITS-proof}.
\begin{lemma}
\label{lem:2-BITS}
$\mathbf{E}D_{\zeta_\rho}^{1/3}(\textrm{2-\textsf{BITS}\xspace}) = \Omega(n \rho \log(1/\rho))$.
\end{lemma}
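For intuition (this is only a heuristic reading, not the proof given in Appendix \ref{sec:2-BITS-proof}): when $m = \Theta(n)$, Bob must learn a constant fraction of a vector whose coordinates are i.i.d.\ Bernoulli($\rho$), and each such coordinate carries binary entropy $H_2(\rho)=\rho\log\frac{1}{\rho}+(1-\rho)\log\frac{1}{1-\rho}=\Theta(\rho\log(1/\rho))$ for $\rho\le 1/2$; since Alice does not know which coordinates Bob needs, she cannot restrict her answers to those coordinates, which is where the $\Omega(n\rho\log(1/\rho))$ bound comes from.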
\paragraph{2-\textsf{DISJ}\xspace.}
\label{sec:2-DISJ}
In this problem Alice and Bob each have an $n$-bit vector. If we view vectors as sets, then each of them has a subset of $[n]$ corresponding to the $1$ bits. Let $x$ be the set of Alice and $y$ be the set of Bob. It is promised that $\abs{x \cap y} =1 \mbox{ or } 0$. The goal is to return $1$ if $x \cap y \neq \emptyset$, and $0$ otherwise.
We define the input distribution $\mu$ as follows. Let $l = (n+1)/4$. With probability $1/t$, $x$ and $y$ are random subsets of $[n]$ such that $\abs{x} = \abs{y} = l$ and $\abs{x \cap y} = 1$. And with probability $1 - 1/t$, $x$ and $y$ are random subsets of $[n]$ such that $\abs{x} = \abs{y} = l$ and $x \cap y = \emptyset$. Razborov~\cite{Raz90} (see also
\cite{KS92}) proved that for $t=4$, $D^{1/100t}_{\mu}(\mbox{2-\textsf{DISJ}\xspace}) = \Omega(n)$. In the following lemma we extend this result to general $t$ and also to the expected communication complexity. In Section \ref{sec:AND} we only need $t=4$, and in Section~\ref{sec:CONN} we will need general $t$.
The proof for the following lemma is in Appendix \ref{sec:2-DISJ-proof}.
\begin{lemma}
\label{lem:2-DISJ}
When $\mu$ has $|x \cap y| = 1$ with probability $1/t$ then
$\mathbf{E}D^{1/100t}_{\mu}(\mbox{2-\textsf{DISJ}\xspace}) = \Omega(n)$.
\end{lemma}
\section{Bit-wise Problems}
\label{sec:bitwise}
\subsection{Multiparty AND/OR}
\label{sec:AND}
We now consider multiparty AND/OR (below we use $k$-\textsf{AND}\xspace
and $k$-\textsf{OR}\xspace for short). In the $k$-\textsf{AND}\xspace problem, each player $i \ (1 \le i
\le k)$ has an $n$-bit vector $I_i$ and we want to establish the bitwise \textsf{AND}\xspace of the $I_i$, that is, $f_j(I_1, \ldots, I_k) = \bigwedge_i I_{i,j}$ for $j \in
\{1,\ldots,n\}$. $k$-\textsf{OR}\xspace is similarly defined with OR. Observe that the two
problems are isomorphic by $f_j(I_1, \ldots, I_k) =
\neg g_j(\bar{I_1}, \ldots, \bar{I_k})$ for $j \in \{1, \ldots, n\}$, where
$\bar{I_i}$ is obtained by flipping all bits of $I_i$ and $g_j(\bar{I_1}, \ldots, \bar{I_k}) = \bigvee_i \bar{I}_{i,j}$. Therefore we only
need to consider one of them. Here we discuss $k$-\textsf{OR}\xspace.
\subsubsection{Idea for the $k$-OR Lower Bound in the Coordinator Model}
\label{sec:AND:idea}
We now discuss the hard distribution and sketch how to apply the symmetrization technique for the $k$-OR problem in the coordinator model. The formal proof can be found in the next subsection.
We in fact start by describing two candidate hard distributions that \emph{do not} work. The reasons they do not work are interesting in themselves. Throughout this subsection, assume for simplicity that $k \ge 100 \log n$.
The most natural candidate for a hard distribution is to make each entry equal to $1$ with probability $1/k$. This has the effect of having each bit in the output vector be roughly balanced, which seems suitable for being a hard case. This is indeed the hard distribution for the blackboard model, but for the coordinator model (or the message-passing model) it is an easy distribution: Each player can send his entire input to the coordinator, and the coordinator can figure out the answer. The entropy of each player's input is only $\Theta((n \log k) / k)$, so the total communication would be $\Theta(n \log k)$ in expectation using e.g.\ Shannon's coding theorem;\footnote{To show the upper bounds in this subsection we use some notions from information theory without giving complete background for them. The reader can refer to e.g.~\cite{ThomasCover}, or alternatively can skip them entirely, as they are inconsequential for the remainder of the paper and for understanding the symmetrization technique.} this is much smaller than the lower bound we wish to prove. Clearly, we must choose a distribution where each player's input has entropy $\Omega(n)$. This is the first indication that the $k$-player problem is significantly different from the $2$-player problem, where the above distribution is indeed the hard distribution.
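For the record, the entropy estimate used here is the standard computation (in bits):
\[
H_2(1/k) \;=\; \frac{1}{k}\log k + \Bigl(1-\frac{1}{k}\Bigr)\log\frac{1}{1-1/k} \;=\; \frac{\log k + \log e}{k} + O(1/k^2) \;=\; \Theta\!\Bigl(\frac{\log k}{k}\Bigr),
\]
so each player's $n$-bit input has entropy $n\,H_2(1/k) = \Theta((n\log k)/k)$, and all $k$ inputs together can be described with $\Theta(n\log k)$ bits in expectation.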
The next candidate hard distribution is to randomly partition the $n$ coordinates into two equal-sized sets: The \emph{important set}, where each entry is equal to $1$ with probability $1/k$, and the \emph{balancing set}, where all entries are equal to $1$. Now the entropy of each player's input is $\Theta(n)$, and the distribution seems like a good candidate, but there is a surprising upper bound for this distribution: the coordinator asks $100\log n$ players to send him their entire input, and from this can easily figure out which coordinates are in the balancing set and which are in the important set. Henceforth, the coordinator knows this information, and only needs to learn the players' values in the important set, which again have low entropy. We would want the players to send these values, but the players themselves do not know which coordinates are in the important set, and the coordinator would need to send $nk$ bits to tell all of them this information. However, they do not need to know this in order to get all the information across: using a protocol known as \emph{Slepian-Wolf coding} (see e.g.\ \cite{ThomasCover}) the players can transmit to the coordinator all of their values in the important coordinates, with only $n \log k$ total communication (and a small probability of error). The idea is roughly as follows: each player $p_i$ chooses $100 n \log k / k$ sets $S_{i,j} \subseteq [n]$ independently and uniformly at random from public randomness. For each $j$, the player XORs the bits of his input in the coordinates of $S_{i,j}$, and sends the value of this XOR to the coordinator. The coordinator already knows the balancing set, so he only has $\Theta(n \log k / k)$ bits of uncertainty about player $i$'s input, meaning he can reconstruct the player's input with high probability (say by exhaustive search). The upper bound follows.
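The random-parity encoding just described is easy to play with. The toy Python sketch below (ours; the parameters are arbitrary and the brute-force decoder is only feasible at this toy scale) has a single player send nothing but parity bits of public random subsets; the coordinator, who already knows the partition, recovers the player's input by exhaustive search over the important coordinates.
\begin{verbatim}
import itertools, random

def slepian_wolf_toy(n=16, n_parities=12, p_one=0.2, seed=1):
    rng = random.Random(seed)
    # Random partition into an "important" half and an all-ones "balancing" half.
    coords = list(range(n))
    rng.shuffle(coords)
    important, balancing = coords[:n // 2], coords[n // 2:]
    # One player's input: sparse on the important set, all ones on the balancing set.
    x = [0] * n
    for c in important:
        x[c] = 1 if rng.random() < p_one else 0
    for c in balancing:
        x[c] = 1
    # Public randomness: the parity-check sets S_1, ..., S_r, known to everyone.
    checks = [[c for c in range(n) if rng.random() < 0.5] for _ in range(n_parities)]
    # The player's entire message: one parity bit per set.
    message = [sum(x[c] for c in S) % 2 for S in checks]
    # Decoding: the coordinator already knows the partition (it learned it from a
    # few full inputs), so it only searches over the important coordinates.
    candidates = []
    for bits in itertools.product([0, 1], repeat=len(important)):
        y = [1] * n
        for c, b in zip(important, bits):
            y[c] = b
        if all(sum(y[c] for c in S) % 2 == m for S, m in zip(checks, message)):
            candidates.append(y)
    return x, message, candidates

x, msg, cands = slepian_wolf_toy()
print(len(msg), "parity bits sent;", len(cands), "consistent candidate(s)")
assert x in cands   # the true input is always consistent; w.h.p. it is unique
\end{verbatim}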
To get an actual hard distribution, we modify the hard distribution from the last paragraph. We randomly partition the $n$ coordinates into two equal-sized sets: The \emph{important set}, where each entry is equal to $1$ with probability $1/n$, and the \emph{noise set}, where each entry is equal to $1$ with probability $1/2$.\footnote{We could have chosen each entry in the important set to be equal to $1$ with probability $1/k$ as well, but choosing a value of $1/n$ makes the proofs easier. The important thing is to choose each of the noise bits to be equal to $1$ with probability $1/2$.} Clearly, each player's input has entropy $\Theta(n)$. Furthermore, the coordinator can again cheaply figure out which coordinates are in the important set and which are in the noise set, but the players do not know this information, and nothing like Slepian-Wolf coding exists to help them transmit the information to the coordinator. The distribution that we use in our formal proof is a little different than this, for technical reasons, but this distribution is hard as well.
We now sketch how to apply the symmetrization technique to prove that this distribution is indeed hard. To apply the symmetrization technique, we imagine giving Alice the input of one of the players, and Bob the inputs of all of the others. Bob plays the coordinator, so Bob needs to compute the output; the goal is to prove a lower bound of $\Omega(n)$ on the communication complexity between Alice and Bob. What can Bob deduce about the answer? He can immediately take the OR of all the vectors that he receives, getting a vector where with good probability all of the noise bits are equal to $1$. (Recall that we assumed that the number of players is $k \ge 100 \log n$.) Among the important bits, roughly one of Alice's bits is equal to $1$, and the goal is to discover which bit it is. Alice cannot know which coordinates are important, and in essence we are trying to solve a problem very similar to Set Disjointness. It is easy to believe that a lower bound of $\Omega(n)$ holds, since it is similar to the known lower bounds for set disjointness. Proving it is not entirely trivial, and requires making some small modifications to Razborov's classical lower bound on set disjointness~\cite{Raz90}.
We see that the lower bound for this distribution has to (implicitly) rule out a Slepian-Wolf type upper bound. This provides some evidence that any lower bound for the $k$-OR problem would have to be non-trivial.
\subsubsection{The Proof}
We prove the lower bound on $k$-\textsf{OR}\xspace by performing a reduction from the promise version of the two-party {\em set disjointness} problem ($2$-\textsf{DISJ}\xspace), which we lower-bounded in Lemma \ref{lem:2-DISJ}. Given an input $(x,y)$ for $2$-\textsf{DISJ}\xspace drawn from the hard distribution $\mu$, we construct an input for $k$-\textsf{OR}\xspace. Note that our mapping is not necessarily one-to-one, which is why we need a lower bound on the \emph{expected} distributional communication complexity of $2$-\textsf{DISJ}\xspace.
\paragraph{Reduction.}
We start with Alice's input set $x$ and Bob's input set $y$ from the distribution $\mu$, with $t = 4$. That is, both $x$ and $y$ have $l = n/4$ $1$ bits chosen at random under the condition that they intersect at one point with probability $1/4$, otherwise they intersect at no point.
Given the $2$-\textsf{DISJ}\xspace input $(x,y)$, we construct the $k$ players' input sets $I_1, \ldots, I_k$ as follows. Let $z = [n] - y$. Let $S_2^l, \ldots, S_k^l$ be random subsets of size $l$ from $z$, and $S_2^{l-1}, \ldots, S_k^{l-1}$ be random subsets of size $l-1$ from $z$. Let $S_2^1, \ldots, S_k^1$ be random elements from $y$.
\begin{eqnarray*}
\left\{
\begin{array}{l}
I_1 = x \\
I_{j} \ (j = 2, \ldots, k) = \left\{
\begin{array}{rl}
S_j^l & \textrm{w.p. } 1-1/4\\
S_j^{l-1} \cup S_j^{1} & \textrm{w.p. } 1/4
\end{array}
\right.
\end{array}
\right.
\end{eqnarray*}
Let $\mu'$ be this input distribution for
$k$-\textsf{OR}\xspace. If $I_j\ (2 \le j \le k)$ contains an element $S_j^1$, then we call this
element a special element.
This reduction can be interpreted as follows: Alice simulates a random
player $I_1$, and Bob, playing as the coordinator, simulates all the other $k-1$
players $I_2, \ldots, I_k$. Bob also keeps a set $V$ containing all the
special elements that ever appear in some $I_j\ (j = 2, \ldots, k)$. It is easy
to observe the following fact.
\begin{lemma}
\label{lem:AND-symmetric}
All $I_j\ (j = 1, \ldots, k)$ are chosen from the same distribution.
\end{lemma}
\begin{proof}
Since by definition $I_j\ (j = 2, \ldots, k)$ are chosen from the same distribution, we only need to show that $I_1 = x$ under $\mu$ is chosen from the same distribution as any $I_j$ is under $\mu'$. Given $y$, note that $x$ is a random set of size $l$ in $z = [n]-y$ with probability $3/4$; and with the remaining probability $1/4$, $x$ is the union of a random set of size $l-1$ in $z$ along with a single random element from $y$. This is precisely the distribution of each $I_j$ for $j\geq 2$.
\end{proof}
\noindent The following lemma shows the properties of our reduction.
\begin{lemma}
\label{lem:OR-DISJ}
If there exists a protocol $\cal P'$ for $k$-\textsf{OR}\xspace on input distribution
$\mu'$ with communication complexity $C$ and error bound $\varepsilon$, then
there exists a protocol $\cal P$ for the $2$-\textsf{DISJ}\xspace on input distribution
$\mu$ with expected communication complexity $O(C/k)$ and error bound $\varepsilon
+ 4k/n$.
\end{lemma}
\begin{proof}
Let us again view $I_j\ (j = 1, \ldots, k)$ as $n$-bit vectors. We show how
to construct a protocol $\cal P$ for $2$-\textsf{DISJ}\xspace from a protocol $\cal P'$ for
$k$-\textsf{OR}\xspace with the desired communication cost and error bound. $\cal P$ is
constructed as follows: Alice and Bob first run $\cal P'$ on $I_1, \ldots,
I_k$. Let $W \subseteq [n]$ be the set of indices where the results are
$1$. Bob checks whether there exists some $w \in W \cap y$ such that $w
\not\in V$. If yes, then $\cal P$ returns ``yes", otherwise $\cal P$
returns ``no".
We start by analyzing the communication cost of $\cal P$. Since the player holding $I_1$ is chosen randomly from the $k$ players, and since by Lemma~\ref{lem:AND-symmetric} all players' inputs are chosen from the same distribution, the expected amount of communication between $I_1$ (simulated by Alice) and the other $k-1$ players (simulated by Bob) is at most a $2/k$ fraction of the total communication cost of $\cal P'$. Therefore the expected communication cost of $\cal P$ is at most $O(C/k)$.
For the error bound, we have the following claim: With probability at least $(1 - 4k/n)$, there exists a $w \in W \cap y$ such that $w \not\in V$ if and only if $x \cap y \neq \emptyset$. First, if $x \cap y = \emptyset$, then $I_1 = x$ contains no element of $y$, thus every element of $W \cap y$ is a special element of some $I_j\ (j \ge 2)$ and hence belongs to $V$. On the other hand, we have
$\mathbf{Pr}[((W \cap y) \subseteq V) \wedge (x \cap y \neq \emptyset)] \le 4k/n.$
This is because $((W \cap y) \subseteq V)$ and $(x \cap y \neq \emptyset)$ hold simultaneously only if there exists some $S_j^1\ (2 \le j \le k)$ such that $S_j^1 \in x \cap y$. According to our random choices of $S_j^1\ (j = 2, \ldots, k)$, this holds with probability at most $k/l \le 4k/n$. Therefore if $\cal P'$ errs with probability at most $\varepsilon$, then $\cal P$ errs with probability at most $\varepsilon + 4k/n$.
\end{proof}
\noindent Combining Lemma~\ref{lem:2-DISJ} and Lemma~\ref{lem:OR-DISJ},
we have the following theorem.
\begin{theorem}
\label{thm:k-OR}
$D_{\mu'}^{1/800}(\mbox{$k$-\textsf{OR}\xspace}) = \Omega(nk)$, for $n \ge 3200k$ in the coordinator model.
\end{theorem}
\begin{proof}
If there exists a protocol $\cal P'$ that computes $k$-OR on input distribution $\mu'$ with communication complexity $o(nk)$ and error bound $1/800$, then by Lemma~\ref{lem:OR-DISJ} there exists a protocol $\cal P$ that computes $2$-\textsf{DISJ}\xspace on input distribution $\mu$ with expected communication complexity $o(n)$ and error bound $1/800 + 4k/n \le 1/400$, contradicting Lemma~\ref{lem:2-DISJ} (when $t = 4$).
\end{proof}
We discuss the applications of this lower bound to the distinct elements problem in Section \ref{sec:application}.
\subsection{Multiparty AND/OR with a Blackboard}
\label{sec:AND-blackboard}
Denote the $k$-\textsf{OR}\xspace problem in the blackboard model by $k$-$\textsf{OR}_b$. The general idea to prove a lower bound for $k$-$\textsf{OR}_b$ is to perform
a reduction from a $2$-party bit-wise OR problem ($2$-\textsf{OR}\xspace for short) with
public randomness. The $2$-\textsf{OR}\xspace problem is the following. Alice and
Bob each have an $n$-bit vector drawn from the following distribution:
Each bit is $1$ with probability $1/k$ and $0$ with
probability $1-1/k$. They also use public random bits to generate another
$(k-2)$ $n$-bit vectors such that each bit of these vectors is $1$ with
probability $1/k$ and $0$ with probability $1-1/k$. That is, the bits
are drawn from the same distribution as their private inputs. Let $\nu$ be
this input distribution. Alice and Bob want to compute bit-wise \textsf{OR}\xspace of all
these $k$ $n$-bit vectors. For this problem we have the following theorem.
\begin{theorem}
\label{thm:2-OR}
$\mathbf{E}D^{1/3}_\nu(2\mbox{-\textsf{OR}\xspace}) = \Omega(n/k \cdot \log k)$.
\end{theorem}
\begin{proof}
W.l.o.g., let us assume that Bob outputs the final result of $2$-\textsf{OR}\xspace. It is
easy to see that if we take the bit-wise OR of the $(k-2)$ $n$-bit vectors
generated by public random bits and Bob's input vector, the resulting
$n$-bit vector $b$ will have at least a constant density of $0$ bits
with probability at least $1 - o(1)$, by a Chernoff bound. Since Bob can
see the $k-2$ public vectors, to compute the final result, all that Bob has
to know are bits of Alice's vector on those indices $i\ (1 \le i \le n)$
where $b[i] = 0$. In other words, with probability at least $1 - o(1)$,
Bob has to learn a specific set consisting of at least a constant fraction of the bits of Alice's vector, up
to error $1/3$. Plugging in Lemma~\ref{lem:2-BITS} with $\rho = 1/k$, we know that the expected communication complexity is at least $(1 - o(1)) \cdot \Omega(n/k \cdot
\log k) = \Omega(n/k \cdot \log k)$.
\end{proof}
We reduce this problem to $k$-$\textsf{OR}_b$ as follows: Alice and Bob each
simulate a random player, and they use public random bits to simulate
the remaining $k-2$ players: the $(k-2)$ $n$-bit vectors generated by their
public random bits are used as inputs for the remaining $k-2$ random
players. Observe that the inputs of all the $k$ players are drawn from
the same distribution, thus the $k$ players are
symmetric. Consequently, the expected amount of communication between
the two random players simulated by Alice and Bob and the other players
(including the communication between the two random players) is at
most an $O(1/k)$ fraction of the total communication cost of a protocol for
$k$-$\textsf{OR}_b$. We have the following theorem.
\begin{theorem}
\label{thm:k-OR-board}
$D^{1/3}_\nu(k\mbox{-}\textsf{OR}_b) = \Omega(n \cdot \log k)$.
\end{theorem}
\begin{proof}
It is easy to see that if we have a protocol for $k$-$\textsf{OR}_b$ on input
distribution $\nu$ with communication complexity $o(n \cdot \log k)$ and
error bound $1/3$, then we have a protocol for $2$-\textsf{OR}\xspace on input distribution
$\nu$ with expected communication complexity $o(n/k \cdot \log k)$ and
error bound $1/3$, contradicting Theorem~\ref{thm:2-OR}.
\end{proof}
It is easy to show a tight deterministic upper bound of $O(n \log k)$ for this problem in the blackboard model: each player speaks in turn and writes the coordinates where he has $1$ and no player that spoke before him had $1$.
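For concreteness, the following Python sketch (ours; it charges an idealized $\log_2\binom{n}{s}$ bits for announcing a set of $s$ new coordinates, plus $\log_2(n+1)$ bits for announcing $s$ itself) simulates this protocol and its cost.
\begin{verbatim}
import math, random

def blackboard_or(inputs, n):
    """Each player in turn announces the coordinates where he has a 1 and no
    earlier player announced one. Returns the OR vector and the total cost in
    bits, charging log2(n choose s) for a set of s new coordinates plus
    log2(n+1) bits to announce s itself."""
    known = [0] * n
    total_bits = 0.0
    for vec in inputs:
        new = [c for c in range(n) if vec[c] == 1 and known[c] == 0]
        total_bits += math.log2(math.comb(n, len(new))) + math.log2(n + 1)
        for c in new:
            known[c] = 1
    return known, total_bits

# Example on the hard distribution nu: every bit is 1 with probability 1/k.
n, k = 1024, 32
rng = random.Random(0)
inputs = [[1 if rng.random() < 1 / k else 0 for _ in range(n)] for _ in range(k)]
result, bits = blackboard_or(inputs, n)
assert result == [max(v[c] for v in inputs) for c in range(n)]
print(round(bits), "bits vs n*log2(k) =", round(n * math.log2(k)))
# The announced sets are disjoint and their sizes sum to at most n, so (up to
# the k*log2(n+1) size announcements) the total is O(n log k); it is maximized
# when all k sets have equal size n/k.
\end{verbatim}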
\subsection{Majority}
\label{sec:MAJ}
In the $k$-\textsf{MAJ}\xspace problem, we have $k$ players, each having a bit vector of
length $n$, and they want to compute bit-wise majority, i.e.\ to determine for each coordinate whether the majority of entries in this coordinate are $1$ or $0$. We prove a lower bound of $\Omega(nk)$ for this problem in the coordinator model by a reduction to 2-\textsf{BITS}\xspace via symmetrization.
For the reduction, we consider $k = 2t+1$ players and describe the input
distribution $\tau$ as follows. For each coordinate we
assign it either $t$ or $(t+1)$ $1$ bits, each with probability $1/2$, independently over all coordinates. This indicates whether the $k$ players contain $t$ or $(t+1)$ $1$ bits among them in this coordinate, and hence whether that index has a majority of $0$s or $1$s, respectively. Then we place these $t$ or $(t+1)$ $1$ bits randomly among the $k$ players' inputs in this coordinate. It follows that under $\tau$: (i) each player's bits are
drawn from the same distribution, (ii) each bit of each player is $1$
with probability $1/2$ and $0$ with probability $1/2$, and (iii) each
index has probability $1/2$ of having a majority of $1$s.
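A small Python sketch (ours, purely illustrative; names and parameters are arbitrary) of how one can sample from $\tau$ and check properties (ii) and (iii) empirically:
\begin{verbatim}
import random

def sample_tau(n, k, rng):
    """Sample k n-bit inputs from the hard distribution tau for k-MAJ (k = 2t+1):
    each coordinate independently receives either t or t+1 ones (prob. 1/2 each),
    placed uniformly at random among the k players."""
    assert k % 2 == 1
    t = (k - 1) // 2
    inputs = [[0] * n for _ in range(k)]
    majority = [0] * n
    for c in range(n):
        ones = t + (1 if rng.random() < 0.5 else 0)
        majority[c] = 1 if ones == t + 1 else 0
        for p in rng.sample(range(k), ones):
            inputs[p][c] = 1
    return inputs, majority

rng = random.Random(0)
inputs, maj = sample_tau(n=20000, k=9, rng=rng)
print(sum(maj) / len(maj))              # property (iii): ~1/2 of the indices have majority 1
print(sum(inputs[3]) / len(inputs[3]))  # property (ii): each player's bits are ~1/2 ones
\end{verbatim}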
\begin{theorem}
\label{thm:k-MAJ}
$D^{1/6}_{\tau}(\mbox{k-\textsf{MAJ}\xspace}) = \Omega(nk)$ in the coordinator model.
\end{theorem}
\begin{proof}
Now we use symmetry to reduce to the two-player problem 2-\textsf{BITS}\xspace.
Alice will simulate a random player under $\tau$ and Bob will simulate the
other $k-1$ players. Notice that, by (ii), Alice's bits on any subset of $n'$
indices are distributed according to $\zeta_{1/2}$. We will show that Alice and Bob
need to solve 2-\textsf{BITS}\xspace on $\Omega(n)$ bits to solve $k$-\textsf{MAJ}\xspace. And, by
(i), since all players have the same distribution and Alice is a
random player, her expected cost in communicating to Bob is at most
$O(C/k)$ if $k$-\textsf{MAJ}\xspace can be solved in $C$ communication.
Now consider the aggregate number of $1$ bits Bob has for each index;
he has $(t-1)$ $1$ bits with probability $1/4$, $t$ $1$ bits with
probability $1/2$, and $(t+1)$ $1$ bits with probability $1/4$. Thus
for at most $(3/4)n$ indices (with probability at least $1-
\exp(-2(n/4)^2/n) = 1-\exp(-n/8) \geq 1-1/7$) that have either $(t-1)$
or $(t+1)$ $1$ bits Bob knows that these either will or will not have
a majority of $1$ bits, respectively. But for the other at least $n/4
= \Omega(n)$ remaining indices for which Bob has exactly $t$ $1$ bits,
whether or not these indices have a majority of $1$ bits depends on
Alice's bit. And conditioned on this situation, each of Alice's
relevant bits is $1$ with probability $1/2$ and $0$ with probability
$1/2$, hence distributed according to $\zeta_{1/2}$. Thus conditioned on at
least $n/4$ undecided indices, this is precisely the 2-\textsf{BITS}\xspace problem
between Alice and Bob of size $n/4$.
Thus a protocol for $k$-\textsf{MAJ}\xspace in $o(nk)$ communication and error bound
$1/6$ would yield a protocol for 2-\textsf{BITS}\xspace in expected $o(n)$
communication and error bound $1/6 + 1/7 < 1/3$, by running the protocol simulated
between Alice and Bob.
This contradicts Lemma \ref{lem:2-BITS}, and proves that $k$-\textsf{MAJ}\xspace requires $\Omega(kn)$ communication when allowing error on at most $1/6$ fraction of inputs.
\end{proof}
\paragraph{Extensions.}
This lower bound can easily be extended beyond just majority (threshold $1/2$) to any constant threshold $\phi (0 < \phi < 1)$, by assigning to each coordinate either $\lfloor k\phi
\rfloor$ or $(\lfloor k\phi \rfloor+1)$ $1$ bits with probability
$1/2$ each. Let $\tau_{\phi}$ denote this distribution. Then the analysis just uses $\zeta_\phi$ in place of
$\zeta_{1/2}$, which also yields an $\Omega(n)$ lower bound for
2-\textsf{BITS}\xspace. We call this extended $k$-\textsf{MAJ}\xspace problem $(k,\phi)$-\textsf{MAJ}\xspace.
\begin{corollary}
\label{cor:k-MAJ}
$D^{1/6}_{\tau_{\phi}}((k,\phi)\mbox{-\textsf{MAJ}\xspace}) = \Omega(nk)$ for any constant $\phi (0 < \phi < 1)$ in the coordinator model.
\end{corollary}
We discuss the applications of this lower bound to the heavy-hitter problem in Section \ref{sec:application}.
\section{Graph Connectivity}
\label{sec:CONN}
In the $k$-\textsf{CONN}\xspace problem, we have $k$ players, each having a set of edges in an $n$-vertex graph. The goal is to decide whether the graph consisting of the union of all of these edges is connected. In this section we prove an $\Omega(nk/\log^2 k)$ lower bound for $k$-\textsf{CONN}\xspace in the coordinator model, by performing a symmetry-based reduction from $2$-\textsf{DISJ}\xspace.
\subsection{Proof Idea and the Hard Distribution}
Let us start by discussing the hard distribution. Assume for simplicity $k \ge 100 \log n$. We describe a hard distribution which is not quite the same as the one in the proof (due to technical reasons), but is conceptually clearer. In this hard distribution, we consider a graph $G$, which consists of two disjoint cliques of size $n/2$. We also consider one edge between these two cliques, called the \emph{connector}; the connector is not part of $G$. Each player gets as input $n/10$ edges randomly and uniformly chosen from the graph $G$; furthermore, with probability $1/2$ we choose exactly one random edge in one random player's input, and replace it by the connector. It is easy to see that if one of the players got the connector, then with high probability the resulting set of edges spans a connected graph, and otherwise the graph is not connected.
To see why the lower bound holds, notice that the coordinator can easily reconstruct the graph $G$. However, in order to find out if one of the players has received the connector, the coordinator needs to speak with each of the players. The situation is roughly analogous to the situation in the $k$-OR problem, since the players themselves did not get enough information to know $G$, and no Slepian-Wolf type protocol is possible since the edges received by each player are random-looking. The actual distribution that we use is somewhat more structured than this, in order to allow an easier reduction to the $2$-player disjointness problem.
\subsection{The Proof}
\label{sec:CONN-proof}
We first recall $2$-\textsf{DISJ}\xspace. Similarly to before, in $2$-\textsf{DISJ}\xspace, Alice and Bob have inputs $x, y \subseteq [n]\ (n = 4\ell-1)$ with $\abs{x} = \abs{y} = \ell$, chosen uniformly at random, with the promise that with probability $1/10k$, $\abs{x \cap y} = 1$ and with
probability $1 - 1/10k$, $\abs{x \cap y} = 0$. Let $\varphi$ be this
input distribution for $2$-\textsf{DISJ}\xspace. Now given an input $(x,y)$ for
$2$-\textsf{DISJ}\xspace, we construct an input for $k$-\textsf{CONN}\xspace.
Let $K_{2n} = (V,E)$ be the complete graph with $2n$ vertices. Given Alice's input $x$ and Bob's input $y$, we construct $k$ players' input $I_1, \ldots, I_k$ such that $\abs{I_j} = \ell$ and $I_j \subseteq E$ for all $1 \le j \le k$.
We first pick a random permutation $\sigma$ of $[2n]$, using public randomness, so that both Alice and Bob know $\sigma$.
Alice constructs $I_1 = \{(\sigma(2i-1), \sigma(2i))\ |\ i \in x\}.$
Bob constructs $I_2, \ldots, I_k$.
It is convenient to use $\sigma$ and $y$ to divide $V$ into two subsets $L$ and $R$. For each $i\ (1 \le i \le n)$, if $i \in y$, then with probability $1/2$, we add $\sigma(2i-1)$ to $L$ and $\sigma(2i)$ to $R$; and with the rest of the probability, we add
$\sigma(2i-1)$ to $R$ and $\sigma(2i)$ to $L$. Otherwise if $i \not\in y$, then with probability $1/2$, we add both $\sigma(2i-1)$ and $\sigma(2i)$ to $L$; and with the rest of the probability, we add both $\sigma(2i-1)$ and $\sigma(2i)$ to $R$. Let $K_L = (L, E_L)$ and $K_R = (R, E_R)$ be the two complete graphs on sets of vertices $L$ and
$R$, respectively.
Now using $E_R$ and $E_L$, Bob can construct each $I_j$.
With probability $1-1/10k$, $I_j$ is a random subset of disjoint edges (i.e. a matching) from $E_L \cup E_R$ of size $\ell$; and with probability $1/10k$, $I_j$ is a random subset of disjoint edges from $E_L \cup E_R$ of size $\ell -1$ and one random edge from $E \setminus (E_L \cup E_R)$.
Let $\varphi'$ be the input distribution for $k$-\textsf{CONN}\xspace defined as above for each $I_j\ (1 \le j \le k)$.
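The following Python sketch (illustration only; names are ours, and the matching sampler is a simple greedy stand-in) summarises the reduction: it builds $I_1, \ldots, I_k$ from a $2$-\textsf{DISJ}\xspace instance $(x,y)$ exactly as described above.

\begin{verbatim}
import random

def sample_matching(edges, size, rng):
    """Greedily pick `size` vertex-disjoint edges from `edges` (a sketch;
    for the intended parameters such a matching exists w.h.p.)."""
    chosen, used = [], set()
    for u, v in rng.sample(edges, len(edges)):   # edges in random order
        if u not in used and v not in used:
            chosen.append((u, v))
            used.update((u, v))
            if len(chosen) == size:
                break
    return chosen

def reduce_disj_to_conn(x, y, n, ell, k, rng=random):
    """x, y: subsets of {1,...,n} of size ell; returns the players' edges."""
    sigma = list(range(2 * n))
    rng.shuffle(sigma)            # shared (public) random permutation of [2n]
    I = [None] * k
    # Alice's edges (player 1): one edge per element of x.
    I[0] = [(sigma[2 * i - 2], sigma[2 * i - 1]) for i in x]
    # Bob splits the vertices into L and R using y and sigma.
    L, R = set(), set()
    for i in range(1, n + 1):
        a, b = sigma[2 * i - 2], sigma[2 * i - 1]
        if i in y:
            if rng.random() < 0.5:
                L.add(a); R.add(b)
            else:
                R.add(a); L.add(b)
        else:
            (L if rng.random() < 0.5 else R).update((a, b))
    same_side = [(u, v) for S in (L, R) for u in S for v in S if u < v]
    crossing = [(u, v) for u in L for v in R]
    # Players 2,...,k: usually a matching inside L or R, rarely one crossing edge.
    for j in range(1, k):
        if rng.random() < 1.0 / (10 * k):
            I[j] = sample_matching(same_side, ell - 1, rng) + [rng.choice(crossing)]
        else:
            I[j] = sample_matching(same_side, ell, rng)
    return I
\end{verbatim}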
We define the following two events.
\begin{itemize}
\item[$\xi_1$:] Both edge-induced subgraphs $(\bigcup_{j=2}^k I_j) \cap E_L$ and $(\bigcup_{j=2}^k I_j) \cap E_R$ are connected, and span $L$ and $R$ respectively.
\item[$\xi_2$:] $(\bigcup_{j=2}^k I_j) \cap E$ is {\em not} connected.
\end{itemize}
It is easy to observe the following two facts by our construction.
\begin{lemma}
\label{lem:symmetric-2}
All $I_j\ (j = 1, \ldots, k)$ are chosen from the same distribution.
\end{lemma}
\begin{proof}
Since $\sigma$ is a random permutation of $[2n]$, under the
distribution $\varphi'$ Alice's input can also be seen as follows: with
probability $1 - 1/10k$, it is a random matching of size $\ell$ from $E_L
\cup E_R$; and with probability $1/10k$, it is a matching consisting of
$\ell-1$ random edges from $E_L \cup E_R$ and one random edge from $E
\backslash (E_L \cup E_R)$, namely $(\sigma(2z-1), \sigma(2z))$ where $z$
is the unique element of $x \cap y$.
\end{proof}
\begin{lemma}
\label{lem:connected}
$\xi_1$ happens with probability at least $1-1/2n$ when $k \geq 68 \ln n + 1$.
\end{lemma}
\begin{proof}
(This is a proof sketch; the full proof appears in Appendix \ref{sec:CONN-proof-appendix}.)
First note that by our construction, both $\abs{L}$ and $\abs{R}$ are $\Omega(n)$ with high probability. To locally simplify notation, we consider a graph $(V,E)$ of $n$ nodes
where edges are drawn in $(k - 1) \ge 68\ln n$ rounds, and each round $n/4$ disjoint
edges are added to the graph. If $(V,E)$ is connected with
probability at least $(1-1/4n)$, then by union bound over $\cup_{j=2}^k I_j
\bigcap E_L$ and $\cup_{j=2}^k I_j \bigcap E_R$, $\xi_1$ is true with
probability at least $(1-1/2n)$. The proof follows four steps.
\begin{itemize}
\item[(S1)] Using the first $28 \ln n$ rounds, we can show that all vertices have degree at least $8\ln n$ with probability at least $1 - 1/12n$.
\item[(S2)] Conditioned on (S1), any subset $S \subset V$ of $h < n/10$ vertices is connected to at least $\min\{h \ln n, n/10\}$ distinct vertices in $V \setminus S$, with probability at least $1-1/n^2$.
\item[(S3)] Iterate (S2) $\ln n$ times to show that there must be a single connected component $S_G$ of size at least $n/10$, with probability at least $1-1/12n$.
\item[(S4)] Conditioned on (S3), using the last $40 \ln n$ rounds we can show that all vertices are connected to $S_G$ with probability at least $1-1/12n$. \qedhere
\end{itemize}
\end{proof}
\noindent The following lemma shows the properties of our reduction.
\begin{lemma}
\label{lem:conn-reduction}
Assume $k \ge 100 \log n$. If there exists a protocol $\cal P'$ for $k$-\textsf{CONN}\xspace on input distribution
$\varphi'$ with communication complexity $C$ and error bound $\varepsilon$, then there exists a protocol $\cal P$ for
$2$-\textsf{DISJ}\xspace on input distribution $\varphi$ with expected communication
complexity $O(C/k \cdot \log k)$ and error bound $(29 \ln k \cdot \varepsilon + 1/2000k)$.
\end{lemma}
\begin{proof}
In $\cal P$, Alice and Bob first construct $\{I_1, \ldots, I_k\}$ according to our reduction, and then run the protocol $\cal P'$ on it. By Lemma~\ref{lem:connected} we have that $\xi_1$
holds with probability at least $1 - 1/2n$. And by our
construction, conditioned on $\xi_1$ holding, $\xi_2$
holds with probability at least $(1-1/10k)^{k-1} \ge 1 - 1/10$. Thus the input generated by a random reduction encodes the $2$-\textsf{DISJ}\xspace problem
with probability at least $(1 - 1/2n - 1/10) > 8/9$. We call such an input a {\em good} input. We repeat the random reduction $c \ln k$ times (for some large enough constant $c$; e.g., $c\geq 29$) and run $\cal P'$ on each of the resulting inputs for $k$-\textsf{CONN}\xspace. By a Chernoff bound, the probability that fewer than a $2/3$-fraction of the inputs are good is at most $2\exp(-2 (2/9)^2 (c \ln k))$, which is less than $1/2000k$ for $c\geq 29$ and $k \geq 100$; hence at least a $2/3$-fraction of the inputs are good with probability at least $1 - 1/2000k$. Thus we can output the majority answer over the $c\ln k$ runs of $\cal P'$ (this repetition-and-majority step is also summarised in the sketch following the proof), and obtain a protocol $\cal P$ for $2$-\textsf{DISJ}\xspace with expected communication complexity $O(C/k \cdot \log k)$ and error bound $\varepsilon \cdot 29 \ln k + 1/2000k$; the $\varepsilon \cdot 29 \ln k$ term comes from a union bound over the $\varepsilon$ error on each of the $29 \ln k$ runs of $k$-\textsf{CONN}\xspace.
\end{proof}
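The repetition-and-majority step used in the proof above is standard error reduction; the following Python sketch (illustration only; the callback name is ours) records it for completeness.

\begin{verbatim}
# Illustrative sketch: majority vote over independent runs of a
# randomized reduction.  `run_reduction_once` is an assumed callback
# that draws fresh reduction randomness, runs the k-CONN protocol on
# the resulting input, and returns its 0/1 answer.  If each run is
# correct with probability bounded away from 1/2, the majority answer
# is correct except with probability exponentially small in
# `repetitions` (Chernoff bound).
def amplified_answer(run_reduction_once, repetitions):
    votes = sum(run_reduction_once() for _ in range(repetitions))
    return 1 if 2 * votes > repetitions else 0
\end{verbatim}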
\noindent Combining Lemma~\ref{lem:2-DISJ} and
Lemma~\ref{lem:conn-reduction}, we have the following theorem.
\begin{theorem}
$D_{\varphi'}^{1/(2000\cdot (29 \ln k) \cdot k)}(\mbox{$k$-\textsf{CONN}\xspace}) = \Omega(nk/\log k)$, for $k \ge 100 \log n$ in the coordinator model.
\end{theorem}
\begin{proof}
If there exists a protocol $\cal P'$ that computes $k$-\textsf{CONN}\xspace with
communication complexity $o(nk/\log k)$ and error $1/(2000\cdot (29 \ln k) \cdot k)$, then by
Lemma~\ref{lem:conn-reduction} there exists a protocol $\cal P$ that computes
$2$-\textsf{DISJ}\xspace with expected communication complexity $o(n)$ and error at most
$(1/(2000\cdot (29 \ln k) \cdot k) \cdot (29 \ln k)) + 1/2000k = 1/1000k$, contradicting Lemma~\ref{lem:2-DISJ} (when $t = 10k$).
\end{proof}
Finally, we have the following immediate consequence.
\begin{eqnarray*}
R^{1/3}(\mbox{$k$-\textsf{CONN}\xspace})
&\ge& \Omega(R^{1/(2000\cdot (29 \ln k) \cdot k)}(\mbox{$k$-\textsf{CONN}\xspace}) / \log k) \\
&\ge& \Omega(D_{\varphi'}^{1/(2000\cdot (29 \ln k) \cdot k)}(\mbox{$k$-\textsf{CONN}\xspace})/\log k) \\
&\ge& \Omega(nk/\log^2 k)
\end{eqnarray*}
\section{Some Direct-Sum-Like Results}
\label{sec:directsum}
Let $f : {\cal X} \times {\cal Y} \to {\cal Z}$ be an arbitrary function. Let
$\mu$ be a probability distribution over ${\cal X} \times {\cal Y}$. Consider a setting where we have
$k+1$ players: Carol, and $P_1, P_2, \ldots, P_k$. Carol receives an input $x \in {\cal X}$ and each $P_i$ receives an input $y_i \in {\cal Y}$. Let $R^{\varepsilon}(f^k)$ denote the randomized communication complexity of computing
$f$ on Carol's input and each of the $k$ other players' inputs
respectively; i.e., computing $f(x, y_1), f(x, y_2), \ldots, f(x,
y_k)$. Our direct-sum-like theorem in the message-passing model states
the following.
\begin{theorem}
\label{thm:directsum}
In the message-passing model, for any function $f : {\cal X}
\times {\cal Y} \to {\cal Z}$ and any distribution $\mu$ on ${\cal
X} \times {\cal Y}$, we have $R^{\varepsilon}(f^k) \ge \Omega(k \cdot
\mathbf{E}D^{\varepsilon}_{\mu}(f))$.
\end{theorem}
Note that this is not a direct-sum theorem in the strictest sense, since it relates randomized communication complexity to expected distributional complexity. However, this weaker form suffices for most applications.
The proof of this theorem is dramatically simpler than most direct-sum proofs known in the literature (e.g.~\cite{direct_sum}). This is not entirely surprising, as it is weaker than those theorems: it deals with the case where the inputs are spread out over many players, while in the classical direct-sum setting, the inputs are only spread out over two players (this would be analogous to allowing the players $P_i$ to communicate with each other for free, and only charging them for speaking to Carol). However, perhaps more surprisingly, \emph{optimal} direct-sum results are \emph{not known} for most models, and are considered to be central open questions, while the result above is essentially optimal. Optimal direct-sum results in $2$-player models would have dramatic consequences in complexity theory (see e.g.\
\cite{direct_sum_circuit} as well as \cite{patrascu_conj}), so it seems interesting to check whether direct-sum results in multiparty communication could suffice for achieving those complexity-theoretic implications.
\begin{proof}
Given the distribution $\mu$ on ${\cal X} \times {\cal Y}$, we
construct a distribution $\nu$ on ${\cal X} \times {\cal Y}^k$. Let
$\rho_x$ be the marginal distribution on ${\cal Y}$ induced by $\mu$
conditioned on $X = x$. We first pick $(x, y_1) \in {\cal X} \times
{\cal Y}$ according to $\mu$, and then pick $y_2, \ldots, y_k$
independently from ${\cal Y}$ according to $\rho_x$. We show that
$D_{\nu}^{\varepsilon}(f^k) \ge \Omega(k \cdot \mathbf{E}D_{\mu}^{\varepsilon}(f))$. The
theorem follows by Yao's min-max principle.
Suppose that Alice and Bob get inputs $(u, w)$ from ${\cal X} \times
{\cal Y}$ according to $\mu$. We can use a protocol for $f^k$ to
compute $f(u,w)$ as follows: Bob simulates a random player in
$\{P_1, \ldots, P_k\}$. W.l.o.g.\ say it is $P_1$. Alice simulates
Carol and the remaining $k-1$ players. The inputs for Carol and $P_1,
\ldots, P_k$ are constructed as follows: $x = u$, $y_1 = w$ and $y_2,
\ldots, y_k$ are picked from $\cal Y$ according to $\rho_u$ (Alice
knows $u$ and $\mu$ so she can compute $\rho_u$). Let $\nu$ be the
distribution of $(x, y_1, \ldots, y_k)$ in this construction. We now
run the protocol for $f^k$ on $x, y_1, \ldots, y_k$. The result also
gives $f(u, w)$.
Since $y_1, \ldots, y_k$ are chosen from the same distribution and
$P_1$ is picked uniformly at random from the $k$ players other than
Carol, the expected amount of
communication between $P_1$ and $\{$Carol, $P_2, \ldots, P_k \}$, or
equivalently, the expected communication between Alice and Bob in our
construction, is at most a $2/k$ fraction of the total communication
of the $(k+1)$-player game. Thus $D_{\nu}^{\varepsilon}(f^k) \ge \Omega(k \cdot
\mathbf{E}D_{\mu}^{\varepsilon}(f))$. (The input embedding used here is illustrated in the sketch following the proof.)
\end{proof}
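The following Python sketch (illustration only; \texttt{sample\_rho} is an assumed helper for sampling from the conditional marginal $\rho_u$) shows how a single $\mu$-distributed instance is embedded into an instance of $f^k$ in the reduction above.

\begin{verbatim}
import random

def embed_single_instance(u, w, k, sample_rho, rng=random):
    """Embed one mu-distributed instance (u, w) of f into an f^k instance.

    `sample_rho(u)` is an assumed helper that samples Y from the marginal
    of mu conditioned on X = u (Alice can do this, since she knows u and
    mu).  Bob plays the randomly chosen player `target`; Alice plays
    Carol and all remaining players, so only messages crossing the
    Alice/Bob cut are charged to the two-party protocol.
    """
    target = rng.randrange(k)          # which player Bob simulates
    ys = [sample_rho(u) for _ in range(k)]
    ys[target] = w                     # Bob's real input goes here
    return u, ys, target
\end{verbatim}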
\subsection{With Combining Functions}
In this section we extend Theorem~\ref{thm:directsum} to the
complexity of the AND/OR of $k$ copies of a $0/1$ function
$f$. As before, AND and OR are essentially the same, so we only
discuss OR here. Let $R^{\varepsilon}(f^k_{\mathrm{OR}})$ denote the
randomized communication complexity of computing $f(x, y_1) \vee
f(x, y_2) \vee \ldots \vee f(x, y_k)$.
\begin{theorem}
\label{thm:directsum-OR}
In the message-passing model, for any function $f : {\cal X}
\times {\cal Y} \to \{0,1\}$ and every distribution $\mu$ on ${\cal X}
\times {\cal Y}$ such that $\mu(f^{-1}(1)) \le 1/10k$, we have
$R^{1/3}(f^k_{\mathrm{OR}}) \ge \Omega(k/\log^2(1/\varepsilon) \cdot
\mathbf{E}D^{\varepsilon}_{\mu}(f))$.
\end{theorem}
\begin{proof}
The
reduction is the same as that in the proof of
Theorem~\ref{thm:directsum}. Note that if $\mu(f^{-1}(1)) \le 1/10k$, then with probability $(1
- 1/10k)^{k-1} \ge 0.9$, we have $f^k_{\mathrm{OR}}(x, y_1, \ldots,
y_k) = f(u,w)$. Similar to the proof of Lemma~\ref{lem:conn-reduction}, we can repeat the reduction $c \log (1/\varepsilon)$ times for some large enough constant $c$, and then the majority of the values $f^k_{\mathrm{OR}}(x, y_1, \ldots,
y_k)$ will be equal to $f(u,w)$ with probability at least $1 - \varepsilon/2$.
Therefore if we have a protocol for $f^k_{\mathrm{OR}}$ with error probability $\varepsilon/(2c \log (1/\varepsilon))$ under $\nu$ and communication complexity $C$, then we have a
protocol for $f$ with error probability $(c \log (1/\varepsilon)) \cdot \varepsilon/(2c \log (1/\varepsilon)) + \varepsilon/2 = \varepsilon$ under $\mu$ and expected communication complexity
$O(\log(1/\varepsilon) \cdot C/k)$. Consequently, $R^{1/3}(f^k_{\mathrm{OR}}) \ge \Omega\left(R^{\varepsilon/(2c \log 1/\varepsilon)}(f^k_{\mathrm{OR}}) / \log(1/\varepsilon)\right) \ge \Omega\left(D^{\varepsilon/(2c \log 1/\varepsilon)}_\nu(f^k_{\mathrm{OR}}) / \log(1/\varepsilon)\right) \ge \Omega\left(k/\log^2(1/\varepsilon) \cdot
\mathbf{E}D^{\varepsilon}_{\mu}(f)\right)$.
\end{proof}
\section{Applications}
\label{sec:application}
We now consider applications where multiparty communication complexity lower bounds such as ours are needed. As mentioned in the introduction, our multiparty communication problems are strongly motivated by research on the server-site model and the more general distributed streaming model.
We discuss three problems here:
the heavy hitters problem, which asks to find the approximately most frequently occurring elements in a set which is distributed among many clients;
the distinct elements problem, which asks to list the distinct elements from a fixed domain, where the elements are scattered across distributed databases, possibly with multiplicity;
and the $\varepsilon$-kernel problem, which asks to approximate the convex hull of a set which is distributed over many clients.
\paragraph{Distinct elements.}
Consider a domain of $n$ possible elements and $k$ distributed databases, each of which contains a subset of these elements. The exact distinct elements problem is to list the set of all distinct elements in the union of the distributed databases. This is precisely the $k$-OR problem, and the lower bound follows from Theorem \ref{thm:k-OR}, since the presence of each element in each database can be indicated by a bit, and the bit-wise OR represents the set of distinct elements (see the sketch after the theorem below).
\begin{theorem}\label{thm:distinct-element}
For a set of $k$ distributed databases, each containing a subset of $n$ elements, it requires $\Omega(nk)$ total communication among the databases to list the set of distinct elements with probability at least $2/3$.
\end{theorem}
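For concreteness, the following Python sketch (illustration only; names are ours) makes the correspondence explicit: the union of the databases is exactly the coordinate-wise OR of their characteristic bit vectors.

\begin{verbatim}
from functools import reduce

def distinct_elements(databases, n):
    """databases: k subsets of range(n); returns their union via a k-wise OR."""
    bitvectors = [sum(1 << e for e in db) for db in databases]
    union = reduce(lambda a, b: a | b, bitvectors, 0)
    return {e for e in range(n) if (union >> e) & 1}

print(distinct_elements([{0, 2}, {2, 5}, {1}], n=8))   # {0, 1, 2, 5}
\end{verbatim}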
\paragraph{$\varepsilon$-Kernels.}
Given a set of $n$ points $P \subset \b{R}^d$, the width in direction
$u$ is denoted by
\[
\ensuremath{\textsf{wid}}(P,u) = \left(\max_{p \in P} \langle p, u \rangle\right) - \left(\min_{p \in P} \langle p, u \rangle \right),
\]
where $\langle \cdot, \cdot \rangle$ is the standard inner product operation.
Then an $\varepsilon$-kernel~\cite{AHV04,AHV07} $K$ is a subset of $P$ so that for any direction $u$ we have
\[
\ensuremath{\textsf{wid}}(P,u) - \ensuremath{\textsf{wid}}(K,u) \leq \varepsilon \cdot \ensuremath{\textsf{wid}}(P,u).
\]
An $\varepsilon$-kernel $K$ approximates the convex hull of a point set $P$, such that if the convex hull of $K$ is expanded in any direction by an $\varepsilon$-factor it contains $P$. As such, this coreset has proven useful in many applications in computational geometry such as approximating the diameter and smallest enclosing annulus of point sets~\cite{AHV04,AHV07}. It has been shown that $\varepsilon$-kernels may require $\Omega(1/\varepsilon^{(d-1)/2})$ points (on a $(d-1)$-sphere in $\b{R}^d$) and can always be constructed of size $O(1/\varepsilon^{(d-1)/2})$ in time $O(n + 1/\varepsilon^{d-3/2})$~\cite{YAPV04,Cha06}.
We note two further properties of $\varepsilon$-kernels.
Composability: If $K_1, \ldots, K_k$ are $\varepsilon$-kernels of $P_1, \ldots, P_k$, respectively, then $K = \bigcup_{i=1}^k K_i$ is an $\varepsilon$-kernel of $P = \bigcup_{i=1}^k P_i$.
Transitivity: If $K_1$ is an $\varepsilon_1$-kernel of $P$ and $K_2$ is an $\varepsilon_2$-kernel of $K_1$, then $K_2$ is an $(\varepsilon_1 + \varepsilon_2)$-kernel of $P$.
Thus it is easy to see that each site $i$ can simply send an $(\varepsilon/2)$-kernel $K_i$ of its data, of size $n_\varepsilon = O(1/\varepsilon^{(d-1)/2})$, to the server, and the server can then create an $(\varepsilon/2)$-kernel of $\bigcup_{i=1}^k K_i$ of size $O(1/\varepsilon^{(d-1)/2})$; by composability and transitivity this is an $\varepsilon$-kernel of the full distributed data set, of asymptotically optimal size. We next show that this procedure is also asymptotically optimal with regard to communication.
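The following Python sketch (illustration only; names are ours) spells out the directional-width definition above and gives a sampled-direction check of the $\varepsilon$-kernel guarantee; it is a numerical sanity check, not a construction algorithm.

\begin{verbatim}
import math
import random

def width(points, u):
    """Directional width of a point set along direction u."""
    dots = [sum(pi * ui for pi, ui in zip(p, u)) for p in points]
    return max(dots) - min(dots)

def random_direction(d, rng=random):
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def looks_like_eps_kernel(K, P, eps, d, trials=1000, rng=random):
    """Check the kernel inequality on a sample of random directions."""
    return all(width(P, u) - width(K, u) <= eps * width(P, u)
               for u in (random_direction(d, rng) for _ in range(trials)))
\end{verbatim}

By the composability property above, checking each $K_i$ against its own $P_i$ in this way is already enough to certify the union.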
\begin{theorem}\label{thm:eKerLB}
For a distributed set of $k$ sites, it requires $\Omega(k/\varepsilon^{(d-1)/2})$ total communication between the sites and the server for the server to create an $\varepsilon$-kernel of the distributed data with probability at least $2/3$.
\end{theorem}
\begin{proof}
We describe a construction which reduces $k$-OR to this problem, where each of $k$ players has $n_\varepsilon = \Theta(1/\varepsilon^{(d-1)/2})$ bits of information. Theorem \ref{thm:k-OR} shows that this requires $\Omega(n_\varepsilon k)$ communication.
We let each player have very similar data $P_i = \{p_{i,1}, \ldots, p_{i,n_\varepsilon}\}$; each player's data points lie in the unit ball $B = \{q \in \b{R}^d \mid \|q\| \leq 1\}$. Each player's $j$th point $p_{i,j}$ lies along the same direction $u_j$, and its magnitude is either $\|p_{i,j}\| = 1$ or $\|p_{i,j}\| = 1-2\varepsilon$. Furthermore, the set of directions $U = \{u_j\}$ is well-distributed, in the sense that for any player $i$ and any point $p_{i,j}$, the set $P_i \setminus \{p_{i,j}\}$ is not an $\varepsilon$-kernel of $P_i$; that is, the only $\varepsilon$-kernel of $P_i$ is the full set. The existence of such a set of directions follows from the known lower bound construction for the size of an $\varepsilon$-kernel.
We now claim that the $k$-OR problem where each player has $n_\varepsilon$ bits can be solved by solving the distributed $\varepsilon$-kernel problem under this construction. Consider any instance of $k$-OR, and translate it to the $\varepsilon$-kernel problem as follows: let the $j$th point $p_{i,j}$ of the $i$th player have norm $\|p_{i,j}\| = 1$ when the $j$th bit of player $i$ is $1$, and norm $\|p_{i,j}\| = 1-2\varepsilon$ when the $j$th bit is $0$.
By construction, an $\varepsilon$-kernel of the full set must contain the $j$th point of some player whose $j$th point has norm $1$, if such a player exists. Thus the full $\varepsilon$-kernel encodes the solution to the $k$-OR problem: it must have $n_\varepsilon$ points and, for each index $j$, its $j$th point has norm $1$ if the $j$th OR bit is $1$, and norm $1-2\varepsilon$ if the $j$th OR bit is $0$.
\end{proof}
\paragraph{Heavy Hitters.}
Given a multi-set $S$ that consists of $n$ elements, a threshold parameter $\phi$, and an error parameter $\varepsilon$, the \emph{approximate heavy hitters} problem asks for a set of elements which contains all elements that occur at least $\phi k$ times in $S$ and contains no elements that occur fewer than $\phi k (1-\varepsilon)$ times in $S$. On a static non-distributed data set this can easily be done with sorting. This problem has been famously studied in the streaming literature where the Misra-Gries~\cite{MG82} and SpaceSaving~\cite{MAA06} summaries can solve the problem in optimal space. In the distributed setting the best known algorithms use random sampling of the indices and require either $O((1/\varepsilon^2)n \log n)$ or $O(k + \sqrt{k} n/\varepsilon \cdot \log n)$ communication to guarantee a correct set with constant probability~\cite{HYLC11}.
We will prove a lower bound of $\Omega(n/\varepsilon)$ here. After our work Woodruff and Zhang~\cite{WZ12} showed a lower bound of $\Omega(\min\{n/\varepsilon^2, \sqrt{k}n/\varepsilon\})$ (translating to our setting; their setting is a bit different), which is tight up to a $\log$ factor.
We now present a specific formulation of the approximate heavy hitters problem, $(k,\phi,\varepsilon)$-\textsf{HH}\xspace, as follows. Consider $k$ players, each with a bit string of length $n$, where each coordinate represents an element. The goal is to answer YES for each index with at least $\phi k$ $1$ bits, NO for each index with no more than $\phi k(1-\varepsilon)$ $1$ bits, and either YES or NO for any count in between.
The reduction is based on a distribution $\tau_{\phi,\varepsilon}$ in which, independently for each index, there are either $\phi k$ or $\phi k(1-\varepsilon)$ $1$ bits, each case with probability $1/2$. In the reduction the players are grouped into sets of $k \varepsilon$ players each, and for each index the players within a group are either all given a $1$ bit or all given a $0$ bit. These $1$ bits are distributed randomly among the $1/\varepsilon$ groups. The proof then uses Corollary \ref{cor:k-MAJ}.
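For concreteness, the following Python sketch (illustration only; names are ours, and the divisibility assumptions glossed over above are stated explicitly in the comments) samples inputs from an idealized form of $\tau_{\phi,\varepsilon}$ at the level of the $1/\varepsilon$ groups.

\begin{verbatim}
import random

def sample_hh_instance(n, k, phi, eps, rng=random):
    """Sample inputs from an idealized form of tau_{phi,eps}.

    Assumptions (glossed over in the informal description): k*eps and
    1/eps are integers, and the per-index totals k*phi and
    k*phi*(1-eps) are multiples of the group size k*eps.
    """
    group_size = int(round(k * eps))          # players per group
    groups = int(round(1 / eps))              # number of groups
    inputs = [[0] * n for _ in range(groups * group_size)]
    for i in range(n):
        if rng.random() < 0.5:
            total = int(round(k * phi))               # YES case
        else:
            total = int(round(k * phi * (1 - eps)))   # NO case
        for g in rng.sample(range(groups), total // group_size):
            for j in range(g * group_size, (g + 1) * group_size):
                inputs[j][i] = 1
    return inputs
\end{verbatim}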
\begin{theorem}
$D_{\tau_{\phi,\varepsilon}}^{1/6}((k,\phi,\varepsilon)$-$\textsf{HH}\xspace) = \Omega(n/\varepsilon)$.
\end{theorem}
\begin{proof}
To lower bound the communication for $(k,\phi,\varepsilon)$-$\textsf{HH}\xspace$ we first show that another problem is hard: $(1/\varepsilon,\phi)$-$\textsf{HH}\xspace$ (assume $1/\varepsilon$ is an integer). Here there are only $1/\varepsilon$ players, each player at each index has a count of either $0$ or $k\varepsilon$, and we again want to distinguish between total counts of at least $k \phi$ (YES) and at most $k \phi(1-\varepsilon)$ (NO). Under distribution $\tau_{\phi,\varepsilon}$, each index has a total of exactly either $k \phi$ or $k \phi(1-\varepsilon)$, and we distribute these bits to players so that each player has precisely either $0$ or $k\varepsilon$ at each index. When $k$ is odd, this is
precisely the $(1/\varepsilon,\phi)$-MAJ problem, which by Corollary \ref{cor:k-MAJ} requires $\Omega(n/\varepsilon)$ communication.
Now it is easy to see that $D_{\tau_{\phi,\varepsilon}}^{1/6}((1/\varepsilon,\phi)$-$\textsf{HH}\xspace) \leq D_{\tau_{\phi,\varepsilon}}^{1/6}((k,\phi,\varepsilon)$-$\textsf{HH}\xspace)$, since the former on the same input allows $(1/\varepsilon)$ sets of $k\varepsilon$ players to talk to each other at no cost.
\end{proof}
\section{Concluding Remarks}
\label{sec:conclusion}
In this paper we have introduced the symmetrization technique, and have shown how to use it to prove lower bounds for $k$-player communication games. This technique seems widely applicable, and we expect future work to find further uses.
\subsection{A Brief Comparison to the icost Method}
In this section we make a brief comparison between our symmetrization method and the celebrated {\em icost} method by~\cite{BYJKS02}. Readers who are familiar with the icost method may notice that the $k$-\textsf{XOR}\xspace, $k$-\textsf{MAJ}\xspace and blackboard $k$-\textsf{OR}\xspace/\textsf{AND}\xspace problems discussed in this paper can also be handled by the icost method. However, for problems whose complexities are different in the blackboard model and the message-passing model, e.g., $k$-\textsf{OR}\xspace and $k$-\textsf{CONN}\xspace, the icost method cannot be used to obtain tight lower bounds in the message-passing/coordinator model, while the symmetrization method still applies.
If we view the input to the $k$ players as a matrix, with the players as rows, each holding an $n$-bit input, the icost method first ``divides'' the whole problem into $n$ copies of a primitive problem column-wise, and then analyzes a single primitive problem. The symmetrization method, in contrast, first reduces the size of the problem in the row space; that is, it first reduces a $k$-player problem to a $2$-player problem, and then analyzes the $2$-player problem. We can certainly use the icost method again when analyzing the resulting $2$-player problem, which gives an elegant way to combine the two techniques.
We notice that after the conference version of this paper, Braverman et al.~\cite{BEOPV13} and Huang et al.~\cite{HRVZ13} independently developed two new (and different) definitions for icost in the coordinator model, and used them to prove some tight lower bounds in the coordinator model.
\subsection{Limitations and Future Directions}
\label{sec:limitations}
The symmetrization technique also has several limitations, which we wish to discuss here.
Firstly, there are problems that might be impossible to lower bound using symmetrization. Consider for example the $k$-player disjointness problem, where each player gets a subset of $\{1,\ldots,n\}$, and the goal is to decide whether there is an element that appears in all of the sets. This problem appears to be easier than the coordinate-wise AND problem. But in the conference version of this paper we conjectured that this problem has a communication lower bound of $\Omega(nk)$ in the coordinator model as well. However, it seems impossible to prove this lower bound using symmetrization, for the following reason. Suppose we give Alice the input of a randomly-chosen player $p_i$, and give Bob the inputs of all the other players. It seems that for any symmetric input distribution, the resulting two-player problem can be solved using $O((n \log k) / k)$ bits in expectation, which is much lower than the $\Omega(nk)$ lower bound we are aiming for.
Recently Braverman et al.~\cite{BEOPV13} confirmed our conjecture that the $k$-player disjointness problem has a lower bound $\Omega(nk)$ in the coordinator model, using a very different method via information complexity.
The second limitation is that symmetrization seems to require proving distributional lower bounds for $2$-player problems, over somewhat-convoluted distributions. This presents some difficulty for the researcher, who needs to start proving lower bounds from scratch and cannot use the literature, since lower bounds in the literature are proved for other distributions. Yao's minimax principle cannot be used, since it only guarantees that there is \emph{some} hard distribution, but it does not guarantee anything for the distribution of interest. This is often only a methodological difficulty, since it is often easy to become convinced that the distribution of interest is indeed hard, but a proof of this must still be found, which is a tedious task. It would be useful if there were some way to circumvent this difficulty, for example by finding a way to use standard randomized lower bounds.
The third limitation is that in order to use symmetrization, one needs to find a hard distribution for the $k$-player problem which is symmetric. This is usually impossible when the problem itself is not symmetric, i.e.\ when the players have different roles. For example, one could envision a problem where some of the players get as input elements of some group and the rest of the players get as input integers. However, note that for such problems, symmetrization can still be useful in a somewhat-generalized version. For example, suppose there are two sets of players: in set $P$, the players get group elements, and in set $P'$, the players get integers. Assume each of the sets contains exactly $k/2$ players. To use symmetrization, we would try to find a hard distribution that is symmetric inside of $P$ and also inside of $P'$; namely, a distribution where permuting the players inside $P$ has no effect on the distribution, and similarly for permuting the players inside $P'$. Then, to use symmetrization we can have Alice simulate two random players, $p_i$ and $p_j$, where $p_i$ is from $P$ and $p_j$ is from $P'$; Bob will simulate all the rest of the players. Now symmetrization can be applied. If, alternatively, the set $P$ contained just $3$ players and $P'$ contained $k-3$ players, we can have Alice simulate one of the players in $P'$, Bob can simulate the rest of the players in $P'$, and either Alice or Bob can play the three players from $P$. As can be seen, with a suitable choice of distribution, it should still be possible to apply symmetrization to problems that exhibit some amount of symmetry.
The main topic for future work seems to be to find more settings and problems where symmetrization can prove useful. Recently this technique has found applications in several other statistical, numerical linear algebra, and graph problems in the message-passing/coordinator model~\cite{WZ12,WZ13,HRVZ13,WZ14,LSWW14}. We believe it has the potential to be a widely useful tool.
\appendix
\section{Omitted Proof for 2-\textsf{BITS}\xspace}
\label{sec:2-BITS-proof}
\noindent\textbf{Lemma \ref{lem:2-BITS} (restated).} \emph{$\mathbf{E}D_{\zeta_\rho}^{1/3}(\textrm{2-\textsf{BITS}\xspace}) = \Omega(n \rho \log(1/\rho))$.
}
\begin{proof}
Here we will make use of several simple tools from information theory.
Given a random variable $X$ drawn from a distribution $\mu$, we can measure the amount of randomness in $X$ by its entropy $H(X) = - \sum_{x} \mu(x) \log_2 \mu(x)$. The conditional entropy $H(X \mid Y) = H(X Y) - H(Y)$ measures the amount of uncertainty remaining in $X$ once $Y$ is known. The mutual information $I(X : Y) = H(X) + H(Y) - H(XY)$
measures the amount of information shared by the random variables $X$ and $Y$.
Let $\cal P$ be any valid communication protocol. Let $X$ be Alice's (random) input vector. Let $Y$ be Bob's output, i.e., the vector he believes to be Alice's after the communication. Let $\Pi$ be the transcript of $\cal P$. Let $\varepsilon = 1/3$ be the error bound allowed for Bob.
First, after running $\cal P$ we have $Y = X$ with probability at least $1- \varepsilon$, so Bob learns $X$ except with probability $\varepsilon$. Therefore
\[
I(X:Y\ |\ \Pi) \le \varepsilon H(X).
\]
Consequently,
\begin{eqnarray*}
\varepsilon H(X) &\ge& I(X:Y\ |\ \Pi) \\
&=& H(X\ |\ \Pi) + H(Y\ |\ \Pi) - H(XY\ |\ \Pi)\\
&=& H(X\Pi) - H(\Pi) + H(Y\Pi) - H(\Pi) \\ & &- (H(XY\Pi) - H(\Pi)) \\
&=& H(X\Pi) + H(Y\Pi) - H(XY\Pi) - H(\Pi) \\
&=& I(X\Pi:Y\Pi) - H(\Pi) \\
&\ge& (1 - \varepsilon) H(X\Pi) - H(\Pi) \\
&\ge& (1 - \varepsilon) H(X) - H(\Pi)
\end{eqnarray*}
Therefore, $\mathbf{E}[\abs{\Pi}] \ge H(\Pi) \ge (1 - 2\varepsilon) H(X) \ge \Omega(n
H(\rho)) \ge \Omega(n\rho \log (1/\rho))$.
\end{proof}
\section{Omitted Proofs for the Biased $2$-party Set Disjointness}
\label{sec:2-DISJ-proof}
\noindent\textbf{Lemma \ref{lem:2-DISJ} (restated).} \emph{When $\mu$ has $|x \cap y| = 1$ with probability $1/t$ then $\mathbf{E}D^{1/100t}_{\mu}(\mbox{2-\textsf{DISJ}\xspace}) = \Omega(n)$.
}
The proof is based on \cite{Raz90}. Before giving the proof, we first
introduce some notation and a key technical lemma. Define $$A =
\{(x,y)\ :\ (\mu(x,y) > 0) \wedge (x \cap y = \emptyset)\}$$ and
$$B = \{(x,y)\ :\ (\mu(x,y) > 0) \wedge (x \cap y \neq \emptyset)\}.$$ Thus
$\mu(A) = 1 - 1/t$ and $\mu(B) = 1/t$. We need the following key lemma, which is an easy extension of the main lemma of Razborov~\cite{Raz90}, obtained by rescaling the measures on the YES and NO instances.
\begin{lemma}
\label{lem:mono-rect}{\cite{Raz90}}
Let $A, B, \mu$ be defined as above. Let $R = C \times D$ be any rectangle
in the communication protocol. Then we have
$\mu(B \cap R) \ge 1/40t \cdot \mu(A \cap R) - 2^{-0.01n}.$
\end{lemma}
\begin{proof}{(for Lemma~\ref{lem:2-DISJ})}
Let ${\cal R} = \{R_1, \ldots, R_m\}$ be the minimal set of disjoint rectangles on
which the protocol outputs ``$1$'', i.e., $x \cap y = \emptyset$. Imagine
that we have a binary decision tree built on top of these rectangles. If we
can show that there exists ${\cal O} \subseteq \cal R$ such that
$\mu(\bigcup_{R_i \in \cal O} R_i) \ge 0.5 \cdot \mu(\bigcup_{R_i \in \cal
R} R_i)$ and each $R_i \in {\cal O}$ lies at depth at least $0.005n$
in the binary decision tree, then we are done: indeed, $\mu(\bigcup_{R_i \in
\cal O} R_i) \ge 0.5 \cdot \mu(\bigcup_{R_i \in \cal R} R_i) \ge 0.5
\cdot (\mu(A) - 1/100t) = \Omega(1)$, and reaching any rectangle
in $\cal O$ requires at least $0.005n = \Omega(n)$ bits of communication.
We prove this by contradiction. Suppose that there exists ${\cal O'}
\subseteq {\cal R}$ such that $\mu(\bigcup_{R_i \in \cal O'} R_i) > 0.5
\cdot \mu(\bigcup_{R_i \in \cal R} R_i)$ and each $R_i \in {\cal O'}$
lies at depth less than $0.005n$ in the binary decision tree. We have the
following two facts.
\begin{enumerate}
\item There are at most $2^{0.005n}$ disjoint rectangles that lie at depth
less than $0.005n$, i.e., $\abs{\cal O'} \le 2^{0.005n}$.
\item $\mu\left(\bigcup_{R_i \in {\cal O'}}(R_i \cap A)\right) > 0.5 -
1/100t$.
\end{enumerate}
Combining the two facts with Lemma~\ref{lem:mono-rect} we reach the following contradiction of our error bound.
\begin{align*}
\mu\left(\bigcup_{i=1}^m(R_i \cap B)\right)
\ge& \mu\left(\bigcup_{R_i \in {\cal O'}}(R_i \cap B)\right) \\
\ge& \sum_{R_i \in {\cal
O'}}(1/40t \cdot \mu(R_i \cap A) - 2^{-0.01n}) \\
>& 1/40t \cdot (0.5 - 1/100t) - 2^{0.005n} \cdot 2^{-0.01n} \\
>& 1/100t.
\end{align*}
\end{proof}
\section{Omitted Proofs for Graph Connectivity}
\label{sec:CONN-proof-appendix}
We provide here a full proof of the bound on the probability of the event $\xi_1$, namely that the subgraphs induced on $L$ and $R$ are each connected and span $L$ and $R$ respectively.
\noindent\textbf{Lemma \ref{lem:connected} (restated).}
\emph{$\xi_1$ happens with probability at least $1-1/2n$ when $k \geq 68 \ln n + 1$.}
\begin{proof}
First note that by our construction, both $\abs{L}$ and $\abs{R}$ are $\Omega(n)$ with high probability. To locally simplify notation, we consider a graph $(V,E)$ of $n$ nodes
where edges are drawn in $(k - 1) \ge 68\ln n$ rounds, and each round $n/4$ disjoint
edges are added to the graph. If $(V,E)$ is connected with
probability at least $(1-1/4n)$, then by a union bound over $\cup_{j=2}^k I_j
\bigcap E_L$ and $\cup_{j=2}^k I_j \bigcap E_R$, $\xi_1$ is true with
probability at least $(1-1/2n)$. The proof follows four steps.
\begin{enumerate}
\item[(S1):] \emph{All points have degree at least $8\ln n$.}
Since in each round each point's degree increases by $1$ with probability $1/2$, the expected degree of each point is $14\ln n$ after the first $28 \ln n$ rounds. A Chernoff-Hoeffding bound says that the probability that a fixed point has degree less than $8\ln n$ is at most $2 \exp(-2(6 \ln n)^2/(14 \ln n)) \leq 2 \exp(-5 \ln n) \leq 2/n^5 \leq 1/12n^2$. Then by a union bound, with probability at least $1-1/12n$ every one of the $n$ points has degree at least $8\ln n$.
\item[(S2):] \emph{Conditioned on (S1), any subset $S \subset V$ of $h < n/10$ points is connected to at least $\min\{h \ln n, n/10\}$ distinct points in $V \setminus S$.}
At least $9n/10$ points are outside of $S$, so each point in $S$ expects at least $(9/10) \cdot 8 \ln n \geq 7 \ln n$ of its edges to go to points outside of $S$. Each of these edges occurs in a different round, so they are independent. Thus we can apply a Chernoff-Hoeffding bound to say that the probability that a fixed point of $S$ has fewer than $3 \ln n$ edges leaving $S$ is at most $2 \exp(-2 (4 \ln n)^2/(8 \ln n)) = 2 \exp(-4 \ln n) = 2/n^4$. Thus the probability that some point in $S$ has fewer than $3 \ln n$ edges leaving $S$ is (since $h < n/10$) at most $1/5n^3$.
The $h$ points of $S$ together have at least $3 h \ln n$ edges leaving $S$, and we need to bound the probability that these edges reach fewer than $\min\{h \ln n, n/10\}$ distinct points. Since within each round the edges are disjoint and hence favor distinct endpoints, it is sufficient to analyze the case where all of the edges are independent, which can only increase the chance that they collide.
In either case ($h \ln n < n/10$ or $h \ln n \ge n/10$), each time an edge is chosen (until $n/10$ distinct vertices have been reached, at which point we can stop), at least $9/10$ of all possible endpoints lie outside the set of vertices already reached. So if we select the $3 h \ln n$ edges one at a time, each edge reaches a new distinct point with probability at least $9/10$, and we expect at least $(9/10) (3 h \ln n) > 2 h \ln n$ distinct points. Again by a Chernoff-Hoeffding bound, the probability that fewer than $h \ln n$ distinct points have been reached is at most $2 \exp(-2 (h \ln n)^2/(3 h \ln n)) \leq 2 \exp(-(2/3) h \ln n) < 2 \exp(-5 \ln n) \leq 2/n^5 \leq 1/5n^2$ (for $h \geq 8$).
Together the probability of these events not happening is at most $1/2n^2$.
\item[(S3):] \emph{There is a single connected component $S_G$ of size at least $n/10$.}
Start with any single point; by (S1) its degree is at least $8 \ln n$. Consider the set $S$ formed by these $h_1 = 8 \ln n$ neighbours, and apply (S2) to find another $h_1 \ln n = 8 \ln^2 n = h_2$ points; add these points to $S$. The process iterates, and at each round $h_i = 8 \ln^i n$, growing only from the $h_i$ newly added points. So, by round $i = \ln n$ the set $S = S_G$ has grown to size at least $n/10$. Taking a union bound over these $\ln n$ rounds shows that this process fails with probability at most $1/12n$.
\item[(S4):] \emph{All points in $V \setminus S_G$ are connected to $S_G$.}
In each round, each point $p \in V \setminus S_G$ is connected to $S_G$ with probability at least $1/20$. So, using the last $40 \ln n$ rounds (viewed as $2 \ln n$ sets of $20$ rounds) and a coupon-collector style argument, all points are connected to $S_G$ with probability at least $1-1/12n$.
\end{enumerate}
By a union bound, the probability that steps (S1), (S3), and (S4) are all successful is at least $1-1/4n$, proving our claim.
\end{proof}
\end{document}
\begin{document}
\title{Some independence results on reflection}
\baselineskip=22pt
\begin{abstract}
We prove that there is a certain degree of independence
between stationary reflection phenomena at different cofinalities.
\end{abstract}
\section{Introduction}
Recall that a stationary subset $S$ of a regular cardinal $\gk$
is said to {\em reflect} at $\ga < \gk$ if $\cf(\ga) > \go$
and $S \cap \ga$ is stationary in $\ga$. Stationary reflection
phenomena have been extensively studied by set theorists,
see for example \cite{Magidor}.
\begin{definition} Let $\gk = \cf(\gk) < \gl = \cf(\gl)$.
$T^\gl_\gk =_{\rm def} \setof{\ga < \gl}{\cf(\ga)=\gk}$.
If $m < n < \go$ then $S^n_m =_{\rm def} \setof{\ga < \ha_n}{\cf(\ga) = \ha_m}$.
\end{definition}
Baumgartner proved in \cite{JEB} that if $\gk$
is weakly compact, GCH holds, and $\go < \gd = \cf(\gd) < \gk$ then
forcing with the Levy collapse $Coll(\gd, <\gk)$ gives a model
where for all $\gr < \gd$ and all stationary $T \subseteq T^\gk_\gr$
the stationarity of $T$ reflects to some $\ga \in T^\gk_\gd$.
In this last result all the cofinalities $\gr < \gd$ are on the same
footing; we will build models where reflection holds for some
cofinalities but fails badly for others.
We introduce a more compact terminology for talking about reflection.
\begin{definition} Let $\gk = \cf(\gk) < \gl = \cf(\gl) < \gm = \cf(\gm)$.
\begin{enumerate}
\item $Ref(\gm, \gl, \gk)$ holds iff for every stationary $S \subseteq T^\gm_\gk$
there is a $\ga \in T^\gm_\gl$ with $S \cap \ga$ stationary in $\ga$.
\item $Dnr(\gm, \gl, \gk)$ (Dense Non-Reflection) holds iff for every
stationary $S \subseteq T^\gm_\gk$ there is a stationary
$T \subseteq S$ such that for no $\ga \in T^\gm_\gl$ is
$T \cap \ga$ stationary in $\ga$.
\end{enumerate}
\end{definition}
We will use a variation on an idea from Dzamonja and Shelah's
paper \cite{ShDz}.
\begin{definition}
Let $\gk = \cf(\gk) < \gl = \cf(\gl) < \gm = \cf(\gm)$.
$Snr(\gm, \gl, \gk)$ (Strong Non-Reflection) holds iff there is $F:T^\gm_\gk \lra \gl$
such that for all $\ga \in T^\gm_\gl$ there is $C \subseteq \ga$
closed and unbounded in $\ga$ with $F \restriction C \cap T^\gm_\gk$ strictly
increasing.
\end{definition}
As the name suggests, $Snr(\gm, \gl, \gk)$ is a strong
failure of reflection. It is easy to see that if Jensen's Global
$\square$ principle holds then $Snr(\gm, \gl, \gk)$ holds for all
$\gk < \gl < \gm$; in some sense the strong non-reflection
principle captures exactly that part of $\square$ which is useful for
building non-reflecting stationary sets.
\begin{lemma} $Snr(\gm,\gl,\gk) \implies Dnr(\gm,\gl,\gk)$.
\end{lemma}
\begin{proof} Let $S \subseteq T^\gm_\gk$ be stationary,
and let $F: T^\gm_\gk \lra \gl$ witness the strong non-reflection.
Let $T \subseteq S$ be stationary such that $F \restriction T$
is constant. Let $\ga \in T^\gm_\gl$ and let $C$ be a club in
$\ga$ on which $F$ is strictly increasing, then $C$ meets $T$
at most once and hence $T$ is non-stationary in $\ga$.
\end{proof}
We will prove the following results in the course of this paper.
\noindent {\bf Theorem} \ref{thm1}: If the existence of a weakly compact cardinal is consistent, then
$Ref(\ha_3, \ha_2, \ha_0) + Snr(\ha_3, \ha_2, \ha_1)$ is consistent.
\noindent {\bf Theorem} \ref{thm2}: If the existence of a measurable cardinal is consistent, then
$Ref(\ha_3, \ha_2, \ha_0) + Snr(\ha_3, \ha_2, \ha_1) + Snr(\ha_3, \ha_1, \ha_0)$
is consistent.
\noindent {\bf Theorem} \ref{thm3}: If the existence of a supercompact cardinal with a measurable
above is consistent, then
$Ref(\ha_3, \ha_2, \ha_0) + Snr(\ha_3, \ha_2, \ha_1) + Ref(\ha_3, \ha_1, \ha_0)$
is consistent.
\noindent {\bf Theorem} \ref{zfc1}: If $Snr(\gm,\gl,\gk)$ and $\gk < \gk^* < \gl$ then
$Snr(\gm, \gl, \gk^*)$.
\noindent {\bf Theorem} \ref{zfc2}: Let $\gk < \gl < \gm < \gn$. Then
$Ref(\gn,\gm,\gl) + Ref(\gn,\gl,\gk) \implies Ref(\gn,\gm,\gk)$
and $Ref(\gn, \gm, \gk) + Ref(\gm, \gl, \gk) \implies Ref(\gn, \gl, \gk)$.
\noindent {\bf Theorem} \ref{thm4}: If the existence of a weakly compact cardinal is consistent, then
$Ref(\ha_3, \ha_2, \ha_1) + Dnr(\ha_3, \ha_2, \ha_0)$ is consistent.
Theorems \ref{thm1}, \ref{thm2} and \ref{thm3} were proved by the first author. Theorems \ref{zfc1}
and \ref{thm4} were proved
by the second author, answering questions put to him by the first author.
Theorem \ref{zfc2} was noticed by the first author (but has probably been observed many times).
We would like to thank the anonymous referee for their very thorough reading of the first version
of this paper.
\section{Preliminaries}
We will use the idea of {\em strategic closure\/} of a partial
ordering (introduced by Gray in \cite{Gray}).
\begin{definition} Let $\FP$ be a partial ordering, and let $\gee$
be an ordinal.
\begin{enumerate}
\item
The game $G(\FP, \gee)$ is played by two players
I and II, who take turns to play elements $p_\ga$ of $\FP$
for $ 0< \ga < \gee$, with player I playing at odd stages
and player II at even stages (NB limit ordinals are even).
The rules of the game are that the sequence that is played
must be decreasing (not necessarily strictly decreasing),
the first player who cannot make a move loses, and player II
wins if play proceeds for $\gee$ stages.
\item
$\FP$ is {\em $\gee$-strategically closed\/} iff
player II has a winning strategy in $G(\FP, \gee)$.
\item
$\FP$ is {\em $<\gee$-strategically closed\/} iff for all $\gz < \gee$
$\FP$ is $\gz$-strategically closed.
\end{enumerate}
\end{definition}
Strategic closure has some of the nice features of the standard
notion of closure. For example a $(\gd+1)$-strategically closed
partial ordering will add no $\gd$-sequences, and the property of
being $(\gd + 1)$-strategically closed is preserved by iterations
with $\le \gd$-support (see \cite{CuDzSh} for more information
on this subject). We will need to know that under some circumstances
we can preserve a stationary set by forcing with a poset that has
a sufficient degree of strategic closure.
The following is well-known.
\begin{lemma} Let GCH hold, let $\gl =\cf(\gl)$ and $\gk = \gl^+$.
Let $\gd = \cf(\gd) \le \gl$, and suppose that $\FP$ is
$(\gd+1)$-strategically closed and $S$ is a stationary subset
of $T^\gk_\gd$. Then $S$ is still stationary in $V^\FP$.
\end{lemma}
\begin{proof} Let $p \forces \mbox{``$\dot C$ is club in $\gk$''}$.
Build
$\seq{X_\ga: \ga < \gk}$
a continuous increasing chain of elementary substructures
of some large $H_\gth$ such that everything relevant is in $X_0$,
$\card{X_\ga} = \gl$, ${}^{<\gl} X_\ga \subseteq X_{\ga+1}$.
We make the remark here that this would not be possible for
$\gl$ singular, and indeed the theorem can fail in that case
(see \cite{ShSuccSing} for details).
Now find some limit $\gg$ such that $X_\gg \cap \gk \in S$, clearly
$\cf(\gg) = \gd$ and so ${}^{<\gd}X_\gg \subseteq X_\gg$.
Let $\gb =_{\rm def} X_\gg \cap \gk$ and
fix $\seq{\gb_i: i < \gd}$ cofinal in $\gb$.
Since $\FP, \gd \in X_0 \subseteq X_\gg$ we can find in $X_\gg$ a winning
strategy $\gs$ for the game $G(\FP, \gd+1)$.
Now we build a sequence $\seq{p_\ga: \ga \le \gd}$ such that
\begin{enumerate}
\item For each even $\gb$, $p_\gb = \gs(\vec p \restriction \gb)$.
\item For $\gb < \gd$, $p_\gb \in \FP \cap X_\gg$.
\item For each $i < \gd$, there is $\gee$ such that
$\gb_i < \gee < \gb$ and $p_{2i+1} \forces \hat \gee \in \dot C$.
\end{enumerate}
We can keep going because $X_\gg \prec H_\gth$,
${}^{<\gd} X_\gg \subseteq X_\gg$
and $\gs$ is a winning strategy. At the end of the construction $p_\gd$
is a refinement of $p_0$ which forces that $\gb$ is a limit point
of $\dot C$, and we are done.
\end{proof}
\section{Some consistency results}
In this section we prove (starting from a weakly compact cardinal)
the consistency of
$ZFC + Ref(\ha_3, \ha_2, \ha_0) + Snr(\ha_3, \ha_2, \ha_1)$.
We also show that together with this we can have either
$Ref(\ha_3, \ha_1, \ha_0)$ or $Snr(\ha_3, \ha_1, \ha_0)$.
We begin by defining a forcing ${\FP_{\rm Snr}}$ to enforce $Snr(\ha_3, \ha_2, \ha_1)$.
\begin{definition} $p$ is a condition in ${\FP_{\rm Snr}}$ iff $p$ is a function from
a bounded subset of $S^3_1$ to $\ha_2$, and for every $\gg \in S^3_2$
with $\gg \le \sup(\dom(p))$ there is $C$ club in $\gg$ such that
$p \restriction C \cap S^3_1$ is strictly increasing.
${\FP_{\rm Snr}}$ is ordered by extension.
\end{definition}
It is easy to see that ${\FP_{\rm Snr}}$ is $\go_2$-closed, and in fact
that it is $\go_2$-directed closed.
\begin{lemma} ${\FP_{\rm Snr}}$ is $(\go_2+1)$-strategically closed.
\end{lemma}
\begin{proof} We describe a winning strategy for player II
in $G({\FP_{\rm Snr}}, \go_2+1)$. Suppose that $p_\ga$ is the condition
played at move $\ga$. Let $\gb$ be an even ordinal; at stage $\gb$
II plays as follows.
Define $q_\gb =_{\rm def} \bigcup_{\ga < \gb} p_\ga$ and
$\gr_\gb =_{\rm def} \sup(\dom(q_\gb))$.
Then II plays $p_\gb = q_\gb$, unless $\gb$ is a limit ordinal with $\cf(\gb) = \ha_1$,
in which case II plays $p_\gb = q_\gb \cup \{ (\gr_\gb, \gb ) \}$.
The strategy succeeds because when play reaches stage $\go_2$,
$\setof{\gr_\gb}{\gb < \go_2}$ is a club witnessing that
$p_{\go_2}$ is a condition.
\end{proof}
This shows that ${\FP_{\rm Snr}}$ preserves cardinals and cofinalities
up to $\ha_3$, from which it follows that
$V^{\FP_{\rm Snr}} \models Snr(\ha_3,\ha_2, \ha_1)$. If GCH holds then
$\card{{\FP_{\rm Snr}}} = \ha_3$, so ${\FP_{\rm Snr}}$ has the
$\ha_4$-c.c.~and all
cardinals are preserved.
Now we define in $V^{\FP_{\rm Snr}}$ a forcing $\FQ$.
This will enable us
to embed ${\FP_{\rm Snr}}$ into the Levy collapse $Coll(\go_2, \go_3)$ in
a particularly nice way.
\begin{definition} In $V^{\FP_{\rm Snr}}$ let $F:S^3_1 \lra \ha_2$ be the
function added by ${\FP_{\rm Snr}}$. $q \in \FQ$ iff $q$ is a closed
bounded subset of $\ha_3$, the order type of $q$ is less than
$\ha_2$, and $F \restriction \lim(q) \cap S^3_1$ is strictly
increasing.
\end{definition}
The aim of $\FQ$ is to add a club of order type $\go_2$ on
which $F$ is increasing. It is clear that $\FQ$ is countably
closed and collapses $\go_3$.
\begin{lemma} If GCH holds, ${\FP_{\rm Snr}} * \dot \FQ$ is
equivalent to $Coll(\ha_2, \ha_3)$.
\end{lemma}
\begin{proof}
Since ${\FP_{\rm Snr}} * \dot \FQ$ has cardinality $\ha_3$ and collapses
$\ha_3$, it will suffice to show that it has an $\ha_2$-closed
dense subset.
To see this look at those conditions $(p, c)$ where $c \in V$,
and $\max(c) = \sup(\dom(p))$. It is easy to see that this
set is dense and $\ha_2$-closed.
\end{proof}
\begin{theorem} \label{thm1} Let $\gk$ be weakly compact, let GCH hold.
Define a two-step iteration by $\FP_0 =_{\rm def} Coll(\go_2, < \gk)$ and
$\FP_1 =_{\rm def} (\FP_{\rm Snr})_{V^{\FP_0}}$. Then
$V^{\FP_0 * \FP_1} \models Ref(\ha_3, \ha_2, \ha_0)
+ Snr(\ha_3, \ha_2, \ha_1)$.
\end{theorem}
\begin{proof} We will first give the proof for the case when $\gk$
is measurable and then show how to modify it for the case when
$\gk$ is just weakly compact.
Assuming that $\gk$ is measurable, let $j: V \lra M$ be an
elementary embedding into a transitive inner model with
critical point $\gk$, where ${}^\gk M \subseteq M$.
Notice that by elementarity and the closure of $M$,
$j(\FP_0) = Coll(\go_2, <j(\gk))_M = Coll(\go_2, <j(\gk))_V$.
Let $G$ be $\FP_0$-generic over $V$
and let $H$ be $\FP_1$-generic over $V[G]$.
We already know that $V[G*H] \models Snr(\ha_3, \ha_2, \ha_1)$,
so let us assume that in $V[G*H]$ we have $S$ a stationary subset
of $S^3_0$. To prove that $S$ reflects we will build a generic
embedding with domain $V[G*H]$ extending $j$. Notice that since
$\card{\FP_0 * \FP_1} = \gk$
we can prove (by looking at canonical names)
that $V[G*H] \models {}^\gk M[G*H] \subseteq M[G*H]$, so in particular
we have $S \in M[G*H]$.
We start by forcing with $\FQ$ over $V[G*H]$ to get a generic
object $I$. $H*I$ is generic over $V[G]$ for
$({\FP_{\rm Snr}} * \dot \FQ)_{V[G]}$, which is equivalent to $Coll(\go_2, \gk)$,
so we can regard $G*H*I$ as being generic for $Coll(\go_2, \le \gk)$.
Now let $J$ be $Coll(\go_2, [\gk, j(\gk)))$-generic over
$V[G*H*I]$, then $G*H*I*J$ is $j(\FP_0)$-generic over $V$ (so
a fortiori over $M$) and $j``G \subseteq G*H*I*J$ so that we
can lift to get $j:V[G] \lra M[G*H*I*J]$.
It remains to lift $j$ onto $V[G*H]$, for which we need to force
a generic $K$ for $j(\FP_1)$ with the property that $j``H \subseteq K$.
We will get $K$ by constructing a {\em master condition\/} in $j(\FP_1)$
(that is, a condition refining all the conditions in $j``H$)
and forcing below that master condition. A natural candidate for a master condition
is $F =_{\rm def} \bigcup j``H$, where it is easily seen
(since $\crit(j)=\gk$ and $j \restriction \FP_1 = id$) that
$F$ is the generic function from $\gk$ to $\ha_2$ added by
$H$. The models $V[G]$ and $M[G*H*I*J]$ agree in their computations
of $T^\gk_{\ha_1}$ and $T^\gk_{\ha_2}$, so $F$ is increasing on a club
at all the relevant points below $\gk$.
Now $\gk = (\ha_3)_{V[G]}$ is an ordinal of cofinality $\ha_2$
in $M[G*H*I*J]$, but there is no problem here because $I$ has
introduced a club in $\gk$ on which $F$ is increasing. Hence
$F$ is a condition in $j(\FP_1)$ and we can force to get
$K \supseteq j``H$ as desired.
We claim that $S$ is still stationary in $M[G*H*I*J*K]$.
$I$ is generic for countably closed forcing, $J$ is generic
for $\ha_2$-closed forcing and $K$ is generic for
$\ha_2$-closed forcing so that $S$ (being a set of cofinality
$\go$ ordinals) remains stationary. Now we argue as usual
that since $j(S) \cap \gk = S$ and $\cf(\gk) = \go_2$
in $M[G*H*I*J*K]$, there must exist $\ga < \gk$ in
$V[G*H]$ such that $\cf(\ga) = \go_2$ and $S \cap \ga$
is stationary in $\ga$.
We promised at the start of this proof that we would show how to weaken the
assumptions on $\gk$ from measurability to weak compactness. We will actually
sketch two arguments, based on two well-known characterisations of
weak compactness. See Hauser's paper \cite{Kai} for detailed accounts
of some similar arguments.
\begin{enumerate}
\item $\gk$ is weakly compact iff for every $A \subseteq V_\gk$ and every
$\Pi^1_1$ formula $\phi$,
$V_\gk \models \phi(A) \implies \exists \ga \; V_\ga \models \phi(A \cap V_\ga)$.
\item $\gk$ is weakly compact iff $\gk$ is strongly inaccessible and for every
transitive $M$ such that $\card{M} = \gk$, ${}^{<\gk} M \subseteq M$ and
$M$ models enough set theory, there is a transitive set $N$ and an elementary
embedding $j: M \lra N$ with $\crit(j) = \gk$.
\end{enumerate}
\noindent Argument 1: $\FP_0 * \FP_1 \subseteq V_\gk$, and so if $\dot S$
is a name for a stationary subset of $\gk$ we can represent it by
$S^* = \setof{(p, \ga)}{p \forces \hat \ga \in \dot S} \subseteq V_\gk$.
The fact that $S$ is forced to be stationary can be written as a $\Pi^1_1$
sentence (the universal second-order quantification is over names for clubs).
Using the first characterisation given above we can find inaccessible $\ga < \gk$ such that
$S^* \cap V_\ga$ is a $Coll(\go_2, < \ga) * (\FP_{Snr})_{V^{Coll(\go_2, <\ga)}}$
name for a stationary set.
Now the argument is just like the one from a measurable, only with $\ga$
playing the role of $\gk$ and $\gk$ replacing $j(\gk)$. We see that $S$
has an initial segment $S \cap \ga$ which is stationary in a certain
intermediate generic extension, and just need to check that $S \cap \ga$
remains stationary and that $\ga$ becomes a point of cofinality $\ha_2$.
This is routine.
\noindent Argument 2: Given a name $\dot S$ for a stationary subset of $S^3_0$,
build $\dot S$ and $\FP_0 * \FP_1$ into an appropriate model $M$ of size
$\gk$. Get $j$ as in the second characterisation as above.
Repeat (mutatis mutandis) the argument from a measurable.
\end{proof}
Having proved Theorem \ref{thm1}, it is natural to ask whether there
is any connection between reflection to points of cofinality $\go_2$
and reflection to points of cofinality $\go_1$. The following results provide
a partial (negative) answer.
\begin{theorem} \label{thm2}
Con($Ref(\ha_3,\ha_2,\ha_0) +
Snr(\ha_3,\ha_2,\ha_1) + Snr(\ha_3,\ha_1,\ha_0)$) follows from the consistency
of a measurable cardinal.
\end{theorem}
\begin{proof} Let $\gk$ be measurable. Without loss of generality
GCH holds (as we can move to the inner model $L[\gm]$). We will
sketch a proof that we can force to get $Snr(\gk, \ha_1, \ha_0)$ without
destroying the measurability of $\gk$. A more detailed argument
for a very similar result is given in \cite{CuDzSh} (alternatively
one can argue that because $\square$ holds in $L[\gm]$, $Snr(\gk, \ha_1, \ha_0)$
is already true in that model). Let $j: V \lra M$ be the ultrapower
map associated with some normal measure $U$ on $\gk$.
We will do a reverse Easton iteration of length $\gk +2$, forcing at every
regular cardinal $\ga \le \gk^+$ with $\FQ_\ga$, where $\FQ_\ga$ is the
natural forcing to add a witness to $Snr(\ga, \ha_1, \ha_0)$ by initial segments.
An easy induction shows that $\FQ_\ga$ is $<\ga$-strategically closed, the
point being that the witnesses added below $\ga$ can be used to produce
strategies in the game played on $\FQ_\ga$. The argument is exactly parallel to
the proof of Lemma 6 in \cite{CuDzSh}.
Let us break up the generic as $G * g * h$, where $G$ is $\FP_\gk$-generic,
$g$ is $\FQ_\gk$-generic and $h$ is $\FQ_{\gk^+}$-generic.
The key point is that $j(\FP_\gk)/G * g *h$ is $\gk^+$-strategically closed
in $V[G * g *h]$, so that by GCH we can build $H \in V[G * g *h]$ which is
$j(\FP_\gk)/G * g *h$-generic over $M[G * g * h]$.
Since $j``G \subseteq G * g *h * H$, we can lift $j$ to get
$j: V[G] \lra M[G * g * h * H]$. It is easy to see that $\bigcup j``g (= \bigcup g)$
will serve as a master condition, so using GCH again we may find $g^+ \in V[G * g * h]$
such that $j``g \subseteq g^+$ and $g^+$ is $j(\FQ_\gk)$-generic over $M[G * g * h * H]$.
Finally we claim that $j``h$ generates a $j(\FQ_{\gk^+})$-generic filter over
$M[G * g * h * H * g^+]$, because $\FQ_{\gk^+}$ is distributive enough and
$M[G * g * h * H * g^+] = \setof{j(F)(\gk)}{\dom(F) = \gk, F \in V[G*g]}$.
Hence in $V[G * g * h]$ we can lift $j$ onto $V[G * g * h]$, so the
measurability of $\gk$ is preserved.
Now we just repeat the construction (from a measurable) of Theorem
\ref{thm1}, and claim that in the final model $Snr(\ha_3,\ha_1,\ha_0)$ holds.
The point is that if $F: T^\gk_{\ha_0} \lra \go_1$ witnesses the
truth of $Snr(\gk, \ha_1, \ha_0)$, then in the final model $F$ witnesses
$Snr(\ha_3, \ha_1, \ha_0)$ because the forcing from Theorem 1 does not
change $T^\gk_{\ha_0}$ or $T^\gk_{\ha_1}$.
\end{proof}
We can also go to the opposite extreme.
\begin{theorem} \label{thm3}
Let $\gk$ be $\gl$-supercompact, where $\gl > \gk$ and $\gl$ is
measurable. Let GCH hold. Then $Ref(\ha_3,\ha_2,\ha_0) + Snr(\ha_3,\ha_2,\ha_1)
+ Ref(\ha_3,\ha_1,\ha_0)$ holds
in some forcing extension.
\end{theorem}
\begin{proof} We will start by forcing with $\FP =_{\rm def}Coll(\go_1, <\gk)$,
after which $\gk$ is $\go_2$ and $\gl$ is still measurable.
Then we will do the construction of Theorem \ref{thm1}, that is we
force with $(Coll(\gk, < \gl) * \FP_{\rm Snr})_{V^\FP}$.
Let $\FP_0 = Coll(\gk, < \gl)_{V^\FP}$
and $\FP_1 = (\FP_{Snr})_{V^{\FP * \FP_0}}$.
We need to check that $Ref(\ha_3,\ha_1,\ha_0)$ holds in the final model. To see
this fix $j: V \lra M$ such that $\crit(j) = \gk$, $j(\gk) > \gl$
and ${}^\gl M \subseteq M$. Let $G$, $H$ and $I$ be the generics
for $\FP$, $\FP_0$ and $\FP_1$ respectively.
Since $\FP_0 * \FP_1$ is countably closed and has size $\gl$, we may
find an embedding $i: \FP * \FP_0 * \FP_1 \lra j(\FP)$
such that $i \restriction \FP = id$ and $j(\FP)/i``(\FP * \FP_0 * \FP_1)$
is countably closed. Let $J$ be $j(\FP)/i``(\FP * \FP_0 * \FP_1)$-generic,
then we can lift $j$ in the usual way to get
$j: V[G] \lra M[G * H * I * J]$. Since $\FP_0 * \FP_1$ is a
$\gk$-directed-closed forcing notion of size $\gl$, we may find a lower
bound for $j`` (H*I)$ in $j(\FP_0 * \FP_1)$ and use it as a
master condition, forcing below it to obtain a generic $K$ such that $j`` (H*I) \subseteq K$
and lifting $j$ to $j: V[G * H * I] \lra M[G * H * I * J *K]$.
Now suppose that in $V[G * H * I]$ we have $T \subseteq S^3_0$
a stationary set. If $\gm = \bigcup j``\gl$ then we may
argue as usual that $T \in M[G*H*I]$ and that
$M[G*H*I] \models \hbox{$j(T) \cap \gm$ is stationary in $\gm$}$.
Since $J * K$ is generic for countably closed forcing
and $j(T) \cap \gm \subseteq T^\gm_{\ha_0}$ this will still be true
in $M[G * H * I * J *K]$, and since $\cf(\gm) = \ha_1$
in this last model we have by elementarity that $T$ reflects
to some point in $S^3_1$.
\end{proof}
\section{Some ZFC results}
In the light of the results from the last section it is natural to ask
about the consistency of $Ref(\ha_3,\ha_2,\ha_1) + Snr(\ha_3,\ha_2,\ha_0)$. The first author
showed by a rather indirect proof that this is impossible, and the
second author observed that there is a simple reason for this.
\begin{theorem} \label{zfc1}
If $Snr(\gm, \gl, \gk)$ and $\gk < \gk^* = \cf(\gk^*) < \gl$
then $Snr(\gm, \gl, \gk^*)$.
\end{theorem}
\begin{proof} Let $F: T^\gm_\gk \lra \gl$ witness $Snr(\gm, \gl, \gk)$.
Define $F^*: T^\gm_{\gk^*} \lra \gl$ by
\[
F^*: \gs \longmapsto \min \setof { \bigcup_{\ga \in C \cap T^\gm_\gk} F(\ga)}
{\hbox{$C$ club in $\gs$}}.
\]
We claim that $F^*$ witnesses $Snr(\gm, \gl, \gk^*)$. To see this, let $\gd \in T^\gm_\gl$
and fix $D$ club in $\gd$ such that $F \restriction D \cap T^\gm_\gk$ is strictly
increasing.
Let $\gs \in \lim(D) \cap T^\gm_{\gk^*}$, and observe that
$F^*(\gs) = \bigcup_{\ga \in D \cap \gs \cap T^\gm_\gk} F(\ga)$,
because for any club $C$ in $\gs$ we have
\[
\bigcup_{\ga \in C \cap T^\gm_\gk} F(\ga)
\ge
\bigcup_{\ga \in C \cap D \cap T^\gm_\gk} F(\ga)
=
\bigcup_{\ga \in D \cap \gs \cap T^\gm_\gk} F(\ga).
\]
It follows that $F^* \restriction \lim(D) \cap T^\gm_{\gk^*}$ is strictly
increasing, because if $\gs_0, \gs_1 \in \lim(D) \cap T^\gm_{\gk^*}$ with
$\gs_0 < \gs_1$ and $\gb$ is any point in $\lim(D) \cap T^\gm_\gk \cap (\gs_0, \gs_1)$
then
$F^*(\gs_0) \le F(\gb) < F^*(\gs_1)$.
\end{proof}
We also take the opportunity to record some other easy remarks,
which put limits on the extent of the independence between different forms of
reflection.
\begin{theorem} \label{zfc2}
Let $\gk < \gl < \gm < \gn$ be regular. Then
\begin{enumerate}
\item $Ref(\gn, \gm, \gl) + Ref(\gn, \gl, \gk) \implies Ref(\gn, \gm, \gk)$.
\item $Ref(\gn, \gm, \gk) + Ref(\gm, \gl, \gk) \implies Ref(\gn, \gl, \gk)$.
\end{enumerate}
\end{theorem}
\begin{proof}
For the first claim: let $S \subseteq T^\gn_\gk$ be stationary, and
let us define $T = \setof{\ga \in T^\gn_\gl}{\hbox{$S \cap \ga$ is stationary}}$.
We claim that $T$ is stationary in $\gn$. To see this suppose $C$ is club in $\gn$
and disjoint from $T$, and consider $C \cap S$; this set must reflect at some
$\gg \in T^\gn_\gl$, but then on the one hand $S \cap \gg$ is stationary (so
$\gg$ is in $T$) while on the other hand $C$ is unbounded in $\gg$ (so $\gg \in C$),
contradicting the assumption that $C$ and $T$ are disjoint.
Now let $\gd \in T^\gn_\gm$ be such that $T \cap \gd$ is stationary in $\gd$.
We claim that $S$ reflects at $\gd$. For if $D$ is club in $\gd$ then
there is $\gg \in T \cap \lim(D)$ by the stationarity of $T \cap \gd$,
and now since $D \cap \gg$ is club in $\gg$ and $S \cap \gg$ is stationary in
$\gg$ there is $\gb \in D \cap S$.
This proves the first claim.
For the second claim: let $S \subseteq T^\gn_\gk$ be stationary, and let
$\gd \in T^\gn_\gm$ be such that $S \cap \gd$ is stationary in
$\gd$. Let $f: \gm \lra \gd$ be continuous increasing and cofinal in
$\gd$, and let $S^* = \setof{\ga < \gm}{f(\ga) \in S}$. Then
$S^*$ is stationary in $\gm$, and we can find $\gb \in T^\gm_\gl$
such that $S^* \cap \gb$ is stationary in $\gb$. Now $f(\gb) \in T^\gn_\gl$
and $S \cap f(\gb)$ is stationary in $f(\gb)$. This proves the
second claim.
\end{proof}
\section{More consistency results}
From the results in the previous section we saw in particular
that we cannot have $Ref(\ha_3, \ha_2, \ha_1) + Snr(\ha_3, \ha_2, \ha_0)$.
In this section we will see that $Ref(\ha_3, \ha_2, \ha_1) + Dnr(\ha_3, \ha_2, \ha_0)$
is consistent.
We will need some technical definitions
and facts before we can start the main
proof.
\begin{definition} Let $S$ be a stationary subset of $\ha_3$. We define
notions of forcing $\FP(S)$, $\FQ(S)$
and $\FR(S)$.
\begin{enumerate}
\item $c$ is a condition in $\FP(S)$ iff $c$ is a closed bounded subset of $\ha_3$
such that $c \cap S = \emptyset$.
\item $d$ is a condition in $\FQ(S)$ iff $d$ is a function with $\dom(d) < \ha_3$,
$d: \dom(d) \lra 2$, $d(\gg) = 1 \implies \gg \in S$, and
for all $\ga \le \dom(d)$ if $\cf(\ga) > \go$ then there is $C \subseteq \ga$
closed unbounded in $\ga$ such that $\gg \in C \implies d(\gg) = 0$.
\item $e$ is a condition in $\FR(S)$ iff $e$ is a closed bounded subset
of $\ha_3$ such that for every point $\ga \in \lim(e)$ with $\cf(\ga) > \go$
the set $S \cap \ga$ is non-stationary in $\ga$.
\end{enumerate}
In each case the conditions are ordered by end-extension.
\end{definition}
The aims of these various forcings are respectively to kill the stationarity
of $S$ ($\FP(S)$), to add a non-reflecting stationary subset of $S$
($\FQ(S)$), and to make $S$ non-reflecting on a closed unbounded set of
points ($\FR(S)$). Notice that for some choices of $S$ the definitions of
$\FP(S)$ and $\FR(S)$ may not behave very well, for example if $S = \go_3$
then $\FP(S)$ is empty and $\FR(S)$ only contains conditions of countable
order type. Notice also that $\FQ(S)$ and $\FR(S)$ are countably
closed.
\begin{lemma} \label{fred}
Let GCH hold. Let $S \subseteq S^3_0$ and suppose that there is
a club $C$ of $\go_3$ such that $S \cap \ga$ is non-stationary
for all $\ga \in C$ with $\cf(\ga) > \go$. Let $T$ be a stationary subset
of $S^3_1$. Then $\FP(S)$ adds no $\go_2$-sequences of ordinals, and
also preserves the stationarity of $T$.
\end{lemma}
\begin{proof} First we prove that $\FP(S)$ is $\go_2$-distributive.
Let $\seq{D_\ga: \ga < \go_2}$ be a sequence of dense sets in $\FP(S)$,
and let $c \in \FP(S)$ be a condition. Fix some large regular cardinal
$\gth$ and let $c, C, \vec D, S \in X \prec H_\gth$ where $\card{X} = \go_2$,
${}^{\go_1} X \subseteq X$. Let $\gg$ be the ordinal $X \cap \go_3$,
then $\cf(\gg) = \go_2$ by the closure of $X$. By elementarity
it follows that $C$ is unbounded in $\gg$, so that $\gg \in C$
and $S \cap \gg$ is non-stationary.
Fix $B \subseteq \gg$
closed unbounded in $\gg$ such that $B \cap S = \emptyset$ and
$B$ has order type $\go_2$.
Now we build a chain of conditions $c_\ga \in \FP(S) \cap X$ for $\ga < \go_2$
such that $c_0 \le c$, $c_{2\gb+1} \in D_\gb$ and $\max(c_{2\gb}) \in B$;
we can continue at each limit stage because $B$ is disjoint from
$S$ and $X$ is sufficiently closed. Finally we let
$d =_{def} \bigcup_{\ga < \go_2} c_\ga \cup \{\gg\}$,
then $d \le c$ and $d \in D_\gb$ for all $\gb < \go_2$. This
shows that $\FP(S)$ is $\go_2$-distributive.
The argument for the preservation of stationarity is similar.
Let $\dot E$ be a $\FP(S)$-name for a closed unbounded subset
of $\go_3$ and let $c \in \FP(S)$ be a condition.
This time build $X$ such that $c, C, \dot E, S \in X \prec H_\gth$
where $\card{X} = \go_1$, ${}^\go X \subseteq X$ and
$\gd =_{def} \sup(X \cap \go_3) \in T$.
Again $\gd \in C$, so $S \cap \gd$ is nonstationary.
Choose $B \subseteq \gd$
closed unbounded of order type $\go_1$ with $B \cap S = \emptyset$
and build a chain of conditions $c_\ga \in \FP(S) \cap X$ such that $c_0 \le c$,
$\max(c_{2\gb}) \in B$ and $c_{2\gb+1}$ forces that
$\dot E \cap (\max(c_{2\gb}), \max(c_{2\gb+1})) \neq \emptyset$.
Finally if $d =_{def} \bigcup_{\ga<\go_1} c_\ga \cup \{\gd\}$
then $d \le c$ and $d \forces \hat \gd \in \hat T \cap \dot E$.
\end{proof}
Now we describe a certain kind of forcing iteration.
It will transpire that all iterations of this type are
$\go_4$-c.c.~and $(\go_2+1)$-strategically closed, so that
in particular all cardinalities and cofinalities are preserved.
\begin{definition}
Fix $F: \go_4 \times \go_3 \lra \go_4$ such that
for all $\gb < \go_4$ the map $i \longmapsto F(\gb, i)$
is a surjection from $\go_3$ onto $\gb$.
$\FP_\gb$ is a {\em nice iteration\/} iff
\begin{enumerate}
\item $\gb \le \go_4$.
\item $\FP_\gb$ is an iteration of length $\gb$ with $\le \go_2$-supports.
\item $\FQ_0 = \{ 0 \}$.
\item $\FQ_{2\gg+1}$ is $\FQ(\dot S_\gg)_{V^{\FP_{2\gg+1}}}$ where
$\dot S_\gg$ is some $\FP_{2\gg+1}$-name for a stationary subset
of $S^3_0$. Let $S^*_\gg$ be the non-reflecting stationary
subset of $S_\gg$ which is added by $\FQ_{2\gg+1}$.
\item $\FQ_{2\gg}$ is $\FR(\dot R_\gg)$ where $R_\gg$ is the diagonal
union of $\seq{S^*_{F(\gg, i)}: i < \go_3}$. That is
$R_\gg = \setof{\gd \in S^3_0}{\exists i < \gd \; \gd \in S^*_{F(\gg, i)}}$.
\end{enumerate}
\end{definition}
It is clear that an initial segment of a nice iteration is nice.
Also every final segment of a nice iteration is countably closed, so that all the
sets $S^*_\gg$ remain stationary throughout the iteration.
The following remark will be useful later.
\begin{lemma} \label{keypoint}
Let $\gg < \gd < \go_4$.
Then $R_\gg - R_\gd$ is non-stationary.
\end{lemma}
\begin{proof} Let $C$ be the closed and unbounded set of $i < \go_3$ such that
\[
\setof{F(\gg, j)}{ j < i} = \setof {F(\gd, j)}{ j < i} \cap \gg.
\]
Let $i \in C \cap R_\gg$. Then for some $j < i$ we have
$i \in S^*_{F(\gg, j)}$, and by the definition of $C$
there is $k < i$ such that $F(\gg, j) = F(\gd, k)$.
So $i \in S^*_{F(\gd, k)}$, $i \in R_\gd$, and we have
proved that $C \cap R_\gg \subseteq R_\gd$.
\end{proof}
We define a certain subset of $\FP_\gb$, which we call
$\FP^*_\gb$.
\begin{definition}
If $\FP_\gb$ is a nice iteration then $\FP^*_\gb$ is the set of conditions
$p \in \FP_\gb$ such that
\begin{enumerate}
\item $p(\gg) \in \hat V$
(that is, $p(\gg)$ is a canonical name for an object in $V$)
for all $\gg \in \dom(p)$.
\item There is an ordinal $\gr(p)$ such that
\begin{enumerate}
\item $2\gd \in \dom(p) \implies \max(p(2\gd)) = \gr(p)$.
\item $2\gd+1 \in \dom(p) \implies \dom(p(2\gd+1)) = \gr(p)+1$.
\item $2\gd+1 \in \dom(p) \implies p(2\gd+1)(\gr(p)) = 0$.
\end{enumerate}
\item If $2\gd \in \dom(p)$ then $\forall i < \gr(p) \; 2F(\gd,i) +1 \in \dom(p)$.
\item If $2\gd +1 \in \dom(p)$ then $p \restriction 2\gd+1$ decides
$S_\gd \cap \gr(p)$.
\end{enumerate}
\end{definition}
\begin{lemma} If $p \in \FP^*_\gb$ and
$2\gd \in \dom(p)$ then $p \restriction 2 \gd $ forces that $\gr(p) \notin R_\gd$.
\end{lemma}
\begin{proof} Let $\gr = \gr(p)$.
By the definition of $\FP^*_\gb$, we see that
$2 F(\gd, i) + 1 \in \dom(p)$ and
$p(2 F(\gd, i) + 1)(\gr)=0$ for all $i < \gr$. This means that
$p \forces \gr \notin \dot S^*_{F(\gd, i)}$ for all $i < \gr$,
which is precisely to say $p \restriction 2\gd \forces \gr \notin \dot R_\gd$.
\end{proof}
\begin{lemma}
\label{contclos}
Let $\FP_\gb$ be a nice iteration.
Let $\gd \le \go_2$ be a limit ordinal and let $\seq{p_\gg: \gg < \gd}$
be a decreasing sequence of conditions from $\FP^*_\gb$
such that $\seq{\gr(p_\gg): \gg < \gd}$ is continuous and
increasing.
Define $q$ by setting $\dom(q) = \bigcup_{\gg < \gd} \dom(p_\gg)$,
$\gr = \bigcup_{\gg < \gd} \gr(p_\gg)$,
$q(2 \gep) = \bigcup \setof{p_{\gg}(2 \gep)}{2 \gep \in \dom(p_\gg)} \cup \{ \gr \}$,
$q(2 \gep + 1) = \bigcup \setof{p_{\gg}(2 \gep+1)}{2 \gep + 1 \in \dom(p_\gg)} \cup \{(\gr, 0) \}$.
Then $q \in \FP^*_\gb$ and $\gr(q) = \gr$.
\end{lemma}
\begin{proof} Clearly it is enough to show that $q \in \FP_\gb$. Most of this
is routine; the key points are that $q \restriction 2 \gep$
forces that $R_\gep \cap \gr$ is non-stationary, and that
$q \restriction 2 \gep + 1$ forces that $S^*_\gep \cap \gr$ is
non-stationary.
For the first point, observe that for all
sufficiently large $\gg < \gd$ we have $2 \gep \in \dom(p_\gg)$, so that
by the last lemma
$p_\gg \restriction 2 \gep \forces \gr(p_\gg) \notin R_\gep$; since
$\seq{\gr(p_\gg): \gg < \gd}$ is continuous and $q \restriction 2 \gep$
refines $p_\gg \restriction 2 \gep$, this implies that $q \restriction
2 \gep$ forces that $R_\gep$ is not stationary in $\gr$.
Similarly, $2 \gep + 1 \in \dom(p_\gg)$ for all large $\gg$, so that
$p_\gg(2 \gep + 1)(\gr(p_\gg)) = 0$ for all large $\gg$. It follows
immediately that $q \restriction (2 \gep + 1)$ forces that $S^*_\gep \cap \gr$
is non-stationary.
\end{proof}
\begin{lemma}
$\FP^*_\gb$ is dense in $\FP_\gb$, and $\FP_\gb$ is $(\go_2 +1)$-strategically
closed.
\end{lemma}
\begin{proof} The proof is by induction on $\gb$. We prove first that
$\FP^*_\gb$ is dense.
\noindent $\gb =0$: there is nothing to do.
\noindent $\gb = 2\ga + 2$:
fix $p \in \FP_\gb$. Since $\FP_{2\ga+1}$ is strategically closed
and $\FP^*_{2\ga+1}$ is dense we may find $q_1 \in \FP^*_{2\ga+1}$
such that $q_1 \le p \restriction (2\ga+1)$,
$q_1$ decides $p(2\ga+1)$, and $\gr(q_1) > \dom(p(2\ga+1))$.
Now we build a decreasing $\go$-sequence $\vec q$ of elements of $\FP^*_{2\ga+1}$ such that
$\gr(q_n)$ is increasing and $q_{n+1}$ decides $\dot S_\ga \cap \gr(q_n)$; at each stage we use
the strategic closure of $\FP_{2\ga+1}$ and the fact that $\FP^*_{2\ga+1}$ is a dense subset.
After $\go$ steps we define $q$ as follows; let $\gr = \bigcup \gr(q_n)$,
$\dom(q) = \bigcup_n \dom(q_n) \cup \{2\ga+1\}$, and
\begin{enumerate}
\item For $\gg < \ga$, $q(2\gg+1) = \bigcup_n q_n(2\gg+1) \cup \{(\gr, 0)\}$.
\item For $\gg \le \ga$, $q(2\gg) = \bigcup_n q_n(2\gg) \cup \{\gr\}$.
\item $q(2\ga+1)(i) = p(2\ga+1)(i)$ if $i \in \dom (p(2\ga+1))$, and
$0$ otherwise.
\end{enumerate}
By the last lemma, $q \restriction 2 \ga +1 \in \FP^*_{2 \ga +1}$.
It is routine to check that $q \in \FP^*_{2 \ga + 2}$
and $\gr(q) = \gr$.
\noindent $\gb = 2\ga+1$: this is exactly like the last case,
except that now we demand
$\forall i < \gr(q_n) \; 2 F(\ga, i) + 1 \in \dom(q_{n+1})$.
\noindent $\gb$ is limit, $\cf(\gb) = \go_1$: Fix $\seq{\gb_i: i < \go_1}$
which is continuous increasing and cofinal in $\gb$.
Let $p \in \FP_\gb$. Find $q_0 \in \FP^*_{\gb_0}$ such that
$q_0 \le p \restriction \gb_0$ and set
$p_0 = q_0 \frown p \restriction [\gb_0, \gb)$.
Now we define $q_i$ and $p_i$ by induction for $i \le \go_1$.
\begin{enumerate}
\item Choose $q_{i+1} \le p_i \restriction \gb_{i+1}$
with $q_{i+1} \in \FP^*_{\gb_{i+1}}$, and then define
$p_{i+1} = q_{i+1} \frown p \restriction [\gb_{i+1}, \gb)$.
\item For $i$ limit let $\gr_i = \bigcup_{j<i} \gr(q_j)$,
$q_i(2\gg+1) = \bigcup_{j<i} q_j(2\gg+1) \cup \{ (\gr_i, 0) \}$,
$q_i(2\gg) = \bigcup_{j<i} q_j(2\gg) \cup \{\gr_i\}$.
Then let $p_i = q_i \frown p \restriction [\gb_i, \gb_{i+1})$.
\end{enumerate}
For $i < \go_1$ it is easy to see that $p_i$, $q_i$ are conditions.
We claim that $q = q_{\go_1} \in \FP^*_\gb$. The only subtle point is to see
that $q \in \FP_\gb$. Let $2 \gd +1 \in \dom(q)$. Then for all
large $i$ we know $2 \gd +1 < \gb_i$, $2 \gd +1 \in \dom(q_i)$,
so that in particular $q_i(2 \gd+1)(\gr_i) = 0$ for all large $i$.
This means that $q(2 \gd + 1)$ is the characteristic function of a set
which does not reflect at $\gr$, so is a legitimate condition in
$\FQ_{2 \gd +1}$. Similarly if $2 \gd \in \dom(q)$ then
for all large $i$ we see that $q_i \restriction 2 \gd \forces \gr_i \notin R_\gd$,
so that $q \restriction 2 \gd$ forces that the stationarity of $R_\gd$ does
not reflect at $\gr$.
\noindent $\gb$ is a limit, $\cf(\gb) = \go$ or $\go_2$:
similar to the cofinality $\go_1$ case.
\noindent $\cf(\gb) = \go_3$: easy because $\FP_\gb$ is the direct
limit of the sequence $\seq{\FP_\gg: \gg < \gb}$.
This concludes the proof that $\FP^*_\gb$ is dense. It is now easy to
see that $\FP_\gb$ is $(\go_2 + 1)$-strategically closed; the strategy
for player II is simply to play into the dense set $\FP^*_\gb$ at
every successor stage, and to play a lower bound constructed as in
Lemma \ref{contclos} at each limit stage.
\end{proof}
\begin{lemma}
Let $\FP_{2 \gg}$ be a nice iteration of length less
than $\go_4$. Then $\FP_{2 \gg} * \FP(\dot R_\gg)$
is $(\go_2 + 1)$-strategically closed.
\end{lemma}
\begin{proof} This is just like the last lemma.
\end{proof}
Notice that the effect of forcing with $\FP(R_\gg)$ is to destroy the
stationarity of all the sets $S^*_\gd$ for $\gd < \gg$.
We are now ready to prove the main result of this section.
\begin{theorem} \label{thm4} If the existence of a weakly compact cardinal is consistent,
then $Ref(\ha_3, \ha_2, \ha_1) + Dnr(\ha_3, \ha_2, \ha_0)$ is consistent.
\end{theorem}
\begin{proof} As in the proof of Theorem \ref{thm1}, we will first give a
proof assuming the consistency of a measurable cardinal and then show
how to weaken the assumption to the consistency of a weakly compact
cardinal. We will need a form of ``diamond'' principle.
\begin{lemma} \label{diamond1} If $\gk$ is measurable and GCH holds, then in some forcing
extension
\begin{enumerate}
\item $\gk$ is measurable.
\item There exists a sequence $\seq{S_\ga: \ga < \gk}$ such that $S_\ga \subseteq \ga$ for all $\ga$,
and for all $S \subseteq \gk$ there is a normal measure $U$ on $\gk$ such that if
$j_U: V \lra M_U \simeq Ult(V, U)$ is the associated elementary embedding then
$j_U(\vec S)_\gk = S$ (or equivalently, $\setof{\ga}{S \cap \ga = S_\ga} \in U$).
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{diamond1}]
The proof is quite standard. For a similar construction given in more detail
see \cite{Mitchell}.
Fix $j: V \lra M$ the ultrapower map
associated with some normal measure on $\gk$.
Let $\FQ_\ga$ be the forcing whose conditions are sequences $\seq{T_\gb: \gb < \gg}$ where
$\gg < \ga$ and $T_\gb \subseteq \gb$ for all $\gb$, ordered by end-extension. (This is
really the same as the Cohen forcing $Add(\ga, 1)$). Let $\FP_{\gk+1}$ be a Reverse Easton
iteration of length
$\gk +1$, where we force with $(\FQ_\ga)_{V^{\FP_\ga}}$ at each inaccessible $\ga \le \gk$.
Let $G_\gk$ be $\FP_\gk$-generic over $V$ and let $g$ be $\FQ_\gk$-generic over $V[G_\gk]$.
We will prove that the sequence given
by $S_\ga = g(\ga)$ will work, by producing an appropriate $U$
for each $S \in V[G_\gk][g]$ with $S \subseteq \gk$. Let us fix such an $S$.
By GCH and the fact that $j(\FP_\gk)/G_\gk * g$
is $\gk^+$-closed in $V[G_\gk][g]$,
we may build $H \in V[G_\gk][g]$ which is $j(\FP_\gk)/G_\gk * g$-generic over $M[G_\gk][g]$.
Now for the key point: we define
a condition $q \in \FQ_{j(\gk)}$ by setting $q(\ga) = g(\ga)$ for $\ga < \gk$ and
$q(\gk) = S$. Then we build $h \ni q$ which is $\FQ_{j(\gk)}$-generic over
$M[G_\gk][g][H]$, using GCH and the $\gk^+$-closure of $\FQ_{j(\gk)}$ in $V[G_\gk][g]$.
To finish we define $j: V[G_\gk][g] \lra M[G_\gk][g][H][h]$ by $j: \dot \gt^{G_\gk * g}
\longmapsto j(\dot \gt)^{G_\gk * g * H * h}$, where this map is well-defined and
elementary because $j``(G_\gk * g) \subseteq G_\gk * g * H * h$. The extended $j$
is still an ultrapower by a normal measure ($U$ say) because
$M[G_\gk][g][H][h] = \setof{j(F)(\gk)}{F \in V[G_\gk][g]}$. It is clear from the
definition of this map $j$ that $j(g)(\gk) = h(\gk) = q(\gk) = S$, so
the model $V[G_\gk][g]$ is as required.
This concludes the proof of Lemma \ref{diamond1}.
\end{proof}
Fixing some reasonable coding of members of $H_{\ga^+}$ by subsets of $\ga$,
we may write the diamond property in the following equivalent form: there
is a sequence $\seq{x_\ga : \ga < \gk}$ such that for every $x \in H_{\gk^+}$
there exists $U$ such that $j_U(\vec x)_\gk = x$. Henceforth we will assume
that we have fixed a sequence $\vec x$ with this property.
Now we describe a certain Reverse Easton forcing iteration of length $\gk + 1$.
It will be clear after the iteration is defined that it is $\ha_2$-strategically closed,
so that in particular $\ha_1$ and $\ha_2$ are preserved.
At stage $\ga < \gk$ we will force with $\FQ_\ga$, where $\FQ_\ga$ is trivial
forcing unless
\begin{enumerate}
\item $\ga$ is inaccessible.
\item $V^{\FP_\ga} \models \ga = \ha_3, \ga^+_V = \ha_4$
\item $x_\ga$ is a $\FP_\ga$-name for a nice iteration $\FP^\ga_{2\gd+1}$
of some length $2 \gd + 1 < \ga^+$.
\end{enumerate}
In this last case $\FQ_\ga$ is defined to be
$\FP^\ga_{2 \gd + 1} * \FP(R^\ga_\gd) * Coll(\ha_2, \ga)$.
At stage $\gk$ (which will be $(\ha_3)_{V^{\FP_{\gk}}}$), we will
do a nice iteration $\FQ_\gk$ of length $\gk^+$, with some book-keeping designed
to guarantee that for every stationary $S \subseteq S^3_0$ in the final
model there exists $T \subseteq S$ a non-reflecting stationary subset.
So by design $Dnr(\ha_3, \ha_2, \ha_0)$ holds in the final model
(and in fact so does $Dnr(\ha_3, \ha_1, \ha_0)$). It remains to be
seen that $Ref(\ha_3, \ha_2, \ha_1)$ is true.
Let $T \subseteq S^3_1$ be a stationary subset of $S^3_1$ in the final
model. Since $\FQ_\gk$ has the $\gk^+$-c.c.~we may assume that
$T$ lies in the generic extension by $\FP_\gk * (\FQ_\gk \restriction (2\gd + 1))$
for some $\gd < \gk^+$. Let ${\overline \FQ} = \FQ_\gk \restriction (2\gd+1)$.
Using the diamond property of $\vec x$ and the
definition of the forcing iteration we may find $U$ such that
\[
j_U(\FP_\gk) = \FP_\gk * {\overline \FQ} * \FP(R_\gd) * \FR_{\gk+1, j_U(\gk)}
\]
where $\FR_{\gk+1, j_U(\gk)}$ is the iteration above $\gk$. To save on notation, denote
$j_U$ by $j$.
Now if $M \simeq Ult(V, U)$ is the target model of $j$, then
we may assume by the usual arguments that $T \in M^{\FP_\gk * {\overline \FQ} }$.
Notice that the last step in the iteration $\overline\FQ$ was
to force with $\FR(R_\gd)$, that is to add a club of points at which the
stationarity of $R_\gd$ fails to reflect. Applying Lemma \ref{fred}
we see that $T$ is still stationary in the extension by
$\FP_\gk * {\overline \FQ} * \FP(R_\gd)$. Since $\FR_{\gk+1, j(\gk)}$ is
$\ha_2$-strategically closed, $T$ will remain stationary in the extension by
$j(\FP_\gk)$ (although of course $\gk$ will collapse to become some ordinal of
cofinality $\ha_2$).
To finish the proof we will build a generic embedding from
$V^{\FP_\gk * {\overline \FQ}}$ to
$M^{j(\FP_\gk * {\overline \FQ})}$. It is easy to get
$j: V^{\FP_\gk} \lra M^{j(\FP_\gk)}$, what is needed is a master
condition for $\overline\FQ$ and $j$. Since $\gk$ is $\ha_3$ in $V^{\FP_\gk}$
and $\FQ_\gk$ is an iteration with $\le \ha_2$-supports, it is clear what the
condition should be, we just need to check that it works.
\begin{definition} Define $q$ by setting $\dom(q) = j``(2 \gd + 1)$,
$q( j(2 \gg + 1)) = f_\gg \cup \{ (\gk, 0) \}$, $q(j(2 \gg)) = C_\gg \cup \{ \gk \}$,
where $f_\gg: \gk \lra 2$ is the function added by $\overline\FQ$
at stage $2\gg+1$ and $C_\gg$ is the club added at stage $2 \gg$.
\end{definition}
We claim that $q$ is a condition in $j({\overline \FQ})$.
To see this we should first check that $f_\gg$ is the characteristic function
of a non-stationary subset of $\gk$; this holds because at stage $\gk$ in
the forcing $j(\FP_\gk)$ we forced with $\FP(R_\gd)$ and made $S^*_\gg$
non-stationary for all $\gg < \gd$. We should also check that $R_\gg$
is non-stationary in $\gk$, and again this is easy by Lemma \ref{keypoint}
and the fact that $R_\gd$ has been made non-stationary.
Forcing with $j({\overline \FQ})$
adds no bounded subsets of $j(\gk)$, so that clearly $T$ is still stationary
and $\cf(\gk)$ is still $\ha_2$ in the model $M^{j(\FP_\gk * {\overline \FQ})}$.
By the familiar reflection argument,
there exists $\ga \in S^3_2$ such that $T \cap \ga$ is stationary in the
model $V^{\FP_\gk * {\overline \FQ}}$. $T \cap \ga$ will
still be stationary in $V^{\FP_\gk * \FQ_\gk}$, because the rest of the
iteration $\FQ_\gk$ does not add any bounded subsets of $\ha_3$.
We have proved that $Ref(\ha_3, \ha_2, \ha_1)$ holds in
$V^{\FP_\gk * \FQ_\gk}$, which finishes the proof of
Theorem \ref{thm4} using a measurable cardinal.
It remains to be seen that we can replace the measurable cardinal by a weakly
compact cardinal. To do this we will use the following result of Jensen.
\begin{fact} Let $V = L$ and let $\gk$ be weakly compact. Then
there exists a sequence $\seq{S_\ga: \ga < \gk}$ with $S_\ga \subseteq \ga$ for all
$\ga$, such that for all $S \subseteq \gk$ and all $\gP^1_1$ formulae $\phi(X)$
with one free second-order variable
\[
V_\gk \models \phi(S) \implies (\exists \ga < \gk \; S_\ga = S \cap V_\ga, \,\,
V_\ga \models \phi(S_\ga)).
\]
\end{fact}
Using this fact we can argue exactly as in Argument 1 at the end of the
proof of Theorem \ref{thm1}.
This concludes the proof of Theorem \ref{thm4}.
\end{proof}
\end{document}
|
\begin{document}
\title{Weak Bounded Negativity Conjecture}
\begin{abstract}
\noindent In this paper, we prove the following ``Weak Bounded Negativity Conjecture'': given a smooth complex projective surface $X$ and an integer $g$, the self-intersection number $C^2$ is bounded from below for every reduced curve $C$ in $X$ all of whose components have geometric genus at most $g$.
\end{abstract}
\section{Introduction}
The so-called Weak Bounded Negativity Conjecture (Conjecture 1.2) is motivated by the study of the following old folklore conjecture, the ``Bounded Negativity Conjecture''.
\textbf{Conjecture 1.1 (Bounded Negativity Conjecture):} \textit{For any smooth complex projective surface $X$, there exists a constant $b(X)$ only depending on $X$ itself, such that $C^2\geq b(X)$ for any reduced curve $C$ in $X$.}\\
In this paper, we consider the following Weak Bounded Negativity Conjecture.
\textbf{Conjecture 1.2 (Weak Bounded Negativity Conjecture):} \textit{For any smooth complex projective surface $X$ and any integer $g$, there is a constant $b(X,g)$ depending only on $X$ and $g$, such that $C^2\geq b(X, g)$ for any reduced curve $C=\Sigma C_i$ in $X$ with geometric genus $g(C_i)\leq g$ for all $i$.}\\
For the Weak Bounded Negativity Conjecture, there are several partial results as stated in Theorem 1.3 and Theorem 1.4.
\textbf{Theorem 1.3 (Bogomolov):} \textit{Let $X$ be a smooth projective surface with Kodaira dimension $\kappa(X)\geq 0$. Then for any smooth irreducible curve $C\subset X$ of geometric genus $g(C)$, we have\[C^2\geq K_X^2-4c_2(X)-4g(C)+4,\]
where $c_1$ and $c_2$ are the first and second Chern numbers of the surface $X$, respectively.}
The proof of Theorem 1.3 involves Bogomolov's criterion for unstable bundles on surfaces and the Bogomolov-Sommese vanishing theorem. Refer to \textbf{[Bau2, Theorem 3.4.4]} and Bogomolov \textbf{[Bogo, section 5]} for details.\\
Th. Bauer, B. Harbourne, T. Szemberg, and other authors used the logarithmic Miyaoka-Yau inequality to prove the following theorem and gave a better bound.
\textbf{Theorem 1.4 (\textbf{[Bau2, Theorem 2.6]}):} \textit{Let $X$ be a smooth projective surface with Kodaira dimension $\kappa(X)\geq 0$. Then for any integral curve $C\subset X$, we have\[C^2\geq K_X^2-3c_2(X)+2-2g(C)\]
where $c_1$ and $c_2$ are the first and second Chern numbers of surface $X$, respectively.}\\
\textbf{Remark 1.5:} As T. Szemberg mentioned to me, Theorem 1.4 is actually a corollary of the generalized Logarithmic Miyaoka-Yau Inequality: Miyaoka \textbf{[Miy, Theorem 1.1]}. We will use \textbf{[Miy, Theorem 1.1]} in section 2.\\
In this paper, we use the elementary intersection theory, the generalized Logarithmic Miyaoka-Yau Inequality (\textbf{[Miy, Theorem 1.1]}), and some techniques in the proof of Theorem 1.4 to give a full proof of the Weak Bounded Negativity Conjecture.
\section{Integral curves in any smooth complex projective surface $X$}
In this section, we prove the Weak Bounded Negativity Conjecture for integral curves in a surface $X$ through a case-by-case analysis, looking at $H^0(X, \mathcal{O}_X(-K_X))$, where $\mathcal{O}_X(-K_X)$ is the anti-canonical line bundle of $X$. We denote $\dim H^0(X, \mathcal{O}_X(D))$ by $h^0(D)$, for any divisor $D$ on $X$.
\subsection{ Surface $X$ with $h^0(-K_X)>0$}
For this case, we have the following simple observation.\\
\textbf{Lemma 2.1.1:}\ \ \textit{Let $X$ be a smooth projective surface over $\mathbb{C}$ with $h^0(-K_X)>0$. Then the Weak Bounded Negativity Conjecture holds for integral curves in $X$.}
\textit{Proof.} Since $h^0(-K_X)>0$, we can choose an effective divisor in the linear system $|-K_X|$ and still call it $-K_X$. It contains only finitely many integral curves, and in particular only finitely many with negative self-intersection. For any integral curve $C$ which is not a component of $-K_X$, the genus formula gives $$g_a(C)=1+\frac{1}{2}(C^2+C\cdot K_X),$$ where $g_a(C)$ is the arithmetic genus of $C$.
Therefore $C^2=2g_a(C)-2-C\cdot K_X$. Since $-K_X$ is effective and $C$ is not one of its components, we have $C\cdot(-K_X)\geq 0$ and $g_a(C)\geq 0$, hence $C^2\geq -2$. Note that the bound in this case does not depend on the geometric genus of $C$.
$\blacksquare$ \\
\textbf{Example 2.1.2:}\ Consider the minimal rational surfaces: Hirzebruch surfaces $\Sigma_n$.
Note first $K_{\Sigma_n}^2=8$. By Riemann-Roch formula, we have $$h^0(-K_{\Sigma_n})+h^0(2K_{\Sigma_n})-h^1(-K_{\Sigma_n})=1+K^2_{\Sigma_n}.$$ Since $\Sigma_n$ is a rational surface, we have $h^0(2K_{\Sigma_n})=0$. Thus we get $h^0(-K_{\Sigma_n})\geq 9$. On the other hand, we know that there is only one negative curve on $\Sigma_n$, with self-intersection $-n$. Then by the above lemma, we know that if $n>2$, the negative curve is contained in an effective representative of $-K_{\Sigma_n}$.\\
\subsection{Surface $X$ with $h^0(-K_X)=0$}
In this subsection, we will use the invariant $h^0(m(K_X+C))$ of curves on a surface to divide the problem into two cases.\\
\textbf{Case \RN{1}: $C$ is an integral curve on $X$, such that $h^0(m(K_X+C))=0$ for all $m$.}
\textbf{Lemma 2.2.1:} \textit{ Given a smooth projective surface $X$ with $h^0(-K_X)=0$, and an integral curve $C\subset X$ of arithmetic genus $g_a(C)$ with $h^0(m(K_X+C))=0$ for all $m$, we have $C^2\geq K_X^2+\chi(\mathcal{O}_X)-3$.}
\textit{Proof.} In this case, $h^0(2(K_X+C))=0$. Hence we have $h^0(2K_X+C)=0$. By Riemann-Roch formula and genus formula for curves on surfaces, we have $$h^0(2K_X+C)+h^0(-K_X-C)-h^1(2K_X+C)=K_X^2+3g_a(C)+\chi(\mathcal{O}_X)-3-C^2.$$
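For completeness, here is how this identity follows; it is a routine computation from Riemann-Roch, Serre duality and the genus formula, spelled out for the reader's convenience. Riemann-Roch gives
\[
\chi(2K_X+C)=\chi(\mathcal{O}_X)+\frac{1}{2}(2K_X+C)\cdot(K_X+C)
=\chi(\mathcal{O}_X)+K_X^2+\frac{3}{2}\,K_X\cdot C+\frac{1}{2}\,C^2,
\]
and substituting $K_X\cdot C=2g_a(C)-2-C^2$ from the genus formula yields $\chi(2K_X+C)=\chi(\mathcal{O}_X)+K_X^2+3g_a(C)-3-C^2$; finally, Serre duality gives $h^2(2K_X+C)=h^0(-K_X-C)$.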
Since $h^0(-K_X)=0$ and $C$ is effective, $h^0(-K_X-C)=0$. Thus we get $$C^2\geq K_X^2+3g_a(C)+\chi(\mathcal{O}_X)-3\geq K_X^2+\chi(\mathcal{O}_X)-3.$$
$\blacksquare$ \\
\textbf{Remark 2.2.2:} The lower bound in the above case does not involve the geometric genus of the integral curves.\\
\textbf{Case \RN{2}: $C$ is a smooth irreducible curve on $X$, such that $h^0(m(K_X+C))\neq 0$ for some $m>0$.}
For this case, we first introduce the following two theorems:
\textbf{Theorem 2.2.3(Zariski Decomposition Theorem):} \textit{Let $X$ be a smooth projective surface, and let $D$ be a pseudo-effective integral divisor on $X$. Then $D$ can be written uniquely as a sum
$D=P+N$ of $\mathbb{Q}$-divisors with the following properties:}
\textit{(1) $P$ is nef;}
\textit{(2) $N=\Sigma^r_{i=1}a_iE_i$ is effective, and if $N\neq 0$ then the intersection matrix
$$\|E_i\cdot E_j\|$$
determined by the components of N is negative definite;}
\textit{(3) $P$ is orthogonal to each of the components of $N$ , i.e. $P\cdot E_i=0$.}\\
Refer to Fujita \textbf{[Fuj, Theorem 1.12]} for the proof of Theorem 2.2.3.\\
\textbf{Remark 2.2.4:} The above version of the Zariski Decomposition Theorem is due to Fujita. The original version of the Zariski Decomposition Theorem says that for an effective divisor $D$, we have a unique decomposition $D=P+N$ satisfying the above three properties, where $P$ is necessarily effective. Refer to Zariski \textbf{[Zar, Theorem 7.7]} for the original version. \\
\textbf{Theorem 2.2.5:} \textit{Let $X$ be a smooth projective surface and $C$ be a smooth curve in $X$. Assume that $K_X+C$ is pseudo-effective. According to Theorem 2.2.3, $K_X+C$ admits a Zariski decomposition. Then the following inequality holds
$$c_2(X)-e(C)-\frac{1}{3}(K_X+C)^2+ \frac{1}{12}N^2\geq 0,$$
where $e(C)$ is the topological Euler characteristic of $C$, and $N$ is the negative part (non-nef part) of the Zariski decomposition of $K_X+C$.}\\
Theorem 2.2.5 is a special case of Miyaoka \textbf{[Miy, Theorem 1.1]}. \\
\textbf{Corollary 2.2.6:} \textit{Let $X$ be a smooth projective surface with $H^0(X, -K_X)=0$, and $C\subset X$ be a smooth irreducible curve of genus $g(C)$ with $H^0(X, m(K_X+C))\neq 0$ for some $m$. Then $C^2\geq K_X^2-3c_2(X)+2-2g(C)$.}
\textit{Proof.} Since there exists $m>0$ such that $h^0(m(K_X+C))>0$, $K_X+C$ is a pseudo-effective divisor. By Theorem 2.2.3, $K_X+C$ admits a Zariski decomposition $K_X+C=P+N$, with $P$ the nef part. Then by Theorem 2.2.5, we get the following inequality
$$c_2(X)-e(C)-\frac{1}{3}(K_X+C)^2+ \frac{1}{12}N^2\geq 0.$$ Note that $N^2\leq 0$ by property (2) of the Zariski Decomposition Theorem. Thus we have $(K_X+C)^2\leq 3(c_2(X)-2+2g(C))$. Note also that by the genus formula, we have $$(K_X+C)^2=K_X^2+4(g(C)-1)-C^2.$$
Hence $$C^2\geq K_X^2-3c_2(X)+2-2g(C).$$ \ $\blacksquare$ \\
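For the reader's convenience, the final step spelled out explicitly (nothing beyond the two displayed relations above is used):
\[
K_X^2+4(g(C)-1)-C^2=(K_X+C)^2\leq 3\bigl(c_2(X)-2+2g(C)\bigr)
\quad\Longrightarrow\quad
C^2\geq K_X^2-3c_2(X)+2-2g(C).
\]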
Next we modify the strategy of the proof of Theorem 1.4 to prove the Weak Bounded Negativity Conjecture for integral curves. In view of Lemma 2.2.1 and Corollary 2.2.6, it suffices to prove the following theorem.
\textbf{Theorem 2.2.7:} \textit{Let $X$ be a smooth projective surface with $H^0(X, -K_X)=0$, and $C\subset X$ be an integral curve with $H^0(X, m(K_X+C))\neq 0$ for some $m$. Then $C^2\geq \mu(X, g(C))$, where $\mu(X, g(C))$ is defined to be $\min\{K_X^2+\chi(\mathcal{O}_X)-3,\ K_X^2-3c_2(X)+2-2g(C)\}$.}\\
First we have the following simple observation.
\textbf{Lemma 2.2.8:} \textit{Let $X$ be a smooth projective surface with $H^0(X, -K_X)=0$, and $p$ be a point in $X$. Let $\pi: \tilde{X}=Bl_p(X)\rightarrow X$ be the blow-up at $p$. Then $H^0(\tilde{X}, -K_{\tilde{X}})=0$.}
\textit{Proof.} Since $h^0(-K_X)=0$, $h^0(-\pi^*(K_X))= 0$. Note that $-K_{\tilde{X}}=-\pi^*K_X-E$, where $E$ is the exceptional divisor of the blow up. Thus $h^0(-K_{\tilde{X}})=0$.\ $\blacksquare$ \\
By Lemma 2.2.8, given a surface $X$ with $h^0(-K_X)=0$, any blow-up of $X$ satisfies the same property. Thus, to prove Theorem 2.2.7, it suffices to prove the following lemma.
\textbf{Lemma 2.2.9:} \textit{Let $X$ be a smooth projective surface with $h^0(-K_X)=0$, $C$ be an integral curve of geometric genus $g(C)$, and $p\in C$ be a point with $m:=\mathrm{mult}_pC\geq 2$. Let $\pi: \tilde{X}\rightarrow X$ be the blow-up of $X$ at $p$ with exceptional divisor $E$. Let $\tilde{C}=\pi^*(C)-mE$
be the strict transform of $C$. Then the inequality $$\tilde{C}^2\geq \mu(\tilde{X}, g(\tilde{C}))$$
implies $$C^2\geq \mu(X, g(C)).$$}
\textit{Proof.} Note that we have $C^2=\tilde{C}^2+m^2,\ K^2_X=K^2_{\tilde{X}}+1, \ c_2(X)=c_2(\tilde{X})-1, \ \chi(\mathcal{O}_X)=\chi(\mathcal{O}_{\tilde{X}})$, and\ $g(C)=g(\tilde{C}).$
Note that $K_X^2+\chi(\mathcal{O}_X)-3$ and $K_X^2-3c_2(X)+2-2g(C)$ only depend on $X$ in the blow-up procedure. Thus we may denote $$M(X)=K_X^2+\chi(\mathcal{O}_X)-3$$ and $$N(X)=K_X^2-3c_2(X)+2-2g(C).$$ Then we have $$M(\tilde{X})+1=M(X)$$ and $$N(\tilde{X})+4=N(X).$$
There are three cases to consider. (The remaining case, $\mu(X, g(C))=N(X)$ and $\mu(\tilde{X}, g(\tilde{C}))=M(\tilde{X})$, cannot occur: $\mu(X, g(C))=N(X)$ means $N(X)\leq M(X)$, and then $N(\tilde{X})=N(X)-4<M(X)-1=M(\tilde{X})$.) Note also that $m\geq 2$, so $m^2\geq 4$.
(1) If $\mu(X, g(C))=M(X)$ and $\mu(\tilde{X}, g(\tilde{C}))=M(\tilde{X})$, then $C^2-m^2=\tilde{C}^2\geq M(\tilde{X})=M(X)-1$ implies $C^2\geq M(X)$.
(2) If $\mu(X, g(C))=M(X)$ and $\mu(\tilde{X}, g(\tilde{C}))=N(\tilde{X})$, then $C^2-m^2=\tilde{C}^2\geq N(\tilde{X})=N(X)-4$ implies $C^2\geq N(X)\geq M(X)$.
(3) If $\mu(X, g(C))=N(X)$ and $\mu(\tilde{X}, g(\tilde{C}))=N(\tilde{X})$, then $C^2-m^2=\tilde{C}^2\geq N(\tilde{X})=N(X)-4$ implies $C^2\geq N(X)$. \ $\blacksquare$ \\
\textbf{Proof of Theorem 2.2.7:} It follows from Lemma 2.2.9 by induction on the number of blow-ups needed to resolve the singularities of $C$: by Lemma 2.2.8 the hypothesis $h^0(-K)=0$ is preserved under blow-up, the smooth strict transform obtained at the end of the resolution satisfies the desired bound by Lemma 2.2.1 or Corollary 2.2.6, and Lemma 2.2.9 transfers the bound back down at each step. \ $\blacksquare$ \\
\textbf{Corollary 2.2.10:} \textit{The Weak Bounded Negativity Conjecture holds for integral curves.}
\textit{Proof.} Collecting the results of Lemma 2.1.1, Lemma 2.2.1, and Theorem 2.2.7, we get the above corollary. \ $\blacksquare$ \\
\section{Reduced curves with arbitrary singularity}
In this section, we prove the Weak Bounded Negativity Conjecture for general reduced curves, based on the results obtained in Section 2. The idea of the proof of the following theorem comes from the proof of \textbf{[Bau1, Theorem 5.1]}, applied in a different situation.
\textbf{Theorem 3.1.1:} \textit{Let $C=\Sigma_i C_i$ be a reduced curve on any smooth complex projective surface $X$. Suppose that there exists an integer $g$ such that the geometric genus $g(C_i)\leq g$ for all $i$. Then there exists a constant $B(X, g)$ only depending on $X$ and $g$, such that $$C^2\geq B(X,g).$$}
\textit{Proof.} By the original Zariski Decomposition Theorem recalled in Remark 2.2.4 (\textbf{[Zar, Theorem 7.7]}), we can write $C=P+N$, where $P$ is nef and effective and $N=\Sigma^r_{i=1}a_iE_i$ is effective. Since $P\cdot E_i=0$ for all $i$ and $P^2\geq 0$, we have $$C^2=(P+\mathlarger{\mathlarger{\sum}}_{i=1}^{r}a_iE_i)^2=P^2+(\mathlarger{\mathlarger{\sum}}_{i=1}^{r}a_iE_i)^2\geq (\mathlarger{\mathlarger{\sum}}_{i=1}^{r}a_iE_i)^2.$$
Since $C$ is reduced and $P$, $N$ are effective, we have $a_i\leq 1$. By the Hodge Index Theorem, since the intersection matrix $[E_i\cdot E_j]$ is negative definite, we have $r\leq h^{1,1}(X)-1$. Moreover, each $E_i$ is a component of $C$, hence an integral curve of geometric genus at most $g$, so by Corollary 2.2.10 we have $E_i^2\geq b(X, g)$ for some constant $b(X,g)$ depending only on $X$ and $g$; we may always assume $b(X,g)\leq0$.
Since $E_i\cdot E_j\geq 0$ for $i\neq j$, we get $C^2\geq a_1^2E_1^2+\ldots+a_r^2E_r^2\geq (h^{1,1}(X)-1)b(X,g)$. Then just let $B(X, g)=(h^{1,1}(X)-1)b(X,g)$.
\ $\blacksquare$ \\
\textbf{References}
\textbf{[Bau1]} Bauer, Th., et al. Negative curves on algebraic surfaces. arXiv.org/abs/1109.1881.
\textbf{[Bau2]} Bauer, Th., et al. Recent developments and open problems in linear series. To appear in “Contributions to Algebraic Geometry”, Impanga Lecture Notes Series. arXiv:1101.4363.
\textbf{[Bogo]} F. A. Bogomolov. Stable vector bundles on projective surface. Russian Academy of Sciences. Sbornik Mathematics. Volume 81, Number 2.
\textbf{[Fuj]} T. Fujita. On Zariski Problem. Proc. Japan Acad., 5, Ser. A (1979) Vol. 55(A).
\textbf{[Miy]} Y. Miyaoka. The Maximal Number of Quotient Singularities on Surfaces with Given Numerical Invariants. Mathematische Annalen (1984)
Volume: 268, page 159-172.
\textbf{[Zar]} O. Zariski, The Theorem of Riemann-Roch for High Multiples of an Effective Divisor on an Algebraic Surface. Ann. of Math, 1962
Feng Hao
Department of Mathematics, Purdue University
150 N. University Street, West Lafayette, IN 47907-2067
E-mail address: \textit{[email protected]}
\end{document}
|
\begin{document}
\title{Calibrations Scheduling Problem with Arbitrary Lengths and Activation Length
\thanks{This work has been supported by the ALGONOW project of the THALES program and the Special Account for Research Grants of National and Kapodistrian
University of Athens,
by NSFC (No. 61433012), Shenzhen research grant (KQJSCX20180330170311901, JCYJ20180305180840138 and GGFW2017073114031767).
}}
\author{Eric Angel \and
Evripidis Bampis \and
Vincent Chau \and
Vassilis Zissimopoulos
}
\institute{
Eric Angel \at
IBISC, University Paris Saclay, Evry, France\\
\email{[email protected]}
\and
Evripidis Bampis \at
Sorbonne Universit\'e, CNRS, Laboratoire d'Informatique de Paris 6, LIP6, F-75005 Paris, France
\email{[email protected]}
\and
Vincent Chau \at
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China\\
\email{[email protected]}
\and
Vassilis Zissimopoulos \at
Department of Informatics \& Telecommunications,
National and Kapodistrian University of Athens, Athens, Greece\\
\email{[email protected]}
}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
Bender et al. (SPAA 2013) proposed a theoretical framework for testing in contexts where safety mistakes must be avoided.
Testing in such a context is done by machines that need to be calibrated on a regular basis. Since calibrations have a non-negligible cost, it is important to study policies minimizing the total calibration cost while performing all the necessary tests. We focus on the single-machine setting, and we study the complexity status of different variants of the problem. First, we extend the model by considering that the jobs have arbitrary processing times and we propose an optimal polynomial-time algorithm when preemption of jobs is allowed. Then, we study the case where there are many types of calibrations with their corresponding lengths and costs. We prove that the problem becomes NP-hard for arbitrary processing times even when the preemption of the jobs is allowed. Finally, we focus on the case of unit processing time jobs, and we show that a more general problem, where the recalibration of the machine is not instantaneous, can be solved in polynomial time via dynamic programming.
\end{abstract}
\section{Introduction}
The scheduling problem, whose objective is to minimize the number of calibrations, was introduced by \cite{bender2013efficient}. It is motivated by the Integrated Stockpile Evaluation (ISE) program~(\cite{ise}) at Sandia National Laboratories for testing nuclear weapons in contexts where safety mistakes may have serious consequences. This motivation extends to any machine that needs to be calibrated carefully to ensure accuracy. Calibrations have extensive applications in several areas, including robotics (\cite{bernhardt1993robot, evans1982method, nguyen2013new}), pharmaceuticals (\cite{forina1998multivariate, bansal2004qualification}), and digital cameras (\cite{baer2005self, barton2006sensor, zhang2002method}).
Formally, the problem can be stated as follows: given a set $\cal{J}$ of $n$ jobs (tests), where each job $j$ is characterized by its release time $r_j$, its deadline $d_j$ and its processing time $p_j$. Each job must be processed inside $[r_j,d_j)$. We are also given a (set of) testing machine(s) that must be calibrated regularly. Calibrating a machine incurs a unit cost, and it is instantaneous, i.e., a machine can be calibrated between the execution of two jobs that are processed consecutively. A machine stays calibrated for $T$ time-units, and a job can only be processed during an interval where the machine is calibrated. The goal is to find a feasible schedule performing all the tests (jobs) between their release times and deadlines and minimizing the number of calibrations. \cite{bender2013efficient} studied the case of \emph{unit-time} jobs. They considered both the single-machine and multiple-machine problems. For the single-machine case, they showed it could be solved in polynomial time: their algorithm is called the Lazy Binning. For the multiple-machine case, they proposed a 2-approximation algorithm. However, the complexity status of the multiple-machine case with unit-time jobs remained open. Very recently, \cite{ChenLL019} gave a polynomial-time algorithm when the number of machines is constant. However, the running time grows exponentially with the number of machines. They gave a PTAS for an arbitrary number of machines.
\cite{FinemanS15} studied a first generalization of the problem by considering that the jobs have arbitrary processing times. They considered the multiple-machine case where jobs cannot be interrupted once they have been started. Since the feasibility problem is NP-hard, they considered a {\em resource-augmentation} version of the problem (\cite{KalyanasundaramP00}). They were able to relate this version to the classical {\em machine-minimization} problem (\cite{PhillipsSTW02}) in the following way. If there is an $s$-speed $\alpha$-approximation algorithm for the machine-minimization problem, then there is an $O(\alpha)$-machine $s$-speed $O(\alpha)$-approximation for the resource-augmentation version of the problem of minimizing the number of calibrations.
Other objectives have been studied in the literature. One of the variants is studied by \cite{ChauLMW17}; they considered the flow time problem with calibration constraints (the flow time of a job is the length of the duration from its release time until its completion). They investigated the online version in which the goal is to minimize the total (weighted) flow time as well as the total cost of the calibrations. They proposed several constant competitive online algorithms where jobs are not known in advance and a polynomial-time algorithm for the offline case. \cite{Wang18} studied the time slot cost variant. Scheduling a job incurs a different cost for each time slot. \cite{Wang18} considered the case of jobs with uniform processing time, and the goal is to minimize the total cost incurred by the jobs with a limited number of calibrations. \cite{ChauFLWZ019} studied the throughput variant of the calibration scheduling problem. This variant is, in fact, a generalization of the calibration minimization problem. If this variant can be solved optimally in polynomial time, it implies that the minimization problem can be solved in polynomial time. Furthermore, \cite{ChauLWZZ19} studied the batch calibrations variant. Calibrations must occur at the same moment on different machines. Additionally, the cost of a batch of calibrations is defined by a non-decreasing function that depends on the number of calibrations occurring at this batch. They showed that this problem could be solved in polynomial time, and gave several faster approximation algorithms for specific cost functions.
A notable statement from \cite{bender2013efficient} attracted our attention:
``{\em As a next step, we hope to generalize our model to capture more
aspects of the actual ISE problem. For example, machines may not
be identical, and calibrations may require machine time. Moreover,
some jobs may not have unit size}".
\paragraph{Our contributions.} In this paper, we investigate the single-machine case without resource augmentation, and we study the complexity status of different variants of the calibration cost minimization problem. In Section~\ref{sec:warm}, we study the problem when the jobs may have arbitrary processing times, and the preemption of the jobs is allowed: the processing of any job may be interrupted and resumed later. Clearly, by using the optimal algorithm of \cite{bender2013efficient} for unit-time jobs, we can directly obtain a pseudopolynomial-time algorithm by replacing every job with a set of unit-time jobs whose cardinality equals the processing time of the job. We propose a polynomial-time algorithm for this variant of the problem. Then, in Section~\ref{sec:arbitrary_pj}, we study the case of scheduling a set of jobs when $K$ different types of calibrations are available. Each calibration of type $k$ is associated with a length $T_k$ and a cost $f_k$. The objective is to find a feasible schedule minimizing the total calibration cost. We show that the problem with arbitrary processing times is NP-hard, even when the preemption of the jobs is allowed.
We study the case of unit-time jobs in Section~\ref{sec:act_unit} and propose a polynomial-time algorithm based on dynamic programming. We present an algorithm for a more general setting where calibrations are not instantaneous and require $\lambda$ units of time during which the machine cannot be used. We refer to this period as the \emph{activation length}.
\section{Arbitrary processing times and preemption}\label{sec:warm}
We suppose here that the jobs have arbitrary processing times and that the preemption of the jobs is allowed. An obvious approach to obtain an optimal preemptive schedule is to divide each job $j$ into $p_j$ unit-time jobs with the same release time and deadline as job $j$ and then apply the Lazy Binning (LB) algorithm of~\cite{bender2013efficient} that optimally solves the problem for instances with unit-time jobs. However, this approach leads to a pseudopolynomial-time algorithm. Here, we propose a more efficient way for the problem. Our method is based on the idea of LB. In the sequel, we suppose without loss of generality that jobs are sorted in non-decreasing order of their deadline, $d_1\leq d_2 \leq \ldots \leq d_n$. Before introducing our algorithm, we briefly recall LB:
at each iteration, a time $t$ (initially 0) is fixed and the (remaining) jobs are scheduled, starting at time $t+1$ using the Earliest Deadline First {\sc (edf)} policy \footnote{In the {\sc edf} policy, at any time $t$, the available jobs are scheduled in order of non-decreasing deadlines.}.
If a feasible schedule exists (for the remaining jobs), $t$ is updated to $t+1$; otherwise, the next calibration is set to start at time $t$, which is called the current {\em latest-starting-time} of the calibration. Then, the jobs that are scheduled during this calibration interval are removed, and this process is iterated after updating $t$ to $t+T$, where $T$ is the calibration length. The polynomiality of the algorithm for unit-time jobs comes from the observation that the starting time of any calibration is at a distance of no more than $n$ time-units before any deadline.
In our case, however, i.e., when the jobs have arbitrary processing times, a calibration may start at a distance of at most $P=\sum_{j=1}^n p_j$ time-units before any deadline.
\begin{definition}\label{def:psi}
Let $\Psi := \bigcup_i \{ d_i-P,d_i-P+1,\ldots, d_i \}$ where $P=\sum_{j=1}^n p_j$.
\end{definition}
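As a small illustration of this definition (a toy instance of our own): with two jobs of processing times $p_1=2$, $p_2=3$ (so $P=5$) and deadlines $d_1=7$, $d_2=9$, we get
\[
\Psi=\{2,3,\ldots,7\}\cup\{4,5,\ldots,9\}=\{2,3,\ldots,9\},
\]
so the proposition below asserts that some optimal solution only opens calibrations at times between $2$ and $9$.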
\begin{prop}\label{prop:position_calibration}
There exists an optimal solution in which each calibration starts at a time in $\Psi$.
\end{prop}
\begin{proof}
Let $\sigma$ be an optimal solution in which there is at least one calibration that does not start at a time in $\Psi$.
We show how to transform the schedule $\sigma$ into another optimal
solution that satisfies the statement of the proposition without increasing the cost.
See Fig.~\ref{fig:push_calibrations} for an illustration of the transformation.
Let $c_{i'}$ be the first calibration of $\sigma$ that starts at
time $t' \notin \Psi$.
Let $c_{i'},\ldots, c_i$ be the maximal set of consecutive calibrations, i.e.,
when a calibration finishes, another starts immediately.
We denote by $c_{i+1}$ the next calibration that is not adjacent to calibration $c_i$.
\begin{figure*}
\caption{Illustration of Proposition~\ref{prop:position_calibration}.}
\label{fig:push_calibrations}
\end{figure*}
We can delay the set of calibrations $c_{i'},\ldots, c_i$ until:
\begin{itemize}
\item either we reach the next calibration $c_{i+1}$,
\item or $c_{i'}$ starts at a time in $\Psi$.
\end{itemize}
Note that this procedure is always possible.
Indeed, since $c_{i'}$ starts at a time that is at a distance of more than $P$ before any deadline, it is always possible to delay the scheduled jobs while keeping the schedule feasible.
In particular, if there are no jobs scheduled when calibration $c_{i'}$ starts, then the execution of jobs can remain unchanged.
Otherwise, there is at least one job scheduled when
calibration $c_{i'}$ starts. Let $a_1,\ldots,a_e$ be the continuous block of jobs.
Since the starting time of job $a_1$ is more than $P$ time-units before any deadline, all the jobs of this block can be delayed by one time-unit, since no job of the block finishes at its deadline.
Note that after this modification, jobs can be assigned to another calibration.
We repeat the above transformation until we get a schedule satisfying the
statement of the proposition.
\qed\end{proof}
We propose the following algorithm whose idea is based on the LB algorithm: we first compute the current latest-starting-time
of the calibration
such that no job misses its deadline (this avoids considering every value in $\Psi$).
The starting time of the calibration depends on some deadline $d_k$. At each iteration, we compute, for every deadline, the total (remaining) processing time of the remaining jobs whose deadline is smaller than or equal to it, and we subtract this quantity from the deadline under consideration. The current latest-starting-time of the calibration is the smallest value computed in this way.
Once the starting time of the calibration is
set, we schedule the remaining jobs in the {\sc edf} order until reaching
$d_k$ and we continue to schedule the available jobs until the
calibration interval finishes.
In the next step, we update the processing time of the jobs that have been
processed. We repeat this procedure until all jobs are fully processed. A formal description of the algorithm that we call the Preemptive Lazy Binning (PLB) algorithm is given below (Algorithm~\ref{algo:calibration}).
\begin{algorithm}[thbp]
\begin{algorithmic}[1]
\STATE Jobs in $\mathcal{J}$ are sorted in non-decreasing order of deadline
\WHILE{$\mathcal{J}\neq \emptyset$}
\STATE $t \gets \max_{i\in \mathcal{J}}d_i$, $k\gets 0$ // the current latest-starting-time of the calibration
\FOR{$i\in \mathcal{J}$}
\IF{$t>d_i-\sum_{j\leq i,j\in \mathcal{J}}p_j$}
\STATE $t\gets d_i-\sum_{j\leq i,j\in \mathcal{J}}p_j$
\STATE $k \gets i$
\ENDIF
\ENDFOR
\STATE $u\gets t+ \left\lceil\frac{d_k-t}{T}\right\rceil \times T$
\STATE Perform calibrations at time $t, t+T, t+2T, \ldots, u-T$.
\STATE Schedule jobs $\{ j\leq k ~|~j\in \mathcal{J} \}$ from $t$ to $d_k$ by applying the {\sc edf} policy and remove them from $\mathcal{J}$.
\STATE Schedule jobs $k+1,\ldots, n$ in $[d_k,u)$ in {\sc edf} order.
\STATE Let $q_j$ for $j=k+1,\ldots, n$ be the processed quantity in $[d_k,u)$.
\STATE //Update processing time of jobs
\FOR{$i=k+1,\ldots, n$}
\STATE $p_i \gets p_i - q_i$
\IF{$p_i=0$}
\STATE $\mathcal{J}\gets \mathcal{J}\setminus \{i\}$
\ENDIF
\ENDFOR
\ENDWHILE
\end{algorithmic}
\caption{Preemptive Lazy Binning (PLB)}
\label{algo:calibration}
\end{algorithm}
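To make the above pseudocode concrete, the following short Python sketch simulates Algorithm~\ref{algo:calibration} under the simplifying assumptions that every job is released at time $0$ (so only deadlines and remaining processing times matter), that all data are integers, and that the instance is feasible; the function name and data layout are ours and serve only as an illustration, not as part of the formal results.
\begin{verbatim}
# Minimal illustrative sketch of Algorithm 1 (PLB), assuming every job is
# released at time 0, integer data, and a feasible instance.
# A job is a pair (deadline, processing_time); T is the calibration length.
def plb(jobs, T):
    jobs = sorted(jobs)                        # non-decreasing deadlines
    deadlines = [d for (d, _) in jobs]
    remaining = [p for (_, p) in jobs]         # remaining processing times
    calibrations = []                          # starting times of calibrations
    while any(remaining):
        # latest starting time t of the next calibration, and the deadline
        # index k that determines it
        t, k = min((deadlines[i] - sum(remaining[:i + 1]), i)
                   for i in range(len(jobs)) if remaining[i])
        # back-to-back calibrations covering [t, u), u = t + ceil((d_k-t)/T)*T
        u = t - ((t - deadlines[k]) // T) * T
        calibrations.extend(range(t, u, T))
        # EDF inside the calibrated window: fill u - t time units of work
        # in non-decreasing deadline order
        slack = u - t
        for i in range(len(jobs)):
            done = min(remaining[i], slack)
            remaining[i] -= done
            slack -= done
            if slack == 0:
                break
    return calibrations
\end{verbatim}
For instance, \texttt{plb([(7, 2), (9, 3)], 4)} returns \texttt{[4, 8]}: the first calibration starts at the latest-starting-time $\min\{7-2,\,9-(2+3)\}=4$, and two calibrations are unavoidable here because the total processing time $5$ exceeds $T=4$.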
We can prove the optimality of this algorithm using an analysis similar to that of the Lazy Binning algorithm in~\cite{bender2013efficient}.
\begin{prop}\label{prop:algo_latest_starting_time}
The schedule returned by Algorithm~PLB is a feasible schedule in
which the starting time of each calibration is the latest possible.
\end{prop}
\begin{proof}
We show that the condition at line 5 in Algorithm~PLB ensures
that we always obtain a feasible schedule.
We compute the latest-starting-time at each step, and this time is exactly the latest time of the first calibration. By setting a deadline $d_i$, we know that jobs that have a deadline earlier than $d_i$ have to be scheduled before $d_i$, while the other jobs should be scheduled after $d_i$. When we update $t$ for every deadline $d_i$ in the algorithm, we assume that there is no idle time between $d_i-\sum_{j\leq i,j\in \mathcal{J}}p_j$ (i.e. the latest starting time) and $d_i$. Note that if $d_i-\sum_{j\leq i,j\in \mathcal{J}}p_j<0$, then the schedule is not feasible. For the sake of contradiction, suppose that a feasible schedule exists in which some calibration is not started at a time computed by the algorithm. We will show that the starting time of this calibration is not the latest one. Denote this time by $t'$.
Since there is no $i$ for which the starting time of the calibration is $d_i -\sum_{j\leq i,j\in \mathcal{J}}p_j $, then there is at least one unit of idle time between the starting time of the calibration and some deadline $d_i$. Hence, it is possible to delay all calibrations starting at $t'$ or later, as well as the jobs that were scheduled in $[t',d_i)$ while keeping the {\sc edf} order. This can be done similarly as in the proof of Proposition~\ref{prop:position_calibration}.
\qed\end{proof}
\begin{prop}
Algorithm~$PLB$ is optimal.
\end{prop}
\begin{proof}
It is sufficient to prove that Algorithm~PLB returns the same schedule as LB applied after splitting all jobs into unit-time jobs. We denote by $PLB$ and $LB$, respectively, the schedules returned by these algorithms.
Let $t'$ be the first time at which the two schedules differ. The jobs executed before $t'$ are the same in both schedules. Hence, the remaining jobs are the same after $t'$. Two cases may occur:
\begin{itemize}
\item a job is scheduled in $[t',t'+1)$ in $PLB$ but not in $LB$.
This means that the machine is not calibrated at this time slot in the schedule produced by $LB$.
Since the calibrations before $t'$ are the same in both schedules, a calibration starting at $t'$ is necessary in $PLB$. By Proposition~\ref{prop:algo_latest_starting_time}, this contradicts the fact that each calibration in $PLB$ starts at the latest possible time.
\item a job is scheduled in $[t',t'+1)$ in $LB$ but not in $PLB$. This means that there is no feasible schedule of the remaining jobs starting at $t'+1$, and hence $PLB$ would not be feasible. This is impossible by Proposition~\ref{prop:algo_latest_starting_time}.
\end{itemize}
\qed\end{proof}
\begin{prop}
Algorithm~PLB has a running time of $O(n^2)$.
\end{prop}
\begin{proof}
We first sort the jobs in non-decreasing order of their deadlines in $O(n\log n)$ time. At each step, we compute the latest starting time of the first calibration in $O(n)$ time. Scheduling the jobs in {\sc edf} order then takes $O(n)$ time, and updating the processing times of the jobs whose execution has started also takes $O(n)$ time. Since at least one job is completed at each step, there are at most $n$ steps.
\qed\end{proof}
\section{Arbitrary processing times, preemption and many calibration types}
\label{sec:arbitrary_pj}
In this section, we consider a generalization of the model of~\cite{bender2013efficient} in which there is more than one type of calibration. Each calibration type $i$ is characterized by a length $T_i$ and a cost $f_i$. We are also given a set of jobs, each characterized by a processing time $p_j$, a release time $r_j$ and a deadline $d_j$. A job can only be scheduled when the machine is calibrated, regardless of the calibration type. Our objective is to find a feasible preemptive schedule minimizing the total calibration cost. We prove that this problem is NP-hard.
\begin{prop} \label{prop_np_hard}
The problem of minimizing the calibration cost is NP-hard for jobs with arbitrary processing times and many types of calibration,
even when the preemption is allowed.
\end{prop}
In order to prove the NP-hardness, we use a reduction from the {\sc Subset Sum} problem, which is NP-hard~\cite{johnson1979computers}. In an instance of the {\sc Subset Sum} problem, we are given a set of $n$ items, where
each item $j$ is associated with a value $\kappa_j$, together with a target value $V$. The goal is to find a subset of items whose values sum to $V$, each item being used at most once. In our proof, however, we allow each item to be used several times, but at most $V/\kappa_j$ times.
\begin{proof}
Let $\Pi$ be the preemptive scheduling problem of minimizing the total calibration cost for a set of $n$ jobs that have arbitrary processing times in the presence of a set of $K$ calibration types.
Given an instance of the {\sc Subset Sum} problem, we construct an instance of the problem $\Pi$ as follows. For each item $j$, we create a calibration type of length $T_j=\kappa_j$ and cost $f_j=\kappa_j$. Moreover, we create a single job of processing time $V$ that is released at time 0 and has deadline $V$. We assume that each calibration type can be used several times, i.e., it can be duplicated as many times as needed.
We claim that the instance of the {\sc Subset Sum} problem is feasible if and only if there is a feasible schedule for problem $\Pi$ of cost $V$.
Assume that the instance of the {\sc Subset Sum} problem is feasible. Therefore, there exists a subset of items $C'$ such that $\sum_{j\in C'}\kappa_j=V$. As mentioned previously, the same item may appear several times. Then we can schedule the unique job, and calibrate the machine according to the items in $C'$ in any order. Since the calibrations allow the job to be scheduled in $[0, V)$, then we get a feasible schedule of cost $V$ for $\Pi$.
Conversely, assume that there is a feasible schedule for problem $\Pi$ of cost $V$. Let $\mathcal{C}$ be the set of calibrations used in this schedule. Since $f_j=T_j$ for every calibration type, we have $\sum_{j\in \mathcal{C}}T_j=V$. Therefore, the items corresponding to the calibrations in $\mathcal{C}$ form a feasible solution for the {\sc Subset Sum} problem.
\qed\end{proof}
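The construction used in this reduction is purely syntactic and can be summarised by the following minimal Python sketch (the data layout is our own, hypothetical choice):
\begin{verbatim}
def instance_from_subset_sum(kappas, V):
    """Build the instance of the calibration problem used in the
    reduction: one calibration type of length and cost kappa_j per
    item j, and a single job of processing time V with release
    time 0 and deadline V."""
    calibration_types = [{'T': k, 'f': k} for k in kappas]
    jobs = [{'p': V, 'r': 0, 'd': V}]
    return calibration_types, jobs
\end{verbatim}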
\section{Unit-time jobs, many calibration types and activation length}
\label{sec:act_unit}
We showed previously that the problem is NP-hard when many calibration types are considered, even when the calibrations are instantaneous. In this section, we consider unit-time jobs, and the calibrations are no longer instantaneous: every calibration has an activation length $\lambda$ during which the machine cannot process jobs. For feasibility reasons, we allow recalibrating the machine at any point, even when it is already calibrated; however, it is not allowed to calibrate the machine while a job is running. As an example, consider the instance given in Figure~\ref{fig:infeasible}. The machine has to be calibrated at time 0 and requires $\lambda=3$ units of time before it becomes available for the execution of jobs. At time $3$, the machine is ready to execute job 1, and it remains calibrated for $T=4$ time units. If we could not recalibrate an already calibrated machine, the earliest time at which a new calibration could start would be $7$, making it impossible to execute job $2$. A recalibration at time $4$, however, leads to a feasible schedule.
\begin{figure}
\caption{An infeasible instance if we cannot recalibrate at any time. We have a single machine, two unit-time jobs, and a unique type of calibration of length $T=4$. The activation length, i.e., the duration that is required for the calibration to be valid, is $\lambda=3$, which is represented by hatched lines in the figure. Job 1 is released at time 3, and its deadline is 4. Job 2 is released at time 7, and its deadline is 8.}
\label{fig:infeasible}
\end{figure}
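The timing of this example can be checked with the following small sketch, which assumes (as in the example) that a calibration started at time $s$ allows jobs to be executed during $[s+\lambda, s+\lambda+T)$:
\begin{verbatim}
def execution_windows(starts, lam, T):
    """Intervals during which jobs can run, one per calibration,
    assuming a calibration started at s is usable on
    [s + lam, s + lam + T)."""
    return [(s + lam, s + lam + T) for s in starts]

# Calibrating at 0 and recalibrating at 4 (lam = 3, T = 4):
# windows [3, 7) and [7, 11); job 1 fits in [3, 4), job 2 in [7, 8).
print(execution_windows([0, 4], 3, 4))   # [(3, 7), (7, 11)]
# Without recalibration, the next calibration cannot start before 7:
# windows [3, 7) and [10, 14); job 2 (deadline 8) cannot be executed.
print(execution_windows([0, 7], 3, 4))   # [(3, 7), (10, 14)]
\end{verbatim}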
It is easy to see that the introduction of the activation length into the model makes it necessary to extend the set $\Psi$ of relevant times used in Section~\ref{sec:warm} (Definition~\ref{def:psi}). Indeed, jobs may now be scheduled at a distance larger than $n$ from any release time or deadline.
In the worst case, we have to calibrate $n$ times in order to schedule $n$ jobs, so a calibration may start up to $n(\lambda +1)$ time units before a deadline. However, we show in the following that it is not necessary to consider every time in $[d_i-n(\lambda +1),d_i]$ for a given $i$, and we define a set of candidate times whose size is polynomial in $n$.
\begin{definition}
Let $\Theta := \bigcup_i \{ d_i-j\lambda-h,~j=0,\ldots, n, ~h=0,\ldots,n \}$.
\end{definition}
Note that the size of this set of relevant times depends neither on the lengths of the calibrations nor on the activation length.
\begin{prop}\label{prop:act_position_calibration}
There exists an optimal solution in which each calibration starts at a time in $\Theta$.
\end{prop}
\begin{proof}
We show that it is possible to transform an optimal schedule into another schedule satisfying the statement of the proposition without increasing its cost (a schedule using the same set of calibrations). Let $c_j$ be the last calibration that does not start at a time in $\Theta$. We can delay this calibration until:
\begin{itemize}
\item one job in calibration $c_j$ finishes at its deadline, and hence, it is not possible to delay this calibration anymore. So there is no idle time between the starting time of the calibration $c_j$ and this deadline. Thus the starting time of $c_j$ is in $\Theta$.
\item the current calibration meets another calibration. In this case, we continue to shift the current calibration to the right as long as this is possible. An overlap between calibration intervals may occur but, as mentioned before, we allow the machine to be recalibrated at any time. If we cannot shift to the right anymore, either a job ends at its deadline (and we are in the first case), or there is no idle time between the current calibration and the next one. Since there are at most $n$ jobs and the next calibration starts at a time $d_i-j\lambda-h$ for some $i,j,h$, the current calibration starts at a time $d_i-(j+1)\lambda-(h+h')$, where $h'$ is the number of jobs scheduled in the current calibration, with $h' +h\leq n$ and $j\leq n-1$.
\end{itemize}
\qed\end{proof}
Moreover, the set of possible starting times of jobs also has to be extended in order to take the activation length $\lambda$ into account.
\begin{definition}
Let $\Phi := \{ t+\lambda + a~|~t\in \Theta,~a=0,\ldots,n \}
\cup \bigcup_i \{ r_i, r_i+1, \ldots, r_i+n \}$.
\end{definition}
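For concreteness, both candidate sets can be enumerated directly; the following is a minimal Python sketch (the function and variable names are our own):
\begin{verbatim}
def theta(deadlines, n, lam):
    """Candidate starting times of calibrations (the set Theta)."""
    return {d - j * lam - h
            for d in deadlines
            for j in range(n + 1)
            for h in range(n + 1)}

def phi(releases, deadlines, n, lam):
    """Candidate starting and completion times of jobs (the set Phi)."""
    from_theta = {t + lam + a
                  for t in theta(deadlines, n, lam)
                  for a in range(n + 1)}
    from_releases = {r + a for r in releases for a in range(n + 1)}
    return from_theta | from_releases
\end{verbatim}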
As for the starting time of calibrations, the worst case happens when we have to recalibrate after the execution of every job.
\begin{prop}\label{prop:act_position_job}
There exists an optimal solution in which the starting times and completion times of jobs are in $\Phi$.
\end{prop}
\begin{proof}
We show that it is possible to transform the schedule into another one satisfying the proposition without increasing the cost. First, by Proposition~\ref{prop:act_position_calibration}, we may assume that we have an optimal solution in which every calibration starts at a time in $\Theta$. Suppose now that $i$ is the first job that is not scheduled at a time in $\Phi$ in such a solution. The idea is to schedule this job earlier; note that the calibrations remain fixed throughout this proof. Two cases may occur:
\begin{itemize}
\item job $i$ meets another job $i'$ (Fig.~\ref{fig:push_jobs}(a)). In this case, we consider the continuous block of jobs $i'',\ldots, i',i$. If at least one job in this block is scheduled at its release time, then job $i$ is at distance at most $n$ from this release time (because there are at most $n$ jobs), and hence its starting time is in $\Phi$. Otherwise, we can shift this block of jobs to the left by one time unit (Fig.~\ref{fig:push_jobs}(b)). Indeed, this shift is possible because no job in $\{i'',\ldots,i'\}$ is executed at the starting time of a calibration window (if this were the case, job $i$ would be at distance at most $n$ from such a time and thus in $\Phi$ by definition). Since job $i'$ was scheduled at a time in $\Phi$, after moving the block, job $i$ is scheduled at a time in $\Phi$.
\item job $i$ meets its release time; thus its starting time is in $\Phi$.
\end{itemize}
\qed\end{proof}
\begin{figure}
\caption{Illustration of Proposition~\ref{prop:act_position_job}.}
\label{fig:push_jobs}
\end{figure}
As mentioned in the introduction, we study the case of scheduling jobs when $K$ different types of calibrations are available. Recall that a calibration of type $k$ is defined by a length $T_k$ and a cost $f_k$. Moreover, calibrations have an activation length of $\lambda$ in which jobs cannot be processed. The cost of a schedule is the sum of the cost of the calibrations that occur (start) within the interval of the schedule.
We are now ready to present the table of our dynamic programming.
\begin{definition} Let $S(j,u,v)=\{ i~|~ i\leq j \mbox{ and } u \leq r_i <v \}$ be the subset of the first $j$ jobs that are released in $[u,v)$.
We define $F(j,u,v,t,k)$ as the minimum cost of a schedule such that:
\begin{itemize}
\item the jobs in $S(j,u,v)$ are scheduled during the time-interval $[u,v)$,
\item the first calibration of such a schedule occurs no earlier than $u$,
\item the last calibration is of type $k$, starts at time $t$ and has total length $\lambda+T_k$; it starts no later than $v$ (and no earlier than $u$).
\end{itemize}
\end{definition}
Note that in the above definition, the time-interval $[t,t+\lambda)$ corresponds to the activation length of the last calibration. Since no job can be scheduled in this interval, it is not relevant to consider values of $v$ in $[t,t+\lambda)$. Hence, in the initialization of the dynamic program, $v$ is taken to be at least $t+\lambda$.
Moreover, the end of the schedule should not be after the end of the last calibration, i.e., $v\leq t+\lambda+T_k$. Indeed, let $v'$ be the ending time of the last calibration of the schedule and suppose that $v'\leq v$. The interval $[v',v)$ cannot contain any scheduled job since it lies after the last calibration. If a job is released in this interval, it cannot be scheduled, and the cost of such a schedule is therefore infinite. If no job is released in $[v',v)$, then the schedules ending at $v$ or at $v'$ are identical. Thus, in all cases, we have $F(j,u,v',t,k)\leq F(j,u,v,t,k)$.
In the sequel, we consider that $F(j,u,v,t,k)=+\infty$ if $v>t+\lambda + T_k$.
The initialization is as follows:\\
$F(0,u,v,t,k):=f_k,~$ when $u\leq t$ and $t+\lambda \leq v \leq t+\lambda + T_k$, for $u,v \in \Phi$, $t\in \Theta$ and $k=1,\ldots,K$.\\
$F(0,u,v,t,k):=+\infty$~otherwise.
We examine the cases depending on whether $r_j$ belongs to the interval $[u, v)$.
When $r_j \notin [u,v)$ (case 1), job $j$ is not scheduled in the schedule associated with $F(j,u,v,t,k)$, so $F(j,u,v,t,k)=F(j-1,u,v,t,k)$.
On the other hand, when $r_j\in [u,v)$ there are two cases:
\begin{itemize}
\item case 2: job $j$ is scheduled in the last calibration,
\item case 3: job $j$ is not scheduled in the last calibration.
\end{itemize}
In both cases, we need to account for the jobs that are scheduled after job $j$ but within the same calibration as job $j$. Assuming that job $j$ is scheduled at time $u'$, we use a term of the form $G(j-1,u'+1,v')$, where $v'$ is not after the end of that calibration, to represent the jobs scheduled after job $j$.
\begin{definition}
We define $G(j,u,v)$ as the cost of scheduling all jobs in $S(j,u,v)$ within the interval $[u,v)$.
\end{definition}
Here, we aim to schedule all jobs in $S(j,u,v)$ within the interval $[u,v)$. If such a schedule is feasible, the cost is $0$; otherwise, the cost is infinite. To check feasibility, it suffices to schedule the jobs in {\sc edf} order.
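Computing $G$ thus amounts to an {\sc edf} feasibility test for unit-time jobs inside a fixed window. A minimal Python sketch is the following, with our own data layout: jobs are given as dictionaries with keys \texttt{r} and \texttt{d}, sorted in non-decreasing order of deadline.
\begin{verbatim}
import heapq

def G(jobs, j, u, v):
    """Return 0 if the unit-time jobs of S(j, u, v) can all be
    scheduled in [u, v) using the EDF rule, and +infinity otherwise."""
    S = sorted((jb for jb in jobs[:j] if u <= jb['r'] < v),
               key=lambda jb: jb['r'])      # sweep by release time
    heap, idx, t = [], 0, u                 # heap keyed by deadline
    while idx < len(S) or heap:
        while idx < len(S) and S[idx]['r'] <= t:
            heapq.heappush(heap, S[idx]['d'])
            idx += 1
        if not heap:
            t = S[idx]['r']                 # idle until next release
            continue
        d = heapq.heappop(heap)
        if t + 1 > d or t + 1 > v:          # deadline or window missed
            return float('inf')
        t += 1
    return 0
\end{verbatim}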
\begin{prop}\label{prop:dp_act_calibration}
One has $F(j,u,v,t,k)=F'$ where
\begin{align*}
&\mbox{Case 1: } r_j\notin [u,v)\\
&F'=F(j-1,u,v,t,k)\\
&\mbox{Case 2: job $j$ is scheduled in the last calibration}\\
&F'=\min_{\substack{
u'\in \Phi,~r_j\leq u'<t+\lambda+T_{k}\\
t+\lambda \leq u' < v\\
u'<d_j}}
\left\{
\begin{array}{r}
F(j-1,u,u',t,k) \\ + G(j-1,u'+1,v)
\end{array}
\right\}
\\
&\mbox{Case 3: job $j$ is not scheduled in the last calibration: }\\
&F'=\min_{\substack{
u' , v' \in \Phi,~t'\in \Theta, 1\leq k'\leq K\\
v'\leq t\\
~r_j\leq u'<v' \leq t'+T_{k'}+\lambda\\
t'+\lambda \leq u' < d_j
}}
\left\{
\begin{array}{c}
F(j-1,u,u',t',k')\\
+G(j-1,u'+1,v')\\
+F(j-1,v',v,t,k)
\end{array}
\right\}
\end{align*}
\end{prop}
The objective function for our problem is
$\min_{t\in \Theta, 1\leq k \leq K,v\in \Phi, v \geq \max_i r_i } $ $F(n,\min_i r_i,v,t,k)$.
\begin{figure*}
\caption{Illustration of Proposition~\ref{prop:dp_act_calibration}.}
\label{fig:dp_calibration}
\end{figure*}
\begin{proof}
When $r_j\notin [u,v)$, we necessarily have $F(j,u,v,t,k)=F(j-1,u,v,t,k)$. In the following, we suppose that $r_j\in [u,v)$, which corresponds to cases 2 and 3.
\noindent
\textbf{We first prove that $F(j,u,v,t,k) \leq F'$ (feasibility).}
Case 2: We consider a schedule $S_1$ that realizes $F(j-1,u,u',t,k)$ and a schedule $S_2$ that realizes $G(j-1,u'+1,v)$. We build a schedule as follows: from time $u$ to time $u'$ we use $S_1$, then we execute job $j$ in $[u',u'+1)$, and from $u'+1$ to $v$ we use $S_2$. This schedule contains all jobs in $\{i~|~i \leq j \mbox{ and } u \leq r_i < v\}$.
Case 3: We consider a schedule $S_1$ that realizes $F(j-1,u,u',t',k')$, a schedule $S_2$ that realizes $G(j-1,u'+1,v')$ and a schedule $S_3$ that realizes $F(j-1,v',v,t,k)$. We build a schedule as follows: from time $u$ to time $u'$ we use $S_1$, then we execute job $j$ in $[u',u'+1)$, from $u'+1$ to $v'$ we use $S_2$, and finally from $v'$ to time $v$ we use $S_3$. This schedule contains all jobs in $\{i~|~i \leq j \mbox{ and } u \leq r_i < v\}$.
Note that in both cases job $j$ and the jobs of $S_2$ are executed within the last calibration of $S_1$, and in case 3 the first calibration of $S_3$ does not begin before $v'$; hence we obtain a feasible schedule.
So $F(j,u,v,t,k) \leq F'$.
\noindent
\textbf{We now prove that $F(j,u,v,t,k)\geq F'$ (optimality).}
Since $j \in \{i~|~i \leq j \mbox{ and } u \leq r_i < v\}$, job $j$ is scheduled in all schedules that realize $F(j,u,v,t,k)$.
Among such schedules, let $\mathcal{X}$ denote a schedule realizing $F(j,u,v,t,k)$ in which the starting time $u'$ of job $j$ is maximal and, subject to this, $v'$ is maximal. We claim that all jobs in $\{i~|~i \leq j,~ u \leq r_i < v\}$ that are released before $u'$ are completed by $u'$. If this were not the case, we could swap the execution of such a job with job $j$ and obtain a feasible schedule with the same cost. More formally, let $i$ be a job with $i \leq j$ and $u \leq r_i < u'$ that is scheduled after $u'+1$. We can swap the executions of job $i$ and job $j$: the resulting schedule is feasible since the deadline of job $j$ is at least that of job $i$, and job $i$ is released before $u'$. This contradicts the maximality of the starting time of job $j$.
Similarly, observe that no job in $S(j-1,u,v)$ is released at time $u'$. Otherwise, this job would be scheduled at time $u'$, since its deadline is at most that of job $j$, which again contradicts the maximality of $u'$.
Moreover, we claim that all jobs released in $[u'+1,v')$ are completed before $v'$. Suppose, on the contrary, that some job $i\in S(j-1,u'+1,v')$ is scheduled after $v'$. Then this job belongs to the part of the schedule after $v'$, which means that $v'$ could have been chosen larger, contradicting the maximality of $v'$.
In case 2, the restriction of the schedule $\mathcal{X}$ to $[u, u')$ is a schedule that meets all constraints related to $F(j-1,u,u',t,k)$; hence its cost is at least $F(j-1,u,u',t,k)$. Similarly, the restriction of $\mathcal{X}$ to $[u'+1, v)$ meets all constraints related to $G(j-1,u'+1,v)$.
Similarly, in case 3, the restriction of $\mathcal{X}$ to $[u, u')$ is a schedule that meets all constraints related to $F(j-1,u,u',t',k')$; hence its cost is at least $F(j-1,u,u',t',k')$. Likewise, the restriction of $\mathcal{X}$ to $[u'+1, v')$ meets all constraints related to $G(j-1,u'+1,v')$, and the restriction of $\mathcal{X}$ to $[v', v)$ meets all constraints related to $F(j-1,v',v,t,k)$.
Finally, $F(j,u,v,t,k)\geq F'$.
\qed\end{proof}
\begin{prop}
The problem of minimizing the total calibration cost with arbitrary calibration
lengths, activation length and unit-time jobs can be solved in time
$O(n^{19}K^2)$.
\end{prop}
\begin{proof}
This problem can be solved with the dynamic program in Proposition~\ref{prop:dp_act_calibration}. Recall that the table is $F(j,u,v,t,k)$ where $j\in \{0,\ldots,n\}$, $u,v\in \Phi$, $t\in \Theta$ and $k\in \{1,\ldots,K\}$.
The size of both sets $\Theta$ and $\Phi$ is $O(n^3)$. Indeed, by rewriting the set $\Phi$, we have
\begin{align*}
\Phi = &\bigcup_i \{ r_i, r_i+1,\ldots, r_i+n \} \\
&\qquad \cup \{ t+\lambda+a~|~t\in \Theta,~a=0,\ldots,n \}\\
= &\bigcup_i \{ r_i, r_i+1,\ldots, r_i+n \} \\
&\cup \bigcup_i \left\{ \begin{array}{c}
d_i-(j-1)\lambda+(a-h),\\
j=0,\ldots, n,~h=0,\ldots,n,~a=0,\ldots,n
\end{array} \right\}\\
=&\bigcup_i \{ r_i, r_i+1,\ldots, r_i+n \} \\
&\cup \bigcup_i \{ d_i-j\lambda+b,~j=-1,\ldots, n-1, ~b=-n,\ldots,n\}
\end{align*}
So, the size of the table is $O(n^{10}K)$. For each entry of the table, the minimization is over the values $u', v'\in \Phi$, $t'\in \Theta$ and $k'\in \{1,\ldots,K \}$, so each entry can be computed in $O(n^9K)$ time.
Recall that the objective function is
$\min_{t\in \Theta, 1\leq k \leq K,v\in \Phi, v \geq \max_i r_i } $ $F(n,\min_i r_i,v,t,k)$.
Thus, the overall time complexity is $O(n^{19}K^2)$.
\qed\end{proof}
Note that when there is no feasible schedule, the dynamic programming will return $+\infty$.
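Putting the pieces together, a (deliberately inefficient) memoised implementation of the recurrence could look as follows. This is only an illustrative sketch under our own data layout: it reuses the hypothetical functions \texttt{theta}, \texttt{phi} and \texttt{G} sketched above, indexes calibration types from $0$ instead of $1$, and makes no attempt to meet the $O(n^{19}K^2)$ bound.
\begin{verbatim}
from functools import lru_cache

def min_calibration_cost(jobs, calib, lam):
    """jobs: unit-time jobs {'r','d'} sorted by non-decreasing
    deadline; calib[k] = {'T','f'}; lam: activation length."""
    INF = float('inf')
    n, K = len(jobs), len(calib)
    releases = [jb['r'] for jb in jobs]
    deadlines = [jb['d'] for jb in jobs]
    Theta = sorted(theta(deadlines, n, lam))
    Phi = sorted(phi(releases, deadlines, n, lam))

    @lru_cache(maxsize=None)
    def F(j, u, v, t, k):
        Tk, fk = calib[k]['T'], calib[k]['f']
        # the last calibration (t, k) must fit in the window [u, v)
        if not (u <= t and t + lam <= v <= t + lam + Tk):
            return INF
        if j == 0:
            return fk
        r, d = jobs[j - 1]['r'], jobs[j - 1]['d']
        if not (u <= r < v):                  # case 1
            return F(j - 1, u, v, t, k)
        best = INF
        # case 2: job j runs at [u', u'+1) inside the last calibration
        for u2 in Phi:
            if r <= u2 and t + lam <= u2 < min(v, d, t + lam + Tk):
                best = min(best, F(j - 1, u, u2, t, k)
                                 + G(jobs, j - 1, u2 + 1, v))
        # case 3: job j runs inside an earlier calibration (t', k')
        for u2 in Phi:
            for v2 in Phi:
                if not (r <= u2 < v2 <= t):
                    continue
                for t2 in Theta:
                    for k2 in range(K):
                        if (v2 <= t2 + lam + calib[k2]['T']
                                and t2 + lam <= u2 < d):
                            best = min(best,
                                       F(j - 1, u, u2, t2, k2)
                                       + G(jobs, j - 1, u2 + 1, v2)
                                       + F(j - 1, v2, v, t, k))
        return best

    u0, vmin = min(releases), max(releases)
    return min(F(n, u0, v, t, k)
               for v in Phi if v >= vmin
               for t in Theta
               for k in range(K))
\end{verbatim}
The final minimization mirrors the objective function stated after Proposition~\ref{prop:dp_act_calibration}.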
\section{Conclusion}
We considered different extensions of the model introduced by
\cite{bender2013efficient}. We proved that the problem of minimizing the total calibration cost on a single machine can be solved in polynomial time for jobs with arbitrary processing times when preemption is allowed. We then proved that the problem becomes NP-hard for arbitrary processing times and many calibration types, even when preemption is allowed. Finally, we considered the case with many calibration types, non-instantaneous calibrations and unit-time jobs, and proved that it can be solved in polynomial time using dynamic programming. An interesting question is whether an algorithm with lower time complexity exists for this last version of the problem, either exact or approximate. It would also be of great interest to study the case where more than one machine is available; recall that the complexity of the basic variant studied by \cite{bender2013efficient} remains open for multiple machines.
\end{document}